Trace identity

In mathematics, a trace identity is any equation involving the trace of a matrix.

Properties

Trace identities are invariant under simultaneous conjugation.

Uses

They are frequently used in the invariant theory of $n\times n$ matrices to find the generators and relations of the ring of invariants, and therefore are useful in answering questions similar to that posed by Hilbert's fourteenth problem.

Examples

• The Cayley–Hamilton theorem says that every square matrix satisfies its own characteristic polynomial. This also implies that all square matrices satisfy $\operatorname {tr} \left(A^{n}\right)-c_{n-1}\operatorname {tr} \left(A^{n-1}\right)+\cdots +(-1)^{n}n\det(A)=0,$ where the coefficients $c_{i}$ are given by the elementary symmetric polynomials of the eigenvalues of $A$.
• All square matrices satisfy $\operatorname {tr} (A)=\operatorname {tr} \left(A^{\mathsf {T}}\right).$

See also

• Trace inequality – inequalities involving linear operators on Hilbert spaces

References

Rowen, Louis Halle (2008), Graduate Algebra: Noncommutative View, Graduate Studies in Mathematics, vol. 2, American Mathematical Society, p. 412, ISBN 9780821841532.
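A quick numerical check of the statements above (an illustrative sketch, not part of the article; the random matrices, the eigenvalue-based computation of the coefficients and the tolerances are choices made here):

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# The trace is invariant under simultaneous conjugation: tr(P A P^-1) = tr(A).
P = rng.standard_normal((n, n))
assert abs(np.trace(P @ A @ np.linalg.inv(P)) - np.trace(A)) < 1e-8

# tr(A) = tr(A^T).
assert abs(np.trace(A) - np.trace(A.T)) < 1e-12

# Trace identity implied by Cayley-Hamilton (a Newton identity):
# tr(A^n) - c_{n-1} tr(A^{n-1}) + c_{n-2} tr(A^{n-2}) - ... + (-1)^n n det(A) = 0,
# taking c_{n-j} to be the j-th elementary symmetric polynomial of the eigenvalues.
eig = np.linalg.eigvals(A)
c = [sum(np.prod(s) for s in itertools.combinations(eig, j)) for j in range(1, n)]
p = [np.trace(np.linalg.matrix_power(A, n - j)) for j in range(n)]   # tr(A^n), ..., tr(A)
lhs = p[0] + sum((-1) ** j * c[j - 1] * p[j] for j in range(1, n)) + (-1) ** n * n * np.linalg.det(A)
assert abs(lhs) < 1e-6
```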
Ramsey theorem with forbidden induced subgraph

Given $n \in \mathbb{N}$, I want to prove that there exist $f(n) \in \mathbb{N}$, $0<\alpha(n) \leq 1$ such that for every graph $G=(V,E)$ with $|V| > f(n)$ vertices, and a set $R \subseteq V \times V$ where $|R| \leq \alpha(n) {|V| \choose 2}$, there is a clique or anti-clique of size $n$ whose edges/anti-edges are not in $R$ (none of the edges are in $R$). Very similar to Ramsey's theorem, but with the constraint of $R$.

I tried to prove it in the following manner: Take $f(n)$ to be the $n$'th Ramsey number. When choosing a pair of vertices $\{u,v\} \in V \times V$ randomly with uniform distribution, the probability that this pair is in $R$ is $\frac{|R|}{|V \times V|} = \alpha(n)$. Since there is at least one clique/anti-clique, there are at least $n \choose 2$ edges/anti-edges that are part of a clique/anti-clique. Hence the probability that a random pair is a part of a clique/anti-clique is $\frac{{n \choose 2}}{{|V| \choose 2}} = \frac{n(n-1)}{|V|(|V|-1)}$. Thus, the probability that a random pair is NOT in $R$ AND in a clique/anti-clique is $\frac{n(n-1)}{|V|(|V|-1)}(1 - \alpha(n))$. There are $|V| \choose 2$ such pairs. I want the expectation of the number of pairs to be larger than $n \choose 2$, so: $$ \frac{n(n-1)}{|V|(|V|-1)} (1-\alpha(n))\frac{|V|(|V|-1)}{2} \geq \frac{n(n-1)}{2},$$ which leads to $\alpha(n) = 0$, clearly incorrect.

My question is: how can I improve this method in order to achieve $\alpha(n) > 0$? I thought about taking $f(n)$ to be such that every graph with $|V| > f(n)$ vertices will have at least $n$ cliques, but I did all the math and it's also incorrect.

Tags: probability, combinatorics, graph-theory, ramsey-theory. Asked by Horvey.

Comments:
By "and a set $R$" do you mean "for every set $R$" or do you mean "there exists a set $R$"? The only restriction on $\alpha(n)$ is that $\alpha(n)\le1$? No lower bound? Not even $\alpha(n)\gt0$? – bof Apr 21 '17 at 10:07
$\alpha(n)>0$. The set $R$ is given; it cannot hold for every set $R$ (its size is larger than the clique size for sufficiently large $|V|$, so there is such a set that contains the clique). – Horvey Apr 21 '17 at 12:46
Do you mean $|R| \leq \alpha(n) \binom{|V|}{2}$? – Perry Elliott-Iverson Apr 21 '17 at 14:35
Yes. My mistake. Will edit. – Horvey Apr 21 '17 at 14:44

Answer by Perry Elliott-Iverson:

Let $f(n)$ be the $n$th Ramsey number, and let $\alpha(n) = \left(\frac{1}{f(n)-1}\right)^2$. Then $|R|\leq \alpha(n) \binom{|V|}{2}$ and we have: $$\begin{align} |(V \times V) \backslash R| &\geq \binom{|V|}{2} - \alpha(n)\binom{|V|}{2} \\ &= \binom{|V|}{2}\left(1-\left(\frac{1}{f(n)-1}\right)^2\right) \\ &= \frac{|V|(|V|-1)}{2}\left(1+\frac{1}{f(n)-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &> \frac{|V|(|V|-1)}{2}\left(1+\frac{1}{|V|-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &= \frac{|V|(|V|-1)}{2}\left(\frac{|V|}{|V|-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &= \frac{|V|^2}{2}\left(1-\frac{1}{f(n)-1}\right) \end{align}$$ Thus by Turán's Theorem, there is a subset $S$ of $V$ with $|S|=f(n)$ so that $G[S]$ does not contain any edges of $R$, and $G[S]$ has either $K_n$ or $\overline{K_n}$.

Comments on the answer:
Turán's theorem states that if I had $|V|^2/2 (1-1/(f(n)-1))$ edges, then I have a clique of size $f(n)$. How is it helping to take the $n$'th Ramsey number?
Not sure I understand your "punchline". – Horvey Apr 22 '17 at 18:02
There are at least $\frac{|V|^2}{2}\left(1-\frac{1}{f(n)-1}\right)$ members of $V \times V$ that are not in $R$, so Turán's theorem says that the graph $G'$ with vertex set $V$ and edge set $(V \times V) \backslash R$ must have a $K_{f(n)}$. Let $S \subseteq V$ be the set of vertices of this $K_{f(n)}$. Then no members of $R$ have both ends in $S$, and $G[S]$ is a graph with $f(n)$ vertices, which must have $K_n$ or $\overline{K_n}$ by Ramsey's Theorem. – Perry Elliott-Iverson Apr 22 '17 at 19:56
But $G$ does not necessarily contain all the edges in $V \times V$. What if $G$ has far fewer edges? – Horvey Apr 23 '17 at 4:47
The number of edges in $G$ is irrelevant. We found a large enough (order $f(n)$, which guarantees $K_n$ or $\overline{K_n}$) subset of the vertices of $G$ with no members of $R$ having both ends in that subset. – Perry Elliott-Iverson Apr 24 '17 at 19:51
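A small numerical sanity check of the counting step in the answer (illustrative only; it assumes $n=3$, whose diagonal Ramsey number is $f(3)=R(3,3)=6$, and a few sample sizes $|V|$, none of which appear in the thread):

```python
from math import comb

n = 3
f_n = 6                        # R(3,3) = 6: the Ramsey number used as f(n) for n = 3
alpha = (1 / (f_n - 1)) ** 2   # the answer's choice of alpha(n)

for V in (7, 20, 100):         # any |V| > f(n)
    pairs = comb(V, 2)
    non_R_pairs = pairs - alpha * pairs                   # lower bound on pairs outside R
    turan_threshold = (V ** 2 / 2) * (1 - 1 / (f_n - 1))  # edge count forcing a K_{f(n)} via Turan
    assert non_R_pairs > turan_threshold
    print(V, round(non_R_pairs, 2), ">", round(turan_threshold, 2))
```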
Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization

Sujit Kumar Samanta and Rakesh Nandi
Department of Mathematics, National Institute of Technology Raipur, Raipur-492010, India
* Corresponding author: Sujit Kumar Samanta

Journal of Industrial & Management Optimization, March 2021, 17(2): 549-573. doi: 10.3934/jimo.2019123
Received September 2018; Revised May 2019; Published October 2019.
Fund Project: The first author acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, India, for partial support from the project grant 25(0271)/17/EMR-Ⅱ.

Abstract: This paper analyzes an infinite-buffer single-server queueing system wherein customers arrive in batches of random size according to a discrete-time renewal process. The customers are served one at a time under a discrete-time Markovian service process. Based on the censoring technique, the UL-type $RG$-factorization for the Toeplitz type block-structured Markov chain is used to obtain the prearrival epoch probabilities. The random epoch probabilities are obtained with the help of a classical principle based on Markov renewal theory. The system-length distributions at outside observer's, intermediate and post-departure epochs are obtained by making relations among various time epochs. The analysis of the waiting-time distribution, measured in slots, of an arbitrary customer in an arrival batch has also been investigated. In order to unify the results of both the discrete-time model and its continuous-time counterpart, we give a brief demonstration to get the continuous-time results from those of the discrete-time ones. A variety of numerical results are provided to illustrate the effect of model parameters on the performance measures.

Keywords: Toeplitz type block-structured Markov chain, censored Markov chain, discrete-time Markovian service process (D-MSP), general independent batch arrival, queueing, UL-type $RG$-factorization.

Mathematics Subject Classification: Primary: 60K25, 90B22, 68M20, 60K20.

Citation: Sujit Kumar Samanta, Rakesh Nandi. Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization. Journal of Industrial & Management Optimization, 2021, 17(2): 549-573. doi: 10.3934/jimo.2019123
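The censoring technique named in the abstract and keywords can be previewed on a small finite chain (a minimal sketch; the four-state matrix below is an invented toy example, not the paper's infinite Toeplitz-type block-structured chain, and the snippet is not its UL-type $RG$-factorization algorithm): censoring on a subset E of states replaces excursions through the complement by the stochastic complement P_EE + P_EE' (I - P_E'E')^(-1) P_E'E, and the censored chain's stationary law is the original one conditioned on E.

```python
import numpy as np

# Toy 4-state transition matrix (rows sum to 1); states 0 and 1 form the censored set E.
P = np.array([
    [0.50, 0.20, 0.20, 0.10],
    [0.10, 0.60, 0.10, 0.20],
    [0.30, 0.30, 0.20, 0.20],
    [0.25, 0.25, 0.25, 0.25],
])
E, Ec = [0, 1], [2, 3]

P_EE, P_EEc = P[np.ix_(E, E)], P[np.ix_(E, Ec)]
P_EcE, P_EcEc = P[np.ix_(Ec, E)], P[np.ix_(Ec, Ec)]

# Censored (watched) chain on E: excursions outside E are folded back via the
# fundamental matrix (I - P_EcEc)^(-1).
P_cens = P_EE + P_EEc @ np.linalg.inv(np.eye(len(Ec)) - P_EcEc) @ P_EcE
assert np.allclose(P_cens.sum(axis=1), 1.0)   # still a stochastic matrix

def stationary(Q):
    """Stationary row vector of an irreducible stochastic matrix Q."""
    vals, vecs = np.linalg.eig(Q.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

pi = stationary(P)
pi_cens = stationary(P_cens)
# Stationary law of the censored chain = conditional stationary law on E.
assert np.allclose(pi_cens, pi[E] / pi[E].sum())
```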
Figure 1. Various time epochs in LAS-DA
Figure 2. Various time epochs in EAS

Table 1. System-length distribution at prearrival epoch (LAS-DA)
$n$       $\pi^{-}_1(n)$   $\pi^{-}_2(n)$   $\pi^{-}_3(n)$   $\pi^{-}_4(n)$   $\mathit{\boldsymbol{\pi }}^{-}(n){\bf e}$
0         0.147931         0.087562         0.141983         0.215337         0.592813
10        0.005820         0.002639         0.005923         0.004720         0.019101
150       0.000000         0.000000         0.000000         0.000000         0.000000
$\vdots$
Sum       0.268062         0.144522         0.263029         0.324388         1.000000

Table 2. System-length distribution at random epoch (LAS-DA); columns $n$, $\pi_1(n)$, $\pi_2(n)$, $\pi_3(n)$, $\pi_4(n)$, $\mathit{\boldsymbol{\pi }}(n){\bf e}$; reported $L_{q}= 4.110096$, $W_{q}\equiv L_{q}/\lambda\overline{g}=16.988397$

Table 3. System-length distribution at intermediate epoch (LAS-DA); columns $n$, $\pi^{\bullet}_1(n)$, $\pi^{\bullet}_2(n)$, $\pi^{\bullet}_3(n)$, $\pi^{\bullet}_4(n)$, $\mathit{\boldsymbol{\pi }}^{\bullet}(n){\bf e}$

Table 4. System-length distribution at post-departure epoch (LAS-DA); columns $n$, $\pi^{+}_1(n)$, $\pi^{+}_2(n)$, $\pi^{+}_3(n)$, $\pi^{+}_4(n)$, $\mathit{\boldsymbol{\pi }}^{+}(n){\bf e}$

Table 5. Waiting-time distribution of an arbitrary customer (LAS-DA); columns $k$, $w_1(k)$, $w_2(k)$, $w_3(k)$, $w_4(k)$, ${\bf w}(k){\bf e}$; reported $W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=16.988398$

Table 6. System-length distribution at prearrival epoch (EAS)

Table 7. System-length distribution at random epoch (EAS)

Table 8. System-length distribution at outside observer's epoch (EAS); columns $n$, $\pi^{\circ}_1(n)$, $\pi^{\circ}_2(n)$, $\pi^{\circ}_3(n)$, $\pi^{\circ}_4(n)$, $\mathit{\boldsymbol{\pi }}^{\circ}(n){\bf e}$; reported $L_{q}=1.040384$, $W_{q}\equiv L_{q}/\lambda\overline{g}=4.265574$

Table 9. System-length distribution at post-departure epoch (EAS)

Table 10. Waiting-time distribution of an arbitrary customer (EAS); reported $W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=4.265574$
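The relation printed under Tables 2 and 8, $W_{q}\equiv L_{q}/\lambda\overline{g}$, can be mimicked in a few lines (an illustrative sketch only; the probability vector, arrival rate and mean batch size below are invented placeholders rather than the paper's computed values):

```python
import numpy as np

# Hypothetical system-length probabilities pi(n)e for n = 0, 1, 2, ... (truncated toy vector).
pi = np.array([0.59, 0.20, 0.10, 0.06, 0.03, 0.02])
assert abs(pi.sum() - 1.0) < 1e-12

lam, g_bar = 0.2, 1.2                   # made-up arrival rate and mean batch size

L_q = float(np.arange(len(pi)) @ pi)    # mean queue length, sum over n of n * pi(n)e
W_q = L_q / (lam * g_bar)               # W_q = L_q / (lambda * g-bar), as under Tables 2 and 8
print(L_q, W_q)
```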
\begin{document} \title{Stable laws and Beurling kernels} \author[London School of Economics]{Adam J. Ostaszewski} \address{Mathematics Department, London School of Economics, Houghton Street, London WC2A 2AE, UK} \email{[email protected]} \begin{abstract}We identify a close relation between stable distributions and the limiting homomorphisms central to the theory of regular variation. In so doing some simplifications are achieved in the direct analysis of these laws in Pitman and Pitman (2016); stable distributions are themselves linked to homomorphy. \end{abstract} \keywords{Stable laws; Beurling regular variation; quantifier weakening; homomorphism; Goldie equation; \foreignlanguage{polish}{Go\l\aob b--Schinzel} equation; Levi--Civita equation} \ams{60E07}{26A03; 39B22; 34D05; 39A20} \renewcommand{\number \count63}{\number \count63} \setcounter{footnote}{0} \section{Introduction}\label{s:intro} This note\footnote{This expanded version of \cite{OstA} includes new material in \S 4 and an Appendix.} takes its inspiration from Pitman and Pitman's approach \cite{PitP}, in this volume, to the characterization of stable laws \emph{directly} from their characteristic functional equation \cite[(2.2)]{PitP}, \eqref{ChFE} below, which they complement with the derivation of parameter restrictions by an appeal to \emph{Karamata} (classical) regular variation (rather than \emph{indirectly} as a special case of the L\'{e}vy--Khintchine characterization of infinitely decomposable laws---cf.\ \cite[Section 4]{PitP}). We take up their functional-equation tactic with three aims in mind. The first and primary one is to extract a hidden connection with the more general theory of \emph{Beurling regular variation}, which embraces the original Karamata theory and its later `Bojani\'{c}--Karamata--de Haan' variants. (This has received renewed attention: \cite{BinO1,BinO4,Ost1}). The connection is made via another functional equation, the \emph{Goldie equation} \begin{equation}\label{GFE} \kappa(x+y)-\kappa(x)=\gamma(x)\kappa(y)\qquad(x,y\in\mathbb{R}),\tag{\emph{GFE}} \end{equation} with \emph{vanishing side condition} $\kappa(0)=0$ and \emph{auxiliary function }$\gamma$, or more properly with its multiplicative variant: \begin{equation}\label{GFEx} K(st)-K(s)=G(s)K(t)\qquad (s,t\in \mathbb{R}_+:=(0,\infty)),\tag{${\it GFE}_{\times}$} \end{equation} with corresponding side condition $K(1)=0$; the additive variant arises first in \cite{BinG} (see also \cite[Lemma 3.2.1 and Theorem 3.2.5]{BinGT}), but has only latterly been so named in recognition of its key role both there and in the recent developments \cite{BinO2,BinO3}, inspired both by \emph{Beurling slow variation} \cite[Section 2.11]{BinGT} and by its generalizations \cite{BinO1,BinO4} and \cite{Ost1}. This equation describes the family of \emph{Beurling kernels} (the asymptotic homomorphisms of Beurling regular variation), that is, the functions $K_{F}$ arising as locally uniform limits of the form \begin{equation}\label{BKer} K_{F}(t):=\lim_{x\rightarrow \infty }[F(x+t\varphi (x))-F(x)], \tag{\emph{BKer}} \end{equation} for $\varphi(\cdot)$ ranging over \emph{self-neglecting} functions (\emph{SN}). (See \cite{Ost1,Ost2} for the larger family of kernels arising when $\varphi(\cdot)$ ranges over the \emph{self-equivarying} functions (\emph{SE}), both classes recalled in the complements section \ref{ss:SNSE}.) 
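(A simple concrete instance of \eqref{BKer}, included only for orientation and not drawn from \cite{PitP}: the self-map $\varphi(x)=\sqrt{x}$ is self-neglecting, and for $F(x)=2\sqrt{x}$ one has
\[
K_{F}(t)=\lim_{x\rightarrow\infty}2\bigl(\sqrt{x+t\sqrt{x}}-\sqrt{x}\bigr)
=\lim_{x\rightarrow\infty}\frac{2t\sqrt{x}}{\sqrt{x+t\sqrt{x}}+\sqrt{x}}=t,
\]
a kernel of the form $\kappa_{0}H_{\gamma_{0}}$ with $\kappa_{0}=1$ and $\gamma_{0}=0$ in the notation of Theorem GFE below.)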
A secondary aim is achieved in the omission of extensive special-case arguments for the limiting cases in the Pitman analysis (especially the case of characteristic exponent $\alpha =1$ in \cite[Section 5.2]{PitP}---affecting parts of \cite[Section 8]{PitP}), employing here instead the more natural approach of interpreting the `generic' case `in the limit' via the l'Hospital rule. A final general objective of further streamlining is achieved, \emph{en passant}, by telescoping various cases into one, simple, group-theoretic argument; this helps clarify the `group' aspects as distinct from `asymptotics', which relate parameter restrictions to tail balance---see Remark \ref{r:dominant}. A random variable $X$ has a \emph{stable law} if for each $n\in \mathbb{N}$ the law of the random walk $S_{n}:=X_{1}+\dotsb+X_{n},$ where the $n$ steps are independent and with law identical to $X$, is of the same type, i.e.\ the same in distribution up to scale and location: \[ S_{n}\eqdist a_{n}X+b_{n}, \] for some real constants $a_{n}$, $b_{n}$ with $a_{n}>0$; cf.\ \cite[VI.1]{Fel} and \cite[(1.1)]{PitP}. These laws may be characterized by the \emph{characteristic functional equation} (of the characteristic function of $X$, $\varphi (t)= \mathbb{E}[\re^{\ri tX}]$), as in \cite[(2.2)]{PitP}: \begin{equation}\label{ChFE} \varphi(t)^n=\varphi(a_nt)\exp(\ri b_nt)\qquad(n\in\mathbb{N},\; t\in\mathbb{R}_+).\tag{\emph{ChFE}} \end{equation} The standard way of solving \eqref{ChFE} is to deduce the equations satisfied by the functions $a:n\mapsto a_{n}$ and $b:n\mapsto b_{n}$. Pitman and Pitman \cite{PitP} proceed directly by proving the map $a$ \emph{injective}, then extending the map $b$ to $\mathbb{R}_{+}:=(0,\infty )$, and exploiting the classical Cauchy (or Hamel) exponential functional equation (for which see \cite{AczD} and \cite{Kuc}): \begin{equation}\label{CEE} K(xy)=K(x)K(y)\qquad (x,y\in \mathbb{R}_{+});\tag{\emph{CEE}} \end{equation} \eqref{CEE} is satisfied by $K(\cdot)=a(\cdot)$ on the smaller domain $\mathbb{N}$, as a consequence of \eqref{ChFE}. See \cite{RamL} for a similar, but less self-contained account. For other applications see the recent \cite{GupJTS}, which characterizes `generalized stable laws'. We show in Section \ref{s:reduction} the surprising equivalence of \eqref{ChFE} with the fundamental equation \eqref{GFE} of the recently established theory of \emph{Beurling regular variation}. There is thus a one-to-one relation between Beurling kernels arising through \eqref{BKer} and the continuous solutions of \eqref{ChFE}, amongst which are the one-dimensional stable distributions. This involves passage from discrete to continuous, a normal feature of the theory of regular variation (see \cite[Section 1.9]{BinGT}) which, rather than unquestioningly adopt, we track carefully via Lemma 1 and Corollary 1 of Section \ref{s:reduction}: the ultimate justification here is the extension of $a$ to $\mathbb{R}_{+}$ (Ger's extension theorem \cite[Section 18.7]{Kuc} being thematic here), and the continuity of characteristic functions. 
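(A familiar instance, recorded only by way of illustration: the standard normal law has $\varphi(t)=\re^{-t^{2}/2}$, so that $\varphi(t)^{n}=\re^{-nt^{2}/2}=\varphi(\sqrt{n}\,t)$ and \eqref{ChFE} holds with $a_{n}=n^{1/2}$ and $b_{n}=0$.)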
The emergence of a particular kind of functional equation, one interpretable as a \emph{group} homomorphism (see Section \ref{ss:Homo}), is linked to the simpler than usual form here of `probabilistic associativity' (as in \cite{Bin}) in the incrementation process of the stable random walk; in more general walks, functional equations (and integrated functional equations---see \cite{RamL}) arise over an associated \emph{hypergroup}, as with the Kingman--Bessel hypergroup and Bingham--Gegenbauer (ultraspherical) hypergroup (see \cite{Bin} and \cite{BloH}). We return to these matters, and connections with the theory of flows, elsewhere---\cite{Ost3}. The material is organized as follows. Below we identify the solutions to \eqref{GFE} and in Section \ref{s:reduction} we prove equivalence of \eqref{GFE} and \eqref{ChFE}; our proof is self-contained modulo the (elementary) result that, for $\varphi $ a characteristic function, \eqref{ChFE} implies $a_{n}=n^{k}$ for some $k>0$ (in fact we need only to know that $k\neq 0$). Then in Section \ref{s:form} we read off the form of the characteristic functions of the stable laws. In Section \ref{s:sequenceidentification} we show that, for an arbitrary continuous solution $\varphi$ of \eqref{ChFE}, necessarily $a_{n}=n^{k}$ for some $k\neq 0$. We conclude in Section \ref{s:complements} with complements describing the families \emph{SN} and \emph{SE} mentioned above, and identifying the group structure implied, or `encoded', by \eqref{GFEx} to be $(\mathbb{R}_{+},\times )$, the multiplicative positive reals. In the Appendix we offer an elementary derivation of a key formula needed in \cite{PitP}. The following result, which has antecedents in several settings (some cited below), is key; on account of its significance, this has recently received further attention in \cite[especially Theorem 3]{BinO2} and \cite[especially Theorem 1]{Ost2}, to which we refer for background---cf.\ Section \ref{ss:ThmGFE}. { \makeatletter \def\th@plain{\normalfont\itshape \def\@begintheorem##1##2{ \item[\hskip\labelsep \theorem@headerfont ##1{\bf .}] } } \makeatother \begin{ThmGFE} {\rm (\cite[Theorem 1]{BinO2}, \cite[(2.2)]{BojK}, \cite[Lemma 3.2.1]{BinGT}; cf.\ \cite{AczG}.)} For $\mathbb{C}$-valued functions $\kappa$ and $\gamma$ with $\gamma$ locally bounded at $0$, with $\gamma(0)=1$ and $\gamma\neq1$ except at $0$, if $\kappa\not\equiv0$ satisfies \eqref{GFE} subject to the side condition $\kappa(0)=0$, then for some $\gamma_0$, $\kappa_0\in \mathbb{C}$: \[ \gamma(u)=\re^{\gamma_0u}\quad\text{and}\quad \kappa(x)\equiv\kappa_0H_{\gamma_0}(x):=\kappa_0\int_0^x\gamma(u)\sd u =\kappa_0\frac{\re^{\gamma_0x}-1}{\gamma_0}, \] under the usual l'Hospital convention for interpreting $\gamma_0=0$. \end{ThmGFE} }
The notation $H_\rho$ (originating in \cite{BojK}) is from \cite[Chapter 3: de Haan theory]{BinGT} and, modulo exponentiation, links to the `inverse' functions $\eta_\rho(t)=1+\rho t$ (see Section \ref{ss:Homo}) which permeate regular variation (albeit long undetected), a testament to the underlying \emph{flow} and \emph{group} structure, for which see especially \cite{BinO1,BinO4}. The Goldie equation is a special case of the \emph{Levi--Civita equations}; for a text-book treatment of their solutions for domain a semigroup and range $\mathbb{C}$ see \cite[Chapter 5]{Ste}. \end{rem} \begin{rem}\label{r:constants} We denote the constants $\gamma_0$ and $\kappa_0$ more simply by $\gamma$ and $\kappa$, whenever context permits. To prevent conflict with the $\gamma$ of \cite[Section 5.1]{PitP} we denote that here by $\gamma_{\text{P}}(k),$ showing also dependence on the index of growth of $a_n$: see Section \ref{ss:notation}. \end{rem} \begin{rem}\label{r:stuv} To solve \eqref{GFEx} write $s=\re^u$ and $t=\re^v$, obtaining \eqref{GFE}; then \begin{align*} G(\re^u)&=\gamma(u)=\re^{\gamma u}:\qquad G(s)=s^{\gamma}\\ K(\re^u)&=\kappa(u)=\kappa\,\frac{\re^{\gamma u}-1}\gamma:\qquad K(s)=\kappa\,\frac{s^\gamma-1}\gamma. \end{align*} \end{rem} \begin{rem}\label{r:altreg} Alternative regularity conditions, yielding continuity and the same $H_\gamma$ conclusion, include in \cite[Theorem 2]{BinO2} the case of $\mathbb{R}$-valued functions with $\kappa(\cdot)$ and $\gamma(\cdot)$ both non-negative on $\mathbb{R}_+$ with $\gamma\neq1$ except at $0$ (as then either $\kappa\equiv0$, or both are continuous). \end{rem} \section{Reduction to the Goldie Equation}\label{s:reduction} In this section we establish a Proposition connecting \eqref{ChFE} with \eqref{GFEx}, and so stable laws with Beurling kernels. Here in the interests of brevity\footnote{In \S 4 we prove from \eqref{ChFE}, with $\varphi$ arbitrary but continuous, that $a_n=n^k$ for some $k\ne0$, cf. \cite{Ost3}.}, this makes use of a well-known result concerning the norming constants (cf.\ \cite[VI.1, Theorem 1]{Fel}, \cite[Lemma 5.3]{PitP}), that $a:n\mapsto a_n$ satisfies $a_n=n^k$ for some $k>0$, and so is extendible to a continuous surjection onto $\mathbb{R}_+:=(0,\infty)$: \[ \tilde a(\nu)=\nu^k\qquad(\nu>0); \] this is used below to justify the validity of the definition \[ f(t):=\log\varphi(t)\qquad(t>0), \] with $\log$ here the principal logarithm, a tacit step in \cite[Section 5.1]{PitP}, albeit based on \cite[Lemma 5.2]{PitP}. We write $a_{m/n}=\tilde a_{m/n}=a_m/a_n$ and put $\mathbb{A}_{\mathbb{N}}:=\{a_n:n\in\mathbb{N}\}$ and $\mathbb{A}_{\mathbb{Q}}:=\{a_{m/n}:m,n\in \mathbb{N}\}$. The Lemma below re-proves an assertion from \cite[Lemma 5.2]{PitP}, but without assuming that $\varphi$ is a characteristic function. Its Corollary needs no explicit formula for $b_{m/n},$ since the term will eventually be eliminated. { \begin{lemma}\label{l} For continuous $\varphi\not\equiv0$ satisfying \eqref{ChFE} with $a_n=n^k$ ($k\ne0$), $\varphi$ has no zeros on $\mathbb{R}_+$. \end{lemma} \begin{proof}If $\varphi(\tau)=0$ for some $\tau>0$ then $\varphi(a_m\tau)=0$ for all $m$, by \eqref{ChFE}. Again by \eqref{ChFE}, $\abs{\varphi(\tau a_m/a_n)}^n=\abs{\varphi(a_m\tau)}=0$, so $\varphi$ is zero on the dense subset of points $\tau a_m/a_n$; then, by continuity, $\varphi\equiv0$ on $\mathbb{R}_+$, a contradiction. 
\end{proof} \begin{corollary}\label{c} The equation \eqref{ChFE} with continuous $\varphi\not\equiv0$ and $a_n=n^k$ ($k\ne0$) holds on the dense subgroup $\mathbb{A}_{\mathbb{Q}}$: there are constants $\{b_{m/n}\}_{m,n\in\mathbb{N}}$ with $$\varphi(t)^{m/n}=\varphi(a_{m/n}t)\exp(\ri b_{m/n}t)\qquad(t\ge0).$$ \end{corollary} \begin{proof}Taking $t/a_n$ for $t$ in \eqref{ChFE} gives $\varphi(t/a_n)^n=\varphi(t)\exp(\ri b_nt/a_n)$, so by Lemma 1, using principal values, $\varphi(t)^{1/n}=\varphi(t/a_n)\exp(-\ri tb_n/(na_n))$, whence $$\varphi(t)^{m/n}=\varphi\Bigl(\frac t{a_n}\Bigr)^m \exp\Bigl(-\frac{\ri tmb_n}{na_n}\Bigr).$$ Replacing $n$ by $m$ in \eqref{ChFE} and then replacing $t$ by $t/a_n$ gives $\varphi(t/a_n)^m=\varphi(a_mt/a_n)\break\exp(\ri b_mt/a_n)$. Substituting this into the above and using $a_m/a_n=a_{m/n}$: $$\varphi(t)^{m/n}=\varphi(a_{m/n}t)\exp\Bigl(\ri t\,\frac{nb_m-mb_n}{na_n}\Bigr).$$ As the left-hand side, and the first term on the right, depend on $m$ and $n$ only through $m/n$, we may rewrite the constant $(nb_m-mb_n)/(na_n)$ as $b_{m/n}$. The result follows. \end{proof} Our main result below, on equational equivalence, uses a condition \eqref{GARplus} applied to the dense subgroup $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$. This is a \emph{quantifier weakening} relative to \eqref{GFE} and is similar to a condition with all variables ranging over $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$, denoted (${\it G}_{\mathbb{A}}$) in \cite{BinO2}, to which we refer for background on quantifier weakening. In Proposition 1 below we may also impose just $({\it G}_{\mathbb{A}_{\mathbb{Q}}})$, granted continuity of $\varphi$. \begin{proposition}\label{p} For $\varphi$ continuous and $a_n=n^k$ ($k\ne0$), the functional equation \eqref{ChFE} is equivalent to \begin{equation}\label{GARplus} K(st)-K(s)=K(t)G(s)\qquad(s\in\mathbb{A},\;t\in\mathbb{R}_+), \tag{${\it G}_{\mathbb{A},\mathbb{R}_+}$} \end{equation} for either of $\mathbb{A}=\mathbb{A}_{\mathbb{N}}$ or $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$, both with side condition $K(1)=0$ and with $K$ and $G$ continuous; the latter directly implies \eqref{GFEx}. The correspondence is given by \[ K(t)= \begin{cases} \displaystyle \frac{f(t)}{t\mathstrut},&\text{if $f(1)=0$},\\ \displaystyle \frac{f(t)}{tf(1)}-1,&\text{if $f(1)\neq0$}. \end{cases} \] \end{proposition} } \begin{proof}By the Lemma, using principal values, \eqref{ChFE} may be re-written as \[ \varphi(t)^{n/t}=\varphi(a_nt)^{1/t}\exp(\ri b_n)\qquad (n\in\mathbb{N},\;t\in \mathbb{R}_+). \] From here, on taking principal logarithms and adjusting notation ($f:=\log \varphi $, $h(n)=-$\textrm{i}$b_{n},$ and $g(n):=a_{n}\in \mathbb{R}_{+}$), pass first to the form \[ \frac{f(g(n)t)}t=\frac{nf(t)}t+h(n)\qquad(n\in\mathbb{N},\;t\in\mathbb{R}_+); \] here the last term does not depend on $t$, and is defined for each $n$ so as to achieve equality. Then, with $s:=g(n)\in\mathbb{R}_{+}$, replacement of $n$ by $g^{-1}(s)$, valid by injectivity, gives, on cross-multiplying by $t$, \[ f(st)=g^{-1}(s)f(t)+h(g^{-1}(s))t. \] As $s,t\in \mathbb{R}_+$, take $F(t):=f(t)/t$, $G(s):=g^{-1}(s)/s$, $H(s):=h(g^{-1}(s))/s$; then \begin{equation}\label{dag} F(st)=F(t)G(s)+H(s)\qquad (s\in\mathbb{A}_{\mathbb{N}},\;t\in\mathbb{R}_+).
\tag{\dag } \end{equation} This equation contains \emph{three} unknown functions: $F$, $G$, $H$ (cf.\ the Pexider-like formats considered in \cite[Section 4]{BinO2}), but we may reduce the number of unknown functions to \emph{two} by entirely eliminating\footnote{This loses the ``affine action'': $K\mapsto G(t)K+H(t)$.} $H$. The elimination argument splits according as $F(1)=f(1)$ is zero or not. \begin{enumerate}[\it C{a}se 1:\/] \item $f(1)=0$ (i.e.\ $\varphi (1)=1$). Taking $t=1$ in \eqref{dag} yields $F(s)=H(s)$, and so \eqref{GARplus} holds for $K=F$, with side condition $K(1)=0$ ($=F(1)$). \item $f(1)\neq 0$. Then, with $\tilde{F}:=F/F(1)$ and $\tilde{H}:=H/F(1)$ in \eqref{dag}, \[ \tilde{F}(st)=\tilde{F}(t)G(s)+\tilde{H}(s)\qquad (s\in\mathbb{A},\;t\in\mathbb{R}_+), \] and $\tilde{F}(1)=1$. Taking again $t=1$ gives $\tilde{F}(s)=G(s)+\tilde{H}(s)$. Setting \begin{equation}\label{dagdag} K(t):=\tilde{F}(t)-1=\frac{F(t)}{F(1)}-1\tag{\dag\dag } \end{equation} (so that $K(1)=0$), and using $\tilde{H}=\tilde{F}-G$ in \eqref{dag} gives \begin{align*} \tilde{F}(st)&=\tilde{F}(t)G(s)+\tilde{F}(s)-G(s),\\ (\tilde{F}(st)-1)-(\tilde{F}(s)-1)&=(\tilde{F}(t)-1)G(s),\\ K(st)-K(s)&=K(t)G(s). \end{align*} That is, $K$ satisfies \eqref{GARplus} with side condition $K(1)=0$. \end{enumerate} In summary: in both cases elimination of $H$ yields $(G_{\mathbb{A},\mathbb{R}_+})$ and the side condition of vanishing at the identity. So far, in \eqref{GARplus} above, $t$ ranges over $\mathbb{R}_+$ whereas $s$ ranges over $\mathbb{A}_{\mathbb{N}}=\{a_n:n\in\mathbb{N}\}$, but $s$ can be allowed to range over $\{a_{m/n}:m,n\in \mathbb{N}\}$, by the Corollary. As before, since $a:n\mapsto a_n$ has $\tilde{a}$ as its continuous extension to a bijection onto $\mathbb{R}_+$, and $\varphi$ is continuous, we conclude that $s$ may range over $\mathbb{R}_+$, yielding the multiplicative form of the Goldie equation \eqref{GFEx} with the side-condition of vanishing at the identity. \end{proof} \begin{rem}\label{r:onecase} As in \cite[Section 5]{PitP}, we consider only \emph{non-degenerate} stable distributions, consequently `Case 1' will not figure below (as this case yields an arithmetic distribution---cf.\ \cite[XVI.1, Lemma 4]{Fel}, so here concentrated on $0$). \end{rem} \begin{rem}\label{r:case2} In `Case 2' above, $\tilde{H}(st)-\tilde{H}(s)=\tilde{H}(t)G(s)$, since $G(st)=G(s)G(t)$, by Remark \ref{r:altreg}. So $\tilde H(\re^u)=\kappa H_{\gamma}(u)=\kappa(\re^{\gamma u}-1)/\gamma$. We use this in Section \ref{s:form}. \end{rem} \section{Stable laws: their form}\label{s:form} This section demonstrates how to `telescope' several cases of the analysis in \cite{PitP} into one, and to make l'Hospital's Rule carry the burden of the `limiting' case $\alpha =1$. At little cost, we also deduce the form of the location constants $b_n$, without needing the separate analysis conducted in \cite[Section 5.2]{PitP}. We break up the material into steps, beginning with a statement of the result. \subsection{Form of the law}\label{ss:form} The \emph{form} of $\varphi $ for a non-degenerate stable distribution is an immediate corollary of Theorem GFE (Section \ref{s:intro}) applied to \eqref{dagdag} above. 
For some $\gamma\in\mathbb{R}$, $\kappa\in\mathbb{C}$ and with $A:=\kappa/\gamma$ and $B:=1-A$, \begin{equation}\label{ddag} f(t)=\log\varphi(t)= \begin{cases} f(1)(At^{\gamma+1}+Bt),&\text{for $\gamma\neq 0$},\\ f(1)(t+\kappa t\log t),&\text{with $\gamma=0$}, \end{cases} \qquad(t>0).\tag{\ddag} \end{equation} Here $\alpha:=\gamma+1$ is the \emph{characteristic exponent}. From this follows a formula for $t<0$ (by complex conjugation---see below). The connection with \cite[Section 5 at end]{PitP} is given by: \begin{enumerate}[(i)] \item $f(1):=\log\varphi(1)=-c+\ri y$ (with $c>0$, as $\abs{\varphi(t)}<1$ for some $t>0$); \item $f(1)\kappa=-\ri\lambda$. So $f(1)B=-c+\ri(y+\lambda/\gamma)$, and $\kappa=\lambda(-y+\ri c)/(c^2+y^2)$. \end{enumerate} \begin{rem}\label{r:dominant} We note, for the sake of completeness, that restrictions on the two parameters $\alpha$ and $\kappa$ (equivalently $\gamma$ and $\kappa$) follow from asymptotic analysis of the `initial' behaviour of the characteristic function $\varphi$ (i.e.\ near the origin). This is equivalent to the `final' or tail behaviour (i.e.\ at infinity) of the corresponding distribution function. Specifically, the `dominance ratio' of the imaginary part of the \emph{dominant} behaviour in $f(t)$ to the value $c$ (as in (i) above) relates to the `tail balance' ratio $\beta$ of \cite[(6.10)]{PitP}, i.e.\ the asymptotic ratio of the distribution's tail difference to its tail sum---cf.\ \cite[Section 8]{PitP}. Technical arguments, based on Fourier inversion, exploit the regularly varying behaviour as $t\downarrow 0$ (with index of variation $\alpha $---see above) in the real and imaginary parts of $1-\varphi(t)$ to yield the not unexpected result \cite[Theorem 6.2]{PitP} that, for $\alpha\neq1$, the dominance ratio is proportional to the tail-balance ratio $\beta$ by a factor equal to the ratio of the sine and cosine variants of Euler's Gamma integral\footnote{ In view of that factor's key role, a quick and elementary derivation is offered in the Appendix (for $0<\alpha<1$).} (on account of the dominant power function)---compare \cite[Theorem 4.10.3]{BinGT}. \end{rem} \subsection{On notation}\label{ss:notation} The parameter $\gamma:=\alpha-1$ is linked to the auxiliary function $G$ of \eqref{GFE}; this usage of $\gamma$ conflicts with \cite{PitP}, where two letters are used in describing the behaviour of the ratio $b_n/n$: $\lambda$ for the `case $\alpha=1$', and otherwise $\gamma$ (following Feller \cite[VI.1 Footnote 2]{Fel}). The latter we denote by $\gamma _{\text{P}}(k)$, reflecting the $k$ value in the `case $\alpha=1/k\neq 1$'. In Section \ref{ss:locgen} below it emerges that $\gamma_{\text{P}}(1+)=\lambda\log n$. \subsection{Verification of the form (\ref{ddag})}\label{ss:ver} By Remark \ref{r:onecase}, only the second case of the Proposition applies: the function $K(t)=\tilde{F}(t)-1=f(t)/(tf(1))-1$ solves \eqref{GFEx} with side-condition $K(1)=0$. Writing $t=e^u$ (as in Remark \ref{r:stuv}) yields \[ \frac{f(t)}{tf(1)}=\frac{f(\re^u)\re^{-u}}{f(1)}=1+K(\re^u)=\kappa(u) =1+\kappa\,\frac{\re^{\gamma u}-1}\gamma, \] for some complex $\kappa$ and $\gamma\neq0$ (with passage to $\gamma=0$, in the limit, to follow). So, for $t>0$, with $A:=\kappa/\gamma$ and $B:=1-A$, as above, \[ f(t)=\log \varphi(t)=f(1)t\Bigl(1+\kappa\,\frac{t^\gamma-1}\gamma\Bigr) =f(1)(At^\alpha+Bt), \] with $\alpha=\gamma+1$. 
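For orientation, a standard special case, not taken from \cite{PitP}, fits \eqref{ddag} directly: the symmetric $\alpha$-stable characteristic function $\varphi(t)=\exp(-c\abs{t}^{\alpha})$, with $c>0$ and $0<\alpha\leq2$, has $f(t)=-ct^{\alpha}$ for $t>0$, so that $f(1)=-c$, $A=1$ and $B=0$ (equivalently $\kappa=\gamma=\alpha-1$); for $\alpha=1$ this is the symmetric Cauchy case, i.e.\ the second line of \eqref{ddag} with $\kappa=0$.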
On the domain $t>0$, the formula just derived for $f$ agrees with \cite[(5.5)]{PitP}; for $t<0$ the appropriate formula is immediate via complex conjugation, verbatim as in the derivation of \cite[(5.5)]{PitP}, save for the $\gamma$ usage. To cover the case $\gamma=0$, apply the l'Hospital convention; as in \cite[(5.8)]{PitP}, for $t>0$ and $u>0$ and some $\kappa\in\mathbb{C}$, \[ \kappa (t):=\frac{f(e^t)e^{-t}}{f(1)}=1+\kappa t:\qquad f(u)=f(1)(u+\kappa u\log u). \] \subsection{Location parameters: general case $\alpha\neq1$}\label{ss:locgen} Here $\gamma =\alpha -1\neq 0$. From the proof of the Proposition, $G(t):=g^{-1}(\re^t)\re^{-t}$, so $g^{-1}(\re^t)=\re^t\re^{\gamma t}=\re^{\alpha t}$. Put $k=1/\alpha$; then \[ v=g^{-1}(u)=u^\alpha:\qquad u=g(v)=v^{1/\alpha}=v^k, \] confirming $a_n=g(n)=n^k$, as in \cite[Lemma 5.3]{PitP}. (Here $k>0$, as \emph{strict} monotonicity was assumed in the Proposition). Furthermore, as in Remark \ref{r:case2}, $$\kappa\,\frac{\re^{\gamma t}-1}\gamma=\tilde H(\re^t) =\frac{h(g^{-1}(\re^t))\re^{-t}}{f(1)};$$ so $$ h(g^{-1}(e^t))=f(1)\kappa\,\frac{\re^{\alpha t}-\re^t}\gamma:\qquad h(u)=f(1)\kappa\,\frac{u-u^{1/\alpha}}\gamma =f(1)\kappa\,\frac{u-u^k}\gamma, $$ where $\gamma=\alpha-1=(1-k)/k$. So $$b_n=\ri h(n)=\ri f(1)\kappa\,\frac{n-n^k}\gamma,$$ as in the Pitman analysis: see \cite[Section 5.1]{PitP}. Here $b_n$ is real, since $f(1)\kappa=-\ri\lambda$, according to (ii) in Section \ref{ss:form} above and conforming with \cite[Section 5.1]{PitP}. So as $b_n/n=\gamma _{\text{P}}(k)$, similarly to \cite[end of proof of Lemma 4.1]{PitP}, again as $f(1)\kappa=-\ri\lambda$, for any $n\in \mathbb{N}$ \[ \lim_{k\rightarrow1}\gamma_{\text{P}}(k)=\ri f(1)\kappa\, \lim_{k\rightarrow1}k\frac{1-n^{k-1}}{1-k}=\lambda\log n. \] \subsection{Location parameters: special case $\alpha=1$}\label{ss:special} Here $\gamma =0$. In Section \ref{ss:ver} above the form of $g$ specializes to \[ g^{-1}(\re^t)=\re^t:\qquad g(u)=u. \] Applying the l'Hospital convention yields the form of $h$: for $t>0$ and $u>0$, \[ h(g^{-1}(\re^t))=h(\re^t)=f(1)\kappa t\re^t:\qquad h(u)=f(1)\kappa u\log u; \] so, as in \cite[(5.8)]{PitP}, $b_n=\lambda n\log n$ (since $b_n=\ri h(n)$ and again $\lambda=\ri f(1)\kappa$). \section{Identifying $a_{n}$ from the continuity of $\protect\varphi $}\label{s:sequenceidentification} In \S 3 the form of the continuous solutions $\varphi $ of $(ChFE)$ was derived from the known continuous solutions of the Goldie equation $(GFE)$ on the assumption that $a_{n}=n^{k}$, for some $k\neq 0$ (as then $ \{a_{m}/a_{n}:m,n\in \mathbb{N}\}$ is dense in $\mathbb{R}_{+})$. Here we show that the side condition on $a_{n}$ may itself be deduced from $(ChFE) $ provided the solution $\varphi $ is continuous and \textit{non-trivial,} i.e. neither $|\varphi |\equiv 0$ nor $|\varphi |\equiv 1$ holds, so obviating the assumption that $\varphi $ is the characteristic function of a (non-degenerate) distribution. { \makeatletter \def\th@plain{\normalfont\itshape \def\@begintheorem##1##2{ \item[\hskip\labelsep \theorem@headerfont ##1{\bf .}] } } \begin{theorem}If $\varphi $ is a non-trivial continuous function and satisfies $(ChFE)$ for some sequence $a_{n}\geq 0$, then $a_{n}=n^{k}$ for some $k\neq 0.$ \end{theorem} } We will first need to establish a further lemma and proposition. \begin{lemma}If $(ChFE)$\ is satisfied by a\ non-trivial continuous function $\varphi$, then the sequence $a_{n}$ is either convergent to $0,$ or divergent (`convergent to $+\infty $'). \end{lemma} \begin{proof}Suppose otherwise.
Then for some $\mathbb{M} \subseteq \mathbb{N}$, and $a>0,$ \[ a_{m}\rightarrow a\text{ through }\mathbb{M}. \] W.l.o.g. $\mathbb{M}=\mathbb{N}$, otherwise interpret $m$ below as restricted to $\mathbb{M}$. For any $t,$ $a_{m}t\rightarrow at,$ so $ K_{t}:=\sup_{m}\{|\varphi (a_{m}t)|\}$ is finite. Then for all $m$ \[ |\varphi (t)|^{m}=|\varphi (a_{m}t)|\leq K_{t}, \] and so $|\varphi (t)|\leq 1,$ for all $t.$ By continuity, \[ |\varphi (at)|=\lim_{m}|\varphi (a_{m}t)|=\lim_{m}|\varphi (t)|^{m}=0\text{ or }1. \] Then, setting $N_{k}:=\{t:|\varphi (at)|=k\},$ \[ \mathbb{R}_{+}=N_{0}\cup N_{1}. \] By the connectedness of $\mathbb{R}_{+}$, one of $N_{0},N_{1}$ is empty, as the sets $N_{k}$ are closed; so respectively $|\varphi |\equiv 0$ or $ |\varphi |\equiv 1,$ contradicting non-triviality. \end{proof} The next result essentially contains \cite[Lemma 5.2]{PitP}, which relies on $ |\varphi (0)|=1,$ the continuity of $\varphi ,$ and the existence of some $t$ with $\varphi (t)<1$ (guaranteed below by the non-triviality of $\varphi ).$ We assume less here, and so must also consider the possibility that $ |\varphi (0)|=0.$ \begin{proposition} \textit{If }$(ChFE)$\ \textit{is satisfied by a non-trivial continuous function }$\varphi $\textit{\ and for some } $c>0,$\textit{\ }$|\varphi (t)|=$\textit{\ }$|\varphi (ct)|$\textit{\ for all }$t>0,$\textit{\ then }$c=1$\textit{.} \end{proposition} \begin{proof} Note first that $a_{n}>0$ for all $n;$ indeed, otherwise, for some $k\geq 1$ \[ |\varphi (t)|^{k}=|\varphi (0)|\qquad (t\geq 0). \] Assume first that $k>1;$ taking $t=0$ yields $|\varphi (0)|=0$ or $1,$ which as in Lemma 2 implies $|\varphi |\equiv 0$ or $|\varphi |\equiv 1.$ If $k=1$ then $|\varphi (t)|=|\varphi (0)|$ and for all $n>1,$ $|\varphi (0)|^{n}=|\varphi (0)|,$ so that again $|\varphi (0)|=0$ or $1,$ which again implies $|\varphi |\equiv 0$ or $|\varphi |\equiv 1.$ Applying Lemma 2, the sequence $a_{n}$ converges either to $0$ or to $ \infty .$ First suppose that $a_{n}\rightarrow 0.$ Then, as above (referring again to $ K_{t}$), we obtain $|\varphi (t)|\leq 1$ for all $t.$ Now, since \[ |\varphi (0)|=\lim |\varphi (a_{n}t)|=\lim_{n}|\varphi (t)|^{n}, \] if $|\varphi (t)|=1$ for \textit{some} $t,$ then $|\varphi (0)|=1$, and that in turn yields, for the very same reason, that $|\varphi (t)|\equiv 1$ for \textit{all} $t,$ a trivial solution, which is ruled out. So in fact $ |\varphi (t)|<1$ for all $t,$ and so $|\varphi (0)|=0.$ Now suppose that for some $c>0,$ $|\varphi (t)|=$ $|\varphi (ct)|$ for all $t>0.$ We show that $ c=1.$ If not, w.l.o.g. $c<1,$ (otherwise replace $c$ by $c^{-1}$ and note that $|\varphi (t/c)|=$ $|\varphi (ct/c)|=|\varphi (t)|$ ); then \[ 0=|\varphi (0)|=\lim_{n}|\varphi (c^{n}t)|=|\varphi (t)|,\text{ for }t>0, \] and so $\varphi $ is trivial, a contradiction. So indeed $c=1$ in this case. Now suppose that $a_{n}\rightarrow \infty .$ As $\varphi$ is non-trivial, choose $s$ with $\varphi (s)\neq 0,$ then \[ |\varphi (0)|=\lim_{n}|\varphi (s/a_{n})|=\lim_{n}\exp \left( \frac{1}{n} \log |\varphi (s)|\right) =1, \] i.e. $|\varphi (0)|=1.$ Again suppose that for some $c>0,$ $ |\varphi (t)|=$ $|\varphi (ct)|$ for all $t>0.$ To show that $c=1,$ suppose again w.l.o.g. that $c<1;$ then \[ 1=|\varphi (0)|=\lim_{n}|\varphi (c^{n}t)|=|\varphi (t)|\text{ for }t>0, \] and so $|\varphi (t)|\equiv 1,$ again a trivial solution. So again $c=1.$ \end{proof} \textit{Proof of the Theorem. 
} $(ChFE)$ implies that \[ |\varphi (a_{mn}t)|=|\varphi (t)|^{mn}=|\varphi (a_{m}t)|^{n}=|\varphi (a_{m}a_{n}t)|. \] By Proposition 2, $a_n$ satisfies the discrete version of the Cauchy exponential equation $(CEE)$ \[ a_{mn}=a_{m}a_{n}\qquad (m,n\in \mathbb{N}), \] whose solution is known to take the form $n^{k}$ (cf. \cite[Lemma 5.4]{PitP}), since $a_{n}>0$ (as in Prop. 2). If $a_{n}=1$ for some $n>1,$ then, for each $t>0,$ $|\varphi (t)|=0$ or 1 (as $|\varphi (t)|=|\varphi (t)|^{n}$) and so again, by continuity as in Lemma 2, $\varphi $ is trivial. So $k\neq 0.$ $ \square $ \begin{rem} Continuity is essential to the theorem: take $a_n\equiv 1$, then a Borel function $\varphi$ may take the values 0 and 1 arbitrarily. \end{rem} \section{Complements}\label{s:complements} \subsection{Self-neglecting and self-equivarying functions}\label{ss:SNSE} Recall (cf.\ \cite[Section 2.11]{BinGT}) that a self-map $\varphi$ of $\mathbb{R}_+$ is \emph{self-neglecting} ($\varphi\in{\it SN}$) if \begin{equation}\label{SN} \varphi(x+t\varphi(x))/\varphi(x)\rightarrow1\quad\text{locally uniformly in $t$ for all $t\in\mathbb{R}_+$},\tag{\emph{SN}} \end{equation} and $\varphi(x)=\mathrm o(x)$ as $x\rightarrow \infty$. This traditional restriction may be usefully relaxed in two ways, as in \cite{Ost1}: firstly, in imposing the weaker order condition $\varphi(x)=\mathrm O(x)$, and secondly by replacing the limit $1$ by a general limit function $\eta$, so that \begin{equation}\label{SE} \varphi(x+t\varphi(x))/\varphi(x)\rightarrow\eta(t)\quad\text{locally uniformly in $t$ for all $t\in\mathbb{R}_+$}.\tag{\emph{SE}} \end{equation} A $\varphi $ satisfying \eqref{SE} is called \emph{self-equivarying} in \cite{Ost1}, and the limit function $\eta=\eta^\varphi$ necessarily satisfies the equation \begin{equation}\label{BFE} \eta(u+v\eta(u))=\eta(u)\eta(v)\qquad(u,v\in\mathbb{R}_+) \tag{\emph{BFE}} \end{equation} (this is a special case of the \foreignlanguage{polish}{Go\l\aob b--Schinzel} equation---see also e.g.\ \cite{Brz}, or \cite{BinO2}, where \eqref{BFE} is termed the \emph{Beurling functional equation}). As $\eta\ge0$, imposing the natural condition $\eta>0$ (on $\mathbb{R}_+$) implies that it is continuous and of the form $\eta(t)=1+\rho t$, for some $\rho\ge0$ (see \cite{BinO2}); the case $\rho=0$ recovers \eqref{SN}. A function $\varphi\in{\it SE}$ has the representation \[ \varphi(t)\sim\eta^\varphi(t)\int_1^t e(u)\sd u\quad\text{for some continuous $e\rightarrow0$} \] (where $f\sim g$ if $f(x)/g(x)\rightarrow1$, as $x\rightarrow\infty$), and the second factor is in ${\it SN}$ (see \cite[Theorem 9]{BinO1}, \cite{Ost1}). \subsection{Theorem GFE}\label{ss:ThmGFE} This theorem has antecedents in \cite{Acz} and \cite{Chu}, \cite[Theorem 1]{Ost2}, and is generalized in \cite[Theorem 3]{BinO2}. It is also studied in \cite{BinO3} and \cite{Ost2}. \subsection{Homomorphisms and random walks}\label{ss:Homo} In the context of a ring, the `\foreignlanguage{polish}{Go\l\aob b--Schinzel} functions' $\eta_\rho(t)=1+\rho t$, as above, were used by Popa and Javor (see \cite{Ost2} for references) to define associated (generalized) \emph{circle operations}: \[ a\circ_\rho b=a+\eta_\rho(a)b=a+(1+\rho a)b=a+b+\rho ab. \] (Note that $a\circ_1b=a+b+ab$ is the familiar circle operation, and $a\circ_0b=a+b$.) 
These were studied in the context of $\mathbb{R}$ in \cite[Section 3.1]{Ost2}; it is straightforward to lift that analysis to the present context of the ring $\mathbb{C}$, yielding the \emph{complex circle groups}
\[
\mathbb{C}_\rho:=\{x\in\mathbb{C}:1+\rho x\ne0\} =\mathbb{C}\backslash\{-\rho^{-1}\}\qquad (\rho\ne0).
\]
Since
\begin{align*}
(1+\rho a)(1+\rho b)&=1+\rho a+\rho b+\rho^2ab =1+\rho\lbrack a+b+\rho ab],\\
\eta_\rho(a)\eta_\rho(b)&=\eta_\rho(a\circ_\rho b),
\end{align*}
$\eta_\rho:(\mathbb{C}_\rho,\circ_\rho)\rightarrow (\mathbb{C}^*,\cdot)=(\mathbb{C}\backslash\{0\},\times)$ is an isomorphism (`from $\mathbb{C}_\rho$ to $\mathbb{C}_\infty$'). We may recast \eqref{GFEx} along the lines of \eqref{dag} so that $G(s)=s^\gamma$ with $\gamma\ne0$, and $K(t)=(t^\gamma-1)\rho^{-1}$, for $$\rho=\frac\gamma\kappa=\frac{1-k}{k\kappa}.$$ Then, as $\eta_\rho(x)=1+\rho x=G(K^{-1}(x))$,
\[
K(st)=K(s)\circ_\rho K(t)=K(s)+\eta_\rho(K(s))K(t)=K(s)+G(s)K(t).
\]
For $\gamma\ne0$, $K$ is a homomorphism from the multiplicative reals $\mathbb{R}_+$ into $\mathbb{C}_\rho$; more precisely, it is an isomorphism between $\mathbb{R}_+$ and the conjugate subgroup $(\mathbb{R}_+-1)\rho^{-1}$. In the case $\gamma=0$ ($k=1$), $\mathbb{C}_0=\mathbb{C}$ is the additive group of complex numbers; from \eqref{GFEx} it is immediate that $K$ maps logarithmically into $(\mathbb{R},+),$ `the additive reals'.
\acks The final form of this manuscript owes much to the referee's supportively penetrating reading of an earlier draft, and to the editors' advice and good offices, for which sincere thanks.
\section*{Appendix: a ratio formula}
We give an elementary derivation (using Riemann integrals) of the formula
\[
\int_{0}^{\infty }\frac{\cos x}{x^{k}}e^{-\delta x}\,\mathrm{d}x\left/ \int_{0}^{\infty }\frac{\sin x}{x^{k}}e^{-\delta x}\,\mathrm{d}x\right. =\tan \pi k/2\qquad (0<k<1).
\]
Substitution for $\delta >0$ of $s=\delta +i=re^{i\theta }$, with $ r^{2}=1+\delta ^{2}$ and $\theta =\theta _{\delta }=\tan ^{-1}(1/\delta )$, in the Gamma integral:
\[
\frac{\Gamma (1-k)}{s^{1-k}}=\int_{0}^{\infty }\frac{e^{-sx}}{x^{k}}\, \mathrm{d}x,
\]
with $0<k<1$, gives
\[
\int_{0}^{\infty }\frac{\cos x-i\sin x}{x^{k}}e^{-\delta x}\,\mathrm{d}x= \frac{\Gamma (1-k)}{(1+\delta ^{2})^{(1-k)/2}}[\cos (1-k)\theta _{\delta }-i\sin (1-k)\theta _{\delta }]\qquad (\delta >0).
\]
This yields in the limit as $\delta \downarrow 0,$ since $\theta _{\delta }\rightarrow \pi /2,$ the ratio of the real and imaginary parts of the left-hand side for $\delta =0$ to be
\[
\cot (1-k)\pi /2=\tan \pi k/2.
\]
Passage to the limit $\delta \downarrow 0$ on the left is validated, for any $k>0$, by an appeal to Abel's method: first integration by parts (twice) yields an indefinite integral
\[
(1+\delta ^{2})\int e^{\delta x}\sin x\,\mathrm{d}x=-e^{\delta x}\cos x+\delta e^{\delta x}\sin x,
\]
valid for all $\delta ,$ whence (again by parts)
\[
\int_{1}^{T}\frac{e^{-\delta x}\sin x\,\mathrm{d}x}{x^{k}}=\frac{e^{-\delta }(\delta \sin 1+\cos 1)}{(1+\delta ^{2})}-\frac{e^{-\delta T}(\delta \sin T+\cos T)}{T^{k}(1+\delta ^{2})}-k\int_{1}^{T}\frac{e^{-\delta x}(\delta \sin x+\cos x)\,\mathrm{d}x}{x^{k+1}(1+\delta ^{2})}.
\] Here $e^{-\delta x}$ is uniformly bounded as $\delta \downarrow 0,$ so by joint continuity on $[0,1]$ \begin{eqnarray*} \lim_{\delta \downarrow 0}\int_{0}^{\infty }\frac{1}{x^{k}}e^{-\delta x}\sin x\,\mathrm{d}x &=&\lim_{\delta \downarrow 0}\int_{0}^{1}\frac{1}{x^{k}} e^{-\delta x}\sin x\,\mathrm{d}x+\lim_{\delta \downarrow 0}\int_{1}^{\infty } \frac{1}{x^{k}}e^{-\delta x}\sin x\,\mathrm{d}x \\ &=&\int_{0}^{\infty }\frac{\sin x}{x^{k}}\,\mathrm{d}x, \end{eqnarray*} and likewise with $\cos $ for $\sin $. \end{document}
Methodology | Open | Published: 14 December 2015 The ubiquitous self-organizing map for non-stationary data streams Bruno Silva1 & Nuno Cavalheiro Marques2 The Internet of Things promises a continuous flow of data where traditional database and data-mining methods cannot be applied. This paper presents improvements on the Ubiquitous Self-Organized Map (UbiSOM), a novel variant of the well-known Self-Organized Map (SOM), tailored for streaming environments. This approach allows ambient intelligence solutions using multidimensional clustering over a continuous data stream to provide continuous exploratory data analysis. The average quantization error and average neuron utility over time are proposed and used to estimate the learning parameters, allowing the model to retain an indefinite plasticity and to cope with changes within a multidimensional data stream. We perform a parameter sensitivity analysis, and our experiments show that UbiSOM outperforms existing proposals in continuously modeling possibly non-stationary data streams, converging faster to stable models when the underlying distribution is stationary and reacting accordingly to the nature of the change in continuous real-world data streams. At present, stream data processing based on instantaneous data has become a critical issue for the Internet, the Internet of Things (ubiquitous computing), social networking and other technologies. The massive amounts of data being generated in all these environments push the need for algorithms that can readily extract knowledge. Within this increasingly important field of research, the application of artificial neural networks to such tasks remains a fairly unexplored path. The self-organizing map (SOM) [1] is an unsupervised neural-network algorithm with topology preservation. The SOM has been applied extensively within fields ranging from engineering sciences to medicine, biology, and economics [2] over the years. The powerful visualization techniques for SOM models result from the useful and unique feature of SOM for detection of emergent complex cluster structures and non-linear relationships in the feature space [3]. The SOM can be visualized as a sheet-like neural network array, whose neurons become specifically tuned to various input vectors (examples) in an orderly fashion. For instance, the SOM and \(\mathcal {K}\)-means both represent data in a similar way through prototypes of data, i.e., centroids in \(\mathcal {K}\)-means and neuron weights in SOM, and their relation and different usages have already been studied [4]. However, it is the topological ordering of these prototypes in large SOM networks that allows the application of exploratory visualization techniques. This paper is an extended version of work published in [5], introducing a novel variant of SOM, called the ubiquitous self-organizing map (UbiSOM), specially tailored for streaming and big data. We extend our previous work by improving the overall algorithm with the use of a drift function to estimate learning parameters, which weighs the previously used average quantization error and a newly introduced metric: the average neuron utility. Also, the UbiSOM algorithm now implements a finite-state machine, which allows it to cope with drastic changes in the underlying stream. We also performed a parameter sensitivity analysis on the new parameters introduced by the algorithm.
Our experiments, with artificial data and a real-world electric consumption sensor data stream, show that UbiSOM can be applied to data processing systems that want to use the SOM method to provide a fast response and timely mine valuable information from the data. Indeed our approach, albeit being a single-pass algorithm, outperforms current online SOM proposals in continuously modeling non-stationary data streams, converging faster to stable models when the underlying distribution is stationary and reacting accordingly to the nature of the change. Background and literature review In this section we introduce data streams and review current SOM algorithms that can, in theory, be used for streaming data, highlighting their problems in this setting. Data streams Nowadays, data streams [6, 7] are generated naturally within several applications as opposed to simple datasets. Such applications include network monitoring, web mining, sensor networks, telecommunications, and financial applications. All have vast amounts of data arriving continuously. Being able to produce clustering models in real-time assumes great importance within these applications. Hence, learning from streams not only is required in ubiquitous environments, but also is of relevance to other current hot topics, namely Big Data. The rationale behind the requirement of learning from streams is that the amount of information being generated is too big to be stored in devices, where traditional mining techniques could be applied. Data streams arrive continuously and are potentially unbounded. Therefore, it is impossible to keep the entire stream in memory. Data streams require fast, real-time processing to keep up with the high rate of data arrival, and mining results are expected to be available within a short response time. Data streams also imply non-stationarity of data, i.e., the underlying distribution may change. This may involve appearance/disappearance of clusters, changes in mean and/or variance and also in correlations between variables. Consequently, algorithms performing over data streams are presented with additional challenges not previously tackled in traditional data mining. One thing that is agreed is that these algorithms can only return approximate models, since data cannot be revisited to fine-tune the models [7], hence the need for incremental learning. More formally, a data stream \(\mathcal {S}\) is a massive sequence of examples \({\mathbf{x}}_1, {\mathbf{x}}_2, \ldots , {\mathbf{x}}_N\), i.e., \(\mathcal {S} = \{ {\mathbf{x}}_i \}_{i=1}^{N}\), which is potentially unbounded (\(N \rightarrow \infty\)). Each example is described by a d-dimensional feature vector \({\mathbf{x}} = [ x_{i}^{j} ]_{j=1}^{d}\) belonging to a feature space \(\Omega\) that can be continuous, categorical or mixed. In our work we only consider continuous spaces. The Self-Organizing Map The SOM establishes a projection from the manifold \(\Omega\) onto a set \(\mathcal {K}\) of neurons (or units), formally written as \(\Omega \rightarrow \mathcal {K}\), hence performing both vector quantization and projection. Each unit \(k\) is associated with a prototype \({\mathbf{w}}_k \in \mathbb {R}^d\), all of which establish the set \(\mathcal {K}\) that is referred to as the codebook. Consequently, the SOM can be interpreted as a topology-preserving mapping from a high-dimensional input space onto the 2D grid of map units. The number of prototypes K is defined by the dimensions of the grid (lattice size), i.e., \(width \times height\).
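To fix notation before describing the algorithms, the following sketch (in Python; all function and variable names are chosen here, not taken from the paper) lays out the data structures implied above: a codebook of K = width × height prototypes in R^d, their lattice coordinates r_k, and a toy generator standing in for a normalized stream. It is only an illustration of the assumed data layout, not the authors' implementation; the update procedure applied to these structures is described next.

```python
import numpy as np

def init_codebook(width, height, d, rng):
    """K = width * height prototypes, one point of [0, 1]^d per map unit.

    Row k of the returned (K, d) array is the prototype w_k of the unit
    sitting at lattice position (k % width, k // width).
    """
    return rng.uniform(0.0, 1.0, size=(width * height, d))

def grid_positions(width, height):
    """Lattice coordinates r_k of every unit, returned as a (K, 2) array."""
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    return np.column_stack([xs.ravel(), ys.ravel()]).astype(float)

def toy_stream(d, rng):
    """Endless generator of observations x_t, already scaled to [0, 1]^d."""
    while True:
        yield rng.uniform(0.0, 1.0, size=d)

rng = np.random.default_rng(0)
W = init_codebook(width=40, height=20, d=3, rng=rng)   # 20 x 40 lattice, K = 800
R = grid_positions(width=40, height=20)
x = next(toy_stream(3, rng))
print(W.shape, R.shape, x.shape)                        # (800, 3) (800, 2) (3,)
```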
The classical Online SOM algorithm performs iteratively over time. An example \({\mathbf{x}}\) is presented at each iteration t and distances between \({\mathbf{x}}_t\) and all prototypes are computed. Usually the Euclidean distance is used and previous normalization of \({\mathbf{x}}\) is suggested to equate the dynamic ranges along each dimension; this ensures that no feature dominates the distance computations, improving the numerical accuracy. The best matching unit (BMU), which we denote by c, is the map unit with prototype closest to \({\mathbf{x}}_t\): $$\begin{aligned} {\mathbf{w}}_{c}(t)=\underset{k}{min}\| {\mathbf{x}}-{\mathbf{w}}_{k}\|. \end{aligned}$$ It is important to highlight that \(E(t) = \parallel {\mathbf{x}}_t-{\mathbf{w}}_{c}(t)\parallel\) is the map error at time t, and is referred to as the quantization error. Next, the prototype vectors are updated: the BMU and its topological neighbors are moved towards the example in the input space by the Kohonen learning rule [1, 8]: $$\begin{aligned} {\mathbf{w}}_{k}(t+1)={\mathbf{w}}_{k}(t)+\eta (t)\, h_{ck}(t)\left[ {\mathbf{x}}_t-{\mathbf{w}}_{k}(t)\right] \end{aligned}$$ $$\begin{aligned}&t \text{ is the time};\\ &\eta (t)\text{ is the learning rate};\\ &h_{ck}(t)\text{ is the neighborhood kernel centered on the BMU:} \end{aligned}$$ $$\begin{aligned} h_{ck}(t)=e^{-\Bigg (\frac{\parallel r_{c}-r_{k}\parallel}{\sigma (t)}\Bigg ) ^{2}} \end{aligned}$$ where \(r_c\) and \(r_k\) are positions of units c and \(k\) on the SOM grid. Both \(\sigma (t)\) and \(\eta (t)\) decrease monotonically with time, a critical condition for the network to converge steadily towards a topological ordered state and to map the input space density. The following decreasing functions are common: $$\begin{aligned} \sigma (t)=\sigma _{i}\left( \frac{\sigma _{f}}{\sigma _{i}}\right) ^{t/t_{f}},\quad \eta (t)=\eta _{i}\left( \frac{\eta _{f}}{\eta _{i}}\right) ^{t/t_{f}}, \end{aligned}$$ where \(\sigma _i\) and \(\sigma _f\) are respectively the initial and final neighborhood width and \(\eta _i\) and \(\eta _f\) the initial and final learning rate. From [8] it is suggested that: the width of the neighborhood should be decreased from a width approximately the width of the lattice, for an initial global ordering of the prototypes, down to only encompassing the adjacent map units. This iterative scheme endures until \(t_f\) is reached and is typically defined so the dataset is presented several times. SOM models for streaming data In a real-world streaming environment \(t_f\) is unknown or not defined, so the classical algorithm cannot be used. Even with a bounded stream the Online SOM loses plasticity over time (due to the decrease of the learning parameters) and cannot cope easily with changes in the underlying distribution. Despite the huge amount of SOM literature around SOM and SOM-like networks, there is surprisingly and comparatively very little work dealing with incremental learning. Furthermore, most of these works are based on incremental models, that is, networks that create and/or delete nodes as necessary. For example, the modified GNG model [9] is able to follow non-stationary distributions by creating nodes like in a regular GNG and deleting them when they have a too small utility parameter. 
Similarly, the evolving self-organizing map (ESOM) [10, 11] is based on an incremental network quite similar to GNG that creates nodes dynamically based on the distance of the BMU to the example (but the new node is created at the exact data point instead of the mid-point as in GNG). The self-organizing incremental neural network (SOINN) [12] and its enhanced version (ESOINN) [13] are also based on an incremental structure, where the first version uses a two-layer network while the enhanced version proposes a single-layer network. These proposals, however, do not guarantee a compact model, given that the number of nodes can grow without bound in a non-stationary environment if not parameterized correctly. On the other hand, our proposal keeps the size of the map fixed. Some time-independent SOM variants obeying this restriction have been proposed. The two most recent examples are the Parameterless SOM (PLSOM) [14], which evaluates the local error \({E(t)}\) and calculates the learning parameters depending on the local quadratic fitting error of the map to the input space, and the Dynamic SOM (DSOM) [15], which follows a similar reasoning by adjusting the magnitude of the learning parameters to the local error, but fails to converge from a totally unordered state. Moreover, the authors of both proposals admit that their algorithms are unable to map the input space density onto the SOM, which has a severe impact on the application of common visualization techniques for exploratory analysis. Also, these variants are very sensitive to outliers, i.e., noisy data, by using instantaneous \({E(t)}\) values. On the other hand, the UbiSOM algorithm proposed in this paper estimates learning parameters based on the performance of the map over streaming data by monitoring the average quantization error, being more tolerant to noise and aware of real changes in the underlying distribution. The ubiquitous self-organizing map The proposed UbiSOM algorithm relies on two learning assessment metrics, namely the average quantization error and the average neuron utility, computed over a sliding window. While the first assesses the trend of the vector quantization process towards the underlying distribution, the latter is able to detect regions of the map that may become "unused" given some changes in the distribution, e.g., disappearance of clusters. Both metrics are weighed in a drift function that gives an overall indication of the performance of the map over the data stream, used to estimate learning parameters. The UbiSOM implements a finite-state machine consisting of two states, namely ordering and learning. The ordering state allows the map to initially unfold over the underlying distribution with monotonically decreasing learning parameters; it is also used to obtain the first values of the assessment metrics, transitioning afterwards to the learning state. Here, the learning parameters, i.e., learning rate and neighborhood radius, are decreased or increased based on the drift function. This allows the UbiSOM to retain an indefinite plasticity, while maintaining the original SOM properties, over non-stationary data streams. These states also coincide with the two typical training phases suggested by Kohonen. It is possible, however, that unrecoverable situations arising from abrupt changes in the underlying distribution are detected, which leads the algorithm to transition back to the ordering state.
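Since the UbiSOM keeps the Kohonen update rule of the classical algorithm and changes only how the learning rate and neighborhood width are obtained, a compact sketch of one classical online iteration may help before the formal description. The code below is a minimal illustration of the BMU search, the exponentially decreasing schedules, the Gaussian neighborhood kernel and the Kohonen rule given earlier; the numeric defaults and names are placeholders chosen here, not values prescribed by the paper.

```python
import numpy as np

def online_som_step(W, R, x, t, t_f,
                    eta_i=0.1, eta_f=0.01, sigma_i=10.0, sigma_f=1.0):
    """One iteration of the classical online SOM described above.

    W: (K, d) prototypes, R: (K, 2) lattice coordinates, x: observation at
    time t, t_f: last iteration of the schedule.  The numeric defaults are
    placeholders, not values from the paper.
    """
    # best matching unit: prototype closest to x (Euclidean distance)
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # exponentially decreasing learning rate and neighborhood width
    eta = eta_i * (eta_f / eta_i) ** (t / t_f)
    sigma = sigma_i * (sigma_f / sigma_i) ** (t / t_f)
    # Gaussian neighborhood kernel centred on the BMU, measured on the lattice
    h = np.exp(-(np.linalg.norm(R - R[c], axis=1) / sigma) ** 2)
    # Kohonen learning rule: move the BMU and its neighbours towards x
    W += eta * h[:, None] * (x - W)
    return W, c

rng = np.random.default_rng(0)
W = rng.uniform(size=(800, 3))                          # 20 x 40 lattice, d = 3
xs, ys = np.meshgrid(np.arange(40), np.arange(20))
R = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
for t in range(1, 1001):
    W, c = online_som_step(W, R, rng.uniform(size=3), t, t_f=1000)
```

The UbiSOM replaces the fixed time-dependent schedule above with estimates driven by the assessment metrics introduced next.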
Each UbiSOM neuron \(k\) is a tuple \(\mathcal {W}_{k}=\langle {\mathbf{w}}_{k},\, t_{k}^{update}\rangle\), where \({\mathbf{w}}_{k}\in \mathbb {R}^{d}\) is the prototype and \(t_{k}^{update}\) stores the time stamp of the last time its prototype was updated. For each incoming observation \({\mathbf{x}}_{t}\), presented at time t, two metrics are computed, within a sliding window of length T, namely the average quantization error \(\overline{qe}(t)\) and the average neuron utility \(\overline{\lambda }(t)\). We assume that all features of the data stream are equally normalized between \([d_{min},d_{max}]\). The local quantization error \({E(t)}\) is normalized by \(|\Omega |=(d_{max}-d_{min})\sqrt{d}\), so that \(\overline{qe}(t)\in [0,1]\). The \(\overline{\lambda }(t)\) metric averages neuron utility (\(\lambda (t)\)) values that are computed as a ratio of updated neurons during the last T observations. Both metrics are used in a drift function \(d(t)\), where the parameter \(\beta \in [0,1]\) weighs both metrics. The UbiSOM switches between the ordering and learning states, both using the classical SOM update rule, but with different mechanisms for estimating learning parameters \(\sigma\) and \(\eta\). The ordering state endures for \(T\) examples, until the first values of \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are available, establishing an interval \([t_{i},t_{f}]\), during which monotonically decreasing functions \(\sigma (t)\) and \(\eta (t)\) are used to decrease values between \(\{\sigma _{i},\sigma _{f}\}\) and \(\{\eta _{i},\eta _{f}\},\) respectively. The learning state estimates learning parameters as a function of the drift function. UbiSOM neighborhood function is defined in a way that uses \(\sigma \in [0,1]\) as opposed to existing variants, where the domain of the values is problem-dependent. Online assessment metrics The purpose of these metrics is to assess the "fit" of the map to the underlying distribution. Both proposed metrics are computed over a sliding window of length \(T\). Average quantization error The widely used global quantization error (QE) metric is the standard measure of fit of a SOM model to a particular distribution. It is typically used to compare SOM models obtained for different runs and/or parameterizations and used in a batch setting. The rationale is that the model which exhibits a lower QE value is better at summarizing the input space. Regarding data streams this metric, as it stands, is not applicable because data is potentially infinite. Competing approaches to the proposed UbiSOM use only the local quantization error \(E(t)\). Kohonen stated that both \(\eta (t)\) and \(\sigma (t)\) should decrease monotonically with time, a critical condition to achieve convergence [8]. However, the local error is very unstable because \(\Omega \rightarrow \mathcal {K}\) is a many-to-few mapping, where some observations are better represented than others. As an example, with stationary data the local error does not decrease monotonically over time. We argue this is the reason why other existing approaches, e.g., PLSOM and DSOM, fail to model the input space density correctly. In the proposed algorithm, the quantization error was modified to a running mean in the form of the average quantization error \(\overline{qe}(t)\), based on the premise that the error of a learner will decrease over time for an increasing number of examples if the underlying distribution is stationary; otherwise, if the distribution changes, the error increases. 
For each observation \({\mathbf{x}}_t\) the \(E{^{\prime }}(t)\) local quantization error is obtained during the BMU search, as the normalized Euclidean distance $$\begin{aligned} E{^{\prime }}(t)=\frac{\Vert \,{\mathbf{x}}_{t}-{\mathbf{w}}_{c}\,\Vert }{|\Omega |}. \end{aligned}$$ These values are averaged over a window of length \(T \gg 0\) to obtain \(\overline{qe}(t)\), defined in Eq. (6). Consequently, the value of \(T\) establishes a short, medium or long-term trend of the model adaptation. $$\begin{aligned} \overline{qe}(t)=\frac{1}{T}\sum _{t}^{t-T+1}E{^{\prime }}(t) \end{aligned}$$ Figure 1 depicts the typical behavior of \(E{^{\prime }}(t)\) values obtained during a run of the classical SOM algorithm, together with the computed \(\overline{qe}(t)\) values for \(T=2000\), over a data stream where the underlying distribution suffers an abrupt change at \(t\) = \(50\,000\). We can observe that \(E{^{\prime }}(t)\) values exhibit a large variance throughout time, as opposed to \(\overline{qe}(t)\), which is smoother and indicates the trend of the convergence. Therefore, it is implicitly assumed that if \(\overline{qe}(t)\) is decreasing, then the underlying distribution is stationary; otherwise, it is changing. Behavior of local \(E{^{\prime }}(t)\) vs. average quantization error \(\overline{qe}(t)\) Average neuron utility The average quantization error \(\overline{qe}(t)\) may be a good overall indicator of the fit of the model. Despite that, it may be unable to detect the abrupt disappearance of clusters. Figure 2 illustrates such a scenario, depicting the "unused" area of the map after the inner cluster disappears. Here \(\overline{qe}(t)\) does not increase; however, in this situation the learning parameters should increase and allow the map to recover. As a consequence, the average neuron utility was proposed as a means to detect these cases. Example of a distribution change not detected by the average quantization error. a Before change; b after the disappearance of the inner cluster, where a region of unused neurons is visible To compute this assessment metric each UbiSOM neuron \(k\) is extended with a time stamp \(t_{k}^{update}\) which stores the last time the corresponding prototype was updated, functioning as an aging mechanism. A prototype is updated if it is the BMU or if it falls in the influence region of the BMU, limited by the neighborhood function. Initially, \(t_{k}^{update}=0\). The neuron utility \(\lambda (t)\) is given by Eq. (7). It measures the ratio of neurons that were updated within the last \(T\) observations, over the total number of neurons. Consequently, if all neurons have been recently updated, then \(\lambda (t)=1\). The values are then averaged by Eq. (8) to obtain \(\overline{{\lambda }}(t)\). $$\begin{aligned} \lambda (t)=\frac{\sum _{k=1}^{K}1_{\{t-t_{k}^{update}\le T\}}}{K} \end{aligned}$$ $$\begin{aligned} \overline{\lambda }(t)=\frac{1}{T}\sum _{t}^{t-T+1}\lambda (t). \end{aligned}$$ As a result, a decrease in \(\overline{\lambda }(t)\) indicates that there are neurons that are not being used to quantize the data stream. While it is not unusual to obtain these "dead units" with stationary data after the map has converged, the decreasing trend should alert to changes in the underlying distribution. The drift function The previous metrics \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are both weighed in a drift function that is used by the UbiSOM to estimate learning parameters.
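Both windowed metrics just defined are plain running means over the last T values, so a bounded deque is enough to maintain them. The class below is a minimal sketch of Eqs. (5)-(8) with names and a toy usage chosen here, not the authors' code; it assumes the stream is normalized so that |Omega| = (d_max - d_min) * sqrt(d).

```python
from collections import deque
import numpy as np

class StreamMetrics:
    """Sliding-window averages qe_bar(t) (Eq. (6)) and lambda_bar(t) (Eq. (8))."""

    def __init__(self, T, omega_norm):
        self.T = T                          # window length
        self.omega_norm = omega_norm        # |Omega| = (d_max - d_min) * sqrt(d)
        self.errors = deque(maxlen=T)       # last T values of E'(t)
        self.utilities = deque(maxlen=T)    # last T values of lambda(t)

    def observe(self, x, w_bmu, last_update, t):
        """Record one observation; last_update[k] holds t_k^update."""
        e = float(np.linalg.norm(x - w_bmu)) / self.omega_norm           # E'(t), Eq. (5)
        lam = float(np.mean((t - np.asarray(last_update)) <= self.T))    # lambda(t), Eq. (7)
        self.errors.append(e)
        self.utilities.append(lam)

    def qe_bar(self):      # average quantization error over the window
        return float(np.mean(self.errors))

    def lambda_bar(self):  # average neuron utility over the window
        return float(np.mean(self.utilities))

# toy usage for a stream in [0, 1]^7 with 800 map units
m = StreamMetrics(T=2000, omega_norm=np.sqrt(7))
m.observe(np.zeros(7), np.full(7, 0.1), last_update=[0] * 800, t=1)
print(m.qe_bar(), m.lambda_bar())   # 0.1 1.0
```

In this sketch the averages are also reported while the window is still filling, whereas in the paper both metrics (and hence the drift function) only become available after the first T observations.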
In short: \(\overline{qe}(t)\) The average quantization error gives an indication of how well the map is currently quantifying the underlying distribution, previously defined in Eq. (6). In most situations where the underlying data stream is stationary, \(\overline{qe}(t)\) is expected to decrease and stabilize, i.e., the map is converging. If the shape of the distribution changes, \(\overline{qe}(t)\) is expected to increase. \(\overline{\lambda }(t)\) The average neuron utility is an additional measure which gives an indication of the proportion of neurons that are actively being updated, previously defined in Eq. (8). The decrease of \(\overline{\lambda }(t)\) indicates neurons are being underused, which can reflect changes in the underlying distribution not detected by \(\overline{qe}(t)\). The drift function is defined as $$\begin{aligned} d(t)=\beta \,\overline{qe}(t)+(1-\beta )\,(1-\overline{\lambda }(t)) \end{aligned}$$ where \(\beta \in [0,1]\) is a weighting factor that establishes the balance of importance between the two metrics. Since both \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are only obtained after \(T\) observations, so is \(d(t)\). A quick analysis of \(d(t)\) should be made: with high learning parameters, especially the neighborhood \(\sigma\) value, \(\overline{\lambda }(t)\) is expected to be \(\thickapprox 1\), which practically eliminates the second term of the equation. Consequently, the drift function is only governed by \(\overline{qe}(t)\). When the neuron utility decreases, the second term contributes to the increase of \(d(t)\) in proportion to the chosen \(\beta\) value. Ultimately, if \(\beta =1\) then the drift function is only defined by the \(\overline{qe}(t)\) metric. Empirically, \(\beta\) should be parameterized with relatively high values, establishing \(\overline{qe}(t)\) as the main measure of "fit" and using \(\overline{\lambda }(t)\) as a failsafe mechanism. The neighborhood function The UbiSOM algorithm uses a normalized neighborhood radius \(\sigma\) learning parameter and a truncated neighborhood function. The latter is what effectively allows \(\overline{\lambda }(t)\) to be computed. The classical SOM neighborhood function relies on a \(\sigma\) value that is problem-dependent, i.e., the used values depend on the lattice size. This complicates the parameterization of \(\sigma\) for different lattice sizes, i.e., \(width\times height\). The performed normalization is based on the maximum distance between any two neurons in the lattice. In rectangular maps the farthest neurons are the ones at opposing diagonals, e.g., positions (0, 0) and \((width-1,\, height-1)\) in Fig. 3. Hence distances within the lattice are normalized by the Euclidean norm of the vector \({\mathbf{diag}}=(width-1,height-1)\), defined as $$\begin{aligned} \| {\mathbf{diag}}\| =\sqrt{(width-1)^{2}+(height-1)^{2}}. \end{aligned}$$ This effectively limits the maximum neighborhood width the UbiSOM can use and establishes \(\sigma \in [0,1]\). Maximum lattice distance used for normalization of \(\sigma\) The neighborhood function of the UbiSOM variant is given by $$\begin{aligned} h_{ck}^{\prime }(t)=e^{-\Big (\frac{\parallel r_{c}-r_{k}\parallel }{\sigma \,\parallel {\mathbf{diag}}\parallel }\Big )^{2}} \end{aligned}$$ where \(r_{c}\) is the position in the grid of the BMU for observation \({\mathbf{x}}_{t}\). To get a grasp on how different \(\sigma\) values determine the influence region around the BMU, Fig. 4 depicts Eq.
(11) for different \(\sigma\) values. Neurons whose values of \(h_{ck}^{\prime }(t)\) are below a threshold of 0.01 are not updated. This is critical for the computation of \(\lambda (t)\), since \(h_{ck}^{\prime }(t)\) is a continuous function and as \(h_{ck}^{\prime }(t)\rightarrow 0\) all other neurons would still be updated with very small values. The truncated neighborhood function is also a performance improvement, avoiding negligible updates to prototypes. The UbiSOM neighborhood function. The threshold value is also depicted, below which no updates to prototypes are performed UbiSOM states and transitions States and transitions The UbiSOM algorithm implements a finite state-machine, i.e., it can switch between two states. This design was, on one hand, imposed by the initial delay in obtaining values for the assessment metrics and, as a consequence, for the drift function \(d(t)\); on the other hand, seen as a desirable mechanism to conform to Kohonen's proposal of an ordering and a convergence phase for the SOM [8] and to deal with drastic changes that can occur in the underlying distribution. The two possible states of the UbiSOM algorithm, namely ordering state and learning state are depicted in Fig. 5 and described next. Both use a similar update equation as the classical algorithm, but with the neighborhood function defined in Eq. (11), as defined in Eq. (12). Please note that the prototypes are only updated above the neighborhood function threshold. $$\begin{aligned} {\mathbf{w}}_{k}(t+1)={\left\{ \begin{array}{ll} {\mathbf{w}}_{k}(t)+\eta (t)h_{ck}^{\prime }(t)\left[ {\mathbf{x}}_{t}-{\mathbf{w}}_{k}(t)\right] &\quad h_{ck}^{\prime }(t)>0.01\\ {\mathbf{w}}_{k}(t) &\quad otherwise \end{array}\right. } \end{aligned}$$ However, each state estimates learning parameters with different functions for \(\eta (t)\) and \(\sigma (t)\). Ordering state The ordering state is the initial state of the UbiSOM algorithm and to where it possibly reverts if it can not recover from an abrupt change in the data stream. It endures for \(T\) observations where learning parameters are estimated with a monotonically decreasing function, i.e., time-dependent, similar to the classical SOM. Thus, the parameter \(T\) simultaneously defines the window length of the assessment metrics, as well as dictates the duration of the ordering state. The parameters should be relatively high, so the map can order itself from a totally unordered initialization regarding the underlying distribution. This phase also allows for the first value of the drift function \(d(t)\) to be available. After \(T\) observations the algorithm switches to the learning state. Let \(t_{i}\) and \(t_{f}=t_{i}+T-1\) be the first and last iterations of the ordering phase, respectively. This state requires choosing appropriate parameter values for \(\eta _{i}\), \(\eta _{f}\), \(\sigma _{i}\) and \(\sigma _{f}\), which are, respectively, the initial and final values for the learning rate and the normalized neighborhood radius. The choice of values will greatly impact the initial ordering of the prototypes and will affect the estimation of parameters of the learning state. 
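The update just described, i.e., the normalized, truncated neighborhood of Eq. (11) inside the rule of Eq. (12), together with the time-stamp refresh that feeds the neuron utility, can be sketched as follows. As before, this is a minimal illustration with names chosen here rather than the authors' implementation; the schedules that supply eta and sigma while the map is in the ordering state are given next.

```python
import numpy as np

THRESHOLD = 0.01   # kernel values below this produce no update (and no time stamp)

def ubisom_update(W, R, last_update, x, t, eta, sigma):
    """One UbiSOM prototype update with the normalized, truncated neighborhood.

    sigma is the normalized radius in [0, 1]: lattice distances are divided by
    the norm of diag so the kernel of Eq. (11) is independent of lattice size.
    Neurons that receive an update get their time stamp t_k^update refreshed,
    which is what the neuron-utility metric reads.
    """
    c = int(np.argmin(np.linalg.norm(W - x, axis=1)))              # BMU
    diag = np.linalg.norm(R.max(axis=0) - R.min(axis=0))           # ||diag||
    h = np.exp(-(np.linalg.norm(R - R[c], axis=1) / (sigma * diag)) ** 2)   # Eq. (11)
    active = h > THRESHOLD                                         # truncation
    W[active] += eta * h[active][:, None] * (x - W[active])        # Eq. (12)
    last_update[active] = t                                        # aging mechanism
    return W, last_update, c

rng = np.random.default_rng(0)
W = rng.uniform(size=(800, 3))
xs, ys = np.meshgrid(np.arange(40), np.arange(20))
R = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
stamps = np.zeros(800, dtype=int)
W, stamps, c = ubisom_update(W, R, stamps, rng.uniform(size=3), t=1, eta=0.1, sigma=0.6)
```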
Any monotonically decreasing function can be used, although in this research the following were used: $$\begin{aligned} \sigma (t)=\sigma _{i}\left( \frac{\sigma _{f}}{\sigma _{i}}\right) ^{t/t_{f}},\quad \eta (t)=\eta _{i}\left( \frac{\eta _{f}}{\eta _{i}}\right) ^{t/t_{f}}\qquad \forall t\in \{t_{i},t_{i+1},\ldots ,t_{f}\} \end{aligned}$$ At the end of the \(t_{f}\) iteration, the first value of the drift function is obtained, i.e., \(d(t_{f})\), and the UbiSOM algorithm transitions to the learning state. Learning state The learning state begins at \(t_{f}+1\) and is the main state of the UbiSOM algorithm, during which learning parameters are estimated in a time-independent manner. Here learning parameters are estimated solely based on the drift function \(d(t)\), decreasing or increasing relative to the first computed value \(d(t_{f})\) and the final values (\(\eta _{f},\sigma _{f}\)) of the ordering state. Given that in this state the map is expected to start converging, the values of \(d(t)\) should also decrease. Hence, the value \(d(t_{f})\) is used as a reference value establishing a threshold above which the map is considered to be irrecoverably diverging from changes in the underlying distribution, e.g., in some abrupt changes the drift function can increase rapidly to very high values. Consequently, it also limits the maximum values that learning parameters can attain during this state. Learning parameters \(\eta (t)\) and \(\sigma (t)\) are estimated for an observation presented at time t by Eq. (14), where \(d(t)\) is defined as in Eq. (9). One can easily derive that learning parameters are estimated proportionally to \(d(t)\). Also, the final values \(\eta _{f}\) and \(\sigma _{f}\) of the ordering state establish an upper bound for the learning parameters in this state. $$\begin{aligned} \eta (t)={\left\{ \begin{array}{ll} \frac{\eta _{f}}{d(t_{f})}\, d(t) &\quad d(t)<d(t_{f})\\ \eta _{f} &\quad otherwise \end{array}\right. }\quad \sigma (t)={\left\{ \begin{array}{ll} \frac{\sigma _{f}}{d(t_{f})}\, d(t) &\quad d(t)<d(t_{f})\\ \sigma _{f} &\quad otherwise. \end{array}\right. } \end{aligned}$$ The outcome of these equations is that if the distribution is stationary the learning parameters accompany the decrease of the drift function values, allowing the map to converge to a stable state. On the contrary, if changes occur, the drift function values rise, consequently increasing the learning parameters and the plasticity of the map, up to a point where \(d(t)\) should decrease again. The increased plasticity should allow the map to adjust to the distribution change. However, there may be cases of abrupt changes from which the map cannot recover, i.e., the map does not resume convergence with decreasing \(d(t)\) values. Therefore, if we detect that learning parameters are at their peak values during at least \(T\) iterations, i.e., \(\sum 1_{\{d(t)\ge d(t_{f})\}}\ge T\), then this situation is confirmed and the UbiSOM transitions back to the ordering state. Time and space complexity The UbiSOM algorithm (and model) does not increase the time complexity of the classical SOM algorithm, since all the potentially penalizing additional operations, namely the computations of the assessment metrics, can be obtained in O(1). Regarding space complexity, it increases the space needed for: (1) storing an additional time stamp for each neuron \(k\); (2) storing two queues for the assessment metrics \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\), each of length \(T\).
Therefore, after the initial creation of data structures (map and queues) in O(\(K\)) time and \(O(Kd+2K+2T)\) space, every observation \({\mathbf{x}}_{t}\) is processed in constant O(2Kd) time and constant space. No observations are kept in memory. Hence, the UbiSOM algorithm is scalable with respect to the number of observations N, since the cost per observation is kept constant. However, the increase of the number of neurons \(K\), i.e., the size of the lattice, and of the dimensionality d of the data stream will increase this cost linearly. A series of experiments was conducted using artificial data streams to assess the UbiSOM parameterization and performance over stationary and non-stationary data, while comparing it to current proposals, namely the classical Online SOM, PLSOM and DSOM. With artificial data we can establish the ground truth of the expected outcome and illustrate some key points. Afterwards we apply the UbiSOM to a real-world electric power consumption problem, where we further illustrate the potential of the UbiSOM when dealing with sensor data in a streaming environment. Table 1 summarizes the artificial data streams used in the presented results. These are two- and three-dimensional for the purpose of easy visualization of the obtained maps. The Gauss data stream, as the name suggests, describes a Gaussian cluster of points and is used to check if the algorithms can map the input space density properly; Chain describes two inter-locked rings—this represents a cluster structure that partitional clustering algorithms, e.g., \(k\)-means, fail to cluster properly; Hepta describes a distribution of 7 evenly spaced Gaussian clusters, where at time \(t\) = \(50\,000\) one cluster disappears, previously depicted in Fig. 2; and Clouds contains an evolving cluster structure with separation and merge of clusters (between \(t\) = \(50\,000\) and \(t\) = \(150\,000\)) and aims at evaluating how the different tested algorithms react to continuous changes in the distribution. All data streams were normalized such that \({\mathbf{x}}_{t}\in [0,1]^{d}\). Table 1 Summary of artificial data streams used in presented experiments The parameterization of any SOM algorithm is mainly performed empirically, since only rules of thumb exist towards finding good parameters [8]. In the next section we present an initial parameter sensitivity analysis of the new parameters introduced in the UbiSOM, e.g., \(T\) and \(\beta\), while empirically setting the remaining parameters, shared to some extent with the classical SOM algorithm. Concerning the lattice size, it should be rectangular in order to minimize projection distortions; hence we use a \(20\times 40\) lattice for all algorithms, which also allows for a good quantization of the input space. In the ordering state of the UbiSOM algorithm we have empirically set \(\eta _{i}=0.1\), \(\eta _{f}=0.08\), \(\sigma _{i}=0.6\) and \(\sigma _{f}=0.2\), based on the recommendation that learning parameters should be initially relatively high to allow the unfolding of the map over the underlying distribution. These values have shown optimal results in the presented experiments and many others not included in this paper. A parameter sensitivity analysis including these parameters is reserved for future work. Regarding the other algorithms, after several tries, the best parameters for the chosen map size and for each compared algorithm were selected.
The classical Online SOM uses \(\eta _{i}=0.1\) and \(\sigma _{i}=2\sqrt{K}\), decreasing monotonically to \(\eta _{f}=0.01\) and \(\sigma _{f}=1\), respectively; PLSOM uses a single parameter \(\gamma\) called neighborhood range, and the values yielding the best results for the used lattice size were \(\gamma =(65,37,35,130)\) for the Gauss, Chain, Hepta and Clouds data streams, respectively. DSOM was parameterized as in [15] with \(elasticity=3\) and \(\varepsilon =0.1\), but since it fails to unfold from an initial random state, it was left out of further experiments. The authors admit that their algorithm has this drawback. Maps across all experiments use the same random initialization of prototypes at the center of the input space, so no results are affected by different initial states. Parameter sensitivity analysis We present a parameter sensitivity analysis for the parameters \(T\) and \(\beta\) introduced in the UbiSOM. The first establishes the length of the sliding window used to compute the assessment metrics, and consequently whether it uses a short, medium or long-term trend to estimate learning parameters. While a shorter window is more sensitive to the variance of \(E{^{\prime }}(t)\) and to noise, a longer window increases the reaction time of the algorithm to true changes in the underlying distribution. It also implicitly dictates the duration of the ordering state, for which Kohonen recommends, as another rule of thumb, that it should not cover fewer than \(1\,000\) examples [8]. The latter weighs the importance of both assessment metrics in the drift function \(d(t)\); as discussed earlier, we should use higher values so as to favor the \(\overline{qe}(t)\) values while estimating learning parameters. Hence, we chose \(T=\{500,1000,1500,2000,2500,3000\}\) and \(\beta =\{0.5,0.6,0.7,0.8,0.9,1\}\) as the sets of values from which to perform the parameter sensitivity analysis. To shed some light on how these parameters could affect learning, we opted to measure the mean quantization error (Mean \(E{^{\prime }}(t)\)), so as to obtain a single value that could characterize the quantization procedure across the entire stream. Similarly, we used the mean neuron utility (Mean \(\lambda (t)\)) to measure in a single value the proportion of utilized neurons during learning from stationary and non-stationary data streams. Thus, we were interested in finding ideal intervals for the tested parameters that could simultaneously minimize the mean quantization error, while maximizing the mean neuron utility. We also computed \(\overline{qe}(t)\) for the different values of \(T\) to obtain a grasp on the delay in convergence imposed by this parameter. From the minimum \(\overline{qe}(t)\) obtained throughout the stream, we computed the iteration where the \(\overline{qe}(t)\) value falls within 5 % of the minimum (Convergence t), as a temporal indicator of convergence. The results for all combinations of the chosen parameter values for the Chain and Clouds data streams are presented in Tables 2 and 3, respectively. It comes as no surprise that for increasing \(T\), the convergence happens later in the stream. More importantly, results empirically suggest that \(T=\{1500,2000\}\) and \(\beta =\{0.6,0.7,0.8\}\) exhibit the best compromise between the minimization of the error and the maximization of neuron utility.
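The "Convergence t" indicator reported in Tables 2 and 3 is simple enough to sketch directly: it is the first iteration at which the stored qe_bar(t) trace falls within 5 % of its minimum over the whole stream. The snippet below is our reading of that definition (qe_bar(t) <= 1.05 * min qe_bar), applied to a noise-free synthetic trace rather than to the paper's data.

```python
import numpy as np

def convergence_iteration(qe_trace, tol=0.05):
    """First iteration at which the average quantization error trace
    falls within `tol` (here 5 %) of its minimum over the stream."""
    qe = np.asarray(qe_trace, dtype=float)
    return int(np.argmax(qe <= qe.min() * (1.0 + tol)))   # first index meeting the bound

# synthetic trace: an error that decays towards ~0.02
t = np.arange(100_000)
trace = 0.02 + 0.3 * np.exp(-t / 15_000)
print(convergence_iteration(trace))
```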
Table 2 Parameter sensitivity analysis with the chain data stream Table 3 Parameter sensitivity analysis with the clouds data stream After experimentally trying these values, we opted for \(T=2000\) and \(\beta =0.7\) since it consistently gave good results across a variety of data streams, some not included in this paper. Hence, all the remaining experiments use these parameter values. Density mapping We illustrate the modeling and quantization process of all tested algorithms in Fig. 6, for the stationary Gauss data stream. It can be seen that only the Online SOM and the UbiSOM are able to model the input space density correctly, assigning more neurons to the denser area of points. The inability of PLSOM to map the density limits its applicability to exploratory analysis with some visualization techniques illustrated in this work. It can also be seen that DSOM fails to unfold the map to cover this distribution. Final maps obtained for the stationary Gaussian data stream. a UbiSOM; b online SOM; c PLSOM; d DSOM Convergence with stationary and non-stationary data The following results compare the UbiSOM algorithm against the Online SOM and the PLSOM algorithms across all artificial data streams. Table 4 summarizes the obtained values for the previously used measures, namely the mean \(E{^{\prime }}(t)\) and mean \(\lambda (t)\) values. While PLSOM exhibits a lower mean \(E{^{\prime }}(t)\) on all data streams, except Gauss, it does so at the expense of not mapping the input space density, as previously demonstrated. The density mapping of the vector projection and the quantization process can be seen as conflicting goals. On the other hand, the UbiSOM algorithm performs very similarly in this measure, but consistently exhibits higher mean \(\lambda (t)\) values, most importantly in the non-stationary data streams, which may indicate that the other algorithms may have not performed so well in the presence of changes in the underlying distribution. Table 4 Comparison of the UbiSOM, Online SOM and PLSOM algorithms across all data streams The previous measures can establish a baseline comparison between algorithms, but are not conclusive regarding the quality of the obtained maps. Consequently, we computed the average quantization error \(\overline{qe}\) with \(T=2000\) for all algorithms and data streams. Please note that although this is the value that the UbiSOM also uses, we consider this fair for all algorithms, since it is simply evaluating the trend of the quantization error for each algorithm. The results are depicted in Fig. 7 and it can be seen that the UbiSOM algorithm generally converges faster to stationary phases of the distributions, while the PLSOM converges less steadily and slower in half of the streams, and the convergence of the Online SOM is dictated by the monotonic decrease of the learning parameters. In the Hepta data stream only the UbiSOM algorithm is able to detect the disappearance of the cluster, which we can derive by the increase of the \(\overline{qe}\) values. Please note that this was a consequence of the contribution of the average neuron activity into the drift function. Similarly, in the Clouds data stream the UbiSOM algorithm quickly reacts to the start of the gradual change in the position of the clusters through the \(\overline{qe}\) metric, while the average neuron utility is responsible for the observed "spikes" when it detects currently unused regions of the map. 
Given that the UbiSOM learning parameters are mainly estimated through \(\overline{qe}\), we can get a very close idea of the evolution of the learning parameters across these different datasets. Average quantization error \(\overline{qe}(t)\) of algorithms across all data streams. Results for the UbiSOM, online SOM and PLSOM in left, center and right columns, respectively. The rows regard the Gauss, Chain, Hepta and Clouds data streams, respectively In order to support the above inferences, Fig. 8 illustrates the UbiSOM over the Clouds data stream, confirming it models the changing underlying distribution correctly over time, maintaining density mapping and topological ordering of prototypes with few signs of unused portions of the map. On the other hand, the final Online SOM and PLSOM maps for this data stream are presented in Fig. 9. Neither is able to correctly model the distribution in its final state. Whereas the Online SOM is progressively less capable to model changes due to decreasing learning parameters, the PLSOM suffers from the fact it also uses an estimation of the input space diameter, which also is changing, to compute learning parameters. Evolution of the UbiSOM over the Clouds data stream. Maps at a \(t\) = 10,000; b \(t\) = 80,000; c \(t\) = 130,000, and; d \(t\) = 190,000 Final maps for the Online SOM and PLSOM over the Clouds data stream a Online SOM; b PLSOM As an additional example of the clustering of the UbiSOM through exploratory analysis, in Fig. 10 we illustrate the final map obtained for the Chain data stream and corresponding U-matrix [3], a color-scaled visualization that can be used here to correctly identify the two clusters. Warmer colors translate to higher distances between neurons, consequence of the vector projection, establishing borders between clusters. Obtained UbiSOM for the Chain data stream and corresponding U-matrix. Two clear clusters can be derived from this visualization Exploratory analysis in real-time A real world demonstration is achieved by applying the UbiSOM to the real-world Household electric power consumption data stream from the UCI repository [16], comprising \(2\,049\,280\) observations of seven different measurements (i.e., \(d=7\)), collected to the minute, over a span of 4 years. Only features regarding sensor values were used, namely global active power (kW), voltage (V), global intensity (A), global reactive power (kW) and sub-meterings for the kitchen, laundry room and heating (W/h). The Household data stream contains several drifts in the underlying distribution, given the nature of electric power consumption, and we believe these are the most challenging environments where UbiSOM can operate. Here, we briefly present another visualization technique called component planes [3], that further motivates the application of UbiSOM to a non-stationary data stream. Component planes can be regarded as a "sliced" version of the SOM, showing the distribution of different features values in the map, through a color scale. This visualization can be obtained at any point in time, providing a snapshot of the model for the present and recent past. Ultimately, one can take several snapshots and inspect the evolution of the underlying stream. Figure 11 depicts the last 5000 presented examples (\({\sim }3.5\) days) of the Household data stream, i.e., from \(t\) = \(495\,000\) to \(t\) = \(500\,000\), while Fig. 12 shows the component planes obtained at t = \(500\,000\) using the UbiSOM. 
For illustration purposes we discarded the global reactive power feature. These visualizations indicate correlated features, namely Global active power and Global intensity are strongly correlated (identical component planes), while exhibiting some degree of inverse correlation to Voltage. Since the UbiSOM is able to map the input space density, the component planes of the heating sensors indicate their relative overall usage right before that period of time, e.g., Heating has a high consumption approximately 2/3 of the time. Since this point in time concerns the month of December 2008, this seems self-explanatory. Household data stream from \(t\) = \(495\,000\) to t = \(500\,000\) UbiSOM obtained component planes at t = \(500\,000\) with the Household data stream Component planes also show that Global active power has its highest values when Kitchen (Sub_metering_1) and Heating (Sub_metering_3) are active at the same time; the overlap of higher values for Laundry room (Sub_metering_2) Kitchen (Sub_metering_1) is low, indicating that they are not used very often at the same time. All these empirical inductions from the exploratory analysis of the component planes seem correct looking at the plotted data in Fig. 11, and highlight the visualization strengths of UbiSOM with streaming data. This paper presented the improved version of the ubiquitous self-organizing map (UbiSOM), a variant tailored for real-time exploratory analysis over data streams. Based on literature review and the conducted experiments, it is the first SOM algorithm capable of learning stationary and non-stationary distributions, while maintaining the original SOM properties. It introduces a novel average neuron utility assessment metric in addition to the previously used average quantization error, both used in a drift function that measures the performance of the map over non-stationary data and allows for learning parameters to be estimated accordingly. Experiments show this is a reliable method to achieve the proposed goal and the assessment metrics proved fairly robust. The UbiSOM outperforms current SOM algorithms in stationary and non-stationary data streams. The real-time exploratory analysis capabilities of the UbiSOM are, in our opinion, extremely relevant to a large set of domains. Besides cluster analysis, the component-plane based exploratory analysis of the Household data stream exemplifies the relevancy of the proposed algorithm. This points to a particular useful usage of UbiSOM in many practical applications, e.g., with high social value, including health monitoring, powering a greener economy in smart cities or the financial domain. Coincidently, ongoing work is targeting the financial domain to model the relationships between a wide variety of asset prices for portfolio selection and to signal changes in the model over time as an alert mechanism. In parallel, we continue conducting research with distributed air quality sensor data in Portugal. Kohonen T. Self-organized formation of topologically correct feature maps. Biol Cybern. 1982;43(1):59–69. Pöllä M, Honkela T, Kohonen T. Bibliography of self-organizing map (som) papers: 2002–2005 addendum. Neural Computing Surveys. 2009. Ultsch A, Herrmann L. The architecture of emergent self-organizing maps to reduce projection errors. In: Verleysen M, editor. Proceedings of the European Symposium on Artificial Neural Networks (ESANN 2005); 2005. pp. 1–6. Ultsch A. Self organizing neural networks perform different from statistical k-means clustering. 
In: Proceedings of GfKl '95. 1995. Silva B, Marques NC. Ubiquitous self-organizing map: learning concept-drifting data streams. In: New contributions in information systems and technologies. Advances in Intelligent Systems and Computing. Springer; 2015. p. 713–22. Aggarwal CC. Data streams: models and algorithms, vol. 31. Springer; 2007. Gama J, Rodrigues PP, Spinosa EJ, de Carvalho ACPLF. Knowledge discovery from data streams. Boca Raton: Chapman and Hall/CRC; 2010. Kohonen T. Self-organizing maps, vol. 30. New York: Springer; 2001. Fritzke B. A self-organizing network that can follow non-stationary distributions. In: Artificial Neural Networks—ICANN 97. Springer; 1997. p. 613–618. Deng D, Kasabov N. ESOM: an algorithm to evolve self-organizing maps from on-line data streams. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, vol. 6. IEEE Computer Society; 2000. p. 6003. Deng D, Kasabov N. On-line pattern analysis by evolving self-organizing maps. Neurocomputing. 2003;51:87–103. Furao S, Hasegawa O. An incremental network for on-line unsupervised classification and topology learning. Neural Netw. 2006;19(1):90–106. Furao S, Ogura T, Hasegawa O. An enhanced self-organizing incremental neural network for online unsupervised learning. Neural Netw. 2007;20(8):893–903. Berglund E. Improved PLSOM algorithm. Appl Intell. 2010;32(1):122–30. Rougier N, Boniface Y. Dynamic self-organising map. Neurocomputing. 2011;74(11):1840–7. Bache K, Lichman M. UCI machine learning repository. 2013. http://archive.ics.uci.edu/html. BS is the principal researcher for the work proposed in this article. His contributions include the underlying idea, background investigation, initial drafting of the article, and results implementation. NCM supervised the research and played a pivotal role in writing the article. Both authors read and approved the final manuscript. The research of BS was partially funded by Fundação para a Ciência e Tecnologia with the Ph.D. scholarship SFRH/BD/49723/2009. The authors would also like to thank Project VeedMind, funded by QREN, SI IDT 38662. Author affiliations: DSI/EST Setúbal, Instituto Politécnico de Setúbal, Campus do IPS, Estefanilha, 2914-761, Setúbal, Portugal; NOVA Laboratory for Computer Science and Informatics, DI/FCT, Universidade Nova de Lisboa, Monte da Caparica, Portugal. Correspondence to Bruno Silva. Keywords: Self-organizing maps; Non-stationary data; Exploratory analysis; Sensor data
\begin{document} \title{Optimizing microwave photodetection: Input-Output theory} \author{M. Sch\"ondorf} \author{L. C. G. Govia} \altaffiliation[Current Address: ]{Institute for Molecular Engineering, University of Chicago, Chicago, Illinois, USA} \affiliation{Theoretical Physics, Saarland University, 66123 Saarbr{\"u}cken, Germany} \author{M. G. Vavilov} \author{R. McDermott} \affiliation{Department of Physics, University of Wisconsin-Madison, Madison, WI 53706, USA} \author{F. K. Wilhelm} \affiliation{Theoretical Physics, Saarland University, 66123 Saarbr{\"u}cken, Germany} \begin{abstract} High fidelity microwave photon counting is an important tool for various areas from background radiation analysis in astronomy to the implementation of circuit QED architectures for the realization of a scalable quantum information processor. In this work we describe a microwave photon counter coupled to a semi-infinite transmission line. We employ input-output theory to examine a continuously driven transmission line as well as traveling photon wave packets. Using analytic and numerical methods, we calculate the conditions on the system parameters necessary to optimize measurement and achieve high detection efficiency. With this we can derive a general matching condition depending on the different system rates, under which the measurement process is optimal. \end{abstract} \maketitle \section{Introduction} \label{sec:1} Circuit quantum electrodynamics (cQED) has emerged as a powerful paradigm for the realization of quantum computational circuits in a scalable architecture \cite{clarke2008superconducting,blais2007quantum,chow2012universal,kelly2015state,brecht2015multilayer} as well as a demonstration of quantum radiation-matter interaction in the strong and ultra strong coupling regimes \cite{you2011atomic,hofheinz2009synthesizing,Niemczyk2010ultrastrong,baust2014ultrastrong}. Here, the lowest energy levels of a superconducting Josephson circuit play the role of an artificial atom, while thin film cavities and transmission lines are used to realize electromagnetic field modes. Strong coupling between the cavity fields and the artificial atom has been used to create strongly non classical states of the electromagnetic field \cite{hofheinz2009synthesizing,hofheinz2008generation,vlastakis2013deterministically,kirchmair2013observation,wang2016schrodinger,holland2015single}; in addition, coupling between these modes and the Josephson circuit can be used for high fidelity control \cite{PhysRevA.69.062320,krastanov2015universal} and measurement \cite{blais2004cavity,Sun2014photontracking,abdo2013directional,abdo2014josephson,riste2013deterministic,kindel2015generation,ribeill2011superconducting,hover2012superconducting,kinion2010microstrip,kinion2011superconducting}. In conventional quantum optics at optical frequencies, detection of the electromagnetic mode is performed by a photon counter. The counter is typically modeled as an ensemble of two-level states that are weakly coupled to the light field \cite{Tannoudji}. Photon absorption is triggering a large, easily measured classical signal, and detector performance is expressed in terms of quantum efficiency and spurious dark count rate \cite{hadfield2009single}. In the microwave frequency range, conventional wisdom holds that there exists no material that can be photoionized by the lower frequency radiation. 
On the other hand, a variety of Josephson circuits are capable of detecting microwave photons down to the limit of a single photon with high efficiency \cite{narla2016robust,govia2012theory,chen2011microwave,poudel2012quantum,romero2009microwave,oelsner2016detection,peropadre2011approaching,fan2014nonabsorbing,inomata2016single,wong2015quantum}. Microwave photons can also be detected by lateral quantum dots \cite{wong2015quantum,kyriienko2016continuous}. In contrast to optical-frequency counters, Josephson-based microwave photon counters are realized as {\em single} effective two-level systems that couple strongly to the incident microwave field \cite{govia2012theory}. For this reason, they differ fundamentally from optical-frequency counters. It is the purpose of this paper to explore the conditions for high-efficiency detection of propagating photons by these single, strongly coupled Josephson circuits. For the sake of concreteness, we consider the Josephson photomultiplier (JPM), a current-biased junction capable of efficient detection of microwaves that are near resonant with the transition between the two lowest states in the metastable minima of the circuit potential. Previously, the JPM has been applied to the investigation of temporal correlations of incident coherent and thermal microwave fields \cite{chen2011microwave}, and the JPM is currently under investigation for high-fidelity measurement of single qubits \cite{govia2014high} and of multiqubit parity operators \cite{govia2015scalable}. Other approaches to single microwave photodetection include driven $\Lambda$ systems \cite{koshino2013implementation}. In this approach, the dressed states of a qubit-resonator system constitute an impedance-matched system, which absorbs an input photon with near-unity efficiency \cite{koshino2015theory,koshino2016dressed}. Here, we demonstrate that efficient microwave photon detection can be understood from a simple intuitive picture of rate matching, which has as its classical analog the usual impedance matching condition that provides for optimal power transfer in microwave circuits \cite{Pozar}. We present a general description of a transmission line directly coupled to a JPM, and explore the conditions that must be met to maximize detector quantum efficiency. Our results agree with those of \cite{romero2009photodetection}, where only a continuous drive input state was considered. Furthermore, our results extend beyond those of \cite{romero2009photodetection}, as we include additional incoherent channels and study pulsed input states. A comparable condition was also found numerically in \cite{kyriienko2016continuous} for a different setting, namely a qutrit coupled to two transmission lines, with a photon pulse incident from one of them. There, the reflection coefficient is minimized when the coupling rate between the input transmission line and the qutrit equals the decay rate to the target state. These two rates translate into $\gamma_{\rm TL}$ and $\gamma_1$ in our description. To describe our system, we use the input-output formalism \cite{gardiner1985input,clerk2010introduction}, a tool from open quantum systems theory that leads to generalized Heisenberg equations. The advantage of this approach is that it can be taken very far before specifying the form of the photon pulse in the transmission line, making it versatile and its results broadly applicable.
As a result, we can examine arbitrary states in the transmission line, including both continuous wave drive and wave packets with finite photon number. While equivalent to a density matrix approach, it is more convenient for the problem at hand. The input-output formalism leads to a system of equations, from which we determine conditions on the system parameters that allow us to optimize detection efficiency. A sufficient set of these parameters can be designed or even controlled in experiment, so that this paper provides a guide toward the practical implementation of the measurement of traveling photons using a JPM at the optimal measurement efficiency that is experimentally possible. This paper is organized as follows. In Sec. \ref{sec:2}, we present the system of interest and derive the corresponding equations of motion using input-output formalism. In Sec. \ref{sec:3}, we use a mean field approach that captures most of the quantum mechanical character of the system. We find the optimization conditions for continuous drive inputs, and for various pulsed waveforms. In Sec. \ref{sec:4}, we solve the equations by substituting operators with their corresponding expectation values. This simplification leads to rate equations, the solution of which yields a general matching condition for measurement optimization, which agrees with the result of Sec. \ref{sec:3}. In Sec. \ref{sec:5}, we present our conclusions. \section{System and Equations of Motion} \label{sec:2} The system of interest is a microwave transmission line directly coupled to a JPM. The system Hamiltonian is written as \begin{align} \hat H = \hat H_{\rm JPM} + \hat H_{\rm TL} + \hat H_{\rm INT}, \end{align} where $\hat H_{\rm JPM}$ denotes the Hamiltonian of the JPM, $\hat H_{\rm TL}$ is the bare transmission line Hamiltonian, and $\hat H_{\rm INT}$ describes the interaction between the transmission line and the JPM. The JPM is realized through a current-biased Josephson junction and is described by a tilted washboard potential \cite{martinis1985energy}, from which one can isolate two quasi-bound energy levels $\ket{0}$ and $\ket{1}$, with associated Hamiltonian \begin{align} \hat H_{\rm JPM} = -\hbar \omega_0 \frac{\hat \sigma_z}{2}. \end{align} Here, $\omega_0$ is the transition frequency and $\hat \sigma_z = \left[\hat \sigma^{-},\hat \sigma^{+}\right]$ is the usual Pauli-Z operator with \begin{align} \hat \sigma^{-} = \ket{0}\bra{1} \hspace{0.5cm} \hat \sigma^{+} = \ket{1}\bra{0}. \end{align} Note that the local minima in the JPM potential are physically equivalent and only transitions between them can be detected \cite{likharev1985theory} (see Fig. \ref{fig:JPM_TL}). Both states can tunnel to the continuum with rates $\gamma_0$ and $\gamma_1$, respectively. For our description, we represent the continuum by a fictitious measurement state $\ket{m}$. Incoherent tunneling to the $\ket{m}$ state corresponds to generation of a measurable voltage pulse. Absorption of a resonant photon induces a transition from $\ket{0}$ to $\ket{1}$, which tunnels rapidly to the continuum since $\gamma_1 \gg \gamma_0$; this system can thus be used to count incoming photons. Quantization of the transmission line \cite{blais2004cavity} leads to the usual multimode harmonic oscillator Hamiltonian \begin{align} \hat H_{\rm TL} = \hbar \int_{0}^{\infty} |f(\omega)|^2 \omega \hat a^{\dag}(\omega)\hat a(\omega) {\rm d}\omega.
\end{align} Here, $\omega$ is the frequency of the transmission line mode and $\hat a^{\dag}(\omega)$, $\hat a(\omega)$ are the bosonic creation and annihilation operators for a photon at frequency $\omega$, respectively. $f(\omega)$ is the envelope of the incoming radiation in frequency space; it has units of $1/\sqrt{\omega}$ and is assumed to be real in our case (for more detail on how to model incoming radiation fields in the Heisenberg picture see \cite{baragiola2012n} and \cite{divincenzo2013multi}). \begin{figure} \caption{ System schematic. The JPM is directly coupled to a transmission line, which excites the JPM with an incoming photon flux. The potential of the JPM is a tilted washboard with two quasi-bound states in the local minima.} \label{fig:JPM_TL} \end{figure} The interaction between the JPM and the transmission line arises from the additional bias on the JPM caused by the transmission line current (see Fig. \ref{fig:JPM_TL}). This leads to a dipole interaction between the JPM states and the transmission line described by the Hamiltonian \begin{align} \hat H_{\rm INT} = \Delta \hat I \frac{\Phi_0}{2\pi} \hat \varphi_J, \label{interaction} \end{align} where $\Phi_0 \equiv h/2e$ is the magnetic flux quantum and $\Delta \hat I$ and $\hat \varphi_J$ describe the additional quantized current coming from the transmission line and the quantized phase of the JPM, respectively. To derive expressions for $\Delta \hat I$ and $\hat \varphi_J$ we use standard circuit quantization, which yields \cite{geller2007quantum,johansson2010dynamical} \begin{align} \Delta \hat I &= \sqrt{\frac{\hbar \omega_s}{4\pi Z_0}} \int_{0}^{\infty} {\rm d}\omega f(\omega)\left(\hat a^{\dagger}(\omega)+\hat a(\omega)\right)\label{current}\\ \hat \varphi_J &= \frac{i}{\sqrt{2}}\left(\frac{2E_C}{E_J}\right)^{\frac{1}{4}} \left(\hat \sigma^{+}-\hat \sigma^{-}\right). \label{phi} \end{align} Here $Z_0$ is the transmission line impedance at the characteristic frequency $\omega_s$ of the incoming signal; $E_C = (2e)^2/2C_J$ is the Cooper pair charging energy, with the junction self-capacitance $C_J$; and $E_J = \hbar I_c/2e$ is the Josephson coupling energy, where $I_c$ is the critical current of the junction. Inserting expressions \eqref{current} and \eqref{phi} into \eqref{interaction}, we obtain the quantized interaction Hamiltonian \begin{align} \hat H_{\rm INT} = i \hbar \sqrt{\frac{\gamma_{\rm TL}}{2\pi}} \int_{-\infty}^{\infty} {\rm d}\omega f(\omega) \left[\hat a^{\dag}(\omega) \hat \sigma^{-} - \hat \sigma^{+} \hat a(\omega)\right], \label{interaction_hamilton} \end{align} where $\gamma_{\rm TL} = \omega_s Z_J/4 Z_0$ describes the coupling rate between the transmission line and the JPM. The expression for $\gamma_{\rm TL}$ includes the junction impedance $Z_J = 1/\omega_s C_J$. For this derivation (see Appendix \ref{app:1}) we applied the rotating-wave approximation (RWA) \cite{WallsMilburn}, which leads to a continuous Jaynes-Cummings interaction \cite{jaynes1963comparison} and allows us to extend the lower limit of integration to $-\infty$ instead of $0$. We further assumed that the coupling is constant over all modes, which is the first Markov approximation \cite{QuantumNoise}. Since the interaction is described by \eqref{interaction_hamilton}, we can use standard input-output formalism \cite{gardiner1985input} to derive the quantum mechanical Langevin equation for an arbitrary JPM operator $\hat S$ (see after eq.
\eqref{Langevin_Lindblad} for further remarks) \begin{align} \begin{split} \dot{\hat S}(t) &= \frac{i}{\hbar}\left[\hat H_{\rm JPM}, \hat S(t)\right]\\ &- \left[\hat S(t), \hat \sigma^{+}(t)\right]\left\{ \frac{\gamma_{\rm TL}}{2}\hat \sigma^{-}(t) - \sqrt{\gamma_{\rm TL}} \hat a_{\rm in}(t)\right\} \\ &+ \left\{\frac{\gamma_{\rm TL}}{2}\hat \sigma^{+}(t) - \sqrt{\gamma_{\rm TL}} \hat a_{\rm in}^{\dag}(t)\right\}\left[\hat S(t), \hat \sigma^{-}(t) \right], \end{split} \label{Langevin} \end{align} with input field operator defined as \begin{align} \hat a_{\rm in}(t) \equiv -\frac{i}{\sqrt{2\pi}} \int_{-\infty}^{\infty} {\rm d}\omega \exp \left[-i \omega \left(t-t_0\right)\right] f(\omega) \hat a_{t_0}(\omega), \label{a_in_def} \end{align} where $\hat a_{t_0}(\omega)$ is the field operator at time $t=t_0$ and $f(\omega)$ is again the envelope of the incoming radiation. Without loss of generality, we set the starting point of the interaction to zero, $t_0 = 0$. Our system satisfies the standard input-output relation \cite{WallsMilburn} \begin{align} \hat a_{\rm out}(t) + \hat a_{\rm in}(t) = \sqrt{\gamma_{\rm TL}}\hat \sigma^{-}(t), \label{inout_Relation} \end{align} where the output field operator is defined as \begin{align} \hat a_{\rm out}(t) = \frac{i}{\sqrt{2\pi}} \int_{-\infty}^{\infty} {\rm d}\omega \exp\left[-i \omega\left(t-t_1\right)\right] f(\omega) \hat a_{t_1}(\omega). \end{align} Here, $\hat a_{t_1}(\omega)$ is similar to $\hat a_{t_0}(\omega)$ in that it is defined as the field operator at a time $t_1>t_0$ after the interaction between the transmission line and the JPM is turned on. Here $f(\omega)$ describes the envelope of the outgoing radiation. Up to now we have not considered incoherent decay channels of the JPM. We include them using the standard Lindblad formalism. The Lindblad operator that describes tunneling from the excited state to the continuum (measurement process) is \begin{align} \hat L_1 = \sqrt{\gamma_1} \ket{m}\bra{1}, \end{align} with tunneling rate $\gamma_1$, where the state $\ket{m}$ represents all states outside the potential well of the quasi-bound states. Another incoherent channel is given by dark counts, \begin{align} \hat L_0 = \sqrt{\gamma_0} \ket{m} \bra{0}, \end{align} i.e., tunneling with rate $\gamma_0$ from the ground state of the JPM into the measurement state. We also take into account the possibility of relaxation from $\ket{1}$ to $\ket{0}$ through energy loss to the environment. This process is described by the Lindblad operator \begin{align} \hat L_{\rm rel} = \sqrt{\gamma_{\rm rel}} \ket{0}\bra{1}, \end{align} where $\gamma_{\rm rel}$ is the relaxation rate. This rate only includes emission into the intrinsic environment of the JPM, since emission back to the transmission line is already built into the input-output equations. Finally, we assume that the JPM can be reset after a measurement, such that multiple measurements are possible. The reset is described by the operator \begin{align} \hat L_{\rm res} = \sqrt{\gamma_{\rm res}} \ket{0}\bra{m}, \end{align} where $\gamma_{\rm res}$ is the reset rate. The reset process brings the JPM from the measurement state $\ket{m}$ back to the ground state $\ket{0}$.
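For concreteness, the short Python sketch below collects the JPM lowering operator and the four Lindblad operators introduced above as matrices in the truncated basis $\{\ket{0},\ket{1},\ket{m}\}$. The numerical rate values are illustrative assumptions only and can be replaced by experimental parameters.
\begin{verbatim}
import numpy as np

# Basis ordering: |0> (ground), |1> (excited), |m> (measurement state)
ket0, ket1, ketm = (np.eye(3)[:, [k]] for k in range(3))

def ketbra(ket_a, ket_b):
    """Return the matrix |ket_a><ket_b|."""
    return ket_a @ ket_b.conj().T

sigma_minus = ketbra(ket0, ket1)        # |0><1|
sigma_plus = sigma_minus.conj().T       # |1><0|

# Assumed rates in GHz (illustrative only)
gamma_1, gamma_0, gamma_rel, gamma_res = 1.0, 0.01, 3.3e-5, 0.0

L_1 = np.sqrt(gamma_1) * ketbra(ketm, ket1)      # measurement: |1> -> |m>
L_0 = np.sqrt(gamma_0) * ketbra(ketm, ket0)      # dark count:  |0> -> |m>
L_rel = np.sqrt(gamma_rel) * ketbra(ket0, ket1)  # relaxation:  |1> -> |0>
L_res = np.sqrt(gamma_res) * ketbra(ket0, ketm)  # reset:       |m> -> |0>
\end{verbatim}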
To include these Lindblad channels in the above Langevin equation, we use the adjoint master equation \cite{Breuer} \begin{align}\label{Lindblad} \begin{split} \dot{\hat S}(t) &= \frac{i}{\hbar} \left[\hat H_{\rm JPM}, \hat S(t)\right] \\ &\hspace{-0.5cm} + \sum_k\left(\hat L_k^{\dag}\hat S(t)\hat L_k - \frac{1}{2} \hat S(t) \hat L_k^{\dag}\hat L_k - \frac{1}{2} \hat L_k^{\dag}\hat L_k \hat S(t)\right), \end{split} \end{align} with $k \in \{0,1,{\rm rel},{\rm res}\}$ and $\hat S$ an arbitrary JPM operator. Combining \eqref{Langevin} and \eqref{Lindblad}, we obtain a Langevin-Lindblad master equation that describes the coherent and incoherent dynamics of an arbitrary system operator \begin{align}\label{Langevin_Lindblad} \begin{split} \dot{\hat S}(t) &= \frac{i}{\hbar}\left[\hat H_{\rm JPM}, \hat S(t)\right] \\ &\hspace{-0.4cm}- \left[\hat S(t), \hat \sigma^{+}(t)\right]\left\{ \frac{\gamma_{\rm TL}}{2}\hat \sigma^{-}(t) - \sqrt{\gamma_{\rm TL}} \hat a_{\rm in}(t)\right\}\\ &\hspace{-0.4cm}+ \left\{\frac{\gamma_{\rm TL}}{2}\hat \sigma^{+}(t) - \sqrt{\gamma_{\rm TL}} \hat a_{\rm in}^{\dag}(t)\right\}\left[\hat S(t), \hat \sigma^{-}(t) \right]\\ &\hspace{-0.4cm}+ \sum_k \left(\hat L_k^{\dag}\hat S(t)\hat L_k - \frac{1}{2}\left[\hat S(t) \hat L_k^{\dag}\hat L_k + \hat L_k^{\dag}\hat L_k \hat S(t) \right] \right). \end{split} \end{align} All of the above Lindblad operators describe loss channels of the JPM. Note that this is written as an equation for JPM operators, hence the transmission line operators act as noise sources, as in the classical Langevin equation. They are operator-valued to reflect the quantum nature of the noise (for more details see \cite{QuantumNoise}). In general, the transmission line can also evolve incoherently; however, the rates for these processes are slow compared to JPM processes \cite{goppl2008coplanar,megrant2012planar}, so they are ignored in our calculations. We are interested in the occupation probabilities of the different JPM states, defined by the projection operators \begin{align} \begin{split} \mathcal{\hat P}_0 \equiv \ket{0}\bra{0}\hspace{0.5cm} \mathcal{\hat P}_1 \equiv \ket{1}\bra{1}\hspace{0.5cm} \mathcal{\hat P}_m \equiv \ket{m}\bra{m}. \end{split} \end{align} To obtain a complete system of equations, we must also include the system raising and lowering operators $\hat \sigma^{-}$, $\hat \sigma^{+}$.
Putting these five operators into equation \eqref{Langevin_Lindblad} leads to a set of coupled ordinary differential equations \begin{subequations} \begin{align} \label{System1} \dot{\hat \sigma}^{-} &= -i\omega_0 \hat \sigma^{-}+\sqrt{\gamma_{\rm TL}}\hat \sigma_z \hat{a}_{\rm in} - \frac{\tilde \gamma}{2} \hat \sigma^{-} \\ \label{System2} \dot{\hat \sigma}^{+} &= i\omega_0 \hat \sigma^{+} +\sqrt{\gamma_{\rm TL}}\hat{a}_{\rm in}^{\dagger} \hat \sigma_z - \frac{\tilde \gamma}{2} \hat \sigma^{+} \\ \label{System3} \dot{\hat{\mathcal P}}_0 &= -\gamma_0 \mathcal{\hat P}_0 + (\gamma_{\rm TL}+\gamma_{\rm rel}) \mathcal{\hat P}_1 -\sqrt{\gamma_{\rm TL}} \mathcal{\hat W}+\gamma_{\rm res} \mathcal{\hat P}_m \\ \label{System4} \dot{\hat{\mathcal P}}_1 &= -(\gamma_{\rm TL}+\gamma_{\rm rel}+\gamma_1) \mathcal{\hat P}_1 +\sqrt{\gamma_{\rm TL}} \mathcal{\hat W} \\ \label{System5} \dot{\hat{\mathcal P}}_m &= \gamma_0 \mathcal{\hat P}_0+\gamma_1 \mathcal{\hat P}_1 -\gamma_{\rm res} \mathcal{\hat P}_m, \end{align} \end{subequations} where $\tilde \gamma$ is defined as $\tilde \gamma \equiv \gamma_{\rm TL}+\gamma_0+\gamma_1+\gamma_{\rm rel}$ and $\mathcal{\hat W} \equiv \hat{a}_{\rm in}^{\dagger}\hat\sigma^{-} + \hat \sigma^{+} \hat{a}_{\rm in}$. All operators are time-dependent, since we are in the Heisenberg picture. Here and in the following, however, we will only indicate this time dependence explicitly when it is necessary for clarity. It should be noted that up to this point we have made no assumptions about the input field $\hat a_{\rm in}$, such that the derived system of equations describes a completely general pulse/drive. This allows us to examine different incoming fields in the transmission line, including both continuous drive and various forms of pulses. \section{Mean Field Approach} \label{sec:3} In this section, we use a mean field approach (see \cite{kocabacs2012resonance}) to simplify equations \eqref{System1}-\eqref{System5}. This approach includes first-order correlations between the transmission line and the JPM. It is based on the assumption that the transmission line stays in a coherent state described by a single amplitude $\alpha$. It tacitly assumes that not only $\hat{a}|\alpha\rangle=\alpha|\alpha\rangle$ as usual but also $\hat{a}^\dagger|\alpha\rangle=\alpha^\ast|\alpha\rangle$ or, alternatively, $\left\langle \alpha^\prime |\alpha \right\rangle= 0$ for $\alpha' \neq \alpha$, thus assuming a large initial coherent state with $|\alpha|\gg 1$ (see \cite{kocabacs2012resonance}). An important point for the whole section is that the variable $|\alpha|^2$ in our case is the amplitude of a photon flux, whereas in the standard case it denotes the actual photon number. This fact arises from the usual formalism used in input-output theory, where field creation and annihilation operators are not dimensionless (see, e.g., \cite{WallsMilburn}). The actual photon number that hits the detector during the measurement time interval $t_m$ is then given by $n = |\alpha|^2 \omega_0 t_m$ (see App. \ref{app:photon_flux}). Hence the condition for the validity of the approximation in our case reads $|\alpha|^2 \omega_s t_m \gg 1$. Note that some of the results we show in the following extrapolate to regimes where this condition is not fulfilled, e.g. we start with $|\alpha|^2 = 0$ in some plots, but the key results are in the regime where the approximation holds.
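To make the distinction between photon flux amplitude and photon number concrete, the following minimal sketch converts a target photon number into the corresponding flux amplitude $|\alpha|^2$ and evaluates the validity parameter $|\alpha|^2\omega_s t_m$ of the mean-field ansatz. The values of $\omega_0$ and $t_m$ are assumptions chosen to match the scales used in Fig. \ref{fig:continuous}.
\begin{verbatim}
import numpy as np

omega_0 = 2 * np.pi * 5e9   # assumed JPM transition (angular) frequency, rad/s
t_m = 10e-9                 # assumed measurement window of 10 ns

# Photon numbers corresponding to panels (c) and (d) of the continuous-drive figure
for n_target in (0.5, 50.0):
    alpha2 = n_target / (omega_0 * t_m)   # photon-flux amplitude |alpha|^2
    validity = alpha2 * omega_0 * t_m     # must be >> 1 for the mean-field ansatz
    print(f"n = {n_target:5.1f} photons: |alpha|^2 = {alpha2:.3e}, "
          f"|alpha|^2 * omega_0 * t_m = {validity:.1f}")
\end{verbatim}
For $n = 0.5$ the validity parameter is below one, illustrating the extrapolation mentioned above, whereas $n = 50$ lies comfortably inside the regime where the approximation holds.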
In the following, we only consider one measurement event ($\gamma_{\rm res} = 0$) and look at the measurement probability to define the efficiency of the counter, since this value corresponds to the efficiency in the multi-count case (for short enough reset time). Additionally, we neglect dark counts ($\gamma_0 = 0$), since the typical dark count rates of a JPM do not change the results significantly, as we will see in Sec. \ref{sec:4}. For simplicity we also assume that we do not have any relaxation ($\gamma_{\rm rel} = 0$). We are especially interested in the choice of $\gamma_{\rm TL}$ that maximizes the measurement probability. We refer to this rate as $\gamma_{\rm TL}^{\rm max}$. \begin{figure*} \caption{ (a) Occupation probabilities as a function of the measurement time $t_m$. (b) Measurement probability as a function of the rate of incoming photons for the optimal rate choice $\gamma_{\rm TL} = \gamma_{\rm TL}^{\rm max}$ (before steady state is reached). One sees a saturation when the rate of incoming photons exceeds the measurement rate, such that increasing the rate of incoming photons does not further increase the measurement probability. (c),(d) Measurement probability versus the rates $\gamma_{\rm TL}$ and $\gamma_1$ after $t_m = 10$ ns (before the stationary state is reached) for two different values of $|\alpha|^2$. (c) For small values of $|\alpha|^2$ (0.5 photons during $t_m$), the optimal measurement regime coincides with the matching condition \eqref{matching_simple} derived in Sec. \ref{sec:4}. (d) For high values of $|\alpha|^2$ (50 photons during $t_m$), we see a plateau behavior, such that the measurement probability is independent of $\gamma_{\rm TL}$.} \label{fig:continuous} \end{figure*} \subsection{Continuous Input} \label{sec:4A} We assume that we have a continuous, coherent drive at frequency $\omega_0$ (such that the signal frequency $\omega_s$ is equal to $\omega_0$) and photon flux amplitude $\alpha$, such that the initial state reads \begin{align} \ket{\Phi (t=0)} = \ket{0}_{\rm JPM} \otimes \ket{\alpha}_{\rm TL} = \ket{0,\alpha}, \label{initial} \end{align} where the JPM is prepared in the ground state before measurement and the transmission line is in a coherent state of amplitude $\alpha$ and frequency $\omega_0$. We can take the expectation value in the system of equations \eqref{System1}-\eqref{System5} with respect to state \eqref{initial} (note that the time dependence is included in the operators, such that $\ket{\Phi}$ stays constant). To trace out the transmission line degrees of freedom, we apply $\hat a_{\rm in}$ to the right and $\hat a^{\dag}_{\rm in}$ to the left, which gives \begin{align} \begin{split} \hat a_{\rm in} \ket{0,\alpha} &= - \frac{i}{\sqrt{2\pi}} \int_{-\infty}^{\infty}{\rm d}\omega \exp[-i\omega t] f(\omega)\hat a(\omega) \ket{0,\alpha} \\ &= -\frac{i}{\sqrt{2\pi}} \alpha \sqrt{\omega_0} \exp\left[-i\omega_0 t\right] \ket{0,\alpha} \end{split}, \end{align} since a single-mode drive is described by a $\delta$-function in frequency space for a continuous drive at frequency $\omega_0$: $f(\omega) = \sqrt{\omega_0}\delta(\omega-\omega_0)$. In addition we apply the transformation $\hat \sigma^{-} \longmapsto \exp[-i\omega_0 t]\hat\sigma^{-}$ and $\hat \sigma^{+} \longmapsto \exp[i\omega_0 t]\hat \sigma^{+}$ in order to make the equations time independent.
After these steps, we finally end up with equations of motion for the expectation values of the JPM operators: \begin{figure}\label{fig:onemode_gammaTL} \end{figure} \begin{subequations} \begin{align} \left<\dot{\hat \sigma}^{-}\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{-}\right> - \rm{i} \frac{\omega_R}{2} \left(\left<\mathcal{\hat P}_0\right> - \left<\mathcal{\hat P}_1\right>\right) \label{continuous_1} \\ \left<\dot{\hat \sigma}^{+}\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{+}\right>+ \rm{i} \frac{\omega_R}{2} \left(\left<\mathcal{\hat P}_0\right> - \left<\mathcal{\hat P}_1\right>\right)\label{continuous_2} \\ \left<\dot{\hat{\mathcal P}}_0\right> &= \gamma_{\rm TL} \left<\mathcal{\hat P}_1\right> - \rm{i} \frac{\omega_R}{2} \left(\left<\hat \sigma^{-}\right> - \left<\hat \sigma^{+}\right> \right)\label{continuous_3}\\ \left<\dot{\hat{\mathcal P}}_1\right> &= -\tilde \gamma \left<\mathcal{\hat P}_1\right> + \rm{i} \frac{\omega_R}{2} \left(\left<\hat \sigma^{-}\right> - \left<\hat \sigma^{+}\right> \right) \label{continuous_4}\\ \left<\dot{\hat{\mathcal P}}_m\right> &= \gamma_1 \left<\mathcal{\hat P}_1\right>\label{continuous_5}, \end{align} \end{subequations} where $\omega_R \equiv \sqrt{2|\alpha|^2\gamma_{\rm TL}\omega_0/\pi}$ denotes the Rabi frequency, and where we have used the relation $\left<\hat \sigma_z\right> = \left<\mathcal{\hat P}_0\right>-\left<\mathcal{\hat P}_1\right>$ to eliminate $\left<\hat \sigma_z\right>$. This system of equations can be solved numerically (see Fig. \ref{fig:continuous}). We are mostly interested in the measurement probability $\left<\mathcal{\hat P}_m\right>$. For every choice of parameters, the measurement probability reaches unity after some time since we assume a continuous drive (see Fig. \ref{fig:continuous}(a)), so that energy transfer to the JPM continues for as long as needed to tunnel to the measurement state. The switching time depends on the choice of parameters, and we see that for small values of $|\alpha|^2$, the condition that minimizes this time is $\gamma_{\rm TL}=\gamma_1$, which we refer to as the matching condition. In the next section we will see that the same matching condition is found analytically with a less rigorous approximation (see Fig. \ref{fig:continuous}(c) and Fig. \ref{fig:onemode_gammaTL}). For higher values of $|\alpha|^2$, the matching condition shifts to smaller values of $\gamma_{\rm TL}$ (see Fig. \ref{fig:onemode_gammaTL}). If $\omega_R \gg \gamma_1$, the system dynamics are much faster than the measurement process, such that the JPM likely oscillates back to the ground state before tunneling from the excited state to the measurement state. On the other hand, if $\gamma_1 \gg \omega_R$, measurement can be seen as a continuous projection and therefore freezes the system dynamics. This effect is well known as the quantum Zeno effect \cite{misra1977zeno,itano1990quantum,damborenea2002measurement,jacobs2007continuous,rossi2008quantum,wang2008quantum,helmer2009quantum}. If we match the rates and look at the dependence of the measurement probability on the rate of incoming photons, we see a saturation at the point when the rate of incoming photons becomes greater than the measurement rate, since then the arrival of a photon at the detector during the measurement time is guaranteed (see Fig. \ref{fig:continuous}(b)). This means that adding more photons per time interval does not increase the measurement probability, since the JPM can only measure one photon (see Fig.
\ref{fig:continuous}(b)). The measurement probability is one at this saturation point if the measurement time is longer than the required time for a tunneling process, and smaller than one otherwise (see Fig. \ref{fig:continuous}(b)). Moreover, we find that for high values of $|\alpha|^2$ there is a large region where the measurement probability is independent of $\gamma_{\rm TL}$ and only varies with $\gamma_1$ (see Fig. \ref{fig:continuous}(d)), which corresponds to the classical regime. Note again that $|\alpha|^2$ corresponds to the photon flux and not the actual photon number. In Appendix \ref{app:3}, we additionally provide an analytical solution for the continuous mean field approach using the Laplace transformation. \begin{figure*} \caption{ Results for pulse-shaped inputs. (a) Time evolution of the state occupation probabilities for an exponentially damped pulse with mean photon number $|\alpha|^2$. Additionally we show the measurement probability for $|\alpha|^2 = 1$. (b) Dependence of the optimal choice of rates $\gamma_{\rm TL}^{\rm max}/\gamma_1$ on $|\alpha|^2$ for a Gaussian pulse. (c),(d) Dependence of the optimal measurement probability on $\kappa$ and $\sigma$ for exponentially damped and Gaussian pulses, respectively. Note that the x-axis does not start at $0$ since the pulse is not well defined for $\kappa = 0$ and $\sigma = 0$, respectively. (e), (f) Shift of the optimal measurement region for different values of $|\alpha|^2$ in the Gaussian case. (e) shows the behavior for $250$ photons arriving during $t_m$ and (f) for $2500$ photons arriving during $t_m$.} \label{fig:gauss} \end{figure*} \subsection{Pulsed Input} For applications to qubit measurement \cite{govia2014high} we wish to perform threshold detection on a \textit{coherent} input pulse of $n$ photons. Therefore, we want to extend the above solutions to the more general case of an arbitrary input waveform. In this case the form factor $f(\omega)$ is no longer proportional to a simple $\delta$-function; it describes the shape of the pulse in frequency space. We assume the form factor in the time domain $f(t)$, which is given by the Fourier transformation of $f(\omega)$, to be real. Note that especially in quantum optical treatments it is typical to include additional noise operators in the ladder operators, since noise channels are treated there as additional input/output fields. However, we include all noise channels directly through Lindblad operators and therefore have no need to include additional noise channels in the expressions for $\hat a_{\rm out}$ and $\hat a_{\rm in}$. We incorporate this form factor into the system of equations and follow the same procedure as in the previous section.
By using the Fourier relation $\int_{-\infty}^{\infty}{\rm d}\omega f(\pm\omega) \exp\left(\mp i \omega t\right) = f(t)$ we can bring the resulting system of equations to the following form: \begin{subequations} \begin{align} \label{Pulse1} \left<\dot{\hat \sigma}^{-}\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{-}\right> - i \frac{\omega_R(t)}{2} \left(\left<\mathcal{\hat P}_0\right> - \left<\mathcal{\hat P}_1\right>\right) \\ \label{Pulse2} \left<\dot{\hat \sigma}^{+}\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{+}\right>+ i \frac{\omega_R(t)}{2} \left(\left<\mathcal{\hat P}_0\right> - \left<\mathcal{\hat P}_1\right>\right) \\ \label{Pulse3} \left<\dot{\hat{\mathcal P}}_0\right> &= \gamma_{\rm TL} \left<\mathcal{\hat P}_1\right> - i \frac{\omega_R(t)}{2} \left(\left<\hat \sigma^{-}\right> - \left<\hat \sigma^{+}\right> \right) \\ \label{Pulse4} \left<\dot{\hat{\mathcal P}}_1\right> &= -\tilde \gamma \left<\mathcal{\hat P}_1\right> + i \frac{\omega_R(t)}{2} \left(\left<\hat \sigma^{-}\right> - \left<\hat \sigma^{+}\right> \right) \\ \label{Pulse5} \left<\dot{\hat{\mathcal P}}_m\right> &= \gamma_1 \left<\mathcal{\hat P}_1\right>, \end{align} \end{subequations} where $\omega_R(t) \equiv f(t)\sqrt{2|\alpha|^2\gamma_{\rm TL}/\pi}$ depends on the pulse shape in the time domain. This system of equations is similar to that in the previous section, apart from an additional factor $f(t)$ that specifies the pulse shape. Using these equations, we can solve for the time evolution of the state occupations for an arbitrary pulse shape. Here, we study two different shapes, an exponentially damped pulse and a Gaussian pulse. The first pulse shape is especially relevant for qubit measurement, since it describes the shape of a pulse created from a spontaneous emission source \cite{meschede1985one,brune1987realization,ginzel1993quantum,wenner2014catching}. This pulse is described by the form factor \begin{align} f(t) &= \sqrt{\kappa}\exp\left(-\frac{\kappa}{2}t\right) \end{align} with signal frequency $\omega_s$ of the pulse and duration $\tau_e = 2\pi/\kappa$. Again we assume the signal frequency to be equal to the JPM transition frequency, $\omega_s = \omega_0$. Next, we study the most natural choice for a few-photon wave packet, namely the Gaussian pulse \begin{align} \begin{split} f(\omega) &= \frac{1}{\left(2\pi \sigma^2\right)^{\frac{1}{4}}} \exp\left(-\frac{(\omega-\omega_s)^2}{4\sigma^2}\right) \\ f(t) &= \left(8\pi\sigma^2\right)^{\frac{1}{4}} \exp\left(-\sigma^2 (t-t_0)^2\right), \end{split} \end{align} with duration $\tau_G = 2\pi/\sigma$. We assume the signal frequency $\omega_s$ to coincide with the transition frequency of the JPM ($\omega_s = \omega_0$). Note that we choose $t_0$ different from zero to include all of the Gaussian features (i.e., choose $t_0$ such that both tails of the pulse are included). The results are similar to those for the exponentially damped pulse, except that $\sigma$ plays the role of $\kappa$ in this case (see Fig. \ref{fig:gauss}). Note that all the pulses are normalized to one, which means $\int_{0}^{\infty} {\rm d}t |f(t)|^2=1$. For small amplitudes $|\alpha|^2$, we recover the matching condition \eqref{matching_simple} found in Sec. \ref{sec:4A}. Increasing $|\alpha|^2$ shifts the maximum regime to higher values of $\gamma_1$ and smaller values of $\gamma_{\rm TL}$, for the same reason as in the continuous drive case. The behavior of $\gamma_{\rm TL}^{\rm max}/\gamma_1$ for a Gaussian pulse is shown in Fig.
\ref{fig:gauss}(b) for two different values of $\sigma$. The agreement between the matching condition in the continuous case and the pulse case can be explained by the fact that a continuous drive is a special case of, e.g., a Gaussian pulse when $\sigma \longrightarrow 0$. Therefore it makes sense that we find the same optimization conditions at least for small enough $\sigma$. In any case, Fig. \ref{fig:gauss} indicates that the agreement can also be found for higher values of $\sigma$. We see that the ratio starts at one and then immediately drops to smaller values before asymptotically tending to zero in the classical regime. The movement of the optimal measurement region is also shown in Fig. \ref{fig:gauss}(e-f). In contrast to the continuous drive case, the measurement probability for pulsed input does not saturate at one, since a finite number of photons hits the detector. The actual value of $P_m$ in the steady state depends heavily on $|\alpha|^2$ (see Fig. \ref{fig:gauss}(a)). On the other hand, the maximum of the measurement probability for fixed values of $|\alpha|^2$ depends on the parameters $\kappa$ and $\sigma$ for the exponentially damped and Gaussian pulse, respectively (see Fig. \ref{fig:gauss}(c),(d)). In both cases we see that the shorter the pulse, the smaller the measurement probability, since for longer pulses it is more likely that a photon excites the JPM. For the exponentially damped pulse, it is also possible to obtain analytical results using the Laplace transformation. We find the following expression for the measurement probability in the stationary state \begin{align} \begin{split} \lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m(t)\right> &= \frac{\tilde \omega_R^2}{4\kappa\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1 + \frac{\gamma_{\rm TL}}{\gamma_1}\right)} \\&\hspace{-0.5cm}- \sum_{l=0}^{\infty} \frac{\omega_R^2}{2} \frac{1+4\frac{\kappa}{\gamma_1}}{\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1+\frac{\gamma_{\rm TL}}{\gamma_1}\right)} \frac{\left<\mathcal{\hat P}_m(0)^{(l)}\right>}{(2\kappa)^{-(l+1)}}, \end{split} \end{align} with $\tilde\omega_R = \sqrt{2|\alpha|^2\kappa\gamma_{\rm TL}/\pi}$. For given initial conditions of the system and starting with the set of equations \eqref{Pulse1}-\eqref{Pulse5}, one can calculate $\left<\mathcal{\hat P}_m(0)^{(l)}\right>$ to arbitrary order (for more details see Appendix \ref{app:4}). In Fig. \ref{fig:analytic_comparison} we show the deviation between the analytic solution up to fifth order and the numerical solution of \eqref{Pulse1}-\eqref{Pulse5}. We see that for increasing $\alpha$ and small $\kappa$ the deviation is quite high, since the small parameter is $\alpha/\kappa$, but for large $\kappa$ the agreement is very good. \begin{figure} \caption{ Deviation of the steady-state measurement probability between the analytical solution up to fifth order and the numerical solution for the exponentially damped pulse with different values of $\kappa$, as a function of $|\alpha|^2$. For small $\kappa$ and large $|\alpha|^2$ the deviation is quite high, but for increasing $\kappa$ the approximation fits the numerical results well. For $\kappa = 5$ GHz the deviation is almost zero.} \label{fig:analytic_comparison} \end{figure} \section{Rate equations} \label{sec:4} In this section, we approximate equations \eqref{System1}-\eqref{System5} and find optimal conditions to maximize measurement efficiency in the stationary state.
We assume $\gamma_{\rm res} \neq 0$ to derive rate equations for the occupation probabilities, which can be solved analytically. The measurement probability is then given by the occupation probability of the measurement state. In the case of the JPM it is difficult to reset the counter, since it tunnels into a continuum of states. Ideas exist to reset the JPM using relaxation oscillations, but for the time being the JPM is restricted to a single measurement. For this reason we assumed the reset rate to be zero in Sec. \ref{sec:3}. However, our techniques are general and can be applied to any counter; e.g., a counter based on a driven $\Lambda$ system \cite{inomata2016single} can be reset using a control pulse that drives the system back to its initial state. For such a system the reset times are around $400$ ns. The main result of this section will be an analytical derivation of the matching condition for small input fields that was found in the last section. We also derive a generalized matching condition where we include dark counts and relaxation. All results in this section are for a continuous drive, as we cannot treat pulses with this approach. Additionally, we extend the results to the case where dark counts $(\gamma_0\neq 0)$ and relaxation processes $(\gamma_{\rm rel}\neq 0)$ are present. The approach used in this section to derive the rate equations obscures the quantum mechanical nature of the system, and does not capture effects such as the Rabi oscillations in the measurement probability. The missing Rabi oscillations can probably be explained by the fact that we ignore correlations between the field and the JPM due to approximation \eqref{approx_rate} (with this approximation we automatically split expectation values like $\left<\mathcal{\hat O}_{0} \hat a_{\rm in}^{\dag}\right> \approx \left<\mathcal{\hat O}_{0} \right>\left<\hat a_{\rm in}^{\dag}\right>$). However, the results of this section still coincide well with the results of the mean field approach, especially the average measurement probability (see Fig. \ref{fig:comparison}). In the limit of fast decay of $\hat \sigma_z$, we can assume that the JPM dynamics are entirely incoherent (i.e. the expectation values of $\hat \sigma_x$ and $\hat \sigma_y$ decay quickly), hence we substitute for the operator $\hat \sigma_z$ its expectation value \begin{align} \hat \sigma_z(t) \longmapsto \left<\hat\sigma_z(t)\right> = P_0(t)-P_1(t), \label{approx_rate} \end{align} where $P_0$ and $P_1$ denote the probability to be in the ground and excited state, respectively. Given the many rates contributing to the decay of $\hat \sigma_z$, this condition is met under a wide range of parameters, consistent with the effectiveness of this approximation that we shall demonstrate later on (see Fig. \ref{fig:comparison}). In particular, the rate $\gamma_1$ should be large in experiment, since it determines how fast the tunneling from the metastable state into the measurement state happens. We want to study a continuous resonant drive ($\omega_s = \omega_0$), such that \begin{align} f(\omega)\hat a_{t_0}(\omega) = \sqrt{\omega_0}\delta(\omega-\omega_0) \hat a_{t_0}(\omega), \label{incoming_radiation} \end{align} similar to Sec. \ref{sec:4A}. In this case, the Fourier transformation of \eqref{System1} can easily be done: \begin{align} \begin{split} -i \omega_0 \hat \sigma^{-}(\omega_0) = &-\left(i\omega_0+\frac{\tilde \gamma}{2}\right)\hat \sigma^{-}(\omega_0) \\ &+ \sqrt{\gamma_{\rm TL}} \hat a_{\rm in}(\omega_0)(P_0(\omega_0)-P_1(\omega_0)).
\end{split} \label{help1} \end{align} Note that all operators appearing here act on both the transmission line and the JPM. While e.g. $\hat \sigma^{-}(t=0)=\hat \sigma^{-}\otimes\mathbbm{1}$ acts as the identity on the transmission line, this is no longer true at later times, highlighting the build-up of entanglement. One sees this nicely in Eq. \eqref{help1}: the second term on the right-hand side leads to a contribution to $\hat \sigma^{-}$ that acts on the transmission line, i.e., if the JPM is in the ground state, then $\hat \sigma^{-}$ becomes more transmission-line-like as time evolves (hence the plus sign), while if the JPM is in the excited state the JPM operator becomes less transmission-line-like (hence the minus sign). Here, transmission-line-like refers to the field-operator contribution that $\hat \sigma^{-}$ acquires due to the time evolution under Eq. \eqref{help1}. With \eqref{help1}, relation \eqref{inout_Relation} leads to \begin{align} \hat a_{\rm out}(\omega_0) = R(\omega_0) \hat a_{\rm in}(\omega_0), \label{Reflection_1} \end{align} with the reflection coefficient \begin{align} R(\omega_0) = -1 + \frac{2\gamma_{\rm TL}}{\tilde \gamma} \left[P_0(\omega_0)-P_1(\omega_0)\right]. \label{Reflection_omega} \end{align} Inverse Fourier transform of Equation \eqref{Reflection_1} yields the time-domain relation \begin{align} \hat a_{\rm out}(t) &= \mathcal{F}^{-1}\left[R(\omega_0)\right] \ast \mathcal{F}^{-1}\left[\hat a_{\rm in}(\omega_0)\right] \\ &= R(t) \hat a_{\rm in}(t), \label{help2} \end{align} where we only have to substitute $P_{0/1} (\omega_0)$ with $P_{0/1}(t)$ in Equation \eqref{Reflection_omega} for $R$, because $\hat a_{\rm in}\propto \delta(\omega-\omega_0)$, which makes the resulting convolution easy to evaluate. Note that this is only possible for a continuous drive. The absolute value of the reflection coefficient in our system can be greater than one if $P_0(t)<P_1(t)$, because in this case the incoming signal can be amplified by spontaneous or stimulated emission. All the equations \eqref{help1}-\eqref{help2} are also valid for the non-resonant case, provided one substitutes $\omega_0$ in \eqref{incoming_radiation} with a frequency that is not equal to the JPM transition frequency, $\omega_s \neq \omega_0$. To obtain the rate equations for the system, we replace $\mathcal{\hat P}_0$, $\mathcal{\hat P}_1$, and $\mathcal{\hat P}_m$ with the corresponding occupation probabilities $P_0$, $P_1$, and $P_m$, which leads to \begin{subequations} \begin{align} \dot{P}_0 &= -\gamma_0 P_0 + (\gamma_{\rm TL}+\gamma_{\rm rel})P_1 \\ &\hspace{0.5cm}-\sqrt{\gamma_{\rm TL}}\left(\left<\hat{a}_{\rm in}^{\dagger} \hat \sigma^{-} \right> + \left<\hat \sigma^{+}\hat{a}_{\rm in}\right>\right) + \gamma_{\rm res} P_m \nonumber\\ \dot{P}_1 &= -(\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel})P_1 +\sqrt{\gamma_{\rm TL}}\left(\left<\hat{a}_{\rm in}^{\dagger}\hat \sigma^{-} \right> + \left<\hat \sigma^{+}\hat{a}_{\rm in} \right>\right)\\ \dot{P}_m &= \gamma_0 P_0 + \gamma_{1} P_1 -\gamma_{\rm res} P_m.
\end{align} \end{subequations} Using relation \eqref{inout_Relation} and the expression for $R$, we end up with a system of coupled rate equations where we have eliminated $\hat \sigma^{-}$ and $\hat \sigma^{+}$ \begin{subequations} \begin{align} \dot{P}_0 &= -(\beta N_{\rm in}+\gamma_0) P_0 +(\beta N_{\rm in}+\gamma_{\rm TL}+\gamma_{\rm rel})P_1 + \gamma_{\rm res} P_m \label{rate_1}\\ \dot{P}_1 &= \beta N_{\rm in} P_0 -(\beta N_{\rm in}+\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel}) P_1, \label{rate_2}\\ \dot{P}_m &= \gamma_0 P_0 + \gamma_1 P_1 - \gamma_{\rm res} P_m \label{rate_3}. \end{align} \end{subequations} with $\beta = \frac{2}{\pi}\frac{\gamma_{\rm TL}}{\tilde \gamma}$ and the incoming photon flux $N_{\rm in} = \left<\hat a^{\dag}\hat a\right>$ (for more details see Appendix \ref{app:photon_flux}). Note that $\beta < 1$, such that the excitation rate of ground to excited state is smaller than the rate of incoming photons. The overall measurement efficiency is given in the stationary state; therefore, we set $\dot P_0 = \dot P_1 = \dot P_m = 0$. Doing so and using the constraint $P_0 + P_1 + P_m = 1$, we end up with an expression for stationary $P_0$, and $P_1$ \begin{align} \begin{split} P_0&= \frac{1}{1+\frac{\gamma_0}{\gamma_{\rm res}}} \\&\hspace{-0.3cm}- \frac{\beta \tilde \gamma N_{\rm in}}{\tilde \gamma \left(\beta N_{\rm in} \left[\frac{\gamma_{\rm res}+\gamma_1}{\gamma_{\rm res}+\gamma_0}\right]+\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel}\right)\left(1+\frac{\gamma_0}{\gamma_{\rm res}}\right)^2} \end{split} \label{rate_p0} \\ P_1&= \frac{\beta \tilde\gamma N_{\rm in}}{\tilde \gamma \left(\beta N_{\rm in} \left[\frac{\gamma_{\rm res}+\gamma_1}{\gamma_{\rm res}+\gamma_0}\right]+\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel}\right)\left(1+\frac{\gamma_0}{\gamma_{\rm res}}\right)}. \label{rate_p1} \end{align} To get from equations \eqref{rate_1}-\eqref{rate_3} to the expressions \eqref{rate_p0}, \eqref{rate_p1} we had to assume that $\gamma_{\rm res}>0$, such that the expressions for $P_0$ and $P_1$ are only valid for the case $\gamma_{\rm res}\neq 0$. The exact solution for the case $\gamma_{\rm res}=0$ is given in Appendix \ref{app:2}. The dark count correction is given by the counting rate in absence of incoming photons; therefore, $\Gamma_{\rm dark} = \gamma_0P_0(N_{\rm in}=0)$. If we use the fact that the dead time of the counter can be expressed in terms of the reset rate as $\tau_{\rm dead} = 1/\gamma_{\rm res}$, we obtain the well known expression for the dark count correction for quantum optical counters \cite{hadfield2009single} \begin{align} \Gamma_{\rm dark} = \gamma_0 P_0(N_{\rm in}=0) = \frac{\gamma_0}{1+\gamma_0\tau_{\rm dead}}. \label{dark} \end{align} The overall counting rate on the other hand is given by \begin{align} \Gamma_{\rm count} = \gamma_1 P_1(N_{\rm in})+\gamma_0 P_0(N_{\rm in}). \label{bright} \end{align} With \eqref{dark} and \eqref{bright}, the bright count rate, which describes the rate at which incoming photons are detected, can be written as \begin{align} \Gamma_{\rm bright} = \Gamma_{\rm count} - \Gamma_{\rm dark}. \end{align} The fidelity of a photon counter can in general be characterized by its efficiency, which is defined as the rate of detected photons $\Gamma_{\rm bright}$ over the rate of incident photons $\Gamma_{\rm incident} = N_{\rm in}$ \cite{hadfield2009single}. 
For the JPM, the efficiency is given by \begin{align} \eta &= \frac{\Gamma_{\rm bright}}{\Gamma_{\rm incident}} \label{eff1} \\ &= \frac{1}{N_{\rm in}} \left[\gamma_1 P_1(N_{\rm in}) + \gamma_0 P_0(N_{\rm in}) - \gamma_0P_0(N_{\rm in}=0)\right].\nonumber \end{align} If we put the expressions for $P_0$ and $P_1$ into \eqref{eff1}, we obtain an overall expression for the detection efficiency: \begin{align} \eta = \frac{4 \gamma_{\rm TL}\gamma_{\rm res} \left[\gamma_1\left(\gamma_0+\gamma_{\rm res}\right)+ \gamma_0 \left(\gamma_1+\gamma_{\rm res}\right)\right]}{(\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel})(\gamma_{\rm TL}+\gamma_1+\gamma_0+\gamma_{\rm rel})\left(\gamma_0+\gamma_{\rm res}\right)^2}, \label{efficiency_general} \end{align} where we have assumed the low excitation limit ($N_{\rm in}/\omega_0\ll1$), such that the terms proportional to $N_{\rm in}$ in the denominators of \eqref{rate_p0} and \eqref{rate_p1} can be ignored. \begin{figure} \caption{ Efficiency $\eta$ as a function of the coupling rate $\gamma_{\rm TL}$. The efficiency has a distinct maximum value given by equation \eqref{matching_general} that depends on $\gamma_0$, $\gamma_1$, $\gamma_{\rm rel}$. For $\gamma_0 = \gamma_{\rm rel} = 0$ (blue), the general matching condition simplifies to \eqref{matching_simple} also found in the last section and the efficiency reaches 1. An additional dark count rate $\gamma_0$ (red) leads to a small shift and reduction of the maximum value; both are barely visible for typical values of $\gamma_0$. On the other hand, the inclusion of relaxation $\gamma_{\rm rel}$ (green) reduces the maximum value significantly and furthermore leads to a visible shift of the maximum to higher values of $\gamma_{\rm TL}$.} \label{fig:rate_approach} \end{figure} The efficiency possesses a distinct maximum (see Fig. \ref{fig:rate_approach}) that is reached when the following relation between rates is satisfied \begin{align} \gamma_{\rm TL}^{\rm max} = \sqrt{(\gamma_1+\gamma_{\rm rel})(\gamma_1+\gamma_{\rm rel}+\gamma_0)}. \label{matching_general} \end{align} We refer to this expression as the general matching condition, since compared to \eqref{matching_simple} it additionally includes dark counts and relaxation. Note that the matching condition itself does not depend on $\gamma_{\rm res}$, but if $\gamma_{\rm res} < \gamma_1$ it limits the maximal efficiency (see Fig. \ref{fig:efficiency2}). If the rates are chosen such that \eqref{matching_general} is satisfied, we say the JPM and the transmission line are matched, to make a connection to impedance matching in microwave circuits \cite{Pozar}. When the JPM is matched to the transmission line and under the condition $\gamma_{\rm res}>\gamma_1$, we find an efficiency \begin{align} \eta_{\rm max} &= \frac{4(\gamma_0+\gamma_1)}{\gamma_0+2\left(\gamma_1+\gamma_{\rm rel} + \sqrt{(\gamma_1+\gamma_{\rm rel})(\gamma_0+\gamma_1+\gamma_{\rm rel})}\right)}, \label{rate_efficiency} \end{align} To get expression \eqref{rate_efficiency} out of \eqref{efficiency_general} we assumed a high reset rate $\gamma_{\rm res}\gg \gamma_1$, hence $\gamma_1/\gamma_{\rm res}\approx 0$. However Fig. \ref{fig:efficiency2} indicates that \eqref{rate_efficiency} is valid as soon as $\gamma_{res}$ exceeds $\gamma_1$. 
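As a simple numerical cross-check of these expressions, the following Python sketch evaluates the efficiency \eqref{efficiency_general} on a grid of coupling rates $\gamma_{\rm TL}$ and compares the location of its maximum with the general matching condition \eqref{matching_general}. The rate values are illustrative assumptions of the order quoted in this work, not measured device parameters.
\begin{verbatim}
import numpy as np

# Assumed rates in GHz (illustrative values only)
g1, g0, grel, gres = 1.0, 0.01, 3.3e-5, 10.0

def efficiency(gTL):
    """Detection efficiency of Eq. (efficiency_general), low-excitation limit."""
    num = 4.0 * gTL * gres * (g1 * (g0 + gres) + g0 * (g1 + gres))
    den = (gTL + g1 + grel) * (gTL + g1 + g0 + grel) * (g0 + gres) ** 2
    return num / den

gTL_grid = np.linspace(0.01, 10.0, 200001)
gTL_numeric = gTL_grid[np.argmax(efficiency(gTL_grid))]

# General matching condition, Eq. (matching_general)
gTL_matching = np.sqrt((g1 + grel) * (g1 + grel + g0))

print(gTL_numeric, gTL_matching)  # both close to 1.005 GHz for these rates
\end{verbatim}
The maximum of the numerically evaluated efficiency indeed occurs at the coupling rate predicted by the general matching condition, and the same check can be repeated for other rate combinations.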
If there are no dark counts and no relaxation, the efficiency is given by \begin{align} \eta = \frac{4\gamma_{\rm TL}\gamma_1}{(\gamma_{\rm TL}+\gamma_1)^2}, \label{eq:eff1} \end{align} and the general matching condition simplifies to the matching condition \begin{align} \gamma_{\rm TL} = \gamma_1, \label{matching_simple} \end{align} which coincides with the result found in the last section. This result also agrees with the optimal matching condition found by Romero \textit{et al.} \cite{romero2009photodetection}; however, there the efficiency was limited to $1/2$. The reason for this is that Romero \textit{et al.} assumed an infinite transmission line with a JPM in the middle. Therefore, an excitation in the JPM can spontaneously emit into the other side of the transmission line at a rate $\gamma_{\rm TL}$, allowing for transmission through the JPM. For maximum efficiency $\gamma_{\rm TL} = \gamma_1$, both photon detection and photon transmission through the JPM will occur with equal probability, reducing the efficiency to $1/2$. In this work, we assume a semi-infinite transmission line terminated by the JPM, such that the transmission process is not possible, which leads to a maximum efficiency of $1$. In our case there are four main processes that limit detector efficiency: coupling losses (reflection), energy relaxation, dark counts, and dead time. Usually one distinguishes between two separate efficiencies: the efficiency due to coupling losses $\eta_{\rm loss}$ and the intrinsic quantum efficiency of the detector $\eta_{\rm det}$. Here, $\eta_{\rm loss}$ includes the effect of rate mismatch between the JPM and the transmission line, as described above. On the other hand, $\eta_{\rm det}$ includes the effects of dark counts, relaxation, and dead time. The overall efficiency can be written as the product of these two: $\eta = \eta_{\rm loss}\cdot \eta_{\rm det}$. Here $\eta_{\rm loss}$ can be extracted from \eqref{efficiency_general} by dividing it by \eqref{rate_efficiency}, since $\eta_{\rm det} = \eta_{\rm max}$ (reflection losses are zero at the matching point), and in the general case (under the assumption $\gamma_{\rm res} \gg \gamma_1$) it is given by \begin{align} \eta_{\rm loss} = \frac{\gamma_{\rm TL}\left(\gamma_0+2(\gamma_1+\gamma_{\rm rel})+2\sqrt{(\gamma_1+\gamma_{\rm rel})(\gamma_0+\gamma_1+\gamma_{\rm rel})}\right)}{(\gamma_{\rm TL}+\gamma_1+\gamma_{\rm rel})(\gamma_{\rm TL}+\gamma_0+\gamma_1+\gamma_{\rm rel})}. \end{align} In the ideal case ($\gamma_0=\gamma_{\rm rel}= 0$ and $\gamma_{\rm res} > \gamma_1$, such that $\eta_{\rm det}=1$), the efficiency is only limited by $\eta_{\rm loss}$. Condition \eqref{matching_simple} then determines the coupling rate for which the coupling loss is zero, such that $\eta_{\rm loss} = 1$ and we reach unit efficiency (see Fig. \ref{fig:rate_approach}). This is exactly the point where all incoming photons reach the measurement state of the counter and all the incoming power is transferred into a measured signal. In the non-ideal case where we have dark counts and relaxation, even at the general matching point \eqref{matching_general} the efficiency is limited to a value smaller than one (since $\eta_{\rm det}<1$), such that the optimal power matching condition \eqref{matching_general} can only lead to an overall efficiency of $\eta_{\rm det}$ (see Fig. \ref{fig:rate_approach}). In Fig. \ref{fig:efficiency2}, we see that the reset time also has a significant influence on $\eta_{\rm det}$.
For $\gamma_{\rm res} < \gamma_1$, the efficiency increases rapidly with increasing $\gamma_{\rm res}$ up to the point where $\gamma_{\rm res} \approx \gamma_1$, after which the efficiency is approximately constant as $\gamma_{\rm res}$ increases further. This can be explained by the fact that for a system with $\gamma_{\rm res} \approx \gamma_1$, the reset happens at the same rate as the measurement, such that increasing $\gamma_{\rm res}$ no longer has an influence on $\eta_{\rm det}$. \begin{figure} \caption{ Efficiency $\eta$ as a function of the reset rate $\gamma_{\rm res}$. For small values of $\gamma_{\rm res}$, increasing the reset rate leads to a strong enhancement of the efficiency up to a point where the reset is roughly as fast as the decay into the measurement state ($\gamma_{\rm res} \approx \gamma_1$). From then on the efficiency stays constant with increasing $\gamma_{\rm res}$, since the reset time is then shorter than the average measurement time.} \label{fig:efficiency2} \end{figure} \begin{figure*} \caption{ Comparison of the measurement probability given by the numerical solution of the equation system \eqref{continuous_1}-\eqref{continuous_5} (red) and the analytical solution of the rate equations \eqref{Gl:A2} found in App. \ref{app:2} (blue), in the quantum (left) and classical regimes (right). The two approaches give similar results apart from the absence of Rabi oscillations in the rate equation approach, where the JPM is treated classically.} \label{fig:comparison} \end{figure*} In many applications of detection of continuous-wave signals, it is helpful to express detector performance in terms of noise equivalent power (NEP), the effective noise power per unit bandwidth referred to the detector input. In the case of a photon counter with dark count rate $\gamma_0$ operated for an integration time $\tau$, the Poisson uncertainty in the number of dark counts is given by $\sigma_N=\sqrt{\gamma_0\tau}$. Expressing this uncertainty as a photon flux at the input, we find (for the definition of the general NEP $\sigma_P$ see \cite{zmuidzinas2003thermal}) \begin{align} \sigma_P = \frac{\hbar \omega_0}{\eta \tau} \sqrt{\gamma_0 \tau}. \end{align} If we choose an integration time of $0.5$ s, corresponding to a detection bandwidth of $1$ Hz, we obtain the standard expression for the NEP of a photon counter \cite{zmuidzinas2003thermal, hadfield2009single} \begin{align} {\rm NEP} = \frac{\hbar\omega_0}{\eta}\sqrt{2\gamma_0}; \end{align} if we put in the expression \eqref{efficiency_general} for the JPM efficiency, we obtain the NEP for the JPM. For the JPM parameters $\gamma_{\rm rel} = 33$ kHz \cite{kelly2015state}, $\gamma_1 = 1$ GHz, $\gamma_0 = \gamma_1/100$ and $\omega_0/2\pi = 5$ GHz, we find an NEP of $2\times 10^{-20}$ W/$\sqrt{{\rm Hz}}$ at the matching point. This is to be compared against an NEP of order $1 \times 10^{-17}$ W/$\sqrt{{\rm Hz}}$ achieved by transition edge sensors (TES) \cite{thornton2016atacama} and microwave kinetic inductance detectors (MKIDs) \cite{flanigan2016photon} at higher frequencies in the range from 40 to 300 GHz, relevant for cosmic microwave background (CMB) studies. It is possible that Josephson junctions based on higher-gap materials such as NbN could be used to realize JPMs with plasma frequencies in the tens of GHz range, suitable for low-noise detection of the CMB. In Appendix \ref{app:2}, we solve the time evolution of the rate equations of this section analytically to compare them to the results reached in Sec. \ref{sec:4A} for the continuous drive case.
The comparison is shown in Fig. \ref{fig:comparison}. We see that the results of both approaches are very similar in both the classical and the quantum regimes. Optimal conditions for photon detection using semiconductor quantum dots were also discussed in \cite{wong2015quantum}. In that case, the optimal condition is satisfied for a cooperativity factor $\sim 1$. \section{Conclusion} \label{sec:5} In conclusion, we have derived a general set of equations that describe a two-level photon counter strongly coupled to a transmission line. We have shown that one can reach high-efficiency photon detection of a traveling microwave state using appropriate matching of system parameters. The conditions vary for different input states; in general, for low input power the coupling rate between the counter and the transmission line should be equal to the measurement rate. At higher power, the matching condition shifts, such that the coupling rate should be smaller than the measurement rate. Because of the generality of the input-output formalism we used, the approach described here can be applied to arbitrary input pulses and thus modified to fit the particular radiation source of any experiment. As a result, this work presents a guide for tuning parameters to reach the optimal measurement efficiency for a range of experimental situations. Moreover, the presented method can be extended to any lossy two-level system coupled to a semi-infinite resonator. \begin{appendix} \section{Hamiltonian of the system} \label{app:1} From the circuit diagram Fig. \ref{fig:JPM_TL} we can derive the Lagrangian of the system: \begin{align} \begin{split} \mathcal{L} &= \mathcal{L}_{\rm TL} + E_J \cos(\varphi_J) + (I_b+\Delta I) \left(\frac{\Phi_0}{2\pi}\right) \varphi_J \\ &\hspace{1.1cm}+ \frac{1}{2} C_J \left(\frac{\Phi_0}{2\pi}\right)^2 \dot{\varphi}_J^2\\ &= \mathcal{L}_{\rm TL} + \mathcal{L}_{\rm JPM} + \Delta I \left(\frac{\Phi_0}{2\pi}\right) \varphi_J, \label{eq:appendix1} \end{split} \end{align} where $\mathcal{L}_{\rm TL}$ is the bare transmission line Lagrangian (sum of harmonic oscillators), $\varphi_J$ the phase of the JPM, $I_b$ the bias current, $E_J$ the Josephson energy, $C_J$ the junction capacitance, $\Phi_0$ the flux quantum, and $\Delta I$ the additional current coming from the transmission line. Here, $\mathcal{L}_{\rm JPM} \equiv E_J \cos(\varphi_J) + I_b \frac{\Phi_0}{2\pi} \varphi_J + \frac{1}{2} C_J \left(\frac{\Phi_0}{2\pi}\right)^2 \dot{\varphi}_J^2$ is the Lagrangian of the JPM. The last term of \eqref{eq:appendix1} leads to an interaction between the JPM and the transmission line. Using the Legendre transformation, we obtain the Hamiltonian of the system: \begin{align} \mathcal{H} = \mathcal{H}_{\rm TL}+\mathcal{H}_{\rm JPM} + \Delta I \frac{\Phi_0}{2\pi} \varphi_J, \label{Hamilton_Appendix} \end{align} where $\mathcal{H}_{\rm TL}$ is the Hamiltonian describing the transmission line and $\mathcal{H}_{\rm JPM}$ is the Hamiltonian of the JPM. We want to take a closer look at the interaction term.
If we use the normal procedure of quantizing the transmission line and the JPM, we get the following expressions for the current \cite{johansson2010dynamical} and phase operators \cite{geller2007quantum}: \begin{align} \Delta \hat I &= \sqrt{\frac{\hbar \omega_s}{4\pi Z_0}} \int_{0}^{\infty} {\rm d}\omega\left(\hat a^{\dagger}(\omega)+\hat a(\omega)\right) \label{DeltaI}\\ \hat \varphi_J &= \frac{i}{\sqrt{2}}\left(\frac{2E_C}{E_J}\right)^{\frac{1}{4}} \left(\hat \sigma^{+}-\hat \sigma^{-}\right), \label{Phi} \end{align} where $\hat a$, $\hat a^{\dagger}$ and $\hat \sigma^{-}$, $\hat \sigma^{+}$ are the lowering and raising operators of the transmission line field and the JPM states, respectively. Equations \eqref{Hamilton_Appendix}-\eqref{Phi}, assuming a rotating-wave approximation, lead to the following expression for the interaction part of the Hamiltonian (infinite number of input modes): \begin{align} \hat H_{\rm INT} = i \hbar g \int_{-\infty}^{\infty} {\rm d}\omega (\hat a^{\dagger}(\omega) \hat \sigma^{-} - \hat \sigma^{+} \hat a(\omega)), \end{align} with $g \equiv (\omega_s Z_J/8\pi Z_0)^{1/2}$, where $Z_J$ is the junction impedance. \section{Analytical solution for the continuous mean field case} \label{app:3} Here we give an analytical solution of the system of equations derived in Sec. \ref{sec:4A}. First we use the Laplace transformation $\mathcal{L}[f(t)] = f(s) = \int_0^{\infty} {\rm d}t f(t) {\rm e}^{-st}$ to rewrite the system: \begin{subequations} \begin{align} \label{Laplace_System1} s\left<\hat \sigma^{-}(s)\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{-}(s)\right> - \rm{i} \frac{\omega_R}{2} \left(\left<\mathcal{\hat P}_0(s)\right> - \left<\mathcal{\hat P}_1(s)\right>\right) \\ \label{Laplace_System2} s\left<\hat \sigma^{+}(s)\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{+}(s)\right>+ \rm{i} \frac{\omega_R}{2} \left(\left<\mathcal{\hat P}_0(s)\right> - \left<\mathcal{\hat P}_1(s)\right>\right) \\ \label{Laplace_System3} s\left<\mathcal{\hat P}_0(s)\right>&= \gamma_{\rm TL} \left<\mathcal{\hat P}_1(s)\right> - \rm{i} \frac{\omega_R}{2} \left(\left<\hat \sigma^{-}(s)\right> - \left<\hat \sigma^{+}(s)\right> \right)+1 \\ \label{Laplace_System4} s\left<\mathcal{\hat P}_1(s)\right> &= -\tilde \gamma \left<\mathcal{\hat P}_1(s)\right> + \rm{i} \frac{\omega_R}{2} \left(\left<\hat \sigma^{-}(s)\right> - \left<\hat \sigma^{+}(s)\right> \right) \\ \label{Laplace_System5} s\left<\mathcal{\hat P}_m(s)\right> &= \gamma_1 \left<\mathcal{\hat P}_1(s)\right>. \end{align} \end{subequations} The first two equations give the expressions \begin{align} \left<\hat\sigma^{-}(s)\right> &= -\rm{i} \frac{\frac{\omega_R}{2}}{s+\frac{\tilde \gamma}{2}}\left(\left<\mathcal{\hat P}_0(s)\right>-\left<\mathcal{\hat P}_1(s)\right>\right) \\ \left<\hat\sigma^{+}(s)\right> &= \rm{i} \frac{\frac{\omega_R}{2}}{s+\frac{\tilde \gamma}{2}}\left(\left<\mathcal{\hat P}_0(s)\right>-\left<\mathcal{\hat P}_1(s)\right>\right), \end{align} which can be put into the equation for $\left<\mathcal{\hat P}_1(s)\right>$: \begin{align} \hspace{-0.5cm}s \left<\mathcal{\hat P}_1(s)\right> = -\tilde \gamma \left<\mathcal{\hat P}_1(s)\right> + \frac{\frac{\omega_R^2}{2}}{s+\frac{\tilde \gamma}{2}} \left(\left<\mathcal{\hat P}_0(s)\right>-\left<\mathcal{\hat P}_1(s)\right>\right).
\label{kont_Laplace_1} \end{align} Using the conservation of probabilities in Laplace space \begin{align} \left<\mathcal{\hat P}_0(s)\right>+\left<\mathcal{\hat P}_1(s)\right>+\left<\mathcal{\hat P}_m(s)\right> = \frac{1}{s}, \end{align} we can eliminate $\left<\mathcal{\hat P}_0(s)\right>$ in \eqref{kont_Laplace_1}: \begin{align} s \left<\mathcal{\hat P}_1(s)\right> = -\tilde \gamma \left<\mathcal{\hat P}_1(s)\right> + \frac{\frac{\omega_R^2}{2}}{s+\frac{\tilde \gamma}{2}} \left(\frac{1}{s}-2\left<\mathcal{\hat P}_1(s)\right>-\left<\mathcal{\hat P}_m(s)\right>\right). \label{kont_Laplace_2} \end{align} Additionally, we can eliminate $\left<\mathcal{\hat P}_1(s)\right>$ in \eqref{kont_Laplace_2} with the equation for $\left<\mathcal{\hat P}_m(s)\right>$ \eqref{Laplace_System5} \begin{align} \hspace{-0.5cm}\left<\mathcal{\hat P}_m(s)\right> = \frac{\frac{\omega_R^2}{2}}{s\left(s+\frac{\tilde \gamma}{2}\right)\left[\frac{s^2}{\gamma_1}+\frac{\tilde \gamma s}{\gamma_1}+\frac{\frac{\omega_R^2}{2}}{s+\frac{\tilde \gamma}{2}}\left(\frac{2s}{\gamma_1}+1\right)\right]}. \label{kont_Laplace_3} \end{align} To show that the numerical results of Section III give the right stationary solution, we can calculate $\lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m(t)\right>$ from \eqref{kont_Laplace_3} using the relation between limits in Laplace space and real space \begin{align} \lim\limits_{t \rightarrow \infty} g(t) = \lim\limits_{s \rightarrow 0} s \mathcal{L}\left[g(t)\right]. \label{Laplace_limit} \end{align} We find \begin{align} \lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m(t)\right> = \lim\limits_{s \rightarrow 0} s \left<\mathcal{\hat P}_m(s)\right> = 1. \end{align} Therefore the measurement probability in the stationary state is always one, as we have seen in the numerical results. We next transform \eqref{kont_Laplace_3} back to real space in order to get an analytical solution for the time evolution of the measurement probability. This back transformation can be done as in Section IV using the residue theorem.
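The inversion can also be checked numerically, which is a useful sanity check on the rather lengthy closed-form singularities given below. Clearing denominators in \eqref{kont_Laplace_3} gives the equivalent form \begin{align*} \left<\mathcal{\hat P}_m(s)\right> = \frac{\gamma_1\frac{\omega_R^2}{2}}{s\left[s\left(s+\tilde \gamma\right)\left(s+\frac{\tilde \gamma}{2}\right)+\frac{\omega_R^2}{2}\left(2s+\gamma_1\right)\right]}, \end{align*} so the poles are $s=0$ together with the three roots of a cubic polynomial, and $\left<\mathcal{\hat P}_m(t)\right>$ is the sum of the corresponding residues multiplied by $\exp(s_i t)$. A minimal numerical sketch of this partial-fraction inversion, with purely illustrative parameter values and with $\tilde \gamma = \gamma_{\rm TL}+\gamma_1$ as required by the conservation of probabilities used above, is:
\begin{verbatim}
# Numerical inverse Laplace transform of <P_m(s)> by summing residues.
# Parameter values are illustrative only.
import numpy as np

gamma_1  = 1.0                      # decay rate into the measurement state
gamma_TL = 0.3                      # coupling rate to the transmission line
g_t      = gamma_TL + gamma_1       # \tilde\gamma
omega_R  = 0.7                      # effective Rabi frequency

K = gamma_1 * omega_R**2 / 2.0
# <P_m(s)> = K / (s*D(s)) with
# s*D(s) = s^4 + (3/2) g_t s^3 + (g_t^2/2 + omega_R^2) s^2 + K s
poly  = np.array([1.0, 1.5 * g_t, 0.5 * g_t**2 + omega_R**2, K, 0.0])
poles = np.roots(poly)                    # four simple poles, one of them s = 0
dpoly = np.polyder(poly)
res   = K / np.polyval(dpoly, poles)      # residues of K / (s*D(s))

def P_m(t):
    """Measurement probability <P_m(t)> from the residue expansion."""
    return float(np.real(np.sum(res * np.exp(poles * t))))

print(P_m(0.0))    # ~0: no measurement at t = 0
print(P_m(50.0))   # ->1: the stationary value found above
\end{verbatim}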
The singularities of \eqref{kont_Laplace_3} are \begin{widetext} \begin{align*} s_1 &= 0 \\ s_2 &= -\frac{\tilde \gamma}{2} + \frac{\tilde \gamma^2-\omega_R^2}{2\left[54(\tilde \gamma -\gamma_1)\omega_R^2+3\sqrt{36\omega_R^2\tilde\gamma^4+36(\tilde\gamma-3\gamma_1)(5\tilde\gamma-3\gamma_1)\omega_R^4+194\omega_R^6}\right]^{\frac{1}{3}}} \\ &+ \frac{\left(\frac{27\tilde\gamma\omega_R^2}{2}-\frac{27\gamma_1\omega_R^2}{2}+\sqrt{\frac{729}{4}(\gamma_1-\tilde \gamma)^2\omega_R^4+4\left(3\omega_R^2-\frac{3\tilde\gamma^2}{4}\right)^3}\right)^{\frac{1}{3}}}{3 \cdot 2^{\frac{2}{3}}} \\ s_3 &= -\frac{\tilde \gamma}{2} + \frac{\left(1+\rm{i}\sqrt{3}\right)\left(3\omega_R^2+\frac{3\tilde\gamma^3}{4}\right)}{3\cdot 2^{\frac{2}{3}}\left(-\frac{27\gamma_1\omega_R^2}{2}+\frac{27\tilde\gamma\omega_R^2}{2}+\sqrt{4\left(3\omega_R^2-\frac{3\tilde\gamma^2}{4}\right)^3+\left(-\frac{27\gamma_1\omega_R^2}{2}+\frac{27\tilde\gamma\omega_R^2}{2}\right)^2}\right)^{\frac{1}{3}}} \\ &- \frac{\left(1-\rm{i}\sqrt{3}\right)\left(-\frac{27\gamma_1\omega_R^2}{2}+\frac{27\tilde\gamma\omega_R^2}{2}+\sqrt{4\left(3\omega_R^2-\frac{3\tilde\gamma^2}{4}\right)^3+\left(-\frac{27\gamma_1\omega_R^2}{2}+\frac{27\tilde\gamma\omega_R^2}{2}\right)^2}\right)^{\frac{1}{3}}}{6\cdot 2^{\frac{1}{3}}} \\ s_4 & = s_3^{*}, \end{align*} \end{widetext} and the back transformation of \eqref{kont_Laplace_3} is given by \begin{align} \left<\mathcal{\hat P}_m(t)\right> = \frac{\omega_R^2}{2} \sum_{\stackrel{i=1}{i\neq j\neq k}}^3 \frac{\exp\left(-s_i t\right)}{\alpha_i\left(\alpha_i-\alpha_j\right)\left(\alpha_i-\alpha_k,\right)}, \end{align} where $\alpha_i$ are the corresponding residues. Due to the first order of the singularities (all other cases are trivial), the residues are given by \begin{align} Res\left(s_i,\left<\mathcal{\hat P}_m(s)\right>\right) = \lim\limits_{s \rightarrow s_i} \left<\mathcal{\hat P}_m(s)\right>\left(s-s_i\right). \end{align} \section{Analytical solution for the exponentially damped pulse} \label{app:4} In this appendix, we calculate an analytical solution for the exponentially damped pulse. 
We start with the Laplace transformation of the system of equations (25) \begin{widetext} \begin{subequations} \begin{align} s\left<\hat \sigma^{-}(s)\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{-}(s)\right> - \rm{i} \frac{\tilde\omega_R}{2} \left(\left<\mathcal{\hat P}_0(s+\kappa)\right> - \left<\mathcal{\hat P}_1(s+\kappa)\right>\right) \label{Analytic_1} \\ s\left<\hat \sigma^{+}(s)\right> &= - \frac{\tilde \gamma}{2} \left<\hat \sigma^{+}(s)\right>+ \rm{i} \frac{\tilde\omega_R}{2} \left(\left<\mathcal{\hat P}_0(s+\kappa)\right> - \left<\mathcal{\hat P}_1(s+\kappa)\right>\right)\label{Analytic_2} \\ s\left<\mathcal{\hat P}_0(s)\right> &= \gamma_{\rm TL} \left<\mathcal{\hat P}_1(s)\right> - \rm{i} \frac{\tilde\omega_R}{2} \left(\left<\hat \sigma^{-}(s+\kappa)\right> - \left<\hat \sigma^{+}(s+\kappa)\right> \right)+1\label{Analytic_3} \\ s\left<\mathcal{\hat P}_1(s)\right> &= -\tilde \gamma \left<\mathcal{\hat P}_1(s)\right> + \rm{i} \frac{\tilde\omega_R}{2} \left(\left<\hat \sigma^{-}(s+\kappa)\right> - \left<\hat \sigma^{+}(s+\kappa)\right> \right)\label{Analytic_4} \\ s\left<\mathcal{\hat P}_m(s)\right> &= \gamma_1 \left<\mathcal{\hat P}_1(s)\right>\label{Analytic_5}, \end{align} \end{subequations} \end{widetext} with $\tilde\omega_R = \sqrt{2|\alpha|^2\kappa\gamma_{\rm TL}/\pi}$ and where we have used the relation \begin{align} \mathcal{L}\left[g(t) \exp(-\kappa t)\right] = g(s+\kappa), \end{align} which holds for an arbitrary function $g(t)$ whose Laplace transformation exists. To simplify the equations we have to calculate $\left<\hat \sigma^{-}(s+\kappa)\right>$ and $\left<\hat \sigma^{+}(s+\kappa)\right>$, which can be done by multiplying \eqref{Analytic_1} and \eqref{Analytic_2} with $\exp(-\kappa t)$: \begin{align} \left<\hat \sigma^{-}(s+\kappa)\right> &= \frac{-\rm{i}\frac{ \tilde\omega_R^2}{2}}{s+\kappa+\frac{\tilde \gamma}{2}} \left(\left<\mathcal{\hat P}_0(s+2\kappa)\right> - \left<\mathcal{\hat P}_1(s+2\kappa\right)\right> \label{sigma1} \\ \left<\hat \sigma^{+}(s+\kappa)\right> &= \frac{\rm{i}\frac{\tilde \omega_R^2}{2}}{s+\kappa+\frac{\tilde \gamma}{2}} \left(\left<\mathcal{\hat P}_0(s+2\kappa)\right> - \left<\mathcal{\hat P}_1(s+2\kappa)\right>\right). \label{sigma2} \end{align} Putting \eqref{sigma1} and \eqref{sigma2} into \eqref{Analytic_3} leads to \begin{align} \begin{split} s\left<\mathcal{\hat P}_0(s)\right> &= \gamma_{\rm TL} \left<\mathcal{\hat P}_1(s)\right> +1 \\ &\hspace{-0.5cm}- \frac{\frac{\tilde\omega_R^2}{2}}{s+\kappa+\frac{\tilde \gamma}{2}} \left(\left<\mathcal{\hat P}_0(s+2\kappa)\right> - \left<\mathcal{\hat P}_1(s+2\kappa)\right>\right). \end{split} \end{align} To eliminate $\left<\mathcal{\hat P}_0(s)\right>$ in this expression, we can use the conservation of probabilities in Laplace space \begin{align} \left<\mathcal{\hat P}_0(s)\right> &= \frac{1}{s} - \left<\mathcal{\hat P}_1(s)\right> - \left<\mathcal{\hat P}_m(s)\right> \\ \left<\mathcal{\hat P}_0(s+2\kappa)\right> &= \frac{1}{s+2\kappa} - \left<\mathcal{\hat P}_1(s+2\kappa)\right> - \left<\mathcal{\hat P}_m(s+2\kappa)\right>, \end{align} which gives \begin{align} \begin{split} \hspace{-2cm} s\left(\frac{1}{s}-\left<\mathcal{\hat P}_1(s)\right> -\left<\mathcal{\hat P}_m(s)\right>\right)-1 = \gamma_{\rm TL} \left<\mathcal{\hat P}_1(s)\right> \\ - \frac{\frac{\tilde\omega_R^2}{2}}{s+\kappa+\frac{\tilde \gamma}{2}} \left[\frac{1}{s+2\kappa}-2\left<\mathcal{\hat P}_1(s)\right>-\left<\mathcal{\hat P}_m(s)\right>\right]. 
\end{split} \end{align} Additionally, we can use \eqref{Analytic_5} to eliminate $\left<\mathcal{\hat P}_1(s)\right>$ and $\left<\mathcal{\hat P}_1(s+\kappa)\right>$: \begin{align} s \left<\mathcal{\hat P}_m(s)\right> &= \gamma_1 \left<\mathcal{\hat P}_1(s)\right> \\ (s+2\kappa) \left<\mathcal{\hat P}_m(s+2\kappa)\right> &= \gamma_1 \left<\mathcal{\hat P}_1(s+2\kappa)\right>. \end{align} Finally, we end up with the following equation: \begin{align} \begin{split} \hspace{-0.5cm}\left<\mathcal{\hat P}_m(s)\right> &+ f(s) \left<\mathcal{\hat P}_m(s+2\kappa)\right> \\ &= \frac{\frac{\tilde\omega_R^2}{2}}{\left(s+2\kappa\right)\left(s+\kappa+\frac{\tilde \gamma}{2}\right)\left(s + \frac{s\left(\gamma_{\rm TL}+s\right)}{\gamma_1}\right)},\label{overall} \end{split} \end{align} with the rational function \begin{align} f(s) \equiv \frac{\frac{\tilde\omega_R^2}{2}\left(1+\frac{2s+4\kappa}{\gamma_1}\right)}{s+\frac{s\left(\gamma_{\rm TL}+s\right)}{\gamma_1}} \label{OM}. \end{align} We are interested in the measurement probability in the stationary state, so we want to calculate $\lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m\right>(t) $. To do so we use relation \eqref{Laplace_limit}. Taking the limit on the right hand side of \eqref{OM} is straightforward, but the left hand side is more difficult. Taking a closer look at the left hand side we see \begin{align} \begin{split} &\lim\limits_{t \rightarrow \infty} \mathcal{L}^{-1}\left[\left<\mathcal{\hat P}_m(s)\right> + f(s) \left<\mathcal{\hat P}_m(s+2\kappa)\right>\right] \\ = &\underbrace{\lim\limits_{t \rightarrow \infty} \mathcal{L}^{-1} \left[\left<\mathcal{\hat P}_m(s)\right>\right]}_{= \lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m\right>(t) } \\ &\hspace{-0.35cm}+ \underbrace{\lim\limits_{t \rightarrow \infty} \int_0^t {\rm dt'} f(t') \exp\left[-2\kappa(t-t')\right] \left<\mathcal{\hat P}_m(t-t')\right>}_{\equiv (*)}. \end{split} \end{align} The first term gives us the desired limit, while the second one describes a memory kernel that depends on the past of the system. To solve the integral in (*), we first have to transform $f(s)$ into real space. Since it is a rational function with only first order singularities (the other cases are trivial), $f(t)$ can be calculated using the residue theorem \begin{align} f(t') = \sum_i \alpha_i \exp\left(s_i t'\right), \end{align} where $s_i$ are the singularities of the function and $\alpha_i$ the corresponding residues. The singularities are $s_1 = 0$, $s_2 = -(\kappa+\frac{\tilde \gamma}{2})$, and $s_3 = -\tilde \gamma$. Since $\left<\mathcal{\hat P}_m(t-t')\right>$ is bounded by one, the limit of the integral is determined by the exponential parts. $s_2$ and $s_3$ both damp the integrand; therefore, only the first singularity $s_1$ gives a contribution to the limit of the integral. As a result, (*) simplifies to \begin{align} \begin{split} (*) &= \lim\limits_{t \rightarrow \infty} \alpha_1 \int_0^t {\rm dt'} \exp\left[-2\kappa\left(t-t'\right)\right] \left<\mathcal{\hat P}_m(t-t')\right>\\ &\hspace{-0.4cm}\overset{(u = t-t')}{=} \alpha_1 \int_0^{\infty} {\rm du} \exp\left[-2\kappa u\right] \left<\mathcal{\hat P}_m(u)\right>. 
\end{split} \end{align} If we evolve $\left<\mathcal{\hat P}_m(u)\right>$ in a Taylor expansion around zero, we can solve the integral: \begin{align} \begin{split} (*) &= \alpha_1 \sum_{l=0}^{\infty} \frac{\left<\mathcal{\hat P}_m(0)^{(l)}\right>}{l!} \underbrace{\int_0^{\infty} {\rm du} \exp\left[-2\kappa u\right] u^{l}}_{= l!(2\kappa)^{-(l+1)}} \\ &= \sum_{l=0}^{\infty} \frac{\alpha_1}{(2\kappa)^{-(l+1)}} \left<\mathcal{\hat P}_m(0)^{(l)}\right>, \end{split} \end{align} where $\left<\mathcal{\hat P}_m(0)^{(l)}\right>$ denotes the $l$ th time derivative (at $t=0$). Calculating the residue \begin{align} \alpha_1 = \lim\limits_{s \rightarrow s_1} f(s)(s-s_1) = \frac{\tilde\omega_R^2}{2} \frac{1+4\frac{\kappa}{\gamma_1}}{\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1+\frac{\gamma_{\rm TL}}{\gamma_1}\right)} \end{align} and putting this all together in equation \eqref{overall}, we finally end up with an expression for the measurement probability in the stationary state: \begin{align} \begin{split} \lim\limits_{t \rightarrow \infty} \left<\mathcal{\hat P}_m(t)\right> &= \lim\limits_{s \rightarrow 0}\frac{s\frac{\tilde\omega_R^2}{2}}{\left(s+2\kappa\right)\left(s+\kappa+\frac{\tilde \gamma}{2}\right)\left(s + \frac{s\left(\gamma_{\rm TL}+s\right)}{\gamma_1}\right)} \\&\hspace{-1.5cm}- \sum_{l=0}^{\infty} \frac{\tilde\omega_R^2}{2} \frac{1+4\frac{\kappa}{\gamma_1}}{\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1+\frac{\gamma_{\rm TL}}{\gamma_1}\right)} \frac{\left<\mathcal{\hat P}_m(0)^{(l)}\right>}{(2\kappa)^{-(l+1)}}\\ &\hspace{-1.5cm}= \frac{\tilde\omega_R^2}{4\kappa\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1 + \frac{\gamma_{\rm TL}}{\gamma_1}\right)} \\&\hspace{-1.5cm}- \sum_{l=0}^{\infty} \frac{\tilde\omega_R^2}{2} \frac{1+4\frac{\kappa}{\gamma_1}}{\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1+\frac{\gamma_{\rm TL}}{\gamma_1}\right)} \frac{\left<\mathcal{\hat P}_m(0)^{(l)}\right>}{(2\kappa)^{-(l+1)}}. \end{split} \end{align} The expression up to fifth order has the following form \begin{align} \begin{split} \lim\limits_{t \rightarrow \infty} &\left<\mathcal{\hat P}_m(t)\right>\\ \approx &\frac{\tilde\omega_R^2}{4\kappa\left(\kappa+\frac{\tilde \gamma}{2}\right)\left(1 + \frac{\gamma_{\rm TL}}{\gamma_1}\right)} \left(1- \frac{\tilde\omega_R^2}{16\kappa^2}\right). \end{split} \end{align} The validity of the approximation up to fifth order is determined by the ratio $\frac{\alpha}{\kappa}$. The smaller this ratio, the better the approximation (see Fig. \ref{fig:analytic_comparison}). \section{Difference between photon flux and photon number} \label{app:photon_flux} If we take a look at the Langevin equation \eqref{Langevin} we see that the operators $\hat a_{\rm in}$ and $\hat a_{\rm in}^{\dag}$ must have unit $\sqrt{\omega}$, since $\gamma_{\rm TL}$ has units $\omega$. They cannot be unitless as the standard creation and annihilation operators. This is a side effect of input-output theory. The input and output operators are defined as a Fourier transform \eqref{a_in_def} and everything is described in terms of photon flux instead of the actual photon number. This means the value of interest is the photons arriving in a specific time interval and not the overall photon number. E.g in the continuous drive case we get \begin{align} \left<\hat a_{\rm in}^{\dag} \hat a_{\rm in}\right> = \frac{|\alpha|^2 \omega_0}{2\pi}. \end{align} We see that $\left<\hat a_{\rm in}^{\dag} \hat a_{\rm in}\right>$ describes a photon flux. 
The factor $2\pi$ arises from the fact that $\hat a_{\rm in}$ and $\hat a_{\rm in}^{\dag}$ are given by a Fourier transformation, which carries a prefactor of $1/\sqrt{2\pi}$ for each operator. Here we simply keep track of this factor by carrying it through the calculations. It would also be possible to redefine $\alpha$ as $\tilde \alpha = \alpha/2\pi$, but this does not change the final results. Note also that $|\alpha|^2$ in this case does not describe the photon number as usual, but an amplitude of the incoming photon flux. To make a statement about the actual photon number we additionally need a time interval of interest, e.g. the measurement time $t_m$. \section{Time dynamics of the rate equations} \label{app:2} Here we want to study the time evolution of the system of rate equations \eqref{rate_1}-\eqref{rate_3} for a single measurement event ($\gamma_{\rm res} = 0$). Using a computer algebra package, we obtain the following solution for the occupation probability of the excited state for initial conditions $P_0 = 1$ and $P_1 = 0$: \begin{align} P_1(t)= K {\rm e}^{-\beta t} \sinh \left(\Gamma t\right), \label{Gl:A1} \end{align} with the constant \begin{align} K = \frac{8\gamma_{\rm TL}\tilde\omega}{\sqrt{\left(\gamma_{\rm TL}+\gamma_1\right)^4+16\gamma_{\rm TL}^2\left(\gamma_{\rm TL}+\gamma_1\right)\tilde \omega + 64 \gamma_{\rm TL}^2 \tilde \omega^2}} \end{align} and the rates \begin{align} \Gamma &= \frac{\sqrt{16 \gamma_{\rm TL}^2 \tilde \omega \tilde\gamma+64 \gamma_{\rm TL}^2 \tilde \omega^2+\tilde \gamma^4}}{2 \tilde \gamma} \\ \beta &= \frac{\tilde \gamma^2+8 \gamma_{\rm TL} \tilde \omega}{2 \tilde \gamma}, \end{align} with $\tilde \omega \equiv |\alpha|^2\omega_0/2\pi$. Integration of \eqref{Gl:A1} from $t'=0$ to $t'=t$ and multiplication with $\gamma_1$, together with the boundary condition $P_m(0) = 0$, lead to an expression for the measurement probability: \begin{align} P_m(t) = \frac{\gamma_1 K}{\beta^2-\Gamma^2} \left[\Gamma-\Gamma \cosh(\Gamma t){\rm e}^{-\beta t}-\beta \sinh(\Gamma t){\rm e}^{-\beta t}\right] \label{Gl:A2}. \end{align} In Section \ref{sec:4} we use expression \eqref{Gl:A2} to compare the rate equation approach with the mean field approach. \end{appendix} \end{document}
\begin{document} \vspace*{-.8in} \begin{center} {\LARGE\em On the Continuity of Bounded Weak Solutions to Parabolic Equations and Systems with Quadratic Growth in Gradients.} \end{center} \begin{center} {\sc Dung Le}{\footnote {Department of Mathematics, University of Texas at San Antonio, One UTSA Circle, San Antonio, TX 78249. {\tt Email: [email protected]}\\ {\em Mathematics Subject Classifications:} 35K40, 35B65, 42B37. \hfil\break\indent {\em Key words:} Cross diffusion systems, weak solutions regularity.}} \end{center} \begin{abstract} We establish the pointwise continuity of bounded weak solutions of a class of scalar parabolic equations and strongly coupled parabolic systems. Our approach to the regularity theory of parabolic scalar equations is quite elementary, and its application to strongly coupled systems does not require higher $L^p$ integrability of derivatives. \end{abstract} \section{Introduction} \label{intro}\eqnoset Let $\Og$ be a bounded domain in ${\rm I\kern -1.6pt{\rm R}}^N$, $N\ge2$, with smooth boundary $\partial \Og$ and let $T$ be a positive number. Denote $Q=\Og\times(0,T)$ and by $z=(x,t)$ a generic point in $Q$. In the first part of this paper, we consider the following scalar parabolic equation \beqno{0eq1}v_t=\Div(\mathbf{a} Dv+\mb(v))+\me(Dv)+\mathbf{g}(v) \quad \mbox{in $Q$}.\end{equation} Here, $\mathbf{a},\me,\mathbf{g}$ are scalar functions and $\mb\in{\rm I\kern -1.6pt{\rm R}}^N$. The equation is regular parabolic in the sense that $\mathbf{a}$ is bounded and $\mathbf{a}\ge\llg_0>0$ for some constant $\llg_0$. Under suitable integrability conditions on the data of this equation, much weaker than those in the literature (e.g. \cite{LSU,Lieb}), we will show that any bounded weak solution $v$ of \mref{0eq1} is pointwise (or H\"older) continuous. Importantly, we allow $\me(Dv)$ to have quadratic growth in $Dv$. That is, $\me(Dv)\le C(|Dv|^2+1)$. In the second part, we will apply the theory for scalar equations to systems of $m$ equations, $m\ge2$, with linear or quadratic growth in gradients \beqno{fullsys0aa}u_t=\Div(\mA(u)Du)+\me(Du)+f(u) \end{equation} in $Q$. Here, $u=[u_i]_{i=1}^m$, $\mA(u)$ is an $m\times m$ matrix and $\me,f$ are vectors in ${\rm I\kern -1.6pt{\rm R}}^m$. We will always assume that there is $\llg_0>0$ such that for all $u\in{\rm I\kern -1.6pt{\rm R}}^m$, $\zeta\in{\rm I\kern -1.6pt{\rm R}}^{mN}$ and $i=1,\ldots,m$ \beqno{ellcondsys0} \myprod{\mA(u)\zeta,\zeta}\ge \llg_0|\zeta|^2.\end{equation} Interestingly, we are able to establish the regularity of bounded weak solutions to parabolic systems on {\em planar domains}, where the 'hole filling' technique of Widman \cite{wid} for elliptic systems (e.g. \cite{BF}) does not seem to be extendable. We also consider {\em triangular systems} on domains of {\em any dimension} and assert that bounded weak solutions are pointwise continuous. Our method does not require higher $L^p$ integrability of derivatives as in classical works (e.g. \cite{GiaS}). In \refsec{techlem} we recall a simple parabolic version of the usual Sobolev inequality. We discuss scalar equations in \refsec{scalareqn}. We conclude the paper with applications to systems in \refsec{syseqn}. \section{Some technical lemmas}\label{techlem}\eqnoset In this section, we will present some technical results which will be used throughout this paper. We recall the following simple parabolic version of the usual Sobolev inequality. \blemm{parasobo} Let $r=2/N$ if $N>2$ and $r\in(0,1)$ if $N\le 2$.
If $g,G$ are sufficiently smooth then $$\itQ{\Og\times I}{|g|^{2r}|G|^2}\le C\sup_I\left(\iidx{\Og}{|g|^2}\right)^r\left(\itQ{\Og\times I}{(|DG|^2+|G|^2)}\right). $$ If $G=0$ on $\partial\Og$ then we can drop the integrand $|G|^2$ on the right hand side. In particular, if $g=G$ we have $$\itQ{\Og\times I}{|g|^{2(1+r)}}\le C(N,|\Og|)\sup_I\left(\iidx{\Og}{|g|^2}\right)^r\left(\itQ{\Og\times I}{(|Dg|^2+|g|^2)}\right). $$ \end{lemma} Putting $g=G=|u|^p$ (with $p>1$) and using Young's inequality, we can see that for some constant $c_0>0$ \beqno{pSobo}\left(\itQ{Q}{|u|^{2p(1+\frac 2N)}}\right)^\frac{N}{N+2}\le c_0\left(\sup_{(0,T)}\iidx{\Og}{|u|^{2p}}+\itQ{Q}{|u|^{2p-2}|Du|^2}\right).\end{equation} In addition, if $\cg\in(1,1+2/N)$, then a use of Young's and Sobolev's inequalities also gives that for any $\eg>0$ there is a constant $C(\eg)$ such that \beqno{pSobo1}\|u\|_{L^{2p\cg}(Q)}\le \eg\left(\sup_{(0,T)}\iidx{\Og}{|u|^{2p}}+\itQ{Q}{|u|^{2p-2}|Du|^2}\right)^\frac{1}{2p}+C(\eg)\|u\|_{L^{2p}(Q)}.\end{equation} \section{On scalar equations} \label{scalareqn}\eqnoset \newcommand{\cM}{{\cal M}} We now revisit the regularity theory of scalar equations with integrable coefficients, in the class $\cM(\Og,T)$ defined below. These improvements serve our purposes well in the next section. For any $x_0\in\Og$, $R>0$ and $t_0\ge 4R^2$, we define $\Og_R(x_0)=\Og\cap B_R(x_0)$ and $Q_R(x_0)=\Og_R(x_0)\times(t_0-R^2,t_0)$. If $x_0,t_0$ are understood from the context, we simply drop them from the notation. {\bf Definition of the class $\cM$:} We say that a function $f:Q\to {\rm I\kern -1.6pt{\rm R}}$ (or ${\rm I\kern -1.6pt{\rm R}}^m$) is of class $\cM(\Og,T)$ if for any $\eg>0$ there is $R(\eg)>0$ such that $\forall R\in(0,R(\eg))$ either $$\mathbf{i)}\quad \sup_{(0,T)}\|f\|_{L^{\frac{N}{2}}(\Og_R)} <\eg,$$ or $$\mathbf{ii)}\quad \|f\|_{L^{\frac{N+2}{2}}(Q_R)} <\eg.$$ Alternatively, we also define the class $\bbM(\Og,T)$. {\bf Definition of the class $\bbM$:} We say that a function $f:Q\to {\rm I\kern -1.6pt{\rm R}}$ (or ${\rm I\kern -1.6pt{\rm R}}^m$) is of class $\bbM(\Og,T)$ if, for some $p_0>N/2$, at least one of the quantities i) $\sup_{(0,T)}\|f\|_{L^{p_0}(\Og)}$ or ii) $\|f\|_{L^{p_0+1}(Q)}$ is finite. By H\"older's inequality, it is easy to see that $\bbM(\Og,T)\subset\cM(\Og,T)$. \subsection{Global boundedness and a local estimate} We consider the scalar equation \beqno{diagSKT} \msysa{v_t=\Div(ADv +B)+G& \mbox{in $Q$,}\\ v=v_0 & \mbox{in $\Og$.}} \end{equation} Here $A,G$ are scalar functions. As usual, we will assume that there is a positive number $\llg_0$ such that \beqno{ellcond} A\in\cM(\Og,T) \mbox{ and }A\ge \llg_0.\end{equation} We also assume that there is a function $\Fg\in\cM(\Og,T)$ such that $\Fg\ge\llg_0$ on $Q$ and \beqno{grate} |B|^2, |G| \le \Fg.\end{equation} By using Steklov averages, a weak solution of \mref{diagSKT} satisfies for all $\eta\in C^1(Q)$ \beqno{weakeqn} \itQ{Q}{v_t\eta+(ADv+B)D\eta}=\itQ{Q}{G\eta}.\end{equation} Applying the usual Moser iteration argument, we derive \blemm{West} Assume \mref{ellcond}, the growth condition \mref{grate}, and let $v$ be a solution of \mref{weakeqn}. Then there is a constant $C$ such that \beqno{keyWesta}\sup_{Q}|v|\le C \left(\itQ{Q}{v^2}\right)^\frac12.\end{equation} \end{lemma} \newcommand{\itQbar}[2]{\displaystyle{\int\hspace{-2.5mm}\int_{#1}~#2~d\bar{z}}} The proof of this lemma is based on a Moser iteration technique, testing the equation with $|v|^{2p-2}v$ similarly to the local version below. We now discuss the local estimates.
This type of estimates will be useful for later investigations on the regularity of weak solutions. We will assume that the function $A$ is bounded. Note that $A$ may depend on $v$ in general. But $|v|$ is globally bounded by the above lemma. \blemm{Westloc} Assume \mref{ellcond}, the growth condition \mref{grate} and that $A$ is bounded. Let $v$ be a solution of \mref{weakeqn} and $B_R$ be a ball in ${\rm I\kern -1.6pt{\rm R}}^N$. Then there is a constant $C$ such that \beqno{keyWest}\sup_{Q_R}|v|\le C\left( \frac{1}{R^{N+2}}\itQ{Q_{2R}}{v^2}\right)^\frac12.\end{equation} \end{lemma} We will make use of the Sobolev inequality for any $q\in(1,2N/(N-2))$ and $\eg>0$ there is $C(\eg)$ such that \beqno{Sobo1}\left(\iidx{\Og}{|v|^q}\right)^\frac{2}{q}\le \eg\iidx{\Og}{|Dv|^2}+C(\eg)\iidx{\Og_R}{|v|^2}.\end{equation} If $q=2N/(N-2)$ then we can only assert that \beqno{Sobo1a}\left(\iidx{\Og}{|v|^{\frac{2N}{N-2}}}\right)^\frac{N-2}{N}\le C\iidx{\Og}{|Dv|^2}+C\iidx{\Og_R}{|v|^2}.\end{equation} The proof is the standard Moser iteration argument by testing \mref{weakeqn} with $|v|^{2p-2}v\fg^2\eta$ with some $p\ge1$ and $\fg,\eta$ are respectively cutoff functions for concentric balls $B_R,B_{2R}$ and intervals $[-2R^2,-R^2]$, $[-R^2,0]$ with $|D\fg|\le C/R$, $|D\eta|\le C/R^2$. Let $V=|v|^{2p}$. Using \mref{grate} and Young's inequality it is standard to derive (see \cite{Lieb,dleJMAA}) $$\sup_{(0,T)}\iidx{\Og}{|V|^2\fg^2\eta^2}+\itQ{Q}{|DV|^2\fg^2\eta^2}\le C\itQ{Q}{\Fg|V|^2\fg^2\eta^2}+\frac{C}{R^2}\itQ{Q}{|V|^2\fg^2\eta^2}.$$ By H\"older inequality with $q=2N/(N-2)$ so that $q/(q-2)=N/2$ and because $\Fg\in\cM(\Og,T)$ we see that, assuming i) in the definition of $\cM$, for any $\eg>0$ if $R$ is sufficiently small then $$ \begin{array}{lll}\itQ{Q_{2R}}{\Fg|V|^2\fg^2\eta^2}&\le& \displaystyle{\int}_{-2R^2}^{0}{\left(\iidx{\Og_R}{\Fg^\frac{q}{q-2}}\right)^{1-\frac{2}{q}}\left(\iidx{\Og}{|V\fg\eta|^q}\right)^{\frac{2}{q}}}dt\\ &\le& \eg\displaystyle{\int}_{-2R^2}^{0}{\left(\iidx{\Og}{|V\fg\eta|^q}\right)^{\frac{2}{q}}}dt.\end{array}$$ Because $|D\fg|\le C/R$, we derive $$\eg\left(\iidx{\Og}{|V\fg\eta|^q}\right)^\frac{2}{q}\le \eg\iidx{\Og}{|DV|^2\fg^2\eta^2}+\frac{C(\eg)}{R^2}\iidx{\Og_{2R}}{|V|^2}.$$ If $\eg$ is sufficiently small in terms of $\llg_0$ then it follows that \beqno{moser1}\sup_{(-R^2,0)}\iidx{\Og_R}{|V|^2}+\itQ{Q_R}{|DV|^2}\le \frac{C}{R^2}\itQ{Q_{2R}}{|V|^2}.\end{equation} By the parabolic Sobolev inequality for $\cg=1+2/N>1$, we obtain for any $p\ge1$ $$\left(\itQ{Q_R}{|v|^{2p\cg}}\right)^\frac{1}{\cg}\le \frac{C(\eg)}{R^2}\itQ{Q_{2R}}{|v|^{2p}}.$$ A standard Moser iteration argument (e.g. \cite{Lieb}) implies the local estimate of the lemma. If ii) in the definition of $\cM$ holds then for $\cg=1+2/N$ we can also use H\"older inequality ($\cg'=(N+2)/2$) to have for $R$ small \beqno{wholder}\itQ{Q_{2R}}{\Fg V^2}\le \left(\itQ{Q_{2R}}{\Fg^{\cg'}}\right)^\frac{1}{\cg'}\left(\itQ{Q_{2R}}{V^{2\cg}}\right)^\frac{1}{\cg}\le\eg\left(\itQ{Q_{2R}}{V^{2\cg}}\right)^\frac{1}{\cg}.\end{equation} Therefore, we can use the above estimate and the parabolic Sobolev inequality to treat the last integral to obtain the local estimate \mref{moser1} again and the proof can go on. 
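For the reader's convenience, we also record the bookkeeping behind this last step; this is the standard Moser iteration and is only sketched here. Choosing radii $R_k=(1+2^{-k})R$ and exponents $p=\cg^k$ with $\cg=1+\frac2N$, the cutoff functions satisfy $|D\fg|\le C2^k/R$, and the estimate above gives $$\left(\itQ{Q_{R_{k+1}}}{|v|^{2\cg^{k+1}}}\right)^\frac{1}{\cg}\le \frac{C4^k}{R^2}\itQ{Q_{R_k}}{|v|^{2\cg^{k}}}.$$ Setting $Y_k=\left(\itQ{Q_{R_k}}{|v|^{2\cg^{k}}}\right)^{\cg^{-k}}$, this reads $Y_{k+1}\le \left(C4^kR^{-2}\right)^{\cg^{-k}}Y_k$. Since $\sum_{k\ge0}\cg^{-k}=\frac{N+2}{2}$ and $\sum_{k\ge0}k\cg^{-k}<\infty$, letting $k\to\infty$ yields $$\sup_{Q_R}|v|^2=\lim_{k\to\infty}Y_k\le CR^{-(N+2)}\itQ{Q_{2R}}{|v|^2},$$ which is \mref{keyWest} after taking square roots.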
If $\Fg\in\cM(\Og,T)$, depending on whether i) or ii) in its definition holds, we define a positive function $\nu$ of $R\ge0$ by \beqno{nudef}\nu(R):=\sup_{(-R^2,0)}\left(\iidx{B_R}{\Fg^\frac{N}{2}}\right)^\frac{2}{N} \mbox{ or }\left(\itQ{Q_R}{\Fg^\frac{N+2}{2}}\right)^\frac{2}{N+2}.\end{equation} It is clear that $\nu$ is increasing and continuous at $0$ ($\nu(0)=0$). Also, because either $$\itQ{Q_R}{\Fg}\le \int_{-R^2}^0 \left(\iidx{B_R}{\Fg^\frac{N}{2}}\right)^\frac{2}{N}(R^{N})^\frac{N-2}{N}dt\le \nu(R)R^N$$ or $$\itQ{Q_R}{\Fg}\le \left(\itQ{Q_R}{\Fg^\frac{N+2}{2}}\right)^\frac{2}{N+2}(R^{N+2})^\frac{N}{N+2}=\nu(R)R^N,$$ we observe that in both cases \beqno{Fglocbound} \itQ{Q_R}{\Fg}\le \nu(R)R^N.\end{equation} \subsection{H\"older continuity} We will study the H\"older regularity in this subsection. Note that the bounds for the H\"older norms and exponents will depend only on the generic constants and the parameters in the definition of $\cM(\Og,T)$. This fact will play a crucial role when we estimate the derivatives which appear in cross diffusion systems. \btheo{blemm} Assume that $\mathbf{a}\ge\llg_0$, $\mathbf{a}$ is bounded and $|\mb|^2,|\mathbf{g}|\le\Fg\in \cM(\Og,T)$. Let $v$ be a bounded weak solution of \beqno{eq1}v_t=\Div(\mathbf{a} Dv+\mb)+\me(Dv)+\mathbf{g}.\end{equation} If $\me(Dv)\le \eg_*|Dv|^2 +\Fg$ for some $\eg_*>0$ with $\eg_*\sup_Q|v|$ sufficiently small compared to $\llg_0$, then $v$ is pointwise continuous. The continuity of $v$ depends on the continuity of the integrals in \mref{nudef} with respect to the measure of their domains. \end{theorem} For simplicity we assume first that $\mb\equiv0$. The case $\mb\ne0$ is similar and will be discussed in \refrem{brem} after this proof. The idea is based on that of \cite{dleNA}. We present the details and the nontrivial modifications. Fixing any $x_0\in\Og$, $t_0>0$ and $4R^2<t_0$, we denote $Q_{iR}=\Og_{iR}\times(t_0-iR^2,t_0)$ where $\Og_R=\Og\cap B_R(x_0)$. Let $M_i=\sup_{Q_{iR}}v$, $m_i=\inf_{Q_{iR}}v$ and $\og_i=M_i-m_i$. For some number $\nu_*>0$ to be determined later and the function $\nu$ as in \mref{nudef}, we define $\dg(R)=\nu_*\sqrt{\nu(R)}$ and $$ N_1(v)=2(M_4-v)+\dg(R),\quad N_2(v)=2(v-m_4)+\dg(R),$$ $$w_1(v)=\log\left(\frac{\og_4+\dg(R)}{N_1(v)}\right),\quad w_2(v)=\log\left(\frac{\og_4+\dg(R)}{N_2(v)}\right).$$ We will prove that either $w_1$ or $w_2$ is bounded from above on $Q_{2R}$ by a constant $C$ {\em independent of} $R$. This implies a decay estimate for some $\eg\in(0,1)$ and all $R>0$ \beqno{decay}\og_2\le \eg \og_4 + C\dg(R). \end{equation} It is standard to iterate \mref{decay} to obtain (see \cite[Lemma 8.23]{GT}) $$\og(R)\le C\left[\left(\frac{R}{R_0}\right)^\ag\og(R_0)+\dg(R^\mu R_0^{1-\mu})\right]\quad \forall R\in(0,R_0)$$ for any $\mu\in(0,1)$ and some $R_0,\ag>0$. This gives the continuity of $v$ as $\lim_{R\to0}\dg(R)=0$. To see \mref{decay}, if either $w_1$ or $w_2$ is bounded from above by $C>0$ in $Q_{2R}$ then either $$\og_4+\dg(R)\le 2C(\og_4+m_4-v)+C\dg(R)\mbox{ or }\og_4+\dg(R)\le 2C(\og_4+v-M_4)+C\dg(R).$$ Taking the supremum (respectively infimum) over $Q_{2R}$ and replacing $m_4$ by $m_2$ (respectively $M_4$ by $M_2$), we obtain $\og_2\le \eg \og_4+(C-1)\dg(R)$ for $\eg=\frac{2C-1}{2C}<1$. This yields \mref{decay}. Thus, we just need to show that either $w_1$ or $w_2$ is bounded from above on $Q_{2R}$. Before proving this, we note the following crucial property of the functions $w_1,w_2$: $w_1\le 0$ if and only if $w_2\ge0$.
Indeed, \beqno{alt}w_1\le0 \Leftrightarrow \og_4+\dg(R)\le 2(M_4-v)+\dg(R) \Leftrightarrow 2(v-m_4)+\dg(R)\le \og_4+\dg(R)\Leftrightarrow w_2\ge 0. \end{equation} {\bf Proof:~~} For any $\eta\in C^1(Q)$ and $\eta\ge0$, observe that $Dw_1=\frac{2Dv}{N_1(v)}$, $(w_1)_t=\frac{2v_t}{N_1(v)}$ and $Dw_2=-\frac{2Dv}{N_2(v)}$, $(w_2)_t=-\frac{2v_t}{N_2(v)}$. So, by multiplying the equation of $v$ by $\eta/N_1(v)$ and $-\eta/N_2(v)$ and writing $w_i, N_i(v)$ respectively by $w, N(v)$ we obtain \beqno{eq2a} \iidx{\Og}{\frac{\partial w}{\partial t}\eta}+ \iidx{\Og}{\myprod{\mathbf{a} Dw,D\eta}}+\iidx{\Og}{\myprod{\mathbf{a} Dv,\frac{\eta Dv}{N^2(v)}}}=2\iidx{\Og}{\frac{\me(Dv)\pm \mathbf{g}}{N(v)}\eta}.\end{equation} Because $\myprod{\mathbf{a} Dv, Dv}\ge\llg_0|Dv|^2$ and the assumption $\me(Dv)\le \eg_*|Dv|^2+\Fg$ $$\left|\frac{\me(Dv)}{N(v)}\right| \le N(v)\frac{\eg_*|Dv|^2+\Fg}{N^2(v)}\le \eg_*(4\sup_Q|v|+1)\frac{|Dv|^2}{N^2(v)}+\frac{\Fg}{N(v)},$$ assuming $\nu(R_0)<1$, we can absorb (and discard) the integral of $\frac{\me(Dv)\eta}{N(v)}$ into that of $\myprod{\mathbf{a} Dv,\eta Dv/N^2(v)}$ if $\eg_*\sup_Q|v|$ is sufficiently small (compared to $\llg_0$) to get \beqno{eq2}\iidx{\Og}{\frac{\partial w}{\partial t}\eta}+ \iidx{\Og}{\myprod{\mathbf{a} Dw,D\eta}}\le 4\iidx{\Og}{\frac{|\Fg|}{N(v)}\eta}.\end{equation} Testing \mref{eq2} with $(w^+)^{2p-1}\eta^2$ as in \reflemm{Westloc} with $G=\Fg/N(v)$. Because $\llg_0\le \mathbf{a}$ and $\mathbf{a}$ is bounded and since $N(v)\ge \nu_*\nu(R)$ and by the definition \mref{nudef} of $\nu$ then either $$\|G\|_{L^\frac{N}{2}(\Og_R)}\le \frac{1}{\nu_*\nu(R)}\|\Fg\|_{L^\frac{N}{2}(\Og_R)}=\frac{1}{\nu_*}$$ or $\|G\|_{L^\frac{N+2}{2}(Q_R)}\le \frac{1}{\nu_*\nu(R)}\|\Fg\|_{L^\frac{N+2}{2}(Q_R)}\le \frac{1}{\nu_*}$. Thus, we see that $G\in \cM(\Og,T)$ if $\nu_*$ large. By \reflemm{Westloc}, which applies if $1/\nu_*$ is sufficiently small, we find a constant $C$ such that \beqno{supw}\sup_{\Og_{2R}\times(t_0-2R^2,t_0)} w^+\le C\left(\frac{1}{R^{N+2}}\itQ{\Og_{4R}\times(t_0-4R^2,t_0)}{w^2}\right)^\frac12.\end{equation} If we can show that for any $R>0$ there is a constant $C$ such that \beqno{wbound} \frac{1}{R^{N+2}}\itQ{\Og_{4R}\times(t_0-4R^2,t_0)}{w^2}\le C\end{equation} then this implies $w^+$ is bounded on $Q_{2R}$. So, the decay estimate \mref{decay} holds. Let $\eta$ be a cut-off function for $B_{2R}, B_{4R}$. Replacing $\eta$ in \mref{eq2a} by $\eta^2$ (keeping the third term on the left hand side and using the assumption that $\eg_*\sup_Q|v|$ is small again), we get \beqno{k1}\frac{d}{dt}\iidx{\Og}{w\eta^2}+\iidx{\Og}{\mathbf{a} |Dw|^2\eta^2}\le \iidx{\Og}{\mathbf{a}|Dw|\eta|D\eta|}+4\iidx{\Og}{\frac{|\Fg|}{N(v)}\eta^2} \end{equation} Applying Young's inequality we derive (as $|D\eta|\le C/R$ and $\mathbf{a}$ is bounded) \beqno{k2}\begin{array}{lll}\frac{d}{dt}\iidx{\Og}{w\eta^2}+\iidx{\Og}{\mathbf{a} |Dw|^2\eta^2}&\le& \frac{1}{R^2}\iidx{\Og}{\mathbf{a}}+4\iidx{\Og}{\frac{|\Fg|}{N(v)}\eta^2}\\&\le& CR^{N-2}+4\iidx{\Og}{\frac{|\Fg|}{N(v)}\eta^2}.\end{array} \end{equation} Set $I_*=[t_0-4R^2, t_0-2R^2]$, $Q_*=B_{2R}\times I_*$ and $Q_v=\{(x,t)\in Q_*\,:\,\; w_1\le0\}$. It is easy to see that $w_2\le0$ on $Q_*\setminus Q_v$ (see \mref{alt}). Therefore one of $w_1^+,w_2^+$ must vanish on a subset $Q^0$ of $Q_*$ with $|Q^0|\ge \frac12|Q_*|$. We denote by $w$ such function. Let $Q_t^0$ be the slice $Q^0\cap (B_{2R}\times\{t\})$ then $Q^0=\cup_{t\in I_*} Q_t^0$. 
For $t\in I_*$ let $$\Og^0_t=\{x\,:\, w^+(x,t)=0\}, \quad m(t)=|Q^0_t|.$$ The fact that $|Q^0|\ge \frac12|Q_*|$ implies $\int_{I_*}m(t)dt\ge \frac12 R^{N+2}$. We now set $$V(t)=\frac{\iidx{\Og}{w\eta^2}}{\iidx{\Og}{\eta^2}}.$$ By the weighted Poincar\'e inequality (\cite[Lemma 3]{Moser}) $$\iidx{\Og}{(w-V)^2\eta^2}\le CR^2\iidx{\Og}{|Dw|^2\eta^2}.$$ Reducing the integral on the left to the set $Q^0_t$ where $w\le0$ (so that $V^2\le (w-V)^2$), we have $$V^2(t)m(t)\le CR^2\iidx{\Og}{|Dw|^2\eta^2}.$$ Since $N(v)\ge \nu_*\nu(R)$ on $Q_{4R}$, the above estimate and \mref{k2} imply that ($V'$ denotes the $t$ derivative) \beqno{k3}R^N V'(t)+\frac{1}{R^2}V^2(t)m(t)\le CR^{N-2}+\frac{4}{\nu_*\nu(R)}\iidx{\Og}{|\Fg|\eta^2}.\end{equation} Because $\nu$ is given by \mref{nudef}, we also get from \mref{Fglocbound} \beqno{mgest}\|G\|_{L^1(Q_R)}\le \frac{1}{\nu_*\nu(R)}\itQ{Q_R}{\Fg}\le CR^N.\end{equation} We show that $V(t_1)$ is bounded for some $t_1\in I_*$. Indeed, suppose $V(t)\ge A>0$ in $I_*$. We have from \mref{k3} $$ R^{N+2}\frac{V'(t)}{V^2(t)}+m(t)\le \frac{C}{A^2}\left(R^N+\frac{R^2}{\nu_*\nu(R)}\iidx{\Og}{|\Fg|\eta^2}\right).$$ Because $\int_{I_*}m(t)dt\ge \frac12 R^{N+2}$ and $|I_*|\sim R^2$, we integrate this over $I_*$ and use \mref{mgest} to see that $$R^{N+2}\le \int_{I_*}m(t)dt\le R^{N+2}\left(\frac{2}{A}+\frac{C}{\nu_*A^2}\right).$$ By choosing $A$ large we get a contradiction. So, we must have $V(t_1)\le A$ for some $t_1\in I_*$. Integrating \mref{k2} over $[t_1,t_2]$ for any $t_2\in I_0=[t_0-2R^2, t_0]$, we have $$V(t_2)\iidx{\Og\times\{t_2\}}{\eta^2}+\itQ{\Og\times I_0}{\mathbf{a} |Dw|^2\eta^2}\le CR^N +V(t_1)\iidx{\Og\times\{t_1\}}{\eta^2}.$$ This implies that $V(t_2)\le C$ for all $t_2\in I_0$ and $\itQ{\Og\times I_0}{\mathbf{a} |Dw|^2\eta^2}\le CR^N$. Since we can always assume that $\og_4\ge \nu(R)$ (otherwise there is nothing to prove), $V(t)$ is bounded from below by $-\log(\nu_*)$. Hence, $|V(t)|$ is bounded. By Poincar\'e's inequality again we have $$\itQ{\Og\times I_0}{(w-V)^2\eta^2}\le CR^2\itQ{\Og\times I_0}{|Dw|^2\eta^2}\le CR^{N+2}.$$ Since $|V(t)|$ is bounded on $I_0$, replacing $R$ by $2R$, the above implies the desired \mref{wbound} and concludes our proof. \eproof \brem{brem} The assertion still holds if $\mb\ne0$ and $|\mb|^2\in\cM(\Og,T)$. Indeed, there will be an extra term in our argument. Namely, replacing $\eta$ by $\eta/N(v)$ as before we have the following extra terms in \mref{eq2a} $$\iidx{\Og}{\myprod{\mb, \frac{D\eta}{N(v)}+\frac{\eta Dv}{N^2(v)}}}=\iidx{\Og}{\myprod{\frac{\mb}{N(v)}, D\eta+\frac12 Dw \eta}}.$$ We take $\eta$ to be $|w|^{2p-2}w\eta^2$ and think of $B_1:= \mb/N(v), B_2:=\frac12\mb/N(v)$ in $$w_t=\Div(ADw+B_1)+B_2Dw+G$$ (compared with \mref{diagSKT}). Note again that $Dw=2Dv/N(v)$. Because $|\mb|^2$, $|B_1|^2,|B_2|^2\le\Fg$, Moser's technique applies as before. We get the local estimate \mref{supw} and the proof can go on (we can redefine $N(v)$ such that $N^2(v)\ge \nu_*\nu(R)$ because certainly $\nu(R)^\frac{1}{2}\ge \nu(R)$). \erem \brem{FgVrem} It is worth noting that if $\Fg$ verifies \mref{wholder} then \mref{supw} holds. Once this is true we need only \mref{mgest}, which is valid if the $L^1(Q)$ norm of $\Fg$ satisfies $\|\Fg\|_{L^1(Q_R)}\le \nu(R)R^N$ (see \mref{Fglocbound}).
\erem Since $\bbM(\Og,T)\subset \cM(\Og,T)$, a similar argument shows that we can take $\nu_*=1$ and $\nu(R)=R^\cg$ for some appropriate $\cg>0$ to obtain a stronger version of \reflemm{blemm}. \blemm{bholderlemm} Assume that $\mathbf{a}\ge\llg_0$, $\mathbf{a}$ is bounded and $|\mb|^2,|\mathbf{g}|\le\Fg\in \bbM(\Og,T)$. Let $v$ be a bounded weak solution of $$v_t=\Div(\mathbf{a} Dv+\mb)+\me(Dv)+\mathbf{g}.$$ If $\me(Dv)\le \eg_*|Dv|^2 +\Fg$ for some $\eg_*>0$ with $\eg_*\sup_Q|v|$ small compared to $\llg_0$, then $v$ is H\"older continuous. Its H\"older norm is bounded in terms of the $L^{p_0}(\Og)$ (or $L^{p_0+1}(Q)$) norms of $|\mb|^2,\mathbf{g}$. \end{lemma} It is also important to mention the following. \bcoro{Dvloc} Assume that $\mathbf{a}=\mathbf{a}(v)$ is H\"older continuous in $v$ and $\mathbf{a}\ge\llg_0$ and that $|\mb|^2,\me$ are as in \reftheo{blemm}. Let $v$ be a bounded weak solution of $$v_t=\Div(\mathbf{a}(v) Dv+\mb)+\me(Dv)+\mathbf{g}.$$ If $\mathbf{g}\in L^\infty_{loc}(Q)$ then $Dv$ is H\"older continuous and locally bounded. \end{corol} {\bf Proof:~~} Knowing that $v$ is continuous, and so is $\mathbf{a}(v)$, we can use \cite[Theorem 3.2]{GiaS}, applied to scalar equations, to see that $Dv$ is H\"older continuous and thus locally bounded. Note that we do not have to use the $L^p$ estimate of $Dv$ here once we know that $v$ is continuous. This continuity of $v$ suffices to establish \cite[(3.4) in the proof of Proposition 3.1]{GiaS}, which gives the decay estimate used in the proof of \cite[Theorem 3.2]{GiaS} to show that $Dv$ is H\"older continuous. \eproof \section{Applications to systems} \label{syseqn}\eqnoset In this section, we apply the theory to the system \beqno{fullsys0a}u_t=\Div(\mA(u)Du)+\me(Du)+f(u) \end{equation} in $Q$. Here, $u=[u_i]_{i=1}^m$, $\mA(u)$ is an $m\times m$ matrix and $\me,f$ are vectors in ${\rm I\kern -1.6pt{\rm R}}^m$. We will always assume that there is $\llg_0>0$ such that for all $u\in{\rm I\kern -1.6pt{\rm R}}^m$, $\zeta\in{\rm I\kern -1.6pt{\rm R}}^{mN}$ and $i=1,\ldots,m$ \beqno{ellcondsys} \myprod{\mA(u)\zeta,\zeta}\ge \llg_0|\zeta|^2 \mbox{ and }\ag_{ii}(u)\ge \llg_0.\end{equation} In addition, there is $\eg_*>0$ such that \beqno{egcond} |\me(\zeta)|\le \eg_*|\zeta|^2\quad \forall\zeta\in {\rm I\kern -1.6pt{\rm R}}^{mN}.\end{equation} The well known 'hole-filling' trick of Widman (e.g. \cite{BF}) has been applied to the regularity theory of strongly coupled {\em elliptic systems} on {\em planar} domains. Roughly speaking, the idea is that one tests the elliptic system with $u\fg^2$, where $\fg$ is a cutoff function for $B_R,B_{2R}$ (that is, $\fg\equiv 1$ in $B_R$, $\fg\equiv0$ outside $B_{2R}$ and $|D\fg|\le C/R$), to obtain a decay estimate for $\|Du\|_{L^2(B_R)}$. This and the imbedding theorems of Campanato spaces (e.g. \cite{Lieb}) imply that $u$ is H\"older continuous if $N=2$. However, this trick does not seem to apply to the corresponding parabolic systems like \mref{fullsys0a}, even when $\eg_*=0$ in \mref{egcond}. Following the same idea for \mref{fullsys0a}, with $\fg$ a cutoff function for $Q_R,Q_{2R}$, one cannot obtain a decay estimate for $\|Du\|_{L^2(Q_R)}$. Even if one could, because of the extra time dimension such an estimate would not imply any continuity of $u$. We will apply the theory for scalar equations in the previous section to each equation in \mref{fullsys0a}, under appropriate conditions on $\mathbf{a},\mb$ and $\mathbf{g}$, to establish the pointwise continuity of bounded weak solutions to systems like \mref{fullsys0a}.
Namely, under appropriate settings and assumptions, we will show that if a bounded weak solution $u$ of \mref{fullsys0a} is averagely continuous \beqno{avercont} \liminf_{R\to 0} \mitQ{Q_R}{|u-u_R|^2}=0\end{equation} then it is pointwise continuous. \subsection{Full systems (SKT) on planar domains} The checking of the average continuity assumption \mref{avercont} for a bounded weak solution to \mref{fullsys0a} is a hard problem in general. Here, we consider the case $N=2$ and present examples where this can be done. Let $\mA=P_u$ for some $P:{\rm I\kern -1.6pt{\rm R}}^m\to{\rm I\kern -1.6pt{\rm R}}^m$. We consider a special case of \mref{fullsys0a}. The following model was introduced in \cite{SKT} and has been studied widely in the context of mathematical biology (e.g. \cite{yag}) \beqno{sktsys0a}u_t=\Delta(P(u))+\me(Du)+f(u) \end{equation} and we assume that the nonlinearity is sublinear. That is, for some constant $C$, \mref{egcond} is now \beqno{egcondsub} |\me(\zeta)|\le C|\zeta|\quad \forall\zeta\in {\rm I\kern -1.6pt{\rm R}}^{mN}.\end{equation} We will prove that $\|Du\|_{L^2(\Og)}\le C$ for some constant $C$ for all $t\in(0,T)$. The following calculation is formal and it can be justified by replacing the operator $\frac{\partial}{\partial t}$ in the proof of \cite[Lemma 2.2]{dleJMAA} with the partial difference operator $\dg_h^{(t)}$ (or $u,P(u)$ by their Steklov averages as in \cite{LSU}). Multiplying the $i^{th}$ equation with $\frac{\partial}{\partial t}P_i(u)\eta$, where $\eta$ is a function of $t$, and summing the results, we obtain for $Q^t=\Og\times(t_1,t)$ $$\itQ{Q^t}{\myprod{u_t,P_u u_t}\eta}+\itQ{Q^t}{\myprod{D(P(u)), D(P(u))_t}\eta}=\itQ{Q^t}{\myprod{f,P_u u_t}\eta}.$$ We now choose $\eta$ such that $\eta(t)=1$, $\eta(t_1)=0$ and $|\eta'|\le C$. Since $\myprod{u_t,P_u u_t}\ge \llg_u|u_t|^2$ and $|u|$ is bounded, applying Young's inequality to the last integral (assuming $f(u)\in L^2(Q)$ for any given bounded solution $u$) and rearranging, we easily derive ($C$ denotes a constant depending on $\sup_Q|u|$) $$\iidx{\Og}{|Du(x,t)|^2}\le C\int_{t_1}^t\iidx{\Og}{|Du(x,\tau)|^2}d\tau +C.$$ This is an integral Gr\"onwall inequality for $y(t)=\|Du\|^2_{L^2(\Og\times\{t\})}$ and implies for all $t\in(0,T)$ that $\|Du\|_{L^2(\Og\times\{t\})}\le C$. For each $i=1,\ldots,m$ we apply \reftheo{blemm} by simply setting $\mathbf{a}=P_{u_i}$, $\mb=\sum_{j\ne i}P_{u_j}Du_j$ and $\mathbf{g}=\me(Du)+f$ (since $N=2$, we see that $|\mb|^2,|\mathbf{g}|$ belong to $\cM(\Og,T)$, as either i) or ii) of the definition of $\cM$ is satisfied). Hence, $u$ is pointwise continuous. \bcoro{SKT} Consider the system \mref{sktsys0a}. Assume that $N=2$, \mref{egcondsub} and $f\in L^2(Q)$. Then any bounded weak solution of \mref{sktsys0a} is pointwise continuous. \end{corol} \brem{Dbounded} If $f\in L^\infty(Q)$ then the derivatives are bounded and H\"older continuous (see the discussion leading to \refcoro{trisysthm} below). \erem \subsection{Triangular systems on $N$-dimensional domains} The result of \refcoro{SKT} holds for full systems with nonlinearities that grow at most linearly in gradients (see \mref{egcondsub}). If the system \mref{fullsys0a} is of the special triangular form then we can recover the quadratic growth in gradients \mref{egcond} (and somewhat more general growth) for general $N$. We now present an example of a class of triangular systems whose nonlinearities have quadratic growth in gradients.
We start with a system of two equations \beqno{trisys0}\msysa{u_t=\Div(\ag(u,v)Du+\bg(u,v)Dv)+\eg_1|Du|^2+c|Dv|^2+\mathbf{g}_1&\mbox{in $Q$,}\\ v_t=\Div(\dg(v)Dv)+\eg_2|Dv|^2+\mathbf{g}_2&\mbox{in $Q$,} \\ \mbox{Homogeneous Dirichlet or Neumann boundary conditions}&\mbox{on $\partial\Og\times(0,T)$}, \\ (u,v)=(u_0,v_0)&\mbox{on $\Og$.}}\end{equation} That is, we will consider \mref{fullsys0a} with $\mA=\mat{\ag&\bg\\0&\dg}$, $\me(Du,Dv)=\left[\begin{array}{c}\eg_{1}|Du|^2+c|Dv|^2\\\eg_{2}|Dv|^2\end{array}\right]$. Instead of \mref{ellcondsys}, which would imply it, we assume only that $\ag(u,v), \dg(v)\ge\llg_0$. We also assume that $\mathbf{g}_i\in \cM(\Og,T)$, $\ag,\bg$ are continuous and $\dg$ is H\"older continuous in $u,v$. Assume that $u,v$ are locally bounded. If $\eg_2\sup_Q|v|$ is small then we see that $v$ is continuous by \reftheo{blemm}, with $\mb=0$ and $\mathbf{g}=\mathbf{g}_2$. Knowing that $\dg(v)$ is continuous and assuming that $\mathbf{g}_2\in L^\infty_{loc}(Q)$, we can use \cite[Theorem 3.2]{GiaS}, applied to scalar equations, to see that $Dv$ is H\"older continuous and thus bounded (see \refcoro{Dvloc}). This can be used in the equation for $u$, with $\mb=\bg(u,v)Dv$ and $\mathbf{g}=c|Dv|^2+\mathbf{g}_1$, and we can apply \reftheo{blemm} again to prove that $u$ is continuous if $\eg_1\sup_Q|u|$ is small (and $c$ can be large). Again, note that we do not have to use the higher integrability $L^p$ estimate of $Dv$ here once we know that $v$ is continuous. This continuity of $v$ suffices to establish \cite[(3.4) in the proof of Proposition 3.1]{GiaS}, which gives the decay estimate used in the proof of \cite[Theorem 3.2]{GiaS} to show that $Dv$ is H\"older continuous. Again, we remark that the continuity of $u,v$ also shows that $Du,Dv$ are H\"older continuous by \cite{GiaS}. But to apply the theory in \cite{GiaS}, we need the elliptic condition \mref{ellcondsys} for the whole system when $m>2$. By induction, the above argument can be extended to systems of $m>2$ unknowns $u_i$ ($i=1,\ldots,m$) satisfying homogeneous Dirichlet or Neumann boundary conditions on $\partial\Og\times(0,T)$. The system consists of $m$ equations of the form \beqno{ieqn}(u_i)_t=\Div(\ag_i(\hat{u}_i)Du_i+\sum_{j< i}\bg_{ij}(\hat{u}_i)Du_j)+\eg_i|Du_i|^2+\sum_{j<i}c_{ij}|Du_j|^2+\mathbf{g}_i,\end{equation} where we denote $\hat{u}_i=(u_1,\ldots,u_i)$ and assume that $\eg_i\sup_Q|u_i|$ is small for all $i=1,\ldots,m$. \bcoro{trisysthm} Consider the system \mref{fullsys0a} of $m$ equations of the form \mref{ieqn}. If $\mathbf{g}_i\in L^\infty_{loc}(Q)$ then any bounded weak solution has bounded derivatives. \end{corol} \end{document}
April 2020, 40(4): 2165-2187. doi: 10.3934/dcds.2020110

Multiple positive solutions of saturable nonlinear Schrödinger equations with intensity functions

Tai-Chia Lin 1,2 and Tsung-Fang Wu 3,*

1. Department of Mathematics, National Taiwan University, Taipei 10617, Taiwan
2. Mathematics Division, National Center for Theoretical Sciences, Taipei 10617, Taiwan
3. Department of Applied Mathematics, National University of Kaohsiung, Kaohsiung 811, Taiwan

*Corresponding author: T. F. Wu

Received April 2019, Published January 2020

Fund Project: T.C. Lin is partially supported by Center for Advanced Study in Theoretical Sciences (CASTS) and the Ministry of Science and Technology, Taiwan grant MOST-106-2115-M-002-003-MY3. T.F. Wu is partially supported by the Ministry of Science and Technology, Taiwan grant MOST-108-2115-M-390-007-MY2 and the National Center for Theoretical Sciences, Taiwan.

Abstract: In this paper, we study saturable nonlinear Schrödinger equations with a nonzero intensity function, which makes the nonlinearity not superlinear near zero. Using the Nehari manifold and the Lusternik-Schnirelman category, we prove the existence of multiple positive solutions for saturable nonlinear Schrödinger equations with a nonzero intensity function that satisfies suitable conditions. The ideas contained here might be useful for obtaining multiple positive solutions of other non-homogeneous nonlinear elliptic equations.

Keywords: Nonlinear Schrödinger equations, saturable nonlinearity, variational methods, multiple positive solutions.

Mathematics Subject Classification: Primary: 35J20, 35J60; Secondary: 35A15, 35B09.

Citation: Tai-Chia Lin, Tsung-Fang Wu. Multiple positive solutions of saturable nonlinear Schrödinger equations with intensity functions. Discrete & Continuous Dynamical Systems, 2020, 40 (4): 2165-2187. doi: 10.3934/dcds.2020110
Google Scholar N. K. Efremidis, S. Sears, D. N. Christodoulides, J. W. Fleischer and M. Segev, Discrete solitons in photorefractive optically induced photonic lattices, Phys. Rev. E, 66 (2002), 046602. doi: 10.1364/NLGW.2002.NLTuA4. Google Scholar N. K. Efremidis, J. Hudock, D. N. Christodoulides, J. W. Fleischer, O. Cohen and M. Segev, Two-dimensional optical lattice solitons, Phys. Rev. Lett., 91 (2003), 213906. doi: 10.1103/PhysRevLett.91.213906. Google Scholar I. Ekeland, On the variational principle, J. Math. Anal. Appl., 17 (1974), 324-353. doi: 10.1016/0022-247X(74)90025-0. Google Scholar S. Gatz and J. Herrmann, Propagation of optical beams and the properties of two-dimensional spatial solitons in media with a local saturable nonlinear refractive index, J. Opt. Soc. Amer. B, 14 (1997), 1795-1806. doi: 10.1364/JOSAB.14.001795. Google Scholar B. Gidas, W. M. Ni and L. Nirenberg, Symmetry and related properties via the maximum principle, Comm. Math. Phys., 68 (1978), 209-243. doi: 10.1007/BF01221125. Google Scholar L. Jeanjean, On the existence of bounded Palais-Smale sequence and application to a Landesmann-Lazer type problem, Proc. R. Soc. Edinburgh A, 129 (1999), 787-809. doi: 10.1017/S0308210500013147. Google Scholar L. Jeanjean and K. Tanaka, A positive solution for an asymptotically linear elliptic problem on $\mathbb{R}^{N}$ autinomous at infinity, ESAIM Control Optim. Calc. Var., 7 (2002), 597-614. doi: 10.1051/cocv:2002068. Google Scholar L. Jeanjean and K. Tanaka, Singularly perturbed elliptic problems with superlinear or asymptotically linear nonlinearities, Calc. Var. PDE, 21 (2004), 287-318. doi: 10.1007/s00526-003-0261-6. Google Scholar L. Jeanjean and K. Tanaka, A remark on least energy solutions in $\mathbb{R}^{N}$, Proc. Amer. Math. Soc., 131 (2003), 2399-2408. doi: 10.1090/S0002-9939-02-06821-1. Google Scholar D. Jovic, R. Jovanovic, S. Prvanovic, M. Petrovic and M. Belic, Counterpropagating beams in rotationally symmetric photonic lattices, Opt. Mater., 30 (2008), 1173-1176. Google Scholar G. Li and H.-S. Zhou, The existence of a positive solution to asymptotically linear scalar field equation, Proc. R. Soc. Edinburgh A, 130 (2000), 81-105. doi: 10.1017/S0308210500000068. Google Scholar T.-C. Lin, M. R. Belic, M. S. Petrovic and G. Chen, Ground states of nonlinear Schrödinger systems with saturable nonlinearity in $\mathbb{R}^{2}$ for two counterpropagating beams, J. Math. Phys., 55 (2014), 011505, 13 pp. doi: 10.1063/1.4862190. Google Scholar T.-C. Lin, M. R. Belic, M. S. Petrovic, H. Hajaiej and G. Chen, The virial theorem and ground state energy estimate of nonlinear Schrödinger equations in $\mathbb{R}^{2}$ with square root and saturable nonlinearities in nonlinear optics, Calc. Var. PDE, 56 (2017), Art. 147, 20 pp. doi: 10.1007/s00526-017-1251-4. Google Scholar T.-C. Lin, X. M. Wang and Z.-Q. Wang, Orbital stability and energy estimate of ground states of saturable nonlinear Schrödinger equations with intensity functions in $\mathbb{R}^{2}$, J. Diff. Eqns., 263 (2017), 4750-4786. doi: 10.1016/j.jde.2017.05.030. Google Scholar P.-L. Lions, The concentration-compactness principle in the calculus of variations. The local compact case. Ⅰ, Ann. Inst. H. Poincaré Anal. Non Lineairé, 1 (1984), 109-145. doi: 10.1016/S0294-1449(16)30428-0. Google Scholar P.-L. Lions, The concentration-compactness principle in the calculus of variations. The local compact case. Ⅱ, Ann. Inst. H. Poincaré Anal. Non Lineairé, 1 (1984), 223-283. doi: 10.1016/S0294-1449(16)30422-X. 
CommonCrawl
\begin{definition}[Definition:Imperial/Volume/Gill] The '''gill''' is an imperial unit of volume. {{begin-eqn}} {{eqn | o = | r = 1 | c = '''gill''' }} {{eqn | r = 5 | c = fluid ounces }} {{eqn | r = 0 \cdotp 14206 \, 53125 | c = litres }} {{eqn | r = 142 \cdotp 06531 \, 25 | c = millilitres }} {{end-eqn}} \end{definition}
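A quick arithmetic check of the litre figure above (a sketch, using the exact relations $1$ imperial gallon $= 4.54609$ litres and $1$ imperial gallon $= 160$ fluid ounces, which are not restated in the definition itself): in litres, $5 \times \dfrac{4.54609}{160} = 0.1420653125$, in agreement with the values quoted.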
ProofWiki
Twistor space In mathematics and theoretical physics (especially twistor theory), twistor space is the complex vector space of solutions of the twistor equation $\nabla _{A'}^{(A}\Omega _{^{}}^{B)}=0$. It was described in the 1960s by Roger Penrose and Malcolm MacCallum.[1] According to Andrew Hodges, twistor space is useful for conceptualizing the way photons travel through space, using four complex numbers. He also posits that twistor space may aid in understanding the asymmetry of the weak nuclear force.[2] Informal motivation In the (translated) words of Jacques Hadamard: "the shortest path between two truths in the real domain passes through the complex domain." Therefore when studying four-dimensional space $\mathbb {R} ^{4}$ it might be valuable to identify it with $\mathbb {C} ^{2}.$ However, since there is no canonical way of doing so, instead all isomorphisms respecting orientation and metric between the two are considered. It turns out that complex projective 3-space $\mathbb {CP} ^{3}$ parametrizes such isomorphisms together with complex coordinates. Thus one complex coordinate describes the identification and the other two describe a point in $\mathbb {R} ^{4}$. It turns out that vector bundles with self-dual connections on $\mathbb {R} ^{4}$(instantons) correspond bijectively to holomorphic vector bundles on complex projective 3-space $\mathbb {CP} ^{3}$. Formal definition For Minkowski space, denoted $\mathbb {M} $, the solutions to the twistor equation are of the form $\Omega ^{A}(x)=\omega ^{A}-ix^{AA'}\pi _{A'}$ where $\omega ^{A}$ and $\pi _{A'}$ are two constant Weyl spinors and $x^{AA'}=\sigma _{\mu }^{AA'}x^{\mu }$ is a point in Minkowski space. The $\sigma _{\mu }=(I,{\vec {\sigma }})$ are the Pauli matrices, with $A,A^{\prime }=1,2$ the indexes on the matrices. This twistor space is a four-dimensional complex vector space, whose points are denoted by $Z^{\alpha }=(\omega ^{A},\pi _{A'})$, and with a hermitian form $\Sigma (Z)=\omega ^{A}{\bar {\pi }}_{A}+{\bar {\omega }}^{A'}\pi _{A'}$ which is invariant under the group SU(2,2) which is a quadruple cover of the conformal group C(1,3) of compactified Minkowski spacetime. Points in Minkowski space are related to subspaces of twistor space through the incidence relation $\omega ^{A}=ix^{AA'}\pi _{A'}.$ This incidence relation is preserved under an overall re-scaling of the twistor, so usually one works in projective twistor space, denoted $\mathbb {PT} $, which is isomorphic as a complex manifold to $\mathbb {CP} ^{3}$. Given a point $x\in M$ it is related to a line in projective twistor space where we can see the incidence relation as giving the linear embedding of a $\mathbb {CP} ^{1}$ parametrized by $\pi _{A'}$. 
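As a short check that the expressions above indeed solve the twistor equation (a sketch, using standard two-spinor conventions in which $\partial x^{BB'}/\partial x^{AA'}=\delta _{A}^{B}\,\delta _{A'}^{B'}$ and spinor indices are raised with the antisymmetric symbol $\epsilon ^{AB}$), differentiate the solution with respect to the spacetime point:
$\nabla _{CA'}\Omega ^{B}={\frac {\partial }{\partial x^{CA'}}}\left(\omega ^{B}-ix^{BB'}\pi _{B'}\right)=-i\,\delta _{C}^{B}\,\pi _{A'},$
so that
$\nabla _{A'}^{A}\Omega ^{B}=\epsilon ^{AC}\nabla _{CA'}\Omega ^{B}=-i\,\epsilon ^{AB}\,\pi _{A'}.$
Since $\epsilon ^{AB}$ is antisymmetric, the symmetrization over $A$ and $B$ vanishes, giving $\nabla _{A'}^{(A}\Omega ^{B)}=0$, which is the twistor equation.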
The geometric relation between projective twistor space and complexified compactified Minkowski space is the same as the relation between lines and two-planes in twistor space; more precisely, twistor space is $\mathbb {T} :=\mathbb {C} ^{4}.$ It has associated to it the double fibration of flag manifolds $\mathbb {P} \xleftarrow {\mu } \mathbb {F} \xrightarrow {\nu } \mathbb {M} $ where $\mathbb {P} $ is the projective twistor space $\mathbb {P} =F_{1}(\mathbb {T} )=\mathbb {CP} ^{3}=\mathbf {P} (\mathbb {C} ^{4})$ and $\mathbb {M} $ is the compactified complexified Minkowski space $\mathbb {M} =F_{2}(\mathbb {T} )=\operatorname {Gr} _{2}(\mathbb {C} ^{4})=\operatorname {Gr} _{2,4}(\mathbb {C} )$ and the correspondence space between $\mathbb {P} $ and $\mathbb {M} $ is $\mathbb {F} =F_{1,2}(\mathbb {T} ).$ In the above, $\mathbf {P} $ stands for projective space, $\operatorname {Gr} $ a Grassmannian, and $F$ a flag manifold. The double fibration gives rise to two correspondences (see also Penrose transform), $c=\nu \circ \mu ^{-1}$ and $c^{-1}=\mu \circ \nu ^{-1}.$ The compactified complexified Minkowski space $\mathbb {M} $ is embedded in $\mathbf {P} _{5}\cong \mathbf {P} (\wedge ^{2}\mathbb {T} )$ by the Plücker embedding; the image is the Klein quadric. References 1. Penrose, R.; MacCallum, M.A.H. (February 1973). "Twistor theory: An approach to the quantisation of fields and space-time". Physics Reports. 6 (4): 241–315. doi:10.1016/0370-1573(73)90008-2. 2. Hodges, Andrew (2010). One to Nine: The Inner Life of Numbers. Doubleday Canada. p. 142. ISBN 978-0-385-67266-5. • Ward, R.S.; Wells, R.O. (1991). Twistor Geometry and Field Theory. Cambridge University Press. ISBN 0-521-42268-X. • Huggett, S.A.; Tod, K.P. (1994). An introduction to twistor theory. Cambridge University Press. ISBN 978-0-521-45689-0.
Wikipedia
Bilevel optimization Bilevel optimization is a special kind of optimization where one problem is embedded (nested) within another. The outer optimization task is commonly referred to as the upper-level optimization task, and the inner optimization task is commonly referred to as the lower-level optimization task. These problems involve two kinds of variables, referred to as the upper-level variables and the lower-level variables.[1][2][3] Mathematical formulation of the problem A general formulation of the bilevel optimization problem can be written as follows: $\min \limits _{x\in X,y\in Y}\;\;F(x,y)$ subject to: $G_{i}(x,y)\leq 0$, for $i\in \{1,2,\ldots ,I\}$ $y\in \arg \min \limits _{z\in Y}\{f(x,z):g_{j}(x,z)\leq 0,j\in \{1,2,\ldots ,J\}\}$ where $F,f:R^{n_{x}}\times R^{n_{y}}\to R$ $G_{i},g_{j}:R^{n_{x}}\times R^{n_{y}}\to R$ $X\subseteq R^{n_{x}}$ $Y\subseteq R^{n_{y}}.$ In the above formulation, $F$ represents the upper-level objective function and $f$ represents the lower-level objective function. Similarly $x$ represents the upper-level decision vector and $y$ represents the lower-level decision vector. $G_{i}$ and $g_{j}$ represent the inequality constraint functions at the upper and lower levels respectively. If some objective function is to be maximized, it is equivalent to minimize its negative. The formulation above is also capable of representing equality constraints, as these can be easily rewritten in terms of inequality constraints: for instance, $h(x)=0$ can be translated as $\{h(x)\leq 0,\ -h(x)\leq 0\}$. However, it is usually worthwhile to treat equality constraints separately, to deal with them more efficiently in a dedicated way; in the representation above, they have been omitted for brevity. Stackelberg competition Bilevel optimization was first realized in the field of game theory by a German economist Heinrich Freiherr von Stackelberg who published Market Structure and Equilibrium (Marktform und Gleichgewicht) in 1934 that described this hierarchical problem. The strategic game described in his book came to be known as Stackelberg game that consists of a leader and a follower. The leader is commonly referred as a Stackelberg leader and the follower is commonly referred as a Stackelberg follower. In a Stackelberg game, the players of the game compete with each other, such that the leader makes the first move, and then the follower reacts optimally to the leader's action. This kind of a hierarchical game is asymmetric in nature, where the leader and the follower cannot be interchanged. The leader knows ex ante that the follower observes its actions before responding in an optimal manner. Therefore, if the leader wants to optimize its objective, then it needs to anticipate the optimal response of the follower. In this setting, the leader's optimization problem contains a nested optimization task that corresponds to the follower's optimization problem. In the Stackelberg games, the upper level optimization problem is commonly referred as the leader's problem and the lower level optimization problem is commonly referred as the follower's problem. If the follower has more than one optimal response to a certain selection of the leader, there are two possible options: either the best or the worst follower's solution with respect to the leader's objective function is assumed, i.e. the follower is assumed to act either in a cooperative way or in an aggressive way. 
The resulting bilevel problem is called optimistic bilevel programming problem or pessimistic bilevel programming problem respectively. Applications Bilevel optimization problems are commonly found in a number of real-world problems. This includes problems in the domain of transportation, economics, decision science, business, engineering, environmental economics etc. Some of the practical bilevel problems studied in the literature are briefly discussed.[4] Toll setting problem In the field of transportation, bilevel optimization commonly appears in the toll-setting problem. Consider a network of highways that is operated by the government. The government wants to maximize its revenues by choosing the optimal toll setting for the highways. However, the government can maximize its revenues only by taking the highway users' problem into account. For any given tax structure the highway users solve their own optimization problem, where they minimize their traveling costs by deciding between utilizing the highways or an alternative route. Under these circumstances, the government's problem needs to be formulated as a bilevel optimization problem. The upper level consists of the government’s objectives and constraints, and the lower level consists of the highway users' objectives and constraints for a given tax structure. It is noteworthy that the government will be able to identify the revenue generated by a particular tax structure only by solving the lower level problem that determines to what extent the highways are used. Structural optimization Structural optimization problems consist of two levels of optimization task and are commonly referred as mathematical programming problems with equilibrium constraints (MPEC). The upper level objective in such problems may involve cost minimization or weight minimization subject to bounds on displacements, stresses and contact forces. The decision variables at the upper level usually are shape of the structure, choice of materials, amount of material etc. However, for any given set of upper level variables, the state variables (displacement, stresses and contact forces) can only be figured out by solving the potential energy minimization problem that appears as an equilibrium satisfaction constraint or lower level minimization task to the upper level problem. Defense applications Bilevel optimization has a number of applications in defense, like strategic offensive and defensive force structure design, strategic bomber force structure, and allocation of tactical aircraft to missions. The offensive entity in this case may be considered a leader and the defensive entity in this case may be considered a follower. If the leader wants to maximize the damage caused to the opponent, then it can only be achieved if the leader takes the reactions of the follower into account. A rational follower will always react optimally to the leaders offensive. Therefore, the leader's problem appears as an upper level optimization task, and the optimal response of the follower to the leader's actions is determined by solving the lower level optimization task. Workforce and Human Resources applications Bilevel optimization can serve as a decision support tool for firms in real-life settings to improve workforce and human resources decisions. The first level reflects the company’s goal to maximize profitability. The second level reflects employees goal to minimize the gap between desired salary and a preferred work plan. 
The bilevel model provides an exact solution based on a mixed-integer formulation and presents a computational analysis based on changing employees' behavior in response to the firm's strategy, thus demonstrating how the problem's parameters influence the decision policy.[5] Solution methodologies Bilevel optimization problems are hard to solve. One solution method is to reformulate bilevel optimization problems to optimization problems for which robust solution algorithms are available. Extended Mathematical Programming (EMP) is an extension to mathematical programming languages that provides several keywords for bilevel optimization problems. These annotations facilitate the automatic reformulation to Mathematical Programs with Equilibrium Constraints (MPECs) for which mature solver technology exists. EMP is available within GAMS. KKT reformulation Certain bilevel programs, notably those having a convex lower level and satisfying a regularity condition (e.g. Slater's condition), can be reformulated to single level by replacing the lower-level problem by its Karush-Kuhn-Tucker conditions. This yields a single-level mathematical program with complementarity constraints, i.e., an MPEC. If the lower-level problem is not convex, with this approach the feasible set of the bilevel optimization problem is enlarged by local optimal solutions and stationary points of the lower level, which means that the single-level problem obtained is a relaxation of the original bilevel problem. Optimal value reformulation Denoting by $\phi (x)=\min \limits _{z\in Y}\{f(x,z):g_{j}(x,z)\leq 0,j\in \{1,2,\ldots ,J\}\}$ the so-called optimal value function, a possible single-level reformulation of the bilevel problem is $\min \limits _{x\in X,y\in Y}\;\;F(x,y)$ subject to: $G_{i}(x,y)\leq 0$, for $i\in \{1,2,\ldots ,I\}$ $g_{j}(x,y)\leq 0,j\in \{1,2,\ldots ,J\}$ $f(x,y)\leq \phi (x).$ This is a nonsmooth optimization problem, since the optimal value function is in general not differentiable, even if all the constraint functions and the objective function in the lower-level problem are smooth.[6] Heuristic methods For complex bilevel problems, classical methods may fail due to difficulties like non-linearity, discreteness, non-differentiability, non-convexity, etc. In such situations, heuristic methods may be used. Among them, evolutionary methods, though computationally demanding, often constitute an alternative tool to offset some of these difficulties encountered by exact methods, albeit without offering any optimality guarantee on the solutions they produce.[7] Multi-objective bilevel optimization A bilevel optimization problem can be generalized to a multi-objective bilevel optimization problem with multiple objectives at one or both levels. A general multi-objective bilevel optimization problem can be formulated as follows: $\min \limits _{x\in X,y\in Y}\;\;F(x,y)=(F_{1}(x,y),F_{2}(x,y),\ldots ,F_{p}(x,y))$ (in the Stackelberg games: the leader problem) subject to: $G_{i}(x,y)\leq 0$, for $i\in \{1,2,\ldots ,I\}$; $y\in \arg \min \limits _{z\in Y}\{f(x,z)=(f_{1}(x,z),f_{2}(x,z),\ldots ,f_{q}(x,z)):g_{j}(x,z)\leq 0,j\in \{1,2,\ldots ,J\}\}$ (in the Stackelberg games: the follower problem) where $F:R^{n_{x}}\times R^{n_{y}}\to R^{p}$ $f:R^{n_{x}}\times R^{n_{y}}\to R^{q}$ $G_{i},g_{j}:R^{n_{x}}\times R^{n_{y}}\to R$ $X\subseteq R^{n_{x}}$ $Y\subseteq R^{n_{y}}.$ In the above formulation, $F$ represents the upper-level objective vector with $p$ objectives and $f$ represents the lower-level objective vector with $q$ objectives.
Similarly, $x$ represents the upper-level decision vector and $y$ represents the lower-level decision vector. $G_{i}$ and $g_{j}$ represent the inequality constraint functions at the upper and lower levels respectively. Equality constraints may also be present in a bilevel program, but they have been omitted for brevity. References 1. Dempe, Stephan (2002). Foundations of Bilevel Programming. Nonconvex Optimization and Its Applications. Vol. 61. Springer, Boston, MA. doi:10.1007/b101970. ISBN 1-4020-0631-4. 2. Vicente, L.N.; Calamai, P.H. (1994). "Bilevel and multilevel programming: A bibliography review". Journal of Global Optimization. 5 (3): 291–306. doi:10.1007/BF01096458. S2CID 26639305. 3. Colson, Benoit; Marcotte, Patrice; Savard, Gilles (2005). "Bilevel programming: A survey". 4OR. 3 (2): 87–107. doi:10.1007/s10288-005-0071-0. S2CID 15686735. 4. "Scope: Evolutionary Bilevel Optimization". www.bilevel.org. Retrieved 6 October 2013. 5. Ben-Gal, Hila Chalutz; Forma, Iris A.; Singer, Gonen (March 2022). "A flexible employee recruitment and compensation model: A bi-level optimization approach". Computers & Industrial Engineering. 165: 107916. doi:10.1016/j.cie.2021.107916. PMC 9758963. PMID 36568877. S2CID 245625445. 6. Dempe, Stephan; Kalashnikov, Vyacheslav; Prez-Valds, Gerardo A.; Kalashnykova, Nataliya (2015). Bilevel Programming Problems: Theory, Algorithms and Applications to Energy Networks. Springer-Verlag Berlin Heidelberg. doi:10.1007/978-3-662-45827-3. ISBN 978-3-662-45827-3. 7. Sinha, Ankur; Malo, Pekka; Deb, Kalyanmoy (April 2018). "A Review on Bilevel Optimization: From Classical to Evolutionary Approaches and Applications". IEEE Transactions on Evolutionary Computation. 22 (2): 276–295. arXiv:1705.06270. doi:10.1109/TEVC.2017.2712906. S2CID 4626744. External links • Mathematical Programming Glossary
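As a concrete illustration of the nested structure and of the single-level reformulations described above, consider the following small worked example (illustrative only; it is not drawn from the references above):
$\min \limits _{x,y\in \mathbb {R} }\;(x-1)^{2}+y^{2}\quad {\text{subject to}}\quad y\in \arg \min \limits _{z\in \mathbb {R} }\;(z-x)^{2}.$
The lower-level problem is convex and unconstrained, so its Karush-Kuhn-Tucker condition $2(y-x)=0$ (equivalently, the optimal value reformulation $f(x,y)=(y-x)^{2}\leq \phi (x)=0$) forces $y=x$. The bilevel problem therefore reduces to the single-level problem $\min _{x}\;(x-1)^{2}+x^{2}$, whose solution is $x=y={\tfrac {1}{2}}$ with upper-level objective value ${\tfrac {1}{2}}$. In this example the lower-level solution is unique for every $x$, so the optimistic and pessimistic formulations coincide.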
Wikipedia
\begin{document} \title{Emerging quantum computing algorithms for quantum chemistry} \author{Mario Motta} \thanks{corresponding author, e-mail: [email protected]} \affiliation{IBM Quantum, IBM Research-Almaden, San Jose, CA 95120, USA} \author{Julia E. Rice} \thanks{corresponding author, e-mail: [email protected]} \affiliation{IBM Quantum, IBM Research-Almaden, San Jose, CA 95120, USA} \begin{abstract} Digital quantum computers provide a computational framework for solving the Schr\"{o}dinger equation {for} a variety of many-particle systems. Quantum computing algorithms for the quantum simulation of these systems have recently witnessed remarkable growth, {notwithstanding the limitations of existing quantum hardware}, especially as a tool for electronic structure computations in molecules. In this review, we provide a self-contained introduction to {emerging} algorithms for the simulation of Hamiltonian dynamics and eigenstates, with emphasis on their applications to the electronic structure in molecular systems. Theoretical foundations and implementation details of the method are discussed, and their strengths, limitations, and recent advances are presented. \end{abstract} {Article Type: ADVANCED REVIEW} \maketitle { \small \tableofcontents} \section{Introduction} Determining the quantum-mechanical behavior of many interacting particles, by means of accurate and predictive computations, is a problem of conceptual and technological relevance \cite{dirac1928quantum}. An important area within the quantum-mechanical many-body problem is represented by molecular chemistry which, over the last few decades, has been addressed using numerical methods, and designed and implemented for a variety of computational platforms \cite{bartlett2007coupled,helgaker2012recent,helgaker2014molecular}. More recently, digital quantum computers have been proposed as an alternative and complementary approach to the numerical computation of molecular properties \cite{feynman1982simulating,lloyd1996universal,abrams1997simulation}. { Molecular chemistry has been identified as an application for a digital quantum computer. This is because a digital quantum computer can serve as a quantum simulator \cite{georgescu2014quantum} of a molecule, i.e. a controllable quantum system that can be used to study certain properties of a molecule. We will focus on Hamiltonian dynamics, in particular, since at the current state of knowledge, it can be simulated with lower scaling on a digital quantum computer than on a conventional computer \cite{feynman1982simulating,lloyd1996universal}.} The idea of a quantum simulator is conceptually interesting and appealing, and the manufacturing, control and operation of quantum-mechanical devices is one of the most outstanding open problems in physics. On the other hand, a mutual disconnect exists between the quantum chemistry and quantum information science communities, which represents a significant barrier to progress. A shared terminology, a rigorous assessment of the potential impact of quantum computers on practical applications, including a careful identification of areas where quantum technologies can be relevant, and an appreciation of the subtle complications of quantum chemical research, are necessary for quantum information scientists to conduct research in the quantum simulation of chemistry. 
On the other hand, a robust understanding of quantum information science, quantum computational complexity, quantum simulation algorithms, and of the nature and peculiarities of quantum devices are necessary for chemists to contribute to the design and implementation of algorithms for the quantum simulation of chemistry. In this work, we aim at bridging the gap between the quantum chemistry and quantum computation communities, by examining prospects for quantum computation in molecular chemistry. We review two important classes of quantum algorithms, one for the simulation of Hamiltonian dynamics and the other for the heuristic simulation of Hamiltonian eigenfunctions. We analyze their advantages and disadvantages, suggesting opportunities for future developments and synergistic research. We begin by presenting the concept of simulation and its relevance in quantum chemistry in Section \ref{ref:sec_simulations}. We then proceed to describe the concept of a digital quantum computer, providing a high level view to help understand how it can be used {as a simulator} of a quantum system, and focus on strengths and weaknesses of such an approach. Several important concepts pertaining to the quantum simulation of chemical systems are then presented in Section \ref{sec:algo}. Emphasis will be placed on the requirements and challenges posed by the adoption of these algorithms in the study of molecular properties. A discussion about the nature and decoherence phenomena occurring on quantum hardware follows, with the purpose of describing error mitigation and correction techniques for today's quantum hardware. Such techniques are a requirement for meaningful calculations on the hardware in the near-term. Conclusions are drawn in the last section, highlighting the need of synergistic investigations by quantum information and quantum chemistry scientists to understand and overcome the subtle complications of research at the interface between these two fields. In this context, we will discuss chemical applications that can be targeted to assess and monitor the performance of quantum algorithms and hardware, and opportunities to work collaboratively to improve their performance towards relevant investigations of chemical systems. Note that in Table \ref{tab:glossary}, the acronyms used throughout this work are listed, together with their explicit meaning. \section{Computer simulations in chemistry} \label{ref:sec_simulations} Our understanding of the properties of molecular systems often comes from experiments \cite{mukamel1999principles,barron2009molecular}. A prominent class of experiments is represented by spectroscopic investigations, sketched in Fig.~\ref{fig:experiment}a. In spectroscopic investigations, electromagnetic radiation is applied to a molecule, and the scattering or absorption of the radiation is measured. These experimental techniques probe different aspects of the structure of molecules by observing their response to applied electromagnetic fields. For example, infrared and visible-ultraviolet radiation are used to probe ro-vibrational and electronic excitations, respectively \cite{fleming1986chemical,puzzarini2010quantum}; nuclear magnetic-resonance spectroscopy to perform in situ identification and concentrations for target chemicals in complex mixtures \cite{helgaker1999ab}; linear and non-linear optics to probe the polarization of molecules in the presence of external electric fields \cite{mukamel1999principles}. 
Spectroscopic investigations are not the only kinds of experiments performed to understand molecular systems. Another important class of experiments is represented by chemical reaction rate measurements, in which reactants are prepared under suitable experimental conditions, and indicators of chemical compositions, such as electrical conductivity, are measured as a function of time, to reveal the evolution of the amount of reactants in the system \cite{hammett1935reaction}. Such chemical processes can be very complicated to model, because obtaining accurate rate constants involves description of a delicate balance of competing phenomena at the molecular level. These include description of conformers within $k_B \, T$ of the lowest energy conformer, effects of solvation, temperature and pressure, to name but a few. Even within the Born-Oppenheimer approximation at zero Kelvin, this results in potentially very large numbers of intermediates, and reaction paths, as in catalytic and metabolic pathways. Although experiments differ from each other due to a large number of crucially important technical aspects, a recurring theme can be recognized, which is schematically depicted in Fig.~\ref{fig:experiment}a and in the first row of Fig.~\ref{fig:experiment}b. A chemical sample is prepared in a suitable quantum state, often at thermal equilibrium at a finite inverse temperature $\beta$. It is then coupled with an external probe, such as a classical external field, or beam of impinging quantum particles, or a change in a chemical or physical environment, where it evolves under the action of such an external perturbation, and is subsequently measured. The structure sketched in Fig.~\ref{fig:experiment} is found not only in experiments, but also in the theory of quantum mechanics: an initial preparation is described by wavefunction or density operators in a Hilbert space, the coupling to a probe by a unitary transformation or a more general quantum operation, and a final measurement by a Hermitian operator or a more general operator-valued measure. The same structure informs quantum computations, which can thus be used to simulate the properties of a quantum system. The relationship between the phases of an experiment, the postulates of quantum mechanics, and the structure of a quantum computation provides a useful high-level framework to recognize the purpose and limitations of quantum computation. \begin{figure} \caption{a) Schematic representation of infrared spectroscopy experiment on the carbon dioxide molecule. b) Analogous phases of both experiment (top row), quantum theory (middle row) and quantum simulations (bottom row).} \label{fig:experiment} \end{figure} One of the goals of computer simulations in chemistry is to explain and predict the outcome of experiments conducted in laboratories. Over the last few decades, molecular electronic structure theory has developed to a stage where computational chemistry practitioners can work alongside experimental collaborators to interpret experimental results and to work as a team to design new molecular systems. Notwithstanding this progress, the electronic structure of molecules and materials still presents many mysterious aspects, and methodological developments towards greater accuracy, predictive power, and access to larger systems are to this day highly investigated research activities. 
In the 1980s, the seminal work of several scientists led to the conception of innovative computational devices, now termed digital {(or universal)} quantum computers \cite{feynman1982simulating,lloyd1996universal,abrams1997simulation}. At this point, it is worth remarking that the term "classical" refers to conventional (not quantum) computers. It by no means implies that "classical" is a past era, and in fact, as discussed in the remainder of this work, the best of computations will likely be based on a hybrid approach, where classical and quantum co-processors are used in synergy. In the context of chemistry, digital quantum computers are used as digital quantum simulators \cite{georgescu2014quantum}. By the term \important{quantum simulator}, we denote a controllable quantum system used to simulate the behavior of another quantum system. { A \important{digital (or universal) quantum simulator} is a quantum simulator that can be programmed to execute any unitary transformation \cite{georgescu2014quantum,bauer2020quantum}.} The term "simulator" can be a source of confusion. While the quantum computing literature calls a classical computer emulating the behavior of a quantum system a simulator, in quantum information science the term refers to an actual quantum system (e.g. an electric circuit with superconducting elements or an array of ions confined and suspended in free space using electromagnetic fields) used to execute an algorithm and simulate the behavior of another quantum system (e.g. a molecule). We will be using the latter definition in this article. The basic idea of quantum simulation is represented schematically in Fig.~\ref{fig:simulation}. We require that wavefunctions in the Hilbert space of the system under study and the simulator can be connected by {a one-to-one correspondence $\hat{F}$, as exemplified in Section \ref{sec:fermions_second}.} The simulator can then be initialized in a state $| \Psi_i^\prime \rangle$, corresponding to some initial preparation $| \Psi_i \rangle$ of the system under study. The simulator can be manipulated with some quantum operation, such as a unitary transformation $\hat{U}^\prime$, to reach a final state $| \Psi_f^\prime \rangle$, corresponding to some state $| \Psi_f \rangle$ of the system under study. The wavefunction $| \Psi_f^\prime \rangle$ has to be measured, to extract information about the state $| \Psi_f \rangle$. {It is important to emphasize that not all computational problems can be accelerated under a model of computation based on unitary transformations and quantum measurements.} Thus one must carefully identify areas where the use of quantum simulators can be of actual relevance. For example, the assumption of controllability of the digital quantum simulators is crucial for the actual use of these devices to solve a computational problem. This issue will be reviewed in Section \ref{sec:hardware}, where the nature and decoherence phenomena occurring on near-term quantum hardware is discussed, and techniques for error mitigation and correction are presented. The main factors affecting the performance of quantum hardware are $(i)$ the limited numbers of qubits that can be used for any one chemical problem, $(ii)$ the limited qubit connectivity, and $(iii)$ the various decoherence phenomena that limit the number of quantum operations that can be executed. 
Similarly, the importance of preparation, transformation and measurement steps must be stressed here, because the success of a quantum simulation depends on the ability to perform certain operations efficiently on the quantum simulator, as well as to extract useful information from it. The inherent difficulty of computational problems is typically formalized in terms of the resources required by different models of computation to solve a given problem. For digital quantum computers, such a formalization is achieved by the theory of quantum computational complexity, reviewed in Section \ref{sec:complexity}, which identifies the class of problems that are naturally tackled by a digital quantum simulator. This in turn requires introducing the model of a digital quantum computer and of the operations that can be performed on it, which is the goal of the next Section. \begin{figure}\label{fig:simulation} \end{figure} \subsection{Digital quantum computers} \label{sec:quantum_computers} The concept of a quantum computer can be represented by several equivalent models, each corresponding to a specific approach to the {problem of executing computation on a device based on quantum mechanics}. Such models include quantum Turing machines, circuits, random access machines and walks. In this work, we focus on what is arguably the most widely used model of quantum computation, the circuit model. It is worth emphasizing that the only educational platforms giving a robust understanding of quantum computation are dedicated textbooks covering the topic in depth and breadth. Interested readers are thus referred, for example, to the books by Nielsen and Chuang \cite{nielsen2002quantum}, Mermin \cite{mermin2007quantum}, Kitaev et al \cite{kitaev2002classical}, Benenti et al \cite{benenti2004principles} and Popescu et al \cite{lo1998introduction}. The circuit model of quantum computation is based on the notion of \important{qubit}. A qubit is a physical system whose states are described by unit vectors in a two-dimensional Hilbert space $\mathcal{H} \simeq \mathbbm{C}^2$. A system of $n$ qubits, also called an $n$-qubit \important{register}, has states described by unit vectors in the Hilbert space $\mathcal{H}_n = \mathcal{H}^{\otimes n}$. An orthonormal basis of the Hilbert space $\mathcal{H}_n$ is given by the following vectors, called \important{computational basis states}, \begin{equation} \label{eq:qcomputational_basis} \ket{ {\bf{z}} } = \bigotimes_{\ell=0}^{n-1} \ket{ z_\ell } = \ket{ z_{n-1} \dots z_0 } = \ket{ z } \quad,\quad {\bf{z}} \in \{0,1\}^n \quad,\quad z = \sum_{\ell=0}^{n-1} z_\ell \, 2^\ell \in \{0 \dots 2^n-1 \} \quad. \end{equation} Starting from a register of $n$ qubits prepared in the state $\ket{0} \in \mathcal{H}_n$, a generic $n$-qubit state $\ket{\psi}$ can be prepared applying single- and multi-qubit unitary transformations, or \important{gates}. Examples of single-qubit and two-qubit gates are listed in Table \ref{table:gates}. In quantum computation, a role of particular importance is played by \important{Pauli operators}, defined as \begin{equation} \hat{\sigma}_{{\bf{m}}} = \hat{\sigma}_{m_{n-1}} \otimes \dots \otimes \hat{\sigma}_{m_0} \quad,\quad \hat{\sigma}_m \in \{ \mathbbm{1} , X, Y, Z \} \quad, \end{equation} where the single-qubit Pauli operators are illustrated in Table \ref{table:gates}. \begin{table}[b!] \includegraphics[width=\textwidth]{table0.eps} \caption{Examples of quantum gates and circuit elements. From top to bottom: single-qubit rotations (i.e. 
exponentials of single-qubit Pauli operators), for example $R_x(\theta) = \exp(-i \theta X/2)$. Single-qubit Pauli operators, which are equal to special single-qubit Pauli rotations up to a global phase, for example $X = R_x(\pi)$. Single-qubit Clifford gates (Hadamard and $S$) and the non-Clifford $T$ gate, which are equal to special single-qubit rotations up to a global phase, namely $S = R_z(\pi/2)$, $T=R_z(\pi/4)$ and $H = \exp(-i \pi (X+Z)/(2\sqrt{2}))$. Measurement of a single qubit in the computational basis, measurement of an observable $B = \sum_k b_k | \varphi_k \rangle \langle \varphi_k |$, and measurement of an observable with post-selection (retaining only one specific outcome $k_0$). Two-qubit $\mathsf{CNOT}$ (controlled-$X$), $\mathsf{cU}$ (controlled-$U$) and $\mathsf{SWAP}$ gates. The $\mathsf{CNOT}$ gate is sometimes denoted $\mathsf{CNOT}_{ij}$, where $i$ and $j$ are called the control and target qubit respectively, and applies an $X$ transformation to its target qubit ($\oplus$ symbol) if its control qubit ($\bullet$ symbol) is in the state $|1\rangle$, the $\mathsf{cU}$ can be written as a product of up to two $\mathsf{CNOT}$ gates and four single-qubit gates, and the $\mathsf{SWAP}$ gate can be written as a product of three $\mathsf{CNOT}$ gates, $\mathsf{SWAP}_{ij} = \mathsf{CNOT}_{ij} \mathsf{CNOT}_{ji} \mathsf{CNOT}_{ij}$. {Qubits are ordered from top to bottom, and matrix elements are defined as $G_{zw} = \langle z | \hat{G} | w \rangle = \langle z_{n-1} \dots z_0 | \hat{G} | w_{n-1} \dots w_0 \rangle$, with binary digits running from right to left as in Eq.~\eqref{eq:qcomputational_basis}.} } \label{table:gates} \end{table} Pauli operators are a basis for the space of linear operators on $\mathcal{H}_n$. Exponentials of Pauli operators {can be represented as tensor products of $Z$ operators,} \begin{equation} { \hat{R}_{ \hat{\sigma}_{{\bf{m}}} }(\theta) = e^{ -\frac{i \theta}{2} \hat{\sigma}_{{\bf{m}}} } = \hat{V}^\dagger e^{ - \frac{i \theta}{2} Z \otimes \dots \otimes Z } \hat{V} \quad,\quad \hat{V} = \bigotimes_{\ell=0}^{n-1} \hat{A}_{m_\ell} \quad,\quad \hat{A}_{m}^\dagger \hat{\sigma}_m \hat{A}_{m} = Z \quad, } \end{equation} {and then applied to a register of qubits as illustrated in Fig.~\ref{fig:pauli_circuits}a. Such a circuit contains ladders of $\mathsf{CNOT}$ gates that compute the total parity into the last qubit \cite{seeley2012bravyi}.} Two parameters, respectively called \important{width and depth}, are often used to characterize the cost of a quantum circuit. Width refers to the number of qubits that comprise the circuit (in Fig.~\ref{fig:pauli_circuits}a, $\mathsf{width}=4$). Depth refers to the number of layers of gates that cannot be executed at the same time (in Fig.~\ref{fig:pauli_circuits}a, $\mathsf{depth}=9$). Although width and depth are both limiting factors in the execution of quantum algorithms (large width corresponds to many qubits, and large depth to many operations in the presence of decoherence), the latter is a major computational bottleneck on near-term devices. The error mitigation techniques presented in Section \ref{sec:hardware} are used to increase the width and depth of circuits that can be executed on near-term hardware. \begin{figure} \caption{Quantum circuits (a) to apply the exponential of a Pauli operator $\otimes_{\ell=0}^{3} \sigma_{m_\ell}$ and (b) to measure its expectation value. The single-qubit gates are {$A_X = H$, $A_Y = HS$ and $A_Z = I$}.
} \label{fig:pauli_circuits} \end{figure} Furthermore, Pauli operators are Hermitian, and thus can be measured. Since \begin{equation} \hat{\sigma}_{{\bf{m}}} = \hat{V}^\dagger {(Z \otimes \dots \otimes Z)} \hat{V} = \sum_{ {\bf{z}} } f({\bf{z}}) \hat{V}^\dagger | {\bf{z}} \rangle \langle {\bf{z}} | \hat{V} \quad,\quad f({\bf{z}}) = \prod_{\ell=0}^{n-1} (-1)^{z_\ell} \quad, \end{equation} preparing a register of qubits in a state $| \Psi \rangle$, applying the transformation $\hat{V}$ and measuring all qubits in the computational basis as shown in Fig.~\ref{fig:pauli_circuits}b yields a collection of samples, or "shots", $\{ {\bf{z}}_i \}_{i=1}^{n_s}$, from which the expectation value $\langle \Psi | \hat{\sigma}_{{\bf{m}}} | \Psi \rangle$ can be estimated as $\mu \pm \sigma$ with $\mu = n_s^{-1} \sum_i f( {\bf{z}}_i )$ and $\sigma^2 = n_s^{-1} (1-\mu^2)$. {The presence of statistical uncertainties on measurement results is a basic but centrally important aspect of quantum computation: quantum algorithms must be understood and formulated in terms of random variables and stochastic calculus, and their results accompanied with carefully estimated statistical uncertainties. These aspects cannot be overlooked in the design and implementation of quantum algorithms.} In the remainder of this work, we will call measurements and unitary transformations executed on a digital quantum computer \important{operations}, and use the term gates to denote unitary transformations only. In this section, we provided a very concise presentation of the concepts of qubit, quantum gates and measurements, with the purpose of fixing notation and maintaining the remainder of the work self-contained. In the coming sections, we will show that suitable sets of single- and two-qubit gates are universal, i.e. they can be multiplied to yield a generic unitary transformation, and present strengths and limitations of quantum computation in the light of quantum complexity theory. \subsubsection{Universality and limitations of digital quantum computers} \label{sec:universality} Informally, a set $\mathcal{S}$ of quantum gates is called \important{universal} if any unitary transformation that can be applied on a quantum computer can be expressed as a product of a finite number of gates from $\mathcal{S}$. Any multi-qubit gate can be expressed as a product of single-qubit and $\mathsf{CNOT}$ operations only, and therefore $\mathcal{S}_1 = \{ \mbox{single-qubit gates}, \mathsf{CNOT} \}$ is a universal set of quantum gates. Since single-qubit operations have continuous parameters (corresponding to rotation angles), the set $\mathcal{S}_1$ is not countable. However, it is known that the set $G$ of single-qubit gates generated by $\{\mathrm{\mathrm{Had},S,T}\}$ is dense in SU(2) \cite{nielsen2002quantum}. According to the Solovay-Kitaev theorem \cite{kitaev1997quantum,nielsen2002quantum,kitaev2002classical,harrow2002efficient,dawson2005solovay}, for any $\hat{U} \in$ SU(2) and any target accuracy $\varepsilon$ there exists a sequence of $\mathcal{O}(\log^c(1/\varepsilon))$ gates from the generating set of $G$ that approximates $\hat{U}$ within accuracy $\varepsilon$ and $c \simeq 4$. {Subsequent work has demonstrated \cite{kliuchnikov2012fast,ross2014optimal} that $z$ rotations and generic single-qubit gates can be implemented with no more than $3 \, \log(1/\varepsilon)$ and $9 \, \log(1/\varepsilon)$ T and H gates respectively, which is asymptotically optimal, with a practical algorithm. 
Therefore, the countable set $\mathcal{S}_2 = \{ \mathrm{\mathrm{Had}, S, T, \mathsf{CNOT}} \}$ is a universal set of quantum gates.} It should be noted, however, that a generic $n$-qubit unitary transformation is exactly represented with {$\mathcal{O}(4^n)$} single-qubit and $\mathsf{CNOT}$ gates \cite{barenco1995elementary,mottonen2004quantum,shende2006synthesis}, meaning that quantum computers are not guaranteed to give access to a generic $n$-qubit state with an amount of operations scaling polynomially with $n$. Furthermore, finding an \textbf{efficient} decomposition of a unitary transformation in terms of elementary quantum operations can itself be a challenging task \cite{daskin2011decomposition}. Finally, the set $\mathcal{C} = \{ \mathrm{Had, S, \mathsf{CNOT}} \}$ generates the so-called \important{Clifford group} of unitary transformations, that map Pauli operators onto Pauli operators. An important theoretical result, the Gottesman-Knill theorem \cite{gottesman1998heisenberg,aaronson2004improved,nest2008classical}, states that quantum circuits using only $(a)$ preparation of qubits in computational basis states, $(b)$ application quantum gates from the Clifford group and $(c)$ measurement {of a single Pauli operator}, can be efficiently simulated on a classical computer. Therefore, the entanglement that can be achieved with circuits of Clifford gates alone does not give any computational advantage over classical computers. Computational advantage is to be sought in circuits also containing $T$ gates. \subsubsection{Quantum computational complexity} \label{sec:complexity} Solving a problem on a computational platform requires designing an algorithm, by which term we mean the application of a sequence of mathematical steps. Executing an algorithm requires a certain amount of resources, typically understood in terms of space (or memory) and time (or elementary operations). Computational complexity theory groups computational problems in complexity classes defined by their resource usage, and relates these classes to each other. Quantum computational complexity theory is a branch of computational complexity theory that, loosely speaking, studies the difficulty of solving computational problems on quantum computers, formulates quantum complexity classes, and relates such quantum complexity classes with their classical counterparts \cite{bernstein1997quantum,kitaev2002classical,watrous2008quantum}. Two important quantum complexity classes are BQP (bounded-error quantum polynomial time) and QMA (quantum Merlin-Arthur). Roughly speaking, BQP comprises problems that can be solved with polynomial space and time resources on a quantum computer. In the context of quantum simulation for quantum chemistry, the most important BQP problem is the simulation of \important{Hamiltonian dynamics} \cite{feynman1982simulating,feynman1986quantum,lloyd1996universal,zalka1998efficient}. QMA, on the other hand, comprises problems where putative solutions can be verified but not computed in polynomial time on a quantum computer. Producing a putative solution means executing a quantum circuit giving access to a wavefunction $\Psi$, and verifying a putative solution means executing a second quantum circuit to ensure that $\Psi$ is actually a solution of the problem of interest. In the context of quantum simulation for quantum chemistry, the most important QMA problem is the simulation of \important{Hamiltonian eigenstates}, as discussed for example in \cite{kitaev2002classical,kempe2006complexity}. 
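To make the distinction between Clifford and non-Clifford operations more concrete, the following short numerical check (a minimal sketch in Python/NumPy, working with explicit matrix representations and the convention that the first tensor factor of $\mathsf{CNOT}$ is the control qubit; it is illustrative and not tied to any particular quantum software stack) verifies that the generators $\mathrm{Had}$, $S$ and $\mathsf{CNOT}$ conjugate Pauli operators into Pauli operators, whereas the $T$ gate does not:

\begin{verbatim}
import numpy as np

# Single-qubit gates and Pauli matrices as explicit matrices.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
T = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]], dtype=complex)

# CNOT with the first tensor factor as control and the second as target.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def conjugate(U, P):
    # Heisenberg-picture action of the gate U on the operator P.
    return U @ P @ U.conj().T

# Clifford generators map Pauli operators onto Pauli operators:
assert np.allclose(conjugate(H, X), Z)                              # Had X Had = Z
assert np.allclose(conjugate(S, X), Y)                              # S X S^dag = Y
assert np.allclose(conjugate(CNOT, np.kron(X, I2)), np.kron(X, X))  # X spreads to the target

# The T gate is not Clifford: it maps X to a combination of Pauli operators.
assert np.allclose(conjugate(T, X), (X + Y) / np.sqrt(2))
\end{verbatim}

Tracking how Pauli operators propagate through a circuit in this way is the essence of the stabilizer formalism underlying the Gottesman-Knill theorem: for Clifford circuits this bookkeeping requires only polynomial classical resources, which is why circuits containing $T$ gates are needed for a potential computational advantage.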
Current knowledge indicates that both the ground-state and the Hamiltonian dynamics problems are, in a worst-case scenario, exponentially expensive on a classical computer. The Hamiltonian dynamics problem is thus a relevant application for quantum algorithms, as it offers theoretical opportunities for better computational performance when tackled on a quantum computer. However, there are a number of practical considerations to take into account. For example, the statements of computational complexity theory refer to exact solutions of the problem at hand, and experience indicates that approximate methods can deliver accurate results for certain problems, which calls for a systematic characterization of quantum algorithms for chemistry in both accuracy and computational cost across a variety of chemical problems. Furthermore, quantum hardware has to reach a level of control and predictability compatible with large-scale quantum chemical simulations, which is one of the most important challenges confronting experimental physics and engineering. The division between BQP and QMA problems will inform the remainder of the present work: in the next Section we will present some important problems in computational chemistry, highlighting those based on the simulation of Hamiltonian dynamics. We will then present quantum algorithms for the simulation of Hamiltonian dynamics, and heuristic algorithms for Hamiltonian eigenstate approximation. \section{Some important problems in computational chemistry} In this Section, we briefly review some problems studied in computational chemistry. While their extensive description is beyond the scope of the present review, we will attempt to highlight some technical challenges, through the lens of which the quantum algorithms presented in the forthcoming sections can be examined. \subsection{Electronic structure} \label{sec:es} The main objective of electronic structure in chemistry is to determine the ground and low-lying excited states of a system of interacting electrons. Often relativistic effects and the coupling between the dynamics of electrons and nuclei can be neglected, or treated separately. Within this approximation, the many-electron wavefunction can be found by solving the time-independent Schr\"{o}dinger equation for the Born-Oppenheimer Hamiltonian \cite{Born_1927,Szabo_book_1989} { \begin{equation} \label{eq:bo_ham} \hat{H} \Psi = \left[ \sum_{a<b}^{N_n} \frac{Z_a Z_b}{| \vett{R}_a - \vett{R}_b|} \right] \Psi - \frac{1}{2} \, \sum_{i=1}^N \frac{\partial^2 \Psi}{\partial \vett{r}_i^2} - \left[ \sum_{i=1}^N \sum_{a=1}^{N_n} \frac{Z_a}{| \vett{r}_i - \vett{R}_a |} \right] \Psi + \left[ \sum_{i<j}^N \frac{1}{|\vett{r}_i - \vett{r}_j|} \right] \Psi \,, \end{equation} } where $\vett{r}_i$ is the position of electron $i$, and $\vett{R}_a$ the position of nucleus $a$, {having atomic number $Z_a$}. The numbers of electrons and nuclei are $N$ and $N_n$, respectively, nuclear positions are held fixed, and atomic units are used throughout. Solving the time-independent Schr\"{o}dinger equation is needed to access molecular properties including, and not limited to, energy differences (e.g. ionization potentials, electron affinities, singlet-triplet gaps, binding energies, deprotonation energies), energy gradients (e.g. forces and frequency-independent polarizabilities) and electrostatic properties (e.g. multipole moments and molecular electrostatic potentials). 
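As a concrete classical point of reference for the discussion that follows, the short script below (a sketch; the choice of the PySCF package, and of the H$_2$ geometry and minimal basis set, are illustrative assumptions rather than prescriptions of this review) solves the mean-field, restricted Hartree-Fock approximation to Eq.~\eqref{eq:bo_ham} for the hydrogen molecule and prints the resulting ground-state energy in atomic units:

\begin{verbatim}
from pyscf import gto, scf

# H2 at a bond length of 0.74 Angstrom in the minimal STO-3G basis
# (two spatial orbitals, i.e. M = 4 spin-orbitals).
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom")

# Restricted Hartree-Fock: the lowest-energy single Slater determinant.
mf = scf.RHF(mol)
e_hf = mf.kernel()  # total (electronic + nuclear) energy, in Hartree

print("RHF ground-state energy:  %.6f Ha" % e_hf)
print("Nuclear repulsion energy: %.6f Ha" % mol.energy_nuc())
\end{verbatim}

The correlated methods surveyed below improve on such a mean-field reference by recovering part of the remaining correlation energy, at increased computational cost.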
The first step of any electronic structure simulation is to approximate Eq.~\eqref{eq:bo_ham} with a simpler Hamiltonian acting on a finite-dimensional Hilbert space. This is usually achieved by truncating the Hilbert space of a single electron to a finite set of orthonormal basis functions or spin-orbitals $\{ \varphi_p \}_{p=1}^M$. Electronic structure simulations based on the first quantization formalism describe a system of $N$ electrons in $M$ spin-orbitals using the configuration interaction (CI) representation \begin{equation} \label{eq:ci_wfn} | \Psi \rangle = \sum_{i_1 < \dots < i_N}^M \psi_{i_1 \dots i_N} \, | i_1 \dots i_N \rangle \quad, \end{equation} where $| i_1 \dots i_N \rangle$ is the Slater determinant where orbitals $i_1 \dots i_N$ are occupied. Electronic structure simulations based on the second quantization formalism, on the other hand, operate in the Fock space of electrons in $M$ spin-orbitals, and represent the Hamiltonian as \begin{equation} \label{eq:hamiltonian_born_oppenheimer} \hat{H} = \sum_{a<b}^{N_n} \frac{Z_a Z_b}{| \vett{R}_a - \vett{R}_b|} + \sum_{pr} h_{pr} \, \crt{p} \dst{r} + \sum_{prqs} \frac{(pr|qs)}{2} \, \crt{p} \crt{q} \dst{s} \dst{r} \,, \end{equation} where $\crt{p},\dst{r}$ are fermionic creation and annihilation operators associated to spin-orbitals $\varphi_p, \varphi_r$ respectively. \subsubsection{Basis sets} Choosing a finite set of spin-orbitals $\{ \varphi_p \}_{p=1}^M$ balances two concerns: the need of representing electronic states and operators with as few orbitals as possible, and the need for obtaining accurate results. Gaussian bases are most commonly employed in molecular simulations due to their compactness, while plane waves are most often used in the simulation of crystalline solids. In order to get quantitatively accurate results, very large one-electron basis sets have to be used, in either case. Since, as we shall see, the number $M$ of orbitals translates into the number of qubits needed to perform a simulation (see Section \ref{sec:fermions_second}) and near-term hardware is limited in the number and quality of qubits (see Section \ref{sec:hardware}), most quantum computing simulations so far have been limited to $M \simeq 10$ orbitals. Theoretical and computational chemists can help the field of quantum computation by integrating techniques (e.g. optimized orbitals \cite{mizukami2020orbital}, perturbative treatments \cite{takeshita2020increasing}, explicit electronic correlation \cite{motta2020quantum}) to account for more orbitals without extra qubits and gates. Furthermore, most quantum algorithms have so far been tested and characterized using sets of $M \simeq 10$ orbitals (corresponding to small active spaces or minimal bases). While the results of such investigations have often indicated high accuracy when compared to classical methods with the same basis sets, it is in many cases uncertain whether known algorithms are useful when larger bases are used. Thus, another area of research where theoretical and computational chemists can offer insight and help is the systematic extension and benchmark of quantum algorithms beyond the small basis sets investigated so far. \subsubsection{Classical algorithms and open problems} The main obstacle to the investigation of electronic structure is that, in general, the computational cost of finding the exact eigenfunctions of Eq.~\eqref{eq:bo_ham} grows combinatorially with the size of the studied system \cite{Troyer_PRL_2004,Schuch_NAT_2009}. 
This limitation has so far precluded exact studies for all but the smallest systems and motivated the development of approximate methods. At a high level, those methods can be grouped into general categories such as wavefunction, density functional, embedding, and diagrammatic {(or Green's function)} methods. \paragraph{Wavefunction methods} formulate an Ansatz for an eigenstate, e.g. the ground state, and compute expectation values of observables and correlation functions with respect to that wavefunction. The nature of the underlying Ansatz is ultimately responsible for the accuracy and computational cost of a given method. For molecular systems, a hierarchy of quantum chemistry methods has been developed, which allows systematic improvement in accuracy at increasing computational cost. These techniques typically have their starting point in the Hartree-Fock (HF) method, which approximates the ground state of a molecular Hamiltonian with the lowest-energy Slater determinant, and incorporate electronic correlation on top of it. For example, one of the most accurate methods is coupled-cluster with singles, doubles, and a perturbative estimate of the connected triples, CCSD(T) \cite{Paldus_ACP_1999,bartlett2007coupled,Shavitt_book_2009}. Other promising alternative approaches include tensor network methods \cite{White_PRL_1992,White_JCP_1999,Chan_JCP_2002,Olivares_JCP_2015,Chan_JCP_2016}, which represent electronic wavefunctions as contractions between tensors, and quantum Monte Carlo methods \cite{Booth_JCP_2009,Booth_NAT_2013,Reynolds_JCP_1982,Foulkes_RMP_2001,Zhang_PRL90_2003,motta2018ab,motta2017towards}, which instead represent electronic properties as expectation values of carefully designed random variables. \paragraph{Density functional theory (DFT) methods} are the most widely used electronic structure methods; they are mean-field in nature and are based on the electron density \cite{Martin_book_2004,Kohn_RMP_1999}, which makes them less expensive than wavefunction methods. DFT methods are based on an approximation to the Hamiltonian rather than an approximation to the wavefunction, and there is no hierarchy of functionals allowing systematic improvement as there is in wavefunction methods \cite{hammes2017conundrum}. Nevertheless, owing to their significantly lower cost, density functional methods are standard tools for electronic structure calculations in many areas across multiple disciplines, with sophisticated computer software packages available. \paragraph{Embedding methods} evaluate the properties of a large system by partitioning it within a given basis (e.g. the spatial or energy basis) into a collection of fragments, embedded in a self-consistently determined environment \cite{Knizia-2012,Knizia-2013,Wouters-2016,Georges1996dynamical,vollhardt2012dynamical}. These methods combine two different types of quantum calculations: high-level calculations on fragments, and low-level calculations on the environment surrounding fragments. The accuracy of an embedding method is determined by a combination of several factors, including the size of the fragments, the accuracy in the treatment of the embedded fragments and environment, and the convergence of the self-consistency feedback loops between fragments and their environment. \paragraph{Diagrammatic methods} evaluate, either deterministically or stochastically, a subset of the terms in the diagrammatic interaction expansion \cite{Hedin,VanHoucke2010,Kulagin2013,nguyen2016rigorous,AlexeiSEET,LAN} {of a quantity such as the Green's function, the self-energy, or the ground-state energy}.
These methods are often based on the Feynman diagrammatic technique formulated in terms of self-consistent propagators and bare or renormalized interactions, at finite or zero temperature, and their accuracy and computational cost are determined by several factors, including the subsets of diagrams or series terms that are included in the calculation. These methodologies tend to be most accurate for problems where a single electronic configuration dominates the representation Eq.~\eqref{eq:ci_wfn} of the ground-state wavefunction, the so-called single-reference problems, exemplified by the ground states of many simple molecules at equilibrium. In many molecular excited states, along bond stretching, and in transition metal systems, multiple electronic configurations contribute to the ground-state wavefunction, as illustrated in Fig.~\ref{fig:es}a, leading to multi-reference quantum chemistry problems. \begin{figure} \caption{Left: example of multi-reference character in the dissociation of H$_2$O. Right: transition metal (TM) compounds of relevance for fundamental chemistry and quantum computing research: TM atoms, ions, hydrides and oxides can constitute subjects of study for near-term quantum devices. Longer-term goals are the description of CuO, MnO and FeS cores found in a variety of biological catalysts.} \label{fig:es} \end{figure} Despite remarkable progress in the extension of quantum chemistry methods to multi-reference situations, the accuracy attainable for molecules with more than a few atoms is considerably lower than in the single-reference case. Some important examples of multi-reference quantum chemical problems are sketched in Fig.~\ref{fig:es}b. Transition metal atoms, oxides, and dimers pose formidable challenges to even remarkably accurate many-body methods, as they feature multiple bonds, each with a weak binding energy, and potential energy surfaces resulting from the interplay between \important{static and dynamical} electronic correlation \cite{williams2020direct,AlSaidi_PRB73_2006,Purwanto_JCP142_2015,Purwanto_JCP144_2016,shee2019achieving,shee2021revealing}. Full configuration interaction (FCI) methods can describe such an interplay, but the systems that can be studied with FCI methods are arguably small \cite{rossi1999full,vogiatzis2017pushing}. Coupled binuclear copper centers are present in the active sites of some very common metalloenzymes found in living organisms. The correct theoretical description of the interconversion between the two dominant structural isomers of the $\ce{Cu2O2^{2+}}$ core is key to the ability of these metalloenzymes to reversibly bind to molecular oxygen, and is challenging since it also requires a method that can provide a balanced description of static and dynamic electronic correlations \cite{solomon1996multicopper,kitajima1994copper,samanta2012exploring,gerdemann2002crystal}. The chemistry of other active sites, such as the $\ce{Mn3CaO4}$ cubane core of the oxygen-evolving complex of photosystem II, and the $\ce{Fe7MoS9}$ cofactor of nitrogenase, poses some of the most intricate multi-reference problems in the field of biochemistry. In particular with respect to their spectroscopic properties, such systems require a detailed characterization of the interplay between spin-coupling and electron delocalization between metal centers \cite{sharma2014low,kurashige2013entangled,li2019electronic,li2019electronic2,chilkuri2019ligand,cao2018protonation}.
\subsection{Electronic dynamics} \label{sec:electron_dynamics} Many experiments conducted on molecules probe their dynamical, rather than equilibrium, properties. An important example is given by the oscillator strengths \begin{equation} D_{0\to f} \propto (E_f - E_0) \sum_{\alpha=x,y,z} | \langle \Psi_f | \hat{\mu}_\alpha | \Psi_0 \rangle |^2 \quad, \end{equation} where $\Psi_0$ and $\Psi_f$ are a ground and excited state of the electronic Hamiltonian, with energies $E_0$ and $E_f$ respectively, and $\hat{\mu}_\alpha$ is the dipole moment along direction $\alpha=x,y,z$. Oscillator strengths exemplify properties, such as structure factors \cite{damascelli2003angle,Damascelli_2004}, that are determined by excited electronic states through transition energies and matrix elements. A variety of algorithmic tools are available for the calculation of spectral functions and time-dependent properties on classical computers today, including time-dependent density functional theory, equation-of-motion coupled-cluster and diagrammatic theories \cite{onida2002electronic,stanton1993equation}. In some regimes (e.g. transitions to specific and structured excited states) dynamical properties such as oscillator strengths can be computed efficiently and accurately. On the other hand, there are computationally challenging regimes (e.g. congested spectra) where quantum algorithms can be relevant, owing to their ability to simulate time evolution. In fact, oscillator strengths are straightforwardly obtained from dipole-dipole \important{structure factors} \begin{equation} S_{\mu_\alpha,\mu_\beta}(\omega) = \sum_l \langle \Psi_0 | \hat{\mu}_\alpha | \Psi_l \rangle \, \delta\Big( \hbar \omega - (E_l-E_0) \Big) \, \langle \Psi_l | \hat{\mu}_\beta | \Psi_0 \rangle \quad, \end{equation} as $D_{0\to f} = \sum_\alpha S_{\mu_\alpha,\mu_\alpha}(E_f-E_0)$. Dipole-dipole structure factors are in turn the Fourier transform of the time-dependent dipole-dipole \important{correlation functions}, \begin{equation} \label{eq:linear_response1} C_{\mu_\alpha,\mu_\beta}(t) = \langle \Psi_0 | \hat{\mu}_\alpha \, e^{ \frac{t}{i\hbar} (\hat{H}-E_0)} \, \hat{\mu}_\beta | \Psi_0 \rangle = \sum_f \langle \Psi_0 | \hat{\mu}_\alpha | \Psi_f \rangle \, e^{ \frac{t}{i\hbar} (E_f-E_0)} \, \langle \Psi_f | \hat{\mu}_\beta | \Psi_0 \rangle \quad, \end{equation} and can be computed by simulating time evolution. Another important and related research theme is the computation of \important{time- and frequency-dependent properties} (e.g. non-linear optical properties, electro-optical effects, circular dichroism). Similar to oscillator strengths, these properties are challenging to compute as they involve excited states, and are natural applications for quantum algorithms, as they can be computed by simulating time evolution. Consider a system at equilibrium in the ground state $\Psi_0$ of $\hat{H}_S$ at time $t=0$, and subject to a perturbation of the form $\hat{V}(t) = \sum_k f_k(t) \, \hat{O}^\dagger_k$. The expectation value of the operator $\hat{O}_j$ at time $t>0$ is given by \begin{equation} O_j(t) = \langle \Psi(t) | \hat{O}_{j,S}(t) | \Psi(t) \rangle \quad,\quad | \Psi(t) \rangle = \hat{U}(t) | \Psi_0 \rangle \quad,\quad \hat{U}(t) = \sum_{n=0}^{\infty} \int_0^t \frac{dt_1}{i\hbar} \dots \, \int_0^{{t_{n-1}}} \frac{dt_n}{i\hbar} \, \hat{V}_S(t_1) \dots \hat{V}_S(t_n) \quad, \end{equation} where $\hat{V}_S(t) = e^{ - \frac{t \hat{H}_S}{i \hbar} } \hat{V} e^{ \frac{t \hat{H}_S}{i \hbar} }$.
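Before specializing to the linear-response regime, we note that correlation functions of the form of Eq.~\eqref{eq:linear_response1} can be evaluated on small models by exact diagonalization, which provides a useful classical reference for quantum simulations. The following Python sketch, with $\hbar = 1$ and random Hermitian matrices standing in for the Hamiltonian and one dipole component (all parameters are illustrative), evaluates $C_{\mu_\alpha,\mu_\alpha}(t)$ from the spectral representation:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
dim = 6                                   # toy Hilbert-space dimension (hypothetical model)

# Hermitian stand-ins for the Hamiltonian and one dipole component (hbar = 1)
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2
B = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
mu = (B + B.conj().T) / 2

E, V = np.linalg.eigh(H)                  # eigenvalues E_f and eigenvectors |Psi_f>
mu_0f = V[:, 0].conj() @ mu @ V           # matrix elements <Psi_0| mu |Psi_f>

def C(t):
    """Dipole-dipole correlation function of Eq. (linear_response1), alpha = beta."""
    return np.sum(np.abs(mu_0f) ** 2 * np.exp(-1j * t * (E - E[0])))

for t in np.linspace(0.0, 2.0, 5):
    print(f"t = {t:4.2f}   Re C = {C(t).real:+.6f}   Im C = {C(t).imag:+.6f}")
\end{verbatim}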
The linear response approximation \cite{fetter2012quantum}, valid for weak external perturbations, {consists of} truncating $\hat{U}(t)$ and $O_j(t)$ to first order in $\hat{V}(t)$, leading to the expression \begin{equation} \label{eq:linear_response2} O_j(t) = O_j(0) + \int_0^t \frac{dt^\prime}{i \hbar} \, \sum_k f_k(t^\prime) \, \alpha_{jk}(t-t^\prime) \quad,\quad \alpha_{jk}(t) = \langle \Psi_0 | [ \hat{O}_{j,S}(t) , \hat{O}^\dagger_k ] | \Psi_0 \rangle \quad, \end{equation} where $\alpha_{jk}(t)$ is the time-dependent polarizability of $\hat{O}_j$, $\hat{O}_k$. Real-time correlation functions such as \eqref{eq:linear_response1} and \eqref{eq:linear_response2} are objects of central importance in many-particle physics, and naturally emerge in the framework of linear response theory, i.e. of weak external perturbations. Going beyond spectral properties, and probing the non-equilibrium real-time dynamics of chemical systems, is a research topic of increasing relevance, both because of experiments that can now probe quantum dynamics at atomic scales, and because of fundamental interest in studying the time-dependent behavior of many-particle systems. \subsection{Molecular vibrations} The geometric structure of molecules is typically probed by gas-phase spectroscopic experiments, whose interpretation, for all but the smallest molecules, needs input from numerical simulations, due to many competing transitions between states \cite{barone2015quantum,bowman2008variational,X3}. Numerical simulations of molecular rovibrational levels {require} solving the Schr\"{o}dinger equation for the nuclei. Within the Born-Oppenheimer approximation, the nuclear Hamiltonian has the form \begin{equation} \label{eq:nuclear_bo} \hat{H}_{nuc} = - \frac{1}{2} \sum_{a=1}^{N_n} \frac{1}{M_a} \frac{\partial^2}{\partial \vett{R}_a^2} + V( \vett{R}_1 \dots \vett{R}_{N_n} ) \,, \end{equation} where $V$ denotes the ground-state potential energy surface of the electronic Hamiltonian Eq.~\eqref{eq:bo_ham} when nuclei have positions $\{ \vett{R}_a \}_{a=1}^{N_n}$, and $\{ M_a \}_{a=1}^{N_n}$ are the nuclear masses, in atomic units. Solving the nuclear Schr\"{o}dinger equation presents additional challenges: first of all, as the interaction among nuclei is mediated by electrons, the function $V$ is not known a priori, and needs to be computed from quantum chemical calculations at fixed nuclear geometries as in Section \ref{sec:es}, and then fitted to an appropriate functional form, which can be an expensive procedure \cite{X1,X2}. For certain problems, the harmonic approximation of $V$ is appropriate, in which case the Hamiltonian is given by \cite{X1} \begin{equation} \label{eq:nuclear_bo_harmonic} \hat{H}_{nuc} = \frac{1}{2} \sum_{\alpha\beta} (\hat{J}_\alpha - \hat{\pi}_\alpha) \mu_{\alpha\beta} (\hat{J}_\beta - \hat{\pi}_\beta) - \frac{1}{2} \sum_k \frac{\partial^2}{\partial Q_k^2} - \frac{1}{8} \sum_\alpha \mu_{\alpha\alpha} +V({\bf{Q}}) \quad,\quad V({\bf{Q}}) = \frac{\kappa}{2} {\bf{Q}}^2 \quad. \end{equation} In Eq.~\eqref{eq:nuclear_bo_harmonic} $\hat{J}_\alpha$ is the total angular momentum of a given cardinal direction ($x$, $y$, or $z$) denoted by $\alpha$ or $\beta$; $\hat{\pi}_\alpha$ is the total vibrational angular momentum of the same direction; $\mu_{\alpha\beta}$ (with diagonal elements $\mu_{\alpha\alpha}$) is the inverse of the moment of inertia tensor for the given geometric coordinates; $Q_k$ is a single normal coordinate, and ${\bf{Q}}$ is the set of all normal coordinates; and $\kappa$ is a numerically determined spring constant.
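As a minimal illustration of Eq.~\eqref{eq:nuclear_bo_harmonic} for a single normal coordinate, neglecting the rotational and Coriolis terms, the following Python sketch diagonalizes $-\tfrac{1}{2}\partial^2_Q + \tfrac{\kappa}{2}Q^2$ on a finite-difference grid (the value of $\kappa$ and the grid parameters are arbitrary) and approximately recovers the equally spaced spectrum $(n+1/2)\sqrt{\kappa}$:
\begin{verbatim}
import numpy as np

# Finite-difference solution of a single-mode harmonic Hamiltonian: only the kinetic
# term -1/2 d^2/dQ^2 and V(Q) = kappa/2 Q^2 of Eq. (nuclear_bo_harmonic) are retained.
# kappa is a hypothetical spring constant in atomic units.
kappa = 0.25
n_grid, Q_max = 801, 12.0
Q, dQ = np.linspace(-Q_max, Q_max, n_grid, retstep=True)

# Kinetic energy by second-order central differences
T = -0.5 * (np.diag(np.full(n_grid - 1, 1.0), 1)
            + np.diag(np.full(n_grid - 1, 1.0), -1)
            - 2.0 * np.eye(n_grid)) / dQ**2
H = T + np.diag(0.5 * kappa * Q**2)

E = np.linalg.eigvalsh(H)[:5]
omega = np.sqrt(kappa)
print("numerical levels:      ", np.round(E, 4))
print("harmonic (n+1/2)*omega:", np.round([(n + 0.5) * omega for n in range(5)], 4))
\end{verbatim}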
Notwithstanding its simplicity (and usefulness in situations such as correction of binding energies for zero-point nuclear motion), the harmonic approximation Eq.~\eqref{eq:nuclear_bo_harmonic} has several limitations. As a result of the equal spacing of energy levels for a given normal mode, all transitions occur at the same frequency, and bond dissociation is not described. Improving over the harmonic approximation requires retaining higher-order or anharmonic terms in $V$, which poses additional challenges, especially a proper choice of curvilinear nuclear coordinates, and developing accurate solvers for the nuclear Schr\"{o}dinger equation \cite{sadri2012numeric,tew2003internal,viel2017zeropoint,beck2001multiconfiguration,X1,X2}. Computing solutions of the nuclear Schr\"{o}dinger equation has many valuable applications, notably supporting the study of chemical processes occurring in atmospheric chemistry, and the identification of molecules in the interstellar medium and circumstellar envelopes \cite{vaida2008spectroscopy,heiter2015atomic,smith1988formation,schilke2001line,agundez2008tentative,agundez2014new,X1,X2}. \subsection{Chemical reactions} Understanding the microscopic mechanisms underlying chemical reactions is another problem of central importance in chemistry. A particularly important goal is to calculate thermochemical quantities, such as reactant-product enthalpy and free energy differences, and activation energies. A somewhat common misconception in the quantum computation community stems from the incorrect interpretation and use of the term ``chemical accuracy''. According to the Arrhenius equation, the temperature dependence of reaction rate constants is contained in an exponential factor of the form $\exp( - \beta E_a)$, where $E_a$ is the activation energy. At room temperature, $\beta^{-1} \simeq 0.5922$ kcal/mol, so a bias $\Delta E \simeq 1.36$ kcal/mol in the activation energy changes the computed rate constant by an order of magnitude, since $e^{\beta \Delta E} = e^{1.36/0.5922} \simeq 10$. In light of these considerations, the accuracy required to make realistic thermochemical predictions, called \important{chemical accuracy}, is generally considered to be 1 kcal/mol \cite{pople1999nobel}. The term chemical accuracy refers to agreement between computed and experimental energy differences within 1 kcal/mol. Part of the quantum computing literature has used the term chemical accuracy to indicate agreement between computed and exact (i.e. FCI) total energies in a fixed basis set to within 1 kcal/mol, which we argue should instead be called \important{algorithmic accuracy}. Determining the main features of the potential energy surface, i.e. the electronic energy as a function of nuclear positions, its minima and saddle points, is key to understanding chemical reactivity, product distributions, and reaction rates. On the other hand, several other factors affect chemical reactions. Potential energy surfaces are shaped by the presence and properties of solvents: indeed, it is known that solvents have the ability to modify the electron density, stabilizing transition states and intermediates and lowering activation barriers, e.g. \cite{hartshorn1973aliphatic,wade2006organic}.
The systematic incorporation of \important{solvation effects}, within a hierarchy of implicit \cite {cramer1999implicit,klamt2011cosmo}, hybrid QM/MM where the solvent is treated with a force field (MM) \cite {mennucci2012polarizable,senn2009qm}, and other multi-scale methods including embedding methods QM(active)/QM(surrounding) where the QM(active) refers to treatment of the most relevant part of the system with higher level quantum mechanics and QM(surrounding) refers to treatment of the surrounding by a lower level of quantum mechanics \cite{cramer1999implicit,klamt2011cosmo,mennucci2012polarizable,senn2009qm}, is thus necessary to improve the description of chemical systems beyond gas-phase properties, and is especially important in the description of many chemical reactions, such as $\mathrm{S_N2}$ nucleophilic substitution reactions \cite{hartshorn1973aliphatic,wade2006organic}. Conformational effects are another important aspect of chemical reactivity, especially in molecular crystals. Molecular crystals have diverse applications in fields such as pharmaceuticals and organic electronics. The organic molecules comprising such crystals are bound by weak dispersion interactions, and as a result the same molecule may crystallize in several different solid forms, known as \important{polymorphs} \cite{bernstein2020polymorphism,price2016can,day2007strategy}. The energy differences between polymorphs are typically within 0.5 kcal/mol or less and the structural differences between polymorphs govern their physical properties and functionality. These observations call for the accuracy of first-principles quantum mechanical approaches, the use and refinement of many-body dispersion methods, and efficient configuration space exploration, making the computational characterization of molecular crystals one of the most difficult and yet highly important in molecular chemistry \cite{bernstein2020polymorphism,lombardo2017silico,kamat2020diabat,beran2016modeling}. \section {Quantum algorithms for chemistry} \label{sec:algo} \subsection{Mappings to qubits} \label{sec:mapping_to_qubits} In this Section, techniques to map fermionic and bosonic degrees of freedom to qubits are presented. The second quantization formalism for fermions is discussed in Section \ref{sec:fermions_second}, its counterpart for bosons in Section \ref{sec:bosons_second}, and alternative approaches are briefly reviewed in Section \ref{sec:fermions_bosons_alternatives}. \subsubsection{Fermions in second quantization} \label{sec:fermions_second} The Fock space $\mathcal{F}_{-}$ of fermions occupying $M$ spin orbitals has the same dimension, $2^M$, of the Hilbert space of $M$ qubits, $\mathcal{H}^{\otimes M}$. Therefore, it is possible to construct {a one-to-one correspondence} \begin{equation} {\hat{F}} : \mathcal{F}_{-} \to \mathcal{H}^{\otimes M} \;,\; | \Psi \rangle \mapsto {\hat{F}} | \Psi \rangle \equiv | \Psi^\prime \rangle \end{equation} to represent fermionic wavefunctions $| \Psi \rangle$ and operators $\hat{A}$ by qubit wavefunctions $| \Psi^\prime \rangle$ and operators $\hat{B}^\prime = {\hat{F}} \hat{B} {\hat{F}}^{-1}$ as in Fig.~\ref{fig:simulation}. There are combinatorially many ways to map a quantum system to a set of qubits \cite{wu2002qubits,batista2004algebraic} and, since fermions exhibit non-locality of their state space, due to their antisymmetric exchange statistics, any representation of fermionic systems on collections of qubits must introduce non-local structures \cite{bravyi2002fermionic}. 
\paragraph{Jordan-Wigner (JW) transformation.} The JW transformation \cite{jordan1993paulische,abrams1997simulation,ortiz2001quantum,somma2002simulating} maps electronic configurations with generic particle number onto computational basis states, \begin{equation} \big( \crt{M-1} \big)^{x_{M-1}} \dots \big( \crt{0} \big)^{x_0} | \emptyset \rangle \mapsto | \vett{x} \rangle \;, \end{equation} and fermionic creation and annihilation operators ($\crt{k}$ and $\dst{k}$ respectively) onto non-local qubit operators of the form \begin{equation} \label{eq:jwgivescar} \crt{k} \mapsto \frac{X_k - i Y_k}{2} \otimes Z_{k-1} \otimes \dots \otimes Z_0 \equiv S_{+,k} \, Z^{k-1}_0 \quad,\quad \dst{k} \mapsto \frac{X_k + i Y_k}{2} \, Z^{k-1}_0 \equiv S_{-,k} \, Z^{k-1}_0 \quad. \end{equation} The non-locality of these operators is required to preserve the canonical anticommutation relations between creation and destruction operators, and carries over to the qubit representation of $n$-body fermionic operators. The main limitation of the Jordan-Wigner transformation is that, as a consequence of the non-locality of the operators $Z^b_a$, the number of qubit operations required to simulate a fermionic operator $\crt{k}$ scales as $\mathcal{O}(M)$ \cite{aspuru2005simulated,whitfield2011simulation}. Such a limitation motivated the design of alternative transformations, such as the parity mapping described below. The JW transformation is exemplified in Fig.~\ref{fig:representations} using the hydrogen molecule in a minimal basis. In this work, we follow the convention of mapping spin-up and spin-down orbitals to the first and the last $M/2$ qubits respectively. It is worth emphasizing that, since the JW transformation operates in the Fock space, states with any particle number, spin, and point group symmetry can result. \paragraph{Parity transformation.} The operator $Z^{k-1}_0$ computes the parity $p_k = \sum_{j=0}^{k-1} x_j \, \mathsf{mod} \, 2$ of the number of particles occupying orbitals up to $k$. The computation of parities can be achieved using only single-qubit $Z$ operators using the following parity transformation, \begin{equation} \big( \crt{M-1} \big)^{x_{M-1}} \dots \big( \crt{0} \big)^{x_0} | \emptyset \rangle \mapsto | \vett{p} \rangle \quad,\quad \crt{k} \mapsto X_{M-1} \otimes \dots \otimes X_{k+1} \otimes \frac{ X_k - i Y_{k}}{2} \equiv X^{M-1}_{k+1} \otimes S_{+,k} \;. \end{equation} In the parity transformation, the calculation of parities is local, but the change of occupation numbers brought by the application of a creation or destruction operator requires $\mathcal{O}(M)$ single-qubit gates. The parity transformation, therefore, does not improve the efficiency over that of the JW transformation, but naturally allows a reduction of two in the number of qubits, which is desirable when very small quantum computers are used \cite{bravyi2017tapering}. The JW and parity transformations are exemplified in Fig.~\ref{fig:representations}b,c. In JW representation, computational basis states $|x_3,x_2,x_1,x_0 \rangle$ encode occupations $x_k$ of spin-orbitals; in parity representation, they encode parities $|p_3,p_2,p_1,p_0 \rangle$ with $p_k = x_0 + \dots + x_k$ (mod 2).
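The JW mapping of Eq.~\eqref{eq:jwgivescar} can be checked directly with dense linear algebra on a handful of qubits. The following Python sketch, a brute-force construction that is practical only for small $M$, builds the qubit images of $\crt{k}$ and verifies the canonical anticommutation relations:
\begin{verbatim}
import numpy as np

# Single-qubit matrices
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
Sp = (X - 1j * Y) / 2          # S_{+,k} of Eq. (jwgivescar)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_creation(k, M):
    """Jordan-Wigner image of the creation operator a_k^dagger on M qubits.
    Qubit M-1 is the leftmost tensor factor, matching Eq. (jwgivescar)."""
    ops = [I2] * (M - 1 - k) + [Sp] + [Z] * k
    return kron_all(ops)

M = 4
a_dag = [jw_creation(k, M) for k in range(M)]
# Verify canonical anticommutation relations {a_p, a_q^dagger} = delta_pq
for p in range(M):
    for q in range(M):
        acomm = a_dag[p].conj().T @ a_dag[q] + a_dag[q] @ a_dag[p].conj().T
        assert np.allclose(acomm, np.eye(2**M) if p == q else 0.0)
print("JW operators satisfy the canonical anticommutation relations")
\end{verbatim}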
\begin{figure} \caption{Top: (a) molecular orbitals of the $\ce{H2}$ molecule at equilibrium geometry using a minimal STO-6G basis; (b) quantum circuit to prepare the Hartree-Fock state ($X$ gates) and a superposition of the Hartree-Fock and a doubly excited state $\cos(\theta) |0101\rangle + \sin(\theta) |1010\rangle$ (exponential of the {$YXXX$} operator) in JW representation, to convert from JW to parity representation (subsequent 3 $\mathsf{CNOT}$ gates), and to compute the total irrep of the wavefunction (last 3 $\mathsf{CNOT}$ gates); (c) mapping of electronic configurations (horizontal black lines denote molecular orbitals and blue, orange circles denote spin-up, spin-down particles) in Jordan-Wigner representation and parity representation, and with tapering of the D$_{2h}$ symmetry group. Red digits denote (left to right) $(-1)^{N_\uparrow + N_\downarrow}$, $(-1)^{N_\uparrow}$, and the total irrep of the electronic configuration, $x_1+x_3 = p_0 + p_1 + p_2 + p_3$ mod 2.} \label{fig:representations} \end{figure} \paragraph{Bravyi-Kitaev (BK) transformation.} This transformation balances the locality of the occupation numbers and that of parities, to achieve a mapping of fermionic creation and destruction operators onto $\mathcal{O}(\log_2 M)$-qubit operators \cite{bravyi2002fermionic}. It does so by mapping occupation number states onto suitably defined binary strings, \begin{equation} \big( \crt{M-1} \big)^{x_{M-1}} \dots \big( \crt{0} \big)^{x_0} | \emptyset \rangle \mapsto | \vett{b} \rangle \;,\; b_k = \sum_{j=0}^{M-1} A_{kj} \, x_j \, \mathsf{mod} \, 2 \;, \end{equation} where the $M \times M$ binary matrix $A$ has the structure of a binary tree \cite{bravyi2002fermionic,seeley2012bravyi}. Since it requires only $\mathcal{O}(\log_2 M)$-qubit operators, the BK transformation allows for a more economical encoding of fermionic operators onto qubit operators, with reduced cost for both measurements and quantum circuits. \paragraph{Qubit reduction techniques.} Lowering the number of qubits required to encode fermionic degrees of freedom, for example leveraging Hamiltonian symmetries \cite{bravyi2017tapering,setia2020reducing,faist2020continuous}, is an active and valuable research direction \cite{bravyi2017tapering,steudtner2018fermion}. Since the Hilbert space of a single qubit is isomorphic to $\mathbb{C}^2$ and operators acting on different qubits commute with each other, it is natural to consider Abelian symmetry groups isomorphic to $\mathbb{Z}_2^{\times k}$. Examples of such symmetries are those generated by the parities $(-1)^{N_\uparrow}$ and $(-1)^{N_\downarrow}$, proper rotations $C_2$, plane reflections $\sigma$ and inversion $i$. In the tapering algorithm \cite{bravyi2017tapering}, such symmetries are detected leveraging the formalism of stabilizer groups \cite{gottesman1997stabilizer}: the Hamiltonian is written as $\hat{H} = \sum_j c_j P_j$, where $P_j$ denotes an $M$-qubit Pauli operator, and one considers symmetry groups $\mathcal{S}$ that (i) are Abelian subgroups of the $M$-qubit Pauli group, (ii) do not contain $-I$, and (iii) commute element-wise with every Pauli operator $P_j$ in the qubit representation of $\hat{H}$.
While such a restriction limits the generality of the formalism, it provides an efficient algorithm, based on linear algebra on the $\mathbb{Z}_2$ field \cite{bravyi2017tapering}, to identify a set of generators $\tau_1 \dots \tau_k$ for $\mathcal{S}$, and a Clifford transformation $U$ such that $\tau_i = U^\dagger \, Z_i \, U$ for all $i=1 \dots k$. An $M$-qubit wavefunction $\Psi$ is an eigenfunction of all symmetry operators with eigenvalues $s_i$ if $Z_i \, U | \Psi \rangle = s_i \, U | \Psi \rangle$, {i.e. $U | \Psi \rangle = | \Phi_{\bf{s}} \rangle \otimes | {\bf{s}} \rangle$. One can thus search for eigenfunctions of the projections $\hat{H}_{\bf{s}} = \mathrm{Tr}_{1 \dots k} \left[ \left( \mathbbm{1} \otimes |{\bf{s}}\rangle \langle {\bf{s}}| \right) U^\dagger H U \right]$ of $\hat{H}$ } on the irreducible representation of $\mathcal{S}$ labeled by the eigenvalues ${\bf{s}}$. The procedure is exemplified in Fig.~\ref{fig:representations}b. At the end of the circuit, qubits 0, 1 and 3 from above contain the irrep label and the parities $p_1$, $p_3$ respectively, so that the energy of the H$_2$ molecule in the STO-6G basis, originally encoded on 4 qubits, can be computed using a single qubit without loss of accuracy. \subsubsection{Bosons in second quantization} \label{sec:bosons_second} In the previous section, we showed how to map spin-$1/2$ fermions to qubits. For many problems, it is necessary to simulate $d$-level particles with $d>2$, including bosonic elementary particles \cite{fisher1989boson}, spin-$s$ particles \cite{levitt2013spin}, vibrational modes \cite{wilson1980molecular} and electronic energy levels in molecules and quantum dots \cite{turro1991modern,hong2019overview}. Accordingly, several qubit-based quantum algorithms were recently developed for efficiently studying some of these systems, including nuclear degrees of freedom in molecules \cite{veis2016quantum,joshi2014estimating,teplukhin2019calculation,mcardle2019digital,sawaya2019quantum}, the Holstein model \cite{macridin2018electron,macridin2018digital} and quantum optics \cite{sabin2020digital,di2020variational}. Mapping a $d$-level system to a set of qubits can be done in a variety of ways, and determining which encoding is optimal for a given problem has important practical implications. The standard binary mapping refers to the familiar base-two numbering system, such that an integer $l=0\dots d-1$, corresponding to one of the $d$ levels of the system, is represented as $l = \sum_{i=0}^{n_q-1} x_i 2^i$, with $n_q = \lceil \log_2 d \rceil$, and mapped onto the binary string ${\bf{x}}_l = (x_0 \dots x_{n_q-1})$. This simple and natural mapping has been used for qubit-based quantum simulation of truncated bosonic degrees of freedom \cite{veis2016quantum,joshi2014estimating,teplukhin2019calculation,mcardle2019digital,sawaya2019quantum,macridin2018electron,macridin2018digital,sabin2020digital,di2020variational}. One mapping from classical information theory with particularly useful properties is called the Gray or reflected binary code. Its defining feature is that the binary strings ${\bf{x}}_{l+1}$, ${\bf{x}}_l$ encoding two consecutive integers $l+1$, $l$ differ by one digit only (i.e. the Hamming distance between the two bitstrings is $1$). This encoding is especially favorable for tridiagonal operators with zero diagonals, and requires $n_q = \lceil \log_2 d \rceil$ qubits.
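The defining property of the Gray code can be verified in a few lines; the Python sketch below, for a hypothetical $d=8$-level mode, lists the standard binary and Gray strings and checks that consecutive levels differ by a single bit:
\begin{verbatim}
# Gray (reflected binary) code: consecutive integers map to bitstrings at Hamming distance 1
def gray(l: int) -> int:
    return l ^ (l >> 1)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

d = 8   # hypothetical number of bosonic levels
for l in range(d):
    print(f"level {l}: standard binary {l:03b}, Gray {gray(l):03b}")
assert all(hamming(gray(l), gray(l + 1)) == 1 for l in range(d - 1))
\end{verbatim}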
There exist encodings that make less efficient use of quantum resources, requiring more than $\lceil \log_2 d \rceil$ qubits, but usually allowing for fewer quantum operations. Among them one finds the unary encoding, using $d$ qubits and mapping integers $l$ onto binary strings $\left( {\bf{x}}_l \right)_m = \delta_{lm}$. Previous proposals for digital quantum simulation of bosonic degrees of freedom have used the unary encoding. The standard binary, Gray and unary codes are described in detail and compared in the literature \cite{sawaya2020resource,sawaya2020near}. The most efficient encoding choice is often significantly dependent on the application at hand, due to the complicated and delicate interplay between Hamming distances, sparsity patterns, bosonic truncation, and other properties of the Hamiltonian and other operators, and it is sensitive to the number $d$ of levels of the system \cite{veis2016quantum,joshi2014estimating,teplukhin2019calculation,mcardle2019digital,sawaya2019quantum,macridin2018electron,macridin2018digital,sabin2020digital,di2020variational}. \subsubsection{Alternative approaches} \label{sec:fermions_bosons_alternatives} For local Hamiltonians in one spatial dimension, the JW transformation allows mapping of a local theory of fermions onto a local theory of spins. In higher dimensions, however, the JW transformation gives rise to non-local coupling between spins. The possibility of reducing this non-local coupling has been explored by many authors, including Bravyi and Kitaev \cite{bravyi2002fermionic}. Similar ideas were explored by Ball \cite{ball2004fermions} and Cirac and Verstraete \cite{verstraete2005mapping}. The latter, in particular, achieved a mapping between local Fermi and local qubit Hamiltonians by introducing extra degrees of freedom in the form of Majorana fermions, which interact locally with the original ones. Other approaches have included the use of Fenwick trees \cite{havlivcek2017operator} and graph theoretical tools \cite{setia2019superfast}. Generalizations of the tapering qubit reduction formalism to more general discrete and continuous symmetries are known \cite{setia2020reducing,faist2020continuous}, and symmetries have been recognised as an important ingredient in the implementation of error correction algorithms \cite{bonet2018low,mcardle2019error}. Recently, techniques reducing the number of qubits by half, focusing on the seniority-zero sector of the Hilbert space and partitioning an electronic system into classically correlated spin-up and spin-down sectors, have also been proposed \cite{elfving2021simulating,eddins2021doubling}. As illustrated by the many different techniques referred to above, this remains an important research direction for a variety of reasons: first, qubit reduction techniques allow for the simulation of problems with fewer quantum resources, and thus make better use of near-term hardware; furthermore, they help enforce exact properties of electronic wavefunctions (e.g. particle number and spin conservation and, in the presence of molecular point-group symmetries, labeling ground and excited states by irreducible representations). \subsection{Simulation of Hamiltonian dynamics} \label{sec:dynamics} As discussed in Section \ref{sec:complexity}, the simulation of Hamiltonian dynamics, i.e. the solution of the time-dependent Schr\"{o}dinger equation, is a BQP-complete problem, and thus a natural application for a quantum computer.
In this Section, we present some important quantum algorithms for the simulation of Hamiltonian dynamics. Given a Hamiltonian $\hat{H}$ acting on a set of $n$ qubits, we say \cite{childs2004quantum} that $\hat{H}$ can be {\bf{efficiently simulated}} (to an accuracy $\varepsilon$) if one can produce a quantum circuit $\hat{U}$ such that \begin{equation} \label{eq:hamiltonian_simulation} \| \hat{U} - e^{-it \hat{H}} \| < \varepsilon \quad, \end{equation} and $\hat{U}$ comprises a number of gates scaling at most polynomially with $n$, $t$ and $\varepsilon^{-1}$. In the remainder of this Section, we will describe quantum algorithms to simulate the electronic structure Hamiltonian, in the sense of Eq.~\eqref{eq:hamiltonian_simulation}. These algorithms will be classified, according to the underlying mathematical formalism, into product formula \cite{lloyd1996universal,abrams1997simulation}, quantum walk \cite{childs2009universal,berry2015hamiltonian} and linear combination of unitary operators (LCU) algorithms \cite{low2017optimal,berry2015simulating,low2019hamiltonian,childs2018toward}. \subsubsection{Product formula algorithms} \begin{figure} \caption{ Left: $(a)$ quantum circuit to simulate a Trotter step of time evolution under the Hamiltonian of $\ce{H_2}$ at STO-6G level in parity representation with two-qubit reduction $(b)$ same as $(a)$ but accounting for $D_{\infty h}$ symmetry via reduction of a third qubit. $(c)$ quantum circuit to simulate a Trotter step of time evolution under the Born-Oppenheimer Hamiltonian in a set of 2 spatial orbitals and Jordan-Wigner representation, using a low-rank decomposition with $N_\gamma = 1$ term. Blue, teal and green blocks indicate Givens rotations, exponentials of $Z \otimes Z$ and $X \otimes X$ Pauli operators, and single-qubit $Z$ rotations respectively. Angles $\varphi_{kl}$ parametrize Givens rotations, and angles $\theta_k$, $\theta_{kl}$ are functions of the Hamiltonian coefficients $\alpha_k$, $\beta_{kl}$ respectively. Right: Implementation of the exponentials $R_{zz}(\theta)$ (top), $R_{xx}(\theta)$ (middle), and of a Givens rotation (bottom). } \label{fig:trotter} \end{figure} Given a Hamiltonian operator $\hat{H} = \sum_{\ell=1}^L \hat{h}_\ell$, product formulas produce an approximation to $e^{-i t \hat{H}}$ using a product of exponential operators $e^{-i t \hat{h}_\ell}$. This is achieved by dividing the time interval $[0,t]$ into a large number $n_T$ of steps, \begin{equation} \hat{U}(t) = e^{-it \hat{H}} = \prod_{i=0}^{n_T-1} \hat{U}(\Delta t) \quad,\quad \Delta t = \frac{t}{n_T} \quad, \end{equation} and approximating each of the operators $\hat{U}(\Delta t)$. An important example is the primitive approximation \cite{trotter1959product,suzuki1976generalized} \begin{equation} \label{eq:primitive_trotter} \hat{U}(\Delta t) = \prod_{\ell} e^{-i \Delta t \hat{h}_\ell} + \mathcal{O}(\Delta t^2) \quad,\quad \hat{U}_p(\Delta t) \equiv \prod_{\ell} e^{-i \Delta t \hat{h}_\ell} \quad. \end{equation} The accuracy of the primitive approximation Eq.~\eqref{eq:primitive_trotter} can be estimated observing that \begin{equation} \| \hat{U}(t) - \hat{U}_p(\Delta t)^{n_T} \| = \gamma_p \, n_T \, \Delta t^2 + \mathcal{O}(t^3/n_T^2) \quad,\quad \gamma_p = \frac{1}{2} \, \sum_{\ell< \ell^\prime} \| [ \hat{h}_\ell , \hat{h}_{\ell^\prime} ] \| \quad . \end{equation} Therefore, accuracy $\varepsilon$ is attained for $n_T = \mathcal{O}( \gamma_p \, t^2 / \varepsilon )$. 
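The error estimate above can be checked numerically on small models. The following Python sketch compares $e^{-it\hat{H}}$ with the primitive formula Eq.~\eqref{eq:primitive_trotter} for a toy two-term, two-qubit Hamiltonian (the coefficients are arbitrary), showing the expected $\mathcal{O}(1/n_T)$ decay of the error:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy two-term Hamiltonian H = h1 + h2 with non-commuting terms (hypothetical coefficients)
h1 = 0.4 * np.kron(Z, Z)
h2 = 0.7 * (np.kron(X, np.eye(2)) + np.kron(np.eye(2), X))
H = h1 + h2
t = 1.0

U_exact = expm(-1j * t * H)
for n_T in [1, 2, 4, 8, 16, 32]:
    dt = t / n_T
    U_step = expm(-1j * dt * h1) @ expm(-1j * dt * h2)   # primitive (first-order) formula
    U_trotter = np.linalg.matrix_power(U_step, n_T)
    err = np.linalg.norm(U_exact - U_trotter, ord=2)     # spectral-norm error
    print(f"n_T = {n_T:3d}   error = {err:.3e}")
\end{verbatim}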
More accurate approximations can be achieved relying on the Trotter-Suzuki formulas \cite{trotter1959product,suzuki1976generalized,suzuki1991general,childs2019theory}, which are defined recursively as \begin{equation} \label{eq:LTS_2} \begin{split} \hat{U}_{2k+2}(\Delta t) &= \hat{U}_{2k}^2 \left( a_{2k} \Delta t \right) \hat{U}_{2k} \left( (1-4 a_{2k}) \Delta t \right) \hat{U}_{2k}^2 \left(a_{2k} \Delta t \right) \quad, \\ \hat{U}_{2}(\Delta t) &= \left[ \; \prod_{\ell=L}^1 e^{-i \frac{\Delta t}{2} \hat{h}_\ell} \; \right] \left[ \; \prod_{\ell=1}^L e^{-i \frac{\Delta t}{2} \hat{h}_\ell} \; \right] \quad,\quad a_{2k} = \frac{1}{4-4^{\frac{1}{2k+1}}} \quad. \\ \end{split} \end{equation} Since $\hat{U}_{2k}(\Delta t) = e^{ - i \Delta t \hat{H} } + \mathcal{O}(\Delta t^{2k+1})$, accuracy $\varepsilon$ is achieved for $n_T = \mathcal{O}\big( t^{ 1 + \frac{1}{2k}} \varepsilon^{- \frac{1}{2k}} \big)$. As seen, the cost of simulating Hamiltonian dynamics with product formulas has power-law scaling in $t$ and $\varepsilon^{-1}$. To achieve efficient simulation, however, the number $L$ of Hamiltonian terms and the cost of simulating $e^{-i \Delta t \hat{h}_\ell}$ have to scale as $\mbox{poly}(n)$. This can be achieved, for example, by mapping the electronic structure Hamiltonian Eq.~\eqref{eq:hamiltonian_born_oppenheimer} onto an $M$-qubit operator $\hat{H} = \sum_{\ell=1}^L c_\ell \hat{P}_\ell$, as outlined in Section \ref{sec:mapping_to_qubits}, and exponentiating individual $M$-qubit Pauli operators $\hat{P}_\ell$ with the circuit presented in Section~\ref{sec:hardware}, and exemplified in Fig.~\ref{fig:trotter}. The resulting scaling is between $\mathcal{\tilde{O}}(M^4)$ and $\mathcal{O}(M^5)$ per step \cite{childs2018toward,motta2018low}. Alternative approaches exist, which can be used to lower the computational cost. In particular, Aharonov and Ta-Shma \cite{aharonov2003adiabatic,berry2007efficient} introduced an algorithm for the quantum simulation of $d$-sparse Hamiltonians, based on a combination of graph coloring and Trotter decomposition. More recently, various authors \cite{poulin2014trotter,babbush2017low,jiang2018quantum,kivlichan2018quantum,motta2018low,google2020hartree,matsuzawa2020jastrow} proposed product formulas based on low-rank representations of the electron-repulsion integral, $(pr|qs) = \sum_{\gamma=1}^{N_\gamma} L^\gamma_{pr} L^\gamma_{qs}$. Such representations can be efficiently computed from a density fitting approximation \cite{ Whitten:1973:4496, Dunlap:1977:81, Dunlap:1979:3396, Feyereisen:1993:359, Komornicki:1993:1398, Vahtras:1993:514, Rendell:1994:400, Kendall:1997:158, Weigend:2002:4285}, a Cholesky decomposition \cite{ Beebe:1977:683, Roeggen:1986:154, Koch:2003:9481, Aquilante:2007:194106, Aquilante:2009:154107, motta2019efficient, peng2017highly}, or a multi-unitary tensor hypercontraction representation \cite{hohenstein2012tensor,parrish2012tensor,parrish2013exact,matsuzawa2020jastrow,cohn2021quantum} and typically feature $N_\gamma = \mathcal{O}(M)$ terms. In this framework, the electronic structure Hamiltonian is represented as \begin{equation} \hat{H} = E_0 + \hat{V}_0^\dagger \left[ \sum_p t_p \hat{n}_p \right] \hat{V}_0 + \sum_\gamma \hat{V}_\gamma^\dagger \left[ \sum_{pq} v^\gamma_{pq} \hat{n}_p \hat{n}_q \right] \hat{V}_\gamma \quad,\quad \hat{V}_\gamma = e^{ \sum_{pq} A^\gamma_{pq} \crt{p} \dst{q} } \quad, \end{equation} and the time evolution operators $e^{-i \Delta t \hat{H}}$ as \begin{equation} e^{-i \Delta t \hat{H} } \simeq \prod_\gamma \left[ \hat{V}_\gamma^\dagger \, \Bigg( \prod_{pq} e^{-i \Delta t \, v^\gamma_{pq} \, \hat{n}_p \hat{n}_q } \Bigg) \hat{V}_\gamma \right] \, \prod_p e^{-i \Delta t \, \hat{n}_p } \quad. \end{equation} Exponentials of one-body operators such as $\hat{V}_\gamma$ can be decomposed into a product of up to $\mathcal{O}(M^2)$ Givens transformations \cite{jiang2018quantum,kivlichan2018quantum} with standard linear algebra operations. Each of these Givens rotations can be implemented, in the JW representation, with a two-qubit gate acting on a pair of adjacent qubits. Exponentials of number and products of number operators can be implemented \cite{jiang2018quantum} with $R_z(\varphi) = \exp(-i \varphi/2 Z)$ and $R_{zz}(\varphi) = \exp(-i \varphi/2 \, Z \otimes Z)$ gates, as exemplified in Fig.~\ref{fig:trotter}. The resulting scaling is $\mathcal{O}(N_\gamma M^2) = \mathcal{O}(M^3)$ \cite{kivlichan2018quantum,motta2018low}. Other approaches to implement time evolution with product formulas have explored alternatives to the high-order formulas Eq.~\eqref{eq:LTS_2} having a more favorable scaling with $k$ \cite{low2019well}, as well as grouping commuting Pauli terms \cite{poulin2014trotter} and divide-and-conquer strategies \cite{haah2018quantum}. Recently, Childs and Su \cite{childs2019nearly} revisited approaches based on product formulas, and demonstrated that their performance is more favorable than suggested by simple error bounds. Techniques of circuit recompilation aimed at generating optimal circuits are also intensely investigated \cite{whitfield2011simulation,Hastings15,childs2018faster,campbell2018random}. The design and improvement of product formulas, and the systematic assessment of their accuracy and computational cost for molecular problems, is thus an active and valuable research area at the interface between quantum computation and chemistry \cite{poulin2014trotter,cao2019quantum,bauer2020quantum}. \subsubsection{Quantum walks} \label{sec:qw} Hamiltonian simulation by product formulas has cost scaling {\bf{super-linearly}} with $t$. Such an observation leads to the question of whether a better scaling can be achieved. For a general Hamiltonian, a scaling sub-linear in $t$ cannot be achieved, due to the no-fast-forwarding theorem \cite{berry2007efficient,childs2009limitations}. On the other hand, a number of algorithms achieving linear scaling with $t$ are known. The earliest example is represented by quantum walk algorithms \cite{kempe2003quantum,venegas2012quantum,berry2009black,childs2010simulating,childs2010relationship}. In the context of Hamiltonian simulation by quantum walks, we consider a Hamiltonian operator $\hat{H} = \sum_\lambda \lambda \ket{\lambda} \bra{\lambda}$ acting on a Hilbert space $\mathcal{H}$ and having eigenvalues $\lambda \in [-1,1]$. Such a condition can always be satisfied by computing an upper bound $\zeta \geq \| \hat{H} \|$ for the norm of $\hat{H}$, and rescaling the Hamiltonian and simulation time accordingly. Simulating Hamiltonian dynamics by a quantum walk consists of introducing a unitary operator $\hat{W}$, acting on an extended Hilbert space $\mathcal{H}_e \supset \mathcal{H}$, whose spectrum is connected with that of $e^{-i \Delta t \hat{H}}$ by an invertible transformation.
Such a unitary transformation has the form \cite{berry2009black,childs2010simulating,childs2010relationship} \begin{equation} \label{eq:qw} \hat{W} = i \hat{S} \left( 2 \hat{T} \hat{T}^\dagger - \mathbbm{1} \right) \quad, \end{equation} where $\hat{T} : \mathcal{H} \to \mathcal{H}_e$ is an isometry, $\hat{S} : \mathcal{H}_e \to \mathcal{H}_e$ a unitary operator such that $\hat{S}^2 = \mathbbm{1}$, and \begin{equation} \label{eq:q_walk_0} \hat{T}^\dagger \hat{S} \hat{T} = \hat{H} \quad. \end{equation} Thanks to these properties, it can be proved \cite{berry2009black,childs2010relationship} that $\hat{W}$ leaves a set of 2-dimensional subspaces \begin{equation} \mathcal{S}_\lambda = \mathrm{span} \left\{ \hat{T} | \lambda \rangle , \hat{S} \hat{T} | \lambda \rangle \right\} \end{equation} invariant, and has eigenvalues and eigenvectors \begin{equation} \hat{W} \ket{ \mu_\pm(\lambda) } = \mu_\pm(\lambda) \ket{ \mu_\pm(\lambda) } \quad,\quad \mu_\pm(\lambda) = \pm e^{ \pm i \arcsin(\lambda)} \quad,\quad | \mu_\pm(\lambda) \rangle = \frac{\mathbbm{1} + i \mu_\pm(\lambda) \hat{S}}{\sqrt{2 (1-\lambda^2)}} \, \hat{T} | \lambda \rangle \quad . \end{equation} The operators $\hat{S}$ and $\hat{T}$ are constructed assuming the existence of an oracle, i.e. a unitary operator giving access to the binary representation of the matrix elements of $\hat{H}$ in a suitable basis \cite{berry2009black,childs2010relationship}, and the spectrum of $\hat{W}$ is converted into that of $e^{-i t \hat{H}}$ using a subroutine based on the quantum phase estimation algorithm (see Section \ref{sec:applications_of_time_evolution}). The quantum walk approach, though achieving linear scaling with $t$, retains algebraic scaling with the inverse accuracy $\varepsilon^{-1}$, which was improved by LCU-based algorithms. \subsubsection{The linear combination of unitary operators: LCU lemma} \label{sec:lcu} Product formulas and quantum walks allow the time evolution operator to be approximated by a sequence of unitary operations. A breakthrough in quantum simulation came from the realization that more accurate approximations can be achieved using {\bf{non-unitary approximations}} \cite{berry2014exponential,berry2015simulating,low2016methodology,low2017optimal,low2019hamiltonian}. In particular, such non-unitary approximations often take the form of linear combinations of unitary operations. In order to describe several important quantum algorithms based on non-unitary approximations of the time evolution operator, in this section we present a theoretical result known as the LCU lemma \cite{berry2015simulating}, showing how to apply a linear combination of $L$ unitary operators to a set of qubits prepared in a state $|\Psi \rangle$, \begin{equation} \label{eq:lcu_target} | \Psi \rangle \mapsto \frac{ \hat{X} | \Psi \rangle}{\| \hat{X} \Psi \|} \equiv | \Psi_X \rangle \quad,\quad \hat{X} = \sum_{\ell=0}^{L-1} \alpha_\ell \hat{U}_\ell \quad. \end{equation} Since $\hat{X}$ is not a unitary operator, it is not straightforward to represent the map \eqref{eq:lcu_target} as a quantum circuit. On the other hand, such a representation is highly desirable, as it extends the reach of quantum computation to non-unitary operators. The LCU lemma provides a strategy for implementing the transformation \eqref{eq:lcu_target} with probability $p$, based on the quantum circuit in Figure \ref{fig:lcu_general}a.
A register of $n_A = \lceil \log_2(L) \rceil$ ancillary qubits is prepared in $|0 \rangle^{\otimes n_A}$ and coupled to a register of $n$ qubits, prepared in the state $| \Psi \rangle$. The ancillae are manipulated with a preparation unitary \begin{equation} \label{eq:lcu_p} \hat{W}_p | 0 \rangle = \sum_{\ell=0}^{L-1} \sqrt{\frac{\alpha_\ell}{\alpha}} \, | \ell \rangle \quad,\quad \alpha = \sum_{\ell=0}^{L-1} \alpha_\ell \quad, \end{equation} and coupled with the qubits of the main register by a selection unitary \begin{equation} \hat{W}_s = \sum_{\ell=0}^{L-1} {\hat{U}_\ell \otimes | \ell \rangle \langle \ell |} \quad. \end{equation} The transformation $\hat{W}_p$ is subsequently reversed, and the ancillae are measured. If the outcome of the measurement is $(0 \dots 0)$, which happens with probability $p = \| \hat{X} \Psi \|^2 / \alpha^2$, the qubits of the main register collapse onto the state $| \Psi_X \rangle$. The number of ancillae, $n_A = \lceil \log_2(L) \rceil$, scales logarithmically with $n$ provided that $L = \mathrm{poly}(n)$. The unitary $\hat{W}_p$ can require up to $2^{n_A} = \mathrm{poly}(n)$ gates \cite{barenco1995elementary}. Similarly, if every unitary $\hat{U}_\ell$ can be controlled at cost $c_\ell = \mathrm{poly}(n)$, then $\hat{W}_s$ can be implemented at cost $\mathrm{poly}(n)$ \cite{barenco1995elementary}. The main limitation of the LCU algorithm is the success probability, which decays as $p^k = 2^{- \mathcal{O}(k)}$ when the circuit in Fig.~\ref{fig:lcu_general}a is applied $k$ times consecutively. {When $\hat{X}$ is a unitary operator \cite{berry2015hamiltonian}, as in Hamiltonian evolution, the} success probability $p$ can be increased using the procedure called oblivious amplitude amplification (OAA) \cite{brassard1997exact,grover1998quantum,berry2014exponential}. The OAA is described by the quantum circuit in Figure \ref{fig:lcu_general}, where $\hat{W}_{\mathrm{LCU}} = \hat{W}_p^\dagger \hat{W}_s \hat{W}_p$ is the LCU unitary, and {$\hat{R} = \mathbbm{1} \otimes \left( \mathbbm{1} - 2 | 0 \rangle \langle 0 | \right)$ }reflects the ancillae about the state $| 0 \rangle^{\otimes n_A}$. The circuit $\hat{U} = \hat{A}^k \hat{W}_{\mathrm{LCU}}$ leads to a state of the form \begin{equation} \hat{U} | 0 \rangle | \Psi \rangle = \sqrt{p_k} \, {\frac{\hat{X} | \Psi \rangle}{\| \hat{X} \Psi \|} \otimes | 0 \rangle^{\otimes n_A}} + \sqrt{1-p_k} \, | \Phi^\perp \rangle \quad,\quad p_k = \sin^2 \big( (2k+1) \theta \big) \quad,\quad {\theta = \arcsin \sqrt{p}} \quad, \end{equation} and $p_k \geq 1-\delta$ provided that $k \geq \mathcal{O}( 1 / \sqrt{p} )$. We remark that, when $\hat{X}$ is not a unitary operator, OAA gives Chebyshev polynomials of that operator \cite{berry2015hamiltonian}. \begin{figure} \caption{Schematic representation of the algorithm for the probabilistic implementation of an LCU (a), structure of the LCU unitary (blue block) as composition of a preparation and a selection unitary (b), schematic representation of the OAA algorithm (c), and implementation of the OAA unitary (purple block) as a composition of the LCU unitary and a reflector unitary transformation (d).
Slashes denote multi-qubit registers.} \label{fig:lcu_general} \end{figure} \subsubsection{LCU based algorithms} \label{sec:taylor} In 2014, Berry et al \cite{berry2015simulating} introduced a method for Hamiltonian simulation, based on the Taylor series representation of the time evolution operator, to achieve a computational cost scaling linearly in $t$ and logarithmically in $\varepsilon^{-1}$ (i.e. exponentially better than product formulas and quantum walks). The algorithm focuses on Hamiltonian operators that can be written as LCUs, $\hat{H} = \sum_{\ell=0}^{L-1} \alpha_\ell \hat{U}_\ell$. {For simplicity, we also assume that $C = \sum_\ell \alpha_\ell = 1$, which is equivalent to rescaling energy and time units, $\hat{H} \to C^{-1} \hat{H}$ and $t \to C t$.} The time interval $[0,t]$ is divided in $r$ steps of duration $\Delta t = t / r$, and the operator $e^{-i \Delta t \hat{H}}$ is expanded in Taylor series to order $K$, \begin{equation} \label{eq:lcu_taylor} e^{-i \Delta t \hat{H}} \simeq \hat{V}_K(\Delta t) = \sum_{m=0}^{K} \frac{(-i \Delta t)^m}{m!} \hat{H}^m = \sum_{m=0}^{K} \sum_{\ell_0 \dots \ell_{m}=0}^{L-1} \frac{(-i \Delta t)^m}{m!} \alpha_{\ell_0} \dots \alpha_{\ell_{m}} \hat{U}_{\ell_0} \dots \hat{U}_{\ell_{m}} \quad. \end{equation} Eq.~\eqref{eq:lcu_taylor} is an LCU representation of $e^{-i \Delta t \hat{H}}$, which can be probabilistically applied relying on a particular implementation \cite{berry2015simulating} of the LCU lemma presented in Section \ref{sec:lcu}, where $K (1+\log_2 L)$ ancillae are used, as shown in Fig.~\ref{fig:taylor_circuit}. The first $K$ ancillae are prepared in the state $\sum_m \sqrt{\Delta t^m/m!} { \, | 0 \rangle^{K-m} | 1 \rangle^{m}}$ using $\mathcal{O}(K)$ controlled single-qubit rotations, and the remaining $\log_2(L)$ groups of $K$ ancillae are prepared in the normalized state $\sum_\ell \sqrt{\alpha_\ell} | \ell \rangle$ using $\mathcal{O}(KL)$ gates. The other basic component is the selection unitary, which maps states of the form {$\ket{\psi} \ket{\ell_k} \dots \ket{\ell_1} \ket{k}$} to {$(-i)^k \hat{U}_{\ell_1} \dots \hat{U}_{\ell_k} \ket{\psi} \ket{\ell_k} \dots \ket{\ell_1} \ket{k}$} at the cost of $\mathcal{O}( L ( n + \log_2L) K)$ operations \cite{berry2015simulating}. An important aspect of this algorithm is the order $K$ of the polynomial approximating $e^{-i \Delta t \hat{H}}$ with accuracy $\varepsilon/K$, which scales as \cite{berry2015simulating} \begin{equation} K = \mathcal{O}\left( \frac{ \log \left( \frac{t}{\varepsilon} \right) }{ \log \log \left( \frac{t}{\varepsilon} \right) } \right) \quad. \end{equation} The logarithmic dependence of the computational cost on $\varepsilon^{-1}$ thus arises from the rapid convergence of the Taylor series, as well as on the specific use of ancillary qubits and controlled operations described in the previous paragraph. \begin{figure}\label{fig:taylor_circuit} \end{figure} \subsubsection{Quantum signal processing and qubitization} The qubitization algorithm, developed by Low and Chuang \cite{low2019hamiltonian}, uses the LCU decomposition of the Hamiltonian and the quantum signal processing (QSP) technique \cite{low2016methodology,low2017hamiltonian} to achieve a computational cost where the dependence on $t$ and $\log \varepsilon^{-1}$ is \important{additive rather than multiplicative}, $\mathcal{O}(t+\log \varepsilon^{-1})$, which is optimal in both accuracy and time \cite{low2017hamiltonian}. 
Given a Hamiltonian operator $\hat{H}$ acting on a Hilbert space $\mathcal{H}$ and having norm $\| \hat{H} \| \leq 1$, the starting point of qubitization is the construction of a unitary operator $\hat{W}_{\mathrm{Q}}$, called qubiterate. The qubiterate acts on an extended Hilbert space $\mathcal{H}^\prime \otimes \mathcal{H}$ and is an encoding for $\hat{H}$, in the sense that \begin{equation} \label{eq:qubitization_0} { \langle \phi | \langle g | \hat{U} | \psi \rangle | g \rangle = \langle \phi | \hat{H} | \psi \rangle } \quad \mbox{for some $|g \rangle \in \mathcal{H}^\prime$ and any $\ket{\phi}, \ket{\psi} \in \mathcal{H}$} \quad. \end{equation} Moreover, for any eigenvector of $\hat{H}$, $\hat{H} \ket{\lambda} = \lambda \ket{\lambda}$, one has \begin{equation} \label{eq:qubitization_1} \hat{W}_{\mathrm{Q}} | g_\lambda \rangle = \lambda | g_\lambda \rangle - \sqrt{1-\lambda^2} \ket{ g_\lambda^\perp } \quad,\quad \hat{W}_{\mathrm{Q}} | g_\lambda^\perp \rangle = \lambda | g^\perp_\lambda \rangle + \sqrt{1-\lambda^2} | g_\lambda \rangle \quad, \end{equation} where {$| g_\lambda \rangle = \ket{\lambda} \ket{g}$} and $\ket{ g_\lambda^\perp }$ is defined by the first of Eq.~\eqref{eq:qubitization_1}. In other words, the qubiterate leaves the two-dimensional subspaces $\mathcal{S}_\lambda$ spanned by $| g_\lambda \rangle$ and $| g_\lambda^\perp \rangle$ invariant, so that its restriction to $\oplus_\lambda \mathcal{S}_\lambda$ is a direct sum of $Y$ rotations acting on the subspaces individually $\mathcal{S}_\lambda$, \begin{equation} \hat{W}_{\mathrm{Q}} = \bigoplus_\lambda e^{-i \theta_\lambda Y_\lambda} \quad,\quad \theta_\lambda = \mbox{arccos}(\lambda) \quad. \end{equation} The qubiterate is used to construct an operator $\hat{V}_{\vec{\varphi}}$, which in turn is used to approximate $e^{-it \hat{H}}$, by a QSP \cite{low2016methodology,low2017hamiltonian}. The single-ancilla QSP is described by the quantum circuit in Fig.~\ref{fig:qsp}. The operator $\hat{V}_{\vec{\varphi}}$ has the form \begin{equation} \hat{V}_{\vec{\varphi}} = \prod_{m=0}^{K} \hat{U}^\dagger_{\varphi_{2m+2}+\pi} \hat{U}_{\varphi_{2m+1}} \quad,\quad \hat{U}_\phi {= \left[ \mathbbm{1} \otimes R_z(-\varphi) \, \mathrm{Had} \right] \mathsf{c} \big(iW_{\mathrm{Q}} \big) \left[ \mathbbm{1} \otimes \mathrm{Had} \, R_z(\varphi) \right]} \quad, \end{equation} {where $\mathsf{c} \big(iW_{\mathrm{Q}} \big)$ is the controlled version of the gate $iW_{\mathrm{Q}}$.} It can be shown that \cite{low2019hamiltonian} \begin{equation} \label{eq:qubitization_5} \hat{V}_{ \vec{\varphi} } = \sum_{\lambda,\pm} {| \lambda_\pm \rangle \langle \lambda_\pm | \otimes u( \vec{\varphi} , \theta_{\lambda,\pm} )} \quad, \end{equation} where $| \lambda_\pm \rangle$ is an eigenvector of $\hat{W}_{\mathrm{Q}}$ with eigenvalue $e^{ \pm i \theta_\lambda}$, and $u$ a single-qubit operator. Provided that the number of angles $\vec{\varphi}$ is sufficiently large, $K = \mathcal{O}(t + \log \varepsilon^{-1})$, these angles can be set to values that are efficiently computable on a classical computer \cite{low2019hamiltonian,dong2021efficient,martyn2021grand,chao2020finding} so that \begin{equation} {\langle \chi | \langle g | \langle + | \hat{V}_{ \vec{\phi}} | \psi \rangle | g \rangle | + \rangle \simeq \langle \chi | e^{-it \hat{H}} | \psi \rangle} \end{equation} with error bounded by $\varepsilon$. 
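The encoding property of Eq.~\eqref{eq:qubitization_0}, and the prepare-select construction of Section \ref{sec:lcu} on which it rests, can be verified with dense linear algebra on a single system qubit. In the following Python sketch (the coefficients and unitaries are arbitrary, and the ancilla is taken as the leftmost tensor factor, the opposite ordering to the equations above but unitarily equivalent), the block of $\hat{W}_p^\dagger \hat{W}_s \hat{W}_p$ with the ancilla in $|0\rangle$ is seen to equal $\hat{X}/\alpha$:
\begin{verbatim}
import numpy as np

# Minimal numerical check of the LCU prepare/select construction: for
# X = sum_l alpha_l U_l with alpha_l > 0, the block of W_p^dag W_s W_p with the
# ancilla in |0> equals X / alpha, with alpha = sum_l alpha_l. The two unitaries
# and coefficients below are hypothetical.
X_p = np.array([[0, 1], [1, 0]], dtype=complex)
Z_p = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

alphas = np.array([1.5, 0.5])              # hypothetical LCU coefficients
unitaries = [Z_p, X_p]                     # hypothetical unitaries U_0, U_1
alpha = alphas.sum()

# Preparation unitary on one ancilla: W_p |0> = sum_l sqrt(alpha_l / alpha) |l>
v = np.sqrt(alphas / alpha)
W_p = np.array([[v[0], -v[1]], [v[1], v[0]]], dtype=complex)

# Selection unitary: W_s = sum_l |l><l| (ancilla, left factor) x U_l (system)
W_s = sum(np.kron(np.outer(np.eye(2)[l], np.eye(2)[l]), U)
          for l, U in enumerate(unitaries))

W_lcu = np.kron(W_p.conj().T, I2) @ W_s @ np.kron(W_p, I2)

# Block of W_lcu with the ancilla in |0> on both sides equals X / alpha
block = W_lcu[:2, :2]
X_op = sum(a * U for a, U in zip(alphas, unitaries))
assert np.allclose(block, X_op / alpha)
print("post-selected block equals X/alpha; success probability is ||X Psi||^2 / alpha^2")
\end{verbatim}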
Qubitization is a systematic framework that offers a concrete procedure to simulate Hamiltonian dynamics with optimal complexity with respect to $t$ and $\varepsilon$. In recent years, its complexity with respect to the number of spin-orbitals was improved using low-rank \cite{berry2019qubitization} and tensor hypercontraction \cite{lee2020even} techniques. Product formulas and Taylor series techniques allow simulation of Schr\"{o}dinger equations with a time-dependent Hamiltonian $\hat{H}(t)$. On the other hand, the qubitization technique is formulated for time-independent Hamiltonians, and its extension to time-dependent Hamiltonians is an open and challenging research problem. An important consequence of this difference is that product formulas and Taylor series techniques allow simulation in the interaction picture of quantum mechanics. This is especially desirable for electronic structure problems, where the Born-Oppenheimer Hamiltonian can be written as the sum $\hat{H} = \hat{T} + \hat{V}$ of a one-body operator and of a two-body operator, so that the unitary transformation \begin{equation} \Psi(t) \to \Psi_{I}(t) = e^{ \frac{i t}{\hbar} \hat{T}} \Psi(t) \end{equation} leads to a Schr\"{o}dinger equation with a time-dependent Hamiltonian $\hat{V}_I(t) = e^{ \frac{i t}{\hbar} \hat{T}} \hat{V} e^{ - \frac{i t}{\hbar} \hat{T}}$. It is important to remark that, since $\hat{T}$ is a one-body operator, the interaction term $\hat{V}$, and any other molecular property described by a $k$-body operator, can be exactly and efficiently transformed to the interaction picture, $\hat{V} \to \hat{V}_I(t)$. \begin{figure}\label{fig:qsp} \end{figure} \subsubsection{Applications of Hamiltonian dynamics, and open problems} \label{sec:applications_of_time_evolution} In the previous Section, we examined some {emerging} quantum algorithms for Hamiltonian dynamics simulation. Hamiltonian dynamics simulation is one of the most compelling applications for a quantum computer, as it lies in the complexity class BQP. The study, implementation and application of algorithms for Hamiltonian dynamics simulation is thus a relevant research area, and one that has significant potential to yield the first relevant quantum simulations of chemical systems \cite{childs2018toward}. Nevertheless, a careful consideration of the actual computational cost and of the accuracy of such algorithms, with particular attention to chemical problems, is a necessary requirement to understand, design and carry out such simulations. While product formula algorithms have the highest asymptotic computational cost, compared against quantum walks and LCU (linear combination of unitary operators) based algorithms, the actual runtime of an algorithm for specific problems of interest is crucially determined by prefactors \cite{elfving2020will}, as well as by other technical considerations, such as the amount of input and output data that need to be pre-computed and post-processed respectively, and moved between the classical and quantum computer \cite{von2020quantum}, and the precise determination of the accuracy of quantum algorithms by tight numerical bounds as opposed to loose inequalities \cite{childs2019nearly}. Furthermore, quantum walks and LCU based algorithms require additional quantum resources, especially ancillae and controlled operations, which are challenging aspects when implementation on near-term quantum devices is considered.
In our view, such observations pinpoint the need for detailed comparative studies, conducted on a diverse range of chemical problems, with exhaustive cross-checks, validations and tests between classical and quantum algorithms. Such detailed comparisons can precisely establish by numerical studies the regime (i.e. the number of spin-orbitals $M$, simulation time $t$, and target accuracy $\varepsilon$) where quantum walks and LCU based algorithms become less expensive than product formulas and, of course, than other algorithms for classical computers \cite{elfving2020will}, and provide rigorous benchmarks for assessing the current state of the art in quantum simulation, for measuring its progress, and for developing new and improved techniques. We now describe some applications of algorithms for Hamiltonian dynamics simulation that can represent occasions for such comparative studies. {Another important research direction aims at combining elements of each family of algorithms and fine-tuning their implementation for specific problems. Relevant examples are a procedure \cite{berry2015hamiltonian} to use LCU on the steps of the quantum walk described in Sec.~\ref{sec:qw} and the observation \cite{berry2018improved,poulin2018quantum} that, for the purpose of estimating energies, the steps of the quantum walk Eq.~\eqref{eq:qw} are sufficient, rather than the Hamiltonian evolution unitary constructed from the quantum walk.} \paragraph{Time-dependent observables and correlation functions.} A natural application of the algorithms outlined in Section~\ref{sec:dynamics} is the calculation of time-dependent electrostatic properties, presented in Section~\ref{sec:electron_dynamics}. The monitoring and control of electronic motion in atoms and molecules in real time has been made possible by advances in laser technology \cite{hentschel2001attosecond,kienberger2002steering,bucksbaum2003ultrafast,fohlisch2005direct}, and can be addressed on classical computers by a variety of numerical methods \cite{klamroth2003laser,saalfrank2005laser,krause2005time,krause2007molecular,nest2005multiconfiguration,daley2004time,schollwock2011density,xie2019time,dahlen2007solving}. Achieving this goal requires solving the time-dependent Schr\"{o}dinger equation with $\hat{H}(t) = \hat{H}_0 + \hat{{\bf{d}}} \cdot {\bf{E}}(t)$, where $\hat{{\bf{d}}}$ is the dipole operator and ${\bf{E}}(t)$ is a time-dependent electric field, and computing electronic densities and polarizabilities over the time-evolved wavefunction. Another important set of observables based on the simulation of Hamiltonian dynamics is time-dependent correlation functions, such as dipole-dipole correlation functions $d_{\mu\nu}(t) = \langle \Psi_0 | \hat{d}_\mu(t) \hat{d}_\nu | \Psi_0 \rangle$. The electronic structure Hamiltonian and the dipole operators are mapped onto linear combinations of Pauli operators, {$\hat{d}_\mu = \sum_j c_{j\mu} \hat{P}_j$}, reducing the computation of dipole-dipole correlation functions to that of correlation functions between Pauli operators. The quantum circuit \cite{ortiz2001quantum,somma2002simulating,somma2003quantum} for the calculation of time-dependent correlation functions between unitary operators $\hat{A}$, $\hat{B}$ is shown in Fig.~\ref{fig:green}.
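For systems small enough to be treated by exact diagonalization, such correlation functions can be computed classically and used as reference data to validate estimates obtained from the circuit of Fig.~\ref{fig:green}. A minimal NumPy sketch is shown below; the matrices standing for $\hat{H}$, $\hat{d}_\mu$ and $\hat{d}_\nu$ are random Hermitian stand-ins for the qubit representations of the Hamiltonian and dipole components, not quantities taken from any specific molecule.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(seed=7)

def random_hermitian(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

dim = 8                       # e.g. a 3-qubit register
H = random_hermitian(dim)     # stand-in for the qubit Hamiltonian
d_mu = random_hermitian(dim)  # stand-ins for two dipole components
d_nu = random_hermitian(dim)

# ground state of H as the reference state |Psi_0>
evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]

def correlation(t):
    """d_{mu nu}(t) = <Psi_0| e^{iHt} d_mu e^{-iHt} d_nu |Psi_0>."""
    U = expm(-1j * H * t)
    d_mu_t = U.conj().T @ d_mu @ U
    return psi0.conj() @ (d_mu_t @ (d_nu @ psi0))

for t in [0.0, 0.5, 1.0]:
    print(t, correlation(t))
\end{verbatim}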
The calculation of time-dependent correlation functions is also a valuable occasion to study, demonstrate, benchmark and improve the performance of quantum circuits comprising ancillae and controlled operations, an important and recurring theme in quantum simulation \cite{chiesa_2019,francis_2020,sun2021quantum,cohn2021quantum}. \begin{figure} \caption{Quantum circuit to measure the correlation function $\langle \Psi_0 | \hat{U}^\dagger(t) \hat{A} \hat{U}(t) \hat{B} | \Psi_0 \rangle$ between unitaries $\hat{A}$, $\hat{B}$, with $S_- = \frac{X+iY}{2}$.} \label{fig:green} \end{figure} \paragraph{Quantum phase estimation (QPE).} QPE is one of the most important subroutines in quantum computation. It serves as a central building block for many quantum algorithms, and implements a measurement for essentially any Hermitian operator $\hat{H}$, most notably the Hamiltonian, through an algorithmic implementation of von Neumann's general measurement scheme \cite{neumann1955mathematical}. Basic quantum-mechanical measurements are performed by decomposing $\hat{H}$ into easily measurable terms, measuring each term separately, and collating results. QPE, on the other hand, prepares an eigenstate of the Hermitian operator to be measured in one register, and stores the corresponding eigenvalue in a second register. As such, QPE only requires a single shot and has the zero-variance property (if $\Psi$ is an eigenfunction of $\hat{H}$ with eigenvalue $E_0$, then the QPE measurement of $\hat{H}$ returns $E_0$ with probability 1). More technically, QPE is used to estimate the eigenvalue $u = e^{i 2\pi \theta}$, $0 < \theta < 1$, corresponding to the eigenvector $|u \rangle$ of a unitary $\hat{U}$ \cite{cleve1998quantum,kitaev1995quantum}. In the context of quantum simulation \cite{aspuru2005simulated,lanyon2010towards,o2016scalable,o2019quantum,cruz2020optimizing} $\hat{U}$ is a controllably accurate approximation of {$e^{-i \lambda \hat{H}}$ and $\lambda$ a suitable rescaling factor}, and QPE is thus used to compute eigenvalues corresponding to ground and excited states of $\hat{H}$. A simple but important observation is that the eigenvalues of $\hat{H}$ do not, in general, lie between $0$ and $2\pi$; however, one can scale and shift the Hamiltonian to an operator $\hat{H}^\prime = 2 \pi (\hat{H}-E_1)/(E_2-E_1)$, where $E_1$/$E_2$ is a lower/upper bound for the lowest/highest eigenvalue of $\hat{H}$, satisfying such a condition. There are two main strategies for algorithmic QPE: the first makes use of the gate-expensive inverse quantum Fourier transform (QFT) and, in an ideal quantum computer, could work with a single measurement; the second uses shallower circuits \cite{kitaev1995quantum,griffiths1996semiclassical,dobvsivcek2007arbitrary} but requires multiple measurements and classical post-processing. The former implementation of the QPE algorithm is described by the quantum circuit in Figure \ref{fig:qpe}. \begin{figure} \caption{Quantum circuit for the quantum phase estimation algorithm.} \label{fig:qpe} \end{figure} A register of qubits prepared in $\ket{u}$ is coupled to $t$ ancillae prepared in $|0 \rangle^{\otimes t}$.
Hadamard gates and controlled powers of $\hat{U}$ are applied, and a subsequent inverse quantum Fourier transform \cite{nielsen2002quantum,benenti2004principles} leads to the final state \begin{equation} | \Psi \rangle = \sum_{z=0}^{2^t-1} \left( \sum_{k=0}^{2^t-1} \frac{ e^{ \frac{2 \pi i k (2^t \theta-z)}{2^t} } }{2^t } \right) \ket{z} \otimes \ket{u} = \sum_{z=0}^{2^t-1} \Delta(z,\theta) \, \ket{z} \otimes \ket{u} \quad. \end{equation} If $2^t \theta = k_0$ for some integer $k_0 = 0 \dots 2^t-1$, then $| \Psi \rangle = \ket{k_0} \otimes \ket{u}$, and measuring the ancillae yields the binary representation of $k_0 = 2^t \theta$ with probability $1$. Otherwise, the probability distribution $p(z) = |\Delta(z,\theta)|^2$ is concentrated around the integer $k_0$ closest to $2^t \theta$, $p(k_0) \geq 4/\pi^2 \simeq 0.4$ \cite{cleve1998quantum}. $p(k_0)$ can be increased to $1-\varepsilon$ by increasing the number of ancillae to $t = \mathcal{O}(\log \varepsilon^{-1})$ \cite{cleve1998quantum}. A large body of work is dedicated to the optimization of QPE, from reducing the number of measurements needed to estimate eigenvalues in implementations not based on QFT \cite{svore2013faster}, to methodologies for simultaneously determining multiple eigenvalues based on a classical time-series analysis \cite{o2019quantum,somma2019quantum}, to optimizing the implementation of QPE on contemporary quantum devices \cite{cruz2020optimizing,mohammadbagherpoor2019improved}. \paragraph{Adiabatic state preparation (ASP).} This technique approximates the ground state of an interacting system. The Hamiltonian is written as $\hat{H} = \hat{H}_0 + \hat{H}_1$, where the eigenvalues and eigenvectors of $\hat{H}_0$ can be easily determined and encoded on a classical or a quantum computer (in chemistry, a natural choice is $\hat{H}_0 = \hat{F} + \langle \Psi_{\mathrm{HF}} | \hat{H} - \hat{F} | \Psi_{\mathrm{HF}} \rangle$, where $\hat{F}$ is the Fock operator and $\Psi_{\mathrm{HF}}$ the Hartree-Fock state), and a curve of operators $\hat{H}(s)$, $0 \leq s \leq 1$, with $\hat{H}(s=0) = \hat{H}_0$ and $\hat{H}(s=1) = \hat{H}$, is chosen, for example the segment $\hat{H}(s) = \hat{H}_0 + s \hat{H}_1$. The adiabatic theorem \cite{born1928beweis,kato1950adiabatic,messiah1962quantum,avron1999adiabatic,teufel2003adiabatic,jordan2008quantum} states that, under suitable conditions, the solution of the Schr\"{o}dinger equation \begin{equation} \label{eq:asp} i \frac{d}{dt} \ket{\Psi_t} = \hat{H}(t/T) \ket{\Psi_t} \quad,\quad 0 \leq t \leq T \quad,\quad \ket{\Psi_0} = \ket{\Phi_0(0)} \quad, \end{equation} where $\ket{\Phi_0(0)}$ is the ground state of $\hat{H}_0$, converges to the ground state $\ket{\Phi_0(1)}$ of $\hat{H}$ in the large $T$ limit, \begin{equation} \lim_{T \to \infty} | \Psi(T) \rangle = | \Phi_0(1) \rangle \quad. \end{equation} ASP uses an operation suited for the quantum computer, the simulation of time evolution, to approximate Hamiltonian ground states. The method was originally proposed to address combinatorial optimization problems \cite{farhi2000quantum,farhi2001quantum} and later generalized to chemistry problems \cite{du2010nmr,babbush2014adiabatic,veis2014adiabatic}, as well as to a model of quantum computation, equivalent to the circuit model \cite{kempe2006complexity,nagaj2007new,aharonov2008adiabatic} and featuring interesting robustness properties against coherent and incoherent errors \cite{childs2001robustness}.
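The convergence expressed by the equation above can be illustrated with a small classical simulation of Eq.~\eqref{eq:asp}. The sketch below integrates the Schr\"{o}dinger equation along the linear path $\hat{H}(s) = \hat{H}_0 + s \hat{H}_1$ for a toy two-qubit Hamiltonian (an arbitrary stand-in, not a molecular Hamiltonian) and monitors the overlap with the target ground state as the total time $T$ is increased.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Toy illustration of adiabatic state preparation, Eq. (asp), with a
# piecewise-constant propagator along H(s) = H0 + s*H1 (arbitrary matrices).
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

H0 = -(np.kron(X, I2) + np.kron(I2, X))     # easy Hamiltonian, ground state |++>
H1 = -np.kron(Z, Z) - 0.5 * np.kron(Z, I2)  # "interaction" part
H = H0 + H1

psi = np.linalg.eigh(H0)[1][:, 0]           # initial state: ground state of H0
target = np.linalg.eigh(H)[1][:, 0]         # target: ground state of H

def asp_fidelity(T, n_steps=2000):
    """Integrate i d|psi>/dt = H(t/T)|psi> and return |<target|psi(T)>|^2."""
    dt = T / n_steps
    state = psi.astype(complex)
    for k in range(n_steps):
        s = (k + 0.5) / n_steps
        state = expm(-1j * dt * (H0 + s * H1)) @ state
    return abs(np.vdot(target, state)) ** 2

for T in [1.0, 10.0, 100.0]:
    print(f"T = {T:6.1f}   |<target|psi(T)>|^2 = {asp_fidelity(T):.4f}")
\end{verbatim}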
An important point is the following: the adiabatic theorem states that the time $T$ to approximate the ground state with accuracy $\varepsilon$ scales as $\varepsilon^{-1} F(\gamma(s),f_1(s),f_2(s))$, where $F$ is a functional of the spectral gap $\gamma(s)$ of $\hat{H}(s)$, and of the norms $f_k(s) = \| \frac{d^k \hat{H}}{ds^k} \|(s)$ of the first and second derivative of $\hat{H}$ along the adiabatic path \cite{jordan2008quantum}. The simulation time, and thus the computational cost of ASP, are especially connected with $\gamma(s)$: if such a quantity remains constant, or decreases as $1/\mbox{poly}(M)$ where $M$ is the number of spin-orbitals of the system, ASP is polynomially expensive \cite{van2001powerful,jansen2007bounds}. Otherwise, it can be exponentially expensive, in accordance with the QMA nature of the ground state problem. \subsection{Simulation of Hamiltonian eigenstates} The problem of computing Hamiltonian eigenpairs, $\hat{H} \ket{\Phi_\mu} = E_\mu \ket{\Phi_\mu}$ has enormous importance in chemistry (see e.g. the applications in Sections \ref{sec:es} and \ref{sec:applications_of_time_evolution}). This problem lies in the QMA complexity class, and thus the existence of quantum algorithms outperforming their classical counterparts is not expected. However, heuristic quantum algorithms can, for certain structured problems, produce accurate approximations of ground and selected excited states at polynomial cost. In this Section, we review heuristic quantum algorithms for the computation of approximate Hamiltonian eigenstates and eigenvalues. \subsubsection{Variational quantum algorithms} Variational quantum algorithms (VQAs) have recently emerged as a widely used strategy to approximate Hamiltonian eigenstates/eigenvalues on quantum computers \cite{cao2019quantum,cerezo2020variational,bauer2020quantum,bharti2021noisy}, in part due to the fact that VQAs can be designed to operate within the limitations of contemporary quantum hardware. To define and implement a VQA, one first considers a parametrized wavefunction (or Ansatz) such as \begin{equation} \label{eq:vqa_0} \ket{\Psi(\theta)} = \hat{U}(\theta) \ket{\Psi_0} \quad,\quad \hat{U}(\theta) = \hat{u}_{n_g-1}(\theta_{n_g-1}) \dots \hat{u}_{0}(\theta_{0}) \quad, \end{equation} where $\ket{\Psi_0}$ is an initial wavefunction and the $\{ \hat{u}_k \}_k$ are parameterized unitaries. VQAs typically operate preparing the parametrized Ansatz \eqref{eq:vqa_0} on a quantum computer, executing a circuit, measuring the obtained state, and updating the parameters $\theta$ according to a classical optimization algorithm, based on the results of such measurements. The rather abstract structure of VQAs materializes in a wealth of particular implementations, which can be roughly classified in two families: variational quantum optimization (VQO) and variational quantum simulation (VQS) algorithms. The former approximate target states by minimizing a suitable cost function, and the latter approximate dynamical processes corresponding to curves in a Hilbert space by minimizing a suitable action functional. \subsubsection{The variational quantum eigensolver} \begin{figure} \caption{Schematics of the VQE algorithm. A parametrized wavefunction is produced applying a parametrized unitary to an initial wavefunction. The set of wavefunctions accessible to the VQE algorithm is a manifold in the qubit Hilbert space, and the wavefunction minimizing the energy is chosen as an approximation of the ground state wavefunction. 
} \label{fig:vqe} \end{figure} A prominent example of a VQO is the variational quantum eigensolver (VQE) \cite{farhi2014quantum,peruzzo2014variational,mcclean2016theory,romero2018strategies}, schematized in Fig.~\ref{fig:vqe}, which approximates the ground-state energy and wavefunction of a Hamiltonian $\hat{H}$ by minimizing the energy $E(\theta) = \langle \Psi(\theta) | \hat{H} | \Psi(\theta) \rangle$ which, according to the variational principle, is an upper bound for the ground-state energy of $\hat{H}$. In the context of VQE, the quantum computer is used to efficiently evaluate $E(\theta)$ and, in some implementations, its first and second derivatives \cite{mcclean2016theory,parrish2019hybrid,schuld2019evaluating,mitarai2020theory,kottmann2021feasible}. A simple and important aspect of VQO methods in general, and VQE in particular, is that evaluating the energy on a quantum computer yields a statistical estimate, $E(\theta) \sim \mu \pm \sigma$, and thus optimizers used to update parameters have to take into account the statistical nature of quantum measurements \cite{guerreschi2017practical}. Examples of such optimizers are the Simultaneous Perturbation Stochastic Approximation (SPSA) \cite{spall2005introduction,hirokami2006parameter,s2013stochastic}, ADAptive Moment estimation (ADAM) \cite{kingma2014adam}, and the Quantum Natural Gradient (QNG) \cite{stokes2020quantum}. SPSA is a stochastic optimization method where parameters are updated as $\theta_{n+1} = \theta_n - a_n \, g(\theta_n)$, where $g(\theta_n)$ is an estimate of the gradient of the cost function obtained from random perturbation vectors of length $c_n = c \, n^{-\gamma}$, and the step length is defined as $a_n = a \, n^{-1}$. Careful optimization of the hyperparameters $a,c,\gamma$ is key to an efficient optimization \cite{kandala2017hardware}. ADAM is a first-order gradient-based optimization method for stochastic objective functions, based on adaptive estimates of lower-order moments, characterized by a simple implementation, modest memory requirements, and the ability to dynamically select a step size by maintaining a history of past gradients. Within QNG, the optimization dynamics is interpreted as moving in the steepest descent direction with respect to the quantum information geometry, corresponding to the real part of the Fubini-Study metric tensor. The accuracy and computational cost of VQE calculations are determined by the underlying Ansatz: on the one hand, it should contain an accurate approximation to the ground state; on the other hand, one desires circuits that can be easily executed on a quantum computer, with a well-behaved and rapidly convergent optimization of variational parameters \cite{mcclean2018barren,akshay2020reachability}. Two such Ans\"{a}tze are reviewed in the following paragraphs.
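Before turning to these Ans\"{a}tze, the SPSA update rule quoted above can be made concrete with a minimal Python sketch. The cost function below is a toy noisy function used as a stand-in for a device-estimated $E(\theta)$, and the hyperparameter values are arbitrary choices for demonstration purposes.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=0)

def spsa_minimize(cost, theta0, n_iter=200, a=0.2, c=0.1, gamma=0.101):
    """Minimal SPSA sketch: theta_{n+1} = theta_n - a_n g(theta_n), with
    a_n = a/n, c_n = c/n**gamma, and g estimated from a single random +/-
    perturbation of the parameters."""
    theta = np.array(theta0, dtype=float)
    for n in range(1, n_iter + 1):
        a_n = a / n
        c_n = c / n ** gamma
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # random perturbation
        diff = cost(theta + c_n * delta) - cost(theta - c_n * delta)
        g = diff / (2.0 * c_n) * delta                     # gradient estimate
        theta = theta - a_n * g
    return theta

# toy "energy surface" with simulated shot noise (stand-in for E(theta))
def noisy_cost(theta, sigma=0.01):
    return np.sum(np.sin(theta) ** 2) + rng.normal(scale=sigma)

theta_opt = spsa_minimize(noisy_cost, theta0=[0.8, -1.1, 0.4])
print("optimized parameters:", np.round(theta_opt, 3))
\end{verbatim}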
\paragraph{Quantum unitary coupled-cluster (q-UCC).} The ground state is approximated \cite{bartlett1989alternative,peruzzo2014variational,Moll_2018,romero2018strategies,albash2018adiabatic,o2016scalable} by an exponential Ansatz, \begin{equation} \label{eq:q_uccsd} | \Psi \rangle = e^{ \hat{T} - \hat{T}^\dagger } | \Psi_0 \rangle \quad,\quad \hat{T} = \sum_{l=1}^{k} \hat{T}_l \quad,\quad \hat{T}_l = \sum_{a_0 \dots a_{l-1}} \sum_{i_0 \dots i_{l-1}} t^{a_0 \dots a_{l-1}}_{i_0 \dots i_{l-1}} \; \crt{a_0} \dots \crt{a_{l-1}} \dst{i_{l-1}} \dots \dst{i_0} \quad, \end{equation} where $| \Psi_0 \rangle$ is a reference Slater determinant and $\hat{T}$ is a linear combination of up to $k$ particle-hole excitation operators that promote electrons from the hole (occupied) to the particle (unoccupied) orbitals of the reference state $| \Psi_0 \rangle$. In many studies, $\hat{T}$ is limited to single and double particle-hole excitations, defining the so-called q-UCCSD Ansatz. Standard coupled cluster theory is naturally implemented on a classical device, while its unitary variant is naturally implemented on a quantum device. Eq.~\eqref{eq:q_uccsd} is factored into a product of exponentials of Pauli operators, using product formulas presented in Section \ref{sec:dynamics}, \begin{equation} e^{ \hat{T} - \hat{T}^\dagger } \simeq \prod_\mu e^{ \theta_\mu \left( \hat{t}_\mu - \hat{t}_\mu ^\dagger \right) } \quad. \end{equation} Such a factorization is not unique, and optimal parameterization of fermionic wavefunctions via q-UCC has been recently explored \cite{evangelista2019exact}. For ease of implementation, Eq.~\eqref{eq:q_uccsd} can be approximated \cite{barkoutsos2018quantum} by a product of exponentials of individual one- and two-body operators, \begin{equation} \label{eq:cc_basic} e^{ \hat{T} - \hat{T}^\dagger } \simeq \prod_{ia} e^{ \theta^a_i ( \crt{a} \dst{i} - \crt{i} \dst{a} ) } \prod_{ijab} e^{\theta^{ab}_{ij} ( \crt{a} \crt{b} \dst{j} \dst{i} - \crt{i} \crt{j} \dst{b} \dst{a} ) } \quad. \end{equation} In the JW representation, the exponentials in Eq.~\eqref{eq:cc_basic} are represented by the quantum circuits in Figure \ref{fig:q-uccsd}, each of which require $\mathcal{O}(M)$ $\mathsf{CNOT}$ gates and have depth $\mathcal{O}(M)$. While the resulting quantum circuit has polynomial depth and number of gates, the $\mathcal{O}(M^5)$ scaling of a basic implementation is sufficiently deep to limit implementations of q-UCCSD on today's quantum hardware. Note that, when using a VQE solver, all Ans\"{a}tze will give an energy that is above the true energy of the system provided that all the appropriate properties/symmetries (e.g. number of particles, $S^2$, $S_z$) are maintained. \begin{figure}\label{fig:q-uccsd} \end{figure} \begin{figure} \caption{Example of a hardware-efficient Ansatz for a 4-qubit problem, defined by layers of single-qubit gates $u_{i,j}$ interspersed with $\mathsf{CNOT}$ gates acting on adjacent qubits on a device with linear connectivity.} \label{fig:hardware_efficient} \end{figure} \paragraph{Hardware efficient Ans\"{a}tze.} Hardware efficient Ans\"{a}tze are typically designed to be experimentally feasible, because they are based on the realizable demands on connectivity and gate operations of a given chip. An example is shown in Fig.~\ref{fig:hardware_efficient}, and consists of alternating layers of arbitrary single-qubit gates and an entangling gate. 
While it is not guaranteed that such Ans\"{a}tze contain good approximations to the state of interest, they enable important and conceptually insightful simulations on contemporary quantum devices. {It is worth emphasizing that some heuristic methods, epitomized by q-UCC, are based on hierarchies of increasingly more general (and therefore more accurate and expensive) Ans\"{a}tze. As such, they offer the possibility to systematically converge results towards exact quantities, provided enough computational resources are available.} The design of compact Ans\"atze that join hardware efficiency and chemical insight is an active and valuable research area. An important progress in this regard was the development of {schemes like qubit coupled-cluster \cite{ryabinkin2018qubit} and ADAPT-VQE \cite{Grimsley2019,tang2021qubit}. In ADAPT-VQE,} a pool of operators $\{ \hat{A}_i \}_i$ is chosen in advance, the ground state is approximated by the parametrized wavefunction $| \Psi_\theta \rangle = \exp( \theta_n \hat{A}_{i_n} ) \dots \exp( \theta_1 \hat{A}_{i_1} ) | \Psi_0 \rangle$, where the angles are optimized variationally, and pool operators are appended to the circuit based on the value taken by the energy gradient $g_i = \langle \Psi_\theta | [\hat{H} , \hat{A}_i] | \Psi_\theta \rangle$. Numerical simulations showed ADAPT-VQE can improve over q-UCCSD in terms of the accuracy achievable for a given circuit depth. \subsubsection{Variational quantum simulation} \label{sec:vqs} Unlike VQO algorithms, which approximate a specific point of the Hilbert space by minimizing a cost function, VQS algorithms aim at approximating a curve in the Hilbert space of a system (corresponding to a dynamical process) by a curve of time-dependent parametrized wavefunctions, $\ket{ \Psi_{\theta_t} }$. The flow of such a wavefunction is mapped to the evolution of the parameters $\theta_t$, which takes the form of a differential equation \cite{mcardle2019variational,yuan2019theory}. For example, to variationally simulate Hamiltonian dynamics, parameters are evolved to make the vector \begin{equation} \ket{\Delta} = \frac{d}{dt} \ket{ \Psi_{\theta_t} } + i \hat{H} \ket{ \Psi_{\theta_t} } = \sum_k \frac{d \theta_k}{dt} \ket{ \Psi^k_{\theta_t} } + i \hat{H} \ket{ \Psi_{\theta_t} } \quad,\quad \ket{ \Psi^k_{\theta_t} } = \frac{\partial}{\partial {\theta_k}} \ket{ \Psi_{\theta_t} } \quad, \end{equation} vanish. McLachlan's variational principle, i.e. minimization of $\| \Delta \|^2$, leads to the differential equation \begin{equation} \label{eq:working_eq_vqs} b_r = \sum_k A_{rk} \frac{d\theta_k}{dt} \quad,\quad A_{rk} = \mbox{Re}\left( \langle \Psi^r_{\theta_t} | \Psi^k_{\theta_t} \rangle \right) \quad,\quad b_r = \mbox{Im} \left( \langle \Psi^r_{\theta_t} | \hat{H} | \Psi_{\theta_t} \rangle \right) \quad, \end{equation} defining $d \theta/dt$. The quantities $A$, $b$ are measured on the quantum computer \cite{mcardle2019variational,yuan2019theory}, and the classical computer uses such information to compute $d \theta/dt$ and to update the parameters $\theta$. VQS algorithms are especially useful to carry out simulations of Hamiltonian dynamics on contemporary quantum hardware: while the Schr\"odinger equation can be solved by converting $\exp(-it\hat{H})$ into a quantum circuit, as discussed in Section \ref{sec:dynamics}, the depth of such a circuit generally increases polynomially with simulation time $t$. 
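The working equation \eqref{eq:working_eq_vqs} can be emulated classically for small systems, which is a useful way to test Ans\"{a}tze and integration schemes before running on hardware. In the sketch below, a toy single-qubit Ansatz and Hamiltonian are used (arbitrary choices, not taken from the references above); the quantities $A$ and $b$, which on a device would be estimated from measurements, are computed here from finite-difference derivatives of the state vector, and the parameters are propagated with an explicit Euler step.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Classical emulation of the VQS working equation for a toy single-qubit
# problem: |Psi(theta)> = exp(-i theta_0) Rz(theta_2) Rx(theta_1) |0>.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.7 * X + 0.3 * Z                        # arbitrary toy Hamiltonian

def ansatz(theta):
    state = np.array([1.0, 0.0], dtype=complex)
    state = expm(-0.5j * theta[1] * X) @ state       # Rx(theta_1)
    state = expm(-0.5j * theta[2] * Z) @ state       # Rz(theta_2)
    return np.exp(-1j * theta[0]) * state            # global-phase parameter

def derivatives(theta, eps=1e-6):
    return [(ansatz(theta + eps * e) - ansatz(theta - eps * e)) / (2 * eps)
            for e in np.eye(len(theta))]

theta = np.array([0.0, 0.3, 0.1])
psi_exact = ansatz(theta)
dt, n_steps = 0.01, 200
for _ in range(n_steps):
    psi, dpsi = ansatz(theta), derivatives(theta)
    A = np.array([[np.real(np.vdot(dr, dk)) for dk in dpsi] for dr in dpsi])
    b = np.array([np.imag(np.vdot(dr, H @ psi)) for dr in dpsi])
    theta = theta + dt * np.linalg.lstsq(A, b, rcond=None)[0]  # A dtheta/dt = b
    psi_exact = expm(-1j * H * dt) @ psi_exact                 # exact reference

overlap = abs(np.vdot(psi_exact, ansatz(theta))) ** 2
print("overlap with exactly propagated state:", round(overlap, 6))
\end{verbatim}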
Like their VQO counterparts, VQS algorithms have heuristic nature, as they assume that the quantum state is represented by an Ansatz quantum circuit with fixed depth and structure at any time $t$. Depending on the problem of interest and the structure of the Ansatz, VQS algorithms may thus give inaccurate results. \subsubsection{Quantum diagonalization algorithms} In classical electronic structure, the search for ground and excited Hamiltonian eigenstates can be tackled relying on diagonalization methods. Given a Hamiltonian acting on a Hilbert space $\mathcal{H}$ with dimension $N$, diagonalization methods are based on the synthesis of a collection of vectors $\{ | \vett{v}_a \rangle \}_{a=0}^{d-1}$ that form a basis for a $d$-dimensional subspace of $\mathcal{H}$. Once these vectors are available, the overlap and Hamiltonian matrices \begin{equation} \label{eq:diag_eigen} S_{ab} = \langle \vett{v}_a | \vett{v}_b \rangle \quad,\quad H_{ab} = \langle \vett{v}_a | \hat{H} | \vett{v}_b \rangle \quad, \end{equation} are computed, and the eigenvalue problem $H \vett{c}_\mu = E_\mu S \vett{c}_\mu$ is solved, to determine approximate eigenvalues $E_\mu$ and eigenvectors $| \psi_\mu \rangle = \sum_a c_{a\mu} \, | \vett{v}_a \rangle$ of $\hat{H}$ \cite{lanczos1950iteration,davidsorq1975theiterative,morgan1986generalizations}. Recently, a number of quantum diagonalization algorithms have been conceived \cite{mcclean2017hybrid,colless2018computation,huggins2020non,motta2020determining,ollitrault2020quantum,parrish2019quantum,huggins2020non,stair2020multireference,jamet2021krylov}, that generate vectors $| \vett{v}_a \rangle$ applying suitable operators to an initial state $| \Psi_0 \rangle$, \begin{equation} | \vett{v}_a \rangle = \hat{V}_a | \Psi_0 \rangle \quad,\quad a = 0 \dots d-1 \quad . \end{equation} When the $\hat{V}_a$ are unitary operators, the matrix elements Eq.~\eqref{eq:diag_eigen} can be computed with the so-called Hadamard test circuit \cite{somma2002simulating,aharonov2009polynomial}, shown in Fig.~\ref{fig:lanczos}. An alternative approach is to compile the operators $\hat{V}_a^\dagger \hat{V}_b$ and $\hat{V}_a^\dagger \hat{H} \hat{V}_b$ into linear combinations of Pauli operators, and measure them with the techniques described in Section \ref{sec:quantum_computers}, provided the number of Pauli operators grows polynomially with qubit number. Depending on the number and the nature of the vectors $| \vett{v}_a \rangle$, and on the structure of the problem at hand, quantum diagonalization algorithms can offer a polynomially expensive route to accurate approximation for ground and selected excited states. Examples of quantum diagonalization algorithms are briefly described in the remainder of this Section. \begin{figure}\label{fig:lanczos} \end{figure} \paragraph{Quantum subspace expansion (QSE).} In this algorithm, excited states are defined \cite{mcclean2017hybrid,colless2018computation,huggins2020non} by the Ansatz $| \psi_\mu \rangle = \sum_a c_{a\mu} \hat{E}_a | \Psi_0 \rangle$, where $\{ \hat{E}_a\}_a$ is a set of pre-defined excitation operators. Common choices are Pauli operators of weight at most $k$ and fermionic operators of rank at most $k$, \begin{equation} \begin{split} P_k &= \left\{ \hat{\sigma}_{{\bf{m}}} = \hat{\sigma}_{m_{n-1}} \otimes \dots \otimes \hat{\sigma}_{m_0} \;,\; \sigma_{m_i} \neq \mathbbm{1} \mbox{ for at most $k$ indices } \right\} \quad, \\ F_k &= \left\{ \crt{i_0} \dots \crt{i_{l-1}} \dst{j_{l-1}} \dots \dst{j_0} \;,\; l \leq k \right\} \quad. 
\end{split} \end{equation} \paragraph{Quantum filter diagonalization (QFD).} This technique \cite{parrish2019quantum,huggins2020non,stair2020multireference,cohn2021quantum} projects the Hamiltonian on a subspace spanned by a set of non-orthogonal quantum states generated via approximate time evolution, $\ket{{\bf{v}}_k} = \exp(-i k \Delta t \hat{H}) \, \ket{\Psi_0}$. QFD can be regarded as a quantum computational equivalent of classical filter diagonalization \cite{neuhauser1990bound,neuhauser1994circumventing}, from which it inherits the connection with the Lanczos algorithm. Furthermore, it draws a profound connection between the two central problems of quantum simulation, namely BQP-complete Hamiltonian simulation and QMA-complete ground-state search. {Finally, it provides} a compelling framework to apply and test algorithms for approximate and variational time evolution, because QFD requires time evolution to generate a set of linearly independent states, a goal that can be achieved with less conservative approximations. \paragraph{Quantum equation of motion (q-EOM).} QSE computes total energies of ground and excited states, which can be subtracted to yield excitation energies. One route to compute excitation energies directly is the q-EOM method, well established in classical simulations and recently extended to quantum computation \cite{Rowe,Ganzhorn,ollitrault2020quantum,gao2020applications,barison2020quantum}. In this framework, excitation energies are computed as \begin{equation} \Delta E_\mu = \frac{ \langle \Psi_0 | [\hat{O}_\mu,\hat{H},\hat{O}_\mu^\dag] | \Psi_0 \rangle }{ \langle \Psi_0 | [ \hat{O}_\mu ,\hat{O}_\mu^\dag] | \Psi_0 \rangle } \quad, \end{equation} where $2 [\hat{O}_\mu,\hat{H},\hat{O}_\mu^\dag] = [[\hat{O}_\mu,\hat{H}],\hat{O}_\mu^\dag] + [\hat{O}_\mu,[\hat{H},\hat{O}_\mu^\dag]]$ and $\hat{O}_\mu$ is an excitation operator expanded on a suitable basis. {Commutators are introduced for several reasons. First, they can be used to compute energy differences directly rather than total energies: in fact, if $\Psi_0$ is an eigenstate of $\hat{H}$ with eigenvalue $E_0$, then $\hat{H} \hat{O}_\mu^\dagger |\Psi_0 \rangle = [ \hat{H} , \hat{O}_\mu^\dagger ] |\Psi_0 \rangle + E_0 \hat{O}_\mu^\dagger |\Psi_0 \rangle$ and thus $\hat{O}_\mu^\dagger |\Psi_0 \rangle$ is an eigenstate of $\hat{H}$ with eigenvalue $E_\mu$ if $[ \hat{H} , \hat{O}_\mu^\dagger ] |\Psi_0 \rangle = (E_\mu - E_0) \hat{O}_\mu^\dagger |\Psi_0 \rangle = \Delta E_\mu \hat{O}_\mu^\dagger |\Psi_0 \rangle$. Second, they can project the Schr\"{o}dinger equation onto a subspace of relevant electronic wavefunctions \cite{Rowe} and finally, the rank (number of electronic excitations) of a commutator is lower than the rank of a product, which has a beneficial impact on the computational cost.} The variational problem of finding the stationary points of $\Delta E_\mu$ leads to a generalized eigenvalue equation, the solutions of which are the excited-state energies. \paragraph{Quantum Lanczos (qLANCZOS).} In this algorithm, imaginary-time evolution (ITE) \begin{equation} \ket{{\bf{v}}_k} = \frac{ e^{-k \Delta \tau \hat{H}} \ket{\Psi_0} }{\| e^{-k \Delta \tau \hat{H}} \ket{\Psi_0} \|} \quad,\quad k = 0 \dots d-1 \quad, \end{equation} is used to construct the subspace. {QFD and qLANCZOS are examples of methods projecting the Schr\"{o}dinger equation in a time series basis.} ITE is a non-linear and non-unitary map, and thus is not naturally simulatable on a digital quantum computer. 
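For small Hamiltonians, the whole workflow of a quantum diagonalization calculation can be reproduced classically, which is useful both as a correctness check and to study how accuracy depends on the number and nature of the subspace vectors. The sketch below, written in the spirit of QFD and qLANCZOS, generates the subspace by exact imaginary-time evolution, assembles the matrices of Eq.~\eqref{eq:diag_eigen}, and solves the generalized eigenvalue problem after discarding nearly linearly dependent directions; all matrices are small random stand-ins, and on a quantum device $S$ and $H$ would instead be estimated from measurements such as Hadamard tests.

\begin{verbatim}
import numpy as np
from scipy.linalg import expm, eigh

rng = np.random.default_rng(seed=3)
dim, d, dtau = 16, 5, 0.4

A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2                      # toy Hamiltonian
psi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
psi0 /= np.linalg.norm(psi0)

# subspace vectors |v_k> = exp(-k dtau H)|psi0> / ||...||,  k = 0 ... d-1
vs = []
for k in range(d):
    v = expm(-k * dtau * H) @ psi0
    vs.append(v / np.linalg.norm(v))

S = np.array([[np.vdot(va, vb) for vb in vs] for va in vs])
Hm = np.array([[np.vdot(va, H @ vb) for vb in vs] for va in vs])

# canonical orthogonalization: keep directions with S-eigenvalues above a cutoff
s_eval, s_evec = eigh(S)
keep = s_eval > 1e-10
Xc = s_evec[:, keep] / np.sqrt(s_eval[keep])
E = np.linalg.eigvalsh(Xc.conj().T @ Hm @ Xc)

print("lowest subspace eigenvalue :", E[0])
print("exact ground-state energy  :", np.linalg.eigvalsh(H)[0])
\end{verbatim}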
Various approaches to simulate ITE on quantum hardware have been proposed, ranging from LCU-based to variational (see Sections \ref{sec:taylor}, \ref{sec:vqs}). Ref.~\cite{motta2020determining} introduced an alternative approach to apply ITE on a quantum computer, termed quantum ITE or QITE, which is free from ancillae and controlled operations as well as from high-dimensional parameter optimizations. In the QITE method, a single step of ITE under a geometrically local term $\hat{h}_m$ of the Hamiltonian is approximated by a unitary, $e^{- \Delta\tau \hat{h}_m} \ket{\Psi} \propto e^{i\hat{A}_m} \ket{\Psi}$, where $\hat{A}_m = \sum_i x_{im} \hat{P}_{im}$, the operators $\hat{P}_{im}$ act on a neighborhood of the domain of $\hat{h}_m$, and the coefficients $x_{im}$ are determined from local measurements \cite{motta2020determining}. While initial estimates in a limited set of problems show QITE to be resource-efficient compared to variational methods \cite{motta2020determining,sun2021quantum,yeter2020a,gomes2020,yeter2020b,kamakari2021digital}, an extensive numerical understanding of its performance and cost across different problems remains to be developed. \subsubsection{Applications of variational algorithms, and open problems} The design and improvement of heuristic algorithms for Hamiltonian eigenpair approximation is one of the most active research areas at the interface between quantum chemistry and quantum computation. Some of the directions of current and future research include: the creation of new Ans\"{a}tze based on chemical notions, the extension of properties accessible to heuristic algorithms, and the economization of calculations. The relationship between standard coupled-cluster $e^{\hat{T}} \ket{\Psi_0}$ and unitary coupled-cluster Eq.~\eqref{eq:q_uccsd} exemplifies how quantum computation can offer occasions to revive and reinterpret concepts and techniques from quantum chemistry. In both cases, the use of a cluster expansion is motivated by the qualitative accuracy of mean-field theory, and quantum computation provides a compelling framework to explore the theoretical and numerical differences between these theories \cite{Cooper2010,harsha2018difference,evangelista2019exact,lee2018generalized}, especially in statically correlated situations. Circuits inspired by the q-UCCSD hierarchy, but which directly replace the fermionic field operators with spin ladder operators \cite{ryabinkin2018qubit}, have been suggested, as well as circuits based on the use of specific building blocks \cite{o2019generalized,anselmetti2021local,barison2020quantum,matsuzawa2020jastrow} and symmetry-preserving Ans\"{a}tze \cite{gard2020efficient}. The VQE algorithm has been extended to the calculation of excited states \cite{higgott2019variational,ibe2020calculating} and the optimization of molecular orbitals \cite{mizukami2020orbital,sokolov2020quantum} by suitable modification of the cost function; to properties other than the ground-state energy \cite{rice2021quantum,sokolov2021microcanonical} by evaluation of suitable operators on the VQE wavefunction; and has been integrated into the workflow of solid-state chemistry \cite{choudhary2021quantum,ma2020quantum}, transcorrelated Hamiltonian \cite{motta2020quantum,mcardle2020improving} and quantum embedding \cite{rubin2016hybrid,dhawan2020dynamical,metcalf2020resource,kawashima2021efficient,rossmannek2021quantum} calculations.
{Modifying the VQE cost function has also been proposed as a technique to improve the quality or the convergence of ground-state simulations \cite{stair2021simulating,kuroiwa2021penalty,ryabinkin2018constrained}.} On the front of algorithm optimization, considerable effort has been devoted to reducing the measurement cost \cite{gonthier2020identifying}, for example by simultaneously measuring commuting subsets of the Pauli operators needed for the cost function \cite{wecker2015progress,jena2019pauli,izmaylov2019unitary,jena2019pauli,kubler2020adaptive,zhao2020measurement}, {leveraging amplitude amplification \cite{wang2021minimizing}}, as well as adopting machine-learning techniques to extract more information from a given measurement dataset \cite{torlai2020precise,hadfield2020measurements,hillmich2021decision}. {In the context of variational quantum algorithms for time evolution, important research directions are related with the simplification of the working equations \eqref{eq:working_eq_vqs}, and the economization of the quantum measurement required by the simulation. For example, VQS techniques based on minimizing the distance (or maximizing the overlap) between states evolved in time exactly and variationally have been proposed \cite{barison2021efficient,benedetti2021hardware}, along with techniques to economize quantum measurements introducing causal light-cone structure in the Ansatz \cite{foss2021holographic,benedetti2021hardware,kattemolle2021variational}. Generalizing these techniques to chemical Hamiltonians is a compelling research direction, at the interface between quantum algorithms for physics and chemistry.} {Furthermore, while molecular simulations generally require handling the N-electron wavefunction, recently proposed approaches have instead focused on the expression of the ground-state energy as a functional of the two-electron reduced density matrix \cite{boyn2021quantum,mazziotti2021quantum}. While recent results have indicated that they represent a promising direction for efficient molecular quantum simulations, additional research is needed to assess their full potential.} \section{Error mitigation techniques for near-term quantum devices} \label{sec:hardware} Until recently, executing quantum algorithms was only a theoretical possibility. Recent advances have made quantum computing devices available to the scientific community \cite{ibm2020services,rigetti2020services}, and computational packages to design and implement quantum algorithms \cite{smith2016practical,Qiskit,mcclean2020openfermion}. \begin{figure}\label{fig:hardware} \end{figure} \begin{table}[h!] 
\begin{tabular}{ccc} \hline\hline reference & systems & number of qubits \\ \hline \cite{peruzzo2014variational} & \ce{HeH+} & 2 \\ \cite{o2016scalable,kandala2017hardware} & \ce{H2} & 2 \\ \cite{kandala2017hardware} & \ce{BeH2} & 6 \\ \cite{kandala2017hardware,rice2021quantum} & \ce{LiH} & 4 \\ \cite{nam2020ground} & \ce{H2O} & 4 \\ \cite{gao2021computational} & \ce{LiO2} dimer & 2 \\ \cite{mccaskey2019quantum} & \ce{NaH}, \ce{RbH}, \ce{KH} & 4 \\ \cite{google2020hartree} & \ce{H12} & 12 \\ \cite{eddins2021doubling} & \ce{H2O} & 5 \\ \hline\hline \end{tabular} \caption{Some recent experiments on contemporary quantum hardware, aimed at simulating molecular systems.} \label{table:recent_simulations} \end{table} Based on a broad range of architectures, such as superconducting \cite{krantz2019quantum,devoret2004superconducting} and trapped ion \cite{bruzewicz2019trapped,brown2021materials} qubits, such devices are capable of carrying out quantum computations of chemical systems on a limited scale, as exemplified in Table \ref{table:recent_simulations}, for a variety of technical reasons. In particular: ($i$) they comprise fewer than 100 qubits which, as seen in Section \ref{sec:mapping_to_qubits}, limits the number of electrons and orbitals that can be simulated, ($ii$) not all pairs of qubits are physically connected, so that entangling gates have to be limited to adjacent qubits in the topology of the chip, or implemented incurring an overhead of $\mathsf{SWAP}$ gates, see Fig.~\ref{fig:hardware}, ($iii$) each device has a set of native gates, dictated by its architecture and manipulation techniques \cite{rigetti2010fully,chow2011simple,yan2018tunable}; while such gates are universal (see Section \ref{sec:universality}), every gate in a quantum circuit has to be compiled into a product of native gates, ($iv$) quantum hardware is subject to decoherence and imperfect implementation of quantum operations. Errors occurring on a quantum device can be classified into coherent (unitary noise processes) and incoherent (non-unitary noise processes), \begin{equation} i \hbar \frac{d \rho_{\textrm{hw}} }{dt} = [ \hat{H}_{\textrm{hw}}(t) + \hat{H}_{\textrm{c}}(t) , \rho_{\textrm{hw}} ] + \mathcal{L}_{ \textrm{i} }( t, \rho_{\textrm{hw}} ) \quad, \end{equation} where $\rho_{\textrm{hw}}$ is the density operator of the quantum hardware, and $\hat{H}_{\textrm{c}}(t)$ and $\mathcal{L}_{ \textrm{i} }$ generate unitary and non-unitary evolution respectively. Coherent errors are exemplified by over- or under-rotation in qubit control pulses and qubit cross-talk, and examples of incoherent errors are the following single-qubit amplitude damping, phase damping and depolarization processes, respectively \cite{nielsen2002quantum} \begin{equation} \mathcal{L}_{\textrm{a}}(\rho) = \gamma_{\textrm{a}} \; \mathcal{V}_{S_-}(\rho) \quad,\quad \mathcal{L}_{\textrm{p}}(\rho) = \gamma_{\textrm{p}} \; \mathcal{V}_{Z}(\rho) \quad,\quad \mathcal{L}_{\textrm{d}}(\rho) = \gamma_{\textrm{d}} \; \sum_m \mathcal{V}_{\sigma_m}(\rho) \quad, \end{equation} where $\mathcal{V}_{A}(\rho) = A \rho A^\dagger - \frac{1}{2} A^\dagger A \rho - \frac{1}{2} \rho A^\dagger A$. Relaxation and dephasing processes occur on time scales $T_1$ and $T_2$ respectively, called qubit decoherence times.
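As an illustration of the incoherent processes defined above, the following sketch integrates the single-qubit amplitude-damping channel with a simple Euler scheme and checks that the excited-state population decays exponentially at rate $\gamma_{\textrm{a}}$, i.e. on the time scale $T_1 = 1/\gamma_{\textrm{a}}$ (a toy illustration of the noise model with the standard Lindblad normalization, not a simulation of any specific device).

\begin{verbatim}
import numpy as np

# Amplitude damping: d rho/dt = gamma_a * V_{S_-}(rho), with S_- = (X+iY)/2.
S_minus = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)   # |0><1|

def dissipator(A, rho):
    AdA = A.conj().T @ A
    return A @ rho @ A.conj().T - 0.5 * (AdA @ rho + rho @ AdA)

gamma_a, dt, n_steps = 0.2, 0.001, 5000
rho = np.array([[0.0, 0.0], [0.0, 1.0]], dtype=complex)       # start in |1><1|
for _ in range(n_steps):
    rho = rho + dt * gamma_a * dissipator(S_minus, rho)

t = dt * n_steps
print("population of |1> :", rho[1, 1].real)
print("exp(-gamma_a t)   :", np.exp(-gamma_a * t))
\end{verbatim}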
Deeper quantum circuits comprising more entangling gates have significantly larger biases and statistical uncertainties, due to the accumulation of coherent and incoherent errors, an effect that is especially pronounced when the execution time of the circuit is comparable with decoherence times of the qubits. Fully addressing the decoherence problem requires an advanced set of techniques, namely fault-tolerant quantum computation via quantum error correction \cite{peres1985reversible,shor1995scheme,steane1996error,nielsen2002quantum,fowler2012surface}, which in turn requires extremely low error rates for qubit operations as well as a significant overhead of physical qubits \cite{fowler2012surface,preskill2018quantum}. Enhancing the capabilities of near-term quantum computing hardware thus requires techniques to mitigate errors without requiring any additional quantum resources \cite{kandala2018extending,maciejewski2020mitigation,bharti2021noisy}. The emergent nature of quantum devices requires quantum chemists and quantum information scientists to conduct synergistic research, so that algorithmic implementations understand and leverage the nature, limitations and features of quantum hardware, and the benchmarking and development of quantum hardware is driven by chemical applications. Conducting quantum simulations of chemical systems and designing algorithms for contemporary quantum hardware is also an important occasion to understand, characterize and mitigate errors induced by experimental imperfections. In the remainder of this section, we will describe some {emerging} techniques for mitigation of readout and gate errors. \paragraph{Readout error mitigation.} In theoretical considerations about quantum information protocols, a quantum device is often assumed to perform unbiased measurements. In practice, this assumption is often violated due to experimental imperfections and decoherence. Since measurements are a central part of any quantum simulation, this observation led to the development of readout error mitigation techniques. Let us denote by ${\bf{p}}_{\textrm{ideal}}$ the exact probability distribution for the outcomes of a quantum measurement, and ${\bf{p}}_{\textrm{exp}}$ the probability distribution actually measured on the quantum hardware. The relationship between ${\bf{p}}_{\textrm{ideal}}$ and ${\bf{p}}_{\textrm{exp}}$ is in general captured by a complicated multidimensional function. However, if noise affecting measurements is weak, an accurate approximation can be obtained assuming the relation between the two probability distributions is given by a linear map, ${\bf{p}}_{\textrm{exp}} = \Lambda \, {\bf{p}}_{\textrm{ideal}} + {\bf{\Delta}}$. The elements of $\Lambda$ and ${\bf{\Delta}}$ can be estimated by choosing a set of calibration circuits $\{ c_k \}_k$ such that ${\bf{p}}_{\textrm{ideal}}(k)$ can be computed analytically (e.g. circuits comprising a single layer of $X$ gates), measuring the observable of interest over these circuits, obtaining probability distributions ${\bf{p}}_{\textrm{exp}}(k)$, and minimizing the distance $\Lambda_0,{\bf{\Delta}}_0 = \mbox{argmin}_{\Lambda,{\bf{\Delta}}} \sum_k \| {\bf{p}}_{\textrm{exp}}(k) - \Lambda {\bf{p}}_{\textrm{ideal}}(k) - {\bf{\Delta}} \|$. 
The observable of interest is measured over a different state outside the calibration set, and the ideal probability distribution is reconstructed from the experimental one as ${\bf{p}}_{\textrm{ideal}} \simeq \Lambda^{-1}_0 \, \left[ {\bf{p}}_{\textrm{exp}} - {\bf{\Delta}}_0 \right]$. This method is solely based on classical post-processing and, although its cost scales exponentially with qubit number, suitable Ans\"{a}tze on the structure of the pair $\Lambda_0,{\bf{\Delta}}_0$ can still give accurate maps at polynomial cost. Numerical studies have analyzed the impact of finite statistics (at the stage of estimation of probability distributions) on the protocol, and confirmed its approximate validity on a variety of publicly available prototypes of quantum chips \cite{temme2017error,kandala2018extending,maciejewski2020mitigation}. \paragraph{Gate error mitigation.} Recent work \cite{temme2017error,li2017efficient,endo2018practical} has shown that the accuracy of computation based off expectation values of quantum observables, such as variational quantum algorithms, can be enhanced through an extrapolation of results from a collection of varying noisy experiments. Any quantum circuit can be expressed in terms of evolution under a time-dependent drive Hamiltonian $\hat{H}_{\textrm{hw}}(t) = \sum_\alpha J_\alpha(t) \hat{P}_\alpha$ acting on the quantum hardware, where $\hat{P}_\alpha$ represents some Hermitian operator of the quantum hardware and $J_\alpha(t)$ the strength of the associated interaction. The expectation value $B(\varepsilon)$ of an observable of interest over the state prepared by the drive $\hat{H}_{\textrm{hw}}(t)$ in the presence of noise can be expressed as a power series around its zero-noise value, \begin{equation} B(\varepsilon) = b_0 + \sum_{k=1}^n b_k \, \varepsilon^k + \mathcal{O}(\varepsilon^{n+1}) \quad, \end{equation} where $\varepsilon$ is a small noise parameter, and the coefficients in the expansion $b_k$ are dependent on specific details of the noise model. The primary objective of gate error mitigation techniques is to experimentally obtain improved estimates to $b_0$ despite using noisy quantum hardware. Assuming noise is time-translationally invariant a possible strategy \cite{kandala2018extending}, sketched in Fig.~\ref{fig:gate_error_mitigation}, is to perform a collection of experiments with stretched pulses, $J_\alpha(t) \to c_i^{-1} J_\alpha(c^{-1}_i t)$ corresponding to noise strengths $c_i \varepsilon$, computing the corresponding expectation values $B_i(\varepsilon) = B(c_i \varepsilon) = b_0 + \sum_k b_k (c_i \varepsilon)^k$, and extracting $b_0$ using a Richardson extrapolation \cite{richardson1911approximate}. This protocol, demonstrated for a variety of applications within and beyond molecular electronic structure \cite{kandala2017hardware}, proved able to enhance the computational capabilities of quantum processors based on superconducting architectures, with no additional quantum resources or hardware modifications, which makes it very compelling for practical implementations on near-term hardware. It is important to notice that implementing the Richardson extrapolation protocol requires a profound understanding and control of the gates used in the circuit, which in turn motivated less general but more easily implementable schemes \cite{dumitrescu2018cloud,rice2021quantum}. 
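The extrapolation step can be illustrated with synthetic data. In the sketch below, the ``measurements'' at different stretch factors are generated from an assumed expansion $B(x) = b_0 + b_1 x + b_2 x^2$ plus simulated shot noise, and a low-order polynomial fit in the effective noise strength (in the spirit of the Richardson extrapolation described above) is used to estimate the zero-noise value $b_0$; all numerical values are arbitrary.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(seed=1)

b0, b1, b2 = -1.137, 0.35, -0.08           # assumed "true" expansion coefficients
eps = 0.1                                  # bare noise strength (arbitrary units)
stretch = np.array([1.0, 1.5, 2.0, 2.5])   # stretch factors c_i

def measured_B(c, shots=2000):
    """Synthetic noisy expectation value B(c*eps) with shot noise."""
    x = c * eps
    return b0 + b1 * x + b2 * x ** 2 + rng.normal(scale=1.0 / np.sqrt(shots))

B_vals = np.array([measured_B(c) for c in stretch])

# polynomial fit in the effective noise strength c_i * eps, evaluated at zero
coeffs = np.polyfit(stretch * eps, B_vals, deg=2)
b0_estimate = np.polyval(coeffs, 0.0)

print("unmitigated  B(eps)      :", B_vals[0])
print("mitigated    b0 estimate :", b0_estimate)
print("reference    b0          :", b0)
\end{verbatim}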
Furthermore, unlike quantum error correction, gate error mitigation techniques do not allow for an indefinite extension of the computation time, and only provide corrections to expectation values, without correcting for the full quantum mechanical probability distributions. \begin{figure} \caption{Left: a measurement of the expectation value after rescaled state preparation is equivalent to a measurement under an amplified noise strength, if the noise is time-translation invariant. Right: illustration of gate error mitigation based on a first-order Richardson extrapolation to the zero-noise limit. This highlights that the variance of the mitigated estimate $b_0$ is dependent on the variance of the unmitigated measurements, and the stretch factors $c_i$.} \label{fig:gate_error_mitigation} \end{figure} {There exist other gate error mitigation techniques, that achieve cancellation of errors, for example, by introducing quantum gates implementing unitary transformations generated by the symmetries of the system \cite{tran2021faster}, or by resampling randomized circuits according to a quasi-probability distribution \cite{temme2017error}.} \paragraph{Post-selection.} To mitigate the effect of hardware noise on the measurement results, one can also process hardware data by error-mitigation methods such as post-selection. When a Hamiltonian has $\mathbbm{Z}_2$ symmetries, as discussed in Sec.~\ref{sec:fermions_second}, a wavefunction encoded on a quantum computer can be written as {$| \Psi \rangle = \hat{U}^\dagger \sum_{{\bf{s}}} c_{ {\bf{s}} } | \Phi_{\bf{s}} \rangle \otimes | {\bf{s}} \rangle$}, where the stabilizer parities ${\bf{s}}$ label irreps of the symmetry group. In absence of noise, wavefunctions should have an intended stabilizer parity ${\bf{s}}_0$, to avoid mixing different irreps. However, during execution of the circuit, gate errors and qubit decoherence can induce nonzero overlap of the qubit state with subspaces of undesired parity. Post-selection can mitigate these undesirable effects by discarding measurement outcomes with the wrong parity \cite{bonet2018low,mcardle2019error}, as exemplified in Fig.~\ref{fig:post_selection}. Compared against qubit reduction, post-selection requires more qubits, but typically retains the compact and efficient representation of fermionic and other operators as qubit operators, which the transformation $\hat{U}$ typically undoes. In the context of molecular electronic structure, post-selection is particularly appealing when simulations are performed in the fermionic Fock space. While Fock space representations are often elected for ease of implementation, electronic structure wavefunctions have well-defined particle number and spin. When the JW representation is used in conjunction with low-rank decompositions of the Hamiltonian \cite{huggins2021efficient,cohn2021quantum}, several such constants of motion quantities can be efficiently measured simultaneously with the one- and two-body contributions to the Hamiltonian, thereby projecting the electronic wavefunction into an eigenspace of constants of motion labeled by desired eigenvalues. \begin{figure} \caption{ Schematics of a post-selection procedure. The transformation $\hat{V}$ achieves simultaneous measurement of the stabilizer generators, transformed to single-qubit operators acting on the first $k=2$ qubits, from which the stabilizer parities $s_0,s_1$ are read. 
The other qubits are measured in $X$, $Y$ or $Z$ basis depending on the Pauli string of interest, and measurement outcomes with the undesired parity are discarded. } \label{fig:post_selection} \end{figure} \section{Conclusion and Outlook} \label{sec:conclusions} In this work, we explored {emerging} quantum computing algorithms for chemistry. In our discussion, we emphasized that quantum computers are special purpose machines, capable of solving certain structured problems with a polynomial amount of resources. These problems, exemplified by the simulation of Hamiltonian dynamics, can be projected to benefit from quantum algorithms. For other problems, exemplified by the simulation of Hamiltonian eigenpairs, quantum algorithms are based on heuristic approximation schemes. We reviewed quantum algorithms for the simulation of Hamiltonian dynamics (product formulas, quantum walks, and LCU-based algorithms) and of Hamiltonian eigenstates (variational and diagonalization algorithms), highlighting some applications and open problems. Given the emergent nature of quantum computation, a number of properties of quantum algorithms need to be characterized by systematic numerical studies over a set of diverse chemical problems, especially accuracy and computational cost. This characterization is needed for both heuristic (to help understand, establish and refine the underlying approximations) and non-heuristic algorithms (in order to determine when and how to apply them). Chemists can contribute to this effort in many different ways. First, they can help design sets of chemical systems that interpolate between small, simple (e.g. H$_2$) and large, realistic cases (e.g. enzymes). Achieving this goal can help demonstrate algorithms on today's devices, as well as lead to a more systematic understanding of their scalability and accuracy. Chemists can contribute also to the continuous development of new heuristics, by collaborating with quantum information scientists to optimally represent chemical wavefunctions and observables in terms of quantum circuits and measurements respectively. \section*{Acknowledgment} We thank {T. J. Lee, D. Maslov, H. Nakamura, A. Mezzacapo, and D. W. Berry} for helpful feedback on the manuscript. \appendix \section{Glossary} \begin{table}[h!] 
\centering \resizebox{\textwidth}{!}{ \begin{tabular}{|ll|ll|} \hline\hline acronym & meaning & acronym & meaning \\ \hline 2QR & two-qubit reduction & QITE & quantum imaginary-time evolution (ITE) \\ ADAM & adaptive moment estimation & qLANCZOS & quantum Lanczos \\ ASP & adiabatic state preparation & QMA & quantum Merlin-Arthur \\ BK & Bravyi-Kitaev & QM/MM & quantum mechanics / molecular mechanics approach \\ BQP & bounded-error quantum polynomial time & QNG & quantum natural gradient \\ CCSD & coupled-cluster (CC) with singles and doubles & QPE & quantum phase estimation \\ CCSD(T) & CCSD with perturbative estimate to connected triples & QSE & quantum subspace expansion \\ CI & configuration interaction & QSP & quantum signal processing \\ DFT & density functional theory & q-UCC & quantum unitary CC \\ FCI & full CI & q-UCCSD & q-UCC with singles and doubles \\ HF & Hartree-Fock & STO-6G & minimal basis where 6 primitive Gaussian orbitals \\ JW & Jordan-Wigner & & are fit to a Slater-type orbital (STO) \\ LCU & linear combination of unitaries & SPSA & simultaneous perturbation stochastic approximation \\ OAA & oblivious amplitude amplification & VQA & variational quantum (VQ) algorithm \\ q-EOM & quantum equation of motion & VQE & VQ eigensolver \\ QFD & quantum filter diagonalization & VQO & VQ optimization \\ QFT & quantum Fourier transform & VQS & VQ simulation \\ \hline\hline \end{tabular} } \caption{Glossary of acronyms used throughout the present work.} \label{tab:glossary} \end{table} \begin{thebibliography}{392} \makeatletter \providecommand \@ifxundefined [1]{ \@ifx{#1\undefined} } \providecommand \@ifnum [1]{ \ifnum #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \@ifx [1]{ \ifx #1\expandafter \@firstoftwo \else \expandafter \@secondoftwo \fi } \providecommand \natexlab [1]{#1} \providecommand \enquote [1]{``#1''} \providecommand \bibnamefont [1]{#1} \providecommand \bibfnamefont [1]{#1} \providecommand \citenamefont [1]{#1} \providecommand \href@noop [0]{\@secondoftwo} \providecommand \href [0]{\begingroup \@sanitize@url \@href} \providecommand \@href[1]{\@@startlink{#1}\@@href} \providecommand \@@href[1]{\endgroup#1\@@endlink} \providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode `\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax} \providecommand \@@startlink[1]{} \providecommand \@@endlink[0]{} \providecommand \url [0]{\begingroup\@sanitize@url \@url } \providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }} \providecommand \urlprefix [0]{URL } \providecommand \Eprint [0]{\href } \providecommand \doibase [0]{https://doi.org/} \providecommand \selectlanguage [0]{\@gobble} \providecommand \bibinfo [0]{\@secondoftwo} \providecommand \bibfield [0]{\@secondoftwo} \providecommand \translation [1]{[#1]} \providecommand \BibitemOpen [0]{} \providecommand \bibitemStop [0]{} \providecommand \bibitemNoStop [0]{.\EOS\space} \providecommand \EOS [0]{\spacefactor3000\relax} \providecommand \BibitemShut [1]{\csname bibitem#1\endcsname} \let\auto@bib@innerbib\@empty \bibitem [{\citenamefont {Dirac}(1928)}]{dirac1928quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~A.~M.}\ \bibnamefont {Dirac}},\ }\bibfield {title} {\bibinfo {title} {The quantum theory of the electron},\ }\href {https://royalsocietypublishing.org/doi/abs/10.1098/rspa.1928.0023} {\bibfield {journal} {\bibinfo {journal} {Proc. Roy. Soc. London A, Math. Phys. 
Sci}\ }\textbf {\bibinfo {volume} {117}},\ \bibinfo {pages} {610} (\bibinfo {year} {1928})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartlett}\ and\ \citenamefont {Musia{\l}}(2007)}]{bartlett2007coupled} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Bartlett}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Musia{\l}}},\ }\bibfield {title} {\bibinfo {title} {Coupled-cluster theory in quantum chemistry},\ }\href {https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.79.291} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {291} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Helgaker}\ \emph {et~al.}(2012)\citenamefont {Helgaker}, \citenamefont {Coriani}, \citenamefont {J{\o}rgensen}, \citenamefont {Kristensen}, \citenamefont {Olsen},\ and\ \citenamefont {Ruud}}]{helgaker2012recent} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Helgaker}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Coriani}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {J{\o}rgensen}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kristensen}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Olsen}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Ruud}},\ }\bibfield {title} {\bibinfo {title} {Recent advances in wave function-based methods of molecular-property calculations},\ }\href {https://doi.org/10.1021/cr2002239} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {112}},\ \bibinfo {pages} {543} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Helgaker}\ \emph {et~al.}(2014)\citenamefont {Helgaker}, \citenamefont {Jorgensen},\ and\ \citenamefont {Olsen}}]{helgaker2014molecular} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Helgaker}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Jorgensen}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Olsen}},\ }\href@noop {} {\emph {\bibinfo {title} {Molecular electronic-structure theory}}}\ (\bibinfo {publisher} {John Wiley \& Sons},\ \bibinfo {year} {2014})\BibitemShut {NoStop} \bibitem [{\citenamefont {Feynman}(1982)}]{feynman1982simulating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~P.}\ \bibnamefont {Feynman}},\ }\bibfield {title} {\bibinfo {title} {Simulating physics with computers},\ }\href {https://doi.org/10.1007/BF02650179} {\bibfield {journal} {\bibinfo {journal} {Int. J. Theor. 
Phys}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {467} (\bibinfo {year} {1982})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lloyd}(1996)}]{lloyd1996universal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\bibfield {title} {\bibinfo {title} {Universal quantum simulators},\ }\href {https://doi.org/10.1126/science.273.5278.1073} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {273}},\ \bibinfo {pages} {1073} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Abrams}\ and\ \citenamefont {Lloyd}(1997)}]{abrams1997simulation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Abrams}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ }\bibfield {title} {\bibinfo {title} {Simulation of many-body {F}ermi systems on a universal quantum computer},\ }\href {https://doi.org/10.1103/PhysRevLett.79.2586} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {79}},\ \bibinfo {pages} {2586} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Georgescu}\ \emph {et~al.}(2014)\citenamefont {Georgescu}, \citenamefont {Ashhab},\ and\ \citenamefont {Nori}}]{georgescu2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~M.}\ \bibnamefont {Georgescu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Ashhab}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Nori}},\ }\bibfield {title} {\bibinfo {title} {Quantum simulation},\ }\href {https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.86.153} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {86}},\ \bibinfo {pages} {153} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mukamel}(1999)}]{mukamel1999principles} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mukamel}},\ }\href@noop {} {\emph {\bibinfo {title} {Principles of nonlinear optical spectroscopy}}}\ (\bibinfo {publisher} {Oxford University Press},\ \bibinfo {year} {1999})\BibitemShut {NoStop} \bibitem [{\citenamefont {Barron}(2009)}]{barron2009molecular} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~D.}\ \bibnamefont {Barron}},\ }\href@noop {} {\emph {\bibinfo {title} {Molecular light scattering and optical activity}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2009})\BibitemShut {NoStop} \bibitem [{\citenamefont {Fleming}(1986)}]{fleming1986chemical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Fleming}},\ }\href@noop {} {\emph {\bibinfo {title} {Chemical applications of ultrafast spectroscopy}}}\ (\bibinfo {publisher} {Oxford University Press},\ \bibinfo {year} {1986})\BibitemShut {NoStop} \bibitem [{\citenamefont {Puzzarini}\ \emph {et~al.}(2010)\citenamefont {Puzzarini}, \citenamefont {Stanton},\ and\ \citenamefont {{G}auss}}]{puzzarini2010quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Puzzarini}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Stanton}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {{G}auss}},\ }\bibfield {title} {\bibinfo {title} {Quantum-chemical calculation of spectroscopic parameters for rotational spectroscopy},\ }\href {https://www.tandfonline.com/doi/abs/10.1080/01442351003643401} {\bibfield {journal} {\bibinfo {journal} {Int. Rev. Phys. 
Chem}\ }\textbf {\bibinfo {volume} {29}},\ \bibinfo {pages} {273} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Helgaker}\ \emph {et~al.}(1999)\citenamefont {Helgaker}, \citenamefont {Jaszunski},\ and\ \citenamefont {Ruud}}]{helgaker1999ab} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Helgaker}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Jaszunski}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Ruud}},\ }\bibfield {title} {\bibinfo {title} {Ab initio methods for the calculation of {NMR} shielding and indirect spin-spin coupling constants},\ }\href {https://pubs.acs.org/doi/abs/10.1021/cr960017t} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {293} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hammett}(1935)}]{hammett1935reaction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~P.}\ \bibnamefont {Hammett}},\ }\bibfield {title} {\bibinfo {title} {Reaction rates and indicator acidities},\ }\href {https://pubs.acs.org/doi/10.1021/cr60053a006} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {67} (\bibinfo {year} {1935})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bauer}\ \emph {et~al.}(2020)\citenamefont {Bauer}, \citenamefont {Bravyi}, \citenamefont {Motta},\ and\ \citenamefont {Kin-Lic~Chan}}]{bauer2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Bauer}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bravyi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kin-Lic~Chan}},\ }\bibfield {title} {\bibinfo {title} {Quantum algorithms for quantum chemistry and quantum materials science},\ }\href {https://pubs.acs.org/doi/10.1021/acs.chemrev.9b00829} {\bibfield {journal} {\bibinfo {journal} {Chem. 
Rev}\ }\textbf {\bibinfo {volume} {120}},\ \bibinfo {pages} {12685} (\bibinfo {year} {2020})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nielsen}\ and\ \citenamefont {Chuang}(2010)}]{nielsen2002quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}}\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum computation and quantum information}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2010})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mermin}(2007)}]{mermin2007quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~D.}\ \bibnamefont {Mermin}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum computer science: an introduction}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2007})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kitaev}\ \emph {et~al.}(2002)\citenamefont {Kitaev}, \citenamefont {Shen},\ and\ \citenamefont {Vyalyi}}]{kitaev2002classical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont {Kitaev}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shen}},\ and\ \bibinfo {author} {\bibfnamefont {M.~N.}\ \bibnamefont {Vyalyi}},\ }\href@noop {} {\emph {\bibinfo {title} {Classical and quantum computation}}}\ (\bibinfo {publisher} {American Mathematical Society},\ \bibinfo {year} {2002})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Benenti}\ \emph {et~al.}(2019)\citenamefont {Benenti}, \citenamefont {Casati}, \citenamefont {Rossini},\ and\ \citenamefont {Strini}}]{benenti2004principles} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Benenti}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Casati}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Rossini}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Strini}},\ }\href@noop {} {\emph {\bibinfo {title} {Principles of quantum computation and information: A Comprehensive Textbook}}}\ (\bibinfo {publisher} {World Scientific},\ \bibinfo {year} {2019})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lo}\ \emph {et~al.}(1998)\citenamefont {Lo}, \citenamefont {Spiller},\ and\ \citenamefont {Popescu}}]{lo1998introduction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.-K.}\ \bibnamefont {Lo}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Spiller}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Popescu}},\ }\href@noop {} {\emph {\bibinfo {title} {Introduction to quantum computation and information}}}\ (\bibinfo {publisher} {World Scientific},\ \bibinfo {year} {1998})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Seeley}\ \emph {et~al.}(2012)\citenamefont {Seeley}, \citenamefont {Richard},\ and\ \citenamefont {Love}}]{seeley2012bravyi} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~T.}\ \bibnamefont {Seeley}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Richard}},\ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ }\bibfield {title} {\bibinfo {title} {The {B}ravyi-{K}itaev transformation for quantum computation of electronic structure},\ }\href {https://aip.scitation.org/doi/10.1063/1.4768229} {\bibfield {journal} {\bibinfo {journal} {J. Chem.
Phys}\ }\textbf {\bibinfo {volume} {137}},\ \bibinfo {pages} {224109} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kitaev}(1997)}]{kitaev1997quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont {Kitaev}},\ }\bibfield {title} {\bibinfo {title} {Quantum computations: algorithms and error correction},\ }\href {https://iopscience.iop.org/article/10.1070/RM1997v052n06ABEH002155/meta} {\bibfield {journal} {\bibinfo {journal} {Rus. Math. Surv}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {1191} (\bibinfo {year} {1997})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Harrow}\ \emph {et~al.}(2002)\citenamefont {Harrow}, \citenamefont {Recht},\ and\ \citenamefont {Chuang}}]{harrow2002efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~W.}\ \bibnamefont {Harrow}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Recht}},\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Chuang}},\ }\bibfield {title} {\bibinfo {title} {Efficient discrete approximations of quantum gates},\ }\href {https://aip.scitation.org/doi/10.1063/1.1495899} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {4445} (\bibinfo {year} {2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Dawson}\ and\ \citenamefont {Nielsen}(2005)}]{dawson2005solovay} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Dawson}}\ and\ \bibinfo {author} {\bibfnamefont {M.~A.}\ \bibnamefont {Nielsen}},\ }\bibfield {title} {\bibinfo {title} {{The Solovay-Kitaev algorithm}},\ }\href {https://dl.acm.org/doi/10.5555/2011679.2011685} {\bibfield {journal} {\bibinfo {journal} {Quant. Info. Comput}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kliuchnikov}\ \emph {et~al.}(2013)\citenamefont {Kliuchnikov}, \citenamefont {Maslov},\ and\ \citenamefont {Mosca}}]{kliuchnikov2012fast} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Kliuchnikov}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Maslov}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mosca}},\ }\bibfield {title} {\bibinfo {title} {Fast and efficient exact synthesis of single-qubit unitaries generated by {C}lifford and {T} gates},\ }\href {https://dl.acm.org/doi/10.5555/2535649.2535653} {\bibfield {journal} {\bibinfo {journal} {Quant. Info. Comput}\ }\textbf {\bibinfo {volume} {13}},\ \bibinfo {pages} {607–630} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ross}(2015)}]{ross2014optimal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Ross}},\ }\bibfield {title} {\bibinfo {title} {Optimal ancilla-free {C}lifford+{V} approximation of $z$-rotations},\ }\href {https://dl.acm.org/doi/10.5555/2535649.2535653} {\bibfield {journal} {\bibinfo {journal} {Quant. Info.
Comput}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {932–950} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barenco}\ \emph {et~al.}(1995)\citenamefont {Barenco}, \citenamefont {Bennett}, \citenamefont {Cleve}, \citenamefont {DiVincenzo}, \citenamefont {Margolus}, \citenamefont {Shor}, \citenamefont {Sleator}, \citenamefont {Smolin},\ and\ \citenamefont {Weinfurter}}]{barenco1995elementary} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Barenco}}, \bibinfo {author} {\bibfnamefont {C.~H.}\ \bibnamefont {Bennett}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Cleve}}, \bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {DiVincenzo}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Margolus}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shor}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Sleator}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Smolin}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Weinfurter}},\ }\bibfield {title} {\bibinfo {title} {Elementary gates for quantum computation},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.52.3457} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {3457} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {M{\"o}tt{\"o}nen}\ \emph {et~al.}(2004)\citenamefont {M{\"o}tt{\"o}nen}, \citenamefont {Vartiainen}, \citenamefont {Bergholm},\ and\ \citenamefont {Salomaa}}]{mottonen2004quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {M{\"o}tt{\"o}nen}}, \bibinfo {author} {\bibfnamefont {J.~J.}\ \bibnamefont {Vartiainen}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bergholm}},\ and\ \bibinfo {author} {\bibfnamefont {M.~M.}\ \bibnamefont {Salomaa}},\ }\bibfield {title} {\bibinfo {title} {Quantum circuits for general multiqubit gates},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.93.130502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {130502} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shende}\ \emph {et~al.}(2006)\citenamefont {Shende}, \citenamefont {Bullock},\ and\ \citenamefont {Markov}}]{shende2006synthesis} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Shende}}, \bibinfo {author} {\bibfnamefont {S.~S.}\ \bibnamefont {Bullock}},\ and\ \bibinfo {author} {\bibfnamefont {I.~L.}\ \bibnamefont {Markov}},\ }\bibfield {title} {\bibinfo {title} {Synthesis of quantum-logic circuits},\ }\href {https://ieeexplore.ieee.org/document/1629135/} {\bibfield {journal} {\bibinfo {journal} {Trans. IEEE}\ }\textbf {\bibinfo {volume} {25}},\ \bibinfo {pages} {1000} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Daskin}\ and\ \citenamefont {Kais}(2011)}]{daskin2011decomposition} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Daskin}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Kais}},\ }\bibfield {title} {\bibinfo {title} {Decomposition of unitary matrices for finding quantum circuits: application to molecular {H}amiltonians},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.3575402} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {134}},\ \bibinfo {pages} {144112} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}(1998)}]{gottesman1998heisenberg} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}},\ }\bibfield {title} {\bibinfo {title} {The {H}eisenberg representation of quantum computers},\ }\href {https://arxiv.org/abs/quant-ph/9807006} {\bibfield {journal} {\bibinfo {journal} {quant-ph/9807006}\ } (\bibinfo {year} {1998})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aaronson}\ and\ \citenamefont {Gottesman}(2004)}]{aaronson2004improved} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Aaronson}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}},\ }\bibfield {title} {\bibinfo {title} {Improved simulation of stabilizer circuits},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.70.052328} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {70}},\ \bibinfo {pages} {052328} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nest}(2008)}]{nest2008classical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Nest}},\ }\bibfield {title} {\bibinfo {title} {{Classical simulation of quantum computation, the Gottesman-Knill theorem, and slightly beyond}},\ }\href {https://dl.acm.org/doi/10.5555/2011350.2011356} {\bibfield {journal} {\bibinfo {journal} {Quant. Info. Comput}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {3} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bernstein}\ and\ \citenamefont {Vazirani}(1997)}]{bernstein1997quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Bernstein}}\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vazirani}},\ }\bibfield {title} {\bibinfo {title} {Quantum complexity theory},\ }\href {http://www.cs.berkeley.edu/~vazirani/bv.ps} {\bibfield {journal} {\bibinfo {journal} {SIAM J. Comput}\ }\textbf {\bibinfo {volume} {26}},\ \bibinfo {pages} {1411} (\bibinfo {year} {1997})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Watrous}(2009)}]{watrous2008quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Watrous}},\ }\bibinfo {title} {Quantum computational complexity},\ in\ \href {https://doi.org/10.1007/978-0-387-30440-3_428} {\emph {\bibinfo {booktitle} {Encyclopedia of Complexity and Systems Science}}},\ \bibinfo {editor} {edited by\ \bibinfo {editor} {\bibfnamefont {R.~A.}\ \bibnamefont {Meyers}}}\ (\bibinfo {publisher} {Springer New York},\ \bibinfo {address} {New York, NY},\ \bibinfo {year} {2009})\ pp.\ \bibinfo {pages} {7174--7201}\BibitemShut {NoStop} \bibitem [{\citenamefont {Feynman}(1986)}]{feynman1986quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~P.}\ \bibnamefont {Feynman}},\ }\bibfield {title} {\bibinfo {title} {Quantum mechanical computers},\ }\href {https://link.springer.com/article/10.1007/BF01886518} {\bibfield {journal} {\bibinfo {journal} {Found. 
Phys}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {507} (\bibinfo {year} {1986})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zalka}(1998)}]{zalka1998efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Zalka}},\ }\bibfield {title} {\bibinfo {title} {Efficient simulation of quantum systems by quantum computers},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/(SICI)1521-3978(199811)46:6/8} {\bibfield {journal} {\bibinfo {journal} {Progr. Phys}\ }\textbf {\bibinfo {volume} {46}},\ \bibinfo {pages} {877} (\bibinfo {year} {1998})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kempe}\ \emph {et~al.}(2006)\citenamefont {Kempe}, \citenamefont {Kitaev},\ and\ \citenamefont {Regev}}]{kempe2006complexity} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kitaev}},\ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Regev}},\ }\bibfield {title} {\bibinfo {title} {The complexity of the local {H}amiltonian problem},\ }\href {https://link.springer.com/chapter/10.1007/978-3-540-30538-5_31} {\bibfield {journal} {\bibinfo {journal} {SIAM J. Comput}\ }\textbf {\bibinfo {volume} {35}},\ \bibinfo {pages} {1070} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {{Born}}\ and\ \citenamefont {{Oppenheimer}}(1927)}]{Born_1927} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {{Born}}}\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {{Oppenheimer}}},\ }\bibfield {title} {\bibinfo {title} {{Zur Quantentheorie der Molekeln}},\ }\href {https://doi.org/10.1002/andp.19273892002} {\bibfield {journal} {\bibinfo {journal} {Ann. Phys}\ }\textbf {\bibinfo {volume} {389}},\ \bibinfo {pages} {457} (\bibinfo {year} {1927})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Szabo}\ and\ \citenamefont {Ostlund}(1989)}]{Szabo_book_1989} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Szabo}}\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Ostlund}},\ }\href@noop {} {\emph {\bibinfo {title} {Modern quantum chemistry: introduction to advanced electronic structure theory}}},\ Dover Books on Chemistry\ (\bibinfo {publisher} {Dover Publications},\ \bibinfo {year} {1989})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mizukami}\ \emph {et~al.}(2020)\citenamefont {Mizukami}, \citenamefont {Mitarai}, \citenamefont {Nakagawa}, \citenamefont {Yamamoto}, \citenamefont {Yan},\ and\ \citenamefont {Ohnishi}}]{mizukami2020orbital} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Mizukami}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {Y.~O.}\ \bibnamefont {Nakagawa}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yamamoto}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yan}},\ and\ \bibinfo {author} {\bibfnamefont {Y.-y.}\ \bibnamefont {Ohnishi}},\ }\bibfield {title} {\bibinfo {title} {Orbital optimized unitary coupled cluster theory for quantum computer},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033421} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Research}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {033421} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Takeshita}\ \emph {et~al.}(2020)\citenamefont {Takeshita}, \citenamefont {Rubin}, \citenamefont {Jiang}, \citenamefont {Lee}, \citenamefont {Babbush},\ and\ \citenamefont {McClean}}]{takeshita2020increasing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Takeshita}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Jiang}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}},\ and\ \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}},\ }\bibfield {title} {\bibinfo {title} {Increasing the representation accuracy of quantum simulations of chemistry without extra quantum resources},\ }\href {https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.011004} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {011004} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Motta}\ \emph {et~al.}(2020{\natexlab{a}})\citenamefont {Motta}, \citenamefont {Gujarati}, \citenamefont {Rice}, \citenamefont {Kumar}, \citenamefont {Masteran}, \citenamefont {Latone}, \citenamefont {Lee}, \citenamefont {Valeev},\ and\ \citenamefont {Takeshita}}]{motta2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Gujarati}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Rice}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kumar}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Masteran}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Latone}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {E.~F.}\ \bibnamefont {Valeev}},\ and\ \bibinfo {author} {\bibfnamefont {T.~Y.}\ \bibnamefont {Takeshita}},\ }\bibfield {title} {\bibinfo {title} {Quantum simulation of electronic structure with a transcorrelated {H}amiltonian: improved accuracy with a smaller footprint on the quantum computer},\ }\href {https://pubs.rsc.org/en/content/articlelanding/2020/cp/d0cp04106h#!divAbstract} {\bibfield {journal} {\bibinfo {journal} {Phys. Chem. Chem. Phys}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {24270} (\bibinfo {year} {2020}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Troyer}\ and\ \citenamefont {Wiese}(2005)}]{Troyer_PRL_2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}}\ and\ \bibinfo {author} {\bibfnamefont {U.~J.}\ \bibnamefont {Wiese}},\ }\bibfield {title} {\bibinfo {title} {Computational complexity and fundamental limitations to fermionic quantum {M}onte {C}arlo simulations},\ }\href {https://doi.org/10.1103/PhysRevLett.94.170201} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {170201} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schuch}\ and\ \citenamefont {Verstraete}(2009)}]{Schuch_NAT_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Schuch}}\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Verstraete}},\ }\bibfield {title} {\bibinfo {title} {Computational complexity of interacting electrons and fundamental limitations of density functional theory},\ }\href {https://doi.org/10.1038/nphys1370} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {732} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Paldus}\ and\ \citenamefont {Li}(1999)}]{Paldus_ACP_1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Paldus}}\ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Li}},\ }\bibfield {title} {\bibinfo {title} {A critical assessment of coupled cluster method in quantum {C}hemistry},\ }\href {https://doi.org/10.1002/9780470141694.ch1} {\bibfield {journal} {\bibinfo {journal} {Adv. Chem. Phys}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {1} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shavitt}\ and\ \citenamefont {Bartlett}(2009)}]{Shavitt_book_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Shavitt}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Bartlett}},\ }\href@noop {} {\emph {\bibinfo {title} {Many-body methods in chemistry and physics}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2009})\BibitemShut {NoStop} \bibitem [{\citenamefont {White}(1992)}]{White_PRL_1992} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont {White}},\ }\bibfield {title} {\bibinfo {title} {Density matrix formulation for quantum renormalization groups},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.69.2863} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {69}},\ \bibinfo {pages} {2863} (\bibinfo {year} {1992})}\BibitemShut {NoStop} \bibitem [{\citenamefont {White}\ and\ \citenamefont {Martin}(1999)}]{White_JCP_1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont {White}}\ and\ \bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont {Martin}},\ }\bibfield {title} {\bibinfo {title} {Ab initio quantum chemistry using the density matrix renormalization group},\ }\href {https://aip.scitation.org/doi/10.1063/1.478295} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {4127} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chan}\ and\ \citenamefont {Head-Gordon}(2002)}]{Chan_JCP_2002} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Head-Gordon}},\ }\bibfield {title} {\bibinfo {title} {Highly correlated calculations with a polynomial cost algorithm: a study of the density matrix renormalization group},\ }\href {https://aip.scitation.org/doi/10.1063/1.1449459} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {4462} (\bibinfo {year} {2002})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Olivares-Amaya}\ \emph {et~al.}(2015)\citenamefont {Olivares-Amaya}, \citenamefont {Hu}, \citenamefont {Nakatani}, \citenamefont {Sharma}, \citenamefont {Yang},\ and\ \citenamefont {Chan}}]{Olivares_JCP_2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Olivares-Amaya}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Hu}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Nakatani}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sharma}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Yang}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {The ab-initio density matrix renormalization group in practice},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.4905329?journalCode=jcp} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {142}},\ \bibinfo {pages} {034102} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Chan}\ \emph {et~al.}(2016)\citenamefont {Chan}, \citenamefont {Keselman}, \citenamefont {Nakatani}, \citenamefont {Li},\ and\ \citenamefont {White}}]{Chan_JCP_2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Keselman}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Nakatani}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Li}},\ and\ \bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont {White}},\ }\bibfield {title} {\bibinfo {title} {Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms},\ }\href {https://aip.scitation.org/doi/10.1063/1.4955108} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {145}},\ \bibinfo {pages} {014102} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Booth}\ \emph {et~al.}(2009)\citenamefont {Booth}, \citenamefont {Thom},\ and\ \citenamefont {Alavi}}]{Booth_JCP_2009} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~H.}\ \bibnamefont {Booth}}, \bibinfo {author} {\bibfnamefont {A.~J.~W.}\ \bibnamefont {Thom}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Alavi}},\ }\bibfield {title} {\bibinfo {title} {Fermion {M}onte {C}arlo without fixed nodes: a game of life, death, and annihilation in {S}later determinant space},\ }\href {https://doi.org/10.1063/1.3193710} {\bibfield {journal} {\bibinfo {journal} {J. Chem.
Phys}\ }\textbf {\bibinfo {volume} {131}},\ \bibinfo {pages} {054106} (\bibinfo {year} {2009})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Booth}\ \emph {et~al.}(2013)\citenamefont {Booth}, \citenamefont {Gruneis}, \citenamefont {Kresse},\ and\ \citenamefont {Alavi}}]{Booth_NAT_2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~H.}\ \bibnamefont {Booth}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Gruneis}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kresse}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Alavi}},\ }\bibfield {title} {\bibinfo {title} {Towards an exact description of electronic wavefunctions in real solids},\ }\href {https://doi.org/10.1038/nature11770} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {493}},\ \bibinfo {pages} {365} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Reynolds}\ \emph {et~al.}(1982)\citenamefont {Reynolds}, \citenamefont {Ceperley}, \citenamefont {Alder},\ and\ \citenamefont {Lester}}]{Reynolds_JCP_1982} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Reynolds}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ceperley}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Alder}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lester}},\ }\bibfield {title} {\bibinfo {title} {Fixed-node quantum {M}onte {C}arlo for molecules},\ }\href {https://aip.scitation.org/doi/10.1063/1.443766} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {77}},\ \bibinfo {pages} {5593} (\bibinfo {year} {1982})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Foulkes}\ \emph {et~al.}(2001)\citenamefont {Foulkes}, \citenamefont {Mitas}, \citenamefont {Needs},\ and\ \citenamefont {Rajagopal}}]{Foulkes_RMP_2001} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~M.~C.}\ \bibnamefont {Foulkes}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Mitas}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Needs}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Rajagopal}},\ }\bibfield {title} {\bibinfo {title} {Quantum {M}onte {C}arlo simulations of solids},\ }\href {https://doi.org/10.1103/RevModPhys.73.33} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {33} (\bibinfo {year} {2001})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Zhang}\ and\ \citenamefont {Krakauer}(2003)}]{Zhang_PRL90_2003} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}}\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Krakauer}},\ }\bibfield {title} {\bibinfo {title} {Quantum {M}onte {C}arlo method using phase-free random walks with {S}later determinants},\ }\href {https://doi.org/10.1103/PhysRevLett.90.136401} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev.
Lett}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {136401} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Motta}\ and\ \citenamefont {Zhang}(2018)}]{motta2018ab} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}},\ }\bibfield {title} {\bibinfo {title} {Ab initio computations of molecular systems by the auxiliary-field quantum {M}onte {C}arlo method},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/wcms.1364} {\bibfield {journal} {\bibinfo {journal} {WIREs Comput. Mol. Sci}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {e1364} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Motta}\ \emph {et~al.}(2017)\citenamefont {Motta}, \citenamefont {Ceperley}, \citenamefont {Chan}, \citenamefont {Gomez}, \citenamefont {Gull}, \citenamefont {Guo}, \citenamefont {Jim{\'e}nez-Hoyos}, \citenamefont {Lan}, \citenamefont {Li}, \citenamefont {Ma} \emph {et~al.}}]{motta2017towards} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {D.~M.}\ \bibnamefont {Ceperley}}, \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Gomez}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Gull}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Jim{\'e}nez-Hoyos}}, \bibinfo {author} {\bibfnamefont {T.~N.}\ \bibnamefont {Lan}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Ma}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Towards the solution of the many-electron problem in real materials: equation of state of the hydrogen chain with state-of-the-art many-body methods},\ }\href {https://journals.aps.org/prx/abstract/10.1103/PhysRevX.7.031059} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {031059} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Martin}(2004)}]{Martin_book_2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Martin}},\ }\href@noop {} {\emph {\bibinfo {title} {Electronic Structure: Basic theory and practical methods}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {2004})\BibitemShut {NoStop} \bibitem [{\citenamefont {Kohn}(1999)}]{Kohn_RMP_1999} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Kohn}},\ }\bibfield {title} {\bibinfo {title} {Nobel lecture: Electronic structure of matter: wave functions and density functionals},\ }\href {https://doi.org/10.1103/RevModPhys.71.1253} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {1253} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hammes-Schiffer}(2017)}]{hammes2017conundrum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hammes-Schiffer}},\ }\bibfield {title} {\bibinfo {title} {A conundrum for density functional theory},\ }\href {https://science.sciencemag.org/content/355/6320/28.summary?casa_token=IV8WK_B6pvcAAAAA:3GCz6VVwytstqdPYj02xuS_t6zXrcxEgTc9Y7rCouz0Szxz086X6pdk4i3OlymViFuvs7rYtnF-rQzk} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {355}},\ \bibinfo {pages} {28} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knizia}\ and\ \citenamefont {Chan}(2012)}]{Knizia-2012} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Knizia}}\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {Density matrix embedding: a simple alternative to dynamical mean-field theory},\ }\href {https://doi.org/10.1103/PhysRevLett.109.186404} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {186404} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Knizia}\ and\ \citenamefont {Chan}(2013)}]{Knizia-2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Knizia}}\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {Density matrix embedding: a strong-coupling quantum embedding theory},\ }\href {https://doi.org/10.1021/ct301044e} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1428} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wouters}\ \emph {et~al.}(2016)\citenamefont {Wouters}, \citenamefont {Jim\'enez-Hoyos}, \citenamefont {Sun},\ and\ \citenamefont {Chan}}]{Wouters-2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wouters}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Jim\'enez-Hoyos}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Sun}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {A practical guide to density matrix embedding theory in quantum chemistry},\ }\href {https://doi.org/10.1021/acs.jctc.6b00316} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {2706} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Georges}\ \emph {et~al.}(1996)\citenamefont {Georges}, \citenamefont {Kotliar}, \citenamefont {Krauth},\ and\ \citenamefont {Rozenberg}}]{Georges1996dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Georges}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Kotliar}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Krauth}},\ and\ \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Rozenberg}},\ }\bibfield {title} {\bibinfo {title} {Dynamical mean-field theory of strongly correlated fermion systems and the limit of infinite dimensions},\ }\href {https://doi.org/10.1103/RevModPhys.68.13} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys.}\ }\textbf {\bibinfo {volume} {68}},\ \bibinfo {pages} {13} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vollhardt}(2012)}]{vollhardt2012dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Vollhardt}},\ }\bibfield {title} {\bibinfo {title} {Dynamical mean-field theory for correlated electrons},\ }\href {https://www.sciencedirect.com/science/article/pii/S2214785319309022} {\bibfield {journal} {\bibinfo {journal} {Ann. Phys}\ }\textbf {\bibinfo {volume} {524}},\ \bibinfo {pages} {1} (\bibinfo {year} {2012})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hedin}(1965)}]{Hedin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Hedin}},\ }\bibfield {title} {\bibinfo {title} {New method for calculating the one-particle {G}reen's function with application to the electron-gas problem},\ }\href {http://link.aps.org/doi/10.1103/PhysRev.139.A796} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev}\ }\textbf {\bibinfo {volume} {139}},\ \bibinfo {pages} {A796} (\bibinfo {year} {1965})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Van~Houcke}\ \emph {et~al.}(2010)\citenamefont {Van~Houcke}, \citenamefont {Kozik}, \citenamefont {Prokof'ev},\ and\ \citenamefont {Svistunov}}]{VanHoucke2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Van~Houcke}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Kozik}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Prokof'ev}},\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Svistunov}},\ }\bibfield {title} {\bibinfo {title} {Diagrammatic {M}onte {C}arlo},\ }\href@noop {} {\bibfield {journal} {\bibinfo {journal} {Phys. Proc}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {95} (\bibinfo {year} {2010})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kulagin}\ \emph {et~al.}(2013)\citenamefont {Kulagin}, \citenamefont {Prokof'ev}, \citenamefont {Starykh}, \citenamefont {Svistunov},\ and\ \citenamefont {Varney}}]{Kulagin2013} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Kulagin}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Prokof'ev}}, \bibinfo {author} {\bibfnamefont {O.~A.}\ \bibnamefont {Starykh}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Svistunov}},\ and\ \bibinfo {author} {\bibfnamefont {C.~N.}\ \bibnamefont {Varney}},\ }\bibfield {title} {\bibinfo {title} {Bold diagrammatic {M}onte {C}arlo method applied to fermionized frustrated spins},\ }\href {https://doi.org/10.1103/PhysRevLett.110.070601} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {110}},\ \bibinfo {pages} {070601} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nguyen~Lan}\ \emph {et~al.}(2016)\citenamefont {Nguyen~Lan}, \citenamefont {Kananenka},\ and\ \citenamefont {Zgid}}]{nguyen2016rigorous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Nguyen~Lan}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Kananenka}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zgid}},\ }\bibfield {title} {\bibinfo {title} {Rigorous ab initio quantum embedding for quantum chemistry using {G}reen's function theory: Screened interaction, nonlocal self-energy relaxation, orbital basis, and chemical accuracy},\ }\href {http://dx.doi.org/10.1021/acs.jctc.6b00638} {\bibfield {journal} {\bibinfo {journal} {J. Chem.
Theory Comput}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {4856} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kananenka}\ \emph {et~al.}(2015)\citenamefont {Kananenka}, \citenamefont {Gull},\ and\ \citenamefont {Zgid}}]{AlexeiSEET} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Kananenka}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Gull}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zgid}},\ }\bibfield {title} {\bibinfo {title} {Systematically improvable multiscale solver for correlated electron systems},\ }\href {https://journals.aps.org/prb/abstract/10.1103/PhysRevB.91.121111} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. B}\ }\textbf {\bibinfo {volume} {91}},\ \bibinfo {pages} {121111} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lan}\ \emph {et~al.}(2015)\citenamefont {Lan}, \citenamefont {Kananenka},\ and\ \citenamefont {Zgid}}]{LAN} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~N.}\ \bibnamefont {Lan}}, \bibinfo {author} {\bibfnamefont {A.~A.}\ \bibnamefont {Kananenka}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zgid}},\ }\bibfield {title} {\bibinfo {title} {Communication: towards ab initio self-energy embedding theory in quantum chemistry},\ }\href {https://aip.scitation.org/doi/10.1063/1.4938562} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {143}},\ \bibinfo {pages} {241102} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Williams}\ \emph {et~al.}(2020)\citenamefont {Williams}, \citenamefont {Yao}, \citenamefont {Li}, \citenamefont {Chen}, \citenamefont {Shi}, \citenamefont {Motta}, \citenamefont {Niu}, \citenamefont {Ray}, \citenamefont {Guo}, \citenamefont {Anderson} \emph {et~al.}}]{williams2020direct} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~T.}\ \bibnamefont {Williams}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yao}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Shi}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Niu}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Ray}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Anderson}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Direct comparison of many-body methods for realistic electronic {H}amiltonians},\ }\href {https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.011041} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {011041} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Al-Saidi}\ \emph {et~al.}(2006)\citenamefont {Al-Saidi}, \citenamefont {Krakauer},\ and\ \citenamefont {Zhang}}]{AlSaidi_PRB73_2006} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {Al-Saidi}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Krakauer}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}},\ }\bibfield {title} {\bibinfo {title} {Auxiliary-field quantum {M}onte {C}arlo study of {T}i{O} and {M}n{O} molecules},\ }\href {https://doi.org/10.1103/PhysRevB.73.075103} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {73}},\ \bibinfo {pages} {075103} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Purwanto}\ \emph {et~al.}(2015)\citenamefont {Purwanto}, \citenamefont {Zhang},\ and\ \citenamefont {Krakauer}}]{Purwanto_JCP142_2015} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Purwanto}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Krakauer}},\ }\bibfield {title} {\bibinfo {title} {An auxiliary-field quantum {M}onte {C}arlo study of the chromium dimer},\ }\href {https://doi.org/10.1063/1.4906829} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {142}},\ \bibinfo {pages} {064302} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Purwanto}\ \emph {et~al.}(2016)\citenamefont {Purwanto}, \citenamefont {Zhang},\ and\ \citenamefont {Krakauer}}]{Purwanto_JCP144_2016} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Purwanto}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Krakauer}},\ }\bibfield {title} {\bibinfo {title} {Auxiliary-field quantum {M}onte {C}arlo calculations of the molybdenum dimer},\ }\href {https://doi.org/10.1063/1.4954245} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {144}},\ \bibinfo {pages} {244306} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shee}\ \emph {et~al.}(2019)\citenamefont {Shee}, \citenamefont {Rudshteyn}, \citenamefont {Arthur}, \citenamefont {Zhang}, \citenamefont {Reichman},\ and\ \citenamefont {Friesner}}]{shee2019achieving} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Shee}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Rudshteyn}}, \bibinfo {author} {\bibfnamefont {E.~J.}\ \bibnamefont {Arthur}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {D.~R.}\ \bibnamefont {Reichman}},\ and\ \bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont {Friesner}},\ }\bibfield {title} {\bibinfo {title} {On achieving high accuracy in quantum chemical calculations of 3d transition metal-containing systems: a comparison of auxiliary-field quantum {M}onte {C}arlo with coupled cluster, density functional theory, and experiment for diatomic molecules},\ }\href {https://doi.org/10.1021/acs.jctc.9b00083} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {2346} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Shee}\ \emph {et~al.}(2021)\citenamefont {Shee}, \citenamefont {Loipersberger}, \citenamefont {Hait}, \citenamefont {Lee},\ and\ \citenamefont {Head-Gordon}}]{shee2021revealing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Shee}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Loipersberger}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hait}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lee}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Head-Gordon}},\ }\bibfield {title} {\bibinfo {title} {Revealing the nature of electron correlation in transition metal complexes with symmetry breaking and chemical intuition},\ }\href {https://aip.scitation.org/doi/10.1063/5.0047386} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {154}},\ \bibinfo {pages} {194109} (\bibinfo {year} {2021})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Rossi}\ \emph {et~al.}(1999)\citenamefont {Rossi}, \citenamefont {Bendazzoli}, \citenamefont {Evangelisti},\ and\ \citenamefont {Maynau}}]{rossi1999full} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Rossi}}, \bibinfo {author} {\bibfnamefont {G.~L.}\ \bibnamefont {Bendazzoli}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Evangelisti}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Maynau}},\ }\bibfield {title} {\bibinfo {title} {A full-configuration benchmark for the {N}$_2$ molecule},\ }\href {https://www.sciencedirect.com/science/article/pii/S0009261499007915} {\bibfield {journal} {\bibinfo {journal} {Chem. Phys. Lett}\ }\textbf {\bibinfo {volume} {310}},\ \bibinfo {pages} {530} (\bibinfo {year} {1999})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vogiatzis}\ \emph {et~al.}(2017)\citenamefont {Vogiatzis}, \citenamefont {Ma}, \citenamefont {Olsen}, \citenamefont {Gagliardi},\ and\ \citenamefont {De~Jong}}]{vogiatzis2017pushing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~D.}\ \bibnamefont {Vogiatzis}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Olsen}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gagliardi}},\ and\ \bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {De~Jong}},\ }\bibfield {title} {\bibinfo {title} {Pushing configuration-interaction to the limit: Towards massively parallel {MCSCF} calculations},\ }\href {https://aip.scitation.org/doi/10.1063/1.4989858} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {147}},\ \bibinfo {pages} {184111} (\bibinfo {year} {2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Solomon}\ \emph {et~al.}(1996)\citenamefont {Solomon}, \citenamefont {Sundaram},\ and\ \citenamefont {Machonkin}}]{solomon1996multicopper} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~I.}\ \bibnamefont {Solomon}}, \bibinfo {author} {\bibfnamefont {U.~M.}\ \bibnamefont {Sundaram}},\ and\ \bibinfo {author} {\bibfnamefont {T.~E.}\ \bibnamefont {Machonkin}},\ }\bibfield {title} {\bibinfo {title} {Multicopper oxidases and oxygenases},\ }\href {https://pubs.acs.org/doi/10.1021/cr950046o} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {96}},\ \bibinfo {pages} {2563} (\bibinfo {year} {1996})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Kitajima}\ and\ \citenamefont {Moro-oka}(1994)}]{kitajima1994copper} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Kitajima}}\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Moro-oka}},\ }\bibfield {title} {\bibinfo {title} {Copper-dioxygen complexes. {I}norganic and bioinorganic perspectives},\ }\href {https://pubs.acs.org/doi/10.1021/cr00027a010} {\bibfield {journal} {\bibinfo {journal} {Chem.
Rev}\ }\textbf {\bibinfo {volume} {94}},\ \bibinfo {pages} {737} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Samanta}\ \emph {et~al.}(2012)\citenamefont {Samanta}, \citenamefont {Jim\'enez-Hoyos},\ and\ \citenamefont {Scuseria}}]{samanta2012exploring} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Samanta}}, \bibinfo {author} {\bibfnamefont {C.~A.}\ \bibnamefont {Jim\'enez-Hoyos}},\ and\ \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont {Scuseria}},\ }\bibfield {title} {\bibinfo {title} {Exploring copper oxide cores using the projected {H}artree-{F}ock method},\ }\href {https://pubs.acs.org/doi/full/10.1021/ct300689e} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {4944} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gerdemann}\ \emph {et~al.}(2002)\citenamefont {Gerdemann}, \citenamefont {Eicken},\ and\ \citenamefont {Krebs}}]{gerdemann2002crystal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gerdemann}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Eicken}},\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Krebs}},\ }\bibfield {title} {\bibinfo {title} {The crystal structure of catechol oxidase: new insight into the function of type-3 copper proteins},\ }\href {https://pubs.acs.org/doi/10.1021/ar990019a} {\bibfield {journal} {\bibinfo {journal} {Acc. Chem. Res}\ }\textbf {\bibinfo {volume} {35}},\ \bibinfo {pages} {183} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sharma}\ \emph {et~al.}(2014)\citenamefont {Sharma}, \citenamefont {Sivalingam}, \citenamefont {Neese},\ and\ \citenamefont {Chan}}]{sharma2014low} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sharma}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Sivalingam}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Neese}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {Low-energy spectrum of iron--sulfur clusters directly from many-particle quantum mechanics},\ }\href {https://www.nature.com/articles/nchem.2041} {\bibfield {journal} {\bibinfo {journal} {Nat. Chem}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {927} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kurashige}\ \emph {et~al.}(2013)\citenamefont {Kurashige}, \citenamefont {Chan},\ and\ \citenamefont {Yanai}}]{kurashige2013entangled} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Kurashige}}, \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yanai}},\ }\bibfield {title} {\bibinfo {title} {Entangled quantum electronic wavefunctions of the {M}n$_4${C}a{O}$_5$ cluster in photosystem {II}},\ }\href {https://www.nature.com/articles/nchem.1677} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Chem}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {660} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Li}\ \emph {et~al.}(2019{\natexlab{a}})\citenamefont {Li}, \citenamefont {Guo}, \citenamefont {Sun},\ and\ \citenamefont {Chan}}]{li2019electronic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Sun}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {Electronic landscape of the {P}-cluster of nitrogenase as revealed through many-electron quantum wavefunction simulations},\ }\href {https://www.nature.com/articles/s41557-019-0337-3?draft=marketing} {\bibfield {journal} {\bibinfo {journal} {Nat. Chem}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {1026–1033} (\bibinfo {year} {2019}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Li}\ \emph {et~al.}(2019{\natexlab{b}})\citenamefont {Li}, \citenamefont {Li}, \citenamefont {Dattani}, \citenamefont {Umrigar},\ and\ \citenamefont {Chan}}]{li2019electronic2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {N.~S.}\ \bibnamefont {Dattani}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Umrigar}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {The electronic complexity of the ground-state of the {FeMo} cofactor of nitrogenase as relevant to quantum simulations},\ }\href {https://aip.scitation.org/doi/10.1063/1.5063376} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {150}},\ \bibinfo {pages} {024302} (\bibinfo {year} {2019}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chilkuri}\ \emph {et~al.}(2020)\citenamefont {Chilkuri}, \citenamefont {DeBeer},\ and\ \citenamefont {Neese}}]{chilkuri2019ligand} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.~G.}\ \bibnamefont {Chilkuri}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {DeBeer}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Neese}},\ }\bibfield {title} {\bibinfo {title} {Ligand field theory and angular overlap model based analysis of the electronic structure of homovalent iron-sulfur dimers},\ }\href {https://pubs.acs.org/doi/10.1021/acs.inorgchem.9b00974} {\bibfield {journal} {\bibinfo {journal} {Inorg. Chem}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {984} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cao}\ \emph {et~al.}(2018)\citenamefont {Cao}, \citenamefont {Caldararu},\ and\ \citenamefont {Ryde}}]{cao2018protonation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Caldararu}},\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Ryde}},\ }\bibfield {title} {\bibinfo {title} {Protonation and reduction of the {FeMo} cluster in nitrogenase studied by quantum mechanics/molecular mechanics ({QM/MM}) calculations},\ }\href {https://pubs.acs.org/doi/abs/10.1021/acs.jctc.8b00778} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Theory Comput}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {6653} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Damascelli}\ \emph {et~al.}(2003)\citenamefont {Damascelli}, \citenamefont {Hussain},\ and\ \citenamefont {Shen}}]{damascelli2003angle} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Damascelli}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Hussain}},\ and\ \bibinfo {author} {\bibfnamefont {Z.-X.}\ \bibnamefont {Shen}},\ }\bibfield {title} {\bibinfo {title} {Angle-resolved photoemission studies of the cuprate superconductors},\ }\href {https://doi.org/10.1103/RevModPhys.75.473} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys.}\ }\textbf {\bibinfo {volume} {75}},\ \bibinfo {pages} {473} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Damascelli}(2004)}]{Damascelli_2004} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Damascelli}},\ }\bibfield {title} {\bibinfo {title} {Probing the electronic structure of complex systems by {ARPES}},\ }\href {https://doi.org/10.1238/physica.topical.109a00061} {\bibfield {journal} {\bibinfo {journal} {Phys. Scripta}\ }\textbf {\bibinfo {volume} {T109}},\ \bibinfo {pages} {61} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Onida}\ \emph {et~al.}(2002)\citenamefont {Onida}, \citenamefont {Reining},\ and\ \citenamefont {Rubio}}]{onida2002electronic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Onida}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Reining}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rubio}},\ }\bibfield {title} {\bibinfo {title} {Electronic excitations: density-functional versus many-body {G}reen’s-function approaches},\ }\href {https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.74.601} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {74}},\ \bibinfo {pages} {601} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stanton}\ and\ \citenamefont {Bartlett}(1993)}]{stanton1993equation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Stanton}}\ and\ \bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Bartlett}},\ }\bibfield {title} {\bibinfo {title} {The equation of motion coupled-cluster method. a systematic biorthogonal approach to molecular excitation energies, transition probabilities, and excited state properties},\ }\href {https://aip.scitation.org/doi/10.1063/1.464746} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {7029} (\bibinfo {year} {1993})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fetter}\ and\ \citenamefont {Walecka}(2012)}]{fetter2012quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~L.}\ \bibnamefont {Fetter}}\ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Walecka}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum theory of many-particle systems}}}\ (\bibinfo {publisher} {Courier Corporation},\ \bibinfo {year} {2012})\BibitemShut {NoStop} \bibitem [{\citenamefont {Barone}\ \emph {et~al.}(2015)\citenamefont {Barone}, \citenamefont {Biczysko},\ and\ \citenamefont {Puzzarini}}]{barone2015quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Barone}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Biczysko}},\ and\ \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Puzzarini}},\ }\bibfield {title} {\bibinfo {title} {Quantum chemistry meets spectroscopy for astrochemistry: increasing complexity toward prebiotic molecules},\ }\href {https://pubs.acs.org/doi/abs/10.1021/ar5003285} {\bibfield {journal} {\bibinfo {journal} {Acc. Chem. Res}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {1413} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bowman}\ \emph {et~al.}(2008)\citenamefont {Bowman}, \citenamefont {Carrington},\ and\ \citenamefont {Meyer}}]{bowman2008variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Bowman}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Carrington}},\ and\ \bibinfo {author} {\bibfnamefont {H.-D.}\ \bibnamefont {Meyer}},\ }\bibfield {title} {\bibinfo {title} {Variational quantum approaches for computing vibrational energies of polyatomic molecules},\ }\href {https://doi.org/10.1080/00268970802258609} {\bibfield {journal} {\bibinfo {journal} {Mol. Phys}\ }\textbf {\bibinfo {volume} {106}},\ \bibinfo {pages} {2145} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(1995)\citenamefont {Lee}, \citenamefont {Martin},\ and\ \citenamefont {Taylor}}]{X3} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martin}},\ and\ \bibinfo {author} {\bibfnamefont {P.~R.}\ \bibnamefont {Taylor}},\ }\bibfield {title} {\bibinfo {title} {An accurate ab initio quartic force field and vibrational frequencies for {CH}$_4$ and isotopomers},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.469398} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {102}},\ \bibinfo {pages} {254} (\bibinfo {year} {1995})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Fortenberry}\ and\ \citenamefont {Lee}(2019)}]{X1} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Fortenberry}}\ and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Lee}},\ }\bibfield {title} {\bibinfo {title} {Computational vibrational spectroscopy for the detection of molecules in space},\ }in\ \href {https://www.sciencedirect.com/science/article/pii/S1574140019300064} {\emph {\bibinfo {booktitle} {Annu. Rep. Comput. 
Chem}}},\ Vol.~\bibinfo {volume} {15}\ (\bibinfo {publisher} {Elsevier},\ \bibinfo {year} {2019})\ pp.\ \bibinfo {pages} {173--202}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huang}\ \emph {et~al.}(2021)\citenamefont {Huang}, \citenamefont {Schwenke},\ and\ \citenamefont {Lee}}]{X2} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Huang}}, \bibinfo {author} {\bibfnamefont {D.~W.}\ \bibnamefont {Schwenke}},\ and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Lee}},\ }\bibfield {title} {\bibinfo {title} {What it takes to compute highly accurate rovibrational line lists for use in astrochemistry},\ }\href {https://pubs.acs.org/doi/abs/10.1021/acs.accounts.0c00624} {\bibfield {journal} {\bibinfo {journal} {Acc. Chem. Res}\ }\textbf {\bibinfo {volume} {54}},\ \bibinfo {pages} {1311} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sadri}\ \emph {et~al.}(2012)\citenamefont {Sadri}, \citenamefont {Lauvergnat}, \citenamefont {Gatti},\ and\ \citenamefont {Meyer}}]{sadri2012numeric} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Sadri}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lauvergnat}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Gatti}},\ and\ \bibinfo {author} {\bibfnamefont {H.-D.}\ \bibnamefont {Meyer}},\ }\bibfield {title} {\bibinfo {title} {Numeric kinetic energy operators for molecules in polyspherical coordinates},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.4729536?casa_token=mldUCRFRO8IAAAAA:prXsiGNh_Gh2pI6YckJ8GxFRM3hKou5Hct_zE--DreLJn0w2zgtZK5tGTCIMNiPGBqNZYcw24VuV} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {136}},\ \bibinfo {pages} {234112} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tew}\ \emph {et~al.}(2003)\citenamefont {Tew}, \citenamefont {Handy}, \citenamefont {Carter}, \citenamefont {Irle},\ and\ \citenamefont {Bowman}}]{tew2003internal} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {Tew}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Handy}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Carter}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Irle}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bowman}},\ }\bibfield {title} {\bibinfo {title} {The internal coordinate path {H}amiltonian; application to methanol and malonaldehyde},\ }\href {https://www.tandfonline.com/doi/abs/10.1080/0026897042000178079} {\bibfield {journal} {\bibinfo {journal} {Mol. Phys}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {3513} (\bibinfo {year} {2003})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Viel}\ \emph {et~al.}(2007)\citenamefont {Viel}, \citenamefont {Coutinho-Neto},\ and\ \citenamefont {Manthe}}]{viel2017zeropoint} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Viel}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Coutinho-Neto}},\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Manthe}},\ }\bibfield {title} {\bibinfo {title} {The ground state tunneling splitting and the zero point energy of malonaldehyde: a quantum {M}onte {C}arlo determination},\ }\href {https://doi.org/10.1063/1.2406074} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys.}\ }\textbf {\bibinfo {volume} {126}},\ \bibinfo {pages} {024308} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beck}\ \emph {et~al.}(2000)\citenamefont {Beck}, \citenamefont {Jäckle}, \citenamefont {Worth},\ and\ \citenamefont {Meyer}}]{beck2001multiconfiguration} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Beck}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Jäckle}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Worth}},\ and\ \bibinfo {author} {\bibfnamefont {H.-D.}\ \bibnamefont {Meyer}},\ }\bibfield {title} {\bibinfo {title} {The multiconfiguration time-dependent {H}artree ({MCTDH}) method: a highly efficient algorithm for propagating wavepackets},\ }\href {https://doi.org/https://doi.org/10.1016/S0370-1573(99)00047-2} {\bibfield {journal} {\bibinfo {journal} {Phys. Rep}\ }\textbf {\bibinfo {volume} {324}},\ \bibinfo {pages} {1 } (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Vaida}(2008)}]{vaida2008spectroscopy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Vaida}},\ }\bibfield {title} {\bibinfo {title} {Spectroscopy of photoreactive systems: Implications for atmospheric chemistry},\ }\href {https://pubs.acs.org/doi/10.1021/jp806365r} {\bibfield {journal} {\bibinfo {journal} {J. Phys. Chem. A}\ }\textbf {\bibinfo {volume} {113}},\ \bibinfo {pages} {5} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Heiter}\ \emph {et~al.}(2015)\citenamefont {Heiter}, \citenamefont {Lind}, \citenamefont {Asplund}, \citenamefont {Barklem}, \citenamefont {Bergemann}, \citenamefont {Magrini}, \citenamefont {Masseron}, \citenamefont {Mikolaitis}, \citenamefont {Pickering},\ and\ \citenamefont {Ruffoni}}]{heiter2015atomic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Heiter}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Lind}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Asplund}}, \bibinfo {author} {\bibfnamefont {P.~S.}\ \bibnamefont {Barklem}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Bergemann}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Magrini}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Masseron}}, \bibinfo {author} {\bibfnamefont {{\v{S}}.}~\bibnamefont {Mikolaitis}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Pickering}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ruffoni}},\ }\bibfield {title} {\bibinfo {title} {Atomic and molecular data for optical stellar spectroscopy},\ }\href {https://iopscience.iop.org/article/10.1088/0031-8949/90/5/054010} {\bibfield {journal} {\bibinfo {journal} {Phys. Scripta}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {054010} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Smith}(1988)}]{smith1988formation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Smith}},\ }\bibfield {title} {\bibinfo {title} {Formation and destruction of molecular ions in interstellar clouds},\ }\href {https://royalsocietypublishing.org/doi/pdf/10.1098/rsta.1988.0016} {\bibfield {journal} {\bibinfo {journal} {Proc. Roy. Soc. London A, Math. Phys. 
Sci}\ }\textbf {\bibinfo {volume} {324}},\ \bibinfo {pages} {257} (\bibinfo {year} {1988})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schilke}\ \emph {et~al.}(2001)\citenamefont {Schilke}, \citenamefont {Benford}, \citenamefont {Hunter}, \citenamefont {Lis},\ and\ \citenamefont {Phillips}}]{schilke2001line} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Schilke}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Benford}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Hunter}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lis}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Phillips}},\ }\bibfield {title} {\bibinfo {title} {A line survey of {O}rion-{KL} from 607 to 725 {GH}z},\ }\href {https://iopscience.iop.org/article/10.1086/318951} {\bibfield {journal} {\bibinfo {journal} {Astrophys. J., Suppl. Ser}\ }\textbf {\bibinfo {volume} {132}},\ \bibinfo {pages} {281} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ag{\'u}ndez}\ \emph {et~al.}(2008)\citenamefont {Ag{\'u}ndez}, \citenamefont {Cernicharo}, \citenamefont {Pardo}, \citenamefont {Gu{\'e}lin},\ and\ \citenamefont {Phillips}}]{agundez2008tentative} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ag{\'u}ndez}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cernicharo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Pardo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gu{\'e}lin}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Phillips}},\ }\bibfield {title} {\bibinfo {title} {Tentative detection of phosphine in {IRC}+ 10216},\ }\href {https://www.aanda.org/articles/aa/abs/2008/27/aa10193-08/aa10193-08.html} {\bibfield {journal} {\bibinfo {journal} {Astron. Astrophys}\ }\textbf {\bibinfo {volume} {485}},\ \bibinfo {pages} {L33} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ag{\'u}ndez}\ \emph {et~al.}(2014)\citenamefont {Ag{\'u}ndez}, \citenamefont {Cernicharo},\ and\ \citenamefont {Gu{\'e}lin}}]{agundez2014new} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ag{\'u}ndez}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Cernicharo}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Gu{\'e}lin}},\ }\bibfield {title} {\bibinfo {title} {New molecules in {IRC}+ 10216: confirmation of {C}$_5${S} and tentative identification of {M}g{CCH}, {NCCP}, and {S}i{H}$_3${CN}},\ }\href {https://www.aanda.org/articles/aa/abs/2014/10/aa24542-14/aa24542-14.html} {\bibfield {journal} {\bibinfo {journal} {Astron. Astrophys}\ }\textbf {\bibinfo {volume} {570}},\ \bibinfo {pages} {A45} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Pople}(1999)}]{pople1999nobel} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Pople}},\ }\bibfield {title} {\bibinfo {title} {Nobel lecture: Quantum chemical models},\ }\href {https://journals.aps.org/rmp/abstract/10.1103/RevModPhys.71.1267} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. 
Phys}\ }\textbf {\bibinfo {volume} {71}},\ \bibinfo {pages} {1267} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hartshorn}(1973)}]{hartshorn1973aliphatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~R.}\ \bibnamefont {Hartshorn}},\ }\href@noop {} {\emph {\bibinfo {title} {Aliphatic nucleophilic substitution}}}\ (\bibinfo {publisher} {Cambridge University Press},\ \bibinfo {year} {1973})\BibitemShut {NoStop} \bibitem [{\citenamefont {Wade}(2013)}]{wade2006organic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.~G.}\ \bibnamefont {Wade}},\ }\href@noop {} {\emph {\bibinfo {title} {Organic chemistry}}}\ (\bibinfo {publisher} {Pearson},\ \bibinfo {year} {2013})\BibitemShut {NoStop} \bibitem [{\citenamefont {Cramer}\ and\ \citenamefont {Truhlar}(1999)}]{cramer1999implicit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~J.}\ \bibnamefont {Cramer}}\ and\ \bibinfo {author} {\bibfnamefont {D.~G.}\ \bibnamefont {Truhlar}},\ }\bibfield {title} {\bibinfo {title} {Implicit solvation models: equilibria, structure, spectra, and dynamics},\ }\href {https://pubs.acs.org/doi/10.1021/cr960149m} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {2161} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Klamt}(2011)}]{klamt2011cosmo} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Klamt}},\ }\bibfield {title} {\bibinfo {title} {The {COSMO and COSMO-RS} solvation models},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/wcms.56} {\bibfield {journal} {\bibinfo {journal} {WIREs Comput. Mol. Sci}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {699} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mennucci}(2012)}]{mennucci2012polarizable} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Mennucci}},\ }\bibfield {title} {\bibinfo {title} {Polarizable continuum model},\ }\href {https://pubs.acs.org/doi/10.1021/jp020124t} {\bibfield {journal} {\bibinfo {journal} {WIREs Comput. Mol. Sci}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {386} (\bibinfo {year} {2012})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Senn}\ and\ \citenamefont {Thiel}(2009)}]{senn2009qm} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~M.}\ \bibnamefont {Senn}}\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Thiel}},\ }\bibfield {title} {\bibinfo {title} {{QM/MM} methods for biomolecular systems},\ }\href {https://pubmed.ncbi.nlm.nih.gov/19173328/} {\bibfield {journal} {\bibinfo {journal} {Angew. Chem., Int. Ed. 
Engl}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {1198} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bernstein}(2020)}]{bernstein2020polymorphism} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Bernstein}},\ }\href@noop {} {\emph {\bibinfo {title} {Polymorphism in Molecular Crystals}}}\ (\bibinfo {publisher} {International Union of Crystallography},\ \bibinfo {year} {2020})\BibitemShut {NoStop} \bibitem [{\citenamefont {Price}\ \emph {et~al.}(2016)\citenamefont {Price}, \citenamefont {Braun},\ and\ \citenamefont {Reutzel-Edens}}]{price2016can} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Price}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Braun}},\ and\ \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Reutzel-Edens}},\ }\bibfield {title} {\bibinfo {title} {Can computed crystal energy landscapes help understand pharmaceutical solids?},\ }\href {https://pubs.rsc.org/en/content/articlelanding/2016/cc/c6cc00721j#!divAbstract} {\bibfield {journal} {\bibinfo {journal} {Chem. Comm}\ }\textbf {\bibinfo {volume} {52}},\ \bibinfo {pages} {7065} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Day}\ \emph {et~al.}(2007)\citenamefont {Day}, \citenamefont {Motherwell},\ and\ \citenamefont {Jones}}]{day2007strategy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~M.}\ \bibnamefont {Day}}, \bibinfo {author} {\bibfnamefont {W.~S.}\ \bibnamefont {Motherwell}},\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Jones}},\ }\bibfield {title} {\bibinfo {title} {A strategy for predicting the crystal structures of flexible molecules: the polymorphism of phenobarbital},\ }\href {https://pubs.rsc.org/en/content/articlelanding/2007/cp/b612190j#!divAbstract} {\bibfield {journal} {\bibinfo {journal} {Phys. Chem. Chem. Phys}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1693} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lombardo}\ \emph {et~al.}(2017)\citenamefont {Lombardo}, \citenamefont {Desai}, \citenamefont {Arimoto}, \citenamefont {Desino}, \citenamefont {Fischer}, \citenamefont {Keefer}, \citenamefont {Petersson}, \citenamefont {Winiwarter},\ and\ \citenamefont {Broccatelli}}]{lombardo2017silico} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Lombardo}}, \bibinfo {author} {\bibfnamefont {P.~V.}\ \bibnamefont {Desai}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Arimoto}}, \bibinfo {author} {\bibfnamefont {K.~E.}\ \bibnamefont {Desino}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Fischer}}, \bibinfo {author} {\bibfnamefont {C.~E.}\ \bibnamefont {Keefer}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Petersson}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Winiwarter}},\ and\ \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Broccatelli}},\ }\bibfield {title} {\bibinfo {title} {In silico absorption, distribution, metabolism, excretion, and pharmacokinetics ({ADME-PK}): utility and best practices},\ }\href {https://pubs.acs.org/doi/abs/10.1021/acs.jmedchem.7b00487} {\bibfield {journal} {\bibinfo {journal} {J. Med. 
Chem}\ }\textbf {\bibinfo {volume} {60}},\ \bibinfo {pages} {9097} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kamat}\ \emph {et~al.}(2020)\citenamefont {Kamat}, \citenamefont {Guo}, \citenamefont {Reutzel-Edens}, \citenamefont {Price},\ and\ \citenamefont {Peters}}]{kamat2020diabat} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kamat}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Guo}}, \bibinfo {author} {\bibfnamefont {S.~M.}\ \bibnamefont {Reutzel-Edens}}, \bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont {Price}},\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Peters}},\ }\bibfield {title} {\bibinfo {title} {Diabat method for polymorph free energies: Extension to molecular crystals},\ }\href {https://aip.scitation.org/doi/10.1063/5.0024727} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {153}},\ \bibinfo {pages} {244105} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Beran}(2016)}]{beran2016modeling} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~J.}\ \bibnamefont {Beran}},\ }\bibfield {title} {\bibinfo {title} {Modeling polymorphic molecular crystals with electronic structure theory},\ }\href {https://pubs.acs.org/doi/10.1021/acs.chemrev.5b00648} {\bibfield {journal} {\bibinfo {journal} {Chem. Rev}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {5567} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wu}\ and\ \citenamefont {Lidar}(2002)}]{wu2002qubits} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.-A.}\ \bibnamefont {Wu}}\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lidar}},\ }\bibfield {title} {\bibinfo {title} {Qubits as parafermions},\ }\href {https://aip.scitation.org/doi/10.1063/1.1499208} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys}\ }\textbf {\bibinfo {volume} {43}},\ \bibinfo {pages} {4506} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Batista}\ and\ \citenamefont {Ortiz}(2004)}]{batista2004algebraic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~D.}\ \bibnamefont {Batista}}\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ortiz}},\ }\bibfield {title} {\bibinfo {title} {Algebraic approach to interacting quantum systems},\ }\href {http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=9813919D5A55FCF21E625A112327FC7E?doi=10.1.1.130.6443&rep=rep1&type=pdf} {\bibfield {journal} {\bibinfo {journal} {Adv. Phys}\ }\textbf {\bibinfo {volume} {53}},\ \bibinfo {pages} {1} (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ and\ \citenamefont {Kitaev}(2002)}]{bravyi2002fermionic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~B.}\ \bibnamefont {Bravyi}}\ and\ \bibinfo {author} {\bibfnamefont {A.~Y.}\ \bibnamefont {Kitaev}},\ }\bibfield {title} {\bibinfo {title} {Fermionic quantum computation},\ }\href {https://www.sciencedirect.com/science/article/abs/pii/S0003491602962548} {\bibfield {journal} {\bibinfo {journal} {Ann. 
Phys}\ }\textbf {\bibinfo {volume} {298}},\ \bibinfo {pages} {210} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jordan}\ and\ \citenamefont {Wigner}(1993)}]{jordan1993paulische} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Jordan}}\ and\ \bibinfo {author} {\bibfnamefont {E.~P.}\ \bibnamefont {Wigner}},\ }\bibfield {title} {\bibinfo {title} {{\"U}ber das paulische {\"a}quivalenzverbot},\ }in\ \href@noop {} {\emph {\bibinfo {booktitle} {The Collected Works of Eugene Paul Wigner}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {1993})\ pp.\ \bibinfo {pages} {109--129}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ortiz}\ \emph {et~al.}(2001)\citenamefont {Ortiz}, \citenamefont {Gubernatis}, \citenamefont {Knill},\ and\ \citenamefont {Laflamme}}]{ortiz2001quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ortiz}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Gubernatis}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laflamme}},\ }\bibfield {title} {\bibinfo {title} {Quantum algorithms for fermionic simulations},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.64.022319} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {64}},\ \bibinfo {pages} {022319} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}\ \emph {et~al.}(2002)\citenamefont {Somma}, \citenamefont {Ortiz}, \citenamefont {Gubernatis}, \citenamefont {Knill},\ and\ \citenamefont {Laflamme}}]{somma2002simulating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Somma}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Ortiz}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Gubernatis}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Knill}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Laflamme}},\ }\bibfield {title} {\bibinfo {title} {Simulating physical phenomena by quantum networks},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.65.042323} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {042323} (\bibinfo {year} {2002})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aspuru-Guzik}\ \emph {et~al.}(2005)\citenamefont {Aspuru-Guzik}, \citenamefont {Dutoi}, \citenamefont {Love},\ and\ \citenamefont {Head-Gordon}}]{aspuru2005simulated} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {Dutoi}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Head-Gordon}},\ }\bibfield {title} {\bibinfo {title} {Simulated quantum computation of molecular energies},\ }\href {https://science.sciencemag.org/content/309/5741/1704} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {309}},\ \bibinfo {pages} {1704} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Whitfield}\ \emph {et~al.}(2011)\citenamefont {Whitfield}, \citenamefont {Biamonte},\ and\ \citenamefont {Aspuru-Guzik}}]{whitfield2011simulation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Biamonte}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\bibfield {title} {\bibinfo {title} {Simulation of electronic structure {H}amiltonians using quantum computers},\ }\href {https://www.tandfonline.com/doi/abs/10.1080/00268976.2011.552441} {\bibfield {journal} {\bibinfo {journal} {Mol. Phys}\ }\textbf {\bibinfo {volume} {109}},\ \bibinfo {pages} {735} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bravyi}\ \emph {et~al.}(2017)\citenamefont {Bravyi}, \citenamefont {Gambetta}, \citenamefont {Mezzacapo},\ and\ \citenamefont {Temme}}]{bravyi2017tapering} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bravyi}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}},\ }\bibfield {title} {\bibinfo {title} {Tapering off qubits to simulate fermionic {H}amiltonians},\ }\href {https://arxiv.org/abs/1701.08213} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1701.08213}\ } (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Setia}\ \emph {et~al.}(2020)\citenamefont {Setia}, \citenamefont {Chen}, \citenamefont {Rice}, \citenamefont {Mezzacapo}, \citenamefont {Pistoia},\ and\ \citenamefont {Whitfield}}]{setia2020reducing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Setia}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Rice}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pistoia}},\ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}},\ }\bibfield {title} {\bibinfo {title} {Reducing qubit requirements for quantum simulations using molecular point group symmetries},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.0c00113} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Theory Comput}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {6091} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Faist}\ \emph {et~al.}(2020)\citenamefont {Faist}, \citenamefont {Nezami}, \citenamefont {Albert}, \citenamefont {Salton}, \citenamefont {Pastawski}, \citenamefont {Hayden},\ and\ \citenamefont {Preskill}}]{faist2020continuous} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Faist}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Nezami}}, \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Albert}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Salton}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Pastawski}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Hayden}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\bibfield {title} {\bibinfo {title} {Continuous symmetries and approximate quantum error correction},\ }\href {https://journals.aps.org/prx/abstract/10.1103/PhysRevX.10.041018} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {041018} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Steudtner}\ and\ \citenamefont {Wehner}(2018)}]{steudtner2018fermion} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Steudtner}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wehner}},\ }\bibfield {title} {\bibinfo {title} {Fermion-to-qubit mappings with varying resource requirements for quantum simulation},\ }\href {https://www.google.com/search?client=safari&rls=en&q=Fermion-to-qubit+mappings+with+varying+resource+requirements+for+quantum+simulation&ie=UTF-8&oe=UTF-8} {\bibfield {journal} {\bibinfo {journal} {New J. Phys}\ }\textbf {\bibinfo {volume} {20}},\ \bibinfo {pages} {063010} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gottesman}(1997)}]{gottesman1997stabilizer} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Gottesman}},\ }\emph {\bibinfo {title} {Stabilizer codes and quantum error correction}},\ \href {https://arxiv.org/abs/quant-ph/9705052} {Ph.D. thesis},\ \bibinfo {school} {California Institute of Technology} (\bibinfo {year} {1997})\BibitemShut {NoStop} \bibitem [{\citenamefont {Fisher}\ \emph {et~al.}(1989)\citenamefont {Fisher}, \citenamefont {Weichman}, \citenamefont {Grinstein},\ and\ \citenamefont {Fisher}}]{fisher1989boson} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Fisher}}, \bibinfo {author} {\bibfnamefont {P.~B.}\ \bibnamefont {Weichman}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Grinstein}},\ and\ \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Fisher}},\ }\bibfield {title} {\bibinfo {title} {Boson localization and the superfluid-insulator transition},\ }\href {https://journals.aps.org/prb/abstract/10.1103/PhysRevB.40.546} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {546} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Levitt}(2013)}]{levitt2013spin} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Levitt}},\ }\href@noop {} {\emph {\bibinfo {title} {Spin dynamics: basics of nuclear magnetic resonance}}}\ (\bibinfo {publisher} {John Wiley \& Sons},\ \bibinfo {year} {2013})\BibitemShut {NoStop} \bibitem [{\citenamefont {Wilson}\ \emph {et~al.}(1980)\citenamefont {Wilson}, \citenamefont {Decius},\ and\ \citenamefont {Cross}}]{wilson1980molecular} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~B.}\ \bibnamefont {Wilson}}, \bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Decius}},\ and\ \bibinfo {author} {\bibfnamefont {P.~C.}\ \bibnamefont {Cross}},\ }\href@noop {} {\emph {\bibinfo {title} {Molecular vibrations: the theory of infrared and Raman vibrational spectra}}}\ (\bibinfo {publisher} {Courier Corporation},\ \bibinfo {year} {1980})\BibitemShut {NoStop} \bibitem [{\citenamefont {Turro}(1991)}]{turro1991modern} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Turro}},\ }\href@noop {} {\emph {\bibinfo {title} {Modern molecular photochemistry}}}\ (\bibinfo {publisher} {University science books},\ \bibinfo {year} {1991})\BibitemShut {NoStop} \bibitem [{\citenamefont {Hong}\ \emph {et~al.}(2019)\citenamefont {Hong}, \citenamefont {Wu}, \citenamefont {Wu}, \citenamefont {Wang},\ and\ \citenamefont {Zhang}}]{hong2019overview} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Hong}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Wang}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Zhang}},\ }\bibfield {title} {\bibinfo {title} {Overview of computational simulations in quantum dots},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/ijch.201900026} {\bibfield {journal} {\bibinfo {journal} {Isr. J. Chem}\ }\textbf {\bibinfo {volume} {59}},\ \bibinfo {pages} {661} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Veis}\ \emph {et~al.}(2016)\citenamefont {Veis}, \citenamefont {Vi{\v{s}}{\v{n}}{\'a}k}, \citenamefont {Nishizawa}, \citenamefont {Nakai},\ and\ \citenamefont {Pittner}}]{veis2016quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Veis}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Vi{\v{s}}{\v{n}}{\'a}k}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nishizawa}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nakai}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Pittner}},\ }\bibfield {title} {\bibinfo {title} {Quantum chemistry beyond {B}orn-{O}ppenheimer approximation on a quantum computer: A simulated phase estimation study},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/qua.25176} {\bibfield {journal} {\bibinfo {journal} {Int. J. 
Quantum Chem}\ }\textbf {\bibinfo {volume} {116}},\ \bibinfo {pages} {1328} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Joshi}\ \emph {et~al.}(2014)\citenamefont {Joshi}, \citenamefont {Shukla}, \citenamefont {Katiyar}, \citenamefont {Hazra},\ and\ \citenamefont {Mahesh}}]{joshi2014estimating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Joshi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Shukla}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Katiyar}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Hazra}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Mahesh}},\ }\bibfield {title} {\bibinfo {title} {Estimating {Franck-Condon} factors using an {NMR} quantum processor},\ }\href {https://dx.doi.org/10.1103/PhysRevA.90.022303} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {022303} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Teplukhin}\ \emph {et~al.}(2019)\citenamefont {Teplukhin}, \citenamefont {Kendrick},\ and\ \citenamefont {Babikov}}]{teplukhin2019calculation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Teplukhin}}, \bibinfo {author} {\bibfnamefont {B.~K.}\ \bibnamefont {Kendrick}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Babikov}},\ }\bibfield {title} {\bibinfo {title} {Calculation of molecular vibrational spectra on a quantum annealer},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.9b00402} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {4555} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ \emph {et~al.}(2019{\natexlab{a}})\citenamefont {McArdle}, \citenamefont {Mayorov}, \citenamefont {Shan}, \citenamefont {Benjamin},\ and\ \citenamefont {Yuan}}]{mcardle2019digital} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mayorov}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Shan}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Benjamin}},\ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},\ }\bibfield {title} {\bibinfo {title} {Digital quantum simulation of molecular vibrations},\ }\href {https://pubs.rsc.org/en/content/articlelanding/2019/sc/c9sc01313j#!divAbstract} {\bibfield {journal} {\bibinfo {journal} {Chem. Sci}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {5725} (\bibinfo {year} {2019}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sawaya}\ and\ \citenamefont {Huh}(2019)}]{sawaya2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont {Sawaya}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Huh}},\ }\bibfield {title} {\bibinfo {title} {Quantum algorithm for calculating molecular vibronic spectra},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jpclett.9b01117} {\bibfield {journal} {\bibinfo {journal} {J. Phys. Chem. 
Lett}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {3586} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Macridin}\ \emph {et~al.}(2018{\natexlab{a}})\citenamefont {Macridin}, \citenamefont {Spentzouris}, \citenamefont {Amundson},\ and\ \citenamefont {Harnik}}]{macridin2018electron} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Macridin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Spentzouris}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Amundson}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Harnik}},\ }\bibfield {title} {\bibinfo {title} {Electron-phonon systems on a universal quantum computer},\ }\href {https://dx.doi.org/10.1103/PhysRevLett.121.110504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {121}},\ \bibinfo {pages} {110504} (\bibinfo {year} {2018}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Macridin}\ \emph {et~al.}(2018{\natexlab{b}})\citenamefont {Macridin}, \citenamefont {Spentzouris}, \citenamefont {Amundson},\ and\ \citenamefont {Harnik}}]{macridin2018digital} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Macridin}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Spentzouris}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Amundson}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Harnik}},\ }\bibfield {title} {\bibinfo {title} {Digital quantum computation of fermion-boson interacting systems},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.042312} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {042312} (\bibinfo {year} {2018}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sab{\'\i}n}(2020)}]{sabin2020digital} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Sab{\'\i}n}},\ }\bibfield {title} {\bibinfo {title} {Digital quantum simulation of linear and nonlinear optical elements},\ }\href {https://www.mdpi.com/2624-960X/2/1/13} {\bibfield {journal} {\bibinfo {journal} {Quantum Rep}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {208} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Di~Paolo}\ \emph {et~al.}(2020)\citenamefont {Di~Paolo}, \citenamefont {Barkoutsos}, \citenamefont {Tavernelli},\ and\ \citenamefont {Blais}}]{di2020variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Di~Paolo}}, \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Blais}},\ }\bibfield {title} {\bibinfo {title} {Variational quantum simulation of ultrastrong light-matter coupling},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.033364} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Research}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {033364} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sawaya}\ \emph {et~al.}(2020{\natexlab{a}})\citenamefont {Sawaya}, \citenamefont {Menke}, \citenamefont {Kyaw}, \citenamefont {Johri}, \citenamefont {Aspuru-Guzik},\ and\ \citenamefont {Guerreschi}}]{sawaya2020resource} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont {Sawaya}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Menke}}, \bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont {Kyaw}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Johri}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ and\ \bibinfo {author} {\bibfnamefont {G.~G.}\ \bibnamefont {Guerreschi}},\ }\bibfield {title} {\bibinfo {title} {Resource-efficient digital quantum simulation of $d$-level systems for photonic, vibrational, and spin-s {H}amiltonians},\ }\href {https://www.nature.com/articles/s41534-020-0278-0} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2020}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sawaya}\ \emph {et~al.}(2020{\natexlab{b}})\citenamefont {Sawaya}, \citenamefont {Paesani},\ and\ \citenamefont {Tabor}}]{sawaya2020near} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont {Sawaya}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Paesani}},\ and\ \bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {Tabor}},\ }\bibfield {title} {\bibinfo {title} {Near- and long-term quantum algorithmic approaches for vibrational spectroscopy},\ }\href {https://arxiv.org/abs/2009.05066} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2009.05066}\ } (\bibinfo {year} {2020}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ball}(2005)}]{ball2004fermions} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Ball}},\ }\bibfield {title} {\bibinfo {title} {Fermions without fermion fields},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.95.176407} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {176407} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Verstraete}\ and\ \citenamefont {Cirac}(2005)}]{verstraete2005mapping} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Verstraete}}\ and\ \bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Cirac}},\ }\bibfield {title} {\bibinfo {title} {Mapping local {H}amiltonians of fermions to local {H}amiltonians of spins},\ }\href {https://iopscience.iop.org/article/10.1088/1742-5468/2005/09/P09012/meta} {\bibfield {journal} {\bibinfo {journal} {J. Stat. 
Mech}\ }\textbf {\bibinfo {volume} {2005}},\ \bibinfo {pages} {P09012} (\bibinfo {year} {2005})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Havl{\'\i}{\v{c}}ek}\ \emph {et~al.}(2017)\citenamefont {Havl{\'\i}{\v{c}}ek}, \citenamefont {Troyer},\ and\ \citenamefont {Whitfield}}]{havlivcek2017operator} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Havl{\'\i}{\v{c}}ek}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}},\ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}},\ }\bibfield {title} {\bibinfo {title} {Operator locality in the quantum simulation of fermionic models},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.032332} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {032332} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Setia}\ \emph {et~al.}(2019)\citenamefont {Setia}, \citenamefont {Bravyi}, \citenamefont {Mezzacapo},\ and\ \citenamefont {Whitfield}}]{setia2019superfast} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Setia}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bravyi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}},\ and\ \bibinfo {author} {\bibfnamefont {J.~D.}\ \bibnamefont {Whitfield}},\ }\bibfield {title} {\bibinfo {title} {Superfast encodings for fermionic quantum simulation},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.1.033033} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {1}},\ \bibinfo {pages} {033033} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bonet-Monroig}\ \emph {et~al.}(2018)\citenamefont {Bonet-Monroig}, \citenamefont {Sagastizabal}, \citenamefont {Singh},\ and\ \citenamefont {O'Brien}}]{bonet2018low} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Bonet-Monroig}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Sagastizabal}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Singh}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {O'Brien}},\ }\bibfield {title} {\bibinfo {title} {Low-cost error mitigation by symmetry verification},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.98.062339} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {062339} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ \emph {et~al.}(2019{\natexlab{b}})\citenamefont {McArdle}, \citenamefont {Yuan},\ and\ \citenamefont {Benjamin}}]{mcardle2019error} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Benjamin}},\ }\bibfield {title} {\bibinfo {title} {Error-mitigated digital quantum simulation},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.122.180501} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Tech}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {044005} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Griffiths}\ and\ \citenamefont {Niu}(1996)}]{griffiths1996semiclassical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Griffiths}}\ and\ \bibinfo {author} {\bibfnamefont {C.-S.}\ \bibnamefont {Niu}},\ }\bibfield {title} {\bibinfo {title} {Semiclassical {Fourier} transform for quantum computation},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.76.3228} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {3228} (\bibinfo {year} {1996})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dob{\v{s}}{\'\i}{\v{c}}ek}\ \emph {et~al.}(2007)\citenamefont {Dob{\v{s}}{\'\i}{\v{c}}ek}, \citenamefont {Johansson}, \citenamefont {Shumeiko},\ and\ \citenamefont {Wendin}}]{dobvsivcek2007arbitrary} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Dob{\v{s}}{\'\i}{\v{c}}ek}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Johansson}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Shumeiko}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Wendin}},\ }\bibfield {title} {\bibinfo {title} {Arbitrary accuracy iterative quantum phase estimation algorithm using a single ancillary qubit: A two-qubit benchmark},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.76.030306} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {76}},\ \bibinfo {pages} {030306} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Svore}\ \emph {et~al.}(2013)\citenamefont {Svore}, \citenamefont {Hastings},\ and\ \citenamefont {Freedman}}]{svore2013faster} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~M.}\ \bibnamefont {Svore}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Hastings}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Freedman}},\ }\bibfield {title} {\bibinfo {title} {Faster phase estimation},\ }\href {https://dl.acm.org/doi/10.5555/2600508.2600515} {\bibfield {journal} {\bibinfo {journal} {Quant. Info. Comput}\ }\textbf {\bibinfo {volume} {14}} (\bibinfo {year} {2013})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Somma}(2019)}]{somma2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~D.}\ \bibnamefont {Somma}},\ }\bibfield {title} {\bibinfo {title} {Quantum eigenvalue estimation via time series analysis},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/ab5c60} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys}\ }\textbf {\bibinfo {volume} {21}},\ \bibinfo {pages} {123025} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mohammadbagherpoor}\ \emph {et~al.}(2019)\citenamefont {Mohammadbagherpoor}, \citenamefont {Oh}, \citenamefont {Dreher}, \citenamefont {Singh}, \citenamefont {Yu},\ and\ \citenamefont {Rindos}}]{mohammadbagherpoor2019improved} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Mohammadbagherpoor}}, \bibinfo {author} {\bibfnamefont {Y.-H.}\ \bibnamefont {Oh}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Dreher}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Singh}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yu}},\ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Rindos}},\ }\bibfield {title} {\bibinfo {title} {An improved implementation approach for quantum phase estimation on quantum computers},\ }in\ \href {https://ieeexplore.ieee.org/abstract/document/8914702/} {\emph {\bibinfo {booktitle} {Proc. ICRC}}}\ (\bibinfo {organization} {IEEE},\ \bibinfo {year} {2019})\ pp.\ \bibinfo {pages} {1--9}\BibitemShut {NoStop} \bibitem [{\citenamefont {Born}\ and\ \citenamefont {Fock}(1928)}]{born1928beweis} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Born}}\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Fock}},\ }\bibfield {title} {\bibinfo {title} {Beweis des adiabatensatzes},\ }\href {https://link.springer.com/article/10.1007/BF01343193} {\bibfield {journal} {\bibinfo {journal} {Zeit. Phys}\ }\textbf {\bibinfo {volume} {51}},\ \bibinfo {pages} {165} (\bibinfo {year} {1928})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kato}(1950)}]{kato1950adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kato}},\ }\bibfield {title} {\bibinfo {title} {On the adiabatic theorem of quantum mechanics},\ }\href {https://journals.jps.jp/doi/10.1143/JPSJ.5.435} {\bibfield {journal} {\bibinfo {journal} {J. Phys. Soc. Jpn}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {435} (\bibinfo {year} {1950})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Messiah}(1962)}]{messiah1962quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Messiah}},\ }\href@noop {} {\emph {\bibinfo {title} {Quantum mechanics: volume II}}}\ (\bibinfo {publisher} {North-Holland Publishing Company Amsterdam},\ \bibinfo {year} {1962})\BibitemShut {NoStop} \bibitem [{\citenamefont {Avron}\ and\ \citenamefont {Elgart}(1999)}]{avron1999adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Avron}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Elgart}},\ }\bibfield {title} {\bibinfo {title} {Adiabatic theorem without a gap condition},\ }\href {https://link.springer.com/article/10.1007/s002200050620} {\bibfield {journal} {\bibinfo {journal} {Comm. Math. 
Phys}\ }\textbf {\bibinfo {volume} {203}},\ \bibinfo {pages} {445} (\bibinfo {year} {1999})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Teufel}(2003)}]{teufel2003adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Teufel}},\ }\href@noop {} {\emph {\bibinfo {title} {Adiabatic perturbation theory in quantum dynamics}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {2003})\BibitemShut {NoStop} \bibitem [{\citenamefont {Jordan}(2008)}]{jordan2008quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~P.}\ \bibnamefont {Jordan}},\ }\emph {\bibinfo {title} {Quantum computation beyond the circuit model}},\ \href {https://arxiv.org/abs/0809.2307} {Ph.D. thesis},\ \bibinfo {school} {Massachusetts Institute of Technology} (\bibinfo {year} {2008})\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2000)\citenamefont {Farhi}, \citenamefont {Goldstone}, \citenamefont {Gutmann},\ and\ \citenamefont {Sipser}}]{farhi2000quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sipser}},\ }\bibfield {title} {\bibinfo {title} {Quantum computation by adiabatic evolution},\ }\href {https://arxiv.org/abs/quant-ph/0001106} {\bibfield {journal} {\bibinfo {journal} {arXiv quant-ph/0001106}\ } (\bibinfo {year} {2000})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2001)\citenamefont {Farhi}, \citenamefont {Goldstone}, \citenamefont {Gutmann}, \citenamefont {Lapan}, \citenamefont {Lundgren},\ and\ \citenamefont {Preda}}]{farhi2001quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lapan}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Lundgren}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Preda}},\ }\bibfield {title} {\bibinfo {title} {A quantum adiabatic evolution algorithm applied to random instances of an {NP}-complete problem},\ }\href {https://science.sciencemag.org/content/292/5516/472} {\bibfield {journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume} {292}},\ \bibinfo {pages} {472} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Du}\ \emph {et~al.}(2010)\citenamefont {Du}, \citenamefont {Xu}, \citenamefont {Peng}, \citenamefont {Wang}, \citenamefont {Wu},\ and\ \citenamefont {Lu}}]{du2010nmr} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Du}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Xu}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Peng}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Wu}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Lu}},\ }\bibfield {title} {\bibinfo {title} {Nmr implementation of a molecular hydrogen quantum simulation with adiabatic state preparation},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.104.030502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Lett}\ }\textbf {\bibinfo {volume} {104}},\ \bibinfo {pages} {030502} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Babbush}\ \emph {et~al.}(2014)\citenamefont {Babbush}, \citenamefont {Love},\ and\ \citenamefont {Aspuru-Guzik}}]{babbush2014adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\bibfield {title} {\bibinfo {title} {Adiabatic quantum simulation of quantum chemistry},\ }\href {https://www.nature.com/articles/srep06603} {\bibfield {journal} {\bibinfo {journal} {Sci. Rep}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {6603} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Veis}\ and\ \citenamefont {Pittner}(2014)}]{veis2014adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Veis}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Pittner}},\ }\bibfield {title} {\bibinfo {title} {Adiabatic state preparation study of methylene},\ }\href {https://aip.scitation.org/doi/10.1063/1.4880755} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {140}},\ \bibinfo {pages} {214111} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nagaj}\ and\ \citenamefont {Mozes}(2007)}]{nagaj2007new} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Nagaj}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Mozes}},\ }\bibfield {title} {\bibinfo {title} {New construction for a {QMA} complete three-local {H}amiltonian},\ }\href {https://aip.scitation.org/doi/10.1063/1.2748377} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {072104} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2008)\citenamefont {Aharonov}, \citenamefont {Van~Dam}, \citenamefont {Kempe}, \citenamefont {Landau}, \citenamefont {Lloyd},\ and\ \citenamefont {Regev}}]{aharonov2008adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Van~Dam}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kempe}}, \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Landau}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Lloyd}},\ and\ \bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Regev}},\ }\bibfield {title} {\bibinfo {title} {Adiabatic quantum computation is equivalent to standard quantum computation},\ }\href {https://epubs.siam.org/doi/abs/10.1137/080734479?journalCode=siread} {\bibfield {journal} {\bibinfo {journal} {SIAM Rev}\ }\textbf {\bibinfo {volume} {50}},\ \bibinfo {pages} {755} (\bibinfo {year} {2008})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Childs}\ \emph {et~al.}(2001)\citenamefont {Childs}, \citenamefont {Farhi},\ and\ \citenamefont {Preskill}}]{childs2001robustness} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~M.}\ \bibnamefont {Childs}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Preskill}},\ }\bibfield {title} {\bibinfo {title} {Robustness of adiabatic quantum computation},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.65.012322} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {65}},\ \bibinfo {pages} {012322} (\bibinfo {year} {2001})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Van~Dam}\ \emph {et~al.}(2001)\citenamefont {Van~Dam}, \citenamefont {Mosca},\ and\ \citenamefont {Vazirani}}]{van2001powerful} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Van~Dam}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mosca}},\ and\ \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Vazirani}},\ }\bibfield {title} {\bibinfo {title} {How powerful is adiabatic quantum computation?},\ }in\ \href {https://ieeexplore.ieee.org/document/959902} {\emph {\bibinfo {booktitle} {Proc. FOCS}}}\ (\bibinfo {organization} {IEEE},\ \bibinfo {year} {2001})\ pp.\ \bibinfo {pages} {279--287}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jansen}\ \emph {et~al.}(2007)\citenamefont {Jansen}, \citenamefont {Ruskai},\ and\ \citenamefont {Seiler}}]{jansen2007bounds} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Jansen}}, \bibinfo {author} {\bibfnamefont {M.-B.}\ \bibnamefont {Ruskai}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Seiler}},\ }\bibfield {title} {\bibinfo {title} {Bounds for the adiabatic approximation with applications to quantum computation},\ }\href {https://aip.scitation.org/doi/abs/10.1063/1.2798382?journalCode=jmp} {\bibfield {journal} {\bibinfo {journal} {J. Math. Phys}\ }\textbf {\bibinfo {volume} {48}},\ \bibinfo {pages} {102111} (\bibinfo {year} {2007})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cerezo}\ \emph {et~al.}(2021)\citenamefont {Cerezo}, \citenamefont {Arrasmith}, \citenamefont {Babbush}, \citenamefont {Benjamin}, \citenamefont {Endo}, \citenamefont {Fujii}, \citenamefont {McClean}, \citenamefont {Mitarai}, \citenamefont {Yuan}, \citenamefont {Cincio} \emph {et~al.}}]{cerezo2020variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Cerezo}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Arrasmith}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Fujii}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Cincio}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Variational quantum algorithms},\ }\href {https://www.nature.com/articles/s42254-021-00348-9} {\bibfield {journal} {\bibinfo {journal} {Nat. Rev. 
Phys}\ ,\ \bibinfo {pages} {1}} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bharti}\ \emph {et~al.}(2021)\citenamefont {Bharti}, \citenamefont {Cervera-Lierta}, \citenamefont {Kyaw}, \citenamefont {Haug}, \citenamefont {Alperin-Lea}, \citenamefont {Anand}, \citenamefont {Degroote}, \citenamefont {Heimonen}, \citenamefont {Kottmann}, \citenamefont {Menke} \emph {et~al.}}]{bharti2021noisy} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Bharti}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cervera-Lierta}}, \bibinfo {author} {\bibfnamefont {T.~H.}\ \bibnamefont {Kyaw}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Haug}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Alperin-Lea}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Anand}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Degroote}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Heimonen}}, \bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Kottmann}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Menke}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Noisy intermediate-scale quantum (nisq) algorithms},\ }\href {https://arxiv.org/abs/2101.08448} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2101.08448}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Farhi}\ \emph {et~al.}(2014)\citenamefont {Farhi}, \citenamefont {Goldstone},\ and\ \citenamefont {Gutmann}}]{farhi2014quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Farhi}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Goldstone}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gutmann}},\ }\bibfield {title} {\bibinfo {title} {A quantum approximate optimization algorithm},\ }\href {https://arxiv.org/abs/1411.4028} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1411.4028}\ } (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peruzzo}\ \emph {et~al.}(2014)\citenamefont {Peruzzo}, \citenamefont {McClean}, \citenamefont {Shadbolt}, \citenamefont {Yung}, \citenamefont {Zhou}, \citenamefont {Love}, \citenamefont {Aspuru-Guzik},\ and\ \citenamefont {O'Brien}}]{peruzzo2014variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peruzzo}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Shadbolt}}, \bibinfo {author} {\bibfnamefont {M.-H.}\ \bibnamefont {Yung}}, \bibinfo {author} {\bibfnamefont {X.-Q.}\ \bibnamefont {Zhou}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ and\ \bibinfo {author} {\bibfnamefont {J.~L.}\ \bibnamefont {O'Brien}},\ }\bibfield {title} {\bibinfo {title} {A variational eigenvalue solver on a photonic quantum processor},\ }\href {https://doi.org/10.1038/ncomms5213} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {4213} (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2016)\citenamefont {McClean}, \citenamefont {Romero}, \citenamefont {Babbush},\ and\ \citenamefont {Aspuru-Guzik}}]{mcclean2016theory} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\bibfield {title} {\bibinfo {title} {The theory of variational hybrid quantum-classical algorithms},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/18/2/023023} {\bibfield {journal} {\bibinfo {journal} {New J. Phys}\ }\textbf {\bibinfo {volume} {18}},\ \bibinfo {pages} {023023} (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Romero}\ \emph {et~al.}(2018)\citenamefont {Romero}, \citenamefont {Babbush}, \citenamefont {McClean}, \citenamefont {Hempel}, \citenamefont {Love},\ and\ \citenamefont {Aspuru-Guzik}}]{romero2018strategies} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hempel}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\bibfield {title} {\bibinfo {title} {Strategies for quantum computing molecular energies using the unitary coupled cluster {A}nsatz},\ }\href {https://iopscience.iop.org/article/10.1088/2058-9565/aad3e4/meta} {\bibfield {journal} {\bibinfo {journal} {Quantum Sci. 
Technol.}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {014008} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Parrish}\ \emph {et~al.}(2019)\citenamefont {Parrish}, \citenamefont {Hohenstein}, \citenamefont {McMahon},\ and\ \citenamefont {Martinez}}]{parrish2019hybrid} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Parrish}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Hohenstein}}, \bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {McMahon}},\ and\ \bibinfo {author} {\bibfnamefont {T.~J.}\ \bibnamefont {Martinez}},\ }\bibfield {title} {\bibinfo {title} {Hybrid quantum/classical derivative theory: analytical gradients and excited-state dynamics for the multistate contracted variational quantum eigensolver},\ }\href {https://arxiv.org/abs/1906.08728} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1906.08728}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Schuld}\ \emph {et~al.}(2019)\citenamefont {Schuld}, \citenamefont {Bergholm}, \citenamefont {Gogolin}, \citenamefont {Izaac},\ and\ \citenamefont {Killoran}}]{schuld2019evaluating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Schuld}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Bergholm}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Izaac}},\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Killoran}},\ }\bibfield {title} {\bibinfo {title} {Evaluating analytic gradients on quantum hardware},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.99.032331} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {99}},\ \bibinfo {pages} {032331} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mitarai}\ \emph {et~al.}(2020)\citenamefont {Mitarai}, \citenamefont {Nakagawa},\ and\ \citenamefont {Mizukami}}]{mitarai2020theory} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {Y.~O.}\ \bibnamefont {Nakagawa}},\ and\ \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Mizukami}},\ }\bibfield {title} {\bibinfo {title} {Theory of analytical energy derivatives for the variational quantum eigensolver},\ }\href {https://doi.org/10.1103/PhysRevResearch.2.013129} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {013129} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kottmann}\ \emph {et~al.}(2021)\citenamefont {Kottmann}, \citenamefont {Anand},\ and\ \citenamefont {Aspuru-Guzik}}]{kottmann2021feasible} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~S.}\ \bibnamefont {Kottmann}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Anand}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Aspuru-Guzik}},\ }\bibfield {title} {\bibinfo {title} {A feasible approach for automatically differentiable unitary coupled-cluster on quantum computers},\ }\href {https://pubs.rsc.org/en/content/articlehtml/2021/sc/d0sc06627c} {\bibfield {journal} {\bibinfo {journal} {Chem. 
Sci}\ }\textbf {\bibinfo {volume} {12}},\ \bibinfo {pages} {3497} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Guerreschi}\ and\ \citenamefont {Smelyanskiy}(2017)}]{guerreschi2017practical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~G.}\ \bibnamefont {Guerreschi}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Smelyanskiy}},\ }\bibfield {title} {\bibinfo {title} {Practical optimization for hybrid quantum-classical algorithms},\ }\href {https://arxiv.org/abs/1701.01450} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1701.01450}\ } (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Spall}(2005)}]{spall2005introduction} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~C.}\ \bibnamefont {Spall}},\ }\href@noop {} {\emph {\bibinfo {title} {Introduction to stochastic search and optimization: estimation, simulation, and control}}}\ (\bibinfo {publisher} {John Wiley \& Sons},\ \bibinfo {year} {2005})\BibitemShut {NoStop} \bibitem [{\citenamefont {Hirokami}\ \emph {et~al.}(2006)\citenamefont {Hirokami}, \citenamefont {Maeda},\ and\ \citenamefont {Tsukada}}]{hirokami2006parameter} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Hirokami}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Maeda}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Tsukada}},\ }\bibfield {title} {\bibinfo {title} {Parameter estimation using simultaneous perturbation stochastic approximation},\ }\href {https://onlinelibrary.wiley.com/doi/abs/10.1002/eej.20239} {\bibfield {journal} {\bibinfo {journal} {Electr. Eng. Jpn}\ }\textbf {\bibinfo {volume} {154}},\ \bibinfo {pages} {30} (\bibinfo {year} {2006})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bhatnagar}\ \emph {et~al.}(2013)\citenamefont {Bhatnagar}, \citenamefont {Prasad},\ and\ \citenamefont {Prashanth}}]{s2013stochastic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bhatnagar}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Prasad}},\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Prashanth}},\ }\href@noop {} {\emph {\bibinfo {title} {Stochastic recursive algorithms for optimization: simultaneous perturbation methods}}}\ (\bibinfo {publisher} {Springer},\ \bibinfo {year} {2013})\BibitemShut {NoStop} \bibitem [{\citenamefont {Kingma}\ and\ \citenamefont {Ba}(2014)}]{kingma2014adam} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {Kingma}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Ba}},\ }\bibfield {title} {\bibinfo {title} {Adam: A method for stochastic optimization},\ }\href {https://arxiv.org/abs/1412.6980} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1412.6980}\ } (\bibinfo {year} {2014})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stokes}\ \emph {et~al.}(2020)\citenamefont {Stokes}, \citenamefont {Izaac}, \citenamefont {Killoran},\ and\ \citenamefont {Carleo}}]{stokes2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Stokes}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Izaac}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Killoran}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carleo}},\ }\bibfield {title} {\bibinfo {title} {Quantum natural gradient},\ }\href {https://quantum-journal.org/papers/q-2020-05-25-269/} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo 
{volume} {4}},\ \bibinfo {pages} {269} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kandala}\ \emph {et~al.}(2017)\citenamefont {Kandala}, \citenamefont {Mezzacapo}, \citenamefont {Temme}, \citenamefont {Takita}, \citenamefont {Brink}, \citenamefont {Chow},\ and\ \citenamefont {Gambetta}}]{kandala2017hardware} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kandala}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Takita}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Brink}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Chow}},\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}},\ }\bibfield {title} {\bibinfo {title} {Hardware-efficient variational quantum eigensolver for small molecules and quantum magnets},\ }\href {https://doi.org/10.1038/nature23879} {\bibfield {journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume} {549}},\ \bibinfo {pages} {242} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2018)\citenamefont {McClean}, \citenamefont {Boixo}, \citenamefont {Smelyanskiy}, \citenamefont {Babbush},\ and\ \citenamefont {Neven}}]{mcclean2018barren} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Boixo}}, \bibinfo {author} {\bibfnamefont {V.~N.}\ \bibnamefont {Smelyanskiy}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Babbush}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Neven}},\ }\bibfield {title} {\bibinfo {title} {Barren plateaus in quantum neural network training landscapes},\ }\href {https://www.nature.com/articles/s41467-018-07090-4} {\bibfield {journal} {\bibinfo {journal} {Nat. Commun}\ }\textbf {\bibinfo {volume} {9}},\ \bibinfo {pages} {1} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Akshay}\ \emph {et~al.}(2020)\citenamefont {Akshay}, \citenamefont {Philathong}, \citenamefont {Morales},\ and\ \citenamefont {Biamonte}}]{akshay2020reachability} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Akshay}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Philathong}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Morales}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Biamonte}},\ }\bibfield {title} {\bibinfo {title} {Reachability deficits in quantum approximate optimization},\ }\href {https://dx.doi.org/10.1103/PhysRevLett.124.090504} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {124}},\ \bibinfo {pages} {090504} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bartlett}\ \emph {et~al.}(1989)\citenamefont {Bartlett}, \citenamefont {Kucharski},\ and\ \citenamefont {Noga}}]{bartlett1989alternative} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~J.}\ \bibnamefont {Bartlett}}, \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Kucharski}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Noga}},\ }\bibfield {title} {\bibinfo {title} {Alternative coupled-cluster {A}ns{\"a}tze {II}. {T}he unitary coupled-cluster method},\ }\href {https://www.sciencedirect.com/science/article/abs/pii/S0009261489873725} {\bibfield {journal} {\bibinfo {journal} {Chem. Phys. 
Lett}\ }\textbf {\bibinfo {volume} {155}},\ \bibinfo {pages} {133} (\bibinfo {year} {1989})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Moll}\ \emph {et~al.}(2018)\citenamefont {Moll}, \citenamefont {Barkoutsos}, \citenamefont {Bishop}, \citenamefont {Chow}, \citenamefont {Cross}, \citenamefont {Egger}, \citenamefont {Filipp}, \citenamefont {Fuhrer}, \citenamefont {Gambetta}, \citenamefont {Ganzhorn}, \citenamefont {Kandala}, \citenamefont {Mezzacapo}, \citenamefont {M\"{u}ller}, \citenamefont {Riess}, \citenamefont {Salis}, \citenamefont {Smolin}, \citenamefont {Tavernelli},\ and\ \citenamefont {Temme}}]{Moll_2018} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Moll}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {L.~S.}\ \bibnamefont {Bishop}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Cross}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Egger}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Filipp}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fuhrer}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ganzhorn}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kandala}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {M\"{u}ller}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {Riess}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Salis}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Smolin}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Temme}},\ }\bibfield {title} {\bibinfo {title} {Quantum optimization using variational algorithms on near-term quantum devices},\ }\href {https://doi.org/10.1088/2058-9565/aab822} {\bibfield {journal} {\bibinfo {journal} {Quantum Sci. Technol.}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {030503} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Albash}\ and\ \citenamefont {Lidar}(2018)}]{albash2018adiabatic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Albash}}\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Lidar}},\ }\bibfield {title} {\bibinfo {title} {Adiabatic quantum computation},\ }\href {https://link.aps.org/doi/10.1103/RevModPhys.90.015002} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {90}},\ \bibinfo {pages} {015002} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Evangelista}\ \emph {et~al.}(2019)\citenamefont {Evangelista}, \citenamefont {Chan},\ and\ \citenamefont {Scuseria}}]{evangelista2019exact} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Evangelista}}, \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ and\ \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont {Scuseria}},\ }\bibfield {title} {\bibinfo {title} {Exact parameterization of fermionic wave functions via unitary coupled cluster theory},\ }\href {https://aip.scitation.org/doi/10.1063/1.5133059} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {151}},\ \bibinfo {pages} {244112} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barkoutsos}\ \emph {et~al.}(2018)\citenamefont {Barkoutsos}, \citenamefont {Gonthier}, \citenamefont {Sokolov}, \citenamefont {Moll}, \citenamefont {Salis}, \citenamefont {Fuhrer}, \citenamefont {Ganzhorn}, \citenamefont {Egger}, \citenamefont {Troyer}, \citenamefont {Mezzacapo}, \citenamefont {Filipp},\ and\ \citenamefont {Tavernelli}}]{barkoutsos2018quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Gonthier}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Sokolov}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Moll}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Salis}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fuhrer}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ganzhorn}}, \bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Egger}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Filipp}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ }\bibfield {title} {\bibinfo {title} {Quantum algorithms for electronic structure calculations: particle-hole {H}amiltonian and optimized wave-function expansions},\ }\href {https://doi.org/10.1103/PhysRevA.98.022322} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo {pages} {022322} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ryabinkin}\ \emph {et~al.}(2018{\natexlab{a}})\citenamefont {Ryabinkin}, \citenamefont {Yen}, \citenamefont {Genin},\ and\ \citenamefont {Izmaylov}}]{ryabinkin2018qubit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~G.}\ \bibnamefont {Ryabinkin}}, \bibinfo {author} {\bibfnamefont {T.-C.}\ \bibnamefont {Yen}}, \bibinfo {author} {\bibfnamefont {S.~N.}\ \bibnamefont {Genin}},\ and\ \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Izmaylov}},\ }\bibfield {title} {\bibinfo {title} {Qubit coupled cluster method: a systematic approach to quantum chemistry on a quantum computer},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.8b00932} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {14}},\ \bibinfo {pages} {6317} (\bibinfo {year} {2018}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Grimsley}\ \emph {et~al.}(2019)\citenamefont {Grimsley}, \citenamefont {Economou}, \citenamefont {Barnes},\ and\ \citenamefont {Mayhall}}]{Grimsley2019} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~R.}\ \bibnamefont {Grimsley}}, \bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Economou}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}},\ and\ \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Mayhall}},\ }\bibfield {title} {\bibinfo {title} {An adaptive variational algorithm for exact molecular simulations on a quantum computer},\ }\href {https://doi.org/10.1038/s41467-019-10988-2} {\bibfield {journal} {\bibinfo {journal} {Nat. 
Commun}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {3007} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Tang}\ \emph {et~al.}(2021)\citenamefont {Tang}, \citenamefont {Shkolnikov}, \citenamefont {Barron}, \citenamefont {Grimsley}, \citenamefont {Mayhall}, \citenamefont {Barnes},\ and\ \citenamefont {Economou}}]{tang2021qubit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.~L.}\ \bibnamefont {Tang}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Shkolnikov}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Barron}}, \bibinfo {author} {\bibfnamefont {H.~R.}\ \bibnamefont {Grimsley}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Mayhall}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}},\ and\ \bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Economou}},\ }\bibfield {title} {\bibinfo {title} {Qubit-{ADAPT-VQE}: an adaptive algorithm for constructing hardware-efficient {A}ns\"atze on a quantum processor},\ }\href {https://doi.org/10.1103/PRXQuantum.2.020310} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. X Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {020310} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ \emph {et~al.}(2019{\natexlab{c}})\citenamefont {McArdle}, \citenamefont {Jones}, \citenamefont {Endo}, \citenamefont {Li}, \citenamefont {Benjamin},\ and\ \citenamefont {Yuan}}]{mcardle2019variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}}, \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}},\ and\ \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}},\ }\bibfield {title} {\bibinfo {title} {Variational {A}nsatz-based quantum simulation of imaginary time evolution},\ }\href {https://www.nature.com/articles/s41534-019-0187-2} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019}{\natexlab{c}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yuan}\ \emph {et~al.}(2019)\citenamefont {Yuan}, \citenamefont {Endo}, \citenamefont {Zhao}, \citenamefont {Li},\ and\ \citenamefont {Benjamin}}]{yuan2019theory} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Yuan}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Endo}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Li}},\ and\ \bibinfo {author} {\bibfnamefont {S.~C.}\ \bibnamefont {Benjamin}},\ }\bibfield {title} {\bibinfo {title} {Theory of variational quantum simulation},\ }\href {https://quantum-journal.org/papers/q-2019-10-07-191/} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {191} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{L}anczos}(1950)}]{lanczos1950iteration} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {{L}anczos}},\ }\bibfield {title} {\bibinfo {title} {An iteration method for the solution of the eigenvalue problem of linear differential and integral operators},\ }\href {https://nvlpubs.nist.gov/nistpubs/jres/045/jresv45n4p255_A1b.pdf} {\bibfield {journal} {\bibinfo {journal} {J. Res. Natl. Bur. 
Stand}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo {pages} {2133} (\bibinfo {year} {1950})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Davidson}(1975)}]{davidsorq1975theiterative} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {E.~R.}\ \bibnamefont {Davidson}},\ }\bibfield {title} {\bibinfo {title} {The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices},\ }\href {https://www.sciencedirect.com/science/article/pii/0021999175900650} {\bibfield {journal} {\bibinfo {journal} {J. Comp. Phys}\ }\textbf {\bibinfo {volume} {17}},\ \bibinfo {pages} {87} (\bibinfo {year} {1975})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Morgan}\ and\ \citenamefont {Scott}(1986)}]{morgan1986generalizations} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~B.}\ \bibnamefont {Morgan}}\ and\ \bibinfo {author} {\bibfnamefont {D.~S.}\ \bibnamefont {Scott}},\ }\bibfield {title} {\bibinfo {title} {Generalizations of {D}avidson's method for computing eigenvalues of sparse symmetric matrices},\ }\href {https://epubs.siam.org/doi/10.1137/0907054} {\bibfield {journal} {\bibinfo {journal} {SIAM J. Sci. Comput}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {817} (\bibinfo {year} {1986})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2017)\citenamefont {McClean}, \citenamefont {Kimchi-Schwartz}, \citenamefont {Carter},\ and\ \citenamefont {De~Jong}}]{mcclean2017hybrid} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {M.~E.}\ \bibnamefont {Kimchi-Schwartz}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Carter}},\ and\ \bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {De~Jong}},\ }\bibfield {title} {\bibinfo {title} {Hybrid quantum-classical hierarchy for mitigation of decoherence and determination of excited states},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.95.042308} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {95}},\ \bibinfo {pages} {042308} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Colless}\ \emph {et~al.}(2018)\citenamefont {Colless}, \citenamefont {Ramasesh}, \citenamefont {Dahlen}, \citenamefont {Blok}, \citenamefont {Kimchi-Schwartz}, \citenamefont {McClean}, \citenamefont {Carter}, \citenamefont {De~Jong},\ and\ \citenamefont {Siddiqi}}]{colless2018computation} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~I.}\ \bibnamefont {Colless}}, \bibinfo {author} {\bibfnamefont {V.~V.}\ \bibnamefont {Ramasesh}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Dahlen}}, \bibinfo {author} {\bibfnamefont {M.~S.}\ \bibnamefont {Blok}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kimchi-Schwartz}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Carter}}, \bibinfo {author} {\bibfnamefont {W.}~\bibnamefont {De~Jong}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Siddiqi}},\ }\bibfield {title} {\bibinfo {title} {Computation of molecular spectra on a quantum processor with an error-resilient algorithm},\ }\href {https://journals.aps.org/prx/abstract/10.1103/PhysRevX.8.011021} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
X}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo {pages} {011021} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Huggins}\ \emph {et~al.}(2020)\citenamefont {Huggins}, \citenamefont {Lee}, \citenamefont {Baek}, \citenamefont {O’Gorman},\ and\ \citenamefont {Whaley}}]{huggins2020non} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Huggins}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {U.}~\bibnamefont {Baek}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {O’Gorman}},\ and\ \bibinfo {author} {\bibfnamefont {K.~B.}\ \bibnamefont {Whaley}},\ }\bibfield {title} {\bibinfo {title} {A non-orthogonal variational quantum eigensolver},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/ab867b} {\bibfield {journal} {\bibinfo {journal} {New J. Phys}\ }\textbf {\bibinfo {volume} {22}},\ \bibinfo {pages} {073009} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Motta}\ \emph {et~al.}(2020{\natexlab{b}})\citenamefont {Motta}, \citenamefont {Sun}, \citenamefont {Tan}, \citenamefont {O’Rourke}, \citenamefont {Ye}, \citenamefont {Minnich}, \citenamefont {Brand{\~a}o},\ and\ \citenamefont {Chan}}]{motta2020determining} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {A.~T.}\ \bibnamefont {Tan}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {O’Rourke}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Ye}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Minnich}}, \bibinfo {author} {\bibfnamefont {F.~G.}\ \bibnamefont {Brand{\~a}o}},\ and\ \bibinfo {author} {\bibfnamefont {G.~K.-L.}\ \bibnamefont {Chan}},\ }\bibfield {title} {\bibinfo {title} {Determining eigenstates and thermal states on a quantum computer using quantum imaginary time evolution},\ }\href {https://www.nature.com/articles/s41567-019-0704-4} {\bibfield {journal} {\bibinfo {journal} {Nat. Phys}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {205} (\bibinfo {year} {2020}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ollitrault}\ \emph {et~al.}(2020)\citenamefont {Ollitrault}, \citenamefont {Kandala}, \citenamefont {Chen}, \citenamefont {Barkoutsos}, \citenamefont {Mezzacapo}, \citenamefont {Pistoia}, \citenamefont {Sheldon}, \citenamefont {Woerner}, \citenamefont {Gambetta},\ and\ \citenamefont {Tavernelli}}]{ollitrault2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Ollitrault}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Kandala}}, \bibinfo {author} {\bibfnamefont {C.-F.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pistoia}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Sheldon}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Woerner}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ }\bibfield {title} {\bibinfo {title} {Quantum equation of motion for computing molecular excitation energies on a noisy quantum processor},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.2.043140} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Research}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {043140} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Parrish}\ and\ \citenamefont {McMahon}(2019)}]{parrish2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Parrish}}\ and\ \bibinfo {author} {\bibfnamefont {P.~L.}\ \bibnamefont {McMahon}},\ }\bibfield {title} {\bibinfo {title} {Quantum filter diagonalization: quantum eigendecomposition without full quantum phase estimation},\ }\href {https://arxiv.org/abs/1909.08925} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1909.08925}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stair}\ \emph {et~al.}(2020)\citenamefont {Stair}, \citenamefont {Huang},\ and\ \citenamefont {Evangelista}}]{stair2020multireference} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~H.}\ \bibnamefont {Stair}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Huang}},\ and\ \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Evangelista}},\ }\bibfield {title} {\bibinfo {title} {A multireference quantum {K}rylov algorithm for strongly correlated electrons},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.9b01125} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {2236} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jamet}\ \emph {et~al.}(2021)\citenamefont {Jamet}, \citenamefont {Agarwal}, \citenamefont {Lupo}, \citenamefont {Browne}, \citenamefont {Weber},\ and\ \citenamefont {Rungger}}]{jamet2021krylov} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Jamet}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Agarwal}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Lupo}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Browne}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Weber}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Rungger}},\ }\bibfield {title} {\bibinfo {title} {Krylov variational quantum algorithm for first principles materials simulations},\ }\href {https://arxiv.org/abs/2105.13298} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2105.13298}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Aharonov}\ \emph {et~al.}(2009)\citenamefont {Aharonov}, \citenamefont {Jones},\ and\ \citenamefont {Landau}}]{aharonov2009polynomial} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Aharonov}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Jones}},\ and\ \bibinfo {author} {\bibfnamefont {Z.}~\bibnamefont {Landau}},\ }\bibfield {title} {\bibinfo {title} {A polynomial quantum algorithm for approximating the {J}ones polynomial},\ }\href {https://doi.org/https://doi.org/10.1007/s00453-008-9168-0} {\bibfield {journal} {\bibinfo {journal} {Algorithmica}\ }\textbf {\bibinfo {volume} {55}},\ \bibinfo {pages} {395} (\bibinfo {year} {2009})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Neuhauser}(1990)}]{neuhauser1990bound} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Neuhauser}},\ }\bibfield {title} {\bibinfo {title} {Bound state eigenfunctions from wave packets: Time-energy resolution},\ }\href {https://aip.scitation.org/doi/10.1063/1.458900} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {93}},\ \bibinfo {pages} {2611} (\bibinfo {year} {1990})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Neuhauser}(1994)}]{neuhauser1994circumventing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Neuhauser}},\ }\bibfield {title} {\bibinfo {title} {Circumventing the {H}eisenberg principle: a rigorous demonstration of filter-diagonalization on a {LiCN} model},\ }\href {https://aip.scitation.org/doi/10.1063/1.467224} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {100}},\ \bibinfo {pages} {5076} (\bibinfo {year} {1994})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rowe}(1968)}]{Rowe} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~J.}\ \bibnamefont {Rowe}},\ }\bibfield {title} {\bibinfo {title} {{Equations-of-Motion Method and the Extended Shell Model}},\ }\href {https://doi.org/10.1103/RevModPhys.40.153} {\bibfield {journal} {\bibinfo {journal} {Rev. Mod. Phys}\ }\textbf {\bibinfo {volume} {40}},\ \bibinfo {pages} {153} (\bibinfo {year} {1968})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ganzhorn}\ \emph {et~al.}(2019)\citenamefont {Ganzhorn}, \citenamefont {Egger}, \citenamefont {Barkoutsos}, \citenamefont {Ollitrault}, \citenamefont {Salis}, \citenamefont {Moll}, \citenamefont {Roth}, \citenamefont {Fuhrer}, \citenamefont {Mueller}, \citenamefont {Woerner}, \citenamefont {Tavernelli},\ and\ \citenamefont {Filipp}}]{Ganzhorn} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Ganzhorn}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Egger}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Ollitrault}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Salis}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Moll}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Roth}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Fuhrer}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Mueller}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Woerner}}, \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Filipp}},\ }\bibfield {title} {\bibinfo {title} {Gate-efficient simulation of molecular eigenstates on a quantum computer},\ }\href {https://doi.org/10.1103/PhysRevApplied.11.044092} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Appl}\ }\textbf {\bibinfo {volume} {11}},\ \bibinfo {pages} {044092} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gao}\ \emph {et~al.}(2021{\natexlab{a}})\citenamefont {Gao}, \citenamefont {Jones}, \citenamefont {Motta}, \citenamefont {Sugawara}, \citenamefont {Watanabe}, \citenamefont {Kobayashi}, \citenamefont {Watanabe}, \citenamefont {Ohnishi}, \citenamefont {Nakamura},\ and\ \citenamefont {Yamamoto}}]{gao2020applications} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Gao}}, \bibinfo {author} {\bibfnamefont {G.~O.}\ \bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Sugawara}}, \bibinfo {author} {\bibfnamefont {H.~C.}\ \bibnamefont {Watanabe}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kobayashi}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Watanabe}}, \bibinfo {author} {\bibfnamefont {Y.-y.}\ \bibnamefont {Ohnishi}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nakamura}},\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Yamamoto}},\ }\bibfield {title} {\bibinfo {title} {Applications of quantum computing for investigations of electronic transitions in phenylsulfonyl-carbazole {TADF} emitters},\ }\href {https://www.nature.com/articles/s41524-021-00540-6} {\bibfield {journal} {\bibinfo {journal} {npj Comput. Mater}\ }\textbf {\bibinfo {volume} {7}},\ \bibinfo {pages} {1} (\bibinfo {year} {2021}{\natexlab{a}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barison}\ \emph {et~al.}(2020)\citenamefont {Barison}, \citenamefont {Galli},\ and\ \citenamefont {Motta}}]{barison2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Barison}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Galli}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}},\ }\bibfield {title} {\bibinfo {title} {Quantum simulations of molecular systems with intrinsic atomic orbitals},\ }\href {https://arxiv.org/abs/2011.08137} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2011.08137}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yeter~Aydeniz}\ \emph {et~al.}(2021)\citenamefont {Yeter~Aydeniz}, \citenamefont {Siopsis},\ and\ \citenamefont {Pooser}}]{yeter2020a} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Yeter~Aydeniz}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Siopsis}},\ and\ \bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Pooser}},\ }\bibfield {title} {\bibinfo {title} {Scattering in the {I}sing model with the quantum {L}anczos algorithm},\ }\href {https://doi.org/10.1088/1367-2630/abe63d} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys}\ }\textbf {\bibinfo {volume} {23}},\ \bibinfo {pages} {043033} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gomes}\ \emph {et~al.}(2020)\citenamefont {Gomes}, \citenamefont {Zhang}, \citenamefont {Berthusen}, \citenamefont {Wang}, \citenamefont {Ho}, \citenamefont {Orth},\ and\ \citenamefont {Yao}}]{gomes2020} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Gomes}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Zhang}}, \bibinfo {author} {\bibfnamefont {N.~F.}\ \bibnamefont {Berthusen}}, \bibinfo {author} {\bibfnamefont {C.-Z.}\ \bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {K.-M.}\ \bibnamefont {Ho}}, \bibinfo {author} {\bibfnamefont {P.~P.}\ \bibnamefont {Orth}},\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Yao}},\ }\bibfield {title} {\bibinfo {title} {Efficient step-merged quantum imaginary time evolution algorithm for quantum chemistry},\ }\href {https://doi.org/10.1021/acs.jctc.0c00666} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {6256–6266} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yeter-Aydeniz}\ \emph {et~al.}(2020)\citenamefont {Yeter-Aydeniz}, \citenamefont {Pooser},\ and\ \citenamefont {Siopsis}}]{yeter2020b} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Yeter-Aydeniz}}, \bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Pooser}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Siopsis}},\ }\bibfield {title} {\bibinfo {title} {Practical quantum computation of chemical and nuclear energy levels using quantum imaginary time evolution and {L}anczos algorithms},\ }\href {https://doi.org/10.1038/s41534-020-00290-1} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {63} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kamakari}\ \emph {et~al.}(2021)\citenamefont {Kamakari}, \citenamefont {Sun}, \citenamefont {Motta},\ and\ \citenamefont {Minnich}}]{kamakari2021digital} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Kamakari}}, \bibinfo {author} {\bibfnamefont {S.-N.}\ \bibnamefont {Sun}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}},\ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Minnich}},\ }\bibfield {title} {\bibinfo {title} {Digital quantum simulation of open quantum systems using quantum imaginary time evolution},\ }\href {https://arxiv.org/abs/2104.07823} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2104.07823}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Cooper}\ and\ \citenamefont {Knowles}(2010)}]{Cooper2010} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Cooper}}\ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Knowles}},\ }\bibfield {title} {\bibinfo {title} {Benchmark studies of variational, unitary and extended coupled cluster methods},\ }\href {https://doi.org/10.1063/1.3520564} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {133}},\ \bibinfo {pages} {234102} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Harsha}\ \emph {et~al.}(2018)\citenamefont {Harsha}, \citenamefont {Shiozaki},\ and\ \citenamefont {Scuseria}}]{harsha2018difference} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Harsha}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Shiozaki}},\ and\ \bibinfo {author} {\bibfnamefont {G.~E.}\ \bibnamefont {Scuseria}},\ }\bibfield {title} {\bibinfo {title} {On the difference between variational and unitary coupled cluster theories},\ }\href {https://doi.org/10.1063/1.5011033} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {148}},\ \bibinfo {pages} {044107} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Lee}\ \emph {et~al.}(2018)\citenamefont {Lee}, \citenamefont {Huggins}, \citenamefont {Head-Gordon},\ and\ \citenamefont {Whaley}}]{lee2018generalized} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Huggins}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Head-Gordon}},\ and\ \bibinfo {author} {\bibfnamefont {K.~B.}\ \bibnamefont {Whaley}},\ }\bibfield {title} {\bibinfo {title} {Generalized unitary coupled cluster wave functions for quantum computation},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.8b01004} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {311} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {O'Gorman}\ \emph {et~al.}(2019)\citenamefont {O'Gorman}, \citenamefont {Huggins}, \citenamefont {Rieffel},\ and\ \citenamefont {Whaley}}]{o2019generalized} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {O'Gorman}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Huggins}}, \bibinfo {author} {\bibfnamefont {E.~G.}\ \bibnamefont {Rieffel}},\ and\ \bibinfo {author} {\bibfnamefont {K.~B.}\ \bibnamefont {Whaley}},\ }\bibfield {title} {\bibinfo {title} {Generalized swap networks for near-term quantum computing},\ }\href {https://arxiv.org/abs/1905.05118} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1905.05118}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Anselmetti}\ \emph {et~al.}(2021)\citenamefont {Anselmetti}, \citenamefont {Wierichs}, \citenamefont {Gogolin},\ and\ \citenamefont {Parrish}}]{anselmetti2021local} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.-L.~R.}\ \bibnamefont {Anselmetti}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wierichs}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gogolin}},\ and\ \bibinfo {author} {\bibfnamefont {R.~M.}\ \bibnamefont {Parrish}},\ }\bibfield {title} {\bibinfo {title} {Local, expressive, quantum-number-preserving {VQE} {A}ns\"{a}tze for fermionic systems},\ }\href {https://arxiv.org/abs/2104.05695} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2104.05695}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gard}\ \emph {et~al.}(2020)\citenamefont {Gard}, \citenamefont {Zhu}, \citenamefont {Barron}, \citenamefont {Mayhall}, \citenamefont {Economou},\ and\ \citenamefont {Barnes}}]{gard2020efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~T.}\ \bibnamefont {Gard}}, \bibinfo {author} 
{\bibfnamefont {L.}~\bibnamefont {Zhu}}, \bibinfo {author} {\bibfnamefont {G.~S.}\ \bibnamefont {Barron}}, \bibinfo {author} {\bibfnamefont {N.~J.}\ \bibnamefont {Mayhall}}, \bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Economou}},\ and\ \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Barnes}},\ }\bibfield {title} {\bibinfo {title} {Efficient symmetry-preserving state preparation circuits for the variational quantum eigensolver algorithm},\ }\href {https://www.nature.com/articles/s41534-019-0240-1} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Higgott}\ \emph {et~al.}(2019)\citenamefont {Higgott}, \citenamefont {Wang},\ and\ \citenamefont {Brierley}}]{higgott2019variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont {Higgott}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wang}},\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Brierley}},\ }\bibfield {title} {\bibinfo {title} {Variational quantum computation of excited states},\ }\href {https://quantum-journal.org/papers/q-2019-07-01-156/} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {156} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ibe}\ \emph {et~al.}(2020)\citenamefont {Ibe}, \citenamefont {Nakagawa}, \citenamefont {Earnest}, \citenamefont {Yamamoto}, \citenamefont {Mitarai}, \citenamefont {Gao},\ and\ \citenamefont {Kobayashi}}]{ibe2020calculating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Ibe}}, \bibinfo {author} {\bibfnamefont {Y.~O.}\ \bibnamefont {Nakagawa}}, \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Earnest}}, \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Yamamoto}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Mitarai}}, \bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Gao}},\ and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Kobayashi}},\ }\bibfield {title} {\bibinfo {title} {Calculating transition amplitudes by variational quantum deflation},\ }\href {https://arxiv.org/abs/2002.11724} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2002.11724}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sokolov}\ \emph {et~al.}(2020)\citenamefont {Sokolov}, \citenamefont {Barkoutsos}, \citenamefont {Ollitrault}, \citenamefont {Greenberg}, \citenamefont {Rice}, \citenamefont {Pistoia},\ and\ \citenamefont {Tavernelli}}]{sokolov2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~O.}\ \bibnamefont {Sokolov}}, \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Ollitrault}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Greenberg}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Rice}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pistoia}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ }\bibfield {title} {\bibinfo {title} {Quantum orbital-optimized unitary coupled cluster methods in the strongly correlated regime: can quantum algorithms outperform their classical equivalents?},\ }\href {https://aip.scitation.org/doi/10.1063/1.5141835} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Phys}\ }\textbf {\bibinfo {volume} {152}},\ \bibinfo {pages} {124107} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rice}\ \emph {et~al.}(2021)\citenamefont {Rice}, \citenamefont {Gujarati}, \citenamefont {Motta}, \citenamefont {Takeshita}, \citenamefont {Lee}, \citenamefont {Latone},\ and\ \citenamefont {Garcia}}]{rice2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Rice}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Gujarati}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Motta}}, \bibinfo {author} {\bibfnamefont {T.~Y.}\ \bibnamefont {Takeshita}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lee}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Latone}},\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Garcia}},\ }\bibfield {title} {\bibinfo {title} {Quantum computation of dominant products in lithium--sulfur batteries},\ }\href {https://aip.scitation.org/doi/10.1063/5.0044068} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {154}},\ \bibinfo {pages} {134115} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Sokolov}\ \emph {et~al.}(2021)\citenamefont {Sokolov}, \citenamefont {Barkoutsos}, \citenamefont {Moeller}, \citenamefont {Suchsland}, \citenamefont {Mazzola},\ and\ \citenamefont {Tavernelli}}]{sokolov2021microcanonical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~O.}\ \bibnamefont {Sokolov}}, \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Moeller}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Suchsland}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Mazzola}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ }\bibfield {title} {\bibinfo {title} {Microcanonical and finite-temperature ab initio molecular dynamics simulations on quantum computers},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.3.013125} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {013125} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Choudhary}(2021)}]{choudhary2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Choudhary}},\ }\bibfield {title} {\bibinfo {title} {Quantum computation for predicting electron and phonon properties of solids},\ }\href {https://iopscience.iop.org/article/10.1088/1361-648X/ac1154} {\bibfield {journal} {\bibinfo {journal} {J. Phys Cond. Mat}\ }\textbf {\bibinfo {volume} {33}},\ \bibinfo {pages} {385501} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ma}\ \emph {et~al.}(2020)\citenamefont {Ma}, \citenamefont {Govoni},\ and\ \citenamefont {Galli}}]{ma2020quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Ma}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Govoni}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Galli}},\ }\bibfield {title} {\bibinfo {title} {Quantum simulations of materials on near-term quantum computers},\ }\href {https://www.nature.com/articles/s41524-020-00353-z} {\bibfield {journal} {\bibinfo {journal} {npj Comput. 
Mater}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McArdle}\ and\ \citenamefont {Tew}(2020)}]{mcardle2020improving} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {McArdle}}\ and\ \bibinfo {author} {\bibfnamefont {D.~P.}\ \bibnamefont {Tew}},\ }\bibfield {title} {\bibinfo {title} {Improving the accuracy of quantum computational chemistry using the transcorrelated method},\ }\href {https://arxiv.org/abs/2006.11181} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2006.11181}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rubin}(2016)}]{rubin2016hybrid} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}},\ }\bibfield {title} {\bibinfo {title} {A hybrid classical/quantum approach for large-scale studies of quantum systems with density matrix embedding theory},\ }\href {https://arxiv.org/abs/1610.06910} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1610.06910}\ } (\bibinfo {year} {2016})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Dhawan}\ \emph {et~al.}(2020)\citenamefont {Dhawan}, \citenamefont {Metcalf},\ and\ \citenamefont {Zgid}}]{dhawan2020dynamical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Dhawan}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Metcalf}},\ and\ \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Zgid}},\ }\bibfield {title} {\bibinfo {title} {Dynamical self-energy mapping ({DSEM}) for quantum computing},\ }\href {https://arxiv.org/abs/2010.05441} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2010.05441}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Metcalf}\ \emph {et~al.}(2020)\citenamefont {Metcalf}, \citenamefont {Bauman}, \citenamefont {Kowalski},\ and\ \citenamefont {De~Jong}}]{metcalf2020resource} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Metcalf}}, \bibinfo {author} {\bibfnamefont {N.~P.}\ \bibnamefont {Bauman}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kowalski}},\ and\ \bibinfo {author} {\bibfnamefont {W.~A.}\ \bibnamefont {De~Jong}},\ }\bibfield {title} {\bibinfo {title} {Resource-efficient chemistry on quantum computers with the variational quantum eigensolver and the double unitary coupled-cluster approach},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.0c00421} {\bibfield {journal} {\bibinfo {journal} {J. Chem. 
Theory Comput}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {6165} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kawashima}\ \emph {et~al.}(2021)\citenamefont {Kawashima}, \citenamefont {Coons}, \citenamefont {Nam}, \citenamefont {Lloyd}, \citenamefont {Matsuura}, \citenamefont {Garza}, \citenamefont {Johri}, \citenamefont {Huntington}, \citenamefont {Senicourt}, \citenamefont {Maksymov} \emph {et~al.}}]{kawashima2021efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Kawashima}}, \bibinfo {author} {\bibfnamefont {M.~P.}\ \bibnamefont {Coons}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nam}}, \bibinfo {author} {\bibfnamefont {E.}~\bibnamefont {Lloyd}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Matsuura}}, \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {Garza}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Johri}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Huntington}}, \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Senicourt}}, \bibinfo {author} {\bibfnamefont {A.~O.}\ \bibnamefont {Maksymov}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Efficient and accurate electronic structure simulation demonstrated on a trapped-ion quantum computer},\ }\href {https://arxiv.org/abs/2102.07045} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2102.07045}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rossmannek}\ \emph {et~al.}(2021)\citenamefont {Rossmannek}, \citenamefont {Barkoutsos}, \citenamefont {Ollitrault},\ and\ \citenamefont {Tavernelli}}]{rossmannek2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Rossmannek}}, \bibinfo {author} {\bibfnamefont {P.~K.}\ \bibnamefont {Barkoutsos}}, \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Ollitrault}},\ and\ \bibinfo {author} {\bibfnamefont {I.}~\bibnamefont {Tavernelli}},\ }\bibfield {title} {\bibinfo {title} {Quantum {HF/DFT}-embedding algorithms for electronic structure calculations: scaling up to complex molecular systems},\ }\href {https://aip.scitation.org/doi/10.1063/5.0029536} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Phys}\ }\textbf {\bibinfo {volume} {154}},\ \bibinfo {pages} {114105} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Stair}\ and\ \citenamefont {Evangelista}(2021)}]{stair2021simulating} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {N.~H.}\ \bibnamefont {Stair}}\ and\ \bibinfo {author} {\bibfnamefont {F.~A.}\ \bibnamefont {Evangelista}},\ }\bibfield {title} {\bibinfo {title} {Simulating many-body systems with a projective quantum eigensolver},\ }\href {https://doi.org/10.1103/PRXQuantum.2.030301} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {030301} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kuroiwa}\ and\ \citenamefont {Nakagawa}(2021)}]{kuroiwa2021penalty} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Kuroiwa}}\ and\ \bibinfo {author} {\bibfnamefont {Y.~O.}\ \bibnamefont {Nakagawa}},\ }\bibfield {title} {\bibinfo {title} {Penalty methods for a variational quantum eigensolver},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.3.013197} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Research}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {013197} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Ryabinkin}\ \emph {et~al.}(2018{\natexlab{b}})\citenamefont {Ryabinkin}, \citenamefont {Genin},\ and\ \citenamefont {Izmaylov}}]{ryabinkin2018constrained} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {I.~G.}\ \bibnamefont {Ryabinkin}}, \bibinfo {author} {\bibfnamefont {S.~N.}\ \bibnamefont {Genin}},\ and\ \bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Izmaylov}},\ }\bibfield {title} {\bibinfo {title} {Constrained variational quantum eigensolver: Quantum computer search engine in the {F}ock space},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.8b00943} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {249} (\bibinfo {year} {2018}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gonthier}\ \emph {et~al.}(2020)\citenamefont {Gonthier}, \citenamefont {Radin}, \citenamefont {Buda}, \citenamefont {Doskocil}, \citenamefont {Abuan},\ and\ \citenamefont {Romero}}]{gonthier2020identifying} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~F.}\ \bibnamefont {Gonthier}}, \bibinfo {author} {\bibfnamefont {M.~D.}\ \bibnamefont {Radin}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Buda}}, \bibinfo {author} {\bibfnamefont {E.~J.}\ \bibnamefont {Doskocil}}, \bibinfo {author} {\bibfnamefont {C.~M.}\ \bibnamefont {Abuan}},\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Romero}},\ }\bibfield {title} {\bibinfo {title} {Identifying challenges towards practical quantum advantage through resource estimation: the measurement roadblock in the variational quantum eigensolver},\ }\href {https://arxiv.org/abs/2012.04001} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2012.04001}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wecker}\ \emph {et~al.}(2015)\citenamefont {Wecker}, \citenamefont {Hastings},\ and\ \citenamefont {Troyer}}]{wecker2015progress} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Wecker}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Hastings}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Troyer}},\ }\bibfield {title} {\bibinfo {title} {Progress towards practical quantum variational algorithms},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.92.042303} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {92}},\ \bibinfo {pages} {042303} (\bibinfo {year} {2015})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Jena}\ \emph {et~al.}(2019)\citenamefont {Jena}, \citenamefont {Genin},\ and\ \citenamefont {Mosca}}]{jena2019pauli} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Jena}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Genin}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Mosca}},\ }\bibfield {title} {\bibinfo {title} {Pauli partitioning with respect to gate sets},\ }\href {https://arxiv.org/abs/1907.07859} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1907.07859}\ } (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Izmaylov}\ \emph {et~al.}(2019)\citenamefont {Izmaylov}, \citenamefont {Yen}, \citenamefont {Lang},\ and\ \citenamefont {Verteletskyi}}]{izmaylov2019unitary} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~F.}\ \bibnamefont {Izmaylov}}, \bibinfo {author} {\bibfnamefont {T.-C.}\ \bibnamefont {Yen}}, \bibinfo {author} {\bibfnamefont {R.~A.}\ \bibnamefont {Lang}},\ and\ \bibinfo {author} {\bibfnamefont {V.}~\bibnamefont {Verteletskyi}},\ }\bibfield {title} {\bibinfo {title} {Unitary partitioning approach to the measurement problem in the variational quantum eigensolver method},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jctc.9b00791} {\bibfield {journal} {\bibinfo {journal} {J. Chem. Theory Comput}\ }\textbf {\bibinfo {volume} {16}},\ \bibinfo {pages} {190} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {K{\"u}bler}\ \emph {et~al.}(2020)\citenamefont {K{\"u}bler}, \citenamefont {Arrasmith}, \citenamefont {Cincio},\ and\ \citenamefont {Coles}}]{kubler2020adaptive} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {K{\"u}bler}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Arrasmith}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Cincio}},\ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Coles}},\ }\bibfield {title} {\bibinfo {title} {An adaptive optimizer for measurement-frugal variational algorithms},\ }\href {https://arxiv.org/abs/1909.09083} {\bibfield {journal} {\bibinfo {journal} {Quantum}\ }\textbf {\bibinfo {volume} {4}},\ \bibinfo {pages} {263} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Zhao}\ \emph {et~al.}(2020)\citenamefont {Zhao}, \citenamefont {Tranter}, \citenamefont {Kirby}, \citenamefont {Ung}, \citenamefont {Miyake},\ and\ \citenamefont {Love}}]{zhao2020measurement} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Zhao}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Tranter}}, \bibinfo {author} {\bibfnamefont {W.~M.}\ \bibnamefont {Kirby}}, \bibinfo {author} {\bibfnamefont {S.~F.}\ \bibnamefont {Ung}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Miyake}},\ and\ \bibinfo {author} {\bibfnamefont {P.~J.}\ \bibnamefont {Love}},\ }\bibfield {title} {\bibinfo {title} {Measurement reduction in variational quantum algorithms},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.101.062322} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
A}\ }\textbf {\bibinfo {volume} {101}},\ \bibinfo {pages} {062322} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Wang}\ \emph {et~al.}(2021)\citenamefont {Wang}, \citenamefont {Koh}, \citenamefont {Johnson},\ and\ \citenamefont {Cao}}]{wang2021minimizing} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Wang}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont {Koh}}, \bibinfo {author} {\bibfnamefont {P.~D.}\ \bibnamefont {Johnson}},\ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}},\ }\bibfield {title} {\bibinfo {title} {Minimizing estimation runtime on noisy quantum computers},\ }\href {https://doi.org/10.1103/PRXQuantum.2.010346} {\bibfield {journal} {\bibinfo {journal} {PRX Quantum}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {010346} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Torlai}\ \emph {et~al.}(2020)\citenamefont {Torlai}, \citenamefont {Mazzola}, \citenamefont {Carleo},\ and\ \citenamefont {Mezzacapo}}]{torlai2020precise} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Torlai}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Mazzola}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carleo}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}},\ }\bibfield {title} {\bibinfo {title} {Precise measurement of quantum observables with neural-network estimators},\ }\href {https://doi.org/10.1103/physrevresearch.2.022060} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {2}},\ \bibinfo {pages} {022060} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hadfield}\ \emph {et~al.}(2020)\citenamefont {Hadfield}, \citenamefont {Bravyi}, \citenamefont {Raymond},\ and\ \citenamefont {Mezzacapo}}]{hadfield2020measurements} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hadfield}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Bravyi}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Raymond}},\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}},\ }\bibfield {title} {\bibinfo {title} {Measurements of quantum {H}amiltonians with locally-biased classical shadows},\ }\href {https://arxiv.org/abs/2006.15788} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2006.15788}\ } (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Hillmich}\ \emph {et~al.}(2021)\citenamefont {Hillmich}, \citenamefont {Hadfield}, \citenamefont {Raymond}, \citenamefont {Mezzacapo},\ and\ \citenamefont {Wille}}]{hillmich2021decision} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Hillmich}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Hadfield}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Raymond}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Mezzacapo}},\ and\ \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {Wille}},\ }\bibfield {title} {\bibinfo {title} {Decision diagrams for quantum measurements with shallow circuits},\ }\href {https://arxiv.org/abs/2105.06932} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2105.06932}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Barison}\ \emph {et~al.}(2021)\citenamefont {Barison}, \citenamefont {Vicentini},\ and\ \citenamefont {Carleo}}]{barison2021efficient} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont 
{S.}~\bibnamefont {Barison}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Vicentini}},\ and\ \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Carleo}},\ }\bibfield {title} {\bibinfo {title} {An efficient quantum algorithm for the time evolution of parameterized circuits},\ }\href {https://quantum-journal.org/papers/q-2021-07-28-512/pdf/} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2101.04579}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Benedetti}\ \emph {et~al.}(2021)\citenamefont {Benedetti}, \citenamefont {Fiorentini},\ and\ \citenamefont {Lubasch}}]{benedetti2021hardware} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Benedetti}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Fiorentini}},\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Lubasch}},\ }\bibfield {title} {\bibinfo {title} {Hardware-efficient variational quantum algorithms for time evolution},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.3.033083} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Research}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {033083} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Foss-Feig}\ \emph {et~al.}(2021)\citenamefont {Foss-Feig}, \citenamefont {Hayes}, \citenamefont {Dreiling}, \citenamefont {Figgatt}, \citenamefont {Gaebler}, \citenamefont {Moses}, \citenamefont {Pino},\ and\ \citenamefont {Potter}}]{foss2021holographic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Foss-Feig}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Hayes}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Dreiling}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Figgatt}}, \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Gaebler}}, \bibinfo {author} {\bibfnamefont {S.~A.}\ \bibnamefont {Moses}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Pino}},\ and\ \bibinfo {author} {\bibfnamefont {A.~C.}\ \bibnamefont {Potter}},\ }\bibfield {title} {\bibinfo {title} {Holographic quantum algorithms for simulating correlated spin systems},\ }\href {https://journals.aps.org/prresearch/abstract/10.1103/PhysRevResearch.3.033002} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
Research}\ }\textbf {\bibinfo {volume} {3}},\ \bibinfo {pages} {033002} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Kattem{\"o}lle}\ and\ \citenamefont {van Wezel}(2021)}]{kattemolle2021variational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Kattem{\"o}lle}}\ and\ \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {van Wezel}},\ }\bibfield {title} {\bibinfo {title} {Variational quantum eigensolver for the {H}eisenberg antiferromagnet on the kagome lattice},\ }\href {https://arxiv.org/abs/2108.02175} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2108.02175}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Boyn}\ \emph {et~al.}(2021)\citenamefont {Boyn}, \citenamefont {Lykhin}, \citenamefont {Smart}, \citenamefont {Gagliardi},\ and\ \citenamefont {Mazziotti}}]{boyn2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.-N.}\ \bibnamefont {Boyn}}, \bibinfo {author} {\bibfnamefont {A.~O.}\ \bibnamefont {Lykhin}}, \bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Smart}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Gagliardi}},\ and\ \bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Mazziotti}},\ }\bibfield {title} {\bibinfo {title} {Quantum-classical hybrid algorithm for the simulation of all-electron correlation},\ }\href {https://arxiv.org/pdf/2106.11972.pdf} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2106.11972}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Mazziotti}\ \emph {et~al.}(2021)\citenamefont {Mazziotti}, \citenamefont {Smart},\ and\ \citenamefont {Mazziotti}}]{mazziotti2021quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.~A.}\ \bibnamefont {Mazziotti}}, \bibinfo {author} {\bibfnamefont {S.~E.}\ \bibnamefont {Smart}},\ and\ \bibinfo {author} {\bibfnamefont {A.~R.}\ \bibnamefont {Mazziotti}},\ }\bibfield {title} {\bibinfo {title} {Quantum simulation of molecules without fermionic encoding of the wave function},\ }\href {https://arxiv.org/abs/2101.11607} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:2101.11607}\ } (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {{IBM Quantum}}(2020)}]{ibm2020services} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibnamefont {{IBM Quantum}}},\ }\href@noop {} {\bibinfo {title} {Quantum computing services}},\ \bibinfo {howpublished} {https://www.ibm.com/quantum-computing/} (\bibinfo {year} {2020})\BibitemShut {NoStop} \bibitem [{\citenamefont {{Rigetti Computing}}(2020)}]{rigetti2020services} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibnamefont {{Rigetti Computing}}},\ }\href@noop {} {\bibinfo {title} {Quantum cloud services}},\ \bibinfo {howpublished} {https://qcs.rigetti.com/dashboard} (\bibinfo {year} {2020})\BibitemShut {NoStop} \bibitem [{\citenamefont {Smith}\ \emph {et~al.}(2016)\citenamefont {Smith}, \citenamefont {Curtis},\ and\ \citenamefont {Zeng}}]{smith2016practical} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~S.}\ \bibnamefont {Smith}}, \bibinfo {author} {\bibfnamefont {M.~J.}\ \bibnamefont {Curtis}},\ and\ \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Zeng}},\ }\bibfield {title} {\bibinfo {title} {A practical quantum instruction set architecture},\ }\href {https://arxiv.org/abs/1608.03355} {\bibfield {journal} {\bibinfo {journal} {arXiv preprint arXiv:1608.03355}\ } (\bibinfo {year} {2016})}\BibitemShut 
{NoStop} \bibitem [{\citenamefont {Abraham}\ \emph {et~al.}(2019)\citenamefont {Abraham} \emph {et~al.}}]{Qiskit} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Abraham}} \emph {et~al.},\ }\href {https://doi.org/10.5281/zenodo.2562110} {\bibinfo {title} {Qiskit: An open-source framework for quantum computing}} (\bibinfo {year} {2019})\BibitemShut {NoStop} \bibitem [{\citenamefont {McClean}\ \emph {et~al.}(2020)\citenamefont {McClean}, \citenamefont {Rubin}, \citenamefont {Sung}, \citenamefont {Kivlichan}, \citenamefont {Bonet-Monroig}, \citenamefont {Cao}, \citenamefont {Dai}, \citenamefont {Fried}, \citenamefont {Gidney}, \citenamefont {Gimby} \emph {et~al.}}]{mcclean2020openfermion} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {McClean}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Rubin}}, \bibinfo {author} {\bibfnamefont {K.~J.}\ \bibnamefont {Sung}}, \bibinfo {author} {\bibfnamefont {I.~D.}\ \bibnamefont {Kivlichan}}, \bibinfo {author} {\bibfnamefont {X.}~\bibnamefont {Bonet-Monroig}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Cao}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Dai}}, \bibinfo {author} {\bibfnamefont {E.~S.}\ \bibnamefont {Fried}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Gidney}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Gimby}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {{O}pen{F}ermion: the electronic structure package for quantum computers},\ }\href {https://iopscience.iop.org/article/10.1088/2058-9565/ab8ebc} {\bibfield {journal} {\bibinfo {journal} {Quant. Sci. Tech}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {034014} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Maslov}(2017)}]{maslov2017basic} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Maslov}},\ }\bibfield {title} {\bibinfo {title} {Basic circuit compilation techniques for an ion-trap quantum machine},\ }\href {https://iopscience.iop.org/article/10.1088/1367-2630/aa5e47/meta} {\bibfield {journal} {\bibinfo {journal} {New J. 
Phys}\ }\textbf {\bibinfo {volume} {19}},\ \bibinfo {pages} {023035} (\bibinfo {year} {2017})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Nam}\ \emph {et~al.}(2020)\citenamefont {Nam}, \citenamefont {Chen}, \citenamefont {Pisenti}, \citenamefont {Wright}, \citenamefont {Delaney}, \citenamefont {Maslov}, \citenamefont {Brown}, \citenamefont {Allen}, \citenamefont {Amini}, \citenamefont {Apisdorf} \emph {et~al.}}]{nam2020ground} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Nam}}, \bibinfo {author} {\bibfnamefont {J.-S.}\ \bibnamefont {Chen}}, \bibinfo {author} {\bibfnamefont {N.~C.}\ \bibnamefont {Pisenti}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont {Wright}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Delaney}}, \bibinfo {author} {\bibfnamefont {D.}~\bibnamefont {Maslov}}, \bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Allen}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Amini}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Apisdorf}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Ground-state energy estimation of the water molecule on a trapped-ion quantum computer},\ }\href {https://www.nature.com/articles/s41534-020-0259-3} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {1} (\bibinfo {year} {2020})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Gao}\ \emph {et~al.}(2021{\natexlab{b}})\citenamefont {Gao}, \citenamefont {Nakamura}, \citenamefont {Gujarati}, \citenamefont {Jones}, \citenamefont {Rice}, \citenamefont {Wood}, \citenamefont {Pistoia}, \citenamefont {Garcia},\ and\ \citenamefont {Yamamoto}}]{gao2021computational} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont {Gao}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {Nakamura}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Gujarati}}, \bibinfo {author} {\bibfnamefont {G.~O.}\ \bibnamefont {Jones}}, \bibinfo {author} {\bibfnamefont {J.~E.}\ \bibnamefont {Rice}}, \bibinfo {author} {\bibfnamefont {S.~P.}\ \bibnamefont {Wood}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Pistoia}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Garcia}},\ and\ \bibinfo {author} {\bibfnamefont {N.}~\bibnamefont {Yamamoto}},\ }\bibfield {title} {\bibinfo {title} {Computational investigations of the lithium superoxide dimer rearrangement on noisy quantum devices},\ }\href {https://pubs.acs.org/doi/10.1021/acs.jpca.0c09530} {\bibfield {journal} {\bibinfo {journal} {J. Phys. Chem. 
A}\ }\textbf {\bibinfo {volume} {125}},\ \bibinfo {pages} {1827} (\bibinfo {year} {2021}{\natexlab{b}})}\BibitemShut {NoStop} \bibitem [{\citenamefont {McCaskey}\ \emph {et~al.}(2019)\citenamefont {McCaskey}, \citenamefont {Parks}, \citenamefont {Jakowski}, \citenamefont {Moore}, \citenamefont {Morris}, \citenamefont {Humble},\ and\ \citenamefont {Pooser}}]{mccaskey2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont {McCaskey}}, \bibinfo {author} {\bibfnamefont {Z.~P.}\ \bibnamefont {Parks}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Jakowski}}, \bibinfo {author} {\bibfnamefont {S.~V.}\ \bibnamefont {Moore}}, \bibinfo {author} {\bibfnamefont {T.~D.}\ \bibnamefont {Morris}}, \bibinfo {author} {\bibfnamefont {T.~S.}\ \bibnamefont {Humble}},\ and\ \bibinfo {author} {\bibfnamefont {R.~C.}\ \bibnamefont {Pooser}},\ }\bibfield {title} {\bibinfo {title} {Quantum chemistry as a benchmark for near-term quantum computers},\ }\href {https://www.nature.com/articles/s41534-019-0209-0} {\bibfield {journal} {\bibinfo {journal} {npj Quantum Inf}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages} {1} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Krantz}\ \emph {et~al.}(2019)\citenamefont {Krantz}, \citenamefont {Kjaergaard}, \citenamefont {Yan}, \citenamefont {Orlando}, \citenamefont {Gustavsson},\ and\ \citenamefont {Oliver}}]{krantz2019quantum} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Krantz}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gustavsson}},\ and\ \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\bibfield {title} {\bibinfo {title} {A quantum engineer's guide to superconducting qubits},\ }\href {https://aip.scitation.org/doi/10.1063/1.5089550?ai=1gvoi&mi=3ricys&af=R&gclid=Cj0KCQjwhr2FBhDbARIsACjwLo2-aKSjuTESOu-rSw3mnuiAYxarV6MDBiqydP-4fkxeNINfTif3BZAaAjI8EALw_wcB} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. 
Rev}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {021318} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Devoret}\ \emph {et~al.}(2004)\citenamefont {Devoret}, \citenamefont {Wallraff},\ and\ \citenamefont {Martinis}}]{devoret2004superconducting} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {M.~H.}\ \bibnamefont {Devoret}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Wallraff}},\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Martinis}},\ }\bibfield {title} {\bibinfo {title} {Superconducting qubits: A short review},\ }\href {https://arxiv.org/abs/cond-mat/0411174} {\bibfield {journal} {\bibinfo {journal} {cond-mat/0411174}\ } (\bibinfo {year} {2004})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Bruzewicz}\ \emph {et~al.}(2019)\citenamefont {Bruzewicz}, \citenamefont {Chiaverini}, \citenamefont {McConnell},\ and\ \citenamefont {Sage}}]{bruzewicz2019trapped} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.~D.}\ \bibnamefont {Bruzewicz}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {R.}~\bibnamefont {McConnell}},\ and\ \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Sage}},\ }\bibfield {title} {\bibinfo {title} {Trapped-ion quantum computing: Progress and challenges},\ }\href {https://aip.scitation.org/doi/10.1063/1.5088164} {\bibfield {journal} {\bibinfo {journal} {Appl. Phys. Rev}\ }\textbf {\bibinfo {volume} {6}},\ \bibinfo {pages} {021314} (\bibinfo {year} {2019})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Brown}\ \emph {et~al.}(2021)\citenamefont {Brown}, \citenamefont {Chiaverini}, \citenamefont {Sage},\ and\ \citenamefont {H{\"a}ffner}}]{brown2021materials} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont {Brown}}, \bibinfo {author} {\bibfnamefont {J.}~\bibnamefont {Chiaverini}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Sage}},\ and\ \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont {H{\"a}ffner}},\ }\bibfield {title} {\bibinfo {title} {Materials challenges for trapped-ion quantum computers},\ }\href {https://www.nature.com/articles/s41578-021-00292-1} {\bibfield {journal} {\bibinfo {journal} {Nat. Rev. Mater}\ ,\ \bibinfo {pages} {1}} (\bibinfo {year} {2021})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Rigetti}\ and\ \citenamefont {Devoret}(2010)}]{rigetti2010fully} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rigetti}}\ and\ \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Devoret}},\ }\bibfield {title} {\bibinfo {title} {Fully microwave-tunable universal gates in superconducting qubits with linear couplings and fixed transition frequencies},\ }\href {https://journals.aps.org/prb/abstract/10.1103/PhysRevB.81.134507} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
B}\ }\textbf {\bibinfo {volume} {81}},\ \bibinfo {pages} {134507} (\bibinfo {year} {2010})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Chow}\ \emph {et~al.}(2011)\citenamefont {Chow}, \citenamefont {C{\'o}rcoles}, \citenamefont {Gambetta}, \citenamefont {Rigetti}, \citenamefont {Johnson}, \citenamefont {Smolin}, \citenamefont {Rozen}, \citenamefont {Keefe}, \citenamefont {Rothwell}, \citenamefont {Ketchen} \emph {et~al.}}]{chow2011simple} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Chow}}, \bibinfo {author} {\bibfnamefont {A.~D.}\ \bibnamefont {C{\'o}rcoles}}, \bibinfo {author} {\bibfnamefont {J.~M.}\ \bibnamefont {Gambetta}}, \bibinfo {author} {\bibfnamefont {C.}~\bibnamefont {Rigetti}}, \bibinfo {author} {\bibfnamefont {B.~R.}\ \bibnamefont {Johnson}}, \bibinfo {author} {\bibfnamefont {J.~A.}\ \bibnamefont {Smolin}}, \bibinfo {author} {\bibfnamefont {J.~R.}\ \bibnamefont {Rozen}}, \bibinfo {author} {\bibfnamefont {G.~A.}\ \bibnamefont {Keefe}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Rothwell}}, \bibinfo {author} {\bibfnamefont {M.~B.}\ \bibnamefont {Ketchen}}, \emph {et~al.},\ }\bibfield {title} {\bibinfo {title} {Simple all-microwave entangling gate for fixed-frequency superconducting qubits},\ }\href {https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.107.080502} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett}\ }\textbf {\bibinfo {volume} {107}},\ \bibinfo {pages} {080502} (\bibinfo {year} {2011})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Yan}\ \emph {et~al.}(2018)\citenamefont {Yan}, \citenamefont {Krantz}, \citenamefont {Sung}, \citenamefont {Kjaergaard}, \citenamefont {Campbell}, \citenamefont {Orlando}, \citenamefont {Gustavsson},\ and\ \citenamefont {Oliver}}]{yan2018tunable} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont {Yan}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Krantz}}, \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Sung}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont {Kjaergaard}}, \bibinfo {author} {\bibfnamefont {D.~L.}\ \bibnamefont {Campbell}}, \bibinfo {author} {\bibfnamefont {T.~P.}\ \bibnamefont {Orlando}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Gustavsson}},\ and\ \bibinfo {author} {\bibfnamefont {W.~D.}\ \bibnamefont {Oliver}},\ }\bibfield {title} {\bibinfo {title} {Tunable coupling scheme for implementing high-fidelity two-qubit gates},\ }\href {https://journals.aps.org/prapplied/abstract/10.1103/PhysRevApplied.10.054062} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Appl}\ }\textbf {\bibinfo {volume} {10}},\ \bibinfo {pages} {054062} (\bibinfo {year} {2018})}\BibitemShut {NoStop} \bibitem [{\citenamefont {Peres}(1985)}]{peres1985reversible} \BibitemOpen \bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Peres}},\ }\bibfield {title} {\bibinfo {title} {Reversible logic and quantum computers},\ }\href {https://journals.aps.org/pra/abstract/10.1103/PhysRevA.32.3266} {\bibfield {journal} {\bibinfo {journal} {Phys. Rev. 
\begin{definition}[Definition:Linear Equation] A '''linear equation''' is an equation in the form: :$b = a_1 x_1 + a_2 x_2 + \cdots + a_n x_n$ where all of $a_1, \ldots, a_n, x_1, \ldots, x_n, b$ are elements of a given field. The point is that the exponent of each of the $x$ terms in such an equation is $1$. \end{definition}
ProofWiki
LtrDetector: A tool-suite for detecting long terminal repeat retrotransposons de-novo Joseph D. Valencia1 & Hani Z. Girgis ORCID: orcid.org/0000-0002-6080-03201 Long terminal repeat retrotransposons are the most abundant transposons in plants. They play important roles in alternative splicing, recombination, gene regulation, and defense mechanisms. Large-scale sequencing projects for plant genomes are currently underway. Software tools are important for annotating long terminal repeat retrotransposons in these newly available genomes. However, the available tools are not very sensitive to known elements and perform inconsistently on different genomes. Some are hard to install or obsolete. They may struggle to process large plant genomes. None can be executed in parallel out of the box and very few have features to support visual review of new elements. To overcome these limitations, we developed LtrDetector, which uses techniques inspired by signal-processing. We compared LtrDetector to LTR_Finder and LTRharvest, the two most successful predecessor tools, on six plant genomes. For each organism, we constructed a ground truth data set based on queries from a consensus sequence database. According to this evaluation, LtrDetector was the most sensitive tool, achieving 16–23% improvement in sensitivity over LTRharvest and 21% improvement over LTR_Finder. All three tools had low false positive rates, with LtrDetector achieving 98.2% precision, in between its two competitors. Overall, LtrDetector provides the best compromise between high sensitivity and low false positive rate while requiring moderate time and utilizing memory available on personal computers. LtrDetector uses a novel methodology revolving around k-mer distributions, which allows it to produce high-quality results using relatively lightweight procedures. It is easy to install and use. It is not species specific, performing well using its default parameters on genomes of varying size and repeat content. It is automatically configured for parallel execution and runs efficiently on an ordinary personal computer. It includes a k-mer scores visualization tool to facilitate manual review of the identified elements. These features make LtrDetector an attractive tool for future annotation projects involving long terminal repeat retrotransposons. Formerly considered "junk DNA", the intergenic sequences of genomes are attracting increased attention among biologists. A particularly striking feature of these regions is the prevalence of transposable elements (TEs), a type of repeated sequence. TEs include class I elements, which replicate using RNA to "copy-and-paste" themselves, and class II elements, which replicate via a "cut-and-paste" mechanism using DNA as an intermediate [1]. Barbara McClintock discovered transposons in the 1940s and the 1950s while studying the maize genome [2]. TEs are common to all eukaryotes, comprising around 45% of the human genome and up to 80% of some plants like maize and wheat [3, 4]. TEs have several important functions. Bennetzen and Wang highlight the known functions of plant TEs [5]. Transposons are the major factor affecting the sizes of plant genomes [6–8]. Under stressful conditions, they can rearrange a genome [9–11]. TEs play roles in relocating genes [12, 13] and generating new genes [14, 15] and new pseudo genes [16, 17]. They can contribute to centromere function [18, 19]. 
TEs can regulate the expression of nearby genes via several mechanisms including: (i) providing regulatory elements, such as promoters and enhancers, to nearby genes [14, 20–22]; (ii) inserting themselves into genes, then targeting the epigenetic regulatory system [23]; (iii) producing small interfering RNA specific to host genes [24–26]; and (iv) generating new micro RNA genes modulating host genes [27–29]. Transposons have been utilized in cloning plant genes in a technique called transposon tagging [30–32]. They also have the potential to become a new frontier in enhancing the productivity of crops [33, 34]. Long Terminal Repeat retrotransposons (LTR-RTs) are a particularly interesting type of class I transposable element related to retroviruses. LTR-RTs are widespread in plants and are considered one of their primary evolutionary mechanisms [35]. Gonzalez et al. summarize some of their functions [36]. LTR-RTs can insert adjacent to and inside of genes and promote alternative splicing [37]. They play roles in recombination, epigenetic control [38, 39], and other forms of regulation [36]. LTR-RTs have been found with regulatory motifs that promote defense mechanisms in damaged plant tissues [40]. They can also serve as genomic markers for evolutionary phylogeny [41]. LTR-RTs are named for their characteristic direct repeat — typically 100–6000 base pairs (bp) long in plants. These direct repeats surround interior coding regions (the gag and pol genes). Lerat suggests 5 kbp–9 kbp as a size range for LTR-RTs [1], but based on the consensus sequences of plant LTR-RTs, their lengths can exceed 20 kbp. Computational tools are extremely important in locating repeated sequences, including LTR-RTs. Tools can be roughly divided into knowledge-based tools, which leverage consensus sequence databases to search for repeats, and de-novo tools, which use internal sequence comparison and structural features to search for repeats without prior knowledge about the target sequence [1]. Knowledge-based methods include well-known bioinformatics software such as NCBI BLAST [42], RepeatMasker (http://www.repeatmasker.org), and Censor (https://www.girinst.org/downloads/software/censor/); they can be utilized in locating all types of known TEs including LTR-RTs. However, if the sequence of the repetitive element is unknown, tools like these cannot find copies in a genome. Several methods for locating all types of TEs de-novo have been developed [43–46]. Tools built specifically for detecting LTR-RTs include LTR_STRUC [47], LTR_seq [48], MGEScan-LTR [49], LTR_Finder [50], and LTRharvest [51]. LTR_retriever is a post-processing tool, which may help increase the accuracy of de-novo approaches [52]. LTRsift [53] and Inpactor [54] are other post-processing tools that cluster LTR-RTs into families and allow additional analyses. These tools face a variety of usability, scalability, and accuracy concerns. For example, LTR_STRUC, one of the pioneering tools for locating LTR-RTs, was developed exclusively for an old version of Windows, making it difficult to use nowadays. Several tools have external dependencies which greatly complicate their installation. None of them take advantage of the parallel multi-core architecture of modern personal computers. Some may struggle to process larger plant genomes such as the barley genome on an ordinary personal computer. Some tools are highly sensitive to species-specific parameters. All produce false positive predictions and do not retrieve all known LTRs.
Finally, only a few of these tools were designed with post-processing manual review in mind. Thousands of plant genomes are being sequenced currently and in the near future. The 10KP Project for plant genomes (https://db.cngb.org/10kp/) and the Earth Biogenome Project (https://www.earthbiogenome.org) aim at sequencing a large number of plant genomes. This expansion of genomic data creates an urgent need for modern software tools to aid in detecting LTR-RTs in the new plant genomes; such tools should remedy the limitations of the currently available tools. To this end, we have developed LtrDetector, which is a software tool for detecting LTR-RTs. LtrDetector depends on techniques inspired by signal processing. It is easy to install because it does not have any external dependencies. It can run on multiple machine cores in parallel, taking advantage of the advanced hardware available on personal computers. It is not species specific. It is more sensitive to known LTR-RTs than the related tools. It can process larger genomes such as the barley genome. It can produce images to facilitate the manual review/annotation of the newly located LTR-RTs. Our efforts have resulted in the following contributions: The LtrDetector software for discovering LTR retrotransposons in assembled genomes. LtrDetector is available on GitHub (https://github.com/TulsaBioinformaticsToolsmith/LtrDetector) and in Additional file 1. Visualization script to view scores, which should aid in the manual verification of newly found elements — available in Additional file 1 and the GitHub repository. Novel pipeline to generate ground truth (sequences of known LTR retrotransposons). The pipeline is available in Additional file 2 and the GitHub repository. Putative LTR retrotransposons of six plant genomes (Additional files 3, 4, 5, 6, 7, 8, and 9). Comparing the performance of LtrDetector to the performances of other related tools demonstrates that LtrDetector is the best de-novo tool currently available for detecting LTR-RTs. These results were obtained on synthetic sequences and multiple genomes. We used a variety of computational techniques to perform de-novo signature-based discovery of Long terminal repeat (LTR) retrotransposons (LTR-RTs). Signature-based tools rely strictly on specific structural features of LTR-RTs, e.g. the presence of two flanking LTRs, without referring to the nucleotide sequences of known elements. The main contribution of this study is a software package called LtrDetector. The tool utilizes methods inspired by signal processing, using the distances between copies of k-mers — short nucleotide sequences of length k — to determine the location of LTRs. At a high level, LtrDetector locates LTR-RTs using the following steps (Fig. 1): Mapping each nucleotide in a sequence to a positive or negative numerical score recording the distance to the closest exact copy of the k-mer starting at that nucleotide; Method overview: LtrDetector is a software tool for locating long terminal repeat (LTR) retrotransposons (RTs). a A sequence of scores reflects the distance to the closest exact copy of the k-mer starting at each nucleotide. b Smoothed scores are produced after adjacent spikes are merged into a contiguous region. c Plateau regions are identified. Separate plateaus here are represented by black and red lines. d Plateaus are paired and their boundaries are adjusted. The red triangles denote the start and end coordinates for each LTR Processing the scores to merge adjacent stretches of similar scores (i.e. 
plateaus); Collecting plateaus and pairing those whose distance scores point to each other; Correcting LTR coordinates via local alignment of the regions surrounding each plateau in a pair; and Removing faulty candidates based on sequence identity, element length, and structural similarity to other types of transposable elements (TEs). Scoring the input sequence At the time of insertion into the genome, the two LTRs of an LTR-RT will be identical [4]. Centuries' worth of mutation will lead to some degeneration, but the LTRs should retain a high degree of homology. The goal of the scoring step is to mark the genomic distance to the nearest exact copy of every k-mer in the input sequence using a data-structure called a hash-table. A hash-table is conceptually similar to a dictionary, containing entries mapping a unique key to an associated value. It employs a mathematical function called hashing to convert a key into an index, which is used to look up the value in the hash-table's underlying array. LtrDetector utilizes a hashing function specific to DNA sequences. Each nucleotide (A, C, G, and T) in a k-mer is encoded as a digit (0, 1, 2, and 3). This digit sequence is considered as a quaternary (base-4) number and converted to a decimal number (base-10) that indicates an index within the array. Horner's rule is used for efficiently converting the number from its quaternary to its decimal representation [55]. We have used similar data structures successfully in other software tools [56–59]. For example, the 5-mer ACCTG is transformed to 01132 (base 4) and then to 94 (base 10), mapping it to the 94th cell in the array. Note that the array for storing all k-mers will be of length 4^k. LtrDetector traverses the input sequence nucleotide by nucleotide, computing the index for the k-mer starting at each position. As it encounters a particular k-mer for the first time, it will fill in the hash-table value with the initial location. Whenever that k-mer is found again, it will update the hash table with the new location and report the score at that index as the distance between the current copy and the previous copy. Distances to and directions of the closest copies are recorded both forward and backward in the genome as positive and negative numbers. This process requires only one pass through the sequence because the direction of the closest copy can be calculated from the index of the k-mer and the index of the closest copy that is stored in the hash table. Scores will be updated if a copy is found closer downstream. The distance between the k-mer and its copy must be within a specific range due to the length properties of LTR retrotransposons [1]. Processing scores The raw scores yielded by the previous step are processed to accentuate meaningful patterns. Wherever there is a significant repeat in the genome, there should be an extended, semi-continuous sequence of similar scores. However, any mutation will cause gaps in these stretches. LtrDetector first identifies all continuous stretches of non-zero scores, categorizing them as "keep" — K — if they are longer than or equal to a minimum seed value (default: 10 bp), or "delete" — D — if they are not. The forward merging step merges a D section with a neighboring K section if the two are separated by a gap of less than a certain size (default: 200 bp). To merge, the scores belonging to the D section are overwritten with the median score of the adjacent K section, as are the scores in between. This D section is re-categorized as a K.
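The k-mer hashing and distance scoring described above can be illustrated with a short Python sketch. LtrDetector itself is written in C++, so this is only a simplified rendering of the idea rather than the tool's actual code; the function names and the distance range used as defaults here are assumptions chosen for the example.

```python
def kmer_index(kmer):
    """Encode a k-mer as a base-4 integer via Horner's rule (A=0, C=1, G=2, T=3)."""
    digits = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
    index = 0
    for base in kmer:
        index = 4 * index + digits[base]   # e.g. ACCTG -> 01132 (base 4) -> 94 (base 10)
    return index


def score_sequence(seq, k=13, min_dist=400, max_dist=22000):
    """Signed distance from each position to the nearest exact copy of its k-mer.
    Positive scores point downstream, negative scores point upstream, and 0 means
    no copy within the allowed range. Assumes an uppercase A/C/G/T alphabet."""
    scores = [0] * (len(seq) - k + 1)
    last_seen = {}                              # k-mer index -> most recent position
    for pos in range(len(seq) - k + 1):
        idx = kmer_index(seq[pos:pos + k])
        prev = last_seen.get(idx)
        if prev is not None:
            dist = pos - prev
            if min_dist <= dist <= max_dist:
                scores[pos] = -dist             # the nearest copy seen so far is upstream
                # the earlier occurrence now has a downstream copy; keep the closer match
                if scores[prev] == 0 or dist < abs(scores[prev]):
                    scores[prev] = dist
        last_seen[idx] = pos
    return scores
```

In the real tool, an array indexed by all 4^k possible k-mers plays the role of the Python dictionary, which is why the choice of k directly affects memory use.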
Neighboring K sections will be merged by re-scoring only the gap section, using the median score of one of the two K sections. Next, the backward merging step proceeds in the opposite direction, merging all D sections that appear upstream of K sections and are missed by the forward merging step. After both passes, all remaining D selections are overwritten to zero to reduce noise. We illustrate this procedure in Fig. 2. (a) Contiguous stretches of the same non-zero score are identified and marked as keep (K) or delete (D). (b) The forward pass merges K sections toward each other and adjacent D sections. (c) The backward pass merges remaining D sections that are close to K sections Pairing plateaus The merging step should produce wide plateaus in the scoring signal. The magnitude of their scores can be thought of both as the height of the plateau and as the distance towards its match. The sign of the scores indicate the direction — positive for downstream and negative for upstream. For instance, a plateau of width 200 and height +8000 should imply a similarly wide plateau of height -8000 starting about 8000 base pairs downstream. In this way, the scores of the two plateaus point towards each other. Another hash-table-like data structure helps pair matching plateaus to form a full retrotransposon. Plateaus are assigned to a bin based on the magnitude of their height, with each bin holding plateaus within a certain range of height. The algorithm then steps through the candidates, placing each positive plateau into the appropriate bin as it is encountered. Each negative plateau will be assigned an initial bin, which will be searched for a positively-scored plateau located at the proper distance; recall that this distance is implied by the height of the negative plateau. Because we allow for some difference in height, the bins immediately above and below the initial one may be inspected. If a match is found, the two regions are returned and listed as a candidate LTR pair. If not, the negative plateau is discarded. Boundary correction The newly paired LTR candidates merely approximate the boundaries of a putative retrotransposon because of the sensitivity of k-mers to mutations. Next, we use the Smith-Waterman local alignment algorithm [60] to sharpen the LTR boundaries. As the plateaus may be of unequal length at this stage, we define a value L equivalent to the length of the larger plateau. From the center of each plateau, we mark a window of size 1.5×L bp in each direction. The resulting windows may not be longer than the maximum LTR length parameter (6000 bp is the default). We align these two regions, and the returned alignment indicates the corrected boundaries for the putative LTR-RT. Alignment identity scores are stored for later use. Scaling the alignment window based on the initial plateau length provides for good average case run time while still allowing LtrDetector to discover elements with large LTRs. Several filters are applied to reduce the number of false positives. LTR identity scores from the previous step are used for discarding all entries whose paired LTRs exhibit sequence similarity below a given threshold (default: 85%). Then elements are filtered by size to remove those where either the LTR or the whole element is too small or too large, using values typical of known LTR-RTs. Our default range for the full element is 400–22000 and 100–6000 for its LTRs. 
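A minimal sketch of the bin-based pairing may make the bookkeeping easier to follow; the bin width and the position tolerance below are illustrative assumptions, not the values used by LtrDetector's C++ implementation.

```python
def pair_plateaus(plateaus, bin_width=500, tolerance=500):
    """Pair negative plateaus with the positive plateaus their scores point back to.
    Each plateau is (start, end, height); a positive height points downstream and a
    negative height points upstream toward the matching repeat."""
    bins = {}                                   # height // bin_width -> positive plateaus
    pairs = []
    for start, end, height in plateaus:         # assumed sorted by start coordinate
        if height > 0:
            bins.setdefault(height // bin_width, []).append((start, end, height))
            continue
        distance = -height
        expected_start = start - distance       # where the partner plateau should begin
        base_bin = distance // bin_width
        for b in (base_bin - 1, base_bin, base_bin + 1):    # allow some drift in height
            match = next((p for p in bins.get(b, [])
                          if abs(p[0] - expected_start) <= tolerance), None)
            if match is not None:
                pairs.append((match, (start, end, height)))  # candidate LTR pair
                break
    return pairs
```

Each returned pair approximates the two LTRs of one candidate element; the Smith-Waterman boundary correction described above then sharpens the coordinates.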
Next, the candidates are analyzed to determine whether they exhibit features of DNA transposons, which are another type of TE that appear in high copy number in many genomes. DNA transposons of the same family can appear in close proximity and be falsely identified as LTR-RTs by the previous steps. DNA transposons contain terminal inverted repeats, meaning that the reverse complement of the beginning sequence appears at the end of the element. LtrDetector locally aligns the first 30 nucleotides of each LTR with the reverse complement of its last 30 nucleotides. If this resulting alignment is sufficiently long (>15 bp), the element may represent two DNA transposons within close distance to each other; this element is discarded. Structural Annotations Other structural features — Target Site Duplication (TSD) and Polypurine Tract (PPT), and the TG..CA motif are included as annotations. A TSD is a small exact repeat that may occur at the insertion site. LtrDetector searches for TSDs using the longest common substring algorithm, which is a dynamic programming algorithm that finds exactly matching substrings of two strings. We run this algorithm on the regions consisting of the 20 bp before the left LTR and after the right LTR. The tool finds the closest TSD of at least 4 bp — if one exists. Additionally, LtrDetector searches for a PPT, which is a region of highly enriched purine (A and G) content that appears in the interior region immediately adjacent to the 3' LTR. We calculate a search window based on the size of the interior region on the LTR-RT. The tool searches for a minimum length seed composed entirely of purines, then expands in both directions from the first such seed, allowing for gaps. When the maximum gap is exceeded, the length and the purine percentage of the putative PPT are calculated. If these values are below certain minimums, the algorithm proceeds to the next seed and repeats the process. This search continues until an acceptable PPT is found or the search window is exceeded. LtrDetector also searches for a PPT on the negative strand by scanning the reverse complement of the search window, giving a clue as to the orientation of the LTR-RT. Finally, we search for the TG..CA box. We scan the first 20 bp of the LTR-RT for the first occurrence of the TG motif and the last 20 bp for the final occurrence of the CA motif. If both motifs occur, we report the start and end of the box as an alternative boundary for the full LTR-RT. By default, LtrDetector reports results in a tabular format, including columns for the start and the end coordinates of each retrotransposon, its constituent LTRs, and any found TSD, PPT, or TG..CA Box. Alternatively, it can produce output in BED format for easy evaluation using comparison utilities like bedtools. In this case, the output columns contain start and end coordinates for the entire element and for its constituent LTRs. Visualizing putative LTR-RTs We have built this tool with manual verification in mind. LtrDetector comes with a Python program to enable the user to visualize the scores — distances and directions — as well as the boundaries of the two LTRs. The visualization program produces graphs. The x-axis displays the nucleotide indexes, and the y-axis displays the forward/backward distances to the closest copies. Forward distances are represented as positive numbers, whereas backward distances are represented as negative numbers. Figure 1a shows the scores. The merged plateaus can also be visualized (Fig. 1b). 
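The TSD search is a small, self-contained piece of the annotation step, so a Python sketch is given below; the 20 bp windows and the 4 bp minimum follow the text, while the simplification of returning the longest shared substring (rather than the closest qualifying one, as the tool does) is ours.

```python
def longest_common_substring(a, b):
    """Classic dynamic-programming longest common substring of two strings."""
    best_len, best_end = 0, 0
    table = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                table[i][j] = table[i - 1][j - 1] + 1
                if table[i][j] > best_len:
                    best_len, best_end = table[i][j], i
    return a[best_end - best_len:best_end]


def find_tsd(seq, element_start, element_end, window=20, min_len=4):
    """Report a putative target site duplication shared by the flanks of the element."""
    left = seq[max(0, element_start - window):element_start]
    right = seq[element_end:element_end + window]
    tsd = longest_common_substring(left, right)
    return tsd if len(tsd) >= min_len else None
```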
In these graphs, the boundaries of each LTR are demarcated by two inverted, red triangles (Fig. 1d). Looking at them, the user can quickly assess the quality of predicted elements by comparing the identified boundaries with the surrounding scores. This should provide important information for the manual review process. Ground truth generation In order to assess the accuracy of our signature-based predictions, we built a pipeline for assembling ground truth using previously known sequences of LTR-RTs. Repbase is the most comprehensive database for repetitive elements, containing consensus sequences for a wide variety of genomes [61]. The Repbase browser system provides FASTA files containing the LTRs and interior sequences for full LTR-RTs separately. The ground truth was constructed using two related, complementary approaches. In the first approach, we downloaded these files and parsed them to append the LTR sequences before and after their associated interior sequence to form full LTR-RT elements. We then performed a BLAST search for these complete elements against the input genome, processing the output from BLAST to accept only those results that represent 100% coverage of the query as well as 70% or more identity. In the second approach, we built a pipeline around RepeatMasker — the standard database-driven tool for repeat identification. RepeatMasker also uses Repbase as its default source of repeat consensus sequences. Instead of concatenating the LTRs and their interior regions before the search, we searched for them separately in RepeatMasker's output. These entries were used for extracting LTR-RT coordinates by finding two 100%-query-coverage regions of the same LTR that are 400–22000 bp apart as defined by their start coordinates. The corresponding interior element was required to appear somewhere in between and to have 70% query coverage. The outputs of the two pipelines were merged and duplicates were removed. The reason we used both pipelines is that the results from the RepeatMasker pipeline are dependent upon the estimated length parameters, but do a better job finding LTR-RTs with more degenerate interior regions, whereas the BLAST data is free from guesswork but stricter about enforcing the canonical structure of LTR-RTs. False positive evaluation We built a false positive detection pipeline by parsing RepeatMasker output to determine when putative LTR-RTs overlap with non-LTR repeats. RepeatMasker-reported elements that do not belong to an LTR (excluding simple and low-complexity repeats) are compared with the predicted LTR-RTs. If two repeats of the same type overlap by more than 80% with the two supposed LTRs of a putative LTR-RT, this element is considered a false positive. However, if other repetitive elements overlap the interior of the predicted LTR-RT, this putative element is not counted as a false positive because nested repeats are very common. This approach was inspired by another study for detecting Miniature Inverted-repeat Transposable Elements (MITEs) [62]; in that study, a putative element is considered a false positive if it overlaps with any non-MITE elements. Repbase is by no means a complete record of repetitive elements, so neither the ground truth nor the false positive annotations will be comprehensive. Accordingly, a large number of elements discovered by LtrDetector will overlap with neither set and will be impossible to evaluate against existing databases.
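The 80% rule for flagging false positives can be expressed compactly; the data layout here (a mapping from repeat type to intervals) is an assumption made for the example, and the real pipeline works on parsed RepeatMasker output with LTR, simple, and low-complexity repeats already excluded.

```python
def covered_fraction(interval, other):
    """Fraction of `interval` covered by `other`; intervals are non-empty (start, end) pairs."""
    inter = max(0, min(interval[1], other[1]) - max(interval[0], other[0]))
    return inter / (interval[1] - interval[0])


def is_false_positive(left_ltr, right_ltr, non_ltr_repeats, threshold=0.8):
    """Flag a prediction whose two LTRs are each covered >80% by repeats of one non-LTR type."""
    for repeat_type, intervals in non_ltr_repeats.items():
        left_hit = any(covered_fraction(left_ltr, iv) > threshold for iv in intervals)
        right_hit = any(covered_fraction(right_ltr, iv) > threshold for iv in intervals)
        if left_hit and right_hit:
            return True
    return False
```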
Nonetheless, this approach — in our opinion — is the best available method for evaluating the false positives of tools for discovering LTR-RTs. Evaluation measures True Positives (TP) are the discoveries that overlap with an entry in the ground truth. False Negatives (FN) are elements listed in the ground truth but not found by a tool. False Positives (FP) are the discoveries that overlap with an entry in the false positive data set. Mutual overlap is required to be 95%; for example, sequences A and B are counted as equivalent if the overlapping segment between A and B constitutes 95% of both A and B. Because we calculate this overlap on the whole element, it is theoretically possible that this definition of overlap may overlook some slight inaccuracies in the length of the LTRs. We use the standard measures of sensitivity (Eq. 1) and precision (Eq. 2) to assess the performances of LtrDetector and the related tools. Sensitivity is the ratio (or percentage) of the true elements found by a tool, whereas precision is the ratio (or percentage) of the true elements identified by a tool to the total number of regions predicted by the same tool. $$ Sensitivity = \frac{TP}{TP+FN} $$ $$ Precision = \frac{TP}{TP+FP} $$ Additionally, we report the F1 measure (Eq. 3), which combines sensitivity and precision. $$ \begin{aligned} F1 & = 2 \times \frac{Precision \times Sensitivity}{Precision+Sensitivity}\\ & = \frac{2 TP}{2 TP + FP + FN}\\ \end{aligned} $$ Comparing the F1 measure to the accuracy (Eq. 4) shows how similar these two measures are. $$ Accuracy = \frac{TP + TN}{TP + TN + FP + FN} $$ Here, TN stands for True Negatives — unknown in this study. Therefore, the accuracy cannot be calculated. If we substitute TN with TP in the accuracy equation, we obtain the F1 measure equation. In other words, the F1 can be viewed as an accuracy measure when the TN cannot be determined. We validated the results of LtrDetector using a variety of genomes. An initial test replicated the experiment in a study by Lerat [1], testing multiple tools on the X Chromosome of the D. melanogaster (Dm3) against a ground-truth annotation assembled from RepeatMasker. We performed similar analysis on the following genomes: Arabidopsis thaliana (TAIR10): http://plants.ensembl.org/Arabidopsis_thaliana/Info/Index Hordeum vulgare (HvIbscPgsbV2): http://plants.ensembl.org/Hordeum_vulgare/Info/Index Oryza sativa Japonica (IRGSP1): http://plants.ensembl.org/Oryza_sativa/Info/Index Sorghum bicolor (SorghumBicolorV2): http://plants.ensembl.org/Sorghum_bicolor/Info/Index Zea mays (ZeaMaysAGPv4): http://ensembl.gramene.org/Zea_mays/Info/Index Glycine max (Gmax_109): http://www.plantgdb.org/XGDB/phplib/download.php?GDB=Gm Parameter defaults We conducted an empirical analysis of the Repbase sequences for six genomes in order to set the element length parameters for LtrDetector and the other tools. Table 1 shows length statistics of LTR-RTs of these genomes. We chose the default values to include almost all elements found in the data. The range for LTR length is 100–6000 bp, and 400–22000 bp for whole LTR-RTs. The whole element maximums and minimums are also used in our ground truth generation. Table 1 Length statistics to determine the default parameters of LtrDetector The value of k is extremely important to both the effectiveness and the efficiency of LtrDetector. If the chosen value is too small, many k-mers will occur by chance and the scores will contain a large amount of noise. 
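For reference, the 95% mutual-overlap criterion and the evaluation measures defined above reduce to a few lines of Python; this is an illustration of the definitions, not the evaluation scripts distributed with the tool.

```python
def mutual_overlap(a, b, threshold=0.95):
    """True if the intersection of intervals a and b covers >= threshold of both intervals."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter >= threshold * (a[1] - a[0]) and inter >= threshold * (b[1] - b[0])


def metrics(tp, fp, fn):
    """Sensitivity, precision, and F1 from the counts defined in the text."""
    sensitivity = tp / (tp + fn)
    precision = tp / (tp + fp)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, precision, f1
```

A prediction counts toward TP when mutual_overlap holds against some ground-truth element, and toward FP when it matches an entry in the false positive set.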
If k is too large, the signal will likely miss more degenerate repeats. The memory usage also grows quickly with the value of k; an increase of k by 1 increases the size of the hash table 4-fold. We evaluated LtrDetector on both A. thaliana and O. sativa for all values of k between 9 and 15 inclusive, and tracked the performance of each trial on F1. Figure 3 displays these results. On the basis of this experiment, we selected 13 as a suitable default k value because it provides excellent performance while keeping memory usage moderate. Users who are particularly concerned with memory may want to select a smaller k; a higher k may produce slightly better results at the cost of memory. The effect of different values of k — the size of the short words, which are used as the keys in the hash table — on the F1 measure. As the value of k increases from 9 to 11 or 12, the F1 value increases (the higher, the better). The performance does not change markedly after that. (a) Shows the experiment on A. thaliana, (b) shows O. sativa Results on the X chromosome of Drosophila melanogaster Our initial test is based on the experiment by Lerat [1]. Table 2 shows the performances of these four tools: LTR_Finder, LTR_seq, LTRharvest, and LtrDetector. All are evaluated using their default parameters. Although other tools like LTR_STRUC and MGEScan-LTR exist to discover LTR-RTs, they all had issues with availability and/or installation, so we were unable to get them to produce results. LtrDetector finds one fewer element than LTRharvest (92/96 vs. 93/96), while making 20% fewer total predictions (160 vs. 200). LTR_seq performed the worst of the tools on every metric, and will be excluded from further experiments. These results are an early indication that LtrDetector performs well relative to the currently available tools. D. melanogaster has a small genomic size and extremely well-preserved LTR sequences, making this a relatively easy test. Further evaluations are necessary to accurately gauge the performance of any tool. Table 2 Results on the X Chromosome of D. melanogaster: We evaluated four de-novo tools on a ground-truth annotation provided by Lerat [1] Results on synthetic data sets We built synthetic data by randomly generating exact repeats — long terminal repeats — within a certain size range, mutating a selected percentage of one of them, and inserting them with random sequences in between. Although this data set does not accurately simulate the content of a real genome, it can help us demonstrate the ability of a tool to discover repeats at a given level of mutation (See Table 3). LTR_Finder is one of the best-performing predecessor tools to LtrDetector, but its results are not listed because its strict filtering system requires other structural features that our synthetic genome lacks. For each trial, both tools are run with a sequence identity threshold 5% lower than the similarity implied by the mutation rate. For example, the trial with 15% mutation would have an identity threshold of 80%. All other parameters are left at their defaults. Both tools capture nearly all of the elements at low mutation rates (0–5%), but by 15% mutation, LtrDetector identifies 74 of 92 ground truth elements, whereas LTRharvest finds only 29. At the 20% mutation rate, LtrDetector outperformed LTRharvest by a wide margin in terms of sensitivity (40/93 vs. 8/93). Neither tool is capable of reliably detecting repeats at 30% or greater mutation rates.
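A synthetic test sequence of the kind described above is straightforward to generate. The sketch below illustrates the construction (random flanks around an exact repeat whose second copy is mutated, with a random interior in between); the lengths are arbitrary choices for the example, not the values used to build the published data set.

```python
import random

BASES = "ACGT"


def random_seq(n):
    """A uniformly random DNA sequence of length n."""
    return "".join(random.choice(BASES) for _ in range(n))


def mutate(seq, rate):
    """Substitute the given fraction of positions with a different random base."""
    seq = list(seq)
    for i in random.sample(range(len(seq)), int(rate * len(seq))):
        seq[i] = random.choice([b for b in BASES if b != seq[i]])
    return "".join(seq)


def synthetic_ltr_rt(ltr_len=1500, interior_len=5000, flank_len=10000, rate=0.15):
    """Random flanks around a direct repeat whose second copy is mutated at `rate`."""
    ltr = random_seq(ltr_len)
    return (random_seq(flank_len) + ltr + random_seq(interior_len)
            + mutate(ltr, rate) + random_seq(flank_len))
```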
These results are indicative of LtrDetector's capabilities on repeats of varying levels of degeneration. Table 3 Results on synthetic genomes: We constructed several synthetic chromosomes with randomly generated direct repeats mutated at a given percentage of nucleotides (0–30%) to assess performance at different levels of LTR conservation Results on six plant genomes Our main experiment was an evaluation of three tools (LtrDetector, LTR_Finder and LTRharvest) on six plant genomes (including several important crops) of varying size and repeat content. In this experiment, all tools were run using parameters determined on sequences found in Repbase (see Table 4). Results for LTR_Finder are unavailable for the Hordeum vulgare (barley) genome because memory demands repeatedly caused the computer to crash on four computer cores, and a subsequent trial on one core did not finish after two weeks of run time (2/7 chromosomes finished). The results of this experiment suggest substantial performance gains for our tool over previous methods. Table 4 Results on six plant genomes: We tested three tools on one model organism, A. thaliana, and five important crops of varying genomic size and repeat content Aggregate sensitivity (excluding the H. vulgare genome) is the classification measure in which we saw the most improvement, with our software tool identifying 79.1% of known LTR-RTs overall, in comparison to 65.3% by LTR_Finder (improvement of 21.1%) and 68.2% by LTRharvest (improvement of 16.0%). When considering the aggregate sensitivity on the six genomes, LtrDetector outperformed LTRharvest (improvement of 23.4%). Additionally, LtrDetector produced fairly consistent results across the different genomes we tested, ranging from 74.5% on H. vulgare to 81.9% on the smaller O. sativa. LtrDetector was the most sensitive tool on all six genomes. LTR_Finder predicted very few false positives and was the most precise tool overall at 99.5%. LtrDetector came in second at 98.2% followed by LTRharvest at 96.0%. All three tools found many more true positives than false positives, resulting in high precision overall. On the F1 composite measure (excluding the H. vulgare genome), LtrDetector again achieves the highest score, outperforming LTR_Finder by 11.0% (87.6 vs. 78.9) and LTRharvest by 9.9% (87.6 vs. 79.7). When the H. vulgare genome is included, LtrDetector showed improvement of 14.4% over LTRharvest. These results demonstrate that LtrDetector strikes a balance between thorough collection of known LTR-RTs and avoiding spurious predictions. Our evaluation criteria are dependent on the consensus sequences available in Repbase, so we will not be able to definitively classify the majority of putative LTR-RTs as true positives or false positives. Such elements will be unconfirmed, but could potentially be novel discoveries. On the first five genomes (excluding barley), LtrDetector and LTRharvest propose similar numbers of retrotransposons — 176197 and 165721, respectively. LTR_Finder is more conservative with only 90458 discovered elements. The total number of identified elements helps in deriving an estimate of the percentage of a given genome that is composed of full-length LTR-RTs. We summed the length of each discovery in base pairs and divided this total by the number of base pairs in the entire genome. This produced estimates of 10.6% LTR-RT content for A. thaliana, 18.5% for O. sativa, 25.9% for G. max, 39.2% for S. bicolor, 48.2% for H. vulgare, and 62.4% for Z. mays.
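The genome-content estimate quoted above is a simple calculation over the BED output. A sketch, assuming the first three BED columns are chromosome, start, and end, and ignoring any overlap between predictions:

```python
def ltr_rt_content(bed_path, genome_length):
    """Fraction of the genome covered by predicted full-length LTR-RTs."""
    covered = 0
    with open(bed_path) as bed:
        for line in bed:
            fields = line.rstrip("\n").split("\t")
            covered += int(fields[2]) - int(fields[1])
    return covered / genome_length

# Hypothetical usage: ltr_rt_content("ltr_rt_predictions.bed", genome_length_in_bp)
```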
The three tools have vastly different run times. LtrDetector can work on multiple FASTA files in parallel, whereas we had to configure the other two tools to process several chromosomes simultaneously using the GNU parallel command utility. The experiments were run on all four cores of an Intel i5 machine with 16 GB RAM running Ubuntu. We recorded wall-clock time using the Linux time command. LTRharvest was by far the fastest tool, capable of processing the five smallest genomes in just over 33 min. On the other end of the spectrum, LTR_Finder took about 153 h — more than 6 days. LtrDetector's runtime efficiency was in the middle (around 8 h). LTRharvest uses far less memory overall. LTR_Finder requires moderate memory on the small genomes. LtrDetector consistently had the highest memory requirements of the three tools. The above experiments suggest that LtrDetector represents a substantial advance in the available methods for discovering LTR-RT elements de-novo. In comparison to related software tools, it delivers more accurate predictions in reasonable time using memory readily available on modern personal computers. Its capabilities are proven not only on simple model organisms but also on a wide variety of plant genomes. Crucially for researchers, the tool is easy to install and run and will perform well on an ordinary desktop computer. It provides a robust set of default parameters for maximum generality, but still allows for user configuration via command-line options. As more genome sequences become available, the utility of tools like LtrDetector will only increase. Gene validation We obtained protein sequences for the gag/pol genes from the UniProt database and used tBLASTn (protein to nucleotide) to search for them inside of our predictions. We recorded the percentage that exhibited more than 25% query coverage. The results for the ground truth dataset and the predictions by LTR_Finder, LTRharvest, and LtrDetector are found in Table 5. For instance, the 24.9% of the identified elements with gag/pol for LtrDetector on G. max compares favorably with the 19.7% for LTR_Finder and 17.6% for LTRharvest. LTR_Finder's putative LTR-RTs contain slightly more genes on A. thaliana (49.1% to LtrDetector's 48.0%). LtrDetector's low rate of 10.4% on S. bicolor is in line with the 9.0–11.0% from the ground truth and the other two tools. Even with our strict ground truth generation (at least 70% of the interior of an LTR-RT is required to be covered at 70% or more identity), the proteins did not consistently appear in this ground truth data set. The gag/pol sequences are the best available in the UniProt database, but we stress that they are unconfirmed and should only be taken as a primitive sign of the biological relevance of all predictions. These results show that LtrDetector's putative LTR-RTs are enriched with fragments of these two genes, suggesting the quality of elements identified by LtrDetector relative to those predicted by the other tools. Table 5 Gene content validation: We searched for species-specific fused gag/pol in the interior of the known and the predicted LTR-RTs Nested element discovery Although this feature was not enabled for the above analysis, the current version of the software includes beta functionality for finding nested LTR-RTs. The first pass of LtrDetector discovers non-nested elements and nested elements that are small enough to meet the length requirements of LTR-RTs.
Optionally, it conducts an equivalent search around each discovered element, automatically adjusting the scoring system parameters to identify elements that fully enclose the elements discovered in the first pass. Post-processing manual annotation aid LtrDetector is unique in providing a simple visualization tool to aid with manual verification of putative LTR-RTs. For each LTR-RT identified by LtrDetector, the script will produce a colorful graph showing distances between k-mers (short words of length k) and their nearest copies as well as markers for the start and end locations of each LTR. See Fig. 1 for examples. This signal will ideally show two flat plateaus representing two LTRs. Comparisons to related tools LtrDetector represents an innovative approach to repeat discovery that differs greatly from its predecessor tools. LtrDetector uses techniques inspired by signal processing. The use of a signal of k-mer distances as an indication of repeat locations is the first of its kind. Both of our closest competitor tools use suffix-arrays, which are complex data structures that have been widely used in text processing [63]. LTRharvest uses a suffix-array to identify initial maximal repeats — seeds — and a greedy dynamic programming algorithm called X-drop extension to expand from the seeds [51]. It can filter based on length, LTR identity, target site duplications, and the palindromic LTR motif (i.e. TG..CA box). LTR_Finder begins with all sets of exact repeats found by the suffix-array [50]. Each member in every set is considered in a pair-wise fashion. The region between the two start coordinates in a pair is aligned with the region between the two end coordinates. The pairs are merged if the alignment is above a certain threshold. Similarly to LtrDetector, LTR_Finder uses the Smith-Waterman local alignment algorithm [60] for boundary adjustment. LTR_Finder concludes with an aggressive filtering system based on searching for target site duplications, the TG..CA box, primer binding sites, and the proper protein domains in the interior sequence. Future work will seek to improve time efficiency, largely by reducing our dependence on local alignment, which is very slow on longer sequences. This may include replacing the Smith-Waterman algorithm with more efficient approximations. We will seek to reduce memory consumption by optimizing the C++ code-base and developing an iterative approach that will allow LtrDetector to sequentially load pieces of larger chromosomes from storage. We will add the option to use the structural features (TSD, etc.) as filters rather than just annotations. We will also work to improve the beta version of the nested LTR discovery that is included with the software. In this study, we developed and tested a software tool called LtrDetector, which identifies Long Terminal Repeat Retrotransposons (LTR-RTs) de novo in assembled genomes. Our software addresses some of the scalability and usability concerns of older tools and is better able to match the performance of tools that leverage consensus sequences. LtrDetector revolves around a novel repeat detection methodology that calculates k-mer distance scores to recover underlying repeats. It supplements this with an alignment-based correction and filters to enforce the structure of LTR-RTs. This methodology provides accurate predictions across a diverse range of input genomes.
Using consensus sequence predictions from six plant genomes, including maize and barley, we proved that our tool is significantly more sensitive than the previous two most successful software tools, LTR_Finder and LTRharvest. We believe that LtrDetector can provide valuable computational support to researchers, particularly those studying plant genomes. It reports biologically relevant features of the LTR-RTs and includes a k-mer score visualization script to aid with manual review. It is simple to use and performs well on an ordinary personal computer. As the number of sequenced genomes increases by the day, the potential impact of LtrDetector also increases. Automated, accurate identification of LTR-RTs will enable researchers to further investigate the regulatory capacities of LTR-RTs, and could hold great promise in understanding plant evolution and crop productivity. The source code (C++ and Python) is available as Additional file 1. Project name: LtrDetector. Project home page: https://github.com/TulsaBioinformaticsToolsmith/LtrDetector Operating system(s): UNIX/Linux/Mac. Programming language: C++ and Python. Other requirements: BLAST (https://blast.ncbi.nlm.nih.gov/Blast.cgi) and Bedtools (http://bedtools.readthedocs.io/en/latest/). Python: NumPy, Matplotlib, Pandas. License: The software is provided as-is under the GNU GPLv3. Any restrictions to use by non-academics: License needed. Base pairs LTR-RT: Long terminal repeat retrotransposon LTR: Long terminal repeat Retrotransposon TE: Transposable element Lerat E. Identifying repeats and transposable elements in sequenced genomes: how to find your way through the dense forest of programs. Heredity. 2010; 104(6):520. McClintock B. The origin and behavior of mutable loci in maize. Proc Natl Acad Sci U S A. 1950; 36(6):344–55. Consortium IHGS, Lander ES, Linton LM, Birren B, Nusbaum C, Zody MC, Baldwin J, Devon K, Dewar K, Doyle M, FitzHugh W, Funke R, Gage D, Harris K, Heaford A, Howland J, Kann L, Lehoczky J, LeVine R, McEwan P, McKernan K, Meldrim J, Mesirov JP, Miranda C, Morris W, Naylor J, Raymond C, Rosetti M, Santos R, Sheridan A, et al. Initial sequencing and analysis of the human genome. Nature. 2001; 409:860–921. SanMiguel P, Gaut BS, Tikhonov A, Nakajima Y, Bennetzen JL. The paleontology of intergene retrotransposons of maize. Nat Genet. 1998; 20:43–5. Bennetzen JL, Wang H. The contributions of transposable elements to the structure, function, and evolution of plant genomes. Annu Rev Plant Biol. 2014; 65:505–30. Kellogg EA, Bennetzen JL. The evolution of nuclear genome structure in seed plants. Am J Bot. 2004; 91(10):1709–25. Nystedt B, Street NR, Wetterbom A, Zuccolo A, Lin Y-C, Scofield DG, Vezzi F, Delhomme N, Giacomello S, Alexeyenko A, Vicedomini R, Sahlin K, Sherwood E, Elfstrand M, Gramzow L, Holmberg K, Hallman J, Keech O, Klasson L, Koriabine M, Kucukoglu M, Kaller M, Luthman J, Lysholm F, Niittyla T, Olson A, Rilakovic N, Ritland C, Rossello JA, Sena J, et al. The norway spruce genome sequence and conifer genome evolution. Nature. 2013; 497(7451):579–84. Ibarra-Laclette E, Lyons E, Hernandez-Guzman G, Perez-Torres CA, Carretero-Paulet L, Chang T-H, Lan T, Welch AJ, Juarez MJA, Simpson J, Fernandez-Cortes A, Arteaga-Vazquez M, Gongora-Castillo E, Acevedo-Hernandez G, Schuster SC, Himmelbauer H, Minoche AE, Xu S, Lynch M, Oropeza-Aburto A, Cervantes-Perez SA, de Jesus Ortega-Estrada M, Cervantes-Luevano JI, Michael TP, Mockler T, Bryant D, Herrera-Estrella A, Albert VA, Herrera-Estrella L. 
Architecture and evolution of a minute plant genome. Nature. 2013; 498(7452):94–8. McClintock B. The significance of responses of the genome to challenge. Science. 1984; 226(4676):792–801. Robbins TP, Walker EL, Kermicle JL, Alleman M, Dellaporta SL. Meiotic instability of the R-r complex arising from displaced intragenic exchange and intrachromosomal rearrangement. Genetics. 1991; 129(1):271–83. Nagy ED, Bennetzen JL. Pathogen corruption and site-directed recombination at a plant disease resistance gene cluster. Genome Res. 2008; 18(12):1918–23. Jiang N, Bao Z, Zhang X, Eddy SR, Wessler SR. Pack-MULE transposable elements mediate gene evolution in plants. Nature. 2004; 431(7008):569–73. Elrouby N, Bureau TE. Bs1, a new chimeric gene formed by retrotransposon-mediated exon shuffling in maize. Plant Physiol. 2010; 153(3):1413–24. Feschotte C. Transposable elements and the evolution of regulatory networks. Nat Rev Genet. 2008; 9:397–405. Kajihara D, de Godoy F, Hamaji TA, Blanco SR, Van Sluys M-A, Rossi M. Functional characterization of sugarcane mustang domesticated transposases and comparative diversity in sugarcane, rice, maize and sorghum. Genet Mol Biol. 2012; 35(3):632–9. Wang W, Zheng H, Fan C, Li J, Shi J, Cai Z, Zhang G, Liu D, Zhang J, Vang S, Lu Z, Wong GK-S, Long M, Wang J. High rate of chimeric gene origination by retroposition in plant genomes. Plant Cell. 2006; 18(8):1791–18902. Wicker T, Mayer KFX, Gundlach H, Martis M, Steuernagel B, Scholz U, Šimková H, Kubaláková M, Choulet F, Taudien S, Platzer M, Feuillet C, Fahima T, Budak H, Doležel J, Keller B, Stein N. Frequent gene movement and pseudogene evolution is common to the large and complex genomes of wheat, barley, and their relatives. Plant Cell. 2011; 23(5):1706–18. Lippman Z, Gendrel A-V, Black M, Vaughn MW, Dedhia N, Richard McCombie W, Lavine K, Mittal V, May B, Kasschau KD, Carrington JC, Doerge RW, Colot V, Martienssen R. Role of transposable elements in heterochromatin and epigenetic control. Nature. 2004; 430(6998):471–6. Sharma A, Wolfgruber TK, Presting GG. Tandem repeats derived from centromeric retrotransposons. BMC Genomics. 2013; 14(1):142. Hayashi K, Yoshida H. Refunctionalization of the ancient rice blast disease resistance gene Pit by the recruitment of a retrotransposon as a promoter. Plant J. 2009; 3:413–25. Fernandez L, Torregrosa L, Segura V, Bouquet A, Martinez-Zapater JM. Transposon-induced gene activation as a mechanism generating cluster shape somatic variation in grapevine. Plant J. 2010; 61(4):545–57. Rebollo R, Romanish MT, Mager DL. Transposable elements: An abundant and natural source of regulatory sequences for host genes. Annu Rev Genet. 2012; 46:21–42. Lisch D, Bennetzen JL. Transposable element origins of epigenetic gene regulation. Curr Opin Plant Biol. 2011; 14(2):156–61. Yan Y, Zhang Y, Yang K, Sun Z, Fu Y, Chen X, Fang R. Small RNAs from MITE-derived stem-loop precursors regulate abscisic acid signaling and abiotic stress responses in rice. Plant J. 2011; 65(5):820–8. McCue AD, Slotkin RK. Transposable element small RNAs as regulators of gene expression. Trends Genet. 2012; 28(12):616–23. McCue AD, Nuthikattu S, Slotkin RK. Genome-wide identification of genes regulated in trans by transposable element small interfering RNAs. RNA Biol. 2013; 10(8). Piriyapongsa J, Jordan IK. Dual coding of siRNAs and miRNAs by plant transposable elements. RNA. 2008; 14(5):814–21. Yu S, Li J, Luo L. Complexity and specificity of precursor microRNAs driven by transposable elements in rice. 
We are grateful to the anonymous reviewers for their comments and suggestions, which improved the software as well as this manuscript. This research was supported mainly by funds from the Oklahoma Center for the Advancement of Science and Technology [PS17-015] and in part by internal funds provided by the College of Engineering and Natural Sciences and the Tulsa Undergraduate Research Challenge (TURC) Program at the University of Tulsa. The funding organizations have no role in the design of the study and collection, analysis, and interpretation of data and in writing the manuscript.
The source code of LtrDetector, ground truth generation and visualization scripts, and long terminal repeats retrotransposons are available as Additional files 1, 2, 3, 4, 5, 6, 7, 8 and 9.
The Bioinformatics Toolsmith Laboratory, Tandy School of Computer Science, University of Tulsa, 800 South Tucker Drive, Tulsa, 74104, OK, USA
Joseph D. Valencia & Hani Z. Girgis
HZG designed the software and the case studies, implemented the scoring module, and wrote the manuscript. JDV implemented the software and the evaluation pipelines, conducted the experiments, produced the results, and wrote the manuscript.
Correspondence to Hani Z. Girgis.
The LtrDetector software and the visualization script. This compressed file (.tar.gz) includes the C++ source code of LtrDetector and the Python script for visualizing putative elements as well as instructions on how to compile and run the programs.
(TAR.GZ 145 kb)
Script for ground truth generation pipeline. This compressed file (.tar.gz) includes the Python code for the evaluation pipeline. (TAR.GZ 5 kb)
The synthetic sequences. This compressed file (.tar.gz) includes the synthetic sequences with different mutation rates. (TAR.GZ 1872 kb)
Long Terminal Repeat (LTR) retrotransposons found by LtrDetector in Arabidopsis thaliana. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 62 kb)
LTR retrotransposons found by LtrDetector in Glycine max. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 929 kb)
LTR retrotransposons found by LtrDetector in Hordeum vulgare. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 7488 kb)
LTR retrotransposons found by LtrDetector in Oryza sativa Japonica. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 268 kb)
LTR retrotransposons found by LtrDetector in Sorghum bicolor. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 891 kb)
LTR retrotransposons found by LtrDetector in Zea mays. This compressed file (.tar.gz) includes the LTR retrotransposons found by LtrDetector in BED format. (TAR.GZ 4278 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Valencia, J.D., Girgis, H.Z. LtrDetector: A tool-suite for detecting long terminal repeat retrotransposons de-novo. BMC Genomics 20, 450 (2019). https://doi.org/10.1186/s12864-019-5796-9
Long terminal repeats retrotransposons
CommonCrawl
Find the least common multiple of 8 and 15. $8 = 2^3$, $15 = 3^1 \cdot 5^1$, so lcm$[8, 15] = 2^3 \cdot 3^1 \cdot 5^1 = \boxed{120}$. Notice that, since 8 and 15 have no common factors (greater than 1), their least common multiple is equal to their product.
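As a cross-check, the identity $\gcd(a,b)\cdot\text{lcm}[a,b]=ab$ gives the same result: $\gcd(8,15)=1$, so $\text{lcm}[8,15]=\frac{8\cdot 15}{1}=120$.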
Math Dataset
Fractional model
In applied statistics, fractional models are, to some extent, related to binary response models. However, instead of estimating the probability of being in one bin of a dichotomous variable, the fractional model typically deals with variables that take on all possible values in the unit interval. One can easily generalize this model to take on values on any other interval by appropriate transformations.[1] Examples range from participation rates in 401(k) plans[2] to television ratings of NBA games.[3]
Description
There have been two approaches to modeling this problem. Even though they both rely on an index that is linear in $x_i$ combined with a link function,[4] this is not strictly necessary. The first approach uses a log-odds transformation of y as a linear function of $x_i$, i.e., $\operatorname {logit} y=\log {\frac {y}{1-y}}=x\beta $. This approach is problematic for two distinct reasons: the y variable cannot take on the boundary values 0 and 1, and the interpretation of the coefficients is not straightforward. The second approach circumvents these issues by using the logistic function as a link function. More specifically, $\operatorname {E} [y\mid x]={\frac {\exp(x\beta )}{1+\exp(x\beta )}}.$ It immediately becomes clear that this setup is very similar to the binary logit model, with the difference that the y variable can actually take on values in the unit interval. Many of the estimation techniques for the binary logit model, such as non-linear least squares and quasi-MLE, carry over in a natural way, just like heteroskedasticity adjustments and partial effects calculations.[5] Extensions to this cross-sectional model have been provided that take into account important econometric issues, such as endogenous explanatory variables and unobserved heterogeneous effects. Under strict exogeneity assumptions, it is possible to difference out these unobserved effects using panel data techniques, although weaker exogeneity assumptions can also result in consistent estimators.[6] Control function techniques to deal with endogeneity concerns have also been proposed.[7] (A minimal simulation sketch of the quasi-MLE approach is given below, after the references.)
References
1. Wooldridge, J. (2002): Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
2. Papke, L. E. and J. M. Wooldridge (1996): "Econometric Methods for Fractional Response Variables with an Application to 401(k) Plan Participation Rates." Journal of Applied Econometrics (11), pp. 619–632.
3. Hausman, J. A. and G. K. Leonard (1997): "Superstars in the National Basketball Association: Economic Value and Policy." Journal of Labor Economics (15), pp. 586–624.
4. McCullagh, P. and J. A. Nelder (1989): Generalized Linear Models, CRC Monographs on Statistics and Applied Probability (Book 37), 2nd Edition, Chapman and Hall, London.
5. Wooldridge, J. (2002): Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
6. Papke, L. E. and J. M. Wooldridge (2008): "Panel Data Methods for Fractional Response Variables with an Application to Test Pass Rates." Journal of Econometrics (145), pp. 121–133.
7. Wooldridge, J.M. (2005): "Unobserved heterogeneity and estimation of average partial effects." Identification and Inference for Econometric Models: Essays in Honor of Thomas Rothenberg, ed. by Andrews, D.W.K. and J.H. Stock, Cambridge University Press, Cambridge, pp. 27–55.
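The quasi-MLE mentioned in the Description can be illustrated with a short simulation. The sketch below is not taken from the article or the cited papers: the sample size, the true coefficients, the Beta-distributed fractional response, and the numerical guard `eps` are all illustrative assumptions. It only shows the shape of the estimator, namely maximizing the Bernoulli quasi-log-likelihood with a logistic mean function.

```python
# Minimal sketch of the fractional-logit quasi-MLE on simulated data.
# Everything here (sample size, coefficients, Beta noise, the guard `eps`)
# is an illustrative assumption, not a prescription from the article.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated covariates and a fractional response y in (0, 1).
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([-0.5, 1.0])
mu = 1.0 / (1.0 + np.exp(-X @ beta_true))   # E[y | x] under the logit link
y = rng.beta(5.0 * mu, 5.0 * (1.0 - mu))    # fractional outcome

def neg_quasi_loglik(beta):
    """Negative Bernoulli quasi-log-likelihood with a logistic link."""
    g = 1.0 / (1.0 + np.exp(-X @ beta))
    eps = 1e-12                             # guards the logs near 0 and 1
    return -np.sum(y * np.log(g + eps) + (1.0 - y) * np.log(1.0 - g + eps))

result = minimize(neg_quasi_loglik, x0=np.zeros(X.shape[1]), method="BFGS")
print("true beta:", beta_true)
print("QMLE estimate:", result.x)
```

On data generated this way the estimate typically lands close to `beta_true`; in applied work one would pair the point estimates with heteroskedasticity-robust standard errors, as the article notes.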
Wikipedia
\begin{document} \title[Spatio-temporal Poisson processes]{Spatio-temporal Poisson processes for visits to small sets} \author{Fran\c{c}oise P\`ene \and Beno\^\i t Saussol} \address{1)Universit\'e de Brest, Laboratoire de Math\'ematiques de Bretagne Atlantique, CNRS UMR 6205, Brest, France\\ 2)Fran\c{c}oise P\`ene is supported by the IUF.} \email{[email protected]} \email{[email protected]} \keywords{} \subjclass[2000]{Primary: 37B20} \begin{abstract} For many measure preserving dynamical systems $(\Omega,T,m)$ the successive hitting times to a small set are well approximated by a Poisson process on the real line. In this work we define a new process obtained from recording not only the successive times $n$ of visits to a set $A$, but also the position $T^n(x)$ in $A$ of the orbit, in the limit where $m(A)\to0$. We obtain a convergence of this process, suitably normalized, to a Poisson point process in time and space under some decorrelation condition. We present several new applications to hyperbolic maps and SRB measures, including the case of a neighborhood of a periodic point, and some billiards such as Sinai billiards, Bunimovich stadium and diamond billiard. \end{abstract} \date{\today} \maketitle \tableofcontents \section{Introduction} The study of Poincar\'e recurrence in dynamical systems, including the occurrence of rare events, the distribution of return times, hitting times and Poisson laws, has grown into an active field of research, in deep relation with extreme values of stochastic processes (see~\cite{book} and references therein). Let $(\Omega,\mathcal F,\mu,T)$ be a probability preserving dynamical system. For every $A\in\mathcal F$, we set $\tau_A$ as the first hitting time to $A$, i.e. $$\tau_A(x):=\inf\{n\ge 1\, :\, T^nx\in A\} \, .$$ In many systems the behavior of the successive visits of a typical orbit $(T^nx)_n$ to the sets $A_\varepsilon$, with $\mu(A_\varepsilon)\rightarrow 0+$, is often asymptotic, when suitably normalized, to a Poisson process. Such results were first obtained by Doeblin \cite{dob} for the Gauss map; Pitskel \cite{pit} considered the case of Markov chains. The most recent developments concern non-uniformly hyperbolic dynamical systems, for example \cite{stv,cc,fft,hv,wasilewska}, to mention just a few. An important feature of our work is that we take into account not only the times of successive visits to the set, but also the position of the successive visits in $A_\eps$ within each return. This study was first motivated by a question asked by D. Sz\'asz and I.P. T\'oth for diamond billiards, which we address in Section~\ref{sec:bd}. Beyond its own interest, Poisson limit theorems for such spatio-temporal processes have recently been used to prove convergence to L\'evy stable processes in dynamical systems; see~\cite{marta} and subsequent works such as~\cite{MZ}. Let us indicate that, at the same time and independently of the present work, analogous processes have been investigated in \cite{ffm}. We thus consider these events in time and space $$\{(n,T^nx)\ :\ n\ge 1,\ T^nx\in A_\varepsilon\}\subset [0,+\infty)\times \Omega,$$ which we will normalize both in time and space, as follows. \includegraphics[scale=0.6]{pp1.pdf} The successive visit times have order $1/\mu(A_\varepsilon)$, which gives the normalization in time. For the space, we use a family of normalization functions $H_\varepsilon:A_\varepsilon\rightarrow V$.
A typical choice when $\Omega$ is Euclidean would be to take for $A_\eps$ an $\eps$-ball and for $H_\eps$ a zoom which sends $A_\eps$ to size one. Another choice, in the flavor of extremal processes, would be to consider $A_\eps$ as a rare event, and $H_\eps$ would be the strength of the event. We will then consider the family of point processes $(\mathcal N_\varepsilon)_\varepsilon$ on $[0,+\infty)\times V$ given by \begin{equation}\label{PointProcess} \mathcal N_\varepsilon(x):=\sum_{n\ge 1\ :\ T^n(x)\in A_\varepsilon} \delta_{(n\mu(A_\varepsilon),H_\varepsilon(T^n(x)))} . \end{equation} \includegraphics[scale=0.45]{pp2.pdf} For any measurable subset $B$ of $[0,+\infty)\times V$, $$ \mathcal N_\varepsilon(x) (B)=\#\{n\ge 1\ :\ T^n(x)\in A_\varepsilon,\ (n\mu(A_\varepsilon),H_\varepsilon(T^n(x)))\in B\}.$$ We will simply write $\mathcal N_\varepsilon(B)$ for the measurable function $\mathcal N_\varepsilon(B):x\mapsto \mathcal N_\varepsilon(x) (B)$. The main result of the paper provides general conditions under which the point process $\mathcal N_\varepsilon$ is well estimated by a Poisson point process $\mathcal P_\varepsilon$. The latter contains virtually all the spatial information given by the space coordinate $H_\eps$, and the continuous mapping theorem could in principle be applied to recover many properties related to recurrences in $A_\eps$. We then present several applications, with different maps, flows, and sets $A_\eps$. The structure of the paper is as follows: In Section~\ref{sec:gr} we present a general result which gives convergence to a spatio-temporal point process in a discrete time dynamical system, under some probabilistic one-step decorrelation condition. Then we provide a method to transfer this result to continuous time flows. All these general results are proved in Subsection~\ref{pgr}. In Section~\ref{common} we introduce a framework, adapted for systems modeled by a Young tower, under which one can apply the general results. In Subsection~\ref{sec:voisin} we check these conditions in the case of visits to balls in a Riemannian manifold. In Section~\ref{billiard} we present several applications of the common framework mentioned above to two different types of billiards: the Sina\"{\i} billiard with finite horizon and the Bunimovich stadium billiard. In Section~\ref{periodic} we study the case of balls centered around a periodic point in a uniformly hyperbolic system; as a byproduct we recover a compound Poisson distribution for the temporal process. Section~\ref{sec:bd} consists of a fine study of the successive visits to the vicinity of the corner in a diamond-shaped billiard. \section{Poisson process under a one-step decorrelation assumption}\label{sec:gr} \subsection{Results for discrete-time dynamical systems}\label{sec:gene} Let $(\Omega,\mathcal F,\mu,T)$ be a probability preserving dynamical system. Let $(A_\varepsilon)_\varepsilon$ be a family of measurable subsets of $\Omega$ with $\mu(A_\varepsilon)\rightarrow 0+$ as $\varepsilon\rightarrow 0$. Let $V$ be a locally compact metric space endowed with its Borel $\sigma$-algebra $\mathcal V$. Let $(H_\varepsilon)_\varepsilon$ be a family of measurable functions $H_\varepsilon:A_\varepsilon\rightarrow V$. We set $E:=[0,+\infty)\times V$ and we endow it with its Borel $\sigma$-algebra $\mathcal E=\mathcal B([0,+\infty))\otimes \mathcal V$.
We also consider the family of measures $(m_\eps)_\eps$ on $(V,\mathcal V)$ defined by \begin{equation} m_\eps:=\mu(H_\eps^{-1}(\cdot)|A_\eps) \end{equation} and $\mathcal W$ a family stable by finite unions and intersections of relatively compact open subsets of $V$, that generates the $\sigma$-algebra $\mathcal V$. Let $\lambda$ be the Lebesgue measure on $[0,\infty)$. We will approximate the point process defined by \eqref{PointProcess} by a Poisson point process on $ E$. Given $\eta$ a $\sigma$-finite measure on $(E,\mathcal E)$, recall that a process $\mathcal N$ is a Poisson point process on $E$ of intensity $\eta$ if \begin{enumerate} \item $\mathcal N$ is a point process (i.e. $\mathcal N=\sum_i\delta_{x_i}$ with $x_i$ $E$-valued random variables), \item For every pairwise disjoint Borel sets $B_1,...,B_n\subset E$, the random variables $\mathcal N(B_1),...,\mathcal N(B_n)$ are independent Poisson random variables with respective parameters $\eta(B_1),...,\eta(B_n)$. \end{enumerate} Let $M_p(E)$ be the space of all point measures defined on $E$, endowed with the topology of vague convergence ; it is metrizable as a complete separable metric space. A family of point processes $(\mathcal N_\eps)_\eps$ converges in distribution to $\mathcal N$ if for any bounded continuous function $f\colon M_p(E)\to {\mathbb R}$ the following convergence holds true \begin{equation}\label{cvdf} \mathbb{E}(f(\mathcal N_\eps))\to \mathbb{E}( f(\mathcal N)),\quad \mbox{as }\varepsilon\rightarrow 0. \end{equation} For a collection $\mathcal A$ of measurable subsets of $\Omega$, we define the following quantity: \begin{equation}\label{defDelta} \Delta(\mathcal{A}) := \sup_{A\in \mathcal{A},B\in\sigma(\cup_{n=1}^{\infty}T^{-n}\mathcal{A})} \left|\mu(A\cap B)-\mu(A)\mu(B)\right|. \end{equation} Our main general result is the following one. \begin{theorem}\label{THM} We assume that \begin{enumerate} \item for any finite subset $\mathcal W_0$ of $\mathcal W$ we have $\Delta(H_\eps^{-1}\mathcal W_0)=o(\mu(A_\varepsilon))$, \item there exists a measure $m$ on $(V,\mathcal V)$ such that for every $F\in\mathcal W$, $m(\partial F)=0$ and $\lim_{\varepsilon\to 0}\mu(H_\eps^{-1}(F)|A_\eps)$ converges to $m(F)$. \end{enumerate} Then the family of point processes $(\mathcal N_\varepsilon)_\varepsilon$ converges strongly\footnote{i.e. with respect to any probability measure absolutely continuous w.r.t. $\mu$} in distribution to a Poisson process $\mathcal P$ of intensity $\lambda\times m$. In particular, for every relatively compact open $B\subset E$ such that $(\lambda\times m)(\partial B)=0$, $(\mathcal N_\varepsilon(B))_\varepsilon$ converges in distribution to a Poisson random variable with parameter $(\lambda\times m)(B)$. \end{theorem} We emphasize that Theorem~\ref{THM} remains valid when $\eps$ is restricted to a subsequence $\eps_k\to0$, in the assumptions and the conclusion. Condition (ii) is equivalent to the fact that the family of measures $(m_\eps)_\eps$ converges vaguely\footnote{This means that for any continuous test function $\varphi$ with compact support the integrals $m_{\eps}(\varphi)$ converge to $m(\varphi)$. In particular $m$ may not be a probability because of a loss of mass at infinity.} to $m$. This is sometimes too strong, especially when the $m_\eps$ are not absolutely continuous. 
Nevertheless, without this hypothesis, our point process $\mathcal N_\varepsilon$ may remain well approximated by a Poisson process $\mathcal P_\varepsilon$ with varying intensities $\lambda\times m_\eps$, in a sense that can be made precise. \begin{theorem}\label{THM1} We assume that for any vague limit point $m$ of $(m_\eps)_\eps$ and for any sequence $\mathcal E=(\eps_k)_k$ converging to $0$ achieving the above limit, there exists a family $\mathcal W_\mathcal E$ of relatively compact open subsets of $V$, stable by finite unions and intersections, that generates the $\sigma$-algebra $\mathcal V$ and that \begin{enumerate} \item for any finite subset $\mathcal W_0$ of $\mathcal W_\mathcal E$ we have $\Delta(H_{\eps_k}^{-1}\mathcal W_0)=o(\mu(A_{\varepsilon_k}))$, \item for every $F\in\mathcal W_\mathcal E$, $m(\partial F)=0$. \end{enumerate} Then the family of point processes $(\mathcal N_\varepsilon)_\varepsilon$ is approximated strongly in distribution by a family of Poisson processes $(\mathcal P_\eps)_\eps$ of intensities $\lambda\times m_\eps$, in the sense that for any $\nu\ll \mu$ and for every continuous and bounded $f\colon M_p(E)\to {\mathbb R}$ \begin{equation}\label{approx} \mathbb{E}_\nu(f(\mathcal{N}_\eps)) - \mathbb{E}(f( \mathcal P_\eps)) \to 0. \end{equation} \end{theorem} \begin{proof}[Proof of Theorem~\ref{THM1}] Suppose that \eqref{approx} does not hold for some $f$. Then there exists $\vartheta>0$ and a sequence $\eps_k\to0$ such that for all $k$, \begin{equation}\label{badf} |\mathbb{E}_\nu(f(\mathcal{N}_{\eps_k})) - \mathbb{E}(f( \mathcal P_{\eps_k}))|>\vartheta. \end{equation} Up to taking a subsequence if necessary, we may assume that $(m_{\eps_k})_k$ converges to some $m$. Applying Theorem~\ref{THM} with the sequence $(\eps_k)_k$ we get that $(\mathcal{N}_{\eps_k})_k$ converges to a Poisson point process of intensity $\lambda\times m$, and $(\mathcal P_{\eps_k})_k$ as well, which contradicts \eqref{badf}. \end{proof} This proof shows that the possible non convergence of the measures $m_\eps$ is not a serious problem. {\bf In the rest of the paper, we will assume without loss of generality - to simplify the exposition of our general results - that the intensity measures involved in the results converge to a unique limit}. Clearly, if it is not the case, one can always reduces the problem to that case by passing through a subsequence. \subsection{Application to special flows}\label{sec:specialflow} In this section we show how to pass from the discrete time setting to the continuous time one. Given $(\Omega,\mathcal F,\mu,T)$ and an integrable function $\tau:\Omega\rightarrow(0,+\infty)$, the special flow over $(\Omega,\mathcal F,\mu,T)$ with roof function $\tau:\Omega\rightarrow(0,+\infty)$ is the flow $(\mathcal M,\mathcal T,\nu,(Y_t)_t)$ defined as follows: $$\mathcal M:=\{(x,t)\in \Omega\times [0,+\infty)\ :\ t<\tau(x)\} \, ,$$ $\mathcal T$ is the trace in $\mathcal M$ of the product $\sigma$-algebra $\mathcal F\otimes\mathcal B([0,+\infty))$ $$\nu:=\left(\frac{\mu\times\lambda}{\int_\Omega\tau\, d\mu}\right)_{|\mathcal T}\, ,$$ where $\lambda$ is the Lebesgue measure on $[0,+\infty)$ and $Y_s(x,t)=(x,t+s)$ with the identification $(x,\tau(x))\equiv (T(x),0)$, i.e. $$Y_s(x,t)=\left(T^{n_{s}(x,t)},t+s-\sum_{k=0}^{n_{s}(x,t) -1}\tau\circ T^k(x)\right)\, ,$$ with $n_s(x,t):=\sup\{ n\ :\ \sum_{k=0}^{n-1}\tau\circ T^k(x)\le t+s\}$ the number of visits of the orbit $(Y_u(x,t))_{u\in(0,s)}$ to $\Omega\times \{0\}$ before time $t$. 
We will write \[ \forall x\in \Omega,\quad \forall n\in\mathbb N,\quad S_n\tau(x):=\sum_{k=0}^{n-1}\tau\circ T^k(x)\quad\mbox{and}\quad \bar\tau:=\int_\Omega \tau(x)\, d\mu(x). \] We define also the canonical projection $\Pi:\mathcal M\rightarrow \Omega$ by $\Pi(x,t)= x$. Let $(\mathcal A_\varepsilon)_\varepsilon$ be a sequence of subsets of $\mathcal M$. We are interested in a process that records the times where at least one entrance into $\mathcal A_\eps$ occurs between two consecutive returns to the base. This only depends on the projection $A_\varepsilon:=\Pi \mathcal A_\varepsilon$. Let $V$ be a locally compact metric space endowed with its Borel $\sigma$-algebra $\mathcal V$. Let $(H_\eps)_\eps$ be a family of measurable functions from $A_\eps$ to $V$. This leads us to the definition $$ \mathfrak N_\varepsilon (y):=\sum_{t>0\ :\ Y_t(y) \in A_\eps\times\{0\} } \delta_{(t\mu( A_\varepsilon)/\bar\tau,H_\varepsilon(\Pi Y_t(y)))}. $$ \begin{theorem}\label{THMflow} Assume that $(\mathcal N_\varepsilon)_\varepsilon$, defined on $(\Omega,\mu,T)$ by \eqref{PointProcess} with $A_\varepsilon:=\Pi \mathcal A_\varepsilon$ and $H_\varepsilon$ given above, converges in distribution, with respect to some probability measure $\tilde\mu\ll\mu$, to a Poisson point process of intensity $\lambda\times m$, where $m$ is some measure on $(V,\mathcal V)$. Then the family of point processes $(\mathfrak N_\varepsilon)_\varepsilon$ converges strongly in distribution to a Poisson process $\mathcal P$ of intensity $\lambda\times m$. \end{theorem} \subsection{Proofs of the general theorems}\label{pgr} In this section we prove Theorems \ref{THM} and \ref{THMflow}. To show that the family of point processes $(\mathcal{N}_\eps)_\eps$ on ${\mathbb R}^+\times V=E$ converges (with respect to some probability measure $\mathbb P$) to a Poisson point process with a $\sigma$-finite intensity $\eta$, we can apply Kallenberg's criterion (Proposition 3.22 in \cite{resnick}). It suffices to prove that for some system $\mathcal R$ stable by finite intersection and union of relatively compact open subsets, that generates the $\sigma$-algebra $\mathcal V\times \mathcal B({\mathbb R}^+)$, the following holds: For any $R\in\mathcal{R}$, \begin{itemize} \item[(O)] $\eta(\partial R)=0$, \item[(A)] $\mathbb{E}(\mathcal{N}_\eps(R))\to \eta(R)$, \item[(B)] $\mathbb{P}(\mathcal{N}_\eps(R)=0)\to e^{-\eta(R)}$. \end{itemize} The last condition will be obtained from the simple next geometric approximation. \begin{proposition}\label{pro:delta} Let $\mathcal{A}$ be a collection of measurable subsets of $\Omega$. Then for any $r\ge 1$, any positive integers $p_1,\ldots, p_r,q_1,\ldots,q_r$ such that $p_i+q_i<p_{i+1}$ for any $i=1,...r-1$, and for any sets $A_1,\ldots, A_r\in\mathcal{A}$ we have \[ \left|\mu(\forall i,\tau_{A_i}\circ T^{p_i}>q_i)-\prod_{i=1}^r(1-\mu(A_i))^{q_i}\right| \le \sum_{i=1}^r q_i \Delta(\mathcal{A})\, , \] with $\Delta$ defined in \eqref{defDelta}. \end{proposition} \begin{proof} We will write $a = b\pm c$ to say that $|a-b|\le c$. Note that for any integer $q_1\ge1$ we have \[ \{ \tau_{A_1}>q_1 \} = T^{-1} \{ \tau_{A_1}>q_1-1 \} - T^{-1}(A_1\cap \{ \tau_{A_1}>q_1-1 \}). 
\] Using the $T$-invariance of $\mu$, this gives \[ \begin{split} &\mu(\forall i,\ \tau_{A_i}\circ T^{p_i}>q_i)\\ &=\mu(\forall i,\ \tau_{A_i}\circ T^{p_i-p_1}>q_i)\\ &=\mu(\tau_{A_1}>q_1-1 ; \forall i\ge2,\tau_{A_i}\circ T^{p_i-p_1-1}>q_i)\\ &\quad -\mu(A_1\cap \{\tau_{A_1}>q_1-1 ; \forall i\ge2,\tau_{A_i}\circ T^{p_i-p_1-1}>q_i\}) \\ &=(1-\mu(A_1)) \mu(\tau_{A_1}>q_1-1 ; \forall i\ge2,\tau_{A_i}\circ T^{p_i-p_1+1}>q_i)\pm \Delta(\mathcal{A}). \end{split} \] By an immediate induction on $q_1$ we obtain \[ \mu(\forall i\ge 1,\tau_{A_i}\circ T^{p_i}>q_i) = (1-\mu(A_1))^{q_1} \mu(\forall i\ge2,\tau_{A_i}\circ T^{p_i-q_1}>q_i)\pm q_1\Delta(\mathcal{A}). \] The conclusion follows by an induction on the number $r$ of sets. \end{proof} \begin{proof}[Proof of Theorem \ref{THM}] The strong convergence in distribution is by Theorem 1 in \cite{Zweimuller:2007} a direct consequence of the classical convergence in distribution with respect to $\mu$. The latter will be proven as announced using Kallenberg's criterion with $\eta:=\lambda\times m$. Let $\mathcal{R}$ be the collection of finite unions of open rectangles $I\times F$, where $I$ is an open bounded interval in $[0,\infty)$ and $F\in\mathcal W$. Let $R\in\mathcal{R}$. We rearrange the subdivision given by the endpoints of the intervals defining $R$ to write $R=R' \cup \bigcup_{i=1}^r (t_i,s_i)\times F_i $, where $t_i< s_i$, $F_i\in\mathcal{W}$ for $i=1,...,r$, $s_i\le t_{i+1}$ for every $i=1,...,r-1$ and $R'$ is contained in a finite number of vertical strips $\{t\}\times V$. We assume without loss of generality that $R'=\emptyset$, as it will be clear from the sequel that the same arguments would give $\mathbb E_\mu[\mathcal N_\varepsilon(R')]\to0$ and $\mu(\mathcal{N}_\eps(R')=0)\to1$. Note that for all $i=1,\ldots ,r$, $m_\eps(F_i)\to m(F_i)$ as $\eps\to 0$ by Hypothesis (ii) and the Portmanteau theorem. Condition (A) follows from the definition and the linearity of the expectation. Indeed, $$\mathbb E_\mu[\mathcal N_\varepsilon(R)]=\sum_{i=1}^r \sum_{n=\lfloor\frac {t_i}{\mu(A_\varepsilon)}\rfloor+1}^{\lceil\frac {s_i}{\mu(A_\varepsilon)}\rceil-1}\mu(H_\varepsilon^{-1}(F_i))\sim\sum_i(s_i-t_i)m_\eps(F_i)\sim (\lambda\times m)(R).$$ For Condition (B), set $p_i=\lfloor t_i/\mu(A_\eps)\rfloor$ and $q_i=\lceil s_i/\mu(A_\eps)\rceil-1-p_i$ and $A_{\eps,i}=H_\eps^{-1}F_i$. Observe that $\mathcal{N}_\eps(R)=0$ is equivalent to $\forall i, \tau_{A_{\eps,i}}\circ T^{p_i}>q_i$. By Proposition~\ref{pro:delta} and due to assumption (i), we have \begin{eqnarray*} \mu(\mathcal{N}_\eps(R)=0) &=& \prod_i (1-\mu(A_{\eps,i}))^{q_i} \pm \sum_{i=1}^r q_i \Delta(\{A_{\varepsilon,i}\})\\ &=& \prod_i (1-\mu(A_\varepsilon)m_\eps(F_i))^{q_i} + o(1)\\ &=&\exp\left(-\sum_i q_i \mu(A_\varepsilon) m_\eps(F_i)\right) +o(1)\\ &=&\exp\left(-(\lambda\times m)(R) \right)+o(1). \end{eqnarray*} \end{proof} \begin{proof}[Proof of Theorem \ref{THMflow}] Due to Theorem 1 of \cite{Zweimuller:2007}, the assumption holds also with $\tilde\mu$ s.t. $\dfrac{d\tilde\mu}{d\mu}=\dfrac{\tau}{\bar\tau}$, and it suffices to prove the convergence in distribution of $(\mathcal N_\varepsilon)_\varepsilon$ with respect to $\nu$. By hypotheses, $(\mathcal N_\varepsilon)_\varepsilon$ converges in distribution wrt $\Pi_*\nu$ (which coincides with the probability measure on $\Omega$ with density $\tau/\bar\tau$ wrt $\mu$) to a Poisson point process $\mathcal P$ of intensity $\lambda\times m$, i.e.
$(\mathcal N_\varepsilon\circ\Pi)_\varepsilon$ converges in distribution, with respect to $\nu$, to $\mathcal P$. Note that for $y=(x,s)\in \mathcal{M}$ \[ \mathfrak N_\varepsilon(y)= (\psi_{\varepsilon,y})_* (\mathcal N_\varepsilon(\Pi(y)))\ \] with \[ \psi_{\varepsilon,y}(t,z)= ((S_{\lfloor t/\mu(A_\varepsilon)\rfloor}\tau(\Pi y)-s)\mu(A_\varepsilon)/\bar\tau,z) \] We conclude using the facts that $\mu(A_\varepsilon)\to 0$ and that $S_n\tau/(n\bar\tau)\to 1$ $\nu$-almost surely. \end{proof} \section{A common framework suitable for systems modeled by Gibbs-Markov-Young towers}\label{common} \subsection{General result} The authors have studied in \cite{ps15} the case of a temporal Poisson point process, corresponding to $V=\{0\}$ and $H_\varepsilon\equiv 0$ (Theorem 3.5 therein appears as Theorem \ref{THM} of the present paper in this specific context). In the above mentioned article, the general context was the case of dynamical systems modeled by a Gibbs Markov Young tower as studied in~\cite{aa}. \begin{hypothesis}\label{HHH} Let $\alpha,\beta>0$. Let $\Omega$ be a metric space endowed with a Borel probability measure $\mu$ and a $\mu$-preserving transformation $T$. Assume that there exists a sequence of finite partitions $(\mathcal Q_k)_{k\ge 0}$ of an extension $(\tilde\Omega,\tilde\mu,\tilde T)$ of $(\Omega,\mu,T)$ by $\tilde\Pi:\tilde\Omega\rightarrow\Omega$ such that one of the two following assumptions holds: \begin{itemize} \item[(I)] either $\sup_{Q\in\mathcal Q_k}\diam (\tilde\Pi(Q))\le C k^{-\alpha}$, \item[(II)] or $\sup_{Q\in\mathcal Q_{2k}}\diam(\tilde\Pi \tilde T^kQ)\le C k^{-\alpha}$. \end{itemize} Assume moreover that, there exists $C'>0$ such that, for every $k,n$ with $n\ge 2k$, for any $A\in \sigma(\mathcal Q_k)$ and for any $B\in\sigma\left(\bigcup_{m\ge 0}\mathcal Q_m\right)$, $$\left|Cov_{\tilde\mu}\left(\mathbf 1_A,\mathbf 1_{B}\circ \tilde T^n\right)\right|\le C' n^{-\beta}\tilde\mu(A).$$ \end{hypothesis} Case (I) is appropriate for non-invertible systems, whereas Case (II) is appropriate for invertible systems. Wide classes of examples of dynamical systems satisfying these assumptions can be found in \cite{aa} (see also \cite{young,young99}). \begin{proposition}\label{appli1} Let $(\Omega,\mathcal F,\mu,T)$ satisfying Hypothesis \ref{HHH}. With the notations of the beginning of Section \ref{sec:gene}, assume that there exists $p_\eps=o(\mu(A_\eps)^{-1})$ such that \begin{enumerate} \item\label{a} $ \mu(\tau_{A_\eps}\le p_\varepsilon | A_\eps)=o(1)$, \item \label{ii} $\mu((\partial A_\varepsilon)^{\lfloor C4^\alpha p_\varepsilon^{-\alpha}\rfloor})=o(\mu(A_\varepsilon))$ \item \label{iii} $\mu(H_\varepsilon^{-1}(\cdot)|A_\varepsilon)$ converges vaguely to some measure $m$, \item\label{F} for all $F\in\mathcal W$, $m(\partial F)=0$ and $\mu((\partial (H_\varepsilon^{-1} F))^{\lfloor C4^\alpha p_\varepsilon^{-\alpha}\rfloor}|A_\varepsilon)=o(1)$. \end{enumerate} Then the assumptions of Theorem~\ref{THM} are satisfied. \end{proposition} We postpone the proof to the appendix. When assumption \eqref{iii} is missing, one has to change \eqref{F} accordingly, going through a subsequence $\mathcal E=(\eps_k)$, a family $\mathcal W_{\mathcal E} $ and a limit point $m$ to get that the assumptions of Theorem~\ref{THM1} are satisfied. 
\begin{remark}\label{first} \begin{itemize} \item Notice that assumption \eqref{a} is always satisfied when the normalized first return time $\mu(A_\eps)\tau_{A_\eps}$ to $A_\eps$ is asymptotically exponentially distributed with parameter one, in particular when the temporal process (corresponding to $V=\{0\}$) converges in distribution to a Poisson process of intensity $\lambda$. \item It may happen that one only has a nonuniform control of the diameters instead of Hypothesis \ref{HHH} (I) or (II). Indeed it is possible to avoid these hypotheses. It suffices to replace (ii) with $$ \mu(\partial A_\eps^{[k_\eps]}) = o(\mu(A_\eps)), $$ where in the non-invertible case (I) $\partial A_\eps^{[k_\eps]}=\cup\tilde\Pi(Q)$, the union being over those $Q\in \mathcal{Q}_{k_\eps}$ such that $\diam \tilde{\Pi}(Q)>Ck_\eps^{-\alpha}$ and $\partial A_\eps\cap\tilde{\Pi}(Q)\neq\emptyset$, $k_\eps =\lfloor p_\eps/2\rfloor$; in the invertible case (II) $\partial A_\eps^{[k_\eps]}=\cup\tilde\Pi(T^{k_\eps}Q)$, the union being over those $Q\in \mathcal{Q}_{2k_\eps}$ such that $\diam \tilde{\Pi}(T^{k_\eps}Q)>Ck_\eps^{-\alpha}$ and $\partial A_\eps\cap\tilde{\Pi}(T^{k_\eps} Q)\neq\emptyset$, $k_\eps=\lfloor p_\eps/4\rfloor$. The modification of the proof of Proposition~\ref{appli1} is immediate. \end{itemize} \end{remark} The following result is helpful to get assumption~\eqref{F} in many cases. \begin{proposition}\label{partition} Assume that $V$ is an open subset of ${\mathbb R}^d$, for some $d>0$, that $H_\eps$ are $\mathfrak h$-H\"older continuous maps with respective H\"older constant $C_{\mathfrak h}(H_\eps)$, that there exists $\eta_\eps\rightarrow 0+$ such that $C_{\mathfrak h}(H_\eps)\eta_\eps^{\mathfrak h}\to0$ and that $m_\eps\to m$ (where $m$ is a finite measure on $V$). Then there exists a family $\mathcal W$ of relatively compact open subsets of $V$, stable by finite unions and intersections, generating the Borel $\sigma$-algebra of $V$, such that \begin{enumerate} \item $m(\partial F)=0$ for any $F\in \mathcal W$, \item for any $F\in \mathcal W$, $$ \mu((\partial H_\eps^{-1}F)^{[\eta_\eps]} |A_\eps) = o(1). $$ \end{enumerate} \end{proposition} \begin{proof} (i) Let $\pi_j:\mathbb R^d\rightarrow\mathbb R$ be the $j$-th canonical projection (i.e. $\pi_j((x_i)_i)=x_j$). The set $\mathcal G^{j}:=\{ a\in{\mathbb R}\colon (\pi_j)_*m(\{a\})=0\}$ is dense in ${\mathbb R}$, since its complement is at most countable. We then define $\mathcal W$ as the collection of open rectangles $\prod_{j=1}^d(a_j;b_j)\subset V$, with $a_j,b_j\in\mathcal G^{j}$. By construction $m(\partial F)=0$ for any $F\in \mathcal W$. The density of the $\mathcal G^j$'s implies that $\mathcal W$ generates the Borel $\sigma$-algebra. (ii) For any $F\in\mathcal W$ we have the inclusion \[ (\partial H_\eps^{-1}F)^{[\eta_\eps]} \subset H_\eps^{-1} \partial F^{[C_{\mathfrak h}(H_\eps)\eta_\eps^{\mathfrak h}]}. \] Hence, for every $\eps_0>0$, \[ \mu((\partial H_\eps^{-1}F)^{[\eta_\eps]} |A_\eps) \le m_\eps(\partial F^{[C_{\mathfrak h}(H_\eps)\eta_\eps^{\mathfrak h}]})\le m_\eps(\partial F^{[M_{\eps_0}]}), \] for any $0<\eps<\eps_0$, with $M_{\eps_0}:=\sup_{\eps\in(0,\eps_0)}C_{\mathfrak h}(H_\eps)\eta_\eps^{\mathfrak h}$. Therefore \[ \mathop{{\overline {\hbox{{\rm lim}}}}}_{\eps\to0} \mu((\partial H_\eps^{-1}F)^{[\eta_\eps]} |A_\eps) \le m(\partial F^{[M_{\eps_0}]}), \] which goes to $0$ as $\eps_0\to0$ since $m(\partial F)=0$.
\end{proof} \subsection{Successive visits in a small neighbourhood of a generic point}\label{sec:voisin} The purpose of the next result is to give examples for which Theorem \ref{THM} applies for returns in small balls, in the same context as in \cite{ps15}. \begin{theorem}\label{ThmgeneAppli1} Assume that $\Omega$ is a $d$-Riemmannian manifold and that $\dim_H\mu=\mathop{{\underline {\hbox{{\rm lim}}}}}_{\varepsilon\rightarrow 0} \frac{\log \mu(B(x_0,\varepsilon))}{\log\varepsilon}$ for $\mu$-almost every $x_0\in\Omega$. Assume Hypothesis \ref{HHH} with $\alpha>\dim_H\mu$. Then for $\mu$-almost every $x_0\in\Omega$ such that \begin{equation}\label{neglec} \exists \delta\in (1,\alpha\dim_H\mu),\quad \mu(B(x_0,\varepsilon)\setminus B(x_0,\varepsilon-\varepsilon^\delta)=o(\mu(B(x_0,\varepsilon))), \end{equation} the family of point processes $$ \mathcal N_\varepsilon(x) =\sum_{n\colon T^n(x)\in B(x_0,\eps)} \delta_{\left(n\mu(B(x_0,\eps),\eps^{-1}\exp_{x_0}^{-1}(T^nx)\right)} $$ is strongly approximated in distribution by a Poisson process $\mathcal P_\eps$ of intensity $\lambda\times m_\eps^{x_0}$ where $ m_\eps^{x_0}=\mu(H_\eps^{-1}(\cdot)|B(x_0,\varepsilon))$. \end{theorem} \begin{proof}[Proof of Theorem~\ref{ThmgeneAppli1}] We apply Proposition~\ref{appli1}, assuming without loss of generality that the measures $m_\eps^{x_0}$ are converging. Fix $\sigma\in(\delta/\alpha,\dim_H\mu)$ and set $p_\eps=\eps^{-\sigma}$. The assumptions imply that the temporal return times process converges in distribution to a Poisson process of parameter one (See \cite{ps15}). By Remark~\ref{first} this shows that Assumption \eqref{a} of Proposition~\ref{appli1} is true. Assumption~\eqref{ii} follows from \eqref{neglec}. Finally, the family $\mathcal W$ comes from Proposition~\ref{partition} above, which proves also the last assumption \eqref{F} and thus the theorem. \end{proof} \section{Applications to billiard maps and flows}\label{billiard} \subsection{Bunimovich billiard}\label{sec:Bunimovich} The Bunimovich billiard is an example of weakly hyperbolic system (with polynomial decay of the covariance of H\"older functions). Let $\ell>0$. We consider the planar domain $Q$ union of the rectangle $[-\ell/2;\ell/2]\times[-1,1]$ and of the two planar discs of radius 1 centered at $(\pm\ell/2,0)$. We consider a point particle moving with unit speed in $Q$, going straight on between two reflections off $\partial Q$ and reflecting with respect to the classical Descartes law of reflection (incident angle=reflected angle). The billiard system $(\Omega,\mu,T)$ describes the evolution at reflected times of a point particle moving in the domain $Q$ as described above. We define the set $\Omega$ of reflected vectors as follows $$ \Omega:=\{\partial Q \times S^1\ :\ <\vec n_q,\vec v>\ge 0\}\, ,$$ where $\vec n_q$ is the unit normal vector normal to $\partial Q$, directed inside $Q$, at $q$. Such a reflected vector $(q,\vec v)$ can be represented by $x=(r,\varphi)\in\mathbb R/(2(\pi+\ell)\mathbb Z)$, $r$ corresponding to the counterclockwise curvilinear abscissa of $q$ on $\partial Q$ (starting from a fixed point on $\partial Q$) and $\varphi$ measuring the angle between $\vec n_q$ and $\vec v$. The transformation $T$ maps a reflected vector to the reflected vector corresponding to the next reflection time. This transformation preserves the measure $\mu$ given (in coordinates) by ${d\mu}(r,\varphi)=h(r,\varphi) \, dr\, d\varphi$ where $h(r,\varphi)=\frac{\cos\varphi}{2\, |\partial Q|}$. 
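Note that the factor $2\,|\partial Q|$ in the definition of $h$ is exactly the normalization making $\mu$ a probability measure: since $\varphi$ ranges over $[-\pi/2,\pi/2]$ and $r$ over a curve of total length $|\partial Q|$,
\[
\int_{\partial Q}\int_{-\pi/2}^{\pi/2}\frac{\cos\varphi}{2\,|\partial Q|}\,d\varphi\,dr=\frac{2\,|\partial Q|}{2\,|\partial Q|}=1 .
\]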
We endow $\Omega$ with the supremum metric $d((r,\varphi),(r',\varphi'))=\max(|r-r'|,|\varphi-\varphi'|)$. \begin{theorem}\label{thmBunimovich} For $\mu$-almost every $x_0=(r_0,\varphi_0)\in \Omega$, the family of point processes $$\sum_{n\ge 1\ :\ d(T^n(x),x_0)<\eps} \delta_{\left(n\varepsilon^2h(x_0),\frac{T^n(x)-x_0}\eps\right)} $$ converges in distribution (with $x$ distributed with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\Omega$) to a Poisson Point Process with intensity $\lambda\times\lambda_2$, where here $\lambda_2$ is the 2-dimensional normalized Lebesgue measure on $(-1,1)^2$. \end{theorem} \ \begin{center} \includegraphics[scale=0.5]{billstade.pdf} \end{center} \begin{proof} The convergence comes directly from Theorem \ref{ThmgeneAppli1}, case (II) with $\alpha=1$, $\zeta=1$ and $\dim_H\mu=2$, (due to \cite{ps15}, namely in Section 9 therein) and from the fact that $\mu(H_\varepsilon^{-1}(\cdot) |A_\varepsilon)$ converges in distribution to the normalized Lebesgue measure on $B_{|\cdot|_\infty}(0,1)$. To identify the intensity we observe that $\mu(B(x_0,\varepsilon))\sim \cos\varphi_0\varepsilon^2/(2|\partial Q|)$ as $\varepsilon\rightarrow 0+$. \end{proof} We now consider the billiard flow $(\mathcal M,\nu,(Y_t)_t)$ with $$\mathcal M=\{(q,\vec v)\in Q \times S^1\ :\ q\in \partial Q\Rightarrow <\vec n_q,\vec v>\ge 0\},$$ where $Y_t(q,\vec v)=(q',\vec v')$ if a particle that was at time 0 at position $q$ with speed $\vec v$ will be at time $t$ at position $q'$ with speed $\vec v'$ and where $\nu$ is the normalized Lebesgue measure on $\mathcal M$. An important well known fact is that the billiard flow system $(\mathcal M,\nu,(Y_t)_t)$ can be represented as the special flow over the billiard map system $(\Omega,\mu,T)$ with roof function $\tau:\Omega\rightarrow(0,+\infty)$ given by $\tau(x):=\inf\{t>0\ :\ Y_t(x)\in\partial Q\times S^1\}$. This enables us to get the following result. \begin{corollary}\label{coroBunimovich} For $\mu$-almost every $x_0=(r_0,\varphi_0)\in\Omega$, the family of point processes $$ \sum_{t>0\ :\ Y_t(y)\in B_\Omega(x_0,\varepsilon)} \delta_{\left(\frac{t\varepsilon^2\cos\varphi_0}{2\pi Area(Q)},\frac{Y_t(y)-x_0}{\varepsilon}\right)} $$ converges in distribution (with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\mathcal M$) to a Poisson Point Process with intensity $\lambda\times\lambda_2$, where here $\lambda_2$ is the 2-dimensional normalized Lebesgue measure on $[-1,1]^2$ and where $B_\Omega(x_0,\eps)$ means the ball in $\Omega$. \end{corollary} \begin{proof} Set $\bar\tau:=\int_\Omega\tau\, d\mu$. Due to Theorems \ref{thmBunimovich} and \ref{THMflow}, the point process $$\sum_{t>0\ :\ Y_t(y)\in B_\Omega(x_0,\varepsilon)} \delta_{\left(\frac{t\varepsilon^2\cos\varphi_0}{2\bar\tau|\partial Q|},\frac{Y_t(y)-x_0)}\eps \right)} $$ converges in distribution to a Poisson Point Process with intensity $\lambda\times\lambda_2$. Moreover $2\pi Area(Q)=\int_\Omega\tau\cos\varphi\, dr\, d\varphi=2|\partial Q|\bar\tau .$ \end{proof} \subsection{Sinai billiards} Let $I\in\mathbb N^*$ and $O_1,...,O_I$ be open convex subsets of $\mathbb T^2$ with $C^3$-smooth boundary of positive curvature, and pairwise disjoint closures. We then set $Q=\mathbb T^2\setminus\bigcup_{i=1}^IO_i$. As for the Bunimovich billiard, we consider a point particle moving in $Q$, with unit speed and elastic reflections off $\partial Q$. This model is called the Sinai billiard. 
We assume moreover that the horizon is finite, i.e. that the time between two reflections is uniformly bounded. For this choice of $Q$, we consider now the billiard map $(\Omega,\mu,T)$ and the billiard flow $(\mathcal M,\nu,(Y_t)_t)$ defined as for the Bunimovich billiard in Subsection \ref{sec:Bunimovich}. Under this finite horizon assumption, the Sinai billiard has much stronger hyperbolic properties than the Bunimovich billiard (with namely an exponential decay of the covariance of H\"older functions), but nevertheless, compared to Anosov map, its study is complicated by the presence of discontinuities. \begin{theorem}\label{Sinai1} For $\mu$-almost every $x_0=(q_0,\vec v_0)\in \Omega$ (represented by $(r_0,\varphi_0)$), \begin{itemize} \item {\bf [Return times in a neighborhood of $x_0$]} The conclusions of Theorem \ref{thmBunimovich} and of its Corollary \ref{coroBunimovich} hold also true. \item {\bf [Return times in a neighborhood of the position $q_0$ of $x_0$]} The family of point processes $$ \sum_{n\ge1\ :\ T^n(x)\in B_{\partial Q}(q_0,\varepsilon)\times S^1} \delta_{\left(\frac{2\varepsilon n}{|\partial Q|},\frac{r-r_0}{\eps},\varphi\right)}$$ (where $x$ is represented by $(r,\varphi)$) converges in distribution (with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\Omega$) to a Poisson Point Process with intensity $\lambda\times m_0$, where here $m_0$ is the probability measure on $[-1,1]\times[-\pi/2;\pi/2]$ with density $(r,\varphi)\mapsto \frac{\cos(\varphi)}{4}$. \item The family of point processes $$\sum_{t>0\ :\ Y_t(y)\in B_Q(q_0,\varepsilon)\times S^1} \delta_{\left(\frac{2\varepsilon t}{\pi Area(Q)},\frac{r-r_0}\eps,\varphi\right)}$$ converges in distribution (with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\mathcal M$) to a Poisson Point Process with intensity $\lambda\times m_0$, with $m_0$ as above. \end{itemize} \end{theorem} Due to \cite{young}, the Sinai billiard $(\Omega,\mu, T)$ satisfies Hypothesis \ref{HHH}-(II) with any $\alpha>0$ and any $\beta>0$. \begin{proof} The first item follows from Theorem~\ref{ThmgeneAppli1} as in Theorem~\ref{thmBunimovich} and Corollary~\ref{coroBunimovich}. The third item follows from the second one as Corollary \ref{coroBunimovich} comes from Theorem \ref{thmBunimovich}. Let us prove the second item. Let $x_0=(q_0,\varphi_0)\in\Omega$. We set $A_\varepsilon(x_0):=[q_0-\varepsilon,q_0+\varepsilon] \times [\pi/2,\pi/2]$ and $H_\varepsilon:(q,\varphi)\mapsto(\varepsilon^{-1}(q-q_0),\varphi)$. Note first that $\mu(A_\varepsilon)=2\varepsilon/|\partial Q|$ and that $\mu(H_\varepsilon^{-1}(\cdot) |A_\varepsilon)$ converges vaguely to $m_0$. We follow verbatim the proof of Theorem~\ref{ThmgeneAppli1} which invokes Proposition \ref{appli1}, except for its assumption~\eqref{a}: The convergence of the temporal process as in the proof of Theorem~\ref{ThmgeneAppli1}, associated to the position of the billiard particle, is not present in the literature in these terms. However, a slight adaptation of \cite[Lemma 6.4-(iii)]{ps10} gives that for any $\sigma<1$ and $\mu$-a.e. $x_0\in\partial Q\times]-\frac\pi 2;\frac\pi 2[$ we have $$ \mu(\tau_{A_\varepsilon}\le \varepsilon^{-\sigma}|A_\varepsilon)=o(1).$$ That is (i) of Proposition \ref{appli1} holds with $p_\eps=\eps^{-\sigma}$. 
\end{proof} Let us write $\Pi_Q:\mathcal M\rightarrow Q$ and $\Pi_V:\mathcal M\rightarrow S^1$ for the two canonical projections, which correspond respectively to the position and to the speed. Using results established in \cite{ps10}, we also state a result of convergence to a spatio-temporal Poisson point process for entrance times in balls for the flows. We endow $\mathcal M$ with the metric $d$ given by $d((q,\vec v),(q',\vec v'))=\max(d_0(q,q'),|\angle{(\vec v,\vec v')}|)$, where $d_0$ is the euclidean metric in $Q$ and where $\angle(\cdot,\cdot)$ is the angular measure of the angle. \begin{theorem} For $\nu$-a.e. $y_0=(q_0,\vec v_0)\in\mathcal M$, \begin{itemize} \item the family of point processes $$\sum_{t\ :\ (Y_s(y))_s\mbox{ enters } B(y_0,\varepsilon) \mbox{ at time t}} \delta_{\left(\frac{2\varepsilon^2t}{\pi Area(Q)},\frac{\Pi_Q(Y_t(y))-q_0}\eps,\frac{\angle{(\vec v_0,\Pi_V(Y_t(y)))}}\eps\right)}$$ converges in distribution (when $y$ is distributed with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\mathcal M$) to a Poisson Point Process with intensity $\lambda\times \tilde m_1$, where $\tilde m_1$ is the probability measure of density $(p,\vec u)\mapsto\frac14\langle\tilde n_p,\vec v_0\rangle^+$ on $S^1\times[-1,1]$ (with $\langle\cdot,\cdot\rangle^+$ the positive part of the scalar product in $\mathbb R^2$ and $\tilde n_p$ the inward normal vector to $S^1$ at $p$).\footnote{In the limit, the authorized normalized positions are the positions located on a semi-circle (corresponding to positions at which the vector $\vec v_0$ enters the ball) and the normalized variation of speed is uniform in $[-1,1]$} \includegraphics[scale=0.4]{billSinai1.pdf} \item the family of point processes $$\sum_{t\ :\ (Y_s(y))_s\mbox{ enters } B(q,\varepsilon)\times S^1\mbox{ at time t}}\delta_{\left(\frac{2\pi\varepsilon t}{Area(Q)},\frac{\Pi_Q(Y_t(y))-q_0}\eps,\Pi_V(Y_t(y))\right)}$$ converges in distribution (when $y$ is distributed with respect to any probability measure absolutely continuous with respect to the Lebesgue measure on $\mathcal M$) to a Poisson Point Process with intensity $\lambda\times \tilde m_0$ where $\tilde m_0$ is the probability measure with density $(p,\vec u)\mapsto\frac1{4\pi} \langle\tilde n_p,\vec u\rangle^+$ on $S^1\times S^1$. \footnote{In the limit the authorized vectors are the unit vectors entering the ball.} \begin{center} \includegraphics[scale=0.4]{billSinai2.pdf} \end{center} \end{itemize} \end{theorem} \begin{proof} We apply Theorem \ref{THMflow} to go from the discrete time to the continuous time. Let $\mathcal A_{\varepsilon}:=B(x,\varepsilon)$ (resp. $\mathcal A_{\varepsilon}:=B(q,\varepsilon)\times S^1$). Return times to these sets have already been studied in \cite{ps10}. We set $A_\varepsilon:=\Pi \mathcal A_\varepsilon$ and study the discrete time process associated to these sets. To this end, we apply Proposition~\ref{appli1} after checking its assumptions. \begin{itemize} \item We know that $\mu(A_\varepsilon)=2\varepsilon^2/|\partial Q|$ due to \cite[Lemma 5.1]{ps10} (resp. $\mu(A_\varepsilon)=2\pi\varepsilon/|\partial Q|$ due to \cite[Lemma 5.1]{ps10}). So $d=2$ (resp. $d=1$). Let $\sigma<d$ and $\delta>1$. \item Note that $\mu((\partial A_\varepsilon)^{[\varepsilon^\delta]}) =o(\mu(A_\varepsilon))$. Due to \cite[Theorem 3.3]{ps10} (resp. \cite[Lemma 6.4]{ps10}, $ \mu(\tau_{A_\varepsilon}\le\eps^{-\sigma}|A_\varepsilon)=o(1)$. 
\item We define $\mathcal B_\varepsilon:=\{(q,\vec v)\in\partial B(x_0,\varepsilon)\times S^1\ :\ \langle\tilde n_q,\vec v\rangle\ge 0\}$. We endow it with the measure $\tilde\mu$ given by $d\tilde\mu(q,\vec v)=\cos \varphi\, dr\, d\varphi$ with $\varphi=\angle(\tilde n_q,\vec v)$ and $r$ the curvilinear abscissa of $q$ on $\partial B(x_0,\varepsilon)$. \item Let $\tau_{\mathcal A_\varepsilon}(y):=\inf\{t>0\ :\ Y_t(y)\in \mathcal A_\varepsilon\}$. \item We define $H_\varepsilon:A_\varepsilon\rightarrow S^1\times[-1,1]$ which maps $x=(q,\vec v)\in A_\varepsilon$ to $\varepsilon^{-1}(\Pi_Q(Y_{\tau_{\mathcal A_\varepsilon}}(x))-q_0,\angle (\vec v_0,\vec v))$ (resp. $H_\varepsilon:A_\varepsilon\rightarrow S^1\times S^1$ which maps $x=(q,\vec v)\in A_\varepsilon$ to $(\varepsilon^{-1}\Pi_Q(Y_{\tau_{\mathcal A_\varepsilon}}(x))-q_0),\vec v)$). \item Note that the image measure of $\mu(\cdot| A_\varepsilon)$ by $x\mapsto Y_{\tau_{\mathcal A_\varepsilon}}(x)$ corresponds to $\tilde\mu(\cdot | Y_{\tau_{\mathcal A_\varepsilon}}(A_\varepsilon))$. The set $Y_{\tau_{\mathcal A_\varepsilon}}(A_\varepsilon)$ consists of points of $\mathcal B_\varepsilon$ such that $\angle(\vec v_0,v)<\varepsilon$ (resp. $Y_{\tau_{\mathcal A_\varepsilon}}(A_\varepsilon)=\mathcal B_\varepsilon$). \item Hence $\mu(H_\varepsilon^{-1}(\cdot)|A_\varepsilon)$ is the image measure of $\mu(\cdot|Y_{\tau_{\mathcal A_\varepsilon}}(A_\varepsilon))$ by $(q,\vec v)\mapsto \varepsilon^{-1}(q-q_0,\langle(\vec v_0,\vec v))$ (resp. $\mu(H_\varepsilon^{-1}(\cdot)|A_\varepsilon)$ is the image measure of $\mu(\cdot|\mathcal B_\varepsilon)$ by $(q,\vec v)\mapsto (\varepsilon^{-1}q,\vec v)$). Hence we obtain the convergence in distribution of these families of measures. \item For the construction of $\mathcal W$ we use Proposition~\ref{partition}. \end{itemize} \end{proof} \section{Successive visits in a small neighborhood of an hyperbolic periodic point}\label{periodic} \subsection{General results around a periodic hyperbolic orbit} We consider the case of a periodic point $x_0$ of smallest period $p$ (i.e. $p$ is the smallest $n>0$ such that $T^nx_0=x_0$). By periodicity, returns to $B(x_0,\varepsilon)$ appear in clusters, so that we cannot hope that the return process is represented by a simple Poisson process. However, the occurrence of clusters should be well separated and have a chance to be represented by a simple Poisson Process. Thus we define $A_\varepsilon$ as the set of points of $B(x_0,\varepsilon)$ leaving $B(x_0,\varepsilon)$ for a time at least $q_0$, i.e. $$ A_\varepsilon:=B(x_0,\varepsilon)\setminus \bigcup_{j=1}^{q_0} T^{-jp} B(x_0,\varepsilon) $$ and consider $\mathcal{N}_\eps$ defined by \eqref{PointProcess} with this choice of $A_\eps$. This definition of $A_\eps$ essentially records the last passage among a series of hitting to the ball. We emphasize that in general, one has to consider $q_0>1$ to avoid clustering of occurrences of $A_\varepsilon$ due to finite time effects\footnote{ Indeed, consider the determinant one hyperbolic matrix $M =\begin{pmatrix}-0.2 & 1.8\\ 0.6 & -0.4\end{pmatrix}$. The vector $v=\begin{pmatrix}0.5\\ 0.7\end{pmatrix}$ belongs to the unit ball $B_1$, $Mv\not\in B_1, M^2v\in B_1$ and $M^3v\not\in B_1$. Assume that $T$ preserves the Lebesgue measure $\mu$ and has a fixed point $x_0$ such that $D_{x_0}T=M$. Let $A_\varepsilon=B(x_0,\varepsilon)\setminus T^{-1}B(x_0,\varepsilon)^c$. 
One easily shows that the inequality \[ \mu(A_\varepsilon\cap T^{-2}A_\varepsilon) \ge \mu(B(x_0,\varepsilon)\cap T^{-1} B(x_0,\varepsilon)^c \cap T^{-2}B(x_0,\varepsilon)\cap T^{-3}B(x_0,\varepsilon)^c), \] contradicts the assumption that $\Delta(\{A_\eps\})=o(\mu(A_\eps))$. }. \begin{lemma}\label{LEM0} Assume $x_0$ is a hyperbolic fixed point of $T$ and that $T$ is $C^{1+\alpha}$ in a neighborhood $U$ of the orbit $x_0,\ldots,T^{p-1}x_0$. Then there exist an integer $q_0$ and $a>0$ such that for any $\varepsilon>0$ sufficiently small, for any $n=1,\ldots,\lfloor a\log 1/\varepsilon\rfloor $, $A_\varepsilon \cap T^{-n}A_\varepsilon=\emptyset$, where $A_\varepsilon:=B(x_0,\varepsilon)\setminus \bigcup_{j=1}^{q_0} T^{-jp} B(x_0,\varepsilon)$. \end{lemma} For $\varepsilon>0$ small enough, we define, as in Theorem \ref{ThmgeneAppli1}, $H_\varepsilon:B(x_0,\varepsilon)\mapsto B(0,1)$ by $H_\varepsilon:=\varepsilon^{-1}\exp^{-1}$. We are interested in the behaviour of the point processes $(\tilde{\tilde{\mathcal N}}_\varepsilon)_\varepsilon$ defined by \begin{equation}\label{tildetildeNeps} \tilde{\tilde{\mathcal N}}_\varepsilon(x):=\sum_{n\ :\ T^n(x)\in B(x_0,\varepsilon)} \delta_{(n\mu( B(x_0,\varepsilon)),H_\varepsilon(T^n(x)))}. \end{equation} The extremal index defined by \[ \theta_\eps = \frac{\mu(A_\eps)}{\mu(B(x_0,\varepsilon))} \] relates the above process to \begin{equation}\label{tildeNeps} \tilde{\mathcal N}_\varepsilon(x):=\sum_{n\ :\ T^n(x)\in B(x_0,\varepsilon) \setminus \{x_0\}} \delta_{(n\mu( A_\varepsilon),H_\varepsilon(T^n(x)))} \end{equation} in an obvious way. Indeed, $\tilde{\tilde{ {\mathcal N}}}_\eps=\Theta_\eps(\tilde{\mathcal N}_\eps)$ where \begin{equation} \Theta_\eps \left(\sum_n \delta_{(t_n,x_n)}\right)=\sum_n \delta_{(\theta_\eps^{-1}t_n,x_n)}. \end{equation} We see these processes as point processes on $[0,+\infty)\times \dot B(0,1)$ (where $\dot B(0,1)=B(0,1)\setminus\{0\}$ is the open punctured ball). Assume that $\Omega$ is a $d$-Riemmannian manifold. A $p$-periodic point of $T$ is said to be hyperbolic if $T^p$ defines a $C^1$ diffeomorphism between two neighborhoods of $x_0$ and if $DT_{x_0}$ admits no eigenvalue of modulus 1. We write $E^u_{x_0}$ for the spectral space associated to eigenvalues of modulus strictly larger than 1. \begin{theorem}\label{compoundPoisson} Assume that $T^{-1}$ is well defined on a small neighborhood of $x_0$ and that $(\mathcal N_\varepsilon)_\varepsilon$ converges in distribution to a Poisson point process $\mathcal P$ with intensity $\lambda\times m$, then the sequence of point processes $(\tilde{\mathcal N}_\varepsilon)_\varepsilon$ converges in distribution to the point process $\mathcal N=\Psi(\mathcal P)$ on $[0,+\infty)\times \dot B(0,1)$, with $$\Psi\left(\sum_{n}\delta_{(t_n,x_n)}\right):= \sum_{n\, :\, x_n\ne 0} \sum_{k=0}^{\ell_{x_n}} \delta_{(t_n,DT_{x_0} ^{-kp}(x_n))}\, ,$$ where $\ell_y:=\inf\{k\ge 0\ :\ DT_{x_0}^{-kp}(y) \in B(0,1)\setminus \bigcup_{j=1}^{q_0}DT_{x_0}^{jp}B(0,1)\}$. \end{theorem} \begin{proof} Note that $m$ has support in $ B(0,1)\setminus \bigcup_{j=1}^{q_0}DT_{x_0}^{-jp}B(0,1)$. 
Observe that, for every $\varepsilon$ small enough, $\tilde{\mathcal N}_\varepsilon(x)$ is the image measure of $\mathcal N_\varepsilon(x)$ by $$\Psi_\varepsilon:\sum_{n}\delta_{(t_n,x_n)}\mapsto \sum_{n} \sum_{k=0}^{\ell_{x_n,\varepsilon}} \delta_{(t_n-kp\mu(A_\varepsilon),H_\varepsilon T^{-kp}H_\varepsilon^{-1}(x_n))},$$ with $$ \ell_{y,\varepsilon} :=\inf\{k\ge 0\ :\ H_\varepsilon T^{-kp}(H_\varepsilon^{-1}y)\in B(0,1)\setminus \bigcup_{j=1 }^{q_0}T^{jp}B(0,1)\}.$$ Observe that, for every $x\in B(0,1)$, $$ \lim_{\varepsilon\rightarrow 0} \ell_{x,\varepsilon}=\ell_x\, ,$$ and that, for every $k$, every $t>0$ and every $x\in B(0,1)$, $$\lim_{\varepsilon\rightarrow 0} (t-kp\mu(A_\varepsilon),H_\varepsilon T^{-kp}H_\varepsilon^{-1}(x))= (t,DT_{x_0}^{-kp}(x)).$$ Let us consider a set $R=\cup_i ]r_i,s_i[\times F_i$ such that $\mathbb E[\mathcal N(\partial R)]=0$ where $0\le r_i< s_i\le r_{i+1}$ and where $F_i$ are open precompact subsets of $\dot B(0,1)$. Note that, since the closure of $\cup_iF_i$ does not contain 0, there exists $K>0$ and $\varepsilon_0>0$ (depending on the $F_i$'s) such that $$\sup_{\varepsilon\in(0,\varepsilon_0),\, y\in\cup_i F_i}\inf\{k\ge 0\ :\ H_\varepsilon^{-1}T^{kp}H_\varepsilon(y)\in A_\varepsilon\}<K\, $$ and $$\sup_{\varepsilon\in(0,\varepsilon_0),\, y\in\cup_i F_i}\inf\{k\ge 0\ :\ H_\varepsilon^{-1}T^{-kp}H_\varepsilon(x)\in B(0,1)\setminus \bigcup_{j=1}^{q_0}T^{jp}B(0,1)\}<K\, $$ so that $$\forall y\in \bigcup_i F_i, \left[\exists x\in A_\varepsilon,\ \exists k\in \{0,...,\ell_{x,\varepsilon}\},\ y=H_\varepsilon T^{-kp}(x)\right]\ \ \Rightarrow \ \ \ell_{x,\varepsilon}\le 2K\, .$$ By definition of $\ell_{x,\varepsilon}$, for every $y\in \bigcup_i F_i$, every $x\in A_\varepsilon$, every $k\in \{0,...,\ell_{x,\varepsilon}\}$ such that $y=H_\varepsilon T^{-kp}(x)$, we have: $$x=T^{\tau^{(0)}_{A_{\varepsilon}}} (y)$$ with $\tau^{(0)}_A(y):=\inf\{k\ge 0\ :\ T^k(y)\in A\}$. Due to lemma \ref{LEM0}, there exists $\varepsilon_1\in(0,\varepsilon_0)$ such that, for every $\varepsilon\in(0,\varepsilon_1)$, $$\forall y\in \bigcup_i F_i,\quad \{T^k(y),\ k=0,...,K\}\cap A_\varepsilon=\{T^{\tau^{(0)}_{A_{\varepsilon}}} (y)\} \, .$$ Therefore, for every $\varepsilon\in (0,\varepsilon_1)$ $$ \tilde{\mathcal N}_\varepsilon (R)=\tilde\Psi_{\varepsilon,K}(\mathcal N_\varepsilon) (R) =\mathcal N_{\varepsilon}\left(\bigcup_{k=0}^K\varphi_{\varepsilon,k}^{-1}(R)\right)\, ,$$ with $$\tilde{\Psi}_{\varepsilon,K}:\sum_{n}\delta_{(t_n,x_n)}\mapsto \sum_n \sum_{k=0}^{K} \delta_{(t_n-kp\mu(A_\varepsilon),H_\varepsilon T^{-kp}H_\varepsilon^{-1}(x_n))} $$ and with $\varphi_{\varepsilon,k}:(t,x)\mapsto (t-kp\mu(A_\varepsilon),H_\varepsilon T^{-kp}H_\varepsilon^{-1}(x))$. 
Arguing analogously for $\mathcal N$ and $\mathcal P$ instead of $\tilde{\mathcal N}_\varepsilon$ and $\mathcal N_\varepsilon$, we obtain $$\mathcal N(R)=\tilde\Psi_{K}(\mathcal P) (R) =\mathcal P\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(R)\right)\, $$ and $$(\lambda\times m)\left(\bigcup_{k=0}^K\varphi_k^{-1}(\partial R)\right)=\mathbb E[\mathcal N(\partial R)]=0\, ,$$ with $$\tilde{\Psi}_{K}:\sum_{n}\delta_{(t_n,x_n)}\mapsto \sum_n \sum_{k=0}^{K} \delta_{(t_n,DT_{x_0}^{-kp}(x_n))} $$ and $$ \varphi_k:(t,x)\mapsto (t,DT_{x_0}^{-kp}(x))\, .$$ Since $(\mathcal N_\varepsilon)_\varepsilon$ converges in distribution to $\mathcal P$, we conclude that $\left(\mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(R)\right)\right)_\varepsilon $ converges in distribution to $\mathcal P\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(R)\right) $. Moreover $$\left|\mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(R)\right)-\mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{\varepsilon,k}^{-1}(R)\right)\right|\le \mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(\partial R^{[\eta_\varepsilon]})\right)$$ with $\lim_{\varepsilon\rightarrow 0}\eta_\varepsilon=0$. Since $(\mathcal N_\varepsilon)_\varepsilon$ converges in distribution to $\mathcal P$ and since $\mathcal N(\partial R)=0$, we conclude that $$\left(\mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{k}^{-1}(R)\right)-\mathcal N_\varepsilon\left(\bigcup_{k=0}^K\varphi_{\varepsilon,k}^{-1}(R)\right)\right)_\varepsilon$$ converges in distribution to 0. Therefore $(\tilde{\mathcal N}_\varepsilon(R))_\varepsilon$ converges in distribution to $\mathcal N(R)$. \end{proof} Theorem~\ref{compoundPoisson} is designed for invertible systems, near a hyperbolic periodic point, and does not apply to expanding maps. Indeed, in such a non-invertible situation, one has to define the set $A_\eps$ with the first passage in the ball $B(x_0,\eps)$ and not the last. More precisely, one has to set $$A_\eps=T^{-(q_0+1)}B(x_0,\eps)\setminus \cup_{j=1}^{q_0}T^{-j} B(x_0,\eps). $$ We leave to the reader the generalization of Theorem~\ref{compoundPoisson} and the result of the next section to this case. \subsection{SRB measure for Anosov maps} We now consider a $C^2$ Anosov map $T$ on the $d$-dimensional Riemannian manifold $\Omega$. We assume that the measure $\mu$ is the SRB measure of the system \cite{kh}, and that $x_0$ is a periodic point of $T$ of smallest period $p$. \begin{theorem} We assume that $\mu(B(x_0,2\eps))\eps^{b_0} = o(\mu(B(x_0,\eps)))$ for some $b_0>0$ sufficiently small\footnote{Indeed this happens when, for example, the pointwise dimension of $\mu$ at $x_0$ exists and is bounded away from $0$ and $\infty$.}. Then \begin{enumerate} \item[(a)] The point process $\mathcal{N}_\eps$ for entrances in $A_\eps$ is asymptotically Poisson $\mathcal P_\eps$, of intensity $\lambda\times m_\eps$. \item[(b)] The point process $\tilde{\mathcal{N}}_\eps$ for entrances in $B(x_0,\eps)$ is asymptotically $\Psi(\mathcal P_\eps)$. \item[(c)] The point process $\tilde{\tilde{\mathcal{N}}}_\eps$ for entrances in $B(x_0,\eps)$ is asymptotically $\Theta_\eps(\Psi(\mathcal P_\eps))$. \item[(d)] The return time point process $$\mathcal {T}_\eps:=\sum_{n\colon T^nx\in B(x_0,\eps)}\delta_{n\mu(B(x_0,\eps))}$$ is asymptotically the compound Poisson point process $\pi\Theta_\eps(\Psi(\mathcal P_\eps))$, where $\pi$ is the projection on the time axis. 
\end{enumerate} \end{theorem} The compound Poisson distribution (d) has already been established for a few dynamical systems~\cite{ht,hv,fft,cnz}, typically with strong assumptions on the dimension (1 or 1+1), the measure (non-singular) and the shape of the balls (e.g. products of stable and unstable balls). We point out that our result is valid for balls $B(x_0,\eps)$ in the original Riemannian metric and with a possibly singular measure; note that the convergence of the extremal index $\theta_\eps$ is not expected with singular measures. \begin{proof} Due to the proof of Theorem \ref{THM1}, we may assume that $(m_\eps)_\eps$ converges to some $m$. (b) will follow from (a) by Theorem \ref{compoundPoisson}. The fact that (b) implies (c), which implies (d), comes from the definitions. Let us prove (a) by applying Proposition~\ref{appli1}. It is well known that our system satisfies Hypothesis~\ref{HHH}-(II) for any $\alpha,\beta>0$. Our assumption on $(m_\eps)_\eps$ ensures Assumption (iii) of Proposition~\ref{appli1}. Choose $b_0,b$ such that $0<b_0<b<a d_u\log\lambda^{-1}$, with $a$ given by Lemma~\ref{LEM0}. Let $p_\eps=\eps^{-\sigma}$ with $\sigma=b-b_0$. By Lemma~\ref{LEM0}, $$ \mu(A_\eps \cap \{\tau_{A_\eps}\le p_\eps\}) \le \sum_{n=\lfloor a\log1/\eps\rfloor+1}^{p_\eps}\mu(A_\eps \cap T^{-n}A_\eps). $$ By Lemma~\ref{epsb} (with $c=1$) this sum is bounded by \[ p_\eps \eps^{b}\mu(B(x_0,2\eps))=o(\mu(B(x_0,\eps))) \] by assumption. Hence assumption \eqref{a} of Proposition~\ref{appli1} holds by Lemma~\ref{extremal}. (ii) comes from Lemma \ref{lem:couronne} and (iv) follows as in the proof of Theorem~\ref{ThmgeneAppli1}. \end{proof} \begin{lemma}\label{epsb} For any $a,b,c>0$ such that $a\log\lambda+b/d_u+c<1$, for $\varepsilon$ small enough, for any $n\ge a\log1/\varepsilon$ we have \[ \mu( A_\varepsilon \cap T^{-n}A_\varepsilon ) \le \eps^b \mu(B(x_0,\varepsilon+\varepsilon^c)). \] \end{lemma} \begin{proof} Let $\kappa>0$ be small and consider a partition (or a cover with finite multiplicity) of $\Omega$ by pieces of unstable cubes (or balls) $W\in \mathcal{W}$ of size $\varepsilon^{1-\kappa}$. Let $\mathcal{V}$ be the set of the $V=T^{-n}W$, $W\in \mathcal{W}$. We can disintegrate the measure with respect to $\mathcal{V}$ such that for any set $Z$ \begin{equation}\label{disintegrate} \mu(Z ) = \int_\mathcal{V} \mu(Z|V)d\mu(V). \end{equation} Since each $W=T^nV$ is a small piece of a smooth manifold, its intersection with $B(x_0,\varepsilon)$ is contained in an unstable ball of radius $C \varepsilon$. Therefore the proportion of the unstable volume of $W\cap B(x_0,\varepsilon)$ in $W$ is bounded by $C \varepsilon^{\kappa d_u}$, where $d_u$ is the unstable dimension. By distortion, we get that $\mu(T^{-n}B(x_0,\varepsilon)|V) \le C \varepsilon^{\kappa d_u}$. Using \eqref{disintegrate} we get \[ \begin{aligned}\mu(B(x_0,\varepsilon)\cap T^{-n}B(x_0,\varepsilon)) &= \int_{\mathcal{V}} \mu(B(x_0,\varepsilon)\cap T^{-n}B(x_0,\varepsilon)|V) d\mu(V)\\ &\le \int_{\mathcal{V}} \mu(T^{-n}B(x_0,\varepsilon)|V) 1_{V\cap B(x_0,\varepsilon)\neq\emptyset}d\mu(V) \\ &\le C \varepsilon^{\kappa d_u} \mu(B(x_0,\varepsilon+\varepsilon^c)), \end{aligned} \] since $\diam V\le C\lambda^n \varepsilon^{1-\kappa} \le \varepsilon^c$. \end{proof} \begin{lemma}\label{7/8} There exists a constant $\delta_0>0$ independent of $\eps$ such that \[ \mu(B(x_0,\frac78\eps)) \le (1-\delta_0) \mu(B(x_0,\eps)). 
\] \end{lemma} \begin{proof} We fix a measurable partition $\mathcal{V}$ of unstable manifolds such that the $d_u$-dimensional Lebesgue measure of $V$ is bounded from below by a constant, and $x_0\not\in\overline{\cup_V\partial V}$, such that the disintegration \eqref{disintegrate} holds true. Let $\rho=\frac78$. Suppose that $V\in \mathcal{V}$ intersects the ball $B(x_0,\rho\eps)$. Then $V$ intersects also the sphere $S(x_0,\frac{\rho+1}2\eps)$ in a point say $x$, and the ball $B(x,\frac{1-\rho}2\eps)\cap V$ is contained in $B(x_0,\eps)\setminus B(x_0,\rho\eps)$. Since $\mu(\cdot|V)$ is equivalent to the $d_u$-dimensional Lebesgue measure, there exists $\delta_0>0$ such that \[ \mu( B(x_0,\eps)\setminus B(x_0,\rho\eps)|V) \ge \mu( B(x,\frac{1-\rho}2\eps)| V) \ge \delta_0 \mu(B(x_0,\eps)|V) \] and the lemma follows by integration. \end{proof} \begin{lemma}\label{extremal} The extremal index is bounded away from zero: $\theta_\eps\ge\frac{\delta_0}{q_0+1}>0$. \end{lemma} \begin{proof} We first observe, using equation~\eqref{Tq} that \[ B(x_0,\eps)\cap \{\tau_{B(x_0,\eps)} \circ T^{q_0}\le q_0\} \subset T^{-q}B(x_0,\frac78\eps). \] for some integer $q$. Denote for simplicity $B=B(x_0,\eps)$. By Lemma \ref{7/8} this gives \[ \mu(B\cap \{\tau_{B}\circ T^{q_0}> q_0\}) \ge \delta_0 \mu(B). \] Recall that \[ \mu(A_\eps)= \theta_\eps \mu(B). \] We have \[ B \cap \{\tau_{B}\circ T^{q_0}> q_0\} \subset \bigcup_{j=0}^{q_0}T^{-j}A_\eps \] hence \[ \mu(B \cap \{\tau_{B}\circ T^{q_0}> q_0\})\le (q_0+1)\theta_\eps \mu(B). \] This implies that $\delta_0\le (q_0+1)\theta_\eps$. \end{proof} \begin{lemma}\label{lem:couronne} Suppose that $T$ is $C^{1+\alpha}$ and that $\alpha>1-1/\overline{d}_{\mu}(x_0)$. Then \[ \mu(B(x_0,\eps)\setminus B(x_0,\eps-\eps^\delta) = o( \mu(B(x_0,\eps) )) \] for any $\delta$ such that $\overline{d}_\mu(x_0)<\frac{\delta+1}{2}<\frac1{1-\alpha}$. \end{lemma} \begin{proof} We fix a measurable partition $\mathcal{V}$ of unstable manifolds such that the $d_u$-dimensional Lebesgue measure of $V$ is bounded from below by a constant, and $x_0\not\in\overline{\cup_V\partial V}$, such that the disintegration \eqref{disintegrate} holds true. Up to applying the exponential map at $x_0$ we assume that $B(x_0,\eps_0)$ is the ball $B(0,\eps_0)$ of ${\mathbb R}^d$. Let $\eps\in(0,\eps_0)$. Let $V\in \mathcal{V}$. Let $y\in V\cap B(0,\eps)\setminus B(0,\eps-\eps^\delta)$. Up to a rotation we suppose that locally $V$ is the graph of a $C^{1+\alpha}$ map $\varphi$ from $U\subset{\mathbb R}^{d_u}$ to ${\mathbb R}^{d-d_u}$, with $d_x\varphi=0$ where $x\in U$ is such that $y=(x,\varphi(x))$. We have \begin{equation}\label{Beps} (\eps-\eps^\delta)^2 \le |x|^2 + |\varphi(x)|^2 \le \eps^2. \end{equation} Let $h\in {\mathbb R}^{d_u}$ such that $x+h\in U$ and $(x+h,\varphi(x+h))\in B(0,\eps)\setminus B(0,\eps-\eps^\delta)$. We have \begin{equation}\label{Beps+h} (\eps-\eps^\delta)^2 \le |x+h|^2 + |\varphi(x+h)|^2 \le \eps^2. \end{equation} Subtracting \eqref{Beps} to \eqref{Beps+h} we get, setting $a(h) = |\varphi(x+h)|^2-|\varphi(x)|^2$, since $\eps^{2\delta}\le \eps^{1+\delta}$, \[ 3\eps^{1+\delta} \ge \left||x+h|^2 -|x|^2 + a(h)\right| = \left| |h|^2 + 2 x\cdot h+ a(h)\right|. \] Assume that $|h|>6\eps^\frac{1+\delta}2$. We fix a unit vector $u\in {\mathbb R}^{d_u}$ and seek for the solutions $h=tu$ of the above equation, which becomes \[ \left| t^2 + 2 x\cdot u t +a(tu) \right| \le 3 \eps^{1+\delta}. 
\] Therefore, dividing by $|t|>6\eps^\frac{1+\delta}2$ we end up with \[ \left| t + 2 x\cdot u +\frac{a(tu)}t \right| \le \frac12 \eps^\frac{1+\delta}2. \] Let $t,t'$ be two solutions. We obtain by subtraction \[ \left| t'-t +\frac{a(t'u)}{t'} - \frac{a(tu)}t \right| \le \eps^\frac{1+\delta}2. \] Note that the function $g_u(t):=\frac{a(tu)}t$ is $C^1$ in the range of $t$'s, and \[ g_u'(t) = \frac{1}{t}(2 (d_{x+tu}\varphi u)\cdot\varphi(x+tu)) - \frac{a(tu)}{t^2}. \] Since $\varphi$ is $C^{1+\alpha}$ we have $|d_{x+tu}\varphi u|= O( |t|^\alpha)$. In addition, \[ a(tu) =(|\varphi(x+tu)|-|\varphi(x)|)(|\varphi(x+tu)|+|\varphi(x)|) =O(\eps |t|^{1+\alpha}). \] This implies that $|g_u'(t)| = O(\eps |t|^{\alpha-1}) = O(\eps^{1+(\alpha-1)\frac{1+\delta}2})=o(1)$, hence \[ \left| (t-t') (1+o(1)) \right| \le \eps^\frac{1+\delta}2. \] Using radial integration this gives that the $d_u$-dimensional Lebesgue measure of those $h$ such that $(x+h,\varphi(x+h))\in B(0,\eps)\setminus B(0,\eps-\eps^\delta)$ is $O(\eps^\frac{1+\delta}2)$. Hence $\mu(B(x_0,\eps)\setminus B(x_0,\eps-\eps^\delta)|V)= O(\eps^\frac{1+\delta}2)$, and the result follows by integration using \eqref{disintegrate}. \end{proof} \section{Billiard in a diamond}\label{sec:bd} We consider a diamond shaped billiard, with no cusp. The billiard table $Q$ is a bounded closed part of $\mathbb R^2$ delimited by $4$ convex obstacles $(\Gamma_i)_{i\in\mathbb Z/4\mathbb Z}$ (with $C^3$-smooth boundary, with positive curvature) placed in such a way that, for every $i\in\mathbb Z/4\mathbb Z$, $\partial\Gamma_i$ meets $\partial \Gamma_{i+1}$ transversely at some point called corner $C_i$, but has no common point with $\partial\Gamma_{j}$ for $j\ne i-1,i+1$. In our representation of this billiard table, $C_1=(0,0)$ is on the left side of $Q$ and the inner bisector at the corner $C_1$ is horizontal. We consider again the billiard flow $(\mathcal M,\nu,(Y_t)_t)$ and the billiard map $(\Omega,\mu,T)$ in the domain $Q$. For any $\varepsilon>0$, we put a virtual vertical barrier $I_\varepsilon$ of length $\varepsilon$ joining a point $a^{(1)}_\varepsilon\in\partial\Gamma_1$ to a point $a^{(2)}_\varepsilon\in\partial \Gamma_2$. and we are interested in the times at which the billiard flow enters the corner by crossing the barrier $I_\varepsilon$. So that we define $$\mathcal A_\varepsilon:=I_\varepsilon\times S^1_-,\quad\mbox{with}\quad S^1_-:=\{\vec v=(v_1,v_2)\in S^1\ :\ v_1<0\} \, .$$ We take $V:=\mathbb R\times S^1_-$ and $$\mathcal H_\varepsilon:(q=(q_1,q_2),\vec v)\mapsto \left(\frac{q_2}{\varepsilon},\vec v\right) .$$ \includegraphics[scale=0.5]{billdiamant.pdf} \begin{theorem}\label{billardDiamant} The point process $$\sum_{t>0\, :\, Y_t\in \mathcal A_\varepsilon}\delta_{\left(\frac{\eps t}{\pi\, Area(Q)},\mathcal H_\varepsilon(Y_t(\cdot))\right)} $$ converges in distribution to a Poisson point process on $[-1/2,1/2]\times S^1_-$ with density $\lambda\times m_0$, with $m_0$ the probability measure of density proportional to $(\bar q,\vec v)\mapsto|v_1|$ with $\vec v=(v_1,v_2)$. \end{theorem} \subsection{Notations, recalls and proof of Theorem \ref{billardDiamant}} Let us recall some useful facts and notations. Due to the transversality of $\partial\Gamma_1$ and $\partial\Gamma_2$ at $C_1$, there exist $0<\theta_1<\theta_2$ such that, for $\varepsilon>0$ small enough, the distance on $\partial Q$ between $C_1$ and $a^{(i)}_\varepsilon$ is between $\theta_1\varepsilon$ and $\theta_2\varepsilon$. 
Here $\Omega$ is the set of reflected unit vectors based on $\partial Q\setminus\{ C_1,...,C_4\}$. We parametrize $\Omega$ by $\bigcup_{i\in\mathbb Z/4\mathbb Z} \{i\}\times ]0,\length(\partial\Gamma_i\cap Q)[\times\left[-\frac \pi 2;\frac \pi 2\right]$. A reflected vector $(q,\vec v)$ is represented by $(i,r,\varphi)$ if $q\in\partial\Gamma_i$ at distance $r$ (on $Q\cap \partial\Gamma_i$) of $C_{i-1}$ and if $\varphi$ is the angular measure in $[-\pi/2,\pi/2]$ of $(\vec{n}(q),\vec{v})$ where $\vec{n}(q)$ is the normal vector to $\partial Q$ at $q$. For any $C^1$-curve $\gamma$ in $\Omega$, we write $\ell(\gamma)$ for the euclidean length in the $(r,\varphi)$ coordinates of $\gamma$. If moreover $\gamma$ is given in coordinates by $\varphi=\phi(r)$, then we also write $p(\gamma):=\int_\gamma \cos(\phi(r))\, dr$. We define the time until the next reflection in the future by $$\tau^+(q,\vec {v}):=\min\{s>0\ :\ q+ s\vec{v}\in \partial Q\}\, .$$ We also define $\tau^-:\Omega\rightarrow (0,+\infty)$ for the time until the last reflection in the past (corresponding to $\tau^-=\tau^+\circ T^{-1}$ when $T^{-1}$ is well defined) by $$\tau^-(q,\vec {v}):=\min\{s>0\ :\ q+ s\vec{v}^-\in Q\}\, ,$$ with $\vec v^{-}$ be the reflected vector with respect to the normal to $\partial Q$ at $q$, i.e. $\vec v^-$ is the unit vector satisfying the following angular equality $\angle (\vec n(q),\vec v)=\angle(\vec v^-,\vec n(q))$. It will be useful to define $R_0:=\{\varphi=\pm\pi/2\}$, $\mathcal C_+=\{(q,\vec v)\in \Omega\ :\ q+\tau^+(q,\vec v)\vec v \in\{C_1,...,C_4\}\}$ and $\mathcal C_-=\{(q,\vec v)\in \Omega\ :\ q+\tau^-(q,\vec v)\vec v^-\in\{C_1,...,C_4\}\}$. Observe that, for every $k\ge 1$, $T^k$ defines a $C^1$-diffeomorphism from $\Omega\setminus \mathcal S_{-k}$ to $\Omega\setminus \mathcal S_k$ with $\mathcal S_{-k}:= T^{-k}R_0\cup\bigcup_{m=0}^{k-1}T^{-m}(R_0\cup \mathcal C_+)$ and $\mathcal S_{k}:= T^{k}R_0\cup\bigcup_{m=0}^{k-1}T^{m}(R_0\cup \mathcal C_-)$. As for the other billiard models, we set $\pi_Q:\Omega\rightarrow Q$ for the canonical projection. Despite the absence of the so called complexity bound in billards with corners, De Simoi and Toth have shown in \cite{dst} that some expansion condition holds, from which the growth lemma~\cite[Theorem 5.52]{ChernovMarkarian} follows. It says that for any weakly homogeneous unstable curve $W$ one has \begin{equation}\label{GL} m_W(r_n<\delta) \le c\theta^n \delta + c\delta m_W(W) \end{equation} where $m_W$ is the one dimensional Lebesgue measure on $W$, and $r_n(x)$ denotes the distance (on $T^nW$) of $T^n(x)$ to the boundary of the homogeneous piece of $T^nW$ containing $x$. In particular, for those systems one can build a Young tower with exponential parameters, from which it follows that Hypothesis \ref{HHH}-(II) is satisfied for every $\alpha,\beta>0$. \begin{proof}[Proof of Theorem \ref{billardDiamant}] Let $A_\varepsilon\subset \Omega$ be the set of all possible configurations at the reflection time just after the particle crosses the virtual barrier $I_\eps$ from the right side. Note that $\mu(A_\eps)=\frac 1{2|\partial Q|}\int_{[-\eps/2,\eps/2]\times[-\pi/2,\pi/2]}\cos\varphi\, dr\, d\varphi=\frac{\eps}{|\partial Q|}$. 
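For concreteness, this last identity is just the elementary factorisation
$$\frac 1{2|\partial Q|}\int_{[-\eps/2,\eps/2]\times[-\pi/2,\pi/2]}\cos\varphi\, dr\, d\varphi=\frac 1{2|\partial Q|}\left(\int_{-\eps/2}^{\eps/2}dr\right)\left(\int_{-\pi/2}^{\pi/2}\cos\varphi\, d\varphi\right)=\frac{2\eps}{2|\partial Q|}=\frac{\eps}{|\partial Q|}\, ,$$
so that $\mu(A_\eps)$ is indeed of order $\eps$, as used below.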
Due to Theorem \ref{THMflow}, it is enough to prove that $(\mathcal N_\eps)_\eps$ converges to a Poisson point process with density $\lambda\times m_0$ for the above choice of $A_\eps$ and for $H_\eps:A_\eps\rightarrow \mathbb R\times S^1_-$ such that $\mathcal H_\eps=H_\eps\circ T\circ \Pi$, with $\Pi$ the projection defined in Subsection \ref{sec:specialflow}. To this end we apply Proposition \ref{appli1}. The fact that Assumption (i) of Proposition \ref{appli1} is satisfied for $p_\varepsilon=\varepsilon^{-\sigma}$ comes from Proposition \ref{propshort}. We take $\alpha>3/\sigma$. Assumption (ii) comes from the fact that the boundary of each connected component of $A_\varepsilon$ is made of a part of $R_0$ and of a $C^1$-increasing curve $r=R(\varphi)$ with $R'(\varphi)=1/(\kappa(r)+\frac{\cos(\varphi)}{\tau^-(r,\varphi)})\le 1/\min\kappa$ corresponding to reflected vectors in the corner on the left side of $I_\eps$ coming from $\{a_{\varepsilon}^{(1)},a_\varepsilon^{(2)}\}$ and to $T(R_0\cap A_\eps)$. The image measure of $\mu(\cdot|A_\eps)$ by $H_\eps$ is proportional to $\cos(\varphi)\, dr\, d\varphi$ where $r$ is the position on $[-1/2,1/2]$ and $\varphi$ the angle (in $[-\pi/2,\pi/2]$) between the vector $(-1,0)$ and the incident vector (i.e. the speed vector at the time when the particle crosses $I_\eps$). We take for $\mathcal W$ the set of rectangles of the form $(a,b)\times(c,d)$ in the above $(r,\varphi)$ coordinates. Outside the strips $A_\eps\cap\{(r,\varphi)\ :\ |r-a_\eps^{(i)}|<\eps^2\}$, $H_\eps$ is $K\eps^{-2}$-Lipschitz (indeed the Jacobian is in $O(\eps^{-1}/\cos\varphi)\le c\, \eps^{-2}$) and so, using the argument of the proof of Proposition \ref{partition}-(ii), we conclude that (iv) is satisfied since $\mu(A_\eps)=O(\eps)$. \end{proof} \subsection{Short returns} The aim of this subsection is the following result. \begin{proposition}\label{propshort} There exists $\sigma>0$ such that $\mu(\tau_{A_\eps}<\eps^{-\sigma}|A_\eps)=o(1)$. \end{proposition} To this end, we will recall useful facts and introduce some notations. Let $\tau_0:=\min_{(i,j)\ :\ j\ne i,i+1}\dist(C_i,\Gamma_j)/10$. \begin{definition} We say that a curve $\gamma$ of $\Omega$ satisfies Assumption (C) if it is given by $\varphi=\phi(r)$ with $\phi$ $C^1$-smooth, increasing and such that $\min\kappa\le \phi'\le \max\kappa+\frac 1{\tau_0}$. \end{definition} We recall the following facts. \begin{itemize} \item There exist $C_0,C_1>0$ and $\lambda_1>1$ such that, for every $\gamma$ satisfying Assumption (C) and every integer $m$ such that $\gamma\cap\mathcal S_{-m}=\emptyset$, $T^m\gamma$ is a $C^1$-smooth curve satisfying $C_1 p(T^m\gamma)\ge \lambda_1^mp(\gamma)$ and $\ell(\gamma)\le C_0 \sqrt{p(T\gamma)}$. \item There exist $C_2>0$ and $\lambda_2>\lambda_1^{1/2}$ such that, for every integer $m$, the number of connected components of $\Omega\setminus \mathcal S_{-m}$ is less than $C_2\lambda_2^m$. Moreover $\mathcal S_{-m}$ is made of curves $\varphi=\phi(r)$ with $\phi$ $C^1$-smooth and strictly decreasing. \item If $\gamma\subset \Omega\setminus \mathcal S_{-1}$ is given by $\varphi=\phi(r)$ or $r=\mathfrak r(\varphi)$ with $\phi$ or $\mathfrak r$ increasing and $C^1$ smooth, then $T\gamma$ is $C^1$ and is given by $\varphi=\phi_1(r)$ with $\min\kappa\le \phi_1'\le\max\kappa+\frac 1{\min_\gamma\tau^+}$. Moreover $\int_{T\gamma}\, d\varphi\ge \int_{\gamma}\, d\varphi$. 
\item There exists $m_0$ such that, for every $x\in \Omega\setminus\bigcup_{k=0}^{m_0-1} T^{-k}\mathcal C_+$, there exists $k\in\{1,...,m_0\}$ such that $\tau^+(T^{k-1})>\tau_0$. \end{itemize} Let $A_\varepsilon\subset \Omega$ be the set of possible configurations of a particle at the reflection time just after the particle reaches the virtual barrier $I_\varepsilon$ from the right side. We observe that there exists $K_0>0$ and $\varepsilon_0>0$ such that, for every $\varepsilon\in(0,\varepsilon_0)$, for every $q$ between $C_1$ and $a^{(i)}_\varepsilon$, the set of $\vec v$ such that $(q,\vec v)\in A_\varepsilon$ has Lebesgue measure at least $K_0$. \begin{lemma}[Very quick returns]\label{lemmaveryquick} There exists $K_1>0$ such that, $$\forall s\ge 1,\quad \mu(T^{-s}(A_\varepsilon)|A_\varepsilon)\le K_1(\lambda_2/\lambda_1^{\frac 12})^s\varepsilon^{\frac 12}.$$ \end{lemma} \begin{proof} Let $q\in\pi_Q(A_\varepsilon)$. Let $\gamma_1$ be a connected component of $\pi_Q^{-1}(\{q\})\setminus\mathcal S_s$. We define $\gamma:=\gamma_1\cap A_\varepsilon\cap T^{-s}A_\varepsilon$. Let $m$ be the smallest positive integer such that $\min_\gamma\tau\circ T^{m-1}>\tau_0$. By definition of $A_\varepsilon$ and of $m_0$, $m<\min(m_0,s)$. Hence $T^m\gamma$ satisfies Assumption (C) and $$\ell(\gamma)\le \int_{\gamma}\, d\varphi \le \int_{T^{m}\gamma}\, d\varphi \le \ell(T^m\gamma)\le C_0\sqrt{p(T^m\gamma)}.$$ Moreover, since $\gamma\cap S_{-s}=\emptyset$, we also have $$ p(T^m\gamma)\le C_1\lambda_1^{m-s}p(T^s\gamma).$$ But, since $T^s\gamma$ is an increasing curve contained in $A_\varepsilon$, we conclude that $p(T^s\gamma)\le \theta\varepsilon$. Hence $$\ell(\gamma)\le C_0\sqrt{C_1\lambda_1^{m-s}\theta\varepsilon} .$$ By using the fact that $\pi_Q^{-1}(\{q\})\setminus\mathcal S_s$ contains at most $C_2\lambda_2^s$ connected components and by integrating on $\pi_Q(A_\varepsilon)$, we obtain $$ \mu(T^{-s}(A_\varepsilon)\cap A_\varepsilon)\le \frac {C_0\sqrt{\theta C_1}C_2 \lambda_1^{\frac m2}(\lambda_2/\lambda_1^{\frac 12})^s\varepsilon^{\frac 12}}{2\length(\partial Q)}\length(\pi_Q(A_\varepsilon)).$$ We conclude by using the fact that $\mu(A_\varepsilon)\ge (1-\sin K_0) \length(\pi_Q(A_\varepsilon))$ and by setting $K_1:=\frac{C_0\sqrt{\theta C_1}C_2 \lambda_1^{\frac {m_0}2}}{1-\sin K_0}$. \end{proof} \begin{lemma}[quick returns]\label{lemmaquick} For any $a>0$, there exists $s_a>0$ such that \[ \sum_{n=-a\log\eps}^{\eps^{-s_a}} \mu(A_\eps\cap T^{-n}A_\eps)=o(\mu(A_\eps)). \] \end{lemma} \begin{proof} We take a measurable partition $\mathcal{V}$ of $\Omega\setminus \mathcal{S}_1$ by the unstable curves $\frac{d\varphi}{dr}=\kappa(r)$. By disintegration there exist a probability measure $\tilde\mu$ on $\mathcal{V}$, and a constant $c<\infty$ such that for any measurable set $B$ we have \[ c^{-1}\mu( B) \le \int_{\mathcal{V}}m_W( B)d\tilde\mu(W)=:\mu_0(B). \] We define $\tilde A_\eps$ as the set of $T^{-j}x$ where $x\in A_\eps$ and $j$ is the minimal integer such that $T^{-\ell}x$ does not belong to the sides adjacent to the corner. Any corner sequence is bounded by some constant $m$ depending only on the billiard table, thus $A_\eps \subset \cup_{\ell=1}^m T^{\ell}\tilde A_\eps$. It follows by invariance that \[ \mu(A_\eps \cap T^{-n} A_\eps) \le m^2 \max_{|\ell|\le m} \mu(\tilde A_\eps \cap T^{-n-\ell}\tilde A_\eps). \] Therefore it suffices to control $\mu(\tilde A_\eps \cap T^{-n}\tilde A_\eps)$. 
Note that $\tilde A_\eps$ is at a long distance from the corner, hence there are finitely many decreasing curves $\varphi_j$, $j=1,\ldots,m_0$, and a constant $c$ such that $\tilde A_\eps \subset \tilde V_\eps$ where \[ \tilde V_\eps = \cup_j \{(r,\varphi)\colon \varphi_j(r)-c\eps< \varphi <\varphi_j(r)+c\eps\}\, \] and where $(r,\varphi_j(r))$ represents a vector pointing exactly to the corner provided its two adjacent obstacles are removed. In particular $\frac{d\varphi_j(r)}{dr}\le-\kappa$. We denote the $k$th homogeneity strip\footnote{See \cite{ChernovMarkarian} for notations and definitions.} by $\mathbb{H}_k$ for $k\neq0$ and set $\mathbb{H}_0 = \cup_{|k|< k_0} \mathbb{H}_k$ for some fixed $k_0$. Set $s:=\min(-a\log\theta,1)/3$. Let $k_\eps=\eps^{-s}$ and $\mathbb{H}^\eps=\cup_{|k|\le k_\eps}\mathbb{H}_k$. For any $W\in\mathcal{V}$ we set $W^\eps=W\cap \tilde V_\eps$, $W_k=W\cap \mathbb{H}_k$ and $W_k^\eps=W_k\cap \tilde V_\eps$. Each $W_k^\eps$ is a weakly homogeneous unstable curve. We cut each curve $W_k^\eps$ into small pieces $W_{k,i}^\eps$ such that each $T^j W_{k,i}^\eps$, $j=0,\ldots,n$, is contained in a homogeneity strip and a connected component of $\Omega\setminus \mathcal{S}_1$. For $x\in W_{k,i}^\eps$ we denote by $r_n(x)$ the distance (in $T^n W$) of $T^n(x)$ to the boundary of $T^n W_{k,i}^\eps$. By definition of $W^\eps$, \[ \begin{split} m_W&(\tilde A_\eps \cap T^{-n} \tilde A_\eps) \le m_{W^\eps}(T^{-n}\tilde V_\eps)\\ \le &m_{W^\eps}((\mathbb{H}^\eps)^c) + \sum_{|k|\le k_\eps} \left( m_{W_k^\eps}(\{r_n\ge \eps^{1-s}\}\cap T^{-n}\tilde V_\eps) + m_{W_k^\eps}(r_n<\eps^{1-s})\right). \end{split} \] The first term inside the sum is bounded by the sum $\sum_i m_{W_{k,i}^\eps}(T^{-n}\tilde V_\eps) $ over those $i$'s such that $T^nW_{k,i}^\eps$ is of size larger than $\eps^{1-s}$. In particular $m_{T^n W_{k,i}^\eps}(T^n W_{k,i}^\eps)\ge \eps^{1-s}$. On the other hand, by transversality, \[ m_{T^n W_{k,i}^\eps}(\tilde V_\eps) \le c\eps. \] By distortion (see Lemma 5.27 in \cite{ChernovMarkarian}) we obtain \[ m_{W_{k,i}^\eps}(T^{-n}\tilde V_\eps) \le c \eps^{s} m_{W_{k,i}^\eps}(W_{k,i}^\eps). \] Summing over these $i$'s shows that the first term inside the sum is bounded as follows: \[ m_{W_k^\eps}(\{r_n\ge \eps^{1-s}\}\cap T^{-n}\tilde V_\eps) \le c \eps^{s} m_{W_{k}^\eps}(W_{k}^\eps). \] On the other hand, the growth lemma~\eqref{GL} implies that \[ m_{W_k^\eps}(r_n<\eps^{1-s}) \le c \theta^n \eps^{1-s} + c\eps^{1-s} m_{W_k^\eps}(W_k^\eps). \] A final summation over $k$ gives \[ m_W(\tilde A_\eps \cap T^{-n} \tilde A_\eps) \le m_{W}(\tilde V_\eps \cap (\mathbb{H}^\eps)^c) + c(\eps^{s}+\eps^{1-s})m_W(\tilde V_\eps) + ck_\eps\theta^n \eps^{1-s}. \] Integrating over $W\in\mathcal{V}$ gives \[ \mu(\tilde A_\eps \cap T^{-n} \tilde A_\eps) \le \mu_0(\tilde V_\eps \cap (\mathbb{H}^\eps)^c) + O(\eps^{1+s/3}) = O(\mu(A_\eps)\eps^{s/3}), \] where we used the fact that $\mu_0$ is equivalent to Lebesgue and $\tilde V_\eps \cap (\mathbb{H}^\eps)^c$ is contained in the union of at most $m_0$ rectangles of width $O(\eps)$ and height $k_\eps^{-2}=\eps^{2s}$. We take $s_a=s/6$. \end{proof} \begin{proof}[Proof of Proposition \ref{propshort}] Choose $a=1/(4\log(\lambda_2/\lambda_1^{1/2}))$. 
Observe that, due to Lemma \ref{lemmaveryquick}, we have $$\sum_{s=1}^{-a\log\eps}\mu(T^{-s}A_\eps|A_\eps)\le \frac{K_1}{\lambda_2/\lambda_1^{\frac 12}-1}(\lambda_2/\lambda_1^{\frac 12})^{-a\log\eps}\varepsilon^{1/2} \le \frac{K_1}{\lambda_2/\lambda_1^{\frac 12}-1}\varepsilon^{1/4} .$$ This combined with Lemma \ref{lemmaquick} leads to $$\sum_{n=1}^{\varepsilon^{-s_a}}\mu(T^{-n}A_\eps|A_\eps)=o(1)\, . $$ \end{proof} \appendix \section{More proofs} \begin{proof}[Proof of Proposition~\ref{appli1}] Item (ii) of Theorem \ref{THM} being satisfied by assumption by $\mathcal W$, it remains to prove Item (i). Let $\mathcal G_0=\{G_1,...,G_L\}$ be a finite subcollection of $H_\varepsilon^{-1}\mathcal W$. Let $t>0$. Let $A\in \sigma(\mathcal G_0)$ and $B\in\sigma(\bigcup_{n= 1}^{N_{\varepsilon,t}} T^{-n} \mathcal G_0)$, with $N_{\varepsilon,t}:=\lfloor t/\mu(A_\varepsilon)\rfloor$. Set $X_j:=(1_{T^{-j}(G_1)},\ldots,1_{T^{-j}(G_L)}) \in {\mathbb R}^L$. Note that $\mathbf 1_B=g(X_1,...,X_{N_{\varepsilon,t}})$ for some $g\colon(\{0,1\}^L)^{N_{\varepsilon,t}}\to\{0,1\}$. Note that if $\varepsilon$ is small enough, $N_{\varepsilon,t}>p_{\varepsilon}$ by assumption. Let $B_1=\{g(0,...,0,X_1,...,X_{N_{\varepsilon,t}-p_\varepsilon})=1\}$ so that $ |\mathbf 1_B-\mathbf 1_{B_1}\circ f^{p_\varepsilon}|\le\mathbf 1_{\{\tau_{A_\varepsilon}\le p_\varepsilon\}}$. Note that $$ |\mu(B\cap A|A_\varepsilon)-\mu(B_1\cap A|A_\varepsilon)|\le \mu(\tau_{A_\varepsilon}\le p_\varepsilon|A_\varepsilon)=o(1).$$ Moreover $$|\mu(B)-\mu(B_1)|\le \mu(\tau_{A_\varepsilon}\le p_\eps)\le p_\varepsilon\mu(A_\varepsilon)=o(1).$$ Set $K:=\lfloor p_\varepsilon/4\rfloor$. Under (I), set $M=0$, $\mathcal P_K=\mathcal Q_K$. Under (II), set $M=K$, $\mathcal P_K=\mathcal Q_{2K}$. It remains to show that \begin{equation} |\mu(B_1\cap A)-\mu(B_1)\mu(A)|=o(\mu(A_\varepsilon)), \end{equation} i.e. $$|\mathrm{Cov}_{\tilde\mu}(\mathbf 1_{\tilde T^{-M},\tilde\Pi^{-1}B_1},\mathbf 1_{ \tilde T^{-M}\tilde\Pi^{-1}A})|=o(\tilde\mu(\tilde T^{-M}\tilde\Pi^{-1}A_\varepsilon)). $$ We approximate $\tilde T^{-M}\tilde\Pi^{-1}A$ (resp. $\tilde T^{-M}\tilde\Pi^{-1}A_\varepsilon$) by the union of the atoms of $\mathcal P_K$ intersecting this set. We write $\tilde A$ (resp. $\tilde A_\varepsilon$) for this union. Note that $\tilde T^{-M}\tilde\Pi^{-1}A\subset \tilde A$ (resp. $\tilde T^{-M}\tilde\Pi^{-1}A_\varepsilon\subset \tilde A_\varepsilon$) and that $$ \tilde A\setminus \tilde T^{-M}\tilde\Pi^{-1}A\subset A':=\bigcup_{Q\in Q_K:\tilde\Pi \tilde T^MQ\cap A\ne\emptyset,\tilde\Pi \tilde T^MQ\setminus B_1\ne\emptyset}Q$$ and so (due to the assumption on the diameters of the atoms of $\mathcal P_K$), \begin{eqnarray*} \tilde\mu(A') &\le&\tilde\mu\left(\bigcup_{Q\in Q_K:\tilde\Pi\tilde T^M Q\subset (\partial A)^{[Ck^{-\alpha}]}}Q\right)\\ &\le&\mu((\partial A)^{[CK^{-\alpha}]})=o(\mu(A_\varepsilon)). \end{eqnarray*} Analogously $$ \tilde A_\varepsilon\setminus \tilde T^{-M}\tilde\Pi^{-1}A_\varepsilon\subset A'_\varepsilon:=\bigcup_{Q\in Q_K:\tilde\Pi \tilde T^M Q\cap A_\varepsilon\ne\emptyset,\tilde\Pi T^MQ\setminus A_\varepsilon\ne\emptyset}Q$$ and $$ \tilde\mu(A'_\varepsilon) =o(\mu(A_\varepsilon)). 
$$ Finally we approximate $ \tilde T^{-M}\tilde\Pi^{-1}B_1 $ by a union $\tilde B$ of atoms of $\sigma(\bigcup_{\ell=1}^{N_{\varepsilon,t}-p_\varepsilon}\mathcal P_K)$ such that $ \tilde T^{-M}\tilde\Pi^{-1}B_1 \subset\tilde B_1$ and $$\tilde B_1\setminus T^{-M}\tilde\Pi^{-1}B_1 \subset B_1':=\bigcup_{\ell=1}^{N _{\varepsilon,t}-p_\varepsilon} F^{-\ell}\left(\bigcup_{Q\in \mathcal P_K:\tilde \Pi\tilde T^M Q\subset \bigcup_{k=1}^L(\partial G_k)^{[CK^{-\alpha}]}}Q\right),$$ and $$\tilde\mu( B_1')=(N_{\varepsilon,t}-p_\varepsilon) o(\mu(A_\varepsilon)).$$ Therefore $$|Cov_{\tilde\mu}(\mathbf 1_{\tilde\Pi^{-1}A},\mathbf 1_{ \tilde\Pi^{-1}B_1})-Cov_{\tilde\mu}(\mathbf 1_{\tilde A},\mathbf 1_{ \tilde B_1})| $$ \begin{eqnarray*} &\le&|Cov_{\tilde\mu}(\mathbf 1_{\tilde A}-\mathbf 1_{T^{-M}\tilde\Pi A},\mathbf 1_{\tilde B_1} )+Cov_{\tilde\mu}(\mathbf 1_{T^{-M}\tilde\Pi A},\mathbf 1_{\tilde B_1}-\mathbf 1_{T^{-M}\tilde\Pi B_1} )|\\ &\le&|Cov_{\tilde\mu}(\mathbf 1_{A'},\mathbf 1_{\tilde B} )|+|Cov_{\tilde\mu}(\mathbf 1_{\tilde A},\mathbf 1_{B_1'})|\\ &\ &+2\tilde\mu(A')\tilde\mu(\tilde B_1)+2 \tilde\mu(\tilde A)\tilde\mu(B_1')\\ &\le& C'_0(\mu(A_\varepsilon)p^{-\beta}+o(\mu(A_\varepsilon)) (N-p)\mu(A_\varepsilon))=o(\mu(A_\varepsilon)). \end{eqnarray*} \end{proof} \begin{proof}[Proof of Lemma~\ref{LEM0}] Since the analysis is local, taking the exponential map at $x_0$ if necessary we may assume that $U\subset {\mathbb R}^d$. Let $L_0$ be the Lipschitz norm of $T$. We first observe that if $a>0$ is sufficiently small, then for any $\varepsilon>0$ small and $n\le a\log1/\varepsilon$, $\|x-x_0\|<\varepsilon$ and $\|T^nx-x_0\|<\varepsilon$ implies that \[ \|T^nx_0-x_0\| \le \|T^nx-T^nx_0\|+\|T^nx-x_0\| \le L_0^n\eps+\eps, \] and thus $n$ is a multiple of $p$. Hence without loss of generality we assume that $p=1$. Let $q$ be the integer given by Lemma~\ref{matrice} for the hyperbolic matrix $D_{x_0}T$. Since the $C^{1+\alpha}$ norm of $T^n$ is growing at most exponentially fast, changing the value of $a>0$ if necessary, there exists $c,L\ge L_0$ such that for any $x\in B(x_0,\varepsilon)$, \begin{equation}\label{dl1+alpha} \|T^nx-T^nx_0 -D_{x_0}T^n(x-x_0) \| \le cL^n \varepsilon^{1+\alpha}\le \varepsilon/2, \end{equation} for any $1\le n\le a\log1/\varepsilon$. Let $2q\le n\le a\log(1/\varepsilon)$ and suppose that $x\in B(x_0,\varepsilon)\cap T^{-n}B(x_0,\varepsilon)$. Using \eqref{dl1+alpha} we obtain $\|D_{x_0}T^n(x-x_0)\| \le \|T^nx-x_0\| + \varepsilon/2 <\frac32 \varepsilon$. Thus Lemma~\ref{matrice} gives $ \|D_{x_0}T^q(x-x_0)\|\le\frac38 \varepsilon$. Using again~\eqref{dl1+alpha} we get \[ \|T^qx-T^q x_0\| \le \|D_{x_0}T^q(x-x_0)\|+\varepsilon/2 < \frac78\varepsilon. \] Hence \begin{equation}\label{Tq} B(x_0,\eps)\cap T^{-n}B(x_0,\eps)\subset T^{-q}B(x_0,\frac78\eps). \end{equation} Set $q_0=2q$. When $n\ge q_0$ this proves that $$ A_\varepsilon\cap T^{-n}A_\varepsilon \subset A_\eps \cap T^{-q}B(x_0,\eps)=\emptyset, $$ while when $n<q_0$ the intersection is empty by definition. \end{proof} \begin{lemma}\label{matrice} Let $A$ be hyperbolic matrix. There exists an integer $q$ such that for any vector $v$, any $n\ge 2q$, $\|A^qv\|\le \max(\|v\|, \|A^nv\|)/4$. \end{lemma} We leave the proof to the reader. \end{document}
arXiv
7: Exponential and Logarithmic Functions
Mat C Intermediate Algebra (Carr)
7.2: Evaluate and Graph Exponential Functions
By the end of this section, you will be able to: graph exponential functions, solve exponential equations, and use exponential models in applications. Before you get 
started, take this readiness quiz. Simplify: \(\left(\frac{x^{3}}{x^{2}}\right)\). If you missed this problem, review Example 5.13. Evaluate: a. \(2^{0}\) b. \(\left(\frac{1}{3}\right)^{0}\). Evaluate: a. \(2^{−1}\) b. \(\left(\frac{1}{3}\right)^{-1}\). The functions we have studied so far do not give us a model for many naturally occurring phenomena. From the growth of populations and the spread of viruses to radioactive decay and compounding interest, the models are very different from what we have studied so far. These models involve exponential functions. An exponential function is a function of the form \(f(x)=a^{x}\) where \(a>0\) and \(a≠1\). An exponential function, where \(a>0\) and \(a≠1\), is a function of the form \(f(x)=a^{x}\) Notice that in this function, the variable is the exponent. In our functions so far, the variables were the base. Figure 10.2.1 Our definition says \(a≠1\). If we let \(a=1\), then \(f(x)=a^{x}\) becomes \(f(x)=1^{x}\). Since \(1^{x}=1\) for all real numbers, \(f(x)=1\). This is the constant function. Our definition also says \(a>0\). If we let a base be negative, say \(−4\), then \(f(x)=(−4)^{x}\) is not a real number when \(x=\frac{1}{2}\). \(\begin{aligned} f(x) &=(-4)^{x} \\ f\left(\frac{1}{2}\right) &=(-4)^{\frac{1}{2}} \\ f\left(\frac{1}{2}\right) &=\sqrt{-4} \text { not a real number } \end{aligned}\) In fact, \(f(x)=(−4)^{x}\) would not be a real number any time \(x\) is a fraction with an even denominator. So our definition requires \(a>0\). By graphing a few exponential functions, we will be able to see their unique properties. On the same coordinate system graph \(f(x)=2^{x}\) and \(g(x)=3^{x}\). We will use point plotting to graph the functions. Graph: \(f(x)=4^{x}\). Graph: \(g(x)=5^{x}\) If we look at the graphs from the previous Example 10.2.1 and Exercises 10.2.1 and 10.2.2, we can identify some of the properties of exponential functions. The graphs of \(f(x)=2^{x}\) and \(g(x)=3^{x}\), as well as the graphs of \(f(x)=4^{x}\) and \(g(x)=5^{x}\), all have the same basic shape. This is the shape we expect from an exponential function where \(a>1\). We notice, that for each function, the graph contains the point \((0,1)\). This make sense because \(a^{0}=1\) for any \(a\). The graph of each function, \(f(x)=a^{x}\) also contains the point \((1,a)\). The graph of \(f(x)=2^{x}\) contained \((1,2)\) and the graph of \(g(x)=3^{x}\) contained \((1,3)\). This makes sense as \(a^{1}=a\). Notice too, the graph of each function \(f(x)=a^{x}\) also contains the point \((−1,\frac{1}{a})\). The graph of \(f(x)=2^{x}\) contained \((−1,\frac{1}{2})\) and the graph of \(g(x)=3^{x}\) contained \((−1,\frac{1}{3})\).This makes sense as \(a^{−1}=\frac{1}{a}\). What is the domain for each function? From the graphs we can see that the domain is the set of all real numbers. There is no restriction on the domain. We write the domain in interval notation as \((−∞,∞)\). Look at each graph. What is the range of the function? The graph never hits the \(x\)-axis. The range is all positive numbers. We write the range in interval notation as \((0,∞)\). Whenever a graph of a function approaches a line but never touches it, we call that line an asymptote. For the exponential functions we are looking at, the graph approaches the \(x\)-axis very closely but will never cross it, we call the line \(y=0\), the \(x\)-axis, a horizontal asymptote. 
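If you would like to experiment beyond plotting by hand, the following short Python sketch (not part of the original lesson; it assumes the numpy and matplotlib packages are available) tabulates and plots \(f(x)=2^{x}\) and \(g(x)=3^{x}\) by point plotting and marks the horizontal asymptote \(y=0\):

import numpy as np
import matplotlib.pyplot as plt

xs = np.arange(-3, 4)                        # a few sample points, as in point plotting
for a in (2, 3):
    ys = np.power(float(a), xs)              # y = a^x at each sample point
    print(f"a = {a}:", list(zip(xs.tolist(), ys.tolist())))
    plt.plot(xs, ys, marker="o", label=f"{a}^x")

plt.axhline(0, color="gray", linewidth=0.5)  # the x-axis, y = 0, is the horizontal asymptote
plt.legend()
plt.show()

Each printed pair confirms the points \((0,1)\), \((1,a)\) and \((−1,\frac{1}{a})\) discussed above, and the plot shows the graphs approaching, but never touching, the \(x\)-axis.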
Properties of the graph of \(f(x)=a^{x}\) when \(a>1\) (Table 10.2.1): Domain \((-\infty, \infty)\); Range \((0, \infty)\); \(x\)-intercept: none; \(y\)-intercept: \((0,1)\); contains \((1, a)\) and \(\left(-1, \frac{1}{a}\right)\); asymptote: the \(x\)-axis, the line \(y=0\).
Our definition of an exponential function \(f(x)=a^{x}\) says \(a>0\), but the examples and discussion so far have been about functions where \(a>1\). What happens when \(0<a<1\)? The next example will explore this possibility. On the same coordinate system, graph \(f(x)=\left(\frac{1}{2}\right)^{x}\) and \(g(x)=\left(\frac{1}{3}\right)^{x}\). Graph: \(f(x)=\left(\frac{1}{4}\right)^{x}\). Graph: \(g(x)=\left(\frac{1}{5}\right)^{x}\). Figure 10.2.10
Now let's look at the graphs from the previous Example 10.2.2 and Exercises 10.2.3 and 10.2.4 so we can identify some of the properties of exponential functions where \(0<a<1\). The graphs of \(f(x)=\left(\frac{1}{2}\right)^{x}\) and \(g(x)=\left(\frac{1}{3}\right)^{x}\) as well as the graphs of \(f(x)=\left(\frac{1}{4}\right)^{x}\) and \(g(x)=\left(\frac{1}{5}\right)^{x}\) all have the same basic shape. While this is the shape we expect from an exponential function where \(0<a<1\), the graphs go down from left to right while the previous graphs, when \(a>1\), went up from left to right. We notice that for each function, the graph still contains the point \((0, 1)\). This makes sense because \(a^{0}=1\) for any \(a\). As before, the graph of each function, \(f(x)=a^{x}\), also contains the point \((1,a)\). The graph of \(f(x)=\left(\frac{1}{2}\right)^{x}\) contained \(\left(1, \frac{1}{2}\right)\) and the graph of \(g(x)=\left(\frac{1}{3}\right)^{x}\) contained \(\left(1, \frac{1}{3}\right)\). This makes sense as \(a^{1}=a\). Notice too that the graph of each function, \(f(x)=a^{x}\), also contains the point \(\left(-1, \frac{1}{a}\right)\). The graph of \(f(x)=\left(\frac{1}{2}\right)^{x}\) contained \((−1,2)\) and the graph of \(g(x)=\left(\frac{1}{3}\right)^{x}\) contained \((−1,3)\). This makes sense as \(a^{-1}=\frac{1}{a}\). What are the domain and range for each function? From the graphs we can see that the domain is the set of all real numbers and we write the domain in interval notation as \((−∞,∞)\). Again, the graph never hits the \(x\)-axis. The range is all positive numbers. We write the range in interval notation as \((0,∞)\). We will summarize these properties in the chart below, which also includes the case \(a>1\).
When \(a>1\): Domain \((-\infty, \infty)\); Range \((0, \infty)\); \(x\)-intercept: none; \(y\)-intercept: \((0,1)\); contains \((1, a),\left(-1, \frac{1}{a}\right)\); asymptote: the \(x\)-axis, the line \(y=0\); basic shape: increasing.
When \(0<a<1\): Domain \((-\infty, \infty)\); Range \((0, \infty)\); \(x\)-intercept: none; \(y\)-intercept: \((0,1)\); contains \((1, a),\left(-1, \frac{1}{a}\right)\); asymptote: the \(x\)-axis, the line \(y=0\); basic shape: decreasing.
It is important for us to notice that both of these graphs are one-to-one, as they both pass the horizontal line test. This means the exponential function will have an inverse. We will look at this later. When we graphed quadratic functions, we were able to graph using translation rather than just plotting points. Will that work in graphing exponential functions? On the same coordinate system graph \(f(x)=2^{x}\) and \(g(x)=2^{x+1}\). On the same coordinate system, graph: \(f(x)=2^{x}\) and \(g(x)=2^{x-1}\). On the same coordinate system, graph \(f(x)=3^{x}\) and \(g(x)=3^{x+1}\). 
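Here is a quick numerical way to see what these translation examples are showing. This short Python check is illustrative only and is not part of the original lesson; it verifies that \(g(x)=2^{x+1}\) takes the same values as \(f(x)=2^{x}\) one unit to the left:

f = lambda x: 2.0 ** x
g = lambda x: 2.0 ** (x + 1)      # adding 1 in the exponent

for x in range(-3, 4):
    assert g(x) == f(x + 1)       # same y-value as f, reached one unit earlier (shift left)
print("2^(x+1) agrees with 2^x shifted one unit to the left on these sample points.")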
Looking at the graphs of the functions \(f(x)=2^{x}\) and \(g(x)=2^{x+1}\) in the last example, we see that adding one in the exponent caused a horizontal shift of one unit to the left. Recognizing this pattern allows us to graph other functions with the same pattern by translation. Let's now consider another situation that might be graphed more easily by translation, once we recognize the pattern. On the same coordinate system graph \(f(x)=3^{x}\) and \(g(x)=3^{x}-2\). On the same coordinate system, graph \(f(x)=3^{x}\) and \(g(x)=3^{x}+2\). On the same coordinate system, graph \(f(x)=4^{x}\) and \(g(x)=4^{x}-2\). Looking at the graphs of the functions \(f(x)=3^{x}\) and \(g(x)=3^{x}−2\) in the last example, we see that subtracting \(2\) caused a vertical shift of down two units. Notice that the horizontal asymptote also shifted down \(2\) units. Recognizing this pattern allows us to graph other functions with the same pattern by translation. All of our exponential functions have had either an integer or a rational number as the base. We will now look at an exponential function with an irrational number as the base. Before we can look at this exponential function, we need to define the irrational number, \(e\). This number is used as a base in many applications in the sciences and business that are modeled by exponential functions. The number is defined as the value of \(\left(1+\frac{1}{n}\right)^{n}\) as \(n\) gets larger and larger. We say, as \(n\) approaches infinity, or increases without bound. The table shows the value of \(\left(1+\frac{1}{n}\right)^{n}\) for several values of \(n\). \(n\) \(\left(1+\frac{1}{n}\right)^{n}\) \(1\) \(2\) \(2\) \(2.25\) \(5\) \(2.48832\) \(10\) \(2.59374246\) \(100\) \(2.704813829 \ldots\) \(1,000\) \(2.716923932 \ldots\) \(10,000\) \(2.718145927 \ldots\) \(100,000\) \(2.718268237 \ldots\) \(1,000,000\) \(2.718280469 \ldots\) \(1,000,000,000\) \(2.718281827 \ldots\) \(e \approx 2.718281827\) The number \(e\) is like the number \(π\) in that we use a symbol to represent it because its decimal representation never stops or repeats. The irrational number \(e\) is called the natural base. Natural Base \(e\) The number \(e\) is defined as the value of \(\left(1+\frac{1}{n}\right)^{n}\), as \(n\) increases without bound. We say, as \(n\) approaches infinity, The exponential function whose base is \(e\), \(f(x)=e^{x}\) is called the natural exponential function. Natural Exponential Function The natural exponential function is an exponential function whose base is \(e\) \(f(x)=e^{x}\) The domain is \((−∞,∞)\) and the range is \((0,∞)\). Let's graph the function \(f(x)=e^{x}\) on the same coordinate system as \(g(x)=2^{x}\) and \(h(x)=3^{x}\). Notice that the graph of \(f(x)=e^{x}\) is "between" the graphs of \(g(x)=2^{x}\) and \(h(x)=3^{x}\).Does this make sense as \(2<e<3\)? Equations that include an exponential expression \(a^{x}\) are called exponential equations. To solve them we use a property that says as long as \(a>0\) and \(a≠1\), if \(a^{x}=a^{y}\) then it is true that \(x=y\). In other words, in an exponential equation, if the bases are equal then the exponents are equal. One-to-One Property of Exponential Equations For \(a>0\) and \(a≠1\), If \(a^{x}=a^{y}\),then \(x=y\). To use this property, we must be certain that both sides of the equation are written with the same base. Solve: \(3^{2 x-5}=27\). Step 1: Write both sides of the equation with the same base. Since the left side has base \(3\), we write the right side with base \(3\). 
\(27=3^{3}\) \(3^{2 x-5}=27\) \(3^{2 x-5}=3^{3}\) Step 2: Write a new equation by setting the exponents equal. Since the bases are the same, the exponents must be equal. \(2x-5=3\) Step 3: Solve the equation. Add \(5\) to each side. Divide by \(2\). \(\begin{aligned} 2 x &=8 \\ x &=4 \end{aligned}\) Step 4: Check the solution. Substitute \(x=4\) into the original equation. \(\begin{aligned} 3^{2 x-5} &=27 \\ 3^{2 \cdot \color{red}{4}\color{black}{-}5} & \stackrel{?}{=} 27 \\ 3^{3} &\stackrel{?}{=}27 \\ 27 &=27 \end{aligned}\) \(x=2\) Solve: \(7^{x-3}=7\). The steps are summarized below. Write both sides of the equation with the same base, if possible. Write a new equation by setting the exponents equal. Check the solution. In the next example, we will use our properties on exponents. Solve \(\frac{e^{x^{2}}}{e^{3}}=e^{2 x}\). \(\frac{e^{x^{2}}}{e^{3}}=e^{2 x}\) Use the Property of Exponents: \(\frac{a^{m}}{a^{n}}=a^{m-n}\). \(e^{x^{2}-3}=e^{2 x}\) Write a new equation by setting the exponents equal. \(x^{2}-3=2 x\) Solve the equation. \(x^{2}-2 x-3=0\) \((x-3)(x+1)=0\) \(x=3, x=-1\) Check the solutions. Solve: \(\frac{e^{x^{2}}}{e^{x}}=e^{2}\). \(x=-1, x=2\) Exponential functions model many situations. If you own a bank account, you have experienced the use of an exponential function. There are two formulas that are used to determine the balance in the account when interest is earned. If a principal, \(P\), is invested at an interest rate, \(r\), for \(t\) years, the new balance, \(A\), will depend on how often the interest is compounded. If the interest is compounded \(n\) times a year we use the formula \(A=P\left(1+\frac{r}{n}\right)^{n t}\). If the interest is compounded continuously, we use the formula \(A=Pe^{rt}\). These are the formulas for compound interest. For a principal, \(P\), invested at an interest rate, \(r\), for \(t\) years, the new balance, \(A\), is: \(\begin{array}{ll}{A=P\left(1+\frac{r}{n}\right)^{n t}} & {\text { when compounded } n \text { times a year. }} \\ {A=P e^{r t}} & {\text { when compounded continuously. }}\end{array}\) As you work with the Interest formulas, it is often helpful to identify the values of the variables first and then substitute them into the formula. A total of $\(10,000\) was invested in a college fund for a new grandchild. If the interest rate is \(5\)%, how much will be in the account in \(18\) years by each method of compounding? compound quarterly compound monthly compound continuously Identify the values of each variable in the formulas. Remember to express the percent as a decimal. \(\begin{aligned} A &=? \\ P &=\$ 10,000 \\ r &=0.05 \\ t &=18 \text { years } \end{aligned}\) a. For quarterly compounding, \(n=4\). There are \(4\) quarters in a year. \(A=P\left(1+\frac{r}{n}\right)^{n t}\) Substitute the values in the formula. \(A=10,000\left(1+\frac{0.05}{4}\right)^{4 \cdot 18}\) Compute the amount. Be careful to consider the order of operations as you enter the expression into your calculator. \(A=\$ 24,459.20\) b. For monthly compounding, \(n=12\).There are \(12\) months in a year. \(A=10,000\left(1+\frac{0.05}{12}\right)^{12 \cdot 18}\) Compute the amount. c. For compounding continuously, \(A=P e^{r t}\) \(A=10,000 e^{0.05 \cdot 18}\) Angela invested $\(15,000\) in a savings account. If the interest rate is \(4\)%, how much will be in the account in \(10\) years by each method of compounding? $\(22,332.96\) Allan invested $\(10,000\) in a mutual fund. 
If the interest rate is \(5\)%, how much will be in the account in \(15\) years by each method of compounding? Other topics that are modeled by exponential functions involve growth and decay. Both also use the formula \(A=Pe^{rt}\) we used for the growth of money. For growth and decay, generally we use \(A_{0}\) as the original amount instead of calling it \(P\), the principal. We see that exponential growth has a positive rate of growth and exponential decay has a negative rate of growth.
Exponential Growth and Decay: For an original amount, \(A_{0}\), that grows or decays at a rate, \(r\), for a certain time, \(t\), the final amount, \(A\), is: \(A=A_{0} e^{r t}\)
Exponential growth is typically seen in the growth of populations of humans or animals or bacteria. Our next example looks at the growth of a virus. Chris is a researcher at the Center for Disease Control and Prevention and he is trying to understand the behavior of a new and dangerous virus. He starts his experiment with \(100\) of the virus that grows at a rate of \(25\)% per hour. He will check on the virus in \(24\) hours. How many viruses will he find? Identify the values of each variable in the formulas. Be sure to put the percent in decimal form. Be sure the units match--the rate is per hour and the time is in hours. \(\begin{aligned} A &=? \\ A_{0} &=100 \\ r &=0.25 / \text { hour } \\ t &=24 \text { hours } \end{aligned}\) Substitute the values in the formula: \(A=A_{0} e^{r t}\). \(A=100 e^{0.25 \cdot 24}\) \(A=40,342.88\) Round to the nearest whole virus. \(A=40,343\) The researcher will find \(40,343\) viruses.
Another researcher at the Center for Disease Control and Prevention, Lisa, is studying the growth of a bacteria. She starts her experiment with \(50\) of the bacteria that grow at a rate of \(15\)% per hour. She will check on the bacteria every \(8\) hours. How many bacteria will she find in \(8\) hours? She will find \(166\) bacteria. Maria, a biologist, is observing the growth pattern of a virus. She starts with \(100\) of the virus that grows at a rate of \(10\)% per hour. She will check on the virus in \(24\) hours. How many viruses will she find? She will find \(1,102\) viruses.
Access these online resources for additional instruction and practice with evaluating and graphing exponential functions: Graphing Exponential Functions; Solving Exponential Equations; Applications of Exponential Functions; Continuously Compound Interest; Radioactive Decay and Exponential Growth.
Key concepts: Properties of the Graph of \(f(x)=a^{x}\). One-to-One Property of Exponential Equations: for \(a>0\) and \(a≠1\), if \(a^{x}=a^{y}\), then \(x=y\). How to Solve an Exponential Equation. Compound Interest: For a principal, \(P\), invested at an interest rate, \(r\), for \(t\) years, the new balance, \(A\), is \(A=P\left(1+\frac{r}{n}\right)^{nt}\) when compounded \(n\) times a year and \(A=Pe^{rt}\) when compounded continuously. Exponential Growth and Decay: For an original amount, \(A_{0}\), that grows or decays at a rate, \(r\), for a certain time \(t\), the final amount, \(A\), is \(A=A_{0}e^{rt}\).
asymptote: A line which a graph of a function approaches closely but never touches.
exponential function: An exponential function, where \(a>0\) and \(a≠1\), is a function of the form \(f(x)=a^{x}\).
natural base: The number \(e\) is defined as the value of \((1+\frac{1}{n})^{n}\) as \(n\) gets larger and larger. We say, as \(n\) increases without bound, \(e≈2.718281827...\)
natural exponential function: The natural exponential function is an exponential function whose base is \(e\): \(f(x)=e^{x}\). The domain is \((−∞,∞)\) and the range is \((0,∞)\). 
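To tie the formulas in this section together, here is a minimal Python sketch. It is not part of the original lesson; the function names are illustrative and only the standard math module is assumed. It reproduces the compound-interest figures from the worked example above, the virus-growth example, and the limit defining \(e\):

import math

def compound(P, r, t, n):
    """Balance after t years at rate r, compounded n times per year: A = P(1 + r/n)^(n t)."""
    return P * (1 + r / n) ** (n * t)

def continuous(A0, r, t):
    """Continuous compounding, or exponential growth/decay: A = A0 * e^(r t)."""
    return A0 * math.exp(r * t)

print(round(compound(10_000, 0.05, 18, 4), 2))   # quarterly compounding: 24459.2
print(round(compound(10_000, 0.05, 18, 12), 2))  # monthly compounding: about 24550.08
print(round(continuous(10_000, 0.05, 18), 2))    # continuous compounding: about 24596.03
print(round(continuous(100, 0.25, 24)))          # virus example: about 40343 viruses

n = 1_000_000
print((1 + 1 / n) ** n)                          # approaches e ~ 2.718281828 as n grows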
This page titled 7.2: Evaluate and Graph Exponential Functions is shared under a CC BY license and was authored, remixed, and/or curated by OpenStax.
CommonCrawl
Spheroidal Models of Atoms

René Descartes certainly thought that atoms were like little balls spinning around and bumping into each other.1 So consider an atom $\mathbf{A}$ described by a repetitive chain of events $\Psi = \left( \sf{\Omega}_{1} , \sf{\Omega}_{2}, \sf{\Omega}_{3} \ldots \right)$ where each repeated cycle $\sf{\Omega}$ is a space-time event in a Euclidean space. Let this atom be characterized by its wavenumber $\kappa$ and its orbital radius $R$.

Definition: We can model $\mathbf{A}$ as a spheroid mathematically represented in Cartesian coordinates by

$\begin{align} \frac{x^{2} + y^{2}}{R^{2}} + \left( \frac{2 \kappa}{3 \pi } \right)^{2} z^{2} = 1 \end{align}$

This shape is also known as an ellipsoid of revolution about the atom's polar axis. If $\mathbf{A}$ is in its ground-state then $\kappa = 0$ and the sphere collapses into a circle

$x^{2} + y^{2} = R^{2}$

However if $\mathbf{A}$ is an excited atom then its wavelength is $\lambda = 2 \pi /\kappa$ and its shape can be represented as

$\begin{align} \frac{x^{2} + y^{2}}{R^{2}} + \left( \frac{4 }{3 \lambda} \right)^{2} z^{2} = 1 \end{align}$

Traditional mensuration formulae give the volume enclosed by this curve as

$V = \lambda \pi R^{2}$

So the spheroidal model has been scaled to give $\mathbf{A}$ exactly the same volume as the cylindrical model of an atom. A variety of spheroidal shapes are specified in the accompanying table.

Spheroidal Shapes
a perfect sphere: $3 \lambda = 4 R$
a prolate spheroid: $3 \lambda > 4 R$
an oblate spheroid: $3 \lambda < 4 R$

1 For example he writes that: "The material, as I said, is composed of many small balls which are in mutual contact; and we have sensory awareness of two kinds of motion which these balls have. One is the motion by which they approach our eyes in a straight line, which gives us the sensation of light; and the other is the motion whereby they turn about their own centers as they approach us. If the speed at which they turn is much smaller than that of their rectilinear motion, the body from which they come appears blue to us; while if the turning speed is much greater than that of their rectilinear motion, the body appears red to us. But the only type of body which could possibly make their turning motion faster is one whose tiny parts have such slender strands, and ones which are so close together (as I have shown those of the blood to be), that the only material revolving round them is that of the first element. The little balls of the second element encounter this material of the first element on the surface of the blood; this material of the first element then passes with a continuous, very rapid, oblique motion from one gap between the balls to another, thus moving in an opposite direction to the balls, so that they are forced by it to turn about their centres." From A Description of the Human Body published in The Philosophical Writings of Descartes, Volume I. Translated by John Cottingham, Robert Stoothoff and Dugald Murdoch. Cambridge University Press, 1985, page 323.
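Returning to the spheroid model above, the classification and volume formula can be checked with a few lines of code. This is our illustration only (the function name and sample numbers are not from the original page), assuming an excited atom with $\kappa > 0$.

```python
import math

def spheroid_model(kappa, R):
    """Wavelength, enclosed volume, and shape class of the spheroidal atom model.

    kappa -- wavenumber of the excited atom (kappa > 0)
    R     -- orbital radius
    """
    lam = 2 * math.pi / kappa          # lambda = 2*pi / kappa
    volume = math.pi * lam * R ** 2    # V = lambda * pi * R^2
    # Equatorial semi-axis is R; polar semi-axis is 3*lambda/4.
    if math.isclose(3 * lam, 4 * R):
        shape = "perfect sphere"       # 3*lambda = 4R
    elif 3 * lam > 4 * R:
        shape = "prolate spheroid"     # 3*lambda > 4R
    else:
        shape = "oblate spheroid"      # 3*lambda < 4R
    return lam, volume, shape

print(spheroid_model(kappa=1.0, R=2.0))
```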
CommonCrawl
Influence of occupational exposure on hyperuricemia in steelworkers: a nested case–control study Yuanyu Chen1, Yongzhong Yang1, Ziwei Zheng1, Hui Wang1, Xuelin Wang1, Zhikang Si1, Rui Meng1, Guoli Wang1,2 & Jianhui Wu1,2 Occupational exposure may be associated with an increased risk of developing hyperuricemia. This study sheds light on the association between occupational exposure and hyperuricemia in steelworkers. A nested case–control study was conducted within a cohort of workers in steel companies to explore the association between occupational exposure and hyperuricemia. The case group consisted of a total of 641 cases of hyperuricemia identified during the study period, while 641 non-hyperuricemia subjects with the same age and gender distribution were randomly selected from the cohort as the control group. The incidence rate of hyperuricemia among workers in the steel company was 17.30%, with an incidence density of 81.32/1,000 person-years. In comparison to the reference group, the risks of developing hyperuricemia for steelworkers undergoing ever shifts, current shifts, heat exposure, and dust exposure were 2.18 times, 1.81 times, 1.58 times and 1.34 times higher, respectively. The odds ratios (ORs) and 95% confidence intervals (CIs) were 1.87(1.12–3.13) and 2.02(1.21–3.37) for the cumulative number of days of night work at 0–1,972.80 and ≥ 1,972.80 (days), respectively. Compared to the group with the cumulative heat exposure of 0 (°C/year), the ORs (95% CI) for the risk of developing hyperuricemia in the groups with the cumulative heat exposure of 0–567.83 and ≥ 567.83 (°C/year) were 1.50(1.02–2.22) and 1.64(1.11–2.43), respectively. The OR (95% CI) for the risk of developing hyperuricemia was 1.56(1.05–2.32) at the cumulative dust exposure of ≥ 30.02 (mg/m3/year) compared to that at the cumulative dust exposure of 0 (mg/m3/year). Furthermore, there was a multiplicative interaction between heat exposure and dust exposure in the development of hyperuricemia. Shift work, heat, and dust are independent risk factors for the development of hyperuricemia in steelworkers. Additionally, there is a multiplicative interaction between heat exposure and dust exposure in the development of hyperuricemia. Interventions for shift work, heat and dust may help to reduce the incidence rate of hyperuricemia and improve the health of steelworkers. Hyperuricemia (HUA) refers to a group of metabolic disorders. To be more specific, the concentration of uric acid in the blood is too high due to chronic impairment of purine nucleotide metabolism and/or abnormal excretion of uric acid in the body [1]. In the early stages, hyperuricemia is characterized only by elevated blood uric acid concentration and is highly insidious. Gouty arthritis and, in severe instances, renal impairment can develop when the blood is saturated with urate [2]. Studies have confirmed that hyperuricemia not only causes gouty arthritis and renal impairment but is also associated with type 2 diabetes, hypertension, coronary artery disease, endothelial dysfunction, and metabolic syndrome [3,4,5,6,7]. A study of 36,348 adults showed that the prevalence of hyperuricemia among Chinese adults was 8.4% in 2009–2010 [8]. In 2015, a meta-analysis conducted by Liu R et al. involving 44 epidemiological surveys in 16 provinces, municipalities, and autonomous regions in the mainland of China, revealed that the overall prevalence of hyperuricemia in China was 13.3% [9].
In addition, the prevalence of hyperuricemia among US adults, significantly higher than the results of the 1988–1994 survey (18.2%), rises to 20.1% according to the 2007–2016 US National Health and Nutrition Examination Survey [10]. Thus, it appears that the prevalence of hyperuricemia is increasing every year. Studies have shown that heat exposure can lead to kidney damage [11], which in turn can affect the metabolism of uric acid. Roncal-Jimenez CA et al. carried out an animal experiment in which mice exposed to heat for 5 weeks had higher serum uric acid levels, indicating that heat exposure is associated with elevated uric acid in the serum [12]. A large case–control study was conducted, and the results revealed a link between inorganic dust exposure and gout [13]. In addition, a Russian study showed a higher prevalence of hyperuricemia in oil press workers exposed to chronic noise than that in the general population [14]. Different production processes in steel plants involve a variety of occupational hazards such as heat, noise, dust, and long irregular shifts, all of which can be hazardous to the health of workers in steel companies and give rise to various diseases. LI X et al. conducted a survey of steelworkers and made an analysis. They found that the prevalence of hyperuricemia among steelworkers was 35.9% [15], which was higher than that of the general population. The factors that affect hyperuricemia have been extensively studied [16,17,18,19]. However, these studies have mainly focused on social behavior, dietary habits, and genetic factors. There have been very few investigations into the influence of occupational hazardous variables on hyperuricemia. Hence, the current study was carried out to look into the connection between occupational exposure and the development of hyperuricemia in workers at a steel company. Besides, the interactions between occupational hazards were examined. Population study The participants in this study were workers of a steel company who underwent occupational health screening at Hongci Hospital, Tangshan City, Hebei Province, China, from March 2017 to September 2017. Cohort inclusion criteria: formally employed workers, at least one year of service. Cohort exclusion criteria: hyperuricemia and refusal to participate in this cohort study. A total of 4,247 workers from the steel company were included in the cohort. A follow-up survey of a total of 3,706 steelworkers between March 2019 and September 2019 was carried out, with a 12.74% missing rate. Workers in the cohort with new-onset hyperuricemia in steel companies are regarded as the case group. From this cohort population, each new-onset patient was matched with a worker in the steel company without hyperuricemia as a control according to the principle of matched design (the same sex and age) to form a control group. A total of 641 case and control pairs were eventually included as samples after those with incomplete information were excluded. All participants have read and signed the informed consent form. The subject was approved by the Medical Ethics Committee of the North China University of Technology (No.15006). A one-on-one survey was conducted by uniformly trained enumerators with a questionnaire designed by the subject group. 
The contents of the questionnaire included sex, age, household size, monthly household income, education level, marital status, smoking status, drinking status, tea status, physical exercise, type of work, change in type of work, shift work, type of shift, change in shift work, change in working hours, daily working hours, monthly rest time, and personal disease history. The participants were also asked about the frequency of food intake (0 day per week, 1–2 days per week, 3–6 days per week, and 7 days per week). Food was divided into different groups: vegetables, fruits, meat, eggs, dairy products, soy products, and seafood. The participants took off their shoes when their height and weight were measured through the ultrasound height and weight measuring device. Measurements were taken three times and averaged. The body mass index (BMI) was calculated based on the measurement. The participant was instructed not to drink tea, coffee, alcohol or other beverages that may affect the results of blood pressure. The participant was asked to have a five-minute break before the blood pressure measurement and to take the measurement three times with an interval of not less than three minutes. Laboratory examination Subjects were required to fast for 12 h. Fasting blood and morning urine were collected by the Laboratory Department of Tangshan Hongci Hospital before 9 a.m. the next day. Blood, urine, and blood biochemistry were examined by specialist physicians. On-site hygienic investigation The on-site hygienic survey involved heat, dust, and noise. The measurement tool for temperature is Wet Bulb Black Globe Temperature Gauge (WBGT). According to relevant standard [20], the temperature should be measured during the hottest season of the year. Temperatures were measured at different workplaces in consideration of the specific conditions of the steel production unit. Three to six measurement points were selected for each workplace, and the test was repeated three times at each measurement point, with the average taken as the final result. Dust was measured with a dust sampler. The sampling points were chosen according to relevant standard [21] and specific conditions of the workshop. The sample collection time for each sampling point was 45 min, and the flow rate of the dust sampler was set at 40 L/min. The calculation formula is as follows: $$\begin{array}{c}C=\frac{\mathrm{m}2-\mathrm{m}1}{\mathrm{Q}\times \mathrm{t}}\times 1000\#\end{array}$$ where: C- dust concentration, mg/m.3 m2—the mass of the filter membrane after sampling, mg. m1—the mass of the filter membrane before sampling, mg. Q—flow rate, L/min. t—sampling time, min. Noise testing was carried out according to relevant standards [22, 23] and specific circumstances of the workplace. When the noise distribution in the workshop was uneven, the noise was divided into different sound zones according to the sound level and two test points were set up in each zone. When the noise distribution in the workshop was relatively even (the sound level difference is less than or equal to 3 dB (A)), three measurement points were set up. The average value was taken as the final result after measurement. The calculation formula for sound level measurement is as follows: $${L}_{Aeq,T}=10\text{lg(}\frac{1}{T}{\sum }_{i=1}^{n}{T}_{i}{10}^{0.1{L}_{Aeq,{T}_{i}}})$$ where: \({L}_{Aeq, {T}_{i}}\)- equivalent sound level during the time period Ti. \({L}_{Aeq, T}\)—equivalent sound level for a full day. n—the total number of periods. 
T—the duration of each period. Ti- i period of time. Cumulative exposure measurement (CEM) for steelworkers is calculated based on the results of the on-site hygienic survey, combined with the change in work status and duration of occupational exposure. The formula is as follows: $$\mathrm{CEM}={\mathrm{L}}_{1} {\mathrm{T}}_{1}+ {\mathrm{L}}_{2} {\mathrm{T}}_{2}\dots \dots +{\mathrm{L}}_{\mathrm{n}} {\mathrm{T}}_{\mathrm{n}}$$ where: Ln is the average exposure to the target harmful factor over a period of time Tn. Definition and grouping of indicators Those having a blood uric acid value greater than or equal to 7.0 mg/dL in men and 6.0 mg/dL in women, as well as previous or ongoing gout treatment, were diagnosed with hyperuricemia [24]. Never smokers were defined as those who had never smoked from birth to the time of the survey. Former smokers were defined as those who had previously smoked but had quit smoking for 6 months or longer as of this survey. People who had smoked at least 1 cigarette per day for six months or longer as of the survey were defined as current smokers. People who drank alcohol more than twice a week, regardless of the type of alcohol, and who had done so for more than a year were considered to be drinking. The frequency of food consumption was divided into four categories: never (0 day per week), occasionally (1–2 days per week), frequently (3–6 days per week) and daily (7 days per week). In this study, the International Physical Activity Questionnaire (IPAQ) was used to analyze the physical activities of steelworkers [25]. Physical activities were classified into light, moderate, and heavy activities based on intensity, frequency, and overall weekly physical activity level. Shift work is a system of irregular working hours in which one or more teams perform tasks continuously for 24 h by working in shifts without stopping [26]. The cumulative number of days of night work represents the total number of days of night work done by workers in the steel plant as of the date of the survey. According to relevant standard [20], work with a productive heat source and WBGT ≥ 25 °C was defined as heat-exposed work. Dust exposure was defined based on the type of work, the work environment, and the findings of the site hygiene [21]. Exposure to noise was defined as workers who were exposed to a noisy environment where the 8 h/d or 40 h/week equivalent A-weighted sound pressure level is ≥ 80 dB, which may be harmful to health and hearing [27]. Continuous variables were described by means and standard deviations, and the differences between groups were obtained through Student's t-test. The categorical variables were expressed through the number of individuals (%), and χ2 tests were used for comparisons between groups. Multifactorial analyses were performed and multiplicative interactions between occupational hazard factors were explored with the help of conditional logistic regression models. Additive interactions were assessed using the attributable proportion of interaction (AP), the relative excess risk of interaction (RERI), and the synergy index (SI), calculated using the Excel spreadsheet by Andersson et al. [28]. The AP is the proportion of the risk due to the interaction in the doubly exposed group. When RERI is positive, it indicates increased risk due to the additive interaction. SI can be interpreted as the ratio of an increased risk due to both exposures to the sum of individual increased risks. 
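The additive-interaction measures described above depend only on the three adjusted odds ratios (the doubly exposed group and the two singly exposed groups, with the doubly unexposed group as reference). The paper computed them with the Excel spreadsheet by Andersson et al.; the sketch below is our Python equivalent of the standard formulas, with made-up odds ratios for illustration.

```python
def additive_interaction(or11, or10, or01):
    """Additive-interaction measures from three odds ratios.

    or11 -- OR for joint exposure to both factors
    or10 -- OR for exposure to the first factor only
    or01 -- OR for exposure to the second factor only
    The doubly unexposed group is the reference (OR = 1).
    """
    reri = or11 - or10 - or01 + 1                    # relative excess risk due to interaction
    ap = reri / or11                                 # attributable proportion of interaction
    si = (or11 - 1) / ((or10 - 1) + (or01 - 1))      # synergy index
    return reri, ap, si

# Hypothetical odds ratios (illustration only, not values from the paper's tables):
print(additive_interaction(or11=2.5, or10=1.6, or01=1.3))
```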
All statistical analyses were performed by dint of IBM SPSS 24.0 and Excel 2019. P < 0.05 was regarded as significant for two-sided tests. A total of 3,706 workers (3,352 males and 354 females) were followed up in the steel enterprise. The follow-up period was (25.88 ± 1.68) months. 641 new cases of hyperuricemia (587 males and 54 females) were found among the 3,706 workers in the steel company, with a mean age of (44.67 ± 7.58) years. The case and control groups have the same sex and age composition because the same sex and age were required and 1:1 matching was used according to the matching principle. The two groups were comparable with the same basic data. The incidence rate was 17.30% (17.51% for males and 15.25% for females). The incidence density was 81.32/1,000 person-years (82.35/1,000 person-years for males and 71.56/1,000 person-years for females). Characteristics of participant study The results of the analysis are shown in Table 1 (at the end of the manuscript). The analysis revealed that the proportions of junior, senior, and secondary were the highest in both the case and control groups, at 78.5 and 71.1%, respectively. The proportion of people with a junior college or above in the case group was 20.4%, lower than 27.6% in the control group, with a statistically significant difference (P < 0.05). The proportions of workers in the steel enterprise with a per capita monthly income of RMB1,500—and RMB ≥ 2,500 were 46.8 and 32.6%, respectively, higher than those of the control group at 41.0% and 25.6%, respectively, with statistically significant differences (P < 0.001). Regarding the health status of steelworkers, the proportions of overweight, obesity, hypertension, diabetes, dyslipidemia, abnormal kidney function, and abnormal liver function in the case group were all higher than those in the control group, with statistically significant differences (P < 0.05 for all diseases). In terms of behavioral lifestyle, the proportion of workers in the steel company who consumed fruit daily was the highest in both the case (51.6%) and control (59.3%) groups, and was lower in the case group than in the control group, showing a statistically significant difference (P < 0.05). The proportion of workers in the steel company who consumed meat daily was 33.4% in the case group, higher than that in the control group at 24.2%, with a statistically significant difference (P < 0.001). The proportion of workers in the steel company who consumed seafood daily was 11.4% in the case group, compared with 6.4% in the control group, hence the difference was statistically significant (P < 0.05). The proportions of heavy physical activities and physical exercises 5–7 times per week were 75.2 and 14.8% in the case group, respectively, compared with 82.2 and 22.9% in the control group, presenting statistically significant differences (P < 0.001 for both). The proportion of alcohol consumption in the case group was 53.8%, higher than that of 40.1% in the control group, showing a statistically significant difference (P < 0.001). The differences in the distribution of marital status, frequency of vegetable consumption, frequency of egg consumption, frequency of soy product consumption, smoking, and tea-drinking between the case and control groups were not statistically significant (P > 0.05 for all parameters). 
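The between-group comparisons reported in this and the following sections are χ2 tests on contingency tables. A minimal SciPy sketch of such a test is shown below; the counts are invented for illustration and are not taken from Table 1.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2 x 2 table: rows = case / control group, columns = exposed / not exposed.
observed = [[345, 296],
            [288, 353]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, p = {p_value:.4f}, dof = {dof}")
```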
Table 1 Comparison of population characteristics of steelworkers Single-factor analysis of the exposure to occupational hazards A comparison of the distribution of exposure to occupational hazards between the case group and the control group revealed that ever shifts, current shifts, heat exposure, and dust exposure differed significantly between the two groups, but the differences were not statistically significant in the two noise exposure groups. Please refer to Table 2. Table 2 Comparison of exposure to different occupational hazardous factors Univariate analysis of the cumulative exposure to occupational hazards In the cumulative exposure, those not exposed to occupational hazards with a cumulative exposure of 0 formed a group. The median among exposures to occupational hazards was obtained. With 0 and the median as bounds, each cumulative exposure was divided into three groups (0,0-median, ≥ median). The median cumulative exposure to night shifts in the shift work population was 1,972.80. The median cumulative exposure to heat in the heat-exposed population was 567.83. The median cumulative exposure to dust in the dust-exposed population was 30.02. The median cumulative exposure to noise in the noise-exposed population was 1,660.68. After comparing the distribution of cumulative exposure to occupational hazards between the case and control groups, we found that the proportion of the cumulative number of days of night work in the case group was 46.5% for ≥ 1,972.80 day, which was higher than that of the control group (41.2%), presenting a statistically significant difference (P < 0.05). The proportions of cumulative exposure to heat in the case group were 26.1% and 30.7% for 0–567.83(°C/year) and ≥ 567.83(°C/year), respectively, which were higher than those of the control group (24.6% and 20.0%), which shows statistically significant differences (P < 0.001). The difference in the cumulative dust exposure between the two groups was also statistically significant (P < 0.05). The difference in the cumulative noise exposure between the two groups was not statistically significant (P > 0.05). Please see Table 3. Table 3 Distribution of the cumulative occupational exposure after segmentation Multi-factor analysis of the effect of occupational hazard exposure on HUA in workers in the steel company Table 4 shows the analysis results of the effect of occupational hazard exposure on HUA by dint of the Conditional Logistic Regression model. The dependent variable was the presence or absence of hyperuricemia, and the independent variable was the occupational exposure to harmful factors. After the adjustment of possible confounders, the OR (95% CI) for the risk of developing hyperuricemia was 2.18 (1.28–3.69) times and 1.81 (1.11–2.95) times for ever shifts and current shifts, respectively, compared to the reference group. Heat exposure and dust exposure were associated with an elevated risk of developing hyperuricemia with ORs (95% CI) of 1.58 (1.17–2.14) and 1.34(1.01–1.81), respectively. In addition, the effect of noise exposure on the risk of hyperuricemia development in steelworkers was not statistically significant (P > 0.05). Table 4 Logistic regression analysis of the connection between the exposure to occupational hazards and HUA Multi-factor analysis of the effect of the cumulative exposure to occupational hazards on the HUA of workers in the steel company Cumulative exposure to occupational hazards was segmented as above. 
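The three-level segmentation used above (0, 0-median, ≥ median, with the median taken over exposed workers only) can be expressed compactly as follows. This is our sketch with invented example values, not study data.

```python
import numpy as np

def segment_cumulative_exposure(values):
    """Assign each cumulative exposure to one of three groups:
    "0" (never exposed), "0-median", or ">=median",
    where the median is computed over exposed workers only."""
    values = np.asarray(values, dtype=float)
    median = np.median(values[values > 0])
    labels = np.where(values == 0, "0",
             np.where(values < median, "0-median", ">=median"))
    return labels, median

# Hypothetical cumulative heat exposures in degrees C/year:
print(segment_cumulative_exposure([0, 120.5, 600.0, 0, 450.2, 1020.4]))
```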
A Conditional Logistic Regression model was employed with the presence or absence of hyperuricemia as the dependent variable, and the segmented cumulative exposure to occupational hazards as the independent variable. After the adjustment of possible confounders, the results showed an increased risk of developing hyperuricemia in the cumulative days of night work in the 0–1,972.80 and ≥ 1,972.80 (days) compared to the reference group, with ORs (95% CI) of 1.87(1.12–3.13) and 2.02(1.21–3.37), respectively. The ORs (95% CI) for the risk of developing hyperuricemia in steelworkers in the cumulative exposure to heat groups of 0–567.83 and ≥ 567.83 (°C/year) were 1.50(1.02–2.22) and 1.64(1.11–2.43), respectively. Compared to the reference group, the OR (95% CI) for the risk of hyperuricemia in steelworkers was 1.56 (1.05–2.32) for cumulative dust exposure ≥ 30.02 (mg/m3/year). The OR (95% CI) for the risk of developing hyperuricemia in steelworkers at dust exposure of ≥ 30.02 (mg/m3/year) was 1.16(0.78–1.73), which was not statistically significant (P > 0.05) compared to the reference group. The association between cumulative noise exposure and hyperuricemia development in steelworkers was not statistically significant (P > 0.05). Please refer to Table 5. Table 5 Logistic regression analysis of the connection between post-occupational cumulative exposure segmentation and HUA Correlation of interactions between occupational hazards and HUA The multiplicative interactions between shift work, heat exposure, and dust exposure were analyzed in the case of hyperuricemia. The main effects of the two occupational hazards and the product terms of the two multiplicative interactions were jointly introduced into the Conditional Logistic Regression model for analysis. The model analysis was adjusted for service years, marital status, education level, monthly per capita household income, physical activity level, BMI, hypertension, diabetes, dyslipidemia, abnormal kidney function, abnormal liver function, frequency of vegetable consumption, frequency of fruit consumption, frequency of egg consumption, frequency of dairy consumption, frequency of soya product consumption, frequency of seafood consumption, physical exercise, smoking, drinking, and tea drinking. The results showed a multiplicative interaction between dust exposure and heat exposure (P interaction < 0.05). Dust exposure significantly increased the effect of heat exposure on hyperuricemia development (OR = 2.45, 95% CI: 1.34–4.46). No multiplicative interactions were found between shift work and heat exposure or shift work and dust exposure (P interactions > 0.05 for all variables). Please see Tables 6, 7 and 8. Table 6 Analysis of the multiplicative interaction between shift work and heat exposure on HUA Table 7 Analysis of the multiplicative interaction between shift work and dust exposure on HUA Table 8 Analysis of the multiplicative interaction between heat exposure and dust exposure on HUA Since the additive interaction model can only be employed to explore interactions between dichotomous variables, the ever shifts and current shifts were combined into one group. The shift states were divided into never-shift and shift for the analysis of additive interactions. The results revealed no additive interactions between shift work and heat exposure, shift work and dust exposure, or heat exposure and dust exposure. Please refer to Table 9. 
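On the multiplicative scale, the interaction reported above corresponds to the joint odds ratio exceeding the product of the two single-exposure odds ratios; the formal test is the p-value of the product term in the conditional logistic model. The small arithmetic sketch below is illustrative only, with made-up odds ratios rather than the adjusted estimates from Tables 6, 7 and 8.

```python
def multiplicative_interaction_ratio(or11, or10, or01):
    """Ratio of the joint OR to the product of the single-exposure ORs.
    A ratio above 1 suggests a super-multiplicative joint effect."""
    return or11 / (or10 * or01)

# Hypothetical odds ratios (illustration only):
print(multiplicative_interaction_ratio(or11=2.5, or10=1.6, or01=1.3))
```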
Table 9 Analysis of additive interactions between occupational hazards on HUA in steelworkers With rapid socio-economic development and changes in people's lifestyles, hyperuricemia has become the second most prevalent metabolic disease after diabetes, seriously affecting people's quality of life [29]. Hyperuricemia is a risk factor for many diseases. The deposition of urate crystals can give rise to metabolic disorders and damage to kidney and heart [30, 31]. Studies have shown that the prevalence of hyperuricemia in adults is increasing year by year. There is a trend of a younger onset of the disease [9]. In this study, the incidence rate of hyperuricemia in the workers of the steel company investigated was 17.30%. A meta-analysis revealed that 13.3% of people in the mainland of China had hyperuricemia [9]. The incidence rate of hyperuricemia among workers in steel companies is higher than the average in the mainland of China. It is therefore essential to study the effects of occupational exposure on hyperuricemia in steelworkers. According to the US 2010 National Health Interview Survey, about one-fifth of the workforce carries out shift work of varying intensities worldwide [32]. Shift work has many adverse effects on the physical and mental health of workers. Therefore, shift work is one of the occupational hazards that cannot be overlooked. Despite production reforms, shift work is still practiced in the steel industry. Of the 641 sample pairs in this study, 87.7% of workers in the steel enterprise had a history of shift work, and 65.4% of workers in the steel enterprise were currently working in shifts. In this study, after the adjustment of possible confounders, it was found that the ORs (95% CI) for the risk of developing hyperuricemia due to ever shifts and current shifts were 2.18 and 1.81 times higher than that due to never shifts, respectively. The risk of hyperuricemia was increased in both the 0–1,972.80 and ≥ 1,972.80 (days) groups compared to the 0 (day) cumulative days of night work. The ORs were 1.87 and 2.02, respectively, indicating that the risk of developing hyperuricemia rose with the number of cumulative days of night work. A Japanese cohort study revealed that shift work is independently related to elevated serum uric acid in males [33]. In another study, the risk of hyperuricemia development was 1.41 times higher in steelworkers who worked shifts compared to that of those who didn't [34]. The present study echoes with these results. Long-term shift work disrupts physiological functions and the body's circadian rhythm. At the same time, the biological clock is disturbed, thus impairing uric acid metabolism. Studies suggest that the culprit could be oxidative stress caused by disrupted circadian rhythms [35]. Heat is among the main occupational hazards for workers in steel companies, and high workplace temperatures can place a burden of disease on occupational groups [36, 37]. Over 50% of the 641 sample pairs in this study were exposed to heat. The current study showed that heat exposure raised the risk of hyperuricemia development with an OR of 1.58. The risk of developing hyperuricemia was increased in both groups with the cumulative exposure to heat of 0–567.83 and ≥ 567.83 (°C/year) compared to the reference group, respectively. The ORs were 1.50 and 1.64, respectively. Lin QY et al. 
examined the factors impacting the prevalence of chronic diseases among heat-exposed workers in a port terminal, and the results revealed that the longer the duration of heat-exposed work, the higher the risk of hyperuricemia (P < 0.01), which is similar to the results of this study [38]. The mechanisms involved are hypothesized: firstly, under the hot working conditions, most of the water in the body is excreted in sweat, thus the urinary excretion is significantly reduced and uric acid accumulates. Secondly, the concentration of lactic acid in workers' bodies rises under high-temperature working conditions. Lactic acid competitively inhibits the excretion of uric acid. The competitive inhibition affects uric acid excretion and the concentration of uric acid increases in the blood. Furthermore, research has demonstrated that exposure to heat might lead to kidney damage [11]. Kidney damage can affect the normal excretion of uric acid, leading to the accumulation of uric acid and, ultimately, hyperuricemia. Dust is generated at every stage of the steel production process, and exposure to dust is one of the major occupational hazards for steelworkers. This study showed that dust exposure elevated the risk of developing hyperuricemia in comparison to no exposure to dust with an OR of 1.34. The risk of hyperuricemia development was increased in the group with cumulative dust exposure ≥ 30.02 (mg/m3/year) compared to the reference group. The OR (95% CI) was 1.56 (1.05–2.32). When dust is inhaled by the body, it not only accumulates in the lungs, but also enters the circulation through the blood barrier and damages other organs [39]. This has been confirmed by many animal tests [40, 41]. It is, therefore, speculated that the increased risk of hyperuricemia development from dust exposure may be owing to kidney damage, which affects the normal metabolism of purines and the normal excretion of uric acid, leading to an elevation in uric acid levels in the blood. There are fewer studies on the association between dust and hyperuricemia, and more research is needed to clarify the exact mechanisms. The present study showed no statistically significant association between noise exposure and hyperuricemia, which is consistent with the findings of the Zhang SK study [42]. However, it has also been noted that hyperuricemia is associated with noise exposure in the work environment [43]. Noise-induced psychological stress may affect purine metabolism and uric acid excretion through neuroendocrine regulation [44]. Therefore, the effect of noise exposure on hyperuricemia requires additional investigation. Little research has clarified the interplay of occupational hazards in prior studies on factors affecting hyperuricemia. In this study, the interaction analysis revealed a multiplicative interaction between heat exposure and dust exposure in the development of hyperuricemia. Exposure to both heat and dust significantly increased the risk of hyperuricemia development, which proves the combined effect of some occupational hazards on physical health. This indicates that reducing workers' heat exposure can lower the risk of hyperuricemia in steelworkers exposed to dust so as to protect their health. The main strengths of our study lie in a precise calculation of cumulative exposure to occupational hazards and a comprehensive range of potential confounders, which enables a more accurate study on the effect of occupational exposures on hyperuricemia. However, there are some limitations in our study.
There was a 12.74% missed visit rate throughout the study, which may have been subject to missed visit bias. High-temperature and dusty weathers may affect the results of this study, but they weren't taken into account in this study. Moreover, this study is only based on a sample of workers from a steel company in one region. Due to the uniqueness of the occupational environment, the sample size of female workers was small and the representation of the study population was limited. Therefore, a multi-regional and large sample of the target population is required for validation. Shift work, heat, and dust are three independent factors that put steelworkers at risk for hyperuricemia, and exposure to both heat and dust increases the risk of hyperuricemia development. Therefore, close monitoring of the aforementioned factors and early intervention are required to lower the prevalence of hyperuricemia and improve the health of steelworkers. Data are available upon reasonable request. The datasets generated and analyzed during the current study are not publicly available due other analyses are proceeding but are available from the corresponding author on reasonable request. HUA: 95%CI : 95% Confident limit RERI: Relative excess risk of interaction Attributable proportion of interaction Synergy index Yanai H, Adachi H, Hakoshima M, et al. Molecular biological and clinical understanding of the pathophysiology and treatments of hyperuricemia and its association with metabolic syndrome, cardiovascular diseases and chronic kidney disease. Int J Mol Sci. 2021;22(17):9221. Park JH, Jo YI, Lee JH. Renal effects of uric acid: hyperuricemia and hypouricemia. Korean J Intern Med. 2020;35(6):1291–304. Xia Q, Zhang SH, Yang SM, et al. Serum uric acid is independently associated with diabetic nephropathy but not diabetic retinopathy in patients with type 2 diabetes. J Chin Med Assoc. 2020;83(4):350–6. Rocha E, Vogel M, Stanik J, et al. Serum uric acid levels as an indicator for metabolically unhealthy obesity in children and adolescents. Horm Res Paediatr. 2018;90(1):19–27. Zuo T, Liu X, Jiang L, et al. Hyperuricemia and coronary heart disease mortality: a meta-analysis of prospective cohort studies. BMC Cardiovasc Disord. 2016;16(1):207. Ndrepepa G. Uric acid and cardiovascular disease. Clin Chim Acta. 2018;484:150–63. Cicero A, Fogacci F, Giovannini M, et al. Serum uric acid predicts incident metabolic syndrome in the elderly in an analysis of the Brisighella Heart Study. Sci Rep. 2018;8(1):11529. Liu H, Zhang XM, Wang YL, et al. Prevalence of hyperuricemia among Chinese adults: a national cross-sectional survey using multistage, stratified sampling. J Nephrol. 2014;27(6):653–8. Liu R, Han C, Wu D, et al. Prevalence of hyperuricemia and gout in Mainland China from 2000 to 2014: a systematic review and meta-analysis. Biomed Res Int. 2015;2015:762820. Chen XM, Yokose C, Rai SK, et al. Contemporary prevalence of gout and hyperuricemia in the united states and decadal trends: the national health and nutrition examination survey, 2007–2016. Arthritis Rheumatol. 2019;71(6):991–9. Schlader ZJ, Hostler D, Parker MD, et al. The potential for renal injury elicited by physical work in the heat. Nutrients. 2019;11(9):2087. Roncal-Jimenez CA, Sato Y, Milagres T, et al. Experimental heat stress nephropathy and liver injury are improved by allopurinol. Am J Physiol Renal Physiol. 2018;315(3):F726–33. Sigurdardottir V, Jacobsson L, Schiöler L, et al. 
Occupational exposure to inorganic dust and risk of gout: a population-based study. RMD Open. 2020;6(2):e001178. Nosov AE, Baĭdina AS, Ustinova Olu. Features of early stages of cardiovascular continuum in workers engaged in oil-extracting enterprise. Med Tr Prom Ekol. 2013;11:32–6. Li X, Xu X, Song Y, et al. An association between cumulative exposure to light at night and the prevalence of hyperuricemia in steel workers. Int J Occup Med Environ Health. 2021;34(3):385–401. Cui L, Meng L, Wang G, et al. Prevalence and risk factors of hyperuricemia: results of the Kailuan cohort study. Mod Rheumatol. 2017;27(6):1066–71. Papk DY, Kim YS, Ryu SH, et al. The association between sedentary behavior, physical activity and hyperuricemia. Vasc Health Risk Manag. 2019;15:291–9. Rivera-Paredez B, Macías-Kauffer L, Fernandez-Lopez JC, et al. Influence of genetic and non-genetic risk factors for serum uric acid levels and hyperuricemia in Mexicans. Nutrients. 2019;11(6):1336. He H, Pan L, Ren X, et al. The effect of body weight and alcohol consumption on hyperuricemia and their population attributable fractions: a national health survey in China. Obes Facts. 2022;15(2):216–27. Ministry of Health of the People's Republic of China. GBZT189.7–2007-Measurement of physical factors in the workplace Part 7: High temperature. Beijing: China Standards Press; 2007. Ministry of Health of the People's Republic of China. GBZ/T 192.1–2007-Determination of dust in workplace air, Part 1: Total dust concentration. Beijing: China Standard Press; 2007. Ministry of Health of the People's Republic of China. GBZT189.8–2007-Measurement of physical factors in the workplace Part 8: Noise. Beijing: China Standards Press; 2007. Ministry of Health of the People's Republic of China. GBZ 2.2–2007 - Limits of exposure to harmful factors in the workplace Part II: Physical factors. Beijing: China Standards Press; 2007. Lee JW, Kwon BC, Choi HG. Analyzes of the relationship between hyperuricemia and osteoporosis. Sci Rep. 2021;11(1):12080. Lou X, He Q. Validity and reliability of the international physical activity questionnaire in Chinese hemodialysis patients: a multicenter study in China. Med Sci Monit. 2019;25:9402–8. Hansen AB, Stayner L, Hansen J, et al. Night shift work and incidence of diabetes in the Danish Nurse Cohort. Occup Environ Med. 2016;73(4):262–8. Ministry of Health of the People's Republic of China. GBZ/T 229.4-2012 - Classification of Occupational Hazards at Workplaces. Part 4: Occupational Exposure to Noise. Beijing: China Standards Press; 2012. Andersson T, Alfredsson L, Källberg H, et al. Calculating measures of biological interaction. Eur J Epidemiol. 2005;20(7):575–9. Goldberg EL, Asher JL, Molony RD, et al. Beta-hydroxybutyrate deactivates neutrophil NLRP3 inflammasome to relieve gout flares. Cell Rep. 2017;18(9):2077–87. Johnson RJ, Bakris GL, Borghi C, et al. Hyperuricemia, acute and chronic kidney disease, hypertension, and cardiovascular disease: report of a scientific workshop organized by the national kidney foundation. Am J Kidney Dis. 2018;71(6):851–65. Bonakdaran S, Kharaqani B. Association of serum uric acid and metabolic syndrome in type 2 diabetes. Curr Diabetes Rev. 2014;10(2):113–7. Alterman T, Luckhaupt SE, Dahlhamer JM, et al. Prevalence rates of work organization characteristics among workers in the U.S.: data from the 2010 National Health Interview Survey. Am J Ind Med. 2013;56(6):647–59. Uetani M, Suwazono Y, Kobayashi E, et al. 
A longitudinal study of the influence of shift work on serum uric acid levels in workers at a telecommunications company. Occup Med (Lond). 2006;56(2):83–8. Oh JS, Choi WJ, Lee MK, et al. The association between shift work and hyperuricemia in steelmaking male workers. Ann Occup Environ Med. 2014;26(1):42. Ozdemir PG, Selvi Y, Ozkol H, et al. The influence of shift work on cognitive functions and oxidative stress. Psychiatry Res. 2013;210(3):1219–25. Xiang J, Bi P, Pisaniello D, et al. Health impacts of workplace heat exposure: an epidemiological review. Ind Health. 2014;52(2):91–101. Han SR, Wei M, Wu Z, et al. Perceptions of workplace heat exposure and adaption behaviors among Chinese construction workers in the context of climate change. BMC Public Health. 2021;21(1):2160. Lin QY, Qiu CX, Li YR, et al. Analysis of factors influencing the prevalence of chronic diseases among workers working in high temperature at a port terminal in Guangzhou. Chin Occup Med. 2018;45(03):329–34. Ahmad R, Akhter QS, Haque M. Occupational cement dust exposure and inflammatory nemesis: Bangladesh relevance. J Inflamm Res. 2021;14:2425–44. Krajnak K, Kan H, Russ KA, et al. Biological effects of inhaled hydraulic fracturing sand dust. VI. Cardiovascular effects. Toxicol Appl Pharmacol. 2020;406:115242. Yoshida S, Hiyoshi K, Ichinose T, et al. Aggravating effect of natural sand dust on male reproductive function in mice. Reprod Med Biol. 2009;8(4):151–6. Zhang SK. Association of shift work and rhythm-related gene polymorphisms with HUA in steelworkers. North China University of Technology; 2019. https://kns.cnki.net/KCMS/detail/detail.aspx?dbname=CMFD202201&filename=1019669720.nh. Gong DY, Ya RJ, Xie SP, et al. Risk factors for hyperuricemia in helicopter pilots and their intervention measures. J PLA Med. 2021;46(02):156–62. Turner-cobb JM. Psychological and stress hormone correlates in early life: a key to HPA-axis dysregulation and normalization. Stress. 2005;8(1):47–57. We would like to thank the north China university of science and technology for providing a software and hardware platform and financial support to ensure the smooth progress of this research. We would also like to thank our teachers and classmates for their help and warmth in the research process. Funded by Science and Technology Project of Hebei Education Department (No. JYG2019002). School of Public Health, Caofeidian New Town, North China University of Science and Technology, No.21 Bohai Avenue, Tangshan City, Hebei Province, 063210, People's Republic of China Yuanyu Chen, Yongzhong Yang, Ziwei Zheng, Hui Wang, Xuelin Wang, Zhikang Si, Rui Meng, Guoli Wang & Jianhui Wu Hebei Province Key Laboratory of Occupational Health and Safety for Coal Industry, North China University of Science and Technology, Tangshan, Hebei, People's Republic of China Guoli Wang & Jianhui Wu Yuanyu Chen Yongzhong Yang Ziwei Zheng Hui Wang Xuelin Wang Zhikang Si Rui Meng Guoli Wang Jianhui Wu Design research, Y.Y.C. and J.H.W.; Methodology, Y.Z.Y., Z.W.Z. and H.W.; Project administration, X.L.W., Z.K.Z. and R.M.; Software, Y.Y.C. and Y.Z.Y.; Validation, J.H.W. and G.L.W.; Writing original draft, Y.Y.C.; Writing review, Y.Y.C. and J.H.W. All authors responded to the modification of the study protocol and approved the final manuscript. Correspondence to Jianhui Wu. 
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The study was approved by the Ethics Committee of North China University of Science and Technology (NO.15006). All individuals in the study signed a paper version of the informed consent. Chen, Y., Yang, Y., Zheng, Z. et al. Influence of occupational exposure on hyperuricemia in steelworkers: a nested case–control study. BMC Public Health 22, 1508 (2022). https://doi.org/10.1186/s12889-022-13935-x Occupational hazards Steelworkers Nested case–control study
CommonCrawl
\begin{definition}[Definition:Algebraic Number Field] An '''algebraic number field''' is a finite extension of the field of rational numbers $\Q$. \end{definition}
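A standard example of this definition (our addition, not part of the ProofWiki entry, written in the same raw LaTeX style) would be:

\begin{example}
The field $\Q \left({\sqrt 2}\right) = \left\{ {a + b \sqrt 2: a, b \in \Q} \right\}$ is an algebraic number field, since it is a field extension of $\Q$ of finite degree $\left[{\Q \left({\sqrt 2}\right) : \Q}\right] = 2$.
\end{example}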
ProofWiki
\begin{document} \title{Variants of the A-HPE and large-step A-HPE algorithms for strongly convex problems with applications to accelerated high-order tensor methods} \author{ M. Marques Alves \thanks{ Departamento de Matem\'atica, Universidade Federal de Santa Catarina, Florian\'opolis, Brazil, 88040-900 ({\tt [email protected]}). The work of this author was partially supported by CNPq grants no. 304692/2017-4. } } \maketitle \begin{abstract} For solving strongly convex optimization problems, we propose and study the global convergence of variants of the accelerated hybrid proximal extragradient (A-HPE) and large-step A-HPE algorithms of Monteiro and Svaiter~\cite{mon.sva-acc.siam13}. We prove \emph{linear} and the \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global rates for the proposed variants of the A-HPE and large-step A-HPE methods, respectively. The parameter $p\geq 2$ appears in the (high-order) large-step condition of the new large-step A-HPE algorithm. We apply our results to high-order tensor methods, obtaining a new inexact (relative-error) tensor method for (smooth) strongly convex optimization with iteration-complexity $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$. In particular, for $p=2$, we obtain an inexact proximal-Newton algorithm with fast global $\mathcal{O}\left(k^{\,-k/3}\right)$ convergence rate. \\ \\ 2000 Mathematics Subject Classification: 90C60, 90C25, 47H05, 65K10. \\ \\ Key words: Convex optimization, strongly convex, accelerated methods, proximal-point algorithm, large-step, high-order tensor methods, superlinear convergence, proximal-Newton method. \end{abstract} \pagestyle{plain} \section{Introduction} \label{sec:int} The \emph{proximal-point method}~\cite{martinet70,roc-mon.sjco76} is one of the most popular algorithms for solving nonsmooth convex optimization problems. For the general problem of minimizing a convex function $h(\cdot)$, its \emph{exact} version can be described by the iteration \begin{align} \label{eq:pp_intr} x^{k+1}=\mbox{arg}\min_{x}\,\left\{h(x)+\dfrac{1}{2\lambda}\norm{x-x^k}^2\right\},\qquad k\geq 0, \end{align} where $\lambda=\lambda_{k+1}>0$ and $x^k$ is the current iterate. Motivated by the fact that in many cases the computation of $x^{k+1}$ is numerically expensive, several authors have proposed \emph{inexact} versions of \eqref{eq:pp_intr}. Among them, inexact proximal-point methods based on \emph{relative-error} criterion for the subproblems are currently quite popular. For the more abstract setting of solving inclusions for maximal monotone operators, this approach was initially developed by Solodov and Svaiter (see, e.g., \cite{sol.sva-hpe.svva99,sol.sva-hyb.jca99, sol.sva-breg.mor00,sol.sva-uni.nfao01}), subsequently studied, from the viewpoint of computational complexity, by Monteiro and Svaiter (see, e.g., \cite{mon.sva-tse.siam10,mon.sva-hpe.siam10, mon.sva-newton.siam12,mon.sva-acc.siam13}) and has gained a lot of attention by different authors and research groups (see, e.g., \cite{att.alv.sva-dyn.jca16,bach-preprint21,eck.yao-rel.mp18,zhang-preprint20,jordan-controlpreprint20}) with many applications in optimization algorithms and related topics such as variational inequalities, saddle-point problems, etc. 
The starting point of this contribution is \cite{mon.sva-acc.siam13}, where the relative-error inexact hybrid proximal extragradient (HPE) method~\cite{mon.sva-hpe.siam10,sol.sva-hpe.svva99} was accelerated for convex optimization, by using Nesterov's acceleration~\cite{nes-book}. The resulting accelerated HPE-type algorithms, called A-HPE and large-step A-HPE, were applied to first- and second-order optimization, with iteration-complexities $\mathcal{O}\left(1/k^2\right)$ and $\mathcal{O}\left(1/k^{7/2}\right)$, respectively. The A-HPE and/or the large-step A-HPE algorithms were recently studied also in \cite{arjevani-oracle.mp19,bach-preprint21,bubeck-near.pmlr19,gasnikov-optimal.pmlr19,zhang-preprint20,jordan-controlpreprint20}, with applications in high-order optimization, machine learning and tensor methods. In this paper, we consider the (unconstrained) convex optimization problem \begin{align} \label{eq:co} \min_{x}\,\left\{h(x):=f(x)+g(x)\right\}, \end{align} where $f$ is convex and $g$ is \emph{strongly convex}. For solving \eqref{eq:co}, we propose and study the convergence rates of variants of the A-HPE and large-step A-HPE algorithms. The new algorithms are designed especially for strongly convex problems, and the resulting global convergence rates are \emph{linear} and $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ for the variants of the A-HPE and large-step A-HPE, respectively. (the parameter $p\geq 2$ appears in the high-order large-step condition (see \cite{zhang-preprint20,jordan-controlpreprint20}.) We also apply our study to tensor algorithms for high-order convex optimization, a topic which has been the object of investigation of several authors (see, e.g., \cite{bubeck-near.pmlr19,doikov-local.preprint19,gra.nes-ten.oms20,zhang-preprint20,jordan-controlpreprint20,nes-preprint20b,nes-preprint20a} and references therein). The proposed inexact (relative-error) $p$-th order tensor algorithm has global \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ convergence rate. We also mention that, for $p=2$ we obtain, as a by-product of our approach to high-order optimization, a fast $\mathcal{O}\left(k^{-k/3}\right)$ proximal-Newton method for strongly convex optimization. The main contributions of this paper can be summarized as follows: \begin{itemize} \item[(i)] A variant of the A-HPE algorithm for strongly convex objectives (Algorithm \ref{alg:main}) and its iteration-complexity analysis as in Theorems \ref{th:main_comp} and \ref{th:main_comp03}. \item[(ii)] A large-step A-HPE-type algorithm for strongly convex problems (Algorithm \ref{alg:ls}) with a high-order large-step condition and its iteration-complexity (see Theorem \ref{th:main_comp02}). \item[(iii)] A new inexact high-order tensor algorithm (Algorithm \ref{alg:second}) for strongly convex problems and its global convergence analysis (see Theorem \ref{th:main_tensor}). Here and in item (ii) above we highlight the fast global convergence rate $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$. \item[(iv)] An inexact relative-error forward-backward algorithm for strongly convex optimization (see Algorithm \ref{alg:first01} and Theorem \ref{th:first01}). \end{itemize} Additionally to the contributions described in (i)--(iv) above, we refer the reader to the remarks/comments following Algorithms \ref{alg:main}, \ref{alg:ls}, \ref{alg:second} and \ref{alg:first01}. 
\noindent {\bf Some previous contributions.} The A-HPE and forward-backward methods for strongly convex problems were also recently studied in \cite{bach-preprint21}. Based on the A-HPE framework, $p$th-order tensor methods with iteration-complexity $\mathcal{O}\left(1/k^{\frac{3p+1}{2}}\right)$ were studied in \cite{arjevani-oracle.mp19,bubeck-near.pmlr19,gasnikov-optimal.pmlr19,zhang-preprint20,jordan-controlpreprint20}. When combined with restart techniques, improved rates for the uniformly- and/or strongly convex case were also obtained in \cite{arjevani-oracle.mp19,gasnikov-optimal.pmlr19} (see also \cite{kornowski-high.preprint20}). We also mention that local superlinear convergence rates for tensor methods were obtained in \cite{doikov-local.preprint19}. Tensor and/or second-order schemes were also recently studied in \cite{doi.nes-min.jota21,dvurechensky-near.preprint19,gra.nes-acc.siam19,gra.nes-ten.oms20}. \noindent {\bf General notation.} We denote by $\mathcal{H}$ a finite-dimensional real vector space with inner product $\inner{\cdot}{\cdot}$ and induced norm $\norm{\cdot}=\sqrt{\inner{\cdot}{\cdot}}$. The \emph{$\varepsilon$-subdifferential} and the \emph{subdifferential} of a convex function $g:\mathcal{H}\to (-\infty,\infty]$ at $x\in \mathcal{H}$ are defined as $\partial_\varepsilon g(x) := \{u\in \mathcal{H}\;|\; g(y)\geq g(x)+\inner{u}{y-x}-\varepsilon\quad \forall y\in \mathcal{H}\}$ and $\partial g(x) := \partial_0 g(x)$, respectively. For additional details on standard notations and definitions of convex analysis we refer the reader to the reference~\cite{rock-ca.book}. Recall that $g:\mathcal{H}\to (-\infty,\infty]$ is $\mu$-strongly convex if $\mu>0$ and, for all $x,y\in \mathcal{H}$, \begin{align} \label{eq:def.str} g(\lambda x+(1-\lambda)y)\leq \lambda g(x)+(1-\lambda)g(y)- \dfrac{1}{2} \mu\lambda(1-\lambda) \norm{x-y}^2,\qquad \forall \lambda\in [0,1]. \end{align} \section{A variant of the A-HPE algorithm for strongly convex problems} \label{sec:alg} In this section, we consider the convex optimization problem \eqref{eq:co}, i.e., \begin{align*} \min_{x\in \mathcal{H}}\,\{h(x):=f(x)+g(x)\}, \end{align*} where $f,g:\mathcal{H}\to (-\infty, \infty]$ are proper, closed and convex functions, $\mbox{dom}\,h\neq\emptyset$, and $g$ is {\it $\mu$-strongly convex}, for some $\mu>0$. We will denote by $x^*$ the unique solution of \eqref{eq:co}. Next we present the main algorithm of this section for solving \eqref{eq:co}, whose the complexity analysis will be presented in Theorems \ref{th:main_comp} and \ref{th:main_comp03}. \noindent \fbox{ \begin{minipage}[h]{6.6 in} \begin{algorithm} \label{alg:main} {\bf A variant of the A-HPE algorithm for solving the (strongly convex) problem \eqref{eq:co}} \end{algorithm} \begin{itemize} \item [0)] Choose $x^0,y^0\in \mathcal{H}$, $\sigma\in [0,1]$, let $A_0=0$ and set $k=0$. 
\item [1)] Compute $\lambda_{k+1}>0$ and $(y^{k+1},v^{k+1},\varepsilon_{k+1})\in \mathcal{H}\times \mathcal{H}\times \mathbb{R}_{++}$ such that \begin{align} \begin{aligned} \label{eq:alg_err} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+ \partial g(y^{k+1}),\\[3mm] &\dfrac{\norm{\lambda_{k+1}v^{k+1}+y^{k+1}-\widetilde x^k}^2}{1+\lambda_{k+1}\,\mu} +2\lambda_{k+1}\varepsilon_{k+1} \leq \sigma^2\norm{y^{k+1}-\widetilde x^k}^2, \end{aligned} \end{align} where \begin{align} \label{eq:alg_xtil} & \widetilde x^k = \left(\dfrac{a_{k+1}-\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)x^k + \left(\dfrac{A_k+\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)y^k,\\[3mm] \label{eq:alg_a} & a_{k+1}=\dfrac{(1+2\mu A_k)\lambda_{k+1}+\sqrt{(1+2\mu A_k)^2\lambda_{k+1}^2 +4(1+\mu A_k)A_k\lambda_{k+1}}}{2}. \end{align} \item[2)] Let \begin{align} \label{eq:alg_A} &A_{k+1} = A_k + a_{k+1},\\[2mm] \label{eq:alg_xne} & x^{k+1} = \left(\dfrac{1+\mu A_k}{1+\mu A_{k+1}}\right)x^k + \left(\dfrac{\mu a_{k+1}}{1+\mu A_{k+1}}\right) y^{k+1} - \left(\dfrac{a_{k+1}}{1+\mu A_{k+1}}\right)v^{k+1}. \end{align} \item[3)] Set $k=k+1$ and go to step 1. \end{itemize} \noindent \end{minipage} } Next we make some remarks about Algorithm \ref{alg:main}: \begin{itemize} \item[(i)] By letting $\mu=0$ in Algorithm \ref{alg:main}, we obtain a special instance of the A-HPE algorithm of Monteiro and Svaiter (see \cite[Section 3]{mon.sva-acc.siam13}), whose global convergence rate is $\mathcal{O}\left(1/k^2\right)$ (see \cite[Theorem 3.8]{mon.sva-acc.siam13}). On the other hand, thanks to the strong-convexity assumption on $g$, in Theorems \ref{th:main_comp} and \ref{th:main_comp03} we obtain \emph{linear convergence} for Algorithm \ref{alg:main}. We will also study a \emph{high-order} large-step version of Algorithm \ref{alg:main} (see Algorithm \ref{alg:ls} in Section \ref{sec:ls}), for which \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ global convergence rates are proved, where $p\geq 2$. Applications of the latter result to high-order tensor methods for convex optimization will also be discussed in Section \ref{sec:ls}. \item[(ii)] Since the cost of computing $\widetilde x^k, a_{k+1}, A_{k+1}$ and $x^{k+1}$ as in \eqref{eq:alg_xtil}--\eqref{eq:alg_xne} is negligible (from a computational viewpoint), it follows that the computational burden of Algorithm \ref{alg:main} is represented by the computation of $\lambda_{k+1}>0$ and $(y^{k+1}, v^{k+1},\varepsilon_{k+1})$ as in \eqref{eq:alg_err}. In this regard, note that if $\mbox{prox}_{\lambda h}:=(\lambda \partial h+I)^{-1}$ of $h$ is computable, for $\lambda>0$, then $\lambda_{k+1}:=\lambda$ and $(y^{k+1}, v^{k+1},\varepsilon_{k+1}):=\left(\mbox{prox}_{\lambda h}(\widetilde x^k),\frac{\widetilde x^k-y^{k+1}}{\lambda_{k+1}},0\right)$ clearly satisfy the conditions in \eqref{eq:alg_err} with $\sigma=0$. On the other hand, in the more general setting of $\sigma>0$, Algorithm \ref{alg:main} can be used both as a framework for the design and analysis of practical algorithms \cite{mon.sva-acc.siam13} and as a \emph{bilevel} method, in which the inequality in \eqref{eq:alg_err} is used as a stopping criterion for some \emph{inner} algorithm applied to the regularized inclusion $0\in \lambda \partial h(x)+x-\widetilde x^k$. In this case, note that the \emph{error-criterion} in \eqref{eq:alg_err} is \emph{relative} and controlled by the parameter $\sigma\in (0,1]$. 
\item[(iii)] We emphasize that the inequality in \eqref{eq:alg_err} is specially tailored for strongly convex problems, in the sense that it is more general than the usual inequality appearing in relative-error HPE-type methods (see, e.g., \cite{alves-reg.siam16,eck.yao-rel.mp18,mon.sva-hpe.siam10,mon.sva-newton.siam12,sol.sva-hpe.svva99}), which in the context of this paper would read as \[ \norm{\lambda_{k+1} v^{k+1}+y^{k+1}-\widetilde x^k}^2+2\lambda_{k+1}\varepsilon_{k+1} \leq\sigma^2\norm{y^{k+1}-\widetilde x^k}^2. \] \item[(iv)] We also mention that Algorithm \ref{alg:main} is closely related to a variant of the A-HPE for strongly convex objectives presented and studied in \cite[Section 5]{bach-preprint21}. In this paper, by taking an approach similar to the one which was considered in~\cite{mon.sva-acc.siam13,nes-smo.mp05}, we obtain global convergence rates for Algorithm \ref{alg:main} in terms of \emph{function values}, \emph{sequences} and \emph{(sub-)gradients} (see Theorems \ref{th:main_comp} and \ref{th:main_comp03} below). In contrast to \cite{bach-preprint21}, in this paper we also consider a \emph{large-step} version of Algorithm \ref{alg:main}, namely Algorithm \ref{alg:ls}, for which the (global) superlinear $\mathcal{O}\left(k^{-k\,\left(\frac{p-1}{p+1}\right)}\right)$ convergence rate is proved (see Theorems \ref{th:main_comp02} and \ref{th:main_tensor} below). \item[(v)] We note that condition \eqref{eq:alg_a} yields \begin{align} \label{eq:cond.lamb} \dfrac{(1+\mu A_k)A_{k+1}\lambda_{k+1}}{a_{k+1}^2} +\dfrac{\mu A_k\lambda_{k+1}}{a_{k+1}}=1. \end{align} Indeed, substitution of $A_{k+1}$ by $A_k+a_{k+1}$ (see \eqref{eq:alg_A}) and some simple algebra give that \eqref{eq:cond.lamb} is equivalent to \begin{align} \label{eq:cond.lamb2} a_{k+1}^2-(1+2\mu A_k)\lambda_{k+1}a_{k+1}-(1+\mu A_k)A_k\lambda_{k+1}=0. \end{align} Note now that $a_{k+1}$ as in \eqref{eq:alg_a} is exactly the largest root of the quadratic equation in \eqref{eq:cond.lamb2}. \item[(vi)] Using \eqref{eq:alg_A} and the fact that $A_0=0$ (see step 0) we obtain $A_1=A_0+a_1=a_1$. On the other hand, direct substitution of $A_0=0$ in \eqref{eq:alg_a} with $k=0$ yields $a_1=\lambda_1$. As a consequence, we conclude that \begin{align} \label{eq:A1a1} A_1=a_1=\lambda_1. \end{align} \end{itemize} In what follows in this section, we will analyze convergence rates of Algorithm \ref{alg:main}. To this end, we first define $\gamma_k(\cdot)$ and $\Gamma_k(\cdot)$ as, for all $x\in \mathcal{H}$, \begin{align} \label{eq:def.gammak} \gamma_k(x)=h(y^k)+\inner{v^k}{x-y^k}-\varepsilon_k+\dfrac{\mu}{2}\norm{x-y^k}^2\qquad (k\geq 1) \end{align} and \begin{align} \label{eq:def.Gammak} \Gamma_0(x)=0\;\;\mbox{and},\; \mbox{for}\;k\geq 1,\;\;\Gamma_k(x)=\sum_{j=1}^k\,\dfrac{a_j}{A_k}\gamma_j(x). \end{align} Note that \begin{align} \label{eq:der.gamma} \nabla \gamma_k(x)=v^k+\mu(x-y^k)\;\;\mbox{and}\;\; \nabla^2\gamma_k(x)=\mu I \end{align} and observe that $A_k$ ($k=0,1,\dots$) as in Algorithm \ref{alg:main} satisfies \begin{align} \label{eq:def.ak} A_0=0\;\;\mbox{and},\;\mbox{for}\;k\geq 1,\;\; A_{k}=\sum_{j=1}^k\,a_j. \end{align} From \eqref{eq:def.Gammak}--\eqref{eq:def.ak} we obtain, for $k\geq 1$, \begin{align} \label{eq:der.Gamma} \nabla^2\Gamma_k(x)=\mu I,\qquad x\in \mathcal{H}. \end{align} Note also that the following holds trivially from \eqref{eq:def.Gammak} and \eqref{eq:def.ak}: for all $k\geq 0$, \begin{align} \label{eq:rec.ak} &A_{k+1}\Gamma_{k+1} = A_k\Gamma_k+a_{k+1}\gamma_{k+1}. 
\end{align} Define also, for all $k\geq 0$, \begin{align} \label{eq:def.betak} &\beta_k=\inf_{x\in \mathcal{H}}\left\{A_k\Gamma_k(x)+\dfrac{1}{2}\norm{x-x^0}^2\right\}. \end{align} Note that $\beta_0=0$. The following three technical lemmas will be useful to prove the first result on the iteration-complexity of Algorithm \ref{alg:main}, namely Proposition \ref{pr:cambio} below. \begin{lemma} \label{lm:initial} Let $\gamma_k(\cdot)$ and $\Gamma_k(\cdot)$ be as in \eqref{eq:def.gammak} and \eqref{eq:def.Gammak}, respectively. The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$, we have $\gamma_k(x)\leq h(x),\quad \forall x\in \mathcal{H}$. \item[\emph{(b)}] For all $k\geq 0$, we have $x^k=\arg\min_{x\in \mathcal{H}}\{A_k\Gamma_k(x)+\frac{1}{2}\norm{x-x^0}^2\}$. \end{itemize} \end{lemma} \begin{proof} (a) In view of the inclusion in \eqref{eq:alg_err} we have, for all $k\geq 1$, $v^k=r^k+s^k$, where $r^k\in \partial_{\varepsilon_k} f(y^k)$ and $s^k\in \partial g(y^k)$. Using the assumption that $g$ is $\mu$-strongly convex and the definition of the $\varepsilon$-subdifferential of $f$ we obtain, for all $x\in \mathcal{H}$, \begin{align*} & f(x)\geq f(y^k)+\inner{r^k}{x-y^k}-\varepsilon_k,\\ & g(x)\geq g(y^k)+\inner{s^k}{x-y^k}+\dfrac{\mu}{2}\norm{x-y^k}^2, \end{align*} which in turn combined with the definition of $h(\cdot)$ in \eqref{eq:co}, the fact that $v^k=r^k+s^k$ and \eqref{eq:def.gammak} yields the desired result. (b) Let us proceed by induction on $k\geq 0$. The result is trivially true for $k=0$ (since $A_0\Gamma_0=0$). Assume now that it is true for some $k\geq 0$, i.e., assume that $x^k=\arg\min_x\{A_k\Gamma_k(x)+\frac{1}{2}\norm{x-x^0}^2\}$. Using the latter identity, \eqref{eq:der.Gamma}--\eqref{eq:def.betak} and Taylor's theorem we find \begin{align} \nonumber A_{k+1}\Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2 &=A_{k}\Gamma_{k}(x)+\dfrac{1}{2}\norm{x-x^0}^2 + a_{k+1}\gamma_{k+1}(x)\\ \label{eq:kaua} &=\beta_k+\left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2+ a_{k+1}\gamma_{k+1}(x). \end{align} From the definition of $\gamma_{k+1}(\cdot)$ (see \eqref{eq:def.gammak}) and some simple calculus one can check that $x^{k+1}$ as in \eqref{eq:alg_xne} is exactly the (unique) minimizer of $x\mapsto \left(\frac{1+\mu A_k}{2}\right)\norm{x-x^k}^2+ a_{k+1}\gamma_{k+1}(x)$. Hence, from this fact and \eqref{eq:kaua} we obtain that $x^{k+1}=\arg\min_{x\in \mathcal{H}}\{A_{k+1}\Gamma_{k+1}(x)+\frac{1}{2}\norm{x-x^0}^2\}$, completing the induction argument. \end{proof} \begin{lemma} \label{lm:gauss} Consider the sequences evolved by \emph{Algorithm \ref{alg:main}}. The following holds for all $x\in \mathcal{H}$: \begin{itemize} \item[\emph{(a)}] For all $k\geq 0$, \begin{align*} A_k \Gamma_k(x)+\dfrac{1}{2}\norm{x-x^0}^2=\beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2. \end{align*} \item[\emph{(b)}] For all $k\geq 0$, \begin{align*} A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2=\beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 + a_{k+1}\gamma_{k+1}(x). \end{align*} \item[\emph{(c)}] For all $k\geq 0$, \begin{align*} A_k h(y^k)+ A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2\geq \beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 + a_{k+1}\gamma_{k+1}(x) + A_k \gamma_{k+1}(y^k). \end{align*} \end{itemize} \end{lemma} \begin{proof} (a) First note that the result is trivial for $k=0$, since $\beta_0=A_0=0$ and $\Gamma_0=0$. 
Now note that in view of \eqref{eq:der.Gamma} we obtain, for $k\geq 1$, \[ \nabla^2 \left(A_k \Gamma_k(\cdot)+\dfrac{1}{2}\norm{\cdot-x^0}^2\right)(x)= 1+\mu A_k. \] Using the latter identity, Lemma \ref{lm:initial}(b), \eqref{eq:def.betak} and Taylor's theorem we find \begin{align*} A_k \Gamma_k(x)+\dfrac{1}{2}\norm{x-x^0}^2&= \underbrace{A_k \Gamma_k(x^k)+\dfrac{1}{2}\norm{x^k-x^0}^2}_{\beta_k}+ \dfrac{1}{2}\inner{(1+\mu A_k)(x-x^k)}{x-x^k}\\ &=\beta_k+\left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2. \end{align*} (b) From \eqref{eq:rec.ak} and item (a), we obtain, for all $k\geq 0$, \begin{align*} A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2&=A_k\Gamma_k(x)+\dfrac{1}{2}\norm{x-x^0}^2 +a_{k+1}\gamma_{k+1}(x)\\ &=\beta_k+\left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2+a_{k+1}\gamma_{k+1}(x). \end{align*} (c) From (b) and Lemma \ref{lm:initial}(a) with $k=k+1$ and $x=y^k$, \begin{align*} A_k h(y^k)+A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2&=\beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 + a_{k+1}\gamma_{k+1}(x)+A_k h(y^k)\\ &\geq \beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 + a_{k+1}\gamma_{k+1}(x)+A_k \gamma_{k+1}(y^k). \end{align*} \end{proof} \begin{lemma} \label{lm:newton} Consider the sequences evolved by \emph{Algorithm \ref{alg:main}}. The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 0$ and $x\in \mathcal{H}$, \begin{align*} a_{k+1}\gamma_{k+1}(x)+A_k \gamma_{k+1}(y^k)= A_{k+1}\gamma_{k+1}(\widetilde x) + \left(\dfrac{\mu\, a_{k+1}A_k}{2 A_{k+1}}\right)\norm{x-y^k}^2, \end{align*} where \begin{align} \label{eq:def.xtilde} \widetilde x:= \dfrac{a_{k+1}}{A_{k+1}}x+\dfrac{A_k}{A_{k+1}}y^k. \end{align} \item[\emph{(b)}] For all $k\geq 0$ and $x\in \mathcal{H}$, \begin{align*} A_k h(y^k)+ A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2\geq \beta_k+ A_{k+1}\Big[\gamma_{k+1}(\widetilde x)+\Delta_k\Big], \end{align*} where, for all $k\geq 0$, $\widetilde x$ is as in \eqref{eq:def.xtilde} and \begin{align} \label{eq:def.deltak} &\Delta_k:=\left(\dfrac{(1+\mu A_k)A_{k+1}}{2a_{k+1}^2}\right)\norm{\widetilde x-z^k}^2 +\left(\dfrac{\mu A_k}{2a_{k+1}}\right)\norm{\widetilde x-y^k}^2,\\ \label{eq:def.xktilde} &z^k:= \dfrac{a_{k+1}}{A_{k+1}}x^k+\dfrac{A_k}{A_{k+1}}y^k. \end{align} \item[\emph{(c)}] For all $k\geq 0$, \begin{align} \label{eq:kaua02} \Delta_k&=\dfrac{1}{2\lambda_{k+1}}\left[\norm{\widetilde x-\widetilde x^k}^2 +\left(\dfrac{\mu(1+\mu A_k)\lambda_{k+1}^2A_k}{a_{k+1}A_{k+1}}\right)\norm{x^k-y^k}^2 \right], \end{align} where $\widetilde x$ is as in \eqref{eq:def.xtilde}. \item[\emph{(d)}] For all $k\geq 0$ and $x\in \mathcal{H}$, \begin{align*} A_k h(y^k)+ A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2\geq \beta_k+ A_{k+1}h(y^{k+1})&+\left(\dfrac{1-\sigma^2}{2}\right) \left(\dfrac{A_{k+1}}{\lambda_{k+1}}\norm{y^{k+1}-\widetilde x^k}^2\right)\\[2mm] &+\left(\dfrac{\mu(1+\mu A_k)\lambda_{k+1}A_k}{2a_{k+1}}\right)\norm{x^k-y^k}^2. \end{align*} \end{itemize} \end{lemma} \begin{proof} (a) First recall that (see \eqref{eq:def.gammak}) \begin{align} \label{eq:recall.gamma} \gamma_{k+1}(x)=\underbrace{h(y^{k+1})+ \inner{v^{k+1}}{x-y^{k+1}}-\varepsilon_{k+1}}_{\ell_{k+1}(x)}+\dfrac{\mu}{2}\norm{x-y^{k+1}}^2, \qquad \forall x\in \mathcal{H}. \end{align} Let $p=\frac{a_{k+1}}{A_{k+1}}$, $q=\frac{A_k}{A_{k+1}}$ and note that $p,q\geq 0$, $p+q=1$ and $\widetilde x=px+qy^k$. 
Since $\ell_{k+1}(\cdot)$ is affine, we find \begin{align} \nonumber \ell_{k+1}(\widetilde x)= \ell_{k+1}(px+qy^k)&=p\ell_{k+1}(x)+q\ell_{k+1}(y^k)\\ \label{eq:ee.2} &=\dfrac{1}{A_{k+1}}\left[a_{k+1}\ell_{k+1}(x)+A_k\ell_{k+1}(y^k)\right]. \end{align} On the other hand, using the well-known identity $\norm{pz+qw}^2=p\norm{z}^2+q\norm{w}^2-pq\norm{z-w}^2$, for all $z,w\in \mathcal{H}$, we also find \begin{align} \nonumber \norm{\widetilde x-y^{k+1}}^2&=\norm{p(x-y^{k+1})+q(y^k-y^{k+1})}^2\\ \nonumber &=p\norm{x-y^{k+1}}^2+q\norm{y^k-y^{k+1}}^2-pq\norm{x-y^k}^2\\ \label{eq:norm.2} &=\dfrac{1}{A_{k+1}}\left[a_{k+1}\norm{x-y^{k+1}}^2+A_k\norm{y^k-y^{k+1}}^2- \left(\dfrac{a_{k+1}A_k}{A_{k+1}}\right)\norm{x-y^k}^2\right]. \end{align} Combining \eqref{eq:recall.gamma}--\eqref{eq:norm.2}, we then obtain \begin{align*} \gamma_{k+1}(\widetilde x)&=\ell_{k+1}(\widetilde x)+\dfrac{\mu}{2}\left\|\widetilde x-y^{k+1}\right\|^2\\ &=\dfrac{1}{A_{k+1}} \left[a_{k+1}\left(\ell_{k+1}(x)+\dfrac{\mu}{2}\norm{x-y^{k+1}}^2\right) +A_k\left(\ell_{k+1}(y^k)+\dfrac{\mu}{2}\norm{y^k-y^{k+1}}^2\right) -\left(\dfrac{\mu a_{k+1}A_k}{2A_{k+1}}\right)\norm{x-y^k}^2 \right]\\ &=\dfrac{1}{A_{k+1}}\left[a_{k+1}\gamma_{k+1}(x)+A_k\gamma_{k+1}(y^k)- \left(\dfrac{\mu a_{k+1}A_k}{2A_{k+1}}\right)\norm{x-y^k}^2\right], \end{align*} which is clearly equivalent to the desired identity. (b) First note that in view of \eqref{eq:def.xtilde} and \eqref{eq:def.xktilde} we have $\widetilde x-z^k=\frac{a_{k+1}}{A_{k+1}}(x-x^k)$ and, analogously, we also have $\widetilde x-y^k=\frac{a_{k+1}}{A_{k+1}}(x-y^k)$. Hence, \begin{align} \label{eq:mud.xtilde} \norm{x-x^k}^2=\dfrac{A_{k+1}^2}{a_{k+1}^2}\norm{\widetilde x-z^k}^2 \;\;\;\mbox{and}\;\;\; \norm{x-y^k}^2=\dfrac{A_{k+1}^2}{a_{k+1}^2}\norm{\widetilde x-y^k}^2. \end{align} Using Lemma \ref{lm:gauss}(c) and item (a) we find \begin{align*} A_k h(y^k)+ A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2&\geq \beta_k+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 + a_{k+1}\gamma_{k+1}(x) + A_k \gamma_{k+1}(y^k)\\ &= \beta_k+A_{k+1}\gamma_{k+1}(\widetilde x)\\ &\hspace{1cm}+ \left(\dfrac{1+\mu A_k}{2}\right)\norm{x-x^k}^2 +\left(\dfrac{\mu\, a_{k+1}A_k}{2 A_{k+1}}\right)\norm{x-y^k}^2, \end{align*} which in turn combined with \eqref{eq:mud.xtilde} and \eqref{eq:def.deltak} finishes the proof of item (b). (c) First let $p=\frac{(1+\mu A_k)A_{k+1}\lambda_{k+1}}{a_{k+1}^2}$, $q=\dfrac{\mu A_k\lambda_{k+1}}{a_{k+1}}$ and note that $p,q\geq 0$ and, in view of \eqref{eq:cond.lamb}, $p+q=1$. From \eqref{eq:def.deltak} and the above definitions of $p$ and $q$, we obtain \begin{align} \nonumber \Delta_k&= \left(\dfrac{(1+\mu A_k)A_{k+1}}{2a_{k+1}^2}\right)\norm{\widetilde x-z^k}^2 +\left(\dfrac{\mu A_k}{2a_{k+1}}\right)\norm{\widetilde x-y^k}^2\\ \nonumber &= \dfrac{1}{2\lambda_{k+1}} \left[p\norm{\widetilde x-z^k}^2 +q\norm{\widetilde x-y^k}^2\right]\\ \label{eq:first.part} &= \dfrac{1}{2\lambda_{k+1}}\left[\norm{\widetilde x-(p z^k+qy^k)}^2+pq\norm{y^k-z^k}^2\right], \end{align} where we also used the well-known identity $p\norm{z}^2+q\norm{w}^2=\norm{pz+qw}^2+pq\norm{z-w}^2$, for $z,w\in \mathcal{H}$.
Using \eqref{eq:def.xktilde}, the definitions of $p,q$, the fact that $p+q=1$, \eqref{eq:alg_xtil} and \eqref{eq:alg_A}, and some simple computations, we find \begin{align} \nonumber p z^k+qy^k&=(1-q)\left(\dfrac{a_{k+1}}{A_{k+1}}x^k+\dfrac{A_k}{A_{k+1}}y^k\right)+q y^k\\ \nonumber &=(1-q) \dfrac{a_{k+1}}{A_{k+1}}x^k+\left(\dfrac{A_k}{A_{k+1}}+q\left(1-\dfrac{A_k}{A_{k+1}}\right)\right)y^k\\ \nonumber &=(1-q) \dfrac{a_{k+1}}{A_{k+1}}x^k+\left(\dfrac{A_k}{A_{k+1}}+q \dfrac{a_{k+1}}{A_{k+1}}\right)y^k\\ \nonumber &=\left(1-\dfrac{\mu A_k\lambda_{k+1}}{a_{k+1}}\right) \dfrac{a_{k+1}}{A_{k+1}}x^k+\left(\dfrac{A_k}{A_{k+1}}+\left(\dfrac{\mu A_k\lambda_{k+1}}{a_{k+1}}\right)\dfrac{a_{k+1}}{A_{k+1}}\right)y^k\\ \nonumber &=\left(\dfrac{a_{k+1}-\mu A_k \lambda_{k+1}}{A_{k+1}}\right)x^k+ \left(\dfrac{A_k+\mu A_k\lambda_{k+1}}{A_{k+1}}\right)y^k\\ \label{eq:ide.txk} &=\widetilde x^k. \end{align} On the other hand, using again \eqref{eq:def.xktilde} and the definitions of $p,q$, we also obtain \begin{align} \nonumber pq\norm{y^k-z^k}^2&=\left(\frac{(1+\mu A_k)A_{k+1}\lambda_{k+1}}{a_{k+1}^2}\right) \left(\dfrac{\mu A_k\lambda_{k+1}}{a_{k+1}}\right)\dfrac{a_{k+1}^2}{A_{k+1}^2}\norm{x^k-y^k}^2\\ \label{eq:part.pq} &=\left(\dfrac{\mu(1+\mu A_k)\lambda_{k+1}^2A_k}{a_{k+1}A_{k+1}}\right)\norm{x^k-y^k}^2. \end{align} The desired result now follows directly from \eqref{eq:first.part}, \eqref{eq:ide.txk} and \eqref{eq:part.pq}. (d) From items (b) and (c), \begin{align} \label{eq:broyden} \nonumber A_k h(y^k)+ A_{k+1} \Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2\geq \beta_k&+ A_{k+1}\Big[\gamma_{k+1}(\widetilde x)+\dfrac{1}{2\lambda_{k+1}} \norm{\widetilde x-\widetilde x^k}^2\Big]\\[2mm] &+\left(\dfrac{\mu(1+\mu A_k)\lambda_{k+1}A_k}{2a_{k+1}}\right)\norm{x^k-y^k}^2. \end{align} From \eqref{eq:def.gammak}, \begin{align} \nonumber \gamma_{k+1}(\widetilde x)+\dfrac{1}{2\lambda_{k+1}} \norm{\widetilde x-\widetilde x^k}^2&= h(y^{k+1})\\ \label{eq:lete} & \hspace{0.7cm}+\underbrace{\inner{v^{k+1}}{\widetilde x-y^{k+1}}+\dfrac{\mu}{2}\norm{\widetilde x-y^{k+1}}^2- \varepsilon_{k+1}+\dfrac{1}{2\lambda_{k+1}} \norm{\widetilde x-\widetilde x^k}^2}_{=:q_{k+1}(\widetilde x)}. \end{align} On the other hand, from Lemma \ref{lm:wolfe}(c) applied to $q_{k+1}(\cdot)$ and \eqref{eq:alg_err}, \begin{align*} q_{k+1}(\widetilde x)\geq \left(\dfrac{1-\sigma^2}{2\lambda_{k+1}}\right)\norm{y^{k+1}-\widetilde x^k}^2, \end{align*} which in turn combined with \eqref{eq:lete} gives \begin{align*} \gamma_{k+1}(\widetilde x)+\dfrac{1}{2\lambda_{k+1}} \norm{\widetilde x-\widetilde x^k}^2 \geq h(y^{k+1})+\left(\dfrac{1-\sigma^2}{2\lambda_{k+1}}\right)\norm{y^{k+1}-\widetilde x^k}^2. \end{align*} The desired result now follows by the substitution of the latter inequality in \eqref{eq:broyden}. \end{proof} Next is our first result on the iteration-complexity of Algorithm \ref{alg:main}. Item (b) follows trivially from item (a), which will be derived from Lemmas \ref{lm:initial}, \ref{lm:gauss} and \ref{lm:newton}. The main results on the iteration-complexity of Algorithm \ref{alg:main} will then be presented in Theorem \ref{th:main_comp} below. \begin{proposition} \label{pr:cambio} Consider the sequences evolved by \emph{Algorithm \ref{alg:main}}, let $x^*$ denote the (unique) solution of \eqref{eq:co} and let \begin{align} \label{eq:def.d0} d_0:=\norm{x^*-x^0}. 
\end{align} The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$ and $x\in \mathcal{H}$, \begin{align*} & A_{k}\left[h(y^{k})-h(x)\right]+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=1}^{k}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2\\ & \hspace{1cm} +\sum_{j=1}^{k}\,\left(\dfrac{\mu(1+\mu A_{j-1})\lambda_{j}A_{j-1}}{2a_{j}}\right)\norm{x^{j-1}-y^{j-1}}^2 + \left(\dfrac{1+\mu A_{k}}{2}\right)\norm{x-x^{k}}^2\leq \dfrac{1}{2}\norm{x-x^0}^2. \end{align*} \item[\emph{(b)}] If $\sigma<1$, for all $k\geq 1$, \begin{align} \label{eq:sting3} \sum_{j=1}^{k}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2\leq \dfrac{d_0^2}{1-\sigma^2}, \qquad \forall k\geq 1. \end{align} \end{itemize} \end{proposition} \begin{proof} (a) From Lemma \ref{lm:newton}(d) and the definition of $\beta_{k+1}$ -- see \eqref{eq:def.betak} -- we obtain, for all $k\geq 0$, \begin{align*} A_k h(y^k)+\beta_{k+1}\geq \beta_k+ A_{k+1}h(y^{k+1})&+\left(\dfrac{1-\sigma^2}{2}\right) \left(\dfrac{A_{k+1}}{\lambda_{k+1}}\norm{y^{k+1}-\widetilde x^k}^2\right)\\[2mm] &+\left(\dfrac{\mu(1+\mu A_k)\lambda_{k+1}A_k}{2a_{k+1}}\right)\norm{x^k-y^k}^2, \end{align*} and so, for all $k\geq 0$, \begin{align*} \underbrace{\sum_{j=0}^{k}\,\left[\beta_{j+1}-\beta_j\right]}_{\beta_{k+1}-\beta_0} \geq \underbrace{\sum_{j=0}^{k}\,\left[A_{j+1}h(y^{j+1})-A_jh(y^j)\right]}_{A_{k+1}h(y^{k+1})-A_0h(y^0)} &+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=0}^{k}\, \dfrac{A_{j+1}}{\lambda_{j+1}}\norm{y^{j+1}-\widetilde x^{j}}^2\\ &+\sum_{j=0}^{k}\,\left(\dfrac{\mu(1+\mu A_j)\lambda_{j+1}A_j}{2a_{j+1}}\right)\norm{x^j-y^j}^2, \end{align*} which, since $\beta_0=A_0=0$, yields, for all $k\geq 0$, \begin{align*} \beta_{k+1}\geq A_{k+1}h(y^{k+1})+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=1}^{k+1}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2 +\sum_{j=1}^{k+1}\,\left(\dfrac{\mu(1+\mu A_{j-1})\lambda_{j}A_{j-1}}{2a_{j}}\right)\norm{x^{j-1}-y^{j-1}}^2. \end{align*} By adding $\left(\frac{1+\mu A_{k+1}}{2}\right)\norm{x-x^{k+1}}^2$ in both sides of the latter inequality, we obtain, for all $k\geq 0$, \begin{align*} \beta_{k+1}+\left(\dfrac{1+\mu A_{k+1}}{2}\right)\norm{x-x^{k+1}}^2&\geq A_{k+1}h(y^{k+1})+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=1}^{k+1}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2\\ &+\sum_{j=1}^{k+1}\,\left(\dfrac{\mu(1+\mu A_{j-1})\lambda_{j}A_{j-1}}{2a_{j}}\right)\norm{x^{j-1}-y^{j-1}}^2\\ &+\left(\dfrac{1+\mu A_{k+1}}{2}\right)\norm{x-x^{k+1}}^2. \end{align*} Using Lemma \ref{lm:gauss}(a) we then find, for all $k\geq 0$, \begin{align} \label{eq:aint} \nonumber A_{k+1}\Gamma_{k+1}(x)+\dfrac{1}{2}\norm{x-x^0}^2&\geq A_{k+1}h(y^{k+1})+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=1}^{k+1}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2\\ \nonumber &+\sum_{j=1}^{k+1}\,\left(\dfrac{\mu(1+\mu A_{j-1})\lambda_{j}A_{j-1}}{2a_{j}}\right)\norm{x^{j-1}-y^{j-1}}^2\\ &+ \left(\dfrac{1+\mu A_{k+1}}{2}\right)\norm{x-x^{k+1}}^2. 
\end{align} Note now that from \eqref{eq:def.Gammak} and Lemma \ref{lm:initial}(a) we obtain, for all $k\geq 0$, \[ A_{k+1}\Gamma_{k+1}(x)=\sum_{j=1}^{k+1}\,a_j\gamma_j(x)\leq A_{k+1}h(x), \] which combined with \eqref{eq:aint} yields, for all $k\geq 1$, \begin{align*} \dfrac{1}{2}\norm{x-x^0}^2&\geq A_{k}\left[h(y^{k})-h(x)\right]+\left(\dfrac{1-\sigma^2}{2}\right)\sum_{j=1}^{k}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2\\ & \hspace{1cm} +\sum_{j=1}^{k}\,\left(\dfrac{\mu(1+\mu A_{j-1})\lambda_{j}A_{j-1}}{2a_{j}}\right)\norm{x^{j-1}-y^{j-1}}^2 + \left(\dfrac{1+\mu A_{k}}{2}\right)\norm{x-x^{k}}^2. \end{align*} (b) This follows trivially from item (a) and \eqref{eq:def.d0}. \end{proof} \begin{lemma} \label{lm:standard} For all $k\geq 0$, \begin{align} \label{eq:standard2} \left(1-\sigma\sqrt{1+\lambda_{k+1}\mu}\right)\norm{y^{k+1}-\widetilde x^k}\leq \norm{\lambda_{k+1}v^{k+1}}\leq \left(1+\sigma\sqrt{1+\lambda_{k+1}\mu}\right)\norm{y^{k+1}-\widetilde x^k}. \end{align} \end{lemma} \begin{proof} The proof follows from the inequality in \eqref{eq:alg_err}, the fact that $\varepsilon_{k+1}\geq 0$ and a simple argument based on the triangle inequality. \end{proof} Since, under mild regularity assumptions on $f$ and $g$, problem \eqref{eq:co} is equivalent to the inclusion \begin{align} \label{eq:mipfg} 0\in \partial f(x)+\partial g(x), \end{align} it is natural to attempt to evaluate the residuals produced by Algorithm \ref{alg:main} in the light of \eqref{eq:mipfg}, and this is exactly what Theorem \ref{th:main_comp}(b) is about. Note that if we set $v^{k+1}=0$ and $\varepsilon_{k+1}=0$ in \eqref{eq:jeffblack}, then it follows that $x:=y^{k+1}$ satisfies the inclusion \eqref{eq:mipfg}. As we mentioned before, Theorem \ref{th:main_comp} below is our main result on the iteration-complexity of Algorithm \ref{alg:main}. \begin{theorem}[{\bf Convergence rates for Algorithm \ref{alg:main}}] \label{th:main_comp} Consider the sequences evolved by \emph{Algorithm \ref{alg:main}}, let $x^*$ be the (unique) solution of \eqref{eq:co} and let $d_0$ be as in \eqref{eq:def.d0}. Then, the following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$, \begin{align*} h(y^{k})-h(x^*)\leq \dfrac{d_0^2}{2 A_k},\qquad \norm{x^*-y^k}^2\leq \dfrac{d_0^2}{\mu A_k},\qquad \norm{x^*-x^{k}}^2\leq \dfrac{d_0^2}{1+\mu A_k}. \end{align*} \item[\emph{(b)}] For all $k\geq 1$, \begin{align} \label{eq:jeffblack} \begin{cases} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+\partial g(y^{k+1}),\\[2mm] &\norm{v^{k+1}}^2\leq \left(\dfrac{1+\sigma\sqrt{1+\mu\lambda_{k+1}}}{6^{-1/2} \lambda_{k+1}}\right)^2 \dfrac{d_0^2}{\mu A_k},\\[6mm] &\varepsilon_{k+1}\leq \left(\dfrac{3\sigma^2}{\lambda_{k+1}}\right) \dfrac{d_0^2}{\mu A_k}. \end{cases} \end{align} \end{itemize} \end{theorem} \begin{proof} (a) Note that the bounds on $h(y^k)-h(x^*)$ and $\norm{x^*-x^k}^2$ follow directly from Proposition \ref{pr:cambio}(a) with $x=x^*$ and \eqref{eq:def.d0}. Now, since $h(\cdot)$ is $\mu$-strongly convex and $0\in \partial h(x^*)$, one can use the inequality (see, e.g., \cite[Proposition 6(c)]{roc-mon.sjco76}) $h(x)\geq h(x^*)+\frac{\mu}{2}\norm{x-x^*}^2$, for all $x\in \mathcal{H}$, with $x=y^k$ and the bound on $h(y^k)-h(x^*)$ to conclude that $\norm{y^k-x^*}^2\leq \frac{2}{\mu}\left(h(y^k)-h(x^*)\right)\leq \frac{d_0^2}{\mu A_k}$. (b) First, note that the inclusion in \eqref{eq:jeffblack} follows from the inclusion in \eqref{eq:alg_err}. 
Since we will use the second inequality in \eqref{eq:standard2} to prove the inequality for $\norm{v^{k+1}}^2$, it follows that we first have to bound the term $\norm{y^{k+1}-\widetilde x^{k}}^2$. To this end, note that from the second inequality in item (a) with $k=k+1$ and the fact that $A_{k+1}\geq A_k$, \begin{align} \label{eq:bjovi} \nonumber \norm{y^{k+1}-\widetilde x^{k}}^2&\leq 2\left(\norm{x^*-y^{k+1}}^2 +\norm{\widetilde x^{k}-x^*}^2\right)\\ & \leq 2\left(\dfrac{d_0^2}{\mu A_k}+\norm{\widetilde x^{k}-x^*}^2\right). \end{align} We now have to bound the second term in \eqref{eq:bjovi}. Since, from \eqref{eq:alg_xtil}, $\widetilde x^k$ is a convex combination of $x^k$ and $y^k$, it follows that \begin{align} \label{eq:bjovi2} \nonumber \norm{\widetilde x^{k}-x^*}^2&\leq \norm{x^*-x^k}^2+\norm{x^*-y^k}^2\\ \nonumber &\leq \dfrac{d_0^2}{1+\mu A_k} + \dfrac{d_0^2}{\mu A_k}\\ &\leq \dfrac{2d_0^2}{\mu A_k}, \end{align} where in the second inequality we used the second and third inequalities in item (a). Now using \eqref{eq:bjovi} and \eqref{eq:bjovi2}, we find \begin{align} \label{eq:bjovi3} \norm{y^{k+1}-\widetilde x^k}^2\leq 6\dfrac{d_0^2}{\mu A_k}. \end{align} To finish the proof of (b), note that using \eqref{eq:bjovi3}, we obtain the desired bounds on $\norm{v^{k+1}}^2$ and $\varepsilon_{k+1}$ as a consequence of the second inequality in \eqref{eq:standard2} and the fact that $2\lambda_{k+1}\varepsilon_{k+1}\leq \sigma^2\norm{y^{k+1}-\widetilde x^k}^2$ (see \eqref{eq:alg_err}), respectively. \end{proof} Next result is motivated by the fact that the rate of convergence of Algorithm \ref{alg:main} presented in Theorem \ref{th:main_comp} is given in terms of the sequence $\{A_k\}$. We also mention that the proof of Lemma \ref{lm:bbking} follows the same outline of an argument given in p. 13 of \cite{bach-preprint21}. \begin{lemma} \label{lm:bbking} The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$, \begin{align} \label{eq:stingz} A_{k+1} \geq \lambda_1 \prod_{j=2}^{k+1} \left(\dfrac{1}{1-\sqrt{\dfrac{\mu\lambda_{j}}{1+\mu\lambda_{j}}}}\right). \end{align} \item[\emph{(b)}] For all $k\geq 1$, \begin{align} \label{eq:sting2} A_{k+1} \geq \lambda_1 \prod_{j=2}^{k+1}\left(1+2\mu\lambda_{j}\right). \end{align} \end{itemize} \end{lemma} \begin{proof} (a) From \eqref{eq:alg_a}, \begin{align*} a_{k+1}&=\dfrac{(1+2\mu A_k)\lambda_{k+1}+\sqrt{(1+2\mu A_k)^2\lambda_{k+1}^2 +4(1+\mu A_k)A_k\lambda_{k+1}}}{2}\\ &\geq \dfrac{(2\mu A_k)\lambda_{k+1}+\sqrt{(2\mu A_k)^2\lambda_{k+1}^2 +4(\mu A_k)A_k\lambda_{k+1}}}{2}\\ &=\dfrac{(2\mu A_k)\lambda_{k+1}+2A_k\sqrt{\mu^2\lambda_{k+1}^2 +\mu \lambda_{k+1}}}{2}\\ &= A_k\left[\mu\lambda_{k+1}+\sqrt{\mu\lambda_{k+1}(1+\mu\lambda_{k+1})}\right]. \end{align*} Hence, from \eqref{eq:alg_A}, \begin{align} \nonumber A_{k+1}&=A_k+a_{k+1}\\ \nonumber & \geq A_k+ A_k\left[\mu\lambda_{k+1}+\sqrt{\mu\lambda_{k+1}(1+\mu\lambda_{k+1})}\right]\\ \label{eq:blackwell02} &= A_k\left[1+\mu\lambda_{k+1}+\sqrt{\mu\lambda_{k+1}(1+\mu\lambda_{k+1})}\right]\\[2mm] \label{eq:sting} &= A_k\left(\dfrac{1}{1-\sqrt{\dfrac{\mu\lambda_{k+1}}{1+\mu\lambda_{k+1}}}}\right), \end{align} where in the last equality we used the identity $1/\left(1-\sqrt{\frac{x}{1+x}}\right)=1+x+\sqrt{x(1+x)}$ with $x=\mu\lambda_{k+1}$. Note now that \eqref{eq:stingz} follows directly from \eqref{eq:sting} and the fact that $A_1=\lambda_1$ -- see \eqref{eq:A1a1}. 
(b) Using \eqref{eq:blackwell02}, the fact that $\sqrt{\mu\lambda_{k+1}(1+\mu\lambda_{k+1})}\geq \mu\lambda_{k+1}$ and a similar reasoning to the proof of item (a), we obtain that \eqref{eq:sting2} holds for all $k\geq 1$. \end{proof} Next is a corollary of Lemma \ref{lm:bbking}(a) for the special case that the sequence $\{\lambda_k\}$ is bounded away from zero. Lemma \ref{lm:bbking}(b) will be useful later in Section \ref{sec:ls}. \begin{corollary} \label{cor:bounded} Assume that $\lambda_k\geq \underline{\lambda}>0$, for all $k\geq 1$, and define $\alpha\in (0,1)$ as \begin{align} \label{eq:def.alpha} \alpha:=\sqrt{\dfrac{\mu\underline{\lambda}}{1+\mu\underline{\lambda}}}. \end{align} Then, for all $k\geq 1$, \begin{align} \label{eq:borwein} A_k\geq \underline{\lambda} \left(\dfrac{1}{1-\alpha}\right)^{k-1}. \end{align} \end{corollary} \begin{proof} Using the fact that the scalar function $(0,\infty)\ni t \mapsto \frac{\mu t}{1+\mu t}\in (0,1)$ is increasing, the assumption $\lambda_k\geq \underline{\lambda}>0$, for all $k\geq 1$, and \eqref{eq:def.alpha}, we find \begin{align*} \dfrac{1}{1-\sqrt{\dfrac{\mu\lambda_{j}}{1+\mu\lambda_{j}}}}\geq \dfrac{1}{1-\alpha},\qquad \forall j\geq 1. \end{align*} Hence, from Lemma \ref{lm:bbking}(a) and the assumption $\lambda_k\geq \underline{\lambda}$ with $k=1$ we obtain $A_{k+1}\geq \underline{\lambda} \left(\dfrac{1}{1-\alpha}\right)^k$, for all $k\geq 1$, which is clearly equivalent to $A_{k}\geq \underline{\lambda} \left(\dfrac{1}{1-\alpha}\right)^{k-1}$ for all $k\geq 2$. To finish the proof of \eqref{eq:borwein}, note that the latter inequality holds trivially for $k=1$ (because $A_1=\lambda_1$ and $\lambda_1\geq \underline{\lambda}$). \end{proof} Next we present convergence rate results for Algorithm \ref{alg:main} under the assumption that $\{\lambda_k\}$ is bounded away from zero. \begin{theorem}[{\bf Convergence rates for Algorithm \ref{alg:main} with $\{\lambda_k\}$ bounded below}] \label{th:main_comp03} Consider the sequences evolved by \emph{Algorithm \ref{alg:main}} and assume that $\lambda_k\geq \underline{\lambda}>0$ for all $k\geq 1$. Let $x^*$ be the (unique) solution of \eqref{eq:co}, let $d_0$ be as in \eqref{eq:def.d0} and let $\alpha\in (0,1)$ be as in \eqref{eq:def.alpha}. The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$, \begin{align*} & h(y^{k})-h(x^*)\leq \dfrac{d_0^2}{2\underline{\lambda}}(1-\alpha)^{k-1},\\[3mm] & \max\left\{\norm{x^*-y^k},\norm{x^*-x^k}\right\}\leq \dfrac{d_0}{\sqrt{\mu\underline{\lambda}}}(1-\alpha)^{(k-1)/2}. \end{align*} \item[\emph{(b)}] For all $k\geq 1$, \begin{align*} \begin{cases} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+\partial g(y^{k+1}),\\[3mm] &\norm{v^{k+1}}\leq \left(\dfrac{1+\sigma\sqrt{1+\mu\underline{\lambda}}}{6^{-1/2}\mu^{1/2}\underline{\lambda}^{3/2}}\right) d_0\,(1-\alpha)^{(k-1)/2},\\[6mm] &\varepsilon_{k+1}\leq \left(\dfrac{3\sigma^2 d_0^2}{\mu \underline{\lambda}^2}\right) (1-\alpha)^{k-1}. \end{cases} \end{align*} \end{itemize} \end{theorem} \begin{proof} (a) This follows from Theorem \ref{th:main_comp}(a) and Corollary \ref{cor:bounded}. (b) The result follows from Theorem \ref{th:main_comp}(b), Corollary \ref{cor:bounded}, the assumption $\lambda_k\geq \underline{\lambda}$ and the fact that, for $t>0$, the scalar function $t\mapsto \frac{1+\sigma\sqrt{1+\mu t}}{t}$ is nonincreasing. 
\end{proof} \section{A (high-order) large-step A-HPE algorithm for strongly convex problems} \label{sec:ls} In this section, we also consider problem \eqref{eq:co}, i.e., $\min_{x\in \mathcal{H}}\left\{h(x):=f(x)+g(x)\right\}$, where the same assumptions as in Section \ref{sec:alg} hold on $h$, $f$ and $g$. For solving \eqref{eq:co}, we propose and study the iteration-complexity of a variant (Algorithm \ref{alg:ls}) of the large-step A-HPE algorithm of Monteiro and Svaiter~\cite{mon.sva-acc.siam13}, with a high-order large-step condition specially tailored for strongly convex objectives. Applications of this general framework to high-order tensor methods will be given in Section \ref{sec:second}. The main results on convergence rates for Algorithm \ref{alg:ls} are presented in Theorem \ref{th:main_comp02} below. \noindent \fbox{ \begin{minipage}[h]{6.6 in} \begin{algorithm} \label{alg:ls} {\bf A variant of the large-step A-HPE algorithm for (the strongly convex) problem \eqref{eq:co}} \end{algorithm} \begin{itemize} \item [0)] Choose $x^0,y^0\in \mathcal{H}$, $\sigma\in [0,1)$, $p\geq 2$ and $\theta>0$; let $A_0=0$ and set $k=0$. \item [1)] Compute $\lambda_{k+1}>0$ and $(y^{k+1},v^{k+1},\varepsilon_{k+1})\in \mathcal{H}\times \mathcal{H}\times \mathbb{R}_{+}$ such that \begin{align} \begin{aligned} \label{eq:alg_err2} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+ \partial g(y^{k+1}),\\[3mm] &\dfrac{\norm{\lambda_{k+1}v^{k+1}+y^{k+1}-\widetilde x^k}^2}{1+\lambda_{k+1}\,\mu} +2\lambda_{k+1}\varepsilon_{k+1} \leq \sigma^2\norm{y^{k+1}-\widetilde x^k}^2,\\[3mm] & \lambda_{k+1}\norm{y^{k+1}-\widetilde x^k}^{p-1}\geq \theta, \end{aligned} \end{align} where \begin{align} \label{eq:alg_xtil2} & \widetilde x^k = \left(\dfrac{a_{k+1}-\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)x^k + \left(\dfrac{A_k+\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)y^k,\\[3mm] \label{eq:alg_a2} & a_{k+1}=\dfrac{(1+2\mu A_k)\lambda_{k+1}+\sqrt{(1+2\mu A_k)^2\lambda_{k+1}^2 +4(1+\mu A_k)A_k\lambda_{k+1}}}{2}. \end{align} \item[2)] Let \begin{align} \label{eq:alg_A2} &A_{k+1} = A_k + a_{k+1},\\[2mm] \label{eq:alg_xne2} & x^{k+1} = \left(\dfrac{1+\mu A_k}{1+\mu A_{k+1}}\right)x^k + \left(\dfrac{\mu a_{k+1}}{1+\mu A_{k+1}}\right) y^{k+1} - \left(\dfrac{a_{k+1}}{1+\mu A_{k+1}}\right)v^{k+1}. \end{align} \item[3)] Set $k=k+1$ and go to step 1. \end{itemize} \noindent \end{minipage} } We now make a few remarks concerning Algorithm \ref{alg:ls}: \begin{itemize} \item[(i)] By deleting the third inequality in \eqref{eq:alg_err2} (the high-order large-step condition), we see that Algorithm \ref{alg:ls} is a special instance of Algorithm \ref{alg:main}. As a consequence, all results proved in Section \ref{sec:alg} for Algorithm \ref{alg:main} also hold for Algorithm \ref{alg:ls}. \item[(ii)] We mention that Algorithm \ref{alg:ls} is a generalization of Algorithm 1 in \cite{jordan-controlpreprint20} to strongly convex objectives. The authors of the latter work proved the global rates $\mathcal{O}\left(k^{-\frac{3p+1}{2}}\right)$, $\mathcal{O}\left(k^{-3p}\right)$ and $\mathcal{O}\left(k^{-\frac{3p+3}{2}}\right)$ for (in the notation of this paper) function values $h(y^{k+1})-h(x^*)$ and residuals $\inf_{1\leq i\leq k+1}\,\norm{v^{i}}^2$ and $\inf_{1\leq i\leq k+1}\,\varepsilon_i$, respectively (see \cite[Theorem 4.3]{jordan-controlpreprint20}).
\end{itemize} In what follows we will use remark (i) following Algorithm \ref{alg:ls} to apply the results proved for Algorithm \ref{alg:main} in Section \ref{sec:alg} to Algorithm \ref{alg:ls}. The next two lemmas will be used to prove Theorem \ref{th:main_comp02} below. \begin{lemma} \label{lm:emicida} Consider the sequences evolved by \emph{Algorithm \ref{alg:ls}} and let $d_0:=\norm{x^0-x^*}$, where $x^*$ is the (unique) solution of \eqref{eq:co}. Then, for all $k\geq 1$, \begin{align} \label{eq:furry} \sum_{j=1}^k\,\dfrac{A_j}{\lambda_j^\frac{p+1}{p-1}}\leq \dfrac{d_0^2}{\theta^{\frac{2}{p-1}}(1-\sigma^2)}. \end{align} In particular, for all $k\geq 1$, \begin{align} \label{eq:furry2} \lambda_k\geq C d_0^{-\frac{2(p-1)}{p+1}},\qquad C:=\lambda_1^{\frac{p-1}{p+1}}\theta^{\frac{2}{p+1}}(1-\sigma^2)^{\frac{p-1}{p+1}}. \end{align} \end{lemma} \begin{proof} Using \eqref{eq:sting3} and the third inequality in \eqref{eq:alg_err2}, we obtain \begin{align*} \left(\sum_{j=1}^k\,\dfrac{A_j}{\lambda_j^\frac{p+1}{p-1}}\right)\theta^{\frac{2}{p-1}} \leq \sum_{j=1}^{k}\, \dfrac{A_j}{\lambda_j^{\frac{p+1}{p-1}}}\left(\lambda_j \norm{y^j-\widetilde x^{\,j-1}}^{p-1}\right)^{\frac{2}{p-1}} =\sum_{j=1}^{k}\, \dfrac{A_j}{\lambda_j}\norm{y^j-\widetilde x^{\,j-1}}^2 \leq \dfrac{d_0^2}{1-\sigma^2}, \end{align*} which yields \eqref{eq:furry}. To finish the proof of the lemma, note that \eqref{eq:furry2} follows directly from \eqref{eq:furry} and the fact that $A_k\geq \lambda_1$ for all $k\geq 1$ (see \eqref{eq:alg_A} and \eqref{eq:A1a1}). \end{proof} \begin{lemma} \label{lm:belchior} For all $k\geq 0$, \begin{align} \label{eq:belchior} A_{k+1}\geq \lambda_1\left(1+ \dfrac{2\mu C}{d_0^{\frac{2(p-1)}{p+1}}}\, k^{\,\left(\frac{p-1}{p+1}\right)}\right)^k, \end{align} where $C>0$ is as in \eqref{eq:furry2}. \end{lemma} \begin{proof} First note that from \eqref{eq:A1a1} we have $A_1=\lambda_1$, showing that \eqref{eq:belchior} trivially holds for $k=0$. Assume now that $k>0$. From Lemma \ref{lm:emicida} we know, in particular, that \begin{align*} \sum_{j=2}^{k+1}\,\dfrac{A_j}{\lambda_j^\frac{p+1}{p-1}}\leq \dfrac{d_0^{2}}{\theta^\frac{2}{p-1}(1-\sigma^2)}. \end{align*} Since $A_j=A_{j-1}+a_j\geq A_{j-1}\geq \dots \geq A_1$, for all $j\geq 2$, and $A_1=\lambda_1$, we then obtain \begin{align*} \sum_{j=2}^{k+1}\,\dfrac{1}{\lambda_j^\frac{p+1}{p-1}}\leq \dfrac{d_0^2}{\lambda_1 \theta^\frac{2}{p-1}(1-\sigma^2)}=:c. \end{align*} Now using Lemma \ref{lm:carinhoso} with $c>0$ as above, $q=\frac{p+1}{p-1}$ and $\lambda_j\leftarrow 2\mu\lambda_j$, we find \begin{align*} \prod_{j=2}^{k+1}\left(1+2\mu\lambda_{j}\right)&\geq \left(1+\left(\dfrac{(2\mu)^{\frac{p+1}{p-1}}}{c}k \right)^{\frac{p-1}{p+1}}\right)^{k}\\[2mm] & =\left(1+\dfrac{2\mu}{c^{\frac{p-1}{p+1}}}k^{\frac{p-1}{p+1}}\right)^{k}\\[2mm] & = \left(1+\left(\dfrac{2\mu\lambda_1^{\frac{p-1}{p+1}}\theta^{\frac{2}{p+1}}(1-\sigma^2)^{\frac{p-1}{p+1}}}{d_0^{\frac{2(p-1)}{p+1}}}\right)k^{\frac{p-1}{p+1}}\right)^{k}, \end{align*} which, in turn, combined with \eqref{eq:sting2} and the definition of $C$ in \eqref{eq:furry2} finishes the proof of the lemma. \end{proof} Next is the main result on global convergence rates for Algorithm \ref{alg:ls}. As we mentioned before, it provides a global \emph{superlinear} $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ convergence rate, where $p-1\geq 1$ is the power in the high-order large-step condition (third inequality in \eqref{eq:alg_err2}).
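Before stating the theorem, we include a minimal numerical sketch (in Python) of the mechanism behind the superlinear rate, namely the growth of the sequence $\{A_k\}$: it evaluates the recursion \eqref{eq:alg_a2} and \eqref{eq:alg_A2} for a synthetic, purely illustrative choice of $\mu$ and of the stepsizes $\lambda_j$, and checks the result against the lower bounds of Lemma \ref{lm:bbking} (whose proof uses only this recursion). The sketch is not part of the analysis, and all numerical values in it are assumptions made only for illustration.

\begin{verbatim}
# Minimal numerical sanity check (illustrative values only): evaluates the
# recursion for a_{k+1}, A_{k+1} and compares A_k with the two lower bounds
# of Lemma lm:bbking, items (a) and (b).
import math

mu = 0.5                                    # assumed strong-convexity modulus
lambdas = [0.3, 1.0, 0.7, 2.0, 1.5, 0.9]    # assumed stepsizes lambda_1, lambda_2, ...

A = 0.0
for k, lam in enumerate(lambdas, start=1):
    a = ((1 + 2*mu*A)*lam
         + math.sqrt(((1 + 2*mu*A)*lam)**2 + 4*(1 + mu*A)*A*lam)) / 2.0
    A += a                                  # A_k = A_{k-1} + a_k
    bound_a = bound_b = lambdas[0]          # both bounds start from A_1 = lambda_1
    for lj in lambdas[1:k]:                 # products over j = 2, ..., k
        bound_a /= 1.0 - math.sqrt(mu*lj/(1.0 + mu*lj))
        bound_b *= 1.0 + 2.0*mu*lj
    print(k, A, bound_a, bound_b)
    assert A >= bound_a - 1e-12 and A >= bound_b - 1e-12
\end{verbatim}

By Lemma \ref{lm:bbking} the assertions above cannot fail, and the product bound \eqref{eq:sting2} checked in the last line is precisely the one used in the proof of Lemma \ref{lm:belchior}.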
\begin{theorem}[{\bf Convergence rates for Algorithm \ref{alg:ls}}] \label{th:main_comp02} Consider the sequences evolved by \emph{Algorithm \ref{alg:ls}}, let $x^*$ denote the (unique) solution of \eqref{eq:co} and let $C>0$ be as in \eqref{eq:furry2}. Then the following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 0$, \begin{align*} &h(y^{k+1})-h(x^*)\leq \dfrac{d_0^2}{2\lambda_1\left(1+\frac{2\mu C}{d_0^{\frac{2(p-1)}{p+1}}}\,k^{\,\left(\frac{p-1}{p+1}\right)}\right)^{k}} =\mathcal{O}\left(\frac{1}{k^{k\left(\frac{p-1}{p+1}\right)}}\right),\\[2mm] &\max\left\{\norm{x^*-x^{k+1}}^2,\norm{x^*-y^{k+1}}^2\right\}\leq \dfrac{d_0^2} {\mu \lambda_1\left(1+ \frac{2\mu C}{d_0^{\frac{2(p-1)}{p+1}}} k^{\,\left(\frac{p-1}{p+1}\right)}\right)^{k}} =\mathcal{O}\left(\frac{1}{k^{k\left(\frac{p-1}{p+1}\right)}}\right). \end{align*} \item[\emph{(b)}] For all $k\geq 1$, \begin{align*} \begin{cases} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+\partial g(y^{k+1}),\\[3mm] &\norm{v^{k+1}}^2\leq \left(1+\sigma\sqrt{1+\mu C d_0^{-\frac{2(p-1)}{p+1}}}\right)^2 \dfrac{6 d_0^{\frac{2(3p-1)}{p+1}}} {\mu C^2\lambda_1\left(1+ \frac{2\mu C}{d_0^{\frac{2(p-1)}{p+1}}} (k-1)^{\,\left(\frac{p-1}{p+1}\right)}\right)^{k-1}} =\mathcal{O}\left(\frac{1}{(k-1)^{(k-1)\left(\frac{p-1}{p+1}\right)}}\right),\\[6mm] &\varepsilon_{k+1}\leq \dfrac{3\sigma^2 d_0^{\frac{4p}{p+1}}} {\mu C \lambda_1\left(1+ \frac{2\mu C}{d_0^{\frac{2(p-1)}{p+1}}} (k-1)^{\,\left(\frac{p-1}{p+1}\right)}\right)^{k-1}} =\mathcal{O}\left(\frac{1}{(k-1)^{(k-1)\left(\frac{p-1}{p+1}\right)}}\right). \end{cases} \end{align*} \end{itemize} \end{theorem} \begin{proof} Both items follow from Theorem \ref{th:main_comp} and Lemmas \ref{lm:emicida} and \ref{lm:belchior}. To prove the inequalities in item (b), one also has to use the fact that the scalar function $t\mapsto \frac{1+\sigma\sqrt{1+\mu t}}{t}$ is nonincreasing as well as the lower bound on $\lambda_k$ given in \eqref{eq:furry2}. \end{proof} \section{Applications to accelerated high-order tensor methods for strongly convex objectives} \label{sec:second} In this section, we consider the problem \begin{align} \label{eq:co4} \min_{x\in \mathcal{H}}\,\left\{h(x):=f(x)+g(x)\right\}, \end{align} where $f, g:\mathcal{H}\to (-\infty,\infty]$ are proper, closed and convex functions, $\mbox{dom}\,h\neq \emptyset$, and $g$ is {\it $\mu$-strongly convex} on $\mathcal{H}$ and $p\geq 2$ {\it times continuously differentiable} on $\Omega\supseteq \mbox{Dom}\,(\partial f)$ with $D^p g(\cdot)$ being {\it $L_p$-Lipschitz continuous on $\Omega$}: $0<L_p<+\infty$ and \begin{align} \label{eq:mg} \norm{D^p g(x) - D^p g(y)}\leq L_p \norm{x-y},\qquad \forall x,y\in \Omega. \end{align} Define \begin{align} g_{x,p}(y):=g(x)+\sum_{k=1}^p\,\dfrac{1}{k!} D^k g(x)[y-x]^k + \dfrac{M}{(p+1)!}\norm{y-x}^{p+1} ,\qquad (x,y)\in \Omega\times \mathcal{H}, \end{align} where $M>0$ is such that $M\geq p L_p$. As observed by Nesterov in \cite{nes-imp.mp20}, the function $g_{x,p}(\cdot)$ is convex whenever $M\geq p L_p$ and, moreover, \begin{align} \label{eq:error_p} \norm{\nabla g(y) - \nabla g_{x,p}(y)}\leq \dfrac{L_p+M}{p!}\norm{y-x}^p,\qquad \forall (x,y)\in \Omega\times \mathcal{H}. \end{align} At each iteration of the (exact) Proximal-Tensor method for solving \eqref{eq:co4} one has to find $y\in \mathcal{H}$ solving an inclusion of the form \begin{align} \label{eq:manu} 0\in \lambda\Big(\partial f(y)+ \nabla g_{z,p}(y)\Big) + y - x, \end{align} where $z=P_{\Omega}(x)$ and $\lambda>0$. 
Note also that \eqref{eq:manu} is equivalent to solving the convex problem \begin{align} \label{eq:manux} \min_{y\in \mathcal{H}}\,\left\{f(y)+ g_{z,p}(y)+\dfrac{1}{2\lambda}\norm{y-x}^2\right\}. \end{align} The next definition introduces a notion of \emph{relative-error} inexact solution for \eqref{eq:manu} (or, equivalently, \eqref{eq:manux}). It will be used in step 1 (see \eqref{eq:sigma11}) of Algorithm \ref{alg:second}, and can be motivated as follows. Note that the inclusion \eqref{eq:manu} can be split into an inclusion-equation system: \begin{align*} \begin{cases} u\in \partial f(y),&\\[2mm] \lambda\Big(u+\nabla g_{z,p}(y)\Big) + y - x = 0. \end{cases} \end{align*} Definition \ref{def:newton_sol} below relaxes the latter conditions by allowing errors in both the inclusion ($u$ will belong to $\partial_\varepsilon f(y)$ instead of belonging to $\partial f(y)$) and the equation. \begin{definition} \label{def:newton_sol} The triple $(y,u,\varepsilon)\in \mathcal{H}\times \mathcal{H}\times \mathbb{R}_+$ is a $\hat \sigma$-approximate \emph{Tensor solution} of \eqref{eq:manu} at $(x,\lambda)\in \mathcal{H}\times \mathbb{R}_{++}$ if $\hat \sigma\geq 0$ and \begin{align} \label{eq:mbarros2} u\in \partial_\varepsilon f(y),\qquad \dfrac{\left\|\lambda \Big(u+ \nabla g_{z,p}(y)\Big)+y-x\right\|^2}{1+\lambda\mu}+2\lambda\varepsilon \leq \hat\sigma^2\norm{y-x}^2, \end{align} where $z=P_{\Omega}(x)$. \end{definition} Note that if $\hat\sigma=0$ in \eqref{eq:mbarros2}, then it follows that $\varepsilon=0$, $u\in \partial f(y)$ and $\lambda \Big(u+ \nabla g_{z,p}(y)\Big)+y-x=0$, which implies that $y$ is the solution of \eqref{eq:manu}. We also mention that if we set $\mu=0$ in Definition \ref{def:newton_sol} then we recover \cite[Definition 2.1]{zhang-preprint20} (see also \cite[Definition 1]{jordan-controlpreprint20}). The next proposition shows that $\hat\sigma$-approximate solutions of \eqref{eq:manu} provide relative-error approximate solutions in the sense of \eqref{eq:alg_err2}. \begin{proposition} \label{pr:tensor_hat} Let $(u,y,\varepsilon)$ be a $\hat\sigma$-approximate Tensor solution of \eqref{eq:manu} at $(x,\lambda)\in \mathcal{H}\times \mathbb{R}_{++}$ \emph{(}in the sense of \emph{Definition \ref{def:newton_sol}}\emph{)} and define \begin{align} \label{eq:gulliver} v=u + \nabla g(y),\qquad \sigma=\dfrac{\lambda (L_p+M)\norm{y-x}^{p-1}}{p!\sqrt{1+\lambda\mu}}+\hat \sigma. \end{align} Then, \begin{align} \label{eq:gulliver02} v\in \partial_\varepsilon f(y)+\nabla g(y), \qquad \dfrac{\norm{\lambda v+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon\leq \sigma^2\norm{y-x}^2. \end{align} \end{proposition} \begin{proof} Note that the inclusion in \eqref{eq:gulliver02} follows from the definition of $v$ in \eqref{eq:gulliver} and the inclusion in \eqref{eq:mbarros2}.
To prove the inequality in \eqref{eq:gulliver02}, note that from the definition of $v$ in \eqref{eq:gulliver}, the triangle inequality and property \eqref{eq:error_p}, we find \begin{align*} \nonumber \norm{\lambda v+y-x}^2&=\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x+\lambda \big(\nabla g(y) - \nabla g_{z,p}(y)\big)}^2\\ \nonumber & \leq \Big(\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x}+\lambda\norm{\nabla g(y) - \nabla g_{z,p}(y)}\Big)^2\\ \nonumber &\leq \left(\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x}+ \frac{\lambda (L_p+M)}{p!} \norm{y-z}^p\right)^2\\ &\leq \left(\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x}+ \frac{\lambda (L_p+M)}{p!} \norm{y-x}^p\right)^2, \end{align*} where in the last inequality we also used the fact that $\norm{y-z}\leq \norm{y-x}$ (because $y\in \mbox{Dom}(\partial_\varepsilon f)\subset \overline{\mbox{Dom}(\partial f)}\subset \Omega$ and $z=P_\Omega(x)$). Hence, \begin{align*} \dfrac{\norm{\lambda v+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon &\leq \left(\dfrac{\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x}}{\sqrt{1+\lambda\mu}}+ \frac{\lambda (L_p+M)}{p!\sqrt{1+\lambda\mu}} \norm{y-x}^p\right)^2+2\lambda\varepsilon. \end{align*} Using now the elementary inequality $(a+b)^2+c\leq \left(b+\sqrt{a^2+c}\right)^2$ with $a=\frac{\norm{\lambda \left(u+\nabla g_{z,p}(y)\right) +y-x}}{\sqrt{1+\lambda\mu}}$, $b=\lambda (L_p+M)\norm{y-x}^p / (p!\sqrt{1+\lambda\mu})$ and $c=2\lambda\varepsilon$, we find \begin{align*} \dfrac{\norm{\lambda v+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon&\leq \left(\frac{\lambda (L_p+M)}{p!\sqrt{1+\lambda\mu}} \norm{y-x}^p+ \sqrt{\dfrac{\norm{\lambda \left(u+\nabla g_{z,p}(y)\right)+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon}\right)^2\\ &\leq \left(\frac{\lambda (L_p+M)}{p!\sqrt{1+\lambda\mu}} \norm{y-x}^p+\hat \sigma\norm{y-x}\right)^2\\ &=\left(\frac{\lambda (L_p+M)}{p!\sqrt{1+\lambda\mu}} \norm{y-x}^{p-1}+\hat \sigma\right)^2\norm{y-x}^2\\ &=\sigma^2\norm{y-x}^2, \end{align*} where in the second inequality we used the inequality in \eqref{eq:mbarros2} and in the second identity we used the second equality in \eqref{eq:gulliver}. \end{proof} Next we present our $p$th-order inexact (relative-error) accelerated tensor algorithm for solving \eqref{eq:co4}. \noindent \fbox{ \begin{minipage}[h]{6.6 in} \begin{algorithm} \label{alg:second} {\bf An accelerated inexact high-order tensor method for solving \eqref{eq:co4}} \end{algorithm} \begin{itemize} \item [0)] Choose $x^0,y^0\in \mathcal{H}$ and $p\geq 2$, $\hat \sigma\geq 0$, $0<\sigma_\ell<\sigma_u<1$ such that \begin{align} \label{eq:sigma10} \sigma:=\sigma_u+\hat\sigma<1,\qquad \sigma_\ell(1+\hat\sigma)^{p-1}<\sigma_u(1-\hat\sigma)^{p-1}; \end{align} let $A_0=0$ and set $k=0$. \item [1)] Compute $\lambda_{k+1}>0$ and a $\hat\sigma$-approximate Tensor solution $(u^{k+1},y^{k+1},\varepsilon_{k+1})$ (in the sense of Definition \ref{def:newton_sol}) of \eqref{eq:manu} at $(\widetilde x^k,\lambda_{k+1})$ satisfying \begin{align} \label{eq:sigma11} \dfrac{p!\,\sigma_\ell}{L_p+M}\leq \lambda_{k+1}\norm{y^{k+1}-\widetilde x^k}^{p-1}\leq \dfrac{p!\,\sigma_u\sqrt{1+\lambda_{k+1}\mu}}{L_p+M}, \end{align} where \begin{align} \label{eq:alg_xtil5} & \widetilde x^k = \left(\dfrac{a_{k+1}-\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)x^k + \left(\dfrac{A_k+\mu A_k\lambda_{k+1}}{A_k+a_{k+1}}\right)y^k,\\[3mm] \label{eq:alg_a5} & a_{k+1}=\dfrac{(1+2\mu A_k)\lambda_{k+1}+\sqrt{(1+2\mu A_k)^2\lambda_{k+1}^2 +4(1+\mu A_k)A_k\lambda_{k+1}}}{2}.
\end{align} \item[2)] Let \begin{align} \label{eq:alg_A5} &A_{k+1} = A_k + a_{k+1},\\[2mm] \label{eq:alg_nv} &v^{k+1}=u^{k+1}-\nabla g_{z^k,p}(y^{k+1})+\nabla g(y^{k+1}),\quad z^k=P_{\Omega}(\widetilde x^k),\\[2mm] \label{eq:alg_nv2} & x^{k+1} = \left(\dfrac{1+\mu A_k}{1+\mu A_{k+1}}\right)x^k + \left(\dfrac{\mu a_{k+1}}{1+\mu A_{k+1}}\right) y^{k+1}-\left(\dfrac{a_{k+1}}{1+\mu A_{k+1}}\right)v^{k+1}. \end{align} \item[3)] Set $k=k+1$ and go to step 1. \end{itemize} \noindent \end{minipage} } We now make two remarks concerning Algorithm \ref{alg:second}: \begin{itemize} \item[(i)] Algorithm \ref{alg:second} is a generalization of \cite[Algorithm 3]{jordan-controlpreprint20} to strongly convex problems. The latter algorithm, which can be applied to \eqref{eq:co4} with $f\equiv 0$, has the global convergence rates $\mathcal{O}\left(k^{-\frac{3p+1}{2}}\right)$ and $\mathcal{O}\left(k^{-3p}\right)$ for (in the notation of this paper) $g(y^k)-g(x^*)$ and $\inf_{1\leq i\leq k}\,\norm{\nabla g(y^i)}^2$, respectively (see \cite[Theorem 4.13]{jordan-controlpreprint20}). In contrast, here we obtain (see Theorem \ref{th:main_tensor}) the fast global $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ convergence rate. \item[(ii)] We also mention that a $\hat\sigma$-approximate Tensor solution satisfying \eqref{eq:sigma11} can be computed using bisection schemes (see \cite{alv.mon.sva-sqp.pre14,zhang-preprint20,mon.sva-acc.siam13}). More precisely, by defining the curve \begin{align*} \tau(\lambda) = \frac{a_{k+1}(\lambda)-\mu A_k \lambda}{A_k + a_{k+1}(\lambda)},\quad \lambda>0, \end{align*} where \begin{align*} a_{k+1}(\lambda) := \dfrac{(1+2\mu A_k)\lambda + \sqrt{(1+2\mu A_k)^2\lambda^2 +4(1+\mu A_k)A_k\lambda}}{2}, \end{align*} we find that $\widetilde x^k$ as in \eqref{eq:alg_xtil5} can be written as \begin{align*} \widetilde x^k = y^k + \tau_{k+1}(x^k-y^k), \end{align*} where $\tau_{k+1}:=\tau(\lambda_{k+1})$. Moreover, it is not difficult to check that $\tau(\cdot)$ is (smooth) increasing and that $\dot{\tau}(\lambda)$ satisfies \[ \dot{\tau}(\lambda)\leq \frac{1}{\lambda}\qquad \forall \lambda>0. \] As a consequence of the above inequality, one can reproduce Lemma 7.13 as in~\cite[p. 1121]{mon.sva-acc.siam13} and pose the ``Line search problem'' as in~\cite[p. 1117]{mon.sva-acc.siam13} (with $\lambda\norm{y_\lambda - x(\lambda)}$ replaced by $\lambda\norm{y_\lambda - x(\lambda)}^{p-1}$). The general search procedures studied in references~\cite[Section 4]{alv.mon.sva-sqp.pre14} and~\cite{zhang-preprint20} would also be helpful. The complexity of the bracketing/bisection procedure depends on a logarithm of the inverse of the precision (see, e.g.,~\cite[Theorem 7.16]{mon.sva-acc.siam13}). \end{itemize} \begin{proposition} \label{pr:tensor_special} \emph{Algorithm \ref{alg:second}} is a special instance of \emph{Algorithm \ref{alg:ls}} for solving \eqref{eq:co4}, where \begin{align} \label{eq:theta_tensor} \theta:=\dfrac{p!\sigma_\ell}{L_p+M}. \end{align} \end{proposition} \begin{proof} It follows from the definitions of Algorithms \ref{alg:ls} and \ref{alg:second} that we only have to prove that \eqref{eq:alg_err2} holds.
Note that the inclusion and the first inequality in \eqref{eq:alg_err2} follow from step 1 of Algorithm \ref{alg:second} (the fact that $(u^{k+1},y^{k+1},\varepsilon_{k+1})$ is a $\hat\sigma$-approximate Tensor solution of \eqref{eq:manu}), the second inequality in \eqref{eq:sigma11}, the definition of $\sigma$ in \eqref{eq:sigma10} and Proposition \ref{pr:tensor_hat}. To finish the proof of the proposition, note that the last inequality in \eqref{eq:alg_err2} (the large-step condition) is a direct consequence of the first inequality in \eqref{eq:sigma11} and \eqref{eq:theta_tensor}. \end{proof} The next theorem states the fast global $\mathcal{O}\left(k^{\,-k\left(\frac{p-1}{p+1}\right)}\right)$ convergence rate for Algorithm \ref{alg:second}. \begin{theorem}[{\bf Convergence rates for Algorithm \ref{alg:second}}] \label{th:main_tensor} Consider the sequences generated by \emph{Algorithm \ref{alg:second}}, let $\theta>0$ be as in \eqref{eq:theta_tensor} and let $C>0$ be as in \eqref{eq:furry2}, where $d_0:=\norm{x^0-x^*}$ and $x^*$ is the (unique) solution of \eqref{eq:co4}. Then all the conclusions of \emph{Theorem \ref{th:main_comp02}} hold. \end{theorem} \begin{proof} The proof follows from Proposition \ref{pr:tensor_special} and Theorem \ref{th:main_comp02}. \end{proof} \noindent {\bf Remarks.} We now make some remarks concerning Algorithm \ref{alg:second}: \begin{itemize} \item[(i)] Note that if $p=2$ in \eqref{eq:mg}, then it follows that \emph{Algorithm \ref{alg:second}} reduces to an (accelerated) inexact proximal-Newton-type algorithm with $\mathcal{O}\left(k^{-k/3}\right)$ global convergence rate (see Theorems \ref{th:main_tensor} and \ref{th:main_comp02}). \item[(ii)] In \cite{gra.nes-acc.siam19}, an accelerated regularized Newton method was proposed for solving \eqref{eq:co4} with $p=2$ and $g$ being $\sigma_q$-uniformly convex. In the notation of this paper, Algorithm 1 of \cite{gra.nes-acc.siam19} (see Theorem 3.3) has global linear \[ \mathcal{O}\left(\dfrac{L_2 d_0^3}{\left(1+\left(\frac{\mu}{L_2}\right)^{1/3}\right)^{2(k-1)}}\right) \] convergence rate for function values. \item[(iii)] In \cite{gasnikov-optimal.pmlr19}, problem \eqref{eq:co4} was considered with $f\equiv 0$ and $g$ assumed to be $\sigma_q$-uniformly convex. The complexity of an optimal tensor method with restart (\cite[Algorithm 3]{gasnikov-optimal.pmlr19}) to compute $x$ satisfying $g(x)-g(x^*)\leq \varepsilon$ is \begin{align*} &\mathcal{O}\left(\left(\dfrac{L_p}{\sigma_{p+1}}\right)^{\frac{2}{3p+1}}\log_2\left(\frac{\Delta_0}{\varepsilon}\right)\right), \;q = p+1; \quad\\[2mm] &\mathcal{O}\left(\left(\dfrac{L_p\,\Delta_0^{\frac{p+1-q}{q}}}{\sigma_q^{\frac{p+1}{q}}}\right)^{\frac{2}{3p+1}} + \log_2\left(\dfrac{\Delta_0}{\varepsilon}\right) \right),\; q<p+1, \end{align*} where $\Delta_0\geq g(x^0)-g(x^*)$. We mention that the upper bound $\Delta_0$ on $g(x^0)-g(x^*)$ is assumed to be known in the implementation of \cite[Algorithm 3]{gasnikov-optimal.pmlr19}. \item[(iv)] In \cite{dvurechensky-near.preprint19}, a near-optimal algorithm for solving \eqref{eq:co4} with $f\equiv 0$ and $\mu=0$ (i.e., with $g$ convex but not strongly convex) was proposed and studied.
The iteration-complexity of Algorithm 2 in \cite{dvurechensky-near.preprint19} to find $x$ satisfying $\norm{\nabla g(x)}\leq \varepsilon$ (see \cite[Theorem 2]{dvurechensky-near.preprint19}) is \[ \mathcal{O}\left(\dfrac{L_p^{\frac{2}{3p+1}}}{\varepsilon^{\frac{2(p+1)}{3p+1}}} \Delta_0^{\frac{2p}{3p+1}} +\log_2\left(\dfrac{2^{\frac{4p-3}{p+1}}\Delta_0\left(pL_p\right)^{\frac{1}{p}}(p+1)!} {\varepsilon^{\frac{p}{p+1}}}\right)\right), \] where $\Delta_0\geq g(x^0)-g(x^*)$. Analogously to \cite{gasnikov-optimal.pmlr19}, the upper bound $\Delta_0$ is assumed to be known while running \cite[Algorithm 2]{dvurechensky-near.preprint19}. \item[(v)] In \cite{gra.nes-ten.oms20}, tensor methods for solving \eqref{eq:co4} with $f\equiv 0$ and $g$ being $p$-times continuously differentiable with $\nu$-H\"older continuous $p$th derivatives were proposed. The iteration-complexity is $\mathcal{O}\left(\varepsilon^{-1/(p+\nu-1)}\right)$ and $\mathcal{O}\left(\varepsilon^{-(p+\nu)/[(p+\nu-1)(p+\nu+1)]}\right)$ for the non-accelerated and accelerated methods, respectively, to find $x$ such that $\norm{\nabla g(x)}\leq \varepsilon$. \item[(vi)] In \cite{doi.nes-min.jota21}, a regularized Newton method with linear convergence rate in function values for minimizing uniformly convex functions with $\nu$-H\"older continuous Hessian was proposed and studied. The main result (\cite[Theorem 4.1]{doi.nes-min.jota21}) shows that the rate is linear: \[ F(x_{k+1})-F^*\leq \left(1-\min\left\{ \dfrac{(2+\nu)\left((1+\nu)(2+\nu)\right)^{1/(1+\nu)}\left(\gamma_f(\nu)\right)^\frac{1}{1+\nu}} {(1+\nu)6^{3/2}\cdot 2^{1/2}\cdot (8+\nu)^{(1-\nu)/(2+2\nu)}} ,\frac{1}{2} \right\}\right)\left(F(x_k)-F^*\right), \] where $F$ is the objective function and $F^*$ denotes the optimal value. \end{itemize} \section{Applications to first-order methods for strongly convex problems} \label{sec:first} Consider the convex optimization problem \begin{align} \label{eq:co2} \min_{x\in \mathcal{H}}\,\left\{h(x):=f(x)+g(x)\right\}, \end{align} where $f, g:\mathcal{H}\to (-\infty,\infty]$ are proper, closed and convex functions, $\mbox{dom}\,h\neq \emptyset$, and, additionally, $g$ is \emph{$\mu$-strongly convex} on $\mathcal{H}$ and {\it differentiable} on $\Omega\supseteq \mbox{dom}\,f$ with $\nabla g$ being $L$\emph{-Lipschitz continuous on $\Omega$}. An iteration of the proximal-gradient (forward-backward) method for solving \eqref{eq:co2} can be written as follows: \begin{align} \label{eq:ite.fb} y = (\lambda \partial f+I)^{-1} \left(x-\lambda \nabla g(z)\right), \end{align} where $z=P_{\Omega}(x)$ and $\lambda>0$. Using the definition of $(\lambda \partial f+I)^{-1}$, it is easy to see that \eqref{eq:ite.fb} is equivalent to solving the inclusion \begin{align} \label{eq:vhalen} 0\in \lambda \Big(\partial f(y)+\nabla g(z)\Big) + y - x. \end{align} Next we define a notion of approximate solution for \eqref{eq:vhalen} within a \emph{relative-error} criterion. \begin{definition} \label{def:gradient_sol} The triple $(y,u,\varepsilon)\in \mathcal{H}\times \mathcal{H}\times \mathbb{R}_+$ is a $\hat \sigma$-approximate Proximal-Gradient (PG) solution of \eqref{eq:vhalen} at $(x,\lambda)\in \mathcal{H}\times \mathbb{R}_{++}$ if $\hat \sigma\geq 0$ and \begin{align} \label{eq:mbarros4} u\in \partial_\varepsilon f(y),\qquad \dfrac{\norm{\lambda \left(u+\nabla g(z)\right)+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon \leq \hat\sigma^2\norm{y-x}^2, \end{align} where $z=P_{\Omega}(x)$.
We also write \begin{align*} (y,u,\varepsilon) \approx (\lambda \partial f+I)^{-1} \left(x-\lambda \nabla g(z)\right) \end{align*} to mean that $(y,u,\varepsilon)$ is a $\hat \sigma$-approximate PG solution of \eqref{eq:vhalen} at $(x,\lambda)$. \end{definition} Note that if $\hat\sigma=0$ in \eqref{eq:mbarros4}, then it follows that $\varepsilon=0$, $u\in \partial f(y)$ and $\lambda\left[u+\nabla g(z)\right]+y-x=0$, which implies that $y$ is the (exact) solution of \eqref{eq:vhalen}. In particular, in this case, $y$ satisfies \eqref{eq:ite.fb}. \begin{proposition} \label{pr:vhalen} Let $(u,y,\varepsilon)$ be a $\hat\sigma$-approximate PG solution of \eqref{eq:vhalen} at $(x,\lambda)\in \mathcal{H}\times \mathbb{R}_{++}$ as in \emph{Definition \ref{def:gradient_sol}} and define \begin{align} \label{eq:vhalen2} v=u +\nabla g(y),\qquad \sigma=\dfrac{\lambda L}{\sqrt{1+\lambda\mu}}+\hat \sigma. \end{align} Then, \begin{align} \label{eq:vhalen3} v\in \partial_\varepsilon f(y)+\nabla g(y), \qquad \dfrac{\norm{\lambda v+y-x}^2}{1+\lambda\mu}+2\lambda\varepsilon\leq \sigma^2\norm{y-x}^2. \end{align} \end{proposition} \begin{proof} The proof follows the same outline of Proposition \ref{pr:tensor_hat}'s proof. \end{proof} For solving \eqref{eq:co2}, we propose the following inexact (relative-error) accelerated first-order algorithm. \noindent \fbox{ \begin{minipage}[h]{6.6 in} \begin{algorithm} \label{alg:first01} {\bf An accelerated inexact proximal-gradient algorithm for solving \eqref{eq:co2}} \end{algorithm} \begin{itemize} \item [0)] Choose $x^0,y^0\in \mathcal{H}$ and $\hat\sigma\geq 0$, $0<\sigma_u\leq 1$ such that $\sigma:=\sigma_u+\hat\sigma<1$ and let \begin{align} \label{eq:first_lambda} \lambda=\dfrac{\sigma_u}{\sqrt{\left(\dfrac{\sigma_u\mu}{2}\right)^2+L^2}-\dfrac{\sigma_u\mu}{2}}>\dfrac{\sigma_u}{L}; \end{align} let $A_0=0$ and set $k=0$. \item [1)] Compute $z^k=P_{\Omega}(\widetilde x^k)$ and \begin{align} \begin{aligned} \label{eq:alg_err4} \qquad (y^{k+1},u^{k+1},\varepsilon_{k+1})\approx (\lambda \partial f+I)^{-1}\left(\widetilde x^k-\lambda \nabla g(z^k)\right), \end{aligned} \end{align} i.e., compute a $\hat\sigma$-approximate PG solution $(u^{k+1},y^{k+1},\varepsilon_{k+1})$ at $(\widetilde x^k,\lambda)$ (in the sense of Definition \ref{def:gradient_sol}), where \begin{align} \label{eq:alg_xtil4} & \widetilde x^k = \left(\dfrac{a_{k+1}-\mu A_k\lambda}{A_k+a_{k+1}}\right)x^k + \left(\dfrac{A_k+\mu A_k\lambda}{A_k+a_{k+1}}\right)y^k,\\[3mm] \label{eq:alg_a4} & a_{k+1}=\dfrac{(1+2\mu A_k)\lambda+\sqrt{(1+2\mu A_k)^2\lambda^2 +4(1+\mu A_k)A_k\lambda}}{2}. \end{align} \item[2)] Let \begin{align} \label{eq:alg_A4} &A_{k+1} = A_k + a_{k+1},\\[2mm] \label{eq:alg_vn4} &v^{k+1}=u^{k+1}+\nabla g(y^{k+1}),\\[2mm] \label{eq:alg_xne4} & x^{k+1} = \left(\dfrac{1+\mu A_k}{1+\mu A_{k+1}}\right)x^k + \left(\dfrac{\mu a_{k+1}}{1+\mu A_{k+1}}\right) y^{k+1}- \left(\dfrac{a_{k+1}}{1+\mu A_{k+1}}\right) v^{k+1}. \end{align} \item[3)] Set $k=k+1$ and go to step 1. \end{itemize} \noindent \end{minipage} } \noindent We now make the following remark concerning Algorithm \ref{alg:first01}: \begin{itemize} \item[(i)] From the definition of $\lambda>0$ in \eqref{eq:first_lambda} we obtain \begin{align} \label{eq:watson} \dfrac{\lambda^2 L^2}{1+\lambda\mu}=\sigma_u^2. \end{align} Indeed, it is easy to check that $\lambda>0$ is the largest root of $L^2\lambda^2-(\sigma_u^2\mu)\lambda-\sigma_u^2=0$, which is clearly equivalent to \eqref{eq:watson}. 
Now using \eqref{eq:watson}, we find \begin{align} \label{eq:watson2} \sigma=\sigma_u+\hat\sigma=\dfrac{\lambda L}{\sqrt{1+\lambda \mu}}+\hat\sigma. \end{align} \end{itemize} The next proposition shows that Algorithm \ref{alg:first01} is a special instance of Algorithm \ref{alg:main} for solving \eqref{eq:co2}. \begin{proposition} \label{pr:special01} Consider the sequences evolved by \emph{Algorithm \ref{alg:first01}} and let $\lambda_{k+1}\equiv \lambda$. Then, $\lambda_{k+1}>0$ and the triple $(y^{k+1}, v^{k+1}, \varepsilon_{k+1})$ satisfies condition \eqref{eq:alg_err} in \emph{Algorithm \ref{alg:main}} with $\sigma=\hat\sigma+\sigma_u$. As a consequence, \emph{Algorithm \ref{alg:first01}} is a special instance of \emph{Algorithm \ref{alg:main}} for solving \eqref{eq:co2}. \end{proposition} \begin{proof} The proof follows from \eqref{eq:alg_err4}, \eqref{eq:watson2}, Proposition \ref{pr:vhalen} and the definitions of Algorithms \ref{alg:main} and \ref{alg:first01}. \end{proof} Next we summarize the results on \emph{linear convergence} rates for Algorithm \ref{alg:first01}. \begin{theorem}[{\bf Convergence rates for Algorithm \ref{alg:first01}}] \label{th:first01} Consider the sequences evolved by \emph{Algorithm \ref{alg:first01}} and let $\sigma=\hat\sigma+\sigma_u$. Let also $x^*$ be the unique solution of \eqref{eq:co2}, let $d_0$ be as in \eqref{eq:def.d0} and denote $\gamma=\sqrt{(1+\sigma_u)^{-1}\sigma_u}$. The following holds: \begin{itemize} \item[\emph{(a)}] For all $k\geq 1$, \begin{align*} & h(y^{k})-h(x^*)\leq \dfrac{L d_0^2}{2\sigma_u}\left(1-\gamma\sqrt{\dfrac{\mu}{L}}\,\right)^{k-1},\\[3mm] & \max\left\{\norm{x^*-y^k},\norm{x^*-x^k}\right\}\leq \sqrt{\dfrac{L}{\sigma_u\mu}}\,d_0\left(1-\gamma\sqrt{\dfrac{\mu}{L}}\,\right)^{(k-1)/2}. \end{align*} \item[\emph{(b)}] For all $k\geq 1$, \begin{align*} \begin{cases} &v^{k+1}\in \partial_{\varepsilon_{k+1}} f(y^{k+1})+\partial g(y^{k+1}),\\[2mm] &\norm{v^{k+1}}\leq \dfrac{6d_0 L^{3/2}}{\mu^{1/2}\sigma_u^{3/2}} \left(1+\sigma\sqrt{1+\frac{\sigma_u\,\mu}{L}}\,\right) \left(1-\gamma\sqrt{\dfrac{\mu}{L}}\,\right)^{(k-1)/2},\\[6mm] &\varepsilon_{k+1}\leq \dfrac{3\sigma^2 d_0^2 L^2} {\sigma_u^2\,\mu}\left(1-\gamma\sqrt{\dfrac{\mu}{L}}\,\right)^{k-1}. \end{cases} \end{align*} \end{itemize} \end{theorem} \begin{proof} (a) First note that simple computations using \eqref{eq:def.alpha} with $\underline{\lambda}=\lambda$, the inequality in \eqref{eq:first_lambda}, the definition of $\gamma>0$ and the fact that $L \geq \mu$ show that \begin{align} \label{eq:viola} \alpha>\sqrt{(1+\sigma_u)^{-1}\sigma_u}\sqrt{\dfrac{\mu}{L}}=\gamma\sqrt{\dfrac{\mu}{L}},\qquad \lambda>\dfrac{\sigma_u}{L}, \end{align} which combined with Proposition \ref{pr:special01} and Theorem \ref{th:main_comp03}(a) gives the proof of (a). (b) The result follows from \eqref{eq:viola}, Proposition \ref{pr:special01} and Theorem \ref{th:main_comp03}(b). \end{proof} \section*{Acknowledgments} The author would like to thank Dr. Benar F. Svaiter for the fruitful discussions related to the first draft of this work. The author also thanks the three anonymous referees and the associate editor for their comments that significantly improved the manuscript. 
\appendix \section{Some auxiliary results} \begin{lemma} \label{lm:carinhoso} For all $k\geq 1$, the optimal value of the minimization problem, over $\lambda_1,\dots, \lambda_k>0$, \begin{align} \label{eq:min_prod} \begin{aligned} &\min \,\prod_{j=1}^{k}\left(1+\lambda_{j}\right)\\[2mm] &\emph{s.t.}\;\;\sum_{j=1}^k\dfrac{1}{\lambda_j^q}\leq c, \end{aligned} \end{align} where $c>0$ and $q\geq 1$, is given by \[ \left(1+\left(\dfrac{k}{c}\right)^{1/q}\right)^k. \] \end{lemma} \begin{proof} First consider the convex problem \begin{align} \label{eq:min_prod_log} \begin{aligned} &\min \,\sum_{j=1}^{k}\,\log (1+e^{t_j})\\[2mm] &\mbox{s.t.}\;\;\sum_{j=1}^k\dfrac{1}{e^{qt_j}}\leq c. \end{aligned} \end{align} Since the objective and constraint functions in \eqref{eq:min_prod_log} are convex and invariant under permutations on $(t_1,\dots, t_k)$, it follows that one of its solutions takes the form $(t,\dots, t)$. It is also clear that at any solution the inequality in \eqref{eq:min_prod_log} must hold as an equality. Hence, $k \frac{1}{e^{qt}}=c$, i.e., $e^t=\left(\frac{k}{c}\right)^{1/q}$. As a consequence, for all $(t_1,\dots, t_k)$ such that $\sum_{j=1}^k\frac{1}{e^{qt_j}}\leq c$, \begin{align} \label{eq:bourgain} \sum_{j=1}^{k}\,\log (1+e^{t_j})\geq k \log (1+e^t)=k \log \left(1+\left(\dfrac{k}{c}\right)^{1/q}\right) =\log\left(\left(1+\left(\dfrac{k}{c}\right)^{1/q}\right)^k\right). \end{align} Now let $\lambda_1,\dots, \lambda_k>0$ be such that $\sum_{j=1}^k\frac{1}{\lambda_j^q}\leq c$ and define $t_j:=\log(\lambda_j)$, for $j\in \{1,\dots, k\}$. Then, since in this case $\sum_{j=1}^k\frac{1}{e^{qt_j}}\leq c$, using \eqref{eq:bourgain} and some basic properties of logarithms we find \begin{align*} \prod_{j=1}^k\,(1+\lambda_j)=\prod_{j=1}^k\,(1+e^{t_j})&=e^{\log\left(\prod_{j=1}^k\,(1+e^{t_j})\right)}\\ &=e^{\sum_{j=1}^{k}\,\log (1+e^{t_j})}\\ &\geq e^{\log\left(\left(1+\left(\dfrac{k}{c}\right)^{1/q}\right)^k\right)}\\ &=\left(1+\left(\dfrac{k}{c}\right)^{1/q}\right)^k, \end{align*} which concludes the proof of the lemma. \end{proof} \begin{lemma} \label{lm:wolfe} The following holds for $q(\cdot)$ defined by \begin{align} \label{eq:def.q} q(x)=\inner{v}{x-y}+\frac{\mu}{2}\norm{x-y}^2-\varepsilon+\frac{1}{2\lambda}\norm{x-z}^2\qquad (x\in \mathcal{H}) \end{align} where $v,y,z\in \mathcal{H}$ and $\mu,\varepsilon,\lambda>0$. \begin{itemize} \item[\emph{(a)}] The (unique) global minimizer of $q(\cdot)$ is given by \[ x^*=\frac{1}{1+\lambda\mu}z+\frac{\lambda\mu}{1+\lambda\mu}y -\frac{\lambda}{1+\lambda\mu}v. \] \item[\emph{(b)}] We have, \begin{align*} \min_{x}\,q(x)= \dfrac{1}{2\lambda}\left[\norm{y-z}^2 -\left(\dfrac{\norm{\lambda v+y-z}^2}{1+\lambda\mu}+2\lambda\varepsilon\right)\right]. \end{align*} \item[\emph{(c)}] We have, \begin{align*} q(x)=\dfrac{1}{2\lambda}\left[\norm{y-z}^2 -\left(\dfrac{\norm{\lambda v+y-z}^2}{1+\lambda\mu}+2\lambda\varepsilon\right)\right] +\dfrac{1+\lambda\mu}{2\lambda}\norm{x-x^*}^2,\qquad \forall x\in \mathcal{H}. \end{align*} \end{itemize} \end{lemma} \begin{proof} (a) This follows directly from \eqref{eq:def.q} and some simple calculus. (b) Note first that \begin{align} \label{eq:opt.value} \min_x\,q(x)=q(x^*) &=\inner{v}{x^*-y}+\frac{\mu}{2}\norm{x^*-y}^2-\varepsilon+\frac{1}{2\lambda}\norm{x^*-z}^2. 
\end{align} Using the well-known identity $a\norm{p}^2+b\norm{q}^2=\frac{1}{a+b} \left[\norm{ap+bq}^2+ab\norm{p-q}^2\right]$ with $a=\mu$, $b=1/ \lambda$, $p=x^*-y$ and $q=x^*-z$, and (a) we find \begin{align} \label{eq:square.xstar} \nonumber \mu \norm{x^*-y}^2+\frac{1}{\lambda}\norm{x^*-z}^2&= \dfrac{\lambda}{1+\lambda\mu}\left[\left\|\underbrace{\dfrac{1+\lambda\mu}{\lambda}x^*-\mu y-\dfrac{1} {\lambda}z}_{-v}\right\|^2+\dfrac{\mu}{\lambda}\norm{z-y}^2\right]\\ &=\dfrac{\lambda}{1+\lambda\mu}\left[\norm{v}^2+\dfrac{\mu}{\lambda}\norm{z-y}^2\right]. \end{align} On the other hand, we also have $x^*-y=\frac{1}{1+\lambda\mu}(z-y)-\frac{\lambda}{1+\lambda\mu}v$, which in turn gives \begin{align} \label{eq:inner.v} \inner{v}{x^*-y}=\dfrac{1}{1+\lambda\mu}\left[\inner{v}{z-y}-\lambda\norm{v}^2\right]. \end{align} Direct use of \eqref{eq:opt.value}, \eqref{eq:square.xstar} and \eqref{eq:inner.v} yields \begin{align*} \min_x\,q(x)+\varepsilon&= \dfrac{1}{1+\lambda\mu}\left[\inner{v}{z-y}-\lambda\norm{v}^2\right] +\dfrac{\lambda}{2(1+\lambda\mu)}\left[\norm{v}^2+\dfrac{\mu}{\lambda}\norm{z-y}^2\right]\\ &=\dfrac{1}{2\lambda(1+\lambda\mu)} \left[2\inner{\lambda v}{z-y}-\norm{\lambda v}^2+\lambda\mu \norm{z-y}^2\right]\\ &= \dfrac{1}{2\lambda(1+\lambda\mu)}\left[(1+\lambda\mu)\norm{y-z}^2 -\norm{\lambda v+y-z}^2\right] \\ &= \dfrac{1}{2\lambda}\left[\norm{y-z}^2 -\dfrac{\norm{\lambda v+y-z}^2}{1+\lambda\mu}\right], \end{align*} which then yields \begin{align*} \min_x\,q(x)&=\dfrac{1}{2\lambda}\left[\norm{y-z}^2 -\dfrac{\norm{\lambda v+y-z}^2}{1+\lambda\mu}\right]-\varepsilon\\ &=\dfrac{1}{2\lambda}\left[\norm{y-z}^2 -\left(\dfrac{\norm{\lambda v+y-z}^2}{1+\lambda\mu}+2\lambda\varepsilon\right)\right]. \end{align*} (c) This follows from (b) and Taylor's theorem applied to $q(\cdot)$. \end{proof} \def\cprime{$'$} \end{document}
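To make the accelerated inexact proximal-gradient method above easier to experiment with, here is a minimal Python sketch of its update rules. It is not part of the original manuscript and only covers a special case: the exact-prox setting ($\hat\sigma=0$, hence $\varepsilon_k=0$), $\Omega=\mathcal{H}=\mathbb{R}^n$ (so $P_\Omega$ is the identity and $z^k=\widetilde x^k$), and the illustrative test problem $f(x)=\tau\|x\|_1$, $g(x)=\tfrac{1}{2}x^\top Qx-b^\top x$ with $Q$ symmetric positive definite; all function names and parameter values below are the sketch's own choices.

```python
import numpy as np

def soft_threshold(w, t):
    """Exact proximal map of t*||.||_1 evaluated at w."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def accelerated_inexact_prox_grad(Q, b, tau, x0, y0, sigma_u=0.9, iters=200):
    """Sketch of the accelerated proximal-gradient scheme (exact-prox case)
    for min_x tau*||x||_1 + 0.5*x'Qx - b'x with Q symmetric positive definite."""
    eigs = np.linalg.eigvalsh(Q)
    mu, L = eigs[0], eigs[-1]              # strong convexity / Lipschitz constants of g
    grad_g = lambda x: Q @ x - b
    # step size: largest root of L^2*t^2 - sigma_u^2*mu*t - sigma_u^2 = 0
    lam = sigma_u / (np.sqrt((sigma_u * mu / 2.0) ** 2 + L ** 2) - sigma_u * mu / 2.0)
    x, y, A = x0.astype(float), y0.astype(float), 0.0
    for _ in range(iters):
        # step 1: coefficient a_{k+1}, extrapolated point xtilde^k, exact prox step
        a = ((1 + 2 * mu * A) * lam
             + np.sqrt(((1 + 2 * mu * A) * lam) ** 2 + 4 * (1 + mu * A) * A * lam)) / 2.0
        xt = ((a - mu * A * lam) * x + (A + mu * A * lam) * y) / (A + a)
        z = xt                              # P_Omega is the identity here
        w = xt - lam * grad_g(z)
        y_new = soft_threshold(w, lam * tau)
        u = (w - y_new) / lam               # u lies in the subdifferential of f at y_new
        # step 2: update A, the residual v and the point x
        A_new = A + a
        v = u + grad_g(y_new)
        x = ((1 + mu * A) * x + mu * a * y_new - a * v) / (1 + mu * A_new)
        y, A = y_new, A_new
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    M = rng.standard_normal((30, 20))
    Q = M.T @ M + 0.5 * np.eye(20)          # positive definite, so g is strongly convex
    b = rng.standard_normal(20)
    y = accelerated_inexact_prox_grad(Q, b, tau=0.1, x0=np.zeros(20), y0=np.zeros(20))
    print("approximate minimizer norm:", np.linalg.norm(y))
```

Replacing the soft-thresholding line by any other exactly computable proximal operator gives the corresponding instance for a different $f$; exercising the inexact case $\hat\sigma>0$ would require an inner solver that satisfies the relative-error condition of the definition above.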
arXiv
Calderón–Zygmund lemma In mathematics, the Calderón–Zygmund lemma is a fundamental result in Fourier analysis, harmonic analysis, and singular integrals. It is named for the mathematicians Alberto Calderón and Antoni Zygmund. Given an integrable function  f  : Rd → C, where Rd denotes Euclidean space and C denotes the complex numbers, the lemma gives a precise way of partitioning Rd into two sets: one where  f  is essentially small; the other a countable collection of cubes where  f  is essentially large, but where some control of the function is retained. This leads to the associated Calderón–Zygmund decomposition of  f , wherein  f  is written as the sum of "good" and "bad" functions, using the above sets. Covering lemma Let  f  : Rd → C be integrable and α be a positive constant. Then there exists an open set Ω such that: (1) Ω is a disjoint union of open cubes, Ω = ∪k Qk, such that for each Qk, $\alpha \leq {\frac {1}{m(Q_{k})}}\int _{Q_{k}}|f(x)|\,dx\leq 2^{d}\alpha .$ (2) | f (x)| ≤ α almost everywhere in the complement F of Ω. Here, $m(Q_{k})$ denotes the measure of the set $Q_{k}$. Calderón–Zygmund decomposition Given  f  as above, we may write  f  as the sum of a "good" function g and a "bad" function b,  f  = g + b. To do this, we define $g(x)={\begin{cases}f(x),&x\in F,\\{\frac {1}{m(Q_{j})}}\int _{Q_{j}}f(t)\,dt,&x\in Q_{j},\end{cases}}$ and let b =  f  − g. Consequently we have that $b(x)=0,\ x\in F$ ${\frac {1}{m(Q_{j})}}\int _{Q_{j}}b(x)\,dx=0$ for each cube Qj. The function b is thus supported on a collection of cubes where  f  is allowed to be "large", but has the beneficial property that its average value is zero on each of these cubes. Meanwhile, |g(x)| ≤ α for almost every x in F, and on each cube in Ω, g is equal to the average value of  f  over that cube, which by the covering chosen is not more than 2^dα. See also • Singular integral operators of convolution type, for a proof and application of the lemma in one dimension. • Rising sun lemma References • Calderon A. P., Zygmund, A. (1952), "On the existence of certain singular integrals", Acta Math, 88: 85–139, doi:10.1007/BF02392130, S2CID 121580197 • Hörmander, Lars (1990), The analysis of linear partial differential operators, I. Distribution theory and Fourier analysis (2nd ed.), Springer-Verlag, ISBN 3-540-52343-X • Stein, Elias (1970). "Chapters I–II". Singular Integrals and Differentiability Properties of Functions. Princeton University Press. ISBN 9780691080796.
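To illustrate how the covering lemma is usually obtained, the following Python sketch runs the dyadic stopping-time argument in dimension d = 1 (so 2^d = 2) on a uniform grid over [0, 1). It is only a discrete approximation added for illustration; the sample function, the grid size and the threshold α are arbitrary choices and not part of the lemma.

```python
import numpy as np

def cz_decomposition(f_vals, alpha):
    """Dyadic Calderon-Zygmund selection on [0, 1) for grid samples f_vals of f
    (grid length must be a power of two).  Returns the selected dyadic intervals,
    whose averages of |f| lie in (alpha, 2*alpha], and the decomposition f = g + b."""
    f_vals = np.asarray(f_vals, dtype=float)
    n = len(f_vals)
    assert n & (n - 1) == 0, "grid length must be a power of two"
    selected = []

    def split(lo, hi):
        avg = np.mean(np.abs(f_vals[lo:hi]))
        if avg > alpha:               # average exceeds alpha for the first time: select and stop
            selected.append((lo, hi))
            return
        if hi - lo == 1:              # finest resolution: here |f| <= alpha (up to discretization)
            return
        mid = (lo + hi) // 2
        split(lo, mid)
        split(mid, hi)

    # the starting cube must have average <= alpha (true for integrable f and large enough cubes)
    if np.mean(np.abs(f_vals)) > alpha:
        raise ValueError("alpha too small for the starting interval")
    split(0, n)

    g = f_vals.copy()
    for lo, hi in selected:
        g[lo:hi] = np.mean(f_vals[lo:hi])   # g = average of f on each selected cube
    b = f_vals - g                          # b is supported on the cubes, with mean zero on each
    return selected, g, b

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 256, endpoint=False)
    f = 1.0 / np.sqrt(np.abs(x - 0.3) + 1e-3)          # integrable spike
    cubes, g, b = cz_decomposition(f, alpha=5.0)
    for lo, hi in cubes:
        avg = np.mean(np.abs(f[lo:hi]))
        assert 5.0 < avg <= 2 * 5.0 + 1e-9             # alpha < average <= 2^d alpha, d = 1
    print(len(cubes), "selected dyadic intervals")
```

Each selected interval satisfies α < (1/m(Q))∫Q | f | ≤ 2α precisely because its parent interval, of twice the measure, still had average at most α — the same argument that produces the factor 2^d in higher dimensions.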
Wikipedia
Simple magic cube A simple magic cube is the lowest of six basic classes of magic cubes. These classes are based on extra features required. The simple magic cube requires only the basic features a cube requires to be magic. Namely, all lines parallel to the faces, and all 4 space diagonals sum correctly.[1] i.e. all "1-agonals" and all "3-agonals" sum to $S={\frac {m(m^{3}+1)}{2}}.$ No planar diagonals (2-agonals) are required to sum correctly, so there are probably no magic squares in the cube. See also • Magic square • Magic cube classes References 1. Pickover, Clifford A. (2002). The Zen of Magic Squares, Circles, and Stars: An Exhibition of Surprising Structures Across Dimensions. Princeton University Press. p. 400. ISBN 9780691070414. External links • Aale de Winkel - Magic hypercubes encyclopedia • Harvey Heinz - large site on magic squares and cubes • Christian Boyer - Multimagic cubes • John Hendricks site on magic hypercubes
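A compact way to restate the definition is as a machine check (not taken from the article): the Python sketch below tests whether an m × m × m array containing the numbers 1, …, m³ is a simple magic cube, i.e. whether every row, column and pillar (the 1-agonals) and the 4 space diagonals (3-agonals) sum to S = m(m³ + 1)/2, while deliberately not testing any planar diagonals.

```python
import numpy as np

def is_simple_magic_cube(cube):
    """cube: an m x m x m array containing each of 1..m**3 exactly once."""
    cube = np.asarray(cube)
    m = cube.shape[0]
    assert cube.shape == (m, m, m)
    S = m * (m ** 3 + 1) // 2                  # magic constant

    # 1-agonals: summing along each axis checks all rows, columns and pillars
    for axis in range(3):
        if not np.all(cube.sum(axis=axis) == S):
            return False

    # 3-agonals: the four space diagonals
    i = np.arange(m)
    diagonals = [
        cube[i, i, i],
        cube[i, i, m - 1 - i],
        cube[i, m - 1 - i, i],
        cube[m - 1 - i, i, i],
    ]
    return all(d.sum() == S for d in diagonals)
```

A cube that passes this test and whose planar diagonals also sum correctly would fall into one of the higher classes referred to in the article.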
Wikipedia
Find the value of $x$ such that $\sqrt{x+ 7} = 9$. Since $\sqrt{x+7} = 9$, we know that 9 is the number whose square is $x+7$. Therefore, we have \[x+7 = 9^2.\] This gives us $x + 7= 81$, so $x= \boxed{74}$.
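A quick check of the answer, added to the solution above: substituting $x = 74$ back into the original equation gives \[\sqrt{74 + 7} = \sqrt{81} = 9,\] as required.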
Math Dataset
Measurements of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ spectra in proton-proton interactions at 20, 31, 40, 80 and 158 GeV/c with the NA61/SHINE spectrometer at the CERN SPS (1705.02467) NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, B. Baatar, M. Baszczyk, S. Bhosale, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, H. Cherif, M. Ćirković, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, W. Dominik, P. Dorosz, J. Dumarchez, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, C. Francois, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, F. Guber, A. Haesler, A.E. Hervé, J. Hylen, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, E. Kaptur, M. Kiełbowicz, V.A. Kireyeu, V. Klochkov, N. Knezević, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, V. Kovalenko, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, W. Kucewicz, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, B. Lundberg, V.V. Lyubushkin, B. Łysakowski, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manić, A. Marchionni, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, A. Merzlaya, B. Messerly, Ł. Mik, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, M. Naskręt, V. Ozvenchuk, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, P. Podlaski, B.A. Popov, M. Posiadała, S. Puławski, J. Puzović, R. Rameika, W. Rauch, M. Ravonel, R. Renfordt, E. Richter-Wąs, D. Röhrich, E. Rondio, M. Roth, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Rybicki, A. Sadovsky, K. Schmidt, I. Selyuzhenkov, A. Seryakov, P. Seyboth, M. Słodkowski, A. Snoch, P. Staszel, G. Stefanek, J. Stepaniak, M. Strikhanov, H. Ströbele, T. Šuša, M. Szuba, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, A. Toia, R. Tsenov, L. Turko, R. Ulrich, M. Unger, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, M. Walewski, A. Wickremasinghe, C. Wilkinson, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, E.D. Zimmerman, R. Zwaska Sept. 27, 2017 nucl-ex Measurements of inclusive spectra and mean multiplicities of $\pi^\pm$, K$^\pm$, p and $\bar{\textrm{p}}$ produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} = $ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively) were performed at the CERN Super Proton Synchrotron using the large acceptance NA61/SHINE hadron spectrometer. Spectra are presented as function of rapidity and transverse momentum and are compared to predictions of current models. The measurements serve as the baseline in the NA61/SHINE study of the properties of the onset of deconfinement and search for the critical point of strongly interacting matter. Measurement of Meson Resonance Production in $\pi^{-} + $C Interactions at SPS energies (1705.08206) A. Aduszkiewicz, Y. Ali, E.V. Andronov, T. Antićić, B. Baatar, M. Baszczyk, S. Bhosale, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, H. Cherif, M. Ćirković, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, W. Dominik, P. Dorosz, J. Dumarchez, R. Engel, A. Ereditato, S. Faas, G.A. Feofilov, Z. Fodor, C. Francois, A. Garibov, X. Garrido, M. Gaździcki, M. Golubeva, K. Grebieszkow, F. Guber, A. Haesler, A.E. Hervé, J. Hylen, S.N. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, E. Kaptur, M. Kiełbowicz, V.A. Kireyeu, V. Klochkov, V.I. Kolesnikov, D. Kolev, A. Korzenev, V.N. Kovalenko, K. Kowalik, S. Kowalski, M. 
Koziel, A. Krasnoperov, W. Kucewicz, M. Kuich, A. Kurepin, D. Larsen, A. László, T.V. Lazareva, M. Lewicki, B. Lundberg, B. Łysakowski, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manić, A. Marchionni, A. Marcinek, A.D. Marino, I.C. Mariş, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, A.O. Merzlaya, B. Messerly, Ł. Mik, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, M. Naskręt, V. Ozvenchuk, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, P. Podlaski, B.A. Popov, M. Posiadała, S. Puławski, J. Puzović, R. Rameika, W. Rauch, M. Ravonel, R. Renfordt, E. Richter-Wąs, D. Röhrich, E. Rondio, M. Roth, M. Ruprecht, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Rybicki, A. Sadovsky, K. Schmidt, I. Selyuzhenkov, A.Yu. Seryakov, P. Seyboth, M. Słodkowski, A. Snoch, P. Staszel, G. Stefanek, J. Stepaniak, M. Strikhanov, H. Ströbele, T. Šuša, M. Szuba, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, A. Toia, R. Tsenov, L. Turko, R. Ulrich, M. Unger, F.F. Valiev, D. Veberič, V.V. Vechernin, M. Walewski, A. Wickremasinghe, C. Wilkinson, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, E.D. Zimmerman, R. Zwaska May 23, 2017 nucl-ex, astro-ph.HE We present measurements of $\rho^0$, $\omega$ and K$^{*0}$ spectra in $\pi^{-} + $C production interactions at 158 GeV/c and $\rho^0$ spectra at 350 GeV/c using the NA61/SHINE spectrometer at the CERN SPS. Spectra are presented as a function of the Feynman's variable $x_\text{F}$ in the range $0 < x_\text{F} < 1$ and $0 < x_\text{F} < 0.5$ for 158 GeV/c and 350 GeV/c respectively. Furthermore, we show comparisons with previous measurements and predictions of several hadronic interaction models. These measurements are essential for a better understanding of hadronic shower development and for improving the modeling of cosmic ray air showers. Two-particle correlations in azimuthal angle and pseudorapidity in inelastic p+p interactions at the CERN Super Proton Synchrotron (1610.00482) NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Cirkovic, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Herve, M. Hierholzer, J. Hylen, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, M. Kielbowicz, J. Kisiel, N. Knezevic, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, V. Kovalenko, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. Laszlo, M. Lewicki, B. Lundberg, V.V. Lyubushkin, M. Mackowiak-Pawlowska, B. Maksiak, A.I. Malakhov, D. Manic, A. Marchionni, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, A. Merzlaya, B. Messerly, G.B. Mills, S. Morozov, S. Mrowczynski, Y. Nagai, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, V. Ozvenchuk, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Planeta, B.A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, R. Rameika, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Rohrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Rybicki, A. Sadovsky, K. Sakashita, R. 
Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, M. Walewski, A. Wickremasinghe, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli, E.D. Zimmerman, R. Zwaska Feb. 7, 2017 nucl-ex Results on two-particle $\Delta\eta\Delta\phi$ correlations in inelastic p+p interactions at 20, 31, 40, 80, and 158~GeV/c are presented. The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. The data show structures which can be attributed mainly to effects of resonance decays, momentum conservation, and quantum statistics. The results are compared with the EPOS and UrQMD models. Measurements of $\pi^{\pm}$ differential yields from the surface of the T2K replica target for incoming 31 GeV/c protons with the NA61/SHINE spectrometer at the CERN SPS (1603.06774) NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, M. Ajaz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blümer, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Ćirković, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Hervé, M. Hierholzer, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manic, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, T. Nakadaira, M. Naskręt, M. Nirkko, K. Nishikawa, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, B.A. Popov, M. Posiadała-Zezula, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Wąs, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, K. Yarritu, L. Zambelli, E.D. Zimmerman, M. Friend, V. Galymov, M. Hartz, T. Hiraki, A. Ichikawa, H. Kubo, K. Matsuoka, A. Murakami, T. Nakaya, K. Suzuki, M. Tzanov, M. Yu Nov. 29, 2016 hep-ex, physics.ins-det Measurements of particle emission from a replica of the T2K 90 cm-long carbon target were performed in the NA61/SHINE experiment at CERN SPS, using data collected during a high-statistics run in 2009. 
An efficient use of the long-target measurements for neutrino flux predictions in T2K requires dedicated reconstruction and analysis techniques. Fully-corrected differential yields of $\pi^\pm$-mesons from the surface of the T2K replica target for incoming 31 GeV/c protons are presented. A possible strategy to implement these results into the T2K neutrino beam predictions is discussed and the propagation of the uncertainties of these results to the final neutrino flux is performed. Multiplicity and transverse momentum fluctuations in inelastic proton-proton interactions at the CERN Super Proton Synchrotron (1510.00163) NA61/SHINE Collaboration: A. Aduszkiewicz, Y. Ali, E. Andronov, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, M. Cirkovic, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G. A. Feofilov, Z. Fodor, A. Garibov, M. Gazdzicki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A. Herve, M. Hierholzer, S. Igolkin, A. Ivashkin, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. Laszlo, M. Lewicki, V. V. Lyubushkin, M. Mackowiak-Pawlowska, B. Maksiak, A. I. Malakhov, D. Manic, A. Marcinek, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, S. Morozov, S. Mrowczynski, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, A. D. Panagiotou, M. Pavin, O. Petukhov, C. Pistillo, R. Planeta, B. A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Rohrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli Aug. 30, 2016 hep-ex, nucl-ex Measurements of multiplicity and transverse momentum fluctuations of charged particles were performed in inelastic p+p interactions at 20, 31, 40, 80 and 158 GeV/c beam momentum. Results for the scaled variance of the multiplicity distribution and for three strongly intensive measures of multiplicity and transverse momentum fluctuations \$\Delta[P_{T},N]\$, \$\Sigma[P_{T},N]\$ and \$\Phi_{p_T}\$ are presented. For the first time the results on fluctuations are fully corrected for experimental biases. The results on multiplicity and transverse momentum fluctuations significantly deviate from expectations for the independent particle production. They also depend on charges of selected hadrons. The string-resonance Monte Carlo models EPOS and UrQMD do not describe the data. The scaled variance of multiplicity fluctuations is significantly higher in inelastic p+p interactions than in central Pb+Pb collisions measured by NA49 at the same energy per nucleon. This is in qualitative disagreement with the predictions of the Wounded Nucleon Model. 
Within the statistical framework the enhanced multiplicity fluctuations in inelastic p+p interactions can be interpreted as due to event-by-event fluctuations of the fireball energy and/or volume. Production of $\Lambda$ hyperons in inelastic p+p interactions at 158 GeV/$c$ (1510.03720) A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, M. Ćirković, T. Czopowicz, A. Damyanova, N. Davis, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, A. Garibov, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A.E. Hervé, M. Hierholzer, S. Igolkin, A. Ivashkin, S.R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, B. Maksiak, A.I. Malakhov, D. Manić, A. Marcinek, A.D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G.B. Mills, S. Morozov, S. Mrówczyński, Y. Nagai, T. Nakadaira, M. Naskręt, M. Nirkko, K. Nishikawa, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, B.A. Popov, M. Posiadała, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Wąs, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B.T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, E.D. Zimmerman (The NA61/SHINE Collaboration) April 25, 2016 hep-ex, nucl-ex Inclusive production of $\Lambda$-hyperons was measured with the large acceptance NA61/SHINE spectrometer at the CERN SPS in inelastic p+p interactions at beam momentum of 158~\GeVc. Spectra of transverse momentum and transverse mass as well as distributions of rapidity and x$_{_F}$ are presented. The mean multiplicity was estimated to be $0.120\,\pm0.006\;(stat.)\,\pm 0.010\;(sys.)$. The results are compared with previous measurements and predictions of the EPOS, UrQMD and FRITIOF models. Measurements of $\pi^\pm$, $K^\pm$, $K^0_S$, $\Lambda$ and proton production in proton-carbon interactions at 31 GeV/$c$ with the NA61/SHINE spectrometer at the CERN SPS (1510.02703) N. Abgrall, A. Aduszkiewicz, Y. Ali, E. Andronov, T. Antićić, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blümer, M. Bogomilov, A. Brandin, A. Bravar, J. Brzychczyk, S.A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, A. Damyanova, N. Davis, S. Debieux, H. Dembinski, M. Deveaux, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G.A. Feofilov, Z. Fodor, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, A. Herve, M. Hierholzer, S. Igolkin, A. Ivashkin, D. Joković, S. R. Johnson, K. Kadija, A. Kapoyannis, E. Kaptur, D. Kiełczewska, J. Kisiel, T. Kobayashi, V.I. Kolesnikov, D. Kolev, V.P. 
Kondratiev, A. Korzenev, K. Kowalik, S. Kowalski, M. Koziel, A. Krasnoperov, M. Kuich, A. Kurepin, D. Larsen, A. László, M. Lewicki, V.V. Lyubushkin, M. Maćkowiak-Pawłowska, Z. Majka, B. Maksiak, A.I. Malakhov, A. Marchionni, D. Manić, A. Marcinek, A. D. Marino, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G.L. Melkumov, B. Messerly, G. B. Mills, S. Morozov, S. Mrówczyński, S. Murphy, Y. Nagai, T. Nakadaira, M. Naskret, M. Nirkko, K. Nishikawa, T. Palczewski, A.D. Panagiotou, V. Paolone, M. Pavin, O. Petukhov, C. Pistillo, R. Płaneta, J. Pluta, B.A. Popov, M. Posiadała-Zezula, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, E. Richter-Was, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, B. T. Rumberger, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, R. Sarnecki, K. Schmidt, T. Sekiguchi, I. Selyuzhenkov, A. Seryakov, P. Seyboth, D. Sgalaberna, M. Shibata, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, A. Taranenko, A. Tefelska, D. Tefelski, V. Tereshchenko, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V.V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, K. Yarritu, L. Zambelli, E. D. Zimmerman Feb. 24, 2016 hep-ex, nucl-ex Measurements of hadron production in p+C interactions at 31 GeV/c are performed using the NA61/ SHINE spectrometer at the CERN SPS. The analysis is based on the full set of data collected in 2009 using a graphite target with a thickness of 4% of a nuclear interaction length. Inelastic and production cross sections as well as spectra of $\pi^\pm$, $K^\pm$, p, $K^0_S$ and $\Lambda$ are measured with high precision. These measurements are essential for improved calculations of the initial neutrino fluxes in the T2K long-baseline neutrino oscillation experiment in Japan. A comparison of the NA61/SHINE measurements with predictions of several hadroproduction models is presented. Measurement of negatively charged pion spectra in inelastic p+p interactions at $p_{lab}$ = 20, 31, 40, 80 and 158 GeV/c (1310.2417) NA61/SHINE Collaboration: N. Abgrall, A. Aduszkiewicz, Y. Ali, T. Anticic, N. Antoniou, B. Baatar, F. Bay, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, G. A. Feofilov, Z. Fodor, A. Fulop, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, T. Hasegawa, M. Hierholzer, R. Idczak, S. Igolkin, A. Ivashkin, D. Joković, K. Kadija, A. Kapoyannis, E. Kaptur, D. Kiełczewska, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, P. Kovesarki, S. Kowalski, A. Krasnoperov, A. Kurepin, D. Larsen, A. László, V. V. Lyubushkin, M. Maćkowiak-Pawłowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Manić, A. Marcinek, V. Marin, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, St. Mrówczyński, S. Murphy, T. Nakadaira, M. Nirkko, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, C. Pistillo, W. Peryt, O. Petukhov, R. Płaneta, J. Pluta, B. A. Popov, M. Posiadała, S. Puławski, J. Puzović, W. Rauch, M. Ravonel, A. Redij, R. Renfordt, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savić, K. Schmidt, T. Sekiguchi, P. 
Seyboth, D. Sgalaberna, M. Shibata, R. Sipos, E. Skrzypczak, M. Słodkowski, P. Staszel, G. Stefanek, J. Stepaniak, H. Ströbele, T. Šuša, M. Szuba, M. Tada, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberič, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Włodarczyk, A. Wojtaszek-Szwarc, O. Wyszyński, L. Zambelli, W. Zipper We present experimental results on inclusive spectra and mean multiplicities of negatively charged pions produced in inelastic p+p interactions at incident projectile momenta of 20, 31, 40, 80 and 158 GeV/c ($\sqrt{s} = $ 6.3, 7.7, 8.8, 12.3 and 17.3 GeV, respectively). The measurements were performed using the large acceptance NA61/SHINE hadron spectrometer at the CERN Super Proton Synchrotron. Two-dimensional spectra are determined in terms of rapidity and transverse momentum. Their properties such as the width of rapidity distributions and the inverse slope parameter of transverse mass spectra are extracted and their collision energy dependences are presented. The results on inelastic p+p interactions are compared with the corresponding data on central Pb+Pb collisions measured by the NA49 experiment at the CERN SPS. The results presented in this paper are part of the NA61/SHINE ion program devoted to the study of the properties of the onset of deconfinement and search for the critical point of strongly interacting matter. They are required for interpretation of results on nucleus-nucleus and proton-nucleus collisions. Measurements of Production Properties of K0S mesons and Lambda hyperons in Proton-Carbon Interactions at 31 GeV/c (1309.1997) N. Abgrall, A. Aduszkiewicz, Y. Ali, T. Anticic, N. Antoniou, J. Argyriades, B. Baatar, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, W. Brooks, J. Brzychczyk, S. A. Bunyatov, O. Busygina, P. Christakoglou, T. Czopowicz, N. Davis, S. Debieux, H. Dembinski, F. Diakonos, S. Di Luise, W. Dominik, T. Drozhzhova, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, L. Esposito, G. A. Feofilov, Z. Fodor, A. Fulop, M. Gaździcki, M. Golubeva, K. Grebieszkow, A. Grzeszczuk, F. Guber, H. Hakobyan, A. Haesler, T. Hasegawa, M. Hierholzer, R. Idczak, S. Igolkin, Y. Ivanov, A. Ivashkin, D. Jokovic, K. Kadija, A. Kapoyannis, N. Katrynska, E. Kaptur, D. Kielczewska, D. Kikola, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, A. Krasnoperov, S. Kuleshov, A. Kurepin, D. Larsen, A. Laszlo, V. V. Lyubushkin, M. Mackowiak-Pawlowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Maletic, D. Manic, A. Marchionni, A. Marcinek, V. Marin, K. Marton, H.-J. Mathes, T. Matulewicz, V. Matveev, G. L. Melkumov, St. Mrówczyński, S. Murphy, T. Nakadaira, M. Nirkko, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, W. Peryt, C. Pistillo, A. Redij, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadała, S. Puławski, J. Puzovic, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Röhrich, E. Rondio, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savic, K. Schmidt, T. Sekiguchi, P. Seyboth, M. Shibata, R. Sipos, E. Skrzypczak, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, T. Susa, M. Szuba, M. Tada, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, L. Vinogradov, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek, O. Wyszyński, L. Zambelli, W. Zipper Sept. 
8, 2013 nucl-ex, physics.acc-ph Spectra of K0S mesons and Lambda hyperons were measured in p+C interactions at 31 GeV/c with the large acceptance NA61/SHINE spectrometer at the CERN SPS. The data were collected with an isotropic graphite target with a thickness of 4% of a nuclear interaction length. Interaction cross sections, charged pion spectra, and charged kaon spectra were previously measured using the same data set. Results on K0S and Lambda production in p+C interactions serve as reference for the understanding of the enhancement of strangeness production in nucleus-nucleus collisions. Moreover, they provide important input for the improvement of neutrino flux predictions for the T2K long baseline neutrino oscillation experiment in Japan. Inclusive production cross sections for K0S and Lambda are presented as a function of laboratory momentum in intervals of the laboratory polar angle covering the range from 0 up to 240 mrad. The results are compared with predictions of several hadron production models. The K0S mean multiplicity in production processes <n_K0S> and the inclusive cross section for K0S production were measured and amount to 0.127 +- 0.005 (stat) +- 0.022 (sys) and 29.0 +- 1.6 (stat) +- 5.0 (sys) mb, respectively. Pion emission from the T2K replica target: method, results and application (1207.2114) N. Abgrall, A. Aduszkiewicz, T. Anticic, N. Antoniou, J. Argyriades, B. Baatar, A. Blondel, J. Blumer, M. Bogomilov, A. Bravar, W. Brooks, J. Brzychczyk, A. Bubak, S. A. Bunyatov, O. Busygina, P. Christakoglou, P. Chung, T. Czopowicz, N. Davis, S. Debieux, S. Di Luise, W. Dominik, J. Dumarchez, K. Dynowski, R. Engel, A. Ereditato, L. S. Esposito, G. A. Feofilov, Z. Fodor, A. Ferrero, A. Fulop, M. Gazdzicki, M. Golubeva, B. Grabez, K. Grebieszkow, A. Grzeszczuk, F. Guber, A. Haesler, H. Hakobyan, T. Hasegawa, R. Idczak, S. Igolkin, Y. Ivanov, A. Ivashkin, K. Kadija, A. Kapoyannis, N. Katrynska, D. Kielczewska, D. Kikola, M. Kirejczyk, J. Kisiel, T. Kiss, S. Kleinfelder, T. Kobayashi, O. Kochebina, V. I. Kolesnikov, D. Kolev, V. P. Kondratiev, A. Korzenev, S. Kowalski, A. Krasnoperov, S. Kuleshov, A. Kurepin, R. Lacey, D. Larsen, A. Laszlo, V. V. Lyubushkin, M. Mackowiak-Pawlowska, Z. Majka, B. Maksiak, A. I. Malakhov, D. Maletic, A. Marchionni, A. Marcinek, I. Maris, V. Marin, K. Marton, T. Matulewicz, V. Matveev, G. L. Melkumov, M. Messina, St. Mrowczynski, S. Murphy, T. Nakadaira, K. Nishikawa, T. Palczewski, G. Palla, A. D. Panagiotou, T. Paul, W. Peryt, O. Petukhov, R. Planeta, J. Pluta, B. A. Popov, M. Posiadala, S. Pulawski, J. Puzovic, W. Rauch, M. Ravonel, R. Renfordt, A. Robert, D. Rohrich, E. Rondio, B. Rossi, M. Roth, A. Rubbia, A. Rustamov, M. Rybczynski, A. Sadovsky, K. Sakashita, M. Savic, T. Sekiguchi, P. Seyboth, M. Shibata, R. Sipos, E. Skrzypczak, M. Slodkowski, P. Staszel, G. Stefanek, J. Stepaniak, C. Strabel, H. Strobele, T. Susa, M. Szuba, M. Tada, A. Taranenko, V. Tereshchenko, T. Tolyhi, R. Tsenov, L. Turko, R. Ulrich, M. Unger, M. Vassiliou, D. Veberic, V. V. Vechernin, G. Vesztergombi, A. Wilczek, Z. Wlodarczyk, A. Wojtaszek-Szwarc, O. Wyszynski, L. Zambelli, W. Zipper (The NA61/SHINE Collaboration), V. Galymov, M. Hartz, A. K. Ichikawa, H. Kubo, A. D. Marino, K. Matsuoka, A. Murakami, T. Nakaya, K. Suzuki, T. Yuan, E. D. Zimmerman Nov. 28, 2012 hep-ex, nucl-ex The T2K long-baseline neutrino oscillation experiment in Japan needs precise predictions of the initial neutrino flux. 
The highest precision can be reached based on detailed measurements of hadron emission from the same target as used by T2K exposed to a proton beam of the same kinetic energy of 30 GeV. The corresponding data were recorded in 2007-2010 by the NA61/SHINE experiment at the CERN SPS using a replica of the T2K graphite target. In this paper details of the experiment, data taking, data analysis method and results from the 2007 pilot run are presented. Furthermore, the application of the NA61/SHINE measurements to the predictions of the T2K initial neutrino flux is described and discussed. The Power Supply System of the CLEO III Silicon Detector (hep-ex/0103037) E. von Toerne, J. Burns, J. Duboscq, E. Eckhart, K. Honscheid, H. Kagan R. Kass, D. Larsen, C. Rush, S. Smith, J. B. Thayer March 21, 2001 hep-ex The CLEO III detector has recently commenced data taking at the Cornell electron Storage Ring (CESR). One important component of this detector is a 4 layer double-sided silicon tracker with 93% solid angle coverage. This detector ranges in size and number of readout channels between the LEP and LHC silicon detectors. In order to reach the detector performance goals of signal-to-noise ratios greater than 15:1 low noise front-end electronics together with highly regulated low noise power supplies were used. In this paper we describe the low-noise power supply system and associated monitoring and safety features used by the CLEO III silicon tracker.
CommonCrawl
\begin{definition}[Definition:Supremum Metric/Bounded Continuous Mappings] Let $M_1 = \struct {A_1, d_1}$ and $M_2 = \struct {A_2, d_2}$ be metric spaces. Let $A$ be the set of all continuous mappings $f: M_1 \to M_2$ which are also bounded. Let $d: A \times A \to \R$ be the function defined as: :$\ds \forall f, g \in A: \map d {f, g} := \sup_{x \mathop \in A_1} \map {d_2} {\map f x, \map g x}$ where $\sup$ denotes the supremum. $d$ is known as the '''supremum metric''' on $A$. \end{definition}
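As a small illustrative example (not part of the ProofWiki entry): take $M_1 = M_2$ to be $\R$ with the usual metric, and let $\map f x = \sin x$ and $\map g x = 0$, both of which are bounded continuous mappings. Then:
:$\ds \map d {f, g} = \sup_{x \mathop \in \R} \size {\map f x - \map g x} = \sup_{x \mathop \in \R} \size {\sin x} = 1$
so the supremum metric records the largest pointwise distance between the two mappings.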
ProofWiki
Bertha has 6 daughters and no sons. Some of her daughters have 6 daughters, and the rest have none. Bertha has a total of 30 daughters and granddaughters, and no great-granddaughters. How many of Bertha's daughters and granddaughters have no daughters? Bertha has $30 - 6 = 24$ granddaughters, none of whom have any daughters. The granddaughters are the children of $24/6 = 4$ of Bertha's daughters, so the number of women having no daughters is $30 - 4 = \boxed{26}$.
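As a quick consistency check, added to the solution above: $4$ of the $6$ daughters have daughters, so $6 - 4 = 2$ daughters are childless, and $2 + 24 = 26$ agrees with $30 - 4 = 26$.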
Math Dataset
\begin{document} \title[Cohomologically complete intersections]{Cohomologically complete intersections with vanishing of Betti numbers} \author[W. Mahmood]{Waqas Mahmood } \address{Quaid-I-Azam University Islamabad, Pakistan} \email{ waqassms$@$gmail.com} \thanks{This research was partially supported by Higher Education Commission, Pakistan} \subjclass[2000]{13D45.} \keywords{Local cohomology, Cohomologically complete intersections, Betti numbers} \maketitle \begin{abstract} Let $I$ be an ideal of an $n$-dimensional local Gorenstein ring $R$. In this paper we will describe several necessary and sufficient conditions such that the ideal $I$ becomes a cohomologically complete intersection. In fact, as a technical tool, it will be shown that the vanishing $H^i_{I}(R)= 0$ for all $i\neq c= \grade (I)$ is equivalent to the vanishing of the Betti numbers of $H^c_{I}(R)$. This gives a new characterization that checks the cohomologically complete intersection property through the vanishing of the Tor modules of $H^c_{I}(R)$. \end{abstract} \section{Introduction} For a commutative Noetherian local ring $(R,\mathfrak m,k)$ and an ideal $I\subset R$ we denote by $H^i_I(R)$, $i\in \mathbb{Z}$, the local cohomology modules of $R$ with respect to $I$. We refer to \cite{b} and \cite{goth} for the definition of local cohomology modules. It is one of the difficult questions to compute the cohomological dimension $\cd (I)$ of $I$ with respect to $R$. Here $\cd (I):=\max\{i\in \mathbb{Z}: H^i_{I}(R)\neq 0\}.$ Moreover it is well-known that $\grade (I)\leq \cd (I)$. The ideal $I$ is called a cohomologically complete intersection if $H^i_{I}(R)= 0$ for all $i\neq c= \grade (I)$. Note that the cohomologically complete intersection property helps us to decide whether an ideal is set-theoretically a complete intersection. As a first step M. Hellus and P. Schenzel (see \cite[Theorem 0.1]{pet1}) have shown that $H^i_{I}(R)= 0$ for all $i\neq c= \grade (I)$ if and only if $\dim_k(\Ext^i_R(k,H^c_{I}(R)))= \delta_{n,i}$ provided that $I$ is a cohomologically complete intersection in $V(I) \setminus \{ \mathfrak m\}$ over an $n$-dimensional local Gorenstein ring $R$. This was the first time that the cohomologically complete intersection property of $I$ was completely encoded in homological properties of the module $H^c_{I}(R).$ Moreover the above characterization of cohomologically complete intersections looks like a Gorensteinness property. After that several authors have studied this cohomologically complete intersection property. For instance the author and M. Zargar (see \cite[Theorem 1.1]{waqas2} and \cite[Theorem 1.1]{z}) have generalized this result to a maximal Cohen-Macaulay module of finite injective dimension over a finite dimensional local ring. For an extension to arbitrary finitely generated $R$-modules we refer to \cite[Theorem 4.4]{p1}. Here a natural question arises: does an analogous characterization also hold in terms of the vanishing of the Betti numbers $\Tor^{R}_{i}(k,H^c_{I}(R))$, $i\in \mathbb{Z}$, of $H^c_{I}(R)$? In this regard we will prove the following result: \begin{theorem}\label{a1} Let $R$ be a local Gorenstein ring of dimension $n$ and $I$ be an ideal with $c = \grade(I).$ Then for all $\mathfrak{p}\in V(I)$ the following conditions are equivalent: \begin{itemize} \item[(a)] $H^i_I(R)= 0$ for all $i\neq c,$ that is $I$ is a cohomologically complete intersection. 
\item[(b)] The natural homomorphism \[ \Tor^{R_\mathfrak{p}}_{c}(E_{R_\mathfrak{p}}(k(\mathfrak{p})),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))\to E_{R_\mathfrak{p}}(k(\mathfrak{p})) \] is an isomorphism and $\Tor^{R_\mathfrak{p}}_{i}(E_{R_\mathfrak{p}}(k(\mathfrak{p})),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))= 0$ for all $i\neq c$. \item[(c)] The natural homomorphism \[ E_{R_\mathfrak{p}}(k(\mathfrak{p}))\to H^c_{\mathfrak{p}R_\mathfrak{p}}(\Hom_{R_\mathfrak{p}}(H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}), E_{R_\mathfrak{p}}(k(\mathfrak{p})))) \] is an isomorphism and $H^i_{\mathfrak{p}R_\mathfrak{p}}(\Hom_{R_\mathfrak{p}}(H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}), E_{R_\mathfrak{p}}(k(\mathfrak{p}))))= 0$ for all $i\neq c$. \item[(d)] The natural homomorphism \[ \Tor^{R_\mathfrak{p}}_{c}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))\to k(\mathfrak{p}) \] is an isomorphism and $\Tor^{R_\mathfrak{p}}_{i}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))= 0$ for all $i\neq c$. \end{itemize} \end{theorem} In the above Theorem \ref{a1}, $k(\mathfrak{p})$ denotes the residue field of the local Gorenstein ring $R_\mathfrak{p}$ and $E_{R_\mathfrak{p}}(k(\mathfrak{p}))$ denotes its injective hull. Moreover the existence of the above natural homomorphisms is shown in Theorem \ref{2}. The new point of view here is that an ideal $I$ being a cohomologically complete intersection is equivalent to the following property of the Betti numbers of $H^c_{IR_\mathfrak{p}}(R_\mathfrak{p})$: \[ \dim_{k(\mathfrak{p})} (\Tor^{R_\mathfrak{p}}_{i}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p})))= \delta_{c,i} \] for all ${\mathfrak p}\in V(I)$. \section{Preliminaries} In this section we will recall a few preliminaries and auxiliary results. In the paper we will denote by $(R,\mathfrak{m})$ a commutative Noetherian local ring of finite dimension $n$ with unique maximal ideal $\mathfrak{m}$. Let $E= E_R(k)$ be the injective hull of the residue field $k=R/\mathfrak{m}$. We will denote by $D(\cdot)$ the Matlis dual functor. Moreover for the basic facts about commutative algebra and homological algebra we refer to \cite{b}, \cite{har1}, \cite{m}, and \cite{w}. Suppose that the homomorphism $X\to Y$ of complexes of $R$-modules induces an isomorphism in homologies. Then it is called a quasi-isomorphism. In this case we will write it as $X \qism Y$. In order to derive the natural homomorphisms of the next Theorem \ref{2} we need the definition of the truncation complex. The truncation complex was first introduced in \cite[Definition 4.1]{p2}. Let $E^{\cdot}_R(R)$ be a minimal injective resolution of a local Gorenstein ring $R$ of $\dim(R)= n$. Let $I$ be an ideal of $R$ with $\grade (I)= c$. Then there is an exact sequence \[ 0\to H^c_I(R)\to \Gamma_I(E^{\cdot}_R(R))^c\rightarrow \Gamma_I(E^{\cdot}_R(R))^{c+1}. \] Whence there is an embedding of complexes of $R$-modules $ H^c_I(R)[-c]\rightarrow \Gamma_I(E^{\cdot}_R(R))$. \begin{definition} \label{2.2} Let $C^{\cdot}_R(I)$ be the cokernel of the above embedding. Then the truncation complex of $R$ with respect to $I$ is a short exact sequence of complexes \[ 0\to H^c_I(R)[-c]\to \Gamma_I(E^{\cdot}_R(R))\to C^{\cdot}_R(I)\to 0. \] Note that one can easily see from the long exact sequence of cohomologies that $H^i(C^{\cdot}_R(I))= 0$ for all $i\leq c$ or $i> n$ and $H^i(C^{\cdot}_R(I))\cong H^i_I(R)$ for all $c< i\leq n.$ \end{definition} As a consequence of the truncation complex the following result was proved in \cite[Lemma 2.2 and Corollary 2.3]{pet1}. 
Note that a generalization of the next result to maximal Cohen-Macaulay modules was given in \cite[Theorem 4.2 and Corollary 4.3]{waqas2}. Moreover recently it has also been extended to finitely generated $R$-modules (see \cite[Lemma 4.2]{p1}). For the sake of completeness we add it here. \begin{lemma} \label{1} Let $(R,\mathfrak{m})$ denote an $n$-dimensional Gorenstein ring. Let $I\subset R$ denote an ideal with $c = \grade I$ and set $d = n-c$. Then there is a short exact sequence \[ 0 \to H^{n-1}_{\mathfrak m}(C^{\cdot}_R(I)) \to H^d_{\mathfrak m}(H^c_I(R)) \to E \to H^n_{\mathfrak m}(C^{\cdot}_R(I)) \to 0, \] and isomorphisms $H^{i-c}_{\mathfrak m}(H^c_I(R)) \cong H^{i-1}_{\mathfrak m}(C^{\cdot}_R(I))$ for all $i\neq n,n+1$. Moreover if in addition $H^i_I(R)=0$ for all $i\neq c$ then the map $H^d_{\mathfrak m}(H^c_I(R)) \to E$ is an isomorphism and $H^{i}_{\mathfrak m}(H^c_I(R))=0$ for all $i\neq d.$ \end{lemma} \begin{proof} For the proof see \cite[Lemma 2.2 and Corollary 2.3]{pet1}. \end{proof} In the following we will obtain some natural homomorphisms with the help of the truncation complex. These maps will be used further to investigate the property of cohomologically complete intersection ideals. \begin{theorem}\label{2} Let $R$ be a Gorenstein ring of dimension $n$ and $I$ an ideal with $c = \grade(I).$ Then we have the following results: \begin{itemize} \item[(a)] There is an exact sequence \[ 0\to \Tor_{-1}^R(E,C^{\cdot}_R(I))\to \Tor_{c}^R(E,H^c_I(R))\to E\to \Tor_{0}^R(E,C^{\cdot}_R(I))\to 0 \] and isomorphism $\Tor_{c-i}^R(E, H^c_I(R))\cong \Tor_{-(i+1)}^R(E,C^{\cdot}_R(I))$ for all $i\neq 0, -1$. \item[(b)] There is an exact sequence \[ 0\to H^0_{\mathfrak m}(D(C^{\cdot}_R(I)))\to E\to H^c_{\mathfrak m}(D(H^c_I(R)))\to H^1_{\mathfrak m}(D(C^{\cdot}_R(I)))\to 0 \] and isomorphism $H^{c+i}_{\mathfrak m}(D(H^c_I(R)))\cong H^{i+1}_{\mathfrak m}(D(C^{\cdot}_R(I)))$ for all $i\neq 0, -1$. \item[(c)] There is an exact sequence \[ 0\to \Tor_{-1}^R(k,C^{\cdot}_R(I))\to \Tor_{c}^R(k,H^c_I(R))\to k\to \Tor_{0}^R(k,C^{\cdot}_R(I))\to 0 \] and isomorphism $\Tor_{c-i}^R(k,H^c_I(R))\cong \Tor_{-(i+1)}^R(k,C^{\cdot}_R(I))$ for all $i\neq 0, -1$. \end{itemize} \end{theorem} \begin{proof} Let $F_{\cdot}^R$ be a free resolution of $E$. Then the short exact sequence of the truncation complex induces the following short exact sequence of complexes of $R$-modules \begin{equation}\label{1aa} 0\to (F_{\cdot}^R\otimes_{R} H^c_I(R))[-c]\to F_{\cdot}^R\otimes_{R}\Gamma_I(E^{\cdot}_R(R))\to F_{\cdot}^R\otimes_{R} C^{\cdot}_R(I)\to 0 \end{equation} Now let $\underline{y} = y_1,\ldots, y_r$ be a generating set of the ideal $I$ and $\Check{C}_{\underline{y}}$ denote the \v{C}ech complex with respect to $\underline{y}$. Since $E^{\cdot}_R(R)$ is a complex of injective $R$-modules, by \cite[Theorem 3.2]{pet2} the middle complex is quasi-isomorphic to $F_{\cdot}^R\otimes_{R} \Check{C}_{\underline{y}}\otimes_{R} E^{\cdot}_R(R)$. Moreover $\Check{C}_{\underline{y}}$ and $F_{\cdot}^R$ are complexes of flat $R$-modules, so there are the following quasi-isomorphisms \[ F_{\cdot}^R\otimes_{R}\Check{C}_{\underline{y}} \qism F_{\cdot}^R\otimes_{R} \Check{C}_{\underline{y}}\otimes_{R} E^{\cdot}_R(R), \text { and } \] \[ F_{\cdot}^R\otimes_R \Check{C}_{\underline{y}} \qism E\otimes_R \Check{C}_{\underline{y}} \] But since $\Supp_R(E)= V(\mathfrak{m})$, it follows that $E\otimes_R \Check{C}_{\underline{y}}\cong E$. 
Therefore we get the homologies $H^i(F_{\cdot}^R\otimes_{R}\Gamma_I(E^{\cdot}_R(R)))=0$ for all $i\neq 0$ and $E$ for $i=0$. Then the statement $(a)$ can be deduced from the homology sequence of the above exact sequence \ref{1aa}. Now apply the functor $D(\cdot)=\Hom_R({\cdot}, E)$ to the truncation complex; it induces the short exact sequence \begin{equation}\label{e} 0\to D(C^{\cdot}_R(I))\to D(\Gamma_I(E^{\cdot}_R(R)))\to D(H^c_I(R))[c]\to 0. \end{equation} Let $F_{\cdot}(R/\mathfrak{m}^s)$ be a free resolution of $R/\mathfrak{m}^s$ for each $s\in \mathbb{N}$. Apply $\Hom_R(F_{\cdot}(R/\mathfrak{m}^s),{\cdot})$ to this sequence; then the resulting exact sequence of complexes is the following: \begin{gather*} 0\to \Hom_R(F_{\cdot}(R/\mathfrak{m}^s),D(C^{\cdot}_R(I)))\to \Hom_R(F_{\cdot}(R/\mathfrak{m}^s),D(\Gamma_I(E^{\cdot}_R(R))))\to \\ \Hom_R(F_{\cdot}(R/\mathfrak{m}^s),D(H^c_I(R)))[c] \to 0. \end{gather*} Now we look at the cohomologies of the complex in the middle. By Hom-Tensor Duality it is isomorphic to $D(F_{\cdot}(R/\mathfrak{m}^s)\otimes_{R}\Gamma_I(E^{\cdot}_R(R)))$. Since the Matlis dual functor $D(\cdot)$ is exact and cohomology commutes with exact functors, there is an isomorphism \[ H^i(D(F_{\cdot}(R/\mathfrak{m}^s)\otimes_{R}\Gamma_I(E^{\cdot}_R(R))))\cong D(H^i(F_{\cdot}(R/\mathfrak{m}^s)\otimes_{R}\Gamma_I(E^{\cdot}_R(R)))) \] for all $i\in \mathbb{Z}$. Then, by the same arguments as we used in the proof of $(a)$, the cohomologies on the right side are zero for each $i\neq 0$ and for $i=0$ it is isomorphic to $D(R/\mathfrak{m}^s)$. Recall that the support of $R/\mathfrak{m}^s$ is contained in $V(\mathfrak{m}).$ Then by the cohomology sequence there is an exact sequence \begin{gather*}\label{we} 0\to \Hom_R(R/\mathfrak{m}^s,D(C^{\cdot}_R(I)))\to D(R/\mathfrak{m}^s)\to \Ext^c_R(R/\mathfrak{m}^s,D(H^c_I(R)))\to \\ \Ext^1_R(R/\mathfrak{m}^s,D(C^{\cdot}_R(I)))\to 0 \end{gather*} and the isomorphism $\Ext^{i+1}_R(R/\mathfrak{m}^s,D(C^{\cdot}_R(I)))\cong \Ext^{i+c}_R(R/\mathfrak{m}^s,D(H^c_I(R)))$ for all $i\neq 0,-1$. Taking the direct limit we get the following exact sequence \[ 0\to H^0_{\mathfrak m}(D(C^{\cdot}_R(I)))\to E\to H^c_{\mathfrak m}(D(H^c_I(R)))\to H^1_{\mathfrak m}(D(C^{\cdot}_R(I)))\to 0 \] and for all $i\neq 0, -1$ \[ H^{c+i}_{\mathfrak m}(D(H^c_I(R)))\cong H^{i+1}_{\mathfrak m}(D(C^{\cdot}_R(I))). \] This proves the statement in $(b)$. Now let $L^R_{\cdot}$ denote a free resolution of $k$. Then the short exact sequence of the truncation complex induces the exact sequence \begin{equation}\label{e1} 0\to (L^R_{\cdot}\otimes_R H^c_I(R))[-c]\to L^R_{\cdot}\otimes_R \Gamma_I(E^{\cdot}_R(R))\to L^R_{\cdot}\otimes_R C^{\cdot}_R(I)\to 0. \end{equation} Then by the proof of $(a)$ the cohomologies of the middle complex of this last sequence are $H^i(L^R_{\cdot}\otimes_R \Gamma_I(E^{\cdot}_R(R)))=0$ for all $i\neq 0$ and for $i=0$ it is $k$. This gives the statement in $(c)$ by virtue of the long exact sequence of cohomologies. This finishes the proof of the Theorem. \end{proof} We close this section with the following version of the Local Duality Lemma. The proof of it can be found in \cite{goth}. Note that in \cite[Theorem 6.4.1]{he}, \cite[Theorem 3.1]{p1}, or \cite[Lemma 2.4]{waqas1} there is a generalization of it to arbitrary cohomologically complete intersection ideals. \begin{lemma} \label{2.1} Let $I$ be an ideal of a Gorenstein ring $R$ with $\dim(R)= n$. 
Then for any $R$-module $M$ and for all $i\in \mathbb{Z}$ we have \begin{itemize} \item[(1)] $\Tor_{n-i}^R(M, E) \cong H^i_{\mathfrak{m}}(M)$. \item[(2)] $D(H^i_\mathfrak{m}(M)) \cong \Ext^{n-i}_R(M, \hat{R})$. Here $\hat{R}$ denotes the completion of $R$ with respect to the maximal ideal. \end{itemize} \end{lemma} \section{On cohomologically complete intersections} In this section we will see when the natural homomorphisms, described in Theorem \ref{2} above, are isomorphisms. The first result in this regard is obtained in the following Corollary provided that the ideal $I$ is a cohomologically complete intersection. In fact the next result tells us that all the Betti numbers of the module $H^c_I(R)$ vanish except at degree $c$. \begin{corollary}\label{01} With the notation of Theorem \ref{2} suppose in addition that $H^i_I(R)= 0$ for all $i\neq c.$ Then the following are true: \begin{itemize} \item[(a)] The natural homomorphism \[ \Tor_{c}^R(E,H^c_I(R))\to E \] is an isomorphism and $\Tor_{i}^R(E,H^c_I(R))=0$ for all $i\neq c$. \item[(b)] The natural homomorphism \[ E\to H^c_{\mathfrak m}(D(H^c_I(R))) \] is an isomorphism and $H^{i}_{\mathfrak m}(D(H^c_I(R)))=0$ for all $i\neq c$. \item[(c)] The natural homomorphism \[ \Tor_{c}^R(k,H^c_I(R))\to k \] is an isomorphism and $\Tor_{i}^R(k,H^c_I(R))=0$ for all $i\neq c$. That is, the Betti numbers of $H^c_I(R)$ satisfy \[ \dim_k(\Tor^{R}_i(k,H^c_I(R)))=\delta_{c,i}. \] \end{itemize} \end{corollary} \begin{proof} Since $H^i_I(R)= 0$ for all $i\neq c$, the complex $C^{\cdot}_R(I)$ is an exact complex (by the definition of the truncation complex). Let $F_{\cdot}^R$, $F_{\cdot}(R/\mathfrak{m}^s)$ and $L^{\cdot}_R$ be the free resolutions of $E$, $R/\mathfrak{m}^s$ and $k$ respectively. Then it follows that all the following complexes \[ F_{\cdot}^R\otimes_R C^{\cdot}_R(I), \] \[ \Hom_R(F_{\cdot}(R/\mathfrak{m}^s),D(C^{\cdot}_R(I))),\text { and } \] \[ L^{\cdot}_R\otimes_R C^{\cdot}_R(I) \] are exact. Then from the long exact sequence of cohomologies of the sequences \ref{1aa} and \ref{e1} we easily obtain the statements in $(a)$ and $(c)$. Moreover, after applying the functor $\Hom_R(F_{\cdot}(R/\mathfrak{m}^s),\cdot)$ to the sequence \ref{e} we get, from the cohomology sequence, that the natural homomorphism \[ D(R/\mathfrak{m}^s)\to \Ext^c_R(R/\mathfrak{m}^s,D(H^c_I(R))) \] is an isomorphism and $\Ext^{i}_R(R/\mathfrak{m}^s,D(H^c_I(R)))=0$ for all $i\neq c$. By passing to the direct limits we get the statement in $(b)$. Hence the proof of the Corollary is complete. \end{proof} In order to prove Theorem \ref{a}, which is one of the main results of this section, we need the following result. Note that the next Theorem describes when the natural homomorphisms of Theorem \ref{2} become isomorphisms. Moreover this equivalence is related to the vanishing of the Betti numbers of $H^c_I(R)$ for $\grade(I)=c$. \begin{theorem}\label{6} Let $I$ be an ideal of an $n$-dimensional Gorenstein ring $R$ with $\grade(I)=c$. Then the following conditions are equivalent: \begin{itemize} \item[(a)] The natural homomorphism \[ E\to H^c_{\mathfrak m}(D(H^c_I(R))) \] is an isomorphism and $H^i_{\mathfrak m}(D(H^c_I(R)))=0$ for all $i\neq c.$ \item[(b)] The natural homomorphism \[ \Tor_{c}^R(k,H^c_I(R))\to k \] is an isomorphism and $\Tor_{i}^R(k,H^c_I(R))= 0$ for all $i\neq c$. \item[(c)] The Betti numbers of $H^c_I(R)$ satisfy \[ \dim_k(\Tor^{R}_i(k,H^c_I(R)))=\delta_{i,c}. 
\] \end{itemize} \end{theorem} \begin{proof} Note that the equivalence of $(b)$ and $(c)$ is obvious. Now we prove that $(a)$ implies $(b)$. Since the natural homomorphism in $(a)$ is an isomorphism and $H^i_{\mathfrak m} (D(H^c_I(R)))= 0$ for all $i\neq c$, it follows by Theorem \ref{2} $(b)$ that $H^i_{\mathfrak m} (D(C^{\cdot}_R(I)))= 0$ for all $i\in \mathbb{Z}$. Let $\Check{C}_{\underline{x}}$ be the \v{C}ech complex with respect to $\underline{x}= x_1, \ldots ,x_s\in \mathfrak{m}$ such that $\Rad \mathfrak{m}= \Rad(\underline{x})R$. This implies that $\Check{C}_{\underline{x}}\otimes_{R} D(C^{\cdot}_R(I))$ is an exact complex. Suppose that $F^{\cdot}_R$ denotes a minimal injective resolution of $D(C^{\cdot}_R(I)).$ Let us denote $X:= \Hom_R(k, F^{\cdot}_R)$; then there is an isomorphism \[ \Ext^{i}_R(k, D(C^{\cdot}_R(I)))\cong H^i(X) \] for all $i\in \mathbb{Z}$. We claim that the complex $X$ is homologically trivial. To this end note that the support of each module of $X$ is in $\{\mathfrak{m}\}$. It follows that there is an isomorphism of complexes \[ \Check{C}_{\underline{x}}\otimes_{R} X\cong X. \] So in order to prove the claim it will be enough to show that $H^i(\Check{C}_{\underline{x}}\otimes_{R} X)=0$ for all $i\in \mathbb{Z}$. By the above arguments, for a free resolution $L_{\cdot}^R$ of $k$, the following complex is exact \[ Y:=\Hom_R(L_{\cdot}^R, \Check{C}_{\underline{x}}\otimes_{R} D(C^{\cdot}_R(I))). \] Moreover $L_{\cdot}^R$ is a right bounded complex of finitely generated free $R$-modules and $\Check{C}_{\underline{x}}$ is a bounded complex of flat $R$-modules. So by \cite[Proposition 5.14]{har1} $Y$ is quasi-isomorphic to $\Check{C}_{\underline{x}}\otimes_{R} \Hom_R(L_{\cdot}^R, D(C^{\cdot}_R(I)))$, and hence the latter complex is homologically trivial. Note that the morphism of complexes \[ \Check{C}_{\underline{x}}\otimes_{R} \Hom_R(L_{\cdot}^R, D(C^{\cdot}_R(I)))\to \Check{C}_{\underline{x}}\otimes_{R} \Hom_R(L_{\cdot}^R, F^{\cdot}_R) \] induces an isomorphism in cohomologies, because $F^{\cdot}_R$ is a minimal injective resolution of $D(C^{\cdot}_R(I))$. Moreover \[ \Check{C}_{\underline{x}}\otimes_{R} X \qism \Check{C}_{\underline{x}}\otimes_{R} \Hom_R(L_{\cdot}^R, F^{\cdot}_R). \] By the discussion above the complex on the right side is homologically trivial. It follows that the complex $\Check{C}_{\underline{x}}\otimes_{R} X$ is homologically trivial. This proves the claim. Therefore $\Ext^{i}_R(k, D(C^{\cdot}_R(I)))=0$ for all $i\in \mathbb{Z}$. By Hom-Tensor Duality $D(\Tor_{i}^R(k,C^{\cdot}_R(I)))\cong\Ext^{i}_R(k, D(C^{\cdot}_R(I)))=0$ for all $i\in \mathbb{Z}$. This implies that $L_{\cdot}^R\otimes_R C^{\cdot}_R(I)$ is an exact complex. Then from the cohomology sequence of the exact sequence \ref{e1} it follows that the natural homomorphism \[ \Tor_{c}^R(k,H^c_I(R))\to k \] is an isomorphism and $\Tor_{i}^R(k,H^c_I(R))= 0$ for all $i\neq c$. Conversely, note that the application of the functor $\Tor^{R}_i(\cdot, C^{\cdot}_R(I))$ to the exact sequence \[ 0\to \mathfrak m^s/\mathfrak m^{s+1}\to R/\mathfrak m^{s+1}\to R/\mathfrak m^{s}\to 0 \] induces the exact sequence \[ \Tor^{R}_i(\mathfrak m^s/\mathfrak m^{s+1}, C^{\cdot}_R(I)) \to \Tor^{R}_i(R/\mathfrak m^{s+1}, C^{\cdot}_R(I))\to \Tor^{R}_i(R/\mathfrak m^s, C^{\cdot}_R(I)) \] for all $i\in \mathbb Z$. Then by induction on $s$, in view of the vanishing of Tor modules, this proves that $\Tor^{R}_i(R/\mathfrak m^s, C^{\cdot}_R(I))=0$ for all $i\in \mathbb{Z}$ and for all $s\in \mathbb N$ (see Theorem \ref{2}$(c)$).
It follows that \[ \Ext^{i}_R(R/\mathfrak m^s, D(C^{\cdot}_R(I)))\cong D(\Tor_{i}^R(R/\mathfrak m^s,C^{\cdot}_R(I)))=0 \] for all $i\in \mathbb{Z}$ and for all $s\in \mathbb N$. Recall that $\mathfrak m^s/\mathfrak m^{s+1}$ is a finite dimensional $k$-vector space. Then by virtue of the proof of Theorem \ref{2}$(b)$ the natural homomorphism \[ D(R/\mathfrak{m}^s)\to \Ext^c_R(R/\mathfrak{m}^s,D(H^c_I(R))) \] is an isomorphism and $\Ext^{i}_R(R/\mathfrak{m}^s,D(H^c_I(R)))=0$ for all $i\neq c$. By passing to the direct limit we easily obtain the statement in $(a)$. This completes the proof of the Theorem. \end{proof} \begin{lemma}\label{03} With the above notation, suppose that $N:= D(D(H^c_I(R)))$. Then we have: \begin{itemize} \item[(a)] The natural homomorphism \[ E\to H^c_{\mathfrak m}(D(H^c_I(R))) \] is an isomorphism if and only if the natural homomorphism \[ \lim_{\longleftarrow} \Tor^R_{c}(R/\mathfrak{m}^s,N)\to \hat{R} \] is an isomorphism. \item[(b)] Let $i\in \mathbb{Z}$ be a fixed integer. Then the module $H^i_{\mathfrak m}(D(H^c_I(R)))= 0$ if and only if $\lim\limits_{\longleftarrow} \Tor^R_{i}(R/\mathfrak{m}^s,N)= 0$. \end{itemize} \end{lemma} \begin{proof} Note that the Hom functor transforms a direct system in the first variable into an inverse system. Moreover $H^i_{\mathfrak{m}}(D(H^c_I(R)))\cong \lim\limits_{\longrightarrow}\Ext_R^{i}(R/\mathfrak{m}^s,D(H^c_I(R)))$ for each $i\in \mathbb{Z}$. Taking the Matlis dual of this induces the isomorphism \[ D(H^i_{\mathfrak{m}}(D(H^c_I(R))))\cong \lim\limits_{\longleftarrow} D(\Ext_R^{i}(R/\mathfrak{m}^s,D(H^c_I(R)))) \] for each $i\in \mathbb{Z}$. Then by Hom-Tensor duality the module on the right side is isomorphic to \[ \lim_{\longleftarrow} \Tor^R_{i}(R/\mathfrak{m}^s,N). \] Then both of the statements are obvious in view of Matlis duality. \end{proof} The next Proposition is needed in the proof of Theorem \ref{a}, so we prove it first. \begin{proposition}\label{7} Let $R$ be an $n$-dimensional local Gorenstein ring. Let $I$ be an ideal with $\grade(I)=c$ and $d:=n-c$. If the natural homomorphism \[ \Tor_{c}^R(k,H^c_I(R))\to k \] is an isomorphism and $\Tor_i^R(k,H^c_I(R))=0$ for all $i\neq c$, then the following conditions hold: \begin{itemize} \item[(a)] The natural homomorphism \[ H^d_{\mathfrak m}(H^c_I(R)) \to E \] is an isomorphism and $H^{i}_{\mathfrak m}(H^c_I(R))=0$ for all $i\neq d$. \item[(b)] The natural homomorphism \[ \Tor_{c}^R(E,H^c_I(R))\to E \] is an isomorphism and $\Tor_{i}^R(E,H^c_I(R))=0$ for all $i\neq c$. \end{itemize} \end{proposition} \begin{proof} Note that by the Local Duality Lemma \ref{2.1} (for $M=H^c_I(R)$) it will be enough to prove the statements in $(b)$. For this let $N_{\alpha}:=D(R/\mathfrak m^\alpha)$ for each $\alpha\in \mathbb N$. Suppose that $F_{\cdot}(N_{\alpha})$ denotes a minimal free resolution of $N_{\alpha}$ for each $\alpha\in \mathbb N$. Since the support of $N_{\alpha}$ is contained in $\{\mathfrak{m}\}$, applying the functor $ \cdot \otimes_R F_{\cdot}(N_{\alpha})$ to the truncation complex gives us the following natural homomorphisms \[ f_\alpha: \Tor^{R}_c(N_{\alpha}, H^c_I(R))\to N_{\alpha} \] for all $\alpha\in \mathbb N$ (by the proof of Theorem \ref{2}$(a)$).
Now the short exact sequence $0\to \mathfrak m^\alpha/\mathfrak m^{\alpha+1}\to R/\mathfrak m^{\alpha+1}\to R/\mathfrak m^{\alpha}\to 0$ induces the exact sequence \[ 0\to N_{\alpha}\to N_{\alpha+1}\to D(\mathfrak m^\alpha/\mathfrak m^{\alpha+1})\to 0. \] Moreover, applying the functor $\Tor^{R}_i(\cdot, H^c_I(R))$ to this sequence, we get the exact sequence \[ \Tor^{R}_i(N_{\alpha}, H^c_I(R)) \to \Tor^{R}_i(N_{\alpha+1}, H^c_I(R))\to \Tor^{R}_i(D(\mathfrak m^\alpha/\mathfrak m^{\alpha+1}), H^c_I(R)) \] for all $i\in \mathbb Z$. Since $N_{1}=D(R/\mathfrak m)\cong k$, by induction on $\alpha$, in view of the vanishing of Tor modules, this proves that $\Tor^{R}_i(N_{\alpha}, H^c_I(R))=0$ for all $i\neq c$ and for all $\alpha\in \mathbb N$. Now we show that $f_{\alpha}$ is an isomorphism for all $\alpha\in \mathbb N$. Clearly $f_1$ is an isomorphism (by assumption). Then there is the following commutative diagram with exact rows \[ \begin{array}{cccccccc} & & \Tor^{R}_c(N_{\alpha}, H^c_I(R)) & \to & \Tor^{R}_c(N_{\alpha+1}, H^c_I(R)) & \to & \Tor^{R}_c(D(\mathfrak m^\alpha/\mathfrak m^{\alpha+1}), H^c_I(R)) & \to 0\\ & & \downarrow {f_\alpha } & & \downarrow {f_{\alpha+1}} & & \downarrow f &\\ 0 & \to & N_{\alpha} & \to & N_{\alpha+1} & \to & D(\mathfrak m^\alpha/\mathfrak m^{\alpha+1}) & \to 0 \end{array} \] Note that the upper row is exact because $\Tor^{R}_i(N_{\alpha}, H^c_I(R))=0$ for all $i\neq c$ and for all $\alpha\in \mathbb N$. Then the natural homomorphism $f$ is an isomorphism because $\mathfrak m^\alpha/\mathfrak m^{\alpha+1}$ is a finite dimensional $k$-vector space. Hence by induction, in view of the snake lemma, $f_{\alpha}$ is an isomorphism for all $\alpha \in \mathbb{N}$. Taking the direct limit of the $f_{\alpha}$ induces the isomorphism \[ \Tor_{c}^R(E,H^c_I(R))\to E, \] since the direct limit commutes with the Tor functor and $\Supp_R(E)\subseteq V(\mathfrak{m})$. Moreover note that the vanishing of the above Tor modules implies that $\Tor_{i}^R(E,H^c_I(R))=0$ for all $i\neq c$. \end{proof} Now we are able to prove our main result. Before proving it we fix some notation. Let $R$ be a Gorenstein ring of dimension $n$ and $I$ an ideal with $c = \grade(I).$ We will denote by $k(\mathfrak{p})$ the residue field of $R_\mathfrak{p}$ with injective hull $E_{R_\mathfrak{p}}(k(\mathfrak{p}))$ for $\mathfrak{p}\in V(I)$. Moreover we set $h({\mathfrak p})=\dim(R_{\mathfrak p})- c$. \begin{theorem}\label{a} Fix the previous notation. Then for all $\mathfrak{p}\in V(I)$ the following conditions are equivalent: \begin{itemize} \item[(a)] $H^i_I(R)= 0$ for all $i\neq c,$ that is, $I$ is a cohomologically complete intersection. \item[(b)] The natural homomorphism \[ H^{h({\mathfrak p})}_{{\mathfrak p}R_{\mathfrak p}}(H^c_{IR_{\mathfrak p}}(R_{\mathfrak p})) \to E_{R_\mathfrak{p}}(k(\mathfrak{p})) \] is an isomorphism and $H^i_{{\mathfrak p}R_{\mathfrak p}}(H^c_{IR_{\mathfrak p}}(R_{\mathfrak p}))= 0$ for all $i\neq h({\mathfrak p})$.
\item[(c)] The natural homomorphism \[ \Tor^{R_\mathfrak{p}}_{c}( E_{R_\mathfrak{p}}(k(\mathfrak{p})),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))\to E_{R_\mathfrak{p}}(k(\mathfrak{p})) \] is an isomorphism and $\Tor^{R_\mathfrak{p}}_{i}(E_{R_\mathfrak{p}}(k(\mathfrak{p})),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))= 0$ for all $i\neq c .$ \item[(d)] The natural homomorphism \[ E_{R_\mathfrak{p}}(k(\mathfrak{p}))\to H^c_{\mathfrak{p}R_\mathfrak{p}}(\Hom_{R_\mathfrak{p}}(H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}), E_{R_\mathfrak{p}}(k(\mathfrak{p})))) \] is an isomorphism and $H^i_{\mathfrak{p}R_\mathfrak{p}}(\Hom_{R_\mathfrak{p}}(H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}), E_{R_\mathfrak{p}}(k(\mathfrak{p}))))= 0$ for all $i\neq c$. \item[(e)] The natural homomorphism \[ \Tor^{R_\mathfrak{p}}_{c}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))\to k(\mathfrak{p}) \] is an isomorphism and $\Tor^{R_\mathfrak{p}}_{i}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p}))= 0$ for all $i\neq c$. \item[(f)] The Betti numbers of $H^c_{IR_\mathfrak{p}}(R_\mathfrak{p})$ satisfy \[ \dim_{k(\mathfrak{p})} (\Tor^{R_\mathfrak{p}}_{i}(k(\mathfrak{p}),H^c_{IR_\mathfrak{p}}(R_\mathfrak{p})))= \delta_{c,i}. \] \end{itemize} \end{theorem} \begin{proof} We first prove that the statements in $(a)$ and $(b)$ are equivalent. Suppose that $H^i_I(R)= 0$ for all $i\neq c.$ Then by \cite[Proposition 2.7]{waqas2} it follows that $H^i_{{\mathfrak p}R_{\mathfrak p}}(H^c_{IR_{\mathfrak p}}(R_{\mathfrak p}))= 0$ for all $i\neq c=\grade(IR_{\mathfrak p})$ and for all $\mathfrak{p}\in V(I)$. So the result follows from Lemma \ref{1}. Conversely, suppose that the statement in $(b)$ is true. We use induction on $\dim_R(R/IR)$. Let $\dim_R(R/IR)= 0$; then it follows that $\Rad (IR)= \mathfrak{m}$. This proves the result since $R$ is Gorenstein. Now let us assume that $\dim(R/IR)> 0$. Then it is easy to see that $\dim(R_{\mathfrak p}/IR_{\mathfrak p})< \dim(R/IR)$ for all ${\mathfrak p} \in V(I)\setminus \{ \mathfrak m\}$. Moreover by the induction hypothesis for all $i\neq c$ and for all ${\mathfrak p} \in V(I)\setminus \{ \mathfrak m\}$ we have \[ H^i_{IR_{\mathfrak p}}(R_{\mathfrak p})= 0. \] That is, $\Supp(H^i_I(R))\subseteq V({\mathfrak m})$ for all $i\neq c$. Since our assumption is true for $\mathfrak p= \mathfrak m$, it follows by Lemma \ref{1} that $H^i_\mathfrak m(C^{\cdot}_R(I))= 0$ for all $i\in \mathbb Z.$ Since $\Supp_R(H^i(C^{\cdot}_R(I)))\subseteq V(\mathfrak m)$, by \cite[Lemma 2.5]{waqas2}, in view of the definition of the truncation complex, we have \[ 0=H^i_\mathfrak m(C^{\cdot}_R(I))\cong H^i(C^{\cdot}_R(I))\cong H^i_I(R) \] for all $c<i\leq n$. That is, $H^i_{I}(R)= 0$ for all $i\neq c$. This completes the proof that $(a)$ is equivalent to $(b)$. Now we prove that the statements in $(b)$ and $(c)$ are equivalent. If the statement in $(b)$ is true, then by the remarks above it follows that $ c=\grade(IR_{\mathfrak p})$ for all $\mathfrak{p}\in V(I)$. Hence by Local Duality the statement in $(c)$ is obvious. Note that the converse is also true by similar induction arguments on $\dim_R(R/IR)$ as used above, in view of Local Duality. Let us prove that $(c)$ is equivalent to $(e)$. Let the statement in $(c)$ be true. Then it implies that $I$ is a cohomologically complete intersection. By \cite[Proposition 2.7]{waqas2} it follows that $H^i_{{\mathfrak p}R_{\mathfrak p}}(H^c_{IR_{\mathfrak p}}(R_{\mathfrak p}))= 0$ for all $i\neq c=\grade(IR_{\mathfrak p})$ and for all $\mathfrak{p}\in V(I)$.
So the result is obvious by virtue of Corollary \ref{01}. Now suppose that the statement in $(e)$ holds. By Local Duality and Proposition \ref{7} one can prove it by following the same steps of induction on $\dim_R(R/IR)$. In a similar way we can easily see that $(c)$ and $(d)$ are equivalent by virtue of Theorem \ref{6}, Proposition \ref{7} and Local Duality. Note that the equivalence of the statements in $(e)$ and $(f)$ is obvious. This completes the proof of the equivalence of all the statements of the Theorem. \end{proof} \begin{remark} Note that in the above Theorem \ref{a} the conditions have to be required after localization at all $\mathfrak{p}\in V(I)$; it is not enough to check them at $\mathfrak{m}$ alone. To see this, let $R = k[|x_0,x_1,x_2,x_3,x_4|]$ denote the formal power series ring over any field $k.$ Suppose that $I =(x_0,x_1)\cap (x_1,x_2)\cap (x_2,x_3)\cap (x_3,x_4).$ Then Hellus and Schenzel (see \cite[Example 4.1]{pet1}) proved that $\dim_k \Ext^i_R(k, H^2_I(R)) = \delta_{3,i}$ and $H^i_I(R) \not= 0$ for $i = 2,3$ with $\grade(I)=2$. Then by \cite[Corollary 4.7]{waqas2} it follows that the natural homomorphism $H^d_{\mathfrak m}(H^c_I(R)) \to E$ is an isomorphism and $H^{i}_{\mathfrak m}(H^c_I(R))=0$ for all $i\neq 3.$ This shows that the conditions localized at all primes are necessary in Theorem \ref{a}. \end{remark} \end{document}
arXiv
Skewness – Quick Introduction, Examples & Formulas
By Ruben Geert van den Berg under Statistics A-Z

Skewness is a number that indicates to what extent a variable is asymmetrically distributed.

Positive (Right) Skewness Example
Negative (Left) Skewness Example
Population Skewness - Formula and Calculation
Sample Skewness - Formula and Calculation
Skewness in SPSS
Skewness - Implications for Data Analysis

A scientist has 1,000 people complete some psychological tests. For test 5, the test scores have skewness = 2.0. A histogram of these scores is shown below. The histogram shows a very asymmetrical frequency distribution. Most people score 20 points or lower but the right tail stretches out to 90 or so. This distribution is right skewed. If we move to the right along the x-axis, we go from 0 to 20 to 40 points and so on. So towards the right of the graph, the scores become more positive. Therefore, right skewness is positive skewness which means skewness > 0. This first example has skewness = 2.0 as indicated in the right top corner of the graph. The scores are strongly positively skewed.

Another variable -the scores on test 2- turns out to have skewness = -1.0. Their histogram is shown below. The bulk of scores are between 60 and 100 or so. However, the left tail is stretched out somewhat. So this distribution is left skewed. Right: to the left, to the left. If we follow the x-axis to the left, we move towards more negative scores. This is why left skewness is negative skewness. And indeed, skewness = -1.0 for these scores. Their distribution is left skewed. However, it is less skewed -or more symmetrical- than our first example which had skewness = 2.0.

Symmetrical Distribution Implies Zero Skewness

Finally, symmetrical distributions have skewness = 0. The scores on test 3 -having skewness = 0.1- come close. Now, observed distributions are rarely precisely symmetrical. Exact symmetry is mostly seen for some theoretical sampling distributions. Some examples are the (standard) normal distribution, the t distribution and the binomial distribution if p = 0.5. These distributions are all exactly symmetrical and thus have skewness = 0.000...

If you'd like to compute skewnesses for one or more variables, just leave the calculations to some software. But -just for the sake of completeness- I'll list the formulas anyway. If your data contain your entire population, compute the population skewness as: $$Population\;skewness = \Sigma\biggl(\frac{X_i - \mu}{\sigma}\biggr)^3\cdot\frac{1}{N}$$ \(X_i\) is each individual score; \(\mu\) is the population mean; \(\sigma\) is the population standard deviation and \(N\) is the population size. For an example calculation using this formula, see this Googlesheet (shown below). It also shows how to obtain population skewness directly by using =SKEW.P(...) where ".P" means "population". This confirms the outcome of our manual calculation. Sadly, neither SPSS nor JASP compute population skewness: both are limited to sample skewness. If your data hold a simple random sample from some population, use $$Sample\;skewness = \frac{N\cdot\Sigma(X_i - \overline{X})^3}{S^3(N - 1)(N - 2)}$$ \(\overline{X}\) is the sample mean; \(S\) is the sample standard deviation and \(N\) is the sample size. An example calculation is shown in this Googlesheet (shown below). An easier option for obtaining sample skewness is using =SKEW(...), which confirms the outcome of our manual calculation.
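
If you'd rather not use a spreadsheet, here is a small Python/NumPy sketch that simply transcribes the two formulas above; the tiny score list is made up for illustration and is not the data from the Googlesheet:

import numpy as np

def population_skewness(x):
    # Treat x as the entire population: divisor N and the population SD.
    x = np.asarray(x, dtype=float)
    n = x.size
    z = (x - x.mean()) / x.std(ddof=0)
    return np.sum(z ** 3) / n

def sample_skewness(x):
    # Treat x as a simple random sample: the formula SPSS, JASP and =SKEW() report.
    x = np.asarray(x, dtype=float)
    n = x.size
    s = x.std(ddof=1)
    return n * np.sum((x - x.mean()) ** 3) / (s ** 3 * (n - 1) * (n - 2))

scores = [12, 15, 15, 17, 18, 20, 22, 85]   # made-up, right-skewed test scores
print(population_skewness(scores))          # analogous to =SKEW.P(...)
print(sample_skewness(scores))              # analogous to =SKEW(...)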
First off, "skewness" in SPSS always refers to sample skewness: it quietly assumes that your data hold a sample rather than an entire population. There's plenty of options for obtaining it. My favorite is via MEANS because the syntax and output are clean and simple. The screenshots below guide you through. The syntax can be as simple as means v1 to v5 /cells skew. A very complete table -including means, standard deviations, medians and more- is run from means v1 to v5 /cells count min max mean median stddev skew kurt. The result is shown below. Many analyses -ANOVA, t-tests, regression and others- require the normality assumption: variables should be normally distributed in the population. The normal distribution has skewness = 0. So observing substantial skewness in some sample data suggests that the normality assumption is violated. Such violations of normality are no problem for large sample sizes -say N > 20 or 25 or so. In this case, most tests are robust against such violations. This is due to the central limit theorem. In short, for large sample sizes, skewness is no real problem for statistical tests. However, skewness is often associated with large standard deviations. These may result in large standard errors and low statistical power. Like so, substantial skewness may decrease the chance of rejecting some null hypothesis in order to demonstrate some effect. In this case, a nonparametric test may be a wiser choice as it may have more power. Violations of normality do pose a real threat for small sample sizes of -say- N < 20 or so. With small sample sizes, many tests are not robust against a violation of the normality assumption. The solution -once again- is using a nonparametric test because these don't require normality. Last but not least, there isn't any statistical test for examining if population skewness = 0. An indirect way for testing this is a normality test such as the Kolmogorov-Smirnov normality test and the Shapiro-Wilk normality test. However, when normality is really needed -with small sample sizes- such tests have low power: they may not reach statistical significance even when departures from normality are severe. Like so, they mainly provide you with a false sense of security. And that's about it, I guess. If you've any remarks -either positive or negative- please throw in a comment below. We do love a bit of discussion. By Jon peck on October 28th, 2019 It's easy enough to get the population skewness in the rare case it is needed. Use a COMPUTE to get the standardized variable cubed and then AGGREGATE to get its mean. By Ruben Geert van den Berg on October 29th, 2019 Hi Jon! I think that's not exactly correct: the z-scores obtained via DESCRIPTIVES have been standardized with the sample standard deviation. For the population skewness, that should have been the population standard deviation which is also completely absent from SPSS: both between and within cases, SPSS uses the sample standard deviation formula. Surely you could create it with AGGREGATE commands but this may get cumbersome for multiple variables. I'm well aware that the sample skewness approximates the population skewness if the population size approaches infinity. However, I find it hard to sell that the population formulas are present even in Googlesheets but not SPSS. If SPSS was my product, I'd include them just for the sake of completeness and as the easiest way to silence any discussion. 
But perhaps there's no discussion in the first place as many "social scientists" seem to think that all data are simple random samples. I feel there's a lot of room for improvement when it comes to understanding statistics and data analysis in the social sciences. By Jon K Peck on October 29th, 2019 I wasn't suggesting using z scores. Just calculate the second and third moments with a COMPUTE and AGGREGATE. No doubt, it would be simpler if built in, but that would apply to other moments, too. Ok. When you mentioned "standardized variable cubed", I thought you were referring to cubed z-scores (most likely via DESCRIPTIVES).
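
For readers who want the population figure from software that only reports sample skewness: the two formulas listed earlier differ only by a fixed factor of sqrt(N(N-1))/(N-2), so either statistic can be rescaled into the other without recomputing anything from the raw data. A short NumPy check of that identity (the numbers are invented; this is plain Python used as an illustration, not SPSS syntax):

import numpy as np

x = np.array([2.0, 3.0, 3.0, 4.0, 5.0, 9.0])   # invented scores
n = x.size

# population formula (divisor N, population SD)
g1 = np.sum(((x - x.mean()) / x.std(ddof=0)) ** 3) / n
# sample formula (the one SPSS and =SKEW() report)
G1 = n * np.sum((x - x.mean()) ** 3) / (x.std(ddof=1) ** 3 * (n - 1) * (n - 2))

factor = np.sqrt(n * (n - 1)) / (n - 2)
print(np.isclose(G1, g1 * factor))   # True: sample skewness = population skewness * factor
print(np.isclose(g1, G1 / factor))   # True: population skewness = sample skewness / factor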
CommonCrawl
Production Function

For a firm, the production function represents the relationship between:
a. Implicit costs and explicit costs
b. Quantity of inputs and total cost
c. Quantity of inputs and quantity of output
d. Quantity of output and total cost

PRODUCTION FUNCTION
The combination of inputs used to produce goods and services is referred to as production. The production process is explained through an input-output model. There are several production functions depending on the relationship between inputs and outputs, namely: linear, Cobb-Douglas, and CES production functions.

Answer and Explanation:
The correct answer is c. Quantity of inputs and quantity of output. A production function specifies the maximum quantity of output that can be produced from given quantities of inputs (such as labor and capital). Costs only enter once input prices are attached to those input quantities, so the options that refer to costs describe cost functions rather than the production function.
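
As a quick numerical illustration of that input-output relationship, here is a tiny Python sketch using a hypothetical Cobb-Douglas technology (the coefficient values and input quantities are made up purely for illustration):

# Hypothetical Cobb-Douglas production function: Q = A * K**a * L**b
A, a, b = 2.0, 0.3, 0.7          # assumed technology level and output elasticities

def output(K, L):
    # Maps quantities of inputs (capital K, labor L) to the quantity of output Q.
    return A * (K ** a) * (L ** b)

print(output(K=16, L=81))        # quantity of output from 16 units of capital and 81 of labor
print(output(K=32, L=162))       # doubling both inputs doubles output, since a + b = 1 (constant returns to scale)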
CommonCrawl
\begin{document} \title{\bf Indirect Cross-validation for Density Estimation} \author{Olga Y. Savchuk, Jeffrey D. Hart, Simon J. Sheather} \date{} \maketitle \begin{abstract} A new method of bandwidth selection for kernel density estimators is proposed. The method, termed {\it indirect cross-validation}, or ICV, makes use of so-called {\it selection} kernels. Least squares cross-validation (LSCV) is used to select the bandwidth of a selection-kernel estimator, and this bandwidth is appropriately rescaled for use in a Gaussian kernel estimator. The proposed selection kernels are linear combinations of two Gaussian kernels, and need not be unimodal or positive. Theory is developed showing that the relative error of ICV bandwidths can converge to 0 at a rate of $n^{-1/4}$, which is substantially better than the $n^{-1/10}$ rate of LSCV. Interestingly, the selection kernels that are best for purposes of bandwidth selection are very poor if used to actually estimate the density function. This property appears to be part of the larger and well-documented paradox to the effect that ``the harder the estimation problem, the better cross-validation~ performs.'' The ICV method uniformly outperforms LSCV in a simulation study, a real data example, and a simulated example in which bandwidths are chosen locally. \noindent KEY WORDS: Kernel density estimation; Bandwidth selection; Cross-validation; Local cross-validation. \end{abstract} \section{Introduction} Let $X_1,\ldots,X_n$ be a random sample from an unknown density $f$. A kernel density estimator of $f(x)$ is \begin{equation} \label{eq:KDE} \hat f_h(x)=\frac{1}{nh}\sum_{i=1}^n K\Bigl(\frac{x-X_i}{h}\Bigr), \end{equation} where $h>0$ is a smoothing parameter, also known as the bandwidth, and $K$ is the kernel, which is generally chosen to be a unimodal probability density function that is symmetric about zero and has finite variance. A popular choice for $K$ is the Gaussian kernel: $\phi(u)=(2\pi)^{-1/2}\exp(-u^2/2)$. To distinguish between estimators with different kernels, we shall refer to estimator \eqref{eq:KDE} with given kernel $K$ as a {\it $K$-kernel estimator}. Choosing an appropriate bandwidth is vital for the good performance of a kernel estimate. This paper is concerned with a new method of data-driven bandwidth selection that we call {\it indirect cross-validation} (ICV). Many data-driven methods of bandwidth selection have been proposed. The two most widely used are least squares cross-validation, proposed independently by~\citeN{Rudemo:LSCV} and~\citeN{Bowman:LSCV}, and the~\citeN{Sheather:PI} plug-in method. Plug-in produces more stable bandwidths than does cross-validation, and hence is the currently more popular method. Nonetheless, an argument can be made for cross-validation since it requires fewer assumptions than plug-in and works well when the density is difficult to estimate; see \citeN{Loader:Classical}. A survey of bandwidth selection methods is given by~\citeN{Jones:survey}. A number of modifications of LSCV has been proposed in an attempt to improve its performance. These include the biased cross-validation~ method of~\citeN{Scott:UCV}, a method of~\citeN{Chiu:CV}, the trimmed cross-validation~ of~\citeN{Feluch:spacing}, the modified cross-validation~ of~\citeN{Stute}, and the method of~\citeN{Ahmad} based on kernel contrasts. The ICV method is similar in spirit to one-sided cross-validation (OSCV), which is another modification of cross-validation proposed in the regression context by~\citeN{HartYi}. 
As in OSCV, ICV initially chooses the bandwidth of an $L$-kernel estimator using least squares cross-validation. Multiplying the bandwidth chosen at this initial stage by a known constant results in a bandwidth, call it $\hat h_{ICV}$, that is appropriate for use in a Gaussian kernel estimator. A popular means of judging a kernel estimator is the mean integrated squared error, i.e., $MISE(h)=E\left[ISE(h)\right]$, where $$ ISE(h)=\int_{-\infty}^\infty \left(\hat f_h(x)-f(x)\right)^2\,dx. $$ Letting $h_0$ be the bandwidth that minimizes $MISE(h)$ when the kernel is Gaussian, we will show that the mean squared error of $\hat h_{ICV}$ as an estimator of $h_0$ converges to 0 at a faster rate than that of the ordinary LSCV bandwidth. We also describe an unexpected bonus associated with ICV, namely that, unlike LSCV, it is robust to rounded data. A fairly extensive simulation study and two data analyses confirm that ICV performs better than ordinary cross-validation in finite samples. \section{Description of indirect cross-validation} We begin with some notation and definitions that will be used subsequently. For an arbitrary function $g$, define \[ R(g)=\int g(u)^2\,du,\quad \mu_{jg}=\int u^j g(u)\,du. \] The LSCV criterion is given by $$ LSCV(h)=R(\hat f_h)-\frac{2}{n}\sum_{i=1}^n\hat f_{h,-i}(X_i), $$ where, for $i=1,\ldots,n$, $\hat f_{h,-i}$ denotes a kernel estimator using all the original observations except for $X_i$. When $\hat f_h$ uses kernel $K$, $LSCV$ can be written as \begin{eqnarray}\label{eq:LSCV1} LSCV(h)&=&\frac{1}{nh}R(K)+\frac{1}{n^2h}\sum_{i\neq j}\int K(t)K\Bigl(t+\frac{X_i-X_j}{h}\Bigr)\,dt\notag\\ &&-\frac{2}{n(n-1)h}\sum_{i\neq j}K\Bigl(\frac{X_i-X_j}{h}\Bigr). \end{eqnarray} It is well known that $LSCV(h)$ is an unbiased estimator of $MISE(h)-\int f^2(x)\,dx$, and hence the minimizer of $LSCV(h)$ with respect to $h$ is denoted $\hat h_{UCV}$. \subsection{The basic method}\label{sec:meth} Our aim is to choose the bandwidth of a {\it second order} kernel estimator. A second order kernel integrates to 1, has first moment 0, and finite, nonzero second moment. In principle our method can be used to choose the bandwidth of any second order kernel estimator, but in this article we restrict attention to $K\equiv \phi$, the Gaussian kernel. It is well known that a $\phi$-kernel estimator has asymptotic mean integrated squared error (MISE) within 5\% of the minimum among all positive, second order kernel estimators. Indirect cross-validation may be described as follows: \begin{itemize} \item Select the bandwidth of an $L$-kernel estimator using least squares cross-validation, and call this bandwidth $\hat b_{UCV}$. The kernel $L$ is a second order kernel that is a linear combination of two Gaussian kernels, and will be discussed in detail in Section \ref{sec:wildk}. \item Assuming that the underlying density $f$ has second derivative which is continuous and square integrable, the bandwidths $h_n$ and $b_n$ that asymptotically minimize the $MISE$ of $\phi$- and $L$-kernel estimators, respectively, are related as follows: \begin{eqnarray}\label{eq:hnbn} h_n=\left(\frac{R(\phi)\mu_{2L}^2}{R(L)\mu_{2\phi}^2}\right)^{1/5}b_n\equiv Cb_n. \end{eqnarray} \item Define the indirect cross-validation bandwidth by $\hat h_{ICV}=C\hat b_{UCV}$. Importantly, the constant $C$ depends on no unknown parameters.
Expression (\ref{eq:hnbn}) and existing cross-validation theory suggest that $\hat h_{ICV}/h_0$ will at least converge to 1 in probability, where $h_0$ is the minimizer of $MISE$ for the $\phi$-kernel estimator. \end{itemize} Henceforth, we let $\hat h_{UCV}$ denote the bandwidth that minimizes $LSCV(h)$ with $K\equiv \phi$. Theory of~\citeN{H&M:Extent} and~\citeN{Scott:UCV} shows that the relative error $(\hat h_{UCV}-h_0)/h_0$ converges to 0 at the rather disappointing rate of $n^{-1/10}$. In contrast, we will show that $(\hat h_{ICV}-h_0)/h_0$ can converge to 0 at the rate $n^{-1/4}$. Kernels $L$ that are sufficient for this result are discussed next. \subsection{Selection kernels}\label{sec:wildk} We consider the family of kernels ${\cal L}=\{L(\,\cdot\,;\alpha,\sigma): \alpha\ge0,\sigma>0\}$, where, for all $u$, \begin{equation} \label{eq:L} L(u;\alpha,\sigma)=(1+\alpha)\phi(u)-\frac{\alpha}{\sigma}\phi\left(\frac{u}{\sigma}\right). \end{equation} Note that the Gaussian kernel is a special case of~\eqref{eq:L} when $\alpha=0$ or $\sigma=1$. Each member of ${\cal L}$ is symmetric about 0 and such that $\mu_{2L}=\int u^2L(u)\,du=1+\alpha-\alpha\sigma^2$. It follows that kernels in ${\cal L}$ are second order, with the exception of those for which $\sigma=\sqrt{(1+\alpha)/\alpha}$. The family ${\cal L}$ can be partitioned into three families: ${\cal L}_1$, ${\cal L}_2$ and ${\cal L}_3$. The first of these is $\mathcal L_1=\bigl\{L(\cdot;\alpha,\sigma): \alpha>0, \sigma<\frac{\alpha}{1+\alpha}\bigr\}$. Each kernel in ${\cal L}_1$ has a negative dip centered at $x=0$. For $\alpha$ fixed, the smaller $\sigma$ is, the more extreme the dip; and for fixed $\sigma$, the larger $\alpha$ is, the more extreme the dip. The kernels in $\mathcal L_1$ are ones that ``cut-out-the-middle.'' The second family is $\mathcal L_2=\bigl\{L(\cdot;\alpha,\sigma): \alpha>0, \frac{\alpha}{1+\alpha}\leq\sigma\leq1\bigr\}$. Kernels in $\mathcal L_2$ are densities which can be unimodal or bimodal. Note that the Gaussian kernel is a member of this family. The third sub-family is $\mathcal L_3=\bigl\{L(\cdot;\alpha,\sigma): \alpha>0, \sigma>1\}$, each member of which has negative tails. Examples of kernels in $\mathcal L_3$ are shown in Figure~\ref{fig:Lnegative}. \begin{figure} \caption{ Selection kernels in $\mathcal L_3$. The dotted curve corresponds to the Gaussian kernel, and each of the other kernels has $\alpha=6$. } \label{fig:Lnegative} \end{figure} Kernels in $\mathcal L_1$ and $\mathcal L_3$ are not of the type usually used for estimating $f$. Nonetheless, a worthwhile question is ``why not use $L$ for both cross-validation~ {\it and} estimation of $f$?'' One could then bypass the step of rescaling $\hat b_{UCV}$ and simply estimate $f$ by an $L$-kernel estimator with bandwidth $\hat b_{UCV}$. The ironic answer to this question is that the kernels in ${\cal L}$ that are best for cross-validation~ purposes are very inefficient for estimating $f$. Indeed, it turns out that an $L$-kernel estimator based on a sequence of ICV-optimal kernels has $MISE$ that does not converge to 0 faster than $n^{-1/2}$. In contrast, the $MISE$ of the best $\phi$-kernel estimator tends to 0 like $n^{-4/5}$. 
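
Before turning to why such inefficient kernels nevertheless help with bandwidth selection, the following is a minimal numerical sketch of the ICV recipe of Section \ref{sec:meth}. It is only an illustration of the definitions above, not the authors' implementation; the simulated data, the grid of candidate bandwidths, the integration grids and the values $\alpha=2.42$, $\sigma=5$ are arbitrary choices made for the example.

\begin{verbatim}
import numpy as np

def L_kernel(u, alpha, sigma):
    # Selection kernel L(u) = (1 + alpha)*phi(u) - (alpha/sigma)*phi(u/sigma)
    phi = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)
    return (1 + alpha) * phi(u) - (alpha / sigma) * phi(u / sigma)

def gauss(u):
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def lscv(b, x, kernel, grid):
    # LSCV(b) = int fhat_b^2 - (2/n) * sum_i fhat_{b,-i}(X_i), computed numerically
    n = len(x)
    du = grid[1] - grid[0]
    fhat = kernel((grid[:, None] - x[None, :]) / b).sum(axis=1) / (n * b)
    int_f2 = np.sum(fhat ** 2) * du
    K = kernel((x[:, None] - x[None, :]) / b)
    np.fill_diagonal(K, 0.0)
    loo = K.sum(axis=1) / ((n - 1) * b)       # leave-one-out estimates at the X_i
    return int_f2 - 2.0 * loo.mean()

def icv_bandwidth(x, alpha=2.42, sigma=5.0):
    grid = np.linspace(x.min() - 3 * x.std(), x.max() + 3 * x.std(), 2001)
    bs = np.linspace(0.05, 2.0, 80) * x.std() * len(x) ** (-0.2)  # candidate b values
    Lk = lambda u: L_kernel(u, alpha, sigma)
    b_ucv = bs[int(np.argmin([lscv(b, x, Lk, grid) for b in bs]))]
    # rescaling constant C from h_n = C * b_n, with every piece computed numerically
    u = np.linspace(-8 * sigma, 8 * sigma, 20001)
    du = u[1] - u[0]
    RL, mu2L = np.sum(Lk(u) ** 2) * du, np.sum(u ** 2 * Lk(u)) * du
    Rphi, mu2phi = np.sum(gauss(u) ** 2) * du, np.sum(u ** 2 * gauss(u)) * du
    C = (Rphi * mu2L ** 2 / (RL * mu2phi ** 2)) ** 0.2
    return C * b_ucv

rng = np.random.default_rng(0)
x = rng.normal(size=200)            # simulated N(0,1) sample
print(icv_bandwidth(x))             # bandwidth to plug into the Gaussian-kernel estimator
\end{verbatim}
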
These facts fit with other cross-validation~ paradoxes, which include the fact that LSCV outperforms other methods when the density is highly structured, \citeN{Loader:Classical}, the improved performance of cross-validation~ in multivariate density estimation, \citeN{Sainetal:cv}, and its improvement when the true density is not smooth, \citeN{vanEs}. One could paraphrase these phenomena as follows: ``The more difficult the function is to estimate, the better cross-validation~ seems to perform.'' In our work, we have in essence made the function more difficult to estimate by using an inefficient kernel $L$. More details on the $MISE$ of $L$-kernel estimators may be found in~\citeN{Savchuk:thesis}. \section{Large sample theory}\label{sec:theory} The theory presented in this section provides the underpinning for our methodology. We first state a theorem on the asymptotic distribution of $\hat h_{ICV}$, and then derive asymptotically optimal choices for the parameters $\alpha$ and $\sigma$ of the selection kernel. \subsection{Asymptotic mean squared error of the ICV bandwidth}\label{sec:MSE} Classical theory of~\citeN{H&M:Extent} and~\citeN{Scott:UCV} entails that the bias of an LSCV bandwidth is asymptotically negligible in comparison to its standard deviation. We will show that the variance of an ICV bandwidth can converge to 0 at a faster rate than that of an LSCV bandwidth. This comes at the expense of a squared bias that is {\it not} negligible. However, we will show how to select $\alpha$ and $\sigma$ (the parameters of the selection kernel) so that the variance and squared bias are balanced and the resulting mean squared error tends to 0 at a faster rate than does that of the LSCV bandwidth. The optimal rate of convergence of the relative error $(\hat h_{ICV}-h_0)/h_0$ is $n^{-1/4}$, a substantial improvement over the infamous $n^{-1/10}$ rate for LSCV. Before stating our main result concerning the asymptotic distribution of $\hat h_{ICV}$, we define some notation: $$ \gamma(u)=\int L(w)L(w+u)\,du-2L(u),\quad \rho(u)=u\gamma'(u), $$ $$ T_n(b)={\sum\sum}_{1\le i<j\le n}\left[\gamma\left(\frac{X_i-X_j}{b}\right)+\rho\left(\frac{X_i-X_j}{b}\right)\right], $$ $$ T_n^{(j)}(b)=\frac{\partial^jT_n(b)}{\partial b^j},\quad j=1,2, $$ $$ A_\alpha=\frac{3}{\sqrt{2\pi}}(1+\alpha)^2\left[\frac{1}{8}(1+\alpha)^2-\frac{8}{9\sqrt{3}}(1+\alpha)+\frac{1}{\sqrt2}\right], $$ $$ C_\alpha=\frac{\sqrt{2A_\alpha}(2\sqrt{\pi})^{9/10}}{5(1+\alpha)^{9/5}\alpha^{1/5}} \qquad {\rm and}\qquad D_\alpha=\frac{3}{20}\left(\frac{(1+\alpha)^2}{2\alpha^2\sqrt\pi}\right)^{2/5}. $$ Note that to simplify notation, we have suppressed the fact that $L$, $\gamma$ and $\rho$ depend on the parameters $\alpha$ and $\sigma$. An outline of the proof of the following theorem is given in the Appendix. \noindent {\bf Theorem.} \ {\sl Assume that $f$ and its first five derivatives are continuous and bounded and that $f^{(6)}$ exists and is Lipschitz continuous. Suppose also that \begin{equation}\label{cond1} (\hat b_{UCV}-b_0)\frac{T_n^{(2)}(\tilde b)}{T_n^{(1)}(b_0)}=o_p(1) \end{equation} for any sequence of random variables $\tilde b$ such that $|\tilde b-b_0|\le|\hat b_{UCV}-b_0|$, a.s. 
Then, if $\sigma=o(n)$ and $\alpha$ is fixed, $$ \frac{\hat h_{ICV}-h_0}{h_0}=Z_nS_n+B_n+o_p(S_n+B_n), $$ as $n\rightarrow\infty$ and $\sigma\rightarrow\infty$, where $Z_n$ converges in distribution to a standard normal random variable, \begin{eqnarray}\label{eq:SD} S_n=\left(\frac{1}{\sigma^{2/5}n^{1/10}}\right) \frac{R(f)^{1/2}}{R(f'')^{1/10}}C_\alpha, \end{eqnarray} and \begin{eqnarray}\label{eq:Bias} B_n=\left(\frac{\sigma}{n}\right)^{2/5}\frac{R(f''')}{R(f'')^{7/5}}D_\alpha. \end{eqnarray} } \noindent {\bf Remarks} \noindent \begin{itemize} \item[R1.] Assumption (\ref{cond1}) is only slightly stronger than assuming that $\hat b_{UCV}/b_0$ converges in probability to 1. To avoid making our paper overly technical we have chosen not to investigate sufficient conditions for (\ref{cond1}). However, this can be done using techniques as in~\citeN{Hall:CV} and~\citeN{H&M:Extent}. \item[R2.] Theorem 4.1 of~\citeN{Scott:UCV} on asymptotic normality of LSCV bandwidths is not immediately applicable to our setting for at least three reasons: the kernel $L$ is not positive, it does not have compact support, and, most importantly, it changes with $n$ via the parameter $\sigma$. \item[R3.] The assumption of six derivatives for $f$ is required for a precise quantification of the asymptotic bias of $\hat h_{ICV}$. Our proof of asymptotic normality of $\hat b_{UCV}$ only requires that $f$ be four times differentiable, which coincides with the conditions of Theorem 4.1 in~\citeN{Scott:UCV}. \item[R4.] The asymptotic bias $B_n$ is positive, implying that the ICV bandwidth tends to be larger than the optimal bandwidth. This is consistent with our experience in numerous simulations. \end{itemize} In the next section we apply the results of our theorem to determine asymptotically optimal choices for $\alpha$ and $\sigma$. \subsection{Minimizing asymptotic mean squared error} The limiting distribution of $(\hat h_{ICV}-h_0)/h_0$ has second moment $S_n^2+B_n^2$, where $S_n$ and $B_n$ are defined by (\ref{eq:SD}) and (\ref{eq:Bias}). Minimizing this expression with respect to $\sigma$ yields the following asymptotically optimal choice for $\sigma$: \begin{eqnarray}\label{eq:sigopt} \sigma_{n,opt}=n^{3/8}\left(\frac{C_\alpha}{D_\alpha}\right)^{5/4}\left[\frac{R(f)R(f'')^{13/5}}{R(f''')^2}\right]^{5/8}. \end{eqnarray} The corresponding asymptotically optimal mean squared error is \begin{eqnarray}\label{eq:optmse} MSE_{n,opt}=n^{-1/2}C_\alpha D_\alpha\left[\frac{R(f''')R(f)^{1/2}}{R(f'')^{3/2}}\right], \end{eqnarray} which confirms our previous claim that the relative error of $\hat h_{ICV}$ converges to 0 at the rate $n^{-1/4}$. The corresponding rates for LSCV and the Sheather-Jones plug-in rule are $n^{-1/10}$ and $n^{-5/14}$, respectively. Because $\alpha$ is not confounded with $f$ in $MSE_{n,opt}$, we may determine a single optimal value of $\alpha$ that is independent of $f$. The function $C_\alpha D_\alpha$ of $\alpha$ is minimized at $\alpha_0=2.4233$. Furthermore, small choices of $\alpha$ lead to an arbitrarily large increase in mean squared error, while the MSE at $\alpha=\infty$ is only about 1.33 times that at the minimum. Our theory to this point applies to kernels in ${\cal L}_3$, i.e., kernels with negative tails. \citeN{Savchuk:thesis} has developed similar theory for the case where $\sigma\ra0$, which corresponds to $L\in {\cal L}_1$, i.e., kernels that apply negative weights to the smallest spacings in the LSCV criterion. 
Interestingly, the same optimal rate of $n^{-1/4}$ results from letting $\sigma\ra0$. However, when the optimal values of $(\alpha,\sigma)$ are used in the respective cases ($\sigma\ra0$ and $\sigma\rightarrow\infty$), the limiting ratio of optimum mean squared errors is $0.752$, with $\sigma\rightarrow\infty$ yielding the smaller error. Our simulation studies confirm that using $L$ with large $\sigma$ does lead to more accurate estimation of the optimal bandwidth. \section{Practical choice of $\alpha$ and $\sigma$\label{sec:MSEopt}} In order to have an idea of how good choices of $\alpha$ and $\sigma$ vary with $n$ and $f$, we determined the minimizers of the asymptotic mean squared error of $\hat h_{ICV}$ for various sample sizes and densities. In doing so, we considered a single expression for the asymptotic mean squared error that is valid for either large or small values of $\sigma$. Furthermore, we use a slightly enhanced version of the asymptotic bias of $\hat h_{ICV}$. The first order bias of $\hat h_{ICV}$ is $Cb_0-h_0$, or $C(b_0-b_n)+(h_n-h_0)$, where \begin{eqnarray}\label{eq:b_n} b_n=\left(\frac{R(L)}{\mu_{2L}^2R(f'')}\right)^{1/5}n^{-1/5}\quad{\rm and}\quad h_n=\left(\frac{R(\phi)}{\mu_{2\phi}^2R(f'')}\right)^{1/5}n^{-1/5}. \end{eqnarray} Now, the term $h_n-h_0$ is of smaller order asymptotically than $C(b_0-b_n)$ and hence was deleted in the theory of Section \ref{sec:theory}. Here we retain $h_n-h_0$, and hence the $\alpha$ that minimizes the mean squared error depends on both $n$ and $f$. We considered the following five normal mixtures defined in the article by~\citeN{MarronWand:MISE}: \begin{center} \begin{tabular}{ll} Gaussian density:&$N(0,1)$\\ Skewed unimodal density:&$\frac{1}{5}N(0,1)+\frac{1}{5}N\Bigl(\frac{1}{2},\bigl(\frac{2}{3}\bigr)^2\Bigr)+\frac{3}{5}N\Bigl(\frac{13}{12},\bigl(\frac{5}{9}\bigr)^2\Bigr)$\\ Bimodal density:&$\frac{1}{2}N\Bigl(-1,\bigl(\frac{2}{3}\bigr)^2\Bigr)+\frac{1}{2}N\Bigl(1,\bigl(\frac{2}{3}\bigr)^2\Bigr)$\\ Separated bimodal density:&$\frac{1}{2}N\Bigl(-\frac{3}{2},\bigl(\frac{1}{2}\bigr)^2\Bigr)+\frac{1}{2}N\Bigl(\frac{3}{2},\bigl(\frac{1}{2}\bigr)^2\Bigr)$\\ Skewed bimodal density:&$\frac{3}{4}N(0,1)+\frac{1}{4}N\Bigl(\frac{3}{2},\bigl(\frac{1}{3}\bigr)^2\Bigr)$.\\ \end{tabular} \end{center} These choices for $f$ provide a fairly representative range of density shapes. It is worth noting that the asymptotically optimal $\sigma$ (expression (\ref{eq:sigopt})) is free of location and scale. We may thus choose a single representative of a location-scale family when investigating the effect of $f$. The following remarks summarize our findings about $\alpha$ and $\sigma$. \begin{itemize} \item For each $n$, the optimal value of $\sigma$ ($\alpha$) is larger (smaller) for the unimodal densities than for the bimodal ones. \item All of the MSE-optimal $\alpha$ and $\sigma$ correspond to kernels from $\mathcal L_3$, the family of negative-tailed kernels. \item For each density, the optimal $\alpha$ decreases monotonically with $n$. Recall from Section 3.2 that the asymptotically optimal $\alpha$ is $2.42$. For each unimodal density, the optimal $\alpha$ is within 13.5\% of 2.42 at $n=1000$, and for each bimodal density is within 18\% of 2.42 when $n$ is 20,000. \end{itemize} In practice it would be desirable to have choices of $\alpha$ and $\sigma$ that would adapt to the $n$ and $f$ at hand. However, attempting to estimate optimal values of $\alpha$ and $\sigma$ is potentially as difficult as the bandwidth selection problem itself. 
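As an aside for readers who wish to check the limiting value $\alpha_0\approx 2.42$ cited above, the minimization of $C_\alpha D_\alpha$ is easy to reproduce numerically. The following R sketch (added here purely for illustration; it is not part of the original derivation) codes the constants $A_\alpha$, $C_\alpha$ and $D_\alpha$ defined in Section~\ref{sec:MSE} and locates the minimizer of $C_\alpha D_\alpha$ on a fine grid:
\begin{verbatim}
## Constants A_alpha, C_alpha, D_alpha as defined before the theorem,
## and a grid search for the minimizer of C_alpha * D_alpha.
A_alpha <- function(a) 3 / sqrt(2 * pi) * (1 + a)^2 *
  ((1 / 8) * (1 + a)^2 - 8 / (9 * sqrt(3)) * (1 + a) + 1 / sqrt(2))
C_alpha <- function(a) sqrt(2 * A_alpha(a)) * (2 * sqrt(pi))^(9 / 10) /
  (5 * (1 + a)^(9 / 5) * a^(1 / 5))
D_alpha <- function(a) (3 / 20) * ((1 + a)^2 / (2 * a^2 * sqrt(pi)))^(2 / 5)

a <- seq(0.2, 50, by = 1e-4)            # grid of alpha values
a[which.min(C_alpha(a) * D_alpha(a))]   # should be close to 2.4233
\end{verbatim}
The same functions can be used to verify that $C_\alpha D_\alpha$ increases without bound as $\alpha\rightarrow 0$ and levels off at roughly 1.33 times its minimum value for large $\alpha$.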
We have built a practical model for choosing $\alpha$ and $\sigma$ by using polynomial regression. The independent variable was $\log_{10}(n)$ and the dependent variables were the MSE-optimal values of $\log_{10}(\alpha)$ and $\log_{10}(\sigma)$ for the five densities defined above. Using a sixth-degree polynomial for $\alpha$ and a quadratic for $\sigma$, we arrived at the following models for $\alpha$ and $\sigma$:
\begin{equation}
\label{eq:model}
\begin{array}{l}
\alpha_{mod}=10^{3.390-1.093\log_{10}(n)+0.025\log_{10}(n)^3-0.00004\log_{10}(n)^6}\\
\sigma_{mod}=10^{-0.58+0.386\log_{10}(n)-0.012\log_{10}(n)^2},\quad 100\leq n\leq500000.
\end{array}
\end{equation}
To the extent that unimodal densities are more prevalent than multimodal densities in practice, these model values are biased towards the bimodal cases. Our extensive experience shows that the penalty for using good bimodal choices for $\alpha$ and $\sigma$ when the density is in fact unimodal is an increase in the upward bias of $\hat h_{ICV}$. Our implementation of ICV, however, guards against oversmoothing by using an objective upper bound on the bandwidth, as we explain in detail in Section 7. We thus feel confident in recommending model (\ref{eq:model}) for choosing $\alpha$ and $\sigma$ in practice, at least until a better method is proposed. Indeed, this model is what we used to choose $\alpha$ and $\sigma$ in the simulation study reported upon in Section 7.
\section{Robustness of ICV to data rounding\label{sec:Robust}}
\citeN[p.52]{Silverman:book} showed that if the data are rounded to such an extent that the number of pairs $i<j$ for which $X_i=X_j$ is above a threshold, then $LSCV(h)$ approaches $-\infty$ as $h$ approaches zero. This threshold is $0.27n$ for the Gaussian kernel. \citeN{Chiu:discretization} showed that for data with ties, the behavior of $LSCV(h)$ as $h\rightarrow 0$ is determined by the balance between $R(K)$ and $2K(0)$. In particular, $\lim_{h\ra0}LSCV(h)$ is $-\infty$ and $\infty$ when $R(K)<2K(0)$ and $R(K)>2K(0)$, respectively. The former condition holds necessarily if $K$ is nonnegative and has its maximum at $0$. This means that all the traditional kernels have the problem of choosing $h=0$ when the data are rounded. Recall that selection kernels~\eqref{eq:L} are not restricted to be nonnegative. It turns out that there exist $\alpha$ and $\sigma$ for which $R(L)>2L(0)$ holds. We say that selection kernels satisfying this condition are robust to rounding. It can be verified that the negative-tailed selection kernels with $\sigma>1$ are robust to rounding when
\begin{equation}
\label{eq:roundalpha}
\alpha>\frac{-a_\sigma+\sqrt{a_\sigma^2+(2-1/\sqrt2)b_\sigma}}{b_\sigma},
\end{equation}
where $a_\sigma=\left(\frac{1}{\sqrt2}-\frac{1}{\sqrt{1+\sigma^2}}-1+\frac{1}{\sigma}\right)$ and $b_\sigma=\left(\frac{1}{\sqrt2}-\frac{2}{\sqrt{1+\sigma^2}}+ \frac{1}{\sigma\sqrt2}\right)$. In fact, all the selection kernels corresponding to model (\ref{eq:model}) are robust to rounding. Figure~\ref{fig:robustwild} shows the region~\eqref{eq:roundalpha} and also the curve defined by model~\eqref{eq:model} for $100\leq n\leq 500000$.
\begin{figure}
\caption{Selection kernels robust to rounding have $\alpha$ and $\sigma$ above the solid curve. The dashed curve corresponds to the model-based selection kernels.}
\label{fig:robustwild}
\end{figure}
Interestingly, the boundary separating robust from nonrobust kernels almost coincides with the $(\alpha,\sigma)$ pairs defined by that model.
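As a quick numerical illustration of the robustness condition (again an added sketch, not taken from the original analysis), one can bypass~\eqref{eq:roundalpha} and check $R(L)>2L(0)$ directly by numerical integration, using the selection kernel~\eqref{eq:L}, $L(u)=(1+\alpha)\phi(u)-\alpha\phi(u/\sigma)/\sigma$, evaluated at the model values~\eqref{eq:model}:
\begin{verbatim}
## Check R(L) > 2 L(0) for the model-based (alpha, sigma) at sample size n.
L <- function(u, alpha, sigma)
  (1 + alpha) * dnorm(u) - alpha * dnorm(u / sigma) / sigma

robust_to_rounding <- function(n) {
  lg    <- log10(n)
  alpha <- 10^(3.390 - 1.093 * lg + 0.025 * lg^3 - 0.00004 * lg^6)
  sigma <- 10^(-0.58 + 0.386 * lg - 0.012 * lg^2)
  RL    <- integrate(function(u) L(u, alpha, sigma)^2, -Inf, Inf)$value
  c(alpha = alpha, sigma = sigma, RL = RL, twoL0 = 2 * L(0, alpha, sigma))
}

robust_to_rounding(500)
\end{verbatim}
For $n=500$, for example, this gives $R(L)\approx 8.5$ against $2L(0)\approx 4.6$, consistent with the claim that the model-based kernels are robust to rounding and with the region defined by~\eqref{eq:roundalpha}.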
\section{Local ICV} A local version of cross-validation for density estimation was proposed and analyzed independently by \citeN{HaSch} and \citeN{MSV}. A local method allows the bandwidth to vary with $x$, which is desirable when the smoothness of the underlying density varies sufficiently with $x$. \citeN{FHMP} proposed a different method of local smoothing that is a hybrid of plug-in and cross-validation methods. Here we propose that ICV be performed locally. The method parallels that of \citeN{HaSch} and \citeN{MSV}, with the main difference being that each local bandwidth is chosen by ICV rather than LSCV. We suggest using the {\it smallest} local minimizer of the ICV curve, since ICV does not have LSCV's tendency to undersmooth. Let $\hat f_b$ be a kernel estimate that employs a kernel in the class ${\cal L}$, and define, at the point $x$, a local ICV curve by $$ ICV(x,b)=\frac{1}{w}\int_{-\infty}^\infty\phi\left(\frac{x-u}{w}\right)\hat f_b^2(u)\,du-\frac{2}{nw}\sum_{i=1}^n\phi\left(\frac{x-X_i}{w}\right)\hat f_{b,-i}(X_i), \quad b>0. $$ The quantity $w$ determines the degree to which the cross-validation~ is local, with a very large choice of $w$ corresponding to global ICV. Let $\hat b(x)$ be the minimizer of $ICV(x,b)$ with respect to $b$. Then the bandwidth of a Gaussian kernel estimator at the point $x$ is taken to be $\hat h(x)=C\hat b(x)$. The constant $C$ is defined by (\ref{eq:hnbn}), and choice of $\alpha$ and $\sigma$ in the selection kernel will be discussed in Section 8. Local LSCV can be criticized on the grounds that, at any $x$, it promises to be even more unstable than global LSCV since it (effectively) uses only a fraction of the $n$ observations. Because of its much greater stability, ICV seems to be a much more feasible method of local bandwidth selection than does LSCV. We provide evidence of this stability by example in Section 8. \section{Simulation study \label{sec:Sims}} The primary goal of our simulation study is to compare ICV with ordinary LSCV. However, we will also include the Sheather-Jones plug-in method in the study. We considered the four sample sizes $n=100$, 250, 500 and 5000, and sampled from each of the five densities listed in Section~\ref{sec:MSEopt}. For each combination of density and sample size, 1000 replications were performed. Here we give only a synopsis of our results. The reader is referred to \citeN{SHS} for a much more detailed account of what we observed. Let $\hat h_0$ denote the minimizer of $ISE(h)$ for a Gaussian kernel estimator. For each replication, we computed $\hat h_0$, $\hat h_{ICV}^*$, $\hat h_{UCV}$ and $\hat h_{SJPI}$. The definition of $\hat h_{ICV}^*$ is $\min(\hat h_{ICV},\hat h_{OS})$, where $\hat h_{OS}$ is the oversmoothed bandwidth of \citeN{overTerr}. Since $\hat h_{ICV}$ tends to be biased upwards, this is a convenient means of limiting the bias. In all cases the parameters $\alpha$ and $\sigma$ in the selection kernel $L$ were chosen according to model (\ref{eq:model}). For any random variable $Y$ defined in each replication of our simulation, we denote the average of $Y$ over all replications (with $n$ and $f$ fixed) by $\widehat E(Y)$. Our main conclusions may be summarized as follows. \begin{itemize} \item The ratio $\widehat E(\hat h_{ICV}^*-\widehat E\hat h_0)^2/\widehat E(\hat h_{UCV}-\widehat E\hat h_0)^2$ ranged between 0.04 and 0.70 in the sixteen settings excluding the skewed bimodal density. 
For the skewed bimodal, the ratio was 0.84, 1.27, 1.09, and 0.40 at the respective sample sizes 100, 250, 500 and 5000. The fact that this ratio was larger than 1 in two cases was a result of ICV's bias, since the sample standard deviation of the ICV bandwidth was smaller than that for the LSCV bandwidth in all twenty settings. \item The ratio $\widehat{E}\bigl(ISE(\hat{h}_{ICV}^*)/ISE(\hat h_0)\bigr)/ \widehat{E}\bigl(ISE(\hat{h}_{UCV})/ISE(\hat h_0)\bigr)$ was smaller than 1 for every combination of density and sample size. For the two ``large bias'' cases mentioned in the previous remark the ratio was 0.92. \item The ratio $\widehat{E}\bigl(ISE(\hat{h}_{ICV}^*)/ISE(\hat h_0)\bigr)/ \widehat{E}\bigl(ISE(\hat{h}_{SJPI})/ISE(\hat h_0)\bigr)$ was smaller than 1 in six of the twenty cases considered. Among the other fourteen cases, the ratio was between 1.00 and 1.15, exceeding 1.07 just twice. \item Despite the fact that the LSCV bandwidth is asymptotically normally distributed (see~\citeN{H&M:Extent}), its distribution in finite samples tends to be skewed to the left. In contrast, our simulations show that the ICV bandwidth distribution is nearly symmetric. \end{itemize} \section{Examples} In this Section we illustrate the use of ICV with two examples, one involving credit scores from Fannie Mae and the other simulated data. The first example is provided to compare the ICV, LSCV, and Sheather-Jones plug-in methods for choosing a global bandwidth. The second example illustrates the benefit of applying ICV locally. \subsection{Mortgage defaulters} In this example we analyze the credit scores of Fannie Mae clients who defaulted on their loans. The mortgages considered were purchased in ``bulk'' lots by Fannie Mae from primary banking institutions. The data set was taken from the website \url{http://www.dataminingbook.com} associated with~\citeN{Shmueli:book}. In Figure~\ref{fig:KDE_Mortgage} we have plotted an unsmoothed frequency histogram and the LSCV, ICV and Sheather-Jones plug-in density estimates for the credit scores. The class interval size in the unsmoothed histogram was chosen to be 1, which is equal to the accuracy to which the data have been reported. It turns out that the LSCV curve tends to $-\infty$ when $h\rightarrow 0$, but has a local minimum at about 2.84. Using $h=2.84$ results in a severely undersmoothed estimate. Both the Sheather-Jones plug-in and ICV density estimates show a single mode around 675 and look similar, with the ICV estimate being somewhat smoother. \begin{figure} \caption{Unsmoothed histogram and kernel density estimates for credit scores. } \label{fig:KDE_Mortgage} \end{figure} Interestingly, a high percentage of the defaulters have credit scores less than 620, which many lenders consider the minimum score that qualifies for a loan; see \citeN{Desmond}. \subsection{Local ICV: simulated example} For this example we took five samples of size $n=1500$ from the kurtotic unimodal density defined in~\citeN{MarronWand:MISE}. First, we note that even the bandwidth that minimizes $ISE(h)$ results in a density estimate that is much too wiggly in the tails. On the other hand, using local versions of either ICV or LSCV resulted in much better density estimates, with local ICV producing in each case a visually better estimate than that produced by local LSCV. For the local LSCV and ICV methods we considered four values of $w$ ranging from 0.05 to 0.3. A selection kernel with $\alpha=6$ and $\sigma=6$ was used in local ICV. 
This $(\alpha,\sigma)$ choice performs well for global bandwidth selection when the density is unimodal, and hence seems reasonable for local bandwidth selection since locally the density should have relatively few features. For a given $w$, the local ICV and LSCV bandwidths were found for $x=-3,-2.9,\ldots,2.9,3$, and were interpolated at other $x\in[-3,3]$ using a spline. Average squared error (ASE) was used to measure closeness of a local density estimate $\hat f_\ell$ to the true density $f$: \[ ASE=\frac{1}{61}\sum_{i=1}^{61}(\hat f_\ell(x_i)-f(x_i))^2. \] Figure~\ref{fig:Loc_estimates} shows results for one of the five samples. Estimates corresponding to the smallest and the largest values of $w$ are provided. The local ICV method performed similarly well for all values of $w$ considered, whereas all the local LSCV estimates were very unsmooth, albeit with some improvement in smoothness as $w$ increased. \begin{figure} \caption{The solid curves correspond to the local LSCV and ICV density estimates, whereas the dashed curves show the kurtotic unimodal density. } \label{fig:Loc_estimates} \end{figure} \section{Summary} A widely held view is that kernel choice is not terribly important when it comes to estimation of the underlying curve. In this paper we have shown that kernel choice can have a dramatic effect on the properties of cross-validation. Cross-validating kernel estimates that use Gaussian or other traditional kernels results in highly variable bandwidths, a result that has been well-known since at least 1987. We have shown that certain kernels with low efficiency for estimating $f$ can produce cross-validation bandwidths whose relative error converges to 0 at a faster rate than that of Gaussian-kernel cross-validation bandwidths. The kernels we have studied have the form $(1+\alpha)\phi(u)-\alpha\phi(u/\sigma)/\sigma$, where $\phi$ is the standard normal density and $\alpha$ and $\sigma$ are positive constants. The interesting selection kernels in this class are of two types: unimodal, negative-tailed kernels and ``cut-out the middle kernels,'' i.e., bimodal kernels that go negative between the modes. Both types of kernels yield the rate improvement mentioned in the previous paragraph. However, the best negative-tailed kernels yield bandwidths with smaller asymptotic mean squared error than do the best ``cut-out-the-middle'' kernels. A model for choosing the selection kernel parameters has been developed. Use of this model makes our method completely automatic. A simulation study and examples reveal that use of this method leads to improved performance relative to ordinary LSCV. To date we have considered only selection kernels that are a linear combination of two normal densities. It is entirely possible that another class of kernels would work even better. In particular, a question of at least theoretical interest is whether or not the convergence rate of $n^{-1/4}$ for the relative bandwidth error can be improved upon. \section{Appendix} Here we outline the proof of our theorem in Section \ref{sec:theory}. A much more detailed proof is available from the authors. 
We start by writing \begin{eqnarray*} T_n(b_0)&=&T_n(\hat b_{UCV})+(b_0-\hat b_{UCV})T_n^{(1)}(b_0)+\frac{1}{2} (b_0-\hat b_{UCV})^2T_n^{(2)}(\tilde b)\\ &=&-nR(L)/2+(b_0-\hat b_{UCV})T_n^{(1)}(b_0)+\frac{1}{2} (b_0-\hat b_{UCV})^2T_n^{(2)}(\tilde b), \end{eqnarray*} where $\tilde b$ is between $b_0$ and $\hat b_{UCV}$, and so $$ (\hat b_{UCV}-b_0)\lp1-(\hat b_{UCV}-b_0)\frac{T_n^{(2)}(\tilde b)}{2T_n^{(1)}(b_0)}\right)=\frac{T_n(b_0)+nR(L)/2}{-T_n^{(1)}(b_0)}. $$ Using condition (\ref{cond1}) we may write the last equation as \begin{eqnarray}\label{eq:delta} (\hat b_{UCV}-b_0)=\frac{T_n(b_0)+nR(L)/2}{-T_n^{(1)}(b_0)}+o_p\left(\frac{T_n(b_0)+nR(L)/2}{-T_n^{(1)}(b_0)}\right). \end{eqnarray} Defining $s_n^2=\mbox{Var}(T_n(b_0))$ and $\beta_n=E(T_n(b_0))+nR(L)/2$, we have $$ \frac{T_n(b_0)+nR(L)/2}{-T_n^{(1)}( b_0)}=\frac{T_n(b_0)-ET_n(b_0)}{s_n}\cdot\frac{s_n}{-T_n^{(1)}(b_0)}+\frac{\beta_n}{-T_n^{(1)}(b_0)}. $$ Using the central limit theorem of~\citeN{Hall:CLT}, it can be verified that $$ Z_n\equiv\frac{T_n(b_0)-ET_n(b_0)}{s_n}\buildrel {\cal D}\over \longrightarrow N(0,1). $$ Computation of the first two moments of $T_n^{(1)}(b_0)$ reveals that $$ \frac{-T_n^{(1)}(b_0)}{5R(f'')b_0^4\mu_{2L}^2n^2/2}\buildrel p\over \longrightarrow 1, $$ and so $$ \frac{T_n(b_0)+nR(L)/2}{-T_n^{(1)}( b_0)}=Z_n\cdot \frac{2s_n}{5R(f'')b_0^4\mu_{2L}^2n^2} +\frac{2\beta_n}{5R(f'')b_0^4\mu_{2L}^2n^2}+ o_p\left( \frac{s_n+\beta_n}{b_0^4\mu_{2L}^2n^2}\right). $$ At this point we need the first two moments of $T_n(b_0)$. A fact that will be used frequently from this point on is that $\mu_{2k,L}=O(\sigma^{2k})$, $k=1,2,\ldots$. Using our assumptions on the smoothness of $f$, Taylor series expansions, symmetry of $\gamma$ about 0 and $\mu_{2\gamma}=0$, $$ ET_n(b_0)=-\frac{n^2}{12}b_0^5\mu_{4\gamma}R(f'')+\frac{n^2}{240}b_0^7\mu_{6\gamma}R(f''')+O(n^2b_0^8\sigma^7). $$ Recalling the definition of $b_n$ from (\ref{eq:b_n}), we have \begin{eqnarray}\label{eq:ET} \beta_n&=&-\frac{n^2}{12}b_0^5\mu_{4\gamma}R(f'')+\frac{n^2}{240}b_0^7\mu_{6\gamma}R(f''')\notag\\ &&+\frac{n^2}{2}b_n^5\mu_{2L}^2R(f'')+O(n^2b_0^8\sigma^7). \end{eqnarray} Let $MISE_L(b)$ denote the MISE of an $L$-kernel estimator with bandwidth $b$. Then $MISE_L'(b_n)=(b_n-b_0)MISE_L''(b_0)+o\left[(b_n-b_0)MISE_L''(b_0)\right]$, implying that \begin{eqnarray}\label{eq:secord} b_n^5=b_0^5+5b_0^4\frac{MISE_L'(b_n)}{MISE_L''(b_0)}+o\left[b_0^4\frac{MISE_L'(b_n)}{MISE_L''(b_0)}\right]. \end{eqnarray} Using a second order approximation to $MISE'_L(b)$ and a first order approximation to $MISE''_L(b)$, we then have $$ b_n^5=b_0^5-b_0^7\frac{\mu_{2L}\mu_{4L}R(f''')}{4\mu_{2L}^2R(f'')}+o(b_0^7\sigma^2). $$ Substitution of this expression for $b_n$ into (\ref{eq:ET}) and using the facts $\mu_{4\gamma}=6\mu_{2L}^2$, $\mu_{6\gamma}=30\mu_{2L}\mu_{4L}$ and $b_0\sigma=o(1)$, it follows that $\beta_n=o(n^2b_0^7\sigma^6)$. Later in the proof we will see that this last result implies that the first order bias of $\hat h_{ICV}$ is due only to the difference $Cb_0-h_0$. Tedious but straightforward calculations show that $s_n^2\sim n^2b_0R(f)A_\alpha/2$, where $A_\alpha$ is as defined in Section \ref{sec:MSE}. It is worth noting that $A_\alpha=R(\rho_\alpha)$, where $\rho_\alpha(u)=u\gamma'_\alpha(u)$ and $\gamma_\alpha(u)=(1+\alpha)^2\int\phi(u+v)\phi(v)\,dv-2(1+\alpha)\phi(u)$. One would expect from Theorem 4.1 of~\citeN{Scott:UCV} that the factor $R(\rho)$ would appear in $\mbox{Var}(T_n(b_0))$. 
Indeed it does implicitly, since $R(\rho_\alpha)\sim R(\rho)$ as $\sigma\rightarrow\infty$. Our point is that, when $\sigma\rightarrow\infty$, the part of $L$ depending on $\sigma$ is negligible in terms of its effect on $R(\rho)$ and also $R(L)$. To complete the proof write \begin{eqnarray*} \frac{\hat h_{ICV}-h_0}{h_0}&=&\frac{\hat h_{ICV}-h_0}{h_n}+o_p\left[\frac{\hat h_{ICV}-h_0}{h_n}\right]\\ &=&\frac{\hat b_{UCV}-b_0}{b_n}+\frac{(Cb_0-h_0)}{h_n}+o_p\left[\frac{\hat h_{ICV}-h_0}{h_n}\right]. \end{eqnarray*} Applying the same approximation of $b_0$ that led to (\ref{eq:secord}), and the analogous one for $h_0$, we have \begin{eqnarray*} \frac{Cb_0-h_0}{h_n}&=&b_n^2\frac{\mu_{2L}\mu_{4L}R(f''')}{20\mu_{2L}^2R(f'')}-h_n^2\frac{\mu_{2\phi}\mu_{4\phi}R(f''')}{20\mu_{2\phi}^2R(f'')}+o(b_n^2\sigma^2+h_n^2)\\ &=&\frac{R(L)^{2/5}\mu_{2L}\mu_{4L}R(f''')}{20(\mu_{2L}^2)^{7/5}R(f'')^{7/5}}\,n^{-2/5}+o(b_n^2\sigma^2). \end{eqnarray*} It is easily verified that, as $\sigma\rightarrow\infty$, $R(L)\sim (1+\alpha)^2/(2\sqrt\pi)$, $\mu_{2L}\sim-\alpha\sigma^2$ and $\mu_{4L}\sim -3\alpha\sigma^4$, and hence $$ \frac{Cb_0-h_0}{h_n}=\left(\frac{\sigma}{n}\right)^{2/5}\frac{R(f''')}{R(f'')^{7/5}}D_\alpha+o\left[\left(\frac{\sigma}{n}\right)^{2/5}\right]. $$ The proof is now complete upon combining all the previous results. \end{document}
arXiv
Extending digital PCR analysis by modelling quantification cycle data Methodology article Philip J. Wilson1 & Stephen L. R. Ellison1 Digital PCR (dPCR) is a technique for estimating the concentration of a target nucleic acid by loading a sample into a large number of partitions, amplifying the target and using a fluorescent marker to identify which partitions contain the target. The standard analysis uses only the proportion of partitions containing target to estimate the concentration and depends on the assumption that the initial distribution of molecules in partitions is Poisson. In this paper we describe a way to extend such analysis using the quantification cycle (Cq) data that may also be available, but rather than assuming the Poisson distribution the more general Conway-Maxwell-Poisson distribution is used instead. A software package for the open source language R has been created for performing the analysis. This was used to validate the method by analysing Cq data from dPCR experiments involving 3 types of DNA (attenuated, virulent and plasmid) at 3 concentrations. Results indicate some deviation from the Poisson distribution, which is strongest for the virulent DNA sample. Theoretical calculations indicate that the deviation from the Poisson distribution results in a bias of around 5 % for the analysed data if the standard analysis is used, but that it could be larger for higher concentrations. Compared to the estimates of subsequent efficiency, the estimates of 1st cycle efficiency are much lower for the virulent DNA, moderately lower for the attenuated DNA and close for the plasmid DNA. Further method validation using simulated data gave results closer to the true values and with lower standard deviations than the standard method, for concentrations up to approximately 2.5 copies/partition. The Cq-based method is effective at estimating DNA concentration and is not seriously affected by data issues such as outliers and moderately non-linear trends. The data analysis suggests that the Poisson assumption of the standard approach does lead to a bias that is fairly small, though more research is needed. Estimates of the 1st cycle efficiency being lower than estimates of the subsequent efficiency may indicate samples that are mixtures of single-stranded and double-stranded DNA. The model can reduce or eliminate the resulting bias. Digital Polymerase Chain Reaction (dPCR) is a technique first published in [1] that is used to quantify deoxyribonucleic acid (DNA) and other nucleic acids such as ribonucleic acid (RNA) for a variety of applications such as absolute quantification [2], copy number variation [3] and rare mutation detection [4]. It is now being used as a reference method to assign the copy number concentration of reference materials [5]. Samples are loaded onto a chip in a large number of separate partitions and then a series of cycles of the Polymerase Chain Reaction (PCR) are used to amplify the nucleic acid in the partitions. Fluorescent markers are used to detect which partitions contain nucleic acid. The most basic data produced by this process are the counts of positive and negative reactions. These count data are sufficient, under the standard assumption [1] that the molecules in the partitions are initially independently distributed following a Poisson distribution [6], to calculate an estimate of the concentration of the target nucleic acid. 
The estimate for the mean molecules per partition based on the Poisson assumption is $$ \tilde{\mu}=- \log \left(\frac{n_0}{n}\right) $$ where n 0 is the number of negative partitions out of a total of n and log refers to the natural logarithm. This estimate follows the classical statistics (also called frequentist statistics) method of maximum likelihood. If the Poisson distribution assumption is invalid then the estimate is likely to be biased. In some dPCR instruments, the fluorescence is measured after each PCR cycle in what is known as real-time dPCR. The data are processed to produce the amplification curve for each partition, which for positive partitions includes a phase of exponential growth and, eventually, a plateau with no further growth. This provides a measure of fluorescence as a proxy for the amount of the target at each cycle, and is used to calculate the quantification cycle (Cq) for each positive partition. This is defined as the cycle at which fluorescence reaches a fixed threshold [7], with cycle treated as a continuous variable. The threshold is chosen so that it is crossed during the phase when fluorescence is growing exponentially. A common method is to fit a curve to the data and calculate the point at which it crosses the threshold. Such data have the potential to provide more information than the counts do, particularly about the value and uncertainties of the relevant concentration. One approach to analysing Cq data is the retroflex method described in [8], where a continuous extension of the Poisson distribution is used to approximate the distribution of the data. In this paper we describe and illustrate a method of analysing Cq data from dPCR experiments that is appropriate for concentrations up to approximately 2.5 copies/partition, and that allows for possible departures from the Poisson distribution. The standard method requires the assumption of a single parameter distribution such as the Poisson distribution because the simple count data only provide information about whether the numbers of initial molecules in partitions are either zero or at least one. The justification for the Poisson distribution comes from the Poisson limit theorem which in part depends on the independence of the positions of the DNA molecules within the fluid. If there are significant dependencies, for example due to molecules sticking together or repelling each other, then there may be some deviation from the Poisson distribution. This may depend on factors such as the length of the DNA strands and the partition size. The Poisson distribution has probability mass function $$ P\left(X=x;\mu \right)={e}^{-\mu}\frac{\mu^x}{x!},\ x=0,\ 1,\dots,\ \mu >0. $$ A less restrictive distribution is the Conway-Maxwell-Poisson (CMP) distribution [9], which has probability mass function $$ P\left(X=x;\lambda, \nu \right)=\frac{1}{Z\left(\lambda, \nu \right)}\frac{\lambda^x}{{\left(x!\right)}^{\nu }},\ x=0,\ 1,\dots; \lambda >0,\ \nu \ge 0, $$ where Z(λ, ν) is the normalising constant. For ν = 1 it is equivalent to the Poisson distribution, and the variance equals the mean. For ν < 1 the variance is greater than the mean and for ν > 1 the variance is less. Figure 1 provides a comparison between the Poisson and CMP distributions, where P(X = 0) is the same for each. The means are 1.40 (CMP with v = 0.8), 1.50 (Poisson) and 1.62 (CMP with v = 1.2). Comparison of Poisson and CMP distributions with P(X = 0) the same for each. 
Probabilities are given by (1) for the Poisson distribution and (2) for the CMP distribution, with λ chosen so that P(X = 0|λ, ν) = e − 1.5 Our model of Cq data first requires a model of the growth of the number of molecules over the PCR amplification cycles. If the number of molecules at cycle c is given by N(c), then for c > 0 $$ N(c)=N\left(c - 1\right)+\mathrm{Binom}\left(N\left(c-1\right),{E}_c\right) $$ where Binom(n, p) represents a binomial random variable with n trials and probability p of success and E c is the efficiency at cycle c. This is because each of the N(c – 1) molecules from the previous cycle is duplicated with probability E c . In the model the efficiency for the first cycle is E 1 but for subsequent cycles is E. Equation (4) can be used with Eq. (3) as the initial distribution of molecules to calculate the distribution after a chosen modest number of cycles. The distribution after further growth is modelled as following a normal distribution. The fact that Eq. (4) represents a Galton-Watson branching process [10] is used to derive the mean and variance. The introduction of the parameter A, defined as the relative fluorescence per molecule, leads to a distribution for relative fluorescence. This can then be used to derive an approximation for the distribution of Cq data for a given threshold value h. The default Cq values provided by the data analysed later show clear trends. Additional analysis suggested that the trends could be removed through normalising the amplification curves and then calculating the Cq values (see Additional file 1). This approach could not be properly tested as the amplification curves generally appeared to be a few cycles short of reaching the plateau stage. It is not obvious what causes the differing plateaus, though one potential factor is varying temperature across the panels. The trends appear approximately linear in many cases, and so a linear trend is included in the model. Censoring may be required for outliers, as they can represent some technical deviation from the model. The exclusion of such values that are inconsistent with the model should improve the performance of the analysis and the accuracy of the results. High outliers may be caused by a problem in amplifying the molecule. Our model censors high outliers, treating them as partitions with one molecule, rather than using the Cq values. The model could be similarly extended to deal with low outliers by treating them as counts of partitions with more than one molecule, though this was not done for the present study. As discussed later, low outliers do lead to spurious results for one of the analysed data sets. The full vector of variables is θ = (μ, ν, E, E 1, A, b x , b y ), where μ is the mean number of initial molecules per partition. 
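To make the growth model concrete before introducing the likelihood, the following minimal R sketch (written for this description and not taken from the edpcr package; the function names rcmp1 and grow are hypothetical) simulates the number of molecules in a single partition. The initial count is drawn from the CMP distribution by truncating its probability series at a large upper bound, and each subsequent cycle applies the binomial duplication step of Eq. (4), with a possibly different efficiency in the first cycle. Note that for ν ≠ 1 the parameter λ is not equal to the mean.
rcmp1 <- function(lambda, nu, xmax = 100) {
  # one draw from CMP(lambda, nu), truncating the series at xmax
  x <- 0:xmax
  w <- lambda^x / factorial(x)^nu
  sample(x, 1, prob = w / sum(w))
}
grow <- function(lambda, nu, E, E1, cycles) {
  N <- rcmp1(lambda, nu)            # initial molecules in the partition
  for (cyc in 1:cycles) {
    eff <- if (cyc == 1) E1 else E  # first-cycle efficiency may differ
    N <- N + rbinom(1, N, eff)      # each molecule duplicates with probability eff
  }
  N
}
grow(lambda = 1.5, nu = 1, E = 0.95, E1 = 0.85, cycles = 6)
Repeating the last call many times approximates the distribution of molecule numbers after a chosen number of cycles, which is the building block of the likelihood given below.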
The overall likelihood is $$ L\left(\boldsymbol{\theta}; \mathbf{c},\mathbf{x},\mathbf{y},\mathbf{n}\right)\propto p{\left(0,0;\mu, \nu \right)}^{n_0}p{\left(0,1;\mu, \nu \right)}^{n_1}\times \left\{{\displaystyle {\prod}_{j=1}^{n_2}{\displaystyle {\sum}_{\kern0.30em i=1}^{\kern0.30em m{2}^{c_0}}p\left(i,{c}_0;\mu, \nu, E,{E}_0\right)\left[\Phi \left(h,iA{G}_{c_j^{\hbox{'}}},i{A}^2{G}_{c_j^{\hbox{'}}}\left(\frac{1-E}{1+E}\right)\left({G}_{c_j^{\hbox{'}}}-1\right)\right)\right.}}\right.-\left.\left.\Phi \left(h,iA{G}_{c_j^{\hbox{'}}+\delta },i{A}^2{G}_{c_j^{\hbox{'}}+\delta}\left(\frac{1-E}{1+E}\right)\left({G}_{c_j^{\hbox{'}}+\delta }-1\right)\right)\right]\right\} $$ where c' j = c j − b x (x − 0.5n x ) − b y (y − 0.5n y ) are the detrended Cq data, \( {G}_c={\left(1+E\right)}^{c-{c}_0} \), Φ is the distribution function of the normal distribution and p(j, c; μ, ν) is the probability of there being j molecules at cycle c in a partition given parameters μ and ν. The values of p(j, c; μ, ν) are calculated from Eq. (3) for c = 0 and then through repeated application of Eq. (4) for cycles up to c 0. The value c 0 = 6 was chosen as it is the smallest value required to achieve sufficient precision (see Fig. 2), and computational time increases rapidly as c 0 increases further. See Additional file 2 for the derivation and more details. Density plot of simulated data superimposed on model density. Simulation was performed using the rcq function from the R package edpcr for N = 106 partitions with E = .95, E 1 = .85, ν = 1.2, μ = 1.5 and c = 25.5 (a location parameter used to calculate A). This used Eq. 3 to select N(0) and repeated applications of Eq. 4 to simulate subsequent growth. Cq values were calculated based on exponential growth between the cycles immediately before and after the threshold was crossed. The density plot of the simulated Cq values uses a Gaussian kernel with a bandwidth of 0.01. The model density is calculated using Eq. 5 with the same parameters The data comprise n = (n 0, n 1) where n 0 is the count of partitions with no Cq value (no molecules), and n 1 is the count of high censored Cq values (one molecule), \( \mathbf{c}=\left({c}_1,\dots, {c}_{n_2}\right) \) the other Cq values along with \( \mathbf{x}=\left({x}_1,\dots, {x}_{n_2}\right) \) and \( \mathbf{y}=\left({y}_1,\dots, {y}_{n_2}\right) \) the x- and y-locations of the associated partitions. The only other data that is required is the threshold value h. All the data can be extracted from the dPCR experiment itself. This model is a very good approximation as shown in Fig. 2, where a density plot of simulated data (using Eqs. (3) and (4)) almost entirely obscures the associated density plot (Eq. (5)) of the model with the same parameters. As a Bayesian approach is being used, prior distributions are required for the parameters. We used non-informative uniform priors for μ and ν. Where suitable prior information is available gamma distributed priors could be used instead. Prior information about the efficiency E can be provided by preliminary quantitative PCR (qPCR) experiments. However these estimates for the qPCR efficiency are imprecise [11] and need not be the same as dPCR efficiency. For E we used qPCR estimates of efficiency to select a prior of Beta(190, 10) which has a mean of 0.95 and has 95 % of its mass between 0.92 and 0.98. For E 1 lower values are more plausible and so a prior of Beta(18,2) was used, with mean 0.9 and 95 % of its mass between 0.74 and 0.99. 
For the remaining parameters there is little prior information, and so we use the non-informative priors π(A) ∝ A − 1, π(b x ) ∝ 1 and π(b y ) ∝ 1. Single-strand adjustment There are various reasons why E 1 could be different to E. For example the initial molecule may be more difficult to amplify than the replicates of the target sequence because of its extra length or because of degradation. On the other hand efficiency may decrease as PCR reagents become degraded or are consumed. Another possible factor is the presence of single-stranded DNA. In the first amplification cycle it can only be amplified to double-stranded DNA molecules, which is equivalent to double-stranded DNA failing to amplify. The standard method counts single-stranded DNA as full molecules and so if they are present it will tend to overestimate μ [12]. If the difference between E and E 1 is entirely because of this issue, then the estimated parameters Ê and Ê 1 can be used to estimate the proportion of single-stranded DNA. This leads to an estimate for μ given by multiplying its original estimate \( \widehat{\mu} \) by an adjustment factor that is between 0.5 and 1: $$ {\widehat{\mu}}_{\mathrm{adj}}=\frac{\widehat{\mu}\ }{2}\left(1+ \min \left(1,\frac{{\widehat{E}}_1}{\widehat{E}}\right)\right) $$ Full experimental details are given in [13]. Data were generated in an experiment performed by LGC on a BioMark 48.770 machine made by Fluidigm Corporation. The raw data produced by this experiment comprised fluorescence measurements made at the end of each of the 40 cycles for each partition on several chips. The chips contained 48 panels, each with 770 partitions arranged in 70 rows and 11 columns. The raw data were converted into Cq values for positive partitions by the 'Fluidigm Digital PCR analysis' software using an algorithm that is not publicly available. TheCq data are provided in Additional file 3. The experiment was performed using 3 types of DNA: Attenuated genomic DNA (gDNA), Virulent gDNA and linearised plasmid DNA. The attenuated type was M. Tuberculosis (MTb) H37Ra gDNA, while the virulent type was MTb H37Rv gDNA. These were both sourced from ATCC and have lengths 4,419,977 bp and 4,411,532 bp respectively. The plasmid DNA comprised a genetic construct containing the full sequences of the 16S rRNA and rpoB genes of MTb H37Rv synthesised and inserted into a pUC19 plasmid vector. It had length 8486 bp. We shall refer to these types as A, V and P respectively. Assays Jiang_16S and UCL_16S were used for the amplification of the 16S gene and their primers are described in [14] and [15], while assays GN_rpoB1 and GN_rpoB2 were used for the amplification of the rpoB gene and their primers were designed using Primer Express (Applied Biosystems). Both targets were present once in the genomes of each of the different DNA types. There were 4 mastermixes, but only Gene Expression Mastermix (Life Technologies) was used for the present analysis. There were three dilutions (identified as 2A, 2B and 3). True values for their concentrations were not available. There were three replications of each combination of dilution, DNA type and assay, with each DNA type tested on a different chip. The fluorescent marker used was FAM and the passive reference ROX was used to normalise the measurements. 'No template control' panels were included and showed no issues. See [13] for more information, including the MIQE checklist [16]. Numerical methods are required in order to perform analysis using the model we have described. 
We have produced the software package edpcr for the software platform R, which was used to perform the analyses and create the plots in this paper. R can be freely downloaded from [17] and the package can be installed from within R using the command install.packages("edpcr",repos = "http://R-Forge.R-project.org"). The first stage of analysis is to calculate the mode of the posterior distribution via an optimisation algorithm. If a frequentist analysis is being performed rather than a Bayesian one, then no prior distributions are used and the mode is the MLE estimate for the parameters. For different initial values of E and E 1 the optimisation algorithm may find different local maxima. We used the combinations {E, E 1} = {0.9, 0.9}, {0.9, 0.75}, {0.85, 0.9}, {0.85, 0.75}, {0.9, 0.6} and {0.8, 0.9}, with the mode having the highest value selected as the overall mode. A sample from the posterior density may then be produced by the random walk Metropolis algorithm [18]. The Geweke diagnostic [19] can be used to help confirm convergence. For more information on the method of analysis see Additional file 4. Figure 3 contains plots of the Cq data and density plots of the detrended Cq data for 3 data sets. Each density plot is overlaid by the density function of the model using the posterior mode parameter estimates. The data sets are for the different dilutions and molecule types, but are each for the Jiang_16Ss assay. The model fits well to the data sets, though less well at the highest concentration, dilution 2A. Plots of Cq data (left) and density plots of Cq data with fitted model (right). Model fit (red) shows posterior mode parameters. Data are for dilution 2A and type A (a), dilution 2B and type V (b) and dilution 3 and type P (c). Assay is Jiang_16S. Density plots (blue) are for detrended data (defined immediately after Eq. (4)) and use Gaussian kernels with bandwidth = 0.01. They include vertical lines representing proportion of negative partitions Figure 4 provides the posterior mode estimates of the parameters μ, ν, E and E 1. The estimates are generally similar for the same type and dilution; however there are outliers, which are clearest for the E and E 1 estimates. Posterior modes for μ, ν, E and E 0. Ordered first by type (A, V, P), then dilution (2A, 2B, 3), then assay (Jiang_16S, UCL_16S, GN_rpoB1 and GN_rpoB2), then replicate (1–3). The line at ν = 1 represents the value ν takes for the Poisson distribution. The unfilled circles represent the outlier with a low mode for E and the cross represents the outlier with a high mode for E 1 If E 1 is close to 1 then the peak for 1 molecule not amplified in the first cycle is small, in which case there is a risk that the parameter estimates will misalign the peaks. This appears to be the situation for the point plotted as a cross, from the type P, dilution 2A data which has a very low E 1 estimate, while the other estimates from within the same group are close to 1. In that case a local mode for different starting values of E and E 1 was consistent with the estimates for the other type P, dilution 2A data sets. There is another outlier plotted as an unfilled circle for which the estimate of E is very low and the estimate of E 1 is high. This appears to be due to some low outliers in the data causing a misfitting of the model, as when the mode was rerun with them censored (treated as a count of more than one molecule) the estimates were consistent with those of the other data sets. 
Other possible causes of misfitting are the presence of trends that are not taken into account by the simple linear trends of the model and changes in variability. These features are typically present in the data sets to varying degrees (see changes in gradient and variability in Fig. 3), but misfitting is avoided through reasonably informative priors for E and E 1. Deviation from poisson distribution The estimates of ν shown in Fig. 4 provide insight about deviation from the Poisson distribution. They appear to depend on the DNA type, but not on other variables such as dilution. The medians of the estimates for the different DNA types, which are insensitive to the outliers, are 1.02 for A, 1.14 for V and 0.95 for P. Excluding the 2 outliers, the differences in the means from 1 are strongly significant for V and P with t-test p-values 10−4and 0.02 respectively, but not for A where the t-test p-value is 0.45. Figure 5 illustrates the theoretical relative bias that would exist for an estimate of μ using the standard method due to ν actually taking the value 0.8, 0.9, 1.1 or 1.2. It is a plot of (μ − [−log(P(X = 0; μ, ν))])/μ where − log(P(X = 0; μ, ν)) is the estimate of the mean based on the Poisson distribution when the distribution is actually CMP with mean μ and dispersion ν. For example, if the true concentration is μ = 2.0 and ν = 1.2 then P(X = 0; μ, ν) = 0.162 so that the count-based estimate of μ is − log(0.162) = 1.82 and the relative bias is 0.09. The differences between the count-based and Cq-based estimates of the concentration in Fig. 6 are consistent with these results with respect to size and sign. The issue of outliers due to misfitting the model of Cq data (see earlier discussion) does not affect the count-based estimates. Relative bias in μ calculated assuming Poisson distribution for different values of ν. Plot is of (μ − [−log(P(X = 0; μ, ν))])/μ Estimates of μ ordered by dilution and then type. Count-based estimates use Eq. (1), Cq-based estimates are the posterior modes for Eq. (5), while the Cq-based estimates with single-strand adjustment are adjusted based on Eq. (6). Outliers correspond to the outliers in Fig. 4 Figure 5 indicates that the theoretical bias due to deviation from the Poisson distribution ranges up to about 5 % over the range of concentrations examined. We have not examined concentrations above about 2.5 molecules/partition, but if similar deviation from the Poisson distribution exists for higher concentrations then based on the theoretical analysis the bias should increase. We cannot rule out greater deviation from the Poisson distribution with more substantial biases for other experiments and DNA types. In particular, It is not possible to predict how big the bias will be for droplet digital PCR (ddPCR) as the different method of partitioning the sample could lead to different values of ν. For ddPCR the Cq approach is impractical for estimating ν, but an indirect method for detecting the bias could be used by examining the difference in the estimate of μ for a range of dilutions. The retroflex method in [8] is not based on count data, and the effect of deviation from the Poisson distribution is likely to be more limited. Figure 4 shows that the estimates for E are very consistent, while the estimates for E 1 appear to depend on type and dilution. The biggest effect is from type. For P the estimates of E 1 are close to the respective estimates for E, while the estimates of E 1 for type A are lower, and the estimates for type V are lower still. 
It makes sense that the E 1 estimates are higher for the plasmid DNA type as it is much shorter than the others. It is not possible to determine from the data alone how much of the differences between E and E 1 are due to the single-strand issue. As an illustration of the effect on the estimates if the full difference are due to the single-strand issue, the adjusted estimates of μ (using Eq. (6)) are presented in Fig. 6 along with the count-based and Cq-based estimates. MCMC results MCMC samples can provide information about the posterior distribution beyond the mode, such as estimates of the mean, variance and quantiles. They can also indicate when there is a poor fit of the data, such as for the outlier with the low estimate of E (the unfilled circle in Fig. 4). Figure 7 contains trace and density plots for μ, ν, E and E 1 from MCMC samples for that outlier data set and one of the other two replicates. The trace plots for the outlier move significantly from the initial values, which shows that the optimisation algorithm failed to find the mode and suggests the possibility of poor data leading to poor estimates of the mode. Examination of the data shows low outliers, and if these are censored as discussed earlier then the problems are resolved. Trace and density plots for MCMC samples of posterior distributions. Data are for two of the replicate experiments for dilution 2A, type V, assay GN_rpoB1. a Plots for replicate 2. b Plots for replicate 3, for which the parameter estimates in Fig. 4 are the outliers represented by the unfilled circles in Figs. 4 and 6 Data were simulated using the rcq function from the edpcr package. This uses Eqs. (3) and (4), with Cq values calculated based on exponential growth between the cycles immediately before and after the threshold was crossed. 100 data sets were simulated for each combination of μ = 0.5, 1.5, 2.5 or 3.5, E 1 = 0.9 or 0.75 and ν = 0.8, 1, or 1.2. Each data set was for 770 partitions. Posterior modes for uniform priors (equivalent to MLEs) were found using Eq. (5), but excluding the linear trend parameters b x and b y . Results are presented in Table 1. For μ = 0.5 and 1.5 the Cq-based estimates consistently have lower bias (the means are closer to the true values) and have lower standard deviation than the count-based estimates, except for μ = 1.5 and ν = 1.2 where the standard deviation is higher. The bias and standard deviation are generally better for μ = 2.5 and generally worse for μ = 3.5. This indicates good performance for concentrations up to about 2.5 copies/partition. Table 1 Sample means and standard deviations of μ, ν and adjustment factor estimates for simulated data The standard method of dPCR analysis only uses count data. In this paper we have introduced a new method of analysis that also uses the Cq data often produced by dPCR experiments. This method estimates the concentration of the target without the standard assumption that the initial distribution of the target is Poisson. It also produces estimates of E 1 the 1st cycle amplification efficiency and E the subsequent amplification efficiency. Low estimates of E may be useful for identifying problems with the reagents. If the estimate of E 1 is less than that of E then this may be an indication of the sample being a mixture of single-stranded and double-stranded DNA. The estimates can be used to take this into account via Eq. (6). Our Cq-based method was validated by simulation and demonstrated by applying it to data from different types and dilutions of DNA. 
Deviation from the Poisson distribution was identified for virulent and plasmid gDNA. We believe that this the first time that the Poisson distribution assumption has been tested. The bias from assuming the Poisson distribution was small for this particular case and on that basis the count-based method is still appropriate for routine applications, and the potential bias could reasonably be ignored. We do recommend caution with respect to estimates involving high concentrations, as the theoretical calculations suggest the bias could be higher (see Fig. 5). Where highly accurate quantitation is required, if the count-based method is used then an uncertainty contribution for the bias should be considered for any overall uncertainty. Further use of the Cq-based method and other research is required to better establish the size of the biases across different types of sample and experiment, and to determine when the Cq-based method may be prefereable. If the Cq-based method is used, then it should only be used for concentrations up to about 2.5 molecules/partition. Application of the method could also be used as a diagnostic to identify whether ν is close to 1, and whether E 1 is close to E. CMP: Conway-Maxwell Poisson Cq : Quantification cycle DNA: dPCR: digital PCR gDNA: genomic DNA MCMC: Markov chain Monte Carlo MLE: Maximum likelihood estimate PCR: qPCR: RNA: Ribonucleic acid Sykes PJ, Neoh SH, Morley AA, et al. Quantitation of targets for PCR by use of limiting dilution. Biotechniques. 1992;13:444–9. Sanders R, Huggett JF, Bushell CA, Cowen S, Scott DJ, Foy CA. Evaluation of Digital PCR for Absolute DNA Quantification. Anal Chem. 2011;83:6474–84. Hindson BJ, Ness KD, Masquelier DA, Belgrader P, Heredia NJ, Makarewicz AJ, et al. High-throughput droplet digital PCR system for absolute quantitation of DNA copy number. Anal Chem. 2011;83:8604–10. Pohl G, Shih IM. Principle and applications of digital PCR. Expert Rev Mol Diagn. 2004;4(1):41–7. Bhat S, Emslie KR. Digital polymerase chain reaction for characterisation of DNA reference materials. Biomol Detect Quantif. 2016; http://dx.doi.org/10.1016/j.bdq.2016.04.001 Dube S, Qin J, Ramakrishnan R. Mathematical analysis of copy number variation in a DNA sample using digital PCR on a nanofluidic device. PLoS One. 2008;3(8), e2876. doi:10.1371/journal.pone.0002876. Huggett J, Bustin SA. Standardisation and reporting for nucleic acid quantification. Accred Qual Assur. 2011;16:399–405. Mojtahedi M, D'herouel AF, Huang S. Direct elicitation of template concentration from quantification cycle (Cq) distributions in digital PCR. Nucleic Acids Res. 2014. doi:10.1093/nar/gku603. Shmueli G, Minka TP, Kadane JB, Borle S, Boatwright P. A useful distribution for fitting discrete data: revival of the Conway-Maxwell-Poisson distribution. J R Stat Soc Ser C Appl Stat. 2005;54:127–42. Harris TE. The theory of branching processes. North Chelmsford: Courier Corporation; 2002. Svec D, Tichopad A, Novosadova V, Pfaffl MW, Kubista M. How good is a PCR efficiency estimate: Recommendations for precise and robust qPCR efficiency assessments. Biomol Detect Quantif. 2015;3:9–16. Bhat S, Curach N, Mostyn T, Bains GS, Griffiths KR, Emslie KR. Comparison of methods for accurate quantification of DNA mass concentration with traceability to the international system of units. Anal Chem. 2010;82:7185–92. Devonshire AS, Honeyborne I, Gutteridge A, Whale AS, Nixon G, Wilson P, Jones G, Mchugh TD, Foy CA, Huggett JF. 
Highly reproducible absolute quantification of mycobacterium tuberculosis complex by digital PCR. Anal Chem. 2015;87:3706–13. Jiang LJ, Wu WJ, Wu H, Ryang SS, Zhou J, Wu W, Li T, Guo J, Wang HH, Lu SH, Li YJ. Rapid detection and monitoring therapeutic efficacy of Mycobacterium tuberculosis complex using a novel real-time assay. Microbiol Biotechnol. 2012;22:1301–6. Honeyborne I, McHugh TD, Phillips PP, Bannoo S, Bateson A, Carroll N, Perrin FM, Ronacher K, Wright L, van Helden PD, Walzl G, Gillespie SH. Molecular bacterial load assay, a culture-free biomarker for rapid and accurate quantification of sputum Mycobacterium tuberculosis bacillary load during treatment. J Clin Microbiol. 2011;49:3905–11. Huggett JF, Foy CA, Benes V, Emslie K, Garson JA, Haynes R, et al. The digital MIQE guidelines: minimum information for publication of quantitative digital PCR experiments. Clin Chem. 2013;59:892–902. The R Project for Statistical Computing. https://www.r-project.org. Accessed 16 Sept 2015. Roberts GO, Gelman A, Gilks WR. Weak convergence and optimal scaling of random walk Metropolis algorithms. Ann Appl Probab. 1997;7:110–20. Geweke J. Evaluating the accuracy of sampling-based approaches to the calculation of posterior moments. Vol 196. Minneapolis: Federal Reserve Bank of Minneapolis, Research Dept; 1991. The authors thank Alison Devonshire for providing the data and scientific guidance, Rebecca Sanders, Jim Huggett and Simon Cowen for providing scientific guidance, Roberto Puch-Solis for manuscript review and the anonymous referees for their helpful comments. This paper has been funded by the European Metrology Research Programme (EMRP) project NEW 04 "Novel mathematical and statistical approaches to uncertainty evaluation". The EMRP is jointly funded by the EMRP participating countries within EURAMET (European Association of National Metrology Institutes) and the European Union. The funding body had no further role. The dataset analysed during the current study is included in the supplementary information files. The software package edpcr can be downloaded from within R using the command install.packages("edpcr",repos = "http://R-Forge.R-project.org"). PW developed the model, created the software package, performed the analysis and wrote the paper and Additional files 1, 2 and 3. SE conceived and directed the project, provided support with developing and packaging the software and reviewed the manuscript. Both authors read and approved the final manuscript. LGC, Queens Road, Teddington, Middlesex, TW11 0LY, UK Philip J. Wilson & Stephen L. R. Ellison Philip J. Wilson Stephen L. R. Ellison Correspondence to Philip J. Wilson. Additional file 1: Description and illustration of normalisation method. (PDF 335 kb) Derivation of Eq. (5). (PDF 405 kb) Collated Cq data GE. The data used in this paper, collated from dPCR output data files. (CSV 7402 kb) Algorithm used for computational analysis. (PDF 393 kb) Wilson, P.J., Ellison, S.L.R. Extending digital PCR analysis by modelling quantification cycle data. BMC Bioinformatics 17, 421 (2016). https://doi.org/10.1186/s12859-016-1275-3 Bayesian MCMC Conway-Maxwell-Poisson distribution CMP distribution Amplification efficiency ssDNA
CommonCrawl
\begin{document} \title{Arnold diffusion for cusp-generic\\ nearly integrable convex systems on ${\mathbb A}^3$} \author{Jean-Pierre Marco \thanks{Universit\'e Paris 6, 4 Place Jussieu, 75005 Paris cedex 05. E-mail: [email protected] }} \date{} \maketitle \begin{abstract} Using the results of \cite{Mar} and \cite{GM}, we prove the existence of ``Arnold diffusion orbits'' in cusp-generic nearly integrable {\em a priori} stable systems on ${\mathbb A}^3$. More precisely, we consider perturbed systems of the form $H({\theta},r)=h(r)+f({\theta},r)$, where $h$ is a $C^\kappa$ strictly convex and superlinear function on ${\mathbb R}^3$ and $f\in C^\kappa({\mathbb A}^3)$, $\kappa\geq2$. We equip $C^\kappa({\mathbb A}^3)$ with the uniform seminorm $$ \norm{f}_\kappa=\sum_{k\in{\mathbb N}^6,\ 0\leq \abs{k}\leq \kappa}\norm{\partial^kf}_{C^0({\mathbb A}^3)}\leq+\infty $$ we set $ C_b^\kappa({\mathbb A}^3)=\big\{f\in C^\kappa({\mathbb A}^3)\mid \norm{f}_\kappa<+\infty\big\}, $ and we denote by ${\mathcal S}^\kappa$ its unit sphere. Given a ``threshold function'' ${\boldsymbol\eps}_0:{\mathcal S}^\kappa\to [0,+\infty[$, we define the associated ${\boldsymbol\eps}_0$-ball as $$ {\mathscr B}^\kappa({\boldsymbol\eps}_0):= \big\{{\varepsilon} {\bf f} \mid {\bf f}\in{\mathcal S}^\kappa,\ {\varepsilon}\in\,]0,{\boldsymbol\eps}_0({\bf f})[\big\}, $$ so that ${\mathscr B}^\kappa({\boldsymbol\eps}_0)$ is open in $C_b^\kappa({\mathbb A}^3)$ when ${\boldsymbol\eps}_0$ is lower semicontinuous. Given $h$ as above, an energy ${\bf e}>\mathop{\rm Min\,}\limits h$ and a finite family of arbitrary open sets $O_i$ in ${\mathbb R}^3$ intersecting $h^{-1}({\bf e})$, we prove the existence of a lower semicontinuous threshold function ${\boldsymbol\eps}_0$, positive on an open dense subset of ${\mathcal S}^\kappa$, such that for $f$ in an open dense subset of the associated ball ${\mathscr B}^\kappa({\boldsymbol\eps}_0)$, the system $H=h+f$ admits orbits intersecting each open set ${\mathbb T}^3\times O_i\subset{\mathbb A}^3$. \end{abstract} \section{The main results} We denote by ${\mathbb A}^n={\mathbb T}^n\times{\mathbb R}^n$ the cotangent bundle of the torus ${\mathbb T}^n$. This paper is the last step of a geometrical proof of the existence of Arnold diffusion for a ``large subset'' of perturbations of strictly convex integrable systems on ${\mathbb A}^3$. It relies on the results of \cite{Mar} and \cite{GM}. Two different approaches are developped in \cite{KZ} and \cite{C}. \paraga Before giving a precise statement, let us quote the initial formulation of the diffusion conjecture by V.I. Arnold, from \cite{A94}. {\em ``Consider a generic analytic Hamiltonian system close to an integrable one: $$ H=H_0(p)+{\varepsilon} H_1(p,q,{\varepsilon}) $$ where the perturbation $H_1$ is $2\pi$-periodic in the angle variables $(q_1,\ldots,q_n)$ and where the nonperturbed Hamiltonian function $H_0$ depends on the action variables $(p_1,\ldots,p_n)$ generically. Let $n$ be greater than $2$. \vskip2mm {\bf Conjecture.} For any two points $p',p''$ on the connected level hypersurface of $H_0$ in the action space, there exist orbits connecting an arbitrary small neighborhood of the torus $p=p'$ with an arbitrary small neighborhood of the torus $p=p''$, provided that ${\varepsilon}>0$ is sufficiently small and that $H_1$ is generic.'' } \vskip2mm \paraga The formulation of the conjecture is rather imprecise and we first have to clarify our framework. 
We are concerned with the case $n=3$ only and we restrict ourselves to finitely differentiable systems. Moreover, we adopt a setting close to that introduced by Mather in \cite{Mat04}, which we now describe with our usual notation. Let $({\theta},r)$ be the angle-action coordinates on ${\mathbb A}^3$, and let $\lambda=\sum_{i=1}^3r_id{\theta}_i$ be the Liouville form of ${\mathbb A}^3$. The Hamiltonian systems we consider have the form \begin{equation}\label{eq:hampert} H({\theta},r)=h(r)+f({\theta},r),\qquad ({\theta},r)\in{\mathbb A}^3, \end{equation} where the unperturbed part $h:{\mathbb R}^3\to{\mathbb R}$ is a $C^\kappa$ ($\kappa\geq 2$) Tonelli Hamiltonian, that is, $h$ is strictly convex with superlinear growth at infinity. Here we therefore relax the genericity condition initially imposed by Arnold. We want to find $C^\kappa$ Hamiltonians $H$ which admit {\em diffusion orbits} intersecting prescribed open sets in their energy level. More precisely, we start with a Tonelli Hamiltonian~$h$ and fix an energy ${\bf e}>\mathop{\rm Min\,}\limits h$, which is therefore a regular value of $h$ whose level set $h^{-1}({\bf e})$ is diffeomorphic to $S^2$. We then fix a finite collection of arbitrary open sets $(O_i)_{1\leq i\leq i_*}$ in ${\mathbb R}^3$, which intersect $h^{-1}({\bf e})$. Given $H\in C^\kappa({\mathbb A}^3,{\mathbb R})$, a diffusion orbit associated with these data is an orbit of the system generated by $H$ which intersects each open set ${\mathbb T}^3\times O_i\subset{\mathbb A}^3$. Our problem is to prove the existence of a ``large'' set of perturbations $f$ for which (\ref{eq:hampert}) possesses such diffusion orbits. We equip $C^\kappa({\mathbb A}^3,{\mathbb R})$ with the uniform seminorm \begin{equation} \norm{f}_\kappa=\mathop{\rm Max\,}\limits_{0\leq \abs{k}\leq \kappa}\mathop{\rm Sup\,}\limits_{x\in{\mathbb A}^3}\abs{\partial^kf(x)}\leq+\infty, \end{equation} and we set \begin{equation} C_b^\kappa({\mathbb A}^3,{\mathbb R})=\big\{f\in C^\kappa({\mathbb A}^3,{\mathbb R})\mid \norm{f}_\kappa<+\infty\big\}, \end{equation} so that $\big(C_b^\kappa({\mathbb A}^3,{\mathbb R}),\norm{\ }_\kappa\big)$ is a Banach algebra. Let ${\mathcal S}^\kappa$ and $B^\kappa(\rho)$ stand for the unit sphere and the ball with radius $\rho$ centered at~$0$ in $C_b^\kappa({\mathbb A}^3,{\mathbb R})$. Given a ``threshold function'' ${\boldsymbol\eps}_0:{\mathcal S}^\kappa\to{\mathbb R}$, one introduces the associated generalized ball: \begin{equation}\label{eq:cuspball} {\mathscr B}^\kappa({\boldsymbol\eps}_0):= \big\{{\varepsilon} u \mid u\in{\mathcal S}^\kappa,\ {\varepsilon}\in\,]0,{\boldsymbol\eps}_0(u)[\big\}. \end{equation} Observe that ${\mathscr B}^\kappa({\boldsymbol\eps}_0)$ is open in $C_b^\kappa({\mathbb A}^3,{\mathbb R})$ when ${\boldsymbol\eps}_0$ is lower-semicontinuous.
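Let us sketch this elementary verification, which we only include for the reader's convenience. Fix $f={\varepsilon} u\in{\mathscr B}^\kappa({\boldsymbol\eps}_0)$ with $u\in{\mathcal S}^\kappa$ and $0<{\varepsilon}<{\boldsymbol\eps}_0(u)$. For $g\neq 0$ close to $f$ in $C_b^\kappa({\mathbb A}^3,{\mathbb R})$ one has $\norm{g}_\kappa\to{\varepsilon}$ and $g/\norm{g}_\kappa\to u$, so that, by lower semicontinuity of ${\boldsymbol\eps}_0$ at $u$:
$$
\liminf_{g\to f}\ {\boldsymbol\eps}_0\big(g/\norm{g}_\kappa\big)\ \geq\ {\boldsymbol\eps}_0(u)\ >\ {\varepsilon}=\lim_{g\to f}\norm{g}_\kappa.
$$
Hence $\norm{g}_\kappa<{\boldsymbol\eps}_0\big(g/\norm{g}_\kappa\big)$, that is $g\in{\mathscr B}^\kappa({\boldsymbol\eps}_0)$, for every $g$ close enough to $f$.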
Then for $\kappa\geq \kappa_0$ large enough, there exists a lower-semicontinuous function $$ {\boldsymbol\eps}_0:{\mathscr S}^\kappa\to{\mathbb R}^+ $$ with positive values on an open dense subset of ${\mathscr S}^\kappa$ such that the subset of all $f\in{\mathscr B}^\kappa({\boldsymbol\eps}_0)$ for which the system \begin{equation} H({\theta},r)=h(r)+ f({\theta},r) \end{equation} admits an orbit which intersects each ${\mathbb T}^3\times O_k$ is open and dense in ${\mathscr B}^\kappa({\boldsymbol\eps}_0)$. \end{thm} \begin{figure} \caption{A generalized ball} \label{Fig:genball} \end{figure} \paraga Our approach consists in proving first the existence of a ``geometric skeleton'' for diffusion and then to use this skeleton to produce the diffusion orbits. Let us informally describe our method (the precise definitions were introduced in \cite{Mar} and \cite{GM}, they are recalled in Section~\ref{Sec:setting}). The main objects constituting the skeleton are $3$-dimensional invariant cylinders with boundary, diffeomorphic to ${\mathbb T}^2\times[0,1]$, which are moreover normally hyperbolic and satisfy several additional properties, and which we call {\em admissible cylinders}. We also have to consider admissible {\em singular} cylinders. These cylinders and singular cylinders contain particular $2$-dimensional invariant tori, which we call {\em essential tori}, and admit {\em homoclinic correspondences} coming from the homoclinic intersections of the essential tori. Finally, we define an {\em admissible chain}, as a finite family $({\mathscr C}_k)_{1\leq k\leq k_*}$ of admissible cylinders or singular cylinders, with heteroclinic connections from ${\mathscr C}_k$ to ${\mathscr C}_{k+1}$, $1\leq k\leq k_*-1$, which satisfy additional dynamical conditions. The main result of \cite{Mar} is the following. \vskip2mm\noindent {\bf Theorem \cite{Mar}.} {\it Consider a $C^\kappa$ integrable Tonelli Hamiltonian $h$ on ${\mathbb A}^3$. Fix ${\bf e}>\mathop{\rm Min\,}\limits h$ and a finite family of open sets $O_1,\ldots,O_m$ which intersect $h^{-1}({\bf e})$. Fix $\delta>0$. Then for $\kappa\geq \kappa_0$ large enough, there exists a lower-semicontinuous function $$ {\boldsymbol\eps}_0:{\mathscr S}^\kappa\to{\mathbb R}^+ $$ with positive values on an open dense subset of ${\mathscr S}^\kappa$ such that for $f\in{\mathscr B}^\kappa({\boldsymbol\eps}_0)$ the system \begin{equation} H({\theta},r)=h(r)+ f({\theta},r) \end{equation} admits an admissible chain of cylinders and singular cylinders such that each open set ${\mathbb T}^3\times O_k$ contains the $\delta$-neighborhood in ${\mathbb A}^3$ of some essential torus of the chain.} \vskip2mm Still, admissible chains need not admit diffusion orbits drifting along them. In \cite{GM} is introduced the more refined notion of {\em good chains of cylinders}. A {\em $\delta$-admissible orbit} for a good chain is an orbit which intersects the $\delta$-neighborhood (in ${\mathbb A}^3$) of any essential torus of the chain. The main result of \cite{GM} is the following. \vskip2mm\noindent {\bf Theorem \cite{GM}.} {\it Let $H$ be a $C^2$ proper Hamiltonian on ${\mathbb A}^3$ and let ${\bf e}$ be a regular value of~$H$. Then, for any good chain of cylinders contained in $H^{-1}({\bf e})$ and for any $\delta>0$, there exists a $\delta$-admissible orbit for the chain. } \paraga Taking the previous two results for granted, Theorem~\ref{thm:main1} is an easy consequence of the following perturbative result, whose proof constitutes the main part of the paper. 
\begin{thm}\label{thm:main2} Let $H$ be a $C^\kappa$ proper Hamiltonian on ${\mathbb A}^3$ and let ${\bf e}$ be a regular value of~$H$. Fix $\delta>0$ and assume that $H$ admits an admissible chain $({\mathscr C}_k)_{1\leq k\leq k_*}$. Then for any $\alpha>0$ there exists a Hamiltonian ${\mathcal H}\in C^\kappa({\mathbb A}^3)$ with \begin{equation} \norm{H-{\mathcal H}}_\kappa<\alpha \end{equation} such that $({\mathscr C}_k)_{1\leq k\leq k_*}$ is a good chain at energy ${\bf e}$ for ${\mathcal H}$, and such that each open set ${\mathbb T}^3\times O_k$ contains the $\delta$-neighborhood in ${\mathbb A}^3$ of some essential torus. \end{thm} \paraga We recall the necessary definitions from \cite{Mar} and \cite{GM} in Section~\ref{Sec:setting}. We state in Section~\ref{Sec:perturbation} a perturbative result for the characteristic foliations of the stable and unstable manifolds of a normally hyperbolic manifold, which is the main ingredient for the proof of Theorem~\ref{thm:main2}. In Section~\ref{Sec:proof2} we prove Theorem~\ref{thm:main2}, from which we deduce Theorem~\ref{thm:main1} thanks to the previous two results of \cite{Mar,GM}. We recall some necessary results on normally hyperbolic manifolds in Appendix~\ref{app:normhyp}. \setcounter{paraga}{0} \section{The setting}\label{Sec:setting} The Hamiltonian vector field associated with a $C^2$ function $H$ will be denoted by $X_H$ and its Hamiltonian flow, when defined, by $\Phi_H$. \setcounter{paraga}{0} \subsection{Normally hyperbolic annuli and cylinders}\label{sec:normhyp} We introduce the main objects of our construction, that is, normally hyperbolic $3$-dime\-nsional cylinders and singular cylinders with boundary. We refer to \cite{C04,Berg10} for direct presentations of the normal hyperbolicity of manifolds with boundary. Here we will recall the definitions of \cite{Mar} (to which we refer for more details), which take advantage of the existence of invariant $4$-dimensional symplectic annuli containing the cylinders in their relative interior. These annuli will moreover play an essential role in the definition of the intersection conditions in the next section. \paraga A {\em $4$-annulus} will be a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb A}^2$, with $p\geq 2$. A {\em singular annulus} will be a ($4$-dimensional) $C^1$ manifold $C^1$-diffeomorphic to ${\mathbb T}\times\,]0,1[\,\times {\mathcal Y}$, where ${\mathcal Y}$ is (any realization of) the sphere $S^2$ minus three points. \paraga A {\em $C^p$ cylinder} is a $C^p$ manifold $C^p$-diffeomorphic to ${\mathbb T}^2\times [0,1]$, so that a cylinder is compact and its boundary has two components diffeomorphic to ${\mathbb T}^2$. A {\em singular cylinder} is a $C^1$ manifold $C^1$-diffeomorphic to ${\mathbb T}\times {\bf Y}$, where~${\bf Y}$ is (any realization of) the sphere $S^2$ minus three open discs with nonintersecting closures. Our singular cylinders will moreover be of class $C^p$, $p\geq 2$, in large neighborhoods of their boundary (which admits three components, diffeomorphic to ${\mathbb T}^2$). \paraga We endow now ${\mathbb A}^3$ with its standard symplectic form $\Omega$, and we assume that $X=X_H$ is the vector field generated by $H\in C^\kappa({\mathbb A}^3)$, $\kappa\geq 2$.
Following \cite{Mar}, we say that an invariant $4$-annulus ${\mathscr A}\subset {\mathbb A}^3$ for $X$ is {\em normally hyperbolic} when there exist \vskip1.5mm ${\bullet}$ an open subset $O$ of ${\mathbb A}^3$ containing ${\mathscr A}$, \vskip1.5mm ${\bullet}$ an embedding $\Psi:O\to {\mathbb A}^2\times {\mathbb R}^2$ whose image has compact closure, such that $\Psi_*\Omega$ continues to a symplectic form $\overline\Omega$ on ${\mathbb A}^2\times {\mathbb R}^2$ which satisfies (Appendix~\ref{app:normhyp} (\ref{eq:assumpsymp})), \vskip1.5mm ${\bullet}$ a vector field ${\mathscr V}$ on ${\mathbb A}^2\times {\mathbb R}^2$ satisfying the assumptions of the normally hyperbolic persistence theorem, in particular~(\ref{eq:addcond}), together with those of the symplectic normally hyperbolic theorem (Appendix~\ref{app:normhyp}) for the form $\overline\Omega$, such that, with the notation of this theorem: \begin{equation} \Psi({\mathscr A})\subset {\rm Ann}({\mathscr V})\quad{\rm and}\quad \Psi_*X(x)={\mathscr V}(x),\quad \forall x\in O. \end{equation} Such an annulus ${\mathscr A}$ is therefore of class $C^p$ and symplectic. We define normally hyperbolic singular $4$-annuli in the same way, in which case $p=1$. One easily checks that annuli and singular annuli are uniformly normally hyperbolic in the usual sense. In particular, they admit well-defined stable, unstable, center-stable and center-unstable manifolds. The stable and unstable manifolds are coisotropic and their characteristic foliations coincide with their center-stable and center-unstable foliations. \paraga With the same assumptions, let ${\bf e}$ be a regular value of $H$. Here we say for short that an invariant cylinder (with boundary) ${\mathscr C}\subset H^{-1}({\bf e})$ for $X_H$ is {\em normally hyperbolic in $H^{-1}({\bf e})$} when there exists an invariant normally hyperbolic {\em symplectic} $4$-annulus ${\mathscr A}$ for $X_H$, such that ${\mathscr C}\subset{\mathscr A}\cap H^{-1}({\bf e})$. Any such ${\mathscr A}$ is said to be {\em associated with ${\mathscr C}$}. We say for short that a singular cylinder ${\mathscr C}_{\bullet}\subset H^{-1}({\bf e})$ is invariant for $X_H$ when it is invariant together with its critical circles. We say that ${\mathscr C}_{\bullet}$ is {\em normally hyperbolic in $H^{-1}({\bf e})$} when there is an invariant normally hyperbolic {\em symplectic} singular annulus ${\mathscr A}_{\bullet}$ for $X_H$ such that ${\mathscr C}_{\bullet}\subset{\mathscr A}_{\bullet}\cap H^{-1}({\bf e})$. Any such ${\mathscr A}_{\bullet}$ is said to be {\em associated with ${\mathscr C}_{\bullet}$}. One immediately sees that normally hyperbolic invariant cylinders or singular cylinders, contained in $H^{-1}({\bf e})$, admit well-defined $4$-dimensional stable and unstable manifolds with boundary, also contained in $H^{-1}({\bf e})$, together with their center-stable and center-unstable foliations. The stable and unstable manifolds of the complement in a singular cylinder of its critical circles are $C^p$. \paraga A normally hyperbolic cylinder admits $C^1$ characteristic projections $\Pi^\pm:W^\pm({\mathscr C})\to{\mathscr C}$ (since the invariant manifolds $W^\pm({\mathscr A})$ are $C^p$ with $p\geq 2$); it satisfies the $\lambda$-lemma (as stated in \cite{GM}) and one easily proves that it admits a $\Phi_H$-invariant Radon measure $\mu_{\mathscr C}$, positive on its open sets. It is therefore {\em tame} in the sense of \cite{GM}.
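To help the reader keep track of the objects just introduced, let us record the elementary dimension count which is implicit in the statements above. Since $\dim{\mathscr A}=4$ and $\dim{\mathbb A}^3=6$, the stable and unstable fibers of the points of a normally hyperbolic annulus are one-dimensional, so that
$$
\dim W^\pm({\mathscr A})=5,
$$
which explains why $W^\pm({\mathscr A})$ are coisotropic and why their characteristic foliations have one-dimensional leaves. Accordingly, for a cylinder ${\mathscr C}\subset{\mathscr A}\cap H^{-1}({\bf e})$, the manifolds $W^\pm({\mathscr C})$ are $4$-dimensional and contained in $H^{-1}({\bf e})$, as stated above.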
A normally hyperbolic singular cylinder admits $C^0$ characteristic projections, which are $C^1$ in a large neighborhood of its boundary; it also satisfies the $\lambda$-lemma and admits a $\Phi_H$-invariant Radon measure $\mu_{\mathscr C}$, positive on its open sets. \paraga Let $a<b$ be fixed. Let us introduce the notation: $$ {\bf A}:={\bf A}(a,b)={\mathbb T}\times[a,b],\qquad \partial_{\bullet}{\bf A}={\mathbb T}\times\{a\},\qquad \partial^{\bullet}{\bf A}={\mathbb T}\times\{b\}. $$ A {\em twist section} for a cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ is a global $2$-dimensional transverse section ${\Sigma}\subset {\mathscr C}$, image of an exact-symplectic embedding $j_{\Sigma}: {\bf A}\to{\mathscr C}$, such that the associated Poincar\'e return map $\varphi$ is a twist map in the $j_{\Sigma}$-induced coordinates on $ {\bf A}$. Denote by ${\bf Ess\,}(\varphi)$ the set of essential invariant circles of $\varphi$ (that is, whose inverse image by $j_{\Sigma}$ is homotopic to the base). By the Birkhoff theorem, these circles are Lipschitzian graphs over the base. One requires moreover that the boundaries of ${\Sigma}$ are accumulation points of ${\bf Ess\,}(\varphi)$. Note that $\partial{\mathscr C}\cap{\Sigma}=\partial_{\bullet}{\bf A}\cup \partial^{\bullet}{\bf A}$. We define $\partial_{\bullet}{\mathscr C}$ and $\partial^{\bullet}{\mathscr C}$ as the components of $\partial {\mathscr C}$ which contain $j_{\Sigma}(\partial_{\bullet}{\bf A})$ and $j_{\Sigma}(\partial^{\bullet}{\bf A})$ respectively. \paraga A generalized twist section for a singular cylinder is a singular $2$-annulus which admits a continuation to a $2$-annulus, on which the Poincar\'e return map continues to a twist map (see \cite{Mar,GM}). \setcounter{paraga}{0} \subsection{\bf Intersection conditions, gluing condition, and admissible chains} Let $H$ be a proper $C^2$ Hamiltonian function on ${\mathbb A}^3$ and fix a regular value~${\bf e}$. \paraga{\bf Oriented cylinders.} We say that a cylinder ${\mathscr C}$ is {\em oriented} when an order is prescribed on the two components of its boundary. We denote the first one by $\partial_{\bullet}{\mathscr C}$ and the second one by $\partial^{\bullet}{\mathscr C}$. \paraga {\bf The homoclinic condition {\rm(FS1)}.} A compact invariant cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ and associated invariant symplectic $4$-annulus ${\mathscr A}$ satisfies condition {\rm(FS1)}\ when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$ such that: \begin{itemize} \item there exist $4$-dimensional submanifolds ${\mathscr A}^\pm\subset W^\pm({\mathscr A})\cap\Delta$ such that the restrictions to ${\mathscr A}^\pm$ of the characteristic projections $\Pi^\pm:W^\pm({\mathscr A})\to{\mathscr A}$ are diffeomorphisms, whose inverses we denote by $j^\pm:{\mathscr A}\to{\mathscr A}^\pm$; \item there exists a continuation ${\mathscr C}_*$ of ${\mathscr C}$ such that ${\mathscr C}^\pm_*=j^\pm({\mathscr C}_*)$ have a nonempty intersection, transverse in the $4$-dimensional manifold $ \Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e}) $; so that ${\mathscr I}_*:={\mathscr C}_*^+\cap{\mathscr C}_*^-$ is a $2$-dimensional submanifold of $\Delta_{\bf e}$; \item the projections $\Pi^\pm({\mathscr I}_*)\subset{\mathscr C}_*$ are $2$-dimensional transverse sections of the vector field $X_H$ restricted to ${\mathscr C}_*$, and the associated Poincar\'e maps $P^\pm: \Pi^\pm({\mathscr I}_*)\to{\Sigma}_*$ are diffeomorphisms.
\end{itemize} We then say that \begin{equation} \psi=P^+\circ\Pi^+\circ j^-\circ (P^-)^{-1}_{\vert{\Sigma}}:{\Sigma}\to {\Sigma}_* \end{equation} (where ${\Sigma}_*$ is a continuation of ${\Sigma}$), is the {\em homoclinic map} attached to ${\mathscr C}$. Note that $\psi$ is a Hamiltonian diffeomorphism on its image. \paraga {\bf The heteroclinic condition {\rm(FS2)}.} A pair $({\mathscr C}_0,{\mathscr C}_1)$ of compact invariant oriented cylinders with twist sections ${\Sigma}_0$, ${\Sigma}_1$ and associated invariant symplectic $4$-annuli $({\mathscr A}_0,{\mathscr A}_1)$ satisfies condition {\rm(FS2)}\ when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$ such that: \begin{itemize} \item there exist $4$-dimensional submanifolds $\widetilde {\mathscr A}_0^-\subset W^-({\mathscr A}_0)\cap\Delta$ and $\widetilde {\mathscr A}_1^+\subset W^+({\mathscr A}_1)\cap\Delta$ such that $\Pi_0^-{\vert \widetilde{\mathscr A}_0^-}$ and $\Pi_1^+{\vert \widetilde{\mathscr A}_1^+}$ are diffeomorphisms on their images $\widetilde {\mathscr A}_0$, $\widetilde {\mathscr A}_1$, which we require to be neighborhoods of the boundaries $\partial^{\bullet}{\mathscr C}_0$ and $\partial_{\bullet}{\mathscr C}_1$ in ${\mathscr A}_0$ and ${\mathscr A}_1$ respectively; we denote their inverses by $j_0^-$ and $j_1^+$; \item there exist neighborhoods $\widetilde{\mathscr C}_0$ and $\widetilde{\mathscr C}_1$ of $\partial^{\bullet}{\mathscr C}_0$ and $\partial_{\bullet}{\mathscr C}_1$ in continuations of the initial cylinders, such that $\widetilde {\mathscr C}_0^-=j_0^-(\widetilde{\mathscr C}_0)$ and $\widetilde{\mathscr C}_1^+=j_1^+(\widetilde{\mathscr C}_1)$ intersect transversely in the $4$-dimensional manifold $\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e})$; let ${\mathscr I}_*$ be this intersection; \item the projections $\Pi_0^-({\mathscr I}_*)\subset\widetilde{\mathscr C}_0$ and $\Pi_1^+({\mathscr I}_*)\subset\widetilde{\mathscr C}_1$ are $2$-dimensional transverse sections of the vector field $X_H$ restricted to $\widetilde {\mathscr C}_0$ and $\widetilde{\mathscr C}_1$, and the Poincar\'e maps $P_0: \Pi_0^-({\mathscr I}_*)\to{\Sigma}_0$ and $P_1: \Pi_1^+({\mathscr I}_*)\to{\Sigma}_1$ are diffeomorphisms (where ${\Sigma}_i$ stands for Poincar\'e sections in the neighborhoods $\widetilde{\mathscr C}_i$). \end{itemize} We then say that \begin{equation} \psi=P_1\circ\Pi_1^+\circ j_0^-\circ (P_0)^{-1}:{\Sigma}_0\to{\Sigma}_1 \end{equation} is the {\em heteroclinic map} attached to the pair $({\mathscr C}_0,{\mathscr C}_1)$ (which is not uniquely defined). \paraga {\bf The homoclinic condition {\rm(PS1)}.} Consider an invariant cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ and attached Poincar\'e return map $\varphi$, so that ${\Sigma}=j_{\Sigma}({\mathbb T}\times[a,b])$, where $j_{\Sigma}$ is exact-symplectic. Define ${\bf Tess}({\mathscr C})$ as the set of all invariant tori generated by the essential invariant circles of $\varphi$ under the action of the Hamiltonian flow (so each element of ${\bf Tess}({\mathscr C})$ is a Lipschitzian Lagrangian torus contained in ${\mathscr C}$). The elements of ${\bf Tess}({\mathscr C})$ are said to be {\em essential tori}.
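More concretely (this reformulation is only meant as a guide for the reader), if ${\Gamma}\in{\bf Ess\,}(\varphi)$ and if $T(x)>0$ denotes the return time to ${\Sigma}$ of a point $x\in{\Gamma}$, the torus generated by ${\Gamma}$ is the set
$$
{\mathscr T}_{\Gamma}=\big\{\Phi_H^t(x)\mid x\in{\Gamma},\ 0\leq t\leq T(x)\big\},
$$
which is invariant under the flow since $\varphi({\Gamma})={\Gamma}$.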
\vskip2mm We say that an invariant cylinder ${\mathscr C}$ with associated invariant symplectic $4$-annulus ${\mathscr A}$ satisfies the {\em partial section property~{\rm(PS)}} when there exists a $5$-dimensional submanifold $\Delta\subset {\mathbb A}^3$, transverse to $X_H$ such that: \begin{itemize} \item there exist $4$-dimensional submanifolds ${\mathscr A}^\pm\subset W^\pm({\mathscr A})\cap\Delta$ such that the restrictions to ${\mathscr A}^\pm$ of the characteristic projections $\Pi^\pm:W^\pm({\mathscr A})\to{\mathscr A}$ are diffeomorphisms, whose inverses we denote by $j^\pm:{\mathscr A}\to{\mathscr A}^\pm$; \item there exist conformal exact-symplectic diffeomorphisms \begin{equation} \Psi^{\rm ann}:{\mathscr A}\to {\mathscr O}^{\rm ann},\qquad \Psi^{\rm sec}:\Delta_{\bf e}:=\Delta\cap H^{-1}({\bf e})\to{\mathscr O}^{\rm sec} \end{equation} where ${\mathscr O}^{\rm ann}$ and ${\mathscr O}^{\rm sec}$ are neighborhoods of the zero section in $T^*{\mathbb T}^2$ endowed with the conformal Liouville form $a\lambda$ for a suitable $a>0$; \item each torus ${\mathscr T}\in{\bf Tess}({\mathscr C})$ is contained in ${\mathscr C}$ and the image $\Psi^{\rm ann}({\mathscr T})$ is a Lipschitz graph over the base ${\mathbb T}^2$; \item for each such torus ${\mathscr T}$, setting ${\mathscr T}^\pm:=j^\pm({\mathscr T})\subset \Delta_{\bf e}$, the images $\Psi^{\rm sec}({\mathscr T}^\pm)$ are Lipschitz graphs over the base ${\mathbb T}^2$. \end{itemize} \paraga {\bf Bifurcation condition.} See \cite{Mar} for the definitions and assumptions. The condition we state involves the $s$-averaged system along a simple resonance circle; it will be translated in the following as an intrinsic condition. With the notation of \cite{Mar}: for any $r^0\in B$, the derivative $\tfrac{d}{dr}\big(m^*(r)-m^{**}(r)\big)$ does not vanish. This immediately yields transverse heteroclinic intersection properties for the intersections of the corresponding cylinders. \paraga {\bf The gluing condition {\rm(G)}.} A pair $({\mathscr C}_0,{\mathscr C}_1)$ of compact invariant oriented cylinders satisfies condition {\rm(G)}\ when they are contained in an invariant cylinder and satisfy \begin{itemize} \item $\partial^{\bullet}{\mathscr C}_0=\partial_{\bullet}{\mathscr C}_1$ is a dynamically minimal invariant torus that we denote by ${\mathscr T}$, \item $W^-({\mathscr T})$ and $W^+({\mathscr T})$ intersect transversely in $H^{-1}({\bf e})$. \end{itemize} \paraga {\bf \!\! Admissible chains.}\!\! A finite family of compact invariant oriented cylinders $({\mathscr C}_k)_{1\leq k\leq k_*}$ is an {\em admissible chain} when each cylinder satisfies either ${\rm(FS1)}$ or ${\rm(PS1)}$ and, for $k$ in $\{1,\ldots,k_*-1\}$, the pair $({\mathscr C}_k,{\mathscr C}_{k+1})$ satisfies either ${\rm(FS2)}$ or ${\rm(G)}$, or corresponds to a bifurcation point. \setcounter{paraga}{0} \subsection{Good cylinders and good chains}\label{sec:goodcylchains} This section is dedicated to the additional conditions introduced in \cite{GM} which produce orbits drifting along a chain. \paraga {\bf Special twist maps.} We say that a twist map $\varphi$ of ${\bf A}$ is {\em special} when it does not admit any essential invariant circle with rational rotation number and when moreover {every} element of ${\bf Ess\,}(\varphi)\setminus\partial^{\bullet}{\bf A}$ is either the upper boundary of a Birkhoff zone, or accumulated from below (in the Hausdorff topology) by a sequence of elements of ${\bf Ess\,}(\varphi)$.
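As an elementary illustration of this definition (a model situation, not one of the cylinders constructed in this paper), consider the integrable twist map
$$
\varphi_0({\theta},r)=({\theta}+r,r),\qquad ({\theta},r)\in{\bf A}={\mathbb T}\times[a,b].
$$
Every horizontal circle ${\mathbb T}\times\{r\}$ belongs to ${\bf Ess\,}(\varphi_0)$ and has rotation number $r$; in particular, as soon as $[a,b]$ contains a rational number, $\varphi_0$ admits an essential invariant circle with rational rotation number and is therefore not special. The first requirement in the definition thus rules out this completely integrable behaviour.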
\paraga {\bf The homoclinic correspondence.} We first define the {\em transverse homoclinic intersection} of a normally hyperbolic cylinder ${\mathscr C}\subset H^{-1}({\bf e})$ as the set \begin{equation} {\rm Homt}({\mathscr C})\subset W^+({\mathscr C})\cap W^-({\mathscr C}) \end{equation} formed by the points $\xi$ such that \begin{equation} W^-\big(\Pi^-(\xi)\big)\pitchfork_\xi W^+({\mathscr C})\quad\textrm{and}\quad W^+\big(\Pi^+(\xi)\big)\pitchfork_\xi W^-({\mathscr C}), \end{equation} where $\pitchfork_\xi$ stands for ``intersects transversely at $\xi$ relatively to $H^{-1}({\bf e})$.'' \vskip2mm${\bullet}$ Assume that ${\mathscr C}$ admits a twist section ${\Sigma}=j_{\Sigma}({\bf A})$ and identify ${\bf A}$ with ${\Sigma}$. A {\em homoclinic correspondence} associated with these data is a family of $C^1$ local diffeomorphisms of ${\Sigma}$: \begin{equation} \psi=(\psi_i)_{i\in I}, \qquad \psi_i:{\rm Dom\,} \psi_i\subset{\Sigma}\to {\rm Im\,}\psi_i \subset{\Sigma}, \end{equation} where ${\rm Dom\,} \psi_i$ and ${\rm Im\,}\psi_i$ are open subsets of ${\Sigma}$, for which there exists a family of $C^1$ local diffeomorphisms of ${\mathscr C}$: \begin{equation} S=(S_i)_{i\in I}, \qquad S_i:{\rm Dom\,} S_i\subset{\mathscr C}\to {\rm Im\,} S_i\subset{\mathscr C} \end{equation} where ${\rm Dom\,} S_i$ and ${\rm Im\,} S_i$ are open subsets of ${\mathscr C}$, such that for all $i\in I$: \begin{itemize} \item there exists a $C^1$ function $\tau_i:{\rm Dom\,}\psi_i\to {\mathbb R}$ such that \begin{equation} \forall x\in{\rm Dom\,}\psi_i,\qquad \Phi_H^{\tau_i(x)}(x)\in {\rm Dom\,} S_i \quad \textit{and}\quad \psi_i(x)=S_i\Big(\Phi_H^{\tau_i(x)}(x)\Big); \end{equation} \item there is an open subset $\Domt S_i\subset {\rm Dom\,} S_i$, with full measure in ${\rm Dom\,} S_i$, such that \begin{equation}\label{eq:redhom3} \forall y\in \Domt S_i, \quad W^-(y)\cap W^+\big(S_i(y)\big)\cap {\rm Homt}({\mathscr C})\neq\emptyset. \end{equation} \end{itemize} We say that a family $S$ satisfying the previous properties is {\em associated} with $\psi$. \vskip2mm Homoclinic correspondences are not uniquely defined and the domains ${\rm Dom\,} \psi_i$ (resp. ${\rm Dom\,} S_i$) are not necessarily pairwise disjoint. The index set $I$ is in general not countable. In the following we indifferently consider our homoclinic correspondences as defined on ${\Sigma}$ or on ${\bf A}$. The following additional definition is necessary to produce $\delta$-admissible orbits. \vskip2mm${\bullet}$ Let ${\mathscr C}$ be a good cylinder with twist section ${\Sigma}=j_{\Sigma}({\bf A})$ and a homoclinic correspondence $\psi:{\bf A}\righttoleftarrow$. Fix $\delta>0$. We say that $\psi$ is {\em $\delta$-bounded} when for each essential circle ${\Gamma}\in{\bf Ess\,}(\varphi)$ { \begin{equation} {\rm dist\,}\Big({\rm cl}({\Gamma}),{\rm cl}\big({\Gamma}^-\cap\psi^{-1}({\Gamma}^+)\big)\Big)<\delta \end{equation} where ${\rm dist\,}$ stands for the Hausdorff distance.} \vskip2mm ${\bullet}$ The previous definitions, in their full generality, will apply to cylinders which satisfy Condition {\rm(PS1)}, in the sense that one immediately shows that any such cylinder admits a homoclinic correspondence. When a cylinder ${\mathscr C}$ satisfies {\rm(FS1)}, it turns out that the situation is much simpler: there exists a single $C^1$ diffeomorphism $\psi:{\bf A}\righttoleftarrow$ which satisfies the previous compatibility condition with a single diffeomorphism $S:{\rm Dom\,} S\to{\rm Im\,} S$.
In the case of a singular cylinder ${\mathscr C}_{\bullet}$ with generalized section ${\Sigma}_{\bullet}\sim{\bf A}_{\bullet}$, there also exists a single $C^1$ diffeomorphism $\psi_{\bullet}:{\bf A}_{\bullet}\righttoleftarrow$ which satisfies the previous compatibility condition with a single diffeomorphism $S:{\rm Dom\,} S\to{\rm Im\,} S$. The diffeomorphism $\psi_{\bullet}$ is continuable to a diffeomorphism $\psi$ of the continuation ${\bf A}$ of the section. \paraga {\bf Splitting arcs.} The notions we introduce now are useful only in the case of cylinders satisfying {\rm(PS1)}. We consider such a normally hyperbolic cylinder ${\mathscr C}$ equipped with a twist section ${\Sigma}=j_{\Sigma}({\bf A})$ and a homoclinic correspondence $\psi=(\psi_i)_{i\in I}$ on ${\bf A}$. An {\em arc} of ${\bf A}$ is a continuous map ${\zeta}:[0,1]\to{\bf A}$. We write $\widehat {\zeta}={\zeta}([0,1])\subset{\bf A}$ for the image of the arc. Given two distinct points ${\theta},{\theta}'$ of ${\mathbb T}$, we write $[{\theta},{\theta}']$ for the (unique) segment bounded by ${\theta}$ and ${\theta}'$ according to the natural orientation of ${\mathbb T}$. When two points $\alpha=({\theta},r),\alpha'=({\theta}',r')$ belong to a circle ${\Gamma}$ which is a graph over ${\mathbb T}$, we write $[\alpha,\alpha']_{\Gamma}$ for the oriented segment of ${\Gamma}$ located over $[{\theta},{\theta}']$, equipped with the natural orientation of ${\Gamma}$. We write $-[\alpha,\alpha']_{\Gamma}$ for the segment equipped with the opposite orientation. \vskip2mm\noindent Consider ${\Gamma}\in{\bf Ess\,}(\varphi)$ and let $\alpha\in{\Gamma}$. \begin{itemize} \item A {\em splitting arc} based at $\alpha$ for the pair $(\varphi,\psi)$ is an arc ${\zeta}$ of ${\bf A}$ for which $$ {\zeta}(0)=\alpha,\quad {\zeta}(]0,1])\subset {\Gamma}^-;\quad \exists i\in I,\ {\zeta}(]0,1])\subset {\rm Dom\,}\psi_i,\quad \psi_i({\zeta}(]0,1]))\subset {\Gamma}. $$ \item A {\em splitting domain} based at $\alpha$ for the pair $(\varphi,\psi)$ is the interior of a $2$-dimensional submanifold with boundary of ${\bf A}$ which is contained in ${\Gamma}^-$ and whose boundary contains a splitting arc based at $\alpha$; \item A {\em simple splitting arc} based at $\alpha=({\theta},r)$ for the pair $(\varphi,\psi)$ is a splitting arc ${\zeta}$ based at $\alpha$ such that $\widehat{\zeta}$ projects over an interval $[{\theta},{\theta}+{\sigma}]$ or an interval $[{\theta}-{\sigma},{\theta}]$, with $0<{\sigma}<{\tfrac{1}{2}}$. \end{itemize} \paraga {\bf Good cylinders.} We will distinguish between three cases. \vskip1mm 1) We say that a normally hyperbolic cylinder ${\mathscr C}$ which satisfies {\rm(FS1)}\ is a good cylinder when it admits a twist section ${\Sigma}=j_{\Sigma}({\bf A})$ with return map $\varphi:{\bf A}\righttoleftarrow$ and a homoclinic map $\psi:{\bf A}\righttoleftarrow$ such that no element of ${\bf Ess\,}(\varphi)$ is invariant under $\psi$. \vskip1mm 2) We say that a normally hyperbolic singular cylinder ${\mathscr C}_{\bullet}$ which satisfies {\rm(FS1)}\ is a good cylinder when it admits a generalized twist section ${\Sigma}_{\bullet}=j_{\Sigma}({\bf A}_{\bullet})$ with return map $\varphi:{\bf A}_{\bullet}\righttoleftarrow$ (continuable as a twist map of an annulus) and a homoclinic map $\psi:{\bf A}\righttoleftarrow$ such that no element of ${\bf Ess\,}(\varphi)$ is invariant under $\psi$.
\vskip1mm 3) We say that a normally hyperbolic cylinder ${\mathscr C}$ which satisfies {\rm(PS1)}\ is a good cylinder when it admits a twist section ${\Sigma}=j_{\Sigma}({\bf A})$ with return map $\varphi:{\bf A}\righttoleftarrow$ and a homoclinic correspondence $\psi:{\bf A}\righttoleftarrow$ such that \begin{itemize} \item for any element ${\Gamma}$ of ${\bf Ess\,}(\varphi)$ {which is not the upper boundary of a Birkhoff zone}, there exists a splitting domain based on ${\Gamma}$; \item if ${\Gamma}\in{\bf Ess\,}(\varphi)$ is the upper boundary of a Birkhoff zone, then there exists a simple splitting arc based on ${\Gamma}$. \end{itemize} \paraga {\bf Heteroclinic maps.} Let ${\mathscr C}_1$ and ${\mathscr C}_2$ be disjoint good cylinders at energy ${\bf e}$ for $H$, with characteristic projections $\Pi_i^\pm:W^\pm({\mathscr C}_i)\to{\mathscr C}_i$ and twist sections ${\Sigma}_i=j_{{\Sigma}_i}({\bf A}_i)$, with ${\bf A}_i={\mathbb T}\times[a_i,b_i]$. We define the {\em transverse heteroclinic intersection of ${\mathscr C}_1$ and ${\mathscr C}_2$} as the set \begin{equation} {\rm Hett}({\mathscr C}_1,{\mathscr C}_2)\subset W^-({\mathscr C}_1)\cap W^+({\mathscr C}_2) \end{equation} formed by the points $\xi$ such that \begin{equation} W^-\big(\Pi_1^-(\xi)\big)\pitchfork_\xi W^+({\mathscr C}_2)\quad\textrm{and}\quad W^+\big(\Pi_2^+(\xi)\big)\pitchfork_\xi W^-({\mathscr C}_1). \end{equation} A {\em heteroclinic map} from ${\mathscr C}_1$ to ${\mathscr C}_2$ is a $C^1$ diffeomorphism \begin{equation} \psi_1^2:{\rm Dom\,} \psi_1^2\subset {\Sigma}_1 \to {\rm Im\,}\psi_1^2\subset {\Sigma}_2 \end{equation} where ${\rm Dom\,} \psi_1^2$ is an open neighborhood of $\partial^{\bullet}{\Sigma}_1$ in ${\Sigma}_1$ and ${\rm Im\,}\psi_1^2$ is an open neighborhood of $\partial_{\bullet} {\Sigma}_2$ in ${\bf A}_2$, for which there exists a $C^1$ diffeomorphism \begin{equation} S_1^2:{\rm Dom\,} S_1^2\subset {\mathscr C}_1 \to {\rm Im\,} S_1^2\subset {\mathscr C}_2 \end{equation} where ${\rm Dom\,} S_1^2$ and ${\rm Im\,} S_1^2$ are open subsets, which satisfies the following conditions: \begin{itemize} \item there exists a $C^1$ function $\tau:{\rm Dom\,}\psi_1^2\to {\mathbb R}$ such that \begin{equation} \forall x\in{\rm Dom\,}\psi_1^2,\qquad \Phi_H^{\tau(x)}(x)\in {\rm Dom\,} S_1^2 \quad \textit{and}\quad \psi_1^2(x)=S_1^2\Big(\Phi_H^{\tau(x)}(x)\Big); \end{equation} \item there is an open subset $\Domt S_1^2\subset {\rm Dom\,} S_1^2$, with full measure in ${\rm Dom\,} S_1^2$, such that \begin{equation}\label{eq:compahetero} \forall y\in \Domt S_1^2,\qquad W^-(y)\cap W^+\big(S_1^2(y)\big)\cap{\rm Hett}({\mathscr C}_1,{\mathscr C}_2)\neq\emptyset. \end{equation} \end{itemize} \paraga {\bf Bifurcation maps.} Let ${\mathscr C}_1$ and ${\mathscr C}_2$ be disjoint good cylinders at energy ${\bf e}$ for $H$, with characteristic projections $\Pi_i^\pm:W^\pm({\mathscr C}_i)\to{\mathscr C}_i$ and twist sections ${\Sigma}_i=j_{{\Sigma}_i}({\bf A}_i)$, with ${\bf A}_i={\mathbb T}\times[a_i,b_i]$. 
A bifurcation map from ${\mathscr C}_1$ to ${\mathscr C}_2$ is a $C^1$ diffeomorphism \begin{equation} \psi_1^2:{\Gamma}_1\subset {\Sigma}_1 \to {\Gamma}_2\subset {\Sigma}_2 \end{equation} where ${\Gamma}_i$ are dynamically minimal circles for the return maps $\varphi_i$, such that, denoting by ${\mathscr T}_i$ the essential tori they generate, there exists a $C^1$ diffeomorphism \begin{equation} S_1^2:{\mathscr T}_1 \to {\mathscr T}_2 \end{equation} which satisfies the following conditions: there exists a $C^1$ function $\tau:{\mathscr T}_1\to {\mathbb R}$ such that \begin{equation} \forall x\in{\mathscr T}_1,\qquad \Phi_H^{\tau(x)}(x)\in {\mathscr T}_1 \quad \textit{and}\quad \psi_1^2(x)=S_1^2\Big(\Phi_H^{\tau(x)}(x)\Big); \end{equation} and \begin{equation} \forall y\in {\mathscr T}_1,\qquad W^-(y)\cap W^+\big(S_1^2(y)\big)\cap{\rm Hett}({\mathscr C}_1,{\mathscr C}_2)\neq\emptyset. \end{equation} \paraga {\bf Good chains.} A {\em good chain of cylinders} at energy ${\bf e}$ is an admissible chain $({\mathscr C}_k)_{1\leq k\leq k_*}$ of {\em good} cylinders or singular cylinders at energy ${\bf e}$, with twist sections ${\Sigma}_k$, such that for $1\leq k\leq k_*-1$: \begin{itemize} \item either ${\mathscr C}_k$ and ${\mathscr C}_{k+1}$ are consecutive cylinders contained in the same cylinder, that is $\partial^{\bullet}{\mathscr C}_k=\partial_{\bullet}{\mathscr C}_{k+1}$, which satisfy the gluing condition {\rm(G)}; \item or there exists a bifurcation map $\psi_k^{k+1}$ from ${\mathscr C}_k$ to ${\mathscr C}_{k+1}$; \item or there exist a heteroclinic map $\psi_k^{k+1}$ from ${\mathscr C}_k$ to ${\mathscr C}_{k+1}$ and a circle ${\Gamma}_k\in{\bf Ess\,} (\varphi_k)$ contained in ${\rm Dom\,} \psi_k^{k+1}$ whose image $\psi_k^{k+1}({\Gamma}_k)$ is a dynamically minimal essential invariant circle for $\varphi_{k+1}$. \end{itemize} \section{Perturbation of characteristic foliations}\label{Sec:perturbation} We now introduce the main ingredient of our perturbative construction. \subsection{A perturbative lemma for Poincar\'e maps} We refer to \cite{MS} for the necessary definitions and results in symplectic geometry. We begin with a global form of the Hamiltonian flow-box theorem. \begin{lemma}\label{lem:hamflowbox} Let $(M^{2m},\Omega)$ be a symplectic manifold with Poisson bracket $\{\, ,\, \}$, and fix a Hamiltonian $H\in C^\infty(M)$, with complete vector field. Let $\Lambda$ be a codimension 1 submanifold of $M$, transverse to $X_H$, such that there exists an open interval $I\subset{\mathbb R}$ containing $0$ for which the restriction of $\Phi_H$ to $I\times \Lambda$ is an embedding. Set \begin{equation} {\mathscr D}:=\Phi_H(I\times\Lambda) \end{equation} and let $F:{\mathscr D}\to{\mathbb R}$ be the $C^\kappa$ (transition time) function defined by \begin{equation} \Phi_H\big(-F(x),x\big)\in\Lambda,\qquad \forall x\in {\mathscr D}. \end{equation} Then $\{H,F\}=1$ and $\Lambda=F^{-1}(0)$, so $X_F$ is tangent to $\Lambda$. Assume moreover that there exist an open interval $J$ and $\overline{\bf e}\in J$ such that, setting $\Lambda_{\overline{\bf e}}=H^{-1}(\overline {\bf e})\cap \Lambda$, the flow of $X_F$ is defined on $J\times \Lambda_{\overline{\bf e}}$ and satisfies \begin{equation} \Lambda=\Phi_F\big(J\times \Lambda_{\overline{\bf e}}\big).
\end{equation} Then the form $\Omega_{\overline {\bf e}}$ induced by $\Omega$ on $\Lambda_{\overline{\bf e}}$ is symplectic, and the map \begin{equation} \begin{array}{lccll} {\boldsymbol \chi}: &(I\times J)\times \Lambda_{\overline{\bf e}} &\longrightarrow& {\mathscr D}&\\[5pt] &\big((t,{\bf e}), x\big)&\longmapsto &\Phi_H\big(t,\Phi_F({\bf e},x)\big)& \end{array} \end{equation} is a $C^\infty$ symplectic diffeomorphism on its image, where $(I\times J)\times \Lambda_{\overline{\bf e}}$ is equipped with the form \begin{equation} d{\bf e}\wedge dt\,\oplus\, \Omega_{\overline {\bf e}}. \end{equation} Moreover \begin{equation} H\circ{\boldsymbol \chi}\big((t,{\bf e}), x\big)={\bf e},\qquad {\boldsymbol \chi}^*(X_H)=\frac{\partial}{\partial t}. \end{equation} \end{lemma} In the following we say that a submanifold $\Lambda$ satisfying the assumptions of the previous lemma is a {\em box-section} for $X_H$, with associated data $(I,J,\overline {\bf e})$. Given {\em any} transverse section $\widehat\Lambda$ of $X_H$ and a compact subset $K$ of $\widehat\Lambda\cap H^{-1}(\overline{\bf e})$, one easily proves that there exists a box-section $\Lambda\subset \widehat\Lambda$ which is a neighborhood of $K$ in $\widehat\Lambda$. \begin{lemma}\label{lem:perturblemma} Let $M^{2m}$ be a symplectic manifold and fix a Hamiltonian $H\in C^\infty(M)$ with complete vector field $X_H$ and flow $\Phi_H$. Assume that $\Lambda$ is a {\em box-section} for $X_H$, with associated data $(I,J,\overline {\bf e})$ such that $[-1,0]\subset I$, and set \begin{equation} \De^{(-1)}=\Phi_H^{-1}(\Lambda),\qquad \Lambda_{\overline{\bf e}}=\Lambda\cap H^{-1}(\overline{\bf e}),\qquad \De^{(-1)}_{\overline{\bf e}}=\De^{(-1)}\cap H^{-1}(\overline{\bf e}), \end{equation} and let $P_H$ be the $\Phi_H$-induced Poincar\'e map between $\De^{(-1)}_{\overline{\bf e}}$ and $\Lambda_{\overline{\bf e}}$. Let $K$ be a compact subset of $\Lambda_{\overline {\bf e}}$ contained in the relative interior of $\Lambda_{\overline {\bf e}}$. Then for any $C^\infty$ Hamiltonian diffeomorphism $\phi:\Lambda_{\overline {\bf e}}\righttoleftarrow$ with support in $K$, there exists a Hamiltonian ${\mathcal H}\in C^\infty(M)$ such that: \vskip1mm ${\bullet}$ $\Lambda$ is a box-section of $X_{{\mathcal H}}$ with associated data $(I,J,\overline {\bf e})$, and $\Phi_{\mathcal H}^{-1}(\Lambda)=\De^{(-1)}$, \vskip1mm ${\bullet}$ ${\mathcal H}$ coincides with $H$ outside $\Phi_H(]-1,0[\times \Lambda)$, \vskip1mm ${\bullet}$ the $\Phi_{\mathcal H}$-induced Poincar\'e map between $\De^{(-1)}_{\overline{\bf e}}$ and $\Lambda_{\overline{\bf e}}$ satisfies \begin{equation} P_{\mathcal H}=\phi\circ P_H. \end{equation} \vskip0mm ${\bullet}$ ${\mathcal H}$ tends to $H$ in the $C^\infty$ topology when $\phi\to {\rm Id}$ in the $C^\infty$ topology. \end{lemma} \begin{proof} The map \begin{equation} {\boldsymbol \chi}:I\times J\times \Lambda_{\overline{\bf e}}\longrightarrow \Phi_H(I\times \Lambda) \end{equation} of Lemma~\ref{lem:hamflowbox} is a symplectic diffeomorphism such that $H\circ{\boldsymbol \chi}(t,{\bf e},x)={\bf e}$. We first work in the coordinates $(t,{\bf e},x)$ to construct our new Hamiltonian. Let $\ell: I\times \Lambda_{\overline{\bf e}}\to{\mathbb R}$ be a $C^\infty$ nonautonomous Hamiltonian on $\Lambda_{\overline{\bf e}}$ with support in $[-2/3,-1/3]\times K$, whose associated transition map between the times $-2/3$ and $-1/3$ coincides with $\phi$.
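The existence of such an $\ell$ follows from a standard reparametrization argument, which we only sketch here, assuming that $\phi$ is the time-one map of a Hamiltonian isotopy $(\phi_s)_{s\in[0,1]}$ of $\Lambda_{\overline{\bf e}}$ generated by a family $(L_s)_{s\in[0,1]}$ of Hamiltonians with support in $K$. Choose a $C^\infty$ nondecreasing function $\rho:{\mathbb R}\to[0,1]$ equal to $0$ for $t\leq-2/3$ and to $1$ for $t\geq-1/3$, and set
$$
\ell(t,x)=\rho'(t)\,L_{\rho(t)}(x).
$$
Then $\ell$ has support in $[-2/3,-1/3]\times K$ and the time-dependent flow it generates, initialized at the identity at time $-2/3$, is $t\mapsto\phi_{\rho(t)}$, so that its transition map between the times $-2/3$ and $-1/3$ is $\phi_1\circ\phi_0^{-1}=\phi$.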
We set \begin{equation} {\mathsf H}(t,{\bf e},x)={\bf e}+\eta({\bf e})\ell(t,x) \end{equation} where $\eta:J\to {\mathbb R}$ is a $C^\infty$ function with support in $J$, equal to $1$ in an open neighborhood $J_*$ of $\overline{\bf e}$, so that the function $\eta\,\ell$ has compact support contained in $[-2/3,-1/3]\times {\rm Supp}\,\eta\times K$. The associated vector field reads, for $(t,{\bf e},x)\in I\times J_*\times \Lambda_{\overline{\bf e}}$: \begin{equation} \dot t=1,\quad \dot{\bf e} =\partial_t\ell(t,x),\quad \dot x=X_\ell(t,x), \end{equation} where $X_\ell$ stands for the vector field generated by $\ell$ relatively to the induced form $\Omega_{\overline {\bf e}}$ on~$\Lambda_{\overline{\bf e}}$. As a consequence, the Poincar\'e map of ${\mathsf H}$ between the sections $\{-1\}\times(J\times \Lambda_{\overline{\bf e}})$ and $\{0\}\times(J\times \Lambda_{\overline{\bf e}})$ reads: \begin{equation} P_{\mathsf H}(-1,{\bf e},x)=\big(0,{\bf e},\phi(x)\big). \end{equation} The function ${\mathsf H}\circ{\boldsymbol \chi}^{-1}$ coincides with $H$ in the open set \begin{equation} {\boldsymbol \chi}(I\times J\times \Lambda_{\overline{\bf e}})\ \setminus\ {\boldsymbol \chi}([-2/3,-1/3]\times {\rm Supp}\,\eta \times K) \end{equation} and so continues as a $C^\infty$ function ${\mathcal H}$ on $M$, which coincides with $H$ outside the latter set. Since ${\boldsymbol \chi}$ is symplectic, $\Lambda$ is a box-section of $X_{{\mathcal H}}$ with associated data $(I,J,\overline {\bf e})$, and $\Phi_{\mathcal H}^{-1}(\Lambda)=\De^{(-1)}$. The Poincar\'e maps $P_{\mathsf H}$ and $P_{\mathcal H}$ satisfy \begin{equation} P_{\mathcal H}={\boldsymbol \chi}\circ P_{\mathsf H}\circ{\boldsymbol \chi}^{-1}_{\vert \Phi_H^{-1}(\Lambda_{\overline{\bf e}})}, \end{equation} and \begin{equation} {\boldsymbol \chi}(0,\overline{\bf e},x)=x,\qquad {\boldsymbol \chi}(-1,\overline{\bf e},x)=\Phi_H^{-1}(x), \end{equation} hence, setting $z=\Phi_H^{-1}(x)$ for $x\in \Lambda_{\overline{\bf e}}$: \begin{equation} P_{\mathcal H}(z)={\boldsymbol \chi}\circ P_{\mathsf H}\circ{\boldsymbol \chi}^{-1}(z)={\boldsymbol \chi}\circ P_{\mathsf H}(-1,\overline{\bf e},x)={\boldsymbol \chi}\big(0,\overline{\bf e},\phi(x)\big)=\phi(x) \end{equation} so that \begin{equation} P_{\mathcal H}=\phi\circ P_H. \end{equation} Finally one can choose $\ell$ so that $\ell\to 0$ in the $C^\infty$ topology when $\phi$ tends to ${\rm Id}$ in the $C^\infty$ topology, from which our last assertion easily follows. \end{proof} \subsection{Perturbations of homoclinic maps in the {\rm(FS)}\ case} In this section we consider a Hamiltonian $H\in C^\kappa({\mathbb A}^3)$ and fix a regular value ${\bf e}$ of $H$. \begin{lemma}\label{lem:pertfomhom} Assume that ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ satisfies condition {\rm(FS)}, and let $\psi_H$ be its homoclinic map. Then for any $C^{\kappa-2}$ Hamiltonian diffeomorphism ${\sigma}:{\Sigma}\righttoleftarrow$, there exists a $C^\kappa$ Hamiltonian ${\mathcal H}$ which coincides with $H$ in the neighborhood of ${\mathscr C}$, such that ${\mathscr C}$ still satisfies condition {\rm(FS)}\ for ${\mathcal H}$ and that the associated homoclinic map $\psi_{\mathcal H}$ satisfies \begin{equation} \psi_{\mathcal H}={\sigma}\circ\psi_H; \end{equation} moreover $\norm{{\mathcal H}-H}_\kappa\to 0$ when $d_{\kappa-2}({\sigma},{\rm Id})\to0$.
\end{lemma} \begin{proof} By the $C^\infty$-smoothing technique of \cite{Ze76}, there exists a Hamiltonian $H_*$, arbitrarily close to $H$ in the $C^\kappa$ topology, which coincides with $H$ in the neighborhood of ${\mathscr C}$, such that ${\mathscr C}$ still satisfies condition {\rm(FS)}\ with respect to $\Delta$ and such that $H_*$ is $C^\infty$ in the neighborhood of $\Delta$. Let $\psi_{H_*}$ be the associated homoclinic map and set \begin{equation} {\sigma}_*={\sigma}\circ\psi_H\circ\psi_{H_*}^{-1}, \end{equation} so that ${\sigma}_*$ is a Hamiltonian diffeomorphism of ${\Sigma}$, arbitrarily close to ${\sigma}$ in the $C^{\kappa-2}$ topology. It is therefore enough to prove the result for $H_*$ and ${\sigma}_*$ instead of $H$ and~${\sigma}$. So one can assume without loss of generality that $H$ is $C^\infty$ in the neighborhood of $\Delta$. By compactness of ${\mathscr C}^-$, one can find a neighborhood $\Lambda$ of ${\mathscr C}^-$ in $\Delta$ which is a box-section for $X_H$, with data $(I,J,{\bf e})$, where $I$ contains some interval $[-\tau,0]$. One can moreover assume that $H$ is $C^\infty$ in the neighborhood of $\Phi_H(\overline I\times \overline\Lambda)$. Set \begin{equation} \phi=(P_H^+\circ\Pi_H^+)^{-1}\circ{\sigma}\circ(P_H^+\circ\Pi_H^+)_{\vert {\mathscr I}} \end{equation} where ${\mathscr I}={\mathscr C}^+\cap{\mathscr C}^-$, so that $\phi$ is a Hamiltonian diffeomorphism of ${\mathscr I}$. We proved in \cite{Mar16} the existence of a Hamiltonian diffeomorphism ${\boldsymbol \phi}$ of $\Delta_{\bf e}$ which continues $\phi$. By Lemma~\ref{lem:perturblemma}, there exists a Hamiltonian ${\mathcal H}\in C^\kappa(M)$ such that: \vskip1mm ${\bullet}$ $\Lambda$ is a box-section of $X_{{\mathcal H}}$ with associated data $(I,J,\overline {\bf e})$, and $\Phi_{\mathcal H}^{-1}(\Lambda)=\De^{(-1)}$, \vskip1mm ${\bullet}$ ${\mathcal H}$ coincides with $H$ outside $\Phi_H(]-1,0[\times \Lambda)$, \vskip1mm ${\bullet}$ the $\Phi_{\mathcal H}$-induced Poincar\'e map between $\Lambda^{-}_{\overline{\bf e}}$ and $\Lambda_{\overline{\bf e}}$ satisfies \begin{equation} P_{\mathcal H}=\phi\circ P_H. \end{equation} \vskip0mm ${\bullet}$ ${\mathcal H}$ tends to $H$ in the $C^\kappa$ topology when $\phi\to {\rm Id}$ in the $C^{\kappa-2}$ topology. \vskip1mm \noindent Fix $\xi\in{\mathscr A}^-$ and set $\eta=\Phi_H^{-\tau}(\xi)$. Since $H$ and ${\mathcal H}$ coincide outside the perturbation box, \begin{equation} \Pi_H^-(\eta)=\Pi_{\mathcal H}^-(\eta). \end{equation} By equivariance of the unstable foliation of $W^-({\mathscr A})$, and since $\Phi_{\mathcal H}$ and $\Phi_H$ coincide on ${\mathscr A}$: \begin{equation} \Pi_{\mathcal H}^-\big(\Phi_{\mathcal H}^\tau(\eta)\big)=\Pi_H^-\big(\Phi_H^\tau(\eta)\big). \end{equation} Moreover, if $\xi\in{\mathscr C}^-$, \begin{equation} \Phi_{\mathcal H}^\tau(\eta)={\boldsymbol \phi}\circ\Phi_H^\tau(\eta) \end{equation} which proves that \begin{equation} \Pi_{\mathcal H}^-\circ{\boldsymbol \phi}(\xi)=\Pi_H^-(\xi),\qquad \forall \xi\in{\mathscr C}^-. \end{equation} In particular, since ${\boldsymbol \phi}$ leaves ${\mathscr I}$ invariant, \begin{equation} \Pi_{\mathcal H}^-({\mathscr I})=\Pi_H^-({\mathscr I}), \end{equation} and the Poincar\'e maps $P_{\mathcal H}^-$ and $P_H^-$ coincide. Moreover, since the perturbation does not affect the stable manifold $W^+({\mathscr A})$: \begin{equation} \psi_{\mathcal H}(x)=P_H^+\circ\Pi_H^+\circ j_{{\mathcal H}}^-\circ (P_H^-)^{-1}(x),\qquad \forall x\in {\Sigma}.
\end{equation} Hence \begin{equation} \psi_{\mathcal H}(x)=P_H^+\circ\Pi_H^+\circ\phi\circ j_H^-\circ(P_H^-)^{-1}(x)={\sigma}\circ\psi_H(x),\qquad \forall x\in {\Sigma}, \end{equation} which proves our claim. Finally, since $d_{\kappa-2}({\boldsymbol \phi},{\rm Id})\to 0$ when $d_{\kappa-2}({\sigma},{\rm Id})\to 0$ (see \cite{Mar16}), the last statement comes from Lemma~\ref{lem:perturblemma}. \end{proof} The following lemma for the heteroclinic condition {\rm(FS)}\ is proved exactly in the same way as the previous one. \begin{lemma} Assume that the pair $({\mathscr C}_0,{\mathscr C}_1)$ of compact invariant cylinders with twist sections ${\Sigma}_i$, contained in $H^{-1}({\bf e})$, satisfies condition {\rm(FS)}, and let $\psi_H$ be its heteroclinic map. Then for any $C^{\kappa-2}$ Hamiltonian diffeomorphism ${\sigma}:{\Sigma}_1\righttoleftarrow$, there exists a $C^\kappa$ Hamiltonian ${\mathcal H}$ which coincides with $H$ in the neighborhood of ${\mathscr C}_0$ and ${\mathscr C}_1$, such that $({\mathscr C}_0,{\mathscr C}_1)$ still satisfies condition {\rm(FS)}\ for ${\mathcal H}$ and that the associated heteroclinic map $\psi_{\mathcal H}$ satisfies \begin{equation} \psi_{\mathcal H}={\sigma}\circ\psi_H; \end{equation} moreover $\norm{{\mathcal H}-H}_\kappa\to 0$ when $d_{\kappa-2}({\sigma},{\rm Id})\to0$. \end{lemma} \subsection{Perturbations of characteristic projections in the {\rm(PS)}\ case} We consider a Hamiltonian $H\in C^\kappa({\mathbb A}^3)$ and fix a regular value ${\bf e}$ of $H$. \begin{lemma}\label{lem:perturbpom} Assume that ${\mathscr C}\subset H^{-1}({\bf e})$ with twist section ${\Sigma}$ satisfies condition {\rm(PS)}. Then there exists a compact neighborhood $K$ of ${\mathscr C}^-$ in $\Delta_{\bf e}$ such that for any $C^{\kappa-2}$ Hamiltonian diffeomorphism $\phi$ of $\Delta_{\bf e}$ with support in $K$, there exists a $C^\kappa$ Hamiltonian ${\mathcal H}$ which coincides with $H$ in the neighborhood of ${\mathscr C}$, such that ${\mathscr C}$ still satisfies condition {\rm(PS)}\ for ${\mathcal H}$ and that the associated characteristic projection satisfies \begin{equation} (\Pi^-_{H})_{\vert {\mathscr C}^-}=(\Pi^-_{\mathcal H}\circ\phi)_{\vert {\mathscr C}^-}; \end{equation} moreover $\norm{{\mathcal H}-H}_\kappa\to 0$ when $d_{\kappa-2}(\phi,{\rm Id})\to0$. \end{lemma} \begin{proof} The proof follows the same lines as that of Lemma~\ref{lem:perturblemma}. One can assume that $H$ is $C^\infty$ in the neighborhood of $\Delta$. One first constructs a box-section $\Lambda\subset\Delta$ with data $(I,J,{\bf e})$, where $I$ contains some interval $[-\tau,0]$, such that moreover $H$ is $C^\infty$ in the neighborhood of $\Phi_H(\overline I\times \overline\Lambda)$. Then, by Lemma~\ref{lem:perturblemma}, there exists a Hamiltonian ${\mathcal H}\in C^\kappa(M)$ such that: \vskip1mm ${\bullet}$ $\Lambda$ is a box-section of $X_{{\mathcal H}}$ with associated data $(I,J,\overline {\bf e})$, and $\Phi_{\mathcal H}^{-1}(\Lambda)=\De^{(-1)}$, \vskip1mm ${\bullet}$ ${\mathcal H}$ coincides with $H$ outside $\Phi_H(]-1,0[\times \Lambda)$, \vskip1mm ${\bullet}$ the $\Phi_{\mathcal H}$-induced Poincar\'e map between $\Lambda^{-}_{\overline{\bf e}}$ and $\Lambda_{\overline{\bf e}}$ satisfies \begin{equation} P_{\mathcal H}=\phi\circ P_H. \end{equation} \vskip0mm ${\bullet}$ ${\mathcal H}$ tends to $H$ in the $C^\kappa$ topology when $\phi\to {\rm Id}$ in the $C^{\kappa-2}$ topology.
\vskip1mm \noindent One then deduces exactly as in the proof of Lemma~\ref{lem:pertfomhom} that \begin{equation} (\Pi^-_{H})_{\vert {\mathscr C}^-}=(\Pi^-_{\mathcal H}\circ\phi)_{\vert {\mathscr C}^-}; \end{equation} and the last statement is immediate. \end{proof} \section{Proofs of Theorem~\ref{thm:main2} and Theorem~\ref{thm:main1}}\label{Sec:proof2} We now use the results of the previous section and prove that admissible chains can be made good chains by arbitrarily small perturbations of the Hamiltonian, which is the content of Theorem~\ref{thm:main2}. We then prove Theorem~\ref{thm:main1}, which relies on the results of \cite{Mar,GM}. \subsection{Cylinders with condition {\rm(FS)}} We consider a proper Hamiltonian $H\in C^\kappa({\mathbb A}^3)$ and fix a regular value ${\bf e}$ of $H$. \begin{lemma}\label{lem:goodcylfom} Assume that ${\mathscr C}\subset H^{-1}({\bf e})$ satisfies condition~{\rm(FS1)}. Then for any $\alpha>0$, there exists a $C^\kappa$ Hamiltonian ${\mathcal H}$ such that ${\mathscr C}$ is a good cylinder for ${\mathcal H}$, and which satisfies \begin{equation} \norm{{\mathcal H}-H}_\kappa<\alpha. \end{equation} \end{lemma} \begin{proof} By \cite{R} applied to the symplectic manifold ${\mathscr A}$, there exists an arbitrarily small perturbation $\widetilde H$ of $H$ in the $C^\kappa$ topology which admits an invariant annulus $\widetilde {\mathscr A}$ and an invariant cylinder $\widetilde{\mathscr C}$ which are arbitrarily $C^\kappa$-close to the initial ones, such that any periodic orbit of $\widetilde H$ is hyperbolic in $\widetilde{\mathscr C}$ or elliptic with nondegenerate torsion and KAM nonresonance conditions. As a consequence, one can choose $\widetilde H$ such that $\norm{H-\widetilde H}_{C^\kappa}<\alpha/2$, $\widetilde{\mathscr C}$ still satisfies condition {\rm(FS1)}\ and admits a section ${\Sigma}$ for which the Poincar\'e map $\widetilde\varphi$ is a special twist map. We can therefore assume that these properties are satisfied by the initial Hamiltonian $H$ and we get rid of the $\widetilde{\phantom{u}}$. Let $\psi$ be the homoclinic map of ${\mathscr C}$ for $H$. By \cite{M02}, there exists a $C^\kappa$ Hamiltonian diffeomorphism ${\sigma}$ of ${\Sigma}$, arbitrarily close to ${\rm Id}$, such that the return map $\varphi:{\Sigma}\righttoleftarrow$ and the map ${\sigma}^{-1}\circ\psi\circ{\sigma}:{\Sigma}\righttoleftarrow$ have no common essential invariant circle. Observe that $[{\sigma}^{-1}\circ\psi\circ{\sigma}\circ\psi^{-1}]$ is a Hamiltonian diffeomorphism of ${\Sigma}$, which tends to ${\rm Id}$ in the $C^\kappa$ topology when ${\sigma}$ tends to ${\rm Id}$ in the $C^\kappa$ topology. So, by Lemma~\ref{lem:pertfomhom}, if ${\sigma}$ is close enough to the identity, there exists a Hamiltonian ${\mathcal H}$ which coincides with $H$ in the neighborhood of ${\mathscr C}$, such that ${\mathscr C}$ still satisfies condition~{\rm(FS1)}, with new homoclinic map \begin{equation} \psi_{{\mathcal H}}=[{\sigma}^{-1}\circ\psi\circ{\sigma}\circ\psi^{-1}]\circ\psi={\sigma}^{-1}\circ\psi\circ{\sigma}. \end{equation} One can choose ${\sigma}$ so that $\norm{{\mathcal H}-H}_{\kappa}<\alpha$. By construction, the cylinder ${\mathscr C}$ is now a good cylinder at energy ${\bf e}$ for ${\mathcal H}$. \end{proof} \subsection{Cylinders with condition {\rm(PS)}} We consider a proper Hamiltonian $H\in C^\kappa({\mathbb A}^3)$ and fix a regular value ${\bf e}$ of $H$. We prove the following analog of Lemma~\ref{lem:goodcylfom} for condition {\rm(PS1)}.
\begin{lemma}\label{lem:goodcylpom} Assume that ${\mathscr C}\subset H^{-1}({\bf e})$ satisfies condition~{\rm(PS1)}. Then for any $\alpha>0$, there exists a $C^\kappa$ Hamiltonian ${\mathcal H}$ such that ${\mathscr C}$ is a good cylinder for ${\mathcal H}$, and which satisfies \begin{equation} \norm{{\mathcal H}-H}_\kappa<\alpha. \end{equation} \end{lemma} The proof follows from a sequence of intermediate lemmas. First, by the same argument as in the proof of the previous lemma, we can assume that ${\mathscr C}\subset H^{-1}({\bf e})$ admits a section ${\Sigma}$ whose attached return map $\varphi$ is a special twist map. We set \begin{equation} {\mathscr C}_*={\mathscr A}\cap H^{-1}({\bf e}),\qquad {\mathscr C}_*^\pm=j^\pm({\mathscr C}_*) \end{equation} so that ${\mathscr C}\subset{\mathscr C}_*$ and ${\mathscr C}^\pm\subset{\mathscr C}_*^\pm$. \paraga The first lemma is a direct consequence of the last two conditions of {\rm(PS1)}, of which we keep the notation. \begin{lemma}\label{lem:existint} For each ${\mathscr T}\in{\bf Tess}({\mathscr C})$, ${\mathscr T}^+\cap{\mathscr T}^-\neq\emptyset$. \end{lemma} \begin{proof} The Lipschitzian Lagrangian tori $\Psi^{\rm sec}({\mathscr T}^+)$ and $\Psi^{\rm sec}({\mathscr T}^-)$ are graphs over the null section in ${\mathscr O}^{\rm sec}$. They moreover have the same cohomology, since they are images of the same graph $\Psi^{\rm ann}({\mathscr T})$ by the exact-symplectic diffeomorphisms $$ \Psi^{\rm sec}\circ j^{\pm}\circ(\Psi^{\rm ann})^{-1}. $$ Hence their intersection is nonempty. \end{proof} \paraga Given a vector field $X$ on ${\mathscr C}_*$ and a $2$-dimensional submanifold $S\subset {\mathscr C}_*$, we define the tangency set $$ {\rm Tan}(X,S)=\big\{x\in S\mid X(x)\in T_xS\big\}. $$ We say that a point $x\in S\setminus {\rm Tan}(X,S)$ is regular. We define the folds and cusps of ${\rm Tan}(X,S)$ relatively to $X$ in the usual way (see \cite{GS}). \paraga We fix a compact subset $K\subset\Delta_{\bf e}$ which contains ${\mathscr C}^+\cup{\mathscr C}^-$ in its interior, and we denote by ${\bf H}(K)$ the space of pairs of nonautonomous Hamiltonians in $C^\kappa({\mathbb R}\times \Delta_{\bf e})$ with support in $[-2/3,-1/3]\times K$. Let ${\bf H}_0$ be a ball centered at~$0$ in the space ${\bf H}(K)$, such that for each $(\ell_+,\ell_-)\in {\bf H}_0$, the conclusion of Lemma~\ref{lem:perturbpom} holds. Given $(\ell_+,\ell_-)\in{\bf H}(K)$, we denote by ${\mathcal H}_{(\ell_+,\ell_-)}$ the associated Hamiltonian, as defined in Lemma~\ref{lem:perturbpom}. Recall that ${\mathcal H}_{(\ell_+,\ell_-)}$ coincides with the initial Hamiltonian in the neighborhood of ${\mathscr A}$. We can therefore assume that ${\bf H}_0$ is small enough so that ${\mathscr C}$ still satisfies condition {\rm(PS1)}\ for ${\mathcal H}_{(\ell_+,\ell_-)}$, relatively to the same section $\Delta$, with characteristic maps $$ j_{(\ell_+,\ell_-)}^\pm: {\mathscr A}\to {\mathscr A}^\pm(\ell_+,\ell_-)\subset\Delta. $$ In order not to overload the notation and to get rid of the problem of the boundaries and corners when considering intersections, we continue ${\mathscr C}$ to a slightly larger $3$-dimensional manifold {\em without boundary} contained in ${\mathscr C}_*$ and with compact closure, {\em that we still denote by ${\mathscr C}$}. We set $$ {\mathscr C}^\pm(\ell_+,\ell_-)=j_{(\ell_+,\ell_-)}^\pm({\mathscr C}),\qquad {\mathscr C}_*^\pm(\ell_+,\ell_-)=j_{(\ell_+,\ell_-)}^\pm({\mathscr C}_*). $$ \begin{lemma}\label{lem:genericprop} The following properties are satisfied.
\begin{enumerate} \item The set ${\bf H}_1$ of pairs of Hamiltonians $(\ell_+,\ell_-)\in{\bf H}_0$ for which the intersection $$ {\mathscr C}^+_*(\ell_+,\ell_-)\cap{\mathscr C}^-_*(\ell_+,\ell_-) $$ is transverse in $\Delta_{\bf e}$ at each point of ${\rm cl}({\mathscr C})$ is open and dense in ${\bf H}_0$. \item For $(\ell_+,\ell_-)\in{\bf H}_1$, the set $$ {\mathscr I}(\ell_+,\ell_-):={\mathscr C}^+(\ell_+,\ell_-)\cap{\mathscr C}^-(\ell_+,\ell_-) $$ is a two-dimensional submanifold of $\Delta_{\bf e}$ which contains the set ${\mathscr X}(\ell_+,\ell_-)$ of all homoclinic intersections $j^+(\ell_+,\ell_-)({\mathscr T})\cap j^-(\ell_+,\ell_-)({\mathscr T})$ for ${\mathscr T}\in{\bf Tess}({\mathscr C})$. \item We get rid of the indexation by $(\ell_+,\ell_-)$ when obvious. The set ${\bf H}_2$ of pairs of Hamiltonians in ${\bf H}_1$ for which the subsets $$ \Big\{x\in {\mathscr I}\mid X^-(x)\in T_xW^+({\mathscr C})\Big\},\qquad \Big\{x\in {\mathscr I}\mid X^+(x)\in T_xW^-({\mathscr C})\Big\}, $$ are 1-dimensional submanifolds of ${\mathscr I}_*$, is open and dense in ${\bf H}_0$. \item The set ${\bf H}_3$ of pairs of Hamiltonians in ${\bf H}_2$ for which the tangency sets $$ {\mathscr Z}^\pm={\rm Tan}\big(X_{\mathcal H},\Pi^\pm({\mathscr I}_*)\big) $$ are $1$-dimensional submanifolds of $\Pi^\pm({\mathscr I}_*)$, is open and dense in~${\bf H}_0$. \item The set ${\bf H}_4$ of pairs of Hamiltonians in ${\bf H}_3$ for which \begin{equation} {\mathscr X} \cap (\Pi^+)^{-1}({\mathscr Z}^+)\cap (\Pi^-)^{-1}({\mathscr Z}^-)=\emptyset \end{equation} is open and dense in ${\bf H}_0$. \item The set ${\bf H}_5$ of pairs of Hamiltonians in ${\bf H}_4$ for which each point of $\Pi^+\big({\mathscr X}\big)\cap{\mathscr Z}^+$ and $\Pi^-\big({\mathscr X}\big)\cap{\mathscr Z}^-$ is either regular or located on a fold, is open and dense in ${\bf H}_0$. \end{enumerate} \end{lemma} \begin{proof} The proofs use only very classical methods of singularity theory; we will give the main ideas and refer to \cite{LM} for more details. We first recall the following classical genericity result by Abraham. \vskip2mm\noindent {\bf Theorem (\cite{AR}).} {\em Fix $1\leq k<+\infty$. Let ${\mathcal L}$ be a $C^k$ and second-countable Banach manifold. Let $X$ and $Y$ be finite dimensional $C^k$ manifolds. Let $\chi : {\mathcal L}\to C^k(X,Y)$ be a map such that the associated evaluation $$ {\bf ev}_\chi: ({\mathcal L}\times X)\to Y,\qquad {\bf ev}_\chi(\ell,x)=\big(\chi(\ell)\big)(x) $$ is $C^k$ for the natural structures. Fix a submanifold $D$ of $Y$ such that $$ k>\dim X-{\rm codim\,} D $$ and assume that ${\bf ev}_\chi$ is transverse to $D$. Then the set of all $\ell\in{\mathcal L}$ such that $\chi(\ell)$ is transverse to $D$ is residual in ${\mathcal L}$, and open and dense when $D$ is compact.} \vskip2mm ${\bullet}$ To prove 1, we consider the following map \begin{equation} \left\vert \begin{array}{rrl} \chi:{\bf H}_0&\to &C^\kappa({\mathscr C}_*\times {\mathscr C}_*, \Delta_{\bf e}\times\Delta_{\bf e})\\[4pt] (\ell_+,\ell_-)\ \ &\mapsto& \Big[(x,y)\mapsto \big(j^+_{(\ell_+,\ell_-)}(x),j^-_{(\ell_+,\ell_-)}(y)\big)\Big]\\ \end{array} \right. \end{equation} whose evaluation map ${\bf ev}_\chi$ is $C^{\kappa-2}$. Introduce the diagonal \begin{equation} D=\big\{(z,z)\mid z\in \Delta_{\bf e}\big\}\subset \Delta_{\bf e}\times\Delta_{\bf e}.
\end{equation} Given $(\ell^0_+,\ell^0_-)\in{\bf H}_0$, a point $(x,y)\in {\mathscr C}_*\times {\mathscr C}_*$ such that ${\bf ev}_\chi\big((\ell^0_+,\ell^0_-),x,y\big)\in D$ and a vector $u\in T_y\Delta_{\bf e}$, one readily checks the existence of a path $s\mapsto (\ell_+,\ell_-)(s)$ such that \begin{equation} (\ell_+,\ell_-)(0)=(\ell^0_+,\ell^0_-),\qquad \frac{d}{ds}{\bf ev}_\chi(\ell(s),x,y)_{\vert s=0}=(0,u), \end{equation} which proves that ${\bf ev}_\chi$ is transverse to $D$ at $(\ell,x,y)$. For $\kappa$ large enough, the Abraham genericity theorem proves our first claim, while the second one is an immediate consequence of Lemma~\ref{lem:existint}. \vskip2mm ${\bullet}$ To prove 3, we endow ${\mathbb A}^3$ with a trivial Riemannian metric, and we denote by $\cdot$ the associated scalar product, which is therefore defined for pairs of vectors possibly tangent at different points. Given $x\in {\mathscr A}^\pm$, we denote by $N^\pm(x)$ the (suitably oriented) unit normal vectors at $x$ to the invariant manifolds $W^\pm({\mathscr A})$. We then introduce the map (where we get rid of the obvious indexation of $j^\pm$ by $(\ell_+,\ell_-)$): \begin{equation} \left\vert \begin{array}{rrl} \chi:{\bf H}_0&\to &C^\kappa({\mathscr C}_*\times {\mathscr C}_*, \Delta_{\bf e}\times\Delta_{\bf e}\times{\mathbb R})\\[4pt] (\ell_+,\ell_-)\ \ &\mapsto& \Big[(x,y)\mapsto \Big(j^+(x),j^-(y),X^-\big(j^-(x)\big)\cdot N^+\big(j^+(y)\big)\Big)\Big]\\ \end{array} \right. \end{equation} whose evaluation map ${\bf ev}_\chi$ is $C^{\kappa-2}$. One then proves by straightforward constructions that ${\bf ev}_\chi$ is transverse to the diagonal $$ D=\{(\xi,\xi,0)\mid \xi\in\Delta_{\bf e}\}. $$ As a consequence of the Abraham theorem, for an open dense subset of pairs $(\ell_+,\ell_-)$, $\widehat\chi:=\chi(\ell_+,\ell_-)$ is transverse to $D$. For such pairs, $ \widehat\chi^{-1}(D) $ is a codimension $5$ submanifold of ${\mathscr C}_*\times{\mathscr C}_*$, and so is $1$-dimensional. The projection of this manifold on the first factor yields the sets of points involved in 3, which proves our claim. \vskip2mm ${\bullet}$ The proof of 4 is analogous. We denote by $Y^\pm$ the direct image of the vector field $X_H$ restricted to ${\mathscr A}$ by the transition maps $j^\pm$, which is therefore a vector field on ${\mathscr A}^\pm$. We introduce the projection operator $P$ on the normal bundle of the intersection ${\mathscr I}$, that we see as a submanifold of the trivial bundle $T{\mathbb A}^3$. To deal with ${\mathscr Z}^-$, we introduce the following map \begin{equation} \left\vert \begin{array}{rrl} \chi:{\bf H}_0&\to &C^\kappa({\mathscr C}_*\times {\mathscr C}_*, \Delta_{\bf e}\times\Delta_{\bf e}\times{\mathbb R})\\[4pt] (\ell_+,\ell_-)\ \ &\mapsto& \Big[(x,y)\mapsto \Big(j^+(x),j^-(y),\norm{P_{j^+(x)}(Y^-(y))}^2\Big)\Big]\\ \end{array} \right. \end{equation} which yields the same result as above. \vskip2mm ${\bullet}$ To prove 5, we begin by proving that the set of pairs $(\ell_+,\ell_-)\in{\bf H}_4$ for which $$ {\mathscr X} \cap (\Pi^+)^{-1}({\mathscr Z}^+)\cap (\Pi^-)^{-1}({\mathscr Z}^-) $$ is finite is open and dense in ${\bf H}_0$.
For this we introduce the map $$ \chi:{\bf H}_0\to C^\kappa({\mathscr C}_*\times {\mathscr C}_*, \Delta_{\bf e}\times\Delta_{\bf e}\times{\mathbb R}\times{\mathbb R}) $$ such that $$ \big(\chi(\ell_+,\ell_-)\big)(x,y)= \Big( j^+(x), j^-(y), Y^-\big(j^-(x)\big)\cdot N^+\big(j^+(y)\big), Y^+\big(j^+(x)\big)\cdot N^-\big(j^-(y)\big) \Big) $$ which is also easily proved to be transverse to the diagonal $$ D=\big\{(\xi,\xi,0,0)\mid \xi\in\Delta_{\bf e}\big\}. $$ So the set of pairs for which $\widehat\chi^{-1}(D)$ is a $0$-dimensional manifold is open and dense, which proves our claim. It only remains to check that one can make an additional perturbation which disconnects ${\mathscr X}$ from this last set, which is easy. \vskip2mm ${\bullet}$ The proof of 6 is analogous to the previous one, since the set of cusp points is finite. \end{proof} \paraga The following lemma immediately yields Lemma~\ref{lem:goodcylpom}. Recall that ${\mathscr C}$ admits a section ${\Sigma}$ for which the return map $\varphi$ is a special twist map. \begin{lemma} Given $(\ell^0_+,\ell^0_-)\in{\bf H}_5$, with associated Hamiltonian ${\mathcal H}^0$, there exists ${\mathcal H}$ in $C^\kappa({\mathbb A}^3)$, arbitrarily close to ${\mathcal H}^0$ in the $C^\kappa$ topology and which coincides with ${\mathcal H}^0$ in the neighborhood of ${\mathscr A}$, for which: \begin{itemize} \item ${\mathscr C}$ satisfies condition {\rm(PS1)}\ and admits a homoclinic correspondence $\psi$; \item for any essential circle ${\Gamma}\in{\bf Ess\,}(\varphi)$ there exists a splitting arc based on ${\Gamma}$ for $(\varphi,\psi)$; \item if moreover ${\Gamma}$ is the lower boundary of a Birkhoff zone, there exists a simple arc based on ${\Gamma}$ for $(\varphi,\psi)$. \end{itemize} \end{lemma} \begin{proof} We will first prove that for any ${\Gamma}\in{\bf Ess\,}(\varphi^0)$ there exists a splitting arc based on ${\Gamma}$ for $(\varphi^0,\psi^0)$. Let ${\mathscr T}\subset{\mathscr C}$ be the torus generated by ${\Gamma}$ under the action of the Hamiltonian flow. The complement ${\mathscr C}\setminus{\mathscr T}$ admits two connected components, which we denote by ${\mathscr C}_{\bullet}$ and ${\mathscr C}^{\bullet}$ according to the orientation of ${\Sigma}$ induced by its parametrization. \vskip2mm ${\bullet}$ Set $I={\mathscr T}^+\cap {\mathscr T}^-$, so that $I$ is compact and contained in ${\mathscr X}$. By property 5) of Lemma~\ref{lem:genericprop}, for each point $\xi\in I$, one of the points $\xi^\pm=\Pi^\pm(\xi)$ is regular with respect to $X_H$, in the sense that $X_H$ is not tangent to $\Pi^\pm({\mathscr I})$ at $\xi^\pm$. We say that $\xi$ is positively (resp. negatively) regular when $\xi^+$ (resp. $\xi^-$) is regular. We define the point $x^+$ as the first intersection of the positive orbit of $\xi^+$ with ${\Sigma}$, and the point $x^-$ as the first intersection of the negative orbit of $\xi^-$ with ${\Sigma}$. We adopt the same convention for the transport to ${\Sigma}$ for all points in small enough neighborhoods of the two previous points. \vskip2mm ${\bullet}$ Assume for instance that $\xi$ is positively regular. Then there exists a neighborhood $V(\xi)$ of $\xi$ in ${\mathscr I}$ such that the previous transport by the Hamiltonian flow in ${\mathscr C}$ induces an embedding of the image $\Pi^\pm(V(\xi))$ in the section ${\Sigma}$, whose image $V^+$ is a neighborhood of the point $x^+$.
Hence one can moreover assume $V(\xi)$ small enough so that the complement $\widehat{\Gamma}=V^+\setminus{\Gamma}$ admits exactly two connected components, contained in ${\mathscr C}^{\bullet}$ and ${\mathscr C}_{\bullet}$. Inverse transport of $\widehat{\Gamma}$ by the Hamiltonian flow and then by $j^+$ yields a Lipschitzian curve in $V(\xi)$, which disconnects $V(\xi)$. We denote by $V^{\bullet}(\xi)$ and $V_{\bullet}(\xi)$ the two components of its complement, according to the intersections of their images with ${\mathscr C}_{\bullet}$ and ${\mathscr C}^{\bullet}$. We say that ${\mathscr C}^{\bullet}$ is the positive component and that ${\mathscr C}_{\bullet}$ is the negative component. \vskip2mm ${\bullet}$ Then, the point $\xi^-$ is either regular or located on a fold of $\Pi^-({\mathscr I})$ relatively to $X_H$. One can reduce $V(\xi)$ in order that the intersection $\Pi^-(V(\xi))\cap {\mathscr T}$ is connected, and therefore arc-connected. We will assume this condition satisfied in the following. This yields by inverse transport an arc-connected subset ${\sigma}=j^-({\sigma}^-)\subset V(\xi)$. We say that the intersection $I$ is {\em one-sided in $V(\xi)$} if either $ {\sigma}\cap V_{\bullet}(\xi)=\emptyset $ or $ {\sigma}\cap V^{\bullet}(\xi)=\emptyset $; we say that it is positive in the latter case and negative in the former. \vskip2mm ${\bullet}$ In the case where $\xi$ is negatively regular, we define the neighborhood $V(\xi)$ in a symmetric way, with the analogues for the notion of one-sided, positive and negative intersections in $V(\xi)$. When both $\xi^+$ and $\xi^-$ are regular, we arbitrarily choose one of them to perform the previous construction and define the notion of one-sided intersection. \vskip2mm ${\bullet}$ One then gets a covering of $I$ by a finite number of neighborhoods $V(\xi_1),\ldots,V(\xi_\ell)$ with the previous properties. Arguing by contradiction and assuming that each positively regular point $\xi_i$ yields a positive intersection and each negatively regular point yields a negative intersection, one proves that it would be possible to construct a pair of Hamiltonian diffeomorphisms $\phi^\pm$ of $\Delta_{\bf e}$, arbitrarily close to the identity, to which Lemma~\ref{lem:perturbpom} applies, and which yield a $C^\kappa$ perturbation ${\mathcal H}$ of $H$ which still admits ${\mathscr C}$ as a cylinder satisfying {\rm(PS1)}, and ${\mathscr T}$ as an invariant torus, but for which ${\mathscr T}^+\cap{\mathscr T}^-$ is empty, which is impossible. \vskip2mm ${\bullet}$ Therefore there exists in $I$ either a positively regular point with two-sided or negative intersection, or a negatively regular point with two-sided or positive intersection. Both cases yield a splitting arc based on ${\Gamma}$ for the (local) composition $P^+\circ\Pi^+\circ j^-\circ(P^-)^{-1}$. Now we know that if for instance $\xi$ is positively regular, then $\xi^-$ is either regular or located on a fold. This immediately yields a splitting domain for the homoclinic correspondence and concludes the proof. \end{proof} \paraga It only remains now to consider the case of lower boundaries of Birkhoff zones. \begin{lemma} If ${\Gamma}$ is the lower boundary of a Birkhoff zone, there exists a simple arc based on ${\Gamma}$. \end{lemma} \begin{proof} The main observation is that the set of essential circles which are lower boundaries of Birkhoff zones is countable. Therefore it is enough to prove the property for one such circle.
But this follows from the previous lemma by an immediate perturbation: one constructs a pair of Hamiltonian diffeomorphisms which makes the intersection point of the previous splitting arc a derivability point for the arc, with a non-vertical tangent, and a derivability point for the circle, with a tangent distinct from the tangent to the arc. \end{proof} \subsection{From admissible chains to good chains: proof of Theorem~\ref{thm:main2}}\label{ssec:proofmain2} Theorem~\ref{thm:main2} is now a direct consequence of the previous two sections. Consider an admissible chain $({\mathscr C}_k)_{1\leq k\leq k_*}$ with $\delta$-bounded homoclinic correspondence at energy ${\bf e}$ for $H$. Then by Lemma~\ref{lem:goodcylfom} used recursively there exists a Hamiltonian ${\mathcal H}^0$ arbitrarily $C^\kappa$-close to $H$ such that each cylinder satisfying {\rm(FS1)}\ is a good cylinder, and such that each gluing boundary still admits transverse homoclinic intersections. By Lemma~\ref{lem:goodcylpom} used recursively, one can then find ${\mathcal H}^1$ arbitrarily $C^\kappa$-close to ${\mathcal H}^0$ such that each cylinder satisfying {\rm(PS1)}\ is a good cylinder, without altering the heteroclinic condition {\rm(PS2)}\ or the gluing condition {\rm(G)}. The only remaining point to prove is that one can perturb the system in such a way that each bifurcation pair $({\mathscr C}_k,{\mathscr C}_{k+1})$ admits a bifurcation map in the sense of Section~\ref{sec:goodcylchains}, paragraph 6. We keep the assumptions and notation of \cite{Mar}, Part I, Section 2. We can first make a family of small $C^\kappa$ perturbations to the Hamiltonian $H_{\varepsilon}=h+{\varepsilon} f$, parametrized by ${\varepsilon}$, to make it $C^\infty$ in the neighborhood of ${\mathbb T}^3\times\{b\}$. We so construct a family of $C^\infty$ Hamiltonians $\widetilde H_{\varepsilon}$ such that $\norm{\widetilde H_{\varepsilon}-H_{\varepsilon}}_{C^p}\leq \alpha{\varepsilon}$, which we still write $$ \widetilde H_{\varepsilon}=\widetilde h_{\varepsilon}+{\varepsilon}\widetilde f_{\varepsilon}=h+{\varepsilon} \widehat f_{\varepsilon}, $$ so that $\widehat f_{\varepsilon}$ is $C^p$ $\alpha$-close to $f$. We then perturb $\widetilde f_{\varepsilon}$ in order that its bifurcation point admits a frequency vector $\omega$ which is $2$-Diophantine, that is, in adapted coordinates: $\omega=(\widehat\omega,0)$ with $\widehat\omega$ Diophantine. One then uses the ${\varepsilon}$-dependent normal form of $\widetilde H_{\varepsilon}$ introduced in \cite{Mar}, Appendix 3. Given two constants $d>0$ and $\delta<1$ with $1-\delta>d$, there is an ${\varepsilon}_0>0$ such that for $0<{\varepsilon}<{\varepsilon}_0$, there exists an analytic symplectic embedding $$ \Phi_{\varepsilon}: {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\to {\mathbb T}^3\times B(b_1,2{\varepsilon}^d) $$ such that $$ \widetilde N_{\varepsilon}({\theta},r)=\widetilde H_{\varepsilon}\circ\Phi_{\varepsilon}({\theta},r)=\widetilde h_{\varepsilon}(r)+ g_{\varepsilon}({\theta}_3,r)+R_{\varepsilon}({\theta},r), $$ where $g_{\varepsilon}$ and $R_{\varepsilon}$ are $C^p$ functions and \begin{equation} \norm{R_{\varepsilon}}_{C^p\big( {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\big)}\leq {\varepsilon}^3. \end{equation} Moreover, $\Phi_{\varepsilon}$ is close to the identity, in the sense that \begin{equation} \norm{\Phi_{\varepsilon}-{\rm Id}}_{C^p\big( {\mathbb T}^3\times B(b_1,{\varepsilon}^d)\big)}\leq {\varepsilon}^{1-\delta}.
\end{equation} The final perturbation of the initial Hamiltonian (localized in the neighborhood of the bifurcation locus) will be the truncation $$ H_{\varepsilon}^0=(\widetilde h_{\varepsilon}+g_{\varepsilon})\circ\Phi_{\varepsilon}^{-1}=h+{\varepsilon} f^0_{\varepsilon} $$ which is $\alpha$-close to $H_{\varepsilon}$ when ${\varepsilon}$ is small enough. One checks that $H_{\varepsilon}^0$ satisfies the condition of a good chain for the cylinders at their bifurcation point and that $f^0_{\varepsilon}$ is $\alpha$-close to $f$ if ${\varepsilon}$ is small enough. This concludes our proof. \subsection{Proof of Theorem~\ref{thm:main1}}\label{ssec:proofmain1} Fix ${\bf e}>\mathop{\rm Min\,}\limits h$ together with a finite family of arbitrary open sets $O_1,\ldots,O_m$ which intersect $h^{-1}({\bf e})$. By \cite{Mar}, for $\kappa\geq \kappa_0$ large enough, there exists a lower-semicontinuous function $$ {\boldsymbol\eps}_0:{\mathscr S}^\kappa\to{\mathbb R}^+ $$ with positive values on an open dense subset of ${\mathscr S}^\kappa$ such that for $f\in{\mathscr B}^\kappa({\boldsymbol\eps}_0)$ the system \begin{equation} H({\theta},r)=h(r)+ f({\theta},r) \end{equation} admits an admissible chain $({\mathscr C}_k)_{1\leq k\leq k_*}$, with $(\delta/2)$-bounded homoclinic correspondence, such that each open set ${\mathbb T}^3\times O_i$ contains the $\delta$-neighborhood in ${\mathbb A}^3$ of some essential torus of the chain. By Theorem~\ref{thm:main2} there exists ${\mathcal H}$, arbitrarily $C^\kappa$-close to $H$, such that $({\mathscr C}_k)_{1\leq k\leq k_*}$ is a good chain with $\delta$-bounded homoclinic correspondence. By \cite{GM} there exists an orbit for ${\mathcal H}$ which is $\delta$-admissible, and therefore which intersects each ${\mathbb T}^3\times O_i$. This last property being open, Theorem~\ref{thm:main1} is proved. \appendix \section{Normal hyperbolicity and symplectic geometry}\label{app:normhyp} \setcounter{paraga}{0} We refer to \cite{Berg10,BB13,C04,C08,HPS} for the references on normal hyperbolicity. Here we limit ourselves to a very simple class of systems which admit a normally hyperbolic invariant (non compact) submanifold, which serves us as a model from which all other definitions and properties will be deduced. \paraga The following statement is a simple version of the persistence theorem for normally hyperbolic manifolds well-adapted to our setting, whose germ can be found in \cite{B10} and whose proof can be deduced from the previous references. \vskip3mm \noindent {\bf The normally hyperbolic persistence theorem.} {\em Fix $m\geq 1$ and consider a vector field on ${\mathbb R}^{m+2}$ of the form ${\mathscr V}={\mathscr V}_0+{\mathscr F}$, with ${\mathscr V}_0$ and ${\mathscr F}$ of class $C^1$, where ${\mathscr V}_0$ reads \begin{equation}\label{eq:formV0} \dot x=X(x,u,s),\qquad \dot u=\lambda_u(x)\, u,\qquad \dot s =-\lambda_s(x)\,s, \end{equation} for $(x,u,s)\in {\mathbb R}^{m+2}$. Assume moreover that there exists $\lambda>0$ such that the inequalities \begin{equation}\label{eq:ineg} \lambda_u(x)\geq \lambda,\quad{\it and}\quad \lambda_s(x)\geq \lambda,\qquad x\in{\mathbb R}^m. \end{equation} hold. Fix a constant $\mu>0$. Then there exists a constant $\delta_*>0$ such that if \begin{equation}\label{eq:condder} \norm{\partial _xX}_{C^0({\mathbb R}^{m+2})}\leq \delta_*,\qquad \norm{{\mathscr F}}_{C^1({\mathbb R}^{m+2})}\leq \delta_*, \end{equation} the following assertions hold.
\begin{itemize} \item The maximal invariant set for ${\mathscr V}$ contained in $O=\big\{(x,u,s)\in{\mathbb R}^{m+2}\mid \norm{(u,s)}\leq \mu\big\}$ is an $m$-dimensional manifold ${\rm Ann}({\mathscr V})$ which admits the graph representation: $$ {\rm Ann}({\mathscr V})=\big\{\big(x,u=U(x), s=S(x)\big)\mid x\in{\mathbb R}^m\big\}, $$ where $U$ and $S$ are $C^1$ maps ${\mathbb R}^m\to{\mathbb R}$ such that \begin{equation}\label{eq:loc} \norm{(U,S)}_{C^0({\mathbb R}^m)}\leq \frac{2}{\lambda}\,\norm{{\mathscr F}}_{C^0}. \end{equation} \item The maximal positively invariant set for ${\mathscr V}$ contained in $O$ is an $(m+1)$-dimensional manifold $W^+\big({\rm Ann}({\mathscr V})\big)$ which admits the graph representation: $$ W^+\big({\rm Ann}({\mathscr V})\big)=\big\{\big(x,u=U^+(x,s), s\big)\mid x\in{\mathbb R}^m,\ s\in[-\mu,\mu]\big\}, $$ where $U^+$ is a $C^1$ map ${\mathbb R}^m\times[-1,1]\to{\mathbb R}$ such that \begin{equation}\label{eq:loc2} \norm{U^+}_{C^0({\mathbb R}^m)}\leq c_+\,\norm{{\mathscr F}}_{C^0} \end{equation} for a suitable $c_+>0$. Moreover, there exists $C>0$ such that for $w\in W^+\big({\rm Ann}({\mathscr V})\big)$, \begin{equation} {\rm dist\,}\big(\Phi^t(w),{\rm Ann}({\mathscr V})\big)\leq C\exp(-\lambda t),\qquad t\geq0. \end{equation} \item The maximal negatively invariant set for ${\mathscr V}$ contained in $O$ is an $(m+1)$-dimensional manifold $W^-\big({\rm Ann}({\mathscr V})\big)$ which admits the graph representation: $$ W^-\big({\rm Ann}({\mathscr V})\big)=\big\{\big(x,u, s=S^-(x,u)\big)\mid x\in{\mathbb R}^m,\ u\in[-\mu,\mu]\big\}, $$ where $S^-$ is a $C^1$ map ${\mathbb R}^m\times[-1,1]\to{\mathbb R}$ such that \begin{equation}\label{eq:loc3} \norm{S^-}_{C^0({\mathbb R}^m)}\leq c_-\,\norm{{\mathscr F}}_{C^0} \end{equation} for a suitable $c_->0$. Moreover, there exists $C>0$ such that for $w\in W^-\big({\rm Ann}({\mathscr V})\big)$, \begin{equation} {\rm dist\,}\big(\Phi^t(w),{\rm Ann}({\mathscr V})\big)\leq C\exp(\lambda t),\qquad t\leq0. \end{equation} \item The manifolds $W^\pm\big({\rm Ann}({\mathscr V})\big)$ admit $C^0$ foliations $\big(W^\pm(x)\big)_{x\in {\rm Ann}({\mathscr V})}$ such that for $w\in W^\pm(x)$ \begin{equation} {\rm dist\,}\big(\Phi^t(w),\Phi^t(x)\big)\leq C\exp(\mp\lambda t),\qquad \pm t\geq0. \end{equation} \item If moreover ${\mathscr V}_0$ and ${\mathscr F}$ are of class $C^p$, $p\geq1$, and if, in addition to the previous conditions, the domination inequality \begin{equation}\label{eq:addcond} p\,\norm{\partial_x X}_{C^0({\mathbb R}^m)}\leq \lambda \end{equation} holds, then the functions $U$, $S$, $U^+$, $S^-$ are of class $C^p$ and \begin{equation} \norm{(U,S)}_{C^p({\mathbb R}^m)}\leq C_p \norm{{\mathscr F}}_{C^p({\mathbb R}^{m+2})} \end{equation} for a suitable constant $C_p>0$. \item Assume moreover that the vector fields ${\mathscr V}_0,{\mathscr V}$ are $R$-periodic in $x$, where $R$ is a lattice in ${\mathbb R}^m$. Then their flows and the manifolds ${\rm Ann}({\mathscr V})$ and $W^\pm\big({\rm Ann}({\mathscr V})\big)$ pass to the quotient $({\mathbb R}^m/R)\times {\mathbb R}^2$. Assume that the time-one map of ${\mathscr V}_0$ on ${\mathbb R}^m/R\times\{0\}$ is $C^0$ bounded by a constant $M$. Then, with the previous assumptions, the constant $C_p$ depends only on $p$, $\lambda$ and $M$.
\end{itemize} } The last statement will be applied in the case where $m=2\ell$ and $R=c{\mathbb Z}^\ell\times\{0\}$, where $c$ is a positive constant, so that the quotient ${\mathbb R}^{2\ell}/R$ is diffeomorphic to the annulus ${\mathbb A}^\ell$. \paraga The following result describes the symplectic geometry of our system in the case where ${\mathscr V}$ is a Hamiltonian vector field. We keep the notation of the previous theorem. \vskip3mm \noindent {\bf The symplectic normally hyperbolic persistence theorem.} {\it Endow ${\mathbb R}^{2m+2}$ with a symplectic form $\Omega$ for which there exists a constant $C>0$ such that for all $z\in O$ \begin{equation}\label{eq:assumpsymp} \abs{\Omega(v,w)}\leq C\norm{v}\norm{w},\qquad \forall v,w \in T_z M. \end{equation} Let ${\mathscr H}_0$ be a $C^2$ Hamiltonian on ${\mathbb R}^{2m+2}$ whose Hamiltonian vector field ${\mathscr V}_0$ satisfies (\ref{eq:formV0}) with conditions (\ref{eq:ineg}), and consider a Hamiltonian ${\mathscr H}={\mathscr H}_0+{\mathscr P}$. Then there exists a constant $\delta_*>0$ such that if \begin{equation}\label{eq:condder2} \norm{\partial _xX}_{C^0({\mathbb R}^{m+2})}\leq \delta_*,\qquad \norm{{\mathscr P}}_{C^2({\mathbb R}^{m+2})}\leq \delta_*, \end{equation} the following properties hold. \begin{itemize} \item The manifold ${\rm Ann}({\mathscr V})$ is $\Omega$-symplectic. \item The manifolds $W^\pm\big({\rm Ann}({\mathscr V})\big)$ are coisotropic and the center-stable and center-unstab\-le foliations $\big(W^\pm(x)\big)_{x\in {\rm Ann}({\mathscr V})}$ coincide with the characteristic foliations of the manifolds $W^\pm\big({\rm Ann}({\mathscr V})\big)$. \item If ${\mathscr H}$ is $C^{p+1}$ and condition (\ref{eq:addcond}) is satisfied, then $W^\pm\big({\rm Ann}({\mathscr V})\big)$ are of class $C^p$ and the foliations $\big(W^\pm(x)\big)_{x\in {\rm Ann}({\mathscr V})}$ are of class $C^{p-1}$. \item There exists a neighborhood ${\mathscr O}$ of ${\rm Ann}({\mathscr V})$ and a straightening symplectic diffeomorphism $\Psi:{\mathscr O}\to O$ such that \begin{equation} \begin{array}{lll} \Psi\big({\rm Ann}({\mathscr V})\big)={\mathbb A}^{\ell}\times\{(0,0)\};\\[4pt] \Psi\big(W^-\big({\rm Ann}({\mathscr V})\big)\big)\subset{\mathbb A}^{\ell}\times\big({\mathbb R}\times\{0\}\big),\qquad \Psi\big(W^+\big({\rm Ann}({\mathscr V})\big)\big)\subset{\mathbb A}^{\ell}\times\big(\{0\}\times{\mathbb R}\big);\\[4pt] \Psi\big(W^-(x)\big)\subset\{\Psi(x)\}\times\big({\mathbb R}\times\{0\}\big),\qquad \Psi\big(W^+(x)\big)\subset\{\Psi(x)\}\times\big(\{0\}\times{\mathbb R}\big).\\ \end{array} \end{equation} \end{itemize} } See \cite{Mar} for a proof. \end{document}
arXiv
September 2021, 14(9): 3267-3284. doi: 10.3934/dcdss.2020335

Chaotic oscillations of linear hyperbolic PDE with variable coefficients and implicit boundary conditions

Qigui Yang 1 and Qiaomin Xiang 2

1 Department of Mathematics, South China University of Technology, Guangzhou, 510640, China
2 Department of Mathematics and Big Data, Foshan University, Foshan, 528000, China
* Corresponding author: Qigui Yang

Received: March 2019. Revised: November 2019. Published: September 2021. Early access: April 2020.

In this paper, the chaotic oscillations of the initial-boundary value problem of linear hyperbolic partial differential equation (PDE) with variable coefficients are investigated, where both ends of boundary conditions are nonlinear implicit boundary conditions (IBCs). It separately considers that IBCs can be expressed by general nonlinear boundary conditions (NBCs) and cannot be expressed by explicit boundary conditions (EBCs). Finally, numerical examples verify the effectiveness of theoretical prediction.

Keywords: Chaotic oscillation, hyperbolic PDE, variable coefficient, implicit boundary condition, nonlinear boundary condition.

Mathematics Subject Classification: Primary: 34C28, 35L70; Secondary: 35L05.

Citation: Qigui Yang, Qiaomin Xiang. Chaotic oscillations of linear hyperbolic PDE with variable coefficients and implicit boundary conditions. Discrete & Continuous Dynamical Systems - S, 2021, 14 (9) : 3267-3284. doi: 10.3934/dcdss.2020335
Figure 1. The spatiotemporal profiles of system (23) with $ (\alpha_1,\beta_1) = (0.1,1) $, $ (\alpha_2,\beta_2) = (0.5,1) $, $ x\in [0,1] $ and $ t\in [60,64] $: (a) $ w_{x}(x,t) $; (b) $ w_{t}(x,t) $.
Figure 2. The spatiotemporal profiles of system (23) with $ \gamma_1 = 1.1\pi $ and $ \gamma_2 = 0.4\pi $, $ x\in [0,1] $ and $ t\in [60,64] $: (a) $ w_{x}(x,t) $; (b) $ w_{t}(x,t) $.
CommonCrawl
Quantum correlations in electron microscopy

Chen Mechel,1 Yaniv Kurman,1 Aviv Karnieli,2 Nicholas Rivera,3 Ady Arie,2 and Ido Kaminer1,*

1Solid State Institute, Technion, Israel Institute of Technology, 32000 Haifa, Israel
2School of Electrical Engineering, Tel Aviv University, Ramat Aviv 69978, Tel Aviv, Israel
3Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139, USA
*Corresponding author: [email protected]

Chen Mechel, Yaniv Kurman, Aviv Karnieli, Nicholas Rivera, Ady Arie, and Ido Kaminer, "Quantum correlations in electron microscopy," Optica 8, 70-78 (2021)

Electron microscopes provide a powerful platform for exploring physical phenomena with nanoscale resolution, based on the interaction of free electrons with the excitations of a sample such as phonons, excitons, bulk plasmons, and surface plasmons. The interaction usually results in the absorption or emission of such excitations, which can be detected directly through cathodoluminescence or indirectly through electron energy loss spectroscopy (EELS). However, as we show here, the underlying interaction of a free electron and an arbitrary optical excitation goes beyond what was predicted or measured so far, due to the interplay of entanglement and decoherence of the electron-excitation system. The entanglement of electrons and optical excitations can provide new analytical tools in electron microscopy. For example, it can enable measurements of optical coherence, plasmonic lifetimes, and electronic length scales in matter (such as the Bohr radius of an exciton). We show how these can be achieved using common configurations in electron diffraction and EELS, revealing significant changes in the electron's coherence, as well as in other quantum information theoretic measures such as purity. Specifically, we find that the purity after interaction with nanoparticles can only take discrete values, versus a continuum of values for interactions with surface plasmons. We quantify the post-interaction density matrix of the combined electron-excitation system by developing a framework based on macroscopic quantum electrodynamics. The framework enables a quantitative account of decoherence due to excitations in any general polarizable material (optical environment). This framework is thus applicable beyond electron microscopy. Particularly in electron microscopy, our work enriches analytical capabilities and informs the design of quantum information experiments with free electrons, allowing control over their quantum states and their decoherence by the optical environment.

Electron microscopy provides a powerful platform to study phenomena in condensed matter physics, optics, plasmonics, and many aspects of nanomaterials, all with the subnanometric precision of a free-electron probe. The most common techniques used for studying optical and material excitations are electron energy loss spectroscopy (EELS) [1–3] and cathodoluminescence [4,5].
More recent techniques also utilize the quantum wave nature of free electrons (e.g., by phase-front shaping [6–11] or by dressing electrons using a strong laser field [12–21]). Surprisingly, although these capabilities exploit the quantum wave properties of free electrons, other quantum features, such as the entanglement between the free electron and the optical excitations [22,23], have yet to be exploited. One of the main footprints of entanglement of the electron with other quantum degrees of freedom in the environment is quantum decoherence [24–26]. Any observer measuring exclusively the electron usually loses knowledge of any coherence between the electron and the environment, since the environment was left unobserved ("traced out"). A useful platform for observing the electron decoherence is the famous double-slit experiment [27,28] that continues to inspire new ideas in the field [29–31]. The first double-slit experiment that presented an interference pattern from a single electron was done in electron microscopes using electron holography [32–34]. Potapov et al. introduced a series of extensions of such double-slit experiments [35,36] in which free electrons interact inelastically with a (bulk) plasmonic excitation that serves as an effective "which-path" detector. Each electron's quantum coherence (transverse to its propagation) was measured using biprism-based holography. These experiments, and closely related theory [37], found that the spatial coherence of the inelastically scattered electron is highly correlated to the optical properties of the sample. In particular, the spatial coherence length is closely related to the propagation length of the bulk plasmon that is emitted [38–40]. From the discussion above, it should be clear that analyzing the quantum properties of free electrons has the potential to reveal more information about the sample's excitations than conventional measures in electron microscopy. The remaining question is how to analyze the electron to unveil this additional information. In this paper, we formulate a general theory of the entanglement and decoherence dynamics of electrons subject to interactions with an arbitrary optical environment. Using quantum information measures, we demonstrate how the formation of entanglement with the sample's optical excitations is correlated to spatial decoherence of the post-selected electron. We base our analytical calculations on macroscopic quantum electrodynamics (MQED) [41,42], which enables a fully quantum analysis of the interactions of electrons with electromagnetic fields in any optical environment, including absorbing and polarizable materials. This theory enables us to link the post-interaction electron density matrix with the quantum fluctuations of the electromagnetic field of the medium. Furthermore, we propose means of measuring and controlling the electron density matrix and specific properties such as decoherence by using standard electron microscopy techniques. We show the richness of coherence properties that the electron can inherit during its interaction with common nanostructures such as nanoparticles and thin metallic interfaces, comparing our results with experiments such as those of Ref. [35]. Our findings could help develop new analytical capabilities in electron microscopy. 
In particular, measuring the quantum coherence may enable probing the optical environment's quantum fluctuations through the electron's decoherence and help answer fundamental questions on the nature of light–matter interaction in the quantum regime. Our approach provides the first quantum optical analysis of free-electron interaction with arbitrary electromagnetic environments. As opposed to bound-electron systems (such as atoms and quantum dots), examining arbitrary optical excitations using the continuous energy levels of the electron enables exploration of the wide range of phenomena in electron microscopy applications. In its classical limit, our formalism recovers the well-known EELS rates [2]. However, the formalism goes beyond the classical limit to unveil the underlying quantum correlations embedded in the free-electron probe. As a result, we strengthen the connection between the fields of electron microscopy and quantum information—a connection that has begun to emerge in recent years [22,23,43,44]. The motivation for our approach comes from previous works in electron microscopy and other free-electron systems that quantified electron decoherence [35–37,45,46]. The power of the MQED-based field quantization that we apply here is its generality: it enables one to link the properties of any optical environment directly to the electron image that is measured after the interaction. To demonstrate intuitively the fundamental role of optical excitations in the quantum decoherence of free electrons, we illustrate in Fig. 1 a gedankenexperiment inspired by Refs. [35–37,47,48]. Consider a free electron impinging onto a thin metallic film and interacting with it [Figs. 1(a) and 1(b)]. During the interaction, the electron creates an optical excitation in the film, e.g., a surface plasmon polariton (SPP), and as a result, it loses one quantum of energy [49]. Then the electron passes through a double-slit apparatus (realized, for example, by an electron biprism [39,40,45,46,50]). A resulting interference pattern is obtained on a screen, from which the electron transverse coherence can be quantitatively measured via the fringe visibility. Fig. 1. Illustration of an electron interacting with an optical excitation as commonly realized in electron microscopes. Examples of electrons interacting with (a) localized and (b) delocalized optical excitations, which leave distinct footprints on the post-interaction electron. Such are manifested in the resulting interference patterns after the post-interaction electron propagates through two slits. The transverse coherence of the electron is measured on a screen at the bottom via the interference visibility, and it is strongly affected by the size of the excitations. In (a) and (b), the excitations are spatially localized/delocalized, respectively, compared to the distance between the slits, thereby preventing/preserving a visible interference pattern. (c) A transmission electron microscope, in which an electron can impinge on a thin metallic film to interact with optical excitations (plasmons). The microscope enables post-selected measurement of the electron's position or momentum distribution by energy filtering, e.g., via the electron energy loss spectrometer (EELS) at the bottom of the figure. In Figs. 1(a) and 1(b) we present the two possible extreme cases for the electron's transverse coherence. In Fig. 1(a), the excitation is very localized, for example, as a result of high propagation losses. 
There, the post-interaction electron remains coherent only over a small area, so we expect no visible interference pattern on the screen. On the other hand, a large spatial coherence associated with a delocalized optical excitation manifests as a large spatial coherence of the post-interaction electron, and a visible interference pattern is observed [Fig. 1(b)]. From a quantum mechanical point of view, regardless of the spatial extent of the optical mode, the interacting electron is maximally entangled with the environment in both cases (see Supplement 1 Section S.4). However, the spatial width of the excitation determines the coherence length of the resulting electron. Thus, the usually unexplored properties of the optical environment, such as the spatial correlation functions of the electromagnetic field, can be extracted directly from measuring the electron's spatial coherence. A. Theory for Analyzing Quantum Properties of a Post-Interaction Electron In the spirit of the gedankenexperiment presented above, we consider the general case of an electron impinging on a sample of some structured optical medium, representing a general optical environment. The electron emits a quantum of an optical excitation hosted by this environment, leading to an entanglement of the electron and the optical excitation. Then the electron travels through an energy filter that post-selects electrons of a given energy before measuring its position or momentum [Fig. 1(c)]. The description of the interaction itself begins from the minimal coupling Hamiltonian of quantum electrodynamics (QED), acting as a perturbation on a free, relativistic electron, ignoring spin effects [51]. In this case, the Hamiltonian of the whole system is $H = {H_{\text{e}}} + {H_{\text{em}}} + {H_{\text{int}}}$, where the electron's Hamiltonian is $H_{\text{e}} = {{\bf p}^2}/2{m}\gamma$, the field's Hamiltonian (omitting the zero-point energy) is ${H_{{\text{em}}}} = \smallint {\text{d}}{\textbf r}\int {\text{d}}\omega \hbar \omega\; {\textbf{f}^\dagger}\!({\textbf{r},\omega}) \cdot \textbf{f}({\textbf{r},\omega})$, and ${H_{{\text{int}}}} = \frac{e}{{m\gamma}}\textbf{A}(\textbf{r}) \cdot \textbf{p}$ describes the interaction term. In these equations, $m$ denotes the electron mass, $\gamma$ the Lorentz factor, $\textbf{p}$ the momentum operator of the paraxial electron, $\textbf{A}$ the quantized vector potential of the electromagnetic field, and $\textbf{f}({\textbf{r},\omega})$ (${\textbf{f}^\dagger}({\textbf{r},\omega})$) the annihilation (creation) operator for a dipolar excitation in a medium. We write the electromagnetic vector potential $\textbf{A}(\textbf{r})$ in the second quantization in terms of these dipolar excitations as (1)$$\begin{split}{\bf A}( {\bf r} )&=\sqrt{\frac{\hbar }{\pi {{\epsilon }_{0}}}}\frac{1}{{{c}^{2}}}\int \omega {\text{d}}\omega \int {\text{d}}{\bf r}^{\prime}\sqrt{\text{Im }\epsilon ( {\bf r}^{\prime},\omega )}\\&\quad\times\overset{=}{\bf G}\,( {\bf r},{\bf r}^{\prime},\omega ) \hat{{{\bf f}}}( {\bf r}^{\prime},\omega )+\text{h}.\text{c}.,\end{split}$$ where ${\epsilon _0}$ is the vacuum permittivity, and ${c}$ is the speed of light. The quantized field of Eq. (1) can be seen as an expression of the field resulting from a quantum dipole excitation at position $\textbf{r}^\prime$ and frequency $\omega$, oriented along a direction $j = x,y,z$.
This excitation contributes to the vector potential in $\textbf{r}$ using the dyadic Green's function $\overset{=}{\boldsymbol G}({\textbf{r},\textbf{r}^\prime,\omega})$ [41] as a propagator. The joint state of the entire quantum system is obtained by combining the electron's state with the state of the electromagnetic field. Therefore, without loss of generality, the initial joint state of the system is a coherent electron wave function ${| {{\psi _0}}\rangle _{\text{e}}}$ and no field excitations ${| 0\rangle _{{\text{exc}}}}$. The more general case of an incoherent electron is discussed in Supplement 1 Section S.1.2. Assuming a weak interaction, the resulting entangled state is a coherent superposition, described as ${| {{\psi _0}}\rangle _{\text{e}}}{| 0\rangle _{{\text{exc}}}}\mathop{ \to} \limits^{{H_{{\text{int}}}}} {| {{\psi _0}}\rangle _{\text{e}}}{| 0 \rangle_{{\text{exc}}}} + \mathop \sum \limits_\sigma \mathop \sum \limits_\kappa {c_{\sigma \kappa}}{| {{\psi _\sigma}}\rangle _{\text{e}}}{| {{1_\kappa}}\rangle _{{\text{exc}}}}$. In this summation, σ denotes the electron's degrees of freedom (e.g., momentum, spin, energy, position) and $\kappa$ the excitation's degrees of freedom. In usual EELS experiments, only the post-interaction electron is measured, whereas the sample (and its excitations) are left unmeasured [52]. From the perspective of quantum mechanics, this partial measurement corresponds to a partial trace over the sample degrees of freedom resulting in the electron's reduced density matrix (2)$${\rho _{\text{e}}} = {{\text{Tr}}_{{\text{exc}}}}\!\left\{{{\rho _{\text{joint}}}} \right\} = \mathop \sum \limits_{\sigma ,\sigma ^\prime} {\rho _{\text{e}}}({\sigma ,\sigma ^\prime} ){\left| {{\psi _\sigma}} \right.\rangle_{\text{e}}}\langle{\left. {{\psi _{\sigma ^\prime}}} \right|_{\text{e}}}$$ with the coefficients in the $\sigma ,\sigma ^\prime$ basis determined by ${\rho _{\text{e}}}({\sigma ,\sigma ^\prime}) = \mathop \sum \limits_\kappa {c_{\sigma \kappa}}c_{\sigma ^\prime \kappa}^*$. Therefore, the post-interaction electron is in a mixed state, i.e., an incoherent mixture of pure states, each corresponding to a different scattering outcome. This picture is equivalent to describing the electron as partially decohering as a result of the interaction. To further develop Eq. (2), we use several approximations that are common in free-electron experiments and in electron microscopy. First, we assume the paraxial approximation for the motion of the electron beam, since typical angular spreads of such beams are on the order of a few milliradians or below. This approximation remains valid after the interaction as well, as guaranteed by the approximation that the electron recoils negligibly after interacting with an optical excitation ("no-recoil approximation") [2]. Apart from the paraxial approximation, we also assume that the interaction occurs over a short distance relative to the post-interaction propagation distance to the detector, e.g., a few-micrometer evanescent field compared to a tens-of-centimeters distance to the detector. Such distances are also much longer than the typical (longitudinal) coherence length of each electron.
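The structure of Eq. (2) can be made concrete with a short numerical sketch before we specialize it below. The following Python snippet is an illustration added here and is not part of the paper's analysis: it builds a toy joint electron-excitation state with Gaussian amplitudes, performs the partial trace over the excitation modes, and evaluates the purity and a representative off-diagonal element of the reduced density matrix; the basis sizes and the correlation width are arbitrary placeholder values.

```python
import numpy as np

# Toy illustration of the partial trace in Eq. (2): build a joint state
# |psi> = sum_{sigma,kappa} c_{sigma,kappa} |psi_sigma>|1_kappa>, trace out the
# excitation, and evaluate the purity Tr(rho_e^2) and one coherence term.
# Basis sizes and the correlation width are placeholder values.
n_sigma, n_kappa = 32, 32                  # toy electron / excitation bases
sigma_idx = np.arange(n_sigma)[:, None]
kappa_idx = np.arange(n_kappa)[None, :]
width = 4.0                                # assumed correlation width
c = np.exp(-((sigma_idx - kappa_idx) ** 2) / (2 * width**2)).astype(complex)
c /= np.linalg.norm(c)                     # normalize the joint state

# rho_e(sigma, sigma') = sum_kappa c_{sigma,kappa} * conj(c_{sigma',kappa})
rho_e = c @ c.conj().T

purity = np.real(np.trace(rho_e @ rho_e))  # 1/n_sigma <= Tr(rho_e^2) <= 1
coherence = abs(rho_e[16, 19])             # a sample off-diagonal element
print(f"purity = {purity:.3f}, |rho_e(16,19)| = {coherence:.3e}")
```

In this toy model, shrinking the correlation width tags each electron state with a nearly distinct excitation mode, so both the purity and the off-diagonal coherences drop, loosely mirroring the which-path picture of Fig. 1(a); widening it has the opposite effect, as in Fig. 1(b).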
Under the assumptions above, the electron's reduced density matrix after the interaction is given by (see Supplement 1 Section S.1) (3)$$\begin{split}{\rho _{\text{e}}}({{\textbf{r}_T},\textbf{r}^{\prime}_T;\Delta E} )& = \frac{{4}\alpha}{{\hbar cS}}\int {\text{d}}z{\text{d}}z^\prime {e^{i\frac{\Delta{E}}{{\hbar {v_0}}}({z - z^\prime} )}}\\[-3pt]&\quad\times {\text{Im}}{G_{\textit{zz}}}\left({{\textbf{r}_T},z;\textbf{r}_T^\prime ,z^\prime ;\frac{\Delta{E}}{\hbar}} \right).\end{split}$$ Here $z$ is the coordinate along the electron propagation axis, $\Delta E$ is the electron energy loss, $\alpha$ is the fine-structure constant, ${v_0}$ is the initial electron speed, $S$ is the electron beam area, and $\hbar$ is the reduced Planck constant. The effect of the optical excitations is fully captured by the $zz$ component of the imaginary part of the dyadic Green's function. Importantly, the two transverse spatial arguments, ${\textbf{r}_T}$ and $\textbf{r}_T^\prime$, enable the determination of the transverse quantum coherence of the post-interaction electron via the off-diagonal elements of ${\rho _{\text{e}}}$. We note that the diagonal elements of the reduced density matrix, i.e., taking ${\textbf{r}_T} = \textbf{r}^{\prime}_T$ in Eq. (3), retrieve the commonly used EELS probabilities for a point charge (found in [2]). In Eq. (3) we describe the density matrix in units of probability per unit energy per area, though it can be defined as probability per energy, which results in multiplying Eq. (3) by the electron transverse spot size. Equation (3) can be generalized to the case where the incoming electron is described by a general density matrix (see Supplement 1 Section S.1.2 for the full derivation). In this case, the electron final density matrix in the spatial coordinate basis is simply a product of Eq. (3) and its initial density matrix, which can be measured independently. This corresponds to a convolution of the momentum response of the sample with the electron's initial density matrix in the momentum basis. To demonstrate the implications of our new formalism and especially Eq. (3), we apply it to two types of archetypical optical excitations: propagating SPPs in thin metallic films and nanoparticles supporting localized optical modes. These two examples enable comparison of the effect of the excitation spectrum (e.g., whether it is continuous or discrete) on the post-interaction electron. These cases result in very different consequences in terms of the entanglement created during the interaction and, accordingly, the purity of the electron and its spatial coherence. In each case, we derive the resulting electron density matrix, discuss its properties, and show what the electron measurement can reveal about the optical excitation itself. We then present the connection between the two cases and highlight their differences and similarities. Fig. 2. Free-electron entanglement with surface plasmon polaritons (SPPs) to probe the SPP dispersion, lifetime, and coherence. (a) Schematic of the interaction. The electron moves through a thin insulator-metal-insulator interface hosting SPPs. The electron excites a plasmon and thus changes its energy and momentum by an amount equal to that of the plasmon, giving rise to electron-plasmon entanglement. Since each optical excitation is well described by its momentum, the density matrix is diagonal in that basis, and the complete density matrix of the electron at each energy can be obtained from a single energy-filtered diffraction image.
(b) Reconstruction of the SPP dispersion and lifetime from EELS diffraction images for each energy loss $\hbar \omega$. Insets: two EELS diffraction images. The radius and width of each ring are used for the reconstruction of the dispersion curve. Each image shows the diagonal terms of the electron density matrix in the momentum basis for each energy, $\langle{\textbf{k}_T},\hbar \omega | {{\rho _{\text{e}}}} |{\textbf{k}_T},\hbar \omega\rangle$. The dispersion plot is constructed from the EELS diffraction image, calculated for each energy, as experimental data would be processed. (c) Electron transverse coherence as a function of the relative distance. The electron coherence grows with the propagation length of the plasmon, which decreases with energy. The calculations are based on Eq. (5), with simulated data from (b). (d) Same axes as in (c), but for bulk plasmons in aluminum (blue) and compared to the experimental results by [35] (red), giving a good fit. See Supplement 1 Section S.2.2 for fitting details and parameters. B. First Case: Delocalized Excitations—Surface Plasmon Polaritons The first case we present is the interaction of a free electron with SPPs as shown in Fig. 2(a). In this case, we show how the spatial coherence of the post-interaction electron is a convenient and reliable measurable quantity to analyze the optical excitation. In our example, the SPPs propagate along a thin metallic layer in a planar geometry (an insulator-metal-insulator, or IMI, structure). The system is translation invariant in the plane transverse to the electron's motion, so it is convenient to work in the transverse momentum basis for both the electron and the electromagnetic modes. Using this representation, we write the $zz$ component of the electromagnetic Green's function of the sample [53]: (4)$$\begin{split}{G_{\textit{zz}}}({{\textbf{r}_T},z;\textbf{r}_T^\prime ,z^\prime ;\omega} ) &= \frac{i}{{8{\pi ^2}}}\frac{1}{{{{({\omega /c} )}^2}}}\iint\limits_{- \infty}^\infty {r^{\text{p}}}({{\textbf{k}_T},\omega} )\\[-3pt]&\quad\times{e^{i\left[{{\textbf{k}_T} \cdot ({{\textbf{r}_T} - \textbf{r}_T^\prime} ) + \frac{\omega}{c}({z + z^\prime} )} \right]}}\textbf{k}_T^2{{\text{d}}^2}{\textbf{k}_T},\end{split}$$ where ${r^{\text{p}}}({{\textbf{k}_T},\omega}) = \frac{{({{\epsilon _r}(\omega) - 1})i - \frac{{\sigma (\omega){k_T}}}{{\omega {\epsilon _0}}}}}{{({{\epsilon _r}(\omega) + 1})i - \frac{{\sigma (\omega){k_T}}}{{\omega {\epsilon _0}}}}}$ is the momentum-dependent Fresnel reflection coefficient for $p$-polarized light, and ${\epsilon _r}(\omega)$ and $\sigma (\omega)$ are the frequency-dependent relative permittivity and conductivity of the metal (for the figures, we assume they follow the Drude model as discussed in Supplement 1 Section S.2.3). Substituting Eq. (4) into Eq. 
(3), the resulting electron density matrix becomes (5)$$\begin{split}{\rho _{\text{e}}}({{\textbf{k}_T},\textbf{k}_T^{ ^\prime};\Delta E} ) &= \frac{{\delta ({{\textbf{k}_T} - \textbf{k}_T^{^\prime}} )}}{S \hbar}\frac{{16 \pi ^2\alpha c}}{{{{({{\Delta}E/\hbar} )}^2}}}\frac{{{\text{Im}}\{{{r^{\text{p}}}({{\textbf{k}_T},\Delta E/\hbar} )} \}}}{{\sqrt {k_T^2 - \frac{{{{({\Delta E/\hbar} )}^2}}}{{{c^2}}}}}}\\[-3pt]&\quad\times\frac{{k_T^2}}{{k_T^2 - \frac{{{{({\Delta E/\hbar} )}^2}}}{{{c^2}}} + \frac{{{{({\Delta E/\hbar} )}^2}}}{{{v^2}}}}}.\\[-1.2pc]\end{split}$$ Due to the in-plane translational symmetry in the system, the density matrix is diagonal in the transverse momentum basis [manifested through $\delta ({{\textbf{k}_T} - \textbf{k}^{\prime}_T})$]. The diagonal elements depend on the post-selected energy loss ${\Delta}E$ and on the plasmonic spectral density of states at the specific energy and momenta [through ${\text{Im}}\{{{r^{\text{p}}}({{\textbf{k}_T},\Delta E/\hbar})} \}$]. Therefore, for each plasmonic energy, ${\rho_{\text{e}}}$ is real-valued, positive, and depends on the transverse momentum transfer (${\textbf{k}_T}$) during the interaction. We note that, in this case, the azimuthal symmetry forces ${\rho_{\text{e}}}$ to depend only on the magnitude of ${\textbf{k}_T}$; if the sample were tilted, there would be an angular dependence. In Fig. 2(b) we show the density matrix as a function of the magnitude of the momentum transfer ${k_T}$ [and energy loss $\Delta E$ (vertical axis)], normalized for each plasmon energy. Due to energy and momentum conservation in the spontaneous emission process leading to EELS, the diagonal components of the density matrix are proportional to the energy- and momentum-dependent excitation probability. Its peaks (in ${{k}_T}$-$\omega$ space) reveal the (complex) SPP dispersion relation. In experiments, the post-interaction electron goes through energy loss filtering, which leads to a ring-like diffraction image of the momentum-space electron state as illustrated in Fig. 2(b). The width of the ring is set by the momentum-space width of the plasmon at that frequency, which is determined by the inverse propagation length. From this information, one can directly calculate the lifetime of the plasmon for each energy (limited by the electron zero-loss peak and by the resolution of the energy filter). Note that this approach is still limited by the energy resolution, which is affected by the initial electron energy spread (zero-loss peak) and the EELS resolution. This resolution determines the momentum resolution of the electron image and thus the resolution of both the dispersion curve and its width (see [54] for an elaborated discussion). Note also that in terms of quantum measurement, we first post-select the electrons by energy and then measure their momentum distribution at the detector, which still obeys uncertainty relations. Apart from the complex dispersion relation, the density matrix of Eq. (5) contains information regarding the loss of spatial coherence of the excitation, as the coherence length is connected to the propagation length. To quantify the SPP spatial coherence through the electron spatial coherence, we use a similar notation to that in optical coherence theory [55].
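As a rough numerical sketch of how Fig. 2(b) can be produced from Eq. (5), the following Python snippet evaluates the diagonal elements on a ${k_T}$ grid for one post-selected energy loss and reads off the ring position and width. The Drude-like sheet conductivity, the surrounding permittivity ${\epsilon _r} = 1$, and all numerical values are illustrative assumptions (the actual Drude parameters are given in Supplement 1 Section S.2.3), so the numbers below indicate only the procedure, not the paper's results.

import numpy as np

hbar, c, eps0, e = 1.054571817e-34, 2.99792458e8, 8.8541878128e-12, 1.602176634e-19

# Illustrative assumptions (not the paper's parameters): a Drude-like sheet conductivity for the thin film,
# surrounding relative permittivity eps_r = 1, and an electron speed of 0.7c.
omega_p = 8.0 * e / hbar   # plasma-frequency scale, ~8 eV
gamma = 0.1 * e / hbar     # damping rate, ~0.1 eV
d_film = 10e-9             # film thickness entering the sheet conductivity
v = 0.7 * c                # electron speed

def r_p(kT, omega, eps_r=1.0):
    # Reflection coefficient of Eq. (4), with an assumed Drude sheet conductivity sigma(omega).
    sigma = 1j * eps0 * omega_p**2 * d_film / (omega + 1j * gamma)
    s = sigma * kT / (omega * eps0)
    return ((eps_r - 1) * 1j - s) / ((eps_r + 1) * 1j - s)

def rho_diag(kT, dE):
    # Diagonal of Eq. (5), up to constant prefactors, for kT > omega/c (the evanescent region).
    w = dE / hbar
    q2 = kT**2 - (w / c)**2
    return np.imag(r_p(kT, w)) / np.sqrt(q2) * kT**2 / (q2 + (w / v)**2)

dE = 6.0 * e                                         # post-selected energy loss (illustrative)
kT = np.linspace(1.01 * dE / (hbar * c), 5e8, 4000)  # transverse momentum grid [1/m]
rho = rho_diag(kT, dE)

k_ring = kT[np.argmax(rho)]                          # ring radius: one point of the dispersion curve
above = np.where(rho > 0.5 * rho.max())[0]
dk = kT[above[-1]] - kT[above[0]]                    # ring FWHM: inverse propagation length, up to a factor of order unity
print(f"k_ring = {k_ring:.3e} 1/m, FWHM = {dk:.3e} 1/m, L_p ~ {1.0/dk:.2e} m")

The same sampled ${\rho _{\text{e}}}({k_T})$ values are what enter the normalized Fourier transform that defines the electron spatial coherence introduced next.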
Due to the translation invariance of the system, the energy-filtered spatial coherence $\gamma ({{\textbf{r}_T},\textbf{r}^{\prime}_T,\Delta E})$ depends solely on the relative distance $\Delta {\textbf{r}_T}$ and can be written as (see Supplement 1 Section S.2.1) (6)$$\gamma ({\Delta{\textbf{r}_T},\Delta E} ) = \frac{{\int {\rho _{\text{e}}}({{\textbf{k}_T}} ){e^{i{\textbf{k}_T} \cdot \Delta {\textbf{r}_T}}}{\text{d}}{\textbf{k}_T}}}{{\int {\rho _{\text{e}}}({{\textbf{k}_T}} ){\text{d}}{\textbf{k}_T}}},$$ where ${\rho _{\text{e}}}({{\textbf{k}_T}})$ denotes the diagonal elements of ${\rho _{\text{e}}}$. Figure 2(c) presents $\gamma$ in our structure for different conductivity values of the film and thus different propagation lengths ${L_p}$. Comparing it to Fig. 2(b), we find that larger losses correspond to shorter propagation lengths and higher spatial confinement of the optical modes. Therefore, the electron coherence area is greatly affected by the plasmonic losses in the material. This observation provides interesting possibilities for controlling the whole electron density matrix, for example, through controlling the sample temperature or the charge carrier concentration via a gate voltage. Additionally, this coherence function can be investigated for different propagation distances after the interaction; its propagation can be calculated using wave optics [55] as was done in electron microscopy [37,56]. It is important to note that although the electron beam area increases and refocuses in a microscope, the ratio between the coherence width and the beam width is known to be conserved [57], as the electron itself preserves its quantum coherence under any unitary operation. Equation (6) indicates a way to measure $\gamma ({\Delta{\textbf{r}_T},\Delta E})$ from the same energy-filtered diffraction images that are used for the reconstruction of the plasmonic dispersion. This measurement can be done by taking this diffraction image and calculating its two-dimensional Fourier transform, normalized by the total measured power. Note that the final electron coherence is also affected by the initial electron coherence; see Supplement 1 Sections S.1.2 and S.2.2. We compare our theory to the experimental results from [35] for the case of bulk plasmons in Al samples and find a good match [see Fig. 2(d)]. Our findings explain the experimental results in terms of a general first-principles theory, in contrast to the previous structure-dependent approaches [37]. Since we only look at the electromagnetic response of the sample, an analogous calculation can be used to quantify the coherence size of any optical excitation, for example, the Bohr radius of an exciton [58]. Let us compare our approach (as performed, e.g., in [54,59]) with holography techniques that can also extract $\gamma ({\Delta {\textbf{r}_T},\Delta E})$. We note that both are limited by the electron spatial coherence. However, there is a fundamental difference. Electron holography experiments in which one path undergoes inelastic scattering also suffer from temporal decoherence [35,36,50] (consequently, the duration of measurement alters the visibility since the different parts of the electron do not have the same energy). The scheme at the focus of our work does not involve the same temporal decoherence despite undergoing the same inelastic process.
The reason is that the entire electron goes through an energy filter, and thus we only deal with a single electron energy (consequently, the duration of measurement does not alter the visibility) [38]. When holography is performed between two parts of an electron that have both undergone the same inelastic scattering process [60], there is similarly no temporal decoherence. We further note that the initial electron coherence affects the abovementioned methods in different ways. In holography, the fringe visibility decreases with reduced coherence, which could be compensated for by increasing the electron flux. In our approach, the initial electron coherence is convolved with the diffraction image. The preferred approach depends on a variety of factors, such as electron flux limitations (due to the sample or microscope), signal-to-noise ratio, and the momentum spread caused by the electron's initial coherence compared to the plasmon mean momentum and spread. C. Second Case: Localized Excitations in Nanoparticles We now consider an electron interacting with a nanoparticle as illustrated in Fig. 3. In contrast to the previous case, the nanoparticles have a discrete number of electromagnetic modes that can be excited. As a result, we can analyze the system using concepts from quantum information in systems with discrete degrees of freedom, such as purity, which helps quantify the entanglement in the interaction. The nanoparticles can host optical excitations in the form of localized surface plasmons, whispering-gallery modes, or general cavity modes [61–63]. For simplicity, we consider the case in which the nanoparticles are far smaller than the electron beam area or the optical wavelength, so we can regard them as point dipoles with a tensorial polarizability $\overset{=}{\boldsymbol \alpha}(\omega)$ (a 3-by-3 matrix). The polarizability tensor represents the nanoparticle's affinity to support a dipole moment in different directions and is affected by the nanoparticle geometry. Figure 3(a) depicts some examples of nanoparticles with different polarizability tensors and the corresponding electron energy-filtered images. The special cases presented in Fig. 3(a) show how one can analyze an arbitrary nanoparticle through the imaging of the energy-filtered electron. Fig. 3. Free-electron interaction with localized dipolar modes in nanoparticles. (a) Different shapes and orientations of nanoparticles, each resulting in a different energy-filtered electron image for the interacting electron. The different dipole orientations induce very different electron images, which can be used to image the near field. All the images are shown in absolute value. (b) Electron coherence [normalized absolute value of ${\rho _{\text{e}}}({{\textbf{r}_T},\textbf{r}^{\prime}_T})$] for a fixed $\textbf{r}^{\prime}_T$ (blue diamond). The three rightmost panels portray the coherence for the same $x{-}z$-disk nanoparticle, with a different reference point $\textbf{r}^{\prime}_T$. The coherence has a strong angular dependence and a very slow radial decay. Thus, the electron is mostly spatially coherent within the transverse interaction area, yet different directions are incoherent with each other. As a reference, when exciting only a single mode, the electron is perfectly coherent. The images of the electron are derived through the diagonal terms of the density matrix, obtained using Eq. (3) and the Green's function of a single dipolar particle.
The optical excitations can be oriented along the direction of the electron velocity ($z$) or in the transverse plane ($x{-}y$), giving the electron three different electromagnetic potentials to interact with. As a result, the density matrix of the electron has longitudinal and transverse contributions. In particular, the density matrix in real space is given as (7)$$\begin{array}{*{20}{c}}{{\rho _{\text{e}}}({{\textbf{r}_{\text{T}}},\textbf{r}^{\prime}_{\text{T}},{\Delta}E} ) = \frac{{4}\alpha}{{c \hbar S}}{{({\frac{{\Delta E}}{\hbar}} )}^2}{\mu _0}\boldsymbol{\psi} ({{\textbf{r}_{\text{T}}}} ) \cdot {\text{Im}}\!\left\{{\overset{\boldsymbol =}{\boldsymbol \alpha}\big({\frac{{\Delta E}}{\hbar}} \big)} \right\} \cdot {{\boldsymbol \psi}^{ *}}({\textbf{r}_{\text{T}}^\prime} ),}\end{array}$$ where ${\mu_0}$ is the vacuum permeability, $\overset{=}{\boldsymbol \alpha}$ is the polarizability tensor, and $\boldsymbol{\psi}( {{\textbf{r}}_{T}} )={{\psi }_{T}}( {{r}_{T}} ){{{{\hat{\textbf{r}}}}\,}_{T}}+{{\psi }_{z}}( {{r}_{T}} ){\hat{\textbf{z}}}$. The function ${\psi _z}$ describes the electron wave function after interacting with a purely longitudinal mode (a "$z$-rod"), and ${\psi _T}$ describes the electron wave function after interacting with a purely transverse mode (an "$x{-}y$-disk"), as shown in Fig. 3(a). These functions can be calculated by considering the overlap between each optical mode and the electron's initial wave function (see Supplement 1 Section S.3.1). From Eq. (7) we find that the electron density matrix is composed of an (incoherent) sum of the three different modes, weighted by the polarizability tensor of the nanoparticle. Thus, each term in the sum is associated with a different principal axis of the polarizability tensor $\overset{\boldsymbol =}{\boldsymbol \alpha}(\omega)$. The contribution of each electromagnetic mode to the electron is summed incoherently in its density matrix. Thus, there is no interference between the different modes of excitation. Therefore, the spatial coherence of the electron depends on the relative excitation strength of the different dipole orientations, as seen in Fig. 3(b). As the electron excites more modes (e.g., for a nanodisk or nanosphere, which support two and three modes, respectively), its spatial coherence is reduced, as can be seen from the variation of the spatial coherence in Fig. 3(b). Yet, since the number of available dipolar modes is discrete and limited to three dipole orientations, the electron remains mostly spatially coherent around the excitation point. Nevertheless, in all cases the coherence has a strong angular dependence that stems from the lack of coherence between different dipole orientations. However, when the electron interacts with a single electromagnetic mode per frequency, as in the nanorod case, it remains completely coherent at all angles. Let us now explain these results in terms of the entanglement created during the interaction with the nanoparticle. We quantify the degree of entanglement using the purity measure (8)$${\text {purity} = \frac{{{\text{Tr}}({\rho _{\text{e}}^2} )}}{{{{{\text{Tr}}}^2}({{\rho _{\text{e}}}} )}}.}$$ The purity measures the eigenvalue spread of a density matrix [64]; it equals unity for a pure electron state, and it attains a minimal value of $1/d$ for a maximally mixed state, where $d$ is the dimension of the one-particle Fock space of the excitation.
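Equation (8) can be illustrated with a minimal numerical sketch. We assume here, consistently with the no-interference statement above, that the three dipole-mode contributions to ${\rho _{\text{e}}}$ are mutually orthogonal, so that each relative excitation weight (proportional to the corresponding eigenvalue of ${\text{Im}}\,\overset{=}{\boldsymbol \alpha}$ times the mode-overlap strength) contributes one eigenvalue of the reduced density matrix; the weights themselves are arbitrary illustrative numbers.

import numpy as np

def electron_purity(weights):
    # Purity of Eq. (8) for a reduced density matrix built, as in Eq. (7), as an incoherent sum of
    # dipole-mode contributions. Sketch assumption: the three mode wave functions are mutually
    # orthogonal, so each nonzero weight adds one eigenvalue to the normalized density matrix.
    p = np.asarray(weights, dtype=float)
    eigs = p / p.sum()             # eigenvalues of rho_e after normalization, Tr(rho_e) = 1
    return float(np.sum(eigs**2))  # Tr(rho^2) / Tr(rho)^2 reduces to the sum of squared eigenvalues

print(electron_purity([1.0, 0.0, 0.0]))   # nanorod: a single excitable dipole  -> 1.0
print(electron_purity([1.0, 1.0, 0.0]))   # disk: two equally excited dipoles   -> 0.5
print(electron_purity([1.0, 1.0, 1.0]))   # three equally excited dipoles       -> 1/3 (maximally mixed, d = 3)
print(electron_purity([1.0, 0.3, 0.0]))   # unequal excitation: an intermediate value between 0.5 and 1

The limiting values 1, 1/2, and 1/3 obtained for one, two, and three equally excited dipoles match the minimal purity $1/d$ quoted above.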
In the case of interactions with nanoparticles, the minimal purity value is 1/3 ($d = 3$), while in the SPP case there is a continuum of modes (due to translational invariance), so $d$ goes to infinity and the purity is zero (see Supplement 1 Section S.4). The electron's purity is affected by its entanglement with the excitations: increasing their entanglement increases the quantum information contained in the joint state relative to that of each subsystem and decreases the quantum information contained in the electron state alone, which increases the number of nonzero eigenvalues of ${\rho _{\text{e}}}$ and hence decreases its purity. Figure 4 confirms that the electron's purity is strongly related to the number of modes the nanoparticle can host. This corresponds to the number of nonzero eigenvalues of the polarizability tensor $\stackrel{\boldsymbol =}{\boldsymbol \alpha}$ (see Supplement 1 Section S.3.2), which determines the number of eigenvalues of the electron density matrix. Consequently, the purity equals 1 if the electron interacts with a single electromagnetic mode, and it can take on rational values (e.g., 1/2 and 1/3) when each eigenmode is excited with equal probability. Therefore, the degree of entanglement between the electron and excitations can be adjusted by changing the relative excitation strength of the different optical modes. This can be done by rotating the nanoparticle [as in Fig. 4(b)], changing its aspect ratio (thus affecting the relative dipole moments along different directions), changing its permittivity, or simply by post-selecting different energies (which correspond to different modes altogether). This result leads to a fundamental conclusion: the degree of entanglement, and the respective electron decoherence, is primarily affected by the number of available optical excitations and their relative strength. Fig. 4. Free-electron entanglement with excitations of nanoparticles of various forms/shapes: accessing discrete dimensionality of quantum information. (a) The electron purity for diagonal polarizability tensors representing various nanoparticle shapes. In the left half, the nanoparticles have their dipole moment in the $x{-}y$ plane. According to the number of eigenmodes, we find that for rods the purity is exactly 1, while for a circular disk the purity is exactly 1/2. In the right half, the shapes range from a $z$ rod to an $x{-}y$ disk while keeping an $x{-}y$ (azimuthal) symmetry. The purity is found at its minimal value, exactly 1/3, when the contributions from all the dipole orientations ($x,y,z$) are equal (three equal eigenvalues), corresponding to an oval-shaped nanoparticle. This figure demonstrates our control over the electron's purity by shaping the nanoparticles' classical dimensions, which directly controls the entanglement created in the interaction, in both discrete and continuous manners. The energy dependence stems from the fact that both longitudinal and transverse excitations are present, and each one has a different energy dependence. (b) Purity for one- and two-dimensional shapes in $y{-}z$, for rotations about the $x$ axis. For 1D shapes (nanorods), there is only one excitable mode, so the entanglement vanishes and the purity is always 1. For the 2D shapes, the purity depends on the relative excitation strength of the $z$ and the $y$ dipoles. This depends both on the rotation angle and, importantly, on the excitation energy, since the $z$-dipole excitation has a different energy dependence than the $x{-}y$ dipoles.
In general, direct observation of the results from Fig. 4 could be realized via interference experiments [34], quantum state reconstruction [16], tomographic imaging [61], or cathodoluminescence coincidence [65]. However, using our theoretical analysis, one can connect classical-like observables (such as cross-section images) to quantum phenomena (such as entanglement), which enables inference of the entanglement in the interaction from standard electron intensity images. The entanglement, quantified by the electron's purity, results from the fact that the electron experiences a different electromagnetic potential depending on the excited optical mode: $x$, $y$, or $z$ dipoles. This dependence correlates the quantum wave function of the electron with the state of the nanoparticle, with a different weight for each "option" (e.g., orientation), depending on the excitation probability of each option and on the tensorial polarizability of the nanoparticle. Using this knowledge of the electron density matrix, the electron purity can be calculated directly from a single measurement. In particular, this is done by fitting the polarizability tensor to the energy-filtered electron image (see Supplement 1 Section S.3.3 for further details). Additionally, the electron-optical excitation entanglement can be tested in a direct manner, using experimental setups that implement coincidence measurements of EELS with light emission (cathodoluminescence) from plasmonic excitations, analogous to the coincidence experiment of [65] that correlated EELS and ${x}$-ray emission. These tests could be made to fully characterize the joint wave function of the electron and the plasmons or to perform experiments that test Bell's inequality [66]. 3. DISCUSSION AND SUMMARY Let us now compare the two cases discussed above: the nanoparticle and the SPP excitation. In the nanoparticle case, the number of excited modes is always discrete and is related to the geometrical properties of the nanostructure. The discrete number of modes implies that the electron's reduced density matrix has a discrete set of eigenvalues, which results in a finite electron purity and slowly decaying spatial coherence, in contrast with what might be expected from the gedankenexperiment of Fig. 1. The reason is that Fig. 1 relates to an SPP structure, where the electron's spatial coherence corresponds to the SPP's propagation length, and the eigenvalues of the electron density matrix enable finding the dispersion relation. However, the continuum of modes results in a vanishing purity (completely mixed electron), which is discussed more extensively in Supplement 1 Section S.4. Coherently sculpting the electron wave function has been pursued in different experimental setups. Implementations include using elastic interactions in thin membranes [6–11] or lasers [12–21,67], generally treating decoherence as a noise mechanism. Our work now paves the way towards sculpting the electron density matrix and its quantum decoherence using inelastic scattering. The control of the coupling strengths to each electromagnetic mode, as well as the mode's confinement and lifetime, enables full control over the electron coherence and purity. An example of such coherent sculpting can come with nanoparticles hosting multipolar modes [68,69] or multiple nanoparticles. Then the optical environment consists of more electromagnetic modes, which yields a higher dimensionality and thus a lower electron purity.
Having several nanoparticles enables one to geometrically control the electron's purity by changing the distance between the nanoparticles relative to the spatial extent of the electromagnetic modes. When the nanoparticles are optically separated, the electron's quantum state is divided into independent Hilbert spaces. The interacting electron becomes entangled with this combined Hilbert space. When there is some spatial overlap between the different nanoparticle modes, the set of electromagnetic modes needs to be diagonalized in a way that usually removes degeneracies. Through the post-selected electron energy loss, one can control (post-select) the electron's interaction with one of these modes. The above argument holds for electron interaction with any set of optical excitations. For example, a quantum dot exciton strongly coupled to a plasmonic resonator (forming together two polaritonic modes) may also be probed by this technique, allowing for new probes of strong light–matter coupling. Furthermore, our MQED-based formalism can be applied to more complex scenarios such as electron interaction with optical excitations that are pre-excited by an ultrashort laser pulse—a growing subfield of electron microscopy [12–23,67,70]. For such stimulated processes, it is possible to generalize the theories in [22,23] and in [2] using MQED [41]. There, due to the potentially high number of photon quanta in the bosonic excitation, first-order perturbation theory is often insufficient, necessitating higher-order calculations and even a nonperturbative analysis [71]. Additional interesting effects could be observed in cases of laser excitation of quantum emitters in the sample (e.g., in quantum dots, excitonic 2D materials, and nitrogen-vacancy centers), which can carry phase information. Then, the free-electron density matrix can capture information about the coherence of the initial excitation of the quantum emitter. The coherent phase information of the quantum emitter could potentially be extracted using specially shaped energy-modulated free electrons [70]. Additionally, our formalism could be extended to applications where time dynamics play a more central role in the interaction. Such applications may include optical environments with time-dependent properties (e.g., photonic time crystals [72–74]) and quantum emitters with time-dependent population dynamics (e.g., driven by ultrashort laser pulses), yielding different types of stimulated interactions. In the case of a time-modulated permittivity, our formalism should be able to capture the electron's temporal coherence in addition to its spatial coherence, which creates possibilities for entanglement in temporal degrees of freedom. The required mathematical extension is to calculate the electron density matrix's off-diagonal elements in frequency space, in addition to position space. This should be an interesting extension of our formalism, which we leave for future work. To conclude, our work demonstrates how quantum information techniques can be exploited to expand the already rich analytical capabilities of electron microscopy. We presented a fully analytical and general description of free-electron interactions with quantized electromagnetic excitations, yielding the post-interaction electron density matrix. Using MQED, our analysis can be applied in any optical environment.
Our theoretical predictions are in good agreement with existing experiments (e.g., based on holography), suggesting that the current phenomenological descriptions can be replaced by our first-principles theory. We showed how the optical fluctuations in the environment leave their footprints on the electron decoherence, and we analyzed the available quantum information in the electron using its spatial coherence and purity. Furthermore, we proposed means of measuring the coherence and lifetime of the optical excitations using simple electron measurements as well as means of controlling the electron coherence and purity (by shaping the electromagnetic environment). These capabilities can lead to many novel applications of existing electron microscopes. European Union's Horizon 2020 research and innovation programme (851780-ERC-NanoEP); Israel Science Foundation (1415/17). We thank Prof. Uri Sivan, Prof. Moti Segev and Prof. Jo Verbeeck for stimulating discussions. A.K. is supported by the Adams Fellowship Program of the Israel Academy of Science and Humanities. N. R. is supported by the Department of Energy Fellowship DE-FG02-97ER25308 and by a Dean's Fellowship by the MIT School of Science. See Supplement 1 for supporting content. 1. R. F. Egerton, "Electron energy-loss spectroscopy in the TEM," Rep. Prog. Phys. 72, 016502 (2008). [CrossRef] 2. F. G. De Abajo, "Optical excitations in electron microscopy," Rev. Mod. Phys. 82, 209–275 (2010). [CrossRef] 3. R. F. Egerton, Electron Energy-Loss Spectroscopy in the Electron Microscope (Springer, 2011). 4. L. Ozawa, Cathodoluminescence: Theory and Applications (Wiley-VCH, 1990). 5. A. Polman, M. Kociak, and F. J. G. de Abajo, "Electron-beam spectroscopy for nanophotonics," Nat. Mater. 18, 1158–1171 (2019). [CrossRef] 6. R. Danev and K. Nagayama, "Transmission electron microscopy with Zernike phase plate," Ultramicroscopy 88, 243–252 (2001). [CrossRef] 7. J. Verbeeck, H. Tian, and P. Schattschneider, "Production and application of electron vortex beams," Nature 467, 301–304 (2010). [CrossRef] 8. M. Uchida and A. Tonomura, "Generation of electron beams carrying orbital angular momentum," Nature 464, 737–739 (2010). [CrossRef] 9. B. J. McMorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, "Electron vortex beams with high quanta of orbital angular momentum," Science 331, 192–195 (2011). [CrossRef] 10. N. Voloch-Bloch, Y. Lereah, Y. Lilach, A. Gover, and A. Arie, "Generation of electron Airy beams," Nature 494, 331–335 (2013). [CrossRef] 11. R. Shiloh, Y. Lereah, Y. Lilach, and A. Arie, "Sculpturing the electron wave function using nanoscale phase masks," Ultramicroscopy 144, 26–31 (2014). [CrossRef] 12. B. Barwick, D. J. Flannigan, and A. H. Zewail, "Photon-induced near-field electron microscopy," Nature 462, 902–906 (2009). [CrossRef] 13. F. J. Garcia de Abajo, A. Asenjo-Garcia, and M. Kociak, "Multiphoton absorption and emission by interaction of swift electrons with evanescent light fields," Nano Lett. 10, 1859–1863 (2010). [CrossRef] 14. S. T. Park, M. Lin, and A. H. Zewail, "Photon-induced near-field electron microscopy (PINEM): theoretical and experimental," New J. Phys. 12, 123028 (2010). [CrossRef] 15. A. Feist, K. E. Echternkamp, J. Schauss, S. V. Yalunin, S. Schäfer, and C. Ropers, "Quantum coherent optical phase modulation in an ultrafast transmission electron microscope," Nature 521, 200–203 (2015). [CrossRef] 16. K. E. Priebe, C. Rathje, S. V. Yalunin, T. Hohage, A. Feist, S. Schäfer, and C. 
Ropers, "Attosecond electron pulse trains and quantum state reconstruction in ultrafast transmission electron microscopy," Nat. Photonics 11, 793–797 (2017). [CrossRef] 17. E. Pomarico, I. Madan, G. Berruto, G. M. Vanacore, K. Wang, I. Kaminer, and F. Carbone, "meV resolution in laser-assisted energy-filtered transmission electron microscopy," ACS Photon. 5, 759–764 (2017). [CrossRef] 18. Y. Morimoto and P. Baum, "Diffraction and microscopy with attosecond electron pulse trains," Nat. Phys. 14, 252–256 (2018). [CrossRef] 19. G. M. Vanacore, G. Berruto, I. Maden, E. Pomarico, P. Biagioni, R. J. Lamb, D. McGrouther, O. Reinhardt, I. Kaminer, B. Barwick, H. Larocque, V. Grillo, E. Karimi, F. J. Garcia de Abajo, and F. Carbone, "Ultrafast generation and control of an electron vortex beam via chiral plasmonic near fields," Nat. Mater. 18, 573–579 (2019). [CrossRef] 20. K. Wang, R. Dahan, M. Shentcis, Y. Kauffmann, A. Ben Hayun, O. Reinhardt, S. Tsesses, and I. Kaminer, "Coherent interaction between free electrons and a photonic cavity," Nature 582, 50–54 (2020). [CrossRef] 21. O. Kfir, H. Lourenco-Martins, G. Storeck, M. Sivis, T. R. Harvey, T. J. Kippenberg, A. Feist, and C. Ropers, "Controlling free electrons with optical whispering-gallery modes," Nature 582, 46–49 (2020). [CrossRef] 22. V. Di Giulio, M. Kociak, and F. J. G. de Abajo, "Probing quantum optical excitations with fast electrons," Optica 6, 1524–1534 (2019). [CrossRef] 23. O. Kfir, "Entanglements of electrons and cavity photons in the strong-coupling regime," Phys. Rev. Lett. 123, 103602 (2019). [CrossRef] 24. A. Stern, Y. Aharonov, and Y. Imry, "Phase uncertainty and loss of interference: a general picture," Phys. Rev. A 41, 3436–3448 (1990). [CrossRef] 25. Y. Imry, Introduction to Mesoscopic Physics (No. 2) (Oxford University Press on Demand, 2002). 26. K. Hornberger, S. Gerlich, P. Haslinger, S. Nimmrichter, and M. Arndt, "Quantum interference of clusters and molecules," Rev. Mod. Phys. 84, 157–173 (2012). [CrossRef] 27. R. P. Feynman, The Feynman Lectures on Physics (Narosa, 1965), Vol. 3. 28. E. Buks, R. Schuster, M. Heiblum, D. Mahalu, and V. Umansky, "Dephasing in electron interference by a 'which-path' detector," Nature 391, 871–874 (1998). [CrossRef] 29. F. S. Yasin, K. Harada, D. Shindo, H. Shinada, B. J. McMorran, and T. Tanigaki, "A tunable path-separated electron interferometer with an amplitude-dividing grating beamsplitter," Appl. Phys. Lett. 113, 233102 (2018). [CrossRef] 30. K. Harada, T. Akashi, K. Niitsu, K. Shimada, Y. A. Ono, D. Shindo, H. Shinada, and S. Mori, "Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope," Sci. Rep. 8, 1–10 (2018). [CrossRef] 31. A. H. Tavabi, C. B. Boothroyd, E. Yücelen, S. Frabboni, G. C. Gazzadi, R. E. Dunin-Borkowski, and G. Pozzi, "The Young-Feynman controlled double-slit electron interference experiment," Sci. Rep. 9, 1–8. (2019). [CrossRef] 32. P. G. Merli, G. Missiroli, and G. Pozzi, "On the statistical aspect of electron interference phenomena," Am. J. Phys. 44, 306–307 (1976). [CrossRef] 33. A. Tonomura, "Electron holography," in Electron Holography (Springer, 1999), pp. 29–49. 34. A. Orchowski, W. D. Rau, and H. Lichte, "Electron holography surmounts resolution limit of electron microscopy," Phys. Rev. Lett. 74, 399–402 (1995). [CrossRef] 35. P. L. Potapov, H. Lichte, J. Verbeeck, and D. Van Dyck, "Experiments on inelastic electron holography," Ultramicroscopy 106, 1012–1018 (2006). [CrossRef] 36. P. L. 
Potapov, J. Verbeeck, P. Schattschneider, H. Lichte, and D. Van Dyck, "Inelastic electron holography as a variant of the Feynman thought experiment," Ultramicroscopy 107, 559–567 (2007). [CrossRef] 37. J. Verbeeck, D. Van Dyck, H. Lichte, P. Potapov, and P. Schattschneider, "Plasmon holographic experiments: theoretical framework," Ultramicroscopy 102, 239–255 (2005). [CrossRef] 38. Note that there is a difference between the spatial decoherence that we study here and the temporal decoherence that arises in interference of electron parts with different energies, e.g., where part of the electron undergoes inelastic scattering [39,40]. The spatial decoherence arises for electrons of a single energy value, e.g., for energy-filtered imaging or diffraction. Consequently, such decoherence measurements are not altered by the duration of measurement. 39. J. Verbeeck, G. Bertoni, and H. Lichte, "A holographic biprism as a perfect energy filter?" Ultramicroscopy 111, 887–893 (2011). [CrossRef] 40. D. Van Dyck, H. Lichte, and J. C. H. Spence, "Inelastic scattering and holography," Ultramicroscopy 81, 187–194 (2000). [CrossRef] 41. S. Scheel and S. Y. Buhmann, "Macroscopic quantum electrodynamics-concepts and applications," Acta Phys. Slovaca 58, 675–809 (2008). 42. N. Rivera, I. Kaminer, B. Zhen, J. D. Joannopoulos, and M. Soljačić, "Shrinking light to allow forbidden transitions on the atomic scale," Science 353, 263–269 (2016). [CrossRef] 43. R. Remez, A. Karnieli, S. Trajtenberg-Mills, N. Shapira, I. Kaminer, Y. Lereah, and A. Arie, "Observing the quantum wave nature of free electrons through spontaneous emission," Phys. Rev. Lett. 123, 060401 (2019). [CrossRef] 44. O. Reinhardt, C. Mechel, M. Lynch, and I. Kaminer, "Free-electron qubits," Analen der Physik, Early View (2020), [CrossRef] . 45. P. Sonnentag and F. Hasselbach, "Measurement of decoherence of electron waves and visualization of the quantum-classical transition," Phys. Rev. Lett. 98, 200402 (2007). [CrossRef] 46. N. Kerker, R. Röpke, L. M. Steinert, A. Pooch, and A. Stibor, "Quantum decoherence by Coulomb interaction," New J. Phys. 22, 063039 (2020). [CrossRef] 47. F. Röder and H. Lichte, "Inelastic electron holography–first results with surface plasmons," Eur. Phys. J. Appl. Phys. 54, 33504 (2011). [CrossRef] 48. P. Schattschneider and H. Lichte, "Correlation and the density-matrix approach to inelastic electron holography in solid state plasmas," Phys. Rev. B 71, 045130 (2005). [CrossRef] 49. One can apply an energy filter to post-select electrons that lost exactly one quanta of plasmon energy. 50. H. Lichte and M. Lehmann, "Electron holography—basics and applications," Rep. Prog. Phys. 71, 016102 (2007). [CrossRef] 51. I. Kaminer, M. Mutzafi, A. Levy, G. Harari, H. H. Sheinfux, S. Skirlo, J. Nemirovsky, J. D. Joannopoulos, M. Segev, and M. Soljacic, "Quantum Čerenkov radiation: spectral cutoffs and the role of spin and orbital angular momentum," Phys. Rev. X 6, 011006 (2016). [CrossRef] 52. From the point of view of quantum information, this is the opposite case to cathodoluminescence in scanning electron microscopes, in which the emitted light is measured while the electron is not. 53. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University, 2012). 54. G. Bertoni, J. Verbeeck, and F. Brosens, "Fitting the momentum dependent loss function in EELS," Microsc. Res. Tech. 74, 212–218 (2011). [CrossRef] 55. L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995). 56. J. Verbeeck, G. 
Bertoni, and P. Schattschneider, "The Fresnel effect of a defocused biprism on the fringes in inelastic holography," Ultramicroscopy 108, 263–269 (2008). [CrossRef] 57. G. Pozzi, "Theoretical considerations on the spatial coherence in field emission electron microscope," Optik 77, 69–73 (1987). 58. A. Polimeni, G. B. H. von Högersthal, F. Masia, A. Frova, M. Capizzi, S. Sanna, and W. Stolz, "Tunable variation of the electron effective mass and exciton radius in hydrogenated GaAs 1-x N x," Phys. Rev. B 69, 041201 (2004). [CrossRef] 59. J. Verbeeck, D. Van Dyck, and G. Van Tendeloo, "Energy-filtered transmission electron microscopy: an overview," Spectrochim. Acta, Part B 59, 1529–1534 (2004). [CrossRef] 60. F. Röder and A. Lubk, "Transfer and reconstruction of the density matrix in off-axis electron holography," Ultramicroscopy 146, 103–116 (2014). [CrossRef] 61. A. Hörl, G. Haberfehlner, A. Trügler, F. P. Schmidt, U. Hohenester, and G. Kothleitner, "Tomographic imaging of the photonic environment of plasmonic nanoparticles," Nat. Commun. 8, 1–7 (2017). [CrossRef] 62. M. R. Foreman, D. Keng, J. R. Lopez, and S. Arnold, "Whispering gallery mode single nanoparticle detection and sizing: the validity of the dipole approximation," Opt. Lett. 42, 963–966 (2017). [CrossRef] 63. U. Hohenester, H. Ditlbacher, and J. R. Krenn, "Electron-energy-loss spectra of plasmonic nanoparticles," Phys. Rev. Lett. 103, 106801 (2009). [CrossRef] 64. M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (2002). 65. D. Jannis, K. Müller-Caspary, A. Béché, A. Oelsner, and J. Verbeeck, "Spectroscopic coincidence experiments in transmission electron microscopy," Appl. Phys. Lett. 114, 143101 (2019). [CrossRef] 66. J. S. Bell, "On the Einstein Podolsky Rosen paradox," Phys. Phys. Fiz. 1, 195–290 (1964). [CrossRef] 67. Y. Pan, B. Zhang, and A. Gover, "Anomalous photon-induced near-field electron microscopy," Phys. Rev. Lett. 122, 183204 (2019). [CrossRef] 68. C. T. Tai, "On the eigenfunction expansion of dyadic Green's functions," Proc. IEEE 61, 480–481 (1973). [CrossRef] 69. C. T. Tai, Dyadic Green Functions in Electromagnetic Theory (Institute of Electrical & Electronics Engineers (IEEE), 1994). 70. O. Reinhardt and I. Kaminer, "Theory of shaping electron wavepackets with light," ACS Photon. 7, 2859–2870 (2020). [CrossRef] 71. N. Rivera and I. Kaminer, "Light-matter interactions with photonic quasiparticles," Nat. Rev. Phys. 2, 538–561 (2020). [CrossRef] 72. F. Biancalana, A. Amann, A. V. Uskov, and E. P. O' Reilly, "Dynamics of light propagation in spatiotemporal dielectric structures," Phys. Rev. E 75, 46607 (2007). [CrossRef] 73. J. R. Zurita-Sánchez, P. Halevi, and J. C. Cervantes-González, "Reflection and transmission of a wave incident on a slab with a time-periodic dielectric function ɛ(t)," Phys. Rev. A 79, 53821 (2009). [CrossRef] 74. E. Lustig, Y. Sharabi, and M. Segev, "Topological aspects of photonic time crystals," Optica 5, 1390–1395 (2018). [CrossRef]
Nagayama, "Transmission electron microscopy with Zernike phase plate," Ultramicroscopy 88, 243–252 (2001). J. Verbeeck, H. Tian, and P. Schattschneider, "Production and application of electron vortex beams," Nature 467, 301–304 (2010). M. Uchida and A. Tonomura, "Generation of electron beams carrying orbital angular momentum," Nature 464, 737–739 (2010). B. J. McMorran, A. Agrawal, I. M. Anderson, A. A. Herzing, H. J. Lezec, J. J. McClelland, and J. Unguris, "Electron vortex beams with high quanta of orbital angular momentum," Science 331, 192–195 (2011). N. Voloch-Bloch, Y. Lereah, Y. Lilach, A. Gover, and A. Arie, "Generation of electron Airy beams," Nature 494, 331–335 (2013). R. Shiloh, Y. Lereah, Y. Lilach, and A. Arie, "Sculpturing the electron wave function using nanoscale phase masks," Ultramicroscopy 144, 26–31 (2014). B. Barwick, D. J. Flannigan, and A. H. Zewail, "Photon-induced near-field electron microscopy," Nature 462, 902–906 (2009). F. J. Garcia de Abajo, A. Asenjo-Garcia, and M. Kociak, "Multiphoton absorption and emission by interaction of swift electrons with evanescent light fields," Nano Lett. 10, 1859–1863 (2010). S. T. Park, M. Lin, and A. H. Zewail, "Photon-induced near-field electron microscopy (PINEM): theoretical and experimental," New J. Phys. 12, 123028 (2010). A. Feist, K. E. Echternkamp, J. Schauss, S. V. Yalunin, S. Schäfer, and C. Ropers, "Quantum coherent optical phase modulation in an ultrafast transmission electron microscope," Nature 521, 200–203 (2015). K. E. Priebe, C. Rathje, S. V. Yalunin, T. Hohage, A. Feist, S. Schäfer, and C. Ropers, "Attosecond electron pulse trains and quantum state reconstruction in ultrafast transmission electron microscopy," Nat. Photonics 11, 793–797 (2017). E. Pomarico, I. Madan, G. Berruto, G. M. Vanacore, K. Wang, I. Kaminer, and F. Carbone, "meV resolution in laser-assisted energy-filtered transmission electron microscopy," ACS Photon. 5, 759–764 (2017). Y. Morimoto and P. Baum, "Diffraction and microscopy with attosecond electron pulse trains," Nat. Phys. 14, 252–256 (2018). G. M. Vanacore, G. Berruto, I. Maden, E. Pomarico, P. Biagioni, R. J. Lamb, D. McGrouther, O. Reinhardt, I. Kaminer, B. Barwick, H. Larocque, V. Grillo, E. Karimi, F. J. Garcia de Abajo, and F. Carbone, "Ultrafast generation and control of an electron vortex beam via chiral plasmonic near fields," Nat. Mater. 18, 573–579 (2019). K. Wang, R. Dahan, M. Shentcis, Y. Kauffmann, A. Ben Hayun, O. Reinhardt, S. Tsesses, and I. Kaminer, "Coherent interaction between free electrons and a photonic cavity," Nature 582, 50–54 (2020). O. Kfir, H. Lourenco-Martins, G. Storeck, M. Sivis, T. R. Harvey, T. J. Kippenberg, A. Feist, and C. Ropers, "Controlling free electrons with optical whispering-gallery modes," Nature 582, 46–49 (2020). V. Di Giulio, M. Kociak, and F. J. G. de Abajo, "Probing quantum optical excitations with fast electrons," Optica 6, 1524–1534 (2019). O. Kfir, "Entanglements of electrons and cavity photons in the strong-coupling regime," Phys. Rev. Lett. 123, 103602 (2019). A. Stern, Y. Aharonov, and Y. Imry, "Phase uncertainty and loss of interference: a general picture," Phys. Rev. A 41, 3436–3448 (1990). Y. Imry, Introduction to Mesoscopic Physics (No. 2) (Oxford University Press on Demand, 2002). K. Hornberger, S. Gerlich, P. Haslinger, S. Nimmrichter, and M. Arndt, "Quantum interference of clusters and molecules," Rev. Mod. Phys. 84, 157–173 (2012). R. P. Feynman, The Feynman Lectures on Physics (Narosa, 1965), Vol. 3. E. Buks, R. 
Schuster, M. Heiblum, D. Mahalu, and V. Umansky, "Dephasing in electron interference by a 'which-path' detector," Nature 391, 871–874 (1998). F. S. Yasin, K. Harada, D. Shindo, H. Shinada, B. J. McMorran, and T. Tanigaki, "A tunable path-separated electron interferometer with an amplitude-dividing grating beamsplitter," Appl. Phys. Lett. 113, 233102 (2018). K. Harada, T. Akashi, K. Niitsu, K. Shimada, Y. A. Ono, D. Shindo, H. Shinada, and S. Mori, "Interference experiment with asymmetric double slit by using 1.2-MV field emission transmission electron microscope," Sci. Rep. 8, 1–10 (2018). A. H. Tavabi, C. B. Boothroyd, E. Yücelen, S. Frabboni, G. C. Gazzadi, R. E. Dunin-Borkowski, and G. Pozzi, "The Young-Feynman controlled double-slit electron interference experiment," Sci. Rep. 9, 1–8. (2019). P. G. Merli, G. Missiroli, and G. Pozzi, "On the statistical aspect of electron interference phenomena," Am. J. Phys. 44, 306–307 (1976). A. Tonomura, "Electron holography," in Electron Holography (Springer, 1999), pp. 29–49. A. Orchowski, W. D. Rau, and H. Lichte, "Electron holography surmounts resolution limit of electron microscopy," Phys. Rev. Lett. 74, 399–402 (1995). P. L. Potapov, H. Lichte, J. Verbeeck, and D. Van Dyck, "Experiments on inelastic electron holography," Ultramicroscopy 106, 1012–1018 (2006). P. L. Potapov, J. Verbeeck, P. Schattschneider, H. Lichte, and D. Van Dyck, "Inelastic electron holography as a variant of the Feynman thought experiment," Ultramicroscopy 107, 559–567 (2007). J. Verbeeck, D. Van Dyck, H. Lichte, P. Potapov, and P. Schattschneider, "Plasmon holographic experiments: theoretical framework," Ultramicroscopy 102, 239–255 (2005). Note that there is a difference between the spatial decoherence that we study here and the temporal decoherence that arises in interference of electron parts with different energies, e.g., where part of the electron undergoes inelastic scattering [39,40]. The spatial decoherence arises for electrons of a single energy value, e.g., for energy-filtered imaging or diffraction. Consequently, such decoherence measurements are not altered by the duration of measurement. J. Verbeeck, G. Bertoni, and H. Lichte, "A holographic biprism as a perfect energy filter?" Ultramicroscopy 111, 887–893 (2011). D. Van Dyck, H. Lichte, and J. C. H. Spence, "Inelastic scattering and holography," Ultramicroscopy 81, 187–194 (2000). S. Scheel and S. Y. Buhmann, "Macroscopic quantum electrodynamics-concepts and applications," Acta Phys. Slovaca 58, 675–809 (2008). N. Rivera, I. Kaminer, B. Zhen, J. D. Joannopoulos, and M. Soljačić, "Shrinking light to allow forbidden transitions on the atomic scale," Science 353, 263–269 (2016). R. Remez, A. Karnieli, S. Trajtenberg-Mills, N. Shapira, I. Kaminer, Y. Lereah, and A. Arie, "Observing the quantum wave nature of free electrons through spontaneous emission," Phys. Rev. Lett. 123, 060401 (2019). O. Reinhardt, C. Mechel, M. Lynch, and I. Kaminer, "Free-electron qubits," Analen der Physik, Early View (2020),. P. Sonnentag and F. Hasselbach, "Measurement of decoherence of electron waves and visualization of the quantum-classical transition," Phys. Rev. Lett. 98, 200402 (2007). N. Kerker, R. Röpke, L. M. Steinert, A. Pooch, and A. Stibor, "Quantum decoherence by Coulomb interaction," New J. Phys. 22, 063039 (2020). F. Röder and H. Lichte, "Inelastic electron holography–first results with surface plasmons," Eur. Phys. J. Appl. Phys. 54, 33504 (2011). P. Schattschneider and H. 
Lichte, "Correlation and the density-matrix approach to inelastic electron holography in solid state plasmas," Phys. Rev. B 71, 045130 (2005). One can apply an energy filter to post-select electrons that lost exactly one quanta of plasmon energy. H. Lichte and M. Lehmann, "Electron holography—basics and applications," Rep. Prog. Phys. 71, 016102 (2007). I. Kaminer, M. Mutzafi, A. Levy, G. Harari, H. H. Sheinfux, S. Skirlo, J. Nemirovsky, J. D. Joannopoulos, M. Segev, and M. Soljacic, "Quantum Čerenkov radiation: spectral cutoffs and the role of spin and orbital angular momentum," Phys. Rev. X 6, 011006 (2016). From the point of view of quantum information, this is the opposite case to cathodoluminescence in scanning electron microscopes, in which the emitted light is measured while the electron is not. L. Novotny and B. Hecht, Principles of Nano-Optics (Cambridge University, 2012). G. Bertoni, J. Verbeeck, and F. Brosens, "Fitting the momentum dependent loss function in EELS," Microsc. Res. Tech. 74, 212–218 (2011). L. Mandel and E. Wolf, Optical Coherence and Quantum Optics (Cambridge University, 1995). J. Verbeeck, G. Bertoni, and P. Schattschneider, "The Fresnel effect of a defocused biprism on the fringes in inelastic holography," Ultramicroscopy 108, 263–269 (2008). G. Pozzi, "Theoretical considerations on the spatial coherence in field emission electron microscope," Optik 77, 69–73 (1987). A. Polimeni, G. B. H. von Högersthal, F. Masia, A. Frova, M. Capizzi, S. Sanna, and W. Stolz, "Tunable variation of the electron effective mass and exciton radius in hydrogenated GaAs 1-x N x," Phys. Rev. B 69, 041201 (2004). J. Verbeeck, D. Van Dyck, and G. Van Tendeloo, "Energy-filtered transmission electron microscopy: an overview," Spectrochim. Acta, Part B 59, 1529–1534 (2004). F. Röder and A. Lubk, "Transfer and reconstruction of the density matrix in off-axis electron holography," Ultramicroscopy 146, 103–116 (2014). A. Hörl, G. Haberfehlner, A. Trügler, F. P. Schmidt, U. Hohenester, and G. Kothleitner, "Tomographic imaging of the photonic environment of plasmonic nanoparticles," Nat. Commun. 8, 1–7 (2017). M. R. Foreman, D. Keng, J. R. Lopez, and S. Arnold, "Whispering gallery mode single nanoparticle detection and sizing: the validity of the dipole approximation," Opt. Lett. 42, 963–966 (2017). U. Hohenester, H. Ditlbacher, and J. R. Krenn, "Electron-energy-loss spectra of plasmonic nanoparticles," Phys. Rev. Lett. 103, 106801 (2009). M. A. Nielsen and I. Chuang, Quantum Computation and Quantum Information (2002). D. Jannis, K. Müller-Caspary, A. Béché, A. Oelsner, and J. Verbeeck, "Spectroscopic coincidence experiments in transmission electron microscopy," Appl. Phys. Lett. 114, 143101 (2019). J. S. Bell, "On the Einstein Podolsky Rosen paradox," Phys. Phys. Fiz. 1, 195–290 (1964). Y. Pan, B. Zhang, and A. Gover, "Anomalous photon-induced near-field electron microscopy," Phys. Rev. Lett. 122, 183204 (2019). C. T. Tai, "On the eigenfunction expansion of dyadic Green's functions," Proc. IEEE 61, 480–481 (1973). C. T. Tai, Dyadic Green Functions in Electromagnetic Theory (Institute of Electrical & Electronics Engineers (IEEE), 1994). O. Reinhardt and I. Kaminer, "Theory of shaping electron wavepackets with light," ACS Photon. 7, 2859–2870 (2020). N. Rivera and I. Kaminer, "Light-matter interactions with photonic quasiparticles," Nat. Rev. Phys. 2, 538–561 (2020). F. Biancalana, A. Amann, A. V. Uskov, and E. P. 
O' Reilly, "Dynamics of light propagation in spatiotemporal dielectric structures," Phys. Rev. E 75, 46607 (2007). J. R. Zurita-Sánchez, P. Halevi, and J. C. Cervantes-González, "Reflection and transmission of a wave incident on a slab with a time-periodic dielectric function ɛ(t)," Phys. Rev. A 79, 53821 (2009). E. Lustig, Y. Sharabi, and M. Segev, "Topological aspects of photonic time crystals," Optica 5, 1390–1395 (2018). Aharonov, Y. Akashi, T. Amann, A. Anderson, I. M. Arie, A. Arndt, M. Arnold, S. Asenjo-Garcia, A. Barwick, B. Baum, P. Béché, A. Bell, J. S. Ben Hayun, A. Berruto, G. Bertoni, G. Biagioni, P. Biancalana, F. Boothroyd, C. B. Brosens, F. Buhmann, S. Y. Buks, E. Capizzi, M. Carbone, F. Cervantes-González, J. C. Chuang, I. Dahan, R. Danev, R. De Abajo, F. G. de Abajo, F. J. G. Di Giulio, V. Ditlbacher, H. Dunin-Borkowski, R. E. Echternkamp, K. E. Egerton, R. F. Feist, A. Feynman, R. P. Flannigan, D. J. Foreman, M. R. Frabboni, S. Frova, A. Garcia de Abajo, F. J. Gazzadi, G. C. Gerlich, S. Gover, A. Grillo, V. Haberfehlner, G. Halevi, P. Harada, K. Harari, G. Harvey, T. R. Haslinger, P. Hasselbach, F. Hecht, B. Heiblum, M. Herzing, A. A. Hohage, T. Hörl, A. Hornberger, K. Imry, Y. Jannis, D. Joannopoulos, J. D. Kaminer, I. Karimi, E. Karnieli, A. Kauffmann, Y. Keng, D. Kerker, N. Kfir, O. Kippenberg, T. J. Kociak, M. Kothleitner, G. Krenn, J. R. Lamb, R. J. Larocque, H. Lehmann, M. Lereah, Y. Levy, A. Lezec, H. J. Lichte, H. Lilach, Y. Lin, M. Lopez, J. R. Lourenco-Martins, H. Lubk, A. Lustig, E. Lynch, M. Madan, I. Maden, I. Mahalu, D. Mandel, L. Masia, F. McClelland, J. J. McGrouther, D. McMorran, B. J. Mechel, C. Merli, P. G. Missiroli, G. Mori, S. Morimoto, Y. Müller-Caspary, K. Mutzafi, M. Nagayama, K. Nemirovsky, J. Nielsen, M. A. Niitsu, K. Nimmrichter, S. Novotny, L. O' Reilly, E. P. Oelsner, A. Ono, Y. A. Orchowski, A. Ozawa, L. Pan, Y. Park, S. T. Polimeni, A. Polman, A. Pomarico, E. Pooch, A. Potapov, P. Potapov, P. L. Pozzi, G. Priebe, K. E. Rathje, C. Rau, W. D. Reinhardt, O. Remez, R. Rivera, N. Röder, F. Ropers, C. Röpke, R. Sanna, S. Schäfer, S. Schattschneider, P. Schauss, J. Scheel, S. Schmidt, F. P. Schuster, R. Shapira, N. Sharabi, Y. Sheinfux, H. H. Shentcis, M. Shiloh, R. Shimada, K. Shinada, H. Shindo, D. Sivis, M. Skirlo, S. Soljacic, M. Sonnentag, P. Spence, J. C. H. Steinert, L. M. Stern, A. Stibor, A. Stolz, W. Storeck, G. Tai, C. T. Tanigaki, T. Tavabi, A. H. Tian, H. Tonomura, A. Trajtenberg-Mills, S. Trügler, A. Tsesses, S. Uchida, M. Umansky, V. Unguris, J. Uskov, A. V. Van Dyck, D. Van Tendeloo, G. Vanacore, G. M. Verbeeck, J. Voloch-Bloch, N. von Högersthal, G. B. H. Wang, K. Wolf, E. Yalunin, S. V. Yasin, F. S. Yücelen, E. Zewail, A. H. Zhang, B. Zhen, B. Zurita-Sánchez, J. R. ACS Photon. (2) Acta Phys. Slovaca (1) Am. J. Phys. (1) Eur. Phys. J. Appl. Phys. (1) Microsc. Res. Tech. (1) Nat. Mater. (2) Nat. Rev. Phys. (1) New J. Phys. (2) Phys. Phys. Fiz. (1) Phys. Rev. A (2) Phys. Rev. E (1) Phys. Rev. X (1) Rep. Prog. Phys. (2) Sci. Rep. (2) Spectrochim. Acta, Part B (1) Ultramicroscopy (9) » Supplement 1 Supplementary material (1) A ( r , ω ) = ℏ π ϵ 0 1 c 2 ∫ ω d ω ∫ d r ′ Im ϵ ( r ′ , ω ) × G = ( r , r ′ , ω ) f ^ ( r ′ , ω ) + h . c . , (2) ρ e = Tr exc { ρ joint } = ∑ σ , σ ′ ⁡ ρ e ( σ , σ ′ ) | ψ σ ⟩ e ⟨ ψ σ ′ | e (3) ρ e ( r T , r T ′ ; Δ E ) = 4 α ℏ c S ∫ d z d z ′ e i Δ E ℏ v 0 ( z − z ′ ) × Im G zz ( r T , z ; r T ′ , z ′ ; Δ E ℏ ) . 
(4) G zz ( r T , z ; r T ′ , z ′ ; ω ) = i 8 π 2 1 ( ω / c ) 2 ∬ − ∞ ∞ r p ( k T , ω ) × e i [ k T ⋅ ( r T − r T ′ ) + ω c ( z + z ′ ) ] k T 2 d 2 k T , (5) ρ e ( k T , k T ′ ; Δ E ) = δ ( k T − k T ′ ) S ℏ 16 π 2 α c ( Δ E / ℏ ) 2 Im { r p ( k T , Δ E / ℏ ) } k T 2 − ( Δ E / ℏ ) 2 c 2 × k T 2 k T 2 − ( Δ E / ℏ ) 2 c 2 + ( Δ E / ℏ ) 2 v 2 . (6) γ ( Δ r T , Δ E ) = ∫ ρ e ( k T ) e i k T ⋅ Δ r T d k T ∫ ρ e ( k T ) d k T , (7) ρ e ( r T , r T ′ , Δ E ) = 4 α c ℏ S ( Δ E ℏ ) 2 μ 0 ψ ( r T ) ⋅ Im { α = ( Δ E ℏ ) } ⋅ ψ ∗ ( r T ′ ) , (8) purity = Tr ( ρ e 2 ) Tr 2 ( ρ e ) .
CommonCrawl
Extremal graph theory Extremal graph theory is a branch of combinatorics, itself an area of mathematics, that lies at the intersection of extremal combinatorics and graph theory. In essence, extremal graph theory studies how global properties of a graph influence local substructure.[1] Results in extremal graph theory deal with quantitative connections between various graph properties, both global (such as the number of vertices and edges) and local (such as the existence of specific subgraphs), and problems in extremal graph theory can often be formulated as optimization problems: how big or small can a parameter of a graph be, given some constraints that the graph has to satisfy?[2] A graph that is an optimal solution to such an optimization problem is called an extremal graph, and extremal graphs are important objects of study in extremal graph theory. Extremal graph theory is closely related to fields such as Ramsey theory, spectral graph theory, computational complexity theory, and additive combinatorics, and frequently employs the probabilistic method. History Extremal graph theory, in its strictest sense, is a branch of graph theory developed and loved by Hungarians. Bollobás (2004) [3] Mantel's Theorem (1907) and Turán's Theorem (1941) were some of the first milestones in the study of extremal graph theory.[4] In particular, Turán's theorem would later become a motivation for finding results such as the Erdős–Stone theorem (1946).[1] This result is surprising because it connects the chromatic number with the maximal number of edges in an $H$-free graph. An alternative proof of Erdős–Stone was given in 1975, and utilised the Szemerédi regularity lemma, an essential technique in the resolution of extremal graph theory problems.[4] Topics and concepts Graph coloring Main article: Graph coloring A proper (vertex) coloring of a graph $G$ is a coloring of the vertices of $G$ such that no two adjacent vertices have the same color. The minimum number of colors needed to properly color $G$ is called the chromatic number of $G$, denoted $\chi (G)$. Determining the chromatic number of specific graphs is a fundamental question in extremal graph theory, because many problems in the area and related areas can be formulated in terms of graph coloring.[2] Two simple lower bounds on the chromatic number of a graph $G$ are given by the clique number $\omega (G)$—all vertices of a clique must have distinct colors—and by $|V(G)|/\alpha (G)$, where $\alpha (G)$ is the independence number, because the set of vertices with a given color must form an independent set. A greedy coloring gives the upper bound $\chi (G)\leq \Delta (G)+1$, where $\Delta (G)$ is the maximum degree of $G$. When $G$ is not an odd cycle or a clique, Brooks' theorem states that the upper bound can be reduced to $\Delta (G)$. When $G$ is a planar graph, the four-color theorem states that $G$ has chromatic number at most four. In general, determining whether a given graph has a coloring with a prescribed number of colors is known to be NP-hard. In addition to vertex coloring, other types of coloring are also studied, such as edge colorings. The chromatic index $\chi '(G)$ of a graph $G$ is the minimum number of colors in a proper edge-coloring of the graph, and Vizing's theorem states that the chromatic index of a graph $G$ is either $\Delta (G)$ or $\Delta (G)+1$. Forbidden subgraphs Main article: Forbidden subgraph problem The forbidden subgraph problem is one of the central problems in extremal graph theory.
Given a graph $G$, the forbidden subgraph problem asks for the maximal number of edges $\operatorname {ex} (n,G)$ in an $n$-vertex graph that does not contain a subgraph isomorphic to $G$. When $G=K_{r}$ is a complete graph, Turán's theorem gives an exact value for $\operatorname {ex} (n,K_{r})$ and characterizes all graphs attaining this maximum; such graphs are known as Turán graphs. For non-bipartite graphs $G$, the Erdős–Stone theorem gives an asymptotic value of $\operatorname {ex} (n,G)$ in terms of the chromatic number of $G$. The problem of determining the asymptotics of $\operatorname {ex} (n,G)$ when $G$ is a bipartite graph is open; when $G$ is a complete bipartite graph, this is known as the Zarankiewicz problem.

Homomorphism density

Main article: Homomorphism density

The homomorphism density $t(H,G)$ of a graph $H$ in a graph $G$ describes the probability that a randomly chosen map from the vertex set of $H$ to the vertex set of $G$ is also a graph homomorphism. It is closely related to the subgraph density, which describes how often a graph $H$ is found as a subgraph of $G$.

The forbidden subgraph problem can be restated as maximizing the edge density of a graph with $G$-density zero, and this naturally leads to generalizations in the form of graph homomorphism inequalities, which are inequalities relating $t(H,G)$ for various graphs $H$. By extending the homomorphism density to graphons, which are objects that arise as a limit of dense graphs, the graph homomorphism density can be written in the form of integrals, and inequalities such as the Cauchy–Schwarz inequality and Hölder's inequality can be used to derive homomorphism inequalities. A major open problem relating homomorphism densities is Sidorenko's conjecture, which states a tight lower bound on the homomorphism density of a bipartite graph in a graph $G$ in terms of the edge density of $G$.

Graph regularity

Main article: Szemerédi regularity lemma

Szemerédi's regularity lemma states that all graphs are 'regular' in the following sense: the vertex set of any given graph can be partitioned into a bounded number of parts such that the bipartite graph between most pairs of parts behaves like a random bipartite graph.[2] This partition gives a structural approximation to the original graph, which reveals information about the properties of the original graph.

The regularity lemma is a central result in extremal graph theory, and also has numerous applications in the adjacent fields of additive combinatorics and computational complexity theory. In addition to (Szemerédi) regularity, closely related notions of graph regularity such as strong regularity and Frieze–Kannan weak regularity have also been studied, as well as extensions of regularity to hypergraphs.

Applications of graph regularity often utilize forms of counting lemmas and removal lemmas. In their simplest forms, the graph counting lemma uses regularity between pairs of parts in a regular partition to approximate the number of subgraphs, and the graph removal lemma states that given a graph with few copies of a given subgraph, we can remove a small number of edges to eliminate all copies of the subgraph.
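As an added illustration (not part of the article), the smallest cases of the forbidden subgraph problem for $G=K_{3}$ can be checked by brute force: Mantel's theorem predicts $\operatorname {ex} (n,K_{3})=\lfloor n^{2}/4\rfloor$. The sketch below is exhaustive and therefore only feasible for very small $n$; it assumes nothing beyond the Python standard library.

from itertools import combinations

def ex_triangle_free(n):
    # Brute-force ex(n, K_3): the maximum number of edges of a
    # triangle-free graph on n labelled vertices.
    pairs = list(combinations(range(n), 2))
    best = 0
    for mask in range(1 << len(pairs)):
        edges = [pairs[i] for i in range(len(pairs)) if (mask >> i) & 1]
        adj = {v: set() for v in range(n)}
        for u, v in edges:
            adj[u].add(v)
            adj[v].add(u)
        # A triangle exists iff the endpoints of some edge share a neighbour.
        if any(adj[u] & adj[v] for u, v in edges):
            continue
        best = max(best, len(edges))
    return best

for n in range(2, 7):
    assert ex_triangle_free(n) == n * n // 4  # Mantel: ex(n, K_3) = floor(n^2/4)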
See also

Related fields
• Ramsey theory
• Ramsey–Turán theory
• Spectral graph theory
• Additive combinatorics
• Computational complexity theory
• Probabilistic combinatorics

Techniques and methods
• Probabilistic method
• Dependent random choice
• Container method
• Hypergraph regularity method

Theorems and conjectures (in addition to the ones mentioned above)
• Ore's theorem
• Ruzsa–Szemerédi problem

References

1. Diestel, Reinhard (2010), Graph Theory (4th ed.), Berlin, New York: Springer-Verlag, pp. 169–198, ISBN 978-3-642-14278-9.
2. Alon, Noga; Krivelevich, Michael (2008), "Extremal and Probabilistic Combinatorics", in Gowers, Timothy; Barrow-Green, June; Leader, Imre (eds.), The Princeton Companion to Mathematics, Princeton, New Jersey: Princeton University Press, pp. 562–575, ISBN 978-0-691-11880-2.
3. Bollobás, Béla (2004), Extremal Graph Theory, New York: Dover Publications, ISBN 978-0-486-43596-1.
4. Bollobás, Béla (1998), Modern Graph Theory, Berlin, New York: Springer-Verlag, pp. 103–144, ISBN 978-0-387-98491-9.
Wikipedia
Abstract: The Lieb-Liniger model describes one-dimensional bosons interacting through a repulsive contact potential. In this work, we introduce an extended version of this model by replacing the contact potential with a decaying exponential. Using the recently developed continuous matrix product state techniques, we explore the ground-state phase diagram of this model by examining the superfluid and density correlation functions. At weak coupling, superfluidity governs the ground state, in a similar way to the Lieb-Liniger model. However, at strong coupling, quasi-crystal and super-Tonks-Girardeau regimes are also found, which are not present in the original Lieb-Liniger case. Therefore the presence of the exponentially decaying potential leads to a superfluid/super-Tonks-Girardeau/quasi-crystal crossover when tuning the coupling strength from weak to strong interactions. This corresponds to a Luttinger liquid parameter in the range $K \in (0, \infty)$, in contrast with the Lieb-Liniger model, where $K \in [1, \infty)$, and the screened long-range potential, where $K \in (0, 1]$.
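For orientation, one natural way to write the extended model described in this abstract (the precise normalization used by the authors is not stated here, so the form below is an assumption) is, in units $\hbar = 2m = 1$,

$$ \hat{H} = -\sum_{i=1}^{N}\frac{\partial^{2}}{\partial x_{i}^{2}} + 2c\sum_{i<j}\frac{1}{2\ell}\,e^{-|x_{i}-x_{j}|/\ell} , $$

where $c>0$ is the coupling strength and $\ell$ the range of the interaction. Since $\frac{1}{2\ell}e^{-|x|/\ell}\to\delta(x)$ as $\ell\to 0$, the contact interaction $2c\,\delta(x_{i}-x_{j})$ of the original Lieb-Liniger model is recovered in that limit.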
CommonCrawl
\begin{document} \title{Task-dependent control of open quantum systems} \date{\today} \author{Jens Clausen} \affiliation{Institute for Quantum Optics and Quantum Information, Austrian Academy of Sciences, Technikerstr. 21a, A-6020 Innsbruck} \affiliation{Institute for Theoretical Physics, University of Innsbruck, Technikerstr. 25, A-6020 Innsbruck, Austria} \author{Guy Bensky} \author{Gershon Kurizki} \affiliation{Department of Chemical Physics, Weizmann Institute of Science, Rehovot, 76100, Israel} \begin{abstract} We develop a general optimization strategy for performing a chosen unitary or non-unitary task on an open quantum system. The goal is to design a controlled time-dependent system Hamiltonian by variationally minimizing or maximizing a chosen function of the system state, which quantifies the task success (score), such as fidelity, purity, or entanglement. If the time-dependence of the system Hamiltonian is fast enough to be comparable to or shorter than the response time of the bath, then the resulting non-Markovian dynamics is shown to optimize the chosen task score to second order in the coupling to the bath. This strategy can protect a desired unitary system evolution from bath-induced decoherence, but can also take advantage of the system-bath coupling so as to realize a desired non-unitary effect on the system. \end{abstract} \pacs{03.65.Yz, 03.67.Pp, 37.10.-x, 02.60.-x } \keywords{ decoherence, open systems, quantum information, decoherence protection, quantum error correction, computational techniques, simulations } \maketitle \section{Introduction} \label{sec1} Due to the ongoing trends of device miniaturization, increasing demands on speed and security of data processing, along with requirements on measurement precision in fundamental research, quantum phenomena are expected to play an increasing role in future technologies. Special attention must hence be paid to omnipresent decoherence effects. These may have different physical origins, such as coupling of the system to an external environment (bath) \cite{bookBreuer} or to internal degrees of freedom of a structured particle \cite{BarGil06}, noise in the classical fields controlling the system, or population leakage out of a relevant system subspace \cite{WKB09}. Formally, their consequence is always a deviation (error) of the quantum state evolution with respect to the unitary evolution expected if these effects are absent \cite{krausNielsen}. In operational tasks such as the preparation, transformation, transmission, and detection of quantum states, these environmental coupling effects are detrimental and must be suppressed by strategies known as dynamical decoupling \cite{vio99,uhr07,kuo11,cai11}, or the more general dynamical control by modulation \cite{kof01,Barone04,kof04,JPB,gkl08}. There are, however, tasks which cannot be implemented by unitary evolution, in particular those involving a change of the system's state entropy \cite{Alicki79,Lindbladbook}. Such tasks necessitate a coupling to a bath, and their efficient implementation hence requires {\em enhancement} of this coupling. Examples are the use of measurements to cool (purify) a system \cite{gor09,gorennjp12,Gonzalo10,cha11} or manipulate its state \cite{Harel96,kof00,Opatrny00}, or to harvest and convert energy from the environment \cite{Scully03,gro10,scho11,sch04,bla11}.
A general task may also require state and energy transfer \cite{esc11}, or entanglement \cite{vol11} of non-interacting parties via {\em shared} modes of the bath \cite{durga11,gor06b} which call for maximizing the shared (two-partite) couplings with the bath, but suppressing the single-partite couplings. It is therefore desirable to have a general framework for optimizing the way a system interacts with its environment to achieve a desired task. This optimization consists in adjusting a given ``score'' that quantifies the success of the task, such as the targeted fidelity, purity, entropy, entanglement, or energy by dynamical modification of the system-bath coupling spectrum on demand. The goal of this work is to develop such a framework. The remainder of the paper is organized as follows. In Sec.~\ref{sec2} we state the problem formally and provide general expressions for the change of the score over a fixed time interval as operator and matrix spectral overlap. In Sec.~\ref{sec3} we discuss a general solution in terms of an Euler-Lagrange optimization. In Sec.~\ref{sec4} we apply the approach to the protection of a general quantum gate, which requires minimizing any coupling to the bath, whereas in Sec.~\ref{sec5} we consider the complementary case of {\em enhancing} the system-bath coupling in order to modify the purity (entropy) of a qubit. Open problems are outlined and an outlook is presented in Sec.~\ref{sec6}. Supplementary information is provided in Apps.~\ref{secA1} and \ref{secA2}. \section{Overlap-integral formalism} \label{sec2} \subsection{Fixed time approach} Assume that a quantity of interest (``score'') can be written as a real-valued function $P(t)$ $\!=$ $\!P[\hat{\varrho}(t)]$ of the system state $\hat{\varrho}(t)$ at a given time $t$. This might be, for example, a measure of performance of some input-output-device that is supposed to operate within a predefined cycle/gate time $t$. Depending on the physical problem and model chosen, extensions and generalizations are conceivable, such as a comparison of the outcome for different $t$ (on a time scale set by a constraint) \cite{esc11}, a time average $P$ $\!=$ $\!\int\mathrm{d}\tau{f}(\tau)P[\hat{\varrho}(\tau)]$ with some probability density $f(\tau)$ \cite{cai10}, or a maximum $P(t)$ $\!=$ $\!\mathrm{max}_{\tau\in[0,t]}P[\hat{\varrho}(\tau)]$ \cite{sch11}. Here we restrict ourselves to the ``fixed-time'' definition as given above. Our goal is to generate, by means of classical control fields applied to the system, a time dependence of the system Hamiltonian within the interval $0\le\tau\le{t}$ that adjusts $P(t)$ to a desired value. In particular, this can be an optimum (i.e. maximum or minimum) of the possible values of $P$. Assume that the initial system state $\hat{\varrho}(0)$ is given. 
The {\em change} to the ``score'' over time $t$, is then given by the first-order Taylor expansion in a chosen basis \begin{equation} \label{dpoverl} P(t) \approx\sum_{mn}\frac{\partial{P}}{\partial\varrho_{mn}}\Delta\varrho_{mn} =\mathrm{Tr}\bigl(\hat{P}\Delta\hat{\varrho}\bigr), \end{equation} where the expansion coefficients \begin{equation} \label{defP} \left(\frac{\partial{P}}{\partial\varrho_{mn}}\right)_{t=0} \equiv(\hat{P})_{nm} \end{equation} are the matrix elements (in the chosen basis) of a Hermitian operator $\hat{P}$, which is the gradient of $P[\hat{\varrho}(t)]$ with respect to $\hat{\varrho}$ at $t$ $\!=$ $\!0$, i.e., we may formally write $\hat{P}$ $\!=$ $\!(\nabla_{\hat{\varrho}}P)_{t=0}^T$ $\!=$ $\!(\partial{P}/\partial\hat{\varrho})_{t=0}^T$. In what follows, it is $\hat{P}$ which contains all information on the score variable. Note that the transposition applied in Eq.~(\ref{defP}) simply allows to express the sum over the Hadamard (i.e. entrywise) matrix product in Eq.~(\ref{dpoverl}) as a trace of the respective operator product $\hat{P}\Delta\hat{\varrho}$. Let us illustrate this in two examples. If $P$ is the expectation value of an observable (i.e. Hermitian operator) $\hat{Q}$, so that $P$ $\!=$ $\!\mathrm{Tr}(\hat{\varrho}\hat{Q})$, then Eq.~(\ref{defP}) just reduces to this observable, $\hat{P}$ $\!=$ $\!\hat{Q}$. If $P$ is the state purity, $P$ $\!=$ $\!\mathrm{Tr}(\hat{\varrho}^2)$, then Eq.~(\ref{defP}) becomes proportional to the state, $\hat{P}$ $\!=$ $\!2\hat{\varrho}(0)$. Note that the score $P$ is supposed to reflect the environment (bath) effects and not the internal system dynamics. Equation~(\ref{dpoverl}) implies that $\Delta\hat{\varrho}$ and with it $P$ are small. Hence $\Delta\hat{\varrho}$ must refer to the interaction picture and a weak interaction, while $P[\hat{\varrho}(t)]$ should not be affected by the internal dynamics [so that no separate time dependence emerges in Eq.~(\ref{dpoverl}), which is not included in the chain-rule derivative]. In the examples above, this is obvious for state purity, whereas an observable $\hat{Q}$ might be thought of co-evolving with the internal dynamics. Our starting point for what follows is simply the relation $P$ $\!=$ $\!\mathrm{Tr}(\hat{P}\Delta\hat{\varrho})$ with some Hermitian $\hat{P}$, whose origin is not relevant. \subsection{Role of averaged interaction energy} Equation~(\ref{dpoverl}) expresses the score $P$ as an overlap between the gradient $\hat{P}$ and the change of system state $\Delta\hat{\varrho}$. In order to find expressions for $P$ in terms of physically insightful quantities, we decompose the total Hamiltonian into system, bath, and interaction parts, \begin{equation} \hat{H}(t)=\hat{H}_{\mathrm{S}}(t)+\hat{H}_{\mathrm{B}}+\hat{H}_{\mathrm{I}}, \end{equation} and consider the von Neumann equation of the total (system and environment) state in the interaction picture, \begin{equation} \label{vNeq} \frac{\partial}{\partial{t}}\hat{\varrho}_{\mathrm{tot}}(t) =-\mathrm{i}[\hat{H}_{\mathrm{I}}(t),\hat{\varrho}_{\mathrm{tot}}(t)], \end{equation} [$\hat{H}_{\mathrm{I}}(t)$ $\!=$ $\!\hat{U}_{\mathrm{F}}^\dagger(t)\hat{H}_{\mathrm{I}}^{\mathrm{(S)}}(t) \hat{U}_{\mathrm{F}}(t)$, (S) denoting the Schr\"odinger picture and $\hat{U}_{\mathrm{F}}(t)$ $\!=$ $\!\mathrm{T}_+\mathrm{e}^{-\mathrm{i}\int_{0}^{t}\! \mathrm{d}\tau[\hat{H}_{\mathrm{S}}(\tau)+\hat{H}_{\mathrm{B}}]}$]. 
Its solution can be written as Dyson (state) expansion \begin{eqnarray} &&\hspace{-0.5cm}\hat{\varrho}_{\mathrm{tot}}(t) =\hat{\varrho}_{\mathrm{tot}}(0) +(-\mathrm{i})\int_{0}^{t}\mathrm{d}t_1 [\hat{H}_{\mathrm{I}}(t_1),\hat{\varrho}_{\mathrm{tot}}(0)] \nonumber\\ &&\hspace{-0.5cm}+(-\mathrm{i})^2 \int_{0}^{t}\mathrm{d}t_1\int_{0}^{t_1}\mathrm{d}t_2 [\hat{H}_{\mathrm{I}}(t_1), [\hat{H}_{\mathrm{I}}(t_2),\hat{\varrho}_{\mathrm{tot}}(0)]]+\ldots,\quad \label{dyson} \end{eqnarray} which can be obtained either by an iterated integration of (\ref{vNeq}) or from its formal solution \begin{equation} \label{fsol} \hat{\varrho}_{\mathrm{tot}}(t)=\hat{U}_{\mathrm{I}}(t) \hat{\varrho}_{\mathrm{tot}}(0)\hat{U}_{\mathrm{I}}^\dagger(t), \quad \hat{U}_{\mathrm{I}}(t) =\mathrm{T}_+\mathrm{e}^{-\mathrm{i}\int_{0}^{t}\!\mathrm{d}t^\prime \hat{H}_{\mathrm{I}}(t^\prime)}, \end{equation} by applying the Magnus (operator) expansion \begin{eqnarray} \hat{U}_{\mathrm{I}}(t) &=&\mathrm{e}^{-\mathrm{i}t\hat{H}_{\mathrm{eff}}(t)}, \\ \hat{H}_{\mathrm{eff}}(t) &=&\frac{1}{t}\int_{0}^{t}\mathrm{d}t_1\,\hat{H}_{\mathrm{I}}(t_1) \nonumber\\ &&-\frac{\mathrm{i}}{2t}\int_{0}^{t}\mathrm{d}t_1 \int_{0}^{t_1}\mathrm{d}t_2 [\hat{H}_{\mathrm{I}}(t_1),\hat{H}_{\mathrm{I}}(t_2)]+\ldots,\quad\quad \label{magnus} \end{eqnarray} expanding in (\ref{fsol}) the exponential $\hat{U}_{\mathrm{I}}$ $\!=$ $\!\hat{U}_{\mathrm{I}}(\hat{H}_{\mathrm{eff}})$ and sorting the terms according to their order in $\hat{H}_{\mathrm{I}}$. We assume that initially, the system is brought in contact with its environment (rather than being in equilibrium with it), which corresponds to factorizing initial conditions $\hat{\varrho}_{\mathrm{tot}}(0)$ $\!=$ $\!\hat{\varrho}(0)\otimes\hat{\varrho}_{\mathrm{B}}$. The environment is in a steady state $\hat{\varrho}_{\mathrm{B}}$, $[\hat{\varrho}_{\mathrm{B}},\hat{H}_{\mathrm{B}}]$ $\!=$ $\!0$, so it is more adequate to speak of a ``bath''. Tracing over the bath in Eq.~(\ref{dyson}) then gives the change of system state $\Delta\hat{\varrho}$ $\!=$ $\!\hat{\varrho}(t)$ $\!-$ $\!\hat{\varrho}(0)$ over time $t$, which we must insert into Eq.~(\ref{dpoverl}). We further assume a vanishing bath expectation value of the interaction Hamiltonian, \begin{equation} \label{symmcond} \langle\hat{H}_{\mathrm{I}}\rangle_{\mathrm{B}} \equiv\mathrm{Tr}_{\mathrm{B}}(\hat{\varrho}_{\mathrm{B}}\hat{H}_{\mathrm{I}}) =\hat{0}. \end{equation} As a consequence, the ``drift'' term corresponding to the first order in Eq.~(\ref{dyson}) vanishes, and we only consider the second order term as the lowest non-vanishing order approximation. Finally, we assume that the initial system state commutes with $\hat{P}$, \begin{equation} \label{condition} \bigl[\hat{\varrho}(0),\hat{P}\bigr]=0. \end{equation} In the language of control theory, $\mathrm{Tr}[\hat{\varrho}(0)\hat{P}]$ is a ``kinematic critical point'' \cite{pec11} if Eq.~(\ref{condition}) holds, since $\mathrm{Tr}[\mathrm{e}^{\mathrm{i}\hat{H}}\hat{\varrho}(0) \mathrm{e}^{-\mathrm{i}\hat{H}}\hat{P}]$ $\!=$ $\!\mathrm{Tr}[\hat{\varrho}(0)\hat{P}]$ $\!+$ $\!\mathrm{i}\mathrm{Tr}(\hat{H}[\hat{\varrho}(0),\hat{P}])$ $\!+$ $\!\mathcal{O}(\hat{H}^2)$ for a small arbitrary system Hamiltonian $\hat{H}$. Since we consider $\hat{\varrho}$ in the interaction picture, Eq.~(\ref{condition}) means that the score is insensitive (in first order) to a bath-induced unitary evolution (i.e., a generalized Lamb shift) \cite{durga11}. 
The purpose of this assumption is only to simplify the expressions, but it is not essential. Physically, one may think of a fast auxiliary unitary transformation that is applied initially in order to diagonalize the initial state in the eigenbasis of $\hat{P}$. Modifications to be made if Eq.~(\ref{condition}) does not hold are provided in App.~\ref{secA2}. To lowest (i.e., second) order we then evaluate Eq.~(\ref{dpoverl}) for the score change as \begin{equation} \label{deltaP} P=t^2\bigl\langle[\hat{H},\hat{P}]\hat{H}\bigr\rangle,\quad \hat{H}=\frac{1}{t}\int_{0}^{t}\mathrm{d}\tau\,\hat{H}_{\mathrm{I}}(\tau), \end{equation} where $\langle\cdot\rangle$ $\!=$ $\!\mathrm{Tr}[\hat{\varrho}_{\mathrm{tot}}(0)(\cdot)]$. This expresses the change of score in terms of the interaction Hamiltonian, averaged in the interaction picture over the time interval of interest. Our scheme is summarized in Fig.~\ref{fig1}. \begin{figure}\label{fig1} \end{figure} \subsection{Spectral overlap} \label{sec2c} Alternatively, Eq.~(\ref{deltaP}) can be written as an overlap of system-and bath matrices, which allows a more direct physical interpretation. To do so, we assume $d$ dimensional Hilbert-space, and expand the interaction Hamiltonian as a sum of products of system and bath operators, \begin{equation} \label{HIdec} \hat{H}_{\mathrm{I}}=\sum_{j=1}^{d^2-1}\hat{S}_j\otimes\hat{B}_j, \end{equation} in such a way that $\langle\hat{B}_j\rangle$ $\!=$ $\!0$, which ensures that Eq.~(\ref{symmcond}) is satisfied (otherwise we may shift $\hat{B}_j^\prime$ $\!=$ $\!\hat{B}_j-\langle\hat{B}_j\rangle\hat{I}$, $\hat{H}_{\mathrm{S}}^\prime$ $\!=$ $\!\hat{H}_{\mathrm{S}}+\sum_j\langle\hat{B}_j\rangle\hat{S}_j$). Considering Eq.~(\ref{HIdec}) in the interaction picture and expanding $\hat{S}_j(t)$ $\!=$ $\!\sum_k{\epsilon}_{jk}(t)\hat{S}_k$ in terms of [Hermitian, traceless, orthonormalized to $\mathrm{Tr}(\hat{S}_j\hat{S}_k)$ $\!=$ $\!d\,\delta_{jk}$] basis operators $\hat{S}_j$, defines a (real orthogonal) rotation matrix ${\bm{\epsilon}}(t)$ in the system's Hilbert space, with elements \begin{equation} {\epsilon}_{jk}(t) =\bigl\langle\hat{S}_j(t)\hat{S}_k\bigr\rangle_{\mathrm{id}}, \end{equation} where $\langle\cdot\rangle_{\mathrm{id}}$ $\!=$ $\!\mathrm{Tr}[d^{-1}\hat{I}(\cdot)]$. These elements of the matrix ${\bm{\epsilon}}(t)$ may be regarded as the dynamical correlation functions of the basis operators. Analogously, we define a bath correlation matrix ${\bm{\Phi}}(t)$ with elements \begin{equation} {\Phi}_{jk}(t)=\bigl\langle\hat{B}_j(t)\hat{B}_k\bigr\rangle_{\mathrm{B}}. \end{equation} It contains the entire description of the bath behavior in our approximation. Finally, we define a Hermitian matrix ${\bm{\Gamma}}$ with elements \begin{equation} \label{Gm} \Gamma_{kj}=\langle[\hat{S}_j,\hat{P}]\hat{S}_k\rangle, \end{equation} where $\langle\cdot\rangle$ $\!=$ $\!\mathrm{Tr}[\hat{\varrho}(0)(\cdot)]$. The matrix ${\bm{\Gamma}}$ may be understood as a representation of the gradient $\hat{P}$ with respect to the chosen basis operators $\hat{S}_j$. Finally, we define the bath and (finite-time) system spectra according to \begin{eqnarray} \label{Gdef} {\bm{G}}(\omega) &=&\int_{-\infty}^{\infty}\!\mathrm{d}t\;\mathrm{e}^{\mathrm{i}\omega{t}}\; {\bm{\Phi}}_{}(t), \\ {\bm{\epsilon}_t}(\omega) &=&\frac{1}{\sqrt{2\pi}}\int_{0}^t\mathrm{d}\tau\, \mathrm{e}^{\mathrm{i}\omega\tau}{\bm{\epsilon}}(\tau). 
\end{eqnarray} This allows to express the score, Eq.~(\ref{deltaP}), as the matrix overlap \begin{eqnarray} {P}&=&\iint_{0}^{t}\mathrm{d}t_1\mathrm{d}t_2 \mathrm{Tr}[{\bm{\epsilon}}^T(t_1){\bm{\Phi}}(t_1-t_2){\bm{\epsilon}}(t_2) {\bm{\Gamma}}] \label{soverl1} \\ &=&\int_{-\infty}^{\infty}\!\!\mathrm{d}\omega\, \mathrm{Tr}[{\bm{\epsilon}_t}^\dagger(\omega) {\bm{G}}(\omega){\bm{\epsilon}_t}(\omega){\bm{\Gamma}}] \label{soverl2} \\ &=&t\int_{-\infty}^{\infty}\!\!\mathrm{d}\omega\, \mathrm{Tr}[{\bm{F}_t}(\omega){\bm{G}}(\omega)]. \label{soverl3} \end{eqnarray} In Eq.~(\ref{soverl3}) we have used the cyclic property of the trace to write the spectral overlap in a more compact form by combining the rotation matrix spectra ${\bm{\epsilon}_t}(\omega)$ and the gradient representation ${\bm{\Gamma}}$ to a system spectral matrix \begin{equation} {\bm{F}_t}(\omega)=\frac{1}{t} {\bm{\epsilon}_t}(\omega){\bm{\Gamma}}{\bm{\epsilon}_t}^\dagger(\omega). \end{equation} Analogously, Eq.~(\ref{soverl1}) can be written in a more compact form by introducing the matrix \begin{equation} \label{dm} {\bm{R}}(t_1,t_2) ={\bm{\epsilon}}^T(t_1){\bm{\Phi}}(t_1-t_2) {\bm{\epsilon}}(t_2). \end{equation} Equation~(\ref{soverl3}) is as a generalization of \citet{kof05} and demonstrates that the change $P$ over a given time $t$ is determined by the spectral overlap between system and bath dynamics, analogously to DCM \cite{gkl08}, or the measurement induced quantum Zeno and anti-Zeno control of open systems \cite{kof00,kof96}. The bath-spectral matrix ${\bm{G}}(\omega)$ must be positive semi-definite for all $\omega$. If the same holds for the matrix ${\bm{F}_t}(\omega)$, then $P$ is always positive. Below we will consider such a case where $P$ reflects a gate error and the goal is then to minimize this error. The spectral overlap Eq.~(\ref{soverl3}) can be made as small as desired by a rapid modulation of the system, such that the entire weight of the system spectrum is shifted beyond that of the bath, which is assumed to vanish for sufficiently high frequencies. Since this fast modulation may cause unbounded growth of the system energy, a meaningful posing of the problem requires a constraint. In general, ${\bm{F}_t}(\omega)$ is Hermitian but need not necessarily be positive semi-definite, depending on the choice of score as encoded in ${\bm{\Gamma}}$. This reflects the fact that $P$ can increase or decrease over $t$. Depending on the application, our goal can therefore also be to maximize $P$ with positive and negative sign. In what follows we will consider the question how to find a system dynamics that optimizes the score. \section{\label{sec3} Euler-Lagrange Optimization} \subsection{Role of control, score and constraint} Our considerations in the previous section suggest to define our control problem in terms of a triple $(\bm{f},P,E)$ consisting of a control $\bm{f}$, a score $P$, and a constraint $E$. The \emph{control} is a set of real parameters $f_l$, which have been combined to a vector $\bm{f}$. These can either be timings, amplitudes, and/or phases of a given number of discrete pulses, or describe a time-continuous modulation of the system. Here, we focus on time-dependent control, where the $f_l(\tau)$ parametrize the system Hamiltonian as $\hat{H}_{\mathrm{S}}$ $\!=$ $\!\hat{H}_{\mathrm{S}}[{\bm{{f}}}(\tau)]$, or the unitary evolution operator $\hat{U}(\tau)$ $\!=$ $\!T_+ \mathrm{e}^{-\mathrm{i}\int_0^\tau\mathrm{d}\tau^\prime\,\hat{H}_{\mathrm{S}} (\tau^\prime)}$ $\!\equiv$ $\!\hat{U}[{\bm{{f}}}(\tau)]$. 
A direct parametrization of $\hat{U}$ avoids the need of time-ordered integration of its exponent. The $\hat{U}(\tau)$ thus obtained \cite{cla10} can be then used to calculate the system Hamiltonian $\hat{H}_{\mathrm{S}}(\tau)$ $\!=$ $\!\mathrm{i}[\frac{\partial}{\partial{t}} \hat{U}(\tau)]\hat{U}^\dagger(\tau)$. Two explicit examples of the score $P$ pertain to the fidelity of $\hat{P}$ with a given pure state $F_{\Psi}$ $\!=$ $\!\langle\Psi|\hat{\varrho}|\Psi\rangle$, (for which $\hat{P}$ $\!=$ $\!|\Psi\rangle\langle\Psi|$), or to the von Neumann entropy which we can approximate (for nearly pure states) by the linear entropy, $S$ $\!=$ $\!-k\mathrm{Tr}(\hat{\varrho}\mathrm{ln}\hat{\varrho})$ $\!\approx$ $\!S_{\mathrm{L}}$ $\!=$ $\!k[1-\mathrm{Tr}(\hat{\varrho}^2)]$, [for which $\hat{P}$ $\!=$ $\!-2k\hat{\varrho}(0)$]. The latter score can be used to maximize the fidelity with the maximally mixed state $\hat{\varrho}$ $\!\sim$ $\!\hat{I}$ (for which $S_{\mathrm{L}}$ becomes maximum), or to maximize the concurrence $C_{|\Psi_{\mathrm{AB}}\rangle}$ $\!=$ $\!\sqrt{2(1-\mathrm{Tr}\hat{\varrho}_{\mathrm{A}}^2)}$, $\hat{\varrho}_{\mathrm{A}}$ $\!=$ $\!\mathrm{Tr}_{\mathrm{B}} |\Psi_{\mathrm{AB}}\rangle\langle\Psi_{\mathrm{AB}}|$, as a measure of entanglement of a pure state $|\Psi_{\mathrm{AB}}\rangle$ of a bipartite system. If a \emph{constraint} is required to ensure the existence of a finite (physical) solution, its choice should depend on the most critical source of error. An example is the average speed with which the controls change, $E$ $\!=$ $\!\int_{0}^{t}\mathrm{d}{\tau}\,\dot{{\bm{{f}}}}^2({\tau})$, which depend on the control bandwidth in the spectral domain. A parametrization-independent alternative is the mean square of the modulation energy, $E_{}$ $\!=$ $\!\int_{0}^{t}\!\mathrm{d}{\tau}\, \bigl\langle(\Delta\hat{H})^2({\tau}) \bigr\rangle_{\mathrm{id}}$, where $\langle\cdot\rangle_{\mathrm{id}}$ refers to a maximally mixed state and hence to a state-independent norm, and $\Delta\hat{H}$ is the difference between the modulated and unmodulated (natural) system Hamiltonians. \subsection{A projected gradient search} We want to find controls $\bm{f}$ that optimize a score $P(\bm{f})$ subject to a constraint $E(\bm{f})$. A numerical local optimization can be visualized in parameter space as shown in Fig.~\ref{fig2}. \begin{figure}\label{fig2} \end{figure} We start at some initial point $\bm{f}_0$ for which $E(\bm{f}_0)$ is the desired value of the constraint. Simply following the gradient $\bm{\delta}P$ would maximize or minimize $P$, but also change $E$. To optimize $P$ while keeping $E$ constant, we therefore move along the projection of $\bm{\delta}P$ orthogonal to $\bm{\delta}E$, i.e., along $\bm{\delta}P_{\perp}$ $\!=$ $\!\bm{\delta}P$ $\!-$ $\!\frac{\bm{\delta}P\cdot\bm{\delta}E}{(\bm{\delta}E)^2}\bm{\delta}E$. Since the gradients depend on $\bm{f}$, the iteration consists of small steps $\bm{f}_{n+1}$ $\!=$ $\!\bm{f}_n$ $\!\pm$ $\!\epsilon\bm{\delta}P_{\perp}(\bm{f}_n)$, $\epsilon$ $\!\ll$ $\!1$. Assuming that neither $\bm{\delta}P$ nor $\bm{\delta}E$ vanish, the iteration will come to a halt where $\bm{\delta}P_{\perp}$ vanishes, because the gradients are parallel, \begin{equation} \label{ELG} {\bm{\delta}}P =\lambda{\bm{\delta}}E. \end{equation} This condition constitutes the Euler-Lagrange (EL) equation of the extremal problem, with the proportionality constant $\lambda$ being the Lagrange multiplier. Its concrete form depends on the choice of $P$ and $E$. 
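As an illustration only (not part of the original text), the projected-gradient iteration described above can be sketched in a few lines; the gradients of the score and of the constraint are assumed to be supplied by the user, either from a model of the bath or from measured data, and all names below are placeholders:
\begin{verbatim}
import numpy as np

def projected_gradient_search(f0, grad_P, grad_E, eps=1e-3,
                              steps=10000, maximize=True, tol=1e-9):
    # Follow the component of the score gradient that is orthogonal to
    # the constraint gradient, so that E(f) stays (approximately) fixed.
    # grad_P(f) and grad_E(f) return the discretized gradients of the
    # score P and of the constraint E at the control vector f.
    f = np.asarray(f0, dtype=float).copy()
    sign = 1.0 if maximize else -1.0
    for _ in range(steps):
        dP, dE = grad_P(f), grad_E(f)
        dP_perp = dP - (dP @ dE) / (dE @ dE) * dE
        if np.linalg.norm(dP_perp) < tol:
            break  # gradients parallel: the Euler-Lagrange condition
        f = f + sign * eps * dP_perp
    return f
\end{verbatim}
Because each step moves orthogonally to $\bm{\delta}E$, the constraint is violated only to second order in the step size $\epsilon$.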
Since the solutions of the EL optimization represent local optima of the constrained $P$, we may repeat the search with randomly chosen $\bm{f}_0$ a number of times and select the best solution. The gradients at each point $\bm{f}_n$ may be obtained either from a calculation based on prior knowledge of the bath or experimentally from data measured in real time. A discretization of the time interval $0$ $\!\le$ $\!\tau$ $\!\le$ $\!t$ then reduces the variational $\bm{\delta}$ to a finite-dimensional vector gradient $\bm{\nabla}$. \section{Gate protection with BOMEC} \label{sec4} \subsection{Gate error as average fidelity decline} A particular application of our formalism is decoherence protection of a given quantum operation by bath-optimal minimal-energy control (BOMEC) \cite{gkl08,cla10}. Consider the implementation of a predetermined quantum gate, i.e., a unitary operation within a given ``gate time'' $t$. It is sufficient to consider a pure input state $|\Psi\rangle$. In the interaction picture with respect to the desired gate operation and in the absence of bath effects, we should therefore observe at time $t$ the initial state $|\Psi\rangle$. The quantity of interest here is the fidelity $\langle\Psi|\hat{\varrho}(t)|\Psi\rangle$, and we use the projector $\hat{P}$ $\!=$ $\!\hat{\varrho}(0)$ $\!=$ $\!|\Psi\rangle\langle\Psi|$ as the gradient operator, so that Eq.~(\ref{condition}) is satisfied and Eq.~(\ref{deltaP}) gives the fidelity change as the score \begin{equation} {P}=\langle\Psi|\Delta\hat{\varrho}|\Psi\rangle =-t^2\bigl\langle \langle\Psi|\hat{H}^2|\Psi\rangle-\langle\Psi|\hat{H}|\Psi\rangle^2 \bigr\rangle_{\mathrm{B}}, \end{equation} where $\hat{H}$ is defined in Eq.~(\ref{deltaP}). Since a quantum gate is supposed to act on an unknown input state, we need to get rid of the dependence on $|\Psi\rangle$. One possibility is to perform a uniform average over all $|\Psi\rangle$. We may apply \begin{equation} \label{dankert} \overline{\langle\Psi|\hat{A}|\Psi\rangle\langle\Psi|\hat{B}|\Psi\rangle} =\frac{\mathrm{Tr}\hat{A}\hat{B}+\mathrm{Tr}\hat{A}\mathrm{Tr} \hat{B}}{d(d+1)} \end{equation} \cite{dan05,sto05} which gives the average \begin{equation} \label{deltaPbar} \overline{{P}} =-t^2\frac{d}{d+1} \bigl\langle\hat{H}^2\bigr\rangle_{\mathrm{id}}, \end{equation} where $\langle\cdot\rangle_{\mathrm{id}}$ $\!=$ $\!\mathrm{Tr}[d^{-1}\hat{I}\otimes\hat{\varrho}_{\mathrm{B}}(\cdot)]$. In Eq.~(\ref{deltaPbar}) we have used $\mathrm{Tr}_{\mathrm{S}}\hat{H}$ $\!=$ $\!\hat{0}$, which corresponds to $\mathrm{Tr}\hat{S}_j$ $\!=$ $\!0$ in Sec.~\ref{sec2c}. Because of this, and because Eq.~(\ref{symmcond}) implies $\langle\hat{H}\rangle_{\mathrm{B}}$ $\!=$ $\!\langle\hat{H}_{\mathrm{I}}\rangle_{\mathrm{B}}$ $\!=$ $\!\hat{0}$, we have $\bigl\langle\hat{H}\bigr\rangle_{\mathrm{id}}$ $\!=$ $\!0$, and Eq.~(\ref{deltaPbar}) also describes the variance $\mathrm{Var}(\hat{H})$ $\!=$ $\!\bigl\langle\hat{H}^2\bigr\rangle_{\mathrm{id}}$ $\!-$ $\!\bigl\langle\hat{H}\bigr\rangle_{\mathrm{id}}^2$. On the other hand, since the $\hat{P}$ $\!=$ $\!-2k\hat{\varrho}(0)$ that gives the change $\Delta{S}$ of the entropy $S$ $\!=$ $\!-k\mathrm{Tr}(\hat{\varrho}\mathrm{ln}\hat{\varrho})$ is (up to a proportionality factor of $-2k$) the same as the $\hat{P}$ used here to give the change of fidelity, we have $\Delta{S}$ $\!=$ $\!-2k{P}$.
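As a quick numerical sanity check of the state-averaging identity Eq.~(\ref{dankert}) (added here for illustration, not part of the original text), both sides can be compared for Haar-random pure states; the sketch below assumes only NumPy:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d = 3
# Two random Hermitian operators A, B on a d-dimensional space.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = A + A.conj().T
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
B = B + B.conj().T

def haar_state():
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

samples = [haar_state() for _ in range(50000)]
lhs = np.mean([np.vdot(v, A @ v).real * np.vdot(v, B @ v).real
               for v in samples])
rhs = (np.trace(A @ B) + np.trace(A) * np.trace(B)).real / (d * (d + 1))
print(lhs, rhs)  # the two values agree up to Monte Carlo error
\end{verbatim}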
If we define a \emph{gate error} $\mathcal{E}$ as the average fidelity decline, $\mathcal{E}$ $\!=$ $\!-\overline{{P}}$, with $\overline{{P}}$ given in Eq.~(\ref{deltaPbar}), we can summarize the following proportionalities: gate error $\equiv$ average fidelity decline $\sim$ average entropy increase (purity decline) $\overline{\Delta{S}}$ $\sim$ square (variance) of the average interaction energy $\hat{H}$: \begin{equation} \label{ge} \mathcal{E}\equiv-\overline{{P}} =\frac{\overline{\Delta{S}}}{2k} =t^2\frac{d}{d+1} \bigl\langle\hat{H}^2\bigr\rangle_{\mathrm{id}} =t^2\frac{d}{d+1}\mathrm{Var}(\hat{H}). \end{equation} In the matrix representation of Sec.~\ref{sec2c}, the average over the initial states in the matrix ${\bm{\Gamma}}$ defined in Eq.~(\ref{Gm}), gives $\overline{\bm{\Gamma}}$ $\!=$ $\!-\frac{d}{d+1}\bm{I}$ [using $\mathrm{Tr}(\hat{S}_j\hat{S}_k)$ $\!=$ $\!d\,\delta_{jk}$ and $\mathrm{Tr}\hat{S}_j$ $\!=$ $\!0$], so that \begin{equation} \mathcal{E}=\frac{d}{d+1}\int_{-\infty}^{\infty}\!\!\mathrm{d}\omega\, \mathrm{Tr}[\bm{\epsilon}_t(\omega)\bm{\epsilon}_t^\dagger(\omega) \bm{G}(\omega)] \end{equation} in agreement with \cite{cla10} [except a different normalization $\mathrm{Tr}(\hat{S}_j\hat{S}_k)$ $\!=$ $\!2\delta_{jk}$ leading there to a prefactor $\frac{2}{d+1}$]. From the requirement that $\mathcal{E}$ $\!\ge$ $\!0$ must hold for any positive semi-definite matrix $\bm{\epsilon}_t(\omega)\bm{\epsilon}_t^\dagger(\omega)$, we conclude that $\bm{G}(\omega)$ must be a positive semi-definite matrix for any $\omega$. The task of BOMEC is then to find a system evolution $\hat{U}(\tau)$ (cf. the control examples in the previous section) that minimizes $\mathcal{E}$, subject to the boundary condition that the final $\hat{U}(t)$ is the desired gate. \subsection{Comparison of BOMEC with DD} It is appropriate to compare the effect of dynamical decoupling (DD) \cite{vio99,uhr07} with that of BOMEC. DD does not change with the bath spectrum $\bm{G}(\omega)$. With an increasing number of pulses, DD shifts the weight of the system spectrum $\bm{F}(\omega)$ towards higher frequencies, until the overlap Eq.~(\ref{soverl3}) has become sufficiently small. This is illustrated for two different numbers of pulses of periodic DD (PDD, pulses periodic in time), in the upper row of Fig.~\ref{fig3} in the case of a 1D single qubit modulation (i.e., all pulses are given by an arbitrary but fixed Pauli matrix). \begin{figure} \caption{ System modulation spectra $F(\omega)$ generated by two methods of DD. Upper row: periodic dynamical decoupling with $n$ $\pi$-pulses (PDDn), lower row: Uhrig dynamical decoupling with the same number of $\pi$-pulses (UDDn), compared for $n=11$ (left column) and $n=19$ (right column) pulses. } \label{fig3} \end{figure} Aperiodic DD such as UDD \cite{uhr07} suppress low-frequency components (to the left of the main peak) in the system spectrum, which retain the system-bath coupling even if the main peak of the system spectrum has been shifted beyond the bath cutoff frequency (Fig.~\ref{fig3}). The plots indicate that this suppression of low frequency components is achieved at the price of a smaller shift of the main peak, i.e., shifting the main peak beyond a given cutoff requires more pulses in UDD than in PDD. Note that optimized DD sequences with improved asymptotics exist \cite{kuo11}, which we will not consider here. System modulation spectra obtained with BOMEC are shown in Fig.~\ref{fig4}. 
\begin{figure} \caption{ BOMEC-minimization of the gate error for a single qubit $\pi$-gate caused by pure dephasing with a given bath spectrum [$G(\omega)$, bold red line]. The $Z$-components of the obtained system modulation spectra $F_i(\omega)$ are shown for energy constraints $E_i$ $\!=$ $\!0.1+4(i-1)$, $i$ $\!=$ $\!1,2,\ldots,101$, (thin lines, blue to green) individually scaled to 1. } \label{fig4} \end{figure} The plot refers to a qubit subject to pure dephasing (i.e., $Z$-coupling) by a bath whose spectrum $G(\omega)$ has a Lorentzian peak and low-frequency tail. The BOMEC optimizes $\hat{U}(\tau)$ simultaneously for 3D Pauli matrix couplings to the bath (Z, Y and X). The resulting system spectrum $F(\omega)$ is shown for different energy constraints $E$ which are increased in small and equal steps. For low $E$, $F(\omega)$ has a single peak on the left of the bath peak. Increasing $E$ causes a second peak of $F(\omega)$ to emerge on the right of the bath peak, which continues to grow, while the peak on the left diminishes, until for high $E$, only the right peak remains. Fig.~\ref{fig4} hence demonstrates that the spectrum $F(\omega)$ generated by BOMEC changes continuously as $E$ increases, but avoids overlap with the maxima of $G(\omega)$ irrespective of $E$. BOMEC can therefore be superior to all forms of DD including UDD, especially if the bath has high cutoff but bandgaps at low frequencies. \section{Purity control of a qubit} \label{sec5} To give an example of the opposite case, where the goal is to maximize the system-bath coupling, we apply our approach of constrained optimization to the linear entropy $S_{\mathrm{L}}$ $\!=$ $\!2[1-\mathrm{Tr}(\hat{\varrho}^2)]$ of a qubit. [Note that here $S_{\mathrm{L}}$ has been normalized to 1 by setting the coefficient $k$ $\!=$ $\!d/(d-1)$ $\!=$ $\!2$, cf. Sec.~\ref{sec3}.] We assume an initial mixture \begin{equation} \label{defp} \hat{\varrho}(0)=p|1\rangle\langle1|+(1-p)|0\rangle\langle0| \end{equation} of a ground (excited) state $|0\rangle$ ($|1\rangle$), where $0$ $\!\le$ $\!p$ $\!\le$ $\!0.5$ is related to $S_{\mathrm{L}}$ by $p=(1-\sqrt{1-S_{\mathrm{L}}})/2$. With $\hat{S}_j$ $\!=$ $\!\hat{\sigma}_j$ denoting for $d$ $\!=$ $\!2$ the Pauli matrices, Eq.~(\ref{defp}) can be written in terms of $\hat{H}_0$ $\!=$ $\!\frac{\omega_0}{2}\hat{\sigma}_3$ as $\hat{\varrho}(0)$ $\!=$ $\!\frac{\mathrm{e}^{-\beta_{}\hat{H}_0}} {\mathrm{Tr}(\mathrm{e}^{-\beta_{}\hat{H}_0})}$ $\!=$ $\!\frac{|1\rangle\langle1|}{1+\mathrm{e}^{\beta_{}\omega_0}}$ $\!+$ $\!\frac{|0\rangle\langle0|}{1+\mathrm{e}^{-\beta_{}\omega_0}}$, where $\beta_{}$ $\!=$ $\!\frac{\ln(p^{-1}-1)}{\omega_0}$ is the inverse temperature. Purity and temperature are hence related via the energy scale $\omega_0$. Our goal is a constrained optimization of $\Delta{S}_{\mathrm{L}}$, i.e., $\hat{P}$ $\!=$ $\!-4\,\hat{\varrho}(0)$ in Eq.~(\ref{defP}). Unlike the gate error Eq.~(\ref{ge}), $\Delta{S}_{\mathrm{L}}$ can be negative or positive, which can be understood as cooling or heating, respectively. The time evolutions resulting from a minimization of $\Delta{S}_{\mathrm{L}}$ for the initial state Eq.~(\ref{defp}) are illustrated in Fig.~\ref{fig5}. 
\begin{figure} \caption{ Evolution within $0$ $\!\le$ $\!\tau$ $\!\le$ $\!t$ for an optimized cooling with initial $p$ $\!=$ $\!0.25$ of (a) effective and (b) instantaneous controls (red, green, and blue graphs show $x$, $y$, and $z$ component), (c) ground state overlap of the system state, and (d) linear entropy [green, red and blue graph show numerical integration of the Nakajima-Zwanzig equation, time-convolutionless equation, and second order approximation of the solution \cite{bookBreuer}, which are indistinguishable in (c)] (system-bath coupling strength $\kappa$ $\!=$ $\!10^{-2}$, $\omega_0$ $\!=$ $\!\frac{2\pi}{t}$, $t$ $\!=$ $\!10$, $E$ $\!=$ $\!100$). } \label{fig5} \end{figure} The $f_j$ shown in Fig.~\ref{fig5}(a) are defined by $\hat{U}(\tau)$ $\!=$ $\!\mathrm{e}^{-\frac{\mathrm{i}}{2}f_3(\tau)\hat{\sigma}_3} \mathrm{e}^{-\frac{\mathrm{i}}{2}f_2(\tau)\hat{\sigma}_2} \mathrm{e}^{-\frac{\mathrm{i}}{2}f_1(\tau)\hat{\sigma}_3}$, whereas the $\omega_j$ shown in Fig.~\ref{fig5}(b) are given by $\hat{H}_{\mathrm{S}}(\tau)$ $\!=$ $\!\sum_j\omega_j(\tau)\hat{\sigma}_j$. The chosen constraint $E_{}$ $\!=$ $\!\frac{1}{2}\int_{0}^{t}\!\mathrm{d}\tau \mathrm{Tr}({H}_{\mathrm{S}}\!-\!\hat{H}_0)^2(\tau)$ can be written in terms of the $f_j$ as $E_{}$ $\!=$ $\!\frac{1}{4}\int_{0}^{t}\mathrm{d}{t_1}\; [\dot{f_1}^2+\dot{f_2}^2+(\dot{f_3}-\omega_0)^2 +2\dot{f_1}(\dot{f_3}-\omega_0)\cos{f_2}]$. The overlap between the evolving system state $\hat{\varrho}(\tau)$ (in the Schr\"odinger picture) and the ground state $|0\rangle$ shown in Fig.~\ref{fig5}(c) indicates the fast unitary system modulation through short time population inversions without significantly altering the state purity as verified in Fig.~\ref{fig5}(d). This can be visualized as fast $\pi$-rotations of the state inside the Bloch sphere, which, together with smaller rotations, here result in the final reduction of $S_{\mathrm{L}}(t)$ seen in Fig.~\ref{fig5}(d). Fig.~\ref{fig5}(d) also confirms that for the chosen time and coupling strength, differences between various methods of approximation are small. In contrast to gate protection, no initial-state averaging is performed here, i.e., Eq.~(\ref{defp}) is known. Consequently, as Fig.~\ref{fig6} shows, \begin{figure} \caption{ Cooling and heating of a TLS by minimization (left: $a,b,c$) and maximization (right: $g,h,i$) of the change of linear entropy for a given bath spectrum (red, $G_j$) with $j=1,2,3$ denoting $x$, $y$ and $z$ component. The optimized system spectra ($F_j$, green) are shown for an energy constraint $E$ $\!=$ $\!10^2$ (left) and $E$ $\!=$ $\!10^3$ (right) as contrasted to the unmodulated Hamiltonian $\hat{H}_0$ (i.e., for $E$ $\!=$ $\!0$, middle: $d,e,f$) and different initial states with $p$ $\!=$ $\!0.001$, $0.05$, $0.1$, $\ldots$, $0.45$, $0.499$. } \label{fig6} \end{figure} the relevant components $F_{j}$ $\!\equiv$ $\!(\bm{F}_t)_{jj}(\omega)$ of the system modulation spectrum contributing to the spectral overlap Eq.~(\ref{soverl3}) depend on the initial state $\hat{\varrho}(0)$ via the matrix $\bm{\Gamma}$ Eq.~(\ref{Gm}). [We assume an uncorrelated bath, i.e., $G_{jk}(\omega)$ $\!=$ $\!0$ for $j$ $\!\neq$ $\!k$ and $G_j$ $\!\equiv$ $\!G_{jj}(\omega)$.] This influence is clearly visible in case of a constant (unmodulated, i.e., free) Hamiltonian (middle column), for which we set $\omega_0$ $\!=$ $\!\frac{2\pi}{t}$ with the final $t$ being in the order of the bath correlation time. 
Cooling (heating) is achieved by realizing negative (positive) $\Delta{S}_{\mathrm{L}}$ via maximum negative (positive) spectral overlap, as shown in the left (right) column of Fig.~\ref{fig6}. This is the opposite of system-bath decoupling, where the goal is to minimize the overlap. The plots illustrate the role of the energy constraint $E$: increasing $E$ allows one to establish overlap with higher frequency components of the bath spectrum. This also suggests that for a bath spectrum with a finite frequency cutoff, increasing $E$ beyond a certain saturation value will not lead to further improvement of the optimization, cf. the general considerations in App.~\ref{secA1}. In the time domain, however, increasing $E$ leads to more rapid changes in the physical Hamiltonian, requiring higher resolution of the numerical treatment. On the contrary, for attempted cooling (heating) by minimization (maximization) of $\Delta{S}_{\mathrm{L}}$, a given $E$ may be too small to lead to negative (positive) $\Delta{S}_{\mathrm{L}}$. The obtained $\Delta{S}_{\mathrm{L}}$ may then be understood as ``reduced heating'' (``reduced cooling'') as compared to a $\Delta{S}_{\mathrm{L}}$ obtained with an unmodulated $\hat{H}_0$. This is shown in Fig.~\ref{fig7}. \begin{figure}\label{fig7} \end{figure} The figure also illustrates once more that the $\hat{\varrho}(0)$-dependence of the spectra shown in Fig.~\ref{fig6} is accompanied by a $\hat{\varrho}(0)$-dependence of the achievable change $\Delta{S}_{\mathrm{L}}$ for a given bath. For a maximally mixed state in particular ($p$ $\!=$ $\!0.5$), the matrix $\bm{\Gamma}$ Eq.~(\ref{Gm}) vanishes and with it $\Delta{S}_{\mathrm{L}}$. A possibility to achieve negative (positive) $\Delta{S}_{\mathrm{L}}$ by its minimization (maximization) even for weak modulation (i.e., for small $E$) is to adapt the temperature of the bath such that for an undriven system Hamiltonian $\hat{H}_0$, no change is observed, $\Delta{S}_{\mathrm{L}}$ $\!=$ $\!0$, which is a necessary condition for a system-bath equilibrium. This would require non-unitary system modulation, e.g., the effect of repeated measurements \cite{ere08,jah11,all11}. \section{Summary and outlook} \label{sec6} \emph{Peculiarity of the approach}: In summary, we have considered a way of finding a time-dependence of the system Hamiltonian over a fixed time interval such that a given system observable attains a desired value at the end of this interval. The peculiarity of our approach is that it relies on knowledge of the bath coupling spectrum and adapts the spectrum of the system modulation to it. This allows one to adjust the modulation to bandgaps or peaks in the bath coupling spectrum. In contrast to dynamic decoupling of system and bath, which can be achieved by shifting the entire system-modulation spectrum beyond some assumed bath cutoff frequency, an enhancement of the coupling requires more detailed knowledge of the peak positions of the bath spectrum. In this way, our approach may comprise suppression and enhancement of the system-bath coupling in a unified way for executing more general tasks than decoherence suppression. The same approach can also be applied to map out the bath spectrum by measuring the coherence decay rate for a narrow-band modulation centered at different frequencies \cite{alm11}.
As far as the controls are concerned, we here consider time-continuous modulation of the system Hamiltonian, which allows for vastly more freedom compared to control that is restricted to stroboscopic pulses as in DD \cite{vio99,uhr07,kuo11}. We do not rely on rapidly changing control fields that are required to approximate stroboscopic $\pi$-pulses. These features allow efficient optimization under energy constraint. On the other hand, the generation of a sequence of well-defined pulses may be preferable experimentally. We may choose the pulse timings and/or areas as continuous control-parameters and optimize them with respect to a given bath spectrum. Hence, our approach encompasses both pulsed and continuous modulation as special cases. \emph{Open issues}: An open issue of the approach is the inclusion of higher orders in the system-bath coupling, which becomes important for strong or resonant system-bath coupling, so that a perturbative expansion cannot be applied. This may be the case especially when this coupling is to be enhanced in order to achieve a non-unitary operation (e.g. cooling), since in this case an optimization of the coupling may take us out of the domain of validity of the entire approach. Another concern regards the initial conditions. Here we have assumed a factorized initial state of the system and bath. This prevents us from taking into account system-bath-interactions that may have occurred prior to that time. In particular, if the system is in equilibrium with the bath, their states are entangled or correlated \cite{ere08,gorennjp12}. An immediate problem of both higher order coupling and system-bath correlations is that their consideration requires knowledge of the corresponding parameters. It may be difficult to obtain such data with sufficient experimental precision. Moreover, its consideration renders the theory cumbersome and the intuition gained from the spectral overlap approach presented here is lost. A way out is offered by replacing the ``open'' iteration loop in Fig.~\ref{fig2} with a ``closed'' loop \cite{bie09}, where the calculation of the score, constraint and their gradients are based on actual measurements performed on the controlled system in real time rather than on prior model assumptions, i.e., knowledge of bath properties. Such closed loop control would allow efficient optimization, but at the cost of losing any insight into the physical mechanisms behind the result obtained. From a fundamental point of view, it is interesting to derive analytic bounds of a desired score (under a chosen constraint) and see if this bound can be achieved by means of some (global) optimization, i.e., if the bound is tight. The need for a constraint in such optimization is not obvious if the task requires coupling enhancement, especially when the bath spectrum has a single maximum (Appendix A). \begin{acknowledgments} The support of the EU (FET Open Project MIDAS), ISF and DIP is acknowledged \end{acknowledgments} \appendix \section{Bound estimation} \label{secA1} Our goal is to give constraint-independent upper and lower bounds for the maximum change ${P}$ $\!=$ $\!\mathrm{Tr}(\hat{P}\Delta\hat{\varrho})$ that can be achieved with a given bath and $\hat{P}$ under the condition (\ref{condition}). We assume that $\bm{\epsilon}$, $\bm{\Gamma}$, and $\bm{G}(\omega)$ are quadratic $(d^2\!-\!1)$-dimensional matrices. 
If $\max{[\mathrm{Tr}{\bm{G}}(\omega)]}$ $\!<$ $\!\infty$, we can estimate ${P}$ by using that $\mathrm{Tr}({\bm{A}}{\bm{B}})$ $\!\le$ $\!\mathrm{Tr}({\bm{A}})\mathrm{Tr}({\bm{B}})$ for positive semi-definite matrices ${\bm{A}}$, ${\bm{B}}$, and applying H\"older's inequality in the form of $\int_{-\infty}^{\infty}\mathrm{d}\omega\,|f(\omega)g(\omega)|$ $\!\le$ $\!\mathrm{sup}\{|g(\omega)|\} \int_{-\infty}^{\infty}\mathrm{d}\omega\,|f(\omega)|$. Decomposing ${\bm{\Gamma}}$ $\!=$ $\!{\bm{\Gamma}}_1$ $\!-$ $\!{\bm{\Gamma}}_2$ into positive semi-definite matrices ${\bm{\Gamma}}_i$ and making use of $\frac{1}{t}\int_{-\infty}^{\infty}\mathrm{d}\omega\, {\bm{\epsilon}_t}^\dagger(\omega){\bm{\epsilon}_t}(\omega)$ $\!=$ $\!\hat{I}$ we thus get \begin{equation} -{{P}}_2\le{{P}}\le{{P}}_1,\quad {{P}}_i=t\, \sup{[\mathrm{Tr}{\bm{G}}(\omega)]}\, \mathrm{Tr}{\bm{\Gamma}}_i. \end{equation} This reveals that for given $t$ and ${\bm{G}}(\omega)$, the bounds ${{P}}_i$ depend on $\hat{\varrho}(0)$ and $\hat{P}$ via ${\bm{\Gamma}}$. \section{\label{secA2} Non-commuting score} If Eq.~(\ref{condition}) does not hold, the following modifications must be made. We denote by ${\bm{A}}_\pm$ $\!=$ $\!({\bm{A}}$ $\!\pm$ $\!{\bm{A}}^\dagger)/2$ the (skew) Hermitian part of a given matrix ${\bm{A}}$ $\!=$ $\!{\bm{A}}_+$ $\!+$ $\!{\bm{A}}_-$ for convenience. Equation~(\ref{deltaP}) must be replaced with \begin{eqnarray} \label{A7} \widetilde{{P}}&=&2\mathrm{Re} \!\!\int_{0}^{t}\!\!\mathrm{d}t_1\!\!\int_{0}^{t_1}\!\!\mathrm{d}t_2\, \bigl\langle[\hat{H}_{\mathrm{I}}(t_1),\hat{P}] \hat{H}_{\mathrm{I}}(t_2)\bigr\rangle \\ &=&{P} +\!\!\int_{0}^{t}\!\!\mathrm{d}t_1\!\!\int_{0}^{t_1}\!\!\mathrm{d}t_2\, \mathrm{Tr}\{[\hat{\varrho}_{\mathrm{tot}}(0),\hat{P}] \hat{H}_{\mathrm{I}}(t_2)\hat{H}_{\mathrm{I}}(t_1)\}, \nonumber\\ \end{eqnarray} where $\langle\cdot\rangle$ $\!=$ $\!\mathrm{Tr}[\hat{\varrho}_{\mathrm{tot}}(0)(\cdot)]$ and ${P}$ is defined in Eq.~(\ref{deltaP}). Equivalently, we can write \begin{eqnarray} \widetilde{{P}}&=& \!\!2\!\!\int_{0}^{t}\!\!\!\mathrm{d}t_1\!\!\!\int_{0}^{t_1}\!\!\!\! \mathrm{d}t_2\, \mathrm{Tr}[\!{\bm{R}}_+(t_1,t_2){\bm{\Gamma}}_+ \!\!+\!\!{\bm{R}}_-(t_1,t_2){\bm{\Gamma}}_-\!]\quad\quad \\ &=&{P} -2\!\int_{0}^{t}\!\!\mathrm{d}t_1\!\!\int_{0}^{t_1}\!\!\mathrm{d}t_2\, \mathrm{Tr}[{\bm{R}}^\dagger(t_1,t_2){\bm{\Gamma}}_-], \end{eqnarray} where ${P}$ is given by Eq.~(\ref{soverl1}) together with Eq.~(\ref{dm}). In the spectral domain, the analogous expression is \begin{eqnarray} \widetilde{{P}} &=&2t\,\mathrm{Re}\!\int_{-\infty}^{\infty}\!\!\mathrm{d}\omega\, \mathrm{Tr}[{\bm{F}}_t(\omega) {\bm{\mathcal{G}}}(\omega)] \\ &=&{P}-2t\int_{-\infty}^{\infty}\!\!\mathrm{d}\omega\, \mathrm{Tr}[{\bm{F}}_-(\omega){\bm{\mathcal{G}}}^\dagger(\omega)], \end{eqnarray} where ${P}$ is given in Eq.~(\ref{soverl3}) and \begin{equation} {\bm{\mathcal{G}}}(\omega) =\int_{0}^{\infty}\!\mathrm{d}t\,\mathrm{e}^{\mathrm{i}\omega{t}}\, {\bm{\Phi}}_{}(t), \end{equation} is related to Eq.~(\ref{Gdef}) by ${\bm{G}}(\omega)$ $\!=$ $\!2{\bm{\mathcal{G}}}_+(\omega)$. \end{document}
arXiv
Minimize this real function on $\mathbb{R}^{2}$ without calculus? When it comes to minimizing a differentiable real function, calculus comes into play immediately. If $f: (x,y) \mapsto (x+y-1)^{2} + (x+2y-3)^{2} + (x+3y-6)^{2}$ on $\mathbb{R}^{2}$, and if one is asked to find the minimum of $f$ along with the minimizer(s), is it possible to do that without calculus? The three equations do not admit a common solution; besides, I was not seeing an elementary inequality that might be useful at this point. Although this question itself may not be very interesting, I am interested in knowing an elegant way for the (more or less recreational) minimization. inequality optimization recreational-mathematics cauchy-schwarz-inequality discriminant MegadethMegadeth $\begingroup$ It is a least squares minimisation problem with an algebraic solution. But calculus is used implicitly. $\endgroup$ – copper.hat Jun 4 at 15:00 $\begingroup$ @copper.hat, Hi, thanks. Yes, I noticed that too. Wondering if a brute force method exists. $\endgroup$ – Megadeth Jun 4 at 15:01 $\begingroup$ Well, the solution is $(A^TA)^{-1} Ab$ with $A,b$ taken from above. But that has implicit calculus. $\endgroup$ – copper.hat Jun 4 at 15:05 $\begingroup$ If you are willing to countenance geometry, there is a nice answer. I have elaborated below. $\endgroup$ – copper.hat Jun 4 at 16:08 $\begingroup$ @GaryMoore How about my solution? $\endgroup$ – user679470 Jun 5 at 3:17 \begin{align*} f(x,y)&=3x^2+12xy+14y^2-20x-50y+46\\ &=3(x+2y)^2+2y^2-20(x+2y)-10y+46\\ &=\frac13(3x+6y-10)^2+2y^2-10y+\frac{38}3\\ &=\frac13(3x+6y-10)^2+\frac12(2y-5)^2+\frac16 \end{align*} The minimum value is $\dfrac16$. It happens when $\displaystyle (x,y)=\left(-\dfrac53,\dfrac52\right)$. CY AriesCY Aries $\begingroup$ Arriving at the first equality seems inevitable, though annoying. $\endgroup$ – Megadeth Jun 4 at 15:45 In general, any quadratic function $\ f\ $ on $\ \mathbb{R}^n\ $ can be written as $$ f\left(x\right) = x^\top A x + b^\top x + c\ , $$ where $\ A\ $ is a symetric $\ n\times n\ $ matrix, $\ b\ $ an $\ n\times 1\ $ column vector and $\ c\ $ a constant. A minimum exists if and only if $\ A\ $ is positive definite or semidefinite and $\ b\ $ lies in its column space. If these conditions are satisfied, and $\ b=-2 Ax_0\ $, then $$ f\left(x\right) = (x-x_0)^\top A\, (x-x_0) + c-x_0^\top A x_0\ , $$ and has a minimum value $\ c-x_0^\top A x_0\ $ when $\ x=x_0\ $. For the function $\ f\ $ given in the question, $$ f\left(x,y\right) = \pmatrix{x&y}^\top\pmatrix{3&6\\6&14}\pmatrix{x\\y} + \pmatrix{-20&-50}\pmatrix{x\\y}+46\ , $$ and we have $$ \pmatrix{-20\\-50} = -2\pmatrix{3&6\\6&14}\pmatrix{-\frac{5}{3}\\ \frac{5}{2}}\ , $$ leading to the same result as given in the other answers. lonza leggieralonza leggiera 2,75622 gold badges22 silver badges88 bronze badges By C-S $$f(x,y)=\frac{1}{6}(1+4+1)\left((1-x-y)^2+\left(x+2y-3\right)^2+(6-x-3y)^2\right)\geq$$ $$=\frac{1}{6}\left(1-x-y+2x+4y-6+6-x-3y\right)^2=\frac{1}{6}.$$ The equality occurs for $$(1,2,1)||(1-x-y,x+2y-3,6-x-3y),$$ id est, for $$(x,y)=\left(-\frac{5}{3},\frac{5}{2}\right),$$ which says that $\frac{1}{6}$ is a minimal value. Michael RozenbergMichael Rozenberg It is possible to minimize this function without using calculus, but this method is going, instead, to use some linear algebra. This is all possible because it's a quadratic form. 
Here are the steps: Expand the function completely to obtain $$f(x,y)=3x^2+12xy+14y^2-20x-50y+46.$$ Now we need a change of coordinates in order to eliminate the $xy$ term. This amounts to a rotation, and the result of this is that we should be able to complete the square separately in $x$ and $y$. We are rotating the axes by an angle $\theta,$ given by $$\cot(2\theta)=\frac{3-14}{12}=-\frac{11}{12}\quad\implies\quad \theta=\frac12\,\operatorname{arccot}\left(-\frac{11}{12}\right).$$ The new coordinates $(x', y')$ will be given by the rotation matrix $$\left[\begin{matrix}x\\y\end{matrix}\right]=\left[\begin{matrix}\cos(\theta) &-\sin(\theta)\\\sin(\theta) &\cos(\theta)\end{matrix}\right]\left[\begin{matrix}x'\\y'\end{matrix}\right]\quad\implies\quad \left[\begin{matrix}x'\\y'\end{matrix}\right]=\left[\begin{matrix}\cos(\theta) &\sin(\theta)\\-\sin(\theta) &\cos(\theta)\end{matrix}\right]\left[\begin{matrix}x\\y\end{matrix}\right] .$$ Note that we can write these out explicitly, since \begin{align*} \cos\left(\frac12\,\underbrace{\operatorname{arccot}\left(-\frac{11}{12}\right)}_{\varphi}\right)&= \underbrace{\operatorname{sgn}\left(\pi+\varphi+4\pi\left\lfloor\frac{\pi-\varphi}{4\pi}\right\rfloor\right)}_{=1}\sqrt{\frac{1+\cos(\varphi)}{2}}\\ &=\sqrt{\frac{1+11/\sqrt{265}}{2}},\\ \sin\left(\frac12\,\operatorname{arccot}\left(-\frac{11}{12}\right)\right)&= \underbrace{\operatorname{sgn}\left(2\pi-\varphi+4\pi\left\lfloor\frac{\varphi}{4\pi}\right\rfloor\right)}_{=-1}\sqrt{\frac{1-\cos(\varphi)}{2}}\\ &=-\sqrt{\frac{1-11/\sqrt{265}}{2}}. \end{align*} The original expression $f(x,y)$ in terms of the new coordinates, becomes $$f(x',y')=-\frac{1}{2} \left(\sqrt{265}-17\right) x'^2-2 \sqrt{50+110 \sqrt{\frac{5}{53}}} x'+5 \sqrt{50-110 \sqrt{\frac{5}{53}}} x'+\frac{1}{2} \left(17+\sqrt{265}\right) y'^2-5 \sqrt{50+110 \sqrt{\frac{5}{53}}} y'-2 \sqrt{50-110 \sqrt{\frac{5}{53}}} y'+46.$$ While this is certainly complicated-looking, notice that there is no cross-term! That's what we needed. Now it's a matter of completing the square separately. This is normally straight-forward, but with this monster, it will be helpful to have some symbolic manipulation (true confessions: I've already used Mathematica on this one to take out some of the tedium). Using the depress function defined here, we obtain the following results. Suppose we define \begin{align*} g(x')&=-\frac{1}{2} \left(\sqrt{265}-17\right) x'^2-2 \sqrt{50+110 \sqrt{\frac{5}{53}}} x'+5 \sqrt{50-110 \sqrt{\frac{5}{53}}} x'\\ h(y')&=\frac{1}{2} \left(17+\sqrt{265}\right) y'^2-5 \sqrt{50+110 \sqrt{\frac{5}{53}}} y'-2 \sqrt{50-110 \sqrt{\frac{5}{53}}} y', \end{align*} not forgetting the $46$ left (actually, we can ignore it later), we can complete the square on these to obtain \begin{align*} g(x')&=\frac{1}{2} \left(17-\sqrt{265}\right) \left(x'+\frac{5 \sqrt{50-110 \sqrt{\frac{5}{53}}}-2 \sqrt{50+110 \sqrt{\frac{5}{53}}}}{17-\sqrt{265}}\right)^2-\frac{5 \left(471 \sqrt{265}-7685\right)}{53 \left(\sqrt{265}-17\right)}\\ h(y')&=\frac{1}{2} \left(17+\sqrt{265}\right) \left(y'+\frac{-2 \sqrt{50-110 \sqrt{\frac{5}{53}}}-5 \sqrt{50+110 \sqrt{\frac{5}{53}}}}{17+\sqrt{265}}\right)^2-\frac{5 \left(7685+471 \sqrt{265}\right)}{53 \left(17+\sqrt{265}\right)}. 
\end{align*} Now we are in a position to minimize the function, because we just minimize the perfect squares to get \begin{align*} x'&=-\frac{5 \sqrt{50-110 \sqrt{\frac{5}{53}}}-2 \sqrt{50+110 \sqrt{\frac{5}{53}}}}{17-\sqrt{265}} \\ y'&=\frac{2 \sqrt{50-110 \sqrt{\frac{5}{53}}}+5 \sqrt{50+110 \sqrt{\frac{5}{53}}}}{17+\sqrt{265}}. \end{align*} Getting back to the original $x$ and $y,$ we have \begin{align*} x&=-\frac53\\ y&=\frac52. \end{align*} The actual minimum value of the function at this point would be $1/6.$ To recap: the mathematics used here, in principle, are matrix rotations, some trigonometry, and completing the square. While this procedure is certainly more complicated-looking than some of the other answers, it is also more algorithmic: just turn the crank. Adrian KeisterAdrian Keister Here is a geometric answer. This is slightly cheating since the duality between planes and normals is essentially what one obtains from the optimality conditions from calculus. Note that $n=(1,-2,1)^T$ is orthogonal to the plane spanning $(1,1,1)^T, (1,2,3)^T$ and we are trying to find the closest point to $b=(1,3,6)^T$. From the closest point we can find $x,y$. The plane is defined by $\{ x | n^T x =0 \}$. Let $p$ denote the closest point. We must have $b-p=tn$ for some $t$. Since $b-p$ is orthogonal to the plane, we have $n^Tp = 0$, or $t = {n^Tb \over n^T n} = {1 \over 6}$ and so $p={1 \over 6}(5,20,35)^T$. Now we can solve for $x,y$ to get $(x,y)^T = {1 \over 6}(-10,15)^T$. copper.hatcopper.hat See How to Find the Vertex of a Quadratic Equation. $\tag 1 f(x,y) = 3 x^2 + 4 x (3 y - 5) + 2 (7 y^2 - 25 y + 23)$ $$\tag 2 x = \frac{-4(3y-5)}{6}$$ (Vertex = $\frac{-b}{2a}$). and plug back into $\text{(1)}$, giving $M(y) = 1/2 (2 y - 5)^2 + 1/6$ as the quantity to be minimized. So at $y = \frac{5}{2}$ the minimum of $\frac{1}{6}$ is achieved. Plugging $\frac{5}{2}$ into $\text{(2)}$ (certainly easier than using $\text{(1)}$ again), we get $$\tag 3 x = \frac{-4(3(\frac{5}{2})-5)}{6} = -\frac{5}{3}$$ $$ (x,y) = (-\frac{5}{3},\frac{5}{2})$$ CopyPasteItCopyPasteIt $\begingroup$ Another way is to find discriminant of the quadratic equation in $x$ $\endgroup$ – lab bhattacharjee Jun 4 at 15:21 No calculus or cleverness required. Note how he third diagonal element in $D$ is the constant $1/6.$ The whole polynomial is $3 f^2 + 2 g^2 + \frac{1}{6},$ where the coefficients of $f,g$ are given by the first two rows of $Q.$ In this direction, this is usually called Lagrange's method or repeated completing squares. 
$$ Q^T D Q = H $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ 2 & 1 & 0 \\ - \frac{ 10 }{ 3 } & - \frac{ 5 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \frac{ 1 }{ 6 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 2 & - \frac{ 10 }{ 3 } \\ 0 & 1 & - \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 & 6 & - 10 \\ 6 & 14 & - 25 \\ - 10 & - 25 & 46 \\ \end{array} \right) $$ Algorithm discussed at http://math.stackexchange.com/questions/1388421/reference-for-linear-algebra-books-that-teach-reverse-hermite-method-for-symmetr https://en.wikipedia.org/wiki/Sylvester%27s_law_of_inertia $$ H = \left( \begin{array}{rrr} 3 & 6 & - 10 \\ 6 & 14 & - 25 \\ - 10 & - 25 & 46 \\ \end{array} \right) $$ $$ D_0 = H $$ $$ E_j^T D_{j-1} E_j = D_j $$ $$ P_{j-1} E_j = P_j $$ $$ E_j^{-1} Q_{j-1} = Q_j $$ $$ P_j Q_j = Q_j P_j = I $$ $$ P_j^T H P_j = D_j $$ $$ Q_j^T D_j Q_j = H $$ $$ H = \left( \begin{array}{rrr} 3 & 6 & - 10 \\ 6 & 14 & - 25 \\ - 10 & - 25 & 46 \\ \end{array} \right) $$ $$ E_{1} = \left( \begin{array}{rrr} 1 & - 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{1} = \left( \begin{array}{rrr} 1 & - 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{1} = \left( \begin{array}{rrr} 1 & 2 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{1} = \left( \begin{array}{rrr} 3 & 0 & - 10 \\ 0 & 2 & - 5 \\ - 10 & - 5 & 46 \\ \end{array} \right) $$ $$ E_{2} = \left( \begin{array}{rrr} 1 & 0 & \frac{ 10 }{ 3 } \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{2} = \left( \begin{array}{rrr} 1 & - 2 & \frac{ 10 }{ 3 } \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{2} = \left( \begin{array}{rrr} 1 & 2 & - \frac{ 10 }{ 3 } \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{2} = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & 2 & - 5 \\ 0 & - 5 & \frac{ 38 }{ 3 } \\ \end{array} \right) $$ $$ E_{3} = \left( \begin{array}{rrr} 1 & 0 & 0 \\ 0 & 1 & \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) $$ $$ P_{3} = \left( \begin{array}{rrr} 1 & - 2 & - \frac{ 5 }{ 3 } \\ 0 & 1 & \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; Q_{3} = \left( \begin{array}{rrr} 1 & 2 & - \frac{ 10 }{ 3 } \\ 0 & 1 & - \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) , \; \; \; D_{3} = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \frac{ 1 }{ 6 } \\ \end{array} \right) $$ $$ P^T H P = D $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ - 2 & 1 & 0 \\ - \frac{ 5 }{ 3 } & \frac{ 5 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 & 6 & - 10 \\ 6 & 14 & - 25 \\ - 10 & - 25 & 46 \\ \end{array} \right) \left( \begin{array}{rrr} 1 & - 2 & - \frac{ 5 }{ 3 } \\ 0 & 1 & \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \frac{ 1 }{ 6 } \\ \end{array} \right) $$ $$ Q^T D Q = H $$ $$\left( \begin{array}{rrr} 1 & 0 & 0 \\ 2 & 1 & 0 \\ - \frac{ 10 }{ 3 } & - \frac{ 5 }{ 2 } & 1 \\ \end{array} \right) \left( \begin{array}{rrr} 3 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & \frac{ 1 }{ 6 } \\ \end{array} \right) \left( \begin{array}{rrr} 1 & 2 & - \frac{ 10 }{ 3 } \\ 0 & 1 & - \frac{ 5 }{ 2 } \\ 0 & 0 & 1 \\ \end{array} \right) = \left( \begin{array}{rrr} 3 & 6 & - 10 \\ 6 & 14 & - 25 \\ - 10 & - 25 & 46 \\ \end{array} \right) $$ answered Jun 6 at 1:29 Will JagyWill Jagy $\begingroup$ OK, for those new to this matrix manipulation - you fill in 3x3 $H$ matrix with the nine entries (well, there is a way) from 
the OP's polynomial $\quad$ $f(x,y)=3x^2+14y^2+46+ 6xy + 6xy -10x-10x-25y-25y$ $\endgroup$ – CopyPasteIt Jun 6 at 2:28 Here's my solution without calculus (not sure how elegant it is though). We make a few changes of variable; first replace $x$ with $x + 3$, and then let $a = x+2y, b = y$. We obtain $(a-b+2)^2 + a^2 + (a+b-3)^2$, and minimising this over $a$ and $b$ allows us to recover $x$ and $y$. Note that we have an $(a-b+2)^2$ term and an $(a+b-3)^2$ term; one has $-b$ and one has $b$, so their sum $2a-1$ does not depend on $b$, and the sum of the two squares is minimised when they are equal, i.e. $b = \frac{5}{2}$, where both squares become $\left(a- \frac{1}{2}\right)^2$. So we now need to minimise $2\left(a- \frac{1}{2}\right)^2 + a^2 = 3a^2 - 2a + \frac{1}{2}$, but since this is a quadratic the minimum $\frac16$ occurs at $a = \frac{1}{3}$, and we simply substitute back to find $x, y$: $y = b = \frac52$ and $x = a - 2b = -\frac{14}{3}$ in the shifted variable, hence $x = -\frac53$ for the original $f$. auscryptauscrypt $$3\,x^{\,2}+ 12\,xy+ 14\,y^{\,2}- 20\,x- 50\,y+ 46- \frac{1}{6}= \frac{1}{3}(\,3\,x+ 5\,)(\,3\,x+ 12\,y- 25\,)+ \frac{7}{2}(\,5- 2\,y\,)^{\,2}$$ $$18(3 x^{ 2}+ 12 xy+ 14 y^{ 2}- 20 x- 50 y+ 46- \frac{1}{6})= 7(3 x+ 6 y- 10)^{ 2}- (3 x+ 5)(3 x+ 12 y- 25)$$ $$\therefore\,3\,x^{\,2}+ 12\,xy+ 14\,y^{\,2}- 20\,x- 50\,y+ 46- \frac{1}{6}\geqq 0$$ $$\because\,{\rm discriminant}[\,3\,x^{\,2}+ 12\,xy+ 14\,y^{\,2}- 20\,x- 50\,y+ 46- \frac{1}{6},\,x\,]= -\,6(\,5- 2\,y\,)^{\,2}\leqq 0$$
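Editor's note (a numerical check added here, not part of the original thread): the closed forms found above — minimum value $1/6$ attained at $(x,y)=(-5/3,\,5/2)$ — can be confirmed with a few lines of NumPy, using the least-squares matrices $A$, $b$ from the geometric answer, the augmented matrix $H$ from the $Q^TDQ$ answer, and the completed-square identity from the first answer. This is only a sketch; the variable names and tolerances are my own choices.

```python
import numpy as np

# Least-squares data: f(x, y) = |A @ (x, y) - b|^2
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 3.0, 6.0])

# Normal equations (A^T A) z = A^T b give the minimizer.
z = np.linalg.solve(A.T @ A, A.T @ b)
residual = A @ z - b
print(z)                      # approx [-1.6667, 2.5]  ==  (-5/3, 5/2)
print(residual @ residual)    # approx 0.16667         ==  1/6

# Augmented (homogeneous) matrix of the quadratic form, as in the Q^T D Q answer.
H = np.array([[  3.0,   6.0, -10.0],
              [  6.0,  14.0, -25.0],
              [-10.0, -25.0,  46.0]])
v = np.array([z[0], z[1], 1.0])
print(v @ H @ v)              # approx 1/6 again

# Completed-square identity from the first answer, checked at a few random points.
rng = np.random.default_rng(0)
for x, y in rng.normal(size=(5, 2)):
    f = (x + y - 1)**2 + (x + 2*y - 3)**2 + (x + 3*y - 6)**2
    g = (3*x + 6*y - 10)**2 / 3 + (2*y - 5)**2 / 2 + 1/6
    assert abs(f - g) < 1e-9
```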
Anti-symmetric forms on Dirac spinors In order to describe invariant forms on Dirac spinors $S$ one can find trivial subrepresentations in $S \otimes S$. If we use $S \cong (1/2, 0) \oplus (0, 1/2)$ then \begin{multline} [(1/2, 0) \oplus (0, 1/2)] \otimes [(1/2, 0) \oplus (0, 1/2)] =\\ (0, 0) \oplus (1, 0) \oplus (1/2, 1/2) \oplus (1/2, 1/2) \oplus (0, 1) \oplus (0, 0) \end{multline} Therefore representation theory predicts the existence of two invariant forms. It is usually claimed that these two forms are $$ D_1(\chi, \psi)=\bar{\chi}\psi=\chi^T\gamma_0\psi=\chi_R^T\psi_L+\chi_L^T\psi_R $$ and $$ D_2(\chi, \psi)=\bar{\chi}\gamma_5\psi=\chi^T\gamma_0 \gamma_5\psi=\chi_R^T\psi_L-\chi_L^T\psi_R $$ The form $D_1$ is symmetric, and its quadratic form (with complex conjugation on the first argument) is usually used to construct the Dirac Lagrangian. On the other hand, it is known that on Weyl spinors one can also find an antisymmetric invariant form, given by $$ \chi^T_L\sigma_2\psi_L . $$ Let me use this to construct one more anti-symmetric invariant form on Dirac spinors as a sum of two forms on Weyl spinors: $$ D_3(\chi, \psi)=\chi^T_L\sigma_2\psi_L+\chi^T_R\sigma_2\psi_R $$ The form $D_3$ is not a linear combination of $D_1$ and $D_2$, and thus I get a contradiction with the representation-theory prediction. Where did I make a mistake? This post imported from StackExchange Physics at 2014-04-13 14:36 (UCT), posted by SE-user Sasha quantum-field-theory spinors asked Apr 9, 2014 in Theoretical Physics by Sasha (110 points) [ no revision ] I don't think that Sasha made a mistake. I'll use the dotted/undotted notation which may clarify the possible SL(2,C) invariants. Let $\xi^{A}$, $\theta^{A}$, $\eta^{\dot{A}}$ and $\phi^{\dot{A}}$ be Weyl spinors. The Levi-Civita tensors $\epsilon_{AB}$ and $\epsilon_{\dot{A}\dot{B}}$ transform trivially under SL(2,C) so they can be used to lower indices. The consistent rules are $$ \xi_{A}=\xi^{B}\epsilon_{BA} $$ and $$ \eta_{\dot{A}}=\epsilon_{\dot{A}\dot{B}}\eta^{\dot{B}} $$ The SL(2,C) invariant Levi-Civita tensors are just similarity transformations which connect equivalent SL(2,C) irreps. Using upstairs and downstairs indices and complex conjugation $(^{*})$, one can make four SL(2,C) invariants, $\xi^{A}\theta_{A}$, $\eta^{\dot{A}}\phi_{\dot{A}}$, $$ (\xi^{A})^{*}\eta^{\dot{A}}=[\xi^{*}]_{\dot{A}}\eta^{\dot{A}} $$ and $$ \xi^{A}(\eta^{\dot{A}})^{*}=\xi^{A}[\eta^{*}]_{A} $$ The first and second are invariant under parity. The third and fourth are not invariant under parity. By adding and subtracting the third and fourth SL(2,C) invariants, one can make Sasha's bilinear forms $D_{1}$ and $D_{2}$. $D_{1}$ transforms trivially under parity whilst $D_{2}$ changes sign under parity. Thus $D_{1}$ is an $O(3,1)$ scalar and $D_{2}$ is an $O(3,1)$ pseudoscalar.
Sasha's invariant $\chi^T_L\sigma_2\psi_L$ is my $\eta^{\dot{A}}\phi_{\dot{A}}$ modulo a factor of $i$, so Sasha's O(3,1) invariant $D_{3}$ is made by summing my first and second invariants. Edit: My earlier draft said, "I don't see any contradiction with representation theory here because I don't see any reason for the expansion of Dirac spinors $$ [(1/2, 0) \oplus (0, 1/2)] \otimes [(1/2, 0) \oplus (0, 1/2)] $$ to exhaust the SL(2,C) invariants of the Weyl spinors." On reflection, my words were wrong. All I've done here is to list the bilinear O(3,1) invariants. I guess Sasha wants to see the decomposition of a general rank 2 Dirac tensor into O(3,1) irreps; I haven't done this part. This post imported from StackExchange Physics at 2014-04-13 14:36 (UCT), posted by SE-user Stephen Blake answered Apr 10, 2014 by Stephen Blake (70 points) [ no revision ]
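Editor's note (an added numerical aside, not part of the original exchange): the two invariants predicted in $S\otimes S$ are forms that are genuinely bilinear (no complex conjugation), and these are exactly the $\epsilon$-built contractions entering $D_3$; the forms $\bar{\chi}\psi$ and $\bar{\chi}\gamma_5\psi$ are invariant only with the first argument conjugated, so they belong to $\bar{S}\otimes S$ rather than $S\otimes S$, which is consistent with the answer above. The short NumPy check below illustrates this in the Weyl basis, assuming the standard transformation rule $\psi_L \to A\psi_L$, $\psi_R \to (A^\dagger)^{-1}\psi_R$ for $A\in SL(2,\mathbb{C})$; the helper names and the particular group element are choices made only for the test.

```python
import numpy as np

# Pauli matrix sigma_2 and 2x2 blocks used in the Weyl basis.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)

def rot_x(phi):    # SU(2) rotation about x: cos(phi/2) I - i sin(phi/2) sigma_1
    return np.cos(phi / 2) * I2 - 1j * np.sin(phi / 2) * s1

def rot_z(phi):    # SU(2) rotation about z
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def boost_z(eta):  # boost along z in the (1/2,0) representation
    return np.diag([np.exp(eta / 2), np.exp(-eta / 2)])

# A generic SL(2,C) element (each factor has determinant 1).
A = rot_x(0.6) @ rot_z(-0.7) @ boost_z(0.4)

# Dirac representation in the Weyl basis: psi = (psi_L, psi_R).
S = np.block([[A, Z2], [Z2, np.linalg.inv(A.conj().T)]])

gamma0 = np.block([[Z2, I2], [I2, Z2]])   # matrix of chi^T gamma_0 psi
B3 = np.block([[s2, Z2], [Z2, s2]])       # matrix of D_3

print(np.allclose(S.conj().T @ gamma0 @ S, gamma0))  # True : bar(chi) psi (conjugated) is invariant
print(np.allclose(S.T @ B3 @ S, B3))                 # True : the epsilon-built D_3 is a bilinear invariant
print(np.allclose(S.T @ gamma0 @ S, gamma0))         # False: chi^T gamma_0 psi without conjugation is not
```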
\begin{document} \title{Generalized eigenvalues for fully nonlinear singular or degenerate operators in the radial case.} \footnote{AMS Subject classification: 35 J 25, 35 J 60, 35 P 15, 35 P 30} \begin{abstract} In this paper we extend some existence results concerning the generalized eigenvalues for fully nonlinear operators, singular or degenerate. We consider the radial case and we prove the existence of an infinite number of eigenvalues, simple and isolated. This completes the results obtained by the author with Isabeau Birindelli for the first eigenvalues in the radial case, as well as the results obtained for Pucci's operators by Busca, Esteban and Quaas and for the $p$-Laplace operator by Del Pino and Manasevich. \end{abstract} \section{Introduction} The extension of the concept of eigenvalue to fully nonlinear operators has seen a remarkable development in recent years; let us mention the works of Quaas, Sirakov \cite{QS}, Ishii, Yoshimura \cite{IY}, Juutinen \cite{J}, Patrizi \cite{P}, Armstrong \cite{A}, and previous papers of the author with Isabeau Birindelli \cite{BD1,BD2}, which all deal with the existence of eigenvalues and corresponding eigenfunctions for different fully nonlinear operators in bounded domains. In \cite{BD1} we defined the concept of first eigenvalue on the model of \cite{BNV} and we proved some existence results for the Dirichlet problem and for the eigenvalue problem. The simplicity of the first eigenvalue, which is known in the case of the $p$-Laplacian, for Pucci's operators, and for related operators homogeneous of degree $1$, remains an open problem for general fully nonlinear operators, singular or degenerate, homogeneous of degree $1+\alpha$ with $\alpha> -1$. However, in \cite{BDr} we proved a uniqueness result in the case where the domain is a ball or an annulus and the operator is radial. Concerning the ``other eigenvalues'', little is known about them, except for Pucci's operators and for the $p$-Laplacian, in the radial case. More precisely, in \cite{DM} the authors prove that in the radial case for the $p$-Laplace operator there exists an infinite numerable set of eigenvalues, which are simple and isolated; in \cite{BEQ} the authors prove the same result for Pucci's operators. Moreover, in each of these papers the authors establish some bifurcation results of positive (respectively negative) solutions for some related partial differential equations. Here we also consider the radial case, for the model operator $$F(Du, D^2 u) = |\nabla u|^\alpha{\cal M}_{a,A} (D^2 u)$$ where $a$ and $A$ are two positive numbers, $a\leq A$, $\alpha > -1$ and ${\cal M}_{a,A} $ is the Pucci operator ${\cal M}_{a , A} (M) = A tr(M^+)-a tr(M^-)$. We prove the existence of a numerable set of eigenvalues, $(\mu_k)_k$, which are simple and isolated, and some continuity results for the eigenvalues with respect to the parameters $\alpha, a, A$. \section{Assumptions, notations and previous results in the general case } We begin with some generalities about the operators that we consider. Let $\Omega$ be some bounded domain in ${\rm I}\!{\rm R}^N$. For $\alpha>-1$, $F_\alpha$ satisfies : \begin{itemize} \item[(H1)] $F_\alpha : \Omega\times {\rm I}\!{\rm R}^N\setminus\{0\}\times S\rightarrow{\rm I}\!{\rm R}$ is continuous and $\forall t\in {\rm I}\!{\rm R}^\star$, $\mu\geq 0$, $F_\alpha (x, tp,\mu X)=|t|^{\alpha}\mu F_\alpha(x, p,X)$.
\item[(H2)] There exist $0\leq a\leq A$, such that for any $x\in {\Omega}$, $p\in {\rm I}\!{\rm R}^N\backslash \{0\}$, $M\in S$, $N\in S$, $N\geq 0$ \begin{equation}\label{eqaA} a|p|^\alpha tr(N)\leq F (x,p,M+N)-F (x,p,M) \leq A |p|^\alpha tr(N). \end{equation} \item [(H3)] There exists a continuous function $ \omega$ with $\omega (0) = 0$, such that if $(X,Y)\in S^2$ and $\zeta\in {\rm I}\!{\rm R}^+$ satisfy $$-\zeta \left(\begin{array}{cc} I&0\\ 0&I \end{array} \right)\leq \left(\begin{array}{cc} X&0\\ 0&Y \end{array}\right)\leq 4\zeta \left( \begin{array}{cc} I&-I\\ -I&I\end{array}\right)$$ and $I$ is the identity matrix in ${\rm I}\!{\rm R}^N$, then for all $(x,y)\in {\rm I}\!{\rm R}^N$, $x\neq y$ $$F(x, \zeta(x-y), X)-F(y, \zeta(x-y), -Y)\leq \omega (\zeta|x-y|^2).$$ \end{itemize} Let us now recall the definition of viscosity solutions \begin{defi}\label{def1} Let $\Omega$ be a bounded domain in ${\rm I}\!{\rm R}^N$, suppose that $f$ is continuous on $\Omega \times {\rm I}\!{\rm R}$, then $v$, continuous in $\Omega$ is called a viscosity super solution (respectively sub-solution) of $F(x,\nabla u,D^2u)=f(x,u)$ if for all $x_0\in \Omega$, -Either there exists an open ball $B(x_0,\delta)$, $\delta>0$ in $\Omega$ on which $v= cte= c $ and $0\leq f(x,c)$, for all $x\in B(x_0,\delta)$ (respectively $0\geq f(x,c)$) -Or $\forall \varphi\in {\mathcal C}^2(\Omega)$, such that $v-\varphi$ has a local minimum on $x_0$ (respectively a local maximum) and $\nabla\varphi(x_0)\neq 0$, one has $$ F( x_0,\nabla\varphi(x_0), D^2\varphi(x_0))\leq f(x_0,v(x_0)). $$ (respectively $$ F( x_0,\nabla\varphi(x_0), D^2\varphi(x_0))\geq f(x_0,v(x_0)).) $$ \end{defi} One can also extend the definition of viscosity solutions to upper semicontinuous sub-solutions and lower semicontinuous super solutions, as it is done in the paper of Ishii \cite{I}. We shall consider in the sequel radial solutions, which will be solutions of differential equations of order two. These solutions will be ${\cal C}^1$ everywhere and ${\cal C}^2$ on each point where their gradient is zero, so it is easy to see that these solutions are viscosity solutions. We now recall the definition of the first eigenvalue and first eigenfunction adapted to this context, on the model of \cite{BNV}. We define $$\lambda^+ (\Omega) = \sup \{\lambda, \exists \ \varphi>0, \ F(x, \nabla \varphi, D^2\varphi)+ \lambda \varphi^{1+\alpha} \leq 0\ {\rm in} \ \Omega\}$$ $$\lambda^- (\Omega) = \sup \{\lambda, \exists \ \varphi<0, \ F(x, \nabla \varphi, D^2\varphi)+ \lambda |\varphi|^{\alpha}\varphi \geq 0\ {\rm in} \ \Omega\}$$ \begin{rema} Let us observe that in this definition, for $\lambda^+$ (respectively $\lambda^-$), the supremum can be taken over either continuous and bounded functions, or lower semicontinuous and bounded functions (respectively continuous and bounded functions, or upper semicontinuous and bounded ). \end{rema} We proved in \cite{BD1} the following existence's result of "eigenfunctions" \begin{theo}\label{valp} Suppose that $\Omega$ is a bounded regular domain. There exists $\varphi\geq 0$ such that $$\left\{\begin{array} {lc} F(x, \nabla \varphi, D^2\varphi)+ \lambda^+(\Omega) \varphi^{1+\alpha} =0 &\ {\rm in} \ \Omega \\ \varphi=0& {\rm on} \ \partial \Omega \end{array}\right.$$ Moreover $\varphi >0$ inside $\Omega$, is bounded and continuous. 
Symmetrically there exists $\varphi\leq 0$ such that $$\left\{\begin{array} {lc} F(x, \nabla \varphi, D^2\varphi)+ \lambda^-(\Omega) |\varphi|^{\alpha}\varphi =0 &\ {\rm in} \ \Omega \\ \varphi=0& {\rm on} \ \partial \Omega \end{array}\right.$$ Moreover $\varphi <0$ inside $\Omega$, is bounded and continuous. \end{theo} These eigenvalues have the properties, called maximum and minimum principle : \begin{theo} Suppose that $\Omega$ is a bounded regular domain. If $\lambda < \lambda^+ $, every upper semicontinuous and bounded sub-solution of $$F(x,\nabla u, D^2 u) + \lambda |u|^\alpha u \geq 0$$ which is $\leq 0$ on the boundary, is $\leq 0$ inside $\Omega$. If $\lambda < \lambda^- $, every lower semicontinuous and bounded super-solution of $$F(x,\nabla u, D^2 u) + \lambda |u|^\alpha u \leq 0$$ which is $\geq 0$ on the boundary, is $\geq 0$ inside $\Omega$. \end{theo} The maximum and minimum principle and some iterative process permit to prove the existence of solutions for the Dirichlet problem, $$\left\{ \begin{array}{lc} F(x,\nabla u, D^2 u) + \lambda |u|^\alpha u = f&{\rm in} \ \Omega\\ u = 0 & {\rm on} \ \partial \Omega \end{array} \right.$$ where $f$ is supposed to be continuous and bounded, and $\lambda < \inf (\lambda^+, \lambda^-)$. Moreover if $f\leq 0$ and $\lambda <\lambda^+$ , (respectively $f\geq 0$ and $\lambda <\lambda^-$), there exists a nonnegative (respectively non positive) solution . We now give some increasing property of the eigenvalues $\lambda^\pm$ with respect to the domain. \begin{prop}\label{propinc} Suppose that $\Omega$ and $\Omega^\prime$ are some regular bounded domains such that $\Omega^\prime \subset \subset \Omega$. Then $\lambda^\pm (\Omega^\prime ) > \lambda^\pm (\Omega)$. \end{prop} For the convenience of the reader we give a short proof here : We do it for $\lambda^+$. Let $\varphi$ be an eigenfunction for $\lambda^+ (\Omega)$. Then by the strict maximum principle there exists $\epsilon >0$ such that $\varphi \geq 2\epsilon$ on $\Omega^\prime $. Define $\lambda^\prime = \lambda^+(\Omega) \inf_{\Omega^\prime}{\varphi^{1+\alpha}\over (\varphi-\epsilon) ^{1+\alpha}}> \lambda^+(\Omega)$. Then the function $\varphi-\epsilon$ is some positive function which satisfies in $\Omega^\prime$ $$F(x, \nabla (\varphi-\epsilon), \nabla \nabla (\varphi-\epsilon))+ \lambda^\prime (\varphi-\epsilon)^{1+\alpha} \leq 0$$ which implies by the definition of $\lambda^+(\Omega^\prime)$, that $\lambda^+ (\Omega) < \lambda^\prime \leq \lambda^+(\Omega^\prime)$. The following property of eigenvalues will be needed in section 4 : \begin{prop}\label{mupropre} Suppose that there exists $\mu\in {\rm I}\!{\rm R}$, and $u$ continuous and bounded such that $$\left\{ \begin{array}{lc} F(x,\nabla u, D^2 u)+ \mu |u|^\alpha u=0&,\ u\geq 0, \ u\not \equiv 0 \ {\rm in} \ \Omega\\ u=0 & {\rm on } \ \partial \Omega \end{array}\right. $$ Then $\mu = \lambda^+$. Symmetrically suppose that there exists $\mu\in {\rm I}\!{\rm R}$, and $u$ continuous and bounded such that $$\left\{ \begin{array}{lc} F(x,\nabla u, D^2 u)+ \mu |u|^\alpha u=0&, u\leq 0, \ u\not \equiv 0\ {\rm in} \ \Omega\\ u= 0 & {\rm on} \ \partial \Omega \end{array}\right.$$ Then $\mu = \lambda^-$. \end{prop} Proof of proposition \ref{mupropre} We consider only the first case, the other can be treated in the same manner. By the definition of the first eigenvalue, $\mu\leq \lambda^+$. If $\mu < \lambda^+$, then the minimum principle would imply that $u\leq 0$ in $\Omega$, a contradiction. 
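Editor's note (an added illustration, not part of the original paper): the hypotheses $(H1)$ and $(H2)$ recalled above can be checked numerically for the model operator of the introduction, $F(p,X)=|p|^\alpha {\cal M}_{a,A}(X)$, with the Pucci operator evaluated through the eigenvalues of $X$. The Python sketch below is only a sanity check; the parameter values, the random test matrices and the helper names (\texttt{pucci}, \texttt{F}) are choices made for the illustration.
\begin{verbatim}
import numpy as np

def pucci(M, a, A):
    """Pucci extremal operator M_{a,A}(M) = A tr(M^+) - a tr(M^-)."""
    lam = np.linalg.eigvalsh(M)
    return A * lam[lam > 0].sum() - a * np.abs(lam[lam < 0]).sum()

def F(p, M, a, A, alpha):
    """Model operator F(p, M) = |p|^alpha M_{a,A}(M)."""
    return np.linalg.norm(p) ** alpha * pucci(M, a, A)

rng = np.random.default_rng(1)
a, A, alpha, n = 1.0, 2.0, 0.5, 4

for _ in range(100):
    p = rng.normal(size=n)
    X = rng.normal(size=(n, n)); X = (X + X.T) / 2        # symmetric matrix
    Nm = rng.normal(size=(n, n)); Nm = Nm @ Nm.T          # symmetric, N >= 0
    t, mu = rng.normal(), rng.uniform()

    # (H1): homogeneity F(t p, mu X) = |t|^alpha mu F(p, X)
    assert np.isclose(F(t * p, mu * X, a, A, alpha),
                      np.abs(t) ** alpha * mu * F(p, X, a, A, alpha))

    # (H2): a |p|^alpha tr(N) <= F(p, X+N) - F(p, X) <= A |p|^alpha tr(N)
    lo = a * np.linalg.norm(p) ** alpha * np.trace(Nm)
    hi = A * np.linalg.norm(p) ** alpha * np.trace(Nm)
    diff = F(p, X + Nm, a, A, alpha) - F(p, X, a, A, alpha)
    assert lo - 1e-9 <= diff <= hi + 1e-9
\end{verbatim}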
We now recall some regularity and compactness results which will be used in the last section. \begin{prop} Suppose that $\Omega$ is a bounded regular domain. Suppose that $F$ satisfies the previous assumptions. Let $f$ be a continuous and bounded function in ${\Omega}$. Let $u$ be a continuous and bounded viscosity solution of \begin{equation}\label{eq4.1} \left\{ \begin{array}{lc} F(x, \nabla u, D^2u)=f & \ {\rm in}\ \Omega\\ u=0 & \ {\rm on}\ \partial\Omega. \end{array} \right. \end{equation} Then for any $\gamma<1$ there exists some constant $C$ which depends only on $|f|_\infty$, $\gamma$, $a$, $A$, and $N$, such that for any $(x,y)\in\bar\Omega^2$ $$|u(x)-u(y)|\leq C|x-y|^\gamma.$$ \end{prop} \begin{cor} \label{comp} Suppose that $\Omega$ is a bounded regular domain. Suppose that $F$ satisfies the previous assumptions. Suppose that $(f_n)$ is a sequence of continuous and uniformly bounded functions, and $(u_n)$ is a sequence of continuous and bounded viscosity solutions of $$\left\{\begin{array}{cc} F( x, \nabla u_n,\ D^2u_n)=f_n& \ {\rm in} \ \Omega\\ u_n=0 &\ {\rm on} \ \partial \Omega. \end{array}\right.$$ Then the sequence $(u_n)$ is relatively compact in ${\mathcal C} (\overline{\Omega})$. Moreover if $f_n $ converges, even simply, to some continuous and bounded function $f$, and if for a subsequence $\sigma (n)$, $u_{\sigma (n)} \rightarrow u$, then $u$ is a solution of the equation with the right hand side $f$. \end{cor} \begin{rema} Under some additionnal assumption on the regularity of $F$, one has some Lipschitz regularity of the solutions. This assumption is satisfied in the case of the operator considered in the following sections. \end{rema} We end this section by giving some property of the first demi-eigenvalues for some particular operators related to Pucci's operators : Let $0<a< A$ and the Pucci's operator $${\cal M}_{a,A} (D^2 u) = A tr( (D^2 u)^+)-a tr((D^2u)^-)$$ where $(D^2u)^\pm$ denote the positive and negative part of the symmetric matrix $D^2 u$. For $\alpha > -1$ the following operator $$F(\nabla u, D^2 u) = |\nabla u|^\alpha {\cal M}_{a,A} (D^2 u)$$ satisfies the assumption $(H1), (H2)$. We denote by $\lambda_{a,A, \alpha}^\pm$ its corresponding first eigenvalues. Then \begin{prop} \label{proplam} If $a< A$, one has $\lambda_{a,A, \alpha}^+ (\Omega) < \lambda_{a,A, \alpha}^-(\Omega)$. Moroever if $\lambda_{eq}$ is the first eigenvalue for the operator $|\nabla u|^\alpha \Delta u$, $$\lambda_{a,A, \alpha}^+ \leq a \lambda_{eq} < A \lambda_{eq} \leq \lambda_{a,A, \alpha}^-$$ \end{prop} Proof of proposition \ref{proplam} Let $\phi>0$ be some eigenfunction for the eigenvalue $\lambda_{a,A, \alpha}^+ (\Omega)$. We observe that \begin{eqnarray*} a \Delta \phi &\leq & A tr(D^2\phi)^+- a tr(D^2\phi)^-\\ &\leq& {\cal M}_{a, A} (D^2\phi). \end{eqnarray*} This implies that $$a \Delta \phi |\nabla \phi|^\alpha + \lambda_{a,A, \alpha}^+ |\phi|^\alpha \phi \leq 0$$ and then by the definition of $\lambda_{eq}$, $a \lambda_{eq} \geq \lambda_{a,A, \alpha}^+$. In the same manner let $\phi\leq 0$ be such that $\Delta \phi |\nabla \phi|^\alpha = -\lambda_{eq} |\phi|^\alpha\phi$ then $$|\nabla \phi|^\alpha \left(A tr( (D^2 \phi)^+)- a tr((D^2 \phi)^-) \right)\geq |\nabla \phi|^\alpha A \Delta \phi = -A \lambda_{eq} |\phi|^\alpha \phi$$ and by the definition of $\lambda_{a,A, \alpha}^-$ this implies that $$ A \lambda_{eq} \leq \lambda_{a,A, \alpha}^-.$$ The question of the simplicity of the first eigenvalues for general operators satisfying $(H1$),.. $(H3)$, is an open problem. 
The difficulty resides in the fact that one cannot establish some strict comparison principle. More precisally we should need the following result : {\it If $u\geq v$ and $F(x, \nabla u, D^2 u)= f\leq F(x, \nabla v, D^2 v )= g$ then either $u> v$ everywhere, or $u\equiv v$}. The difficulty when one wants to prove this result resides on the points where test functions have their gradient equal to zero. However we proved in \cite{BDr} the simplicity result in the radial case. It will be precised in the forthcoming section, this will be an argument for the existence and the properties of the other eigenvalues in the case of the operator $|\nabla u|^\alpha {\cal M}_{a,A} (D^2 u)$. \section{The radial case} Let $\Omega$ be a ball $B(0,1)$ or an annulus $B(0,1)\setminus \overline{B(0, \rho)}$ for some $\rho\in ]0, 1[$. We suppose that there exists $\tilde F$ such that for any radial function $u(x) = g(|x|)$, $F(x, \nabla u, D^2 u) = \tilde F(r, g^\prime , g^{\prime\prime})$. In that case the conditions on $F$ imply that $$ |g^\prime|^\alpha\left(\gamma_1 g^{\prime\prime}+\frac{\gamma_2(N-1)}{|x|}g^\prime\right)\leq F(x,\nabla \phi,D^2\phi)\leq |g^\prime|^\alpha\left(\Gamma_1 g^{\prime\prime}+\frac{\Gamma_2(N-1)}{|x|}g^\prime\right)$$ where $$\gamma_1=\left\{\begin{array}{lc} a & {\rm if}\ g^{\prime\prime}>0\\ A & {\rm if}\ g^{\prime\prime}<0 \end{array}, \right. \ \gamma_2=\left\{\begin{array}{lc} a & {\rm if}\ g^{\prime}>0\\ A & {\rm if}\ g^{\prime}<0, \end{array} \right.\ $$ $$ \Gamma_1=\left\{\begin{array}{lc} A & {\rm if}\ g^{\prime\prime}>0\\ a & {\rm if}\ g^{\prime\prime}<0 \end{array}, \right. \ \Gamma_2=\left\{\begin{array}{lc} A & {\rm if}\ g^{\prime}>0\\ a & {\rm if}\ g^{\prime}<0. \end{array} \right. $$ In this situation one can define the first radial eigenvalues $\lambda^\pm_{rad} (\Omega)$ $$\lambda_{rad}^+ (\Omega ) = \sup \{\lambda , \exists \varphi>0, \ {\rm radial},\ \tilde F(r, \varphi^\prime , \varphi^") + \lambda |\varphi|^\alpha \varphi\leq 0\ {\rm in} \ \Omega \}$$ $$\lambda_{rad}^- (\Omega ) = \sup \{\lambda , \exists \varphi<0, \ {\rm radial}, \ \tilde F(r, \varphi^\prime , \varphi^") + \lambda |\varphi|^\alpha \varphi\geq 0\ {\rm in} \ \Omega \}$$ Acting as in the general case, one can prove the existence of eigenfunctions for each of these eigenvalues, and using the maximum and minimum principle one derives that $\lambda^\pm _{rad} (\Omega )= \lambda^\pm (\Omega)$ in the sense given in theorem \ref{valp} for the operator $F(x, \nabla u, D^2 u)$. \begin{rema}\label{remaru} In the case of the ball, for any constant sign viscosity solution of $$\left\{\begin{array}{lc} \tilde F(r, u^\prime, u^{\prime\prime}) + \lambda^\pm |u|^\alpha u = 0\ &{\rm in } \ B(0,1)\\ u = 0 &{\rm on} \ \{r = 1\}, \end{array}\right. $$ Then $u$ is decreasing from $r=0$ for $\lambda^+$, increasing from $r=0$ for $\lambda^-$. In particular if $u$ is ${\cal C}^1$, $0$ is the unique point where $u^\prime$ is zero. In the case of an annulus $B(0,1)\setminus \overline{B(0, \rho)}$, if $u$ is a positive (respectively negative) viscosity solution of \begin{equation}\label{equaz} \left\{\begin{array}{lc} \tilde F(r, u^\prime, u^{\prime\prime}) + \lambda^\pm |u|^\alpha u = 0\ &{\rm in } \ B(0,1)\setminus \overline{B(0, \rho)}\\ u = 0 &{\rm on} \ \{r = 1\}\ {\rm and} \ \{r = \rho\}, \end{array}\right. \end{equation} then there exists a unique point $r = r_u$ such that $u$ is increasing (respectively decreasing) on $[\rho, r_u]$, and decreasing (respectively increasing ) on $[r_u, 1]$. 
In particular if $u$ is ${\cal C}^1$, $r_u$ is the unique point where $u^\prime$ is zero. \end{rema} The uniqueness result obtained in \cite{BDr} is the following : \begin{prop}\label{propun} Suppose that $\Omega$ is a ball or an annulus. Suppose that $\varphi$ and $\psi$ are two positive radial eigenfunctions in the viscosity sense, for the eigenvalue $\lambda^+$, which are zero on the boundary, then there exists some positive constant $c$ such that $\varphi = c\psi$. \end{prop} \begin{rema} Of course the same result holds for the negative eigenfunctions corresponding to $\lambda^-$. \end{rema} From now we shall denote by an abuse of notation by ${\cal M}_{a,A} (r, g^\prime, g")$ the operator $g\mapsto \Gamma_1 g^{\prime\prime}+\frac{\Gamma_2(N-1)}{r}g^\prime$ and $\tilde F$ will be \begin{equation}\label{eqpucci} \tilde F(r, g^\prime , g^") = |g^\prime|^\alpha\left(\Gamma_1 g^{\prime\prime}+\frac{\Gamma_2(N-1)}{r}g^\prime\right), \end{equation} where $\Gamma_1$ and $\Gamma_2$ are the multivalued functions defined at the beginning of section 3. \begin{rema} We shall most of the time use more correctly the definition which is valid when $g$ is Lipshitz, and when $\Gamma_1$ and $\Gamma_2$ are determined : $$\tilde F (r, g^\prime, g^")= \Gamma_1 {d\over dr} ({|g^\prime|^\alpha g^\prime \over 1+\alpha}) + \Gamma_2 {(N-1)\over r} |g^\prime |^\alpha g^\prime, $$ the derivative ${d\over dr} ({|g^\prime|^\alpha g^\prime) \over 1+\alpha})$ being taken in the distributional sense. \end{rema} We end this section by giving one consequence of the Hopf principle in the case of the operator $\tilde F$. \begin{rema}\label{remhopf} Suppose that $u$ is a non negative solution in the viscosity sense of $\tilde F(r, u^\prime, u^") = f$ on $[0, R[$ for some $R \leq \infty$, with $f$ continuous and non positive , then either $u>0$ everywhere, or $u\equiv0$; In particular if $u$ satisfies $\tilde F(r, u^\prime, u^") = -\lambda |u|^\alpha u$ with $\lambda >0$, and if $u(r_o)=0$ then $u$ must change sign on $r_o$. \end{rema} \section{ The functions $w^+$ and $w^-$. } In this section we prove the existence and uniqueness of some radial solutions of $$ \left\{ \begin{array}{lc} |w^\prime |^\alpha {\cal M}_{a, A} (r, w^\prime, w^") = -|w|^\alpha w&\ {\rm in} \ {\rm I}\!{\rm R}^+\ , \\ w(0) = 1, w^\prime (0) = 0&\ \end{array}\right.$$ This will permit as in \cite {BEQ}, \cite{DM} to prove the existence of an infinite numerable set of radial eigenvalues for the operator $|\nabla w |^\alpha {\cal M}_{a,A} (D^2 w)$ in the ball. \begin{prop}\label{propex} There exists a unique ${\cal C}^1$ solution of the equation \begin{equation} \label{eq1} |w^\prime |^\alpha ({\cal M}_{a, A} (r, w^\prime, w^") )= -|w|^\alpha w\ {\rm in} \ {\rm I}\!{\rm R}^+\ , w(0) = 1, w^\prime (0) = 0 \end{equation} Moreover $w$ is ${\cal C}^2$ around each point where $ w^\prime \neq 0$. 
\end{prop} This proposition will be a consequence of the three following results : \begin{prop}\label{propk} For all $r_o\geq 0$, and for all $k_o\neq 0$ there exists some $\delta >0$ such that there is existence and uniqueness of solution to \begin{eqnarray} \label{eq2} && a \left( |k^\prime |^\alpha k^\prime\left( {N-1\over r}\right) + {d\over dr} ({ |k^\prime |^\alpha k^\prime\over 1+\alpha} )\right) = -|k|^\alpha k\ {\rm for } \ r\in ] r_o, r_o+\delta[\ , \ {\rm or}\ r\in ]r_o-\delta, r_o[\cap {\rm I}\!{\rm R}^+, \nonumber\\ && k(r_o) = k_o, k^\prime (r_o) = 0,\ \end{eqnarray} \begin{eqnarray} \label{eq3} && A|k^\prime |^\alpha k^\prime\left( {N-1\over r}\right) +a {d\over dr} ({ |k^\prime |^\alpha k^\prime\over 1+\alpha} ) = -|k|^\alpha k\ {\rm for } \ {\rm for} \ r\in ] r_o, r_o+\delta[\ , {\rm or} \ r\in ] r_o-\delta, r_o[\cap {\rm I}\!{\rm R}^+\ , \nonumber\\ &&k(r_o) = k_o, k ^\prime (r_o) = 0, \end{eqnarray} \begin{eqnarray} \label{eq4} && A \left(|k^\prime |^\alpha k^\prime \left({N-1\over r} \right)+{d\over dr} ({ |k^\prime |^\alpha k^\prime\over 1+\alpha} )\right) = -|k|^\alpha k\ {\rm for } \ r\in ] r_o, r_o+\delta[\ , \ {\rm or}\ r\in ]r_o-\delta, r_o[\cap {\rm I}\!{\rm R}^+,\nonumber\\ && k(r_o)= k_o, k^\prime (r_o) =0, \end{eqnarray} \begin{eqnarray} \label{eq5} && a|k^\prime |^\alpha k^\prime \left({N-1\over r} \right)+A {d\over dr} ({ |k^\prime |^\alpha k^\prime\over 1+\alpha} ) = -|k|^\alpha k\ {\rm for } \ \ r\in ] r_o, r_o+\delta[\ , \ {\rm or} \ r\in ] r_o-\delta, r_o[\cap {\rm I}\!{\rm R}^+ \nonumber\\ && k(r_o) =k_o, k^\prime (r_o) = 0. \end{eqnarray} Moreover $k$ is ${\cal C}^2$ around each point where $ k^\prime \neq 0$. \end{prop} In a second step we shall prove the existence's and uniqueness result : \begin{prop}\label{propmM} If $w_o^\prime\neq 0$, and for all $w_o$, there exists a local unique solution to $$ {\cal M}_{a,A} (r, w^\prime, w^") = -{|w|^\alpha w\over |w^\prime |^\alpha}$$ $$ (w(r_o), w^\prime (r_o)) = (w_o, w_o^\prime) $$ Moreover if on $]r_1, r_2[\subset ]0,\infty[$, $w$ is a maximal solution, $\lim_{r\rightarrow r_i, \ r\in ]r_1, r_2[} w^\prime (r) = 0$, $w$ is ${\cal C}^2$ on $]r_1, r_2[$, $ {d\over dr} (|w^\prime |^\alpha w^\prime(r)) $ exists everywhere on $]r_1, r_2[$ and ${d\over dr} (|w^\prime |^\alpha w^\prime)(r_1^+)w(r_1^+)<0$, and ${d\over dr} (|w^\prime |^\alpha w^\prime)(r_2^-)w(r_2^-)<0$. \end{prop} \begin{prop}\label{propr1} Let $\delta$ be such that on ${\cal C}([0, \delta])$, $k$ in (\ref{eq2}) with $r_o = 0$ and $k_o = 1$ is well defined and $|k-1|_{{\cal C} ([0, \delta])} < {1\over 2} $. Then there exists some constant $c_1$ depending on $a$, $A$, $N$ such that $|k^\prime |\leq c_1$. Moreover there exists $r_1 >0$ which depends only on $a$, $A$, and $N$ such that $k^\prime$ and $k^"$ are $<0$ on $]0, r_1[$. \end{prop} \begin{rema} The analogous result holds for the situations in (\ref{eq3}), (\ref{eq4}), (\ref{eq5}). \end{rema} We postpone the proof of these three propositions, and we conclude to the local existence and uniqueness's result, arguing as follows : Let $r_o = 0$, $k$ be the solution of (\ref{eq2}) with $k_o = 1$, and, according to proposition \ref{propr1}, let $r_1$ be such that on $]0, r_1]$, $k^\prime$ and $k^{\prime\prime}$ are negative. 
Let $w$ be the solution given by proposition \ref{propmM} of \begin{equation} \label{eq6} {\cal M}_{a, A} (r, w^\prime, w^{\prime\prime}) = -{|w|^\alpha w\over |w^\prime |^\alpha }\ {\rm in} \ {\rm I}\!{\rm R}^+\ , w(r_1) = k(r_1), w^\prime (r_1) =k^\prime (r_1)\neq 0 \end{equation}on some neighborhood $]r_1-\delta_1, r_1[$. By the equation one must have $w^{\prime\prime}(r_1) <0$. Then by uniqueness $ w= k$ on $]r_1-\delta_1, r_1[$. We can continue replacing $r_1$ by $r_1-\delta_1$ and finally obtain that $w = k$ on the left of $r_1$ as long as $w^\prime\neq 0$, i.e. until $0$. So we have obtained the existence and uniqueness of solution on a neighborhood on the right of zero. We can extend the solution on the right of $r_1$. If $w^\prime (r) \neq 0$ for all $r \geq r_1$, the result is given by proposition \ref{propmM}. Suppose now that $r_o\geq r_1$ is the first point after $r_1$ such that $w^\prime (r_o) = 0$. By remark \ref{remhopf} in section 3, $w(r_o)$ cannot be zero. If $w(r_o) <0$, anticipating on the behaviour of the possible solutions on the right of $r_o$, we know by using the conclusion in proposition \ref{propmM}, that one must have $\lim_{r\rightarrow r_o, r> r_o} {d\over dr} (|w^\prime |^\alpha w^\prime(r)) >0$ , so the equation to solve on the right of $r_o$ is (\ref{eq4}), and we get a local solution on the right of $r_o$. The situation $w(r_o) >0$ cannot occur, since this would imply that $\lim_{r\rightarrow r_o, r> r_o} {d\over dr} (|w^\prime |^\alpha w^\prime(r)) <0$ and $w^\prime $ coud not be $\leq 0$ on the left of $r_o$ and $=0$ on $r_o$. Proof of proposition \ref{propk} We prove the result for equation (\ref{eq2}), with $k_o=1$ and $r_o=0$, the changes to bring in the other cases are given shortly at the end of the proof. The equation can also be written as $$ \left\{ \begin{array}{lc} {d\over dr} (r^{(N-1)(1+\alpha)} |k^\prime |^\alpha k^\prime )(r)= -{(\alpha+1)r^{(N-1)(1+\alpha)} |k|^\alpha k(r)\over a}& {\rm in} \ {\rm I}\!{\rm R}^+\\ k(0) = 1, k^\prime (0) = 0. & (4.7) \end{array}\right.$$ or equivalently, defining $\varphi_{p^\prime } (u) = |u|^{p^\prime -2} u $ and $p^\prime = {\alpha+2\over \alpha+1}$ as : \begin{equation}\label{eqlap} k(r) = 1-\int_0^r \varphi_{p^\prime} \left({\alpha+1\over a s^{(N-1)(1+\alpha)} }\int_0^s t^{(N-1)(1+\alpha)} |k|^\alpha k (t) \ dt\right) ds. \end{equation} We use the properties of the operator \begin{equation}\label{Tlap} T(k) (r)= 1-\int_0^r \varphi_{p^\prime} \left({\alpha+1\over a s^{(N-1)(1+\alpha)} }\int_0^s t^{(N-1)(1+\alpha)} |k|^\alpha k (t) \ dt \right) d s \end{equation} which satisfies on $[0, \delta]$ $$||T(k)-1||_\infty \leq\delta \left\vert \varphi_{p^\prime} \left({(\alpha+1)\delta ||u||_\infty ^{\alpha+1}\over a( (N-1) (1+\alpha)+1)} \right)\right\vert \leq c_1 \delta ^{p^\prime} ||u||_\infty \leq c_1 \delta ^{p^\prime} (||u-1||_\infty+1) $$ where $c_1 = \left({(\alpha+1)\over a ((N-1) (1+\alpha)+1)} \right) ^{p^\prime-1}$ If $\delta <\left ({1\over 3^{|\alpha|+1}c_1}\right)^{1\over p^\prime}$, $T$ sends the ball $\{ u\in {\cal C} ([0, \delta]), ||u-1||_{{\cal C} ([0, \delta])}\leq {1\over 2}\}$ into itself. We now prove that it is contracting. 
We observe that for $k$ with values in $[{1\over 2}, {3\over 2}]$ \begin{eqnarray*} {(\alpha+1)\over a ((N-1) (1+\alpha)+1)} \left( {1\over 2}\right)^{\alpha+1} \ s&\leq& {\alpha+1\over a s^{(N-1)(1+\alpha)} }\int_0^s t^{(N-1)(1+\alpha)} |k|^\alpha k (t) \ dt\\ & \leq& {(\alpha+1)\over a ((N-1) (1+\alpha)+1)} \left({3\over 2}\right)^{\alpha+1}\ s, \end{eqnarray*} and then by the mean value theorem for $(u,v)\in B_{{\cal C} ([0, \delta])} (1, {1\over 2})$ \begin{eqnarray*} \left\vert \varphi_p^\prime \left({\alpha+1\over a s^{(N-1)(1+\alpha)} }\int_0^s t^{(N-1)(1+\alpha)} u^{1+\alpha}(t)dt \right)\right. &-&\left.\varphi_{p^\prime} \left({\alpha+1\over a s^{(N-1)(1+\alpha)} }\int_0^s t^{(N-1)(1+\alpha)} v^{1+\alpha} (t) dt\right)\right\vert \\ &\leq& c_1 s^{p^\prime-1} |u^{\alpha +1}-v^{\alpha +1} |_{L^\infty ([0, s])} \sup \left(\left({3\over 2}\right)^{-\alpha} , \left({1\over 2}\right)^{-\alpha}\right)\\ &\leq &c_1 s^{p^\prime-1} |u-v| _{L^\infty ([0, s])} \sup\left(\left( {3\over 2}\right)^{-\alpha} , \left({1\over 2}\right)^{-\alpha}\right)\\ && \sup \left(\left({3\over 2}\right)^{\alpha} , \left({1\over 2}\right)^{\alpha}\right)\\ &\leq & c_1 s^{p^\prime-1} |u-v| _{L^\infty ([0, s])} 3^{|\alpha|} \end{eqnarray*} This implies that $$|T(u)-T(v)|\leq c_1{ \delta^{p^\prime} \over p^\prime } |u-v| _{L^\infty ([0, \delta ])} 3^{|\alpha|} \leq {1\over 3} |u-v| _{L^\infty ([0, \delta ])}$$ Then the fixed point theorem implies that there exists a unique fixed point in ${\cal C} ([0, \delta])$. In the case of equation ( \ref{eq3}) one is lead to consider $$ T(k)(r) = k_o-\int_{r_o}^{r}\varphi_{p^\prime} \left({\alpha+1\over a s^{N^+} }\int_{r_o}^s t^{N^+} |k|^\alpha k (t)\ dt \right) ds $$ with $N^+ = {(N-1)(1+\alpha)A\over a}$. For equation (\ref{eq5}) we shall consider $$ T(k)(r) = k_o-\int_{r_o}^r\varphi_{p^\prime} \left({\alpha+1\over A s^{N^-} }\int_{r_o}^s t^{N^-} |k|^\alpha k (t)\ dt \right)ds$$ with $N^- = {(N-1)(1+\alpha)a\over A}$. Finally for equation (\ref{eq4}) $$ T(k) (r)= k_o-\int_{r_o}^r\varphi_{p^\prime} \left({\alpha+1\over A s^{(N-1)(1+\alpha)} }\int_{r_o}^s t^{(N-1)(1+\alpha)} |k|^\alpha k (t)\ dt \right) ds.$$ Proof of proposition \ref{propmM} We prove the local existence by proving that for each $(w_o,w_o^\prime)$ with $w_o^\prime \neq 0$ and for all $ r_o>0$, there exists a neighborhood around $ r_o$ and a solution to the equation which satisfies the condition $(w( r_o), w^\prime ( r_o) )= (w_o,w_o^\prime)$. We suppose that $w_o^\prime \neq 0$ and we introduce the function $$f_2(r, y_1, y_2) = M\left(-{m(y_2) (N-1)\over r }- {|y_1|^\alpha y_1 \over |y_2|^\alpha}\right)$$ where $M$ and $m$ are respectively the functions $$ M (x)= \left\{\begin{array}{c} {x\over A}\ {\rm if} \ x>0\\ {x\over a} \ {\rm if } \ x<0 \end{array}\right.$$ and $$ m (x)= \left\{\begin{array}{c} { A x }\ {\rm if} \ x>0\\ {a x } \ {\rm if }\ x<0 \end{array}\right.$$ The functions $M$ and $m$ are lipschitzian, hence $f_2$ is lipschitzian with respect to $y = (y_1, y_2)$ around $(w_o, w_o^\prime)$ when $w_o^\prime \neq 0$. Let $f_1(r, y_1, y_2) = y_2$, and $f(y_1, y_2) = (f_1(y_1, y_2), f_2(y_1, y_2))$, then the standard theory of ordinary differential equations implies that $$\left\{ \begin{array}{c} (y_1^\prime, y_2^\prime) = f(y_1, y_2)\\ (y_1, y_2)(r_o) = (w_o,w_o^\prime) \end{array}\right.$$ has a unique solution around $(w_o,w_o^\prime)$ when $w_o^\prime\neq 0$. 
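Editor's note (an added illustration, not part of the original paper): the fixed point of the operator $T$ constructed in the proof above can be computed by straightforward Picard iteration on a grid, which is the contraction argument made numerical. The sketch below treats equation (\ref{eq2}) with $r_o=0$, $k_o=1$; the parameter values, the grid, the stopping tolerance and the trapezoidal quadrature are arbitrary choices.
\begin{verbatim}
import numpy as np

alpha, a, N = 0.5, 1.0, 3          # sample parameters (alpha > -1, a > 0)
No = (N - 1) * (1 + alpha)         # N_o = (N-1)(1+alpha)
pp = (alpha + 2) / (alpha + 1)     # p'
delta = 0.3                        # small interval [0, delta]
r = np.linspace(0.0, delta, 2001)

def phi(u, q):                     # phi_q(u) = |u|^{q-2} u
    return np.sign(u) * np.abs(u) ** (q - 1)

def cumtrapz(y, x):                # cumulative trapezoidal integral, value 0 at x[0]
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(x))
    return out

def T(k):
    inner = cumtrapz(np.sign(k) * np.abs(k) ** (alpha + 1) * r ** No, r)
    ratio = np.zeros_like(r)
    ratio[1:] = (alpha + 1) / (a * r[1:] ** No) * inner[1:]   # the s = 0 value is 0 in the limit
    return 1.0 - cumtrapz(phi(ratio, pp), r)

k = np.ones_like(r)
for _ in range(60):                # Picard iteration; T is a contraction for small delta
    k_new = T(k)
    if np.max(np.abs(k_new - k)) < 1e-12:
        break
    k = k_new

print(k[-1], np.all(np.diff(k) <= 1e-12))   # k(delta) < 1 and k is non-increasing
\end{verbatim}
For such small $\delta$ the iterates converge rapidly and the computed $k$ is decreasing on $]0,\delta]$, consistent with Proposition \ref{propr1}.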
Then $w = y_1$ is a local solution of \begin{equation} \label{eqmM}w^{\prime\prime } = M\left(-{m(w^\prime ) (N-1)\over r}-{|w|^\alpha w \over |w^\prime |^\alpha}\right) \end{equation} with the initial condition $w ( r_o) = w_o ,$ $w^\prime ( r_o) = w^\prime_o $. If $w$ is a solution on $]r_1, r_2[$ and $\lim_{r\rightarrow r_2, r<r_2}w^\prime(r)$ exists and is $\neq 0$, $w^{\prime\prime } $ has also a finite limit from the equation, then $\lim_{r\rightarrow r_2, r< r_2} (y_1^\prime, y_2^\prime )(r) $ exists and is finite and one can continue, replacing $r_o$ by $r_2$ and $(w_o, w^\prime_o)$ by $(w(r_2), \lim_{r\rightarrow r_2, r<r_2}w^\prime(r))$. If $\lim_{r\rightarrow r_2, r<r_2}w^\prime(r))$ is zero, one gets $\lim_{r\rightarrow r_2} w^{\prime\prime } (r)= \pm \infty$ and then one cannot get a continuation, since the solutions of $(y_1^\prime, y_2^\prime) = f(y_1, y_2)$ must be ${\cal C}^1$. We prove the last facts concerning ${d\over dr} (|w^\prime |^\alpha w^\prime)$. Suppose that $w(r_2) >0$ and assume by contradiction that $\lim_{r\rightarrow r_2} {d\over dr} (|w^\prime |^\alpha w^\prime(r))\geq 0$. Then the equation on the left of $r_2$ is, since it is clear from the equation that $w^\prime$ cannot be nonnegative : $$A {d\over dr}\left({|w^\prime|^\alpha w^\prime \over 1+ \alpha}\right) + {a(N-1) \over r} |w^\prime |^\alpha w^\prime = -|w|^\alpha w $$ which yields a contradiction when $r\rightarrow r_2$. Suppose now that $w(r_2)<0$ and $\lim_{r\rightarrow r_2} {d\over dr} (|w^\prime |^\alpha w^\prime(r))\leq 0$, then from the equation $w^\prime $ cannot be $\geq 0$ on the left of $r_2$, and one is lead to solve on the left of $r_2$ : $$a {d\over dr} \left({|w^\prime|^\alpha w^\prime \over 1+ \alpha} \right)+ {A(N-1) \over r} |w^\prime |^\alpha w^\prime = -|w|^\alpha w .$$ This is absurd by passing to the limit when $r\rightarrow r_2$. Suppose that $w(r_1)>0$ and assume by contradiction that $\lim_{r\rightarrow r_1, r> r_1} {d\over dr} ( |w^\prime|^\alpha w^\prime )(r) \geq 0$, by the equation $w^\prime$ cannot be $\geq 0$ then this equation is on the right of $r_1$ $$A {d\over dr}\left( {|w^\prime|^\alpha w^\prime \over 1+ \alpha} \right)+ {a(N-1) \over r} |w^\prime |^\alpha w^\prime = -|w|^\alpha w. $$ This is absurd by passing to the limit when $r\rightarrow r_1$. Suppose that $w(r_1)<0$ and that $\lim_{r\rightarrow r_1, r> r_1} {d\over dr} ( |w^\prime|^\alpha w^\prime )(r) \leq 0$, then the equation on the right of $r_1$ is $$a {d\over dr} \left({|w^\prime|^\alpha w^\prime \over 1+ \alpha} \right)+ {A(N-1) \over r} |w^\prime |^\alpha w^\prime = -|w|^\alpha w . $$ This is absurd letting $r$ go to $r_1$. Proof of proposition \ref{propr1} We can observe that $ |k^\prime|^\alpha k^\prime$ is differentiable for $r>0$ and has a limit $<0$ for $r\rightarrow 0$. Moreover we shall give some constant $\delta_1$ which depends only on $a$, $A$, $\alpha$, $N$ such that $k^\prime \neq 0$ and ${d\over dr} \left(|k^\prime |^\alpha k^\prime\right)$ remains $<0$ on $]0, \delta_1[$. We begin to prove that ${d\over dr} ( |k^\prime|^\alpha k^\prime)< 0$ around zero. 
One has for $r>0$ $$(|k^\prime |^\alpha k^\prime)(r) = -{1+\alpha\over a r^{N_o}}\int_0^r (|k|^\alpha k)(s) s^{N_o} ds$$ where $N_o = (N-1)(1+\alpha)$, and then $ ( |k^\prime|^\alpha k^\prime)$ is continuously differentiable for $r\neq 0$, as the primitive of some continuous function, and $${d\over dr} (|k^\prime |^\alpha k^\prime ) (r)= {N_o(1+\alpha)\over a r^{N_o+1}} \int_0^r (|k|^\alpha k)(s) s^{N_o} ds - {1+\alpha \over a}( |k|^\alpha k)(r). $$ For the point $0$, one has $$\lim_{r\rightarrow 0} {|k^\prime|^\alpha k^\prime(r)\over r} = -\lim_{r\rightarrow 0} {1+\alpha\over ar^{N_o+1}} \int_0^r |k|^\alpha k (s)s^{N_o} ds = -{1+\alpha\over a (N_o+1)}<0$$ Using the fact that $k$ tends to $1$ when $r$ goes to zero we get that $$\lim_{r\rightarrow 0}{d\over dr} (|k^\prime |^\alpha k^\prime )(r) = {1+\alpha \over a} ({N_o\over N_o+1} -1)= -{(1+\alpha) \over A(N_o+1)}<0. $$ and then $|k^\prime |^\alpha k^\prime $ is ${\cal C}^1$ on $0$. Moreover we prove that there exists a neighborhood on the right of zero which depends only on the data, such that ${d\over dr} ( |k^\prime|^\alpha k^\prime)<0$ on it. For that aim we begin to establish some Lipschitz estimate on the solution with some constant which depends only on the data. We have chosen $\delta$ (which depends only on $a$, $A$, $\alpha$, and $N$) such that for $r\in [0, \delta]$, $k(r)\in [{1\over 2}, {3\over 2} ]$. We now observe that $k^\prime$ is then bounded by $$|k^\prime |^{\alpha+1}(r) \leq {1+\alpha \over a(N_o+1)} \left({3\over 2}\right)^{\alpha+1} r$$ We have obtained that there exists some constant $c_2$ which depends only on the constant $a$, N$, A$ such that $|k^\prime |\leq c_2$ on $]0, \delta[$. We derive from this that on $[0, \delta]$ $$|k(r)-1|\leq c_2 r, $$ and also that $$|(|k|^\alpha k)(r) -1|\leq (1+\alpha) \sup (\left({3\over 2}\right)^\alpha, \left({1\over 2}\right)^\alpha ) c_2 r = c_3 r, $$ and then \begin{eqnarray*} |{d\over dr} (|k^\prime |^\alpha k^\prime )(r) + {1+\alpha \over a(N_o+1)} |&\leq& {(1+\alpha )N_o\over a r^{N_o+1}} \int_0^r ||k|^\alpha k-1| (s)s^{N_o} ds \\ &+& {1+\alpha \over a}|( |k|^\alpha k)(r)-1| \\ &\leq & { c_3(1+\alpha)r\over a} \left({N_o\over N_o+2} +1\right) \end{eqnarray*} We have obtained that as long as $r< {N_o+2\over 2(N_o+1)^2 c_3}\equiv r_1$, ${d\over dr} ( |k^\prime|^\alpha k^\prime)$ remains negative (and then so does $k^\prime$). This ends the proof of proposition \ref{propr1}. To finish the proof of proposition \ref{propex}, i.e. to prove global existence's result, suppose that $w$ is a solution on $[0, r_1[$. If $w^\prime (r_1)\neq 0$ we use proposition \ref{propmM}, if $w^\prime (r_1)=0$, using $\lim_{r\rightarrow r_1, \ r> r_1} {d\over dr} (|w^\prime |^\alpha w^\prime ) (r) w(r_1) <0$ we consider on the right of $r_1$, equation ( \ref{eq2}) if $w(r_1)>0$, and equation ( \ref{eq4}) if $w(r_1)<0$. We have obtained a solution on ${\rm I}\!{\rm R}^+$. We now prove that the solution $w$ is oscillatory : \begin{prop}\label{proposc} The solution of ( \ref{eq1}) is oscillatory, ie, for all $r>0$ there exists $\tau> r$ such that $w(\tau) = 0$. \end{prop} Proof of proposition \ref{proposc} : {\bf First step} We suppose that $a= A$. We follow the arguments in \cite{DM}. We assume by contradiction that there exists $r_o$ such that $ w$ does not vanish on $[r_o, \infty[$. 
Then one can consider the function $$y(r) = r^{(N-1)(1+\alpha)} {|w^\prime |^\alpha w^\prime(r) \over |w|^\alpha w(r)}, $$ which satisfies the equation $$ y^\prime(r) = -{(\alpha+1) r^{(N-1)(1+\alpha)}\over a}-{(\alpha+1)|y|^{\alpha+2}(r) \over r^{(N-1)(1+\alpha)^2}} . $$ Integrating between $r_0$ and $t$ one gets that $$ y(t) + (\alpha+1)\int_{r_0}^t {|y|^{\alpha+2}(r) \over r^{(N-1)(1+\alpha)^2} }dr = -{(\alpha+1)t^{(N-1)(1+\alpha)+1} \over a \left((N-1)(1+\alpha)+1\right)} + y(r_0)+{(\alpha+1)r_o^{(N-1)(1+\alpha)+1} \over a\left((N-1)(1+\alpha)+1\right)} . $$ In particular we obtain that $y(t)\leq 0$ for $t$ large enough. For the next step it will be useful to remark that if, in place of the equation, we had the inequation $${d\over dr}\left( r^{(N-1)(1+\alpha)} |w^\prime |^\alpha w^\prime(r) \right)\leq { -r^{(N-1)(1+\alpha)} |w|^\alpha w\over a}, $$ the conclusion would be the same. We obtain that $ -y(t) = |y(t)|\geq C t^{(N-1)(1+\alpha) +1}$ for some constant $C>0$, as soon as $t$ is large enough. Let $k(t) = \int_{r_0}^t {|y|^{\alpha+2} (r)\over r^{(N-1)(1+\alpha)^2} }dr $, then using the previous considerations $k(t) \geq c_1t^{N(1+\alpha)+2}$ for some positive constant $c_1$. Coming back to the equation, always for $t$ large $$ (\alpha+1)k(t) \leq |y(t)| = (k^\prime(t) t^{(N-1)(1+\alpha)^2})^{1\over \alpha+2} $$ and then, $$(1+\alpha)^{\alpha+2} k^{\alpha+2}(t) \leq k^\prime(t) t^{(N-1)(1+\alpha)^2}.$$ Integrating between $t$ and $s$, $s> t$, we obtain that for some positive constant $c_2$ $${1\over k^{\alpha+1} (t)} -{1\over k^{\alpha+1} (s) } \geq c_2\left( {1\over t^{(N-1)(1+\alpha)^2-1}} -{1\over s^{(N-1)(1+\alpha)^2-1}} \right).$$ Letting $s$ go to infinity $${1\over k^{\alpha+1} }(t) \geq c_2{1\over t^{(N-1)(1+\alpha)^2-1}} .$$ From this one gets a contradiction with $k(t) \geq c_1t^{N(1+\alpha)+2}$. This ends the proof of the first step. {\bf Second step :} $a < A$. We argue on the model of \cite{BEQ}. We suppose as in the first step that there exists $r_o$ such that $w$ does not vanish on $[r_o, \infty[$. We begin to prove that if $w >0$ for $r\geq r_o$, then for $r\geq r_o$ $${d\over dr} \left(r^{(N-1)(1+\alpha)} |w^\prime |^\alpha w^\prime(r)\right) \leq { -r^{(N-1)(1+\alpha)} |w|^\alpha w(r)\over a}, $$ and then following the previous arguments in the first step we obtain that if $y(r) = r^{(N-1)(1+\alpha)} {|w^\prime |^\alpha w^\prime(r) \over |w|^\alpha w(r)} $ then $$ y(t) + (\alpha+1)\int_{r_o}^t {|y|^{\alpha+2} (r)\over r^{(N-1)(1+\alpha)^2} }dr \leq -{t^{(N-1)(1+\alpha)+1} \over a\left((N-1)(1+\alpha)+1\right)} + y(r_o)+{r_o^{(N-1)(1+\alpha)+1} \over a\left((N-1)(1+\alpha)+1\right)} , $$ a contradiction if $y>0$ for $t$ large enough. To prove that $${d\over dr}\left( r^{(N-1)(1+\alpha)} |w^\prime |^\alpha w^\prime(r)\right) \leq { -r^{N-1)(1+\alpha)} |w|^\alpha w(r)\over a}, $$ let us note that in the case $w^\prime\leq 0$ and ${d\over dr} ({|w^\prime|^\alpha w^\prime \over 1+\alpha})\leq 0$ equality holds in the previous inequality, if $w^\prime\geq 0$ and ${d\over dr} ({|w^\prime|^\alpha w^\prime \over 1+\alpha}) \geq 0$ the equation is impossible . 
For the other cases, we assume first that $w^\prime \leq 0$, this implies if ${d\over dr} \left({|w^\prime|^\alpha w^\prime \over 1+\alpha}\right)\geq 0$ that \begin{eqnarray*} a{d\over dr} \left({|w^\prime|^\alpha w^\prime \over 1+\alpha} \right)+ { a (N-1)\over r} |w^\prime |^\alpha w^\prime &\leq & \left(A {d\over dr} ({|w^\prime|^\alpha w^\prime \over 1+\alpha}) + { a (N-1)\over r} |w^\prime |^\alpha w^\prime \right) \\ &\leq & -|w|^\alpha w, \end{eqnarray*} which implies the result. If $w^\prime \geq 0$ and ${d\over dr} ({|w^\prime|^\alpha w^\prime \over 1+\alpha})\leq 0$ \begin{eqnarray*} a{d\over dr} \left({|w^\prime|^\alpha w^\prime \over 1+\alpha} \right)+ { a (N-1)\over r} |w^\prime |^\alpha w^\prime &\leq & a {d\over dr} ({|w^\prime|^\alpha w^\prime \over 1+\alpha} )+ { A (N-1)\over r} |w^\prime |^\alpha w^\prime \\ & \leq & -|w|^\alpha w. \end{eqnarray*} This also implies the result. We now assume that $w<0$ on $[r_o, \infty[$. Then we prove that there exists $r^\star$ such that $w^\prime (r^\star) = 0$ and $w^\prime >0$ on $]r^\star, \infty[$. Indeed by the equation if $w^\prime (r^\star) = 0$, by proposition \ref{propmM} $\lim_{r\rightarrow r^\star, r> r^\star} {d\over dr} (|w^\prime|^\alpha w^\prime ) (r) >0$. This implies that $w^\prime$ is increasing on $r^\star$, then $w^\prime$ is $>0$ on a neighborhood on the right of $r^\star$. If there exists $r^\prime > r^\star$ such that $w^\prime (r^\prime ) = 0$, we argue as before and then $w^\prime >0$ after $r^\prime$. From these remarks, it is sufficient to discard $w^\prime <0$ on $[r_o, \infty[$. Then in that case necessarily ${d\over dr} (|w^\prime |^\alpha w^\prime) >0$ on $[r_o, \infty[$ by the equation, and then $w$ satisfies ${d\over dr} (|w^\prime |^\alpha w^\prime(r) r^{N^-}) = -(1+\alpha) {r^{N^-} |w|^\alpha w(r)\over a} >0$. Let $g(r) \equiv (|w^\prime |^\alpha w^\prime(r) r^{N^-}) $, $g$ is monotone increasing , and since $w^\prime <0$, it has a limit $c_1\leq 0$ at $+\infty$. On the other hand, since $w^\prime <0$ there exists $c_2\in [-\infty, 0[$ such that $\lim_{r\rightarrow +\infty} w(r) = c_2$, then from the equation satisfied by $w$, $\lim_{r\rightarrow +\infty} g^\prime (r) = +\infty$, which is a contradiction with $\lim_{r\rightarrow +\infty} g(r) = c_1 \leq 0$. Finally $w^\prime >0$ after $r_o$. We recall that $N^+ = {A(1+\alpha) (N-1)\over a} $. Distinguishing the cases ${d\over dr} ( |w^\prime|^\alpha w^\prime)>0$ and ${d\over dr} ( |w^\prime|^\alpha w^\prime) <0$ on the right of $r_o$ and arguing as we already did before we obtain that $w$ satisfies $${d\over dr} (|w^\prime |^\alpha w^\prime(r) r^{N^+}) \leq -{(1+\alpha) r^{N^+} |w|^\alpha w(r) \over a}$$ Then defining $$y(r) = r^{N^+}{ |w^\prime |^\alpha w^\prime(r) \over |w|^\alpha w(r)} $$ one has \begin{equation}\label{eqb}y^\prime(t) + {(\alpha+1)|y(r)|^{\alpha+2} \over r^{(N^+)(\alpha+1)}} + {(\alpha+1)r^{N^+}\over a} \leq 0. \end{equation} Hence integrating between $r_o$ and $t$ one gets for some constant $c_1>0$ $$|y(t)|= -y(t) \geq c_1 t^{N^++1}. $$ Let $k(t) = \int_{r_o}^t {|y|^{\alpha+2}(r) \over r^{N^+(\alpha+1)}} dr \geq c t^{N^++ \alpha+3}$. From the equation (\ref{eqb}) integrated between $r_o$ and $t$ , using $$k^\prime (t) = {|y|^{\alpha+2}(t) \over t^{N^+(\alpha+1)}}, $$ we get $$(\alpha+1)^{\alpha+2}k^{\alpha+2}(t) \leq k^\prime(t) t^{N^+(\alpha+1)}, $$ hence for some positive constant $c_2$ $$k^{-(\alpha+1)}(t) -k^{-(\alpha+1)} (s)\geq c_2( t^{-N^+(\alpha+1)+1}-s^{-N^+(\alpha+1)+1})$$ for $s>t$. 
Letting $s$ go to infinity and using $\lim k(t) = +\infty $, one derives that $$k^{-(\alpha+1)}(t) \geq c_2t^{-N^+(\alpha+1)+1}, $$ which is a contradiction with $k(t) \geq c_1t^{N^++ \alpha+3}$. We have obtained that $w$ is oscillatory. This ends the proof of proposition \ref{proposc}. For the sake of completeness, we give some property of the function $w$ inherited from the property of the eigenfunctions in the viscosity sense \cite{BDr} : \begin{lemme} Between two successive zeros of $w$, there exists a unique zero of $w^\prime$. \end{lemme} Proof Suppose that $w$ is constant sign on $B(0,t)\setminus\overline{B(0, s)}$, $s<t$ and $w(s)= w(t)=0$, then $w_1 (x) = w(\mu^{1\over 2+\alpha} x)$ is an eigenfunction for one of the first demi-eigenvalue $\mu = \lambda^+(B(0,t)\setminus\overline{B(0, s)})$ if $w>0$, or $ \mu = \lambda^-(B(0,t)\setminus\overline{B(0, s)})$ if $w<0$. Then by the uniqueness of the first eigenfunction in the radial case, if $w>0$, by remark \ref{remaru}, $w$ is increasing on $[s, r_w]$ and decreasing on $[r_w, t]$ and $r_w$ is the unique point on which $w^\prime =0$. We argue in the same manner when $w<0$, using the fact that in that case $w$ is decreasing on $[s, r_w]$ and increasing on $[r_w, t]$. In the sequel we shall denote by $w^+$ the radial solution given by proposition \ref{propex} of $$\left\{ \begin{array}{cc} |w^\prime |^\alpha {\cal M}_{a, A} (r, w^\prime, w^") = -|w|^\alpha w& \\ w(0) = 1,\ w^\prime (0) = 0.& \end{array}\right.$$ And we denote by $w^-$ the radial solution of $$\left\{ \begin{array}{cc} |w^\prime |^\alpha {\cal M}_{a, A} (r, w^\prime, w^") = -|w|^\alpha w&\ \\ w(0) = -1,\ w^\prime (0) = 0.& \end{array}\right.$$ The proof of the existence and uniqueness of $w^-$ is obtained by the same arguments used for $w^+$. The results in proposition \ref{proposc} can be adapted to the case of $w^-$, and then we also get that $w^-$ is oscillatory. \section{Eigenvalues and eigenfunctions} In this section we prove the existence of an infinite numerable set of eigenvalues for the radial operator defined in equation (\ref{eqpucci}). These eigenvalues are simple and isolated. We begin with some properties of the eigenfuntions. \begin{prop}\label{propuprime0} Suppose that $u$ is a radial viscosity solution of $$\left\{ \begin{array}{lc} \tilde F(r, u^\prime , u^") = -\mu |u|^\alpha u& {\rm in} \ B(0,1)\\ u(1) = 0, u(0) >0. & \ \end{array}\right.$$ Then $0$ is a local maximum for $u$, $u$ is ${\cal C}^2$ on a neighborhood $]0, r_o[$ of zero, is ${\cal C}^1$ on $[0, r_o]$ and $u^\prime (0) = 0$. \end{prop} Proof of proposition \ref{propuprime0} First let us note that $\mu >0$, because if not the maximum principle would imply that $u\leq 0$. Since $u$ is continuous there exists some neighborhood $B(0, r_o)$ on which $$\tilde F(r, u^\prime , u^") <0$$ Then using the comparison principle for such operators, and remarking that positive constants are sub-solutions, one gets that $u(r) \geq u(r_1)$ on $B(0, r_1)$, if $r_1< r_o$. This implies in particular that $u$ is decreasing from zero, and $0$ is a local maximum. We now prove that $u$ is ${\cal C}^1$ around zero and ${\cal C}^2$ on a neighborhood of $0$, except on $0$. Let $r_1$ be the first zero of $u$. Then $u>0$ on $B(0, r_1)$ and $\lambda^+(B(0, r_1) )= \mu, $ by proposition \ref{mupropre}. Let $w^+$ be the ${\cal C}^1$ solution in proposition \ref{propex} and $\beta^+_1$ its first zero, (it exists according to proposition \ref{proposc}). 
Define $$ v(r) = w^+({\beta_1^+ r\over r_1}).$$ Then $v>0$ on $B(0, r_1)$ and $v$ is an eigenfunction in $B(0, r_1)$ for the eigenvalue $\left({\beta_1^+\over r_1}\right)^{2+\alpha}$, in particular $\lambda^+(B(0, r_1)) =\mu = \left({\beta_1^+\over r_1}\right)^{2+\alpha}$, and by the uniqueness of the first positive radial eigenfunction in proposition \ref{propun}, there exists some constant $c>0$ such that $ u = c v$ on $B(0, r_1)$. In particular $u$ is ${\cal C}^2$ at each point where $u^\prime$ is different from zero and ${\cal C}^1$ everywhere on $B(0, r_1)$. This proves in particular, since $u$ is ${\cal C}^1$ on $B(0, r_1)$ and $u$ has a maximum at $0$, that $u^\prime (0)=0$. Of course the symmetric result holds for $u$ such that $u(0) <0$. We now present an improvement of proposition \ref{propinc} which will be used in the proof of corollary \ref{cormuk}. \begin{prop}\label{propts} Suppose that $s<t<1$ and that there exist eigenfunctions for the annuli $B(0, 1) \setminus\overline{B(0,s)}$ and $B(0,1)\setminus \overline{B(0, t)}$ which are ${\cal C}^2$ at each point where their first derivative is different from $0$, and ${\cal C}^1$ everywhere. Then $\lambda^\pm (B(0, 1) \setminus\overline{B(0,s)})< \lambda^\pm (B(0, 1) \setminus\overline{B(0,t)})$. \end{prop} Proof Suppose by contradiction that $\lambda^\pm (B(0, 1) \setminus\overline{B(0,s)})= \lambda^\pm (B(0, 1) \setminus\overline{B(0,t)})$, which we shall denote for simplicity by $\lambda^\pm$. Let $\varphi$ and $u$ be solutions of the equation $$ \tilde F(r, \varphi^\prime , \varphi^")+ \lambda^\pm |\varphi|^\alpha \varphi = 0$$ which are ${\cal C}^2$ at each point where their first derivative is different from $0$, and ${\cal C}^1$ everywhere, with $\varphi = 0$ on $\{r=1\}$ and $\{r= s\}$, and $u=0$ on $\{r=1\}$ and $\{r=t\}$. To fix ideas we also assume that $\varphi$ and $u$ are positive (and then we replace $\lambda^\pm$ by $\lambda^+$). Using the same arguments as in propositions \ref{propk} and \ref{propmM}, since $\varphi (1) = u(1)=0$ and $u^\prime (1) <0$, $\varphi^\prime (1)<0$, by uniqueness there exists some constant $c>0$ such that $\varphi = cu$ as long as $\varphi^\prime$ or $u^\prime$ is different from zero. By remark \ref{remaru} there exists exactly one point $r_u$ in $]t,1[$ for which $u^\prime (r_u)=0$, and it is a global strict maximum for $u$ on $]t,1[$. By uniqueness, $\varphi^\prime (r_u)=0$ and $r_u$ must also be a global strict maximum for $\varphi$ on $]t,1[$. Then the equation satisfied by $u$ and $\varphi$ on the left of $r_u$ is equation (\ref{eq3}). By local uniqueness of solutions to (\ref{eq3}) one gets that $u = c \varphi$ on the left of $r_u$, and this is true as long as $u^\prime$ or $\varphi^\prime $ is different from $0$, hence at least on $]t,1[$. We get a contradiction since $u = 0$ on $\{r=t\}$ and $\varphi(t)\neq 0$. We now prove the existence of a countable set of eigenvalues. The result in proposition \ref{proposc} implies that there exists an increasing sequence $(\beta_k^\pm)_{k\geq 1}$ of zeros of $w^\pm$. We now consider $u_k^\pm(r) = w^\pm(\beta_k^\pm r)$. Then $u_k^\pm$ is an eigenfunction on $B(0,1)$ for the eigenvalue $\mu_k^\pm :=(\beta_k^\pm)^{\alpha+2}$ and it has $k-1$ zeros inside the ball, say $r_i \equiv{\beta_i^\pm\over \beta_k^\pm}$, $i\in [1, k-1]$. We need to prove that they are the only eigenvalues: \begin{prop}\label{propri} The set of eigenvalues of the operator is the set $\{\mu_k^\pm, \ k\geq 1\}$.
These eigenvalues are simple in the following sense: suppose that $v$ is some eigenfunction for the eigenvalue $\mu_k^\pm$ which is ${\cal C}^1$, and ${\cal C}^2$ at each point where the first derivative is different from $0$; then there exists some constant $c>0$ such that $v =c w^\pm((\mu_k^\pm)^{1\over 2+\alpha} \cdot)$. \end{prop} Proof of proposition \ref{propri} Let $\mu$ be an eigenvalue. Let $v$ be a corresponding eigenfunction, which to fix ideas we suppose satisfies $v(0)>0$. Since $v$ is radial and ${\cal C}^1$, necessarily $v^\prime (0) = 0$. Let $z(\cdot) = {v(\mu^{-1\over 2+\alpha}\cdot)\over v(0)}$. Then $z$ satisfies equation (\ref{eq2}) and by uniqueness $z = w^+$ on $[0, \mu^{1\over 2+\alpha})$. This implies that $\mu^{1\over 2+\alpha}$ is one of the zeros of $w^+$. This also proves the simplicity of the eigenvalue $\mu$. The fact that the eigenvalues are isolated is a consequence of the properties of the zeros of $w^+$. The following corollary is not necessary for the present paper; it will be useful for the bifurcation results announced in the final concluding section: \begin{cor}\label{cormuk} There is uniqueness (up to a positive multiplicative constant) of the $k$-th eigenfunction. As a consequence one has $\mu_k^-< \mu_{k+1}^+$ and $\mu_k^+< \mu_{k+1}^-$. \end{cor} Proof It is sufficient to prove that $\beta_k^+ < \beta_{k+1}^-$ and $\beta_k^-< \beta_{k+1}^+$. We begin by proving that $\beta_1^- < \beta_2^+$. One has $$\lambda^-(]\beta_1^+, \beta_2^+[) = \lambda^- (]0, \beta_1^-[)=1. $$ Suppose first that $ \beta_2^+< \beta_1^-$; then this contradicts proposition \ref{propinc}. If $\beta_2^+= \beta_1^-$, one has a contradiction with proposition \ref{propts}. We consider the case $k\geq 2$. Suppose by contradiction that $\beta_k^- < \beta_{k+1}^+< \beta_{k+2}^+\leq \beta_{k+1}^-$, and assume first that $\beta_{k+2}^+< \beta_{k+1}^-$. In that case one would have $$\lambda^\epsilon (]\beta_{k+1}^+, \beta_{k+2}^+[) = \lambda^\epsilon (]\beta_k^-, \beta_{k+1}^-[)=1, $$ where $\epsilon = {\rm sign}\, (-1)^{k+1}$; this would then contradict proposition \ref{propinc}. If we assume instead that $\beta_{k+2}^+ = \beta_{k+1}^-$, this contradicts proposition \ref{propts}. In the same manner one can prove that $\beta_{k+1}^+ < \beta_{k+2}^-$. For the sake of completeness we finish this section with an additional property of the eigenvalues. This result is analogous to a result in \cite{BEQ}. \begin{prop}\label{propgap} The gap between the first two half-eigenvalues is larger than the gap between the second ones: $${\mu_1^-\over \mu_1^+} \geq {\mu_2^-\over \mu_2^+}. $$ \end{prop} Proof of proposition \ref{propgap} Let $\varphi_i^\pm$, $i = 1,2$, be the eigenfunctions associated with $\mu_i^\pm$, with $\varphi_i^\pm (0) = \pm 1$. Let $r^+$ be the first zero of $\varphi_2^+$, $r^-$ the first zero of $\varphi_2^-$. We prove that $ r^-\geq r^+$. Indeed, suppose by contradiction that $r^-< r^+$, and define $$A^+ = \{ r, \ r^+ < r< 1\}$$ and $$A^- = \{ r, \ r^- < r< 1\}.$$ Then $A^+ \subset A^-$ and then $$\lambda^-(A^+) = \mu_2^+ \geq \lambda^-(A^-) > \lambda^+ (A^-) = \mu_2^-$$ and $$\lambda^+ (B_{r^+}) = \mu_2^+ < \lambda^+ (B_{r^-})< \lambda^- (B_{r^-}) = \mu_2^-.$$ We have obtained a contradiction.
Moreover let us consider $$\psi (x) = \varphi_2^+ (r^+ x).$$ Then $\psi$ is a radial solution on $B(0,1)$ of $$ |\psi^\prime |^\alpha {\cal M}_{a, A} (r,\psi^\prime , \psi^")= - (r^+)^{2+\alpha} \mu_2^+|\psi|^\alpha \psi, $$ which implies, since $\psi (1) = 0$, that $(r^+)^{2+\alpha} \mu_2^+= \mu_1^+$, by the definition of the first half-eigenvalue. In the same manner $$ (r^-)^{2+\alpha} \mu_2^-= \mu_1^-,$$ and then $${\mu_1^-\over \mu_2^-} \geq {\mu_1^+\over \mu_2^+}, $$ which yields the result. \section{The continuity of the spectrum with respect to the parameters} In this section we let $\alpha\in ]-1, \infty[$ and $a\in [0, A]$ vary, and for that reason we denote by $\tilde F_{\alpha,a}$ the operator $\tilde F$ defined before. We denote by $\mu_k^\pm (\alpha, a)$ the corresponding eigenvalues. In order to prove the continuity of the map $(\alpha, a)\mapsto \mu_k^\pm(\alpha, a )$, we begin by establishing the boundedness of the eigenvalues $\mu_k^\pm (\alpha, a)$ when $\alpha$ belongs to some compact subset of $]-1, \infty[$ and $a\in [0, A]$. \begin{prop}\label{propbc} We suppose that $ a=A=1$. Let $\lambda_{eq, \alpha} (]c,b[)$ be the first ``radial'' eigenvalue for the set $B(0, b)\setminus \overline{B(0,c)}$ and for the operator $u\mapsto - {d\over dr} \left({|u^\prime |^\alpha u^\prime \over 1+\alpha}\right) - {N-1\over r} |u^\prime |^\alpha u^\prime$. Then there exists some continuous function $\varphi(\alpha)$, bounded on every compact set of $[-1, \infty [$, such that $$\lambda_{eq, \alpha}(]c,b[)\leq \varphi(\alpha ) (b-c)^{-2-\alpha}.$$ \end{prop} \begin{cor}\label{corbc} We assume that $a< A$. Then $$\lambda^+_{a,A, \alpha} (]c,b[)\leq a\varphi(\alpha ) (b-c)^{-2-\alpha}.$$ \end{cor} \begin{cor}\label{cormuk} For all $k\geq 1$ $$\mu_k^+(\alpha, a) (B(0,1)) \leq a \varphi(\alpha) k^{2+\alpha},$$ and $$\mu_k^-(\alpha, a) (B(0,1)) \leq a \varphi(\alpha) (k+1)^{2+\alpha}.$$ \end{cor} Proof of proposition \ref{propbc} Let us note that one can also use the following result for general operators satisfying the hypothesis in section 2, proved in \cite{BD3}: {\it There exists some constant $C$ which depends on $a$, $A$, $N$ such that if $R$ is the radius of some ball included in $\Omega$ then } $$\lambda^\pm (\Omega ) \leq {C\over R^{\alpha+2}}.$$ But we shall give a more precise estimate here: for the radial case, one can easily see that $$\lambda_{eq, \alpha}= \inf_{u\in W_0^{1, 2+\alpha} (]c,b[)}{ \int_c^b |u^\prime |^{2+\alpha} (r) r^{(N-1)(1+\alpha)} dr \over \int_c^b |u |^{2+\alpha} (r) r^{(N-1)(1+\alpha)} dr }.$$ Let us consider the function $u(r) = (r-c) (b-r)$. We need to get an upper bound for $$I = \int_c^b |2r-(c+b)|^{2+\alpha} r^{(N-1)(1+\alpha)} dr, $$ and to get a lower bound for $$ J = \int_c^b (r-c)^{2+\alpha} (b-r)^{2+\alpha} r^{(N-1)(1+\alpha)} dr. $$ For the first integral we use the inequality $r^{(N-1)(1+\alpha)} \leq 2^{|1-(N-1)(1+\alpha)|}\left(\left|r-{c+b\over 2}\right|^{(N-1)(1+\alpha)} + \left({c+b\over 2}\right)^{(N-1)(1+\alpha)}\right) $. In the following $c(\alpha, N)$ is some constant which can vary from one line to another but is bounded for $\alpha \in [-1, M]$. We obtain that \begin{eqnarray*} I &\leq& c(\alpha, N) \left(\int_c^b |r-\left({c+b\over 2}\right)|^{2+\alpha + (N-1)(1+\alpha)}dr\right.\\ &+& \left. \left({c+b\over 2}\right)^{(N-1)(1+\alpha)} \int_c^b|r-\left({c+b\over 2}\right)|^{2+\alpha} dr \right)\\ &\leq &c(\alpha , N) \left((b-c)^{3+\alpha + (N-1)(1+\alpha)}+ (c+b)^{(N-1)(1+\alpha)}(b-c)^{3+\alpha}\right) \\ &\leq & c(\alpha, N) (b-c)^{3+\alpha} (c+b)^{(N-1)(1+\alpha)}.
\end{eqnarray*} To minorize $J$ we use $$ r^{(N-1)(1+\alpha)} \geq 2^{-|1-(N-1)(1+\alpha)|}\left((r-c)^{(N-1)(1+\alpha)} + c^{(N-1)(1+\alpha)}\right) $$ and then \begin{eqnarray*} J&\geq & c(\alpha, N) \int_c^b \left((r-c)^{2+\alpha + (N-1)(1+\alpha)} (b-r)^{2+\alpha} + c^{(N-1)(1+\alpha)} (r-c)^{2+\alpha}(b-r)^{2+\alpha} \right) dr \\ &\geq& c(\alpha, N) (b-c)^{5+ 2\alpha + (N-1)(1+\alpha)} B( N(1+\alpha)+2, 3+\alpha)\\ &+ &c^{(N-1)(1+\alpha)} (b-c)^{5+ 2\alpha} B(3+\alpha, 3+\alpha)\\ &\geq & c(\alpha, N) (b-c)^{5+2\alpha} b^{(N-1)(1+\alpha)} \end{eqnarray*} where, in the previous lines, $B$ denotes the Euler Beta function. We have obtained the result. Proof of corollary \ref{corbc} We use the inequality in proposition \ref{proplam} $$\lambda^+ (B(0, b)\setminus \overline{B(0, c)}) \leq a \lambda_{eq} (B(0, b)\setminus \overline{B(0, c)}). $$ Proof of corollary \ref{cormuk} Let us recall that we have denoted by $(r_i)_i$ the zeros of the eigenfunction $ \varphi_k^+$. $\mu_k^+ (B(0,1))$ coincides with $\lambda^+ (B(0, r_1))$ and with $\lambda^+ (B(0, r_{i+1})\setminus \overline{B(0, r_i)})= \mu_1^+ (B(0, r_{i+1})\setminus \overline{B(0, r_i)})$, for all $i \in [ 1, k]$. Now, either $r_1\geq {1\over k}$, or there exists $i_o\geq 2$ such that $r_{i_o+1}-r_{i_o} \geq {1\over k}$. In each case we get the result. Concerning $\mu_k^-$ we use the inequality $\mu_k^-\leq \mu_{k+1}^+$ in corollary \ref{cormuk}. \begin{prop}\label{contalpha} Let $M>0$ be given. Suppose that $(\alpha_n, a_n) \rightarrow (\alpha, a)\in ]-1, M[\times [0, A]$. Then $\mu_k^\pm (\alpha_n,a_n)\rightarrow \mu_k^\pm (\alpha,a )$. \end{prop} Proof of proposition \ref{contalpha} By corollary \ref{cormuk}, the sequence $(\mu_k^\pm (\alpha_n, a_n))_n$ is bounded, so we can extract from it a subsequence, denoted in the same manner for simplicity, such that $\mu_k^\pm (\alpha_n,a_n) \rightarrow \mu$, for some $\mu\in {\rm I}\!{\rm R}^+$. We fix the integer $k$. Let $\varphi_n$ be such that $\varphi_n(0)=1$, and $$\left\{ \begin{array}{lc} \tilde F_{\alpha_n, a_n} (r, \varphi_n^\prime, \varphi_n^") + \mu_k^+ (\alpha_n, a_n) |\varphi_n|^{\alpha_n}\varphi_n= 0&\ {\rm in } \ B(0,1)\\ \varphi_n (1)= 0 & \end{array} \right. $$ Using the compactness results in corollary \ref{comp} one can extract from $(\varphi_n)$ a subsequence, which will be denoted in the same manner for simplicity, which converges uniformly to a viscosity solution $\varphi$ of $$\left\{ \begin{array}{lc} \tilde F_{\alpha,a } (r, \varphi^\prime, \varphi^") +\mu |\varphi|^{\alpha}\varphi= 0&\ {\rm in } \ B(0,1) \\ \varphi (1)= 0 & \end{array} \right. $$ By the uniform convergence, $\varphi$ is not identically zero and $\varphi(0)=1$. Then $\mu$ is some eigenvalue. We must prove first that $\varphi$ has $k-1$ zeros, and secondly that $\varphi$ is ${\cal C}^1$ and ${\cal C}^2$ at every point where the first derivative is different from zero. Let $j$ be such that $(r_i)_{1\leq i \leq j-1}$ are the zeros of $\varphi$. By remark \ref{remhopf} in section 3, $\varphi$ changes sign at each of them. As a consequence there exists $\delta >0$ such that for all $i\in [1, j-1]$, on $[r_i-\delta, r_i+\delta]$, $\varphi$ has no other zero than $r_i$ and on $[r_{i-1}+\delta, r_i-\delta]$ $\varphi $ has no zero. From $\varphi (r_i-\delta) \varphi (r_i+ \delta) <0$, one has for $n$ large enough $\varphi_n (r_i-\delta) \varphi_n (r_i+ \delta) <0$, and then $\varphi_n$ has at least one zero in $]r_i-\delta, r_i+\delta[$.
In the same manner there exists $m>0$ such that $|\varphi|> m$ on every $[r_{i-1}+\delta, r_i-\delta]$, which implies, by the uniform convergence of $\varphi_n$ towards $\varphi$, that $\varphi_n$ cannot have a zero in this interval. As a consequence $k\geq j$. Moreover by the strict monotonicity of $\varphi$ on $[r_i-\delta, r_i+\delta]$, $\varphi_n$ is also monotone for $n$ large enough. This implies in particular the uniqueness of the zero of $\varphi_n$ on that interval. Finally $j = k$. It remains to prove that $\varphi$ is ``regular'', i.e. that $\varphi$ is ${\cal C}^2$ at each point where the first derivative is different from zero, and ${\cal C}^1$ everywhere. Suppose that $\bar r< \bar t$ are two successive zeros of $\varphi$. Then for $n$ large enough there exist two successive zeros $r_n< t_n$ of $\varphi_n$ which converge respectively to $\bar r$ and $\bar t$. Moreover $\varphi_n$ (respectively $\varphi$) has constant sign on $]r_n, t_n[$ (respectively $]\bar r, \bar t[$). One can assume without loss of generality that this sign is negative. We need to prove that $\varphi$ is ``regular'' on $[\bar r,\bar t]$. Let $r_n^\prime $ be the unique zero of $\varphi_n^\prime$ on $]r_n, t_n[$. Then $\varphi_n$ is the unique fixed point on $]r_n, r^\prime_n[$ of the operator $T_n$ defined as $$T_n(w)(r)= \varphi_n(r_n^\prime )-\int_{r_n^\prime} ^r \varphi_{p^\prime} \left({(1+\alpha_n)\mu_k^+ (\alpha_n, a_n, A) \over A s^{N_n^-}} \int_{r_n^\prime }^s |w|^{\alpha_n} w(t) t^{N_n^-} dt\right) ds, $$ where $N_n^- = {a_n(N-1)(1+\alpha_n)\over A}$. One can prove, as is done in the proof of proposition \ref{propr1}, that there exists some neighborhood $]r^\prime _n-\delta , r_n^\prime [$, with $\delta$ which does not depend on $n$, such that on the left of $r_n^\prime$, $\varphi_n^\prime <0$ and $\varphi_n^{\prime\prime} >0$. In the same manner $\varphi_n$ is the unique fixed point on $]r_n^\prime, r_n^\prime+\delta[$ of the operator $T_n$ defined as $$T_n(w) (r)= \varphi_n(r_n^\prime )-\int_{r_n^\prime}^r \varphi_{p^\prime} \left((1+\alpha_n){\mu_k^+ (\alpha_n, a_n, A) \over A s^{N_{0,n}} }\int_{r_n^\prime }^s |w|^{\alpha_n} w(t) t^{N_{0,n}} dt\right) ds, $$ where $N_{0,n} = (N-1)(1+\alpha_n)$, and there exists some $\delta>0$ which does not depend on $n$, such that on $]r^\prime_n, r_n^\prime + \delta[$, $\varphi_n^\prime >0$ and $\varphi_n^{\prime\prime} >0$. Using remark \ref{remaru}, there exists exactly one point $r^\prime$ such that $\varphi$ is decreasing on $]\bar r , r^\prime[$ and increasing on $]r^\prime, \bar t[$; hence, since $\varphi_n$ converges uniformly to $\varphi$, one gets that $r_n^\prime$ converges to $r^\prime$. From the definition of $T_n$ one sees that $\varphi_n$ converges uniformly on $]r^\prime-{3\delta\over 4} , r^\prime[$ to the solution $\psi$ on that interval of $T(\psi) = \psi$, where $$T(w)(r) = \varphi ( r^\prime )-\int_{r^\prime} ^r \varphi_{p^\prime} \left({\mu (1+\alpha)\over A s^{N^-} }\int_{ r^\prime }^s |w|^{\alpha} w(t) t^{N^-} dt\right)ds. $$ This implies that $ \varphi$ is a ${\cal C}^2$ solution on $] r^\prime-{3\delta\over 4}, r^\prime[$. We do the same on $]r^\prime, r^\prime +{ 3\delta\over 4} [$. We now consider the equation on $]\bar r, r^\prime -{\delta\over 2}[$.
As soon as $n$ is large enough so that $\bar r > r_{n-1}^\prime $, on that interval $\varphi_n$ satisfies $$(\varphi_n^\prime, \varphi_n^{\prime\prime}) = f_n (\varphi_n, \varphi_n^\prime)$$ where $f_n= (f_{1,n}, f_{2,n})$, $f_{1,n} (r, y_1, y_2) = y_2$, and $$f_{2,n}(r, y_1, y_2) = M_n\left(-{m_n(y_2) (N-1)\over r }- {|y_1|^{\alpha_n} y_1 \over |y_2|^{\alpha_n}}\right), $$ where $M_n$ and $m_n$ are respectively the functions $$ M_n (x)= \left\{\begin{array}{c} {x\over A}\ {\rm if} \ x>0\\ {x\over a_n} \ {\rm if } \ x<0, \end{array}\right. $$ and $$ m_n (x)= \left\{\begin{array}{c} { A x }\ {\rm if} \ x>0\\ {a_n x } \ {\rm if }\ x<0. \end{array}\right.$$ It is clear that $f_n$ is uniformly Lipschitz continuous on $]\varphi(\bar r), \varphi (r^\prime)[\times ]\varphi^\prime (\bar r), \varphi^\prime(r^\prime -{\delta\over 2})[$. Then $\varphi_n$ converges in ${\cal C}^1$ (even ${\cal C}^2$) to some solution $\psi$ on $]\bar r, r^\prime -{\delta\over 2}[$ of $$(\psi^\prime, \psi^") = f(\psi, \psi^\prime)$$ with $f= (f_{1}, f_{2})$, $f_{1} (r, y_1, y_2) = y_2$, $$f_{2}(r, y_1, y_2) = M\left(-{m(y_2) (N-1)\over r }- {|y_1|^{\alpha} y_1 \over |y_2|^{\alpha}}\right), $$ and $M$ and $m$ are respectively the functions $$ M (x)= \left\{\begin{array}{c} {x\over A}\ {\rm if} \ x>0\\ {x\over a} \ {\rm if } \ x<0, \end{array}\right.$$ and $$ m(x)= \left\{\begin{array}{c} { A x }\ {\rm if} \ x>0\\ {a x } \ {\rm if }\ x<0, \end{array}\right.$$ with the condition $\psi (\bar r)=0$, $\psi^\prime (\bar r) = \varphi^\prime (\bar r)$. This implies that $\varphi$ is ${\cal C}^2$ on $]\bar r, r^\prime -{\delta\over 2}[\,\cup\, ]r^\prime -{3\delta\over 4}, r^\prime[$. We can do the same on $]r^\prime + {\delta\over 2}, \bar t[$ and get in that way the regularity of $\varphi$ on $[r^\prime, \bar t[$. In fact the proof gives the regularity of $\varphi$ on an open neighborhood of $[\bar r, \bar t]$. Since this can be repeated on each interval delimited by two zeros of $\varphi$, one gets the regularity of $\varphi$ on $B(0,1)$. As a consequence of proposition \ref{propri} we have obtained that $\mu = \mu_k^+$. Since $\mu_k^+(\alpha_n, a_n)$ has a unique cluster point, we get that the whole sequence converges to $\mu_k^+$. \section{Conclusion and supplementary results} Let $K_{\alpha,a } $ be the operator defined on ${\cal C}(\overline{\Omega})$ by: for $f\in {\cal C} (\overline{\Omega})$, $K_{\alpha,a}(f)$ is the unique $v\in {\cal C} (\overline{\Omega})$ solution of $$\left\{\begin{array}{cc} \tilde F_{\alpha,a} (r, v^\prime , v^") - |v|^\alpha v =- f& {\rm in}\ \Omega\\ v = 0& \ {\rm on } \ \partial \Omega. \end{array}\right. $$ The operator $K_{\alpha,a}$ is well defined since $\alpha>-1$. Defining, for $\mu>0$ given, $K_{\alpha, a, \mu} (u)=K_{\alpha,a} ( (\mu+1)|u|^\alpha u) $, one can note that $K_{\alpha, a, \mu}$ possesses nontrivial fixed points when $\mu$ is an eigenvalue, namely the associated eigenfunctions. We will be able to derive from the continuity results in the last section some results about the degree of the operator $K_{\alpha, a, \mu} $ as a function of the position of $\mu$ with respect to the eigenvalues $\mu_k^\pm$. Next we shall establish some bifurcation results for the equations defined as follows. Let $f$ be a function $(\mu, s)\mapsto f(\mu,s)$ which is ``super-linear'' in $s$, uniformly with respect to $\mu$, in the sense that $$\lim_{s\rightarrow 0}\frac{f(\mu,s)}{|s|^{1+\alpha}}=0. $$ We also assume that $f$ is locally bounded and continuous in all its variables.
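For instance, $f(\mu, s)= |s|^{q-1} s$ with a fixed exponent $q>1+\alpha$ satisfies these assumptions: it is continuous, locally bounded, and $|f(\mu,s)|/|s|^{1+\alpha}= |s|^{q-1-\alpha}\rightarrow 0$ as $s\rightarrow 0$.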
Then we shall consider the problem \begin{equation}\label{prob} \left\{\begin{array}{cc} \tilde F_{\alpha,a } (r, u^\prime, u^") + \mu |u|^\alpha u + f(\mu,u)=0&\ {\rm in} \ \Omega \\ u=0 &{\rm on } \ \partial \Omega, \end{array}\right. \end{equation} for which we shall prove bifurcation results, completing the results already obtained in \cite{BDb}. This will be the object of a forthcoming paper. \end{document}
\begin{document} \author[labri]{Valentin F\'eray}\ead{[email protected]} \author[lix]{Ekaterina A. Vassilieva}\ead{[email protected]} \address[labri]{LaBRI, Universit\'e Bordeaux 1, 351 cours de la lib\'eration, 33 400 Talence, France} \address[lix]{LIX, Ecole Polytechnique, 91128, Palaiseau, France} \title{Bijective enumeration of some colored permutations given by the product of two long cycles} \begin{abstract} Let $\gamma_n$ be the permutation on $n$ symbols defined by $\gamma_n = (1\ 2\ \ldots\ n)$. We are interested in an enumerative problem on colored permutations, that is permutations $\beta$ of $n$ in which the numbers from $1$ to $n$ are colored with $p$ colors such that two elements in a same cycle have the same color. We show that the proportion of colored permutations such that $\gamma_n \beta^{-1}$ is a long cycle is given by the very simple ratio $\frac{1}{n- p+1}$. Our proof is bijective and uses combinatorial objects such as partitioned hypermaps and thorn trees. This formula is actually equivalent to the proportionality of the number of long cycles $\alpha$ such that $\gamma_n\alpha$ has $m$ cycles and Stirling numbers of size $n+1$, an unexpected connection previously found by several authors by means of algebraic methods. Moreover, our bijection allows us to refine the latter result with the cycle type of the permutations. \end{abstract} \begin{keyword} Colored Permutations, Bipartite Maps, Long Cycle Factorization \end{keyword} \maketitle \section{Introduction} The question of the number of factorizations of the long cycle $(1\ 2\ \ldots\ n)$ into two permutations with given number of cycles has already been studied via algebraic or combinatorial\footnote{It can be reformulated in terms of unicellular bipartite maps with given number of vertices, see paragraph \ref{subsect:partitioned_maps}.} methods \cite{ Adrianov1998serie_carte_bicolore, SchaefferVassilieva:factorizations_long_cycle}. In these papers, the authors obtain nice generating series for these numbers. Note that the combinatorial approach has been refined to state a result on the number of factorizations of the long cycle $(1\ 2\ \ldots\ n)$ in two permutations with given types \cite{MoralesVassilieva:factorizations_long_cycle}.\\ Unfortunately, even though generating series have nice compact forms, the formulae for one single coefficient are much more complicated (see for example \cite{GoupilSchaefferGenusExpansion}). The case where one factor has to be also a long cycle is particularly interesting. Indeed, the number $B'(n,m)$ of permutations $\beta$ of $[n]$ with $m$ cycles, such that $(1\ 2\ \ldots\ n) \beta^{-1}$ is a long cycle, is known to be the coefficient of some linear monomial in Kerov's and Stanley's character polynomials (see \cite[Theorem 6.1]{Biane2003Kerov} and \cite{Stanley2003rectangles,Feray:stanley_formula}). These polynomials express the character value of the irreducible representation of the symmetric group indexed by a Young diagram $\lambda$ on a cycle of fixed length in terms of some coordinates of $\lambda$.\\ The numbers $B'(n,m)$ admit a very compact formula in terms of Stirling numbers. \begin{Th}[\cite{KwakLee1993}] \label{th:stanley} Let $m \leq n$ be two positive integers with the same parity. Then \begin{equation}\label{eq:stanley} \frac{n(n+1)}{2} B'(n,m) = s(n+1,m), \end{equation} where $s(n+1,m)$ is the unsigned Stirling number of the first kind, that is the number of permutations of $[n+1]$ with $m$ cycles. 
\end{Th} This formula has been found independently by several authors: J.H. Kwak and J. Lee \cite[Theorem 3]{KwakLee1993}, then D. Zagier \cite[Application 3]{ZagierCycles} and finally R. Stanley \cite[Corollary 3.4]{Stanley2009product_cycles}. Very recently, a combinatorial proof of this statement has been given by R. Cori, M. Marcus and G. Schaeffer \cite{CoriMarcusSchaeffer2010}. This paper is focused on an equivalent statement in terms of {\it colored} (or partitioned) permutations. \begin{Def}\label{DefColoredPerm} A colored permutation of $n$ with $p$ colors is a couple $(\beta,\varphi)$ where: \begin{itemize} \item $\beta$ is a permutation of $n$; \item $\varphi$ is a surjective map from $\{1,\dots,n\}$ to a set $C$ of colors of cardinality $p$. We require that two elements belonging to the same cycle of $\beta$ have the same color. \end{itemize} In what follows, we consider that two colored permutations differing only by a bijection on the set of colors are the same object. As such, coloration can be seen as a set partition of the set of cycles of $\beta$, or as a set partition $\pi$ of $\{1,\dots,n\}$ coarser than the set partition into cycles of $\beta$ (in other words, if $i$ and $j$ lie in the same cycle of $\beta$, they must be in the same part of $\pi$). The set of colored permutations of $n$ with $p$ colors is denoted $\mathcal {C}(p,n)$. \end{Def} According to the last remark of definition \ref{DefColoredPerm}, we rather denote colored permutations $(\beta, \pi)$ where $\pi$ is a set partition coarser than the set partition into cycles of $\beta$.\\ These objects play an important role in the combinatorial study of the factorizations in the symmetric group, as it is much easier to find direct bijections for colored factorizations than it is for classical ones (see \cite{GouldenNica, GouldenSlofstra, Bernardi, SchaefferVassilieva:factorizations_long_cycle, MoralesVassilieva:factorizations_long_cycle}). Generating series of colored and classical factorizations are linked through simple formulae (Lemma \ref{LemLinkOGF}). We consider here an analogue of Theorem \ref{th:stanley} for colored permutations, that is the problem of enumerating colored permutations such that $(1\ 2\ \ldots\ n) \beta^{-1}$ is a long cycle. We obtain the following elegant result: \begin{Th}\label{th:reformulation} Let $p \leq n$ be two positive integers. Choose randomly (with uniform probability) a colored permutation $(\beta,\pi)$ in $\mathcal {C}(p,n)$. Then the probability for $(1\ 2\ \ldots\ n) \beta^{-1}$ to be a long cycle is exactly $1/(n-p+1)$. \end{Th} Given a colored permutation $(\beta,\pi)$ in $\mathcal {C}(p,n)$, the (unordered) sequence of the numbers of elements having the same color defines an integer partition of $n$ with $p$ parts, which we call the type of $(\beta,\pi)$. For any $\lambda$ integer partition of $n$, we note $\mathcal {C}(\lambda)$ the set of all colored permutations of type $\lambda$. Our main result is the following refinement of Theorem \ref{th:reformulation}: \begin{Th}[Main result]\label{th:reformulationfine} Let $p \leq n$ be two positive integers. Fix an integer partition $\lambda$ of size $n$ and length $p$. Choose randomly (with uniform probability) a colored permutation $(\beta,\pi)$ in $\mathcal {C}(\lambda)$. Then the probability for $(1\ 2\ \ldots\ n) \beta^{-1}$ to be a long cycle is exactly $1/(n-p+1)$. \end{Th} In fact, counting colored permutations and counting permutations without additional structure are two equivalent problems. 
Therefore, one can deduce from Theorem \ref{th:reformulationfine} a refinement of Theorem \ref{th:stanley}. To state this new theorem, we need to introduce a few notations. Recall that the type of a permutation is defined as the sequence of the lengths of its cycles, sorted in increasing order. With this notion, it is natural to refine the numbers $s(n+1,m)$ and $B'(n,m)$: if $\lambda \vdash n$ (\textit{i.e.} $\lambda$ is a partition of $n$), let $A(\lambda)$ (resp. $B(\lambda)$) be the number of permutations $\beta \in S_n$ of type $\lambda$ (resp. with the additional condition that $(1\ 2\ \ldots\ n) \beta^{-1}$ is a long cycle). Of course, $A(\lambda)$ is given by the simple formula $|\lambda|!/z_\lambda$, where $m_i(\lambda)$ is the number of parts $i$ in $\lambda$ and $z_\mu=\prod_i i^{m_i(\mu)} m_i(\mu)!$. Then, as Theorem \ref{th:stanley} deals with permutations of $[n]$ and $[n+1]$, we need operators on partitions which modify their size, but not their length. If $\mu$ (resp. $\lambda$) has at least one part $i+1$ (resp. $i$), let $\mu^{\downarrow (i+1)}$ (resp. $\lambda^{\uparrow (i)}$) be the partition obtained from $\mu$ (resp. $\lambda$) by erasing a part $i+1$ (resp. $i$) and adding a part $i$ (resp. $i+1$). For instance, using exponential notations (see \cite[chapter 1, section 1]{Macdo}), $(1^2 3^1 4^2)^{\downarrow (4)}=1^2 3^2 4^1$ and $(2^2 3^2 4)^{\uparrow (2)}=2^1 3^3 4^1$.\\ \begin{Th}[Corollary]\label{th:Stanleyfine} Let $m \leq n$ be two positive integers with the same parity. For each partition $\mu \vdash n+1$ of length $m$, one has: \begin{equation} \frac{n+1}{2} \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i\ m_i(\lambda)\ B(\pnv\lambda) = A(\mu) = \frac{(n+1)!}{z_\mu}. \end{equation} \end{Th} From this result, one can immediately recover Theorem \ref{th:stanley} by summing over all partitions $\mu$ of length $m$ and size $n+1$. Indeed, \begin{multline*} \sum_{\mu \vdash n+1 \atop \ell(\mu)=m} \frac{n+1}{2} \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i\ m_i(\lambda)\ B(\pnv\lambda) = \frac{n+1}{2} \sum_{\lambda \vdash n \atop \ell(\lambda)=m} \sum_{\mu = \lambda^{\uparrow (i)}, i>0} i\ m_i(\lambda)\ B(\pnv\lambda) \\ = \frac{n+1}{2} \sum_{\lambda \vdash n \atop \ell(\lambda)=m} B(\pnv\lambda) \left(\sum_{i>0} i\ m_i(\lambda)\right) = \frac{n(n+1)}{2} \sum_{\lambda \vdash n \atop \ell(\lambda)=m} B(\pnv\lambda) = \frac{n(n+1)}{2} B'(n,m). \end{multline*} To be comprehensive on the subject, we mention that G. Boccara has found an integral formula for $B(\lambda)$ (see \cite{Boccara1980produit2cycles}), but there does not seem to be any direct link with our result. \begin{Rem}\label{rem:syst_inv} Theorem \ref{th:Stanleyfine}, written for all $\mu \vdash n+1$, gives the collection of numbers $B(\pnv\lambda)$ as solution of a sparse triangular system. Indeed, if we endow the set of partitions of $n$ with the lexicographic order, Theorem \ref{th:Stanleyfine}, written for the partition $\mu=(\lambda_1+1, \lambda_2,\lambda_3,\dots)$, gives $B(\pnv\lambda)$ in terms of the quantities $A(\mu)$ and $B(\pnv\nu)$ with $\nu > \lambda$. \end{Rem} Note that the statement of Theorem \ref{th:reformulationfine} is much nicer than Theorem \ref{th:Stanleyfine} (in particular, the fact that the ratio depends only on $|\lambda|$ and $\ell(\lambda)$ is quite surprising). This suggests that it is interesting to work with colored permutations rather than with permutations without additional structure (as it is done in \cite{CoriMarcusSchaeffer2010} for example). 
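To illustrate Theorem \ref{th:Stanleyfine} on a small case, take $n=3$ and $\mu=(2,1,1)\vdash 4$. The only partition of the form $\mu^{\downarrow (i+1)}$ with $i>0$ is $\lambda=(1,1,1)$ (obtained with $i=1$), for which $m_1(\lambda)=3$ and $B(\lambda)=1$ (the identity is the only permutation of type $(1,1,1)$, and $(1\ 2\ 3)\,\mathrm{id}^{-1}$ is indeed a long cycle); the left-hand side is thus $\frac{4}{2}\cdot 1\cdot 3\cdot 1=6$, which is the number of transpositions of $[4]$, that is $A(\mu)=4!/z_\mu$.\\
Theorem \ref{th:reformulation} can also be checked by brute force for small $n$. One possible such check is the following Python sketch (it uses only the standard \texttt{itertools} module; the helper names are arbitrary): it enumerates the colored permutations of $\mathcal{C}(p,n)$ and counts those for which $\gamma_n\beta^{-1}$ is a long cycle; for $n=3$ and $p=2$ it returns $(3,6)$, in agreement with the ratio $1/(n-p+1)=1/2$.
\begin{verbatim}
from itertools import permutations

def cycles(perm):
    # cycle decomposition of a permutation of {0, ..., n-1} given as a tuple
    n, seen, result = len(perm), set(), []
    for start in range(n):
        if start not in seen:
            cycle, x = [], start
            while x not in seen:
                seen.add(x)
                cycle.append(x)
                x = perm[x]
            result.append(tuple(cycle))
    return result

def set_partitions(items):
    # all set partitions of the list 'items', as lists of blocks
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for partition in set_partitions(rest):
        # put 'first' in an existing block, or in a new singleton block
        for i in range(len(partition)):
            yield partition[:i] + [[first] + partition[i]] + partition[i+1:]
        yield [[first]] + partition

def count(n, p):
    # returns (favorable, total) over the colored permutations in C(p, n)
    gamma = tuple((i + 1) % n for i in range(n))   # the long cycle (1 2 ... n)
    total = favorable = 0
    for beta in permutations(range(n)):
        inverse = [0] * n
        for i, image in enumerate(beta):
            inverse[image] = i
        product = tuple(gamma[inverse[i]] for i in range(n))  # gamma o beta^{-1}
        is_long_cycle = (len(cycles(product)) == 1)
        for partition in set_partitions(cycles(beta)):
            if len(partition) == p:            # colorings with exactly p colors
                total += 1
                if is_long_cycle:
                    favorable += 1
    return favorable, total

# count(3, 2) == (3, 6); the theorem predicts favorable/total == 1/(n-p+1)
\end{verbatim}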
\noindent {\em Outline of the paper.} Thanks to an interpretation of colored permutations in terms of partitioned hypermaps (Section \ref{sect:hypermaps}), we prove bijectively Theorem \ref{th:reformulationfine} in Sections \ref{sect:def_psi}, \ref{sect:inj_psi} and \ref{sect:im_psi}. Finally, in Section \ref{sect:reformulation}, we use algebraic computations in the ring of symmetric functions to show the equivalence with Theorem \ref{th:Stanleyfine}. \section{Combinatorial formulation of Theorem \ref{th:reformulationfine}} \label{sect:hypermaps} \subsection{Black-partitioned maps}\label{subsect:partitioned_maps} By definition, a {\em map} is a graph drawn on a two-dimensional oriented closed compact surface (up to deformation), i.e. a graph with a cyclic order on the incident edges to each vertex. The faces of a map are the connected components of the surface without the graph (we require that these components are isomorphic to open discs).\\ As usual \cite{JacquesHypermaps, CoriHypermaps}, a couple of permutations $(\alpha,\beta)$ in $S_n$ can be represented as a bipartite map (or hypermap) with $n$ edges labeled with integers from $1$ to $n$. In this identification, $\alpha(i)$ (resp. $\beta(i)$) is the edge following $i$ when turning around its white (resp. black) extremity. White (resp. black) vertices correspond to cycles of $\alpha$ (resp. $\beta$). In this setting, faces of the map correspond to cycles of the product $\alpha \beta$. Hence, the condition $\alpha \beta = (1\ 2\ \ldots\ n)$ (which we will assume from now on) means that the map is unicellular ({\it i.e.} has only one face) and that the positions of the labels are determined by the choice of the edge labeled by $1$ (which can be seen as a \emph{root}). In this case, the couple of permutations is entirely determined by $\beta$.\\ Therefore, if $\lambda \vdash n$, the quantity $A(\lambda)$ is the number of rooted unicellular maps with black vertices' degree distribution $\lambda$ (there are no conditions on white vertices). The condition that the product $(1\ 2\ \ldots\ n) \beta^{-1}$ is a long cycle is equivalent to the fact that the corresponding rooted bipartite map has only one white vertex (we call such maps \emph{star} maps). Thus $B(\lambda)$ is the number of star rooted unicellular maps with black vertices' degree distribution $\lambda$.\\ As in the papers \cite{SchaefferVassilieva:factorizations_long_cycle} and \cite{MoralesVassilieva:factorizations_long_cycle}, our combinatorial construction deals with maps with additional structure: \begin{Def} A black-partitioned (rooted unicellular) map is a rooted unicellular map with a set partition $\pi$ of its black vertices. We call degree of a part (block) $\pi_{i}$ of $\, \pi$ the sum of the degrees of the vertices in $\pi_{i}$. The \emph{type} of a black-partitioned map is its blocks' degree distribution. \end{Def} In terms of permutations, a black-partitioned map consists of a couple $(\alpha,\beta)$ in $S_n$ with the condition $\alpha \, \beta=(1\ 2\ \ldots\ n)$ and a set partition $\pi$ of $\{1,\ldots,n\}$ coarser than the set partition in orbits under the action of $\beta$. Note that couples $(\alpha,\beta)$ with $\alpha \, \beta=(1\ 2\ \ldots\ n)$ are in bijection with permutations $\beta$. Therefore, a black-partitioned map is the same object as a colored permutation (see Definition \ref{DefColoredPerm}). The number $p$ of colors corresponds to the number of blocks in the set partition $\pi$. 
\begin{example} \label{exspm} Let $\beta = (1)(2 5)(3 7)(4)(6)$, $\alpha = (1 2 3 4 5 6 7) \beta^{-1} = (1 2 6 7 4 5 3)$, and $\pi$ be the partition $\left\{ \{1, 3, 6, 7\};\{2, 5\};\{4\}\right\}$. Here, the type of $(\beta,\pi)$ is $(4,2,1)$. Associating the triangle, circle and square shape to the blocks, $(\beta, \pi)$ is the black-partitioned star map pictured on figure \ref{spm}. \begin{figure} \caption{The black-partitioned map defined in example \ref{exspm}} \label{spm} \end{figure} \end{example} If $\lambda \vdash n$, we denote by $C(\lambda)$ (resp. $D(\lambda)$) the number of black-partitioned maps (resp. black-partitioned star maps) of type $\lambda$. Equivalently, $C(\lambda)$ (resp. $D(\lambda)$) is the number of couples $(\beta,\pi)$ as above such that $\pi$ is a partition of type $\lambda$ (resp. and $(1\ 2\ \ldots\ n) \beta^{-1}$ is a long cycle). With this notations, Theorem \ref{th:reformulationfine} can be rewritten as: \begin{equation}\label{EqCD} D(\pnv\lambda) = \frac{1}{n-\ell(\lambda)+1} C(\pnv\lambda) \text{ , } \lambda \vdash n \end{equation} \subsection{Permuted star thorn trees and Morales'-Vassilieva's bijection}\label{subsect:thorn_trees} The main tool of this article is to encode black-partitioned maps into star thorn trees, which have a very simple combinatorial structure. Note that they are a particular case of the notion of thorn trees, introduced by A. Morales and the second author in \cite{MoralesVassilieva:factorizations_long_cycle}. \begin{Def}[star thorn tree] An \emph{(ordered rooted bipartite) star thorn tree} of size $n$ is a planar tree with a white root vertex, $p$ black vertices and $n - p$ thorns connected to the white vertex and $n - p$ thorns connected to the black vertices. A thorn is an edge connected to only one vertex. ``Planar`` means that the sons of a given vertex are ordered (here, a thorn should be considered as a son of its extremity). We call \emph{type} of a star thorn tree its black vertices' degree distribution (taking the thorns into account). If $\mu$ is an integer partition, we denote by $\widetilde{ST}(\mu)$ the number of star thorn trees of type $\mu$. \end{Def} Two examples are given on Figure \ref{fig:ex_arbre_permute} (for the moment, please do not pay attention to the labels). The interest of this object lies in the following theorem. \begin{Th}[\cite{MoralesVassilieva:factorizations_long_cycle}]\label{th:bij_MV} Let $\mu \vdash n$ be a partition of length $p$. One has: \begin{equation}\label{eq:bij_MV} C(\mu) = (n-p)! \cdot \widetilde{ST}(\mu). \end{equation} \end{Th} This theorem corresponds to the case $\lambda=(n)$ of \cite[Theorem 2]{MoralesVassilieva:factorizations_long_cycle} (note that the proof is entirely bijective). The right-hand side of \eqref{eq:bij_MV} is the number of couples $(\tau,\sigma)$ where: \begin{itemize} \item $\tau$ is a star thorn tree of type $\mu$. \item $\sigma$ is a bijection between thorns with a white extremity and thorns with a black extremity (by definition, $\tau$ has exactly $n-p$ thorns of white extremity and $n-p$ thorns of black extremity). \end{itemize} We call such a couple a permuted (star) thorn tree. By definition, the type of $(\tau,\sigma)$ is the type of $\tau$. Examples of graphical representations are given on Figure \ref{fig:ex_arbre_permute}: we put symbols on edges and thorns with the following rule. 
Two thorns get the same symbol if they are associated by $\sigma$ and, except from that rule, all symbols are different (the chosen symbols and their order do not matter, we call that a symbolic labeling).\\ \begin{figure} \caption{Example of two permuted star thorn trees $\ex{1}$ of type $2^1 3^1$ and $\ex{2}$ of type $2^2 3^2$} \label{fig:ex_arbre_permute} \end{figure} Using this result, one obtains another equivalent formulation for Theorem \ref{th:reformulationfine}: \begin{equation}\label{EqDBT} D(\pnv\lambda) = \frac{1}{n-p+1} (n-p)! \widetilde{ST}(\pnv\lambda) \text{ , } \lambda \vdash n. \end{equation} Sections \ref{sect:def_psi}, \ref{sect:inj_psi} and \ref{sect:im_psi} are devoted to the proof of equation \eqref{EqDBT}. We proceed in a three step fashion. Firstly, we define a mapping $\Psi$ from the set of black-partitioned star maps of type $\lambda$ (counted by $D(\lambda)$) into the set of permuted star thorn trees of the same type. Secondly, we show it is injective. As a final step, we compute the cardinality of the image set of $\Psi$ and show it is exactly $\left (1/(n-p+1)\right)(n-p)! \widetilde{ST}(\pnv\lambda)$.\\ \noindent {\em Remark.} Although there are some related ideas, $\Psi$ is not the restriction of the bijection of paper \cite{MoralesVassilieva:factorizations_long_cycle}. \section{Mapping black-partitioned star maps to permuted thorn trees}\label{sect:def_psi} \subsection{Labeled thorn tree} Let $(\beta,\pi)$ be a black-partitioned star map. First we construct a labeled star thorn tree $\overline{\tau}$: \begin{enumerate}[(i)] \item \label{item:root_place} Let $(\alpha_k)_{(1\leq k \leq n)}$ be the integer list such that $\alpha_1 = 1$ and such that the long cycle $\alpha=(1\ 2\ \ldots\ n) \beta^{-1}$ is equal to $(\alpha_1 \alpha_2 \alpha_3 \dots \alpha_n)$. The root of $\overline{\tau}$ is a white vertex with $n$ descending edges labeled from right to left with $\alpha_1, \alpha_2, \alpha_3, \dots, \alpha_n$ ($\alpha_1$ is the rightmost descending edge and $\alpha_n$ the leftmost). \item \label{item:edge_root} Let $m_i$ be the maximum element of the block $\pi_i$. For $k=1\ldots n$, if $\alpha_k = \beta(m_i)$ for some $i$, we draw a black vertex at the other end of the descending edge labeled with $\alpha_k$. Otherwise the descending edge is a thorn. \begin{Rem}\label{rem:leftmost_edge} As $\alpha_n=\alpha^{-1}(1) = \beta(n)$ the leftmost descending edge is never a thorn and is labeled with $\beta(n)$. \end{Rem} \item \label{item:black_thorns} For $i \in \{1,\dots,p\}$, let ${(\beta^{u}_{1} \ldots \beta^{u}_{l_u})}_{1\leq u \leq c}$ be the $c$ cycles included in block $\pi_i$ such that $\beta^{u}_{l_u}$ is the maximum element of cycle $u$. (We have $\Sigma_{u} l_u = \mid\pi_i\mid$). We also order these cycles according to their maximum, \textit{i.e.} we assume that $ \beta^{c}_{l_c} < \beta^{c-1}_{l_{c-1}} < \ldots < \beta^{1}_{l_1} = m_i$. As a direct consequence, $\beta^{1}_1 = \beta(m_i)$.\\ We connect $\mid\pi_i\mid-1$ thorns to the black vertex linked to the root by the edge $\beta(m_i)$. Moving around the vertex clockwise and starting right after edge $\beta(m_i)$, we label its thorns with the integers \[\beta^{c}_{l_c}, \dots, \beta^{c}_{1}, \dots, \beta^{2}_{l_2}, \dots, \beta^{2}_{1}, \beta^{1}_{l_1}, \ldots, \beta^{1}_{2}\] in this order. Note that the last one is $\beta^{1}_{2}$ as $\beta^{1}_1 = \beta(m_i)$ is the label of the edge. Then $\overline{\tau}$ is the resulting thorn tree. 
\begin{Rem}\label{rem:betai} Moving around a black vertex clockwise starting with the thorn right after the edge, a new cycle of $\beta$ begins whenever we meet a left-to-right maximum of the labels. \end{Rem} \end{enumerate} The idea behind this construction is to add a root to the map $(\alpha,\beta)$, select one edge per block, cut all other edges into two thorns and merge the vertices corresponding to the same black block together. Step (\ref{item:root_place}) tells us where to place the root, step (\ref{item:edge_root}) which edges we select and step (\ref{item:black_thorns}) how to merge vertices (in maps unlike in graphs, one has several ways to merge given vertices). \begin{example} \label{exspm2lstt} Let us take the black-partitioned star map of example \ref{exspm}. Following construction rules (\ref{item:root_place}) and (\ref{item:edge_root}), one has $m_\triangle = 7$, $m_\bigcirc = 5$, $m_\Box = 4$ and the descending edges indexed by $\beta(m_\triangle) = 3$, $\beta(m_\bigcirc) = 2$ and $\beta(m_\Box) = 4$ connect a black vertex to the white root. Other descending edges from the root are thorns. Using (\ref{item:black_thorns}), we add labeled thorns to the black vertices to get the labeled thorn tree depicted on Figure \ref{taubar}. Focusing on the one connected to the root through the edge $3$, we have $(\beta^1_1 \beta^1_{2}) (\beta^2_{1}) (\beta^3_{1}) = (3 7)(6)(1)$. Reading the labels clockwise around this vertex, we get $1,6,7,3$. The three cycles can be simply recovered looking at the left-to-right maxima $1$, $6$ and $7$. \begin{figure} \caption{Labeled thorn tree associated to the black-partitioned star map of Figure \ref{spm}} \label{taubar} \end{figure} \end{example} \begin{Rem}\label{rem:taubar2smp} Let us fix a labeled thorn tree $\overline{\tau}$ coming from a black-partitioned star map $(\beta,\pi)$. Then $\alpha = (1\ 2\ \dots\ n) \beta^{-1}$ can be found from $\overline{\tau}$ by reading the labels around the root in counter-clockwise order and $\pi$ is the following set-partition: for each black vertex $b$ of $\overline{\tau}$, the block $\pi_b$ of $\pi$ is the set of the labels of the edge and of the thorns linked to $b$. Hence, a labeled thorn tree $\overline{\tau}$ corresponds at most to one black-partitioned star map $(\beta,\pi)$. \end{Rem} \subsection{Permuted thorn tree} We call $\tau$ the star thorn tree obtained from $\overline{\tau}$ by removing labels and $\sigma$ the permutation that associates to a white thorn in $\tau$ the black thorn with the same label in $\overline{\tau}$. Finally, we define: $ \Psi(\beta,\pi) = (\tau,\sigma). $ \begin{example} Following up with example \ref{spm}, we get the permuted thorn tree $\ex{3}$ drawn on Figure \ref{tausigma}. Graphically we use the same convention as in paragraph \ref{subsect:thorn_trees} to represent $\sigma$. \begin{figure} \caption{Permuted thorn tree $\ex{3}$ associated to the black-partitioned star map of Figure \ref{spm}} \label{tausigma} \end{figure} \end{example} \section{Injectivity and reverse mapping}\label{sect:inj_psi} Assume $(\tau,\sigma) = \Psi(\beta,\pi)$ for some black partitioned star map $(\beta,\pi)$. We show that $(\beta,\pi)$ is actually uniquely determined by $(\tau,\sigma)$.\\ As a first step, we recover the labeled thorn tree $\overline{\tau}$. Let us draw the permuted thorn tree $(\tau,\sigma)$ as explained in paragraph \ref{subsect:thorn_trees}. We show by induction that there is at most one possible integer value for each symbolic label. 
\begin{enumerate}[(i)] \item By construction, the label $\alpha_1$ of the right-most edge or thorn descending from the root is necessarily $1$. \item \label{item:etape_determiner_bi} Assume that for $i \in [n-1]$, we have identified the symbols of values $1,2,\ldots,i$. We look at the edge or thorn with label $i$ connected to a black vertex $b$. In this step, we determine which symbol corresponds to $\beta(i)$.\\ Recall that, when we move around $b$ clockwise finishing with the edge (in this step, we will always turn in this sense), a new cycle begins whenever we meet a left-to-right maximum (Remark \ref{rem:betai}). So, to find $\beta(i)$, one has to know whether $i$ is a left-to-right maximum or not.\\ If all values of symbols of thorns before $i$ have not already been retrieved, then $i$ is not a left-to-right maximum. Indeed, the remaining label values are $i+1$, \ldots, $n$ and at least one thorn's label on the left of $i$ lies in this interval. According to our construction $\beta(i)$ necessarily corresponds to the symbolic label of the thorn right at the left of $i$ (case {\it a})\\ If all the symbol values of thorns before $i$ have already been retrieved (or there are no thorns at all), then $i$ is a left-to-right maximum. According to the construction of $\overline{\tau}$, $\beta(i)$ corresponds necessarily to the symbolic label of the thorn preceding the next left-to-right maximum . But one can determine which thorn (or edge) corresponds to the next left-to-right maximum: it is the first thorn (or edge) $e$ whose value has not been retrieved so far (again moving around the black vertex from left to right). Indeed, all the values retrieved so far are less than $i$ and those not retrieved greater than $i$. Therefore $\beta(i)$ is the thorn right at the left of $e$ (case {\it b}). If all the values of the labels of the thorns connected to $b$ have already been retrieved then $i$ is the maximum element of the corresponding block and $\beta(i)$ corresponds to the symbolic label of the edge connecting this black vertex to the root (we can see this as a special case of case {\it b}). \item \label{item:etape_determiner_ipu} Consider the element (thorn of edge) of white extremity with the symbolic label corresponding to $\beta(i)$. The next element (turning around the root in counter-clockwise order) has necessarily label $\alpha(\beta(i))=i+1$.\\ As a result, the knowledge of the thorn or edge with label $i$ uniquely determines the edge or thorn with label $i+1$. \end{enumerate} Applying the previous procedure up to $i=n-1$ we see that $\overline{\tau}$ is uniquely determined by $(\tau,\sigma)$ and so is $(\beta, \pi)$ (see Remark \ref{rem:taubar2smp}). \begin{example} Take as an example the permuted thorn tree $\ex{1}$ drawn on the left-hand side of Figure \ref{fig:ex_arbre_permute}, the procedure goes as described on Figure \ref{recons}. First, we identify $\alpha_1 = 1$. Then, as there is a non (value) labeled thorn $\alpha_2$ on the left of the thorn connected to a black vertex with label value $1$, necessarily $1$ is not a left-to-right maximum and $\alpha_2$ is the label of the thorn immediately to the left of $1$. Then as $\alpha_3$ follows $\alpha_2 = \beta(1)$ around the white root, we have $\alpha_3 = \alpha(\beta(1))=2$. \noindent We apply the procedure up to the full retrieval of the edges' and thorns' labels. We find $\alpha_2 = 3$, $\alpha_4 = 4$, $\alpha_5 = 5$. 
Finally, we have $\alpha = (1 3 2 4 5)$, $\beta = (213)(4)(5)$, $\pi = \{\{1, 2, 3\}; \{4, 5\} \}$ as shown on figure \ref{endrecons}. \begin{figure} \caption{Reconstruction of the map} \label{recons} \label{endrecons} \end{figure} \end{example} \section{Characterisation and size of the image set $\Im(\Psi)$}\label{sect:im_psi} \subsection{A necessary and sufficient condition to belong to $\Im(\Psi)$} \subsubsection{Why $\Psi$ is not surjective?} Let us fix a permuted star thorn tree $(\tau,\sigma)$. We can try to apply to it the procedure of section \ref{sect:inj_psi} and we distinguish two cases: \begin{itemize} \item it can happen, for some $i<n$, when one wants to give the label $i+1$ to the edge following $\beta(i)$ (step (\ref{item:etape_determiner_ipu})), that this edge has already a label $j$ ($j<i$). If so, the procedure fails and $(\tau,\sigma)$ is not in $\Im(\Psi)$. \item if this never happens, the procedure ends with a labeled thorn tree $\overline{\tau}$. In this case, one can find the unique black-partitioned star map $M$ corresponding to $\overline{\tau}$ and by construction $\Psi(M)=(\tau,\sigma)$. \end{itemize} For instance, take the couple $\ex{2}$ on the right of Figure \ref{fig:ex_arbre_permute}, the procedure gives successively \[ \alpha_1=1,\ \alpha_9=2,\ \alpha_{10}=3,\ \alpha_6=4,\ \alpha_7=5,\ \alpha_4=6,\ \alpha_5=7\] and then we should choose $\alpha_1=8$, but this is impossible because we already have $\alpha_1=1$. \begin{Lem} If the procedure fails, the label $j$ of the edge that should get a second label $i+1$ is always $1$. \end{Lem} \begin{proof} Assume $j >1$. As the reconstruction procedure did not fail for $1\ldots i$, there are two distinct pairs of thorns with labels $i$ and $j-1$. We will prove that the reconstruction provides labels $\beta(i)$ and $\beta(j-1)$ to two distinct elements. We assume that the labels $\beta(i)$ and $\beta(j-1)$ have been given to the same element. In particular, $i$ and $j-1$ must belong to the same black vertex. Let us consider the different possible cases in the reconstruction step (\ref{item:etape_determiner_bi}): \begin{itemize} \item If $\beta(j-1)$ is obtained {\it via} case {\it b} (the left-to-right maximum case), the label $i$ must be just to the right of $\beta(j-1)$ and not a left-to-right maximum. But this is impossible because all thorns to the left of $\beta(j-1)$ (including $\beta(j-1)$) have labels smaller than $j$. \item If $j-1$ is obtained {\it via} case {\it a} (the not left-to-right maximum case) and $i$ is a left-to-right maximum. The label $j-1$ is just to the right of the thorn/edge labeled by both $\beta(j-1)$ and $\beta(i)$. Then $\beta(i)$ is before the next left-to-right maximum. So the edge to the right of $\beta(i)$ has a label greater than $i$ and can not be $j-1$. \item If $j-1$ is obtained {\it via} case {\it a} (the not left-to-right maximum case) and $i$ is {\em not} a left-to-right maximum. The label $j-1$ is still just to the right of the thorn/edge labeled by both $\beta(j-1)$ and $\beta(i)$. Label $i$ must be as well just to the right of $\beta(i)$. It is not possible as $i$ and $j-1$ are the labels of two distinct thorns or edge since the procedure has not failed at step $i$. \end{itemize} Finally $\beta(i)$ and $\beta(j-1)$ correspond to two different symbolic labels and hence $i+1$ and $j$ also (they are respectively the symbolic label of the elements right at the left of $\beta(i)$ and $\beta(j-1)$ when turning around the root). 
Hence, the procedure cannot fail for a value of $j >1$. \end{proof} \subsubsection{An auxiliary oriented graph} Remark \ref{rem:leftmost_edge} gives a necessary condition for $(\tau,\sigma)$ to be in $\Im(\Psi)$: its leftmost edge attached to the root must be a real edge and not a thorn. From now on, we call this property $(P1)$; note that, among all permuted thorn trees of a given type $\lambda \vdash n$ of length $p$, exactly a fraction $p/n$ of them have this property. Whenever $(P1)$ is satisfied, we denote by $e_0$ the left-most edge leaving the root and by $\pi_0$ its black extremity. The lemma above shows that the procedure fails if and only if $e_0$ is chosen as $\beta(i)$ for some $i<n$. But this cannot happen at an arbitrary step. Indeed, the following lemma is a direct consequence of step (\ref{item:etape_determiner_bi}) of the reconstruction procedure: \begin{Lem}\label{lem:sommet_complete} A real edge (\textit{i.e.} which is not a thorn) $e$ can be chosen as $\beta(i)$ only if the edge and all thorns attached to the corresponding black vertex have labels smaller than or equal to $i$. If this happens, we say that the black vertex is {\em completed} at step $i$. \end{Lem} \begin{Corol} Let $e$ be a real edge of black extremity $\pi \neq \pi_0$. Let us denote by $e'$ the element (edge or thorn) immediately to the left of $e$ around the white vertex. Let $\pi'$ be the black extremity of the element $e''$ associated to $e'$ (\textit{i.e.} $e'$ itself if it is an edge, and its image by $\sigma$ otherwise). Then $\pi'$ cannot be completed before $\pi$. \end{Corol} \begin{proof} If $\pi'$ is completed at step $i$, by Lemma \ref{lem:sommet_complete}, the element $e''$ has a label $j \leq i$. As $e'$ has the same label, this implies that $e$ has label $\beta(j-1)$, or in other words, that $\pi$ is completed at step $j-1 < i$. \end{proof} When applied to every black vertex $\pi \neq \pi_0$, this corollary gives some partial information on the order in which the black vertices can be completed. We will summarize this in an oriented graph $G(\tau,\sigma)$: its vertices are the black vertices of $\tau$ and its edges are $\pi \to \pi'$, where $\pi$ and $\pi'$ are in the situation of the corollary above. This graph has exactly one outgoing edge at each of its vertices except $\pi_0$, which has none. As examples, we draw the graphs corresponding to $\ex{2}$ and to $\ex{3}$ (see Figures \ref{fig:ex_arbre_permute} and \ref{tausigma}) on Figure \ref{fig:ex_graphe_ordre_completion}. \begin{figure} \caption{Two examples of auxiliary graphs.} \label{fig:ex_graphe_ordre_completion} \end{figure} \subsubsection{The graph $G(\tau,\sigma)$ gives all the information we need!} Can we decide, using only $G(\tau,\sigma)$, whether $(\tau,\sigma)$ belongs to $\Im(\Psi)$ or not? There are two cases, in which the answer is obviously yes: \begin{enumerate} \item Let us suppose that $G(\tau,\sigma)$ is an oriented tree of root $\pi_0$ (all edges are oriented towards the root). In this case, we say that $(\tau,\sigma)$ has property $(P2)$. Then, the vertex $\pi_0$ can be completed only when all other vertices have been completed, \textit{i.e.} when all edges and thorns already have a label. That means that $e_0$ can be chosen as $\beta(i)$ only for $i=n$. Therefore, in this case, the procedure always succeeds and $(\tau,\sigma)$ belongs to $\Im(\Psi)$. This is the case of $\ex{3}$. \item Let us suppose that $G(\tau,\sigma)$ contains an oriented cycle (possibly a loop). Then none of the vertices of this cycle can ever be completed.
Therefore in this situation the procedure always fails and $(\tau,\sigma)$ does not belong to $\Im(\Psi)$. This is the case of $\ex{2}$. \end{enumerate} In fact, we are always in one of these two cases: \begin{Lem} Let $G$ be an oriented graph whose vertices have out-degree $1$, except one vertex $v_0$ which has out-degree $0$. Then $G$ is either an oriented tree with root $v_0$ or contains an oriented cycle. \end{Lem} \begin{proof} We consider two different cases: \begin{itemize} \item either there exists a vertex $v$ with no path from $v$ to $v_0$. In this case, we denote by $v^1,v^2,\ldots$ the vertices such that $v^1$ is the successor of $v$ and $v^{i+1}$ is the successor of $v^i$. As the number of vertices is finite, there are at least two indices $i_1 < i_2$ such that $v^{i_1}=v^{i_2}$. The chain $v^{i_1}v^{i_1+1}\ldots v^{i_2}$ is an oriented cycle. \item or there is a path from each vertex $v$ to $v_0$. So $G$ contains a spanning oriented tree with root $v_0$. As the number of edges is exactly one less than the number of vertices, $G$ is an oriented tree.\qedhere \end{itemize} \end{proof} Finally, one has the following result: \begin{Prop} The mapping $\Psi$ defines a bijection: \begin{equation}\label{eq:main_bijection} \left\{\begin{array}{c} \text{black-partitioned star maps} \\ \text{of type }\lambda \end{array}\right\} \simeq \left\{\begin{array}{c} \text{permuted star thorn trees of type }\lambda\\ \text{with properties }(P1) \text{ and }(P2) \end{array}\right\}. \end{equation} \end{Prop} \subsection{Proportion of permuted thorn trees $(\tau,\sigma)$ in $\Im(\Psi)$} To finish the proof of equation \eqref{EqDBT}, one has just to compute the size of the right-hand side of \eqref{eq:main_bijection}. We do it {\it via} a quite technical (but rather easy) induction; it would be nice to find a more elegant argument. \begin{Prop}\label{prop:proportion_P2} Let $\lambda$ be a partition of $n$ of length $p$. Denote by $P(\lambda)$ the proportion of couples $(\tau,\sigma)$ with properties $(P1)$ and $(P2)$ among all the permuted thorn trees of type $\lambda$. Then, one has: $$P(\lambda)=\frac{1}{n-p+1}.$$ \end{Prop} \begin{proof} In fact, we will rather work with the proportion $P'(\lambda)$ of couples verifying $(P2)$ among the permuted thorn trees of type $\lambda$ verifying $(P1)$. As the proportion of couples with property $(P1)$ among couples $(\tau,\sigma)$ of type $\lambda$ is $\ell(\lambda)/|\lambda|$, one has: $P'(\lambda)=|\lambda|/\ell(\lambda)\cdot P(\lambda)$. We will prove by induction over $p=\ell(\lambda)$ that: $$P'(\lambda)= \frac{|\lambda|}{\ell(\lambda) (|\lambda| - \ell(\lambda)+1)}.$$ The case $p=1$ is easy: as $G(\tau,\sigma)$ has only one vertex and no edges, it is always a tree. Therefore, for any $n \geq 1$, one has $P'((n))=1$. Suppose that the result is true for any $\lambda$ of length $p-1$ and fix a partition $\mu \vdash n$ of length $p>1$. Let $PTT_1(\mu)$ (resp. $PTT_{1,2}(\mu)$) be the set of permuted thorn trees $(\tau,\sigma)$ of type $\mu$, verifying $(P1)$ (resp. verifying $(P1)$ and $(P2)$). With these notations, $P'(\mu)$ is defined as the quotient \[ \frac{\big|PTT_{1,2}(\mu)\big|}{\big|PTT_{1}(\mu)\big|}. \] It will be convenient to consider marked permuted thorn trees, {\it i.e.} permuted thorn trees with a marked black vertex different from $\pi_0$. The marked vertex will be denoted $\overline{\pi}$ and the corresponding edge $e_{\overline{\pi}}$. We denote by $MPTT_1(\mu)$ (resp.
$MPTT_{1,2}(\mu)$) the set of marked permuted thorn trees $(\tau,\sigma)$ of type $\mu$, verifying $(P1)$ (resp. verifying $(P1)$ and $(P2)$). To each permuted thorn tree $(\tau,\sigma)$ of type $\mu$ corresponds exactly $p-1$ marked permuted thorn trees, so: \begin{align*} \big|MPTT_\star(\mu)\big| &= (p-1) \cdot \big|PTT_\star(\mu)\big| \text{ for $\star=1$ or $\star=1,2$},\\ \text{and thus }P'(\mu) &= \frac{\big|MPTT_{1,2}(\mu)\big|}{\big|MPTT_{1}(\mu)\big|}. \end{align*} Let us now split these sets $MPTT_\star(\mu)$ depending on the degree of the marked vertex: \[ MPTT_\star(\mu) = \bigsqcup_k MPTT_\star^k(\mu), \] where $MPTT_\star^k(\mu)$ denote the subset of $MPTT_\star(\mu)$ of trees with a marked vertex of degree $k$. By Lemma \ref{LemFin1} (see next paragraph), one has: \[ \text{for all } k \geq 1,\ \big|MPTT_1^k(\mu)\big| = \frac{m_k(\mu)}{p} \big|MPTT_1(\mu)\big|. \] Let us consider an element of $MPTT_1^k(\mu)$. We distinguish two cases: \begin{itemize} \item either the end of the edge leaving $\overline{\pi}$ in the graph $G(\tau,\sigma)$ is $\overline{\pi}$ itself. In this case, the graph $G(\tau,\sigma)$ contains a loop and the element is not in $MPTT_{1,2}^k(\mu)$. \item or it is another vertex of the tree. We call such marked permuted thorn trees \emph{good}. We will prove below (Lemma \ref{LemFin3}) with the induction hypothesis that, in this case, exactly $n-1$ elements over $(p-1)(n-p+1)$ are in $MPTT_{1,2}^k(\mu)$. \end{itemize} By Lemma \ref{LemFin2}, the second case concerns exactly $n-k$ elements over $n-1$. Therefore: \[ | MPTT_{1,2}^k(\mu) | = \frac{n-1}{(p-1)(n-p+1)} \left( \frac{n-k}{n-1} \big| MPTT_1^k(\mu) \big| \right)\] and we can compute $P'(\mu)$ as follows \begin{align*} P'(\mu) & = \frac{\big|MPTT_{1,2}(\mu)\big|}{\big|MPTT_{1}(\mu)\big|} =\frac{\sum_k \big|MPTT_{1,2}^k(\mu)\big|}{ \big|MPTT_{1}(\mu)\big|}\\ P'(\mu) &= \frac{\sum_k \frac{n-k}{(p-1)(n-p+1)} \big| MPTT_1^k(\mu) \big|}{\big|MPTT_{1}(\mu)\big|}\\ P'(\mu) &= \frac{\sum_k \frac{n-k}{(p-1)(n-p+1)} \frac{m_k(\mu)}{p} \big| MPTT_1(\mu) \big|}{\big|MPTT_{1}(\mu)\big|}\\ P'(\mu) & = \frac{1}{(p-1)\big(n-p+1\big)} \cdot \left[ \frac{1}{p} \left( \sum_k n \cdot m_k(\mu) - k \cdot m_k(\mu) \right) \right] ;\\ P'(\mu) & = \frac{1}{(p-1)\big(n-p+1\big)} \frac{n\cdot p - n}{p} ;\\ P'(\mu) & =\frac{n}{p\big(n-p+1\big)}. \end{align*} This computation ends the proof of Proposition \ref{prop:proportion_P2} and, therefore, of equation \eqref{EqDBT}. \end{proof} \subsection{Technical lemmas} Let $\mu$ be a partition of size $n$ and length $p$. \begin{Lem} \label{LemFin1} For all $k \geq 1$, \[ \big|MPTT_1^k(\mu)\big| = \frac{m_k(\mu)}{p} \big|MPTT_1(\mu)\big|. \] \end{Lem} \begin{proof} Consider the action of $S_p$ on $PTT_1(\mu)$ consisting in permuting the black vertices (with their thorns). In each orbit and hence in the whole set $PTT_1(\mu)$, the proportion of elements for which the left-most black vertex $\pi_0$ has degree $k$ is $\frac{m_k(\mu)}{p}$. To each element in $PTT_1(\mu)$ correspond exactly $p-1$ elements in $MPTT_1(\mu)$ obtained by choosing a marked vertex $\overline{\pi}$ among the black vertices different from $\pi_0$. Therefore the probability that $\overline{\pi}$ has degree $k$ is also $\frac{m_k(\mu)}{p}$, which is what we wanted to prove. Note that this is not true if we consider elements with property $(P2)$ as the action of $S_p$ does not preserve this property. 
\end{proof} We denote by $GMPTT_1^k(\mu)$ the set of good marked permuted thorn trees $(\tau,\sigma,\overline{\pi})$ of type $\mu$, for which $\overline{\pi}$ is a vertex of degree $k$. \begin{Lem} \label{LemFin2} \[\frac{|GMPTT_1^k(\mu)|}{|MPTT_1^k(\mu)|} = \frac{n-k}{n-1}. \] \end{Lem} \begin{proof} Consider the action of $S_{n-1}$ on $MPTT_1^k(\mu)$ consisting in changing the cyclic order of the edges and thorns incident to the root without moving the left-most edge. In each orbit of this action, the edge or thorn $e'$ just after $e_{\overline{\pi}}$ is uniformly distributed among the $n-1$ edges and thorns incident to the root and different from $e_{\overline{\pi}}$. Among these edges and thorns, there are $k-1$ thorns which are associated by $\sigma$ to a thorn incident to the black vertex $\overline{\pi}$. By definition, an element in $MPTT_1^k(\mu)$ is good if and only if $e'$ is not one of these thorns, therefore, in each orbit, the proportion of good elements is $\frac{n-k}{n-1}$. \end{proof} Recall that any marked permuted thorn tree verifying property $(P2)$ is good. In other terms, $MPTT_{1,2}^k(\mu)$ is a subset of $GMPTT_1^k(\mu)$. \begin{Lem} \label{LemFin3} We assume that, for $\mu'$ of size $n-1$ and length $p-1$, the proportion of permuted star thorn trees of type $\mu'$ verifying $(P2)$ among those which verify $(P1)$ does not depend on $\mu'$. We denote this proportion $P'_{n-1,p-1}$. Then one has: \[\frac{|MPTT_{1,2}^k(\mu)|}{|GMPTT_1^k(\mu)|} = P'_{n-1,p-1}.\] \end{Lem} \begin{proof} Consider the following application \[ \varphi_{\mu,k} : \begin{array}{rcl} GMPTT_1^k(\mu) &\longrightarrow& \left\{ \begin{tabular}{c} permuted star thorn trees with \\ $\ell(\mu)-1$ black vertices and $n- \ell(\mu)$ thorns \end{tabular} \right\} \\ (\tau,\sigma,\overline{\pi}) &\longmapsto& (\tau',\sigma'), \end{array}\] where $(\tau',\sigma')$ is obtained as follows. Consider the edge or thorn immediately to the left of $e_{\overline{\pi}}$ and denote $\pi'$ the black extremity of the element with the same symbolic label. Then, starting from $(\tau,\sigma,\overline{\pi})$, erase the marked black vertex $\overline{\pi}$ with its edge $e_{\overline{\pi}}$ and move its thorns to the black vertex $\pi'$ (at the right of its own thorns). For example, \[ \varphi_{\mu,k} \left( \begin{array}{c} \includegraphics[width=35mm]{taubarpart5-1.pdf}\end{array} \right) = \begin{array}{c}\includegraphics[width=35mm]{taubarpart5-2.pdf} \end{array}. \] This application has nice properties: \begin{itemize} \item it preserves property $(P2)$. Indeed, if $(\tau',\sigma') = \varphi(\tau,\sigma,\overline{\pi})$, then $G_{\tau',\sigma'}$ is obtained form $G_{\tau,\sigma}$ by contracting its edge attached to the vertex $\overline{\pi}$. \item the number of preimages of a given permuted star thorn tree $(\tau',\sigma')$ depends only on its type $\lambda$. Indeed, there are no preimages if $\lambda$ is not of the form $\mu \backslash (j,k) \cup (j+k-1)$ for some $j$ (from now on, we use the notation $\mu^{\downarrow (j,k)} = \mu \backslash (j,k) \cup (j+k-1)$). Otherwise, the preimages are obtained as follows: choose a vertex $v$ of $\tau'$ of degree $j+k-1$ (there are $m_{j+k-1}(\lambda)$ possible choices), choose the edge or a thorn of white extremity associated to one of its $j-1$ left-most thorns ($j$ choices per vertex $v$), add a new black vertex just at the right of this element and attach the $k-1$ last thorns of $v$ to this new vertex. 
With this description, it is clear that the cardinality of preimage is $j m_{j+k-1}(\lambda)$. \end{itemize} Recall that we assumed the number $P'_{n-1,p-1}$ ( dependent only on $n$ and $p$, but not on $\lambda$) to be the proportion of permuted star thorn trees of type $\lambda$ verifying $(P2)$ among those which verify $(P1)$. With the two above properties, we can compute the proportion of elements verifying $(P2)$ in $GMPTT_1^k(\mu)$. Indeed, \begin{multline*} |MPTT_{1,2}^k(\mu)| =\sum_{j \geq 1 \atop \lambda=\mu^{\downarrow (j,k)}} j m_{j+k-1}(\lambda) |MPTT_{1,2}^k(\lambda)| \\ = \sum_{j \geq 1 \atop \lambda=\mu^{\downarrow (j,k)}} j m_{j+k-1}(\lambda) P'_{n-1,p-1} |MPTT_{1}^k(\lambda)| = P'_{n-1,p-1} |GMPTT_{1}^k(\mu)|, \end{multline*} which is exactly what we wanted to prove. \end{proof} \section{Link between Theorems \ref{th:reformulationfine} and \ref{th:Stanleyfine}}\label{sect:reformulation} The goal of this section is to prove the equivalence between Theorem \ref{th:reformulationfine} and Theorem \ref{th:Stanleyfine}. This will be done using differential calculus in the symmetric function ring : we present this algebra in paragraph \ref{SubsectSymmetricFunctions}. Then, in paragraph \ref{subsect:LinkOGF}, we explain how the generating series of black-partitioned maps and maps are related. Finally, after a small lemma on thorn trees (paragraph \ref{subsect:lemma_thorn_trees}), we use all these tools to prove the equivalence of Theorems \ref{th:reformulationfine} and \ref{th:Stanleyfine} in paragraph \ref{SubsectEquivalence}. \subsection{Symmetric functions}\label{SubsectSymmetricFunctions} Let us begin by some definitions and notations on symmetric functions. As much as possible we use the notations of I.G. Macdonald's book \cite{Macdo}. We consider the ring $\Lambda_n$ of symmetric polynomials in $n$ variables $x_1,\dots,x_n$. The sequence $(\Lambda_n)_{n \geq 1}$ admits a projective limit $\Lambda$, called \emph{ring of symmetric functions}. This ring has several classical linear bases indexed by partitions. \begin{itemize} \item \emph{monomial symmetric functions}: for monomials we use the short notation $\mathbf{x}^\mathbf{v}=x_1^{v_1} x_2^{v_2} \dots$. Then, we define \[M_\lambda = \sum_{\mathbf{v}} \mathbf{x}^\mathbf{v}\] where the sum runs over all vectors $\mathbf{v}$ which are permutations of $\lambda$ (without multiplicities). \noindent {\it Remark.} We use upper case $M$ for the monomial symmetric functions instead of the usual lower case $m$ because a lot of formulae in this paper involve multiplicities $m_i(\lambda)$ of some parts and monomial symmetric functions at the same time. \item \emph{power sum symmetric functions}: by definition \[p_0 =1,\qquad p_k =\sum_{i \geq 1} x_i^k, \qquad p_\mu =\prod_{j=1}^{\ell(\mu)} p_{\mu_j}.\] \end{itemize} Besides, we consider the differential operator $\Delta_n : \Lambda_n \to \Lambda_n$ given by: \[ \text{for all }f \in \Lambda_n, \Delta_n ( f ) = \sum_{i=1}^n x_i^2 \frac{\partial f}{\partial x_i}. \] Let us compute the image by this operator of the symmetric polynomials $M_\lambda(x_1,\dots,x_n)$ and $p_\mu(x_1,\dots,x_n)$. If $\ell(\lambda) \leq n$, we denote $S_n(\lambda)$ the set (without multiplicities) of all vectors obtained by a permutation of the vector $(\lambda_1,\dots,\lambda_{\ell(\lambda)},0,\dots,0)$ of size $n$. 
\begin{align*} \Delta_n\big(M_\lambda(x_1,\dots,x_n)\big) &= \sum_{\mathbf{v} \in S_n(\lambda)} \sum_{i=1}^n x_i^2 \frac{\partial \mathbf{x}^\mathbf{v}}{\partial x_i}\\ &= \sum_{\mathbf{v} \in S_n(\lambda)} \sum_{i=1}^n v_i \mathbf{x}^{\mathbf{v} + \delta_i}, \end{align*} where $\delta_i$ is the vector of length $n$ whose components are all equal to $0$, except for its $i$-th component, which is equal to $1$. It is clear that, if $\mathbf{v}$ is a permutation of a partition $\lambda$, then $\mathbf{v} + \delta_i$ is a permutation of some $\mu= \lambda^{\uparrow (j)}$ for $j=v_i$. We will group together terms with the same exponent. So the question is: given a vector $\mathbf{v}'$, which is a permutation of $\mu$, in how many ways can it be written as $\mathbf{v} + \delta_i$ with $\mathbf{v} \in S_n(\lambda)$ and $1\leq i \leq n$? The vector $\mathbf{v}' - \delta_i$ is a permutation of $\mu^{\downarrow (v'_i)}$, which is equal to $\lambda$ if and only if $v'_i=j+1$. Therefore, there are $m_{j+1}(\mu)$ ways to write $\mathbf{v}'$ in this form. Finally, \begin{align*} \Delta_n\big(M_\lambda(x_1,\dots,x_n)\big) &= \sum_{j>0 \atop \mu=\lambda^{\uparrow (j)}} \sum_{\mathbf{v}' \in S_n(\mu)} j \cdot m_{j+1}(\mu) \mathbf{x}^{\mathbf{v}'}\\ &= \sum_{j>0 \atop \mu=\lambda^{\uparrow (j)}} j \cdot m_{j+1}(\mu)\ M_\mu(x_1,\dots,x_n). \end{align*} As the coefficients in this formula do not depend on $n$, one can define the limit of the operators $\Delta_n$ as the operator $\Delta : \Lambda \to \Lambda$ which sends $M_\lambda$ to \begin{equation}\label{EqDeltaMon} \Delta(M_\lambda) = \sum_{j>0 \atop \mu=\lambda^{\uparrow (j)}} j \cdot m_{j+1}(\mu)\ M_\mu. \end{equation} It is the limit of the sequence $(\Delta_n)_{n \geq 1}$ in the sense that: \[ \text{for all } F \in \Lambda,\ (\Delta F)(x_1,\dots,x_n) = \Delta_n \big( F(x_1,\dots,x_n) \big).\] Note that it was not obvious before the computation that the sequence of operators $\Delta_n$ had a limit. For instance, the sequence of operators $\Delta'_n$ defined by $\Delta'_n(f)=\sum_{i=1}^n \frac{\partial f}{\partial x_i}$ does not have a limit because $\Delta'_n \big( M_{(1)} (x_1,\dots,x_n) \big)= n$ does not have a limit in $\Lambda$. Let us now come to the image of power sums. For a partition with a single part, one has, for $k \geq 1$: \[ \Delta_n \big( p_k(x_1,\dots,x_n) \big) = \sum_{1 \leq i,j \leq n} x_i^2 \frac{\partial x_j^k}{\partial x_i} = \sum_{1 \leq i \leq n} k \cdot x_i^{k+1} = k \cdot p_{k+1} (x_1,\dots,x_n). \] The result still holds for $k=0$. Using the fact that $\Delta_n$ is a derivation, one immediately obtains the formula for general power sums: \begin{align*} \Delta_n \big( p_\lambda(x_1,\dots,x_n) \big) & = \sum_j \left( \lambda_j \cdot p_{\lambda_j+1}(x_1,\dots,x_n) \cdot \prod_{\ell \neq j} p_{\lambda_\ell}(x_1,\dots,x_n) \right) \\ & = \sum_i i \cdot m_i(\lambda)\ p_{\lambda^{\uparrow (i)}}(x_1,\dots,x_n). \end{align*} Taking the limit of the previous equation, we get: \begin{equation}\label{EqDeltaPower} \Delta(p_\lambda) = \sum_i i \cdot m_i(\lambda)\ p_{\lambda^{\uparrow (i)}}. \end{equation} \subsection{Generating series of maps and partitioned maps}\label{subsect:LinkOGF} Recall that $A(\lambda)$, $B(\lambda)$, $C(\lambda)$ and $D(\lambda)$ count the numbers of (star) (partitioned) rooted unicellular bipartite maps of type $\lambda$, according to the following table.
\begin{tabular}{c|c|c} & \begin{tabular}{c} maps without \\ additional structure \end{tabular} & \begin{tabular}{c} partitioned \\ maps \end{tabular} \\ \hline \begin{tabular}{c} no conditions \\ on white vertices \end{tabular} & $A(\lambda)$ & $C(\lambda)$ \\ \hline \begin{tabular}{c} only one \\ white vertex \end{tabular} & $B(\lambda)$ & $D(\lambda)$ \end{tabular} Quantities $A$ and $C$ (resp. $B$ and $D$) are linked by the following lemma: \begin{Lem}\label{LemLinkOGF} \begin{align} \label{eq:C2A} \sum_{\mu \vdash n+1} C(\pnuv\mu) \Aut(\mu) M_\mu & = \sum_{\nu \vdash n+1} A(\nu) p_\nu;\\ \label{eq:D2B} \sum_{\lambda \vdash n} D(\pnv\lambda) \Aut(\lambda) M_\lambda & = \sum_{\pi \vdash n} B(\pnv\pi) p_\pi, \end{align} where $\Aut(\mu)$ is, by definition, the numerical factor $\prod_i m_i(\mu)!$. \end{Lem} \begin{proof} The proof is similar to the one of \cite[Proposition 1]{MoralesVassilieva:factorizations_long_cycle}. We denote by $\overline{R}_{\epsilon,\rho}$ the number of ways to {\it coarsen} an integer partition $\epsilon \vdash n$ to get an integer partition $\rho$, i.e. the number of unordered set partitions $\{P^{1}, \ldots, P^{\ell(\rho)}\}$ of $[\ell(\epsilon)]$ such that $\rho_j = \sum_{i\in P^j} \epsilon_i$. We have the classical relation: $p_{\epsilon} = \sum_{\rho} \Aut(\rho) \overline{R}_{\epsilon,\rho} M_{\rho}$.\\ Furthermore, by definition of partitioned maps, $C(\mu) = \sum_{\nu}\overline{R}_{\nu,\mu}A(\nu)$ (resp. $D(\lambda) = \sum_{\pi}\overline{R}_{\pi,\lambda}B(\pi)$). Combining these expressions yields the desired result.\end{proof} \subsection{An easy lemma on permuted thorn trees}\label{subsect:lemma_thorn_trees} Consider integers $n,i \geq 1$ and two partitions $\lambda \vdash n$, $\mu \vdash n+1$ with $\mu=\lambda^{\uparrow (i)}$. It is easy to transform a permuted thorn tree $(\tau,\sigma)$ where $\tau$ has type $\lambda \vdash n$ into a permuted thorn tree $(\tau',\sigma')$ where $\tau'$ has type $\mu$. We just add a thorn anywhere on the white vertex ($n+1$ possible places) and a thorn anywhere on a black vertex of degree $i$ (there are $i$ possible places on each of the $m_i(\lambda)$ black vertices of degree $i$). Then we choose $\sigma'$ to be the extension of $\sigma$ associating the two new thorns. This procedure is invertible if we remember which thorn of black extremity is the new one (it must be on a black vertex of degree $i+1$, so there are $i \cdot m_{i+1}(\mu)$ choices). This leads immediately to the following relation: \begin{equation}\label{eq:rec_BT} \widetilde{ST}(\pnuv\mu) \cdot (n+1-p)! \cdot i \cdot m_{i+1}(\mu) = (n+1) \cdot i \cdot m_i(\lambda) \cdot \widetilde{ST}(\pnv\lambda) \cdot (n-p)!. \end{equation} If we fix a partition $\mu \vdash n+1$ of length $p<n+1$ and sum equation \eqref{eq:rec_BT} over partitions $\lambda$ that can be written as $\mu^{\downarrow (i+1)}$ for some $i$, we get: \begin{equation}\label{EqSumRecBT} \widetilde{ST}(\pnuv\mu) \cdot (n+1-p)! \cdot (n+1-p) = (n+1) \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i \cdot m_i(\lambda) \cdot \widetilde{ST}(\pnv\lambda) \cdot (n-p)!. \end{equation} \subsection{Counting partitioned and non-partitioned maps are equivalent problems} \label{SubsectEquivalence} We now have all the tools to prove the equivalence of Theorems \ref{th:reformulationfine} and \ref{th:Stanleyfine}. \begin{proof} Let us first assume that Theorem \ref{th:reformulationfine}, and hence equation \eqref{EqDBT}, is true.
We start from equation \eqref{EqSumRecBT} and use equations \eqref{eq:bij_MV} and \eqref{EqDBT} respectively in the left and right-hand sides: for any $\mu \vdash n+1$, \begin{align} C(\pnuv\mu) \cdot (n+1-p) & = (n+1) \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i \cdot m_i(\lambda) \cdot D(\pnv\lambda) \cdot (n+1-p).\label{EqProofEquiv1}\\ \intertext{We multiply both sides by $\Aut(\mu) M_\mu$ and sum this equality on all partitions $\mu$ of $n+1$, except $1^{n+1}$.} \sum_{\mu \vdash n+1 \atop \mu \neq 1^{(n+1)}} C(\pnuv\mu) \Aut(\mu) M_\mu & = (n+1) \sum_{\mu \vdash n+1 \atop \mu \neq 1^{(n+1)}} \sum_{i>0 \atop \lambda = \mu^{\downarrow (i+1)}} i \cdot m_i(\lambda) \Aut(\mu) D(\pnv\lambda) M_\mu \label{EqThReformulationMSeries}\\ & = (n+1) \sum_{\lambda \vdash n} \Aut(\lambda) D(\pnv\lambda) \left(\sum_{i>0 \atop \mu = \lambda^{\uparrow (i)}} i \cdot m_{i+1}(\mu) M_\mu \right). \nonumber \end{align} The last equality has been obtained by changing the order of summation and using the trivial fact that, if $\mu = \lambda^{\uparrow (i)}$, one has $\Aut(\mu) \cdot m_i(\lambda)= \Aut(\lambda) \cdot m_{i+1}(\mu)$. Now, observing that the expression in the brackets is exactly the right hand-side of equation \eqref{EqDeltaMon}, one has: $$\sum_{\mu \vdash n+1} C(\pnuv\mu) \Aut(\mu) M_\mu - (n+1)! M_{1^{n+1}} = (n+1) \cdot \Delta \left( \sum_{\lambda \vdash n} \Aut(\lambda) D(\pnv\lambda) M_\lambda \right).$$ Let us rewrite this equality in the power sum basis. The expansion of the two summations in this basis are given by equations \eqref{eq:C2A} and \eqref{eq:D2B}. We also need the power sum expansion of $(n+1)! M_{1^{n+1}}$, which is (see \cite[Chapter I, equation (2.14')]{Macdo}): \begin{multline*} (n+1)! M_{1^{n+1}} = (n+1)! \sum_{\nu \vdash n+1} \frac{(-1)^{n+1-\ell(\nu)}}{z_\nu} p_\nu = \sum_{\nu \vdash n+1} A(\nu) (-1)^{n+1-\ell(\nu)} p_\nu. \end{multline*} Putting everything together, we get: \begin{multline}\label{EqThMainPSeries} \sum_{\nu \vdash n+1} A(\nu) p_\nu + \sum_{\nu \vdash n+1} A(\nu) (-1)^{n-\ell(\nu)} p_\nu = (n+1) \sum_{\pi \vdash n} B(\pnv\pi) \Delta(p_\pi)\\ = (n+1) \sum_{\pi \vdash n} B(\pnv\pi) \sum_i i \cdot m_i(\pi)\ p_{\pi^{\uparrow (i)}}. \end{multline} The last equality comes from equation \eqref{EqDeltaPower}. Identifying the coefficients of $p_\mu$ in both sides, we obtain exactly Theorem \ref{th:Stanleyfine}. Conversely, let us suppose that Theorem \ref{th:Stanleyfine} is true. This means that, for every partition $\mu \vdash n+1$, one has: \[A(\nu) + (-1)^{n-\ell(\nu)} A(\nu) = (n+1) \sum_{\pi = \nu^{\downarrow (i+1)}, i>0} i\ m_i(\pi)\ B(\pnv\pi).\] Multiplying by $p_\mu$ and summing over all partitions $\mu$ of $n+1$, we obtain equation \eqref{EqThMainPSeries}. With the same computations as before we can deduce equation \eqref{EqThReformulationMSeries} from it . Identifying the coefficients of $M_\mu$ , we get equation \eqref{EqProofEquiv1}. But, using equations \eqref{eq:bij_MV} and \eqref{EqSumRecBT}, one has: \begin{multline*} C(\pnuv\mu) \cdot (n+1-p) = \widetilde{ST}(\pnuv\mu) \cdot (n+1-p)! \cdot (n+1-p)\\ = (n+1) \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i \cdot m_i(\lambda) \cdot \widetilde{ST}(\pnv\lambda) \cdot (n-p)!. \end{multline*} Therefore, for every $\mu \vdash n+1$, \[ \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i \cdot m_i(\lambda) \cdot \widetilde{ST}(\pnv\lambda) \cdot (n-p)! 
= \sum_{\lambda = \mu^{\downarrow (i+1)}, i>0} i \cdot m_i(\lambda) \cdot D(\pnv\lambda) \cdot (n+1-p).\] Using Remark \ref{rem:syst_inv} implies that for any $\lambda \vdash n$ \[\widetilde{ST}(\pnv\lambda) \cdot (n-p)! = D(\pnv\lambda) \cdot (n+1-p),\] because both sides are solutions of the same sparse triangular system. This corresponds to equation \eqref{EqDBT}, one of the equivalent forms of Theorem~\ref{th:reformulationfine}.\end{proof} \begin{Rem} Using the same kind of arguments, one could also prove that Theorem \ref{th:stanley} and Theorem~\ref{th:reformulation} are equivalent. \end{Rem} \end{document}
arXiv
Ulrica Wilson Ulrica Wilson is a mathematician specializing in the theory of noncommutative rings and in the combinatorics of matrices.[1] She is an associate professor at Morehouse College, associate director of diversity and outreach at the Institute for Computational and Experimental Research in Mathematics (ICERM),[2][1] and a former vice president of the National Association of Mathematicians.[3] Education and career Wilson is African-American.[2] Originally from Massachusetts, she grew up in Birmingham, Alabama.[2] She is a 1992 graduate of Spelman College,[4] and completed her Ph.D. at Emory University in 2004. Her dissertation, Cyclicity of Division Algebras over an Arithmetically Nice Field, was supervised by Eric Brussel.[5] Wilson has contributed to the advancement of Black women, women of color, and women in general in the mathematical sciences through EDGE (Enhancing Diversity in Graduate Education),[6] a program that supports members of underrepresented groups in achieving their academic goals and obtaining doctoral degrees.[7] After two stints as a postdoctoral researcher,[2] she joined the Morehouse College faculty in 2007, and became associate director at ICERM in 2013.[1] She serves on the Education Advisory Board for ICERM.[8] In collaboration with ICERM, Wilson is also co-director of the Research Experiences for Undergraduate Faculty (REUF) program,[9] founded under the American Institute of Mathematics (AIM) to provide undergraduate faculty with a community of scholars in which to exchange and develop research ideas and projects to pursue with undergraduate students.[9] The EDGE Program In 2011, Wilson became co-director of the EDGE Program, a program to mentor, train, and support the academic development and research activities of women in mathematics. The program was designed to prepare women, especially those from underrepresented groups, for careers in the mathematical sciences, and it has helped increase the number of women in academic, industry, and government roles. EDGE first offered summer sessions to equip women for research, together with annual conferences, mini-research projects, and collaborations with prestigious universities. The program has since expanded, and its activities now center on ongoing support for the academic development and research productivity of women at several critical stages of their careers. EDGE focuses on women at four career stages: entering graduate students, advanced graduate students, postdoctoral students, and early-career researchers. Since Wilson became co-director, over 50 women have participated in EDGE program activities and 18 EDGE participants have received their PhDs.
Numerous women have been granted sabbatical support, and one participant was able to use her mini-sabbatical to continue building her research with a senior mathematician at Purdue University.[7] Recognition In 2003, Wilson was awarded the Marshall Hall Award from Emory College of Arts and Sciences in recognition of excellent teaching and outstanding research as a doctoral student.[10] Wilson was the Morehouse College Vulcan Teaching Excellence Award winner for 2016–2017.[11] She was recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree.[12] In 2018, she won the Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring.[13] She is on the Board of Directors of Enhancing Diversity in Graduate Education (EDGE), a program that helps women entering graduate studies in the mathematical sciences. She was included in the 2019 class of fellows of the Association for Women in Mathematics "for her many years of supporting the professional development of women in their pursuit of graduate degrees in mathematics, most visibly through mentoring, teaching and program administration within the EDGE Program, and also as associate director of diversity and outreach at The Institute for Computational and Experimental Research in Mathematics (ICERM)".[14] She was awarded the 2023 Award for Impact on the Teaching and Learning of Mathematics from the AMS for her "many initiatives on the teaching and learning of mathematics for many different segments of the mathematics community."[15] References 1. Ulrica Wilson: Associate Professor of Mathematics, Morehouse College, retrieved 2018-10-07 2. "Ulrica Wilson", Mathematically Gifted and Black: Black History Month 2017 Honoree, The Network of Minorities in Mathematical Sciences, retrieved 2021-11-12 3. "The NAM Board of Directors" (PDF), NAM Newsletter, 18 (4): 12, Spring 2018 4. Mulcahy, Colm (2017), "A Century of Mathematical Excellence at Spelman College", JMM 2017, doi:10.22595/scpubs.00013 5. Ulrica Wilson at the Mathematics Genealogy Project 6. "EDGE" 7. Wilson, Ulrica, "EDGE Program", retrieved November 15, 2021 8. "ICERM - Trustee and Advisory Boards", icerm.brown.edu, retrieved 2021-07-11 9. Hogben, Leslie; Wilson, Ulrica (2014), "AIM's Research Experiences for Undergraduate Faculty program", Involve: A Journal of Mathematics, 7 (3): 343–353 10. "Department of MATH - Graduate Programs", www.math.emory.edu, retrieved 2021-11-16 11. Mathematics Professor Ulrica Wilson Named Morehouse College's 2016–2017 Vulcan Teaching Excellence Award Winner, Morehouse College, May 26, 2017, retrieved 2018-10-07 12. "Ulrica Wilson", Mathematically Gifted & Black 13. Morehouse Professors Win White House/National Science Foundation Awards, Morehouse College, July 2, 2018, retrieved 2018-10-07 14. 2019 Class of AWM Fellows, Association for Women in Mathematics, retrieved 2019-01-08 15. "News from the AMS", American Mathematical Society, retrieved 2023-04-08
Wikipedia
\begin{document} \title{Being Friends Instead of Adversaries: \ Deep Networks Learn from Data Simplified by Other Networks } \begin{abstract} Amongst a variety of approaches aimed at making the learning procedure of neural networks more effective, the scientific community developed strategies to order the examples according to their estimated complexity, to distil knowledge from larger networks, or to exploit the principles behind adversarial machine learning. A different idea has been recently proposed, named Friendly Training, which consists in altering the input data by adding an automatically estimated perturbation, with the goal of facilitating the learning process of a neural classifier. The transformation progressively fades out as training proceeds, until it completely vanishes. In this work we revisit and extend this idea, introducing a radically different and novel approach inspired by the effectiveness of neural generators in the context of Adversarial Machine Learning. We propose an auxiliary multi-layer network that is responsible for altering the input data to make them easier for the classifier to handle at the current stage of the training procedure. The auxiliary network is trained jointly with the neural classifier, thus intrinsically increasing the ``depth'' of the classifier, and it is expected to spot general regularities in the data alteration process. The effect of the auxiliary network is progressively reduced up to the end of training, when it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training. An extended experimental procedure involving several datasets and different neural architectures shows that Neural Friendly Training outperforms the originally proposed Friendly Training technique, improving the generalization of the classifier, especially in the case of noisy data. \end{abstract} \section{Introduction} \label{sec:intro} \xdef\@thefnmark{}\@footnotetext{Accepted for publication at the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI2022) (DOI: TBA).} In the last decade, scientific research on neural networks has studied different aspects of the training procedure, leading to deep neural models of significantly increased quality \cite{batchnormalization,adam,dropout,bengiocurriculum,spcn,zhang2020fat}. Amongst a large variety of approaches, this paper considers those that are mostly oriented towards performing specific actions on the available training data in order to improve the quality of the trained neural classifier. For example, Curriculum Learning (CL) pursues the idea of presenting the training data in a more efficient manner \cite{bengiocurriculum,Wu2020WhenDC,sinha-curriculum}, exposing the network to simple, easily-discernible examples at first, and to gradually harder examples later, progressively increasing the size of the training set \cite{elman}. Self-Paced Learning (SPL) \cite{spl-kumar,spcn} is another related research area, in which some examples are either excluded from the training set or their impact in the risk function is downplayed if some conditions are met \cite{spcn}. A common property of CL and SPL is that they essentially sub-select or re-order the training examples, without altering the contents of the data. However, more recently, researchers have considered approaches that perform transformations of the input data within the input space of the classifier. Friendly Training (FT) \cite{ft} is a novel approach belonging to the latter category.
FT allows the training procedure not only to adapt the weights and biases of the classifier, but also to transform the training data in order to facilitate the early fulfilment of the learning criterion. Basically, data are modified to better accommodate the development of the classifier. Such transformations (also referred to as ``simplifications'') are controlled and embedded into a precise developmental plan in which the training procedure is progressively constrained to reduce their extent, until data are left in their original version. A key property of FT is that data are altered according to the state of the classifier at the considered stage of the training procedure, and each example is perturbed by a specific offset, obtained by an inner iterative optimization procedure that is started from scratch for each input. Similarly to CL, the benefits of FT are expected to be mostly evident in the case of noisy examples or in datasets annotated with noisy labels. These are pretty common situations of every data collection process of the real-world. In the case of CL, this has been recently discussed and evaluated in \cite{Wu2020WhenDC}, while in the case of FT the existing evaluation is limited to artificial datasets for digit recognition \cite{ft}. \begin{figure} \caption{Left-to-right, top-to-bottom: evolution of the decision boundary developed by a single hidden layer classifier (5 neurons) in the 2-moon dataset, in Neural Friendly Training. Each plot is about a different training iteration ($\gamma$); in the last plot data are not transformed anymore.} \label{fig:toy} \end{figure} In this paper we revisit and extend the idea of FT, introducing a radically different and novel approach. The intuition behind what we propose is that the data simplification process of FT might include regularities that are shared among different training examples, and that there is an intrinsic coherence in the way data are altered in consecutive training iterations, i.e., similar simplifications might be fine in nearby stages of the training procedure. These considerations are not exploited by FT, which applies an independent perturbation to each example, estimated from scratch at each training step. We propose to introduce an auxiliary multi-layer network, that is responsible of altering data belonging to the input space of the classifier. The auxiliary network is trained jointly with the neural classifier, and it learns how to transform the data to improve the learning process of the classifier itself. The weights of the auxiliary net represent the state of the alteration model, that is progressively updated by the training procedure, thus letting the model evolve as long as time passes. From an architectural perspective, the auxiliary network extends the classifier by adding a new set of initial layers, thus increasing the ``depth'' of the model. The effect of the auxiliary network is progressively reduced until the end of training, when it is fully dropped and the classifier is deployed for applications. We refer to this approach as Neural Friendly Training (NFT), and Fig.~\ref{fig:toy} illustrates the behaviour of NFT in a toy 2D classification problem. Neural models to alter data samples have been proficiently exploited by the Adversarial Machine Learning community \cite{qiu2019semanticadv,ijcai2018-543} with the goal of fooling a classifier. 
When considering how to improve a classifier exploiting another network, it is immediate to trace a connection also with Knowledge Distillation (KD) \cite{44873,pmlr-v97-phuong19a}, although in KD the main network is supplied with output probability distributions obtained from a pretrained large model. The auxiliary network of NFT learns to transform the input data, closer to what is done by Spatial Transformer Networks \cite{spatialtransformer} (STN). However, STNs deal with image data only and estimate the parameters of a spatial transformation from a pre-defined family. The contributions of this paper are: (1) we propose a novel training strategy that allows the machine to simplify the training data by means of an auxiliary network that progressively fades out; (2) we extend the experimental analysis of the original FT to non-artificial data, and (3) we experimentally compare it with the proposed NFT approach, using convolutional and fully connected neural architectures with different numbers of layers. Our results confirm that NFT outperforms FT, proving that NFT is a feasible and effective way to improve the generalization skills of the network and to efficiently deal with noisy training data. \section{Neural Friendly Training} \label{sec:method} We consider a generic classification problem in which we are given a training set $\mathcal{X}$ composed of $n$ supervised pairs, $\mathcal{X} = \{ (x_k, y_k),\ k=1,\ldots,n \}$, being $x_k \in \mathbb{R}^{d}$ a training example labeled with $y_k$.\footnote{We consider the case of classification mostly for the sake of simplicity. The proposed approach actually goes beyond classification problems.} Given some input data $x$, we denote with $f(x, w)$ the function computed by a neural network-based classifier with all its weights and biases stored into vector $w$. When optimizing the model exploiting a mini-batch based stochastic gradient descent procedure, at each step of the training routine the following empirical risk $L$ measures the mismatch between predictions and the ground truths, \begin{equation} L\left(\mathcal{B}, w \right) = \frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \ell \left(f\left(x_{i}, w \right) , y_i\right), \label{eq:loss} \end{equation} where $\mathcal{B} \subset \mathcal{X}$ is a mini-batch of data of size $|\mathcal{B}| \geq 1$, $(x_i, y_i) \in \mathcal{B}$, and $\ell$ is the loss function. Notice that, while we are aggregating the contributes of $\ell$ by averaging over the mini-batch data, every other normalization is fully compatible with what we propose. In the most common case of stochastic gradient optimization, a set of non-overlapping mini-batches is randomly sampled at each training epoch, in order to cover the whole set $\mathcal{X}$. We will refer to what we described so far as Classic Training (CT). \paragraph{Friendly Training.} CT provides data to the machine independently on the state of the network and on the information carried by the examples in each $\mathcal{B}$. However, data in $\mathcal{X}$ might include heterogeneous examples with different properties. For instance, their distribution could be multi-modal, it might include outliers or it could span over several disjoint manifolds, and so on and so forth. Existing results in the context of CL \cite{bengiocurriculum,Wu2020WhenDC} and SPL \cite{spcn} (Section~\ref{sec:intro}) show that it might be useful to provide the network with examples whose level of complexity progressively increases as long as learning proceeds. 
However, it is very unlikely to have information on the difficulty of the training examples and, more importantly, if the complexity is determined by humans it might not match the intrinsic difficulty that the machine will face in processing such examples. Alternatively, the value $\ell$ could be used as an indicator to estimate the difficulty of the data, to exclude the examples with largest loss values or to reduce their contribution in Eq.~(\ref{eq:loss}), more closely related to SPL \cite{spl-kumar,spcn}. Differently from the aforementioned approach, Friendly Training (FT) \cite{ft} \textit{transforms} the training examples according to the state of the learner, with the aim of discarding the parts of information that are too complex to be handled by the network with the current weights, while preserving what sounds more coherent with the expectations of the current classifier.\footnote{This is significantly different from deciding whether or not to keep a training example, to weigh its contribute in Eq.~(\ref{eq:loss}), or to re-order the examples. Interestingly, FT is compatible with (and not necessarily an alternative to) such existing strategies.} FT consists in alternating two distinct optimization phases, that are iterated multiple times. In the first phase, the training data are transformed in order to make them more easily manageable by the current network. The training procedure must determine how data should be simplified according to the way the current network behaves. In the second phase, the network is updated as in CT, but exploiting the simplified data instead of the original ones. The whole procedure is framed in the context of a developmental plan in which the amount of the alteration is progressively reduced as long as time passes, until it completely vanishes. This is inspired by the basic principle of strongly simplifying the data during the early stages of life of the classifier, in order to favour its development, while the extent of transformation is reduced when the classifier improves its skills. Clearly, to deploy a trained classifier that does not rely on altered data, the impact of the simplification must vanish during the training process, exposing the classifier to the original training data after a certain number of steps. Formally, FT perturbs the training data by estimating the variation $\delta_i$, \begin{equation} \tilde{x}_i = x_i + \delta_i, \label{eq:delta_new} \end{equation} for each example $x_i$. Such estimation is repeated from scratch for each training example, and at each training epoch. The terms $\delta_i$'s are obtained with the goal of minimizing $L$ in Eq.~(\ref{eq:loss}), replacing $x_i$ with $\tilde{x}_i$ of Eq.~(\ref{eq:delta_new}). Determining an accurate $\delta_i$ might require an iterative optimization procedure, and a maximum number of iterations is defined to control the strength of the perturbation, progressively reduced as long as training proceeds. \footnote{Further details are available in \cite{ft}.} \paragraph{Neural Friendly Training.} Despite the novel view introduced by FT, the instance of \cite{ft} is mostly inspired by the basic tools used in the context of Adversarial Training \cite{zhang2020fat}, with a perturbation model that requires a per-example independent optimization procedure. Here we propose to instantiate FT in a different manner, by considering that there might be some regularities in the way data samples are simplified. 
This leads to the introduction of a more structured transformation function that is shared by all the examples. This intuition is also motivated by recent studies in Adversarial Machine Learning that exploited perturbation models based on generative networks \cite{qiu2019semanticadv,ijcai2018-543}, although with the goal of fooling a classifier. Formally, a training sample $x_i\in \mathbb{R}^{d}$ is transformed into $\tilde{x}_i\in \mathbb{R}^{d}$ by means of the function $s(x_i, \theta)$, \begin{equation} \tilde{x}_i = s(x_i, \theta), \label{eq:delta} \end{equation} being $\theta$ a set of learnable parameters, shared by all the examples. We consider the case in which $s$ is implemented with an additional neural network, also referred to as \textit{auxiliary network}, whose weights and biases are collected in $\theta$, and we talk about Neural Friendly Training (NFT). For convenience in the notation, we keep the definition of $\delta_i$ inherited from Eq.~(\ref{eq:delta_new}), i.e., $\delta_i = \tilde{x}_i-x_i$. The term \textit{main network} refers to the network that implements $f$, i.e., the classifier, and we report in Fig.~\ref{fig:ft} a sketch of the proposed model. \begin{figure} \caption{(a) Classic deep network. (b) Neural Friendly Training (NFT): main deep network (top) and auxiliary network (bottom). The auxiliary network learns how to simplify the data $x$, while the main network learns the classification task exploiting the simplified data $\tilde{x}$. As long as training proceeds, the effect of the auxiliary network is progressively reduced, until it vanishes (and it is removed).} \label{fig:ft} \end{figure} In order to setup a valid developmental plan, we introduce an augmented criterion by re-defining the risk $L$ of Eq.~(\ref{eq:loss}), \begin{eqnarray} \hskip -5mm \nonumber L(\mathcal{B},w,\theta) = \frac{1}{|\mathcal{B}|} \sum_{i=1}^{|\mathcal{B}|} \hskip -2mm &\hskip -5mm \Bigg( \hskip -5mm & \hskip -2mm \ell \big(f ( \underbrace{s({x}_i, \theta)}_{\tilde{x}_i} ,w ) , y_i\big) + \\ [-3mm] & & \hskip 5mm \eta \big\| \underbrace{s({x}_i, \theta) - x_i}_{\delta_i} \big\|^2 \Bigg), \label{eq:loss2} \end{eqnarray} where $(x_i,y_i) \in \mathcal{B}$, and $\eta > 0$ is the weight of the squared Euclidean norm of the perturbation $\delta_i$. We indicate with $\gamma \geq 1$ the NFT iteration index, where each iteration consists of the two aforementioned phases. In the first phase, the auxiliary network is updated by minimizing Eq.~(\ref{eq:loss2}) with respect to $\theta$, keeping the main network fixed. In the second phase, the auxiliary network has the sole role of transforming the data, while the main network is updated by minimizing Eq.~(\ref{eq:loss2}) with respect to $w$. If all the training data is used in this phase, then $\gamma$ boils down to the epoch index (that is the case we considered in the experiments). If $\gamma_{max}$ is the maximum number of NFT iterations, we ensure that after $\gamma_{max\_simp} < \gamma_{max}$ steps the data are not perturbed anymore. In order to progressively reduce the perturbation level, we increase the value of $\eta$ in Eq.~(\ref{eq:loss2}). For a large $\eta$, NFT will strongly penalize the norm of $\delta_i$, becoming the dominant term in the optimization process of the auxiliary network, enforcing the net to keep $\delta_i$ small. 
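To make the augmented criterion of Eq.~(\ref{eq:loss2}) concrete, we sketch below how the mini-batch risk can be computed in a PyTorch-like style. This is only an illustrative sketch under our own naming (\texttt{main\_net} plays the role of $f(\cdot,w)$, \texttt{aux\_net} of $s(\cdot,\theta)$, and cross-entropy is used as $\ell$); it is not taken from the authors' released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def friendly_risk(main_net, aux_net, x, y, eta):
    # x: mini-batch of inputs, y: labels, eta: current weight of the penalty.
    x_tilde = aux_net(x)                  # simplified data, s(x, theta)
    delta = x_tilde - x                   # perturbation delta_i = s(x_i) - x_i
    logits = main_net(x_tilde)            # f(s(x, theta), w)
    ce = F.cross_entropy(logits, y)       # average of l(f(x_tilde), y) over the batch
    penalty = (delta.flatten(1) ** 2).sum(dim=1).mean()  # average ||delta_i||^2
    return ce + eta * penalty             # Eq. (4), averaged over the mini-batch
\end{verbatim}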
We indicate with $\eta_{max}$ the maximum possible value of $\eta$, and at each step $\gamma$ of the developmental process we compute $\eta$ using the following law, being $[a]_{+}$ the positive part of $a$, \begin{minipage}{0.73\columnwidth} \begin{eqnarray} \hskip -0.5mm \eta = \eta_{max} \hskip -0.5mm \left(\hskip -1mm 1 \hskip -0.5mm - \hskip -0.5mm \left[ 1 \hskip -0.5mm - \hskip -0.5mm \frac{\gamma \hskip -0.5mm - \hskip -0.5mm 1}{\gamma_{max\_simp} \hskip -0.5mm - \hskip -0.5mm 1} \right]_{+}^2 \hskip -0.5mm \right) \hskip -2mm \label{etaplan} \end{eqnarray} \end{minipage} \hskip 2mm \begin{minipage}{0.228\columnwidth} \begin{tikzpicture}[scale=0.27] \hskip -5mm \begin{axis}[domain=0:10,samples=1500, ymin=0, ymax=1.5, xtick=\empty,ytick=\empty, extra x ticks=7,extra x tick labels={$\gamma_{max\_simp}$}, extra y ticks=1,extra y tick labels={$\eta_{max}$}, grid=both,axis lines=middle, x label style={at={(axis description cs:1.1,0.09), \Huge},anchor=north}, xlabel={$\gamma$}, y label style={at={(axis description cs:0.0,1.2), \Huge},anchor=north}, ylabel={$\eta$}, ] \addplot+[no marks, line width=1.5pt] {1- (max(1 - (x )/(7) , 0))^2)} node[above left] {}; \end{axis} \end{tikzpicture} \end{minipage} where $\eta \in \left[0, \eta_{max} \right]$. At $\gamma_{max\_simp}$ iterations, the penalty on $\| \delta_i \|^2$ will reach its maximum weighting. While this enforces the function $s(\cdot, \theta)$ to get closer to the identity function, we have no formal guarantees that it will effectively push the perturbation to zero. For this reason, after $\gamma_{max\_simp}$ iterations we drop the auxiliary network, exposing the system to the original training data. The developmental plan on $\eta$ favours a smooth transition between the setting in which the auxiliary network is used and when it is removed. The training procedure is detailed in Algorithm~\ref{alg:friendlyn}, \begin{algorithm} \caption{Neural Friendly Training.} \begin{algorithmic}[1] \renewcommand{\textbf{Input:}}{\textbf{Input:}} \renewcommand{\textbf{Output:}}{\textbf{Output:}} \REQUIRE Training set $\mathcal{X}$, initial weights and biases $w$, batch size $b$, max FT steps $\gamma_{max}$, max simplification steps $\gamma_{max\_simp}$, $\eta_{max} > 0$, learning rates $\alpha > 0$ and $\beta > 0$. \ENSURE The final $w$. 
\FOR {$\gamma = 1$ to $\gamma_{max}$} \STATE Compute $\eta$ following Eq.~(\ref{etaplan}) \IF {$\gamma > 1$ \AND $\gamma \leq \gamma_{max\_simp}$} \STATE $s$ $\leftarrow$ \texttt{auxiliary\_net}$(\cdot, \theta)$ \STATE Sample a set of minibatches $B = \{ \mathcal{B}_z \}$ from $\mathcal{X}$ \FOR {each mini-batch $\mathcal{B}_z \in B$} \STATE Compute $\nabla_{\theta} = \frac{\partial L(\mathcal{B}_z,w,h)}{\partial h}\Bigr|_{\substack{h=\theta}}$, see Eq.~(\ref{eq:loss2}) \hskip 0.0mm \rlap{\smash{$\left.\begin{array}{@{}c@{}}\\{}\\{}\\{}\\{}\end{array}\color{black}\right\} \color{black}\begin{tabular}{l}\hskip -3mm \rotatebox{90}{\scriptsize \textsc{First Phase}: update $s(\cdot, \theta)$}\end{tabular}$}} \STATE $\theta = \theta - \beta \cdot \nabla_{\theta}$ \ENDFOR \ELSE \STATE $s \leftarrow I(\cdot)$ \ENDIF \STATE Sample a set of minibatches $B = \{ \mathcal{B}_z \}$ from $\mathcal{X}$ \FOR {each mini-batch $\mathcal{B}_z \in B$} \STATE Compute $\nabla_{w} = \frac{\partial L(\mathcal{B}_z,h,\theta)}{\partial h}\Bigr|_{\substack{h=w}}$, see Eq.~(\ref{eq:loss2}) \hskip 1.5mm \rlap{\smash{ $\left.\begin{array}{@{}c@{}}\\{}\\{}\\{}\\{}\end{array}\color{black}\right\} \color{black}\begin{tabular}{l}\hskip -3mm \rotatebox{90}{\scriptsize \textsc{Second Phase}: update $f(\cdot, w)$}\end{tabular}$}} \STATE $w = w - \alpha \cdot \nabla_{w}$ \ENDFOR \ENDFOR \RETURN $w$ \end{algorithmic} \label{alg:friendlyn} \end{algorithm} and in the following lines we provide some further details. The auxiliary network is not updated during the first iteration ($\gamma = 1$), since the main network is still in its initial/random state. After $\gamma_{max\_simp}$ iterations, the auxiliary network is replaced by the identity function $I(\cdot)$ (line 10). Notice that the weight update equations (line 8 and line 16) can include any existing adaptive learning rate estimation procedures, and in our current implementation we are using the Adam optimizer with learning rates $\alpha$ and $\beta$ \cite{adam}, unless differently stated. While Algorithm~\ref{alg:friendlyn} formally returns the weights after having completed the last training iteration, as usual, the best configuration of the classifier can be selected by measuring the performance on a validation set (bypassing the auxiliary net at inference time). We qualitatively show the behavior of the proposed training strategy in the toy example of Fig.~\ref{fig:toy}. A very simple network with one hidden layer ($5$ neurons with hyperbolic tangent activation function) is trained on the popular two-moon dataset (two classes, 300 examples), optimized by Adam with mini-batch of size $64$. The auxiliary network alters the training data (from the popular 2-moon problem) in order to make them almost linearly separable during the early iterations. Then, the data distribution progressively moves toward the original configuration, and the decision boundary of the main classifier smoothly follows the data. In the last plot, the auxiliary network has been dropped and examples are located at their original positions in final stages of developmental plan. Of course, NFT increases the complexity of each training step, due to the extra projection computed by the auxiliary network in the forward stage of the classifier and due to the first phase of Algorithm~\ref{alg:friendlyn}. The actual additional computational burden of NFT with respect to CT depends on the architecture of the auxiliary network and on the number of sampled mini-batches. 
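For readers who prefer code to pseudo-code, the two-phase procedure of Algorithm~\ref{alg:friendlyn} can also be sketched as follows. Again, this is a simplified illustration under our own assumptions (a \texttt{friendly\_risk} function as in the previous sketch, Adam as the optimizer, and a plain cross-entropy bypass after $\gamma_{max\_simp}$ iterations); it is not the authors' code.
\begin{verbatim}
import torch

def eta_schedule(gamma, gamma_max_simp, eta_max):
    # Eq. (5): eta grows from 0 to eta_max and then stays at eta_max.
    r = max(1.0 - (gamma - 1) / (gamma_max_simp - 1), 0.0)
    return eta_max * (1.0 - r ** 2)

def neural_friendly_training(main_net, aux_net, loader, gamma_max,
                             gamma_max_simp, eta_max, alpha, beta):
    opt_w = torch.optim.Adam(main_net.parameters(), lr=alpha)
    opt_theta = torch.optim.Adam(aux_net.parameters(), lr=beta)
    for gamma in range(1, gamma_max + 1):
        eta = eta_schedule(gamma, gamma_max_simp, eta_max)
        simplify = 1 < gamma <= gamma_max_simp
        if simplify:
            # First phase: update the auxiliary network only (w is not stepped).
            for x, y in loader:
                opt_theta.zero_grad()
                friendly_risk(main_net, aux_net, x, y, eta).backward()
                opt_theta.step()
        # Second phase: update the classifier on (possibly) simplified data.
        for x, y in loader:
            opt_w.zero_grad()
            if simplify:
                loss = friendly_risk(main_net, aux_net, x, y, eta)
            else:
                # Auxiliary network dropped: train on the original data.
                loss = torch.nn.functional.cross_entropy(main_net(x), y)
            loss.backward()
            opt_w.step()
    return main_net
\end{verbatim}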
Moreover, instead of Eq.~\ref{etaplan}, different developmental plans could be selected to more quickly reduce the simplification and eventually drop the auxiliary network before the end of training, even if investigating these factors goes beyond the scope of this paper. When comparing NFT and FT we can see that, from the storage point of view, NFT needs to memorize a new network and the associated intermediate variables for optimization purposes, while FT only requires a new set of variables to store the delta terms. However, from the computational point of view, for each example $x_i$, FT performs $\tau \geq 1$ iterations to update the perturbation $\delta_i$, that implies $\tau$ inference steps on the main network (see Algorithm~1 of \citet{ft}). Differently, NFT does not require any inner example-wise iterative procedures (Algorithm~\ref{alg:friendlyn}, first phase). The inference time in the auxiliary network determines the concrete variations in terms of computational times with respect to FT. In our experience, on average, training with NFT took similar times to the ones of FT, since $\tau$ (in FT) gets reduced as time passes and we early stopped the inner FT iterations as suggested in \cite{ft}. \section{Experiments} \label{sec:methods} We carried out a detailed experimental activity aimed at evaluating how NFT behaves when compared to FT. We considered the same experimental conditions of \cite{ft}, initially using the same datasets (Section~\ref{larochelledata}), and then we focused on novel experiences (textual data, Section~\ref{textsux}, pictures of vehicles and animals, Section~\ref{cifar}), where FT was never tested before. We also performed an in-depth analysis on NFT (Section~\ref{sec:in-depth}). We considered the same four neural classifiers that were used in \cite{ft},\footnote{Code available at \url{https://sailab.diism.unisi.it/friendly}. } that consist in two feed-forward Fully-Connected multi-layer perceptrons, referred to as FC-A and FC-B, two Convolutional Neural Networks, named CNN-A and CNN-B, and we also tested a ResNet18 \cite{resnet} in one of the following experiences, motivated by related work \cite{Wu2020WhenDC}.\footnote{FC-A is a simple one-hidden-layer network with hyperbolic tangent activations (10 hidden neurons), while FC-B is deeper and larger model, with 5 hidden layers (2500-2000-1500-1000-500 neurons), batch normalization and ReLU activations. CNN-A consists of 2 convolutional layers, max pooling, dropout and 2 fully connected layers, while CNN-B is deeper (4 convolutional layers). Both of them exploit ReLU activation functions on the convolutional feature maps ($32$-$64$ filters in CNN-A, $32$-$48$-$64$-$64$ filters in CNN-B) and on the fully connected layers ($9216$-$128$ neurons for CNN-A, $5184$-$128$ neurons for CNN-B). Unless differently stated, learning of weights and biases is driven by the minimization of the cross-entropy loss, exploiting the Adam optimizer \cite{adam} with mini-batches of size $32$.} The auxiliary network was selected depending on the type of data that it is expected to simplify. The output layer has the same size of the input one and linear activation. In the case of image data (Section~\ref{larochelledata}, \ref{cifar}), the auxiliary network is inspired by U-Net \cite{unet}. 
U-Net progressively down-samples the image, encoding the context information into the convolutional feature maps, and then it up-samples and transforms the data until it matches the input size, also exploiting skip connections.\footnote{Code: \url{https://github.com/milesial/Pytorch-UNet}. In the down-sampling part, 2 initial conv. layers encode the image into $n_f$ feature maps. Then, $\nu$ down-sampling blocks (each of them composed of maxpooling and 2 conv. layers) are followed by $\nu$ up-sampling blocks (each of them composed of bilinear upscaling and 2 conv. layers). We considered $\nu \in \{1,2\}$, and $n_f \in \{64,96,128\}$.} In the case of 1-dim data (Section~\ref{textsux}) we used a fully-connected auxiliary net with $256$ hidden neurons. In all the experiments, networks were randomly initialized, providing the exact same initialization to both FT/NFT and CT, and we report results averaged over 3 runs, corresponding to 3 different instances of the initialization process. For each FT/NFT iteration, we sampled non-overlapping mini-batches until all the training data were considered, so that $\gamma$ is also the epoch index. We selected a large number of epochs $\gamma_{max}$ which we found to be sufficient to obtain a stable configuration of the weights in preliminary experiences (detailed below), and the reported metrics are about the model with the lowest validation error obtained during training. The error rate was selected as the main metric, since it is one of the most common and simple measure in classification problems. We performed some preliminary experiments to determine the optimal Adam learning rate in the case of CT. Then, we tuned the FT hyper-parameters ($\eta_{max}$, $\gamma_{max\_simp}$, $\beta$, $n_f$) by grid search (detailed below). We experimented on two machines equipped with NVIDIA GeForce RTX 3090 (24GB) GPUs. \subsection{Advanced Digit and Shape Recognition} \label{larochelledata} The collection of datasets presented in \cite{mnistvariations} is about $10$-class digit recognition problems and shape-based binary classification tasks ($28\times 28$, grayscale). In detail, \textsc{mnist-rot} consists of MNIST digits rotated by a random angle, while \textsc{mnist-back-image} features non-uniform backgrounds extracted by some random images, and \textsc{mnist-rot-back-image} combines the factors of variations of the first two datasets. In \textsc{rectangles-image} we find representations of rectangles, that might be wide or tall, with the inner and outer regions eventually filled with patches taken from other images, while \textsc{convex} is about convex or non-convex white regions on a black background. Datasets ($\approx$ 60k samples) are already divided into training, validation and test set . We compared the test error rates of the FC-A/B and CNN-A/B models in CT, FT/NFT, and also using the CL-inspired data sorting policy of \cite{ft}, named Easy-Examples First (EEF) that has the same temporal dynamics of FT. Experiments are executed for $\gamma_{max} = 200$ epochs, and we selected the model with the lowest validation error considering $\eta_{max} \in \{500 ,1000 ,2000 \}$, $\gamma_{max\_simp} \in \{0.25, 0.5, 0.85\} \cdot \gamma_{max}$, $\beta \in \{ 10^{-5}, 10^{-4}, 5 \cdot 10^{-4}\}$. Table~\ref{tab:main-table} reports the test error rate of the different models, where other baseline results exploiting different types of classifier can be found in \cite{ft} (typically overcame by FT/NFT). 
Our analysis starts by confirming that the family of Friendly Training algorithms (being them neural or not) very frequently shows better results than CT and of EEF. Moreover, the proposed NFT almost always improves the results of FT, supporting the idea of using an auxiliary network to capture regularities in the simplification process. In the case of CNN-A and CNN-B, the error rate of NFT is lower than in FT, with the exception of \textsc{rectangles-image}, where, however, NFT reported a pretty large standard deviation. In fully-connected architectures FC-A and FC-B, we still observe a positive impact of NFT, that usually beats FT. However, the improvement over CT can be appreciated in a less evident or more sparse manner. As a matter of fact, these architectures are less appropriate than CNNs to handle image data. However, it is still interesting to see how FC-B benefits from the auxiliary network introduced in NFT, that is indeed a convolutional architecture. Overall, results show that using an auxiliary network is better than independently estimating the perturbation offsets of each example, confirming the capability of the network to learn shared facets of the simplification process. {\setlength{\tabcolsep}{2pt} \begin{table}[h] \begin{center} \scalebox{0.9}{\begin{tabular}{lc|c@{\hspace{1mm}}c@{\hspace{0.5mm}}c@{\hspace{-0.5mm}}c@{\hspace{-0.5mm}}c} \toprule \multicolumn{2}{c}{$\ $}&mn-back&mn-rot-back&mn-rot&$\ \ $ rectangles $\ \ $&convex\\ \midrule \multirow{4}{*}{\rotatebox{90}{FC-A}} & CT&$28.34${\tiny $\pm 0.09$}&$64.06${\tiny $\pm 0.31$}&$43.16${\tiny $\pm 0.51$}&$24.31${\tiny $\pm 0.21$}&$33.91${\tiny $\pm 0.44$}\\ & EEF&$\textbf{28.18}${\tiny $\pm 0.47$}&$64.27${\tiny $\pm 0.19$}&$43.91${\tiny $\pm 0.73$}&$24.48${\tiny $\pm 0.11$}&$\textbf{33.17}${\tiny $\pm 0.93$}\\ & FT&$28.66${\tiny $\pm 0.06$}&$64.14${\tiny $\pm 0.36$}&$43.24${\tiny $\pm 0.43$}&$24.64${\tiny $\pm 0.37$}&$34.38${\tiny $\pm 0.22$}\\ & NFT&$\textbf{28.15}${\tiny $\pm 0.04$}&$64.55${\tiny $\pm 0.14$}&$\textbf{42.96}${\tiny $\pm 0.58$}&$24.57${\tiny $\pm 0.19$}&$34.25${\tiny $\pm 1.03$}\\ \midrule \multirow{4}{*}{\rotatebox{90}{FC-B}} & CT&$21.06${\tiny $\pm 0.39$}&$51.71${\tiny $\pm 0.79$}&$10.13${\tiny $\pm 0.27$}&$25.10${\tiny $\pm 0.20$}&$27.24${\tiny $\pm 0.05$}\\ & EEF&$21.38${\tiny $\pm 0.18$}&$52.95${\tiny $\pm 0.63$}&$\textbf{10.04}${\tiny $\pm 0.17$}&$\textbf{24.84}${\tiny $\pm 0.32$}&$28.21${\tiny $\pm 0.96$}\\ & FT&$21.74${\tiny $\pm 0.26$}&$\textbf{51.02}${\tiny $\pm 0.07$}&$11.19${\tiny $\pm 0.37$}&$\textbf{24.14}${\tiny $\pm 0.53$}&$27.49${\tiny $\pm 0.07$}\\ & NFT&$\textbf{20.91}${\tiny $\pm 0.52$}&$\textbf{50.20}${\tiny $\pm 0.16$}&$\textbf{10.09}${\tiny $\pm 0.32$}&$\textbf{25.09}${\tiny $\pm 0.09$}&$\textbf{26.81}${\tiny $\pm 0.15$}\\ \midrule \multirow{4}{*}{\rotatebox{90}{CNN-A}} & CT&$7.25${\tiny $\pm 0.16$}&$29.05${\tiny $\pm 0.45$}&$7.48${\tiny $\pm 0.14$}&$9.86${\tiny $\pm 0.32$}&$8.24${\tiny $\pm 0.09$}\\ & EEF&$\textbf{7.02}${\tiny $\pm 0.08$}&$29.12${\tiny $\pm 0.34$}&$7.61${\tiny $\pm 0.22$}&$12.82${\tiny $\pm 0.70$}&$8.72${\tiny $\pm 0.74$}\\ & FT&$\textbf{6.80}${\tiny $\pm 0.19$}&$\textbf{28.74}${\tiny $\pm 0.29$}&$\textbf{7.36}${\tiny $\pm 0.06$}&$\textbf{9.72}${\tiny $\pm 0.20$}&$8.59${\tiny $\pm 1.44$}\\ & NFT&$\textbf{6.59}${\tiny $\pm 0.09$}&$\textbf{28.67}${\tiny $\pm 0.35$}&$\textbf{7.17}${\tiny $\pm 0.17$}&$10.99${\tiny $\pm 1.89$}&$\textbf{8.03}${\tiny $\pm 0.23$}\\ \midrule \multirow{4}{*}{\rotatebox{90}{CNN-B}} & CT&$5.15${\tiny $\pm 0.15$}&$23.05${\tiny $\pm 
0.21$}&$6.58${\tiny $\pm 0.06$}&$8.10${\tiny $\pm 1.90$}&$3.01${\tiny $\pm 0.41$}\\ & EEF&$\textbf{4.82}${\tiny $\pm 0.19$}&$\textbf{22.89}${\tiny $\pm 0.49$}&$7.02${\tiny $\pm 0.28$}&$8.35${\tiny $\pm 1.01$}&$3.75${\tiny $\pm 0.58$}\\ & FT&$\textbf{5.03}${\tiny $\pm 0.11$}&$\textbf{22.81}${\tiny $\pm 0.36$}&$6.95${\tiny $\pm 0.12$}&$\textbf{7.32}${\tiny $\pm 1.31$}&$\textbf{2.87}${\tiny $\pm 0.42$}\\ & NFT&$\textbf{4.96}${\tiny $\pm 0.34$}&$\textbf{22.22}${\tiny $\pm 0.62$}&$\textbf{6.48}${\tiny $\pm 0.25$}&$\textbf{6.27}${\tiny $\pm 0.62$}&$\textbf{2.78}${\tiny $\pm 0.34$} \\ \bottomrule \end{tabular}} \end{center} \caption{Comparison of different classifiers (FC-A, FC-B, CNN-A, CNN-B) and learning algorithms (CT, EEF, FT from \cite{ft} and our NFT) -- datasets of Section~\ref{larochelledata} (where \textsc{mn} stands for \textsc{mnist} and the suffix \textsc{image} is omitted). Test error and standard deviation over 3 runs are reported. For each architecture, results that improve over the CT case are in bold. } \label{tab:main-table} \end{table}} \subsection{Sentiment Analysis} \label{textsux} We investigate how NFT behaves in Natural Language Processing, considering the task of Sentiment Analysis (positive/negative polarity). We selected two datasets and considered different representations of the examples. The first dataset is \textsc{imdb} \cite{maas-EtAl:2011:ACL-HLT2011}, also known as Large Movie Review Dataset, which is a collection of $50$k highly-polar reviews from the IMDB database. We considered a vocabulary of the most frequent $20$k words and a TF-IDF \cite{tfidf-jones} representation of each review. The second dataset, \textsc{wines} \cite{thoutt_wine_nodate}, collects $130$k wine reviews scored in $[80,100]$, which we divided into two classes, i.e., $[80,90)$ vs. $[90,100]$. In this case, in order to acquire a broader outlook on the effect of NFT, we chose a different text representation, exploiting a pretrained Transformer-based architecture (DistilRoBERTa \cite{reimers-2019-sentence-bert}) with average pooling to compute dense representations of size $768$ for each review. We trained the deeper fully-connected architecture, FC-B, for $30$ epochs. NFT hyper-parameters were selected in $\eta_{max} \in \{10,100,500,1000,2000\}$, $\gamma_{max\_simp} \in \{0.05, 0.1, 0.25, 0.5, 0.85\} \cdot \gamma_{max}$, $\beta \in \{ 10^{-5}, 10^{-4}, 5 \cdot 10^{-4} \}$. Concerning FT, we extended the grids of \cite{ft} for all the new experiments of this paper, testing further parameter configurations (supplementary material at \url{https://sailab.diism.unisi.it/friendly}). As reported in Table~\ref{tab:otherdata} (top), the performance of CT is consistently improved by NFT, achieving lower error rates in both datasets (and representations). The sentence classification task appears to be slightly more difficult in \textsc{wines}. This is probably due to the fact that wine reviews are less polarized, since they are all highly scored. Concerning \textsc{imdb}, the superiority of NFT over both CT and FT is evident. Overall, these results confirm the versatility of NFT.
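As a side note on the input representations, the \textsc{imdb} TF-IDF features and the binarization of the \textsc{wines} scores can be obtained with standard tooling. The snippet below uses scikit-learn and NumPy as one possible choice; it is a sketch rather than our exact pipeline (the \textsc{wines} case additionally relies on the pretrained DistilRoBERTa encoder mentioned above):
\begin{verbatim}
# Sketch of the text representations used in this section (one possible
# tooling choice, not necessarily our exact pipeline).
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

def imdb_tfidf(train_texts, test_texts):
    # vocabulary restricted to the 20k most frequent words
    vectorizer = TfidfVectorizer(max_features=20000)
    x_train = vectorizer.fit_transform(train_texts)
    x_test = vectorizer.transform(test_texts)
    return x_train, x_test

def wine_labels(scores):
    # reviews scored in [80, 100]: class 0 for [80, 90), class 1 for [90, 100]
    return (np.asarray(scores) >= 90).astype(np.int64)
\end{verbatim}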
\begin{table}[!hb] \scalebox{0.9}{\begin{tabular}{l|c} \toprule \multicolumn{1}{c}{$\ $} &imdb\\ \midrule FC-B CT&$13.27$ {\tiny $\pm 0.19$}\\ FC-B FT&${13.66}$ {\tiny $\pm 0.69$}\\ FC-B NFT \hskip 1.7mm $\ $ &$\textbf{11.93}$ {\tiny $\pm 0.09$}\\ \bottomrule \end{tabular}} \hskip 1mm \scalebox{0.9}{\begin{tabular}{l|c} \toprule \multicolumn{1}{c}{$\ $} &$\ \ \ \ \ $wines$\ \ \ \ \ $\\ \midrule FC-B CT&$17.38$ {\tiny $\pm 0.15$}\\ FC-B FT&$\textbf{17.07}$ {\tiny $\pm 0.11$}\\ FC-B NFT \hskip 0.8mm $\ $ &$\textbf{17.15}$ {\tiny $\pm 0.12$}\\ \bottomrule \end{tabular}}\\ \vskip 1mm \scalebox{0.9}{\begin{tabular}{l|c} \toprule \multicolumn{1}{c}{$\ $} &cifar-10\\ \midrule CNN-B CT&$29.75$ {\tiny $\pm 0.37$}\\ CNN-B FT&$30.19$ {\tiny $\pm 0.53$}\\ CNN-B NFT&$\textbf{29.00}$ {\tiny $\pm 0.36$} \\ \bottomrule \end{tabular}} \hskip 1mm \scalebox{0.9}{\begin{tabular}{l|c} \toprule \multicolumn{1}{c}{$\ $} &cifar-10-n10\\ \midrule ResNet CT&$9.30$ {\tiny $\pm 0.16$}\\ ResNet FT&$\textbf{8.92}$ {\tiny $\pm 0.23$}\\ ResNet NFT&$\textbf{8.10}$ {\tiny $\pm 0.19$} \\ \bottomrule \end{tabular}} \caption{Comparison of classifiers with different architectures and learning algorithms (CT, FT, NFT) -- data of Section~\ref{textsux} (top) and Section~\ref{cifar} (bottom). Mean test error is reported with standard deviation. Results improving over CT are in bold.}\label{tab:otherdata} \end{table} \subsection{Image Classification} \label{cifar} CIFAR-10 \cite{krizhevsky_learning_2009} is a popular Image Classification dataset, consisting of $60$k $32 \times 32$ color images from 10 different classes. We divided the original training data into training and validation sets ($10$k examples used as validation set), and we initially evaluated NFT using the previously described generic CNN-B architecture. Table \ref{tab:otherdata} (bottom-left) shows that, while we were not able to improve the results of CT using FT, NFT slightly improves the quality of the network, reducing the error rate and further confirming its benefits. \begin{figure} \caption{ResNet18 on the \textsc{CIFAR-10} dataset for different amounts of noisy labels. Error bars show the standard deviation.} \label{fig:histo-noisy} \end{figure} However, state-of-the-art convolutional networks specifically designed/tuned for CIFAR-10 usually achieve lower error rates, so we decided to perform a more specific experimental activity. In particular, we considered ResNet18 \cite{resnet}, inheriting all the carefully selected optimization parameters and tricks that yield state-of-the-art results in CIFAR-10.\footnote{Stochastic Gradient Descent (learning rate $0.1$ with cosine annealing learning rate scheduler) with momentum ($0.9$) and weight decay ($5 \cdot 10^{-4}$), mini-batches of size 128, data augmentation -- see \url{https://github.com/kuangliu/pytorch-cifar}.} Since FT/NFT bring marginal benefits over CT, we designed a more challenging condition following the setup of recently published CL work \cite{Wu2020WhenDC}. We introduced some noise by randomly permuting 10\% of the target labels, generating what we will refer to as \textsc{cifar-10-n10}. We trained the network for $250$ epochs, and report the results in Table~\ref{tab:otherdata} (bottom-right). NFT hyper-parameters were selected in $\eta_{max} \in \{500,1000,2000\}$, $\gamma_{max\_simp} \in \{0.25, 0.5, 0.7\} \cdot \gamma_{max}$, $\beta \in \{ 10^{-4}, 5 \cdot 10^{-4} \}$. The learning rate scheduler is applied starting from $\gamma_{max\_simp}$, with an initial learning rate equal to $0.1 \cdot \alpha$.
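For reference, the label corruption used to build \textsc{cifar-10-n10} (and the noisier variants discussed next) can be sketched as follows; this is an illustrative NumPy snippet, not necessarily the exact routine we used:
\begin{verbatim}
# Illustrative label corruption for cifar-10-n10 (and noisier variants):
# a fraction of the training targets is randomly permuted.
import numpy as np

def corrupt_labels(labels, noise_rate=0.10, seed=0):
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    n_noisy = int(noise_rate * len(labels))
    idx = rng.choice(len(labels), size=n_noisy, replace=False)
    labels[idx] = labels[rng.permutation(idx)]  # shuffle the selected targets
    return labels
\end{verbatim}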
We observe that NFT is effective also when dealing with this type of network. While FT also brings a small improvement, it is far from the one obtained by NFT. We further investigated this result by varying the amount of noise injected into the training labels. Fig.~\ref{fig:histo-noisy} compares CT and NFT for different noise levels, up to $80\%$. Interestingly, the impact of NFT becomes more and more evident, gaining $\approx 8\%$ in strongly noisy environments, confirming that data simplification helps the main network to better discard the noisy information. \subsection{In-Depth Analysis}\label{sec:in-depth} We qualitatively compared NFT and FT on the \textsc{mnist-back-image} dataset of Section~\ref{larochelledata}, in which the important information is known (the digits), since the background is uncorrelated with the target. We mostly considered the CNN-A model, for which NFT led to the most significant improvements with respect to CT (Table~\ref{tab:main-table}). In Fig.~\ref{fig:simpl-images} we show how examples are affected when using an auxiliary network (bottom - NFT) or when independent transformations are estimated for each example through a gradient-based procedure (top - FT). Estimating the transformation function with a neural model leads to a qualitatively different behavior. We observe that FT yields structured perturbations only when paired with CNN-A, emphasizing the digit areas. In contrast, NFT shows more natural perturbation patterns, removing distracting cues (background). In essence, the convolutional auxiliary net leads to transformations with a much more detailed awareness of the visual structures. \begin{figure} \caption{\textsc{mnist-back-image}. Original data $x$, perturbation $\delta$ (normalized) and resulting ``simplified'' images $\tilde{x}$ for FC-A and CNN-A at the end of the $1$st epoch. Some simplifications are hardly distinguishable. Top: FT. Bottom: NFT.} \label{fig:simpl-images} \end{figure} In Fig.~\ref{histoleft}, we report the evolution of the test error rate during the training epochs (\textsc{mnist-back-image}, CNN-A), comparing NFT and CT. The developmental plan reduces the impact of the perturbation until epoch $175$ (afterwards, data are not altered anymore). The small bump right before that epoch is due to the final transition from altered to original data. The test error of NFT is higher than the one of CT when data are altered, as expected, while it becomes lower when the auxiliary network is dropped. On the other hand, fitting the training data is easier during the early epochs in NFT, due to the simplification process. We also evaluated the sensitivity of the system to some hyper-parameters of NFT, keeping the main network fixed. In Fig.~\ref{historight}, we report the test error of CNN-A on the \textsc{mnist-back-image} dataset, for different configurations of $\eta_{max}$, $\frac{\gamma_{max\_{simp}}}{\gamma_{max}}$, $n_f$, $\beta$. In particular, after having selected a sample run that is representative of the general trend we observed in the experiments, we changed one of the aforementioned parameters and computed the error rate. Large values of $\eta_{max}$ reduce the freedom of the auxiliary network in learning the transformation function. \begin{figure} \caption{Training and test error rates for NFT and CT on a single run -- \textsc{mnist-back-image}, CNN-A (best viewed in color). The auxiliary network is dropped at epoch $175$.
The training error of NFT is initially lower than in the case of CT, since the auxiliary network simplifies the data. In contrast, the test error is initially larger, since the test set is not simplified. As training proceeds, the simplification vanishes and the test data become aligned with the training ones.} \label{histoleft} \end{figure} \begin{figure} \caption{Test error under different configurations of the NFT hyper-parameters, CNN-A architecture. } \label{historight} \end{figure} Similarly, a short developmental plan with a small $\frac{\gamma_{max\_{simp}}}{\gamma_{max}}$ does not allow the main network to benefit from the progressively simplified data. In general, we did not observe a very significant sensitivity to variations of $n_f$: $64$ features turned out to be adequate in most of the experiments, with some cases in which moving to $96$ was slightly preferable, as in the one shown in Fig.~\ref{historight}. Exploring a fine-grained grid of values, we found that larger $\beta$ helped the auxiliary network to develop meaningful transformations more quickly. As a side note, we report that NFT was $\approx 1.5\times$ slower than CT, on average--see Sec.~\ref{sec:method}; performance optimization was outside the scope of this work. \section{Conclusions and Future Work} \label{sec:conclusions} In this paper, we presented a novel approach to Friendly Training, according to which training data are altered by an auxiliary neural network in order to improve the learning procedure of a neural network-based classifier. Thanks to a progressive developmental plan, the classifier implicitly learns from examples that better match its current expectations, reducing the impact of difficult examples or noisy data during early training. The auxiliary neural network is dropped at the end of the training routine. An extensive experimental evaluation showed that Neural Friendly Training leads to classifiers with improved generalization skills, outperforming vanilla Friendly Training, in which an example-wise perturbation is estimated in an iterative manner. Future work will focus on the investigation of different developmental plans and on the evaluation of the impact of Neural Friendly Training in terms of robustness to adversarial examples. \end{document}
arXiv
Journal of Computational Surgery: Computing, Robotics, and Imaging for the Surgical Platform
Vertebra segmentation based on two-step refinement
Jean-Baptiste Courbot, Edmond Rust, Emmanuel Monfrini & Christophe Collet
Journal of Computational Surgery volume 4, Article number: 1 (2016)
Abstract
Knowledge of vertebra location, shape, and orientation is crucial in many medical applications such as orthopedics or interventional procedures. Computed tomography (CT) offers a high contrast between bone and soft tissues, but automatic vertebra segmentation remains difficult. Indeed, the wide range of shapes, aging and degenerative joint disease alterations, as well as the variety of pathological cases encountered in an aging population, make automatic segmentation sometimes challenging. Besides, daily practice implies a need for affordable computation time. This paper presents a new automated vertebra segmentation method (using a first bounding box for initialization) for 3D CT data which tackles these problems. This method is based on two consecutive steps. The first one is a new coarse-to-fine method that efficiently reduces the amount of data to obtain a coarse shape of the vertebra. The second step consists in a hidden Markov chain (HMC) segmentation using a specific volume transformation within a Bayesian framework. Our method does not introduce any prior on the expected shape of the vertebra within the bounding box and thus deals with the most frequent pathological cases encountered in daily practice. We evaluate this method on a set of standard lumbar, thoracic, and cervical vertebrae, on a public dataset, on pathological cases, and in a simple integration example. Quantitative and qualitative results show that our method is robust to changes in shape and luminance and provides correct segmentation in pathological cases.
Background
Primitive bone tumors such as osteoid osteoma, metastatic lesions, degenerative disorders such as arthritis or vertebral body collapse, and traumatic injuries can affect one or several vertebrae. Diagnosis and characterization of these spine lesions rely on medical imaging. Computed tomography (CT) remains one of the first-line imaging procedures. This cross-sectional imaging technique discriminates tissues according to their densities and provides a good contrast between bones, surrounding organs, and soft tissues. However, identification of vertebrae can be difficult. Even if vertebrae vary in shape and orientation along the spine, these variations can be slight between two neighboring elements of the backbone, making assessment of the exact level sometimes challenging. A precise knowledge of vertebra location, shape, and orientation is however essential. Hence, an imaging follow-up of spine lesions requires a precise identification of the affected levels and, consequently, a reliable identification of the vertebrae. The same considerations are relevant in the case of multi-modality imaging, that is to say supplementary spinal imaging procedures (e.g., bone scan with SPECT/CT, 18F-fluoride PET/CT) performed so as to allow a better characterization of lesions or tumor burden. This is even more crucial for preoperative planning and for interventional radiology treatments. Vertebra segmentation and identification are therefore a key issue for many medical applications.
Beside their ambiguous shapes and boundaries, one of the major concern in a segmentation perspective is the varying vertebrae neighborhood and shapes in a single patient, which led to the development of region-specific methods. Another important problem for clinical application is the eventuality of pathological cases, which is not always taken into account in previous works. This is challenging because of the wide range of diseases, e.g., on CT scans a spine lesion can affect the vertebra local shape (primitive tumor), global shape (scoliosis, fused vertebrae, degenerative disorders), or the intensity of some regions (hyper- or hypo-dense tumors). On top of that, one has to consider that a reliable vertebra segmentation method is one of the requirements needed to perform further advanced processing such as efficient image registration. Medical image segmentation methods can be divided in three types: the iconic, the texture-based, and the edge-based methods [1]. Iconic methods rely directly on voxel intensities and include amplitude segmentation (e.g., thresholding) and region-based methods [2]. Texture-based methods rely on local operators [3] to describe and discriminate objects along their apparent texture. Edge-based methods use more abstract descriptors to constrain the shapes and boundaries. As mentioned in [4], vertebra segmentation is a challenging problem since vertebrae are inhomogeneous in intensity and texture and have complex shapes, which make traditional segmentation techniques inefficient to the problem. The vast majority of recent methods dedicated to vertebrae segmentation are edge-based and rely on deformable models performing an adaptation of prior data, such as templates or statistical atlases, to the vertebrae volume. For example, [5, 6] use a prior statistical shape model [7] as an initialization followed by a rigid or non-rigid registration, and in [4, 8, 9], the authors use a shape-constrained deformable model to fit a prior mesh into the data. Nevertheless, two main key issues limit these works: The algorithms use complex shape description and processing, which dramatically increase global processing time. Methods are validated on a limited set of vertebrae in terms of scope (lumbar, thoracic, or cervical) and healthiness (middle-aged patient, healthy cases). We assume in the following that vertebrae are properly isolated in bounding boxes, delimited roughly by their inter-vertebral disk and corresponding mean planes. Several methods of vertebra localization can produce such planes, such as the works presented in [4, 8, 10]. Since spine partition is not in the scope of this paper, the volume extractions are made manually. Therefore, this work focuses on the segmentation of vertebral elements contained in the volume, which may include parts of neighbor vertebrae. Our method overcomes limitations listed above, since it does not rely on prior shape information or on complex shape descriptors. To restrain the computation time, we propose a coarse segmentation algorithm which drops voxel clusters from the data volume. This first step of the segmentation is built on the basis of a coherent voxel cluster statistical testing and is therefore robust to local and global luminance change. This coarse segmentation step will be referred as "Carving". The second step aims at discriminating the two classes in the remaining volume within a robust Hidden Markov Chain (HMC) framework and thus performs coherent voxel-level segmentation. 
No shape priors are introduced in the algorithms; thus, the method can deal with any type of standard vertebrae from lumbar to cervical as well as with most of the non-standard cases one can expect in clinical context. In the Sections "Coarse segmentation" and "Fine segmentation based on HMC modeling" the two-pass segmentation algorithm is described. The Section "Results" explains the experiments and the results obtained with the proposed method. A discussion on the results is given in the Section "Discussion and conclusion" as well as a conclusion on the method. Coarse segmentation In the context of medical image processing, coarse-to-fine methods are mostly used to perform fast registration (see, e.g., [11] or [9]) as they reduce computation time. On the other hand, image clustering is a well-known tool grouping individual elements (i.e., voxels) following a specific similarity criterion [12] (based on a given distance metric) and thus produces consistent high-level elements. In our case, it is desirable to combine both approaches to rapidly ensure a first accurate and consistent estimation of the anatomical vertebral volume. Therefore, a new algorithm is introduced to perform a coarse segmentation of the data volume within a previously delimited bounding box [4, 8, 10]. It processes the volume layer by layer iteratively, following three steps (see Fig. 1): The layer construction consists in selecting the external layer to be processed from the volume of interest in the current iteration. Flowchart of the coarse segmentation algorithm The layer clustering produces clustered voxels with a joint space-luminance criterion. The cluster selection tests if the clusters should be rejected or included in the final volume. The three steps are repeated until the volume is completely processed within the initial bounding box. Layer construction This step isolates the external layer of voxels on which the further processing will be applied. The first layer is defined by its depth I 1 from the borders of the volume. Given the boundary from the previous step, the following layers cover both an inner part of depth I j and an outer part of height O j , j>1. The layer are isolated with mathematical morphology operator. More precisely, let \(\hat {V}_{j-1}\) be the binary partially segmented volume obtained at the step (j−1) and R(a) be a ball structuring element of radius a. The layer V j at the j-th iteration is then defined as: $$ \begin{aligned} V_{1} &= V_{0} - V_{0} \ominus R(I_{1}) \\ V_{j} &= V_{j-1} \oplus R(O_{j}) - V_{j-1} \ominus R(I_{j}) ~\forall j>1 \end{aligned} $$ V 0 is the initial bounding box, and the operators ⊕ and ⊖ stand for morphological dilatation and erosion, respectively1. The external layer is used to avoid artifacts creation by processing the layer boundary regions at least twice. Figure 2 illustrates the layer construction step. 2D illustration for layer construction at the k-th iteration. Shaded parts are not processed at this step and are already excluded (outer part) or not proceeded yet (inner part) Layer clustering We develop a clustering method based on the simple linear iterative clustering (SLIC) method proposed by Achanta et al. [13]. The authors presented a clustering method for color images we generalize in the 3D gray level case. Thereafter, it will be referred as "SLIC-3D". In the color image case, a pixel i can be defined by its Cartesian coordinates (x i , y i ) and the L∗a∗b intensities (l i ,a i ,b i ). 
In [13], the authors combine the two representation spaces in one distance using two weighting parameters called m and S: m is used to balance the contributions of the color distance with respect to the Euclidean distance and S stands for the number of pixels a super-pixel is expected to contain. The SLIC algorithm proposed in [13] consists in clustering the pixels in order to approximately minimize for each pixel its combined distance to the cluster centroid. We propose on a similar principle a clustering algorithm addressing 3D data. The Euclidean distance covers then a 3D space and the luminance of a voxel stands for the color channel. As CT data is processed, its intensity is expressed in Hounsfield Units (HU) corresponding to X-ray absorption ratio of organic tissues with respect to water [14]. A given voxel i is then represented by its spatial coordinates (x i ,y i ,z i ) and its luminance l i . For a given voxel cluster k containing |k| elements, we define its centroid C k as its mean value along the four features: $$ C_{k} = [x_{k}, y_{k}, z_{k}, l_{k}]^{T} = \frac{1}{|k|}\sum_{i \in k}[x_{i}, y_{i}, z_{i}, l_{i}]^{T} $$ Then, a mixed distance D m combining the four features between a cluster centroid C k and a voxel i is given by: $$ D_{m}(C_{k},i) = \sqrt{\left(\frac{d_{c}(C_{k},i)}{m}\right)^{2} + \left(\frac{d_{s}(C_{k},i)}{S}\right)^{2}} $$ where d s is the 3D Euclidean distance and d c is a luminance distance between C k and i: $$ \begin{aligned} d_{s}(C_{k},i) &= \sqrt{(x_{k} - x_{i})^{2} + (y_{k} - y_{i})^{2} + (z_{k} - z_{i})^{2}} \\ d_{c}(C_{k},i) &= |l_{k} - l_{i}| \end{aligned} $$ The cluster centroids are initialized on a regular cubic grid of size S. For each cluster, the algorithm processes a cubic 2S×2S×2S region centered on the centroid spatial coordinates. Each voxel in the region closer to the cluster centroid than to its current cluster centroid is then re-labeled. Finally, the cluster centroids are updated, and the procedure can be repeated for a few iterations. The SLIC-3D procedure is summarized in Algorithm 1. Cluster selection The clusters from a given layer need to be tested to assess if they belong to the vertebra volume. Since vertebrae are mainly bone tissue and have typical luminance in CT volume, the test is built with the mean luminance l k of each cluster k. Thus, voxels are accepted or rejected by cluster, avoiding to deal with local irrelevant voxel variations. Furthermore, the test must be robust to changes in the vertebrae to proceed. Thus, simple test such as thresholding are not satisfying and a more elaborate procedure is needed. This is why an adaptive test is developed to ensure robust and consistent clusters acceptance or rejection. The test is based on the statistical region merging approach, proposed by Nock and Nielsen [15]. Whereas the authors use the test for a pixel pair set, we propose to test the clusters with respect to a reference luminance l 0 corresponding to the typical bone luminance in CT scans. For a given cluster k, let a first bone merging predicate be: $$ \mathcal{P}_{0}(k): |l_{k} - l_{0} | \leq b(k) $$ where b(·) is a merging threshold [15]: $$ b(k) = g \sqrt{\frac{1}{Q|k|} ln \left(\frac{1}{\delta} \right)} $$ where |k| is the number of voxels in the cluster k, g is the grey level range, and δ is the acceptable probability of error for the predicate. Q stands for the expected number of underlying independent random variables (r.v.) 
for the current region, and according to [15], it allows to quantify its statistical complexity. The bone merging predicate can then be stated as "accept the cluster k if |l k −l 0| < b(k)". Furthermore, an alternative is added to the test: vertebrae being compact objects, each interior cluster is necessarily part of the result. A predicate is needed to assert for a given cluster, the interiority of its neighbors. We define as interior to the remaining volume any point closer to the remaining volume center v=(x v ,y v ,z v ) than the centroid of the supervoxel. Then, an interiority predicate is built for a given cluster k and any of its neighbor k ′: $$ \mathcal{P}_{I}^{k}(k'): d_{s}(C_{k'},v) \leq d_{s}(C_{k}, v) $$ where C k and \(C_{k^{\prime }}\phantom {\dot {i}\!}\) are the centroids of the clusters k and k ′, respectively, and d s is the Euclidean distance defined in (4). Finally, the two predicates (5) and (7) are combined in the following vertebral predicate to test a cluster k: $$ \mathcal{P}(k): \mathcal{P}_{0}(k) \text{~or~} \mathcal{P}_{I}^{k'}(k) ~ \forall k' \text{~such as} \left\{\begin{array}{l} k \text{~and~} k' \text{are neighbor,} \\ \mathcal{P}(k') \text{~is valid.} \end{array}\right. $$ This predicate allows an efficient selection of voxel cluster. Figure 3 illustrates the whole coarse segmentation step. Graphical summary of a coarse segmentation step. Gray regions are not proceeded at this step, orange regions are the current layer and include the brown vertebrae region. White limits represent cluster boundaries. a Whole slice with the zoom-in region b in the dotted limits. c Result from the SLIC-3D clustering. d Possible outcome of the selection step with only the bone merging predicate (5). Blue accepted clusters, red rejected clusters. e Neighborhood search example for the two light-gray clusters f Expected outcome of the vertebral predicate (8) Model and parameters The model requires calibrating several parameters. They were evaluated on a first set of 12 lumbar thoracic and cervical vertebrae. Motivations are given below: Equation (1) defining the layer construction and the SLIC-3D method (Algorithm 1) both depend on a size parameter. We choose to define them only with the S parameter from SLIC-3D, meaning that we link the layer depth to the expected size of supervoxels. Thus, this parameter quantifies both the depth of the current layer and the scale of the supervoxel to exclude. We choose the two first values of S to be higher than the latter ones, as we process in a coarse-to-fine fashion. S is given in millimeters to ensure isotropy between axes and between scans. The m parameter used in clustering (Algorithm 1) mostly defines the clusters shape between a spatial regularity and an intensity regularity. We use decreasing values of m along iterations to exclude first spatially coherent and then intensity-coherent clusters. The statistical parameters g, Q, and δ used for cluster selection (Predicate 5) can be computed automatically on the basis of the cluster to proceed: g is the range of the current layer intensity and Q must be set lower than g to reduce the expected complexity of the bone merging predicate. The error probability δ can be fixed arbitrarily, e.g., as the inverse of the cardinal of the cluster. The reference intensity l 0 (Predicate 5) is the intensity of typical bone in CT scans and is provided by an expert. The algorithm is built on the three steps previously detailed and processes the volume iteratively. 
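As an illustration, the bone merging test at the core of the cluster selection (Predicate (5)) reduces to a few lines of code. The following NumPy sketch is indicative only (the reference implementation was written in Matlab); it assumes, as described above, that g is the intensity range of the current layer, that Q is fixed below g, and that the error probability delta is taken as the inverse of the cluster cardinality:

import numpy as np

def bone_merging_predicate(cluster_intensities, l0, g, Q):
    # cluster_intensities: HU values of the voxels of cluster k
    # l0: reference bone intensity; g: grey level range; Q: statistical complexity parameter
    k = np.asarray(cluster_intensities, dtype=float)
    delta = 1.0 / k.size                                  # acceptable error probability
    b = g * np.sqrt(np.log(1.0 / delta) / (Q * k.size))   # merging threshold b(k)
    return abs(k.mean() - l0) <= b                        # Predicate (5)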
For a volume of height h, the number of iterations J is given by2: $$ J = 2 + \left\lceil \frac{h - S_{1} - S_{2}}{S_{j}} \right\rceil $$ where ⌈·⌉ is the ceiling operator, S 1, S 2, and S j stands respectively for the two first values of S and its value for any iteration j>2. The model is tolerant to parameter variations, as long as their order from coarse to fine is preserved. Note that variations of the S parameters may produce changes in terms of computation time, since it influences the number of iteration to proceed in (9). In all cases, we observed convergence towards similar results. The entire coarse segmentation method that we called Carving, as well as the parameters value, are summarized in Algorithm 2. Figure 4 illustrates the results obtained at this step. Result of the coarse segmentation step for a L3 vertebra. The three sectional views are the sagittal, axial, and coronal middle slice of the source volume. The result is represented by its red superimposition and the 3D interpolation of the coarse segmentation volume The result of the coarse segmentation is very nice given the expectation: the first need is to reduce the data amount to proceed, which is efficiently done. The algorithm actually does more than data volume reduction since the results already have the shape of the underlying vertebrae. However, this step alone remains too coarse, and we have now to use a finer segmentation to perform a voxel-level classification of the remaining volume. Fine segmentation based on HMC modeling The coarse segmentation results obtained at the previous section are smaller than the initial volume and include most of the anatomical vertebrae volume. However, to allow an efficient final segmentation, we need to have in the volume enough voxels of the two classes to separate. Thus, a region of interest (ROI) is built based on the previous coarse result. It is a morphological dilation with a ball structuring element of radius 10 mm. In this section, this ROI will be processed, as it preserves the expected shape of the vertebra and includes enough non-vertebral voxels to allow automatic separation. We are interested in a robust, voxel-wise segmentation method. The Bayesian framework meets these requirements and offers a consistent statistical modeling for the segmentation of an image into classes. When processing images or volumes, Hidden Markov Random Field (HMRF) [16] modeling often provides good results because the model do consider spatial relationships. However, using HMRF can be computationally time-consuming. This is mainly due to the sampling needed to perform estimations from an analytically unknown distribution. On the other hand, the classical HMC framework, while having the advantages of Bayesian segmentation, does not have the drawbacks of HMRF. It provides faster computations, and we use it with a specific volume transformation to preserve the most important spatial features. The Baum-Welch algorithm [17] is used for segmentation, based on parameters estimated with the Stochastic Expectation-Maximization (SEM) [18] method. Volume transformation First of all, the 3D data needs to be transformed to obtain a one-dimensional chain. This point must be carefully considered, since while it permits fast computation, it introduces an artificial 1D order in 3D data and thus uses only 2 out of 26 neighbors for each voxel. 
A 3D volume can be transformed by sweeping each line, column, and row from first to last but this transformation induces too much distortion to the original data structure. Another alternative is the Hilbert curve [19], which is known to be successful for transforming 2D or 3D images into chains (see, e.g., [20–22]). The resulting chain is more spatially regular; however, it creates artifacts in the HMC segmentation because the chain requires having relatively few state transitions to produce a smooth segmentation estimate. The Hilbert curve path does not ensure this; therefore, in this section, a new volume-to-chain transformation is introduced, relying on the shape information obtained at the coarse segmentation step. The volume is processed by slices. Given the symmetries of a vertebra, horizontal slices are retained: the axis of our spirals is axial3. For each slice, a spiral along concentric perimeters of the ROI section is built. The spiral path goes alternatively inward and outward, so that consecutive spiral extremities are spatially close from one slice to another. Algorithm 3 summarizes the process, which is illustrated in Fig. 5. Spiral transform illustration. The shaded regions represent the ROI sections, and the red line follows the chain path. a Example of a 10×10 pixel slice. b Example for three consecutive slices Forward-backward algorithm This algorithm allows the computation of posterior densities required for segmentation. Let N be the length of the chain obtained from the spiral transform. X=(X 1,…,X N ) and Y=(Y 1,…,Y N ) are respectively the random variables sequences representing the spiral transformation of the class volume and the observed volume. We will note x=(x 1,…,x N ) and y=(y 1,…,y N ) their respective realizations. The class volume elements take their values in Ω={ω 0,ω 1} since we want to discriminate vertebral (ω 1) from non-vertebral (ω 0) elements. The observed volume voxels remain in Hounsfield Units, typically in a range of [ −2000, 2000] HU. For clarity, we note p(x n ) and p(x) instead of p(X n =x n ) and p(X=x), respectively, and likewise for the Y process. We assume that (X,Y) is a HMC with independent noise (HMC-IN). 
The following properties are verified: X is a Markov chain: $$ p(\boldsymbol{x}) = p(x_{1}) p(x_{2}\,|\,x_{1}) \ldots p(x_{N}\,|\,x_{N-1}) $$ The (Y n )1≤n≤N are conditionally independent with respect to X: $$ p(\boldsymbol{y}\,|\,\boldsymbol{x}) = \prod_{n=1}^{N} p(y_{n}\,|\,\boldsymbol{x}) $$ The noise independence is verified : $$ p(y_{n}\,|\,\boldsymbol{x}) = p(y_{n}\,|\,x_{n}) ~ \forall n \in \left\{ 1, \ldots, N \right\} $$ The previous points lead to the following expression for the joint (X,Y) probability distribution: $$ p(\boldsymbol{x}, \boldsymbol{y}) = p(x_{1}) p(y_{1}\,|\,x_{1}) \prod_{n=2}^{N}p(x_{n}\,|\,x_{n-1}) p(y_{n}\,|\,x_{n}) $$ Then the classical forward-backward decomposition [17] of the posterior marginal probability yields: $$ \xi(x_{n}) = p(x_{n}\,|\,\boldsymbol{y}) = \frac{\alpha(x_{n}) \beta(x_{n})}{\sum\limits_{\omega \in \Omega} \alpha(\omega) \beta(\omega)} ~ \forall n \in \left\{ 1, \ldots, N \right\} $$ Where α and β are respectively the forward and backward probabilities for n∈{1,…,N}: $$ \begin{aligned} \alpha(x_{n}) &= p(x_{n}, y_{1}, \ldots, y_{n}) \\ \beta(x_{n}) &= p(y_{n+1}, \ldots, y_{N} \,|\,x_{n}) \end{aligned} $$ Both α and β can be computed using the following recursions: $$ \begin{aligned} \text{\textit{Initializations}:} &\left\{ \begin{array}{ll} \alpha(x_{1}) = p(x_{1}, y_{1})\\ \beta(x_{N}) = 1 \end{array}\right.\\ \text{\textit{Inductions}: }\\ \forall 1 \leq n \leq N-1 : &\left\{\begin{array}{ll} \alpha(x_{n+1}) = \left(\sum\limits_{\omega \in \Omega} \alpha(\omega) p(x_{n+1} \,|\, \omega) \right) p(y_{n+1}\,|\,x_{n+1})\\ \beta(x_{n}) = \sum\limits_{\omega\in \Omega} \beta(\omega) p(\omega \,|\, x_{n}) p(y_{n+1}\,|\,\omega) \end{array}\right.\\ \end{aligned} $$ The posterior marginal probabilities (14) can then be computed, and the maximum posterior mode (MPM) class estimation [23] yields: $$ \forall 1 \leq n \leq N ~~\hat{x}_{n}= \textit{{arg}} ~ \underset{\omega \in \Omega}{max}~ p(X_{n} = \omega \,|\, \boldsymbol{y}) $$ The distribution in (16) requires the knowledge of noise and model parameters. In an unsupervized segmentation framework, they must be estimated. An estimation method is reported in the next section. Parameter estimation The parameters from Eq. (13) need to be estimated to perform the class estimation. We note: $$ \begin{aligned} p(X_{1} = \omega_{i}) &= \pi_{i} \\ p(X_{n} = \omega_{j} \,|\, X_{n-1} = \omega_{i}) &= \pi_{ij} ~ \forall n \in \left\{ 1, \ldots, N \right\} \end{aligned} $$ Note that the π parameters do not depend of n since we supposed the HMC to be homogeneous. We use the SEM algorithm [18] to perform the parameter estimation. SEM is chosen over its determinist counterpart EM [24] for robustness reasons, since the algorithm needs to efficiently deal with non-standard cases (e.g., noise, pathologies, artifacts) and to avoid local extrema convergence. We assume that we are in the case of a Gaussian mixture: $$ p(y_{n}\,|\,X_{n}=\omega_{i}) \sim \mathcal{N}(\mu_{i}, \sigma_{i})\text{~with }i \in \left\{ 0, 1 \right\} \text{for~} n \in \left\{ 1, \ldots, N \right\} $$ The set of parameter to estimate is Θ={π ij ,π i ,μ i ,σ i } for i,j∈{0,1}. With complete data (x,y), one can perform the estimation of Θ with the maximum likelihood (ML) estimators. Complete data are however unavailable. This is why SEM iteratively provides simulations of x along posterior distributions. The SEM algorithm requires an accurate initialization to ensure fast parameter estimation convergence. 
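Before detailing the initialization, note that the forward-backward recursions and the MPM rule (17) are compact enough to be written directly. The NumPy/SciPy sketch below is an illustrative two-class implementation with per-step normalization for numerical stability; the model parameters are assumed known here, whereas in practice they are estimated by SEM, and this is not the code used in our experiments:

import numpy as np
from scipy.stats import norm

def mpm_segmentation(y, pi0, A, mu, sigma):
    # y: 1D chain of intensities (spiral-transformed volume)
    # pi0: initial class probabilities (2,); A: transition matrix, A[i, j] = p(x_{n+1}=j | x_n=i)
    # mu, sigma: Gaussian noise parameters of the two classes
    N = len(y)
    B = np.stack([norm.pdf(y, mu[i], sigma[i]) for i in range(2)], axis=1)  # p(y_n | x_n)
    alpha = np.zeros((N, 2))
    beta = np.ones((N, 2))
    alpha[0] = pi0 * B[0]
    alpha[0] /= alpha[0].sum()
    for n in range(1, N):                       # forward recursion
        alpha[n] = (alpha[n - 1] @ A) * B[n]
        alpha[n] /= alpha[n].sum()
    for n in range(N - 2, -1, -1):              # backward recursion
        beta[n] = A @ (beta[n + 1] * B[n + 1])
        beta[n] /= beta[n].sum()
    xi = alpha * beta                           # posterior marginals (14), up to normalization
    return xi.argmax(axis=1)                    # MPM decision (17)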
At this step, we split the process: in some known cases, the volume can include air elements, which are clearly distinct from both soft tissues and bones. To avoid wrong class clustering, we use for the initialization a set of reference parameters obtained from other vertebra of the same patient without air in the neighborhood. This allows correct calibration with respect to the patient and the scanner, while avoiding wrong class clustering. In any other cases, we use a simple initialization where μ 0=0.25, μ 1=0.75, and σ 0=σ 1 are estimated through the ML estimator from the whole sequence y and π ij =π i =0.5 ∀i,j∈{0,1}. These initializations are choosen to ensure class separation and avoid reliance on other algorithm(s) convergence (e.g., the K-means algorithm [25]). For the simulation (S) step, one needs to compute the posterior transition probabilities, given by: $$ \begin{aligned} p_{\boldsymbol{\Theta}^{k}}(\boldsymbol{x} \,|\, \boldsymbol{y}) &= p(x_{1} \,|\, \boldsymbol{y}) \prod_{n=2}^{N} p(x_{n}\,|\,x_{n-1}, \boldsymbol{y}) \\ p(x_{n}\,|\,x_{n-1}, \boldsymbol{y}) &= \frac{p(x_{n} \,|\, x_{n-1})p(y_{n}\,|\,x_{n}) \beta(x_{n})}{\sum\limits_{\omega \in \Omega} p(\omega \,|\, x_{n-1})p(y_{n}\,|\,\omega) \beta(\omega)} ~\forall 2 \leq n \leq N \end{aligned} $$ An additional step in the original SEM procedure is introduced to produce a convergence measure for the parameters estimate. Since the individual π, μ, and σ parameters differ in nature, we cannot use a direct Euclidean distance comparison between two consecutive estimations of Θ. However, the forward-backward class estimation provides for a given Θ and fixed observations y a determinist result. The convergence between two SEM steps is then estimated through the variations between consecutive on-the-fly forward-backward estimates based on the consecutive parameter estimation. The convergence rate between two consecutive parameter estimations Θ k and Θ k−1 is computed together with their corresponding forward-backward estimates \(\hat {\boldsymbol {x}}^{k}\) and \(\hat {\boldsymbol {x}}^{k-1}\), respectively, as: $$ \epsilon = \frac{1}{N} \sum_{n=1}^{N} \left(\hat{x}_{n}^{k} - \hat{x}_{n}^{k-1}\right). $$ We assume that the SEM algorithm has converged when ε<1 %. This choice allows performing the algorithm in a small number of iterations: typically less than 15 iterations are needed. Setting a smaller ε increases the global processing time and does not provide noticeable improvement to the result. The adapted SEM algorithm is summarized in Algorithm 4. HMC segmentation algorithm First, the segmentation algorithm transforms the volume along a spiral path adapted to the coarse shape. The parameters of the mixture are then estimated with the SEM algorithm. Once the mixture estimation is done, posterior marginal probabilities (14) can be computed to find the MPM estimation (17) of the observed vertebra. Finally, the segmented volume is re-built along the initial chain path. Algorithm 5 summarizes the segmentation procedure, and Fig. 6 illustrates the result for the segmentation following the coarse result from Fig. 4. Result of the segmentation step for a L3 vertebra. It follows the coarse result presented in Fig. 4, with the same legend Complementary results are presented in Fig. 7. The gain of the HMC segmentation step is clear from the 3D interpolations: the results match our expectations of the vertebral volumes and does not include processing artifacts. 
Note that while the HMC excludes the inner part of the vertebral body for some lumbar vertebrae (e.g., the vertebra from Fig. 6), the border boundaries are well separated. This is not a key topic for a localization purpose, but other goals (e.g., biomechanical modelling) may require some post-processing in these cases. Further extensive and comparative results are presented in the next section, as well as robustness examples and whole-spine segmentations. 3D interpolations of the coarse and fine segmentation results. The first and second rows correspond to lumbar (L4) and thoracic (T11) vertebrae processing, respectively. The first column represents the result of the Carving method (Algorithm 2), and the second column contains the results after the HMC segmentation (Algorithm 5). Note that the observed granularity corresponds to the voxel size, which is the minimal size addressed in this work In this section, the method performance is evaluated. First, the method is qualitatively evaluated on a set of 339 standard vertebrae acquired in daily practice. Then, quantitative results on manually segmented data are reported. Pathological cases make then the robustness evaluation possible. Finally, we provide simple integration examples with a segmentation of the full spine. Standard cases: qualitative results The method is evaluated on a set of vertebral volumes from the whole spine of 15 consecutive patients in an oncologic tertiary center, with exclusion of patients with bone tumors or metastatic spine involvement. Patients had a mean age of 63 and presented degenerative joint alterations and some osteoporotic changes, reflecting most of the situations encountered in daily practice; 339 vertebral volumes were extracted and evaluated, meaning that almost all patients' vertebrae were tested. Each volume is applied successively Algorithm 2 for coarse segmentation and Algorithm 5 for fine segmentation. For the sake of comparison, the K-means algorithm is used as a benchmark since it has common ground with the proposed method: it processes the voxel intensities and has no prior on the volume shape. Since the two-class K-means classification fails when air is present in the volume, we use a sub-sample in which no air is present to provide a more accurate comparison. For this data, there is no available ground truth (e.g., manual segmentation). Therefore, one must resort to qualitative evaluation. We define, in a similar fashion than in [4], the following ranking: Excellent (100): the vertebra is exactly delimited inside its bounding box. Good (75): most of the anatomical structure is covered, but some voxels are segmented out. Bad (50): the vertebra is recognizable but noticeable part are missing from the result. Poor (25): the vertebra is not recognizable enough. Fail (0): the segmentation fails to proceed. The results were visually inspected by an expert with respect to these criteria. Table 1 summarizes the results obtained for the proposed method and the K-means method. Considering that both good and excellent results provide sufficient data for vertebrae segmentation and further advanced processing, our method provides about 78 % of successful results on the subsample where K-means gives 72 % of successful results, whereas on the full sample, the method yields 67 % of successful result and the 2-class K-means fails on the remaining 161 volumes including air (yielding thus an average score of 37.76 % on the full sample). 
Keeping in mind that the sample originates from daily routines and includes a significant proportion of minor pathologies, these results are of significant interest for clinical use. Table 1: Results for the subsample without air and for the full sample. This partition provides accurate comparative results on the sub-sample. The algorithms were developed and tested using Matlab on an Intel i5 (2.6 GHz) on one core, without specific optimization of the code. The processing time is 36 s per vertebra on average, with 10 s for the Carving step and 26 s for the HMC step. This processing time depends on the size of the vertebra to segment, the average total time being 71.4 s for lumbar vertebrae and 19.8 s for cervical vertebrae. For a daily practice implementation, the use of the C/C++ language is expected to provide a gain of at least a factor of 10 in processing time. Our method yields satisfying results. Notably, the bad and poor results provided by the method are mainly encountered in levels with either pronounced degenerative joint disease alterations or marked osteoporosis with consequent low contrast (see Fig. 8). In such cases, our method provides better results than the K-means segmentation, supporting the evidence that our proposal is more reliable on data reflecting daily practice. In only 6.5 % of the cases did the K-means segmentation provide better results than our method. Fig. 8: Illustrative comparison between K-means segmentation (a) and the proposed method (b) for the L3 vertebra in a patient with marked osteoporosis; K-means segmentation is rated 50, whereas the proposed segmentation is rated 75. The results presented here cover the standard volumes mostly encountered in practice, but did not yield voxel-wise error rates. The next section presents a quantitative evaluation of the method with respect to manual segmentations. When considering vertebral segmentation in CT images, there are few available complete datasets allowing a quantitative evaluation of a method. We use the dataset presented at the CSI 2014 challenge [26], available on the public SpineWeb platform [27]. This dataset contains 10 spine scans from a trauma center, acquired during daily clinical routine work. Patients were aged 16 to 35, and the scans covered thoracic and lumbar vertebrae in most cases. A total of 175 segmented volumes were extracted from the dataset. The performance is measured as the rate of correctly classified voxels (true positives and true negatives). Figure 9 summarizes the results with respect to the provided vertebra segmentation. On average, the proposed method yields 89.39±5.54 % correct segmentation. On the same dataset, the K-means segmentation provides 81.65±12.23 % correct segmentation. These results imply that our method provides both good results and a small variability in the output. Note that these results concern only the vertebra of interest: a correctly labeled neighboring vertebra is ignored in the measurements. An illustration of the ground truth compared to a segmentation result is provided in Fig. 10. Fig. 9: Box plots for the quantitative results; the average rates are 0.816 for the K-means segmentation and 0.894 for the proposed method (see text). Fig. 10: Ground truth (first row) and segmentation (second row) on a L1 vertebra; source volume and colored superimposed results are displayed from left to right within the median axial, coronal, and sagittal planes. In this case, the correct segmentation rate is 94.19 %. Errors are either false positives (type I errors) or false negatives (type II).
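For reference, the voxel-wise score used in this section is simply the proportion of voxels on which the binary result and the manual reference agree; in NumPy, for instance:

import numpy as np

def correct_classification_rate(segmentation, ground_truth):
    # Both inputs are boolean 3D arrays of identical shape (vertebra vs. background).
    segmentation = np.asarray(segmentation, dtype=bool)
    ground_truth = np.asarray(ground_truth, dtype=bool)
    return float(np.mean(segmentation == ground_truth))  # true positives + true negatives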
False positive are most encountered in the presence of high-intensity elements, such as calcifications or ribs near thoracic vertebrae. On the other hand, false negatives are either missing voxels at the vertebrae boundary or missing voxels within the vertebral body. These results are satisfying, given that most error sources are known and that specific post-processing could easily remove them. So far, the performances were evaluated on standard cases: the next section presents pathological cases that may be encountered in practice. Pathological cases: robustness evaluation As our method performs correctly on standard case, it has to be evaluated in more difficult situations, namely pathological cases. Since we aim at a clinical implementation, the method robustness to the most frequent non-standard cases is indeed mandatory. We first briefly describe the selected cases and the corresponding challenges, then we provide their corresponding segmentation results and discussion. The two main key points are changes in shape and in intensity of the object to segment. They correspond to anatomical and structural deformations, respectively. Structural changes are related to alterations of bone and medullar matrix, with consequent modification of density and signal intensity in the CT volume. Changes of shape and density can be related to aging alterations. In particular, arthrosis is responsible of spine alterations in a general population, and we selected it as the first specific case (see Fig. 11 a). The frequency and intensity of these modifications is in close relationship with age. After 40 years, hernia, osteophytes, and degenerative joint diseases are commonly encountered. We also selected a hernia case as an instance of common low-intensity structural alteration (see Fig. 11 b). a coronal and b, c axial sectional views of the selected cases. The arrows highlight their specificities (see text for details) On the other hand, many pathologic conditions can lead to bone density variations. For instance, osteoblastic cancerous tumors will increase bone density. On the other side, osteolytic tumoral involvement is associated with bone destruction and is therefore seen as areas of decreased bone density. Finally, treatments—general treatments as chemotherapy or interventional treatments as cementoplasy—can induce bone density alterations. In particular, cementoplasty, which can be described as the interventional introduction of artificial high-density material inside the vertebral body, represents an extreme case of overdensity. This is the third case we retained (see Fig. 11 c). Figure 12 provides the results on the three selected cases. They are discussed below: From Fig. 12 a, it appears clearly that the changes of the vertebral body boundaries do not impede the segmentation result, which is similar to the result presented in Fig. 6. Some imprecisions appears in the 3D interpolation; however, they are minor given the overall result. Results of the proposed segmentation method on the selected particular cases. They include parts of the upper and lower vertebrae as they appear in the bounding boxes. a Arthrosis on a L3 vertebra. Boundaries differs in width and in shape from a typical lumbar vertebrae. b Hernia in a L4 vertebra. Note that some calcifications are also segmented since their intensity is close to the bone intensity, and they are located near the vertebral body. c Cementoplasty in a T12 vertebra The hernia case presents a region which can be seen as a vacuum in the bone material. 
This vacuum is segmented out by the method since it differs from the bone in intensity. However, this particular point does not prevent the method to perform the segmentation correctly. Noteworthy, some inner parts of the vertebral body are included in the segmentation, which is not the case for standard lumbar vertebrae (e.g., Figures 6,12 a). This is due to the relative overdensity induced by the hernia in the surrounding material. Finally, the cementoplasty case represents a more challenging test. It produces indeed an almost homogeneous bright region inside the vertebral body, leading to a global distortion of the data in comparison with the standard cases. Nevertheless, the proposed method successes in providing a correct result (Fig. 12 c), which does not cover the full vertebra volume but does represent most of the underlying vertebra. It also shows that natural overdensities of lower range can be handled by our method, the cementoplasty being one of the most extreme cases. The results presented here show that the proposed method is robust to some of the most frequent particular cases met in clinical context. Furthermore, as it provides a correct result for a challenging case, one can expect it to be robust to most of the lower-intensity specificities. We provide in the next section further results covering the range of all vertebrae for two patients. Integration examples We present in this section examples of full-spine segmentations. Two CT scans acquired in clinical routine were selected: the first one does not present specificities in its spine and can be considered as a standard healthy case. The second one presents vertebral compression, which corresponds to a flattening of the inter-vertebral disk and occurs mostly with age. From each scan, we manually defined the vertebral bounding boxes so as to enclose the vertebral bodies. Each volume is processed separately and placed at its initial location. The final results are presented in Fig. 13. Integration examples on the two selected cases. The whole spines are segmented, with the 7 cervical, 12 thoracic, and 5 lumbar vertebrae. The volume rendering is interpolated from the binary segmentations and vertebral volumes are delimited with different colors, within the same color set. a: healthy spine. b: arthritic spine, with degenerative joint alterations, a thoracic Forestier's disease, and a L4 compression First of all, one can notice on both results that the segmentations include some separation artifacts due to the delimitations of the volumes (initial bounding box). Thus, some vertebral elements have been segmented out from the total result, as they are initially badly delimited. Note also that some non-vertebral elements have been segmented in; this is in particular the case with the L1 vertebra from the second case which present calcifications (as in Fig. 12 b). This also happens with surrounding bones, such as the ribs for thoracic vertebrae and the pelvis for the L5 vertebra. Nevertheless, the segmentation is not impacted by these inclusions and performs correctly. From Fig. 13, the changes in vertebrae shape and size are clear within one patient and also between patients. Indeed, it is noticeable that the seven lower vertebrae shape differs between the two cases. The vertebral compression induces deformations in the observed vertebral bodies; thus, their shape does differ from prior expectations. Note also that the second case presents an arthrosis between T12 and L1, causing vertebral bodies to join. 
Despite these specificities, our method performs correctly on all vertebrae; all of the vertebral sub-structures are clearly segmented. The results show that the proposed method can be successfully integrated within a simple spine processing pipeline and thus can be used in more complex frameworks. Given the results from the previous sections, one can expect our method to perform well in most practical situations, regardless of the vertebra type, position, and specificity. Further discussion of the method is given in the next section.

Vertebra segmentation is a challenging task. The wide range of shapes, the high rate of aging modifications, and the pathologic alterations frequently encountered in real cases explain the difficulty of automatic segmentation in daily-practice patients. Hence, most of the published works about vertebra segmentation seem to be developed and evaluated on ideal data, namely in a young population in which vertebrae are well separated, with CT providing a very high contrast between medullar bone, vertebra boundaries, and soft tissues. Additionally, sometimes only the lumbar spine is evaluated, with a consequent lack of information about the robustness of the presented schemes toward thoracic and cervical vertebrae. This work is part of a larger project on spinal registration for patients presenting bone tumors. The segmentation method we present is thus developed from a real-practice perspective, explaining why we took into account pathologic cases as well as the most frequent aging modifications, without any prior on vertebra shape and luminance. The bounding box initialization can be obtained automatically using state-of-the-art techniques. The presented results fulfill the requirements of automatic bone segmentation prior to registration processes, with an affordable computational time.

1 Note that this operation is similar to a morphological gradient, with two different structuring elements instead of one.
2 We use volume height since it is shorter than depth or width for the vertebral volumes we process.
3 Experiments show that other perimeter-based paths, such as using a coronal axis slice by slice or using a concentric helix, lead to similar results.
ICube, Université de Strasbourg - CNRS, Illkirch, 67412, France
Jean-Baptiste Courbot, Edmond Rust & Christophe Collet
SAMOVAR, Département CITI, CNRS, Évry, 91011, France
Emmanuel Monfrini
Correspondence to Jean-Baptiste Courbot.
JBC developed the segmentation method, conducted the quantitative and specific evaluations, and wrote the manuscript. ER conducted the quantitative experiments, analyzed the results, and helped with the writing of the manuscript. EM and CC helped to design the method and to write the manuscript. All authors read and approved the final manuscript.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Courbot, J., Rust, E., Monfrini, E. et al. Vertebra segmentation based on two-step refinement. J Comput Surg 4, 1 (2016). doi:10.1186/s40244-016-0018-0

Accepted: 27 June 2016

Keywords: Clinical imagery, Automatic vertebra segmentation, Coarse-to-fine modeling, SLIC clustering, Hidden Markov chain
\begin{document}
\title{Approximation of Weighted Automata with Storage}
\begin{abstract} We use a non-deterministic variant of storage types to develop a framework for the approximation of automata with storage. This framework is used to provide automata-theoretic views on the approximation of multiple context-free languages and on coarse-to-fine parsing. \end{abstract}
\section{Introduction}
Formal grammars (e.g. context-free grammars) are used to model natural languages. Language models are often incorporated into systems that have to guarantee a certain response time, e.g. translation systems or speech recognition systems. The desire for low response times and the high parsing complexity of the used formal grammars are at odds. Thus, in real-world applications, the language model is often replaced by another language model that is easier to parse but still captures the desired natural language reasonably well. This new language model is called an \emph{approximation} of the original language model. Nederhof~\cite{Ned00a} gives an overview of the approximation of context-free grammars. In order to approximate a context-free grammar it is common (but not exclusively so \cite{Ned00,Cha+06}) to first construct an equivalent pushdown automaton and then approximate this automaton \cite{KraTom81,Pul86,LanLan87,BerSch90,PerWri91,Eva97,Joh98}, e.g. by restricting the height of the pushdown. Automata with storage \cite{Sco67,Gol79,Eng86,Eng14} generalise pushdown automata. By attaching weights to the transitions of an automaton with storage, we can model, e.g. the \emph{multiplicity} with which a word belongs to a language or the \emph{cost} of recognising a word~\cite{Sch62,Eil74}. The resulting devices are called \emph{weighted} automata with storage and were studied in recent literature \cite{HerVog15,VogDroHer16}. Multiple context-free languages (MCFLs) \cite{SekMatFujKas91,VijWeiJos87} are currently studied as language models because they can express the non-projective constituents and discontinuous dependencies that occur in natural languages \cite{Mai10,KuhSat09}. Their approximation was recently investigated from a grammar-centric viewpoint \cite{BurLju05,Cra12}. MCFLs can be captured by automata with specific storage \cite{Vil02,Den16}, which allows an automata-theoretic view on their approximation. We develop a framework to study the approximation of weighted automata with arbitrary storage. To deal with non-determinism that arises due to approximation, we use automata with \emph{data storage} \cite{Gol79} which allow instructions to be non-deterministic;\footnote{We add predicates to Goldstine's original definition of data storage. This does not increase their expressiveness (\cref{lem:predicate-free-normal-form}).} and we investigate their relation to automata with storage (\cref{sec:nd-storage}). Weighted automata with data storage differ from Engelfriet's automata with storage~\cite{Eng86,Eng14} in two aspects: as instructions we allow binary relations instead of partial functions, and each transition is associated with a weight from a semiring. Using a powerset construction, we show that (weighted) automata with data storage have the same expressive power as (weighted) automata with storage (\cref{lem:implementing-of-nd-storage,lem:implementing-of-nd-storage-weighted}). Our formalisation of strategies for approximating data storage (called \emph{approximation strategies}) is inspired by the storage simulation of Hoare~\cite{Hoa72,EngVog86}.
We use partial functions as approximation strategies (\cref{sec:approximation}). Properties of the approximation strategy imply properties of the whole approximation process: If an approximation strategy is a total function, then we have a superset approximation (\cref{thm:superset-approximation,thm:weighted-approx-types}\ref{item:wt-approx-types:over}). If an approximation strategy is injective, then we have a subset approximation (\cref{thm:subset-approximation,thm:weighted-approx-types}\ref{item:wt-approx-types:under}). In contrast to Engelfriet and Vogler~\cite{EngVog86}, we do not utilise flowcharts in our constructions. We demonstrate the benefit of our framework by providing an automata-based view on the approximation of MCFLs (\cref{sec:approximation-mcfl}) and by describing an algorithm for coarse-to-fine parsing of weighted automata with data storage (\cref{sec:coarse-to-fine-parsing}).
\section{Preliminaries}
The set $\{0, 1, 2, … \}$ of \emph{natural numbers} is denoted by $ℕ$, $ℕ ∖ \{0\}$ is denoted by $ℕ_+$, and $\{1, …, k\}$ is denoted by $[k]$ for every $k ∈ ℕ$ (note that \([0] = ∅\)). Let $A$ be a set. The \emph{power set of $A$} is denoted by $\mathcal{P}(A)$. Let $A$, $B$, and $C$ be sets and let $r ⊆ A × B$ and $s ⊆ B × C$ be binary relations. We denote $\{(b, a) ∈ B × A ∣ (a, b) ∈ r \}$ by $r^{-1}$, $\{ b ∈ B ∣ (a, b) ∈ r \}$ by $r(a)$ for every $a ∈ A$, and $⋃_{a ∈ A'} r(a)$ by $r(A')$ for every $A' ⊆ A$. The \emph{sequential composition of $r$ and $s$} is the binary relation \( r \comp s = \{ (a, c) ∈ A × C ∣ ∃b ∈ B: ((a, b) ∈ r) ∧ ((b, c) ∈ s) \}\text{.} \) We call $r$ an \emph{endorelation (on $A$)} if $A = B$. A \emph{semiring} is an algebraic structure $(K, {+}, {⋅}, 0, 1)$ where $(K, {+}, 0)$ is a commutative monoid, $(K, {⋅}, 1)$ is a monoid, 0 is absorptive with respect to ${⋅}$, and ${⋅}$ distributes over ${+}$. We say that $K$ is \emph{complete} if it has a sum operation $∑_I: K^I → K$ that extends ${+}$ for each countable set~$I$ \cite[Sec.~2]{DroKui09}. Let~$≤$ be a partial order on~$K$. We say that \emph{$K$ is positively~${≤}$-ordered} if $+$ preserves $≤$ (i.e. for each $a, b, c ∈ K$ with $a ≤ b$ holds $a + c ≤ b + c$), $⋅$ preserves $≤$ (i.e. for each $a, b, c ∈ K$ with $a ≤ b$ holds $a ⋅ c ≤ b ⋅ c$ and $c ⋅ a ≤ c ⋅ b$), and $0 ≤ a$ for each $a ∈ K$ (cf. Droste and Kuich~\cite[Sec.~2]{DroKui09}). The \emph{set of partial functions from $A$ to $B$} is denoted by $A \parto B$. The \emph{set of (total) functions from $A$ to $B$} is denoted by $A → B$. Let $f: A \parto B$ be a partial function. The \emph{domain of $f$} and the \emph{image of $f$} are defined by $\dom(f) = \{ a ∈ A ∣ ∃b ∈ B: f(a) = b \}$ and $\img(f) = \{ b ∈ B ∣ ∃a ∈ A: f(a) = b \}$, respectively. Abusing the notation, we may sometimes write $f(a) = \text{undefined}$ to denote that $a ∉ \dom(f)$. Note that every total function is a partial function and that each partial function is a binary relation.
\section{Automata with data storage} \label{sec:nd-storage}
In addition to the finite state control, automata with storage are allowed to check and manipulate a storage configuration that comes from a possibly infinite set. We propose a syntactic extension of automata with storage where the set of unary functions (the \emph{instructions}) is replaced by a set of binary relations on the storage configurations.
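As an informal illustration of this extension (ours, not part of the formal development), the following Python sketch models a storage whose predicates are Boolean tests on configurations and whose instructions map a configuration to the finite set of its successor configurations, i.e. binary relations; the names mirror the pushdown examples given below and are chosen for the sketch only.
\begin{verbatim}
# Minimal sketch: a storage with predicates and non-deterministic
# instructions. A configuration is a string over GAMMA, written with
# its top-most symbol first; an instruction maps a configuration to
# the (finite) set of its successor configurations.
GAMMA = ["a", "b"]

predicates = {
    "all":    lambda w: True,               # trivial predicate
    "bottom": lambda w: w == "",            # empty pushdown
    "top_a":  lambda w: w.startswith("a"),  # top-most symbol is 'a'
}

instructions = {
    "stay":   lambda w: {w},
    "pop":    lambda w: {w[1:]} if w else set(),
    # non-deterministic: push an arbitrary symbol of GAMMA
    "push_G": lambda w: {g + w for g in GAMMA},
}

def successors(w, p, r):
    """Configurations reachable from w by instruction r, guarded by predicate p."""
    return instructions[r](w) if predicates[p](w) else set()

if __name__ == "__main__":
    print(sorted(successors("", "all", "push_G")))   # ['a', 'b']
    print(sorted(successors("ab", "top_a", "pop")))  # ['b']
    print(sorted(successors("", "bottom", "pop")))   # []
\end{verbatim}
In this view, a deterministic storage is the special case in which every instruction returns at most one successor for every configuration.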
\subsection{Data storage}
\begin{definition} A \emph{data storage} is a tuple \(S = (C, P, R, c_{\text{i}})\) where $C$ is a set (of \emph{storage configurations}), \(P ⊆ \mathcal{P}(C)\) (\emph{predicates}), \(R ⊆ \mathcal{P}(C × C)\) (\emph{instructions}), \(c_{\text{i}} ∈ C\) (\emph{initial storage configuration}), and the set $r(c)$ is finite for every $r ∈ R$ and $c ∈ C$. \end{definition}
Our definition of data storage differs from the original definition \cite[Def.~3.1]{Gol79} in that we have predicates. The “data storage types” introduced by Herrmann and Vogler \cite[Sec.~3]{HerVog16} are similar to our data storages. For instructions, they use partial functions (rather than binary relations on storage configurations) that may depend on the input of the automaton in addition to the current storage configuration. Consider a data storage \(S = (C, P, R, c_{\text{i}})\). If every element of $R$ is a partial function, we call $S$ \emph{deterministic}. The definition of “deterministic data storage” in this paper coincides with the definition of “storage type” in previous literature \cite{HerVog15,VogDroHer16}.
\begin{example} The deterministic data storage $\textrm{Count}$ models simple counting (Engelfriet~\cite[Def.~3.4]{Eng86,Eng14}): \(\mathrm{Count} = (ℕ, \{ℕ, ℕ₊, \{0\}\}, \{\mathrm{inc}, \mathrm{dec}\}, 0)\) where \(\mathrm{inc} = \{(n, n+1) ∣ n ∈ ℕ\}\) and \(\mathrm{dec} = \mathrm{inc}^{-1}\). \end{example}
\begin{example}\label{ex:pd} The following deterministic data storage models pushdown storage:\footnote{We allow (in comparison to Engelfriet~\cite[Def.~3.2]{Eng86,Eng14}) the execution of (some) instructions on the empty pushdown.} \(\mathrm{PD}_Γ = (Γ^*, P_{\text{pd}}, R_{\text{pd}}, ε)\) where \(Γ\) is a nonempty finite set (\emph{pushdown symbols}); \(P_{\text{pd}} = \{Γ^*, \mathrm{bottom}\} ∪ \{\mathrm{top}_γ ∣ γ ∈ Γ\}\) with \( \mathrm{bottom} = \{ ε \} \) and \( \mathrm{top}_γ = \{γw ∣ w ∈ Γ^*\} \) for every $γ ∈ Γ$; and $R_{\text{pd}} = \{\mathrm{stay}, \mathrm{pop}\} ∪ \{\mathrm{push}_γ ∣ γ ∈ Γ\} ∪ \{ \mathrm{stay}_γ ∣ γ ∈ Γ\}$ with \( \mathrm{stay} = \{(w, w) ∣ w ∈ Γ^*\} \), \( \mathrm{pop} = \{(γw, w) ∣ w ∈ Γ^*, γ ∈ Γ\} \), \( \mathrm{push}_γ = \{(w, γw) ∣ w ∈ Γ^*\} \), and \( \mathrm{stay}_γ = \{(γ'w, γw) ∣ w ∈ Γ^*, γ' ∈ Γ\} \) for every $γ ∈ Γ$. \end{example}
We call a data storage $S = (C, P, R, c_{\text{i}})$ \emph{boundedly non-deterministic (short: boundedly nd)} if there is a natural number $k$ such that $\lvert r(c) \rvert ≤ k$ holds for every $r ∈ R$ and $c ∈ C$. The following two examples illustrate that each deterministic data storage is also boundedly nd, but not vice versa.
\begin{example}\label{ex:pd-with-pop-star} $\mathrm{PD}_Γ'$ extends $\mathrm{PD}_Γ$ (cf. \cref{ex:pd}) by adding an instruction $\mathrm{pop}^*$ that allows us to remove arbitrarily many symbols from the top of the pushdown: \(\mathrm{PD}_Γ' = (Γ^*, P_{\mathrm{pd}}, R_{\mathrm{pd}} ∪ \{\mathrm{pop}^*\}, ε)\) where \(\mathrm{pop}^* = \{(uw, w) ∣ u, w ∈ Γ^*\}\). The tuple $\mathrm{PD}_Γ'$ is a data storage because $\lvert \mathrm{stay}(w) \rvert = 1$, $\lvert \mathrm{pop}(w) \rvert ≤ 1$, $\lvert \mathrm{push}_γ(w) \rvert = 1$, $\lvert \mathrm{stay}_γ(w) \rvert ≤ 1$, and $\lvert \mathrm{pop}^*(w) \rvert = \lvert w \rvert + 1$ for each $w ∈ Γ^*$ and $γ ∈ Γ$ are all finite. But $\mathrm{PD}_Γ'$ is not boundedly nd. Assume that it were. Then there would be a number $k ∈ ℕ$ such that $\lvert r(w) \rvert ≤ k$ holds for every $r ∈ R_{\mathrm{pd}} ∪ \{\mathrm{pop}^*\}$ and $w ∈ Γ^*$.
But if we take some $w' ∈ Γ^*$ of length $k$, then $\lvert \mathrm{pop}^*(w') \rvert = k + 1 > k$, which contradicts our assumption. \end{example}
\begin{example}\label{ex:pd-with-push-prime} The data storage $\mathrm{PD}_Γ''$ extends $\mathrm{PD}_Γ$ (cf. \cref{ex:pd}) by adding an instruction $\mathrm{push}_Γ$ that allows us to add an arbitrary symbol from $Γ$ to the top of the pushdown: \(\mathrm{PD}_Γ'' = (Γ^*, P_{\mathrm{pd}}, R_{\mathrm{pd}} ∪ \{\mathrm{push}_Γ\}, ε)\) where \(\mathrm{push}_Γ = \{(w, γw) ∣ w ∈ Γ^*, γ ∈ Γ\}\). The data storage $\mathrm{PD}_Γ''$ is boundedly nd because if we take the bound $k = \lvert Γ \rvert$, then $\lvert \mathrm{stay}(w) \rvert = 1 ≤ k$, $\lvert \mathrm{pop}(w) \rvert ≤ 1 ≤ k$, $\lvert \mathrm{push}_γ(w) \rvert = 1 ≤ k$, $\lvert \mathrm{stay}_γ(w) \rvert ≤ 1 ≤ k$, and $\lvert \mathrm{push}_Γ(w) \rvert = \lvert Γ \rvert ≤ k$. In particular, if $\lvert Γ \rvert > 1$, then $\mathrm{PD}_Γ''$ is not deterministic because $\lvert \mathrm{push}_Γ(w) \rvert = \lvert Γ \rvert > 1$. \end{example}
\subsection{Automata with data storage}
\begin{quote} \emph{For the rest of this paper let $Σ$ be an arbitrary non-empty finite set.} \end{quote}
\begin{definition} Let \(S = (C, P, R, c_{\text{i}})\) be a data storage. An \emph{$(S, Σ)$-automaton} is a tuple \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) where $Q$ is a finite set (of \emph{states}), $T$ is a finite subset of \(Q × (Σ ∪ \{ε\}) × P × R × Q\) (\emph{transitions}), \(Q_{\text{i}} ⊆ Q\) (\emph{initial states}), and \(Q_{\text{f}} ⊆ Q\) (\emph{final states}). \end{definition}
Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton and \(S = (C, P, R, c_{\text{i}})\). An \emph{$ℳ$-configuration} is an element of \(Q × C × Σ^*\). For every \(τ = (q, v, p, r, q') ∈ T\), the \emph{transition relation of $τ$} is the endorelation $⊢_τ$ on the set of $ℳ$-configurations that contains \((q, c, vw) ⊢_τ (q', c', w)\) for every $w ∈ Σ^*$ and $(c, c') ∈ r$ with $c ∈ p$. The \emph{run relation of $ℳ$} is $⊢_ℳ = ⋃_{τ ∈ T} {⊢_τ}$. The transition relations are extended to sequences of transitions by setting \({⊢_{τ_1⋯τ_k}} = {⊢_{τ_1}} \comp … \comp {⊢_{τ_k}}\) for every \(k ∈ ℕ\) and \(τ_1, …, τ_k ∈ T\). In particular, for the case $k = 0$ we use the identity on $Q × C × Σ^*$: ${⊢_ε} = \{ (d, d) ∣ d ∈ Q × C × Σ^* \}$. The \emph{set of runs of $ℳ$} is the set \begin{equation} \Runs_ℳ = \big\{ θ ∈ T^* ∣ ∃q, q' ∈ Q, c, c' ∈ C, w, w' ∈ Σ^*: (q, c, w) ⊢_θ (q', c', w') \big\}\text{.}\label{eq:runs} \end{equation} Let $w ∈ Σ^*$. The \emph{set of runs of $ℳ$ on $w$} is \( \Runs_ℳ(w) = \big\{ θ ∈ T^* ∣ ∃q ∈ Q_{\text{i}}, q' ∈ Q_{\text{f}}, c' ∈ C: (q, c_{\text{i}}, w) ⊢_θ (q', c', ε)\big\}\text{.}\label{eq:runs-word} \) The \emph{language accepted by $ℳ$} is the set \( L(ℳ) = \{w ∈ Σ^* ∣ \Runs_ℳ(w) ≠ ∅ \}\). Let $S$ be a data storage and $L ⊆ Σ^*$. We call $L$ \emph{$(S, Σ)$-recognisable} if there is an $(S, Σ)$-automaton $ℳ$ with $L = L(ℳ)$.
\begin{figure} \caption{Graph of the $(\mathrm{PD}_Γ'', Σ)$-automaton $ℳ$ from \cref{ex:automaton-with-storage}} \label{fig:automaton-with-storage} \end{figure}
\begin{example}\label{ex:automaton-with-storage} Recall the data storage $\mathrm{PD}_Γ''$ from \cref{ex:pd-with-push-prime}.
Let $Σ = \{\text{a}, \text{b}, \#, \text{a}', \text{b}'\}$ and $Γ = \{\text{a}, \text{b}\}$, and consider the $(\mathrm{PD}_Γ'', Σ)$-automaton $ℳ = ([3], T, \{1\}, \{3\})$ where \begin{align*} T:\quad &\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} 1 & \text{a} & Γ^* & \mathrm{push}_Γ & 1 \\ 2 & \text{a}' & \mathrm{top}_{\text{a}} & \mathrm{pop} & 2 \end{array} &&\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} 1 & \text{b} & Γ^* & \mathrm{push}_Γ & 1 \\ 2 & \text{b}' & \mathrm{top}_{\text{b}} & \mathrm{pop} & 2 \end{array} &&\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}l} 1 & \# & Γ^* & \mathrm{stay} & 2 \\ 2 & ε & \mathrm{bottom} & \mathrm{stay} & 3 & \text{.} \end{array} \end{align*} The graph of $ℳ$ is shown in \cref{fig:automaton-with-storage}. The label of each edge in the graph contains the input that is read by the corresponding transition, the predicate that is checked, and the instruction that is executed. The language recognised by $ℳ$ is \( L(ℳ) = \{ u \# v ∣ u ∈ \{\text{a}, \text{b}\}^*, v ∈ \{\text{a}', \text{b}'\}^*, \lvert u \rvert = \lvert v \rvert\}\). The automaton $ℳ$ recognises a given word $u \# v$ (with $u ∈ \{\text{a}, \text{b}\}^*$ and $v ∈ \{\text{a}', \text{b}'\}^*$) as follows: In state 1, it reads the prefix $u$ and constructs any element of $Γ^*$ of length $\lvert u \rvert$ on the pushdown non-deterministically. It then reads $\#$ and goes to state 2. In state 2, it reads $\text{a}'$ for each $\text{a}$ on the pushdown and it reads $\text{b}'$ for each $\text{b}$ on the pushdown until the pushdown is empty. Since the pushdown can contain any sequence over $\{\text{a}, \text{b}\}$ of length $\lvert u \lvert$, $ℳ$ can read any sequence of $\{\text{a}', \text{b}'\}$ of length $\lvert u \rvert$, ensuring that $\lvert u \rvert = \lvert v \rvert$. \end{example} We call a data storage $S = (C, P, R, c_{\text{i}})$ \emph{predicate-free} if $P = \{ C \}$.\footnote{Even though $S$ has a predicate $C$, we still call it predicate-free since $C$ is trivial, i.e. $C$ accepts any storage configuration.} The following lemma shows that predicate-free-ness is a normal form among data storages. \begin{lemma}\label{lem:predicate-free-normal-form} For every data storage $S$ there is a predicate-free data storage $S'$ such that the classes of $(S, Σ)$-recognisable languages and the class of $(S', Σ)$-recognisable languages are the same. \end{lemma} \begin{proof}[Proof idea] Encode the predicates of $S$ in the instructions of $S'$. \end{proof} \begin{proposition}\label{lem:implementing-of-nd-storage} For every data storage $S$ there is a deterministic data storage $\det(S)$ such that the class of $(S, Σ)$-recognisable languages is equal to the class of $(\det(S), Σ)$-recognisable languages. \end{proposition} \begin{proof} Due to \cref{lem:predicate-free-normal-form} we can assume that $S$ is predicate-free. Thus, let \(S = (C, \{ C \}, R, c_{\text{i}})\). Using a power set construction, we obtain the deterministic data storage \(\det(S) = (\mathcal{P}(C), \{ \mathcal{P}(C) \}, \det(R), \{c_{\text{i}}\})\) where \(\det(R) = \{ \det(r) ∣ r ∈ R \}\) with \(\det(r) = \{ (d, r(d)) ∣ d ⊆ C, r(d) ≠ ∅\}\) for every $r ∈ R$. Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton and \(ℳ' = (Q, T', Q_{\text{i}}, Q_{\text{f}})\) be a $(\det(S), Σ)$-automaton. We say that $ℳ$ and $ℳ'$ are \emph{related} if $T' = \det(T) = \{\det(τ) ∣ τ ∈ T\}$ with $\det(τ) = (q, v, \mathcal{P}(C), \det(r), q')$ for each $τ = (q, v, C, r, q') ∈ T$. 
Clearly, for every $(S, Σ)$-automaton there is an $(\det(S), Σ)$-automaton such that both are related, and vice versa. Now let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton and \(ℳ' = (Q, \det(T), Q_{\text{i}}, Q_{\text{f}})\) be a $(\det(S), Σ)$-automaton. Note that $ℳ$ and $ℳ'$ are related. We extend $\det: T → \det(T)$ to a function $\det: T^* → (\det(T))^*$ by point-wise application. We can show for every $θ ∈ T^*$ by induction on the length of $θ$ that \begin{equation} ∀ q, q' ∈ Q, c, c' ∈ C, w, w' ∈ Σ^*: \quad (q, c, w) ⊢_θ (q', c', w') \iff ∀d ∋ c: ∃d' ∋ c': (q, d, w) ⊢_{\det(θ)} (q', d', w') \label{eq:implementing-of-nd-storage:IH} \end{equation} holds. We obtain $L(ℳ) = L(ℳ')$ from \eqref{eq:implementing-of-nd-storage:IH} and since \(\{c_{\text{i}}\}\) is the initial storage configuration of $ℳ'$. \end{proof} For practical reasons it might be preferable to avoid the construction of power sets. The proof of the following \nameCref{prop:implementing-nd-storage-no-powerset} shows a construction for boundedly nd data storages. \begin{proposition}\label{prop:implementing-nd-storage-no-powerset} Let $S = (C, P, R, c_{\text{i}})$ be a boundedly nd data storage. There is a deterministic data storage $S'$ with the same set of storage configurations such that the class of $(S, Σ)$-recognisable languages is contained in the class of $(S', Σ)$-recognisable languages. \end{proposition} \begin{proof} We construct the deterministic data storage \(S' = (C, P, R', c_{\text{i}})\) where $R'$ is constructed as follows: Let $r ∈ R$ and \(r(c)_1, …, r(c)_{m_{r, c}}\) be a fixed enumeration of the elements of $r(c)$ for every $c ∈ C$. Furthermore, let \(k = \max\{\lvert r(c) \rvert ∣ r ∈ R, c ∈ C\}\). Since $S$ is boundedly nd, the number $k$ is well defined. We define for each $i ∈ [k]$ an instruction \(r_i'\) by \(r_i'(c) = r(c)_i\) if $i ≤ m_{r, c}$ and $r_i'(c) = \text{undefined}$ otherwise. Let~$R'$ contain the instruction $r_i'$ for every $r ∈ R$ and $i ∈ [k]$. Now let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton. We construct the $(S', Σ)$-automaton \(ℳ' = (Q, T', Q_{\text{i}}, Q_{\text{f}})\) where $T'$ contains for every transition $t = (q, v, p, r, q') ∈ T$ and $i ∈ [k]$ the transition \(t_i' = (q, v, p, r_i', q')\). Then \( {⊢_ℳ} = ⋃\nolimits_{t ∈ T} {⊢_t} = ⋃\nolimits_{t = (q, v, p, r, q') ∈ T} ⋃\nolimits_{i ∈ [k]} {⊢_{t_i'}} = ⋃\nolimits_{t' ∈ T'} {⊢_{t'}} = {⊢_{ℳ'}} \) and thus \(L(ℳ) = L(ℳ')\). \end{proof} The above construction fails for data storages that are not boundedly nd. Consider the data storage \(\mathrm{PD}_Γ'\) from \cref{ex:pd-with-pop-star}. Then there exists no bound $k_{\mathrm{pop}^*} ∈ ℕ$ as would be required in the proof. The containment shown in \cref{prop:implementing-nd-storage-no-powerset} is strict as the following example reveals. \begin{example}[due to Nederhof~\cite{Ned17pc}]\label{ex:implementing-nd-storage-no-powerset-proper-containment} Recall the data storage \(\mathrm{PD}_Γ''\) from \cref{ex:pd-with-push-prime}. Consider the similar data storage $\mathrm{PD}_Γ^† = (Γ^*, \{ Γ^*, \mathrm{bottom} \}, \{ \mathrm{stay}, \mathrm{push}_Γ \} ∪ \{\mathrm{pop}_γ ∣ γ ∈ Γ\}, ε)$ where \(\mathrm{pop}_γ = \{(γw, w) ∣ γ ∈ Γ, w ∈ Γ^*\}\) for each $γ ∈ Γ$. We can again think of $Γ^*$ as a pushdown. Now, starting from $\mathrm{PD}_Γ^†$, we construct the deterministic data storage $(\mathrm{PD}^†_Γ)'$ by the construction given in \cref{prop:implementing-nd-storage-no-powerset}. 
We thereby obtain $(\mathrm{PD}^†_Γ)' = (Γ^*, \{Γ^*, \mathrm{bottom}\}, \{ \mathrm{stay}\} ∪ \{ \mathrm{push}_γ ∣ γ ∈ Γ \} ∪ \{ \mathrm{pop}_γ ∣ γ ∈ Γ\}, ε)$. The only difference between $\mathrm{PD}_Γ^†$ and $(\mathrm{PD}^†_Γ)'$ is that the instruction \( \mathrm{push}_Γ \) is replaced by the $\lvert Γ \rvert$ instructions in the set $\{ \mathrm{push}_γ ∣ γ ∈ Γ \}$. Now consider the sets $Σ = \{ \text{a}, \text{b} \}$ and $Γ = Σ$, and the language $L = \{ww^{\text{R}} ∣ w ∈ Σ^*\} ⊆ Σ^*$ where $w^{\text{R}}$ denotes the reverse of $w$ for each $w ∈ Σ^*$. The following $((\mathrm{PD}^†_Γ)', Σ)$-automaton $ℳ'$ recognises $L$ and thus demonstrates that $L$ is $((\mathrm{PD}^†_Γ)', Σ)$-recognisable: $ℳ' = ([3], T', \{1\}, \{3\})$ with
\begin{align*} T':\quad &\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} 1 & \text{a} & Γ^* & \mathrm{push}_{\text{a}} & 1 \\ 2 & \text{a} & Γ^* & \mathrm{pop}_{\text{a}} & 2 \end{array} &&\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} 1 & \text{b} & Γ^* & \mathrm{push}_{\text{b}} & 1 \\ 2 & \text{b} & Γ^* & \mathrm{pop}_{\text{b}} & 2 \end{array} &&\begin{array}[t]{@{(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}l} 1 & ε & Γ^* & \mathrm{stay} & 2 \\ 2 & ε & \mathrm{bottom} & \mathrm{stay} & 3 & \text{.} \end{array} \end{align*}
In state 1, $ℳ'$ stores the input in reverse on the pushdown until it decides non-deterministically to go to state 2. In state 2, $ℳ'$ accepts the sequence of symbols that is stored on the pushdown. We can only enter the final state 3 if the pushdown is empty; thus $ℳ'$ recognises $L$. On the other hand, there is no $(\mathrm{PD}_Γ^†, Σ)$-automaton $ℳ$ that recognises $L$. Assume that some $(\mathrm{PD}_Γ^†, Σ)$-automaton $ℳ$ recognises $L$. Then $ℳ$ would have to encode the first half of the input in the pushdown since this unbounded information cannot be stored in the states. The only instruction that adds information to the pushdown is $\mathrm{push}_Γ$. Thus, in the first half of the input, whenever we read the symbol a, we have to execute $\mathrm{push}_Γ$; and whenever we read the symbol b, we also have to execute $\mathrm{push}_Γ$. This offers no means of distinguishing the two situations (reading symbol a and reading symbol b) and hence no means of encoding the first half of the input in the pushdown. \end{example}
\begin{proposition}\label{obs:equivalent-fsa} Let $S = (C, P, R, c_{\text{i}})$ be a data storage and $L$ be an $(S, Σ)$-recognisable language. If $C$ is finite, then $L$ is recognisable (by a finite state automaton). \end{proposition}
\begin{proof} We will use a product construction. In particular, the states of the constructed finite state automaton are elements of $Q × C$, where $Q$ is the state set of the given automaton. For this we employ non-deterministic finite-state automata with extended transition function (short: fsa) from Hopcroft and Ullman~\cite[Sec.~2.3]{HopUll79} in a notation similar to that of automata with storage. (We simply leave out the storage-related parts of the transitions.) Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\). We construct the fsa \(ℳ' = (Q × C, Σ, T', Q_{\text{i}} × \{c_{\text{i}}\}, Q_{\text{f}} × C)\) where \(T' = \{ ((q, c), v, (q', c')) ∣ (q, v, p, r, q') ∈ T, (c, c') ∈ r, c ∈ p \}\). We can show \begin{equation} ∀q, q' ∈ Q, c, c' ∈ C, w, w' ∈ Σ^*: \quad (q, c, w) ⊢_ℳ^* (q', c', w') \iff ((q, c), w) ⊢_{ℳ'}^* ((q', c'), w')\text{.} \label{eq:equivalent-fsa:IH} \end{equation} by straightforward induction on the length of runs. Using \eqref{eq:runs} and \eqref{eq:equivalent-fsa:IH}, we then derive $L(ℳ) = L(ℳ')$.
\end{proof} \section{Approximation of automata with data storage} \label{sec:approximation} An approximation strategy maps a data storage to another data storage. It is specified in terms of storage configurations and naturally extended to predicates and instructions. \begin{definition} Let $S = (C, P, R, c_{\text{i}})$ be a data storage. An \emph{approximation strategy} is a partial function \(A: C \parto C'\) for some set $C'$. We call $A$ \emph{$S$-proper} if $(A^{-1} \comp r \comp A)(c')$ is finite for every $r ∈ R$ and $c' ∈ C'$. \end{definition} \begin{definition}\label{def:storage-approximation} Let \(S = (C, P, R, c_{\text{i}})\) be a data storage and \(A: C \parto C'\) be an $S$-proper approximation strategy. The \emph{approximation of $S$ with respect to $A$} is the data storage $\app{A}{S} = (C', \app{A}{P}, \app{A}{R}, A(c_{\text{i}}))$ where \(\app{A}{P} = \{\app{A}{p} ∣ p ∈ P\}\) with \(\app{A}{p} = \{ A(c) ∣ c ∈ p \}\) for every $p ∈ P$, and \(\app{A}{R} = \{\app{A}{r} ∣ r ∈ R\}\) with \(\app{A}{r} = A^{-1} \comp r \comp A\) for every $r ∈ R$. \end{definition} \begin{example} Consider the approximation strategy \(A_{\mathrm{o}}: ℕ → \{\mathrm{odd}\} ∪ \{2n ∣ n ∈ ℕ\}\) that assigns to every odd number the value $\mathrm{odd}$ and to every even number the number itself. Then $A_{\mathrm{o}}$ \emph{is not} $\mathrm{Count}$-proper since \((A_{\mathrm{o}}^{-1} \comp \mathrm{inc} \comp A_{\mathrm{o}})(\mathrm{odd}) = (A_{\mathrm{o}}^{-1} \comp \mathrm{dec} \comp A_{\mathrm{o}})(\mathrm{odd}) = \{2n ∣ n ∈ ℕ\}\) is not finite. On the other hand, consider the approximation strategy \(A_{\mathrm{eo}}: ℕ → \{\mathrm{even}, \mathrm{odd}\}\) that returns $\mathrm{odd}$ for every odd number and $\mathrm{even}$ otherwise. Then $A_{\mathrm{eo}}$ \emph{is} $\mathrm{Count}$-proper since \((A_{\mathrm{eo}}^{-1} \comp \mathrm{inc} \comp A_{\mathrm{eo}})(\mathrm{even}) = \{\mathrm{odd}\} = (A_{\mathrm{eo}}^{-1} \comp \mathrm{dec} \comp A_{\mathrm{eo}})(\mathrm{even})\) and \((A_{\mathrm{eo}}^{-1} \comp \mathrm{inc} \comp A_{\mathrm{eo}})(\mathrm{odd}) = \{\mathrm{even}\} = (A_{\mathrm{eo}}^{-1} \comp \mathrm{dec} \comp A_{\mathrm{eo}})(\mathrm{odd})\) are finite. \end{example} \begin{definition}\label{def:automaton-approximation} Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton and $A$ an $S$-proper approximation strategy. The \emph{approximation of $ℳ$ with respect to $A$} is the $(\app{A}{S}, Σ)$-automaton \(\app{A}{ℳ} = (Q, \app{A}{T}, Q_{\text{i}}, Q_{\text{f}})\) where \(\app{A}{T} = \{\app{A}{τ} ∣ τ ∈ T\}\) and $\app{A}{τ} = (q, v, \app{A}{p}, \app{A}{r}, q')$ for each \(τ = (q, v, p, r, q') ∈ T\). \end{definition} \begin{example}\label{ex:Count-approximation-Aeo} Let $Σ = \{\text{a}, \text{b}\}$. 
Consider the ($\mathrm{Count}, Σ)$-automaton $ℳ = ([3], T, \{1\}, \{3\})$ and its approximation $\app{A_{\text{eo}}}{ℳ} = ([3], \app{A_{\text{eo}}}{T}, \{1\}, \{3\})$ with \begin{align*} T: \begin{array}[t]{r@{{}=(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_1 & 1 & \text{a} & ℕ & \mathrm{inc} & 1 \\ τ_2 & 1 & \text{b} & ℕ & \mathrm{dec} & 2 \\ τ_3 & 2 & \text{b} & ℕ & \mathrm{dec} & 2 \\ τ_4 & 2 & ε & \{0\} & \mathrm{inc} & 3 \end{array} &&\app{A_{\text{eo}}}{T}: \begin{array}[t]{r@{{}=(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_1' & 1 & \text{a} & \app{A_{\textrm{eo}}}{ℕ} & \app{A_{\textrm{eo}}}{\mathrm{inc}} & 1 \\ τ_2' & 1 & \text{b} & \app{A_{\textrm{eo}}}{ℕ} & \app{A_{\textrm{eo}}}{\mathrm{dec}} & 2 \\ τ_3' & 2 & \text{b} & \app{A_{\textrm{eo}}}{ℕ} & \app{A_{\textrm{eo}}}{\mathrm{dec}} & 2 \\ τ_4' & 2 & ε & \app{A_{\textrm{eo}}}{\{0\}} & \app{A_{\textrm{eo}}}{\mathrm{inc}} & 3 \end{array} \end{align*} where $\app{A_{\textrm{eo}}}{ℕ} = \app{A_{\textrm{eo}}}{ℕ₊} = \{\mathrm{even}, \mathrm{odd}\}$ and $\app{A_{\textrm{eo}}}{\{0\}} = \{\mathrm{even}\}$ are the predicates of $\app{A_{\textrm{eo}}}{\mathrm{Count}}$, and $\app{A_{\textrm{eo}}}{\mathrm{inc}} = \app{A_{\textrm{eo}}}{\mathrm{dec}} = \{(\mathrm{even}, \mathrm{odd}), (\mathrm{odd}, \mathrm{even})\}$ is the instruction of $\app{A_{\textrm{eo}}}{\mathrm{Count}}$. The word $\text{aabb} ∈ \{\text{a}, \text{b}\}^*$ is recognised by both automata: \[ \begin{array}{rlllll} (1, 0, \text{aabb}) &⊢_{τ_1} (1, 1, \text{abb}) &⊢_{τ_1} (1, 2, \text{bb}) &⊢_{τ_2} (2, 1, \text{b}) &⊢_{τ_3} (2, 0, ε) &⊢_{τ_4} (3, 1, ε) \\[.2em] (1, \mathrm{even}, \text{aabb}) &⊢_{τ_1'} (1, \mathrm{odd}, \text{abb}) &⊢_{τ_1'} (1, \mathrm{even}, \text{bb}) &⊢_{τ_2'} (2, \mathrm{odd}, \text{b}) &⊢_{τ_3'} (2, \mathrm{even}, ε) &⊢_{τ_4'} (3, \mathrm{odd}, ε)\text{.} \end{array} \] On the other hand, the word $\text{bb}$ can be recognised by $\app{A_{\textrm{eo}}}{ℳ}$ but not by $ℳ$: \begin{align*} (1, \mathrm{even}, \text{bb}) ⊢_{τ_2'} (2, \mathrm{odd}, \text{b}) ⊢_{τ_3'} (2, \mathrm{even}, ε) ⊢_{τ_4'} (3, \mathrm{odd}, ε)\text{.} \tag*\qedhere \end{align*} \end{example} \begin{observation}\label{obs:composition-of-approximations} Let $S = (C, P, R, c_{\text{i}})$, $ℳ$ be an $(S, Σ)$-automaton, and $A_1: C \parto \bar{C}$ and $A_2: \bar{C} \parto C'$ be approximation strategies. If $A_1$ is $S$-proper and $A_2$ is $\app{A_1}{S}$-proper, then $\app{A_2}{\app{A_1}{ℳ}} = \app{(A_1 \comp A_2)}{ℳ}$. \end{observation} We call an approximation strategy \emph{total} if it is a total function and we call it \emph{injective} if it is an injective partial function. The distinction between \emph{total} and \emph{injective} approximation strategies allows us to define two preorders on approximation strategies (\cref{def:finer-approximation}) and provides us with simple criteria to ensure that an approximation strategy leads to a superset (\cref{thm:superset-approximation}) or a subset approximation (\cref{thm:subset-approximation}). \begin{definition}\label{def:finer-approximation} Let \(A_1: C \parto C₁\) and \(A_2: C \parto C₂\) be approximation strategies. We call $A_1$ \emph{finer than} $A_2$, denoted by $A_1 \preceq A_2$, if there is a total approximation strategy $A: C₁ → C₂$ with $A_1 \comp A = A_2$. We call $A_1$ \emph{less partial than} $A_2$, denoted by $A_1 ⊑ A_2$, if there is an injective approximation strategy $A: C₁ \parto C₂$ with $A_1 \comp A = A_2$. \end{definition} \subsection{Superset approximations} In this section we will show that total approximation strategies (i.e. 
total functions) lead to superset approximations. \begin{lemma}\label{lem:superset-approximation} Let $ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})$ be an $(S, Σ)$-automaton, $S = (C, P, R, c_{\text{i}})$, and $A$ be an $S$-proper total approximation strategy. We extend $\app{A}{}: T → \app{A}{T}$ to sequences of transitions by point-wise application. Then for each $θ ∈ T^*$, $q, q' ∈ Q$, $c, c' ∈ C$, $w, w' ∈ Σ^*$: \( (q, c, w) ⊢_θ (q', c', w') ⟹ (q, A(c), w) ⊢_{\app{A}{θ}} (q', A(c'), w')\text{.} \) \end{lemma} \begin{proofidea} The claim can be shown by straightforward induction on the length of $θ$. \end{proofidea} \begin{theorem}\label{thm:superset-approximation} Let \(ℳ\) be an $(S, Σ)$-automaton and $A$ be an $S$-proper total approximation strategy. Then \(L(\app{A}{ℳ}) ⊇ L(ℳ)\). \end{theorem} \begin{proof} The claim follows immediately from \cref{lem:superset-approximation} and the definition of $\app{A}{ℳ}$. \end{proof} \begin{example} Recall $ℳ$ and $\app{A_{\text{eo}}}{ℳ}$ from \cref{ex:Count-approximation-Aeo}. Their recognised languages are $L(ℳ) = \{\text{a}^n\text{b}^n ∣ n ∈ ℕ_+\}$ and $L(\app{A_{\text{eo}}}{ℳ}) = \{ \text{a}^m \text{b}^n ∣ m ∈ ℕ,n ∈ ℕ_+, m ≡ n \mod 2\}$. Thus $L(\app{A_{\text{eo}}}{ℳ})$ is a superset of $L(ℳ)$. \end{example} \begin{corollary}\label{prop:finer-approximation} Let $ℳ$ be an $(S, Σ)$-automaton, and $A_1$ and $A_2$ be $S$-proper approximation strategies. If $A_1$ is finer than $A_2$, then $L(\app{A_1}{ℳ}) ⊆ L(\app{A_2}{ℳ})$. \end{corollary} \begin{proof} Since $A_1$ is finer than $A_2$, there is a total approximation strategy $A$ such that $A_1 \comp A = A_2$. It follows from the fact that $A_2$ is $S$-proper and from $A_1 \comp A = A_2$ that $A$ must be $\app{A₁}{S}$-proper. Hence we obtain \( L(\app{A_1}{ℳ}) \stackrel{\text{\cref{thm:superset-approximation}}}{⊆} L\big(\app{A}{\app{A_1}{ℳ}}\big) \stackrel{\text{\cref{obs:composition-of-approximations}}}{=} L(\app{(A_1 \comp A)}{ℳ}) = L(\app{A_2}{ℳ})\text{.} \) \end{proof} The following example shows four approximation strategies that occur in the literature. The first three approximation strategies approximate a context-free language by a recognisable language (taken from Nederhof~\cite[Sec.~7]{Ned00}). The fourth approximation strategy approximates a context-free language by another context-free language. It is easy to see that the shown approximation strategies are total and thus lead to superset approximations. \begin{example}\label{ex:superset-approximation} Let $Γ$ be a finite set and $k ∈ ℕ_+$. \begin{enumerate} \item Evans~\cite{Eva97} proposed to map each pushdown to its top-most element. The same result is achieved by dropping condition~7 and~8 from Baker~\cite{Bak81}. This idea is expressed by the total approximation strategy \( A_{\text{top}}: Γ^* → Γ ∪ \{@\} \) with \( A_{\text{top}}(ε) = @ \) and \( A_{\text{top}}(γw) = γ \) for every $w ∈ Γ^*$ and $γ ∈ Γ$, where $@$ is a new symbol that is not in $Γ$. \item Bermudez and Schimpf~\cite{BerSch90} proposed to map each pushdown to its top-most $k$ elements. The total approximation strategy \(A_{\text{top}, k}: Γ^* → \{w ∈ Γ^* ∣ \lvert w \vert ≤ k\} \) implements this idea where \( A_{\text{top},k}(w) = w \) if $\lvert w \rvert ≤ k$ and \( A_{\text{top},k}(w) = u \) if $w$ is of the form $uv$ for some $u ∈ Γ^k$ and $v ∈ Γ^+$. \item Pereira and Wright~\cite{PerWri91} proposed to map each pushdown to one where no pushdown symbol occurs more than once. 
To achieve this, they replace each substring of the form $γw'γ$ (for some $γ ∈ Γ$ and $w' ∈ Γ^*$) in the given pushdown by $γ$: Consider \( A_{\text{uniq}}: Γ^* → \mathrm{Seq}_{\text{nr}}(Γ) \) with \( A_{\text{uniq}}(w) = A_{\text{uniq}}(uγv) \) if $w$ is of the form $uγw'γv$ for some $γ ∈ Γ$ and $u, w', v ∈ Γ^*$, and \( A_{\text{uniq}}(w) = w \) otherwise, where $\mathrm{Seq}_{\text{nr}}(Γ)$ denotes the set of all sequences over $Γ$ without repetition.
\item\label{ex:superset-approximation:equivalent-nts} In their coarse-to-fine parsing approach for context-free grammars (short: CFG), Charniak et~al.~\cite{Cha+06} propose, given an equivalence relation~$≡$ on the set of non-terminals $N$ of some CFG $G$, to construct a new CFG $G'$ whose non-terminals are the equivalence classes of $≡$.\footnote{Charniak et~al.~\cite{Cha+06} actually considered probabilistic CFGs, but for the sake of simplicity we leave out the probabilities here.} Let $Σ$ be the terminal alphabet of $G$. Say that $g: N → N/{≡}$ is the function that assigns to each nonterminal of $G$ its corresponding equivalence class; and let $g': (N ∪ Σ)^* → ((N/{≡}) ∪ Σ)^*$ be an extension of $g ∪ \{(σ, σ) ∣ σ ∈ Σ\}$. Then $g'$ is $\mathrm{PD}_{N ∪ Σ}$-proper and \( L(\app{g'}{ℳ}) = L(G') \) where $ℳ$ is the $(\mathrm{PD}_{N ∪ Σ}, Σ)$-automaton obtained from $G$ by the usual construction \cite[Thm.~5.3]{HopUll79}.\qedhere \end{enumerate} \end{example}
\subsection{Subset approximations}
In this section we will show that injective approximation strategies lead to a subset approximation; this is proved by a variation of the proof of \cref{thm:superset-approximation}.
\begin{lemma}\label{lem:subset-approximation} Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}})\) be an $(S, Σ)$-automaton, \(S = (C, P, R, c_{\text{i}})\), and $A$ be an $S$-proper injective approximation strategy. Then for each $θ ∈ T^*$, $q, q' ∈ Q$, $c, c' ∈ \img(A)$, $w, w' ∈ Σ^*$: \( (q, c, w) ⊢_{\app{A}{θ}} (q', c', w') ⟹ (q, A^{-1}(c), w) ⊢_θ (q', A^{-1}(c'), w')\text{.} \) \end{lemma}
\begin{proofidea} The claim can be shown by straightforward induction on the length of $θ$. \end{proofidea}
\begin{theorem}\label{thm:subset-approximation} Let $ℳ$ be an $(S, Σ)$-automaton and $A$ be an $S$-proper injective approximation strategy. Then $L(\app{A}{ℳ}) ⊆ L(ℳ)$. \end{theorem}
\begin{proof} The claim follows immediately from \cref{lem:subset-approximation} and the definition of $\app{A}{ℳ}$. \end{proof}
\begin{corollary}\label{prop:less-partial-approximation} Let $ℳ$ be an $(S, Σ)$-automaton and $A_1$ and $A_2$ be $S$-proper approximation strategies. If $A_1$ is less partial than $A_2$, then $L(\app{A_1}{ℳ}) ⊇ L(\app{A_2}{ℳ})$. \end{corollary}
\begin{proof} Since $A_1$ is less partial than $A_2$, we know that there is an injective approximation strategy $A$ such that $A_1 \comp A = A_2$. As in the proof of \cref{prop:finer-approximation}, we know that $A$ is $\app{A_1}{S}$-proper. Hence we obtain \begin{align*} L(\app{A_1}{ℳ}) \stackrel{\text{\cref{thm:subset-approximation}}}{⊇} L\big(\app{A}{\app{A_1}{ℳ}}\big) \stackrel{\text{\cref{obs:composition-of-approximations}}}{=} L(\app{(A_1 \comp A)}{ℳ}) = L(\app{A_2}{ℳ})\text{.} \tag*\qedhere \end{align*} \end{proof}
The following example approximates a context-free language with a recognisable language (taken from Nederhof~\cite[Sec.~7]{Ned00}). It is easy to see that the shown approximation strategy is injective and thus leads to a subset approximation.
\begin{example}\label{ex:subset-approximation} Let $Γ$ be a finite set and $k ∈ ℕ_+$.
Krauwer and des Tombe~\cite{KraTom81}, Pulman~\cite{Pul86}, and Langendoen and Langsam~\cite{LanLan87} proposed to disallow pushdowns of height greater than $k$. This can be achieved by the partial identity \( A_{\text{bd},k}: Γ^+ \parto \{w ∈ Γ^+ ∣ \lvert w \rvert ≤ k\} \) where \( A_{\text{bd},k}(w) = w \) if $\lvert w \rvert ≤ k$ and \( A_{\text{bd},k}(w) = \text{undefined} \) if \(\lvert w \rvert > k\). \end{example}
\subsection{Potentially incomparable approximations}
The following example shows that our framework is also capable of expressing approximation strategies that lead neither to superset nor to subset approximations.
\begin{example} Let $Γ$ be a (not necessarily finite) set, $Δ$ be a finite set, $k ∈ ℕ_+$, and $g: Γ → Δ$ be a total function. For pushdown automata with an infinite pushdown alphabet, Johnson~\cite[end of Section~1.4]{Joh98} proposed to first approximate the infinite pushdown alphabet with a finite set and then restrict the pushdown height to $k$. This can be easily expressed as the composition of two approximations: \begin{align*} A_{\text{incomp},k}: Γ^+ &\parto \{w ∈ Δ^+ ∣ \lvert w \rvert ≤ k\} &A_{\text{incomp},k} &= \hat{g} \comp A_{\text{bd},k} \end{align*} where $\hat{g}: Γ^+ → Δ^+$ is the point-wise application of $g$. Let $\lvert Δ \rvert < \lvert Γ \rvert$. Then $\hat{g}$ is total but not injective, $A_{\text{bd},k}$ is injective but not total, and $A_{\text{incomp},k}$ is neither total nor injective. Hence \cref{thm:superset-approximation,thm:subset-approximation} provide no further insights about the approximation strategy $A_{\text{incomp},k}$. This concurs with the observation of Johnson \cite[end of Section~1.4]{Joh98} that $A_{\text{incomp},k}$ is not guaranteed to induce either subset or superset approximations. \end{example}
\subsection{Approximation of weighted automata with storage}
\begin{definition}\label{def:weighted-automaton:syntax} Let $S$ be a data storage and $K$ be a complete semiring. An \emph{$(S, Σ, K)$-automaton} is a tuple \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}}, δ)\) where $(Q, T, Q_{\text{i}}, Q_{\text{f}})$ is an $(S, Σ)$-automaton and \(δ: T → K\) (\emph{transition weights}). We sometimes denote $(Q, T, Q_{\text{i}}, Q_{\text{f}})$ by $ℳ_{\text{uw}}$ (“$\text{uw}$” stands for unweighted). \end{definition}
Consider the $(S, Σ, K)$-automaton \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}}, δ)\). The \emph{$ℳ$-configurations}, the \emph{run relation of $ℳ$}, and the \emph{set of runs of $ℳ$ on $w$} for every $w ∈ Σ^*$ are the same as for $ℳ_{\text{uw}}$. The \emph{weight of $θ$ in $ℳ$} is the value \(\wt_ℳ(θ) = δ(τ_1) ⋅ … ⋅ δ(τ_k)\) for every $θ = τ_1 ⋯ τ_k$ with $τ_1, …, τ_k ∈ T$. In particular, we let $\wt_ℳ(ε) = 1$. The \emph{weighted language induced by $ℳ$} is the function $⟦ℳ⟧: Σ^* → K$ where \begin{equation} ⟦ℳ⟧(w) = ∑\nolimits_{θ ∈ \Runs_ℳ(w)} \wt_ℳ(θ) \label{eq:weighted-automaton:semantics} \end{equation} for every $w ∈ Σ^*$. Let $S$ be a data storage, $K$ be a complete semiring, and $r: Σ^* → K$. We call $r$ \emph{$(S, Σ, K)$-recognisable} if there is an $(S, Σ, K)$-automaton $ℳ$ with $r = ⟦ℳ⟧$. We extend \cref{lem:implementing-of-nd-storage} to the weighted case, using the functions $\det$ as defined in the proof of \cref{lem:implementing-of-nd-storage}.
\begin{proposition}\label{lem:implementing-of-nd-storage-weighted} The classes of $(S, Σ, K)$-recognisable and of $(\det(S), Σ, K)$-recog\-nis\-able languages are the same for every data storage $S$ and complete semiring $K$.
\end{proposition}
\begin{proof} Let $ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}}, δ)$ be an $(S, Σ, K)$-automaton and $ℳ' = (Q', T', Q_{\text{i}}', Q_{\text{f}}', δ')$ a $(\det(S), Σ, K)$-automaton. We call $ℳ$ and $ℳ'$ \emph{related} if $ℳ_{\text{uw}}$ and $ℳ_{\text{uw}}'$ are related, and $δ'(\det(τ)) = δ(τ)$ for every $τ ∈ T$. Note that $\det\colon T → \det(T)$ is a bijection. Clearly, for every $(S, Σ, K)$-automaton $ℳ$ there is a $(\det(S), Σ, K)$-automaton $ℳ'$ such that $ℳ$ and $ℳ'$ are related, and vice versa. It remains to be shown that $⟦ℳ⟧ = ⟦ℳ'⟧$. For every $w ∈ Σ^*$, we derive \begin{align*} ⟦ℳ⟧(w) \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ∑\nolimits_{θ ∈ \Runs_ℳ(w)} \wt_ℳ(θ) = ∑\nolimits_{θ ∈ \Runs_ℳ(w)} \wt_{ℳ'}(\det(θ)) \stackrel{\text{\eqref{eq:implementing-of-nd-storage:IH}}}{=} ∑\nolimits_{θ' ∈ \Runs_{ℳ'}(w)} \wt_{ℳ'}(θ') \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ⟦ℳ'⟧(w)\text{.} \tag*{\qedhere} \end{align*} \end{proof}
\begin{definition}\label{def:weighted-approx} Let \(ℳ = (Q, T, Q_{\text{i}}, Q_{\text{f}}, δ)\) be an $(S, Σ, K)$-automaton and $A$ be an $S$-proper approximation strategy. The \emph{approximation of $ℳ$ with respect to $A$} is the $(\app{A}{S}, Σ, K)$-automaton \( \app{A}{ℳ} = (Q, \app{A}{T}, Q_{\text{i}}, Q_{\text{f}}, \app{A}{δ}) \) where $\app{A}{S}$ and $\app{A}{T}$ are defined as in \cref{def:automaton-approximation}, and \( \app{A}{δ}(τ') = ∑_{τ ∈ T: \app{A}{τ} = τ'} δ(τ) \) for every $τ' ∈ \app{A}{T}$. \end{definition}
\begin{lemma}\label{lem:wt-approx} Let \(ℳ\) be an $(S, Σ, K)$-automaton, $A$ be an $S$-proper approximation strategy, ${≤}$ be a partial order on $K$, and $K$ be positively ${≤}$-ordered. \begin{enumerate} \item\label{item:wt-approx-types:over} \(\wt_{\app{A}{ℳ}}(θ') ≥ ∑_{θ ∈ \Runs_{ℳ} : \app{A}{θ} = θ'} \wt_ℳ(θ)\) for every \(θ' ∈ \Runs_{\app{A}{ℳ}}\). \item\label{item:wt-approx-types:under} If $A$ is injective, then \(\wt_{\app{A}{ℳ}}(θ') = ∑_{θ ∈ \Runs_{ℳ} : \app{A}{θ} = θ'} \wt_ℳ(θ)\) for every \(θ' ∈ \Runs_{\app{A}{ℳ}}\). \end{enumerate} \end{lemma}
\begin{proof} \textbf{ad~\ref{item:wt-approx-types:over}:} We prove the claim by induction on the length of $θ'$. For $θ' = ε$, we derive \begin{align*} \wt_{\app{A}{ℳ}}(ε) = 1 ≥ 1 = \wt_{ℳ}(ε) = ∑\nolimits_{θ ∈ \Runs_ℳ: \app{A}{θ} = ε} \wt_ℳ(θ)\text{.} \end{align*} For $θ'τ' ∈ \Runs_{\app{A}{ℳ}}$ with $τ' ∈ \app{A}{T}$, we derive \begin{align*} \wt_{\app{A}{ℳ}}(θ'τ') &= \wt_{\app{A}{ℳ}}(θ') ⋅ \app{A}{δ}(τ') \\* &≥ \big( ∑\nolimits_{θ ∈ \Runs_ℳ, \app{A}{θ} = θ'} \wt_ℳ(θ) \big) ⋅ \app{A}{δ}(τ') \tag{by IH and since $⋅$ preserves $≤$} \\ &= \big( ∑\nolimits_{θ ∈ \Runs_ℳ, \app{A}{θ} = θ'} \wt_ℳ(θ) \big) ⋅ \big( ∑\nolimits_{τ ∈ T: \app{A}{τ} = τ'} δ(τ) \big) \tag{by \cref{def:weighted-approx}} \\ &= ∑\nolimits_{θ ∈ \Runs_ℳ, τ ∈ T: (\app{A}{θ} = θ') ∧ (\app{A}{τ} = τ')} \wt_ℳ(θ) ⋅ δ(τ) \tag{by distributivity of $K$} \\ &≥ ∑\nolimits_{θ ∈ \Runs_ℳ, τ ∈ T: θτ ∈ \Runs_ℳ ∧ (\app{A}{θτ} = θ'τ')} \wt_ℳ(θ) ⋅ δ(τ) \tag{by $(*)$ and since $+$ preserves $≤$} \\* &= ∑\nolimits_{\bar{θ} ∈ \Runs_ℳ: (\app{A}{\bar{θ}} = θ'τ')} \wt_ℳ(\bar{θ}) \tag{by \cref{def:weighted-approx}} \end{align*} For $(*)$, we note that the index set of the left sum subsumes that of the right sum and hence $≥$ is justified. \textbf{ad~\ref{item:wt-approx-types:under}:} The proof follows the same structure as the proof of \ref{item:wt-approx-types:over}. But we make the following modifications: In the induction base, we can write “$=$” instead of “$≥$” since $1 = 1$.
For the induction step, we assume that \ref{item:wt-approx-types:under} holds for every $θ'$ of length $n$. Then the “$≥$” in the second line of the induction step can be replaced by “$=$”. In order to turn the “$≥$” in the fifth line of the induction step into “$=$”, we note that the index sets of the left and the right sum are the same. This holds since $A$ is injective, $θ'τ'$ is in $\Runs_{\app{A}{ℳ}}$, and hence (by \cref{lem:subset-approximation}) each $θτ$ with $\app{A}{θτ} = θ'τ'$ is in $\Runs_ℳ$. \end{proof}
\begin{theorem}\label{thm:weighted-approx-types} Let \(ℳ\) be an $(S, Σ, K)$-automaton, $A$ be an $S$-proper approximation strategy, ${≤}$ be a partial order on $K$, and $K$ be positively ${≤}$-ordered. \begin{enumerate} \item\label{item:weighted-approx-types:over} If $A$ is total, then \(⟦\app{A}{ℳ}⟧(w) ≥ ⟦ℳ⟧(w)\) for every $w ∈ Σ^*$. \item\label{item:weighted-approx-types:under} If $A$ is injective, then \(⟦\app{A}{ℳ}⟧(w) ≤ ⟦ℳ⟧(w)\) for every $w ∈ Σ^*$. \end{enumerate} \end{theorem}
\begin{proof} \textbf{ad~\ref{item:weighted-approx-types:over}:} For every $w ∈ Σ^*$, we derive \begin{align*} &⟦\app{A}{ℳ}⟧(w) \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} \wt_{\app{A}{ℳ}}(θ') \stackrel{(*)}{≥} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} ∑\nolimits_{θ ∈ \Runs_{ℳ}\colon \app{A}{θ} = θ'} \wt_{ℳ}(θ) \\* &\qquad\stackrel{\text{\cref{def:automaton-approximation}}}{=} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} ∑\nolimits_{θ ∈ \Runs_{ℳ}(w)\colon \app{A}{θ} = θ'} \wt_ℳ(θ) \stackrel{(\dagger)}{=} ∑\nolimits_{θ ∈ \Runs_{ℳ}(w)} \wt_ℳ(θ) \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ⟦ℳ⟧(w) \end{align*} where $(*)$ follows from \cref{lem:wt-approx}\,\ref{item:wt-approx-types:over} and the fact that $+$ preserves $≤$. For $(\dagger)$, we argue that for each $θ ∈ \Runs_ℳ(w)$ there is exactly one $θ' ∈ \Runs_{\app{A}{ℳ}}(w)$ with $\app{A}{θ} = θ'$ since $A$ is total. Hence the left side and the right side of the equation have exactly the same addends. Then, since $+$ is commutative, the “$=$” is justified. \textbf{ad~\ref{item:weighted-approx-types:under}:} For every $w ∈ Σ^*$, we derive \begin{align*} &⟦\app{A}{ℳ}⟧(w) \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} \wt_{\app{A}{ℳ}}(θ') \stackrel{\text{\cref{lem:wt-approx}\,\ref{item:wt-approx-types:under}}}{=} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} ∑\nolimits_{θ ∈ \Runs_{ℳ}\colon \app{A}{θ} = θ'} \wt_{ℳ}(θ) \\* &\qquad\stackrel{\text{\cref{def:automaton-approximation}}}{=} ∑\nolimits_{θ' ∈ \Runs_{\app{A}{ℳ}}(w)} ∑\nolimits_{θ ∈ \Runs_{ℳ}(w)\colon \app{A}{θ} = θ'} \wt_ℳ(θ) \stackrel{(\ddagger)}{≤} ∑\nolimits_{θ ∈ \Runs_{ℳ}(w)} \wt_ℳ(θ) \stackrel{\text{\eqref{eq:weighted-automaton:semantics}}}{=} ⟦ℳ⟧(w)\text{.} \end{align*} For $(\ddagger)$, we argue that for each $θ ∈ \Runs_ℳ(w)$ there is at most one $θ' ∈ \Runs_{\app{A}{ℳ}}(w)$ with $\app{A}{θ} = θ'$ since $A$ is a partial function. Hence all the addends on the left side of the inequality also occur on the right side. But there may be an addend $\wt_ℳ(θ)$ on the right side which does not occur on the left side because $\app{A}{θ} = \mathrm{undefined}$. Since $+$ preserves $≤$, the “$≤$” is justified.
\qedhere \end{proof}
\section{Approximation of multiple context-free languages} \label{sec:approximation-mcfl}
Due to the equivalence of pushdown automata and context-free grammars \cite[Thms.~5.3 and~5.4]{HopUll79}, the approximation strategies in \cref{ex:superset-approximation,ex:subset-approximation} can be used for the approximation of context-free languages. The framework presented in this paper together with the automata characterisation of multiple context-free languages \cite[Thm.~18]{Den16} allows an automata-theoretic view on the approximation of multiple context-free languages. The automata characterisation uses an excursion-restricted form of automata with tree-stack storage \cite{Den16}. A tree-stack is a tree with a designated position inside of it (the \emph{stack pointer}). The automaton can read the label under the stack pointer, can determine whether the stack pointer is at the bottom (i.e. the root), and can modify the tree stack by moving the stack pointer or by adding a node. The excursion-restriction bounds how often the stack pointer may enter a position from its parent node.
\begin{definition} Let $Γ$ be a finite set. The \emph{tree-stack storage over $Γ$} is the deterministic data storage \( \mathrm{TSS}_Γ = (\mathrm{TS}_Γ, P_{\text{ts}}, R_{\text{ts}}, c_{\text{i}, \text{ts}}) \) where \begin{itemize} \item \(\mathrm{TS}_Γ\) is the set of tuples $⟨ξ, ρ⟩$ where $ξ: ℕ_+^* \parto Γ ∪ \{@\}$, $\dom(ξ)$ is finite and prefix-closed,\footnote{A set $D ⊆ ℕ_+^*$ is \emph{prefix closed} if for each $w ∈ D$, every prefix of $w$ is also in $D$.} $ρ ∈ \dom(ξ)$, and $ξ(ρ') = @$ iff $ρ' = ε$ (We call $ξ$ the \emph{stack} and $ρ$ the \emph{stack pointer} of $⟨ξ, ρ⟩$.); \item \(c_{\text{i}, \text{ts}} = ⟨\{(ε, @)\}, ε⟩\); \item \(P_{\text{ts}} = \{\mathrm{TS}_Γ, \tsbottom\} ∪ \{\equals_γ ∣ γ ∈ Γ\}\) with \(\tsbottom = \{⟨ξ, ρ⟩ ∈ \mathrm{TS}_Γ ∣ ρ = ε\}\) and \(\equals_γ = \{⟨ξ, ρ⟩ ∈ \mathrm{TS}_Γ ∣ ξ(ρ) = γ\}\) for every $γ ∈ Γ$; and \item \(R_{\text{ts}} = \{\down\} ∪ \{\up_n, \push_{n, γ} ∣ n ∈ ℕ_+, γ ∈ Γ\}\) where for each $n ∈ ℕ_+$ and $γ ∈ Γ$: \begin{itemize} \item \(\up_n = \{(⟨ξ, ρ⟩, ⟨ξ, ρn⟩) ∣ ⟨ξ, ρ⟩ ∈ \mathrm{TS}_Γ, ρn ∈ \dom(ξ)\}\), \item \(\down = ⋃_{n ∈ ℕ₊} \up_n^{-1}\), and \item \(\push_{n, γ} = \{(⟨ξ, ρ⟩, ⟨ξ ∪ \{(ρn, γ)\}, ρn⟩) ∣ ⟨ξ, ρ⟩ ∈ \mathrm{TS}_Γ, ρn ∉ \dom(ξ)\}\).\qedhere \end{itemize} \end{itemize} \end{definition}
\begin{example}\label{ex:mcfl} Consider $Σ = \{\text{a}, \text{b}, \text{c}\}$, $Γ = \{*, \# \}$, the $(\mathrm{TSS}_Γ, Σ)$-automaton $ℳ = ([4], T, \{1\}, \{4\})$, and \begin{align*} T: \begin{array}[t]{r@{{}=(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ₁ & 1 & \text{a} & \mathrm{TS}_Γ & \push_{1,*} & 1 \\ τ₂ & 1 & ε & \mathrm{TS}_Γ & \push_{1,\#} & 2 \\ τ₃ & 2 & ε & \equals_\# & \down & 2 \end{array} \quad \begin{array}[t]{r@{{}=(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ₄ & 2 & \text{b} & \equals_* & \down & 2 \\ τ₅ & 2 & ε & \tsbottom & \up_1 & 3 \\ τ₆ & 3 & \text{c} & \equals_* & \up_1 & 3 \end{array} \quad \begin{array}[t]{r@{{}=(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ₇ & 3 & ε & \equals_\# & \down & 4\text{.} \end{array} \end{align*} The runs of $ℳ$ all have a specific form: $ℳ$ executes $τ₁$ arbitrarily often (say $n$ times) until it executes $τ₂$, leading to the storage configuration \( ζ = ⟨\{(ε, @), (1, *), …, (1^n, *), (1^{n+1}, \#)\}, 1^{n+1}⟩ \) where $1^k$ means that 1 is repeated $k$ times. The stack of $ζ$ is a monadic tree where the leaf is labelled with $\#$, the root is labelled with $@$, and the remaining $n$ nodes are labelled with $*$.
The stack pointer of $ζ$ points to the leave. From this configuration $ℳ$ executes $τ₃$ once and $τ₄$ $n$ times (i.e. for each $*$ on the stack), moving the stack pointer to the root. Then $ℳ$ executes $τ₅$ once and $τ₆$ $n$ times, leading to the final state. Hence the language of $ℳ$ is $L(ℳ) = \{ \text{a}^n \text{b}^n \text{c}^n ∣ n ∈ ℕ \}$, which is not context-free. \end{example} \begin{example}\label{ex:approximation-mcfl} The following two approximation strategies for multiple context-free languages are taken from the literature. Let $Γ$ be a finite set. \begin{enumerate} \item Van Cranenburgh~\cite[Sec.~4]{Cra12} observed that the idea of \cref{ex:superset-approximation}\,\ref{ex:superset-approximation:equivalent-nts} also applies to multiple context-free grammars (short: MCFG). The idea can be applied to tree-stack automata similarly to the way it was applied to pushdown automata in \cref{ex:superset-approximation}\,\ref{ex:superset-approximation:equivalent-nts}. The resulting data storage is still a tree-stack storage. This approximation strategy is total and thus leads to a superset approximation. \item Burden and Ljunglöf~\cite[Sec.~4]{BurLju05} and van Cranenburgh~\cite[Sec.~4]{Cra12} proposed to split each production of a given MCFG into multiple productions, each of fan-out 1. Since the resulting grammar is of fan-out 1, it produces a context-free language and can be recognised by a pushdown automaton. The corresponding approximation strategy in our framework is \( A_{\text{cf}, Γ}: \mathrm{TS}_Γ → Γ^* \) with \( A_{\text{cf}, Γ}((ξ, n_1⋯n_k)) = ξ(n_1⋯n_k) ⋯ ξ(n_1n_2) ξ(n_1) \) for every $(ξ, n_1⋯n_k) ∈ \mathrm{TS}_Γ$ with $n_1, …, n_k ∈ ℕ_+$. The resulting data storage is a pushdown storage. $A_{\text{cf}, Γ}$ is total and thus leads to a superset approximation.\qedhere \end{enumerate} \end{example} \begin{figure}\label{fig:tss-approximation-transitions} \end{figure} \begin{example}\label{ex:approximations-of-mcfl} Let us consider the $(\mathrm{TSS}_Γ, Σ)$-automaton $ℳ$ from \cref{ex:mcfl}. \Cref{fig:tss-approximation-transitions} shows the transitions of the $(\app{A_{\text{cf},Γ}}{\mathrm{TSS}_Γ}, Σ)$-automaton $\app{A_{\text{cf},Γ}}{ℳ}$ (cf. \cref{ex:approximation-mcfl}) and the $(\app{(A_{\text{cf},Γ} \comp A_{\text{top}})}{\mathrm{TSS}_Γ}, Σ)$-automaton $\app{(A_{\text{cf},Γ} \comp A_{\text{top}})}{ℳ}$ (cf. also \cref{ex:superset-approximation}). The languages recognised by the two automata are \(L(\app{A_{\text{cf},Γ}}{ℳ}) = \{ a^n b^n c^m ∣ n, m ∈ ℕ \}\) and \(L(\app{(A_{\text{cf},Γ} \comp A_{\text{top}})}{ℳ}) = \{a^n b^m c^k ∣ n, m, k ∈ ℕ \}\). Clearly, \(L(\app{A_{\text{cf},Γ}}{ℳ})\) is a context-free language. Since \(\app{(A_{\text{cf}} \comp A_{\text{top}})}{ℳ}\) has finitely many storage configurations, its language is recognisable by a finite state automaton (\cref{obs:equivalent-fsa}). \end{example} \section{Coarse-to-fine $n$-best parsing for weighted automata with storage} \label{sec:coarse-to-fine-parsing} Parsing is a process that takes a finite representation $ℛ$ of a language \(L(ℛ) ⊆ Σ^*\) and a word $w ∈ Σ^*$, and outputs analyses of $w$ in $ℛ$. If $ℛ$ is a grammar, then the analyses of $w$ are the \emph{parse trees} in~$ℛ$ for~$w$. If $ℛ$ is an automaton (with storage), then the analyses of $w$ are the runs of~$ℛ$ on~$w$. Since this paper is concerned with weighted automata with storage, let $ℛ$ be an $(S, Σ, K)$-automaton. Also, let $K$ be partially ordered by a relation $≤$. We will call a run $θ$ „better than“ a run $θ'$ if $\wt_ℛ(θ) ≥ \wt_ℛ(θ')$. 
Using $\wt_ℛ$, we can assign weights to the runs of $ℛ$ on $w$ and enumerate those runs in descending order (with respect to $≤$) of their weights.\footnote{The resulting list of runs is not unique since different runs may get the same weight and since we only have a partial order.} If we output the first $n$ from the descending list of runs, we call the parsing \emph{$n$-best parsing} \cite{HuaChi05}. \emph{Coarse-to-fine parsing} \cite{Cha+06} employs a simpler (i.e. easier to parse) automaton $ℛ'$ to parse $w$ and uses the runs of $ℛ'$ on $w$ to narrow the search space for the runs of $ℛ$ on $w$. To ensure that there are runs of $ℛ'$ on $w$ whenever there are runs of $ℛ$ on $w$, we require that $L(ℛ') ⊇ L(ℛ)$. The automaton $ℛ'$ is obtained by superset approximation. In particular, we require $ℛ' = \app{A}{ℛ}$ for some total approximation strategy $A$. \begin{algorithm} \caption{Coarse-to-fine $n$-best parsing for weighted automata with storage}\label{alg:coarse-to-fine} \begin{algorithmic}[1] \Require $(S, Σ, K)$-automaton $ℳ$, \enspace $S$-proper total approximation strategy $A$, \enspace $n ∈ ℕ$, \enspace word $w ∈ Σ^*$ \Ensure some set of $n$ greatest (with respect to the image under $\wt_ℳ$ and $≤$) runs of $ℳ$ on $w$ \State \(X ← ∅\)\Comment{$X$ is the set of runs of $ℳ$ on $w$ that were already found} \State \(Y ← \Runs_{\app{A}{ℳ}}(w)\)\Comment{$Y$ is the set of runs of $\app{A}{ℳ}$ on $w$ that were not yet considered} \While{\(\lvert X \rvert < n\) \textbf{or} \(\min_{θ ∈ X} \wt_{ℳ}(θ) < \max_{θ' ∈ Y} \wt_{\app{A}{ℳ}}(θ')\)} \State \(θ' ← \text{smallest element of $Y$ with respect to the image under $\wt_{\app{A}{ℳ}}$}\) \State \(Y ← Y ∖ \{θ'\}\) \For{\textbf{each} \(θ ∈ \app{A}{}^{-1}(θ')\) that is a sequence of transitions in $ℳ$} \If{\(θ ∈ \Runs_ℳ\)} \(X ← X ∪ \{θ\}\)\Comment{it is sufficient to only check the storage behaviour for $θ$} \EndIf \EndFor \EndWhile \State\Return a set of $n$ greatest elements of $X$ with respect to the image under $\wt_ℳ$ \end{algorithmic} \end{algorithm} \Cref{alg:coarse-to-fine} describes coarse-to-fine $n$-best parsing for weighted automata with storage. The inputs are an $(S, Σ, K)$-automaton $ℳ$, an $S$-proper approximation strategy $A$ which will be used to construct an approximation of $ℳ$, a natural number $n$ which specifies how many runs should be computed, and a word $w ∈ Σ^*$ which we want to parse. The output is a set of $n$-best runs of $ℳ$ on $w$. The algorithm starts with a set $X$ that is empty (line~1) and a set $Y$ that contains all the runs of $\app{A}{ℳ}$ on $w$ (line~2). Then, as long as $X$ has less than $n$ elements or an element of $Y$ is greater than the smallest element in $X$ with respect to their weights (line~3), we take the greatest element $θ'$ of $Y$ (line~4), remove $θ'$ from $Y$ (line~5), calculate the corresponding sequences $θ$ of transitions from $ℳ$ (line~6), and add $θ$ to $X$ if $θ$ is a run of $ℳ$ (line~7). We can restrict the automaton $\app{A}{ℳ}$ to the input $w$ with the usual product construction. The set of runs of the resulting product automaton (let us call it $ℳ_{A, w}$) can be mapped onto $\Runs_{\app{A}{ℳ}}(w)$ by some projection $φ$. Hence $ℳ_{A, w}$ (finitely) represents $\Runs_{\app{A}{ℳ}}(w)$. The automaton $ℳ_{A, w}$ can be construed as a (not necessarily finite) graph $G_{A, w}$ with the $ℳ_{A, w}$-configurations as nodes. The edges shall be labelled with the images of the corresponding transitions of $ℳ_{A, w}$ under $φ$. Then the paths (i.e. 
sequences of edge labels) in $G_{A, w}$ from the initial $ℳ_{A, w}$-configuration to all the final $ℳ_{A, w}$-configurations are exactly the elements of $\Runs_{\app{A}{ℳ}}(w)$. Those paths can be enumerated in descending order of their weights using a variant of Dijkstra's algorithm. This provides us with a method to compute $\max_{θ' ∈ Y} \wt_{\app{A}{ℳ}}(θ')$ on line~3 and $θ'$ on line~4 of \cref{alg:coarse-to-fine}. \begin{example} Let $Γ = \{\text{a}, \text{b}, \text{c}\}$, $Σ = Γ ∪ \{\#\}$, $K$ be the Viterbi semiring \((ℕ ∪ \{∞\}, {\min}, {+}, ∞, 0)\) with linear order ${≤}$, and $A_\#: Γ^* → ℕ, u ↦ \lvert u \rvert$ be a total approximation strategy. Note that $\app{A_\#}{\mathrm{PD}_Γ} = \mathrm{Count}$. Now consider the \((\mathrm{PD}_Γ, Σ, K)\)-automaton \(ℳ = ([3], T, \{1\}, \{3\}, δ)\) and the $(\mathrm{Count}, Σ, K)$-automaton \(\app{A_\#}{ℳ} = ([3], T', \{1\}, \{3\}, δ')\) where \(T = \{τ_1, …, τ_8\}\) and \(T' = \{τ_1', τ_2', τ_4', τ_5', τ_6', τ_7', τ_8'\}\) with \begin{align*} &\begin{array}[t]{@{}r@{{=} (}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_1 & 1 & \text{a} & Γ^* & \mathrm{push}_{\text{a}} & 1 \\ τ_5 & 2 & \text{a} & \mathrm{top}_{\text{a}} & \mathrm{pop} & 2 \end{array} &&\begin{array}[t]{@{}r@{{=} (}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_2 & 1 & ε & Γ^* & \mathrm{push}_{\text{b}} & 1 \\ τ_6 & 2 & \text{b} & \mathrm{top}_{\text{b}} & \mathrm{pop}_{\text{b}} & 2 \end{array} &&\begin{array}[t]{@{}r@{{=} (}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_3 & 1 & ε & Γ^* & \mathrm{push}_{\text{c}} & 1 \\ τ_7 & 2 & \text{c} & \mathrm{top}_{\text{c}} & \mathrm{pop} & 2 \end{array} &&\begin{array}[t]{@{}r@{{=} (}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_4 & 1 & \# & Γ^* & \mathrm{stay} & 2 \\ τ_8 & 2 & ε & \mathrm{bottom} & \mathrm{stay} & 3 \end{array} \\[.3em] &\begin{array}[t]{@{}r@{{=}(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_1' & 1 & \text{a} & ℕ & \mathrm{inc} & 1 \\ τ_5' & 2 & \text{a} & ℕ_+ & \mathrm{dec} & 2 \end{array} &&\begin{array}[t]{@{}r@{{=}(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}} τ_2' & 1 & ε & ℕ & \mathrm{inc} & 1 \\ τ_6' & 2 & \text{b} & ℕ_+ & \mathrm{dec} & 2 \end{array} &&\begin{array}[t]{@{}l@{}} \\ τ_7' {=} (2,\, \text{c},\,ℕ_+,\,\mathrm{dec},\,2) \end{array} &&\begin{array}[t]{@{}r@{{=}(}l@{,\,}l@{,\,}l@{,\,}l@{,\,}l@{)}l} τ_4' & 1 & \# & ℕ & \mathrm{id} & 2 \\ τ_8' & 2 & ε & \{0\} & \mathrm{id} & 3 & \text{,} \end{array} \end{align*} $δ(τ) = 1$ for each $τ ∈ T$, and $δ'(τ') = 1$ for each transition $τ' ∈ T'$. \footnote{$L(ℳ) = \{\text{a}^k\#w ∣ k ∈ ℕ, w ∈ \{a, b, c\}^*, \text{a occurs $k$ times in $w$}\}$ and $L(\app{A_\#}{ℳ}) = \{\text{a}^k\#w ∣ k ∈ ℕ, w ∈ \{a, b, c\}^*, \lvert w \rvert ≥ k\}$.} We use \cref{alg:coarse-to-fine} to obtain the 1-best run of $w = \text{a}\#\text{b}\text{a}$: On line~4, we get \(θ' = τ_1'τ_2'τ_4'τ_7'τ_5'τ_8'\) (the only run of $\app{A_\#}{ℳ}$ on $w$). Then there are only two possible values for $θ$ on line~7, namely \(θ_1 = τ_1τ_2τ_4τ_7τ_5τ_8\) and \(θ_2 = τ_1τ_3τ_4τ_7τ_5τ_8\) of which only $θ_2$ is a run of $ℳ$, hence the algorithm returns $\{θ_2\}$. \end{example} \paragraph{Outlook.} The author intends to extend \cref{alg:coarse-to-fine} to use multiple levels of approximation (i.e. multiple approximation strategies that can be applied in sequence) and to investigate the viability of this extension for parsing multiple context-free languages in the context of natural languages. \inputencoding{utf8} \end{document}
arXiv
How to create 2D (directional) noise?

I'm just getting started trying to understand noise generation algorithms. What I'm trying to achieve is to get a 2D (or 3D) grid of random directional vectors (again, 2D or 3D) according to a noise distribution like Perlin, i.e. not just a grid of totally randomly generated vectors (which would be easy to do). Before starting to figure this out, I had assumed that 1D noise functions would be those that produce a series of float values (ranging from -1 to 1, for example), that 2D noise functions would produce float2 values, 3D noise would produce float3 values, and so on. However, searching for so-called 2D noise functions, it seems that they are used to produce a 2D grid (i.e. texture) of single float values. What I'm looking for instead is a 2D grid of 2D (float2 / vector2) values, or a 3D grid of 3D (float3 / vector3) values. From my (still pretty limited) understanding of a gradient-based noise like Perlin, I feel like I could probably hack something together from an existing Perlin function - doing a bi-linear/tri-linear interpolation of the surrounding gradient vectors which are ordinarily used for the subsequent dot product in Perlin noise. But I have no idea if that's the correct approach. Are there existing libraries/methods for something like this? – kinkersnick

Why don't you just use Perlin noise twice on the same grid or volume, each time with slightly different parameters (a phase shift, or different pseudo-random vectors)? In this case both components of your float2 are smoothly defined by a Perlin noise field. A float2 $v$ could be defined as $v = \{P(u,v), P(u+0.5, v+0.5)\},$ where $P(u,v)$ is the Perlin noise function. However, this will make the $x$ and $y$ components of $v$ very similar, depending on the scaling of the grid values. – Reynolds

Comment (kinkersnick): I did wonder about that, but it seems unnecessarily computationally intensive to need to run the function twice. This is for a real-time application, so I want it to be as fast as possible ideally. But I will definitely investigate it, thanks!
Comment (Reynolds): Perlin noise is already a very efficient means to obtain noise. It is not uncommon to see multiple evaluations of Perlin noise per pixel for procedural textures. The performance loss will not be severe.

You can use the gradient of the noise/hash, which for a function $f:\mathbb{R}^n\rightarrow\mathbb{R}$ would be $n$-dimensional (depending on the application this may not work for you). Another possibility, as Reynolds mentioned, is to generate the noise by calling the function multiple times. – lightxbulb
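Below is a minimal, self-contained Python sketch of the approach suggested in Reynolds' answer: evaluate one scalar noise field twice at offset coordinates and treat the two samples as the x and y components of a direction vector. To stay dependency-free it uses a simple hash-based value noise as a stand-in for real Perlin noise; the grid scale, channel offsets (17.3, 31.7) and hashing constants are arbitrary illustrative choices, not anything prescribed by the answers above.

    import math

    def _hash2(ix, iy, seed=0):
        """Deterministic pseudo-random value in [-1, 1] for an integer lattice point."""
        h = (ix * 374761393 + iy * 668265263 + seed * 974634141) & 0xFFFFFFFF
        h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
        return (h & 0xFFFF) / 32767.5 - 1.0

    def _smoothstep(t):
        return t * t * (3.0 - 2.0 * t)

    def value_noise(x, y, seed=0):
        """Smooth scalar noise, roughly in [-1, 1]; stands in for Perlin's P(x, y)."""
        x0, y0 = math.floor(x), math.floor(y)
        tx, ty = _smoothstep(x - x0), _smoothstep(y - y0)
        # Bilinear interpolation of the four corner values.
        n00, n10 = _hash2(x0, y0, seed), _hash2(x0 + 1, y0, seed)
        n01, n11 = _hash2(x0, y0 + 1, seed), _hash2(x0 + 1, y0 + 1, seed)
        nx0 = n00 + tx * (n10 - n00)
        nx1 = n01 + tx * (n11 - n01)
        return nx0 + ty * (nx1 - nx0)

    def direction_field(width, height, scale=0.1, seed=0):
        """Grid of unit 2D direction vectors built from two offset noise evaluations."""
        field = []
        for j in range(height):
            row = []
            for i in range(width):
                u, v = i * scale, j * scale
                # Two decorrelated channels: a large offset plus a different seed
                # plays the role of the "phase shift" mentioned in the answer.
                vx = value_noise(u, v, seed)
                vy = value_noise(u + 17.3, v + 31.7, seed + 1)
                length = math.hypot(vx, vy) or 1.0
                row.append((vx / length, vy / length))
            field.append(row)
        return field

    if __name__ == "__main__":
        for row in direction_field(4, 4):
            print(["(%+.2f, %+.2f)" % d for d in row])

Normalising each sample gives a pure direction field; drop the normalisation if the noise magnitude should be kept as a vector length, and reuse the lattice hashes across the two channels if the extra evaluations turn out to matter for a real-time budget.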
CommonCrawl
Study of charmonium production in b-hadron decays and first evidence for the decay $B^0_s \to \phi\phi\phi$

LHCb Collaboration; Bernet, R; Müller, K; Serra, N; Steinkamp, O; Straumann, U; Vollhardt, A; et al (2017). Study of charmonium production in b-hadron decays and first evidence for the decay $B^0_s \to \phi\phi\phi$. European Physical Journal C - Particles and Fields, C77(9):609.

Using decays to $\phi$-meson pairs, the inclusive production of charmonium states in b-hadron decays is studied with pp collision data corresponding to an integrated luminosity of 3.0 $fb^{−1}$, collected by the LHCb experiment at centre-of-mass energies of 7 and 8 TeV. Denoting by $\mathcal{B}_C≡\mathcal{B}(b \to CX)×\mathcal{B}(C \to \phi\phi)$ the inclusive branching fraction of a b hadron to a charmonium state C that decays into a pair of $\phi$ mesons, ratios $R^{C1}_{C2} ≡ \mathcal{B}_{C1}/\mathcal{B}_{C2}$ are determined as $R^{\chi_{c0}}_{\eta_c(1S)} = 0.147 \pm 0.023 \pm 0.011$, $R^{\chi_{c1}}_{\eta_c(1S)} = 0.073 \pm 0.016 \pm 0.006$, $R^{\chi_{c2}}_{\eta_c(1S)} = 0.081 \pm 0.013 \pm 0.005$, $R^{\chi_{c1}}_{\chi_{c0}} = 0.50 \pm 0.11 \pm 0.01$, $R^{\chi_{c2}}_{\chi_{c0}} = 0.56 \pm 0.10 \pm 0.01$ and $R^{\eta_c(2S)}_{\eta_c(1S)} = 0.040 \pm 0.011 \pm 0.004$. Here and below the first uncertainties are statistical and the second systematic. Upper limits at 90% confidence level for the inclusive production of X(3872), X(3915) and $\chi_{c2}(2P)$ states are obtained as $R^{X(3872)}_{\chi_{c1}} < 0.34$, $R^{X(3915)}_{\chi_{c0}} < 0.12$ and $R^{\chi_{c2}(2P)}_{\chi_{c2}} < 0.16$. Differential cross-sections as a function of transverse momentum are measured for the $\eta_c(1S)$ and $\chi_c$ states. The branching fraction of the decay $B^0_s \to \phi\phi\phi$ is measured for the first time, $\mathcal{B}(B^0_s \to \phi\phi\phi) = (2.15 \pm 0.54 \pm 0.28 \pm 0.21_{\mathcal{B}})×10^{−6}$. Here the third uncertainty is due to the branching fraction of the decay $B^0_s \to \phi\phi$, which is used for normalization. No evidence for intermediate resonances is seen. A preferentially transverse $\phi$ polarization is observed. The measurements allow the determination of the ratio of the branching fractions for the $\eta_c(1S)$ decays to $\phi\phi$ and $p\overline{p}$ as $\mathcal{B}(\eta_c(1S) \to \phi\phi)/\mathcal{B}(\eta_c(1S)→p\overline{p}) = 1.79 \pm 0.14 \pm 0.32$.

https://doi.org/10.1140/epjc/s10052-017-5151-8
CommonCrawl
Two consecutive positive even numbers are each squared. The difference of the squares is 60. What is the sum of the original two numbers? Let the two numbers be $x$ and $x + 2$, where $x$ is even. We want to find $x + (x + 2) = 2x + 2$, and we are told that $(x + 2)^2 - x^2 = 60$. This last equation can be factored as a difference of squares: $(x + 2 + x)(x + 2 - x) = (2x + 2)(2) = 60$. It follows that $2x + 2 = 60/2 = \boxed{30}$.
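As a quick check of the boxed answer (not part of the original solution), a brute-force search in Python confirms the pair and the sum:

    # consecutive positive even numbers whose squares differ by 60
    for x in range(2, 100, 2):
        if (x + 2) ** 2 - x ** 2 == 60:
            print(x, x + 2, x + x + 2)   # prints: 14 16 30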
Math Dataset
Analysis of a Burgers equation with singular resonant source term and convergence of well-balanced schemes June 2012, 32(6): 1915-1938. doi: 10.3934/dcds.2012.32.1915 The Cauchy problem at a node with buffer Mauro Garavello 1, and Paola Goatin 2, Dipartimento di Scienze e Tecnologie Avanzate, Università del Piemonte Orientale "A. Avogadro", viale T. Michel 11, 15121 Alessandria, Italy INRIA Sophia Antipolis - Méditerranée, EPI OPALE, 2004, route des Lucioles - BP 93, 06902 Sophia Antipolis Cedex, France Received January 2011 Revised October 2011 Published February 2012 We consider the Lighthill-Whitham-Richards traffic flow model on a network composed by an arbitrary number of incoming and outgoing arcs connected together by a node with a buffer. Similar to [15], we define the solution to the Riemann problem at the node and we prove existence and well posedness of solutions to the Cauchy problem, by using the wave-front tracking technique and the generalized tangent vectors. Keywords: traffic flow at junctions, wave-front tracking., Scalar conservation laws, macroscopic models. Mathematics Subject Classification: Primary: 36L65; Secondary: 90B2. Citation: Mauro Garavello, Paola Goatin. The Cauchy problem at a node with buffer. Discrete & Continuous Dynamical Systems - A, 2012, 32 (6) : 1915-1938. doi: 10.3934/dcds.2012.32.1915 M. K. Banda, M. Herty and A. Klar, Gas flow in pipeline networks,, Netw. Heterog. Media, 1 (2006), 41. Google Scholar A. Bressan, A contractive metric for systems of conservation laws with coinciding shock and rarefaction curves,, J. Differential Equations, 106 (1993), 332. doi: 10.1006/jdeq.1993.1111. Google Scholar A. Bressan, "Hyperbolic Systems of Conservation Laws. The One-Dimensional Cauchy Problem,", Oxford Lecture Series in Mathematics and its Applications, 20 (2000). Google Scholar A. Bressan and R. M. Colombo, The semigroup generated by $2\times 2$ conservation laws,, Arch. Rational Mech. Anal., 133 (1995), 1. doi: 10.1007/BF00375350. Google Scholar A. Bressan, G. Crasta and B. Piccoli, Well-posedness of the Cauchy problem for $n\times n$ systems of conservation laws,, Mem. Amer. Math. Soc., 146 (2000). Google Scholar G. M. Coclite, M. Garavello and B. Piccoli, Traffic flow on a road network,, SIAM J. Math. Anal., 36 (2005), 1862. doi: 10.1137/S0036141004402683. Google Scholar R. M. Colombo, P. Goatin and B. Piccoli, Road network with phase transitions,, J. Hyperbolic Differ. Equ., 7 (2010), 85. doi: 10.1142/S0219891610002025. Google Scholar C. D'apice, R. Manzo and B. Piccoli, Packet flow on telecommunication networks,, SIAM J. Math. Anal., 38 (2006), 717. doi: 10.1137/050631628. Google Scholar M. Garavello and B. Piccoli, Traffic flow on a road network using the Aw-Rascle model,, Comm. Partial Differential Equations, 31 (2006), 243. Google Scholar M. Garavello and B. Piccoli, "Traffic Flow on Networks. Conservation Laws Models,", AIMS Series on Applied Mathematics, 1 (2006). Google Scholar M. Garavello and B. Piccoli, Conservation laws on complex networks,, Ann. H. Poincaré, 26 (2009), 1925. Google Scholar M. Garavello and B. Piccoli, A multibuffer model for LWR road networks,, preprint, (2010). Google Scholar S. Göttlich, M. Herty and A. Klar, Modelling and optimization of supply chains on complex networks,, Commun. Math. Sci., 4 (2006), 315. Google Scholar M. Herty, A. Klar and B. Piccoli, Existence of solutions for supply chain models based on partial differential equations,, SIAM J. Math. Anal., 39 (2007), 160. doi: 10.1137/060659478. 
Google Scholar M. Herty, J.-P. Lebacque and S. Moutari, A novel model for intersections of vehicular traffic flow,, Netw. Heterog. Media, 4 (2009), 813. Google Scholar M. Herty, S. Moutari and M. Rascle, Optimization criteria for modelling intersections of vehicular traffic flow,, Netw. Heterog. Media, 1 (2006), 275. doi: 10.3934/nhm.2006.1.275. Google Scholar M. Herty and M. Rascle, Coupling conditions for a class of second-order models for traffic flow,, SIAM J. Math. Anal., 38 (2006), 595. doi: 10.1137/05062617X. Google Scholar H. Holden and N. H. Risebro, "Front Tracking for Hyperbolic Conservation Laws,", Applied Mathematical Sciences, 152 (2002). Google Scholar M. J. Lighthill and G. B. Whitham, On kinematic waves. II. A theory of traffic flow on long crowded roads,, Proc. Roy. Soc. London. Ser. A., 229 (1955), 317. Google Scholar A. Marigo and B. Piccoli, A fluid dynamic model for $T$-junctions,, SIAM J. Math. Anal., 39 (2008), 2016. doi: 10.1137/060673060. Google Scholar P. I. Richards, Shock waves on the highway,, Operations Res., 4 (1956), 42. doi: 10.1287/opre.4.1.42. Google Scholar D. Sun, I. S. Strub and A. M. Bayen, Comparison of the performance of four Eulerian network flow models for strategic air traffic management,, Netw. Heterog. Media, 2 (2007), 569. doi: 10.3934/nhm.2007.2.569. Google Scholar
Networks & Heterogeneous Media, 2017, 12 (2) : 245-258. doi: 10.3934/nhm.2017010 Boris P. Andreianov, Giuseppe Maria Coclite, Carlotta Donadello. Well-posedness for vanishing viscosity solutions of scalar conservation laws on a network. Discrete & Continuous Dynamical Systems - A, 2017, 37 (11) : 5913-5942. doi: 10.3934/dcds.2017257 Giuseppe Maria Coclite, Lorenzo di Ruvo, Jan Ernest, Siddhartha Mishra. Convergence of vanishing capillarity approximations for scalar conservation laws with discontinuous fluxes. Networks & Heterogeneous Media, 2013, 8 (4) : 969-984. doi: 10.3934/nhm.2013.8.969 Evgeny Yu. Panov. On a condition of strong precompactness and the decay of periodic entropy solutions to scalar conservation laws. Networks & Heterogeneous Media, 2016, 11 (2) : 349-367. doi: 10.3934/nhm.2016.11.349 Shijin Deng, Weike Wang. Pointwise estimates of solutions for the multi-dimensional scalar conservation laws with relaxation. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1107-1138. doi: 10.3934/dcds.2011.30.1107 Darko Mitrovic. New entropy conditions for scalar conservation laws with discontinuous flux. Discrete & Continuous Dynamical Systems - A, 2011, 30 (4) : 1191-1210. doi: 10.3934/dcds.2011.30.1191 Darko Mitrovic, Ivan Ivec. A generalization of $H$-measures and application on purely fractional scalar conservation laws. Communications on Pure & Applied Analysis, 2011, 10 (6) : 1617-1627. doi: 10.3934/cpaa.2011.10.1617 Alexander Bobylev, Mirela Vinerean, Åsa Windfäll. Discrete velocity models of the Boltzmann equation and conservation laws. Kinetic & Related Models, 2010, 3 (1) : 35-58. doi: 10.3934/krm.2010.3.35 Debora Amadori, Wen Shen. Front tracking approximations for slow erosion. Discrete & Continuous Dynamical Systems - A, 2012, 32 (5) : 1481-1502. doi: 10.3934/dcds.2012.32.1481 Stefano Bianchini. On the shift differentiability of the flow generated by a hyperbolic system of conservation laws. Discrete & Continuous Dynamical Systems - A, 2000, 6 (2) : 329-350. doi: 10.3934/dcds.2000.6.329 PDF downloads (6) Mauro Garavello Paola Goatin
CommonCrawl
\begin{document} \title[Pathwise stability and positivity of nonlinear SDEs]{A note on Pathwise stability and positivity of nonlinear stochastic differential equations} \author[I. S. Stamatiou]{I. S. Stamatiou} \email{[email protected]} \begin{abstract} We use the semi-discrete method, originally proposed in \emph{Halidias (2012), Semi-discrete approximations for stochastic differential equations and applications, International Journal of Computer Mathematics, 89(6)}, to reproduce qualitative properties of a class of nonlinear stochastic differential equations with nonnegative, non-globally Lipschitz coefficients and a unique equilibrium solution. The proposed fixed-time step method preserves the positivity of solutions and reproduces the almost sure asymptotic stability behavior of the equilibrium with no time-step restrictions. \end{abstract} \date\today \keywords{Explicit Numerical Scheme; Semi-Discrete Method; non-linear SDEs; Stochastic Differential Equations; Boundary Preserving Numerical Algorithm; Pathwise Stability; \newline{\bf AMS subject classification 2010:} 60H10, 60H35, 65C20, 65C30, 65J15, 65L20.} \maketitle \section{Introduction}\label{PSP:sec:intro} \setcounter{equation}{0} We are interested in the following class of scalar stochastic differential equations (SDEs), \beqq \label{PSP-eq:scalarSDEs} x_t =x_0 + \int_0^t x_sa(x_s)ds + \int_0^t x_sb(x_s)dW_s, \eeqq where $a(\cdot), b(\cdot)$ are non-negative functions with $b(u)\neq0$ for $u\neq0, x_0\geq0$ and $\{W_{t}\}_{t\geq0}$ is a one-dimensional Wiener process adapted to the filtration $\{{\mathcal F}_t\}_{t\geq0}.$ We want to reproduce dynamical properties of (\ref{PSP-eq:scalarSDEs}). We use a fixed-time step explicit numerical method, namely the semi-discrete method, which reads \beqq \label{PSP-eq:SDmethod} y_{n+1} =y_n\exp\left\{\left(a(y_n)-\frac{b^2(y_n)}{2}\right)\Delta + b(y_n)\Delta W_n\right\}, \quad n\in\bbN, \eeqq with $y_0=x_0,$ where $\Delta=t_{n+1}-t_{n}$ is the time step-size and $\Delta W_n:= W_{t_{n+1}}- W_{t_{n}}$ are the increments of the Wiener process. For the derivation of (\ref{PSP-eq:SDmethod}) see Section \ref{PSP:sec:proofs}. The scopes of this article are two. Our main goal is to reproduce the almost sure (a.s.) stability and instability of the unique equilibrium solution of (\ref{PSP-eq:scalarSDEs}), i.e. for the trivial solution $x_t\equiv0.$ The positivity of the drift pushes the solution to explosive situations and the diffusion stabilizes this effect in a way we want to mimic. On the other hand, SDE (\ref{PSP-eq:scalarSDEs}) has unique positive solutions when $x_0>0.$ The semi-discrete method (\ref{PSP-eq:SDmethod}) preserves positivity by construction. Explicit fixed-step Euler methods fail to strongly converge to solutions of (\ref{PSP-eq:scalarSDEs}) when the drift or diffusion coefficient grows superlinearly \cite[Theorem 1]{hutzenthaler_et_al.:2011}. Tamed Euler methods were proposed to overcome the aforementioned problem, cf. \cite[(4)]{hutzenthaler_jentzen:2015}, \cite[(3.1)]{tretyakov_zhang:2013}, \cite{sabanis:2016} and references therein; nevertheless in general they fail to preserve positivity. We also mention the method presented in \cite{neuenkirch_szpruch:2014} where they use the Lamperti-type transformation to remove the nonlinearity from the diffusion to the drift part of the SDE. 
Moreover, adaptive time-stepping strategies applied to explicit Euler method are an alternative way to address the problem and there is an ongoing research on that approach, see \cite{fand_giles:2016}, \cite{kelly_lord:2016} and \cite{kelly_et_al:2017}. However, the fixed-step method we propose reproduces the almost sure asymptotic stability behavior of the equilibrium with no time-step restrictions, compare Theorems \ref{PSP-theorem:asstability} and \ref{PSP-theorem:asinstability} with \cite[Theorem 4.1 and 4.2]{kelly_et_al:2017} respectively. Our proposed fixed-step method is explicit, strongly convergent, non-explosive and positive. The semi-discrete method was originally proposed in \cite{halidias:2012} and further investigated in \cite{halidias_stamatiou:2016}, \cite{halidias:2014}, \cite{halidias:2015}, \cite{halidias:2015d}, \cite{halidias_stamatiou:2015} and \cite{stamatiou:2017}. We discretize in each subinterval (in an appropriate additive or multiplicative way) the drift and/or the diffusion coefficient producing a new SDE which we have to solve and not an algebraic equation as all the standard numerical methods. The way of discretization is implied by the form of the coefficients of the SDE and is not unique. Let us now assume some minimal additional conditions for the functions $a(\cdot)$ and $b(\cdot).$ In particular, we assume locally Lipschitz continuity of $a(\cdot)$ and $b(\cdot),$ which in turn implies the existence of a unique, continuous ${\mathcal F}_t$-measurable process $x$ (cf. \cite[Ch. 2]{mao:2007}) satisfying (\ref{PSP-eq:scalarSDEs}) up to the explosion time $\tau_e^{x_0},$ i.e. on the interval $[0,\tau_e^{x_0}),$ where $$ \tau_e^{x_0}:=\inf\{t>0: |x_t^{x_0}|\notin [0,\infty)\}. $$ Denoting $\theta_e^{x_0}$ the first hitting time of zero, i.e. $$ \theta_e^{x_0}:=\inf\{t>0: |x_t^{x_0}|=0\}, $$ it was shown in \cite[Section 3]{appleby_et_al:2008} that in the case \beqq \label{PSP-eq:condition} \sup_{u\neq0} \frac{2a(u)}{b^2(u)}=\beta<1, \eeqq then $\tau_e^{x_0}=\theta_e^{x_0}=\infty,$ i.e. there exist unique positive solutions. The equilibrium zero solution of (\ref{PSP-eq:scalarSDEs}) is a.s. stable if (see again \cite[Section 3]{appleby_et_al:2008}) \beqq \label{PSP-eq:condition2} \lim_{u\rightarrow0} \frac{2a(u)}{b^2(u)}<1, \eeqq i.e. for all $x_0>0$ $$ \bfP (\{\w: \lim_{t\to\infty}x_t(\w)=0\})>0. $$ Condition (\ref{PSP-eq:condition2}) shows how condition (\ref{PSP-eq:condition}) is close to being sharp. Furthermore, the presence of a sufficiently intense stochastic perturbation (because of the positivity of the function $a(\cdot)$) is necessary for the existence of a unique global solution and stability of the zero equilibrium. The outline of the article is the following. In Section \ref{PSP:sec:main} we present our main results, that is Theorems \ref{PSP-theorem:asstability} and \ref{PSP-theorem:asinstability}, the proofs of which are deferred to Section \ref{PSP:sec:proofs}. Section \ref{PSP:sec:numerics} provides a numerical example. \section{Main results}\label{PSP:sec:main} \setcounter{equation}{0} \bass\label{NSF:assA} The functions $a(\cdot)$ and $b(\cdot)$ of (\ref{PSP-eq:scalarSDEs}) are non-negative with $b(u)\neq0$ for $u\neq0.$ \eass The following result provides sufficient conditions for solutions of the semi-discrete scheme (\ref{PSP-eq:SDmethod}) to demonstrate a.s. stability. \bth\label{PSP-theorem:asstability}[a.s. stability] Let $a(\cdot)$ and $b(\cdot)$ satisfy Assumption \ref{NSF:assA} and (\ref{PSP-eq:condition}), i.e. 
there exists $\beta<1$ such that \beqq \label{PSP-eq:conditionsup} \sup_{u\neq0} \frac{2a(u)}{b^2(u)}=\beta. \eeqq Let also $\{y_n\}_{n\in\bbN}$ be a solution of (\ref{PSP-eq:SDmethod}) with $y_0=x_0>0.$ Then for all $\Delta>0$, \beqq \label{PSP-eq:asstability} \lim_{\nto} y_n=0, \qquad \hbox{a.s.} \eeqq \ethe The following result provides sufficient conditions for solutions of the semi-discrete scheme (\ref{PSP-eq:SDmethod}) to demonstrate a.s. instability. \bth\label{PSP-theorem:asinstability}[a.s. instability] Let $a(\cdot)$ and $b(\cdot)$ satisfy Assumption \ref{NSF:assA} and there exists $\gamma>1$ such that \beqq \label{PSP-eq:conditioninf} \liminf_{u\rightarrow0} \frac{2a(u)}{b^2(u)}=\gamma. \eeqq Let also $\{y_n\}_{n\in\bbN}$ be a solution of (\ref{PSP-eq:SDmethod}) with $y_0=x_0>0.$ Then for all $\Delta>0$, \beqq \label{PSP-eq:asinstability} \bfP (\{\w: \lim_{\nto} y_n(\w)=0\})=0. \eeqq \ethe Note that there is no time-step restriction in the results of Theorems \ref{PSP-theorem:asstability} and \ref{PSP-theorem:asinstability}, i.e. (\ref{PSP-eq:asstability}) and (\ref{PSP-eq:asinstability}) hold for all $\Delta>0.$ \section{Numerical illustration}\label{PSP:sec:numerics} \setcounter{equation}{0} We will use the numerical example of \cite[Section 5]{kelly_et_al:2017}, that is we take $a(x)=x^2$ and $b(x)=\sigma x$ and $x_0=1$ in (\ref{PSP-eq:scalarSDEs}), i.e. \beqq \label{PSP-eq:exampleSDE} x_t =1 + \int_0^t (x_s)^3ds + \sigma\int_0^t (x_s)^2dW_s, \qquad t\geq0. \eeqq Note that the value of $\sigma$ determines the value of the ratio $2a(u)/b^2(u)=2/\sigma^2.$ The semi-discrete method (\ref{PSP-eq:SDmethod}) reads \beqq \label{PSP-eq:exampleSD} y_{n+1} =y_n\exp\left\{\left(1-\frac{\sigma^2}{2}\right)(y_n)^2\Delta + \sigma y_n\Delta W_n\right\}, \quad n\in\bbN, \eeqq with $y_0=1.$ First we examine the case of stability, that is when $\beta<1$ or $\sigma>\sqrt{2}.$ Figure \ref{PSP-fig:s23} displays a trajectory of the semi-discrete method (\ref{PSP-eq:exampleSD}) for the cases $\sigma=2$ and $\sigma=3$ accordingly. We observe the asymptotic stability in each case as well as the positivity of the paths. There is no need for time step restriction as in \cite[Fig. 2]{kelly_et_al:2017}. \begin{figure} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=2$.} \label{PSP-fig:s2} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=3.$} \label{PSP-fig:s3} \caption{Trajectories of (\ref{PSP-eq:exampleSD}) for different values of $\sigma$.} \label{PSP-fig:s23} \end{figure} Figures \ref{PSP-fig:s00b} and \ref{PSP-fig:s11b} displays the case when $\gamma>1$ or $\sigma<\sqrt{2}.$ We consider the cases $\sigma=0$ and $\sigma=1$ accordingly. Now, we observe instability and an apparent finite-time explosion. The apparent explosion time in the ordinary differential equation (case $\sigma=0$) is very close to the computed one $$ \tau_e^1:= \int_1^\infty u^{-3}du=0.5, $$ and becomes closer as we lower the step-size $\Delta.$ In the case $\sigma=1$ we observe again the apparent explosion time for the SDE which is now random. 
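The qualitative behaviour described above is straightforward to reproduce. The following Python sketch is purely illustrative (it is not the code used for the experiments, and the step size, horizon, overflow threshold and seed are arbitrary choices); it iterates the semi-discrete recursion (\ref{PSP-eq:exampleSD}) for several values of $\sigma$.
\begin{verbatim}
import math, random

def semi_discrete_path(sigma, dt=0.01, steps=2000, y0=1.0, seed=42):
    """One path of y_{n+1} = y_n*exp((1 - sigma^2/2)*y_n^2*dt + sigma*y_n*dW_n)."""
    random.seed(seed)
    y, path = y0, [y0]
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(dt))
        expo = (1.0 - 0.5 * sigma ** 2) * y * y * dt + sigma * y * dW
        if expo > 700.0:               # math.exp would overflow:
            path.append(float("inf"))  # record an apparent explosion and stop
            break
        y = y * math.exp(expo)         # stays strictly positive by construction
        path.append(y)
        if not math.isfinite(y) or y > 1e12:
            break
    return path

for sigma in (0.0, 1.0, 2.0, 3.0):
    p = semi_discrete_path(sigma)
    print(sigma, "steps:", len(p) - 1, "last value:", p[-1])
\end{verbatim}
For $\sigma > \sqrt{2}$ the computed trajectory remains positive and decays towards zero, whereas for $\sigma < \sqrt{2}$ it exhibits the apparent finite-time explosion discussed above.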
\begin{figure} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=0$ and $\Delta=0.01.$} \label{PSP-fig:s0} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=0$ and $\Delta=0.001.$} \label{PSP-fig:s0b} \caption{Trajectories of (\ref{PSP-eq:exampleSD}) for $\sigma=0$ and different values of $\Delta.$.} \label{PSP-fig:s00b} \end{figure} \begin{figure} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=1$ and $\Delta=0.01.$} \label{PSP-fig:s1} \caption{Trajectory for (\ref{PSP-eq:exampleSD}) with $\sigma=1$ and $\Delta=0.01.$} \label{PSP-fig:s1b} \caption{Trajectories of (\ref{PSP-eq:exampleSD}) for $\sigma=1.$} \label{PSP-fig:s11b} \end{figure} \section{Proofs}\label{PSP:sec:proofs} \setcounter{equation}{0} In this section we first discuss about the derivation of the semi-discrete scheme (\ref{PSP-eq:SDmethod}) and then we provide the proofs of Theorems \ref{PSP-theorem:asstability} and \ref{PSP-theorem:asinstability}. Given the equidistant partition $0=t_0<t_1<\ldots<t_N=T$ with step size $\Delta=T/N$ we consider the following process \beqq \label{PSP-eq:SDprocess} y_{t} =y_{t_n} + \int_{t_n}^t \un{a(y_{t_n})y_s}_{f(y_{t_n},y_s)}ds + \int_{t_n}^t \un{b(y_{t_n})y_s}_{g(y_{t_n},y_s)} dW_s, \eeqq in each subinterval $[t_n,t_{n+1}],$ with $y_0=x_0$ a.s. By (\ref{PSP-eq:SDprocess}) the form of discretization becomes apparent. We discretized the drift and diffusion coefficient in a multiplicative way producing a new SDE at each subinterval with the unique strong solution \beqq \label{PSP-eq:SDsolutionprocess} y_{t} =y_{t_n}\exp\left\{\left(a(y_{t_n})-\frac{b^2(y_{t_n})}{2}\right)(t-t_n) + b(y_{t_n})(W_t- W_{t_n})\right\}. \eeqq The first variable of the auxiliary functions $f(\cdot,\cdot)$ and $g(\cdot,\cdot)$ in (\ref{PSP-eq:SDprocess}) denote the discretized part. In case $a(\cdot)$ and $b(\cdot)$ are locally Lipschitz so are $f(\cdot,\cdot)$ and $g(\cdot,\cdot)$ and as a consequence we have a strong convergence result of the type (see \cite[Theorem 2.1]{halidias_stamatiou:2016}) $$ \lim_{\Delta\rightarrow0}\bfE \sup_{0\leq t\leq T}|y_t-x_t|^2=0, $$ in the case of finite moments of the original SDE and the approximation process, that is when $\bfE|x_t|^p\vee\bfE|y_t|^p<A$ for some $p>2$ and $A>0.$ However, the strong convergence of the method does not hold in all cases considered here since the moments of $(x_t)$ are bounded only up to an explosion time $\tau_e^{x_0}$ which may be finite in case (\ref{PSP-eq:condition}) does not hold. The main focus here is the preservation of the dynamics in the discretization as shown in Section \ref{PSP:sec:numerics}. The stability behavior of the equilibrium solution of (\ref{PSP-eq:SDmethod}) is an easy task since we have an analytic expression of the solution process. Nevertheless, we discuss the steps below. 
Take a $p>0$ to be specified later on and rewrite (\ref{PSP-eq:SDmethod}) as \beao |y_{n+1}|^p &=&|y_{n}|^p\exp\left\{\frac{pb^2(y_n)}{2}\left(\frac{2a(y_n)}{b^2(y_{n})}-1+p\right)\Delta\right\}\exp\left\{-\frac{p^2b^2(y_{n})}{2}\Delta + pb(y_{n})\Delta W_{n}\right\}\\ &=&\bbE(y_{n})\xi_{n+1}, \eeao where we used the notation $y_n$ for $y_{t_n},$ the exponential function $\bbE(\cdot)$ reads $$ \bbE(u) = \exp\left\{\frac{pb^2(u)}{2}\left(\frac{2a(u)}{b^2(u)}-1+p\right)\Delta\right\} $$ and for $t\in (t_n,t_{n+1}]$ we consider the SDE $$ d\xi_t = pb(y_n)\xi_t dW_t $$ with $\xi_{n}=|y_{n}|^p.$ Therefore $\bfE\xi_{n+1}=\bfE|y_{n}|^p$ and choosing $0<p<1-\beta,$ where $\beta$ is as in Theorem \ref{PSP-theorem:asstability}, we get that $\bbE(u)\leq1$ for any $\Delta>0,$ implying on the one hand the boundedness of the moments of $\{y_n\}_{n\in\bbN}$ and on the other hand the $p$-th moment exponential stability of the trivial solution of $y_t^{x_0}.$ This in turn implies the a.s. exponential stability of the trivial solution (see \cite[Theorem 4.4.2]{mao:2007}) and consequently (\ref{PSP-eq:asstability}). As a result we also know the rate of (\ref{PSP-eq:asstability}), which is exponential and determined by the function $\bbE(\cdot).$ The result of Theorem \ref{PSP-theorem:asinstability} follows by analogous arguments, where we now consider the representation \beao |y_{n+1}|^{-p} &=&|y_{n}|^{-p}\exp\left\{\frac{pb^2(y_n)}{2}\left(\frac{-2a(y_n)}{b^2(y_{n})}+1+p\right)\Delta\right\}\exp\left\{-\frac{p^2b^2(y_{n})}{2}\Delta - pb(y_{n})\Delta W_{n}\right\}\\ &=&\bbE^*(y_{n})\xi^*_{n+1}, \eeao for $0<p<\gamma-1,$ where $\gamma$ is as in the statement of Theorem \ref{PSP-theorem:asinstability} and $$ \bbE^*(u) = \exp\left\{\frac{pb^2(u)}{2}\left(-\frac{2a(u)}{b^2(u)}+1+p\right)\Delta\right\} $$ and for $t\in (t_n,t_{n+1}]$ $$ d\xi^*_t = -pb(y_n)\xi^*_t dW_t. $$ \baselineskip12pt \appendix \end{document}
arXiv
Hiroyuki Yamanishi ORCID: orcid.org/0000-0001-9331-28361, Masumi Ono2 & Yuko Hijikata3 In our research project, we have developed a scoring rubric for a second language (L2) summary writing for English as a foreign language (EFL) students in Japanese universities. This study aimed to examine the applicability of our five-dimensional rubric, which features both analytic and holistic assessments, to classrooms in the EFL context. The examination especially focused on a newly added, optional overall quality dimension and two paraphrasing dimensions: paraphrase (quantity) and paraphrase (quality). Six teacher raters evaluated 16 summaries written by Japanese EFL university student writers using our new rubric. The scoring results were quantitatively compared with the scoring results of a commonly used rubric developed by the Educational Testing Service (ETS). For the qualitative examination, the teacher raters' retrospective comments on our five-dimensional rubric were analyzed. The quantitative examination demonstrated positive results as follows: (a) adding the optional overall quality dimension improved the reliability of our rubric, (b) the overall quality dimension worked well even if it was used alone, (c) our rubric and the ETS holistic rubric overlapped moderately as L2 summary writing assessments, and (d) the two paraphrasing dimensions covered similar but different aspects of paraphrasing. However, the quantitative analysis using the generalizability theory (G theory) simulated that the reliability (generalizability coefficients) of the rubric was not high when the number of raters decreased. The qualitative examination of the raters' perceptions of our rubric generally confirmed the quantitative results. This study confirmed the applicability of our rubric to EFL classrooms. This new type of rating scale can be characterized as a "hybrid" approach that offers the user a choice of analytic and holistic measures depending on individual purposes, which can enhance teachers' explicit instructions. The importance of summary writing skills is widely acknowledged (e.g., Delaney, 2008; Hidi & Anderson, 1986; Plakans, 2008) because these skills are essential at every level of education. In particular, these skills are crucial for university students because they are often required "to write summaries as stand-alone assignments" (Marshall, 2017, p. 71) or complete other types of assignments that incorporate various kinds of sources into their writing. Summary writing is an integrated writing task, and it is understood as a reading-to-write task. In other words, this involves a series of intricate processes of comprehension, condensation, and production (Kintsch & van Dijk, 1978), which is regarded as "a highly complex, recursive reading-writing activity involving constraints that can impose an overwhelming cognitive load on students" (Kirkland & Saunders, 1991, p. 105). Due to the complexity and higher-order treatments required for summary writing, student writers are likely to encounter difficulties producing effective summaries and developing summary writing skills on their own (Grabe, 2001), and this issue exists in the English language education context. Therefore, scaffolding students' learning processes and providing clear instructions for summarization skills are also necessary in English as a foreign language (EFL) classrooms. 
However, language teachers often encounter difficulties in teaching second language (L2) summary writing because of the multidimensional nature of this genre (Yu, 2013) and the limited number of appropriate, practical teaching materials and guidelines such as Marshall (2017) and Oshima, Hogue, and Ravitch (2014). In fact, there is a tendency of insufficient instruction in L2 summary writing in Japanese and Taiwanese educational contexts, which results in self-taught summarization skills (Ono, 2011). Ono (2011) study indicates this problematic situation in EFL writing education that can also be observed in other EFL contexts. Thus, to improve such situations, it is worth developing teaching guidelines for teachers to facilitate and assess L2 summary writing in classrooms. In this study, we focus on the development of tools that can be utilized in L2 summary writing instruction in an EFL classroom context with a particular focus on the assessment of summary writing by Japanese EFL university students. Paraphrasing and textual borrowing in integrated writing Integrated writing, including summary writing, is composed of a number of different subskills such as reading and writing, depending on task types. Among them, one central skill relevant to summary writing is paraphrasing, which is also often used in academic writing in general. According to Hirvela and Du (2013), summarizing and paraphrasing require different levels of condensation of information. Previous studies indicated that paraphrasing serves a crucial role in summary writing (e.g., Shi, 2012). For example, Johns and Mayes (1990) investigated summary writing operations used by English as a second language (ESL) university students where the performances of high- and low-proficiency students were compared. They demonstrated that the low-proficiency group copied information from the original text more frequently than the high-proficiency group and that students in both groups neither combined information across paragraphs nor invented topic sentences by using their own words. Similarly, Keck (2006, 2014) reported that L2 writers in a US university tended to struggle with paraphrasing by employing insufficient paraphrasing, so-called Near Copy, when compared to native speakers of English who employed effective paraphrasing to a greater degree, namely, Moderate Revision and Substantial Revision. This copying behavior has also been investigated through studies focusing on textual borrowing. For example, Gebril and Plakans (2016) examined how textual borrowing affects lexical diversity when learners borrow words from sources in integrated reading-based writing tasks. These analyses revealed that borrowing words from the source materials determined the writers' lexical diversity and that lexical diversity significantly differed across the writing scores. From a sociocultural perspective, paraphrasing behavior may be affected by cultural and linguistic differences. Whether paraphrasing and textual borrowing are influenced by cultural backgrounds was investigated when Chinese graduate students read research papers and paraphrased them (Shi & Dong, 2018). Their results showed that textual borrowing was used more in Chinese, which was the participants' first language (L1), than in English, their L2. Shi and Dong argue that paraphrasing practices have cultural differences in that some paraphrasing practices might be acceptable in Chinese writing but not in English writing. 
Regarding linguistic differences from English, the Japanese language, which is L1 in our research context, has many differences in terms of orthography, sentence structure, and semantics. In particular, how to replace a certain word with its umbrella term or synonym can be influenced by the learners' L1. Another issue related to paraphrasing is the phenomenon of patchwriting, which "is unacceptable paraphrasing, a type of plagiarism" (Marshall, 2017, p. 65). "Patchwriting occurs when a writer copies text from a source and changes only some of the words and grammar" (p. 65). This often happens among novice writers and is seen as a developmental phase of paraphrasing attempts (Pecorari, 2003). Hence, teachers need to be aware that this inadequate manner of paraphrasing occurs regardless of the students' intention and that it takes time until students fully understand and become accustomed to paraphrasing. Thus, paraphrasing plays a vital role in summary writing; however, this skill is considerably difficult to master and teach due to its complex characteristics and the influence of writers' cultural and linguistic backgrounds. Assessing L2 summaries holistically and analytically Teachers face challenges not only in teaching summary writing but also in assessing student summaries. Difficulties in the assessment of summaries arise for many reasons such as difficulty in identifying the main ideas (Alderson, 2000), the intricate operations employed in the summarizing process, and insufficient scoring guidelines for educational purposes. In classroom settings, measures for student writing vary depending on the context (Hamp-Lyons, 1995). Although portfolio-based assessments are favored in some contexts (Black, Daiker, Sommers, & Stygall, 1994), assessments of student writers' summaries usually employ scoring rubrics, which are scoring guidelines for different criteria. As Knoch (2009) points out, rubrics or rating scales serve a central role in evaluating integrated tasks, including L2 summary writing. Furthermore, the assessment of written L2 summaries has been a central concern among research in the fields of language testing and writing because of the increasing interest in integrated writing tasks along with task authenticity (Plakans, 2010, 2015; Weigle, 2004) and the importance of integrated skills in educational settings. In performance assessment (e.g., L2 writing assessment), scoring rubrics are generally divided into two types: holistic rubrics and analytic rubrics (Bacha, 2001; Hamp-Lyons, 1995; Hyland, 2003; Weigle, 2002). Holistic assessments provide only an overall score for the performance (Hyland, 2002; Weigle, 2002) and are often used for large-scale assessments such as placement tests or high-stakes examinations. Advantages of holistic assessments are its practicality and cost-effectiveness, as it takes less time for raters to complete the assessment, thereby reducing labor costs, compared to analytic assessments (Bacha, 2001; Hamp-Lyons, 1995; Hyland, 2003; Weigle, 2002). However, a disadvantage of holistic assessments is that they cannot provide informative feedback on the performance, which neither helps teachers identify the weaknesses and strengths of individual students' performance nor provides constructive feedback on the students' performance. 
By contrast, analytic assessments require more time for raters to complete the assessments, thereby increasing the labor costs, than holistic assessments because analytic rubrics have several dimensions related to the aspects of tasks or tests assigned. Multiple dimensions in analytic rubrics have descriptors where raters evaluate each dimension and choose a score for each of the dimensions based on the descriptors. This characteristic of analytic assessments enables teacher raters to provide diagnostic and comprehensive feedback on the students' performance and allows them to identify the strengths and weaknesses of individual performance (Hamp-Lyons, 1995) as well as the student writers' learning needs. In summary, holistic scoring rubrics should be chosen if only an overall, summative score of the performance is needed, whereas analytic scoring rubrics are more suitable if both a score for individual aspects of the performance and informative feedback are necessary (Mertler, 2001; Stevens & Levi, 2013). Therefore, analytic rubrics are often preferred in classroom contexts and used for educational purposes rather than testing purposes. One of the well-known rubrics for L2 summaries is a holistic one developed by the Educational Testing Service (ETS) (Educational Testing Service, 2002). This rubric was one of the pilot rubrics examined in the process of developing the Test of English as a Foreign Language Internet-based Test (TOEFL iBT®). The feature of this rubric is that test takers receive an overall score, ranging from one to five. This means that the descriptors under each score contain various subskills (e.g., organization, sentence formation, use of own language, and language from source text) related to L2 summary writing. This holistic rubric has been used for the evaluation of L2 summaries across countries, not only in classrooms but also for research (e.g., Cumming et al., 2005). For instance, Baba (2009) utilized the ETS holistic rubric when examining Japanese university students' L2 summary writing performance in an EFL context. In line with Baba's study, the ETS rubric is also used in our current study to assess L2 student summaries in Japanese contexts. Developing scoring rubrics for L2 summary writing Apart from the ETS rubrics, the development of locally contextualized rubrics for L2 summary writing is becoming important and popular among researchers. This section discusses the features of such studies with a focus on the features of rubrics and individual contexts. In the US university context, Becker (2016) examined the effects of holistic scoring rubrics on student performance by comparing four ESL student groups: (a) those who developed a scoring rubric immediately after they completed the summary writing task, (b) those who used the rubric to score their classmates' products for the summary task, (c) those who only looked at the rubric before they completed the summary task, and (d) the control group. As the first two groups outperformed the latter two groups, it was concluded that involvement in the process of rubric development and/or application is more effective than just reviewing the same rubric. Thus, Becker's study sheds light on the pedagogical value of students developing locally contextualized rubrics, as it can help to improve their summarizing performance. It is, however, noteworthy that, although the rubric used in Becker's study was a 5-point holistic rating scale, different teaching and research contexts may require a different type of rubric. 
In the EFL university context, for example, Yu (2007) developed a holistic rubric to evaluate the overall quality of summaries written in Chinese (L1) and English (L2) in a Chinese context. This holistic rubric used "an argumentation method (e.g., D+, D, and D−) to assign scores for each summary" (p. 567), ranging from A+ to F−. In this scoring method, if the score difference between two raters was greater than 3 (e.g., C and B+), a third rater would score the same summary, and if the difference of the three scores was still greater than 3, all three raters would negotiate to assign an agreed score (see Yu, 2007, for details). Although this scoring rubric was holistic in nature, it demonstrated the following four general guidelines: "faithfulness of the source text," "summary and source text relationships scores," "conciseness and coherence," and "rater understanding" (p. 568). In a different Chinese context, an analytic rubric was developed by Li (2014) to investigate the effects of source text genres on summarization performance and the perceptions of student writers. This four-component analytic rubric consisted of "Main Idea Coverage," "Integration," "Language Use," and "Source Use" (p. 79) on a 6-point scale (i.e., 0–5 for each component). Interestingly, the analyses demonstrated contradictory results as students performed better in the expository text summarization compared to the narrative text summarization, while their perceptions indicated that narrative texts were easier to summarize than expository texts. Unlike the development of these rubrics in Chinese contexts, to our knowledge, only a few studies have developed analytic rubrics for L2 summary writing in Japanese EFL university contexts (e.g., Sawaki, 2019). Although some may question the need for an analytic rubric specifically for the Japanese EFL context, we believe that it is important to meet language teachers' needs and reflect their beliefs as well as the educational policies of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan on the teaching of integrated language tasks such as L2 summary writing in secondary education (MEXT, 2018). Currently, language teachers in secondary and tertiary education are expected to teach integrated language skills and place emphasis on the integration of skills in language tests. Furthermore, in classroom settings, evaluating learners' paraphrasing skills and providing feedback on how to improve them cannot be accomplished by using a holistic rubric because of the complexity of such skills. Therefore, developing a scoring rubric to assess L2 summaries with a focus on paraphrasing is important for the effective teaching and assessment of integrated skills. Based on the context described above, we initiated a research project on the development of rubrics that can improve the teaching and assessment of L2 summary writing. More specifically, the project aims to develop a rubric as a support tool for both teacher raters and student writers to foster their learning, teaching, and assessment of L2 summary writing. The present study is part of a larger research project, which consists of a series of studies, and Fig. 1 outlines the organization of our research project. As presented in Fig. 1, our project began by examining the ETS (2002) holistic scoring rubric from the perspectives of reliability and validity in study I (Hijikata, Yamanishi, & Ono, 2011). 
We used the ETS holistic rubric in Japanese EFL university classrooms and found that it did not provide diagnostic, detailed, and comprehensible feedback for the students. From the results of study I, we concluded that the rubric was not completely appropriate for Japanese EFL classrooms, and this motivated us to develop a new rubric that provides the information a holistic rubric cannot. Therefore, our subsequent investigation attempted to develop a rubric to fill this need. Fig. 1 Organization of our research project In study II (Hijikata-Someya, Ono, & Yamanishi, 2015), we categorized the reflections of six EFL teacher raters who graded summaries produced by Japanese EFL university students. The difficulties in grading were qualitatively analyzed and coded into content, organization, vocabulary, language, mechanics, paraphrasing, and length. Their reflective comments were used to constitute our provisional rubric with four dimensions: content, paraphrase (quantity), paraphrase (quality), and language use. One way in which this provisional rubric was innovative was its emphasis on the aspect of paraphrasing. Study III (Yamanishi & Ono, 2018) built on studies I and II and covered the development and refinement of the provisional rubric using the "expert judgment" of three experts in the field of language testing research. Subsequently, the revised rubric contained five dimensions (Appendix 1) because an optional overall quality dimension, which is holistic in nature, was added to the provisional rubric. This addition was based on a suggestion from the experts so that teachers can evaluate (a) the quality of the summary itself from a holistic point of view and (b) whether the summary corresponds to the requirements of the summary writing task. Finally, the current study (study IV) aims to examine the applicability of our (revised) rubric to EFL classrooms using both quantitative and qualitative methods. Purpose of the present study Our research project aimed to develop a new scoring rubric for L2 summaries in the Japanese EFL context to enhance the learning, teaching, and assessment of L2 summary writing in the classroom, with the following features: The rubric can be used as a scoring scale and as a teaching and achievement guideline for L2 summary-writing instruction; The rubric can be used both analytically and/or holistically, depending on the purpose of the assessment; The rubric should be suitable for the Japanese EFL context, so that it can be used by both native English-speaking (NES) teacher raters and Japanese non-native English-speaking (NNES) teacher raters; and The rubric should not be time-consuming to use, and thus, it consists of simple and concise descriptors. As Fig. 1 illustrates, in our previous studies (studies I through III), we developed a five-dimensional rubric (see Appendix 1)—content, paraphrase (quantity), paraphrase (quality), language use, and overall quality—that includes the features above. For example, for the first feature, our rubric contains two paraphrasing dimensions that are intended to guide student writers on how to paraphrase. For the second feature, as a result of expert judgment in study III, we added an optional overall quality dimension that can be used holistically with the descriptor, "As a response to this task, the overall quality of this summary is..." For the third feature, our rubric is written in English and Japanese, so that both NES and Japanese NNES teacher raters, and potentially students, can use it with ease.
For the fourth feature, through study III, we simplified the wording of the descriptors of each dimension. However, the newly added overall quality dimension and the two paraphrasing dimensions have not undergone any validation processes. Thus, the purpose of the present study (study IV) is to conduct a quantitative and qualitative examination of our rubric, especially on the potential of these newly added dimensions. Fifty-one Japanese EFL students at two private universities (universities A and B) in Japan participated in this study. Students from university A were management majors, while students from university B specialized in various fields related to the English language. The students in both groups have studied English for more than 6 years, both before and during university. Both groups of students enrolled in general English language courses for new students in their respective universities. The students' English proficiency level ranged from intermediate to lower-intermediate, which is equivalent to levels B1–A2 in the Common European Framework of Reference for Languages: students from university A had an average TOEIC-IP® score of 532.1 (SD = 117.4), while students from university B had an average TOEFL-ITP® score of 420.7 (SD = 31.6). Given that there are few differences in the participants' characteristics between the two universities, the participant data were analyzed together without distinguishing between the two groups. Three NES teachers and three NNES teachers also participated in this study to assess the summaries as raters. NES and Japanese NNES raters were recruited for this study because both types of teachers often teach English courses in EFL contexts in Japan. The six raters varied in terms of educational background and teaching expertise, which reflected the various university contexts, where both novice and experienced teachers are involved in tertiary education. Three NES raters were recruited from the Department of Language and Linguistics of a university in the UK: NES 1 (a Ph.D. student and part-time lecturer, 19 years of teaching experience), NES 2 (a Ph.D. student and part-time lecturer, 15 years of teaching experience), and NES 3 (a Master's student and research associate, 5 years of teaching experience). The three Japanese NNES raters were recruited from different universities in Japan: NNES 1 (an associate professor, 10 years of teaching experience), NNES 2 (an associate professor, 8 years of teaching experience), and NNES 3 (a lecturer, 4 years of teaching experience). All of the participants (both the student writers and teacher raters) provided informed consent for participation in this study. The data collection procedures of this study partially overlap the collection procedures of our previous study (study II; Hijikata-Someya et al., 2015). During the classes taught by the authors of the present study, the participants produced a 50- to 60-word written summary of a 199-word comparison/contrast type passage entitled Right Brain/Left Brain, which was adapted from Oshima and Hogue (2007). The summary writing task took approximately 40 min, and the use of dictionaries was allowed. The six raters first scored the 102 summaries, which had been produced by 51 students (each student wrote the summaries twice, before and after an L2 summary writing instruction) using the ETS (2002) holistic rubric (scores ranging from 1 to 5). The raters then judged the difficulty of rating each summary using three indicators: 1 = easy, 2 = moderate, or 3 = difficult. 
Consequently, 16 out of 102 summaries (15.69%) were judged as difficult to score, because their average grading difficulty from the six raters exceeded 2.0. In Hijikata-Someya et al. (2015), we then used these 16 difficult summaries to identify what aspects of the summaries made them difficult, through a qualitative consideration of the raters' retrospective scoring comments. In the current study, we asked the same three NES raters and three NNES raters to score the difficult 16 summaries using the five-dimensional rubric (Appendix 1: scores ranging from 1 to 4) that we developed in study III (Yamanishi & Ono 2018). The 16 difficult summaries from Hijikata-Someya et al. (2015) were used again in this study for two reasons. First, the 16 summaries were thought to be suitable for examining our newly developed rubric's potential under a severe condition. In other words, we thought that if the rubric could demonstrate positive results even in severe and challenging conditions, it would be more reasonable to claim the potential of the rubric. Taking the implementation of the rubric in the classroom into consideration, the other reason is that we thought 102 summaries were too many for the raters to score using a multi-dimensional rubric. Sixteen months had passed between when the six raters rated the 16 summaries in the current study and when they first rated the 102 summaries; therefore, we judged the influence of the first scoring on the scoring in this study to be negligible. Before scoring, as a form of rater training, the raters were provided with a sample of anchor summaries (Appendix 2) that had been scored by us to illustrate a variety of summaries with different scores assigned to each dimension. Therefore, the raters were able to familiarize themselves with the new dimensions of the rubric, assess the severity of the scoring, and understand the appropriate scores for each score band. We used an open-ended questionnaire, which consisted of the following three questions, to solicit the raters' retrospective comments on our rubric: The revised rubric has five dimensions: content, paraphrase (quantity), paraphrase (quality), language use, and overall quality. What is your opinion on the usage of each of the dimensions? What is your opinion about the levels and descriptors within each dimension? What is your overall impression of the revised rubric? The questionnaire was in English for NES raters and in Japanese for NNES raters. After the raters completed the evaluation of the summaries and the questionnaire, their data was collected by email. As an alternative to the questionnaire, we considered using face-to-face interviews to receive feedback from the raters. However, because the raters belonged to different universities located in different areas of Japan and the UK, it would have been difficult for us to conduct in-person interviews during the limited data collection period. For our quantitative examination, the collected evaluation data of our rubric was analyzed and compared with the results of the ETS holistic rubric. The following analyses were performed: Reliability of our rubric, Correlations within and between the rubrics, Inter-rater reliability for both rubrics, and Generalizability of our rubric. As the differences in rater background—i.e., NES or NNES rater—were outside the scope of this study, these analyses were conducted using the average mean scores of the six raters. 
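To make the screening step and the first three analyses listed above concrete, the following is a minimal Python sketch, not the authors' actual code: the array names, shapes, and synthetic data are assumptions for illustration only, so the printed statistics are meaningless in themselves.

```python
# Minimal sketch (not the study's code): screening "difficult" summaries and
# computing reliability/correlation statistics from hypothetical score arrays.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# 102 summaries x 6 raters: difficulty judgments (1 = easy, 2 = moderate, 3 = difficult)
difficulty = rng.integers(1, 4, size=(102, 6))
difficult_ids = np.flatnonzero(difficulty.mean(axis=1) > 2.0)  # screening rule used in the study

# Difficult summaries x 6 raters x 5 dimensions, scored 1-4 with the revised rubric
scores = rng.integers(1, 5, size=(len(difficult_ids), 6, 5)).astype(float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency for an (observations x items) matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

dim_means = scores.mean(axis=1)              # average over the six raters -> (n, 5)
print("alpha, four dimensions:", cronbach_alpha(dim_means[:, :4]))
print("alpha, five dimensions:", cronbach_alpha(dim_means))
# Inter-rater reliability, treating raters as items (column 4 assumed to be overall quality)
print("inter-rater alpha, overall quality:", cronbach_alpha(scores[:, :, 4]))

# Spearman correlation between overall quality and the four-dimension total
total_four = dim_means[:, :4].sum(axis=1)
rho, p_value = spearmanr(dim_means[:, 4], total_four)
print("Spearman rho (overall quality vs. four-dimension total):", rho, p_value)
```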
In this study, as a methodological advantage, we employed generalizability theory (G theory) to examine the characteristics of our rubric in detail (Bachman, 2004; Brennan, 1992; In'nami & Koizumi, 2016; Lynch & McNamara, 1998; Shavelson & Webb, 1991). G theory "can decompose the score variances into those affected by the numerous factors and their interactions" (In'nami & Koizumi, 2016, p. 342), and the factors and their interactions are called variance components. The design and data analysis stage of G theory is called a generalizability study (G study). The G study design of this study is a two-facet crossed design, and the variance components examined are listed as follows (see also Table 4): The variance associated with the object of measurement or the amount of inconsistency across student-writers' summaries is symbolized by p; The variance across raters or the amount of inconsistency across raters is symbolized by r; and The variance across items of the rubric or the amount of inconsistency across items is symbolized by i. In addition, their possible interactions are: The interactions between summaries and raters, p × r; The interactions between summaries and dimensions, p × i; The interactions between raters and dimensions, r × i; and The unresolved residuals, shown as p × r × i, that are not accounted for by the other variance components. G theory also allows us to simulate what the evaluation results would look like if we changed the numbers of raters and/or dimensions of a rubric to determine how to improve future evaluations. This stage is called the decision study (D study), and it enables the simulation of changes to the generalizability coefficient (g coefficient) based on changes to the number of raters and/or dimensions in the rubric. The g coefficient (Eρ² or G) is theoretically and practically equivalent to the reliability coefficient (α) in classical test theory, and its maximum possible value is 1. The g coefficients here were calculated using Eq. 1 for the two-facet crossed design, where N_r and N_i denote the numbers of raters and dimensions (items), respectively (see Shavelson & Webb, 1991, for more details): $$ G=\frac{p}{p+\frac{p\times r}{N_r}+\frac{p\times i}{N_i}+\frac{p\times r\times i}{N_r\times N_i}} $$ For our qualitative examination, the NES and NNES raters' retrospective comments on our rubric were analyzed descriptively to supplement the findings of the quantitative examination. The rater comments were first categorized into the following four categories: (a) usage of each dimension (16 units), (b) levels and descriptors within each dimension (11 units), (c) overall impression of the rubric (6 units), and (d) concerns about the rubric (4 units). Then, the first two categories were further divided into subcategories representing each dimension. Table 1 shows the distribution of the subcategories for the first two categories in the raters' comments. Table 1 Components of the raters' comments As illustrated in Table 1, each dimension was discussed by the NES and NNES raters. Similarly, the rater comments regarding the overall impression of and concerns about the rubric were also analyzed. Their comments are discussed in detail in the next section. In the discussion, the Japanese NNES raters' comments are presented in English translation.
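As a companion to Eq. 1, the sketch below shows how the D study simulation described above could be run in Python. It is only an illustration: the g_coefficient function simply evaluates Eq. 1, and the variance-component values are placeholders rather than estimates taken from this study.

```python
# Minimal sketch of the D study simulation based on Eq. 1 for the two-facet
# (summaries x raters x dimensions) crossed design.  The variance components
# below are placeholders, not the estimates reported in this study.
def g_coefficient(var_p, var_pr, var_pi, var_pri, n_raters, n_dims):
    """Generalizability coefficient (Eq. 1) for given numbers of raters and dimensions."""
    return var_p / (
        var_p
        + var_pr / n_raters
        + var_pi / n_dims
        + var_pri / (n_raters * n_dims)
    )

# Placeholder variance components: p, p x r, p x i, and the residual p x r x i
var_p, var_pr, var_pi, var_pri = 0.30, 0.19, 0.25, 0.26

# Vary the number of raters from 1 to 10 with the five rubric dimensions fixed
for n_raters in range(1, 11):
    print(n_raters, round(g_coefficient(var_p, var_pr, var_pi, var_pri, n_raters, 5), 3))
```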
Quantitative examinations Reliability of our rubric First, we calculated the reliability (internal consistency) of our newest rubric to compare the four-dimensional version, which consisted of content, paraphrase (quantity), paraphrase (quality), and language use, with the five-dimensional version, in which the holistic overall quality was added as an optional dimension. As Table 2 illustrates, while the reliability of the four-dimensional rubric was not high (α = .48), adding the overall quality dimension improved the reliability of the rubric (α = .70). Table 2 Internal consistency of our four- and five-dimensional rubrics The reliability of the five-dimensional version exceeded α = .60, which is regarded as moderately high reliability in performance assessments such as L2 writing assessments (Kudo & Negishi, 2002). This result indicates that by adding the overall quality dimension with a simple descriptor (As a response to this task, the overall quality of this summary is...), the rubric became more robust. Correlations within and between the rubrics To examine the potential of the overall quality dimension when used exclusively as a holistic measure and the two paraphrasing dimensions, correlation analyses were performed (see Fig. 2). As the number of summaries evaluated was not large (N = 16), we adopted a non-parametric measure (Spearman's rho). Fig. 2 Correlation matrix and histogram. Scatter plots are shown in the lower left triangle, correlation coefficients are shown in the upper right triangle, and the remainder are histograms. Figure 2 demonstrates that the correlation coefficient between overall quality and the four-dimensional total score—the sum of content, paraphrase (quantity), paraphrase (quality), and language use dimensions—was positive and very high, ρ = .91, p < .05. This means that the overall quality dimension has the potential to work well even if teacher raters use it as the sole factor for evaluating student summaries. The correlation coefficient between the two separate paraphrasing dimensions, paraphrase (quantity) and paraphrase (quality), was positive and high, ρ = .81, p < .05. This suggests that these two dimensions overlap to some extent but cover different aspects of paraphrasing. To investigate the concurrent validity of our rubric, the correlations between our rubric (the total and overall quality dimension) and the commonly used ETS holistic rubric were examined. The correlation coefficients between the ETS holistic rubric and our rubric were positive and moderate, ρ = .50, p < .05 (the ETS holistic and the total) or ρ = .55, p < .05 (the ETS holistic and overall quality dimension), indicating that these two rubrics evaluate the same construct but focus on slightly different aspects of L2 summary writing performance. Inter-rater reliability for both rubrics From the rater perspective, the inter-rater reliabilities among the six raters (three NES raters and three NNES raters) were examined. First, as Table 3 illustrates, the four dimensions that make up the analytic components of our rubric demonstrated reasonably high inter-rater reliabilities, α = .66–.75. Next, the inter-rater reliabilities of the overall quality dimension, the total, and the ETS holistic rubric (shown in italics in Table 3) were examined. Table 3 Inter-rater reliability coefficients (N = 6) The results indicate that the overall quality dimension (α = .73) produced higher inter-rater reliability than the total (α = .68) and the ETS holistic rubric (α = .59).
This indicates the potential of using the overall quality dimension alone as a holistic assessment. Generalizability of our rubric Based on the confirmation of the reasonably high reliability (α = .70) of our five-dimensional rubric as a performance evaluation tool, we used G theory to examine the characteristics of the rubric in more detail. At the first stage of the G theory analysis, we conducted a generalizability study (G study). In Table 4, the estimated variance components except for p × r × i (residual) are arranged in the order of the magnitude of their effect on the evaluation (from largest to smallest). We examined the three interactions of p × i, p × r, and i × r and found that the largest interaction was between the summaries and dimensions (p × i, 25.4%). This result is expected because each student writer's L2 summary writing performance assessed through each dimension of our rubric should vary, meaning each dimension was appropriately able to evaluate similar but different aspects of the students' summaries. The second largest interaction was between the summaries and raters (p × r, 19.1%). This indicates that each rater's evaluation was moderately affected by each student writer's varied performance. The last interaction was between the dimensions and raters (i × r), which contributed to a small portion of the variance, 6.3%. This means that each rater's interpretation of each dimension of the rubric did not differ to a great extent, which in turn means that the raters' evaluations using our rubric were fairly consistent. Table 4 Estimated variance components The second stage of the G theory analysis was the decision study (D study), the results of which are illustrated in Fig. 3. The default number of raters was six and the default number of dimensions was five, and we simulated the g coefficients by changing both numbers using Eq. 1. In this study, the simulation changed the numbers from 1 to 10, as shown in Fig. 3. The results of the D study examinations demonstrated that if the number of raters was reduced to three, the g coefficient would be fairly low, but slightly higher than .50. This suggests that there is some room for improvement when this rubric is implemented in classrooms. Theoretically, the same kind of examination could also be conducted for the number of dimensions in the rubric; however, changing the number of dimensions is not a realistic option because it would require determining the validity of the revised rubrics (e.g., combining two dimensions into one). Thus, the simulation results reflecting the changes to the number of dimensions are shown simply as a reference. However, creating and utilizing D study simulations after conducting an evaluation is preferable, as it allows useful, diagnostic information to be obtained that corresponds to the situation and purpose of the evaluation. Fig. 3 D study simulation. The results of changing the number of raters and dimensions are shown. The original numbers of raters and dimensions were six and five, respectively. Qualitative examinations Overall impression of the rubric The analysis of the rater comments demonstrated that the raters' overall impression of the rubric was positive. Both the NES and NNES raters found our rubric easier to use than the ETS (2002) holistic rubric. For instance, the raters NES 1 and NNES 1 reported the following: "Compared to the previous holistic scale, the analytic scale was easy to use for the summary evaluation." (NES 1) "The new rubric was very easy to use."
(NNES 1) Previously, the NES and NNES raters struggled when using the holistic rubric because it treated several aspects together (Hijikata-Someya et al., 2015). However, our rubric prevented such struggles and helped the raters perform a smooth assessment of the L2 summaries. The practicality of a rubric is important for classroom use; if a rubric is easy to use, it can save language teachers' time and costs (Stevens & Levi, 2013). Additionally, students can be encouraged to use the rubric for learning and peer-assessment purposes (Becker, 2016). Although there has been a claim that holistic scoring is more practical and cost-effective than analytic scoring (Bacha, 2001; Hamp-Lyons, 1995; Hyland, 2003; Weigle, 2002), analytic scoring may better suit integrated writing tasks such as summary writing for educational purposes. Thus, the raters' positive perceptions of our new rubric indicate that the rubric can enhance teachers' practical assessment of and diagnostic feedback on student summary writing performance in EFL classrooms. The content dimension was perceived positively by the NES and NNES raters. For instance, NNES 3 referred to the content as follows: "It was very easy to use. Specifically, the reference to 'secondary information' in the descriptor 4 was helpful when judging whether examples in the summaries were acceptable or not. Also, I didn't need to look back at the descriptors often because each level of the descriptors had a consistent description regarding 'information'." (NNES 3) The descriptors of our rubric were regarded as clear and distinctive by the NES and NNES raters. As we discussed earlier, because L2 summary writing requires complex information processing to select the main ideas (Brown & Day, 1983), assessing the content of the summaries tends to be challenging for some raters and language teachers who do not necessarily identify the same main ideas to be included in the summaries (Alderson, 2000). Based on these challenges, the content dimension in our rubric is expected to help raters evaluate the selected information in the summaries more effectively. Paraphrase (quantity) and paraphrase (quality) Paraphrase (quantity) and paraphrase (quality) were also regarded as reasonable by the NES and NNES raters. Both groups of raters provided positive opinions towards the two separate paraphrasing dimensions and understood the purpose of having both dimensions. "It is good that the rubric distinguishes the writers' effort to paraphrase from the appropriateness of paraphrasing in dealing with both the quantity and quality of paraphrasing." (NNES 1) "I like the idea of measuring originality on two dimensions, e.g., quantity and quality." (NES 2) The establishment of paraphrase (quantity) and paraphrase (quality) in the rubric seemed to work well in the EFL context and was perceived positively for educational purposes as it met the teacher raters' needs and demands. When they used the holistic rubric previously, they suggested that an ideal rubric for written summaries should explicitly deal with the important and difficult paraphrase dimension (Hijikata-Someya et al., 2015). Thus, these teacher raters' opinions were reflected in our rubric through the use of explicit, self-explanatory descriptors for the paraphrase dimensions. However, for paraphrase (quality), NNES 3 found it difficult to evaluate whether "more than four words in a row were copied from the original text," which was written in the score bands 1 and 2 in paraphrase (quality). 
In a similar vein, NNES 3 suggested the importance of teacher instruction in paraphrasing as follows: "To begin the summary task or as general writing instruction in class, teachers should tell students not to copy more than four consecutive words from the source text." (NNES 3) NES 1 also commented on the innovative feature of the paraphrase (quantity) dimension in the rubric: "It is good that the descriptors of Paraphrase (Quantity) are clear since [the] percentage of paraphrase at each level is specified." (NES 1) Instead of raters needing to calculate the exact percentage of paraphrases employed in the summaries, we provided the percentages shown in the descriptors as an approximate estimation of the paraphrasing to be employed. The use of labels for paraphrasing attempts such as Near Copy, Moderate Revision, and Substantial Revision (Keck, 2006, 2014) in the rubric seems to work well in some contexts, but these labels might be interpreted differently by individual teacher raters and student writers. Thus, our rubric was determined to be self-explanatory in terms of paraphrase (quantity) based on the listed percentages of paraphrasing provided for each level. The newly added overall quality dimension as a holistic measure was viewed as effective and useful by the NES and NNES raters, as demonstrated by the following comments: "As this is a holistic assessment of the summary, it could be used to check the consistency of the analytical scores." (NES 2) "Very easy to use. I felt that this aspect prevents summaries that only actively paraphrase from getting a high mark." (NNES 3) Although the basic feature of our rubric was analytic, the overall quality dimension served as a holistic assessment of the summaries, when considering all dimensions as a whole. In line with this, Marshall (2017) states the importance of the summary to represent "a sense of the complete original text" (p. 71). Thus, the addition of this holistic dimension allows raters to employ both analytic and holistic views of the summaries and maintain consistency between the two assessment methods. However, NES 2 suggested the potential need for descriptors of overall quality concerning each of the four score bands as follows: "As there are many contributing factors to a successful summary, there is a risk that the interpretation of poor / fair / good / very good will differ between [the] raters (effecting [sic] inter-rater reliability). Perhaps there could be descriptors for the different levels of 'Overall Quality'?" (NES 2) This suggestion should be considered to improve the rubric because the overall quality dimension can be used alone. One way to improve this would be to add descriptors explaining each score band, but this may impair the dimension's simple and holistic nature. Another option would be to develop a scoring method based on individual teacher raters' satisfactory standards that correspond to varied teaching contexts (e.g., Sawaki, 2019). For example, we could set the standard/reference point based on the teachers' needs and students' expertise as a score of 3 (good). If the quality of a summary is above the standard, it would be marked as 4 (very good), and if it is below the standard, it would be marked as 2 (fair) or 1 (poor). This option regards the overall quality dimension as a holistic criterion-referenced measure (the criterion score here is 3) to effectively maintain the holistic nature of this dimension. 
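For illustration only, the criterion-referenced option proposed above could be operationalized along the following lines; this is a sketch of the suggested rule (standard = 3, "good"), and the judgment labels are hypothetical rather than taken from the rubric.

```python
# Illustrative sketch of the criterion-referenced use of the overall quality
# dimension: the reference point is a score of 3 ("good"), and the rater only
# records how a summary compares to that standard.  The labels are hypothetical.
def overall_quality_score(judgement: str) -> int:
    bands = {
        "above standard": 4,           # very good
        "at standard": 3,              # good
        "slightly below standard": 2,  # fair
        "well below standard": 1,      # poor
    }
    return bands[judgement]

print(overall_quality_score("above standard"))  # -> 4
```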
Concerns about the rubric NNES 2 highlighted concerns about different scoring weights between the paraphrase and content dimensions: "The aspect of paraphrase in the rubric seems reasonable since quantity and quality are treated separately whereas I felt that it may be questionable in terms of the balance between Paraphrase and Content in a total score . . . . I personally think that, in my case, the priority in the summary evaluation tends to be accurate reading comprehension rather than paraphrasing skills and language use." (NNES 2) This opinion is understandable because paraphrase is weighted more than the other dimensions due to the existence of paraphrase (quantity) and paraphrase (quality) in the rubric. In addition, the priority of the skills in the summary writing task depends on the purpose of the task and assessment, that is, whether it is used as a reading comprehension task or an integrated writing task. Hijikata-Someya et al. (2015) revealed that language teachers particularly struggled to assess and teach paraphrasing attempts in L2 summary writing. Therefore, this newly developed rubric emphasizes the importance of paraphrasing by including two paraphrase dimensions and placing more scoring weight on paraphrase than the other dimensions. In this study, as the final step of our research project on assessing L2 summary writing (see Fig. 1), we have gained insight into the potential use and function of our rubric through quantitative and qualitative examinations. With regard to the quantitative examination, the results of comparing four and five dimensions clarified that the optional overall quality dimension should be included because it improves the reliability of the rubric significantly, from α = .48 to .70. We then examined the correlations between our rubric and the ETS (2002) holistic rubric. The results revealed that the newly added overall quality dimension could work well even if used alone, and our rubric and the ETS holistic rubric had a positive, moderate correlation for L2 summary writing assessments. The examination of our rubric using G theory explained the nature of the evaluation results in detail through the G study examination. However, the D study simulation based on changing the number of raters indicated that the reliability (generalizability coefficients) of our rubric was not high enough when the number of raters decreased. This result suggests that we need further research to seek ways to help raters evaluate summaries using the rubric. One way to improve this might be to provide teacher raters with opportunities to practice evaluating student L2 summaries using the rubric in a rater training session. When administering these training sessions, making good use of G and D study examinations would be very helpful in specifying the characteristics of the evaluation results to improve future evaluation. The qualitative examinations supplemented the findings from our quantitative examination and demonstrated the potential of the overall quality dimension. Ideally, all the five dimensions of our rubric should be employed because of their high reliability; however, in educational contexts, the use of the overall quality dimension is a valid option. For example, the overall quality dimension can be used alone as a holistic assessment, depending on the purpose of the assessment and teaching in individual teaching contexts. 
In other words, if only holistic assessment results are needed, teacher raters might use this dimension exclusively; if they can conduct both holistic and analytic assessments, that is preferable. It should be noted that when the overall quality dimension is used alone for the assessment of students' L2 summaries, teacher raters should review and understand the other four dimensions and their descriptors in advance. Nevertheless, we do not exclude the usage of the four-dimensional rubric alone as a rigid and independent analytic assessment. In essence, this flexible combination of holistic and analytic assessments is expected to enhance the effective and efficient evaluation and teaching of L2 summary writing in various educational settings, based on the purpose of summary writing tasks and educational levels. Even within the Japanese EFL context, teachers' needs and students' expertise may vary considerably, which creates the demand for a flexible assessment tool such as the one we have developed. The correlation coefficient of the two separate paraphrasing dimensions, paraphrase (quantity) and paraphrase (quality), was positive and high, indicating that they overlap to some extent but cover different and important aspects of paraphrasing. The teacher raters' comments confirmed this; they appreciated the distinction of these two aspects of paraphrasing, which met their needs and demands. Teacher raters can emphasize both the quantitative and qualitative aspects of paraphrasing when they teach L2 summary writing by using this rubric, and the features of our rubric can also be helpful for student writers to understand the importance of paraphrasing. Student writers can understand how much paraphrasing is expected from the self-explanatory nature of the paraphrase (quantity) dimension, while the paraphrase (quality) dimension may encourage them to actively and appropriately paraphrase to a greater degree. Finally, we address the limitations that will be considered in our future studies. As a methodological limitation, the number of raters was not large, and their teaching backgrounds were not strictly controlled. Further investigation into the raters' attributes and backgrounds that could affect the evaluation results may be necessary. Another limitation is related to task selection: only a single summary writing task based on a comparison/contrast type of passage was employed. If more than one type of text or genre had been used, it would have been possible to discuss the appropriateness of the rubric from a broader perspective. Similarly, only a relatively short passage (i.e., 199 words) was used for the summary writing task, which means that there was no way to compare task difficulty. A comparison of both short and long passages for summary writing tasks could be ideal to ensure the effectiveness of the rubric. Furthermore, the summaries produced in this study were relatively short (i.e., 50–60 words); therefore, most of the summaries were written in one paragraph. If a long passage is used for a summary writing task and a longer summary is produced, new dimensions such as the organization of the summary may need to be added to the rubric. Despite these limitations, our newly developed rubric is innovative as it embodies the teacher raters' voices and experiences and is characterized as a "hybrid" of analytic and holistic assessments within a single rubric. 
Similar to a hybrid car using a conventional engine and an electric motor separately or simultaneously depending upon the situation and purpose, our new rubric offers flexibility for specific individual purposes. We hope that this rubric is helpful in teaching and assessing L2 summary writing in Japan and potentially in other EFL contexts. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. EFL: English as a foreign language ESL: English as a second language ETS: Educational Testing Service MEXT: Ministry of Education, Culture, Sports, Science and Technology NES: Native English speaker/speaking NNES: Non-Native English speaker/speaking TOEFL-ITP: Test of English as a Foreign Language—Institutional Testing Program TOEIC-IP: Test of English for International Communication—Institutional Program Alderson, J. C. (2000). Assessing reading. Cambridge: Cambridge University Press. Baba, K. (2009). Aspects of lexical proficiency in writing summaries in a foreign language. Journal of Second Language Writing, 18, 191–208. https://doi.org/10.1016/j.jslw.2009.05.003. Bacha, N. (2001). Writing evaluation: What can analytic versus holistic essay scoring tell us? System, 29, 371–383. https://doi.org/10.1016/S0346-251X(01)00025-2. Bachman, L. F. (2004). Statistical analyses for language assessment. Cambridge: Cambridge University Press. Becker, A. (2016). Student-generated scoring rubrics: Examining their formative value for improving ESL students' writing performance. Assessing Writing, 29, 15–24. https://doi.org/10.1016/j.asw.2016.05.002. Black, L., Daiker, D. A., Sommers, J., & Stygall, G. (1994). New directions in portfolio assessment. Portsmouth: Boynton/Cook Heinemann. Brennan, R. L. (1992). Elements of generalizability theory (Rev. ed.). Iowa City: ACT Publications. Brown, A. L., & Day, J. D. (1983). Macrorules for summarizing texts: The development of expertise. Journal of Verbal Learning and Verbal Behavior, 22, 1–14. https://doi.org/10.1016/S0022-5371(83)80002-4. Cumming, A., Kantor, R., Baba, K., Erdosy, U., Eouanzoui, K., & James, M. (2005). Differences in written discourse in independent and integrated prototype tasks for next generation TOEFL. Assessing Writing, 10, 5–43. https://doi.org/10.1016/j.asw.2005.02.001. Delaney, Y. A. (2008). Investigating the reading-to-write construct. Journal of English for Academic Purposes, 7, 140–150. https://doi.org/10.1016/j.jeap.2008.04.001. Educational Testing Service. (2002). LanguEdge courseware: Handbook for scoring speaking and writing. Princeton: Educational Testing Service. Gebril, A., & Plakans, L. (2016). Source-based tasks in academic writing assessment: Lexical diversity, textual borrowing and proficiency. Journal of English for Academic Purposes, 24, 78–88. https://doi.org/10.1016/j.jeap.2016.10.001. Grabe, W. (2001). Reading-writing relations: Theoretical perspectives and instructional practices. In D. Belcher & A. Hirvela (Eds.), Linking literacies: Perceptions on L2 reading-writing connections (pp. 15–47). Ann Arbor: University of Michigan Press. Hamp-Lyons, L. (1995). Rating nonnative writing: The trouble with holistic scoring. TESOL Quarterly, 29, 759–765. https://doi.org/10.2307/3588173. Hidi, S., & Anderson, V. (1986). Producing written summaries: Task demands, cognitive operations and implications for instruction. Review of Educational Research, 56, 473–493. https://www.jstor.org/stable/1170342. Hijikata-Someya, Y., Ono, M., & Yamanishi, H. (2015).
Evaluation by native and non-native English teacher-raters of Japanese students' summaries. English Language Teaching, 8(7), 1–12. https://doi.org/10.5539/elt.v8n7p1. Hijikata, Y., Yamanishi, H., & Ono, M. (2011). The evaluation of L2 summary writing: Reliability of a holistic rubric. Paper presented at the 10th Symposium on Second Language Writing in 2011. Taipei, Taiwan: Howard International House. Hirvela, A., & Du, Q. (2013). "Why am I paraphrasing?": Undergraduate ESL writers' engagement with source-based academic writing and reading. Journal of English for Academic Purposes, 12, 87–98. https://doi.org/10.1016/j.jeap.2012.11.005. Hyland, K. (2002). Teaching and researching writing. Harlow: Pearson Education Limited. Hyland, K. (2003). Second language writing. Cambridge: Cambridge University Press. In'nami, Y., & Koizumi, R. (2016). Task and rater effects in L2 speaking and writing: A synthesis of generalizability studies. Language Testing, 33, 341–366. https://doi.org/10.1177/0265532215587390. Johns, A. M., & Mayes, P. (1990). An analysis of summary protocols of university ESL students. Applied Linguistics, 11, 253–271. https://doi.org/10.1093/applin/11.3.253. Keck, C. (2006). The use of paraphrase in summary writing: A comparison of L1 and L2 writers. Journal of Second Language Writing, 15, 261–278. https://doi.org/10.1016/j.jslw.2006.09.006. Keck, C. (2014). Copying, paraphrasing, and academic writing development: A re-examination of L1 and L2 summarization practices. Journal of Second Language Writing, 25, 4–22. https://doi.org/10.1016/j.jslw.2014.05.005. Kintsch, W., & van Dijk, T. A. (1978). Toward a model of text comprehension and production. Psychological Review, 85, 363–394. http://dx.doi.org/10.1.1.468.1535 Kirkland, M. R., & Saunders, M. A. P. (1991). Maximizing student performance in summary writing: Managing cognitive load. TESOL Quarterly, 25, 105–121. https://doi.org/10.2307/3587030. Knoch, U. (2009). Diagnostic assessment of writing: A comparison of two rating scales. Language Testing, 26, 275–304. https://doi.org/10.1177/0265532208101008. Kudo, Y., & Negishi, M. (2002). Interrater reliability of free composition ratings by different methods. Annual Review of English Language Education in Japan (ARELE), 13, 91–100. Li, J. (2014). Examining genre effects on test takers' summary writing performance. Assessing Writing, 22, 75–90. https://doi.org/10.1016/j.asw.2014.08.003. Lynch, B. K., & McNamara, T. F. (1998). Using G-theory and Many-facet Rasch measurement in the development of performance assessments of the ESL speaking skills of immigrants. Language Testing, 15, 158–180. https://doi.org/10.1177/026553229801500202. Marshall, S. (2017). Advance in academic writing: Integrating research, critical thinking, academic reading and writing. Montréal: Pearson. Mertler, C. A. (2001). Designing scoring rubrics for your classroom. Practical Assessment, Research & Evaluation, 7(25). http://www.pareonline.net/getvn.asp?v=7&n=25. MEXT. (2018). Koutou gakkou gakushuu shidou youryou (The course of study for secondary education). http://www.mext.go.jp/component/a_menu/education/micro_detail/__icsFiles/afieldfile/2018/07/11/1384661_6_1_2.pdf Ono, M. (2011) Japanese and Taiwanese university students' summaries: A comparison of perceptions of summary writing. Journal of Academic Writing, 1, 191–205. https://doi.org/10.18552/joaw.v1i1.14 Oshima, A., & Hogue, A. (2007). Introduction to academic writing. White Plains: Pearson Education. Oshima, A., Hogue, A., & Ravitch, L. (2014). 
Longman academic writing series, level 4: Essays. White Plains: Pearson Education. Pecorari, D. (2003). Good and original: Plagiarism and patchwriting in academic second-language writing. Journal of Second Language Writing, 12, 317–345. https://doi.org/10.1016/j.jslw.2003.08.004. Plakans, L. (2008). Comparing composing processes in writing-only and reading-to-write test tasks. Assessing Writing, 13, 111–129. https://doi.org/10.1016/j.asw.2008.07.001. Plakans, L. (2010). Independent vs. integrated writing tasks: A comparison of task representation. TESOL Quarterly, 44, 185–194. https://www.jstor.org/stable/27785076. Plakans, L. (2015). Integrated second language writing assessment: Why? What? How? Language and Linguistics Compass, 9, 159–167. https://doi.org/10.1111/lnc3.12124. Sawaki, Y. (2019). Issues of summary writing instruction and assessment in academic writing classes. Paper presented at the 48th Research Colloquium of the Japan Language Testing Association. Japan: Waseda University. Shavelson, R. J., & Webb, N. M. (1991). Generalizability theory: A primer. Thousand Oaks: Sage. Shi, L. (2012). Rewriting and paraphrasing source texts in second language writing. Journal of Second Language Writing, 21, 134–148. https://doi.org/10.1016/j.jslw.2012.03.003. Shi, L., & Dong, Y. (2018). Chinese graduate students paraphrasing in English and Chinese contexts. Journal of English for Academic Purposes, 34, 46–56. https://doi.org/10.1016/j.jeap.2018.03.002. Stevens, D. D., & Levi, A. (2013). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning (2nd ed.). Sterling: Stylus Publications. Weigle, S. C. (2002). Assessing writing. Cambridge: Cambridge University Press. Weigle, S. C. (2004). Integrating reading and writing in a competency test for non-native speakers of English. Assessing Writing, 9, 27–55. https://doi.org/10.1016/j.asw.2004.01.002. Yamanishi, H., & Ono, M. (2018). Refining a provisional analytic rubric for L2 summary writing using expert judgment. Language Education & Technology, 55, 23–48. https://iss.ndl.go.jp/books/R000000004-I029417776-00. Yu, G. (2007). Students' voices in the evaluation of their written summaries: Empowerment and democracy for test takers? Language Testing, 24, 539–572. https://doi.org/10.1177/0265532207080780. Yu, G. (2013). From integrative to integrated language assessment: Are we there yet? Language Assessment Quarterly, 10, 110–114. https://doi.org/10.1080/15434303.2013.766744. The authors thank all the students and the raters who participated in this study. The project was funded by grants from the Japan Society for the Promotion of Science, KAKENHI (#23520725 and #26580121). Faculty of Science and Engineering, Chuo University, Tokyo, Japan Hiroyuki Yamanishi Faculty of Law, Keio University, Kanagawa, Japan Masumi Ono Faculty of Humanities and Social Sciences, University of Tsukuba, Ibaraki, Japan Yuko Hijikata All authors read and approved the final manuscript. Correspondence to Hiroyuki Yamanishi. Table 5 The five-dimensional rubric Table 6 Sample summaries and their scores Yamanishi, H., Ono, M. & Hijikata, Y. Developing a scoring rubric for L2 summary writing: a hybrid approach combining analytic and holistic assessment. Lang Test Asia 9, 13 (2019).
https://doi.org/10.1186/s40468-019-0087-6 Keywords: L2 summary writing; Analytic assessment; Holistic assessment; Japanese university students; Generalizability theory; Rater perception
CommonCrawl
\begin{document} \title[Lifts of simple curves]{Lifts of simple curves in finite regular coverings of closed surfaces} \author{Ingrid Irmer} \address{Mathematics Department\\ Technion, Israel Institute of Technology\\ Haifa, 32000\\ Israel } \email{[email protected]} \date{21 November, 2017} \begin{abstract} Suppose $S$ is a closed orientable surface and $\tilde{S}$ is a finite sheeted regular cover of $S$. When studying mapping class groups, the following question arose: Do the lifts of simple curves from $S$ generate $H_{1}(\tilde{S},\mathbb{Z})$? A family of examples is given for which the answer is ``no''. \end{abstract} \maketitle {\footnotesize \tableofcontents } \section{Introduction} \label{sect:intro} Let $S$ be a genus $g$ closed, orientable surface with base point and no boundary. Fix $p\colon\thinspace \tilde{S} \to S$, a finite-sheeted regular covering of $S$, where $\tilde S$ is connected. The \emph{simple curve homology of $p$} (denoted by $sc_{p}(H_{1}(\tilde{S};\mathbb{Z}))$) is the span of $[\tilde{\gamma}]$ in $H_1(\tilde{S};\mathbb{Z})$ such that $\tilde{\gamma}$ is a connected component of $p^{-1}(\gamma)$ and $\gamma$ a simple closed curve in $S$.\\ Recall that the Torelli group of a surface is the subgroup of the mapping class group that consists of surface diffeomorphisms that act trivially on homology with integer coefficients. A survey of the Torelli group can be found in \cite{Johnson}. The following question was posed by Julien March\'{e} on Mathoverflow, \cite{MO}, and arose while studying the ergodicity of the action of the Torelli group on $\mathrm{SU}(2)$-character varieties of surfaces, \cite{Marche}: \begin{ques}[see \cite{MO}] \label{Mainquestion} \begin{enumerate} \item Does $sc_{p}(H_{1}(\tilde{S};\mathbb{Z})) =H_1(\tilde{S},\mathbb{Z})$? \item If not, how can we characterize the submodule $sc_{p}(H_{1}(\tilde{S};\mathbb{Z}))$? \end{enumerate} \end{ques} Since writing the first version of this paper, there has been much progress on answering Question \ref{Mainquestion} and related questions. By using quantum $\mathrm{SO}(3)$-representations coming from TQFT, the authors of \cite{Quantumreps} show that equality in part (i) of Question \ref{Mainquestion} does not always hold. However, such covers are not explicitly constructed in~\cite{Quantumreps}.\\ Denote by $Mod(\Sigma)$ the mapping class group of a surface $\Sigma$. Unlike the surface $S$, $\Sigma$ is not assumed to have empty boundary. When $\Sigma$ has $p$ punctures, genus $g$ and $n$ boundary components, we will sometimes denote it by $\Sigma^{p}_{g,n}$. \\ We now explain a relation between a rational coefficient version of Question~\ref{Mainquestion} \begin{equation} \label{eq.QQ} sc_{p}(H_{1}(\tilde{S};\mathbb{Z}))\otimes_{\mathbb{Z}} \mathbb{Q} =H_1(\tilde{S},\mathbb{Q}) \end{equation} and the Ivanov Conjecture~\cite{Ivanov}. The latter states that $H_1(\Gamma,\mathbb{Q})=0$ for any finite index subgroup $\Gamma$ of $Mod(\Sigma^{p}_{g,n})$ with $g\geq 3$. \\ Fix a finite-sheeted regular cover $p\colon\thinspace \tilde{\Sigma} \to \Sigma$ of $\Sigma$, and let $\Gamma_p$ denote the subgroup of $Mod(\Sigma)$ generated by all $\phi \in Mod(\Sigma)$ such that $\phi$ lifts to $\tilde{\phi} \in Mod(\tilde \Sigma)$. Let $\tilde \Gamma_p$ denote the subgroup of $Mod(\tilde \Sigma)$ generated by $\tilde \phi$ as before. \\ Boggi-Looijenga~\cite{BLconversation} observe that when~\eqref{eq.QQ} holds, the $\tilde\Gamma_p$ invariant submodule of $H_{1}(\tilde{\Sigma};\mathbb{Q})$ is trivial.
If that happens for all finite regular covers $p$ of $\Sigma$, then Theorem C of \cite{Putman2} implies Ivanov's Conjecture for mapping class groups of surfaces with genus $g+1$, $n-1$ boundary components and $p$ punctures. A further discussion of the background of this question with integer and rational coefficients can be found in Section 8 of \cite{FH}.\\ For surfaces with nonempty boundary or at least one puncture, Farb-Hensel~\cite{FH} provide a representation theoretic framework in which to study part (ii) of Question \ref{Mainquestion} with integer or rational coefficients. For punctured surfaces, it was shown in a recent paper of Malestein-Putman~\cite{MalesteinPutman} that equality in part (i) of Question~\ref{Mainquestion} fails. \\ Our main theorem provides an explicit family of counterexamples to part (i) of Question~\ref{Mainquestion}. These are iterated homology surface coverings, with at least two iterations, where the last one uses an integer $m \geq 3$. \begin{thm} \label{thm.1} For the examples of Section~\ref{sub:non_example}, equality in part (i) of Question~\ref{Mainquestion} is false. \end{thm} The intuition that lifts of simple curves should generate homology possibly stems from the fact that counterexamples should be expected to have large genus; ``small'' genus coverings disproportionately satisfy a plethora of conditions that guarantee that lifts of simple curves generate homology. For example, it seems to be well-known that when the deck transformation group is Abelian, $sc_{p}(H_{1}(\tilde{S};\mathbb{Z})) =H_1(\tilde{S},\mathbb{Z})$. (As was explained to the author by Marco Boggi, \cite{Boggiemail}, this claim follows from arguments of Boggi-Looijenga and \cite{Looijenga} with rational and hence integral coefficients. It was proven directly for integral coefficients in an earlier version of this paper. For fundamental groups of surfaces with nonempty boundary or at least one puncture and complex coefficients, this claim is Proposition 3.1 of \cite{FH}.) \\ \textbf{Organisation of Paper.} Section~\ref{sect:background} recalls some background and fixes useful notation. Section~\ref{sect:example} studies a family of covering spaces in detail. These covering spaces are compositions of well known covering spaces, and the author makes no claims of originality in this section. The properties of the covering spaces that will be needed are simple and elementary, and hence are proven directly for completeness. The same is true for Subsection \ref{relations}, in which it is explained how to obtain spanning sets for homology of covers using relations in the deck transformation group. These results and examples are used in Subsection \ref{rationalvsintegral} to highlight differences between integral and rational homology, and in Subsection \ref{sub:non_example} to construct examples for which $sc_{p}(H_{1}(\tilde{S};\mathbb{Z}))$ is a proper submodule of $H_{1}(\tilde{S};\mathbb{Z})$. \subsection*{Acknowledgments} As noted above, the author became aware of this question on MathOverflow, and is grateful to Julien March\'{e} for posting the question, and the subsequent discussion by Richard Kent and Ian Agol. Mustafa Korkmaz and Sebastian Hensel pointed out an error in the first formulation of this paper. This paper was greatly improved as a result of communication with Marco Boggi, Stavros Garoufalidis, Neil Hoffman, Thomas Koberda, Eduard Looijenga, Andrew Putman, and the detailed comments of the anonymous referee. The author is also grateful to Ferruh \"Ozbudak for a fascinating discussion on similar methods in coding theory.
This research was funded by a T\"{u}bitak Research Fellowship 2216. The author thanks the University of Melbourne for its hospitality during the initial and final stages of this project. \section{Assumptions and background} \label{sect:background} Since the 2-sphere is simply connected, and hence has no nontrivial covers, part (i) of Question~\ref{Mainquestion} holds for genus 0 surfaces. Moreover, as pointed out by Ian Agol in \cite{MO}, part (i) of Question~\ref{Mainquestion} holds for genus 1 surfaces. Therefore we need only consider closed surfaces $S$ of genus at least two. \\ \textbf{Curves and intersection numbers.} By a \emph{curve} in $S$ we mean the free homotopy class of the image of a smooth map (which can be taken to be an immersion) of $S^1$ into $S$. In this convention, a curve is necessarily closed and connected. A curve $\gamma$ is said to be \textit{simple} if the free homotopy class contains an embedding of $S^1$ in $S$. \\ When it is necessary to work with based curves, the assumption will often be made that wherever necessary, a representative of the free homotopy class is conjugated by an arc, to obtain a curve passing through the base point. The counterexamples constructed in this paper are obtained by iterated Abelian covers. When analysing how a curve lifts, it will therefore only be necessary to know the homology class of the curve in the intermediate cover in question.\\ If $c$ is a curve in a surface $S$, then $[c] \in H_1(S,\mathbb{Z})$ will denote the corresponding homology class. \\ \textbf{$d$-lifts.} Given a finite sheeted regular covering $p\colon\thinspace \tilde{S} \to S$, the deck transformation group is denoted by $D$. For an element $\gamma \in \pi_1(S)$, let $d=d(\gamma)$ denote the smallest natural number for which $\gamma^d \in \pi_1(\tilde{S}) \subset \pi_1(S)$. Note that $d$ exists and $d \leq |D|$, where $|D|$ is the number of elements of $D$. We will then say that $\gamma$ $d$-lifts. \\ \textbf{Primitivity. }A homology class $h$ is \textit{primitive} if it is nontrivial and there does not exist an integer $k>1$ and a homology class $h_{prim}$ such that $h=kh_{prim}$. \\ \section{Homology coverings of a surface} \label{sect:example} In this section we recall the definition of a homology covering of a surface and its basic properties. Our goal is to show that iterated homology coverings give a counterexample to part (i) of Question~\ref{Mainquestion}. \subsection{Definition of a homology covering} \label{sub.homcover} We begin by recalling the well-known homology covers of a closed genus $g$ surface $S$. Fix a natural number $m$ and let $p: \tilde S \to S$ denote the covering space of $S$ corresponding to the epimorphism $\phi: \pi_1(S) \to H_1(S,\mathbb{Z}/m \mathbb{Z})$ given by the composition \begin{equation} \label{eq.phi} \pi_1(S) \to H_1(S,\mathbb{Z}) \to H_1(S,\mathbb{Z}/m \mathbb{Z}) \end{equation} of the Hurewicz homomorphism with the reduction of homology modulo $m$. Let $D \simeq (\mathbb{Z}/m \mathbb{Z})^{2g}$ denote the deck transformation group of this covering. These coverings are known as the \textit{mod-$m$-homology coverings} of $S$. Using an Euler characteristic argument ($\chi(\tilde{S})=m^{2g}\chi(S)=m^{2g}(2-2g)$), it follows that the genus of $\tilde{S}$ is $m^{2g}(g-1)+1$. Homology coverings are characteristic in the following sense: the subgroup $\pi_1(\tilde S)$ of $\pi_1(S)$ is invariant under all surface diffeomorphisms of $S$. \\ \subsection{Homology coverings with $g=2$} \label{sub.g=2} When $g=2$, we can give an explicit description of the covering $\tilde S$ as follows.
Write $S=t_1 \cup t_2$ where $t_i$ for $i=1,2$ are genus 1 subsurfaces of $S$ with one boundary component. The pre-image of each $t_i$ under $p$ consists of $m^2$ copies of an $m^{2}$-holed torus, each of which is an $m^{2}$-fold cover of $t_i$, as illustrated in Figure \ref{holytorus}. \\ \begin{figure} \caption{ The pre-image of a 1-holed torus under the covering when $m=2$.} \label{holytorus} \end{figure} The pre-images $p^{-1}(t_i)$ for $i=1,2$ are glued together as follows. Let $K_{m^2,m^2}$ be the bi-partite graph with vertices the connected components of $p^{-1}(t_i)$ for $i=1,2$. Each connected component of $p^{-1}(t_1)$ is glued to each connected component of $p^{-1}(t_2)$ along a boundary curve, and this is represented by an edge of $K_{m^2,m^2}$. This is illustrated in Figure \ref{cube} for $m=2$, where the graph $K_{m^{2},m^{2}}$ is shown in grey. \\ \begin{figure} \caption{The covering space $\tilde{S}$ for $m=2$. The red curves are some connected components of pre-images of the generator $a_1$ of $\pi_{1}(S)$. The green curve is a connected component of the lift of $b_{1}b_{2}$, and the black curves are connected components of the lift of $[a_{1},b_{1}]$.} \label{cube} \end{figure} This subsection is now concluded with a useful lemma. \begin{lem} \label{nullnonsimple} Let $\tilde{S}\rightarrow S$ be the homology cover of~\eqref{eq.phi} with $g=2$. All simple, null homologous curves in $S$ 1-lift to nonseparating curves in $\tilde{S}$. Moreover, no curve in a primitive homology class in $S$ 1-lifts. \end{lem} \begin{proof} To start off with, the fact that null homologous curves 1-lift is a consequence of the fact that the cover has Abelian deck transformation group. \\ In Figure \ref{cube}, the black curves are lifts of simple null homologous curves from $S$, and they are all non-separating in $\tilde{S}$. The covering is characteristic, so this observation is true independently of the choice of basis $\{a_{1}, b_{1},a_{2}, b_{2}\}$ from Equation \eqref{presentations}. It follows that all simple null homologous curves 1-lift to simple, non-separating curves in $\tilde{S}$. An analogous argument shows this is also true for $m>2$. For the second claim, note that a curve in a primitive homology class has nonzero image in $H_{1}(S;\mathbb{Z}/m\mathbb{Z})$, so its image under $\phi$ is nontrivial and the curve does not 1-lift. \end{proof} \subsection{The homology of a homology covering} \label{relations} In this subsection, we will assume that $S$ is a closed surface of genus 2. It will be useful to describe the homology covers of a surface $S$ using a fixed presentation \begin{equation} \label{presentations} \pi_1(S) = \langle a_1, b_1, a_{2}, b_{2} \,\, | \,\, [a_{1},b_{1}] [a_{2},b_{2}]\rangle \end{equation} where $a_i,b_i$ are curves representing a usual symplectic basis for $H_{1}(S;\mathbb{Z})$, satisfying $i(a_{i}, a_{j})=i(b_{i}, b_{j})=0$ and $i(a_{i},b_{j})=\delta^{i}_{j}$.\\ The homomorphism~\eqref{eq.phi} is given explicitly by \begin{equation}\label{thecover} \begin{split} a_{1}\mapsto (1,0,0,0)\\ b_{1}\mapsto (0,1,0,0)\\ a_{2}\mapsto (0,0,1,0)\\ b_{2}\mapsto (0,0,0,1) \end{split} \end{equation} It will now be shown how to use the relations of the deck transformation group to obtain a generating set for $H_{1}(\tilde{S},\mathbb{Z})$. This subsection is only needed to show the necessity of the assumption $m\geq 3$ in the construction later on. \\ Suppose now that $D$ is any group for which there is the short exact sequence \begin{equation} \label{phi} 1\rightarrow \pi_{1}(\tilde{S}) \rightarrow \pi_{1}(S) \xrightarrow{\phi} D \rightarrow 1 \end{equation} for some covering space $\tilde{S}$.
Let $\{g_{1}, \ldots, g_{n}\}$ be a set of elements of $\pi_{1}(S)$ whose image under $\phi$ is a generating set for $D$. Let $r=w(g_{1}, \ldots, g_{n})$ be a word in the elements $\{g_{1}, \ldots, g_{n}\}$ that is mapped to the identity by $\phi$, i.e. $w(\phi(g_{1}), \ldots, \phi(g_{n}))=I\in D$. The word $r$ could either be nontrivial in $\pi_{1}(S)$, or it could be a product of conjugates of the relation $[a_{1},b_{1}][a_{2},b_{2}]$ in $\pi_{1}(S)$.\\ Note that $\{\phi(a_{1}), \phi(a_{2}), \phi(b_{1}), \phi(b_{2})\}$ is a generating set for $D$, where $\{a_{1}, a_{2}, b_{1}, b_{2}\}$ is the choice of generating set from Equation \eqref{presentations}. Due to the assumption that the genus of $S$ is two, $D$ can always be generated by four elements.\\ If $r=r(g_1,\ldots,g_n)$ is a word in $g_1,\ldots,g_n$, then $\phi(r)=r(\phi(g_1),\ldots,\phi(g_n))$. A set of words $\{r_{1}(g_{1}, \ldots, g_{n}), r_{2}(g_{1}, \ldots, g_{n}), \ldots, r_{k}(g_{1}, \ldots, g_{n})\}$ is a complete set of relations for $D$ if \begin{equation*} \{\phi(g_{1}), \ldots, \phi(g_{n})\, \mid \, \phi(r_1),\ldots, \phi(r_k)\} \end{equation*} is a presentation for $D$.\\ The next lemma is a corollary of a presumably well-known group-theoretic statement. As the author could not find a reference, a proof is given here for the sake of completeness. \begin{lem} \label{generatingset} Suppose $D$ cannot be generated by fewer than four elements. Let $\{r_{1}, r_{2}, \ldots, r_{k}\}$ be a set of words in $\pi_{1}(S)$ mapping to a complete set of relations for $D$. Then the set of homology classes of connected components of the pre-images of the curves representing the words $r_{1}, r_{2}, \ldots, r_{k}$ is a generating set for $H_{1}(\tilde{S};\mathbb{Z})$. \end{lem} \begin{proof} Consider a presentation for $D$ given by \begin{equation*} \{\phi(a_{1}), \phi(a_{2}), \phi(b_{1}), \phi(b_{2}) \, \mid \, \phi(r_{1}^{'}), \phi(r_{2}^{'}), \ldots, \phi(r_{k}^{'})\} \end{equation*} From the exact sequence \eqref{phi}, we see that each of the $r_{i}^{'}$ represents an element of $\pi_{1}(\tilde{S})$.\\ Since the image of $\{r_{1}^{'}, r_{2}^{'}, \ldots, r_{k}^{'}\}$ under $\phi$ is a complete set of relations for $D$, it follows that any element of $\pi_{1}(\tilde{S})$ is a product of conjugates of elements of the set $\{r_{1}^{'}, r_{2}^{'}, \ldots, r_{k}^{'}\}$. Let $c$ be a loop representing the element $r_{i}^{'}$. The connected components of $p^{-1}(c)$ correspond to conjugates of $r_{i}^{'}$. Therefore, the homology classes of the connected components of the pre-images of the closed curves represented by the words $\{r_{1}^{'}, r_{2}^{'}, \ldots, r_{k}^{'}\}$ form a generating set for $H_{1}(\tilde{S};\mathbb{Z})$.\\ We now use the assumption that $D$ cannot be generated by fewer than four elements to show that this is true for any presentation of $D$. Another presentation for $D$ can be written as follows \begin{equation*} \{w_{1}, w_{2}, w_{3}, w_{4}\, \mid \, \phi(r_{1}), \phi(r_{2}), \ldots, \phi(r_{m})\} \end{equation*} where $w_{1}, w_{2}, w_{3}$ and $w_{4}$ are words in $\phi(a_{1})$, $\phi(a_{2})$, $\phi(b_{1})$ and $\phi(b_{2})$, and $r_{1}, r_{2}, \ldots, r_{m}$ are products of conjugates of $r_{1}^{'}, r_{2}^{'}, \ldots, r_{k}^{'}$. For the same reason as before, the homology classes of the connected components of the pre-images of the closed curves in $S$ represented by the words $\{r_{1}, r_{2}, \ldots, r_{m}\}$ form a generating set for $H_{1}(\tilde{S};\mathbb{Z})$.
\end{proof} \begin{rem} Note that the last sentence of the proof is not necessarily true for a proper subset of $\{r_{1}, r_{2}, \ldots, r_{m}\}$. If one or more of $w_{1}, w_{2}, w_{3}$ or $w_{4}$ is mapped to the identity in $D$, there is a presentation for $D$ with generators consisting of a proper subset of $\{w_{1}, w_{2}, w_{3}, w_{4}\}$ and relations consisting of a proper subset of $\{\phi(r_{1}), \phi(r_{2}), \ldots, \phi(r_{m})\}$. However, the assumption that $D$ cannot be generated by fewer than four elements rules out the possibility of a presentation for $D$ with relations consisting of a proper subset of $\{\phi(r_{1}), \phi(r_{2}), \ldots, \phi(r_{m})\}$. \end{rem} To use Lemma \ref{generatingset}, a complete set of relations for $D$ is needed. To start off with, there are the relations $\phi(a_{i})^{m}=1$ and $\phi(b_{i})^{m}=1$. These relations correspond to the submodule of $H_{1}(\tilde{S};\mathbb{Z})$ spanned by connected components of pre-images of the generators. In Figure \ref{cube} with $m=2$, some examples are drawn in red. Other relations are, for example, commutation relations or the relations stating that the remaining group elements have order $m$ in the deck transformation group. When $m=2$, the commutation relations are a consequence of the relations stating that all 16 group elements have order $m$. For example, \begin{align*} \phi(a_{1})\phi(b_{1}) &=(\phi(a_{1})\phi(b_{1}))^{-1} \qquad \text{ since }\phi(a_{1})\phi(b_{1}) \text{ has order 2}\\ &=\phi(b_{1})^{-1}\phi(a_{1})^{-1}\\ &=\phi(b_{1})\phi(a_{1}) \qquad\quad\,\,\,\, \text{ since }\phi(a_{1})\text{ and }\phi(b_{1})\text{ each have order 2} \end{align*} It will be shown later that this is a peculiarity of $m=2$; as shown in Lemma \ref{secondthought}, for $m>2$, we also need commutation relations. For $m=2$, the relations stating that all elements of the deck transformation group are of order two are a complete set of relations for $D$. By Lemma \ref{generatingset}, this gives us a set of simple, nonseparating curves whose pre-images span $H_{1}(\tilde{S};\mathbb{Z})$. \\ \subsection{Integral versus rational homology} \label{rationalvsintegral} Examples for which $sc_{p}(H_{1}(\tilde{S};\mathbb{Z}))$ cannot be all of $H_{1}(\tilde{S};\mathbb{Z})$ will now be constructed. This is done by showing that for $m>2$, connected components of pre-images of simple, nonseparating curves do not span $H_{1}(\tilde{S};\mathbb{Z})$; if we want to span $H_{1}(\tilde{S};\mathbb{Z})$ with connected components of pre-images of simple curves, separating curves will also be needed. The promised examples are then obtained by taking the composition of two such covering spaces, using Lemma \ref{nullnonsimple}. \begin{lem} In the homology covering space $p:\tilde{S}\rightarrow S$ of~\eqref{eq.phi} with $m\geq3$ and $g \geq 2$, connected components of pre-images of simple, nonseparating curves do not span $H_{1}(\tilde{S};\mathbb{Z})$. \label{secondthought} \end{lem} \begin{proof} The lemma will be proven for $m=3$ and it is claimed that analogous arguments work for $m>3$.\\ We will show by contradiction that the homology class in $H_1(\tilde S;\mathbb{Z})$ of a connected component of $p^{-1}([a_{1},b_{1}])$ is not in the span of homology classes of connected components of pre-images of simple, nonseparating curves of $S$. \\ Suppose a connected component of $p^{-1}([a_{1},b_{1}])$ is in the span of connected components of pre-images of simple, nonseparating curves.
In the group $\pi_{1}(S)$ there is therefore the relation \begin{equation} \label{relation} [a_{1},b_{1}]^{-1}\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3} \kappa=1 \in \pi_1(\tilde S) \end{equation} where $\kappa$ is in the subgroup $[\pi_{1}(\tilde{S}), \pi_{1}(\tilde{S})]$ of $\pi_{1}(S)$ and $\gamma_i$ are elements of $\pi_1(S)$ representing simple closed curves in $S$. \\ Let $N$ denote the subgroup of $\pi_{1}(S)$ normally generated by the words of Equation~\eqref{cubesandcomms}, \begin{equation} \label{cubesandcomms} [w,a_{i}^{\pm3}]\text{, }[w,b_{i}^{\pm3}] \text{ for }i\in \{1,\ldots,g\}, \,\, w \in \pi_1(S) \qquad \text{and} \qquad [w,[\pi_{1}(S),\pi_{1}(S)]], \,\, w \in \pi_1(S) \,. \end{equation} and let $N_{2}$ be the quotient \begin{equation*} \frac{\pi_{1}(S)}{[\pi_{1}(S),[\pi_{1}(S), \pi_{1}(S)]]} \end{equation*} \begin{claim} \label{symbolmanipulation} $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3} \in N$. \end{claim} \begin{proof}(of the Claim) The product $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ is null homologous in $S$, so for every generator $a_{i}$ (respectively $b_i$), $i\in \{1,2\}$, the sum of the powers in the product $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ must be zero. This implies that the elements of the set $\{\gamma_{h}\}$ cannot all be elements of the generating set $\{a_{1}, a_{2}, b_{1}, b_{2}\}$. Assume otherwise: then for every $a^{3}_{i}$, $i\in \{1,2\}$ (respectively $b^{3}_{i}$), there must be an $a^{-3}_i$ (respectively $b^{-3}_{i}$), in which case $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ would be in the commutator subgroup of $\pi_{1}(\tilde{S})$. Since Lemma \ref{nullnonsimple} states that a connected component of $p^{-1}([a_{1},b_{1}])$ is nonseparating, this would contradict Equation \eqref{relation}.\\ Suppose \begin{equation*} \gamma_{h}=x_{1}x_{2}\ldots x_{n}:=x_{1}y\text{, where each }x_{j}\in \{a_{1}, a_{2}, b_{1}, b_{2}\}\text{ for }j\in \{1,2,\ldots,n\}. \end{equation*} It will now be shown that $\gamma_{h}^{3}$ can be written as a product of cubes of generators, followed by an element of $N$.\\ It follows from the commutator identities $[x,yz]=[x,y][x,z]^{y}$ and $[zx,y]=[x,y]^{z}[z,y]$ that every element of $[\pi_{1}(\tilde{S}),\pi_{1}(\tilde{S})]$ is a product of conjugates of words from Equation \eqref{cubesandcomms}, hence $[\pi_{1}(\tilde{S}), \pi_{1}(\tilde{S})]\subseteq N$.\\ Note that \begin{align*} &\gamma_{h}^{3}=x_{1}yx_{1}yx_{1}y\\ &=x_{1}yx_{1}y^{2}x_{1}[x_{1}^{-1},y^{-1}]\text{, using }x_{1}y=yx_{1}[x_{1}^{-1},y^{-1}]\\ &=x_{1}^{2}y[y^{-1},x_{1}^{-1}]y^{2}x_{1}[x_{1}^{-1},y^{-1}] \end{align*} In $N_{2}$, this equals \begin{align*} &x_{1}^{2}y^{3}x_{1}[y^{-1}, x_{1}^{-1}][x_{1}^{-1}, y^{-1}]\\ &=x_{1}^{2}y^{3}x_{1}\\ &=x_{1}^{3}y^{3}[y^{-3},x_{1}^{-1}] \end{align*} In other words, \begin{equation} \label{moresymbols} (x_{1}y)^{3}=x_{1}^{3}y^{3}[y^{-3}, x_{1}^{-1}]n \end{equation} where $n\in N$.\\ It follows from Equation \eqref{moresymbols} and the commutator identity $[zx,y]=[x,y]^{z}[z,y]$ that $[y^{-3}, x_{1}^{-1}]\in N$. This implies that $\gamma_{h}^{3}=x_{1}^{3}y^{3}n_{1}$, where $n_{1}\in N$.\\ Note that rearranging the orders of cubes of generators and elements of $N$ only introduces more elements in $N$.
Repeating the argument just given on the shorter word $y$, and rearranging, shows that $\gamma_{h}^{3}$ is a product of cubes of generators, followed by an element of $N$.\\ It is therefore possible to write $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ as a product of cubes of generators, and elements of $N$. Again, rearranging the orders of cubes of generators and elements of $N$ only introduces more elements of $N$. It follows that $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ is a product of cubes of generators, and an element of $N$. Also, the products of the cubes of generators must be in $\pi_{1}(\tilde{S})$, by construction. In fact, the product of cubes must be in $[\pi_{1}(\tilde{S}), \pi_{1}(\tilde{S})]$ because Equation \eqref{relation} implies the sum of the powers of any generator in the product must be zero, otherwise $\gamma_{1}^{3}\gamma_{2}^{3}\gamma_{3}^{3}\ldots\gamma_{k}^{3}$ could not be null homologous in $S$. Since $[\pi_{1}(\tilde{S}), \pi_{1}(\tilde{S})]$ is contained in $N$, the claim follows. \end{proof} A contradiction will now be obtained by showing that $[a_{1}, b_{1}]$ cannot be contained in $N$.\\ Let $\psi$ be the homomorphism taking an element of $\pi_{1}(S)$ to its coset in $\pi_{1}(S)/N$, and $\psi_1$ be the homomorphism taking an element of $\pi_{1}(S)$ to its coset in $\pi_{1}(S)/[\pi_{1}(S),[\pi_{1}(S),\pi_{1}(S)]]$. We now compute the image of $[\pi_{1}(S), \pi_{1}(S)]$ under $\psi$.\\ It follows from the commutator identities $[x,yz]=[x,y][x,z]^{y}$ and $[zx,y]=[x,y]^{z}[z,y]$ that $[\pi_{1}(S),\pi_{1}(S)]$ is generated by conjugates of commutators of generators. Since $[\pi_{1}(S), \pi_{1}(S)]$ maps to a subgroup in the center of the image of $\psi_{1}$, it follows that $\psi_{1}([\pi_{1}(S), \pi_{1}(S)])$ is generated by the image of commutators of generators of $\pi_{1}(S)$. The group $\psi_{1}([\pi_{1}(S), \pi_{1}(S)])$ is therefore a finitely generated, Abelian group. Note that $[\pi_{1}(S), \pi_{1}(S)]$ is a free group, and $\psi_{1}([\pi_{1}(S), \pi_{1}(S)])$ is its abelianisation, so $\psi_{1}([\pi_{1}(S), \pi_{1}(S)])$ can not be the trivial group.\\ Again using the commutator identities, it follows that $\psi_{1}([w,a_{i}^{3}])=\psi_{1}([w,a_{i}]^{3})$. Similarly for $\psi_{1}([w,b_{i}^{3}])$. If the word $w$ is not a generator, it follows from the commutator identities and the fact that the image of the commutator subgroup under $\psi_1$ is in the center, that $\psi_{1}([w,a_{i}]^{3})$ can be written as a product of cubes of commutators of generators. Since the word $w$ can be taken to be any of the four generators, it follows that the image of $[\pi_{1}(S),\pi_{1}(S)]$ under $\psi$ is a finitely generated Abelian group, each element of which has order three. In particular, $\psi([a_{1},b_{1}])$ is not the identity. This proves the promised contradiction from which the lemma follows. \end{proof} \begin{rem} The argument in Lemma \ref{secondthought} does not work when $m=2$. This is because in this case we do not get any commutators of commutators in the expression for $A$, hence there is no contradiction to the existence of Equation \eqref{relation}. \end{rem} It follows from arguments of Boggi-Looijenga, \cite{Boggiemail} and \cite{Looijenga}, that when $D$ is Abelian, $H_{1}(\tilde{S};\mathbb{Q})$ is generated by homology classes of lifts of simple, nonseparating curves. 
In Figure \ref{holytorus}, for example, it is not hard to see that pairs of connected components of pre-images of a simple null homologous curve $n$ are in the span of connected components of pre-images of generators. When $m>3$, $m[\tilde{n}]$ is in the integral span of homology classes of connected components of lifts of simple, nonseparating curves. Hence $[\tilde{n}]$ is in the rational span, but not, as shown in Lemma \ref{secondthought}, in the integral span. \subsection{Proof of Theorem~\ref{thm.1}} \label{sub:non_example} The promised families of examples for which lifts of simple curves do not span the integral homology of the covering space will now be constructed. The examples are iterated homology coverings of $S$, with at least two iterations, where the last homology covering uses an integer $m \geq 3$.\\ Let $\tilde{S}\rightarrow S$ be the covering with $m\geq2$ just studied. Repeat the same construction, only with larger genus, and $m>2$, on $\tilde{S}$ to obtain a cover $\tilde{\tilde{S}}\rightarrow S$ factoring through $\tilde{S}$. That the result is a regular cover follows from the fact that it is a composition of two characteristic covers.\\ It is possible to see almost immediately that $sc_{p}(H_{1}(\tilde{\tilde{S}},\mathbb{Z}))$ can not be all of $H_{1}(\tilde{\tilde{S}},\mathbb{Z})$. In Lemma \ref{secondthought} we saw that 1-lifts of simple null homologous curves from $\tilde{S}$ were needed to generate $H_{1}(\tilde{\tilde{S}}; \mathbb{Z})$. However, by Lemma \ref{nullnonsimple}, no simple null homologous curves in $\tilde{S}$ project onto simple curves in $S$.\\ \end{document}
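A quick computational check of the identity used in the proof of Lemma \ref{secondthought}: modulo $[\pi_{1}(S),[\pi_{1}(S),\pi_{1}(S)]]$ commutators are central, and in any group with central commutators the relation $(xy)^{3}=x^{3}y^{3}[y^{-3},x^{-1}]$ holds exactly, so the correction term $n\in N$ disappears. The following sketch, which assumes SymPy is available and uses the commutator convention $[a,b]=aba^{-1}b^{-1}$ compatible with $x_{1}y=yx_{1}[x_{1}^{-1},y^{-1}]$, verifies this in the integer Heisenberg group, a convenient stand-in for the quotient $N_{2}$; it is an illustration only, not part of the paper's argument.
# Check (x*y)**3 == x**3 * y**3 * [y**-3, x**-1] when commutators are central.
# The 3x3 upper unitriangular integer matrices (the Heisenberg group) have
# nilpotency class two, so they model working modulo [pi_1(S), [pi_1(S), pi_1(S)]].
from sympy import Matrix

def comm(a, b):
    # commutator convention [a, b] = a b a^{-1} b^{-1}
    return a * b * a.inv() * b.inv()

x = Matrix([[1, 1, 0], [0, 1, 0], [0, 0, 1]])
y = Matrix([[1, 0, 0], [0, 1, 1], [0, 0, 1]])

lhs = (x * y) ** 3
rhs = x ** 3 * y ** 3 * comm(y ** -3, x.inv())
print(lhs == rhs)  # True: the identity holds on the nose here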
arXiv
Nonlocal operator In mathematics, a nonlocal operator is a mapping which maps functions on a topological space to functions, in such a way that the value of the output function at a given point cannot be determined solely from the values of the input function in any neighbourhood of any point. An example of a nonlocal operator is the Fourier transform. Formal definition Let $X$ be a topological space, $Y$ a set, $F(X)$ a function space containing functions with domain $X$, and $G(Y)$ a function space containing functions with domain $Y$. Two functions $u$ and $v$ in $F(X)$ are called equivalent at $x\in X$ if there exists a neighbourhood $N$ of $x$ such that $u(x')=v(x')$ for all $x'\in N$. An operator $A:F(X)\to G(Y)$ is said to be local if for every $y\in Y$ there exists an $x\in X$ such that $Au(y)=Av(y)$ for all functions $u$ and $v$ in $F(X)$ which are equivalent at $x$. A nonlocal operator is an operator which is not local. For a local operator it is possible (in principle) to compute the value $Au(y)$ using only knowledge of the values of $u$ in an arbitrarily small neighbourhood of a point $x$. For a nonlocal operator this is not possible. Examples Differential operators are examples of local operators. A large class of (linear) nonlocal operators is given by the integral transforms, such as the Fourier transform and the Laplace transform. For an integral transform of the form $(Au)(y)=\int \limits _{X}u(x)\,K(x,y)\,dx,$ where $K$ is some kernel function, it is necessary to know the values of $u$ almost everywhere on the support of $K(\cdot ,y)$ in order to compute the value of $Au$ at $y$. An example of a singular integral operator is the fractional Laplacian $(-\Delta )^{s}f(x)=c_{d,s}\int \limits _{\mathbb {R} ^{d}}{\frac {f(x)-f(y)}{|x-y|^{d+2s}}}\,dy.$ The prefactor $c_{d,s}:={\frac {4^{s}\Gamma (d/2+s)}{\pi ^{d/2}|\Gamma (-s)|}}$ involves the Gamma function and serves as a normalizing factor. The fractional Laplacian plays a role in, for example, the study of nonlocal minimal surfaces.[1] Applications Some examples of applications of nonlocal operators are: • Time series analysis using Fourier transformations • Analysis of dynamical systems using Laplace transformations • Image denoising using non-local means[2] • Modelling Gaussian blur or motion blur in images using convolution with a blurring kernel or point spread function See also • Fractional calculus • Linear map • Nonlocal Lagrangian • Action at a distance References 1. Caffarelli, L.; Roquejoffre, J.-M.; Savin, O. (2010). "Nonlocal minimal surfaces". Communications on Pure and Applied Mathematics: n/a. arXiv:0905.1183. doi:10.1002/cpa.20331. S2CID 10480423. 2. Buades, A.; Coll, B.; Morel, J.-M. (2005). A Non-Local Algorithm for Image Denoising. pp. 60–65. doi:10.1109/CVPR.2005.38. ISBN 9780769523729. S2CID 11206708. External links • Nonlocal equations wiki
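The locality dichotomy above can be seen numerically. The sketch below (an illustrative addition, assuming NumPy; the Gaussian input, the far-away bump perturbation and the kernel $1/(1+x^{2})$ are arbitrary choices) perturbs the input function far from the origin: a local operator (a finite-difference derivative) is unchanged at the origin, while a nonlocal operator (an integral transform realised as a discrete convolution with a kernel of unbounded support) is not.
# Compare a local operator (finite-difference derivative) with a nonlocal one
# (a discrete integral transform) after perturbing the input far from x = 0.
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
h = x[1] - x[0]
u = np.exp(-x**2)
v = u.copy()
v[x > 5.0] += 1.0                       # perturbation supported far from x = 0

def local_derivative(f):
    return np.gradient(f, h)            # uses only neighbouring samples

def integral_transform(f):
    kernel = 1.0 / (1.0 + x**2)         # kernel with unbounded support
    return h * np.convolve(f, kernel, mode="same")

i0 = len(x) // 2                        # grid index of the point x = 0
print(local_derivative(v)[i0] - local_derivative(u)[i0])      # 0.0
print(integral_transform(v)[i0] - integral_transform(u)[i0])  # nonzero (about 0.1)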
Wikipedia
Whitehead's lemma For a lemma on Lie algebras, see Whitehead's lemma (Lie algebras). Whitehead's lemma is a technical result in abstract algebra used in algebraic K-theory. It states that a matrix of the form ${\begin{bmatrix}u&0\\0&u^{-1}\end{bmatrix}}$ is equivalent to the identity matrix by elementary transformations (that is, transvections): ${\begin{bmatrix}u&0\\0&u^{-1}\end{bmatrix}}=e_{21}(u^{-1})e_{12}(1-u)e_{21}(-1)e_{12}(1-u^{-1}).$ Here, $e_{ij}(s)$ indicates the elementary matrix whose diagonal entries are $1$, whose $(i,j)$ entry is $s$, and whose remaining entries are $0$. The name "Whitehead's lemma" also refers to the closely related result that the derived group of the stable general linear group is the group generated by elementary matrices.[1][2] In symbols, $\operatorname {E} (A)=[\operatorname {GL} (A),\operatorname {GL} (A)]$. This holds for the stable group (the direct limit of matrices of finite size) over any ring, but not in general for the unstable groups, even over a field. For instance for $\operatorname {GL} (2,\mathbb {Z} /2\mathbb {Z} )$ one has: $\operatorname {Alt} (3)\cong [\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} ),\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} )]<\operatorname {E} _{2}(\mathbb {Z} /2\mathbb {Z} )=\operatorname {SL} _{2}(\mathbb {Z} /2\mathbb {Z} )=\operatorname {GL} _{2}(\mathbb {Z} /2\mathbb {Z} )\cong \operatorname {Sym} (3),$ where Alt(3) and Sym(3) denote the alternating and symmetric group on 3 letters, respectively. See also • Special linear group#Relations to other subgroups of GL(n,A) References 1. Milnor, John Willard (1971). Introduction to algebraic K-theory. Annals of Mathematics Studies. Vol. 72. Princeton, NJ: Princeton University Press. Section 3.1. MR 0349811. Zbl 0237.18005. 2. Snaith, V. P. (1994). Explicit Brauer Induction: With Applications to Algebra and Number Theory. Cambridge Studies in Advanced Mathematics. Vol. 40. Cambridge University Press. p. 164. ISBN 0-521-46015-8. Zbl 0991.20005.
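The displayed factorisation is easy to verify symbolically. A minimal sketch, assuming SymPy is available (the helper names e12 and e21 are just local definitions of the 2×2 elementary matrices):
# Verify diag(u, 1/u) = e21(1/u) * e12(1 - u) * e21(-1) * e12(1 - 1/u)
# for a generic nonzero scalar u.
from sympy import symbols, Matrix, diag, simplify

u = symbols('u', nonzero=True)

def e12(s):
    # 2x2 elementary matrix with s in the (1, 2) entry
    return Matrix([[1, s], [0, 1]])

def e21(s):
    # 2x2 elementary matrix with s in the (2, 1) entry
    return Matrix([[1, 0], [s, 1]])

product = e21(1/u) * e12(1 - u) * e21(-1) * e12(1 - 1/u)
print(simplify(product) == diag(u, 1/u))  # True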
Wikipedia
Von Neumann–Bernays–Gödel set theory In the foundations of mathematics, von Neumann–Bernays–Gödel set theory (NBG) is an axiomatic set theory that is a conservative extension of Zermelo–Fraenkel–choice set theory (ZFC). NBG introduces the notion of class, which is a collection of sets defined by a formula whose quantifiers range only over sets. NBG can define classes that are larger than sets, such as the class of all sets and the class of all ordinals. Morse–Kelley set theory (MK) allows classes to be defined by formulas whose quantifiers range over classes. NBG is finitely axiomatizable, while ZFC and MK are not. A key theorem of NBG is the class existence theorem, which states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. This class is built by mirroring the step-by-step construction of the formula with classes. Since all set-theoretic formulas are constructed from two kinds of atomic formulas (membership and equality) and finitely many logical symbols, only finitely many axioms are needed to build the classes satisfying them. This is why NBG is finitely axiomatizable. Classes are also used for other constructions, for handling the set-theoretic paradoxes, and for stating the axiom of global choice, which is stronger than ZFC's axiom of choice. John von Neumann introduced classes into set theory in 1925. The primitive notions of his theory were function and argument. Using these notions, he defined class and set.[1] Paul Bernays reformulated von Neumann's theory by taking class and set as primitive notions.[2] Kurt Gödel simplified Bernays' theory for his relative consistency proof of the axiom of choice and the generalized continuum hypothesis.[3] Classes in set theory The uses of classes Classes have several uses in NBG: • They produce a finite axiomatization of set theory.[4] • They are used to state a "very strong form of the axiom of choice"[5]—namely, the axiom of global choice: There exists a global choice function $G$ defined on the class of all nonempty sets such that $G(x)\in x$ for every nonempty set $x.$ This is stronger than ZFC's axiom of choice: For every set $s$ of nonempty sets, there exists a choice function $f$ defined on $s$ such that $f(x)\in x$ for all $x\in s.$[lower-alpha 1] • The set-theoretic paradoxes are handled by recognizing that some classes cannot be sets. For example, assume that the class $Ord$ of all ordinals is a set. Then $Ord$ is a transitive set well-ordered by $\in $. So, by definition, $Ord$ is an ordinal. Hence, $Ord\in Ord$, which contradicts $\in $ being a well-ordering of $Ord.$ Therefore, $Ord$ is not a set. A class that is not a set is called a proper class, $Ord$ is a proper class.[6] • Proper classes are useful in constructions. In his proof of the relative consistency of the axiom of global choice and the generalized continuum hypothesis, Gödel used proper classes to build the constructible universe. He constructed a function on the class of all ordinals that, for each ordinal, builds a constructible set by applying a set-building operation to previously constructed sets. The constructible universe is the image of this function.[7] Axiom schema versus class existence theorem Once classes are added to the language of ZFC, it is easy to transform ZFC into a set theory with classes. First, the axiom schema of class comprehension is added. 
This axiom schema states: For every formula $\phi (x_{1},\ldots ,x_{n})$ that quantifies only over sets, there exists a class $A$ consisting of the $n$-tuples satisfying the formula—that is, $\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n})].$ Then the axiom schema of replacement is replaced by a single axiom that uses a class. Finally, ZFC's axiom of extensionality is modified to handle classes: If two classes have the same elements, then they are identical. The other axioms of ZFC are not modified.[8] This theory is not finitely axiomatized. ZFC's replacement schema has been replaced by a single axiom, but the axiom schema of class comprehension has been introduced. To produce a theory with finitely many axioms, the axiom schema of class comprehension is first replaced with finitely many class existence axioms. Then these axioms are used to prove the class existence theorem, which implies every instance of the axiom schema.[8] The proof of this theorem requires only seven class existence axioms, which are used to convert the construction of a formula into the construction of a class satisfying the formula. Axiomatization of NBG Classes and sets NBG has two types of objects: classes and sets. Intuitively, every set is also a class. There are two ways to axiomatize this. Bernays used many-sorted logic with two sorts: classes and sets.[2] Gödel avoided sorts by introducing primitive predicates: ${\mathfrak {Cls}}(A)$ for "$A$ is a class" and ${\mathfrak {M}}(A)$ for "$A$ is a set" (in German, "set" is Menge). He also introduced axioms stating that every set is a class and that if class $A$ is a member of a class, then $A$ is a set.[9] Using predicates is the standard way to eliminate sorts. Elliott Mendelson modified Gödel's approach by having everything be a class and defining the set predicate $M(A)$ as $\exists C(A\in C).$[10] This modification eliminates Gödel's class predicate and his two axioms. Bernays' two-sorted approach may appear more natural at first, but it creates a more complex theory.[lower-alpha 2] In Bernays' theory, every set has two representations: one as a set and the other as a class. Also, there are two membership relations: the first, denoted by "∈", is between two sets; the second, denoted by "η", is between a set and a class.[2] This redundancy is required by many-sorted logic because variables of different sorts range over disjoint subdomains of the domain of discourse. The differences between these two approaches do not affect what can be proved, but they do affect how statements are written. In Gödel's approach, $A\in C$ where $A$ and $C$ are classes is a valid statement. In Bernays' approach this statement has no meaning. However, if $A$ is a set, there is an equivalent statement: Define "set $a$ represents class $A$" if they have the same sets as members—that is, $\forall x(x\in a\iff x\;\eta \;A).$ The statement $a\;\eta \;C$ where set $a$ represents class $A$ is equivalent to Gödel's $A\in C.$[2] The approach adopted in this article is that of Gödel with Mendelson's modification. This means that NBG is an axiomatic system in first-order predicate logic with equality, and its only primitive notions are class and the membership relation. Definitions and axioms of extensionality and pairing A set is a class that belongs to at least one class: $A$ is a set if and only if $\exists C(A\in C)$. 
A class that is not a set is called a proper class: $A$ is a proper class if and only if $\forall C(A\notin C)$.[12] Therefore, every class is either a set or a proper class, and no class is both. Gödel introduced the convention that uppercase variables range over classes, while lowercase variables range over sets.[9] Gödel also used names that begin with an uppercase letter to denote particular classes, including functions and relations defined on the class of all sets. Gödel's convention is used in this article. It allows us to write: • $\exists x\,\phi (x)$ instead of $\exists x{\bigl (}\exists C(x\in C)\land \phi (x){\bigr )}$ • $\forall x\,\phi (x)$ instead of $\forall x{\bigl (}\exists C(x\in C)\implies \phi (x){\bigr )}$ The following axioms and definitions are needed for the proof of the class existence theorem. Axiom of extensionality.  If two classes have the same elements, then they are identical. $\forall A\,\forall B\,[\forall x(x\in A\iff x\in B)\implies A=B]$ [13] This axiom generalizes ZFC's axiom of extensionality to classes. Axiom of pairing.  If $x$ and $y$ are sets, then there exists a set $p$ whose only members are $x$ and $y$. $\forall x\,\forall y\,\exists p\,\forall z\,[z\in p\iff (z=x\,\lor \,z=y)]$ [14] As in ZFC, the axiom of extensionality implies the uniqueness of the set $p$, which allows us to introduce the notation $\{x,y\}.$ Ordered pairs are defined by: $(x,y)=\{\{x\},\{x,y\}\}$ Tuples are defined inductively using ordered pairs: $(x_{1})=x_{1},$ ${\text{For }}n>1\!:(x_{1},\ldots ,x_{n-1},x_{n})=((x_{1},\ldots ,x_{n-1}),x_{n}).$[lower-alpha 3] Class existence axioms and axiom of regularity Class existence axioms will be used to prove the class existence theorem: For every formula in $n$ free set variables that quantifies only over sets, there exists a class of $n$-tuples that satisfy it. The following example starts with two classes that are functions and builds a composite function. This example illustrates the techniques that are needed to prove the class existence theorem, which lead to the class existence axioms that are needed. Example 1:  If the classes $F$ and $G$ are functions, then the composite function $G\circ F$ is defined by the formula: $\exists t[(x,t)\in F\,\land \,(t,y)\in G].$ Since this formula has two free set variables, $x$ and $y,$ the class existence theorem constructs the class of ordered pairs: $G\circ F\,=\,\{(x,y):\exists t[(x,t)\in F\,\land \,(t,y)\in G]\}.$ Because this formula is built from simpler formulas using conjunction $\land $ and existential quantification $\exists $, class operations are needed that take classes representing the simpler formulas and produce classes representing the formulas with $\land $ and $\exists $. To produce a class representing a formula with $\land $, intersection used since $x\in A\cap B\iff x\in A\land x\in B.$ To produce a class representing a formula with $\exists $, the domain is used since $x\in Dom(A)\iff \exists t[(x,t)\in A].$ Before taking the intersection, the tuples in $F$ and $G$ need an extra component so they have the same variables. The component $y$ is added to the tuples of $F$ and $x$ is added to the tuples of $G$: $F'=\{(x,t,y):(x,t)\in F\}\,$ and $\,G'=\{(t,y,x):(t,y)\in G\}$ In the definition of $F',$ the variable $y$ is not restricted by the statement $(x,t)\in F,$ so $y$ ranges over the class $V$ of all sets. 
Similarly, in the definition of $G',$ the variable $x$ ranges over $V.$ So an axiom is needed that adds an extra component (whose values range over $V$) to the tuples of a given class. Next, the variables are put in the same order to prepare for the intersection: $F''=\{(x,y,t):(x,t)\in F\}\,$ and $\,G''=\{(x,y,t):(t,y)\in G\}$ To go from $F'$ to $F''$ and from $G'$ to $G''$ requires two different permutations, so axioms that support permutations of tuple components are needed. The intersection of $F''$ and $G''$ handles $\land $: $F''\cap G''=\{(x,y,t):(x,t)\in F\,\land \,(t,y)\in G\}$ Since $(x,y,t)$ is defined as $((x,y),t)$, taking the domain of $F''\cap G''$ handles $\exists t$ and produces the composite function: $G\circ F=Dom(F''\cap G'')=\{(x,y):\exists t((x,t)\in F\,\land \,(t,y)\in G)\}$ So axioms of intersection and domain are needed. The class existence axioms are divided into two groups: axioms handling language primitives and axioms handling tuples. There are four axioms in the first group and three axioms in the second group.[lower-alpha 4] Axioms for handling language primitives: Membership.  There exists a class $E$ containing all the ordered pairs whose first component is a member of the second component. $\exists E\,\forall x\,\forall y\,[(x,y)\in E\iff x\in y]\!$ [18] Intersection (conjunction).  For any two classes $A$ and $B$, there is a class $C$ consisting precisely of the sets that belong to both $A$ and $B$. $\forall A\,\forall B\,\exists C\,\forall x\,[x\in C\iff (x\in A\,\land \,x\in B)]$ [19] Complement (negation).  For any class $A$, there is a class $B$ consisting precisely of the sets not belonging to $A$. $\forall A\,\exists B\,\forall x\,[x\in B\iff \neg (x\in A)]$ [20] Domain (existential quantifier).  For any class $A$, there is a class $B$ consisting precisely of the first components of the ordered pairs of $A$. $\forall A\,\exists B\,\forall x\,[x\in B\iff \exists y((x,y)\in A)]$ [21] By the axiom of extensionality, class $C$ in the intersection axiom and class $B$ in the complement and domain axioms are unique. They will be denoted by: $A\cap B,$ $\complement A,$ and $Dom(A),$ respectively.[lower-alpha 5] On the other hand, extensionality is not applicable to $E$ in the membership axiom since it specifies only those sets in $E$ that are ordered pairs. The first three axioms imply the existence of the empty class and the class of all sets: The membership axiom implies the existence of a class $E.$ The intersection and complement axioms imply the existence of $E\cap \complement E$, which is empty. By the axiom of extensionality, this class is unique; it is denoted by $\emptyset .$ The complement of $\emptyset $ is the class $V$ of all sets, which is also unique by extensionality. The set predicate $M(A)$, which was defined as $\exists C(A\in C)$, is now redefined as $A\in V$ to avoid quantifying over classes. Axioms for handling tuples: Product by $V$.  For any class $A$, there is a class $B$ consisting of the ordered pairs whose first component belongs to $A$. $\forall A\,\exists B\,\forall u\,[u\in B\iff \exists x\,\exists y\,(u=(x,y)\land x\in A)]$[23] Circular permutation.  For any class $A$, there is a class $B$ whose 3‑tuples are obtained by applying the circular permutation $(y,z,x)\mapsto (x,y,z)$ to the 3‑tuples of $A$. $\forall A\,\exists B\,\forall x\,\forall y\,\forall z\,[(x,y,z)\in B\iff (y,z,x)\in A]$[24] Transposition.  
For any class $A$, there is a class $B$ whose 3‑tuples are obtained by transposing the last two components of the 3‑tuples of $A$. $\forall A\,\exists B\,\forall x\,\forall y\,\forall z\,[(x,y,z)\in B\iff (x,z,y)\in A]$[25] By extensionality, the product by $V$ axiom implies the existence of a unique class, which is denoted by $A\times V.$ This axiom is used to define the class $V^{n}$ of all $n$-tuples: $V^{1}=V$ and $V^{n+1}=V^{n}\times V.\,$ If $A$ is a class, extensionality implies that $A\cap V^{n}$ is the unique class consisting of the $n$-tuples of $A.$ For example, the membership axiom produces a class $E$ that may contain elements that are not ordered pairs, while the intersection $E\cap V^{2}$ contains only the ordered pairs of $E$. The circular permutation and transposition axioms do not imply the existence of unique classes because they specify only the 3‑tuples of class $B.$ By specifying the 3‑tuples, these axioms also specify the $n$-tuples for $n\geq 4$ since: $(x_{1},\ldots ,x_{n-2},x_{n-1},x_{n})=((x_{1},\ldots ,x_{n-2}),x_{n-1},x_{n}).$ The axioms for handling tuples and the domain axiom imply the following lemma, which is used in the proof of the class existence theorem. Tuple lemma —  1. $\forall A\,\exists B_{1}\,\forall x\,\forall y\,\forall z\,[(z,x,y)\in B_{1}\iff (x,y)\in A]$ 2. $\forall A\,\exists B_{2}\,\forall x\,\forall y\,\forall z\,[(x,z,y)\in B_{2}\iff (x,y)\in A]$ 3. $\forall A\,\exists B_{3}\,\forall x\,\forall y\,\forall z\,[(x,y,z)\in B_{3}\iff (x,y)\in A]$ 4. $\forall A\,\exists B_{4}\,\forall x\,\forall y\,\forall z\,[(y,x)\in B_{4}\iff (x,y)\in A]$ Proof • Class $B_{3}$: Apply product by $V$ to $A$ to produce $B_{3}.$ • Class $B_{2}$: Apply transposition to $B_{3}$ to produce $B_{2}.$ • Class $B_{1}$: Apply circular permutation to $B_{3}$ to produce $B_{1}.$ • Class $B_{4}$: Apply circular permutation to $B_{2}$, then apply domain to produce $B_{4}.$ One more axiom is needed to prove the class existence theorem: the axiom of regularity. Since the existence of the empty class has been proved, the usual statement of this axiom is given.[lower-alpha 6] Axiom of regularity.  Every nonempty set has at least one element with which it has no element in common. $\forall a\,[a\neq \emptyset \implies \exists u(u\in a\land u\cap a=\emptyset )].$ This axiom implies that a set cannot belong to itself: Assume that $x\in x$ and let $a=\{x\}.$ Then $x\cap a\neq \emptyset $ since $x\in x\cap a.$ This contradicts the axiom of regularity because $x$ is the only element in $a.$ Therefore, $x\notin x.$ The axiom of regularity also prohibits infinite descending membership sequences of sets: $\cdots \in x_{n+1}\in x_{n}\in \cdots \in x_{1}\in x_{0}.$ Gödel stated regularity for classes rather than for sets in his 1940 monograph, which was based on lectures given in 1938.[26] In 1939, he proved that regularity for sets implies regularity for classes.[27] Class existence theorem Class existence theorem — Let $\phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})$ be a formula that quantifies only over sets and contains no free variables other than $x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m}$ (not necessarily all of these). Then for all $Y_{1},\dots ,Y_{m}$, there exists a unique class $A$ of $n$-tuples such that: $\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].$ The class $A$ is denoted by $\{(x_{1},\dots ,x_{n}):\phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})\}.$[lower-alpha 7] The theorem's proof will be done in two steps: 1. 
Transformation rules are used to transform the given formula $\phi $ into an equivalent formula that simplifies the inductive part of the proof. For example, the only logical symbols in the transformed formula are $\neg $, $\land $, and $\exists $, so the induction handles logical symbols with just three cases. 2. The class existence theorem is proved inductively for transformed formulas. Guided by the structure of the transformed formula, the class existence axioms are used to produce the unique class of $n$-tuples satisfying the formula. Transformation rules.  In rules 1 and 2 below, $\Delta $ and $\Gamma $ denote set or class variables. These two rules eliminate all occurrences of class variables before an $\in $ and all occurrences of equality. Each time rule 1 or 2 is applied to a subformula, $i$ is chosen so that $z_{i}$ differs from the other variables in the current formula. The three rules are repeated until there are no subformulas to which they can be applied. This produces a formula that is built only with $\neg $, $\land $, $\exists $, $\in $, set variables, and class variables $Y_{k}$ where $Y_{k}$ does not appear before an $\in $. 1. $\,Y_{k}\in \Gamma $ is transformed into $\exists z_{i}(z_{i}=Y_{k}\,\land \,z_{i}\in \Gamma ).$ 2. Extensionality is used to transform $\Delta =\Gamma $ into $\forall z_{i}(z_{i}\in \Delta \iff z_{i}\in \Gamma ).$ 3. Logical identities are used to transform subformulas containing $\lor ,\implies ,\iff ,$ and $\forall $ to subformulas that only use $\neg ,\land ,$ and $\exists .$ Transformation rules: bound variables.  Consider the composite function formula of example 1 with its free set variables replaced by $x_{1}$ and $x_{2}$: $\exists t[(x_{1},t)\in F\,\land \,(t,x_{2})\in G].$ The inductive proof will remove $\exists t$, which produces the formula $(x_{1},t)\in F\land (t,x_{2})\in G.$ However, since the class existence theorem is stated for subscripted variables, this formula does not have the form expected by the induction hypothesis. This problem is solved by replacing the variable $t$ with $x_{3}.$ Bound variables within nested quantifiers are handled by increasing the subscript by one for each successive quantifier. This leads to rule 4, which must be applied after the other rules since rules 1 and 2 produce quantified variables. 1. If a formula contains no free set variables other than $x_{1},\dots ,x_{n},$ then bound variables that are nested within $q$ quantifiers are replaced with $x_{n+q}$. These variables have (quantifier) nesting depth $q$. Example 2:  Rule 4 is applied to the formula $\phi (x_{1})$ that defines the class consisting of all sets of the form $\{\emptyset ,\{\emptyset ,\dots \},\dots \}.$ That is, sets that contain at least $\emptyset $ and a set containing $\emptyset $ — for example, $\{\emptyset ,\{\emptyset ,a,b,c\},d,e\}$ where $a,b,c,d,$ and $e$ are sets. ${\begin{aligned}\phi (x_{1})\,&=\,\exists u\;\,[\,u\in x_{1}\,\land \,\neg \exists v\;\,(\;v\,\in \,u\,)]\,\land \,\,\exists w\;{\bigl (}w\in x_{1}\,\land \,\exists y\;\,[(\;y\,\in w\;\land \;\neg \exists z\;\,(\;z\,\in \,y\,)]{\bigr )}\\\phi _{r}(x_{1})\,&=\,\exists x_{2}[x_{2}\!\in \!x_{1}\,\land \,\neg \exists x_{3}(x_{3}\!\in \!x_{2})]\,\land \,\,\exists x_{2}{\bigl (}x_{2}\!\in \!x_{1}\,\land \,\exists x_{3}[(x_{3}\!\in \!x_{2}\,\land \,\neg \exists x_{4}(x_{4}\!\in \!x_{3})]{\bigr )}\end{aligned}}$ Since $x_{1}$ is the only free variable, $n=1.$ The quantified variable $x_{3}$ appears twice in $x_{3}\in x_{2}$ at nesting depth 2. 
Its subscript is 3 because $n+q=1+2=3.$ If two quantifier scopes are at the same nesting depth, they are either identical or disjoint. The two occurrences of $x_{3}$ are in disjoint quantifier scopes, so they do not interact with each other. Proof of the class existence theorem.  The proof starts by applying the transformation rules to the given formula to produce a transformed formula. Since this formula is equivalent to the given formula, the proof is completed by proving the class existence theorem for transformed formulas. Proof of the class existence theorem for transformed formulas The following lemma is used in the proof. Expansion lemma — Let $1\leq i<j\leq n,$ and let $P$ be a class containing all the ordered pairs $(x_{i},x_{j})$ satisfying $R(x_{i},x_{j}).$ That is, $P\supseteq \{(x_{i},x_{j}):R(x_{i},x_{j})\}.$ Then $P$ can be expanded into the unique class $Q$ of $n$-tuples satisfying $R(x_{i},x_{j})$. That is, $Q=\{(x_{1},\ldots ,x_{n}):R(x_{i},x_{j})\}.$ Proof: 1. If $i=1,$ let $P_{1}=P.$ Otherwise, $i>1,$ so components are added in front of $x_{i}{\text{:}}$ apply the tuple lemma's statement 1 to $P$ with $z=(x_{1},\dots ,x_{i-1}).$ This produces a class $P_{1}$ containing all the $(i+1)$-tuples $((x_{1},\dots ,x_{i-1}),x_{i},x_{j})=(x_{1},\dots ,x_{i-1},x_{i},x_{j})$ satisfying $R(x_{i},x_{j}).$ 2. If $j=i+1,$ let $P_{2}=P_{1}.$ Otherwise, $j>i+1,$ so components are added between $x_{i}$ and $x_{j}{\text{:}}$ add the components $x_{i+1},\dots ,x_{j-1}$ one by one using the tuple lemma's statement 2. This produces a class $P_{2}$ containing all the $j$-tuples $(((\cdots ((x_{1},\dots ,x_{i}),x_{i+1}),\cdots ),x_{j-1}),x_{j})=(x_{1},\dots ,x_{j})$ satisfying $R(x_{i},x_{j}).$ 3. If $j=n,$ let $P_{3}=P_{2}.$ Otherwise, $j<n,$ so components are added after $x_{j}{\text{:}}$ add the components $x_{j+1},\dots ,x_{n}$ one by one using the tuple lemma's statement 3. This produces a class $P_{3}$ containing all the $n$-tuples $((\cdots ((x_{1},\dots ,x_{j}),x_{j+1}),\cdots ),x_{n})=(x_{1},\dots ,x_{n})$ satisfying $R(x_{i},x_{j}).$ 4. Let $Q=P_{3}\cap V^{n}.$ Extensionality implies that $Q$ is the unique class of $n$-tuples satisfying $R(x_{i},x_{j}).$ Class existence theorem for transformed formulas — Let $\phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})$ be a formula that: 1. contains no free variables other than $x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m}$; 2. contains only $\in $, $\neg $, $\land $, $\exists $, set variables, and the class variables $Y_{k}$ where $Y_{k}$ does not appear before an $\in $ ; 3. only quantifies set variables $x_{n+q}$ where $q$ is the quantifier nesting depth of the variable. Then for all $Y_{1},\dots ,Y_{m}$, there exists a unique class $A$ of $n$-tuples such that: $\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})].$ Proof: Basis step: $\phi $ has 0 logical symbols. 
The theorem's hypothesis implies that $\phi $ is an atomic formula of the form $x_{i}\in x_{j}$ or $x_{i}\in Y_{k}.$ Case 1: If $\phi $ is $x_{i}\in x_{j}$, we build the class $E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\},$ the unique class of $n$-tuples satisfying $x_{i}\in x_{j}.$ Case a: $\phi $ is $x_{i}\in x_{j}$ where $i<j.$ The axiom of membership produces a class $P$ containing all the ordered pairs $(x_{i},x_{j})$ satisfying $x_{i}\in x_{j}.$ Apply the expansion lemma to $P$ to obtain $E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\}.$ Case b: $\phi $ is $x_{i}\in x_{j}$ where $i>j.$ The axiom of membership produces a class $P$ containing all the ordered pairs $(x_{i},x_{j})$ satisfying $x_{i}\in x_{j}.$ Apply the tuple lemma's statement 4 to $P$ to obtain $P'$ containing all the ordered pairs $(x_{j},x_{i})$ satisfying $x_{i}\in x_{j}.$ Apply the expansion lemma to $P'$ to obtain $E_{i,j,n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in x_{j}\}.$ Case c: $\phi $ is $x_{i}\in x_{j}$ where $i=j.$ Since this formula is false by the axiom of regularity, no $n$-tuples satisfy it, so $E_{i,j,n}=\emptyset .$ Case 2: If $\phi $ is $x_{i}\in Y_{k}$, we build the class $E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\},$ the unique class of $n$-tuples satisfying $x_{i}\in Y_{k}.$ Case a: $\phi $ is $x_{i}\in Y_{k}$ where $i<n.$ Apply the axiom of product by $V$ to $Y_{k}$ to produce the class $P=Y_{k}\times V=\{(x_{i},x_{i+1}):x_{i}\in Y_{k}\}.$ Apply the expansion lemma to $P$ to obtain $E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\}.$ Case b: $\phi $ is $x_{i}\in Y_{k}$ where $i=n>1.$ Apply the axiom of product by $V$ to $Y_{k}$ to produce the class $P=Y_{k}\times V=\{(x_{i},x_{i-1}):x_{i}\in Y_{k}\}.$ Apply the tuple lemma's statement 4 to $P$ to obtain $P'=V\times Y_{k}=\{(x_{i-1},x_{i}):x_{i}\in Y_{k}\}.$ Apply the expansion lemma to $P'$ to obtain $E_{i,Y_{k},n}=\{(x_{1},\ldots ,x_{n}):x_{i}\in Y_{k}\}.$ Case c: $\phi $ is $x_{i}\in Y_{k}$ where $i=n=1.$ Then $E_{i,Y_{k},n}=Y_{k}.$ Inductive step: $\phi $ has $k$ logical symbols where $k>0$. Assume the induction hypothesis that the theorem is true for all $\psi $ with less than $k$ logical symbols. We now prove the theorem for $\phi $ with $k$ logical symbols. 
In this proof, the list of class variables $Y_{1},\dots ,Y_{m}$ is abbreviated by ${\vec {Y}}$, so a formula—such as $\phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})$—can be written as $\phi (x_{1},\dots ,x_{n},{\vec {Y}}).$ Case 1: $\phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\neg \psi (x_{1},\ldots ,x_{n},{\vec {Y}}).$ Since $\psi $ has $k-1$ logical symbols, the induction hypothesis implies that there is a unique class $A$ of $n$-tuples such that: $\quad (x_{1},\ldots ,x_{n})\in A\iff \psi (x_{1},\ldots ,x_{n},{\vec {Y}}).$ By the complement axiom, there is a class $\complement A$ such that $\forall u\,[u\in \complement A\iff \neg (u\in A)].$ However, $\complement A$ contains elements other than $n$-tuples if $n>1.$ To eliminate these elements, use $\complement _{V^{n}}A=\,$$\complement A\cap V^{n}=\,$$V^{n}\setminus A,$ which is the complement relative to the class $V^{n}$ of all $n$-tuples.[lower-alpha 5] Then, by extensionality, $\complement _{V^{n}}A$ is the unique class of $n$-tuples such that: ${\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in \complement _{V^{n}}A&&\iff \neg [(x_{1},\ldots ,x_{n})\in A]\\&&&\iff \neg \psi (x_{1},\ldots ,x_{n},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}$ Case 2: $\phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\psi _{1}(x_{1},\ldots ,x_{n},{\vec {Y}})\land \psi _{2}(x_{1},\ldots ,x_{n},{\vec {Y}}).$ Since both $\psi _{1}$ and $\psi _{2}$ have less than $k$ logical symbols, the induction hypothesis implies that there are unique classes of $n$-tuples, $A_{1}$ and $A_{2}$, such that: ${\begin{aligned}\quad &(x_{1},\ldots ,x_{n})\in A_{1}\iff \psi _{1}(x_{1},\ldots ,x_{n},{\vec {Y}}).\\&(x_{1},\ldots ,x_{n})\in A_{2}\iff \psi _{2}(x_{1},\ldots ,x_{n},{\vec {Y}}).\end{aligned}}$ By the axioms of intersection and extensionality, $A_{1}\cap A_{2}$ is the unique class of $n$-tuples such that: ${\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in A_{1}\cap A_{2}&&\iff (x_{1},\ldots ,x_{n})\in A_{1}\land (x_{1},\ldots ,x_{n})\in A_{2}\\&&&\iff \psi _{1}(x_{1},\ldots ,x_{n},{\vec {Y}})\land \psi _{2}(x_{1},\ldots ,x_{n},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}$ Case 3: $\phi (x_{1},\ldots ,x_{n},{\vec {Y}})=\exists x_{n+1}\psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}}).$ The quantifier nesting depth of $\psi $ is one more than that of $\phi $ and the additional free variable is $x_{n+1}.$ Since $\psi $ has $k-1$ logical symbols, the induction hypothesis implies that there is a unique class $A$ of $(n+1)$-tuples such that: $\quad (x_{1},\ldots ,x_{n},x_{n+1})\in A\iff \psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}}).$ By the axioms of domain and extensionality, $Dom(A)$ is the unique class of $n$-tuples such that:[lower-alpha 8] ${\begin{alignedat}{2}\quad &(x_{1},\ldots ,x_{n})\in Dom(A)&&\iff \exists x_{n+1}[((x_{1},\ldots ,x_{n}),x_{n+1})\in A]\\&&&\iff \exists x_{n+1}[(x_{1},\ldots ,x_{n},x_{n+1})\in A]\\&&&\iff \exists x_{n+1}\,\psi (x_{1},\ldots ,x_{n},x_{n+1},{\vec {Y}})\\&&&\iff \phi (x_{1},\ldots ,x_{n},{\vec {Y}}).\end{alignedat}}$ Gödel pointed out that the class existence theorem "is a metatheorem, that is, a theorem about the system [NBG], not in the system …"[30] It is a theorem about NBG because it is proved in the metatheory by induction on NBG formulas. Also, its proof—instead of invoking finitely many NBG axioms—inductively describes how to use NBG axioms to construct a class satisfying a given formula. 
For every formula, this description can be turned into a constructive existence proof that is in NBG. Therefore, this metatheorem can generate the NBG proofs that replace uses of NBG's class existence theorem. A recursive computer program succinctly captures the construction of a class from a given formula. The definition of this program does not depend on the proof of the class existence theorem. However, the proof is needed to prove that the class constructed by the program satisfies the given formula and is built using the axioms. This program is written in pseudocode that uses a Pascal-style case statement.[lower-alpha 9] ${\begin{array}{l}\mathbf {function} \;{\text{Class}}(\phi ,\,n)\\\quad {\begin{array}{rl}\mathbf {input} \!:\;\,&\phi {\text{ is a transformed formula of the form }}\phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m});\\&n{\text{ specifies that a class of }}n{\text{-tuples is returned.}}\\\;\;\;\;\mathbf {output} \!:\;\,&{\text{class }}A{\text{ of }}n{\text{-tuples satisfying }}\\&\,\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\ldots ,x_{n})\in A\iff \phi (x_{1},\ldots ,x_{n},Y_{1},\ldots ,Y_{m})].\end{array}}\\\mathbf {begin} \\\quad \mathbf {case} \;\phi \;\mathbf {of} \\\qquad {\begin{alignedat}{2}x_{i}\in x_{j}:\;\;&\mathbf {return} \;\,E_{i,j,n};&&{\text{// }}E_{i,j,n}\;\,=\{(x_{1},\dots ,x_{n}):x_{i}\in x_{j}\}\\x_{i}\in Y_{k}:\;\;&\mathbf {return} \;\,E_{i,Y_{k},n};&&{\text{// }}E_{i,Y_{k},n}=\{(x_{1},\dots ,x_{n}):x_{i}\in Y_{k}\}\\\neg \psi :\;\;&\mathbf {return} \;\,\complement _{V^{n}}{\text{Class}}(\psi ,\,n);&&{\text{// }}\complement _{V^{n}}{\text{Class}}(\psi ,\,n)=V^{n}\setminus {\text{Class}}(\psi ,\,n)\\\psi _{1}\land \psi _{2}:\;\;&\mathbf {return} \;\,{\text{Class}}(\psi _{1},\,n)\cap {\text{Class}}(\psi _{2},\,n);&&\\\;\;\;\;\,\exists x_{n+1}(\psi ):\;\;&\mathbf {return} \;\,Dom({\text{Class}}(\psi ,\,n+1));&&{\text{// }}x_{n+1}{\text{ is free in }}\psi ;{\text{ Class}}(\psi ,\,n+1)\\&\ &&{\text{// returns a class of }}(n+1){\text{-tuples}}\end{alignedat}}\\\quad \mathbf {end} \\\mathbf {end} \end{array}}$ Let $\phi $ be the formula of example 2.
The function call $A=Class(\phi ,1)$ generates the class $A,$ which is compared below with $\phi .$ This shows that the construction of the class $A$ mirrors the construction of its defining formula $\phi .$ ${\begin{alignedat}{2}&\phi \;&&=\;\;\exists x_{2}\,(x_{2}\!\in \!x_{1}\land \;\;\neg \;\;\;\;\exists x_{3}\;(x_{3}\!\in \!x_{2}))\,\land \;\;\,\exists x_{2}\,(x_{2}\!\in \!x_{1}\land \;\;\,\exists x_{3}\,(x_{3}\!\in \!x_{2}\,\land \;\;\neg \;\;\;\;\exists x_{4}\;(x_{4}\!\in \!x_{3})))\\&A\;&&=Dom\,(\;E_{2,1,2}\;\cap \;\complement _{V^{2}}\,Dom\,(\;E_{3,2,3}\;))\,\cap \,Dom\,(\;E_{2,1,2}\;\cap \,Dom\,(\;\,E_{3,2,3}\;\cap \;\complement _{V^{3}}\,Dom\,(\;E_{4,3,4}\;)))\end{alignedat}}$ Extending the class existence theorem Gödel extended the class existence theorem to formulas $\phi $ containing relations over classes (such as $Y_{1}\subseteq Y_{2}$ and the unary relation $M(Y_{1})$), special classes (such as $Ord$ ), and operations (such as $(x_{1},x_{2})$ and $x_{1}\cap Y_{1}$).[32] To extend the class existence theorem, the formulas defining relations, special classes, and operations must quantify only over sets. Then $\phi $ can be transformed into an equivalent formula satisfying the hypothesis of the class existence theorem. The following definitions specify how formulas define relations, special classes, and operations: 1. A relation $R$ is defined by: $R(Z_{1},\dots ,Z_{k})\iff \psi _{R}(Z_{1},\dots ,Z_{k}).$ 2. A special class $C$ is defined by: $u\in C\iff \psi _{C}(u).$ 3. An operation $P$ is defined by: $u\in P(Z_{1},\dots ,Z_{k})\iff \psi _{P}(u,Z_{1},\dots ,Z_{k}).$ A term is defined by: 1. Variables and special classes are terms. 2. If $P$ is an operation with $k$ arguments and $\Gamma _{1},\dots ,\Gamma _{k}$ are terms, then $P(\Gamma _{1},\dots ,\Gamma _{k})$ is a term. The following transformation rules eliminate relations, special classes, and operations. Each time rule 2b, 3b, or 4 is applied to a subformula, $i$ is chosen so that $z_{i}$ differs from the other variables in the current formula. The rules are repeated until there are no subformulas to which they can be applied. $\,\Gamma _{1},\dots ,\Gamma _{k},\Gamma ,$ and $\Delta $ denote terms. 1. A relation $R(Z_{1},\dots ,Z_{k})$ is replaced by its defining formula $\psi _{R}(Z_{1},\dots ,Z_{k}).$ 2. Let $\psi _{C}(u)$ be the defining formula for the special class $C.$ 1. $\Delta \in C$ is replaced by $\psi _{C}(\Delta ).$ 2. $C\in \Delta $ is replaced by $\exists z_{i}[z_{i}=C\land z_{i}\in \Delta ].$ 3. Let $\psi _{P}(u,Z_{1},\dots ,Z_{k})$ be the defining formula for the operation $P(Z_{1},\dots ,Z_{k}).$ 1. $\Delta \in P(\Gamma _{1},\dots ,\Gamma _{k})$ is replaced by $\psi _{P}(\Delta ,\Gamma _{1},\dots ,\Gamma _{k}).$ 2. $P(\Gamma _{1},\dots ,\Gamma _{k})\in \Delta $ is replaced by $\exists z_{i}[z_{i}=P(\Gamma _{1},\dots ,\Gamma _{k})\land z_{i}\in \Delta ].$ 4. 
Extensionality is used to transform $\Delta =\Gamma $ into $\forall z_{i}(z_{i}\in \Delta \iff z_{i}\in \Gamma ).$ Example 3:  Transforming $Y_{1}\subseteq Y_{2}.$ $Y_{1}\subseteq Y_{2}\iff \forall z_{1}(z_{1}\in Y_{1}\implies z_{1}\in Y_{2})\quad {\text{(rule 1)}}$ Example 4:  Transforming $x_{1}\cap Y_{1}\in x_{2}.$ ${\begin{alignedat}{2}x_{1}\cap Y_{1}\in x_{2}&\iff \exists z_{1}[z_{1}=x_{1}\cap Y_{1}\,\land \,z_{1}\in x_{2}]&&{\text{(rule 3b)}}\\&\iff \exists z_{1}[\forall z_{2}(z_{2}\in z_{1}\iff z_{2}\in x_{1}\cap Y_{1})\,\land \,z_{1}\in x_{2}]&&{\text{(rule 4)}}\\&\iff \exists z_{1}[\forall z_{2}(z_{2}\in z_{1}\iff z_{2}\in x_{1}\land z_{2}\in Y_{1})\,\land \,z_{1}\in x_{2}]\quad &&{\text{(rule 3a)}}\\\end{alignedat}}$ This example illustrates how the transformation rules work together to eliminate an operation. Class existence theorem (extended version) — Let $\phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})$ be a formula that quantifies only over sets, contains no free variables other than $x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m}$, and may contain relations, special classes, and operations defined by formulas that quantify only over sets. Then for all $Y_{1},\dots ,Y_{m},$ there exists a unique class $A$ of $n$-tuples such that $\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].$ [lower-alpha 10] Proof Apply the transformation rules to $\phi $ to produce an equivalent formula containing no relations, special classes, or operations. This formula satisfies the hypothesis of the class existence theorem. Therefore, for all $Y_{1},\dots ,Y_{m},$ there is a unique class $A$ of $n$-tuples satisfying $\forall x_{1}\cdots \,\forall x_{n}[(x_{1},\dots ,x_{n})\in A\iff \phi (x_{1},\dots ,x_{n},Y_{1},\dots ,Y_{m})].$ Set axioms The axioms of pairing and regularity, which were needed for the proof of the class existence theorem, have been given above. NBG contains four other set axioms. Three of these axioms deal with class operations being applied to sets. Definition.  $F$ is a function if $F\subseteq V^{2}\land \forall x\,\forall y\,\forall z\,[(x,y)\in F\,\land \,(x,z)\in F\implies y=z].$ In set theory, the definition of a function does not require specifying the domain or codomain of the function (see Function (set theory)). NBG's definition of function generalizes ZFC's definition from a set of ordered pairs to a class of ordered pairs. ZFC's definitions of the set operations of image, union, and power set are also generalized to class operations. The image of class $A$ under the function $F$ is $F[A]=\{y:\exists x(x\in A\,\land \,(x,y)\in F)\}.$ This definition does not require that $A\subseteq Dom(F).$ The union of class $A$ is $\cup A=\{x:\exists y(x\in y\,\,\land \,y\in A)\}.$ The power class of $A$ is ${\mathcal {P}}(A)=\{x:x\subseteq A\}.$ The extended version of the class existence theorem implies the existence of these classes. The axioms of replacement, union, and power set imply that when these operations are applied to sets, they produce sets.[34] Axiom of replacement.  If $F$ is a function and $a$ is a set, then $F[a]$, the image of $a$ under $F$, is a set. $\forall F\,\forall a\,[F{\text{ is a function}}\implies \exists b\,\forall y\,(y\in b\iff \exists x(x\in a\,\land \,(x,y)\in F))].$ Not having the requirement $A\subseteq Dom(F)$ in the definition of $F[A]$ produces a stronger axiom of replacement, which is used in the following proof. 
Theorem (NBG's axiom of separation) — If $a$ is a set and $B$ is a subclass of $a,$ then $B$ is a set. Proof The class existence theorem constructs the restriction of the identity function to $B$: $I{\upharpoonright _{B}}=\{(x_{1},x_{2}):x_{1}\in B\land x_{2}=x_{1}\}.$ Since the image of $a$ under $I{\upharpoonright _{B}}$ is $B$, the axiom of replacement implies that $B$ is a set. This proof depends on the definition of image not having the requirement $a\subseteq Dom(F)$ since $Dom(I{\upharpoonright _{B}})=B\subseteq a$ rather than $a\subseteq Dom(I{\upharpoonright _{B}}).$ Axiom of union.  If $a$ is a set, then there is a set containing $\cup a.$ $\forall a\,\exists b\,\forall x\,[\,\exists y(x\in y\,\,\land \,y\in a)\implies x\in b\,].$ Axiom of power set.  If $a$ is a set, then there is a set containing ${\mathcal {P}}(a).$ $\forall a\,\exists b\,\forall x\,(x\subseteq a\implies x\in b).$[lower-alpha 11] Theorem —  If $a$ is a set, then $\cup a$ and ${\mathcal {P}}(a)$ are sets. Proof The axiom of union states that $\cup a$ is a subclass of a set $b$, so the axiom of separation implies $\cup a$ is a set. Likewise, the axiom of power set states that ${\mathcal {P}}(a)$ is a subclass of a set $b$, so the axiom of separation implies that ${\mathcal {P}}(a)$ is a set. Axiom of infinity.  There exists a nonempty set $a$ such that for all $x$ in $a$, there exists a $y$ in $a$ such that $x$ is a proper subset of $y$. $\exists a\,[\exists u(u\in a)\,\land \,\forall x(x\in a\implies \exists y(y\in a\,\land \,x\subset y))].$ The axioms of infinity and replacement prove the existence of the empty set. In the discussion of the class existence axioms, the existence of the empty class $\emptyset $ was proved. We now prove that $\emptyset $ is a set. Let function $F=\emptyset $ and let $a$ be the set given by the axiom of infinity. By replacement, the image of $a$ under $F$, which equals $\emptyset $, is a set. NBG's axiom of infinity is implied by ZFC's axiom of infinity: $\,\exists a\,[\emptyset \in a\,\land \,\forall x(x\in a\implies x\cup \{x\}\in a)].\,$ The first conjunct of ZFC's axiom, $\emptyset \in a$, implies the first conjunct of NBG's axiom. The second conjunct of ZFC's axiom, $\forall x(x\in a\implies x\cup \{x\}\in a)$, implies the second conjunct of NBG's axiom since $x\subset x\cup \{x\}.$ To prove ZFC's axiom of infinity from NBG's axiom of infinity requires some of the other NBG axioms (see Weak axiom of infinity).[lower-alpha 12] Axiom of global choice The class concept allows NBG to have a stronger axiom of choice than ZFC. A choice function is a function $f$ defined on a set $s$ of nonempty sets such that $f(x)\in x$ for all $x\in s.$ ZFC's axiom of choice states that there exists a choice function for every set of nonempty sets. A global choice function is a function $G$ defined on the class of all nonempty sets such that $G(x)\in x$ for every nonempty set $x.$ The axiom of global choice states that there exists a global choice function. This axiom implies ZFC's axiom of choice since for every set $s$ of nonempty sets, $G\vert _{s}$ (the restriction of $G$ to $s$) is a choice function for $s.$ In 1964, William B. 
Easton proved that global choice is stronger than the axiom of choice by using forcing to construct a model that satisfies the axiom of choice and all the axioms of NBG except the axiom of global choice.[38] The axiom of global choice is equivalent to every class having a well-ordering, while ZFC's axiom of choice is equivalent to every set having a well-ordering.[lower-alpha 13] Axiom of global choice.  There exists a function that chooses an element from every nonempty set. $\exists G\,[G{\text{ is a function}}\,\land \forall x(x\neq \emptyset \implies \exists y(y\in x\land (x,y)\in G))].$ History Von Neumann's 1925 axiom system Von Neumann published an introductory article on his axiom system in 1925. In 1928, he provided a detailed treatment of his system.[39] Von Neumann based his axiom system on two domains of primitive objects: functions and arguments. These domains overlap—objects that are in both domains are called argument-functions. Functions correspond to classes in NBG, and argument-functions correspond to sets. Von Neumann's primitive operation is function application, denoted by [a, x] rather than a(x) where a is a function and x is an argument. This operation produces an argument. Von Neumann defined classes and sets using functions and argument-functions that take only two values, A and B. He defined x ∈ a if [a, x] ≠ A.[1] Von Neumann's work in set theory was influenced by Georg Cantor's articles, Ernst Zermelo's 1908 axioms for set theory, and the 1922 critiques of Zermelo's set theory that were given independently by Abraham Fraenkel and Thoralf Skolem. Both Fraenkel and Skolem pointed out that Zermelo's axioms cannot prove the existence of the set {Z0, Z1, Z2, ...} where Z0 is the set of natural numbers and Zn+1 is the power set of Zn. They then introduced the axiom of replacement, which would guarantee the existence of such sets.[40][lower-alpha 14] However, they were reluctant to adopt this axiom: Fraenkel stated "that Replacement was too strong an axiom for 'general set theory'", while "Skolem only wrote that 'we could introduce' Replacement".[42] Von Neumann worked on the problems of Zermelo set theory and provided solutions for some of them: • A theory of ordinals • Problem: Cantor's theory of ordinal numbers cannot be developed in Zermelo set theory because it lacks the axiom of replacement.[lower-alpha 15] • Solution: Von Neumann recovered Cantor's theory by defining the ordinals using sets that are well-ordered by the ∈-relation,[lower-alpha 16] and by using the axiom of replacement to prove key theorems about the ordinals, such as every well-ordered set is order-isomorphic with an ordinal.[lower-alpha 15] In contrast to Fraenkel and Skolem, von Neumann emphasized how important the replacement axiom is for set theory: "In fact, I believe that no theory of ordinals is possible at all without this axiom."[45] • A criterion identifying classes that are too large to be sets • Problem: Zermelo did not provide such a criterion. His set theory avoids the large classes that lead to the paradoxes, but it leaves out many sets, such as the one mentioned by Fraenkel and Skolem.[lower-alpha 17] • Solution: Von Neumann introduced the criterion: A class is too large to be a set if and only if it can be mapped onto the class V of all sets. Von Neumann realized that the set-theoretic paradoxes could be avoided by not allowing such large classes to be members of any class. 
Combining this restriction with his criterion, he obtained his axiom of limitation of size: A class C is not a member of any class if and only if C can be mapped onto V.[48][lower-alpha 18] • Finite axiomatization • Problem: Zermelo had used the imprecise concept of "definite propositional function" in his axiom of separation. • Solutions: Skolem introduced the axiom schema of separation that was later used in ZFC, and Fraenkel introduced an equivalent solution.[50] However, Zermelo rejected both approaches "particularly because they implicitly involve the concept of natural number which, in Zermelo's view, should be based upon set theory."[lower-alpha 19] Von Neumann avoided axiom schemas by formalizing the concept of "definite propositional function" with his functions, whose construction requires only finitely many axioms. This led to his set theory having finitely many axioms.[51] In 1961, Richard Montague proved that ZFC cannot be finitely axiomatized.[52] • The axiom of regularity • Problem: Zermelo set theory starts with the empty set and an infinite set, and iterates the axioms of pairing, union, power set, separation, and choice to generate new sets. However, it does not restrict sets to these. For example, it allows sets that are not well-founded, such as a set x satisfying x ∈ x.[lower-alpha 20] • Solutions: Fraenkel introduced an axiom to exclude these sets. Von Neumann analyzed Fraenkel's axiom and stated that it was not "precisely formulated", but it would approximately say: "Besides the sets ... whose existence is absolutely required by the axioms, there are no further sets."[54] Von Neumann proposed the axiom of regularity as a way to exclude non-well-founded sets, but did not include it in his axiom system. In 1930, Zermelo became the first to publish an axiom system that included regularity.[lower-alpha 21] Von Neumann's 1929 axiom system In 1929, von Neumann published an article containing the axioms that would lead to NBG. This article was motivated by his concern about the consistency of the axiom of limitation of size. He stated that this axiom "does a lot, actually too much." Besides implying the axioms of separation and replacement, and the well-ordering theorem, it also implies that any class whose cardinality is less than that of V is a set. Von Neumann thought that this last implication went beyond Cantorian set theory and concluded: "We must therefore discuss whether its [the axiom's] consistency is not even more problematic than an axiomatization of set theory that does not go beyond the necessary Cantorian framework."[57] Von Neumann started his consistency investigation by introducing his 1929 axiom system, which contains all the axioms of his 1925 axiom system except the axiom of limitation of size. He replaced this axiom with two of its consequences, the axiom of replacement and a choice axiom. Von Neumann's choice axiom states: "Every relation R has a subclass that is a function with the same domain as R."[58] Let S be von Neumann's 1929 axiom system. Von Neumann introduced the axiom system S + Regularity (which consists of S and the axiom of regularity) to demonstrate that his 1925 system is consistent relative to S. He proved: 1. If S is consistent, then S + Regularity is consistent. 2. S + Regularity implies the axiom of limitation of size. Since this is the only axiom of his 1925 axiom system that S + Regularity does not have, S + Regularity implies all the axioms of his 1925 system. 
These results imply: If S is consistent, then von Neumann's 1925 axiom system is consistent. Proof: If S is consistent, then S + Regularity is consistent (result 1). Using proof by contradiction, assume that the 1925 axiom system is inconsistent, or equivalently: the 1925 axiom system implies a contradiction. Since S + Regularity implies the axioms of the 1925 system (result 2), S + Regularity also implies a contradiction. However, this contradicts the consistency of S + Regularity. Therefore, if S is consistent, then von Neumann's 1925 axiom system is consistent. Since S is his 1929 axiom system, von Neumann's 1925 axiom system is consistent relative to his 1929 axiom system, which is closer to Cantorian set theory. The major differences between Cantorian set theory and the 1929 axiom system are classes and von Neumann's choice axiom. The axiom system S + Regularity was modified by Bernays and Gödel to produce the equivalent NBG axiom system. Bernays' axiom system In 1929, Paul Bernays started modifying von Neumann's new axiom system by taking classes and sets as primitives. He published his work in a series of articles appearing from 1937 to 1954.[59] Bernays stated that: The purpose of modifying the von Neumann system is to remain nearer to the structure of the original Zermelo system and to utilize at the same time some of the set-theoretic concepts of the Schröder logic and of Principia Mathematica which have become familiar to logicians. As will be seen, a considerable simplification results from this arrangement.[60] Bernays handled sets and classes in a two-sorted logic and introduced two membership primitives: one for membership in sets and one for membership in classes. With these primitives, he rewrote and simplified von Neumann's 1929 axioms. Bernays also included the axiom of regularity in his axiom system.[61] Gödel's axiom system (NBG) In 1931, Bernays sent a letter containing his set theory to Kurt Gödel.[36] Gödel simplified Bernays' theory by making every set a class, which allowed him to use just one sort and one membership primitive. He also weakened some of Bernays' axioms and replaced von Neumann's choice axiom with the equivalent axiom of global choice.[62][lower-alpha 22] Gödel used his axioms in his 1940 monograph on the relative consistency of global choice and the generalized continuum hypothesis.[63] Several reasons have been given for Gödel choosing NBG for his monograph:[lower-alpha 23] • Gödel gave a mathematical reason—NBG's global choice produces a stronger consistency theorem: "This stronger form of the axiom [of choice], if consistent with the other axioms, implies, of course, that a weaker form is also consistent."[5] • Robert Solovay conjectured: "My guess is that he [Gödel] wished to avoid a discussion of the technicalities involved in developing the rudiments of model theory within axiomatic set theory."[67][lower-alpha 24] • Kenneth Kunen gave a reason for Gödel avoiding this discussion: "There is also a much more combinatorial approach to L [the constructible universe], developed by ... [Gödel in his 1940 monograph] in an attempt to explain his work to non-logicians. ... This approach has the merit of removing all vestiges of logic from the treatment of L."[68] • Charles Parsons provided a philosophical reason for Gödel's choice: "This view [that 'property of set' is a primitive of set theory] may be reflected in Gödel's choice of a theory with class variables as the framework for ... 
[his monograph]."[69] Gödel's achievement together with the details of his presentation led to the prominence that NBG would enjoy for the next two decades.[70] In 1963, Paul Cohen proved his independence proofs for ZF with the help of some tools that Gödel had developed for his relative consistency proofs for NBG.[71] Later, ZFC became more popular than NBG. This was caused by several factors, including the extra work required to handle forcing in NBG,[72] Cohen's 1966 presentation of forcing, which used ZF,[73][lower-alpha 25] and the proof that NBG is a conservative extension of ZFC.[lower-alpha 26] NBG, ZFC, and MK NBG is not logically equivalent to ZFC because its language is more expressive: it can make statements about classes, which cannot be made in ZFC. However, NBG and ZFC imply the same statements about sets. Therefore, NBG is a conservative extension of ZFC. NBG implies theorems that ZFC does not imply, but since NBG is a conservative extension, these theorems must involve proper classes. For example, it is a theorem of NBG that the global axiom of choice implies that the proper class V can be well-ordered and that every proper class can be put into one-to-one correspondence with V.[lower-alpha 27] One consequence of conservative extension is that ZFC and NBG are equiconsistent. Proving this uses the principle of explosion: from a contradiction, everything is provable. Assume that either ZFC or NBG is inconsistent. Then the inconsistent theory implies the contradictory statements ∅ = ∅ and ∅ ≠ ∅, which are statements about sets. By the conservative extension property, the other theory also implies these statements. Therefore, it is also inconsistent. So although NBG is more expressive, it is equiconsistent with ZFC. This result together with von Neumann's 1929 relative consistency proof implies that his 1925 axiom system with the axiom of limitation of size is equiconsistent with ZFC. This completely resolves von Neumann's concern about the relative consistency of this powerful axiom since ZFC is within the Cantorian framework. Even though NBG is a conservative extension of ZFC, a theorem may have a shorter and more elegant proof in NBG than in ZFC (or vice versa). For a survey of known results of this nature, see Pudlák 1998. Morse–Kelley set theory has an axiom schema of class comprehension that includes formulas whose quantifiers range over classes. MK is a stronger theory than NBG because MK proves the consistency of NBG,[76] while Gödel's second incompleteness theorem implies that NBG cannot prove the consistency of NBG. For a discussion of some ontological and other philosophical issues posed by NBG, especially when contrasted with ZFC and MK, see Appendix C of Potter 2004. Models ZFC, NBG, and MK have models describable in terms of the cumulative hierarchy Vα  and the constructible hierarchy Lα . Let V include an inaccessible cardinal κ, let X ⊆ Vκ, and let Def(X) denote the class of first-order definable subsets of X with parameters. 
In symbols where "$(X,\in )$" denotes the model with domain $X$ and relation $\in $, and "$\models $" denotes the satisfaction relation: $\operatorname {Def} (X):={\Bigl \{}\{x\mid x\in X{\text{ and }}(X,\in )\models \phi (x,y_{1},\ldots ,y_{n})\}:\phi {\text{ is a first-order formula and }}y_{1},\ldots ,y_{n}\in X{\Bigr \}}.$ Then: • (Vκ, ∈) and (Lκ, ∈) are models of ZFC.[77] • (Vκ, Vκ+1, ∈) is a model of MK where Vκ consists of the sets of the model and Vκ+1 consists of the classes of the model.[78] Since a model of MK is a model of NBG, this model is also a model of NBG. • (Vκ, Def(Vκ), ∈) is a model of Mendelson's version of NBG, which replaces NBG's axiom of global choice with ZFC's axiom of choice.[79] The axioms of ZFC are true in this model because (Vκ, ∈) is a model of ZFC. In particular, ZFC's axiom of choice holds, but NBG's global choice may fail.[lower-alpha 28] NBG's class existence axioms are true in this model because the classes whose existence they assert can be defined by first-order definitions. For example, the membership axiom holds since the class $E$ is defined by: $E=\{x\in V_{\kappa }:(V_{\kappa },\in )\models \exists u\ \exists v[x=(u,v)\land u\in v]\}.$ • (Lκ, Lκ+, ∈), where κ+ is the successor cardinal of κ, is a model of NBG.[lower-alpha 29] NBG's class existence axioms are true in (Lκ, Lκ+, ∈). For example, the membership axiom holds since the class $E$ is defined by: $E=\{x\in L_{\kappa }:(L_{\kappa },\in )\models \exists u\ \exists v[x=(u,v)\land u\in v]\}.$ So E ∈ 𝒫(Lκ). In his proof that GCH is true in L, Gödel proved that 𝒫(Lκ) ⊆ Lκ+.[81] Therefore, E ∈ Lκ+, so the membership axiom is true in (Lκ, Lκ+, ∈). Likewise, the other class existence axioms are true. The axiom of global choice is true because Lκ is well-ordered by the restriction of Gödel's function (which maps the class of ordinals to the constructible sets) to the ordinals less than κ. Therefore, (Lκ, Lκ+, ∈) is a model of NBG. A countable model of NBG (without AC) is composed of the equivalence classes of first-order formulas with one free parameter only.[82] Category theory The ontology of NBG provides scaffolding for speaking about "large objects" without risking paradox. For instance, in some developments of category theory, a "large category" is defined as one whose objects and morphisms make up a proper class. On the other hand, a "small category" is one whose objects and morphisms are members of a set. Thus, we can speak of the "category of all sets" or "category of all small categories" without risking paradox since NBG supports large categories. However, NBG does not support a "category of all categories" since large categories would be members of it and NBG does not allow proper classes to be members of anything. An ontological extension that enables us to talk formally about such a "category" is the conglomerate, which is a collection of classes. Then the "category of all categories" is defined by its objects: the conglomerate of all categories; and its morphisms: the conglomerate of all morphisms from A to B where A and B are objects.[83] On whether an ontology including classes as well as sets is adequate for category theory, see Muller 2001. Notes 1. Axiom of global choice explains why it is provably stronger. 2. The historical development suggests that the two-sorted approach does appear more natural at first. 
In introducing his theory, Bernays stated: "According to the leading idea of von Neumann set theory we have to deal with two kinds of individuals, which we may distinguish as sets and classes."[11] 3. Gödel defined $(x_{1},x_{2},\ldots ,x_{n})=(x_{1},(x_{2},\ldots ,x_{n}))$.[15] This affects the statements of some of his definitions, axioms, and theorems. This article uses Mendelson's definition.[16] 4. Bernays' class existence axioms specify unique classes. Gödel weakened all but three of Bernays' axioms (intersection, complement, domain) by replacing biconditionals with implications, which means they specify only the ordered pairs or the 3-tuples of the class. The axioms in this section are Gödel's except for Bernays' stronger product by V axiom, which specifies a unique class of ordered pairs. Bernays' axiom simplifies the proof of the class existence theorem. Gödel's axiom B6 appears as the fourth statement of the tuple lemma. Bernays later realized that one of his axioms is redundant, which implies that one of Gödel's axioms is redundant. Using the other axioms, axiom B6 can be proved from axiom B8, and B8 can be proved from B6, so either axiom can be considered the redundant axiom.[17] The names for the tuple-handling axioms are from the French Wikipédia article: Théorie des ensembles de von Neumann. 5. This article uses Bourbaki's complement notation $\complement A$ and relative complement notation $\complement _{X}A=\complement A\cap X$.[22] This prefix relative complement notation is used by the class existence theorem to mirror the prefix logical not ($\neg $). 6. Since Gödel states this axiom before he proves the existence of the empty class, he states it without using the empty class.[5] 7. The proofs in this and the next section come from Gödel's proofs, which he gave at the Institute for Advanced Study where he "could count upon an audience well versed in mathematical logic".[28] To make Gödel's proofs more accessible to Wikipedia readers, a few modifications have been made. The goal in this and the next section is to prove Gödel's M4, his fourth class existence theorem. The proof in this section mostly follows the M1 proof,[29] but it also uses techniques from the M3 and M4 proofs. The theorem is stated with class variables rather than M1's symbols for special classes (universal quantification over the class variables is equivalent to being true for any instantiation of the class variables). The major differences from the M1 proof are: unique classes of $n$-tuples are generated at the end of the basis and inductive steps (which require Bernays' stronger product by $V$ axiom), and bound variables are replaced by subscripted variables that continue the numbering of the free set variables. Since bound variables are free for part of the induction, this guarantees that, when they are free, they are treated the same as the original free variables. One of the benefits of this proof is the example output of the function Class, which shows that a class's construction mirrors its defining formula's construction. 8. One detail has been left out of this proof. Gödel's convention is being used, so $\exists x\,\phi (x)$ is defined to be $\exists x[\exists C(x\in C)\land \phi (x)].$ Since this formula quantifies over classes, it must be replaced with the equivalent $\exists x[x\in V\land \phi (x)].$ Then the three formulas in the proof having the form $\exists x_{n+1}[\exists C(x_{n+1}\in C)\land \dots ]$ become $\exists x_{n+1}[x_{n+1}\in V\land \dots ],$ which produces a valid proof. 9.
Recursive computer programs written in pseudocode have been used elsewhere in pure mathematics. For example, they have been used to prove the Heine-Borel theorem and other theorems of analysis.[31] 10. This theorem is Gödel's theorem M4. He proved it by first proving M1, a class existence theorem that uses symbols for special classes rather than free class variables. M1 produces a class containing all the $n$-tuples satisfying $\phi $, but which may contain elements that are not $n$-tuples. Theorem M2 extends this theorem to formulas containing relations, special classes, and operations. Theorem M3 is obtained from M2 by replacing the symbols for special classes with free variables. Gödel used M3 to define $A\times B=\{x:\exists y\exists z[x=(y,z)\land y\in A\land z\in B]\},$ which is unique by extensionality. He used $A\times B$ to define $V^{n}.$ Theorem M4 is obtained from M3 by intersecting the class produced by M3 with $V^{n}$ to produce the unique class of $n$-tuples satisfying the given formula. Gödel's approach, especially his use of M3 to define $A\times B$, eliminates the need for Bernays' stronger form of the product by $V$ axiom.[33] 11. Gödel weakened Bernays' axioms of union and power set, which state the existence of these sets, to the above axioms that state there is a set containing the union and a set containing the power set.[35] Bernays published his axioms after Gödel, but had sent them to Gödel in 1931.[36] 12. Since ZFC's axiom requires the existence of the empty set, an advantage of NBG's axiom is that the axiom of the empty set is not needed. Mendelson's axiom system uses the ZFC's axiom of infinity and also has the axiom of the empty set.[37] 13. For $V$ having a well-ordering implying global choice, see Implications of the axiom of limitation of size. For global choice implying the well-ordering of any class, see Kanamori 2009, p. 53. 14. In 1917, Dmitry Mirimanoff published a form of replacement based on cardinal equivalence.[41] 15. In 1928, von Neumann stated: "A treatment of ordinal number closely related to mine was known to Zermelo in 1916, as I learned subsequently from a personal communication. Nevertheless, the fundamental theorem, according to which to each well-ordered set there is a similar ordinal, could not be rigorously proved because the replacement axiom was unknown."[43] 16. von Neumann 1923. Von Neumann's definition also used the theory of well-ordered sets. Later, his definition was simplified to the current one: An ordinal is a transitive set that is well-ordered by ∈.[44] 17. After introducing the cumulative hierarchy, von Neumann could show that Zermelo's axioms do not prove the existence of ordinals α ≥ ω + ω, which include uncountably many hereditarily countable sets. This follows from Skolem's result that Vω+ω satisfies Zermelo's axioms[46] and from α ∈ Vβ implying α < β.[47] 18. Von Neumann stated his axiom in an equivalent functional form.[49] 19. Skolem's approach implicitly involves natural numbers because the formulas of an axiom schema are built using structural recursion, which is a generalization of mathematical recursion over the natural numbers. 20. Mirimanoff defined well-founded sets in 1917.[53] 21. Akihiro Kanamori points out that Bernays lectured on his axiom system in 1929-1930 and states that "… he and Zermelo must have arrived at the idea of incorporating Foundation [regularity] almost at the same time."[55] However, Bernays did not publish the part of his axiom system containing regularity until 1941.[56] 22. 
Proof that von Neumann's axiom implies global choice: Let $R=\{(x,y):x\neq \emptyset \land y\in x\}.$ Von Neumann's axiom implies there is a function $G\subseteq R$ such that $Dom(G)=Dom(R).$ The function $G$ is a global choice function since for all nonempty sets $x,$ $G(x)\in x.$ Proof that global choice implies von Neumann's axiom: Let $G$ be a global choice function, and let $R$ be a relation. For $x\in Dom(R),$ let $\alpha (x)={\text{least}}\,\{\alpha :\exists y[(x,y)\in R\cap V_{\alpha }]\},$ where $V_{\alpha }$ is the set of all sets having rank less than $\alpha .$ Let $z_{x}=\{y:(x,y)\in R\cap V_{\alpha (x)}\}.$ Then $F=\{(x,G(z_{x})):x\in Dom(R)\}$ is a function that satisfies von Neumann's axiom since $F\subseteq R$ and $Dom(F)=Dom(R).$ 23. Gödel used von Neumann's 1929 axioms in his 1938 announcement of his relative consistency theorem and stated "A corresponding theorem holds if T denotes the system of Principia mathematica".[64] His 1939 sketch of his proof is for Zermelo set theory and ZF.[65] Proving a theorem in multiple formal systems was not unusual for Gödel. For example, he proved his incompleteness theorem for the system of Principia mathematica, but pointed out that it "holds for a wide class of formal systems ...".[66] 24. Gödel's consistency proof builds the constructible universe. To build this in ZF requires some model theory. Gödel built it in NBG without model theory. For Gödel's construction, see Gödel 1940, pp. 35–46 or Cohen 1966, pp. 99–103. 25. Cohen also gave a detailed proof of Gödel's relative consistency theorems using ZF.[74] 26. In the 1960s, this conservative extension theorem was proved independently by Paul Cohen, Saul Kripke, and Robert Solovay. In his 1966 book, Cohen mentioned this theorem and stated that its proof requires forcing. It was also proved independently by Ronald Jensen and Ulrich Felgner, who published his proof in 1971.[75] 27. Both conclusions follow from the conclusion that every proper class can be put into one-to-one correspondence with the class of all ordinals. A proof of this is outlined in Kanamori 2009, p. 53. 28. Easton built a model of Mendelson's version of NBG in which ZFC's axiom of choice holds but global choice fails. 29. In the cumulative hierarchy Vκ, the subsets of Vκ are in Vκ+1. The constructible hierarchy Lκ produces subsets more slowly, which is why the subsets of Lκ are in Lκ+ rather than Lκ+1.[80] References 1. von Neumann 1925, pp. 221–224, 226, 229; English translation: van Heijenoort 2002b, pp. 396–398, 400, 403. 2. Bernays 1937, pp. 66–67. 3. Gödel 1940, p. . 4. Gödel 1940, pp. 3–7. 5. Gödel 1940, p. 6. 6. Gödel 1940, p. 25. 7. Gödel 1940, pp. 35–38. 8. "The Neumann-Bernays-Gödel axioms". Encyclopædia Britannica. Retrieved 17 January 2019. 9. Gödel 1940, p. 3. 10. Mendelson 1997, pp. 225–226. 11. Bernays 1937, p. 66. 12. Mendelson 1997, p. 226. 13. Gödel's axiom A3 (Gödel 1940, p. 3). 14. Gödel's axiom A4 (Gödel 1940, p. 3). 15. Gödel 1940, p. 4. 16. Mendelson 1997, p. 230. 17. Kanamori 2009, p. 56; Bernays 1937, p. 69; Gödel 1940, pp. 5, 9; Mendelson 1997, p. 231. 18. Gödel's axiom B1 (Gödel 1940, p. 5). 19. Gödel's axiom B2 (Gödel 1940, p. 5). 20. Gödel's axiom B3 (Gödel 1940, p. 5). 21. Gödel's axiom B4 (Gödel 1940, p. 5). 22. Bourbaki 2004, p. 71. 23. Bernays' axiom b(3) (Bernays 1937, p. 5). 24. Gödel's axiom B7 (Gödel 1940, p. 5). 25. Gödel's axiom B8 (Gödel 1940, p. 5). 26. Gödel 1940, p. 6; Kanamori 2012, p. 70. 27. Kanamori 2009, p. 57; Gödel 2003, p.
121. Both references contain Gödel's proof but Kanamori's is easier to follow since he uses modern terminology. 28. Dawson 1997, p. 134. 29. Gödel 1940, pp. 8–11 30. Gödel 1940, p. 11. 31. Gray 1991. 32. Gödel 1940, pp. 11–13. 33. Gödel 1940, pp. 8–15. 34. Gödel 1940, pp. 16–18. 35. Bernays 1941, p. 2; Gödel 1940, p. 5). 36. Kanamori 2009, p. 48; Gödel 2003, pp. 104–115. 37. Mendelson 1997, pp. 228, 239. 38. Easton 1964, pp. 56a–64. 39. von Neumann 1925, von Neumann 1928. 40. Ferreirós 2007, p. 369. 41. Mirimanoff 1917, p. 49. 42. Kanamori 2012, p. 62. 43. Hallett 1984, p. 280. 44. Kunen 1980, p. 16. 45. von Neumann 1925, p. 223 (footnote); English translation: van Heijenoort 2002b, p. 398 (footnote). 46. Kanamori 2012, p. 61 47. Kunen 1980, pp. 95–96. Uses the notation R(β) instead of Vβ. 48. Hallett 1984, pp. 288–290. 49. von Neumann 1925, p. 225; English translation: van Heijenoort 2002b, p. 400. 50. Fraenkel, Historical Introduction in Bernays 1991, p. 13. 51. von Neumann 1925, pp. 224–226; English translation: van Heijenoort 2002b, pp. 399–401. 52. Montague 1961. 53. Mirimanoff 1917, p. 41. 54. von Neumann 1925, pp. 230–232; English translation: van Heijenoort 2002b, pp. 404–405. 55. Kanamori 2009, pp. 53–54. 56. Bernays 1941, p. 6. 57. von Neumann 1929, p. 229; Ferreirós 2007, pp. 379–380. 58. Kanamori 2009, pp. 49, 53. 59. Kanamori 2009, pp. 48, 58. Bernays' articles are reprinted in Müller 1976, pp. 1–117. 60. Bernays 1937, p. 65. 61. Kanamori 2009, pp. 48–54. 62. Kanamori 2009, p. 56. 63. Kanamori 2009, pp. 56–58; Gödel 1940, p. . 64. Gödel 1990, p. 26. 65. Gödel 1990, pp. 28–32. 66. Gödel 1986, p. 145. 67. Solovay 1990, p. 13. 68. Kunen 1980, p. 176. 69. Gödel 1990, p. 108, footnote i. The paragraph containing this footnote discusses why Gödel considered "property of set" a primitive of set theory and how it fit into his ontology. "Property of set" corresponds to the "class" primitive in NBG. 70. Kanamori 2009, p. 57. 71. Cohen 1963. 72. Kanamori 2009, p. 65: "Forcing itself went a considerable distance in downgrading any formal theory of classes because of the added encumbrance of having to specify the classes of generic extensions." 73. Cohen 1966, pp. 107–147. 74. Cohen 1966, pp. 85–99. 75. Ferreirós 2007, pp. 381–382; Cohen 1966, p. 77; Felgner 1971. 76. Mostowski 1950, p. 113, footnote 11. Footnote references Wang's NQ set theory, which later evolved into MK. 77. Kanamori 2009b, pp. 18, 29. 78. Chuaqui 1981, p. 313 proves that (Vκ, Vκ+1, ∈) is a model of MKTR + AxC. MKT is Tarski's axioms for MK without Choice or Replacement. MKTR + AxC is MKT with Replacement and Choice (Chuaqui 1981, pp. 4, 125), which is equivalent to MK. 79. Mendelson 1997, p. 275. 80. Gödel 1940, p. 54; Solovay 1990, pp. 9–11. 81. Gödel 1940, p. 54. 82. Heydorn, Ascan (8 August 2023). "A Countable Model of Set Theory" (PDF). Retrieved 12 August 2023. 83. Adámek, Herrlich & Strecker 2004, pp. 15–16, 40. Bibliography • Adámek, Jiří; Herrlich, Horst; Strecker, George E. (1990), Abstract and Concrete Categories (The Joy of Cats) (1st ed.), New York: Wiley & Sons, ISBN 978-0-471-60922-3. • Adámek, Jiří; Herrlich, Horst; Strecker, George E. (2004) [1990], Abstract and Concrete Categories (The Joy of Cats) (Dover ed.), New York: Dover Publications, ISBN 978-0-486-46934-8. • Bernays, Paul (1937), "A System of Axiomatic Set Theory—Part I", The Journal of Symbolic Logic, 2 (1): 65–77, doi:10.2307/2268862, JSTOR 2268862. 
• Bernays, Paul (1941), "A System of Axiomatic Set Theory—Part II", The Journal of Symbolic Logic, 6 (1): 1–17, doi:10.2307/2267281, JSTOR 2267281. • Bernays, Paul (1991), Axiomatic Set Theory (2nd Revised ed.), Dover Publications, ISBN 978-0-486-66637-2. • Bourbaki, Nicolas (2004), Elements of Mathematics: Theory of Sets, Springer, ISBN 978-3-540-22525-6. • Chuaqui, Rolando (1981), Axiomatic Set Theory: Impredicative Theories of Classes, North-Holland, ISBN 0-444-86178-5. • Cohen, Paul (1963), "The Independence of the Continuum Hypothesis", Proceedings of the National Academy of Sciences of the United States of America, 50 (6): 1143–1148, Bibcode:1963PNAS...50.1143C, doi:10.1073/pnas.50.6.1143, PMC 221287, PMID 16578557. • Cohen, Paul (1966), Set Theory and the Continuum Hypothesis, W. A. Benjamin. • Cohen, Paul (2008), Set Theory and the Continuum Hypothesis, Dover Publications, ISBN 978-0-486-46921-8. • Dawson, John W. (1997), Logical dilemmas: The life and work of Kurt Gödel, Wellesley, MA: AK Peters. • Easton, William B. (1964), Powers of Regular Cardinals (PhD thesis), Princeton University. • Felgner, Ulrich (1971), "Comparison of the axioms of local and universal choice" (PDF), Fundamenta Mathematicae, 71: 43–62, doi:10.4064/fm-71-1-43-62. • Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Basel, Switzerland: Birkhäuser, ISBN 978-3-7643-8349-7. • Gödel, Kurt (1940), The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory (Revised ed.), Princeton University Press, ISBN 978-0-691-07927-1. • Gödel, Kurt (2008), The Consistency of the Axiom of Choice and of the Generalized Continuum Hypothesis with the Axioms of Set Theory, with a foreword by Laver, Richard (Paperback ed.), Ishi Press, ISBN 978-0-923891-53-4. • Gödel, Kurt (1986), Collected Works, Volume 1: Publications 1929–1936, Oxford University Press, ISBN 978-0-19-514720-9. • Gödel, Kurt (1990), Collected Works, Volume 2: Publications 1938–1974, Oxford University Press, ISBN 978-0-19-514721-6. • Gödel, Kurt (2003), Collected Works, Volume 4: Correspondence A–G, Oxford University Press, ISBN 978-0-19-850073-5. • Gray, Robert (1991), "Computer programs and mathematical proofs", The Mathematical Intelligencer, 13 (4): 45–48, doi:10.1007/BF03028342, S2CID 121229549. • Hallett, Michael (1984), Cantorian Set Theory and Limitation of Size (Hardcover ed.), Oxford: Clarendon Press, ISBN 978-0-19-853179-1. • Hallett, Michael (1986), Cantorian Set Theory and Limitation of Size (Paperback ed.), Oxford: Clarendon Press, ISBN 978-0-19-853283-5. • Kanamori, Akihiro (2009b), The Higher Infinite: Large Cardinals in Set Theory from Their Beginnings, Springer, ISBN 978-3-540-88867-3. • Kanamori, Akihiro (2009), "Bernays and Set Theory" (PDF), Bulletin of Symbolic Logic, 15 (1): 43–69, doi:10.2178/bsl/1231081769, JSTOR 25470304, S2CID 15567244. • Kanamori, Akihiro (2012), "In Praise of Replacement" (PDF), Bulletin of Symbolic Logic, 18 (1): 46–90, doi:10.2178/bsl/1327328439, JSTOR 41472440, S2CID 18951854. • Kunen, Kenneth (1980), Set Theory: An Introduction to Independence Proofs (Hardcover ed.), North-Holland, ISBN 978-0-444-86839-8. • Kunen, Kenneth (2012), Set Theory: An Introduction to Independence Proofs (Paperback ed.), North-Holland, ISBN 978-0-444-56402-3. • Mendelson, Elliott (1997), An Introduction to Mathematical Logic (4th ed.), London: Chapman and Hall/CRC, ISBN 978-0-412-80830-2. - Pp. 
225–86 contain the classic textbook treatment of NBG, showing how it does what we expect of set theory, by grounding relations, order theory, ordinal numbers, transfinite numbers, etc. • Mirimanoff, Dmitry (1917), "Les antinomies de Russell et de Burali-Forti et le problème fondamental de la théorie des ensembles", L'Enseignement Mathématique, 19: 37–52. • Montague, Richard (1961), "Semantic Closure and Non-Finite Axiomatizability I", in Buss, Samuel R. (ed.), Infinitistic Methods: Proceedings of the Symposium on Foundations of Mathematics, Pergamon Press, pp. 45–69. • Mostowski, Andrzej (1950), "Some impredicative definitions in the axiomatic set theory" (PDF), Fundamenta Mathematicae, 37: 111–124, doi:10.4064/fm-37-1-111-124. • Muller, F. A. (1 September 2001), "Sets, classes, and categories" (PDF), British Journal for the Philosophy of Science, 52 (3): 539–73, doi:10.1093/bjps/52.3.539. • Müller, Gurt, ed. (1976), Sets and Classes: On the Work of Paul Bernays, Studies in Logic and the Foundations of Mathematics Volume 84, Amsterdam: North Holland, ISBN 978-0-7204-2284-9. • Potter, Michael (2004), Set Theory and Its Philosophy: A Critical Introduction (Hardcover ed.), Oxford University Press, ISBN 978-0-19-926973-0. • Potter, Michael (2004p), Set Theory and Its Philosophy: A Critical Introduction (Paperback ed.), Oxford University Press, ISBN 978-0-19-927041-5. • Pudlák, Pavel (1998), "The Lengths of Proofs" (PDF), in Buss, Samuel R. (ed.), Handbook of Proof Theory, Elsevier, pp. 547–637, ISBN 978-0-444-89840-1. • Smullyan, Raymond M.; Fitting, Melvin (2010) [Revised and corrected edition: first published in 1996 by Oxford University Press], Set Theory and the Continuum Problem, Dover, ISBN 978-0-486-47484-7. • Solovay, Robert M. (1990), "Introductory note to 1938, 1939, 1939a and 1940", Kurt Gödel Collected Works, Volume 2: Publications 1938–1974, Oxford University Press, pp. 1–25, ISBN 978-0-19-514721-6. • von Neumann, John (1923), "Zur Einführung der transfiniten Zahlen", Acta Litt. Acad. Sc. Szeged X., 1: 199–208. • English translation: van Heijenoort, Jean (2002a) [1967], "On the introduction of transfinite numbers", From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (Fourth Printing ed.), Harvard University Press, pp. 346–354, ISBN 978-0-674-32449-7. • English translation: van Heijenoort, Jean (2002b) [1967], "An axiomatization of set theory", From Frege to Gödel: A Source Book in Mathematical Logic, 1879-1931 (Fourth Printing ed.), Harvard University Press, pp. 393–413, ISBN 978-0-674-32449-7. • von Neumann, John (1925), "Eine Axiomatisierung der Mengenlehre", Journal für die Reine und Angewandte Mathematik, 154: 219–240. • von Neumann, John (1928), "Die Axiomatisierung der Mengenlehre", Mathematische Zeitschrift, 27: 669–752, doi:10.1007/bf01171122, S2CID 123492324. • von Neumann, John (1929), "Über eine Widerspruchsfreiheitsfrage in der axiomatischen Mengenlehre", Journal für die Reine und Angewandte Mathematik, 160: 227–241. External links • "von Neumann-Bernays-Gödel set theory". PlanetMath. • Szudzik, Matthew. "von Neumann-Bernays-Gödel Set Theory". MathWorld.
July 2018, 38(7): 3595-3616. doi: 10.3934/dcds.2018155

Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity

Hai-Yang Jin, Department of Mathematics, South China University of Technology, Guangzhou 510640, China

* Corresponding author: Hai-Yang Jin

Received October 2017. Published April 2018.

Fund Project: The research of H.Y. Jin was supported by the NSF of China No. 11501218, and the Fundamental Research Funds for the Central Universities (No. 2017MS107).

This paper is concerned with the following Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity $\begin{cases}\tag{*}n_t+u·\nabla n = \nabla ·(d(c)\nabla n)-\nabla ·(χ (c) n\nabla c)+a n-bn^2, &x∈ Ω, ~~t>0, \\ c_t+u·\nabla c = Δ c+ n-c,&x∈ Ω, ~~t>0, \\ u_t+ u·\nabla u = Δ u-\nabla P+n\nabla φ,&x∈ Ω, ~~t>0, \\\nabla · u = 0& x∈ Ω, \ t>0, \end{cases}$ in a bounded smooth domain $Ω\subset \mathbb{R}^2$ with homogeneous Neumann boundary conditions, where $a≥0$ and $b>0$ are constants, and the functions $d(c)$ and $χ(c)$ satisfy the following assumptions: $(d(c), χ (c))∈ [C^2([0, ∞))]^2$; $d(c), χ(c)>0$ for all $c≥0$; $d'(c)<0$ with $\lim\limits_{c\to∞}d(c) = 0$; and the limits $\lim\limits_{c\to∞} \frac{χ (c)}{d(c)}$ and $\lim\limits_{c\to∞}\frac{d'(c)}{d(c)}$ exist. The difficulty in the analysis of system (*) is the possible degeneracy of the diffusion due to the condition $\lim\limits_{c\to∞}d(c) = 0$. In this paper, we use a suitable weight function and employ the method of energy estimates to establish the global existence of classical solutions of (*) with a uniform-in-time bound. Furthermore, by constructing a Lyapunov functional, we show that the global classical solution $(n, c, u)$ converges to the constant state $(\frac{a}{b}, \frac{a}{b}, 0)$ provided $b>\frac{K_0}{16}$, where $K_0 = \max\limits_{0≤c ≤∞}\frac{|χ(c)|^2}{d(c)}$.

Keywords: Chemotaxis, boundedness, large time behavior, signal-dependent diffusion.

Mathematics Subject Classification: Primary: 35A01, 35B40, 35K55, 35Q92, 92C17.

Citation: Hai-Yang Jin. Boundedness and large time behavior in a two-dimensional Keller-Segel-Navier-Stokes system with signal-dependent diffusion and sensitivity. Discrete & Continuous Dynamical Systems - A, 2018, 38 (7): 3595-3616. doi: 10.3934/dcds.2018155
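The Lyapunov functional itself is not displayed in this abstract. As a rough indication of the type of functional commonly used to prove stabilization in chemotaxis-(Navier-)Stokes systems with a logistic source (a generic candidate from the related literature, not necessarily the exact functional constructed in this paper), one may consider, with $n_*=c_*=\frac{a}{b}$,

$\mathcal{E}(t)=\int_Ω \Big(n-n_*-n_*\ln\frac{n}{n_*}\Big)\,dx+\frac{1}{2}\int_Ω (c-c_*)^2\,dx+\frac{1}{2}\int_Ω |u|^2\,dx.$

For such a functional, the logistic term contributes $-b\int_Ω (n-n_*)^2\,dx$ to $\frac{d}{dt}\mathcal{E}(t)$, while the chemotactic cross term is typically absorbed by Young's inequality using a bound of the form $\frac{|χ(c)|^2}{d(c)}\le K_0$; balancing these contributions is how a smallness condition such as $b>\frac{K_0}{16}$ can arise.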
CommonCrawl
\begin{document} \title[Second order vectorial $\infty$-eigenvalue problems]{Generalised second order vectorial $\infty$-eigenvalue problems} \author{Ed Clark} \address{E. C., Department of Mathematics and Statistics, University of Reading, Whiteknights Campus, Pepper Lane, Reading RG6 6AX, United Kingdom} \email{[email protected]} \author{Nikos Katzourakis} \address[Corresponding author]{N. K., Department of Mathematics and Statistics, University of Reading, Whiteknights Campus, Pepper Lane, Reading RG6 6AX, United Kingdom} \email{[email protected]} \thanks{E.C.\ has been financially supported through the UK EPSRC scholarship GS19-055} \subjclass[2020]{35P30, 35D30, 35J94, 35P15.} \date{} \keywords{Calculus of Variations in $L^\infty$; $\infty$-Eigenvalue problem; nonlinear eigenvalue problems; Absolute minimisers; Lagrange Multipliers.} \begin{abstract} We consider the problem of minimising the $L^\infty$ norm of a function of the hessian over a class of maps, subject to a mass constraint involving the $L^\infty$ norm of a function of the gradient and the map itself. We assume zeroth and first order Dirichlet boundary data, corresponding to the ``hinged" and the ``clamped" cases. By employing the method of $L^p$ approximations, we establish the existence of a special $L^\infty$ minimiser, which solves a divergence PDE system with measure coefficients as parameters. This is a counterpart of the Aronsson-Euler system corresponding to this constrained variational problem. Furthermore, we establish upper and lower bounds for the eigenvalue. \end{abstract} \maketitle \!\! \section{Introduction and main results} \label{Section1} Let $n, N \in \mathbb{N}$ with $n\geq 2$, and let $\Omega \Subset \mathbb{R}^n$ be a bounded open set with Lipschitz boundary $\partial \Omega$. In this paper we are interested in studying nonlinear second order $L^\infty$ eigenvalue problems. Specifically, we investigate the problem of finding a minimising map $u_{\infty}:\overline{\Omega}\longrightarrow \mathbb{R}^N$ that solves \beq \label{1.1} \begin{split} \|f(\mathrm{D}^2u_\infty)\|_{L^{\infty}(\Omega)}\,=\, \inf \Big\{& \|f(\mathrm{D}^2v)\|_{L^{\infty}(\Omega)}\ :\\ & v\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N), \ \|g(v, \mathrm{D} v)\|_{L^{\infty}(\Omega)}=1\Big\}. \end{split} \eeq Additionally, we pursue the necessary conditions that these constrained minimisers must satisfy, in the form of PDEs. In the above, $f: \mathbb{R}^{N \times n^2}_s \longrightarrow \mathbb{R}$ and $g :\mathbb{R}^{N} \times \mathbb{R}^{N\times n} \longrightarrow \mathbb{R}$ are given functions that will be required to satisfy some natural assumptions, to be discussed later in this section. We merely note now that $\mathbb{R}^{N \times n^2}_s$ symbolises the symmetric tensor space $\mathbb{R}^N \otimes (\mathbb{R}^n \vee \mathbb{R}^n)$ wherein the hessians of twice differentiable maps $u : \Omega \longrightarrow \mathbb{R}^N$ are valued. The functional Sobolev space $W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)$ appearing above will be taken to be either of: \beq \label{1.2} \left\{ \ \ \begin{split} W^{2,\infty}_{\mathrm C}(\Omega;\mathbb{R}^N) :&= \, W^{2,\infty}_0(\Omega;\mathbb{R}^N), \\ W^{2,\infty}_{\mathrm H}(\Omega;\mathbb{R}^N) :&= \, W^{2,\infty} \cap W^{1,\infty}_0 (\Omega;\mathbb{R}^N).\, \end{split} \right.
\eeq The space $W^{2,\infty}_{\mathrm C}(\Omega;\mathbb{R}^N)$ encompasses the case of so-called clamped boundary conditions, which can be seen as first order Dirichlet or as coupled Dirichlet-Neumann conditions, requiring $|u| =|\mathrm{D} u|= 0$ on $\partial \Omega$. On the other hand, $W^{2,\infty}_{\mathrm H}(\Omega;\mathbb{R}^N)$ encompasses the so-called hinged boundary conditions, which are zeroth order Dirichlet conditions, requiring $|u| = 0$ on $\partial \Omega$. This is standard terminology for such problems, see e.g.\ \cite{KP}. Problem \eqref{1.1} lies within the Calculus of Variations in $L^{\infty}$, a modern area initiated by Gunnar Aronsson in the 1960s. Since then this field has undergone a substantial transformation. There are some general complications one must be wary of when tackling $L^{\infty}$ variational problems. For example, the $L^{\infty}$ norm is generally not Gateaux differentiable, therefore the analogue of the Euler-Lagrange equations cannot be derived directly by considering variations. Any supremal functional also has issues with locality in terms of minimisation on subdomains. Further, the space itself lacks some fundamental functional analytic properties, such as reflexivity and separability. Higher order problems and problems involving constraints present additional difficulties and have been studied even more sparsely, see e.g.\ \cite{AB, BJ, CKM, CK, K1, K2, K3, K4, KM, KPr}. In fact, this paper is an extension of \cite{K4} to the second order case, and generalises part of the results corresponding to the existence of minimisers and the satisfaction of PDEs from \cite{KP}. In turn, the paper \cite{K4} generalised results on the scalar case of eigenvalue problems for the $\infty$-Laplacian (\cite{JL,JLM}). For various interesting results, see for instance \cite{AP, AB, BK, CDP, KZ, MWZ, PZ, RZ}. The vectorial and higher order nature of the problem we are considering herein precludes the use of standard methods, such as viscosity solutions (see e.g.\ \cite{K0} for a pedagogical introduction). However, we overcome these difficulties by approximating with the corresponding $L^p$ problems for finite $p$ and letting $p\to \infty$. The intuition for using this technique is based on the rudimentary idea that, for a fixed $L^\infty$ function on a set of finite measure, its $L^p$ norm tends to its $L^{\infty}$ norm as $p\to \infty$. This technique is rather standard for $L^\infty$ problems, and in the vectorial higher order case we consider herein it is essentially the only method known. Even the very recent intrinsic duality method of \cite{BK} is limited to scalar-valued first order problems. To state our main result, we now introduce the required hypotheses for the functions $f$ and $g$: \beq \label{1.3} \left\{ \begin{split} & (a) \ f\in C^1(\mathbb{R}^{N \times n^2}_s). \\ & (b) \ f \ \text{is} \ \text{(Morrey) quasiconvex}.\\ & (c) \ \text{There exist}\ 0< C_1\leq C_2 \text{ such that, }\text{for all}\ X\in \mathbb{R}^{N \times n^2}_s \setminus \{0\},\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0 < C_1f(X)\, \leq \, \partial f(X):X \, \leq \, C_2f(X).
\phantom{\Big|}\\ & (d) \ \text{There exist} \ C_3, ..., C_6>0, \alpha>1 \text{ and } \beta\leq 1 \ \text{such that, }\text{for all} \ X\in \mathbb{R}^{N \times n^2}_s,\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ -C_3+C_4|X|^{\alpha}\, \leq \, f(X)\leq C_5|X|^{\alpha} +C_6, \phantom{\Big|}\\ &\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ |\partial f(X)|\, \leq \, C_5 f(X)^{\beta}+C_6.\phantom{_\big|} \end{split} \right. \eeq \beq \label{1.4} \left\{ \begin{split} & (a) \ g\in C^1(\mathbb{R}^{N}\times \mathbb{R}^{N\times n}). \\ & (b) \ g \ \text{is coercive, in the sense that}\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \lim_{t\to \infty} \bigg(\inf_{(\eta,P)\in \mathbb{R}^N \! \times \mathbb{R}^{N\times n}, |(\eta,P)|=1}g(t\eta, tP)\bigg)=\infty.\phantom{\Big|} \\ & (c) \ \text{There exist}\ 0<C_7\leq C_8 \text{ such that, }\text{for all} \ (\eta,P)\in \big(\mathbb{R}^N \! \times \mathbb{R}^{N\times n}\big) \! \setminus \! \{(0,0)\},\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ 0<C_7\, g(\eta, P)\, \leq \, \partial_\eta g (\eta, P)\cdot \eta+\partial_P g (\eta, P):P\; \leq \; C_8\,g(\eta, P). \phantom{\Big|} \end{split} \right. \ \ \eeq In the above, $\partial f(X)$ denotes the derivative of $f$, whilst $\partial_\eta g $ and $\partial_P g $ signify the respective partial derivatives. Additionally, ``:" and $``\cdot"$ represent the Euclidean inner products. The terminology of (Morrey) quasiconvex refers to the standard notion for integral functionals (see e.g.\ \cite{D, Zh}), namely \[ \ \ \ F(X) \, \leq \, -\hspace{-10.5pt}\displaystyle\int_\Omega F(X+\mathrm{D}^2 \phi)\, \mathrm d \mathcal{L}^n, \ \ \ \ \forall\ \phi \in W^{2,\infty}_0(\Omega;\mathbb{R}^N),\ \forall\ X\in \mathbb{R}^{N \times n^2}_s. \] We note that herein we will be using the following function space symbolisations: \[ \begin{split} C^2_{\mathrm B}(\overline{\Omega};\mathbb{R}^N) \, &:=\, C^2(\overline{\Omega};\mathbb{R}^N) \cap W^{2,\infty}_{\mathrm B}(\Omega; \mathbb{R}^N), \\ W^{2,p}_{\mathrm C}(\Omega;\mathbb{R}^N) &:= \, W^{2,p}_0(\Omega;\mathbb{R}^N),\ \ p\in[1,\infty), \\ W^{2,p}_{\mathrm H}(\Omega;\mathbb{R}^N) &:= \, W^{2,p} \cap W^{1,p}_0 (\Omega;\mathbb{R}^N), \ \ p\in[1,\infty). \end{split} \] Further, we will be using the rescaled $L^p$ norms for $p\in[1, \infty)$, given by \[ \| h\|_{L^p(\Omega)}\,:=\,\bigg(\, \frac{1}{\mathcal{L}^{n}(\Omega)}\int_{\Omega}|h|^p \, \mathrm{d} {\mathcal{L}}^{n}\bigg)^{\frac{1}{p}}\,=\, \bigg(\, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}|h|^p \, \mathrm{d} {\mathcal{L}}^{n}\bigg)^{\frac{1}{p}}. \] Finally, we observe that \eqref{1.3}(c) implies that $f>0$ on $\mathbb{R}^{N \times n^2}_s\setminus\{0\}$, $f(0)=0$ and $f$ is radially increasing, meaning that $t\mapsto f(tX)$ is increasing on $(0, \infty)$ for any fixed $X\in \mathbb{R}^{N \times n^2}_s\setminus\{0\}$. Similarly, \eqref{1.4}(c) implies that $g>0$ on $(\mathbb{R}^N \times \mathbb{R}^{N\times n}) \setminus\{(0,0)\}$, $g(0,0)=0$ and $g$ is radially increasing on $\mathbb{R}^N \times \mathbb{R}^{N\times n}$, namely $t\mapsto g(t\eta,tP)$ is increasing on $(0, \infty)$ for any fixed $(\eta,P)\in (\mathbb{R}^N \times \mathbb{R}^{N\times n})\setminus\{(0,0)\}$. Below is our main result, in which we consider both cases of boundary conditions simultaneously. \begin{theorem}\label{1} Suppose that the assumptions \eqref{1.3} and \eqref{1.4} hold true.
Then: \\ (A) The problem \eqref{1.1} has a solution $u_{\infty}\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N).$ \\ (B) There exist Radon measures \[ {\mathrm M}_\infty \in \mathcal{M}\big(\overline{\Omega}; \mathbb{R}_s^{N \times n^2}\big), \ \ \ \nu_{\infty}\in \mathcal{M}(\overline{\Omega}), \] such that \beq \label{1.5} \begin{split} \int_{\overline{\Omega}} \mathrm{D}^2\phi : \mathrm{d} {\mathrm M}_\infty=\Lambda_{\infty} \int_{\overline{\Omega}} \Big( \partial_\eta g (u_{\infty}, \mathrm{D} u_{\infty})\cdot \phi \, +\, \partial_P g (u_{\infty}, \mathrm{D} u_{\infty}) : \mathrm{D} \phi \Big) \, \mathrm{d} \nu_{\infty} \end{split} \eeq for all test maps $\phi\in C^2_{\mathrm B}(\overline{\Omega}; \mathbb{R}^N)$, where \beq \label{1.6} \begin{split} \Lambda_{\infty} \, =\, \big\|f(\mathrm{D}^2u_{\infty}) \big\|_{L^{\infty}(\Omega)}>0. \end{split} \eeq Additionally, we have the following a priori lower bound for the eigenvalue \beq \label{1.7} \Lambda_{\infty}\geq \Bigg ( \frac{ C_4 } {\mathrm{diam}(\Omega)^\alpha \Big(C(\infty,\Omega) \|\partial_\eta g \|_{L^{\infty}(\{g \leq1\})}+\|\partial_P g \|_{L^{\infty}(\{g \leq1\})}\Big)^\alpha } -C_3 \Bigg)^{\!\!+}, \eeq where $(\, \cdot\, )^+$ symbolises the positive part, and $C(\infty,\Omega)$ equals either the constant of the Poincar\'e inequality (in the case of clamped boundary conditions), or the constant of the Poincar\'e-Wirtinger inequality (in the case of hinged boundary conditions), both taken for $p=\infty$. \\ If additionally the boundary $\partial\Omega$ is $C^2$, we have the a priori upper bound \beq \label{1.8A} \begin{split} \Lambda_\infty \, \leq & \ C_6 \,+\, C_5\frac{2^{5\alpha}}{(c \omega(n))^\alpha} \bigg( \! 1 + \underset{0\leq t \leq 1}{\sup}R(t) \! \bigg)^{\!\!\alpha} \Big(2^{3n} + \underset{i=1,...,n-1}{\max}\big(\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n \Big)^{\!\alpha} \centerdot \\ &\centerdot \Bigg\{ 1 + \bigg( \! 1+\frac{C}{\varepsilon_0^{n+1}} \! \bigg) \mathcal{H}^{n-1}(\partial\Omega) \, +\, \sum_{i=1}^{n-1} \left\| \frac{ \kappa_i \circ \mathrm{P}_{\Omega} }{ 1 - ( \kappa_i \circ \mathrm{P}_{\Omega})d_\Omega } \right\|_{L^\infty(\{d_\Omega< \varepsilon_0\} \cap \Omega)} \!\!\Bigg\}^{\!\alpha} , \end{split} \eeq where $c,C>0$ are dimensionless universal constants, $\omega(n)$ is the volume of the unit ball in $\mathbb{R}^n$, $\mathcal{H}^{n-1}(\partial\Omega)$ is the perimeter of $\Omega$, $\{\kappa_1,...,\kappa_{n-1}\}$ are the principal curvatures of $\partial\Omega$, $\mathrm{P}_{\Omega}$ is the orthogonal projection on $\partial\Omega$, $d_\Omega$ the distance function of $\partial\Omega$, $\varepsilon_0$ is the largest \[ \varepsilon \in \left(0 , \min\left\{1\, ,\, \underset{i=1,...,n-1}{\min}\frac{1}{ \|\kappa_i\|_{C^0(\partial\Omega)}} \right\}\right), \] for which we have that $d_\Omega \in C^2(\{d_\Omega\leq \varepsilon\} \cap \overline{\Omega})$ and $R(t)$ is the smallest radius of the $N$-dimensional ball, for which the sublevel set $\{g\leq t\}$ is contained in the cylinder $\bar\mathbb{B}^N_{R(t)}(0) \times \mathbb{R}^{N\times n}$, namely \[ R(t) \,:=\, \inf \Big\{ R>0 \, : \, \{g\leq t\} \subseteq \mathbb{B}^N_R(0) \times \mathbb{R}^{N\times n}\Big\}.
\] \\ (C) The quadruple $(u_{\infty}, \Lambda_{\infty}, {\mathrm M}_\infty, \nu_{\infty})$ satisfies the following approximation properties: there exists a sequence of exponents $(p_j)_1^{\infty} \subseteq (n/ \alpha, \infty)$ where $p_j\to \infty$ as $j\to \infty$, and for any $p$, a quadruple \[ (u_p,\Lambda_p, {\mathrm M}_p, \nu_p)\ \in \ W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)\times [0, \infty) \times\mathcal{M}\big(\overline{\Omega}; \mathbb{R}_s^{N \times n^2}\big) \times \mathcal{M}(\overline{\Omega}), \] such that \beq \label{1.8} \left\{ \ \ \begin{array}{ll} u_{p} \longrightarrow u_\infty, & \ \ \text{in } C^1 \big(\overline{\Omega};\mathbb{R}^N \big), \\ \mathrm{D}^2 u_{p} \, -\!\!\!\!\!-\!\!\!\!\rightharpoonup\mathrm{D}^2 u_\infty, & \ \ \text{in } L^q \big(\Omega; \mathbb{R}^{N \times n^2}_s\big), \ \text{for all } q\in (1,\infty), \\ \Lambda_p \longrightarrow \Lambda_{\infty}, & \ \ \text{in } [0,\infty), \\ {\mathrm M}_p \weakstar {\mathrm M}_\infty, & \ \ \text{in }\mathcal{M}\big(\overline{\Omega}; \mathbb{R}_s^{N \times n^2}\big), \\ \nu_p \weakstar \nu_{\infty}, & \ \ \text{in } \mathcal{M}(\overline{\Omega}), \end{array} \right. \eeq as $p \to \infty$ along $(p_j)_1^{\infty}$. Further, $u_p$ solves the constrained minimisation problem \beq \label{1.9} \|f(\mathrm{D}^2u_p)\|_{L^{p}(\Omega)}\,=\, \inf \Big\{ \|f(\mathrm{D}^2v)\|_{L^{p}(\Omega)}\ : \ v\in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N), \ \|g(v, \mathrm{D} v)\|_{L^{p}(\Omega)}=1\Big\}, \\ \eeq and $(u_p, \Lambda_p)$ satisfies \beq \label{1.10} \left\{ \begin{split} & {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1} \partial f(\mathrm{D}^2u_p): \mathrm{D}^2 \phi \, \mathrm{d} \mathcal{L}^n \\ = &\ (\Lambda_p)^p \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1}\Big(\partial_\eta g (u_p, \mathrm{D} u_p) \cdot \phi + \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} \phi \Big) \, \, \mathrm{d} {\mathcal{L}}^{n} \end{split} \right. \eeq for all test maps $\phi\in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$. Finally, the measures ${\mathrm M}_p, \nu_p$ are given by \beq \label{1.11} \left\{ \ \ \begin{split} {\mathrm M}_p &= \frac{1}{\mathcal{L}^n(\Omega)}\bigg(\frac{f(\mathrm{D}^2u_p)}{ \Lambda_p}\bigg)^{\! p-1} \partial f(\mathrm{D}^2u_p) \, \mathcal{L}^{n} \text{\LARGE$\llcorner$}_{\Omega}, \\ \nu_p &= \frac{1}{\mathcal{L}^n(\Omega)}\, g(u_p, \mathrm{D} u_p)^{p-1} \, \mathcal{L}^{n} \text{\LARGE$\llcorner$}_{\Omega}. \\ \end{split} \right. \ \ \eeq \end{theorem} We note that one could pursue optimality in Theorem \ref{1}(A) by using $L^\infty$ versions of quasiconvexity, as developed by Barron-Jensen-Wang \cite{BJW2} but adapted to this higher order case, as regards the existence of $L^\infty$ minimisers. However, for parts (B) and (C) of Theorem \ref{1} regarding the necessary PDE conditions, we do need Morrey quasiconvexity, as we rely essentially on the existence of solutions to the corresponding Euler-Lagrange equations and the theory of Lagrange multipliers in the finite $p$ case. Further, the measures ${\mathrm M}_\infty, \nu_{\infty}$ \emph{depend} on the minimiser $u_{\infty}$ in a non-linear fashion, hence one could perhaps symbolise them more concisely as ${\mathrm M}_\infty(u_\infty), \nu_{\infty}(u_\infty)$. Consequently, the significance of these equations is currently understood to be mostly of conceptual value, rather than of computational nature.
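To fix ideas, we note in passing that a simple model pair of integrands satisfying the assumptions \eqref{1.3}-\eqref{1.4} appears to be given by \[ f(X) \,:=\, |X|^{\alpha}, \ \ \ X\in \mathbb{R}^{N \times n^2}_s, \ \ \ \ \ \ \ \ g(\eta,P) \,:=\, |\eta|^2+|P|^2, \ \ \ (\eta,P)\in \mathbb{R}^N \! \times \mathbb{R}^{N\times n}, \] for any fixed $\alpha>1$; this pair is recorded here merely as an illustration of the hypotheses (it is not drawn from the references above) and plays no role in the sequel. Indeed, $f$ is convex (hence quasiconvex) and $C^1$, with \[ \partial f(X):X \,=\, \alpha |X|^{\alpha} \,=\, \alpha f(X), \ \ \ \ \ \ |\partial f(X)| \,=\, \alpha |X|^{\alpha-1} \,=\, \alpha f(X)^{\frac{\alpha-1}{\alpha}}, \] so that \eqref{1.3} holds with $C_1=C_2=C_5=\alpha$, $C_4=1$, $\beta=\frac{\alpha-1}{\alpha}\leq 1$ and any $C_3, C_6>0$, whilst $g$ is smooth and coercive, with \[ \partial_\eta g (\eta, P)\cdot \eta \,+\, \partial_P g (\eta, P):P \,=\, 2\big(|\eta|^2+|P|^2\big) \,=\, 2\, g(\eta,P), \] so that \eqref{1.4} holds with $C_7=C_8=2$.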
It is also possible, in principle, to obtain further information about the underlying structure of the parametric measure coefficients ${\mathrm M}_\infty, \nu_{\infty}$. This requires techniques such as measure function pairs and mollifications up to the boundary as in \cite{CK, H, K4}, but to keep the presentation as simple as possible, we refrain from pursuing this (considerably more technical) endeavour, which also requires stronger assumptions. \section{Proofs} \label{Section2} In this section we establish Theorem \ref{1}. Its proof is not labelled explicitly, but will be completed by proving a combination of smaller subsidiary results, including a selection of lemmas and propositions. Before introducing the approximating problem (the $L^p$ case for finite $p$), we need to establish a convergence result, which shows that the admissible classes of the $p$-problems are non-empty. It is required because the function $g$ appearing in the constraint is not assumed to be homogeneous, therefore a standard scaling argument does not suffice. \begin{lemma}\label{lemma2} For any $v\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)\setminus \{0\}$, there exists $(t_p)_{p\in (n/\alpha, \infty]}$ with $t_p\to t_{\infty}$ as $p\to\infty$, such that \[ \big\|g\big(t_pv, t_p\mathrm{D} v\big)\big\|_{L^p(\Omega)}=1, \phantom{\Big|} \] for all $p\in (n/\alpha, \infty]$. Further, if $\|g(v, \mathrm{D} v) \|_{L^{\infty}(\Omega)}=1$, then $t_{\infty}=1$. \end{lemma} \noindent \textbf{Proof of Lemma} \ref{lemma2}. Fix $v\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)\setminus \{0\}$ and define \[ \rho_{\infty}(t):=\max_{x\in \overline{\Omega}}g\big(tv(x),t\mathrm{D} v(x)\big), \ \ \ \ t\geq 0. \] It follows that $\rho_{\infty}(0)=0$ and $\rho_{\infty}$ is continuous on $[0, \infty)$. We will now show that $\rho_{\infty}$ is strictly increasing. We first show it is non-decreasing. For any $s>0$ and $(\eta,P) \in \mathbb{R}^N \times \mathbb{R}^{N\times n} \setminus \{(0,0)\}$, our assumption \eqref{1.4}(c) implies \[ \begin{split} 0\, &<\,\frac{C_7g(s\eta, sP)}{s} \\ &\leq \, \partial_\eta g (s\eta, sP)\cdot \eta \, +\, \partial_P g (s\eta, sP):P \\ &= \, \partial_{(\eta,P)} g (s\eta, sP) : (\eta,P) \\ &= \, \frac{\mathrm{d}}{\mathrm{d} s}\big(g(s\eta, sP)\big), \end{split} \] thus $s\mapsto g(s\eta, sP)$ is increasing on $(0, \infty)$. Hence, for any $x\in \overline{\Omega}$ and $t>s\geq0$ we have $g(sv(x), s\mathrm{D} v(x))\leq g(tv(x), t \mathrm{D} v(x))$, which yields \[ \begin{split} \rho_{\infty}(s)=\max_{x\in \overline{\Omega}}g\big(sv(x),s\mathrm{D} v(x)\big)\, \leq \, \max_{x\in \overline{\Omega}}g\big(tv(x),t\mathrm{D} v(x)\big)=\rho_{\infty}(t). \end{split} \] We proceed to demonstrate that $t\mapsto \rho_{\infty}(t)$ is actually strictly monotonic over $(0,\infty)$. Fix $t_0>0$. By Danskin's theorem \cite{Dan}, the derivative from the right $\rho_{\infty}'(t_0^+)$ exists, and is given by the formula \[ \begin{split} \rho_{\infty}'(t_0^+) \, = \, \max_{x\in {\Omega}_{t_0}}\Big\{ \partial_{(\eta,P)} g (t_0v(x), t_0 \mathrm{D} v(x)) : \big(v(x),\mathrm{D} v(x)\big) \Big\}, \end{split} \] where \[ \begin{split} \Omega_{t_0}\,:=\, \Big \{ \overline{x}\in \overline{\Omega}\ :\ \rho_{\infty}(t_0)=g \big(t_0v(\overline{x}), t_0 \mathrm{D} v(\overline{x}) \big)\Big \}.
\end{split} \] Hence, by \eqref{1.4}(c) we estimate \[ \begin{split} \rho_{\infty}'(t_0^+)&=\frac{1}{t_0}\max_{x\in {\Omega}_{t_0}}\Big\{ \partial_{(\eta,P)} g (t_0v(x), t_0 \mathrm{D} v(x)) : \big(t_0v(x),t_0\mathrm{D} v(x)\big) \Big\} \\ &\geq \frac{C_7}{t_0}\max_{x\in {\Omega}_{t_0}}g \big(t_0v(x), t_0\mathrm{D} v(x)\big)\\ &= \frac{C_7}{t_0} \rho_{\infty}(t_0)\\ &>0. \end{split} \] This implies that $\rho_{\infty}$ is strictly increasing on $(0,\infty)$. Next, recall that $g$ is coercive by assumption \eqref{1.4}(b), namely $g(s\eta, sP)\to \infty$ as $s\to \infty$, for fixed $(\eta,P)\neq (0,0)$. Thus, for any fixed point $\overline{x}\in \Omega$ with $(v(\overline{x}),\mathrm{D} v(\overline{x}))\ne (0,0)$, which exists because by assumption $v\not \equiv 0$, we have \[ \begin{split} \lim_{t\to \infty}\rho_{\infty}(t)\geq\lim_{t \to \infty} g(tv(\overline{x}), t \mathrm{D} v(\overline{x}))=\infty. \end{split} \] Since $\rho_\infty(0)=0$ and $\rho_{\infty}(t) \to\infty$ as $t\to\infty$, by continuity and the intermediate value theorem, there exists a number $t_\infty>0$ such that $\rho_{\infty}(t_{\infty})=1$, that is \[ \begin{split} \big\|g\big(t_{\infty}v, t_{\infty} \mathrm{D} v\big)\big\|_{L^{\infty}(\Omega)}=1. \end{split} \] If $\|g(v, \mathrm{D} v)\|_{L^{\infty}(\Omega)}=1$, then $t_{\infty}=1$, as a result of the strict monotonicity of $\rho_{\infty}$. Now let us fix $p\in (n/\alpha, \infty)$ and define the continuous function \[ \begin{split} \rho_p(t) \, :=\, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(tv, t \mathrm{D} v)^p \, \mathrm{d} \mathcal{L}^n , \ \ \ t\geq 0. \end{split} \] Since $g(0,0)=0$, it follows that $\rho_p(0)=0$ and that \[ \rho_p(t) \, =\frac{1}{\mathcal{L}^n(\Omega)}\int_{\{(v,\mathrm{D} v)\ne (0,0)\}} g(tv, t\mathrm{D} v)^p \, \mathrm{d} \mathcal{L}^n. \] By Morrey's theorem and our assumptions, we have that $v\in C^1(\overline{\Omega};\mathbb{R}^N)\setminus\{0\}$, therefore $\mathcal{L}^n\big( \{(v,\mathrm{D} v)\ne (0,0)\} \big)>0$. Consider the family of functions $\{g(tv, t\mathrm{D} v)^p \}_{t>0}$, defined on $\{(v,\mathrm{D} v)\ne (0,0)\}\subseteq \Omega$. By the monotonicity of $s\mapsto g(s\eta, sP)$ on $(0,\infty)$ for $(\eta,P)\neq (0,0)$, for $s<t$ we have \[ \text{$g(sv, s\mathrm{D} v)^p \leq g(tv, t\mathrm{D} v)^p$, \ on $\{(v,\mathrm{D} v)\ne (0,0)\}$}. \] Since $g(tv, t\mathrm{D} v)^p \to\infty$ pointwise on $\{(v,\mathrm{D} v)\ne (0,0)\}$ as $t \to\infty$, by the monotone convergence theorem, we infer that \[ \int_{\{(v,\mathrm{D} v)\ne (0,0)\}} g(tv, t\mathrm{D} v)^p \, \mathrm{d} \mathcal{L}^n \longrightarrow \infty, \] as $t\to\infty$. As a consequence, $\rho_p(t)\to \infty$ as $t\to \infty$. Since $\rho_p(0)=0$, by the intermediate value theorem there exists $t_p>0$ such that $\rho_p(t_p)=1$, namely \[ \big\|g(t_pv, t_p\mathrm{D} v)\big\|_{L^p(\Omega)}=1. \] For the sake of contradiction, suppose that $ t_p \not\to t_{\infty}$, as $p\to \infty$. In this case, there exists a subsequence $(t_{p_j})_1^{\infty}\subseteq (0, \infty)$ and $t_0\in [0, t_{\infty}) \cup (t_{\infty}, \infty]$ such that $t_{p_j}\to t_0$ as $j\to \infty$. Further, $(t_{p_j})_1^{\infty}$ can be assumed to be either monotonically increasing or decreasing. We first prove that $t_0$ is finite. If $t_0=\infty$, then the sequence $(t_{p_j})_1^{\infty}$ can be selected to be monotonically increasing.
Therefore, by arguing as before, $g(t_{p_j}v, t_{p_j} \mathrm{D} v)\nearrow \infty$ as $j\to \infty$, pointwise on $\{(v,\mathrm{D} v)\ne (0,0)\}$, and the monotone convergence theorem provides the contradiction \[ 1 \, =\, \lim_{j\to \infty}{-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(t_{p_j}v, t_{p_j} \mathrm{D} v)^{p_j} \ \mathrm{d} \mathcal{L}^n\, =\, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} \lim_{j\to \infty} g(t_{p_j}v, t_{p_j} \mathrm{D} v)^{p_j} \ \mathrm{d} \mathcal{L}^n\, =\, \infty. \] Consequently, we have that $t_0\in[0, t_{\infty}) \cup (t_{\infty}, \infty)$. Since $(t_{p_j}v, t_{p_j}\mathrm{D} v)\to (t_0 v, t_0 \mathrm{D} v)$ uniformly on $\overline{\Omega}$ as $j\to \infty$, we calculate \[ \begin{split} 1 \, & =\, \big\|g(t_{p_j}v, t_{p_j}\mathrm{D} v)\big\|_{L^{p_j}(\Omega)} \\ &=\,\big\|g(t_0v, t_0\mathrm{D} v)\big\|_{L^{p_j}(\Omega)}\, +\, \mathrm o(1)_{j\to\infty} \\ &=\, \big\|g(t_0v, t_0\mathrm{D} v)\big\|_{L^{\infty}(\Omega)} \, +\,\mathrm o(1)_{j\to\infty} \\ &=\, \rho_{\infty}(t_0) \, +\,\mathrm o(1)_{j\to\infty}. \end{split} \] By passing to the limit as $j\to \infty$, we obtain a contradiction if $t_\infty\ne t_0$, because $\rho_{\infty}$ is a strictly increasing function and $\rho_{\infty}(t_\infty)=1$. In conclusion, $t_p\to t_{\infty}$ as $p\to\infty$. \qed Utilising the above result we can now show existence for the approximating minimisation problem for $p<\infty$. \begin{lemma}\label{lemma3} For any $p>n/ \alpha$, the minimisation problem \eqref{1.9} has a solution $u_p \in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$. \end{lemma} \noindent \textbf{Proof of Lemma} \ref{lemma3}. Let us fix $p\in(n/ \alpha, \infty)$ and $v_0 \in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)$ where $v_0\, \slashed{\equiv}\, 0$. By application of Lemma \ref{lemma2}, there exists $t_p>0$ such that $\|g(t_p v_0, t_p\mathrm{D} v_0)\|_{L^p(\Omega)}=1$, implying that $t_p v_0$ is indeed an element of the admissible class of \eqref{1.9}. Hence, we deduce that the admissible class is non-empty. Further, by assumption \eqref{1.3}(b), $f$ is (Morrey) quasiconvex. We now confirm that $f^p$ is also a (Morrey) quasiconvex function, as a consequence of Jensen's inequality: for any fixed $X\in \smash{\mathbb{R}^{N \times n^2}_s}$ and any $\phi \in W^{2,\infty}_0(\Omega;\mathbb{R}^N)$, we have \[ f^p(X)\leq \bigg( \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f(X+\mathrm{D}^2 \phi) \ \mathrm{d} \mathcal{L}^n \!\bigg)^{\!p}\leq \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f(X+\mathrm{D}^2 \phi)^p \ \mathrm{d} \mathcal{L}^n. \] By assumption \eqref{1.3}(d), we have for some new $C_5(p),C_6(p)>0$ that \[ f(X)^p \, \leq \, C_5(p)|X|^{\alpha p}+C_6(p), \] for any $X\in \smash{\mathbb{R}^{N \times n^2}_s}$. Moreover, by \cite[Theorem 3.6]{Zh} we have that the functional $v \mapsto \|f(\mathrm{D}^2v)\|_{L^{p}(\Omega)}$ is weakly lower semi-continuous on $W^{2,\alpha p}(\Omega;\mathbb{R}^N)$ and therefore the same is true over the closed subspace $W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$. Let $(u_i)_{1}^{\infty}$ be a minimising sequence for \eqref{1.9}. As $f\geq 0$, it is clear that $\inf_{i\in \mathbb{N}} \|f(\mathrm{D}^2u_i)\|_{L^{p}(\Omega)}\geq 0$. Since the admissible class is non-empty, the infimum is finite.
Additionally, by \eqref{1.3}(d), we have the bound \[ \begin{split} \inf_{i\in \mathbb{N}} \|f(\mathrm{D}^2u_i)\|_{L^{p}(\Omega)} &\leq \big\|f\big(\mathrm{D}^2(t_pv_0)\big)\big\|_{L^{p}(\Omega)} \\ &\leq \big\|C_5\big|t_p\mathrm{D}^2 v_0\big|^{\alpha}+C_6\big\|_{L^{\infty}(\Omega)} \\ &\leq C_5(t_p)^\alpha \|\mathrm{D}^2v_0\|^\alpha_{L^{\infty}(\Omega)}+C_6 \\ &< \infty. \end{split} \] Now we show that the functional is coercive in $W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$, arguing separately for either case of boundary conditions. By assumption \eqref{1.3}(d) and the Poincar\'e inequality, for any $u\in W^{2,\alpha p}_{\mathrm C}(\Omega;\mathbb{R}^N)$ (satisfying $|u|=|\mathrm{D} u|=0$ on $\partial\Omega$), we have \[ \begin{split} \bigg( \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} \big|f(\mathrm{D}^2u)+C_3 \big|^p \, \mathrm{d} \mathcal{L}^n\bigg)^{\!\frac{1}{p}} \geq C_4\bigg( \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}|\mathrm{D}^2 u|^{\alpha p} \, \mathrm{d} \mathcal{L}^n \bigg) ^{\!\frac{1}{p}} \ \geq C_4'\|u\|^{\alpha}_{W^{1,\alpha p}(\Omega)}, \end{split} \] for a new constant $C_4'=C_4(p)>0$. Hence, for any $u\in W^{2,\alpha p}_{\mathrm C}(\Omega;\mathbb{R}^N)$, \beq \label{coercivity} \|f(\mathrm{D}^2 u)\|_{L^p(\Omega)} \, \geq \, C_4'\big(\|u\|_{W^{2,\alpha p}(\Omega)}\big)^{\alpha} -C_3. \eeq The above estimate is also true when $u\in W^{2,\alpha p}_{\mathrm H}(\Omega;\mathbb{R}^N)$, but since in this case we have only $|u|=0$ on $\partial\Omega$, it requires an additional justification. By the Poincar\'e-Wirtinger inequality involving averages, for any $u\in W^{2,\alpha p}_{\mathrm H}(\Omega;\mathbb{R}^N)$ we have \[ \left\|\mathrm{D} u- {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} \mathrm{D} u \ \mathrm{d} \mathcal{L}^n \right\|_{L^{\alpha p}(\Omega)} \,\leq \, C\|\mathrm{D}^2u\|_{L^{\alpha p}(\Omega)}, \] where $C=C(\alpha,p,\Omega)>0$ is a constant. Since $|u|=0$ on $\partial\Omega$, by the Gauss-Green theorem we have \[ \int_{\Omega} \mathrm{D} u \ \mathrm{d} \mathcal{L}^n= \int_{\partial \Omega} u\otimes \hat{n} \ \mathrm{d} \mathcal{H}^{n-1}=0, \] where $\mathcal{H}^{n-1}$ denotes the $(n-1)$-dimensional Hausdorff measure. In conclusion, \[ \big\|\mathrm{D} u \|_{L^{\alpha p}(\Omega)} \, \leq \, C \|\mathrm{D}^2u\|_{L^{\alpha p}(\Omega)}, \] for any $u\in W^{2,\alpha p}_{\mathrm H}(\Omega;\mathbb{R}^N)$. The above estimate, together with the standard Poincar\'e inequality applied to $u$ itself, allows us to infer that \eqref{coercivity} holds for any $u\in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$ in both cases of boundary conditions. Returning to our minimising sequence, by standard compactness results there exists $u_p \in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N) $ such that $ u_i \weak u_p$ in $W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$, as $i\to\infty$ along a subsequence of indices. Additionally, by the Morrey estimate we have that $u_i\longrightarrow u_p$ in $ C^1(\overline{\Omega}; \mathbb{R}^N)$ as $i\to\infty$, along perhaps a further subsequence. Since $u\mapsto \|g(u, \mathrm{D} u)\|_{L^p(\Omega)}$ is weakly continuous on $W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$, the admissible class is weakly closed in $W^{2,\alpha p}(\Omega;\mathbb{R}^N)$ and hence we may pass to the limit in the constraint. By weak lower semicontinuity of the functional, it follows that a minimiser $u_p$ which satisfies \eqref{1.9} does indeed exist.
\qed Now we describe the necessary conditions (Euler-Lagrange equations) that the approximating minimiser $u_p$ must satisfy. These equations will involve a Lagrange multiplier, emerging from the constraint $\|g(\cdot, \mathrm{D} (\cdot))\|_{L^p(\Omega)}=1$. \begin{lemma}\label{lemma4} For any $p>n/ \alpha$, let $u_p$ be the minimiser of \eqref{1.9} procured by Lemma \ref{lemma3}. Then, there exists $\lambda_p \in \mathbb{R}$ such that the pair $(u_p, \lambda_p)$ satisfies the following PDE system \[ \begin{split} &\, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1} \partial f(\mathrm{D}^2u_p): \mathrm{D}^2 \phi \, \mathrm{d} \mathcal{L}^n \\ &= \, \lambda_p \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1}\Big(\partial_\eta g (u_p, \mathrm{D} u_p) \cdot \phi \, +\, \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} \phi \Big) \, \mathrm{d} {\mathcal{L}}^{n}, \end{split} \] for all test maps $\phi\in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N).$ \end{lemma} In particular, it follows that in both cases $u_p$ is a weak solution in $W^{2,\alpha p}(\Omega;\mathbb{R}^N)$ to \beq \label{PDEinLp} \left\{ \ \ \begin{split} &\, \mathrm{D}^2 : \Big(f(\mathrm{D}^2 u_p)^{p-1} \partial f(\mathrm{D}^2u_p)\Big) \\ &= \, \lambda_p \, \bigg[ g(u_p, \mathrm{D} u_p)^{p-1}\partial_\eta g (u_p, \mathrm{D} u_p) \, -\, \mathrm{div} \Big(g(u_p, \mathrm{D} u_p)^{p-1}\partial_P g (u_p, \mathrm{D} u_p) \Big) \bigg], \end{split} \right. \eeq where we have used the notation $\mathrm{D}^2 : F = \sum_{i,j=1}^n \mathrm{D}^2_{ij} F_{ij}$, when $F \in C^2(\Omega;\mathbb{R}^{n \times n})$, which is equivalent to the double divergence (applied once column-wise and once row-wise). Note that in the case of hinged boundary data, an additional natural boundary condition arises (since $\mathrm{D} u$ is free on $\partial\Omega$); we will not make any particular use of this extra information in the sequel, therefore we refrain from discussing it explicitly. \noindent \textbf{Proof of Lemma} \ref{lemma4}. The result follows by standard results on Lagrange multipliers in Banach spaces (see e.g.\ \cite[p.\ 278]{Z}), by utilising assumption \eqref{1.3}(d), which guarantees that the functional is Gateaux differentiable. \qed Now we establish some further results regarding the family of eigenvalues. \begin{lemma}\label{lemma5} Consider the family of pairs of eigenvectors-eigenvalues $\{(u_p,\lambda_p)\}_{p>n/ \alpha}$, given by Lemma \ref{lemma4}. Then, for any $p>n/\alpha$, there exists $\Lambda_p>0$ such that \[ \lambda_p=\big(\Lambda_p\big)^p >0. \] Further, by setting \[ L_p \, :=\, \big\|f(\mathrm{D}^2 u_p)\big\|_{L^p(\Omega)}, \] we have the bounds \[ 0<\, \bigg(\frac{C_1}{C_8}\bigg)^{\frac{1}{p}}L_p \,\leq \, \Lambda_p \, \leq \, \bigg(\frac{C_2}{C_7}\bigg)^{\frac{1}{p}}L_p. \] \end{lemma} \noindent \textbf{Proof of Lemma} \ref{lemma5}. We begin by showing that $L_p>0$, namely the infimum over the admissible class of the $p$-approximating minimisation problem is strictly positive, owing to the constraint and our assumptions \eqref{1.3}-\eqref{1.4}. Indeed, there is only one map $u\in W^{2,\alpha p}_{\mathrm B}(\Omega;\mathbb{R}^N)$ for which $\|f(\mathrm{D}^2u)\|_{L^p(\Omega)}=0$, namely $u_0\equiv 0$ (as $f$ vanishes only at the origin, any such $u$ satisfies $\mathrm{D}^2u=0$ a.e., hence it is affine, and the boundary conditions then force $u\equiv 0$), but this is not an element of the admissible class since $\|g(u_0, \mathrm{D} u_0)\|_{L^p(\Omega)}=0$.
Now consider the Euler-Lagrange equations in Lemma \ref{lemma4} and select $\phi:=u_p$, to obtain \[ \begin{split} & \,{-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1} \partial f(\mathrm{D}^2u_p): \mathrm{D}^2 u_p \, \mathrm{d} \mathcal{L}^n \\ &= \, \lambda_p \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1}\Big(\partial_\eta g (u_p, \mathrm{D} u_p) \cdot u_p + \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} u_p \Big) \, \mathrm{d} {\mathcal{L}}^{n}. \end{split} \] As $f,g\geq 0$ we can manipulate the respective assumptions \eqref{1.3}(c) and \eqref{1.4}(c) to produce the following bounds: \[ \begin{split} C_1{-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p}\, \mathrm{d} \mathcal{L}^n \,& \leq \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f(\mathrm{D}^2u_p)^{p-1} \partial f(\mathrm{D}^2u_p): \mathrm{D}^2 u_p \, \mathrm{d} \mathcal{L}^n \\ &\leq \, C_2{-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p}\, \mathrm{d} \mathcal{L}^n, \end{split} \] \[ \begin{split} C_7 \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p}\, \mathrm{d} \mathcal{L}^n &\leq \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1} \Big(\partial_\eta g (u_p, \mathrm{D} u_p) \cdot u_p \,+ \\ &\ \ \ \ +\, \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} u_p \Big) \, \mathrm{d} {\mathcal{L}}^{n}\\ &\leq \, C_8 \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p}\, \mathrm{d} \mathcal{L}^n. \end{split} \] The above two estimates, combined with the Euler-Lagrange equations, imply that $\lambda_p>0$. Hence, we may define $\Lambda_p:=(\lambda_p)^{\frac{1}{p}}>0$. We will now obtain the upper and lower bounds. We determine the lower bound as follows: \[ \begin{split} C_1(L_p)^p \, &=\, C_1 \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p}\, \mathrm{d} \mathcal{L}^n \\ &\leq \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f^{p-1}(\mathrm{D}^2u_p) \partial f(\mathrm{D}^2u_p): \mathrm{D}^2 u_p \, \mathrm{d} \mathcal{L}^n \\ &= \, \lambda_p \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1}\Big(\partial_\eta g (u_p, \mathrm{D} u_p) \cdot u_p \, + \, \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} u_p \Big) \, \, \mathrm{d} {\mathcal{L}}^{n} \\ &\leq \, \lambda_p C_8. \end{split} \] Hence, \[ \bigg(\frac{C_1}{C_8}\bigg)^{\frac{1}{p}}L_p\leq (\lambda_p)^{\frac{1}{p}}=\Lambda_p. \] The upper bound is determined analogously, by reversing the direction of the inequalities. Combining both bounds, we obtain the desired estimate. \qed \begin{proposition}\label{Proposition 6} There exists $(u_{\infty}, \Lambda_{\infty})\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)\times (0, \infty)$ such that, along a sequence $(p_j)_1^{\infty}$ of exponents, we have \[ \left\{ \ \ \begin{array}{ll} u_{p} \longrightarrow u_\infty, & \ \ \text{in } C^1 \big(\overline{\Omega};\mathbb{R}^N \big), \\ \mathrm{D}^2 u_{p} \, -\!\!\!\!\!-\!\!\!\!\rightharpoonup\mathrm{D}^2 u_\infty, & \ \ \text{in } L^q (\Omega; \mathbb{R}_s^{N\times n^2}), \ \text{for all} \ \ q\in (1,\infty), \\ \Lambda_p \longrightarrow \Lambda_{\infty}, & \ \ \text{in } [0,\infty), \end{array} \right. \] as $p_j\to\infty$. Additionally, $u_{\infty}$ solves the minimisation problem \eqref{1.1} and $\Lambda_{\infty}$ is given by \eqref{1.6}. Finally, $\Lambda_{\infty}$ satisfies the uniform bounds \eqref{1.7}.
\end{proposition} \noindent \textbf{Proof of Proposition} \ref{Proposition 6}. Fix $p>n/\alpha, q\leq p$ and a map $v_0\in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N)\setminus \{0\}$. Then, by Lemma \ref{lemma2} there exists $(t_p)_{p\in (n/\alpha, \infty]} \subseteq (0, \infty)$ such that $t_p \to t_{\infty}$ as $p\to \infty$ and satisfying $\|g(t_p v_0, t_p \mathrm{D} v_0)\|_{L^p(\Omega)}=1$ for all $p\in (n/\alpha, \infty]$. By H\"older's inequality and minimality, we have the following estimate \[ \begin{split} \big\|f(\mathrm{D}^2u_p) \big\|_{L^q(\Omega)} \, &\leq \, \big\|f(\mathrm{D}^2u_p) \big\|_{L^p(\Omega)} \\ &\leq \, \big\|f(t_p\mathrm{D}^2v_0) \big\|_{L^p(\Omega)} \\ & \leq\, \big\| f(t_p\mathrm{D}^2v_0) \big\|_{L^{\infty}(\Omega)} \\ &\leq K\,+\, \big\| f(t_\infty\mathrm{D}^2v_0) \big\|_{L^{\infty}(\Omega)}\\ & < \, \infty, \end{split} \] for some $K>0$. By \eqref{1.3}(d), we have the bound $f^q(X)\geq C_4(q)|X|^{\alpha q}- C_3(q)$ for some constants $C_3(q), C_4(q)>0$ and all $X\in \mathbb{R}^{N \times n^2}_s.$ By the previous bound, we conclude that \[ \sup_{p\geq q}\|\mathrm{D}^2u_p\|_{L^{\alpha q}(\Omega)}\leq C(q)< \infty, \] for some $q$-dependent constant. By arguing as in the proof of Lemma \ref{lemma3} through the use of Poincar\'e inequalities, we can conclude in both cases of boundary conditions with the bound \[ \sup_{p\geq q}\|u_p\|_{W^{2, \alpha q}(\Omega)}\leq C'(q)< \infty, \] for a new $q$-dependent constant $C'(q)>0$. Standard compactness in Sobolev spaces and a diagonal sequence argument imply the existence of a mapping \[ u_\infty \in \bigcap_{n/\alpha<p<\infty} W^{2, \alpha p}_{\mathrm B}(\Omega; \mathbb{R}^N) \] and a subsequence $(p_j)_1^\infty$ such that the desired modes of convergence hold true as $p_j\to \infty$ along this subsequence of indices. Fix a map $v\in W^{2, \infty}_{\mathrm B}(\Omega; \mathbb{R}^N)$ satisfying the required constraint, namely $\|g(v, \mathrm{D} v) \|_{L^{\infty}(\Omega)}=1$. In view of Lemma \ref{lemma2}, there exists $(t_p)_{p\in (n/\alpha, \infty)}\subseteq (0, \infty)$ satisfying that $t_p\to 1$ as $p\to \infty$, and additionally $\|g(t_p v, t_p \mathrm{D} v)\|_{L^p(\Omega)}=1$ for all $p>n/\alpha$. By H\"older's inequality, the definition of $L_p$ and minimality, we have \[ \big\|f(\mathrm{D}^2 u_p)\big\|_{L^q(\Omega)} \, \leq \, \big\|f(\mathrm{D}^2 u_p)\big\|_{L^p(\Omega)} \, = \, L_p \, \leq \, \big\|f(t_p\mathrm{D}^2 v)\big\|_{L^p(\Omega)}, \] for any such $v$. By the weak lower semi-continuity of the functional on $W^{2, \alpha q}_{\mathrm B}(\Omega; \mathbb{R}^N)$, we may let $p_j \to\infty$ to obtain \[ \begin{split} \big\|f(\mathrm{D}^2u_{\infty})\big\|_{L^q(\Omega)} \, &\leq \, \liminf_{p_j\to \infty}L_p \\ &\leq \, \limsup_{p_j\to \infty} L_p \\ &\leq \, \limsup_{p_j \to \infty} \|f(t_{p_j}\mathrm{D}^2 v)\|_{L^p(\Omega)} \\ & =\, \|f(\mathrm{D}^2 v)\|_{L^{\infty}(\Omega)}. \end{split} \] Now we may let $q\to \infty$ in the above bound, hence producing \[ \big\|f(\mathrm{D}^2 u_{\infty})\big\|_{L^{\infty}(\Omega)} \, \leq \, \liminf_{p_j\to \infty}L_p \, \leq \, \limsup_{p_j\to \infty} L_p \, \leq \, \|f(\mathrm{D}^2 v)\|_{L^{\infty}(\Omega)}, \] for all mappings $v\in W^{2, \infty}_{\mathrm B}(\Omega; \mathbb{R}^N)$ satisfying the constraint $\|g(v, \mathrm{D} v)\|_{L^{\infty}(\Omega)}=1$.
If we additionally show that in fact $u_{\infty}$ satisfies the constraint in \eqref{1.1}, then the above estimate shows both that it is the desired minimiser (by choosing $v:=u_\infty$), and also that the sequence $(L_{p_j})_1^\infty$ converges to the infimum. Now we show that this is indeed the case. In view of assumption \eqref{1.3}(d), the previous estimate implies also that $\mathrm{D}^2u_{\infty}\in \smash{L^{\infty}\big(\Omega; \mathbb{R}_s^{N \times n^2}\big)}$, which together with Poincar\'e inequalities (as in the proof of Lemma \ref{lemma3}) shows that in fact $u_{\infty} \in W^{2, \infty}_{\mathrm B}(\Omega; \mathbb{R}^N)$. By the continuity of the function $g$ and the fact that $u_{p} \longrightarrow u_\infty$ in $C^1 \big(\overline{\Omega};\mathbb{R}^N \big)$, we have \[ \begin{split} 1&= \, \|g(u_p, \mathrm{D} u_p)\|_{L^p(\Omega)}\\ &= \, \|g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^p(\Omega)}\,+\, \|g(u_p, \mathrm{D} u_p)\|_{L^p(\Omega)}-\|g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^p(\Omega)}\\ &= \, \|g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^p(\Omega)}\,+ \,\mathrm O\Big(\|g(u_p, \mathrm{D} u_p)- g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^{\infty}(\Omega)}\Big)\\ &\!\!\longrightarrow \|g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^{\infty}(\Omega)}, \end{split} \] as $p_j\to \infty$. Consequently, $u_{\infty}$ satisfies the constraint, and therefore lies in the admissible class of \eqref{1.1}. Since $v$ was arbitrary in the energy bound, we conclude that $u_{\infty}$ in fact solves \eqref{1.1}. Let us now define \[ \Lambda_{\infty}:=\big\|f(\mathrm{D}^2 u_{\infty})\big\|_{L^{\infty}(\Omega)}. \] We now show that $\Lambda_{\infty}>0$. Indeed, by our assumptions \eqref{1.3}-\eqref{1.4}, there is only one map in $W^{2,\infty}(\Omega;\mathbb{R}^N)$ satisfying $\|f(\mathrm{D}^2u_0)\|_{L^{\infty}(\Omega)}=0$ and $|u_0|\equiv 0$ on $\partial\Omega$, namely the trivial map $u_0\equiv 0$, but $u_0$ is not contained in the admissible class of \eqref{1.1} because $\|g(u_0, \mathrm{D} u_0)\|_{L^{\infty}(\Omega)}=0$. We now show that $\Lambda_p \longrightarrow \Lambda_{\infty}$ as $p_j \to \infty$. By our earlier energy estimate, we have $L_p\longrightarrow \Lambda_{\infty}$ as $p_j \to \infty$. By Lemma \ref{lemma5}, we have \[ 0< \, \lim_{p_j\to \infty} \bigg(\frac{C_1}{C_8}\bigg)^{\frac{1}{p}}L_p \, \leq \, \lim_{p_j\to \infty} \Lambda_p \, \leq \, \lim_{p_j\to \infty} \bigg(\frac{C_2}{C_7}\bigg)^{\frac{1}{p}}L_p, \] and therefore $\Lambda_p \longrightarrow \Lambda_{\infty}$ as $p_j \to \infty$. To complete the proof we must deduce the claimed bounds for $\Lambda_{\infty}$. We first establish the lower bound.
By utilising the Poincar\'e and Poincar\'e-Wirtinger inequalities (recall the proof of Lemma \ref{lemma3}) and that $g(0,0)=0$, we estimate \[ \begin{split} 1&= \, \big\|g(u_{\infty}, \mathrm{D} u_{\infty}) \big\|_{L^{\infty}(\Omega)} \\ &\leq \, \mathrm{diam}(\Omega) \big\|\mathrm{D} (g(u_{\infty}, \mathrm{D} u_{\infty})) \big\|_{L^{\infty}(\Omega)} \\ &\leq \, \mathrm{diam}(\Omega)\Big( \big\|\partial_\eta g (u_{\infty}, \mathrm{D} u_{\infty})\mathrm{D} u_{\infty} \big\|_{L^{\infty}(\Omega)} \, +\, \big\| \partial_P g (u_{\infty}, \mathrm{D} u_{\infty})\mathrm{D}^2 u_{\infty} \big\|_{L^{\infty}(\Omega)} \Big) \\ &\leq \, \mathrm{diam}(\Omega)\Big( \big\|\partial_\eta g \big\|_{L^{\infty} ((u_{\infty}, \mathrm{D} u_{\infty})(\overline{\Omega}))}\|\mathrm{D} u_{\infty}\|_{L^{\infty}(\Omega)} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \|\partial_P g \|_{L^{\infty} ((u_{\infty}, \mathrm{D} u_{\infty})(\overline{\Omega} ))}\|\mathrm{D}^2 u_{\infty}\|_{L^{\infty}(\Omega)}\Big) \\ &\leq \, \| \mathrm{D}^2 u_{\infty}\|_{L^{\infty}(\Omega)} \mathrm{diam}(\Omega) \Big( C(\infty,\Omega) \|\partial_\eta g \|_{L^{\infty}((u_{\infty}, \mathrm{D} u_{\infty})(\overline{\Omega} ))} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ + \, \|\partial_P g \|_{L^{\infty}((u_{\infty}, \mathrm{D} u_{\infty})(\overline{\Omega})}\Big), \end{split} \] where $C(\infty,\Omega)>0$ is the maximum of the Poincar\'e and the Poincar\'e-Wirtinger inequality constants on $\Omega$ for $p=\infty$ (with the former being equal to $\mathrm{diam}(\Omega)$). As $g\geq 0$ and $\|g(u_{\infty}, \mathrm{D} u_{\infty})\|_{L^{\infty}(\Omega)}=1$, we conclude that $0\leq g(u_{\infty}, \mathrm{D} u_{\infty})\leq 1$ on $\overline{\Omega}$. Hence $(u_{\infty}, \mathrm{D} u_{\infty})(\overline{\Omega}) \subseteq \{0\leq g \leq 1\}=\{g \leq 1\}$. Thus \[ \begin{split} 1&\leq \| \mathrm{D}^2 u_{\infty}\|_{L^{\infty}(\Omega)} \mathrm{diam}(\Omega) \Big( C(\infty,\Omega) \|\partial_\eta g \|_{L^{\infty}(\{ g \leq 1\})} \, +\, \|\partial_P g \|_{L^{\infty}(\{g \leq 1\})}\Big) \end{split} \] Rearranging assumption \eqref{1.3}(d), we may write $|X|\leq C_4^{-\frac{1}{\alpha}}(f(X)+C_3)^{\frac{1}{\alpha}}$, for any $X \in \mathbb{R}^{N \times n^2}_s$. Combining this inequality with the previous bound, we deduce \[ \begin{split} C_4^{\frac{1}{\alpha}} \leq \, \Big(\big\|f(\mathrm{D}^2 u_{\infty})\big\|_{L^{\infty}(\Omega)} +\, C_3\Big)^{\!\frac{1}{\alpha}} \mathrm{diam}(\Omega) \Big( & C(\infty,\Omega) \|\partial_\eta g \|_{L^{\infty}(\{ g \leq 1\})} \\ & + \, \|\partial_P g \|_{L^{\infty}(\{g \leq 1\})}\Big), \end{split} \] which leads directly to the claimed lower bound for the eigenvalue. Now we establish the upper bound for $\Lambda_\infty$. Since $\Omega$ is by assumption a bounded domain with $C^2$ boundary, by standard results (see e.g.\ \cite[Sec.\ 14.6]{GT}), the distance function \[ d_\Omega \equiv \mathrm{dist}(\cdot,\partial\Omega) \ : \ \ \mathbb{R}^n \longrightarrow \mathbb{R}, \] which is in $W^{1,\infty}_{\mathrm{loc}}(\mathbb{R}^n)$, is also $C^2$ on an inner tubular neighbourhood of $\partial\Omega$, namely there exists $\varepsilon_0 \in (0,1)$ such that \[ d_\Omega \in C^2(\overline{\Omega^{\varepsilon_0}}), \ \ \ \Omega^\varepsilon := \{d_\Omega <\varepsilon\} \cap \Omega. \] Let us also for convenience symbolise $\Omega_\varepsilon := \{d_\Omega >\varepsilon\} \cap \hspace{1pt} \Omega$. 
Let us also fix $k\in\{1,2\}$, a unit vector $e\in \mathbb{R}^N$ and $\zeta \in C^2(\mathbb{R}^n)$ with $\zeta \equiv 0$ on $\Omega_{\varepsilon_0}$. Then, for any $t_0>0$, the map $\xi_0 := t_0 (d_\Omega)^k\zeta e$ satisfies \[ \xi_0 \, \in \, C^2(\overline{\Omega};\mathbb{R}^N). \] Since $\mathrm{d}_\Omega =0$ on $\partial\Omega$ and also $\mathrm{D}(\mathrm{d}^2_\Omega)=0$ on $\partial\Omega$, it follows that $\xi_0 \in W^{2,\infty}_{\mathrm H}(\Omega;\mathbb{R}^N)$ if $k=1$, whilst $\xi_0 \in W^{2,\infty}_{\mathrm C}(\Omega;\mathbb{R}^N)$ if $k=2$. We will consider both cases simultaneously and declare this as \[ \xi_0 \in W^{2,\infty}_{\mathrm B}(\Omega;\mathbb{R}^N). \] By Lemma \ref{lemma2}, we can adjust the constant $t_0>0$ to arrange \[ \big\| g(\xi_0,\mathrm{D} \xi_0)\big\|_{L^\infty(\Omega)} = 1. \] Hence, $\xi_0$ is in the admissible class of the minimisation problem \eqref{1.1}. By minimality and assumption \eqref{1.3}, we have the estimate \beq \label{2.3} \Lambda_\infty \, \leq\, C_5 (t_0)^\alpha \Big(\big\| \mathrm{D}^2(d_\Omega^k \zeta)\big\|_{L^\infty(\Omega^{\varepsilon_0})}\Big)^\alpha +\, C_6. \eeq By a direct computation, we have \beq \label{2.4} \left\{ \begin{split} \mathrm{D}^2(d_\Omega^k \zeta) \, &= \, k\big[ (k-1)\mathrm{D} d_\Omega \otimes \mathrm{D} d_\Omega + d_\Omega^{k-1}\mathrm{D}^2 d_\Omega \big] \zeta \, +\, d_\Omega^k \mathrm{D}^2 \zeta \\ &\ \ \ + \, k d_\Omega^{k-1} \big( \mathrm{D} d_\Omega\otimes \mathrm{D} \zeta \, +\, \mathrm{D} \zeta \otimes \mathrm{D} d_\Omega \big), \end{split} \right. \eeq on $\overline{\Omega}$. For any $x\in \overline{\Omega^{\varepsilon_0}}$, let us set $\mathrm{P}_{\Omega}(x) := \mathrm{Proj}_{\partial\Omega}(x)$. Then, by \cite[Sec.\ 14.6]{GT}, it follows that $|x- \mathrm{P}_{\Omega}(x)|=d_\Omega(x)$, and we also have the next estimates \beq \label{2.5} \left\{ \ \ \begin{split} \| d_\Omega \|_{L^\infty(\Omega^{\varepsilon_0})} &\leq \varepsilon_0, \phantom{\Big|} \\ \| \mathrm{D} d_\Omega \|_{L^\infty(\Omega^{\varepsilon_0})} &\leq 1, \\ \big\| \mathrm{D}^2 d_\Omega \big\|_{L^\infty(\Omega^{\varepsilon_0})} &\leq \sum_{i=1}^{n-1} \left\| \frac{ \kappa_i \circ \mathrm{P}_{\Omega} }{ 1 - ( \kappa_i \circ \mathrm{P}_{\Omega})d_\Omega } \right\|_{L^\infty(\Omega^{\varepsilon_0})}, \end{split} \right. \eeq where $\{\kappa_1,...,\kappa_{n-1}\}$ are the principal curvatures of $\partial\Omega$. By \eqref{2.3}-\eqref{2.5} we have the estimate \beq \label{2.6} \begin{split} \Lambda_\infty \, \leq\, C_5 (2t_0)^\alpha \Bigg( & \| \mathrm{D}^2 \zeta \|_{L^\infty(\Omega)} \, +\, \| \mathrm{D} \zeta \|_{L^\infty(\Omega)} \\ & + \| \zeta \|_{L^\infty(\Omega)} \left( 1+ \sum_{i=1}^{n-1} \left\| \frac{ \kappa_i \circ \mathrm{P}_{\Omega} }{ 1 - ( \kappa_i \circ \mathrm{P}_{\Omega})d_\Omega } \right\|_{L^\infty(\Omega^{\varepsilon_0})} \right) \!\!\Bigg)^{\!\alpha} +\, C_6. \end{split} \eeq It remains to select an appropriate function $\zeta$ in order to estimate its derivatives in terms of the geometry of $\Omega$, and to obtain an estimate for $t_0$. For the former, we argue as follows. Let $(\eta^\delta)_{\delta>0}$ be the family of standard mollifying kernels, as e.g.\ in \cite{KV}. We select \[ \zeta := \eta^{\varepsilon_0} * (\chi_{\mathbb{R}^n\setminus\Omega}), \] which is the regularisation of the characteristic of the complement of $\Omega$. 
It follows that this function satisfies the initial requirements, and additionally \[ \left\{ \begin{split} \mathrm{D} \zeta &= \eta^{\varepsilon_0} * (\mathrm{D} \chi_{\mathbb{R}^n\setminus\Omega}) = \eta^{\varepsilon_0} * \big(\mathcal{H}^{n-1}\text{\LARGE$\llcorner$}_{\partial\Omega} \mathrm{D} \mathrm{d}_\Omega \big), \\ \mathrm{D}^2 \zeta &= \mathrm{D}\eta^{\varepsilon_0} * (\mathrm{D} \chi_{\mathbb{R}^n\setminus\Omega}) = \frac{1}{\varepsilon_0}(\mathrm{D}\eta)^{\varepsilon_0} * \big(\mathcal{H}^{n-1}\text{\LARGE$\llcorner$}_{\partial\Omega} \mathrm{D} \mathrm{d}_\Omega \big), \end{split} \right. \] by standard properties on the differentiation of BV functions (see e.g.\ \cite{EG}). Then, by Young's inequality for convolutions, we have the estimates \beq \label{2.7} \left\{ \begin{split} \| \mathrm{D}^2 \zeta \|_{L^\infty(\mathbb{R}^n)} & \leq \frac{C}{\varepsilon_0^{n+1}}\mathcal{H}^{n-1}(\partial\Omega), \\ \| \mathrm{D} \zeta \|_{L^\infty(\mathbb{R}^n)} &\leq \mathcal{H}^{n-1}(\partial\Omega), \\ \| \zeta \|_{L^\infty(\mathbb{R}^n)} &\leq 1, \phantom{\Big|} \end{split} \right. \eeq for some universal constant $C>0$. Now we work towards an estimate for $t_0$ appearing in \eqref{2.3}. By assumption \eqref{1.4}, we have that the sublevel sets $\{g\leq t\}$ are compact in $\mathbb{R}^N \times \mathbb{R}^{N\times n}$ for any $t\geq0$. Let us define $R(t)$ as the smallest radius of the $N$-dimensional ball, for which $\{g\leq t\}$ is contained into the cylinder $\bar\mathbb{B}^N_{R(t)}(0) \times \mathbb{R}^{N\times n}$: \beq \label{2.8} R(t) \,:=\, \inf \Big\{ R>0 \, : \, \{g\leq t\} \subseteq \mathbb{B}^N_R(0) \times \mathbb{R}^{N\times n}\Big\}. \eeq Then, we define a strictly increasing function $\rho : [0,\infty) \longrightarrow [0,\infty)$ by setting \beq \label{2.9} \rho(t) \,:=\, t+\sup_{0\leq s\leq t}R(s). \eeq Then, $\rho$ satisfies $\rho(0)=0$, and also that \[ \{g\leq t\} \, \subseteq \, \bar\mathbb{B}^N_{\rho(t)}(0) \times \mathbb{R}^{N\times n}, \] for any $t\geq0$. Further, by construction, \[ \Big\{(\eta, P) \in \mathbb{R}^N \times \mathbb{R}^{N\times n}\, :\ \rho^{-1}(|\eta|)\leq t \Big\}\, =\, \bar\mathbb{B}^N_{\rho(t)}(0) \times \mathbb{R}^{N\times n}. \] The above imply \[ \rho^{-1}(|\eta|) \,\leq\, g(\eta,P),\ \ \ (\eta, P) \in \mathbb{R}^N \times \mathbb{R}^{N\times n}. \] Next, since $d_\Omega^k\zeta$ vanishes on $\partial\Omega \cup \overline{\Omega_{\varepsilon_0}}$ and $g(0,0)=0$, we have \[ \begin{split} 1\, &=\, \|g(\xi_0,\mathrm{D}\xi_0)\|_{L^\infty(\Omega)} \\ &=\, \sup_{\Omega^{\varepsilon_0}}g(\xi_0,\mathrm{D}\xi_0) \\ &\geq\, \sup_{\Omega^{\varepsilon_0}}\rho^{-1}(|\xi_0|) \\ &\geq\, \sup_{\Omega^{\varepsilon_0}}\rho^{-1}\big(t_0 |d_\Omega^k \zeta|\big). \end{split} \] Since $d_\Omega\equiv \varepsilon_0/4$ on $\partial\Omega^{\varepsilon_0/4}$, and $\rho^{-1}$ is strictly increasing, the above implies \beq \label{2.10} \begin{split} 1\, &\geq\, \max_{\partial\Omega^{\varepsilon_0/4}}\rho^{-1}\big(t_0 |d_\Omega^k \zeta|\big) \\ &=\, \max_{\partial\Omega^{\varepsilon_0/4}}\rho^{-1}\Big(t_0 \Big(\frac{\varepsilon_0}{4}\Big)^k \zeta\Big) \\ &=\, \rho^{-1}\Big(t_0 \Big(\frac{\varepsilon_0}{4}\Big)^k \max_{\partial\Omega^{\varepsilon_0/4}}\zeta\Big). \end{split} \eeq Now we estimate $\max_{\partial\Omega^{\varepsilon_0/4}}\zeta$ from below. Fix $x\in \partial\Omega^{\varepsilon_0/4}$. Then, since the standard mollifying kernel $\eta$ is a radial function (see e.g.\ \cite{KV}), there exists a universal $c>0$ such that $\eta\geq c$ on $\mathbb{B}_{1/2}(0)$. 
Therefore, \[ \begin{split} \zeta(x) \, &=\, \frac{1}{\varepsilon_0^n}\int_{\mathbb{B}_{\varepsilon_0}(x)} \chi_{\mathbb{R}^n \setminus\Omega} \eta\Big( \frac{|y-x|}{\varepsilon_0}\Big) \mathrm d y \\ &\geq\, \frac{1}{\varepsilon_0^n}\int_{\mathbb{B}_{\varepsilon_0/2}(x) \setminus\Omega} \eta\Big( \frac{|y-x|}{\varepsilon_0}\Big) \mathrm d y \\ &\geq\, \frac{c}{\varepsilon_0^n}\mathcal{L}^n\big(\mathbb{B}_{\varepsilon_0/2}(x) \setminus\Omega \big), \end{split} \] for any $x\in \partial\Omega^{\varepsilon_0/4}$. Finally, since $\partial\Omega$ satisfies the exterior sphere condition, the set $\mathbb{B}_{\varepsilon_0/2}(x) \setminus\Omega$ contains a ball $\mathbb{B}_r(\bar x)$ centred at some point $\bar x$, where the maximum possible radius $\bar r$ is given by \[ \bar r\, =\, \min\bigg\{\frac{\varepsilon_0}{8} \,,\, \underset{i=1,...,n-1}{\min}\frac{1}{\|\kappa_i\|_{C^0(\partial\Omega)}} \bigg\} . \] Therefore, if $\omega(n)$ is the volume of the unit ball in $\mathbb{R}^n$, \[ \begin{split} \zeta(x) \, &\geq\, \frac{c}{\varepsilon_0^n}\mathcal{L}^n\big(\mathbb{B}_{\varepsilon_0/2}(x) \setminus\Omega \big) \\ &\geq\, \frac{c}{\varepsilon_0^n}\mathcal{L}^n(\mathbb{B}_{\bar r}(\bar x)) \\ & =\, \frac{c}{\varepsilon_0^n}\omega(n) \bar r^n \\ & =\, \frac{c \omega(n)}{\varepsilon_0^n} \min\bigg\{\Big(\frac{\varepsilon_0}{8}\Big)^n \,,\, \underset{i=1,...,n-1}{\min}\frac{1}{\big(\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n} \bigg\} \\ & =\, c \omega(n) \min\bigg\{\frac{1}{2^{3n}} \,,\, \underset{i=1,...,n-1}{\min}\frac{1}{\big(\varepsilon_0\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n} \bigg\}, \end{split} \] for any $x\in \partial\Omega^{\varepsilon_0/4}$. Hence, we have established the lower bound \beq \label{2.11} \max_{\partial\Omega^{\varepsilon_0/4}}\zeta \, \geq\, c \omega(n) \min\bigg\{\frac{1}{2^{3n}} \,,\, \underset{i=1,...,n-1}{\min}\frac{1}{\big(\varepsilon_0\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n} \bigg\}. \eeq By \eqref{2.10} and \eqref{2.11}, we infer (since $\varepsilon_0<1$ and $k\in\{1,2\}$) that \beq \label{2.12} \begin{split} t_0 \, &\leq\, \frac{4^k \rho(1)\varepsilon_0^{n-k}}{c \omega(n)}\frac{1}{\min\bigg\{\dfrac{1}{2^{3n}}\, , \, \underset{i=1,...,n-1}{\min}\dfrac{1}{\big(\varepsilon_0\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n} \bigg\}} \\ &\leq\, \frac{32 \rho(1)}{c \omega(n)}\Big(2^{3n} + \underset{i=1,...,n-1}{\max}\big(\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n \Big). \end{split} \eeq By \eqref{2.6}, \eqref{2.7}, and \eqref{2.12}, we conclude with the following upper bound for the eigenvalue: \beq \label{2.13} \begin{split} \Lambda_\infty \, \leq &\ C_6 \, + \, C_5 \left[ \frac{16\rho(1)}{c \omega(n)}\Big(2^{3n} \,+ \, \underset{i=1,...,n-1}{\max}\big(\|\kappa_i\|_{C^0(\partial\Omega)}\big)^n \Big) \right]^\alpha \centerdot \\ &\centerdot \Bigg\{ 1\,+\, \bigg(1+\frac{C}{\varepsilon_0^{n+1}} \bigg) \mathcal{H}^{n-1}(\partial\Omega) \ + \, \sum_{i=1}^{n-1} \left\| \frac{ \kappa_i \circ \mathrm{P}_{\Omega} }{ 1 - ( \kappa_i \circ \mathrm{P}_{\Omega})d_\Omega } \right\|_{L^\infty(\Omega^{\varepsilon_0})} \!\!\Bigg\}^{\!\alpha}. \end{split} \eeq The claimed estimate \eqref{1.8A} follows from \eqref{2.13} above, by recalling that in view of \eqref{2.8}-\eqref{2.9}, we have \[ \rho(1) \, =\, 1 \,+\, \sup_{0\leq t \leq 1}R(t), \] and also that the last term of \eqref{2.13} is finite at least when \[ \varepsilon_0 \, <\, \frac{1}{\underset{i=1,...,n-1}{\max} \|\kappa_i\|_{C^0(\partial\Omega)}}. \] The result ensues. 
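Before closing the proof, we record a simple illustration of the above bound (this example is ours and is not needed for the argument): if $\Omega=\mathbb{B}_R(0)\subseteq \mathbb{R}^n$ is a ball of radius $R$ and $0<\varepsilon_0<R$, then $|\kappa_i|\equiv 1/R$ for all $i$, whence
\[
\sum_{i=1}^{n-1} \left\| \frac{ \kappa_i \circ \mathrm{P}_{\Omega} }{ 1 - ( \kappa_i \circ \mathrm{P}_{\Omega})d_\Omega } \right\|_{L^\infty(\Omega^{\varepsilon_0})} \leq\, \frac{n-1}{R-\varepsilon_0},
\qquad
\mathcal{H}^{n-1}(\partial\Omega)\,=\, n\,\omega(n)\,R^{n-1},
\]
so the right-hand side of \eqref{2.13} is controlled explicitly in terms of $n$, $R$, $\varepsilon_0$, $\alpha$ and the structural constants $C_5$, $C_6$.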
\qed \begin{lemma}\label{lemma7} For any $p>(n/\alpha) +2$, there exist measures $\nu_{\infty}\in \mathcal{M}(\overline{\Omega})$ and ${\mathrm M}_\infty \in \mathcal{M}(\overline{\Omega}; \mathbb{R}^{N \times n^2}_s)$ such that, along perhaps a further sequence $(p_j)_1^{\infty}$ of exponents, we have \[ \left\{ \ \ \begin{array}{ll} \nu_{p} \weakstar \nu_\infty, & \ \ \text{in } \mathcal{M}(\overline{\Omega}), \\ \mathrm M_{p} \weakstar {\mathrm M}_\infty, & \ \ \text{in } \mathcal{M}(\overline{\Omega}; \mathbb{R}^{N \times n^2}_s), \end{array} \right. \] as $j\to\infty$, where the approximating measures $\nu_p,\mathrm M_p$ are given by \eqref{1.11}. \end{lemma} \noindent \textbf{Proof of Lemma} \ref{lemma7}. We begin by noting that since $g\geq 0$ and $\|g(u_p, \mathrm{D} u_p)\|_{L^p(\Omega)}=1$, in view of \eqref{1.11} we have the bound \[ \|\nu_p\|(\overline{\Omega}) \, =\,\nu_p(\overline{\Omega}) \, =\, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p-1}\, \mathrm{d} \mathcal{L}^n\leq \bigg({-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} g(u_p, \mathrm{D} u_p)^{p}\, \mathrm{d} \mathcal{L}^n \bigg)^{\frac{p-1}{p}}=1. \] By the sequential weak$^*$ compactness of the space of Radon measures we can conclude that $\nu_{p} \weakstar \nu_\infty, \text{in } \mathcal{M}(\overline{\Omega})$ up to the passage to a further subsequence. Now we establish appropriate total variation bounds for the measure $\mathrm M_p$. Since $f\geq0$, by the bounds of Lemma \ref{lemma5} and assumption \eqref{1.3}, we estimate (for sufficiently large $p$) \[ \begin{split} \|{\mathrm M}_p\|(\overline{\Omega})\, &= \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} \bigg( \frac{f(\mathrm{D}^2 u_p)}{\Lambda_p}\bigg)^{p-1}|\partial f(\mathrm{D}^2 u_p)| \, \mathrm{d} \mathcal{L}^n \\ &\leq \, \frac{1}{\Lambda^{p-1}_p} \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1} \Big( C_5 f(\mathrm{D}^2 u_p)^{\beta}+C_6\Big) \, \mathrm{d} \mathcal{L}^n \\ &= \, \frac{C_5}{\Lambda^{p-1}_p} \, {-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1+\beta} \, \mathrm{d} \mathcal{L}^n \, + \, \frac{C_6}{\Lambda^{p-1}_p}{-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega} f(\mathrm{D}^2 u_p)^{p-1} \, \mathrm{d} \mathcal{L}^n. \end{split} \] Hence, \[ \begin{split} \|{\mathrm M}_p\|(\overline{\Omega})\, & \leq \, \frac{C_5}{\Lambda^{p-1}_p}\bigg({-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f(\mathrm{D}^2 u_p)^p \, \mathrm{d} \mathcal{L}^n \bigg)^{\frac{p-1+\beta}{p}} \, + \, \frac{C_6}{\Lambda^{p-1}_p}\bigg({-\hspace{-10.5pt}\displaystyle\int}_{\!\!\!\Omega}f(\mathrm{D}^2 u_p)^{p} \, \mathrm{d} \mathcal{L}^n\bigg)^{\frac{p-1}{p}} \\ &= \, C_5\frac{(L_p)^{p-1+\beta}}{\Lambda_p^{p-1}} \, + \, C_6\frac{(L_p)^{p-1}}{\Lambda_p^{p-1}} \\ &= \, \bigg(\frac{L_p}{\Lambda_p}\bigg)^{p-1}\big(C_5L_p^{\beta}+C_6\big) \\ &\leq \, \bigg(\frac{C_8}{C_1}\bigg)^{1-\frac{1}{p}}\Big(C_5(\Lambda_{\infty}+1)^{\beta}+C_6 \Big). \end{split} \] The above bound allows to conclude that $\mathrm M_{p} \weakstar {\mathrm M}_\infty \ \text{in} \ \mathcal{M}(\overline{\Omega}; \mathbb{R}^{N \times n^2}_s)$, along perhaps a further subsequence of indices $(p_j)_1^\infty$ as $j\to\infty$. \qed To conclude the proof of Theorem \ref{1} we must ensure the PDE system \eqref{1.5} is indeed satisfied by the quadruple $(u_{\infty}, \Lambda_{\infty}, {\mathrm M}_\infty, \nu_{\infty})$. 
\begin{lemma}\label{lemma8} If ${\mathrm M}_\infty\in \mathcal{M}(\overline{\Omega}; \mathbb{R}^{N \times n^2}_s)$ and $ \nu_{\infty}\in \mathcal{M}(\overline{\Omega})$ are the measures obtained in Lemma \ref{lemma7}, then the pair $(u_{\infty}, \Lambda_{\infty})$ satisfies \eqref{1.5} for all $\phi\in C^2_{\mathrm B}(\overline{\Omega};\mathbb{R}^N)$. \end{lemma}

\noindent \textbf{Proof of Lemma} \ref{lemma8}. Fix a test function $\phi \in C^2_{\mathrm B}(\overline{\Omega};\mathbb{R}^N)$ and $p>n/\alpha+2$. By \eqref{1.11}, we may rewrite the PDE system in \eqref{1.10} as follows:
\[
\int_{\Omega} \mathrm{D}^2 \phi : \mathrm{d} \mathrm M_p = \ \Lambda_p \, \int_{\Omega} \Big( \partial_\eta g (u_p, \mathrm{D} u_p) \cdot \phi + \partial_P g (u_p, \mathrm{D} u_p) : \mathrm{D} \phi \Big)\, \mathrm{d} \nu_p.
\]
Recall that, by Proposition \ref{Proposition 6}, we have $\Lambda_p\longrightarrow \Lambda_{\infty}$ and also $(u_p,\mathrm{D} u_p)\longrightarrow (u_{\infty},\mathrm{D} u_{\infty})$ uniformly on $\overline{\Omega}$, as $p_j\to \infty$. By assumption \eqref{1.4}(a), we have that $\partial_\eta g (u_p,\mathrm{D} u_p)\longrightarrow \partial_\eta g (u_{\infty}, \mathrm{D} u_{\infty})$ and also $\partial_P g (u_p,\mathrm{D} u_p)\longrightarrow \partial_P g (u_{\infty}, \mathrm{D} u_{\infty})$, both uniformly on $\overline{\Omega}$, as $p_j\to \infty$. The result ensues by invoking Lemma \ref{lemma7}, in conjunction with weak$^*$-strong continuity of the duality pairing $\mathcal{M}(\overline{\Omega})\times C(\overline{\Omega}) \longrightarrow \mathbb{R}$.
\qed
\end{document}
\begin{document} \title{Coherent dynamical control of quantum processes} \author{V. Rezvani} \email{[email protected]} \affiliation{Department of Physics, Sharif University of Technology, Tehran 14588, Iran} \author{A. T. Rezakhani} \email{[email protected]} \affiliation{Department of Physics, Sharif University of Technology, Tehran 14588, Iran} \begin{abstract} Manipulation of a quantum system requires the knowledge of how it evolves. To impose that the dynamics of a system becomes a particular target operation (for any preparation of the system), it may be more useful to have an equation of motion for the dynamics itself---rather than the state. Here we develop a Markovian master equation for the process matrix of an open system, which resembles the Lindblad Markovian master equation. We employ this equation to introduce a scheme for optimal local coherent process control at target times, and extend the Krotov technique to obtain optimal control. We illustrate utility of this framework through several quantum coherent control scenarios, such as optimal decoherence suppression, gate simulation, and passive control of the environment, in all of which we aim to simulate a given terminal process at a given final time. \end{abstract} \pacs{03.65.Yz, 02.30.Yy, 03.65.Wj, 03.67.Lx, 03.67.-a} \date{\today} \maketitle \section{Introduction} \label{sec:intro} In a real world a quantum system cannot be fully isolated from its surrounding reservoir or environment. Such system-environment couplings in general lead to a nonunitary description of the dynamics of the system \cite{breuer_theory_2002,book:rivas,book:Schaller,correlation-picture}. Consequently, useful quantum resources of an open system, such as quantum coherence and correlations, often diminish rapidly. To mitigate such adversarial effects, it seems necessary to employ ideas from quantum error correction \cite{nielsen_quantum_2010,book:QEC} and quantum control theory \cite{book:D'Alessandro, book:q.control, boscain_introduction_2020, Jirari_Optimal_Control}, such as quantum feedback control \cite{lloyd_coherent_2000}, decoherence-free subspaces and subsystems \cite{lidar_review_2012}, and dynamical decoupling \cite{viola_dynamical_1999, Khodjasteh, Viola-review}. If the system is initially uncorrelated with the environment, its dynamics can be faithfully described by quantum ``operations'' or ``channels'' (completely positive, trace-preserving linear maps), or equivalently by ``process matrices'' \cite{nielsen_quantum_2010}. These objects relate the instantaneous density matrix (i.e., state) of the system to its initial density matrix. Numerous methods, such as quantum process tomography, have been developed for characterization of process matrices \cite{nielsen_quantum_2010, emerson_symmetrized_2007, bendersky_selective_2008, mohseni_quantum-process_2008, mohseni_equation_2009}. In addition, under some assumptions, one can obtain master equations for dynamics of the state \cite{breuer_theory_2002,book:rivas,book:Schaller}. These master equations enable one to see how manipulation of the preparation or system Hamiltonian by external agents can affect the state of the system at any time. 
The ability to manipulate system dynamics has spurred quantum control applications \cite{Rabitz-0,Rabitz-1,Rabitz-2,peirce_optimal_1988, sundermann_extensions_1999, zhu_rapid_1998, zhu_rapidly_1998, bartana_laser_2001, carlini_time-optimal_2006, carlini_time-optimal_2007, carlini_time_2008, Gordon, QAB, Clausen, brody_quantum_2012, Suter-RMP, pang_optimal_2017, cavina_optimal_2018, cavina_variational_2018}. However, for some applications, it may still be more useful to have dynamical or master equations which describe the dynamics of the dynamics (the process matrix) rather than the dynamics of the state (the density matrix). A relevant example is a control scenario where one is interested to achieve a particular quantum operation in a physical system by applying suitable control fields. Since here the operation is of interest, a dynamical equation for how the associated process matrix evolves can provide more direct information about the target operation. Such equations can be particularly useful in dissipative control or reservoir engineering scenarios \cite{poyatos_quantum_1996, myatt_decoherence_2000, beige_quantum_2000, diehl_quantum_2008, verstraete_quantum_2009, lin_dissipative_2013, fedortchenko_finite-temperature_2014, reseng-1, Basilewitsch_2019, PRR-1}. In this paper, assuming Markovian evolution of the open system, we derive an equation of motion for the process matrix (for a precursor study see Ref. \cite{mohseni_equation_2009}). Next we use this equation to construct a fairly general scheme for \textit{optimal} control of the dynamics of open quantum systems, where the achieved operation is guaranteed to have the highest fidelity with the desired operation. We restrict ourselves to coherent control operations, where an external field is applied locally only on the system and modifies its Hamiltonian (assuming that the field does not modify the environment or the way it acts on the system). We use this scheme to study optimal coherent strategies for gate simulation, decoherence suppression, and passive control of the environment. Our optimization strategy is based on developing an extension of the Krotov method for quantum processes. In decoherence suppression, an optimal control field is applied to the open system to simulate a unitary evolution at a specified time. In quantum gate simulation, we show how a quantum gate can be simulated optimally when we are confined to coherent manipulation of the open system. This optimal control framework also allows us to force the environment to act as if it were another environment with different properties. For example, we show that how one can modify the system Hamiltonian such that at a specified time a dissipative environment looks like a depolarizing environment. We illustrate these scenarios for a practical model of a Rydberg ion under typical decoherence. The structure of this paper is as follows. In Sec. \ref{sec:CohCont} we derive our master equation for the process matrix. In addition, in this section we introduce the main components of the optimal control theory to manipulate instantaneously the dynamics of an open system. Next in Sec. \ref{sec:KrotovMethod} we solve this dynamical optimization problem based on a monotonically convergent algorithm, i.e., the Krotov algorithm. We focus on optimal coherent control of terminal processes in Sec. \ref{sec:Application} and apply this to three different scenarios. Section \ref{sec:conc} concludes the paper. The appendixes include some details and derivations. 
\section{Coherent control of quantum processes}
\label{sec:CohCont}

Consider an open quantum system $S$ with an $N$-dimensional Hilbert space, which interacts with its surrounding environment $B$ with a large number of degrees of freedom. The central problem in optimal control theory is the dynamical manipulation of such a system to attain a given objective under some constraints \cite{boscain_introduction_2020}. For example, the objective can be to make the dynamics of the system at a predetermined final (terminal) time $t_{\mathrm{f}}$ as similar as possible to a given target dynamics. This can be achieved by manipulating the system and/or its environment. In this paper, we introduce a procedure for the dynamical manipulation of the system $S$ by applying the external fields only to the system.

\subsection{Dynamical variable: Process matrix}

In our dynamical control scheme, we work with an object referred to as the ``process matrix,'' which has a pivotal role in the dynamics of the open system. In the following we recall the definition of this object and some of its important properties \cite{nielsen_quantum_2010}. Assume that the initial state of the total system (main system $+$ environment) is in a tensor-product form as $\varrho (t_{0})=\varrho_{S}(t_{0})\otimes\sum_{i}r_{i}\vert b_{i}\rangle\langle b_{i}\vert$, where the set of eigenvalues and eigenvectors of the initial state of the environment $B$ is denoted by $\left\lbrace r_{i},\vert b_{i}\rangle\right\rbrace$. Hence the time evolution of $S$ is described by a completely positive and trace-preserving linear map in a Kraus representation form as \citep{nielsen_quantum_2010}
\begin{equation}
\mathpzc{E}_{(t,t_{0})}\big(\circ\big)=\textstyle{\sum}_{\lambda ,\mu=1}^{N^{2}}\chi_{\lambda\mu}(t,t_{0})\, C_{\lambda}\circ C_{\mu}^{\dag},
\label{eq.1}
\end{equation}
where $\lbrace C_{\lambda}\rbrace_{\lambda=1}^{N^{2}}$ is a fixed orthonormal operator basis for the $N^{2}$-dimensional Liouville space of $S$, such that $\mathrm{Tr}[C_{\lambda}^{\dag}C_{\mu}]=\delta_{\lambda\mu}$. In Eq. \eqref{eq.1}, the ``process matrix'' ${\chi }(t,t_{0})$ is defined as
\begin{align}
&\qquad\quad\chi(t,t_{0})=\mathpzc{B}^{\dagger}(t,t_{0}) \,\mathpzc{B}(t,t_{0}),\label{eq.2-2} \\
&\mathpzc{B}_{(i,j),\mu}(t,t_{0})=\sqrt{r_{i}}\,\mathrm{Tr}[\langle b_{i}\vert U^{\dagger}(t,t_{0})\vert b_{j}\rangle\, C_{\mu}], \label{eq.2-3}
\end{align}
where $U(t,t_{0})$ is the unitary operator generated by the total time-dependent Hamiltonian. Here we consider that the system is driven by an external control field $V_{\mathrm{field}}(t)$, which acts only on the system. Thus the total Hamiltonian becomes $H(t)=H_{S}+V_{\mathrm{field}}(t)+H_{B}+H_{\mathrm{int}}$, where $H_{S}$ ($H_{B}$) is the system (environment) Hamiltonian, and $H_{\mathrm{int}}$ denotes the system-environment interaction. In addition, in practical applications it is useful to consider that the external control $V_{\mathrm{field}}(t)$ can be adjusted by some knobs $\bm{\epsilon}(t)=\lbrace\epsilon_{m}(t) \rbrace$ as
\begin{equation}
V_{\mathrm{field}}(t)= \bm{\epsilon}(t)\cdot\bm{H}= \textstyle{\sum_{m}} \epsilon_{m}(t)H_{m},
\end{equation}
where $H_{m}$ are some fixed control operators. The process matrix \eqref{eq.2-2} is a positive-semidefinite matrix and relates the initial state of the system to its states at later times. From Eq. \eqref{eq.2-2} one can see that the process matrix contains all information about the dynamics of system $S$, and in this sense it is the dynamics itself.
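As a minimal numerical illustration of Eqs. \eqref{eq.1}--\eqref{eq.2-3} (a sketch we add for concreteness, assuming NumPy and a qubit amplitude-damping channel specified by its Kraus operators; it is not part of the formal development), the process matrix can be assembled from the expansion coefficients of the Kraus operators in the chosen operator basis, and one can check numerically the normalization $\mathrm{Tr}[\chi]=N$ noted below as well as the reconstruction of the channel through Eq. \eqref{eq.1}:
\begin{verbatim}
import numpy as np

# Orthonormal qubit basis {C_1,...,C_4} with C_4 = I/sqrt(N), N = 2.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0 + 0j, -1.0])
C = [m / np.sqrt(2) for m in (sx, sy, sz, np.eye(2, dtype=complex))]

def process_matrix(kraus, basis):
    # chi_{lm} = sum_a Tr[C_l^dag K_a] * conj(Tr[C_m^dag K_a])
    c = np.array([[np.trace(Cl.conj().T @ K) for Cl in basis]
                  for K in kraus])
    return c.T @ c.conj()

def apply_process(chi, rho, basis):
    # E(rho) = sum_{l,m} chi_{lm} C_l rho C_m^dag
    d = len(basis)
    return sum(chi[l, m] * basis[l] @ rho @ basis[m].conj().T
               for l in range(d) for m in range(d))

p = 0.3          # hypothetical damping probability
K = [np.diag([1.0 + 0j, np.sqrt(1 - p)]),
     np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)]
chi = process_matrix(K, C)
rho = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)
assert np.isclose(np.trace(chi), 2.0)   # trace preservation: Tr[chi] = N
assert np.allclose(apply_process(chi, rho, C),
                   sum(k @ rho @ k.conj().T for k in K))
\end{verbatim}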
The trace-preserving property of the linear map $\mathpzc{E}_{(t,t_{0})}$ at all times $t\geqslant t_{0}$ implies that $\mathrm{Tr}[\chi (t,t_{0})]=N$. There is a one-to-one isomorphism, referred to as the Choi-Jamiolkowski isomorphism \cite{Jamiolkowski, Choi}, between any completely positive map $\mathpzc{E}_{(t,t_{0})}$ and the density matrix of a composite system comprised of the open system $S$ and an ancilla of the same Hilbert space dimension, which is given by $\varrho_{\mathpzc{E}}(t)=\big(\mathpzc{E}_{(t,t_{0})}\otimes\mathbbmss{I}_{S}\big)\big(\vert\Phi_{+}\rangle\langle\Phi_{+}\vert\big)$, where $\vert\Phi_{+}\rangle$ is a maximally entangled state of the composite system. One can see that in the logical operator basis $\lbrace\widetilde{C}_{(i,j)}=\vert i\rangle\langle j\vert\rbrace_{i,j=1}^{N}$, with $\vert i\rangle$ being the computational basis, the process matrix $\widetilde{\chi}(t,t_{0})$ is proportional to the corresponding density matrix $\varrho_{\mathpzc{E}}(t)$ \cite{gilchrist_distance_2005},
\begin{equation}
\widetilde{\chi} (t,t_{0})=N\varrho_{\mathpzc{E}}(t).
\label{eq.2-1}
\end{equation}
In addition, the process matrix $\widetilde{\chi}(t,t_{0})$ is related to the process matrix ${\chi}(t,t_{0})$ in an arbitrary operator basis $\{ C_{\alpha}\}$ through a unitary transformation $\widetilde{\chi}(t,t_{0})=\mathcal{S}^{\dag}{\chi}(t,t_{0})\mathcal{S}$, where the unitary operator $\mathcal{S}$ is defined as $\mathcal{S}_{\alpha ,(i,j)}=\mathrm{Tr}[C_{\alpha}^{\dag}\widetilde{C}_{(i,j)}]$ for $\alpha\in\{1,\ldots,N^{2}\}$ and $i,j\in\{1,\ldots,N\}$.

\subsection{Dynamical equation of the dynamics}

After defining the process matrix as a dynamical variable, we need to determine how the process matrix evolves by considering the dissipative and external field effects in the dynamics space $\mathfrak{D}_{S} =\left\lbrace \chi (t,t_{0});\,\forall t\geqslant t_{0}\right\rbrace$. Here we restrict ourselves to the special case of quantum Markovian evolutions \citep{book:rivas}, where the dynamical map \eqref{eq.1} satisfies the divisibility condition for all $t,s$ such that $t_{0}\leqslant s\leqslant t$,
\begin{equation}
\mathpzc{E}_{(t,t_{0})}=\mathpzc{E}_{(t,s)}\mathpzc{E}_{(s,t_{0})}.
\label{eq.3-1}
\end{equation}
It is straightforward to see that this condition on the dynamics in turn implies the following Markovian master equation in the Lindblad form for the density matrix $\varrho_{S}(t)$:
\begin{align}
\dfrac{d\varrho_{S}(t)}{dt}=&-\dfrac{i}{\hbar}[{H}_{S}(t),\varrho_{S}(t) ]+\textstyle{\sum}_{\alpha=1}^{N^{2}-1}\gamma_{\alpha}(t) \big(L_{\alpha}(t)\varrho_{S}(t) L_{\alpha}^{\dag}(t) \nonumber\\
&-\dfrac{1}{2}\lbrace L_{\alpha}^{\dag}(t)L_{\alpha}(t),\varrho_{S}(t)\rbrace \big),
\label{eq.3-2}
\end{align}
where the Lindblad operators are defined as $L_{\alpha}(t)=\sum_{\eta =1}^{N^{2}-1}u_{\alpha\eta}^{\ast}(t) C_{\eta}$, with $u(t)$ being the unitary operator diagonalizing the positive semidefinite matrix $a(t)=[a_{\xi\nu}(t)]=[\lim_{x\rightarrow 0}\chi_{\xi\nu}(t+x,t)/x]$ (for $\xi,\nu\in\{1,\ldots, N^{2}-1\}$) as $u(t)a(t)u^{\dag}(t)=\mathrm{diag}\big(\gamma_{\alpha}(t) \big)$. The coefficients $\gamma_{\alpha}(t)$ are referred to as the Lindblad rates. The Hermitian operator $H_{S}(t)$ is also defined as $H_{S}(t)=\big(M(t)-M^{\dag}(t) \big)/(2i)$, where $M(t)=(\hbar /\sqrt{N})\sum_{\lambda=1}^{N^{2}-1}a_{\lambda N^{2}}(t)C_{\lambda}$, $a_{\lambda N^{2}} (t)=\lim_{x\rightarrow 0}\chi_{\lambda N^{2}}(t+x,t)/x$.
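For concreteness, the diagonalization step just described can be sketched numerically as follows (an illustration we add, assuming the instantaneous coefficient matrix $a(t)$ and the traceless part of the basis $\{C_{\eta}\}_{\eta=1}^{N^{2}-1}$ are available as NumPy arrays; the routine names are ours):
\begin{verbatim}
import numpy as np

def lindblad_rates_and_operators(a, traceless_basis):
    # Diagonalize the positive-semidefinite matrix a(t):
    #   u a u^dag = diag(gamma_alpha), with u = V^dag from a = V diag(gamma) V^dag.
    gamma, V = np.linalg.eigh(a)
    u = V.conj().T
    # L_alpha(t) = sum_eta conj(u_{alpha eta}) C_eta
    L = [sum(u[alpha, eta].conj() * traceless_basis[eta]
             for eta in range(len(traceless_basis)))
         for alpha in range(len(traceless_basis))]
    return gamma, L
\end{verbatim}
For a qubit ($N=2$), the traceless basis can be taken to be the normalized Pauli matrices.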
Interestingly an alternative microscopic, first-principle approach to derive the above Lindblad master equation has also been formulated \cite{breuer_theory_2002}, which can help to find the validity conditions for Eq. \eqref{eq.3-2} and also underlying physical meanings of its various components such as $H_{S}(t)$. This approach is based on the dynamics of the total system described by the von Neumann equation $\frac{d}{dt}\varrho_{SB}(t)=-(i/\hbar)[H(t),\varrho_{SB}(t)]$, partial tracing over the environment, and the weak-coupling, Born-Markov, and the secular approximations \cite{breuer_theory_2002,book:Schaller, lidar2019lecture}. The weak-coupling approximation implies that \begin{equation} \Vert H_{\mathrm{int}}\Vert \ll \max_{t}\Vert H_{S}+V_{\mathrm{field}}(t)+H_{B}\Vert, \label{eq.3-3} \end{equation} where $\Vert\cdot\Vert$ is the standard operator norm. In addition, Eq. (\ref{eq.3-2}) requires validity of particular assumptions about several time scales in the total system. In particular, \begin{gather} \tau_{B} \ll \delta t_{S}\ll\tau_{S}, \label{cond-1}\\ 1/\min_{\omega\neq \omega'}|\omega-\omega'|\ll \delta t_{S}, \label{cond-2} \end{gather} where $\tau_{B}$ is the relaxation time of the environment (the time at which the correlation functions of the environment decay), $\delta t_{S}\approx 1/\max_{t}\Vert H_{S}+V_{\mathrm{field}}(t)\Vert$ is the time scale of the variations of the driven system, $\tau_{S}$ is the time scale for the relaxation of the systems, and $\omega$ (or $\omega'$) indicates the energy gaps of $H_{S}$. The assumptions \eqref{eq.3-3} -- \eqref{cond-2} lead to time-independence of the Lindblad rates and operators \cite{Geva,Tannor-1}. In some sense, these assumptions guarantee that the control fields do not considerably modify how the environment affects the system. In addition, through the microscopic approach, the Hermitian operator $H_{S}(t)$ in Eq. \eqref{eq.3-2} appears to be the sum of the bare system, the field-system interaction, and the time-independent Lamb-shift Hamiltonian (environment-induced correction to the bare system Hamiltonian), $H_{S}(t)=H_{S}+V_{\mathrm{field}}(t)+H_{\mathrm{Lamb}}$. This form is in the \textit{lab} frame. To obtain a master equation for the dynamics $\chi (t,t_{0})$ in the Markovian regime, first we translate the divisibility condition \eqref{eq.3-1} in the language of the process matrix, \begin{equation} \chi_{\alpha\beta}(t,t_{0})=\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda,\nu=1}^{N^{2}}\chi_{\lambda\nu}(t,s)\chi_{\mu\eta}(s,t_{0})\,[ F_{\lambda}]_{\alpha\mu} \,[ F_{\nu}]^{*}_{\beta\eta}, \label{eq.3} \end{equation} where $F$ is a rank-$3$ tensor defined as $[F_{\lambda}]_{\alpha\mu}\equiv\mathrm{Tr}[C_{\alpha}^{\dag}C_{\lambda}C_{\mu}]$. From Eq. \eqref{eq.3} and by differentiating the process matrix, we can obtain a linear differential equation as \begin{align} \dfrac{d\chi(t,t_{0})}{dt}=-\dfrac{i}{\hbar} \mathpzc{K}\big(\chi(t,t_{0})\big), \label{eq.4} \end{align} for $\alpha,\beta\in\{1,\ldots,N^{2}\}$, where the time-dependent generator $\mathpzc{K}$ is given by \begin{align} -\dfrac{i}{\hbar}[\mathpzc{K}]_{\alpha\beta,\mu\eta}=&\lim_{x\rightarrow 0}\frac{1}{x}\big(\textstyle{\sum}_{\lambda,\nu=1}^{N^{2}}\chi_{\lambda\nu}(t+x,t) [ F_{\lambda}]_{\alpha\mu} [ F_{\nu}]^{*}_{\beta\eta} \nonumber \\ &-\delta_{\alpha\mu} \delta_{\beta\eta}\big). 
\label{eq.5} \end{align} Following the next steps similar to the formal derivation of the Lindblad equation for the density matrix yields an expression for the generator $\mathpzc{K}$. We have relegated the details to Appendix \ref{app:a}. From hereon we assume $t_{0}=0$ and use the shorthand $(t)$ for $(t,0)$. In addition, we assume the basis operators $\left\lbrace C_{\alpha}\right\rbrace$ are traceless, $\mathrm{Tr}[C_{\alpha}]=0$ for $\alpha\in\{1,\ldots, N^{2}-1\}$, except for $C_{N^{2}}=\mathbbmss{I}_{S}/\sqrt{N}$. These steps transform the dynamical master equation of the system (\ref{eq.3-2}) into the following Markovian dynamical equation for the process matrix itself: \begin{align} \dfrac{d\chi (t)}{dt}=&-\frac{i}{\hbar}\mathpzc{K}\big(\chi(t)\big) =-\dfrac{i}{\hbar} [\mathsf{H}_{S}(t),\chi(t) ]+\textstyle{\sum}_{\alpha=1}^{N^{2}-1}\gamma_{\alpha}(t) \nonumber\\ &\times\big(\mathsf{L}_{\alpha}(t)\chi(t) \mathsf{L}_{\alpha}^{\dag}(t) -\dfrac{1}{2}\lbrace \mathsf{L}_{\alpha}^{\dag}(t)\mathsf{L}_{\alpha}(t),\chi(t)\rbrace \big), \label{eq.6} \end{align} where $\gamma_{\alpha}(t)$'s are the same factors as in the Lindblad equation \eqref{eq.3-2}, \begin{equation} [\mathsf{Y}(t)]_{\mu\nu}\equiv\mathrm{Tr}[C_{\mu}^{\dag}\,Y(t)\, C_{\nu}], \label{eq.7} \end{equation} with $Y(t)\in\{H_{S}(t), L_{1}(t),\ldots,L_{N^{2}-1}(t)\}$, and the initial value condition is $\chi_{\mu\nu}(0)=N\delta_{\mu N^{2}}\delta_{\nu N^{2}}$, for $\mu,\nu\in\{1,\ldots, N^{2}\}$. Equation (\ref{eq.6}), which describes the dynamics, is one of the main results of this paper, and has evident similarity with Eq. (\ref{eq.3-2}). It is straightforward to see that the conditions \eqref{eq.3-3} -- \eqref{cond-2} yield that $\gamma_{\alpha}(t)$'s and $\mathsf{L}_{\alpha}(t)$'s become time-independent. In addition, for simplicity and noting that the Lamb-shift correction is of the second-order with respect to $H_{\mathrm{int}}$, we neglect $H_{\mathrm{Lamb}}$. It is important to note that the above equation directly addresses the dynamics of an open system without reference to its state. This is an appealing property which can be of significant practical importance, especially when one is interested to design or simulate a particular operation, dynamics, or gate, rather than a particular state. In this sense, our dynamical equation (\ref{eq.6}) can be taken as the basis for an enhanced quantum dynamical control scheme for open systems, which may have numerous applications in diverse areas, such as quantum computation \cite{koch_controlling_2016}. This lifting from the dynamics of a state to the dynamics of the dynamics can be compared with the closed-system scenario in which rather than the Schr\"{o}dinger equation $\frac{d}{dt}\varrho_{S}(t)=-(i/\hbar)[H_{S}(t),\varrho_{S}(t)]$ one can work with the dynamical equation $\frac{d}{dt}U_{S}(t,0)=-(i/\hbar)H_{S}(t)\,U_{S}(t,0)$, where $\varrho_{S}(t)=U_{S}(t,0)\,\varrho_{S}(0)\,U^{\dag}_{S}(t,0)$. \begin{remark} From Eqs. 
\eqref{eq.2-1} and \eqref{eq.6} one can obtain an equation for the Choi-Jamiolkowski density matrix $\varrho_{\mathpzc{E}}(t)$ as
\begin{align}
\dfrac{d\varrho_{\mathpzc{E}}(t)}{dt}=&-\dfrac{i}{\hbar}[\widetilde{\mathsf{H}}_{S}(t),{\varrho_{\mathpzc{E}}}(t) ]+\textstyle{\sum}_{\alpha=1}^{N^{2}-1}\gamma_{\alpha}(t) \big(\widetilde{\mathsf{L}}_{\alpha}(t){\varrho_{\mathpzc{E}}}(t) \widetilde{\mathsf{L}}_{\alpha}^{\dag}(t) \nonumber\\
&-\dfrac{1}{2}\lbrace \widetilde{\mathsf{L}}_{\alpha}^{\dag}(t) \widetilde{\mathsf{L}}_{\alpha}(t) , {\varrho_{\mathpzc{E}}}(t)\, \rbrace \big),
\label{eq.9-1}
\end{align}
where $\widetilde{\mathsf{Z}}(t)$ is defined as $\widetilde{\mathsf{Z}}(t)=\mathcal{S}^{\dag}\mathsf{Z}(t)\mathcal{S}$, with $\mathsf{Z}(t)\in\{\mathsf{H}_{S}(t), \mathsf{L}_{1}(t),\ldots,\mathsf{L}_{N^{2}-1}(t)\}$.
\end{remark}

\subsection{Objective of the control}

To analyze how effectively the applied fields $\bm{\epsilon}(t)$ perform toward achieving our objective, we need to choose a relevant figure-of-merit. A general figure-of-merit to control the dynamics of the open system $S$ is a real scalar functional in the form of
\begin{equation}
\mathpzc{J}=\mathpzc{F}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)+\mathpzc{J}_{d}[\chi(t)]+\mathpzc{J}_{f}[\bm{\epsilon}(t)].
\label{eq.10}
\end{equation}
The final time-dependent objective $\mathpzc{F}$ can be constructed based on a measure which compares how close the achieved dynamics at $t=t_{\mathrm{f}}$, $\chi(t_{\mathrm{f}})$, is to a given desired dynamics, $\Xi_{\,t_{\mathrm{f}}}$. For example, we can employ the quantum operator fidelity defined as \cite{Wang-fidelity,Suter-RMP}
\begin{equation}
\mathpzc{F}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)=- w_{0}\dfrac{\mathrm{Tr}[\chi^{\dag}(t_{\mathrm{f}})\, \Xi_{\,t_{\mathrm{f}}}]}{\big(\mathrm{Tr}[\chi^{\dag}(t_{\mathrm{f}})\chi(t_{\mathrm{f}})]\,\mathrm{Tr}[\Xi^{\dag}_{\,t_{\mathrm{f}}}\Xi_{\,t_{\mathrm{f}}}]\big)^{1/2}},
\label{eq.11}
\end{equation}
where $ w_{0}\geqslant 0$ is a weight and the negative sign is a convention introduced simply to transform the optimization into a minimization problem. This quantity is guaranteed by the Cauchy-Schwarz inequality to be bounded as $- w_{0}\leqslant \mathpzc{F}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)\leqslant 0$. For simplicity, later in this paper, when we discuss several examples in Sec. \ref{sec:Application}, we shall restrict ourselves to the case $w_{0}=1$, in which case $-\mathpzc{F}$ becomes the process fidelity. Note that in the context of optimal open-system quantum control theory, to achieve a desired quantum gate $O$ at a predetermined terminal time $t_{\mathrm{f}}$, various measures or fidelities have been proposed to represent the final time-dependent objective $\mathpzc{F}$ \cite{koch_controlling_2016}. However, the majority of these measures are based on the action of the dynamics on pure or logical states, rather than taking the dynamics into account directly. For example, one can mention the pure-state-based fidelity \cite{nielsen_simple_2002, goerz_robustness_2014} such as $\mathpzc{F}_{\mathrm{p}}=\textstyle{\int}d\vert\Psi\rangle\,\langle\Psi\vert O^{\dag}\mathpzc{E}_{(t_{\mathrm{f}},0)}\big(\vert \Psi\rangle\langle\Psi\vert\big)O\vert\Psi\rangle$ and the logical basis-based one \cite{goerz_optimal_2014, koch_controlling_2016} such as $\mathpzc{F}_{l}=(1/N)\sum_{i,j=1}^{N}\mathrm{Tr}[O\vert i\rangle \langle j\vert O^{\dag}\mathpzc{E}_{(t_{\mathrm{f}},0)}(\vert i\rangle\langle j\vert)]$.
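As a concrete reading of Eq. \eqref{eq.11} (a minimal sketch we add for illustration, assuming the achieved and target process matrices are available as NumPy arrays; it is not part of the control formalism itself):
\begin{verbatim}
import numpy as np

def terminal_objective(chi_tf, xi_tf, w0=1.0):
    # F = -w0 * Tr[chi^dag Xi] / sqrt(Tr[chi^dag chi] Tr[Xi^dag Xi]);
    # for positive-semidefinite chi and Xi the numerator is real and
    # nonnegative, so -F lies in [0, w0] and is the process fidelity for w0 = 1.
    num = np.trace(chi_tf.conj().T @ xi_tf).real
    den = np.sqrt(np.trace(chi_tf.conj().T @ chi_tf).real *
                  np.trace(xi_tf.conj().T @ xi_tf).real)
    return -w0 * num / den
\end{verbatim}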
In contrast, our dynamics-based objective \eqref{eq.11} is directly related to the dynamics, without reference to any state. The functional $\mathpzc{J}_{d}$ depends on the dynamics of the system, $\chi (t)$, at the intermediate times $0\leqslant t< t_{\mathrm{f}}$. In order to steer the dynamics $\chi (t)$ toward a desired one $\Xi (t)$ through the external fields, one can write this intermediate time-dependent objective $\mathpzc{J}_{d}$ as
\begin{align}
\mathpzc{J}_{d}[\chi(t)]=-\dfrac{ w_{d}}{t_{\mathrm{f}}} \int_{0}^{t_{\mathrm{f}}}dt\;\dfrac{\mathrm{Tr}[\chi^{\dag}(t)\, \Xi(t)]}{\big(\mathrm{Tr}[\chi^{2}(t)]\,\mathrm{Tr}[\Xi^{2}(t)]\big)^{1/2}},
\label{eq.12}
\end{align}
where $w_{d}\geqslant 0$ is a weight for this objective. In this work, however, in order to demonstrate the basic utility of our proposed control scheme, we restrict ourselves to the dynamical control problem of the system at a given terminal time $t_{\mathrm{f}}$, rather than over a time interval, which means that we simply set $w_{d}= 0$ in our study. Nevertheless, our scheme can be straightforwardly applied to the cases where one aims to control a dynamics $\chi(t)$ so that it can resemble a desired dynamics $\Xi(t)$ for a given time interval $0\leqslant t< t_{\mathrm{f}}$.
\begin{figure}\label{Fig-1} \end{figure}
A clear advantage of our objective measures \eqref{eq.11} and \eqref{eq.12} is that, in light of the existence of various experimental techniques such as process tomography to estimate process matrices \cite{mohseni_quantum-process_2008}, they are also experimentally accessible. The field-dependent functional $\mathpzc{J}_{f}$ has been introduced to take into account all operational or experimental constraints on the control fields. For example, we assume
\begin{gather}
\mathpzc{J}_{f} [\bm{\epsilon}(t)]=\textstyle{\int_{0}^{t_{\mathrm{f}}}}dt\,\mathcal{G}_{f}\big(\bm{\epsilon}(t),t\big),\label{eq.13.1} \\
\mathcal{G}_{f}\big(\bm{\epsilon}(t),t\big)=\textstyle{\sum}_{m} w_{m} \big(\epsilon_{m}(t)-\epsilon_{m}^{(\mathrm{ref})}(t) \big)^{2}/f_{m}(t), \label{eq.13}
\end{gather}
where $w_{m}\geqslant 0$ and $\epsilon_{m}^{(\mathrm{ref})}(t)$ are, respectively, a weight for this objective and a reference field. The shape function $f_{m}(t)$ can switch the external field $\epsilon_{m}(t)$ on and off smoothly \cite{sundermann_extensions_1999} (see also Refs. \cite{Gordon,Clausen} for energy-constrained protocols). By setting $\epsilon_{m}^{(\mathrm{ref})}(t)=0$, the functional \eqref{eq.13.1} implies a constraint on the energy of the control field. The main goal of our dynamical control scheme is to minimize the total functional $\mathpzc{J}$ \eqref{eq.10} containing only the final time- and the field-dependent objectives [Eqs. \eqref{eq.11} and \eqref{eq.13.1}] such that the dynamics of the system is governed by the master equation \eqref{eq.6}. Figure \ref{Fig-1} depicts a schematic of this scheme. In the following section, we proceed to solve this problem via a monotonically convergent optimization approach: the Krotov method \cite{book:Krotov, Krotov_method, sklarz_loading_2002,Schirmer-Krotov,Morzhin-Pechen,Goerz_2019}.

\section{Solving the dynamical optimization problem: Krotov method}
\label{sec:KrotovMethod}

To solve the optimal control problem discussed in the previous section, we can resort to one of the existing iterative methods. Here in particular we focus on monotonically convergent methods.
One of such methods is the Zhu-Rabitz (ZR) method \cite{zhu_rapid_1998, zhu_rapidly_1998}, where after obtaining the control equations by variational approaches, a particular monotonically convergent algorithm is employed to solve these equations. In the generalized form of this technique \cite{maday_new_2003, ohtsuki_generalized_2004}, the speed of convergence and numerical accuracy of the algorithm can also be adjusted via appropriate convergence parameters. An alternative method to solve an optimization problem in a monotonically convergent fashion is the Krotov method \cite{Krotov_method, sklarz_loading_2002}. Having a dynamical equation and an objective functional, in this method a particular algorithm is employed to improve the control objective after each iteration. This method has been used in controlling states of closed or open systems, and it encompasses a wide range of optimization problems including nonconvex state-dependent functionals, intermediate time-dependent functionals, and nonlinear master equation for the state \cite{sklarz_loading_2002, reich_monotonically_2012}. Note that the ZR method is reduced to the Krotov method by choosing the convergence parameters suitably \cite{maday_new_2003}. Although for some convergence parameters the ZR method exhibits faster convergence and also a higher numerical accuracy than the Krotov method \cite{ohtsuki_generalized_2004}, to the best of our knowledge the ZR method has not yet been generalized to the case of nonconvex final time-dependent objectives such as the one we have introduced in Eq. \eqref{eq.11}. Hence here we focus on the Krotov method and extend it to the problem of controlling the process of an open system. We have relegated the details to Appendix \ref{app:b} and only mention the main results here in the following. In the following, it seems more convenient to vectorize the dynamics (process matrix) $\chi (t)$ and the field-dependent generator (denoted by $\mathpzc{K}_{\,\,\bm{\epsilon}}$) \cite{ohtsuki_bathinduced_1989, metrology_us} in an extended Hilbert space as $\chi(t)\, \rightarrow\vert\chi (t)\rangle\hskip-0.7mm\rangle$ and $\mathpzc{K}_{\,\,\bm{\epsilon}}\rightarrow \mathbbmss{K}_{\bm{\epsilon}}$, respectively. For any $\vert\chi_{1}\rangle\hskip-0.7mm\rangle$ and $\vert\chi_{2}\rangle\hskip-0.7mm\rangle$ in this space, the inner product is defined as $\langle\hskip-0.7mm\langle\chi_{1}\vert\chi_{2}\rangle\hskip-0.7mm\rangle =\mathrm{Tr}[\chi_{1}^{\dag}\chi_{2}]$. In this representation the final time-dependent objective \eqref{eq.11} takes the following form: \begin{equation} \mathpzc{F}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)=- w_{0}\dfrac{\langle\hskip-0.7mm\langle\chi(t_{\mathrm{f}})\vert\Xi_{\,t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle}{[\langle\hskip-0.7mm\langle\chi(t_{\mathrm{f}})\vert\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle\langle\hskip-0.7mm\langle\Xi_{\,t_{\mathrm{f}}}\vert\Xi_{\,t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle]^{1/2}}. \label{eq.14} \end{equation} Each iteration of the Krotov method contains two consecutive steps. 
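Schematically, these two steps (described in detail below) can be organized as in the following hedged code skeleton, which we add for orientation; the placeholder routines \texttt{propagate\_forward}, \texttt{propagate\_backward}, and \texttt{update\_field} stand for numerical solvers of Eqs. \eqref{eq.6}, \eqref{eq.15}, and \eqref{eq.20}, respectively, and are not specified here:
\begin{verbatim}
def krotov_iteration(eps_old, chi0, xi_tf, t_grid, propagate_forward,
                     propagate_backward, update_field):
    # Step 1: propagate chi^(n) forward with the old fields, then propagate the
    # adjoint Lambda(t) backward from its boundary condition at t_f.
    chi_old = propagate_forward(eps_old, chi0, t_grid)
    lam = propagate_backward(eps_old, chi_old[-1], xi_tf, t_grid)
    # Step 2: sweep forward again, updating the fields self-consistently at
    # each time step; Delta chi = chi^(n+1) - chi^(n) enters the second-order
    # correction of the update rule.
    eps_new, chi_new = [e.copy() for e in eps_old], [chi0]
    for k in range(len(t_grid) - 1):
        eps_new = update_field(eps_new, lam[k], chi_new[-1], chi_old[k], k)
        chi_next = propagate_forward(eps_new, chi_new[-1], t_grid[k:k + 2])[-1]
        chi_new.append(chi_next)
    return eps_new, chi_new
\end{verbatim}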
At the first step, we have the dynamics $\vert \chi^{(n)} (t)\rangle\hskip-0.7mm\rangle$ as an input of the current iteration $(n+1)$ ($n\geqslant 0$) governed by the dynamical equation $d\vert \chi^{(n)} (t)\rangle\hskip-0.7mm\rangle /dt=-(i/\hbar)\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert\chi ^{(n)}(t)\rangle\hskip-0.7mm\rangle$, with the initial boundary condition on the $\alpha$'s component $\vert \chi^{(n)}(0)\rangle\hskip-0.7mm\rangle_{\alpha} =N\delta_{\alpha N^{4}}$, where $\mathbbmss{K}_{\bm{\epsilon}^{(n)}}$ shows the time-dependent generator with $\bm{\epsilon}^{(n)}(t)$. The $\bm{\epsilon}^{(n)}(t)$ are the control fields updated in the previous iteration (and when $n=0$, these are the initial guess fields). Having these fields, an adjoint dynamics $\vert \Lambda (t) \rangle\hskip-0.7mm\rangle$ evolves backward in time according to \begin{equation} \dfrac{d\vert \Lambda (t) \rangle\hskip-0.7mm\rangle}{dt}=-\dfrac{i}{\hbar}\mathbbmss{K}^{{\dagger}}_{\bm{\epsilon}^{{(n)}}}\vert\Lambda (t) \rangle\hskip-0.7mm\rangle, \label{eq.15} \end{equation} with the following boundary condition at the final time $t_{\mathrm{f}}$: \begin{equation} \vert \Lambda (t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle =-\Big( \dfrac{\partial\mathpzc{F}}{\partial\langle\hskip-0.7mm\langle\chi (t_{\mathrm{f}})\vert}\Big)\Big| _{\chi^{(n)}(t_{\mathrm{f}})}, \label{eq.16} \end{equation} where $|_{\chi^{(n)}(t_{\mathrm{f}})}$ on the right-hand side (RHS) indicates the evaluation of the partial derivative at $\vert\chi ^{(n)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle$ and $\langle\hskip-0.7mm\langle\chi ^{(n)}(t_{\mathrm{f}})\vert$. Note that for the full control of an open system at a specified time interval---that is, when $\mathpzc{J}_{d}\neq 0$---the RHS of Eq. \eqref{eq.15} needs to be modified \cite{reich_monotonically_2012}. For the dynamical control of a system at a predetermined time with the final time-dependent objective \eqref{eq.14}, the boundary condition \eqref{eq.16} can be written as \begin{align} \vert \Lambda (t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle =&\dfrac{ w_{0}}{2}\Big(\dfrac{\vert {\Xi}_{t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle}{\big[\langle\hskip-0.7mm\langle \chi^{(n)}(t_{\mathrm{f}})\vert \chi^{(n)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle\langle\hskip-0.7mm\langle\Xi_{\,t_{\mathrm{f}}}\vert\Xi_{\,t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle\big]^{1/2}} \nonumber\\ &-\dfrac{\langle\hskip-0.7mm\langle \chi^{(n)}(t_{\mathrm{f}})\vert {\Xi}_{t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle\vert\chi^{(n)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}{\big[\langle\hskip-0.7mm\langle\chi^{(n)}(t_{\mathrm{f}})\vert\chi^{(n)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle^{3}\langle\hskip-0.7mm\langle\Xi_{\,t_{\mathrm{f}}}\vert\Xi_{\,t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle\big]^{1/2}}\Big). 
\label{eq.17} \end{align} At the next step of the \textcolor{blue}{$n+1$}th iteration, the control field $\epsilon_{m}(t)$ needs to be updated according to the equation \begin{align} &\Big(\dfrac{\partial\mathcal{G}_{f}}{\partial\epsilon_{m}}\Big)\Big|_{\bm{\epsilon}^{(n+1)}}=\dfrac{2}{\hbar}\mathrm{Im}\Big\{\langle\hskip-0.7mm\langle\Lambda(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle\Big\} \nonumber\\ &\,+\dfrac{\sigma(t)}{\hbar}\mathrm{Im}\Big\{\langle\hskip-0.7mm\langle\Delta\chi^{(n+1)}(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle\Big\}, \label{eq.18} \end{align} where $\vert \Delta\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle=\vert \chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle -\vert \chi^{(n)}(t)\rangle\hskip-0.7mm\rangle$ is the change in the dynamics and $\vert \chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle$ follows the dynamical equation $d\vert \chi^{(n+1)} (t)\rangle\hskip-0.7mm\rangle /dt=-(i/\hbar)\mathbbmss{K}_{\bm{\epsilon}^{(n+1)}}\vert\chi ^{(n+1)}(t)\rangle\hskip-0.7mm\rangle$ with $\mathbbmss{K}_{\bm{\epsilon}^{(n+1)}}$. These two equations are coupled and should be solved simultaneously. Due to the nonconvexity of the final time-dependent objective \eqref{eq.14}, a coefficient $\sigma(t)$ has been introduced in Eq. \eqref{eq.18} to guarantee monotonic convergence of the ultimate algorithm. This time-dependent coefficient is determined analytically by the relation \begin{equation} \sigma(t)=-\bar{A}e^{\zeta_{B}(t_{\mathrm{f}}-t)},\qquad\zeta_{B}\in\mathbbmss{R}^{+}, \label{eq.19.1} \end{equation} where the constant coefficient $\bar{A}$ is given by \begin{gather} \bar{A}=\mathrm{max}\{\zeta_{A},2A+\zeta_{A}\},\qquad\zeta_{A}\in\mathbbmss{R}^{+}, \label{eq.19.2.1} \\ A=\sup_{\{\Delta\chi(t_{\mathrm{f}})\}}\dfrac{\Delta\mathpzc{F}+2\mathrm{Re}\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Lambda(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Delta\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle},\label{eq.19.2.2} \end{gather} with $\Delta\mathpzc{F}=\mathpzc{F}\big(\chi^{(n)}(t_{\mathrm{f}})+\Delta\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\mathpzc{F}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)$. For a \textit{convex} final time-dependent objective as $\mathpzc{F}_{c}=- w_{0}\langle\hskip-0.7mm\langle\chi(t_{\mathrm{f}})\vert\Xi_{\,t_{\mathrm{f}}}\rangle\hskip-0.7mm\rangle$, we have $A=0$ and then $\bar{A}=0$ by setting $\zeta_{A}=0$. Since in this specific case $\sigma (t)$ is zero, then the Krotov method reduces to its first-order version---see Eq. \eqref{eq.18} with $\sigma(t)=0$. By considering Eq. \eqref{eq.13} as the field-dependent function $\mathcal{G}_{f}$, Eq. \eqref{eq.18} leads to the following update equation: \begin{align} \epsilon_{m}^{(n+1)}(t)=&\epsilon_{m}^{(\mathrm{ref})}(t)+\dfrac{f_{m}(t)}{ \hbar w_{m}}\Big\{\mathrm{Im}\langle\hskip-0.7mm\langle\Lambda(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle \nonumber \\ &+\dfrac{\sigma(t)}{2}\mathrm{Im}\langle\hskip-0.7mm\langle\Delta\chi(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle\Big\}. 
\label{eq.20} \end{align} Following Ref. \cite{palao_quantum_2002,palao_optimal_2003}, throughout this paper we set $\epsilon_{m}^{(\mathrm{ref})}(t)=\epsilon_{m}^{(n)}(t)$. By this choice, the field-dependent functional \eqref{eq.13.1} vanishes when the fields approach their optimal values. Then, in this asymptotic limit the monotonic convergence of the total and the final time-dependent objectives, $\mathpzc{J}$ and $\mathpzc{F}$, are equivalent. The updated control field $\bm{\epsilon}^{(n+1)}(t)$ is considered as a guess field for the next iteration. Then, the above procedure is iterated until the algorithm achieves a desired convergence threshold. A prohibitive issue to apply the second-order correction of the Krotov method, i.e., the second term in Eq. \eqref{eq.18} or the third term in Eq. \eqref{eq.20}, is to calculate the supremum over \textit{all} variations of the terminal dynamics $\{\Delta\chi(t_{\mathrm{f}})\}$ in order to obtain the constant coefficient $\bar{A}$ [Eqs. \eqref{eq.19.2.1} and \eqref{eq.19.2.2}]. A partial remedy for this is to replace $A$ in Eq. \eqref{eq.19.2.2} with a numerical ansatz as \cite{reich_monotonically_2012} \begin{equation} A^{(n+1)}=\dfrac{\Delta\mathpzc{F}^{(n+1)}+2\mathrm{Re}\langle\hskip-0.7mm\langle\Delta\chi^{(n+1)}(t_{\mathrm{f}})\vert\Lambda(t_{\mathrm{\mathrm{f}}})\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi^{(n+1)}(t_{\mathrm{f}})\vert\Delta\chi^{(n+1)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}, \label{eq.21} \end{equation} where $\Delta\mathpzc{F}^{(n+1)}=\mathpzc{F}\big(\chi^{(n+1)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\mathpzc{F}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)$. However, the parameter $A^{(n+1)}$ depends on $\vert\Delta\chi^{(n+1)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle=\vert\chi^{(n+1)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle-\vert\chi^{(n)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle$ with an \textit{unknown} dynamics $\vert\chi^{(n+1)}(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle$ which is to be determined in the current iteration $n+1$. In order to resolve this difficulty, we can substitute the parameter $A^{(n)}$ calculated in the previous iteration into Eq. \eqref{eq.19.2.1}. Note that this procedure may compromise the monotonic convergence of the algorithm. In such case, this failed iteration needs to be repeated until monotonic convergence is guaranteed by considering $A^{(n+1)}$ instead of $A$ in Eq. \eqref{eq.19.2.1}. Another potential approach to resolve this issue is to set $\bar{A}=\zeta_{A}\geqslant 0$, and to find some $\zeta_{A}$ by trial and error such that the monotonic convergence can be retrieved \cite{reich_monotonically_2012}. \section{Application: Dynamical control of a Rydberg ion} \label{sec:Application} Here we choose a related practical example to illustrate our dynamical control scheme. Rydberg neutral atoms and ions are appealing candidates for implementation of quantum computation with high fidelities \cite{Rydberg-RMP,madjarov_high-fidelity_2020, li_boost_2020, higgins_coherent_2017}. We consider a trapped Rydberg ion $^{88}\mathrm{Sr}^{+}$ \cite{higgins_coherent_2017} containing an energy level $\vert r\rangle \equiv 42\mathrm{S}_{1/2}$ as a Rydberg state. Only four energy levels of $^{88}\mathrm{Sr}^{+}$ have been shown in Fig. \ref{Fig-2:RydIon}. Two low-lying levels with the gap wavelength $\lambda_{0}=674\,\mathrm{nm}$ act as a register qubit, i.e., $\{\vert 0\rangle \equiv 4\mathrm{D}_{5/2},\,\vert 1\rangle \equiv 4\mathrm{S}_{1/2}\}$. 
The Rydberg state is addressable via two time-dependent lasers. The pump laser $\epsilon_{p}(t)$ couples the state $\vert 0\rangle$ to an intermediate state $\vert i\rangle \equiv 6\mathrm{P}_{3/2}$. The trapped ion in the state $\vert i\rangle$ is also excited to the Rydberg state by a Stokes laser $\epsilon_{s}(t)$. We consider these control laser pulses as in the following: \begin{equation} \epsilon_{m}(t)=q_{m}(t)\, \cos (\omega_{m}t),\qquad m\in \{s,p\}, \end{equation} where $q_{m}(t)$ and $\omega_{m}$ are the time-dependent amplitude and central frequency, respectively. Following Ref. \cite{goerz_optimizing_2015}, we consider an appropriate rotating frame defined by a suitable unitary operator defined as below in the logical basis $\{\vert 0\rangle,\,\vert 1\rangle ,\,\vert i\rangle ,\,\vert r\rangle\}$, \begin{equation} \mathpzc{U}(t)=\mathrm{diag}\big(1,1,e^{i\omega_{p}t},e^{i(\omega_{p}+\omega_{s})t}\big). \label{eq.21-1} \end{equation} Hereon in this paper, we shall work in this particular rotating frame, without denoting a separate notation for quantities therein. After applying the rotating-wave approximation ($\omega_{s},\omega_{p}\gg \omega_{0}=(2\pi c)/\lambda_{0}$), the Hamiltonian of the system becomes \cite{higgins_coherent_2017, goerz_optimizing_2015} \begin{equation} H_{S}(t)=\dfrac{\hbar}{2} \begin{pmatrix} 0& 0& \Omega_{p}(t)& 0 \\ 0& -2\omega_{0}& 0& 0 \\ \Omega_{p}(t)& 0& 2\Delta_{p}& -\Omega_{s}(t) \\ 0& 0& -\Omega_{s}(t)& 2\big(\Delta_{p}+\Delta_{s} \big) \\ \end{pmatrix}, \label{eq.21-2} \end{equation} where the detuning $\Delta_{p}$ ($\Delta_{s}$) is defined as $\Delta_{p}=\omega_{i}-\omega_{p}$ ($\Delta_{s}=\omega_{r}-\omega_{i}-\omega_{s}$) and $(1/2)\Omega_{p}(t)=(1/2)\mu_{0i}\, q_{p}(t)$ ($(1/2)\Omega_{s}(t)=(1/2)\mu_{ir}\, q_{s}(t)$) is the time-dependent Rabi frequency of the pump (Stokes) laser pulse with the dipole moment $\mu_{0i}$ ($\mu_{ir}$) for the $\vert 0\rangle\leftrightarrow\vert i\rangle$ ($\vert i\rangle\leftrightarrow\vert r\rangle$) transition. The frequency of the former (latter) transition has been denoted by $\omega_{i}$ ($\omega_{s}$). In the following we assume $\Delta_{p}=-\Delta_{s}=40\pi\,\mathrm{MHz}$ and refer to $\Omega_{p}(t)$ and $\Omega_{s}(t)$ (rather than $\epsilon_{p}(t)$ and $\epsilon_{s}(t)$) as the pump and Stokes laser pulses, respectively. \begin{figure} \caption{Four energy levels of the Rydberg ion $^{88}\mathrm{Sr}^{+}$ driven by two laser pulses. Energy levels $\vert 0\rangle \equiv 4\mathrm{D}_{5/2}$ and $\vert 1\rangle \equiv 4\mathrm{S}_{1/2}$ as a register qubit with the transition frequency $\omega_{0}=(2\pi c)/\lambda_{0}$, in which $\lambda_{0}=674\,\mathrm{nm}$ ($c$ is the speed of light). The state $\vert 0\rangle$ is coupled to an intermediate state $\vert i\rangle \equiv 6\mathrm{P}_{3/2}$ via a pump laser $\epsilon_{p}(t)$ with detuning $\Delta_{p}=40\pi\,\mathrm{MHz}$. The ionic transition $\vert i\rangle \leftrightarrow \vert r\rangle \equiv 42\mathrm{S}_{1/2}$ (Rydberg state) is driven by the Stokes laser $\epsilon_{s}(t)$ with detuning $\Delta_{s}=-\Delta_{p}$. The Rydberg state decays into the intermediate state within $\tau_{r}\approx 2.3\,\mu\mathrm{s}$. 
The decay time of the state $|i\rangle$ to the state $\vert 1\rangle$ is $\tau_{i}\approx35\,\mathrm{ns}$ \cite{higgins_coherent_2017}.} \label{Fig-2:RydIon} \end{figure} It has been known that the dominant effects of the surrounding environment of this trapped ion manifest as two spontaneous decays \citep{higgins_coherent_2017}: (a) $\vert i\rangle\rightarrow\vert 1\rangle$ with decay time $\tau_{i}\approx35\,\mathrm{ns}$ and (b) $\vert r\rangle\rightarrow\vert 1\rangle$ with decay time $\tau_{r}\approx 2.3\,\mu\mathrm{s}$. The associated Lindblad jump operators and rates of these processes are \begin{gather} L_{i}(t)=e^{-i\omega_{p}t}\vert 1\rangle\langle i\vert,\quad\gamma_{i}=1/\tau_{i},\label{eq.22}\\ L_{r}(t)=e^{-i(\omega_{p}+\omega_{s})t}\vert 1\rangle\langle r\vert,\quad\gamma_{r}=1/\tau_{r}.\label{eq.22-} \end{gather} From the dynamical equation \eqref{eq.6}, transformed into the rotating frame, with the time-dependent Hamiltonian \eqref{eq.21-2} and Lindblad operators \eqref{eq.22} and (\ref{eq.22-}) we can obtain a dynamical equation for the dynamics of this trapped ion. Note that the traceless operator basis $\{C_{\alpha}\}$ for this system is comprised of the generalized Gell-Mann basis matrices \cite{bertlmann_bloch_2008} for qudits. As an application of this dynamical equation we answer the following question in the next subsections by our coherent process control scheme: How can the dynamics of this Rydberg ion at a given final time be steered to a desired target process? In the following we consider three scenarios for the desired final process $\Xi_{\,t_{\mathrm{f}}}$. For the optimization we need the shape functions for the pump and Stokes lasers. Following Ref. \cite{goerz_robustness_2014}, here we choose the Blackman shape functions, given by \begin{align} f_{m}(t)=[1-g-\cos(k_{m}\pi t/t_{\mathrm{f}})+g\cos(l_{m}\pi t/t_{\mathrm{f}})]/2, \label{eq.22-1} \end{align} for $m\in\{s,p\}$, where $g=0.16$ and $k_{p}=4,\; l_{p}=8$ ($k_{s}=2,\; l_{s}=4$) for the pump (Stokes) laser. We also consider the following guess lasers to start the optimization process: \begin{equation} \Omega_{m}^{(0)}(t)=E_{m}f_{m}(t),\qquad m\in\{s,p\}, \label{eq.22-2} \end{equation} where $E_{m}$ is the peak amplitude and we set $E_{m}=94\pi\,\mathrm{MHz}$ for both laser pulses \cite{higgins_coherent_2017}. \ignore{ \begin{figure}\label{Fig-4} \end{figure} } \begin{figure}\label{Fig-3} \end{figure} \subsection{Scenario I: Gate simulation} \label{subsec:GateSimu} In the first scenario, the desired target process (in the rotating frame) is considered to be a specific unitary gate $O$, that is, \begin{equation} [{\Xi}_{\,t_{\mathrm{f}}}^{(\mathrm{G})}]_{\alpha\beta}\equiv \mathrm{Tr}[OC_{\alpha}^{\dag}]\,\mathrm{Tr}[OC_{\beta}^{\dag}]^{*}, \label{eq.23} \end{equation} where $\alpha ,\beta \in\{1,\ldots, 16\}$. For example, here we aim to obtain the optimal Stokes and pump lasers which steer the dynamics of the register qubit ($\{\vert 0\rangle ,\vert 1\rangle\}$) to be similar to the phase gate applied at a specified final time (here $t_{\mathrm{f}}=900\;\mathrm{ns}$), \begin{equation} O=\vert 0\rangle\langle 0\vert+e^{i\varphi}\vert 1\rangle\langle 1\vert+\vert i\rangle\langle i\vert+\vert r\rangle\langle r\vert. \label{eq.24} \end{equation} This operation acts trivially on the passive subspace $\{\vert i\rangle ,\vert r\rangle\}$ \cite{palao_quantum_2002}. Interestingly, Ref. 
\citep{higgins_coherent_2017} reports experimental realization of this gate on the register subspace by preparing the system as a pure state in this subspace. In contrast, our optimization scheme allows us to obtain any desired gate regardless of the preparation of the system in the register subspace. Figure \ref{Fig-3} shows the process fidelity $-\mathpzc{F}$ and the total functional $-\mathpzc{J}$ vs. the iteration number $n$ of the Krotov algorithm, both of which demonstrate similar behaviors. This figure indicates that the fidelity can improve by only $\approx 27\%$ (up to iteration $5900$) through the optimization with the guess lasers chosen as in Eq. \eqref{eq.22-2}. Two plateaus are observed in the ranges $797<n<927$ and $1057<n<1893$, before the saturation of the fidelity at the value $\approx 0.646$. However, these plateaus are preliminary and the algorithm can still yield larger fidelities when it is given sufficiently many more iterations. This figure confirms that the iterative algorithm converges to the final objective $-\mathpzc{J}$ monotonically by the optimization procedure developed in Sec. \ref{sec:KrotovMethod}. This monotonic convergence occurs by incurring an extra numerical cost in updating the control laser pulses at each iteration, which is due to the last term in Eq. (\ref{eq.20})---see the inset of Fig. \ref{Fig-3}. The optimal pump and Stokes lasers are shown in Fig. \ref{Fig-5}. It is interesting to note that despite striking differences between the optimal and guess pulses, there is still a rough similarity between them---such a behavior has also been reported earlier in Ref. \cite{goerz_robustness_2014}. Note that in the associated spectra the maximum peaks occur at zero frequency, which in the lab frame corresponds to the frequencies of the lasers whose detunings with the frequency of the $\vert 0\rangle\leftrightarrow\vert i\rangle$ and $\vert i\rangle\leftrightarrow\vert r\rangle$ transitions are given by $\Delta_{p}$ and $\Delta_{s}$, respectively.
\begin{figure}\label{Fig-5} \end{figure}
\subsection{Scenario II: Decoherence suppression}
\label{subsec:DecoSupp}
In this scenario, we are interested in seeing whether the pump and Stokes laser pulses applied to the trapped Rydberg ion can suppress the environment at a predetermined time $t_{\mathrm{f}}$ such that we have $\varrho_{S}(t_{\mathrm{f}}) = U_{S}^{\dag}(t_{\mathrm{f}})\varrho_{S}(0) U_{S}(t_{\mathrm{f}})$, where $U_{S}(t_{\mathrm{f}})$ is generated by the bare system Hamiltonian in the rotating frame,
\begin{equation}
H_{S}=\hbar\big(-\omega_{0}\vert 1\rangle\langle 1\vert +\Delta_{p}\vert i\rangle\langle i\vert+(\Delta_{s}+\Delta_{p})\vert r\rangle\langle r\vert\big).
\label{eq.25}
\end{equation}
That is, the target process is given by
\begin{align}
[{\Xi}^{(\mathrm{D})}_{\,t_{\mathrm{f}}}]_{\alpha\beta}\equiv \mathrm{Tr}[U_{S}(t_{\mathrm{f}}) \,C_{\alpha}^{\dag}]\,\mathrm{Tr}[U_{S}(t_{\mathrm{f}}) \, C_{\beta}^{\dag}]^{*},
\label{eq.26}
\end{align}
for $\alpha ,\beta \in\{1,\ldots, 16\}$. We set $t_{\mathrm{f}}=500\;\mathrm{ns}$ for the target time of this scenario. The environment acts on the ion through two quantum channels with decay times $\tau_{i}\approx 35\,\mathrm{ns}$ and $\tau_{r}\approx 2.3\,\mu\mathrm{s}$ (see Eqs. \eqref{eq.22} and \eqref{eq.22-} for the jump rates and operators of these channels).
Since $\tau_{i}<t_{\mathrm{f}}<\tau_{r}$, in practice in this example we aim to suppress the detrimental effects of the spontaneous emission process $\vert i\rangle\rightarrow\vert 1\rangle$ at $t_{\mathrm{f}}=500\,\mathrm{ns}$. \begin{figure}\label{Fig-6} \end{figure} The fidelity $-\mathpzc{F}$ as a function of the iteration number $n$ of the Krotov algorithm is shown in Fig. \ref{Fig-6}. The fidelity reaches the value $\approx 0.687$ after $5133$ iterations. To go beyond this iteration, we have observed that the algorithm needs an exhaustive search over the space of the control parameter $A^{(n)}$, due to the ad hoc workaround for the numerical issue introduced in Sec. \ref{sec:KrotovMethod}. In the algorithm we have set $\zeta_{A}=0$. From Eqs. \eqref{eq.19.1}, \eqref{eq.19.2.1}, and \eqref{eq.20}, this can lead to a Krotov method which is first order in $|\chi (t)\rangle\hskip-0.7mm\rangle$. We have also shown the control parameter $A^{(n)}$ as a function of the iteration number in the inset of Fig. \ref{Fig-6}. Except for the first iteration, which serves to start the optimization process, the algorithm updates this parameter with nonzero values. The behavior seen in this inset (implying that $A^{(n)}>0$) in turn necessitates the second-order contribution in the updating equation \eqref{eq.20} to ensure monotonic convergence of the algorithm. A distinctive feature of this plot and also Fig. \ref{Fig-3} is the appearance of plateaus in some ranges of the iteration number, which indicate trappings in the algorithm (perhaps of a similar nature to the trappings observed in Ref. \cite{ohtsuki_generalized_2004}). Figure \ref{Fig-8} shows the optimal pump and Stokes laser pulses. Note the similarities and differences between the optimal and guess fields. The spectra of these fields are depicted in the insets of these plots. We observe that a wide range of frequencies contributes significantly to the spectra of these optimal lasers. Since the bosonic environment contains a large number of frequencies, one may intuitively argue that a broadband spectrum for the control laser fields may be needed to optimally alleviate the effect of the environment. \subsection{Scenario III: Passive control of the environment} \label{subsec:PassContEnv} \begin{figure}\label{Fig-8} \end{figure} In this scenario we want to optimally control the environment by applying appropriate external fields to the system. In particular, we aim to modify the effect of the environment passively such that it looks different (as we wish) to the system. This is particularly interesting given that designing systems which can ``imposter'' another system has recently attracted much attention \cite{Campos_2017, McCaul_2020}. \begin{figure}\label{Fig-9} \end{figure} As depicted in Fig. \ref{Fig-2:RydIon}, one of the effects of the environment on the Rydberg ion manifests as transferring population from $\vert i\rangle$ to $\vert 1\rangle$.
Let us, for example, make this effect look like a depolarizing channel \cite{nielsen_quantum_2010} on the subspace $\{\vert 1\rangle ,\vert i\rangle\}$, \begin{equation} \mathpzc{D}^{(\mathrm{ch})}_{p}(\varrho)=[1-p(t)]\varrho + \big( p(t)/3\big)\textstyle{\sum_{\alpha=1}^{3}} \sigma_{\alpha} \varrho \sigma_{\alpha}, \label{eq.27} \end{equation} with $0\leqslant p(t)\, \leqslant 1$ and the operators $\sigma_{i}$ are defined as \begin{align} &\sigma_{1} \equiv\vert 1\rangle\langle i\vert+\vert i\rangle\langle 1\vert+\vert 0\rangle\langle 0\vert+\vert r\rangle\langle r\vert, \nonumber\\ &\sigma_{2} \equiv -i\vert 1\rangle\langle i\vert+i\vert i\rangle\langle 1\vert+\vert 0\rangle\langle 0\vert+\vert r\rangle\langle r\vert,\\ &\sigma_{3} \equiv \vert 1\rangle\langle 1\vert -\vert i\rangle\langle i\vert+\vert 0\rangle\langle 0\vert+\vert r\rangle\langle r\vert.\nonumber \end{align} It can be seen that the reduction of these operators on the subspace $\{\vert 1\rangle ,\vert i\rangle\}$ acts similarly to the Pauli operators for a qubit. We assume here that $p(t)=(1-e^{-6t/\tau_{i}})/2$. This is indeed the error probability of a depolarizing channel acting on a qubit with the depolarizing time $\tau_{d}=\tau_{i}/6$ and the Kraus operators $W_{0}(\Delta t)=\sqrt{1-p(\Delta t)}\mathbbmss{I}_{2}-i\Delta tH_{0}$ and $W_{\alpha}(\Delta t)=\sqrt{p(\Delta t)/3}\sigma_{\alpha}$ ($H_{0}$ and $\Delta t$ are the free Hamiltonian of the qubit and a short time interval, respectively). The desired channel \eqref{eq.27} leads to the following target process at $t=t_{\mathrm{f}}$ in the generalized Gell-Mann basis for $N=4$, \begin{figure}\label{Fig-11} \end{figure} \begin{align} [{\Xi}^{(\mathrm{ch})}_{\,t_{\mathrm{f}}}]_{\alpha\beta}=&\big\{\big(4-3p(t_{\mathrm{f}})\big)/2\big\}\delta_{\alpha16}\delta_{\beta16}+\big\{p(t_{\mathrm{f}})/6\big\}\big(2\delta_{\alpha 7}\delta_{\beta 7} \nonumber\\ &+\delta_{\alpha 1}\delta_{\beta 1}+3\delta_{\alpha 6}\delta_{\beta 6}+\delta_{\alpha 11}\delta_{\beta 11}+2\delta_{\alpha 10}\delta_{\beta 10} \nonumber\\ &+2\delta_{\alpha 1}\delta_{\beta 7} +2\delta_{\alpha 1}\delta_{\beta 10}\big) -\big\{\sqrt{6}p(t_{\mathrm{f}})/9\big\}\big(\delta_{\alpha 7}\delta_{\beta 11} \nonumber\\ &+\delta_{\alpha 1}\delta_{\beta 11} +\delta_{\alpha 10}\delta_{\beta 11}-3\delta_{\alpha 6}\delta_{\beta 16}\big) +\big\{\sqrt{2}p(t_{\mathrm{f}})/3\big\}\nonumber\\ &\times\big(\delta_{\alpha 7}\delta_{\beta 16} +\delta_{\alpha 1}\delta_{\beta 16}-\delta_{\alpha 6}\delta_{\beta 11} +\delta_{\alpha 10}\delta_{\beta 16}\big)\nonumber\\ &+\big\{\sqrt{3}p(t_{\mathrm{f}})/9\big\} \big(\delta_{\alpha 1}\delta_{\beta 6}+\delta_{\alpha 6}\delta_{\beta 7} -3\delta_{\alpha 11}\delta_{\beta 16} \nonumber\\ &+\delta_{\alpha 6}\delta_{\beta 10}\big) +\alpha\leftrightarrow\beta, \label{eq.28} \end{align} where $\alpha,\beta\in\{1,\ldots, 16\}$ and $\alpha\leftrightarrow\beta$ denotes terms similar to the previous after exchanging $\alpha \leftrightarrow \beta$. We set $t_{\mathrm{f}}=900\,\mathrm{ns}$ for the target time of this simulation. Figure \ref{Fig-9} indicates the process fidelity $-\mathpzc{F}$ and the total functional $-\mathpzc{J}$ vs. the iteration number $n$ of the Krotov algorithm. The optimization process improves the channel fidelity from $\approx 0.133$ to $\approx 0.769$ after performing $6000$ iterations. During the first $200$ iterations, the fidelity increases by a factor of $\%37$. Monotonic convergence of the total objective $-\mathpzc{J}$ is evident from this figure. 
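For illustration, with the stated parameters $\tau_{i}=35\,\mathrm{ns}$ and $t_{\mathrm{f}}=900\,\mathrm{ns}$, the error probability of the target channel has already saturated at the final time,
\begin{equation*}
p(t_{\mathrm{f}})=\big(1-e^{-6t_{\mathrm{f}}/\tau_{i}}\big)/2=\big(1-e^{-154.3}\big)/2\approx 1/2,
\end{equation*}
so that, for example, the leading coefficients in Eq. \eqref{eq.28} reduce to $\big(4-3p(t_{\mathrm{f}})\big)/2\approx 5/4$ and $p(t_{\mathrm{f}})/6\approx 1/12$.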
We observe that in Figs. \ref{Fig-3}, \ref{Fig-6}, and \ref{Fig-9}, the convergence becomes slow when the algorithm approaches an optimal solution. In principle, this issue can be circumvented by combining the Krotov method and the quasi-Newton method \cite{Eitan_Optimal_Control}. However, we have not implemented this approach in this paper. The optimal laser pulses and their spectra are shown in Fig. \ref{Fig-11}. \section{Summary and conclusions} \label{sec:conc} We have obtained an equation of motion for the process matrix associated with the dynamics of an open quantum system under the weak-coupling and Markovian assumptions. This equation is in the Lindblad form and resembles the master equation for the state or density matrix of the open system. Next, by using this equation, we have developed an open-system optimal control scheme where, by local coherent manipulation of the open quantum system---through applying a control field---one can optimally implement quantum operations on the system under these conditions. The optimal control field is obtained by minimizing a proper figure of merit, in which physical constraints have been included. This scheme can be straightforwardly extended to situations where applying a control field to the system may also affect how the environment acts on the system, and hence it can enable various environment engineering scenarios. We have illustrated the utility of our scheme in three quantum control scenarios: gate simulation, decoherence suppression, and passive environment engineering. In the gate simulation scenario, the goal has been to force the system to evolve at a given time as closely as possible to a given unitary gate. In the decoherence suppression scenario, the objective has been to suppress as much as possible the effect of the interaction with the environment such that in a given time the evolution of the open system is simply given by its own Hamiltonian. The passive environment control scenario is an extension of the previous scenarios, in which, simply by applying coherent control fields, we have aimed to make the original environment look like another environment. Since these control scenarios are limited to coherent control of the system, i.e., without assuming the ability to manipulate the environment, they are subject to the shape of the applicable control fields, and may not achieve some operations with arbitrarily high fidelity. However, our framework on its own is applicable to more general cases and can provide a feasible approach accessible with any given set of control operations. \begin{acknowledgments} V.R. acknowledges helpful discussions with Z. Nafari Qaleh and H. Yarlo. This work was partially supported by Sharif University of Technology's Office of Vice President for Research and Technology through Contract No. QA960512. \end{acknowledgments} \begin{widetext} \onecolumngrid \appendix \section{Derivation of the process dynamical equation} \label{app:a} Here we present the derivation of the dynamical equation for $\chi (t)$ based on a formal approach. By using Eq. (\ref{eq.3}) and by differentiating the process matrix, we obtain the following linear differential equation: \begin{equation} \dfrac{d\chi_{\alpha\beta}(t)}{dt}=\lim_{x \rightarrow 0}\dfrac{1}{x}\big(\chi_{\alpha\beta}(t+x)-\chi_{\alpha\beta}(t) \big) \equiv -\dfrac{i}{\hslash}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}} [\mathpzc{K}]_{\alpha\beta,\mu\eta} \, \chi_{\mu\eta}(t), \qquad \alpha,\beta\in\{1,\ldots,N^{2}\}.
\label{e.a1} \end{equation} From the definition of the operator $F_{\lambda}$, i.e., $[F_{\lambda}]_{\alpha\mu}\equiv\mathrm{Tr}[ C_{\alpha}^{\dag}C_{\lambda}C_{\mu}]$, we can write the time-dependent generator $\mathpzc{K}$ as \begin{align} -\dfrac{i}{\hslash}[\mathpzc{K}]_{\alpha\beta,\mu\eta}=&\lim_{x\rightarrow 0}\frac{1}{x}\Big(\textstyle{\sum}_{\lambda,\gamma=1}^{N^{2}}\chi_{\lambda\gamma}(t+x,t) [ F_{\lambda}]_{\alpha\mu} [ F_{\gamma}]^{*}_{\beta\eta}-\delta_{\alpha\mu} \delta_{\beta\eta}\Big) \label{a.e2}\\ =&\lim_{x\rightarrow 0}\dfrac{1}{x}\Big(\dfrac{1}{N}\chi_{N^{2}N^{2}}(t+x,t)\delta_{\beta\eta}\delta_{\alpha\mu}+\dfrac{1}{\sqrt{N}}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}\big(\chi_{\lambda N^{2}}(t+x,t)\delta_{\beta\eta} [F_{\lambda}]_{\alpha\mu} +\chi_{N^{2}\lambda}(t+x,t)\delta_{\alpha\mu} [ F_{\lambda}]^{*}_{\beta\eta}\big) \nonumber\\ &+\textstyle{\sum}_{\lambda,\gamma=1}^{N^{2}-1}\chi_{\lambda \gamma}(t+x,t) [ F_{\lambda}]_{\alpha\mu} [ F_{\gamma}]^{*}_{\beta\eta}-\delta_{\alpha\mu} \delta_{\beta\eta}\Big). \label{a.e3} \end{align} Now we introduce the time-dependent coefficients $a_{\lambda\gamma}(t)$ as \begin{align} a_{N^{2}N^{2}}(t) \equiv&\lim_{x\rightarrow 0} [\chi_{N^{2}N^{2}}(t+x,t)-N]/x,\nonumber\\ a_{N^{2}\lambda}(t) \equiv&\lim_{x\rightarrow 0} \chi_{N^{2}\lambda}(t+x,t)/x,\qquad \lambda\in\{1,\ldots,N^{2}-1\},\nonumber\\ a_{\lambda\gamma}(t) \equiv&\lim_{x\rightarrow 0} \chi_{\lambda\gamma}(t+x,t)/x,\qquad \lambda,\gamma\in\{1,\ldots,N^{2}-1\}, \label{a.e4} \end{align} which lead to the following compact form for the components of the generator: \begin{align} -\dfrac{i}{\hslash}[\mathpzc{K}] _{\alpha\beta,\mu\eta}=& \dfrac{1}{N}a_{N^{2}N^{2}}(t)\, \delta_{\beta\eta}\delta_{\alpha\mu}+\dfrac{1}{\sqrt{N}}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}(a_{\lambda N^{2}}(t)\, \delta_{\beta\eta} [F_{\lambda}]_{\alpha\mu}+a_{N^{2}\lambda}(t)\, \delta_{\alpha\mu}[ F_{\lambda}]^{*}_{\beta\eta}) +\textstyle{\sum}_{\lambda,\gamma=1}^{N^{2}-1}a_{\lambda \gamma}(t) [F_{\lambda}]_{\alpha\mu}[ F_{\gamma}]^{*}_{\beta\eta}. \label{a.e5} \end{align} We shall now obtain the coefficients $a_{\lambda\gamma}(t)$ in terms of the operator basis of the Liouville space, i.e., $\{ C_{\alpha}\}_{\alpha=1}^{N^{2}}$. The trace-preserving property of the evolution map $\mathpzc{E}_{(t,t_{0})}$ leads to $\sum_{\alpha,\beta=1}^{N^{2}}\chi_{\alpha\beta}(t,t_{0}) \, C_{\beta}^{\dag} C_{\alpha}=\mathbbmss{I}_{S}$. Substituting $t_{0}\rightarrow t$ and $t\rightarrow t+x$ into the latter equation and then separating the terms including $K_{N^{2}}=\mathbbmss{I}_{S}/\sqrt{N}$ from the others yield \begin{align} \dfrac{1}{N}a_{N^{2}N^{2}}(t)\, \mathbbmss{I}_{S}+\dfrac{1}{\sqrt{N}}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}\big(a_{\lambda N^{2}}(t)\, C_{\lambda}+a_{N^{2}\lambda}(t)\, C_{\lambda}^{\dag}\big)+\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}a_{\xi\nu}(t)\, C_{\nu}^{\dag} C_{\xi}=0, \label{a.e6} \end{align} where we have used the definition of the coefficients $a_{\xi\nu}(t)$ [Eqs. (\ref{a.e4})]. Now, we introduce the Hermitian operator \begin{align} H_{S}(t)=\dfrac{1}{2i}\big(M^{\dag}(t)-M(t)\big), \label{a.e7} \end{align} where $M(t)=(\hbar/\sqrt{N}) \textstyle{\sum}_{\lambda=1}^{N^{2}-1}a_{\lambda N^{2}}(t)\, C_{\lambda}$, by which we can rewrite Eq. (\ref{a.e6}) as \begin{align} \dfrac{1}{N}a_{N^{2}N^{2}}(t)\, \mathbbmss{I}_{S}+\dfrac{2}{\hbar}M^{\dag}(t)-\dfrac{2i}{\hbar}H_{S}(t)+\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}a_{\xi\nu}(t) \, C_{\nu}^{\dag} C_{\xi}=0. 
\label{a.e8} \end{align} After multiplying both sides of Eq. (\ref{a.e8}) by $C_{\lambda}$ ($\lambda\in\{1,\ldots,N^{2}-1\}$) and taking partial trace over system $S$, we obtain \begin{align} a_{N^{2}\lambda}(t)=\dfrac{i}{\hbar}\sqrt{N}\,\mathrm{Tr} [H_{S}(t)\,C_{\lambda}] -\dfrac{\sqrt{N}}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}a_{\xi\nu}(t)[F_{\xi}]_{\nu\lambda}. \label{a.e9} \end{align} Note that we have supposed here that the operator basis $\{C_{\alpha}\}$ are traceless except that $C_{N^{2}}=(1/\sqrt{N})\mathbbmss{I}_{S}$, i.e., $\mathrm{Tr}[C_{\alpha}] =0$ for $\alpha\in\{1,\ldots, N^{2}-1\}$. Moreover, by taking partial trace over the system from both sides of Eq. (\ref{a.e6}), we obtain \begin{align} a_{N^{2} N^{2}}(t)=-\textstyle{\sum}_{\nu,\xi=1}^{N^{2}-1}a_{\nu\xi}(t)\, \delta_{\nu\xi}. \label{a.e10} \end{align} Thus, by substituting Eqs. (\ref{a.e9}) and (\ref{a.e10}) into Eq. (\ref{a.e5}), the components of the generator $\mathpzc{K}$ can be obtained as \begin{align} -\dfrac{i}{\hslash}[ \mathpzc{K}]_{\alpha\beta,\mu\eta}=& -\dfrac{1}{N}\textstyle{\sum}_{\nu,\xi=1}^{N^{2}-1}a_{\nu\xi}(t)\, \delta_{\nu\xi}\delta_{\beta\eta}\delta_{\alpha\mu}+\textstyle{\sum}_{\lambda,\gamma=1}^{N^{2}-1}a_{\lambda\gamma}(t)\, [F_{\lambda}]_{\alpha\mu} [F_{\gamma}]_{\beta\eta}^{*} -\dfrac{i}{\hbar}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}\mathrm{Tr}[C_{\lambda}^{\dag}H_{S}(t)] [F_{\lambda}]_{\alpha\mu} \delta_{\beta\eta} \nonumber\\ & +\dfrac{i}{\hbar}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}\mathrm{Tr}[C_{\lambda}H_{S}(t)] [F_{\lambda}]_{\beta\eta}^{*}\delta_{\alpha\mu} -\dfrac{1}{2}\textstyle{\sum}_{\nu,\xi=1}^{N^{2}-1}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}a_{\nu\xi}(t) [F_{\xi}]_{\nu\lambda}^{*} [F_{\lambda}]_{\alpha\mu}\delta_{\beta\eta} \nonumber \\ &-\dfrac{1}{2}\textstyle{\sum}_{\nu,\xi=1}^{N^{2}-1}\textstyle{\sum}_{\lambda=1}^{N^{2}-1} a_{\xi\nu}(t) [F_{\xi}]_{\nu\lambda}[F_{\lambda}]_{\beta\eta}^{*} \delta_{\alpha\mu}. \label{a.e11} \end{align} By using Eqs. \eqref{a.e4} and \eqref{eq.2-2}, we can prove $\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\upsilon_{\xi}^{*}a_{\xi\nu}(t)\, \upsilon_{\nu}=\lim_{x\rightarrow 0}(1/x)\sum_{\lambda ,\mu}r_{\mu}^{2}\big| \sum_{\xi =1}^{N^{2}-1}\upsilon_{\xi}^{*}\mathrm{Tr}\big[\langle b_{\lambda}\vert U(t+x,t)\vert b_{\mu}\rangle C_{\xi}^{\dag}\big]\big|^{2}\geqslant 0$ for any $(N^{2}-1)-$dimensional vector $\upsilon$ and for any time $t$. Hence $(N^{2}-1)-$dimensional matrix ${a}(t)=[a_{\xi\nu}(t)]$ is positive semidefinite. By substituting Eq. (\ref{a.e11}) into Eq. 
(\ref{e.a1}), a set of coupled differential equations are obtained as \begin{align} \dfrac{d\chi_{\alpha\beta}(t)}{dt}= &-\dfrac{1}{N}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}a_{\xi\nu}(t)\, \delta_{\nu\xi}\delta_{\beta\eta}\delta_{\alpha\mu} \, \chi_{\mu\eta}(t) + \textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}a_{\xi\nu}(t) [F_{\xi}]_{\alpha\mu} [F_{\nu}]_{\beta\eta}^{*}\, \chi_{\mu\eta}(t)\, \nonumber\\ &-\dfrac{i}{\hbar}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}\big[C_{\lambda}^{\dag}H_{S}(t)\big] [F_{\lambda}]_{\alpha\mu}\delta_{\beta\eta} \, \chi_{\mu\eta}(t) -\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}-1}a_{\xi\nu}(t) [F_{\nu}]_{\xi\lambda}^{*}[F_{\lambda}]_{\alpha\mu}\delta_{\beta\eta} \, \chi_{\mu\eta}(t) \nonumber \\ &-\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}-1} a_{\xi\nu}(t) [F_{\xi}]_{\nu\lambda} [F_{\lambda}]_{\beta\eta}^{*} \delta_{\alpha\mu} \, \chi_{\mu\eta}(t) +\dfrac{i}{\hbar}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}H_{S}(t)] [F_{\lambda}]_{\beta\eta}^{*}\delta_{\alpha\mu} \, \chi_{\mu\eta}(t), \label{a.e12} \end{align} where the upper limits of the summation over $\lambda$ in the third and sixth terms have been changed to $N^{2}$ because $H_{S}(t)$ is a traceless operator [see Eq. (\ref{a.e7})]. By using the orthonormality of the operator basis $\{C_{\alpha}\}$, the first term in Eq. (\ref{a.e12}) is absorbed into the forth and fifth terms by changing the upper limits of their summations over $\lambda$ to $N^{2}$. Hence Eq. (\ref{a.e12}) can be recast as \begin{align} \dfrac{d\chi_{\alpha\beta}(t)}{dt}= &\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}a_{\xi\nu}(t)[F_{\xi}]_{\alpha\mu} [F_{\nu}]_{\beta\eta}^{*} \, \chi_{\mu\eta}(t)-\dfrac{i}{\hbar}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr} [C_{\lambda}^{\dag}H_{S}(t)] [F_{\lambda}]_{\alpha\mu}\delta_{\beta\eta} \, \chi_{\mu\eta}(t)\, \nonumber\\ &-\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}}a_{\xi\nu}(t) [F_{\nu}]_{\xi\lambda}^{*} [F_{\lambda}]_{\alpha\mu}\delta_{\beta\eta} \, \chi_{\mu\eta}(t) -\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}} a_{\xi\nu}(t) [F_{\xi}]_{\nu\lambda} [F_{\lambda}]_{\beta\eta}^{*} \delta_{\alpha\mu} \, \chi_{\mu\eta}(t) \nonumber \\ & +\dfrac{i}{\hbar}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}H_{S}(t)] [F_{\lambda}]_{\beta\eta}^{*}\delta_{\alpha\mu} \,\chi_{\mu\eta}(t). \label{a.e13} \end{align} Equation (\ref{a.e13}) can still be brought into a more compact form. By expanding the Hermitian operator $H_{S}(t)$ in terms of the operator basis $\{C_{\lambda}\}_{\lambda=1}^{N^{2}} $ as $H_{S}(t)=\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag}H_{S}(t)] C_{\lambda}$, it is straightforward to prove the following equality: \begin{equation} \textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag}H_{S}(t)] [F_{\lambda}]_{\alpha\mu}=\mathrm{Tr}[ C_{\alpha}^{\dag}H_{S}(t)C_{\mu}]. 
\label{a.e14} \end{equation} After some algebra, we also obtain another useful relation, \begin{align} \textstyle{\sum}_{\lambda=1}^{N^{2}} [F_{\nu}]_{\xi\lambda}^{*} [F_{\lambda}]_{\alpha\mu} &=\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag} C_{\nu}^{\dag} C_{\xi}]\,\mathrm{Tr}[ C_{\alpha}^{\dag}C_{\lambda}C_{\mu}] =\mathrm{Tr}[C_{\alpha}^{\dag} C_{\nu}^{\dag} C_{\xi}C_{\mu}] \nonumber \\ &=\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag} C_{\nu} C_{\alpha}]^{*}\, \mathrm{Tr}[C_{\lambda}^{\dag} C_{\xi}C_{\mu}] =\textstyle{\sum}_{\lambda=1}^{N^{2}}[F_{\nu}]_{\lambda\alpha}^{*} [F_{\xi}]_{\lambda\mu} \nonumber \\ &=[F_{\nu}^{\dag}F_{\xi}]_{\alpha\mu}, \label{a.e15} \end{align} where we have used $ C_{\nu}^{\dag} C_{\xi}=\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag} C_{\nu}^{\dag} C_{\xi}] C_{\lambda} $ and $ C_{\xi}C_{\mu}=\textstyle{\sum}_{\lambda=1}^{N^{2}}\mathrm{Tr}[C_{\lambda}^{\dag} C_{\xi}C_{\mu}] C_{\lambda} $. Thus, by using Eqs. (\ref{a.e14}) and (\ref{a.e15}), one can get another form for Eq. (\ref{a.e13}) as \begin{align} \dfrac{d\chi_{\alpha\beta}(t)}{dt}= &\dfrac{i}{\hbar}\textstyle{\sum}_{\eta=1}^{N^{2}}\chi_{\alpha\eta}(t)\, \mathrm{Tr}[C_{\eta}^{\dag}H_{S}(t) C_{\beta}] -\dfrac{i}{\hbar}\textstyle{\sum}_{\mu=1}^{N^{2}}\mathrm{Tr}\big[ C_{\alpha}^{\dag}H_{S}(t)C_{\mu}\big] \chi_{\mu\beta}(t) +\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu,\eta=1}^{N^{2}}a_{\xi\nu}(t)[F_{\xi}]_{\alpha\mu} \, \chi_{\mu\eta}(t)[F_{\nu}^{\dag}]_{\eta\beta}\nonumber\\ &-\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\mu=1}^{N^{2}}a_{\xi\nu}(t) [F_{\nu}^{\dag}F_{\xi}]_{\alpha\mu} \, \chi_{\mu\beta}(t)-\dfrac{1}{2}\textstyle{\sum}_{\xi,\nu=1}^{N^{2}-1}\textstyle{\sum}_{\eta=1}^{N^{2}} a_{\xi\nu}(t)\, \chi_{\alpha\eta}(t)[F_{\nu}^{\dag}F_{\xi}]_{\eta\beta}. \label{a.e16} \end{align} Since the coefficient matrix $a(t)=[a_{\xi\nu}(t)]$ is a positive semidefinite matrix, we can diagonalize it via a time-dependent unitary matrix $u(t)$. Then, we have $u(t)a(t)u^{\dag}(t)=\gamma (t)$, where $\gamma (t)=\mathrm{diag}\big(\gamma_{\alpha}(t) \big)$ ($\alpha\in\{1,\ldots, N^{2}-1\}$) is the Lindblad coefficient matrix. Here, we define the time-dependent Lindblad operators as $A_{\lambda}(t)=\sum_{\xi=1}^{N^{2}-1}u_{\lambda\xi}^{\ast}(t) C_{\xi}$. Having these Lindblad coefficient matrix and operators as well as introducing the $N^{2}-$dimensional matrices $\mathsf{H}_{S}(t)$ and $\mathsf{L}_{\alpha}(t)$, as Eq. \eqref{eq.7}, eventually lead to the dynamical equation of the dynamics (\ref{eq.6}). \section{Krotov method} \label{app:b} In this appendix, we adapt and extend the Krotov method, as discussed in Ref. \cite{reich_monotonically_2012}, to the process control of an open quantum system. For this purpose we first consider the process matrix $\chi (t)$ as an $N^{4}-$component vector $\vert\chi(t)\rangle\hskip-0.7mm\rangle$ and the field-dependent generator $\mathpzc{K}_{\,\,\bm{\epsilon}}$ as an $N^{4}-$dimensional matrix $\mathbbmss{K}_{\bm{\epsilon}}$ in the extended Hilbert space. This is equipped with the scalar product $\langle\hskip-0.7mm\langle\chi_{1}\vert\chi_{2}\rangle\hskip-0.7mm\rangle =\mathrm{Tr}[\chi_{1}^{\dag}\chi_{2}]$ for any $\vert\chi_{1}\rangle\hskip-0.7mm\rangle$ and $\vert\chi_{2}\rangle\hskip-0.7mm\rangle$ belonging to this extended space. Here, we summarize the problem of controlling the terminal process of an open system. 
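(If the reshaping $\chi\mapsto\vert\chi\rangle\hskip-0.7mm\rangle$ is taken to be the standard column-stacking vectorization---one possible convention, which we mention only for illustration---then a two-sided action on the process matrix maps as
\begin{equation*}
A\,\chi\,B\;\longmapsto\;\big(B^{T}\otimes A\big)\vert\chi\rangle\hskip-0.7mm\rangle,
\end{equation*}
which gives one concrete way of assembling the $N^{4}$-dimensional matrix $\mathbbmss{K}_{\bm{\epsilon}}$ from the generator of Eq. \eqref{eq.6}.)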
One of the main goals of optimal control theory is to find the optimal fields $\bm{\epsilon}(t)=\{\epsilon_{m}(t)\}$ that minimize the following total objective functional: \begin{equation} \mathpzc{J}=\mathpzc{F}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)+\textstyle{\int_{0}^{t_{\mathrm{f}}}}dt\,\mathcal{G}_{f}\big(\bm{\epsilon}(t),t\big), \label{b.eq.1} \end{equation} with a final time-dependent function $\mathpzc{F}$ and a field-dependent function $\mathcal{G}_{f}$. In Eq. \eqref{b.eq.1} the notation $\chi(t_{\mathrm{f}})$ is to emphasize the dependence of the function $\mathpzc{F}$ to $\{\vert\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle ,\langle\hskip-0.7mm\langle\chi(t_{\mathrm{f}})\vert\}$. The process also follows a dynamical equation as \begin{equation} \dfrac{d\vert\chi(t)\rangle\hskip-0.7mm\rangle}{dt}=-\dfrac{i}{\hbar}\mathbbmss{K}_{\bm{\epsilon}}\vert\chi(t)\rangle\hskip-0.7mm\rangle,\qquad\qquad\vert\chi(0)\rangle\hskip-0.7mm\rangle_{\alpha} =N\delta_{\alpha N^{4}},\quad\alpha\in\{1,\ldots, N^{4}\}. \label{b.eq.2} \end{equation} We proceed to solve this optimization problem via the Krotov method. From hereon and for brevity we may use the shorthand $\chi$ instead of $\{\vert\chi\rangle\hskip-0.7mm\rangle , \langle\hskip-0.7mm\langle\chi\vert\}$ as the dynamical variable of all intermediate time-dependent functions and omit the time variable $(t)$ from $\bm{\epsilon}(t)$, $\vert\chi (t)\rangle\hskip-0.7mm\rangle$ and $\langle\hskip-0.7mm\langle\chi (t)|$. Introducing an \textit{arbitrary} process-dependent and scalar function $\Upsilon(\chi ,t)$, we can rewrite the total functional \eqref{b.eq.1} as \begin{equation} \mathpzc{J}_{\,\Upsilon}=\mathpzc{M}_{\,\Upsilon}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\Upsilon\big(\chi(0),0\big) -\textstyle{\int_{0}^{t_{\mathrm{f}}}} dt\,\mathpzc{R}_{\, \Upsilon}\big(\chi(t),\bm{\epsilon}(t),t\big). \label{b.eq.3} \end{equation} where the modified final time-dependent function $\mathpzc{M}_{\,\Upsilon}$ and the intermediate time-dependent function $\mathpzc{R}_{\,\Upsilon}$ are given by \begin{gather} \mathpzc{M}_{\,\Upsilon}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)=\mathpzc{F} \big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big) + \Upsilon\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big), \label{b.eq.4}\\ \mathpzc{R}_{\, \Upsilon}(\chi,\bm{\epsilon},t) = -\mathcal{G}_{f}(\bm{\epsilon},t) +\dfrac{\partial\Upsilon}{\partial t}-\dfrac{i}{\hbar}\dfrac{\partial\Upsilon}{\partial\vert\chi\rangle\hskip-0.7mm\rangle}\mathbbmss{K}_{\bm{\epsilon}}\vert\chi\rangle\hskip-0.7mm\rangle+\dfrac{i}{\hbar}\langle\hskip-0.7mm\langle\chi\vert\mathbbmss{K}_{\bm{\epsilon}}^{\dag}\dfrac{\partial\Upsilon}{\partial\langle\hskip-0.7mm\langle\chi\vert}. \label{b.eq.5} \end{gather} In fact, the dynamical constraint \eqref{b.eq.2} has been incorporated in the total functional $\mathpzc{J}_{\,\Upsilon}$ using the function $\Upsilon$. As we see later, the freedom in the choice of this function enables one to design a \textit{monotonically convergent} algorithm. 
That is the algorithm approaches the minimum of the modified functional $\mathpzc{J}_{\,\Upsilon}$ after each iteration, \begin{equation} \mathpzc{J}_{\,\Upsilon}^{({n+1})}\leqslant \mathpzc{J}_{\,\Upsilon}^{({n})},\qquad\forall n\geqslant 0, \label{b.eq.5-1} \end{equation} where \begin{equation} \mathpzc{J}_{\,\Upsilon}^{(n)}=\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\Upsilon\big(\chi(0),0\big) -\textstyle{\int_{0}^{t_{\mathrm{f}}}}dt\,\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)}(t),\bm{\epsilon}^{(n)}(t),t\big). \label{b.eq.5-2} \end{equation} Choosing the process-dependent function $\Upsilon$ suitably is the principal core of the Krotov method. In summary, the Krotov method contains two successive steps, which we explain below. \subsection{First step} Having the control fields $\bm{\epsilon}^{(n)}$ leads to the dynamics $d\vert\chi^{(n)}\rangle\hskip-0.7mm\rangle/dt=(-i/\hbar)\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert\chi^{(n)}\rangle\hskip-0.7mm\rangle$, with $\mathbbmss{K}$ evaluated at $\bm{\epsilon}^{(n)}$. These control fields $\bm{\epsilon}^{(n)}$ can be the guess fields at the beginning (when $n=0$) of the optimization process or the updated fields in the previous iteration (when $n>0$). In this step we fix and determine the arbitrary scalar function $\Upsilon$ such that the total functional $\mathpzc{J}_{\,\Upsilon}$ is \textit{maximized} over the dynamics $\chi^{(n)}$. This can be formulated in terms of the following conditions: \begin{align} &\mathrm{i.}\quad\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)},\bm{\epsilon}^{(n)},t\big)=\min_{\chi}\mathpzc{R}_{\,\Upsilon}\big(\chi,\bm{\epsilon}^{(n)},t\big)\qquad\qquad\forall t\in[0,t_{\mathrm{f}}], \label{b.eq.6}\\ &\mathrm{ii.}\quad\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)=\max_{\chi(t_{\mathrm{f}})}\mathpzc{M}_{\,\Upsilon}\big(\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big). \label{b.eq.7} \end{align} Note that the $\chi$ on the RHSs of the above equations do not need to satisfy Eq. (\ref{b.eq.2}); in this stage they are arbitrary dynamics. According to Eqs. \eqref{b.eq.6} and \eqref{b.eq.7}, the solution $\{\bm{\epsilon}^{(n)},\chi^{(n)}\}$ is the worst possible solution for the \textit{minimization} problem. In order to characterize this function, we assume that the RHS of dynamical equation \eqref{b.eq.2} and its Jacobian are bounded. Let us assume that the objective $\mathpzc{F}$ and the function $\mathcal{G}_{f}$ are bounded and twice differentiable. By adapting results of Ref. \cite{reich_monotonically_2012}, one can show that under these assumptions the following relation for the real-valued function $\Upsilon$ is a solution to the extremization problem posed by Eqs. \eqref{b.eq.6} and \eqref{b.eq.7}: \begin{equation} \Upsilon(\chi,t)=\langle\hskip-0.7mm\langle\chi\vert\Lambda\rangle\hskip-0.7mm\rangle+\langle\hskip-0.7mm\langle\Lambda\vert\chi\rangle\hskip-0.7mm\rangle+\dfrac{1}{2}\sigma (t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle, \label{b.eq.8} \end{equation} where \begin{equation} \vert\Delta\chi(t)\rangle\hskip-0.7mm\rangle=\vert\chi(t)\rangle\hskip-0.7mm\rangle-\vert\chi^{(n)}(t)\rangle\hskip-0.7mm\rangle \end{equation} is the change in the dynamics. Here the coefficients $\vert\Lambda(t)\rangle\hskip-0.7mm\rangle$ is defined as $\vert\Lambda(t)\rangle\hskip-0.7mm\rangle=\big(\partial\Upsilon/\partial\langle\hskip-0.7mm\langle\chi\vert\big)\big|_{\chi^{(n)}}$, and extrapolating the arguments of Ref. 
\cite{reich_monotonically_2012} the coefficient $\sigma(t)$ can be obtained as \begin{equation} \sigma(t)=\tilde{a}(e^{\tilde{c}(t_{\mathrm{f}}-t)}-1)+\tilde{b},\qquad\qquad\tilde{a},\tilde{b},-\tilde{c}<0. \label{b.eq.8-1} \end{equation} As a result, the extremization problem introduced in Eqs. \eqref{b.eq.6} and \eqref{b.eq.7} reduces to finding suitable coefficients $\tilde{a}$, $\tilde{b}$ and $\tilde{c}$. Before analytical calculation of these coefficients, we evaluate the necessary conditions to satisfy Eqs. \eqref{b.eq.6} and \eqref{b.eq.7}. First, we need to obtain a closed form for the function $\mathpzc{R}_{\,\Upsilon}$ [Eq. \eqref{b.eq.5}] by inserting $\Upsilon$ from Eq. \eqref{b.eq.8}. Note that Eq. \eqref{b.eq.8} also yields \begin{gather} \dfrac{\partial\Upsilon}{\partial \vert\chi\rangle\hskip-0.7mm\rangle}=\langle\hskip-0.7mm\langle\Lambda\vert +\dfrac{1}{2}\sigma(t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert , \label{b.eq.8-2}\\ \dfrac{\partial\Upsilon}{\partial \langle\hskip-0.7mm\langle\chi\vert}=\vert\Lambda\rangle\hskip-0.7mm\rangle +\dfrac{1}{2}\sigma(t)\, \vert\Delta\chi\rangle\hskip-0.7mm\rangle, \label{b.eq.8-3}\\ \dfrac{\partial\Upsilon}{\partial t}=\langle\hskip-0.7mm\langle\chi\vert\dot{\Lambda}\rangle\hskip-0.7mm\rangle+\langle\hskip-0.7mm\langle\dot{\Lambda}\vert\chi\rangle\hskip-0.7mm\rangle+\dfrac{1}{2}\dot{\sigma} (t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle+\dfrac{1}{2}{\sigma} (t)\, \langle\hskip-0.7mm\langle\dot{\chi}^{(n)}\vert\Delta\chi\rangle\hskip-0.7mm\rangle+\dfrac{1}{2}{\sigma} (t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\dot{\chi}^{(n)}\rangle\hskip-0.7mm\rangle, \label{b.eq.8-4} \end{gather} where dot is the shorthand for time derivative $(d/dt)$. By using Eqs. \eqref{b.eq.5} and \eqref{b.eq.8-2} -- \eqref{b.eq.8-4} along with $\vert\dot{\chi}^{(n)}\rangle\hskip-0.7mm\rangle=(-i/\hbar)\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert\chi^{(n)}\rangle\hskip-0.7mm\rangle$ we obtain the modified function $\mathpzc{R}_{\,\Upsilon}$ as \begin{align} \mathpzc{R}_{\,\Upsilon}(\chi,\bm{\epsilon},t)=&-\mathcal{G}_{f}(\bm{\epsilon},t)+\langle\hskip-0.7mm\langle\chi\vert\dot{\Lambda}\rangle\hskip-0.7mm\rangle +\langle\hskip-0.7mm\langle\dot{\Lambda}\vert\chi\rangle\hskip-0.7mm\rangle+\dfrac{1}{2}\dot{\sigma} (t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle- \dfrac{i}{2\hbar}{\sigma} (t)\, \langle\hskip-0.7mm\langle{\chi}^{(n)}\vert\mathbbmss{K}_{\bm{\epsilon}^{(n)}}^{\dag}\vert\Delta\chi\rangle\hskip-0.7mm\rangle +\dfrac{i}{2\hbar}{\sigma} (t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert{\chi}^{(n)}\rangle\hskip-0.7mm\rangle \nonumber\\ &-\dfrac{i}{\hbar}\langle\hskip-0.7mm\langle\Lambda\vert \mathbbmss{K}_{\bm{\epsilon}}\vert\chi\rangle\hskip-0.7mm\rangle-\dfrac{i}{2\hbar}\sigma(t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\mathbbmss{K}_{\bm{\epsilon}}\vert\chi\rangle\hskip-0.7mm\rangle +\dfrac{i}{\hbar}\langle\hskip-0.7mm\langle\chi\vert \mathbbmss{K}_{\bm{\epsilon}}^{\dag}\vert\Lambda\rangle\hskip-0.7mm\rangle +\dfrac{i}{2\hbar}\sigma(t)\, \langle\hskip-0.7mm\langle\chi\vert\mathbbmss{K}_{\bm{\epsilon}}^{\dag}\vert\Delta\chi\rangle\hskip-0.7mm\rangle . \label{b.eq.8-5} \end{align} Thus, the necessary condition to satisfy Eq. 
\eqref{b.eq.6}, $\big(\partial\mathpzc{R}_{\,\Upsilon}/\partial\langle\hskip-0.7mm\langle\chi\vert\big)\big|_{(\chi^{(n)},\bm{\epsilon}^{(n)})}=0$, reduces to \begin{equation} \dfrac{d\vert\Lambda (t) \rangle\hskip-0.7mm\rangle}{dt}=-\dfrac{i}{\hbar}\mathbbmss{K}_{\bm{\epsilon}^{(n)}}^{\dag}\vert\Lambda (t)\rangle\hskip-0.7mm\rangle. \label{b.eq.9} \end{equation} The boundary condition of this dynamical equation is obtained from Eq. \eqref{b.eq.7}. In a similar vein, the necessary condition for satisfying this relation, i.e., $\big(\partial\mathpzc{M}_{\,\Upsilon}/\partial\langle\hskip-0.7mm\langle\chi\vert\big)\big|_{\chi^{(n)}(t_{\mathrm{f}})}=0$, can be read from Eqs. \eqref{b.eq.4} and \eqref{b.eq.8} as \begin{equation} \vert\Lambda(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle=-\Big(\dfrac{\partial\mathpzc{F}}{\partial\langle\hskip-0.7mm\langle\chi(t_{\mathrm{f}})\vert}\Big)\Big| _{\chi^{(n)}(t_{\mathrm{f}})}. \label{b.eq.10} \end{equation} \subsubsection{Analytical calculation of $\sigma(t)$} To calculate the constant parameters of the function $\sigma(t)$ in Eq. \eqref{b.eq.8-1}, one can first work out the following inequality equivalent to Eq. \eqref{b.eq.7}: \begin{equation} \Delta\mathpzc{M}_{\,\Upsilon}=\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}})+\Delta\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)< 0, \label{in-eq} \end{equation} for all variations of the terminal dynamics $\Delta\chi(t_{\mathrm{f}})=\chi(t_{\mathrm{f}})-\chi^{(n)}(t_{\mathrm{f}})$. Using Eqs. \eqref{b.eq.4} and \eqref{b.eq.8}, and assuming $\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Delta\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle\neq 0$, the inequality (\ref{in-eq}) gives \begin{equation} \dfrac{\Delta\mathpzc{F}+2\mathrm{Re}\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Lambda(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Delta\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}+\dfrac{1}{2}\sigma(t_{\mathrm{f}})< 0,\qquad\qquad\forall \Delta\chi(t_{\mathrm{f}}), \label{b.eq.11} \end{equation} where \begin{equation} \Delta\mathpzc{F}=\mathpzc{F}\big(\chi^{(n)}(t_{\mathrm{f}})+\Delta\chi(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\mathpzc{F}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big). \end{equation} Thus, from Eq. \eqref{b.eq.8-1}, the worst-case scenario to fulfill the strict condition \eqref{b.eq.11} is \begin{align} &\qquad\quad\qquad 2A+\tilde{b}<0, \label{b.eq.12}\\ &A=\sup_{\Delta\chi(t_{\mathrm{f}})}\dfrac{\Delta\mathpzc{F}+2\mathrm{Re}\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Lambda(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi(t_{\mathrm{f}})\vert\Delta\chi(t_{\mathrm{f}})\rangle\hskip-0.7mm\rangle}. \label{b.eq.13} \end{align} Now let us consider an inequality equivalent to the relation \eqref{b.eq.6}, i.e., \begin{equation} \Delta\mathpzc{R}_{\,\Upsilon}=\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)}+\Delta\chi,\bm{\epsilon}^{(n)},t\big)-\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)},\bm{\epsilon}^{(n)},t\big)\geqslant 0, \label{b.eq.14.1} \end{equation} for all $\Delta\chi =\chi-\chi^{(n)}$ and $t\in [0,t_{\mathrm{f}}]$. Employing Eqs. 
\eqref{b.eq.8-5} and \eqref{b.eq.9} we get the following equation for $\Delta\mathpzc{R}_{\,\Upsilon}$: \begin{equation} \Delta\mathpzc{R}_{\,\Upsilon}=\langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle\Big(\dfrac{1}{2}\dot{\sigma}(t)+\dfrac{\sigma(t)}{\hbar}\dfrac{\mathrm{Im}\langle\hskip-0.7mm\langle\Delta\chi\vert\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert\Delta\chi\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle}\Big). \label{b.eq.14.2} \end{equation} From this equation and the fact that $x/2\geqslant -| x| \,\forall x\in\mathbbmss{R}$, we obtain a lower bound for $\Delta\mathpzc{R}_{\,\Upsilon}$ as follows: \begin{gather} \Delta\mathpzc{R}_{\,\Upsilon}\geqslant\langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle\Big(\dfrac{1}{2}\dot{\sigma}(t)-|\sigma(t)|B \Big), \label{b.eq.15}\\ B=\dfrac{2}{\hbar}\sup_{\lbrace\Delta\chi\rbrace;\; t\in[0,t_{\mathrm{f}}]}\Big|\dfrac{\mathrm{Im}\langle\hskip-0.7mm\langle\Delta\chi\vert\mathbbmss{K}_{\bm{\epsilon}^{(n)}}\vert\Delta\chi\rangle\hskip-0.7mm\rangle}{\langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle}\Big|. \label{b.eq.14} \end{gather} Here we assume $\langle\hskip-0.7mm\langle\Delta\chi\vert\Delta\chi\rangle\hskip-0.7mm\rangle\neq 0$, or equivalently $\Delta \chi \neq 0$. If it is demanded that this lower bound be positive \begin{equation} \qquad\qquad\dfrac{1}{2}\dot{\sigma}(t)-|\sigma(t)| B> 0, \label{b.eq.14.1} \end{equation} then the strict minimum condition for $\mathpzc{R}_{\,\Upsilon}$ at $\chi ^{(n)}$, i.e., $\Delta\mathpzc{R}_{\,\Upsilon}>0$, will hold. By using the fact that $| \sigma(t)|\geqslant \sigma(t),\;\forall t\in [0,t_{\mathrm{f}}]$, the condition \eqref{b.eq.14.1} can be reformulated as \begin{equation} \qquad\qquad\dfrac{1}{2}\dot{\sigma}(t)-\sigma(t)B> 0. \label{b.eq.15} \end{equation} From Eq. \eqref{b.eq.8-1} we obtain the inequalities $\dot{\sigma}(t)\, \geqslant -\tilde{a}\tilde{c}$ and $\sigma (t)\, \leqslant \tilde{b}$, which in turn yield \begin{equation} (1/2)\dot{\sigma}(t)-B\sigma(t)\, \geqslant -(\tilde{a}\tilde{c}/2)-B\tilde{b}. \end{equation} Then, imposing the following condition is equivalent to satisfying the condition \eqref{b.eq.15}: \begin{equation} -\dfrac{1}{2}\tilde{a}\tilde{c}-B\tilde{b}> 0, \label{b.eq.16} \end{equation} which does not depend on time $t$. We proceed here to find the parameters $\tilde{a}$, $\tilde{b}$, and $\tilde{c}$ of the inequalities \eqref{b.eq.12} and \eqref{b.eq.16}. We set $\tilde{a}$ and $\tilde{b}$ as \begin{equation} \tilde{a}=\tilde{b}=-\bar{A},\qquad\bar{A}=\max \{\zeta_{A},2A+\zeta_{A}\}, \label{b.eq.17} \end{equation} where $\zeta_{A}>0$. If $\bar{A}=\zeta_{A}$ ($\bar{A}=2A+\zeta_{A}$), then we have $2A-\zeta_{A} <0$ ($-\zeta_{A}<0$) for the inequality \eqref{b.eq.12}. Therefore, the last inequality is satisfied with the choice of $\tilde{a}$ and $\tilde{b}$ as in Eq. \eqref{b.eq.17}. By this choice the inequality \eqref{b.eq.16} also becomes \begin{equation} \tilde{c}+2B>0. \label{b.eq.17-1} \end{equation} Since $B\geqslant 0$ [Eq. \eqref{b.eq.14}], then one solution for the parameter $\tilde{c}$ is \begin{equation} \tilde{c}=\zeta_{B},\qquad\zeta_{B}>0. \label{b.eq.17-2} \end{equation} By substituting Eqs. \eqref{b.eq.17} and \eqref{b.eq.17-2} into Eq. \eqref{b.eq.8-1}, we obtain the following form for $\sigma (t)$: \begin{equation} \sigma (t)=-\bar{A}e^{\zeta_{B}(t_{\mathrm{f}}-t)}. 
\label{b.eq.17-3} \end{equation} Note that we can also set the parameters $\zeta_{A}$ and $\zeta_{B}$ to zero. For more details on this particular choice, see the end of this section. \subsection{Second step} The dynamics-dependent function $\Upsilon$ obtained in the previous step [Eq. \eqref{b.eq.8}] with the choice of $\sigma (t)$ as in Eq. \eqref{b.eq.17-3} enables us to find a local minimum for the total functional $\mathpzc{J}_{\,\Upsilon}$ [Eq. \eqref{b.eq.3}] with respect to the control fields $\bm{\epsilon}(t)$. According to Eq. \eqref{b.eq.3}, this implies that \begin{equation} \bm{\epsilon}^{(n+1)}=\mathrm{arg}\{\max_{\bm{\epsilon}}\mathpzc{R}_{\,\Upsilon}(\chi,\bm{\epsilon},t)\},\qquad\qquad\forall t\in[0,t_{\mathrm{f}}]. \label{b.eq.18} \end{equation} The optimal field $\bm{\epsilon}^{(n+1)}$ obtained in this step needs to be compatible with the dynamics $\chi^{(n+1)}$ given by \begin{equation} \dfrac{d\vert\chi^{(n+1)}\rangle\hskip-0.7mm\rangle}{dt}=-\dfrac{i}{\hbar}\mathbbmss{K}_{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}\rangle\hskip-0.7mm\rangle,\qquad\qquad\vert\chi^{(n+1)}(0)\rangle\hskip-0.7mm\rangle_{\alpha}=N\delta_{\alpha N^{4}},\quad\alpha\in\{1,\ldots, N^{4}\}. \label{b.eq.19} \end{equation} Thus, the maximization problem \eqref{b.eq.18} and the dynamical equation \eqref{b.eq.19} should be solved simultaneously. We first consider one of the necessary conditions for the maximization problem \eqref{b.eq.18}, i.e., $\big(\partial\mathpzc{R}_{\,\Upsilon}/\partial\epsilon_{m}\big)\big|_{(\chi^{(n+1)},\bm{\epsilon}^{(n+1)})}=0$. Employing Eq. \eqref{b.eq.8-5} one can obtain an expression for the partial derivative $\partial\mathpzc{R}_{\,\Upsilon}/\partial\epsilon_{m}$ as \begin{align} \dfrac{\partial\mathpzc{R}_{\,\Upsilon}}{\partial\epsilon_{m}}=-\dfrac{\partial\mathcal{G}_{f}}{\partial\epsilon_{m}} -\dfrac{i}{\hbar}\langle\hskip-0.7mm\langle\Lambda\vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\vert\chi\rangle\hskip-0.7mm\rangle-\dfrac{i}{2\hbar}\sigma(t)\, \langle\hskip-0.7mm\langle\Delta\chi\vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\vert\chi\rangle\hskip-0.7mm\rangle +\dfrac{i}{\hbar}\langle\hskip-0.7mm\langle\chi\vert \Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}^{\dag}}{\partial\epsilon_{m}}\Big)\vert\Lambda\rangle\hskip-0.7mm\rangle +\dfrac{i}{2\hbar}\sigma(t)\, \langle\hskip-0.7mm\langle\chi\vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}^{\dag}}{\partial\epsilon_{m}}\Big)\vert\Delta\chi\rangle\hskip-0.7mm\rangle. 
\label{b.eq.19-1} \end{align} Then, the local maximum condition for the function $\mathpzc{R}_{\,\Upsilon}$ at $\bm{\epsilon}^{(n+1)}$, i.e., $\big(\partial\mathpzc{R}_{\,\Upsilon}/\partial\epsilon_{m}\big)\big|_{(\chi^{(n+1)},\bm{\epsilon}^{(n+1)})}=0$, becomes \begin{align} \Big(\dfrac{\partial\mathcal{G}_{f}}{\partial\epsilon_{m}}\Big)\Big|_{\bm{\epsilon}^{(n+1)}}=\dfrac{2}{\hbar}\mathrm{Im}\Big\{\langle\hskip-0.7mm\langle\Lambda(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle\Big\} +\dfrac{\sigma(t)}{\hbar}\mathrm{Im}\Big\{\langle\hskip-0.7mm\langle\Delta\chi^{(n+1)}(t)\, \vert\Big(\dfrac{\partial\mathbbmss{K}_{\bm{\epsilon}}}{\partial\epsilon_{m}}\Big)\Big| _{\bm{\epsilon}^{(n+1)}}\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle\Big\}, \label{b.eq.19-2} \end{align} where $\vert\Delta\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle=\vert\chi^{(n+1)}(t)\rangle\hskip-0.7mm\rangle-\vert\chi^{(n)}(t)\rangle\hskip-0.7mm\rangle$. In this work, we restrict ourself to the field-dependent function $\mathcal{G}_{f}$ similar to Eq. \eqref{eq.13}, which are quadratic in the fields. We also assume that the field-dependent generator $\mathbbmss{K}_{\bm{\epsilon}}$ depends on the fields $\bm{\epsilon}$ linearly through the field-system interaction Hamiltonian $V_{\mathrm{field}}(t)$. Having such constraint and field-dependent generator as well as using Eq. \eqref{b.eq.19-1}, we obtain another necessary condition for the local maximum of the function $\mathpzc{R}_{\,\Upsilon}$ at $\bm{\epsilon}^{(n+1)}$, i.e., $\big(\partial^{2}\mathpzc{R}_{\,\Upsilon}/\partial\epsilon_{m}^{2}\big)\big|_{(\chi^{(n+1)},\bm{\epsilon}^{(n+1)})}< 0$, as follows: \begin{equation} w_{m} / f_{m}(t)>0. \label{b.eq.20} \end{equation} After these two steps, the change of the total functional $\mathpzc{J}_{\,\Upsilon}$ by iteration becomes \begin{align} \Delta\mathpzc{J}_{\,\Upsilon}^{(n+1)}=&\mathpzc{J}_{\,\Upsilon}^{(n+1)}-\mathpzc{J}_{\,\Upsilon}^{(n)}=\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n+1)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)-\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big) \nonumber\\ &-\textstyle{\int_{0}^{t_{\mathrm{f}}}}dt\,\big\{\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n+1)},t\big)-\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n)},t\big)+\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n)},t\big)-\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)},\bm{\epsilon}^{(n)},t\big)\big\}, \label{b.eq.21} \end{align} where for writing the RHS of the second equality we have used Eq. \eqref{b.eq.5-2}. Equations \eqref{b.eq.6} and \eqref{b.eq.7} lead to the inequalities $\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n)},t\big)\geqslant \mathpzc{R}_{\,\Upsilon}\big(\chi^{(n)},\bm{\epsilon}^{(n)},t\big),\;\forall t\in [0,t_{\mathrm{f}}]$ and $\mathpzc{M}_{\,\Upsilon}\big(\chi^{(n+1)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)\leqslant \mathpzc{M}_{\,\Upsilon}\big(\chi^{(n)}(t_{\mathrm{f}}),t_{\mathrm{f}}\big)$. We also obtain the inequality $\mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n+1)},t\big)\geqslant \mathpzc{R}_{\,\Upsilon}\big(\chi^{(n+1)},\bm{\epsilon}^{(n)},t\big),\,\forall t\in [0,t_{\mathrm{f}}]$ from Eq. \eqref{b.eq.18}. From these inequalities and Eq. 
\eqref{b.eq.21}, the monotonic decrease of the modified total functional $\mathpzc{J}_{\,\Upsilon}$ with respect to the iteration number, i.e., $\Delta\mathpzc{J}_{\,\Upsilon}^{(n+1)}\leqslant 0,\,\forall n\geqslant 0$, is proved. \textit{Remark}.---Note that the choice $\zeta_{i}=0$ ($i=A,B$) may leave the modified total functional $\mathpzc{J}_{\,\Upsilon}$ unchanged under a change in the dynamics $\chi$. That is, we may have $\Delta\mathpzc{M}_{\,\Upsilon}=0,\;\forall\Delta\chi(t_{\mathrm{f}})$, and $\Delta\mathpzc{R}_{\,\Upsilon}=0,\;\forall\Delta\chi(t),\, t\in[0,t_{\mathrm{f}}]$. Thus, with this choice, in the worst case the total objective functional $\mathpzc{J}_{\,\Upsilon}$ remains unchanged during the first step of the Krotov method. However, after minimizing the modified total functional $\mathpzc{J}_{\,\Upsilon}$ with respect to the field in the second step, strict monotonic convergence is almost always preserved even with the choice $\zeta_{i}=0$ for one or both of $i=A$ and $i=B$. \twocolumngrid \end{widetext} \end{document}
arXiv
Nice name In set theory, a nice name is used in forcing to impose an upper bound on the number of subsets in the generic model. It is used in the context of forcing to prove independence results in set theory such as Easton's theorem. Formal definition Let $M\models $ ZFC be transitive, $(\mathbb {P} ,<)$ a forcing notion in $M$, and suppose $G\subseteq \mathbb {P} $ is generic over $M$. Then for any $\mathbb {P} $-name $\tau $ in $M$, we say that $\eta $ is a nice name for a subset of $\tau $ if $\eta $ is a $\mathbb {P} $-name satisfying the following properties: (1) $\operatorname {dom} (\eta )\subseteq \operatorname {dom} (\tau )$ (2) For all $\mathbb {P} $-names $\sigma \in M$, $\{p\in \mathbb {P} |\langle \sigma ,p\rangle \in \eta \}$ forms an antichain. (3) (Natural addition): If $\langle \sigma ,p\rangle \in \eta $, then there exists $q\geq p$ in $\mathbb {P} $ such that $\langle \sigma ,q\rangle \in \tau $. References • Kunen, Kenneth (1980). Set theory: an introduction to independence proofs. Studies in logic and the foundations of mathematics. Vol. 102. Elsevier. p. 208. ISBN 0-444-85401-0.
Wikipedia
Proportional division A proportional division is a kind of fair division in which a resource is divided among n partners with subjective valuations, giving each partner at least 1/n of the resource by his/her own subjective valuation. For the social choice rule, see proportional-fair rule. Proportionality was the first fairness criterion studied in the literature; hence it is sometimes called "simple fair division". It was first conceived by Steinhaus.[1] Example Consider a land asset that has to be divided among 3 heirs: Alice and Bob who think that it's worth 3 million dollars, and George who thinks that it's worth $4.5M. In a proportional division, Alice receives a land-plot that she believes to be worth at least $1M, Bob receives a land-plot that he believes to be worth at least $1M (even though Alice may think it is worth less), and George receives a land-plot that he believes to be worth at least $1.5M. Existence A proportional division does not always exist. For example, if the resource contains several indivisible items and the number of people is larger than the number of items, then some people will get no item at all and their value will be zero. Nevertheless, such a division exists with high probability for indivisible items under certain assumptions on the valuations of the agents.[2] Moreover, a proportional division is guaranteed to exist if the following conditions hold: • The valuations of the players are non-atomic, i.e., there are no indivisible elements with positive value. • The valuations of the players are additive, i.e., when a piece is divided, the value of the piece is equal to the sum of its parts. Hence, proportional division is usually studied in the context of fair cake-cutting. See proportional cake-cutting for detailed information about procedures for achieving a proportional division in the context of cake-cutting. A more lenient fairness criterion is partial proportionality, in which each partner receives a certain fraction f(n) of the total value, where f(n) ≤ 1/n. Partially proportional divisions exist (under certain conditions) even for indivisible items. Variants Super-proportional division A super-proportional division is a division in which each partner receives strictly more than 1/n of the resource by their own subjective valuation. Of course such a division does not always exist: when all partners have exactly the same value functions, the best we can do is give each partner exactly 1/n. So a necessary condition for the existence of a super-proportional division is that not all partners have the same value measure. The surprising fact is that, when the valuations are additive and non-atomic, this condition is also sufficient. I.e., when there are at least two partners whose value function is even slightly different, then there is a super-proportional division in which all partners receive more than 1/n. See super-proportional division for details. Relations to other fairness criteria Implications between proportionality and envy-freeness Proportionality (PR) and envy-freeness (EF) are two independent properties, but in some cases one of them may imply the other. When all valuations are additive set functions and the entire cake is divided, the following implications hold: • With two partners, PR and EF are equivalent; • With three or more partners, EF implies PR but not vice versa. For example, it is possible that each of three partners receives 1/3 in his subjective opinion, but in Alice's opinion, Bob's share is worth 2/3. 
When the valuations are only subadditive, EF still implies PR, but PR no longer implies EF even with two partners: it is possible that Alice's share is worth 1/2 in her eyes, but Bob's share is worth even more. On the contrary, when the valuations are only superadditive, PR still implies EF with two partners, but EF no longer implies PR even with two partners: it is possible that Alice's share is worth 1/4 in her eyes, but Bob's is worth even less. Similarly, when not all cake is divided, EF no longer implies PR. The implications are summarized in the following table:

Valuations    | 2 partners                       | 3+ partners
Additive      | $EF\implies PR$, $PR\implies EF$ | $EF\implies PR$
Subadditive   | $EF\implies PR$                  | $EF\implies PR$
Superadditive | $PR\implies EF$                  | -
General       | -                                | -

Stability to voluntary exchanges One advantage of the proportionality criterion over envy-freeness and similar criteria is that it is stable with regards to voluntary exchanges. As an example, assume that a certain land is divided among 3 partners: Alice, Bob and George, in a division that is both proportional and envy-free. Several months later, Alice and George decide to merge their land-plots and re-divide them in a way that is more profitable for them. From Bob's point of view, the division is still proportional, since he still holds a subjective value of at least 1/3 of the total, regardless of what Alice and George do with their plots. On the other hand, the new division might not be envy free. For example, it is possible that initially both Alice and George received a land-plot which Bob subjectively values as 1/3, but now after the re-division George got all the value (in Bob's eyes) so now Bob envies George. Hence, using envy-freeness as the fairness criterion implies that we must constrain the right of people to voluntary exchanges after the division. Using proportionality as the fairness criterion has no such negative implications. Individual rationality An additional advantage of proportionality is that it is compatible with individual rationality in the following sense. Suppose n partners own a resource in common. In many practical scenarios (though not always), the partners have the option to sell the resource in the market and split the revenues such that each partner receives exactly 1/n. Hence, a rational partner will agree to participate in a division procedure, only if the procedure guarantees that he receives at least 1/n of his total value. Additionally, there should be at least a possibility (if not a guarantee) that the partner receives more than 1/n; this explains the importance of the existence theorems of super-proportional division. See also • Allocative efficiency • Fair cake-cutting • Perfect division • Inequity aversion References 1. Steinhaus, Hugo (1948). "The problem of fair division". Econometrica. 16 (1): 101–104. JSTOR 1914289. 2. Suksompong, Warut (2016). "Asymptotic existence of proportionally fair allocations". Mathematical Social Sciences. 81: 62–65. arXiv:1806.00218. doi:10.1016/j.mathsocsci.2016.03.007. • A summary of proportional and other division procedures appears in: Austin, A. K. (1982). "Sharing a Cake". The Mathematical Gazette. 66 (437): 212. doi:10.2307/3616548. JSTOR 3616548.
Wikipedia
Journal of Ethnobiology and Ethnomedicine Traditional use and perception of snakes by the Nahuas from Cuetzalan del Progreso, Puebla, Mexico Romina García-López1, Alejandro Villegas1,2, Noé Pacheco-Coronel1 & Graciela Gómez-Álvarez ORCID: orcid.org/0000-0001-9335-36971 Journal of Ethnobiology and Ethnomedicine volume 13, Article number: 6 (2017) Indigenous cultures are the result of their adaptation to the natural surroundings, such that among their main features is a set of knowledge, technologies and strategies for the appropriation of nature. In Cuetzalan del Progreso, Puebla, Mexico, snakes represent 71.1% of the total local herpetofauna; in addition, different groups of Nahuas have been shown to use various snake species in many ways. This study was conducted to investigate the traditional uses of snakes in this cultural group. Formal and informal interviews were conducted with the inhabitants of the communities. During these interviews, 30 images of the different species of snakes present in the area were presented to the subjects, so that they could recognize them and reveal the knowledge they possess about them. A usage analysis was applied to each species considering the following categories: food, medicinal, artisanal and magical-religious. Likewise, the frequency, the diversity and the value of use were estimated for these snakes. A total of 51 interviews were carried out. The individuals recognized 18 out of 30 images of snakes that were presented. The total number of usage categories was five; we found that the magic-religious use was the most frequently mentioned, cited by 32 people. Boa imperator and Atropoides nummifer were the species with the highest value of use. More than half of the interviewees mentioned killing snakes because they are poisonous and aggressive. In the magic-religious aspect the "Danza de los Negritos" is highlighted; this is a local festival, brought by Africans, that alludes to snakes. This study revealed that snakes are still very important to the culture of Cuetzalan del Progreso, with the magical-religious and medicinal uses standing out. On the other hand, the fear of snakes and the misperception of their toxicity might represent a potential threat to their conservation. Therefore, it is necessary to carry out long-term monitoring of ethnozoological activities and to develop a sustainable management plan compatible with the cultural characteristics of the natives of the region. Over time, the state of the world's ecosystems has deteriorated due to human activities, making it necessary to analyze all the variables that intervene in said deterioration, to recognize which may be modified or eliminated in favor of the environment. So-called "environmental problems" can be described, interpreted and, most importantly, resolved only through an integrated approach [1]. One of the variables that intervene in environmental deterioration is the relationship of man with nature. The relationship of man with animals is affected by the cultural aspects of the different local groups, which are the result of their adaptation to the natural environments, and among whose principal characteristics is a great amount of knowledge, technologies and strategies for the appropriation of species. As a result, these aspects put pressure on the populations of the different species that man can utilize in a sustainable way, or endanger their survival [2].
Therefore, it is of a high importance to analyze how the human populations perceive and incorporate those traditional elements to relate to nature [3], and so contribute with effective strategies of conservation [4]. In this sense, those ethnozoological studies that explore the relationships between communities and the utilized fauna as well as a perception they have of the different species are very important. In Mexico, more than 80% of those areas considered protected are inhabited by indigenous groups, conserving their native language [5]. Within Mexican land most of the worlds' ecosystems are present, and inside these, we can find countless species of vertebrates that are endemic of the country, with amphibians and reptiles being most important, each with 373 (31%) and 830 (68.9%) species respectively [6]. Wilson and Johnson [7] reported that the highest levels of herpetofauna endemism in Mesoamerica are found in Mexico, with 259 (66.8%) amphibian species and 474 (57.2%) reptile species. Among these reptiles, snakes have a particular importance, and have been considered a sacred deity, associated to the forces of nature due to their unique method of locomotion, similar to the movement of water and lightning [8], by different cultures of the world at different moments [9]. According to De la Garza [9], in Mesoamerica and the western cultures, snakes are closely related to the earth, and symbolize the Great Mother Creator of the Cosmos, which means origin, but also death. The deadly poison in some species of snakes makes them be considered a being of supernatural powers and so, to be worshipped, but also feared. As a consequence, snakes have created a strong aversion and are therefore persecuted by man, being probably that group of animals with the worst reputation [10]. Studies undertaken in rural localities of the northern Mexico, as well as in Brazil, Portugal and Nepal show us that different groups of people have a negative attitude toward snakes, considering them "bad", and for this reason they should be eliminated; also, there's the belief that all snakes are poisonous and therefore they should be sacrificed [11–14]. This perception has also been observed in urban centers, including among students [15, 16]. In Mexico, snakes have been traditionally used by different ways, including as food with the boa (Boa imperator), snake which the Mexicas called mazacoatl, and ate its flesh, considering it softer than any domestic bird [17]. In our days, rattlesnakes are eaten by the native and rural groups of the northwest and central Mexico [18, 19]. Traditional medicine also utilizes these animals, because snakes are considered miraculous animals, in that they heal all kinds of illnesses [20, 21]. Meat, viscera, blood, skin, fangs and rattles are used to cure all types of illnesses, and is also considered a divine and protector animal [22–24]. Especially the Crotalus genus is still used to cure skin spots, cancer, ulcers, zits, rashes, facial moles, blackheads, stress, hemorrhoids, heart disease, rheumatism, itching, diabetes and sexual impotence [25, 26]. In other countries, for example Brazil, the flesh, fat and skin of boids and crotalids is consumed for the relief and treatment of rheumatism, arthritis, swellings, and muscular pains in humans [27–29], as well as for domestic animals and against the bite of other poisonous snakes [30, 31]. In northeastern Argentina, the boa is used against chicken pox and measles [32]. 
In some communities of India, the cobra is used as an animal of veneration and worship, and python flesh for bad vision [33, 34]. In Australia, northwestern groups of natives in Tasmania use the skin and the faeces of poisonous snakes as remedies for bone fractures and back pain [35]. In this same context, in some localities of Mexico, local groups conserve a traditional perception and use of snakes, especially in the State of Puebla, in the municipality of Cuetzalan del Progreso. This place includes 71.1% of the total of the local herpetofauna [36, 37], and 80% of the inhabitants speak the Nahua language [38]. Added to this, the known fact that Nahua groups have the information for the use of different species of snakes [39, 40], we considered it necessary to retake this knowledge also from other localities in the area: San Miguel Zinacapan and Ayotzinapan, in the municipality of Cuetzalan del Progreso, in the State of Puebla, to document the perception and the practices of the usage of snakes in the region. Cuetzalan del Progreso is located at Northwestern Puebla (19° 21' 36″ and 20° 05′ 18″ N, 97° 24′ 36″ and 97° 34′ 05″ W), with altitudes ranging from 320 to 1500 m (Fig. 1) [38]. Three physiographic areas converge here: Sierra Madre Oriental, known locally as the Sierra Norte de Puebla; the Llanura Costera del Golfo Norte, comprising Veracruz, and the Eje Volcánico Transversal. The existence of three distinct physiographic conditions in a 960 km2 area gives rise to a variety of landscapes with particular and complex conditions, and to specific geological substrate, soil, weather, vegetation, morphology and geomorphological processes [41]. This area has a warm climate with year-round rain. Cuetzalan is one of the regions with the highest values of precipitation in the country, ranging from 1900 to 4100 mm annually. Due to the irregularity of the topography and the local weather conditions, Cuetzalan has different types of vegetation, such as pine-oak forest, semi-deciduous tropical forest, and cloud forest [36]. San Miguel Tzinacapan has a population of 2 939 inhabitants and Ayotzinapan has one of 1 212; most of these dedicate themselves to corn, coffee and beans agriculture, and to a lower degree to hunting [38]. Study area showing the small town of Cuetzalan del Progreso and San Miguel Tzinacapan and Ayotzinapan communities, from which data were collected Samplings were performed from June of 2010 to July of 2011; to obtaining information, the intentional selection of people [42] was used through the Snowball sampling technique [43], "local experts" who had traditional knowledge of local fauna were considered, they were natives of the localities and did field work or hunting. Housewives were also considered with the goal of gathering more detailed information about the food and medicinal uses. In both localities inhabitants were interviewed in a formal and informal manner [44]. The formal interviews were carried out in two ways: the directed and the undirected ones. In the first, they were used to obtain information about a specific topic, the second allowed the informant to lead the course of the interview through a conversation [45]. The interviews consisted in presenting 30 visual stimuli (pictures) of the different species of snakes that are present in the area to the subjects, so that they would recognize them and reveal information about the knowledge they possess on them (like feeding habits, habitat, behavior, reproduction and toxicity) and the use that reptiles receive [46]. 
To analyze the data obtained during the interviews, according to Cotton [47], a usage analysis was carried out, considering the following categories: Food use (F), for when the snake or any of its parts was eaten; Medicinal (M), for when the snake or any of its parts were used as treatment for a disease or affliction that affected the body or soul of the person; Clothing (C), for when any part of the snake was used as clothes or as accessories (like earrings made out of vertebrae, leather bracelets, leather belts and others); Artisanal (A), for when people make artistic representations of the snakes in any kind of material, or use parts of the snakes as decorative ornaments; Magical-religious (R), for when there's beliefs, myths, superstitions and rites that people perform regarding snakes, in addition to amulet use. With this, the frequency of use, diversity of use and value of use was estimated using the following equations: The frequency of use was estimated with equation (1): $$ FU=\frac{M{n}_s}{Ni} $$ where Mn s is the number of mentions per species (s) and Ni is the number of interviews that were carried out. To estimate the diversity of use for each species, the equation 2 was used: $$ D{U}_s=\frac{C_s}{5} $$ where C s is the number of categories in which the species (s) was mentioned, and 5 is the number of total categories considered in this study. The value of use for each species was estimated with the summation of the value of use for each species in each category, for this the equation (3) was used: $$ V{U}_c=\frac{\sum iM{n}_c}{Ni} $$ where Mn c is the number of mentions of each interviewed (i) for each species in any category of use (c), Ni is the number of interviews, the subscript is substituted in each one of the categories of use in equation (3). The risk category in which the species are classified was searched for in the Mexican Official Norm-059 [48], the Red List of the International Union for Conservation of Nature [49], and the protection category in which they are included according to the Convention on International Trade in Endangered Species of Wild Fauna and Flora [50]. A total of 51 interviews were carried out, 43 of the respondents were men and eight were women, whose ages ranged between 17 and 83 years: 18 young people (Y; 17–39 years), 17 adults (A; 40–59) years and 16 elder (E; 60–83 years). The interviews included 18 agriculturists, six merchants, six housewives, five craftsmen, five teachers, three huntsmen and two dancers; the other six were: three professionals, two students and a painter. The individuals recognized 18 out of 30 images of snakes that were presented; these snakes were placed in the use categories that were mentioned in the interview. Eleven out of 18 snakes could be found in at least one category of protected species, according to the Mexican laws Nom-059-ECOL-2010, IUCN and CITES (Table 1). With the answers that were given by the respondents it was evident they possessed information on the biology of snakes in aspects of feeding habits (n = 38 mentions; Y = 12, A = 14, E = 12), habitat (n = 32 mentions; Y = 11; A = 11; E = 10), behavior (n = 24 mentions; Y = 9, A = 8, E = 7), reproduction (n = 19; Y = 4, A = 8, E = 7) and to a lesser extent on their toxicity (n = 11 mentions). Some informants (n = 15) indicated that snakes are harmful, because they cause death, wounds, they bite and are poisonous. More than the third part of the interviewees (n = 20) pointed out nonpoisonous snakes and referred to them as if they had poison. 
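(To illustrate how equations (1)-(3) translate into numbers, the short Python sketch below computes the frequency of use (FU), diversity of use (DU) and value of use (VU) from interview tallies. The counts shown are hypothetical, not the study's data, and the exact reading of the mention counts is ours.)

# Illustrative only: hypothetical tallies, not the study's data.
# One reading of equations (1)-(3): FU = (interviewees mentioning the
# species) / Ni, DU = (categories with at least one mention) / 5, and
# VU = sum over categories of (mentions in that category) / Ni.
N_INTERVIEWS = 51   # Ni
N_CATEGORIES = 5    # F, M, C, A, R

species_data = {
    # species: (interviewees mentioning it, mentions per category)
    "Boa imperator":       (25, {"F": 3, "M": 2, "C": 5, "A": 4, "R": 20}),
    "Atropoides nummifer": (12, {"F": 2, "M": 4, "C": 0, "A": 0, "R": 6}),
}

for species, (n_respondents, by_category) in species_data.items():
    fu = n_respondents / N_INTERVIEWS                                   # equation (1)
    du = sum(1 for m in by_category.values() if m > 0) / N_CATEGORIES   # equation (2)
    vu = sum(m / N_INTERVIEWS for m in by_category.values())            # equation (3)
    print(f"{species}: FU = {fu:.2f}, DU = {du:.2f}, VU = {vu:.2f}")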
On the other hand, there were persons (n = 5) who considered snakes beneficial, because they put an end to pests, give "luck" to crops and eat other snakes.
Table 1 Species of snakes used by the inhabitants of Cuetzalan. The use categories are indicated with the following letters: Food use (F), medicinal (M), clothing (C), artisanal use (A) and magical-religious use (R)
The total number of categories of use was five (Fig. 2). The magical-religious category was the one mentioned the most (n = 32). Boa imperator stands out because of the beliefs linked to it; for example, it is believed that its mere presence on their land protects the inhabitants' crops. Together with Drymarchon melanurus, it is also part of the "Danza de los Negritos", a ritual of African origin and a celebration of the region that symbolizes the healing of a snakebite through the death of the snake. About Bothrops asper, it was mentioned that its fangs are used as good luck amulets and to attract women. Geophis sp. and Tropidodipsas sartorii are kept inside a jar of corn for a week; the snakes are then freed in the countryside, and it is believed that the jar will attract protection and good luck to their homes. In the medicinal aspect, the informants (n = 12) consider snakes useful. For example, they use Drymobius margaritiferus to cure all kinds of illness by ingesting its meat; the meat of Atropoides nummifer is used to cure diabetes, and that of Bothrops asper to treat rheumatism. From Imantodes cenchoa only the fat is used; it is smeared on the person's body to cure cancer. Tantilla rubra's meat is used as a remedy for mosquito bites. A smaller number of informants (n = 8) mentioned using snakes as part of their garments and as accessories; they indicated that the skins of Boa imperator (Fig. 3) and Tantilla rubra are used to make belts and shoes, and that of Bothrops asper for belts. Regarding the artisanal use, the informants mentioned that they tend to use Boa imperator's skin to make wallets and knife sheaths; Bothrops asper is used as an ornament for their houses, after preparing the skin, and its vertebrae are used as bracelets. The fangs, vertebrae and skin are prepared by the local tanner, who, after thoroughly cleaning the material, applies chrome and salt to it over 15 days. Finally, the food use received the lowest number of mentions (n = 5), referring mainly to Boa imperator and Bothrops asper, which are eaten smoked and stewed in chilpozontle and mole (dishes made of chile); Tantilla rubra and Atropoides nummifer are also eaten, but only as smoked meat.
Number of mentions of the informants according to the categories of use of the species registered in Cuetzalan del Progreso, Puebla
Tanned skin of Boa imperator in possession of an inhabitant of San Miguel Tzinacapan, Cuetzalan del Progreso, Puebla
Regarding the evaluation of the use of species, Table 2 shows the values obtained: Boa imperator and Atropoides nummifer have the highest frequency of use, while Bothrops asper is the snake used in the most diverse ways.
Table 2 Snake uses in Cuetzalan del Progreso, Puebla
Inhabitants' perception of snakes
Encounters between inhabitants and snakes usually happen accidentally during their daily activities, for example while working in the field, when traveling to other towns, or when snakes wander into their homes or the surroundings; when they find a snake, they usually kill it.
A great part of the interviewees (n = 36) mentioned killing snakes because they are harmful, aggressive and poisonous. Most of the villagers mentioned that snakes are killed with a knife and the animal is taken far away so that the bones do not infect a barefoot passer-by; they also tend to put the snake inside a sack once it is dead, being very careful with its fangs because of the poison. If the snake they encounter is Boa imperator or Bothrops asper, they use the skin, but if it is any other species, they dispose of it far away. Snakes are also killed by using a stick and beating their head directly; after they do this, they usually hang the snake from a tree or throw it off a cliff so that no one is at risk of harm, as they mention that the vertebrae can be dangerous, rotting the skin of persons who have direct contact with them.
Danza de los Negritos
In the magical-religious aspect, the "Danza de los Negritos" is part of their cultural traditions. The interviewees mentioned its origins and described it as a dance that was brought by the Africans who arrived in the area as slaves. They believe that the chief of this African culture had been bitten by a snake, and to heal the wound and avoid his death, a ritual was performed that ended with the death of the snake and the chief being cured. With the passing of time, the ritual, which constitutes an annual festivity, has incorporated various new elements, such as a special costume and a representation of the snake made of wood (Fig. 4) or plant roots. The "Danza de los Negritos" is not an activity that happens on just one day, but a long process of rehearsals and continued dancing throughout the year, in which the chief of the dance takes charge of the dancers, feeding them during rehearsals and treating them as his own children. The dance representation lasts three days, from the 27th to the 30th of September. This dance is composed of 34 songs plus a ritual that lasts half a day.
Wooden representation of the snake, made by the inhabitants of the area for the "Danza de los Negritos" in Cuetzalan del Progreso, Puebla
Our results showed that the traditional use of snakes is very wide, wider than that reported by Martín del Campo [17] for the natives of central Mexico in pre-Columbian times, and by Gutiérrez-Mayén [36] and Blanco-Casco et al. [51] in the study area, indicating a higher use of these animals in the locality. In the State of Morelos, Galeano [52] found that, even under major pressure from urban growth, the inhabitants of some communities still preserve wildlife with use value. The same phenomenon happens in the Sierra de Nanchititla, in the State of Mexico, where the use of various snakes is reported, particularly of the rattlesnake as food, ornament, medicine and pets [19]. This paper reports a very wide use of snakes by the community, which allows us to propose that the influence of urbanization and the interposition of other cultures in the zone is still not a factor that would lead its inhabitants to change culturally; they still use wild animals. In addition, it can be observed that the Nahua community's knowledge of snakes is not limited by the age of the natives, as the young showed the same knowledge about these animals as the adults. The magical-religious use of snakes stands out for being the most common one.
Gutiérrez-Mayén [36] registered for Cuetzalan, that Boa imperator was considered as a beneficial animal, due to de belief that it takes care of the crops and frees them of plagues, as they feed off mice that could destroy their crops. According to the local people Atropoides nummifer and Drymobius margaritiferus (considered poisonous, without being so) give some kind of protection due to their venom. The medicinal properties of Boa imperator mentioned by Sahagún [39] was not documented, probably due to it being more valuable in its magic-religious properties. But the record of nine other snakes, considered medicinal is relevant. Different authors have registered the medicinal importance of snakes in central Mexico, among them. Gómez-Álvarez and Pacheco-Coronel [26], mentioning that snakes of the genus Bothrops are used to cure cancer, fatigue, muscular pains and, protectors against evil, information similar to that found in this study; the inhabitants use Bothrops asper to cure all types of sickness, including cancer. The medicinal use of snakes has been registered in all the world to cure a series of diseases similar to those already mentioned. The use of boa in Brazil and Peru [27–29, 53] and in Argentina [32]. In Australia, the cobra (Naja siamensis) and the python (Python regius) [35]. Boa imperator is also used in Brazil to protect and to heal domestic animals and cattle [30, 31]. Some authors have discussed that the curative properties of snakes are related to their mythic and symbolic importance [20, 21], attributes to their association with soil and rain water [9, 54]. The scarce food usage found in this study may be related to the belief that the inhabitants have of some snakes being poisonous, as these interviewed people insisted that they took extreme cautions when consuming these animals. Seri in northern Mexico consume rattlesnakes, but are careful to remove head and tail before, due to the purported toxicity [18]. Sahagún [39] reported the food usage of Boa imperator by the Nahua, seen also in this study, together with four other species (Imantodes cenchoa, Tantilla rubra, Atropoides nummifer and Bothrops asper). This food usage is also found in South America, where boa is also consumed [53], probably due to the large amount of meat, valued for its high protein content, and its use is frequent among the local natives [55]. For dresses, Blanco-Casco et al. [51] reported the use of the skin of Boa imperator and Bothrops asper for belts, and the vertebrae for necklaces. We found this same use in Cuetzalan, where the necklaces of vertebrae were the showiest. The artisanal use is relatively scarce, but those made of carved wood, representing a fer-de-lance (Bothrops asper), show us the attention the locals can put in such carvings, with the ventral and dorsal scales perfectly represented. The artisanal use is relatively scarce, wooden handicrafts made to represent Bothrops asper, commonly known as "nauyaca", show how much attention inhabitants put in this species in particular, it's evident they possess great knowledge of their external characteristics, with the ability to represent its scales in the ventral and dorsal part perfectly. The perception the inhabitants have about snakes, notwithstanding that in some cases they consider them useful, is generally negative; killing snakes seems to be an activity arisen from the fear they have of them, though the interviews insisted that snakes are killed for fear of being attacked. 
Such fear seems to be equal for all species of snakes, poisonous or not; this way of thinking seems to be like that of some inhabitants of northern Mexico [11, 18], from Brazil, [12, 15, 16], Portugal [13], and Nepal [14, 56], where snakes are sacrificed for fear of being bitten, and especially if these animals are large. In Mexico, the fear of serpents may have arisen a long time ago, probably among the Mexicas who adored the snake (coatl), who transmuted them into different gods that managed the world of the living and of the dead [9]. It is probable that myths and beliefs propagate fear and this results in a negative perception of snakes [57, 58]. The perception that snakes are bad and must be eliminated has been observed among students in urban settings [15, 16], which may explain that such disgust or distaste for these animals may have an explanation, not only for the religious-magic symbolism, but also psychologic [59], as the snake represents the origin, but also the destruction and death [9]. Concerning the cosmovision of the "Danza de los Negritos", it is known that this dance is not only a form of celebration, of happiness, it is also a form of petitioning, as explained by Sten [60]. León-Portilla [61] mentions that the dance is a mystic form of work for their daily life, in this dance time and space change dimension and meaning, once the rites have begun, we leave the daily profane time, to enter the mythic time of the creation, that time that does not flow, which is perennial, and is reached through rituality which communicates us with the gods. The change of time and space is done through the ritual ceremonies, being the dance one of them, and maybe the most important one, the way to talk with the gods, to tell them what we want from them, the form to thank them, to please them, to give them tribute, dance is a prayer. The "Danza de los Negritos" is done in many states of the Mexican Republic, the principal site being the mountain people of the region of the State of Puebla, in the Totonac region of Veracruz and in certain points in Michoacan and Oaxaca. In general terms, it has been considered that the topics that create the majority of the variants of this dance originated during the Colonial Period relate to work in a sugar cane hacienda, and the magic ceremony of killing a snake [62]. This study shows that snakes are still a very important part of the culture of the inhabitants of Cuetzalan del Progreso, Puebla. The ethnozoological knowledge persists in this area, finding a diversity of uses for this reptile. Snakes are very valuable in the daily lives of its inhabitants because of the magical-religious and medicinal aspects, but at the same time it can expose them to an over-exploitation. Although the food use isn't that frequent, they can be of great help in the face of an economic crisis, mitigating food shortage; also, they can fight off malnutrition as they provide meat for the inhabitants' diet. On the other hand, the fear and misperception of their toxicity, that has a negative consequence in actions the people take, may represent potential threats for their preservation, since a high percentage of them are protected species. Another specific threat is the removal of rare or endangered species. Thus, the factors that are likely responsible for the large scale killing of snakes should beconsidered in the development of biodiversity preservation strategies and in public health. 
It is necessary to raise awareness and broaden the knowledge about snakes and the healing and prevention of snakebites through educational interventions, as well as the ability to recognize poisonous species. This must be considered in environmental education strategies. Therefore, it is necessary to carry out a long-term monitoring of the ethno -zoological activities of the region, as far as snakes are concerned, to establish baseline studies and develop a short term sustainable management plan that is compatible with the cultural characteristics of the region. Toledo VM, Alarcón CP, Barón L. Estudiar lo rural desde una perspectiva interdisciplinaria: una aproximación al caso de México. Universidad Nacional Autónoma de México. SEMARNAP. 1998. Alves RRN. Relationships between fauna and people and the role of ethnozoology in animal conservation. Ethnobiol Conserv. 2012;1:1–69. Santos-Fita D, Costa-Neto EM, Cano-Contreras E. El quehacer de la etnonozoología. In: Costa Neto EM, Santos-Fita D, Vargas-Clavijo M, editors. Manual de Etnozoología. Una guía teórico-práctica para investigar la interconexión del ser humano con los animales. Valencia España: Tundra; 2009. p. 23–44. Fleury LC, Almeida J. Populações tradicionais e conservação ambiental: uma contribuição da teoria social. Rev Bra Agroeco. 2007;2:3–19. Sarukhán J, Koleff P, Carabias J, Soberón J, Dirzo R, Llorente-Bousquets J, Halffter G, González R, March I, Mohar A, Anta S, De la Maza J. Capital Natural de México. Síntesis: Conocimiento actual, evaluación y perspectivas de sustentabilidad. CONABIO. 2009. Wilson LD, Mata-Silva V, Johnson JD. A conservation reassessment of the reptiles of Mexico based on the EVS measure. Amp Rep Conserv. 2013;7:1–47. e61. Wilson LD, Johnson JD. Distributional patterns of the herpetofauna of Mesoamerica, a biodiversity hotspot. In: Wilson LD, Townsend JH, Johnson JD, editors. Conservation of Mesoamerican Amphibians and Reptiles. USA: Eagle Mountain Publishing, LC, Eagle Mountain; 2010. p. 30–235. Seler E. Las imágenes de los animales en los manuscritos mexicanos y mayas. México: Editorial Juan Pablos; 2004. De la Garza M. El universo sagrado de la serpiente entre los mayas. México: Universidad Nacional Autónoma de México; 2003. Casas-Andreu G. Mitos leyendas y realidades de los reptiles en México. Cienc Ergo Sum. 2000;7:286–91. Gatica-Colima A, Jiménez-Castro JA. Serpientes de cascabel: percepción por algunos pobladores del desierto chihuahuense en el estado de Chihuahua. Rev Lat Rec Nat. 2009;5:198–204. Santos-Fita D, Costa-Neto EM, Sciavetti A. 'Offensive' snakes: cultural beliefs and practices related to snakebites in a Brazilian rural settlement. J Ethnobiol Ethnomed. 2010;6:13. Ceríaco LMP. Human attitudes towards herpetofauna: The influence of folklore and negative values on the conservation of amphibians and reptiles in Portugal. J Ethnobiol Ethnomed. 2012;8:8. Pandey DP, Pandey GS, Devkota K, Goode M. Public perceptions of snakes and snakebite management: implications for conservation and human health in southern Nepal. J Ethnobiol Ethnomed. 2016. doi:10.1186/s13002-016-0092-0. Pinheiro LT, Mota Rodríguez JF, Borges-Nojosa DM. Formal education, previous interaction and percepción influence the attitudes of people toward the conservation of snakes in a large urban center northeastern Brazil. J Ethnobiol Ethnomed. 2016;12:25. Alves RRN, Silva VN, Trovão DMBM, Oliveira JV, Mourão JS, Dias TLP, Alves AGC, Lucena RFP, Barboza RRD, Montenegro PFGP, Vieira WLS, Souto WMS. 
Students' attitudes toward and knowledge about snakes in the semiarid region of Northeastern Brazil. J Ethnobiol Ethnomed. 2014;10:30. Martín del Campo R. Ensayo de interpretación del libro undécimo de la Historia General de las Cosas de Nueva España de Fray Bernardino de Sahagún. An Inst Biol. Universidad Nacional Autónoma de México. 1938;9:379–391. Malkin B. Seri ethnozoology. Ocass Papers Idaho State Col Mus. 1962. No. 7. Monroy-Vilchis O, Cabrera L, Suarez P, Zarco-González MM, Rodríguez SC, Urios V. Uso tradicional de vertebrados silvestres en la Sierra de Nanchititla, México. Interciencia. 2008;33:308–13. Gómez-Álvarez G, Reyes-Gómez SR, Teutli-Solano C, Valadez-Azúa R. La medicina tradicional prehispánica, vertebrados terrestres y productos medicinales de tres mercados del valle de México. Rev Etnobiol. 2007;5:86–98. Gómez-Álvarez G, Valadez-Azúa R. Anfibios, reptiles y mamíferos utilizados en los productos medicinales expuestos en tres mercados del valle de México. In: XXVIII Mesa Redonda de la Sociedad Mexicana de Antropología. Derechos humanos, pueblos indígenas, cultura y nación. McClung de Tapia E, Serrano Sánchez C. coords. México: Universidad Nacional Autónoma de México-Sociedad Mexicana de Antropología; 2014. p. 909–20. Barajas E. Los animales usados en la medicina popular mexicana. México: Imprenta Universitaria; 1951. Rabiela T. La cosecha del agua en cuencas de México. México: Centro de Investigaciones y Estudios Superiores en Antropología Social; 1985. De María y Campos T. Los animales en la medicina tradicional mesoamericana. An Antrop. 1979;16:183–222. Fitzgerald LA, Painter W, Reuter A, Hoover C. Collection, trade and regulation of reptiles and amphibians of Chihuahua desert ecoregion: Traffic North America. Wachington: Word Wildlife Fund; 2004. Gómez-Álvarez G, Pacheco-Coronel N. Uso medicinal de serpientes comercializadas en dos mercados de la Ciudad de México. Rev Etnobiol. 2010;8:51–8. Alves RRN, Alves HN. The faunal drugstore: Animal-based remedies used in traditional medicines in Latin America. J Ethnobiol Ethnomed. 2011;7:9. Alves RRN, Neta ROS, Trovão DMBM, Barbosa JEL, Barros AT, Dias TLP. Traditional uses of medicinal animals in the semi-arid region of northeastern Brazil. J Ethnobiol Ethnomed. 2012;8:4. Barros FB, Varela SAM, Pereira HM, Vicente L. Medicinal use of fauna by a traditional community in the Brazilian Amazonia. J Ethnobiol Ethnomed. 2012;8:37. Barboza RRD, Souto WMS, Mourão JS. The use of zootherapeutics in folk veterinary medicine in the district of Cubati, Paraíba State, Brazil. J Ethnobiol Ethnomed. 2007;3:32. doi:10.1186/1746-4269-3-32. Souto WMS, Mourão JS, Barboza RRD, Mendonça LET, Lucena RFP, Confessor MVA, Vieira WLS, Montenegro PFGP, Lopez LCS, Alves RRN. Medicinal animals used in ethnoveterinary practices of the 'Cariri Paraibano', NE Brazil. J Ethnobiol Ethnomed. 2011;7:30. Martínez GJ. Use of fauna in the traditional medicine of native Toba (qom) from the Argentine Gran Chaco region: an ethnozoological and conservationist approach. Ethnobiol Conser. 2013. doi:10.15451/ec2013-8-2.2-1-43. Chakravorty J, Meyer-Rochow VB, Ghosh S. Vertebrates used for medicinal purposes by members of the Nyishi and Galo tribes in Arunachal Pradesh (North-East India). J Ethnobiol Ethnomed. 2011;7:13. Jaroli DP, Mahawar MM, Vyas N. An ethnozoological study in the adjoining areas of Mount Abu wildlife sanctuary, India. J Ethnobiol Ethnomed. 2010;6:6. Vats R, Thomas S. 
A study on use of animals as traditional medicine by Sukuna Tribe of Busega District in North-western Tanzania. J Ethnobiol Ethnomed. 2015;11:38. Gutiérrez-Mayén MG. Anfibios y reptiles del municipio de Cuetzalan del Progreso, Puebla. Estudio Herpetológico Informe Final. Benemérita Universidad Autónoma de Puebla. 1999. Canseco-Márquez L, Gutiérrez-Mayén MG. Herpetofauna del municipio de Cuetzalan del Progreso Puebla. In: Ramírez-Bautista A, Canseco-Márquez L, Mendoza-Quijano F, editors. Inventarios Herpetofaunísticos de México: Avances en el conocimiento de su Biodiversidad. Puebla: Pub Soc Herp Mex; 2006. p. 180–96. INEGI. Principales Resultados del Censo de Población y Vivienda 2010. Puebla: Instituto Nacional de Estadística y Geografía; 2010. Sahagún B. Historia general de las cosas de Nueva España. México: Editorial Porrúa; 1950. Hernández F. . Historia natural de la Nueva España: Volumen II, Obras Completas, Tomo III. Universidad Nacional Autónoma de México. 1959. INEGI. Prontuario de Información geográfica municipal de los Estados Unidos Mexicanos, vol. 21043. Cuetzalan del Progreso Puebla: Clave geostadística; 2009. p. 1–9. Albuquerque UP, Lucena RFP, Neto EMFL. Selection of Research Participants. In: Albuquerque UP, Cunha LVFC, Lucena RFP. Alves RRN, editors. Methods and Techniques in Ethnobiology and Ethnoecology. New York: Human Press, Springer; 2014. p. 1–13. Goodman L. Snowball Sampling. Ann Math Stat. 1961;12:48–170. Institute of Mathematical Statistics. Santos-Rodríguez D, Costa-Neto EM, Cano-Contreras E. Metodología de la investigación etnonozoológica. In: Costa-Neto EM, Santos-Fita D, Vargas-Clavijo A, editors. Manual de Etnozoología. Una guía teórico-práctica para investigar la interconexión del ser humano con los animales. Valencia España: Tundra; 2009. p. 253–72. Russell BH. Research methods in anthropology: Qualitative and quantitative approaches. 4th. Oxford: Altamira Press; 2006. Medeiros PM, Almeida ALS, Lucena RFP, Souto FJB, Albuquerque UP. Use of visual stimuli in ethnobiological research. In: Albuquerque UP, Cunha LVFC, Lucena RFP. Alves RRN, editors. Methods and Techniques in Ethnobiology and Ethnoecology. New York: Human Press, Springer; 2014. p.87–98. Cotton CM. Ethnobotany, principles and applications. Canada: John Wiley and Sons; 1996. SEMARNAT (Secretaría de Medio Ambiente y Recursos Naturales). Norma Oficial Mexicana NOM-059-SEMARNAT-2010, Protección ambiental-Especies nativas de México de flora y fauna silvestres-Categorías de riesgo y especificaciones para su inclusión, exclusión o cambio-Lista de especies en riesgo. 2010. http://www.semarnat.gob.mx/node/17. Accessed 6 Jan 2017. IUCN (International Union for Conservation Nature). The IUCN Red List of Threatened Species. Version 2014. 2014. p. 2. http://www.iucnredlist.org. Accessed 6 Jan 2017. CITES (Convention on International Trade in Endangered Species of Wild Fauna and Flora). Cites Trade Data Dashboards. 2014. https://www.cites.org/. Accessed 6 Jan 2017. Blanco-Casco MAM, Cornejo-Rodríguez FJ, Salgado-Espinoza C, Romero-Luna R, Navarrete-Zamora N, Mora-Malerna A, Neyra-González L, López-Binqüist C. Artesanías y medio ambiente. México: CONABIO; 2009. Galeano MME. Estrategias de investigación social cualitativa. Medellín: La Carrera Editores; 2007. Mendonça LET, Vieira WLS, Alves RRN. Caatinga Ethnoherpetology: Relationships between herpetofauna and people in a semiarid region of northeastern Brazil. Amp Rep Conserv. 2014;8:24–32. Aguilera C. Flora y fauna mexicana. México: Mitología y tradiciones. 
Colección Raíces Mexicanas. Everest Mexicana; 1985. Bapiste L, Hernández S, Polanco R, Quiceno M. La Fauna Silvestre Colombiana: Una Historia Económica y Social de un Proceso de Marginalización. In: Ulloa A, editor. Rostros culturales de la fauna: las relaciones entre los humanos y los animales en el contexto colombiano. Bogotá: Inst Col Antrop-ICANH-, Fundación Natura y Fundación Mac Arthur; 2002. Chettri K, Chhetry DT. Diversity of snakes in Sarlahi District, Nepal. Our Nature. 2013;11:201–7. Burghardt GM, Murphy JB, Chiszar D, Hutchins M. Combating ophiophobia: origins, treatment, education, and conservation tools. In: Mullin SJ, Seigel RA, editors. Snakes: ecology and conservation. New York and London: Comstock Publishing Associates, A Division of Cornell University Press; 2009. p. 262–80. 62. Dodd Jr CK. Status, conservation, and management. In: Seigel RA, Collins JT, Novak SS, editors. Snakes: ecology and evolutionary biology. New York: Macmillan Publishing Company; 1987. p. 478–513. Jung K. Símbolos de transformación. 2nd ed. Buenos Aires: Paidós; 1962. Sten M. Ponte a bailar tú que reinas. Antropología de la danza prehispánica. México: Joaquín Mortiz; 1990. p. 11–2. León-Portilla M. México-Tenochtitlán, su espacio y tiempo sagrados. México: Plaza y Valdés; 1987. Wachtel N. Los vencidos. Los indios del Perú frente a la Conquista española (1530–1570). Madrid: Alianza Universidad; 1973. We thank to Urbano Vázquez for their support during field work as well as Gabriela and Araceli Vázquez for their collaboration as translators during the interviews. Also, we thank to Kritzia Pardavé, Alma Pérez, David Bahena, Flor Hernández, Ana Linares and Linda Martínez for their help during the visits to the communities. Availability of data and material The datasets during and/or analyzed during the current study available from the corresponding author on reasonable request. RGL and GGA designed and carried out this study, RGL and NPC conducted the field work, RGL and AV did the data analysis, and AV and GGA wrote the manuscript. All the authors read and approved the final version of this paper. Since the manuscript doesn't include details, images, or videos relating to individual participants no need of submitting consent for publication. Laboratorio de Vertebrados, Facultad de Ciencias, Universidad Nacional Autónoma de México, C.P. 04510, Ciudad de México, Mexico Romina García-López , Alejandro Villegas , Noé Pacheco-Coronel & Graciela Gómez-Álvarez Departamento de Etología, Fauna Silvestre y Animales de Laboratorio, Facultad de Medicina Veterinaria y Zootecnia, Universidad Nacional Autónoma de México, C.P. 04510, Ciudad de México, Mexico Alejandro Villegas Search for Romina García-López in: Search for Alejandro Villegas in: Search for Noé Pacheco-Coronel in: Search for Graciela Gómez-Álvarez in: Correspondence to Graciela Gómez-Álvarez. García-López, R., Villegas, A., Pacheco-Coronel, N. et al. Traditional use and perception of snakes by the Nahuas from Cuetzalan del Progreso, Puebla, Mexico. J Ethnobiology Ethnomedicine 13, 6 (2017) doi:10.1186/s13002-016-0134-7 Ethnozoology Nahuas Usage category Usage value
CommonCrawl
\begin{document} \title{{On Robust Tie-line Scheduling \\ in Multi-Area Power Systems} \\ {\small Working paper}} \author{{Ye Guo \qquad Subhonmesh Bose \qquad Lang Tong} \thanks{\scriptsize Y. Guo and L. Tong are with the School of Electrical and Computer Engineering, Cornell University, Ithaca, NY, USA. (Emails: {\{yg299,lt35\}@cornell.edu}). S. Bose is with the Dept. of Electrical and Computer Engineering at the University of Illinois at Urbana-Champaign, Urbana, IL, USA. (Email: {[email protected]}).} } \date{} \maketitle \begin{abstract} The tie-line scheduling problem in a multi-area power system seeks to optimize tie-line power flows across areas that are independently operated by different system operators (SOs). In this paper, we leverage the theory of multi-parametric linear programming to propose algorithms for optimal tie-line scheduling respectively within a deterministic and a robust optimization framework. Aided by a coordinator, the proposed methods are proved to converge to the optimal schedule within a finite number of iterations. A key feature of the proposed algorithms, besides their finite step convergence, is that SOs do not reveal their dispatch cost structures, network constraints, or natures of uncertainty sets to the coordinator. The performance of the algorithms is evaluated using several power system examples. \end{abstract} \section{Introduction} \label{sec:intro} For historic and technical reasons, different parts of an interconnected power system and their associated assets are dispatched by different system operators (SOs). We call the geographical footprint within an SO's jurisdiction an \emph{area}, and transmission lines that interconnect two different areas as \emph{tie-lines}. Power flows over such tie-lines are generally scheduled 15 -- 75 minutes prior to power delivery. The report in \cite{MISO&PJM:10Overview} indicates that current scheduling techniques often lead to suboptimal tie-line power flows. The economic loss due to inefficient tie-line scheduling is estimated to the tune of \$73 million between the areas controlled by MISO and PJM alone in 2010. Tie-lines often have enough transfer capability to fulfill a significant portion of each area's power consumption \cite{White&Pike:11WP}. Thus they form important assets of multi-area power systems. SOs from multiple areas typically cannot aggregate their dispatch cost structures and detailed network constraints to solve a \textit{joint} optimal power flow problem. Therefore, distributed algorithms have been proposed. Prominent examples include \cite{Kim&Baldick:97TPS, ConejoAguado98TPS, Bakirtzis&Biskas:03TPS} that adopt the so-called \emph{dual decomposition} approach. These methods are iterative, wherein each SO optimizes the grid assets within its area, given the Lagrange multipliers associated with inter-area constraints. Typically, a coordinator mediates among the SOs and iteratively updates the multipliers. Alternative \emph{primal decomposition} approaches are also proposed in \cite{GuoTongetc:CRP_TPWRS, Li&Wu&Zhang&Wang::15TPS, Zhao&LitvinovZheng:14TPS}. Therein, the primal variables of the optimization problem are iteratively updated, sometimes requiring the SO of one area to reveal part of its cost structure and constraints to the SO of another area or a coordinator. Traditionally, solution techniques for the tie-line scheduling problem assume that the SOs and/or the coordinator has perfect knowledge of the future demand and supply conditions at the time of scheduling. 
Such assumptions are being increasingly challenged with the rapid adoption of distributed energy resources in the distribution grid and variable renewable generation like wind and solar energy in the bulk power systems. Said differently, one must explicitly account for the uncertainty in demand and supply in the tie-line scheduling problem. To that end, \cite{ahmadi2014multi,JiZhengTong:16TPS} propose to minimize the expected aggregate dispatch cost and \cite{LiWu_RobustMAED2016} propose to minimize the maximum of that cost. In this paper, we adopt the latter paradigm -- the \emph{robust} approach. \subsection*{Our contribution} With the system model in Section \ref{sec:systemModel}, we first formulate the deterministic tie-line scheduling problem in Section \ref{sec:deterministic}, where we propose an algorithm to solve this deterministic problem that draws from the theory of multiparametric programming \cite{borrelli2003constrained}. The key feature of our algorithm is that a coordinator can produce the optimal tie-line schedule upon communicating only \emph{finitely many times} with the SO in each area. In contrast to \cite{Zhao&LitvinovZheng:14TPS}, our method does not require SOs to reveal their cost structures nor their constraints to other SOs or to the coordinator. In Section \ref{sec:robust}, we formulate the robust counterpart of the tie-line scheduling problem. We then propose a technique that alternately uses the algorithm for the deterministic variant and a mixed-integer linear program to solve the robust problem. Again, our technique is proved to converge to the optimal robust tie-line schedule that requires the coordinator to communicate finitely many times with each SO. Also, SOs are not required to reveal the nature and range of the values the uncertain demand and available supply can take. Our proposed framework thus circumvents the substantial communication burden of the method proposed in\cite{LiWu_RobustMAED2016} towards the same problem. We remark that \cite{LiWu_RobustMAED2016} adopts the \emph{column-and-constraint generation} technique described in \cite{zeng2013solving} that requires SOs to reveal part of their network constraints, costs and ranges of demand and available renewable supply to the coordinator. We empirically demonstrate the performance of our algorithm in Section \ref{sec:simulation} and conclude in Section \ref{sec:conclusion}. \section{System model} \label{sec:systemModel} To formulate the tie-line scheduling problem, we begin by describing the model for multi-area power systems. Throughout, we restrict ourselves to a two-area power system, pictorially represented in Figure \ref{fig:1} for the ease of exposition. The model and the proposed methods can be generalized for tie-line scheduling among more than two areas. \begin{figure} \caption{\small An illustration of a two-area power system.} \label{fig:1} \end{figure} For the power network in each area, we distinguish between two types of buses: the \emph{internal} buses and the \emph{boundary} buses. The boundary ones in each area are connected to their counterparts in the other area via tie-lines. Internal buses do not share a connection to other areas. Assume that each internal bus has a dispatchable generator, a renewable generator, and a controllable load\footnote{While we assume that all loads are controllable, uncontrollable load at any node can be easily modeled by letting the limits on the allowable power demand at that node to be equal.}. 
Boundary buses do not have any asset that can inject or extract power. Such assumptions are not limiting in that one can derive an equivalent power network in each area that adheres to these assumptions. Let the power network in area $i$ be comprised of $n_i$ internal buses and $\overline{n}_i$ boundary buses for each $i=1,2$. We adopt a linear DC power flow model in this paper.\footnote{See \cite{ Erseghe:15TPS, Magnússon2015ADMMACOPF}, and the references therein for solution approaches for a multi-area ACOPF problem.} This approximate model sets all voltage magnitudes to their nominal values, ignores transmission line resistances and shunt reactances, and deems differences among the voltage phase angles across each transmission line to be small. Consequently, the real power injections into the network is a linear map of voltage phase angles (expressed in radians) across the network. To arrive at a mathematical description, denote by $g_i \in \mathbf{R}^{n_i}$, $w_i \in \mathbf{R}^{n_i}$, and $d_i \!\in\! \mathbf{R}^{n_i}$ as the vectors of (real) power generations from dispatchable generators, renewable generators, and controllable loads, respectively. Let $\theta_i \in \mathbf{R}^{n_i}$ and $\overline{\theta}_i \in \mathbf{R}^{\overline{n}_i}$ be the vectors of voltage phase angles at internal and boundary buses, respectively. Then, the power flow equations are given by \begin{align} \begin{pmatrix} \v{B}_{11} & \v{B}_{1\bar{1}}& & \\ \v{B}_{\bar{1}1} & \v{B}_{\bar{1}\bar{1}} & \v{B}_{\bar{1}\bar{2}} & \\ & \v{B}_{\bar{2}\bar{1}} & \v{B}_{\bar{2}\bar{2}} & \v{B}_{\bar{2}2} \\ & & \v{B}_{2\bar{2}} & \v{B}_{22} \end{pmatrix} \begin{pmatrix} \theta_{1}\\ \overline{\theta}_{1}\\ \overline{\theta}_{2}\\ \theta_{2} \end{pmatrix} = \begin{pmatrix} g_{1}+w_1-d_{1}\\ 0\\ 0\\ g_{2}+w_2-d_{2} \end{pmatrix}. \label{eq:ged_dclf} \end{align} Non-zero entries of the coefficient matrix depend on reciprocals of transmission line reactances, the unspecified blocks in that matrix are zeros. Throughout, assume that one of the boundary buses in area 1 is set as the slack bus for the two-area power system. That is, the voltage phase angle at said bus is assumed zero. Power injections from the supply and demand assets at the internal buses of area $i$ are constrained as \begin{align} \underline{G}_i \leq g_i \leq \overline{G}_i, \ \ 0 \leq w_i \leq \overline{W}_i, \ \ \underline{D}_i \leq d_i \leq \overline{D}_i. \label{eq:gdw.const} \end{align} The inequalities are interpreted elementwise. The lower and upper limits on dispatchable generation $\underline{G}_i, \overline{G}_i$ are assumed to be known at the time when tie-line flows are being scheduled. Our assumptions on the available renewable generation $\overline{W}_i$ and the limits on the demands $[\underline{D}_i, \overline{D}_i]$ will vary in the subsequent sections. In Section \ref{sec:deterministic}, we assume that these limits are known and provide a distributed algorithm to solve the deterministic tie-line scheduling problem. In Section \ref{sec:robust}, we formulate the robust counterpart, where these limits are deemed uncertain and vary over a known set. We then describe a distributed algorithm to solve the robust counterpart. The power transfer capabilities of transmission lines within area $i$ are succinctly represented as \begin{align} \v{H}_i \theta_i + \v{\overline{H}}_i \overline{\theta}_i \leq f_i \label{eq:defHi} \end{align} for each $i=1,2$. 
Here, $\v{H}_i$ and $\v{\overline{H}}_i$ define the branch-bus admittance matrices, and $f_i$ models the respective transmission line capacities. Similarly, the transfer capabilities of tie-lines joining the two areas assume the form \begin{align} \v{\overline{H}}_{12} \overline{\theta}_1 + \v{\overline{H}}_{21} \overline{\theta}_2 \leq f_{12}. \label{eq:defH12} \end{align} Again, $\v{\overline{H}}_{12}$, $\v{\overline{H}}_{21}$ denote the relevant branch-bus admittance matrices and $f_{12}$ models the tie-line capacities. Finally, we describe the cost model for our two-area power system. For respectively procuring $g_i$ and $w_i$ from dispatchable and renewable generators, and meeting a demand of $d_i$ from controllable loads, let the dispatch cost in area $i$ be given by \begin{align} \[P^g_i\right]^\T g_i + \[P^w_i\right]^\T \( \overline{W}_i - w_i \) + \[P^d_i\right]^\T \( \overline{D}_i - d_i \). \label{eq:defP} \end{align} We use the notation $v^\T$ to denote the transpose of any vector or matrix $v$. The linear cost structure in the above equation is reminiscent of electricity market practices in many parts of the U.S. today. The second summand models any spillage costs associated with renewable generators. The third models the disutility of not satisfying all demands. \section{The deterministic tie-line scheduling problem} \label{sec:deterministic} Tie-line flows are typically scheduled ahead of the time of power delivery. The lead time makes the supply and demand conditions uncertain during the scheduling process. Within the framework of our model, the available capacity in renewable supply and lower and upper bounds on power demands, i.e., $\overline{W}_i, \underline{D}_i, \overline{D}_i$, can be uncertain. In this section, we ignore such uncertainty and formulate the deterministic tie-line scheduling problem, wherein we assume perfect knowledge of $\overline{W}_i$, $\underline{D}_i$ and $\overline{D}_i$ to decide the dispatch in each area and the tie-line flows. Our discussion of the deterministic version will serve as a prelude to its robust counterpart in Section \ref{sec:robust}. To simplify exposition, consider the following notation. \begin{align*} x_i := \(g_{i},w_i,d_i,\theta_i \)^\T, \ \ \xi_i := \(\overline{W}_i, \underline{D}_i, \overline{D}_i \)^\T, \ \ y := \( \overline{\theta}_1, \overline{\theta}_2 \)^\T \end{align*} for $i = 1,2$. The above notation allows us to succinctly represent the constraints \eqref{eq:ged_dclf} -- \eqref{eq:defHi} as \begin{align*} \v{A}^x_i x_i+ \v{A}^\xi_i \xi_i + \v{A}^y_i y \leq b_i \end{align*} for each $i = 1,2$ and suitably defined matrices $\v{A}^x_i, \v{A}^\xi_i, \v{A}^y_i$ and vector $b_i$. Denote by $m_i$ the number of inequality constraints in the above equation. Next, we describe transmission constraints on tie-line power flows in \eqref{eq:defH12} as \begin{align*} y \in {\cal Y} \subset \mathbf{R}^{Y}. \end{align*} Without loss of generality, one can restrict ${\cal Y}$ to be a polytope\footnote{Assuming the power network to be connected, the modulus of the phase angle of any bus can be constrained to lie within the sum of admittance-weighted transmission line capacities connecting that bus to the slack bus.}. Finally, the cost of dispatch in area $i$, as described in \eqref{eq:defP}, can be written as $$ c_i(x_i, \xi_i) := c^0_i + [c^x_i]^\T \ x_i + [ c^\xi_i ]^\T \ \xi_i$$ for scalar $c^0_i$ and vectors $c^x_i$, $c^\xi_i$. Equipped with the above notation, we define the deterministic tie-line scheduling problem as follows. 
\begin{equation} \begin{alignedat}{8} & \underset{x_1, x_2, y}{\text{minimize}} & & \left[ c_1\(x_1, \xi_1\) + c_2\(x_2, \xi_2\) \right], \\ & \text{subject to} \quad && \v{A}^x_i x_i + \v{A}^\xi_i \xi_i + \v{A}^y_i y \leq b_i, \ i = 1,2,\\ &&& y \in {\cal Y}. \end{alignedat} \label{eq:detProb} \end{equation} \subsection{Distributed solution via critical region exploration} The structure of the optimization problem in \eqref{eq:detProb} lends itself to a distributed solution architecture that we describe below. Our proposed technique is similar in spirit to the critical region projection method described in \cite{GuoTongetc:CRP_TPWRS}.\footnote{The cost structure in \cite{GuoTongetc:CRP_TPWRS} is quadratic; the linear cost case does not directly follow from \cite{GuoTongetc:CRP_TPWRS}.} We assume that each area is managed by a system operator (SO), and a \emph{coordinator} mediates between the SOs. Assume that the SO of area $i$ (call it SO$_i$) knows the dispatch cost $c_i$ and the linear constraint involving $x_i, \xi_i, y$ in \eqref{eq:detProb} in area $i$, and that SOs and the coordinator all know ${\cal Y}$. Our algorithm relies on the properties of \eqref{eq:detProb} that we describe next. To that end, notice that \eqref{eq:detProb} can be written as \begin{align} \underset{y \in {\cal Y}}{\text{minimize}} \ {J^*\(y, \xi_1, \xi_2\)} := {J_1^*\(y, \xi_1\) + J_2^*\(y, \xi_2\)}, \label{eq:detProb.2} \end{align} where \begin{equation} \label{eq:areaProb} J_i^*\(y, \xi_i\) := \underset{x_i}{\text{minimum}} \quad c_i\(x_i, \xi_i\), \quad \text{subject to} \quad \v{A}^x_i x_i + \v{A}^\xi_i \xi_i + \v{A}^y_i y \leq b_i. \end{equation} Assume throughout that all optimization problems parameterized by $y$ is feasible for each $y \in {\cal Y}$. Techniques from \cite{LiWu_RobustMAED2016} can be leveraged to shrink ${\cal Y}$ appropriately, otherwise. The optimization problem in \eqref{eq:areaProb} is a multi-parametric linear program, linearly parameterized in $\(y, \xi_i\)$ on the right-hand side\footnote{The problem in \eqref{eq:areaProb} reformulated using the so-called epigraph form yields a multi-parametric program that is classically recognized as one linearly parameterized on the right-hand side.}. Such optimization problems are well-studied in the literature. For example, see \cite{borrelli2003constrained}. Relevant to our algorithm is the structure of the parametric optimal cost $J_i^*$. Describing that structure requires an additional notation. We say that a finite collection of polytopes $\{ {\cal P}^1, \ldots, {\cal P}^\ell \}$ define a \emph{polyhedral partition} of ${\cal Y}$, if no two polytopes intersect except at their boundaries, and their union equals ${\cal Y}$. With this notation, we now record the properties of $J_i^*$ in the following lemma. \begin{lemma} \label{lemma:pAffinePoly} $J_i^*(y, \xi_i)$ is piecewise affine and convex in $y \in {\cal Y}$. Sets over which $J_i^*( \cdot, \xi_i)$ is affine define a polyhedral partition of ${\cal Y}$. \end{lemma} The proof is immediate from \cite[Theorem 7.5]{borrelli2003constrained}. Details are omitted for brevity. We refer to the polytopes in the polyhedral partition of ${\cal Y}$ induced by $J_i^*(\cdot, \xi_i)$ as \emph{critical regions}. Recall that the feasible set of \eqref{eq:areaProb} is described by a collection of linear inequalities. 
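For a fixed $y$ (and $\xi_i$), the problem in \eqref{eq:areaProb} is an ordinary linear program, and the set of inequality constraints that hold with equality at its solution is precisely what identifies the critical region discussed next. The following sketch is illustrative only: the problem data are random placeholders rather than any particular network model, and SciPy's \texttt{linprog} serves as a generic LP solver.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_x, n_y, m = 6, 2, 12                       # placeholder sizes
c_x = rng.uniform(-5.0, 5.0, size=n_x)       # stands in for c_i^x
A_x = rng.normal(size=(m, n_x))              # stands in for A_i^x
A_y = rng.normal(size=(m, n_y))              # stands in for A_i^y
b   = rng.uniform(5.0, 10.0, size=m)         # stands in for b_i - A_i^xi xi_i

def area_value_and_active_set(y, tol=1e-8):
    # Solve the area subproblem for a fixed y and return its optimal value
    # together with the indices of the inequality rows met with equality.
    res = linprog(c_x, A_ub=A_x, b_ub=b - A_y @ y,
                  bounds=[(0.0, 10.0)] * n_x)   # box bounds keep the LP bounded
    if res.status != 0:
        raise RuntimeError("area subproblem not solved for this y")
    active = np.where(res.slack <= tol)[0]
    return res.fun, active

J_i, active_rows = area_value_and_active_set(np.array([0.1, -0.2]))
print(J_i, active_rows)
\end{verbatim}
The rows returned in \texttt{active\_rows} are the inequality constraints that are active at the optimizer, which is the notion used below to describe critical regions.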
Essentially, each critical region corresponds to the subset of ${\cal Y}$ over which a specific set of these inequality constraints are active -- \textit{i.e.}, are met with equalities -- at an optimal solution of \eqref{eq:areaProb}. A direct consequence of the above lemma is that the \emph{aggregate cost} $J^*(\cdot, \xi_1, \xi_2)$ is also piecewise-affine and convex. Sets over which this cost is affine define a polyhedral partition of ${\cal Y}$. The polytopes of that partition -- the critical regions -- are precisely the non-empty intersections between the critical regions induced by $J_1^*(\cdot, \xi_1)$ and those by $J_2^*(\cdot, \xi_2)$. The relationship between the critical regions induced by the various piecewise affine functions is illustrated in Figure \ref{fig:criticalRegion}. In what follows, we develop an algorithm wherein the coordinator defines a sequence of points in ${\cal Y}$ towards optimizing the aggregate cost. In each step, it relies on the SOs to identify their respective critical regions and the affine descriptions of their optimal costs at these iterates. That is, SO$_i$ can compute the critical region ${\cal P}^y_i$ that contains $y \in {\cal Y}$ and the affine description $\[\alpha^y_i\right]^\T z + \beta^y_i$ of its optimal dispatch cost $J_i^*\(z, \xi_i\)$ over $z \in {\cal P}^y_i$ by parameterizing the linear program described in \eqref{eq:areaProb}\footnote{The critical region containing $y\in{\cal Y}$ is unique, except when $y$ lies at the boundary of critical regions. In that event, assume that the SO returns one of the critical regions containing $y$.}. We relegate the details of this step to Appendix \ref{sec:CRAffine} to maintain continuity of presentation. For any $y \in {\cal Y}$, we assume in the sequel that the coordinator can collect this information from the SOs to construct the critical region ${\cal P}^y$ induced by the aggregate cost containing $y$ and its affine description $\[\alpha^y\right]^\T z + \beta^y$ for $z\in {\cal P}^y$, where
\begin{align}
\begin{aligned}
{\cal P}^y := {\cal P}_1^y \cap {\cal P}_2^y, \ \ \alpha^y := \alpha_1^y + \alpha_2^y, \ \ \beta^y := \beta_1^y + \beta_2^y. \label{eq:alphaBeta}
\end{aligned}
\end{align}
\begin{figure}
\caption{A pictorial representation of the critical regions induced by the areawise parametric optimal costs $J_1^*(\cdot, \xi_1), J_2^*(\cdot, \xi_2)$, and the aggregate cost $J^*(\cdot, \xi_1, \xi_2)$. The trapezoids represent ${\cal Y}$. Differently shaded polytopes indicate different critical regions.}
\label{fig:criticalRegion}
\end{figure}
In presenting the algorithm, we assume that the coordinator can identify the \emph{lexicographically smallest} optimal solution of a linear program. A vector $a$ is said to be lexicographically smaller than $b$ if, at the first index where they differ, the entry in $a$ is less than that in $b$. See \cite{dantzig2016linear} for details on such linear programming solvers. When a linear program does not have a unique optimizer\footnote{A linear program has non-unique optimizers when it is \emph{dual degenerate}. See \cite{dantzig2016linear} for details.}, such a choice provides a tie-breaking rule.
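To make the aggregation in \eqref{eq:alphaBeta} concrete, suppose each SO reports its critical region in halfspace form, ${\cal P}^y_i = \{z : \v{E}_i z \leq h_i\}$, together with $(\alpha^y_i, \beta^y_i)$; the halfspace representation is an assumption made here purely for illustration. The coordinator then only needs to stack the inequalities and add the affine pieces, as in the following sketch.
\begin{verbatim}
import numpy as np

def aggregate_piece(E1, h1, alpha1, beta1, E2, h2, alpha2, beta2):
    # Critical region P^y = (intersection of P_1^y and P_2^y), stored in
    # halfspace form {z : E z <= h}, together with the affine description
    # alpha^T z + beta of the aggregate cost over it.
    E = np.vstack([E1, E2])
    h = np.concatenate([h1, h2])
    return E, h, alpha1 + alpha2, beta1 + beta2

# Example with two toy regions in R^2 (placeholder numbers):
E, h, alpha, beta = aggregate_piece(
    np.array([[1.0, 0.0], [-1.0, 0.0]]), np.array([1.0, 0.0]),
    np.array([2.0, -1.0]), 5.0,
    np.array([[0.0, 1.0], [0.0, -1.0]]), np.array([1.0, 0.0]),
    np.array([-0.5, 3.0]), 1.0)
\end{verbatim}
Note that neither SO needs to reveal more than the local halfspace description and affine piece at the queried point, which is consistent with the limited communication discussed above.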
The final piece required to state and analyze the algorithm is an optimality condition that is both necessary and sufficient for a candidate minimizer of \eqref{eq:detProb.2}. Stated geometrically, $y^* \in {\cal Y}$ is a minimizer of \eqref{eq:detProb.2} if and only if \begin{align} 0 \in \partial J^*( y^*, \xi_1, \xi_2) + N_{\cal Y}(y^*). \label{eq:optCriterion} \end{align} The first set on the right-hand side of \eqref{eq:optCriterion} is the sub-differential set of the aggregate cost $J^*(\cdot, \xi_1, \xi_2)$ evaluated at $y^*$ \footnote{We use the sub-differential characterization as opposed to the familiar gradient condition for optimality since $J^*(\cdot, \xi_1, \xi_2)$ is piecewise affine and may not be differentiable everywhere in ${\cal Y}$.}. And, the second set denotes the normal cone to ${\cal Y}$ at $y^*$. The addition stands for a set-sum. Algorithm \ref{alg:CRP} delineates the steps for the coordinator to solve the deterministic tie-line scheduling problem. In our algorithm, $\| v^* \|_2$ denotes the Euclidean norm of $v^*$. If ${\cal D} := \{ \alpha^1, \ldots, \alpha^{\ell_D}\}$ and $N_{\cal Y}\(y^*\) := \{ z \ | \ \v{K}^y z \geq 0 \}$, then computing the least-square solution $v^*$ amounts to solving the following convex quadratic program. \begin{eqnarray} {\text{minimize}} \quad \frac{1}{2}\vnorm{v}_2^2, \quad \text{subject to} \quad v = \sum_{j=1}^{\ell_D} \eta_j \alpha^j + \zeta, \ \ \mathds{1}^\T \ \eta = 1, \ \ \eta \geq 0, \ \ \v{K}^y \zeta \geq 0 \label{eq:ifopt} \end{eqnarray} over the variables $v \in \mathbf{R}^{\overline{n}_1 + \overline{n}_2}$, $\eta \in \mathbf{R}^{\ell_D}$, and $\zeta\in\mathbf{R}^{\ell_N}$, where $\mathds{1}$ is a vector of all ones, and $\v{K}^y \in \mathbf{R}^{\( \overline{n}_1 + \overline{n}_2\) \times \ell_N}$. \begin{algorithm} \caption{Solving the deterministic tie-line scheduling problem.} \label{alg:CRP} \begin{algorithmic}[1] \Initialize{$y \gets $ any point in ${\cal Y}$, $J^* \gets \infty$, \\${\cal D} \gets $ empty set, $\varepsilon \gets$ small positive number.} \Do \State Communicate with the SOs to obtain ${\cal P}^y$ and $\alpha^y, \beta^y$. \State Minimize $\[\alpha^y\right]^\T z + \[\beta^y\right]$ over ${\cal P}^y$. \label{step:min} \State $y^{\text{opt}} \gets$ lexicographically smallest minimizer in step \ref{step:min}. \State $J^{\text{opt}} \gets$ optimal cost in step \ref{step:min}. \If {$ J^{\text{opt}} < J^*$,} \State $y^* \gets y^{\text{opt}}$, $J^* \gets J^{\text{opt}}$, ${\cal D} \gets \{ \alpha^y \}$. \Else \State ${\cal D} \gets {\cal D} \cup \{ \alpha^y \}$. \EndIf \State $v^* \gets \mathop{\rm argmin}_{v \in {\rm conv}({\cal D}) + N_{{\cal Y}}(y^*)} \| v \|_2^2$. \State $y \gets y^{\text{opt}} - \varepsilon v^*$. \label{step:yUpdate} \doWhile{$v^* \neq 0$.} \end{algorithmic} \end{algorithm} \subsection{Analysis of the algorithm} The following result characterizes the convergence of Algorithm \ref{alg:CRP}. See Appendix \ref{sec:finiteTime} for its proof. \begin{theorem} \label{thm:finiteTime} Algorithm \ref{alg:CRP} terminates after finitely many steps, and $y^*$ at termination optimally solves \eqref{eq:detProb.2}. \end{theorem} The above result fundamentally relies on the fact that each time the variable $y$ is updated, it belongs to a critical region (induced by the aggregate cost) that the algorithm has not encountered so far. And, there are only finitely many such critical regions. That ensures termination in finitely many steps. 
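Before proceeding, we note that the least-norm computation in \eqref{eq:ifopt} is straightforward to prototype. The sketch below is ours and is included only for illustration; it assumes that the normal cone $N_{\cal Y}(y^*)$ is supplied through a finite set of generators (the conic-hull description of the cone, which is always available when ${\cal Y}$ is a polytope), and it uses \texttt{cvxpy} merely as a convenient modeling layer rather than as the implementation behind the results reported later.

\begin{verbatim}
# Illustrative sketch of the least-norm step: minimum-norm element of
# conv(D) + N_Y(y*), with the normal cone given as {N @ lam : lam >= 0}.
# At an interior point of Y, pass N = np.zeros((n, 1)).
import cvxpy as cp
import numpy as np

def least_norm_direction(alphas, N):
    """alphas: list of subgradients alpha^j; N: (n, k) cone generators."""
    A = np.column_stack(alphas)                  # n x ell_D
    n, ell = A.shape
    v = cp.Variable(n)
    eta = cp.Variable(ell, nonneg=True)
    lam = cp.Variable(N.shape[1], nonneg=True)
    cons = [v == A @ eta + N @ lam, cp.sum(eta) == 1]
    cp.Problem(cp.Minimize(0.5 * cp.sum_squares(v)), cons).solve()
    return v.value           # v* = 0 certifies that y* is optimal
\end{verbatim}

The returned $v^*$ is the vector used to update $y$ in step \ref{step:yUpdate} of Algorithm \ref{alg:CRP}; termination corresponds to $v^* = 0$.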
Each time the algorithm ventures into a new critical region, we store the optimizer and the optimal cost over that critical region in the variables $y^{\text{opt}}$ and $J^{\text{opt}}$. Forcing the linear program to choose the lexicographically smallest optimizer always picks a unique vertex of the critical region as $y^{\text{opt}}$. Unless $J^{\text{opt}}$ improves upon the cost at $y^*$, we ignore the new point $y^{\text{opt}}$. However, the exploration of the new critical region provides a possibly new sub-gradient of the aggregate cost at $y^*$. The sub-differential set at $y^*$ is given by the convex hull of the sub-gradients of the aggregate cost over all critical regions that $y^*$ is a part of. The set ${\cal D}$ we maintain is such that ${\rm conv}({\cal D})$ is a \emph{partial} sub-differential set of the aggregate cost at $y^*$. Notice that $$ {\rm conv}({\cal D}) \subseteq \partial J^*(y^*, \xi_1, \xi_2)$$ throughout the algorithm. Therefore, any $y^*$ that meets the termination criterion of the algorithm automatically satisfies \eqref{eq:optCriterion}. As a result, such a $y^*$ is an optimizer of \eqref{eq:detProb.2}. The proposed technique is attractive in that each SO only needs to communicate finitely many times with the coordinator for the latter to reach an optimal tie-line schedule. Further, each SO$_i$ can compute its optimal dispatch $x_i^*$ by solving \eqref{eq:areaProb} with $y^*$. A closer look at the nature of the communication between the SOs and the coordinator reveals that an SO will not have to disclose the complete cost structure nor a complete description of the constraints within its area to the coordinator. \begin{remark} \label{rem:pieceLine} Algorithm \ref{alg:CRP} allows the coordinator to minimize $$F(y) := F_1(y) + F_2(y)$$ in a distributed manner, where $F_i: {\cal Y} \to \mathbf{R}$ satisfies two properties. First, it is piecewise affine and convex. Second, given any $y \in {\cal Y}$, SO$_i$ can compute an affine segment containing that $y$. While we do not explicitly characterize how fast the algorithm converges to its optimum, one can expect the number of steps to convergence to grow with the number of critical regions so induced. However, we do not expect our algorithm to explore all such critical regions on its convergence path. \end{remark} \subsection{A pictorial illustration of the algorithm} To gain more insights into the mechanics of Algorithm \ref{alg:CRP}, consider the example portrayed in Figure \ref{fig:iteration}. The coordinator begins with $y^A$ as the initial value of $y$. It communicates with SO$_i$ to obtain the critical region induced by $J_i^*$ containing $y^A$, and the affine description of $J_i^*$ over that critical region. Using the relation in \eqref{eq:alphaBeta}, it then computes the critical region ${\cal P}^A$ induced by the aggregate cost and the affine description of that cost $\[\alpha^A\right]^\T z + \beta^A$ over that region. For convenience, we use $$ {\cal P}^A:= {\cal P}^{y^A}, \quad \alpha^{A} := \alpha^{y^A}, \quad \beta^{A} := \beta^{y^A},$$ and extend the corresponding notation for $y^B, \ldots, y^E$. \psset{unit=0.38in} \begin{figure} \caption{An example to illustrate the iterative process of Algorithm \ref{alg:CRP}.} \label{fig:iteration} \end{figure} The coordinator solves a linear program to minimize the affine aggregate cost $\[\alpha^A\right]^\T z + \beta^A$ over $z \in {\cal P}^A$, and obtains the lexicographically smallest optimizer $y^{\text{opt}}$. 
Such an optimizer $y^{\text{opt}}$ is always a vertex of ${\cal P}^A$. Identify $y^B$ as that vertex in Figure \ref{fig:iteration}. The optimal cost at $y^B$ is indeed lower than the initial value of $J^*=\infty$, and hence, the coordinator sets $y^* \gets y^B$. It also updates $J^*$ to the aggregate cost at $y^B$, and the partial sub-differential set to ${\cal D} \gets \{\alpha^A\}$. Next, the coordinator solves the least square problem described in \eqref{eq:ifopt} to compute $v^*$. In so doing, it utilizes ${\cal D} = \{\alpha^A\}$, and $\v{K}^y = 0$ that describes the normal cone to ${\cal Y}$ at $y_B$.\footnote{The normal cone to ${\cal Y}$ at $y^B$ is $\{ 0 \}$ because $y^B$ lies in the interior of ${\cal Y}$.} Suppose $v^* \neq 0$. The coordinator updates the value of $y$ to $y^C$, obtained by moving a `small' step of length $\varepsilon$ from $y^B$ along $-v^*$. Recall that $y^C \notin {\cal P}^A$. The coordinator again communicates with the SOs to obtain the new critical region ${\cal P}^C$ induced by the aggregate cost that contains $y^C$. Again, it obtains the affine description of that cost and optimizes it over ${\cal P}^C$ to obtain the new $y^{\text{opt}}$. In the figure, we depict the case when $y^{\text{opt}}$ coincides with $y^*= y^B$. Notice that the optimal cost $J^{\text{opt}}$ at $y^{\text{opt}}$ is equal to $J^*$, and hence, the coordinator only updates the partial sub-differential set ${\cal D}$ to $\{ \alpha^A, \alpha^C \}$. With the updated set of ${\cal D}$, the coordinator solves \eqref{eq:ifopt} to obtain $v^*$. In this example, $v^*$ is again non-zero, and hence, the coordinator moves along a step of length $\varepsilon$ along $-v^*$ from $y^B$ to land at $y^D$. Again, $y^D \notin \{{\cal P}^A, {\cal P}^C\}$. The coordinator repeats the same steps to optimize the aggregate cost over ${\cal P}^D$ to obtain $y^E$ as the new $y^{\text{opt}}$. Two cases can now arise, that we describe separately. \begin{description}[font=$\bullet$\scshape\space\normalfont, leftmargin=0.2cm] \item If the optimal cost $J^{\text{opt}}$ at $y^{\text{opt}} = y^E$ does not improve upon the cost $J^*$ at $y^B$, the coordinator ignores $y^E$ and updates the set ${\cal D}$ to $\{ \alpha^A, \alpha^C, \alpha^D \}$. It computes $v^*$ with the updated ${\cal D}$. Again, if $v^* \neq 0$, it traverses along $-v^*$ to venture into a yet-unexplored critical region. The process continues till we get $y^*=y^B$ as an optimizer (if $v^* = 0$ at a future iterate), or we encounter the case we describe next. \item If $J^{\text{opt}} < J^*$, then the coordinator sets $y^E$ as the new $y^*$. It retraces the same steps with this new $y^*$. In this example, since $y^E$ is a vertex of ${\cal Y}$, one can show that \eqref{eq:ifopt} will yield $v^* = 0$, and hence, $y^* = y^E$ will optimize the aggregate cost over ${\cal Y}$. \end{description} \newcommand{\xi^{\sf L}}{\xi^{\sf L}} \newcommand{\xi^{\sf U}}{\xi^{\sf U}} \section{The robust counterpart} \label{sec:robust} The deterministic tie-line scheduling problem was formulated in the last section on the premise that available renewable supply and limits on power demands within each area are known at the time when tie-line schedules are decided. We now alter that assumption and allow these parameters to be uncertain. 
In particular, we let $\xi_i = \(\overline{W}_i, \underline{D}_i, \overline{D}_i \)$ take values in a \emph{box}, described by \begin{equation} \Xi_i := \{ \xi_i \in \mathbf{R}^{3 n_i} \ \vert \ \xi^{\sf L}_i \leq \xi_i \leq \xi^{\sf U}_i \}\label{eq:Xi} \end{equation} for $i=1,2$. The robust counterpart of the tie-line scheduling problem is then described by \begin{align} \underset{y \in {\cal Y}}{\text{minimize}} \( \underset{\xi_1 \in \Xi_1}{\text{max}} \ J_1^*\(y, \xi_1\) + \underset{\xi_2 \in \Xi_2}{\text{max}} \ J_2^*\(y, \xi_2\) \). \label{eq:robProb} \end{align} We now develop an algorithm that solves \eqref{eq:robProb} in a distributed fashion. Problem \eqref{eq:robProb} has a \emph{minimax} structure. Therefore, we employ a strategy in Algorithm \ref{alg:robust} to alternately minimize the objective function over ${\cal Y}$ and maximize it over $\Xi_1 \times \Xi_2$. Thanks to the following lemma, the maximization over $\Xi_1 \times \Xi_2$ can be reformulated into a mixed-integer linear program. \begin{lemma} \label{lemma:MILP} Fix $y \in {\cal Y}$. Then, there exists $\sf{M} > 0$ for which maximizing $J_i^*(y, \xi_i)$ over $\xi_i \in \Xi_i$ is equivalent to the following mixed-integer linear program: \begin{equation} \label{eq:MILP} \hspace{-0.3cm} \begin{aligned} & \underset{w_i, \rho, \lambda}{\text{maximize}} && c_i^0 + [c_i^\xi]^\T \ \xi^{\sf L}_i \!+\! ( \v{A}_i^\xi \xi^{\sf L}_i \! + \! \v{A}_i^y y \!-\! b_i)^\T \ \lambda \!+\! \mathds{1}^\T \rho, \\ & \text{subject to} \quad && c_i^x + [\v{A}_i^x]^\T \lambda = 0, \\ &&& \rho \leq {\sf{M}} w_i, \\ &&& \rho \leq {\sf{M}} (\mathds{1} - w_i) + \v{\Delta}^\xi_{i}(c_i^\xi + [\v{A}_i^\xi ]^\T \lambda), \\ &&& w_i \in \{ 0, 1 \}^{n_i}, \rho \in \mathbf{R}^{n_i}, \lambda \in \mathbf{R}_+^{m_i}. \end{aligned}\!\! \end{equation} \end{lemma} We use the notation $\v{\Delta}^\xi_{i}$ to denote a diagonal matrix with $\xi^{\sf U}_i-\xi^{\sf L}_i$ as the diagonal. The lemma builds on the fact that $J_i^*(y, \xi_i)$ is convex in $\xi_i$, and hence, reaches its maximum at a vertex of $\Xi_i$. The convexity is again a consequence of \cite[Theorem 7.5]{borrelli2003constrained}. Our proof in Appendix \ref{sec:lemma2} leverages duality theory of linear programming and the so-called \emph{big-M} method adopted in \cite[Chapter 2.11]{conforti2014integer} to reformulate the maximization of $J_i^*(y, \cdot)$ over the vertices of $\Xi_i$ into a mixed-integer linear program. An optimal $\xi_i^{\text{opt}}$ can be recovered from $w_i^*$ that is optimal in \eqref{eq:MILP} using $$ \xi_i^{\text{opt}} := \xi^{\sf L}_i + \v{\Delta}^\xi_{i} w_i^*.$$ Next, we present our algorithm for solving the robust counterpart. In the algorithm, the SOs exclusively maintain and update certain variables; we distinguish these from the ones the coordinator maintains. \begin{algorithm} \caption{Solving the robust counterpart.} \label{alg:robust} \begin{algorithmic}[1] \Initialize{ SO$_1$: ${\cal V}_1 \gets \{ \text{a vertex of }\Xi_1 \}$, \\ SO$_2$: ${\cal V}_2 \gets \{ \text{a vertex of }\Xi_2 \}$.} \Do \State Coordinator uses Algorithm \ref{alg:CRP} to solve $$\underset{y \in {\cal Y}}{\text{minimize}} \ \(\max_{\xi_1 \in {\cal V}_1} J_1^*(y, \xi_1) + \max_{\xi_2 \in {\cal V}_2} J_2^*(y, \xi_2)\). $$\label{step:CRP} \State $y^* \gets$ optimizer in step \ref{step:CRP}. \State $J^* \gets$ optimal cost in step \ref{step:CRP}. 
\State For $i=1,2$, SO$_i$ performs: \State \qquad Maximize $J_i^*(y^*, \cdot)$ over $\Xi_i$ using \eqref{eq:MILP}.\label{step:MILP} \State \qquad $\xi_i^{\text{opt}} \gets$ optimizer in step \ref{step:MILP}. \State \qquad $J^{\text{opt}}_i \gets$ optimal cost in step \ref{step:MILP}. \State \qquad ${\cal V}_i \gets {\cal V}_i \cup \{ \xi_i^\text{opt} \}$. \State \qquad \Return $J^{\text{opt}}_i$ to the coordinator. \doWhile{$J^{\text{opt}}_1 + J^{\text{opt}}_2 > J^*$.} \end{algorithmic} \end{algorithm} We summarize the main property of the above algorithm in the following theorem, whose proof is given in Appendix \ref{sec:theorem2}\footnote{The proof is similar to \cite[Preposition 2]{zeng2013solving}; we include it for completeness.}. \begin{theorem} \label{thm:finiteTimeRob} Algorithm \ref{alg:robust} terminates after finitely many steps, and $y^*$ at termination optimally solves \eqref{eq:robProb}. \end{theorem} Our algorithm to solve the robust counterpart makes use of Algorithm \ref{alg:CRP} in step \ref{step:CRP}. The coordinator performs this step with necessary communication with the SOs. However, it remains agnostic to the uncertainty sets $\Xi_1$ and $\Xi_2$ throughout. Therefore, our algorithm is such that the SOs in general will not be required to reveal their cost structures, network constraints, nor their uncertainty sets to the coordinator to optimally solve the robust tie-line scheduling problem. Further, Theorems \ref{thm:finiteTime} and \ref{thm:finiteTimeRob} together guarantee that the coordinator can arrive at the required schedule by communicating with the SOs only finitely many times. These define some of the advantages of the proposed methodology. In the following, we discuss some limitations of our method. The number of affine segments in the piecewise affine description of $\max_{\xi_i \in {\cal V}_i} J_i^*(y, \xi_i)$ increases with the size of the set ${\cal V}_i$. The larger that number, the heavier can be the computational burden on Algorithm \ref{alg:CRP} in step \ref{step:CRP}. To partially circumvent this problem, we initialize the sets ${\cal V}_i$ with that vertex of $\Xi_i$ that encodes the least available renewable supply and the highest nominal demand. Such a choice captures the intuition that dispatch cost is likely the highest with the least free renewable supply and the highest demand. Our empirical results in the next section corroborate that intuition. We make use of mixed-integer linear programs in step \ref{step:MILP} of the algorithm. This optimization class encompasses well-known NP-hard problems. Solvers in practice, however, often demonstrate good empirical performance. Popular techniques for mixed-integer linear programming include branch-and-bound, cutting-plane methods, etc. See \cite{conforti2014integer} for a survey. Providing polynomial-time convergence guarantees for \eqref{eq:MILP} remains challenging, but our empirical results in the next section appear encouraging. \section{Numerical Experiments} \label{sec:simulation} We report here the results of our implementation of Algorithm \ref{alg:robust} on several power system examples. All optimization problems were solved in IBM ILOG CPLEX Optimization Studio V12.5.0 \cite{CPLEX} on a PC with 2.0GHz Intel(R) Core(TM) i7-4510U microprocessor and 8GB RAM. \begin{figure} \caption{The power system model} \caption{Histogram of optimal aggregate costs} \caption{The two-area 44-bus system is portrayed on the left. 
It shows where the wind generators are added and the parameters for the tie-lines used in our experiments. The figure to the right plots the optimal aggregate costs from $\sf{P}_1$, $\sf{P}_2$ over 3000 samples of uncertain variables, and that of Algorithm \ref{alg:robust} on this system.} \label{fig:twoareaConfig} \label{fig:twoareaCost} \end{figure} \subsection{On a two-area 44-bus power system} \label{sec:2area} Consider the two-area power system shown in Figure \ref{fig:twoareaConfig}, obtained by connecting the IEEE 14- and 30-bus test systems \cite{IEEESys}. The networks were augmented with wind generators at various buses. Transmission capacities of all lines were set to 100MW. The available capacity of each wind generator was varied between 15MW and 25MW. The lower limits on all power demands were set to zero, while the upper limits were varied between 98\% and 102\% of their nominal values. Our setup had 36 uncertain variables -- 32 power demands and 4 available wind generation. Bus 5 in area 1 was the slack bus. From the data in Matpower \cite{MATPOWER}, we chose the linear coefficient in the nominal quadratic cost structure for each conventional generator to define $P^g_i$ in \eqref{eq:defP}. Further, we neglected wind spillage costs by letting $P^w_i = 0$, and defined $P^d_i$ by assuming a constant marginal cost of \$100/MWh for not meeting the highest demands. \begin{table}[H] \centering \begin{tabular}{@{\hskip 0.05in}c@{\hskip 0.05in}l@{\hskip 0.05in}c@{\hskip 0.05in}c@{\hskip 0.05in}} \hlinewd{1pt} Iteration & Step in Algorithm \ref{alg:robust} & \thead{Aggregate cost \\ (in \$/h)} & \thead{Run-time \\ (in ms)} \\ \hline 1 & Step \ref{step:CRP} to compute $y^*$ & 9897.7 & 113.6\\ 1 & Step \ref{step:MILP} to compute $\xi^{\text{opt}}$ & 9910.3 & 99.6\\ 2 & Step \ref{step:CRP} to compute $y^*$ & 9899.3 & 93.4\\ 2 & Step \ref{step:MILP} to compute $\xi^{\text{opt}}$ & 9899.3 & 121.5\\ \hlinewd{1pt} \end{tabular} \caption{\small Evolution of aggregate cost of Algorithm \ref{alg:robust} for the two-area power system in Figure \ref{fig:twoareaConfig}.} \label{table:twoarearesult} \end{table} To run Algorithm \ref{alg:robust}, we initialized ${\cal V}_i$ with the scenario that describes the highest power demands and the least available wind generation across all buses. To invoke Algorithm \ref{alg:CRP} in step \ref{step:CRP}, we initialized $y$ with a vector of all zeros. When the algorithm encountered the same step in future iterations, it was initialized with the optimal $y^*$ from the last iteration to provide a \emph{warm start}. Algorithm \ref{alg:robust} converged in two iterations, \textit{i.e.}, it ended when the cardinality of ${\cal V}_1$ and ${\cal V}_2$ were both two. The trajectory of the optimal cost and the run-times for each step are given in Table \ref{table:twoarearesult}. In the first iteration, Algorithm \ref{alg:CRP} in step \ref{step:CRP} with $\varepsilon = 10^{-5}$ converged in four iterations\footnote{The termination condition $ v^* = 0 $ is replaced by checking that the Euclidean norm of a suitably normalized $v^*$ is less than a threshold.} of its own and explored five critical regions induced by the aggregate cost. A naive search over ${\cal Y}$ yielded that the aggregate cost induced at least 126 critical regions. Our simulation indicates that Algorithm \ref{alg:CRP} only explores a `small' subset of all critical regions. Step \ref{step:MILP} of Algorithm \ref{alg:robust} was then solved to obtain $\xi_i^{\text{opt}}$. 
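For completeness, we sketch how step \ref{step:MILP} can be assembled for one area directly from \eqref{eq:MILP}. The snippet below is only an illustration of the big-M reformulation: it is not the implementation used in our experiments (which relied on CPLEX), the constant ${\sf{M}}$ and all variable names are placeholders, and a mixed-integer-capable back-end is assumed to be available.

\begin{verbatim}
# Illustrative sketch of the big-M MILP for one area; not the CPLEX code
# used for the reported experiments.  M must be chosen large enough.
import cvxpy as cp
import numpy as np

def worst_case_scenario(c0, c_xi, c_x, A_x, A_xi, A_y, b,
                        xi_lo, xi_hi, y, M=1e4):
    n, m = xi_lo.size, b.size
    Delta = np.diag(xi_hi - xi_lo)
    w = cp.Variable(n, boolean=True)
    rho = cp.Variable(n)
    lam = cp.Variable(m, nonneg=True)
    obj = (c0 + c_xi @ xi_lo
           + (A_xi @ xi_lo + A_y @ y - b) @ lam + cp.sum(rho))
    cons = [c_x + A_x.T @ lam == 0,
            rho <= M * w,
            rho <= M * (1 - w) + Delta @ (c_xi + A_xi.T @ lam)]
    cp.Problem(cp.Maximize(obj), cons).solve()   # needs a MIP solver
    return xi_lo + (xi_hi - xi_lo) * w.value     # recovered xi_i^opt
\end{verbatim}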
As Table \ref{table:twoarearesult} suggests, the aggregate cost $J_1^{\text{opt}} + J_2^{\text{opt}}$ exceeded $J^*$ obtained earlier in step \ref{step:CRP}. Thus, the scenario of demand and supply captured in our initial sets ${\cal V}_1$ and ${\cal V}_2$ was \emph{not} the one with maximum aggregate dispatch costs. To accomplish this step, two separate mixed-integer linear programs were solved -- one with 13 binary variables (in area 1) and the other with 23 binary variables (in area 2). CPLEX returned the global optimal solutions in 15ms and 77ms, respectively. In the next iteration, step \ref{step:CRP} was performed with $\xi_i^{\text{opt}}$ added to ${\cal V}_i$, where Algorithm \ref{alg:CRP} converged in five iterations, exploring only four critical regions. Finally, step \ref{step:MILP} yielded $J^{\text{opt}}_1 + J^{\text{opt}}_2 = J^*$, implying that the obtained $y^*$ defines an optimal robust tie-line schedule. To further understand the efficacy of our solution technique, we uniformly sampled the set $\Xi_1 \times \Xi_2$ 3000 times. With each sample $\(\xi_1, \xi_2\)$, we solved two optimization problems -- $\sf{P}_1$ and $\sf{P}_2$. Precisely, $\sf{P}_1$ is a deterministic tie-line scheduling problem solved with Algorithm \ref{alg:CRP}, and $\sf{P}_2$ is the optimal power flow problem in each area with the optimal $y^*$ obtained from Algorithm \ref{alg:robust} for the robust counterpart. The histograms of the optimal aggregate costs from $\sf{P}_1$ and $\sf{P}_2$ are plotted in Figure \ref{fig:twoareaCost}. The same figure also depicts the optimal cost of the robust tie-line scheduling problem, which naturally equals the maximum among the costs from $\sf{P}_2$. And for each sample, the gap between the optimal costs of $\sf{P}_1$ and $\sf{P}_2$ captures the cost due to lack of foresight. Figure \ref{fig:twoareaCost} reveals that such costs can be significant. The median run-time of $\sf{P}_1$ was 48.5ms over all samples. The run-time for the robust problem was 458.2ms -- roughly 10 times that median. \subsection{On a three-area 187-bus system test} \label{sec:3area} For this case study, we interconnected the IEEE 30-, 39-, and 118-bus test systems as shown in Figure \ref{fig:threeareaConfig}. All transmission capacities were set to 100MW. Five wind generators were added to the 118-bus system (at buses 17, 38, 66, 88, and 111), three in the 39-bus system (at buses 3, 19, and 38), and two in the 30-bus system (at buses 11, and 23). Again, we adopted the same possible set of available wind power generations and power demands, as well as the cost structures as in Section \ref{sec:2area}. In total, our robust tie-line scheduling problem modeled 151 uncertain variables. For this multi-area power system, Algorithm \ref{alg:robust} converged in the first iteration. The mixed integer programs in step \ref{step:MILP} yielded the global optimal solution for each area, taking 62ms, 109ms, and 281ms, respectively. We again sampled the set $\Xi_1 \times \Xi_2 \times \Xi_3$ 3000 times, and solved $\sf{P}_1$. The run-time of Algorithm \ref{alg:robust} was 825.3ms, that is roughly 1.8 times the median run-time of $\sf{P}_1$, given by 450.8ms. 
\begin{figure} \caption{A three-area 187-bus power system.} \caption{Performance of algorithms with $\#$ of tie-lines.} \caption{The power system model and how our algorithms perform with variation in number of tie-lines in a three-area 187-bus power system.} \label{fig:threeareaConfig} \label{fig:Perturbation} \end{figure} We studied how our algorithm scales with the number of boundary buses by adding more tie-lines to the same system. The aggregate iteration count of Algorithm \ref{alg:CRP} is expected to grow with the number of induced critical regions, that in turn should grow with the boundary bus count. On the other hand, the iteration count of Algorithm \ref{alg:robust} largely depends on the initial choice of the scenario encoded in the sets ${\cal V}_1, {\cal V}_2, {\cal V}_3$, and thus, varies to a lesser extent on the same count. Figure \ref{fig:Perturbation} validates these intuitions. \subsection{Summary of results from other case-studies} \label{sec:othertest} We compared Algorithm \ref{alg:CRP} with a dual decomposition based approach proposed in \cite{Bakirtzis&Biskas:03TPS}. That algorithm converges asymptotically, while our method converges in finitely many iterations. Table \ref{table:comparison} summarizes the comparison.\footnote{We say the method in \cite{Bakirtzis&Biskas:03TPS} converges when the power flow over each tie-line as calculated by the areas at its end mismatches by $<$ 0.01 p.u..} Compared to that in \cite{Bakirtzis&Biskas:03TPS}, our algorithm clocked lesser number of iterations and lower run-times in our experiments. \begin{table}[H] \centering \begin{tabular}{lcccc} \hlinewd{1pt} Items & \thead{Two-area \\44-bus system} & \thead{Three-area\\ 187-bus system} \\ \hline {\# iterations in Algorithm \ref{alg:CRP}} & 8 & 9 \\ {\# iterations of \cite{Bakirtzis&Biskas:03TPS}} & 23 & 78 \\ {Run-time of Algorithm \ref{alg:CRP} (ms)} & 458.2 & 825.3 \\ {Run-time of \cite{Bakirtzis&Biskas:03TPS} (ms)} & 779.8 & 1227.5 \\ \hlinewd{1pt} \end{tabular} \caption{Comparison with the method in \cite{Bakirtzis&Biskas:03TPS}.} \label{table:comparison} \end{table} Apart from the two systems considered so far, we ran Algorithm \ref{alg:robust} on a collection of other multi-area power systems, details of which can be found in Appendix \ref{app:otherSystems}. The results are summarized in Table \ref{table:moreresults}. Our experiments reveal that Algorithm \ref{alg:robust} often converges within 1 -- 4 iterations. The run-time of Algorithm \ref{alg:robust} grows significantly with the number of uncertain parameters. The 418-bus and the 536-bus systems with 422 and 546 uncertain variables, respectively, corroborate that conclusion. Such growth in run-time is expected because the complexity of \eqref{eq:MILP} grows with the number of binary decision variables that equals the number of uncertain parameters. Run-time of a joint multi-area optimal power flow problem with a sample scenario in the last column provides a reference to compare run-times for the robust one. \begin{table}[H] \centering \begin{tabular}{ccccccc} \hlinewd{1pt} \thead{\# \\ areas} & \thead{\# \\ buses} & \thead{\# \\ uncertain\\ variables} & \thead{\# \\ boundary\\ buses} & \thead{\# iter. 
in \\ Algorithm \ref{alg:robust}} & \thead{Run-time of \\ Algorithm \ref{alg:robust}\\ (ms)} & \thead{Run-time of \\ joint problem \\ (ms)} \\ \hline 2 & 87 & 91 &4& 1 & 719.6 & 310.0 \\ 2 & 175 & 179 &4& 1 & 871.1 & 340.5 \\ 2 & 236 & 240 &4& 1 & 1732.6 & 391.5 \\ 2 & 418 & 42 &10& 1 & 1020.7 & 455.7 \\ 2 & 418 & 422 &10& 4 & 6124.5 & 461.4 \\ 3 & 354 & 360 &12& 3 & 4127.4 & 655.8 \\ 3 & 536 & 54 &12& 1 & 2557.6 & 699.7 \\ 3 & 536 & 546 &12& 3 & 18359.8 & 701.2 \\ \hlinewd{1pt} \end{tabular} \caption{Performance of Algorithm \ref{alg:robust} on various multi-area power system examples provided in Appendix \ref{app:otherSystems}.} \label{table:moreresults} \end{table} \section{Conclusion} \label{sec:conclusion} This work presented an algorithmic framework to solve a tie-line scheduling problem in multi-area power systems. Our method requires a coordinator to communicate with the system operators in each area to arrive at an optimal tie-line schedule. In the deterministic setting, where the demand and supply conditions are assumed known during the scheduling process, our method (Algorithm \ref{alg:CRP}) was proven to converge in finitely many steps. In the case with uncertainty, we proposed a method (Algorithm \ref{alg:robust}) to solve the robust variant of the tie-line scheduling problem. Again, our method was shown to converge in finitely many steps. Our proposed algorithms do not require the system operator to reveal the dispatch cost structure, network parameters or even the support set of uncertain demand and supply within each area to the coordinator. We empirically demonstrated the efficacy of our algorithms on various multi-area power system examples. \appendix \section{How SO$_i$ can compute ${\cal P}_i^y$, $\alpha_i^y$, $\beta_i^y$} \label{sec:CRAffine} With $\xi_i\in\Xi_i$ and $y \in {\cal Y}$ fixed, consider the optimization problem described in \eqref{eq:areaProb}. Suppose the optimal solution $x_i^*(y, \xi_i)$ is unique. We suppress the dependency on $(y, \xi_i)$ for notational convenience. Distinguish between the constraints that are \emph{active} (met with an equality) versus that are \emph{inactive} at optimality with the subscript ${\cal A}$ and ${\cal I}$, respectively, as follows. \begin{align*} [\v{A}_i^x]_{{\cal A}} \ x_i^* + [\v{A}_{i}^\xi]_{{\cal A}} \ \xi_i + \[\v{A}_{i}^y\right]_{{\cal A}} y &= [b_i]_{{\cal A}},\\ [\v{A}_i^x]_{{\cal I}} \ x_i^* + [\v{A}_{i}^\xi]_{{\cal I}} \ \xi_i + \[\v{A}_{i}^y\right]_{{\cal I}} y &< [b_i]_{{\cal I}}. \end{align*} The set of active versus inactive constraints remains the same over the critical region ${\cal P}_i^y$. Assuming $[\v{A}_i^x]_{{\cal A}}$ is a square and invertible matrix, the optimal solution $x_i^*$ is unique for each $z \in {\cal P}^y_i$, given by \begin{align*} x_i^* = [\v{A}_i^x]_{{\cal A}}^{-1} \( [b_i]_{{\cal A}} - [\v{A}_{i}^\xi]_{{\cal A}} \ \xi_i - \[\v{A}_{i}^y\right]_{{\cal A}} z \). \label{eq:local_partition1} \end{align*} The inequalities for the inactive constraints, together with the above relation defines the critical region $ {\cal P}_i^y := \{z \in {\cal Y} \! : \! \v{D} z \leq d\}$, where \begin{align*} \v{D} &= - [\v{A}_i^x]_{{\cal I}} [\v{A}_i^x]_{{\cal A}}^{-1}[\v{A}_{i}^y]_{{\cal A}}+ [\v{A}_{i}^y]_{{\cal I}},\\ d &= [b_i]_{{\cal I}} - [\v{A}_{i}^\xi]_{{\cal I}} \ \xi_i - [\v{A}_i^x]_{{\cal I}} [\v{A}_i^x]_{{\cal A}}^{-1} \( [b_i]_{{\cal A}} - [\v{A}_{i}^\xi]_{{\cal A}} \ \xi_i \). 
\end{align*} Finally, $J_i^*(y, \xi_i) = c_i(x_i^*, \xi_i)$ yields \begin{align*} \alpha^y_i &= -[c_i^x]^\T \ [\v{A}_i^x]_{{\cal A}}^{-1}[\v{A}_{i}^y]_{{\cal A}}, \\ \beta^y_i &= c_i^0 + [c_i^\xi]^\T \ \xi_i + [c_i^x]^\T \ [\v{A}_i^x]_{{\cal A}}^{-1} \( [b_i]_{{\cal A}} - [\v{A}_{i}^\xi]_{{\cal A}} \ \xi_i \). \end{align*} The above expressions are derived under the premise that $[\v{A}_i^x]_{{\cal A}}$ is invertible. We refer the reader to \cite[Sections 7.2.2, 7.2.4]{borrelli2003constrained} for the procedure in the general case. \section{Proof of Theorem \ref{thm:finiteTime}}\label{sec:finiteTime} After each iteration of Algorithm \ref{alg:CRP}, $y^*$ is a vertex of a critical region induced by the aggregate optimal cost. Also, ${\cal D}$ is such that ${\rm conv}({\cal D}) \subseteq \partial J^*\(y^*, \xi_1, \xi_2\)$. Therefore, if the algorithm terminates with $v^* =0$, then $$ 0 \in {\rm conv}({\cal D}) + N_{\cal Y}\(y^*\) \subseteq \partial J^*\(y^*, \xi_1, \xi_2\) + N_{\cal Y}\(y^*\).$$ That is, $y^*$ optimally solves \eqref{eq:detProb.2}. Next, we argue that the algorithm terminates in finitely many iterations. Consider the sequence of $y^*$'s and $J^*$'s produced by the algorithm. Notice that $J^*$ is a piecewise constant but non-increasing sequence. Further, a change in $y^*$ always accompanies a strict decrease in $J^*$. Therefore, if $y^*$ changes in an iteration from a certain point, that same point can never become $y^*$ again. Since there are finitely many critical regions with finitely many vertices, it only remains to show that $y^*$ cannot remain constant over infinitely many iterations. Towards that goal, notice that $y^*$ can only belong to a finite number of critical regions. In the rest of the proof, we argue that the variable $y$ computed in step \ref{step:yUpdate} always belongs to a different such critical region containing $y^*$, unless the algorithm terminates. At an arbitrary iteration, assume that $y$ has taken values in critical regions ${\cal P}^1, \ldots, {\cal P}^{\ell_D}$ that contain $y^*$. For convenience, let the optimal aggregate cost be given by $\[\alpha^j\right]^\T z + \beta^j$ for $z \in {\cal P}^j$ for each $j=1,\ldots, \ell_D$. Thus, ${\cal D} := \{ \alpha^1, \ldots, \alpha^{\ell_D}\}$. Then, the new value of $y$ is computed as $y^* - \varepsilon v^*$, with $v^*$ as defined in \eqref{eq:ifopt}. If $v^* = 0$, then the algorithm terminates, proving our claim. Otherwise, assume that $y^* - \varepsilon v^* \in {\cal P}^1$, contrary to our hypothesis, implying \begin{align*} J^*(y^* - \varepsilon v^*, \xi_1, \xi_2) = \[\alpha^1\right]^\T \(y^* - \varepsilon v^*\) + \beta^1 = J^* (y^*, \xi_1, \xi_2) - \varepsilon \[ \alpha^1\right]^\T v^*. \end{align*} Since, $y^*$ optimizes the aggregate cost over ${\cal P}^1$, it suffices to show that $\[ \alpha^1\right]^\T v^* > 0$ to arrive at a contradiction. For convenience, define the matrix $\pmb{\alpha} := \( \alpha^1, \ldots, \alpha^{\ell_D} \)$. We prove more generally that $\pmb{\alpha}^\T v^* > 0$. Associate Lagrange multipliers $\phi, \psi$ with the equality constraints $v = \pmb{\alpha} \eta + \zeta$, and $\mathds{1}^\T \eta = 1$, respectively. Also, associate $\mu_\eta, \mu_\zeta$ with the inequality constraints $\eta \geq 0$ and $\v{K}^y \zeta \geq 0$, respectively. 
Then, an optimal primal-dual solution pair given by $v^*, \eta^*, \zeta^*$ and $\phi^*, \psi^*, \mu_\eta^*, \mu_\zeta^*$ satisfies the Karush-Kuhn-Tucker (KKT) optimality conditions -- comprised of the constraints in \eqref{eq:ifopt} and the following relations. \begin{gather*} {v^*- \phi^*} = 0, \ \ {\pmb{\alpha}^\T \phi^* + \psi^* \mathds{1} - \mu_\eta^*} = 0, \ \ {\phi^* - \[\v{K}^y\right]^\T \mu_{\zeta}^*} = 0, \\ \[ \mu_{\eta}^*\right]^\T \eta = 0, \ \ \[\mu_{\zeta}^*\right]^\T \zeta^* = 0, \ \ \mu_{\eta}^* \geq 0, \ \ \mu_\zeta^* \geq 0. \end{gather*} Using the KKT conditions, we have \begin{align*} \vnorm{v^*}_2^2 + \psi^* &= \(\phi^*\)^\T\(\pmb{\alpha} \eta^* + \zeta^*\) + \psi^* \\ &= \( \pmb{\alpha}^\T \phi^* + \psi^* \mathds{1} - \mu_\eta^* \)^\T \eta^* + \( \phi^* - \[\v{K}^y\right]^\T \mu_{\zeta}^* \)^\T \zeta^* \\ &= 0. \end{align*} Thus, $\psi^* < 0$. Together with the KKT conditions, that yields \begin{align*} \pmb{\alpha}^\T v^* = \pmb{\alpha}^\T \phi^* = -\psi^* \mathds{1} + \mu_\eta^* > 0. \end{align*} \section{Proof of Lemma \ref{lemma:MILP}}\label{sec:lemma2} Strong duality of the problem described in \eqref{eq:areaProb} implies that $J_i^*\(y, \xi_i\) $ equals the optimum of the following problem. \begin{equation*} \label{eq:areaDual} \hspace{-0.4cm} \begin{alignedat}{8} & \quad \underset{\lambda \in \mathbf{R}^{m_i}_+}{\text{maximum}} & & c_i^0 + [ c_i^\xi ]^\T \xi_i + \( \v{A}_{i}^y y + \v{A}_{i}^{\xi} \xi_i - b_i \)^\T \lambda, \\ & \quad \text{subject to} \quad & \quad & c_i^x + \[ \v{A}_i^x \right]^\T \lambda = 0. \end{alignedat} \end{equation*} Then, maximizing $J_i^*\(y, \xi_i\)$ over the vertices of $\Xi_i$, described by $ \{ \xi^{\sf L}_i + \v{\Delta}^\xi_i w_i \! : \! w_i \in \{0, 1\}^{n_i} \}$, is equivalent to \begin{equation*} \hspace{-0.4cm} \begin{alignedat}{8} & \quad {\text{maximize}} & & c_i^0 + [c_i^\xi]^\T \xi^{\sf L}_i \!+\! \( \v{A}_i^y y \! + \! \v{A}_i^\xi \xi^{\sf L}_i \!-\! b_i\)^\T \lambda \!+\! \mathds{1}^\T \rho, \\ & \quad \text{subject to} \quad & \quad & c_i^x + \[ \v{A}_i^x \right]^\T \lambda = 0, \\ &&& \rho = \mathop{\mathrm{diag}}(w_i) \cdot \v{\Delta}_i^\xi \( c_i^\xi + [ \v{A}_i^\xi ]^\T \lambda \). \end{alignedat} \end{equation*} over $w_i \in \{0, 1\}^{n_i}$, $\rho\in\mathbf{R}^{n_i}$, and $\lambda \in \mathbf{R}^{m_i}_+$. Here, $\mathop{\mathrm{diag}}(w_i)$ denotes the diagonal matrix with $w_i$ as the diagonal. Since we maximize $\mathds{1}^\T \rho$, one can replace the second equality constraint in the above problem with the inequality $$\rho \leq \mathop{\mathrm{diag}}(w_i) \cdot \v{\Delta}_i^\xi \( c_i^\xi + [ \v{A}_i^\xi ]^\T \lambda \),$$ that is further equivalent to \begin{align*} \rho \leq {\sf{M}} w_i, \ \text{and} \ \ \rho \leq {\sf{M}} \(\mathds{1} - w_i\) + \v{\Delta}_i^\xi \( c_i^\xi + [ \v{A}_i^\xi ]^\T \lambda \), \end{align*} for a large enough ${\sf{M}} > 0$. That completes the proof. \section{Proof of Theorem \ref{thm:finiteTimeRob}}\label{sec:theorem2} Let $J^{\text{rob}}$ denote the optimal aggregate cost of \eqref{eq:robProb}. Then, $J^*$ from step \ref{step:CRP} and $J^{\text{opt}}_1 + J^{\text{opt}}_2$ from step \ref{step:MILP} at any iteration of Algorithm \ref{alg:robust} satisfy $$ J^* \leq J^{\text{rob}} \leq J^{\text{opt}}_1 + J^{\text{opt}}_2.$$ If Algorithm \ref{alg:robust} terminates, the termination condition implies that the above inequalities are all equalities. In that event, $y^*$ optimally solves \eqref{eq:robProb}. 
To argue the finite-time convergence, notice that at least one among ${\cal V}_1$ and ${\cal V}_2$ increases in cardinality unless the termination condition is satisfied. The rest follows from the fact that $\Xi_1$ and $\Xi_2$ have finitely many vertices. \section{Power system details for additional simulations} \label{app:otherSystems} The multi-area power systems considered in Section \ref{sec:othertest} are given in Figure \ref{sec:additionalExamples}. Tie-line capacities were set to 100MW and their reactances were set to $0.25p.u.$ Capacity limits on the transmission lines within each area were set to their respective nominal values in Matpower \cite{MATPOWER} wherever present, and to 100MW, otherwise. For all two-area tests, two wind generators were installed in the two areas at buses 6 and 14 in area 1 and buses 11 and 23 in area 2. For the three-area tests, we replicated the placements described in Section \ref{sec:3area}. Power demands and available wind generations were varied the same way as in Sections \ref{sec:2area} and \ref{sec:3area}. \begin{figure} \caption{Two-area 87-bus system} \caption{Two-area 175-bus system} \caption{Two-area 236-bus system} \caption{Two-area 418-bus system} \caption{Three-area 354-bus system} \caption{Three-area 536-bus system} \caption{Additional power system examples considered for numerical experiments.} \label{sec:additionalExamples} \end{figure} \end{document}
\begin{document} \title[Realizations of AF-algebras] {Realizations of AF-algebras as graph algebras, Exel-Laca algebras, and ultragraph algebras} \author{Takeshi Katsura} \address{Takeshi Katsura, Department of Mathematics\\ Keio University\\ Yokohama, 223-8522\\ JAPAN} \email{[email protected]} \author{Aidan Sims} \address{Aidan Sims, School of Mathematics and Applied Statistics\\ University of Wollongong\\ NSW 2522\\ AUSTRALIA} \email{[email protected]} \author{Mark Tomforde} \address{Mark Tomforde \\ Department of Mathematics\\ University of Houston\\ Houston \\ TX 77204-3008\\ USA} \email{[email protected]} \date{October 22, 2008; revised April 29, 2009} \subjclass[2000]{Primary 46L55} \keywords{graph $C^*$-algebras, Exel-Laca algebras, ultragraph $C^*$-algebras, AF-algebras, Bratteli diagrams} \begin{abstract} We give various necessary and sufficient conditions for an AF-algebra to be isomorphic to a graph $C^*$-algebra, an Exel-Laca algebra, and an ultragraph $C^*$-algebra. We also explore consequences of these results. In particular, we show that all stable AF-algebras are both graph $C^*$-algebras and Exel-Laca algebras, and that all simple AF-algebras are either graph $C^*$-algebras or Exel-Laca algebras. In addition, we obtain a characterization of AF-algebras that are isomorphic to the $C^*$-algebra of a row-finite graph with no sinks. \end{abstract} \maketitle \tableofcontents \section{Introduction} \label{intro-sec} In 1980 Cuntz and Krieger introduced a class of $C^*$-algebras constructed from finite matrices with entries in $\{0, 1 \}$ \cite{CK}. These $C^*$-algebras, now called Cuntz-Krieger algebras, are intimately related to the dynamics of topological Markov chains, and appear frequently in many diverse areas of $C^*$-algebra theory. Cuntz-Krieger algebras have been generalized in a number of ways, and two very natural generalizations are the \emph{graph $C^*$-algebras} and the \emph{Exel-Laca algebras}. For graph $C^*$-algebras one views a $\{0, 1 \}$-matrix as an edge adjacency matrix of a graph, and considers the Cuntz-Krieger algebras as $C^*$-algebras of certain finite directed graphs. For a (not necessarily finite) directed graph $E$, one then defines the graph $C^*$-algebra $C^*(E)$ as the $C^*$-algebra generated by projections $p_v$ associated to the vertices $v$ of $E$ and partial isometries $s_e$ associated to the edges $e$ of $E$ that satisfy relations determined by the graph. Graph $C^*$-algebras were first studied using groupoid methods \cite{KPR, KPRR}. Due to technical constraints, the original theory was restricted to graphs that are \emph{row-finite} and have \emph{no sinks}; that is, the set of edges emitted by each vertex is finite and nonempty. In fact much of the early theory restricted to this case \cite{BPRS, KPR, KPRR}, and it was not until later \cite{BHRS, DT, FLR} that the theory was extended to infinite graphs that are not row-finite. Interestingly, the non-row-finite setting is significantly more complicated than the row-finite case, with both new isomorphism classes of $C^*$-algebras and new kinds of $C^*$-algebraic phenomena exhibited. Another approach to generalizing the Cuntz-Krieger algebras was taken by Exel and Laca, who defined what are now called the Exel-Laca algebras \cite{EL}. In this definition one allows a possibly infinite matrix with entries in $\{0, 1 \}$ and considers the $C^*$-algebra generated by a set of partial isometries indexed by the rows of the matrix and satisfying certain relations determined by the matrix. 
The construction of the Exel-Laca algebras contains the Cuntz-Krieger construction as a special case. Furthermore, for \emph{row-finite matrices} (i.e., matrices in which each row contains a finite number of nonzero entries) with nonzero rows, the construction produces exactly the class of $C^*$-algebras of row-finite graphs with no sinks. Despite the fact that the classes of graph $C^*$-algebras and Exel-Laca algebras agree in the row-finite case, they are quite different in the non-row-finite setting. In particular, there are $C^*$-algebras of non-row-finite graphs that are not isomorphic to any Exel-Laca algebra, and there are Exel-Laca algebras of non-row-finite matrices that are not isomorphic to the $C^*$-algebra of any graph \cite{Tom2}. In order to bring graph $C^*$-algebras and Exel-Laca algebras together under one theory, Tomforde introduced the notion of an ultragraph and described how to associate a $C^*$-algebra to such an object \cite{Tom, Tom2}. These ultragraph $C^*$-algebras contain all graph $C^*$-algebras and all Exel-Laca algebras, as well as examples of $C^*$-algebras that are in neither of these two classes. The relationship among these classes is summarized in Figure~\ref{fig:three-classes}. \begin{figure} \caption{The relationship among graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras} \label{fig:three-classes} \end{figure} Given the relationship among these classes of $C^*$-algebras, it is natural to ask the following question. \vskip1ex \noindent \textbf{Question: }``How different are the $C^*$-algebras in the three classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras?" \vskip1ex \noindent There are various ways to approach this question, and one such approach was taken in \cite{KMST2}, where it was shown that the classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras agree up to Morita equivalence. More specifically, given a $C^*$-algebra $A$ in any of these three classes, one can always find a row-finite graph $E$ with no sinks such that $C^*(E)$ is Morita equivalent to $A$. Thus the three classes cannot be distinguished by Morita equivalence classes of $C^*$-algebras. The natural next question is to what extent they can be distinguished by isomorphism classes of $C^*$-algebras. A starting point for these investigations is to ask about AF-algebras. While no Cuntz-Krieger algebra is an AF-algebra, the classes of graph $C^*$-algebras and Exel-Laca algebras each include many AF-algebras. In fact, one of the early results in the theory of graph $C^*$-algebras shows that if $A$ is any AF-algebra, then there is a row-finite graph $E$ with no sinks such that $C^*(E)$ is Morita equivalent to $A$ \cite{Drinen2000}. From this fact and the result in \cite{KMST2} mentioned above, our three classes (graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras) each contain all AF-algebras up to Morita equivalence. The purpose of this paper is to examine the three classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras and determine which AF-algebras are contained, up to isomorphism, in each class. This turns out to be a difficult task, and we are unable to give a complete solution to the problem. Nonetheless, we are able to give a number of sufficient conditions and a number of necessary conditions for a given AF-algebra to belong to each of these three classes (see \S\ref{sufficient-cond-subsec}~and~\S\ref{necessary-cond-subsec}). 
As special cases of our sufficient conditions, we obtain the following. \begin{itemize} \item If $A$ is a stable AF-algebra, then $A$ is isomorphic to the $C^*$-algebra of a row-finite graph with no sinks. \item If $A$ is a simple AF-algebra, then $A$ is isomorphic to either an Exel-Laca algebra or a graph $C^*$ -algebra. In particular, if $A$ is finite dimensional, then $A$ is isomorphic to a graph $C^*$-algebra; and if $A$ infinite dimensional, then $A$ is isomorphic to an Exel-Laca algebra. \item If $A$ is an AF-algebra with no nonzero finite-dimensional quotients, then $A$ is isomorphic to an Exel-Laca algebra. \end{itemize} \noindent From our necessary conditions, we obtain the following. \begin{itemize} \item If an ultragraph $C^*$-algebra is a commutative AF-algebra then it is isomorphic to $c_0(X)$ for an at most countable discrete set $X$. \item No finite-dimensional $C^*$-algebra is isomorphic to an Exel-Laca algebra. \item No infinite-dimensional UHF algebra is isomorphic to a graph $C^*$-algebra. \end{itemize} \noindent Moreover, we are able to give a characterization of AF-algebras that are isomorphic to $C^*$-algebras of row-finite graphs with no sinks in Theorem~\ref{no-unital-quotient-then-graph-alg}. \begin{theorem*} Let $A$ be an AF-algebra. Then the following are equivalent: \begin{enumerate} \item $A$ has no unital quotients. \item $A$ is isomorphic to the $C^*$-algebra of a row-finite graph with no sinks. \end{enumerate} \end{theorem*} Our results allow us to make a fairly detailed analysis of the AF-algebras in each of our three classes, and in Figure~\ref{fig:Venn} at the end of this paper we draw a Venn diagram relating various classes of AF-algebras among the graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras. Our results are powerful enough that we are able to give examples in each region of the Venn diagram, and also state definitively whether or not there are unital and nonunital examples in each region. Finally, we remark that a particularly useful aspect of our sufficiency results is their constructive nature. When one first approaches the problem of identifying which AF-algebra are in our three classes, one may be tempted to use the $K$-theory classification of AF-algebras. There are, however, two problems with this approach: (1) Since any AF-algebra is Morita equivalent to the $C^*$-algebra of a row-finite graph with no sinks, we know that all ordered $K_0$-groups are attained by the AF-algebras in each of our three classes. Thus we need to identify which \emph{scaled} ordered $K_0$-groups are attained by the AF-algebras in each class. Unfortunately, however, little is currently known about the scale for the $K_0$-groups of $C^*$-algebras in these three classes. (2) More importantly, even if we could decide exactly which scaled ordered $K_0$-groups are attained by, for example, graph AF-algebras, we would obtain at best an abstract characterization of which AF-algebras are graph $C^*$-algebras. Unless our understanding of the scaled ordered $K_0$-groups achieved by AF graph $C^*$-algebras extended to an algorithm for producing a graph whose $C^*$-algebra achieved a given scaled ordered $K_0$-group, we would be unable to take a given AF-algebra $A$ and view it as a graph $C^*$-algebra. Most notably, we could not expect to ``see" the canonical generators of $C^*(E)$ in $A$. 
With an awareness of the limitations of an abstract characterization, we instead present constructive methods for realizing AF-algebras as $C^*$-algebras in our three classes. Given a certain type of AF-algebra $A$ we show how to build an ultragraph $\mathcal{G}$ from a certain type of Bratteli diagram for $A$ so that $C^*(\mathcal{G})$ is isomorphic to $A$ (see \S\ref{ultra-contruct-subsec}). This ultragraph $C^*$-algebra is always an Exel-Laca algebra, and in special situations (see \S\ref{sufficient-cond-subsec}) it is also a graph $C^*$-algebra. Furthermore, one can extract from $\mathcal{G}$ a $\{0,1\}$-matrix for the Exel-Laca algebra or a directed graph for the graph $C^*$-algebra as appropriate. This paper is organized as follows. In \S\ref{prelim-sec} we establish definitions and notation for graph $C^*$-algebras, Exel-Laca algebras, ultragraph $C^*$-algebras, and AF-algebras. In \S\ref{lemmas-sec} we establish some technical lemmas regarding Bratteli diagrams and inclusions of finite-dimensional $C^*$-algebras. In \S\ref{results-sec} we state the main results of this paper. Specifically, in \S\ref{ultra-contruct-subsec} we describe how to take a Bratteli diagram for an AF-algebra $A$ with no nonzero finite-dimensional quotients and build an ultragraph $\mathcal{G}$. In \S\ref{sufficient-cond-subsec} we prove that the associated ultragraph $C^*$-algebra $C^*(\mathcal{G})$ is isomorphic to $A$. We also show that $C^*(\mathcal{G})$ is always isomorphic to an Exel-Laca algebra, and describe conditions which imply $C^*(\mathcal{G})$ is also a graph $C^*$-algebra. These results give us a number of sufficient conditions for AF-algebras to be contained in our three classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras. We also present examples showing that none of our sufficient conditions are necessary. In \S\ref{necessary-cond-subsec} we give several necessary conditions for AF-algebras to be in each of our three classes. These conditions allow us to identify a number of obstructions to realizations of various AF-algebras in each class. We conclude in \S\ref{Venn-sec} by summarizing our containments. First, we characterize precisely which simple AF-algebras fall into each of our classes. Second, we summarize many of the relationships we have derived, including containments for the finite-dimensional and stable AF-algebras, and draw a Venn diagram to represent these containments. We are able to use our results from \S\ref{results-sec} to exhibit examples in each region of the Venn diagram, thereby showing these regions are nonempty. We are also able to describe precisely when unital and nonunital examples occur in these regions. \section{Preliminaries} \label{prelim-sec} In the following four subsections we establish definitions and notation for graph $C^*$-algebras, Exel-Laca algebras, ultragraph $C^*$-algebras, and AF-algebras. Since the literature for each of these classes of $C^*$-algebras is large and well developed, we present only the definitions and notation required in this paper. However, for each class we provide introductory references where more detailed information may be found. \subsection{Graph $C^*$-algebras} Introductory references include \cite{BPRS, Rae, Tom9}. \begin{definition} A \emph{graph} $E=(E^{0},E^{1},r,s)$ consists of a countable set $E^{0}$ of vertices, a countable set $E^{1}$ of edges, and maps $r \colon E^{1} \to E^{0}$ and $s \colon E^1 \to E^0$ identifying the range and source of each edge. 
\end{definition} A \emph{path} in a graph $E = (E^0, E^1, r, s)$ is a sequence of edges $\alpha := e_1 \ldots e_n$ with $s(e_{i+1}) = r(e_i)$ for $1 \leq i \leq n-1$. We say that $\alpha$ has \emph{length} $n$. We regard vertices as paths of length 0 and edges as paths of length 1, and we then extend our notation for the vertex set and the edge set by writing $E^n$ for the set of paths of length $n$ for all $n \ge 0$. We write $E^*$ for the set $\bigsqcup_{n=0}^\infty E^n$ of paths of finite length, and extend the maps $r$ and $s$ to $E^*$ by setting $r(v) = s(v) = v$ for $v \in E^0$, and $r(\alpha_1 \ldots \alpha_n) = r(\alpha_n)$ and $s(\alpha_1\ldots\alpha_n) = s(\alpha_1)$. If $\alpha$ and $\beta$ are elements of $E^*$ such that $r(\alpha) = s(\beta)$, then $\alpha\beta$ is the path of length $|\alpha|+|\beta|$ obtained by concatenating the two. Given $\alpha, \beta \in E^*$, and a subset $X$ of $E^*$, we let \[ \alpha X \beta := \{ \gamma \in E^* : \gamma = \alpha \gamma' \beta \text{ for some } \gamma' \in X \}. \] So when $v$ and $w$ are vertices, we have \begin{align*} vX &= \{\gamma \in X : s(\gamma) = v\},\\ Xw &= \{\gamma \in X : r(\gamma) = w\},\text{ and}\\ vXw &= \{ \gamma \in X : s(\gamma) = v \text{ and } r(\gamma) = w \}. \end{align*} In particular, $vE^1w$ denotes the set of edges from $v$ to $w$ and $|vE^1w|$ denotes the number of edges from $v$ to $w$. We say a vertex $v$ is a \emph{sink} if $vE^1 = \emptyset$ and an \emph{infinite emitter} if $vE^1$ is infinite. A graph is called \emph{row-finite} if it has no infinite emitters. \begin{definition}[Graph $C^*$-algebras] If $E=(E^{0},E^{1},r,s)$ is a graph, then the \emph{graph $C^*$-algebra} $C^{*}(E)$ is the universal $C^*$-algebra generated by mutually orthogonal projections $\{p_{v} : v \in E^{0} \}$ and partial isometries $\{ s_{e} : e \in E^{1} \}$ with mutually orthogonal ranges satisfying \begin{enumerate} \item $s_{e}^{*}s_{e} = p_{r(e)}$ \ \ for all $e \in E^{1}$ \item $p_{v}=\displaystyle {\ \sum_{e \in vE^1} s_{e} s_{e}^*}$ \ \ for all $v\in E^{0}$ such that $0<|vE^1|<\infty$ \item $s_{e}s_{e}^{*} \leq p_{s(e)}$ \ \ for all $e \in E^{1}$. \end{enumerate} \end{definition} We write $v \geq w$ to mean that there is a path $\alpha \in E^*$ such that $s(\alpha) = v$ and $r(\alpha) = w$. A \emph{cycle} in a graph $E$ is a path $\alpha \in E^*$ of nonzero length with $r(\alpha) = s(\alpha)$. \cite[Theorem~2.4]{KPR} says that $C^*(E)$ is an AF-algebra if and only if $E$ has no cycles. \subsection{Exel-Laca algebras} Introductory references include \cite{EL, EL2, FLR, Szy4}. \begin{definition}[Exel-Laca algebras] Let $I$ be a finite or countably infinite set, and let $A = \{A(i,j)\}_{i,j \in I}$ be a $\{0,1\}$-matrix over $I$ with no identically zero rows. The \emph{Exel-Laca algebra} $\mathcal{O}_A$ is the universal $C^*$-algebra generated by partial isometries $\{ s_i : i \in I \}$ with commuting initial projections and mutually orthogonal range projections satisfying $s_i^* s_i s_j s_j^* = A(i,j) s_js_j^*$ and \begin{equation} \label{conditionfour} \prod_{x \in X} s_x^*s_x \prod_{y \in Y} (1-s_y^*s_y) = \sum_{j \in I} A(X,Y,j) s_js_j^* \end{equation} whenever $X$ and $Y$ are finite subsets of $I$ such that $X \not= \emptyset$ and the function $$j \in I \mapsto A(X,Y,j) := \prod_{x \in X} A(x,j) \prod_{y \in Y} (1-A(y,j))$$ is finitely supported. (We interpret the unit in (\ref{conditionfour}) as the unit in the multiplier algebra of $\mathcal{O}_A$.) 
\end{definition} We will see in Remark~\ref{rem:ELalgis,,,} that for a $\{0,1\}$-matrix $A$ with no identically zero rows, the canonical ultragraph $\mathcal{G}_A$ of $A$ satisfies $C^*(\mathcal{G}_A) \cong \mathcal{O}_A$. With this notation, \cite[Theorem~4.1]{Tom2} implies that the Exel-Laca algebra $\mathcal{O}_A$ is an AF-algebra if and only if $\mathcal{G}_A$ has no cycle. The latter condition can be restated as: there does not exist a finite set $\{i_1, \ldots, i_n\} \subseteq I$ with $A(i_k, i_{k+1}) = 1$ for all $1 \leq k \leq n-1$ and $A(i_n, i_1) =1$. It is well known that the class of graph $C^*$-algebras of row-finite graphs with no sinks and the class of Exel-Laca algebras of row-finite matrices coincide. However, we have been unable to find a reference, so we give a proof here. \begin{lemma} \label{row-finite-graphs-matrices-lem} The class of graph $C^*$-algebras of row-finite graphs with no sinks and the class of Exel-Laca algebras of row-finite matrices coincide. In particular, \begin{enumerate} \item If $E = (E^0, E^1, r, s)$ is a row-finite graph with no sinks, and if we define a $\{0,1\}$-matrix $A_E$ over $E^1$ by $$A_E(e,f) := \begin{cases} 1 & \text{ if r(e) = s(f)} \\ 0 & \text{ otherwise,}\end{cases}$$ then $A_E$ is a row-finite matrix with no identically zero rows and $C^*(E) \cong \mathcal{O}_{A_E}$. \item If $A$ is a row-finite $\{0,1\}$-matrix over $I$ with no identically zero rows, and if we define a graph $E_A$ by setting $E_A^0 := I$ and drawing an edge from $v \in I$ to $w \in I$ if and only if $A(v,w) = 1$, then $E_A$ is a row-finite graph with no sinks and $\mathcal{O}_A \cong C^*(E_A)$. \end{enumerate} \end{lemma} \begin{proof} For (1) let $E = (E^0, E^1, r, s)$ be a row-finite graph with no sinks, and define the matrix $A_E$ as above. Since $E$ is row-finite, $A_E$ is also row-finite. Let $\{ S_e : e \in E^1 \}$ be a generating Exel-Laca $A_E$-family in $\mathcal{O}_{A_E}$. For $v \in E^0$ we define $P_v := \sum_{s(e) = v} S_eS_e^*$ in $\mathcal{O}_{A_E}$. (Note that this sum is always finite since $A_E$ is row-finite.) We now show that $\{S_e, P_v : e \in E^1, v \in E^0 \}$ is a Cuntz-Krieger $E$-family in $\mathcal{O}_{A_E}$. The $S_e$'s have mutually orthogonal range projections by the Exel-Laca relations, and hence the $P_v$'s are also mutually orthogonal projections. In addition, Condition~(2) and Condition~(3) in the definition of graph $C^*$-algebras obviously hold from our definition of $P_v$. It remains to show Condition~(1) holds. If $e \in E^1$, let $X := \{ e \}$ and $Y := \emptyset$. Then for $j \in E^1$, we have $A_E(X,Y,j) := 1$ if and only if $s(j) = r(e)$. Since $E$ is row-finite, the function $j \mapsto A_E(X,Y,j)$ is finitely supported, and (\ref{conditionfour}) gives $S_e^*S_e = \sum_{j \in E^1} A(X,Y,j) S_jS_j^* = \sum_{ s(j) = r(e) } S_jS_j^* = P_{r(e)}$, so Condition~(1) holds. Thus $\{S_e, P_v : e \in E^1, v \in E^0 \}$ is a Cuntz-Krieger $E$-family, and by the universal property of $C^*(E)$ we obtain a $*$-ho\-mo\-mor\-phism $\phi \colon C^*(E) \to \mathcal{O}_{A_E}$ with $\phi(s_e) = S_e$ and $\phi(p_v) = P_v$ where $\{s_e, p_v \}$ is a generating Cuntz-Krieger $E$-family for $C^*(E)$. By checking on generators, one can see that $\phi$ is equivariant with respect to the gauge actions on $C^*(E)$ and $\mathcal{O}_{A_E}$, and thus the Gauge-Invariant Uniqueness Theorem \cite[Theorem~2.1]{BPRS} implies that $\phi$ is injective. 
Since the image of $\phi$ contains the generators $\{ S_e : e \in E^1 \}$ of $\mathcal{O}_{A_E}$, $\phi$ is also surjective. Thus $C^*(E) \cong \mathcal{O}_{A_E}$. For (2) let $A$ be a row-finite $\{0,1\}$-matrix with no identically zero rows. Let $\mathcal{G}_A$ be the canonical ultragraph of $A$ (see Remark~\ref{rem:ELalgis,,,}). Then the source map of $\mathcal{G}_A$ is bijective and $C^*(\mathcal{G}_A) \cong \mathcal{O}_A$. Since $A$ is a row-finite matrix, the range of each edge in $\mathcal{G}_A$ is a finite set. Thus $C^*(\mathcal{G}_A)$ is isomorphic to the $C^*$-algebra of the graph formed by replacing each edge in $\mathcal{G}_A$ with a set of edges from $s(e)$ to $w$ for all $w \in r(e)$ \cite[Remark~2.5]{KMST2}. But this is precisely the graph $E_A$ described in the statement above. \end{proof} \subsection{Ultragraph $C^*$-algebras} Introductory references include \cite{KMST, KMST2, Tom, Tom2}. For a set $X$, let $\mathcal{P}(X)$ denote the collection of all subsets of $X$. \begin{definition}(\cite[Definition~2.1]{Tom}) An \emph{ultragraph} $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$ consists of a countable set of vertices $G^0$, a countable set of ultraedges $\mathcal{G}^1$, and functions $s\colon \mathcal{G}^1\rightarrow G^0$ and $r\colon \mathcal{G}^1 \rightarrow \mathcal{P}(G^0)\setminus\{\emptyset\}$. \end{definition} Note that in the literature, ultraedges are typically just referred to as edges. However, since we will frequently be passing back and forth between graphs and ultragraphs in this paper, we feel that using the term ultraedge will serve as a helpful reminder that edges in ultragraphs behave differently than in graphs. \begin{definition} For a set $X$, a subset $\mathcal{C}\/$ of $\mathcal{P}(X)$ is called an {\em algebra} if \begin{enumerate} \item[(i)] $\emptyset\in\mathcal{C}$, \item[(ii)] $A\cap B\in\mathcal{C}$ and $A\cup B\in\mathcal{C}$ for all $A,B\in\mathcal{C}$, and \item[(iii)] $A\setminus B\in \mathcal{C}$ for all $A,B\in\mathcal{C}$. \end{enumerate} \end{definition} \begin{definition} For an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$, we let $\mathcal{G}^0$ denote the smallest algebra in $\mathcal{P}(G^0)$ containing the singleton sets and the sets $\{r(e) : e \in \mathcal{G}^1\}$. \end{definition} \begin{definition} A \emph{representation} of an algebra $\mathcal{C}$ is a collection of projections $\{p_A\}_{A\in \mathcal{C}}$ in a $C^*$-al\-ge\-bra satisfying $p_\emptyset = 0$, $p_A p_B = p_{A \cap B}$, and $p_{A\cup B} = p_A + p_B - p_{A \cap B}$ for all $A,B \in \mathcal{C}$. \end{definition} Observe that a representation of an algebra automatically satisfies $p_{A \setminus B} = p_A - p_A p_B$. \begin{definition} \label{dfn:CK-G-fam} For an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$, the \emph{ultragraph $C^*$-algebra} $C^*(\mathcal{G})$ is the universal $C^*$-algebra generated by a representation $\{p_A\}_{A\in \mathcal{G}^0}$ of $\mathcal{G}^0$ and a collection of partial isometries $\{s_e\}_{e \in \mathcal{G}^1}$ with mutually orthogonal ranges that satisfy \begin{enumerate} \item $s_e^*s_e = p_{r(e)}$ for all $e \in \mathcal{G}^1$, \item $s_es_e^* \leq p_{s(e)}$ for all $e \in \mathcal{G}^1$, \item $p_v = \sum_{e \in v\mathcal{G}^1} s_es_e^*$ whenever $0 < |v\mathcal{G}^1| < \infty$, \end{enumerate} where we write $p_v$ in place of $p_{ \{ v \} }$ for $v \in G^0$. 
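For instance, if $\mathcal{G}$ has two vertices $v, w$ and a single ultraedge $e$ with $s(e) = v$ and $r(e) = \{v,w\}$, then the relations above read $s_e^*s_e = p_{\{v,w\}} = p_v + p_w$ and $p_v = s_es_e^*$ (the vertex $w$ is a sink and imposes no relation); this already illustrates how a single ultraedge can behave like several graph edges at once.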
\end{definition} As with graphs, we call a vertex $v \in G^0$ a \emph{sink} if $v\mathcal{G}^1 = \emptyset$ and an \emph{infinite emitter} if $v\mathcal{G}^1$ is infinite. A \emph{path} in an ultragraph $\mathcal{G}$ is a sequence of ultraedges $\alpha = e_1 e_2 \ldots e_n$ with $s(e_{i+1}) \in r(e_i)$ for $1 \leq i \leq n-1$. A \emph{cycle} is a path $\alpha = e_1 \ldots e_n$ with $s(e_1) \in r(e_n)$. \cite[Theorem~4.1]{Tom2} implies that $C^*(\mathcal{G})$ is an AF-algebra if and only if $\mathcal{G}$ has no cycles. \begin{remark}\label{rem:ELalgis,,,} A graph may be regarded as an ultragraph in which the range of each ultraedge is a singleton set. The constructions of the two $C^*$-algebras then coincide: the graph $C^*$-algebra of a graph is the same as the ultragraph $C^*$-algebra of that graph when regarded as an ultragraph (see \cite[\S3]{Tom} for more details). For a $\{0,1\}$-matrix $A$ over $I$ with no identically zero rows, the canonical associated ultragraph $\mathcal{G}_A=(G_A^0, \mathcal{G}_A^1, r,s)$ is defined by $G_A^0 = \mathcal{G}_A^1 = I$, $r(i)=\{j\in I : A(i,j)=1\}$ and $s(i)=i$ for $i \in \mathcal{G}_A^1$ (see \cite[Definition~2.5]{Tom}). It follows from \cite[Theorem~4.5]{Tom} that $C^*(\mathcal{G}_A) \cong \mathcal{O}_A$. The ultragraph $\mathcal{G}_A$ has the property that $s$ is bijective. Conversely, an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r, s)$ with bijective $s$ is isomorphic to $\mathcal{G}_A$ where $A$ is the edge matrix of $\mathcal{G}$. Thus one can say that an Exel-Laca algebra is the $C^*$-algebra of an ultragraph with bijective source map. From these observations, one can see that the class of ultragraph $C^*$-algebras contains both the class of graph $C^*$-algebras and the class of Exel-Laca algebras. \end{remark} \subsection{AF-algebras} Introductory references include \cite{Bra, Eff, GH} as well as \cite[Chapter 6]{Dav} and \cite[\S6.1, \S6.2, and \S7.2]{Mur}. \begin{definition} An AF-algebra is a $C^*$-algebra that is the direct limit of a sequence of finite-dimensional $C^*$-algebras. Equivalently, a $C^*$-algebra $A$ is an AF-algebra if and only if $A = \overline{\bigcup_{n=1}^\infty A_n}$ for a sequence of finite-dimensional $C^*$-subalgebras $A_1 \subseteq A_2 \subseteq \cdots \subseteq A$. \end{definition} To discuss AF-algebras, we first need to briefly discuss inclusions of finite-dimensional $C^*$-algebras. Fix finite-dimensional $C^*$-algebras $A = \bigoplus^m_{i=1} M_{a_i}(\mathbb{C})$ and $B = \bigoplus^n_{j=1} M_{b_j}(\mathbb{C})$. Let $M=(m_{i,j})_{i,j}$ be an $m \times n$ nonnegative integer matrix with no zero rows such that \begin{equation}\label{eq:dimensions fit} \sum^m_{i=1} m_{i,j} a_i \le b_j\quad\text{for all $j$}. \end{equation} There exists an inclusion $\phi_M \colon A \hookrightarrow B$ with the following property. For an element $x = (x_i)^m_{i=1} \in A$, the image $\phi_M(x)$ of $x$ has the form $(y_j)^n_{j=1} \in B$ where for each $j \le n$, the matrix $y_j$ is block-diagonal with $m_{i,j}$ copies of each $x_i$ along the diagonal and $0$'s elsewhere. (Inequality~(\ref{eq:dimensions fit}) ensures that this is possible.) The map $\phi_M$ is not uniquely determined by this property, but its unitary equivalence class is. Every inclusion $\phi$ of $A$ into $B$ is unitarily equivalent to $\phi_M$ for some matrix $M$.
Specifically, $M=(m_{i,j})_{i,j}$ is the matrix such that $m_{i,j}$ is equal to the rank of $1_{B_j} \phi(p_i)$ where $1_{B_j}$ is the unit for the $j$\textsuperscript{th} summand of $B$, and where $p_i$ is any rank-1 projection in the $i$\textsuperscript{th} summand of $A$. We refer to $M$ as the \emph{multiplicity matrix} of the inclusion $\phi$. \begin{definition} \label{Brat-diag-defn} A \emph{Bratteli diagram} $(E,d)$ consists of a directed graph $E = (E^0,E^1,r,s)$ together with a collection $d = \{ d_v : v \in E^0 \}$ of positive integers satisfying the following conditions. \begin{enumerate} \item[(1)] $E$ has no sinks; \item[(2)] $E^0$ is partitioned as a disjoint union $E^0 = \bigsqcup_{n=1}^\infty V_n$ where each $V_n$ is a finite set; \item[(3)] for each $e \in E^1$ there exists $n \in \mathbb{N}$ such that $s(e) \in V_n$ and $r(e) \in V_{n+1}$; and \item[(4)] for each vertex $v \in E^0$ we have $d_v \geq \sum_{e \in E^1v} d_{s(e)}$. \end{enumerate} \end{definition} If $(E,d)$ is a Bratteli diagram, then $E$ is a row-finite graph with no sinks. We regard $d$ as a labeling of the vertices by positive integers, so to draw a Bratteli diagram we sometimes just draw the directed graph, replacing each vertex $v$ by its label $d_v$. \begin{remark} Those experienced with Bratteli diagrams will notice that our definition of a Bratteli diagram is slightly nonstandard. Specifically, a Bratteli diagram is traditionally specified as an undirected graph in which each edge connects vertices in consecutive levels. Of course, an orientation of the edges is then implicitly chosen by the decomposition $E^0 = \bigsqcup V_n$, so it makes no difference if we instead draw a directed edge pointing from the vertex in level $n$ to the vertex in level $n+1$. \end{remark} \begin{example} \label{Bratteli-Ex-one} The following is an example of a Bratteli diagram: \[\begin{tikzpicture}[xscale=1.5] \node[inner sep=1pt] (b1) at (1,0) {1}; \node[inner sep=1pt] (a2) at (2,1) {4}; \node[inner sep=1pt] (b2) at (2,0) {1}; \node[inner sep=1pt] (c2) at (2,-1) {1}; \node[inner sep=1pt] (a3) at (3,1) {8}; \node[inner sep=1pt] (b3) at (3,0) {2}; \node[inner sep=1pt] (c3) at (3,-1) {1}; \node[inner sep=1pt] (a4) at (4,1) {16}; \node[inner sep=1pt] (b4) at (4,0) {3}; \node[inner sep=1pt] (c4) at (4,-1) {1}; \node[inner sep=1pt] (a5) at (5,1) {32}; \node[inner sep=1pt] (b5) at (5,0) {4}; \node[inner sep=1pt] (c5) at (5,-1) {1}; \node[inner sep=1pt] (a6) at (6,1) {64}; \node[inner sep=1pt] (b6) at (6,0) {5}; \node[inner sep=1pt] (c6) at (6,-1) {1}; \node[inner sep=1pt] (a7) at (7,1) {$\cdots$}; \node[inner sep=1pt] (b7) at (7,0) {$\cdots$}; \node[inner sep=1pt] (c7) at (7,-1) {$\cdots$}; \draw[-latex] (b1)--(b2); \draw[-latex] (b1)--(a2); \foreach \x/\xx in {2/3,3/4,4/5,5/6,6/7} { \draw[-latex] (a\x.north east) .. controls (\x.5,1.25) .. (a\xx.north west); \draw[-latex] (a\x.south east) .. controls (\x.5,0.75) .. (a\xx.south west); \draw[-latex] (b\x)--(b\xx); \draw[-latex] (c\x)--(b\xx); \draw[-latex] (c\x)--(c\xx); } \end{tikzpicture}\] \end{example} \vskip1ex Given a Bratteli diagram $(E,d)$, we construct an AF-algebra $A$ as follows. For each $v \in E^0$, let $A_v$ be an isomorphic copy of $M_{d_v}(\mathbb{C})$, and for each $n \in \mathbb{N}$, let $A_n := \bigoplus_{v \in V_n} A_v$. For each $n$ let $\phi_n \colon A_n \to A_{n+1}$ be the homomorphism whose multiplicity matrix is $( |vE^1w| )_{v \in V_n, w \in V_{n+1}}$. We then define $A$ to be the direct limit $\varinjlim(A_n,\phi_n)$.
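(Condition~(4) of Definition~\ref{Brat-diag-defn} ensures that inequality~(\ref{eq:dimensions fit}) is satisfied for these multiplicity matrices: for each $w \in V_{n+1}$ we have $\sum_{v \in V_n} |vE^1w|\, d_v = \sum_{e \in E^1 w} d_{s(e)} \leq d_w$, so the homomorphisms $\phi_n$ exist.)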
Since the $\phi_n$ are determined up to unitary equivalence by $(E,d)$, the isomorphism class of $A$ is also uniquely determined by $(E,d)$. \begin{example} In Example~\ref{Bratteli-Ex-one}, we see that \begin{align*} A_1 & = \mathbb{C} & \phi_1(x) & = \left( \begin{smallmatrix} x & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{smallmatrix} \right) \oplus x \oplus 0 \\ A_2 &= M_4 (\mathbb{C}) \oplus \mathbb{C} \oplus \mathbb{C} & \phi_2(x, y, z) &= \left( \begin{smallmatrix} x & 0 \\ 0 & x \end{smallmatrix} \right) \oplus \left( \begin{smallmatrix} y & 0 \\ 0 & z \end{smallmatrix} \right) \oplus z \\ A_3 &= M_8(\mathbb{C}) \oplus M_2(\mathbb{C}) \oplus \mathbb{C} & \phi_3(x, y, z) &= \left( \begin{smallmatrix} x & 0 \\ 0 & x \end{smallmatrix} \right) \oplus \left( \begin{smallmatrix} y & 0 \\ 0 & z \end{smallmatrix} \right) \oplus z \\ & \ \vdots & & \ \vdots \\ A_n &= M_{2^{n}} (\mathbb{C}) \oplus M_{n-1} (\mathbb{C}) \oplus \mathbb{C} & \phi_n(x, y, z) &= \left( \begin{smallmatrix} x & 0 \\ 0 & x \end{smallmatrix} \right) \oplus \left( \begin{smallmatrix} y & 0 \\ 0 & z \end{smallmatrix} \right) \oplus z \\ &\ \vdots & & \ \vdots \end{align*} \end{example} The following \emph{telescoping} operation on a Bratteli diagram preserves the associated AF-algebra. Given $(E,d)$, we choose an increasing subsequence $\{n_m\}_{m=1}^\infty$ of $\mathbb{N}$. The set of the vertices of the new Bratteli diagram is $\bigcup_{m=1}^\infty V_{n_m}$, the set of the edges of the new Bratteli diagram is $\bigcup_{m=1}^\infty (V_{n_m}E^*V_{n_{m+1}})$, and the new function $d$ is the restriction of the old $d$ to $\bigcup_{m=1}^\infty V_{n_m}$. For example, if we have the portion of a Bratteli diagram shown below on the left and remove the middle column of vertices, we obtain the portion of the Bratteli diagram shown below on the right. \[\begin{tikzpicture}[scale=1.5] \node[circle,inner sep=1pt] (a0) at (0,2) {1}; \node[circle,inner sep=1pt] (c0) at (0,0) {1}; \node[circle,inner sep=1pt] (a1) at (1,2) {1}; \node[circle,inner sep=1pt] (b1) at (1,1) {3}; \node[circle,inner sep=1pt] (c1) at (1,0) {4}; \node[circle,inner sep=1pt] (a2) at (2,2) {4}; \node[circle,inner sep=1pt] (c2) at (2,0) {10}; \draw[-latex] (a0)--(a1); \draw[-latex] (a0)--(b1); \draw[-latex] (c0)--(c1); \draw[-latex] (a1)--(a2); \draw[-latex] (b1)--(a2); \draw[color=white] (b1)--(c2) node[pos=0.5,inner sep=0.1cm] (b1c2) {}; \draw[-latex] (b1) .. controls (b1c2.north east) .. (c2); \draw[-latex] (b1) .. controls (b1c2.south west) .. (c2); \draw[-latex] (c1)--(c2); \node at (3.5,1) {\Large$\rightsquigarrow$}; \node[circle,inner sep=1pt] (x0) at (5,2) {1}; \node[circle,inner sep=1pt] (y0) at (5,0) {1}; \node[circle,inner sep=1pt] (x2) at (7,2) {4}; \node[circle,inner sep=1pt] (y2) at (7,0) {10}; \draw[color=white] (x0)--(x2) node[pos=0.5,inner sep=0.2cm] (x0x2) {}; \draw[-latex] (x0) .. controls (x0x2.north) .. (x2); \draw[-latex] (x0) .. controls (x0x2.south) .. (x2); \draw[color=white] (x0)--(y2) node[pos=0.5,inner sep=0.2cm] (x0y2) {}; \draw[-latex] (x0) .. controls (x0y2.north east) .. (y2); \draw[-latex] (x0) .. controls (x0y2.south west) .. (y2); \draw[-latex] (y0)--(y2); \end{tikzpicture}\] \vskip1ex We say that two Bratteli diagrams $(E,d)$ and $(E',d')$ are equivalent if there is a finite sequence $(E_1, d_1), \ldots, (E_n, d_n)$ such that $(E_1, d_1) = (E,d)$, $(E_n, d_n) = (E', d')$ and for each $1 \le i \le n-1$, one of $(E_i, d_i)$ and $(E_{i+1}, d_{i+1})$ is a telescope of the other. 
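The reason telescoping does not change the associated AF-algebra is that multiplicity matrices compose multiplicatively: for $v \in V_n$ and $w \in V_{n+2}$, the composite inclusion $\phi_{n+1} \circ \phi_n$ has multiplicity $\sum_{u \in V_{n+1}} |vE^1u|\,|uE^1w| = |vE^2w|$ from the summand indexed by $v$ to the summand indexed by $w$, and more generally the edge set $V_{n_m}E^*V_{n_{m+1}}$ of the telescoped diagram records exactly the multiplicities of the composite inclusions $A_{n_m} \hookrightarrow A_{n_{m+1}}$. In the diagrams pictured above, for instance, the two paths from the top-left vertex labeled $1$ to the top-right vertex labeled $4$ correspond to the two edges drawn between the corresponding vertices on the right.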
Bratteli proved in \cite{Bra} that two Bratteli diagrams give rise to isomorphic AF-algebras if and only if the diagrams are equivalent (see \cite[\S 1.8]{Bra} and \cite[Theorem~2.7]{Bra} for details). The class of AF-algebras is closed under forming ideals and quotients. On the other hand, the three classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras are not closed under forming ideals nor quotients. However we can show the following. \begin{lemma}\label{ideal_quotient} The class of graph AF-algebras is closed under forming ideals and quotients. \end{lemma} \begin{proof} If $E$ is a graph and the graph $C^*$-algebra $C^*(E)$ is an AF-algebra, then $E$ has no cycles by \cite[Theorem~2.4]{KPR}. Thus $E$ vacuously satisfies Condition~(K), and it follows that every ideal of $C^*(E)$ is gauge-invariant by \cite[Corollary~3.8]{BHRS}. Thus every ideal of $C^*(E)$ as well as its quotient is a graph $C^*$-algebra by \cite[Lemma~1.6]{DHS} and \cite[Theorem~3.6]{BHRS}. \end{proof} \begin{remark} A quotient of an Exel-Laca AF-algebra need not be an Exel-Laca algebra. For example, if $\MU{\mathcal{K}}$ is the minimal unitization of the compact operators $\mathcal{K}$ on a separable infinite-dimensional Hilbert space, then $M_2(\MU{\mathcal{K}})$ is an Exel-Laca AF-algebra that has a quotient, $M_2(\mathbb{C})$, that is not an Exel-Laca algebra --- for details see Example~\ref{E-L-quotient-not-Ex} and Corollary~\ref{No-fin-dim-E-L-Cor}. Whether ideals of Exel-Laca AF-algebras are necessarily Exel-Laca algebras is an open question. We also do not know whether ideals and quotients of ultragraph AF-algebras are necessarily ultragraph $C^*$-algebras. As we shall see later, this uncertainty causes problems in the analyses of Exel-Laca AF-algebras and ultragraph AF-algebras. \end{remark} \begin{lemma} The three classes of graph AF-algebras, Exel-Laca AF-algebras, and ultragraph AF-algebras are closed under taking direct sums. \end{lemma} \begin{proof} Each of the four classes of AF-algebras, graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras is closed under forming direct sums. The result follows. \end{proof} \section{Some technical lemmas} \label{lemmas-sec} In this section we establish some technical results for Bratteli diagrams and inclusions of finite-dimensional $C^*$-algebras. We will use these technical results to prove many of our realization results in \S\ref{results-sec}. \begin{lemma} \label{fin-quotient-lem-1} Suppose $A$ is an AF-algebra that has no quotients isomorphic to $\mathbb{C}$, and suppose that $(E,d)$ is a Bratteli diagram for $A$. Let $H = \{ v \in E^0 : d_v = 1 \}$, and let $F$ be the subgraph of $E$ such that $F^0 := E^0 \setminus H$ and $F^1 := \{e \in E^1 : s(e) \not\in H\}$ with $r, s \colon F^1 \to F^0$ inherited from $E$. Let $d \colon F^0 \to \mathbb{N}$ be the restriction of $d \colon E^0 \to \mathbb{N}$. Then $(F,d)$ is a Bratteli diagram for $A$. \end{lemma} \begin{proof} First note that if $e \in E^1$ with $r(e) \in H$, then $d_{r(e)}=1$ and hence $d_{s(e)} = 1$ and $s(e) \in H$. Hence $F$ is in fact a subgraph of $E$. We claim that for any $n \in \mathbb{N}$ and $v \in V_n$, there exists $m \in \mathbb{N}$ such that whenever $w \in V_{n+m}$ and $v \geq w$, we have $d_w \geq 2$. We fix $n \in \mathbb{N}$ and $v \in V_n$, suppose that there is no such $m$, and seek a contradiction. Let $v_0 := v$. 
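Observe that condition~(4) of Definition~\ref{Brat-diag-defn} forces $d$ to be nondecreasing along paths: for each edge $e$ we have $d_{r(e)} \geq \sum_{f \in E^1 r(e)} d_{s(f)} \geq d_{s(e)}$, so $u \geq w$ implies $d_u \leq d_w$. In particular, every vertex that lies above a vertex with $d$-value $1$ itself has $d$-value $1$.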
Inductively choose $e_i \in E^1$ such that $s(e_i) = v_{i-1}$ and such that for each $m \in \mathbb{N}$ there exists $w \in V_{n+i+m}$ with $r(e_i) \ge w$ and $d_w = 1$, setting $v_i := r(e_i)$. Then the infinite path $e_1e_2 \ldots$ satisfies $d_{s(e_n)} = 1$ for all $n$, since each $s(e_n)$ lies above a vertex with $d$-value $1$. Hence $\{ x \in E^0 : x \not\geq s(e_n) \text{ for any $n$} \}$ is a saturated hereditary subset and the quotient of $A$ by the corresponding ideal is an AF-algebra with Bratteli diagram \[\begin{tikzpicture}[xscale=1.5] \foreach \x in {1,2,3,4,5} { \node[inner sep=1pt] (\x) at (\x,0) {$\ 1 \ $}; } \node (6) at (6,0) {$\cdots$}; \foreach \x/\xx in {1/2,2/3,3/4,4/5,5/6} { \draw[-latex] (\x)--(\xx); } \end{tikzpicture}\] Hence this quotient is isomorphic to $\mathbb{C}$, which contradicts our hypothesis on $A$. This establishes the claim. Let $B$ be the AF-algebra associated to the Bratteli diagram $F$, and let $\iota_n \colon B_n \to A_n$ denote the obvious inclusion of the $n$\textsuperscript{th} approximating subalgebra of $B$ determined by $F$ into the $n$\textsuperscript{th} approximating subalgebra of $A$ determined by $E$. Let $\phi^E_{n,m} \colon A_n \to A_m$ be the connecting maps in the directed system associated to $E$, and let $\phi^E_{n, \infty} \colon A_n \to A$ be the inclusion of $A_n$ into the direct limit algebra $A$. Likewise, let $\phi^F_{n,m} \colon B_n \to B_m$ be the connecting maps in the directed system associated to $F$, and let $\phi^F_{n, \infty} \colon B_n \to B$ be the inclusion of $B_n$ into the direct limit algebra $B$. We see that $\phi_{n,n+1}^E \circ \iota_n = \iota_{n+1} \circ \phi_{n,n+1}^F$ for all $n$, and thus by the universal property of the direct limit $B = \varinjlim(B_n, \phi^F_{n,n+1})$, there is a $*$-ho\-mo\-mor\-phism $\iota_\infty \colon B \to A$ with $\phi_{n, \infty}^E \circ \iota_n = \iota_\infty \circ \phi_{n, \infty}^F$. Since each $\iota_n$ is injective, it follows that $\iota_\infty$ is injective. We shall show that $\iota_\infty$ is also surjective, and hence an isomorphism. It suffices to show that for any $v \in V_n$ and for any $a$ in the direct summand $A_v$ of $A_n$ corresponding to $v$, we have $\phi^E_{n,\infty}(a) \in \operatorname{im} \iota_\infty$. By the claim established above, we may choose $m$ so that whenever $w \in V_{n+m}$ and $v \geq w$, then $d_w \geq 2$. It follows that \[ \phi^E_{n, n+m}(a) \in \bigoplus_{\substack{w \in V_{n+m} \\ d_w \geq 2}} M_{d_w} (\mathbb{C}) \subseteq \iota_{n+m} (B_{n+m}), \] so that $\phi^E_{n, n+m}(a) = \iota_{n+m} (b)$ for some $b \in B_{n+m}$. Thus \[ \phi^E_{n,\infty} (a) = \phi^E_{n+m, \infty} \circ \phi^E_{n, n+m}(a) = \phi^E_{n+m, \infty} \circ \iota_{n+m} (b) = \iota_\infty \circ \phi^F_{n+m, \infty} (b) \in \operatorname{im} \iota_\infty \] and $\iota_\infty$ is surjective. Hence $\iota_\infty$ is an isomorphism as required. \end{proof} \begin{lemma} \label{fin-quotient-lem-2} Suppose $A$ is an AF-algebra with no nonzero finite-dimensional quotients. Then any Bratteli diagram for $A$ can be telescoped to obtain a second Bratteli diagram $(E,d)$ for $A$ such that for all $n \in \mathbb{N}$ and for each $v \in V_{n+1}$ either $d_v > \sum_{e \in E^1v} d_{s(e)}$ or there exists $w \in V_n$ with $|wE^1v| \geq 2$. \end{lemma} \begin{proof} Let $(F,d)$ be a Bratteli diagram for $A$ with $F^0$ partitioned into levels as $F^0 = \bigsqcup_{n=1}^\infty W_n$.
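The idea is to telescope $(F,d)$ to a recursively chosen sequence of levels $n_1 < n_2 < \cdots$; since the edges of the telescoped diagram from level $k$ to level $k+1$ are precisely the paths in $W_{n_k} F^* W_{n_{k+1}}$, it is enough to arrange the required property between consecutive chosen levels.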
It suffices to show that for every $m$ there exists $n \ge m$ such that for every $v \in W_n$ satisfying $d_v = \sum_{\alpha \in W_m F^* v} d_{s(\alpha)}$, there exists $w \in W_m$ with $|w F^* v| \ge 2$. We suppose not, and seek a contradiction. That is, we suppose that there exists $m$ such that for every $n \ge m$ the set \[ X_n := \Big\{x \in W_n : d_x = \sum_{\alpha \in W_m F^* x} d_{s(\alpha)}\text{ and } |wF^* x| \le 1\text{ for all }w \in W_m\Big\} \] is nonempty. By telescoping $(F,d)$ to $\bigsqcup_{n=m}^\infty W_n$ we may assume $m=1$. We claim that if $n \le p$, $x \in X_p$, and $v \in W_n$ with $v \ge x$, then $v \in X_n$. Indeed, \begin{align} d_x &= \sum_{\alpha \in W_1 F^* x} d_{s(\alpha)}\notag \\ &= \sum_{\beta \in W_n F^* x}\Big(\sum_{\gamma \in W_1 F^* s(\beta)} d_{s(\gamma)}\Big) \label{line1} \\ &\le \sum_{\beta \in W_n F^* x} d_{s(\beta)} \label{line2} \\ &\le d_x. \notag \end{align} Thus we have equality throughout, and the equality of \eqref{line1}~and~\eqref{line2} implies $d_{s(\beta)} = \sum_{\gamma \in W_1 F^* s(\beta)} d_{s(\gamma)}$ for each $\beta \in W_n F^* x$. In particular, since $v \ge x$, we have that $d_v = \sum_{\gamma \in W_1 F^* v} d_{s(\gamma)}$. Moreover for each $w \in W_1$, \[ 1 \ge |w F^* x| \ge |w F^* v| \, |v F^* x|, \] so $v \ge x$ implies that $|w F^* v| \le 1$, and $v \in X_n$ as required. We shall now construct an infinite path $\lambda = \lambda_1 \lambda_2 \ldots$ in $F$ such that $s(\lambda_n) \in X_{n}$ for all $n$. If $x \in X_{n}$, then since $d_x$ is nonzero and $d_x = \sum_{\alpha \in W_1 F^* x} d_{s(\alpha)}$, there exists $w \in W_1$ such that $w \ge x$. Since $W_1$ is finite, there exists $w_1 \in W_1$ such that for infinitely many $n$ there exists $x \in X_{n}$ with $w_1 \geq x$. Since $w_1F^1$ is finite, there exists $\lambda_1 \in w_1F^1$ such that for infinitely many $n$, we have $r(\lambda_1) \ge x$ for some $x \in X_{n}$. We set $w_2 := r(\lambda_1)$ which is in $X_{2}$ by the claim above. Proceeding in this way, we produce an infinite path $\lambda = \lambda_1 \lambda_2 \ldots$ in $F$ such that $s(\lambda_n) \in X_{n}$ for all $n$. For each $w \in W_1$ such that $w \ge s(\lambda_n)$ for some $n$, we define $n_w := \min\{n : w \ge s(\lambda_n)\}$. Let $N := \max\{n_w : \text{$w \in W_1$ and $w \ge s(\lambda_n)$ for some $n$}\}$. We claim that $F^1 r(\lambda_n) = \{ \lambda_n \}$ for all $n \geq N$. Fix $n \geq N$, and $e \in F^1 r(\lambda_n)$. Since $r(\lambda_n) = s(\lambda_{n+1}) \in X_{n+1}$, we have $s(e) \in X_{n}$. Hence $W_1 F^* s(e)$ is nonempty, so we may fix $\beta \in W_1 F^* s(e)$. Now $\beta e$ is the unique path in $s(\beta) F^* r(\lambda_n)$ by definition of $X_{n+1}$. Let $\alpha$ be the unique path from $s(\beta)$ to $s(\lambda_{n_{s(\beta)}})$. Since $n_{s(\beta)} \le N \le n$, we have $\alpha \lambda_{n_{s(\beta)}} \lambda_{n_{s(\beta)} + 1} \ldots \lambda_{n}$ in $s(\beta) F^* r(\lambda_n)$, and the uniqueness of this path then forces $\beta e = \alpha \lambda_{n_{s(\beta)}} \lambda_{n_{s(\beta)} + 1} \ldots \lambda_{n}$, and in particular $e = \lambda_{n}$. Thus $F^1 r(\lambda_n) = \{ \lambda_n \}$ as required. Since $F^1 r(\lambda_n) = \{ \lambda_n \}$, we have $W_1 F^* r(\lambda_n) = W_1 F^* \lambda_n = \{\beta\lambda_n : \beta \in W_1 F^*s(\lambda_n)\}$. 
Hence that $r(\lambda_n) \in X_{n+1}$ and that $s(\lambda_n) \in X_{n}$ imply that \[ d_{r(\lambda_n)}= \sum_{\alpha \in W_1 F^* r(\lambda_n)}d_{s(\alpha)} = \sum_{\beta \in W_1 F^* s(\lambda_n)}d_{s(\beta)} = d_{s(\lambda_{n})} \] for all $n \geq N$. This implies $d_{s(\lambda_{n})} =d_{s(\lambda_{N})}$ for all $n \ge N$. Moreover, $\{ y \in F^0 : \text{$y \not\geq s(\lambda_n)$ for all $n$} \}$ is a saturated hereditary subset, and the quotient of $A$ by the ideal corresponding to this set is an AF-algebra with a Bratteli diagram of the form \[\begin{tikzpicture}[xscale=2] \foreach \x in {1,2,3,4,5} { \node[inner sep=1pt] (\x) at (\x,0) {$d_{s(\lambda_{N})}$}; } \node (6) at (6,0) {$\cdots$}; \foreach \x/\xx in {1/2,2/3,3/4,4/5,5/6} { \draw[-latex] (\x)--(\xx); } \end{tikzpicture}\] Hence this quotient is isomorphic to $M_{d_{s(\lambda_{N})}}(\mathbb{C})$, which contradicts the hypothesis that $A$ has no finite-dimensional quotients. \end{proof} \begin{lemma} \label{f-d-quotient-char-lem} Let $A$ be an AF-algebra. Then $A$ has no nonzero finite-dimensional quotients if and only if there exists a Bratteli diagram $(E,d)$ for $A$ satisfying the following two properties: \begin{itemize} \item[(1)] $d_v \geq 2$ for all $v \in E^0$; and \item[(2)] for all $n \in \mathbb{N}$ and for each $v \in V_{n+1}$ either $d_v > \sum_{e \in E^1v} d_{s(e)}$ or there exists $w \in V_n$ with $|w E^1v| \geq 2$. \end{itemize} \end{lemma} \begin{proof} If $A$ has no nonzero finite-dimensional quotients, then by Lemma~\ref{fin-quotient-lem-1} there exists a Bratteli diagram for $A$ satisfying~(1). Lemma~\ref{fin-quotient-lem-2} shows that this Bratteli diagram may be telescoped to obtain a Bratteli diagram for $A$ satisfying~(2). The vertices of the telescoped diagram are a subset of those of the original diagram, and the values of $d_v$ are the same for those vertices $v$ common to both. In particular, telescoping preserves property~(1), so the telescoped Bratteli diagram will then satisfy both (1)~and~(2). Conversely, suppose that there exists a Bratteli diagram $(E,d)$ for $A$ satisfying (1) and (2). If $I$ is a proper ideal of $A$, then $I$ corresponds to a saturated hereditary subset $H$, and the complement $(E\setminus H, d)$ of $H$ in $(E,d)$ is a Bratteli diagram for $A/I$. Fix a vertex $v$ in this complement. Since $H$ is saturated hereditary, there exists an edge $e_1 \in E^1$ with $s(e_1) = v$ and $r(e_1)$ in the complement also. Inductively, we may produce an infinite path $e_1 e_2 \ldots$ in the complement. It follows from property~(2) that $d_{s(e_i)} < d_{s(e_{i+1})}$ for all $i$, which implies that the function $d \colon (E\setminus H)^0 \to \mathbb{N}$ is unbounded. Hence $A/I$ is infinite dimensional. \end{proof} \begin{lemma} \label{unital-quotient-lem} Suppose $A$ is an AF-algebra with no unital quotients. Then any Bratteli diagram for $A$ can be telescoped to obtain a second Bratteli diagram $(E,d)$ for $A$ such that for all $v \in E^0$ we have $d_v > \sum_{e \in E^1v}d_{s(e)}$. \end{lemma} \begin{proof} Let $(F,d)$ be a Bratteli diagram for $A$ with $F^0$ partitioned into levels as $F^0 = \bigsqcup_{n=1}^\infty W_n$. It suffices to show that for every $m$ there exists $n \ge m$ such that for every $v \in W_n$ we have $d_v > \sum_{\alpha \in W_m F^* v} d_{s(\alpha)}$. Suppose not, and seek a contradiction. That is, we suppose that there exists $m$ such that for every $n \ge m$ the set \[ Y_n := \Big\{ x \in W_n : d_x = \sum_{\alpha \in W_m F^*x} d_{s(\alpha)} \Big\} \] is nonempty. 
By telescoping $(F,d)$ to $\bigsqcup_{n=m}^\infty W_n$ we may assume $m=1$. If we let \[ T := \{ w \in F^0 : \text{for infinitely many $n$ there exists $x \in Y_{n}$ with $w \geq x$} \}, \] then the complement of $T$ is a saturated hereditary subset, and the quotient of $A$ by the ideal corresponding to this complement has a Bratteli diagram obtained by restricting to the vertices in $T$. Arguing as in the proof of Lemma~\ref{fin-quotient-lem-2}, one can show that if $n \leq p$, $x \in Y_p$, and $v \in W_n$ with $v \geq x$, then $v \in Y_n$. Hence each $v \in T \cap W_n$ is in $Y_n$. This implies that each $v \in T$ has the property that $d_v = \sum_{e \in F^1v} d_{s(e)}$, and hence all the inclusions in the corresponding directed system are unital. Thus the quotient of $A$ considered above is unital. This contradicts the hypothesis that $A$ has no unital quotients. \end{proof} \begin{lemma} \label{unital-quotient-char-lem} Let $A$ be an AF-algebra. Then $A$ has no unital quotients if and only if $A$ has a Bratteli diagram $(E,d)$ such that for all $v \in E^0$ we have both $d_v \geq 2$ and $d_v > \sum_{e \in E^1v} d_{s(e)}$. \end{lemma} \begin{proof} If $A$ has no unital quotients, then $A$ has no nonzero finite-dimensional quotients, and the existence of such a Bratteli diagram follows from Lemma~\ref{f-d-quotient-char-lem} and Lemma~\ref{unital-quotient-lem} (as in the proof of Lemma~\ref{f-d-quotient-char-lem}, the telescoping preserves the property $d_v \geq 2$). Conversely, suppose that $A$ has such a Bratteli diagram $(E,d)$, and fix a nonzero quotient $A/I$ of $A$. There is a subdiagram $(F,d)$ of $(E,d)$ which is a Bratteli diagram for $A/I$. In particular $d_v > \sum_{e \in F^1 v} d_{s(e)}$ for all $v \in F^0$. It follows that the inclusions in the direct limit decomposition of $A/I$ corresponding to $(F,d)$ are all nonunital. Hence $A/I$ is nonunital. \end{proof} \begin{lemma} \label{Evil-Lemma-:-(} Let $A$ be a $C^*$-algebra which is generated by finite-dimensional subalgebras $B$ and $C$. Suppose that $B = \bigoplus_{v\in V} B^v$ where each $B^v \cong M_{b_v}(\mathbb{C})$ and that $C = \bigoplus_{w\in W} C^w$ where each $C^w \cong M_{c_w}(\mathbb{C})$. For each $v\in V$ suppose that $q^v$ is a minimal projection in $B^v$ such that $q^v \in C$ and $(1_{B^v} - q^v) C = \{0\}$. For each $v,w$, let $m_{v,w}$ denote the rank of $q^v 1_{C^w}$ in $C^w$, and let \[ a_w := c_w + \sum_{v\in V}(b_v-1) m_{v,w}. \] Then $A = \bigoplus_{w\in W} A^w$ where each $A^w \cong M_{a_w}(\mathbb{C})$. Moreover, the inclusion $C^w \hookrightarrow A^w$ has multiplicity $1$ for $w\in W$, and the inclusion $B \hookrightarrow A$ has multiplicity matrix $(m_{v,w})_{v\in V,w\in W}$. Finally, the unit $1_A$ of $A$ is equal to $\big(1_B - \sum_{v\in V}q^v\big) + 1_C$. \end{lemma} \begin{proof} The assumptions on the $q^v$ imply that $\big(1_B - \sum_{v\in V}q^v\big) + 1_C$ is the unit of $A$. To obtain the desired decomposition of $A$, we construct a family of matrix units for $A$. We begin by fixing convenient systems of matrix units for the $B^v$ and the $C^w$. For $v \in V$, let $\{\beta^v_{r,s} : 0 \le r,s \le b_v-1\}$ be a family of matrix units for $B^v$ such that $\beta^v_{0,0} = q^v$. Similarly, for $w \in W$ let $\{\gamma^w_{k,l} : 0 \le k,l \le c_w-1\}$ be a family of matrix units for $C^w$ such that for each $v \in V$ there exists a subset $\kappa_{v,w} \subset \{0,1,\ldots,c_w-1\}$ satisfying $q^v 1_{C^w} = \sum_{k \in \kappa_{v,w}}\gamma^w_{k,k}$. Note that the subsets $\{\kappa_{v,w}\}_{v \in V}$ of $\{0,1,\ldots,c_w-1\}$ are mutually disjoint and satisfy $|\kappa_{v,w}|=m_{v,w}$.
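(Such a choice of matrix units for $C^w$ is possible because the projections $\{q^v 1_{C^w}\}_{v \in V}$ are mutually orthogonal in $C^w$: the $q^v$ lie in pairwise orthogonal summands of $B$, so the finitely many nonzero $q^v 1_{C^w}$ can be simultaneously diagonalized by a single system of matrix units for $C^w \cong M_{c_w}(\mathbb{C})$.)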
We are now ready to define the desired matrix units for $A$; these matrix units will be indexed by the set \[ I_w := \big(\{0,1,\ldots,c_w-1\} \times \{0\}\big) \sqcup \bigsqcup_{v\in V} \big(\kappa_{v,w} \times \{1,2, \ldots, b_v-1\}\big) \] for $w\in W$. We have $|I_w| = c_w + \sum_{v\in V} |\kappa_{v,w}| (b_v-1) = a_w$. Define elements $\{\alpha^w_{(k,r),(l,s)} : w \in W,\ (k,r),(l,s) \in I_w\}$ by \[ \alpha^w_{(k,r),(l,s)} := \begin{cases} \gamma^w_{k,l} &\text{ if $r=s=0$}\\ \gamma^w_{k,l} \beta^v_{0,s} &\text{ if $r=0$, $l \in \kappa_{v,w}$ and $s\geq 1$} \\ \beta^{v'}_{r,0} \gamma^w_{k,l} &\text{ if $k \in \kappa_{v',w}$, $r\geq 1$ and $s=0$} \\ \beta^{v'}_{r,0} \gamma^w_{k,l} \beta^v_{0,s} &\text{ if $k \in \kappa_{v',w}$, $r\geq 1$, $l \in \kappa_{v,w}$ and $s\geq 1$}. \end{cases} \] We first claim that for each $w,w' \in W$, each $(k,r),(l,s) \in I_w$ and each $(k',r'),(l',s') \in I_{w'}$, \begin{equation}\label{eq:alpha matuits} \alpha^w_{(k,r),(l,s)} \alpha^{w'}_{(k',r'),(l',s')} = \begin{cases} \alpha^w_{(k,r),(l',s')} &\text{ if $w = w'$ and $(l,s) = (k',r')$}\\ 0 &\text{ otherwise.} \end{cases} \end{equation} To verify \eqref{eq:alpha matuits}, we consider four cases. \noindent\textsc{Case~1:} $s=r'=0$. Since $\gamma^w_{k,l}$ are matrix units and since the $C^w$ are orthogonal, we have \[ \gamma^w_{k,l}\gamma^{w'}_{k',l'}=\begin{cases} \gamma^w_{k,l'} &\text{ if $w = w'$ and $l = k'$}\\ 0 &\text{ otherwise.} \end{cases} \] This implies \eqref{eq:alpha matuits} in the case $s=r'=0$. \noindent\textsc{Case~2:} $s \ge 1$ and $r'=0$. Then $\beta^v_{0,s} \gamma^{w'}_{k',l'} = \beta^v_{0,s} \beta^v_{s,s}\gamma^{w'}_{k',l'} = 0$ because $\beta^v_{s,s} \le \sum_{s=1}^{b_v-1}\beta^v_{s,s} = 1_{B^v} - q^v$ which is orthogonal to $C$ by assumption. This shows $\alpha^w_{(k,r),(l,s)} \alpha^{w'}_{(k',r'),(l',s')} = 0$. \noindent\textsc{Case~3:} $s = 0$ and $r' \ge 1$. This case follows from Case~2 by taking adjoints. \noindent\textsc{Case~4:} $s \ge 1$ and $r' \ge 1$. Then \[ \gamma^w_{k,l} \beta^v_{0,s} \beta^{v'}_{r',0} \gamma^{w'}_{k',l'} = \begin{cases} \gamma^w_{k,l} \beta^v_{0,0} \gamma^{w'}_{k',l'} &\text{ if $v = v'$ and $s = r'$}\\ 0 &\text{ otherwise.} \end{cases} \] Since $\gamma^w_{k,l}\beta^v_{0,0} =\gamma^w_{k,l}$ we have \[ \gamma^w_{k,l} \beta^v_{0,0} \gamma^{w'}_{k',l'} =\gamma^w_{k,l} \gamma^{w'}_{k',l'} = \begin{cases} \gamma^w_{k,l'} &\text{ if $w = w'$ and $l = k'$}\\ 0 &\text{ otherwise.} \end{cases} \] These show~\eqref{eq:alpha matuits} in case~4, completing the proof of the claim. For each $w \in W$, let $A^w:=\operatorname{span} \{\alpha^w_{(k,r),(l,s)} : (k,r),(l,s) \in I_w\}\subset A$. From \eqref{eq:alpha matuits}, we see that $A^w$ is isomorphic to $M_{a_w}(\mathbb{C})$ for each $w \in W$, and that $\{A^w\}_{w\in W}$ are orthogonal to each other. We next show that $A = \sum_{w \in W}A^w$. To see this, it suffices to show that all the matrix units $\beta^v_{r,s}$ and $\gamma^w_{k,l}$ for $B$ and $C$ belong to $\sum_{w \in W}A^w$. If $l \in \kappa_{v,w}$, then \[ \gamma^w_{k,l}\beta^v_{0,0} =(\gamma^w_{k,l}1_{C^{w}})q^v =\gamma^w_{k,l}\Big(\sum_{l' \in \kappa_{v,w}}\gamma^{w}_{l',l'}\Big) =\gamma^w_{k,l}. \] Similarly, we get $\beta^{v'}_{0,0}\gamma^w_{k,l}=\gamma^w_{k,l}$ if $k \in \kappa_{v',w}$. We may deduce from these two equalities that $\alpha^w_{(k,r),(l,s)}= \beta^{v'}_{r,0}\gamma^w_{k,l}\beta^v_{0,s}$ for all $k \in \kappa_{v',w}$, all $r\geq 0$, all $l \in \kappa_{v,w}$ and all $s\geq 0$. 
For each $v \in V$, we have \[ \beta^v_{0,0} = q^v = \sum_{w \in W}q^v 1_{C^w} = \sum_{w \in W}\sum_{k \in \kappa_{v,w}}\gamma^w_{k,k}. \] It follows that \[ \beta^v_{r,s} = \beta^v_{r,0}\beta^v_{0,0}\beta^v_{0,s} = \sum_{w \in W}\sum_{k \in \kappa_{v,w}} \beta^v_{r,0}\gamma^w_{k,k}\beta^v_{0,s} = \sum_{w \in W}\sum_{k \in \kappa_{v,w}} \alpha^w_{(k,r),(k,s)} \in \sum_{w \in W}A^w \] for all $v \in V$ and all $0 \le r,s \le b_v-1$. We also have $\gamma^w_{k,l} = \alpha^w_{(k,0),(l,0)}$ for $w \in W$ and $0 \le k,l \le c_w-1$. Thus we get $A=\sum_{w \in W}A^w$. It is clear that the inclusion $C^w \hookrightarrow A^w$ has multiplicity $1$ for $w\in W$. To see that the inclusion $B \hookrightarrow A$ has multiplicity matrix $(m_{v,w})_{v\in V,w\in W}$, it suffices to show that for each $v \in V$ and $w \in W$, the product of the minimal projection $q^v \in B^v$ and the unit $1_{A^w}$ of $A^w$ has rank $m_{v,w}$ in $A^w \cong M_{a_w}(\mathbb{C})$. Since $q^v \in C$, we have \[ q^v1_{A^w}=q^v1_{C^w} = \sum_{k \in \kappa_{v,w}}\gamma^w_{k,k} = \sum_{k \in \kappa_{v,w}}\alpha^w_{(k,0),(k,0)}. \] This shows that the rank of $q^v1_{A^w} \in A^w$ is $|\kappa_{v,w}| = m_{v,w}$. \end{proof} \section{Realizations of AF-algebras} \label{results-sec} \subsection{A construction of an ultragraph from a certain type of Bratteli diagram} \label{ultra-contruct-subsec} In this section we show how to construct ultragraphs from certain Bratteli diagrams and use these ultragraphs to realize particular classes of AF-algebras as ultragraph $C^*$-algebras, Exel-Laca algebras, and graph $C^*$-algebras. \begin{definition} \label{ultragraph-def} Let $A$ be an AF-algebra with no nonzero finite-dimensional quotients. By Lemma~\ref{f-d-quotient-char-lem} there exists a Bratteli diagram $(E,d)$ for $A$ satisfying the following two properties: \begin{itemize} \item[(1)] $d_v \geq 2$ for all $v \in E^0$; and \item[(2)] for all $n \in \mathbb{N}$ and for each $v \in V_{n+1}$ either $d_v > \sum_{e \in E^1 v} d_{s(e)}$ or there exists $w \in V_n$ with $|w E^1v| \geq 2$. \end{itemize} We define \[ \Delta_v := d_v - \sum_{e \in E^1v} (d_{s(e)} - 1). \] The symbol $\Delta$ has been chosen to connote ``difference''. Note that by property~(1), $\Delta_v = d_v$ if and only if $v$ is a source. In addition, it follows from the properties of our Bratteli diagram that $\Delta_v \geq 2$ for all $v \in E^0$. We claim that for each $v \in E^0$ there exists an injection $k_v\colon E^1 v \to \{0,1,\ldots, \Delta_v-1\}$ such that $0$ lies in the image of $k_v$ if and only if $d_v = \sum_{e \in E^1 v}d_{s(e)}$, and in this case the edge $e$ with $k_v(e) = 0$ is not the only element of $s(e)E^1v$. To justify this claim, first observe that \begin{align*} \Delta_v &= d_v - \sum_{e \in E^1 v} (d_{s(e)} - 1) = d_v - \sum_{e \in E^1 v} d_{s(e)} + \sum_{e \in E^1 v} 1 = \big( d_v - \sum_{e \in E^1 v} d_{s(e)} \big) + |E^1v|. \end{align*} Hence if $d_v > \sum_{e \in E^1 v} d_{s(e)}$ we may always choose an injection $k_v\colon E^1v \to \{0,1,\ldots, \Delta_v-1\}$ so that its image does not contain $0$. On the other hand if $d_v = \sum_{e \in E^1 v} d_{s(e)}$, then by hypothesis on the Bratteli diagram there exists $w \in E^0$ with $|wE^1v| \geq 2$, so we may choose a bijection $k_v\colon E^1v \to \{0,1,\ldots, \Delta_v-1\}$ such that the edge $e \in E^1v$ with $k_v(e) = 0$ satisfies $s(e)=w$. This establishes the claim.
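Informally, the construction that follows attaches to each $v \in E^0$ a string of $\Delta_v - 1$ vertices $v_1, \ldots, v_{\Delta_v-1}$; in the proof of Theorem~\ref{AF-as-ultra-thm} these vertices generate a copy of $M_{\Delta_v}(\mathbb{C})$ inside the ultragraph $C^*$-algebra, and the value $k_v(e)$ specifies the diagonal slot of that copy to which the edge $e \in E^1 v$ contributes, with $0$ labelling the distinguished corner $q^v$ used there.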
We now define an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r_\mathcal{G}, s_\mathcal{G})$ by \[ G^0 := \{ v_i : v \in E^0 \text{ and } 1 \leq i \leq \Delta_v -1 \} \qquad \text{ and } \qquad \mathcal{G}^1 := \{ e_{v_i} : v_i \in G^0 \} \] with \[ s_\mathcal{G} (e_{v_i}) := v_i \quad \text{ for all $v_i \in G^0$}, \qquad \qquad r_\mathcal{G} (e_{v_i}) := \{v_{i-1}\} \quad \text{ for $2 \leq i \leq \Delta_v-1$} \] and \begin{align*} r_\mathcal{G}(e_{v_1}) := \big\{ w_k : \text{ there exists a path $\lambda=\lambda_1\lambda_2\ldots \lambda_n$ such that $s(\lambda)=v$, $r(\lambda)=w$,}& \\ \text{$k_{r(\lambda_i)}(\lambda_i)=0$ for $i=1,2,\ldots, n-1$, and $k_{w}(\lambda_n)=k\ge 1$}&\big\}. \end{align*} \end{definition} To check that $\mathcal{G}$ is an ultragraph, we only need to see that $r_\mathcal{G}(e_{v_1}) \neq \emptyset$. \begin{lemma} \label{lem:r_G(e_{v_1})} For all $n$ and $v \in V_n$, the set $r_\mathcal{G}(e_{v_1})$ is nonempty and satisfies \[ r_\mathcal{G}(e_{v_1}) = \{ w_{k_w(e)} : w \in V_{n+1}, e \in vE^1 w, k_w(e)\ge 1\} \cup \bigcup_{w \in V_{n+1}, e \in vE^1 w, k_w(e)=0} r_\mathcal{G}(e_{w_1}). \] \end{lemma} \begin{proof} The latter equality follows from the definition of $r_\mathcal{G}(e_{v_1})$. For each $v \in V_n$, there exists $w \in V_{n+1}$ such that $vE^1 w \neq \emptyset$. By the assumption on $k_w$, there exists $e \in vE^1 w$ such that $k_w(e) \geq 1$. Thus $w_{k_w(e)} \in r_\mathcal{G}(e_{v_1})$. This shows that $r_\mathcal{G}(e_{v_1})$ is nonempty. \end{proof} \begin{remark} By definition, $r_\mathcal{G}(e_{v_1}) \subset \bigcup_{k=n+1}^\infty V_k$ for $v \in V_n$. One can show that this property together with the equality in Lemma~\ref{lem:r_G(e_{v_1})} uniquely determines $\{r_\mathcal{G}(e_{v_1})\}_{v \in E^0}$. \end{remark} \begin{example} \textbf{An example of the ultragraph construction:} Consider a Bratteli diagram $(E,d)$ satisfying Conditions (1)~and~(2) of Lemma~\ref{f-d-quotient-char-lem} and whose first three levels are as illustrated below. In the diagram, each vertex is labeled with its name, and above the label $a$ appears the integer $d_a$. \[\begin{tikzpicture}[scale=1.5] \node[circle,inner sep=0.5pt] (s) at (0,1) {\small$s$}; \node[inner sep=1.5pt,anchor=south] at (s.north) {\tiny2}; \node[circle,inner sep=0.5pt] (t) at (0,0) {\small$t$}; \node[inner sep=1.5pt,anchor=south] at (t.north) {\tiny2}; \node[circle,inner sep=0.5pt] (u) at (0,-1) {\small$u$}; \node[inner sep=1.5pt,anchor=south] at (u.north) {\tiny3}; \node[circle,inner sep=0.5pt] (v) at (2,0.75) {\small$v$}; \node[inner sep=1.5pt,anchor=south] at (v.north) {\tiny8}; \node[circle,inner sep=0.5pt] (w) at (2,-0.75) {\small$w$}; \node[inner sep=1.5pt,anchor=south] at (w.north) {\tiny7}; \node[circle,inner sep=0.5pt] (x) at (4,1) {\small$x$}; \node[inner sep=1.5pt,anchor=south] at (x.north) {\tiny9}; \node[circle,inner sep=0.5pt] (y) at (4,0) {\small$y$}; \node[inner sep=1.5pt,anchor=south] at (y.north) {\tiny22}; \node[circle,inner sep=0.5pt] (z) at (4,-1) {\small$z$}; \node[inner sep=1.5pt,anchor=south] at (z.north) {\tiny16}; \node at (5,0) {\dots}; \draw[-latex] (s)-- node[above,pos=0.6] {\small$e$} (v) node[pos=0.5,anchor=south,inner sep=1pt] {}; \draw[opacity=0,color=white] (t)--(v) node[pos=0.5,sloped,inner sep=0.25cm] (tv) {}; \draw[-latex] (t) .. controls (tv.north) .. node[above,pos=0.3] {\small$e'$} (v); \draw[-latex] (t) .. controls (tv.south) .. 
node[below,pos=0.8] {\small$e''$} (v); \draw[opacity=0,color=white] (t)--(w) node[pos=0.5,sloped,inner sep=0.25cm] (tw) {}; \draw[-latex] (t) .. controls (tw.north) .. node[above,pos=0.8] {\small$f$}(w); \draw[-latex] (t) .. controls (tw.south) .. node[below,pos=0.3] {\small$f'$}(w); \draw[-latex] (u)-- node[below,pos=0.6] {\small$f''$} (w) node[pos=0.5,anchor=south,inner sep=1pt] {}; \draw[-latex] (v)-- node[above,pos=0.6] {\small$g$} (x) node[pos=0.5,anchor=south,inner sep=1pt] {}; \draw[-latex] (v)-- node[above,pos=0.6] {\small$h$} (y) node[pos=0.5,anchor=south,inner sep=1pt] {}; \draw[-latex] (v)-- node[below,pos=0.3] {\small$k$} (z) node[pos=0.5,anchor=south,inner sep=1pt] {}; \draw[opacity=0,color=white] (w)--(y) node[pos=0.5,sloped,inner sep=0.25cm] (vy) {}; \draw[-latex] (w) .. controls (vy.north) .. node[above,pos=0.7] {\small$h'$}(y); \draw[-latex] (w) .. controls (vy.south) .. node[below,pos=0.55] {\small$h''$} (y); \draw[-latex] (w)-- node[below,pos=0.6] {\small$k'$} (z) node[pos=0.5,anchor=south,inner sep=1pt] {}; \end{tikzpicture}\] The values of $\Delta$ for the vertices visible in the diagram are \[\begin{array}{c@{\extracolsep{2em}}c@{\extracolsep{2em}}c} \Delta_s = 2 & \Delta_v = 5 & \Delta_x = 2 \\ \Delta_t = 2 & \Delta_w = 3 & \Delta_y= 3\\ \Delta_u = 3 & \ & \Delta_z = 3 \\ \end{array}\] So the corresponding section of the resulting ultragraph $\mathcal{G}$ will have vertices \[ G^0 = \{s_1, t_1, u_1, u_2, v_1, v_2, v_3, v_4, w_1, w_2, x_1, y_1, y_2, z_1, z_2, \ldots \}, \] and each of these vertices ${a_i}$ will emit exactly one ultraedge $e_{a_i}$. For $i \not=1$, we have $r_\mathcal{G}(e_{a_i}) = \{a_{i-1}\}$. To determine the ranges of the $e_{a_1}$, we must choose injections $k_a\colon E^1a \to \{0,1,\ldots, \Delta_a-1\}$ for $a \in E^0$ with the properties described above; in particular, this necessitates that $0$ is in the image of $k_a$ only when $a=w$ or $a=y$, and also that $k_w(f'')\neq 0$ and $k_y(h)\neq 0$. One possible set of choices of injections $k_a$ is \begin{align*} &\begin{array}[t]{ccc} k_v(e)=1,& k_v(e')=3,& k_v(e'')=4,\\ k_w(f)=0,& k_w(f')=2,& k_w(f'')=1, \end{array} & &\begin{array}[t]{ccc} k_x(g)=1,&& \\ k_y(h)=1,& k_y(h')=0,& k_y(h'')=2,\\ k_z(k)=2,& k_z(k')=1.& \end{array} \end{align*} We can calculate \begin{gather*} r_\mathcal{G}(e_{s_1}) = \{v_1\}, \qquad r_\mathcal{G}(e_{t_1}) = \{v_3, v_4, w_2, y_2, z_1\} \cup r_\mathcal{G}(e_{y_1}), \qquad r_\mathcal{G}(e_{u_1}) = \{w_1\},\\ r(e_{v_1}) = \{x_1, y_1, z_2\}, \quad\text{ and }\quad r(e_{w_1}) = \{z_1, y_2\} \cup r_\mathcal{G}(e_{y_1}). \end{gather*} We may now draw the fragment of the ultragraph $\mathcal{G}$ corresponding to the given fragment of the Bratteli diagram $(E,d)$. 
\[\begin{tikzpicture}[scale=2] \node[circle,inner sep=0.5pt] (s1) at (0,-1) {\small$u_1$}; \node[circle,inner sep=0.5pt] (s2) at (0,-0.5) {\small$u_2$}; \node[circle,inner sep=0.5pt] (t1) at (0,0.5) {\small$t_1$}; \node[circle,inner sep=0.5pt] (u1) at (0,1.5) {\small$s_1$}; \node[circle,inner sep=0.5pt] (v1) at (2,-1) {\small$w_1$}; \node[circle,inner sep=0.5pt] (v2) at (2,-0.5) {\small$w_2$}; \node[circle,inner sep=0.5pt] (w1) at (2,0.5) {\small$v_1$}; \node[circle,inner sep=0.5pt] (w2) at (2,1) {\small$v_2$}; \node[circle,inner sep=0.5pt] (w3) at (2,1.5) {\small$v_3$}; \node[circle,inner sep=0.5pt] (w4) at (2,2) {\small$v_4$}; \node[circle,inner sep=0.5pt] (x1) at (4,-1) {\small$z_1$}; \node[circle,inner sep=0.5pt] (x2) at (4,-0.5) {\small$z_2$}; \node[circle,inner sep=0.5pt] (y1) at (4,0.5) {\small$y_1$}; \node[circle,inner sep=0.5pt] (y2) at (4,1) {\small$y_2$}; \node[circle,inner sep=0.5pt] (z1) at (4,2) {\small$x_1$}; \node at (5,0.5) {\dots}; \draw[-latex] (s2)--(s1); \draw[-latex] (s1)--(v1); \draw[-latex] (t1)--(v2); \draw[-latex] (t1)--(w3); \draw[-latex] (t1)--(w4); \draw[-latex] (t1)--(x1); \draw[-latex] (t1)--(y2); \draw[-latex] (u1)--(w1); \draw[-latex] (v2)--(v1); \draw[-latex] (v1)--(x1); \draw[-latex] (v1)--(y2); \draw[-latex] (w4)--(w3); \draw[-latex] (w3)--(w2); \draw[-latex] (w2)--(w1); \draw[-latex] (w1)--(x2); \draw[-latex] (w1)--(y1); \draw[-latex] (w1)--(z1); \draw[-latex] (x2)--(x1); \draw[-latex] (y2)--(y1); \end{tikzpicture}\] Note that by definition of the ultragraph $\mathcal{G}$, each vertex emits exactly one ultraedge, so in the picture any multiple arrows leaving the same vertex actually have the same label and constitute a single ultraedge of $\mathcal{G}$. \end{example} \subsection{Sufficient conditions for realizations} \label{sufficient-cond-subsec} \begin{theorem} \label{AF-as-ultra-thm} Let $A$ be an AF-algebra with a Bratteli diagram satisfying the conditions of Lemma~\ref{f-d-quotient-char-lem}. If $\mathcal{G}$ is an ultragraph constructed from this Bratteli diagram as in Definition~\ref{ultragraph-def}, then $A \cong C^*(\mathcal{G})$. In addition, $C^*(\mathcal{G})$ is an Exel-Laca algebra. \end{theorem} \begin{proof} Let $(E,d)$ be a Bratteli diagram for $A$ with the vertices partitioned into levels as $E^0 = \bigsqcup_{n=1}^\infty V_n$ and satisfying the conditions of Lemma~\ref{f-d-quotient-char-lem}, and let $\mathcal{G}$ be an ultragraph constructed from $(E,d)$ as in Definition~\ref{ultragraph-def}. Our strategy is to find a direct limit decomposition of $C^*(\mathcal{G})$ so that at each level we may apply Lemma~\ref{Evil-Lemma-:-(} to see that the inclusion of finite-dimensional algebras is the same as the corresponding inclusion in the direct limit decomposition of $A$ determined by $(E,d)$. For each $v \in E^0$ let \[ C^v := C^* ( \{ s_{e_{v_i}} : 1 \leq i \leq \Delta_v - 1 \}). \] We have $s_{e_{v_i}}s_{e_{v_i}}^*=p_{v_i}$ for $1 \leq i \leq \Delta_v - 1$ and $s_{e_{v_i}}^*s_{e_{v_i}}=p_{v_{i-1}}$ for $2 \leq i \leq \Delta_v - 1$. We define a projection $q^v := p_{r_\mathcal{G}(e_{v_1})}=s_{e_{v_1}}^*s_{e_{v_1}} \in C^v$, which is orthogonal to $p_{v_i}$ for $1 \leq i \leq \Delta_v - 1$. These computations show that there exist matrix units $\{\gamma^v_{k,l} : 0\leq k,l \leq \Delta_v -1\}$ in $C^v$ such that $\gamma^v_{0,0}=q^v$, $\gamma^v_{i,i}=p_{v_i}$ and $\gamma^v_{i,i-1}=s_{e_{v_i}}$ for $1 \leq i \leq \Delta_v - 1$. 
Explicitly, $\gamma^v_{k,l} \in C^v$ is given by \[ \gamma^v_{k,l} := s_{e_{v_k}}s_{e_{v_{k-1}}} \cdots s_{e_{v_1}} q^vs_{e_{v_1}}^*s_{e_{v_2}}^* \cdots s_{e_{v_l}}^* \] for $0\leq k,l \leq \Delta_v -1$. This shows that $C^v$ is isomorphic to $M_{\Delta_v} (\mathbb{C})$ with minimal projection $q^v$ and the unit $\sum_{i=1}^{\Delta_v - 1}p_{v_{i}}+q^v$. For each $n \in \mathbb{N}$, \[ C_n := C^*( \{ s_{e_{v_i}} : v \in V_n \text{ and } 1 \leq i \leq \Delta_v - 1 \} ) \] is equal to $\bigoplus_{v \in V_n} C^v$. Moreover, for $n \in \mathbb{N}$, define \[\textstyle B_n := C^*\big(\bigcup_{j=1}^n C_j \big) = C^*\big( \{ s_{e_{v_i}} : v \in \bigcup_{j=1}^n V_j \text{ and } 1 \leq i \leq \Delta_v-1 \} \big). \] \textbf{Claim:} For each $n \in \mathbb{N}$, the unit $1_{B_n}$ of $B_n$ is given by $\sum_{v \in \bigcup_{j=1}^n V_j}\sum_{i=1}^{\Delta_v -1}p_{v_i} + \sum_{v\in V_n}q^v$, and there exists a decomposition $B_n = \bigoplus_{v \in V_n} B^v$ such that each $B^v \cong M_{d_v} (\mathbb{C})$ with minimal projection $q^v$; and for each $n \in \mathbb{N}$, the inclusion $B_n \hookrightarrow B_{n+1}$ has multiplicity matrix $( |vE^1w| )_{v \in V_n, w \in V_{n+1}}$. We proceed by induction on $n$. When $n =1$, let $B^v := C^v$ for $v \in V_1$. Then $B_1= C_1$ has the decomposition $B_1=\bigoplus_{v \in V_1} B^v$. For each $v \in V_1$, we have $\Delta_v = d_v$ because $v$ is a source. Hence $B^v = C^v$ is isomorphic to $M_{d_v} (\mathbb{C})$ with minimal projection $q^v$ and the unit $\sum_{i=1}^{\Delta_v - 1}p_{v_{i}}+q^v$. This shows the claim in the case $n=1$. For the inductive step, assume that $B_n$ has the desired decomposition. To apply Lemma~\ref{Evil-Lemma-:-(} to the $C^*$-al\-ge\-bra $B_{n+1}$, which is generated by $B_n$ and $C_{n+1}$, we check that for each $v \in V_n$ the minimal projection $q^{v} \in B^v$ is in $C_{n+1}$ and satisfies $(1_{B^v} - q^{v}) C_{n+1} = \{0\}$. We see that \[ \sum_{v \in V_{n}}(1_{B^v} - q^{v}) =1_{B_n} - \sum_{v \in V_{n}}q^{v} =\sum_{v \in \bigcup_{j=1}^n V_j}\sum_{i=1}^{\Delta_v -1}p_{v_i}, \] which is orthogonal to $C_{n+1}$. This proves $(1_{B^v} - q^{v}) C_{n+1} = \{0\}$ for all $v \in V_{n}$. For each $v \in V_{n}$, Lemma~\ref{lem:r_G(e_{v_1})} implies \begin{equation}\label{eq:qv=} q^{v}=p_{r_\mathcal{G}(e_{v_1})} = \sum_{w \in V_{n+1}}\Big( \sum_{\substack{e \in vE^1 w\\ k_w(e)\ge 1}} p_{w_{k_w(e)}} + \sum_{\substack{e \in vE^1 w \\ k_w(e)=0}} p_{r_\mathcal{G}(e_{w_1})}\Big) = \sum_{w \in V_{n+1}}\sum_{e \in vE^1 w}\gamma^w_{k_w(e),k_w(e)}. \end{equation} Hence $q^{v} \in C_{n+1}$. Thus we can apply Lemma~\ref{Evil-Lemma-:-(} to obtain the decomposition $B_{n+1} = \bigoplus_{w \in V_{n+1}} B^w$. Since the inclusion $C^w \hookrightarrow B^w$ has multiplicity $1$ for $w\in V_{n+1}$, the projection $q^{w}$ is minimal in $B^w$. From \eqref{eq:qv=}, $q^{v} 1_{C^w}$ has rank $|vE^1w|$ in $C^w$ for $w \in V_{n+1}$. The definition of $\Delta_w$ implies that \[ d_w = \Delta_w + \sum_{v \in V_{n}} (d_v-1) |vE^1w|. \] Hence $B^w$ is isomorphic to $M_{d_w} (\mathbb{C})$ for $w \in V_{n+1}$. The conclusion of Lemma~\ref{Evil-Lemma-:-(} also shows that the inclusion $B_n \hookrightarrow B_{n+1}$ has multiplicity matrix $( |vE^1w| )_{v \in V_n, w \in V_{n+1}}$, and that the unit of $B_{n+1}$ is equal to $\sum_{v \in \bigcup_{j=1}^{n+1} V_j}\sum_{i=1}^{\Delta_v -1}p_{v_i} + \sum_{w\in V_{n+1}}q^w$. This proves the claim. We see that $\bigcup_{n=1}^\infty B_n$ contains $\{ s_e : e \in \mathcal{G}^1 \}$.
Since each vertex $v$ in $\mathcal{G}$ emits exactly one ultraedge $e$, $p_v = s_es_e^*$ is contained in $\bigcup_{n=1}^\infty B_n$. Thus $\bigcup_{n=1}^\infty B_n$ contains all the generators of $C^*(\mathcal{G})$. Hence $C^*(\mathcal{G}) = \overline{\bigcup_{n=1}^\infty B_n}$ is an AF-algebra, and the preceding paragraphs show that $(E,d)$ is a Bratteli diagram for $C^*(\mathcal{G})$, giving $A \cong C^*(\mathcal{G})$. Since every vertex of $\mathcal{G}$ emits exactly one ultraedge, $C^*(\mathcal{G})$ is an Exel-Laca algebra (see Remark~\ref{rem:ELalgis,,,}). \end{proof} \begin{corollary} \label{no-nonzero-f-d-quotient-then-EL} If $A$ is an AF-algebra with no nonzero finite-dimensional quotients, then $A$ is isomorphic to an Exel-Laca algebra. \end{corollary} \begin{proof} Since $A$ has no nonzero finite-dimensional quotients, Lemma~\ref{f-d-quotient-char-lem} implies that $A$ has a Bratteli diagram satisfying the conditions stated there. It follows from Theorem~\ref{AF-as-ultra-thm} that $A$ is isomorphic to an Exel-Laca algebra. \end{proof} The following result is important in that it is one of the few instances in which we can completely characterize the AF-algebras lying in a particular class of graph $C^*$-algebras. In particular, we give necessary and sufficient conditions for an AF-algebra to be the $C^*$-algebra of a row-finite graph with no sinks. \begin{theorem} \label{no-unital-quotient-then-graph-alg} Let $A$ be an AF-algebra. Then the following are equivalent: \begin{enumerate} \item $A$ has no (nonzero) unital quotients. \item $A$ is isomorphic to the $C^*$-algebra of a row-finite graph with no sinks. \end{enumerate} \end{theorem} \begin{proof} We shall first prove that $(1)$ implies $(2)$. Suppose that $A$ has no unital quotients. By Lemma~\ref{unital-quotient-char-lem} there is a Bratteli diagram $(E,d)$ for $A$ such that for all $v \in E^0$ we have both $d_v \geq 2$ and $d_v > \sum_{e \in E^1 v} d_{s(e)}$. Let $\mathcal{G}$ be an ultragraph constructed from $(E,d)$ as in Definition~\ref{ultragraph-def}. Theorem~\ref{AF-as-ultra-thm} implies that $A \cong C^*(\mathcal{G})$. Furthermore, since $d_v > \sum_{e \in E^1 v} d_{s(e)}$, we have $k_v(e) \geq 1$ for all $v \in E^0$ and $e \in E^1v$. For $v \in V_n$, Lemma~\ref{lem:r_G(e_{v_1})} implies $r_\mathcal{G}(e_{v_1}) = \{ w_{k_w(e)} : w \in V_{n+1}, e \in vE^1 w, k_w(e)\ge 1\}$. Thus, $r_\mathcal{G}(e)$ is finite for every $e \in \mathcal{G}^1$. Hence $C^*(\mathcal{G})$ is isomorphic to the $C^*$-algebra of a row-finite graph with no sinks (see \cite[Remark~5.25]{KMST2}). We next prove that $(2)$ implies $(1)$. Suppose that $A \cong C^*(E)$, where $E$ is a row-finite graph with no sinks. Since $C^*(E)$ is an AF-algebra, it follows from \cite[Theorem~2.4]{KPR} that $E$ has no cycles. Thus $E$ satisfies Condition~(K), and \cite[Theorem~4.4]{BPRS} implies that every ideal of $C^*(E)$ is gauge invariant. Suppose $I$ is a proper ideal of $C^*(E)$. Then $I = I_H$ for some saturated hereditary proper subset $H \subset E^0$, and $C^*(E) / I_H \cong C^*(E_H)$, where $E_H$ is the nonempty subgraph of $E$ with $E_H^0 := E^0 \setminus H$ and $E_H^1 := \{ e \in E^1 : r(e) \notin H \}$ (see \cite[Theorem~4.1]{BPRS}). Since $H$ is saturated and $E$ has no sinks, the graph $E_H$ has no sinks. Since $E$ has no cycles, $E_H$ also has no cycles. Because $E_H$ is a nonempty graph with no cycles and no sinks, $E_H^0$ is infinite. Thus $C^*(E_H)$ is nonunital \cite[Proposition~1.4]{KPR}.
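Hence every quotient of $C^*(E)$ by a proper ideal is nonunital, and so $A \cong C^*(E)$ has no nonzero unital quotients.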
\end{proof} \begin{corollary} \label{cor:stable} Let $A$ be a stable AF-algebra. Then there is a row-finite graph $E$ with no sinks such that $A \cong C^*(E)$. In particular, $A$ is isomorphic to a graph $C^*$-algebra, to an Exel-Laca algebra, and to an ultragraph $C^*$-algebra. \end{corollary} \begin{proof} Since any nonzero quotient of a stable $C^*$-algebra is stable, every quotient of $A$ is stable, and in particular nonunital. The result then follows from Theorem~\ref{no-unital-quotient-then-graph-alg}. \end{proof} \begin{lemma}\label{lem:M_2(C^*(G))} Let $\mathcal{G} = (G^0, \mathcal{G}^1, r,s)$ be an ultragraph. Let $\wt{\mathcal{G}} = (\wt{G}^0, \wt{\mathcal{G}}^1, \tilde{r},\tilde{s})$ be the ultragraph defined by $\wt{G}^0:=G^0\sqcup\{v_0\}$ and $\wt{\mathcal{G}}^1:=\mathcal{G}^1\sqcup\{e_0\}$ with \[ \tilde{s}|_{\mathcal{G}^1}=s,\qquad \tilde{s}(e_0)=v_0,\qquad \tilde{r}|_{\mathcal{G}^1}=r, \qquad \text{ and } \qquad \tilde{r}(e_0)=G^0. \] Then $C^*\big(\wt{\mathcal{G}}\big)\cong M_2(\MU{C^*(\mathcal{G})})$, where $\MU{C^*(\mathcal{G})}$ is the minimal unitization of $C^*(\mathcal{G})$. \end{lemma} \begin{proof} We first notice that the algebra $\wt{\mathcal{G}}^0$ is generated by the algebra $\mathcal{G}^0 \subseteq \mathcal{P}(\wt{G}^0)$ and the two elements $G^0, \{v_0\} \in \mathcal{P}(\wt{G}^0)$. The universal property of $C^*(\wt{\mathcal{G}})$ implies that there is a $*$-ho\-mo\-mor\-phism $\phi\colon C^*(\wt{\mathcal{G}}) \to M_2(\MU{C^*(\mathcal{G})})$ satisfying \[ \phi(p_A) = \left( \begin{smallmatrix} p_A & 0 \\ 0 & 0 \end{smallmatrix} \right) \text{ for all $A \in \mathcal{G}^0$} \qquad \text{ and } \qquad \phi(s_e) = \left( \begin{smallmatrix} s_e & 0 \\ 0 & 0 \end{smallmatrix} \right) \text{ for all $e \in \mathcal{G}^1$} \] and \[ \phi(p_{G^0}) = \left( \begin{smallmatrix} 1 & 0 \\ 0 & 0 \end{smallmatrix} \right), \quad \phi(p_{v_0}) = \left( \begin{smallmatrix} 0 & 0 \\ 0 & 1 \end{smallmatrix} \right), \ \text{ and } \ \phi(s_{e_0}) = \left( \begin{smallmatrix} 0 & 0 \\ 1 & 0 \end{smallmatrix} \right). \] The Gauge-Invariant Uniqueness Theorem \cite[Theorem~6.8]{Tom} shows that $\phi$ is injective. Standard calculations show that the image under $\phi$ of the generating Cuntz-Krieger $\widetilde{\mathcal{G}}$-family in $C^*(\widetilde{\mathcal{G}})$ generates $M_2(\MU{C^*(\mathcal{G})})$. Hence $\phi$ is an isomorphism. \end{proof} \begin{corollary} \label{A-AF-M_2-AF} Let $A$ be a $C^*$-algebra, and let $\MU{A}$ denote the minimal unitization of $A$. If $A$ is isomorphic to an Exel-Laca algebra, then $M_2(\MU{A})$ is isomorphic to an Exel-Laca algebra. \end{corollary} \begin{proof} If $A$ is isomorphic to an Exel-Laca algebra, then by Remark~\ref{rem:ELalgis,,,} $A \cong C^*(\mathcal{G})$ where $\mathcal{G}$ is an ultragraph with bijective source map. By Lemma~\ref{lem:M_2(C^*(G))} $C^*(\wt{\mathcal{G}}) \cong M_2 (\MU{A})$, and since $\wt{\mathcal{G}}$ is an ultragraph with bijective source map, $C^*(\wt{\mathcal{G}})$ is an Exel-Laca algebra. \end{proof} The following example shows that the converse of Corollary~\ref{no-nonzero-f-d-quotient-then-EL} does not hold. \begin{example} \label{E-L-quotient-not-Ex} Let $A$ be a nonunital, simple AF-algebra (such as $\mathcal{K}$). By Corollary~\ref{simple-AF-graph-EL} $A$ is isomorphic to an Exel-Laca algebra, and by Corollary~\ref{A-AF-M_2-AF} $M_2(\MU{A})$ is an Exel-Laca algebra. However, $M_2(\MU{A})$ has a quotient isomorphic to the finite-dimensional $C^*$-algebra $M_2(\mathbb{C})$.
Thus the converse of Corollary~\ref{no-nonzero-f-d-quotient-then-EL} does not hold. (It is also worth mentioning that $M_2(\mathbb{C})$ is a quotient of an Exel-Laca algebra, but $M_2(\mathbb{C})$ is not itself an Exel-Laca algebra; cf.~Corollary~\ref{No-fin-dim-E-L-Cor}.) \end{example} The following elementary example shows that the $C^*$-algebra of a row-finite graph with sinks may admit unital quotients (cf.~Theorem~\ref{no-unital-quotient-then-graph-alg}). \begin{example} The AF-algebra $M_2(\mathbb{C}) \oplus M_2(\mathbb{C})$ is isomorphic to the $C^*$-algebra of the graph $\bullet \longleftarrow \bullet \longrightarrow \bullet$ by \cite[Corollary~2.3]{KPR}. However, this $C^*$-algebra has $M_2(\mathbb{C})$ as a unital quotient. Thus graphs with sinks can have associated $C^*$-algebras that are AF-algebras with proper unital quotients. \end{example} The next example is more intriguing. Before considering this example, one is tempted to believe that if $E$ is a row-finite graph, then $C^*(E)$ is isomorphic to a direct sum of a countable collection of algebras of compact operators on (finite or countably infinite dimensional) Hilbert spaces and the $C^*$-algebra of a row-finite graph with no sinks (see Proposition~\ref{prp:direct-sums-as-graph-algs}). This would give a characterization of AF-algebras associated to row-finite graphs along similar lines to Theorem~\ref{no-unital-quotient-then-graph-alg}. However, the example shows that this is not the case in general. \begin{example} Let $E$ be the graph \[\begin{tikzpicture}[scale=1.5] \foreach \x in {1,2,3,4} { \node[inner sep=1pt] (v\x) at (\x,1) {$v_{\x}$}; \node[inner sep=1pt] (w\x) at (\x,0) {$w_{\x}$}; } \node[inner sep=3pt] (v5) at (5,1) {$\cdots$}; \node[inner sep=3pt] (w5) at (5,0) {$\cdots$}; \foreach \x/\xx in {1/2,2/3,3/4,4/5} { \draw[-latex] (v\x)--(v\xx); \draw[-latex] (v\x)--(w\x); } \end{tikzpicture}\] Then for each $n \in \mathbb{N}$ the set $H_n := \{ v_n, v_{n+1}, \ldots \} \cup \{ w_n, w_{n+1}, \ldots \}$ is a saturated hereditary subset of $E$, and $C^*(E) / I_{H_n}$ is a finite-dimensional $C^*$-algebra. Thus $C^*(E)$ is an AF-algebra with infinitely many finite-dimensional quotients. This shows that, unlike what occurs for row-finite graphs with no sinks (cf.~Theorem~\ref{no-unital-quotient-then-graph-alg}), the situation with sinks is much more complicated. It also shows that $C^*(E)$ does not have a Bratteli diagram of the types described in Lemma~\ref{unital-quotient-lem} or Lemma~\ref{unital-quotient-char-lem}. Hence our construction of the ultragraph described in \S\ref{ultra-contruct-subsec} cannot be applied. \end{example} By eliminating the bad behavior arising in the preceding example, we obtain a limited extension of Theorem~\ref{no-unital-quotient-then-graph-alg} to graphs containing sinks. \begin{proposition}\label{prp:direct-sums-as-graph-algs} Let $A$ be an AF algebra. Then the following are equivalent: \begin{enumerate} \item\label{it:dir-sums:only-if} $A$ is isomorphic to the $C^*$-algebra of a row-finite graph in which each vertex connects to at most finitely many sinks; and \item\label{it:dir-sums:if} $A$ has the form $\big(\bigoplus_{x \in X} M_{n_x}(\mathbb{C})\big) \oplus A'$ where $X$ is an at most countably-infinite index set, each $n_x$ is a positive integer, and $A'$ is an AF algebra with no unital quotients. 
\end{enumerate} \end{proposition} \begin{proof} To see that (\ref{it:dir-sums:only-if})~implies~(\ref{it:dir-sums:if}), we let $E$ be a row-finite graph in which each vertex connects to at most finitely many sinks and such that $A \cong C^*(E)$. Since $A$ is an AF-algebra, $E$ has no cycles. Let $\operatorname{sinks}(E)$ denote the collection $\{v \in E^0 : vE^1 = \emptyset\}$ of sinks in $E$. Let $H$ be the smallest saturated hereditary subset of $E^0$ containing $\operatorname{sinks}(E)$. Since each vertex connects to at most finitely many sinks, $H$ is equal to the set of $v \in E^0$ such that $vE^n = \emptyset$ for some $n$. Let $F$ be the graph with vertices $F^0 := E^0 \setminus H$, edges $F^1 := \{e \in E^1 : r(e) \not\in H\}$ and range and source maps inherited from $E$. Note that the description of $H$ above implies that $F$ has no sinks; moreover $F$ is row-finite because $E$ is. We claim that \[\textstyle C^*(E) \cong \big(\bigoplus_{v \in \operatorname{sinks}(E)} \mathcal{K}(\ell^2(E^* v))\big) \oplus C^*(F). \] To prove this, we first define a Cuntz-Krieger $E$-family $\{q_v : v \in E^0\}$, $\{t_e : e \in E^1\}$ in $\big(\bigoplus_{v \in \operatorname{sinks}(E)} \mathcal{K}(\ell^2(E^* v))\big) \oplus C^*(F)$. We will denote the universal Cuntz-Krieger $F$-family by $\{p^F_v : v \in F^0\}$, $\{s^F_e : e \in F^1\}$, and we will denote the matrix units in each $\mathcal{K}(\ell^2(E^* v))$ by $\{\Theta^v_{\alpha,\beta} : \alpha,\beta \in E^* v\}$. As a notational convenience, for $v \in E^0 \setminus F^0$, we write $p^F_v = 0$, and similarly for $e \in E^1 \setminus F^1$, we write $s^F_e = 0$. For $v \in E^0$, let \[\textstyle q_v := \Big(\bigoplus_{w \in \operatorname{sinks}(E)} \sum_{\alpha \in v E^* w} \Theta^w_{\alpha,\alpha}\Big) \oplus p^F_v \] and for $e \in E^1$, let \[\textstyle t_e := \Big(\bigoplus_{w \in \operatorname{sinks}(E)} \sum_{\alpha \in r(e) E^* w} \Theta^w_{e\alpha,\alpha}\Big) \oplus s^F_e. \] Routine calculations show that $\{q_v : v \in E^0\}$, $\{t_e : e \in E^1\}$ is a Cuntz-Krieger $E$-family. This family clearly generates $\big(\bigoplus_{v \in \operatorname{sinks}(E)} \mathcal{K}(\ell^2(E^* v))\big) \oplus C^*(F)$, and each $q_v$ is nonzero because if $p^F_v = 0$ then $v$ must connect to a sink $w$, in which case $q_v$ dominates some $\Theta^w_{\alpha,\alpha}$. An application of the Gauge-Invariant Uniqueness Theorem \cite[Theorem~2.1]{BPRS} implies that there is an isomorphism \[\textstyle \pi_{q,t} \colon C^*(E) \to \Big(\bigoplus_{v \in \operatorname{sinks}(E)} \mathcal{K}(\ell^2(E^* v))\Big) \oplus C^*(F) \] such that $\pi_{q,t}(p_v) = q_v$ and $\pi_{q,t}(s_e) = t_e$. To complete the proof of (\ref{it:dir-sums:only-if})~implies~(\ref{it:dir-sums:if}), let $X \subset \operatorname{sinks}(E)$ denote the subset $\{v \in \operatorname{sinks}(E) : |E^* v| < \infty\}$, and for each $v \in X$ let $n_v := |E^* v|$. We have $\mathcal{K}(\ell^2(E^*v)) = M_{n_v}(\mathbb{C})$ for each $v \in X$. Recall that $F$ is row-finite and has no sinks, so Theorem~\ref{no-unital-quotient-then-graph-alg} implies that $C^*(F)$ has no unital quotients. For each $v \in \operatorname{sinks}(E) \setminus X$, the $C^*$-algebra $\mathcal{K}(\ell^2(E^* v))$ is simple and nonunital. Thus \[\textstyle A' := \Big(\bigoplus_{v \in \operatorname{sinks}(E) \setminus X} \mathcal{K}(\ell^2(E^* v))\Big) \oplus C^*(F) \] has no nonzero unital quotients. 
We get \[\textstyle A \cong C^*(E) \cong \Big(\bigoplus_{v \in \operatorname{sinks}(E)} \mathcal{K}(\ell^2(E^*v))\Big) \oplus C^*(F) \cong \Big(\bigoplus_{v \in X} M_{n_v}(\mathbb{C})\Big) \oplus A' \] as required. To see that (\ref{it:dir-sums:if})~implies~(\ref{it:dir-sums:only-if}), let $A = \big(\bigoplus_{x \in X} M_{n_x}(\mathbb{C}) \big) \oplus A'$ as in~(\ref{it:dir-sums:if}). By Theorem~\ref{no-unital-quotient-then-graph-alg}, there is a row-finite graph $E'$ with no sinks such that $C^*(E') \cong A'$. For each $x \in X$, let $E_x$ be a copy of the graph \[\begin{tikzpicture}[xscale=1.5] \node[inner sep=1pt] (v1) at (0,0) {$v_1$}; \node[inner sep=1pt] (v2) at (1,0) {$v_2$}; \node[inner sep=3pt] (dots) at (2,0) {$\cdots$}; \node[inner sep=1pt] (vn) at (3,0) {$v_{n_x}$}; \draw[-latex] (v1)--(v2); \draw[-latex] (v2)--(dots); \draw[-latex] (dots)--(vn); \end{tikzpicture}\] A standard argument shows that $C^*(E_x) \cong M_{n_x}(\mathbb{C})$. Moreover $E := \big(\bigsqcup_{x \in X} E_x\big) \sqcup E'$ satisfies \[\textstyle C^*(E) \cong \Big(\bigoplus_{x \in X} C^*(E_x)\Big) \oplus C^*(E') \cong A \] as required. \end{proof} For completeness, we conclude the section with the following well-known result. \begin{lemma}\label{lem:fin-dim-graph-alg} A $C^*$-algebra $A$ is finite dimensional if and only if it is isomorphic to the $C^*$-algebra of a finite directed graph with no cycles. \end{lemma} \begin{proof} If $E$ is a finite directed graph with no cycles, then $E^*$ is finite, and hence $C^*(E) = \cspa\{s_\mu s^*_\nu : \mu,\nu \in E^*\}$ is finite dimensional. On the other hand, if $A$ is finite-dimensional, then there exist an integer $n \ge 1$ and nonnegative integers $d_1, \dots, d_n$ such that $A \cong \bigoplus^n_{i=1} M_{d_i}(\mathbb{C})$, and \cite[Corollary~2.3]{KPR} then implies that $A$ is isomorphic to the $C^*$-algebra of a finite directed graph with no cycles. (Moreover, we remark that the last part of the proof of Proposition~\ref{prp:direct-sums-as-graph-algs} actually shows that every finite-dimensional $C^*$-algebra is the $C^*$-algebra of a finite graph with no cycles.) \end{proof} \subsection{Obstructions to realizations} \label{necessary-cond-subsec} Here we present a number of necessary conditions for an AF algebra to be an ultragraph $C^*$-algebra, an Exel-Laca algebra, or a graph $C^*$-algebra. Recall that an ultragraph $C^*$-algebra $C^*(\mathcal{G})$ is an AF-algebra if and only if $\mathcal{G}$ has no cycles by \cite[Theorem~4.1]{Tom2}. \begin{proposition} \label{prop:commutatuveUGA} Let $\mathcal{G}$ be an ultragraph and suppose that $C^*(\mathcal{G})$ is an AF-algebra. If $C^*(\mathcal{G})$ is commutative, then the ultragraph $\mathcal{G}$ has no ultraedges, and $C^*(\mathcal{G}) \cong c_0 (G^0)$. \end{proposition} \begin{proof} It suffices to show that $\mathcal{G}$ has no ultraedges. Suppose that $e$ is an ultraedge in $\mathcal{G}$, and let $v = s(e)$. Since $C^*(\mathcal{G})$ is commutative, we have $p_{r(e)} = s_e^*s_e = s_es_e^* \leq p_{s(e)}$, and hence $r(e) = \{s(e)\}$. Thus $e$ is a cycle. This contradicts the hypothesis that $C^*(\mathcal{G})$ is an AF-algebra. \end{proof} \begin{proposition}\label{prp:ELobstruction} Let $A$ be an AF-algebra that is also an Exel-Laca algebra. Then $A$ does not have a quotient isomorphic to $\mathbb{C}$, and for each $n \in \mathbb{N}$ there is a $C^*$-subalgebra of $A$ isomorphic to $M_n(\mathbb{C})$. 
\end{proposition} \begin{proof} There exists an ultragraph $\mathcal{G} = (G^0, \mathcal{G}^1, r, s)$ with bijective $s$ such that $C^*(\mathcal{G}) \cong A$ (see Remark~\ref{rem:ELalgis,,,}). Since $A$ is an AF-algebra, the ultragraph $\mathcal{G}$ has no cycles. Let $\{p_v\}_{v\in G^0}$ and $\{s_e\}_{e \in \mathcal{G}^1}$ be the generators of $C^*(\mathcal{G})$ as in Definition~\ref{dfn:CK-G-fam}. Suppose, for the sake of contradiction, that there exists a nonzero $*$-ho\-mo\-mor\-phism $\chi \colon C^*(\mathcal{G}) \to \mathbb{C}$. Since $\chi$ is nonzero, there exists $v \in G^0$ with $\chi(p_v)\neq 0$. Let $e\in \mathcal{G}^1$ be the unique ultraedge with $s(e)=v$. Since $\mathcal{G}$ has no cycles, we have $v \notin r(e)$. Hence $p_v$ is orthogonal to $s_e^*s_e$. Thus $$| \chi (s_e) |^2 \chi (p_v) = \overline{\chi (s_e)} \chi(s_e) \chi (p_v) = \chi (s_e^* s_e p_v) = 0,$$ and since $\chi(p_v)\neq 0$, it follows that $| \chi (s_e) |^2 = 0$ and $\chi (s_e) = 0$. But then $\chi (p_v) = \chi (s_es_e^*) = \chi(s_e) \chi (s_e^*) = 0$, which is a contradiction. Hence $C^*(\mathcal{G})$ has no quotients isomorphic to $\mathbb{C}$. Let $n\in \mathbb{N}$. We will construct a $C^*$-subalgebra of $C^*(\mathcal{G})$ isomorphic to $M_n(\mathbb{C})$. Choose $v_1 \in G^0$ and let $e_1 \in \mathcal{G}^1$ be the unique ultraedge with $s(e_1) = v_1$. Then choose a vertex $v_2 \in r(e_1)$. Since $\mathcal{G}$ has no cycles, we have $v_2 \neq v_1$. Continuing in this manner, we can find distinct vertices $v_1, v_2, \ldots, v_{n} \in G^0$ such that $v_{k+1} \in r(e_k)$ for $k = 1, 2, \ldots , n-1$, where $e_k \in \mathcal{G}^1$ is the unique ultraedge with $s(e_k) = v_k$. For $1 \leq i,j \leq n$, we define \[ \Theta_{i,j} := s_{e_i} s_{e_{i+1}} \ldots s_{e_{n-1}} p_{v_n} s_{e_{n-1}}^* s_{e_{n-2}}^* \ldots s_{e_j}^*. \] One can check that $\{ \Theta_{i,j} : 1 \leq i,j \leq n \}$ is a family of matrix units, and thus the $C^*$-subalgebra of $C^*(\mathcal{G})$ generated by $\{ \Theta_{i,j} : 1 \leq i,j \leq n \}$ is isomorphic to $M_n(\mathbb{C})$. \end{proof} \begin{corollary} If $A$ is an AF-algebra that is also an Exel-Laca algebra, then $A$ has a Bratteli diagram $(E, d)$ such that $d_v \geq 2$ for all $v \in E^0$. \end{corollary} \begin{proof} Since $A$ has no quotient isomorphic to $\mathbb{C}$, the result follows from Lemma~\ref{fin-quotient-lem-1}. \end{proof} \begin{corollary} \label{No-fin-dim-E-L-Cor} No finite-dimensional $C^*$-algebra is isomorphic to an Exel-Laca algebra. \end{corollary} \begin{definition} We recall that a $C^*$-algebra $A$ is said to be \emph{Type~I} if whenever $\pi \colon A \to \mathcal{B} (\mathcal{H})$ is a nonzero irreducible representation, then $\mathcal{K} (\mathcal{H} ) \subseteq \pi (A)$. In the literature, the terms \emph{postliminary}, \emph{GCR}, and \emph{smooth} are all synonymous with Type~I. \end{definition} \begin{proposition} \label{prp:GAobstruction} Let $C^*(E)$ be a graph $C^*$-algebra that is also an AF-algebra. Then every unital quotient of $C^*(E)$ is Type~I and has finitely many ideals. \end{proposition} \begin{proof} By Lemma~\ref{ideal_quotient}, it suffices to show that if a graph $C^*$-algebra $C^*(E)$ is a unital AF-algebra then $C^*(E)$ is Type~I and has finitely many ideals. Note that $C^*(E)$ is a unital AF-algebra if and only if $E$ has a finite number of vertices and no cycles. We first show that $C^*(E)$ has finitely many ideals. Since $E$ has no cycles, it satisfies Condition~(K). 
Hence any ideal of $C^*(E)$ is of the form $I_{(H,S)}$ for a saturated hereditary subset $H$ of $E^0$ and a subset $S$ of the set of breaking vertices for $H$ \cite[Theorem~3.5]{DT}. Since the set $E^0$ of vertices of $E$ is finite, there are only a finite number of such pairs $(H,S)$. Thus $C^*(E)$ has finitely many ideals. To prove that $C^*(E)$ is of Type~I, first observe that any graph with finitely many vertices and no cycles contains a sink $v$, and the ideal $I_v$ generated by $p_v$ is then a nonzero gauge-invariant ideal which is Morita equivalent to $\mathbb{C}$ and hence of Type~I (see \cite[Proposition~2]{aHRW} and the subsequent remark in \cite{aHRW}). We shall show by induction on the number of nonzero ideals of $C^*(E)$ that $C^*(E)$ is Type~I. Our base case is when $C^*(E)$ has just one nonzero ideal. In this case $C^*(E)$ is simple, and then the Type~I ideal $I_v$ of the preceding paragraph is $C^*(E)$ itself, proving the result. Now suppose as an inductive hypothesis that the result holds whenever $C^*(E)$ has at most $n$ distinct nonzero ideals, and suppose that $C^*(E)$ has $n+1$ such ideals. Let $v$ be a sink in $E$ and let $I_v$ be the corresponding nonzero Type~I ideal as in the preceding paragraph. If $C^*(E)/I_v$ is trivial, then $C^*(E) = I_v$ is of Type~I, so we may assume that $C^*(E)/I_v$ is nonzero. Then Lemma~\ref{ideal_quotient} implies that $C^*(E)/I_v$ is a unital AF-algebra that is a graph $C^*$-algebra. Moreover, $C^*(E)/I_v$ has strictly fewer ideals than $C^*(E)$, so the inductive hypothesis implies that $C^*(E)/I_v$ is of Type~I. Since an extension of a Type~I $C^*$-algebra by a Type~I $C^*$-algebra is Type~I (see \cite[Theorem~5.6.2]{Mur}), it follows that $C^*(E)$ is of Type~I. \end{proof} \begin{theorem} \label{simple-AF-graph-EL} For a simple AF-algebra $A$ we have the following. \begin{enumerate} \item If $A$ is finite dimensional then $A$ is isomorphic to a graph $C^*$-algebra but not isomorphic to an Exel-Laca algebra. \item If $A$ is infinite dimensional and unital then $A$ is isomorphic to an Exel-Laca algebra but not isomorphic to a graph $C^*$-algebra. \item If $A$ is infinite dimensional and nonunital then $A$ is isomorphic to the $C^*$-algebra of a row-finite graph with no sinks (which is also isomorphic to the Exel-Laca algebra of a row-finite matrix by Lemma~\ref{row-finite-graphs-matrices-lem}). \end{enumerate} In particular, each simple AF-algebra $A$ is isomorphic to either an Exel-Laca algebra or a graph $C^*$-algebra. \end{theorem} \begin{proof} The statement in (1) follows from Lemma~\ref{lem:fin-dim-graph-alg} and Corollary~\ref{No-fin-dim-E-L-Cor}. For (2) we observe that if $A$ is simple, infinite dimensional, and unital, then it follows from Corollary~\ref{no-nonzero-f-d-quotient-then-EL} that $A$ is isomorphic to an Exel-Laca algebra. Since $A$ is in particular unital, to see that $A$ is not a graph $C^*$-algebra, it suffices by Proposition~\ref{prp:GAobstruction} to show that it is not of Type~I. If we suppose for contradiction that $A$ is of Type~I, then as it is simple, we must have $A \cong \mathcal{K}(\mathcal{H})$ for some Hilbert space $\mathcal{H}$. Since $A$ is unital, $\mathcal{H}$ and hence $\mathcal{K}(\mathcal{H})$ must be finite-dimensional, contradicting that $A$ is infinite dimensional. The statement in (3) follows from Theorem~\ref{no-unital-quotient-then-graph-alg}. The final assertion follows from (1), (2), and (3). 
\end{proof} \begin{corollary} \label{UHF-inf-not-graph} If $A$ is an infinite-dimensional UHF algebra, then $A$ is not isomorphic to a graph $C^*$-algebra. \end{corollary} \section{A summary of known containments} \label{Venn-sec} In this section we use our results to describe how various classes of AF-algebras are contained in the classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph algebras. We first examine the simple AF-algebras, where we have a complete description. Moreover, we see that the simple AF-algebras allow us to distinguish among the four classes of $C^*$-algebras of row-finite graphs with no sinks, graph $C^*$-algebras, Exel-Laca algebras, and ultragraph algebras. Second, we consider general AF-algebras, and while our description in this case is not complete, we are able to describe how the finite-dimensional and stable AF-algebras are contained in the classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph algebras. Furthermore, we use our results to show that there are numerous other AF-algebras in the various intersections of these classes. \subsection{Simple AF-algebras} Consider the following partition of the simple AF-algebras. \begin{align*} \textrm{AF}^\text{simple}_{\textit{finite}} &:= \text{finite-dimensional simple AF-algebras} \\ \textrm{AF}^\text{simple}_{\infty, \textit{unital}} &:= \text{infinite-dimensional simple AF-algebras that are unital} \\ \textrm{AF}^\text{simple}_{\infty, \textit{nonunital}} &:= \text{infinite-dimensional simple AF-algebras that are nonunital} \end{align*} Theorem~\ref{simple-AF-graph-EL} and Theorem~\ref{no-unital-quotient-then-graph-alg} imply that \begin{align*} \textrm{AF}^\text{simple}_{\infty, \textit{nonunital}} &= \text{simple AF-algebras that are $C^*$-algebras of} \\ & \qquad \quad \text{row-finite graphs with no sinks,} \\ \textrm{AF}^\text{simple}_{\textit{finite}} \cup \textrm{AF}^\text{simple}_{\infty, \textit{nonunital}} &= \text{simple AF-algebras that are graph $C^*$-algebras,} \\ \textrm{AF}^\text{simple}_{\infty, \textit{unital}} \cup \textrm{AF}^\text{simple}_{\infty, \textit{nonunital}} &= \text{simple AF-algebras that are Exel-Laca algebras} \\ \intertext{\noindent and} \textrm{AF}^\text{simple}_{\textit{finite}} \cup \textrm{AF}^\text{simple}_{\infty, \textit{unital}} \cup \textrm{AF}^\text{simple}_{\infty, \textit{nonunital}} &= \text{simple AF-algebras that are ultragraph algebras.} \end{align*} Hence these three classes of simple AF-algebras allow us to distinguish among the four classes of $C^*$-algebras of row-finite graphs with no sinks, graph $C^*$-algebras, Exel-Laca algebras, and ultragraph algebras. However, they do not allow us to distinguish between the classes of $C^*$-algebras of row-finite graphs with no sinks and the intersection of graph $C^*$-algebras and Exel-Laca algebras. Nor do they allow us to distinguish between the classes of ultragraph $C^*$-algebras and the union of graph $C^*$-algebras and Exel-Laca algebras. To distinguish these classes we will need nonsimple examples. \subsection{More general AF-algebras} For nonsimple AF-algebras, we cannot give such an explicit description. Nevertheless, in Figure~\ref{fig:Venn} we present a Venn diagram summarizing the relationships we have established for finite-dimensional and stable AF-algebras, and also give various examples in the intersections of our classes of graph $C^*$-algebras, Exel-Laca algebras, and ultragraph $C^*$-algebras. 
\begin{figure} \caption{A Venn diagram summarizing AF-algebra containments} \label{fig:Venn} \end{figure} \begin{table}[htp!] \[\begin{tabular}{||c|c|c||}\hline \raisebox{0pt}[1em][0em]{Region} & unital $C^*$-algebra & nonunital $C^*$-algebra\\[0.25ex] \hline \raisebox{0pt}[1em][0em]{(a)} & $c_c$ & $c_0 \oplus c_c$ \\[0.25ex] \raisebox{0pt}[1em][0em]{(b)} & $\MU{\mathcal{K}}$ & $c_0$ \\[0.25ex] \raisebox{0pt}[1em][0em]{(c)} & $M_{2^\infty} \oplus \mathbb{C}$ & $M_{2^\infty} \oplus \mathbb{C} \oplus \mathcal{K}$ \\[0.25ex] \raisebox{0pt}[1em][0em]{(d)} & $M_2(\MU{\mathcal{K}})$ & $M_2(\MU{\mathcal{K}}) \oplus \mathcal{K}$ \\[0.25ex] \raisebox{0pt}[1em][0em]{(e)} & --- & $C^*(F_2)$ \\[0.25ex] \raisebox{0pt}[1em][0em]{(f)} & $M_{2^\infty}$ & $M_{2^\infty} \oplus \mathcal{K}$ \\[0.25ex]\hline \end{tabular}\] $ $ \caption{Examples of $C^*$-algebras lying in each region of Figure~\ref{fig:Venn}}\label{tab:egs-for-diagram} \end{table} Table~\ref{tab:egs-for-diagram} presents, for each region of the Venn diagram of Figure~\ref{fig:Venn}, both a unital and a nonunital example belonging to that region, with three exceptions: we give no examples of finite-dimensional or stable AF algebras, nor any example of a unital AF algebra which is the $C^*$-algebra of a row-finite graph with no sinks. Our reasons for these omissions are as follows: examples of finite-dimensional and stable AF algebras are obvious, and necessarily unital and nonunital respectively; and no unital example exists in region (e) by Theorem~\ref{no-unital-quotient-then-graph-alg}. In Table~\ref{tab:egs-for-diagram}, we use the following notation: \begin{itemize} \item $M_{2^\infty}$ denotes the UHF algebra of type $2^\infty$. \item $\mathcal{K}$ denotes the compact operators on a separable infinite-dimensional Hilbert space. \item $\MU{\mathcal{K}}$ denotes the minimal unitization of the $C^*$-algebra $\mathcal{K}$. \item $c_0$ denotes the space $\{f : \mathbb{N} \to \mathbb{C} \ | \ \lim_{n \to \infty} f(n) = 0\}$. \item $c_c$ denotes the space $\{f : \mathbb{N} \to \mathbb{C} \ | \ \lim_{n \to \infty} f(n) \in \mathbb{C} \}$. \item $F_2$ denotes the graph\quad \raisebox{-0.5ex}[2ex][0.5ex]{\begin{tikzpicture}[xscale=1.5] \node[inner sep=1pt] (a) at (0,0) {\small $v_1$}; \node[inner sep=1pt] (b) at (1,0) {\small $v_2$}; \node[inner sep=1pt] (c) at (2,0) {\small $v_3$}; \node[inner sep=1pt] (d) at (3,0) {\small $v_4$}; \node at (3.35,0) {\dots}; \draw[-latex] (a) .. controls (0.5,0.15) .. (b); \draw[-latex] (a) .. controls (0.5,-0.15) .. (b); \draw[-latex] (b) .. controls (1.5,0.15) .. (c); \draw[-latex] (b) .. controls (1.5,-0.15) .. (c); \draw[-latex] (c) .. controls (2.5,0.15) .. (d); \draw[-latex] (c) .. controls (2.5,-0.15) .. (d); \end{tikzpicture}}. \end{itemize} \noindent We now justify that the examples listed have the desired properties. \begin{enumerate}\renewcommand{\labelenumi}{(\alph{enumi})} \item \begin{itemize} \item The unital AF-algebra $c_c$ is not an ultragraph $C^*$-algebra since it is commutative and its spectrum is not discrete (see Proposition~\ref{prop:commutatuveUGA}). \item The nonunital AF-algebra $c_0\oplus c_c$ is not an ultragraph algebra for precisely the same reason that $c_c$ is not. 
\end{itemize} \item \begin{itemize} \item The minimal unitization $\MU{\mathcal{K}}$ of the compact operators is isomorphic to the $C^*$-algebra of the graph\ \raisebox{-3pt}[2.5ex][1.5ex]{\begin{tikzpicture}[scale=2] \node[circle,inner sep=1pt] (v) at (0,0) {\small$v$}; \node[circle,inner sep=1pt] (w) at (1,0) {\small$w$}; \draw[-latex] (v.20) -- (w.163) node[pos=0.5,anchor=south,inner sep=1.5pt] {\tiny$(\infty)$}; \draw[-latex] (v.340) -- (w.197); \end{tikzpicture}}\ with two vertices $v,w$ and infinitely many edges from $v$ to $w$. Since, $\MU{\mathcal{K}}$ has a quotient isomorphic to $\mathbb{C}$, it is not an Exel-Laca algebra by Proposition~\ref{prp:ELobstruction}. \item The nonunital AF-algebra $c_0$ is the $C^*$-algebra of the graph with infinitely many vertices and no edges. It is not an Exel-Laca algebra by Proposition~\ref{prp:ELobstruction}. \end{itemize} \item \begin{itemize} \item Since $M_{2^\infty}$ is an infinite-dimensional simple AF-algebra, Theorem~\ref{simple-AF-graph-EL} implies that $M_{2^\infty}$ is an Exel-Laca algebra and hence also an ultragraph algebra. In addition, $\mathbb{C}$ is a graph $C^*$-algebra so also an ultragraph $C^*$-algebra. Since the class of ultragraph $C^*$-algebras is closed under direct sums, $M_{2^\infty} \oplus \mathbb{C}$ is a unital ultragraph $C^*$-algebra. It is not an Exel-Laca algebra because it has a quotient isomorphic to $\mathbb{C}$ (see Proposition~\ref{prp:ELobstruction}), and it is not a graph $C^*$-algebra because it has a unital quotient $M_{2^\infty}$ that is not Type~I (see Proposition~\ref{prp:GAobstruction}). \item Since $\mathcal{K}$ and $M_{2^\infty} \oplus \mathbb{C}$ are both ultragraph $C^*$-algebras, the direct sum $M_{2^\infty} \oplus \mathbb{C} \oplus \mathcal{K}$ is a nonunital ultragraph $C^*$-algebra. It is neither a graph $C^*$-algebra nor an Exel-Laca algebra as above. \end{itemize} \item \begin{itemize} \item The unital AF-algebra $M_2(\MU{\mathcal{K}})$ is isomorphic to the $C^*$-algebra of the following graph \[\begin{tikzpicture}[scale=2] \node[circle,inner sep=-0.5pt] (v) at (0,0) {$\bullet$}; \node[circle,inner sep=-0.5pt] (w) at (1,0) {$\bullet$}; \draw[-latex] (v.north east) -- (w.north west) node[pos=0.5,anchor=south,inner sep=1.5pt] {\small$(\infty)$}; \draw[-latex] (v.south east) -- (w.south west); \node[circle,inner sep=-0.5pt] (x) at (1,-0.5) {$\bullet$}; \draw[-latex] (x)--(v.-60); \draw[-latex] (x)--(w); \end{tikzpicture}\] and it is also isomorphic to the Exel-Laca algebra of the matrix \[\left(\begin{tabular}{cccccccc} 0&1&1&1&1&$\cdots$ \\ 0&0&1&0&0& \\ 0&0&0&1&0& \\ 0&0&0&0&1& \\ \vdots&&&&&$\ddots$ \end{tabular}\right).\] It is not isomorphic to the $C^*$-algebra of a row-finite graph with no sinks by Theorem~\ref{no-unital-quotient-then-graph-alg}. \item The nonunital AF-algebra $M_2(\MU{\mathcal{K}}) \oplus \mathcal{K}$ is isomorphic to both a graph $C^*$-algebra and an Exel-Laca algebra because its two direct summands have this property. It is not the $C^*$-algebra of a row-finite graph with no sinks by Theorem~\ref{no-unital-quotient-then-graph-alg} because it admits the unital quotient $M_2(\MU{\mathcal{K}})$. \end{itemize} \item \begin{itemize} \item There is no unital example in this region by Theorem~\ref{no-unital-quotient-then-graph-alg}. 
\item Let $F_2$ denote the graph\quad \raisebox{-0.5ex}[2ex][0.5ex]{\begin{tikzpicture}[xscale=1.5] \node[inner sep=1pt] (a) at (0,0) {\small $v_1$}; \node[inner sep=1pt] (b) at (1,0) {\small $v_2$}; \node[inner sep=1pt] (c) at (2,0) {\small $v_3$}; \node[inner sep=1pt] (d) at (3,0) {\small $v_4$}; \node at (3.35,0) {\dots}; \draw[-latex] (a) .. controls (0.5,0.15) .. (b); \draw[-latex] (a) .. controls (0.5,-0.15) .. (b); \draw[-latex] (b) .. controls (1.5,0.15) .. (c); \draw[-latex] (b) .. controls (1.5,-0.15) .. (c); \draw[-latex] (c) .. controls (2.5,0.15) .. (d); \draw[-latex] (c) .. controls (2.5,-0.15) .. (d); \end{tikzpicture}}. Then $C^*(F_2)$ is a graph $C^*$-algebra, and since $F_2$ is cofinal with no cycles and no sinks, $C^*(F_2)$ is simple by \cite[Corollary~3.10]{KPR}. In addition, $C^*(F_2)$ is nonunital because $F_2$ has infinitely many vertices. Since $C^*(F_2)$ is the $C^*$-algebra of a row-finite graph with no sinks, it is both a graph $C^*$-algebra and an Exel-Laca algebra (see Lemma~\ref{row-finite-graphs-matrices-lem}). The function $g : F_2^0 \to \mathbb{R}^+$ defined by $g(v_i) = 2^{-i}$ is a graph trace with norm 1 (see \cite[Definition~2.2]{Tomforde2004}), and the existence of such a function implies that $C^*(F_2)$ is not stable by (a)${}\implies{}$(c) of \cite[Theorem~3.2]{Tomforde2004}. \end{itemize} \item \begin{itemize} \item As in example~(c), the unital AF-algebra $M_{2^\infty}$ is an Exel-Laca algebra but not a graph $C^*$-algebra. \item As in example~(c), the nonunital AF-algebra $M_{2^\infty} \oplus \mathcal{K}$ is an Exel-Laca algebra but not a graph $C^*$-algebra. \end{itemize} \end{enumerate} \end{document}
arXiv
\begin{document} \pagestyle{fancy} \fancyhead{} \maketitle \section{Introduction} \label{intro} Core concepts from game theory underpin many advances in multi-agent systems research. Among these, Nash equilibrium is particularly prevalent. Despite the difficulty of computing a Nash equilibrium~\citep{daskalakis2009complexity,chen2009settling}, a plethora of algorithms~\citep{lemke1964equilibrium,sandholm2005mixed,porter2008simple,govindan2003global,blum2006continuation} and suitable benchmarks~\citep{nudelman2004run} have been developed; however, none address large normal-form games with many actions and many players, especially those too big to be stored in memory. In this work, we develop an algorithm for approximating a Nash equilibrium of a normal-form game with so many actions and players that only a small subset of the possible outcomes in the game can be accessed at a time. We refer the reader to~\citet{mckelvey1996computation} for a review of approaches for normal-form games. Several algorithms exactly compute a Nash equilibrium for small normal-form games and others efficiently approximate Nash equilibria for special game classes; however, practical algorithms for approximating Nash in large normal-form games with many players (e.g., 7) and many actions (e.g., 21) are lacking. Computational efficiency is of paramount importance for large games because a general normal-form game with $n$ players and $m$ actions contains $nm^n$ payoffs; simply enumerating all payoffs can be intractable and renders classical approaches ineligible. A common approach is to return the profile found by efficient no-regret algorithms that sample payoffs as needed~\citep{blum_mansour_2007}, although~\citet{flokas2020no} recently proved that many from this family do not converge to mixed Nash equilibria in \emph{all} games, 2-player games included. While significant progress has been made for computing Nash in 2-player normal-form games, which can be represented as a \emph{linear} complementarity problem, the many-player setting induces a \emph{nonlinear} complementarity problem, which is ``often hopelessly impractical to solve exactly''~(\cite{shoham2008multiagent}, p. 105).\footnote{While any $n$-player game can, in theory, be efficiently solved for approximate equilibria by reducing it to a two-player game, in practice this approach is not feasible for solving large games due to the blowups involved in the reductions. Details in Appx.~\ref{app:beyondtwo}.} The combination of high dimensionality ($m^n$ vs $m^2$ distinct outcomes) and nonlinearity (utilities are degree-$n$ polynomials in the strategies vs degree-$2$) makes many-player games much more complex. This more general problem arises in cutting-edge multiagent research when learning~\citep{gray2020human} and evaluating~\citep{anthony2020learning} agents in Diplomacy, a complex 7-player board game. \citet{gray2020human} used no-regret learning to approximate a Nash equilibrium of subsampled games; however, this approach is brittle, as we show later in Figure~\ref{fig:noregfail}. In \cite{anthony2020learning}, five Diplomacy bots were ranked according to their mass under an approximate Nash equilibrium. We extend that work to encourage convergence to a particular Nash and introduce sampling, along with several technical contributions, to scale evaluation to 21 Diplomacy bots, a ${>}1000$-fold increase in meta-game size. Equilibrium computation has been an important component of AI in multi-agent systems~\citep{shoham2008multiagent}. 
It has been (and remains) a critical component of super-human AI in poker~\citep{Bowling15Poker,Moravcik17DeepStack,Brown17Libratus,brown2020combining}. As mentioned above, Nash computation also arises when strategically summarizing a larger domain by learning a lower-dimensional empirical game~\citep{wellman2006methods}; such an approach was used in the AlphaStar League, leading to an agent that beat humans in StarCraft~\citep{vinyals2019alphastar,vinyals2019grandmaster}. Ultimately, this required solving for the Nash of a 2-player, 888-action game, which can take several seconds using state-of-the-art solvers on modern hardware. In contrast, solving an empirical game of Diplomacy, e.g., a 7-player 888-action game, would naively take longer than the current age of the universe. This is well beyond the size of any game we inspect here; however, we approximate the Nash of games several orders of magnitude larger than previously possible, thus taking a step towards this ambitious goal. \textbf{Our Contribution:} We introduce stochastic optimization into a classical \emph{homotopy} approach, resulting in an algorithm that avoids the need to work with the full payoff tensor all at once and is, to our knowledge, the first algorithm generally capable of practically approximating a unique Nash equilibrium in large (billions of outcomes) many-player, many-action normal-form games. We demonstrate our algorithm on 2-, 3-, 4-, 6-, 7-, and 10-player games (10 in Appx.~\ref{app:xtra_games}; others in \S\ref{exp}). We also perform various ablation studies of our algorithm (Appx.~\ref{app:ablations}), compare against several baselines including solvers from the popular \texttt{Gambit} library (more in Appx.~\ref{app:morealgs}), and examine a range of domains (more in Appx.~\ref{app:xtra_games}). The paper is organized as follows. After formulating the Nash equilibrium problem for a general $n$-player normal-form game, we review previous work. We discuss how we combine the insights of classical algorithms with ideas from stochastic optimization to develop our final algorithm, \emph{average deviation incentive descent with adaptive sampling}, or ADIDAS. Finally, we compare our proposed algorithm against previous approaches on large games of interest from the literature: games such as Colonel Blotto~\citep{arad2012multi}, classical Nash benchmarks from the GAMUT library~\citep{nudelman2004run}, and games relevant to recent success on the $7$-player game Diplomacy~\citep{anthony2020learning,gray2020human}. \section{Preliminaries} In a finite $n$-player game in normal form, each player $i \in \{1,\ldots,n\}$ is given a strategy set $\mathcal{A}_i = \{a_{i1}, \ldots, a_{im_i}\}$ consisting of $m_i$ pure strategies. The pure strategies can be naturally indexed by non-negative integers, so we redefine $\mathcal{A}_i = \{0, \ldots, m_i - 1\}$ as an abuse of notation for convenience. Each player $i$ is also given a payoff or utility function $u_i: \mathcal{A} \rightarrow \mathbb{R}$, where $\mathcal{A} = \prod_i \mathcal{A}_i$. In games where the cardinality of each player's strategy set is the same, we drop the subscript on $m_i$. Player $i$ may play a mixed strategy by sampling from a distribution over their pure strategies. Let player $i$'s mixed strategy be represented by a vector $x_i \in \Delta^{m_i-1}$, where $\Delta^{m_i-1}$ is the $(m_i-1)$-dimensional probability simplex embedded in $\mathbb{R}^{m_i}$. 
Each function $u_i$ is then extended to this domain so that $u_i(\boldsymbol{x}) = \sum_{\boldsymbol{a} \in \mathcal{A}} u_i(\boldsymbol{a}) \prod_{j} x_{ja_j}$ where $\boldsymbol{x} = (x_1, \ldots, x_n)$ and $a_j \in \mathcal{A}_j$ denotes player $j$'s component of the joint action $\boldsymbol{a} \in \mathcal{A}$. For convenience, let $x_{-i}$ denote all components of $\boldsymbol{x}$ belonging to players other than player $i$. We say $\boldsymbol{x} \in \prod_i \Delta^{m_i-1}$ is a Nash equilibrium iff, for all $i \in \{1, \ldots, n\}$, $u_i(z_i, x_{-i}) \le u_i(\boldsymbol{x})$ for all $z_i \in \Delta^{m_i-1}$, i.e., no player has any incentive to unilaterally deviate from $\boldsymbol{x}$. Nash is most commonly relaxed with $\epsilon$-Nash, an additive approximation: $u_i(z_i, x_{-i}) \le u_i(\boldsymbol{x}) + \epsilon$ for all $z_i \in \Delta^{m_i-1}$. Later we explore the idea of regularizing utilities with a function $S^{\tau}_i$ (e.g., entropy) as follows: \begin{align} u^{\tau}_i(\boldsymbol{x}) &= u_i(\boldsymbol{x}) + S^{\tau}_i(x_i, x_{-i}). \end{align} As an abuse of notation, let the atomic action $a_{i}$ also denote the $m_i$-dimensional ``one-hot" vector with all zeros aside from a $1$ at index $a_{i}$; its use should be clear from the context. And for convenience, denote by $H^i_{il} = \mathbb{E}_{x_{-il}}[u_i(a_i, a_l, x_{-il})]$ the Jacobian\footnote{See Appx.~\ref{appx:nfg_grad} for an example derivation of the gradient if this form is unfamiliar.} of player $i$'s utility with respect to $x_i$ and $x_l$; $x_{-il}$ denotes all strategies belonging to players other than $i$ and $l$ and $u_i(a_i, a_l, x_{-il})$ separates out $l$'s strategy $x_l$ from the rest of the players $x_{-i}$. We also introduce $\nabla^i_{x_i}$ as player $i$'s utility gradient. Note player $i$'s utility can now be written succinctly as $u_i(x_i, x_{-i}) = x_i^\top \nabla^i_{x_i} = x_i^\top H^i_{il} x_l$ for any $l$. In a polymatrix game, interactions between players are limited to local, pairwise games, each of which is represented by matrices $H^i_{ij}$ and $H^j_{ij}$. This reduces the exponential $nm^n$ payoffs required to represent a general normal form game to a quadratic $n(n-1)m^2$, an efficiency we leverage later. \subsection{Related work} Several approaches exist for computing Nash equilibria of $n$-player normal form games\footnote{Note that Double-Oracle~\citep{mcmahan2003planning} and PSRO~\citep{lanctot2017unified} can be extended to n-player games, but require an n-player normal form meta-solver (Nash-solver) and so cannot be considered solvers in their own right. This work provides an approximate meta-solver.}. Simplicial Subdivision (SD)~\citep{van1987simplicial} searches for an equilibrium over discretized simplices; accuracy depends on the grid size which scales exponentially with the number of player actions. \citet{govindan2003global} propose a homotopy method (GW) that begins with the unique Nash distribution of a game whose payoff tensor has been perturbed by an arbitrary constant tensor. GW then scales back this perturbation while updating the Nash to that of the transformed game. GW is considered an extension of the classic Lemke-Howson algorithm (\citeyear{lemke1964equilibrium}) to $3$+ player games (see \S4.3, p. 107 of ~\citep{shoham2008multiagent}). 
\begin{figure} \caption{Algorithm Comparison and Overview.} \label{fig:algcomp} \end{figure} Another homotopy approach perturbs the payoffs with entropy bonuses, and evolves the Nash distribution along a continuum of quantal response equilibria (QREs) using a predictor-corrector method to integrate a differential equation~\citep{turocy2005dynamic} \textemdash we also aim to follow this same continuum. In a slightly different approach, \citet{perolat2020poincar} propose an adaptive regularization scheme that repeatedly solves for the equilibrium of a transformed game. Simple search methods~\citep{porter2008simple} that approach Nash computation as a constraint satisfaction problem appear to scale better than GW and SD as measured on GAMUT benchmarks~\citep{nudelman2004run}. Lyapunov approaches minimize non-convex energy functions with the property that zero energy implies Nash~\citep{shoham2008multiagent}; however, these approaches may suffer from convergence to local minima with positive energy. In some settings, such as polymatrix games with payoffs in $[0, 1]$, gradient descent on appropriate energy functions\footnote{\Eqref{popexp} but with $\max$ instead of $\sum$ over player regrets. Note that for symmetric games with symmetric equilibria, these are equivalent up to a multiplicative factor $n$.} guarantees a $(\frac{1}{2} + \delta)$-Nash in time polynomial in $\frac{1}{\delta}$~\citep{deligkas2017computing} and performs well in practice~\citep{deligkas2016empirical}. \paragraph{Teaser} Our proposed algorithm consists of two key conceptual schemes. One lies at the crux of homotopy methods (see Figures~\ref{fig:algcomp} and~\ref{fig:homotopy}). We initialize the Nash approximation, $\boldsymbol{x}$, to the joint uniform distribution, the unique Nash of a game with infinite-temperature entropy regularization. The temperature is then annealed over time. To recover the Nash at each temperature, we minimize an appropriately adapted energy function via (biased) stochastic gradient descent. This minimization approach can be seen as simultaneously learning a suitable polymatrix decomposition of the game, similarly to~\citet{govindan2004computing}, but from batches of stochastic play, i.e., we compute Monte Carlo estimates of the payoffs in the bimatrix game between every pair of players by observing the outcomes of the players' joint actions (sampled from $\boldsymbol{x}$ after each update) rather than computing payoffs as exact expectations. \nocite{facchinei2007finite} \nocite{sandholm2005mixed} \nocite{blum2006continuation} \section{Deviation Incentive \& Warm-Up} We propose minimizing the energy function in~\eqref{popexp} below, \emph{average deviation incentive} (ADI), to approximate a Nash equilibrium of a large, entropy-regularized normal form game. This loss measures, on average, how much a single agent can exploit the rest of the population by deviating from a joint strategy $\boldsymbol{x}$. For the sake of exposition, we drop the normalizing constant from the denominator (number of players, $n$), and consider the sum instead of the average. This quantity functions as a \emph{loss} that can be minimized over $\mathcal{X} = \prod_i \Delta^{m_i-1}$ to find a Nash distribution. Note that when ADI is zero, $\boldsymbol{x}$ is a Nash. Also, if $\sum_k$ is replaced by $\max_k$, this loss measures the $\epsilon$ of an $\epsilon$-Nash, and therefore~\eqref{popexp} is an upper bound on this $\epsilon$. 
Lastly, note that, in general, this loss function is non-convex and so convergence to local, suboptimal minima is theoretically possible if naively minimizing via first-order methods like gradient descent \textemdash we explain in \S\ref{annealing} how we circumvent this pitfall via temperature annealing. Let $\texttt{BR}_k = \texttt{BR}(x_{-k}) = \argmax_{z_k \in \Delta^{m_k-1}} u^{\tau}_k(z_k, x_{-k})$ be player $k$'s best response to all other players' current strategies, where $u^\tau_k$ is player $k$'s utility regularized by entropy with temperature $\tau$, and formally define \begin{align} \mathcal{L}^{\tau}_{adi}(\boldsymbol{x}) &= \sum_k \overbrace{u^{\tau}_k(\texttt{BR}_k, x_{-k}) - u^{\tau}_k(x_k,x_{-k})}^{\text{incentive to deviate to $\texttt{BR}_k$ vs $x_k$}}. \label{popexp} \end{align} If $\tau=0$, we drop the superscript and use $\mathcal{L}_{adi}$. The Nash equilibrium of the game regularized with Shannon entropy is called a \emph{quantal response equilibrium}, QRE($\tau$) (see pp.~152--154, 343 of~\cite{fudenberg1998theory}). Average deviation incentive has been interpreted as a pseudo-distance from Nash in prior work, where it is referred to as \texttt{NashConv} \citep{lanctot2017unified}. We prefer average deviation incentive because it more precisely describes the function and allows room for exploring alternative losses in future work. The objective can be decomposed into terms that depend on $x_k$ (second term) and $x_{-k}$ (both terms). Minimizing the second term w.r.t. $x_k$ seeks strategies with high utility, while minimizing both terms w.r.t. $x_{-k}$ seeks strategies that cannot be exploited by player $k$. In reducing $\mathcal{L}_{adi}$, each player $k$ seeks a strategy that not only increases their payoff but also removes others' temptation to exploit them. A related algorithm is Exploitability Descent (ED)~\citep{Lockhart19ED}. Rather than minimizing $\mathcal{L}_{adi}$, in ED each player independently maximizes their utility assuming the other players play their best responses. In the two-player normal-form setting, ED is equivalent to extragradient~\citep{korpelevich1976extragradient}~(see Appx.~\ref{app:ed_connection}). However, ED is only guaranteed to converge to Nash in two-player, zero-sum games. We include a comparison against ED as well as fictitious play, another popular multiagent algorithm, in Appx.~\ref{app:xtra_comparisons}. We also relate $\mathcal{L}_{adi}$ to Consensus optimization~\citep{mescheder2017numerics} in Appx.~\ref{app:consensus_connection}. \subsection{Warm-Up} \label{annealing} \citet{mckelvey1995quantal} proved the existence of a continuum of QREs starting at the uniform distribution (infinite temperature) and ending at what they called the \emph{limiting logit equilibrium} (LLE). Furthermore, they showed this path is unique for \emph{almost all games}, partially circumventing the equilibrium selection problem. We encourage the reader to look ahead at Figure~\ref{fig:homotopy} for a visual of the homotopy that may prove helpful for the ensuing discussions. In this work, we assume we are given one of these common games with a unique path (no branching points) so that the LLE is well defined (\textbf{Assumption~\ref{nobranches}}). Furthermore, we assume there exist no ``turning points'' in the temperature $\tau$ along the continuum (\textbf{Assumption~\ref{noturning}}). 
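To make this continuum concrete before describing how we follow it at scale, the minimal sketch below (our own illustration; the symmetric $2\times 2$ coordination game, the damping constant, and the temperature schedule are assumptions made only for this example and do not come from this paper or its released code) traces logit QREs by damped fixed-point iteration on $x_i = \texttt{softmax}(\nabla^i_{x_i}/\tau)$, warm-starting each temperature from the previous solution.
\begin{verbatim}
import numpy as np

def softmax(v, tau):
    z = np.exp((v - v.max()) / tau)
    return z / z.sum()

# Assumed symmetric 2x2 coordination game: players prefer to match,
# and matching on action 0 pays more than matching on action 1.
U = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # row player's payoff matrix (game is symmetric)

x1 = np.ones(2) / 2          # the infinite-temperature QRE is uniform
x2 = np.ones(2) / 2
for tau in [8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.1]:  # anneal the temperature
    for _ in range(2000):    # damped fixed-point iteration at this temperature
        b1, b2 = softmax(U @ x2, tau), softmax(U.T @ x1, tau)
        x1, x2 = 0.9 * x1 + 0.1 * b1, 0.9 * x2 + 0.1 * b2
    print(f"tau={tau:5.2f}  x1={np.round(x1, 3)}")
# The iterates move away from uniform and settle on the pure profile
# (action 0, action 0), the equilibrium selected by the logit continuum here.
\end{verbatim}
Warm-starting each new temperature from the previous iterate is what keeps the approximation near the principal branch; Algorithm~\ref{alg_warmup} below plays the same role at scale by re-solving for the minimizer of $\mathcal{L}^{\tau}_{adi}$ after each anneal step.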
\citet{turocy2005dynamic} explains that even in generic games, temperature might have to be temporarily increased in order to remain on the path (principal branch) to the LLE. However, \citeauthor{turocy2005dynamic} also proves that there exists a $\tau^*$ such that no turning points exist with $\tau > \tau^*$, suggesting that as long as we remain near the principal branch after $\tau^*$, we can expect to proceed to the LLE. We follow the principal path by alternating between annealing the temperature and re-solving for the Nash at that temperature by minimizing $\mathcal{L}^{\tau}_{adi}$. We present a basic version of our approach that converges to the limiting logit equilibrium, assuming access to exact gradients, in Algorithm~\ref{alg_warmup} (proof in Appx.~\ref{app:conv}). We substitute $\lambda=\frac{1}{\tau}$ and initialize $\lambda=0$ in order to begin at infinite temperature. The proof of this simple warm-up algorithm relies on the detailed examination of the continuum of QREs proposed in~\cite{mckelvey1995quantal} and further analyzed in~\cite{turocy2005dynamic}. Theorem~\ref{conv_basic} presented below is essentially a succinct repetition of one of their known results (Assumptions~\ref{sensitivity} and~\ref{boa} below are expanded on in Appx.~\ref{app:conv}). In subsequent sections, we relax the exact gradient assumption and assume gradients are estimated from stochastic play (i.e., each agent samples an action from their side of the current approximation to the Nash). \begin{algorithm}[H] \begin{algorithmic}[1] \STATE Given: Total anneal steps $T_{\lambda}$, total optimizer iterations $T^*$, and anneal step size $\Delta \lambda$. \STATE $\lambda = 0$ \STATE $\boldsymbol{x} \leftarrow \{ \frac{1}{m_i} \boldsymbol{1} \,\, \forall \,\, i\}$ \FOR{$t_{\lambda} = 1: T_{\lambda}$} \STATE $\lambda \leftarrow \lambda + \Delta \lambda$ \label{line:anneal} \STATE $\boldsymbol{x} \leftarrow \texttt{OPT}(\texttt{loss} = \mathcal{L}^{\tau=\lambda^{-1}}_{adi}, \boldsymbol{x}_{init} = \boldsymbol{x}, iters = T^*)$ \label{line:descend} \ENDFOR \STATE return $\boldsymbol{x}$ \end{algorithmic} \caption{Warm-up: Anneal \& Descend} \label{alg_warmup} \end{algorithm} \begin{theorem} \label{conv_basic} Make assumptions~\ref{nobranches} and~\ref{noturning}. Also, assume the QREs along the homotopy path have bounded sensitivity to $\lambda$ given by a parameter $\sigma$ (Assumption~\ref{sensitivity}), and basins of attraction with radii lower bounded by $r$ (Assumption~\ref{boa}). Let the step size $\Delta \lambda \le \sigma (r - \epsilon)$ with tolerance $\epsilon$. Let $T^*$ be the supremum over all $T$ such that Assumption~\ref{boa} is satisfied for any inverse temperature $\lambda \ge \Delta \lambda$. Then, assuming gradient descent for \texttt{OPT}, \Algref{alg_warmup} converges to the limiting logit equilibrium $\boldsymbol{x}^*_{\lambda=\infty} = \boldsymbol{x}^*_{\tau=0}$ in the limit as $T_{\lambda} \rightarrow \infty$. \end{theorem} \subsection{Evaluating $\mathcal{L}^{\tau}_{adi}$ with Joint Play} \label{bias_intuition} In the warm-up, we assumed we could compute exact gradients, which requires access to the entire payoff tensor. However, we want to solve very large games where enumerating the payoff tensor is prohibitively expensive. Therefore, we are particularly interested in minimizing $\mathcal{L}^{\tau}_{adi}$ when only given access to samples of joint play, $\boldsymbol{a} \sim \prod_i x_i$. 
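Concretely, the sketch below (our own illustration; the random payoff tensor \texttt{U}, its dimensions, and all variable names are assumptions made for the example and are not objects from this paper or its released code) first evaluates the entropy-regularized ADI exactly by enumerating a small payoff tensor, and then forms the kind of single-sample payoff-gradient estimate, based on one draw of joint play $\boldsymbol{a} \sim \prod_i x_i$, that the rest of this section is concerned with.
\begin{verbatim}
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m, tau = 3, 4, 0.1                      # players, actions, temperature (assumed)
U = rng.standard_normal((n,) + (m,) * n)   # U[i, a_1, ..., a_n] = payoff to player i
x = [np.ones(m) / m for _ in range(n)]     # current joint strategy (uniform here)

def softmax(v, tau):
    z = np.exp((v - v.max()) / tau)
    return z / z.sum()

def entropy(p):
    return -np.sum(p * np.log(p + 1e-32))

def exact_payoff_gradient(i):
    # nabla^i_{x_i}[a_i] = E_{x_{-i}}[u_i(a_i, a_{-i})]; touches every outcome.
    g = np.zeros(m)
    for a in itertools.product(range(m), repeat=n):
        prob_others = np.prod([x[j][a[j]] for j in range(n) if j != i])
        g[a[i]] += U[(i,) + a] * prob_others
    return g

# Exact entropy-regularized ADI: the summed gap between the soft best
# response's regularized value and the current strategy's value.
adi = 0.0
for i in range(n):
    g = exact_payoff_gradient(i)
    br = softmax(g, tau)       # soft best response: argmax_z z.g + tau*H(z)
    adi += (br @ g + tau * entropy(br)) - (x[i] @ g + tau * entropy(x[i]))
print("exact ADI:", adi)

# Single-sample estimate of player 0's payoff gradient from one draw of joint
# play: fix the other players' sampled actions and sweep player 0's action.
a = [int(rng.choice(m, p=x[j])) for j in range(n)]
g_hat = np.array([U[(0, b) + tuple(a[1:])] for b in range(m)])
# Best-responding to g_hat rather than to the exact gradient is the source of
# the bias examined next.
\end{verbatim}
Averaging such single-sample estimates over many draws recovers $\nabla^i_{x_i}$ in expectation; the difficulty, discussed next, is that the best-response operator applied to a noisy estimate is biased.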
The best response operator, $\texttt{BR}$, is nonlinear and hence can introduce bias if applied to random samples. For example, consider the game given in Table~\ref{tab:biased_game} and assume $x_2 = [0.5, 0.5]^\top$. \begin{table}[!ht] \centering \begin{tabular}{c|c|c} $u_1$ & $a_{21}$ & $a_{22}$ \\ \hline $a_{11}$ & 0 & 0 \\ $a_{12}$ & 1 & -2 \\ $a_{13}$ & -2 & 1 \end{tabular} \hspace{1.0cm} \begin{tabular}{c|c|c} $u_2$ & $a_{21}$ & $a_{22}$ \\ \hline $a_{11}$ & 0 & 0 \\ $a_{12}$ & 0 & 0 \\ $a_{13}$ & 0 & 0 \end{tabular} \caption{A 2-player game with biased stochastic $\texttt{BR}$'s.} \label{tab:biased_game} \end{table} Consider computing (row) player $1$'s best response to a single action sampled from (column) player $2$'s strategy $x_2$. Either $a_{21}$ or $a_{22}$ will be sampled with equal probability, which results in a best response of either $a_{12}$ or $a_{13}$ respectively. However, the true expected utilities for each of player $1$'s actions given player $2$'s strategy are $[0, -0.5, -0.5]$ for which the best response is the first index, $a_{11}$. The best response operator completely filters out information on the utility of the true best response $a_{11}$. Intuitively, a \emph{soft} best response operator, demonstrated in equations (\ref{zero_temp})-(\ref{goldilocks}), that allows some utility information for each of the actions to pass through could alleviate the problem: \begin{align} \mathbb{E}[\texttt{BR}{}^{\tau\rightarrow 0}] &= [0.00, 0.50, 0.50] \label{zero_temp} \\ \mathbb{E}[\texttt{BR}{}^{\tau=1}] &\approx [0.26, 0.37, 0.37] \\ \mathbb{E}[\texttt{BR}{}^{\tau=10}] &\approx [\mathbf{0.42}, 0.29, 0.29]. \label{goldilocks} \end{align} By adding an entropy regularizer to the utilities, $\tau \mathcal{H}(x_i)$, we induce a soft-$\texttt{BR}$. Therefore, the homotopy approach has the added benefit of partially alleviating gradient bias for moderate temperatures. Further empirical analysis of bias can be found in Appx.~\ref{appx:bias}. \section{ADIDAS} In the previous section, we laid out the conceptual approach we take and identified bias as a potential issue to scaling up computation with Monte Carlo approximation. Here, we inspect the details of our approach, introduce further modifications to reduce the issue of bias, and present our resulting algorithm ADIDAS. Finally, we discuss the advantages of our approach for scaling to large games. \subsection{Deviation Incentive Gradient} \label{ped_grads} Regularizing the utilities with weighted Shannon entropy, $u^{\tau}_k(\boldsymbol{x}) = u_k(\boldsymbol{x}) + S^\tau_k(x_k, x_{-k})$, where $S^\tau_k(x_k, x_{-k}) = -\tau \sum_{a_k} x_{ka_k} \ln(x_{ka_k})$, leads to the following average deviation incentive gradient derived in Appx.~\ref{gen_pop_exp_grad} where $\texttt{BR}_j = \texttt{softmax}(\nabla^j_{x_j}/\tau)$ and $\texttt{diag}(v)$ creates a diagonal matrix with $v$ on the diagonal: \begin{align} &\nabla_{x_i} \mathcal{L}^{\tau}_{adi}(\boldsymbol{x}) = -\overbrace{(\nabla^i_{x_i} - \tau (\ln(x_i) + 1))}^{\text{policy gradient}} \nonumber \\ &+ \sum_{j \ne i} \Big[ J_{x_i}(\texttt{BR}_j)^\top (\nabla^j_{x_j} - \tau (\ln(\texttt{BR}_j) + 1)) + H^j_{ij} (\texttt{BR}_j - x_j) \Big] \label{qre_grad} \\ &\text{with } J_{x_i}(\texttt{BR}_j) = \frac{1}{\tau} (\texttt{diag}(\texttt{BR}_j) - \texttt{BR}_j \texttt{BR}_j^\top) H^j_{ji}. 
\label{qre_jac} \end{align} In the limit, $\nabla_{x_i} \mathcal{L}^{\tau}_{adi}(\boldsymbol{x}) \stackrel{\tau\rightarrow 0^+}{=} -\nabla^i_{x_i} + \sum_{j \ne i} H^j_{ij} (\texttt{BR}_j - x_j)$. The first term is recognized as player $i$'s payoff or \emph{policy} gradient. The second term is a correction that accounts for the other players' incentives to exploit player $i$ through a strategy deviation. Each $H^j_{ij}$ approximates player $j$'s payoffs in the bimatrix game between players $i$ and $j$. Recall from the preliminaries that in a polymatrix game, these matrices capture the game exactly. We also explore an adaptive Tsallis entropy in Appx.~\ref{gen_pop_exp_grad}. \subsection{Amortized Estimates with Historical Play} \label{amortized_estimates} Section~\ref{bias_intuition} discusses the bias that can be introduced when best responding to sampled joint play and how the annealing process of the homotopy method helps alleviate it by softening the $\texttt{BR}$ operator with entropy regularization. To reduce the bias further, we could evaluate more samples from $\boldsymbol{x}$; however, this increases the required computation. Alternatively, assuming strategies have changed minimally over the last few updates (i.e., $\boldsymbol{x}^{(t-2)} \approx \boldsymbol{x}^{(t-1)} \approx \boldsymbol{x}^{(t)}$), we can instead reuse historical play to improve estimates. We accomplish this by introducing an auxiliary variable $y_i$ that computes an exponentially averaged estimate of each player $i$'s payoff gradient $\nabla^i_{x_i}$ throughout the descent, similarly to~\citet{sutton2008convergent}. We also use $y_i$ to compute an estimate of ADI, $\hat{\mathcal{L}}^{\tau}_{adi}$, as follows: \begin{align} \hat{\mathcal{L}}^{\tau}_{adi}(\boldsymbol{x}, \boldsymbol{y}) &= \sum_k y_k^\top (\hat{\texttt{BR}}_k - x_k) + S^\tau_k(\hat{\texttt{BR}}_k, x_{-k}) - S^\tau_k(x_k, x_{-k}) \label{amortized_exp_reg} \end{align} where $\hat{\texttt{BR}}_k = \argmax_{z_k \in \Delta^{m_k-1}} y_k^\top z_k + S^\tau_k(z_k, x_{-k})$ is computed with $y_k$ instead of $\nabla^k_{x_k}$. Likewise, replace all $\nabla^k_{x_k}$ with $y_k$ and $\texttt{BR}_k$ with $\hat{\texttt{BR}}_k$ in equations (\ref{qre_grad}) and (\ref{qre_jac}) when computing the gradient. \subsection{Putting It All Together} \label{convergence} \begin{algorithm}[H] \begin{algorithmic}[1] \STATE Given: Strategy learning rate $\eta_x$, auxiliary learning rate $\eta_y$, initial temperature $\tau$ ($=100$), ADI threshold $\epsilon$, total iterations $T$, simulator $\mathcal{G}_i$ that returns player $i$'s payoff given a joint action. \STATE $\boldsymbol{x} \leftarrow \{ \frac{1}{m_i} \boldsymbol{1} \,\, \forall \,\, i\}$ \STATE $\boldsymbol{y} \leftarrow \{ \boldsymbol{0} \,\, \forall \,\, i \}$ \FOR{$t = 1: T$} \STATE $a_i \sim x_i \,\, \forall \,\, i$ \label{start} \FOR{$i \in \{1, \ldots, n\}$} \FOR{$j \ne i \in \{1, \ldots, n\}$} \STATE $H^i_{ij}[r,c] \leftarrow \mathcal{G}_i(r,c,a_{-ij}) \,\, \forall \,\, r \in \mathcal{A}_i, c \in \mathcal{A}_j$ \ENDFOR \ENDFOR \label{end} \STATE $\nabla^i_{x_i} = H^i_{ij} x_j$ for any $x_j$ (or average the result over all $j$) \STATE $y_i \leftarrow y_i - \max(\frac{1}{t}, \eta_y) (y_i - \nabla^i_{x_i})$ \STATE $x_i \leftarrow x_i - \eta_x \nabla_{x_i} \hat{\mathcal{L}}^{\tau}_{adi}(\boldsymbol{x}, \boldsymbol{y})$ (def. in~\S\ref{amortized_estimates} and code in Appx.~\ref{app:code}) \IF{$\hat{\mathcal{L}}^{\tau}_{adi}(\boldsymbol{x}, \boldsymbol{y}) < \epsilon$ (def. 
in~\eqref{amortized_exp_reg})} \STATE $\tau \leftarrow \frac{\tau}{2}$ \label{anneal} \ENDIF \ENDFOR \STATE return $\boldsymbol{x}$ \end{algorithmic} \caption{ADIDAS} \label{alg_saped} \end{algorithm} Algorithm~\ref{alg_saped}, ADIDAS, is our final algorithm. ADIDAS attempts to approximate the unique continuum of quantal response equilibria by way of a quasi-stationary process\textemdash see Figure~\ref{fig:homotopy}. Whenever the algorithm finds a joint strategy $\boldsymbol{x}$ exhibiting $\hat{\mathcal{L}}^{\tau}_{adi}$ below a threshold $\epsilon$ for the game regularized with temperature $\tau$, the temperature is exponentially reduced (line~\ref{anneal} of ADIDAS) as suggested in~\citep{turocy2005dynamic}. Incorporating stochastic optimization into the process enables scaling the classical homotopy approach to extremely large games (large payoff tensors). At the same time, the homotopy approach selects a unique limiting equilibrium and, symbiotically, helps alleviate gradient bias, further amortized by the reuse of historical play. \paragraph{Limitations:} As mentioned earlier, gradient bias precludes a rigorous convergence proof of ADIDAS. However, recent work showed that gradient estimators that are biased but consistent worked well empirically~\citep{chen2018fastgcn}, and follow-up analysis suggests that consistency may be an important property~\citep{chen2018stochastic}. Bias is also being explored in the more complex Riemannian optimization setting, where it has been proven that the amount of bias in the gradient shifts the stationary point by a proportional amount~\citep{durmus2020convergence}. Note that ADIDAS gradients are also consistent in the limit of infinite samples of joint play, and we find that biased stochastic gradient descent maintains an adequate level of performance for the purpose of our experiments. No-regret algorithms scale but have been proven not to converge to Nash~\citep{flokas2020no}; classical solvers~\citep{mckelvey2014gambit} converge to Nash but do not scale. ADIDAS suffers from gradient bias, an issue that may be further mitigated by future research. In this sense, ADIDAS is one of the few, if not the only, algorithms that can practically approximate Nash in many-player, many-action normal-form games. \begin{figure} \caption{ADIDAS pathologies } \label{fig:manynasherror} \caption{$10$-player, 2-action El Farol homotopy } \label{fig:el_farol_homotopy} \caption{(\subref{fig:manynasherror}) In the presence of multiple equilibria, ADIDAS may fail to follow the path to the uniquely defined Nash due to gradient noise, gradient bias, and a coarse annealing schedule. If these issues are severe, they can cause the algorithm to get stuck at a local optimum of $\mathcal{L}^{\tau}_{adi}$\textemdash see Figure~\ref{fig:dipmedexp_ate} in \S\ref{large_scale}. (\subref{fig:el_farol_homotopy}) Such concerns are minimal for the El Farol Bar stage game by~\citet{arthur1994complexity}. The solid black curves represent (biased) descent trajectories while the dashed segments indicate the temperature is being annealed.} \label{fig:homotopy} \end{figure} \begin{table}[ht!] \centering \begin{tabular}{c|c|c|c} Alg Family & Classical & No-Regret & This Work \\ \hline \hline Convergence to Nash & Yes & No & Yes$^\dag$ \\ \hline Payoffs Queried & $nm^n$ & $Tnm^{\ddag}$ & $T(nm)^2$ \end{tabular} \caption{Comparison of solvers. $^\dag$See \emph{Limitations} in \S\ref{convergence} and Appx.~\ref{app:conv:sto}. 
$^\ddag$Reduce to $T$ at the expense of higher variance.} \label{tab:nealg_comp} \end{table} \subsection{Complexity and Savings} \label{scale} A normal form game may also be represented with a tensor $U$ in which each entry $U[i, a_1, \ldots, a_n]$ specifies the payoff for player $i$ under the joint action $(a_1, \ldots, a_n)$. In order to demonstrate the computational savings of our approach, we evaluate the ratio of the number of entries in $U$ to the number of entries queried (in the sense of~\citep{babichenko2016query,fearnley2015learning,fearnley2016finding}) for computing a single gradient, $\nabla \mathcal{L}^{\tau}_{adi}$. This ratio represents the number of steps that a gradient method can take before it is possible to compute $\mathcal{L}^{\tau}_{adi}$ exactly in expectation. Without further assumptions on the game, the number of entries in a general payoff tensor is $nm^n$. In contrast, computing the stochastic deviation incentive gradient requires computing $H_{ij}^j$ for all $i, j$ requiring less than $(nm)^2$ entries\footnote{Recall $\nabla^i_{x_i}$ can be computed with $\nabla^i_{x_i} = H^i_{ij} x_j$ for any $x_j$.}. The resulting ratio is $\frac{1}{n} m^{n-2}$. For a $7$-player, $21$-action game, this implies at least $580,000$ descent updates can be used by stochastic gradient descent. If the game is symmetric and we desire a symmetric Nash, the payoff tensor can be represented more concisely with $\frac{(m + n - 1)!}{n! (m - 1)!}$ entries (number of multisets of cardinality $n$ with elements taken from a finite set of cardinality $m$). The number of entries required for a stochastic gradient is less than $m^2$. Again, for a $7$-player $21$-action game, this implies at least $2,000$ update steps. Although there are fewer unique entries in a symmetric game, we are not aware of libraries that allow sparse storage of or efficient arithmetic on such permutation-invariant tensors. ADIDAS can exploit this symmetry. \section{Experiments} \label{exp} We test the performance of ADIDAS empirically on very large games. We begin by considering Colonel Blotto, a deceptively complex challenge domain still under intense research~\citep{behnezhad2017faster,boix2020multiplayer}, implemented in OpenSpiel~\citep{LanctotEtAl2019OpenSpiel}. For reference, both the 3 and 4-player variants we consider are an order of magnitude ($>20\times$) larger than the largest games explored in~\citep{porter2008simple}. We find that no-regret approaches as well as existing methods from Gambit~\citep{mckelvey2014gambit} begin to fail at this scale, whereas ADIDAS performs consistently well. At the same time, we empirically validate our design choice regarding amortizing gradient estimates (\S\ref{amortized_estimates}). Finally, we end with our most challenging experiment, the approximation of a unique Nash of a 7-player, 21-action (> billion outcome) Diplomacy meta-game. We use the following notation to indicate variants of the algorithms compared in Table~\ref{tab:competingalgs}. A $y$ superscript prefix, e.g., $^y$\texttt{QRE}, indicates the estimates of payoff gradients are amortized using historical play; its absence indicates that a fresh estimate is used instead. $\bar{x}_t$ indicates that the average deviation incentive reported is for the average of $\boldsymbol{x}^{(t)}$ over learning. A subscript of ${\infty}$ indicates best responses are computed with respect to the true expected payoff gradient (infinite samples). 
A superscript $auto$ indicates the temperature $\tau$ is annealed according to line~\ref{anneal} of~\Algref{alg_saped}. An $s$ in parentheses indicates lines~\ref{start}-\ref{end} of ADIDAS are repeated $s$ times, and the resulting $H^i_{ij}$'s are averaged for a more accurate estimate. Each game is solved on 1 CPU, except Diplomacy (see Appx.~\ref{app:runtime}). \begin{table}[ht!] \begin{tabular}{l|l} \texttt{FTRL} & Simultaneous Gradient Ascent \\ \texttt{RM} & Regret-Matching~\citep{blackwell1956analog} \\ \texttt{ATE} & ADIDAS with Tsallis (Appx.~\ref{ate_exp}) \\ \texttt{QRE} & ADIDAS with Shannon \end{tabular} \caption{Algorithms} \label{tab:competingalgs} \end{table} \begin{table}[ht!] \begin{tabular}{r|l} $\eta_x$ & $10^{-5}, 10^{-4}, 10^{-3}, 10^{-2}, 10^{-1}$ \\ $\eta_x^{-1} \cdot \eta_y$ & $1, 10, 100$ \\ $\tau$ & $0.0, 0.01, 0.05, 0.10$ \\ $\Pi_{\Delta}(\nabla \mathcal{L}_{adi})$ & Boolean \\ Bregman-$\psi(\boldsymbol{x})$ & $\{\frac{1}{2}||\boldsymbol{x}||^2, -\mathcal{H}(\boldsymbol{x})\}$ \\ $\epsilon$ & $0.01, 0.05$ \end{tabular} \caption{Hyperparameter Sweeps} \label{tab:hypsweeps} \end{table} Sweeps are conducted over whether to project gradients onto the simplex ($\Pi_{\Delta}(\nabla \mathcal{L}_{adi})$), whether to use a Euclidean projection or entropic mirror descent~\citep{beck2003mirror} to constrain iterates to the simplex, and over learning rates. Averages over $10$ runs of the best hyperparameters are then presented\footnote{Best hyperparameters are used because we expect running ADIDAS with multiple hyperparameter settings in parallel to be a pragmatic approach to approximating Nash.} except for Diplomacy for which we present all settings attempted (more in Appx.~\ref{app:more_dip}). Performance is measured by $\mathcal{L}_{adi}$, a.k.a. \texttt{NashConv}~\citep{lanctot2017unified}. For symmetric games, we enforce searching for a symmetric equilibrium (see Appx.~\ref{app:sym}). For sake of exposition, we do not present all baselines in all plots, however, we include the full suite of comparisons in the appendix. Our experiments demonstrate that without any additional prior information on the game, ADIDAS is the only practical approach for approximating a Nash equilibrium over many-players and many-actions. We argue this by systematically ruling out other approaches on a range of domains. For example, in Figure~\ref{fig:blotto_tracking_qre}, \texttt{RM} reduces ADI adequately in Blotto. We do not present \texttt{RM} with improvements in Figure~\ref{fig:blotto_tracking_qre} such as using exact expectations, \texttt{RM}$_{\infty}$, or averaging its iterates, \texttt{RM}$(\bar{x}_t)$, because we show that both these fail to save \texttt{RM} on the GAMUT game in Figure~\ref{fig:noregfail}. In other words, we do not present baselines that are unnecessary for logically supporting the claim above. Code is available at \href{https://github.com/deepmind/open_spiel}{github.com/deepmind/open\_spiel}~\cite{LanctotEtAl2019OpenSpiel}. \begin{figure} \caption{3-player, 286-action Blotto } \label{fig:blotto3track_qre} \caption{4-player, 66-action Blotto } \label{fig:blotto4track_qre} \caption{Amortizing estimates of joint play using $\boldsymbol{y}$ can reduce gradient bias, further improving performance (e.g., compare \texttt{QRE}$^{auto}$ to $^y$\texttt{QRE}$^{auto}$ in (\subref{fig:blotto3track_qre}) or (\subref{fig:blotto4track_qre})).} \label{fig:blotto_tracking_qre} \end{figure} \subsection{Med-Scale re. 
\S\ref{scale}} \label{med_scale} Govindan-Wilson is considered a state-of-the-art Nash solver, but it does not scale well to large games. For example, on a symmetric, $4$-player Blotto game with $66$ actions ($10$ coins, $3$ fields), GW, as implemented in Gambit, is estimated to take 53,000 hours\footnote{Public correspondence with primary \texttt{gambit} developer [\href{https://github.com/gambitproject/gambit/issues/261\#issuecomment-660894391}{link}].}. Of the solvers implemented in Gambit, none finds a symmetric Nash equilibrium within an hour\footnote{\texttt{gambit-enumpoly} returns several non-symmetric, pure Nash equilibria. Solvers listed in Appx.~\ref{app:gambit_solvers}. Symmetric equilibria are necessary for ranking in symmetric meta-games.}. Of those, \texttt{gambit-logit}~\citep{turocy2005dynamic} is expected to scale most gracefully. Experiments in the original paper are run on games with at most $5$ players ($2$ actions per player) or $20$ actions ($2$ players), so the $4$-player, $66$-action game is well outside the original design scope. Attempting to run \texttt{gambit-logit} anyway with a temperature $\tau=1$ returns an approximate Nash with $\mathcal{L}_{adi}=0.066$ after $101$ minutes. In contrast, Figure~\ref{fig:blotto4track_qre} shows ADIDAS achieves a lower ADI in $\approx 3$ minutes.
\begin{figure} \caption{2-player, 3-action modified Shapley's } \label{fig:modshapley_qre} \caption{6-player, 5-action GAMUT-D7 } \label{fig:gamutd7_qre} \caption{ADIDAS reduces $\mathcal{L}_{adi}$ in both these nonsymmetric games. In contrast, regret matching stalls or diverges in game~(\subref{fig:modshapley_qre}) and diverges in game~(\subref{fig:gamutd7_qre}). \texttt{FTRL} makes progress in game~(\subref{fig:modshapley_qre}) but stalls in game~(\subref{fig:gamutd7_qre}). In game~(\subref{fig:modshapley_qre}), created by~\protect\citet{ostrovski2013payoff}, to better test performance, $\boldsymbol{x}$ is initialized randomly rather than with the uniform distribution because the Nash is at uniform.} \label{fig:noregfail} \end{figure}
\paragraph{Auxiliary $y$ re. \S\ref{amortized_estimates}} The introduction of auxiliary variables $y_i$ is supported by the results in Figure~\ref{fig:blotto_tracking_qre}\textemdash $^y$\texttt{QRE}$^{auto}$ significantly improves performance over \texttt{QRE}$^{auto}$ at low algorithmic cost.
\paragraph{No-regret, No-convergence re.~\S\ref{convergence}} In Figure~\ref{fig:blotto_tracking_qre}, \texttt{FTRL} and \texttt{RM} achieve low ADI quickly in some cases. \texttt{FTRL} has recently been proven not to converge to Nash, and this is suggested to be true of no-regret algorithms in general~\citep{flokas2020no,mertikopoulos2018cycles}. Before proceeding, we demonstrate empirically in Figure~\ref{fig:noregfail} that \texttt{FTRL} and \texttt{RM} fail on games where ADIDAS significantly reduces ADI. Note that GAMUT (D7) was highlighted as a particularly challenging problem for Nash solvers in~\citep{porter2008simple}.
\begin{figure} \caption{7-player, 5-action symmetric Nash ($x_t$) } \label{fig:dipmednash_ate} \caption{ADI Estimate } \label{fig:dipmedexp_ate} \caption{(\subref{fig:dipmednash_ate}) Evolution of the symmetric Nash approximation returned by ADIDAS for the $7$-player Diplomacy meta-game that considers a subset $\{0, 2, 4, 10, 20\}$ of the available $21$ bots; (\subref{fig:dipmedexp_ate}) ADI estimated from auxiliary variable $\boldsymbol{y}_t$.
Black vertical lines indicate the temperature $\tau$ was annealed.} \label{fig:diplomacy_med_ate} \end{figure}
\subsection{Large-Scale} \label{large_scale} Figure~\ref{fig:diplomacy_med_ate} demonstrates an empirical game-theoretic analysis~\citep{wellman2006methods,jordan2007empirical,wah2016empirical} of a large symmetric $7$-player Diplomacy meta-game where each player elects $1$ of $5$ trained bots to play on their behalf. Each bot represents a snapshot taken from an RL training run on Diplomacy~\citep{anthony2020learning}. In this case, the expected value of each entry in the payoff tensor represents a winrate. Each entry can only be estimated by simulating game play, and the result of each game is a Bernoulli random variable (ruling out deterministic approaches, e.g., \texttt{gambit}). To estimate winrate within $0.01$ (ADI within $0.02$) of the true value with probability $95\%$, a Chebyshev bound implies more than $223$ samples are needed. The symmetric payoff tensor contains $330$ unique entries, requiring over $73$ thousand games in total. ADIDAS achieves near zero ADI in less than $7$ thousand iterations with $50$ samples of joint play per iteration ($\approx 5 \times$ the number of games needed to estimate the full tensor).
\paragraph{Continuum of QREs approaching LLE} The purpose of this work is to approximate a unique Nash (the LLE), which ADIDAS is designed to do; however, because ADIDAS attempts to track the continuum of QREs (or the continuum defined by the Tsallis entropy), it can also return the intermediate QRE strategies along the way, which may be of interest in their own right. Access to these intermediate approximations can be useful when a game-playing program cannot wait for ADIDAS's final output to play a strategy, for example, in online play. Interestingly, human play appears to track the continuum of QREs in some cases where the human must learn about the game (rules, payoffs, etc.) while also evolving their strategy~\citep{mckelvey1995quantal}. \balance Notice in Figure~\ref{fig:diplomacy_med_ate} that the trajectory of the Nash approximation is not monotonic; for example, see the kink around $2000$ iterations where bots $10$ and $20$ swap rank. The continuum of QREs from $\tau=\infty$ to $\tau=0$ is known to be complex, providing further reason to carefully estimate ADI and its gradients.
\paragraph{Convergence to a Local Optimum} One can also see from Figure~\ref{fig:dipmedexp_ate} that $^y\texttt{ATE}^{0.0}$ has converged to a suboptimal local minimum in the energy landscape. This is likely due to the instability and bias in the gradients computed without any entropy bonus; notice the erratic behavior of its ADI within the first $2000$ iterations.
\subsection{Very Large-Scale re. \S\ref{scale}} \label{very_large_scale} Finally, we repeat the above analysis with all $21$ bots. To estimate winrate within $0.015$ (ADI within $0.03$) of the true value with probability $95\%$, a Chebyshev bound implies approximately $150$ samples are needed. The symmetric payoff tensor contains $888,030$ unique entries, requiring over $100$ million games in total. Note that ignoring the symmetry would require simulating $150 \times 21^7 \approx 270$ billion games and computing over a trillion payoffs ($\times7$ players). Simulating all games, as we show, is unnecessarily wasteful, and just storing the entire payoff tensor in memory, let alone computing with it, would be prohibitive without special permutation-invariant data structures ($\approx 50$GB with \texttt{float32}).
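The counts above follow from simple combinatorics. Purely as an illustrative sanity check (it assumes nothing beyond the player and action counts stated in the text), the following short Python snippet reproduces them:
\begin{lstlisting}[language=Python]
# Sanity check of the tensor-size arithmetic quoted above; the only inputs
# are the player/action counts stated in the text.
from math import comb

n, m = 7, 21                        # players, actions (bots) per player

# Unique entries of a symmetric payoff tensor = multisets of size n from m items.
unique = comb(m + n - 1, n)
print(unique)                       # 888030
print(150 * unique)                 # 133,204,500 games (over 100 million)

# Naive alternative that ignores symmetry.
print(150 * m**n)                   # ~2.7e11 games (~270 billion)
print(150 * m**n * n)               # ~1.9e12 payoffs (over a trillion)
print(n * m**n * 4 / 1e9)           # ~50.4 GB for a dense float32 tensor

# The 5-bot meta-game of the previous subsection.
print(comb(5 + n - 1, n), 223 * comb(5 + n - 1, n))   # 330 unique entries, 73,590 games
\end{lstlisting}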
In Figure~\ref{fig:dipbignash_all_ate}, ADIDAS with $\eta_x = \eta_y = 0.1$ and $\epsilon = 0.001$ achieves a stable ADI below $0.03$ in less than $100$ iterations with $10$ samples of joint play per iteration and each game repeated $7$ times ($< 2.5\%$ of the games run by the naive alternative). As expected, bots later in training (darker lines) have higher mass under the Nash distribution computed by $^y\texttt{ATE}^{auto}$. Runtime is discussed in Appx.~\ref{app:runtime}. \paragraph{Importance of Entropy Bonus} Figure~\ref{fig:dipbignash_all_ate} shows how the automated annealing mechanism ($^y\texttt{ATE}^{auto}$) seeks to maintain entropy regularization near a ``sweet spot" \textemdash too little entropy ($^y\texttt{ATE}^{0.0}$) results in an erratic evolution of the Nash approximation and too much entropy ($^y\texttt{ATE}^{1.0}$) prevents significant movement from the initial uniform distribution. Figure~\ref{fig:dipbigexp_ate2} shows that ADIDAS with the automated annealing mechanism meant to trace the QRE continuum achieves a lower ADI than its fixed temperature variants. \begin{figure} \caption{7-player, 21-action symmetric Nash ($x_t$) } \label{fig:dipbignash_all_ate} \caption{ADI Estimate } \label{fig:dipbigexp_ate2} \caption{(\subref{fig:dipbignash_all_ate}) Evolution of the symmetric Nash approximation returned by ADIDAS for the $7$-player, $21$-bot Diplomacy meta-game; (\subref{fig:dipbigexp_ate2}) ADI estimated from auxiliary variable $y_t$. Black vertical lines indicate the temperature $\tau$ was annealed.} \label{fig:diplomacy_big_ate} \end{figure} In the appendix, we perform additional ablation studies (e.g., no entropy, annealing), measure accuracy of $\hat{\mathcal{L}}^{\tau}_{adi}$, compare against more algorithms on other domains, and consider Tsallis entropy. \section{Conclusion} Existing algorithms either converge to Nash, but do not scale to large games or scale to large games, but do not converge to Nash. We proposed an algorithm to fill this void that queries necessary payoffs through sampling, obviating storing the full payoff tensor in memory. ADIDAS is principled and shown empirically to approximate Nash in large-normal form games. \section*{Checklist} \begin{enumerate} \item For all authors... \begin{enumerate} \item Do the main claims made in the abstract and introduction accurately reflect the paper's contributions and scope? \answerYes{}. \item Did you describe the limitations of your work? \answerYes{} Summarized succinctly in Figure~\ref{fig:manynasherror}. \item Did you discuss any potential negative societal impacts of your work? \answerNo{}. We see no obvious negative impact. \item Have you read the ethics review guidelines and ensured that your paper conforms to them? \answerYes{}. \end{enumerate} \item If you are including theoretical results... \begin{enumerate} \item Did you state the full set of assumptions of all theoretical results? \answerYes{}. See Appendix~\ref{app:conv}. \item Did you include complete proofs of all theoretical results? \answerYes{}. See Appendix~\ref{app:conv}. \end{enumerate} \item If you ran experiments... \begin{enumerate} \item Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? \answerYes{}. We include pseudocode and two key methods in python in the appendix. All games are available via open source libraries (OpenSpiel, gambit, and GAMUT) or referenced directly in papers (e.g., modified Shapley's game). 
\item Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? \answerYes{}. \item Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? \answerYes{}. We report standard error of the mean. \item Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? \answerYes{}. See \S\ref{exp} and Appx.~\ref{app:runtime}. \end{enumerate} \item If you are using existing assets (e.g., code, data, models) or curating/releasing new assets... \begin{enumerate} \item If your work uses existing assets, did you cite the creators? \answerYes{}. \item Did you mention the license of the assets? \answerNo{}. The OpenSpiel (\href{https://github.com/deepmind/open_spiel/blob/master/LICENSE}{Apache}), gambit (\href{http://www.gambit-project.org/gambit13/index.html}{GNU license}), and GAMUT (\href{http://gamut.stanford.edu/}{Apache license on download page}) licenses are clearly stated on their websites. \item Did you include any new assets either in the supplemental material or as a URL? \answerNo{}. \item Did you discuss whether and how consent was obtained from people whose data you're using/curating? \answerNA{}. \item Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? \answerNA{}. \end{enumerate} \item If you used crowdsourcing or conducted research with human subjects... \begin{enumerate} \item Did you include the full text of instructions given to participants and screenshots, if applicable? \answerNA{}. \item Did you describe any potential participant risks, with links to Institutional Review Board (IRB) approvals, if applicable? \answerNA{}. \item Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? \answerNA{}. \end{enumerate} \end{enumerate} \onecolumn \appendix \appendixpage \tableofcontents \addtocontents{toc}{\protect\setcounter{tocdepth}{2}} \section{Runtime} \label{app:runtime} We briefly discussed runtime of ADIDAS in the main body within the context of the Colonel Blotto game. The focus of the paper is on the divide between algorithms that can solve for Nash in any reasonable amount of time (e.g., $\approx 3$ minutes) and those that cannot (e.g., GW with 53,000 hours). The modified Shapley's game and D7-Covariant game from GAMUT are both smaller than the Blotto game, so we omitted a runtime discussion for these. The Diplomacy experiment required simulating Diplomacy games on a large shared compute cluster with simulated games taking anywhere from 3 minutes to 3 hours. Games were simulated at each iteration of ADIDAS asynchronously using a pool of 1000 workers (4 CPUs per worker, 1 worker per game); the Nash approximate $x_t$ was updated separately on a single CPU. The main computational bottleneck in this experiment was simulating the games themselves, rather than computing gradients from those games. Therefore, the number of games simulated (entries accessed in the payoff tensor) is a realistic metric of algorithmic efficiency. \section{Two vs More Than Two Player Games} \label{app:beyondtwo} An $n$-player game for all $n \ge 3$ can be reduced in polynomial time to a $2$-player game such that the Nash equilibria of the $2$-player game can be efficiently used to compute approximate Nash equilibria of the $n$-player game~\citep{daskalakis2009complexity,chen2006settling,etessami2010complexity}. 
\section{Symmetric Nash for Symmetric Games} \label{app:sym} Note that a symmetric Nash equilibrium is guaranteed to exist for a finite, symmetric normal-form game~\cite{fey2012symmetric}. One of the reasons we enforce symmetry is that we had Nash-ranking in mind when designing the algorithm and experiments. In that case, for a symmetric meta-game, we desire a symmetric equilibrium so that we have a single ranking to go by for evaluation. If each player in, for example, the 7-player Diplomacy meta-game returned a different distribution at Nash, then we would have to decide which player's side of the Nash to use for ranking.
\section{Convergence of ADIDAS} \label{app:conv} We first establish convergence of the simplified algorithm described in the warm-up and then discuss convergence of our more sophisticated, scalable algorithm ADIDAS.
\subsection{Convergence Warm-up: Full Access to In-Memory Payoff Tensor} \label{app:conv:warmup} The convergence proof for this simple warm-up algorithm relies heavily on the detailed examination of the continuum of QREs proposed in~\cite{mckelvey1995quantal} and further analyzed in~\cite{turocy2005dynamic}. The Theorem presented below is essentially a succinct repetition of one of their results. \begin{assumption}[No Principal Branching] \label{nobranches} The continuum of QREs from the uniform Nash to the limiting logit equilibrium is unique and contains no branching points. \end{assumption} \begin{assumption}[No Turning Points] \label{noturning} The continuum of QREs from the uniform Nash to the limiting logit equilibrium proceeds along a path with monotonically decreasing (increasing) $\tau$ ($\lambda$). \end{assumption} \begin{assumption}[Bounded sensitivity of QRE to temperature] \label{sensitivity} The shift in location of the QRE is upper bounded by an amount proportional to the increase in inverse temperature: $||x^*_{\lambda + \Delta \lambda} - x^*_{\lambda}|| \le \sigma \Delta \lambda.$ \end{assumption} \begin{assumption}[Bound on BoA's of QRE's] \label{boa} Under gradient descent dynamics, the basin of attraction for any quantal response equilibrium, $x^*_{\lambda} = $ QRE$_{\tau=\lambda^{-1}}$, contains a ball of radius $r$. Formally, assuming $x_{t+1} \leftarrow x_t - \eta_t g_t$ with $g_t = \nabla_x \mathcal{L}^{\tau}_{adi}(x_t)$, $\eta_t$ a square-summable but not summable step size (e.g., $\propto t^{-1}$), and given $x_0 \in B(x^*_{\lambda}, r)$, there exists a $T$ such that $x_{t \ge T} \in B(x^*_{\lambda}, \epsilon)$ for any $\epsilon$. \end{assumption} \begin{reptheorem}{conv_basic} Assume the QREs along the homotopy path have bounded sensitivity to $\lambda$ given by a parameter $\sigma$ (Assumption~\ref{sensitivity}), and basins of attraction with radii lower bounded by $r$ (Assumption~\ref{boa}). Let the step size $\Delta \lambda \le \sigma^{-1} (r - \epsilon)$ with tolerance $\epsilon$. And let $T^*$ be the supremum over all $T$ such that Assumption~\ref{boa} is satisfied for any inverse temperature $\lambda \ge \Delta \lambda$. Then, assuming gradient descent for \texttt{OPT}, \Algref{alg_warmup} converges to the limiting logit equilibrium $x^*_{\lambda=\infty} = x^*_{\tau=0}$ in the limit as $T_{\lambda} \rightarrow \infty$. \end{reptheorem} \begin{proof} Recall~\citet{mckelvey1995quantal} proved there exists a unique continuum of QREs tracing from infinite temperature ($\lambda=0$) to zero temperature ($\lambda=\infty$) for almost all games. Assumption~\ref{nobranches} effectively assumes the game in question is one from that class.
\Algref{alg_warmup} initializes $\lambda=0$ and $x$ to the uniform distribution, which is the exact QRE at that temperature. Next, in step~\ref{line:anneal}, the temperature is annealed by an amount that, by Assumption~\ref{sensitivity}, ensures $||x^*_{\lambda + \Delta \lambda} - x|| = ||x^*_{\lambda + \Delta \lambda} - x^*_{\lambda}|| \le r - \epsilon$, where $r$ is a minimal radius of the basin of attraction for any QRE. Then, in step~\ref{line:descend}, \texttt{OPT} returns an $\epsilon$-approximation, $x$, to the new QRE after $T^*$ steps, which implies $||x - x^*_{\lambda + \Delta \lambda}|| \le \epsilon$. The proof then continues by induction. The inverse temperature is increased by an amount ensuring the next QRE is within $r - \epsilon$ of the previous one. The current approximation $x$ is within $\epsilon$ of the previous QRE; therefore, it is within $r - \epsilon + \epsilon = r$ of the next QRE, i.e., it is in its basin of attraction. The inverse temperature $\lambda$ is thus always increased by an amount such that the current approximation remains within the basin of attraction of the next QRE. Therefore, in the limit of infinite annealing steps, $x$ converges to the QRE with zero temperature, known as the limiting logit equilibrium. \end{proof}
\subsection{Convergence Sketch: Sampling the Payoff Tensor} \label{app:conv:sto} We do not rigorously prove any theoretical convergence result for the stochastic setting. A convergence proof is complicated by the fact that despite our efforts to reduce gradient bias, some bias will always remain. Although we make assumptions that ensure each iterate begins in the basin of attraction of the QRE of interest, even proving convergence of a hypothetically unbiased stochastic gradient descent to that specific local minimum could only be guaranteed with high probability (dependent on step size). Our goal was to outline a sensible argument that ADIDAS would converge to Nash asymptotically. Our claim of convergence stands on the shoulders of the work of~\citet{mckelvey1995quantal}, who proved that there exists a unique path $P$ of Quantal Response Equilibria (QREs), parameterized by temperature $\tau$, which begins at the uniform distribution Nash ($\tau=\infty$) and ends at the limiting logit equilibrium ($\tau=0$). \citet{turocy2005dynamic} solves for this path explicitly by solving the associated initial value problem (differential equation) in which $t=\frac{1}{\tau}$ takes the place of the usual independent variable, time. By numerically integrating this differential equation with infinitesimally small steps $dt$,~\citet{turocy2005dynamic} can ensure the iterates progress along the path towards the limiting logit equilibrium (LLE). ADIDAS takes a conceptually similar approach. First, it initializes to the uniform equilibrium. Then it takes a small step $\Delta t$. In practice, the initial step we take increases $t$ from $0$ to $1$, which worked well enough, but one can imagine taking a smaller step, e.g., $0$ to $10^{-9}$. After such a small step, the QRE of the game with lower temperature will not have moved far from the initial uniform equilibrium. Therefore, we can minimize ADI to solve for the new QRE, thereby recovering a point on the unique path $P$. The fact that we can only access the payoff tensor by samples means that we may need to sample many times ($s$ times) to obtain an accurate Monte Carlo estimate of the gradient of ADI.
By repeating this process of decaying the temperature ($\tau_k > \tau_{k+1}$ $\Leftrightarrow$ $t_k < t_{k+1}$) and recovering the new QRE with gradient descent (possibly $n_k$ steps) on ADI ($\boldsymbol{x}_t = \boldsymbol{x}(\tau_k) \rightarrow \boldsymbol{x}_{t+n_k} = \boldsymbol{x}(\tau_{k+1})$), we too can follow $P$. In the limit as $s$, $n_k$, and $N=\sum_k n_k$ go to infinity and $\Delta t$ goes to zero, the issues identified in Figure~\ref{fig:manynasherror} are mitigated and we recover the LLE. Note, $n_k$ is effectively increased by reducing $\epsilon$ in~\Algref{alg_saped}. We claim ``ADIDAS is the first that can approximate Nash in large many-player, many-action normal-form games" because, in principle, it is technically sound according to the argument just presented but also efficient (does not require infinite samples in practice) as demonstrated empirically in our experiments. Note that because we only argue ADIDAS is asymptotically convergent (we provide no convergence rates), we do not contradict any Nash complexity results. \section{Deviation Incentive Gradient} \label{gen_pop_exp_grad} We now provide the general form for the ADI gradient for normal form games. \begin{align} \nabla_{x_i} \mathcal{L}_{adi}(\boldsymbol{x}) &= \textcolor{green}{\nabla_{x_i} [u^{\tau}_i(\texttt{BR}(x_{-i}), x_{-i}) - u^{\tau}_i(x_i,x_{-i})]} \nonumber \\ &+ \sum_{j \ne i} \textcolor{blue}{\nabla_{x_i} [u^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) - u^{\tau}_j(x_j,x_{-j})]}. \end{align} \begin{align} \textcolor{green}{\nabla_{x_i} [u^{\tau}_i(\texttt{BR}(x_{-i}), x_{-i}) - u^{\tau}_i(x_i,x_{-i})]} &= \cancelto{0}{J_{x_i}(\texttt{BR}(x_{-i}))}^\top \big( \nabla_{z_i} u^{\tau}_i(z_i, x_{-i}) \vert_{\texttt{BR}_i, x_{-i}} \big) \label{self} \\ &+ \sum_{k \ne i} \cancelto{0}{J_{x_i}(x_{k})}^\top \big( \nabla_{z_{k}} u^{\tau}_i(\texttt{BR}(x_i), z_{-i}) \vert_{\texttt{BR}_i, x_{-i}} \big) \nonumber \\ &- \nabla_{x_i} u^{\tau}_i(x_i, x_{-i}) = \textcolor{green}{- \nabla_{x_i} u^{\tau}_i(x_i, x_{-i})}.\label{self_term_1} \end{align} \begin{align} \textcolor{blue}{\nabla_{x_i} [u^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) - u^{\tau}_j(x_j,x_{-j})]} &= J_{x_i}(\texttt{BR}(x_{-j}))^\top \big( \nabla_{z_j} u^{\tau}_j(z_j, x_{-j}) \vert_{\texttt{BR}_j, x_{-j}} \big) \label{other} \\ &+ \sum_{k \ne j} J_{x_i}(x_{k})^\top \big( \nabla_{z_{k}} u^{\tau}_j(\texttt{BR}(x_{-j}), z_{-j}) \vert_{\texttt{BR}_j, x_{-j}} \big) \nonumber \\ &- \nabla_{x_i} u^{\tau}_j(x_j, x_{-j}) \nonumber \\ &= \textcolor{blue}{J_{x_i}(\texttt{BR}(x_{-j}))^\top \big( \nabla_{z_j} u^{\tau}_j(z_j, x_{-j}) \vert_{\texttt{BR}_j, x_{-j}} \big)} \label{other_term_1} \\ &\textcolor{blue}{+ \big( \nabla_{z_{i}} u^{\tau}_j(\texttt{BR}(x_{-j}), z_{-j}) \vert_{\texttt{BR}_j, x_{-j}} \big)} \label{other_term_2} \\ &\textcolor{blue}{- \nabla_{x_i} u^{\tau}_j(x_j, x_{-j})}. \label{other_term_3} \end{align} For entropy regularized utilities $u^\tau_i = u_i + S^\tau_i$, the policy gradient decomposes as \begin{align} \nabla_{x_i} u^{\tau}_j(x_j, x_{-j}) &= \nabla_{x_i} u_j(x_j, x_{-j}) + \nabla_{x_i} S^{\tau}_j(x_j, x_{-j}). \label{util_reg_decomp} \end{align} \subsection{Gradient of Utility} \label{appx:nfg_grad} Before deriving the ADI gradient, we first show that the partial derivative of the utility can be defined in terms of expected utilities or payoffs. This eases presentation later. 
For example, in a 3-player game, player 1's expected utility is defined as \begin{align} u_1(x_1, x_2, x_3) &= \sum_{a_1 \in \mathcal{A}_1} \sum_{a_2 \in \mathcal{A}_2} \sum_{a_3 \in \mathcal{A}_3} u_1(a_1, a_2, a_3) x_{1 a_1} x_{2 a_2} x_{3 a_3}. \end{align} Taking the derivative with respect to player 1's strategy $x_{1 a_1}$, i.e., the probability of player 1 specifically playing action $a_1$, we find \begin{align} \frac{\partial u_1}{\partial x_{1 a_1}} &= \sum_{a_2 \in \mathcal{A}_2} \sum_{a_3 \in \mathcal{A}_3} u_1(a_1, a_2, a_3) x_{2 a_2} x_{3 a_3} \\ &= \mathbb{E}_{a_2 \sim x_2, a_3 \sim x_3} [ u_1(a_1, a_2, a_3) ]. \end{align} From here, it should be clear that the full vector of partial derivatives, i.e., the gradient, can be written as \begin{align} \big[ \nabla_{x_1} u_1 \big]_{a_1} = \mathbb{E}_{a_2 \sim x_2, a_3 \sim x_3} [ u_1(a_1, a_2, a_3) ] \,\, \forall a_1 \in \mathcal{A}_1. \end{align} The Jacobian can be defined similarly (e.g., consider differentiating this result w.r.t. $x_{2 a_2}$ for all $a_2$).
\subsection{Tsallis-Entropy} \label{app:tsallis_entropy_derivation} First, we derive gradients assuming utilities are carefully regularized using a Tsallis entropy bonus, $S^\tau_k$, parameterized by \emph{temperature} $\tau=p \in [0, 1]$: \begin{align} S^{\tau}_k(x_k, x_{-k}) &= \frac{s_k}{p+1} ( 1 - \sum_m x_{km}^{p+1} ) = s_k \frac{p}{p+1} \Big[ \overbrace{\frac{1}{p} ( 1 - \sum_m x_{km}^{p+1} )}^{\text{Tsallis entropy}} \Big] \end{align} where $s_k = \Big( \sum_m (\nabla^k_{x_{km}})^{1/p} \Big)^p = ||\nabla^k_{x_k}||_{1/p}$. For Tsallis entropy, we assume payoffs in the game have been offset by a constant so that they are positive. The coefficients in front of the Tsallis entropy term are chosen carefully such that a \emph{best response} for player $k$ can be efficiently computed: \begin{align} \texttt{BR}(x_{-k}) &= \argmax_{z_k \in \Delta} z_k^\top \nabla^k_{x_k} + \frac{s_k}{p+1} ( 1 - \sum_m z_{km}^{p+1} ). \end{align} First note that the maximization problem above is strictly concave for $s_k > 0$ and $p \in (0, 1]$. If these assumptions are met, then any maximum is a unique global maximum. This is a constrained optimization problem, so in general the gradient need not be zero at the global optimum, but in this case it is. We will find a critical point by setting the gradient equal to zero and then prove that this point lies in the feasible set (the simplex) and satisfies second-order conditions for optimality. \begin{align} \nabla_{x_k} u_k^\tau(x_k, x_{-k}) &= \nabla^k_{x_k} - ||\nabla^k_{x_k}||_{1/p} x_{k}^{p} = 0 \\ \implies \texttt{BR}(x_{-k}) &= \Big[ \frac{\nabla^k_{x_k}}{||\nabla^k_{x_k}||_{1/p}} \Big]^{\frac{1}{p}} = \Big[ \frac{\nabla^k_{x_k}}{s_k} \Big]^{\frac{1}{p}} = \frac{(\nabla^k_{x_k})^{\frac{1}{p}}}{||\nabla^k_{x_k}||_{1/p}^{1/p}} = \frac{(\nabla^k_{x_k})^{\frac{1}{p}}}{\sum_m (\nabla^k_{x_{km}})^{\frac{1}{p}}} \in \Delta. \end{align} The critical point is on the simplex as desired. Furthermore, the Hessian at the critical point is negative definite, $H(\texttt{BR}) = -p s_k\texttt{diag}(\texttt{BR}^{p-1}) \prec 0$, so this point is a local maximum (and by strict concavity, a unique global maximum). If the original assumptions are not met and $s_k = 0$, then this necessarily implies $u_k^\tau(x_k, x_{-k}) = 0$ for all $x_k$. As all actions achieve equal payoff, we define the best response in this case to be the uniform distribution.
Likewise, if $p=0$, then the Tsallis entropy regularization term disappears ($1-\sum_m x_{km} = 0$) and the best response is the same as for the unregularized setting. Note in the unregularized setting, we define the best response to be a mixed strategy over all actions achieving the maximal possible utility. \subsubsection{Gradients} \label{grad_calculations} We now derive the necessary derivatives for computing the deviation incentive gradient. \paragraph{Entropy Gradients} \begin{align} (A)\,\, \textcolor{green}{\nabla_{x_i} S^{\tau}_i (x_i, x_{-i})} &= -s_i x_i^p \\\nonumber\\ (B)\,\, \textcolor{blue}{\nabla_{x_i} s_j} &= p \Big( \sum_m (\nabla^j_{x_{jm}})^{1/p} \Big)^{p-1} \Big( \sum_m \frac{1}{p} \nonumber (\nabla^j_{x_{jm}})^{\frac{1}{p}-1} H^j_{j_m i} \Big) \\ &= \Big( \sum_m (\nabla^j_{x_{jm}})^{1/p} \Big)^{p-1} \Big( \sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}-1} H^j_{j_m i} \Big) \nonumber \\ &= \frac{1}{s_j^{\frac{1}{p}-1}} H^j_{ij} (\nabla^j_{x_j})^{\frac{1}{p}-1} = H^j_{ij} \texttt{BR}(x_{-j})^{1-p} \\ &\stackrel{p=1}{=} H^j_{ij} \mathbf{1} \nonumber \\ &\stackrel{p=\frac{1}{2}}{=} H^j_{ij} \frac{\nabla^j_{x_j}}{s_j} \nonumber \\\nonumber\\ (C)\,\, \textcolor{red}{\nabla_{x_i} S^{\tau}_j (x_j, x_{-j})} &= \frac{1}{s_j} S^{\tau}_j(x_j, x_{-j}) \textcolor{blue}{\underbrace{\nabla_{x_i} s_j}_{(B)}} \end{align} \paragraph{Best Response Gradients} \begin{align} (D)\,\, \textcolor{orange}{J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}]} &= J_{x_i} [(H^j_{ji} x_i)^{\frac{1}{p}}] \nonumber \\ &= \frac{1}{p} (\nabla^j_{x_j})^{\frac{1}{p}-1} \odot H^j_{ji} \end{align} where $\odot$ denotes elementwise multiplication or, more generally, broadcast multiplication. In this case, $(\nabla^j_{x_j})^{\frac{1}{p}-1} \in \mathbb{R}^{d_j \times 1}$ is broadcast multiplied by $H^j_{ji} \in \mathbb{R}^{d_j \times d_i}$ to produce a Jacobian matrix in $\mathbb{R}^{d_j \times d_i}$. 
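To make this broadcasting convention concrete, the following minimal \texttt{numpy} sketch (with dimensions and payoff values that are arbitrary and chosen purely for illustration) forms the Jacobian in (D) via broadcasting and verifies one column with a finite-difference check:
\begin{lstlisting}[language=Python]
# Minimal illustration of the broadcast product in (D); dimensions and payoff
# values are arbitrary and chosen only for demonstration.
import numpy as np

d_j, d_i, p = 3, 2, 0.5
H_ji = np.array([[1.0, 2.0],
                 [3.0, 1.0],
                 [2.0, 4.0]])            # player j's payoffs vs player i, (d_j, d_i)
x_i = np.array([0.25, 0.75])             # player i's strategy, (d_i,)
grad_j = H_ji @ x_i                      # payoff gradient for player j, (d_j,)

# (1/p) * grad_j^(1/p - 1), shape (d_j, 1), broadcast against H_ji, shape
# (d_j, d_i), gives the (d_j, d_i) Jacobian of grad_j^(1/p) w.r.t. x_i.
jac = (1.0 / p) * grad_j[:, None] ** (1.0 / p - 1.0) * H_ji

# Finite-difference check of the first column.
eps = 1e-6
fd = ((H_ji @ (x_i + eps * np.eye(d_i)[0])) ** (1.0 / p) - grad_j ** (1.0 / p)) / eps
assert jac.shape == (d_j, d_i) and np.allclose(jac[:, 0], fd, atol=1e-4)
\end{lstlisting}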
\begin{align} (E)\,\, J_{x_i}(\texttt{BR}(x_{-j})) &= \frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}} J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}] - [(\nabla^j_{x_j})^{\frac{1}{p}}] [\frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}}]^2 \nabla_{x_i} [\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}]^\top \nonumber \\ &= \frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}} J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}] - [(\nabla^j_{x_j})^{\frac{1}{p}}] [\frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}}]^2 [\sum_m J_{x_i} [(\nabla^j_{x_{jm}})^{\frac{1}{p}}]]^\top \nonumber \\ &= \frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}} J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}] - [(\nabla^j_{x_j})^{\frac{1}{p}}] [\frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}}]^2 [\mathbf{1}^\top J_{x_i} [(\nabla^j_{x_{j}})^{\frac{1}{p}}]] \nonumber \\ &= [\frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}} \mathbf{I}_j - [(\nabla^j_{x_j})^{\frac{1}{p}}] [\frac{1}{\sum_m (\nabla^j_{x_{jm}})^{\frac{1}{p}}}]^2 \mathbf{1}^\top] J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}] \nonumber \\ &= \frac{1}{||\nabla^j_{x_j}||_{1/p}^{1/p}} [\mathbf{I}_j - \frac{(\nabla^j_{x_j})^{\frac{1}{p}}}{||\nabla^j_{x_j}||_{1/p}^{1/p}} \mathbf{1}^\top] J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}] \nonumber \\ &= \frac{1}{||\nabla^j_{x_j}||_{1/p}^{1/p}} [\mathbf{I}_j - \texttt{BR}(x_{-j}) \mathbf{1}^\top] \textcolor{orange}{\underbrace{J_{x_i} [(\nabla^j_{x_j})^{\frac{1}{p}}]}_{(D)}} \nonumber \\ &= \frac{1}{s_j^{1/p}} [\mathbf{I}_j - \texttt{BR}(x_{-j}) \mathbf{1}^\top] [\frac{1}{p} (\nabla^j_{x_j})^{\frac{1}{p}-1} \odot H^j_{ji}] \end{align} \paragraph{Deviation Incentive Gradient Terms} Here, we derive each of the terms in the ADI gradient. The numbers left of the equations mark which terms we are computing in section~\ref{gen_pop_exp_grad}. \begin{align} (\ref{self_term_1})\,\, \nabla_{x_i} [u^{\tau}_i(\texttt{BR}(x_{-i}), x_{-i}) - u^{\tau}_i(x_i,x_{-i})] &= -\nabla_{x_i} u^{\tau}_i(x_i,x_{-i}) \nonumber \\ &\stackrel{(\ref{util_reg_decomp})+(A)}{=} -(\nabla^i_{x_i} - s_i x_i^p). \end{align} \begin{align} (\ref{other_term_1})\,\, \nabla_{z_j} u^{\tau}_j(z_j, x_{-j}) \vert_{\texttt{BR}_j, x_{-j}} &= [\nabla_{z_j} u_j(z_j, x_{-j}) + \textcolor{green}{\underbrace{\nabla_{z_j} S^{\tau}_j(z_j, x_{-j})}_{(A)}}] \vert_{\texttt{BR}_j, x_{-j}} \nonumber \\ &= [ \nabla^j_{z_j} - s_j z_j^p ] \vert_{\texttt{BR}_j, x_{-j}} \nonumber \\ &= \nabla^j_{x_j} - s_j \texttt{BR}(x_{-j})^p \nonumber \\ &= \nabla^j_{x_j} - \nabla^j_{x_j} = 0. 
\end{align} \begin{align} (\ref{other_term_2})\,\, \nabla_{z_{i}} u^{\tau}_j(\texttt{BR}(x_{-j}), z_{-j}) \vert_{\texttt{BR}_j, x_{-j}} &= [\nabla_{z_i} u_j(\texttt{BR}(x_{-j}), z_{-j}) + \textcolor{red}{\underbrace{\nabla_{z_i} S^{\tau}_j(\texttt{BR}(x_{-j}), z_{-j})}_{(C)}}] \vert_{\texttt{BR}_j, x_{-j}} \nonumber \\ &= [H^j_{ij} \texttt{BR}(x_{-j}) + \frac{1}{s_j} S^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) H^j_{ij} \texttt{BR}(x_{-j})^{1-p}] \nonumber \\ &= H^j_{ij} \texttt{BR}(x_{-j}) [1 + \frac{1}{s_j} S^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) \texttt{BR}(x_{-j})^{-p}] \end{align} \begin{align} (\ref{other_term_3})\,\, \nabla_{z_{i}} u^{\tau}_j(x_j, z_{-j}) \vert_{x_j, x_{-j}} &= [\nabla_{z_i} u_j(x_j, z_{-j}) + \textcolor{red}{\underbrace{\nabla_{z_i} S^{\tau}_j(x_j, z_{-j})}_{(C)}}] \vert_{x_j, x_{-j}} \nonumber \\ &= [H^j_{ij} x_j + \frac{1}{s_j} S^{\tau}_j(x_j, x_{-j}) H^j_{ij} x_j^{1-p}] \nonumber \\ &= H^j_{ij} x_j [1 + \frac{1}{s_j} S^{\tau}_j(x_j, x_{-j}) x_j^{-p}] \end{align} \begin{align} (\ref{other})\,\, &\nabla_{x_i} [u^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) - u^{\tau}_j(x_j,x_{-j})] \\ &\stackrel{(\ref{other_term_1})+(\ref{other_term_2})-(\ref{other_term_3})}{=} H^j_{ij} \texttt{BR}(x_{-j}) [1 + \frac{1}{s_j} S^{\tau}_j(\texttt{BR}(x_{-j}), x_{-j}) \texttt{BR}(x_{-j})^{-p}] - H^j_{ij} x_j [1 + \frac{1}{s_j} S^{\tau}_j(x_j, x_{-j}) x_j^{-p}] \nonumber \\ &= H^j_{ij} (\texttt{BR}(x_{-j}) - x_j) \nonumber \\ &+ \frac{1}{p+1} (1 - ||\texttt{BR}(x_{-j})||_{p+1}^{p+1}) H^j_{ij} \texttt{BR}(x_{-j})^{1-p} - \frac{1}{p+1} (1 - ||x_j||_{p+1}^{p+1}) H^j_{ij} x_j^{1-p} \nonumber \\ &= H^j_{ij} \Big[ (\texttt{BR}(x_{-j}) - x_j) + \frac{1}{p+1} \big( (1 - ||\texttt{BR}(x_{-j})||_{p+1}^{p+1}) \texttt{BR}(x_{-j})^{1-p} - (1 - ||x_j||_{p+1}^{p+1}) x_j^{1-p} \big) \Big]. \nonumber \end{align} \paragraph{Deviation Incentive Gradient (Tsallis Entropy)} Finally, combining the derived terms gives: \begin{align} &\nabla_{x_i} \mathcal{L}_{adi}(\boldsymbol{x}) = -(\nabla^i_{x_i} - x_i^p ||\nabla^i_{x_i}||_{1/p}) \\ &+ \sum_{j \ne i} H^j_{ij} \Big[ (\texttt{BR}(x_{-j}) - x_j) + \frac{1}{p+1} \big( (1 - ||\texttt{BR}(x_{-j})||_{p+1}^{p+1}) \texttt{BR}(x_{-j})^{1-p} - (1 - ||x_j||_{p+1}^{p+1}) x_j^{1-p} \big) \Big]. \nonumber \end{align} Note that in the limit of zero temperature, the gradient approaches \begin{align} \nabla_{x_i} \mathcal{L}_{adi}(\boldsymbol{x}) &\stackrel{p\rightarrow 0^+}{=} -\overbrace{(\nabla^i_{x_i} - \boldsymbol{1} ||\nabla^i_{x_i}||_{\infty})}^{\text{policy gradient}} + \sum_{j \ne i} H^j_{ij} (\texttt{BR}_j - x_j). \end{align} The second component of the policy gradient term is orthogonal to the tangent space of the simplex, i.e., it does not contribute to movement along the simplex so it can be ignored in the limit of $p \rightarrow 0^+$. Also, a Taylor series expansion of the adaptive Tsallis entropy around $p=0$ shows $S^{\tau=p}_k = p s_k \mathcal{H}(x_k) + \mathcal{O}(p^2)$, so the Tsallis entropy converges to a multiplicative constant of the Shannon entropy in the limit of zero entropy. If a similar homotopy exists for Tsallis entropy, maybe its limit point is the same limiting logit equilibrium as with Shannon entropy. We leave this to future research. \textit{Aside: } If you want to increase the entropy, just add a large constant to all payoffs which makes $\texttt{BR} = \frac{1}{d}$ in the limit; it can be shown that $\frac{1}{d}$ then becomes an equilibrium. Notice $\texttt{BR}$ is invariant to multiplicative scaling of the payoffs. 
Therefore, deviation incentive is linear with respect to multiplicative scaling. One idea to decrease entropy is to subtract a constant from the payoffs such that they are still positive but smaller. This can accomplish the desired effect, but will require more samples to estimate random variables with tiny values in their denominator. It seems like it won't be any more efficient than decreasing $p$. \subsection{Shannon Entropy} The Nash equilibrium of utilities regularized with Shannon entropy is well known as the Quantal Response Equilbrium or Logit Equilibrium. The best response is a scaled softmax over the payoffs. We present the relevant intermediate gradients below. \begin{align} S_k^\tau(x_k, x_{-k}) &= -\tau \sum_i x_i \log(x_i) \\ \texttt{BR}(x_{-k}) &= \texttt{softmax}(\frac{\nabla^k_{x_k}}{\tau}) \\ \nabla_{x_i} S^\tau_i(x_i, x_{-i}) &= -\tau (\log(x_i) + 1) \\ \nabla_{x_i} S^\tau_j(x_j, x_{-j}) &= 0 \\ J_{x_i}(\texttt{BR}(x_{-j})) &= \frac{1}{\tau} (\texttt{diag}(\texttt{BR}_j) - \texttt{BR}_j \texttt{BR}_j^\top) H^j_{ji} \\ \nabla_{z_j} u^\tau_j(z_j, x_{-j})\vert_{\texttt{BR}_j, x_{-j}} &= \nabla^j_{x_j} - \tau (\log(\texttt{BR}_j) + 1) \\ \nabla_{x_i} [u^\tau_i(\texttt{BR}(x_{-i}), x_{-i}) - u^\tau_i(x_i,x_{-i})] &= -(\nabla^i_{x_i} - \tau (\log(\texttt{BR}_i) + 1)) \\ \nabla_{z_i} u^\tau_j(\texttt{BR}(x_{-j}), z_{-j})\vert_{\texttt{BR}_j,x_{-j}} &= H^j_{ij} \texttt{BR}(x_{-j}) \\ \nabla_{z_i} u^\tau_j(x_j, z_{-j})\vert_{x_j,x_{-j}} &= H^j_{ij} x_j \end{align} \begin{align} &\nabla_{x_i} [u^\tau_j(\texttt{BR}(x_{-j}), x_{-j}) - u^\tau_j(x_j, x_{-j})] \nonumber \\ &= [\frac{1}{\tau} (\texttt{diag}(\texttt{BR}_j) - \texttt{BR}_j \texttt{BR}_j^\top) H^j_{ji}]^\top (\nabla^j_{x_j} - \tau (\log(\texttt{BR}_j) + 1)) + H^j_{ij} \texttt{BR}(x_{-j}) - H^j_{ij} x_j \end{align} \paragraph{Deviation Incentive Gradient (Shannon Entropy)} Combining the derived terms gives: \begin{align} &\nabla_{x_i} \mathcal{L}_{adi}(\boldsymbol{x}) = -(\nabla^i_{x_i} - \tau (\log(x_i) + 1)) \\ &+ \sum_{j \ne i} [\frac{1}{\tau} (\texttt{diag}(\texttt{BR}_j) - \texttt{BR}_j \texttt{BR}_j^\top) H^j_{ji}]^\top (\nabla^j_{x_j} - \tau (\log(\texttt{BR}_j) + 1)) + H^j_{ij} [\texttt{BR}(x_{-j}) - x_j]. \nonumber \end{align} \section{Ablations} \label{app:ablations} We introduce some additional notation here. A superscript indicates the temperature of the entropy regularizer, e.g., \texttt{QRE}$^{0.1}$ uses $\tau=0.1$ and \texttt{QRE}$^{auto}$ anneals $\tau$ as before. \texttt{PED} minimizes $\mathcal{L}_{adi}$ without any entropy regularization or amortized estimates of payoff gradients (i.e., without the auxiliary variable $y$). \subsection{Bias re. \S\ref{bias_intuition}+\S\ref{ped_grads}} \label{appx:bias} Figure~\ref{fig:gradient_bias_qre} demonstrates there exists a sweet spot for the amount of entropy regularization\textemdash too little and gradients are biased, too much and we solve for the Nash of a game we are not interested in. \begin{figure} \caption{Bias } \label{fig:bias_qre} \caption{Concentration } \label{fig:concentration_qre} \caption{Bias-Bias Tradeoff on Blotto($10$ coins, $3$ fields, $4$ players). Curves are drawn for samples sizes of $n=\{1, 4, 21, 100\}$. Circles denote the minimum of each curve for all $n \in [1, 100]$. Zero entropy regularization results in high gradient bias, i.e., stochastic gradients, $\nabla^{\tau=0}_n$, do not align well with the expected gradient, $\nabla^{\tau=0}_{\infty}$, where $n$ is the number of samples. 
On the other hand, higher entropy regularization allows lower bias gradients but with respect to the entropy regularized utilities, not the unregularized utilities that we are interested in. The sweet spot lies somewhere in the middle. (\subref{fig:bias_qre}) SGD guarantees assume gradients are unbiased, i.e., the mean of sampled gradients is equal to the expected gradient in the limit of infinite samples $n$. Stochastic average deviation incentive gradients violate this assumption, the degree to which depends on the amount of entropy regularization $\tau$ and number of samples $n$; $\tau=10^{-2}$ appears to minimize the gradient bias for $n=100$ although with a nonzero asymptote around $2.5$. (\subref{fig:concentration_qre}) Computing a single stochastic gradient using more samples can reduce bias to zero in the limit. Note samples here refers to joint actions drawn from strategy profile $\boldsymbol{x}$, not gradients as in (\subref{fig:bias_qre}). Additional samples makes gradient computation more expensive, but as we show later, these sample estimates can be amortized over iterations by reusing historical play. Both of the effects seen in (\subref{fig:bias_qre}) and (\subref{fig:concentration_qre}) guide development of our proposed algorithm: (\subref{fig:bias_qre}) suggests using $\tau>0$ and (\subref{fig:concentration_qre}) suggests reusing recent historical play to compute gradients (with $\tau>0$).} \label{fig:gradient_bias_qre} \end{figure} \subsection{Auxiliary $y$ re. \S\ref{amortized_estimates}} The introduction of auxiliary variables $y_i$ are also supported by the results in Figure~\ref{fig:app:blotto_tracking_qre}\textemdash \texttt{QRE}$^{0.0}$ is equivalent to \texttt{PED} and $^y$\texttt{QRE}$^{0.0}$ is equivalent to \texttt{PED} augmented with $y$'s to estimate averages of payoff gradients. \begin{figure} \caption{3-player Blotto } \label{fig:app:blotto3track_qre} \caption{4-player Blotto } \label{fig:app:blotto4track_qre} \caption{Adding an appropriate level of entropy can accelerate convergence (compare PED to \texttt{QRE}$^{0.01}$ in (\subref{fig:blotto4track_qre})). And amortizing estimates of joint play using $y$ can reduce gradient bias, further improving performance (e.g., compare \texttt{QRE}$^{0.00}$ to $^y$\texttt{QRE}$^{0.00}$ in (\subref{fig:app:blotto3track_qre}) or (\subref{fig:app:blotto4track_qre})).} \label{fig:app:blotto_tracking_qre} \end{figure} \subsection{Annealing $\tau$ re. \S\ref{annealing}} ADIDAS includes temperature annealing, replacing the need to preset $\tau$ with instead an ADI threshold $\epsilon$. Figure~\ref{fig:app:blotto_qre} compares this approach against other variants of the algorithm and shows this automated annealing mechanism reaches comparable final levels of ADI. \begin{figure} \caption{3-player Blotto } \label{fig:app:blotto3_qre} \caption{4-player Blotto } \label{fig:app:blotto4_qre} \caption{Average deviation incentive of the symmetric joint strategy $\boldsymbol{x}^{(t)}$ is plotted against algorithm iteration $t$. Despite \texttt{FTRL}'s lack of convergence guarantees, it converges quickly in these games.} \label{fig:app:blotto_qre} \end{figure} \subsection{Convergence re. \S\ref{convergence}} In Figure~\ref{fig:app:blotto_qre}, \texttt{FTRL} and \texttt{RM} achieve low ADI quickly in some cases. \texttt{FTRL} has recently been proven not to converge to Nash, and this is suggested to be true of no-regret algorithms in general~\citep{flokas2020no,mertikopoulos2018cycles}. 
Before proceeding, we demonstrate empirically in Figure~\ref{fig:app:noregfail} that \texttt{FTRL} and \texttt{RM} fail on games where minimizing $\mathcal{L}^{\tau}_{adi}$ still makes progress, even without an annealing schedule. \begin{figure} \caption{Modified-Shapley's } \label{fig:app:modshapley_qre} \caption{GAMUT-D7 } \label{fig:app:gamutd7_qre} \caption{ADIDAS reduces $\mathcal{L}_{adi}$ in both games. In game~(\subref{fig:app:modshapley_qre}), created by~\protect\citet{ostrovski2013payoff}, to better test performance, $\boldsymbol{x}$ is initialized randomly rather than with the uniform distribution because the Nash is at uniform. In~(\subref{fig:app:gamutd7_qre}), computing gradients using full expectations (in black) results in very low ADI. Computing gradients using only single samples plus historical play allows a small reduction in ADI. More samples (e.g., $n=10^3$) allows further reduction.} \label{fig:app:noregfail} \end{figure} \subsection{ADI stochastic estimate} Computing ADI exactly requires the full payoff tensor, so in very large games, we must estimate ADI. Figure~\ref{fig:exp_est_qre} shows how estimates of $\mathcal{L}_{adi}$ computed from historical play track their true expected value throughout training. \begin{figure} \caption{Blotto-3 } \label{fig:exp_bias_blotto3_qre} \caption{Blotto-4 } \label{fig:exp_bias_blotto4_qre} \caption{Accuracy of running estimate of $\mathcal{L}^{\tau}_{adi}$ computed from $y^{(t)}$ (in light coral) versus true value (in red).} \label{fig:exp_est_qre} \end{figure} \section{Experiments Repeated with \texttt{ATE}} \label{ate_exp} \paragraph{Bias re. \S\ref{bias_intuition}+\S\ref{ped_grads}} We first empirically verify that adding an entropy regularizer to the player utilities introduces a trade-off: set entropy regularization too low and the best-response operator will have high bias; set entropy regularization too high and risk solving for the Nash of a game we are not interested in. Figure~\ref{fig:gradient_bias_ate} shows there exists a sweet spot in the middle for moderate amounts of regularization (temperatures). \begin{figure} \caption{Bias } \label{fig:bias_ate} \caption{Concentration } \label{fig:concentration_ate} \caption{Bias-Bias Tradeoff on Blotto($10$ coins, $3$ fields, $4$ players). Curves are drawn for samples sizes of $n=\{1, 4, 21, 100\}$. Circles denote the minimum of each curve for all $n \in [1, 100]$. Zero entropy regularization results in high gradient bias, i.e., stochastic gradients, $\nabla^{\tau=0}_n$, do not align well with the expected gradient, $\nabla^{\tau=0}_{\infty}$, where $n$ is the number of samples. On the other hand, higher entropy regularization allows lower bias gradients but with respect to the entropy regularized utilities, not the unregularized utilities that we are interested in. The sweet spot lies somewhere in the middle. (\subref{fig:bias_ate}) SGD guarantees assume gradients are unbiased, i.e., the mean of sampled gradients is equal to the expected gradient in the limit of infinite samples $n$. Stochastic average deviation incentive gradients violate this assumption, the degree to which depends on the amount of entropy regularization $\tau$ and number of samples $n$; $p=10^{-2}$ appears to minimize the gradient bias for $n=100$ although with a nonzero asymptote around $2.5$. (\subref{fig:concentration_ate}) Computing a single stochastic gradient using more samples can reduce bias to zero in the limit. 
Note samples here refers to joint actions from strategy profile $\boldsymbol{x}$, not gradients as in (\subref{fig:bias_ate}). Additional samples makes gradient computation more expensive, but as we show later, these sample estimates can be amortized over iterations by reusing historical play. Both the effects seen in (\subref{fig:bias_ate}) and (\subref{fig:concentration_ate}) guide development of our proposed algorithm: (\subref{fig:bias_ate}) suggests using $\tau>0$ and (\subref{fig:concentration_ate}) suggests reusing recent historical play to compute gradients (with $\tau>0$).} \label{fig:gradient_bias_ate} \end{figure} \paragraph{Auxiliary $y$ re. \S\ref{amortized_estimates}} The introduction of auxiliary variables $y_i$ are supported by the results in Figure~\ref{fig:blotto_tracking_ate}\textemdash \texttt{ATE}$^{0.0}$ is equivalent to \texttt{PED} and $^y$\texttt{ATE}$^{0.0}$ is equivalent to \texttt{PED} augmented with $y$'s to estimate averages of payoff gradients. \begin{figure} \caption{Blotto-3 } \label{fig:blotto3track_ate} \caption{Blotto-4 } \label{fig:blotto4track_ate} \caption{(\subref{fig:blotto3track_ate}) $3$-player Blotto game; (\subref{fig:blotto4track_ate}) $4$-player Blotto game. Adding an appropriate level of entropy (e.g., $\tau=0.01$) can accelerate convergence (compare PED to \texttt{ATE}$^{0.01}$ in (\subref{fig:blotto4track_ate})). And amortizing estimates of joint play can reduce gradient bias, further improving performance (e.g., compare \texttt{ATE}$^{0.01}$ to $^y$\texttt{ATE}$^{0.01}$ in (\subref{fig:blotto3track_ate}) or (\subref{fig:blotto4track_ate})).} \label{fig:blotto_tracking_ate} \end{figure} In Figure~\ref{fig:blotto_tracking_ate}, we also see a more general relationship between temperature and convergence rate. Higher temperatures appear to result in faster initial convergence ($\mathcal{L}_{adi}$ spikes initially in Figure~\ref{fig:blotto3track_ate} for $\tau<0.1$) and lower variance but higher asymptotes, while the opposite holds for lower temperatures. These results suggest annealing the temperature over time to achieve fast initial convergence and lower asymptotes. Lower variance should also be possible by carefully annealing the learning rate to allow $y$ to accurately perform tracking. Fixed learning rates were used here; we leave investigating learning rate schedules to future work. Figure~\ref{fig:blotto4track_ate} shows how higher temperatures (through a reduction in gradient bias) can result in accelerated convergence. \paragraph{Annealing $\tau$ re. \S\ref{annealing}} ADIDAS includes temperature annealing replacing the need for setting the hyperparameter $\tau$ with instead an ADI threshold $\epsilon$. Figure~\ref{fig:blotto_ate} compares this approach against several other variants of the algorithm and shows this automated annealing mechanism reaches comparable final levels of ADI. \begin{figure} \caption{Blotto-3 } \label{fig:blotto3_ate} \caption{Blotto-4 } \label{fig:blotto4_ate} \caption{(\subref{fig:blotto3_ate}) $3$-player Blotto game; (\subref{fig:blotto4_ate}) $4$-player Blotto game. The maximum a single agent can exploit the symmetric joint strategy $\boldsymbol{x}^{(t)}$ is plotted against algorithm iteration $t$. Despite \texttt{FTRL}'s lack of convergence guarantees, it converges quickly in these Blotto games.} \label{fig:blotto_ate} \end{figure} \paragraph{Convergence re. \S\ref{convergence}} In Figure~\ref{fig:blotto_ate}, \texttt{FTRL} and \texttt{RM} achieve low levels of ADI quickly in some cases. 
\texttt{FTRL} has recently been proven not to converge to Nash, and this is suggested to be true of no-regret algorithms such as \texttt{RM} in general~\citep{flokas2020no,mertikopoulos2018cycles}. Before proceeding, we demonstrate empirically in Figure~\ref{fig:noregfail_ate} that \texttt{FTRL} and \texttt{RM} fail on some games where ADIDAS still makes progress. \begin{figure} \caption{Modified-Shapley's } \label{fig:modshapley_ate} \caption{GAMUT-D7 } \label{fig:gamutd7_ate} \caption{(\subref{fig:modshapley_ate}) Modified-Shapley's; (\subref{fig:gamutd7_ate}) GAMUT-D7. Deviation incentive descent reduces $\mathcal{L}_{adi}$ in both games. In~(\subref{fig:modshapley_ate}), to better test the performance of the algorithms, $x$ is initialized randomly rather than with the uniform distribution because the Nash is at uniform. In~(\subref{fig:gamutd7_ate}), computing ADI gradients using full expectations (in black) results in very low levels of ADI. Computing estimates using only single samples plus historical play allows a small reduction in ADI. More samples (e.g., $n=10^3$) allows further reduction.} \label{fig:noregfail_ate} \end{figure} \paragraph{Large-Scale re \S\ref{scale}} Computing ADI exactly requires the full payoff tensor, so in very large games, we must estimate the ADI. Figure~\ref{fig:exp_est_ate} shows how estimates of $\mathcal{L}_{adi}$ computed from historical play track their true expected value throughout training. \begin{figure} \caption{Blotto-3 } \label{fig:exp_bias_blotto3_ate} \caption{Blotto-4 } \label{fig:exp_bias_blotto4_ate} \caption{Accuracy of running estimate of $\mathcal{L}^{\tau}_{adi}$ computed from $y^{(t)}$ (in light coral) versus true value (in blue).} \label{fig:exp_est_ate} \end{figure} \section{Comparison Against Additional Algorithms} \label{app:morealgs} \subsection{ED and FP Fail} \label{app:xtra_comparisons} We chose not to include Exploitability Descent (ED) or Fictitious Play (FP) in the main body as we considered them to be ``straw men". ED is only expected to converge in 2-player, zero-sum games. FP is non-convergent in some 2-player games as well~\citep{goldberg2013approximation}. \begin{figure} \caption{FP, ED, PED access the full tensor. $^y$ATE$^{auto}$ samples.} \label{fig:fp_ed} \end{figure} We run ED and FP with true expected gradients \& best responses ($s$$=$$\infty$) on the $3$ player game in Figure~\ref{fig:fp_ed} to convince the reader that failure to converge is not due to stochasticity. \subsection{Gambit Solvers} \label{app:gambit_solvers} We ran all applicable gambit solvers on the 4-player, 10-coin, 3-field Blotto game (comand listed below). All solvers fail to return a Nash equilibrium except \texttt{gambit-enumpoly} which returns all $36$ permutations of the following pure, non-symmetric Nash equilibrium: \begin{align} x^* &= [(10,0,0), (10,0,0), (0,10,0), (0,0,10)] \end{align} where each of the four players places 10 coins on one of the three fields. 
\begin{itemize} \item \texttt{gambit-enumpoly} \item \texttt{gambit-gnm} \item \texttt{gambit-ipa} \item \texttt{gambit-liap} \item \texttt{gambit-simpdiv} \item \texttt{gambit-logit} \end{itemize} Command: \begin{lstlisting} timeout 3600s gambit-enumpoly -H < blotto_10_3_4.nfg >> enumpoly.txt; timeout 3600s gambit-gnm < blotto_10_3_4.nfg >> gnm.txt; timeout 3600s gambit-ipa < blotto_10_3_4.nfg >> ipa.txt; timeout 3600s gambit-liap < blotto_10_3_4.nfg >> liap.txt; timeout 3600s gambit-simpdiv < blotto_10_3_4.nfg >> simpdiv.txt; timeout 3600s gambit-logit -m 1.0 -e < blotto_10_3_4.nfg >> logit.txt \end{lstlisting} \section{Additional Game Domains} \label{app:xtra_games} \subsection{Diplomacy Experiments - Subsampled Games} Figure~\ref{fig:dipsub} runs a comparison on $40$ subsampled tensors ($7$ players, $4$ actions each) taken from the $40$ turns of a single Diplomacy match. The four actions selected for each player are sampled from the corresponding player's trained policy. \begin{figure} \caption{Subsampled games.} \label{fig:dipsub} \end{figure} Figure~\ref{fig:diplomacy} runs a comparison on two Diplomacy meta-games, one with 5 bots trained using Fictitious Play and the other with bots trained using Iterated Best Response (IBR) \textemdash these are the same meta-games analyzed in Figure 3 of~\citep{anthony2020learning}. \begin{figure} \caption{Diplomacy FP } \label{fig:dipfp} \caption{Diplomacy IBR } \label{fig:dipibr} \caption{(\subref{fig:dipfp}) FP; (\subref{fig:dipibr}) IBR. The maximum a single agent can exploit the symmetric joint strategy $\boldsymbol{x}^{(t)}$ is plotted against algorithm iteration $t$. Many of the algorithms quickly achieve near zero $\mathcal{L}_{adi}$, so unlike in the other experiments, hyperparameters are selected according to the earliest point at which exploitability falls below $0.01$ with ties split according to the final value.} \label{fig:diplomacy} \end{figure} Figure~\ref{fig:diplomacy_med_ate} demonstrates an empirical game theoretic analysis~\citep{wellman2006methods,jordan2007empirical,wah2016empirical} of a large symmetric $7$-player Diplomacy meta-game where each player elects $1$ of $5$ trained bots to play on their behalf. In this case, the expected value of each entry in the payoff tensor represents a winrate. Each entry can only be estimated by simulating game play, and the result of each game is a Bernoulli random variable. To obtain a winrate estimate within $0.01$ of the true estimate with probability $95\%$, a Chebyshev bound implies more than $223$ samples are needed. The symmetric payoff tensor contains $330$ unique entries, requiring over $74$ thousand games in total. In the experiment below, ADIDAS achieves negligible ADI in less than $7$ thousand iterations with $50$ samples of joint play per iteration ($\approx 5 \times$ the size of the tensor). \subsection{Diplomacy Experiments - Empirical Game Theoretic Analysis} \label{app:more_dip} Figure~\ref{fig:diplomacy_big_all_ate_25} repeats the computation of Figure~\ref{fig:diplomacy_big_ate} with a smaller auxiliary learning rate $\eta_y$ and achieves better results. \begin{figure} \caption{7-player, 21-action symmetric Nash ($x_t$) } \label{fig:dipbignash_all_ate_25} \caption{ADI Estimate } \label{fig:dipbigexp_ate2_25} \caption{(\subref{fig:dipbignash_all_ate_25}) Evolution of the symmetric Nash approximation; (\subref{fig:dipbigexp_ate2_25}) ADI estimated from auxiliary variable $y_t$. Black vertical lines indicate the temperature $\tau$ was annealed.
Auxiliary learning rate $\eta_y = 1/25$. In addition to the change in $\eta_y$ from $1/10$, also note the change in axes limits versus Figure~\ref{fig:diplomacy_big_ate}.} \label{fig:diplomacy_big_all_ate_25} \end{figure} \subsection{El Farol Bar Stage Game} \label{app:elfarol} We compare ADIDAS variants and regret matching in Figure~\ref{fig:elfarol} on the 10-player symmetric El Farol Bar stage game with hyperparameters $n = 10$, $c = 0.7$, $C = nc$, $B = 0$, $S = 1$, $G = 2$ (see Section 3.1, The El Farol stage game in~\citep{whitehead2008farol}). Recall that the homotopy that ADIDAS attempts to trace is displayed in Figure~\ref{fig:el_farol_homotopy} of the main body. \begin{figure} \caption{(10-player, 2-action El Farol stage game) ADIDAS and regret matching both benefit from additional samples. Both find the same unique mixed Nash equilibrium. In this case, regret matching finds it much more quickly.} \label{fig:elfarol} \end{figure} \section{Description of Domains} \label{app:descrip_games} \subsection{Modified Shapley's} \label{mod_shap_def} The modified Shapley's game mentioned in the main body (Figure~\ref{fig:modshapley_qre}) is defined in Table~\ref{tab:mod_shap_def}~\citep{ostrovski2013payoff}. \begin{table}[ht!] \centering \begin{subfigure}[b]{.49\textwidth} \centering \begin{tabular}{|c|c|c|} \hline $1$ & $0$ & $\beta$ \\ \hline $\beta$ & $1$ & $0$ \\ \hline $0$ & $\beta$ & $1$ \\ \hline \end{tabular} \caption{Player A's Payoff Matrix \label{fig:mod_shap_player_A}} \end{subfigure} \begin{subfigure}[b]{.49\textwidth} \centering \begin{tabular}{|c|c|c|} \hline $-\beta$ & $1$ & $0$ \\ \hline $0$ & $-\beta$ & $1$ \\ \hline $1$ & $0$ & $-\beta$ \\ \hline \end{tabular} \caption{Player B's Payoff Matrix \label{fig:mod_shap_player_B}} \end{subfigure} \caption{(\subref{fig:mod_shap_player_A}) Player A; (\subref{fig:mod_shap_player_B}) Player B. We set $\beta=0.5$ in experiments and subtract $-\beta$ from each payoff matrix to ensure payoffs are non-negative; \texttt{ATE} requires non-negative payoffs.} \label{tab:mod_shap_def} \end{table} \subsection{Colonel Blotto} \label{app:blotto} Despite its apparent simplicity, Colonel Blotto is a complex challenge domain and one under intense research~\citep{behnezhad2017faster,behnezhad2018battlefields,behnezhad2019optimal,ahmadinejad2019duels,boix2020multiplayer}. \section{Connections to Other Algorithms} \label{app:alg_connections} \subsection{Consensus Algorithm} \label{app:consensus_connection} ADIDAS with Tsallis entropy and temperature fixed to $\tau=p=1$ recovers the regularizer proposed for the Consensus algorithm~\citep{mescheder2017numerics} plus entropy regularization. To see this, recall from Appx.~\ref{app:tsallis_entropy_derivation} the Tsallis entropy regularizer: \begin{align} S^{\tau=p=1}_k(x_k, x_{-k}) &= \frac{s_k}{2} ( 1 - \sum_m x_{km}^{2} ) \end{align} where $s_k = \Big( \sum_m (\nabla^k_{x_{km}})^{1/1} \Big)^1 = ||\nabla^k_{x_k}||_{1/1}$ is treated as a constant w.r.t. $\boldsymbol{x}$. In the case with $\tau=p=1$, $\texttt{BR}_k = \texttt{BR}(x_{-k}) = \frac{1}{s_k} \nabla^k_{x_{k}}$ where we have assumed the game has been offset by a constant so that it contains only positive payoffs.
Plugging these into the definition of $\mathcal{L}^{\tau}_{adi}$, we find \begin{align} \mathcal{L}^{\tau}_{adi}(\boldsymbol{x}) &= \sum_k u^{\tau}_k(\texttt{BR}_k, x_{-k}) - u^{\tau}_k(x_k,x_{-k}) \\ &\approx \sum_k u_k(\frac{1}{s_k}\nabla^k_{x_k}, x_{-k}) - u_k(x_k, x_{-k}) \\ &= \sum_k \frac{1}{s_k} \underbrace{||\nabla^k_{x_k}||^2}_{\text{consensus regularizer}} - x_k^\top \nabla^k_{x_k}. \end{align} Note the Consensus regularizer can also be arrived at by replacing the best response with a 1-step gradient ascent lookahead, i.e., $\texttt{BR}_k = x_k + \eta \nabla^k_{x_k}$: \begin{align} \mathcal{L}^{\tau}_{adi}(\boldsymbol{x}) &= \sum_k u^{\tau}_k(\texttt{BR}_k, x_{-k}) - u^{\tau}_k(x_k,x_{-k}) \\ &\approx \sum_k u_k(x_k + \eta \nabla^k_{x_k}, x_{-k}) - u_k(x_k, x_{-k}) \\ &= \sum_k (x_k^\top \nabla^k_{x_k}) + \eta ||\nabla^k_{x_k}||^2 - (x_k^\top \nabla^k_{x_k}) \\ &= \sum_k \eta ||\nabla^k_{x_k}||^2. \end{align} \subsection{Exploitability Descent as Extragradient} \label{app:ed_connection} In normal-form games, Exploitability Descent (ED)~\citep{Lockhart19ED} is equivalent to Extragradient~\citep{korpelevich1976extragradient} (or Mirror Prox~\citep{juditsky2011solving}) with an infinite intermediate step size. Recall $\texttt{BR}_k = \argmax_{x \in \Delta^{m_k - 1}} u_k(x_k, x_{-k}) = \argmax_{x \in \Delta^{m_k - 1}} x^\top \nabla^k_{x_k}$. Using the convention that ties between actions result in vectors that distribute a $1$ uniformly over the maximizers, the best response can be rewritten as $\texttt{BR}_k = \lim_{\hat{\eta} \rightarrow \infty} \Pi[x_k + \hat{\eta} \nabla^k_{x_k}]$ where $\Pi$ is the Euclidean projection onto the simplex. Define $F(\boldsymbol{x})$ such that its $k$th component $F(\boldsymbol{x})_k = -\nabla^k_{x_k} = -\nabla^k_{x_k}(x_{-k})$ where we have simply introduced $x_{-k}$ in parentheses to emphasize that player $k$'s gradient is a function of $x_{-k}$ only, and not $x_k$. Equations without subscripts imply they are applied in parallel over the players. ED executes the following update in parallel for all players $k$: \begin{align} x_{k+1} &\leftarrow \Pi[x_k + \eta \nabla_{x_k} \{ u_k(x_k, x_{-k}) \} \vert_{x_{-k} = \texttt{BR}_{-k}}]. \end{align} Define $\hat{x}_k = x_k - \hat{\eta} F(\boldsymbol{x})_k$. And as an abuse of notation, let $\hat{x}_{-k} = x_{-k} - \hat{\eta} F(\boldsymbol{x})_{-k}$. Extragradient executes the same update in parallel for all players $k$: \begin{align} x_{k+1} &\leftarrow \Pi[x_k - \eta F(\Pi[\boldsymbol{x} - \hat{\eta} F(\boldsymbol{x})])_k] \\ &= \Pi[x_k - \eta F(\Pi[\boldsymbol{x} + \hat{\eta} \nabla_{\boldsymbol{x}}])_k] \\ &= \Pi[x_k - \eta F(\texttt{BR})_k] \\ &= \Pi[x_k + \eta \nabla_{x_k} \{ u_k(x_k, x_{-k}) \} \vert_{x_{-k} = \texttt{BR}_{-k}}]. \end{align} Extragradient is known to converge in two-player zero-sum normal form games given an appropriate step size scheme. The main property that Extragradient relies on is monotonicity of the vector function $F$. All two-player zero-sum games induce a monotone $F$, however, this is not true in general of two-player general-sum or games with more players. ED is only proven to converge for two-player zero-sum, but this additional connection provides an additional reason why we do not expect ED to solve many-player general-sum normal-form games, which are the focus of this work. Please see Appx.~\ref{app:xtra_comparisons} for an experimental demonstration. \section{Python Code} \label{app:code} For the sake of reproducibility we have included code in python+numpy. 
\begin{lstlisting}[language=Python, caption=Header.] """ Copyright 2020 ADIDAS Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. """ import numpy as np from scipy import special def simplex_project_grad(g): """Project gradient onto tangent space of simplex.""" return g - g.sum() / g.size \end{lstlisting} \begin{lstlisting}[language=Python, caption=ADIDAS Gradient.] def gradients_qre_nonsym(dist, y, anneal_steps, payoff_matrices, num_players, temp=0., proj_grad=True, exp_thresh=1e-3, lrs=(1e-2, 1e-2), logit_clip=-1e5): """Computes exploitablity gradient and aux variable gradients. Args: dist: list of 1-d np.arrays, current estimate of nash y: list of 1-d np.arrays, current est. of payoff gradient anneal_steps: int, elapsed num steps since last anneal payoff_matrices: dict with keys as tuples of agents (i, j) and values of (2 x A x A) arrays, payoffs for each joint action. keys are sorted and arrays are indexed in the same order. num_players: int, number of players temp: non-negative float, default 0. proj_grad: bool, if True, projects dist gradient onto simplex exp_thresh: ADI threshold at which temp is annealed lrs: tuple of learning rates (lr_x, lr_y) logit_clip: float, minimum allowable logit Returns: gradient of ADI w.r.t. (dist, y, anneal_steps) temperature (possibly annealed, i.e., reduced) unregularized ADI (stochastic estimate) shannon regularized ADI (stochastic estimate) """ # first compute policy gradients and player effects (fx) policy_gradient = [] other_player_fx = [] grad_y = [] unreg_exp = [] reg_exp = [] for i in range(num_players): nabla_i = np.zeros_like(dist[i]) for j in range(num_players): if j == i: continue if i < j: hess_i_ij = payoff_matrices[(i, j)][0] else: hess_i_ij = payoff_matrices[(j, i)][1].T nabla_ij = hess_i_ij.dot(dist[j]) nabla_i += nabla_ij / float(num_players - 1) grad_y.append(y[i] - nabla_i) if temp >= 1e-3: # numerical under/overflow for temp < 1e-3 br_i = special.softmax(y[i] / temp) br_i_mat = (np.diag(br_i) - np.outer(br_i, br_i)) / temp log_br_i_safe = np.clip(np.log(br_i), logit_clip, 0) br_i_policy_gradient = nabla_i - temp * (log_br_i_safe + 1) else: power = np.inf s_i = np.linalg.norm(y[i], ord=power) br_i = np.zeros_like(dist[i]) maxima_i = (y[i] == s_i) br_i[maxima_i] = 1. 
/ maxima_i.sum() br_i_mat = np.zeros((br_i.size, br_i.size)) br_i_policy_gradient = np.zeros_like(br_i) policy_gradient_i = np.array(nabla_i) if temp > 0: log_dist_i_safe = np.clip(np.log(dist[i]), logit_clip, 0) policy_gradient_i -= temp * (log_dist_i_safe + 1) policy_gradient.append(policy_gradient_i) unreg_exp_i = np.max(y[i]) - y[i].dot(dist[i]) unreg_exp.append(unreg_exp_i) entr_br_i = temp * special.entr(br_i).sum() entr_dist_i = temp * special.entr(dist[i]).sum() reg_exp_i = y[i].dot(br_i - dist[i]) + entr_br_i - entr_dist_i reg_exp.append(reg_exp_i) other_player_fx_i = (br_i - dist[i]) other_player_fx_i += br_i_mat.dot(br_i_policy_gradient) other_player_fx.append(other_player_fx_i) # then construct ADI gradient grad_dist = [] for i in range(num_players): grad_dist_i = -policy_gradient[i] for j in range(num_players): if j == i: continue if i < j: hess_j_ij = payoff_matrices[(i, j)][1] else: hess_j_ij = payoff_matrices[(j, i)][0].T grad_dist_i += hess_j_ij.dot(other_player_fx[j]) if proj_grad: grad_dist_i = simplex_project_grad(grad_dist_i) grad_dist.append(grad_dist_i) unreg_exp_mean = np.mean(unreg_exp) reg_exp_mean = np.mean(reg_exp) _, lr_y = lrs if (reg_exp_mean < exp_thresh) and (anneal_steps >= 1 / lr_y): temp = np.clip(temp / 2., 0., 1.) if temp < 1e-3: # consistent with numerical issue above temp = 0. grad_anneal_steps = -anneal_steps else: grad_anneal_steps = 1 return ((grad_dist, grad_y, grad_anneal_steps), temp, unreg_exp_mean, reg_exp_mean) \end{lstlisting} \begin{lstlisting}[language=Python, caption=ADIDAS Gradient (assuming symmetric Nash and with Tsallis entropy).] def gradients_ate_sym(dist, y, anneal_steps, payoff_matrices, num_players, p=1, proj_grad=True, exp_thresh=1e-3, lrs=(1e-2, 1e-2)): """Computes ADI gradient and aux variable gradients. Args: dist: list of 1-d np.arrays, current estimate of nash y: list of 1-d np.arrays, current est. of payoff gradient anneal_steps: int, elapsed num steps since last anneal payoff_matrices: dict with keys as tuples of agents (i, j) and values of (2 x A x A) arrays, payoffs for each joint action. keys are sorted and arrays are indexed in the same order. num_players: int, number of players p: float in [0, 1], Tsallis entropy-regularization proj_grad: bool, if True, projects dist gradient onto simplex exp_thresh: ADI threshold at which p is annealed lrs: tuple of learning rates (lr_x, lr_y) Returns: gradient of ADI w.r.t. (dist, y, anneal_steps) temperature, p (possibly annealed, i.e., reduced) unregularized ADI (stochastic estimate) tsallis regularized ADI (stochastic estimate) """ nabla = payoff_matrices[0].dot(dist) if p >= 1e-2: # numerical under/overflow when power > 100. power = 1. / float(p) s = np.linalg.norm(y, ord=power) if s == 0: br = np.ones_like(y) / float(y.size) # uniform dist else: br = (y / s)**power else: power = np.inf s = np.linalg.norm(y, ord=power) br = np.zeros_like(dist) maxima = (y == s) br[maxima] = 1. 
/ maxima.sum() unreg_exp = np.max(y) - y.dot(dist) br_inv_sparse = 1 - np.sum(br**(p + 1)) dist_inv_sparse = 1 - np.sum(dist**(p + 1)) entr_br = s / (p + 1) * br_inv_sparse entr_dist = s / (p + 1) * dist_inv_sparse reg_exp = y.dot(br - dist) + entr_br - entr_dist entr_br_vec = br_inv_sparse * br**(1 - p) entr_dist_vec = dist_inv_sparse * dist**(1 - p) policy_gradient = nabla - s * dist**p other_player_fx = (br - dist) other_player_fx += 1 / (p + 1) * (entr_br_vec - entr_dist_vec) other_player_fx_translated = payoff_matrices[1].dot( other_player_fx) grad_dist = -policy_gradient grad_dist += (num_players - 1) * other_player_fx_translated if proj_grad: grad_dist = simplex_project_grad(grad_dist) grad_y = y - nabla _, lr_y = lrs if (reg_exp < exp_thresh) and (anneal_steps >= 1 / lr_y): p = np.clip(p / 2., 0., 1.) if p < 1e-2: # consistent with numerical issue above p = 0. grad_anneal_steps = -anneal_steps else: grad_anneal_steps = 1 return ((grad_dist, grad_y, grad_anneal_steps), p, unreg_exp, reg_exp) \end{lstlisting} \end{document}
\begin{document} \date{\today} \author[J. Huizenga]{Jack Huizenga} \address{Department of Mathematics, The Pennsylvania State University, University Park, PA 16802} \email{[email protected]} \subjclass[2010]{Primary: 14J60. Secondary: 14E30, 14J29, 14C05} \keywords{Moduli spaces of sheaves, Hilbert schemes of points, ample cone, Bridgeland stability} \thanks{During the preparation of this article the author was partially supported by a National Science Foundation Mathematical Sciences Postdoctoral Research Fellowship} \title{Birational geometry of moduli spaces of sheaves and Bridgeland stability} \begin{abstract} Moduli spaces of sheaves and Hilbert schemes of points have experienced a recent resurgence in interest in the past several years, due largely to new techniques arising from Bridgeland stability conditions and derived category methods. In particular, classical questions about the birational geometry of these spaces can be answered by using new tools such as the positivity lemma of Bayer and Macr\`i. In this article we first survey classical results on moduli spaces of sheaves and their birational geometry. We then discuss the relationship between these classical results and the new techniques coming from Bridgeland stability, and discuss how cones of ample divisors on these spaces can be computed with these new methods. This survey expands upon the author's talk at the 2015 Bootcamp in Algebraic Geometry preceding the 2015 AMS Summer Research Institute on Algebraic Geometry at the University of Utah. \end{abstract} \maketitle \setcounter{tocdepth}{1} \tableofcontents \section{Introduction} The topic of vector bundles in algebraic geometry is a broad field with a rich history. In the 70's and 80's, one of the main questions of interest was the study of low rank vector bundles on projective spaces $\mathbb{P}^r$. One particularly challenging conjecture in this subject is the following. \begin{conjecture}[Hartshorne \cite{Hartshorne}] If $r\geq 7$ then any rank $2$ bundle on $\mathbb{P}^r_\mathbb{C}$ splits as a direct sum of line bundles. \end{conjecture} The Hartshorne conjecture is connected to the study of subvarieties of projective space of small codimension. In particular, the above statement implies that if $X\subset \mathbb{P}^r$ is a codimension $2$ smooth subvariety and $K_X$ is a multiple of the hyperplane class then $X$ is a complete intersection. Thus, early interest in the study of vector bundles was born out of classical questions in projective geometry. Study of these types of questions led naturally to the study of \emph{moduli spaces} of (semistable) vector bundles, parameterizing the isomorphism classes of (semistable) vector bundles with given numerical invariants on a projective variety $X$ (we will define \emph{semistable} later---for now, view it as a necessary condition to get a good moduli space). As often happens in mathematics, these spaces have become interesting in their own right, and their study has become an entire industry. Beginning in the 80's and 90's, and continuing to today, people have studied the basic questions of the geometry of these spaces. Are they smooth? Irreducible? What do their singularities look like? When is the moduli space nonempty? What are divisors on the moduli space? Especially when $X$ is a curve or surface, satisfactory answers to these questions can often be given. We will survey several foundational results of this type in \S\ref{sec-moduli}-\ref{sec-properties}.
More recently, there has been a great deal of interest in the study of the \emph{birational geometry} of moduli spaces of various geometric objects. Loosely speaking, the goal of such a program is to understand alternate birational models, or \emph{compactifications}, of a moduli space as themselves being moduli spaces for slightly different geometric objects. For instance, the Hassett-Keel program \cite{HassettHyeon} studies alternate compactifications of the Deligne-Mumford compactification $\overline M_g$ of the moduli space of stable curves. Different compactifications can be obtained by studying (potentially unstable) curves with different types of singularities. In addition to being interesting in their own right, moduli spaces provide explicit examples of higher dimensional varieties which can frequently be understood in great detail. We survey the birational geometry of moduli spaces of sheaves from a classical viewpoint in \S\ref{sec-classical}. In the last several years, there has been a great deal of progress in the study of the birational geometry of moduli spaces of sheaves owing to Bridgeland's introduction of the concept of a \emph{stability condition} \cite{bridgeland:stable,bridgelandK3}. Very roughly, there is a complex manifold $\Stab(X)$, the \emph{stability manifold}, parameterizing stability conditions $\sigma$ on $X$. There is a moduli space corresponding to each condition $\sigma$, and the stability manifold decomposes into chambers where the corresponding moduli space does not change as $\sigma$ varies in the chamber. For one of these chambers, the \emph{Gieseker chamber}, the corresponding moduli space is the ordinary moduli space of semistable sheaves. The moduli spaces corresponding to other chambers often happen to be the alternate birational models of the ordinary moduli space. In this way, the birational geometry of a moduli space of sheaves can be viewed in terms of a variation of the moduli problem. In \S\ref{sec-Bridgeland} we will introduce Bridgeland stability conditions, and especially study stability conditions on a surface. We study some basic examples on $\mathbb{P}^2$ in \S\ref{sec-exP2}. Finally, we close the paper in \S\ref{sec-positivity} by surveying some recent results on the computation of ample cones of Hilbert schemes of points and moduli spaces of sheaves on surfaces. \section{Moduli spaces of sheaves}\label{sec-moduli} The definition of a Bridgeland stability condition is motivated by the classical theory of semistable sheaves. In this section we review the basics of the theory of moduli spaces of sheaves, particularly focusing on the case of a surface. The standard references for this material are Huybrechts-Lehn \cite{HuybrechtsLehn} and Le Potier \cite{LePotierLectures}. \subsection{The moduli property} First we state an idealized version of the moduli problem. Let $X$ be a smooth projective variety with polarization $H$, and fix a set of discrete numerical invariants of a coherent sheaf $E$ on $X$. This can be accomplished by fixing the \emph{Hilbert polynomial} $$P_E(m) = \chi(E\otimes \mathcal{O}_X(mH))$$ of the sheaf. A \emph{family of sheaves on $X$ over $S$} is a (coherent) sheaf $\mathcal{E}$ on $X\times S$ which is $S$-flat. For a point $s\in S$, we write $E_s$ for the sheaf $\mathcal{E}|_{X\times \{s\}}$. We say $\mathcal{E}$ is a family of \emph{semistable sheaves of Hilbert polynomial $P$} if $E_s$ is semistable with Hilbert polynomial $P$ for each $s\in S$ (see \S\ref{ssec-semistable} for the definition of semistability). 
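To fix ideas, consider the simplest example: if $X=\mathbb{P}^2$ is polarized by the hyperplane class $H$ and $E=\mathcal{O}_{\mathbb{P}^2}(d)$, then $$P_E(m)=\chi(\mathcal{O}_{\mathbb{P}^2}(d+m))=\frac{(m+d+1)(m+d+2)}{2}.$$ In general, $P_E$ is a polynomial whose degree equals the dimension of the support of $E$, and its coefficients record discrete invariants of $E$ such as the rank (see \S\ref{ssec-semistable}).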
We define a \emph{moduli functor} $$\mathcal M'(P):\mathrm{Sch}^{o}\to \Set$$ by defining $\mathcal M'(P)(S)$ to be the set of isomorphism classes of families of semistable sheaves on $X$ with Hilbert polynomial $P$. We will sometimes just write $\mathcal M'$ for the moduli functor when the polynomial $P$ is understood. Let $p:X\times S\to S$ be the projection. If $\mathcal{E}$ is a family of semistable sheaves on $X$ with Hilbert polynomial $P$ and $L$ is a line bundle on $S$, then $\mathcal{E}\otimes p^*L$ is again such a family. The sheaves $E_s$ and $(\mathcal{E}\otimes p^*L)|_{X\times \{s\}}$ parameterized by any point $s\in S$ are isomorphic, although $\mathcal{E}$ and $\mathcal{E}\otimes p^*L$ need not be isomorphic. We call two families of sheaves on $X$ \emph{equivalent} if they differ by tensoring by a line bundle pulled back from the base, and define a refined moduli functor $\mathcal M$ by modding out by this equivalence relation: $\mathcal M= \mathcal M'/\sim$. The basic question is whether or not $\mathcal M$ can be represented by some nice object, e.g. by a projective variety or a scheme. We recall the following definitions. \begin{definition} A functor $\mathcal F:\mathrm{Sch}^o\to \Set$ is \emph{represented} by a scheme $X$ if there is an isomorphism of functors $\mathcal F \cong \Mor_{\mathrm{Sch}}(-,X)$. A functor $\mathcal F: \mathrm{Sch}^o\to \Set$ is \emph{corepresented} by a scheme $X$ if there is a natural transformation $\alpha:\mathcal F\to \Mor_{\mathrm{Sch}}(-,X)$ with the following universal property: if $X'$ is a scheme and $\beta:\mathcal F\to \Mor_{\mathrm{Sch}}(-,X')$ a natural transformation, then there is a unique morphism $\pi: X\to X'$ such that $\beta$ is the composition of $\alpha$ with the transformation $\Mor_{\mathrm{Sch}}(-,X)\to \Mor_{\mathrm{Sch}}(-,X')$ induced by $\pi$. \end{definition} \begin{remark} Note that if $\mathcal F$ is represented by $X$ then it is also corepresented by $X$. If $\mathcal F$ is represented by $X$, then $\mathcal F(\Spec\mathbb{C}) \cong \Mor_{\mathrm{Sch}}(\Spec \mathbb{C},X)$. That is, the points of $X$ are in bijective correspondence with $\mathcal F(\Spec \mathbb{C})$. This need not be true if $\mathcal F$ is only corepresented by $X$. If $\mathcal F$ is corepresented by $X$, then $X$ is unique up to a unique isomorphism. \end{remark} We now come to the basic definition of moduli space of sheaves. \begin{definition} A scheme $M(P)$ is a \emph{moduli space of semistable sheaves with Hilbert polynomial $P$} if $M(P)$ corepresents $\mathcal M(P)$. It is a \emph{fine moduli space} if it represents $\mathcal M(P)$. \end{definition} The most immediate consequence of $M$ being a moduli space is the existence of the \emph{moduli map}. Suppose $E$ is a family of semistable sheaves on $X$ parameterized by $S$. Then we obtain a morphism $S\to M$ which intuitively sends $s\in S$ to the isomorphism class of the sheaf $E_s$. In the special case when the base $\{s\}$ is a point, a family over $\{s\}$ is the isomorphism class of a single sheaf, and the moduli map $\{s\}\to M$ sends that class to a corresponding point. The compatibilities in the definition of a natural transformation ensure that in the case of a family $\mathcal{E}$ parameterized by a base $S$ the image in $M$ of a point $s\in S$ depends only on the isomorphism class of the sheaf $E_s$ parameterized by $s$. In the ideal case where the moduli functor $\mathcal M$ has a fine moduli space, there is a universal sheaf $\mathcal U$ on $X$ parameterized by $M$. 
We have an isomorphism $$\mathcal M(M)\cong \Mor_\mathrm{Sch}(M,M)$$ and the distinguished identity morphism $M\to M$ corresponds to a family $\mathcal U$ of sheaves parameterized by $M$ (strictly speaking, $\mathcal U$ is only well-defined up to tensoring by a line bundle pulled back from $M$). This universal sheaf has the property that if $\mathcal{E}$ is a family of semistable sheaves on $X$ parameterized by $S$ and $f:S\to M$ is the moduli map, then $\mathcal{E}$ and $({\mathrm{id}}_X\times f)^*\mathcal U$ are equivalent. \subsection{Issues with a naive moduli functor} In this subsection we give some examples to illustrate the importance of the as-yet-undefined \emph{semistability} hypothesis in the definition of the moduli functor. Let $\mathcal M^n$ be the \emph{naive} moduli functor of (flat) families of coherent sheaves with Hilbert polynomial $P$ on $X$, omitting any semistability hypothesis. We might hope that this functor is (co)representable by a scheme $M^n$ with some nice properties, such as the following. \begin{enumerate} \item $M^n$ is a scheme of finite type. \item The points of $M^n$ are in bijective correspondence with isomorphism classes of coherent sheaves on $X$ with Hilbert polynomial $P$. \item A family of sheaves over a smooth punctured curve $C-\{\mathrm{pt}\}$ can be uniquely completed to a family of sheaves over $C$. \end{enumerate} However, unless some restrictions are imposed on the types of sheaves which are allowed, all three hopes will fail. Properties (2) and (3) will also typically fail for semistable sheaves, but this failure occurs in a well-controlled way. \begin{example} Consider $X=\mathbb{P}^1$, and let $P=P_{\mathcal{O}_{\mathbb{P}^1}^{\oplus 2}} = 2m+2$ be the Hilbert polynomial of the rank $2$ trivial bundle. Then for any $n\geq 0$, the bundle $$\mathcal{O}_{\mathbb{P}^1}(n)\oplus \mathcal{O}_{\mathbb{P}^1}(-n)$$ also has Hilbert polynomial $2m+2$, and $h^0(\mathcal{O}_{\mathbb{P}^1}(n)\oplus \mathcal{O}_{\mathbb{P}^1}(-n))=n+1$. If there is a moduli scheme $M^n$ parameterizing all sheaves on $\mathbb{P}^1$ of Hilbert polynomial $P$, then $M^n$ cannot be of finite type. Indeed, the loci $$W_n = \{E : h^0(E)\geq n\} \subset M(P)$$ would then form an infinite decreasing chain of closed subschemes of $M(P)$. \end{example} \begin{example}\label{ex-Sequiv1} Again consider $X=\mathbb{P}^1$ and $P=2m+2$. Let $S = \Ext^1(\mathcal{O}_{\mathbb{P}^1}(1),\mathcal{O}_{\mathbb{P}^1}(-1))=\mathbb{C}$. For $s\in S$, let $E_s$ be the sheaf $$0\to \mathcal{O}_{\mathbb{P}^1}(-1)\to E_s \to \mathcal{O}_{\mathbb{P}^1}(1)\to 0$$ defined by the extension class $s$. One checks that if $s\neq 0$ then $E_s \cong \mathcal{O}_{\mathbb{P}^1}^{\oplus 2}$, but the extension is split for $s=0$. It follows that the moduli map $S\to M^n$ must be constant, so $\mathcal{O}_{\mathbb{P}^1}\oplus \mathcal{O}_{\mathbb{P}^1}$ and $\mathcal{O}_{\mathbb{P}^1}(1)\oplus \mathcal{O}_{\mathbb{P}^1}(-1)$ are identified in the moduli space $M^n$. \end{example} \begin{example}\label{ex-Sequiv} Suppose $X$ is a smooth variety and $F$ is a coherent sheaf with $\dim\Ext^1(F,F)\geq 1$. Let $S\subset \Ext^1(F,F)$ be a $1$-dimensional subspace, and for any $s\in S$ let $E_s$ be the corresponding extension of $F$ by $F$. Then if $s,s'\in S$ are both not zero, we have $$E_s\cong E_{s'} \not\cong E_0 = F\oplus F.$$ As in the previous example, we see that $F\oplus F$ and a nontrivial extension of $F$ by $F$ must be identified in $M^n$. 
Therefore any two extensions of $F$ by $F$ must also be identified in $M^n$. \end{example} If $F$ is semistable, then Example \ref{ex-Sequiv} is an example of a nontrivial family of \emph{$S$-equivalent} sheaves. A major theme of this survey is that $S$-equivalence is the main source of interesting birational maps between moduli spaces of sheaves. \subsection{Semistability}\label{ssec-semistable} Let $E$ be a coherent sheaf on $X$. We say that $E$ is \emph{pure} of dimension $d$ if the support of $E$ is $d$-dimensional and every nonzero subsheaf of $E$ has $d$-dimensional support. \begin{remark} If $\dim X=n$, then $E$ is pure of dimension $n$ if and only if $E$ is torsion-free. \end{remark} If $E$ is pure of dimension $d$ then the Hilbert polynomial $P_E(m)$ has degree $d$. We write it in the form $$P_E(m)=\alpha_d(E) \frac{m^d}{d!}+\cdots,$$ and define the \emph{reduced Hilbert polynomial} by $$p_E(m) = \frac{P_E(m)}{\alpha_d(E)}.$$ In the principal case of interest where $d=n=\dim X$, Riemann-Roch gives $\alpha_n(E) = r(E)H^n$ where $r(E)$ is the rank, and $$p_E(m) = \frac{P_E(m)}{r(E)H^n}.$$ \begin{definition} A sheaf $E$ is \emph{(semi)stable} if it is pure of dimension $d$ and any proper subsheaf $F\subset E$ has $$p_F \leqpar p_E,$$ where polynomials are compared at large values. That is, $p_F<p_E$ means that $p_F(m)<p_E(m)$ for all $m\gg 0$. \end{definition} The above notion of stability is often called Gieseker stability, especially when a distinction from other forms of stability is needed. The foundational result in this theory is that Gieseker semistability is the correct extra condition on sheaves to give well-behaved moduli spaces. \begin{theorem}[{\cite[Theorem 4.3.4]{HuybrechtsLehn}}] Let $(X,H)$ be a smooth, polarized projective variety, and fix a Hilbert polynomial $P$. There is a projective moduli scheme of semistable sheaves on $X$ of Hilbert polynomial $P$. \end{theorem} While the definition of Gieseker stability is compact, it is frequently useful to use the Riemann-Roch theorem to make it more explicit. We spell this out in the case of a curve or surface. We define the \emph{slope} of a coherent sheaf $E$ of positive rank on an $n$-dimensional variety by $$\mu(E) = \frac{c_1(E).H^{n-1}}{r(E)H^n}.$$ \begin{example}[Stability on a curve] Suppose $C$ is a smooth curve of genus $g$. The Riemann-Roch theorem asserts that if $E$ is a coherent sheaf on $C$ then $$\chi(E) = c_1(E)+ r(E)(1-g).$$ Polarizing $C$ with $H=p$ a point, we find $$P_E(m)=\chi(E(m)) = c_1(E(m))+r(E)(1-g)=r(E)m +(c_1(E)+r(E)(1-g)),$$ and so $$p_E(m) = m+\frac{c_1(E)}{r(E)}+(1-g).$$ We conclude that if $F\subset E$ then $p_F \leqpar p_E$ if and only if $\mu(F)\leqpar \mu(E)$. \end{example} \begin{example}[Stability on a surface]\label{ex-stabilitySurface} Let $X$ be a smooth surface with polarization $H$, and let $E$ be a sheaf of positive rank. We define the \emph{total slope} and \emph{discriminant} by $$\nu(E) = \frac{c_1(E)}{r(E)}\in H^2(X,\mathbb{Q}) \qquad \textrm{and} \qquad \Delta(E) = \frac{1}{2}\nu(E)^2-\frac{\ch_2(E)}{r}\in \mathbb{Q}.$$ With this notation, the Riemann-Roch theorem takes the particularly simple form $$\chi(E) = r(E)(P(\nu(E))-\Delta(E)),$$ where $P(\nu)= \chi(\mathcal{O}_X)+\frac{1}{2}\nu(\nu-K_X)$ (see \cite{LePotierLectures}). 
The total slope and discriminant behave well with respect to tensor products: if $E$ and $F$ are locally free then \begin{align*}\nu(E\otimes F) &= \nu(E)+\nu(F)\\ \Delta(E\otimes F)&= \Delta(E) + \Delta(F).\end{align*} Furthermore, $\Delta(L) = 0$ for a line bundle $L$; equivalently, in the case of a line bundle the Riemann-Roch formula is $\chi(L) = P(c_1(L))$. Then we compute \begin{align*} \chi(E(m)) &= r(E)(P(\nu(E)+mH)-\Delta(E))\\ &= r(E)(\chi(\mathcal{O}_X)+\frac{1}{2}(\nu(E)+mH)(\nu(E)+mH-K_X)-\Delta(E))\\ &= r(E)(P(\nu(E))+\frac{1}{2}(mH)^2+mH.(\nu(E)-\frac{1}{2}K_X)-\Delta(E))\\ &= \frac{r(E)H^2}{2}m^2+r(E)H.(\nu(E)-\frac{1}{2}K_X)m + \chi(E), \end{align*} so $$p_E(m) = \frac{1}{2}m^2+\frac{H.(\nu(E)-\frac{1}{2}K_X)}{H^2}m+\frac{\chi(E)}{r(E)H^2}.$$ Now if $F\subset E$, we compare the coefficients of $p_F$ and $p_E$ lexicographically to determine when $p_F \leqpar p_E$. We see that $p_F \leqpar p_E$ if and only if either $\mu(F) < \mu(E)$, or $\mu(F)=\mu(E)$ and $$\frac{\chi(F)}{r(F)H^2}\leqpar \frac{\chi(E)}{r(E)H^2}.$$ \end{example} \begin{example}[Slope stability]\label{ex-slopestability} The notion of \emph{slope semistability} has also been studied extensively and frequently arises in the study of Gieseker stability. We say that a torsion-free sheaf $E$ on a variety $X$ with polarization $H$ is \emph{$\mu$-(semi)stable} if every subsheaf $F\subset E$ of strictly smaller rank has $\mu(F)\leqpar \mu(E)$. As we have seen in the curve and surface case, the coefficient of $m^{n-1}$ in the reduced Hilbert polynomial $p_E(m)$ is just $\mu(E)$ up to adding a constant depending only on $(X,H)$. This observation gives the following chain of implications: $$\mu\textrm{-stable} \Rightarrow \textrm{stable} \Rightarrow \textrm{semistable} \Rightarrow \mu\textrm{-semistable}.$$ While Gieseker (semi)stability gives the best moduli theory and is therefore the most common to work with, it is often necessary to consider these various other forms of stability to study ordinary stability. \end{example} \begin{example}[Elementary modifications]\label{ex-elementary} As an example where $\mu$-stability is useful, suppose $X$ is a smooth surface and $E$ is a torsion-free sheaf on $X$. Let $p\in X$ be a point where $E$ is locally free, and consider sheaves $E'$ defined as kernels of maps $E\to \mathcal{O}_p$, where $\mathcal{O}_p$ is a skyscraper sheaf: $$0\to E'\to E\to \mathcal{O}_p\to 0.$$ Intuitively, $E'$ is just $E$ with an additional simple singularity imposed at $p$. Such a sheaf $E'$ is called an \emph{elementary modification} of $E$. We have $\mu(E)=\mu(E')$ and $\chi(E') = \chi(E)-1$, which makes elementary modifications a useful tool for studying sheaves by induction on the Euler characteristic. Suppose $E$ satisfies one of the four types of stability discussed in Example \ref{ex-slopestability}. If $E$ is $\mu$-(semi)stable, then it follows that $E'$ is $\mu$-(semi)stable as well. Indeed, if $F\subset E'$ with $r(F)<r(E')$, then also $F\subset E$, so $\mu(F)\leqpar\mu(E)$. But $\mu(E)=\mu(E')$, so $\mu(F)\leqpar\mu(E')$ and $E'$ is $\mu$-(semi)stable. On the other hand, elementary modifications do not behave as well with respect to Gieseker (semi)stability. For example, take $X=\mathbb{P}^2$.
Then $E= \mathcal{O}_{\mathbb{P}^2}\oplus \mathcal{O}_{\mathbb{P}^2}$ is semistable, but any elementary modification $E'$ of $\mathcal{O}_{\mathbb{P}^2}\oplus\mathcal{O}_{\mathbb{P}^2}$ at a point $p\in \mathbb{P}^2$ is isomorphic to $I_p\oplus \mathcal{O}_{\mathbb{P}^2}$, where $I_p$ is the ideal sheaf of $p$. Thus $E'$ is not semistable. It is also possible to give an example of a stable sheaf $E$ such that some elementary modification is not stable. Let $p,q,r\in \mathbb{P}^2$ be distinct points. Then $\ext^1(I_r,I_{\{p,q\}})=2$. If $E$ is any non-split extension $$0\to I_{\{p,q\}}\to E\to I_r\to 0$$ then $E$ is clearly $\mu$-semistable. In fact, $E$ is stable: the only stable sheaves $F$ of rank $1$ and slope $0$ with $p_F\leq p_E$ are $\mathcal{O}_{\mathbb{P}^2}$ and $I_s$ for $s\in \mathbb{P}^2$ a point, but $\Hom(I_s,E)=0$ for any $s\in \mathbb{P}^2$ since the sequence is not split. Now if $s\in \mathbb{P}^2$ is a point distinct from $p,q,r$ and $E\to \mathcal{O}_s$ is a map such that the composition $I_{\{p,q\}}\to E\to \mathcal{O}_s$ is zero, then the corresponding elementary modification $$0\to E'\to E\to \mathcal{O}_s\to 0$$ has a subsheaf $I_{\{p,q\}}\subset E'$. We have $p_{I_{\{p,q\}}} = p_{E'}$, so $E'$ is strictly semistable. \end{example} \begin{example}[Chern classes]\label{ex-Chern} Let $K_0(X)$ be the Grothendieck group of $X$, generated by classes $[E]$ of locally free sheaves, modulo relations $[E] = [F]+[G]$ for every exact sequence $$0\to F\to E\to G\to 0.$$ There is a symmetric bilinear \emph{Euler pairing} on $K_0(X)$ such that $([E],[F]) = \chi(E\otimes F)$ whenever $E,F$ are locally free sheaves. The \emph{numerical Grothendieck group} $K_{\num}(X)$ is the quotient of $K_0(X)$ by the kernel of the Euler pairing, so that the Euler pairing descends to a nondegenerate pairing on $K_{\num}(X)$. It is often preferable to fix the Chern classes of a sheaf instead of the Hilbert polynomial. This is accomplished by fixing a class ${\bf v}\in K_{\num}(X)$. Any class ${\bf v}$ determines a Hilbert polynomial $P_{\bf v} = ({\bf v},[\mathcal{O}_X(m)])$. In general, a polynomial $P$ can arise as the Hilbert polynomial of several classes ${\bf v}\in K_{\num}(X)$. In any family $\mathcal{E}$ of sheaves parameterized by a connected base $S$ the sheaves $E_s$ all have the same class in $K_{\num}(X)$. Therefore, the moduli space $M(P)$ splits into connected components corresponding to the different vectors ${\bf v}$ with $P_{\bf v}= P$. We write $M({\bf v})$ for the connected component of $M(P)$ corresponding to ${\bf v}$. \end{example} \subsection{Filtrations} In addition to controlling subsheaves, stability also restricts the types of maps that can occur between sheaves. \begin{proposition}\label{prop-seesaw} \begin{enumerate} \item (See-saw property) In any exact sequence of pure sheaves $$0\to F\to E\to Q\to 0$$ of the same dimension $d$, we have $p_F \leqpar p_E$ if and only if $p_E \leqpar p_Q$. \item If $F,E$ are semistable sheaves of the same dimension $d$ and $p_F > p_E$, then $\Hom(F,E)=0$. \item If $F,E$ are stable sheaves and $p_F=p_E$, then any nonzero homomorphism $F\to E$ is an isomorphism. \item Stable sheaves $E$ are \emph{simple:} $\Hom(E,E)=\mathbb{C}$.
\end{enumerate} \end{proposition} \begin{proof} (1) We have $P_E = P_F + P_Q$, so $\alpha_d(E) = \alpha_d(F)+\alpha_d(Q)$ and $$p_E = \frac{P_E}{\alpha_d(E)} = \frac{P_F+P_Q}{\alpha_d(E)} = \frac{\alpha_d(F)p_F+\alpha_d(Q)p_Q}{\alpha_d(E)}.$$ Thus $p_E$ is a weighted mean of $p_F$ and $p_Q$, and the result follows. (2) Let $f:F\to E$ be a nonzero homomorphism, and put $C = \im f$ and $K = \ker f$. Then $C$ is pure of dimension $d$ since $E$ is, and $K$ is pure of dimension $d$ since $F$ is. By (1) and the semistability of $F$, we have $p_C\geq p_F > p_E$. This contradicts the semistability of $E$ since $C\subset E$. (3) Since $p_F = p_E$, $F$ and $E$ have the same dimension. With the same notation as in (2), we instead find $p_C\geq p_F = p_E$, and the stability of $E$ gives $p_C=p_E$ and $C=E$. If $f$ is not an isomorphism then $p_K = p_F$, contradicting stability of $F$. Therefore $f$ is an isomorphism. (4) Suppose $f:E\to E$ is any homomorphism. Pick some point $x\in X$. The linear transformation $f_x:E_x\to E_x$ has an eigenvalue $\lambda\in \mathbb{C}$. Then $f-\lambda \id_E$ is not an isomorphism, so it must be zero. Therefore $f=\lambda \id_E$. \end{proof} Harder-Narasimhan filtrations enable us to study arbitrary pure sheaves in terms of semistable sheaves. Proposition \ref{prop-seesaw} is one of the important ingredients in the proof of the next theorem. \begin{theorem-definition}[\cite{HuybrechtsLehn}] Let $E$ be a pure sheaf of dimension $d$. Then there is a unique filtration $$0=E_0\subset E_1\subset \cdots \subset E_\ell = E$$ called the \emph{Harder-Narasimhan filtration} such that the quotients $\mathrm{gr}_i = E_i/E_{i-1}$ are semistable of dimension $d$ and reduced Hilbert polynomial $p_i$, where $$p_1 > p_2 > \cdots >p_\ell.$$ \end{theorem-definition} In order to construct (semi)stable sheaves it is frequently necessary to also work with sheaves that are not semistable. The next example outlines one method for constructing semistable vector bundles. This general method was used by Dr\'ezet and Le Potier to classify the possible Hilbert polynomials of semistable sheaves on $\mathbb{P}^2$ \cite{LePotierLectures,DLP}. \begin{example}\label{ex-existence} Let $(X,H)$ be a smooth polarized projective variety. Suppose $A$ and $B$ are vector bundles on $X$ and that the sheaf $\sHom(A,B)$ is globally generated. For simplicity assume $r(B)-r(A) \geq \dim X$. Let $S \subset \Hom(A,B)$ be the open subset parameterizing injective sheaf maps; this set is nonempty since $\sHom(A,B)$ is globally generated. Consider the family $\mathcal{E}$ of sheaves on $X$ parameterized by $S$ where the sheaf $E_s$ parameterized by $s\in S$ is the cokernel $$0\to A\fto{s} B\to E_s\to 0.$$ Then for general $s\in S$, the sheaf $E_s$ is a vector bundle \cite[Proposition 2.6]{HuizengaJAG} with Hilbert polynomial $P:=P_B-P_A$. In other words, restricting to a dense open subset $S'\subset S$, we get a family of locally free sheaves parameterized by $S'$. Next, semistability is an open condition in families. Thus there is a (possibly empty) open subset $S''\subset S'$ parameterizing semistable sheaves. Let $\ell>0$ be an integer and pick polynomials $P_1,\ldots,P_\ell$ such that $P_1+\cdots +P_\ell = P$ and the corresponding reduced polynomials $p_1,\ldots,p_\ell$ have $p_1>\cdots > p_\ell$. Then there is a locally closed subset $S_{P_1,\ldots,P_\ell} \subset S'$ parameterizing sheaves with a Harder-Narasimhan filtration of length $\ell$ with factors of Hilbert polynomial $P_1,\ldots,P_{\ell}$.
Such loci are called \emph{Shatz strata} in the base $S'$ of the family. Finally, to show that $S''$ is nonempty, it suffices to show that the Shatz stratum $S_P$ corresponding to semistable sheaves is dense. One approach to this problem is to show that every Shatz stratum $S_{P_1,\ldots,P_\ell}$ with $\ell \geq 2$ has codimension at least $1$. See \cite[Chapter 16]{LePotierLectures} for an example where this is carried out in the case of $\mathbb{P}^2$. \end{example} Just as the Harder-Narasimhan filtration allows us to use semistable sheaves to build up arbitrary pure sheaves, Jordan-H\"older filtrations decompose semistable sheaves in terms of stable sheaves. \begin{theorem-definition}\cite{HuybrechtsLehn} Let $E$ be a semistable sheaf of dimension $d$ and reduced Hilbert polynomial $p$. There is a filtration $$0 = E_0 \subset E_1\subset \cdots \subset E_\ell = E$$ called the \emph{Jordan-H\"older filtration} such that the quotients $\mathrm{gr}_i = E_{i}/E_{i-1}$ are stable with reduced Hilbert polynomial $p$. The filtration is not necessarily unique, but the list of stable factors is unique up to reordering. \end{theorem-definition} We can now precisely state the critical definition of $S$-equivalence. \begin{definition} Semistable sheaves $E$ and $F$ are \emph{$S$-equivalent} if they have the same list of Jordan-H\"older factors. \end{definition} We have already seen an example of a family of $S$-equivalent semistable sheaves in Example \ref{ex-Sequiv}, and we observed that all the parameterized sheaves must be represented by the same point in the moduli space. In fact, the converse is also true, as the next theorem shows. \begin{theorem} Two semistable sheaves $E,F$ with Hilbert polynomial $P$ are represented by the same point in $M(P)$ if and only if they are $S$-equivalent. Thus, the points of $M(P)$ are in bijective correspondence with $S$-equivalence classes of semistable sheaves with Hilbert polynomial $P$. In particular, if there are strictly semistable sheaves of Hilbert polynomial $P$, then $M(P)$ is not a fine moduli space. \end{theorem} \begin{remark} The question of when the open subset $M^s(P)$ parameterizing stable sheaves is a fine moduli space for the moduli functor $\mathcal M^s(P)$ of stable families is somewhat delicate; in this case the points of $M^s(P)$ are in bijective correspondence with the isomorphism classes of stable sheaves, but there still need not be a universal family. One positive result in this direction is the following. Let ${\bf v}\in K_{\num}(X)$ be the numerical class of a stable sheaf with Hilbert polynomial $P$ (see Example \ref{ex-Chern}). Consider the set of integers of the form $({\bf v},[F])$, where $F$ is a coherent sheaf and $(-,-)$ is the Euler pairing. If their greatest common divisor is $1$, then $M^s({\bf v})$ carries a universal family. (Note that the number-theoretic requirement also guarantees that there are no strictly semistable sheaves of class ${\bf v}$.) See \cite[\S 4.6]{HuybrechtsLehn} for details. \end{remark} \section{Properties of moduli spaces}\label{sec-properties} To study the birational geometry of moduli spaces of sheaves in depth it is typically necessary to have some kind of control over the geometric properties of the space. For example, is the moduli space nonempty? Smooth? Irreducible? What are the divisor classes on the moduli space? Our original setup of studying a smooth projective polarized variety $(X,H)$ of any dimension is too general to get satisfactory answers to these questions.
We first mention some results on smoothness which hold with a good deal of generality, and then turn to more specific cases with far more precise results. \subsection{Tangent spaces, smoothness, and dimension} Let $(X,H)$ be a smooth polarized variety, and let ${\bf v}\in K_{\num}(X)$. The tangent space to the moduli space $M=M({\bf v})$ is typically only well-behaved at points $E\in M$ parameterizing stable sheaves $E$, due to the identification of $S$-equivalence classes of sheaves in $M$. Let $D=\Spec \mathbb{C}[\varepsilon]/(\varepsilon^2)$ be the dual numbers, and let $E$ be a stable sheaf. Then the tangent space to $M$ is the subset of $\Mor(D,M)$ corresponding to maps sending the closed point of $D$ to the point $E$. By the moduli property, such a map corresponds to a sheaf $\mathcal{E}$ on $X\times D$, flat over $D$, such that $E_0=E$. Deformation theory identifies the set of sheaves $\mathcal{E}$ as above with the vector space $\Ext^1(E,E)$, so there is a natural isomorphism $T_EM \cong \Ext^1(E,E)$. The obstruction to extending a first-order deformation is a class in $\Ext^2(E,E)$, and if $\Ext^2(E,E)=0$ then $M$ is smooth at $E$. For some varieties $X$ it is helpful to improve the previous statement slightly, since the vanishing $\Ext^2(E,E)=0$ can be rare, for example if $K_X$ is trivial. If $E$ is a vector bundle, let $$\mathrm{tr}: \sEnd(E)\to \mathcal{O}_X$$ be the \emph{trace map}, acting fiberwise as the ordinary trace of an endomorphism. Then $$H^i(\sEnd(E))\cong \Ext^i(E,E),$$ so there are induced maps on cohomology $$\mathrm{tr}^i:\Ext^i(E,E)\to H^i(\mathcal{O}_X).$$ We write $\Ext^i(E,E)_0 \subset \Ext^i(E,E)$ for $\ker \mathrm{tr}^i$, the subspace of \emph{traceless extensions}. The subspaces $\Ext^i(E,E)_0$ can also be defined if $E$ is just a coherent sheaf, but the construction is more delicate and we omit it. \begin{theorem}\label{thm-smoothness} The tangent space to $M$ at a stable sheaf $E$ is canonically isomorphic to $\Ext^1(E,E)$, the space of first order deformations of $E$. If $\Ext^2(E,E)_0=0$, then $M$ is smooth at $E$ of dimension $\ext^1(E,E)$. \end{theorem} We now examine several consequences of Theorem \ref{thm-smoothness} in the case of curves and surfaces. \begin{example} Suppose $X$ is a smooth curve of genus $g$, and let $M(r,d)$ be the moduli space of semistable sheaves of rank $r$ and degree $d$ on $X$. Then the vanishing $\Ext^2(E,E)=0$ holds for any sheaf $E$, so the moduli space $M(P)$ is smooth at every point parameterizing a stable sheaf $E$. Since stable sheaves are simple, the dimension at such a sheaf is $$\ext^1(E,E) = 1-\chi(E,E) = r^2(g-1)+1.$$ \end{example} \begin{example} Let $(X,H)$ be a smooth variety, and let ${\bf v}=[\mathcal{O}_X]\in K_{\num}(X)$ be the numerical class of $\mathcal{O}_X$. The moduli space $M({\bf v})$ parameterizes line bundles numerically equivalent to $\mathcal{O}_X$; it is the connected component $\Pic^0 X$ of the Picard scheme $\Pic X$ which contains $\mathcal{O}_X$. For any line bundle $L\in M({\bf v})$, we have $\sEnd(L)\cong \mathcal{O}_X$ and the trace map $\sEnd(L)\to \mathcal{O}_X$ is an isomorphism. Thus $\Ext^2(L,L)_0=0$, and $M({\bf v})$ is smooth of dimension $\ext^1(L,L) = h^1(\mathcal{O}_X) =:q(X)$, the \emph{irregularity} of $X$. \end{example} \begin{example}\label{ex-dimSurface} Suppose $(X,H)$ is a smooth surface and $E\in M^s(P)$ is a stable vector bundle.
The sheaf map $$\mathrm{tr}:\sEnd(E)\to \mathcal{O}_X$$ is surjective, so the induced map $\mathrm{tr}^2:\Ext^2(E,E)\to H^2(\mathcal{O}_X)$ is surjective since $X$ is a surface. Therefore $\ext^2(E,E)_0=0$ if and only if $\ext^2(E,E) = h^2(\mathcal{O}_X)$. We conclude that if $\ext^2(E,E)_0=0$ then $M(P)$ is smooth at $E$ of local dimension \begin{align*}\dim_E M(P)=\ext^1(E,E) &= 1-\chi(E,E)+\ext^2(E,E) \\&= 1-\chi(E,E)+h^2(\mathcal{O}_X) \\&= 2r^2\Delta(E)+\chi(\mathcal{O}_X)(1-r^2)+q(X). \end{align*} \end{example} \begin{example} If $(X,H)$ is a smooth surface such that $H.K_X<0$, then the vanishing $\Ext^2(E,E)=0$ is automatic. Indeed, by Serre duality, $$\Ext^2(E,E) \cong \Hom(E,E\otimes K_X)^*.$$ Then $$\mu(E\otimes K_X) = \mu(E) + \mu(K_X) = \mu(E) + H.K_X < \mu(E),$$ so $\Hom(E,E\otimes K_X) = 0$ by Proposition \ref{prop-seesaw}. The assumption $H.K_X<0$ in particular holds whenever $X$ is a del Pezzo or Hirzebruch surface. Thus the moduli spaces $M({\bf v})$ for these surfaces are smooth at points corresponding to stable sheaves. \end{example} \begin{example} If $(X,H)$ is a smooth surface and $K_X$ is trivial (e.g. $X$ is a K3 or abelian surface), then the weaker vanishing $\Ext^2(E,E)_0=0$ holds. The trace map $\mathrm{tr}^2:H^2(\sEnd(E))\to H^2(\mathcal{O}_X)$ is Serre dual to an isomorphism $$H^0(\mathcal{O}_X)\to H^0(\sEnd(E)) = \Hom(E,E),$$ so $\mathrm{tr}^2$ is an isomorphism and $\Ext^2(E,E)_0=0$. \end{example} \subsection{Existence and irreducibility} What are the possible numerical invariants ${\bf v}\in K_{\num}(X)$ of a semistable sheaf on $X$? When the moduli space is nonempty, is it irreducible? As usual, the case of curves is simplest. \subsubsection{Existence and irreducibility for curves} Let $M=M(r,d)$ be the moduli space of semistable sheaves of rank $r$ and degree $d$ on a smooth curve $X$ of genus $g\geq 1$. Then $M$ is nonempty and irreducible, and unless $X$ is an elliptic curve and $r,d$ are not coprime then the stable sheaves are dense in $M$. To show $M(r,d)$ is nonempty one can follow the basic outline of Example \ref{ex-existence}. For more details, see \cite[Chapter 8]{LePotierLectures}. Irreducibility of $M(r,d)$ can be proved roughly as follows. We may as well assume $r\geq 2$ and $d\geq 2rg$ by tensoring by a sufficiently ample line bundle. Let $L$ denote a line bundle of degree $d$ on $X$, and consider extensions of the form $$0\to \mathcal{O}_X^{r-1} \to E\to L\to 0.$$ As $L$ and the extension class vary, we obtain a family of sheaves $\mathcal{E}$ parameterized by a vector bundle $S$ over the component $\Pic^d(X)$ of the Picard group. On the other hand, by the choice of $d$, any semistable $E\in M(r,d)$ is generated by its global sections. A general collection of $r-1$ sections of $E$ will be linearly independent at every $x\in X$, so that the quotient of the corresponding inclusion $\mathcal{O}_X^{r-1}\to E$ is a line bundle. Thus every semistable $E$ fits into an exact sequence as above. The (irreducible) open subset of $S$ parameterizing semistable sheaves therefore maps onto $M(r,d)$, and the moduli space is irreducible. \subsubsection{Existence for surfaces} For surfaces the existence question is quite subtle. The first general result in this direction is the Bogomolov inequality. 
\begin{theorem}[Bogomolov inequality]\label{thm-bogomolov} If $(X,H)$ is a smooth surface and $E$ is a $\mu_H$-semistable sheaf on $X$ then $$\Delta(E)\geq 0.$$ \end{theorem} \begin{remark}Note that the discriminant $\Delta(E)$ is independent of the particular polarization $H$, so the inequality holds for any sheaf which is slope-semistable with respect to some choice of polarization.\end{remark} Recall that line bundles $L$ have $\Delta(L)=0$, so in a sense the Bogomolov inequality is sharp. However, there are certainly Chern characters ${\bf v}$ with $\Delta({\bf v})\geq 0$ such that there is no semistable sheaf of character ${\bf v}$. A refined Bogomolov inequality should bound $\Delta(E)$ from below in terms of the other numerical invariants of $E$. Solutions to the existence problem for semistable sheaves on a surface can often be viewed as such improvements of the Bogomolov inequality. \subsubsection{Existence for $\mathbb{P}^2$}\label{sssec-existP2} On $\mathbb{P}^2$, the classification of Chern characters ${\bf v}$ such that $M({\bf v})$ is nonempty has been carried out by Dr\'ezet and Le Potier \cite{DLP,LePotierLectures}. A \emph{(semi)exceptional bundle} is a rigid (semi)stable bundle, i.e. a (semi)stable bundle with $\Ext^1(E,E)=0$. Examples of exceptional bundles include line bundles, the tangent bundle $T_{\mathbb{P}^2}$, and infinitely more examples obtained by a process of \emph{mutation}. The dimension formula for a moduli space of sheaves on $\mathbb{P}^2$ reads $$\dim M({\bf v}) = r^2(2\Delta-1)+1,$$ so an exceptional bundle has discriminant $\Delta = \frac{1}{2}-\frac{1}{2r^2}<\frac{1}{2}.$ The dimension formula suggests an immediate refinement of the Bogomolov inequality: if $E$ is a non-exceptional stable bundle, then $\Delta(E)\geq \frac{1}{2}$. However, exceptional bundles can provide even stronger Bogomolov inequalities for non-exceptional bundles. For example, suppose $E$ is a semistable sheaf with $0<\mu(E) <1$. Then $\Hom(E,\mathcal{O}_X)=0$ and $$\Ext^2(E,\mathcal{O}_X)\cong \Hom(\mathcal{O}_X,E\otimes K_X)^*=0$$ by semistability and Proposition \ref{prop-seesaw}. Thus $\chi(E,\mathcal{O}_X)\leq 0$. By the Riemann-Roch theorem, this inequality is equivalent to the inequality $$\Delta(E) \geq P(-\mu(E))$$ where $P(x) = \frac{1}{2}x^2+\frac{3}{2}x+1$; this inequality is stronger than the ordinary Bogomolov inequality for any $\mu(E)\in (0,1)$. Taking all the various exceptional bundles on $\mathbb{P}^2$ into account in a similar manner, one defines a function $\delta:\mathbb{R}\to \mathbb{R}$ with the property that any non-semiexceptional semistable bundle $E$ satisfies $\Delta(E)\geq \delta(\mu(E))$. The graph of $\delta$ is Figure \ref{fig-deltaCurve}. Dr\'ezet and Le Potier prove the converse theorem: exceptional bundles are the only obstruction to the existence of stable bundles with given numerical invariants. \begin{theorem} Let ${\bf v}$ be an integral Chern character on $\mathbb{P}^2$. There is a non-exceptional stable vector bundle on $\mathbb{P}^2$ with Chern character ${\bf v}$ if and only if $\Delta({\bf v})\geq \delta(\mu({\bf v}))$. \end{theorem} The method of proof follows the outline indicated in Example \ref{ex-existence}. \begin{figure} \caption{The curve $\delta(\mu)$ occurring in the classification of stable bundles on $\mathbb{P}^2$. If $(r,\mu,\Delta)$ are the invariants of an integral Chern character, then there is a non-exceptional stable bundle $E$ with these invariants if and only if $\Delta\geq \delta(\mu)$. 
The invariants of the first several exceptional bundles are also displayed.} \label{fig-deltaCurve} \end{figure} \subsubsection{Existence for other rational surfaces} In the case of $X=\mathbb{P}^1\times \mathbb{P}^1$, Rudakov \cite{Rudakov1,Rudakov2} gives a solution to the existence problem that is similar to the Dr\'ezet-Le Potier result for $\mathbb{P}^2$. However, the geometry of exceptional bundles is more complicated than for $\mathbb{P}^2$, and as a result the classification is somewhat less explicit. To our knowledge a satisfactory answer to the existence problem has not yet been given for a del Pezzo or Hirzebruch surface. \subsubsection{Irreducibility for rational surfaces} For many rational surfaces $X$ it is known that the moduli space $M_H({\bf v})$ is irreducible. One common argument is to introduce a mild relaxation of the notion of semistability and show that the stack parameterizing such objects is irreducible and contains the semistable sheaves as an open dense substack. For example, Hirschowitz and Laszlo \cite{HirschowitzLaszlo} introduce the notion of a \emph{prioritary sheaf} on $\mathbb{P}^2$. A torsion-free coherent sheaf $E$ on $\mathbb{P}^2$ is prioritary if $$\Ext^2(E,E(-1)) = 0.$$ By Serre duality, any torsion-free sheaf whose Harder-Narasimhan factors have slopes that are ``not too far apart'' will be prioritary, so it is very easy to construct prioritary sheaves. For example, semistable sheaves are prioritary, and sheaves of the form $\mathcal{O}_{\mathbb{P}^2}(a)^{\oplus k} \oplus \mathcal{O}_{\mathbb{P}^2}(a+1)^{\oplus l}$ are prioritary. The class of prioritary sheaves is also closed under elementary modifications, which makes it possible to study them by induction on the Euler characteristic as in Example \ref{ex-elementary}. The Artin stack $\mathcal P({\bf v})$ of prioritary sheaves with invariants ${\bf v}$ is smooth, essentially because $\Ext^2(E,E)=0$ for any prioritary sheaf. There is a unique prioritary sheaf of a given slope and rank with minimal discriminant, given by a sheaf of the form $\mathcal{O}_{\mathbb{P}^2}(a)^{\oplus k} \oplus \mathcal{O}_{\mathbb{P}^2}(a+1)^{\oplus l}$ with the integers $a,k,l$ chosen appropriately. Hirschowitz and Laszlo show that any connected component of $\mathcal P({\bf v})$ contains a sheaf which is an elementary modification of another sheaf. By induction on the Euler characteristic, they conclude that $\mathcal P({\bf v})$ is connected, and therefore irreducible. Since semistability is an open property, the stack $\mathcal M({\bf v})$ of semistable sheaves is an open substack of $\mathcal P({\bf v})$ and therefore dense and irreducible if it is nonempty. Thus the coarse space $M({\bf v})$ is irreducible as well. Walter \cite{Walter} gives another argument establishing the irreducibility of the moduli spaces $M_H({\bf v})$ on a Hirzebruch surface whenever they are nonempty. The arguments make heavy use of the ruling, and study the stack of sheaves which are prioritary with respect to the fiber class. More generally, he also studies the question of irreducibility on a geometrically ruled surface, at least under a condition on the polarization which ensures that semistable sheaves are prioritary with respect to the fiber class. \subsubsection{Existence and irreducibility for K3's}\label{sssec-K3exist} By work of Yoshioka, Mukai, and others, the existence problem has a particularly simple and beautiful solution when $(X,H)$ is a smooth K3 surface (see \cite{Yoshioka3}, or \cite{BM2,BM3} for a simple treatment).
Define the \emph{Mukai pairing} $\langle-,-\rangle$ on $K_{\num}(X)$ by $\langle {\bf v},{\bf w}\rangle = -\chi({\bf v},{\bf w})$; we can make sense of this formula by the same method as in Example \ref{ex-Chern}. Since $X$ is a K3 surface, $K_X$ is trivial and the Mukai pairing is symmetric by Serre duality. By Example \ref{ex-dimSurface}, if there is a stable sheaf $E$ with invariants ${\bf v}$ then the moduli space $M({\bf v})$ has dimension $2+\langle {\bf v},{\bf v}\rangle$ at $E$. If $E$ is a stable sheaf of class ${\bf v}$ with $\langle {\bf v},{\bf v}\rangle = -2$, then $E$ is called \emph{spherical} and the moduli space $M_H({\bf v})$ is a single reduced point. A class ${\bf v}\in K_{\num}(X)$ is called \emph{primitive} if it is not a multiple of another class. If the polarization $H$ of $X$ is chosen suitably generically, then ${\bf v}$ being primitive ensures that there are no strictly semistable sheaves of class ${\bf v}$. Thus, for a generic polarization, a necessary condition for the existence of a stable sheaf is that $\langle {\bf v},{\bf v}\rangle \geq -2$. \begin{definition} A primitive class ${\bf v}=(r,c,d)\in K_{\num}(X)$ is called \emph{positive} if $\langle {\bf v},{\bf v}\rangle \geq -2$ and either \begin{enumerate} \item\label{req1} $r >0$, or \item $r = 0$ and $c$ is effective, or \item\label{req3} $r = 0$, $c=0$, and $d>0$. \end{enumerate} \end{definition} The additional requirements (\ref{req1})-(\ref{req3}) in the definition are automatically satisfied any time there is a \emph{sheaf} of class ${\bf v}$, so they are very mild. \begin{theorem}\label{thm-k3exist} Let $(X,H)$ be a smooth K3 surface. Let ${\bf v}\in K_{\num}(X)$, and write ${\bf v} = m {\bf v}_0$, where ${\bf v}_0$ is primitive and $m$ is a positive integer. If ${\bf v}_0$ is positive, then the moduli space $M_H({\bf v})$ is nonempty. If furthermore $m=1$ and the polarization $H$ is sufficiently generic, then $M_H({\bf v})$ is a smooth, irreducible, holomorphic symplectic variety. If $M_H({\bf v})$ is nonempty and the polarization is sufficiently generic, then ${\bf v}_0$ is positive. \end{theorem} The Mukai pairing can be made particularly simple from a computational standpoint by studying it in terms of a different coordinate system. Let $$H^*_\mathrm{alg}(X) = H^0(X,\mathbb{Z})\oplus \NS(X) \oplus H^4(X,\mathbb{Z}).$$ Then there is an isomorphism $v:K_{\num}(X)\to H^*_{\mathrm{alg}}(X,\mathbb{Z})$ defined by $v({\bf v}) = {\bf v}\cdot \sqrt{\td(X)}.$ The vector $v({\bf v})$ is called a \emph{Mukai vector}. The Todd class $\td(X)\in H_\mathrm{alg}^*(X)$ is $(1,0,2)$, so $\sqrt{\td(X)} = (1,0,1)$ and $$v({\bf v}) = (\ch_0({\bf v}),\ch_1({\bf v}),\ch_0({\bf v})+\ch_2({\bf v}))=(r,c_1,r+\frac{c_1^2}{2}-c_2).$$ Suppose ${\bf v},{\bf w}\in K_{\num}(X)$ have Mukai vectors $v({\bf v}) = (r,c,s)$, $v({\bf w}) = (r',c',s')$. Since $\sqrt{\td(X)}$ is self-dual, the Hirzebruch-Riemann-Roch theorem gives $$\langle {\bf v},{\bf w}\rangle = -\chi({\bf v},{\bf w}) = -\int_X {\bf v}^*\cdot {\bf w} \cdot \td(X) = -\int_X (r,-c,s)\cdot (r',c',s') = cc'-rs'-r's.$$ It is worth pointing out that Theorem \ref{thm-k3exist} can also be stated as a strong Bogomolov inequality, as in the Dr\'ezet-Le Potier result for $\mathbb{P}^2$. Let ${\bf v}_0$ be a primitive vector which is the vector of a coherent sheaf. 
The irregularity of $X$ is $q(X)=0$ and $\chi(\mathcal{O}_X)=2$, so as in Example \ref{ex-dimSurface} $$\langle {\bf v}_0,{\bf v}_0 \rangle = 2r^2\Delta({\bf v}_0) +2(1-r^2)-2 = 2r^2(\Delta({\bf v}_0)-1).$$ Therefore, ${\bf v}_0$ is positive and non-spherical if and only if $\Delta({\bf v}_0)\geq 1$. \subsubsection{General surfaces}\label{ssec-Hilb} On an arbitrary smooth surface $(X,H)$ the basic geometry of the moduli space is less understood. To obtain good results, it is necessary to impose some kind of additional hypotheses on the Chern character ${\bf v}$. For one possibility, we can take ${\bf v}$ to be the character of an ideal sheaf $I_Z$ of a zero-dimensional scheme $Z\subset X$ of length $n$. Then the moduli space of sheaves of class ${\bf v}$ with determinant $\mathcal{O}_X$ is the \emph{Hilbert scheme} of $n$ points on $X$, written $X^{[n]}$. It parameterizes ideal sheaves of subschemes $Z\subset X$ of length $n$. \begin{remark} Note that any rank $1$ torsion-free sheaf $E$ with determinant $\mathcal{O}_X$ admits an inclusion $E\to E^{**} := \det E =\mathcal{O}_X$, so that $E$ is actually an ideal sheaf. Unless $X$ has irregularity $q(X) = 0$, the Hilbert scheme $X^{[n]}$ and moduli space $M({\bf v})$ will differ, since the latter space also contains sheaves of the form $L\otimes I_Z$, where $L$ is a line bundle numerically equivalent to $\mathcal{O}_X$. In fact, $M({\bf v}) \cong X^{[n]}\times \Pic^0(X)$. \end{remark} Classical results of Fogarty show that Hilbert schemes of points on a surface are very well-behaved. \begin{theorem}[\cite{Fogarty1}] The Hilbert scheme of points $X^{[n]}$ on a smooth surface $X$ is smooth and irreducible. It is a fine moduli space, and carries a universal ideal sheaf. \end{theorem} At the other extreme, if the rank is arbitrary then there are \emph{O'Grady-type} results which show that the moduli space has many good properties if we require the discriminant of our sheaves to be sufficiently large. \begin{theorem}[\cite{HuybrechtsLehn,OGrady}] There is a constant $C$ depending on $X,H$, and $r$, such that if ${\bf v}$ has rank $r$ and $\Delta({\bf v}) \geq C$ then the moduli space $M_H({\bf v})$ is nonempty, irreducible, and normal. The $\mu$-stable sheaves $E$ such that $\ext^2(E,E)_0=0$ are dense in $M_H({\bf v})$, so $M_H({\bf v})$ has the expected dimension $$\dim M_H({\bf v}) = 2r^2\Delta(E)+\chi(\mathcal{O}_X)(1-r^2)+q(X).$$ \end{theorem} \section{Divisors and classical birational geometry}\label{sec-classical} In this section we introduce some of the primary objects of study in the birational geometry of varieties. We then study some simple examples of the birational geometry of moduli spaces from the classical point of view. \subsection{Cones of divisors} Let $X$ be a normal projective variety. Recall that $X$ is \emph{factorial} if every Weil divisor on $X$ is Cartier, and $\mathbb{Q}$-factorial if every Weil divisor has a multiple that is Cartier. To make the discussion in this section easier we will assume that $X$ is $\mathbb{Q}$-factorial. This means that describing a codimension $1$ locus on $X$ determines the class of a $\mathbb{Q}$-Cartier divisor. \begin{definition} Two Cartier divisors $D_1,D_2$ (or $\mathbb{Q}$- or $\mathbb{R}$-Cartier divisors) are \emph{numerically equivalent}, written $D_1 \equiv D_2$, if $D_1\cdot C = D_2 \cdot C$ for every curve $C\subset X$. The \emph{Neron-Severi space} $N^1(X)$ is the real vector space $\Pic(X)\otimes \mathbb{R}/\equiv$. 
\end{definition} \subsubsection{Ample and nef cones}The first object of study in birational geometry is the \emph{ample cone} $\Amp(X)$ of $X$. Roughly speaking, the ample cone parameterizes the various projective embeddings of $X$. A Cartier divisor $D$ on $X$ is \emph{ample} if the map to projective space determined by $\mathcal{O}_X(mD)$ is an embedding for sufficiently large $m$. The Nakai-Moishezon criterion for ampleness says that $D$ is ample if and only if $D^{\dim V}.V >0$ for every subvariety $V\subset X$. In particular, ampleness only depends on the numerical equivalence class of $D$. A positive linear combination of ample divisors is also ample, so it is natural to consider the cone spanned by ample classes. \begin{definition} The \emph{ample cone} $\Amp(X)\subset N^1(X)$ is the open convex cone spanned by the numerical classes of ample Cartier divisors. An $\mathbb{R}$-Cartier divisor $D$ is ample if its numerical class is in the ample cone. \end{definition} From a practical standpoint it is often easier to work with \emph{nef} (i.e. \emph{numerically effective}) divisors instead of ample divisors. We say that a Cartier divisor $D$ is nef if $D.C\geq 0$ for every curve $C\subset X$. This is clearly a numerical condition, so nefness extends easily to $\mathbb{R}$-divisors and they span a cone $\Nef(X)$, the \emph{nef cone} of $X$. By Kleiman's theorem, the problems of studying ample or nef cones are essentially equivalent. \begin{theorem}[{\cite[Theorem 1.27]{Debarre}}] The nef cone is the closure of the ample cone, and the ample cone is the interior of the nef cone: $$ \Nef(X) = \overline{\Amp(X)} \qquad \mbox{and} \qquad \Amp(X) = \Nef(X)^{\circ} .$$ \end{theorem} Nef divisors are particularly important in birational geometry because they record the behavior of the simplest nontrivial morphisms to other projective varieties, as the next example shows. \begin{example}\label{ex-nefpullback} Suppose $f:X\to Y$ is any morphism of projective varieties. Let $L$ be a very ample line bundle on $Y$, and consider the line bundle $f^*L$. If $C\subset X$ is any irreducible curve, we can find an effective divisor $D\subset Y$ representing $L$ such that the image of $C$ is not contained entirely in $D$. This implies $C.(f^*L) \geq 0$, so $f^*L$ is nef. Note that if $f$ contracts some curve $C\subset X$ to a point, then $C.(f^*L)=0$, so $f^*L$ is on the boundary of the nef cone. As a partial converse, suppose $D$ is a nef divisor on $X$ such that the linear series $|mD|$ is base point free for some $m>0$; such a divisor class is called \emph{semiample}. Then for sufficiently large and divisible $m$, the image of the map $\phi_{|mD|}:X\to |mD|^*$ is a projective variety $Y_m$ carrying an ample line bundle $L$ such that $\phi_{|mD|}^* L = \mathcal{O}_X(mD).$ See \cite[Theorem 2.1.27]{Lazarsfeld} for details and a more precise statement. \end{example} \begin{example}\label{ex-nefCompute} Classically, to compute the nef (and hence ample) cone of a variety $X$ one typically first constructs a subcone $\Lambda \subset \Nef(X)$ by finding divisors $D$ on the boundary arising from interesting contractions $X\to Y$ as in Example \ref{ex-nefpullback}. One then dually constructs interesting curves $C$ on $X$ to span a cone $\Nef(X)\subset \Lambda'$ given as the divisors intersecting the curves nonnegatively. If enough divisors and curves are constructed so that $\Lambda = \Lambda'$, then they equal the nef cone. 
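Schematically, if $D_1,\ldots,D_k$ denote the nef divisors obtained from contractions and $C_1,\ldots,C_m$ the chosen curves (our notation for this paragraph), the two bounds read $$\Lambda = \mathrm{Cone}(D_1,\ldots,D_k)\ \subseteq\ \Nef(X)\ \subseteq\ \{D\in N^1(X): D.C_j\geq 0 \textrm{ for all } j\} = \Lambda',$$ where $\mathrm{Cone}(D_1,\ldots,D_k)$ is the closed convex cone spanned by the $D_i$; the nef cone is determined exactly when the two outer cones coincide.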
One of the main features of the positivity lemma of Bayer and Macr\`i will be that it produces nef divisors on moduli spaces of sheaves $M$ without having to worry about finding a map $M\to Y$ to a projective variety giving rise to the divisor. A priori these nef divisors may not be semiample or have sections at all, so it may or may not be possible to construct these divisors and prove their nefness via more classical constructions. See \S\ref{sec-positivity} for more details. \end{example} \begin{example} For an easy example of the procedure in Example \ref{ex-nefCompute}, consider the blowup $X = \Bl_p \mathbb{P}^2$ of $\mathbb{P}^2$ at a point $p$. Then $\Pic X \cong \mathbb{Z} H\oplus \mathbb{Z} E$, where $H$ is the pullback of a line under the map $\pi:X\to \mathbb{P}^2$ and $E$ is the exceptional divisor. The Neron-Severi space $N^1(X)$ is the two-dimensional real vector space spanned by $H$ and $E$. Convex cones in $N^1(X)$ are spanned by two extremal classes. Since $\pi$ contracts $E$, the class $H$ is an extremal nef divisor. We also have a fibration $f: X\to \mathbb{P}^1$, where the fibers are the proper transforms of lines through $p$. The pullback of a point in $\mathbb{P}^1$ is of class $H-E$, so $H-E$ is an extremal nef divisor. Therefore $\Nef(X)$ is spanned by $H$ and $H-E$. \end{example} \subsubsection{(Pseudo)effective and big cones} The easiest interesting space of divisors to define is perhaps the \emph{effective cone} $\Eff(X) \subset N^1(X)$, defined as the subspace spanned by numerical classes of effective divisors. Unlike nefness and ampleness, however, effectiveness is \emph{not} a numerical property: for instance, on an elliptic curve $C$, a line bundle of degree $0$ has an effective multiple if and only if it is torsion. The effective cone is in general neither open nor closed. Its closure $\overline\Eff(X)$ is less subtle, and called the \emph{pseudo-effective cone}. The interior of the effective cone is the \emph{big cone} $\Bigc(X)$, spanned by divisors $D$ such that the linear series $|mD|$ defines a map $\phi_{|mD|}$ whose image has the same dimension as $X$. Thus, big divisors are the natural analog of birational maps. By Kodaira's Lemma \cite[Proposition 2.2.6]{Lazarsfeld}, bigness is a numerical property. \begin{example} The strategy for computing pseudoeffective cones is typically similar to that for computing nef cones. On the one hand, one constructs effective divisors to span a cone $\Lambda\subset \overline\Eff(X)$. A \emph{moving curve} is a numerical curve class $[C]$ such that irreducible representatives of the class pass through a general point of $X$. Thus if $D$ is an effective divisor we must have $D.C\geq 0$; otherwise $D$ would have to contain every irreducible curve of class $C$. Thus the moving curve classes dually determine a cone $\overline{\Eff}(X) \subset \Lambda'$, and if $\Lambda=\Lambda'$ then they equal the pseudoeffective cone. This approach is justified by the seminal work of Boucksom-Demailly-P\u aun-Peternell, which establishes a duality between the pseudoeffective cone and the cone of moving curves \cite{BDPP}. \end{example} \begin{example} On $X = \Bl_p\mathbb{P}^2$, the curve class $H$ is moving and $H.E = 0$. Thus $E$ spans an extremal edge of $\Eff(X)$. The curve class $H-E$ is also moving, and $(H-E)^2 = 0$. Therefore $H-E$ spans the other edge of $\Eff(X)$, and $\Eff(X)$ is spanned by $H-E$ and $E$. 
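As a quick check, write an arbitrary class as $D = a(H-E)+bE$ with $a,b\in \mathbb{R}$. Using the intersection numbers $H^2=1$, $E^2=-1$, and $H.E=0$, pairing with the two moving curve classes gives $$D.H = a \qquad \textrm{and} \qquad D.(H-E) = b,$$ so any pseudoeffective class satisfies $a,b\geq 0$ and lies in the cone spanned by $H-E$ and $E$, in agreement with the computation above.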
\end{example} \subsubsection{Stable base locus decomposition} The nef cone $\Nef(X)$ is one chamber in a decomposition of the entire pseudoeffective cone $\overline\Eff(X)$. By the \emph{base locus} $\Bs(D)$ of a divisor $D$ we mean the base locus of the complete linear series $|D|$, regarded as a subset (i.e. not as a subscheme) of $X$. By convention, $\Bs(D) = X$ if $|D|$ is empty. The \emph{stable base locus} of $D$ is the subset $$\BBs(D) = \bigcap_{m >0} \Bs(mD)$$ of $X$. One can show that $\BBs(D)$ coincides with the base locus $\Bs (mD)$ of sufficiently large and divisible multiples $mD.$ \begin{example} The base locus and stable base locus of $D$ depend on the class of $D$ in $\Pic(X)$, not just on the numerical class of $D$. For example, if $L$ is a degree $0$ line bundle on an elliptic curve $X$, then $\Bs(L) = X$ unless $L$ is trivial, and $\BBs(L)=X$ unless $L$ is torsion in $\Pic(X)$. \end{example} Since (stable) base loci do not behave well with respect to numerical equivalence, for the rest of this subsection we assume $q(X) = 0$ so that linear and numerical equivalence coincide and $N^1(X)_\mathbb{Q} = \Pic(X)\otimes \mathbb{Q}$. Then the pseudoeffective cone $\overline{\Eff}(X)$ has a wall-and-chamber decomposition where the stable base locus remains constant on the open chambers. These various chambers control the birational maps from $X$ to other projective varieties. For example, if $f:X\dashrightarrow Y$ is the rational map given by a sufficiently divisible multiple $|mD|$, then the indeterminacy locus of the map is contained in the stable base locus. \begin{example} Stable base locus decompositions are typically computed as follows. First, one constructs effective divisors in a multiple $|mD|$ and takes their intersection to get a variety $Y$ with $\BBs(D) \subset Y$. In the other direction, one looks for curves $C$ on $X$ such that $C.D <0$. Then any divisor of class $mD$ must contain $C$, so $\BBs(D)$ contains every curve numerically equivalent to $C$. \end{example} When the Picard rank of $X$ is two, the chamber decompositions can often be made very explicit. In this case it is notationally convenient to write, for example, $(D_1,D_2]$ to denote the cone of divisors of the form $a_1D_1+a_2D_2$ with $a_1\geq 0$ and $a_2> 0$. \begin{example} Let $X = \Bl_p \mathbb{P}^2$. The nef cone is $[H,H-E]$, and both $H,H-E$ are basepoint free. Thus the stable base locus is empty in the closed chamber $[H,H-E]$. If $D\in (H,E]$ is an effective divisor, then $D.E <0$, so $D$ contains $E$ as a component. The stable base locus of divisors in the chamber $(H,E]$ is $E$. \end{example} We now begin to investigate the birational geometry of some of the simplest moduli spaces of sheaves on surfaces from a classical point of view. \subsection{Birational geometry of Hilbert schemes of points} Let $X$ be a smooth surface with irregularity $q(X)=0$, and let ${\bf v}$ be the Chern character of an ideal sheaf $I_Z$ of a collection $Z$ of $n$ points. Then $M({\bf v})$ is the Hilbert scheme $X^{[n]}$ of $n$ points on $X$, parameterizing zero-dimensional schemes of length $n$. See \S\ref{ssec-Hilb} for its basic properties. \subsubsection{Divisor classes} Divisor classes on the Hilbert scheme $X^{[n]}$ can be understood entirely in terms of the birational \emph{Hilbert-Chow morphism} $h:X^{[n]}\to X^{(n)}$ to the symmetric product $X^{(n)} = \Sym^n X$.
Informally, this map sends the ideal sheaf of $Z$ to the sum of the points in $Z$, with multiplicities given by the length of the scheme at each point. \begin{remark} The symmetric product $X^{(n)}$ can itself be viewed as the moduli space of $0$-dimensional sheaves with Hilbert polynomial $P(m)=n$. Suppose $E$ is a zero-dimensional sheaf with constant Hilbert polynomial $\ell$ and that $E$ is supported at a single point $p$. Then $E$ admits a length $\ell$\ filtration where all the quotients are isomorphic to $\mathcal{O}_p$. Thus, $E$ is $S$-equivalent to $\mathcal{O}_p^{\oplus \ell}$. Since $S$-equivalent sheaves are identified in the moduli space, the moduli space $M(P)$ is just $X^{(n)}$. The Hilbert-Chow morphism $h:X^{[n]}\to X^{(n)}$ can now be seen to come from the moduli property for $X^{(n)}$. Let $\mathcal{I}$ be the universal ideal sheaf on $X\times X^{[n]}$. The quotient of the inclusion $\mathcal{I}\to \mathcal{O}_{X\times X^{[n]}}$ is then a family of zero-dimensional sheaves of length $n$. This family induces a map $X^{[n]}\to X^{(n)}$, which is just the Hilbert-Chow morphism. \end{remark} The exceptional locus of the Hilbert-Chow morphism is a divisor class $B$ on the Hilbert scheme $X^{[n]}$. Alternately, $B$ is the locus of nonreduced schemes. It is swept out by curves contained in fibers of the Hilbert-Chow morphism. A simple example of such a curve is given by fixing $n-2$ points in $X$ and allowing a length $2$ scheme $\Spec \mathbb{C}[\varepsilon]/(\varepsilon^2)$ to ``spin'' at one additional point. \begin{remark} The divisor class $B/2$ is also Cartier, although it is not effective so it is harder to visualize. Let $\mathcal{Z}\subset X\times X^{[n]}$ denote the universal subscheme of length $n$, and let $p:\mathcal{Z}\to X$ and $q:\mathcal{Z}\to X^{[n]}$ be the projections. Then the \emph{tautological bundle} $q_*p^*\mathcal{O}_X$ is a rank $n$ vector bundle with determinant of class $-B/2$. \end{remark} Any line bundle $L$ on $X$ induces a line bundle $L^{(n)}$ on the symmetric product. Pulling back this line bundle by the Hilbert-Chow morphism gives a line bundle $L^{[n]}:= h^*L^{(n)}$. This gives an inclusion $\Pic(X)\to \Pic(X^{[n]})$. If $L$ can be represented by a reduced effective divisor $D$, then $L^{[n]}$ can be represented by the locus $$D^{[n]} := \{Z\in X^{[n]} :Z\cap D \neq \emptyset\}.$$ Fogarty proves that the divisors mentioned so far generate the Picard group. \begin{theorem}[Fogarty \cite{Fogarty2}] Let $X$ be a smooth surface with $q(X) = 0$. Then $$\Pic(X^{[n]}) \cong \Pic(X) \oplus \mathbb{Z}(B/2).$$ Thus, tensoring by $\mathbb{R}$, $$N^1(X^{[n]}) \cong N^1(X) \oplus \mathbb{R} B.$$ \end{theorem} There is another interesting way to use a line bundle on $X$ to construct effective divisor classes. In examples, many extremal effective divisors can be realized in this way. \begin{example} Suppose $L$ is a line bundle on $X$ with $m:=h^0(L)> n$. If $Z\subset X$ is a general subscheme of length $n$, then $H^0(L\otimes I_Z)\subset H^0(L)$ is a subspace of codimension $n$. Thus we get a rational map $$\phi: X^{[n]}\dashrightarrow G := \Gr(m-n,m)$$ to the Grassmannian $G$ of codimension $n$ subspaces of $H^0(L)$. The line bundle $\tilde L^{[n]} := \phi^*\mathcal{O}_G(1)$ (which is well-defined since the indeterminacy locus of $\phi$ has codimension at least $2$) can be represented by an effective divisor as follows. 
Let $W\subset H^0(L)$ be a sufficiently general subspace of dimension $n$; one frequently takes $W$ to be the subspace of sections of $L$ passing through $m-n$ general points. Then the locus $$\tilde D^{[n]} = \{Z\in X^{[n]} : H^0(L\otimes I_Z) \cap W \neq \{0\}\}$$ is an effective divisor representing $\phi^*\mathcal{O}_G(1)$. \end{example} \subsubsection{Curve classes} Let $C\subset X$ be an irreducible curve. There are two immediate ways that we can induce a curve class on $X^{[n]}$. \begin{example} Fix $n-1$ points $p_1,\ldots,p_{n-1}$ on $X$ which are not in $C$. Allowing an $n$th point $p_n$ to travel along $C$ gives a curve $\tilde C_{[n]} \subset X^{[n]}$. \end{example} \begin{example} Suppose $C$ admits a $g_n^1$. If the $g_n^1$ is base-point free, then we get a degree $n$ map $C\to \mathbb{P}^1$. The fibers of this map induce a rational curve $\mathbb{P}^1\to X^{[n]}$, and we write $C_{[n]}$ for the class of the image. If the $g_n^1$ is not base-point free, we can first remove the basepoints to get a map $\mathbb{P}^1\to X^{[m]}$ for some $m<n$, and then glue the basepoints back on to get a map $\mathbb{P}^1\to X^{[n]}$. The class $C_{[n]}$ does not depend on the particular $g_n^1$ used to construct the curve (see for example \cite[Proposition 3.5]{HuizengaThesis} in the case of $\mathbb{P}^2$). \end{example} \begin{remark} Typically the curve classes $C_{[n]}$ are more interesting than $\tilde C_{[n]}$ and they frequently show up as extremal curves in the cone of curves. However, the class $C_{[n]}$ is only defined if $C$ carries an interesting linear series of degree $n$, while $\tilde C_{[n]}$ always makes sense; thus curves of class $\tilde C_{[n]}$ are also sometimes used. \end{remark} Both curve classes $\tilde C_{[n]}$ and $C_{[n]}$ have the useful property that the intersection pairing with divisors is preserved, in the sense that if $D\subset X$ is a divisor then $$D^{[n]}.\tilde C_{[n]} = D^{[n]}.C_{[n]} = D.C;$$ indeed, it suffices to check the equalities when $D$ and $C$ intersect transversely, and in that case $D^{[n]}$ and $C_{[n]}$ (resp. $\tilde C_{[n]}$) intersect transversely in $D.C$ points. The intersection with $B$ is more interesting. Clearly $$\tilde C_{[n]}.B = 0.$$ On the other hand, the nonreduced schemes parameterized by a curve of class $C_{[n]}$ correspond to ramification points of the degree $n$ map $C\to \mathbb{P}^1$. The Riemann-Hurwitz formula then implies $$C_{[n]}.B = 2g(C)-2+2n.$$ One further curve class is useful; we write $C_0$ for the class of a curve contracted by the Hilbert-Chow morphism. \subsubsection{The intersection pairing} At this point we have collected enough curve and divisor classes to fully determine the intersection pairing between curves and divisors and find relations between the various classes. The classes $C_0$ and $C_{[n]}$ for $C$ any irreducible curve span $N_1(X^{[n]})$, so to completely compute the intersection pairing we are only missing the intersection number $C_0.B$. However, since this intersection number is negative, it cannot be computed by a direct enumerative count; we instead use the additional curve and divisor classes $\tilde C_{[n]}$ and $\tilde D^{[n]}$ to compute it. To this end, we compute the intersection numbers of $\tilde D^{[n]}$ with our curve classes.
\begin{example}\label{ex-divCurveInt1} To compute $\tilde D^{[n]}.C_0$, let $m = h^0(\mathcal{O}_X(D))$, fix $m-n$ general points $p_1,\ldots,p_{m-n}$ in $X$, and represent $\tilde D^{[n]}$ as the set of schemes $Z$ such that there is a curve on $X$ of class $D$ passing through $p_1,\ldots,p_{m-n}$ and $Z$. Schemes parameterized by $C_0$ are supported at $n-1$ general points $q_1,\ldots,q_{n-1}$, with a spinning tangent vector at $q_{n-1}$. There is a unique curve $D'$ of class $D$ passing through $p_1,\ldots,p_{m-n},q_1,\ldots,q_{n-1}$, and it is smooth at $q_{n-1}$, so there is a single point of intersection between $C_0$ and $\tilde D^{[n]}$, occurring when the tangent vector at $q_{n-1}$ is tangent to $D'$. Thus $\tilde D^{[n]}.C_0 = 1$. \end{example} \begin{example} Next we compute $\tilde C_{[n]}.\tilde D^{[n]}$. Represent $\tilde D^{[n]}$ as in Example \ref{ex-divCurveInt1}. The curve class $\tilde C_{[n]}$ is represented by fixing $n-1$ points $q_1,\ldots,q_{n-1}$ and letting $q_n$ travel along $C$. There is a unique curve $D'$ of class $D$ passing through $p_1,\ldots,p_{m-n},q_1,\ldots,q_{n-1}$, so $\tilde C_{[n]}$ meets $\tilde D^{[n]}$ when $q_n \in C\cap D'$. Thus $\tilde C_{[n]}.\tilde D^{[n]} = C.D$. \end{example} \begin{example} For an irreducible curve $C\subset X$, write $\hat C_{[n]}$ for the curve class on $X^{[n]}$ obtained by fixing $n-2$ general points in $X$, fixing one point on $C$, and letting one point travel along $C$ and collide with the point fixed on $C$. It follows immediately that \begin{align*} \hat C_{[n]}.D^{[n]} &= C.D\\ \hat C_{[n]}.\tilde D^{[n]} &= C.D -1. \end{align*} Less immediately, we find $\hat C_{[n]}.B = 2$: while the curve meets $B$ set-theoretically in one point, a tangent space calculation shows this intersection has multiplicity $2$. \end{example} We now collect our known intersection numbers. $$\begin{array}{c|ccc} &D^{[n]} & \tilde D^{[n]} & B\\ \hline C_{[n]} & C.D && 2g(C)-2+2n\\ \tilde C_{[n]} & C.D & C.D& 0 \\ \hat C_{[n]} & C.D & C.D-1 & 2\\ C_0 & 0 & 1 \end{array}$$ As $\tilde D^{[n]}. C_0 \neq 0$, none of the divisors $\tilde D^{[n]}$ lie in the codimension one subspace $N^1(X)\subset N^1(X^{[n]})$. Therefore the divisor classes of type $D^{[n]}$ and $\tilde D^{[n]}$ together span $N^1(X^{[n]})$. It now follows that $$C_0+\hat C_{[n]} = \tilde C_{[n]}$$ since both sides pair the same with divisors $D^{[n]}$ and $\tilde D^{[n]}$, and thus $C_0.B=-2$. We then also find relations $$C_{[n]} = \tilde C_{[n]}-(g(C)-1+n)C_0$$ and $$\tilde D^{[n]} = D^{[n]} - \frac{1}{2}B.$$ In particular, the divisors of type $\tilde D^{[n]}$ are all in the half-space of divisors with negative coefficient of $B$ in terms of the Fogarty isomorphism $N^1(X^{[n]}) \cong N^1(X) \oplus \mathbb{R} B$. We can also complete our intersection table. $$\begin{array}{c|ccc} &D^{[n]} & \tilde D^{[n]} & B\\ \hline C_{[n]} & C.D &C.D -(g(C)-1+n)& 2g(C)-2+2n\\ \tilde C_{[n]} & C.D & C.D& 0 \\ \hat C_{[n]} & C.D & C.D-1 & 2\\ C_0 & 0 & 1 & -2 \end{array}$$ \subsubsection{Some nef divisors}\label{sssec-someNef} Part of the nef cone of $X^{[n]}$ now follows from our knowledge of the intersection pairing. First observe that since $C_0.D^{[n]} = 0$ and $C_0.B <0$, the nef cone is contained in the half-space of divisors with nonpositive $B$-coefficient in terms of the Fogarty isomorphism. If $D$ is an ample divisor on $X$, then the divisor $D^{(n)}$ on the symmetric product is also ample, so $D^{[n]}$ is nef.
Since a limit of nef divisors is nef, it follows that if $D$ is nef on $X$ then $D^{[n]}$ is nef on $X^{[n]}$. Furthermore, if $D$ is on the boundary of the nef cone of $X$ then $D^{[n]}$ is on the boundary of the nef cone of $X^{[n]}$. Indeed, if $C.D = 0$ then $\tilde C_{[n]}.D^{[n]} = 0$ as well. This proves $$\Nef (X^{[n]}) \cap N^1(X) = \Nef(X),$$ where by abuse of notation we embed $N^1(X)$ in $N^1(X^{[n]})$ by $D\mapsto D^{[n]}$. Boundary nef divisors which are not contained in the hyperplane $N^1(X)$ are more interesting and more challenging to compute. Bridgeland stability and the positivity lemma will give us a tool for computing and describing these classes. \subsubsection{Examples} We close our initial discussion of the birational geometry of Hilbert schemes of points by considering several examples from this classical point of view. \begin{example}[$\mathbb{P}^{2[n]}$]\label{ex-P2nNef} The Neron-Severi space $N^1(\mathbb{P}^{2[n]})$ of the Hilbert scheme of $n$ points in $\mathbb{P}^2$ is spanned by $H^{[n]}$ and $B$, where $H$ is the class of a line in $\mathbb{P}^2$. Any divisor in the cone $(H,B]$ is negative on $C_0$, so the locus $B$ swept out by curves of class $C_0$ is contained in the stable base locus of any divisor in this chamber. Since $B.\tilde H_{[n]}=0$ and $\tilde H_{[n]}$ is the class of a moving curve, the divisor $B$ is an extremal effective divisor. The divisor $H^{[n]}$ is an extremal nef divisor by \S\ref{sssec-someNef}, so to compute the full nef cone we only need one more extremal nef class. The line bundle $\mathcal{O}_{\mathbb{P}^2}(n-1)$ is \emph{$(n-1)$-very ample}, meaning that if $Z\subset \mathbb{P}^2$ is any zero-dimensional subscheme of length $n$, then $H^0(I_Z(n-1))$ has codimension $n$ in $H^0(\mathcal{O}_{\mathbb{P}^2}(n-1))$. Consequently, if $G$ is the Grassmannian of codimension-$n$ planes in $H^0(\mathcal{O}_{\mathbb{P}^2}(n-1))$, then the natural map $\phi: \mathbb{P}^{2[n]}\to G$ is a \emph{morphism}. Thus $\phi^* \mathcal{O}_G(1)$ is nef. In our notation for divisors, putting $D = (n-1)H$ we conclude that $$\tilde D^{[n]} = (n-1)H^{[n]} - \frac{1}{2}B$$ is nef. Furthermore, $\tilde D^{[n]}$ is not ample. Numerically, simply observe that $\tilde D^{[n]}. H_{[n]} = 0$. More geometrically, if two length $n$ schemes $Z,Z'$ are contained in the same line $L$ then the subspaces $H^0(I_Z(n-1))$ and $H^0(I_{Z'}(n-1))$ are equal, so $\phi$ identifies $Z$ and $Z'$. Note that if $Z$ and $Z'$ are both contained in a single line $L$ then their ideal sheaves can be written as extensions $$0\to \mathcal{O}_{\mathbb{P}^2}(-1) \to I_Z \to \mathcal{O}_L(-n)\to 0$$ $$0\to \mathcal{O}_{\mathbb{P}^2}(-1) \to I_{Z'} \to \mathcal{O}_L(-n)\to 0.$$ This suggests that if we have some new notion of semistability where $I_Z$ is strictly semistable with Jordan-H\"older factors $\mathcal{O}_{\mathbb{P}^2}(-1)$ and $\mathcal{O}_L(-n)$ then the ideal sheaves $I_Z$ and $I_{Z'}$ will be $S$-equivalent. Thus, in the moduli space of such objects, $I_Z$ and $I_{Z'}$ will be represented by the same point. \end{example} \begin{example}[$\mathbb{P}^{2[2]}$]\label{ex-P2Hilb2} The divisor $\tilde H^{[2]} = H^{[2]}-\frac{1}{2}B$ spanning an edge of the nef cone is also an extremal effective divisor on $\mathbb{P}^{2[2]}$. Indeed, the orthogonal curve class $H_{[2]}$ is a moving curve on $\mathbb{P}^{2[2]}$. Thus there are two chambers in the stable base locus decomposition of $\overline{\Eff}(\mathbb{P}^{2[2]})$.
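Concretely, combining this with Example \ref{ex-P2nNef}, the two chambers can be described as follows. On the closed nef chamber $[H^{[2]}-\frac{1}{2}B,H^{[2]}]$ the stable base locus is empty, since both extremal nef classes are semiample: $H^{[2]}$ is pulled back along the Hilbert-Chow morphism and $H^{[2]}-\frac{1}{2}B$ is pulled back along the morphism $\phi$ to the Grassmannian. On $(H^{[2]},B]$ the stable base locus is $B$: any such class is negative on $C_0$, so $B$ lies in the stable base locus, and it is also the sum of a nonnegative multiple of the semiample class $H^{[2]}$ and a positive multiple of $B$, so the stable base locus is contained in $B$.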
\end{example} \begin{example}[$\mathbb{P}^{2[3]}$]\label{ex-P2Hilb3} By Example \ref{ex-P2nNef}, on $\mathbb{P}^{2[3]}$ the divisor $2H^{[3]}-\frac{1}{2}B$ is an extremal nef divisor. The open chambers of the stable base locus decomposition are $$(H^{[3]},B), (2H^{[3]}-\frac{1}{2}B,H^{[3]}), \mbox{ and } (H^{[3]}-\frac{1}{2}B,2H^{[3]}-\frac{1}{2}B).$$ To establish this, first observe that $H^{[3]} - \frac{1}{2}B$ is the class of the locus $D$ of collinear schemes, since $D.C_0 = 1$ and $D.\tilde H_{[3]}=1.$ The divisor $2H^{[3]}-\frac{1}{2}B$ is orthogonal to curves of class $H_{[3]}$, so the locus of collinear schemes swept out by these curves lies in the stable base locus of any divisor in $(H^{[3]}-\frac{1}{2}B, 2H^{[3]}-\frac{1}{2}B)$. In the other direction, any divisor in $(H^{[3]}-\frac{1}{2}B,2H^{[3]}-\frac{1}{2}B)$ is the sum of a divisor on the ray spanned by $D$ and an ample divisor. It follows that the stable base locus in this chamber is exactly $D$. \end{example} For many more examples of the stable base locus decomposition of $\mathbb{P}^{2[n]}$, see \cite{ABCH} for explicit examples with $n\leq 9$, \cite{CH} for a discussion of the chambers where monomial schemes are in the base locus, and \cite{HuizengaJAG,CHW} for the effective cone. Alternately, see \cite{CHGokova} for a deeper survey. Also, see the work of Li and Zhao \cite{LiZhao} for more recent developments unifying several of these topics. \subsection{Birational geometry of moduli spaces of sheaves} We now discuss some of the basic aspects of the birational geometry of moduli spaces of sheaves. Many of the concepts are mild generalizations of the picture for Hilbert schemes of points. \subsubsection{Line bundles}\label{ssec-lineBundles} The main method of constructing line bundles on a moduli space of sheaves is by a determinantal construction. First suppose $\mathcal{E}/S$ is a family of sheaves on $X$ parameterized by $S$. Let $p:S\times X\to S$ and $q:S\times X\to X$ be the projections. The \emph{Donaldson homomorphism} is a map $\lambda_\mathcal{E}:K(X)\to \Pic(S)$ defined by the composition $$\lambda_\mathcal{E}:K(X)\fto{q^*} K^0(S\times X)\fto{\cdot [\mathcal{E}]} K^0(S\times X) \fto{p_!} K^0(S)\fto{\det}\Pic(S) $$ Here $p_! = \sum_i (-1)^i R^ip_*$. Informally, we pull back a sheaf on $X$ to the product, twist by the family $\mathcal{E}$, push forward to $S$, and take the determinant line bundle. Thus we obtain from any class in $K(X)$ a line bundle on the base $S$ of the family $\mathcal{E}$. The above discussion is sufficient to define line bundles on a moduli space $M({\bf v})$ of sheaves if there is a universal family $\mathcal{E}$ on $M({\bf v})$: there is then a map $\lambda_\mathcal{E}:K(X)\to \Pic(M({\bf v}))$, and the image typically consists of many interesting line bundles on the moduli space. Things are slightly more delicate in the general case where there is no universal family. As motivation, given a class ${\bf w}\in K(X)$, we would like to define a line bundle $L$ on $M({\bf v})$ with the following property. Suppose $\mathcal{E}/S$ is a family of sheaves of character ${\bf v}$ and that $\phi:S\to M({\bf v})$ is the moduli map. Then we would like there to be an isomorphism $\phi^*L \cong \lambda_{\mathcal{E}}({\bf w}),$ so that the determinantal line bundle $\lambda_{\mathcal{E}}({\bf w})$ on $S$ is the pullback of a line bundle on the moduli space $M({\bf v})$. 
In order for this to be possible, observe that the line bundle $\lambda_{\mathcal{E}}({\bf w})$ must be unchanged when the family $\mathcal{E}$ is replaced by $\mathcal{E}\otimes p^*N$ for some line bundle $N\in \Pic(S)$. Indeed, the moduli map $\phi: S\to M({\bf v})$ is not changed when we replace $\mathcal{E}$ by $\mathcal{E}\otimes p^*N$, so $\phi^*L$ is unchanged as well. However, a computation shows that $$\lambda_{\mathcal{E}\otimes p^*N}({\bf w}) = \lambda_{\mathcal{E}}({\bf w}) \otimes N^{\otimes \chi({\bf v}\otimes {\bf w})}.$$ Thus, in order for there to be a chance of defining a line bundle $L$ on $M({\bf v})$ with the desired property we need to assume that $\chi({\bf v}\otimes {\bf w})=0$. In fact, if $\chi({\bf v}\otimes {\bf w})=0$, then there is a line bundle $L$ as above on the stable locus $M^s({\bf v})$, denoted by $\lambda^s({\bf w})$. To handle things rigorously, it is necessary to go back to the construction of the moduli space via GIT. See \cite[\S 8.1]{HuybrechtsLehn} for full details, as well as a discussion of line bundles on the full moduli space $M({\bf v})$. \begin{theorem}[{\cite[Theorem 8.1.5]{HuybrechtsLehn}}] Let ${\bf v}^\perp\subset K(X)$ denote the orthogonal complement of ${\bf v}$ with respect to the Euler pairing $\chi(-\otimes -)$. Then there is a natural homomorphism $$\lambda^s: {\bf v^\perp}\to \Pic(M^s({\bf v})).$$ \end{theorem} In general it is a difficult question to completely determine the Picard group of the moduli space. One of the best results in this direction is the following theorem of Jun Li. \begin{theorem}[\cite{JunLiPicard}] Let $X$ be a regular surface, and let ${\bf v}\in K(X)$ with $\rk {\bf v}=2$ and $\Delta({\bf v}) \gg 0$. Then the map $$\lambda^s : {\bf v}^\perp \otimes \mathbb{Q} \to \Pic(M^s({\bf v}))\otimes \mathbb{Q}$$ is a surjection. \end{theorem} More precise results are somewhat rare. We discuss a few of the main such examples here. \begin{example}[Picard group of moduli spaces of sheaves on $\mathbb{P}^2$] Let $M({\bf v})$ be a moduli space of sheaves on $\mathbb{P}^2$. The Picard group of this space was determined by Dr\'ezet \cite{DrezetPicard}. The answer depends on the $\delta$-function introduced in the classification of semistable characters in \S\ref{sssec-existP2}. If ${\bf v}$ is the character of an exceptional bundle then $M({\bf v})$ is a point and there is nothing to discuss. If $\delta(\mu({\bf v})) = \Delta({\bf v})$, then $M({\bf v})$ is a moduli space of so-called \emph{height zero} bundles and the Picard group is isomorphic to $\mathbb{Z}$. Finally, if $\Delta({\bf v})> \delta(\mu({\bf v}))$ then the Picard group is isomorphic to $\mathbb{Z}\oplus \mathbb{Z}$. In each case, the Donaldson morphism is surjective. \end{example} \begin{example}[Picard group of moduli spaces of sheaves on $\mathbb{P}^1\times \mathbb{P}^1$] Let $M({\bf v})$ be a moduli space of sheaves on $\mathbb{P}^1\times \mathbb{P}^1$. Already in this case the Picard group does not appear to be known in every case. See \cite{YoshiokaRuled} for some partial results, as well as results on ruled surfaces in general. \end{example} \begin{example}[Picard group of moduli spaces of sheaves on a $K3$ surface] Let $X$ be a $K3$ surface, and let ${\bf v}\in K_{\num}(X)$ be a primitive positive vector (see \S\ref{sssec-K3exist}). Let $H$ be a polarization which is generic with respect to ${\bf v}$. In this case the story is similar to the computation for $\mathbb{P}^2$, with the Beauville-Bogomolov form playing the role of the $\delta$ function.
If $\langle{\bf v},{\bf v}\rangle = -2$ then $M_H({\bf v})$ is a point. If $\langle {\bf v},{\bf v}\rangle = 0$, then the Donaldson morphism $\lambda: {\bf v}^\perp\otimes \mathbb{R} \to N^1(M_H({\bf v}))$ is surjective with kernel spanned by ${\bf v}$, and $N^1(M_H({\bf v}))$ is isomorphic to ${\bf v}^\perp/{\bf v}$. Finally, if $\langle{\bf v},{\bf v}\rangle >0$ then the Donaldson morphism is an isomorphism. See \cite{Yoshioka3} or \cite{BM2} for details. \end{example} \begin{example}[Brill-Noether divisors] For birational geometry it is important to be able to construct sections of line bundles. The determinantal line bundles introduced above frequently have special sections vanishing on \emph{Brill-Noether divisors}. Let $(X,H)$ be a smooth surface, and let ${\bf v}$ and ${\bf w}$ be an orthogonal pair of Chern characters, i.e. suppose that $\chi({\bf v}\otimes {\bf w})=0$, and suppose that there is a reasonable, e.g. irreducible, moduli space $M_H({\bf v})$ of semistable sheaves. Suppose $F$ is a vector bundle with $\ch F = {\bf w}$, and consider the locus $$D_F = \{ E \in M_H({\bf v}): H^0(E\otimes F)\neq 0\}.$$ If we assume that $H^2(E\otimes F)=0$ for every $E\in M_H({\bf v})$ and that $H^0(E\otimes F) = 0$ for a general $E\in M_H({\bf v})$ then the locus $D_F$ will be an effective divisor. Furthermore, its class is $\lambda({\bf w}^*)$. The assumption that $H^2(E\otimes F) = 0$ often follows easily from stability and Serre duality. For instance, if $\mu_H({\bf v})+\mu_H({\bf w})> \mu_H(K_X)$ and $F$ is a semistable vector bundle then $$H^2(E\otimes F) = \Ext^2(F^*,E) = \Hom(E,F^*(K_X))^*=0$$ by stability. On the other hand, it can be quite challenging to verify that $H^0(E\otimes F)= 0$ for a general $E\in M_H({\bf v})$. These types of questions have been studied in \cite{CHW} in the case of $\mathbb{P}^2$ and \cite{Ryan} in the case of $\mathbb{P}^1\times \mathbb{P}^1$. Interesting effective divisors in the birational geometry of moduli spaces frequently arise in this way. \end{example} \subsubsection{The Donaldson-Uhlenbeck-Yau compactification} For Hilbert schemes of points $X^{[n]}$, the symmetric product $X^{(n)}$ offered an alternate compactification, with the map $h:X^{[n]}\to X^{(n)}$ being the Hilbert-Chow morphism. Recall that from a moduli perspective the Hilbert-Chow morphism sends the ideal sheaf $I_Z$ to (the $S$-equivalence class of) the structure sheaf $\mathcal{O}_Z$. Thinking of $\mathcal{O}_X$ as the double-dual of $I_Z$, the sheaf $\mathcal{O}_Z$ is the cokernel in the sequence $$0\to I_Z\to \mathcal{O}_X\to \mathcal{O}_Z\to 0.$$ The Donaldson-Uhlenbeck-Yau compactification can be viewed as analogous to the compactification of the Hilbert scheme by the symmetric product. Let $(X,H)$ be a smooth surface, and let ${\bf v}$ be the Chern character of a semistable sheaf of positive rank. Set-theoretically, the Donaldson-Uhlenbeck-Yau compactification $M_H^{DUY}({\bf v})$ of the moduli space $M_H({\bf v})$ can be defined as follows. Recall that the double dual of any torsion-free sheaf $E$ on $X$ is locally free, and there is a canonical inclusion $E\to E^{**}$. (Note, however, that the double-dual of a Gieseker semistable sheaf is in general only $\mu_H$-semistable). Define $T_E$ as the cokernel $$0\to E \to E^{**}\to T_E\to 0,$$ so that $T_E$ is a skyscraper sheaf supported on the singularities of $E$.
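For instance, if $E = I_Z$ is the ideal sheaf of a zero-dimensional scheme $Z$ as above, this sequence is just $$0\to I_Z\to \mathcal{O}_X\to \mathcal{O}_Z\to 0,$$ so $E^{**}\cong \mathcal{O}_X$ and $T_E\cong \mathcal{O}_Z$, recovering the data recorded by the Hilbert-Chow morphism.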
In the Donaldson-Uhlenbeck-Yau compactification of $M_H({\bf v})$, a sheaf $E$ is replaced by the pair $(E^{**},T_E)$ consisting of the $\mu_H$-semistable sheaf $E^{**}$ and the $S$-equivalence class of $T_E$, i.e. an element of some symmetric product $X^{(n)}$. In particular, two sheaves which have isomorphic double duals and have singularities supported at the same points (counting multiplicity) are identified in $M_H^{DUY}({\bf v})$, even if the particular singularities are different. The Jun Li morphism $j:M_H({\bf v})\to M_H^{DUY}({\bf v})$ inducing the Donaldson-Uhlenbeck-Yau compactification arises from the line bundle $\lambda({\bf w})$ associated to the character ${\bf w}$ of a $1$-dimensional torsion sheaf supported on a curve whose class is a multiple of $H$. See \cite[\S8.2]{HuybrechtsLehn} or \cite{JunLi} for more details. \subsubsection{Change of polarization}\label{sssec-variation} Classically, one of the main interesting sources of birational maps between moduli spaces of sheaves is provided by varying the polarization. Suppose that $\{H_t\}$ $(0\leq t \leq 1)$ is a continuous family of ample divisors on $X$. Let $E$ be a sheaf which is $\mu_{H_0}$-stable. It may happen for some time $t>0$ that $E$ is not $\mu_{H_t}$-stable. In this case, there is a smallest time $t_0$ where $E$ is not $\mu_{H_{t_0}}$-stable, and then $E$ is strictly $\mu_{H_{t_0}}$-semistable. There is then an exact sequence $$0\to F \to E \to G\to 0$$ of $\mu_{H_{t_0}}$-semistable sheaves with the same $\mu_{H_{t_0}}$-slope. For $t<t_0$, we have $$\mu_{H_t}(F)<\mu_{H_t}(E)<\mu_{H_t}(G).$$ On the other hand, in typical examples the inequalities will be reversed for $t>t_0$: $$\mu_{H_t}(F)>\mu_{H_t}(E)>\mu_{H_t}(G).$$ While $E$ is certainly not $\mu_{H_t}$-semistable for $t>t_0$, if there are sheaves $E'$ fitting as extensions in sequences $$0\to G\to E'\to F\to 0$$ then it may happen that $E'$ is $\mu_{H_t}$-stable for $t>t_0$ (although they are certainly not $\mu_{H_t}$-semistable for $t<t_0$). Thus, the set of $H_t$-semistable sheaves changes as $t$ crosses $t_0$, and the moduli space $M_{H_t}({\bf v})$ changes accordingly. It frequently happens that only some very special sheaves become destabilized as $t$ crosses $t_0$, in which case the expectation would be that the moduli spaces for $t<t_0$ and $t>t_0$ are birational. To clarify the dependence between the geometry of the moduli space $M_H({\bf v})$ and the choice of polarization $H$, we partition the cone $\Amp(X)$ of ample divisors on $X$ into chambers where the moduli space remains constant. Let ${\bf v}$ be a primitive vector, and suppose $E$ has $\ch(E) = {\bf v}$ and is strictly $H$-semistable for some polarization $H$. Let $F\subset E$ be an $H$-semistable subsheaf with $\mu_H(F) = \mu_H(E)$. Then the locus $\Lambda \subset \Amp(X)$ of polarizations $H'$ such that $\mu_{H'}(F) = \mu_{H'}(E)$ is a hyperplane in the ample cone, called a \emph{wall}. The collection of all walls obtained in this way gives the ample cone a locally finite wall-and-chamber decomposition. As $H$ varies within an open chamber, the moduli space $M_H({\bf v})$ remains unchanged. On the other hand, if $H$ crosses a wall then the moduli spaces on either side may be related in interesting ways. Notice that if say $X$ has Picard rank $1$ or we are considering Hilbert schemes of points then no interesting geometry can be obtained by varying the polarization. Recall that in Example \ref{ex-P2Hilb3} we saw that even $\mathbb{P}^{2[3]}$ has nontrivial alternate birational models. 
One of the goals of Bridgeland stability will be to view these alternate models as a variation of the stability condition. Variation of polarization is one of the simplest examples of how a stability condition can be modified in a continuous way, and Bridgeland stability will give us additional ``degrees of freedom'' with which to vary our stability condition. \section{Bridgeland stability}\label{sec-Bridgeland} The definition of a Bridgeland stability condition needs somewhat more machinery than the previous sections. However, we will primarily work with explicit stability conditions where the abstract nature of the definition becomes very concrete. While it would be a good idea to review the basics of derived categories of coherent sheaves, triangulated categories, t-structures, and torsion theories, it is also possible to first develop an appreciation for stability conditions and then go back and fill in the missing details. Good references for background on these topics include \cite{GelfandManin} and \cite{Huybrechts}. \subsection{Stability conditions in general} Let $X$ be a smooth projective variety. We write $D^b(X)$ for the bounded derived category of coherent sheaves on $X$. We also write $K_{\num}(X)$ for the Grothendieck group of $X$ modulo numerical equivalence. Following \cite{bridgeland:stable}, we make the following definition. \begin{definition}\label{def-Bridgeland} A \emph{Bridgeland stability condition} on $X$ is a pair $\sigma =(Z,\mathcal A)$ consisting of an $\mathbb{R}$-linear map $Z:K_{\num}(X)\otimes \mathbb{R}\to \mathbb{C}$ (called the \emph{central charge}) and the heart $\mathcal A\subset D^b(X)$ of a bounded t-structure (which is an abelian category). Additionally, we require that the following properties be satisfied. \begin{enumerate} \item\label{ax-positivity} (Positivity) If $0\neq E\in \mathcal A$, then $$Z(E)\in \mathbb{H}:=\{re^{i\theta}:0 < \theta \leq \pi \textrm{ and } r> 0\}\subset \mathbb{C}.$$ We define functions $r(E) = \Im Z(E)$ and $d(E) = -\Re Z(E)$, so that $r(E)\geq 0$ and $d(E) >0$ whenever $r(E)=0$. Thus $r$ and $d$ are generalizations of the classical rank and degree functions. The \emph{(Bridgeland) $\sigma$-slope} is defined by $$\mu_\sigma(E) = \frac{d(E)}{r(E)} = -\frac{\Re Z(E)}{\Im Z(E)}.$$ \item (Harder-Narasimhan filtrations) An object $E\in \mathcal A$ is called (Bridgeland) $\sigma$-(semi)stable if $$\mu_\sigma(F) \leqpar \mu_\sigma(E)$$ whenever $F\subset E$ is a subobject of $E$ in $\mathcal A$. We require that every object of $\mathcal A$ has a finite Harder-Narasimhan filtration in $\mathcal A$. That is, there is a unique filtration $$0= E_0 \subset E_1 \subset \cdots \subset E_\ell = E$$ of objects $E_i\in \mathcal A$ such that the quotients $F_i = E_i/E_{i-1}$ are $\sigma$-semistable with decreasing slopes $\mu_\sigma(F_1) > \cdots > \mu_{\sigma}(F_\ell)$. \item (Support property) The support property is one final more technical condition which must be satisfied. Fix a norm $\|\cdot\|$ on $K_{\num}(X)\otimes \mathbb{R}$. Then there must exist a constant $C>0$ such that $$\|E\| \leq C \| Z(E)\|$$ for all semistable objects $E\in \mathcal A$. \end{enumerate} \end{definition} \begin{remark} Let $(X,H)$ be a smooth surface. The subcategory $\coh X\subset D^b(X)$ of sheaves with cohomology supported in degree $0$ is the heart of the standard t-structure. We can then try to define a central charge $$Z(E) = -c_1(E).H+i\rk(E)H^2,$$ and the corresponding slope function is the ordinary slope $\mu_H$. 
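Indeed, for a sheaf $E$ of positive rank we can check directly that $$-\frac{\Re Z(E)}{\Im Z(E)} = \frac{c_1(E).H}{\rk(E)H^2},$$ which is a positive multiple of $\mu_H(E)$ and therefore orders sheaves of positive rank in the same way.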
However, this does \emph{not} give a Bridgeland stability condition, since $Z(E)=0$ for any finite length torsion sheaf. Thus it is not immediately clear in what way Bridgeland stability generalizes ordinary slope- or Gieseker stability. Nonetheless, for any fixed polarization $H$ and character ${\bf v}$ there are Bridgeland stability conditions $\sigma$ where the $\sigma$-(semi)stable objects of character ${\bf v}$ are precisely the $H$-Gieseker (semi)stable sheaves of character ${\bf v}$. See \S\ref{ssec-largeVolume} for more details. \end{remark} \begin{remark} To work with the definition of a stability condition, it is crucial to understand what it means for a map $F\to E$ between objects of the heart $\mathcal A$ to be injective. The following exercise is a good test of the definitions involved. \begin{exercise}\label{ex-exact} Let $\mathcal A \subset D^b(X)$ be the heart of a bounded t-structure, and let $\phi:F\to E$ be a map of objects of $\mathcal A$. Show that $\phi$ is injective if and only if the mapping cone $\cone(\phi)$ of $\phi$ is also in $\mathcal A$. In this case, there is an exact sequence $$0\to F\to E\to \cone(\phi)\to 0$$ in $\mathcal A$. \end{exercise} \end{remark} One of the most important features of Bridgeland stability is that the space of all stability conditions on $X$ is a complex manifold in a natural way. In particular, we are able to continuously vary stability conditions and study how the set (or moduli space) of semistable objects varies with the stability condition. Let $\Stab(X)$ denote the space of stability conditions on $X$. Then Bridgeland proves that there is a natural topology on $\Stab(X)$ such that the forgetful map \begin{align*} \Stab(X)&\to \Hom_\mathbb{R}(K_{\num}(X)\otimes \mathbb{R}, \mathbb{C})\\ (Z,\mathcal A) & \mapsto Z \end{align*} is a local homeomorphism. Thus if $\sigma = (Z,\mathcal A)$ is a stability condition and the linear map $Z$ is deformed by a small amount, there is a unique way to deform the category $\mathcal A$ to get a new stability condition. \subsubsection{Moduli spaces} Let $\sigma$ be a stability condition and fix a vector ${\bf v}\in K_{\num}(X)$. There is a notion of a flat family $\mathcal{E}/S$ of $\sigma$-semistable objects parameterized by an algebraic space $S$ \cite{BM2}. Correspondingly, there is a moduli stack $\mathcal M_\sigma({\bf v})$ parameterizing flat families of $\sigma$-semistable object of character ${\bf v}$. In full generality there are many open questions about the geometry of these moduli spaces. In particular, when is there a projective coarse moduli space $M_{\sigma}({\bf v})$ parameterizing $S$-equivalence classes of $\sigma$-semistable objects of character ${\bf v}$? Several authors have addressed this question for various surfaces, at least when the stability condition $\sigma$ does not lie on a \emph{wall} for ${\bf v}$ (see \S \ref{ssec-walls}). For instance, there is a projective moduli space $M_\sigma({\bf v})$ when $X$ is $\mathbb{P}^2$ \cite{ABCH}, $\mathbb{P}^1\times \mathbb{P}^1$ or $\mathbb{F}_1$ \cite{ArcaraMiles}, an abelian surface \cite{MYY}, a $K3$ surface \cite{BM2}, or an Enriques surface \cite{Nuer}. While projectivity of Gieseker moduli spaces can be shown in great generality, there is no known uniform GIT construction of moduli spaces of Bridgeland semistable objects. Each proof requires deep knowledge of the particular surface. 
\subsection{Stability conditions on surfaces}\label{ssec-surfaceConds} Bridgeland \cite{bridgelandK3} and Arcara-Bertram \cite{AB} explain how to construct stability conditions on a smooth surface. The construction is very explicit, and these are the only kinds of stability conditions we will consider in this survey. Before beginning we introduce some notation to make the definitions more succinct. Let $X$ be a smooth surface and let $H,D\in \Pic(X)\otimes \mathbb{R}$ be an ample divisor and an arbitrary \emph{twisting} divisor, respectively. We formally define the \emph{twisted Chern character} $\ch^D = e^{-D} \ch$. Explicitly expanding this definition, this means that \begin{align*} \ch^D_0 &= \ch_0\\ \ch^D_1 &= \ch_1 - D\ch_0\\ \ch^D_2 &= \ch_2 - D\ch_1 + \frac{D^2}{2} \ch_0. \end{align*} We can also define twisted slopes and discriminants by the formulas \begin{align*} \mu_{H,D} &= \frac{H.\ch_1^D}{H^2 \ch_0^D}\\ \Delta_{H,D} &= \frac{1}{2} \mu_{H,D}^2 - \frac{\ch_2^D}{H^2\ch_0^D}. \end{align*} For reasons that will become clear in \S\ref{ssec-largeVolume} it is often useful to add in an additional twist by $K_X/2$. We therefore additionally define $$\overline\ch^D = \ch^{D+\frac{1}{2}K_X} \qquad \overline \mu_{H,D} = \mu_{H,D+\frac{1}{2}K_X} \qquad \overline\Delta_{H,D} = \Delta_{H,D+\frac{1}{2}K_X}.$$ \begin{remark} Note that the twisted slopes $\mu_{H,D}$ and $\overline \mu_{H,D}$ are primarily just a notational convenience; they only differ from the ordinary slope by a constant (depending on $H$ and $D$). On the other hand, twisted discriminants $\Delta_{H,D}$ and $\overline\Delta_{H,D}$ do not obey such a simple formula, and are genuinely useful. \end{remark} \begin{remark}[Twisted Gieseker stability] We have already encountered $H$-Gieseker (semi)stability and the associated moduli spaces $M_H({\bf v})$ of $H$-Gieseker semistable sheaves. There is a mild generalization of this notion called \emph{$(H,D)$-twisted Gieseker (semi)stability}. A torsion-free coherent sheaf $E$ is $(H,D)$-twisted Gieseker (semi)stable if whenever $F\subsetneq E$ we have \begin{enumerate} \item $\overline \mu_{H,D} (F)\leq \overline \mu_{H,D}(E)$ and \item whenever $\overline \mu_{H,D}(F) = \overline\mu_{H,D}(E)$, we have $\overline\Delta_{H,D}(F) \geqpar \overline\Delta_{H,D}(E)$. \end{enumerate} Compare with Example \ref{ex-stabilitySurface}, which is the case $D=0$. When $H,D$ are $\mathbb{Q}$-divisors, Matsuki and Wentworth \cite{MatsukiWentworth} construct projective moduli spaces $M_{H,D}({\bf v})$ of $(H,D)$-twisted Gieseker semistable sheaves. Note that any $\mu_H$-stable sheaf is both $H$-Gieseker stable and $(H,D)$-twisted Gieseker stable, so that the spaces $M_H({\bf v})$ and $M_{H,D}({\bf v})$ are often either isomorphic or birational. \end{remark} \begin{exercise}\label{ex-twistedBogomolov} Use the Hodge Index Theorem and the ordinary Bogomolov inequality (Theorem \ref{thm-bogomolov}) to show that if $E$ is $\mu_H$-semistable then $$\overline\Delta_{H,D}(E) \geq 0.$$ \end{exercise} We now define a half-plane (or \emph{slice}) of stability conditions on $X$ corresponding to a choice of divisors $H,D\in \Pic(X)\otimes \mathbb{R}$ as above. First fix a number $\beta\in \mathbb{R}$. We define two full subcategories of the category $\coh X$ of coherent sheaves by \begin{align*} \mathcal{T}_\beta &= \{E\in \coh X:\overline \mu_{H,D} (G) > \beta \textrm{ for every quotient $G$ of }E\}\\ \mathcal F_\beta &= \{E \in \coh X: \overline\mu_{H,D}(F) \leq \beta \textrm{ for every subsheaf $F$ of }E\}. 
\end{align*} Note that by convention the (twisted) Mumford slope of a torsion sheaf is $\infty$, so that $\mathcal{T}_\beta$ contains all the torsion sheaves on $X$. On the other hand, sheaves in $\mathcal F_\beta$ have no torsion subsheaf and so are torsion-free. For any $\beta\in \mathbb{R}$, the pair of categories $(\mathcal{T}_\beta,\mathcal F_\beta)$ form what is called a \emph{torsion pair}. Briefly, this means that $\Hom(T,F)=0$ for any $T\in \mathcal{T}_\beta$ and $F\in \mathcal F_\beta$, and any $E\in \coh X$ can be expressed naturally as an extension $$0\to F\to E\to T\to 0$$ of a sheaf $T\in T_\beta$ by a sheaf $F\in \mathcal F_\beta$. Then there is an associated t-structure with heart $$\mathcal A_\beta = \{E^{\bullet} : \mbox{H}^{-1}(E^{\bullet})\in \mathcal F_\beta,\mbox{H}^0(E^\bullet) \in \mathcal{T}_{\beta}, \textrm{ and } \mbox{H}^i(E^\bullet)=0\textrm{ for } i\neq -1,0\} \subset D^b(X),$$ where we use a Roman $\mbox{H}^i(E^{\bullet})$ to denote cohomology sheaves. Some objects of $\mathcal A_\beta$ are the sheaves $T$ in $\mathcal{T}_\beta$ (viewed as complexes sitting in degree $0$) and shifts $F[1]$ where $F\in \mathcal F_\beta$, sitting in degree $-1$. More generally, every object $E^{\bullet}\in \mathcal A_\beta$ is an extension $$0\to \mbox{H}^{-1}(E^{\bullet})[1] \to E^\bullet \to \mbox{H}^0(E^{\bullet})\to 0,$$ where the sequence is exact in the heart $\mathcal A_\beta$. To define stability conditions we now need to define central charges compatible with the hearts $\mathcal A_\beta$. Let $\alpha\in \mathbb{R}_{>0}$ be an arbitrary positive real number. We define $$Z_{\beta,\alpha} = -\overline \ch_2^{D+\beta H}+\frac{\alpha^2H^2}{2}\overline \ch_0^{D+\beta H} + i H \overline\ch_1^{D+\beta H},$$ and put $\sigma_{\beta,\alpha} = (Z_{\beta,\alpha},\mathcal A_\beta)$. Note that if $E$ is an object of nonzero rank with twisted slope $\overline \mu_{H,D}$ and discriminant $\overline \Delta_{H,D}$ then the corresponding Bridgeland slope is $$\mu_{\sigma_{\beta,\alpha}} = -\frac{\Re Z_{\beta,\alpha}}{\Im Z_{\beta,\alpha}} = \frac{(\overline \mu_{H,D} - \beta)^2 -\alpha^2 - 2\overline \Delta_{H,D}}{\overline \mu_{H,D} - \beta}.$$ \begin{theorem}[\cite{AB}] Let $X$ be a smooth surface, and let $H,D\in \Pic(X)\otimes \mathbb{R}$ with $H$ ample. If $\beta,\alpha\in \mathbb{R}$ with $\alpha>0$, then the pair $\sigma_{\beta,\alpha}=(Z_{\beta,\alpha},\mathcal A_\beta)$ defined above is a Bridgeland stability condition. \end{theorem} The most interesting part of the theorem is the verification of the Positivity axiom \ref{ax-positivity} in the Definition \ref{def-Bridgeland} of a stability condition, which we now sketch. The other parts are quite formal. \begin{proof}[Sketch proof of positivity] Note that $Z:=Z_{\beta,\alpha}$ is an $\mathbb{R}$-linear map. Since the upper half-plane $\mathbb{H} = \{re^{i\theta}:0< \theta \leq \pi \textrm{ and } r>0\}$ is closed under addition, the exact sequence $$0\to \mbox{H}^{-1}(E^{\bullet})[1] \to E^{\bullet} \to \mbox{H}^0(E^\bullet)\to 0$$ implies that it is sufficient to check $Z(T)\in \mathbb{H}$ and $Z(F[1]) \in \mathbb{H}$ whenever $T\in \mathcal{T}_\beta$ and $F\in \mathcal F_\beta$. If $T\in \mathcal{T}_\beta$ is not torsion, then $\overline\mu_{H,D}(T) > \beta$ is finite. Expanding the definitions immediately gives $H.\overline\ch_1^{D+\beta H}(T)>0$, so $Z(T)\in \mathbb{H}$. If $T$ is torsion with positive-dimensional support, then again $H.\overline\ch_1^{D+\beta H}(T) >0$ and $Z(T)\in \mathbb{H}$. 
Finally, if $T\neq 0$ has zero-dimensional support then $Z(T) = -\overline \ch_2^{D+\beta H}(T) = -\ch_2(T) <0$ is a negative real number, so $Z(T)\in \mathbb{H}$. Suppose $0\neq F\in \mathcal F_\beta$. If actually $\overline\mu_{H,D}(F)<\beta$, then $H.\overline\ch_1^{D+\beta H}(F) < 0$ and $Z(F[1])\in \mathbb{H}$ again follows. So suppose that $\overline \mu_{H,D}(F) = \beta$, which gives $\Im Z(F)=0$. By the definition of $\mathcal F_\beta$, the sheaf $F$ is torsion-free and $\overline \mu_{H,D+\beta H}$-semistable of $\overline \mu_{H,D+\beta H}$ slope $0$. By Exercise \ref{ex-twistedBogomolov} we find that $\overline \Delta_{H,D+\beta H}(F) \geq 0$. The formula for the twisted discriminant and the fact that $\alpha>0$ then gives $\Re Z(F)>0$, so $\Re Z(F[1]) <0$ and $Z(F[1])$ is a negative real number, hence lies in $\mathbb{H}$. \end{proof} To summarize, if we let $\Pi=\{(\beta,\alpha):\beta,\alpha\in \mathbb{R}, \alpha>0\}$, the choice of a pair of divisors $H,D\in \Pic(X)\otimes \mathbb{R}$ with $H$ ample defines an embedding \begin{align*} \Pi &\to \Stab(X)\\ (\beta,\alpha)& \mapsto \sigma_{\beta,\alpha}. \end{align*} This half-plane of stability conditions is called the \emph{$(H,D)$-slice} of the stability manifold. We will sometimes abuse notation and write $\sigma\in \Pi$ for a stability condition $\sigma$ parameterized by the slice. While the stability manifold can be rather large and unwieldy in general (having complex dimension $\dim_\mathbb{R} K_{\num}(X)\otimes \mathbb{R}$), much of the interesting geometry can be studied by inspecting the different slices of the manifold. \subsection{Walls}\label{ssec-walls} Fix a class ${\bf v}\in K_{0}(X)$. The stability manifold $\Stab(X)$ of $X$ admits a locally finite wall-and-chamber decomposition such that the set of $\sigma$-semistable objects of class ${\bf v}$ does not vary as $\sigma$ varies within an open chamber. This is analogous to the wall-and-chamber decomposition of the ample cone $\Amp(X)$ for classical stability, see \S\ref{sssec-variation}. If ${\bf v}$ is primitive, then a stability condition $\sigma$ lies on a wall if and only if there is a strictly $\sigma$-semistable object of character ${\bf v}$. For computations, the entire stability manifold can be rather unwieldy to work with. One commonly restricts attention to stability conditions in some easily parameterized subset of the stability manifold. Here we focus on the $(H,D)$-slice $\{\sigma_{\beta,\alpha}:\beta,\alpha\in \mathbb{R}, \alpha>0\}$ of stability conditions on a smooth surface $X$ determined by a choice of divisors $H,D\in \Pic(X)\otimes \mathbb{R}$ with $H$ ample. \begin{definition} Let $X$ be a smooth surface, and fix divisors $H,D\in \Pic(X)\otimes \mathbb{R}$ with $H$ ample. Let ${\bf v}, {\bf w} \in K_{\num}(X)$ be two classes which have different $\mu_{\sigma_{\beta,\alpha}}$-slopes for some $(\beta,\alpha)$ with $\alpha>0$. \begin{enumerate} \item The \emph{numerical wall} for $\bf v$ determined by $\bf w$ is the subset $$W({\bf v},{\bf w})=\{(\beta,\alpha):\mu_{\sigma_{\beta,\alpha}}({\bf v}) = \mu_{\sigma_{\beta,\alpha}}({\bf w})\}\subset \Pi.$$ \item The numerical wall for ${\bf v}$ determined by ${\bf w}$ is a \emph{wall} if there is some $(\beta,\alpha)\in W({\bf v},{\bf w})$ and an exact sequence $$0\to F\to E\to G\to 0$$ of $\sigma_{\beta,\alpha}$-semistable objects with $\ch F = {\bf w}$ and $\ch E = {\bf v}$. \end{enumerate} \end{definition} \subsubsection{Geometry of numerical walls} The geometry of the numerical walls in a slice of the stability manifold is particularly easy to describe. 
Verifying the following properties is a good exercise in the algebra of Chern classes and the Bridgeland slope function. \begin{enumerate} \item First suppose ${\bf v}$ has nonzero rank and that the Bogomolov inequality $\overline\Delta_{H,D}({\bf v}) \geq 0$ holds. Then the vertical line $\beta = \overline\mu_{H,D}({\bf v})$ is a numerical wall. The other numerical walls form two nested families of semicircles on either side of the vertical wall. These semicircles have centers on the $\beta$-axis, and their apexes lie along the hyperbola $\Re Z_{\beta,\alpha}({\bf v}) = 0$ in $\Pi$. The two families of semicircles accumulate at the points $$(\overline\mu_{H,D}({\bf v})\pm \sqrt{2\overline\Delta_{H,D}({\bf v})},0)$$ of intersection of $\Re Z_{\beta,\alpha}({\bf v}) = 0$ with the $\beta$-axis. See Figure \ref{fig-walls} for an approximate illustration. \begin{figure} \caption{Schematic diagram of numerical walls in the $(H,D)$-slice for a nonzero rank character ${\bf v}$ with slope $\OV \mu_{H,D}$ and discriminant $\OV \Delta_{H,D}$.} \label{fig-walls} \end{figure} \item If instead ${\bf v}$ has rank zero but $c_1({\bf v})\neq 0$, then the curve $\Re Z_{\beta,\alpha}({\bf v}) = 0$ in $\Pi$ degenerates to the vertical line $$\beta = \frac{\overline\ch_2^D({\bf v})}{\overline\ch_1^D({\bf v}).H}.$$ The numerical walls for ${\bf v}$ are all semicircles with center $(\overline\ch_2^D({\bf v})/(\overline\ch_1^D({\bf v}).H),0)$ and arbitrary radius. \end{enumerate} \begin{exercise}If ${\bf v},{\bf w}$ have nonzero rank and different slopes, the numerical semicircular wall $W({\bf v},{\bf w})$ has center $(s_W,0)$ and radius $\rho_W$ satisfying \begin{align*} s_W &= \frac{\overline \mu_{H,D}({\bf v})+\overline \mu_{H,D}({\bf w})}{2}-\frac{\overline\Delta_{H,D}({\bf v}) - \overline\Delta_{H,D}({\bf w})}{\overline\mu_{H,D}({\bf v}) - \overline \mu_{H,D}({\bf w})}\\ \rho_W^2 &= (s_W-\overline\mu_{H,D}({\bf v}))^2 - 2\overline \Delta_{H,D}({\bf v}).\end{align*} If $(s_W-\overline\mu_{H,D}({\bf v}))^2\leq 2\overline\Delta_{H,D}({\bf v})$, then the wall is empty. \end{exercise} \begin{remark} Let ${\bf v}$ be a character of nonzero rank. It follows from the above discussion that if $W,W'$ are numerical walls for ${\bf v}$ both lying left of the vertical wall $\beta = \overline\mu_{H,D}({\bf v})$ then $W$ is nested inside $W'$ if and only if $s_W > s_{W'}$, where the center of $W$ (resp. $W'$) is $(s_W,0)$ (resp. $(s_{W'},0)$). \end{remark} \subsubsection{Walls and destabilizing sequences} In the definition of a wall $W:=W({\bf v},{\bf w})$ for ${\bf v}$ determined by a character ${\bf w}$ we required that there is some point $(\beta,\alpha)\in W$ and a \emph{destabilizing} exact sequence $$0\to F\to E\to G\to 0$$ of $\sigma_{\beta,\alpha}$-semistable objects, where $\ch(E) = {\bf v}$ and $\ch(F) = {\bf w}$. Note that since $(\beta,\alpha)\in W$ we in particular have $\mu_{\sigma_{\beta,\alpha}}({ F}) = \mu_{\sigma_{\beta,\alpha}}({E}) = \mu_{\sigma_{\beta,\alpha}}(G)$. The above sequence is an exact sequence of objects of the category $\mathcal A_\beta$. By the geometry of the numerical walls, the wall $W$ separates the slice $\Pi$ into two open regions $\Omega,\Omega'$. Relabeling the regions if necessary, for $\sigma \in \Omega$ we have $\mu_\sigma(F) > \mu_\sigma(E)$. Therefore $E$ is not $\sigma$-semistable for any $\sigma\in \Omega$. On the other hand, $E$ \emph{may} be $\sigma$-semistable for $\sigma\in \Omega'$; at least the subobject $F\subset E$ does not violate the semistability of $E$. 
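To make these formulas concrete, here is a small numerical illustration; the invariants below are chosen purely to keep the arithmetic transparent and are not attached to any particular sheaf. Suppose ${\bf v}$ and ${\bf w}$ have nonzero rank with $$\overline\mu_{H,D}({\bf v})=0,\quad \overline\Delta_{H,D}({\bf v})=2,\qquad \overline\mu_{H,D}({\bf w})=-1,\quad \overline\Delta_{H,D}({\bf w})=0.$$ The formulas of the exercise give $$s_W=\frac{0+(-1)}{2}-\frac{2-0}{0-(-1)}=-\frac{5}{2},\qquad \rho_W^2=\left(-\frac{5}{2}-0\right)^2-2\cdot 2=\frac{9}{4},$$ so the numerical wall $W({\bf v},{\bf w})$ is the semicircle with center $(-5/2,0)$ and radius $3/2$, meeting the $\beta$-axis at $\beta=-4$ and $\beta=-1$. It lies entirely to the left of the vertical wall $\beta=\overline\mu_{H,D}({\bf v})=0$, and its apex $(-5/2,3/2)$ satisfies $\mu_{\sigma_{\beta,\alpha}}({\bf v})=0$, hence lies on the curve $\Re Z_{\beta,\alpha}({\bf v})=0$, in agreement with the description in (1) above.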
Our definition of a wall is perhaps somewhat unsatisfactory due to the dependence on picking some point $(\beta,\alpha)\in W$ where there is a destabilizing exact sequence as above. The next result shows that this definition is equivalent to an a priori stronger definition which appears more natural. Roughly speaking, destabilizing sequences ``persist'' along the entire wall. \begin{proposition}[{\cite[Lemma 6.3]{ABCH}} for $\mathbb{P}^2$, \cite{Maciocia} in general]\label{prop-destSeq} Suppose that $$0\to F\to E\to G\to 0$$ is an exact sequence of $\sigma_{\beta,\alpha}$-semistable objects of the same $\sigma_{\beta,\alpha}$-slope. Put $\ch F = {\bf w}$ and $\ch E = {\bf v}$, and suppose ${\bf v}$ and ${\bf w}$ do not have the same slope everywhere in the $(H,D)$-slice. Let $W = W({\bf v},{\bf w})$ be the wall defined by these characters. If $(\beta',\alpha')\in W$ is any point on the wall, then the above exact sequence is an exact sequence of $\sigma_{\beta',\alpha'}$-semistable objects of the same $\sigma_{\beta',\alpha'}$-slope. In particular, each of the objects $F,E,G$ appearing in the above sequence lie in the category $\mathcal A_{\beta'}$. \end{proposition} Note that the first part of the proposition is essentially equivalent to the final statement by Exercise \ref{ex-exact}. \subsection{Large volume limit}\label{ssec-largeVolume} As mentioned earlier, (twisted) Gieseker moduli spaces of sheaves on surfaces can be recovered as certain moduli spaces of Bridgeland-semistable objects. We say that an object $E^\bullet\in \mathcal A_\beta$ is a \emph{sheaf} if it is isomorphic to a sheaf sitting in degree $0$. We continue to work in an $(H,D)$-slice of stability conditions on a smooth surface $X$. \begin{theorem}[{\cite[\S 6]{ABCH}} for $\mathbb{P}^2$, \cite{Maciocia} in general]\label{thm-largeVol} Let ${\bf v}\in K_{\num}(X)$ be a character of positive rank with $\overline\Delta_{H,D}({\bf v})\geq 0$. Let $\beta < \overline\mu_{H,D}({\bf v})$, and suppose $\alpha \gg 0$ (depending on ${\bf v}$). Then an object $E^\bullet\in \mathcal A_\beta$ is $\sigma_{\beta,\alpha}$-semistable if and only if it is an $(H,D)$-semistable sheaf. \end{theorem} \begin{proof} Since $\beta< \overline\mu_{H,D}({\bf v})$, the stability condition $\sigma_{\beta,\alpha}$ lies left of the vertical wall $\beta = \overline\mu_{H,D}({\bf v})$. The walls for ${\bf v}$ are locally finite. Considering a neighborhood of a stability condition on the vertical wall shows that there is some largest semicircular wall $W$ left of the vertical wall. The set of $\sigma$-semistable objects is constant as $\sigma$ varies in the chamber between $W$ and the vertical wall. It is therefore enough to show the following two things. (1) If $E^\bullet \in \mathcal A_\beta$ has $\ch E^{\bullet} = {\bf v}$ and is $\sigma_{\beta,\alpha}$-semistable for $\alpha \gg 0$ then $E^{\bullet}$ is an $(H,D)$-semistable sheaf. (2) If $E$ is an $(H,D)$-semistable sheaf of character ${\bf v}$, then $E$ is $\sigma_{\beta,\alpha}$-semistable for $\alpha \gg 0$. That is, we may pick $\alpha$ depending on $E$, and not just depending on ${\bf v}$. (1) First suppose $E^\bullet\in \mathcal A_\beta$ is $\sigma_{\beta,\alpha}$-semistable for $\alpha\gg 0$ and $\ch E^{\bullet} = {\bf v}$. If $E^\bullet$ is not a sheaf, then we have an interesting exact sequence $$0\to \mbox{H}^{-1}(E^{\bullet})[1] \to E^{\bullet} \to \mbox{H}^0(E^{\bullet})\to 0$$ in $\mathcal A_\beta$. 
Since $F:=\mbox{H}^{-1}(E^\bullet)\in \mathcal F_\beta$, the formula for the Bridgeland slope shows that $$\mu_{\sigma_{\beta,\alpha}}(F[1])=\mu_{\sigma_{\beta,\alpha}}(F)\to \infty$$ as $\alpha\to \infty$. On the other hand, since $G:= H^0(E^\bullet)\in \mathcal{T}_\beta$ we have $\mu_{\sigma_{\beta,\alpha}}(G)\to -\infty$ as $\alpha\to \infty$, noting that $\rk(G) > 0$ since $\rk {\bf v} >0$. This is absurd since $E^\bullet$ is $\sigma_{\beta,\alpha}$-semistable for $\alpha \gg 0$, and we conclude that $E:=E^\bullet\in \mathcal{T}_\beta$ is a sheaf. Similar arguments show that $E$ is $(H,D)$-semistable. First suppose $E$ has a $\overline\mu_{H,D}$-stable subsheaf $F$ with $\overline\mu_{H,D}(F)> \overline\mu_{H,D}(E)$. Then the corresponding exact sequence of sheaves $$0\to F\to E\to G\to 0$$ is actually a sequence of objects in $\mathcal{T}_\beta$. Indeed, any quotient of an object in $\mathcal{T}_\beta$ is in $\mathcal{T}_\beta$, and $F\in \mathcal{T}_\beta$ by construction. Thus this is actually an exact sequence in $\mathcal A_\beta$. The formula for the Bridgeland slope then shows that $\mu_{\sigma_{\beta,\alpha}}(F) > \mu_{\sigma_{\beta,\alpha}}(E)$ for $\alpha \gg 0$, violating the $\sigma_{\beta,\alpha}$-semistability of $E$. We conclude that $E$ is $\overline\mu_{H,D}$-semistable. To see that $E$ is $(H,D)$-semistable, suppose there is a sequence $$0\to F\to E\to G\to 0$$ of sheaves of the same $\overline\mu_{H,D}$-slope, but $\overline\Delta_{H,D}(F)<\overline\Delta_{H,D}(E)$. Then the formula for the Bridgeland slope gives $\mu_{\sigma_{\beta,\alpha}}(F)> \mu_{\sigma_{\beta,\alpha}}(E)$ for \emph{every} $\alpha$, again contradicting the $\sigma_{\beta,\alpha}$-semistability of $E$ for large $\alpha$. (2) Suppose $E$ is $(H,D)$-semistable of character ${\bf v}$, and suppose $F^\bullet$ is a subobject of $E$ in $\mathcal A_\beta$. Taking the long exact sequence in cohomology sheaves of the exact sequence $$0\to F^\bullet \to E\to G^\bullet\to 0$$ in $\mathcal A_\beta$ gives an exact sequence of sheaves $$0\to \mbox{H}^{-1}(F^\bullet)\to 0\to \mbox{H}^{-1}(G^{\bullet})\to \mbox{H}^0(F^{\bullet}) \to E\to H^0(G^{\bullet})\to 0.$$ Therefore $\mbox{H}^{-1}(F^{\bullet})=0$, i.e. $F:=F^{\bullet}$ is a sheaf in $\mathcal{T}_\beta$. The $(H,D)$-semistability of $E$ then gives $\overline \mu_{H,D}(F) \leq \overline \mu_{H,D}(E)$, with $\overline \Delta_{H,D}(F)\geq \overline \Delta_{H,D}(E)$ in case of equality. The formula for $\mu_{\sigma_{\beta,\alpha}}$ then shows that if $\alpha \gg 0$ we have $\mu_{\sigma_{\beta,\alpha}}(F) \leq \mu_{\sigma_{\beta,\alpha}}(E)$. It follows from the finiteness of the walls that $E$ is actually $\sigma_{\beta,\alpha}$-semistable for large $\alpha$. \end{proof} In particular, if ${\bf v}\in K_{\num}(X)$ is the character of an $(H,D)$-semistable sheaf of positive rank, then there is some largest wall $W$ lying to the left of the vertical wall (or, possibly, there are no walls left of the vertical wall). This wall is called the \emph{Gieseker wall}. For stability conditions $\sigma$ in the open chamber $\mathcal C$ bounded by the Gieseker wall and the vertical wall, we have $$M_\sigma({\bf v}) \cong M_{H,D}({\bf v}).$$ Therefore, any moduli space of twisted semistable sheaves can be recovered as a moduli space of Bridgeland semistable objects. 
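It is perhaps worth recording the elementary computation behind the slope comparisons used repeatedly in the proof above; this is only an unwinding of the formula for $\mu_{\sigma_{\beta,\alpha}}$ from \S\ref{ssec-surfaceConds}, with the subscripts $H,D$ suppressed from the notation. For an object of $\mathcal{T}_\beta$ of positive rank with twisted slope $\overline\mu>\beta$ and twisted discriminant $\overline\Delta$ we have $$\mu_{\sigma_{\beta,\alpha}} = \frac{(\overline\mu-\beta)^2-\alpha^2-2\overline\Delta}{\overline\mu-\beta} = (\overline\mu-\beta)-\frac{\alpha^2+2\overline\Delta}{\overline\mu-\beta}.$$ Hence for two such objects $F$ and $E$, $$\mu_{\sigma_{\beta,\alpha}}(F)-\mu_{\sigma_{\beta,\alpha}}(E) = \alpha^2\left(\frac{1}{\overline\mu(E)-\beta}-\frac{1}{\overline\mu(F)-\beta}\right)+(\textrm{terms independent of }\alpha),$$ so for $\alpha\gg 0$ (depending on $F$ and $E$) the sign of the difference is governed by the comparison of the twisted slopes. If instead $\overline\mu(F)=\overline\mu(E)=:\overline\mu$, then $$\mu_{\sigma_{\beta,\alpha}}(F)-\mu_{\sigma_{\beta,\alpha}}(E) = \frac{2\left(\overline\Delta(E)-\overline\Delta(F)\right)}{\overline\mu-\beta}$$ for every $\alpha$. Thus the ordering by Bridgeland slope for $\alpha\gg 0$ is: larger twisted slope first, and among equal slopes, smaller twisted discriminant first; this is precisely the $(H,D)$-twisted Gieseker ordering.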
\section{Examples on \texorpdfstring{$\mathbb{P}^2$}{the projective plane}}\label{sec-exP2} In this section we investigate a couple of the first interesting examples of Bridgeland stability conditions and their relationship to birational geometry. We focus here on the characters of some small Hilbert schemes of points on $\mathbb{P}^2$. In these cases the definitions simplify considerably, and things can be understood explicitly. \subsection{Notation} Let $X=\mathbb{P}^2$, and fix the standard polarization $H$. We take $D=0$; in general, the choice of twisting divisor is only interesting modulo the polarization, as adding a multiple of the polarization to $D$ only translates the $(H,D)$-slice. The twisting divisor becomes more relevant in examples of higher Picard rank. Additionally, since $K_{\mathbb{P}^2}$ is parallel to $H$, we may as well work with the ordinary slope and discriminant $$\mu = \frac{\ch_1}{r} \qquad \Delta= \frac{1}{2}\mu^2 - \frac{\ch_2}{r}$$ instead of the more complicated $\overline \mu_{H,0}$ and $\overline \Delta_{H,0}$. With these conventions, if ${\bf v}$ and ${\bf w}$ are characters of positive rank then the wall $W({\bf v},{\bf w})$ has center $(s_W,0)$ and radius $\rho_W$ given by \begin{align*}s_W &= \frac{\mu({\bf v}) + \mu({\bf w})}{2} - \frac{\Delta({\bf v})-\Delta({\bf w})}{\mu({\bf v})-\mu({\bf w})}\\ \rho_W^2 &= (s_W- \mu({\bf v}))^2-2\Delta({\bf v}). \end{align*} If we further let ${\bf v} = \ch I_Z$ be the character of an ideal of a length $n$ scheme $Z\in \mathbb{P}^{2[n]}$, then the formulas further simplify to \begin{align*} s_W &= \frac{\mu({\bf w})}{2} + \frac{n-\Delta({\bf w})}{\mu({\bf w})}\\ \rho_W^2 &= s_W^2-2n. \end{align*} The main question to keep in mind is the following. \begin{question} Let $I_Z$ be the ideal sheaf of $Z\in \mathbb{P}^{2[n]}$. For which stability conditions $\sigma$ in the slice is $I_Z$ a $\sigma$-semistable object? What does the destabilizing sequence of $I_Z$ look like along the wall where it is destabilized? \end{question} Note that since $I_Z$ is a Gieseker semistable sheaf, it is $\sigma_{\beta,\alpha}$-semistable if $\alpha \gg 0$ and $\beta<0 = \mu(I_Z)$. There will be some wall $W$ left of the vertical wall where $I_Z$ is destabilized by some subobject $F$. For stability conditions $\sigma$ below this wall, $I_Z$ is never $\sigma$-semistable. Thus the region in the slice where $I_Z$ is $\sigma$-semistable is bounded by the wall $W$ and the vertical wall. It potentially consists of several of the chambers in the wall-and-chamber decomposition of the slice. \subsection{Types of walls} There are two very different ways in which an ideal sheaf $I_Z$ of length $n$ can be destabilized along a wall. The simplest way $I_Z$ can be destabilized is if it is destabilized by an actual subsheaf, i.e. if there is an exact sequence of sheaves $$0\to I_Y(-k)\to I_Z \to T\to 0$$ giving rise to the wall for some zero-dimensional scheme $Y$ of length $\ell$. The character ${\bf w} = \ch I_Y(-k)$ has $(r,\mu,\Delta) = (1,-k,\ell)$, so this wall has center $(s_W,0)$ with \begin{equation}\label{rank1center} s_W=-\frac{k}{2} -\frac{n-\ell}{k}.\end{equation} A wall obtained in this way is called a \emph{rank one wall}. On the other hand, subobjects of $I_Z$ in the categories $\mathcal A_\beta$ need not be subsheaves of $I_Z$! In particular, it is entirely possible that $I_Z$ is destabilized by a sequence $$0\to F\to I_Z\to G \to 0$$ where ${\bf w} = \ch F$ has $\rk{\bf w}\geq 2$. 
Such destabilizing sequences, giving so-called \emph{higher rank walls}, are somewhat more troublesome to deal with. It will be helpful to bound their size, which we now do. As in the proof of Theorem \ref{thm-largeVol}, the long exact sequence of cohomology sheaves shows that any subobject $F\subset I_Z$ in a category $\mathcal A_\beta$ must actually be a sheaf (but not necessarily a subsheaf). Let $K$ and $C$ be the kernel and cokernel, respectively, of the map of sheaves $F\to I_Z$, so that there is an exact sequence of sheaves $$0\to K\to F\to I_Z\to C\to 0.$$ In order for $G$ to be in the categories $\mathcal A_\beta$ along the wall $W = W({\bf w},{\bf v})$ (which must be the case by Proposition \ref{prop-destSeq}), it is necessary and sufficient that we have $K\in \mathcal F_\beta$ and $C\in \mathcal{T}_\beta$ for all $\beta$ along the wall. Indeed, $K$ and $C$ are the cohomology sheaves of the mapping cone of the map $F\to I_Z$, so this follows from Exercise \ref{ex-exact}. The sequence $$0\to F\to I_Z\to G\to 0$$ will be exact in the categories along the wall if $F$ is additionally in $\mathcal{T}_\beta$ for $\beta$ along the wall. These basic considerations lead to the following result. \begin{lemma}[\cite{ABCH}, or see {\cite[Lemma 3.1 and Corollary 3.2]{Boetal}} for a generalization to arbitrary surfaces]\label{lem-higherRank} If an ideal sheaf $I_Z$ of $n$ points in $\mathbb{P}^2$ is destabilized along a wall $W$ given by a subobject $F$ of rank at least $2$, then the radius $\rho_W$ of $W$ satisfies $$\rho_W^2 \leq \frac{n}{4}.$$ \end{lemma} \begin{proof} We use the notation from above. Since $I_Z$ is rank $1$ and torsion-free, a nonzero map $F\to I_Z$ has torsion cokernel. Therefore $C$ is torsion, and it is no condition at all to have $C\in \mathcal{T}_\beta$ along the wall. We further deduce that $c_1(C)\geq 0$, so $c_1(K) \geq c_1(F)$ and $\rk(K) = \rk(F)-1$. Let $(s_W,0)$ and $\rho_W$ be the center and radius of $W$. Since $F\in \mathcal{T}_\beta$ along the wall and $K\in \mathcal F_\beta$ along the wall, we have $$2\rho_W \leq \mu(F)-\mu(K) = \frac{c_1(F)}{\rk(F)}-\frac{c_1(K)}{\rk(K)}\leq - \frac{c_1(F)}{\rk(F)(\rk(F)-1)}\leq - \mu(F) \leq -s_W-\rho_W,$$ so $3\rho_W \leq -s_W$. Squaring both sides, $9\rho_W^2 \leq s_W^2 = \rho_W^2+2n$ by the formula for the radius. The result follows. \end{proof} \subsection{Small examples} We now consider the stability of ideal sheaves of small numbers of points in $\mathbb{P}^2$ in detail. \begin{example}[Ideals of 2 points]\label{ex-2ptBridgeland} Let $I_Z$ be the ideal of a length $2$ scheme $Z\in \mathbb{P}^{2[2]}$. Such an ideal fits in an exact sequence $$0\to \mathcal{O}_{\mathbb{P}^2}(-1)\to I_Z\to \mathcal{O}_L(-2)\to 0$$ where $L$ is the line spanned by $Z$. If $W = W(\ch \mathcal{O}_{\mathbb{P}^2}(-1),I_Z)$ is the wall defined by this sequence, then $Z$ is certainly not $\sigma$-semistable for stability conditions $\sigma$ inside $W$. On the other hand, we claim that $I_Z$ is $\sigma$-semistable for stability conditions $\sigma$ on or above $W$. To see this, we rule out the possibility that $I_Z$ is destabilized along some wall $W'$ which is larger than $W$. The wall $W$ has center $(s_W,0)$ with $s_W = -5/2$ by Equation \ref{rank1center}. Its radius is $\rho_W = 3/2$, so the wall $W$ passes through the point $(-1,0)$. If $W'$ is given by a rank $1$ subobject $I_Y(-k)$ then we must have $-k > -1$ in order for $I_Y(-k)$ to be in the categories $\mathcal A_\beta$ along the wall $W'$. 
This then forces $k=0$, which means $I_Y$ does not define a semicircular wall. This is absurd. The other possibility is that $W'$ is a higher-rank wall. But then by Lemma \ref{lem-higherRank}, $W'$ has radius $\rho_{W'}$ satisfying $\rho_{W'}^2 \leq 1/2$. This contradicts that $W'$ is larger than $W$. \end{example} Note that in the above example, if $\sigma$ is a stability condition on the wall $W$ then $I_Z$ is strictly $\sigma$-semistable and $S$-equivalent to any ideal $I_{Z'}$ where $Z'$ lies on the line spanned by $Z$. Thus the set of $S$-equivalence classes of $\sigma$-semistable objects is naturally identified with $\mathbb{P}^{2*}$. \begin{example}[Ideals of $3$ collinear points]\label{ex-3ptBridgelandCollinear} Let $I_Z$ be the ideal of a length $3$ scheme $Z\in \mathbb{P}^{2[3]}$ which is supported on a line. As in Example \ref{ex-2ptBridgeland}, we claim that $I_Z$ is destabilized by the sequence $$0\to \mathcal{O}_{\mathbb{P}^2}(-1)\to I_Z\to \mathcal{O}_L(-3)\to 0.$$ That is, if $W$ is the wall corresponding to the sequence, then $I_Z$ is $\sigma$-semistable for conditions $\sigma$ on or above the wall. (From the existence of the sequence it is immediately clear that $I_Z$ is not $\sigma$-semistable below the wall.) We compute $s_W = -7/2$ and $\rho_W = \frac{5}{2}$. As in Example \ref{ex-2ptBridgeland}, we conclude that there is no larger rank $1$ wall. Any higher rank wall $W'$ would have $\rho_{W'}^2 \leq 3/4$, so there can be no larger higher rank wall either. Therefore $I_Z$ is $\sigma$-semistable on and above $W$. \end{example} For the next example we will need one additional useful fact. \begin{proposition}[{\cite[Proposition 6.2]{ABCH}}]\label{prop-lineBundle} A line bundle $\mathcal{O}_{\mathbb{P}^2}(-k)$ or a shifted line bundle $\mathcal{O}_{\mathbb{P}^2}(-k)[1]$ is $\sigma_{\beta,\alpha}$-stable whenever it is in the category $\mathcal A_\beta$. Thus, $\mathcal{O}_{\mathbb{P}^2}(-k)$ is $\sigma_{\beta,\alpha}$-stable if $\beta < -k$, and $\mathcal{O}_{\mathbb{P}^2}(-k)[1]$ is $\sigma_{\beta,\alpha}$-stable if $\beta \geq -k$. \end{proposition} In the next example we see our first example of an ideal sheaf destabilized by a higher rank subobject. \begin{example}[Ideals of $3$ general points]\label{ex-3ptBridgeland} Let $I_Z$ be the ideal of a length $3$ scheme $Z\in \mathbb{P}^{2[3]}$ which is \emph{not} supported on a line. In this case, the ideal $I_Z$ has a minimal resolution of the form $$0\to \mathcal{O}_{\mathbb{P}^2}(-3)^2\to \mathcal{O}_{\mathbb{P}^2}(-2)^3\to I_Z\to 0$$ or, equivalently, there is a distinguished triangle $$\mathcal{O}_{\mathbb{P}^2}(-2)^3\to I_Z\to \mathcal{O}_{\mathbb{P}^2}(-3)^2[1] \to \cdot.$$ Consider the wall $W = W(\ch \mathcal{O}_{\mathbb{P}^2}(-2),I_Z)$ defined by this sequence. It has center at $(s_W,0)$ with $s_W = -5/2$, and its radius is $1/2$. By Proposition \ref{prop-lineBundle} and Exercise \ref{ex-exact}, the above triangle gives an exact sequence $$0\to \mathcal{O}_{\mathbb{P}^2}(-2)^3\to I_Z\to \mathcal{O}_{\mathbb{P}^2}(-3)^2[1]\to 0$$ in the categories $\mathcal A_\beta$ along the wall. Then for any $\sigma$ on the wall, $I_Z$ is an extension of $\sigma$-semistable objects of the same slope, and hence is $\sigma$-semistable. It follows that $I_Z$ is destabilized precisely along $W$. \end{example} \begin{remark}[Correspondence between birational geometry and Bridgeland stability] In \cite[\S 10]{ABCH}, this chain of examples is continued in great detail. 
The regions of $\sigma$-semistability of ideal sheaves $I_Z$ of up to $9$ points are completely determined by similar methods. A remarkable correspondence between these regions of stability and the stable base locus decomposition was observed and conjectured to hold in general. The following result has since been proved by Li and Zhao. \begin{theorem}[\cite{LiZhao}] Let $Z\in \mathbb{P}^{2[n]}$. Let $W$ be the Bridgeland wall where the ideal sheaf $I_Z$ is destabilized. Also, let $yH-\frac{1}{2}B$ be the ray in the Mori cone past which the point $Z\in \mathbb{P}^{2[n]}$ enters the stable base locus. Then $$s_W = -y-\frac{3}{2}.$$ \end{theorem} Therefore, computations in Bridgeland stability provide a dictionary between semistability and birational geometry. Compare with Examples \ref{ex-P2Hilb2}, \ref{ex-P2Hilb3}, \ref{ex-2ptBridgeland}, \ref{ex-3ptBridgelandCollinear}, \ref{ex-3ptBridgeland}, which establish the cases $n=2,3$ of the result. More conceptually, Li and Zhao prove that the alternate birational models of any moduli space of sheaves on $\mathbb{P}^2$ can be interpreted as a Bridgeland moduli space, and they match up the walls in the Mori chamber decomposition of the effective cone with the walls in the wall-and-chamber decomposition of the stability manifold. As a consequence, they are able to give new computations of the effective, movable, and ample cones of divisors on these spaces. A crucial ingredient in this program is the smoothness of these Bridgeland moduli spaces, as well as a Dr\'ezet-Le Potier type classification of characters ${\bf v}$ for which Bridgeland moduli spaces are nonempty \cite[Theorems 0.1 and 0.2]{LiZhao}. \end{remark} The next exercise computes the Gieseker wall for a Hilbert scheme of points on $\mathbb{P}^2$. This is the easiest case of the main problem we will discuss in the next section. \begin{exercise} Following Examples \ref{ex-2ptBridgeland} and \ref{ex-3ptBridgelandCollinear}, show that the largest wall where some ideal sheaf $I_Z$ of $n$ points is destabilized is the wall $W(\ch \mathcal{O}_{\mathbb{P}^2}(-1),I_Z)$. Furthermore, an ideal $I_Z$ is destabilized along this wall if and only if $Z$ lies on a line. \end{exercise} \begin{remark} A similar program to the above has also been undertaken on some other rational surfaces such as Hirzebruch and del Pezzo surfaces. See \cite{BC}. \end{remark} \section{The positivity lemma and nef cones}\label{sec-positivity} We close the survey by discussing the positivity lemma of Bayer and Macr\`i and recent applications of this tool to the computation of cones of nef divisors on Hilbert schemes of points and moduli spaces of sheaves. This provides an example where Bridgeland stability provides insight that at present is not understood from a more classical point of view. \subsection{The positivity lemma} The positivity lemma is a tool for constructing nef divisors on moduli spaces of Bridgeland-semistable objects. On a surface, (twisted) Gieseker moduli spaces can themselves be viewed as Bridgeland moduli spaces, so this will also allow us to construct nef divisors on classical moduli spaces. As with the construction of divisors on Gieseker moduli spaces, the starting point is to define a divisor on the base of a family of objects. When the moduli space carries a universal family, the family can be used to define a divisor on the moduli space. 
In this direction, let $\sigma=(Z,\mathcal A)$ be a stability condition on $X$, and let $\mathcal{E}/S$ be a flat family of $\sigma$-semistable objects of character ${\bf v}$ parameterized by a proper algebraic space $S$. We define a numerical divisor class $D_{\sigma,\mathcal{E}}\in N^1(S)$ on $S$ depending on $\mathcal{E}$ and $\sigma$ by specifying the intersection $D_{\sigma,\mathcal{E}}.C$ with every curve $C\subset S$. Let $\Phi_\mathcal{E}: D^b(S) \to D^b(X)$ be the Fourier-Mukai transform with kernel $\mathcal{E}$, defined by $$\Phi_{\mathcal{E}}(F) = q_*(p^*F \otimes \mathcal{E}),$$ where $p:S\times X\to S$ and $q:S\times X\to X$ are the projections and all the functors are derived. Then we declare $$D_{\sigma,\mathcal{E}}.C = \Im \left(- \frac{Z(\Phi_\mathcal{E}(\mathcal{O}_C))}{Z({\bf v})}\right).$$ \begin{remark} Note that if $Z({\bf v}) = -1$ then the formula becomes $$D_{\sigma,\mathcal{E}}.C = \Im(Z(\Phi_\mathcal{E}(\mathcal{O}_C))).$$ If $\Phi_{\mathcal{E}}(\mathcal{O}_C)\in \mathcal A$, then $D_{\sigma,\mathcal{E}}.C\geq 0$ would follow from the positivity of the central charge. While it is not necessarily true that $\Phi_{\mathcal{E}}(\mathcal{O}_C)\in \mathcal A$, this fact nonetheless plays an important role in the proof of the positivity lemma. \end{remark} The positivity lemma states that this assignment actually defines a nef divisor on $S$. Furthermore, there is a simple criterion to detect the curves $C$ meeting the divisor orthogonally. \begin{theorem}[Positivity lemma, Theorem 4.1 \cite{BM2}]\label{thm-positivity} The above assignment defines a well-defined numerical divisor class $D_{\sigma,\mathcal{E}}$ on $S$. This divisor is nef, and a complete, integral curve $C\subset S$ satisfies $D_{\sigma,\mathcal{E}}. C = 0$ if and only if the objects parameterized by two general points of $C$ are $S$-equivalent with respect to $\sigma$. \end{theorem} If the moduli space $M_{\sigma}({\bf v})$ carries a universal family $\mathcal{E}$, then Theorem \ref{thm-positivity} constructs a nef divisor $D_{\sigma,\mathcal{E}}$ on the moduli space. In fact, the divisor does not depend on the choice of $\mathcal{E}$; we will see this in the next subsection. \begin{remark} If multiples of $D_{\sigma,\mathcal{E}}$ define a morphism from $S$ to projective space, then the curves $C$ contracted by this morphism are characterized as the curves with $D_{\sigma,\mathcal{E}}. C = 0$. Thus, in a sense, all the interesting birational geometry coming from such a nef divisor $D_{\sigma,\mathcal{E}}$ is due to $S$-equivalence. Unfortunately, in general, a nef divisor does not necessarily give rise to a morphism---multiples of the divisor do not necessarily have any sections at all. However, in such cases the positivity lemma is especially interesting. Indeed, one of the easiest ways to construct nef divisors is to pull back ample divisors by a morphism (recall Examples \ref{ex-nefpullback} and \ref{ex-nefCompute}). The positivity lemma can potentially produce nef divisors not corresponding to any map at all, in which case nefness is classically more difficult to check. \end{remark} \subsection{Computation of divisors} It is interesting to relate the Bayer-Macr\`i divisors $D_{\sigma,\mathcal{E}}$ with the determinantal divisors on a base $S$ arising from a family $\mathcal{E}/S$. Now would be a good time to review \S \ref{ssec-lineBundles}. 
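Before turning to that comparison, it may be instructive to sanity check the definition of $D_{\sigma,\mathcal{E}}$ in a degenerate special case; the situation below is hypothetical and chosen only to illustrate how the formula interacts with Theorem \ref{thm-positivity}. Suppose $C\subset S$ is a complete integral curve along which the family is constant, in the sense that the restriction of $\mathcal{E}$ to $C\times X$ is isomorphic to $\mathcal{O}_C\boxtimes E_0$ for a single $\sigma$-semistable object $E_0$ of character ${\bf v}$. Then in $K$-theory the K\"unneth formula gives $$[\Phi_{\mathcal{E}}(\mathcal{O}_C)] = \chi(\mathcal{O}_C)\cdot {\bf v},$$ so that $Z(\Phi_{\mathcal{E}}(\mathcal{O}_C)) = \chi(\mathcal{O}_C)Z({\bf v})$ and $$D_{\sigma,\mathcal{E}}.C = \Im\left(-\chi(\mathcal{O}_C)\right) = 0.$$ This is consistent with Theorem \ref{thm-positivity}: the objects parameterized by the points of $C$ are all isomorphic, hence $S$-equivalent.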
Recall that the Donaldson homomorphism is a map $$\lambda_{\mathcal{E}} : {\bf v}^\perp\to N^1(S)$$ depending on a choice of family $\mathcal{E}/S$, where ${\bf v}^\perp \subset K_{\num}(X)_\mathbb{R}$. Reviewing the definition of $\lambda_{\mathcal{E}}$, we see that it only depends on the class of $\mathcal{E}$ in $K^0(S\times X)$, so it immediately extends to the case where $\mathcal{E}$ is a family of $\sigma$-semistable objects. Since the Euler pairing $(-,-)$ is nondegenerate on $K_{\num}(X)_\mathbb{R}$, any linear functional on $K_{\num}(X)_\mathbb{R}$ vanishing on ${\bf v}$ can be represented by a vector in ${\bf v}^\perp$. In particular, there is a unique vector ${\bf w}_{Z}\in {\bf v}^\perp$ such that $$\Im\left(-\frac{Z({\bf w})}{Z({\bf v})}\right)=({\bf w}_{Z},{\bf w})$$ holds for all ${\bf w}\in K_{\num}(X)_\mathbb{R}$. Note that the definition of ${\bf w}_Z$ is essentially purely linear-algebraic, and makes no reference to $S$ or ${\mathcal{E}}$. The next result shows that the Bayer-Macr\`i divisors are all determinantal. \begin{proposition}[{\cite[Proposition 4.4]{BM2}}] We have $$D_{\sigma,\mathcal{E}} = \lambda_\mathcal{E}({\bf w}_{Z}).$$ \end{proposition} If $N$ is any line bundle on $S$, then we have $$D_{\sigma,\mathcal{E} \otimes p^*N} = \lambda_{\mathcal{E} \otimes p^*N}({\bf w}_Z) = \lambda_{\mathcal{E}}({\bf w}_Z) = D_{\sigma,\mathcal{E}}.$$ In particular, if $S$ is a moduli space $M_{\sigma}({\bf v})$ with a universal family $\mathcal{E}$, then the divisor $D_{\sigma}:=D_{\sigma,\mathcal{E}}$ does not depend on the choice of universal family. \begin{remark} See \cite[\S4]{BM2} for less restrictive hypotheses under which a divisor can be defined on the moduli space. \end{remark} In explicit cases, it can be useful to compute the character ${\bf w}_Z$ in more detail. The next result does this in the case of an $(H,D)$-slice of stability conditions on a smooth surface $X$ (review \S \ref{ssec-surfaceConds}). \begin{lemma}[{\cite[Proposition 3.8]{Boetal}}] Let $X$ be a smooth surface and let $H,D\in \Pic(X)\otimes \mathbb{R}$, with $H$ ample. If $\sigma$ is a stability condition in the $(H,D)$-slice lying on a numerical wall for ${\bf v}$ with center $(s_W,0)$, then the character ${\bf w}_Z$ is a multiple of $$(-1,-\frac{1}{2}K_X+s_WH+D,m)\in {\bf v}^\perp,$$ where we write Chern characters as $(\ch_0,\ch_1,\ch_2)$. Here the number $m$ is determined by the property that the character is in ${\bf v}^\perp$. \end{lemma} \subsection{Gieseker walls and nef cones} For the rest of the survey we let $X$ be a smooth surface and fix an $(H,D)$-slice $\Pi$ of stability conditions. Let ${\bf v}\in K_{\num}(X)$ be the Chern character of an $(H,D)$-semistable sheaf of positive rank. Additionally assume for simplicity that $M_{H,D}({\bf v})$ has a universal family $\mathcal{E}$, so that in particular every $(H,D)$-semistable sheaf is $(H,D)$-stable. Recall that the \emph{Gieseker wall} $W$ for ${\bf v}$ in the $(H,D)$-slice is, by definition, the largest wall where an $(H,D)$-semistable sheaf of character ${\bf v}$ is destabilized. For conditions $\sigma$ on or above $W$, every $(H,D)$-semistable sheaf is $\sigma$-semistable. Therefore, for any such $\sigma$, the universal family $\mathcal{E}$ is a family of $\sigma$-semistable objects parameterized by $M_{H,D}({\bf v})$. Each condition $\sigma$ on or above the wall therefore gives a nef divisor $D_{\sigma}=D_{\sigma,\mathcal{E}}$ on the moduli space. 
\begin{corollary} With notation as above, if $s \leq s_W$, then the divisor on $M_{H,D}({\bf v})$ corresponding to the class $$(-1,-\frac{1}{2}K_X+sH+D,m)\in {\bf v}^\perp $$ under the Donaldson homomorphism is nef. \end{corollary} Now let $\sigma$ be a stability condition on the Gieseker wall. It is natural to wonder whether the ``final'' nef divisor $D_\sigma$ produced by this method is a boundary nef divisor. This may or may not be the case. By Theorem \ref{thm-positivity}, the divisor $D_\sigma$ is on the boundary of $\Nef(M_{H,D}({\bf v}))$ if and only if there is a curve in $M_{H,D}({\bf v})$ parameterizing sheaves which are generically $S$-equivalent with respect to the stability condition $\sigma$. This happens if there is some sheaf $E\in M_{H,D}({\bf v})$ destabilized along $W$ by a sequence $$0\to F\to E\to G\to 0$$ where it is possible to vary the extension class in $\Ext^1(G,F)$ to obtain non-isomorphic objects $E'$. This can be subtle, and typically requires further analysis. \subsection{Nef cones of Hilbert schemes of points on surfaces} In this section we survey the recent results of \cite{Boetal} computing nef divisors on the Hilbert scheme $X^{[n]}$ of points on a smooth surface of irregularity $q(X) = 0$. Let ${\bf v} = \ch I_Z$, where $Z\in X^{[n]}$. For each pair of divisors $(H,D)$ on $X$, we can interpret $X^{[n]}$ as the moduli space $M_{H,D}({\bf v})$. A stability condition $\sigma$ in the $(H,D)$-slice on a wall $W$ with center $(s_W,0)$ induces a divisor $D_\sigma$ on $X^{[n]}$ with class a multiple of $$\frac{1}{2}K_X^{[n]} - s_WH^{[n]}-D^{[n]}-\frac{1}{2}B.$$ The ray spanned by this class tends to the ray spanned by $H^{[n]}$ as $s_W\to -\infty$. As $s_W$ varies in the above expression we obtain a two-dimensional cone of divisors in $N^1(X^{[n]})$ containing the ray spanned by the nef divisor $H^{[n]}$. The positivity lemma allows us to study the nefness of divisors in this cone by studying the Gieseker wall for ${\bf v}$ in the $(H,D)$-slice. Changing the twisting divisor $D$ changes which two-dimensional cone we look at, and the entire nef cone of $X^{[n]}$ can be studied by the systematic variation of the twisting divisor. The main result we discuss in this section addresses the computation of the Gieseker wall in an $(H,D)$-slice, at least assuming the number $n$ of points is sufficiently large. We find that the Gieseker wall, or more precisely the subobject computing it, ``stabilizes'' once $n$ is sufficiently large. \begin{theorem}[{\cite[various results from \S 3]{Boetal}}]\label{thm-hilbAsymptotic} There is a curve $C\subset X$ (depending on $H,D$) such that if $n\gg 0$ then the Gieseker wall for ${\bf v}$ in the $(H,D)$-slice is computed by the rank 1 subobject $\mathcal{O}_X(-C)$. The intersection number $C.H$ is minimal among all effective curves $C$ on $X$. The divisor $D_\sigma$ corresponding to a stability condition $\sigma$ on the Gieseker wall is an extremal nef divisor. Orthogonal curves to $D_{\sigma}$ can be obtained by letting $n$ points move in a $g_n^1$ on $C$. \end{theorem} Note that everything here has already been verified for $X=\mathbb{P}^2$, and in fact $n\geq 2$ is sufficient in this case. The destabilizing subobject is always $\mathcal{O}_{\mathbb{P}^2}(-1)$. \begin{proof}[Sketch proof] Consider the character ${\bf v}$ as varying with $n$. Then $\overline\mu_{H,D}({\bf v})$ is constant, and $\overline \Delta_{H,D}({\bf v})$ is of the form $n + {\mathrm {const}}$. 
Consider the wall $W'$ given by a rank 1 object $I_Y(-C)$ with $C$ an effective curve, and put ${\bf w} = \ch I_Y(-C)$. The wall $W'$ has center at $(s_{W'},0)$ with $$s_{W'} = \frac{\overline \mu_{H,D}({\bf v}) + \overline \mu_{H,D}({\bf w})}{2} - \frac{\overline \Delta_{H,D}({\bf v}) - \overline \Delta_{H,D}({\bf w})}{\overline \mu_{H,D}({\bf v}) - \overline \mu_{H,D}({\bf w})}.$$ As a function of $n$, this looks like \begin{equation}\label{centerW'} s_{W'} = -\frac{n}{\mu_H({\bf v})-\mu_H({\bf w})}+\mathrm{const} = \frac{n}{ \mu_H({\bf w})} + \mathrm{const} = -\frac{n}{C.H}+\mathrm{const},\end{equation} where the constant depends on ${\bf w}$. Correspondingly, the radius $\rho_{W'}$ grows approximately linearly in $n$. Note that the numerical wall given by $\mathcal{O}_X(-C)$ is always at least as large as the numerical wall given by $I_Y(-C)$, by a discriminant calculation. Furthermore, if $I_Y(-C)$ gives an actual wall, i.e. if there is some $I_Z\in X^{[n]}$ fitting in a sequence $$0\to I_Y(-C) \to I_Z \to T\to 0,$$ then $\mathcal{O}_X(-C)$ also gives an actual wall. Thus, if the Gieseker wall is computed by a rank $1$ sheaf then it is computed by a line bundle $\mathcal{O}_X(-C)$. In fact, for $n\gg 0$ the Gieseker wall is computed by a line bundle $\mathcal{O}_X(-C)$ and not by some higher rank subobject. This is because an analog of Lemma \ref{lem-higherRank} for arbitrary surfaces shows that any higher rank wall for ${\bf v}$ in the $(H,D)$-slice has radius \emph{squared} bounded by $n$ times a constant depending on $H,D$. On the other hand, as soon as we know there is \emph{some} wall given by a rank $1$ subobject it follows that there are walls with radius which is linear in $n$, implying that the Gieseker wall is not a higher rank wall. To see that there is some rank $1$ wall if $n \gg 0$, let $C$ be any effective curve. For some $Z\in X^{[n]}$, there is an exact sequence of sheaves $$0\to \mathcal{O}_X(-C) \to I_Z\to I_{Z\subset C}\to 0.$$ We know the numerical wall $W'$ corresponding to the subobject $\mathcal{O}_X(-C)$ has radius which grows linearly with $n$. In particular, for $n \gg 0$ the wall is nonempty. Furthermore, since $\overline\mu_{H,D}(\mathcal{O}_X(-C))$ and $\overline\mu_{H,D}(I_Z)$ are constant with $n$ but $\overline\Delta_{H,D}(I_Z)$ is unbounded with $n$, the sheaf $\mathcal{O}_X(-C)$ is eventually in some of the categories along the wall $W'$. Thus the above exact sequence of sheaves is an exact sequence along the wall, and this wall is larger than any higher rank wall. We conclude that $I_Z$ is either destabilized along $W'$ or destabilized along some possibly larger rank $1$ wall. Either way, there is a rank $1$ wall, and the Gieseker wall is a rank $1$ wall, computed by some line bundle $\mathcal{O}_X(-C)$ with $C$ effective. More precisely, the curve $C$ such that the subsheaf $\mathcal{O}_X(-C)$ computes the Gieseker wall for $n\gg 0$ is the effective curve which gives the largest numerical (and hence actual) wall. Considering the Formula (\ref{centerW'}) for the center of the wall determined by $\mathcal{O}_X(-C)$, we find that $C$ must be an effective curve of minimal $H$-degree. Furthermore, $C$ must be chosen to minimize the constant which appears in that formula (this depends additionally on $D$). Any such curve $C$ which asymptotically minimizes Formula (\ref{centerW'}) in this way computes the Gieseker wall for $n\gg 0$. 
Curves orthogonal to the divisor $D_\sigma$ given by a stability condition on the Gieseker wall can now be obtained by varying the extension class in the sequence $$0\to \mathcal{O}_X(-C) \to I_Z\to I_{Z\subset C}\to 0;$$ this corresponds to letting $Z$ move in a pencil on $C$, which can certainly be done for $n\gg 0$. \end{proof} More care is taken in \cite{Boetal} to determine the precise bounds on $n$ which are necessary for the various steps of the proof. The general method is applied to compute nef cones of Hilbert schemes of sufficiently many points on very general surfaces in $\mathbb{P}^3$, very general double covers of $\mathbb{P}^2$, and del Pezzo surfaces of degree $1$. The last example provides an example of a surface of higher Picard rank, where the variation of the twisting divisor is exploited. See \cite[\S 4-5]{Boetal} for details. We highlight one of the first interesting cases where the answer appears to be unknown. \begin{problem} Let $X\subset \mathbb{P}^3$ be a very general quintic surface, so that the Picard rank is $1$ by the Noether-Lefschetz theorem. Compute the nef cone of $X^{[2]}$ and $X^{[3]}$. \end{problem} Once $n\geq 4$ in the previous example, the nef cone is known by the general methods above. See \cite[Proposition 4.5]{Boetal}. \subsection{Nef cones of moduli spaces of sheaves on surfaces} We close our discussion with a survey of the main result of \cite{CHNefGeneral} on the cone of nef divisors on a moduli space of sheaves with large discriminant on an arbitrary smooth surface. In the case of $\mathbb{P}^2$, this result was first discovered in the papers \cite{CHAmple,LiZhao}. The picture for an arbitrary surface is a modest simultaneous generalization of the $\mathbb{P}^2$ case as well as the Hilbert scheme case for an arbitrary surface (see \S 7.4 or \cite{Boetal}). Again let $X$ be a smooth surface and let $H,D$ be divisors giving a slice of stability conditions. Let ${\bf v}$ be the character of an $(H,D)$-semistable sheaf of positive rank. We assume the discriminant $\overline\Delta_{H,D} ({\bf v})\gg 0$ is sufficiently large. Suppose the moduli space $M_{H,D}({\bf v})$ carries a (quasi-)universal family. The goal of \cite{CHNefGeneral} is to compute the Gieseker wall for ${\bf v}$ in the $(H,D)$-slice and to show that the divisor $D_\sigma$ corresponding to a stability condition $\sigma$ on the Gieseker wall is a boundary nef divisor. The basic picture is similar to the case of a Hilbert scheme of points, and indeed Theorem \ref{thm-hilbAsymptotic} will follow as a special case of this more general result. However, the asymptotics can easily be made much more explicit in the Hilbert scheme case. The common thread between the two results is that as the discriminant $\overline\Delta_{H,D}({\bf v})$ is increased, the character ${\bf w}$ of a destabilizing subobject giving rise to the Gieseker wall stabilizes. It is furthermore easy to give properties which almost uniquely define the character ${\bf w}$. \begin{definition} Fix an $(H,D)$-slice. An \emph{extremal Chern character} ${\bf w}$ for ${\bf v}$ is any character satisfying the following defining properties. \begin{enumerate}[label=(E\arabic*)] \item \label{cond-rankBound}We have $0<r({\bf w})\leq r({\bf v})$, and if $r({\bf w})=r({\bf v})$, then $c_1({\bf v})-c_1({\bf w})$ is effective. \item \label{cond-slopeClose} We have $\mu_H({\bf w}) < \mu_H({\bf v})$, and $\mu_H({\bf w})$ is as close to $\mu_H({\bf v})$ as possible subject to \ref{cond-rankBound}. 
\item \label{cond-stable} The moduli space $M_{H,D}({\bf w})$ is nonempty. \item \label{cond-discriminantMinimal} The discriminant $\overline\Delta_{H,D}({\bf w})$ is as small as possible, subject to \ref{cond-rankBound}-\ref{cond-stable}. \item \label{cond-rankMaximal} The rank $r({\bf w})$ is as large as possible, subject to \ref{cond-rankBound}-\ref{cond-discriminantMinimal}. \end{enumerate} \end{definition} Note that properties \ref{cond-rankBound}-\ref{cond-discriminantMinimal} uniquely determine the slope $\mu_H({\bf w})$ and discriminant $\overline\Delta_{H,D}({\bf w})$, although $c_1({\bf w})$ is not necessarily uniquely determined. Condition \ref{cond-rankMaximal} uniquely specifies the rank of ${\bf w}$. Notice furthermore that the definition does not depend on the discriminant $\overline\Delta_{H,D}({\bf v})$, so that ${\bf w}$ can be held constant as $\overline \Delta_{H,D}({\bf v})$ varies. We then have the following theorem. \begin{theorem}[\cite{CHNefGeneral}]\label{thm-moduliAsymptotic} Suppose $\overline\Delta_{H,D}({\bf v}) \gg 0$. Then the Gieseker wall for ${\bf v}$ in the $(H,D)$-slice is computed by a destabilizing subobject of character ${\bf w}$, where ${\bf w}$ is an extremal Chern character for ${\bf v}$. Furthermore, the divisor $D_\sigma$ corresponding to a stability condition $\sigma$ on the Gieseker wall is a boundary nef divisor. \end{theorem} The argument is largely similar to the proof of Theorem \ref{thm-hilbAsymptotic}. First one shows that the destabilizing subobject along the Gieseker wall must actually be a subsheaf, and not some higher rank object. This justifies restriction \ref{cond-rankBound} in the definition of ${\bf w}$ (note that if $r({\bf w}) = r({\bf v})$ then the only way there can be an injection of sheaves $F\to E$ with $\ch F = {\bf w}$ and $\ch E = {\bf v}$ is if the induced map $\det F\to \det E$ is injective, forcing $c_1({\bf v}) - c_1({\bf w})$ to be effective). Next, one shows that the subsheaf defining the Gieseker wall must actually be an $(H,D)$-semistable sheaf. Recalling the formula $$s_W = \frac{\overline\mu_{H,D}({\bf v})+\overline\mu_{H,D}({\bf w})}{2} - \frac{\overline\Delta_{H,D}({\bf v})-\overline\Delta_{H,D}({\bf w})}{\overline \mu_{H,D}({\bf v}) - \overline\mu_{H,D}({\bf w})}$$ for the center of a wall, conditions \ref{cond-slopeClose}-\ref{cond-discriminantMinimal} then ensure that the numerical wall defined by ${\bf w}$ is as large as possible when $\OV \Delta_{H,D}({\bf v})\gg 0$. Therefore, the Gieseker wall for ${\bf v}$ is no larger than the wall defined by the extremal character ${\bf w}$. \begin{remark} Actually computing the extremal character ${\bf w}$ can be extremely challenging. Minimizing the discriminant of ${\bf w}$ subject to the condition that the moduli space $M_{H,D}({\bf w})$ is nonempty essentially requires knowing the sharpest possible Bogomolov inequalities for semistable sheaves on $X$. Conversely, if the nef cones of moduli spaces of sheaves on $X$ are known, strong Bogomolov-type inequalities can be deduced. On surfaces such as $\mathbb{P}^2$ and $K3$ surfaces, the extremal character can be computed mechanically using the classification of semistable sheaves; recall for example \S\ref{sssec-existP2} and \S\ref{sssec-K3exist}. \end{remark} The proof of Theorem \ref{thm-moduliAsymptotic} diverges from the Hilbert scheme case when we need to show that the numerical wall for ${\bf v}$ defined by an extremal character ${\bf w}$ is an actual wall. 
In the Hilbert scheme case, it is trivial to produce ideal sheaves $I_Z$ which are destabilized by a rank $1$ object $\mathcal{O}_X(-C)$: we simply put $Z$ on $C$, and get an exact sequence $$0\to \mathcal{O}_X(-C)\to I_Z\to I_{Z\subset C}\to 0$$ which is an exact sequence in the categories along the wall if the number of points is sufficiently large. To prove Theorem \ref{thm-moduliAsymptotic}, we instead need to produce $(H,D)$-semistable sheaves $E$ of character ${\bf v}$ fitting in sequences of the form $$0\to F \to E \to G\to 0$$ where $F$ is $(H,D)$-semistable of character ${\bf w}$. This is somewhat technical. Let ${\bf u} = \ch G$; then ${\bf u}$ has $r({\bf u})<r({\bf v})$, and $\overline \Delta_{H,D}({\bf u}) \gg 0$. Therefore, by induction on the rank, we may assume the Gieseker wall of ${\bf u}$ has been computed. We then show that the Gieseker walls for ${\bf w}$ and ${\bf u}$ are nested inside $W:=W({\bf v},{\bf w})$ if $\overline \Delta_{H,D}({\bf v})\gg 0$. Therefore, any sheaves $F\in M_{H,D}({\bf w})$ and $G\in M_{H,D}({\bf u})$ are actually $\sigma$-semistable for any stability condition $\sigma$ on $W$. Then any extension $E$ of $G$ by $F$ is $\sigma$-semistable, and it can further be shown that a general such extension is actually $(H,D)$-stable. By varying the extension class, we can produce curves in $M_{H,D}({\bf v})$ parameterizing non-isomorphic $(H,D)$-stable sheaves; these curves are orthogonal to the nef divisor given by the Gieseker wall. See \cite[\S 5-6]{CHNefGeneral} for details. \begin{remark} Several applications of Theorem \ref{thm-moduliAsymptotic} to simple surfaces are given in \cite[\S 7]{CHNefGeneral}. \end{remark} \end{document}
Norm group In number theory, a norm group is a group of the form $N_{L/K}(L^{\times })$ where $L/K$ is a finite abelian extension of nonarchimedean local fields. One of the main theorems in local class field theory states that the norm groups in $K^{\times }$ are precisely the open subgroups of $K^{\times }$ of finite index. See also • Takagi existence theorem References • J.S. Milne, Class field theory. Version 4.01.
\begin{document} \centerline{\bf LORENTZ GROUPS OF CYCLOTOMIC EXTENSIONS} \centerline{\it Relativity and Reciprocity} \ \centerline{Vadim Schechtman} \ \centerline{March 16, 2021} \centerline{\bf Abstract} \ In this (mostly historical) note we show how a unified Kummer-Artin-Schreier sequence from [W], [SOS] may be recovered from the relativistic velocity addition law. \ \centerline{\bf Introduction} \ At the beginning of the XX-th century, almost at the same time, two articles appeared. One of them was published in {\it Annalen der Physik}, and became very celebrated. The other one appeared in {\it Mathematischen Annalen}; it received the prize from the {\it K\"oniglische Gesellschaft der Wissenschaften zu G\"ottingen}. It was written by the number theorist Philipp Friedrich Furtw\"angler\footnote{{\it ``der zu den bedeutensten Zahlentheoretiker seiner Zeit geh\"orte''}} (see [F], [H]) and contained a proof of a reciprocity law for $l$-th powers envisioned by Hilbert, generalizing the classical quadratic reciprocity. Later on (see [SS]) it was remarked that Furtw\"angler's definitions implicitly contain a certain group scheme which interpolates between the multiplicative group $\mu_p$ of $p$-th roots of unity (the kernel of the Kummer map) and its additive analogue $\alpha_p$ (the kernel of the Artin-Schreier map) (see also [ST], Exercises 1 and 2). The first mentioned paper is [E], where the author discusses the consequences of the postulate that the speed of light $c$ remains constant in moving frames. It is based on what became known as a relativistic formula for velocity addition. In the present note we remark that a unified Kummer-Artin-Schreier group scheme may be extracted from this addition formula, in such a way that the case of finite $c$ (``Lorentzian'') corresponds to Kummer, whereas $c = \infty$ (``Galilean'') corresponds to Artin-Schreier. The reader may notice a connection between the relativistic addition formula and symmetric functions described in 1.1. \ {\bf Acknowledgement.} This note was inspired by [B]. I am grateful to Y.Bazaliy, J.Tapia and B.Toen for consultations. \centerline{\bf \S 1. Relativistic velocity addition law} For the moment we will work over a ground commutative ring $k$; in 1.3 below we will suppose that $k = \mathbb{R}$. {\bf 1.1. Velocity addition and symmetric functions.} Denote $$ u\oplus v = u\oplus_h v = \frac{u + v}{1 + h^2uv} = \frac{\sigma_1(u, v)}{1 + h^2\sigma_2(u, v)} \eqno{(1.1.1)} $$ This operation is commutative (obvious) and associative: $$ (u\oplus v) \oplus w = u\oplus (v \oplus w) = \frac{\sigma_1(u,v,w) + h^2\sigma_3(u,v,w)}{1 + h^2\sigma_2(u,v,w)} \eqno{(1.1.2)} $$ More generally, $$ u\oplus v \oplus w \oplus t = \frac{\sigma_1(u,v,w,t) + h^2\sigma_3(u,v,w,t)}{1 + h^2\sigma_2(u,v,w,t) + h^4\sigma_4(u,v,w,t)}, \eqno{(1.1.3)} $$ etc. The operation $\oplus_h$ has the neutral element $0$, and the inverse $u\mapsto -u$. We get a rational group law to be denoted by $\mathbb{G}_h$. Thus $\mathbb{G}_h = \operatorname{Spec} k[x]$, and the group law is a rational map $$ \oplus_h: \ \mathbb{G}_h\times \mathbb{G}_h --\longrightarrow \mathbb{G}_h. $$ {\bf 1.2. Pseudo-orthogonal $2\times 2$-matrices.} Let $$ A(u) = A_h(u) = \left(\begin{matrix} 1 & - h^2u\\ - u & 1\end{matrix}\right). $$ Note that $A_0(u)$ is just a lower triangular matrix. Introduce a scalar product $( , )_h$ on $V = k^2$ by $$ ((a,b), (a',b'))_h = h^2aa' - bb'. 
{\bf 1.2. Pseudo-orthogonal $2\times 2$-matrices.} Let
$$
A(u) = A_h(u) = \left(\begin{matrix} 1 & - h^2u\\ - u & 1\end{matrix}\right).
$$
Note that $A_0(u)$ is just a lower triangular matrix.

Introduce a scalar product $( , )_h$ on $V = k^2$ by
$$
((a,b), (a',b'))_h = h^2aa' - bb'.
\eqno{(1.2.1)}
$$
Then the lines (or columns) of $A(u)$ are orthogonal with respect to this scalar product, and we can write
$$
A(u)\in O(2, ( , )_h).
$$
We have
$$
A(u)A(v) = (1 + h^2uv)A(u\oplus_h v).
\eqno{(1.2.2)}
$$
Taking determinants, we get
$$
a(u)a(v) = (1 + h^2uv)^2 a(u\oplus v)
$$
where
$$
a(u) = a_h(u) = \det A(u) = 1 - h^2u^2.
$$
In other words, the function
$$
b(u, v) = b_h(u, v) = 1 + h^2uv = \sigma_0(u,v) + h^2\sigma_2(u,v)
$$
is a coboundary:
$$
\frac{a(u\oplus v)}{a(u)a(v)} = \frac{1}{b(u,v)^2}.
\eqno{(1.2.3)}
$$
It follows that if we define
$$
B_h(u) = \frac{A_h(u)}{a_h(u)^{1/2}}
$$
then
$$
B_h(u)\in SO(2, (,)_h) = \mathbb{L}_h \subset SL_2
$$
and
$$
B_h(u)B_h(v) = B_h(u\oplus_h v).
\eqno{(1.2.4)}
$$
For $h = 0$ the matrices $B_0(u) = A_0(u)$ generate a group $\mathbb{L}_0$ of lower unipotent matrices isomorphic to $\mathbb{G}_a$.

\

{\it Physical interpretation}

\

Consider the relativistic $1|1$-dimensional space-time where the speed of light is $c = 1/h$. Denote
$$
L_c(u) := B_h(u).
$$
If $(x, t)$ are the coordinates of an event in a frame at rest then
$$
\left(\begin{matrix} x'\\ t' \end{matrix}\right) = L_c(u)\left(\begin{matrix} x\\ t \end{matrix}\right)
$$
are its coordinates in the frame moving at velocity $u$, cf. for example [B].

\

{\bf 1.3. Positivity.} Let $k = \mathbb{R}$. Suppose that $h > 0$, and let
$$
h = \frac{1}{c}, \ c > 0
$$
(so $c$ is the ``velocity of light''). Then $\oplus_h = \oplus_c$ gives rise to a group law on
$$
I_c := (-c, c),
$$
i.e. $I_c$ becomes a Lie group. The arrow $\beta_h$ (see (2.1.2) below) is an isomorphism of real Lie groups
$$
I_{c} \iso \mathbb{R}_{>0}^*
$$
Thus we have got a family of embeddings of Lie groups
$$
\iota(c): \mathbb{R}_{>0}^* \iso I_c \hookrightarrow SL_2(\mathbb{R})
$$
which degenerates in the limit $c\longrightarrow\infty$ to a horospheric (lower triangular) subgroup
$$
\mathbb{R} \isom U_-\hookrightarrow SL_2(\mathbb{R}).
$$
Replacing $c$ by $ic$ we get a family of compact tori
$$
S^1\isom T_c\hookrightarrow SL_2(\mathbb{R}).
$$

\

\centerline{\bf \S 2. Relation to $\mathbb{G}_m$}

\

{\bf 2.1. } We claim that for $h\in k^*$ the group law $\oplus_h$ is (essentially) isomorphic to $\mathbb{G}_m$. We have
$$
1 + h\frac{u + v}{1 + h^2uv} = \frac{(1 + hu)(1 + hv)}{1 + h^2uv},
$$
therefore
$$
\frac{1 + h(u\oplus_{h}v)}{\sqrt{a_{h}(u\oplus_{h}v)}} = \frac{1 + hu}{\sqrt{a_{h}(u)}}\cdot \frac{1 + hv}{\sqrt{a_{h}(v)}}
\eqno{(2.1.1)}
$$
This means that if we define
$$
\beta_h:\ \mathbb{L}_{h} \longrightarrow \mathbb{G}_m
$$
by
$$
\beta_h(u) = \frac{1 + hu}{\sqrt{1 - h^2u^2}} = \sqrt{\frac{1 + hu}{1 - hu}}
\eqno{(2.1.2)}
$$
(on the level of points) then $\beta_h$ is a morphism of group laws. The inverse map
$$
\beta_h^{-1}:\ \mathbb{G}_m \longrightarrow \mathbb{L}_h
$$
is
$$
v\mapsto \frac{1}{h}\frac{v^2 - 1}{v^2 + 1},
\eqno{(2.1.3)}
$$
well defined if $h\in k^*$. In fact we can recover the Lorentz group law (1.1.1) by transferring the group law from $\mathbb{G}_m$ using the maps $\beta_h, \beta_{h}^{-1}$.

\

{\it Relation to tori}

\

The other way around, we can start from the matrices $A_h(u)$. We remark that $A_h(u)$ has eigenvalues $\lambda_\pm(u) = 1 \pm hu$ with eigenvectors
$$
v_\pm = \left(\begin{matrix}\mp h\\ 1 \end{matrix}\right).
$$
This implies that
$$
C_h^{-1}A_h(u)C_h = \left(\begin{matrix} 1 + hu & 0 \\ 0 & 1 - hu \end{matrix}\right)
$$
where
$$
C_h = \left(\begin{matrix} - h & h \\ 1 & 1 \end{matrix}\right).
$$
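Dividing by $a_h(u)^{1/2}$, we obtain for the normalized matrices
$$
C_h^{-1}B_h(u)C_h = \left(\begin{matrix} \beta_h(u) & 0 \\ 0 & \beta_h(u)^{-1} \end{matrix}\right),
$$
with $\beta_h$ as in (2.1.2).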
Therefore the subgroup $\mathbb{L}_h\subset SL_2$ is conjugate by means of $C_h$ to the standard maximal torus, and the map $\beta_h$ is induced by this conjugation. Thus we have got a family of tori which degenerates as $h\longrightarrow 0$ to the horospheric subgroup $\mathbb{L}_0 = U_-$.

\

{\bf 2.2. SOS group.} Cf. [W], [SOS], [SS]. Let
$$
\mathbb{S}_h = \mathbb{G}'_h = \operatorname{Spec} k[x, (1 + hx)^{-1}]
$$
We have a morphism
$$
\alpha_h:\ \mathbb{S}_h\longrightarrow \mathbb{G}_m = \operatorname{Spec} k[z, z^{-1}],\ z\mapsto 1 + hx
$$
which is an isomorphism if $h\in k^*$. On the level of points:
$$
\alpha_h(u) = 1 + hu,
$$
$$
\alpha^{-1}_h(v) = \frac{1}{h}(v - 1).
$$
Using it we can transfer the group law on $\mathbb{G}_m$ to a group law on $\mathbb{S}_h$. Since
$$
(1 + hx)(1 + hy) = 1 + h(x + y + hxy),
$$
the resulting group law on $\mathbb{S}_h$ will be (on the level of points)
$$
u \oplus'_h v = u + v + huv
\eqno{(2.2.1)}
$$
which makes sense for any $h\in k$. The inverse:
$$
u\mapsto - \frac{u}{1 + hu}.
$$

\

{\it The Kummer map}

\

For any $n\in \mathbb{Z}$ we have a group map
$$
n:\ \mathbb{G}_m \longrightarrow \mathbb{G}_m,\ v\mapsto v^n.
$$
We define a group map
$$
\psi_n:\ \mathbb{S}_h \longrightarrow \mathbb{S}_{h^n}
$$
by transferring, so that the square
$$
\begin{matrix}
\mathbb{S}_h & \overset{\psi_n}\longrightarrow & \mathbb{S}_{h^n}\\
\alpha_h\downarrow & & \downarrow\alpha_{h^n}\\
\mathbb{G}_m & \overset{n}\longrightarrow & \mathbb{G}_m\\
\end{matrix}
$$
would be commutative, whence the formula
$$
\psi_n(u) = \frac{(hu + 1)^n - 1}{h^n},
\eqno{(2.2.2)}
$$
cf. [SOS].

\

{\bf 2.3. The Kummer map for $\mathbb{L}_h$.} We play the same game with $\mathbb{L}_h$. Namely, a morphism of groups $\phi_n$ included into a commutative diagram
$$
\begin{matrix}
\mathbb{L}_h & \overset{\phi_n}\longrightarrow & \mathbb{L}_{h^n}\\
\beta_h\downarrow & & \downarrow\beta_{h^n}\\
\mathbb{G}_m & \overset{n}\longrightarrow & \mathbb{G}_m\\
\end{matrix}
$$
is defined by
$$
\phi_n(u) = \frac{1}{h^n}\,\frac{(1 + hu)^n - (1 - hu)^n}{(1 + hu)^n + (1 - hu)^n}.
$$

\

\centerline{\bf \S 3. Roots of unity and Artin-Schreier}

\

In 3.1, 3.2 below we recall [SOS]; in 3.3 we describe its analogue for the Lorentz group.

\

{\bf 3.1. $p$-cyclotomic extension of $\mathbb{Q}_p$. } Following [SOS], consider the ring $k = \mathbb{Z}_p[\zeta_p]$, $\zeta = \zeta_p = e^{2\pi i/p}$, $p$ being a prime number. Thus $k$ is the ring of integers in $\mathbb{Q}_p(\zeta_p)$. Let $h = \zeta - 1 \in k$.

We have a totally ramified extension of degree $p - 1$,
$$
\mathbb{Z}_p \subset k = \mathbb{Z}_p[y]/(1 + y + \ldots + y^{p-1});
$$
$k$ is a discrete valuation ring with uniformising parameter $h$ and residue field $k/(h) = \mathbb{F}_p$, cf. [Se], Ch. IV, Prop. 17. We have
$$
h^{p-1} = w p
\eqno{(3.1.1)}
$$
where $w\in k^*$ and
$$
w \equiv - 1 \mod h.
\eqno{(3.1.2)}
$$
{\bf 3.1.1. Example.} Let $p = 3$; then:
$$
\zeta^2 + \zeta + 1 = 0,
$$
$$
h^2 = (\zeta - 1)^2 = - 3\zeta,
$$
$$
w = - \zeta = (1 + \zeta)^{-1} = - 1 - h.
$$

\

(a) {\it The group $\mathbb{S}_h$}

\

{\bf 3.2.} Consider the map
$$
\psi_p:\ \mathbb{S}_h\longrightarrow \mathbb{S}_{h^p},
$$
$$
\psi_p(u) = \frac{(hu + 1)^p - 1}{h^p}
$$
Due to $(3.1.1)$ this map is defined over $k$. On the other hand, due to $(3.1.2)$ its special fiber
$$
\psi_{p,s} := \psi_p\otimes_k\mathbb{F}_p: \mathbb{G}_{a,\mathbb{F}_p} \longrightarrow \mathbb{G}_{a,\mathbb{F}_p}
$$
coincides with the Artin-Schreier map
$$
\psi_{p,s}(u) = \wp(u) := u^p - u,
$$
cf. [SOS], (2.2.2), (2.2.3).
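Let us spell out this last assertion. Expanding the binomial and using $(3.1.1)$ we may write
$$
\psi_p(u) = u^p + \sum_{i=1}^{p-1}\binom{p}{i}h^{i-p}u^i
= u^p + \frac{1}{w}\sum_{i=1}^{p-1}\frac{1}{p}\binom{p}{i}h^{i-1}u^i,
$$
where all the coefficients lie in $k$ since $p$ divides $\binom{p}{i}$ for $0 < i < p$. The terms with $i \geq 2$ are divisible by $h$, while the term with $i = 1$ equals $u/w$; hence, by $(3.1.2)$,
$$
\psi_p(u) \equiv u^p - u \mod h,
$$
as claimed.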
\

(b) {\it The group $\mathbb{L}_h$}

\

{\bf 3.3.} Similarly, consider the map
$$
\phi_p:\ \mathbb{L}_h\longrightarrow \mathbb{L}_{h^p},
$$
$$
\phi_p(u) = \frac{1}{h^p}\,\frac{(1 + hu)^p - (1 - hu)^p}{(1 + hu)^p + (1 - hu)^p}.
$$
Again, due to $(3.1.2)$ its special fiber at $h = 0$ coincides with the Artin-Schreier map.

\

\centerline{\bf \S 4. Remarks on $q$-deformed tori}

\

In this Section we use our viewpoint to clarify somewhat the contents of [MRT], 6.4.1.

{\bf 4.1. Filtered circle.} In [MRT], 6.4.1 the authors introduce an object $\mathbf{S}^1_{\operatorname{Fil},\mathbb{Z}}$ which they call the ``filtered circle over $\mathbb{Z}$''. They start with the group law $\mathbb{S}_h$ (2.2.1) over $\mathbb{A}^1_\mathbb{Z}$, which they denote $\mathbb{G}$, $h$ (their $\lambda$) being the coordinate along $\mathbb{A}^1$. Let $\mathcal{R} = \operatorname{Dist}(\mathbb{G})$ denote the algebra of distributions - the dual of the algebra of functions on $\mathbb{G}$; it is a filtered commutative and cocommutative Hopf algebra. They define a filtered group scheme
$$
\mathbf{H}_\mathbb{Z} = \operatorname{Spec}(Rees(\mathcal{R}))
$$
where $Rees$ denotes the Rees construction. By definition $\mathbf{S}^1_{\operatorname{Fil},\mathbb{Z}}$ is its classifying stack
$$
\mathbf{S}^1_{\operatorname{Fil},\mathbb{Z}} = B\mathbf{H}_\mathbb{Z}.
$$
This construction can be $q$-deformed. To do this, the authors remark that the fiber at $h = 1$, $\mathcal{R}_1 = \operatorname{Dist}(\mathbb{G}_1)$, is isomorphic to the algebra of polynomials $f(x)\in\mathbb{Q}[x]$ such that $f(\mathbb{Z})\subset \mathbb{Z}$ (I am grateful to B.Toen who explained the above statement to me). The latter algebra admits a $q$-deformation $\mathcal{R}_{1,q}$ introduced in [HH]. This allows one to define a $q$-deformation $\mathcal{R}_q$ of $\mathcal{R}$.

It was remarked in [HH] that the algebra $\mathcal{R}_{1,q}$ is isomorphic to the standard Cartan subalgebra $K_q$ of Lusztig's divided power quantum group $U_q\mathfrak{sl}_2$.

\

{\bf 4.2. Lorentzian counterpart.} We may replace $\mathbb{S}_h$ in the above discussion by $\mathbb{L}_h$, which by definition lies inside $SL_2$; so we consider a model $SL_{2,\mathbb{Z}}$ of $SL_2$ over $\mathbb{Z}$, and $\{\mathbb{L}_h\}$ is a family of conjugate tori inside it. We can consider a $q$-deformation of $\operatorname{Dist}(SL_{2,\mathbb{Z}})$, i.e. the corresponding quantum group $U_q\mathfrak{sl}_2$, and it would be natural to think that the family of quantum tori from 4.1 may be identified with a $q$-deformation $\mathbb{L}_{h,q} \subset U_q\mathfrak{sl}_2$. However, a realization of this idea meets some difficulties at the moment.

\

{\bf 4.3. Towards a filtered $S^3$.} The Lie group $SL_2(\mathbb{C})$ is homotopy equivalent to $SU(2)$ (Weyl's unitary trick), which is in turn homeomorphic to the sphere $S^3$. This raises the question of whether the whole package of [MRT] and [R] admits a noncommutative ``quaternionic'' version, with $S^1$ replaced by $S^3$, cf. [Sch].

\

\centerline{\bf Literature}

[E] A.Einstein, Zur Elektrodynamik bewegter K\"orper, {\it Ann. der Physik} {\bf 17} (1905), 891-921.

[F] Ph.Furtw\"angler, \"Uber die Reziprozit\"atsgesetze zwischen $l$-ten Potenzresten in algebraischen Zahlk\"orpern, {\it Abh. K\"oniglichen Ges. Wiss. G\"ottingen} {\bf 2} (1902), Heft 3, 1-82; {\it Math. Annalen} {\bf 58} (1904), 1-50.

[B] Yaroslav Bazaliy, Three zoom talks on Special relativity, Toulouse "Oxford" seminar, Fall 2020.
[GMS] I.M.Gelfand, R.A.Minlos, Z.Ya.Shapiro, Representations of the rotation and Lorentz groups and their applications.

[HH] Nate Harman, Sam Hopkins, Quantum integer-valued polynomials, arXiv:1601.06110.

[H] Helmut Hasse, History of class field theory, in: Algebraic Number Theory, J.W.S.Cassels, A.Fr\"ohlich (eds), 1967, 266-279.

[MRT] Tasos Moulinos, Marco Robalo, Bertrand Toen, A universal HKR theorem, arXiv:1906.0011.

[M] N.David Mermin, It's about time.

[R] Arpon Raksit, Hochschild homology and the derived de Rham complex revisited, arXiv:2007.02576.

[Sch] V.Schechtman, Moufang loops and toric surfaces, arXiv:2102.12868.

[SOS] T.Sekiguchi, F.Oort, N.Suwa, On the deformation of Artin-Schreier to Kummer, {\it Ann. Sci. ENS} {\bf 22} (1989), 345-375.

[Se] J.-P.Serre, Corps locaux.

[ST] J.-P.Serre, J.Tate, Exercises, in: Algebraic Number Theory, J.W.S.Cassels, A.Fr\"ohlich (eds), 1967, 266-279.

[SS] Noriyuki Suwa, Tsutomu Sekiguchi, Th\'eorie de Kummer-Artin-Schreier et applications, {\it J. Th. Nombr. de Bordeaux} {\bf 7} (1995), 177-189.

[W] William C.Waterhouse, A unified Kummer-Artin-Schreier sequence, {\it Math. Ann.} {\bf 277} (1987), 447-451.

\

\end{document}
arXiv
The Coronavirus Pandemic: the biggest crisis to hit the world in 100 years

Covid-19 has transformed nearly every aspect of our world. We have all experienced the benefits of waking up 10 minutes later for online school, being able to wear PJ pants on a Zoom call, and spending more time with our families. However, our friends, families and communities have had their lives changed in critical ways that promise to have much longer-lasting effects. From work and play to mental health and our sense of community, the coronavirus crisis has brought huge changes to the way we live. Telemedicine is now a permanent part of Australian healthcare, working from home has become normalised, and we awkwardly bump elbows (with the occasional guilty hug). Many countries have formed plans to get through this pandemic, such as enforcing lockdowns, stay-at-home orders or bans on overseas travel. Together we will look at the effects of Covid-19 on Australia and our world, and how it brought attention and funding to science, opened new areas of research, helped create new technology and innovative approaches, caused political problems, and spread false beliefs.

COVID: THE APPEARANCE

A pandemic is an outbreak of disease that affects the whole world. There have been relatively few pandemics in history; examples include the Black Plague, smallpox, bird flu, and Covid-19. The World Health Organization declared the Covid-19 outbreak a Public Health Emergency of International Concern on 30 January 2020, and a pandemic on 11 March 2020.

This image explains the difference between an endemic, an epidemic, and a pandemic. Sourced: Google.com. (2011). covid 19 graphs australia - Google Search. [online]

Covid-19 (caused by 'severe acute respiratory syndrome coronavirus 2', SARS-CoV-2) is an airborne, highly contagious illness that can be fatal. Covid-19 has very similar symptoms to the flu, the most common being sore throat, fever, and cough, and it mainly attacks the upper airways. The risk of complications or death is low, but that doesn't mean we shouldn't be concerned. Covid-19 is caused by infection and can spread quickly through droplets of saliva or mucus released when you cough or sneeze. You could also pick up the virus by touching surfaces that were contaminated by others who were infected. Because of this, many health precautions were put into place, including an emphasis on sanitisation in order to remove contact with the virus. The Covid-19 variant that is currently spreading is the Delta variant, which is much more contagious and more dangerous.

Sourced: Adobe Spark Free Photos, shows how Delta is the 'most dangerous' variant.

The pathogen that causes Covid-19 is a viral pathogen. The coronavirus type that infects humans and has caused the pandemic is SARS-CoV-2. We are still unsure how the virus originated; the first cases were reported on the 31st of December 2019, when the World Health Organization (WHO) was informed of cases of pneumonia of unknown cause in Wuhan City, China. The coronavirus was identified as the cause by Chinese authorities on 7 January 2020 and was temporarily named "2019-nCoV". Coronaviruses have been around for a long time and can cause illnesses other than Covid-19, such as the common cold.

These are the different types of pathogens. The pathogen causing Covid-19 is a viral pathogen. A pathogen is a type of germ. Pathogens are microorganisms that cannot be seen without scientific equipment. These pathogens can cause sickness, and we can come into contact with them and become sick at any time.
The pathogen causing Covid-19 is a viral pathogen. Viral pathogens can also cause the common cold and diseases such as chickenpox. Sourced: Google.com. (2011). pathogens types - Google Search. [online] Available at: https://www.google.com/search?q=pathogens

At the moment, if you are infected with Covid-19 you are to stay home and isolate while sick. The illness lasts from 5-14 days, and once recovered you are required to take another Covid-19 test to make sure you are no longer positive. For recovery, stay home, get some rest and take Panadol or an equivalent medication for the symptoms. It is very important that if you feel lightheaded, have chest pain, or have shortness of breath you go to the emergency department and seek medical help, as these can be signs of fatal complications.

Your body uses its immune system to fight off pathogens and try to stop them. Covid-19 is infectious, but your three lines of defence help you to fight off the disease. Your first line of defence includes physical barriers such as your nose hair and skin, and chemical defences in your mouth, throat, lungs, urine and bowel, as well as saliva and tears. These try to keep you safe; however, if they fail, you have your second line of defence. The second line of defence uses non-specific defences: white blood cells such as neutrophils and macrophages help stop the pathogen and make you better. The final line is not usually needed, as the first two can cope. The third line also uses white blood cells, called lymphocytes, which target specific pathogens and are stronger than the other white blood cells. This is the immune system that helps protect you from Covid-19, but you can still catch it, which is why it is important to wash and sanitise your hands and wear a mask. There is an image and explanation below that will help explain the immune system.

The lines of immune defence protect you from pathogens so you don't get sick. Google.com. (2011). covid 19 graphs australia - Google Search [Accessed 20 Oct. 2021]

Covid-19 can affect everyone differently. Some people may get mild flu-like symptoms such as fever, headache, runny nose, and sore throat when they are infected. Around 14% of people may have more serious symptoms such as chest pain, shortness of breath, loss of speech or mobility, or confusion. Some people may also be asymptomatic, which means they have Covid-19 but do not show symptoms. Most people who are infected with Covid-19 will have the mild or moderate version and will recover at home in isolation over 5-14 days. These are all the symptoms of Covid-19. Sourced: Healthdirect.gov

The first case of Covid-19 was reported on December 31st, 2019 in China. The first case of Covid-19 in Australia was confirmed on January 25, 2020. We went into our first lockdown in March 2020, and shut our borders on the 20th of March 2020. Covid-19 is one of the biggest pandemics to hit the world: there have been 219 million cases and 4.55 million deaths. As of right now, Victoria's total is 52,902 cases and 905 deaths. In the whole of Australia, there is a total of 127k cases and 1,432 deaths; most have come from Victoria and NSW.

The above image is a timeline of Covid-19. As we can see, the first case was in Wuhan, China, and the virus then spread to the Grand Princess cruise ship, which had passengers from all over the world on it. This spread the virus to multiple different countries.
Countries all over the world then shut their borders. We then started to get hope as vaccinations started being tested. We are now hopefully near the end of the pandemic, and people are getting vaccinated all over the world as quickly as possible. Sourced: Yalemedicine.com

From the graph below we can see that, within Australia, Victoria has had the most cases and Tasmania has had the least. We can also see that the age group with the most cases is 20-29 year old females, followed by 20-29 year old males. Graph explaining the statistics of the virus. Sourced: Australian Government Department of Health, https://www.health.gov.au/.

COVID: THE CRISIS

WHAT IS CONSIDERED A CRISIS?

A crisis is considered a time of intense difficulty or danger in which difficult or important decisions must be made. Covid fits this broad definition of a crisis; however, the term can also be used for many things that have little to no long-term impact or that only affect the individual.

WHY IS COVID THE BIGGEST CRISIS?

The coronavirus is not limited to affecting only some countries, or only some types of countries and climates around the world. It is in fact having a large impact on every continent and a moderate to severe impact on all countries. No country has been left unscathed by this disease. Moreover, even countries that have had little or no coronavirus within their borders are still heavily impacted by its effects.

Map showing different countries and their covid cases; National Institute of Allergy and Infectious Diseases, N. (2021). Graph of covid stats [Graph]. Worldwide map of COVID-19. https://www.nationsonline.org/oneworld/map

The effects of Covid reach all corners of the world, through its effects on individuals as a virus and on communities and countries as an economic disaster. As a virus, Covid is undeniably disastrous: it is highly contagious, it kills around 2-3% of unvaccinated people who catch it, and it had already spread to most countries before most people had heard its name, although North Korea and Turkmenistan both dubiously claim to be COVID-free, according to ABC News. The reason Covid is so far-reaching is its much higher infection rate than the common flu, one of the most well-known viruses because nearly everyone catches it at some point in their life. Because of this, Covid has infected 219 million people worldwide within 2 years, killing around 4.55 million.

HOW HAS IT AFFECTED AUSTRALIA?

Within Australia, the country is struggling with the number of cases: roughly 116k in total and 23,300 active, as stated on 7/10/21 by the Australian Department of Health. Moreover, Covid has caused large economic damage to many countries; one example is Australia becoming a two-speed economy, mostly in NSW, VIC and QLD.

Graph showing the number of cases per day in all Australian states. Victorian COVID-19 data | Coronavirus Victoria. (2021). Retrieved 7 October 2021, from https://www.coronavirus.vic.gov.au/victorian-coronavirus-covid-19-data

WHAT IS A TWO-SPEED ECONOMY?

A two-speed economy is where one sector of an industry or business grows at a much more rapid rate than another, often slowing the rate of growth in the smaller sector. In this case, during covid some people made the most money they have ever made in their lives while others struggled to get a job or to keep working.
For example, the retail trade grew in total profit by 27.4% ($6.3b) from 2019 to 2020, while accommodation services (including hotels and motels) and food services (including restaurants and pubs) together declined significantly by 13.1% (-$5.6b) over the same period. This is all according to the Australian Bureau of Statistics: Australian Industry, 2019-20 financial year. (2021). Retrieved 27 October 2021.

EFFECT ON GOVERNMENT APPEARANCE

The severe effects of the coronavirus are also affecting the way some people think about the government in Australia, as many are unsure about the way the premiers and acting governing bodies have responded and tried to deal with the pandemic. This distrust of the acting governing bodies is directed mostly at the Victorian, New South Wales and Queensland premiers and their cabinets, as well as at Scott Morrison and the federal government. The Victorian and New South Wales governments have lost some trust and come under scrutiny due to the multiple long lockdowns and the failure to contain covid outbreaks, and the Queensland government has come under judgement due to its inability to deal with covid efficiently beyond simply closing the borders. This in turn has caused people at the political extremes to lose more trust in parts of government, or given them more reason to support their ideology. This can be seen in Australia, where people are spreading their ideologies and ideas through the internet to those who aren't sure what to believe. This can be dangerous, with people spreading incorrect or harmful information.

THE PUSH AGAINST THE GOVERNING BODIES

Melbourne descends into chaos as police arrest 62 and fire rubber pellets at anti-lockdown protesters (2021). Available at: https://www.theguardian.com (Accessed: 13 October 2021).

Moreover, there have been recent protests in response to the double-vaxxed-to-work rule which has been implemented in Victoria. The rule, as stated on the vic.gov website, covers "Workers in residential aged care facilities, Workers at construction sites, Workers in healthcare settings, Workers at school, childcare and early education services (plus outside school hours care services), Authorised Providers and Authorised Workers".

There have been other recent cases of false information being spread around on the internet, for example the recent cases of a small part of the community taking horse dewormer; while there are human versions of the drug, they were using the formulation made for horses. This was due to a laboratory experiment showing ivermectin could, to an extent, stop covid from multiplying in animal cells under a microscope. However, this was a single experiment and it was not proven to work on human cells. Instead, people have only read the title of a document and taken from it the information that they wanted to see. Link to article explaining the situation (more on this mentioned below).

COVID: CONTROL AND CONSPIRACY

Even with the current resources and enforcement, Covid-19 is still not being controlled in Victoria and many other parts of Australia. But what are these resources and rules, and why do people choose not to obey?

THE IMPACT COVID HAS HAD ON UTILIZING RESOURCES AND PROGRESSING SOCIETY

Covid-19 has enormously aided advances in science and future technology. Governments around the world have funded research and innovation conducted by universities, public research institutions and firms in the aftermath of the COVID-19 crisis.
STI, which stands for science, technology and innovation, plays a critical role during the pandemic in building a more resilient, sustainable and inclusive future. As a result of the Covid-19 virus, new and improved forms of scientific communication have evolved. One example is the amount of data being published on social media platforms by scientists and other authorities in the field. They are reviewing, editing, analysing, and publishing manuscripts and data speedily, for quick access and incorporation into research. The public are also learning much more about science and technology, as they are faced with it every day: more people are tuning in to the news, reading articles and conducting research. The vaccine was also created using new and improved technology; however, some of the population is still skeptical about how safe it really is.

THE VACCINE:

Australia began receiving the COVID-19 vaccine in February. The Australian Government's first priority was to protect our most vulnerable Australians, as well as the frontline heroes who are protecting all of us. This includes aged care and disability care residents and workers, frontline health care workers, and quarantine and border workers. The first vaccine available to Victoria specifically was AstraZeneca; this was given priority before Pfizer came along. According to the World Health Organization (WHO), there are at least 13 different COVID-19 vaccines currently in use around the world, including the Pfizer-BioNTech vaccine and the AstraZeneca vaccine.

Vaccines are one of the safest and most effective health interventions for infectious diseases. They have had a staggering impact on reducing the burden of infectious disease worldwide. However, a vaccine is only effective if people are willing to receive it. With such rapid research and development, many Victorians are concerned that the vaccine was rushed, and with these concerns comes vaccine hesitancy.

Why do people oppose vaccines? Refusal of vaccines started back in the early 1800s, when the smallpox vaccine began being used in large numbers. The idea of injecting someone with part of a cowpox blister to protect them from smallpox faced a lot of criticism, based on sanitary, religious, and political objections; some believed that the vaccine went against their religion. Although vaccine hesitancy across Australia, and Victoria, continues to steadily fall, from 15% on 23rd September to 13.3% on the 10th of October, there is still much that the state needs to combat. The current hesitancy rate suggests that Australia can achieve at least 87% of the population fully vaccinated, consistent with the steady rise of first-dose uptake across states.

Most diseases prior to Covid-19 had very few conspiracies surrounding them, for example the measles vaccine and flu injections. The covid-19 vaccine, however, became the subject of much debate over whether science is to be trusted, whether the government is to be trusted, and whether the vaccines are safe.

Vaccine Hesitancy Report Card (2021), Melbourne Institute: Applied Economic & Social Research. The above image represents the proportion of the adult population who were vaccine hesitant on the 5th of February 2021. Victoria comes second, shortly under Queensland, at 32.03%.

Vaccine Hesitancy Report Card (2021), Melbourne Institute: Applied Economic & Social Research. The above image represents the proportion of the adult population who were vaccine hesitant on the 10th of October 2021.
All states' vaccine hesitancy has dropped as vaccination has become a requirement, and there is much encouragement to have it. Since we witnessed covid-19 slowly spreading around the world, first starting in China and then reaching Australia, we formed beliefs along the way as we saw the virus grow and mutate along with us. With the newfound reach of the media now available, it is very easy for unreliable information to be spread around. Even outside covid-19, there are conspiracies about companies, famous actors and more, so it isn't a surprise that those who struggle to trust jump to find faults in the virus and the vaccine. Health professionals and physicians have been combating misconceptions about vaccines for over twenty years. There are many excuses and conspiracies used to avoid receiving the vaccine; some popular ones include:

CONSPIRACY/EXCUSE 1: Vaccines cause autism.

In a 1998 study published by the British surgeon Andrew Wakefield, it was suggested that the measles, mumps, rubella (MMR) vaccine was increasing autism in British children. The article was published in a prestigious medical journal named The Lancet, and raised fear in many that vaccines caused autism. The 'study' has been discredited several times by many scientists and doctors, who say that the paper includes ethical violations, procedural errors and more. Due to the falsehoods of the paper, Wakefield lost his medical licence and the paper was retracted from The Lancet, and many still condemn what he published. Even though it was shown to be false, the hypothesis was taken seriously, and several other major studies were conducted. None of them found a link between any vaccine and the likelihood of developing autism, but members of the public who do not wish to take a vaccine still use it to support their argument. You can read more about Andrew Wakefield's paper below: Andrew Wakefield/Vaccines cause Autism. Sourced by: The Guardian, Photograph: Daniel Berehulak/Getty (Andrew Wakefield and his then-wife Carmel in 2007, flanked by supporters ahead of an appearance before the GMC)

CONSPIRACY/EXCUSE 2: Natural immunity is better than vaccine-acquired immunity.

The most common reason for people to oppose vaccines is the belief that their bodies can combat a virus more effectively without the need to inject anything into their systems. In some cases, natural immunity does result in a stronger immunity to the disease than vaccination. However, the dangers of facing the virus without a vaccine far outweigh the benefits: the chance of death when contracting covid is much higher than the chance of death from receiving a Covid vaccine. If you wanted to gain immunity to measles, for example, by contracting the disease, you would face a 1 in 500 chance of death from your symptoms. In contrast, the number of people who have had severe allergic reactions to the MMR vaccine is less than one in one million! After vaccination, some side effects such as soreness and fever are common, but this is only a fraction of what the real virus would feel like if you were to contract it. According to CHOP, the HPV, tetanus, Haemophilus influenzae type b (Hib), and pneumococcal vaccines induce a better immune response than natural infection. The official science.org website explains that even though natural immunity is an extremely powerful shield against the virus, the vaccine is still vital.
Why the Vaccine is still Vital Despite Immunity/Science.org. A Jerusalem health care worker in January prepares a dose of the Pfizer-BioNTech vaccine designed to prevent COVID-19. (Sourced: Science.org, Picture: ahmad gharabli/AFP VIA GETTY IMAGES)

CONSPIRACY/EXCUSE 3: 'The AstraZeneca vaccine could give me a blood clot.'

Information was released that the AstraZeneca vaccine has been linked to an extremely rare blood-clotting condition called thrombosis with thrombocytopenia syndrome (TTS). In reality, the risk of contracting it is extremely small: there were 87 cases of TTS in Australia by early August, from 6.1 million doses of the AstraZeneca vaccine. Five of those people have sadly passed away, which means the rate of deaths from TTS caused by AstraZeneca is less than 1 in a million. TTS can be dangerous, but since it was identified, health professionals have become well trained to recognise and treat it, and the chance of survival has become much higher. If you are worried about the AstraZeneca vaccine and the chance of blood clots, there are several other safe and secure vaccines available. (Sourced: BBC article, https://www.bbc.com/news/explainers-56665396, (Cuffe, 2021))

Above is a table comparing the risks of the AstraZeneca vaccine with other risks we currently face; in reality the vaccine's risk is much lower, and the vaccine is beneficial. The AstraZeneca vaccine is safe and low risk. In fact, you're more likely to get a blood clot from COVID-19 itself, and from many other things: oral contraceptive pills, from which 1,200 out of 1 million women will develop a blood clot; long flights, where for every 4,500 flights over 4 hours there will be 1 incident of blood clotting; non-steroidal anti-inflammatories (NSAIDs); and many others. It is a personal decision whether or not you would like to take the AstraZeneca vaccine (unless of course it is the only one available to your age group); however, this risk is not an excuse to invalidate the vaccine or discredit the need for it.

CONSPIRACY/EXCUSE 4: 'I don't trust the vaccines because they were developed quickly.'

There is much speculation about how quickly the vaccines were created. While it may seem that they were created 'overnight', scientists and health professionals worked with manufacturers to develop a vaccine as soon as the pandemic occurred. For example, Oxford University in the UK had years of research behind the AstraZeneca vaccine: the technology behind it has been developed over the past 10 years and was tested on diseases like the flu and other coronaviruses, such as MERS. The funding helped the vaccine trials to be fast-tracked yet remain accurate. All of the Covid-19 vaccines approved for use in Australia have been through the same trials as any past vaccine. As more vaccines are developed around the world, such as the Johnson and Johnson vaccine and Sputnik V alongside Pfizer, Moderna and the COVAX program, the new and improved data is showing the importance of the vaccines and how highly effective they are at protecting the population.

This data by (Our World in Data, google.com) shows how many people have received at least one dose of a vaccine in Australia. People who are fully vaccinated may have received more than one dose.

CONSPIRACY/EXCUSE 5: 'Forcing me to wear a mask violates my human rights.'

Many Victorians may have seen the video of the woman arguing with Bunnings staff members who had nicely asked her to wear her mask, as per the Victorian Government's emergency directive.
The woman refuses, crying discrimination and claiming it is a violation of the "1948 Charter of Human Rights", by which she is referring to the Universal Declaration of Human Rights, proclaimed by the UN in 1948. However, Victoria also has its own Charter of Human Rights, which sets out the basic rights, freedoms and responsibilities of Victorians. Like all states, Victoria has legislation which enables the State to make lawful directives when a state of emergency is declared. These emergency laws essentially give the government the power to make directives it usually wouldn't be able to make in normal circumstances, in order to protect the community. Masks are effective; they may be uncomfortable, but they are not a violation of human rights. Face coverings limit the volume and travel distance of expiratory droplets dispersed when talking, breathing, and coughing. A face covering without vents or holes will also filter out particles containing the virus from inhaled and exhaled air, reducing the chances of infection. news.com.au, Nationwide News Pty Ltd © 2021. Are masks effective? Graph.

WHAT ARE THEY DOING TO COMBAT THE CONSPIRACIES?

Victoria is trying to combat these conspiracies and excuses every day. Daniel Andrews speaks about at least one or two every time he holds a press conference, to remind Victorians that what they hear in the media is not always true, and that actions against the policy or the new roadmap are not justified. On several occasions he has had to call out those who did not follow the policy and share the consequences with the nation, because the consequences of the 'beliefs' of people who refuse to wear a mask or get vaccinated can be fatal, and he wants to put an end to them. Victoria now requires that all workers in Melbourne and regional Victoria on the Authorised Worker list receive their first COVID-19 vaccine dose by Friday, 15 October in order to continue working onsite, and they will need to be fully vaccinated by 26 November. If they fail to meet these standards or refuse to be vaccinated, they will lose their job. Daniel Andrews talks about how it is a 'choice' to be vaccinated, but because of the current protests and misinformation, as well as the spike in cases, he must do what he can. Although labelled a choice, there is really no choice for Victorians not to get vaccinated, and those who refuse can face serious trouble.

Many do not follow the order to vaccinate or wear protective gear for personal reasons such as religion or culture. However, many pastors and faith leaders, including the Pope, have come together with the aid of government officials to encourage their communities to take the right steps in order to protect themselves, keep their jobs and help the community. The above video shows Pope Francis urging people to get vaccinated against Covid-19, calling it an act of love toward others. It was produced to encourage Christians who are hesitant because of their beliefs to get vaccinated. (Vatican News - English, 2021)

THE DIFFERENCE BETWEEN THE PFIZER AND ASTRAZENECA VACCINE:

There are key differences between the more popular vaccines, Pfizer and AstraZeneca, that make people in Victoria and around the world wonder which one to choose. The AstraZeneca vaccine, created by Oxford, was introduced to Australia from the 8th of March 2021. It was beneficial to our country as 'Australia is in a unique position because importantly this vaccine gives us the ability to manufacture onshore.'
It was originally targeted only at priority groups, including health care workers and people over the age of 60. Professor Alison McMillan, Chief Nursing and Midwifery Officer, states that the reason it was originally given only to those over 60 was that the TTS condition we spoke about earlier (thrombosis with thrombocytopenia syndrome) was more frequent in the younger age groups. For people 60 and over, the benefit of the vaccine in protecting against severe disease and death is much greater than the risk of getting the COVID-19 virus, as older people are much more vulnerable, according to the Australian Government Department of Health (and the video attached below!).

The AstraZeneca vaccine uses the traditional method and older technology that was used when creating past vaccines. The AstraZeneca-Oxford vaccine is known as a "chimpanzee adenovirus-vectored vaccine". This uncommon term means that the creators of the vaccine took a virus that normally infects chimpanzees and genetically modified it to avoid any possible disease consequences in people. The modified version of this "chimpanzee adenovirus-vectored vaccine", also known as AstraZeneca, carries a portion of the coronavirus called the "spike protein". When the vaccine is injected and makes its way into a human cell, it triggers an immune response against that protein, producing antibodies and memory cells that will be able to recognise the virus that causes Covid-19. This method is used in other vaccines, such as malaria and Ebola vaccines. However, the Pfizer vaccine is different.

Sourced: RACGP, (NewsGP, 2021). A photo of the Pfizer and AstraZeneca vaccines being held beside each other.

The Pfizer vaccine (as well as Moderna) uses a new and improved technology instead of sticking to the traditional method. The Pfizer-BioNTech vaccine uses mRNA technology that scientists have been working on for many years; some of the earliest successful mRNA vaccine clinical trial results were published in 2008. How the vaccine works in your body, according to healthline.com:

After you receive the vaccine, the mRNA that it contains is taken up by nearby cells. Once the mRNA is inside a cell, it remains outside of the nucleus and doesn't directly impact your DNA. The mRNA in the vaccine provides instructions to the cell on how to produce the spike protein that's found on the surface of SARS-CoV-2. The virus uses this protein to bind to and enter cells before it begins reproducing and spreading throughout your cells. Using the information provided by the vaccine mRNA, the cell makes spike protein. When this process has been completed, the mRNA is destroyed. The spike proteins that the cell has produced are then displayed on the cell surface. Immune cells in your body can now recognise the spike protein as a foreign substance and work to build an immune response against it. Your immune system can now produce antibodies and other immune cells that specifically recognise the SARS-CoV-2 spike protein. These tools can help protect you from becoming sick if you're exposed to the novel coronavirus.
We have not fully established this new technology, but it is supported by much research and we will most likely use this method going forward. According to Forbes, both Moderna and Pfizer, whose vaccines use an mRNA platform, found their vaccines to be about 95% effective. The AstraZeneca/Oxford vaccine showed somewhat lower efficacy, but is less expensive and poses fewer issues in distribution and administration. "We can only hope that together with Pfizer and also Moderna and also Astrazeneca, we will manufacture enough doses." Dr. Ruud Dobber, president of AstraZeneca U.S.

LOCKDOWN:

In Australia, the government responded to Covid-19 by enforcing a lockdown every time there was a spike in cases. In Victoria, this method was used several times. From the first lockdown, which started in March 2020, to the current one, which now holds the title of Victoria's sixth lockdown, it isn't turning out well for the state. In the current lockdown, running from September 2021 onwards, the state was put into immediate lockdown after just a small spike in covid cases. Victorians were sick of this, as they have already been the most locked-down state in the world, counting a total of 256 days so far. As soon as one case appeared and the state was put into lockdown, people couldn't take it anymore. Many were questioning why we were required to be placed in lockdown with just a few covid cases when states in America are fully open with thousands of cases. The refusal to follow lockdown restrictions, and the struggle to handle the cases after the sixth attempt, didn't work out for the better. Even though past lockdowns ended up successful, Victorians are tired of repeating the same process and breach lockdown rules for their own personal 'freedom'. In the sixth and current lockdown, cases reached a new high, tallying a total of 2,297 in 24 hours.

SHOULD WE HAVE BEEN WORRIED ABOUT VACCINES, OR FOCUSED ON GETTING CASES DOWN?

After speaking to La Trobe University scholar Jeff Yeoman, we asked him the question above: should we still focus on getting cases down, or on getting everybody vaccinated? Dr Yeoman states that while it was crucial to keep cases down, getting people vaccinated is the more reasonable option now. The more people are vaccinated, the fewer severe cases there are, meaning hospitals won't be overflowing and running out of rooms for patients. This approach is proving effective: at the moment, with around 70% of Victorians vaccinated with a double dose, case numbers are still increasing, but severe cases are declining and the risk is getting smaller. Only 0.1% of Covid cases are severe if you are vaccinated, so based on these statistics we should continue vaccinating. "The vaccines are definitely benefiting the risk of contracting a severe covid case, with the majority of the population being vaccinated, we can reach our targets of herd immunity." Dr. Jeff Yeoman (Sourced: www.latrobeuniversity/scholars.com)

Victorian Premier Daniel Andrews captured discussing the new Victorian roadmap, which sets lockdown to lift once 70% are vaccinated. (Sourced: abc.net.au)

In Melbourne's sixth lockdown the Victorian premier, Daniel Andrews, has formally abandoned most Victorians' hopes of getting to zero daily cases. Instead, he has replaced it with a new goal: to get at least 80% of Victorians over the age of 16 double vaccinated. Lockdown is expected to lift once the vaccination rate gets to 70%, expected around 26 October.
Support for the Victorian government's handling of the pandemic is waning. In fact, last week's Essential poll for Guardian Australia found approval of the Andrews government had dropped to 44%! Calla Wahlquist from 'The Guardian' states: "Speak to Melbournians and the overwhelming majority say they support the decision of the Andrews government to lock down hard at the first signs of an outbreak. However, the opposition to the health measures, while still a minority movement, is growing and has erupted onto the streets."

COVID: WHAT WENT WRONG?

6 key points of what went wrong:
1. Not going into lockdown quickly enough, multiple times
2. Ordering too few vaccines, and not fast enough
3. Placing inexperienced workers in hotel quarantine
4. Not mandating masks until months too late
5. Not closing borders fast enough, multiple times, the most recent lapse causing the third wave
6. Cross-use of staff in nursing homes

HOW HAVE THE MISTAKES AFFECTED VICTORIA IN THE LONG TERM?

In some ways the mistakes are no longer negatively affecting Victoria, as we have learnt from them and used them to help with what we are dealing with now. Our mistakes have taught us far more about the virus. We now have scientists who are experts in covid and in treating it correctly, because every time we do something wrong we are able to figure out what the error was and adapt to fix it for the future.

HOW HAVE WE IMPROVED FOLLOWING THESE MISTAKES?

As written earlier, without the mistakes that have been made, Victoria definitely would not be doing as well as we are now. Even if it seems like we are struggling immensely, we would be in a far worse situation had we not taken the time to study the consequences of each action. As an example, at the beginning of the pandemic it was believed that masks would not make a difference against this virus, whereas now mask wearing is an almost constant restriction. 'Research shows that face masks cut down the chances of both transmitting and catching the coronavirus, and some studies hint that masks might reduce the severity of infection if people do contract the disease.' This is written in an article by nature.com, reporting that scientists have confirmed that evidence. It helps citizens see how much Victoria has learnt compared with the beginning of the pandemic, when masks were thought not to work.

WHAT DID WE DO WRONG TO STILL BE IN LOCKDOWN ALMOST 2 YEARS LATER?

One mistake we made towards the middle of this year was not working hard enough to keep the Delta strain out of Victoria. Delta is far more infectious than the original covid strain (Alpha), and that is why, although we are still in an extremely harsh lockdown, cases are still multiplying and infection rates are growing rapidly. The Delta variant is roughly 5 times as infectious as the Alpha variant that was spreading around in 2020. There are multiple reasons why Victoria could be struggling far more than other states. For example, Victoria has the second highest population density of all the Australian states. This obviously makes it easier for a virus to thrive, and therefore harder to get rid of it. Many people also believe that it is the government that is behind the severity of Victoria's lockdown; however, there is no evidence to prove this is true.

STATISTICS OF COVID'S EFFECTS ON VICTORIA

Covid has killed 826 Victorians so far, and 1,102 Australians. As of October 18th, 64.1 percent of Victorians are fully vaccinated, and as of October 20th the whole of Australia is 70 percent fully vaxxed.
WHY DID VICTORIA AND NSW STRUGGLE THE MOST?

Victoria and NSW struggled the most as they are the two most populated states in Australia, and have lots of tourists and planes leaving and arriving each day. Before the borders shut, cases emerged from international travel and from travel between the two states. Altogether Australia has had 75,342 cases, compared to the USA's 41,853,362 cases, so why is Australia still struggling and spending half the year in lockdowns? America and Australia approached lockdown differently: Victoria went into more lockdowns as it was originally trying to reach a zero-case target instead of a vaccination target. Victoria locked down at 50 cases and had several snap lockdowns, whereas America and other countries did not lock down until case numbers were high.

WHY DOES AUSTRALIA, SPECIFICALLY VICTORIA, STILL HAVE SO FEW PEOPLE VACCINATED?

Australia did not order enough vaccines as soon as they came out, meaning we didn't get enough to start vaccinating the whole nation immediately. 'Melbourne GPs say they are being forced to turn away huge numbers of vaccine-seeking locals, including busloads of vulnerable residents from care facilities, because the commonwealth's supply of doses has not increased to match the explosion in demand.' This is a sentence from an article by theguardian.com, and it shows that at the beginning of the rollout it was very difficult to get a vaccine. Victoria also ordered far more AstraZeneca than Pfizer, which was a mistake, as AstraZeneca has now been shown to occasionally have fatal blood-clotting side effects and is much safer when only given to older people. 53.8m doses of AstraZeneca were initially ordered for Australia, while only 20m doses of Pfizer were ordered. Since then, another vaccine named Moderna, which is very similar to Pfizer, has arrived in Victoria. On the 21st of October 2021, Victoria hit the milestone of 70 percent fully vaccinated, after having to work through multiple vaccination shortages.

A picture of Australia's original vaccine rollout plan. Since then this has been edited multiple times. Source: Victoria State Government. 2021. covid 19 update. [online] Available at: <https://www.dhhs.vic.gov.au/covid-19-chief-health-officer-update> [Accessed 20 October 2021].

How has Victoria ended up in such a bad lockdown TWICE when almost every other state has only been in small lockdowns? If we think back to last year, Victoria came out of lockdown when we had zero cases, and stayed out for about a month; however, when cases started worsening again, we didn't go back into lockdown until we were getting 700 new cases daily. This was a huge mistake, as it meant it would take months and months to reverse the effect. Since then, we have learnt from this, as is evident when we look at the number of snap lockdowns Victoria has been in during 2021. Snap lockdowns are called when small case numbers begin to arise, and they are used to stop the spread from growing larger and becoming uncontrollable.

A graph of the case numbers during Victoria's second wave. Sourced: Static.ffx.io. 2021. the rise and fall of victorias second wave. [online] Available at: <https://static.ffx.io/images/> [Accessed 18 October 2021].

WHY IS VICTORIA STILL IN LOCKDOWN, AND WHY DID VICTORIA NOT GO INTO LOCKDOWN FASTER IN THE BEGINNING?

The cases keep going up and down as we come in and out of lockdown. Cases will continue to rise and fall until we get to around 75%-80% vaccinated.
As of right now, 88.8% have had their first dose of vaccine and 68% are fully vaccinated. Victoria was scheduled to be 70% fully vaccinated on 18 November and 80% vaccinated on 10 December; however, it now seems that Victoria will be 80% fully vaccinated within two weeks. At the beginning of this pandemic no one would have known or been able to guess how long this would go on for, or how fatal it would prove to be. People, including government health officials, didn't put Victoria into lockdown straight away because they were not aware that was the solution. A pandemic this extreme hasn't happened in decades and decades, so it was difficult for even the most important people to know how to handle it.

A map depicting the rise and fall of covid cases, from the beginning of the pandemic to now. Vic.gov.au. (2021). Victorian COVID-19 data | Coronavirus Victoria. [online] Available at: https://www.coronavirus.vic.gov.au/victorian-coronavirus-covid-19-data. (snip from the Victorian health website)

THE QUARANTINE ISSUE:

There was a huge issue which led to the beginning of the second wave. Someone decided to replace the hotel quarantine officers with private security guards, and it led to a massive outbreak causing 768 deaths. It is still unknown who made the decision, as no one owned up to it.

WHAT MISTAKES DID WE MAKE DURING THE VACCINE ROLLOUT?

Victoria ordered a much larger load of AZ than Pfizer, and as it turned out, Pfizer was the type the majority of people required. Victoria also did not get in early enough, so for the first few months of having the vaccine there was a very, very large shortage of doses. This was a mistake, as we have now figured out that the vaccine is the most effective solution at the moment. As of October 18th we're at the milestone of 61 percent second dose. A third vaccine has since been provided in Australia; however, again, there is only a very small amount of it.

WHAT WERE VICTORIA'S INITIAL MISTAKES AT THE BEGINNING OF THE PANDEMIC?

Victoria recorded its first positive covid case on January 25th, 2020: a man who had recently returned from Wuhan, China. However, the borders were not closed until almost 2 months later, on March 20th. This was the first big mistake, as in this time it allowed many cases to arise and spread throughout the community. Masks were also not made mandatory until the 18th of July 2020. Victoria has now been under mask mandates since early this year. During the second wave in 2020, Premier Daniel Andrews said: "Ultimately, you run out of people, you can have all the machines you want, but machines don't treat patients, people treat patients. And you just run out of them after a while." This was addressing the real-time issue that there were so many people in the ICU: while there was so far enough room for them all, there was a shortage of health workers available to care for them. This has since been corrected, as the vaccine has helped reduce the number of people with covid having to be hospitalised.

Photo of scientists with the Covid-19 vaccine. Sourced: timesnow.com (Phelamei, 2021)

WHAT COULD WE HAVE DONE TO POTENTIALLY LESSEN THIS VIRUS IN VICTORIA? (collective solution, all members)

Vaccinations are the main target of the Victorian government today, and this is what they should have pursued from the beginning. We need to get everyone vaccinated.
As great as it is to have zero cases, that status will eventually be lost through one mishap with a positive COVID-19 case, and that is exactly what happened. While there were no cases, the Government had the perfect opportunity and time to receive and manufacture vaccines and to vaccinate everybody while there were no active cases. Victoria did not get enough vaccines right away: there were 25 million people needing to be vaccinated but only about 100,000 doses, and the vaccines that were available were only accessible to those over the age of 40. Although doses were given out by priority, such as to NSW at the time, the Australian Government disregarded Victoria because it was already virus-free. Even when giving priority vaccines to NSW, the government should have provided Victoria with doses so the state could be kept free of cases, because once there is a single case under a no-restriction policy, it gets out of hand.
Daniel Andrews should not have put Victoria into a snap lockdown every time there was a sudden spike of a few cases. It is reasonable to put Victoria into lockdown, but when it happens six times, each with only a few cases, Victorians stop obeying lockdown protocol. With constant lockdowns when the situation is not serious, imposed just to get the number back to zero, many Victorians are sick of it. Protests and pushback occur, and people do their own thing. Because of this, when a serious lockdown comes, they are not going to listen, as they are used to the quick descent back to zero cases seen after each snap lockdown. The Victorian Government should have focused more on effective contact tracing and on isolating and monitoring contacts efficiently, instead of locking down the whole city. Those with the virus are much more likely to follow the rules than those who are locked down for no reason. While we do not want to be like America, it is good to have a sense of balance and to remind Victorians that lockdown is serious. With no hope of contact tracing now, we must use vaccination as the alternative way to open the state and make Victorians happy. Why didn't we do this when there were just five cases? Why must we wait for lockdown to spiral out of control?
As of now we have improved and are able to vaccinate everyone aged 12 and over. We still do not have a vaccine for kids under 12, but if we get 80% of the population double-dosed, then we are closer to achieving our goal. Now that we have reached 70% fully vaccinated in Victoria, there is a new and improved roadmap, which includes access to more freedoms from 22 October. There are more cases, but luckily they are not climbing into the tens of thousands. This shows that vaccination is the solution, and that a lot of our mistakes came from not getting the population vaccinated quickly enough. Although Australia is still not fully vaccinated, it is well on track. Victoria has reached the third step of the roadmap on the way to 'COVID normal': there is no longer a curfew or a limit on reasons to leave home, and schools have had a much-needed re-opening.
The New Victoria Roadmap (Fletchers Real Estate, 2020)
Compared to the rest of the world, Australia did not have enough vaccines at the start, but it has now received enough to continue the rollout. However, compared to countries like Portugal, whose population is fully vaccinated and which has the highest vaccination rate in the world, we need much improvement. Portugal managed this within nine months and is back to life without lockdowns and restrictions. If Australia had done the same, we too could have been out months ago.
National Institute of Allergy and Infectious Diseases, N. (2021). Graph of covid stats [Graph]. Worldwide map of COVID-19. https://www.nationsonline.org/oneworld/map/New-Coronavirus-2019-nCoV-world-map.htm
All Coronavirus (COVID-19) Clinics for Perth, Western Australia. (2020). Retrieved 11 October 2021, from https://www.afterhourshomedoctorwa.com.au/blogs/coronavirus-covid-19-clinics-perth-australia/
What makes COVID-19 so dangerous? (2021). Available at: https://www.ucihealth.org/blog/2020/04/why-is-covid19-so-dangerous (Accessed: 13 October 2021).
New Freedoms When 70 Per Cent First Dose Target Reached | Premier of Victoria. (2021). Retrieved 7 October 2021, from https://www.premier.vic.gov.au/new-freedoms-when-70-cent-first-dose-target-reached
Information for industry & workers required to be vaccinated | Coronavirus Victoria. (2021). Retrieved 7 October 2021, from https://www.coronavirus.vic.gov.au/information-workers-required-be-vaccinated#workers-required-to-be-vaccinated
Victorian COVID-19 data | Coronavirus Victoria. (2021). Retrieved 7 October 2021, from https://www.coronavirus.vic.gov.au/victorian-coronavirus-covid-19-data
Very few countries are untouched by coronavirus. Here are those that claim to be COVID-free (2020). Available at: https://www.abc.net.au/news/2020-11-12/what-are-the-countries-that-remain-free-of-coronavirus/12874248 (Accessed: 7 October 2021).
Melbourne descends into chaos as police arrest 62 and fire rubber pellets at anti-lockdown protesters (2021). Available at: https://www.theguardian.com/australia-news/2021/sep/21/victoria-covid-update-melbourne-descends-into-chaos-as-police-fire-rubber-pellets-at-anti-lockdown-protestors (Accessed: 13 October 2021).
CDC (2021). Symptoms of COVID-19. [online] Centers for Disease Control and Prevention. Available at: https://www.cdc.gov/coronavirus/2019-ncov/symptoms-testing/symptoms.html [Accessed 20 Oct. 2021].
Google.com. (2011). covid 19 graphs australia - Google Search. [online] Available at: https://www.google.com/search?q=covid+19+graphs+australia&rlz=1C1GCEV_enAU834AU834&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjEmqjthdjzAhXf7XMBHTBFCM4Q_AUoAXoECAEQAw&biw=1280&bih=610&dpr=1.5&safe=active&ssui=on#imgrc=yYu9skT2ugiLxM [Accessed 20 Oct. 2021].
Australian Government Department of Health (2020). What you need to know about coronavirus (COVID-19). [online] Australian Government Department of Health. Available at: https://www.health.gov.au/news/health-alerts/novel-coronavirus-2019-ncov-health-alert/ongoing-support-during-coronavirus-covid-19/what-you-need-to-know-about-coronavirus-covid-19#:~:text=The%20virus%20can%20spread%20from,your%20mouth%20or%20face. [Accessed 20 Oct. 2021].
Weiss, S.R. and Navas-Martin, S. (2005). Coronavirus Pathogenesis and the Emerging Pathogen Severe Acute Respiratory Syndrome Coronavirus. Microbiology and Molecular Biology Reviews, [online] 69(4), pp.635–664. Available at: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1306801/ [Accessed 20 Oct. 2021].
Google.com. (2011). covid 19 graphs australia - Google Search. [online] Available at: https://www.google.com/search?q=covid+19+graphs+australia&rlz=1C1GCEV_enAU834AU834&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjEmqjthdjzAhXf7XMBHTBFCM4Q_AUoAXoECAEQAw&biw=1280&bih=610&dpr=1.5&safe=active&ssui=on#imgrc=MlmDyX1gAVfMpM [Accessed 20 Oct. 2021].
Google.com. (2011). covid-19 timeline - Google Search. [online] Available at: https://www.google.com/search?q=covid-19+timeline&rlz=1C1GCEV_enAU834AU834&source=lnms&tbm=isch&sa=X&ved=2ahUKEwjXg_u6-dLzAhXu8HMBHUPvBYoQ_AUoAXoECAEQAw&biw=1280&bih=610&dpr=1.5&safe=active&ssui=on#imgrc=neHXMWb30P392M [Accessed 20 Oct. 2021].
Static.ffx.io. (2021). The rise and fall of Victoria's second wave. [online] Available at: <https://static.ffx.io/images/$zoom_0.6823729616989003%2C$multiply_0.7554%2C$ratio_1.776846%2C$width_1059%2C$x_85%2C$y_131/t_crop_custom/q_86%2Cf_auto/0423759945912feae40c1ed33273e21b1c87634c> [Accessed 18 October 2021].
Victoria State Government. (2021). COVID-19 update. [online] Available at: <https://www.dhhs.vic.gov.au/covid-19-chief-health-officer-update> [Accessed 20 October 2021].
Australia's COVID-19 vaccine rollout. (2021). [online] Available at: <https://www.theguardian.com/australia-news/2021/mar/16/australia-covid-vaccine-rollout-distribution-when-can-you-get-the-coronavirus-jab> and <https://www.health.gov.au/sites/default/files/documents/2021/10/covid-19-vaccine-rollout-update-7-october-2021.pdf> [Accessed 20 October 2021].
Prod.static9.net.au. (2021). Vaccine rollout. [online] Available at: <https://prod.static9.net.au/fs/a424f8f6-f45e-4008-98b9-8948df18e413> [Accessed 20 October 2021].
How Portugal managed to have the world's highest vaccination rate - ABC News. (2021). ABC News. [online] 5 Oct. Available at: https://www.abc.net.au/news/2021-10-05/how-portugal-managed-to-have-the-highest-rates-of-vaccination/100514138 [Accessed 7 Oct. 2021].
Uq.edu.au. (2020). COVID-19: Why does it affect people differently? [online] Available at: https://medicine.uq.edu.au/blog/2020/07/covid-19-why-does-it-affect-people-differently [Accessed 20 Oct. 2021].
Google.com. (2021). victorias cases graph 2021 - Google Search. [online] Available at: https://www.google.com/search?q=victorias+cases+graph+2021&tbm=isch&ved=2ahUKEwi8isv4h9jzAhWOm0sFHc7oAooQ2-cCegQIABAA&oq=victorias+cases+graph+2021&gs_lcp=CgNpbWcQA1DBqgNYw7MDYLe1A2gAcAB4AIAB6wKIAbkNkgEHMC4xLjIuM5gBAKABAaoBC2d3cy13aXotaW1nwAEB&sclient=img&ei=9I5vYbzAII63rtoPztGL0Ag&bih=610&biw=1280&rlz=1C1GCEV_enAU834AU834&safe=active&ssui=on#imgrc=1T0AohLuH6fZ1M [Accessed 20 Oct. 2021].
World Health Organization: WHO (2021). The Oxford/AstraZeneca COVID-19 vaccine: what you need to know. [online] Who.int. Available at: https://www.who.int/news-room/feature-stories/detail/the-oxford-astrazeneca-covid-19-vaccine-what-you-need-to-know [Accessed 21 Oct. 2021].
Hopkins, J.S. (2020). How Pfizer Delivered a Covid Vaccine in Record Time: Crazy Deadlines, a Pushy CEO. [online] WSJ. Available at: https://www.wsj.com/articles/how-pfizer-delivered-a-covid-vaccine-in-record-time-crazy-deadlines-a-pushy-ceo-11607740483 [Accessed 21 Oct. 2021].
Marton, H. (2021). 7 reasons people don't get COVID-19 vaccinations, and why you should — right now. [online] Healthdirect.gov.au. Available at: https://www.healthdirect.gov.au/blog/7-reasons-people-dont-get-covid-19-vaccinations [Accessed 21 Oct. 2021].
Science.org. (2021). Having SARS-CoV-2 once confers much greater immunity than a vaccine—but vaccination remains vital. [online] Available at: https://www.science.org/content/article/having-sars-cov-2-once-confers-much-greater-immunity-vaccine-vaccination-remains-vital [Accessed 21 Oct. 2021].
Cuffe, R. (2021). AstraZeneca vaccine: How do you weigh up the risks and benefits? [online] BBC News. Available at: https://www.bbc.com/news/explainers-56665396 [Accessed 21 Oct. 2021].
Victorian Equal Opportunity and Human Rights Commission. (2021). Explainer: Mandatory COVID-19 vaccinations and your rights. [online] Available at: https://www.humanrights.vic.gov.au/resources/explainer-mandatory-covid-19-vaccinations-and-your-rights/ [Accessed 21 Oct. 2021].
Calla Wahlquist (2021). How Melbourne's "short, sharp" Covid lockdowns became the longest in the world. [online] the Guardian. Available at: https://www.theguardian.com/australia-news/2021/oct/02/how-melbournes-short-sharp-covid-lockdowns-became-the-longest-in-the-world [Accessed 21 Oct. 2021].
NewsGP. (2021). newsGP - Mixing COVID vaccines could result in stronger immune response. [online] Available at: https://www1.racgp.org.au/newsgp/clinical/mixing-covid-vaccines-could-result-in-stronger-imm [Accessed 21 Oct. 2021].
Australian Government Department of Health (2020). Coronavirus (COVID-19) at a glance – 19 May 2020. [online] Australian Government Department of Health. Available at: https://www.health.gov.au/resources/publications/coronavirus-covid-19-at-a-glance-19-may-2020 [Accessed 21 Oct. 2021].
Kandola, A. (2020). What is an anti-vaxxer? [online] Medicalnewstoday.com. Available at: https://www.medicalnewstoday.com/articles/anti-vaxxer [Accessed 21 Oct. 2021].
Macali, A. (2021). Daily Confirmed Cases in Victoria - COVID Live. [online] Covidlive.com.au. Available at: https://covidlive.com.au/report/daily-cases/vic [Accessed 21 Oct. 2021].
Wellcome. (2021). How have Covid-19 vaccines been made quickly and safely? | News | Wellcome. [online] Available at: https://wellcome.org/news/quick-safe-covid-vaccine-development [Accessed 21 Oct. 2021].
Vic.gov.au. (2021). COVIDSafe Settings | Coronavirus Victoria. [online] Available at: https://www.coronavirus.vic.gov.au/coronavirus-covidsafe-settings [Accessed 21 Oct. 2021].
Australian Government Department of Health (2020). Coronavirus (COVID-19) health alert. [online] Australian Government Department of Health. Available at: https://www.health.gov.au/news/health-alerts/novel-coronavirus-2019-ncov-health-alert [Accessed 21 Oct. 2021].
Lucy Mae Beers (2021). Victoria COVID cases on Thursday hours before Melbourne's sixth lockdown ends. [online] 7NEWS. Available at: https://7news.com.au/lifestyle/health-wellbeing/victoria-covid-cases-on-thursday-hours-before-melbournes-sixth-lockdown-ends-c-4294097 [Accessed 21 Oct. 2021].
Vic.gov.au. (2021). About the AstraZeneca vaccine | Coronavirus Victoria. [online] Available at: https://www.coronavirus.vic.gov.au/about-astrazeneca-vaccine [Accessed 21 Oct. 2021].
CommonCrawl
Causes of the N–S compressional aftershocks of the E–W compressional 2008 Iwate–Miyagi Nairiku earthquake (M7.2) in the northeastern Japan arc
Sumire Maeda (ORCID: orcid.org/0000-0002-7783-7242), Toru Matsuzawa, Keisuke Yoshida, Tomomi Okada & Takeyoshi Yoshida
In this study, we investigated the influence of local structural heterogeneities on aftershocks of the 2008 Iwate–Miyagi Nairiku earthquake (M 7.2) that occurred in the northeastern Japan arc. Although this area is characterized by an E–W compressional thrust faulting stress regime, many N–S compressional thrust-type and strike-slip-type earthquakes were also observed in the aftershock activity of the 2008 event. The occurrence of such N–S compressional aftershocks indicates that the differential stress was very low before the mainshock. We investigated the Vp/Vs structure in the aftershock area in detail and found that Vp/Vs is lower than 1.70 in most of the area, which is consistent with previous studies. Our results show, however, that the aftershock area is dotted with small high-Vp/Vs areas, which suggests locations of abundant fluid-filled cracks. Comparison of the distributions of Vp/Vs, the stress changes due to the mainshock, and the focal mechanisms of the aftershocks indicates that the N–S compressional aftershocks are concentrated in regions with high Vp/Vs and large N–S compressional stress change. From these results, we conclude that the N–S compressional aftershocks occurred in regions whose strength was lowered by high pore pressure and where the stress change due to the mainshock produced N–S compression, under a low background differential stress.
These exceptional aftershocks are concentrated in a region much narrower than the N–S compressional stress field predicted from the mainshock source model of Iinuma et al. (2009), which indicates that the distribution of these "exceptional" aftershocks is controlled not only by stress distribution but also by the strength distribution. Distribution of calderas, tectonic lines, and earthquakes in the study area. Calderas and tectonic lines are from Nunohara et al. (2010) and Baba (2017), respectively. Epicenters of earthquakes in 2008 at depths ≤ 15 km are shown as black dots and are from the JMA unified catalog. The green star denotes the mainshock epicenter estimated by Okada et al. (2012) Immediately after the mainshock, the Group for the Aftershock Observations of the Iwate–Miyagi Nairiku Earthquake deployed a dense temporary seismic network in and around the focal area (Okada et al. 2012). Aftershock distribution and seismic velocity structure were investigated in detail. Okada et al. (2010, 2012, 2014) revealed that aftershocks occurred mainly in a shallow high-velocity zone overlying a low-velocity zone and interpreted this as a brittle zone overlying a fluid-rich area. If the earthquakes were caused by high pore fluid pressure due to the fluid provided from below, the ratio of P- and S-wave velocities (Vp/Vs) is expected to be higher there than in the surrounding area if the fluid was intruding in cracks (Takei 2002). In this study, we investigate the relations among the distributions of Vp/Vs, stress change due to the 2008 event and its aftershocks. Our study is composed of the following three steps: First, we examine the distribution of aftershocks of the 2008 Iwate–Miyagi Nairiku earthquake and their faulting styles in detail. Second, we estimate a fine-scale Vp/Vs distribution in the aftershock area using a method proposed by Lin and Shearer (2007), which has higher resolution for regions with dense seismicity than usual seismic tomography methods. We discuss the relationship between the N–S compressional aftershocks of the 2008 earthquake and the inhomogeneous Vp/Vs structure in the last step. Data and methods Hypocenter and focal mechanism data We combined the focal mechanism solutions of Yoshida et al. (2014b) with the precise hypocenters relocated by Okada et al. (2012) to investigate the characteristics of the focal mechanism distribution in detail. Yoshida et al. (2014b) conducted a detailed analysis of regional hypocenters and focal mechanisms of earthquakes (Mj > 1.0) that occurred between 1996 and 2010. They relocated hypocenters by applying the double-difference (DD) location method (Waldhauser and Ellsworth 2000) and determined focal mechanism solutions using the HASH program (Hardebeck and Shearer 2002), adding newly picked P-wave first-motion data and using the one-dimensional velocity model of Hasegawa et al. (1978). Okada et al. (2012) determined three-dimensional seismic velocity structure and relocated hypocenters simultaneously using the double-difference (DD) tomography method (Zhang and Thurber 2003) with data from a dense temporary observation network deployed in the study area. The one-dimensional (1D) velocity structure of Hasegawa et al. (1978) was used as their initial model. They analyzed earthquakes within the magnitude range of 1.0 ≤ Mj ≤ 5.7 that occurred between 14 June 2008 and 30 September 2008, ending 3 months after the mainshock. 
The double-difference tomography method has the advantage of combining picked absolute arrival times with precise relative arrival times obtained from waveform analysis. As a result, this method can produce a more accurate velocity model and hypocenter distribution than standard tomography methods when hypocenters are densely distributed (Zhang and Thurber 2003). In addition, the hypocenters relocated by Okada et al. (2012) are considered well located due to a good station coverage during the period of temporary aftershock observation (Yoshida et al. 2014b). Moreover, the DD tomography method can estimate the absolute locations of hypocenters, while DD relocation with a 1D structural model can only obtain precise relative locations. We, therefore, used the focal mechanisms determined by Yoshida et al. (2014b) combined with the precise hypocenter locations of Okada et al. (2012). In combining the two catalogs, we treat two hypocenters from respective catalogs as the same if differences of the origins of the two events are less than 0.5 s and 0.01° in time and space (both in latitude and longitude), respectively. Focal mechanism solutions are classified as normal, strike-slip, or reverse types using the same scheme as Maeda et al. (2018), who introduced two grades (A and B) for each fault type: grade A means that a focal mechanism has an axis with a plunge angle θ ≥ 60°, and grade B indicates a plunge angle 45° ≤ θ < 60°. We classified focal mechanisms satisfying neither criterion (i.e., θ < 45° for all axes) into a group labeled "other". We find not only E–W compressional-type events, as expected from the regional stress field (Terakawa and Matsu'ura 2010), but also N–S compressional-type aftershocks. To clarify the distribution of these exceptional aftershocks, we divided reverse fault-type and strike-slip fault-type events into N–S and E–W compressional subtypes: focal mechanisms with P-axes oriented within 30° of N–S (i.e., 0° ± 30° or 180° ± 30°) were regarded as N–S compressional; others were regarded as E–W-compressional. Note that the range of orientations for the N–S compressional P-axes is much narrower than that for the E–W subclass because we would like to select N–S compressional events that cannot be treated as E–W compressional ones even when accounting for possible orientation uncertainty. Thus, we classified focal mechanisms into 11 subtypes: four subtypes of reverse faulting, four strike-slip, two normal fault subtypes, and one for "other". Focal mechanisms could change slightly if we use the three-dimensional (3D) velocity model estimated by Okada et al. (2012). However, the focus of our concern here is the group of N–S-compressional focal mechanisms; such focal mechanisms cannot change to E–W compression with any physically plausible velocity model. Thus, we use the focal mechanisms of Yoshida et al. (2014b) without applying further corrections. Estimation of V p /V s ratios To investigate the relation between the aftershock activity and existence of fluid, we estimated the Vp/Vs distribution using the method proposed by Lin and Shearer (2007). This method is basically the same as computing a Wadati diagram, but can estimate local Vp/Vs ratios within a cluster of similar earthquakes using precise differential P and S arrival times obtained from waveform cross-correlation, yielding a resolution better than that obtained using usual tomographic methods (e.g., Lin and Shearer 2009). 
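Before turning to the Vp/Vs analysis, the hypocenter-matching criterion and the focal-mechanism classification scheme described above lend themselves to a compact sketch. The Python fragment below is only an illustration of one plausible reading of those rules, not code from the original study: the function names, the dictionary-style event records, and the choice of the steepest of the P-, T-, and null (B)-axes to decide the faulting style are our own assumptions, while the numerical thresholds (0.5 s and 0.01° for catalog matching; 60° and 45° plunge grades; the ±30° P-axis azimuth window) are taken from the text.

```python
def same_event(ev1, ev2, dt_max=0.5, ddeg_max=0.01):
    """Treat two catalog entries as the same earthquake if their origin times
    differ by less than 0.5 s and their latitude and longitude by less than 0.01 deg."""
    return (abs(ev1["time"] - ev2["time"]) < dt_max
            and abs(ev1["lat"] - ev2["lat"]) < ddeg_max
            and abs(ev1["lon"] - ev2["lon"]) < ddeg_max)


def classify_mechanism(p_plunge, t_plunge, b_plunge, p_azimuth):
    """Return one of the 11 subtype labels (RAEW, RBEW, RANS, RBNS, SAEW, SBEW,
    SANS, SBNS, NA, NB, or O) from the plunges of the P-, T- and null (B)-axes
    and the azimuth of the P-axis, all in degrees."""

    def grade(plunge):
        if plunge >= 60.0:
            return "A"
        if plunge >= 45.0:
            return "B"
        return None

    def ns_or_ew(azimuth):
        az = azimuth % 180.0            # fold the azimuth into 0-180 deg
        return "NS" if az <= 30.0 or az >= 150.0 else "EW"

    # The steepest axis decides the faulting style (our reading of the scheme):
    # steep T-axis -> reverse, steep P-axis -> normal, steep B-axis -> strike-slip.
    axis, plunge = max([("T", t_plunge), ("P", p_plunge), ("B", b_plunge)],
                       key=lambda item: item[1])
    g = grade(plunge)
    if g is None:
        return "O"                      # all axes shallower than 45 deg
    if axis == "T":
        return "R" + g + ns_or_ew(p_azimuth)
    if axis == "P":
        return "N" + g
    return "S" + g + ns_or_ew(p_azimuth)
```

With such a scheme, each entry of the combined catalog can be tagged with one of the 11 labels used in the figures below.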
As a preliminary step, we calculated waveform cross-correlations to improve the accuracy of arrival time differences. We used 16,332 earthquakes from the JMA unified earthquake catalog, located in the aftershock area of the 2008 Iwate–Miyagi Nairiku earthquake for the period from 14 June 2008 to 14 June 2009. We applied a 5–12-Hz bandpass filter to the 100-Hz sampling seismograms and computed cross-correlation functions for all the event pairs with epicentral separations less than 1.0 km. We used differential arrival times for hypocenter relocation whose waveforms show the cross-correlation coefficients larger than 0.8. The time lag of the absolute maximum of each cross-correlation function was initially computed at the original 100 Hz sampling rate. We then interpolated the waveform data with a quadratic function to obtain a precise lag time with an accuracy of ~ 1 ms, as in Shelly et al. (2013). We adopted durations of 2.5 and 4.0 s for the P- and S-wave windows, respectively, starting 0.3 s before respective phase onsets. P-wave windows were truncated to avoid contamination by S-wave energy for seismograms with S–P times less than 2.5 s (Yoshida and Hasegawa 2018). As a result, 1,315,244 differential arrival times were obtained. Next, we applied the method of Lin and Shearer (2007) to the differential arrival times. Figure 2a illustrates the conceptual relationship between P differential travel time (\(\delta T_{\text{p}}\)) and S differential travel time (\(\delta T_{\text{s}}\)) for a pair of earthquakes observed at several stations. The data are plotted along a line whose slope represents the Vp/Vs ratio between the two hypocenters of the event pair when no errors are included. Since we do not know the correct travel times for real earthquakes, we have to use travel times (\(T_{\text{p}}\) and \(T_{\text{s}}\)) calculated from the differences between observed arrival times (\(t_{\text{p}}\) and \(t_{\text{s}}\)) and the calculated origin time (\(t_{\text{o}}\)): Conceptual diagram of the method proposed by Lin and Shearer (2007). a P vs. S differential travel times for a single pair of events. b Synthetic P vs. S differential arrival times for several pairs of events in a compact cluster. c Distribution of differential arrival times after shifting the data shown in (b) so that they are aligned along a line crossing the origin, using the average values of respective event pairs for alignment. The slope of the line represents the local Vp/Vs ratio for the cluster. d Example of real data distribution after Lin and Shearer (2007). Outliers are indicated by red arrows and text $$T_{\text{p}} = t_{\text{p}} - t_{\text{o}} ,$$ $$T_{\text{s}} = t_{\text{s}} - t_{\text{o}} .$$ Then, the differential travel times can be expressed as $$\delta T_{\text{p}} = \delta t_{\text{p}} - \delta t_{\text{o}} ,$$ $$\delta T_{\text{s}} = \delta t_{\text{s}} - \delta t_{\text{o}} ,$$ where \(\delta t_{\text{p}}\), \(\delta t_{\text{s}}\), and \(\delta t_{\text{o}}\) represent differences in P-wave arrival times, S-wave arrival times, and origin times for an earthquake pair, respectively. If we plot data from several event pairs in a compact earthquake cluster, the result should resemble the illustration in Fig. 2b. The data are distributed along many parallel lines; each line corresponds to data from one pair of earthquakes at multiple stations. The slope of each line is the Vp/Vs ratio within the earthquake cluster. The offsets seen in Fig. 2b are due to uncertainties in differential origin times (δt0). 
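The waveform-based measurement of the differential arrival times described at the start of this section can be sketched as follows; the discussion of how the origin-time offsets are removed continues in the next paragraph. This is a minimal illustration, not the authors' code: the use of NumPy/SciPy, the fourth-order Butterworth filter, and the function names are our assumptions, while the 5–12 Hz pass-band, the 100-Hz sampling, the 0.8 correlation threshold, and the quadratic (parabolic) refinement of the peak lag follow the text. The 2.5-s P and 4.0-s S windows starting 0.3 s before the onsets are assumed to be cut by the caller.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 100.0                       # sampling rate of the seismograms (Hz)

def bandpass(trace, fmin=5.0, fmax=12.0, fs=FS):
    """5-12 Hz band-pass filter applied before cross-correlation."""
    sos = butter(4, [fmin, fmax], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, trace)

def differential_time(w1, w2, fs=FS, cc_min=0.8):
    """Lag (s) of window w1 relative to w2 from the peak of their normalized
    cross-correlation, refined by a parabola through the three samples around
    the maximum; returns None if the peak correlation is below 0.8."""
    a = (w1 - w1.mean()) / (w1.std() * len(w1))
    b = (w2 - w2.mean()) / w2.std()
    cc = np.correlate(a, b, mode="full")
    k = int(np.argmax(cc))
    if cc[k] < cc_min:
        return None
    if 0 < k < len(cc) - 1:
        y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
        delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # sub-sample refinement
    else:
        delta = 0.0
    lag_samples = (k - (len(w2) - 1)) + delta            # zero lag at index len(w2)-1
    return lag_samples / fs
```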
Lin and Shearer (2007) showed that these offsets can be canceled, as shown in Fig. 2c, if we plot "de-meaned" differential arrival times instead of the differential travel times estimated from Eqs. (3) and (4). This is because the de-meaned differential arrival times (\(\widehat{\delta }t_{p}^{i}\) and \(\widehat{\delta }t_{s}^{i}\)), as defined by
$$\widehat{\delta }t_{p}^{i} \equiv \delta t_{p}^{i} - \delta \overline{t}_{p} ,$$
$$\widehat{\delta }t_{s}^{i} \equiv \delta t_{s}^{i} - \delta \overline{t}_{s} ,$$
satisfy
$$\widehat{\delta }t_{s}^{i} = \left( {\frac{V_{P} }{V_{S} }} \right)\widehat{\delta }t_{p}^{i} ,$$
where \(\delta t_{P}^{i}\) and \(\delta t_{s}^{i}\) are the differential P and S arrival times for an event pair at station i, respectively; and \(\delta \overline{t}_{P}\) and \(\delta \overline{t}_{s}\) are the mean values of the differential arrival times from all stations for the event pair. Equation (7) indicates that if we plot de-meaned differential arrival times (\(\widehat{\delta }t_{p}^{i}\), \(\widehat{\delta }t_{s}^{i}\)), then the data should be distributed along a line crossing the coordinate origin with a slope equal to the Vp/Vs ratio. In this way, we can estimate the local Vp/Vs ratio from all event pairs in the cluster (Lin and Shearer 2007). In principle, this provides a local measure of the average Vp/Vs within a cluster, which is typically only a few kilometers across (Lin and Shearer 2009). In this study, we divide the region into many bins and estimate the Vp/Vs ratio in each bin using the earthquakes located within it. The size of each bin is fixed at 1.5 km × 1.5 km × 1.5 km, taking the hypocenter density, expected differential arrival times and typical estimation errors into consideration.
The differential arrival times estimated from the cross-correlation of the seismic waveforms may have large estimation errors due to mismatching of the phases by one cycle, which is known as "cycle skipping." This phenomenon occurs only when the pass-band of the filter is too narrow. Since we use here a rather wide pass-band of 5–12 Hz, such a phenomenon is unlikely to occur. If, however, the corner frequency of an earthquake is lower than 5 Hz (~ 0.2 s) and attenuation is very strong, the resultant filtered waveform might be monochromatic with a dominant frequency of 5 Hz. In this case, the differential arrival times might differ from the others by one cycle (about 0.2 s) or larger. Even if such cycle skipping occurs, the data will be removed as an outlier as shown in Fig. 2d, because the error is much larger than the usual estimation error. We calculated the mean μ and standard deviation std for each set of P and S differential arrival times and excluded values outside the μ ± 3std range. Finally, we recalculated the average values of the P and S differential arrival times.
After removing outliers, we superimposed data from many event pairs located in each bin and estimated the Vp/Vs ratio in each bin from the slope of the distribution using principal components analysis (PCA) (e.g., Pearson 1901; Hotelling 1933). By applying PCA to our two-dimensional data set, we obtain two principal component vectors and their respective eigenvalues λ. We estimated the Vp/Vs ratio for each bin from the inclination of the eigenvector with the largest eigenvalue. Since each eigenvalue is proportional to the width of the distribution in the direction of the corresponding eigenvector, the ratio of the two eigenvalues can be used as an indicator of the reliability of the estimated Vp/Vs value.
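A compact sketch of this per-bin estimate is given below, again only as an illustration under our own naming and data-layout assumptions: the μ ± 3 std outlier rejection, the de-meaning of Eqs. (5)–(6), and the PCA slope of Eq. (7) follow the text, and the returned eigenvalue ratio is the reliability indicator just mentioned (its acceptance threshold is given in the next paragraph).

```python
import numpy as np

def bin_vp_vs(pair_dtp, pair_dts):
    """Estimate Vp/Vs for one 1.5-km bin.

    pair_dtp, pair_dts : lists with one NumPy array per event pair, holding the
    differential P and S arrival times of that pair at the common stations.
    Returns (Vp/Vs, eigenvalue ratio), or None if too little usable data remain."""
    x, y = [], []
    for dtp, dts in zip(pair_dtp, pair_dts):
        # Reject outliers (e.g. cycle-skipped picks) outside mean +/- 3 std,
        # computed separately for the P and S times of this event pair.
        keep = ((np.abs(dtp - dtp.mean()) <= 3.0 * dtp.std()) &
                (np.abs(dts - dts.mean()) <= 3.0 * dts.std()))
        dtp, dts = dtp[keep], dts[keep]
        if dtp.size < 2:
            continue
        # De-mean with the recalculated averages (Eqs. 5 and 6).
        x.append(dtp - dtp.mean())
        y.append(dts - dts.mean())
    if not x:
        return None
    x, y = np.concatenate(x), np.concatenate(y)

    # First principal component of the 2-D cloud: its slope is the local Vp/Vs (Eq. 7).
    evals, evecs = np.linalg.eigh(np.cov(np.vstack([x, y])))   # ascending eigenvalues
    v = evecs[:, -1]
    vp_vs = v[1] / v[0]
    eig_ratio = evals[-1] / max(evals[0], 1e-12)               # reliability indicator
    return vp_vs, eig_ratio
```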
When the event cluster size is too small or the station separation is too narrow, the eigenvalue ratio tends to become as small as 1.0 and the estimated Vp/Vs ratio is unreliable. To evaluate the reliability of the obtained Vp/Vs ratio, we used the bootstrap method of Efron (1987): in each bin, we carried out bootstrap resampling 100 times and calculated the Vp/Vs ratio of the resampled data using the standard deviation stdb of the bootstrap results as the standard error of Vp/Vs. We retained estimates with stdb ≤ 0.1 and eigenvalue ratios λ1/λ2≥ 10.0 for further analyses. Then, we shifted the bins horizontally by 0.005 degrees and vertically by 1.0 km, and estimated the Vp/Vs for each bin using the same procedure as described above and superimpose those sets of results to investigate the detailed but smoothed Vp/Vs distribution. Focal mechanism distribution Distributions of focal mechanisms corresponding to individual fault types are shown in Fig. 3, and cross sections are shown in Fig. 4. Reverse or strike-slip aftershocks with E–W compression are dominant. Figure 3a indicates that most reverse-fault earthquakes with the E–W compression distribute in the NNE–SSW direction, and they can be further subdivided into northern and southern clusters as indicated by two dashed-line ovals. On the other hand, N–S compressional thrust-type earthquakes are concentrated around the center of the aftershock area, comprising a cluster in the NNW–SSE direction (Fig. 3b). Note that they are distributed as if they connect two clusters of E–W A-type reverse-fault earthquakes (RAEW). Focal mechanism distributions. Focal mechanisms of earthquakes for the period from 14 June 2008 to 30 September 2008 are shown. a Distribution of reverse-faulting events with E–W compression composing subgroups A (RAEW) and B (RBEW). Dashed black ellipses indicate northern and southern clusters. b Distribution of reverse-faulting events with N–S compression composing subgroups A (RANS) and B (RBNS). c Distribution of normal-faulting events composing subgroups A (NA) and B (NB), and other events (O). The black rectangle indicates the location of the cross sections in Fig. 4. d Distribution of E–W-compressional strike-slip events composing subgroups A (SAEW) and B (SBEW). e Distribution of N–S-compressional strike-slip events composing subgroups A (SANS) and B (SBNS) Cross sections of focal mechanism distributions. Focal mechanisms of earthquakes in the black rectangle in Fig. 3c are projected onto the plane A–A'. See the caption of Fig. 3 and the text for the classification of focal mechanisms Okada et al. (2012) showed that the aftershocks include westward- and eastward-dipping clusters, which were previously interpreted as mutually conjugate faults. InSAR and tilt data also support this interpretation (Takada et al. 2009; Fukuyama 2015). Figure 4 shows that thrust-type earthquakes dominated by E–W compression similar to the mainshock dominate the westward-dipping cluster, which probably represents the source fault of the mainshock. The eastward-dipping cluster contains many thrust-type earthquakes with N–S compression, but many E–W compressional thrust earthquakes also occur in this cluster within the hanging wall of the suspected mainshock fault. Strike-slip earthquakes seem to be concentrated around the intersection of the two dipping clusters of reverse-fault earthquakes in Fig. 4. V p /V s distribution The distributions of estimated Vp/Vs ratios with errors stdb ≤ 0.1 and eigenvalue ratios λ1/λ2≥ 10.0 are shown in Fig. 5. 
Only 958 of 8357 bins with earthquake data met these reliability criteria. The average value of Vp/Vs across all reliable bins is 1.63 and the frequency distribution is shown in Fig. 6a; most ratios in this region lie within the range 1.45 ≤ Vp/Vs ≤ 1.80. Examples of the data distributions of this study are shown in Fig. 6b–d. The depth ranges of bins are 0.00–1.50, 0.75–2.25, 1.50–3.00, 2.25–3.75, 3.00–4.50, 3.75–5.25, 4.50–6.00, 5.25–6.75, 6.00–7.50, 6.75–8.25, 7.50–9.00, and 8.25–9.75 km. The estimated minimum, average, and maximum values of the Vp/Vs ratios in different depth ranges are listed in Table 1. Vp/Vs distribution estimated in this study. The yellow star denotes the mainshock epicenter estimated by Okada et al. (2012). See the caption of Fig. 1 and the text for calderas and tectonic lines a Histogram of Vp/Vs ratios estimated in this study and b–d examples of the data distributions. Slope of the red line represents the local Vp/Vs ratio for the cluster. Blue line is drawn as a reference whose slope is 1.73. Red texts represent the center of the longitude, latitude and depth ranges, and Vp/Vs ratio for the corresponding data Table 1 Minimum, average, and maximum Vp/Vs ratios in different depth ranges Figure 5 shows that Vp/Vs ratios were estimated only in the Hanayama caldera at the southern end of the aftershock area (Fig. 1) in the shallowest depth range (0.00–1.50 km). In 0.75–2.25 km depth range, Vp/Vs ratios were estimated close to the northern and southern ends of the aftershock area. The southern end of Vp/Vs values shows Vp/Vs < 1.70, while both low and high Vp/Vs ratios appear in the northern end. In ranges of 1.50–3.00, 2.25–3.75, 3.00–4.50, 3.75–5.25, 4.50–6.00 and 5.25–6.75 km, Vp/Vs ratios were estimated between the Kurikomayama Nanroku caldera and the Kunimiyama caldera (Mizuyama caldera by Osozawa and Nunohara 2013) and they are considerably inhomogeneous. In ranges of 3.00–4.50 and 3.75–5.25 km, exceptionally high Vp/Vs ratios seem to be located near the main shock. In ranges of 3.75–5.25, 4.50–6.00, 5.25–6.75, 6.00–7.50, 6.75–8.25 and 7.50–9.00 km, Vp/Vs values are generally estimated less than 1.70 inside the Kunimiyama caldera, but several bins of high Vp/Vs are located adjacent to the bins with low Vp/Vs. In 8.25–9.75 km depth range, Vp/Vs ratios are close to 1.70. Low-V p /V s regions in the aftershock area As shown in the previous section, Vp/Vs ratios estimated using the method of Lin and Shearer (2007) in the study area show Vp/Vs values of 1.45–1.80 over a wide range of the aftershock area (Figs. 5, 6). Most of the values are much smaller than 1.73 (Poisson's ratio of 0.25), which is often assumed to be the average or expected value of Vp/Vs. Many previous studies have confirmed that Vp/Vs in seismogenic zones in the upper crust is often less than 1.70 (e.g., Hashida and Ukawa 1997; Nakajima et al. 2001; Kato et al. 2005; Matsubara et al. 2008; Okada et al. 2012; Jo and Hong 2013; Okada et al. 2014; Yukutake et al. 2015). In particular, Nakajima et al. (2001) and Okada et al. (2012, 2014) reported that Vp/Vs in the aftershock area of the Iwate–Miyagi Nairiku earthquake was basically less than 1.70. Yukutake et al. (2015) estimated Vp/Vs around Hakone volcano using the method of Lin and Shearer (2007) and found values as low as Vp/Vs= 1.58 ± 0.07 above the magma source. 
Hashida and Ukawa (1997) investigated the upper crust of the Kanto–Tokai region, Japan, and obtained Vp/Vs ratios less than 1.70 in southern Nagano Prefecture, northern Kanto district, and Izu Peninsula. One reason why low Vp/Vs ratios dominate in these regions is thought to be the influence of the mineral composition of rocks constituting the upper crust. According to the measurement of Christensen (1996), Granite–granodiorite shows Vp/Vs ratios of 1.70 at 200 MPa. Basement rocks in the aftershock area of this study are granitoids (Sasada 1985), and thus, their Vp/Vs ratios are expected to be ~ 1.70. Christensen (1996) also showed Vp/Vs = 1.48 for quartzite at 200 MPa. Therefore, if quartz is abundant in the area, or quartz veins are developed in granitic basement rocks, the Vp/Vs ratios are expected to be less than 1.70. Sasada (1984, 1985) clarified the chemical composition of granitoid in pre-Neogene basement rocks around Mt. Kurikoma in the present study area: many rock samples are tonalite that are poor in K-feldspar and rich in quartz (generally larger than 25%vol., but up to 35%vol.) (Additional file 1: Figure S1). Their quartz-rich samples are taken from regions that correspond to the low-Vp/Vs regions estimated in this study, while samples with quartz contents of < 25% seems to be distributed in the region of high Vp/Vs ratios revealed by Okada et al. (2012) (Additional file 1: Figure S2). A low Vp/Vs value can be also explained by fluid in granular (equilibrium geometry) or tube-shaped pores in rocks (Takei 2002). In the remarkable low-Vp/Vs region estimated by Okada et al. (2012), however, Vs is higher than in the surrounding areas. The tomography results of Nakajima et al. (2001) also show that Vp/Vs in this study area is lower and Vs is higher than in the surrounding area. If the basement rocks contain significant fluid, Vs would be relatively low. Therefore, it is difficult to explain the low Vp/Vs ratios in this study area with the presence of fluid. In measurements made by Christensen (1996) at 200 MPa pressure, the P- and S-wave velocities of granite–granodiorite are 6.2 km/s and 3.7 km/s, respectively. The P-wave velocity of quartzite is estimated to be 6.0 km/s (i.e., slower than granite–granodiorite), while the S-wave velocity is 4.0 km/s. Therefore, a quartz-rich model can explain the low Vp/Vs and high Vs in the aftershock area of the Iwate–Miyagi Nairiku earthquake. Relationship between Vp/Vs and earthquake generation The results of this study show that low Vp/Vs dominates the aftershock area of the Iwate–Miyagi Nairiku earthquake. The method of Lin and Shearer (2007), however, cannot estimate Vp/Vs in regions without earthquakes. Thus, it is not resolved whether the low Vp/Vs region is limited to the aftershock area or extends throughout the upper crust. Therefore, it is necessary to compare our results with the results of tomography to estimate the extent of the low Vp/Vs region resolved by the present work. Nakajima et al. (2001) investigated P- and S-wave velocities in the Tohoku district (northeastern Honshu, Japan) and computed Vp/Vs ratios, yielding an averaged Vp/Vs of 1.69 for the upper crust in the Tohoku region and a value of ~ 1.70 at 10-km depth in the aftershock area of the Iwate–Miyagi Nairiku earthquake. Okada et al. (2012) reported high-resolution cross sections of the Vp/Vs distribution in the aftershock area of the Iwate–Miyagi Nairiku earthquake computed from data of a dense temporary seismic observation network. 
Their cross sections confirm that Vp/Vs ratios of 1.61–1.70 are dominant in the aftershock area, although some regions have ratios as high as 1.90. The present analysis has revealed that the area of the Iwate–Miyagi Nairiku earthquake have low Vp/Vs ratios, but Vp/Vs > 1.90 in some regions, which is consistent with the results of Okada et al. (2012). Those tomographic results show a gradual change from a region of low Vp/Vs to a region with high Vp/Vs, but the spatial resolution in their study was several kilometers or more. On the other hand, the results of our study suggest that high Vp/Vs ratios are confined to "spots" as shown in Fig. 5. From the results of the present and previous studies, it can be concluded that the upper crust in the region is dominated by Vp/Vs ratios less than 1.70. Moreover, our study indicates that seismogenic regions within the upper crust are heterogeneous, with Vp/Vs varying on length scales of several kilometers. The heterogeneity of these Vp/Vs values might be attributed to the presence of fluids, differences in rock types, or both. Causes of the N–S-compressional aftershocks The focal mechanism analysis of the present study shows that the westward-dipping planar aftershock distribution is dominated by E–W-compressional thrust earthquakes, while the eastward-dipping distribution contains many N–S-compressional thrust earthquakes as shown in Fig. 4. After the occurrence of a large thrust earthquake with E–W compression, N–S compressional stress is expected to increase in the area immediately east and west of the main rupture zone (Yoshida et al. 2014a). Therefore, if the E–W background compressive stress in the Tohoku district (e.g., Terakawa and Matsu'ura 2010) is not extremely large, aftershocks showing N–S compression are likely to occur east and west of the mainshock rupture area. However, the actual aftershocks that show reverse faulting with N–S compression are distributed forming a narrow eastward-tilting cluster. To verify whether this distribution can be attributed to heterogeneities in the stress or strength distributions, we quantitatively examine the distributions of focal mechanisms with N–S compression, stress field, and Vp/Vs ratios. Figure 7 shows the distribution of Vp/Vs ratios estimated by Okada et al. (2012), the distribution of the N–S compression-type focal mechanisms (RANS, SANS) classified in this study, and the distribution of the axes of maximum compressive stress change due to the mainshock (herein, Δσ1) estimated by Yoshida et al. (2014a). Large N–S compressional stress changes are observed both east and west of the aftershock area, corresponding to the hanging wall and footwall, respectively (see Additional file 1: Figure S2). Earthquakes dominated by N–S compression occur mainly in regions of Vp/Vs ≈ 1.70, sandwiched between low-Vp/Vs regions to the west and high-Vp/Vs regions to the east. Both the results of our study in Fig. 5 and Okada et al. (2012) shown in Fig. 7 represent that Vp/Vs ratios are less than 1.70 in most of the aftershock area. Figure 7 also indicates that earthquakes with N–S compression occurred mainly in regions with Vp/Vs ≈ 1.70, Δσ1 axes oriented N–S, and differential stress changes > 8 MPa (log(Δσ1 − Δσ3) > 0.9). There are no N–S compressional earthquakes in regions with Vp/Vs < 1.70, even if Δσ1 axes were oriented N–S and differential stress change exceeded 8 MPa. Distribution of Vp/Vs ratios, events with N–S compression, and axes of maximum compressive stress change. 
Distribution of Vp/Vs at 4.5 km depth is from Okada et al. (2012). Focal mechanism solutions with N–S compression from category A events (RANS and SANS) in the depth range 3.25–6.25 km are shown. Bars represent axes of the maximum compressive stress changes (Δσ1) caused by the mainshock at 5-km depth projected to the surface (Yoshida et al. 2014b). The color of each bar represents to the magnitude of the calculated differential stress change. Only differential stress changes ≥ 5 MPa are plotted. In the title of the bottom-left scale bar, "log" means the common (base 10) logarithm and the units are MPa. The value of 0.7 on the logarithmic scale corresponds to 5 MPa and 0.9 corresponds to 8 MPa In the previous section, we showed that basement rocks in the upper crust of the northeastern Japan arc are generally characterized by Vp/Vs < 1.70. Takei (2002) pointed out that Vp/Vs increases when fluid-filled cracks are present. Therefore, the regions of the upper crust with Vp/Vs > 1.70 might correspond to fluid-rich portions. Kato et al. (2005) discussed the relationship between the aftershocks of the 2004 Niigata–Chuetsu (mid-Niigata prefecture) earthquake and the velocity structure of the upper crust. They found that the aftershocks were distributed in a zone of low-to-moderate Vp/Vs (1.65 ≤ Vp/Vs ≤ 1.75), and a zone of significantly low Vp/Vs (< 1.65) coincides with the area where aftershock activity was low. They proposed that the low-Vp/Vs zone corresponds to old basement rock and the low-to-moderate-Vp/Vs zone could be explained by the presence of water-filled pores with high aspect ratios, based on the results of Takei (2002). From these observations, they suggest that the inhomogeneous distribution of pore water contributes to the generation of aftershocks. From the discussion above, we conclude that earthquakes with N–S compressive stress occurred only in regions where N–S compressional stress change caused by the mainshock was large and the strength was low owing to the abundance of fluids. In this study, we investigated the influence of local structural heterogeneities in the upper crust on aftershocks of the 2008 Iwate–Miyagi Nairiku earthquake in the northeastern Japan arc. We found that the aftershock region is basically characterized by low Vp/Vs ratio probably due to the quartz-rich composition of rocks but the region is dotted with high Vp/Vs areas as small as several kilometers. The aftershocks are composing westward-dipping cluster and eastward-dipping cluster. While the former is dominated by reverse-fault-type earthquakes with E–W compression which is the same as the mainshock, many N–S compressional earthquakes are concentrated in space within the latter. These N–S compressional earthquakes occurred mainly in regions where VP/VS ≈ 1.70, N–S compressional stress change was induced by the mainshock, and the change in the differential stress was larger than 8 MPa. There are no aftershocks with N–S compression in regions with Vp/Vs < 1.70, even if the expected N–S compressive stress change was large there. Thus, the N–S compressional aftershocks were probably generated by high pressure of pore fluids in addition to the N–S compressional stress change caused by the mainshock. The datasets analyzed in this study are available from the corresponding author upon reasonable request. 
JMA: Japan Meteorological Agency JMA magnitude double-difference RAEW: reverse-fault A-type earthquake with E–W compression RBEW: reverse-fault B-type earthquake with E–W compression RANS: reverse-fault A-type earthquake with N–S compression RBNS: reverse-fault B-type earthquake with N–S compression SAEW: strike-slip-fault A-type earthquake with E–W compression SBEW: strike-slip-fault B-type earthquake with E–W compression SANS: strike-slip-fault A-type earthquake with N–S compression SBNS: strike-slip-fault B-type earthquake with N–S compression normal-fault A-type earthquake normal-fault B-type earthquake other type earthquake PCA: Baba K (2017) Marine geology (Chapter 10). In: Geological Society of Japan (ed) Regional Geology of Japan 2, Tohoku District (Nihon Chiho Chishitsushi 2 Tohoku Chiho). Asakura Publishing Co. Ltd., pp 427–478 (in Japanese) Christensen NI (1996) Poisson's ratio and crustal seismology. J Geophys Res 101(B2):3139–3156. https://doi.org/10.1029/95JB03446 Efron B (1987) Better bootstrap confidence intervals. J Am Stat Assoc 82(397):171–185. https://doi.org/10.1080/01621459.1987.10478410 Fukuyama E (2015) Dynamic faulting on a conjugate fault system detected by near-fault tilt measurements. Earth Planets Space 67(1):38. https://doi.org/10.1186/s40623-015-0207-1 Geological Survey of Japan, AIST (ed.) (2018) Seamless digital geological map of Japan 1:200,000 Hardebeck JL, Hauksson E (1999) Role of fluids in faulting inferred from stress field signatures. Science 285:236–239. https://doi.org/10.1126/science.285.5425.236 Hardebeck JL, Shearer PM (2002) A new method for determining first-motion focal mechanisms. B Seismol Soc Am 92(6):2264–2276. https://doi.org/10.1785/0120010200 Hasegawa A, Umino N, Takagi A (1978) Double-planed structure of the deep seismic zone in the northeastern Japan arc. Tectonophysics 47(1):43–58. https://doi.org/10.1016/0040-1951(78)90150-6 Hashida Y, Ukawa M (1997) Upper Crustal V p /V s ratios in Kanto-Tokai district, Japan. J Seismol Soc Japan 50:315–327 (in Japanese with English abstract) Hauksson E, Meier M (2019) Applying depth distribution of seismicity to determine thermo-mechanical properties of the seismogenic crust in southern california: comparing lithotectonic blocks. Pure Appl Geophys 176:1061–1081. https://doi.org/10.1007/s00024-018-1981-z Hotelling H (1933) Analysis of a complex of statistical variables into principal components. J Educ Psychol 24(6 & 7):417–441 Iinuma T, Ohzono M, Ohta Y, Miura S, Kasahara M, Takahashi H, Sagiya T, Matsushima T, Nakao S, Ueki S, Tachibana K, Sato T, Tsushima H, Takatsuka K, Yamaguchi T, Ichiyanagi M, Takada M, Ozawa K, Fukuda M, Asahi Y, Nakamoto M, Yamashita Y, Umino N (2009) Aseismic slow slip on an inland active fault triggered by a nearby shallow event, the 2008 Iwate–Miyagi Nairiku earthquake (Mw68). Geophys Res Lett 36:L20308. https://doi.org/10.1029/2009gl040063 Jo E, Hong TK (2013) VP/VS ratios in the upper crust of the southern Korean Peninsula and their correlations with seismic and geophysical properties. J Asian Earth Sci 66:204–214. https://doi.org/10.1016/j.jseaes.2013.01.008 Kato A, Kurashimo E, Hirata N, Sakai S, Iwasaki T, Kanazawa T (2005) Imaging the source region of the 2004 mid-Niigata prefecture earthquake and the evolution of a seismogenic thrust-related fold. Geophys Res Lett 32(7):1–4. https://doi.org/10.1029/2005gl022366 Lin G, Shearer P (2007) Estimating local V p /V s ratios within similar earthquake clusters. B Seismol Soc Am 97(2):379–388. 
https://doi.org/10.1785/0120060115 Lin G, Shearer PM (2009) Evidence for water-filled cracks in earthquake source regions. Geophys Res Lett 36(17):1–5. https://doi.org/10.1029/2009GL039098 Maeda S, Matsuzawa T, Toda S, Yoshida K, Katao H (2018) Complex microseismic activity and depth-dependent stress field changes in Wakayama, southwestern Japan. Earth Planets Space 70(1):21. https://doi.org/10.1186/s40623-018-0788-6 Magistrale H, Zhou HW (1996) Lithologic control of the depth of earthquakes in Southern California. Science 273:639–642. https://doi.org/10.1126/science.273.5275.639 Matsubara M, Obara K, Kasahara K (2008) Three-dimensional P- and S-wave velocity structures beneath the Japan Islands obtained by high-density seismic stations by seismic tomography. Tectonophysics 454:86–103. https://doi.org/10.1016/j.tecto.2008.04.016 Nakajima J, Matsuzawa T, Hasegawa A, Zhao D (2001) Three-dimensional structure of Vp, Vs, and V p /V s beneath northeastern Japan: implications for arc magmatism and fluids. J Geophys Res 106(B10):21843–21857. https://doi.org/10.1029/2000JB000008 Nunohara K, Yoshida T, Yamada R, Maeda S, Ikeda K, Nagahashi Y, Yamamoto A, Kudo T (2010) Geology and geologic structure around the area of hypocenter of the 2008 Iwate-Miyagi Nairiku earthquake. Earth Monthly 32:356–366 (in Japanese) Okada T, Umino N, Hasegawa A (2010) Deep structure of the Ou mountain range strain concentration zone and the focal area of the 2008 Iwate–Miyagi Nairiku earthquake, NE Japan—seismogenesis related with magma and crustal fluid. Earth Planets Space 62:347–352. https://doi.org/10.5047/eps.2009.11.005 Okada T, Umino N, Hasegawa A (2012) Hypocenter distribution and heterogeneous seismic velocity structure in and around the focal area of the 2008 Iwate-Miyagi Nairiku Earthquake, NE Japan—possible seismological evidence for a fluid driven compressional inversion earthquake. Earth Planets Space 64:717–728. https://doi.org/10.5047/eps.2012.03.005 Okada T, Matsuzawa T, Nakajima J, Uchida N, Yamamoto M, Hori S, Kono T, Nakayama T, Hirahara S, Hasegawa A (2014) Seismic velocity structure in and around the Naruko volcano, NE Japan, and its implications for volcanic and seismic activities. Earth Planets Space 66:114. https://doi.org/10.1186/1880-5981-66-114 Osozawa S, Nunohara K (2013) Surface ruptures of the 2008 M 6.9 Iwate–Miyagi Nairiku earthquake at Ichinoseki City area: reverse fault reactivation of Late Miocene caldera-collapse normal faults overlapping a Middle Miocene listric normal fault. J Geol Soc Japan 119(Supplement):18–26 (in Japanese) Pearson K (1901) On lines and planes of closest fit to systems of points in space. Philos Mag Ser 2(11):559–572 Sasada M (1984) The pre Neogene basement rocks of the Kamuro Yama Kurikoma Yama area, northeastern Honshu, Japan Part 1, Onikobe Yuzawa Mylonaite zone. J Geol Soc Japan 90(12):865–874 (in Japanese with English abstract) Sasada M (1985) The pre Neogene basement rocks of the Kamuro Yama Kurikoma Yama area, Northeastern Honshu, Japan Part 2, Boundary between the Abukuma and Kitakami belts. J Geol Soc Japan 91(1):1–17 (in Japanese with English abstract) Shelly DR, Moran SC, Thelen WA (2013) Evidence for fluid-triggered slip in the 2009 Mount Rainier. Washington earthquake swarm. Geophys Res Lett 40(8):1506–1512. https://doi.org/10.1002/grl.50354 Takada Y, Kobayashi T, Furuya M, Murakami M (2009) Coseismic displacement due to the 2008 Iwate-Miyagi Nairiku earthquake detected by ALOS/PALSAR: preliminary results. Earth Planets Space 61:e9–e12. 
https://doi.org/10.1186/BF03353153 Takei Y (2002) Effect of pore geometry on VP/VS: from equilibrium geometry to crack. J Geophys Res 107(B2):2043. https://doi.org/10.1029/2001JB000522 Terakawa T, Matsu'ura M (2010) The 3-D tectonic stress fields in and around Japan inverted from centroid moment tensor data of seismic events. Tectonics 29:TC6008. https://doi.org/10.1029/2009tc002626 Terakawa T, Hashimoto C, Matsu'ura M (2013) Changes in seismic activity following the 2011 Tohoku-oki earthquake: effects of pore fluid pressure. Earth Planet Sci Lett 365:17–24. https://doi.org/10.1016/j.epsl.2013.01.017 Umino N, Matsuzawa T, Hori S, Nakamura A, Yamamoto A, Hasegawa A, Yoshida T (1998) 1996 Onikobe earthquakes and their relation to crustal structure. Zisin 51:253–264 (in Japanese with English abstract) Vavryčuk V, Adamová P (2018) Detection of stress anomaly produced by interaction of compressive fault steps in the West Bohemia Swarm Region, Czech Republic. Tectonics 37:4212–4225. https://doi.org/10.1029/2018TC005163 Waldhauser F, Ellsworth WL (2000) A double-difference earthquake location algorithm: method and application to the Northern Hayward Fault, California. B Seismol Soc Am 90(6):1353–1368. https://doi.org/10.1785/0120000006 Wessel P, Smith W (1998) New, improved version of generic mapping tools released. Eos Trans AGU 79(47):579 Yoshida K, Hasegawa A (2018) Sendai-Okura earthquake swarm induced by the 2011 Tohoku-Oki earthquake in the stress shadow of NE Japan: detailed fault structure and hypocenter migration. Tectonophysics 733(January):132–147. https://doi.org/10.1016/j.tecto.2017.12.031 Yoshida K, Hasegawa A, Okada T, Takahashi H, Kosuga M, Iwasaki T, Yamanaka Y, Katao H, Iio Y, Kubo A, Matsushima T, Miyamachi H, Asano Y (2014a) Pore pressure distribution in the focal region of the 2008 M7.2 Iwate–Miyagi Nairiku earthquake. Earth Planets Space 66:59. https://doi.org/10.1186/1880-5981-66-59 Yoshida K, Hasegawa A, Okada T, Iinuma T (2014b) Changes in the stress field after the 2008 M7.2 Iwate–Miyagi Nairiku earthquake in northeastern Japan. J Geophys Res: Solid Earth 119:9016–9030. https://doi.org/10.1002/2014JB011291 Yukutake Y, Honda R, Harada M, Arai R, Matsubara M (2015) A magma-hydrothermal system beneath Hakone volcano, central Japan, revealed by highly resolved velocity structures. J Geophys Res: Solid Earth 120:3293–3308. https://doi.org/10.1002/2014JB011856 Zhang H, Thurber CH (2003) Double-difference tomography: the method and its application to the Hayward Fault. California. B Seismol Soc Am 93(5):1875–1889. https://doi.org/10.1785/0120020190 We thank R. Takagi of Tohoku University and T. Kubota of the National Research Institute for Earth Science and Disaster Resilience for the valuable discussion. The authors thank the Lead Guest Editor J. Nakajima and the reviewers T. Yamada and an anonymous reviewer for their helpful comments, which have improved the manuscript. We used the unified earthquake catalog of the Japan Meteorological Agency (JMA). The figures in this paper were prepared using the Generic Mapping Tools (GMT) software package (Wessel and Smith 1998). This research was supported by JSPS KAKENHI Grant Number JP26109002. 
Geological Survey of Japan, National Institute of Advanced Industrial Science and Technology, AIST Tsukuba Central, 7, Higashi-1-1-1, Tsukuba, Ibaraki, 305-8567, Japan Sumire Maeda Graduate School of Science, Tohoku University, 6-6 Aza-Aoba, Aramaki, Aoba-ku, Sendai, 980-8578, Japan Toru Matsuzawa , Keisuke Yoshida , Tomomi Okada & Takeyoshi Yoshida Search for Sumire Maeda in: Search for Toru Matsuzawa in: Search for Keisuke Yoshida in: Search for Tomomi Okada in: Search for Takeyoshi Yoshida in: Data analysis and manuscript preparation were carried out mainly by SM. TM directed the study at all stages. KY analyzed the data to estimate accurate arrival time differences. KY and TO provided focal mechanism data and relocated hypocenters using the double-difference tomography method, respectively. TY participated in the design of the study and discussion. All authors read and approved the final manuscript. Correspondence to Sumire Maeda. Additional file 1. Additional figures. Maeda, S., Matsuzawa, T., Yoshida, K. et al. Causes of the N–S compressional aftershocks of the E–W compressional 2008 Iwate–Miyagi Nairiku earthquake (M7.2) in the northeastern Japan arc. Earth Planets Space 71, 94 (2019) doi:10.1186/s40623-019-1073-z Received: 10 May 2019 DOI: https://doi.org/10.1186/s40623-019-1073-z Microearthquakes Stress field Heterogeneous structure Geological structure Crustal Dynamics: Toward Integrated View of Island Arc Seismogenesis
CommonCrawl
\begin{definition}[Definition:Cycle (Graph Theory)] A '''cycle''' is a circuit in which no vertex except the first (which is also the last) appears more than once. An '''$n$-cycle''' is a cycle with $n$ vertices. The set of vertices and edges which go to make up a cycle form a subgraph. This subgraph itself is also referred to as a '''cycle'''. \end{definition}
ProofWiki
The TORCH concept is based on the detection of Cherenkov light produced in a quartz radiator plate. It is an evolution of the DIRC technique, extending the performance by the use of precise measurements of the emission angles and arrival times of detected photons. This allows dispersion in the quartz to be corrected for, and the time of photon emission to be determined with a target precision of $\rm 70~ps$ per photon. Combining the information from the 30 or so detected photons from each charged particle that traverses the plate, exceptional resolution on the time-of-flight of order $\rm 15~ps$ should be possible. The TORCH technique is a candidate for application in a future upgrade of the LHCb experiment, for low-momentum charged particle identification. Over a flight distance of $\rm 10~m$ it would provide clean pion-kaon separation up to $\rm 10~GeV$, in the busy environment of collisions at the LHC. Fast timing will also be crucial at higher luminosity for pile-up rejection. A 5-year R&D program has been pursued with industry to develop suitable photon detectors with the required fast timing performance, fine spatial granularity (0.8 mm-wide pixels), long lifetime $\rm (5~C/cm^2$ integrated charge at the anode) and large active area (80% for a linear array). This is being achieved using $\rm 6 \times 6~cm^2$ micro-channel plate PMTs, and final prototype tubes are expected to be delivered early in 2017. Earlier prototype tubes have demonstrated most of the required features individually, using fast read-out electronics that has been developed based on NINO+HPTDC chips. A small-scale prototype of the optical arrangement has been tested in beam at CERN over the last year, and demonstrated close to nominal performance. Components for a large-scale prototype which will be read out using 10 MCP-PMTs, including a highly-polished synthetic quartz radiator plate of dimensions $\rm 125 \times 66 \times 1~cm^3$, are currently being manufactured for delivery on the same timescale. The status of the project will be reviewed, including the latest results from test beam analysis, and the progress towards the final prototype. The TORCH detector is an evolution of the DIRC technique, for precision time-of-flight over large areas, being developed for a future upgrade of the LHCb experiment. The R&D project is delivering high-performance photon detectors and an optical system in synthetic quartz for a large-scale prototype. The status of the project will be reviewed, including the latest results from test beam analysis and progress towards the prototype.
CommonCrawl
\begin{document} \title{Integration of A* Search and Classic Optimal Control for Safe Planning of Continuum Deformation of a Multi-Quadcopter System } \author{Hossein Rastgoftar \thanks{{\color{black}H. Rastgoftar is with the Department of Aerospace Engineering, University of Michigan, Ann Arbor, MI, 48109 USA e-mail: [email protected].}} } \markboth{ } {Shell \MakeLowercase{\textit{et al.}}: Bare Demo of IEEEtran.cls for IEEE Journals} \maketitle \begin{abstract} This paper offers an algorithmic approach to plan continuum deformation of a multi-quadcopter system (MQS) in an obstacle-laden environment. We treat the MQS as finite number of particles of a deformable body coordinating under a homogeneous transformation. In this context, we define the MQS homogeneous deformation coordination as a decentralized leader-follower problem, and integrate the principles of continuum mechanics, A* search method, and optimal control to safety and optimally plan MQS continuum deformation coordination. In particular, we apply the principles of continuum mechanics to obtain the safety constraints, use the A* search method to assign the intermediate configurations of the leaders by minimizing the travel distance of the MQS, and determine the leaders' optimal trajectories by solving a constrained optimal control problem. The optimal planning of the continuum deformation coordination is acquired by the quadcopter team in a decentralized fashion through local communication. \end{abstract} \begin{IEEEkeywords} Large-Scale Coordination, Affine Transformation, Optimal Control, A* Search, Safety, Decentralized Control, and Local Communication. \end{IEEEkeywords} \section{Introduction}\label{Introduction} Multi-agent coordination has been an active research area over the past few decades. Many aspects of multi-agent coordination have been explored and several centralized and decentralized multi-agent control {\color{black}approaches} already exist. In spite of vast amount of existing research on multi-agent coordination, scalability, maneuverability, safety, resilience, and optimality of group coordination are still very important issues for exploration and study. The goal of this paper is to address these important problems in a formal and algorithmic way through integrating the principles of continuum mechanics, A* search method, and classic optimal control approach. \subsection{Related Work} Consensus and containment control are two available decentralized muti-agent coordination approaches. Multi-agent consensus have found numerous applications such as flight formation control \cite{zhang2018collision}, multi-agent surveillance \cite{du2017pursuing}, and air traffic control \cite{artunedo2017consensus}. Consensus control of homogeneous and heterogeneous multi-agents systems \cite{cheng2018event} was studied in the past. Multi agent consensus under fixed \cite{tu2017decentralized} and switching \cite{munoz2017adaptive, liu2018leader} communication topologies have been widely investigated by the researchers over the past two decades. Stability of consensus algorithm in the presence of delay is analyzed in Ref. \cite{ma2020consensus}. Researchers have {\color{black}also} investigated multi-agent consensus in the presence of actuation failure \cite{wang2019fault, shahab2019distributed}, sensor failure \cite{liu2017kalman}, and adversarial agents \cite{leblanc2011consensus}. 
Containment control is a decentralized leader-follower multi-agent coordination approach in which the desired coordination is defined by leaders and acquired by followers through local communication. Early work studied stability and convergence of multi-agent containment protocol in Refs. \cite{ji2008containment, liu2012necessary}, under fixed \cite{li2016containment} or switching \cite{su2015multi} communication topologies, as well as multi-agent containment in the presence of fixed \cite{asgari2019necessary} and time-varying \cite{atrianfar2020sampled} time delays. Resilient containment control is studied in the presence of actuation failure \cite{cui2018command}, sensor failure \cite{ye2017observer}, and adversarial agents \cite{zuo2019resilient}. Also, researchers investigated the problems of finite-time \cite{qin2019distributed} and fixed-time \cite{xu2020distributed} containment control of multi-agent systems in the past. \subsection{Contributions} The main objective of this paper is to integrate the principles of continuum mechanics with search and optimization methods to safely plan continuum deformation of a multi-quadcopter system (MQS). In particular, we treat quadcopters as a finite number of particles of a $2$-D deformable body coordinating in a $3$-D where the desired coordination {\color{black}of the continuum} is defined by a homogeneous deformation. Homogeneous deformation is a non-singular affine transformation which is classified as a Lagrangian continuum deformation problem. Due to linearity of homogeneous transformation, it can be defined as a decentralized leader-follower coordination problem in which leaders' desired positions are uniquely related to the components of the Jacobian matrix and rigid-body displacement vector of the homogeneous transformation at any time $t$. {\color{black}This paper develops} an algorithmic protocol for safe planning of coordination of a large-scale MQS by determining the global desired trajectories of leaders in an obstacle-laden motion space, containing obstacles with arbitrary geometries. To this end, we integrate the A* search method, optimal control planning, and eigen-decomposition to plan the desired trajectories of the leaders minimizing travel distances between their initial and final configurations. Containing the MQS by a rigid ball, the path of the center of the containment ball is safely determined using the A* search method. We apply the principles of Lagrangian continuum mechanics to decompose the homogeneous deformation coordination and {\color{black}to} ensure inter-agent collision avoidance {\color{black}through} constraining the deformation eigenvalues. {\color{black}By eigen-decomposition of a homogeneous transformation, we can also} determine the leaders' intermediate configurations and formally specify safety requirements for a large-scale MQS {\color{black}coordination} in a geometrically-constrained environment. Additionally, we {\color{black}assign} safe desired trajectories of leaders, connecting consecutive configurations of the leader agents, by solving a constrained optimal control planning problem. This paper is organized as follows: Preliminary notions including graph theory definitions and position notations are presented in Section \ref{Preliminaries}. Problem Statement is presented in Section \ref{Problem Statement} and followed by continuum deformation coordination planning developed in Section \ref{Continuum Deformation Planning}. 
We review the existing approach for continuum deformation acquisition {\color{black}through local communication} in Section \ref{Continuum Deformation Acquisition}. Simulation results are presented in Section \ref{Simulation Results} and followed by Conclusion in Section \ref{Conclusion}. \section{Preliminaries}\label{Preliminaries} \subsection{Graph Theory Notions}\label{Graph Theory Notions} We consider {\color{black}the} group coordination of a quadcopter team consisting of $N$ quadcopters in an obstacle-laden environment. Communication among quadcopters are defined by graph $\mathcal{G}\left(\mathcal{V},\mathcal{E}\right)$ with node set $\mathcal{V}=\{1,\cdots,N\}$, defining the index numbers of the quadcopters, and edge set $\mathcal{E}\subset \mathcal{V}\times \mathcal{V}$. In-neighbors of quadcopter $i\in \mathcal{V}$ is defined by set $\mathcal{N}_i=\left\{j:\left(j,i\right)\in \mathcal{E}\right\}$. In this paper, quadcopters are treated as particles of a $2$-D continuum, where the desired coordination is defined by {\color{black}a} homogeneous transformation \cite{rastgoftar2020scalable}. {\color{black}A} desired {\color{black}$2$-D} homogeneous transformation is defined by three leaders and acquired by the remaining follower quadcopters through local communication. Without loss of generality, leaders and followers are identified by $\mathcal{V}_L=\left\{1,2,3\right\}\subset \mathcal{V}$ and $\mathcal{V}_F=\left\{4,\cdots,N\right\}$. Note that leaders move independently, therefore, $\mathcal{N}_i=\emptyset$, if $i\in \mathcal{V}_L$. \begin{assumption} Graph $\mathcal{G}\left(\mathcal{V},\mathcal{E}\right)$ is defined such that every follower quadcopter accesses position information of three in-ineighbor agents, thus, \begin{equation} \bigwedge_{i\in \mathcal{V}_F}\left(\mathcal{N}_i=3\right). \end{equation} \end{assumption} \subsection{Position Notations} In this paper, we define actual position $\mathbf{r}_i(t)=\begin{bmatrix} x_i(t)&y_i(t)&z_i(t) \end{bmatrix}^T$, global desired position $\mathbf{p}_i(t)=\begin{bmatrix} x_{i,HT}(t)&y_{i,HT}(t)&z_{i,HT}(t) \end{bmatrix}^T$, local desired position $\mathbf{r}_{i,d}(t)=\begin{bmatrix} x_{i,d}(t)&y_{i,d}(t)&z_{i,d}(t) \end{bmatrix}^T$, and reference position $\mathbf{p}_{i,0}=\begin{bmatrix} x_{i,0}&y_{i,0}&0 \end{bmatrix}^T$ for every quadcopter $i\in \mathcal{V}$. Actual position $\mathbf{r}_i(t)$ is the output vector of the control system of quadcopter $i\in \mathcal{V}$. Global desired position of quadcopter $i\in \mathcal{V}$ is defined by a homogeneous transformation with the details provided in Ref. \cite{rastgoftar2020scalable} and discussed in Section \ref{Continuum Deformation Planning}. Local desired position of quadcopter $i\in \mathcal{V}$ is given by \begin{equation} \mathbf{r}_{i,d}(t)=\begin{cases} \mathbf{p}_i(t)&i\in \mathcal{V}_L\\ \sum_{j\in \mathcal{N}_i}\mathbf{r}_j(t)&i\in \mathcal{V}_F\\ \end{cases} , \end{equation} where $w_{i,j}>0$ is a constant communication weight between follower $i\in \mathcal{V}_F$ and in-neighbor quadcopter $j\in \mathcal{N}_i$, and \begin{equation} \sum_{j\in \mathcal{N}_i}w_{i,j}=1. \end{equation} Followers' communication weights are consistent with the reference positions of quadcopters and satisfy the following equality constraints: \begin{equation} \bigwedge_{i\in \mathcal{V}_F}\left(\sum_{j\in \mathcal{N}_i}w_{i,j}\left(\mathbf{p}_{j,0}-\mathbf{p}_{i,0}\right)=0\right). 
\end{equation} \begin{remark} The initial configuration of the MQS is obtained by a rigid-body rotation of the reference configuration. Therefore, initial position of every quadcopter $i\in \mathcal{V}$ denoted by $\mathbf{r}_{i,s}$ is not necessarily the same as the reference position $\mathbf{p}_{i,0}${\color{black},} but $\mathbf{r}_{i,s}$ and $\mathbf{p}_{i,0}$ satisfy the following relation: \begin{equation} \bigwedge_{i=1}^{N-1}\bigwedge_{j=i+1 }^N\left(\|\mathbf{r}_{i,s}-\mathbf{r}_{j,s}\|=\|\mathbf{p}_{i,0}-\mathbf{p}_{j,0}\|\right), \end{equation} where $\|\cdot\|$ is the 2-norm symbol. \end{remark} \section{Problem Statement}\label{Problem Statement} We treat the MQS as {\color{black}particles} of a {\color{black}$2$-D} deformable body navigating in an obstacle-laden environment. The desired formation of the MQS is given by \begin{equation}\label{followerleaderdesiredrelation} \mathbf{y}_{F,HT}(t)=\mathbf{H}\mathbf{y}_{L,HT}(t), \end{equation} at any time $t\in [t_s,t_u]$, where $\mathbf{H}\in \mathbb{R}^{3\left(N-3\right)\times 9}$ is a constant shape matrix that is obtained based on reference positions in Section \ref{Continuum Deformation Planning}{\color{black}. Also,} \begin{subequations} \begin{equation} \mathbf{y}_{L,HT}=\mathrm{vec}\left(\begin{bmatrix} \mathbf{p}_{1}&\cdots&\mathbf{p}_{3} \end{bmatrix}^T\right)\in \mathbb{R}^{9\times 1}, \end{equation} \begin{equation} \mathbf{y}_{F,HT}=\mathrm{vec}\left(\begin{bmatrix} \mathbf{p}_{4}&\cdots&\mathbf{p}_{N} \end{bmatrix}^T\right)\in \mathbb{R}^{3\left(N-3\right)\times 1} \end{equation} \end{subequations} aggregate the components of desired positions of followers and leaders, respectively, where ``vec'' {\color{black}is} the matrix vectorization symbol. Per Eq. \eqref{followerleaderdesiredrelation}, {\color{black}the} desired formation of followers{\color{black}, assigned by $\mathbf{y}_{F,HT}(t)$,} is uniquely determined based on the desired leaders' trajectories defined by $\mathbf{y}_{L,HT}(t)$ over the time interval $\left[t_s,t_u\right]$. The MQS is constrained to remain inside the rigid containment ball \begin{equation} \resizebox{0.99\hsize}{!}{ $ \mathcal{S}\left({\mathbf{d}}\left(t\right),r_{\mathrm{max}}\right)=\left\{\left(x,y,z\right):\left(x-d_x\right)^2+\left(x-d_x\right)^2+\left(z-d_z\right)^2\leq r_{\mathrm{max}}^2\right\} $ } \end{equation} with {\color{black}the} constant radius $r_{\mathrm{max}}$ and {\color{black}the} center $\mathbf{d}(t)=\begin{bmatrix} d_x(t)&d_y(t)&d_z(t) \end{bmatrix}^T$ at time $t\in \left[t_s,t_u\right]$. 
The main objective of this paper is to determine $\mathbf{y}_{L,HT}(t)$ and ultimate time $t_u$ such that the MQS travel distances are minimized, and the following constraints are all satisfied at any time $t\in \left[t_s,t_u\right]$: \begin{subequations} \begin{equation}\label{c1} \forall t\in \left[t_s,t_u\right],\qquad \mathbf{y}_{L,HT}^T\left(t\right)\mathbf{\Psi}\mathbf{y}_{L,HT}\left(t\right)-A_s=0, \end{equation} \begin{equation}\label{c2} \forall t\in \left[t_s,t_u\right],\qquad \bigwedge_{i\in \mathcal{V}}\bigwedge_{j\in \mathcal{V},j\neq i}\|\mathbf{r}_i(t)-\mathbf{r}_j(t)\|\neq 2\epsilon, \end{equation} \begin{equation}\label{c3} \forall t\in \left[t_s,t_u\right],\qquad \bigwedge_{i\in \mathcal{V}_L} \left(z_{i,HT}\left(t\right)=d_z(t)\right), \end{equation} \begin{equation}\label{c4} \forall t\in \left[t_s,t_u\right],\qquad \bigwedge_{\in \mathcal{V}}\left(\left(x_{i,HT}(t),y_{i,HT}(t)\right)\in \mathcal{S}\left({\color{black}\mathbf{d}}\left(t\right),r_{\mathrm{max}}\right)\right), \end{equation} \end{subequations} where {\color{black}$x_{i,HT}(t)$ and $y_{i,HT}(t)$ are the $x$ and $y$ components of the global desired position of quadcopter $i\in \mathcal{V}$ at time $t\in \left[t_s,t_u\right]$,} \begin{equation} \mathbf{\Psi}=\mathbf{O}^T\mathbf{P}^T\mathbf{P}\mathbf{O} \end{equation} is constant, \begin{subequations} \begin{equation} \mathbf{P}={1\over 4}\begin{bmatrix} 0&0&0&0&1& -1\\ 0&0&0&-1&0&1\\ 0&0&0&1&-1&0\\ 0&-1&1& 0&0&0\\ 1&0&-1& 0&0&0\\ -1&1&0& 0&0&0\\ \end{bmatrix} , \end{equation} and $\mathbf{O}=\begin{bmatrix} \mathbf{I}_6&\mathbf{0}_{6\times 3} \end{bmatrix}$. \end{subequations} The constraint equation \eqref{c1} ensures that the area of the leading triangle, with vertices occupied by the desired position of the leaders, remains constant and equal to $A_s$ at any time $t\in \left[t_s,t_u\right]$. Constraint equation \eqref{c2} ensures that no two quadcopters collide{\color{black},} if every quadcopter {\color{black}$i\in \mathcal{V}$} can be enclosed by a ball with constant radius $\epsilon$. Constraint equation \eqref{c3} ensures that the desired formation of the MQS lies in a horizontal plane at any time $t\in \left[t_s,t_u\right]$. Per Eq. \eqref{c4}, the desired MQS formation is {\color{black}constrained} to remain inside the ball $\mathcal{S}\left({\mathbf{d}}(t),r_{\mathrm{max}}\right)$ at any time $t\in \left[t_s,t_u\right]$. {\color{black}To accomplish the goal of this paper, we} integrate (i) A* search, (ii) eigen-decomposition, (iii) optimal control planning to assign leaders' optimal trajectories ensuring safety requirements \eqref{c1}-\eqref{c4} {\color{black}by performing the following sequential steps:} \textbf{Step 1: Assigning Intermediate Locations of the Containment Ball:} Given initial and final positions of the center of the containment ball, denoted by {\color{black}$\bar{\mathbf{d}}_s=\mathbf{d}\left(t_s\right)=\bar{\mathbf{d}}_0$ and $\bar{\mathbf{d}}_u=\mathbf{d}\left(t_s\right)=\bar{\mathbf{d}}_{n_\tau}$}, and obstacle geometries, we apply the A* search method to determine the intermediate positions of the center of the containment ball $\mathcal{S}$, denoted by $\bar{\mathbf{d}}_{1}$, $\cdots$, $\bar{\mathbf{d}}_{n_\tau-1}$, such that: (i) the travel distance between the initial and final configurations of the MQS is minimized and (ii) the containment ball do not collide the obstacles{\color{black},} arbitrarily distributed in the coordination space. 
{\color{black}\textbf{Step 2: Assigning Leaders' Intermediate Configurations:} By knowing $\bar{\mathbf{d}}_{1}$, $\cdots$, $\bar{\mathbf{d}}_{n_\tau-1}$, we define \begin{equation}\label{betak} \beta_k= \dfrac{\sum_{j=0}^k\left(\bar{\mathbf{d}}_{j}-\bar{\mathbf{d}}_{0}\right)}{\sum_{j=0}^{n_\tau}\left(\bar{\mathbf{d}}_{j}-\bar{\mathbf{d}}_{0}\right)} \end{equation} and \begin{equation}\label{tku} k=0,1,\cdots,n_\tau,\qquad t_k(t_u)=\left(1-\beta_k\right)t_s+\beta_kt_u \end{equation} for $k=0,\cdots,n_\tau$, where $t_k$ is when the center of the containment ball $\mathcal{S}$ reaches desired intermediate position $\bar{\mathbf{d}}_{k}$.} Given $\mathbf{y}_{L,HT}\left(t_s\right)=\bar{\mathbf{y}}_{L,h,0}$, $\mathbf{y}_{L,HT}\left(t_u\right)=\bar{\mathbf{y}}_{L,h,n_\tau}$, Section \ref{Planning2} decomposes the homogeneous deformation coordination to determine the intermediate configurations of the leaders {\color{black}that are denoted by $\bar{\mathbf{y}}_{L,HT,1}$, $\cdots$, $\bar{\mathbf{y}}_{L,HT,n_\tau-1}$}. \textbf{Step 3: Assigning Leaders' Desired Trajectories:} By expressing $\bar{\mathbf{d}}=\begin{bmatrix} \bar{d}_{x,k}&\bar{d}_{y,k}&\bar{d}_{z,k} \end{bmatrix}^T$ for $k=0,1,\cdots,n_\tau$, $z$ components of the leaders' desired trajectories are {\color{black}the same at anytime $t\in \left[t_s,t_u\right]$}, and defined by \begin{equation}\label{zcomponent} \forall i\in \mathcal{V}_L,\qquad z_{i,HT}=\bar{d}_{z,k}\left(1-\gamma(t,T_k)\right)+\bar{d}_{z,k+1}\gamma\left(t,T_k\right) \end{equation} at any time $t\in \left[t_k,t_{k+1}\right]$ for $k=0,\cdots,n_\tau-1$, where $T_k=t_{k+1}-t_k$, and \begin{equation}\label{gamma} \gamma(t,T_k)=6\left({t-t_k\over t_{k+1}-t_k}\right)^5-15\left({t-t_k\over t_{k+1}-t_k}\right)^4+10\left({t-t_k\over t_{k+1}-t_k}\right)^3 \end{equation} for $t\in \left[t_k,t_{k+1}\right]$. Note that $\gamma(t_k)=0$, $\gamma_{k+1}=1$, $\dot{\gamma}\left(t_k\right)=\dot{\gamma}\left(t_{k+1}\right)=0$, and $\ddot{\gamma}\left(t_k\right)=\ddot{\gamma}\left(t_{k+1}\right)=0$. The $x$ and $y$ components of the desired trajectories of leaders are governed by dynamics \begin{equation}\label{maindynamicsss} \dot{\mathbf{x}}_L=\mathbf{A}_L{\mathbf{x}}_L+\mathbf{B}_L{\mathbf{u}}_L, \end{equation} where ${\mathbf{u}}_L\in \mathbf{R}^{9\times 1}$ is the input vector, and \begin{subequations} \begin{equation} \mathbf{x}_L(t)=\left(\mathbf{I}_2\otimes\mathbf{O}\right)\begin{bmatrix} \mathbf{y}_{L,HT}(t)\\\dot{\mathbf{y}}_{L,HT}(t) \end{bmatrix} ^T\in \mathbb{R}^{12\times 1} \end{equation} \begin{equation} \mathbf{A}_L=\begin{bmatrix} \mathbf{0}_{6\times 6}&\mathbf{I}_{6}\\ \mathbf{0}_{6\times 6}&\mathbf{0}_{6\times 6}\\ \end{bmatrix} , \end{equation} \begin{equation} \mathbf{B}_L=\begin{bmatrix} \mathbf{0}_{6\times 6}\\ \mathbf{I}_{6}\\ \end{bmatrix} , \end{equation} \end{subequations} $\mathbf{0}_{6\times 6}\in \mathbb{R}^{6\times 6}$ is a zero-entry matrix, and $\mathbf{I}_6\in \mathbb{R}^{6\times 6}$ is an identity matrix. Control input $\mathbf{u}_L\in \mathbb{R}^{6\times 1}$ is optimized by minimizing cost function \begin{equation} \min \mathrm{J}(\mathbf{u}_L,t_u)=\min {1\over 2}\sum_{k=0}^{n_\tau-1} \left(\int_{t_k(t_u)}^{t_{k+1}(t_u)}\mathbf{u}_L^T\left(\tau\right)\mathbf{u}_L\left(\tau\right)d\tau\right) \end{equation} subject to dynamics \eqref{maindynamicsss}, safety conditions \eqref{c1}-\eqref{c4}, and boundary conditions \begin{equation} \bigwedge_{k=0}^{n_\tau}\left(\mathbf{x}_{L}(t_k)=\bar{\mathbf{x}}_{L,k}\right). 
\end{equation} A desired continuum deformation coordination, planned by the leader quadcopters, is acquired by followers in a decentralized fashion using the protocol developed in Refs. \cite{rastgoftar2020scalable, rastgoftar2020fault}. This protocol is discussed in Section \ref{Continuum Deformation Acquisition}. \section{Continuum Deformation Planning}\label{Continuum Deformation Planning} The desired configuration of the MQS is defined by {\color{black}affine transformation} \begin{equation}\label{globaldesiredcoordination} i\in \mathcal{V},\qquad \mathbf{p}_i\left(t\right)=\mathbf{Q}\left(t\right){\mathbf{p}}_{i,0}+\mathbf{s}\left(t\right), \end{equation} at time $t\in \left[t_s,t_u\right]$, where $\mathbf{p}_i(t)=\begin{bmatrix} x_{i,HT}(t)&y_{i,HT}(t)&z_{i,HT}(t) \end{bmatrix}^T\in \mathbb{R}^3$ is the desired position of quadcopter $i\in \mathcal{V}$, $\mathbf{p}_{i,0}$ is the reference position of quadcopter $i\in \mathcal{V}$, and $\mathbf{s}(t)=\begin{bmatrix} s_x(t)&s_y(t)&s_z(t) \end{bmatrix}^T$ is the rigid body displacement vector. Also, Jacobian matrix $\mathbf{Q}=\left[Q_{ij}\right]\in \mathbb{R}^{3\times 3}$ given by \begin{equation} \mathbf{Q}\left(t\right)=\begin{bmatrix} \mathbf{Q}_{xy}(t)&\mathbf{0}_{2\times 1}\\ \mathbf{0}_{1\times 2}&1 \end{bmatrix} \end{equation} is non-singular at any time $t\in \left[t_s,t_u\right]${\color{black},} where $\mathbf{Q}_{xy}(t)\in \mathbb{R}^{2\times 2}$ specifies the deformation of the leading triangle, defined by the three leaders. Because $Q_{31}=Q_{32}=Q_{13}=Q_{23}=0$, the leading triangle lies in the horizontal plane at any time $t\in \left(t_s,t_u\right]$, if the $z$ components of desired positions of the leaders are all identical at the initial time $t_s$. \begin{assumption}\label{assmq1} This paper assumes that $\mathbf{Q}(t_s)=\mathbf{I}_3$. Therefore, initial and reference positions of quadcopter $i\in \mathcal{V}$ are related by \begin{equation} \mathbf{p}_i\left(t_s\right)=\mathbf{p}_{i,0}+\bar{\mathbf{d}}_s. \end{equation} \end{assumption} The global desired trajectory of quadcopter $i\in \mathcal{V}${\color{black}, defined by affine transformation \eqref{globaldesiredcoordination}, can be expressed} by \begin{equation}\label{form2} \mathbf{p}_i(t)=\left(\mathbf{I}_3\otimes \mathbf{\Omega}_2^T\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{i,0}\right)\right)\mathbf{y}_{L,HT}{\color{black}\left(t\right)}, \end{equation} where $\mathbf{\Omega}_2\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{i,0}\right)\in \mathbb{R}^{3\times 1}$ is defined based on reference positions of leaders $1$, $2$, and $3${\color{black}, as well as} quadcopter $i\in \mathcal{V}$ by \begin{equation} \mathbf{\Omega}_2\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{i,0}\right)=\begin{bmatrix} x_{1,0}&x_{2,0}&x_{3,0}\\ y_{1,0}&y_{2,0}&y_{3,0}\\ 1&1&1 \end{bmatrix} ^{-1} \begin{bmatrix} x_{i,0}\\ y_{i,0}\\ 1 \end{bmatrix} . \end{equation} Note that sum of the entries of vector $\mathbf{\Omega}_2\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{i,0}\right)$ {\color{black}is $1$} for arbitrary {\color{black}vectors} $\mathbf{p}_{1,0}$, $\mathbf{p}_{2,0}$, $\mathbf{p}_{3,0}$, and $\mathbf{p}_{i,0}$, distributed in the $x-y$ plane, if $\mathbf{p}_{1,0}$, $\mathbf{p}_{2,0}$, $\mathbf{p}_{3,0}$ form a triangle. \begin{remark} By using Eq. 
\eqref{form2}, followers' global desired positions can be expressed based on leaders' global desired positions using relation \eqref{followerleaderdesiredrelation}, where \begin{equation} \mathbf{H}=\mathbf{I}_3\otimes \begin{bmatrix} \mathbf{\Omega}_2^T\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{4,0}\right)\\ \vdots\\ \mathbf{\Omega}_2^T\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{N,0}\right)\\ \end{bmatrix} \in \mathbb{R}^{3\left(N-3\right)\times 9} \end{equation} is constant and determined based on reference positions of {\color{black}the} MQS. \end{remark} \begin{remark} Eq. \eqref{globaldesiredcoordination} is used for eigen-decomposition, safety analysis, and planning of the desired continuum deformation coordination. On the other hand, Eq. \eqref{form2} is used in Section \ref{Communication-Based Guidance Protocol} to define the MQS continuum as a decentralized leader-follower problem and ensure the boundedness of the trajectory tracking controllers {\color{black}that are} independently planned by individual {\color{black}quadcopeters}. \end{remark} \begin{theorem}\label{thm1} Assume that three leader quadcopters $1$, $2$, and $3$ remain non-aligned at any time $t\in [t_s,t_u]$. Then, the desired configuration of {\color{black}the} leaders at time $t\in \left[t_s,t_u\right]$, defined by $\mathbf{y}_{L,HT}(t)$, is related to the leaders' initial configuration, defined by $\bar{\mathbf{y}}_{L,HT,0}$, and the rigid body displacement vector $\mathbf{s}(t)$ by \begin{equation} \mathbf{y}_{L,HT}(t)=\mathbf{D}\left(\mathbf{I}_3\otimes \mathbf{Q}(t)\right)\mathbf{D}\bar{\mathbf{y}}_{L,HT,0} +\mathbf{D}\left(\mathbf{1}_{3\times 1}\otimes \mathbf{s}(t)\right), \end{equation} where $\otimes$ is the Kronecker product symbol and $\mathbf{D}\in \mathbb{R}^{9\times 9}$ is an involutory matrix defined as follows: \begin{equation} D_{ij}=\begin{cases} 1&i=1,2,3,~j=3(i-1)+1\\ 1&i=4,5,6,~j=3(i-1)+2\\ 1&i=7,6,9,~j=3i\\ \end{cases} . 
\end{equation} Also, elements of matris $\mathbf{Q}_{xy}{\color{black}\left(t\right)}$ and rigid-body displacement vector $\mathbf{s}(t)$ can be related to $\mathbf{y}_{L,HT}(t)$ by \begin{subequations} \begin{equation}\label{qelement} Q_{11}(t)=\mathbf{E}_1\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation}\label{qelement} Q_{12}(t)=\mathbf{E}_2\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation}\label{qelement} Q_{21}(t)=\mathbf{E}_3\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation}\label{qelement} Q_{22}(t)=\mathbf{E}_4\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation}\label{dt} \mathbf{s}(t)=\begin{bmatrix} \mathbf{E}_5\mathbf{\Gamma}\mathbf{O}\\ \mathbf{E}_6 \end{bmatrix} \mathbf{y}_{L,HT}(t), \end{equation} \end{subequations} {\color{black}at any time $t\in \left[t_s,t_u\right]$,} where $\mathbf{E}_1=\begin{bmatrix} 1&\mathbf{0}_{1\times 5} \end{bmatrix}$, $\mathbf{E}_2=\begin{bmatrix} 0&1&\mathbf{0}_{1\times 4} \end{bmatrix}$, $\mathbf{E}_3=\begin{bmatrix} \mathbf{0}_{1\times 2}&1&\mathbf{0}_{1\times 3} \end{bmatrix}$, $\mathbf{E}_4=\begin{bmatrix} \mathbf{0}_{1\times 3}&1&\mathbf{0}_{1\times 2} \end{bmatrix}$, $\mathbf{E}_5=\begin{bmatrix} \mathbf{0}_{2\times 4}&\mathbf{I}_2 \end{bmatrix}$, $\mathbf{E}_6={1\over 3}\begin{bmatrix} \mathbf{0}_{1\times 6}&\mathbf{1}_{1\times 3} \end{bmatrix}\in \mathbb{R}^{3\times 9}$, and \[ \mathbf{\Gamma}=\begin{bmatrix} x_{1,0}&y_{1,0}&0&0&1&0\\ x_{2,0}&y_{2,0}&0&0&1&0\\ x_{3,0}&y_{3,0}&0&0&1&0\\ 0&0&x_{1,0}&y_{1,0}&0&1\\ 0&0&x_{2,0}&y_{2,0}&0&1\\ 0&0&x_{3,0}&y_{3,0}&0&1\\ \end{bmatrix} ^{-1} . \] \end{theorem} \begin{proof} Vectors $\mathbf{y}_{L,HT}{\color{black}\left(t\right)}$ and $\bar{\mathbf{y}}_{L,HT,0}$ can be expressed by $\mathbf{y}_{L,HT}(t)=\mathbf{D}\begin{bmatrix} \mathbf{p}_1^T(t)&\mathbf{p}_2^T(t)&\mathbf{p}_3^T(t) \end{bmatrix}^T$ and $\bar{\mathbf{y}}_{L,HT,0}=\mathbf{D}\begin{bmatrix} \mathbf{p}_{1,0}&\mathbf{p}_{2,0}&\mathbf{p}_{3,0} \end{bmatrix}^T$, respectively. By provoking Eq. \eqref{globaldesiredcoordination}, we can write \begin{equation}\label{prooofffff2} \begin{bmatrix} \mathbf{p}_1(t)\\ \mathbf{p}_2(t)\\ \mathbf{p}_3(t) \end{bmatrix} =\left(\mathbf{I}_3\otimes \mathbf{Q}(t)\right) \begin{bmatrix} \mathbf{p}_{1,0}\\ \mathbf{p}_{2,0}\\ \mathbf{p}_{3,0} \end{bmatrix} +\mathbf{1}_{3\times 1}\otimes \mathbf{s}(t){\color{black},} \end{equation} {\color{black}and} Eq. \eqref{prooofffff2} can be rewritten as follows: \begin{equation}\label{proofeq1} \mathbf{D}\mathbf{y}_{L,HT}(t)=\mathbf{I}_3\otimes \mathbf{D}\bar{\mathbf{y}}_{L,HT,0}+\mathbf{1}_{3\times 1}\otimes \mathbf{d}(t). \end{equation} Because $\mathbf{D}$ is involutory, $\mathbf{D}=\mathbf{D}^{-1}$ and Eq. \eqref{globaldesiredcoordination} can be obtained by pre-multiplying $\mathbf{D}$ on both sides of Eq. \eqref{prooofffff2}. By replacing $\mathbf{p}_i(t)$ and $\mathbf{p}_{i,0}$ by {\color{black}$\begin{bmatrix} x_{i,HT}(t)&y_{i,HT}(t)&z_{i,HT}(t) \end{bmatrix}^T$ and $\begin{bmatrix} x_{i,0}&y_{i,0}(t)&0 \end{bmatrix}^T$ into Eq. 
\eqref{globaldesiredcoordination} for every leader $i\in \mathcal{V}_L$, elements of $\mathbf{Q}_{xy}(t)$, denoted by $Q_{11}(t)$, $Q_{12}(t)$, $Q_{21}(t)$, and $Q_{22}(t)$, and $x$ and element of $\mathbf{s}(t)$, denoted by $s_x(t)$, and $s_y(t)$,} can be related to the $x$ and $y$ components of the leaders' desired positions \[ \begin{bmatrix} Q_{11}&Q_{12}&Q_{21}&Q_{22}&s_x&s_y \end{bmatrix} ^T =\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}, \] {\color{black}at any time $t\in \left[t_s,t_u\right]$,} where \[ \mathbf{O}\mathbf{y}_{L,HT}=\begin{bmatrix} x_{1,HT}&x_{2,HT}&x_{3,HT}&y_{1,HT}&y_{2,HT}&y_{3,HT} \end{bmatrix}^T. \] Note that matrix $\mathbf{\Gamma}$ is non-singular{\color{black},} if leaders are non-aligned at the initial time $t_s$ \cite{rastgoftar2020scalable}. \end{proof} Theorem \ref{thm1} is used in Section \ref{Planning1} to obtain the final location of the center of the containment ball, denoted by $\bar{\mathbf{d}}_u$, where $\bar{\mathbf{d}}_u$ is one of the inputs of the A* solver (See Algorithm \ref{euclid}). In particular, $\bar{\mathbf{d}}_u=\mathbf{s}\left(t_u\right)$ is obtained by Eq. \eqref{dt}{\color{black}, if} $\mathbf{y}_{L,HT}(t)$ is substituted by $\bar{\mathbf{y}}_{L,HT,n_\tau}=\mathbf{y}_{L,HT}(t_u)$ on the right-hand side of Eq. \eqref{dt}. In addition, Section \ref{Planning2} uses Theorem \ref{thm1} to assign the intermediate formations of the leader team. \subsection{A* Search Planning} \label{Planning1} The A* search method is used to safely plan the coordination of the containment disk $\mathcal{S}$ by optimizing the intermediate {\color{black}locations of the center of the containment ball, denoted by} $\bar{\mathbf{d}}_1$ through $\bar{\mathbf{d}}_{n_\tau-1}$, {\color{black}for} given $\bar{\mathbf{d}}_s$ and $\bar{\mathbf{d}}_{n_\tau}$, {\color{black}where geometry of obstacles is known in the coordination space}. We first develop an algorithm for collision avoidance of the MQS with obstacles in Section \ref{Obstacle Collision Avoidance}. This algorithm is used by the A* optimizer to determine $\bar{\mathbf{d}}_1$ through $\bar{\mathbf{d}}_{n_\tau-1}$, as described in Section \ref{A* Optimizer Functionality}. \begin{definition} Let $i-j-k-l$ be an arbitrary tetrahedron whose vertices are positioned as $\mathbf{p}_i=\begin{bmatrix} x_i&y_i&z_i \end{bmatrix}^T$, $\mathbf{p}_j=\begin{bmatrix} x_j&y_j&z_j \end{bmatrix}^T$, $\mathbf{p}_k=\begin{bmatrix} x_k&y_k&z_k \end{bmatrix}^T$, and $\mathbf{p}_l=\begin{bmatrix} x_l&y_l&z_l \end{bmatrix}^T$ is a $3$-D coordination space. Also, $\mathbf{p}_f=\begin{bmatrix} x_f&y_f&z_f \end{bmatrix}^T$ is the position of an arbitrary point $f$ in the coordination space. Then, \begin{equation} \mathbf{\Omega}_3\left(\mathbf{p}_i,\mathbf{p}_j,\mathbf{p}_k,\mathbf{p}_l,\mathbf{p}_f\right)=\begin{bmatrix} \mathbf{p}_i&\mathbf{p}_j&\mathbf{p}_k&\mathbf{p}_l\\ 1&1&1&1 \end{bmatrix} ^{-1} \begin{bmatrix} \mathbf{p}_f\\ 1 \end{bmatrix} \end{equation} is a finite vector with the entries summing up to $1$ \cite{rastgoftar2020scalable}. 
\end{definition} The vector function $\mathbf{\Omega}_3$ is used in Section \ref{Obstacle Collision Avoidance} to {\color{black}specify collision avoidance condition.} \subsubsection{Obstacle Collision Avoidance} \label{Obstacle Collision Avoidance} We enclose obstacles by a finite number of polytopes identified by set ${\color{black}\mathcal{H}}=\left\{1,\cdots,M\right\}$, where $ \mathcal{P}=\bigcup_{j\in {\color{black}\mathcal{H}}}\mathcal{P}_j $ defines vertices of polytopes containing obstacles in the motion space, and $\mathcal{P}_j$ is a finite set defining identification numbers of vertices {\color{black}of} polytope $j\in \mathcal{O}$ containing the $j-th$ obstacle in the motion space. Polytope $\mathcal{P}_j$ is made of $m_j$ distinct tetrahedral cells, where $\mathcal{T}_{j,l}$ defines the identification numbers of the nodes of the $l$-th tetrahedral cell ($l=1,\cdots,m_j$). Therefore, $\mathcal{P}$ can be expressed as follows: \begin{equation} \mathcal{P}=\bigwedge_{j\in \mathcal{P}}\bigwedge_{l=1}^{m_j}\mathcal{T}_{j,l}. \end{equation} \begin{definition} We say ${\mathbf{d}}$ is a valid position for the center of the containment ball $\mathcal{S}$ with radius $r_{\mathrm{max}}$, if the following two conditions are satisfied: \begin{subequations} \begin{equation}\label{firstcontainmentcond} \bigwedge_{j\in \mathcal{P}}\bigwedge_{l=1}^{m_j}\bigwedge_{p\in \mathcal{T}_{j,l}}\left(\left(x_p,y_p,z_p\right)\notin \mathcal{S}\left({\mathbf{d}},r_{\mathrm{max}}\right)\right), \end{equation} \begin{equation}\label{secondcontainmentcond} \forall \mathbf{r}\in \partial \mathcal{S},\qquad \bigwedge_{j\in \mathcal{P}}\bigwedge_{l=1}^{m_j}\bigwedge_{\mathcal{T}_{j,l}=\left\{v_1,\cdots,v_4\right\}}\left(\mathbf{\Omega}_3\left(\mathbf{p}_{v_1},\mathbf{p}_{v_2},\mathbf{p}_{v_3},\mathbf{p}_{v_4},\mathbf{r}\right)\not\ge\mathbf{0}\right), \end{equation} \end{subequations} where $\partial \mathcal{S}\left({\mathbf{d}},r_{\mathrm{max}}\right)$ is the boundary of the containment ball. In Eq. \eqref{firstcontainmentcond}, $p\in \mathcal{T}_{j,l}$ is the index number of {\color{black}one of the nodes of} tetrahedron $\mathcal{T}_{j,l}$ {\color{black}that is} positioned at $\left(x_p,y_p,z_p\right)$ for $j\in \mathcal{P}$ and $l=1,\cdots,m_j$. In Eq. \eqref{secondcontainmentcond}, $\mathbf{p}_{v_1}$, $\mathbf{p}_{v_2}$, $\mathbf{p}_{v_3}$, and $\mathbf{p}_{v_4}$ denote positions of vertices $v_1$, $v_2$, $v_3$, and $v_4$ of tetrahedron $\mathcal{T}_{j,l}$ for $j\in \mathcal{P}$ and $l=1,\cdots,m_j$. \end{definition} The constraint equation \eqref{firstcontainmentcond} ensures that vertices of the containment polytopes are all outside the ball $\mathcal{S}$. Also, condition \eqref{secondcontainmentcond} requires that the center of the containment ball is outside of all polytopes defined by $\mathcal{P}$. \begin{remark} The safety condition \eqref{firstcontainmentcond} is necessary but not sufficient for ensuring of the MQS collision avoidance with obstacles. Fig. \ref{obstacleavoidance} illustrates a situation in which collision is not avoided because the safety condition \eqref{secondcontainmentcond} is violated while \eqref{firstcontainmentcond} is satisfied. More specifically, Fig. \ref{obstacleavoidance} shows that vertices of a tetrahedron enclosing an obstacle are outside of containment ball $\mathcal{S}$, where $\mathcal{S}$ contains the MQS. However, the containment ball enclosing the MQS is contained by the tetrahedron representing obstacle in the motion space. 
\end{remark} \begin{figure} \caption{Violation of collision avoidance requirements: MQS leaders are contained by the containment ball while the tetrahedron, representing an obstacle, encloses the containment ball in the motion space. } \label{obstacleavoidance} \end{figure} \subsubsection{A* Optimizer Functionality} \label{A* Optimizer Functionality} To plan the desired coordination of the MQS, we represent the coordination space by a finite number of nodes obtained by uniform discretization of the motion space. Let $\mathcal{D}_x=\left\{\Delta x,2\Delta x,\cdots,n_x\Delta x\right\}$, $\mathcal{D}_y=\left\{\Delta y,2\Delta y,\cdots,n_y\Delta x\right\}$, and $\mathcal{D}_z=\left\{\Delta z,2\Delta z,\cdots,n_z\Delta z\right\}$ define all possible discrete values for the $x$, $y$, and $z$ components of the nodes distributed in the motion space. Then, \begin{equation} \resizebox{0.99\hsize}{!}{ $ \mathcal{D}=\left\{\tilde{\mathbf{d}}=\left(\tilde{d}_x\Delta x,\tilde{d}_y \Delta y,\tilde{d}_z \Delta z\right):\tilde{d}_x\Delta x\in \mathcal{D}_x,\tilde{d}_y\Delta y\in \mathcal{D}_y,\tilde{d}_z\Delta z\in \mathcal{D}_z\right\} $ } \end{equation} defines positions of the nodes in the motion space. \begin{assumption} The containment polytopes {\color{black}enclosing obstacles} are defined such that $\mathcal{P}\subset \mathcal{D}$. \end{assumption} \begin{definition} We define \begin{equation} \begin{split} \mathcal{F}=&\bigg\{\tilde{\mathbf{d}}\in \mathcal{D}: \left(\bigwedge_{j\in \mathcal{P}}\bigwedge_{l=1}^{m_j}\bigwedge_{p\in \mathcal{T}_{j,l}}\left(\left(x_p,y_p,z_p\right)\notin \mathcal{S}\left(\tilde{\mathbf{d}},r_{\mathrm{max}}\right)\right)\right)\wedge\\ &\left(\bigwedge_{j\in \mathcal{P}}\bigwedge_{l=1}^{m_j}\bigwedge_{\mathcal{T}_{j,l}=\left\{v_1,\cdots,v_4\right\}}\left(\mathbf{\Omega}_3\left(\mathbf{p}_{v_1},\mathbf{p}_{v_2},\mathbf{p}_{v_3},\mathbf{p}_{v_4},\mathbf{r}\right)\not\ge\mathbf{0}\right)\right),~\\ &\mathrm{for~}\mathbf{r}\in\partial \mathcal{S}\left(\tilde{\mathbf{d}},r_{\mathrm{max}}\right)\bigg\}\subset \mathcal{D} \end{split} \end{equation} as the set of valid positions for the center of ball $\mathcal{S}$. \end{definition} \begin{assumption} Initial and final positions of the containment ball are defined such that $\bar{\mathbf{d}}_s\in \mathcal{F}$ and $\bar{\mathbf{d}}_{n_\tau}\in \mathcal{F}$. \end{assumption} \begin{definition} Set \begin{equation} \resizebox{0.99\hsize}{!}{ $ \mathcal{A}\left(\tilde{\mathbf{d}}\right)=\left\{\left(\tilde{\mathbf{d}}+\left(h_x\Delta_x,h_y\Delta_y,h_z\Delta_z\right)\right)\in \mathcal{F}:h_x,h_y,h_z\in \{-1,0,1\}\right\} $ } \end{equation} defines all possible valid neighboring points of point $\tilde{\mathbf{d}}\in \mathcal{F}$. \end{definition} \begin{definition} For every $\tilde{\mathbf{d}}\in \mathcal{F}$, the straight line distance \begin{equation} C_H\left(\tilde{\mathbf{d}},\bar{\mathbf{d}}_u\right)=\|\tilde{\mathbf{d}}-\bar{\mathbf{d}}_{u}\| \end{equation} is considered as the heuristic {\color{black}cost} of position vector $\tilde{\mathbf{d}}\in \mathcal{F}$. \end{definition} \begin{definition} For every $\tilde{\mathbf{d}}\in \mathcal{F}$ and $\tilde{\mathbf{d}}'\in \mathcal{A}\left(\tilde{\mathbf{d}}\right)$, \begin{equation} C_O\left(\tilde{\mathbf{d}},\tilde{\mathbf{d}}'\right)=\|\tilde{\mathbf{d}}-\tilde{\mathbf{d}}'\| \end{equation} is the operation cost for the movement from $\tilde{\mathbf{d}}\in \mathcal{F}$ towards $\tilde{\mathbf{d}}'\in \mathcal{A}\left(\tilde{\mathbf{d}}\right)$. 
\end{definition} \begin{algorithm}\label{astaralgorithmmm} \caption{A* Planning of the MQS Coordination}\label{euclid1} \begin{algorithmic}[1] \State \textit{Get: $\bar{\mathbf{d}}_s$ and $\bar{\mathbf{d}}_{u}$} \State \textit{Define:} Open set $\mathcal{O}=\left\{\bar{\mathbf{d}}_s\right\}$, Closed set $\mathcal{C}=\emptyset$, and $\tilde{\mathbf{d}}_{\mathrm{best}}=\bar{\mathbf{d}}_s$ \While{$\tilde{\mathbf{d}}_{\mathrm{best}}=\bar{\mathbf{d}}_u$ or $\mathcal{O}\neq \emptyset$} \State $\tilde{\mathbf{d}}_{\mathrm{best}}\leftarrow \argmin\limits_{\tilde{\mathbf{d}}\in \mathcal{F}}\left(g\left(\tilde{\mathbf{d}}\right)+C_H\left(\tilde{\mathbf{d}},\bar{\mathbf{d}}_u\right)\right)$ \State Update $\mathcal{O}$: $\mathcal{O}\leftarrow \mathcal{O}\setminus \left\{\tilde{\mathbf{d}}_{\mathrm{best}}\right\}$ \State Update $\mathcal{C}$: $\mathcal{C}\leftarrow \mathcal{C}\bigcup \left\{\tilde{\mathbf{d}}_{\mathrm{best}}\right\}$ \State Assign $\mathcal{A}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)$ \State $\mathcal{R}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)\leftarrow \mathcal{A}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)\setminus \left(\mathcal{A}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)\bigcap \mathcal{C}\right)$ \For{\texttt{< every $\tilde{\mathbf{d}}\in \mathcal{R}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)$>}} \State $\tilde{\mathbf{b}}\left(\tilde{\mathbf{d}}\right)\leftarrow \tilde{\mathbf{d}}$ \If{$\tilde{\mathbf{d}}\in \mathcal{O}$} \If{$g\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)+C_O\left(\tilde{\mathbf{d}}_{\mathrm{best}},\tilde{\mathbf{d}}\right)<g\left(\tilde{\mathbf{d}}\right)$} \State $g\left(\tilde{\mathbf{d}}\right)\leftarrow g\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)+C_O\left(\tilde{\mathbf{d}}_{\mathrm{best}},\tilde{\mathbf{d}}\right)$ \State $\tilde{\mathbf{b}}\left(\tilde{\mathbf{d}}\right)\leftarrow \tilde{\mathbf{d}}_{\mathrm{best}}$ \EndIf \EndIf \EndFor \State $\mathcal{O}\leftarrow \mathcal{R}\left(\tilde{\mathbf{d}}_{\mathrm{best}}\right)\bigcup \mathcal{O}$ \EndWhile \end{algorithmic} \end{algorithm} Given initial and final locations of the center of the containment ball $\mathcal{S}$, denoted by $\bar{\mathbf{d}}_s$ and $\bar{\mathbf{d}}_u$, the A* search algorithm is applied to determine optimal intermediate positions $\bar{\mathbf{b}}_s$, $\cdots$, $\bar{\mathbf{b}}_{m_\tau}$ along the optimal path of the containment ball $\mathcal{S}$ from $\bar{\mathbf{d}}_s$ to $\bar{\mathbf{d}}_u$ in an obstacle-laden environment (See Algorithm \ref{euclid1}). More specifically, the A* optimizer generates $\bar{\mathbf{b}}_s$, $\cdots$, $\bar{\mathbf{b}}_{m_\tau}$ by searching over set $\mathcal{F}$, where \begin{subequations} \begin{equation} \bar{\mathbf{b}}_s=\bar{\mathbf{d}}_s, \end{equation} \begin{equation} \bar{\mathbf{b}}_{m_\tau}=\bar{\mathbf{d}}_u, \end{equation} \begin{equation} \left(\bar{\mathbf{b}}_{k},\bar{\mathbf{b}}_{k+1}\right)\in \mathcal{A}\left(\bar{\mathbf{b}}_{k}\right). \end{equation} \end{subequations} The center of the containment ball $\mathcal{S}$ moves along the straight paths obtained by connecting $\bar{\mathbf{b}}_s$, $\cdots$, $\bar{\mathbf{b}}_{m_\tau}$. Therefore, $n_\tau $ serially-connected line segments defines the optimal path of the containment ball, where $n_\tau\leq m_\tau$, $\bar{\mathbf{d}}_s=\bar{\mathbf{b}}_s$, $\bar{\mathbf{d}}_{n_\tau}=\bar{\mathbf{b}}_{m_\tau}=\bar{\mathbf{d}}_u$, and the end point of the $k$-th line segment connects $\bar{\mathbf{d}}_{k-1}$ to $\bar{\mathbf{d}}_{k}$. 
Given $\bar{\mathbf{b}}_s$, $\cdots$, $\bar{\mathbf{b}}_{m_\tau}$, algorithm \ref{euclid} is used to determine $\bar{\mathbf{d}}_1$, $\cdots$, $\bar{\mathbf{d}}_{n_\tau-1}$. \begin{algorithm}\label{astaralgorithmmm} \caption{Assignment of Optimal Way-points $\bar{\mathbf{d}}_1$, $\cdots$, $\bar{\mathbf{d}}_{n_\tau-1}$}\label{euclid} \begin{algorithmic}[1] \State \textit{Get: $\bar{\mathbf{b}}_s=\bar{\mathbf{d}}_s$, $\cdots$, $\bar{\mathbf{b}}_{m_\tau}=\bar{\mathbf{d}}_u$} \State \textit{Set: $i=0$} \For{\texttt{< $k\leftarrow1$ to $m_\tau-1$ >}} \If{$\bar{\mathbf{b}}_k-\bar{\mathbf{b}}_{k-1}\neq\bar{\mathbf{b}}_{k+1}-\bar{\mathbf{b}}_{k}$} \State $i\leftarrow i+1$ \State $\bar{\mathbf{d}}_i=\bar{\mathbf{b}}_k$ \EndIf \EndFor \end{algorithmic} \end{algorithm} \subsection{Intermediate Configuration of the Leading Triangle} \label{Planning2} Matrix $\mathbf{Q}_{xy}(t)$ can be expressed by \begin{equation}\label{Qxy} \mathbf{Q}_{xy}(t)=\mathbf{R}_{xy}(t)\mathbf{U}_{xy}(t), \end{equation} where rotation matrix $\mathbf{R}_{xy}(t)$ and pure deformation matrix $\mathbf{U}_{xy}(t)$ are defined as follows: \begin{subequations} \begin{equation}\label{Rxy} \mathbf{R}_{xy}(t)=\begin{bmatrix}\cos \theta_r&-\sin \theta_r\\ \sin\theta_r&\cos \theta_r\end{bmatrix}, \end{equation} \begin{equation}\label{Uxy} \mathbf{U}_{xy}(t)=\mathbf{R}_D(t)\mathbf{\Lambda}(t)\mathbf{R}_D^T(t), \end{equation} \end{subequations} {\color{black}where} \begin{subequations} \begin{equation}\label{LAMBDAAAAAAA} \mathbf{\Lambda}(t)=\begin{bmatrix} \sigma_1(t)&0\\ 0&\sigma_2(t) \end{bmatrix} , \end{equation} \begin{equation}\label{RDDDDDDDDD} \mathbf{R}_D(t)=\begin{bmatrix}\cos \theta_d&-\sin \theta_d\\ \sin\theta_d&\cos \theta_d\end{bmatrix}. \end{equation} \end{subequations} Note that $\theta_r(t)>0$ and $\theta_d(t)>0$ are the rotation and shear deformation angles; and $\sigma_1(t)$ and $\sigma_2(t)$ are the first and second deformation eigenvalues. Because $\mathbf{\Lambda}(t)$ is positive definite and diagonal, matrix $\mathbf{U}_{xy}(t)$ is positive definite at any time $t\in \left[t_s,t_u\right]$ \cite{rastgoftar2020scalable}. \begin{proposition}\label{prop1} Matrix $\mathbf{U}_{xy}^m$ can be expressed as \begin{equation}\label{Uxym} \mathbf{U}_{xy}^m(t)=\begin{bmatrix}a_m(t)&b_m(t)\\b_m(t)&a_m(t)\end{bmatrix}, \end{equation} with \begin{subequations} \begin{equation}\label{amm} a_m(t)=\sigma_1^m(t)\cos^2\theta_d(t)+\sigma_2^m(t)\sin^2\theta_d(t), \end{equation} \begin{equation}\label{bmm} b_m(t)=\left(\sigma_1^m(t)-\sigma_2^m(t)\right)\sin\theta_d(t)\cos\theta_d(t), \end{equation} \begin{equation}\label{cmm} c_m(t)=\sigma_1^m(t)\sin^2\theta_d(t)+\sigma_2^m(t)\cos^2\theta_d(t). \end{equation} \end{subequations} Also, $\sigma_1$, $\sigma_2$, and $\theta_d$ can be related to $a_m$, $b_m$, and $c_m$ by \begin{subequations} \begin{equation}\label{lambda1} \sigma_1(t)=\sqrt[m]{\dfrac{a_m(t)+c_m(t)}{2}+\sqrt{\big[{1\over 2}\left(a_m(t)-c_m(t)\right)\big]^2+b_m^2(t)}}, \end{equation} \begin{equation}\label{lambda2} \sigma_2(t)=\sqrt[m]{\dfrac{a_m(t)+c_m(t)}{2}-\sqrt{\big[{1\over 2}\left(a_m(t)-c_m(t)\right)\big]^2+b_m^2(t)}}, \end{equation} \begin{equation}\label{thetad} \theta_d(t)=\dfrac{1}{2}\tan^{-1}\left(\dfrac{2b_m(t)}{a_m(t)-c_m(t)}\right). \end{equation} \end{subequations} \end{proposition} \begin{proof} Because $\mathbf{R}_{D}(t)$ is orthogonal at time $t$, $\mathbf{R}_{D}^T(t)\mathbf{R}_{D}(t)=\mathbf{I}_2$. 
If matrix $\mathbf{U}_{xy}^m$ {\color{black}is} expressed as \begin{equation}\label{UDANYM} \mathbf{U}_{xy}^m(t)=\mathbf{R}_{D}(t)\mathbf{\Lambda}^m\mathbf{R}_{D}(t), \end{equation} for $m=1,2,\cdots$, then, \begin{equation}\label{UDM} \begin{split} \mathbf{U}_{xy}^{m+1}(t)=&\mathbf{R}_{D}(t)\mathbf{\Lambda}\mathbf{R}_{D}^T(t)\mathbf{R}_{D}(t)\mathbf{\Lambda}^m\mathbf{R}_{D}(t)\\ =&\mathbf{R}_{D}(t)\mathbf{\Lambda}^{m+1}\mathbf{R}_{D}(t). \end{split} \end{equation} {\color{black}Since} Eq. \eqref{UDANYM} is valid for $m=0${\color{black}, Eq.} \eqref{UDM} ensures that Eq. \eqref{UDANYM} is valid for any $m>0$. By replacing \eqref{LAMBDAAAAAAA} and \eqref{RDDDDDDDDD} into \eqref{UDANYM}, elements of matrix $\mathbf{U}_{xy}^m$ ($a_m$, $b_m$, $c_m$) are obtained by Eqs. \eqref{amm}, \eqref{bmm}, and \eqref{cmm}. \end{proof} By provoking Proposition \ref{prop1}, matrix $\mathbf{U}_{xy}^2=\mathbf{Q}_{xy}^T\mathbf{Q}_{xy}$ \cite{rastgoftar2020scalable} can be expressed in the form of Eq. \eqref{Uxym} where $m=2$ and \begin{subequations} \begin{equation} a_2(t)=\mathbf{y}_{L,HT}^T(t)\mathbf{O}^T\mathbf{\Gamma}^T\left(\mathbf{E}_1^T\mathbf{E}_1+\mathbf{E}_3^T\mathbf{E}_3\right)\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation} b_2(t)=\mathbf{y}_{L,HT}^T(t)\mathbf{O}^T\mathbf{\Gamma}^T\left(\mathbf{E}_1^T\mathbf{E}_2+\mathbf{E}_3^T\mathbf{E}_4\right)\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t), \end{equation} \begin{equation} c_2(t)=\mathbf{y}_{L,HT}^T(t)\mathbf{O}^T\mathbf{\Gamma}^T\left(\mathbf{E}_2^T\mathbf{E}_2+\mathbf{E}_4^T\mathbf{E}_4\right)\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t). \end{equation} \end{subequations} Therefore, we can determine $\sigma_1(t)$, $\sigma_2(t)$, and {\color{black}$\theta_d(t)$} by replacing $m=2$, $a_m(t)=a_2(t)$, $b_m(t)=b_2(t)$, and $c_m(t)=c_2(t)$ into Eqs. \eqref{lambda1}, \eqref{lambda2}, and \eqref{thetad} at time $t\in \left[t_s,t_u\right]$. Furthermore, matrix $\mathbf{R}_{xy}(t)=\mathbf{Q}\mathbf{U}_{xy}^{-1}$ is related to $\mathbf{y}_{L,HT}(t)$ by \begin{equation}\label{RXYPOS} \mathbf{R}_{xy}(t)=\begin{bmatrix} \mathbf{E}_1\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t)&\mathbf{E}_2\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t)\\ \mathbf{E}_3\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t)&\mathbf{E}_4\mathbf{\Gamma}\mathbf{O}\mathbf{y}_{L,HT}(t)\\ \end{bmatrix} \begin{bmatrix} a_2(t)&b_2(t)\\ b_2(t)&c_2(t) \end{bmatrix} ^{-1\over 2}. \end{equation} {\color{black}Therefore,} rotation angle $\theta_r(t)$ is obtained at any time $t\in \left[t_s,t_u\right]$ {\color{black}by knowing rotation matrix $\mathbf{R}_{xy}(t)$ over time interval $\left[t_s,t_u\right]$}. \begin{proposition}\label{prop2} If the area of the leading triangle remains constant at any time $t\in [t_s,t_u]$, then the following conditions hold: \begin{subequations} \begin{equation}\label{ac1} \sigma_2(t)={1\over \sigma_1(t)}, \end{equation} \begin{equation}\label{ac2} a_2(t)c_2(t)-b_2{\color{black}^2}=1. \end{equation} \end{subequations} \end{proposition} \begin{proof} Per Assumption \ref{assmq1}, $\mathbf{U}_{xy}(t_s)=\mathbf{I}_2$. If the area of the leading triangle remains constant, then $\sigma_{1}(t)\sigma_2(t)=\sigma_{1}(t_s)\sigma_2(t_s)=1$ and $\left|\mathbf{U}_{xy}(t)\right|{\color{black}= a_2(t)c_2(t)-b_2^2}=1$ at any time $t$. Therefore, conditions \eqref{ac1} and \eqref{ac2} hold at any time $t\in \left[t_s,t_u\right]$. 
\end{proof} \begin{theorem} Assume every quadcopter $i\in \mathcal{V}$ can be enclosed by a ball of radius $\epsilon$, and it can execute a proper control input $\mathbf{u}_i$ such that \begin{equation}\label{BoundedDeviation} \bigwedge_{i\in \mathcal{V}}\|\mathbf{r}_{i}(t)-\mathbf{p}_i(t)\|\leq \delta,\qquad \forall t\in \left[t_s,t_u\right]. \end{equation} Let \begin{equation} d_{\mathrm{min}}=\min\limits_{i,j\in \mathcal{V},~j\neq i}\left\|\mathbf{p}_{i,0}-\mathbf{p}_{j,o}\right\|, \end{equation} be the minimum separation distance between two quadcopters. Then, collision between every two quadcopers and collision of the MQS with obstacles are both avoided, if the largest eigenvalue of matrix $\mathbf{U}_{xy}$ satisfies inequality constraint \begin{equation}\label{firstsigma1} \sigma_1(t)\leq \dfrac{d_\mathrm{min}}{2\left(\delta+\epsilon\right)}, \end{equation} and every quadcopter remains inside the containment ball {\color{black}$\mathcal{S}\left(\mathbf{d}\left(t\right),r_{\mathrm{max}}\right)$} at any time $t\in \left[t_s,t_u\right]$. \end{theorem} \begin{proof} Per Eqs. \eqref{lambda1} and \eqref{lambda2}, $\sigma_2(t)\leq \sigma_1(t)$ at any time $t\in \left[t_s,t_u\right]$. Collision between every two quadcopters is avoided, if \cite{rastgoftar2020scalable} \begin{equation}\label{interagentcollision} \sigma_2(t)\geq {2\left(\delta+\epsilon\right)\over d_{\mathrm{min}}},\qquad \forall t\in \left[t_s,t_u\right]. \end{equation} Per Proposition \ref{prop2}, $\sigma_2(t)={1\over \sigma_1(t)}$. Thus, Eq. \eqref{interagentcollision} can be rewritten as follows: \begin{equation}\label{interagentcollisionmain} \sigma_1(t)\leq {2\left(\delta+\epsilon\right)\over d_{\mathrm{min}}},\qquad \forall t\in \left[t_s,t_u\right]. \end{equation} By applying A* search method, we ensure that the containment ball does not hit obstacles in the motion space. Therefore, obstacle collision avoidance is guaranteed{\color{black},} if quadcopters are all inside the containment ball $\mathcal{S}\left(\mathbf{d}\left(t\right),r_{\mathrm{max}}\right)$ at any time $t\in \left[t_s,t_u\right]$. \end{proof} \textbf{Intermediate Configurations Leaders:} We offer a procedure {\color{black}with the following five main steps} to determine the intermediate waypoints of the leaders: \textit{Step 1:} Given $\bar{\mathbf{y}}_{L,HT,n_\tau}=\mathbf{y}_{L,HT}\left(t_{u}\right)$, $\sigma_{1,n_\tau}=\sigma_1(t_u)$, $\theta_{d,n_\tau}=\theta_d(t_u)$, and $\theta_{r,n_\tau}=\theta_r(t_u)$ are computed using Eqs. \eqref{lambda1}, \eqref{thetad}, and \eqref{RXYPOS}, respectively. \textit{Step 2:} We compute \begin{subequations} \begin{equation} \sigma_{1,k}=\beta_k\sigma_{1,0}+(1-\beta_k)\sigma_{1,n_\tau}, \end{equation} \begin{equation} \theta_{d,k}=(1-\beta_k)\theta_{d,n_\tau}, \end{equation} \begin{equation} \theta_{r,k}=(1-\beta_k)\theta_{r,n_\tau} \end{equation} \end{subequations} for $k=1,\cdots,n_\tau-1${\color{black}, where $\beta_k$ is computed using Eq. \eqref{betak}.} \textit{Step 3:} We compute $\sigma_{2,k}={1\over \sigma_{1,k}}$ for $k=1,\cdots,n_\tau-1$. \textit{Step 4:} Given $\sigma_{1,k}${\color{black},} $\sigma_{2,k}$, and $\theta_{d,k}${\color{black},} matrix $\mathbf{U}_{xy,k}=\mathbf{U}_{xy}\left(t_k\right)$ is obtained by Eq. \eqref{Uxy} for $k=1,\cdots,n_\tau-1$. Also, matrix $\mathbf{R}_{xy,k}=\mathbf{R}_{xy}\left(t_k\right)$ is obtained using Eq. \eqref{Rxy} by knowing the rotation angle $\theta_{r,k}$ for $k=1,\cdots,n_\tau-1$. 
\textit{Step {\color{black}5}:} By knowing $\mathbf{R}_{xy,k}=\mathbf{R}_{xy}\left(t_k\right)$ and $\mathbf{U}_{xy,k}=\mathbf{U}_{xy}\left(t_k\right)${\color{black},} the Jacobian matrix $\mathbf{Q}_{xy,k}=\mathbf{Q}_{xy}\left(t_k\right)$ is obtained using Eq. \eqref{Qxy}. Then, we can use relation \eqref{globaldesiredcoordination} to obtain $\bar{\mathbf{y}}_{L,HT,k}$ by replacing $\mathbf{Q}_{xy,k}=\mathbf{Q}_{xy}\left(t_k\right)$ and $\bar{\mathbf{d}}_k$ for $k=1,\cdots,n_\tau-1$. \subsection{Optimal Control Planning} \label{Planning3} {\color{black}This section offers an optimal control solution to determine the leaders' desired trajectories connecting every two consecutive waypoints $\bar{\mathbf{y}}_{L,HT,k}$ and $\bar{\mathbf{y}}_{L,HT,k+1}$ for $k=0,1,\cdots,n_\tau-1$, where $z$ components of the leaders is defined by Eq. \eqref{zcomponent}, and $x$ and $y$ components the leaders' desired trajectories are governed by \eqref{maindynamicsss}.} \textbf{Coordination Constraint:} Per equality constraint \eqref{c1}, the area of the leading triangle, given by \begin{equation}\label{aa} A(t)=\mathbf{y}_{L,HT}^T(t)\mathbf{\Psi}\mathbf{y}_{L,HT}(t), \end{equation} must be equal to constant value $A_s$ at any time $t\in \left[t_s,t_u\right]$. This equality constraint is satisfied, if $\mathbf{y}_{L,HT}(t)$ is updated by dynamics \eqref{maindynamicsss}, $c\left(\mathbf{x}_L,\mathbf{u}_L\right)=\ddot{A}\left(t\right)=0$ at any time $t\in \left[t_k,t_{k+1}\right]$ for $k=0,1,\cdots,n_\tau-1$, and the following boundary conditions are satisfied: \begin{subequations} \begin{equation}\label{c11} \resizebox{0.99\hsize}{!}{ $ k=0,1,\cdots,n_\tau,\qquad {\mathbf{y}}_{L,HT}^T\left(t_k\right)\mathbf{D}^T\mathbf{O}^T\mathbf{P}^T\mathbf{P}\mathbf{O}\mathbf{D}\mathbf{y}_{L,HT}\left(t_k\right)-A_s=0, $ } \end{equation} \begin{equation}\label{c22} \resizebox{0.99\hsize}{!}{ $ k=0,1,\cdots,n_\tau,\qquad \dot{\mathbf{y}}_{L,HT}^T\left(t_k\right)\mathbf{D}^T\mathbf{O}^T\mathbf{P}^T\mathbf{P}\mathbf{O}\mathbf{D}\mathbf{y}_{L,HT}\left(t_k\right)=0. $ } \end{equation} \end{subequations} By taking the second time derivative of ${A}\left(t\right)$, $c\left(\mathbf{x}_L,\mathbf{u}_L\right)$ is obtained as follows: \begin{equation}\label{mainequality} c\left(\mathbf{x}_L,\mathbf{u}_L,t\right)= \mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xx}}\mathbf{x}_L+2\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{u}_L=0, \end{equation} where \begin{subequations} \begin{equation}\label{gxx} \mathbf{\Gamma}_{\mathbf{xx}}=2\begin{bmatrix} \mathbf{0}_{6\times 6}&\mathbf{P}\\ \mathbf{P}&\mathbf{0}_{6\times 6}\\ \end{bmatrix} , \end{equation} \begin{equation}\label{gxu} \mathbf{\Gamma}_{\mathbf{xu}}=\begin{bmatrix} \mathbf{P}\\ \mathbf{0}_{6\times 6}\\ \end{bmatrix} . 
\end{equation} \end{subequations} The objective of the optimal control planning is to determine the desired trajectories of the leaders by minimizing the cost function \begin{equation}\label{costttttttttttttttttt} k=0,1,\cdots,n_\tau-1,\qquad \mathrm{J}={1\over 2}\int_{t_k\left(t_u\right)}^{t_{k+1}\left(t_u\right)}\mathbf{u}_L^T(t)\mathbf{u}_L(t)dt \end{equation} subject to boundary conditions \begin{subequations}\label{boundaryconditionsss} \begin{equation} \mathbf{x}_L(t_k)=\bar{\mathbf{x}}_{L,k}, \end{equation} \begin{equation}\label{conditiontkplus1} \mathbf{x}_L(t_{k+1})=\bar{\mathbf{x}}_{L,k+1}, \end{equation} \end{subequations} and equality constraint \eqref{mainequality} at any time $t\in \left[t_k\left(t_u\right),t_{k+1}\left(t_u\right)\right]$ for $k=0,1,\cdots,n_\tau-1$, where $t_k(t_u)$ is obtained by \eqref{tku}. \begin{theorem} Suppose the leaders' desired trajectories are updated by dynamics \eqref{maindynamicsss} such that equality constraint \eqref{mainequality} is satisfied at any time $t\in \left[t_k\left(t_u\right),t_{k+1}\left(t_u\right)\right]$ given the boundary conditions in Eq. \eqref{boundaryconditionsss}. Assume the ultimate time $t_u$ is given, so that $t_k$ and $t_{k+1}$ obtained by Eq. \eqref{tku} are fixed. Then, the optimal desired trajectories of the leaders minimizing the cost function \eqref{costttttttttttttttttt} are governed by dynamics \begin{equation}\label{rawwwwwwwwwwwwwwwwa} \begin{bmatrix} \dot{\mathbf{x}}_L\\ \dot{\lambda} \end{bmatrix} = \mathbf{A}_{\mathbf{x\lambda}}\left(\gamma(t)\right) \begin{bmatrix} {\mathbf{x}}_L\\ {\lambda} \end{bmatrix} , \end{equation} where \begin{subequations} \begin{equation}\label{62aagamma} \mathbf{A}_{\mathbf{x\lambda}}\left(\gamma(t)\right)=\begin{bmatrix} \mathbf{A}_L-2\gamma(t)\mathbf{B}_L\mathbf{\Gamma}_{\mathbf{xu}}^T&-\mathbf{B}_L\mathbf{B}_L^T\\ -2\gamma(t) \mathbf{\Gamma}_{\mathbf{xx}}+4\gamma^2(t) \mathbf{\Gamma}_{\mathbf{xu}}\mathbf{\Gamma}_{\mathbf{xu}}^T&-\mathbf{A}_L^T+2\gamma(t)\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{B}_L^T\\ \end{bmatrix} , \end{equation} \begin{equation}\label{gammmaammamamama} \gamma\left(t\right)=\dfrac{\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xx}}\mathbf{x}_L-2\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{B}_L^T\mathbf{\lambda}}{4\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L}, \end{equation} \end{subequations} and $\lambda \in \mathbb{R}^{12\times 1}$ is the co-state vector.
In addition, the state vector $\mathbf{x}_{L}(t)$ and co-state vector $\lambda(t)$ are obtained by \begin{subequations} \begin{equation}\label{xlt} \begin{split} \mathbf{x}_L(t)=&\left(\mathbf{\Phi}_{11}\left(t,t_k\right)-\mathbf{\Phi}_{12}\left(t,t_{k+1}\right)\mathbf{\Phi}_{11}\left(t_{k+1},t_k\right)\right)\bar{\mathbf{x}}_{L,k}\\ +&\mathbf{\Phi}_{12}\left(t,t_{k+1}\right)\bar{\mathbf{x}}_{L,k+1} \end{split} \end{equation} \begin{equation}\label{lambdalt} \begin{split} \lambda(t)=&\left(\mathbf{\Phi}_{21}\left(t,t_k\right)-\mathbf{\Phi}_{22}\left(t,t_{k+1}\right)\mathbf{\Phi}_{11}\left(t_{k+1},t_k\right)\right)\bar{\mathbf{x}}_{L,k}\\ +&\mathbf{\Phi}_{22}\left(t,t_{k+1}\right)\bar{\mathbf{x}}_{L,k+1} \end{split} \end{equation} \end{subequations} at time $t\in \left[t_k,t_{k+1}\right]$, where \begin{equation}\label{SatateTransitionMatrrrixxxxxxxs} \mathbf{\Phi}=\begin{bmatrix} \mathbf{\Phi}_{11}\left(t,t_k\right)&\mathbf{\Phi}_{12}\left(t,t_k\right)\\ \mathbf{\Phi}_{21}\left(t,t_k\right)&\mathbf{\Phi}_{22}\left(t,t_k\right)\\ \end{bmatrix}=\mathrm{exp}\left(\int_{t_k}^t\mathbf{A}_{\mathbf{x\lambda}}\left(\gamma(s)\right)ds\right) \end{equation} is the state transition matrix with partitions $\mathbf{\Phi}_{11}\left(t,t_k\right)\in \mathbb{R}^{12\times12}$, $\mathbf{\Phi}_{12}\left(t,t_k\right)\in \mathbb{R}^{12\times12}$, $\mathbf{\Phi}_{21}\left(t,t_k\right)\in \mathbb{R}^{12\times12}$, and $\mathbf{\Phi}_{22}\left(t,t_k\right)\in \mathbb{R}^{12\times12}$. \end{theorem} \begin{proof} The optimal leaders' trajectories are determined by minimization of the augmented cost function \begin{equation}\label{JP} \resizebox{0.99\hsize}{!}{ $ \mathrm{J}_a=\int_{t_k}^{t_{k+1}}\left({1\over 2}\mathbf{u}_L^T\mathbf{u}_L+\mathbf{\lambda}^T\left(\mathbf{A}_L{\mathbf{x}}_L+\mathbf{B}_L{\mathbf{u}}_L-\dot{\mathbf{x}}_L\right)+\gamma c\left(\mathbf{x}_L,\mathbf{u}_L\right)\right)dt, $ } \end{equation} where $\mathbf{\lambda}\in \mathbb{R}^{12\times 1}$ is the co-state vector and $\gamma(t)$ is the Lagrange multiplier. By taking the variation of the augmented cost function \eqref{JP}, we can write \begin{equation}\label{JPP} \begin{split} \delta \mathrm{J}_a=\int_{t_k}^{t_{k+1}}&\Bigg[\delta \mathbf{u}_L^T\left(\mathbf{u}_L+\mathbf{B}_L^T\mathbf{\lambda}+\gamma {\partial c\over \partial \mathbf{u}_L}\right)+\delta \mathbf{x}_L^T\left(\dot{\mathbf{\lambda}}+\mathbf{A}_L^T\mathbf{\lambda}+\gamma {\partial c\over \partial \mathbf{x}_L}\right)\\ +&\delta \mathbf{\lambda}^T\left(\mathbf{A}_L{\mathbf{x}}_L+\mathbf{B}_L{\mathbf{u}}_L-\dot{\mathbf{x}}_L\right)\Bigg]dt=0, \end{split} \end{equation} where ${\partial c\over \partial \mathbf{x}_L}=2\mathbf{\Gamma}_{\mathbf{xx}}\mathbf{x}_L+2\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{u}_L$ and ${\partial c\over \partial \mathbf{u}_L}=2\mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L$. By imposing $\delta\mathrm{J}_a=0$, the state dynamics \eqref{maindynamicsss} is obtained, the co-state dynamics become \begin{equation} \dot{\mathbf{\lambda}}=-\mathbf{A}_L^T\mathbf{\lambda}-\gamma(t)\dfrac{\partial c}{\partial \mathbf{x}_L}, \end{equation} and $\mathbf{u}_L$ is obtained as follows: \begin{equation}\label{ul} \mathbf{u}_{L}=-\mathbf{B}_L^T\mathbf{\lambda}-\gamma {\partial c\over \partial \mathbf{u}_L}=-\mathbf{B}_L^T\mathbf{\lambda}-2\gamma(t) \mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L.
\end{equation} By substituting $\mathbf{u}_L=-\mathbf{B}_L^T\mathbf{\lambda}-2\gamma(t) \mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L$, the equality constraint \eqref{mainequality} is converted to \begin{equation} \label{organizedequality} c\left(\mathbf{x}_L,\mathbf{u}_L\right)=\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xx}}\mathbf{x}_L-2\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{B}_L^T\mathbf{\lambda}-\left(4\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L\right)\gamma(t)=0, \end{equation} and solving Eq. \eqref{organizedequality} for $\gamma(t)$ yields Eq. \eqref{gammmaammamamama}. By substituting $\mathbf{u}_L=-\mathbf{B}_L^T\mathbf{\lambda}-2\gamma(t) \mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L$ into Eq. \eqref{maindynamicsss}, we also obtain dynamics \eqref{rawwwwwwwwwwwwwwwwa} governing the leaders' desired trajectories. The solution of dynamics \eqref{rawwwwwwwwwwwwwwwwa} is given by \begin{equation}\label{optimalsolutiuo} \begin{bmatrix} {\mathbf{x}}_L(t)\\ {\lambda}(t) \end{bmatrix} = \begin{bmatrix} \mathbf{\Phi}_{11}\left(t,t_k\right)&\mathbf{\Phi}_{12}\left(t,t_k\right)\\ \mathbf{\Phi}_{21}\left(t,t_k\right)&\mathbf{\Phi}_{22}\left(t,t_k\right)\\ \end{bmatrix} \begin{bmatrix} \bar{\mathbf{x}}_{L,k}\\ {\lambda}_{k} \end{bmatrix} \end{equation} at time $t\in [t_k,t_{k+1}]$, where ${\lambda}_{k}=\lambda\left(t_k\right)$. By imposing boundary condition \eqref{conditiontkplus1}, \begin{equation} \lambda_k=\mathbf{\Phi}_{12}\left(t_k,t_{k+1}\right)\left(\mathbf{x}_L\left(t_{k+1}\right)-\mathbf{\Phi}_{11}\left(t_{k+1},t_k\right)\mathbf{x}_L\left(t_{k}\right)\right) \end{equation} is obtained from Eq. \eqref{optimalsolutiuo}. By substituting $\lambda_k$ into Eq. \eqref{optimalsolutiuo}, $\mathbf{x}_L(t)$ is obtained by Eq. \eqref{xlt} at any time $t\in \left[t_k,t_{k+1}\right]$. \end{proof} \begin{algorithm} \caption{Assignment of travel time $t_u$ and desired trajectory $\mathbf{x}_L(t)$ over $\left[t_0,t_u\right]$}\label{euclid33} \begin{algorithmic}[1] \State \textit{Get:} $\bar{\mathbf{x}}_{L,0}$, $\cdots$, $\bar{\mathbf{x}}_{L,n_\tau}$ and $\beta_0$, $\cdots$, $\beta_{n_\tau}$, $\epsilon_T$, $\epsilon_\gamma$ \State \textit{Set:} small $T_{\mathrm{min}}$ and large $T_{\mathrm{max}}$ ($T_{\mathrm{min}}< t_u< T_{\mathrm{max}}$), $t_0=0$, $t_1=0$, $\cdots$, $t_{n_\tau-1}=0$ \State $t_u={T_{\mathrm{min}}+T_{\mathrm{max}}\over 2}$ \While{$t_u-T_{\mathrm{min}}\geq \epsilon_T$} \For{$k\leftarrow 0$ to $n_\tau-1$} \State $t_k\leftarrow \beta_k t_u$ \State $t_{k+1}\leftarrow \beta_{k+1} t_u$ \State $\gamma'(t)=0$ at every time $t\in \left[t_k,t_{k+1}\right]$ \State $\gamma(t)=0$ at every time $t\in \left[t_k,t_{k+1}\right]$ \State $e_\gamma\leftarrow 2\epsilon_\gamma$ \While{$e_\gamma\geq \epsilon_\gamma$} \State Compute $\mathbf{A}_{\mathbf{x}\lambda}\left(\gamma(t)\right)$ using Eq. \eqref{62aagamma} \State Compute $\mathbf{\Phi}\left(t,t_k\right)$ using Eq. \eqref{SatateTransitionMatrrrixxxxxxxs} \State Obtain $\mathbf{x}_L\left(t\right)$ by Eq. \eqref{xlt} for $t\in\left[t_k,t_{k+1}\right]$ \State Obtain $\lambda\left(t\right)$ by Eq.
\eqref{lambdalt} for $t\in\left[t_k,t_{k+1}\right]$ \State Compute $\gamma'(t)$ for $t\in\left[t_k,t_{k+1}\right]$: \State $\gamma'\left(t\right)=\dfrac{\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xx}}\mathbf{x}_L-2\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{B}_L^T\mathbf{\lambda}}{4\mathbf{x}_L^T\mathbf{\Gamma}_{\mathbf{xu}}\mathbf{\Gamma}_{\mathbf{xu}}^T\mathbf{x}_L}$ \State $e_\gamma=\max\limits_{t\in \left[t_k,t_{k+1}\right]}~\left|\gamma(t)-\gamma'(t)\right|$ \State $\gamma(t)=\gamma'(t)$ \EndWhile \EndFor \State $e_T=\max\limits_{t\in \left[t_0,t_u\right]}~\max\limits_{i\in \mathcal{V}}~\|\mathbf{r}_{i}(t)-\mathbf{p}_i(t)\|$ \If{$e_T\leq \delta$} \State $T_{\mathrm{max}}\leftarrow t_u$ \EndIf \If{$e_T> \delta$} \State $T_{\mathrm{min}}\leftarrow t_u$ \EndIf \State $t_u\leftarrow {T_{\mathrm{min}}+T_{\mathrm{max}}\over 2}$ \EndWhile \end{algorithmic} \end{algorithm} \section{Continuum Deformation Acquisition} \label{Continuum Deformation Acquisition} This paper considers collective motion of a quadcopter team consisting of $N$ quadcopters, where the dynamics of quadcopter $i\in \mathcal{V}$ are given by { \begin{equation} \label{generalnonlineardynamics} \begin{cases} \dot{\mathbf{x}}_i=\mathbf{f}_i\left(\mathbf{x}_i\right)+\mathbf{g}_i\left(\mathbf{x}_i\right)\mathbf{u}_i\\ \mathbf{r}_i=\mathbf{C}_i\mathbf{x}_i \end{cases} . \end{equation} } In \eqref{generalnonlineardynamics}, $ \mathbf{x}_i= \begin{bmatrix} \mathbf{r}_i^T&\dot{\mathbf{r}}_i^T&\phi_i&\theta_i&\psi_i&{\bf{\omega}}_i^T \end{bmatrix} ^T $ is the state, $\mathbf{u}_i= \begin{bmatrix} p_i&\tau_{\phi,i}&\tau_{\theta,i}&\tau_{\psi,i} \end{bmatrix} ^T$ is the input, {$\mathbf{C}_i=\begin{bmatrix} \mathbf{I}_3&\mathbf{0}_{3\times 9} \end{bmatrix}$,} \[ { \resizebox{0.99\hsize}{!}{ $ \mathbf{f}_i\left(\mathbf{x}_i\right)= \begin{bmatrix} \dot{\mathbf{r}}_i\\ {1\over m_i}p_i\hat{\mathbf{k}}_{b,i}-g\hat{\mathbf{e}}_3\\ \mathbf{\Gamma}_i^{-1}\left(\phi_i,\theta_i,\psi_i\right){\bf{\omega}}_i\\ -\mathbf{J}_i^{-1}\left({\bf{\omega}}_i\times \left(\mathbf{J}_i{\bf{\omega}}_i\right)\right)\\ \end{bmatrix}, ~\mathrm{and}~\mathbf{g}_i\left(\mathbf{x}_i\right)= \begin{bmatrix} \mathbf{0}_{3\times 1}&\mathbf{0}_{3\times 3}\\ {1\over m_i}\hat{\mathbf{k}}_{b,i}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 1}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 1}&\mathbf{J}_i^{-1}\\ \end{bmatrix} , $ }} \] where $m_i$ and $\mathbf{J}_i$ are the mass and mass moment of inertia of quadcopter $i\in \mathcal{V}$, respectively, $\mathbf{0}_{3\times 1}\in \mathbb{R}^{3\times 1}$, $\mathbf{0}_{3\times 3}\in \mathbb{R}^{3\times {3}}${, and $\mathbf{0}_{3\times 9}\in \mathbb{R}^{3\times {9}}$} are the zero-entry matrices, $\mathbf{I}_3\in\mathbb{R}^{3\times 3}$ is the identity matrix, $g=9.81m/s^2$ is the gravitational acceleration, and \begin{equation} \label{Eq55} \mathbf{\Gamma}_i\left(\phi_i,\theta_i,\psi_i\right)= \begin{bmatrix} 1&0&-\sin\theta_i\\ 0&\cos\phi_i&\cos\theta_i\sin\phi_i\\ 0&-\sin\phi_i&\cos\phi_i\cos\theta_i \end{bmatrix} .
\end{equation} The dynamics of the leader and follower quadcopter sub-teams are given by \begin{subequations} \begin{equation} \label{Leaders} \begin{cases} \dot{\mathbf{x}}_L=\mathbf{F}_L\left(\mathbf{x}_L\right)+\mathbf{G}_L\left(\mathbf{x}_L\right)\mathbf{u}_L\\ \mathbf{y}_L=\mathbf{C}_L \mathbf{x}_L \end{cases} , \end{equation} \begin{equation} \label{Followers} \begin{cases} \dot{\mathbf{x}}_F=\mathbf{F}_F\left(\mathbf{x}_F\right)+\mathbf{G}_F\left(\mathbf{x}_F\right)\mathbf{u}_F\\ \mathbf{y}_F=\mathbf{C}_F \mathbf{x}_F \end{cases} , \end{equation} \end{subequations} where $\mathbf{C}_L\in \mathbb{R}^{9\times 36}$, $\mathbf{C}_F\in \mathbb{R}^{3\left(N-3\right)\times 12\left(N-3\right)}$, $\mathbf{x}_L=\begin{bmatrix} \mathbf{x}_1^T&\cdots&\mathbf{x}_3^T \end{bmatrix}^T$ and $\mathbf{x}_F=\begin{bmatrix} \mathbf{x}_4^T&\cdots&\mathbf{x}_N^T \end{bmatrix}^T$ are the state vectors of the leaders and followers, $\mathbf{u}_L=\begin{bmatrix} \mathbf{u}_1^T&\cdots&\mathbf{u}_3^T \end{bmatrix}^T$ and $\mathbf{u}_F=\begin{bmatrix} \mathbf{u}_4^T&\cdots&\mathbf{u}_N^T \end{bmatrix}^T$ are the input vectors of the leaders and followers, $\mathbf{y}_L=\begin{bmatrix} \mathbf{r}_1^T&\cdots&\mathbf{r}_3^T \end{bmatrix}^T$ and $\mathbf{y}_F=\begin{bmatrix} \mathbf{r}_4^T&\cdots&\mathbf{r}_N^T \end{bmatrix}^T$ are the output vectors of the leaders and followers, and $\mathbf{F}_L\left(\mathbf{x}_L\right)=\begin{bmatrix} \mathbf{f}_1^T\left(\mathbf{x}_1\right)&\cdots&\mathbf{f}_3^T\left(\mathbf{x}_3\right) \end{bmatrix}^T$, $\mathbf{F}_F\left(\mathbf{x}_F\right)=\begin{bmatrix} \mathbf{f}_4^T\left(\mathbf{x}_4\right)&\cdots&\mathbf{f}_N^T\left(\mathbf{x}_N\right) \end{bmatrix}^T$, $\mathbf{G}_L\left(\mathbf{x}_L\right)=\mathrm{diag}\left(\mathbf{g}_1\left(\mathbf{x}_1\right),\cdots,\mathbf{g}_3\left(\mathbf{x}_3\right)\right)$, and $\mathbf{G}_F\left(\mathbf{x}_F\right)=\mathrm{diag}\left(\mathbf{g}_4\left(\mathbf{x}_4\right),\cdots,\mathbf{g}_N\left(\mathbf{x}_N\right)\right)$ are smooth functions. The continuum deformation, defined by \eqref{globaldesiredcoordination} and planned by leaders $1$, $2$, and $3$, is acquired by the followers in a decentralized fashion through local communication \cite{rastgoftar2020scalable}. Communication among the quadcopters is defined by graph $\mathcal{G}\left(\mathcal{V},\mathcal{E}\right)$ with the properties presented in Section \ref{Graph Theory Notions}. Here, we review the existing communication-based guidance protocol and the trajectory control design \cite{rastgoftar2020scalable} in Sections \ref{Communication-Based Guidance Protocol} and \ref{Trajectory Control Design} below. \subsection{Communication-Based Guidance Protocol}\label{Communication-Based Guidance Protocol} Given followers' communication weights, we define matrix \[ \mathbf{W}=\begin{bmatrix} \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times \left(N-3\right)}\\ \mathbf{B}_{\mathrm{MQS}}&\mathbf{A}_{\mathrm{MQS}} \end{bmatrix}\in \mathbb{R}^{N\times N} \] with partitions $\mathbf{B}_{\mathrm{MQS}}\in \mathbb{R}^{\left(N-3\right)\times 3}$ and $\mathbf{A}_{\mathrm{MQS}}\in \mathbb{R}^{\left(N-3\right)\times \left(N-3\right)}$, and $(i,j)$ entry \cite{rastgoftar2020scalable} \begin{equation} W_{ij}=\begin{cases} w_{i,j}&i\in \mathcal{V}_F,~j\in \mathcal{N}_{i}\\ -1&j=i\\ 0&\mathrm{otherwise} \end{cases} . \end{equation} In Ref.
\cite{rastgoftar2020scalable}, we show that \[ \mathbf{y}_{HT}=\mathrm{vec}\left(\begin{bmatrix}\mathbf{p}_1(t)&\cdots&\mathbf{p}_N(t)\end{bmatrix}^T\right)\in \mathbb{R}^{3N\times 1}, \] aggregating $x$, $y$, and $z$ components of global desired positions of all quadcopters, can be defined based on $\mathbf{y}_{L,HT}(t)$ by \begin{equation} \mathbf{y}_{HT}(t)=\left(\mathbf{I}_3\otimes\mathbf{W}_L\right)\mathbf{y}_{L,HT}(t), \end{equation} where \begin{equation} \mathbf{W}_L=\begin{bmatrix} \mathbf{\Omega}_2^T\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{1,0}\right)\\ \vdots\\ \mathbf{\Omega}_2^T\left(\mathbf{p}_{1,0},\mathbf{p}_{2,0},\mathbf{p}_{3,0},\mathbf{p}_{N,0}\right)\\ \end{bmatrix} \in \mathbb{R}^{N\times 3} \end{equation} is defined based on $\mathbf{W}$ by \begin{equation} \mathbf{W}_L=\left(-\mathbf{I}_N+\mathbf{W}\right)^{-1}\begin{bmatrix} \mathbf{I}_{3}&\mathbf{0}_{3\times \left(N-3\right)} \end{bmatrix} ^T. \end{equation} Given the output vectors of the leaders' dynamics \eqref{Leaders}, denoted by $\mathbf{y}_L$, and followers' dynamics \eqref{Followers}, denoted by $\mathbf{y}_F$, we define the MQS output vector \[ \mathbf{y}(t)=\mathbf{R}_L\mathbf{y}_L(t)+\mathbf{R}_F\mathbf{y}_F(t) \] to measure deviation of the MQS from the desired continuum deformation coordination by checking constraint \eqref{BoundedDeviation}, where $\mathbf{R}_L=\left[R_{L_{ij}}\right]\in \mathbb{R}^{3N\times 9}$ and $\mathbf{R}_F=\left[R_{F_{ij}}\right]\in \mathbb{R}^{3N\times 3(N-3)}$ are defined as follows: \begin{subequations} \begin{equation} R_{L_{ij}}=\begin{cases} 1&i=j,~ j\leq3\\ 1&i=j+N,~ 4\leq j\leq6\\ 1&i=j+N,~ 7\leq j\leq9\\ 0&\mathrm{otherwise} \end{cases} , \end{equation} \begin{equation} R_{F_{ij}}=\begin{cases} 1&4\leq i\leq N,~ j\leq3\\ 1&N+4\leq i\leq 2N,~ 4<j\leq6\\ 1&2N+4\leq i\leq 3N,~ 4<j\leq6\\ 0&\mathrm{otherwise} \end{cases} . \end{equation} \end{subequations} As shown in Fig. \ref{blockdiagram}, $\mathbf{y}_{L,HT}(t)$ is the reference input of the control system of leader coordination, and \begin{equation} \mathbf{y}_{F,d}(t)=\left(\mathbf{I}_3\otimes \mathbf{A}_{\mathrm{MQS}}\right)\mathbf{y}_F(t)+\left(\mathbf{I}_3\otimes \mathbf{B}_{\mathrm{MQS}}\right)\mathbf{y}_L(t) \end{equation} is the reference input of the control system of the follower quadcopter team. \subsection{Trajectory Control Design}\label{Trajectory Control Design} The objective of control design is to determine $\mathbf{u}_L\in \mathbb{R}^{12\times 1}$ and $\mathbf{u}_F\in \mathbb{R}^{4\left(N-3\right)\times 1}$ such that \eqref{BoundedDeviation} is satisfied at any time $t\in \left[t_0,t_u\right]$. We can rewrite the safety condition \eqref{BoundedDeviation} as \begin{equation}\label{convertedbounded} \bigwedge_{i\in \mathcal{V}}\left(\left(\mathbf{y}(t)-\mathbf{y}_{HT}(t)\right)^T\mathbf{S}_i^T\mathbf{S}_i\left(\mathbf{y}(t)-\mathbf{y}_{HT}(t)\right)\leq\delta^2\right),\qquad \forall t, \end{equation} where $\mathbf{S}_i=\left[\mathbf{S}_{i_{pq}}\right]\in \mathbb{R}^{3\times 3N}$ selects the three position components of quadcopter $i$ and is defined as follows: \begin{equation} \mathbf{S}_{i_{pq}}=\begin{cases} 1&q=N(p-1)+i\\ 0&\mathrm{otherwise} \end{cases} . \end{equation} We use the feedback linearization approach presented in Ref. \cite{rastgoftar2020scalable} to obtain the control input vector $\mathbf{u}_i(t)$ for every quadcopter $i\in \mathcal{V}$ such that inequality constraint \eqref{convertedbounded} is satisfied. \begin{figure*} \caption{The block diagram of the MQS continuum deformation acquisition.
} \label{blockdiagram} \end{figure*} \begin{figure} \caption{(a,b) MQS initial and final formations.} \label{Formation} \end{figure} \section{Simulation Results} \label{Simulation Results} We consider an MQS consisting of $N=8$ quadcopters with the initial formation shown in Fig. \ref{Formation} (a). The MQS is initially distributed over the horizontal plane $z=43m$ where $\bar{\mathbf{d}}_s=\begin{bmatrix} 1935&215&43\end{bmatrix}^T$ is the position of the center of the containment ball $\mathcal{S}$ at the initial time $t_s=0s$. It is desired that the MQS finally reaches the final formation shown in Fig. \ref{Formation} (b) in the obstacle-laden environment shown in Fig. \ref{MQS}. The final formation of the MQS is obtained by homogeneous transformation of the MQS initial formation and specified by choosing $\sigma_{1,n_\tau}=1.2$, $\sigma_{2,n_\tau}={1\over \sigma_{1,n_\tau}}=0.83$, $\theta_{d,n_\tau}=-{\pi\over 4}$, and $\bar{\mathbf{d}}_u=\begin{bmatrix}850&2250&50\end{bmatrix}^T$. \begin{figure} \caption{Collective motion of the MQS in an obstacle-laden environment.} \label{MQS} \end{figure} \textbf{Inter-agent Communication:} Given quadcopters' initial positions, followers' in-neighbors and communication weights are computed using the approach presented in Section \ref{Communication-Based Guidance Protocol} and listed in Table \ref{Table1}. Note that quadcopters' identification numbers are defined by set $\mathcal{V}=\{1,\cdots,8\}$, where $\mathcal{V}_L=\{1,2,3\}$ and $\mathcal{V}_F=\{4,\cdots,8\}$ define the identification numbers of the leader and follower quadcopters, respectively. \begin{table}[] \caption{In-neighbor agents of followers $4$ through $8$ and followers' communication weights} \centering \begin{tabular}{|c|ccc|ccc|} \hline &\multicolumn{3}{c}{In-neighbors}&\multicolumn{3}{c|}{Communication weights}\\ $i\in \mathcal{V}_F$&$i_1$&$i_2$&$i_3$&$w_{i,i_1}$&$w_{i,i_2}$&$w_{i,i_3}$\\ \hline 4& 1& 7& 8& 0.55& 0.15 &0.30\\ 5 & 2 & 6 & 8 &0.60 & 0.15 & 0.25\\ 6 & 3 & 5 & 7 &0.60 & 0.15 & 0.25\\ 7 & 4 & 6 & 8 & 0.40 & 0.20 & 0.40\\ 8& 4& 5& 7& 0.45 &0.25& 0.30\\ \hline \end{tabular} \label{Table1} \end{table} \begin{figure} \caption{Components of optimal control input $\mathbf{u}_L^*$ versus time for $t\in [0,490]s$.} \label{accelerationha} \end{figure} \textbf{Safety Specification:} We assume that every quadcopter can be enclosed by a ball of radius $\epsilon=0.45m$. For the initial formation shown in Fig. \ref{Formation} (a), $d_{\mathrm{min}}=3.5652m$ is the minimum separation distance between every two quadcopters. Furthermore, $\sigma_{\mathrm{min}}={1\over \sigma_{1,n_\tau}}=0.83$ is the lower bound for the eigenvalues of matrix $\mathbf{U}_{xy}$. Per Eq. \eqref{interagentcollision}, \[ \delta={1\over2}\left(d_{\mathrm{min}}\sigma_{\mathrm{min}}-2\epsilon\right)=1.04m \] is the upper-bound for deviation of every quadcopter from its global desired position at any time $t\in \left[t_0,t_u\right]$. \begin{figure} \caption{Components of the optimal desired trajectories of the leaders over the time interval $\left[0,490\right]s$. } \label{leadersdesired} \end{figure} \textbf{MQS Planning:} It is desired that the MQS remains inside a ball of radius $r_{\mathrm{max}}=50m$ at any time $t\in [t_0,t_u]$. By using the A* search method, the optimal intermediate waypoints of the center of the containment ball are obtained. Then, the optimal path of the containment ball is assigned and shown in Fig. \ref{MQS}.
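The A* stage above is a standard grid search; the following Python sketch illustrates one possible setup, assuming (hypothetically) a planar occupancy grid whose blocked cells have already been inflated by the containment radius $r_{\mathrm{max}}$ so that any free cell keeps the ball clear of obstacles. The grid, obstacle wall, and endpoints are illustrative placeholders rather than values used in this paper, and this is not the implementation used to generate the results reported here.

```python
# Minimal A* sketch for the containment-ball center on a 2-D occupancy grid.
# `occupied` holds blocked (x, y) cells, assumed already inflated by r_max.
import heapq
import math

def a_star(start, goal, occupied, x_range, y_range):
    """Return a list of grid cells from start to goal, or None if unreachable."""
    def h(c):                      # admissible Euclidean heuristic
        return math.hypot(c[0] - goal[0], c[1] - goal[1])

    open_set = [(h(start), 0.0, start, None)]
    parents, g_cost = {}, {start: 0.0}
    while open_set:
        _, g, cell, parent = heapq.heappop(open_set)
        if cell in parents:        # already expanded with a better cost
            continue
        parents[cell] = parent
        if cell == goal:           # reconstruct the waypoint sequence
            path = [cell]
            while parents[path[-1]] is not None:
                path.append(parents[path[-1]])
            return path[::-1]
        x, y = cell
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                nxt = (x + dx, y + dy)
                if (dx, dy) == (0, 0) or nxt in occupied or nxt in parents:
                    continue
                if not (x_range[0] <= nxt[0] <= x_range[1] and
                        y_range[0] <= nxt[1] <= y_range[1]):
                    continue
                step = math.hypot(dx, dy)
                if g + step < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = g + step
                    heapq.heappush(open_set,
                                   (g + step + h(nxt), g + step, nxt, cell))
    return None

# Illustrative call: the obstacle wall and endpoints are made up for this sketch.
occupied = {(5, j) for j in range(0, 8)}
print(a_star((0, 0), (10, 10), occupied, (0, 10), (0, 10)))
```

After scaling the returned cells back to metric coordinates, such a sequence would play the role of the intermediate waypoints of the containment-ball center used in the planning above.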
Given the intermediate waypoints of the center of the containment ball, the desired trajectories of the leaders are determined by solving the constrained optimal control problem given in Section \ref{Planning3}. Given $t_s=0s$ and $\delta=1.04m$, $t_u=490s$ is assigned by using Algorithm \ref{euclid33}. Components of the optimal control input vector $\mathbf{u}_L^*(t)$, $\ddot{x}_{1,HT}^*(t)$, $\ddot{x}_{2,HT}^*(t)$, $\ddot{x}_{3,HT}^*(t)$, $\ddot{y}_{1,HT}^*(t)$, $\ddot{y}_{2,HT}^*(t)$, and $\ddot{y}_{3,HT}^*(t)$, and components of the global desired positions of the leaders, ${x}_{1,HT}^*(t)$, ${x}_{2,HT}^*(t)$, ${x}_{3,HT}^*(t)$, ${y}_{1,HT}^*(t)$, ${y}_{2,HT}^*(t)$, and ${y}_{3,HT}^*(t)$, are plotted versus time $t$ in Figs. \ref{accelerationha} and \ref{leadersdesired}, respectively. Furthermore, deviation of every quadcopter from the global desired position is plotted in Fig. \ref{error-v5}. It is seen that the deviation of every quadcopter remains below $\delta=1.04m$ at any time $t\in [0,490]s$. \begin{figure} \caption{Deviation of every quadcopter from its global desired position over the time interval $[0,490]s$. } \label{error-v5} \end{figure} \section{Conclusion}\label{Conclusion} This paper developed an algorithmic and formal approach for continuum deformation planning of a multi-quadcopter system coordinating in a geometrically-constrained environment. By using the principles of Lagrangian continuum mechanics, we obtained safety conditions for inter-agent collision avoidance and follower containment through constraining the eigenvalues of the Jacobian matrix of the continuum deformation coordination. To obtain safe and optimal transport of the MQS, we contained the MQS in a rigid ball and determined the intermediate waypoints of the containment ball using the A* search method. Given the intermediate configurations of the containment ball, we first determined the leaders' intermediate configurations by decomposing the homogeneous deformation coordination. Then, we assigned the optimal desired trajectories of the leader quadcopters by solving a constrained optimal control problem. \section{\hspace{0.3cm}Acknowledgement} This work has been supported by the National Science Foundation under Award Nos. 1914581 and 1739525. The author gratefully thanks Professor Ella Atkins. \begin{IEEEbiography}[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{rastgoftar.png}}] {\textbf{Hossein Rastgoftar}} is an Assistant Professor at Villanova University and an Adjunct Assistant Professor at the University of Michigan. He was an Assistant Research Scientist in the Aerospace Engineering Department from 2017 to 2020. Prior to that he was a postdoctoral researcher at the University of Michigan from 2015 to 2017. He received the B.Sc. degree in mechanical engineering-thermo-fluids from Shiraz University, Shiraz, Iran, the M.S. degrees in mechanical systems and solid mechanics from Shiraz University and the University of Central Florida, Orlando, FL, USA, and the Ph.D. degree in mechanical engineering from Drexel University, Philadelphia, in 2015. His current research interests include dynamics and control, multiagent systems, cyber-physical systems, and optimization and Markov decision processes. \end{IEEEbiography} \end{document}
Factorial experiment

In statistics, a full factorial experiment is an experiment whose design consists of two or more factors, each with discrete possible values or "levels", and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable, as well as the effects of interactions between factors on the response variable. For the vast majority of factorial experiments, each factor has only two levels. For example, with two factors each taking two levels, a factorial experiment would have four treatment combinations in total, and is usually called a 2×2 factorial design. In such a design, the interaction between the variables is often the most important. This applies even to scenarios where a main effect and an interaction are present. If the number of combinations in a full factorial design is too high to be logistically feasible, a fractional factorial design may be done, in which some of the possible combinations (usually at least half) are omitted. Other terms for "treatment combinations" are often used, such as runs (of an experiment), points (viewing the combinations as vertices of a graph), and cells (arising as intersections of rows and columns). History Factorial designs were used in the 19th century by John Bennet Lawes and Joseph Henry Gilbert of the Rothamsted Experimental Station.[1] Ronald Fisher argued in 1926 that "complex" designs (such as factorial designs) were more efficient than studying one factor at a time.[2] Fisher wrote, "No aphorism is more frequently repeated in connection with field trials, than that we must ask Nature few questions, or, ideally, one question, at a time. The writer is convinced that this view is wholly mistaken." Nature, he suggests, will best respond to "a logical and carefully thought out questionnaire". A factorial design allows the effect of several factors and even interactions between them to be determined with the same number of trials as are necessary to determine any one of the effects by itself with the same degree of accuracy. Frank Yates made significant contributions, particularly in the analysis of designs, by the Yates analysis. The term "factorial" may not have been used in print before 1935, when Fisher used it in his book The Design of Experiments.[3] Advantages of factorial experiments Many people examine the effect of only a single factor or variable. Compared to such one-factor-at-a-time (OFAT) experiments, factorial experiments offer several advantages:[4][5] • Factorial designs are more efficient than OFAT experiments. They provide more information at similar or lower cost. They can find optimal conditions faster than OFAT experiments. • Factorial designs allow additional factors to be examined at no additional cost. • When the effect of one factor is different for different levels of another factor, it cannot be detected by an OFAT experiment design. Factorial designs are required to detect such interactions. Use of OFAT when interactions are present can lead to serious misunderstanding of how the response changes with the factors.
• Factorial designs allow the effects of a factor to be estimated at several levels of the other factors, yielding conclusions that are valid over a range of experimental conditions. Example of advantages of factorial experiments In his book, Improving Almost Anything: Ideas and Essays, statistician George Box gives many examples of the benefits of factorial experiments. Here is one.[6] Engineers at the bearing manufacturer SKF wanted to know if changing to a less expensive "cage" design would affect bearing life. The engineers asked Christer Hellstrand, a statistician, for help in designing the experiment.[7] Box reports the following. "The results were assessed by an accelerated life test. … The runs were expensive because they needed to be made on an actual production line and the experimenters were planning to make four runs with the standard cage and four with the modified cage. Christer asked if there were other factors they would like to test. They said there were, but that making added runs would exceed their budget. Christer showed them how they could test two additional factors "for free" – without increasing the number of runs and without reducing the accuracy of their estimate of the cage effect. In this arrangement, called a 2×2×2 factorial design, each of the three factors would be run at two levels and all the eight possible combinations included. The various combinations can conveniently be shown as the vertices of a cube ... " "In each case, the standard condition is indicated by a minus sign and the modified condition by a plus sign. The factors changed were heat treatment, outer ring osculation, and cage design. The numbers show the relative lengths of lives of the bearings. If you look at [the cube plot], you can see that the choice of cage design did not make a lot of difference. … But, if you average the pairs of numbers for cage design, you get the [table below], which shows what the two other factors did. … It led to the extraordinary discovery that, in this particular application, the life of a bearing can be increased fivefold if the two factor(s) outer ring osculation and inner ring heat treatments are increased together."

Bearing life vs. heat and osculation
          Osculation −   Osculation +
Heat −    18             23
Heat +    21             106

"Remembering that bearings like this one have been made for decades, it is at first surprising that it could take so long to discover so important an improvement. A likely explanation is that, because most engineers have, until recently, employed only one factor at a time experimentation, interaction effects have been missed." Example The simplest factorial experiment contains two levels for each of two factors. Suppose an engineer wishes to study the total power used by each of two different motors, A and B, running at each of two different speeds, 2000 or 3000 RPM. The factorial experiment would consist of four experimental units: motor A at 2000 RPM, motor B at 2000 RPM, motor A at 3000 RPM, and motor B at 3000 RPM. Each combination of a single level selected from every factor is present once. This experiment is an example of a $2^2$ (or 2×2) factorial experiment, so named because it considers two levels (the base) for each of two factors (the power or superscript), or #levels^#factors, producing $2^2=4$ factorial points. Designs can involve many independent variables. As a further example, the effects of three input variables can be evaluated in eight experimental conditions shown as the corners of a cube.
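As an aside (not part of the original article), the treatment combinations of such full factorial designs are simply the Cartesian product of the factor levels, which is easy to enumerate. In the Python sketch below, the two-factor case mirrors the motor/speed example above and the three-factor case yields the eight corners of the cube; the factor names and level labels are illustrative only.

```python
# Enumerate the treatment combinations (runs) of a full factorial design.
from itertools import product

def full_factorial(factors):
    """factors: dict mapping factor name -> list of levels.
    Returns one dict per treatment combination (the full Cartesian product)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

# The 2x2 example above: two motors, two speeds -> 4 experimental units.
runs_2x2 = full_factorial({"motor": ["A", "B"], "speed_rpm": [2000, 3000]})
print(len(runs_2x2), runs_2x2)

# Three two-level factors -> 2**3 = 8 runs, the corners of a cube.
runs_2x2x2 = full_factorial({"heat": "-+", "osculation": "-+", "cage": "-+"})
for run in runs_2x2x2:
    print(run)
```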
Such an experiment can be conducted with or without replication, depending on its intended purpose and available resources. It will provide the effects of the three independent variables on the dependent variable and possible interactions. Notation

2×2 factorial experiment
       A   B
(1)    −   −
a      +   −
b      −   +
ab     +   +

The notation used to denote factorial experiments conveys a lot of information. When a design is denoted a $2^3$ factorial, this identifies the number of factors (3); how many levels each factor has (2); and how many experimental conditions there are in the design ($2^3 = 8$). Similarly, a $2^5$ design has five factors, each with two levels, and $2^5 = 32$ experimental conditions. Factorial experiments can involve factors with different numbers of levels. A $2^{4}3$ design has five factors, four with two levels and one with three levels, and has 16 × 3 = 48 experimental conditions.[8] To save space, the points in a two-level factorial experiment are often abbreviated with strings of plus and minus signs. The strings have as many symbols as factors, and their values dictate the level of each factor: conventionally, $-$ for the first (or low) level, and $+$ for the second (or high) level. The points in this experiment can thus be represented as $--$, $+-$, $-+$, and $++$. Another common and useful notation for the two levels is 0 and 1, so that the treatment combinations are 00, 01, 10, and 11. The factorial points can also be abbreviated by (1), a, b, and ab, where the presence of a letter indicates that the specified factor is at its high (or second) level and the absence of a letter indicates that the specified factor is at its low (or first) level (for example, "a" indicates that factor A is on its high setting, while all other factors are at their low (or first) setting). (1) is used to indicate that all factors are at their lowest (or first) values. In an $s_{1}\cdots s_{k}$ (or $s_{1}\times \cdots \times s_{k}$) factorial experiment, there are k factors, the ith factor at $s_{i}$ levels. If $A_{i}$ is the set of levels of the ith factor, then the set of treatment combinations is the Cartesian product $T=A_{1}\times \cdots \times A_{k}$. A treatment combination is thus a k-tuple $\mathbf {t} =(t_{1},\ldots ,t_{k})$. If $s_{1}=\cdots =s_{k}\equiv s$, say, the experiment is said to be symmetric and of type $s^{k}$, and the same set $A$ is used to denote the set of levels of each factor. In a 2-level experiment, for example, one may take $A=\{+,-\}$, as above; the treatment combination $--$ is denoted by (1), $+-$ by a, and so on. Implementation For more than two factors, a $2^k$ factorial experiment can usually be recursively designed from a $2^{k-1}$ factorial experiment by replicating the $2^{k-1}$ experiment, assigning the first replicate to the first (or low) level of the new factor, and the second replicate to the second (or high) level. This framework can be generalized to, e.g., designing three replicates for three-level factors, etc. A factorial experiment allows for estimation of experimental error in two ways. The experiment can be replicated, or the sparsity-of-effects principle can often be exploited. Replication is more common for small experiments and is a very reliable way of assessing experimental error. When the number of factors is large (typically more than about 5 factors, but this does vary by application), replication of the design can become operationally difficult.
In these cases, it is common to only run a single replicate of the design, and to assume that factor interactions of more than a certain order (say, between three or more factors) are negligible. Under this assumption, estimates of such high order interactions are estimates of an exact zero, thus really an estimate of experimental error. When there are many factors, many experimental runs will be necessary, even without replication. For example, experimenting with 10 factors at two levels each produces $2^{10}=1024$ combinations. At some point this becomes infeasible due to high cost or insufficient resources. In this case, fractional factorial designs may be used. As with any statistical experiment, the experimental runs in a factorial experiment should be randomized to reduce the impact that bias could have on the experimental results. In practice, this can be a large operational challenge. Factorial experiments can be used when there are more than two levels of each factor. However, the number of experimental runs required for three-level (or more) factorial designs will be considerably greater than for their two-level counterparts. Factorial designs are therefore less attractive if a researcher wishes to consider more than two levels. Main effects and interactions A fundamental concept in experimental design is the contrast. Let $\mu (\mathbf {t} )$ be the expected response to treatment combination $\mathbf {t} =(t_{1},\ldots ,t_{k})$, and let $T$ be the set of treatment combinations. A contrast in $\mu $ is a linear expression $\sum _{\mathbf {t} \in T}c(\mathbf {t} )\mu (\mathbf {t} )$ such that $\sum _{\mathbf {t} \in T}c(\mathbf {t} )=0$. The function $c(\mathbf {t} )$ is a contrast function. Typically the order of the treatment combinations t is fixed, so that $c$ is a contrast vector with components $c(\mathbf {t} )$. These vectors will be written as columns. Example: In a one-factor experiment the expression $2\mu (1)-\mu (2)-\mu (3)$ represents a contrast between level 1 of the factor and the combined impact of levels 2 and 3. The corresponding contrast function $c$ is given by $c(1)=2,c(2)=c(3)=-1$, and the contrast vector is $[2,-1,-1]^{\mathsf {T}}$, the transpose (T) indicating a column. Contrast vectors belong to the Euclidean space $\mathbb {R} ^{n}$, where $n=|T|$, the number of treatment combinations. It is easy to see that if $c$ and $d$ are contrast vectors, so is $c+d$, and so is $rc$ for any real number $r$. As usual, contrast vectors $c$ and $d$ are said to be orthogonal (denoted $c\perp d$) if their dot product is zero, that is, if $\sum _{\mathbf {t} \in T}c(\mathbf {t} )d(\mathbf {t} )=0$. More generally, Bose[9] has given the following definitions: • A contrast vector $c$ belongs to the main effect of factor i if the value of $c(t_{1},\ldots ,t_{k})$ depends only on $t_{i}$. Example: In the $2\times 2$ illustration above, the contrast $-\mu ((1))-\mu (a)+\mu (b)+\mu (ab)$ represents the main effect of factor $B$, as the coefficients in this expression depend only on the level of $B$ (high versus low). The contrast vector is displayed in the column for factor $B$ in the table above. Any scalar multiple of this vector also belongs to this main effect. For example, it is common to put the factor 1/2 in front of the contrast describing a main effect in a $2\times 2$ experiment, so that the contrast for $B$ compares two averages rather than two sums.
• The contrast vector $c$ belongs to the interaction between factors i and j if (a) the value of $c(t_{1},\ldots ,t_{k})$ depends only on $t_{i}$ and $t_{j}$, and (b) $c$ is orthogonal to the contrast vectors for the main effects of factors $i$ and $j$. These contrasts detect the presence or absence of additivity between the two factors.[10][11] Additivity may be viewed as a kind of parallelism between factors, as illustrated in the Analysis section below. Interaction is lack of additivity. Example: In the $2\times 2$ experiment above, additivity is expressed by the equality $\mu ((1))-\mu (a)=\mu (b)-\mu (ab)$, which can be written $\mu ((1))-\mu (a)-\mu (b)+\mu (ab)=0$. In the latter equation, the expression on the left-hand side is a contrast, and the corresponding contrast vector would be $(1,-1,-1,1)'$. It is orthogonal to the contrast vectors for $A$ and $B$. Any scalar multiple of this vector also belongs to interaction. • Similarly, for any subset $I$ of $\{1,\ldots ,k\}$ having more than two elements, a contrast vector $c$ belongs the interaction between the factors listed in $I$ if (a) the value of $c(t_{1},\ldots ,t_{k})$ depends only on the levels $t_{i},i\in I$ and (b) $c$ is orthogonal to all contrasts of lower order among those factors. Let $U_{i}$ denote the set of contrast vectors belonging to the main effect of factor $i$, $U_{ij}$ the set of those belonging to the interaction between factors $i$ and $j$, and more generally $U_{I}$ the set of contrast vectors belonging to the interaction between the factors listed in $I$ for any subset $I\subset \{1,\ldots ,k\}$ with $|I|\geq 2$ (here again $|I|$ denotes cardinality). In addition, let $U_{\emptyset }$ denote the set of constant vectors on $T$, that is, vectors whose components are equal. This defines a set $U_{I}$ corresponding to each $I\subset \{1,\ldots ,k\}$. It is not hard to see that each $U_{I}$ is a vector space, a subspace of $\mathbb {R} ^{n}$, where (as before) $n=|T|$, the number of treatment combinations. The following are well-known, fundamental facts:[12][13] 1. If $I\neq J$ then $U_{I}\perp U_{J}$. 2. $\mathbb {R} ^{n}$ is the sum of all the subspaces $U_{I}$. 3. For each $I$, dim $U_{I}=\prod _{i\in I}(s_{i}-1)$. (The empty product is defined to be 1.) These results underpin the usual analysis of variance or ANOVA (see below), in which a total sum of squares is partitioned into the sums of squares for each effect (main effect or interaction), as introduced by Fisher. The dimension of $U_{I}$ is the degrees of freedom for the corresponding effect. Example: In a two-factor or $a\times b$ experiment the orthogonal sum reads $\mathbb {R} ^{ab}=U_{\emptyset }\oplus U_{1}\oplus U_{2}\oplus U_{12}$, and the corresponding dimensions are $ab=1+(a-1)+(b-1)+(a-1)(b-1)$, giving the usual formulas for degrees of freedom for main effects and interaction (the total degrees of freedom is $ab-1$). The next section illustrates these ideas in a $3\times 3$ experiment. Components of interaction and confounding with blocks In certain symmetric factorial experiments the sets $U_{I}$ that represent interactions can themselves be decomposed orthogonally. A key application, confounding with blocks, is described at the end of this section. Consider the following example in which each of two factors, $A$ and $B$, has 3 levels, denoted 0, 1 and 2. 
According to the formula in the previous section, such an experiment has 2 degrees of freedom for each main effect and 4 for interaction (that is, $\dim(U_{1})=\dim(U_{2})=2$ and $\dim(U_{12})=4$). The layout table below describes the nine cells or treatment combinations $(t_{1},t_{2})$, which are written without parentheses or commas (for example, (1,2) is written 12). In the contrasts table that follows, the first column lists these cells, while the last eight columns contain contrast vectors. The columns labeled $A$ and $B$ belong to the main effects of those two factors, as explained below. The last four columns are orthogonal to both and so must belong to interaction. (In fact, these eight vectors are bases of $U_{1}$, $U_{2}$, and $U_{12}$, respectively.) In addition, the last four columns have been separated into two sets of two. These two effects, which are usually labelled $AB$ and $AB^{2}$, are components of interaction, each having 2 degrees of freedom.

Layout of a $3\times 3$ experiment (rows indexed by $t_1$, columns by $t_2$):

$t_1$\$t_2$   0    1    2
0             00   01   02
1             10   11   12
2             20   21   22

Contrast vectors in a $3\times 3$ experiment:

cell   $t_1+t_2$   $t_1+2t_2$   $A$        $B$        $AB$       $AB^2$
00     0           0             1   1      1   1      1   1      1   1
01     1           2             1   1     -1   0     -1   0      0  -1
02     2           1             1   1      0  -1      0  -1     -1   0
10     1           1            -1   0      1   1     -1   0     -1   0
11     2           0            -1   0     -1   0      0  -1      1   1
12     0           2            -1   0      0  -1      1   1      0  -1
20     2           2             0  -1      1   1      0  -1      0  -1
21     0           1             0  -1     -1   0      1   1     -1   0
22     1           0             0  -1      0  -1     -1   0      1   1

It is easy to see that the contrast vectors labeled $A$ depend only on the values of $t_{1}$ (the first component of each cell), so that these vectors indeed belong to the main effect of $A$. Similarly, those labeled $B$ belong to that main effect since they depend only on the values of $t_{2}$; sorting the $B$ column and comparing with the column of cells makes this easier to see. In a similar fashion, the $AB$ and $AB^{2}$ contrast vectors depend respectively on the values of $t_{1}+t_{2}$ and $t_{1}+2t_{2}$ modulo 3, which are contained in the second and third columns of the table. To see this easily, one may sort the contrasts table by the value of $t_{1}+t_{2}$ and observe how the values of the $AB$ contrast vectors follow the same pattern as those of $t_{1}+t_{2}$. The same holds for $AB^{2}$ and $t_{1}+2t_{2}$. One can verify that the contrast vectors of $AB$ are orthogonal to those of $AB^{2}$. One should also note this important naming convention: The exponents of $A$ and $B$ in the expressions $AB$ and $AB^{2}$ are the coefficients of $t_{1}$ and $t_{2}$ in the defining expressions $t_{1}+t_{2}$ and $t_{1}+2t_{2}$. A key point is that each main effect and each component of interaction corresponds to a partition of the nine treatment combinations in three sets of three. The partitions for $A$ and $B$ are given respectively by the rows and columns of the layout table. To see the partitions corresponding to $AB$ and $AB^{2}$, one may fill the layout table with the values of $t_{1}+t_{2}$, and again with the values of $t_{1}+2t_{2}$:

$AB$ (cells labeled by $t_1+t_2$ mod 3):
0 1 2
1 2 0
2 0 1

$AB^{2}$ (cells labeled by $t_1+2t_2$ mod 3):
0 2 1
1 0 2
2 1 0

In each table, the three cells labeled 0 form one block of a partition, and those labeled 1 and 2 form two other blocks. (One may note in passing that each table is a Latin square of order 3. The two squares would in fact be mutually orthogonal. This orthogonality is what makes the $AB$ contrast vectors perpendicular to those of $AB^{2}$, while latinity makes these vectors perpendicular to those of $A$ and $B$.)
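As a quick check (not part of the original article), the following Python sketch recomputes the $AB$ and $AB^2$ labels $t_1+t_2 \bmod 3$ and $t_1+2t_2 \bmod 3$ for the nine cells, builds contrast vectors with the same coding as the table above, and verifies that contrasts belonging to different effects sum to zero and are mutually orthogonal.

```python
# Recompute the AB and AB^2 partitions of the 3x3 factorial and check
# orthogonality of the corresponding contrast vectors.
from itertools import product

cells = list(product(range(3), range(3)))          # (t1, t2) in standard order

def indicator_contrasts(label_of_cell):
    """Two contrast vectors that are constant on the blocks defined by
    label_of_cell (labels 0, 1, 2), using the same coding as the table."""
    base = {0: (1, 1), 1: (-1, 0), 2: (0, -1)}
    return [[base[label_of_cell(c)][j] for c in cells] for j in (0, 1)]

effects = {
    "A":    indicator_contrasts(lambda c: c[0]),
    "B":    indicator_contrasts(lambda c: c[1]),
    "AB":   indicator_contrasts(lambda c: (c[0] + c[1]) % 3),
    "AB^2": indicator_contrasts(lambda c: (c[0] + 2 * c[1]) % 3),
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Every contrast sums to zero, and contrasts of different effects are orthogonal.
for name, cols in effects.items():
    assert all(sum(col) == 0 for col in cols), name
for n1, c1 in effects.items():
    for n2, c2 in effects.items():
        if n1 != n2:
            assert all(dot(u, v) == 0 for u in c1 for v in c2), (n1, n2)
print("all pairwise orthogonality checks passed")
```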
A similar result holds for any $s\times s$ factorial experiment, or indeed any $s^{k}$ (= $s\times s\times \cdots $) experiment, as long as the number $s>2$ is a prime (as in the above example) or a prime power (for example, $s=8$ or $9$).[14] Each component of interaction is defined by solving an equation $a_{1}t_{1}+\cdots +a_{k}t_{k}=b$ for $(t_{1},\ldots ,t_{k})$, where the solution sets as $b$ varies form a partition of the treatment combinations. The necessary arithmetic is that of the finite field $GF(s)$, which is simply arithmetic modulo $s$ when $s$ is prime. The same naming convention holds as in the example above: The component defined by the expression $a_{1}t_{1}+\cdots +a_{k}t_{k}$ is labeled $A_{1}^{a_{1}}\cdots A_{k}^{a_{k}}$. Every interaction is then an orthogonal sum of components, each carrying $s-1$ degrees of freedom. Each component would then appear in the ANOVA table for such an experiment. Examples of such analyses can be found in some introductory texts.[15][16] The fact that every component of interaction is defined by a partition (blocking) of the set of treatment combinations makes components of interaction the essential tool in dealing with factorial experiments that must be run in blocks, where certain effects will be confounded with blocks[17][18] (their contrasts will be identical to contrasts between blocks). Here the goal is to choose the blocking so as not to confound main effects and low-order interactions. For example, if it is necessary to run a $3\times 3$ experiment in 3 blocks of 3, one might choose the blocks defined by the $AB$ component. This would confound $AB$ with blocks, but would leave the main effects and the $AB^{2}$ component unconfounded. Analysis A factorial experiment can be analyzed using ANOVA or regression analysis.[19] To compute the main effect of a factor "A" in a 2-level experiment, subtract the average response of all experimental runs for which A was at its low (or first) level from the average response of all experimental runs for which A was at its high (or second) level. Other useful exploratory analysis tools for factorial experiments include main effects plots, interaction plots, Pareto plots, and a normal probability plot of the estimated effects. When the factors are continuous, two-level factorial designs assume that the effects are linear. If a quadratic effect is expected for a factor, a more complicated experiment should be used, such as a central composite design. Optimization of factors that could have quadratic effects is the primary goal of response surface methodology. Analysis example Montgomery [4] gives the following example of analysis of a factorial experiment. An engineer would like to increase the filtration rate (output) of a process to produce a chemical, and to reduce the amount of formaldehyde used in the process. Previous attempts to reduce the formaldehyde have lowered the filtration rate. The current filtration rate is 75 gallons per hour. Four factors are considered: temperature (A), pressure (B), formaldehyde concentration (C), and stirring rate (D). Each of the four factors will be tested at two levels. From here onwards, the minus (−) and plus (+) signs will indicate whether the factor is run at a low or high level, respectively.
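As an illustration (not part of the original text), the Python sketch below generates the 16-run $2^4$ design matrix in ±1 coding, in the same standard order as the design matrix that follows, and estimates effects from the filtration-rate responses listed there. Halving an effect gives the corresponding regression coefficient reported in the ANOVA tables below, up to rounding.

```python
# Build the 2^4 design matrix in -/+ coding (standard order, factor A varying
# fastest, matching the design matrix below) and estimate factorial effects.
from itertools import product
from math import prod

factors = ["A", "B", "C", "D"]
runs = [dict(zip(factors, reversed(levels)))
        for levels in product([-1, +1], repeat=len(factors))]

# The 16 filtration rates, in the same run order as the table that follows.
y = [45, 71, 48, 65, 68, 60, 80, 65, 43, 100, 45, 104, 75, 86, 70, 96]

def effect(term, runs, y):
    """Effect of a main effect or interaction written as "A" or "A:C":
    the signed sum of the responses divided by half the number of runs."""
    signs = [prod(run[f] for f in term.split(":")) for run in runs]
    return sum(s * yi for s, yi in zip(signs, y)) / (len(runs) / 2)

for term in ["A", "B", "C", "D", "A:C", "A:D"]:
    # Half of each effect reproduces the regression coefficients reported in
    # the ANOVA tables below, e.g. A -> 10.81 and A:C -> -9.06 (up to rounding).
    print(term, effect(term, runs, y) / 2)
```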
Design matrix and resulting filtration rate:

A B C D   Filtration rate
− − − −   45
+ − − −   71
− + − −   48
+ + − −   65
− − + −   68
+ − + −   60
− + + −   80
+ + + −   65
− − − +   43
+ − − +   100
− + − +   45
+ + − +   104
− − + +   75
+ − + +   86
− + + +   70
+ + + +   96

• Plot of the main effects showing the filtration rates for the low (−) and high (+) settings for each factor. • Plot of the interaction effects showing the mean filtration rate at each of the four possible combinations of levels for a given pair of factors. The non-parallel lines in the A:C interaction plot indicate that the effect of factor A depends on the level of factor C. A similar result holds for the A:D interaction. The graphs indicate that factor B has little effect on filtration rate. The analysis of variance (ANOVA) including all 4 factors and all possible interaction terms between them yields the coefficient estimates shown in the table below.

ANOVA results:

Coefficient   Estimate
Intercept     70.063
A             10.813
B             1.563
C             4.938
D             7.313
A:B           0.063
A:C           −9.063
B:C           1.188
A:D           8.313
B:D           −0.188
C:D           −0.563
A:B:C         0.938
A:B:D         2.063
A:C:D         −0.813
B:C:D         −1.313
A:B:C:D       0.688

Because there are 16 observations and 16 coefficients (intercept, main effects, and interactions), p-values cannot be calculated for this model. The coefficient values and the graphs suggest that the important factors are A, C, and D, and the interaction terms A:C and A:D. The coefficients for A, C, and D are all positive in the ANOVA, which would suggest running the process with all three variables set to the high value. However, the main effect of each variable is the average over the levels of the other variables. The A:C interaction plot above shows that the effect of factor A depends on the level of factor C, and vice versa. Factor A (temperature) has very little effect on filtration rate when factor C is at the + level. But Factor A has a large effect on filtration rate when factor C (formaldehyde) is at the − level. The combination of A at the + level and C at the − level gives the highest filtration rate. This observation indicates how one-factor-at-a-time analyses can miss important interactions. Only by varying both factors A and C at the same time could the engineer discover that the effect of factor A depends on the level of factor C. The best filtration rate is seen when A and D are at the high level, and C is at the low level. This result also satisfies the objective of reducing formaldehyde (factor C). Because B does not appear to be important, it can be dropped from the model. Performing the ANOVA using factors A, C, and D, and the interaction terms A:C and A:D, gives the result shown in the following table, in which all the terms are significant (p-value < 0.05).

ANOVA results:

Coefficient   Estimate   Standard error   t value   p-value
Intercept     70.062     1.104            63.444    2.3 × 10⁻¹⁴
A             10.812     1.104            9.791     1.9 × 10⁻⁶
C             4.938      1.104            4.471     1.2 × 10⁻³
D             7.313      1.104            6.622     5.9 × 10⁻⁵
A:C           −9.063     1.104            −8.206    9.4 × 10⁻⁶
A:D           8.312      1.104            7.527     2 × 10⁻⁵

See also • Combinatorial design • Design of experiments • Orthogonal array • Plackett–Burman design • Taguchi methods • Welch's t-test Notes 1. Yates, Frank; Mather, Kenneth (1963). "Ronald Aylmer Fisher". Biographical Memoirs of Fellows of the Royal Society. London, England: Royal Society. 9: 91–120. doi:10.1098/rsbm.1963.0006. 2. Fisher, Ronald (1926). "The Arrangement of Field Experiments" (PDF). Journal of the Ministry of Agriculture of Great Britain. London, England: Ministry of Agriculture and Fisheries. 33: 503–513. 3.
"Earliest Known Uses of Some of the Words of Mathematics (F)". jeff560.tripod.com. 4. Montgomery, Douglas C. (2013). Design and Analysis of Experiments (8th ed.). Hoboken, New Jersey: Wiley. ISBN 978-1-119-32093-7. 5. Oehlert, Gary (2000). A First Course in Design and Analysis of Experiments (Revised ed.). New York City: W. H. Freeman and Company. ISBN 978-0-7167-3510-6. 6. George E.P., Box (2006). Improving Almost Anything: Ideas and Essays (Revised ed.). Hoboken, New Jersey: Wiley. ASIN B01FKSM9VY. 7. Hellstrand, C.; Oosterhoorn, A. D.; Sherwin, D. J.; Gerson, M. (24 February 1989). "The Necessity of Modern Quality Improvement and Some Experience with its Implementation in the Manufacture of Rolling Bearings [and Discussion]". Philosophical Transactions of the Royal Society. 327 (1596): 529–537. doi:10.1098/rsta.1989.0008. S2CID 122252479. 8. Penn State University College of Health and Human Development (2011-12-22). "Introduction to Factorial Experimental Designs". 9. Bose (1947, pp. 110–111) 10. Beder (2022, pp. 29–30) 11. Graybill (1976, pp. 559–560) 12. Beder (2022, pp. 164–165) 13. Cheng (2019, pp. 77–81) 14. Beder (2022, pp. 180-190, 193-195) 15. Hicks (1982, p. 298) 16. Wu & Hamada (2021, p. 269) 17. Dean, Voss & Draguljić (2017, Sec. 14.2) 18. Montgomery (2013, Confounding in the $2^{k}$ Factorial Design; Confounding in the $3^{k}$ Factorial Design) 19. Cohen, J (1968). "Multiple regression as a general data-analytic system". Psychological Bulletin. 70 (6): 426–443. CiteSeerX 10.1.1.476.6180. doi:10.1037/h0026714. References • Beder, Jay H. (2022). Linear Models and Design. Cham, Switzerland: Springer. doi:10.1007/978-3-031-08176-7. ISBN 978-3-031-08175-0. S2CID 253542415. • Bose, R. C. (1947). "Mathematical theory of the symmetrical factorial design". Sankhya. 8: 107–166. • Box, G. E.; Hunter, W. G.; Hunter, J. S. (2005). Statistics for Experimenters: Design, Innovation, and Discovery (2nd ed.). Wiley. ISBN 978-0-471-71813-0. • Cheng, Ching-Shui (2019). Theory of Factorial Design: Single- and Multi-Stratum Experiments. Boca Raton, Florida: CRC Press. ISBN 978-0-367-37898-1. • Dean, Angela; Voss, Daniel; Draguljić, Danel (2017). Design and Analysis of Experiments (2nd ed.). Cham, Switzerland: Springer. ISBN 978-3-319-52250-0. • Graybill, Franklin A. (1976). Fundamental Concepts in the Design of Experiments (3rd ed.). New York: Holt, Rinehart and Winston. ISBN 0-03-061706-5. • Hicks, Charles R. (1982). Theory and Application of the Linear Model. Pacific Grove, CA: Wadsworth & Brooks/Cole. ISBN 0-87872-108-8. • Wu, C. F. Jeff; Hamada, Michael S. (30 March 2021). Experiments: Planning, Analysis, and Optimization. John Wiley & Sons. ISBN 978-1-119-47010-6. 
External links • Introduction to Factorial Experimental Designs (The Methodology Center, Penn State University) • Factorial Designs (California State University, Fresno) • GOV.UK Factorial randomised controlled trials (Public Health England)
Frequentist inference Point estimation • Estimating equations • Maximum likelihood • Method of moments • M-estimator • Minimum distance • Unbiased estimators • Mean-unbiased minimum-variance • Rao–Blackwellization • Lehmann–Scheffé theorem • Median unbiased • Plug-in Interval estimation • Confidence interval • Pivot • Likelihood interval • Prediction interval • Tolerance interval • Resampling • Bootstrap • Jackknife Testing hypotheses • 1- & 2-tails • Power • Uniformly most powerful test • Permutation test • Randomization test • Multiple comparisons Parametric tests • Likelihood-ratio • Score/Lagrange multiplier • Wald Specific tests • Z-test (normal) • Student's t-test • F-test Goodness of fit • Chi-squared • G-test • Kolmogorov–Smirnov • Anderson–Darling • Lilliefors • Jarque–Bera • Normality (Shapiro–Wilk) • Likelihood-ratio test • Model selection • Cross validation • AIC • BIC Rank statistics • Sign • Sample median • Signed rank (Wilcoxon) • Hodges–Lehmann estimator • Rank sum (Mann–Whitney) • Nonparametric anova • 1-way (Kruskal–Wallis) • 2-way (Friedman) • Ordered alternative (Jonckheere–Terpstra) • Van der Waerden test Bayesian inference • Bayesian probability • prior • posterior • Credible interval • Bayes factor • Bayesian estimator • Maximum posterior estimator • Correlation • Regression analysis Correlation • Pearson product-moment • Partial correlation • Confounding variable • Coefficient of determination Regression analysis • Errors and residuals • Regression validation • Mixed effects models • Simultaneous equations models • Multivariate adaptive regression splines (MARS) Linear regression • Simple linear regression • Ordinary least squares • General linear model • Bayesian regression Non-standard predictors • Nonlinear regression • Nonparametric • Semiparametric • Isotonic • Robust • Heteroscedasticity • Homoscedasticity Generalized linear model • Exponential families • Logistic (Bernoulli) / Binomial / Poisson regressions Partition of variance • Analysis of variance (ANOVA, anova) • Analysis of covariance • Multivariate ANOVA • Degrees of freedom Categorical / Multivariate / Time-series / Survival analysis Categorical • Cohen's kappa • Contingency table • Graphical model • Log-linear model • McNemar's test • Cochran–Mantel–Haenszel statistics Multivariate • Regression • Manova • Principal components • Canonical correlation • Discriminant analysis • Cluster analysis • Classification • Structural equation model • Factor analysis • Multivariate distributions • Elliptical distributions • Normal Time-series General • Decomposition • Trend • Stationarity • Seasonal adjustment • Exponential smoothing • Cointegration • Structural break • Granger causality Specific tests • Dickey–Fuller • Johansen • Q-statistic (Ljung–Box) • Durbin–Watson • Breusch–Godfrey Time domain • Autocorrelation (ACF) • partial (PACF) • Cross-correlation (XCF) • ARMA model • ARIMA model (Box–Jenkins) • Autoregressive conditional heteroskedasticity (ARCH) • Vector autoregression (VAR) Frequency domain • Spectral density estimation • Fourier analysis • Least-squares spectral analysis • Wavelet • Whittle likelihood Survival Survival function • Kaplan–Meier estimator (product limit) • Proportional hazards models • Accelerated failure time (AFT) model • First hitting time Hazard function • Nelson–Aalen estimator Test • Log-rank test Applications Biostatistics • Bioinformatics • Clinical trials / studies • Epidemiology • Medical statistics Engineering statistics • Chemometrics • Methods engineering • Probabilistic 
design • Process / quality control • Reliability • System identification Social statistics • Actuarial science • Census • Crime statistics • Demography • Econometrics • Jurimetrics • National accounts • Official statistics • Population statistics • Psychometrics Spatial statistics • Cartography • Environmental statistics • Geographic information system • Geostatistics • Kriging • Category •  Mathematics portal • Commons • WikiProject
Wikipedia
Correlations and Pairing Between Zeros and Critical Points of Gaussian Random Polynomials
Boris Hanin
Abstract: We study the asymptotics of correlations and nearest neighbor spacings between zeros and holomorphic critical points of $p_N$, a degree N Hermitian Gaussian random polynomial in the sense of Shiffman and Zelditch, as N goes to infinity. By holomorphic critical point we mean a solution to the equation $\frac{d}{dz}p_N(z)=0.$ Our principal result is an explicit asymptotic formula for the local scaling limit of $\mathbb{E}[Z_{p_N}\wedge C_{p_N}],$ the expected joint intensity of zeros and critical points, around any point on the Riemann sphere. Here $Z_{p_N}$ and $C_{p_N}$ are the currents of integration (i.e. counting measures) over the zeros and critical points of $p_N$, respectively. We prove that correlations between zeros and critical points are short range, decaying like $e^{-N|z-w|^2}.$ With $|z-w|$ on the order of $N^{-1/2},$ however, $\mathbb{E}[Z_{p_N}\wedge C_{p_N}](z,w)$ is sharply peaked near $z=w,$ causing zeros and critical points to appear in rigid pairs. We compute tight bounds on the expected distance and angular dependence between a critical point and its paired zero.
Pairing of Zeros and Critical Points for Random Meromorphic Functions on Riemann Surfaces
Abstract: We prove that zeros and critical points of a random polynomial $p_N$ of degree $N$ in one complex variable appear in pairs. More precisely, if $p_N$ is conditioned to have $p_N(\xi)=0$ for a fixed $\xi \in \mathbb{C}\setminus\{0\},$ we prove that there is a unique critical point $z$ in the annulus $N^{-1-\epsilon}<|z-\xi|< N^{-1+\epsilon}$ and no critical points closer to $\xi$ with probability at least $1-O(N^{-3/2+3\epsilon}).$ We also prove an analogous statement in the more general setting of random meromorphic functions on a closed Riemann surface.
Scaling Limit for the Kernel of the Spectral Projector and Remainder Estimates in the Pointwise Weyl Law
Yaiza Canzani, Boris Hanin
Abstract: We obtain new off-diagonal remainder estimates for the kernel of the spectral projector of the Laplacian onto frequencies up to $\lambda$. A corollary is that the kernel of the spectral projector onto frequencies $(\lambda, \lambda+1]$ has a universal scaling limit as $\lambda$ goes to infinity at any non-self-focal point. Our results also imply that immersions of manifolds without conjugate points into Euclidean space by arrays of eigenfunctions with frequencies in $(\lambda, \lambda + 1]$ are embeddings for all $\lambda$ sufficiently large. Finally, we find precise asymptotics for sup norms of gradients of linear combinations of eigenfunctions with frequencies in $(\lambda, \lambda + 1]$.
High Frequency Eigenfunction Immersions and Supremum Norms of Random Waves
Abstract: A compact Riemannian manifold may be immersed into Euclidean space by using high frequency Laplace eigenfunctions. We study the geometry of the manifold viewed as a metric space endowed with the distance function from the ambient Euclidean space. As an application we give a new proof of a result of Burq-Lebeau and others on upper bounds for the sup-norms of random linear combinations of high frequency eigenfunctions.
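A quick numerical illustration of the zero/critical-point pairing described in the first two abstracts above (an independent sketch, not code from the papers; it assumes the SU(2)/Kostlan model for the Gaussian random polynomial and measures distances in the chordal metric on the Riemann sphere):

    import numpy as np
    from math import comb

    rng = np.random.default_rng(0)

    def median_scaled_pairing_distance(N, trials=100):
        # Median over samples of N * (chordal distance from each zero to its
        # nearest critical point) for degree-N Kostlan random polynomials.
        scaled = []
        for _ in range(trials):
            a = rng.standard_normal(N + 1) + 1j * rng.standard_normal(N + 1)
            coeffs = a * np.sqrt([comb(N, k) for k in range(N + 1)])
            p = coeffs[::-1]                      # numpy wants highest degree first
            zeros = np.roots(p)
            crits = np.roots(np.polyder(p))
            num = np.abs(zeros[:, None] - crits[None, :])
            den = np.sqrt((1 + np.abs(zeros[:, None]) ** 2) *
                          (1 + np.abs(crits[None, :]) ** 2))
            d = (num / den).min(axis=1)           # chordal distance to nearest critical point
            scaled.extend(N * d)
        return float(np.median(scaled))

    # Moderate degrees keep np.roots numerically stable; the rescaled distances
    # stay of order one, while the typical spacing between zeros is only ~ N**-0.5,
    # so each zero has a critical point much closer than its nearest other zero.
    for N in (20, 40, 80):
        print(N, median_scaled_pairing_distance(N))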
Mean of the $L^\infty$-norm for $L^2$-normalized random waves on compact aperiodic Riemannian manifolds
Abstract: This article concerns upper bounds for $L^\infty$-norms of random approximate eigenfunctions of the Laplace operator on a compact aperiodic Riemannian manifold $(M,g).$ We study $f_{\lambda}$ chosen uniformly at random from the space of $L^2$-normalized linear combinations of Laplace eigenfunctions with eigenvalues in the interval $(\lambda^2, (\lambda+1)^2].$ Our main result is that the expected value of $\|f_\lambda\|_\infty$ grows at most like $C \sqrt{\log \lambda}$ as $\lambda \to \infty$, where $C$ is an explicit constant depending only on the dimension and volume of $(M,g).$ In addition, we obtain concentration of the $L^\infty$-norm around its mean and median and study the analogous problems for Gaussian random waves on $(M,g).$
Nodal Sets of Random Eigenfunctions for the Isotropic Harmonic Oscillator
Boris Hanin, Steve Zelditch, Peng Zhou
Abstract: We consider Gaussian random eigenfunctions (Hermite functions) of fixed energy level of the isotropic semi-classical Harmonic Oscillator on ${\bf R}^n$. We calculate the expected density of zeros of a random eigenfunction in the semi-classical limit $h \to 0.$ In the allowed region the density is of order $h^{-1},$ while in the forbidden region the density is of order $h^{-\frac{1}{2}}$. The computer graphics due to E.J. Heller illustrate this difference in "frequency" between the allowed and forbidden nodal sets.
Developing an Expert and Reflexive Approach to Problem-Solving: The Place of Emotional Knowledge and Skills [PDF]
Vanessa Hanin, Catherine Van Nieuwenhoven
Psychology (PSYCH), 2018, DOI: 10.4236/psych.2018.92018
Abstract: Nowadays it is widely accepted that mathematics and, especially, problem-solving tasks, are particularly concerned by the issue of emotions. Yet, educational interventions designed to improve students' problem-solving competence and performance still mainly focus on cognitive and metacognitive knowledge and skills. The main purpose of this study was to design and assess the benefit of a training program that promotes the development of not only cognitive but also emotional knowledge and skills. This benefit was assessed using four variables, namely, problem-solving performance, problem-solving competence, academic emotions and emotion regulation strategies. 428 fifth and sixth graders took part in the study, split into four conditions: 1) a "cognition" condition which received an intervention on an eight-step problem-solving process; 2) an "emotion" condition in which emotional knowledge and skills were developed through various activities; 3) an "emotion and cognition" condition overlapping the two previous ones, and 4) a "control" condition. The findings showed that the "emotion and cognition" condition and the "cognition" condition had equivalent cognitive efficiency. However, only the former reduced negative emotions, aroused the emergence of positive ones, promoted the use of adaptive emotion regulation strategies and discouraged the use of maladaptive ones. The practical implications for educational practices and possible avenues for further research are discussed.
Why Victory in the War on Cancer Remains Elusive: Biomedical Hypotheses and Mathematical Models
Leonid Hanin
Cancers, 2011, DOI: 10.3390/cancers3010340
Abstract: We discuss philosophical, methodological, and biomedical grounds for the traditional paradigm of cancer and some of its critical flaws.
We also review some potentially fruitful approaches to understanding cancer and its treatment. This includes the new paradigm of cancer that was developed over the last 15 years by Michael Retsky, Michael Baum, Romano Demicheli, Isaac Gukas, William Hrushesky and their colleagues on the basis of earlier pioneering work of Bernard Fisher and Judah Folkman. Next, we highlight the unique and pivotal role of mathematical modeling in testing biomedical hypotheses about the natural history of cancer and the effects of its treatment, elaborate on model selection criteria, and mention some methodological pitfalls. Finally, we describe a specific mathematical model of cancer progression that supports all the main postulates of the new paradigm of cancer when applied to the natural history of a particular breast cancer patient and fit to the observables.
Sequential and concomitant non-bismuth quadruple therapies are ineffective for H. pylori eradication in Palestine. A randomized trial [PDF]
Yasser Abu-Safieh, Hanin Yamin
Open Journal of Gastroenterology (OJGas), 2012, DOI: 10.4236/ojgas.2012.24034
Abstract: Background: Increasing clarithromycin resistance has undermined the effectiveness of traditional clarithromycin-containing triple eradication therapy of Helicobacter pylori infections. Sequential and concomitant therapies show improved outcome with clarithromycin resistance. Aim: To evaluate the effectiveness of sequential and concomitant 4-drug non-bismuth therapies for eradication of Helicobacter pylori in a prospective, randomized, clinical trial conducted in Palestine. Patients and Methods: Patients who underwent upper endoscopy for a clinical indication and tested positive for the rapid urease test were included. Subjects were randomly allocated into two groups: one received a modified sequential therapy: esomeprazole 40 mg OD and amoxicillin 1 g BID for 5 days then esomeprazole 40 mg OD, clarithromycin 500 mg BID and tinidazole 500 mg BID for another 5 days. The other group received concomitant therapy in which the same 4 drugs and doses were all given daily for 10 days. Stool antigen was tested 4 weeks after completion of treatment. Results: Five hundred thirty-three (533) patients were tested for H. pylori and 180 (34%) were positive; 141 patients were included in the study and 112 patients completed. The overall per-protocol eradication rate was 74% (95% CI = 65.9% - 82.1%). The eradication rate for sequential therapy was 70.9% (95% CI = 58.9% - 82.9%) and for concomitant therapy 77.2% (95% CI = 66.3% - 88.1%). The intention-to-treat results were: sequential 61%, concomitant 57%. Conclusion: Neither sequential nor concomitant therapy achieved an acceptable H. pylori eradication rate in Palestine.
Identification problem for stochastic models with application to carcinogenesis, cancer detection and radiation biology
L. G. Hanin
Discrete Dynamics in Nature and Society, 2002, DOI: 10.1080/1026022021000001454
Abstract: A general framework for solving the identification problem for a broad class of deterministic and stochastic models is discussed. This methodology allows for a unified approach to studying identifiability of various stochastic models arising in biology and medicine including models of spontaneous and induced carcinogenesis, tumor progression and detection, and randomized hit and target models of irradiated cell survival.
A variety of known results on parameter identification for stochastic models is reviewed and several new results are presented with an emphasis on rigorous mathematical development.
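The 95% confidence intervals quoted in the H. pylori trial abstract above can be checked with the usual normal (Wald) approximation for a proportion; a back-of-the-envelope sketch, assuming the per-protocol denominator of 112 completers for the overall rate (the per-arm denominators are not reported, but the same formula applies):

    from math import sqrt

    def wald_ci_percent(p_hat, n, z=1.96):
        # 95% normal-approximation interval for a proportion, in percent
        half = z * sqrt(p_hat * (1 - p_hat) / n)
        return 100 * (p_hat - half), 100 * (p_hat + half)

    print(wald_ci_percent(0.74, 112))   # ~ (65.9, 82.1), matching the reported overall CI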
CommonCrawl
\begin{document} \title{A note on the spectra and eigenspaces of the universal adjacency matrices of arbitrary lifts of graphs } \author{C. Dalf\'o\footnote{Departament de Matem\`atica, Universitat de Lleida, Igualada (Barcelona), Catalonia, {\tt{[email protected]}}}, M. A. Fiol\footnote{Departament de Matem\`atiques, Universitat Polit\`ecnica de Catalunya; and Barcelona Graduate School of Mathematics, Barcelona, Catalonia, {\tt{[email protected]}}}, S. Pavl\'ikov\'a\footnote{Department of Mathematics and Descriptive Geometry, Slovak University of Technology, Bratislava, Slovak Republic, {\tt {[email protected]}}}, and J. \v{S}ir\'a\v{n}\footnote{Department of Mathematics and Statistics, The Open University, Milton Keynes, UK; and Department of Mathematics and Descriptive Geometry, Slovak University of Technology, Bratislava, Slovak Republic, {\tt {[email protected]}}} } \date{} \maketitle \begin{abstract} The universal adjacency matrix $U$ of a graph $\Gamma$, with adjacency matrix $A$, is a linear combination of $A$, the diagonal matrix $D$ of vertex degrees, the identity matrix $I$, and the all-1 matrix $J$ with real coefficients, that is, $U=c_1 A+c_2 D+c_3 I+ c_4 J$, with $c_i\in \mathbb R$ and $c_1\neq 0$. Thus, as particular cases, $U$ may be the adjacency matrix, the Laplacian, the signless Laplacian, and the Seidel matrix. In this note, we show that basically the same method introduced before by the authors can be applied for determining the spectra and bases of all the corresponding eigenspaces of arbitrary lifts of graphs (regular or not). \end{abstract} \noindent{\it Keywords:} Graph; covering; lift; universal adjacency matrix; spectrum; eigenspace. \noindent{\it 2010 Mathematics Subject Classification:} 05C20, 05C50, 15A18. \blfootnote{\begin{minipage}[l]{0.3\textwidth} \includegraphics[trim=10cm 6cm 10cm 5cm,clip,scale=0.15]{eu_logo} \end{minipage} \hspace{-2cm} \begin{minipage}[l][1cm]{0.79\textwidth} The first author has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No 734922. \end{minipage}} \section{Introduction}\label{sec:int} For a graph $\Gamma$ on $n$ vertices, with adjacency matrix $A$ and degree sequence $d_1,\ldots, d_n$, the {\em universal adjacency matrix} is defined as $U=c_1 A+c_2 D+c_3 I+ c_4 J$, with $c_i\in \mathbb R$ and $c_1\neq 0$, where $D=\diag(d_1,\ldots,d_n)$, $I$ is the identity matrix, and $J$ is the all-1 matrix. See, for instance, Haemers and Omidi \cite{ho11} or Farrugia and Sciriha \cite{fc15}. Thus, for given values of the coefficients $(c_1,c_2,c_3,c_4)$, the universal adjacency matrix particularizes to important matrices used in algebraic graph theory, such as the adjacency matrix $(1,0,0,0)$, the Laplacian $(-1,1,0,0)$, the signless Laplacian $(1,1,0,0)$, and the Seidel matrix $(-2,0,-1,1)$. Because of the interest in the study of combinatorial properties, some methods were proposed to determine the spectra of graphs with some symmetries. This includes graphs with transitive groups (Lov\'asz \cite{l75}), Cayley graphs (Babai \cite{ba79}), and some lifts of voltage graphs, or covers (Godsil and Hensel \cite{gh92}).
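For instance, the last of these coefficient tuples can be read off directly from the definition of the Seidel matrix: since $S$ has zero diagonal, entries $-1$ between adjacent vertices, and entries $+1$ between distinct non-adjacent vertices, it satisfies
$$
S=J-I-2A=-2A+0\cdot D-I+J,
$$
which is precisely the choice $(c_1,c_2,c_3,c_4)=(-2,0,-1,1)$; the tuples for $A$, $L=D-A$, and $Q=D+A$ follow in the same way.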
More recently, by using representation theory (see, for instance, Burrow~\cite{b93}, and James and Liebeck~\cite{JaLi}), Dalf\'o, Fiol, and \v{S}ir\'a\v{n} \cite{dfs19} considered a more general construction and derived a method for determining the spectrum of a regular lift of a `base' (di)graph equipped with an ordinary voltage assignment, or, equivalently, the spectrum of a regular cover of a (di)graph. Recall, however, that not all coverings are regular; a description of arbitrary graph coverings by the so-called permutation voltage assignments was given by Gross and Tucker \cite{gt77}. In \cite{dfps19}, the authors generalized their previous results to arbitrary lifts of graphs (regular or not). They not only gave the complete spectra of lifts, but also provided bases of the corresponding eigenspaces, both in a very explicit way. In this note, we show that (basically) the same method of \cite{dfps19} can be applied to find the spectra and eigenspaces of the universal adjacency matrices of lifts. This note is organized as follows. In the next section, we recall the basic notions of permutation voltage assignments on a graph, together with their associated lifts, along with an equivalent description of these in terms of relative voltage assignments. Section \ref{sec:main-result} deals with the main result, where the complete spectrum of the universal adjacency matrix of a relative lift graph is determined, together with bases of the associated eigenspaces. Finally, our method is illustrated by an example in Section \ref{sec:example}. \section{General lifts of a graph} Let $\Gamma$ be an undirected graph (possibly with loops and multiple edges), and let $n$ be a positive integer. As usual in algebraic and topological graph theory, we think of every undirected edge joining vertices $u$ and $v$ (not excluding the case when $u=v$) as consisting of a pair of oppositely directed arcs; if one of them is denoted $a$, then the opposite one is denoted $a^-$. Let $V=V(\Gamma)$ and $X=X(\Gamma)$ be the sets of vertices and arcs of $\Gamma$, respectively. The following concepts were introduced by Gross in \cite{g74}. Let $G$ be a group. An ({\em ordinary\/}) {\em voltage assignment} on the graph $\Gamma$ is a mapping $\alpha: X\to G$ with the property that $\alpha(a^-)=(\alpha(a))^{-1}$ for every arc $a\in X$. Thus, a voltage assigns an element $g\in G$ to each arc of the graph in such a way that a pair of mutually reverse arcs $a$ and $a^{-}$, forming an undirected edge, receive mutually inverse elements $g$ and $g^{-1}$. The graph $\Gamma$ and the voltage assignment $\alpha$ determine a new graph $\Gamma^{\alpha}$, called the {\em lift} of $\Gamma$, which is defined as follows. The vertex and arc sets of the lift are simply the Cartesian products $V^{\alpha}=V\times G$ and $X^{\alpha}=X\times G$, respectively. Moreover, for every arc $a\in X$ from a vertex $u$ to a vertex $v$ for $u,v\in V$ (possibly, $u=v$) in $\Gamma$, and for every $g\in G$, there is an arc $(a,g)\in X^{\alpha}$ from the vertex $(u,g)\in V^{\alpha}$ to the vertex $(v,g\alpha(a))\in V^{\alpha}$. Notice that if $a$ and $a^{-}$ are a pair of mutually reverse arcs forming an undirected edge of $\Gamma$, then for every $g\in G$ the pair $(a,g)$ and $(a^-,g\alpha(a))$ form an undirected edge of the lift $\Gamma^{\alpha}$, making the lift an undirected graph.
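As a quick computational illustration (a sketch under the stated conventions, not part of the formal development), the adjacency matrix of a lift can be assembled directly from this description by replacing every arc of $\Gamma$ by the $n\times n$ permutation matrix describing how its voltage permutes the fibre over its initial vertex (for an ordinary assignment in $G$, this is the regular action of $\alpha(a)$ on $G$, with $n=|G|$). In the sketch below, voltages are written as permutations of $\{0,\dots,n-1\}$ and both orientations of every edge are listed, with mutually inverse voltages, so that the result is symmetric.
\begin{verbatim}
import numpy as np

def lift_adjacency(k, n, arcs):
    # Vertex (u, i) of the lift is indexed by u*n + i.
    # arcs: list of triples (u, v, sigma) with u, v in range(k) and sigma a
    # list of length n giving the voltage of the arc from u to v.
    A = np.zeros((k * n, k * n), dtype=int)
    for u, v, sigma in arcs:
        for i in range(n):
            A[u * n + i, v * n + sigma[i]] += 1
    return A

# The base graph of Section 4: an edge joining u and v with voltage e,
# and a loop at v with voltage s = (123) (inverse t = (132)).
e, s, t = [0, 1, 2], [1, 2, 0], [2, 0, 1]
print(lift_adjacency(2, 3, [(0, 1, e), (1, 0, e), (1, 1, s), (1, 1, t)]))
\end{verbatim}
The resulting $6\times 6$ matrix is the adjacency matrix of the lift of Figure \ref{fig1}, and its eigenvalues agree with the adjacency spectrum computed in Section \ref{sec:example}.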
The mapping $\pi:\Gamma^{\alpha}\to \Gamma$ defined by erasing the second coordinate, that is, $\pi(u,g)=u$ and $\pi(a,g)=a$, for every $u\in V$, $a\in X$ and $g\in G$, is a (regular) {\em covering}, with its usual meaning in algebraic topology; see, for instance, Gross and Tucker \cite{gt87}. To generate all (not necessarily regular) graph coverings, Gross and Tucker \cite{gt77} introduced the more general concept of permutation voltage assignment. With this aim, let $G$ be a subgroup of the symmetric group ${\rm Sym}(n)$, that is, a permutation group on the set $[n]=\{1,2, \ldots, n\}$. Then, a {\em permutation voltage assignment} on $\Gamma$ is a mapping $\alpha: X\to G$ with the same symmetric property as before. So, the lift $\Gamma^{\alpha}$, with vertex set $V^{\alpha}=V\times [n]$ and arc set $X^{\alpha}=X\times [n]$, has an arc $(a,i)\in X^{\alpha}$ from the vertex $(u,i)$ to the vertex $(v,j)$ if and only if $a=(u,v)\in X$ and $j=i\alpha(a)$. Notice that we write the argument of a permutation to the left of the symbol of the permutation, with the composition being read from the left to the right. The permutation voltage assignments and the corresponding lifts can, equivalently, be described in the language of the so-called relative voltage assignments defined as follows. Let $\Gamma$ be the graph considered above, $K$ a group and $H$ a subgroup of $K$ of index $n$; and let $K/H$ denote the set of right cosets of $H$ in $K$. Furthermore, let $\beta: X\to K$ be a mapping satisfying $\beta(a^-)= (\beta(a))^{-1}$ for every arc $a\in X$; in this context, one calls $\beta$ a {\em voltage assignment in $K$ relative to $H$}, or simply a relative voltage assignment. The {\em relative lift} $\Gamma^{\beta}$ has vertex set $V^{\beta}= V\times K/H$ and arc set $X^{\beta}=X\times K/H$. The incidence in the lift is given as expected: If $a$ is an arc from a vertex $u$ to a vertex $v$ in $\Gamma$, then for every right coset $J\in K/H$ there is an arc $(a,J)$ from the vertex $(u,J)$ to the vertex $(v,J\beta(a))$ in $\Gamma^{\beta}$. We slightly abuse the notation and denote by the same symbol $\pi$ the covering $\Gamma^{\beta}\to \Gamma$ given by suppressing the second coordinate. Under additional and very natural assumptions, there is a $1$-to-$1$ correspondence between {\em connected} lifts generated by permutation and relative voltage assignments on a connected base graph $\Gamma$. To present more details, fix a vertex $u$ in the base graph and let ${\cal W}_u$ be the collection of all closed walks in the base graph that begin and end at $u$. If $\gamma$ is either a permutation or a relative voltage assignment on $\Gamma$ in some $G\le {\rm Sym}(n)$ or in some group $K$ relative to a subgroup $H$, respectively, and if $W=a_1a_2\ldots a_k$ is a sequence of arcs forming a walk in ${\cal W}_u$, we let $\gamma(W)=\gamma(a_1) \gamma(a_2)\ldots \gamma(a_k)$ denote the product of the voltages taken as the walk is traversed. Then, the set $\{\gamma(W);\ W\in {\cal W}_u\}$ forms a subgroup of $G$ or $K$, known as the {\em local group}, and it will be denoted by $G_u$ and $K_u$, respectively. It is well known that there is no loss of generality in the study of voltage assignments and lifts under the assumption that $G_u=G$ and $K_uH=K$, meaning that there are no `superfluous' voltages in the assignments. We are now ready to explain the correspondence between the two kinds of lifts.
First, let $\alpha$ be a permutation voltage assignment as in the previous paragraph, taking place in a permutation group $G\le {\rm Sym}(n)$. Assume that $G=G_u$ as above and, moreover, that $G$ is transitive on $[n]$; the two conditions together are equivalent to the connectivity of the lift. Then, the corresponding relative voltage group is $K=G$, with the subgroup $H$ being the stabilizer in $K$ of an arbitrarily chosen point in the set $[n]$. The corresponding relative assignment $\beta$ is simply identical to $\alpha$, but acting by right multiplication on the set $K/H$. Observe that, in this construction, the core of $H$ in $K$, that is, the intersection of all $K$-conjugates of $H$, is trivial. Conversely, let $\beta$ be a voltage assignment in a group $K$ relative to a subgroup $H$ of index $n$ and with a trivial core in $K$. Assume again that $K_uH=K$. Now, this alone guarantees the connectivity of the lift. Then, one may identify the set $K/H$ with $[n]$ in an arbitrary way, and $\alpha(a)$ for $a\in X$ is the permutation of $[n]$ induced (in this identification) by right multiplication on the set of (right) cosets $K/H$ by $\beta(a)\in K$. We note that there are fine points in this correspondence to be considered on the `permutation' side if $G_u$ is intransitive or a proper subgroup of $G$, and on the `relative' side if $H$ is not core-free or if $K_uH$ is proper in $K$; these details are, however, irrelevant for our purposes. We also point out that a covering described in terms of a permutation voltage assignment as above is regular (that is, generated by an ordinary voltage assignment) if and only if the corresponding voltage group is a regular permutation group on the set $[n]$. Of course, this translates to the language of relative voltage assignments by the normality of $H$ in $K$. In such a case, the covering admits a description in terms of an ordinary voltage assignment in the factor group $K/H$ and with voltage $H\beta(a)$ assigned to an arc $a\in X$ with original relative voltage $\beta(a)$. \section{The spectrum of the universal adjacency matrix of a relative lift} \label{sec:main-result} Let $\Gamma$ be a connected graph on $k$ vertices (with loops and multiple edges allowed), and let $\alpha$ be a relative voltage assignment on the arc set $X$ of $\Gamma$ in a group $G$ with identity element $e$, relative to a subgroup $H$ of index $n$. Now we show that the spectrum of (the universal adjacency matrix of) the relative lift $\Gamma^{\alpha}$ may be computed by using the same result by the authors in \cite{dfps19}. The only basic difference is the so-called base matrix, which, in our general case, must be defined as follows. \begin{definition} \label{B(U)} To the pair $(\Gamma,\alpha)$ as above, we assign the $k\times k$ {\em universal base matrix} $B(U)$ defined by the sum $$ B(U)=c_1B(A)+c_2B(D)+c_3B(I)+c_4B(J), $$ where the matrices $B(A)$, $B(D)$, $B(I)$, and $B(J)$ have entries as follows: \begin{itemize} \item $B(A)_{uv}=\alpha(a_1)+\cdots +\alpha(a_j)$ if $a_1,\ldots,a_j$ is the set of all the arcs of $\Gamma$ from $u$ to $v$, not excluding the case $u=v$, and $B(A)_{uv}=0$ if $(u,v)\not\in X$; \item $B(D)_{uu}=\deg(u)e$, and $B(D)_{uv}=0$ if $u\neq v$; \item $B(I)_{uu}=e$, and $B(I)_{uv}=0$ if $u\neq v$; \item $B(J)_{uv}=\pi^0+\pi^1+\cdots+\pi^{n-1}$, where $\pi(=\pi^1)=(12\ldots n)\in Sym(n)$, for any $u,v\in V$. \end{itemize} Recall that $e(=\pi^0)$ stands for the identity element of $G$, and the sum must be seen as an element of the complex group algebra $\mathbb C(G)$.
\end{definition} Let $\rho \in \Irep(G)$ be a unitary irreducible representation of $G$ of dimension $d_\rho$. For our graph $\Gamma$ on $k$ vertices, the assignment $\alpha$ in $G$ relative to $H$, and the universal base matrix $U$, let $\rho(U)$ be the $d_\rho k\times d_\rho k$ matrix obtained from $U$ by replacing every non-zero entry $(U)_{u,v} \in \mathbb C(G)$ as above by the $d_\rho\times d_\rho$ matrix $\rho(U_{u,v})$, where each element $g$ of the group is replaced by $\rho(g)$, and the zero entries of $U$ are changed to all-zero $d_\rho\times d_\rho$ matrices. We refer to $\rho(U)$ as the {\em $\rho$-image} of the universal base matrix $U$. In the following theorem, a particular case of which (for the adjacency spectrum) was given by the authors in \cite{dfps19}, we use the rank of the matrix $\rho(H)= \sum_{h\in H}\rho(h)$. Also, for every $\rho\in \Irep(G)$, we consider the $\rho$-image of the universal base matrix $U$, and we let $\Sp(\rho(U))$ denote the spectrum of $\rho(U)$, that is, the multiset of all the $d_\rho k$ eigenvalues of the matrix $\rho(U)$. Finally, the notation $\rank(\rho(H))\cdot \Sp(\rho(B(U)))$ denotes the multiset of $\rank(\rho(H))\cdot d_\rho k$ values obtained by taking each of the $d_\rho k$ entries of the spectrum $\Sp(\rho(B(U)))$ exactly $\rank(\rho(H))$ times. In particular, if $\rank(\rho(H))=0$, the set $\rank(\rho(H))\cdot \Sp(\rho(B(U)))=\emptyset$. \begin{theorem} \label{t:spec-univ} Let $\Gamma$ be a base graph of order $k$ and let $\alpha$ be a voltage assignment on $\Gamma$ in a group $G$ relative to a subgroup $H$ of index $n$ in $G$. Then, the multiset of the $kn$ eigenvalues of the universal adjacency matrix $U^{\alpha}$ for the relative lift $\Gamma^{\alpha}$ is \begin{equation} \label{eq:t:spec-univ} \Sp( U^{\alpha})=\bigcup_{\rho\in \Irep(G)}\rank(\rho(H))\cdot \Sp(\rho(B(U))). \end{equation} \end{theorem} \begin{proof} Let $U^{\alpha}$ be the universal adjacency matrix $U=c_1A+c_2D+c_3I+c_4J$, for some constants $c_i\in \mathbb R$, of the relative lift $\Gamma^{\alpha}$ with vertex set $V^{\alpha}=V\times G/H$, where $V$ is the vertex set of the base graph $\Gamma$, and $G/H$ is the set of right cosets of $H$ in $G$. In more detail, $U^{\alpha}$ is a $kn\times kn$ matrix indexed by ordered pairs $(u,J)\in V^{\alpha}$ whose entries are given as follows. If there is no arc from $u$ to $u'$ for $u,u'\in V$ in $\Gamma$, then there is no arc from any vertex $(u,J)$ to any vertex $(u',J')$ for $J,J'\in G/H$ in the lift $\Gamma^{\alpha}$. So, in this case, the $(u,J)(u',J')$ entry of $U^{\alpha}$ is $c_4$ if $(u,J)\neq(u',J')$, and $c_2\deg(u)+c_3+c_4$ if $(u,J)=(u',J')$. In the case of existing adjacencies, if for some $g\in G$, there are exactly $t$ arcs $a_1,\ldots,a_t$ from a vertex $u$ to a vertex $v$ in $\Gamma$ that carry the voltage $g=\alpha(a_1)=\cdots =\alpha(a_t)$, then for every right coset $J\in G/H$ the $(u,J)(v,Jg)$-th entry of $U^{\alpha}$ is equal to $c_1t+c_4$, together with the diagonal contribution $c_2\deg(u)+c_3$ whenever $(u,J)=(v,Jg)$. From this, the proof is basically the same as in \cite[Th. 4.1]{dfps19}, so that now we only sketch it. The steps are the following: \begin{enumerate} \item Depending on the coefficients $c_i$ with $1\le i\le 4$, consider the corresponding universal base $k\times k$ matrix of $\Gamma$ relative to $H$, according to Definition \ref{B(U)}. \item For each irreducible representation $\rho\in \Irep(G)$, with dimension $d_{\rho}$, consider the $\rho$-image $\rho(B(U))$ of the universal base matrix.
\item Prove that every eigenvector of $\rho(B(U))$ can be ``lifted'' to at most $d_{\rho}$ eigenvectors of the corresponding universal adjacency matrix $U^{\alpha}$ of the lift $\Gamma^{\alpha}$, thus proving that the spectrum of $\rho(B(U))$ is contained in the whole spectrum of $U^{\alpha}$. \item Prove that, for each $\rho\in \Irep(G)$ and each eigenvector of $\rho(B(U))$, exactly $\rank(\rho(H))$ of the eigenvectors of $U^{\alpha}$ obtained at step 3 are independent, and that they are also independent of the eigenvectors obtained from any other irreducible representation $\rho'\in \Irep(G)$. \item Prove that the total number of independent eigenvectors of $U^{\alpha}$ obtained in this way, namely $k\sum_{\rho\in\Irep(G)} \rank(\rho(H))\cdot d_{\rho}$, equals $kn$, the number of vertices of $\Gamma^{\alpha}$, by using that $\sum_{\rho\in\Irep(G)} \rank(\rho(H))\cdot d_{\rho}=n$; see \cite[Prop. 3.3]{dfps19}. \end{enumerate} Notice that the proof is constructive, in the sense that we determine bases for all the eigenspaces of the relative lift $\Gamma^{\alpha}$. \end{proof} In particular, when $H$ is the trivial subgroup, we get an ordinary voltage assignment on $G$, and \eqref{eq:t:spec-univ} becomes \begin{equation} \label{eq:t:spec-univ-ord} \Sp( U^{\alpha})=\bigcup_{\rho\in \Irep(G)}d_{\rho}\cdot \Sp(\rho(B(U))) \end{equation} since $\rank(\rho(H))=\rank(\rho(e))=d_{\rho}$. \section{An example} \label{sec:example} \begin{figure} \caption{A graph $\Gamma$ and its relative lift $\Gamma^{\alpha}$ in the group Sym(3). } \label{fig1} \end{figure} Following the method of Theorem \ref{t:spec-univ}, we now work out some different spectra of the relative lift shown in Figure \ref{fig1}. Referring to that figure, we consider the base graph $\Gamma$ with vertex set $V=\{u,v\}$ and arc set $X=\{a,a^-,b,b^-\}$, where the pairs $\{a,a^-\}$, $\{b,b^-\}$ determine an edge joining $u$ to $v$, and a loop at $v$, respectively. The permutation voltage assignment $\alpha$ on $\Gamma$ in the group ${\rm Sym}(3)$ is given by $\alpha(a)=\alpha(a^-)=e$, $\alpha(b)=(123)=s$, and $\alpha(b^-)=(132)=s^2=t$. An equivalent description is to regard $\alpha$ as a relative voltage assignment, with values of $\alpha$ on arcs acting on the right cosets in $G/H$ for $H={\rm Stab}_G(1)=\{e,(23)\}$ by right multiplication. Then, the different base matrices with entries in the group algebra $\mathbb C(G)$ have the form \begin{align*} B(A) & = \left(\begin{array}{cc} 0 & \alpha(a) \\ \alpha(a^-) & \alpha(b)+\alpha(b^-) \end{array}\right) = \left(\begin{array}{cc} 0 & e \\ e & s+t \end{array}\right);\\ B(D) & = \left(\begin{array}{cc} \deg(u)e & 0 \\ 0 & \deg(v)e \end{array}\right) = \left(\begin{array}{cc} e & 0 \\ 0 & 3e \end{array}\right);\\ B(I) & = \left(\begin{array}{cc} e & 0 \\ 0 & e \end{array}\right);\\ B(J) & = \left(\begin{array}{cc} e+s+t & e+s+t \\ e+s+t & e+s+t \end{array}\right).
\end{align*} The voltage group $G={\rm Sym}(3)=\{e,g,h,r,s,t\}$ with $g=(23)$, $h=(12)$, $r=(13)$, $s=(123)$, and $t=(132)$ has a complete set of irreducible unitary representations $\Irep(G)=\{\iota,\pi,\sigma\}$ corresponding to the symmetries of an equilateral triangle with vertices positioned at the complex points $e^{i\frac{2\pi}{3}}$, $1$, and $e^{i\frac{4\pi}{3}}$, where \begin{align*} \iota &: G\to \{1\},\quad \dim(\iota)=1\quad \mbox{(the trivial representation)},\\ \pi &: G\to \{\pm 1\},\quad \dim(\pi)=1\quad \mbox{(the parity permutation representation), and}\\ \sigma &:G\to GL(2,\mathbb C),\quad \dim(\sigma)=2, \mbox{ generated by the unitary matrices} \end{align*} \begin{equation*} \label{eq:reprho} \sigma(g) = \frac{1}{2}\left(\begin{array}{cc} -1 & -\sqrt{3} \\ -\sqrt{3} & 1 \end{array}\right)\qquad {\rm and} \qquad \sigma(h) = \frac{1}{2}\left(\begin{array}{cc} -1 & \sqrt{3} \\ \sqrt{3} & 1 \end{array}\right), \end{equation*} whence we obtain $$ \sigma(r)=\sigma(ghg)= \left(\begin{array}{cc} 1 & 0\\ 0 & -1 \end{array}\right),\quad \sigma(s)=\sigma(gh)= \frac{1}{2}\left(\begin{array}{cc} -1 & -\sqrt{3} \\ \sqrt{3} & -1 \end{array}\right),\quad \mbox{and} $$ $$ \sigma(t)=\sigma(hg)= \frac{1}{2}\left(\begin{array}{cc} -1 & \sqrt{3} \\ -\sqrt{3} & -1 \end{array}\right). $$ To determine the `multiplication factors' appearing in front of the spectra in the statement of Theorem \ref{t:spec-univ}, we evaluate $\iota(H)=\iota(e)+\iota(g)=1+1=2$, of rank 1, $\pi(H)=\pi(e) + \pi(g) = 1-1=0$, of rank 0, and \begin{equation} \label{eq:rH} \sigma(H)=\sigma(e) + \sigma(g) = \frac{1}{2}\left(\begin{array}{rr} 1 & -\sqrt{3} \\ -\sqrt{3} & 3\end{array}\right) \end{equation} of rank 1. Then, by Theorem \ref{t:spec-univ}, the spectra of the universal adjacency matrices $U^{\alpha}$ of the relative lift $\Gamma^{\alpha}$ are obtained as the union of the corresponding sets $\Sp(\iota(U))$ and $\Sp(\sigma(U))$. Let us see some different cases. \subsection*{Adjacency spectrum} In this case, we only need to consider the images of $B=B(A)$ under the representations $\iota$ and $\sigma$, which are: \begin{equation} \label{eq:rhoB} \iota(B) {=} \left(\begin{array}{cc} 0 & 1 \\ 1 & 2 \end{array}\right),\quad \sigma(B) = \left(\begin{array}{cccc} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \end{array}\right), \end{equation} with spectra $\Sp(\iota(B))=\{1\pm \sqrt{2}\}$ and $\Sp(\sigma(B))=\{[\frac{1}{2}(-1\pm\sqrt{5})]^{2}\}$. Then, $$ \textstyle \Sp(A^{\alpha}) = \{1\pm \sqrt{2},[\frac{1}{2}(-1\pm\sqrt{5})]^2\}, $$ where the superscripts indicate the eigenvalue multiplicities. \subsection*{Laplacian spectrum} Since the Laplacian matrix can be written as $L=D-A$, we consider the images of the base matrix $B(L)=B(D)-B(A)=\left(\begin{array}{cc} e & -e \\ -e & 3e-s-t \end{array}\right)$ under the representations $\iota$ and $\sigma$, which are: \begin{equation} \label{eq:rhoB(L)} \iota(B(L)) {=} \left(\begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array}\right),\quad \sigma(B(L)) = \left(\begin{array}{cccc} 1 & 0 & -1 & 0 \\ 0 & 1 & 0 & -1 \\ -1 & 0 & 4 & 0 \\ 0 & -1 & 0 & 4 \end{array}\right), \end{equation} with spectra $\Sp(\iota(B(L)))=\{0,2\}$ and $\Sp(\sigma(B(L)))=\{[\frac{1}{2}(5\pm\sqrt{13})]^2\}$. Thus, $$ \textstyle \Sp(L^{\alpha}) = \{0,2,[\frac{1}{2}(5\pm\sqrt{13})]^2\}. $$ \subsection*{Signless Laplacian spectrum} The signless Laplacian is $Q=A+D$.
So, we consider the images of the base matrix $B(Q)=B(A)+B(D)= \left(\begin{array}{cc} e & e \\ e & 3e+s+t \end{array}\right)$ under the representations $\iota$ and $\sigma$, as follows: \begin{equation} \label{eq:rho(Q)} \iota(B(Q)) {=} \left(\begin{array}{cc} 1 & 1 \\ 1 & 5 \end{array}\right),\quad \sigma(B(Q)) = \left(\begin{array}{cccc} 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 2 & 0 \\ 0 & 1 & 0 & 2 \end{array}\right), \end{equation} with spectra $\Sp(\iota(B(Q)))=\{3\pm\sqrt{5}\}$ and $\Sp(\sigma(B(Q)))=\{[\frac{1}{2}(3\pm\sqrt{5})]^2\}$. Thus, $$ \textstyle \Sp(Q^{\alpha}) = \{3\pm\sqrt{5},[\frac{1}{2}(3\pm\sqrt{5})]^2\}. $$ \subsection*{Seidel spectrum} The Seidel matrix can be defined as $S=\overline{A}-A=J-2A-I$. Then, we consider the images of the base matrix $B(S)=B(J)-2B(A)-B(I)=\left(\begin{array}{cc} s+t & -e+s+t \\ -e+s+t & -s-t \end{array}\right)$ under the representations $\iota$ and $\sigma$, which are: \begin{equation} \label{eq:rho(B(S))} \iota(B(S)) {=} \left(\begin{array}{cc} 2 & 1 \\ 1 & -2 \end{array}\right),\quad \sigma(B(S)) = \left(\begin{array}{cccc} -1 & 0 & -2 & 0 \\ 0 & -1 & 0 & -2 \\ -2 & 0 & 1 & 0 \\ 0 & -2 & 0 & 1 \end{array}\right), \end{equation} with spectra $\Sp(\iota(B(S)))=\{\pm\sqrt{5}\}$ and $\Sp(\sigma(B(S)))=\{[\pm\sqrt{5}]^2\}$. Thus, $$ \textstyle \Sp(S^{\alpha}) = \{[\pm\sqrt{5}]^3\}. $$ \subsection*{Using ordinary voltage assignments} Note that, in our example, the group generated by the voltages, that is, by the permutation $s=(123)$, is a regular permutation group on $\{1,2,3\}$, isomorphic to the cyclic group $\mathbb Z_3$. Then, the covering $\Gamma^{\alpha}\to \Gamma$ is regular, and we can also construct the lift $\Gamma^{\alpha}$ from an ordinary voltage assignment on $\mathbb Z_3$. As is well known, the irreducible representations $\rho_i$, for $i=0,\ldots,n-1$, of the cyclic group $\mathbb Z_n$ are $\rho_{i}:\ 1,\omega^i,\omega^{2i},\ldots,\omega^{(n-1)i}$, where $\omega$ is a primitive $n$-th root of unity. Thus, all these representations have dimension one. In our case, with $e=s^0$, $s$, and $s^2=s^{-1}$ being the elements of $\mathbb Z_3$, such representations are shown in Table \ref{table-Z3}. \begin{table}[h] \centering \begin{tabular}{|c||c|c|c|} \hline $\quad g\in \mathbb Z_3$ & $e$ & $s$ & $s^2$ \\ \hline\hline $\rho_0$ $(d_1=1)$ & $1$ & $1$ & $1$ \\ \hline $\rho_1$ $(d_2=1)$ & $1$ & $e^{i\frac{2\pi}{3}}$ & $e^{i\frac{4\pi}{3}}$ \\ \hline $\rho_2$ $(d_3=1)$ & $1$ & $e^{i\frac{4\pi}{3}}$ & $e^{i\frac{2\pi}{3}}$ \\ \hline \end{tabular} \caption{The irreducible representations of the cyclic group $\mathbb Z_3$.} \label{table-Z3} \end{table} Then, by using the same base matrices $M\in \{B,B(L),B(Q),B(S)\}$ as before (with $t=s^2$), we get the following matrices $\rho_i(M)$, for $i=0,1,2$, together with their spectra: \begin{itemize} \item {\bf Adjacency spectrum.} \begin{align*} \rho_0(B) & = \left(\begin{array}{cc} 0 & 1 \\ 1 & 2 \end{array}\right),\ \Sp(\rho_0(B))=\{1\pm \sqrt{2}\};\\ \rho_1(B) & =\rho_2(B) = \left(\begin{array}{cc} 0 & 1 \\ 1 & -1 \end{array}\right),\ \textstyle \Sp(\rho_1(B))=\Sp(\rho_2(B))=\{\frac{1}{2}(-1\pm \sqrt{5})\}. \end{align*} \item {\bf Laplacian spectrum.} \begin{align*} \rho_0(B(L)) &= \left(\begin{array}{cc} 1 & -1 \\ -1 & 1 \end{array}\right), \ \Sp(\rho_0(B(L)))=\{0,2\}; \\ \rho_1(B(L)) &=\rho_2(B(L)) = \left(\begin{array}{cc} 1 & -1 \\ -1 & 4 \end{array}\right),\ \textstyle \Sp(\rho_1(B(L)))=\Sp(\rho_2(B(L)))=\{\frac{1}{2}(5\pm \sqrt{13})\}.
\end{align*} \item {\bf Signless Laplacian spectrum.} \begin{align*} \rho_0(B(Q)) &= \left(\begin{array}{cc} 1 & 1 \\ 1 & 5 \end{array}\right),\ \Sp(\rho_0(B(Q)))=\{3\pm \sqrt{5}\};\\ \rho_1(B(Q)) & =\rho_2(B(Q)) = \left(\begin{array}{cc} 1 & 1 \\ 1 & 2 \end{array}\right),\ \textstyle \Sp(\rho_1(B(Q)))=\Sp(\rho_2(B(Q)))=\{\frac{1}{2}(3\pm \sqrt{5})\}. \end{align*} \item {\bf Seidel spectrum.} \begin{align*} \label{eq2:rhoB} \rho_0(B(S)) &= \left(\begin{array}{cc} 2 & 1 \\ 1 & -2 \end{array}\right),\ \Sp(\rho_0(B(S)))=\{\pm \sqrt{5}\};\\ \rho_1(B(S)) & =\rho_2(B(S)) = \left(\begin{array}{cc} -1 & -2 \\ -2 & 1 \end{array}\right),\ \textstyle \Sp(\rho_1(B(S)))=\Sp(\rho_2(B(S)))=\{\pm\sqrt{5}\}. \end{align*} \end{itemize} From these spectra, and applying \eqref{eq:t:spec-univ-ord}, we obtain again the different spectra of our example. As a last comment, note that, in this context of ordinary voltage assignments of a base graph $\Gamma$ in a group $G$ of order $n$, the base matrix $B(J)$ has entries $B(J)_{uv}=\sum_{g\in G} g$. Then, from representation theory, the matrices $\rho_0(B(J))$ and $\rho_i(B(J))$ for $i\neq 0$ turn out to be $nJ$ and $O$ (the $0$-matrix with appropriate dimension), respectively. Thus, for $i\neq 0$, $\rho_i(B(S))=-2\rho_i(B)-\rho_i(B(I))=-2\rho_i(B)-I$, and we can state the following lemma (see the above results for an example). \begin{lemma} Let $\Gamma$ be a base graph with ordinary voltage assignment $\alpha$ on a group $G$ of order $n$. Let $B$ be the (adjacency) base matrix of $\Gamma$. Then, apart from the eigenvalues of $nJ-2\rho_0(B)-I$, the other Seidel eigenvalues of the lift $\Gamma^{\alpha}$ are of the form $-2\lambda-1$, where $\lambda$ is an adjacency eigenvalue of $\Gamma^{\alpha}$ $($that is, $\lambda\in \Sp(\rho_i(B))$ for $i\neq 0$\/$)$. \end{lemma} \vskip 1cm \noindent{\bf Acknowledgments.}~ The research of the first two authors is partially supported by AGAUR from the Catalan Government under project 2017SGR1087 and by MICINN from the Spanish Government under project PGC2018-095471-B-I00. The research of the first author has also been supported by MICINN from the Spanish Government under project MTM2017-83271-R. The third and the fourth authors acknowledge support from the APVV Research Grants 15-0220 and 17-0428, and the VEGA Research Grants 1/0142/17 and 1/0238/19. \end{document}
arXiv
Volume 21 Supplement 8 Selected articles from the 5th International Scientific Conference "Plant genetics, genomics, bioinformatics, and biotechnology" (PlantGen2019): genomics
Multi-trait multi-locus SEM model discriminates SNPs of different effects
Anna A. Igolkina ORCID: orcid.org/0000-0001-8851-96211, Georgy Meshcheryakov1, Maria V. Gretsova1,2, Sergey V. Nuzhdin1,3 & Maria G. Samsonova1
There is a plethora of methods for genome-wide association studies. However, only a few of them may be classified as multi-trait and multi-locus, i.e., consider the influence of multiple genetic variants on several correlated phenotypes. We propose a multi-trait multi-locus model which employs structural equation modeling (SEM) to describe complex associations between SNPs and traits - multi-trait multi-locus SEM (mtmlSEM). The structure of our model makes it possible to discriminate pleiotropic and single-trait SNPs of direct and indirect effect. We also propose an automatic procedure to construct the model using factor analysis and the maximum likelihood method. For estimating a large number of parameters in the model, we performed Bayesian inference and implemented Gibbs sampling. An important feature of the model is that it correctly copes with non-normally distributed variables, such as some traits and variants. We applied the model to Vavilov's collection of 404 chickpea (Cicer arietinum L.) accessions with 20-fold cross-validation. We analyzed 16 phenotypic traits which we organized into five groups and found around 230 SNPs associated with traits, 60 of which were of pleiotropic effect. The model demonstrated high accuracy in predicting trait values.
Understanding how genetic variation translates into phenotypic effects is one of the central challenges facing fundamental biology, agriculture, and medicine. Solutions of this problem fall into two main classes: association studies and trait prediction studies. Genome-wide association studies (GWAS) are designed to identify genetic variants associated with a trait. Initially, GWAS was conducted for each trait separately, testing SNPs one by one. However, single-locus approaches may lead to biased estimates due to multiple testing correction, and they are not suitable in the common case of genetically correlated traits. To alleviate the latter challenge, multi-trait models have been proposed [1, 2]. One way to cope with correlated traits is to model the inter-trait covariance as a random effect in linear mixed effects models [3]. Until recently, this model could use only a pair of correlated traits at a time due to the computational intensity [4]. To avoid this complexity, variable reduction techniques were suggested to replace several phenotypic traits with new independent constructs. These constructs play the role of new traits and can be obtained with a standard principal component analysis of traits (PCA), various principal components of heritability (PCH) [5,6,7] or pseudo-principal components [8]; however, the biological interpretation of these artificial traits is not clear. Moreover, these methods do not distinguish trait-specific and pleiotropic variants. To carry this out, meta-analysis combining several single-trait GWAS of different traits was proposed [9]. It can derive trait-specific variants, but, as correlated traits were not analyzed simultaneously, this method is not multi-trait by definition. Another challenge in association studies is to develop a powerful multi-locus model.
Single-locus models require correction for multiple testing, which dramatically reduces power. To avoid this problem, multi-locus models that consider all markers simultaneously have been proposed. Due to the 'large p (number of SNPs), small n (sample size)' problem, many multi-locus models are based on regularization/penalized techniques: LASSO [10], Elastic Net [11], Bayesian LASSO [12], adaptive mixed LASSO [13]. Other multi-locus methods, which are incorporated in the mrMLM package, involve a two-step algorithm which first selects candidate variants from a single-locus design and then examines them together in a multi-locus manner [14]. Despite their diversity, the multi-locus models are limited in multi-trait cases and seldom pay attention to different types of SNP effects (e.g. pleiotropic, single-trait, direct, indirect). In contrast to GWAS, the second broad class of studies makes genome-wide trait predictions. These studies have gained popularity and enjoy practical application in agriculture, specifically, in estimating individual breeding values and selecting breeding lines [15]. Genomic prediction methods not only search for trait-variant associations but also validate them by demonstrating their predictive ability. Similar to GWAS, these methods are based on various regression models that typically include multiple loci and consider kin relationships between individuals. The latter is usually treated as a random effect, i.e., a multivariate normally distributed variable with zero mean and a covariance matrix proportional to pedigree-based or marker-based kinship [16]. The random effect can be estimated together with marker effects as in BLUP and various GWAS mixed-models [17,18,19] or before the association analysis as in GRAMMAR [20]. Despite the broad spectrum of multi-trait and multi-locus models in GWAS and trait prediction studies, only a few of them simultaneously incorporate correlated traits and several associated variants [21,22,23,24,25]. In principle, multi-trait and multi-locus models have the potential to reveal complex and important types of associations; for instance, a single variant might have a direct effect on one trait and an indirect impact on another trait, may act on a single trait, or its effect might be pleiotropic, affecting several traits. However, none of these trait-variant associations are explicitly embedded into known models. This is why it is tempting to have these relationships described explicitly, as in structural equation models. Structural equation modeling (SEM) is a multivariate statistical analysis technique first introduced for path analysis by geneticist Sewell Wright [26, 27]. Once predominantly used in genetics, econometrics, and sociology, SEM applications have gradually shifted to the field of molecular biology [28]. For example, SEM has been used to explore alterations in gene networks in diseases [29, 30], to provide a quantitative map of relationships between traits and disease [31], and to infer gene regulatory networks involving several hundred genes and eQTLs [32, 33]. SEM models have also been applied in association studies in both multi-trait and multi-locus designs. For example, the GW-SEM method has been developed to test the association of a SNP with multiple phenotypes through a latent construct [34].
In comparison with the existing multi-trait single-locus GWAS software package GEMMA (Zhou and Stephens 2014), GW-SEM provides more accurate estimates of associations; however, GEMMA is almost three times faster than GW-SEM. Another SEM-based model which can be used in association studies has been proposed for multi-trait QTL mapping [35]. This method assumes that phenotypes are causally related, forming a core structure without latent constructs, and QTLs play the role of exogenous variables to the structure. This approach allows the model to decompose QTL effects into direct, indirect, and total effects. However, the assumption of causally related traits is limiting because the correlation between traits can additionally be caused by pleiotropy rather than the direct influence of traits on each other. Therefore, the current SEM-based models for genotype-phenotype associations can be improved to address these drawbacks. Here, we propose a new multi-trait multi-locus SEM-based model – mtmlSEM – that considers both correlated traits joined into latent constructs, which can be causally related to each other, and multiple SNPs influencing both traits and latent variables. In contrast to PCA-based approaches, our model does not operate with artificial phenotypes in the form of linear combinations of traits, but rather the phenotypes are regressed on the latent constructs. The proposed configuration of the model distinguishes pleiotropic and single-trait effects of SNPs on latent variables and phenotypes, respectively. Moreover, SNP effects can be differentiated between direct and indirect. This explicit separation of SNP roles may provide a better understanding of genetic mechanisms underlying a trait than other multi-trait multi-locus models. Our approach faces several challenges. First, in the case of a large number of traits and variants, the model potentially belongs to the "large p, small n" class, so that the standard maximum likelihood (ML) method for estimating parameters in SEM models is limited due to the parameter identification criteria. This problem can be solved by applying the Bayesian approach, which uses prior information about model parameters. Bayesian multiple-regression methods are widely used for genomic prediction in agriculture and in GWAS [36], reducing the number of tests and, consequently, increasing robustness and power as compared to standard GWAS analyses [37]. In our model, we performed Bayesian inference and obtained posterior distributions of parameters by Gibbs sampling, a Markov chain Monte Carlo (MCMC) algorithm. Another challenge in our model is the inclusion of both continuous and ordinal variables given that variants and many phenotypes are measured on ordinal scales. As a result, it is impossible to estimate parameters in SEM models using statistical models relying on the normality assumption. These limitations explain the sparsity of studies conducting SEM analyses in a genome-wide context. In our model, we incorporated techniques to cope with ordinal data – polychoric and polyserial correlations – that provide a correct analysis of genetic variants and traits. Our model was applied to a dataset of 404 chickpea landraces analyzed recently [38]. Chickpea is the second most widely grown food legume, providing a vital source of nutritional nitrogen for ~ 15% of the world's population. To accelerate chickpea breeding, it is important to identify regions controlling agronomically important traits.
However, while performing GWAS, we found that 16 out of 30 phenotypic traits considered were correlated. Therefore, to obtain statistically reliable markers and to understand the causal relationships between traits and variants, the mtmlSEM model developed here was applied to this dataset. We also used the model to predict chickpea phenotypic traits and got sufficiently good results for most of them.
Application of mtmlSEM model to chickpea dataset
To test whether the relations between latent factors in the model are reasonable and to evaluate impacts of different types of SNPs, we compared four types of models (Fig. 1). We denote a model having parameters in the B matrix as connected and a model without a B matrix as zero. We denote a model without the K matrix as base and a model having parameters in the K matrix as extended. Four model configurations were considered, covering all possible combinations (Fig. 1; a small generative sketch of these configurations is given below). For each of the four models, we assessed its predictive ability with the fixed 20-fold cross-validation. In each of the 20 training sets, we automatically obtained the same set of 5 factors influencing 16 partly correlated phenotypes (Table 1, Additional File 1). The first two factors reflect different types of productivity traits. The third factor reflects joint variation in the color of different plant parts. The fourth can be interpreted as a phenological factor. The fifth reflects joint variation of traits related to plant architecture, in particular, plant height and height of the lower pod attachment.
Table 1: 5 factors influencing 16 partly correlated phenotypes
In the connected model, the latent factors were joined into a directed acyclic graph and this procedure resulted in slightly different structural parts for the 20 training set models. We found that the number of connections between latent variables varied from four to six, with four being common to all training sets (Fig. 2). From the statistical viewpoint, relationships between latent variables reflect their common variances that maximize the likelihood of the sample covariance matrix subject to parameters of the model. However, a biological interpretation of the connections may be that the relationships between factors related to productivity and plant color reflect selection on market class: desi chickpeas have a small dark seed, while kabuli have large lightly colored seeds [39]. Relations between productivity and phenology as well as between productivity and plant architecture are also apparent: plant productivity reflects the efficiency of plant metabolism, which obviously influences plant architecture and phenology [40].
We first added SNPs influencing the latent factors to obtain both the connected and zero base models. The number of SNPs in the connected base models constructed for 20 training sets varied from 52 to 62; for zero base models, this number was in the range from 36 to 46. The larger number of SNPs in connected models as compared with zero models can be explained by the essential difference between SNPs attributed to these model types. In connected base models, some SNPs are associated with several latent factors and therefore affect a larger number of phenotypic traits than in zero models. Therefore, in connected models, SNPs describe a more complex variance-covariance structure and, as a result, a larger number of SNPs is required to estimate it. Notably, SNPs influencing latent factors do not explain the variances specific to individual phenotypic traits.
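To make the four configurations concrete, the following minimal generative sketch simulates data from the kind of structure described above (all names, dimensions, and distributions here are illustrative assumptions, not the fitted chickpea model): latent factors receive SNP effects and are optionally linked by B (connected vs. zero), while K optionally adds direct SNP effects on individual traits (extended vs. base).

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate(snps, Lambda, Gamma, B=None, K=None, noise=0.1):
        # snps: (n_samples, n_snps) genotypes; Lambda: (n_traits, n_factors) loadings
        # Gamma: (n_factors, n_snps) SNP -> factor effects (pleiotropic)
        # B: (n_factors, n_factors) factor -> factor effects; zero model if None
        # K: (n_traits, n_snps) direct SNP -> trait effects; base model if None
        n, f = snps.shape[0], Gamma.shape[0]
        B = np.zeros((f, f)) if B is None else B
        eta = np.linalg.solve(np.eye(f) - B,
                              Gamma @ snps.T + noise * rng.standard_normal((f, n)))
        y = Lambda @ eta                       # factor-mediated part of the traits
        if K is not None:
            y += K @ snps.T                    # trait-specific SNP effects
        return (y + noise * rng.standard_normal(y.shape)).T   # (n_samples, n_traits)

    # Toy dimensions: 5 factors, 16 traits, 10 SNPs, 404 accessions (0/1 genotypes).
    g = rng.integers(0, 2, size=(404, 10)).astype(float)
    Lam = rng.standard_normal((16, 5)) * (rng.random((16, 5)) < 0.3)
    Gam = rng.standard_normal((5, 10)) * (rng.random((5, 10)) < 0.2)
    y_connected_base = simulate(g, Lam, Gam, B=np.diag(np.full(4, 0.4), k=-1))
    y_zero_extended = simulate(g, Lam, Gam, K=rng.standard_normal((16, 10)) * 0.1)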
To take into account these variances, we built extended models for each training set. The number of SNPs in connected extended models varied from 223 to 256; in zero extended models, this number was in the range from 218 to 242. The significant increase in the number of SNPs in extended models as compared with base models can be explained by the fact that extended models additionally consider around ten SNPs per each of the 16 traits on average. To obtain parameter estimates for each of the 80 models (4 model types and 20 training sets), we performed five Gibbs sampling chains of length 2000 and checked several diagnostics with tools in the coda CRAN package. The Gelman-Rubin diagnostics was higher than 1.05 in only 1% of all parameters. The minimum effective sample size for a parameter was 83 and the mean and median effective sample sizes across all parameters and models were 3193 and 3304, respectively. Based on these diagnostic values, we concluded that there was good convergence of the Gibbs sampling chains and took parameter estimates for testing. For all model types, the accuracy of trait prediction is good for plant height, some traits related to productivity, and all traits related to plant color (Table 2, Additional File 2). Closer inspection of the table showed that the connected base model outperformed the zero base model for 9 phenotypic traits, the opposite situation was observed for 5 traits, and predictions for the remaining 2 traits were nearly equal. When comparing the connected and zero extended models, the number of times one model outperforms the other is nearly equal (Table 2) and the number of predictions with equal accuracy increases pointing to greater similarity between these models. Table 2 Accuracy of trait prediction for four models (Pearson correlation between actual values and predicted and coefficient of determination). Bold font: connected model outperforms zero model; Italic font - prediction accuracies of connected and zero models are nearly equal Next, we analyzed positions of trait-associated SNPs on the chromosomes in both connected and zero extended model types. For each of these types, we had independently built 20 models due to the fixed 20-fold CV, and, consequently, the sets of SNPs included into the models were different. To evaluate the congruence between chromosomal positions of SNPs from different sets, we applied the sliding window technique (500 kb window size with 100 kb step) and, for each window, we counted the number of models having at least one SNP in it. We applied this technique for five subsets of SNPs separately, such that each subset was associated with a factor and its attributed phenotypes. We visualized the evaluated congruence between 20 models in Fig. 3. We found that the models agree with each other due to the significant amount of windows, where all models have SNPs. We next compared positions of peaks with GWAS-hits obtained by a single-trait, single-locus model for the chickpea dataset [38]. Utilizing the permutation test, we found that positions of the GWAS-hits and the peaks are not independent (p-value < 0.05) indicating that there is some concordance between our models and GWAS analysis. In Fig. 3, some GWAS hits do not have any matches with peaks, because our model does not include correlated SNPs, which naturally occur in GWAS results. Moreover, our model describes essentially more information than single-trait GWAS; therefore, some peaks do not match any GWAS hits. 
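As an illustration of the sliding-window congruence computation described above, the sketch below (ours, not the published code; the simulated SNP positions and chromosome length are arbitrary) counts, for a single chromosome, how many of the 20 cross-validation models place at least one SNP in each 500 kb window moved in 100 kb steps.

import numpy as np

def window_congruence(snp_positions_by_model, chrom_length,
                      window=500_000, step=100_000):
    # snp_positions_by_model: one array of SNP positions (bp) per CV model;
    # returns window start positions and, per window, the number of models
    # with at least one SNP inside it.
    starts = np.arange(0, max(chrom_length - window, 0) + 1, step)
    counts = np.zeros(len(starts), dtype=int)
    for pos in snp_positions_by_model:
        pos = np.asarray(pos)
        for i, s in enumerate(starts):
            if np.any((pos >= s) & (pos < s + window)):
                counts[i] += 1
    return starts, counts

rng = np.random.default_rng(0)
models = [rng.integers(0, 10_000_000, size=30) for _ in range(20)]
starts, counts = window_congruence(models, chrom_length=10_000_000)
print("maximum congruence:", counts.max(), "of", len(models), "models")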
GWAS often relies on data with a number of highly correlated phenotypic traits. Due to these correlations, significant SNPs are frequently associated with several phenotypes, i.e., they are pleiotropic. Until recently, multi-trait multi-locus models could neither distinguish SNP effects between pleiotropic and single-trait ones nor analyze a large number of traits and variants. In a SEM-based model, aggregation of pleiotropic effects into latent constructs makes it possible to distinguish SNP effects and, therefore, shed more light on mechanisms underlying associations. Large numbers of SNPs and traits in the model can lead to a parameter identification problem that, nevertheless, can be solved by applying a Bayesian approach for parameter estimation. Here we developed the mtmlSEM (multi-trait multi-locus SEM) model that estimates and evaluates causal relations between phenotypes and SNPs, reliably discriminates variant effects between single-trait and pleiotropic ones, and has good predictive ability. The developed model is a general one and can be applied to the analysis of associations between variants and correlated traits in any dataset. It consists of two main steps. Firstly, the structure of the model is automatically constructed, such that correlated traits are joined into latent factors and explanatory SNPs are introduced to latent factors and phenotypic traits directly. Under this paradigm, one could consider latent factors as aggregating yet unknown biological processes that explain the SNP influence on phenotypes. At the second step, the parameter estimates are obtained with MCMC (Gibbs sampling) after the Bayesian inference of posterior distributions for parameters. Next, the applicability of the mtmlSEM model was illustrated on a dataset of chickpea accessions. Many phenotypic traits in this dataset are correlated and therefore single-trait GWAS inferences can be biased. We compared four models: zero or connected refers to the absence or presence of parameters in B, and base or extended refers to the absence or presence of parameters in K. To estimate model accuracy, we applied the 20-fold cross-validation, which led to the construction of 20 different models for each model type. After the accuracy of trait prediction was assessed, it became evident that among base models, connected ones describe the covariance structure of the data more accurately and, therefore, showed better predictive ability than the zero models. Therefore, one may conclude that joining latent factors into a structure was reasonable, as all phenotypes are mutually dependent and cannot be considered as isolated blocks of traits. In the case of extended models, the supplementary SNPs added to phenotypes described the residual variance not covered by the base models, so that the connected and zero extended models were comparable in both total numbers of SNPs and accuracy. We next tested the utility of the models to predict associations between SNPs and phenotypes. We found that the base and connected extended models behave similarly, supporting their resemblance to one another. The associations revealed with the mtmlSEM model and in standard GWAS analysis are consistent, and the differences observed arise due to the exclusion of correlated SNPs from the mtmlSEM models, and because mtmlSEM models consider individual and pleiotropic effects of SNPs separately. These effects could be singled out by calculating the difference between SNP effects in extended and zero models.
However, the pleiotropic SNP effects are central to trait prediction in the models, since the addition of SNPs to traits does not result in a marked increase of prediction accuracy (see Table 2). We developed the mtmlSEM model that describes causal relations between single-trait and pleiotropic SNPs and phenotypic traits. The particular strength of the mtmlSEM model developed here is its ability to predict traits from genomic data. Notably, while the chickpea dataset used in this study is relatively small, the accuracy of the predictions for many traits was good and is comparable or even superior to the accuracy of breeding value predictions in genomic selection models. However, the applicability of mtmlSEM models in genomic selection studies requires further investigation. Structural equation modeling First proposed by S. Wright [26] for path analysis, SEM is defined today as a diverse set of tools and approaches covering regression models, path analysis and confirmatory factor analysis. The first SEM model was LISREL, and it has two distinct parts: structural and measurement [41, 42]. The structural part of LISREL reflects the causal relationships between endogenous and exogenous latent variables; the measurement model describes how latent variables influence their manifest variables: $$ \begin{array}{ll}\eta &= \mathrm{B}\eta + \varepsilon \\ p &= \Lambda \eta + \delta \end{array} \qquad (1) $$ where η is a vector of nη latent factors (both exogenous and endogenous), p is a vector of np observed manifest variables, Λ is a matrix of factor loadings, B is a matrix of relationships between latent factors, ε ∼ N(0, Θε) and δ ∼ N(0, Θδ) are random errors, and Θε and Θδ are diagonal matrices of sizes (nη, nη) and (np, np), respectively. To adapt this model for genotype-phenotype studies, we considered p as a vector of phenotypes, and η as a vector of latent variables, which describe the shared variance of genetically correlated traits. One possible interpretation of the measurement part of the model in these terms is that latent variables play the role of molecular mechanisms governing the correlation between traits. The structural part describes the interplay between these mechanisms. To construct the mtmlSEM model, we extended the LISREL model with observed exogenous variables representing SNPs. The new exogenous variables influence either latent factors or phenotypic traits, representing pleiotropic and single-trait effects, respectively. As a result, the latent variables η become only endogenous and the SEM model is transformed as follows: $$ \begin{array}{ll}\eta &= \mathrm{B}\eta + \Pi g + \varepsilon \\ p &= \Lambda \eta + \mathrm{K} y + \delta \end{array} \qquad (2) $$ where g and y are variables of SNPs influencing latent factors and phenotypic traits, respectively, and Π and K are matrices of SNP influences on latent factors and phenotypes, respectively. We assumed that each column of both the Π and K matrices can contain only one cell with a parameter, such that each SNP can influence only one variable. SNPs in the structural part, g, describe a part of phenotypic variance which is common to several traits. However, each phenotype has its own variance, which is described by SNPs in the measurement part, y. If the B matrix is not zero, a pleiotropic SNP, which directly influences one latent variable and its related traits, can indirectly affect other latent variables and their traits.
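To make the role of a non-zero B matrix concrete, the small simulation below (our own sketch with assumed parameter values, not the published software) generates data from model (2) for two latent factors and shows that a SNP assigned only to the first factor still correlates with the traits of the second, downstream factor.

import numpy as np

rng = np.random.default_rng(42)
n = 1000                                  # accessions
B = np.array([[0.0, 0.0],                 # factor 2 depends on factor 1
              [0.7, 0.0]])
Lambda = np.array([[1.0, 0.0],            # traits 1-2 load on factor 1
                   [0.8, 0.0],
                   [0.0, 1.0],            # traits 3-4 load on factor 2
                   [0.0, 0.9]])
Pi = np.array([[0.5],                     # one pleiotropic SNP -> factor 1 only
               [0.0]])
g = rng.integers(0, 3, size=(n, 1)).astype(float)      # additive coding 0/1/2
eps = rng.normal(scale=0.3, size=(n, 2))
delta = rng.normal(scale=0.3, size=(n, 4))

eta = (np.linalg.inv(np.eye(2) - B) @ (Pi @ g.T + eps.T)).T   # eta = (I-B)^{-1}(Pi g + eps)
p = eta @ Lambda.T + delta                                    # measurement part (K omitted here)

# the SNP never enters factor 2 directly, yet it correlates with trait 3:
print(np.corrcoef(g[:, 0], p[:, 2])[0, 1])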
Therefore, in the mtmlSEM model, SNP effects can be subdivided into single-trait, pleiotropic and direct/indirect effects. The maximum likelihood (ML) method, most often used to estimate parameters in SEM models, assumes that all observed and latent variables are normally distributed. Under this assumption, the sample covariance matrix of observed variables follows the Wishart distribution with the mean equal to the model-implied covariance matrix. In our dataset, some of the phenotypic traits and all SNPs take discrete ordinal values; therefore, the ML approach cannot be applied directly. To treat ordinal variables as normally distributed, we substituted sample covariances between ordinal variables with polychoric correlations and between ordinal and continuous variables with polyserial correlations (see section Ordinal variables). The ML approach can be applied after this manipulation (see Additional File 3). Construction of measurement part We identified latent variables influencing phenotypic traits by applying factor analysis (FA). To determine the number of factors, we applied parallel analysis [43]. Then, we performed FA and attributed a trait to a factor if the absolute value of the factor loading (i.e. the standardized regression coefficient) exceeded 0.5. Factors influencing fewer than two phenotypes and phenotypes not attributed to any factor were filtered out. As a result, we obtained the measurement part of the model (1), which is a set of latent factors that influence subsets of phenotypic traits: $$ p = \Lambda \eta + \delta \qquad (3) $$ where Λ is a sparse matrix. The model does not contain an intercept term because traits are standardized to have mean zero and variance one. Construction of structural part In FA, factors are independent and influence all observed variables. By setting some factor loadings to zero, we likely violated the independence of the factors; therefore, we expect them to be dependent. To include factor dependence in the model, we allowed factors to be in causal relationships that describe the presumed common variance between them: $$ \eta = \mathrm{B}\eta + \varepsilon \qquad (4) $$ where B is the coefficient matrix for relationships between the latent variables η. The model does not contain an intercept term because latent variables are assumed to have mean zero. Eq. (4) together with Eq. (3) form the traditional LISREL model. To obtain the positions of parameters in the B matrix, we iteratively added them one by one until a stopping criterion was met. At each iteration, we considered each pair of latent factors and examined the two possible directed relationships within the pair (forward and backward links). For each causal relationship not forming a cycle in the structural part, we estimated the parameters of the corresponding LISREL model by the ML method and checked for statistical significance of all the parameters in both the Λ and B matrices (p-value < 0.05). Next, we defined the best relationship between latent factors as the one having the highest likelihood value and fixed the corresponding position of a new parameter in B. The iterations continued until the log-likelihood value stopped increasing. SNP selection Before SNPs were incorporated into the model, we estimated parameters for the constructed LISREL part of the model (Eq. (1)) and fixed all parameter values in the B and Λ matrices. This is necessary because adding SNPs enlarges the number of parameters, which makes further ML estimation unstable. Therefore, we added SNPs to the model with fixed B and Λ matrices.
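Before describing how the SNPs themselves are selected, the trait-to-factor attribution rule used for the measurement part above can be written down in a few lines. The sketch below is ours (the function name, threshold default and random loadings are only for illustration); in practice the loadings would come from a factor analysis with the number of factors chosen by parallel analysis.

import numpy as np

def measurement_pattern(loadings, threshold=0.5, min_traits=2):
    # loadings: traits x factors matrix of factor loadings.
    # Keep |loading| > threshold, drop factors attributed to fewer than
    # min_traits traits and traits attributed to no remaining factor.
    L = np.asarray(loadings, dtype=float)
    pattern = np.abs(L) > threshold
    keep_factors = pattern.sum(axis=0) >= min_traits
    pattern = pattern[:, keep_factors]
    keep_traits = pattern.any(axis=1)
    return pattern, keep_traits, keep_factors

rng = np.random.default_rng(3)
fake_loadings = rng.uniform(-1, 1, size=(16, 6))    # 16 traits, 6 candidate factors
pattern, keep_traits, keep_factors = measurement_pattern(fake_loadings)
print("traits kept:", keep_traits.sum(), "factors kept:", keep_factors.sum())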
We first automatically introduced SNPs for each latent variable (vector g in Eq. (2)) into the model, starting from the exogenous latent variables and breadth-first following the directed acyclic graph (DAG) of the structural part. Then, we performed the same automatic procedure and introduced SNPs for phenotypes (vector y in Eq. (2)). Selecting a SNP for a variable, whether it is a latent factor or a phenotype, consisted of three steps. At the first step, we included SNPs one by one as influencing the variable and performed the ML estimation of model parameters. The sample covariance matrix of all observed variables, for both phenotypic traits and SNPs, follows the Wishart distribution with the mean equal to the model-implied covariance matrix (see Additional File 3). At the second step, based on the ML estimates, we calculated the Wishart density for the sample covariance matrix of phenotypes only, taking the model-implied covariance of phenotypes as the mean parameter of the distribution. At the third step, we sorted all SNPs according to the calculated densities and put the top SNP into the model, fixing the corresponding parameter in the Π or K matrix at its ML estimate. This automatic algorithm for selecting SNPs was implemented using the tools of the semopy [44] Python package. Ordinal variables The estimation of parameters in the SEM model is traditionally based on the assumption that all variables, whether they are observed or latent, are normally distributed. However, in the mtmlSEM model, this assumption is inevitably violated because SNPs take only discrete values, for instance, {0, 1, 2} in the additive model. Moreover, the ordinal scale is often used for measurements of phenotypic traits. We considered ordinal data as coming from a hidden continuous normal distribution with a threshold specification [45] and introduced additional latent variables to the model as follows. Let \( \overset{\sim }{x} \) be a latent normally distributed variable that mimics the ordinal variable x taking values from {x1, x2, …, xn}. Suppose that for a given data set the proportions of these values are {f1, f2, …, fn}, respectively. Let the thresholds {− ∞ = t0, t1, …, tn = ∞} divide the normal distribution into n parts corresponding to these proportions, with tk equal to the standard normal quantile at \( \sum_{i=1}^k f_i \). Although the exact continuous measurements of \( \overset{\sim }{x} \) are not available, we consider that if x = xk, then \( t_{k-1}<\overset{\sim }{x}\le t_k \) [45]. Thereby, for each SNP and each ordinal phenotypic trait, we introduce to the model an additional normally distributed latent variable. Let the vector of phenotypes p be split into two parts: continuous traits, u, modelled as normally distributed, and discrete phenotypes, v, measured on an ordinal scale. For the latter, as well as for the g and y variables, we apply the threshold approach described above and introduce vectors of latent variables \( \overset{\sim }{v} \), \( \overset{\sim }{g} \) and \( \overset{\sim }{y} \), respectively. Therefore, the model (2) is transformed to $$ \begin{array}{ll}\eta &= \mathrm{B}\eta + \Pi \overset{\sim }{g} + \varepsilon \\ \left(\begin{array}{c}u\\ \overset{\sim }{v}\end{array}\right) &= \Lambda \eta + \mathrm{K}\overset{\sim }{y} + \delta \end{array} \qquad (5) $$ Bayesian estimation of model parameters The ML method is most often used to estimate parameters of SEM models. However, if the number of parameters is large, as in our mtmlSEM model, this method is computationally unstable and prone to optimization failure.
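Before turning to the Bayesian machinery, the threshold specification above can be illustrated with a short sketch (ours; the function name and the example category counts are hypothetical): the inner thresholds are simply standard normal quantiles evaluated at the cumulative category proportions.

import numpy as np
from scipy.stats import norm

def ordinal_thresholds(values):
    # returns {-inf = t0, t1, ..., tn = +inf} for an ordinal sample
    values = np.asarray(values)
    _, counts = np.unique(values, return_counts=True)
    cum = np.cumsum(counts) / counts.sum()
    return np.concatenate(([-np.inf], norm.ppf(cum[:-1]), [np.inf]))

# e.g. an additive SNP coded 0/1/2 with category proportions 0.55/0.30/0.15
snp = np.repeat([0, 1, 2], [220, 120, 60])
print(ordinal_thresholds(snp))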
In contrast to the ML method, the Bayesian approach can cope with this instability by taking into account prior information about parameters and maximizing the posterior distribution of parameters and latent variables. We considered the values in the B, Λ, Π, K matrices that were fixed during model construction as prior information and performed Bayesian inference to obtain the posterior distributions for all parameters (denote the set of all parameters as ϕ = {B, Λ, Π, K, Θε, Θδ}) and latent variables (\( \eta, \overset{\sim }{v},\overset{\sim }{g},\overset{\sim }{y} \)) (see Additional File 4). As a result, we were able to generate posterior distributions of parameters by the Gibbs sampler, a Markov chain Monte Carlo algorithm. We initiated each chain with random values and, at each iteration of the sampler, we drew datasets for \( \overset{\sim }{v} \), \( \overset{\sim }{g} \) and \( \overset{\sim }{y} \) from truncated normal distributions, independently of ϕ; datasets for η from the multivariate normal distribution conditional on ϕ; diagonal values in Θε from the inverse gamma distribution conditional on ϕ; values in rows of the block matrix [B, Π] from multivariate normal distributions conditional on ϕ; diagonal values in Θδ from the inverse gamma distribution conditional on ϕ; and values in rows of the block matrix [Λ, K] from multivariate normal distributions conditional on ϕ. To get parameter estimates, we performed Gibbs sampling on 5 chains of length 2000, checked convergence indicators (the Gelman-Rubin diagnostic and the effective sample size), and calculated the parameter estimates. The chickpea dataset The chickpea dataset (Cicer arietinum L.) consists of 404 accessions from the Vavilov Institute of Plant Genetic Resources (VIR) seed bank. In 2017, these accessions were phenotyped for 30 phenological, morphological, agronomical, and biological traits. Some of these traits are categorical and others are quantitative. Phenotype abbreviations and units of measurement are in Additional File 2. Genotyping by sequencing (GBS) of chickpea accessions identified 56,855 segregating single nucleotide polymorphisms (SNPs). These SNPs were further filtered to meet requirements for minor allele frequency (MAF) > 3% and genotype call-rate > 90%. 2579 SNPs in 404 accessions passed all filtering criteria and were retained for further analysis. The phenotype data were further transformed in two ways. Firstly, for some categorical traits, we merged categories to make them more distinct (Additional File 2). Secondly, several quantitative traits were log-transformed to satisfy the assumption of normality (Fig. 4). All quantitative traits were further centered and scaled by calculating z-scores. Fig. 1 Examples of the genome-wide multi-trait SEM model. (a) Connected base model; (b) zero base model; (c) zero extended model; (d) connected extended model. Fig. 2 Latent factors joined to form the structural part of the connected SEM model. Dashed arrows represent relationships which were not present in all training sets for the directed acyclic graph obtained; solid lines represent relationships which were found in each of the 20 training sets. Fig. 3 The sliding-window congruence between models obtained in the 20-fold cross-validation. The height of a peak reflects the number of models having at least one SNP within the window corresponding to the peak. Fig. 4 Distributions of the data after preparation. Grey-coloured traits were not transformed.
Yellow-coloured traits are categorial traits that were transformed; orange-coloured traits are non-categorial and were log-transformed Test for predictive ability The model was validated by 20-fold cross-validation. We randomly partitioned the dataset into 20 training (about 380 samples) and test (20 samples) sets and fixed the splits. For each training set, we independently constructed an mtmlSEM model and obtained parameter estimates after Gibbs sampling on 5 chains taking these parameters to predict values of phenotypic traits in the corresponding test set. The prediction accuracy was estimated by calculating the Pearson correlation between observed and predicted values across all test sets, the coefficient of determination and normalized rooted mean square error (Additional File 5). The datasets analyzed and the scripts during the current study are available in the [GitHub] repository, https://github.com/iganna/mtmlSEM.git All figures were created by A.Igolkina. mtmlSEM: Multi-train multi-locus SEM Genome-wide association studies Factor analysis Yang Q, Wang Y. Methods for analyzing multivariate phenotypes in genetic association studies. J Probab Stat. 2012;2012:1–13. https://doi.org/10.1155/2012/652569. Hackinger S, Zeggini E. Statistical methods to detect pleiotropy in human complex traits. Open Biol. 2017;7:170125. https://doi.org/10.1098/rsob.170125. Laird NM, Ware JH. Random-effects models for longitudinal data. Biometrics. 1982;38:963–74. Korte A, Vilhjálmsson BJ, Segura V, Platt A, Long Q, Nordborg M. A mixed-model approach for genome-wide association studies of correlated traits in structured populations. Nat Genet. 2012;44:1066–71. https://doi.org/10.1038/ng.2376. Ott J, Rabinowitz D. A principal-components approach based on heritability for combining phenotype information. Hum Hered. 1999;49:106–11. https://doi.org/10.1159/000022854. Wang Y, Fang Y, Jin M. A ridge penalized principal-components approach based on heritability for high-dimensional data. Hum Hered. 2007;64:182–91. https://doi.org/10.1159/000102991. Lange C, van Steen K, Andrew T, Lyon H, DeMeo DL, Raby B, et al. A family-based association test for repeatedly measured quantitative traits adjusting for unknown environmental and/or polygenic effects. Stat Appl Genet Mol Biol. 2004;3:1–27. https://doi.org/10.2202/1544-6115.1067. Gao H, Zhang T, Wu Y, Wu Y, Jiang L, Zhan J, et al. Multiple-trait genome-wide association study based on principal component analysis for residual covariance matrix. Heredity (Edinb). 2014;113:526–32. doi:https://doi.org/10.1038/hdy.2014.57. Turley P, Walters RK, Maghzian O, Okbay A, Lee JJ, Fontana MA, et al. Multi-trait analysis of genome-wide association summary statistics using MTAG. Nat Genet. 2018;50:229–37. https://doi.org/10.1038/s41588-017-0009-4. Wu TT, Chen YF, Hastie T, Sobel E, Lange K. Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics. 2009;25:714–21. https://doi.org/10.1093/bioinformatics/btp041. Cho S, Kim H, Oh S, Kim K, Park T. Elastic-net regularization approaches for genome-wide association studies of rheumatoid arthritis. BMC Proc. 2009;3(Suppl 7):S25. https://doi.org/10.1186/1753-6561-3-s7-s25. Yi N, Xu S. Bayesian LASSO for quantitative trait loci mapping. Genetics. 2008;179:1045–55. https://doi.org/10.1534/genetics.107.085589. Wang D, Eskridge KM, Crossa J. Identifying QTLs and epistasis in structured plant populations using adaptive mixed LASSO. J Agric Biol Environ Stat. 2011;16:170–84. 
https://doi.org/10.1007/s13253-010-0046-2. Wen Y-J, Zhang H, Ni Y-L, Huang B, Zhang J, Feng J-Y, et al. Methodological implementation of mixed linear models in multi-locus genome-wide association studies. Brief Bioinform. 2018;19:700–12. https://doi.org/10.1093/bib/bbw145. Crossa J, Pérez-Rodríguez P, Cuevas J, Montesinos-López O, Jarquín D. de los Campos G, et al. genomic selection in plant breeding: methods, models, and perspectives. Trends Plant Sci. 2017;22:961–75. https://doi.org/10.1016/j.tplants.2017.08.011. Goudet J, Kay T, Weir BS. How to estimate kinship. Mol Ecol. 2018;27:4121–35. https://doi.org/10.1111/mec.14833. Segura V, Vilhjálmsson BJ, Platt A, Korte A, Seren Ü, Long Q, et al. An efficient multi-locus mixed-model approach for genome-wide association studies in structured populations. Nat Genet. 2012;44:825–30. https://doi.org/10.1038/ng.2314. Robinson GK. That BLUP is a good thing: the estimation of random effects. Stat Sci. 1991;6:15–32. https://doi.org/10.1214/ss/1177011926. Zhou X, Stephens M. Genome-wide efficient mixed-model analysis for association studies. Nat Genet. 2012;44:821–4. https://doi.org/10.1038/ng.2310. Aulchenko YS, de Koning D-J, Haley C. Genomewide rapid association using mixed model and regression: a fast and simple method for Genomewide pedigree-based quantitative trait loci association analysis. Genetics. 2007;177:577–85. https://doi.org/10.1534/genetics.107.075614. Liu J, Yang C, Shi X, Li C, Huang J, Zhao H, et al. Analyzing association mapping in pedigree-based GWAS using a penalized multitrait mixed model. Genet Epidemiol. 2016;40:382–93. https://doi.org/10.1002/gepi.21975. Zhan X, Zhao N, Plantinga A, Thornton TA, Conneely KN, Epstein MP, et al. Powerful genetic association analysis for common or rare variants with high-dimensional structured traits. Genetics. 2017;206:1779–90. https://doi.org/10.1534/genetics.116.199646. Dutta D, Scott L, Boehnke M, Lee S. Multi-SKAT: general framework to test for rare-variant association with multiple phenotypes. Genet Epidemiol. 2019;43:4–23. https://doi.org/10.1002/gepi.22156. Weighill D, Jones P, Bleker C, Ranjan P, Shah M, Zhao N, et al. Multi-phenotype association decomposition: unraveling complex gene-phenotype relationships. Front Genet. 2019;10. https://doi.org/10.3389/fgene.2019.00417. Lippert C, Casale F, Rakitsch B, Stegle O. LIMIX: genetic analysis of multiple traits. bioRxiv. 2014. http://europepmc.org/article/PPR/ppr7019. Wright S. Correlation and causation. J Agric Res. 1921;20:557–85. Wright S. On the nature of size factors. Genetics. 1918;3:367–74. Igolkina AA, Samsonova MG. SEM: Structural Equation Modeling in Molecular Biology. Biophys (Russian Fed). 2018;63. https://link.springer.com/article/10.1134/S0006350918020100. Igolkina AA, Armoskus C, Newman JRB, Evgrafov OV, McIntyre LM, Nuzhdin SV, et al. Analysis of gene expression variance in schizophrenia using structural equation modeling. Front Mol Neurosci. 2018;11. https://www.frontiersin.org/articles/10.3389/fnmol.2018.00192/full. Pepe D, Grassi M. Investigating perturbed pathway modules from gene expression data via structural equation models. BMC Bioinformatics. 2014;15:132. https://doi.org/10.1186/1471-2105-15-132. Karns R, Succop P, Zhang G, Sun G, Indugula SR, Havas-Augustin D, et al. Modeling metabolic syndrome through structural equations of metabolic traits, comorbid diseases, and GWAS variants. Obesity. 2013;21:745–54. Liu B, de la Fuente A, Hoeschele I. 
Gene network inference via structural equation modeling in Genetical genomics experiments. Genetics. 2008;178:1763–76. https://doi.org/10.1534/genetics.107.080069. Cai X, Bazerque JA, Giannakis GB. Inference of gene regulatory networks with sparse structural equation models exploiting genetic perturbations. PLoS Comput Biol. 2013;9. Verhulst B, Maes HH, Neale MC. GW-SEM: a statistical package to conduct genome-wide structural equation modeling. Behav Genet. 2017;47:345–59. Mi X, Eskridge K, Wang D, Baenziger PS, Campbell BT, Gill KS, et al. Regression-based multi-trait QTL mapping using a structural equation model. Stat Appl Genet Mol Biol. 2010;9:38. https://doi.org/10.2202/1544-6115.1552. Fernando RL, Garrick D. Bayesian Methods Applied to GWAS; 2013. p. 237–74. https://doi.org/10.1007/978-1-62703-447-0_10. Yang Y, Basu S, Mirabello L, Spector L, Zhang L. A Bayesian gene-based genome-wide association study analysis of osteosarcoma trio data using a hierarchically structured prior. Cancer Inform. 2018;17:117693511877510. https://doi.org/10.1177/1176935118775103. Sokolkova AB, Chang PL, Carrasquila-Garcia N, Nuzhdina NV, Cook DR, Nuzhdin SV, et al. Signatures of Ecological Adaptation in Genomes of Chickpea Landraces. Biophys (Russian Fed). 2020;65. https://link.springer.com/article/10.1134/S0006350920020244. Purushothaman R, Upadhyaya HD, Gaur PM, Gowda CLL, Krishnamurthy L. Kabuli and desi chickpeas differ in their requirement for reproductive duration. F Crop Res. 2014;163:24–31. Taiz L, Zeiger E. Plant physiology. 5th ed. Sunderland: Sinauer Associates; 2010. Bollen KA. Structural equations with latent variables. Hoboken, NJ: Wiley; 1989. https://doi.org/10.1002/9781118619179. Kline RB. Pronciples and practice of Structural Equation Modeling (3rd ed.): The Gulford Press; 2011. ISBN 9781462523344. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965;30:179–85. https://doi.org/10.1007/BF02289447. Igolkina AA, Meshcheryakov G. semopy: A Python Package for Structural Equation Modeling. Struct Equ Model A Multidiscip J. 2020:1–12. https://www.tandfonline.com/doi/abs/10.1080/10705511.2019.1704289?scroll=top&needAccess=true&journalCode=hsem20. Lee S-Y. Structural equation modeling: a Bayesian approach. Wiley: Chichester; 2007. https://doi.org/10.1002/9780470024737. We would like to thank Katrina Sherbina for the careful proofreading. About this supplement This article has been published as part of BMC Genomics Volume 21 Supplement 8, 2020: Selected articles from the 5th International Scientific Conference "Plant genetics, genomics, bioinformatics, and biotechnology" (PlantGen2019): genomics. The full contents of the supplement are available online at https://bmcgenomics.biomedcentral.com/articles/supplements/volume-21-supplement-8. Model development, data analysis, manuscript preparation and writing were supported by the RBFR grant 18–29-13033 to AI, SN and MS. All the research, namely model development and testing, as well as paper writing and publication costs, were covered from the grant funds. Peter the Great Saint-Petersburg Polytechnic University, Russian Federation, Polytechnicheskaya, 29, St. Petersburg, 195251, Russia Anna A. Igolkina, Georgy Meshcheryakov, Maria V. Gretsova, Sergey V. Nuzhdin & Maria G. Samsonova Centre for Genome Bioinformatics, St. Petersburg State University, St. Petersburg, 199034, Russia Maria V. 
Gretsova Program Molecular & Computational Biology, Dornsife College of Letters Arts and Science, University of Southern California, Los Angeles, CA, USA Sergey V. Nuzhdin Anna A. Igolkina Georgy Meshcheryakov Maria G. Samsonova All authors read and approved the final manuscript. Methodology was developed by A.A.I.; data analysis and visualization were performed by A.A.I.; software was developed by A.A.I. and G.M.; Bayesian inference was performed by A.A.I. and M.V.G. Initial writing and draft preparation were done by A.A.I.; review and editing were performed by A.A.I., M.G.S., S.V.N. and M.V.G. Data were curated by M.G.S. and S.V.N. Correspondence to Anna A. Igolkina or Maria G. Samsonova. Supplementary information Additional File 1: Absolute values of correlations between phenotypic traits. Additional File 2: Description of phenotypic traits. Additional File 3: Maximum likelihood estimates. Additional File 4: Bayesian inference and Gibbs sampling. Additional File 5: Root mean square error. Igolkina, A.A., Meshcheryakov, G., Gretsova, M.V. et al. Multi-trait multi-locus SEM model discriminates SNPs of different effects. BMC Genomics 21, 490 (2020). https://doi.org/10.1186/s12864-020-06833-2 Multi-trait multi-locus SEM
CommonCrawl
Exam-Style Questions on Graphs Problems on Graphs adapted from questions set in previous Mathematics exams. GCSE Higher The equation of the line L1 is \(y = 2 - 5x\). The equation of the line L2 is \(3y + 15x + 17 = 0\). Show that these two lines are parallel. Worked Solution Which of the following lines is parallel to the x-axis? \(y=-7\) \(x-y=1\) \(x=10\) \(x+y=0\) \(x=y\) Show that line \(5y = 7x - 7\) is perpendicular to line \(7y = -5x + 55\). The graph shows the temperature recorded in a tent over a 25 hour period. (a) For how many of the 25 hours is the temperature more than 5 degrees? (b) By how much does the temperature change between hour 19 and hour 24? A straight line goes through the points \((a, b)\) and \((c, d)\), where $$a + 3 = c$$ $$b + 6 = d$$ Find the gradient of the line. The graph shows the height of water in a container over a period time during which the water enters the container at a constant rate. Which of the following might be a diagram of the container? a. b. c. d. e. The diagram is of a container which is filled with water entering at a constant rate. Which of the following might be the graph of height of the water in the container plotted against time? Match the equation with the letter of its graph $$y=3-\frac{10}{x}$$ $$y=2^x$$ $$y=\sin x$$ $$y=x^2+7x$$ $$y=x^2-8$$ $$y=2-x$$ The graph of y = f(x) is drawn accurately on the grid. (a) Write down the coordinates of the turning point of the graph. (b) Write down estimates for the roots of f(x) = 0 (c) Use the graph to find an estimate for f(-5.5). (a) By completing the square, solve \(x^2+8x+13=0\) giving your answer to three significant figures. (b) From the completed square you found in part (a) find the minimum value of the curve \(y=x^2+8x+13\). Suppose a rhombus ABCD is drawn on a coordinate plane with the point A situated at (4,7). The diagonal BD lies on the line \(y = 2x - 5 \) Find the equation the line that passes through A and C. The graph shows the temperature (\(T\)) of an unidentified flying object over a period of 10 seconds (\(t\)). Use the graph to work out an estimate of the rate of decrease of temperature at 7 seconds. You must show your working. The graph of the following equation is drawn and then reflected in the x-axis $$y = 2x^2 - 3x + 2$$ (a) What is the equation of the reflected curve? The original curve is reflected in the y-axis. (b) What is the equation of this second reflected curve? The diagram below is a sketch of a curve, a parabola, which is not drawn to scale. The quadratic graph intersects the x-axis at (-5, 0) and at another point. It also intersects the y-axis at (0, –10). Work out the coordinates of the turning point of the graph if its equation is in the form \(y = x^2 + bx + c \). The graph shows the distance travelled, in metres, of a commuter train as it pulls out of a station. Estimate the speed of the train, in m/s, after 10 seconds. You must show your working. (a) Find the interval for which \(x^2 - 9x + 18 \le 0\) (b) The point (-4, -4) is the turning point of the graph of \(y = x^2 + ax + b\), where a and b are integers. Find the values of a and b. (a) Write \(2x^2+8x+27\) in the form \(a(x+b)^2+c\) where \(a\), \(b\), and \(c\) are integers, by 'completing the square' (b) Hence, or otherwise, find the line of symmetry of the graph of \(y = 2x^2+8x+27\) (c) Hence, or otherwise, find the turning point of the graph of \(y = 2x^2+8x+27\) On the grid below, draw the graph of \(y = 1 - 2x\) for values of \(x\) from -2 to 2. 
IB Standard A function is defined as \(f(x) = 2{(x - 3)^2} - 5\) . (a) Show that \(f(x) = 2{x^2} - 12x + 13\). (b) Write down the equation of the axis of symmetry of this graph. (c) Find the coordinates of the vertex of the graph of \(f(x)\). (d) Write down the y-intercept. (e) Make a sketch the graph of \(f(x)\). Let \(g(x) = {x^2}\). The graph of \(f(x)\) may be obtained from the graph of \(g(x)\) by the two transformations: a stretch of scale factor \(s\) in the y-direction; followed by a translation of \(\left( {\begin{array}{*{20}{c}} j\\ k \end{array}} \right)\) . (f) Find the values of \(j\), \(k\) and \(s\). IB Studies Consider a straight line graph L1, which intersects the x-axis at A(8, 0) and the y-axis at B (0, 4). (a) Write down the coordinates of C, the midpoint of line segment AB. (b) Calculate the gradient of the line L1. The line L2 is parallel to L1 and passes through the point (5 , 9). (c) Find the equation of L2. Give your answer in the form \(ay + bx + c = 0\) where \(a, b \text{ and } c \in \mathbb{Z}\). \(f\) and \(g\) are two functions such that \(g(x)=3f(x+2)+7\). The graph of \(f\) is mapped to the graph of \(g\) under the following transformations: A vertical stretch by a factor of \(a\) , followed by a translation \(\begin{pmatrix}b \\c \\ \end{pmatrix}\) Find the values of (a) \(a\); (b) \(b\); (c) \(c\). (d) Consider two other functions \(h\) and \(j\). Let \(h(x)=-j(2x)\). The point A(8, 7) on the graph of \(j\) is mapped to the point B on the graph of \(h\). Find the coordinates of B. Let \(f(x) = {x^2}\) and \(g(x) = 3{(x+2)^2}\) . The graph of \(g\) can be obtained from the graph of \(f\) using two transformations. (a) Give a full description of each of the two transformations. (b) The graph of \(g\) is translated by the vector \( \begin{pmatrix}-4\\5\\ \end{pmatrix}\) to give the graph of \(h\). The point \(( 2{\text{, }}-1)\) on the graph of \(f\) is translated to the point \(P\) on the graph of \(h\). Find the coordinates of \(P\). (a) Sketch the graph of \(f(x) = 4\sin x - 5\cos x \), for \(–2\pi \le x \le 2\pi \). (b) Find the amplitude of \(f\). (c) Find the the period of \(f\). (d) Find the \(x\)-intercept that lies between 0 and 3. (e) Hence write \(f(x)\) in the form \(a \sin (bx + c) \). (f) Write down one value of \(x\) such that \(f'(x) = 0 \). (g) Write down the two values of \(p\) for which the equation \(f(x) = p\) has exactly two solutions. Let \(f\) and \(g\) be functions such that \(g(x) = 3f(x - 2) + 7\) . The graph of \(f\) is mapped to the graph of \(g\) under the following transformations: vertical stretch by a factor of \(k\) , followed by a translation \(\left( \begin{array}{l} p\\ q \end{array} \right)\) . Write down the value of: (a) \(k\) (b) \(p\) (c) \(q\) (d) Let \(h(x) = - g(2x)\) . The point A(\(8\), \(7\)) on the graph of \(g\) is mapped to the point \({\rm{A}}'\) on the graph of \(h\) . Find \({\rm{A}}'\) Let \(f(x) = \frac{9x-3}{bx+9}\) for \(x \neq -\frac9b, b \neq 0\). (a) The line \(x = 3\) is a vertical asymptote to the graph of \(f\). Find the value of b. (b) Write down the equation of the horizontal asymptote to the graph of \(f\). (c) The line \(y = c\) , where \(c\in \mathbb R\) intersects the graph of \( \begin{vmatrix}f(x) \end{vmatrix} \) at exactly one point. Find the possible values of \(c\). Two functions are defined as follows: \(f(x) = 2\ln x\) and \(g(x) = \ln \frac{x^2}{3}\). (a) Express \(g(x)\) in the form \(f(x) - \ln a\) , where \(a \in {{\mathbb{Z}}^ + }\) . 
(b) The graph of \(g(x)\) is a transformation of the graph of \(f(x)\) . Give a full geometric description of this transformation.
CommonCrawl
\begin{document} \baselineskip .3in \newcommand{\mbox{pr}}{\mbox{pr}} \newfont{\sss}{cmsy10 at 12pt} \newcommand{Section }{Section } \title{\textbf{\large Statistical Matching using Fractional Imputation }} \author{Jae-kwang Kim \and Emily Berg \and Taesung Park} \maketitle \begin{abstract} Statistical matching is a technique for integrating two or more data sets when information available for matching records for individual participants across data sets is incomplete. Statistical matching can be viewed as a missing data problem where a researcher wants to perform a joint analysis of variables that are never jointly observed. A conditional independence assumption is often used to create imputed data for statistical matching. We consider an alternative approach to statistical matching without using the conditional independence assumption. We apply parametric fractional imputation of \cite{kim11} to create imputed data using an instrumental variable assumption to identify the joint distribution. We also present variance estimators appropriate for the imputation procedure. We explain how the method applies directly to the analysis of data from split questionnaire designs and measurement error models. \end{abstract} Key Words: Data combination, Data fusion, Hot deck imputation, Split questionnaire design, Measurement error model. \section{Introduction} Survey sampling is a scientific tool for making inference about the target population. However, we often do not collect all the necessary information in a single survey, due to time and cost constraints. In this case, we wish to exploit, as much as possible, information already available from different data sources from the same target population. Statistical matching, sometimes called data fusion \citep{baker89} or data combination \citep{ridder07}, aims to integrate two or more data sets when information available for matching records for individual participants across data sets is incomplete. \cite{dorazio06} and \cite{leulescu13} provide comprehensive overviews of the statistical matching techniques in survey sampling. Statistical matching can be viewed as a missing data problem where a researcher wants to perform a joint analysis of variables that are never jointly observed. \cite{moriarity01} provide a theoretical framework for statistical matching under a multivariate normality assumption. \cite{rassler02} develops multiple imputation techniques for statistical matching with pre-specified parameter values for non-identifiable parameters. \cite{lahiri05} address regression analysis with linked data. \cite{ridder07} provide a rigorous treatment of the assumptions and approaches for statistical matching in the context of econometrics. \begin{table}[H] \begin{center} \begin{tabular}{rcccc} \hline \ \ \ \ \ \ \ \ & \ \ $X$ \ \ & \ \ $Y_1$ \ \ & \ \ $Y_2$ \ \ & \ \ \ \ \ \ \ \ \\ \cline{2-5} \ \ \ \ \ \ \ \ \ \ \ Sample A \ \ \ \ \ & o & o & & \\ \ \ \ \ \ \ \ \ \ \ \ Sample B \ \ \ \ \ & o & & o & \\ \hline \end{tabular} \caption{A Simple data structure for matching} \label{table1} \end{center} \end{table} Statistical matching aims to construct fully augmented data files to perform statistically valid joint analyses. To simplify the setup, suppose that two surveys, Survey A and Survey B, contain partial information about the population. Suppose that we observe $x$ and $y_1$ from the Survey A sample and observe $x$ and $y_2$ from the Survey B sample. Table \ref{table1} illustrates a simple data structure for matching. 
If the Survey B sample (Sample B) is a subset of the Survey A sample (Sample A), then we can apply record linkage techniques \citep{herzog07} to obtain values of $y_{1}$ for the survey B sample. However, in many cases, such perfect matching is not possible (for instance, because the samples may contain non-overlapping subsets), and we may rely on a probabilistic way of identifying the ``statistical twins'' from the other sample. That is, we want to create $y_1$ for each element in sample B by finding the nearest neighbor from Sample A. Nearest neighbor imputation has been discussed by many authors, including \cite{chen01} and \cite{beaumont09}, in the context of missing survey items. Finding the nearest neighbor is often based on ``how close'' they are in terms of $x$'s only. Thus, in many cases, statistical matching is based on the assumption that $y_1$ and $y_2$ are independent, conditional on $x$. That is, \begin{equation} y_1 \perp y_2 \mid x. \label{cia} \end{equation} Assumption (\ref{cia}) is often referred to as the conditional independence (CI) assumption and is heavily used in practice. In this paper, we consider an alternative approach that does not rely on the CI assumption. Instead, we adopt an approach to statistical matching based on an instrumental variable, as discussed briefly in \cite{ridder07}. \cite{kimshao13} propose the fractional imputation method for statistical matching under an instrumental variable assumption. After we discuss the assumptions in Section 2, we review the fractional imputation methods in Section 3. Furthermore, we consider two extensions, one to split questionnaire designs (in Section 4) and the other to measurement error models (in Section 5). Results from two simulation studies are presented in Section 6. \section{Basic Setup} For simplicity of the presentation, we consider the setup of two independent surveys from the same target population consisting of $N$ elements. As discussed in Section 1, suppose that Sample A collects information only on $x$ and $y_1$ and Sample B collects information only on $x$ and $y_2$. To illustrate the idea, suppose for now that $(x, y_1, y_2)$ are generated from a normal distribution such that $$ \begin{pmatrix} x \\ y_1 \\ y_2 \end{pmatrix} \sim N \left[\begin{pmatrix} \mu_x \\ \mu_1 \\ \mu_2 \end{pmatrix}, \begin{pmatrix} \sigma_{xx} & \sigma_{1x} & \sigma_{2x} \\ & \sigma_{11} & \sigma_{12} \\ & & \sigma_{22} \end{pmatrix} \right] . $$ Clearly, under the data structure in Table \ref{table1}, the parameter $\sigma_{12}$ is not estimable from the samples. The conditional independence assumption in (\ref{cia}) implies that $ \sigma_{12} = \sigma_{1x} \sigma_{2x}/\sigma_{xx} $ and $ \rho_{12} = \rho_{1x} \rho_{2x}$ That is, $\sigma_{12}$ is completely determined from other parameters, rather than estimated directly from the realized samples. Synthetic data imputation under the conditional independence assumption in this case can be implemented in two steps: \begin{description} \item{[Step 1]} Estimate $f(y_1 \mid x)$ from Sample A, and denote the estimate by $\hat{f}_a (y_1 \mid x)$. \item{[Step 2]} For each element $i$ in Sample B, use the $x_i$ value to generate imputed value(s) of $y_1$ from $\hat{f}_a ( y_1 \mid x_i)$. \end{description} Since $y_1$ values are never observed in Sample B, synthetic values of $y_1$ are created for all elements in Sample B, leading to synthetic imputation. \cite{haziza09} provides a nice review of literature on imputation methodology. 
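To fix ideas, the two-step synthetic imputation in [Step 1]--[Step 2] can be sketched for a normal linear working model for $f(y_1 \mid x)$. The code below is only an illustration (the simulated data and parameter values are arbitrary) and is not part of any of the cited methods.
\begin{verbatim}
# [Step 1] fit f(y1 | x) on Sample A;  [Step 2] draw y1* for Sample B.
import numpy as np

rng = np.random.default_rng(0)
x_a = rng.normal(size=200)
y1_a = 1.0 + 2.0 * x_a + rng.normal(size=200)        # Sample A: (x, y1)
x_b = rng.normal(size=150)                           # Sample B observes x (and y2)

X_a = np.column_stack([np.ones_like(x_a), x_a])
beta_hat = np.linalg.lstsq(X_a, y1_a, rcond=None)[0]                   # Step 1
sigma_hat = np.std(y1_a - X_a @ beta_hat, ddof=2)

X_b = np.column_stack([np.ones_like(x_b), x_b])
y1_star = X_b @ beta_hat + rng.normal(scale=sigma_hat, size=len(x_b))  # Step 2
\end{verbatim}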
\cite{kimrao12} present a model-assisted approach to synthetic imputation when only $x$ is available in Sample B. Such synthetic imputation completely ignores the observed information in $y_2$ from Sample B. Statistical matching based on conditional independence assumes that $Cov(y_1, y_2 \mid x)=0$. Thus, the regression of $y_2$ on $x$ and $y_1$ using the imputed data from the above synthetic imputation will estimate a zero regression coefficient for $y_1$. That is, the estimate $\hat{\beta}_2$ for $$ \hat{y}_2 = \hat{\beta}_0 + \hat{\beta}_1 x + \hat{\beta}_2 y_1, $$ will estimate zero. Such analyses can be misleading if CI does not hold. To explain why, we consider an omitted variable regression problem: \begin{eqnarray*} y_1 &=& \beta_0^{(1)} + \beta_1^{(1)} x+ \beta_2^{(1)} z + e_1 \\ y_2 &=& \beta_0^{(2)} + \beta_1^{(2)} x + \beta_2^{(2)} z + e_2 \end{eqnarray*} where $z, e_1, e_2$ are independent and are not observed. Unless $\beta_2^{(1)}=\beta_2^{(2)}=0$, the latent variable $z$ is an unobservable confounding factor that explains why $Cov(y_1, y_2 \mid x) \neq 0$. Thus, the coefficient on $y_{1}$ in the population regression of $y_{2}$ on $x$ and $y_{1}$ is not zero, We consider an alternative approach which is not built on the conditional independence assumption. First, assume that we can decompose $x$ as $x=(x_1, x_2)$ such that \begin{eqnarray*} (i) & & f (y_2 \mid x_1, x_2, y_1 ) = f ( y_2 \mid x_1, y_1) \\ (ii) & & f ( y_1 \mid x_1, x_2=a) \neq f( y_1 \mid x_1, x_2=b) \end{eqnarray*} for some $a \neq b$. Thus, $x_2$ is conditionally independent of $y_2$ given $x_1$ and $y_1$ but $x_2$ is correlated with $y_1$ given $x_1$. Note that $x_1$ may be null or have a degenerate distribution, such as an intercept. The variable $x_2$ satisfying the above two conditions is often called an instrumental variable (IV) for $y_1$. The directed acyclic graph in Figure~\ref{figgraph} illustrates the dependence structure of a model with an instrumental variable. \cite{ridder07} used ``exclusion restrictions'' to describe the instrumental variable assumption. One example where the instrumental variable assumption is reasonable is repeated surveys. In the repeated survey, suppose that $y_t$ is the study variable at year $t$ and satisfies Markov property $$ P( y_{t+1} \mid y_1, \cdots, y_{t} ) = P( y_{t+1} \mid y_{t} ),$$ where $P(y_{t})$ denotes a cumulative distribution function. In this case, $y_{t-1}$ is an instrumental variable for $y_t$. In fact, any last observation of $y_s (s \le t)$ is the instrumental variable for $y_t$. \begin{figure} \caption{Graphical illustration of the dependence structure for a model in which $x_{2}$ is an instrumental variable for $y_{1}$ and $x_{1}$ is an additional covariate in the models for $y_{2}$ and $y_{1}$.} \label{figgraph} \end{figure} Under the instrumental variable assumption, one can use two-step regression to estimate the regression parameters of a linear model. The following example presents the basic ideas. \begin{example} Consider the two sample data structure in Table \ref{table1}. We assume the following linear regression model: \begin{equation} y_{2i} = \beta_0 + \beta_1 x_{1i} + \beta_2 y_{1i} + e_i, \label{model9-3} \end{equation} where $e_i \sim (0, \sigma_e^2)$ and $e_i$ is independent of $(x_{1j}, x_{2j}, y_{1j})$ for all $i,j$. 
In this case, a consistent estimator of $\beta = (\beta_0, \beta_1, \beta_2)$ can be obtained by the two-stage least squares (2SLS) method as follows: \index{Two-stage least squares method} \index{2SLS method | \see{Two-stage least squares method}} \begin{enumerate} \item From sample $A$, fit the following ``working model'' for $y_1$ \begin{equation} y_{1i} = \alpha_0 + \alpha_1 x_{1i} + \alpha_2 x_{2i} + u_i , \ \ \ u_i \sim (0, \sigma_u^2) \label{model9-4} \end{equation} to obtain a consistent estimator of $\alpha=(\alpha_0, \alpha_1, \alpha_2)'$ defined by $$ \hat{\alpha} = (\hat{\alpha}_0, \hat{\alpha}_1, \hat{\alpha}_2)'= \left( X'X \right)^{-1} X' Y_1 $$ where $X= \left[ X_0, X_1, X_2 \right]$ is a matrix whose $i$-th row is $(1, x_{1i}, x_{2i})$ and $Y_1$ is a vector with $y_{1i}$ being the $i$-th component. \item A consistent estimator of $\beta=(\beta_0, \beta_1, \beta_2)'$ is obtained by the least squares method for the regression of $y_{2i}$ on $(1, x_{1i}, \hat{y}_{1i})$ where $\hat{y}_{1i}=\hat{\alpha}_0 + \hat{\alpha}_1 x_{1i} + \hat{\alpha}_2 x_{2i} $. \end{enumerate} \label{example9.1} \end{example} Asymptotic unbiasedness of the 2SLS estimator under the instrumental variable assumption is discussed in Appendix A. The 2SLS method is not directly applicable if the regression model (\ref{model9-3}) is nonlinear. Also, while the 2SLS method gives estimates of the regression parameters, 2SLS does not provide consistent estimators for more general parameters such as $\theta=Pr(y_2 <1 \mid y_1 <3)$. Stochastic imputation can provide a solution for estimating a more general class of parameters. We explain how to modify parametric fractional imputation of \cite{kim11} to address general purpose estimation in statistical matching problems. \section{Fractional imputation} We now describe the fractional imputation methods for statistical matching without using the CI assumption. The use of fractional imputation for statistical matching was originally presented in Chapter 9 of Kim and Shao (2013). To explain the idea, note that $y_1$ is missing in Sample B and our goal is to generate $y_1$ from the conditional distribution of $y_1$ given the observations. That is, we wish to generate $y_1$ from \begin{equation} f\left( y_1 \mid x, y_2 \right) \propto f \left( y_2 \mid x, y_1 \right) f \left( y_1 \mid x \right). \label{9-2} \end{equation} To satisfy model identifiability, we may assume that $x_2$ is an IV for $y_1$. Under IV assumption, (\ref{9-2}) reduces to \begin{equation*} f\left( y_1 \mid x, y_2 \right) \propto f \left( y_2 \mid x_1, y_1 \right) f \left( y_1 \mid x \right). \end{equation*} To generate $y_1$ from (\ref{9-2}), we can consider the following two-step imputation: \begin{enumerate} \item Generate $y_1^*$ from $ \hat{f}_a \left( y_1 \mid x \right)$. \item Accept $y_1^*$ if $f\left( y_2 \mid x, y_1^* \right)$ is sufficiently large. \end{enumerate} Note that the first step is the usual method under the conditional independence assumption. The second step incorporates the information in $y_2$. The determination of whether $f(y_{2}\,|\,x ,y_{1}^{*})$ is sufficiently large required for Step 2 is often made by applying a Markov Chain Monte Carlo (MCMC) method such as the Metropolis-Hastings algorithm \citep{chib1995understanding}. That is, let $y_1^{(t-1)}$ be the current value of $y_1$ in the Markov Chain. Then, we accept $y_1^*$ with probability $$ R(y_1^*, y_1^{(t-1)} ) = \min \left\{ 1, \frac{ f( y_2 \mid x, y_1^*)}{ f(y_2 \mid x, y_1^{(t-1)}) } \right\}. 
$$ Such algorithms can be computationally cumbersome because of slow convergence of the MCMC algorithm. Parametric fractional imputation of \cite{kim11} enables generating imputed values in (\ref{9-2}) without requiring MCMC. The following EM algorithm by fractional imputation can be used: \begin{enumerate} \item For each $i \in B$, generate $m$ imputed values of $y_{1i}$, denoted by $y_{1i}^{*(1)}, \cdots, y_{1i}^{*(m)}$, from $ \hat{f}_a \left( y_1 \mid x_i \right)$, where $\hat{f}_a \left( y_1 \mid x \right)$ denotes the estimated density for the conditional distribution of $y_1$ given $x$ obtained from sample A. \item Let $\hat{\theta}_t$ be the current parameter value of $\theta$ in $f\left( y_2 \mid x, y_1 \right)$. For the $j$-th imputed value $y_{1i}^{*(j)}$, assign the fractional weight $$ w_{ij(t)}^* \propto f\left( y_{2i} \mid x_{i}, y_{1i}^{*(j)}; \hat{\theta}_t \right)$$ such that $ \sum_{j=1}^m w_{ij}^* = 1$. \item Solve the fractionally imputed score equation for $\theta$ \begin{equation} \sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij(t)}^* S(\theta; x_{i}, y_{1i}^{*(j)} , y_{2i}) = 0 \label{10} \end{equation} to obtain $\hat{\theta}_{t+1}$, where $S(\theta; x, y_1, y_2) = \partial \log f ( y_2 \mid x, y_1; \theta)/\partial \theta$, and $w_{ib}$ is the sampling weight of unit $i$ in Sample B. \item Go to step 2 and continue until convergence. \end{enumerate} In (\ref{10}), note that, for sufficiently large $m$, \begin{eqnarray*} \sum_{j=1}^m w_{ij(t)}^* S(\theta; x_{i}, y_{1i}^{*(j)} , y_{2i}) & \cong & \frac{ \int S(\theta; x_{i}, y_1, y_{2i}) f( y_{2i} \mid x_{i}, y_{1i}^{*(j)}; \hat{\theta}_t ) \hat{f}_a ( y_1 \mid x_i ) dy_1 }{ \int f( y_{2i} \mid x_{i}, y_{1i}^{*(j)}; \hat{\theta}_t ) \hat{f}_a ( y_1 \mid x_i ) dy_1 } \\ &=& E\left\{ S(\theta; x_{i}, Y_1, y_{2i}) \mid x_{i}, y_{2i} ; \hat{\theta}_t \right\} . \end{eqnarray*} If $y_{i1}$ is categorical, then the fractional weight can be constructed by the conditional probability corresponding to the realized imputed value \citep{ibrahim90}. Step 2 is used to incorporate observed information of $y_{i2}$ in Sample B. Note that Step 1 is not repeated for each iteration. Only Step 2 and Step 3 are iterated until convergence. Because Step 1 is not iterated, convergence is guaranteed and the observed likelihood increases. See Theorem 2 of \cite{kim11}. \begin{remark} In Section 2, we introduce IV only because this is what it is typically done in the literature to ensure identifiability. The proposed method itself does not rely on this assumption. To illustrate a situation where we can identify the model without introducing the IV assumption, suppose that the model is \begin{eqnarray*} y_2 &=& \beta_0 + \beta_1 x + \beta_2 y_1 + e_2 \\ y_1 &= & \alpha_0 + \alpha_1 x + e_1 \end{eqnarray*} with $e_1 \sim N(0, x \sigma_1^2)$ and $e_2 \mid e_1 \sim N(0, \sigma_2^2)$, then $$ f( y_2 \mid x) = \int f( y_2 \mid x, y_1 ) f(y_1 \mid x) d y_1 $$ is also a normal distribution with mean $(\beta_0 + \beta_2 \alpha_0) + (\beta_1 + \beta_2 \alpha_1) x$ and variance $\sigma_2^2 + \beta_2^2 \sigma_1^2 x$. Under the data structure in Table 1, such a model is identified without assuming the IV assumption. \end{remark} Instead of generating $y_{1i}^{*(j)}$ from $\hat{f}_a( y_1 \mid x_{i} )$, we can consider a hot-deck fractional imputation (HDFI) method, where all the observed values of $y_{1i}$ in Sample A are used as imputed values. 
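To illustrate the EM algorithm above, the following toy implementation (a minimal sketch under an assumed normal outcome model and a working model for $f(y_1 \mid x)$; it is not the estimator studied in this paper) carries out Steps 1--4 with $f(y_2 \mid x_1, y_1; \theta)$ normal with mean $\theta_0 + \theta_1 x_1 + \theta_2 y_1$ and variance $\sigma^2$, and with $x_2$ playing the role of the instrumental variable.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

n_b, m = 300, 50
x1 = rng.normal(size=n_b)
x2 = rng.normal(size=n_b)
y1_true = 0.5 + x1 + x2 + rng.normal(size=n_b)        # never observed in B
y2 = 1.0 + 0.5 * x1 + 0.8 * y1_true + rng.normal(scale=0.7, size=n_b)

# Step 1: imputed values drawn once from fhat_a(y1 | x1, x2); for brevity the
#         true working model is plugged in instead of an estimate from Sample A.
y1_imp = (0.5 + x1 + x2)[:, None] + rng.normal(size=(n_b, m))

theta, s2 = np.zeros(3), 1.0
for _ in range(100):
    # Step 2: fractional weights proportional to f(y2_i | x1_i, y1*_ij; theta)
    mu = theta[0] + theta[1] * x1[:, None] + theta[2] * y1_imp
    logw = -0.5 * (y2[:, None] - mu) ** 2 / s2
    w = np.exp(logw - logw.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    # Step 3 (M-step): weighted least squares over the fractionally imputed data
    Z = np.stack([np.ones((n_b, m)),
                  np.broadcast_to(x1[:, None], (n_b, m)),
                  y1_imp], axis=2).reshape(-1, 3)
    ww, yy = w.reshape(-1), np.repeat(y2, m)
    theta = np.linalg.solve(Z.T @ (Z * ww[:, None]), Z.T @ (ww * yy))
    s2 = np.sum(ww * (yy - Z @ theta) ** 2) / n_b
print(theta, s2)   # near (1.0, 0.5, 0.8) and 0.49, up to Monte Carlo error
\end{verbatim}
We now return to the hot-deck version of the procedure.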
In this case, the fractional weights in Step 2 are given by $$ w_{ij}^* (\hat{\theta}_t) \propto w_{ij0}^* f\left( y_{2i} \mid x_{i}, y_{1i}^{*(j)}; \hat{\theta}_t \right), $$ where \begin{equation} w_{ij0}^* = \frac{ \hat{f}_a (y_{1j} \mid x_{i} ) }{ \sum_{k \in A} w_{ka} \hat{f}_a (y_{1j} \mid x_{k} ) } . \label{fwgt0} \end{equation} The initial fractional weight $w_{ij0}^*$ in (\ref{fwgt0}) is computed by applying importance weighting with $$ \hat{f}_a ( y_{1j}) = \int \hat{f}_a(y_{1j} \mid x ) \hat{f}_a (x) dx \propto \sum_{i \in A} w_{ia} \hat{f}_{a} (y_{1j} \mid x_i )$$ as the proposal density for $y_{1j}$. The M-step is the same as for parametric fractional imputation. See \cite{kimyang13} for more details on HDFI. In practice, we may use a single imputed value for each unit. In this case, the fractional weights can be used as the selection probability in Probability-Proportional-to-Size (PPS) sampling of size $m=1$. For variance estimation, we can either use a linearization method or a resampling method. We first consider variance estimation for the maximum likelihood estimator (MLE) of $\theta$. If we use a parametric model ${f}(y_1 \mid x)= f( y_1 \mid x; {\theta}_1)$ and $f( y_2 \mid x, y_1; \theta_2)$, the MLE of $\theta=(\theta_1, \theta_2)$ is obtained by solving \begin{equation} \left[ S_1(\theta_1) , \bar{S}_2(\theta_1, \theta_2) \right]= (0,0), \label{score} \end{equation} where $S_1(\theta_1)= \sum_{i \in A} w_{ia} S_{i1} (\theta_1)$, $S_{i1}(\theta_1) = \partial \log f( y_{1i} \mid x_i; \theta_1)/ \partial \theta_1$ is the score function of $\theta_1$, $$\bar{S}_2(\theta_1, \theta_2) = E\{ S_2(\theta_2) \mid X, Y_2; \theta_1, \theta_2 \}, $$ $S_2(\theta_2)= \sum_{i \in B} w_{ib} S_{i2}(\theta_2) $, and $S_{i2}(\theta_2) = \partial \log f (y_{2i} \mid x_i, y_{1i}; \theta_2)/ \partial \theta_2$ is the score function of $\theta_2$. Note that we can write $\bar{S}_2(\theta_1, \theta_2) = \sum_{i \in B} w_{ib} E\{ S_{i2} (\theta_2) \mid x_i, y_{2i} ; \theta\}$. Thus, \begin{eqnarray*} \frac{\partial }{\partial \theta_1'} \bar{S}_2 (\theta) &=& \sum_{i \in B} w_{ib} \frac{\partial }{\partial \theta_1'} \left[ \frac{\int S_{i2}(\theta_2) f (y_{1} \mid x_i; \theta_1) f(y_{2i} \mid x_i, y_1; \theta_2) dy_1}{ \int f (y_1 \mid x_i; \theta_1) f(y_{2i} \mid x_i, y_1; \theta_2) dy_1} \right] \\ &=& \sum_{i \in B} w_{ib} E\{ S_{i2}(\theta_2) S_{i1} (\theta_1)' \mid x_i, y_{2i}; \theta \} \\ &- & \sum_{i \in B} w_{ib} E\{ S_{i2}(\theta_2) \mid x_i, y_{2i}; \theta \}E\{ S_{i1} (\theta_1)' \mid x_i, y_{2i}; \theta\} \end{eqnarray*} and \begin{eqnarray*} \frac{\partial }{\partial \theta_2'} \bar{S}_2 (\theta) &=& \sum_{i \in B} w_{ib} \frac{\partial }{\partial \theta_2'} \left[ \frac{\int S_{i2}(\theta_2) f (y_{1} \mid x_i; \theta_1) f(y_{2i} \mid x_i, y_1; \theta_2) dy_1}{ \int f (y_1 \mid x_i; \theta_1) f(y_{2i} \mid x_i, y_1; \theta_2) dy_1} \right] \\ &=& \sum_{i \in B} w_{ib} E\{ \frac{\partial }{ \partial \theta_2'} S_{i2}(\theta_2) \mid x_i, y_{2i}; \theta\}\\ &+& \sum_{i \in B} w_{ib} E\{ S_{i2}(\theta_2) S_{i2} (\theta_2)' \mid x_i, y_{2i}; \theta\}\\ &-& \sum_{i \in B} w_{ib}E\{ S_{i2}(\theta_2) \mid x_i, y_{2i}; \theta \}E\{ S_{2i} (\theta_2)' \mid x_i, y_{2i}; \theta \}. 
\end{eqnarray*} Now, $\partial \bar{S}_2 (\theta)/\partial \theta_1'$ can be consistently estimated by \begin{equation} \hat{B}_{21} = \sum_{i \in B}w_{ib} \sum_{j=1}^m w_{ij}^* S_{2ij}^*( \hat{\theta}_2) \left\{ S_{1ij}^* (\hat{\theta}_1)- \bar{S}_{1i}^* (\hat{\theta}_1) \right\}' , \label{9-bb} \end{equation} where $ S_{1ij}^* (\hat{\theta}_1)= S_1(\hat{\theta}_1; x_i, y_{1i}^{*(j)}) $, $ S_{2ij}^* (\hat{\theta}_2)= S_2(\hat{\theta}_2; x_i, y_{1i}^{*(j)}, y_{2i}) $, and $\bar{S}_{1i} ^*(\hat{\theta}_1) = \sum_{j=1}^m w_{ij}^* S_1(\hat{\theta}_1; x_i, y_{1i}^{*(j)})$. Also, $\partial \bar{S}_2 (\theta)/ \partial \theta_2'$ can be consistently estimated by \begin{equation} - \hat{I}_{22} = \sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij}^* \dot{S}_{2ij}^* (\hat{\theta}_2) - \hat{B}_{22} \label{9-tau} \end{equation} where $$ \hat{B}_{22} = \sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij}^* S_{2ij}^* (\hat{\theta}_2) \left\{ S_{2ij}^* (\hat{\theta}_2) - \bar{S}_{2i}^* (\hat{\theta}_2) \right\}' , $$ $\dot{S}_{2ij}^* ({\theta}_2)= \partial S_2(\theta_2; x_i, y_{1i}^{*(j)}, y_{2i})/\partial \theta_2'$ and $\bar{S}_{2i}^* (\theta_2) = \sum_{j=1}^m w_{ij}^* S_{2ij}^* (\theta_2)$. Using a Taylor expansion with respect to $\theta_1$, \begin{eqnarray*} \bar{S}_2 (\hat{\theta}_1, \theta_2 ) & \cong & \bar{S}_2 ( {\theta}_1, \theta_2 ) - E \left\{ \frac{\partial}{ \partial \theta_1'}\bar{S}_2 (\theta) \right\} \left[ E\left\{ \frac{\partial}{ \partial \theta_1'} S_1(\theta_1) \right\} \right]^{-1} S_1(\theta_1) \\ &=& \bar{S}_2 (\theta) +K S_1(\theta_1), \end{eqnarray*} and we can write $$ V(\hat{\theta}_2) \doteq \left\{ E\left(\frac{\partial}{ \partial \theta_2' } \bar{S}_2 \right) \right\}^{-1} V\left\{ \bar{S}_2 (\theta ) + K S_1(\theta_1) \right\} \left\{ E\left(\frac{\partial}{ \partial \theta_2' } \bar{S}_2 \right) \right\}^{-1'}. $$ Writing $$ \bar{S}_{2}(\theta ) = \sum_{i \in B} w_{ib} \bar{s}_{2i} (\theta ), $$ with $\bar{s}_{2i}(\theta) = E\{ S_{i2}(\theta_{2})\,|\, x_i, y_{2i}; \theta\}$, a consistent estimator of $V\left\{ \bar{S}_2 (\theta) \right\}$ can be obtained by applying a design-consistent variance estimator to $\sum_{i\in B} w_{ib}\hat{s}_{2i}$ with $ \hat{s}_{2i} = \sum_{j=1}^m w_{ij}^* S_{2ij}^* (\hat{\theta}_2) $. Under simple random sampling for Sample B, we have $$ \hat{V}\left\{ \bar{S}_2 (\theta) \right\} =n_B^{-2} \sum_{i \in B} \hat{s}_{2i} \hat{s}_{2i}'. $$ Also, $$ V\left\{ K S_1 (\theta_1) \right\}$$ is consistently estimated by $$ \hat{V}_2 = \hat{K} \hat{V}(S_1) \hat{K}', $$ where $\hat{K}=\hat{B}_{21} \hat{I}_{11}^{-1}$, $\hat{B}_{21}$ is defined in (\ref{9-bb}), and $\hat{I}_{11}= - \partial S_1 (\theta_1)/ \partial \theta_1'$ evaluated at $\theta_1=\hat{\theta}_1$. Since the two terms $\bar{S}_2 (\theta )$ and $S_1(\theta_1)$ are independent, the variance can be estimated by $$ \hat{V}(\hat{\theta}) \doteq \hat{I}_{22}^{-1} \left[ \hat{V}\left\{ \bar{S}_2 (\theta ) \right\} + \hat{V}_2\right] \hat{I}_{22}^{-1'} , $$ where $\hat{I}_{22}$ is defined in (\ref{9-tau}). More generally, one may consider estimation of a parameter $\eta$ defined as a root of the census estimating equation $\sum_{i=1}^{N}U(\eta; x_i, y_{1i}, y_{2i})=0$. Variance estimation of the FI estimator of $\eta$ computed from $\sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij}^{*}U(\eta; x_i, y_{1i}^{*(j)}, y_{2i})=0$ is discussed in Appendix B. \section{Split questionnaire survey design} In Section 3, we consider the situation where Sample A and Sample B are two independent samples from the same target population. 
We now consider another situation of a split questionnaire design where the original sample $S$ is selected from a target population and then Sample A and Sample B are randomly chosen such that $A \cup B = S$ and $A \cap B = \phi$. We observe $(x,y_1)$ from Sample A and observe $(x, y_2)$ from Sample B. We are interested in creating fully augmented data with observation $(x, y_1, y_2)$ in $S$. Such split questionnaire survey designs are gaining popularity because they reduce response burden \citep{raghu95, chipperfield09}. Split questionnaire designs have been investigated for the Consumer Expenditure survey \citep{gonzalez08} and the National Assessment of Educational Progress (NAEP) survey in the US. In applications of split-questionnaire designs, analysts may be interested in multiple parameters such as the mean of $y_{1}$ and the mean of $y_{2}$, in addition to the coefficient in the regression of $y_{2}$ on $y_{1}$. To construct a fully augmented dataset in $S$, we still assume the instrumental variable assumption given in $(i)$ and $(ii)$ of Section 2. That is, we assume $x = (x_1, x_2)$, where $x_2$ satisfies $f(y_{2}\,|\, x_{1}, x_{2}, y_{1}) = f(y_{2}\,|\, x_{1},y_{1})$ and $f(y_{1}\,|\, x_{1},x_{2} = a) \neq f(y_{1}\,|\,x_{1},x_{2} = b)$ for some $a\neq b$. One can use the sample data for inference about the marginal distribution of $y_{1}$, the marginal distribution of $y_{2}$, and the conditional distribution of $y_{1}$ or $y_{2}$ given $x$. The instrumental variable assumption permits identification of the parameters defining the joint distribution of $y_1$ and $y_{2}$. Estimators of parameters in the marginal distributions of $y_{1}$ and $y_{2}$ based on the fully imputed data set are more efficient than estimators based only on the sample data if $y_{1}$ and $y_{2}$ are correlated. In some split questionnaire designs (i.e. \cite{raghu95}), the sample design is constructed so that every pair of questions is assigned to some subsample. This restriction on the design permits inference for joint distributions. The instrumental variable assumption allows inference for joint distributions with more general designs where some pairs of questions (i.e., questions leading to responses $y_{2}$ and $y_{1}$) are never asked to the same individual. We consider a design where the original Sample $S$ is partitioned into two subsamples: $A$ and $B$. We assume that $x_{i}$ is observed for $i\in S$, $y_{1i}$ is collected for $i\in A$ and $y_{2i}$ is collected for $i\in B$. (For simplicity, we assume that no nonresponse occurs for either Sample $A$ or Sample $B$.) The probability of selection into $A$ or $B$ may depend on $x_{i}$ but can not depend on $y_{1i}$ or $y_{2i}$. As a consequence, the design used to select subsample $A$ or $B$ is non-informative for the specified model \citep[Chapter 6]{fuller09}. We let $w_{i}$ denote the sampling weight associated with the full sample $S$. We assume a procedure is available for estimating the variance of an estimator of the form $\hat{Y} = \sum_{i\in S} w_{i}y_{i}$, and we denote the variance estimator by $\hat{V}_{s}(\sum_{i\in S} w_{i}y_{i})$. A procedure for obtaining a fully imputed data set is as follows. First, use the procedure of Section 3 to obtain imputed values $\{y_{1i}^{*(j)}: i\in B, j = 1,\ldots, m\}$ and an estimate, $\hat{\theta}$, of the parameter in the distribution $f(y_{2}\,|\,y_{1},x_{1}; \theta)$. 
The estimate $\hat{\theta}$ is obtained by solving \begin{equation} \sum_{i \in B} w_i \sum_{j=1}^m w_{ij}^* S_2(\theta; x_{1i}, y_{1i}^{*(j)}, y_{2i} ) = 0, \label{4-1} \end{equation} where $S_2(\theta; x_1, y_1, y_2) = \partial \log f(y_2 \mid y_1, x_1; \theta)/\partial \theta$. Given $\hat{\theta}$, generate imputed values $y_{2i}^{*(j)} \sim f(y_{2}\,|\,y_{1i},x_{1i}; \hat{\theta})$, for $i\in A$ and $j=1,\ldots, m$. Under the instrumental variable assumption, the parameter estimator $\hat{\theta}$ generated by solving (\ref{4-1}) is fully efficient in the sense that the imputed value of $y_{2i}$ for Sample A leads to no efficiency gain. To see this, note that the score equation using the imputed value of $y_{2i}$ is computed by \begin{equation} \sum_{i \in A} w_i m^{-1} \sum_{j=1}^m S_2(\theta; x_{1i}, y_{1i}, y_{2i}^{*(j)} ) + \sum_{i \in B} w_i m^{-1}\sum_{j=1}^m w_{ij}^* S_2( \theta; x_{1i}, y_{1i}^*, y_{2i}) = 0. \label{4-2} \end{equation} Because $y_{2i}^{*(1)}, \cdots, y_{2i}^{*(m)}$ are generated from $f(y_{2}\,|\,y_{1i},x_{1i}; \hat{\theta})$, $$ p \lim_{m \rightarrow \infty} \sum_{i \in A} w_i m^{-1} \sum_{j=1}^m S_2(\theta; x_{1i}, y_{1i}, y_{2i}^{*(j)} ) = \sum_{i \in A} w_i E\{ S_2(\theta; x_{1i}, y_{1i}, Y_2) \mid y_{1i}, x_{1i} ; \hat{\theta} \}. $$ Thus, by the property of score function, the first term of (\ref{4-2}) evaluated at $\theta=\hat{\theta}$ is close to zero and the solution to (\ref{4-2}) is essentially the same as the solution to (\ref{4-1}). That is, there is no efficiency gain in using the imputed value of $y_{2i}$ in computing the MLE for $\theta$ in $f(y_{2} \mid y_1, x_1; \theta)$. However, the imputed values of $y_{2i}$ can improve the efficiency of inferences for parameters in the joint distribution of $(y_{1i}, y_{2i})$. As a simple example, consider estimation of $\mu_{2}$, the marginal mean of $y_{2i}$. Under simple random sampling, the imputed estimator of $\theta = \mu_2$ is \begin{equation} \hat{\theta}_{I,m} = \frac{1}{n} \left\{\sum_{i\in A} \left( m^{-1}\sum_{j=1}^{m}y_{2i}^{*(j)} \right)+ \sum_{i\in B}y_{2i}\right\}. \end{equation} For sufficiently large $m$, we can write \begin{eqnarray*} \hat{\theta}_{I,m} &=& \frac{1}{n} \left\{ \sum_{i \in A} \hat{y}_{2i} + \sum_{i \in B} y_{2i} \right\} \\ &=& \frac{1}{n} \left\{ \sum_{i \in A} \left( \hat{\beta}_0 + \hat{\beta}_1 x_{1i} + \hat{\beta}_2 {y}_{1i} \right)+ \sum_{i \in B} y_{2i} \right\} , \end{eqnarray*} where $(\hat{\beta}_0, \hat{\beta}_1, \hat{\beta}_2)$ satisfies $$ \sum_{i \in B} \left( y_{2i} - \hat{\beta}_0 - \hat{\beta}_1 x_{1i} - \hat{\beta}_2 \hat{y}_{1i} \right) = 0 $$ and $\hat{y}_{1i} = \hat{\alpha}_0 + \hat{\alpha}_1 x_{1i} + \hat{\alpha}_2 x_{2i} $ with $(\hat{\alpha}_0 , \hat{\alpha}_1, \hat{\alpha}_2)$ satisfying $$ \sum_{i \in A} \left( y_{1i} - \hat{\alpha}_0 - \hat{\alpha}_1 x_{1i} - \hat{\alpha}_2 x_{2i} \right) = 0. $$ Under the regression model $$ y_{2i} = \beta_0 + \beta_1 x_{1i} + \beta_2 \hat{y}_{1i} + e_i $$ where $e_i \sim (0, \sigma_e^2)$, the variance of $\hat{\theta}_{I,m}$ is, for sufficiently large $m$, $$ V( \hat{\theta}_{I,m} ) = \frac{1}{n} V(y_2) + \left( \frac{1}{n_b} - \frac{1}{n} \right) V(e) $$ which is smaller than the variance of the direct estimator $\hat{\theta} = n_b^{-1} \sum_{i \in B} y_{2i}$. \section{Measurement error models} We now consider an application of statistical matching to the problem of measurement error models. Suppose that we are interested in the parameter $\theta$ in the conditional distribution $f(y\mid x; \theta)$. 
In the original sample, instead of observing $(x_i, y_i)$, we observe $(z_i, y_i)$, where $z_i$ is a contaminated version of $x_i$. Because inference for $\theta$ based on $(z_i, y_i)$ may be biased, additional information is needed. One common way to obtain additional information is to collect $(x_i, z_i)$ in an external calibration study. In this case, we observe $(x_i, z_i)$ in Sample A and $(z_i, y_i)$ in Sample B, where sample A is the calibration sample, and Sample B is the main sample. \cite{guo11} discuss an application of external calibration. The external calibration framework can be expressed as a statistical matching problem. Table \ref{table2} makes the connection between statistical matching and external calibration explicit. The $(x_i, z_i, y_i)$ in the measurement error framework correspond to the $(y_{1i}, x_{2i}, y_{2i})$ in the setting of statistical matching. A straightforward extension of the measurement error model considered here incorporates additional covariates, such as the $x_{1i}$ of the statistical matching framework. \begin{table} \begin{center} \begin{tabular}{c|ccc} \hline & $z_i$ & $x_i$ & $y_i$ \\ \hline \hline Survey A (calibration study) & o & o & \\ Survey B (main study) & o & & o \\ \hline \end{tabular} \end{center} \caption{Data structure for measurement error model} \label{table2} \end{table} An instrumental variable assumption permits inference for $\theta$ based on data with the structure of Table 1. In the notation of the measurement error model, the instrumental variable assumption is \begin{eqnarray} f(y_i\,|\,x_i,z_i) = f(y_i\,|\,x_i) \mbox{ and } f(z_i\,|\, x_i = a) \neq f(z_i\,|\,x_i=b), \end{eqnarray} for some $a\neq b$. The instrumental variable assumption may be judged reasonable in applications related to error in covariates because the subject-matter model of interest is $ f(y_i\,|\,x_i),$ and $z_i$ is a contaminated version of $x_i$ that contains no additional information about $y_i$ given $x_i$. For fully parametric $f(y_i\,|\,x_i)$, $f(z_i\,|\,x_i)$ and $f(x_i)$, one can use parametric fractional imputation to execute the EM algorithm. This method requires evaluating the conditional expectation of the complete-data score function given the observed values. To evaluate the conditional expectation using fractional imputation, we first express the conditional distribution of $x$ given $(z,y)$ as, \begin{eqnarray} f \left( x \mid z, y \right) \propto f \left( y \mid x \right) f ( x \mid z). \end{eqnarray} We let an estimator $\hat{f}_{a}(x_i\,|\,z_i)$ of $f(x_i\,|\,z_i)$ be available from the calibration sample (Sample A). Implementation of the EM algorithm via fractional imputation involves the following steps: \begin{enumerate} \item For each $i \in B$, generate $x_i^{*(j)}$ from $\hat{f}_a ( x \mid z_i ) $, for $j = 1,\ldots, m$, \item Compute the fractional weights $$ w_{ij}^* \propto f ( y_i \mid x_i^{*(j)}; \hat{\theta}_{t} ) .$$ \item Update $\theta$ by solving $$ \sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij}^* S(\theta; x_i^{*(j)} , y_i) = 0,$$ where $S(\theta; x_{i}^{*(j)}, y_{i}) = \partial \mbox{log}\{f \left( y \mid x; \theta \right)\}/ \partial \theta $. \item Go to Step 2 until convergence. \end{enumerate} The method above requires generating data from $f(x\,|\,z)$. For some nonlinear models or models with non-constant variances, simulating from the conditional distribution of $x$ given $z$ may require Monte Carlo methods such as accept-reject or Metropolis Hastings. 
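When the fitted calibration model $\hat{f}_a(x \mid z)$ is one from which draws are easy to generate, the steps above can be sketched as follows. The sketch is in Python; the normal linear calibration model, the logistic outcome model for $f(y \mid x; \theta)$, the single Newton update used in Step 3, and all names are illustrative assumptions for this sketch only.
\begin{verbatim}
import numpy as np

def fi_em_measurement_error(xA, zA, zB, yB, m=100, n_iter=30, seed=None):
    # Sketch of the EM algorithm above, assuming x | z is fitted by a
    # normal linear regression on Sample A and f(y | x; theta) is a
    # logistic regression model with theta = (gamma0, gamma_x).
    rng = np.random.default_rng(seed)

    # Estimate f_a(x | z) from the calibration sample (Sample A).
    ZA = np.column_stack([np.ones(len(zA)), zA])
    c_hat, *_ = np.linalg.lstsq(ZA, xA, rcond=None)
    s = np.sqrt(np.mean((xA - ZA @ c_hat) ** 2))

    # Step 1: draw m imputed values of x for each unit in Sample B.
    mu = c_hat[0] + c_hat[1] * zB
    x_imp = mu[:, None] + s * rng.standard_normal((len(zB), m))

    X = np.column_stack([np.ones(x_imp.size), x_imp.ravel()])
    y = np.repeat(yB, m)
    theta = np.zeros(2)
    for _ in range(n_iter):
        # Step 2: fractional weights proportional to f(y_i | x_ij*; theta).
        p = 1.0 / (1.0 + np.exp(-(X @ theta)))
        lik = np.where(y == 1, p, 1.0 - p).reshape(len(zB), m)
        w = lik / lik.sum(axis=1, keepdims=True)

        # Step 3: one Newton step on the weighted logistic score equation
        # (in practice this step is iterated to convergence).
        ww = w.ravel()
        score = X.T @ (ww * (y - p))
        hess = (X * (ww * p * (1.0 - p))[:, None]).T @ X
        theta = theta + np.linalg.solve(hess, score)
    return theta, x_imp, w
\end{verbatim}
The structure mirrors the statistical matching algorithm of Section 3; only the roles of the imputed variable and of the conditioning variables change.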
The simulation of Section 6.2 provides an example in which the conditional distribution of $x\,|\, z$ has no closed form expression. In this case, we may consider an alternative approach, which may be computationally simpler. To describe this approach, let $h(x \mid z)$ be the ``working'' conditional distribution, such as the normal distribution, from which samples are easily generated. A special case of $h(x\mid z)$ is $f(x)$, the marginal density of $X$, which is used for selecting donors for HDFI. We assume that estimates $\hat{f}_{a}(x\mid z)$ and $\hat{h}_a(x \mid z)$ of $f(x\mid z)$ and $h( x\mid z)$, respectively, are available from Sample A. Implementation of the EM algorithm via fractional imputation then involves the following steps: \begin{enumerate} \item For each $i \in B$, generate $x_i^{*(j)}$ from $\hat{h}_a ( x\mid z_i) $, for $j = 1,\ldots, m$, \item Compute the fractional weights \begin{eqnarray}\label{wformethod2} w_{ij}^* \propto f ( y_i \mid x_i^{*(j)}; \hat{\theta}_{t} )\hat{f}_{a}(x_i^{*(j)} \mid z_i)/ \hat{h}_a (x_i^{*(j)} \mid z_i). \end{eqnarray} \item Update $\theta$ by solving $$ \sum_{i \in B} w_{ib} \sum_{j=1}^m w_{ij}^* S(\theta; x_i^{*(j)} , y_i) = 0 .$$ \item Go to Step 2 until convergence. \end{enumerate} \begin{remark} Variance estimation is a straightforward application of the linearization method in Section 3. The hot-deck fractional imputation method described in Section 3 with fractional weights defined in (\ref{fwgt0}) also extends readily to the measurement error setting. For HDFI, the proposal distribution $\hat{h}_a(x \mid z)$ can be the empirical distribution with weights proportional to the sampling weights in Sample A. The imputed values are the $n_{A}$ values of $x_{i}$. The weight $w_{ij}^{*}$ used for HDFI is \begin{eqnarray}\label{wformethod2hdfi} w_{ij}^* \propto f ( y_i \mid x_i^{*(j)}; \hat{\theta}_{t} )\hat{f}_{a}(x_i^{*(j)} \mid z_i)/w_{ja}, \end{eqnarray} where $ x_i^{*(j)} = x_{j}$ from Sample A, and $w_{ja}$ is the associated sampling weight. \end{remark} \section{Simulation study} To test our theory, we present two limited simulation studies. The first simulation study considers the setup of combining two independent surveys, each observing only a subset of the study variables, for joint analysis. The second simulation study considers the setup of measurement error models with external calibration. \subsection{Simulation One} To compare the proposed methods with the existing methods, we generate 5,000 Monte Carlo samples of $(x_{i},y_{1i}, y_{2i})$ with size $n=800$, where $$\left( \begin{array}{c} y_{1i}\\x_i\\ \end{array}\right) \sim N\left(\left[ \begin{array}{c} 2\\3\\ \end{array}\right], \left[ \begin{array}{cc} 1& 0.7\\ 0.7 &1\\ \end{array}\right]\right), $$ \begin{equation} y_{2i}=\beta_0+\beta_1y_{1i}+e_i, \label{16} \end{equation} $e_i \sim N(0,\sigma^2)$, and $\beta=(\beta_0,\beta_1,\sigma^2)=(1,1,1)$. Note that, in this setup, we have $f(y_2 \mid x, y_1) = f(y_2 \mid y_1)$ and so the variable $x$ plays the role of the instrumental variable for $y_1$. Instead of observing $(x_i, y_{1i}, y_{2i})$ jointly, we assume that only $(y_1, x)$ are observed in Sample A and only $(y_2, x)$ are observed in Sample B, where Sample A is obtained by taking the first $n_a=400$ elements and Sample B is obtained by taking the remaining $n_b=400$ elements from the original sample. We are interested in estimating four parameters: three regression parameters $\beta_0, \beta_1, \sigma^2$ and $\pi=P( y_1 < 2, y_2 < 3)$, the proportion with $y_1<2$ and $y_2<3$.
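For concreteness, one Monte Carlo sample from this design can be generated as in the following sketch (Python; the function and variable names are ours and purely illustrative).
\begin{verbatim}
import numpy as np

def simulate_one(seed=None):
    # One Monte Carlo sample for Simulation One.
    # Sample A observes (x, y1); Sample B observes (x, y2).
    rng = np.random.default_rng(seed)
    mean = [2.0, 3.0]                      # E(y1) = 2, E(x) = 3
    cov = [[1.0, 0.7], [0.7, 1.0]]         # corr(y1, x) = 0.7
    y1, x = rng.multivariate_normal(mean, cov, size=800).T
    # (beta0, beta1, sigma^2) = (1, 1, 1)
    y2 = 1.0 + 1.0 * y1 + rng.standard_normal(800)
    sample_A = {"x": x[:400], "y1": y1[:400]}   # first 400 units
    sample_B = {"x": x[400:], "y2": y2[400:]}   # remaining 400 units
    return sample_A, sample_B
\end{verbatim}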
Four methods are considered in estimating the parameters: \begin{enumerate} \item Full sample estimation (Full): Uses the complete observation of $(y_{1i}, y_{2i})$ in Sample B. \item Stochastic regression imputation (SRI): Use the regression of $y_1$ on $x$ from Sample A to obtain $(\hat{\alpha}_0, \hat{\alpha}_1, \hat{\sigma}_1^2)$, where the regression model is $y_1 = \alpha_0 + \alpha_1 x + e_1 $ with $e_1 \sim (0, \sigma_1^2)$. For each $i \in B$, $m=10$ imputed values are generated by $y_{1i}^{*(j)} = \hat{\alpha}_0 + \hat{\alpha}_1 x_i + e_i^{*(j)}$ where $e_i^{*(j)} \sim N(0, \hat{\sigma}_1^2)$. \item Parametric fractional imputation (PFI) with $m=10$ using the instrumental variable assumption. \item Hot-deck fractional imputation (HDFI) with $m=10$ using the instrumental variable assumption. \end{enumerate} Table \ref{table6.1} presents Monte Carlo means and Monte Carlo variances of the point estimators of the four parameters of interest. SRI shows large biases for all parameters considered because it is based on the conditional independence assumption. Both PFI and HDFI provide nearly unbiased estimators for all parameters. Estimators from HDFI are slightly more efficient than those from PFI because the two-step procedure in HDFI uses the full set of respondents in the first step. The theoretical asymptotic variance of $\hat{\beta}_1$ computed from PFI is \begin{eqnarray*} V\left( \hat{\beta}_1 \right) & \doteq & \frac{1}{(0.7)^2} \frac{1}{400} 2 \left(1-\frac{0.7^2}{2} \right) + \frac{1}{(0.7)^2} \frac{1}{400} (1- 0.7^2) \doteq 0.0103 \end{eqnarray*} which is consistent with the simulation result in Table \ref{table6.1}. In addition to point estimation, we also compute variance estimators for PFI and HDFI methods. Variance estimators show small relative biases (less than 5\% in absolute values) for all parameters. Variance estimation results are not presented here for brevity. \begin{table} \begin{center} \begin{tabular}{llrr} Parameter & Method & Mean & Variance \\ \hline \hline & Full & 1.00 & 0.0123 \\ $\beta_0$ & SRI & 1.90 & 0.0869\\ & PFI & 1.00 & 0.0472 \\ & HDFI & 1.00 & 0.0465 \\ \hline & Full & 1.00 & 0.00249 \\ $\beta_1$ & SRI & 0.54 & 0.01648 \\ & PFI & 1.00 & 0.01031 \\ & HDFI & 1.00 & 0.01026 \\ \hline & Full & 1.00 & 0.00482 \\ $\sigma^2$ & SRI & 1.73 & 0.01657 \\ & PFI & 0.99 & 0.02411 \\ & HDFI & 0.99 & 0.02270 \\ \hline & Full & 0.374 & 0.00058 \\ $\pi$ & SRI & 0.305 & 0.00255 \\ & PFI & 0.375 & 0.00059 \\ & HDFI & 0.375& 0.00057 \\ \hline \hline \end{tabular} \caption{Monte Carlo means and variances of point estimators from Simulation One. (SRI, stochastic regression imputation; PFI, parametric fractional imputation; HDFI; hot-deck fractional imputation)} \label{table6.1} \end{center} \end{table} The proposed method is based on the instrumental variable assumption. To study the sensitivity of the proposed fractional imputation method, we performed an additional simulation study. Now, instead of generating $y_{2i}$ from (\ref{16}), we use \begin{equation} y_{2i}=0.5+ y_{1i} + \rho(x_i - 3) +e_i, \label{17} \end{equation} where $e_i \sim N(0, 1)$ and $\rho$ can take non-zero values. We use three values of $\rho$, $\rho \in \{0, 0.1, 0.2\}$, in the sensitivity analysis and apply the same PFI and HDFI procedure that is based on the assumption that $x$ is an instrumental variable for $y_1$. Such assumption is satisfied for $\rho=0$, but it is weakly violated for $\rho=0.1 $ or $\rho=0.2$. 
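The modified data-generating step for the sensitivity analysis can be sketched as follows (again in Python, with purely illustrative names); for $\rho \neq 0$, $y_2$ depends on $x$ even after conditioning on $y_1$, so $x$ is no longer an exact instrumental variable.
\begin{verbatim}
import numpy as np

def simulate_sensitivity(rho, seed=None):
    # Sensitivity-analysis version of Simulation One, following (17);
    # rho is taken from {0, 0.1, 0.2}.
    rng = np.random.default_rng(seed)
    mean = [2.0, 3.0]
    cov = [[1.0, 0.7], [0.7, 1.0]]
    y1, x = rng.multivariate_normal(mean, cov, size=800).T
    y2 = 0.5 + y1 + rho * (x - 3.0) + rng.standard_normal(800)
    return x, y1, y2
\end{verbatim}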
Using the fractionally imputed data in Sample B, we estimated three parameters: $\theta_1=E(Y_1)$; $\theta_2$, the slope in the simple regression of $y_2$ on $y_1$; and $\theta_3 = P( y_1 <2, y_2< 3)$, the proportion with $y_1<2$ and $y_2<3$. Table \ref{table6.2} presents Monte Carlo means and variances of the point estimators for the three parameters under the three different models. In Table \ref{table6.2}, the absolute values of the differences between the fractionally imputed estimators and the full sample estimators increase as the value of $\rho$ increases, which is expected because the instrumental variable assumption is more severely violated for larger values of $\rho$; however, the differences are relatively small in all cases. In particular, the estimator of $\theta_1$ is not affected by the departure from the instrumental variable assumption. This is because the imputation estimator under an incorrect imputation model still provides an unbiased estimator of the population mean as long as the regression imputation model contains an intercept term \citep{kimrao12}. Thus, this limited sensitivity analysis suggests that the proposed method provides comparable estimates when the instrumental variable assumption is weakly violated. \begin{table} \begin{center} \begin{tabular}{lcrrr} Model & Parameter & Method & Mean & Variance \\ \hline \hline & & Full & 2.00 & 0.00235 \\ & $\theta_1$ & PFI & 2.00 & 0.00352 \\ & & HDFI & 2.00 & 0.00249 \\ \cline{2-5} & & Full & 1.00 & 0.00249\\ $\rho=0$ & $ \theta_2$ & PFI & 1.00 & 0.01031 \\ & & HDFI & 1.00 & 0.01026 \\ \cline{2-5} & & Full & 0.43 & 0.00061 \\ & $\theta_3$& PFI & 0.43 & 0.00059\\ & & HDFI & 0.43 & 0.00057 \\ \hline & & Full & 2.00 & 0.00235 \\ & $\theta_1$& PFI & 2.00 & 0.00353 \\ & & HDFI & 2.00 & 0.00250 \\ \cline{2-5} & & Full & 1.07 & 0.00248 \\ $\rho=0.1$ & $ \theta_2$ & PFI & 1.14 & 0.01091 \\ & & HDFI & 1.14 & 0.01081\\ \cline{2-5} & & Full & 0.44 & 0.00061 \\ & $ \theta_3$& PFI & 0.45 & 0.00062 \\ & & HDFI & 0.45 & 0.00059 \\ \hline & & Full & 2.00 & 0.00235 \\ & $\theta_1$ & PFI & 2.00 & 0.00353 \\ & & HDFI & 2.00 & 0.00250 \\ \cline{2-5} & & Full & 1.14 & 0.00250 \\ $\rho=0.2$ & $ \theta_2$& PFI & 1.28 & 0.01115 \\ & & HDFI & 1.28 & 0.01102 \\ \cline{2-5} & & Full & 0.44 & 0.00061 \\ & $\theta_3$& PFI & 0.46 & 0.00066 \\ & & HDFI & 0.46 & 0.00062 \\ \hline \hline \end{tabular} \caption{Monte Carlo means and Monte Carlo variances of the point estimators for the sensitivity analysis in Simulation One (Full, full sample estimator; PFI, parametric fractional imputation; HDFI, hot-deck fractional imputation)} \label{table6.2} \end{center} \end{table} \subsection{Simulation Two} In the second simulation study, we consider a binary response variable $y_{i}$, where \begin{eqnarray}\label{modelfory} y_{i}\sim\mbox{Bernoulli}(p_{i}), \\ \nonumber \mbox{logit}(p_{i}) = \gamma_{0} + \gamma_{x}x_{i}, \end{eqnarray} and $x_{i}\sim N(\mu_{x},\sigma^{2}_{x})$. In the main sample, denoted by $B$, instead of observing $(x_{i}, y_{i})$, we observe $(z_{i}, y_{i})$, where \begin{eqnarray} z_{i} = \beta_{0} + \beta_{1}x_{i} + u_{i}, \end{eqnarray} and $u_{i}\sim\mbox{N}(0,\sigma^{2} |x_{i}|^{2\alpha})$. We observe $(x_{i},z_{i})$, $i = 1,\ldots, n_{A}$, in a calibration sample, denoted by A. For the simulation, $n_{A}=n_{B}=800$, $\gamma_{0} =1$, $\gamma_{x} = 1$, $\beta_{0} = 0$, $\beta_{1} = 1$, $\sigma^{2} = 0.25$, $\alpha = 0.4$, $\mu_{x} = 0$, and $\sigma^{2}_{x} = 1$.
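A sketch of this data-generating process, with our own purely illustrative names, is given below.
\begin{verbatim}
import numpy as np

def simulate_two(n_a=800, n_b=800, seed=None):
    # Simulation Two: Sample A observes (x, z), Sample B observes (z, y).
    rng = np.random.default_rng(seed)
    gamma0, gamma_x = 1.0, 1.0
    beta0, beta1, sigma2, alpha = 0.0, 1.0, 0.25, 0.4

    def one_sample(n):
        x = rng.standard_normal(n)            # x ~ N(0, 1)
        sd_u = np.sqrt(sigma2 * np.abs(x) ** (2 * alpha))
        z = beta0 + beta1 * x + sd_u * rng.standard_normal(n)
        p = 1.0 / (1.0 + np.exp(-(gamma0 + gamma_x * x)))
        y = rng.binomial(1, p)
        return x, z, y

    xA, zA, _ = one_sample(n_a)               # calibration sample (A)
    _, zB, yB = one_sample(n_b)               # main sample (B)
    return (xA, zA), (zB, yB)
\end{verbatim}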
Primary interest is in estimation of $\gamma_{x}$ and testing the null hypothesis that $\gamma_{x} = 1$. The MC sample size is 1000. \hspace{.2 in } We compare the PFI and HDFI estimators of $\gamma_{x}$ to three other estimators. Because the conditional distribution of $x_{i}$ given $z_{i}$ is non-standard, we use the weights of (\ref{wformethod2}) and (\ref{wformethod2hdfi}) to implement PFI and HDFI, where the proposal distribution $\hat{h}_{a}(x_{i}\,|\, z_{i})$ is an estimate of the marginal distribution of $x_{i}$ based on the data from sample A. We consider the following five estimators: \begin{enumerate} \item {\it PFI}: For PFI, the proposal distribution for generating $x_{i}^{*(j)}$ is a normal distribution with mean $\hat{\mu}_{x}$ and variance $\hat{\sigma}^{2}_{x}$, where $\hat{\mu}_{x}$ and $\hat{\sigma}^{2}_{x}$ are the maximum likelihood estimates of $\mu_{x}$ and $\sigma^{2}_{x}$ based on sample $A$. The fractional weight defined in (\ref{wformethod2}) has the form \begin{eqnarray}\label{specificwformesim} w_{ij}^{*} \propto p_{i}^{y_{i}}(1-p_{i})^{1-y_{i}}\hat{f}_{a}(z_{i}\,|\,x_{i}), \end{eqnarray} where $p_{i}$ is the function of $(\gamma_{0},\gamma_{x})$ defined by (\ref{modelfory}), and $\hat{f}_{a}(z_{i}\,|\,x_{i})$ is the estimate of $f(z_{i}\,|\,x_{i})$ based on maximum likelihood estimation with the sample A data. The imputation size is $m = 800$. \item {\it HDFI}: For HDFI, instead of generating $x_{i}^{*(j)}$ from a normal distribution, the $\{x_{i}^{*(j)}: j=1,\ldots, 800\}$ are the 800 values of $x_{i}$ from sample $A$. \item {\it Naive}: A {\it naive} estimator is the estimator of the slope in the logistic regression of $y_{i}$ on $z_{i}$ for $i\in B$. \item {\it Bayes}: We use the approach of \cite{guo11} to define a Bayes estimator. The model for this simulation differs from the model of \cite{guo11} in that the response of interest is binary. We implement Gibbs sampling with JAGS \citep{plummer03}, specifying diffuse proper prior distributions for the parameters of the model. Letting \begin{eqnarray*} \theta_{1} = (\mbox{log}(\sigma^{2}_{x}), \mbox{log}(\sigma^{2}), \mu_{x},\beta_{0},\beta_{1},\gamma_{0},\gamma_{x}), \end{eqnarray*} we assume a priori that $\theta_{1} \sim \mbox{N}(0,10^{6}I_{7})$, where $I_{7}$ is a $7\times 7$ identity matrix, and the notation $N(0,V)$ denotes a normal distribution with mean 0 and covariance matrix $V$. The prior distribution for the power $\alpha$ is uniform on the interval $[-5,5]$. \hspace{.2 in } To evaluate convergence, we examine trace plots and potential scale reduction factors defined in \cite{gelman03} for 10 preliminary simulated data sets. We initiate three MCMC chains, each of length 1500, from random starting values and discard the first 500 iterations as burn-in. The potential scale reduction factors across the 10 simulated data sets range from 1.0 to 1.1, and the trace plots indicate that the chains mix well. To reduce computing time, we use 1000 iterations of a single chain for the main simulation, after discarding the first 500 for burn-in. \item A {\it Weighted Regression Calibration (WRC)} estimator. The WRC estimator is a modification of the weighted regression calibration estimator defined in \cite{guo11} for a binary response variable. The computation for the weighted regression calibration estimator involves the following steps: \begin{enumerate} \item[(i)] Using OLS, regress $x_{i}$ on $z_{i}$ for the calibration sample.
\item[(ii)] Regress the logarithm of the squared residuals from step (i) on the logarithm of $z_{i}^{2}$ for the calibration sample. Let $\hat{\lambda}$ denote the estimated slope from the regression. \item[(iii)] Using WLS with weight $|z_{i}|^{2\hat{\lambda}}$, regress $x_{i}$ on $z_{i}$ for the calibration sample. Let $\hat{\eta}_{0}$ and $\hat{\eta}_{1}$ be the estimated intercept and slope, respectively, from the WLS regression. \item[(iv)] For each unit $i$ in the main sample, let $\hat{x}_{i} = \hat{\eta}_{0} + \hat{\eta}_{1}z_{i}$. \item[(v)] The estimate of $(\gamma_{0},\gamma_{x})$ is obtained from the logistic regression of $y_{i}$ on $\hat{x}_{i}$ for $i$ in the main sample. \end{enumerate} \end{enumerate} \hspace{.2 in } Table~\ref{tab1me} contains the MC bias, variance, and MSE of the five estimators of $\gamma_{x}$. The naive estimator has a negative bias because $z_{i}$ is a contaminated version of $x_{i}$. The variance of the PFI estimator is modestly smaller than the variance of the HDFI estimator because the PFI estimator incorporates extra information through the parametric assumption about the distribution of $x_{i}$. The PFI and HDFI estimators are superior to the Bayes and WRC estimators. \hspace{.2 in } We compute an estimate of the variance of the PFI and HDFI estimators of $\gamma_{x}$ using the variance expression based on the linear approximation. We define the MC relative bias as the ratio of the difference between the MC mean of the variance estimator and the MC variance of the point estimator to the MC variance of the point estimator. The MC relative biases of the variance estimators for PFI and HDFI are -0.0096 and -0.0093, respectively. \begin{table} \begin{center} \begin{tabular}{crrr} Method & MC Bias & MC Variance & MC MSE \\ \hline\hline PFI & 0.0239 & 0.0386 & 0.0392 \\ HDFI & 0.0246 & 0.0387 & 0.0393 \\ Naive & -0.2241 & 0.0239 & 0.0742 \\ Bayes & 0.0406 & 0.0415 & 0.0432 \\ WRC & 0.112 & 0.0499 & 0.0625 \\ \hline\hline \end{tabular} \caption{Monte Carlo (MC) biases, variances, and mean squared errors (MSE) of point estimators of $\gamma_{x}$ from Simulation Two. (PFI, parametric fractional imputation; HDFI, hot-deck fractional imputation; WRC, weighted regression calibration)} \label{tab1me} \end{center} \end{table} \section{Concluding Remarks} We approach statistical matching as a missing data problem and use PFI to obtain consistent estimators and corresponding variance estimators. The imputation approach applies more generally than two-stage least squares, which is restricted to estimation of regression coefficients in linear models. Rather than rely on the often unrealistic conditional independence assumption, the imputation procedure derives from an assumption that an instrumental variable is available. The measurement error framework of Section 5 and Section 6.2, in which external calibration provides an independent measurement of the true covariate of interest, is a situation in which the study design may be judged to support the instrumental variable assumption. Although the procedure is based on the instrumental variable assumption, the simulations of Section 6.1 show that the imputation method is robust to modest departures from the requirements of an instrumental variable. The proposed methodology is applicable without the instrumental variable assumption, as long as the model is identified. If the model is not identifiable, then the EM algorithm for the proposed PFI method does not necessarily converge.
In practice, one can treat the specified model as identified if the EM sequence obtained from the specified model converges. The resulting analysis is consistent under the specified model. This is one of the main advantages of the frequentist approach over the Bayesian approach. In the Bayesian approach, it is possible to obtain posterior values even under non-identified models, and the resulting analysis can be misleading. Statistical matching can also be used to evaluate effects of multiple treatments in observational studies. By properly applying statistical matching techniques, we can create an augmented data file of potential outcomes so that causal inference can be investigated with the augmented data file \citep{morgan07}. Such extensions will be presented elsewhere. \section*{Acknowledgment} We thank Professor Yanyuan Ma, an anonymous referee and the AE for very constructive comments. The research of the first author was partially supported by a grant from NSF (MMS-121339) and Brain Pool program (131S-1-3-0476) from Korean Federation of Science and Technology Society. The research of the second author was supported by a Cooperative Agreement between the US Department of Agriculture Natural Resources Conservation Service and Iowa State University. The work of the third author was supported by the Bio-Synergy Research Project (2013M3A9C4078158) of the Ministry of Science, ICT and Future Planning through the National Research Foundation in Korea. \section*{Appendix} \subsection*{A. Asymptotic unbiasedness of 2SLS estimator} \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} Assume that we observe $(y_1,x)$ in Sample A and observe $(y_2,x)$ in Sample B. To be more rigorous, we can write $(y_{1a}, x_a)$ to denote the observations $(y_1,x)$ in Sample A. Also, we can write $(y_{2b},x_b)$ to denote the observations in Sample B. In this case, the model can be written as \begin{eqnarray*} y_{1a}&=&\phi_0 1_a +\phi_1 x_{1a} + \phi_2 x_{2a} + e_{1a}\\ y_{2b}&=&\beta_0 1_b+ \beta_1 x_{1b} + \beta_2 y_{1b}+ e_{2b} \end{eqnarray*} with $E(e_{1a} \mid x_a)= 0$ and $E(e_{2b} \mid x_b, y_{1b} )= 0$. Note that $y_{1b}$ is not observed from the sample. Instead, we use $\hat{y}_{1b}$, computed from the OLS estimate obtained from Sample A. Writing $X_a=[ 1_a, x_a]$ and $X_b=[1_b, x_b]$, we have $\hat{y}_{1b} =X_b ( X_a' X_a)^{-1} X_a' y_{1a}=X_b \hat{\phi}_a$. The 2SLS estimator of $\beta=(\beta_0, \beta_1, \beta_2)'$ is then $$ \hat{\beta}_{2SLS} = (Z_b' Z_b)^{-1} Z_b' y_{2b} $$ where $Z_b=[1_b, x_{1b}, \hat{y}_{1b}] $. Thus, we have \begin{eqnarray} \hat{\beta}_{2SLS} -\beta&=& (Z_b' Z_b)^{-1} Z_b' (y_{2b}- Z_b \beta ) \notag \\ &=& (Z_b'Z_b)^{-1} Z_b' \{ \beta_2 (y_{1b}- \hat{y}_{1b}) + e_{2b} \} . \label{a1} \end{eqnarray} We may write $$ y_{1b} = \phi_0 1_b +\phi_1 x_b +e_{1b} = X_b \phi + e_{1b} $$ where $E(e_{1b} | x_b)=0$. Since \begin{eqnarray*} \hat{y}_{1b} &=& X_b (X_a' X_a)^{-1} X_a' y_{1a} \\ &=& X_b (X_a' X_a)^{-1} X_a' ( X_a \phi + e_{1a}) \\ &=& X_b \phi + X_b (X_a' X_a)^{-1} X_a' e_{1a} , \end{eqnarray*} we have $$ y_{1b} - \hat{y}_{1b} = e_{1b} - X_b (X_a' X_a)^{-1} X_a' e_{1a} $$ and (\ref{a1}) becomes \begin{equation} \hat{\beta}_{2SLS} -\beta = (Z_b'Z_b)^{-1} Z_b' \{ \beta_2 e_{1b} - \beta_2 X_b (X_a' X_a)^{-1} X_a' e_{1a} + e_{2b} \}. \label{a2} \end{equation} Assume that the two samples are independent. Thus, $E(e_{1b} \mid x_a, x_b, y_{1a})=0$. Also, $E\{ (Z_b' Z_b )^{-1} Z_b' e_{2b} \mid x_a, x_b, y_{1a}, y_{1b}\}=0$.
Thus, \begin{eqnarray*} E\{ \hat{\beta}_{2SLS} -\beta \mid x_a, x_b,y_{1a} \}& =&E\{ -\beta_2 (Z_b'Z_b)^{-1} Z_b' X_b (X_a' X_a)^{-1} X_a' e_{1a} \mid x_a, x_b, y_{1a} \} \end{eqnarray*} and \begin{eqnarray*} (Z_b'Z_b)^{-1} Z_b' X_b (X_a' X_a)^{-1} X_a' e_{1a} &=& (Z_b'Z_b)^{-1} Z_b' \{ X_b (X_a' X_a)^{-1} X_a' (y_{1a} - X_a \phi ) \} \\ &=& (Z_b'Z_b)^{-1} Z_b' X_b ( \hat{\phi} _a- \phi) . \end{eqnarray*} This term has zero expectation asymptotically because $n_b^{-1} Z_b' Z_b$ and $n_b^{-1} Z_b' X_b$ are bounded in probability and $(\hat{\phi}_a-\phi)$ converges to zero. \subsection*{B. Variance estimation } \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} Let the parameter of interest be defined by the solution to $U_N(\eta) = \sum_{i=1}^N U(\eta; y_{1i}, y_{2i})=0$. We assume that $\partial U_N(\eta)/ \partial \theta=0$. Thus, the parameter $\eta$ is a priori independent of $\theta$, which is the parameter in the data-generating distribution of $(x,y_1,y_2)$. Under the setup of Section 3, let $\hat{\theta}=(\hat{\theta}_1, \hat{\theta}_2)$ be the MLE of $\theta=(\theta_1, \theta_2)$ obtained by solving (\ref{score}). Also, let $\hat{\eta}$ be the solution to $\bar{U}(\eta \mid \hat{\theta})=0$ where $$ \bar{U}( \eta \mid \theta) = \sum_{i \in B} \sum_{j=1}^m w_{ib} w_{ij}^* U( \eta; y_{1i}^{*(j)}, y_{2i} ), $$ and $$ w_{ij}^* \propto f(y_{1i}^{*(j)} \mid x_i; \hat{\theta}_1) f( y_{2i} \mid y_{1i}^{*(j)} ; \hat{\theta}_2) / h(y_{1i}^{*(j)} \mid x_i) $$ with $\sum_{j=1}^m w_{ij}^*=1$. Here, $h(y_{1} \mid x)$ is the proposal distribution for generating imputed values of $y_1$ in the parametric fractional imputation. By introducing the proposal distribution $h$, we can safely ignore the dependence of the imputed values $y_{1i}^{*(j)}$ on the estimated parameter value $\hat{\theta}_1$. By Taylor linearization, $$ \bar{U} ( \eta \mid \hat{\theta}) \cong \bar{U} (\eta \mid \theta ) + \left( \partial \bar{U} / \partial \theta_1'\right) ( \hat{\theta}_1 - \theta_1) + \left( \partial \bar{U} / \partial \theta_2'\right) ( \hat{\theta}_2 - \theta_2). $$ Note that $$ \hat{\theta}_1 - \theta_1 \cong \{ I_1(\theta_1) \}^{-1} S_1 ( \theta_1) $$ where $I_1 (\theta_1) = - \partial S_1 (\theta_1)/ \partial \theta_1' $. Also, $$ \hat{\theta}_2 - \theta_2 \cong \left\{ -\frac{\partial}{ \partial \theta_2'} \bar{S}_2( \theta) \right\}^{-1} \bar{S}_{2} (\theta) $$ where $$\bar{S}_2 ( \theta) = \sum_{i \in B} \sum_{j=1}^m w_{i} w_{ij}^*(\theta) S_2 (\theta_2 ; y_{1i}^{*(j)}, y_{2i} ) .$$ Thus, we can establish $$ \bar{U} ( \eta \mid \hat{\theta}) \cong \bar{U} (\eta \mid \theta ) + K_1 S_1(\theta_1) + K_2 \bar{S}_2 (\theta), $$ where $K_1= D_{21} I_{11}^{-1} $ and $K_2 = D_{22} I_{22}^{-1} $, with $I_{11}= - E( \partial S_1/ \partial \theta_1')$, $I_{22} = - E ( \partial \bar{S}_2 / \partial \theta_2' ) $, $D_{21} = E\{ U(\eta) S_1(\theta_1)' \}$, and $D_{22}=E\{ U(\eta) S_2(\theta_2)' \}$. It follows that $$ V\{ \hat{\eta} \} \cong \tau^{-1} \left\{ V_1 + V_2 \right\} \tau^{-1'} $$ where $\tau = -E \{ \partial \bar{U} (\eta | \theta) /\partial \eta' \}$, $$ V_1= V\left\{ \sum_{i \in B} w_i ( \bar{u}_i^* + K_2 S_{2i}^* ) \right\} , $$ $\bar{u}_{i}^{*} = E[U(\hat{\eta}; y_{1i}, y_{2i})\,|\, y_{2i}; \hat{\theta}]$, and $ V_2 = V\{ K_1 \sum_{i \in A} w_i S_{1i} \} .$ A consistent estimator of each component can be developed similarly to Section 3. \subsection*{C.
Score Tests} \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} In some applications related to measurement error, an analytical question of interest may be phrased in terms of a null hypothesis about the parameter $\theta$. Suppose that $\theta = (\theta_1, \theta_2)$, and the null hypothesis of interest is $H_{0}: \theta_2= \theta_{2,0}$ for a specified $\theta_{2,0}$. Hypotheses about functions of $\theta_1$ and $\theta_2$ can be expressed as a null hypothesis about a sub-vector of interest after appropriate reparametrization. We define a score test using the approach of \cite{rao98} and \cite{boos92}. Let \begin{eqnarray} U_{1i}(\theta_{1},\theta_{2}, \eta) = (U_{11i}(\theta_{1},\theta_{2},\eta), U_{12i}(\theta_{1},\theta_{2},\eta)), \end{eqnarray} where $U_{1ki}(\theta_{1},\theta_{2},\eta) = E[S_{1ki}(\theta_{1}, \theta_{2}, x_{i})\,|\,y_{i},z_{i},\eta]$ for $k=1,2$, \begin{eqnarray} S_{1i}(\theta) = (S_{11i}(\theta_1, \theta_2, x_i), S_{12i}(\theta_{1}, \theta_{2}, x_{i})), \end{eqnarray} and $S_{1ki}$ is the vector of derivatives of the complete data log likelihood with respect to $\theta_k$. Under the null hypothesis, an estimator $\tilde{\theta}_{1}$ satisfies \begin{eqnarray}\label{u11theta1tilde} U_{11}(\tilde{\theta}_{1}, \theta_{2,0},\hat{\eta}) = \sum_{i\in B} w_{iB}U_{11i}(\tilde{\theta}_{1},\theta_{2,0},\hat{\eta}) = 0, \end{eqnarray} and we use parametric fractional imputation to solve (\ref{u11theta1tilde}). By a Taylor expansion, \begin{eqnarray}\label{u11scoretaylor} 0 &=& U_{11}(\tilde{\theta}_{1}, \theta_{2,0},\hat{\eta}) \\ \nonumber &\approx& U_{11}(\theta_{1},\theta_{2,0},\eta) + \tau_{1,11}(\tilde{\theta}_{1} - \theta_{1}) + \Delta_{1,\eta}(\hat{\eta} - \eta), \end{eqnarray} and \begin{eqnarray}\label{u22scoretaylor} U_{12}(\tilde{\theta}_{1},\theta_{2,0},\hat{\eta}) &\approx& U_{12}(\theta_{1},\theta_{2,0},\eta) + \tau_{1,21}(\tilde{\theta}_{1} - \theta_{1}) + \Delta_{2,\eta}(\hat{\eta} - \eta), \end{eqnarray} where $\tau_{1,k1}$ is the matrix of derivatives of $U_{1k}(\theta_{1},\theta_{2,0},\eta)$ with respect to $\theta_{1}$, and $\Delta_{k,\eta}$ is the matrix of derivatives of $U_{1k}(\theta_{1},\theta_{2,0},\eta)$ with respect to $\eta$. Solving (\ref{u11scoretaylor}) for $\tilde{\theta}_{1} - \theta_{1}$ and plugging the resulting expression into (\ref{u22scoretaylor}) gives \begin{eqnarray} U_{12}(\tilde{\theta}_{1},\theta_{2,0},\hat{\eta}) &=& U_{12}(\theta_{1},\theta_{2,0},\eta) - \tau_{1,21}\tau_{1,11}^{-1}\left\{U_{11}(\theta_{1},\theta_{2,0},\eta)\right\} \\ \nonumber && + (-\tau_{1,11}^{-1}\Delta_{1,\eta}, \Delta_{2,\eta})(\hat{\eta} - \eta). \end{eqnarray} An estimate of the variance of $U_{12}(\tilde{\theta},\theta_{2,0},\hat{\eta})$ is \begin{eqnarray} \hat{V}_{s} = \hat{V}\{\sum_{i\in B} w_{iB}\hat{v}_{i}\} + (-\hat{\tau}_{1,11}^{-1}\hat{\Delta}_{1,\eta}, \hat{\Delta}_{2,\eta})\hat{V}\{\hat{\eta}\}(-\hat{\tau}_{1,11}^{-1}\hat{\Delta}_{1,\eta}, \hat{\Delta}_{2,\eta})', \end{eqnarray} where \begin{eqnarray} \hat{v}_{i} = U_{12i}(\tilde{\theta},\theta_{2,0},\hat{\eta}) - \tau_{1,21}\tau_{1,11}^{-1}U_{11i}(\tilde{\theta}_{1},\theta_{2,0},\eta).
\end{eqnarray} A size $\alpha$ score test of the null hypothesis $H_{0}: \theta_{2} = \theta_{2,0}$ rejects if $T(\theta_{2,0}) > \chi^{2}_{p}(1-\alpha)$, where \begin{eqnarray} T(\theta_{2,0}) = [U_{12}(\tilde{\theta}_{1},\theta_{2,0},\hat{\eta})]'\hat{V}_{s}^{-1}[U_{12}(\tilde{\theta}_{1},\theta_{2,0},\hat{\eta})], \end{eqnarray} and $\chi^{2}_{p}(\cdot)$ is the quantile function of a chi-squared distribution with $p$ degrees of freedom. A confidence region for $\theta_{2}$ with confidence level $1-\alpha$ is the set of $\theta_{2}$ values for which $T(\theta_{2,0}=\theta_{2}) < \chi^{2}_{p}(1-\alpha)$. \end{document}
How I Acted Like A Pundit And Screwed Up On Donald Trump
Trump's nomination shows the need for a more rigorous approach.
By Nate Silver

Donald Trump during a campaign event at the U.S. Cellular Convention Center on Feb. 1 in Cedar Rapids, Iowa.

Since Donald Trump effectively wrapped up the Republican nomination this month, I've seen a lot of critical self-assessments from empirically minded journalists — FiveThirtyEight included, twice over — about what they got wrong on Trump. This instinct to be accountable for one's predictions is good since the conceit of "data journalism," at least as I see it, is to apply the scientific method to the news. That means observing the world, formulating hypotheses about it, and making those hypotheses falsifiable. (Falsifiability is one of the big reasons we make predictions.1) When those hypotheses fail, you should re-evaluate the evidence before moving on to the next subject. The distinguishing feature of the scientific method is not that it always gets the answer right, but that it fails forward by learning from its mistakes. But with some time to reflect on the problem, I also wonder if there's been too much #datajournalist self-flagellation. Trump is one of the most astonishing stories in American political history. If you really expected the Republican front-runner to be bragging about the size of his anatomy in a debate, or to be spending his first week as the presumptive nominee feuding with the Republican speaker of the House and embroiled in a controversy over a tweet about a taco salad, then more power to you. Since relatively few people predicted Trump's rise, however, I want to think through his nomination while trying to avoid the seduction of hindsight bias. What should we have known about Trump and when should we have known it? It's tempting to make a defense along the following lines: Almost nobody expected Trump's nomination, and there were good reasons to think it was unlikely. Sometimes unlikely events occur, but data journalists shouldn't be blamed every time an upset happens,2 particularly if they have a track record of getting most things right and doing a good job of quantifying uncertainty. We could emphasize that track record; the methods of data journalism have been highly successful at forecasting elections. That includes quite a bit of success this year. The FiveThirtyEight "polls-only" model has correctly predicted the winner in 52 of 57 (91 percent) primaries and caucuses so far in 2016, and our related "polls-plus" model has gone 51-for-57 (89 percent). Furthermore, the forecasts have been well-calibrated, meaning that upsets have occurred about as often as they're supposed to but not more often. But I don't think this defense is complete — at least if we're talking about FiveThirtyEight's Trump forecasts. We didn't just get unlucky: We made a big mistake, along with a couple of marginal ones. The big mistake is a curious one for a website that focuses on statistics. Unlike virtually every other forecast we publish at FiveThirtyEight — including the primary and caucus projections I just mentioned — our early estimates of Trump's chances weren't based on a statistical model. Instead, they were what we called "subjective odds" — which is to say, educated guesses. In other words, we were basically acting like pundits, but attaching numbers to our estimates.3 And we succumbed to some of the same biases that pundits often suffer, such as not changing our minds quickly enough in the face of new evidence.
Without a model as a fortification, we found ourselves rambling around the countryside like all the other pundit-barbarians, randomly setting fire to things. There's a lot more to the story, so I'm going to proceed in five sections: 1. Our early forecasts of Trump's nomination chances weren't based on a statistical model, which may have been most of the problem. 2. Trump's nomination is just one event, and that makes it hard to judge the accuracy of a probabilistic forecast. 3. The historical evidence clearly suggested that Trump was an underdog, but the sample size probably wasn't large enough to assign him quite so low a probability of winning. 4. Trump's nomination is potentially a point in favor of "polls-only" as opposed to "fundamentals" models. 5. There's a danger in hindsight bias, and in overcorrecting after an unexpected event such as Trump's nomination. Our early forecasts of Trump's nomination chances weren't based on a statistical model, which may have been most of the problem. Usually when you see a probability listed at FiveThirtyEight — for example, that Hillary Clinton has a 93 percent chance to win the New Jersey primary — the percentage reflects the output from a statistical model. To be more precise, it's the output from a computer program that takes inputs (e.g., poll results), runs them through a bunch of computer code, and produces a series of statistics (such as each candidate's probability of winning and her projected share of the vote), which are then published to our website. The process is, more or less, fully automated: Any time a staffer enters new poll results into our database, the program runs itself and publishes a new set of forecasts.4 There's a lot of judgment involved when we build the model, but once the campaign begins, we're just pressing the "go" button and not making judgment calls or tweaking the numbers in individual states. Anyway, that's how things usually work at FiveThirtyEight. But it's not how it worked for those skeptical forecasts about Trump's chance of becoming the Republican nominee. Despite the lack of a model, we put his chances in percentage terms on a number of occasions. In order of appearance — I may be missing a couple of instances — we put them at 2 percent (in August), 5 percent (in September), 6 percent (in November), around 7 percent (in early December), and 12 percent to 13 percent (in early January). Then, in mid-January, a couple of things swayed us toward a significantly less skeptical position on Trump. First, it was becoming clearer that Republican "party elites" either didn't have a plan to stop Trump or had a stupid plan. Also, that was about when we launched our state-by-state forecast models, which showed Trump competitive with Cruz in Iowa and favored in New Hampshire. From that point onward, we were reasonably in line with the consensus view about Trump, although the consensus view shifted around quite a lot. By mid-February, after his win in New Hampshire, we put Trump's chances of winning the nomination at 45 percent to 50 percent, about where betting markets had him. By late February, after he'd won South Carolina and Nevada, we said, at about the same time as most others, that Trump would "probably be the GOP nominee." But why didn't we build a model for the nomination process? My thinking was this: Statistical models work well when you have a lot of data, and when the system you're studying has a relatively low level of structural complexity. The presidential nomination process fails on both counts. 
On the data side, the current nomination process dates back only to 1972, and the data availability is spotty, especially in the early years. Meanwhile, the nomination process is among the most complex systems that I've studied. Nomination races usually have multiple candidates; some simplifying assumptions you can make in head-to-head races don't work very well in those cases. Also, the primaries are held sequentially, so what happens in one state can affect all the later ones. (Howard Dean didn't even come close to defeating John Kerry in 2004, for example, finishing with barely more than 100 delegates to Kerry's roughly 2,700, but if Dean had held on to win Iowa, he might have become the nominee.) To make matters worse, the delegate rules themselves are complicated, especially on the GOP side, and they can change quite a bit from year to year. The primaries may literally be chaotic, in the sense that chaos theory is defined. Under these conditions, any model is going to be highly sensitive to its assumptions — both in terms of which variables are chosen and how the model is parameterized. The thing is, though, that if the nomination is hard to forecast with a model, it's just as hard to forecast without a model. We don't have enough historical data to know which factors are really predictive over the long run? Small, seemingly random events can potentially set the whole process on a different trajectory? Those are problems in understanding the primaries period, whether you're building a model or not. And there's one big advantage a model can provide that ad-hoc predictions won't, which is how its forecasts evolve over time. Generally speaking, the complexity of a problem decreases as you get closer to the finish line. The deeper you get into the primaries, for example, the fewer candidates there are, the more reliable the polls become, and the less time there is for random events to intervene, all of which make the process less chaotic. Thus, a well-designed model will generally converge toward the right answer, even if the initial assumptions behind it are questionable. Suppose, for instance, we'd designed a model that initially applied a fairly low weight to the polls — as compared with other factors like endorsements — but increased the weight on polls as the election drew closer.5 Based on having spent some time last week playing around with a couple of would-be models, I suspect that at some point — maybe in late November after Trump had gained in polls following the Paris terror attacks — the model would have shown Trump's chances of winning the nomination growing significantly. A model might also have helped to keep our expectations in check for some of the other candidates. A simple, two-variable model that looked at national polls and endorsements would have noticed that Marco Rubio wasn't doing especially well on either front, for instance, and by the time he was beginning to make up ground in both departments, it was getting late in the game. Without having a model, I found, I was subject to a lot of the same biases as the pundits I usually criticize. In particular, I got anchored on my initial forecast and was slow to update my priors in the face of new data. And I found myself selectively interpreting the evidence and engaging in some lazy reasoning.6 Another way to put it is that a model gives you discipline, and discipline is a valuable resource when everyone is losing their mind in the midst of a campaign. 
Was an article like this one — the headline was "Dear Media, Stop Freaking Out About Donald Trump's Polls" — intended as a critique of Trump's media coverage or as a skeptical analysis of his chances of winning the nomination? Both, but it's all sort of a muddle.

Trump's nomination is just one event, and that makes it hard to judge the accuracy of a probabilistic forecast.

The campaign has seemed to last forever, but from the standpoint of scoring a forecast, the Republican nomination is just one event. Sometimes, low-probability events come through. Earlier this month, Leicester City won the English Premier League despite having been a 5,000-to-1 underdog at the start of the season, according to U.K. bookmakers. By contrast, our 5 percent chance estimate for Trump in September 2015 gave him odds of "only" about 20-to-1 against. What should you think about an argument along the lines of "sorry, but the 20-to-1 underdog just so happened to come through this time!" It seems hard to disprove, but it also seems to shirk responsibility. How, exactly, do you evaluate a probabilistic forecast? The right way is with something called calibration. Calibration works like this: Out of all events that you forecast to have (for example) a 10 percent chance of occurring, they should happen around 10 percent of the time — not much more often but also not much less often. Calibration works well when you have large sample sizes. For example, we've forecast every NBA regular season and playoff game this year. The biggest upset came on April 5, when the Minnesota Timberwolves beat the Golden State Warriors despite having only a 4 percent chance of winning, according to our model. A colossal failure of prediction? Not according to calibration. Out of all games this year where we've had one team as at least a 90 percent favorite, they've won 99 out of 108 times, or around 92 percent of the time, almost exactly as often as they're supposed to win. Another, more pertinent example of a well-calibrated model is our state-by-state forecasts thus far throughout the primaries. Earlier this month, Bernie Sanders won in Indiana when our "polls-only" forecast gave him just a 15 percent chance and our "polls-plus" forecast gave him only a 10 percent chance. More impressively, he won in Michigan, where both models gave him under a 1 percent chance. But there have been dozens of primaries and only a few upsets, and the favorites are winning about as often as they're supposed to. In the 31 cases where our "polls-only" model gave a candidate at least a 95 percent chance of winning a state, he or she won 30 times, with Clinton in Michigan being the only loss. Conversely, of the 93 times when we gave a candidate less than a 5 percent chance of winning,7 Sanders in Michigan was the only winner.

WIN PROBABILITY RANGE   NO. FORECASTS   EXPECTED NO. WINNERS   ACTUAL NO. WINNERS
95-100%                 31              30.5                   30
75-94%                  15              12.5                   13
50-74%                  11               6.9                    9
5-24%                   22               2.4                    1
0-4%                    93               0.9                    1
Calibration for FiveThirtyEight "polls-only" forecast. Based on election day forecasts in 2016 primaries and caucuses. Probabilities listed as ">99%" and "<1%" are treated as 99.5 percent and 0.5 percent, respectively, for purposes of calculating expected number of winners.

WIN PROBABILITY RANGE   NO. FORECASTS   EXPECTED NO. WINNERS   ACTUAL NO. WINNERS
50-74%                  14               8.7                   11
Calibration for FiveThirtyEight "polls-plus" forecast

It's harder to evaluate calibration in the case of our skeptical forecast about Trump's chances at the nomination.
We can't put it into context of hundreds of similar forecasts because there have been only 18 competitive nomination contests8 since the modern primary system began in 1972 (and FiveThirtyEight has only covered them since 2008). We could possibly put the forecast into the context of all elections that FiveThirtyEight has issued forecasts for throughout its history — there have been hundreds of them, between presidential primaries, general elections and races for Congress, and these forecasts have historically been well-calibrated. But that seems slightly unkosher since those other forecasts were derived from models, whereas our Trump forecast was not.

Apart from calibration, are there other good methods to evaluate a probabilistic forecast? Not really, although sometimes it can be worthwhile to look for signs of whether an upset winner benefited from good luck or quirky, one-off circumstances. For instance, it's potentially meaningful that in down-ballot races, "establishment" Republicans seem to be doing just fine this year, instead of routinely losing to tea party candidates as they did in 2010 and 2012. Perhaps that's a sign that Trump was an outlier — that his win had as much to do with his celebrity status and his $2 billion in free media coverage as with the mood of the Republican electorate. Still, I think our early forecasts were overconfident for reasons I'll describe in the next section. The historical evidence clearly suggested that Trump was an underdog, but the sample size probably wasn't large enough to assign him quite so low a probability of winning.

Data-driven forecasts aren't just about looking at the polls. Instead, they're about applying the empirical method and demanding evidence for one's conclusions. The historical evidence suggested that early primary polls weren't particularly reliable — they'd failed to identify the winners in 2004, 2008 and 20129 — and that other measurable factors, such as endorsements, were more predictive. So my skepticism over Trump can be chalked up to a kind of rigid empiricism. When those indicators had clashed, the candidate leading in endorsements had won and the candidate leading in the polls had lost. Expecting the same thing to happen to Trump wasn't going against the data — it was consistent with the data!

To be more precise about this, I ran a search through our polling database for candidates who led national polls at some point in the year before the Iowa caucuses, but who lacked broad support from "party elites" (as measured, for example, by their number of endorsements). I came up with six fairly clear cases and two borderline ones. The clear cases are as follows:

George Wallace, the populist and segregationist governor of Alabama, led most national polls among Democrats throughout 1975, but Jimmy Carter eventually won the 1976 nomination.

Jesse Jackson, who had little support from party elites, led most Democratic polls through the summer and fall of 1987, but Michael Dukakis won the 1988 nomination.

Gary Hart also led national polls for long stretches of the 1988 campaign — including in December 1987 and January 1988, after he returned to the race following a sex scandal. With little backing from party elites, Hart wound up getting just 4 percent of the vote in New Hampshire.
Jerry Brown, with almost no endorsements, regularly led Democratic polls in late 1991 and very early 1992, especially when non-candidate Mario Cuomo wasn't included in the survey. Bill Clinton surpassed him in late January 1992 and eventually won the nomination. Herman Cain emerged with the Republican polling lead in October 2011 but dropped out after sexual harassment allegations came to light against him. Mitt Romney won the nomination. Newt Gingrich surged after Cain's withdrawal and held the polling lead until Romney moved ahead just as Iowa was voting in January 2012. Note that I don't include Rick Perry, who also surged and declined in the 2012 cycle but who had quite a bit of support from party elites, or Rick Santorum, whose surge didn't come until after Iowa. There are two borderline cases, however: Howard Dean led most national polls of Democrats from October 2003 through January 2004 but flamed out after a poor performance in Iowa. Dean ran an insurgent campaign, but I consider him a borderline case because he did win some backing from party elites, such as Al Gore. Rudy Giuliani led the vast majority of Republican polls throughout 2007 but was doomed by also-ran finishes in Iowa and New Hampshire. Giuliani had a lot of financial backing from the Republican "donor class" but few endorsements from Republican elected officials and held moderate positions out of step with the party platform. So Trump-like candidates — guys who had little party support but nonetheless led national polls, sometimes on the basis of high name recognition — were somewhere between 0-for-6 and 0-for-8 entering this election cycle, depending on how you count Dean and Giuliani. Based on that information, how would you assess Trump's chances this time around? This is a tricky question. Trump's eventual win was unprecedented, but there wasn't all that much precedent. Bayes' theorem can potentially provide some help, in the form of what's known as a uniform prior. A uniform prior works like this: Say we start without any idea at all about the long-term frequency of a certain type of event.10 Then we observe the world for a bit and collect some data. In the case of Trump, we observe that similar candidates have won the nomination zero times in 8 attempts. How do we assess Trump's probability now? According to the uniform prior, if an event has occurred x times in n observations, the chance of it occurring the next time around is this: \(\frac{\displaystyle x+1}{\displaystyle n+2}\) For example, if you've observed that an event has occurred 3 times in 4 chances (75 percent of the time) — say, that's how often your pizza has been delivered on time from a certain restaurant — the chance of its happening the next time around is 4 out of 6, according to the formula, or 67 percent. Basically, the uniform prior has you hedging a bit toward 50-50 in the cases of low information.11 In the case of Trump, we'd observed an event occurring either zero times out of 6 trials, or zero times out of 8, depending on whether you include Giuliani and Dean. Under a uniform prior, that would make Trump's chances of winning the nomination either 1 in 8 (12.5 percent) or 1 in 10 (10 percent) — still rather low, but higher than the single-digit probabilities we assigned him last fall. We've gotten pretty abstract. The uniform prior isn't any sort of magic bullet, and it isn't always appropriate to apply it. Instead, it's a conservative assumption that serves as a sanity check. 
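To make that arithmetic concrete, here is a minimal Python sketch of the uniform-prior estimate (the rule of succession); the inputs are the cases discussed in the text, and the function is just the formula above, not anything taken from a FiveThirtyEight model.

def uniform_prior_estimate(successes, trials):
    """Probability of the event on the next trial, assuming a uniform prior."""
    return (successes + 1) / (trials + 2)

print(uniform_prior_estimate(3, 4))  # the pizza example: 4/6, about 67 percent
print(uniform_prior_estimate(0, 6))  # Trump-like candidates, excluding Dean and Giuliani: 1/8 = 12.5 percent
print(uniform_prior_estimate(0, 8))  # including the two borderline cases: 1/10 = 10 percent

The same formula shows why the prior washes out with more data: an event observed 300 times in 400 chances gets an estimate of 301/402, roughly 74.9 percent, barely different from the raw 75 percent.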
Basically, it's saying that there wasn't a ton of data and that if you put Trump's chances much below 10 percent, you needed to have a pretty good reason for it. Did we have a good reason? One potentially good one was that there was seemingly sound theoretical evidence, in the form of the book "The Party Decides," and related political science literature, supporting skepticism of Trump. The book argues that party elites tend to get their way and that parties tend to make fairly rational decisions about whom they nominate, balancing factors such as electability against fealty to the party's agenda. Trump was almost the worst imaginable candidate according to this framework — not very electable, not very loyal to the Republican agenda and opposed by Republican party elites.

There's also something to be said for the fact that previous Trump-like candidates had not only failed to win their party's nominations, but also had not come close to doing so. When making forecasts, closeness counts, generally speaking. An NBA team that loses a game by 20 points is much less likely to win the rematch than one that loses on a buzzer-beater.

And in the absence of a large sample of data from past presidential nominations, we can look toward analogous cases. How often have nontraditional candidates been winning Republican down-ballot races, for instance? Here's a list of insurgent or tea party-backed candidates who beat more established rivals in Republican Senate primaries since 201012 (yes, Rubio had once been a tea party hero):

Winning insurgent candidates, 2010-14
2010  Sharron Angle        Nevada
2010  Ken Buck             Colorado
2010  Mike Lee             Utah
2010  Joe Miller           Alaska
2010  Christine O'Donnell  Delaware
2010  Rand Paul            Kentucky
2010  Marco Rubio          Florida
2012  Todd Akin            Missouri
2012  Ted Cruz             Texas
2012  Deb Fischer          Nebraska
2012  Richard Mourdock     Indiana

That's 11 insurgent candidate wins out of 104 Senate primaries during that time period, or about 10 percent. A complication, however, is that there were several candidates competing for the insurgent role in the 2016 presidential primary: Trump, but also Cruz, Ben Carson and Rand Paul. Perhaps they began with a 10 percent chance at the nomination collectively, but it would have been lower for Trump individually.

There were also reasons to be less skeptical of Trump's chances, however. His candidacy resembled a financial-market bubble in some respects, to the extent there were feedback loops between his standing in the polls and his dominance of media coverage. But it's notoriously difficult to predict when bubbles burst. Particularly after Trump's polling lead had persisted for some months — something that wasn't the case for some of the past Trump-like candidates13 — it became harder to justify still having the polling leader down in the single digits in our forecast; there was too much inherent uncertainty.

Basically, my view is that putting Trump's chances at 2 percent or 5 percent was too low, but having him at (for instance) 10 percent or 15 percent, where we might have wound up if we'd developed a model or thought about the problem more rigorously, would have been entirely appropriate. If you care about that sort of distinction, you've come to the right website!

Trump's nomination is potentially a point in favor of "polls-only" as opposed to "fundamentals" models.

In these last two sections, I'll be more forward-looking. Our initial skepticism of Trump was overconfident, but given what we know now, what should we do differently the next time around?
One seeming irony is that for an election that data journalism is accused of having gotten wrong, Trump led in the polls all along, from shortly after the moment he descended the escalator at Trump Tower in June until he wrapped up the nomination in Indiana.14 As I mentioned before, however, polls don't enjoy any privileged status under the empirical method. The goal is to find out what works based on the historical evidence, and historically polls are considerably more reliable in some circumstances (a week before the general election) than in others (six months before the Iowa caucuses).

Still, Trump's nomination comes at a time when I've had increasing concern about how much value other types of statistical indicators contribute as compared with polls. While in primaries, there's sometimes a conflict between what the polls say and "The Party Decides" view of the race, in general elections, the battle is between polls and "fundamentals." These fundamentals usually consist of economic indicators15 and various measures of incumbency.16 (The conflict is pertinent this year: Polls have Clinton ahead of Trump, whereas fundamentals-based models suggest the race should be a toss-up.)

But there are some big problems with fundamentals-based models. Namely, while they backtest well — they can "explain" past election results almost perfectly — they've done poorly at predicting elections when the results aren't known ahead of time. Most of these models expected a landslide win for Al Gore in 2000, for example. Some of them predicted George H.W. Bush would be re-elected in 1992 and that Bob Dole would beat Bill Clinton in 1996. These models did fairly well as a group in 2012, but one prominent model, which previously had a good track record, wrongly predicted a clear win for Romney. Overall, these models have provided little improvement over polls-only forecasts since they regularly began to be published in 1992. A review from Ben Lauderdale and Drew Linzer suggests that the fundamentals probably do contribute some predictive power, but not nearly as much as the models claim.

These results are also interesting in light of the ongoing replication crisis in science, in which results deemed to be highly statistically significant in scientific and academic journals often can't be duplicated in another experiment. Fundamentals-based forecasts of presidential elections are particularly susceptible to issues such as "p-hacking" and overfitting because of the small sample sizes and the large number of potential variables that might be employed in a model. Polling-based forecasts suffer from fewer of these problems because they're less sensitive to how the models are designed. The FiveThirtyEight, RealClearPolitics and Huffington Post Pollster polling averages all use slightly different methods, for example, but they're usually within a percentage point or two of one another for any given election, and they usually predict the same winner unless the election is very close. By contrast, subtle changes in the choice of "fundamentals" variables can produce radically different forecasts. In 2008, for instance, one fundamentals-based model had Barack Obama projected to win the election by 16 percentage points, while another picked John McCain for a 7-point victory. Put another way, polling-based models are simpler and less assumption-driven, and simpler models tend to retain more of their predictive power when tested out of sample.
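To see why a strong backtest can mislead when the sample is tiny, here is a synthetic sketch, not a reconstruction of any published fundamentals model. One predictor genuinely matters and five are pure noise; with only 16 simulated "elections", the six-variable fit always explains the past at least as well, but it will typically do worse when each election is predicted without using its own result (leave-one-out).

import numpy as np

rng = np.random.default_rng(0)
n = 16                                          # a small number of past elections, purely for illustration
econ = rng.normal(0.0, 1.0, n)                  # one genuinely predictive indicator
noise = rng.normal(0.0, 1.0, (n, 5))            # five indicators that are pure noise
margin = 2.0 * econ + rng.normal(0.0, 2.0, n)   # the outcome being "explained"

def rmse_in_sample(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sqrt(np.mean((y - X @ beta) ** 2)))

def rmse_leave_one_out(X, y):
    errors = []
    for i in range(len(y)):
        keep = np.arange(len(y)) != i
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errors.append(y[i] - X[i] @ beta)
    return float(np.sqrt(np.mean(np.square(errors))))

simple = np.column_stack([np.ones(n), econ])                # intercept plus the one real predictor
kitchen_sink = np.column_stack([np.ones(n), econ, noise])   # intercept plus six predictors

for name, X in (("1 predictor", simple), ("6 predictors", kitchen_sink)):
    print(name, "backtest RMSE:", round(rmse_in_sample(X, margin), 2),
          "out-of-sample RMSE:", round(rmse_leave_one_out(X, margin), 2))

The polling-average analogy is that averaging methods have few free parameters to tune, so their backtested and out-of-sample performance stay much closer together.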
This is a complicated subject, and I don't want to come across as some sort of anti-fundamentals fundamentalist. Reducing the weight placed on fundamentals isn't the same as discarding them entirely, and there are methods to guard against overfitting and p-hacking. And certain techniques, especially those that use past voting results, seem to add value even when you have plenty of polling. (For instance, extrapolating from the demographics in previous states to predict the results in future states generally worked well in the Democratic primaries this year, as it had in 2008.) The evidence is concerning enough, however, that we'll probably publish both "polls-only" and "polls-plus" forecasts for the general election, as we did for the primaries. There's a danger in hindsight bias, and in overcorrecting after an unexpected event such as Trump's nomination. Not so long ago, I wrote an article about the "hubris of experts" in dismissing an unconventional Republican candidate's chances of becoming the nominee. The candidate, a successful businessman who had become a hero of the tea party movement, was given almost no chance of winning the nomination despite leading in national polls. The candidate was a fairly heavy underdog, my article conceded, but there weren't a lot of precedents, and we just didn't have enough data to rule anything out. "Experts have a poor understanding of uncertainty," I wrote. "Usually, this manifests itself in the form of overconfidence." That was particularly true given that they were "coming to their conclusions without any statistical model," I said. The candidate, as you may have guessed, was not Trump but Herman Cain. Three days after that post went up in October 2011, accusations of sexual harassment against Cain would surface. About a month later, he'd suspend his campaign. The conventional wisdom would prevail; Cain's polling lead had been a mirage. When Trump came around, I'd turn out to be the overconfident expert, making pretty much exactly the mistakes I'd accused my critics of four years earlier. I did have at least a little bit more information at my disposal: the precedents set by Cain and Gingrich. Still, I may have overlearned the lessons of 2012. The combination of hindsight bias and recency bias can be dangerous. If we make a mistake — buying into those polls that showed Cain or Gingrich ahead, for instance — we feel chastened, and we'll make doubly sure not to make the same mistake again. But we can overcompensate and make a mistake in the opposite direction: For instance, placing too little emphasis on national polls in 2016 because 2012 "proved" they didn't mean anything. There are lots of examples like this in the political world. In advance of the 2014 midterms, a lot of observers were convinced that the polls would be biased against Democrats because Democrats had beaten their polls in 2012. We pointed out that the bias could just as easily run in the opposite direction. That's exactly what happened. Republicans beat their polls in almost every competitive Senate and gubernatorial race, picking up a couple of seats that they weren't expected to get. So when the next Trump-like candidate comes along in 2020 or 2024, might the conventional wisdom overcompensate and overrate his chances? It's possible Trump will change the Republican Party so much that GOP nominations won't be the same again.
But it might also be that he hasn't shifted the underlying odds that much. Perhaps once in every 10 tries or so, a party finds a way to royally screw up a nomination process by picking a Trump, a George McGovern or a Barry Goldwater. It may avoid making the same mistake twice — the Republican Party's immune system will be on high alert against future Trumps — only to create an opening for a candidate who finds a novel strategy that no one is prepared for.

Cases like these are why you should be wary about claims that journalists (data-driven or otherwise) ought to have known better. Very often, it's hindsight bias, sometimes mixed with cherry-picking17 and — since a lot of people got Trump wrong — occasionally a pinch of hypocrisy.18 Still, it's probably helpful to have a case like Trump in our collective memories. It's a reminder that we live in an uncertain world and that both rigor and humility are needed when trying to make sense of it.

Footnotes

1. In the absence of predictions, there are ways for journalism to be more falsifiable, such as if the reporting is transparent, precise, accessible, comprehensive, and does not rely on arguments from authority.

2. Nor should they get credit whenever a favorite wins.

3. I think we should have designed a model — but given that we didn't, I don't think there's anything wrong with framing our guesstimates in percentage terms. It's easy to make weasel-worded statements along the lines of "Trump is an underdog to win the nomination, but we can't rule anything out," leaving enough ambiguity in that sentence that you can claim to have been prescient, whatever the result. Putting things in percentage terms is more accountable, especially if you want to test your forecasts for calibration later on. But sometimes I wish that we had a clearer way to distinguish when we're just spitballing from when we're listing the result of a model or formula. Maybe we could use a different font, like Comic Sans?

4. There are some checks and balances; if the model shows a very large swing in the probabilities for a race, the program will ask a human to double-check the numbers to ensure that we haven't entered a poll incorrectly, for example.

5. Roughly speaking, this is how most of our election models work. They initially use a blend of polls and "fundamentals," but become closer to polls-only as the election approaches.

6. For instance, I'd look at betting markets, which gave Trump, say, a 15 percent chance of winning the nomination as of mid-October, and I'd calibrate my estimates of his chances relative to those. Trump was priced too high, I figured, because the conventional wisdom wasn't sufficiently aware of the poor predictive track record of national polls. But the market was already assigning a fairly steep discount to Trump's price relative to where polls had him. (By contrast, Rubio was priced at about 30 percent in betting markets in October, despite being well behind Trump in polls.) Was it too much of a discount or too little? Without a model, it was hard to say.

7. Remember, there were a lot of also-rans in the Republican race, so there can be several candidates in each state with long odds.

8. By competitive, I mean excluding races where an incumbent president was running without significant competition.

9. In 2004, John Kerry did not lead in national polls until after his win in Iowa. In 2008, Hillary Clinton and Rudy Giuliani were consistently ahead in national polls before Iowa. In 2012, Romney usually trailed down the pre-Iowa stretch run to Herman Cain or Newt Gingrich.
10. In fact, our initial assumption (our prior belief) is that the long-term frequency is equally likely to be anywhere from 0 percent of the time to 100 percent of the time.

11. With a lot of observational data, the uniform prior won't matter much. If you'd observed that an event had occurred 300 times in 400 chances, applying the prior would reduce your estimate only to 74.9 percent from 75 percent.

12. You might notice there are no examples from 2014 in the table; that's because there were no clear cases of insurgent candidates upsetting establishment-backed alternatives that year.

13. Although it was true for others, such as Giuliani.

14. Although, note that while the topline results from the polls were usually good for Trump, other poll-based measures pointed toward the possibility that he had a "ceiling" with GOP voters. Trump's favorability ratings among Republicans were middling, and he sometimes polled behind candidates like Rubio and Cruz in hypothetical one-on-one matchups, suggesting that he'd lose ground as the field winnowed. Trump also never polled well in Iowa (and he eventually lost the state), raising the possibility that he'd lose favor with voters once they got a closer look at him.

15. A better economy is thought to help the party holding the White House.

16. An incumbent seeking a second term is supposed to have an advantage for re-election, but some models say a party is at a disadvantage when the incumbent is term-limited and a party is seeking a third consecutive term, as Democrats are this year.

17. While I can't blame people for criticizing our Trump coverage, I'm less sympathetic to those who made a big deal of the misses for our polling-based models in state primaries such as the Democratic contests in Indiana and Michigan, given that the models have had a strong overall track record this year.

18. For instance, The New York Times's Jim Rutenberg, who wrote a critical column about FiveThirtyEight's coverage, dismissed Trump's chances when Trump was contemplating a bid four years ago because early polling "has proven to be an unreliable barometer of actual primary and caucus results months later."

Nate Silver is the founder and editor in chief of FiveThirtyEight. @natesilver538
Anatomical, physical, and mechanical properties of four pioneer species in Malaysia

H. Hamdan, A. S. Nordahlia, U. M. K. Anwar, M. Mohd Iskandar, M. K. Mohamad Omar & Tumirah K

The purpose of this study is to evaluate the anatomical, physical, and mechanical properties of four pioneer species, i.e., batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense). Correlations of the factors influencing density, shrinkage, and mechanical properties are also discussed. Samples were obtained from the Forest Research Institute Malaysia (FRIM) campus. The results showed that these four pioneer species are characterised by medium-to-large vessels with tyloses and gum deposits absent, fine rays, thin-walled fibres, Runkel ratios of less than 1.0, and low density and mechanical properties. Sesendok has significantly higher values of fibre length, fibre diameter, fibre lumen diameter, fibre wall thickness, vessel diameter, density, MOR, MOE, compression parallel to the grain, and shear parallel to the grain than the other three pioneer species, at 2001 µm, 45 µm, 35 µm, 5.1 µm, 300 µm, 514 kg/m3, 79.5 N/mm2, 9209 N/mm2, 38.7 N/mm2, and 10.1 N/mm2, respectively. Among these four pioneer species, ludai has a significantly higher Runkel ratio (0.57), whereas mahang shows a significantly higher slenderness ratio and number of vessels/mm2 (50.2 and 5 vessels/mm2, respectively). On the other hand, batai has higher tangential, radial, and longitudinal shrinkage than ludai, mahang, and sesendok, at 3.0%, 2.4%, and 0.8%, respectively. Based on this basic property study, batai, ludai, mahang, and sesendok could be suitable for pulp and paper, plywood, light construction, furniture, interior finishing, and general utility. Fibre length, fibre wall thickness, and vessel diameter correlated significantly with density and mechanical properties. Shrinkage and mechanical properties were significantly influenced by density.

Pioneer species are seen as an alternative to the depleting resources of commercial timber from the natural forest. They grow on previously disturbed land, such as areas of clear cutting, damage by the elements of nature, or former agricultural land. These species adapt well to nutrient-depleted soils and colonize them more easily than other species. They are also known as successional species and make the soil more livable for species that are not good colonizers by putting nutrients back into the soil and providing shade for other plants [1]. Information on the availability of pioneer species was obtained from the National Forest Inventory 4 Report for Peninsular Malaysia conducted by the Forest Department Peninsular Malaysia (JPSM) in 2000–2002 [2]. According to [1], pioneer species such as batai, ludai, mahang, and sesendok have potential for the cellulosic industry due to their fast growth; they are relatively free from common or major known pests and diseases and yet produce acceptable wood. Studies on the anatomical, physical, and mechanical properties of pioneer species are needed to explore their suitability for various applications in the wood-based industry, such as pulp and paper and plywood, where demand for these products is increasing. Anatomical properties, such as cell structure and fibre morphology, are very important for determining the different areas of application.
As an example, fibre morphology is an indicator of the suitability of timber for pulp and paper products [3]. Besides that, fibre length and fibre wall thickness can also be used to predict density and mechanical properties [4]. On the other hand, vessel size is related to treatability, with large vessels indicating easier treatment than small vessels [5]. Physical properties such as density and shrinkage are related to wood quality. Density is correlated with shrinkage, drying, machining, and mechanical properties [6, 7]. Shrinkage of wood is another important physical property, as noted by Kiaei [8]. It is necessary to have a good understanding of the shrinkage behavior of wood, since this property is associated with effects such as warping, cupping, checking, and splitting, which are among the most troublesome physical problems of wood [9]. Mechanical properties affect wood quality, characterise the suitability of wood for structural applications, and can also be used as an indicator of the quality of sawn lumber [10, 11].

The purpose of this study is to evaluate the anatomical, physical, and mechanical properties of four pioneer species, i.e., batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense). These four pioneer species were selected to meet the needs of a wood industry that requires a continuous supply of short-rotation raw materials; their fast growth allows them to be harvested within about 10 years. Correlation factors influencing density, shrinkage, and mechanical properties are also presented. It is hoped that these basic properties will be useful to the wood-based industry in exploring suitable products from these pioneer timber species.

Preparation of materials

Samples of batai (Paraserianthes moluccana), ludai (Sapium baccatum), mahang (Macaranga gigantea), and sesendok (Endospermum malaccense) were obtained from the Forest Research Institute Malaysia (FRIM) campus. The trees were planted at a spacing of 3 × 3 m. Three 14-year-old trees of each species were felled at 15 cm above the ground. Two discs approximately 3 cm in thickness and billets of 2 m length were cut. The discs were assigned to the anatomical and physical property studies, and the 2 m billets were used for the mechanical property study [12].

Determination of anatomical properties

The anatomical feature study was conducted according to the method of Schweingruber et al. [13]. A 10 × 10 × 10 mm wood block was taken from each wood disc. The blocks were boiled in distilled water until they were well soaked and sank. A sledge microtome was used to cut thin sections from the transverse, tangential, and radial surfaces of each block, at a thickness of approximately 25 µm. The transverse, tangential, and radial sections were kept in separate petri dishes for the staining process. Staining was carried out using 1% safranin-O. The sections were washed with 50% ethanol and dehydrated using a series of ethanol solutions with concentrations of 70%, 80%, 90%, and 95%. Then, one drop of Canada Balsam was placed on top of each section and covered with a cover slip. The slides were oven-dried at 60 °C for a few days. The maceration technique was used to determine the fibre morphology [14].
A wood block was split into matchstick-sized pieces before being macerated in a 1:1 mixture of 30% hydrogen peroxide and glacial acetic acid at 45 °C for 2 to 3 h, until all of the lignin had dissolved and the cellulose fibres appeared whitish. Microscopic observations and measurement of the wood anatomical features were carried out using a light microscope. The descriptive terminology follows the International Association of Wood Anatomists (IAWA) List of Microscopic Features for Hardwood Identification [14]. For all anatomical property measurements, 25 readings were taken randomly for each of batai, ludai, mahang, and sesendok. The slenderness ratio (fibre length/fibre diameter) and the Runkel ratio (2 × wall thickness/lumen diameter) [15, 16] were also calculated.

Determination of physical and mechanical properties

Physical properties were tested using British Standard 373:1957 Methods of Testing Small Clear Specimens of Timber [17]. Samples of 20 mm (radial) × 20 mm (longitudinal) × 40 mm (tangential) were cut from the wood for the analyses of density and shrinkage. Density was determined on the basis of oven-dry weight and green volume. The shrinkage test was conducted from green to air-dry conditions. The tangential, radial, and longitudinal dimensions of each sample were marked and measured with a pair of digital vernier callipers (Mitutoyo) to the nearest 0.01 mm. A total of 90 specimens were used for each of batai, ludai, mahang, and sesendok. Shrinkage was calculated using the following equation:

$$ S_{a}(\%) = \frac{D_{i} - D_{a}}{D_{i}} \times 100 $$

where Sa is the shrinkage from green to air-dry conditions, Di is the initial dimension (mm), and Da is the air-dry dimension (mm).

Samples for mechanical properties were tested in accordance with British Standard 373:1957 Methods of Testing Small Clear Specimens of Timber [17]. The following tests were conducted: static bending (modulus of rupture, MOR, and modulus of elasticity, MOE), compression parallel to the grain, and shear parallel to the grain. The standard dimensions for the static bending test were 300 × 20 × 20 mm. Specimens of 20 × 20 × 60 mm were used for the compression parallel to the grain test; each specimen was placed in a vertical position. The dimensions of the specimens for shear parallel to the grain were 20 × 20 × 20 mm. The direction of shearing was parallel to the longitudinal direction of the grain, and the test was made on the tangential and radial planes of the sample. The total number of specimens was 90 for each of batai, ludai, mahang, and sesendok. All tests were conducted using a 100 kN Shimadzu testing machine. Statistical analysis was performed using Statistical Analysis System (SAS) version 9.1.3 software. Analysis of variance (ANOVA) was used to determine whether or not the differences in means were significant. If the differences were significant, the Least Significant Difference (LSD) test was used to determine which of the means were significantly different from one another. The relationship between the properties was analysed using simple correlation analysis.

Anatomical properties

Anatomical features of batai, ludai, mahang, and sesendok are shown in Figs. 1, 2, 3, and 4. The anatomical features of these four pioneer species are described for identification purposes and as an important indication of the suitability of the timber for potential uses. Figure 1 shows the anatomical features of batai.
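As a brief aside on the derived quantities defined in the Methods above, the following Python sketch restates the formulas for the slenderness ratio, the Runkel ratio, and green-to-air-dry shrinkage; the input values are placeholders for illustration, not measurements from this study.

def slenderness_ratio(fibre_length, fibre_diameter):
    # fibre length / fibre diameter (both in micrometres)
    return fibre_length / fibre_diameter

def runkel_ratio(wall_thickness, lumen_diameter):
    # 2 x wall thickness / lumen diameter
    return 2 * wall_thickness / lumen_diameter

def shrinkage_percent(initial_dim, air_dry_dim):
    # Sa (%) = (Di - Da) / Di x 100
    return (initial_dim - air_dry_dim) / initial_dim * 100

# Placeholder example values (not data from Table 1 or Table 2)
print(slenderness_ratio(2000.0, 45.0))  # about 44 (dimensionless)
print(runkel_ratio(5.0, 35.0))          # about 0.29; values below 1.0 are regarded as good for pulp
print(shrinkage_percent(40.00, 38.80))  # 3.0 percent shrinkage for a 40 mm green dimension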
In batai (Fig. 1), the vessels are predominantly solitary and in radial multiples of 2–4, with simple perforations. The tangential diameter ranges from 282 to 299 µm and the frequency is 1–3/mm2. Tyloses and deposits are absent. The axial parenchyma is vasicentric and diffuse, visible as white dots in cross section and more distinct with a hand lens than under the microscope. Its rays are usually uniseriate, although sometimes biseriate, with a cell height of 310–550 µm, and homocellular with procumbent cells. Fibres are non-septate; crystals are present in chambered axial parenchyma, but silica grains are absent.

Fig. 1 Batai: (a) transverse section, (b) tangential section, and (c) radial section
Fig. 2 Ludai: (a) transverse section, (b) tangential section, and (c) radial section
Fig. 3 Mahang: (a) transverse section, (b) tangential section, and (c) radial section
Fig. 4 Sesendok: (a) transverse section, (b) tangential section, and (c) radial section

The anatomical features of ludai (Fig. 2) show that the vessels are predominantly solitary and in radial multiples of 2–6, with simple perforations. The tangential diameter ranges from 243 to 257 µm and the frequency is 3–5/mm2. Tyloses and deposits are absent. Axial parenchyma is in irregularly wavy, narrow bands, more distinct with a hand lens than under the microscope due to the lack of contrast with the fibres. Rays are exclusively uniseriate, with a height ranging from 2500 to 8000 µm, and composed of homocellular cells. Fibres are non-septate, while silica grains are present in rays and axial parenchyma.

The anatomical features of mahang (Fig. 3) show that the vessels are solitary and in radial multiples of 2–3, with simple perforations. The tangential diameter ranges from 155 to 167 µm and the frequency is 4–7/mm2. Tyloses and deposits are absent. Axial parenchyma is in narrow bands. Rays are 1–3 seriate, with a height ranging from 1700 to 3100 µm, heterocellular with procumbent and upright cells. Fibres are non-septate. Crystals are often present in rays or axial parenchyma. Silica grains are absent.

The anatomical features of sesendok (Fig. 4) show that the vessels are predominantly in radial pairs and multiples of 2–7 in a series, with occasional clusters, and with simple perforations. The tangential diameter ranges from 291 to 309 µm and the frequency is 1–3/mm2. Tyloses and deposits are distinctly absent. Axial parenchyma is in regularly spaced apotracheal bands, more distinct with a hand lens than under the microscope. Rays are 1–2 seriate, with a height of 500–1500 µm, heterocellular with procumbent and upright cells. The fibres are non-septate, while crystals and silica grains are absent.

Table 1 summarizes the anatomical properties of batai, ludai, mahang, and sesendok in comparison with other well-known plantation timbers. The results showed that the fibre lengths are significantly different (p ≤ 0.05), with sesendok having the longest fibres compared with the other three pioneer species. This result is similar to the finding of [18], who also reported that sesendok has the longest fibre, which is very long for a hardwood and could be suitable for pulp and paper. In comparison with other well-known plantation timbers, i.e., rubberwood (Hevea brasiliensis) and Eucalyptus grandis, these four pioneer species show comparable values in terms of fibre length. The fibre wall of sesendok is the thickest at 5.1 µm, followed by mahang, ludai, and batai. The fibres of these four pioneer species are categorised as very thin-walled, i.e., the fibre lumen is three times (or more) wider than the double wall thickness.
The Runkel ratios of batai, ludai, mahang, and sesendok are all less than 1.0, at 0.27, 0.57, 0.38, and 0.28, respectively, whilst the slenderness ratios for batai, ludai, mahang, and sesendok were 36.4, 43.2, 50.2, and 45.6, respectively. On the other hand, the vessel diameter of batai, ludai, and sesendok is categorised as large, with sesendok having the significantly largest vessel diameter. The vessel diameter of mahang is the smallest of the four species and is categorised as medium-sized. The number of vessels present in these four pioneer species is categorised as very few.

Table 1 Anatomical properties of batai, ludai, mahang, and sesendok in comparison with other well-known plantation timbers

The suitability of the timber for papermaking is based on the Runkel ratio. Fibres with a Runkel ratio of less than 1.0 are suitable for use as pulp with good strength properties [3]. A high Runkel ratio indicates an inferior raw material for papermaking, in which the fibre is stiff, less flexible, and forms bulkier paper with a lower bonded area [19]. Based on the results obtained (Table 1), the mean Runkel ratios of all four pioneer species studied were less than 1.0, indicating that the fibres from these timbers would produce good quality paper. Besides that, the tearing strength and folding endurance of paper are indicated by the slenderness ratio [20]. A larger slenderness ratio is better for papermaking, as it indicates better-formed and well-bonded paper [19, 21]. The present results showed that the slenderness ratio of the four pioneer species is in the range of 36.4–50.2. This result is comparable to that reported for Eucalyptus grandis, as shown in Table 1, which ranges from 42.6 to 59.8 [22]. Batai, ludai, mahang, and sesendok also show thin fibre walls and large fibre lumen diameters, features which, according to [5], contribute to good adhesive penetration.

Observation of the anatomical features of the four pioneer species shows that all the timbers can be categorised as having medium-to-large vessels according to the vessel categories of [14]. Karl [23] stated that wood species with medium-to-large vessels may not be good for printing papers, while [24] reported that species with medium-to-large pores are generally light with a coarse texture, which is suitable for general usage. These four pioneer species have large vessels, no tyloses or gum deposits, and uniseriate, fine rays; according to [5, 25, 26], these characteristics make them easy to impregnate to enhance the wood properties. The absence of gum deposits in batai, ludai, mahang, and sesendok would also make these timbers suitable for veneering into plywood. Adeniyi et al. [5] further reported that timber for plywood should be free from gum deposits, as they would interfere with wood gluability. The anatomical features show that these four pioneer species mostly have uniseriate rays, which could contribute to excellent nailing properties. As reported by [27], wood with multiseriate rays has poor nailing properties, as it has a tendency to split when nailed. However, the presence of silica in ludai would cause a blunting effect on sawteeth. This was also reported by [28], where the silica present in Coelostegia griffithii and Durio griffithii caused a blunting effect on sawteeth.

Physical and mechanical properties

Results of the physical and mechanical properties are tabulated in Table 2.
Based on the density, batai, ludai, mahang, and sesendok are classified as light timbers, comparable to rubberwood and Eucalyptus grandis. From the results obtained, sesendok has the highest density, followed by mahang, ludai, and batai. The trend in density among the four pioneer species could be related to fibre length, fibre wall thickness, and vessel diameter. The longest fibres and thickest fibre walls, found in sesendok (Table 1), are directly related to its having the highest density among the four species studied. On the other hand, batai has the shortest and thinnest fibres and a large vessel diameter (Table 1), which contribute to its lower density. Similar results were also reported by [29] and [30], where density was correlated with fibre length, fibre wall thickness, and vessel diameter.

Table 2 Physical and mechanical properties of batai, ludai, mahang, and sesendok in comparison with other well-known plantation timbers

In terms of shrinkage (Table 2), batai and sesendok have the highest tangential, radial, and longitudinal shrinkage. Ludai and mahang show no significant difference in tangential and longitudinal shrinkage between them. The percentage shrinkage of sesendok and batai is rated as high, whilst that of ludai and mahang is rated as average. The shrinkage rating is based on the percentage tangential shrinkage from green to air dry, as reported by [31]. Sesendok shows significantly higher values of MOR, MOE, compression, and shear parallel to the grain, followed by mahang and ludai, with batai having the lowest mechanical properties. Van Gelder et al. [32] reported that pioneer species had significantly lower wood density, MOR, and compression strength.

Correlation factors influencing density, shrinkage, and mechanical properties

Table 3 presents the correlation factors influencing density, shrinkage, and mechanical properties of batai, ludai, mahang, and sesendok. Based on the results, density was positively correlated with fibre length except in batai, with moderate-to-strong correlations. Fibre diameter is weakly correlated with density in batai (r = 0.229) and mahang (r = 0.325). Density is positively correlated with fibre wall thickness in all species studied, with weak-to-moderate correlations. Vessel diameter also correlates significantly with density, with negative, weak-to-moderate correlations in all species. Shrinkage shows significant correlations with fibre length and fibre wall thickness in batai, ludai, mahang, and sesendok. Shrinkage is affected by density more strongly than by the anatomical properties, with positive, very weak-to-very strong correlations in all species. On the other hand, the mechanical properties are significantly and positively correlated with fibre length, fibre wall thickness, and vessel diameter in batai, ludai, mahang, and sesendok. Among the properties, density is the factor best correlated with MOR, MOE, compression parallel to the grain, and shear parallel to the grain, showing positive, weak-to-strong correlations. Fibre diameter and fibre lumen diameter are also significantly correlated with some properties, as shown in Table 3.

Table 3 Correlation factors influencing density, shrinkage and mechanical properties of batai, ludai, mahang, and sesendok

This study found that the anatomical properties that significantly affect density and mechanical properties are fibre length, fibre wall thickness, and vessel diameter.
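The simple correlation analysis referred to above can be illustrated with a minimal Python sketch; the paired values below are placeholders, not the data behind Table 3, and the study itself used SAS rather than Python.

import numpy as np

# Hypothetical paired measurements for one species (placeholders only)
density = np.array([480.0, 495.0, 510.0, 505.0, 520.0, 514.0, 498.0, 530.0])  # kg/m3
mor = np.array([70.2, 74.5, 78.0, 76.1, 80.3, 79.5, 73.8, 82.0])              # N/mm2

r = np.corrcoef(density, mor)[0, 1]  # Pearson correlation coefficient
print(round(r, 3))                   # values near +1 indicate a strong positive correlation

The sketch only computes the coefficient itself; whether a correlation is reported as significant depends on the accompanying statistical tests carried out in SAS.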
Similar findings were also observed by [4, 33] in Pseudolachnostylis maprounaefolia and Azadirachta excelsa, respectively. [5] further stated that strong wood has smaller vessel diameters and thick fibre walls. On the other hand, shrinkage was significantly affected by the anatomical properties, namely fibre length and fibre wall thickness. A significant correlation of fibre dimensions with shrinkage was also reported by [34] in Gmelina arborea. Based on the results obtained (Table 3), the number of vessels per mm2 did not significantly influence density, shrinkage, or the mechanical properties, which was also confirmed by [4]. Thus, it can be inferred from the results of this study that shrinkage and the mechanical properties are highly dependent on density. Wood with higher density has higher shrinkage and mechanical properties. This is in good agreement with [35,36,37], who also reported significant relationships between density and shrinkage in Melia azedarach, Azadiractha indica, and Pinus pinaster, respectively. Meanwhile, a correlation between density and mechanical properties was also observed by [38,39,40] in Acacia mangium, Acacia melanoxylon, and Tectona grandis, respectively.

Based on the results obtained, sesendok has the largest vessels, the longest and thickest fibres, and the highest density and mechanical properties compared with batai, ludai, and mahang. These four pioneer species could be suitable for pulp and paper, since they have relatively long fibres and Runkel ratios of less than 1.0. The absence of gum deposits makes the timbers suitable for plywood. Besides that, these four pioneer species have low density and mechanical properties, which make them suitable for light construction, furniture, interior finishing, and general utility. Batai, ludai, mahang, and sesendok have excellent nailing properties and could be easily treated. In terms of correlation, fibre length, fibre wall thickness, and vessel diameter are significantly correlated with density and mechanical properties. In the present study, density is a good indicator for predicting shrinkage and mechanical properties. Generally, batai, ludai, mahang, and sesendok could be promising timber species as alternatives to the depleting resources of commercial timber.

All data analysed during this study are included in this published article.

Abbreviations: MOR: modulus of rupture; MOE: modulus of elasticity

References

Cheah LC (1995) Pioneer species for fast growing tree plantations in Malaysia-an evaluation. FRIM Technical Information No53. Forest Research Institute Malaysia, Kepong, Selangor Forest Department Peninsular Malaysia (JPSM) (2004) National forest inventory 4 report for Peninsular Malaysia conducted by the Forest Department Peninsular Malaysia (JPSM) in 2000–2002 Takeuchi R, Wahyudi I, Aiso H, Ishiguri F, Istikowati TW, Ohkubo T, Ohshima J, Iizuka K, Yokota S (2016) Wood properties related to pulp and paper quality in two Macaranga species naturally regenerated in secondary forests, Central Kalimantan, Indonesia. TROPICS 25(3):107–115 Uetimane EJ, Ali AC (2011) Relationship between mechanical properties and selected anatomical features of Ntholo (Pseudolachnostylis maprounaefolia). J Trop For Sci 23(2):166–176 Adeniyi IM, Adebagbo CA, Oladapo FM, Ayetan G (2013) Utilisation of some selected wood species in relation to their anatomical features. Glob J Sci Front Res Agric Veter 13(9):2249–4626 Igartua DV, Monteoliva SE, Monterubbianesi MG, Villegas MS (2003) Basic density and fibre length at breast height of Eucalyptus globulus for parameter prediction of the whole tree.
IAWA J 24(2):173–184 Miyoshi Y, Kojiro K, Furuta Y (2018) Effects of density and anatomical feature on mechanical properties of various wood species in lateral tension. J Wood Sci 64:509–514 Kiaei M (2011) Anatomical, physical and mechanical properties of Eldas Pine (Pinus eldarica Medw.) grown in the Kelardasht region. Turkish J Agric For 35:3–42 Shukla SR, Sharma SK, Rao RV (2003) Specific gravity and shrinkage behaviour of eight-year-old plantation grown Tecomella undulata. J Trop For Products 9:35–44 Hsu CYL, Chauhan SS, King N, Lindstrom H (2003) Modulus of elasticity of stemwood vs branchwood in 7-year-old Pinus radiata families. NZ J Forest Sci 33(1):35–46 Karlinasari L, Wahyuna ME, Nugroho M (2008) Non-destructive ultrasonic testing method for determining bending strength properties of Gmelina wood (Gmelina arborea). J Trop For Sci 20(2):99–104 Tan YE, Lim NPT, Gan KS, Wong TC, Lim SC, Thilagawaty M (2010) Testing methods for plantation grown tropical timbers. ITTO project on improving utilization and value adding of plantation timbers from sustainable sources in Malaysia project NO. PD 306/04(1). Forest Research Institute Malaysia, Kepong, Selangor Schweingruber FH, Borner A, Schulze ED (2006) Atlas of woody plant stems: evolution, structure and environmental modifications. Springer, New York Wheeler EA, Baas P, Gasson PE (1989) IAWA List of microscopic features for hardwood identification. IAWA Bull 10(3):219–332 Gülsoy SK, Hafizoglu H, Pekgozlu AK, Tumen İ, Donmez İE, Sivrikaya H (2017) Fiber properties of axis and scale of eleven different coniferous cones. Ind Crops Prod 109:45–52 Singh S, Mohanty AK (2007) Wood fiber reinforced bacterial bioplastic composites: fabrication and performance evaluation. Compos Sci Technol 67(9):1753–1763 BS (British Standard) 373 (1957) Methods of testing small clear specimens of timber. British Standard Institution Ogata K, Fujii T, Abe H, Baas P (2008) Identification of the timbers of Southeast Asia and the Western Pacific. Forestry and Forest Product Research Institute, Tsukuba Ashori A, Nourbakhsh A (2009) Studies on Iranian cultivated paulownia: a potential source of fibrous raw material for paper industry. Eur J Wood Wood Prod 67:323–327 Yahya R, Sugiyama J, Silsia D, Grill J (2010) Some anatomical features of an Acacia hybrid, A. mangium and A. auriculiformis grown in Indonesia with regard to pulp yield and paper strength. J Trop For Sci 22:343–351 Rodriguez HG, Maiti R, Kumari A, Sarkar NC (2016) Variability in wood density and wood fibre characterization of woody species and their possible utility in Northeastern Mexico. Am J Plant Sci 7:1139–1150 Palermo GPM, Latorraca JVF, Carvalho AM, Calonego FW, Severo ETD (2015) Anatomical properties of Eucalyptus grandis wood and transition age between the juvenile and mature woods. Eur J Wood Product 73:775–780. https://doi.org/10.1007/s00107-015-0947-4 Karl FW (1984) Forestry handbook, Society of American Foresters. pp 616–623 Jayeola AA, David OA, Abayomi EF (2009) Use of wood characters in the identification of selected timber species in Nigeria. Not Bot Hort Agrobot 37(2):2832 Sint KM, Militz H, Hapla F (2011) Treatability and penetration indices of four lesser-used Myanmar hardwood. J Wood Res 56(1):13–22 Sint KM, Stergios A, Gerald K, Frantisek H, Holger M (2013) Wood anatomy and topochemistry of Bombax ceiba L. and Bombax insigne Wall. Bioresource 8(1):530544 Lim SC, Nordahlia AS, Abd Latif M, Gan KS, Rahim S (2016) Identification and properties of Malaysian Timbers. 
Malaysian Forest Records No. 53. Forest Research Institute Malaysia, Kepong, Selangor Wong WC, Lim SC (1990) Malaysian Timbers-Durian. Timber Trade Leaflet No 113. Forest Research Institute Malaysia, Kepong, Selangor Nordahlia AS, Ani S, Zaidon A, Mohd Hamami S (2011) Fibre morphology and physical properties of 10-year-old sentang (Azadirachta excelsa) planted from rooted cuttings and seedlings. J Trop For Sci 23(2):222–227 Alia-Syahirah Y, Paridah MT, Hamdan H, Anwar UMK, Nordahlia AS, Lee SH (2019) Effects of anatomical characteristics and wood density on surface roughness and their relation to surface wettability of hardwood. J Trop For Sci 31(3):269–277 Lim SC, Gan KS, Chung RCK (2019) A dictionary of Malaysian Timbers. Malayan Forest Records No 30. Forest Research Institute Malaysia, Kepong, Selangor Van Gelder HA, Poorter L, Sterck FJ (2006) In wood mechanics, allometry, and life history variation in a tropical rain forest tree community. New Phytol 171(2):367–378 Nordahlia AS, Anwar UMK, Hamdan H, Zaidon A, Mohd Omar MK (2014) Mechanical properties of 10-year-old sentang (Azadirachta excelsa) grown from vegetative propagation. J Trop For Sci 26(2):240–248 Okon KE (2014) Relationships between fibre dimensional characteristics and shrinkage behavior in a 25-year-old Gmelina arborea in Oluwa forest reserve, South West Nigeria. Appl Sci Res 6(5):50–57 Van Duong D, Matsumura J (2018) Transverse shrinkage variations within tree stems of Melia azedarach planted in northern Vietnam. J Wood Sci 64:720–729 Sotannde OA, Oluyege AO, Adeogun PF, Maina SB (2010) Variation in wood density, grain and anisotropic shrinkage of plantation grown Azadiractha indica. J Appl Sci Res 6(11):1855–1861 Muñoz GR, Anta MB (2010) Physical properties of tinning wood in maritime pine (Pinus pinaster Ait): Case study. Eur J For Res 129(103):71045 Fanny H, Ramadhani AP, Harry P, Sri S (2018) Pengaruh kecepatan pertumbuhan terhadap sifat fisika dan mekanika kayu Acacia mangium Umur 4 Tahun asal Wonogiri, Jawa Tengah (Effect of growth rate on physical and mechanical properties of 4 year old Acacia mangium wood from Wonogiri, Central Java). J For Sci 12:248–254 Machado JS, Louzada JL, Santos AJA, Nunes L, Anjos O, Rodrigues J, Simões RMS, Pereira H (2014) Variation of wood density and mechanical properties of Blackwood (Acacia melanoxylon R.Br.). Mater Des 56:975–980 Izekor DN, Fuwape JA, Oluyege AO (2010) Effects of density on variations in the mechanical properties of plantation grown Tectona grandis wood. Appl Sci Res 2(6):113–120 Naji HR, Suhaimi MH, Nobuchi T, Bakar ES (2013) Intra and interclonal variation in anatomical properties of Hevea Brasiliensis Muell. Argic Wood Fiber Sci 45(3):268–278 Bal BC, Bektaş İ (2012) The physical properties of heartwood and sapwood of Eucalyptus grandis. Proligno 8(4):35–43 Bal BC, Bektaş İ (2013) The mechanical properties of heartwood and sapwood of Flooded gum (Eucalyptus grandis) Grown in Karabucak, Turkey. Ormancilik Dergisi (Forestry Magazine) 9(1):71–76 Forest Products Division, Forest Research Institute Malaysia, 52109, Kepong, Selangor Darul Ehsan, Malaysia H. Hamdan, A. S. Nordahlia, U. M. K. Anwar, M. Mohd Iskandar, M. K. Mohamad Omar & Tumirah K H. Hamdan A. S. Nordahlia U. M. K. Anwar M. Mohd Iskandar M. K. Mohamad Omar Tumirah K All authors have participated sufficiently in the study of wood properties and are responsible for the entire contents. The author read and approved the final manuscript. Correspondence to A. S. Nordahlia. 
Hamdan, H., Nordahlia, A.S., Anwar, U.M.K. et al. Anatomical, physical, and mechanical properties of four pioneer species in Malaysia. J Wood Sci 66, 59 (2020). https://doi.org/10.1186/s10086-020-01905-z

Keywords: Pioneer species
BMC Ecology Composition, uniqueness and connectivity across tropical coastal lagoon habitats in the Red Sea Zahra Alsaffar1,2, João Cúrdia1, Xabier Irigoien1,3,4 & Susana Carvalho ORCID: orcid.org/0000-0003-1300-19531 BMC Ecology volume 20, Article number: 61 (2020) Cite this article Tropical habitats and their associated environmental characteristics play a critical role in shaping macroinvertebrate communities. Assessing patterns of diversity over space and time and investigating the factors that control and generate those patterns is critical for conservation efforts. However, these factors are still poorly understood in sub-tropical and tropical regions. The present study applied a combination of uni- and multivariate techniques to test whether patterns of biodiversity, composition, and structure of macrobenthic assemblages change across different lagoon habitats (two mangrove sites; two seagrass meadows with varying levels of vegetation cover; and an unvegetated subtidal area) and between seasons and years. In total, 4771 invertebrates were identified belonging to 272 operational taxonomic units (OTUs). We observed that macrobenthic lagoon assemblages are diverse, heterogeneous and that the most evident biological pattern was spatial rather than temporal. To investigate whether macrofaunal patterns within the lagoon habitats (mangrove, seagrass, unvegetated area) changed through the time, we analysed each habitat separately. The results showed high seasonal and inter-annual variability in the macrofaunal patterns. However, the seagrass beds that are characterized by variable vegetation cover, through time, showed comparatively higher stability (with the lowest values of inter-annual variability and a high number of resident taxa). These results support the theory that seagrass habitat complexity promotes diversity and density of macrobenthic assemblages. Despite the structural and functional importance of seagrass beds documented in this study, the results also highlighted the small-scale heterogeneity of tropical habitats that may serve as biodiversity repositories. Comprehensive approaches at the "seascape" level are required for improved ecosystem management and to maintain connectivity patterns amongst habitats. This is particularly true along the Saudi Arabian coast of the Red Sea, which is currently experiencing rapid coastal development. Also, considering the high temporal variability (seasonal and inter-annual) of tropical shallow-water habitats, monitoring and management plans must include temporal scales. Coastal lagoons are important transition systems providing essential socio-economic goods and services (e.g. shore protection, fisheries, carbon sequestration) [1,2,3]. Coastal lagoons harbour well-adapted and sometimes unique assemblages of species, which play a vital role directly supporting local populations. These ecosystems are naturally stressed on daily to annual-time scales [4,5,6,7,8] and display high environmental variability (e.g. temperature, salinity, primary productivity, nutrients, dissolved oxygen). Such variability is reflected in the biological patterns that alter in response to the new environmental conditions. Lagoon ecosystems are also being increasingly affected by human disturbances that can compromise their ecological and socio-economic values [5, 9,10,11,12]. Subtropical and tropical coastal lagoons encompass a range of essential soft-substrate habitats, such as mangroves, seagrasses and unvegetated bottoms. 
These habitats are associated with different environmental conditions, resulting not only from their location along the depth profile but also from their structural complexity and biological assemblages [13,14,15,16]. However, while these habitats contain a diverse range of organisms, spatial distribution patterns and connectivity in subtropical and tropical lagoon habitats have mainly been assessed using fish and other mobile marine fauna [17,18,19,20,21,22,23]. Studies describing and comparing macrobenthic distribution patterns and the strength of connectivity linkages across different shallow-water tropical lagoon habitats are particularly limited compared to temperate systems (e.g. [15, 24,25,26,27]).

Spatial differences in the community can provide information regarding the ecological requirements of species. For example, species able to colonize multiple habitats will most likely be less sensitive to environmental changes, whereas those more directly associated with a specific habitat may be less tolerant to such changes. In general, harsher environmental conditions are observed in the intertidal area, dominated by mangrove trees, with conditions being attenuated with increasing depth, a pattern that is associated with a consistent increase in species richness and abundance [28, 29]. Indeed, mangrove habitats are characterized as unfavourable environments influenced by high salinity, high fluctuation of temperature, desiccation, and poor soil conditions (depleted oxygen) [30]. On the other hand, if undisturbed, seagrass habitats provide comparatively more stable environmental conditions through time [31,32,33] as well as protection from predators [34].

Furthermore, the knowledge about the role of temporal variability in driving macrobenthic patterns is still scarce [35,36,37,38,39,40,41]. While seasonal changes in tropical regions are comparatively less distinct than in temperate regions [42], temporal variability in benthic patterns exists [39, 43, 44]. Investigating temporal variability patterns is essential to obtain a deeper knowledge of the dynamics and processes regulating lagoon communities. Indeed, considering the current scenario of global climate change, it is critical to better understand how the distribution patterns of organisms in these habitats are changing and particularly how they respond to changes in temperature and other key environmental drivers [1, 45].

Temporal variation patterns in the abundance and composition of macrofaunal invertebrates have been intensively studied in temperate coastal ecosystems in relation to environmental variables [46,47,48,49]. Temporal variability in temperature and food availability, for example, can influence recruitment events with consequences for the structure, distribution, and abundance of the community [50,51,52]. Similarly, sediment composition, organic matter, and vegetation cover, which may vary in time, are also main drivers of observed ecological patterns. However, most of those studies have been conducted in temperate regions and, more recently, in polar habitats (e.g. [53,54,55,56,57]). Comparatively, less attention has been dedicated to sub-tropical and tropical areas (e.g. [58,59,60,61]). This is even more striking with regard to the assessment of inter-annual variability (but see [40, 62, 63]).

Assuming that harsher environmental conditions will occur towards the intertidal area (i.e. mangrove habitats), we hypothesise (i) a decrease in species richness (i.e.
the total number of species) and in the number of exclusive species from subtidal to intertidal areas, as less resistant species are progressively excluded along the environmental gradient. We also hypothesise that (ii) shallow-water seagrass meadows will harbour higher numbers of species, particularly compared with unvegetated bottoms, as a result of habitat complexity, protection from predators and food availability [64,65,66]. Likewise, we hypothesise (iii) that temporal changes will be less evident in subtidal (vegetated and unvegetated) than intertidal habitats [30, 67] and that subtidal seagrass areas will support more stable communities through time. Ecologically related management decisions require a sound knowledge of the biodiversity of the ecosystem. By assessing the variability in spatial and temporal patterns of macrobenthic organisms, we expand on the existing knowledge of tropical coastal lagoons, which are sensitive as well as ecologically and economically valuable.

Macrobenthic community composition: general characterization and connectivity among habitats

A total of 4771 invertebrates were identified within the different habitats surveyed in the lagoon (Fig. 1a), belonging to 272 operational taxonomic units (OTUs) distributed among 11 phyla, 16 classes, 40 orders, and 80 families. Annelida dominated both in abundance and number of taxa, contributing to, respectively, 51.0% and 42.0% of the total values. Sipuncula (15.0%), Arthropoda (13.0%), Mollusca (12.0%), and Echinodermata (7.0%) also contributed to the overall density. Regarding the number of species, Arthropoda (28.0%) and Mollusca (18.0%) were, along with Annelida, the phyla contributing the most to the total number of species.

Fig. 1: a Map showing the locations of the habitats in the lagoon. b Annual variability in sea surface temperature in the lagoon during the study period. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv.). SU1 and SU2, summer sampling dates 1 and 2; W1 and W2, winter sampling dates 1 and 2. The map was produced by the authors using data freely available (http://www.thematicmapping.org/downloads/world_borders.php; https://www.gadm.org/download_country_v3.html, Saudi Arabia)

At the species level, the sipunculid Phascolion (Phascolion) strombus strombus (12.2% of the total abundance) was the most abundant species, followed by the polychaetes Simplisetia erythraeensis (5.8%), Eunice indica (4.4%), Ceratocephale sp. (3.3%), Aonides sp. (2.7%), Lumbrineris sp.1 (2.7%), and Lysidice unicornis (2.6%), the amphipod Metaprotella africana (3.3%), and the bivalves Barbatia foliata (2.7%) and Paphies angusta (2.4%). Most of these taxa were found in at least four of the studied sites, except for Metaprotella africana (exclusive to S1) and Barbatia foliata, exclusive to seagrass habitats (S1 and S2). All the remaining taxa contributed to less than 2% of the total abundance. Only eight taxa (3% of the total number of taxa) spanned across the five habitats. Most of them were polychaetes (Capitellethus sp., Drillonereis sp., Euclymene spp., Lumbrineris sp.1, Lysidice unicornis, Notomastus spp.). Nemertea (und.) and the sipunculid Phascolion (Phascolion) strombus strombus were also observed across the five sites. Simplisetia erythraeensis was absent from the unvegetated site. There were 62 taxa shared between intertidal and subtidal sites, and only 18 species exclusive to the mangrove habitats (as a whole), representing 6.6% of the gamma diversity (2.2%, M2; 4.4%, M1).
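As an illustration of how shared and exclusive taxa of the kind reported above can be tallied, the short R sketch below works from a habitat-by-taxon presence/absence matrix. The matrix, object names and resulting counts are invented for demonstration and do not reproduce the study's dataset.

```r
# Minimal sketch: counting shared and exclusive taxa across habitats
# from a presence/absence matrix (habitats in rows, taxa in columns).
# All data below are made up for illustration only.
set.seed(1)
taxa     <- paste0("taxon", 1:40)
habitats <- c("M1", "M2", "S1", "S2", "Unv")

# Random presence/absence matrix (1 = taxon recorded in that habitat)
pa <- matrix(rbinom(length(habitats) * length(taxa), 1, 0.4),
             nrow = length(habitats),
             dimnames = list(habitats, taxa))

# Taxa recorded in all five habitats ("generalists")
shared_all <- colnames(pa)[colSums(pa) == nrow(pa)]

# Taxa exclusive to a single habitat ("specialists")
exclusive <- lapply(habitats, function(h) {
  colnames(pa)[pa[h, ] == 1 & colSums(pa) == 1]
})
names(exclusive) <- habitats

# Taxa shared between the intertidal (mangrove) and subtidal groups
intertidal <- colSums(pa[c("M1", "M2"), , drop = FALSE]) > 0
subtidal   <- colSums(pa[c("S1", "S2", "Unv"), , drop = FALSE]) > 0
shared_tidal <- colnames(pa)[intertidal & subtidal]

length(shared_all)                               # taxa spanning all habitats
sapply(exclusive, length)                        # exclusive taxa per habitat
length(shared_tidal)                             # taxa shared between tidal zones
round(100 * length(shared_tidal) / ncol(pa), 1)  # % of gamma diversity
```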
On the other hand, subtidal habitats showed a rather consistent percentage of exclusive species, ranging from 29.4% in S1 and 32.3% in S2 to 33.8% in the unvegetated area (S1: 12.8%; S2: 18.4%; Unvegetated: 15.1% of the gamma diversity, i.e. the total number of taxa observed in the lagoon). Both seagrass habitats showed a higher percentage of resident species (i.e. species present in over 85% of the sampling dates in a certain habitat) compared to mangrove and unvegetated areas (Table 2). In terms of the number of individuals, those taxa contributed 45.0% and 34.0% of the site's total abundance for S1 and S2, respectively. S2 showed a more balanced distribution of the four habitat preference traits analysed (i.e. resident, frequent, occasional, rare) and relatively stable numbers throughout the study period (Table 2). Regardless of the habitat, occasional species accounted for more than 12.6% of the total number of species.

Macrobenthic patterns of variability across the lagoon seascape show that the community was structured by habitat, with limited seascape ecological connectivity across the different habitats (Fig. 2a). The environmental data gathered partially explained the multivariate variability of the biological data, with the first two axes of the distance-based redundancy analysis (dbRDA) explaining more than half of the constrained variability but only 19.1% of the total variability of the biological communities. The dbRDA plot reinforces a clear separation of the communities inhabiting mangrove areas, S1, and the unvegetated habitat, whereas S2 presented affinities (i.e. higher connectivity) with either S1 or mangrove stations depending on the sampling period (Fig. 2b). Samples from the unvegetated habitat were associated with depth and percentages of medium and fine sand. Seagrass habitats (particularly S1) were separated based on the higher silt and clay (fine particles) content, whereas mangrove habitats presented a slightly higher percentage of coarse sand. Multivariate patterns suggest that the nature of the biotope itself drives the composition and structure of macrobenthic communities. The investigation of temporal variability was undertaken for each habitat separately.

Fig. 2: Multivariate analysis of the community data. a Ordination (non-metric multidimensional scaling) and classification diagram of the sampling habitats based on the Bray–Curtis dissimilarity on non-transformed data. b Distance-based redundancy analysis (dbRDA) plot based on a set of environmental variables (salinity, temperature, depth, grain size fractions (coarse sand, medium sand, fine sand, fines), organic matter: LOI (%), and chlorophyll a) on biological data from lagoon habitats; M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv). The points represent the sampling events (winter 1, winter 2, summer 1, and summer 2) for 2014 and 2015. Coarse sand and fines data are square root transformed and LOI natural-log transformed. Length and direction of vectors indicate the strength and direction of the relationship.

Temporal variability within habitats

The high variability in seagrass biomass along the study period (Fig. 3) was reflected in the biological changes but was not fully aligned with the temporal pattern in sea water temperature (Fig. 1b). When analysing the full dataset, and regardless of the diversity metric considered, S2 consistently presented the highest number of taxa (155, observed; 184.8–219.7, estimated), whereas M2 was the site with the fewest taxa.
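The observed-versus-estimated richness comparison above, and the dbRDA on environmental predictors described earlier in this section, both correspond to routines available in the vegan R package cited in the Methods. The sketch below indicates how such a workflow might be assembled; the community matrix, environmental table, variable names and model formula are simulated placeholders rather than the study's data or exact specification.

```r
# Minimal sketch of the richness estimation and constrained ordination steps,
# using simulated placeholders for the community and environmental data.
library(vegan)

set.seed(42)
n_samples <- 20
n_taxa    <- 60

# Simulated abundance matrix (samples x taxa) and environmental table
comm <- matrix(rpois(n_samples * n_taxa, lambda = 1), nrow = n_samples)
env  <- data.frame(
  depth       = runif(n_samples, 0.5, 10),
  fines       = runif(n_samples, 0, 40),   # % silt and clay
  coarse_sand = runif(n_samples, 0, 30),
  loi         = runif(n_samples, 0.5, 8),  # % organic matter
  chla        = runif(n_samples, 0.1, 2)
)

# Observed vs estimated richness (Chao, first-order jackknife, bootstrap)
specpool(comm)

# Distance-based redundancy analysis on Bray-Curtis dissimilarities, with
# skewed predictors transformed as in the paper (square root, natural log)
mod <- dbrda(comm ~ depth + sqrt(fines) + sqrt(coarse_sand) +
               log(loi) + chla, data = env, distance = "bray")

mod                                            # inertia partition: constrained vs total
anova(mod, by = "margin", permutations = 999)  # marginal test for each variable
```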
Density was also highest at S2 (801.9 ind. m−2) and lowest in the unvegetated area (388.8 ind. m−2) (Table 1).

Fig. 3: Biomass of seagrass plants along the study period (2014–2015) in both seagrass stations. SU1 and SU2, summer sampling dates 1 and 2; W1 and W2, winter sampling dates 1 and 2. S1 and S2, seagrass sites

Table 1 Total number of Operational Taxonomic Units (OTUs), estimated number of taxa based on Chao, Jacknife (1st order) and Bootstrap, and average density (ind. m−2) per habitat. M1 and M2, mangrove; S1 and S2, seagrass

In general, a higher number of OTUs was observed in the subtidal habitats than in the intertidal mangrove areas (Fig. 4a), with M2 showing a consistently depressed number of taxa across all sampling dates. Abundance was also generally higher within seagrass meadows (Fig. 4b). M2 also presented the lowest Shannon–Wiener diversity whereas, in general, higher values were observed at S2 or at the unvegetated habitat (Fig. 4c).

Fig. 4: Alpha-diversity metrics per habitat and over time. a Number of Operational Taxonomic Units (OTUs), b density, and c Shannon–Wiener diversity. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv)

Biological similarity within each habitat was markedly low, ranging from 14% (M2) to 25% (S1) (Table 2). These two habitats also showed higher dominance, with only four (M2) and six (S1) species contributing to over 62% of the habitat's abundance, respectively. In the remaining habitats, a minimum of 13 taxa was needed to reach the same level of abundance (Table 2). Except for S1, where none of the dominant taxa was a polychaete, this group dominated all the other habitats. S1 was dominated by a sipunculid (Phascolion (Phascolion) strombus strombus), two bivalves (Barbatia foliata and Cardiolucina semperiana), one amphipod (Metaprotella africana) and two echinoderms (Aquilonastra burtoni and Amphioplus cyrtacanthus).

Table 2 Cumulative percentage of the taxa (Cum %) contributing to more than 60% of each habitat's total abundance

Temporal variation in the structure of macrobenthic assemblages within each habitat, examined on the basis of the Bray–Curtis and Jaccard resemblance measures, indicated different patterns depending on the habitat under analysis. Major differences were not detected between metrics and therefore only plots for Bray–Curtis matrices are presented (Fig. 5). The results of the Permutational Multivariate Analysis of Variance (PERMANOVA) confirmed different temporal trajectories in the analysed habitats (Table 3). Both resemblance metrics applied to the M1 and S1 datasets showed a significant interaction of the main factors (Year × Season). For M1, the pair-wise tests indicated significant inter-annual differences in both winter and summer. For S1, inter-annual differences were only detected in winter. With regard to seasonal differences, S1 presented significant variability in both years (except for composition (Jaccard) in 2015), whereas in M1 differences were only detected in 2014 (Table 3). Macrobenthic communities at M2 and S2 showed significant inter-annual variability (except for S2 with presence/absence data) (Table 3). Finally, the unvegetated area showed significant and independent seasonal and inter-annual variability (Table 3).

Fig. 5: Non-metric multidimensional scaling (nMDS) plots based on Bray–Curtis dissimilarity matrices (untransformed data) showing temporal variation in the structure of macrobenthic assemblages within each habitat. M1 and M2, mangrove; S1 and S2, seagrass; and unvegetated area (Unv)

Table 3 Two-way PERMANOVA model and pair-wise tests based on Bray–Curtis and Jaccard matrices within habitats among seasons and years (Year × Season interaction; Yr × Se)
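The temporal comparisons summarized in Fig. 5 and Table 3 combine an nMDS ordination with a two-way PERMANOVA. The R sketch below sets up the same kind of test on simulated data with the vegan package; adonis2() is used here as a stand-in for the PERMANOVA routine (the article does not state which implementation was used), and all object names and values are invented.

```r
# Minimal sketch of the per-habitat temporal analysis: nMDS ordination plus a
# two-factor PERMANOVA testing the Year x Season interaction. Simulated data only.
library(vegan)

set.seed(7)
n    <- 16                                   # e.g. 2 years x 2 seasons x 2 dates x 2 replicates
comm <- matrix(rpois(n * 50, lambda = 1), nrow = n)
design <- data.frame(
  Year   = factor(rep(c("2014", "2015"), each = n / 2)),
  Season = factor(rep(c("Winter", "Summer"), times = n / 2))
)

# Bray-Curtis (structure) and Jaccard (presence/absence composition) matrices
bray <- vegdist(comm, method = "bray")
jacc <- vegdist(comm, method = "jaccard", binary = TRUE)

# nMDS on untransformed abundances, as in the plots above
ord <- metaMDS(comm, distance = "bray", k = 2, trymax = 50,
               autotransform = FALSE, trace = FALSE)
plot(ord, type = "t")

# Two-factor PERMANOVA with the Year x Season interaction
adonis2(bray ~ Year * Season, data = design, permutations = 999)
adonis2(jacc ~ Year * Season, data = design, permutations = 999)
```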
This study investigated the distribution patterns of macrobenthic communities inhabiting adjacent shallow-water habitats in a tropical coastal lagoon, with particular focus on how they are connected and how communities within each habitat vary over time. Even though ecological seascape connectivity has been previously demonstrated, particularly for fish, information on the benthic dynamics in tropical lagoons is still scarce. The Al Qadimah lagoon, like other tropical lagoons, encompasses a wide range of habitats including both hard (not addressed here) and soft substrates. Within the latter, changes in the vegetation cover result in a mosaic of habitats with different sedimentary properties that will determine the structure of local macrobenthic communities [68]. Here, we observed a clear zonation of the benthic communities, driven by habitat-related factors acting at varying spatial scales [69]. The present results also provided new insights into the temporal variability (seasonal and inter-annual) of different lagoon shallow-water habitats in a tropical seascape.

Uniqueness of lagoon habitats within the seascape

A clear pattern of habitat-dependent association was observed, with the different habitats harbouring distinct macrobenthic assemblages. The high spatial variability of macrofaunal patterns is most likely linked to the heterogeneity of the seascape and to the high contribution of rare species to the overall abundance. Recent studies showed that biological variability is driven by the relatively high contribution of rare and common species, with rare species playing a major role in the temporal patterns as a result of their vulnerability to fluctuations in environmental conditions (e.g. [70, 71]).

Subtidal habitats harboured 70% of the total number of species. Overall, seagrass habitats showed the highest number of taxa, which agrees with previous studies [65, 68, 72,73,74]. Variability was, however, high and significant differences within the subtidal area were not detected. The structural complexity provided by the seagrass canopy and the developed rhizome and root systems that contribute to sediment stability may favour the development of diverse communities [70, 75, 76]. In the tropics, the canopy can play an additional critical role by providing shade that can attenuate the effects of sea water temperature [8], which in the study region can reach over 32 °C in the summer. Yet, we found that denser seagrass meadows are not always the most favourable habitats for several invertebrates, even though this result may be site-dependent [77,78,79,80]. Indeed, the site displaying the highest variability in cover during the study period showed the highest number of taxa, density of individuals, and number of exclusive species (32.3% of the site's total number of species). Dense vegetation can physically obstruct the movement of large burrowing macroinvertebrates [68, 81]. Also, despite the increased aeration within the sediment due to the developed root system [82], the decomposition of the high amounts of organic matter will require increased oxygen consumption and result in anoxic regions and accumulation of toxic products [83, 84].
Therefore, vegetated areas with comparatively lower cover might harbour higher species numbers as a result of species avoiding toxic anoxic conditions in densely covered areas [85].

Within mangrove habitats, species encounter harsh physical environmental conditions (e.g. high salinity, hypoxia, desiccation, high concentration of toxins) and, in general, nitrogen limitation (C/N ratio often > 100; although mangroves in the Red Sea are carbon limited compared to other locations [86]) due to the low nutritional value of the main source of organic matter, i.e. leaf litter [25]. Under these conditions, populations of a few tolerant/opportunistic species dominate the macrobenthic communities [25, 87]. In the present study, the deepest mangrove area (M2) was dominated by only four species: the polychaetes Simplisetia erythraeensis, Ceratocephale sp. and Paucibranchia adenensis, and the bivalve Paphies angusta, which together contributed to over 60% of the total abundance. In the shallowest mangrove area, despite the dominance of polychaetes, the sipunculid (Phascolion (Phascolion) strombus strombus) and some decapods (Diogenes costatus and Thalamita poissonii) were also co-dominant. Decapods are critical players for the ecosystem functioning of these habitats, processing leaf litter and oxygenating the sediment through their burrows [88, 89], and therefore their dominance in the habitat is not surprising. As observed elsewhere [90, 91], mangrove habitats showed the lowest number of species compared to nearby seagrass and unvegetated substrates.

Connectedness and stability at the scale of the seascape

In the present study, nearby seagrass meadows differed in cover and depth location, which might have resulted in the limited similarity in faunal communities (both habitats shared 35.0% of the total number of species). Higher similarities (~ higher seascape connectivity) were detected among subtidal habitats than between those and mangroves (intertidal habitats). Nevertheless, 62 taxa, representing 22.8% of the gamma diversity, were shared between intertidal and subtidal habitats, suggesting that several species may utilize contrasting yet adjacent habitats within the lagoon seascape. Despite the fact that the overlap of species across the five habitats is lower (eight taxa; 2.9% of the total number of taxa) than previously reported [92, 93], the present study suggests connectivity between intertidal and subtidal areas and the need for integrated management measures. The results obtained may reflect the low hydrodynamic conditions present, but information on the hydrographic patterns is non-existent. The effect of tides can result in the displacement of specimens through water movement [94] and, depending on their height, can also expose organisms to desiccation for variable periods of time, which may hinder the distribution of most of the species toward the intertidal area. Notably, when analysed together, mangrove habitats contributed 6.6% (M1, 4.4%; M2, 2.2%) of the gamma diversity, contrasting with the unvegetated subtidal area and the seagrass meadows that supported, respectively, 15.1% and 31.3% (S1, 12.9%; S2, 18.4%). Mangrove forests can produce relatively large amounts of organic matter through the conversion of leaf litter into detritus [64], which is later exported to nearby habitats [95,96,97]. Therefore, the proximity of the mangrove stands to shallow-water seagrass meadows will most likely contribute to the higher biodiversity and, particularly, the higher density observed within seagrasses.
The populations of suspension-feeders, such as Barbatia foliata, which was dominant in the seagrass meadow (S1), support the idea of a higher availability of organic suspended particulate matter derived from, among others, nearby mangrove canopies, and this higher availability will also support more resident organisms [68, 99]. Despite the high temporal variability observed in all habitats, highlighted by the dissimilarity indices, seagrass habitats showed comparatively higher stability, with the lowest values of inter-annual variability, similar to previous studies in temperate areas [8, 98]. These habitats also supported the highest number of resident species (i.e. those present in over 85% of the sampling periods). At the lagoon entrance, the exclusive presence of Schizaster gibberulus, a sea urchin previously associated with the nearshore coastal biotope in the region [16], suggests that the unvegetated area may be located along a corridor connecting offshore and lagoon communities, with patterns likely dependent on the hydrodynamic processes [99]. Its position between the lagoon and the open coastal water may also explain the high number of species observed (121), with a large proportion being exclusively associated with this habitat (33.9%). It is worth noting that, given the generally low density observed in the Red Sea [16, 100], future studies will need to increase replication across multiple spatial scales to fully understand the dynamics of benthic macroinvertebrates under low nutrient, high temperature, and high salinity conditions. Therefore, conclusions related to abundance and diversity should be interpreted with caution.

The present findings reinforce the need for an integrated understanding of shallow-water habitats from a seascape perspective, as opposed to a fragmented analysis of isolated habitats [21, 101, 102]. Whereas the latter may be relevant when looking at particular species, the contribution of each habitat to the dynamics of the whole macrobenthic assemblage is relevant and should not be disregarded by managers when aiming for marine biodiversity conservation. Indeed, in tropical regions, seagrass beds and mangroves have been reported as key nursery areas for several reef fishes such as parrotfishes (Labridae, Scarini), grunts (Haemulidae) and snappers (Lutjanidae) [103,104,105,106] that rely on the macrobenthos as food resources. Large-scale migrations (over 30 km) by juvenile snappers between inshore nursery habitats and reefs in the central Red Sea have been reported [22]. Also, mangrove forests have been linked to enhanced biomass and biodiversity of coral reef fishes [18, 21, 104, 107, 108]. Sustained connectivity of the habitats may enhance the resilience of coral populations, allowing them to recover after disturbance [107]. Therefore, disturbing the corridors connecting coral reefs with other inshore habitats may even have consequences for reef conservation at a local scale.

Overall, the present study confirmed a decreasing gradient in the total number of species and in the number of exclusive species towards the mangrove habitats. It also supports the role of seagrass habitat complexity in promoting the diversity and density of organisms. Nevertheless, high and stable seagrass cover does not necessarily result in the highest biodiversity levels, although the presence of these plants plays an essential role in the biodiversity of coastal lagoons.
Seagrass habitats, in contrast to mangrove forests and the unvegetated area, showed lower inter-annual variability and a higher number of resident species, suggesting more stable communities. The current findings highlight habitat-structured patterns and persistent patchiness, evidenced by a limited number of overlapping species (dominance of habitat specialists over generalists) within the seascape. This is particularly relevant considering the proximity of the analysed habitats, but may result from the low dominance levels compared to temperate regions [92, 98, 109]. Nevertheless, 22.8% of the gamma diversity was represented by taxa spanning subtidal and intertidal habitats. Hence, holistic (i.e. interconnected) seascape management approaches, rather than those focusing on single habitats, should be prioritized to protect biodiversity and fisheries [22, 110, 111].

Study area and sampling design

The present study was carried out in the Al Qadimah lagoon (22° 22′ 39.3″ N, 39° 07′ 47.2″ E) located in the central region of the Saudi Arabian Red Sea (Fig. 1a). This shallow lagoon (average depth 2.19 m) has an approximate area of 14 km2 and is not impacted by direct anthropogenic disturbances typical of other coastal lagoons (e.g. freshwater or sewage discharges, fisheries, habitat destruction from coastal development). It is, however, situated between two urbanized areas which are increasing in size (King Abdullah University of Science and Technology, 7000 inhabitants; King Abdullah Economic City, currently 5000 inhabitants but expected to reach 50,000 in the near future), neither of which is directly connected with the lagoon. Hence, it offers a rare opportunity to study the natural roles of environmental drivers in shaping macrobenthic communities inhabiting such critical wetlands. Scattered along the extent of its margins, well-developed mangrove stands of Avicennia marina are observed. The bottom of the lagoon, particularly in the inner areas, is characterized by more or less fragmented seagrass meadows. To depths of approximately 50 cm, Cymodocea rotundata is the dominant species, with smaller patches of Cymodocea serrulata also being present. Below this depth, seagrass meadows are mainly characterized by mono-specific stands of Enhalus acoroides down to 2 m depth. Towards the sea, unvegetated bottoms with either sponges mixed with coral rubble or sand progressively replace seagrass meadows.

In the Red Sea, there are two marked seasons (Fig. 1b): winter (November–April) and summer (May–October). In order to investigate inter-annual and seasonal changes in macrobenthic patterns, samples were collected in two different periods in winter (January; March) and summer (June; September) of 2014 and 2015. Five permanent soft-sediment habitats typical of tropical coastal lagoons were selected: 1. upper mangrove area (M1); 2. deeper mangrove area (M2); 3. shallow seagrass meadow (S1, mixed meadows of Cymodocea serrulata interspersed with Cymodocea rotundata; relatively high cover all year round); 4. deeper seagrass meadow (S2, monospecific stands of Enhalus acoroides with high variability in the vegetation cover throughout the study period); and 5. unvegetated soft-sediments (Fig. 1a). The unvegetated sandy substrate was located between 8 and 10 m depth. Due to the widespread distribution of seagrasses and mangroves, and in order to minimize the direct influence of those habitats on the colonization patterns of unvegetated areas, this site was located at the entrance of the lagoon.
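The sampling design described above can be summarised as a small factor table. The R sketch below lays it out with the replicate numbers stated in the text (two replicates per site and date in 2014, three in 2015); it is only a bookkeeping illustration, not the authors' actual sample registry.

```r
# Minimal sketch of the sampling design as a factor table:
# 5 habitats x 2 seasons x 2 sampling dates per season x 2 years.
design <- expand.grid(
  Habitat = c("M1", "M2", "S1", "S2", "Unv"),
  Season  = c("Winter", "Summer"),
  Date    = c(1, 2),
  Year    = c(2014, 2015)
)
design$Replicates <- ifelse(design$Year == 2014, 2, 3)

nrow(design)            # 40 habitat x season x date x year combinations
sum(design$Replicates)  # 100 macrofauna samples implied by this layout
```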
Sampling strategy

At each habitat and sampling period, conductivity, temperature, and depth (CTD) casts were carried out with a multiparameter probe (OCEAN SEVEN 316 Plus and 305 Plus). The CTD casts also recorded oxygen saturation in the water column. Water samples for the analysis of chlorophyll a (chl a) were collected using a Niskin bottle at each station (2 L per station). Sediment samples were collected using a 0.1 m2 Van Veen grab in the seagrass meadows and the unvegetated area (subtidal stations), whereas in the mangrove habitats (intertidal), samples were collected using hand corers (three 10 cm i.d. cores pooled to make one replicate; total area per replicate ~ 0.024 m2). In 2014, two replicates at each site and sampling date were taken for the study of macrobenthic communities, with additional samples being collected for the study of environmental variables (grain particle size distributions and organic matter content). In 2015, the same approach was followed, increasing the number of replicates for the study of macrobenthic communities to three. Macrobenthic samples were sieved through 1 mm mesh screens and preserved in 96% ethanol.

Laboratory analyses

In order to estimate the primary production in the sampling area, the concentration of chl a was quantified by fluorescence using EPA method 445.0 [112]. Water samples were filtered using GF/F filters as soon as they arrived at the laboratory. The filters were then preserved at −80 °C until extraction of the pigments. For each extract, 10 ml of 90% acetone were used and left for 24 h in cold and dark conditions; the whole procedure was undertaken in low light conditions to minimize degradation. A Turner Trilogy® fluorometer (Turner Designs) with an acidic module was used to quantify the chl a content. The degradation of chlorophyll a to phaeophytin was accomplished by acidifying the sample with 60 µl of 0.1 N HCl.

Sediment samples were sorted after all the vegetation associated with the sediment was removed. Organisms were identified to the species level whenever possible. Vegetation biomass (seagrass leaves, roots, and mangrove material) was quantified per replicate. Grain particle-size distribution was quantified after initial wet sieving of the samples (63 μm mesh) to separate the silt and clay fraction from the sandy fractions and gravel. The retained fractions were dried at 80 °C for 24–48 h. The dried sandy and gravel sample was then mechanically sieved using a column of sieves to separate the sandy fractions and the gravel as follows: < 63 μm, silt–clay; 63–125 μm, fine sand; 250–500 μm, medium sand; 1000–2000 μm, coarse sand; > 2000 μm, gravel. The organic content of the sediments was determined by loss on ignition (LOI). Sediments were dried for 24–48 h at 60 °C and then the samples were placed in a muffle furnace at 450 °C for 4 h. After cooling in a desiccator for 30 min, samples were weighed and the LOI was calculated using the following equation [113]:

$$\text{LOI} \, (\%) = \frac{W_i - W_f}{W_i} \times 100$$

where Wi is the initial weight of the dried sediment subsample and Wf is the final weight after ignition.

General patterns

Macrobenthic patterns were analysed through a combination of univariate and multivariate techniques. Several univariate metrics were calculated, including the total number of taxa (S, species richness), density (ind. m−2), and Shannon–Wiener diversity (H′).
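To make the loss-on-ignition equation and the univariate metrics listed above concrete, the R sketch below implements the LOI formula and computes richness, Shannon–Wiener diversity and density with the vegan package. The weights and community matrix are example values only; the 0.1 m2 sampled area corresponds to the Van Veen grab mentioned above.

```r
# Minimal sketch of the LOI calculation and the univariate community metrics.
library(vegan)

# Loss on ignition, LOI (%) = (Wi - Wf) / Wi * 100
loi <- function(w_initial, w_final) (w_initial - w_final) / w_initial * 100
loi(w_initial = 25.40, w_final = 24.65)      # ~2.95% organic matter (example weights)

# Univariate metrics from a replicate x taxa abundance matrix (made-up data)
set.seed(3)
comm <- matrix(rpois(5 * 30, lambda = 1), nrow = 5)   # 5 replicates, 30 taxa

S <- specnumber(comm)                        # number of taxa per replicate
H <- diversity(comm, index = "shannon")      # Shannon-Wiener diversity

# Density: individuals per m2, given the area sampled by each replicate
sampled_area <- 0.1                          # m2, e.g. the Van Veen grab
dens_ind_m2  <- rowSums(comm) / sampled_area

data.frame(S, H, dens_ind_m2)
```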
Considering the different sampling methods and the dependency of species richness on sample size [114], estimates of species diversity were also calculated and compared with S. The nonparametric species richness estimators used (Chao 1, first-order Jacknife, and Bootstrap) all follow an asymptotic approach to estimate undetected species richness. These estimators are commonly used in ecological studies because they are simple, intuitive, relatively easy to use and perform reasonably well [115]. The bias-corrected form of the Chao 1 estimator [114, 116] uses the number of singletons and doubletons to estimate the lower bound of species richness. The first-order Jacknife estimator [117] assumes that the number of species that are missed equals the number that were seen only once (singletons). The Bootstrap estimator is based on the assumption that, if the same data are resampled with replacement, the number of missing species after resampling will be similar to the number missed originally [117]. All estimators were calculated in the open source software R [118] using the function "specpool" from the "vegan" package [119]. Abundance data were used for the calculations of all estimators. In order to have a balanced number of replicates, the analyses were conducted for two replicates, with those collected in 2015 being randomly selected. A preliminary analysis showed that the same general patterns in composition and alpha-diversity were obtained for the 2014 and 2015 datasets.

To visualize multivariate patterns of abundance in macrobenthic communities within the seascape, non-metric multidimensional scaling (nMDS) was applied based on Bray–Curtis dissimilarities. Given the differences among habitats for some dominant species, when comparing habitats (i.e. the full dataset), Bray–Curtis dissimilarity matrices were calculated using untransformed abundance data. Separate nMDS plots were generated for each of the sites for a better visualization of the temporal variability. These analyses were also based on untransformed data. Within each site, significant variability in the multivariate patterns over time was initially analysed according to a three-factor design (Year; Season; Date, nested within Season) using Permutational Multivariate Analysis of Variance (PERMANOVA). As the factor "Date" was not found to be significant, and to increase the power of the analysis, a two-factor PERMANOVA was applied. Whenever significant differences in the interaction term were detected (i.e. Year × Season), pair-wise tests were conducted.

Connectedness within the seascape and stability patterns over time

A preliminary investigation of the patterns of variability across the seascape was carried out to identify generalist versus specialist taxa, i.e. those that span multiple habitats versus those that are particularly associated with a specific habitat, respectively. We aimed to characterize the main differences in the community patterns in terms of shared and exclusive species that could determine the cause of the connectivity across the lagoon. This analysis was conducted on the whole dataset, disregarding the seasonal and annual changes, as our main question was related to the constancy of spatial changes in different habitats. Finally, we analysed the frequency of occurrence of species in each habitat during the study period. Species were classified based on Habitat Preference Trait as follows: (i) resident, present in over 85% of the sampling dates (i.e.
eight events); (ii) frequent, observed between 50% and 85% of the sampling dates; (iii) occasional, presence registered between 25% and 50% of the sampling occasions; (iv) rare, observed in less than or equal to 25% of the sampling dates; (v) seasonal, only observed in one season but in both years. Community stability was also examined over the sampling period within each habitat based on the Bray–Curtis (community structure) and Jaccard (presence/absence; composition) indices. Within each habitat, variability between all pairwise comparisons among terms of interest (e.g. within and between seasons; within and between years) was analysed. We established that low levels of similarity are related to high variability in the macrobenthic communities over time, whereas high similarity is indicative of more stable communities.

Relationships between environmental variables and assemblage structure

Distance-based redundancy analysis (dbRDA) was used to assess the relationship between each environmental variable and the variation in the community structure (given by the direction and length of vectors for each variable). The variables used for the analysis were salinity, temperature, depth, grain size fractions, organic matter content (% LOI), and chl a. Three of the variables were transformed to reduce skewness, namely the fines and coarse sand fractions of the sediment (square root) and organic matter content (natural log). Marginal tests were used to show the significance of each variable individually to the model, and sequential tests to identify the best subset of explanatory variables explaining the biological patterns.

Abbreviations

C/N: Carbon to Nitrogen ratio
Chl a: Chlorophyll a
CTD: Conductivity, temperature, and depth casts
dbRDA: Distance-based redundancy analysis
H′: Shannon–Wiener
LOI: Loss on ignition
M1: Upper mangrove area
M2: Deeper mangrove area
MDS: Non-metric multidimensional scaling
OTUs: Operational taxonomic units
PERMANOVA: Permutational Multivariate Analysis of Variance
S1: Shallow seagrass meadow
S2: Deeper seagrass meadow
Unv: Unvegetated soft-sediments

References

Harley CD, Randall Hughes A, Hultgren KM, Miner BG, Sorte CJ, Thornber CS, et al. The impacts of climate change in coastal marine systems. Ecol Lett. 2006;9:228–41.
Waycott M, Duarte CM, Carruthers TJB, Orth RJ, Dennison WC, Olyarnik S, et al. Accelerating loss of seagrasses across the globe threatens coastal ecosystems. PNAS. 2009;106:12377–81.
Camacho-Valdez V, Ruiz-Luna A, Ghermandi A, Nunes PA. Valuation of ecosystem services provided by coastal wetlands in northwest Mexico. Ocean Coast Manage. 2013;78:1–11.
Carvalho S, Moura A, Gaspar MB, Pereira P, da Fonseca LC, Falcão M, et al. Spatial and inter-annual variability of the macrobenthic communities within a coastal lagoon (Óbidos lagoon) and its relationship with environmental parameters. Acta Oecol. 2005;27:143–59.
Como S, Magni P. Temporal changes of a macrobenthic assemblage in harsh lagoon sediments. Estuar Coast Shelf Sci. 2009;83:638–46.
Kennish MJ, Paerl HW. Coastal lagoons: critical habitats of environmental change. CRC Marine Science: CRC Press; 2010.
Pereira P, de Pablo H, Carvalho S, Vale C, Pacheco M. Daily availability of nutrients and metals in a eutrophic meso-tidal coastal lagoon (Óbidos lagoon, Portugal). Mar Pollut Bull. 2010;60:1868–72.
Tagliapietra D, Pessa G, Cornello M, Zitelli A, Magni P. Temporal distribution of intertidal macrozoobenthic assemblages in a Nanozostera noltii-dominated area (Lagoon of Venice). Mar Environ Res.
2016;114:31–9. Tagliapietra D, Pavan M, Wagner C. Macrobenthic Community Changes Related to Eutrophication in Palude della Rosa (Venetian Lagoon, Italy). Estuar Coast Shelf Sci. 1998;47:217–26. Magni P, Micheletti S, Casu D, Floris A, De Falco G, Castelli A. Macrofaunal community structure and distribution in a muddy coastal lagoon. Chem Ecol. 2004;20:397–409. Como S, Magni P, Casu D, Floris A, Giordani G, Natale S, et al. Sediment characteristics and macrofauna distribution along a human-modified inlet in the Gulf of Oristano (Sardinia, Italy). Mar Pollut Bull. 2007;54:733–44. Newton A, Icely J, Cristina S, Brito A, Cardoso AC, Colijn F, et al. An overview of ecological status, vulnerability and future perspectives of European large shallow, semi-enclosed coastal systems, lagoons and transitional waters. Estuar Coast Shelf Sci. 2014;140:95–122. Nagelkerken I. Evaluation of nursery function of mangroves and seagrass beds for tropical decapods and reef fishes: Patterns and underlying mechanisms. In: Nagelkerken I, editor. Ecological connectivity among tropical coastal ecosystems. Dordrecht: Springer; 2009. p. 357–99. Pusceddu A, Gambi C, Corinaldesi C, Scopa M, Danovaro R. Relationships between meiofaunal biodiversity and prokaryotic heterotrophic production in different tropical habitats and oceanic regions. PLoS ONE. 2014;9:e91056. Navarro-Barranco C, Guerra-García JM. Spatial distribution of crustaceans associated with shallow soft-bottom habitats in a coral reef lagoon. Mar Ecol. 2016;37:77–87. Alsaffar Z, Cúrdia J, Borja A, Irigoien X, Carvalho S. Consistent variability in beta-diversity patterns contrasts with changes in alpha-diversity along an onshore to offshore environmental gradient: the case of Red Sea soft-bottom macrobenthos. Mar Biodivers. 2019;49:247–62. Nagelkerken I, Van Der Velde G. Connectivity between coastal habitats of two oceanic Caribbean islands as inferred from ontogenetic shifts by coral reef fishes. Gulf Caribb Res. 2003;14:43–59. Mumby PJ, Edwards AJ, Arias-González JE, Lindeman KC, Blackwell PG, Gall A, et al. Mangroves enhance the biomass of coral reef fish communities in the Caribbean. Nature. 2004;427:533. Mumby PJ. Connectivity of reef fish between mangroves and coral reefs: algorithms for the design of marine reserves at seascape scales. Biol Conserv. 2006;128:215–22. Dorenbosch M, Verberk W, Nagelkerken I, Van der Velde G. Influence of habitat configuration on connectivity between fish assemblages of Caribbean seagrass beds, mangroves and coral reefs. Mar Ecol Prog Ser. 2007;334:103–16. Berkström C, Gullström M, Lindborg R, Mwandya AW, Yahya SA, Kautsky N, Nyström M. Exploring 'knowns' and 'unknowns' in tropical seascape connectivity with insights from East African coral reefs. Estuar Coast Shelf Sci. 2012;107:1–21. McMahon KW, Berumen ML Thorrold SR. Linking habitat mosaics and connectivity in a coral reef seascape. PNAS. 2012;38:15372–15376. Unsworth RK, De León PS, Garrard SL, Jompa J, Smith DJ, Bell JJ. High connectivity of Indo-Pacific seagrass fish assemblages with mangrove and coral reef habitats. Mar Ecol Prog Ser. 2008;353:213–24. Sheridan P. Benthos of adjacent mangrove, seagrass and non-vegetated habitats in Rookery Bay, Florida, USA. Estuar Coast Shelf Sci. 1997;44:455–69. Lee SY. Mangrove macrobenthos: assemblages, services, and linkages. J Sea Res. 2008;59:16–29. Kathiresan K, Alikunhi NM. Tropical coastal ecosystems: rarely explored for their interaction. Ecologia. 2011;1:1–22. Skilleter GA, Loneragan NR, Olds A, Zharikov Y, Cameron B. 
Connectivity between seagrass and mangroves influences nekton assemblages using nearshore habitats. Mar Ecol Prog Ser. 2017;573:25–43. Kristensen E, Bouillon S, Dittmar T, Marchand C. Organic carbon dynamics in mangrove ecosystems: a review. Aquat Bot. 2008;89:201–19. Dissanayake N, Chandrasekara U. Effects of mangrove zonation and the physicochemical parameters of soil on the distribution of macrobenthic fauna in Kadolkele mangrove forest, a tropical mangrove forest in Sri Lanka. Adv Ecol. 2014;564056. Amarasinghe MD. Misconceptions of mangrove ecology and their implications on conservation and management. Sri Lanka J Aquat Sci. 2018;23:29–35. Short FT, Wyllie-Echeverria S. Natural and human-induced disturbance of seagrasses. Environ Conserv. 1996;23:17–27. Mateo MA, Romero J, Pérez M, Littler MM, Littler DS. Dynamics of millenary organic deposits resulting from the growth of the Mediterranean seagrass Posidonia oceanica. Estuar Coast Shelf Sci. 1997;44:103–10. Reusch TB, Boström C, Stam WT, Olsen JL. An ancient eelgrass clone in the Baltic. Mar Ecol Prog Ser. 1999;183:301–4. Leopardas V, Uy W, Nakaoka M. Benthic macrofaunal assemblages in multispecific seagrass meadows of the southern Philippines: variation among vegetation dominated by different seagrass species. J Exp Mar Biol Ecol. 2014;457:71–80. Paiva PC. Spatial and temporal variation of a nearshore benthic community in southern Brazil: implications for the design of monitoring programs. Estuar Coast Shelf Sci. 2001;52:423–33. Taddei D, Frouin P. Short-term temporal variability of macrofauna reef communities (Reunion Island, Indian Ocean). In: Proceedings of 10th International Coral Reef Symposium (ICRS). Japanese Coral Reef Society, Okinawa, Japan; 2004. p. 52–57. Bigot L, Conand C, Amouroux JM, Frouin P, Bruggemann H, Grémare A. Effects of industrial outfalls on tropical macrobenthic sediment communities in Reunion Island (Southwest Indian Ocean). Mar Pollut Bull. 2006;52:865–80. Pech D, Ardisson P-L, Hernández-Guevara NA. Benthic community response to habitat variation: a case of study from a natural protected area, the Celestun coastal lagoon. Cont Shelf Res. 2007;27:2523–33. Lamptey E, Armah AK. Factors affecting macrobenthic fauna in a tropical hypersaline coastal lagoon in Ghana West Africa. Estuar Coast. 2008;31:1006–19. Magni P, Draredja B, Melouah K, Como S. Patterns of seasonal variation in lagoonal macrozoobenthic assemblages (Mellah lagoon, Algeria). Mar Environ Res. 2015;109:168–76. Belal AAM, El-Sawy MA, Dar MA. The effect of water quality on the distribution of macro-benthic fauna in Western Lagoon and Timsah Lake Egypt. I. Egypt J Aquat Res. 2016;42:437–48. O'Reilly CM. Seasonal dynamics of periphyton in a large tropical lake. Hydrobiologia. 2006;553:293–301. Posey MH, Alphin TD, Banner S, Vose F, Lindberg W. Temporal variability, diversity and guild structure of a benthic community in the northeastern Gulf of Mexico. B Mar Sci. 1998;63:143–55. Rosa LC, Bemvenuti CE. Variabilidad temporal de la macrofauna estuarina de la Laguna de los Patos Brasil. Rev Biol Mar Oceanog. 2006;41:1–9. Norderhaug KM, Gundersen H, Pedersen A, Moy F, Green N, Walday MG, et al. Effects of climate and eutrophication on the diversity of hard bottom communities on the Skagerrak coast 1990–2010. Mar Ecol Prog Ser. 2015;530:29–46. Ysebaert T, Herman PM. Spatial and temporal variation in benthic macrofauna and relationships with environmental variables in an estuarine, intertidal soft-sediment environment. Mar Ecol Prog Ser. 2002;244:105–24. 
Biles CL, Solan M, Isaksson I, Paterson DM, Emes C, Raffaelli DG. Flow modifies the effect of biodiversity on ecosystem functioning: an in situ study of estuarine sediments. J Exp Mar Biol Ecol. 2003;285:165–77. Giberto DA, Bremec CS, Acha EM, Mianzan H. Large-scale spatial patterns of benthic assemblages in the SW Atlantic: the Rıo de la Plata estuary and adjacent shelf waters. Estuar Coast Shelf Sci. 2004;61:1–13. Shojaei MG, Gutow L, Dannheim J, Rachor E, Schröder A, Brey T. Common trends in German Bight benthic macrofaunal communities: assessing temporal variability and the relative importance of environmental variables. J Sea Res. 2016;107:25–33. Desroy N, Retière C. Long-term changes in muddy fine sand community of the Rance Basin: role of recruitment. J Mar Biol Assoc UK. 2001;81:553–64. Reiss H, Kröncke I. Seasonal variability of infaunal community structures in three areas of the North Sea under different environmental conditions. Estuar Coast Shelf Sci. 2005;65:253–74. Van Hoey G, Vincx M, Degraer S. Temporal variability in the Abra alba community determined by global and local events. J Sea Res. 2007;58:144–55. Wlodarska-Kowalczuk M, Pearson TH. Soft-bottom macrobenthic faunal associations and factors affecting species distributions in an Arctic glacial fjord (Kongsfjord, Spitsbergen). Polar Biol. 2004;27:155–67. Wlodarska-Kowalczuk M, Pearson TH, Kendall MA. Benthic response to chronic natural physical disturbance by glacial sedimentation in an Arctic fjord. Mar Ecol Prog Ser. 2005;303:31. Mincks SL, Smith CR. Recruitment patterns in Antarctic Peninsula shelf sediments: evidence of decoupling from seasonal phytodetritus pulses. Polar Biol. 2007;30:587–600. Glover AG, Smith CR, Mincks SL, Sumida PY, Thurber AR. Macrofaunal abundance and composition on the West Antarctic Peninsula continental shelf: evidence for a sediment 'food bank'and similarities to deep-sea habitats. Deep-Sea Res Pt. 2008;55:2491–501. Pawłowska J, Włodarska-Kowalczuk M, Zajączkowski M, Nygård H, Berge J. Seasonal variability of meio- and macrobenthic standing stocks and diversity in an Arctic fjord (Adventfjorden, Spitsbergen). Polar Biol. 2011;34:833–45. Rueda JL, Fernández-Casado M, Salas C, Gofas S. Seasonality in a taxocoenosis of molluscs from soft bottoms in the Bay of Cádiz (southern Spain). J Mar Biol Assoc UK. 2001;81:903–12. Guzmán-Alvis AI, Lattig P, Ruiz JA. Spatial and temporal characterization of soft bottom polychaetes in a shallow tropical bay (Colombian Caribbean). Boletin de Investig Mar y Costeras. 2006;35:19–36. Hernández-Guevara NA, Pech D, Ardisson P-L. Temporal trends in benthic macrofauna composition in response to seasonal variation in a tropical coastal lagoon, Celestun Gulf of Mexico. Mar Freshwater Res. 2008;59:772–9. Kanaya G, Suzuki T, Kikuchi E. Spatio-temporal variations in macrozoobenthic assemblage structures in a river-affected lagoon (Idoura Lagoon, Sendai Bay, Japan): influences of freshwater inflow. Estuar Coast Shelf Sci. 2011;92:169–79. McCarthy SA, Laws EA, Estabrooks WA, Bailey-Brock JH, Kay EA. Intra-annual variability in Hawaiian shallow-water, soft-bottom macrobenthic communities adjacent to a eutrophic estuary. Estuar Coast Shelf Sci. 2000;50:245–58. Nicolaidou A, Petrou K, Kormas KAr, Reizopoulou S. Inter-annual variability of soft bottom macrofaunal communities in two Ionian Sea lagoons. In: Martens K, Queiroga H, Cunha MR, Cunha A, Moreira MH, Quintino V, Rodrigues AM, Serôdio J, Warwick RM, editors. Marine biodiversity. Developments in Hydrobiology. 
Dordrecht: Springer Netherlands; 2006. p. 89–98. Jackson EL, Rowden AA, Attrill MJ, Bossy SF, Jones MB. Comparison of fish and mobile macroinvertebrates associated with seagrass and adjacent sand at St. Catherine Bay, Jersey (English Channel): emphasis on commercial species. B Mar Sci. 2002;71:1333–1341. Fredriksen S, Backer AD, Boström C, Christie H. Infauna from Zostera marina L. meadows in Norway. Differences in vegetated and unvegetated areas. Mar Biol Res. 2010:6:189–200. Barnes RSK, Barnes MKS. Shore height and differentials between macrobenthic assemblages in vegetated and unvegetated areas of an intertidal sandflat. Estuar Coast Shelf Sci. 2012;106:112–20. Daniel PA, Robertson AI. Epibenthos of mangrove waterways and open embayments: community structure and the relationship between exported mangrove detritus and epifaunal standing stocks. Estuar Coast Shelf Sci. 1990;31:599–619. Sokołowski A, Ziółkowska M, Zgrundo A. Habitat-related patterns of soft-bottom macrofaunal assemblages in a brackish, low-diversity system (southern Baltic Sea). J Sea Res. 2015;103:93–102. Pearman JK, Irigoien X, Carvalho S. Extracellular DNA amplicon sequencing reveals high levels of benthic eukaryotic diversity in the central Red Sea. Mar Gen. 2016;26:29–39. Ellingsen KE, Hewitt JE, Thrush SF. Rare species, habitat diversity and functional redundancy in marine benthos. J Sea Res. 2007;58:291–301. Benedetti-Cecchi L, Bertocci I, Vaselli S, Maggi E, Bulleri F. Neutrality and the response of rare species to environmental variance. PLoS ONE. 2008;3:e2777. Edgar GJ. The influence of plant structure on the species richness, biomass and secondary production of macrofaunal assemblages associated with Western Australian seagrass beds. J Exp Mar Biol Ecol. 1990;137:215–40. Nakamura Y, Sano M. Comparison of invertebrate abundance in a seagrass bed and adjacent coral and sand areas at Amitori Bay, Iriomote Island Japan. Fisheries Sci. 2005;71:543–50. Włodarska-Kowalczuk M, Jankowska E, Kotwicki L, Balazy P. Evidence of season-dependency in vegetation effects on macrofauna in temperate seagrass meadows (Baltic Sea). PLoS ONE. 2014;9:e100788. Hendriks IE, Sintes T, Bouma TJ, Duarte CM. Experimental assessment and modeling evaluation of the effects of the seagrass Posidonia oceanica on flow and particle trapping. Mar Ecol Prog Ser. 2008;356:163–73. Herkül K, Kotta J. Effects of eelgrass (Zostera marina) canopy removal and sediment addition on sediment characteristics and benthic communities in the Northern Baltic Sea. Mar Ecol. 2009;30:74–82. Schneider FI, Mann KH. Species specific relationships of invertebrates to vegetation in a seagrass bed. I. Correlational studies. J Exp Mar Biol Ecol. 1991;145:101–117. Barberá-Cebrián C, Sánchez-Jerez P, Ramos-Esplá A. Fragmented seagrass habitats on the Mediterranean coast, and distribution and abundance of mysid assemblages. Mar Biol. 2002;141:405–13. González-Ortiz V, Egea LG, Jiménez-Ramos R, Moreno-Marín F, Pérez-Lloréns JL, Bouma TJ, et al. Interactions between seagrass complexity, hydrodynamic flow and biomixing alter food availability for associated filter-feeding organisms. PLoS ONE. 2014;9:e104949. McCloskey RM, Unsworth RKF. Decreasing seagrass density negatively influences associated fauna. PeerJ. 2015;3:e1053. Ringold P. Burrowing, root mat density, and the distribution of fiddler crabs in the eastern United States. J Exp Mar Biol Ecol. 1979;36:11–21. Mateo MA, Cebrián J, Dunton K, Mutchler T. Carbon flux in seagrass ecosystems. 
in: Larkum AWD, Orth RJ, Duarte CM, editors. Seagrasses: Biology, Ecology and Conservation. Dordrecht: Springer Netherlands; 2006. p. 159–192. Santschi P, Höhener P, Benoit G, Buchholtz-ten Brink M. Chemical processes at the sediment-water interface. Mar Chem. 1990;30:269–315. Hyland J, Balthis L, Karakassis I, Magni P, Petrov A, Shine J, et al. Organic carbon content of sediments as an indicator of stress in the marine benthos. Mar Ecol Prog Ser. 2005;295:91–103. Pihl L, Svenson A, Moksnes P-O, Wennhage H. Distribution of green algal mats throughout shallow soft bottoms of the Swedish Skagerrak archipelago in relation to nutrient sources and wave exposure. J Sea Res. 1999;41:281–94. Almahasheer H, Serrano O, Duarte CM, Arias-Ortiz A, Masque P, Irigoien X. Low carbon sink capacity of Red Sea mangroves. Sci Rep. 2017;7:9700. Ingole B, Sivadas S, Nanajkar M, Sautya S, Nag A. A comparative study of macrobenthic community from harbours along the central west coast of India. Environ Monit Assess. 2009;154:135. Geist SJ, Nordhaus I, Hinrichs S. Occurrence of species-rich crab fauna in a human-impacted mangrove forest questions the application of community analysis as an environmental assessment tool. Estuar Coast Shelf Sci. 2012;96:69–80. Fusi M, Giomi F, Babbini S, Daffonchio D, McQuaid CD, Porri F, Cannicci S. Thermal specialization across large geographical scales predicts the resilience of mangrove crab populations to global warming. Oikos. 2015;124:784–95. Dittmann S. Abundance and distribution of small infauna in mangroves of Missionary Bay, North Queensland Australia. Rev Biol Trop. 2001;49:535–44. Alfaro AC. Benthic macro-invertebrate community composition within a mangrove/seagrass estuary in northern New Zealand. Estuar Coast Shelf Sci. 2006;66:97–110. Ludovisi A, Castaldelli G, Fano EA. Multi-scale spatio-temporal patchiness of macrozoobenthos in the Sacca di Goro lagoon (Po River Delta, Italy). Transit Water Bull. 2013;7:233–44. Pante E, Adjeroud M, Dustan P, Penin L, Schrimm M. Spatial patterns of benthic invertebrate assemblages within atoll lagoons: importance of habitat heterogeneity and considerations for marine protected area design in French Polynesia. Aquat Living Resour. 2006;19:207–17. Norkko A, Cummings VJ, Thrush SF, Hewitt JE, Hume T. Local dispersal of juvenile bivalves: implications for sandflat ecology. Mar Ecol Prog Ser. 2001;212:131–44. Gong WK, Ong JE, Wong CH, Dhanarajan C. Productivity of mangrove trees and its significance in a managed mangrove ecosystem in Malaysia. In: Universiti Malaya, Kuala Lumpur (Malaysia). Asian Symposium on Mangrove Environment: Research and Management. Kuala Lumpur (Malaysia). 25-29 Aug 1980. Camilleri JC. Leaf-litter processing by invertebrates in a mangrove forest in Queensland. Mar Biol. 1992;114:139–45. Demopoulos AW, Cormier N, Ewel KC, Fry B. Use of multiple chemical tracers to define habitat use of Indo-Pacific mangrove crab, Scylla serrata (Decapoda: portunidae). Estuar Coast. 2008;31:371–81. Mistri M. Persistence of benthic communities: a case study from the Valli di Comacchio, a Northern Adriatic lagoonal ecosystem (Italy). ICES J Mar Sci. 2002;59:314–22. Irlandi EA, Ambrose WG Jr, Orlando BA. Landscape ecology and the marine environment: how spatial configuration of seagrass habitat influences growth and survival of the bay scallop. Oikos. 1995;72:307–13. Alsaffar Z, Pearman JK, Curdia J, Calleja MLl, Ruiz-Compean P, Roth F, Villalobos R, Jones BH, Ellis J, Móran AX, Carvalho S. 
The role of seagrass vegetation and local environmental conditions in shaping benthic bacterial and macroinvertebrate communities in a tropical coastal lagoon. Scientific Reports, in press. Lundberg J, Moberg F. Mobile link organisms and ecosystem functioning: implications for ecosystem resilience and management. Ecosystems. 2003;6:87–98. Tano SA, Eggertsen M, Wikström SA, Berkström C, Buriyo AS, Halling C. Tropical seaweed beds as important habitats for juvenile fish. Mar Freshwater Res. 2017;68:1921–34. Nagelkerken I, van der Velde G, Gorissen MW, Meijer GJ, Van't Hof T, den Hartog C. Importance of mangroves, seagrass beds and the shallow coral reef as a nursery for important coral reef fishes, using a visual census technique. Estuar Coast Shelf Sci. 2000;51:31–44. Lugendo BR, Pronker A, Cornelissen I, de Groene A, Nagelkerken I, Dorenbosch M, et al. Habitat utilisation by juveniles of commercially important fish species in a marine embayment in Zanzibar Tanzania. Aquat Living Resour. 2005;18:149–58. Gullström M, Bodin M, Nilsson PG, Öhman MC. Seagrass structural complexity and landscape configuration as determinants of tropical fish assemblage composition. Mar Ecol Prog Ser. 2008;363:241–55. Berkström C, Jörgensen TL, Hellström M. Ecological connectivity and niche differentiation between two closely related fish species in the mangrove-seagrass-coral reef continuum. Mar Ecol Prog Ser. 2013;477:201–15. Mumby PJ, Hastings A. The impact of ecosystem connectivity on coral reef resilience. J Appl Ecol. 2008;45:854–62. Saenger P, Gartside D, Funge-Smith S. A review of mangrove and seagrass ecosystems and their linkage to fisheries and fisheries management. RAP: FAO; 2013. Khedhri I, Djabou H, Afli A. Trophic and functional organization of the benthic macrofauna in the lagoon of Boughrara–Tunisia (SW Mediterranean Sea). J Mar Biol Assoc UK. 2015;95:647–59. Perry DC, Staveley TA, Gullström M. Habitat connectivity of fish in temperate shallow-water seascapes. Front Mar Sci. 2017;4:440. Whitfield AK. The role of seagrass meadows, mangrove forests, salt marshes and reed beds as nursery areas and food sources for fishes in estuaries. Rev Fish Biol Fisher. 2017;27:75–110. Arar EJ, Collins GB. Method 445.0: In vitro determination of chlorophyll a and pheophytin a in marine and freshwater algae by fluorescence. United States Environmental Protection Agency, Office of Research and Development, National Exposure Research Laboratory Washington, DC, USA. 1997. Heiri O, Lotter AF, Lemcke G. Loss on ignition as a method for estimating organic and carbonate content in sediments: reproducibility and comparability of results. J Paleolimnol. 2001;25:101–10. Chao A. Estimating the population size for capture-recapture data with unequal catchability. Biometrics 1987;783–791. Gotelli NJ, Colwell RK. Quantifying biodiversity: procedures and pitfalls in the measurement and comparison of species richness. Ecol Lett. 2001;4:379–91. Chiu C-H, Wang Y-T, Walther BA, Chao A. An improved nonparametric lower bound of species richness via a modified good–turing frequency formula. Biometrics. 2014;70:671–82. Smith EP, van Belle G. Nonparametric estimation of species richness. Biometrics. 1984;40:119–29. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. 2018. Oksanen J, Blanchet FG, Kindt R, Legendre P, Minchin PR, O'hara RB, et al. vegan: Community Ecology Package R package version 2.5-2. 2018. 
The authors would like to thank Saskia Kurten, Richard Payumo, Miguel Viegas, and Holger Anlauf for their assistance in the field and the laboratory analyses. We would also like to thank to Carlos Navarro, Leandro Sampaio, Joana Oliveira, and Ascensão Ravara for their help in taxonomic identification. The authors are also indebted to the skippers and staff of the Coastal and Marine Resources Core Lab for their invaluable support in fieldwork activities. We are also grateful to Dr. John Pearman for proofreading this manuscript and providing comments that, along with those provided by the reviewers and the editor, greatly improved it. This research was partially supported by baseline funding provided by KAUST to Prof. Xabier Irigoien and SAKMEO - Saudi Aramco/KAUST Center for Marine Environmental Observations. This research was supported by baseline funding provided by KAUST to Prof. Xabier Irigoien. SC and JC are funded by the Saudi Aramco/KAUST Center for Marine Environmental Observations (SAKMEO). Red Sea Research Centre, King Abdullah University of Science and Technology (KAUST), Thuwal, 23955-6900, Saudi Arabia Zahra Alsaffar, João Cúrdia, Xabier Irigoien & Susana Carvalho Chemistry Department, College of Science, King Saud University (KSU), Riyadh, P.O. Box 2455, 11451, Saudi Arabia Zahra Alsaffar AZTI - Marine Research, Herrera Kaia, Pasaia, 20100, Spain Xabier Irigoien IKERBASQUE, Basque Foundation for Science, Bilbao, 48013, Spain João Cúrdia Susana Carvalho ZA and SC designed the study. ZA, JC, XI and SC, conducted the research, analysed and interpreted the data. ZA wrote the manuscript with the contribution of all the co-authors. All authors read and approved the final manuscript. Correspondence to Susana Carvalho. The conducted research followed the policies as approved by the King Abdullah University of Science and Technology (KAUST). Alsaffar, Z., Cúrdia, J., Irigoien, X. et al. Composition, uniqueness and connectivity across tropical coastal lagoon habitats in the Red Sea. BMC Ecol 20, 61 (2020). https://doi.org/10.1186/s12898-020-00329-z DOI: https://doi.org/10.1186/s12898-020-00329-z Inter-annual variability Spatial variability Macrobenthic communities Tropical habitats Seascape connectivity Conservation ecology and biodiversity research
CommonCrawl
Introduction to Modulation Transfer Function
When optical designers attempt to compare the performance of optical systems, a commonly used measure is the modulation transfer function (MTF). MTF is used for components as simple as a spherical singlet lens to those as complex as a multi-element telecentric imaging lens assembly. In order to understand the significance of MTF, consider some general principles and practical examples for defining MTF including its components, importance, and characterization. THE COMPONENTS OF MTF To properly define the modulation transfer function, it is necessary to first define two terms required to truly characterize image performance: resolution and contrast. Resolution is an imaging system's ability to distinguish object detail. It is often expressed in terms of line-pairs per millimeter (where a line-pair is a sequence of one black line and one white line). This measure of line-pairs per millimeter (lp/mm) is also known as frequency. The inverse of the frequency yields the spacing in millimeters between two resolved lines. Bar targets with a series of equally spaced, alternating white and black bars (i.e. a 1951 USAF target or a Ronchi ruling) are ideal for testing system performance. For a more detailed explanation of test targets, view Choosing the Correct Test Target. For all imaging optics, when imaging such a pattern, perfect line edges become blurred to a degree (Figure 1). High-resolution images are those which exhibit a large amount of detail as a result of minimal blurring. Conversely, low-resolution images lack fine detail. Figure 1: Perfect Line Edges Before (Left) and After (Right) Passing through a Low Resolution Imaging Lens A practical way of understanding line-pairs is to think of them as pixels on a camera sensor, where a single line-pair corresponds to two pixels (Figure 2). Two camera sensor pixels are needed for each line-pair of resolution: one pixel is dedicated to the red line and the other to the blank space between pixels. Using the aforementioned metaphor, image resolution of the camera can now be specified as equal to twice its pixel size. Figure 2: Imaging Scenarios Where (a) the Line-Pair is NOT Resolved and (b) the Line-Pair is Resolved Correspondingly, object resolution is calculated using the camera resolution and the primary magnification (PMAG) of the imaging lens (Equations 1 – 2). It is important to note that these equations assume the imaging lens contributes no resolution loss. (1)$$ \text{Object Resolution} \left[ \large{\unicode[arial]{x03BC}} \text{m} \right] = \frac{\text{Camera Resolution} \left[ \large{\unicode[arial]{x03BC}} \text{m} \right]}{\text{PMAG}} $$ (2)$$ \text{Object Resolution} \left[ \tfrac{ \text{lp} }{\text{mm}} \right] = \text{PMAG} \times \text{Camera Resolution} \left[ \tfrac{ \text{lp} }{\text{mm}} \right] $$ Contrast/Modulation Consider normalizing the intensity of a bar target by assigning a maximum value to the white bars and zero value to the black bars. Plotting these values results in a square wave, from which the notion of contrast can be more easily seen (Figure 3).
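Before moving on to contrast, Equations 1 – 2 are easy to check numerically. The short sketch below is illustrative only: the pixel size and PMAG values are hypothetical examples rather than specifications of any particular sensor or lens, and the camera-limited resolution follows the two-pixels-per-line-pair rule described above.

```python
# Illustrative check of Equations 1-2; pixel size and PMAG are assumed example values.

def camera_resolution_lp_per_mm(pixel_size_um):
    """Camera-limited resolution: one line-pair spans two adjacent pixels."""
    line_pair_um = 2.0 * pixel_size_um        # micrometers per line-pair
    return 1000.0 / line_pair_um              # line-pairs per millimeter

def object_resolution_um(camera_resolution_um, pmag):
    """Equation 1: object resolution [um] = camera resolution [um] / PMAG."""
    return camera_resolution_um / pmag

def object_resolution_lp_per_mm(camera_lp_per_mm, pmag):
    """Equation 2: object resolution [lp/mm] = PMAG x camera resolution [lp/mm]."""
    return pmag * camera_lp_per_mm

pixel_um, pmag = 3.45, 0.5                    # hypothetical pixel size and magnification
cam_lp_mm = camera_resolution_lp_per_mm(pixel_um)
print(f"camera resolution: {cam_lp_mm:.1f} lp/mm")                                      # ~144.9 lp/mm
print(f"object resolution: {object_resolution_lp_per_mm(cam_lp_mm, pmag):.1f} lp/mm")   # ~72.5 lp/mm
print(f"object resolution: {object_resolution_um(2 * pixel_um, pmag):.1f} um")          # ~13.8 um
```

The two object-space figures describe the same limit: for these assumed values, 72.5 lp/mm corresponds to a line-pair spacing of about 13.8 µm at the object.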
Mathematically, contrast is calculated with Equation 3: (3)$$ \text{% Contrast} = \left[ \frac{I_{\text{max}} - I_{\text{min}}}{I_{\text{max}} + I_{\text{min}}} \right] $$ Figure 3: Contrast Expressed as a Square Wave When this same principle is applied to the imaging example in Figure 1, the intensity pattern before and after imaging can be seen (Figure 4). Contrast or modulation can then be defined as how faithfully the minimum and maximum intensity values are transferred from object plane to image plane. To understand the relation between contrast and image quality, consider an imaging lens with the same resolution as the one in Figure 1 and Figure 4, but used to image an object with a greater line-pair frequency. Figure 5 illustrates that as the spatial frequency of the lines increases, the contrast of the image decreases. This effect is always present when working with imaging lenses of the same resolution. For the image to appear defined, black must be truly black and white truly white, with a minimal amount of grayscale between. Figure 4: Contrast of a Bar Target and Its Image Figure 5: Contrast Comparison at Object and Image Planes In imaging applications, the imaging lens, camera sensor, and illumination play key roles in determining the resulting image contrast. The lens contrast is typically defined in terms of the percentage of the object contrast that is reproduced. The sensor's ability to reproduce contrast is usually specified in terms of decibels (dB) in analog cameras and bits in digital cameras. UNDERSTANDING MTF Now that the components of the modulation transfer function (MTF), resolution and contrast/modulation, are defined, consider MTF itself. The MTF of a lens, as the name implies, is a measurement of its ability to transfer contrast at a particular resolution from the object to the image. In other words, MTF is a way to incorporate resolution and contrast into a single specification. As line spacing decreases (i.e. the frequency increases) on the test target, it becomes increasingly difficult for the lens to efficiently transfer this decrease in contrast; as result, MTF decreases (Figure 6). Figure 6: MTF for an Aberration-Free Lens with a Rectangular Aperture For an aberration-free image with a circular pupil, MTF is given by Equation 4, where MTF is a function of spatial resolution (ξ), which refers to the smallest line-pair the system can resolve. The cut-off frequency (ξc) is given by Equation 6. Figure 6 plots the MTF of an aberration-free image with a rectangular pupil. As can be expected, the MTF decreases as the spatial resolution increases. It is important to note that these cases are idealized and that no actual system is completely aberration-free. (4)$$ \text{MTF} \left( \xi \right) = \frac{2}{\pi} \left( \varphi - \cos{\varphi} \cdot \sin{\varphi} \right) $$ (5)$$ \varphi = \cos ^{-1} \left( \frac{\xi}{\xi_c} \right) $$ (6)$$ \xi_c = \frac{1}{\lambda \cdot \left( f/ \# \right)} $$ THE IMPORTANCE OF MTF In traditional system integration (and less crucial applications), the system's performance is roughly estimated using the principle of the weakest link. The principle of the weakest link proposes that a system's resolution is solely limited by the component with the lowest resolution. Although this approach is very useful for quick estimations, it is actually flawed because every component within the system contributes error to the image, yielding poorer image quality than the weakest link alone. 
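Both the contrast definition (Equation 3) and the aberration-free MTF (Equations 4 – 6) can be evaluated directly. The sketch below assumes an f/4 lens at 550 nm (0.00055 mm) and a pair of made-up intensity readings; the numbers are illustrative, not data for any specific lens.

```python
import numpy as np

def percent_contrast(i_max, i_min):
    """Equation 3: contrast = (Imax - Imin) / (Imax + Imin), expressed in percent."""
    return 100.0 * (i_max - i_min) / (i_max + i_min)

def diffraction_mtf(xi, wavelength_mm, f_number):
    """Equations 4-6: aberration-free MTF at spatial frequency xi (cycles/mm)."""
    xi_c = 1.0 / (wavelength_mm * f_number)                    # cut-off frequency (Eq. 6)
    x = np.clip(np.asarray(xi, dtype=float) / xi_c, 0.0, 1.0)
    phi = np.arccos(x)                                         # Eq. 5
    return (2.0 / np.pi) * (phi - np.cos(phi) * np.sin(phi))   # Eq. 4

print(percent_contrast(200, 50))                               # 60.0 for a hypothetical target
freqs = np.array([0, 50, 100, 200, 400])                       # cycles/mm
print(np.round(diffraction_mtf(freqs, 0.00055, 4.0), 3))       # falls from 1 toward 0
```

With these assumed values the cut-off frequency is about 455 cycles/mm, and the modulation decreases monotonically from 1 at zero frequency toward 0 at the cut-off, which is the behaviour plotted in Figure 6.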
Every component within a system has an associated modulation transfer function (MTF) and, as a result, contributes to the overall MTF of the system. This includes the imaging lens, camera sensor, image capture boards, and video cables, for instance. The resulting MTF of the system is the product of all the MTF curves of its components (Figure 7). For instance, a 25mm fixed focal length lens and a 25mm double gauss lens can be compared by evaluating the resulting system performance of both lenses with a Sony monochrome camera. By analyzing the system MTF curve, it is straightforward to determine which combination will yield sufficient performance. In some metrology applications, for example, a certain amount of contrast is required for accurate image edge detection. If the minimum contrast needs to be 35% and the image resolution required is 30 lp/mm, then the 25mm double gauss lens is the best choice. MTF is one of the best tools available to quantify the overall imaging performance of a system in terms of resolution and contrast. As a result, knowing the MTF curves of each imaging lens and camera sensor within a system allows a designer to make the appropriate selection when optimizing for a particular resolution. Figure 7: System MTF is the Product of the MTF of Individual Component: Lens MTF x Camera MTF = System MTF CHARACTERIZATION OF MTF Determining Real-World MTF A theoretical modulation transfer function (MTF) curve can be generated from the optical prescription of any lens. Although this can be helpful, it does not indicate the actual, real-world performance of the lens after accounting for manufacturing tolerances. Manufacturing tolerances always introduce some performance loss to the original optical design since factors such as geometry and coating deviate slightly from an ideal lens or lens system. For this reason, in our manufacturing sites, Edmund Optics® invests in optical test and measurement equipment for quantifying MTF. This MTF test and measurement equipment allows for characterization of the actual performance of both designed lenses and commercial lenses (whose optical prescription is not available to the public). As a result, precise integration - previously limited to lenses with known prescriptions - can now include commercial lenses. Reading MTF Graphs/Data A greater area under the MTF curve does not always indicate the optimal choice. A designer should decide based on the resolution of the application at hand. As previously discussed, an MTF graph plots the percentage of transferred contrast versus the frequency (cycles/mm) of the lines. A few things should be noted about the MTF curves offered by Edmund Optics®: Each MTF curve is calculated for a single point in space. Typical field points include on-axis, 70% field, and full-field. 70% is a common reference point because it captures approximately 50% of the total imaging area. Off-axis MTF data is calculated for both tangential and sagittal cases (denoted by T and S, respectively). Occasionally an average of the two is presented rather than the two individual curves. MTF curves are dependent on several factors, such as system conjugates, wavebands, and f/#. An MTF curve is calculated at specified values of each; therefore, it is important to review these factors before determining whether a component will work for a certain application. The spatial frequency is expressed in terms of cycles (or line-pairs) per millimeter. 
The inverse of this frequency yields the spacing of a line-pair (a cycle of one black bar and one white bar) in millimeters. The nominal MTF curve is generated using the standard prescription information available in optical design programs. This prescription information can also be found on our global website, in our print catalogs, and in our lens catalogs supplied to Zemax®. The nominal MTF represents the best-case scenario and does not take into account manufacturing tolerances. Conceptually, MTF can be difficult to grasp. Perhaps the easiest way to understand this notion of transferring contrast from object to image plane is by examining a real-world example. Figures 8 - 12 compare MTF curves and images for two 25mm fixed focal length imaging lenses: #54-855 Finite Conjugate Micro-Video Lens and #59-871 Compact Fixed Focal Length Lens. Figure 8 shows polychromatic diffraction MTF for these two lenses. Depending upon the testing conditions, both lenses can yield equivalent performance. In this particular example, both are trying to resolve group 2, elements 5 -6 (indicated by the red boxes in Figure 10) and group 3, elements 5 – 6 (indicated by the blue boxes in Figure 10) on a 1951 USAF resolution target (Figure 9). In terms of actual object size, group 2, elements 5 – 6 represent 6.35 – 7.13lp/mm (14.03 - 15.75μm) and group 3, elements 5 – 6 represent 12.70 – 14.25lp/mm (7.02 - 7.87μm). For an easy way to calculate resolution given element and group numbers, use our 1951 USAF Resolution EO Tech Tool. Under the same testing parameters, it is clear to see that #59-871 (with a better MTF curve) yields better imaging performance compared to #54-855 (Figures 11 – 12). In this real-world example with these particular 1951 USAF elements, a higher modulation value at higher spatial frequencies corresponds to a clearer image; however, this is not always the case. Some lenses are designed to be able to very accurately resolve lower spatial frequencies, and have a very low cut-off frequency (i.e. they cannot resolve higher spatial frequencies). Had the target been group -1, elements 5-6, the two lenses would have produced much more similar images given their modulation values at lower frequencies. Figure 8: Comparison of Polychromatic Diffraction MTF for #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Figure 9: 1951 USAF Resolution Target Figure 10: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 2, Elements 5 -6 (Red Boxes) and Group 3, Elements 5 – 6 (Blue Boxes) on a 1951 USAF Resolution Target Figure 11: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 2, Elements 5 -6 on a 1951 USAF Resolution Target Figure 12: Comparison of #54-855 Finite Conjugate Micro-Video Lens (Left) and #59-871 Compact Fixed Focal Length Lens (Right) Resolving Group 3, Elements 5 – 6 on a 1951 USAF Resolution Target Modulation transfer function (MTF) is one of the most important parameters by which image quality is measured. Optical designers and engineers frequently refer to MTF data, especially in applications where success or failure is contingent on how accurately a particular object is imaged. To truly grasp MTF, it is necessary to first understand the ideas of resolution and contrast, as well as how an object's image is transferred from object to image plane. 
While initially daunting, understanding and eventually interpreting MTF data is a very powerful tool for any optical designer. With knowledge and experience, MTF can make selecting the appropriate lens a far easier endeavor - despite the multitude of offerings.
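Two of the ideas used most often above (multiplying component MTF curves into a system MTF, and converting a 1951 USAF group/element into a spatial frequency) also reduce to a few lines of code. In the sketch below the MTF sample values are invented placeholders, while the group/element conversion uses the standard relation 2^(group + (element - 1)/6) and reproduces the 6.35 - 14.25 lp/mm figures quoted earlier.

```python
import numpy as np

# System MTF as the product of component curves (sample values are hypothetical).
freqs_lp_mm = np.array([10, 20, 30, 40, 50])
lens_mtf    = np.array([0.92, 0.80, 0.62, 0.45, 0.30])   # assumed imaging lens
camera_mtf  = np.array([0.95, 0.85, 0.70, 0.55, 0.40])   # assumed camera sensor
system_mtf  = lens_mtf * camera_mtf

required_freq, min_contrast = 30, 0.35
idx = int(np.where(freqs_lp_mm == required_freq)[0][0])
verdict = "meets" if system_mtf[idx] >= min_contrast else "fails"
print(f"system MTF at {required_freq} lp/mm = {system_mtf[idx]:.2f} ({verdict} a 35% contrast requirement)")

# 1951 USAF target: spatial frequency of a given group and element.
def usaf1951_lp_per_mm(group, element):
    return 2.0 ** (group + (element - 1) / 6.0)

for g, e in [(2, 5), (2, 6), (3, 5), (3, 6)]:
    print(f"group {g}, element {e}: {usaf1951_lp_per_mm(g, e):.2f} lp/mm")
```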
CommonCrawl
In this post, I plan to introduce a few new special functions that appear in number theory and some infinite series identities that they satisfy. In particular, I will show some links between these number-theoretical functions and the Riemann Zeta function, and introduce a more general type of series, the Dirichlet Series, which is closely tied to the Zeta function. Finally, at the end, I will calculate a couple of series that require cumulative knowledge of Dirichlet Series and the Zeta function. I would now like to define a couple of number-theoretical functions that I will use later in the post. First is the divisor function $\sigma_\alpha (n)$, defined as the sum of the $\alpha$ th powers of the divisors of $n$ (including $1$ and $n$). defined using mathematical language, this is This function is multiplicative, meaning that for any coprime positive integers $m$ and $n$, the function satisfies The special case $\sigma_0 (n)$ is sometimes written simply as $d(n)$, and is just the number of divisors of $n$. Next is Euler's totient function $\phi(n)$, defined as the number of positive integers less than $n$ that are coprime to $n$ (including $1$). It satisfies a few interesting identities that we will use, including the identity for any positive integer $n$. The totient function is also multiplicative. The function $\Omega(n)$ counts the number of prime factors of $n$ with multipliticy, and the function $\omega(n)$ counts the number of prime factors of $n$ without multiplicity. The first function satisfies for any positive integers $m$ and $n$, and the second satisfies for coprime positive integers $m$ and $n$. The Liouville function $\lambda(n)$ is defined as It is also multiplicative. Many of these properties are pretty easy to prove (except for the proof that $\phi(n)$ is multiplicative, which gave me a bit of trouble), so I won't prove them, and I'll dive right in to the infinite series. This follows directly from the previously described propert of the Totient function: Now it's time for some more interesting Dirichlet series. First recall the following elementary property of infinite series This can be used to prove a couple very interesting and useful properties regarding number theoretical series. I will begin with a simple strategy for evaluating series of these type. This strategy is limited, so I will only do a few series, but I will later introduce a much more effective strategy that can be used for all of them. Suppose $f(n)$ is some function, and $g(n)$ is the function defined as Then consider the following: We can use this to calculate the Dirichlet series of the Euler Totient function mentioned earlier, using this property that I also stated earlier: Letting $f(n)=\phi(n)$ and $g(n)=n$, we have or This can also be used to calculate the Dirichlet series of the Mobius function, since is equal to $1$ if $n=1$ and $0$ otherwise. This allows us to say that I'll do one more example. Since $\sigma_a$ is defined as we have that of course, the LHS is so we have the result This strategy can be used to a greater extent - for example, it can be used to calculate generally the value of $D(\sigma_a(n),s)$ for any $a$. However, for more difficult Dirichlet series like $D(d(n^2),s)$ or $D(d^2(n),s)$, it is advisable to use the following technique. Recall the property of infinite series mentioned earlier. I will now use it to develop a much more useful strategy, especially those with multiplicative summands. 
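Before developing that strategy, the functions and identities introduced above can be spot-checked with a brute-force implementation. The sketch below is deliberately naive (trial division, no attempt at efficiency) and only verifies the stated properties on small inputs.

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(n, a=1):
    """Divisor function sigma_a(n): sum of the a-th powers of the divisors of n."""
    return sum(d ** a for d in divisors(n))

def phi(n):
    """Euler's totient: how many 1 <= k <= n are coprime to n."""
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

def prime_factorization(n):
    """{prime: exponent} by trial division."""
    out, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            out[p] = out.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        out[n] = out.get(n, 0) + 1
    return out

def big_omega(n):     # Omega(n): prime factors counted with multiplicity
    return sum(prime_factorization(n).values())

def small_omega(n):   # omega(n): distinct prime factors
    return len(prime_factorization(n))

def liouville(n):     # lambda(n) = (-1) ** Omega(n)
    return (-1) ** big_omega(n)

# Spot-checks of the identities mentioned above, on small numbers.
assert all(sum(phi(d) for d in divisors(n)) == n for n in range(1, 100))
assert sigma(8 * 9, 2) == sigma(8, 2) * sigma(9, 2)           # gcd(8, 9) = 1
assert phi(8 * 9) == phi(8) * phi(9)
assert big_omega(12 * 10) == big_omega(12) + big_omega(10)    # additive with multiplicity
assert small_omega(4 * 9) == small_omega(4) + small_omega(9)  # gcd(4, 9) = 1
assert liouville(2) == -1 and liouville(4) == 1
print("all spot-checks passed")
```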
Suppose that the function $f(n)$ is multiplicative and that we are considering the infinite series Since every positive integer has a unique prime factorization, this can be written as follows, where $p_i$ is the ith prime number: Or, since $f$ is multiplicative, Using the property of series mentioned earlier, this is equal to or, rearranging a bit, Since $f(1)=1$ for every multiplicative function $f$, this is equal to Thus, for any multiplicative function $f$, as long as the series converges, we have As a corrolary of this, we have the famous product representation of the Riemann Zeta function: However, we can do some other cool stuff with it. For example, consider the infinite series Since $d(n)$ is multiplicative, and since $d(p^k)=k+1$ for any prime $p$ and nonnegative integer $k$, we have the following: I won't put this in a box, because we've already derived a more general Dirichlet series for $\sigma_a$, but it is a good example of how to use this technique. Though this technique can be applied to more exotic series (as I will show in a moment), the earlier technique provides a much more elegant derivation of $D(\sigma_a(n),s)$, whereas this one gets very ugly with algebra. Interestingly, these last two Dirichlet series were reciprocals of each other.
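These closed forms are easy to sanity-check numerically. The sketch below compares truncated sums and products at s = 3 for the Euler product of ζ(s), for D(d(n), s) = ζ(s)², and for the earlier result D(φ(n), s) = ζ(s − 1)/ζ(s); the cut-offs (5000 terms, primes below 200) are arbitrary choices, so agreement is only approximate.

```python
# Truncated numerical checks only; exact equality holds for the full series/products.

def num_divisors(n):
    count, k = 0, 1
    while k * k <= n:
        if n % k == 0:
            count += 1 if k * k == n else 2
        k += 1
    return count

def phi(n):
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

def zeta(s, terms=5000):
    return sum(n ** -s for n in range(1, terms + 1))

def dirichlet(f, s, terms=5000):
    return sum(f(n) / n ** s for n in range(1, terms + 1))

primes = [p for p in range(2, 200) if all(p % q for q in range(2, int(p ** 0.5) + 1))]
euler_product = 1.0
for p in primes:
    euler_product *= 1.0 / (1.0 - p ** -3)

s = 3
print(f"Euler product ~ {euler_product:.4f}   zeta(3) ~ {zeta(s):.4f}")
print(f"D(d(n), 3)    ~ {dirichlet(num_divisors, s):.4f}   zeta(3)^2 ~ {zeta(s) ** 2:.4f}")
print(f"D(phi(n), 3)  ~ {dirichlet(phi, s):.4f}   zeta(2)/zeta(3) ~ {zeta(2) / zeta(s):.4f}")
```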
CommonCrawl
BMC Medical Imaging Performance of TGSE BLADE DWI compared with RESOLVE DWI in the diagnosis of cholesteatoma Yaru Sheng1, Rujian Hong1, Yan Sha1, Zhongshuai Zhang2, Kun Zhou3 & Caixia Fu3 BMC Medical Imaging volume 20, Article number: 40 (2020) Cite this article Based on its high resolution in soft tissue, MRI, especially diffusion-weighted imaging (DWI), is increasingly important in the evaluation of cholesteatoma. The purpose of this study was to evaluate the role of the 2D turbo gradient- and spin-echo (TGSE) diffusion-weighted (DW) pulse sequence with the BLADE trajectory technique in the diagnosis of cholesteatoma at 3 T and to qualitatively and quantitatively compare image quality between the TGSE BLADE and RESOLVE methods. A total of 42 patients (23 males, 19 females; age range, 7–65 years; mean, 40.1 years) with surgically confirmed cholesteatoma in the middle ear were enrolled in this study. All patients underwent DWI (both a prototype TGSE BLADE DWI sequence and the RESOLVE DWI sequence) using a 3-T scanner with a 64-channel brain coil. Qualitative imaging parameters (imaging sharpness, geometric distortion, ghosting artifacts, and overall imaging quality) and quantitative imaging parameters (apparent diffusion coefficient [ADC], signal-to-noise ratio [SNR], contrast, and contrast-to-noise ratio [CNR]) were assessed for the two diffusion acquisition techniques by two independent radiologists. A comparison of qualitative scores indicated that TGSE BLADE DWI produced less geometric distortion, fewer ghosting artifacts (P < 0.001) and higher image quality (P < 0.001) than were observed for RESOLVE DWI. A comparison of the evaluated quantitative image parameters between TGSE and RESOLVE showed that TGSE BLADE DWI produced a significantly lower SNR (P < 0.001) and higher parameter values (both contrast and CNR (P < 0.001)) than were found for RESOLVE DWI. The ADC (P < 0.001) was significantly lower for TGSE BLADE DWI (0.763 × 10− 3 mm2/s) than RESOLVE DWI (0.928 × 10− 3 mm2/s). Compared with RESOLVE DWI, TGSE BLADE DWI significantly improved the image quality of cholesteatoma by reducing magnetic sensitive artifacts, distortion, and blurring. TGSE BLADE DWI is more valuable than RESOLVE DWI for the diagnosis of small-sized (2 mm) cholesteatoma lesions. However, TGSE BLADE DWI also has some disadvantages: the whole image intensity is slightly low, so that the anatomical details of the air-bone interface are not shown well, and this shortcoming should be improved in the future. Cholesteatoma is a benign, gradually expanding and destructive epithelial lesion of the temporal bone that results in the erosion of adjacent bony structures and can lead to various complications [1]. In addition to its clinical features and otoscopic findings, the early diagnosis of cholesteatoma based on an imaging examinations, such as high-resolution computed tomography (CT) and magnetic resonance imaging (MRI), is important. From a surgical perspective, high-resolution CT remains the primary imaging technique for the diagnosis and characterization of cholesteatoma in the middle ear due to its high spatial resolution and good visualization of bone structures [2, 3]. However, in terms of its disadvantages, it is difficult to distinguish cholesteatoma from granulation tissue, fibrous tissue, or fluid on high-resolution CT [4]. Based on the high resolution of soft tissue, MRI has gained increasing importance in the evaluation of cholesteatoma. 
Many studies have shown that MR diffusion-weighted imaging (DWI) has high sensitivity and specificity for identifying the presence of cholesteatoma due to its high keratin content [5,6,7]. However, conventional DWI uses an echo-planar imaging (EPI) trajectory to collect k-space data, and the obtained images (single-shot echo-planar DWI, SS EPI) may suffer from severe susceptibility artifacts at air-bone interfaces. Additionally, its image resolution is limited. Thus, it is difficult to use DWI in cases in which the lesion is closer than 5 mm [8, 9] from the distortion area. Compared with the EP DWI sequence, the non-echo-planar diffusion weighted imaging (non-EPI) DW imaging sequence produces thinner slices and has a higher imaging matrix, and it tends to produce fewer magnetic susceptibility artifacts but requires longer imaging times (multi-shot non-echo-planar DWI sequences require approximately 8 min), and non-EPI has higher sensitivity for detecting cholesteatoma and a lower misdiagnosis rate [7, 10,11,12]. Readout-segmented echo-planar imaging (RESOLVE) has been proposed to reduce image distortion. This method could significantly improve head and neck DWI by reducing echo spacing. Although RESOLVE DWI has a significantly improved image signal-to-noise ratio (SNR) and reduced image distortion, the partial volume effect and T2* blurring effect are not completely eliminated, and it is difficult to detect small lesions (< 2.5 mm) [13,14,15]. Periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER) DWI is a nonecho planar fast spin-echo-based DWI sequence in which the central k-space is acquired in a rotating manner. The PROPELLER sequence is free of geometric distortion and susceptibility artifacts, but the scan time is long and imposes a high specific absorption rate (SAR), especially at high fields [16,17,18,19]. The BLADE DWI technique has previously been reported to eliminate susceptibility artifacts by applying the 'blades' acquisition scheme in k-space [20]. This non-EPI technique was further optimized by using a TGSE method to increase the SNR efficiency and achieve the detection of small (< 2.5 mm) cholesteatomas while increasing the resolution to decrease susceptibility artifacts, thereby allowing differentiation from granulation tissue [20]. Houchun H. et al. [21] concluded that TGSE BLADE DWI exhibited less geometric distortion in the brain and reduced magnetic artifacts near the air-tissue interface than were achieved by conventional SE-EPI. However, the use of TGSE BLADE DWI in studying ear lesions has not yet been reported. L.M.J. Lips et al. [22] found that applying non-EPI DWI for the detection of residual or recurrent cholesteatoma achieved better results at 3 T than at 1.5 T. Hence, the purpose of this study was to evaluate the role of the TGSE BLADE DWI technique in the diagnosis of cholesteatoma at 3 T and qualitatively and quantitatively compare image quality between TGSE BLADE and RESOLVE protocols with the same scanning time (3 m46s). In this study, a total of 42 patients (23 males, 19 females; age range, 7–65 years; mean, 40.1 years) with surgically confirmed cholesteatoma were enrolled, and patients with congenital cholesteatoma have been excluded according to the clinical diagnosis from October 2018 to April 2019. Of the 42 patients, 37 had cholesteatoma in the middle and 5 had cholesteatoma in the external auditory canal. Clinicopathological results were the gold standard for all patients. 
The study was approved by the Review Committee of Eye & ENT Hospital of Fudan University, and written informed consent was obtained from all patients. All patients underwent DWI (both a prototype TGSE BLADE DWI sequence and a commercially available RESOLVE DWI sequence) using a 3 T scanner (MAGNETOM Prisma, Siemens Healthcare, Erlangen, Germany) with a 64-channel brain coil. However, especially for small attic lesions, a golden standard has not been reached at present and axial plane is considered acceptable. The parameters for TGSE BLADE DWI were as follows: TR/TE = 4000/62 ms; slice thickness/gap = 2/0.2 mm; slices = 21; bandwidth = 520 Hz/Px; field of view (FOV) = 280 × 280 mm2; matrix = 192 × 192; voxel size = 1.5 × 1.5 × 2.0 mm3; number of excitations (NEX) = 1; diffusion mode = 4 scan trace; b = 0, 1000 s/mm2; turbo factor = 13; EPI factor = 3; and data acquisition time = 3 min 46 s. For RESOLVE DWI, the imaging parameters were as follows: TR/TE = 5020/53 ms; slice thickness/gap = 2/0.2 mm; slices = 21; bandwidth = 766 Hz/Px; FOV = 230 × 230 mm; matrix = 192 × 192; voxel size = 1.2 × 1.2 × 2.0 mm; diffusion mode = 4 scan trace; b = 0, 1000 s/mm2; and data acquisition time = 3 min 46 s. Image assessment Qualitative analysis of image quality All images obtained in the 42 included patients by TGSE or RESOLVE were evaluated by two radiologists with 10 years of experience in ear MRI evaluation. Each observer randomly and blindly evaluated the images without knowing the type of DWI sequence and compared the two different DWI sequences using the side-by-side display method. A final decision was made based on mutual consultation when there was a divergence in the assessment results. Qualitative evaluation of images obtained by TGSE and RESOLVE was performed according to four criteria: image sharpness, geometric distortion, ghosting artifacts, and overall image quality. Image sharpness was assessed on a scale from 1 to 3. Both geometric distortion and ghosting artifacts were evaluated on a scale from 1 to 4, and the evaluation of geometric distortion included two parts: the whole image and the lesion in the ear. Overall image quality was also graded on a scale of 1 to 5. The detailed qualitative evaluation criteria are shown in Table 1. In Fig. 1, images C and E show geometric deformations with a score of 4 (no distortion) and 2 (moderate distortion), respectively. Table 1 Qualitative criteria for comparing image quality of TGSE and RESOLVE (b1000) sequences A 51-year-old male with primary cholesteatoma in the right middle ear confirmed by right mastoidectomy. Axial T1WI (a) and T2WI (b) showed the anatomical location of the cholesteatoma (arrow) in the right middle ear. c, e: TGSE BLADE (b1000) and RESOLVE (b1000) DWI showed a restricted diffusion lesion (high signal) in the right mastoid bone. d, f: The ADC on TGSE BLADE and RESOLVE DWI was 0.689 × 10− 3 mm2/s and 0.791 × 10− 3 mm2/s, respectively. However, the cholesteatoma lesion (arrow) was not as clear on RESOLVE DWI as on TGSE DWI. Moreover, the structures of the nasal cavity were obviously distorted on the RESOLVE b1000 (e) and ADC (f) maps, whereas almost no distortion was observed on the TGSE b1000 and ADC maps. 
Images (c and e) show the geometric deformation values of the entire image for images with a score of 4 (no distortion) and 2 (moderate distortion) Quantitative analysis of image quality The SNR, contrast and contrast-to-noise ratio (CNR) were the main evaluation criteria for the quantitative analysis of images obtained using the two sequence. The apparent diffusion coefficients (ADCs) of the two sequences were compared simultaneously. On the b1000 TGSE and RESOLVE images, the SNR of the lesions in the region of interest (ROI) was defined as the ratio of the mean signal intensity of the lesion (SROI) to the standard deviation of the background noise (σBG) (SNR = SROI/σBG) [23]. The SNR of the brainstem was calculated by the same method as follows: SNR = SB/σBG. Contrast was defined as the ratio of the signal intensity of the lesion (SROI) to the signal intensity of the brainstem (SB) on the b1000 map (contrast = SROI/SB). The CNR was defined as the difference between the SROI and SB divided by the standard deviation in the lesion ROI (σROI) and the brainstem ROI (σB) [13,14,15, 24], as follows: $$ CNR=\frac{S_{ROI}-{S}_B}{\sqrt{{\sigma_{ROI}}^2+{\sigma_B}^2}} $$ The ROI of the lesion on the b1000 and ADC maps was manually drawn as 1 mm2 at the level of the maximum diameter of the lesion, and the corresponding signal intensity and standard deviation were automatically generated on the MRI workstation. The ROI of the brainstem was defined by selecting 10 mm2 of the brainstem, and the signal intensity and standard deviation of each ROI were automatically generated. A circular ROI of 10 mm2 was set in the background on the b1000 map for all patients, and the standard deviation of the ROI was automatically generated. In selecting the ROI, areas affected by susceptible artifacts or volume effects were avoided. The parameters were measured independently and randomly by the two raters at an interval of 2 weeks. The mean value of the two measurements was selected as the final data for further analysis. For brain tissue evaluated on DWI sequences, the diagnostic criterion for cholesteatoma was a very high signal intensity that corresponded to limited diffusion on the ADC maps [8, 25]. The sizes of all lesions were determined on T2-weighted imaging (T2WI) based on the size and location of the lesions observed on the TGSE and RESOLVE sequence images and the premise of avoiding artifacts at air-bone interfaces as much as possible. All statistical analyzes and plots were performed and created using the SPSS 24.0 software package (Chicago, IL, USA), and P < 0.05 was considered statistically significant. The normality of all measurements obtained using the TGSE and RESOLVE sequences was tested using the Shapiro-Wilk test. Significant differences in qualitative parameters between TGSE and RESOLVE DWI were determined using the Wilcoxon rank-sum test, and significant differences in quantitative parameters were determined using the paired t-test. The interreader correlation of the ADC as a quantitative index was evaluated using the intraclass correlation coefficient (ICC). The range of the ICC coefficient was set from 0 to 1.00, and the ICC was defined as follows: < 0.40, poor; 0.41–0.60, moderate; 0.61–0.80, good; > 0.81, excellent [26, 27]. The mean ADCs of the lesions and brainstem measured by the two observers were further calculated, and the differences between them were assessed by paired t-test. 
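For readers who want to reproduce these quantities, the sketch below implements the SNR, contrast and CNR definitions given above, plus the standard two-b-value (mono-exponential) ADC estimate, ADC = ln(S_b0 / S_b1000) / b. The ROI statistics are invented illustrative numbers, not measurements from this study, and the helper assumes images are available as NumPy arrays with boolean ROI masks.

```python
import numpy as np

def roi_stats(image, mask):
    """Mean and standard deviation of the voxels inside a boolean ROI mask."""
    values = np.asarray(image)[np.asarray(mask, dtype=bool)]
    return float(values.mean()), float(values.std(ddof=1))

def snr(s_roi, sigma_background):
    return s_roi / sigma_background

def contrast(s_roi, s_brainstem):
    return s_roi / s_brainstem

def cnr(s_roi, s_brainstem, sigma_roi, sigma_brainstem):
    return (s_roi - s_brainstem) / np.sqrt(sigma_roi ** 2 + sigma_brainstem ** 2)

def adc_two_point(s_b0, s_b1000, b=1000.0):
    """Mono-exponential two-b-value ADC estimate in mm^2/s (b in s/mm^2)."""
    return np.log(s_b0 / s_b1000) / b

# Hypothetical ROI statistics from a b = 1000 s/mm^2 image (not study data).
s_lesion, sd_lesion = 620.0, 55.0      # cholesteatoma ROI
s_stem, sd_stem     = 350.0, 60.0      # brainstem ROI
sd_background       = 6.0              # background-noise standard deviation

print(f"SNR(lesion) = {snr(s_lesion, sd_background):.1f}")
print(f"contrast    = {contrast(s_lesion, s_stem):.2f}")
print(f"CNR         = {cnr(s_lesion, s_stem, sd_lesion, sd_stem):.2f}")
print(f"ADC         = {adc_two_point(850.0, 396.0):.2e} mm^2/s")   # ~0.76e-3 mm^2/s
```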
The lesions were clearly displayed on the TGSE and RESOLVE sequence images obtained in 40 patients among the 42 cases. In only two cases, the RESOLVE sequence images produced more magnetic susceptibility artifacts because the lesion was too small (1.9 mm), and it was difficult to distinguish the lesions from the artifacts, while the TGSE sequence images showed the lesions clearly (Fig. 4). Comparison of the qualitative scores indicated that TGSE BLADE DWI produced less geometric distortion and ghosting artifacts (P < 0.001) and had higher image quality (P < 0.001) than were found for RESOLVE DWI. The average TGSE and RESOLVE scores were as follows (Table 2): geometric distortion (whole), 3.97 ± 0.15 and 3.26 ± 0.26, respectively (P < 0.001); geometric distortion (lesion), 3.95 ± 0.21 and 3.64 ± 0.48, respectively (P < 0.001); ghosting artifacts, 3.92 ± 0.26 and 3.07 ± 0.55, respectively (P < 0.001); and overall image quality, 4.85 ± 0.35 and 4.16 ± .69, respectively. Both TGSE BLADE DWI and RESOLVE DWI had nearly perfect image sharpness (P = 0.23; 2.85 ± 0.35 versus 2.73 ± 0.49). Table 1 shows a comparison of the qualitative parameter scores for TGSE BLADE DWI and RESOLVE DWI, and Fig. 2 shows the distributions of the qualitative scores obtained using TGSE BLADE DWI and RESOLVE DWI. As shown in Fig. 1, axial TGSE DWI precisely defined the cholesteatoma lesion in the right middle ear without geometric distortion or ghosting artifacts, whereas RESOLVE DWI showed significant artifacts at the air-bone interface (between the mastoid, i.e., the location of the cholesteatoma lesion, and the nasal sinus). Table 2 Comparison of results of qualitative parameter evaluation between TGSE and RESOLVE images (42 patients) Bar chart showing a comparison of the qualitative imaging parameters between TGSE BLADE and RESOLVE DWI Comparison of the evaluated quantitative image parameters between TGSE and RESOLVE showed significant differences between the two groups. TGSE BLADE DWI produced a significantly lower SNR (P < 0.001) and higher parameter values (both contrast and CNR (P < 0.001)) than were found for RESOLVE DWI. The results of the statistical analysis are as follows (Table 2): SNR of cholesteatoma, 102.3 ± 32.2 versus 493.7 ± 241.6, respectively (P < 0.001); SNR of brainstem, 59.1 ± 15.5 versus 289.8 ± 140.9, respectively (P < 0.001); contrast, 1.79 ± 0.35 versus 1.62 ± 0.44, respectively (P = 0.005); and CNR, 1.62 ± 0.44 versus 2.7 ± 2.6, respectively (P < 0.001). In terms of the measurement and evaluation of the ADC, values were measured in 40 cases, as the lesions were too small to be measured on the ADC maps in 2 cases. Excellent interreader agreement was obtained. Detailed interreader ICCs are shown in Table 3. All ADCs were measured twice by the two observers, and the average values were taken as the basis for further statistical analysis. As shown in Table 3, there was a significant difference in the ADC of cholesteatoma between TGSE BLADE and RESOLVE DWI (P < 0.01). The mean ADC of the cholesteatoma measured on TGSE (0.763 × 10− 3 mm2/s) BLADE DWI was significantly lower than that measured on RESOLVE (0.928 × 10− 3 mm2/s) DWI (P < 0.01). Similarly, the ADC of the brainstem measured on TGSE (0.498 × 10− 3 mm2/s) BLADE DWI was lower than that measured on RESOLVE (0.773 × 10− 3 mm2/s) DWI (P < 0.01). The box plot in Fig. 3 shows the distributions of the lesion and brainstem ADCs measured on TGSE BLADE and RESOLVE DWI. 
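The paired comparisons reported here follow a standard recipe: a normality check, a Wilcoxon signed-rank test for the ordinal quality scores and a paired t-test for the continuous measures. A minimal SciPy sketch with synthetic data is shown below; the generated values only mimic the reported means and are not the study data, and the original analysis was performed in SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic paired data for 42 patients (values only mimic the reported means).
quality_tgse    = rng.integers(4, 6, size=42)              # overall quality on the 1-5 scale
quality_resolve = rng.integers(3, 6, size=42)
adc_tgse        = rng.normal(0.763e-3, 0.08e-3, size=42)   # mm^2/s
adc_resolve     = rng.normal(0.928e-3, 0.10e-3, size=42)

print("Shapiro-Wilk p (ADC differences):", stats.shapiro(adc_tgse - adc_resolve).pvalue)
print("Wilcoxon signed-rank p (scores): ", stats.wilcoxon(quality_tgse, quality_resolve).pvalue)
print("paired t-test p (ADC):           ", stats.ttest_rel(adc_tgse, adc_resolve).pvalue)
```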
There was no significant difference in ADC values between the 5 patients with cholesteatoma of the external auditory canal and the 35 patients with cholesteatoma of the middle ear in the TGSE (P = 0.236) or RESOLVE (P = 0.127) images. Table 3 Comparison of ADC values between two observers Box plot of the ADC of the cholesteatoma and brainstem showing significant differences in the ADC between TGSE and RESOLVE. The ADC values of the cholesteatoma measured on TGSE and RESOLVE DWI were 0.763 × 10− 3 mm2/s and 0.928 × 10− 3 mm2/s, respectively. The ADC values of the brainstem measured on TGSE and RESOLVE DWI were 0.498 × 10− 3 mm2/s and 0.773 × 10− 3 mm2/s, respectively. In this study, the measurement results in terms of lesion size were evaluated in 42 patients and showed that TGSE BLADE DWI displayed small lesions more clearly than was achieved by RESOLVE DWI. Compared with RESOLVE, TGSE had much better image quality at the air-bone interface (nasal sinus, middle ear, mastoid) and significantly fewer ghosting artifacts and distortion. Furthermore, as shown in Fig. 4, axial TGSE BLADE DWI could completely and clearly show a small lesion (1.9 mm in width) located in the left tympanic cavity with less geometric distortion than was observed using RESOLVE DWI. A 44-year-old male with a small cholesteatoma (1.9 mm width) in the left tympanic cavity (white arrow). a: Axial T1WI. b: Axial T2WI showing the structure of the small cholesteatoma (white arrow). c: Axial TGSE BLADE DWI (b1000) clearly showing a markedly high signal intensity for the small cholesteatoma (white arrows) without artifacts. e: Axial RESOLVE DWI (b1000) showing the high signal intensity of a small lesion (white arrow) with light geometric distortion and the bilateral middle ear mastoid process with a few artifacts (red arrow). d, f: The ADC values on TGSE BLADE and RESOLVE DWI were 0.737 × 10− 3 mm2/s and 0.984 × 10− 3 mm2/s, respectively.
Moreover, TGSE BLADE DWI may be more valuable than RESOLVE DWI for the diagnosis of small cholesteatoma lesions (< 2 mm). Qualitative analysis of image quality showed that almost no geometric distortion or ghosting artifacts were observed in the TGSE images, while the RESOLVE images contained obvious geometric distortion, mostly in the nasal cavity and mastoid, in 8 of 42 (20%) cases. Moreover, serious artifacts were observed in the RESOLVE images in 5 (12%) cases. Hu [21] et al. also demonstrated that TGSE BLADE DWI produced less geometric distortion in the brain and signal pile-up in highly susceptible areas than was found for conventional SE-EPI. The image quality of TGSE BLADE has also been significantly improved. In the quantitative analysis of image quality, TGSE BLADE DWI produced higher contrast and a higher CNR than was observed for RESOLVE DWI. These were prospective results due to the lack of previous reports on TGSE BLADE DWI. This study shows that there is a significant difference in the ADC between the TGSE BLADE and RESOLVE sequences, with a significantly lower ADC of the cholesteatoma (P < 0.01) and brainstem (P < 0.01) found when using TGSE BLADE DWI than RESOLVE DWI. The average ADC for cholesteatoma on RESOLVE DWI was 0.928 × 10− 3 mm2/s, which is consistent with the cholesteatoma ADC (0.7–1.0 × 10− 3 mm2/s) previously reported in the literature [6, 34, 35]. The mean ADC of the cholesteatoma and brainstem on TGSE BLADE DWI was 0.763 × 10− 3 mm2/s and 0.498 × 10− 3 mm2/s, respectively. These findings demonstrate that the ADC obtained in our study should provide an auxiliary basis for more clinical applications of TGSE BLADE DWI in the future. However, there are also some limitations to our study: the number of patients included in this study was relatively small, and the error caused by manual measurement could not be eliminated. This may have affected the accuracy of the true range of the ADC on TGSE BLADE DWI. Moreover, TGSE BLADE DWI is not without its disadvantages. The overall image SNR of TGSE is slightly lower than that achieved by RESOLVE, mainly because placement of the gradient echo with T2* decay effects in the center of k-space reduces the image quality of TGSE, consistent with a previous pediatric brain study reported by Ui [28] et al. In conclusion, TGSE BLADE DWI produced better image quality than was achieved by RESOLVE DWI in the diagnosis of cholesteatoma. TGSE BLADE DWI was also superior to RESOLVE DWI in terms of image distortion, artifacts and lesion conspicuity. In addition, TGSE BLADE DWI appears to be more effective than RESOLVE DWI in detecting small lesions. The datasets used and analyzed in the current study are available from the corresponding author on reasonable request. Apparent diffusion coefficient SNR: CNR: Contrast-to-noise ratio SAR: Specific absorption rate ROI: Region of interest ICC: Louw L. Acquired cholesteatoma pathogenesis: stepwise explanations. J Laryngol Otol. 2010;124(6):587–93. Koji Y, Takashi Y, Akio H, et al. Contributing factors in the pathogenesis of acquired cholesteatoma: size analysis based on MDCT. AJR Am J Roentgenol. 2011;196:1172–5. Abdel RAAK, Rashad GM, Bassem A. Computed tomography staging of middle ear cholesteatoma. Pol J Radiol. 2015;80:328–33. Yates PD, Flood LM, Banerjee A, Clifford K. CT scanning of middle ear cholesteatoma: what does the surgeon want to know? Br J Radiol. 2002;75(898):847–52. Bert DF, Jean-Philippe V, Anja B, et al. 
Middle ear cholesteatoma: non-echo-planar diffusion-weighted MR imaging versus delayed gadolinium-enhanced T1-weighted MR imaging--value in detection. Radiology. 2010;255:866–72. Algin O, Aydın H, Ozmen E, et al. Detection of cholesteatoma: high-resolution DWI using RS-EPI and parallel imaging at 3 tesla. J Neuroradiol. 2017;44:388–94. Benjamin H, Christian K. Diffusion weighted imaging for the detection and evaluation of cholesteatoma. World J Radiol. 2017;9:217–22. Vercruysse JP, De Foer B, Pouillon M, Somers T, Casselman J, Offeciers E. The value of diffusion-weighted MR imaging in the diagnosis of primary acquired and residual cholesteatoma: a surgical verified study of 100 patients. Eur Radiol. 2006;16(7):1461–7. De Foer B, Vercruysse JP, Pilet B, et al. Single-shot, turbo spin-echo, diffusion-weighted imaging versus spin-echo-planar, diffusion-weighted imaging in the detection of acquired middle ear cholesteatoma. AJNR. Am J Neuroradiol. 2006;27(7):1480–2. Lingam Ravi K, Nash Robert, Majithia Anooj et al. Non-echoplanar diffusion weighted imaging in the detection of post-operative middle ear cholesteatoma: navigating beyond the pitfalls to find the pearl. Insights Imaging. 2016:669–678. Elefante A, Cavaliere M, Russo C, et al. Diffusion weighted MR imaging of primary and recurrent middle ear cholesteatoma: an assessment by readers with different expertise. Biomed Res Int. 2015;2015:597896. Schwartz KM, Lane JI, Bolster BD, et al. The utility of diffusion-weighted imaging for cholesteatoma evaluation. AJNR Am J Neuroradiol. 2011;32:430–6. Bogner W, Pinker-Domenig K, Bickel H, et al. Readout-segmented echo-planar imaging improves the diagnostic performance of diffusion-weighted MR breast examinations at 3.0 T. Radiology. 2012;263(1):64–76. Xu X, Wang Y, Hu H, Su G, Liu H. Readout-segmented echo-planar diffusion-weighted imaging in the assessment of orbital tumors: comparison with conventional single-shot echo-planar imaging in image quality and diagnostic performance. Acta Radiol. 2017;58(12)):1457–67. Zhao M, Liu Z, Sha Y, et al. Readout-segmented echo-planar imaging in the evaluation of sinonasal lesions: a comprehensive comparison of image quality in single-shot echo-planar imaging. Magn Reson Imaging. 2016;34:166–72. Ikuhiro K, Takashi U, Yuichiro M, et al. Comparison of diffusion-weighted imaging in the human brain using readout-segmented EPI and PROPELLER turbo spin Echo with single-shot EPI at 7 T MRI. Investig Radiol. 2016;51:435–9. Forbes KP, Pipe JG, Bird CR, et al. PROPELLER MRI: clinical testing of a novel technique for quantification and compensation of head motion. J Magn Reson Imaging. 2001;14:215–22. Pipe James G, Farthing Victoria G, Forbes Kirsten P. Multishot diffusion-weighted FSE using PROPELLER MRI. Magn Reson Med. 2002;47:42–52. Kim T, Baek M, Park J, et al. Comparison of DWI methods in the pediatric brain: PROPELLER turbo spin-Echo imaging versus readout-segmented Echo-planar imaging versus single-shot Echo-planar imaging. AJR Am J Roentgenol. 2018;210:1352–8. Dhepnorrarat RC, Wood B, Rajan GP. Postoperative non-echo-planar diffusion-weighted magnetic resonance imaging changes after cholesteatoma surgery: implications for cholesteatoma screening. Otology Neurotol. 2009;30(1):54–8. Houchun H, McAllister Aaron S, Ning J, et al. Comparison of 2D BLADE turbo gradient- and spin-Echo and 2D spin-Echo Echo-planar diffusion-weighted brain MRI at 3 T: preliminary experience in children. Acad Radiol. 2019;26(12):1597 undefined: undefined. 
Lips LMJ, Nelemans PJ, Theunissen FMD, et al. The diagnostic accuracy of 1.5 T versus 3 T non-echo-planar diffusion-weighted imaging in the detection of residual or recurrent cholesteatoma in the middle ear and mastoid. J Neuroradiol. 2019;S0150-9861(18)30332-8. https://doi.org/10.1016/j.neurad.2019.02.013. Kaufman L, Kramer DM, Crooks LE, et al. Measuring signal-to-noise ratios in MR imaging. Radiology. 1989;173:265–7. Song X, Pogue Brian W, Shudong J, et al. Automated region detection based on the contrast-to-noise ratio in near-infrared tomography. Appl Opt. 2004;43:1053–62. Nevoux J, Lenoir M, Roger G, et al. Childhood cholesteatoma. Eur Ann Otorhinolaryngol Head Neck Dis. 2010;127:143–50. Tyng Chiang J, Rubens C, Pinto Paula NV, et al. Conformal radiotherapy for lung cancer: interobservers' variability in the definition of gross tumor volume between radiologists and radiotherapists. Radiat Oncol. 2009;4:28. Fleiss JL, Cohen J. The equivalence of weighted kappa and the intraclass correlation coefficient as measures of reliability. Educ Psychol Meas. 1973;33:613–9. Attenberger UI, Runge VM, Stemmer A, Williams KD, et al. Diffusion weighted imaging: a comprehensive evaluation of a fast spin echo DWI sequence with BLADE (PROPELLER) k-space sampling at 3 T, using a 32-channel head coil in acute brain ischemia. Investig Radiol. 2009;44(10):656–61. De Foer B, Vercruysse JP, Bernaerts A, Maes J, et al. The value of single-shot turbo spin-echo diffusion-weighted MR imaging in the detection of middle ear cholesteatoma. Neuroradiology. 2007;49:841–8. Venail F, Bonafe A, Poirrier V, et al. Comparison of echo-planar diffusion-weighted imaging and delayed postcontrast T1-weighted MR imaging for the detection of residual cholesteatoma. AJNR Am J Neuroradiol. 2008;29:1363–8. Porter David A, Heidemann Robin M. High resolution diffusion-weighted imaging using readout-segmented echo-planar imaging, parallel imaging and a two-dimensional navigator-based reacquisition. Magn Reson Med. 2009;62:468–75. Williams Marc T, Denis A, Corinne A, et al. Detection of postoperative residual cholesteatoma with delayed contrast-enhanced MR imaging: initial findings. Eur Radiol. 2003;13:169–74. Li Z, Pipe James G, Chu-Yu L, et al. X-PROP: a fast and robust diffusion-weighted propeller technique. Magn Reson Med. 2011;66:341–7. Camilla R, Andrea E, Di Lullo Antonella M, et al. ADC benchmark range for correct diagnosis of primary and recurrent middle ear cholesteatoma. Biomed Res Int. 2018;2018:7945482. Thiriat S, Riehm S, Kremer S, et al. Apparent diffusion coefficient values of middle ear cholesteatoma differ from abscess and cholesteatoma admixed infection. AJNR Am J Neuroradiol. 2009;30:1123–6. The authors state that this work has not received any funding. Department of Radiology, Eye & ENT Hospital of Fudan University, 83 Fenyang Road, Shanghai, 200031, China Yaru Sheng, Rujian Hong & Yan Sha Scientific Marketing, Siemens Healthcare, Shanghai, China Zhongshuai Zhang Department of Digitalization, Siemens Shenzhen Magnetic Resonance, Ltd., Shenzhen, China Kun Zhou & Caixia Fu Yaru Sheng Rujian Hong Yan Sha Kun Zhou Caixia Fu ZZS and SYR designed this study. SYR collected patient data and was a major contributor to writing the manuscript. HRJ and SY analyzed and evaluated two kinds of sequential images. ZK and FCX provide guidance on magnetic resonance techniques. All authors read and approved the final manuscript. Correspondence to Yan Sha. 
This study was approved by the Review Committee of Eye & ENT Hospital of Fudan University, and written informed consent was obtained from all patients. For participants under 16 years old, written informed consent was obtained from a parent or guardian. Sheng, Y., Hong, R., Sha, Y. et al. Performance of TGSE BLADE DWI compared with RESOLVE DWI in the diagnosis of cholesteatoma. BMC Med Imaging 20, 40 (2020). https://doi.org/10.1186/s12880-020-00438-7 Received: 11 December 2019 Accepted: 30 March 2020 Middle ear TGSE BLADE DWI RESOLVE DWI Head and neck imaging
CommonCrawl
Average value when it is not included in the support There are a few ways I think of averages of a (discrete) random variable: Average is an OLS estimate of running a regression of the random variable on a constant term. In that sense, it is a value that 'best' represents the data (minimizes the Euclidean distance) A central tendency of the data (i.e. my best guess of the random variable, without having any data on it) Now, in many cases, the average is not included in the support of the random variable. For instance, the expected value of the outcome of rolling a die is 3.5. However, this value is not included in the support. How would one interpret the average in this case? I think this is one of those cases in which "interpretation" (another common one is "intuition") is just not the problem. The mean is $\frac{1}{n}\sum x_i$ You get that answer, because that's the definition. Here it may clash with intuition, because you can't roll 3.5 on a die, but that's more a problem of the intuition itself - which is not something that's always useful. If it were, we wouldn't need math. one_observationone_observation $\begingroup$ Note also that awkward-looking averages can be usually made sensible by translating to totals. If gender is coded 0 or 1 then mean 0.5 may look awkward to interpret at first sight, but everyone should realise that it is just the equivalent of 50 males and 50 females, or whatever the frequencies are. There are too many feeble jokes about statistical reports of average families with 1.2 children, or whatever, but the problem is the idea that a mean must correspond to a concrete instance. $\endgroup$ – Nick Cox Feb 8 '16 at 16:59
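A quick numeric illustration of the answer's point (purely illustrative): the mean of a fair die is 3.5 by definition, and the long-run average of simulated rolls settles near 3.5 even though no individual roll can take that value.

```python
import random

faces = [1, 2, 3, 4, 5, 6]
print(sum(faces) / len(faces))                    # 3.5 by definition

rolls = [random.choice(faces) for _ in range(100_000)]
print(sum(rolls) / len(rolls))                    # close to 3.5, yet never a possible roll
```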
CommonCrawl
Helly–Bray theorem In probability theory, the Helly–Bray theorem relates the weak convergence of cumulative distribution functions to the convergence of expectations of certain measurable functions. It is named after Eduard Helly and Hubert Evelyn Bray. Let F and F1, F2, ... be cumulative distribution functions on the real line. The Helly–Bray theorem states that if Fn converges weakly to F, then $\int _{\mathbb {R} }g(x)\,dF_{n}(x)\quad {\xrightarrow[{n\to \infty }]{}}\quad \int _{\mathbb {R} }g(x)\,dF(x)$ for each bounded, continuous function g: R → R, where the integrals involved are Riemann–Stieltjes integrals. Note that if X and X1, X2, ... are random variables corresponding to these distribution functions, then the Helly–Bray theorem does not imply that E(Xn) → E(X), since g(x) = x is not a bounded function. In fact, a stronger and more general theorem holds. Let P and P1, P2, ... be probability measures on some set S. Then Pn converges weakly to P if and only if $\int _{S}g\,dP_{n}\quad {\xrightarrow[{n\to \infty }]{}}\quad \int _{S}g\,dP,$ for all bounded, continuous and real-valued functions on S. (The integrals in this version of the theorem are Lebesgue–Stieltjes integrals.) The more general theorem above is sometimes taken as defining weak convergence of measures (see Billingsley, 1999, p. 3). References 1. Patrick Billingsley (1999). Convergence of Probability Measures, 2nd ed. John Wiley & Sons, New York. ISBN 0-471-19745-9. This article incorporates material from Helly–Bray theorem on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
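A standard counterexample, not part of the article above but widely used, shows why the boundedness of g matters. Let $X_n$ take the value $n$ with probability $1/n$ and the value $0$ with probability $1 - 1/n$, and let $X = 0$. Then $F_n$ converges weakly to $F$, the distribution function of the constant $0$, since $F_n(x) \to F(x)$ at every continuity point $x \neq 0$. Yet for the unbounded function $g(x) = x$ we have $\int_{\mathbb{R}} x\,dF_n(x) = \operatorname{E}(X_n) = n \cdot \tfrac{1}{n} = 1$ for every $n$, while $\int_{\mathbb{R}} x\,dF(x) = 0$, so the conclusion fails once $g$ is allowed to be unbounded. This is precisely the failure mode behind the remark above that $\operatorname{E}(X_n) \to \operatorname{E}(X)$ need not follow.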
Wikipedia
Metallic surface doping of metal halide perovskites Yuze Lin ORCID: orcid.org/0000-0002-0325-38421, Yuchuan Shao1, Jun Dai2, Tao Li3, Ye Liu1,4, Xuezeng Dai ORCID: orcid.org/0000-0002-5544-590X1, Xun Xiao ORCID: orcid.org/0000-0002-9810-24481, Yehao Deng ORCID: orcid.org/0000-0002-4119-71321, Alexei Gruverman3, Xiao Cheng Zeng ORCID: orcid.org/0000-0003-4672-85852,3,4,5 & Jinsong Huang ORCID: orcid.org/0000-0002-0509-87781,4 Nature Communications volume 12, Article number: 7 (2021) Cite this article Electronic properties and materials Intentional doping is the core of semiconductor technologies to tune electrical and optical properties of semiconductors for electronic devices, however, it has shown to be a grand challenge for halide perovskites. Here, we show that some metal ions, such as silver, strontium, cerium ions, which exist in the precursors of halide perovskites as impurities, can n-dope the surface of perovskites from being intrinsic to metallic. The low solubility of these ions in halide perovskite crystals excludes the metal impurities to perovskite surfaces, leaving the interior of perovskite crystals intrinsic. Computation shows these metal ions introduce many electronic states close to the conduction band minimum of perovskites and induce n-doping, which is in striking contrast to passivating ions such as potassium and rubidium ion. The discovery of metallic surface doping of perovskites enables new device and material designs that combine the intrinsic interior and heavily doped surface of perovskites. Metal halide perovskite materials have been a research hotspot in the field of optoelectronics and new electronics, including solar cells1,2,3,4,5,6, photodetectors7, light-emitting diodes8,9, ionization radiation detection10,11, laser12, single-photon emitter13, spintronic devices14, etc. Excellent performance have been frequently observed owing to their exceptional intrinsic properties including long charge-carrier lifetime, high mobility, strong light absorption, tunable bandgap, long spin-relaxation lifetime, and efficient emission15,16,17,18,19,20,21. Among many electronic devices, doped semiconductors at designated location at certain doping concentration are needed for multiple purposes, such as ohmic contact formation, p-n junction formation, tuned resistivity, carrier recombination tuning, thermoelectric functionality, etc. However, intentional doping of metal halide perovskites remains to be a grand challenge. Most Pb-based perovskite materials are very good intrinsic semiconductors in their single crystalline form, whereas addition of elements with either more or less valence electrons does not obviously change their conductivity22,23,24,25. Various added metal ions, like potassium26 and rubidium ion27, have shown different functions in perovskite thin film and crystals including alloying, defect passivation, and/or accompanied tailoring of perovskite crystallization process25. Self-doping has been observed in Pb perovskites with non-stoichiometric compositions, however the doping concentration either has a very small tunable range or is not easily controlled28,29,30. Here, we show that some metal ions such as silver, strontium, cerium ions, which exist in the precursors for metal halide perovskites as impurities, can n-dope the surface of perovskites from being intrinsic all the way upto metallic. 
We find these metal ion additives prefer to stay at perovskite surfaces because of their low solubility in metal halide perovskite crystals, leaving interior of perovskite crystals intrinsic. Computation shows that these metal ions introduce many electronic states close to the conduction band minimum of perovskites and induce n-doping, which is in striking contrast to passivating ions such as potassium26 and rubidium ion27. Surface doping of perovskites Semiconductor to metal transition of perovskite surfaces was discovered in our measurement of the conduction change of halide perovskite thin films, which are covered with metal impurity ions. Here, CH3NH3PbI3 (MAPbI3) polycrystalline thin films (ca. 500 nm thick, 1–2 µm grain sizes) were first covered by silver, strontium, cerium halide powders, and the metal halide powders were removed after thermal annealing. After this simple treatment, we observed a huge increase of dark current of lateral devices based on MAPbI3 film by up to four orders of magnitude in lateral structure devices as illustrated in Fig. 1a. In striking contrast, the same treatment of MAPbI3 films using KI, KBr, or RbI does not obviously change perovskite device dark current, which also indicates the dark current increase comes from the effect of metal cations, rather than halide anions. The dark currents of MAPbI3 lateral devices further increase when AgBr were used for surface treatment to replace AgI (Fig. 1a). Furthermore, we used four-probe measurement of lateral structure devices to avoid possible impact of contact resistance to total current. The four-probe measurement results as shown in Supplementary Fig. 1 confirmed the huge enhancement of film conductivity upon doping. The dramatically increasing dark current indicates that the MAPbI3 surfaces are doped by these ions, and maybe already to become metallic states. To verify this, we measured temperature-dependent current of lateral devices based on MAPbI3 films without and with surface treatment by Ag+. In an intrinsic semiconductor such as halide perovskites, electronic conduction should increase with temperature (T) owing to increased carrier concentration if the mobility variation is relatively small. In contrast, metallic conductors show a decrease of conduction with increasing temperature because of increased carrier scattering induced by lattice31. As shown in Fig. 1b, the current of lateral device based on the pristine MAPbI3 increases at higher temperature. The device based on ion-treated MAPbI3 thin films show increased dark current at lower temperatures, as a hallmark metallic feature. Here we measured dark current at relatively low temperatures from 160 K to 300 K to exclude the contribution of ion migration to the total current32. Ion migration is completely frozen at low temperature such as 160 K, whereas the conductivity of perovskite with AgBr doping is still several orders of magnitude larger than that of pristine perovskites. This confirms the conduction is dominated by electronic contribution, rather than ionic migration of either perovskite or silver halide itself. In addition, the forward and reverse I–V scan curves of the lateral device based on AgBr-doped MAPbI3 showed negligible hysteresis (Supplementary Fig. 1c), which agrees with that the electronic contribution dominates the conduction, because the obvious ion migration should cause the current hysteresis. 
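As an illustrative aside, not taken from the paper, the opposite temperature trends used above to separate semiconducting from metallic conduction can be sketched numerically; the activation energy and scattering coefficient below are arbitrary placeholder values, not fitted to the reported data.

import numpy as np

T = np.linspace(160, 300, 8)              # temperature range used in the experiment, K
kB = 8.617e-5                             # Boltzmann constant, eV/K
Ea = 0.30                                 # placeholder activation energy, eV (illustrative only)
sigma_semi = np.exp(-Ea / (2 * kB * T))   # thermally activated (semiconductor-like): rises with T
sigma_metal = 1.0 / (1.0 + 4e-3 * (T - 160.0))  # lattice-scattering limited (metal-like): falls with T (placeholder coefficient)

for t, s_semi, s_metal in zip(T, sigma_semi, sigma_metal):
    print(f"{t:5.0f} K   semiconductor-like {s_semi:.2e}   metal-like {s_metal:.3f}")

The only point of the sketch is the sign of the slope: the semiconductor-like column rises with temperature while the metal-like column falls, which is the qualitative signature exploited in Fig. 1b.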
Another experiment also indicated that the current increase is caused by doping of the perovskite rather than by the ionic conductivity of an ion conductor such as silver halide. To further confirm that AgBr can dope MAPbI3, we mixed 10 wt.% AgBr into MAPbI3 in solution and made a mixed film. We measured the conductivity of the MAPbI3 thin film with and without the AgBr additive. As shown in Supplementary Fig. 2, the current of the lateral device with a large channel length (0.95 cm) based on MAPbI3 mixed with 10 wt.% AgBr was about four orders of magnitude higher than that of the control device based on MAPbI3. Fig. 1: Surface doping of perovskites by some metal ions. a Dark current of lateral structure devices based on MAPbI3 thin films without and with surfaces treated by different metal ions; the inset shows the schematic lateral device structure, in which the width of the electrodes is 1 mm, the channel length is 50 μm, and the applied bias is 20 V. b Dark current of lateral devices based on MAPbI3 thin films without and with AgBr surface treatment at different temperatures, showing a transition from semiconducting to metallic properties after surface treatment. c Transmittance of the same MAPbI3 thin film before and after AgBr surface treatment. d Majority carrier type and concentration, and e mobility of MAPbI3 thin films with and without surfaces treated by AgBr, SrI2, and CeI3, calculated from Hall effect measurements and four-probe conductivity measurements. We also measured the transmittance of a MAPbI3 thin film before and after surface treatment by AgBr to probe the doping-induced change in optical properties. Here the same MAPbI3 thin film was used to avoid variations in composition or morphology. A decrease of transmittance in the wavelength range of 2000–2500 nm is observed for the MAPbI3 film with a Ag+-treated surface, as shown in Fig. 1c. It can be attributed to the enhanced plasma reflectivity of heavily doped MAPbI3. Here, the calculated carrier concentration of the doped perovskite reaches 6.6 × 10²⁰ cm⁻³ from the peak of the plasma reflection spectrum based on the Drude model33. Such a high carrier concentration is consistent with the transition to metallic states after doping of MAPbI3. This indicates that the carrier concentration of the ion-doped perovskite increased by 6–8 orders of magnitude compared with what was reported in the bulk of polycrystalline perovskite films30. We noted that the dark current in lateral devices only increased by 3–4 orders of magnitude, which can be explained by the surface doping effect, the decreased charge-carrier mobility at high carrier concentration, as well as the imperfect connectivity of the surface-doped layer. Meanwhile, the Sr2+- and Ce3+-treated MAPbI3 thin films did not show obvious plasma peaks in the transmittance spectra between 300 nm and 2500 nm, which indicates that the carrier density of the SrI2- and CeI3-treated MAPbI3 surfaces is much lower than that of the AgBr-treated surface. To verify this, Hall effect measurements were conducted to investigate the carrier density, doping type and charge-carrier mobility of the perovskite surfaces treated by metal halides, and the measurement results are summarized in Fig. 1d, e and Supplementary Table 1. The Ag+-, Sr2+-, and Ce3+-treated MAPbI3 surfaces showed n-type behavior. In contrast, the as-cast MAPbI3 thin film is weakly p-type with a hole concentration of 5.3 × 10¹² cm⁻³. The calculated electron concentrations of the perovskite surfaces treated by AgBr, SrI2, and CeI3 are 3.2 × 10²⁰, 3.6 × 10¹⁹, and 2.5 × 10¹⁹ cm⁻³, respectively.
The calculated plasma reflection peaks of the Sr2+- and Ce3+-doped MAPbI3 are far beyond 2500 nm, consistent with the measurement results. The electron mobilities are 5.3 × 10⁻³, 9.5 × 10⁻³, and 1.7 × 10⁻² cm² V⁻¹ s⁻¹ for the AgBr-, SrI2-, and CeI3-treated MAPbI3 surfaces, which are 2–3 orders of magnitude lower than the hole mobility in pristine films (1.4 cm² V⁻¹ s⁻¹) and can be well explained by the additional carrier scattering centers introduced by the dopants. The work function change of MAPbI3 induced by doping was measured by Kelvin probe force microscopy (KPFM). We measured an area including the boundaries of the doped and undoped regions so that the change of work function among different regions can be quantified within one image. A clear boundary with a large contact potential difference (CPD) is identified between the pristine MAPbI3 crystal and the metal-halide-treated regions, and the shape of the contact area depends on the size and shape of the metal halide particles. The CPD in the measurements is defined as (Φtip ‒ Φsample)/e. We used the same type of conductive tip (i.e., consistent Φtip), so the CPD value should be directly related to the work function of the measured samples. The CPD line profile across the boundary shown in Fig. 2 reveals a higher CPD value, or lower work function, for the metal-halide-treated areas, by ca. 160–260 meV, compared with the pristine MAPbI3 surface region. This result shows that all these metal ions n-dope the MAPbI3 surface. It should be noted here that the untreated surface of MAPbI3 might already be partially self-doped by the volatilization of methylammonium iodide (MAI), and thus the change of surface work function measured by CPD cannot be directly used to calculate the change of doping concentration of the MAPbI3 crystals. The excess metal halides were cleanly removed, and the CPD change is not caused by the metal halide itself, because the CPD shift of a pure metal halide relative to MAPbI3 is opposite to the doping-induced CPD shift. We deposited a ~5 nm thick AgI layer on a MAPbI3 surface in vacuum, and then conducted KPFM measurements at the edge between the AgI-covered perovskite surface and the as-cast perovskite surface. Here we used pan paper to cover part of the MAPbI3 surface and thermally evaporated an AgI layer onto the uncovered part of the MAPbI3. As shown in Supplementary Fig. 3, the CPD of AgI on the perovskite surface is ~80 mV lower than that of the as-cast MAPbI3 surface. In contrast, the CPDs of perovskite surfaces treated by metal halide doping are higher than that of MAPbI3. X-ray photoelectron spectroscopy (XPS) was used to investigate the surface chemistry of MAPbI3 thin films treated by Ag+, Sr2+, and Ce3+, respectively. There was no obvious metal halide residue detected within the equipment sensitivity limit (Supplementary Fig. 4). All the peaks of N 1s, Pb 4f, and I 3d of the MAPbI3 surfaces treated by metal ions shifted to higher binding energy, as shown in Fig. 2j–l, indicating an upward shift of the Fermi level, which is consistent with an n-doping effect by the excess metal ions34,35. Fig. 2: Doping type of perovskite surface doping. Height and CPD images and cross-sectional curves of a–c AgI-, d–f SrI2-, or g–i CeI3-treated MAPbI3 single-crystal surfaces by KPFM measurement. j–l XPS scans of the N 1s, I 3d, and Pb 4f measured from the untreated MAPbI3 and MAPbI3 with surfaces treated by AgI, SrI2, and CeI3. Distribution of metal ion dopants One question that naturally arises is whether these metal ions substitute Pb2+ or MA+ ions to cause bulk doping within the perovskite crystal lattice.
Several experiment results can exclude this scenario. First, X-ray diffraction (XRD) characterization of both spun and blade-coated thin films (Supplementary Fig. 5) and polycrystalline powders (Fig. 3a) shows that no notable XRD peak shifts within equipment resolution limit has been observed after adding these ions into MAPbI3 at a concentration of 5 wt%. Second, we grew perovskite single crystals from perovskite precursor solutions with metal ion additives (0.1% Ag+, 0.2% Sr2+, 0.1% Ce3+ by weight ratio), and then measured the ion distribution using time-of-flight secondary ion mass spectrometer (ToF-SIMS). The ToF-SIMS intensities (Fig. 3b) of Ag+, Sr+, and Ce+ on the surface are much higher (3–5 orders) than those at the bulk of single crystals, and the intensities of extrinsic metal ions within crystal bulk are at the level of background signals in MAPbI3 thin single-crystal grown from precursor solution without additive. These results support that these extrinsic metal ions prefer to stay on the surfaces rather than incorporate into the crystals. Third, Hall effect measurement was conducted to characterize the bulk-doping concentration of these single crystals. The MAPbI3 crystal bulk is known to be weakly p-doped15. The almost invariant-free carrier concentrations and doping type in these single crystals grown from precursors with added metal ions also suggests that metal ion impurities do not incorporate into the perovskite lattice to cause bulk doping (Fig. 3c, d). Therefore, the observed dark current change of lateral structure devices merely comes from the surface doping of perovskites by the adsorption of metal ion impurities with perovskite surfaces. Ag+ and Sr2+ ions have been introduced into hybrid perovskite solar cells as additives to enhance the efficiency of solar cells36,37,38,39. It should also be noted the XRD peak shift of polycrystalline films cannot be simply explained by metal ion incorporation into perovskite lattice, unless the complexity of strain distribution in perovskite films could be well excluded40. Here we scraped the MAPbI3 powder with and without additives from the substrates for XRD study to exclude the impact of impact of strain caused by the substrates. No obvious change of XRD peaks is observed for the sample with 5% of metal additives, suggesting these metal ions do not easily incorporate MAPbI3, agreeing with TOF-SIMS study of single-crystal samples. Fig. 3: Distribution of metal ion dopants. a XRD curves of MAPbI3 powder without and with 5 wt.% Ag+, Sr2+ or Ce3+ additives. b TOF-SIMS intensity of Ag+, Sr+, and Ce+ on the surface and at the bulk (>800 nm depth) of MAPbI3 thin single crystals (TSCs) grown from precursor solution with 0.1% Ag+, 0.2% Sr2+, and 0.1% Ce3+ additives. Here the intensities of extrinsic metal ions within crystal bulk are at the level of background signals in MAPbI3 thin single-crystal grown from precursor solution without additive. c, d Hall effect measurements of MAPbI3 single crystals grown in precursor solutions c without and d with 0.1% Ag+ additives. We polished the surface of MAPbI3 singles crystals grown from precursor solution without and with metal ion additives, to avoid the impact of surface doping by the metal ions. The results show that the hole concentrations in the MAPbI3 single crystals grown in precursor solutions with and without Ag+ ions are 1.05 × 1011 cm−3 and 1.08 × 1011 cm−3, respectively. The almost same hole concentration indicates that the metal ions would not induce bulk doping. 
To find out why the extrinsic metal ions can n-dope the surface of metal halide perovskites, we carried out first-principles computation within the framework of density functional theory (DFT). The surface doping is realized through the additive metal ions on the surface, therefore the simulation models composed of MAPbI3 surfaces and metal ions are natural representations of the surface doping from the structural perspective. Although the geometry of excess Ag, Sr, and Ce ions in metal halide perovskites might be more complex as treated with the surface adsorption model, the adsorption of metal atoms using surface models captures the essential interaction between metal atoms and halide perovskites, and provide insights into the engineering of surface band edges upon doping. The computation shows that, unlike ions such as K+ and Rb+ reported previously (Supplementary Fig. 6)26,27, Ag (Sr or Ce) clearly introduces some occupied states below the Fermi level in the doped samples, and the computed finite density of states (DOS) at the Fermi level can be observed for the Ag (Sr or Ce)-doped samples, suggesting the metallic state (Fig. 4). The highest occupied level of Ag, Sr, and Ce are close to the conduction band minimum (CBM) of MAI- or PbI2-terminated surface (Supplementary Fig. 7). It should be noted that the occupied orbitals of isolated metal atoms above or near the lowest unoccupied band of perovskite is the necessary but not the sufficient condition of n-doping perovskite surface, when surface metal atoms like K etc., strongly hybridize with surface I atoms on both MAI terminal and PbI2 terminal of perovskite. The normalized total DOS and the DOS contributed from Ag (Sr or Ce) are plotted in Fig. 4a, b. The computed charge density corresponding to the highest occupied and lowest unoccupied level of the Ag, Sr, and Ce adsorbed on the MAI-terminated and PbI2-terminated surfaces are also plotted in Fig. 4c, from which we can clearly see the contributions of Ag, Sr, and Ce states to the charge density of the highest occupied states near the Fermi level. The stronger doping effect of Ag+ doped perovskite surface with the counter ion from I− to Br− can be attributed to the relatively higher ionization degree of Ag+ in AgI to AgBr41. Fig. 4: Modeling of surface doping mechanism. Computed normalized total and partial DOS of Ag, Sr, and Ce adsorbed on a MAI-terminated and b PbI2-terminated surfaces of MAPbI3. c The iso-surface plot of the charge density of the highest occupied band (HOB) and the lowest unoccupied band (LUB) of Ag, Sr, and Ce adsorbed on MAI-terminated and PbI2-terminated surfaces of MAPbI3. The purple, grey, silver, green and chartreuse spheres represent I, Pb, Ag, Sr and Ce atoms, respectively. The charge density distribution is highlighted in yellow. In summary, we discovered the surface of metal halide perovskites can be doped in a controlled manner via certain metal ions with carrier concentration changed by eight orders of magnitude. This study also explains the difficulty for the bulk-doping of perovskites, because most metal ions cannot incorporate into the crystal structure of metal halide perovskite. The non-uniform distribution of metal ion dopants in perovskite polycrystalline films may strongly impact halide perovskite optoelectronics in many different ways. Heavily doping perovskites can potentially lead to new applications such as thermoelectric energy conversion, given these perovskites have known low thermal conductivity. 
Non-uniform doping in polycrystalline perovskite thin film with optimized doping ratios can form homojunctions between individual perovskite grains and their adjacent grain boundaries and/or surface, which can be expected to minimize charge recombination by separating photogenerated electrons and holes spatially into different transport channels, and facilitating the charge separation in low-dimensional perovskites in solar cells, as well as other optoelectronics, such as photodetectors, radiation detectors, and light-emitting diodes. MAI were synthesized according to the methods reported in literature42. Other raw materials and solvents were purchased and were used without further purification. Fabrication of perovskite thin films The spin-coated perovskite thin film was fabricated by the anti-solvent method, and the composition of MAPbI3 were used here. The 80 μL precursor solution (1.3 M) was spun onto substrate at 2000 rpm for 2 s and 4000 rpm for 20 s, the sample was quickly washed with 130 μL toluene at during spin coating. Subsequently, the sample was annealed at 70 °C for 10 min and 100 °C for 10 min. The perovskite precursor solution was dissolved in mixed solvent of N, N-dimethylformamide and dimethyl sulfoxide with the volume ratio of 9:1. The same procedure is used to fabricate the MAPbI3 thin film with AgBr additive, by using the perovskite precursor solution with 10 wt.% AgBr additive. The blending ratio in this work mean the weight ratio of metal halide/lead iodide in precursor solution. For blade-coating perovskite film, we bladed films at 160 °C from the same perovskite precursor solution used in spin-coated processing, and then thermal annealed them at 100 °C for 30 min. Dark current measurement of lateral structure devices For lateral devices, OIHP thin films were spin-coated on glass substrates, and symmetrical electrodes were thermally deposited by using shadow mask. We used Cr (15 nm)/Au (25 nm) as electrodes, and two probe and four-probe geometry of electrode pattern were used. Unless stated otherwise, the electrode width is 1 mm and the channel length between electrodes is 50 μm for two probe devices and 100 μm for four-probe devices, respectively. Unless stated otherwise, we recorded the stable currents of devices at a bias of 20 V in two probe devices and I–V measurement of four-probe devices was from zero bias to high bias (forward scan) scan. The absence of obvious current hysteresis of our device is confirmed by changing current scanning directions (Supplementary Fig. 1c). We put metal halide powder on the surface of MAPbI3 thin films, and then annealed samples at 85 °C for 15 min. After that metal halide powders were blew off by using high-pressure nitrogen knife. The metal halide powders were purchased from Alfa Aesar, and used as received. The metal halide powders have sizes of submicrometer to hundreds of micrometers, and thus likely have point contacts with the perovskite surfaces. Thermal annealing at 85 °C would drive the ion diffusion so as to increase the coverage of ions on the perovskite surface. The measurements were performed in a Lakeshore Probe Station at dark condition, and the samples were placed on a metal plate with its temperature being controlled by a heater and injected liquid N2. The temperature-dependent conductivity was measured in vacuum (ca. 10−5 torr). Transmittance measurement Transmittance of the same MAPbI3 thin film before and after AgBr surface treatment was recorded using a PerkinElmer Lambda 1050 UV/VIS/NIR spectrometer. 
The carrier concentration n is calculated by33 $$n = \frac{\omega_p^2 m^* \varepsilon_0 \varepsilon_r}{e^2}$$ where ε0 and εr are the vacuum dielectric constant and the relative permittivity, respectively, m* is the charge-carrier effective mass, and e is the elementary charge. ωp is the plasma frequency, given by $$\omega_p = \frac{2\pi c}{\lambda}$$ where c and λ are the speed of light and the wavelength, respectively. In the perovskite thin film with an AgBr-treated surface, λ is ~2250 nm, and thus ωp is ~8.37 × 10¹⁴ Hz. Single-crystal growth We followed our previously published hydrophobic interface-confined crystal growth method43. PbI2 (7.376 g) and MAI (2.544 g) were added into 10 mL γ-butyrolactone solvent to obtain the MAPbI3 precursor solution. Then AgI, SrI2, or CeI3 was blended into the MAPbI3 precursor solution at a weight ratio of 0.1%, 0.2%, or 0.1%, respectively. The solutions were dropped between two PTAA-covered substrates, and thin MAPbI3 single crystals with a thickness of about 50 µm were grown by increasing the temperature. XRD measurement XRD measurements were performed with a Bruker-AXS D8 Discover Diffractometer, configured in parallel beam geometry with Cu Kα radiation. It should be noted that we used powder scraped from the substrates for the XRD measurements to exclude the impact of strain on the lattice constant40. We have shown that thermal annealing can induce strain in perovskite films grown on ITO substrates, while scraped powders are strain free40. The small amount of powder gave wide XRD peaks. As shown in Supplementary Fig. 5, the full widths at half maximum of the XRD peaks of perovskite thin films deposited by both the spin-coating and blade-coating processes are comparable to those of perovskite single crystals in the literature24. No notable XRD peak shift was observed within the instrument's resolution limit after adding 5% excess metal ions into the perovskite thin films. In addition, the blade-coated perovskite film with 5% Sr2+ showed an obvious orientation change relative to the pure MAPbI3 film.
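As a rough numerical cross-check of the plasma-edge relations given at the start of this Methods subsection, the two formulas can be evaluated directly. This is a minimal sketch: the effective mass and relative permittivity below are placeholder values chosen for illustration, since the excerpt does not state the values the authors used, so the output matches the quoted 6.6 × 10²⁰ cm⁻³ only in order of magnitude.

import math

# physical constants (SI units)
e = 1.602e-19          # elementary charge, C
eps0 = 8.854e-12       # vacuum permittivity, F/m
m_e = 9.109e-31        # electron rest mass, kg
c = 2.998e8            # speed of light, m/s

lam = 2250e-9          # plasma reflection wavelength from the text, m
m_eff = 0.2 * m_e      # placeholder effective mass (assumed, not from the paper)
eps_r = 5.0            # placeholder high-frequency relative permittivity (assumed)

omega_p = 2 * math.pi * c / lam                    # ~8.4e14 s^-1, as quoted in the text
n = omega_p**2 * m_eff * eps0 * eps_r / e**2       # carrier concentration, m^-3
print(f"omega_p = {omega_p:.2e} s^-1, n = {n / 1e6:.2e} cm^-3")

With these placeholder parameters the script returns about 2 × 10²⁰ cm⁻³, the same order of magnitude as the value reported in the text; the exact number scales linearly with the assumed m* and εr.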
Hall effect measurement The details of the Hall effect measurement have been reported in previous publication29. We polished the surface of MAPbI3 singles crystals grown from precursor solution without and with metal ion additives, to avoid the impact of surface doping by the metal ions. The MAPbI3 thin films with ~500 nm thickness was used in Hall effect measurement. The detailed parameters are shown in Supplementary Table 1, and one typical 5 nm depth surface is adopted for MAPbI3 thin films with surface treated by metal halide. Hall measurements were conducted in air. The mobility is calculated by conductivity from four-probe measurement and carrier concentration from Hall effect. The carrier concentration n is determined by $$n = \frac{{{\mathrm{I}}_{\mathrm{x}}{\mathrm{B}}_{\mathrm{z}}}}{{V_{\mathrm{H}}te}}$$ where Ix and Bz is applied constant current and magnetic field, respectively. VH is measured Hall voltage. t is the thickness of the sample and e is elementary charge. KPFM measurement To prepare the single-crystal sample for KPFM study, we deposited the metal iodide particles onto certain area of pure MAPbI3 thin single crystals on ITO/PTAA substrates, followed by thermal annealing at 80 °C for 10 min. Metal halide powders with sizes of tens micrometers for AgI, hundreds micrometers for SrI2, and less than micrometers for CeI3 on the surface of MAPbI3 crystals. The shape of contact area depends on the size and shape of metal halide particles. Before the KPFM measurements, the samples with particles on the surface were examined under the build-in optical microscope of atomic force microscopy (AFM) to locate the region with the edge of untreated and treated area. The measurement area was chosen by position the AFM tip at the edge of particle on samples where the position is memorized in the system. Then the AFM tip was lifted up, and the particles on perovskite surface were blown away by the nitrogen knife, leaving the area with edge of undoped and doped MAPbI3 surface for KPFM measurements. Finally, the AFM tip was engaged back to the original position. KPFM is an AFM-based surface imaging technique to acquire work function or surface potential of a material. In this work, the KPFM measurements were performed using a commercial AFM (MFP3D-BIO, Asylum Research, USA) and Pt/Ir coated silicon probes (PPP-EFM, Nanosensors, Switzerland). The standard 2-pass KPFM technique was employed. The first pass acquired the morphology information, whereas during the second pass, tip was lifted ~30 nm above the sample morphology based on the first pass and the surface potential or CPD was acquired. The CPD in our measurements is defined as (Φtip ‒ Φsample)/e. Meanwhile, 1 V DC and 2 V AC biases were supplied to the conductive probe. The CPD value was measured as the DC bias that nullify the first resonance component of the electrostatic force between tip and sample surface. The observed CPD is a relative potential difference with respect to the conductive probe. All measurements were conducted in dry N2 to prevent sample degradation. During the measurements, the good quality of topography tracking (perfect matching of trace and retrace curves) was first ensured for each KPFM measurement via adjusting the scan rate and set point to minimize the topographic crosstalk. Second, the lift height was always maintained at about 30 nm above the surface for each measurement. 
KPFM measurements need both AC and DC inputs, where AC bias is for generating the harmonic electrostatic force while DC bias is supplied to nullify the 1st harmonic term, i.e., the CPD value. The 1 V DC potential is the initial set value send to the circuit. It is required by the equipment that the tip potential is more positive than the sample potential to initiate the scan. When the scan starts, the feedback loop is triggered and the CPD value will be updated and stored in time as the DC input. The CPD images were flattened by the zero order flatten process, and an offset was automatic subtracted during flatten process. First-principles computation was carried out in the framework of DFT as implemented in the VASP 5.4 program. The generalized gradient approximation in the form of Perdew–Burke–Ernzerhof was used for the exchange–correlation functional. The ion–electron interaction was treated with the projector-augmented wave method. Grimme's DFT-D3 dispersion correction was adopted to describe the long-range van der Waals interactions. The structures of the metal-doped surfaces were obtained through three steps. First, we optimized the structure of bulk tetragonal MAPbI3, for which both the lattice and atomic positions were allowed to relax until the force on each atom was smaller than 0.02 eV/Å. Second, we cleaved the PbI2-terminated and MAI-terminated symmetric (001) slabs from the optimized MAPbI3 tetragonal structure, respectively, each has a super cell of 2 × 2 and nine layers of MAI and PbI2 in total. About 30 Å vacuum was added on top of the slab surface so that the interaction between the adjacent slabs can be neglected. We then re-optimized these cleaved slabs. In this step, the lattice constants were fixed while all the atomic positions were allowed to relax until the force on each atom was smaller than 0.02 eV/Å. Third, we added metal atoms on these optimized slabs. The initial distance between metal atoms and surface of the slab along the z direction (normal to the surface of the slab) was set to 4 Å to avoid the pre-constrained bonding between metal atoms and surface. These metal-doped slabs were then re-optimized during, which the lattice constants were fixed while all the atomic positions were allowed to relax until the force on each atom was smaller than 0.02 eV/Å in this step. The re-optimized metal-doped surface structures were undertaken for the electronic structure calculations, for which a 5 × 5 Monkhorst–Pack grid was used for all DOS calculations. Kojima, A., Teshima, K., Shirai, Y. & Miyasaka, T. Organometal halide perovskites as visible-light sensitizers for photovoltaic cells. J. Am. Chem. Soc. 131, 6050–6051 (2009). Kim, H.-S. et al. Lead iodide perovskite sensitized all-solid-state submicron thin film mesoscopic solar cell with efficiency exceeding 9%. Sci. Rep. 2, 591 (2012). Liu, M., Johnston, M. B. & Snaith, H. J. Efficient planar heterojunction perovskite solar cells by vapour deposition. Nature 501, 395 (2013). Burschka, J. et al. Sequential deposition as a route to high-performance perovskite-sensitized solar cells. Nature 499, 316 (2013). Zhou, H. et al. Interface engineering of highly efficient perovskite solar cells. Science 345, 542–546 (2014). Jiang, Q. et al. Surface passivation of perovskite film for efficient solar cells. Nat. Photonics 13, 460–466 (2019). Fang, Y., Dong, Q., Shao, Y., Yuan, Y. & Huang, J. Highly narrowband perovskite single-crystal photodetectors enabled by surface-charge recombination. Nat. Photonics 9, 679–686 (2015). Lin, K. 
et al. Perovskite light-emitting diodes with external quantum efficiency exceeding 20 percent. Nature 562, 245–248 (2018). Cao, Y. et al. Perovskite light-emitting diodes based on spontaneously formed submicrometre-scale structures. Nature 562, 249–253 (2018). Wei, H. et al. Sensitive X-ray detectors made of methylammonium lead tribromide perovskite single crystals. Nat. Photonics 10, 333–339 (2016). Wei, H. et al. Dopant compensation in alloyed CH3NH3PbBr3-xClx perovskite single crystals for gamma-ray spectroscopy. Nat. Mater. 16 826–833 (2017). Liu, P. et al. Organic–inorganic hybrid perovskite nanowire laser arrays. ACS Nano 11, 5766–5773 (2017). Utzat, H. et al. Coherent single-photon emission from colloidal lead halide perovskite quantum dots. Science 363, 1068–1072 (2019). Wang, J. et al. Spin-optoelectronic devices based on hybrid organic-inorganic trihalide perovskites. Nat. Commun. 10, 129 (2019). ADS Article Google Scholar Dong, Q. et al. Electron-hole diffusion lengths > 175 μm in solution-grown CH3NH3PbI3 single crystals. Science 347, 967–970 (2015). Shi, D. et al. Low trap-state density and long carrier diffusion in organolead trihalide perovskite single crystals. Science 347, 519–522 (2015). Huang, J., Yuan, Y., Shao, Y. & Yan, Y. Understanding the physical properties of hybrid perovskites for photovoltaic applications. Nat. Rev. Mater. 2, 17042 (2017). Brenner, T. M., Egger, D. A., Kronik, L., Hodes, G. & Cahen, D. Hybrid organic—inorganic perovskites: low-cost semiconductors with intriguing charge-transport properties. Nat. Rev. Mater. 1, 15007 (2016). Sutherland, B. R. & Sargent, E. H. Perovskite photonic sources. Nat. Photonics 10, 295 (2016). Zhang, W., Eperon, G. E. & Snaith, H. J. Metal halide perovskites for energy applications. Nat. Energy 1, 16048 (2016). Ball, J. M. & Petrozza, A. Defects in perovskite-halides and their effects in solar cells. Nat. Energy 1, 16149 (2016). Cheng, X. et al. Fe2+/Fe3+ doped into MAPbCl3 single crystal: impact on crystal growth and optical and photoelectronic properties. J. Phys. Chem. C. 123, 1669–1676 (2019). Abdelhady, A. L. et al. Heterovalent dopant incorporation for bandgap and type engineering of perovskite crystals. J. Phys. Chem. Lett. 7, 295–301 (2016). Nayak, P. K. et al. Impact of Bi3+ heterovalent doping in organic–inorganic metal halide perovskite crystals. J. Am. Chem. Soc. 140, 574–577 (2018). Rudd, P. N. & Huang, J. Metal ions in halide perovskite materials and devices. Trend Chem. 1, 394–409 (2019). Abdi-Jalebi, M. et al. Maximizing and stabilizing luminescence from halide perovskites with potassium passivation. Nature 555, 497 (2018). Saliba, M. et al. Incorporation of rubidium cations into perovskite solar cells improves photovoltaic performance. Science 354, 206–209 (2016). Yin, W.-J., Shi, T. & Yan, Y. Unusual defect physics in CH3NH3PbI3 perovskite solar cell absorber. Appl. Phys. Lett. 104, 063903 (2014). Wang, Q. et al. Qualifying composition dependent p and n self-doping in CH3NH3PbI3. Appl. Phys. Lett. 105, 163508 (2014). Bi, C. et al. Understanding the formation and evolution of interdiffusion grown organolead halide perovskite thin films by thermal annealing. J. Mater. Chem. A 2, 18508–18514 (2014). William D. Callister Jr., Materials Science and Engineering: An Introduction, Seventh Edition (John Wiley & Sons, Inc., New York, 2007). Xing, J. et al. Ultrafast ion migration in hybrid perovskite polycrystalline thin films under light and suppression in single crystals. Phys. Chem. Chem. Phys. 
18, 30484–30490 (2016). Hofmann, P. Solid State Physics: An Introduction, Second Edition (WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim, 2015). Yang, L. et al. Chloride molecular doping technique on 2D materials: WS2 and MoS2. Nano Lett. 14, 6275–6280 (2014). Lin, J. D. et al. Electron-doping-enhanced trion formation in monolayer molybdenum disulfide functionalized with cesium carbonate. ACS Nano 8, 5323–5329 (2014). Shahbazi, S. et al. Ag doping of organometal lead halide perovskites: morphology modification and p-type character. J. Phys. Chem. C. 121, 3673–3679 (2017). Chen, Q. et al. Ag-incorporated organic–inorganic perovskite films and planar heterojunction solar cells. Nano Lett. 17, 3231–3237 (2017). Shai, X. et al. Efficient planar perovskite solar cells using halide Sr-substituted Pb perovskite. Nano Energy 36, 213–222 (2017). Caprioglio, P. et al. High open circuit voltages in pin-type perovskite solar cells through strontium addition. Sustain. Energy Fuels 3, 550–563 (2019). Zhao, J. et al. Strained hybrid perovskite thin films and its impact to intrinsic stability of perovskite solar cells. Sci. Adv. 3, eaao5616 (2017). Morris, K. B. Solubility product and the silver-ammonia halides. J. Chem. Educ. 24, 236 (1947). Dong, Q. et al. Abnormal crystal growth in CH3NH3PbI3-xClx using a multi-cycle solution coating process. Energy Environ. Sci. 8, 2464–2470 (2015). Chen, Z. et al. Thin single crystal perovskite solar cells to harvest below-bandgap light absorption. Nat. Commun. 8, 1890 (2017). This work is financially supported by the Center for Hybrid Organic Inorganic Semiconductors for Energy (CHOISE), an Energy Frontier Research Center funded by the Office of Basic Energy Sciences, Office of Science within the U.S. Department of Energy and the National Science Foundation through the Nebraska Materials Research Science and Engineering Center (MRSEC) (Grant No. DMR-1420645). X.C.Z. also thanks computational support by UNL Holland Computing Center. Department of Applied Physical Sciences, University of North Carolina, Chapel Hill, NC, 27599, USA Yuze Lin, Yuchuan Shao, Ye Liu, Xuezeng Dai, Xun Xiao, Yehao Deng & Jinsong Huang Department of Chemistry, University of Nebraska–Lincoln, Lincoln, NE, 68588, USA Jun Dai & Xiao Cheng Zeng Department of Physics and Astronomy, University of Nebraska–Lincoln, Lincoln, NE, 68588, USA Tao Li, Alexei Gruverman & Xiao Cheng Zeng Department of Mechanical and Materials Engineering, University of Nebraska-Lincoln, Lincoln, NE, 68588, USA Ye Liu, Xiao Cheng Zeng & Jinsong Huang Department of Chemical & Biomolecular Engineering, University of Nebraska-Lincoln, Lincoln, NE, 68688, USA Xiao Cheng Zeng Yuze Lin Yuchuan Shao Jun Dai Tao Li Ye Liu Xuezeng Dai Xun Xiao Yehao Deng Alexei Gruverman Jinsong Huang J.H. conceived the idea. J.H., Y.Lin, and Y.S. designed the experiments. Y. Lin and Y.S. did the measurement of lateral devices and Hall effect measurement. Y.S. did temperature-dependent current measurement and transmittance measurement. J.D. and X.C.Z. conducted the computation. T.L. and A.G. conducted the KPFM characterization. Y. Liu grew the single crystals and did chemical synthesis. X.X. and Y. Liu did XRD characterization. Y.D. and Y. Lin blade-coated perovskite films. X.D. contributed to KPFM measurement. J.H. and Y. Lin wrote the paper. All authors reviewed this paper. Y. Lin, Y.S., and J.D. contributed equally to this work. Correspondence to Jinsong Huang. 
Peer review information Nature Communications thanks Nam-Gyu Park and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Lin, Y., Shao, Y., Dai, J. et al. Metallic surface doping of metal halide perovskites. Nat Commun 12, 7 (2021). https://doi.org/10.1038/s41467-020-20110-6
CommonCrawl
\begin{definition}[Definition:Trimorphic Number] A '''trimorphic number''' is a positive integer whose cube ends in that number. \end{definition}
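For illustration (these examples are not part of the ProofWiki entry itself, but are easy to verify): $4^3 = 64$, $24^3 = 13824$ and $49^3 = 117649$ each end in the original number, so 4, 24 and 49 are trimorphic. A one-line check in Python, as a quick sketch:

print([n for n in range(1, 100) if str(n ** 3).endswith(str(n))])   # [1, 4, 5, 6, 9, 24, 25, 49, 51, 75, 76, 99]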
ProofWiki
Let $x,$ $y,$ and $z$ be nonzero complex numbers such that $x + y + z = 20$ and \[(x - y)^2 + (x - z)^2 + (y - z)^2 = xyz.\]Find $\frac{x^3 + y^3 + z^3}{xyz}.$ We have the factorization \[x^3 + y^3 + z^3 - 3xyz = (x + y + z)(x^2 + y^2 + z^2 - xy - xz - yz).\]Expanding $(x - y)^2 + (x - z)^2 + (y - z)^2 = xyz,$ we get \[2x^2 + 2y^2 + 2z^2 - 2xy - 2xz - 2yz = xyz,\]so $x^2 + y^2 + z^2 - xy - xz - yz = \frac{xyz}{2},$ and \[x^3 + y^3 + z^3 - 3xyz = 20 \cdot \frac{xyz}{2} = 10xyz.\]Then $x^3 + y^3 + z^3 = 13xyz,$ so \[\frac{x^3 + y^3 + z^3}{xyz} = \boxed{13}.\]
Math Dataset
Arthur–Merlin protocol In computational complexity theory, an Arthur–Merlin protocol, introduced by Babai (1985), is an interactive proof system in which the verifier's coin tosses are constrained to be public (i.e. known to the prover too). Goldwasser & Sipser (1986) proved that all (formal) languages with interactive proofs of arbitrary length with private coins also have interactive proofs with public coins. Given two participants in the protocol called Arthur and Merlin respectively, the basic assumption is that Arthur is a standard computer (or verifier) equipped with a random number generating device, while Merlin is effectively an oracle with infinite computational power (also known as a prover). However, Merlin is not necessarily honest, so Arthur must analyze the information provided by Merlin in response to Arthur's queries and decide the problem itself. A problem is considered to be solvable by this protocol if whenever the answer is "yes", Merlin has some series of responses which will cause Arthur to accept at least 2⁄3 of the time, and if whenever the answer is "no", Arthur will never accept more than 1⁄3 of the time. Thus, Arthur acts as a probabilistic polynomial-time verifier, assuming it is allotted polynomial time to make its decisions and queries. MA The simplest such protocol is the 1-message protocol where Merlin sends Arthur a message, and then Arthur decides whether to accept or not by running a probabilistic polynomial time computation. (This is similar to the verifier-based definition of NP, the only difference being that Arthur is allowed to use randomness here.) Merlin does not have access to Arthur's coin tosses in this protocol, since it is a single-message protocol and Arthur tosses his coins only after receiving Merlin's message. This protocol is called MA. Informally, a language L is in MA if for all strings in the language, there is a polynomial sized proof that Merlin can send Arthur to convince him of this fact with high probability, and for all strings not in the language there is no proof that convinces Arthur with high probability. Formally, the complexity class MA is the set of decision problems that can be decided in polynomial time by an Arthur–Merlin protocol where Merlin's only move precedes any computation by Arthur. In other words, a language L is in MA if there exists a polynomial-time deterministic Turing machine M and polynomials p, q such that for every input string x of length n = |x|, • if x is in L, then $\exists z\in \{0,1\}^{q(n)}\,\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(M(x,y,z)=1)\geq 2/3,$ • if x is not in L, then $\forall z\in \{0,1\}^{q(n)}\,\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(M(x,y,z)=0)\geq 2/3.$ The second condition can alternatively be written as • if x is not in L, then $\forall z\in \{0,1\}^{q(n)}\,\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(M(x,y,z)=1)\leq 1/3.$ To compare this with the informal definition above, z is the purported proof from Merlin (whose size is bounded by a polynomial) and y is the random string that Arthur uses, which is also polynomially bounded. AM The complexity class AM (or AM[2]) is the set of decision problems that can be decided in polynomial time by an Arthur–Merlin protocol with two messages. There is only one query/response pair: Arthur tosses some random coins and sends the outcome of all his coin tosses to Merlin, Merlin responds with a purported proof, and Arthur deterministically verifies the proof. 
In this protocol, Arthur is only allowed to send outcomes of coin tosses to Merlin, and in the final stage Arthur must decide whether to accept or reject using only his previously generated random coin flips and Merlin's message. In other words, a language L is in AM if there exists a polynomial-time deterministic Turing machine M and polynomials p, q such that for every input string x of length n = |x|, • if x is in L, then $\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(\exists z\in \{0,1\}^{q(n)}\,M(x,y,z)=1)\geq 2/3,$ • if x is not in L, then $\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(\forall z\in \{0,1\}^{q(n)}\,M(x,y,z)=0)\geq 2/3.$ The second condition here can be rewritten as • if x is not in L, then $\Pr \nolimits _{y\in \{0,1\}^{p(n)}}(\exists z\in \{0,1\}^{q(n)}\,M(x,y,z)=1)\leq 1/3.$ As above, z is the alleged proof from Merlin (whose size is bounded by a polynomial) and y is the random string that Arthur uses, which is also polynomially bounded. The complexity class AM[k] is the set of problems that can be decided in polynomial time, with k queries and responses. AM as defined above is AM[2]. AM[3] would start with one message from Merlin to Arthur, then a message from Arthur to Merlin and then finally a message from Merlin to Arthur. The last message should always be from Merlin to Arthur, since it never helps for Arthur to send a message to Merlin after deciding his answer. Properties • Both MA and AM remain unchanged if their definitions are changed to require perfect completeness, which means that Arthur accepts with probability 1 (instead of 2/3) when x is in the language.[1] • For any constant k ≥ 2, the class AM[k] is equal to AM[2]. If k can be polynomially related to the input size, the class AM[poly(n)] is equal to the class, IP, which is known to be equal to PSPACE and is widely believed to be stronger than the class AM[2]. • MA is contained in AM, since AM[3] contains MA: Arthur can, after receiving Merlin's certificate, flip the required number of coins, send them to Merlin, and ignore the response. • It is open whether AM and MA are different. Under plausible circuit lower bounds (similar to those implying P=BPP), they are both equal to NP.[2] • AM is the same as the class BP⋅NP where BP denotes the bounded-error probabilistic operator. Also, $\exists \cdot {\mathsf {BPP}}$ ( also written as ExistsBPP) is a subset of MA. Whether MA is equal to $\exists \cdot {\mathsf {BPP}}$ is an open question. • The conversion to a private coin protocol, in which Merlin cannot predict the outcome of Arthur's random decisions, will increase the number of rounds of interaction by at most 2 in the general case. So the private-coin version of AM is equal to the public-coin version. • MA contains both NP and BPP. For BPP this is immediate, since Arthur can simply ignore Merlin and solve the problem directly; for NP, Merlin need only send Arthur a certificate, which Arthur can validate deterministically in polynomial time. • Both MA and AM are contained in the polynomial hierarchy. In particular, MA is contained in the intersection of Σ2P and Π2P and AM is contained in Π2P. Even more, MA is contained in subclass SP 2 ,[3] a complexity class expressing "symmetric alternation". This is a generalization of Sipser–Lautemann theorem. • AM is contained in NP/poly, the class of decision problems computable in non-deterministic polynomial time with a polynomial size advice. The proof is a variation of Adleman's theorem. 
• MA is contained in PP; this result is due to Vereshchagin.[4] • MA is contained in its quantum version, QMA.[5] • AM contains the problem of deciding if two graphs are not isomorphic. The protocol using private coins is the following and can be transformed to a public coin protocol. Given two graphs G and H, Arthur randomly chooses one of them, and chooses a random permutation of its vertices, presenting the permuted graph I to Merlin. Merlin has to answer if I was created from G or H. If the graphs are nonisomorphic, Merlin will be able to answer with full certainty (by checking if I is isomorphic to G). However, if the graphs are isomorphic, it is both possible that G or H was used to create I, and equally likely. In this case, Merlin has no way to tell them apart and can convince Arthur with probability at most 1/2, and this can be amplified to 1/4 by repetition. This is in fact a zero knowledge proof. • If AM contains coNP then PH = AM. This is evidence that graph isomorphism is unlikely to be NP-complete, since it implies collapse of polynomial hierarchy. • It is known, assuming ERH, that for any d the problem "Given a collection of multivarariate polynomials $f_{i}$ each with integer coefficients and of degree at most d, do they have a common complex zero?" is in AM.[6] References 1. For a proof, see Rafael Pass and Jean-Baptiste Jeannin (March 24, 2009). "Lecture 17: Arthur-Merlin games, Zero-knowledge proofs" (PDF). Retrieved June 23, 2010. 2. Impagliazzo, Russell; Wigderson, Avi (1997-05-04). P = BPP if E requires exponential circuits: derandomizing the XOR lemma. ACM. pp. 220–229. doi:10.1145/258533.258590. ISBN 0897918886. S2CID 18921599. 3. "Symmetric Alternation captures BPP" (PDF). Ccs.neu.edu. Retrieved 2016-07-26. 4. Vereschchagin, N.K. (1992). "On the power of PP". [1992] Proceedings of the Seventh Annual Structure in Complexity Theory Conference. pp. 138–143. doi:10.1109/sct.1992.215389. ISBN 081862955X. S2CID 195705029. 5. Vidick, Thomas; Watrous, John (2016). "Quantum Proofs". Foundations and Trends in Theoretical Computer Science. 11 (1–2): 1–215. arXiv:1610.01664. doi:10.1561/0400000068. ISSN 1551-305X. S2CID 54255188. 6. "Course: Algebra and Computation". People.csail.mit.edu. Retrieved 2016-07-26. Bibliography • Babai, László (1985), "Trading group theory for randomness", STOC '85: Proceedings of the seventeenth annual ACM symposium on Theory of computing, ACM, pp. 421–429, ISBN 978-0-89791-151-1. • Goldwasser, Shafi; Sipser, Michael (1986), "Private coins versus public coins in interactive proof systems", STOC '86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, ACM, pp. 59–68, ISBN 978-0-89791-193-1. • Arora, Sanjeev; Barak, Boaz (2009), Computational Complexity: A Modern Approach, Cambridge, ISBN 978-0-521-42426-4. 
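The private-coin graph non-isomorphism protocol described in the properties above can be illustrated with a toy simulation. This is an illustrative sketch only, not taken from the article or any standard library: the graphs are tiny made-up examples, and the computationally unbounded prover is imitated by a brute-force isomorphism search, which is feasible only for a handful of vertices.

import itertools, random

def permute(graph, perm):
    # relabel the vertices of an edge-set graph according to the permutation perm
    return {frozenset((perm[u], perm[v])) for u, v in graph}

def isomorphic(g1, g2, n):
    # brute-force isomorphism test; this stands in for Merlin's unbounded power
    return any(permute(g1, p) == g2 for p in itertools.permutations(range(n)))

def run_protocol(g0, g1, n, rounds=50):
    # Arthur picks one of the two graphs at random, scrambles its labels, and
    # asks Merlin which graph it came from; count how often Merlin answers correctly.
    correct = 0
    for _ in range(rounds):
        b = random.randrange(2)
        scrambled = permute((g0, g1)[b], random.sample(range(n), n))
        guess = 0 if isomorphic(g0, scrambled, n) else 1
        correct += (guess == b)
    return correct / rounds

n = 4
path = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}   # path on 4 vertices
star = {frozenset(e) for e in [(0, 1), (0, 2), (0, 3)]}   # star on 4 vertices, not isomorphic to the path
print("non-isomorphic pair:", run_protocol(path, star, n))                         # close to 1.0
print("isomorphic pair:    ", run_protocol(path, permute(path, [3, 2, 1, 0]), n))  # close to 0.5

When the two graphs are not isomorphic the simulated prover identifies Arthur's choice every time, and when they are isomorphic no strategy can beat a coin flip, which is exactly the gap the protocol exploits.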
• Madhu Sudan's MIT course on advanced complexity External links • Complexity Zoo: MA • Complexity Zoo: AM
Wikipedia
General term of a geometric sequence

A sequence is a set of things (usually numbers) that are in order. If you know the formula for the nth term of a sequence in terms of n, then you can find any term. A geometric sequence (or geometric progression) is a sequence of non-zero numbers in which each term after the first is obtained by multiplying the previous one by a fixed non-zero constant called the common ratio, denoted r; equivalently, the ratio between any two consecutive terms, a_n / a_(n-1), is constant. The common ratio can be found by dividing any term by the term preceding it: for 3, 15, 75, ... we get r = 15/3 = 75/15 = 5. The sequence 2, 6, 18, 54, 162, ... is geometric with first term 2 and common ratio 3; since it has no last term, it is an infinite geometric sequence.

To generate a geometric sequence, we start by writing the first term and then multiply repeatedly by the common ratio, so the general form is a, ar, ar^2, ar^3, ar^4, ... The nth term (the general term) of a geometric sequence with first term a_1 and common ratio r is

a_n = a_1 r^(n-1).

Study tip: be careful with the order of operations when evaluating a_1 r^(n-1); first find r^(n-1), then multiply the result by a_1. A recursive definition is also possible: since each term is found by multiplying the previous term by the common ratio, a_(k+1) = a_k * r. If r = 1 the formula gives a_n = a_1 * 1^(n-1) = a_1, a constant sequence, so a common ratio of 1 gives a rather boring geometric sequence with all terms equal to the first. More generally, any term of a geometric progression can be expressed from any other term: applying the general term to positions m and k gives a_m = a_1 r^(m-1) and a_k = a_1 r^(k-1), and dividing these gives a_m = a_k r^(m-k). A geometric sequence is an exponential function of n: just as an exponential function can be written y = a b^x, we can write a_n = c r^n, where r is the common ratio and c is a constant (not the first term of the sequence, however).

By contrast, in an arithmetic sequence each term is found by adding a constant, the common difference d, to the previous term: a_(k+1) = a_k + d, with general term a_n = a_1 + (n-1)d. An arithmetic sequence is a linear function: instead of y = mx + b we write a_n = dn + c, where d is the common difference and c is a constant (again not the first term). For example, a tower of bricks with five bricks in the top row, six in the second row and seven in the third gives T_1 = 5, T_2 = 6, T_3 = 7, an arithmetic sequence with d = 1; and 14, 19, 24, 29, ... is arithmetic with d = 5, so its next three terms are 34, 39, 44. A sequence whose terms are formed by multiplying the corresponding terms of an arithmetic and a geometric progression, a, (a+d)r, (a+2d)r^2, ..., is called an arithmetic-geometric sequence. The geometric mean between two numbers is the value that forms a geometric sequence with them. Summing the terms of a geometric sequence gives a geometric series a + ar + ar^2 + ar^3 + ..., where a is the coefficient of each term and r is the common ratio between adjacent terms; a series is convergent when its partial sums have a real limit as n goes to infinity.

Worked example 1. Find the general term of the geometric sequence 2, 8, 32, 128, ... Here a = 2 and r = 8/2 = 4, so T_n = a r^(n-1) = 2 * 4^(n-1) = 2 * 2^(2(n-1)) = 2^(2n-1).

Worked example 2. The third term of a geometric sequence is 12 and the fifth term is 48; find the general term. From a_3 = a r^2 = 12 and a_5 = a r^4 = 48, dividing a_5 by a_3 gives r^2 = 4, hence r = 2 or r = -2; substituting back into the first equation gives a = 3. The same two-equation method works whenever two terms are known (for instance, a_5 = 48 and a_7 = 192 give the equations 48 = a_1 r^4 and 192 = a_1 r^6): determine r, plug a_1 and r into the formula, and read off any term you need.

Practice problems (simplify your answers; use integers or fractions for any numbers in the expressions):
- Find the 7th term of the geometric sequence in which a_2 = 24 and a_5 = 3.
- Find the 10th term of the sequence 5, -10, 20, -40, ...
- Find a_8 for the sequence 4, -12, 36, ...
- Find the 12th term of a sequence whose first term is 256 and whose common ratio is r = 1/4.
- Determine the general term of a geometric sequence given that its sixth term is 16/3 and its tenth term is 256/3.
- The second term of a geometric sequence is 2 and the fifth term is 1/32; find the first term and the common ratio.
- Find the general term of the sequence 5, 40/9, 320/81, ...
- Give the general term a_n of 1/5, 1/10, 1/20, 1/40, ... and of 1/5, 1/15, 1/45, 1/135, ...
- The general term of a geometric sequence is t_n = 6(1/6)^(n-1), where n is a natural number and n >= 1. Find the 15th term, T_15.
- Decide whether each sequence is arithmetic or geometric and find its next terms: 1/2, 1/4, 1/8, 1/16, ...; 1, 10, 100, 1000, ...; $2000, $2240, $2508.80, ...
- The terms (k-4); (k+1); m; 5k are such that the first three form an arithmetic sequence and the last three form a geometric sequence. Determine the values of k and m if both are positive integers.
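The two-equation method used in the worked examples above is easy to automate. Here is a short Python sketch (not part of the original page; the function names and the use of plain floats are our own choices, and the root-taking step assumes the quotient of the two known terms is positive):

def geometric_from_two_terms(i, a_i, j, a_j):
    # a_j / a_i = r**(j - i), so r is a (j - i)-th root of the quotient;
    # when j - i is even, -r is a second valid ratio, as in worked example 2.
    r = (a_j / a_i) ** (1.0 / (j - i))
    a1 = a_i / r ** (i - 1)
    return a1, r

def nth_term(a1, r, n):
    # general term a_n = a_1 * r**(n - 1)
    return a1 * r ** (n - 1)

# Worked example 2 above: a_3 = 12 and a_5 = 48.
a1, r = geometric_from_two_terms(3, 12, 5, 48)
print(a1, r)                                        # 3.0 2.0
print([nth_term(a1, r, n) for n in range(1, 7)])    # [3.0, 6.0, 12.0, 24.0, 48.0, 96.0]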
CommonCrawl
Arithmetic of abelian varieties In mathematics, the arithmetic of abelian varieties is the study of the number theory of an abelian variety, or a family of abelian varieties. It goes back to the studies of Pierre de Fermat on what are now recognized as elliptic curves; and has become a very substantial area of arithmetic geometry both in terms of results and conjectures. Most of these can be posed for an abelian variety A over a number field K; or more generally (for global fields or more general finitely-generated rings or fields). Integer points on abelian varieties There is some tension here between concepts: integer point belongs in a sense to affine geometry, while abelian variety is inherently defined in projective geometry. The basic results, such as Siegel's theorem on integral points, come from the theory of diophantine approximation. Rational points on abelian varieties The basic result, the Mordell–Weil theorem in Diophantine geometry, says that A(K), the group of points on A over K, is a finitely-generated abelian group. A great deal of information about its possible torsion subgroups is known, at least when A is an elliptic curve. The question of the rank is thought to be bound up with L-functions (see below). The torsor theory here leads to the Selmer group and Tate–Shafarevich group, the latter (conjecturally finite) being difficult to study. Heights Main article: Height function The theory of heights plays a prominent role in the arithmetic of abelian varieties. For instance, the canonical Néron–Tate height is a quadratic form with remarkable properties that appear in the statement of the Birch and Swinnerton-Dyer conjecture. Reduction mod p Reduction of an abelian variety A modulo a prime ideal of (the integers of) K — say, a prime number p — to get an abelian variety Ap over a finite field, is possible for almost all p. The 'bad' primes, for which the reduction degenerates by acquiring singular points, are known to reveal very interesting information. As often happens in number theory, the 'bad' primes play a rather active role in the theory. Here a refined theory of (in effect) a right adjoint to reduction mod p — the Néron model — cannot always be avoided. In the case of an elliptic curve there is an algorithm of John Tate describing it. L-functions Main article: Hasse–Weil zeta function For abelian varieties such as Ap, there is a definition of local zeta-function available. To get an L-function for A itself, one takes a suitable Euler product of such local functions; to understand the finite number of factors for the 'bad' primes one has to refer to the Tate module of A, which is (dual to) the étale cohomology group H1(A), and the Galois group action on it. In this way one gets a respectable definition of Hasse–Weil L-function for A. In general its properties, such as functional equation, are still conjectural – the Taniyama–Shimura conjecture (which was proven in 2001) was just a special case, so that's hardly surprising. It is in terms of this L-function that the conjecture of Birch and Swinnerton-Dyer is posed. It is just one particularly interesting aspect of the general theory about values of L-functions L(s) at integer values of s, and there is much empirical evidence supporting it. 
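As a concrete illustration of these local factors (an example added here for definiteness, not taken from the article): for an elliptic curve $E$ over $\mathbb{Q}$ with good reduction at a prime $p$, the local factor is $L_p(E,s)^{-1} = 1 - a_p p^{-s} + p^{1-2s}$ with $a_p = p + 1 - \#E(\mathbb{F}_p)$, and the Hasse–Weil L-function is the Euler product $L(E,s) = \prod_p L_p(E,s)$, with suitably modified factors at the finitely many primes of bad reduction.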
Complex multiplication Main article: Complex multiplication of abelian varieties Since the time of Carl Friedrich Gauss (who knew of the lemniscate function case) the special role has been known of those abelian varieties $A$ with extra automorphisms, and more generally endomorphisms. In terms of the ring ${\rm {End}}(A)$, there is a definition of abelian variety of CM-type that singles out the richest class. These are special in their arithmetic. This is seen in their L-functions in rather favourable terms – the harmonic analysis required is all of the Pontryagin duality type, rather than needing more general automorphic representations. That reflects a good understanding of their Tate modules as Galois modules. It also makes them harder to deal with in terms of the conjectural algebraic geometry (Hodge conjecture and Tate conjecture). In those problems the special situation is more demanding than the general. In the case of elliptic curves, the Kronecker Jugendtraum was the programme Leopold Kronecker proposed, to use elliptic curves of CM-type to do class field theory explicitly for imaginary quadratic fields – in the way that roots of unity allow one to do this for the field of rational numbers. This generalises, but in some sense with loss of explicit information (as is typical of several complex variables). Manin–Mumford conjecture See also: André–Oort conjecture The Manin–Mumford conjecture of Yuri Manin and David Mumford, proved by Michel Raynaud,[1][2] states that a curve C in its Jacobian variety J can only contain a finite number of points that are of finite order (a torsion point) in J, unless C = J. There are other more general versions, such as the Bogomolov conjecture which generalizes the statement to non-torsion points. References 1. Raynaud, Michel (1983). "Sous-variétés d'une variété abélienne et points de torsion". In Artin, Michael; Tate, John (eds.). Arithmetic and geometry. Papers dedicated to I. R. Shafarevich on the occasion of his sixtieth birthday. Vol. I: Arithmetic. Progress in Mathematics (in French). Vol. 35. Birkhäuser-Boston. pp. 327–352. MR 0717600. Zbl 0581.14031. 2. Roessler, Damian (2005). "A note on the Manin-Mumford conjecture". In van der Geer, Gerard; Moonen, Ben; Schoof, René (eds.). Number fields and function fields — two parallel worlds. Progress in Mathematics. Vol. 239. Birkhäuser. pp. 311–318. ISBN 0-8176-4397-4. MR 2176757. Zbl 1098.14030.
Wikipedia
Internal category In mathematics, more specifically in category theory, internal categories are a generalisation of the notion of small category, and are defined with respect to a fixed ambient category. If the ambient category is taken to be the category of sets then one recovers the theory of small categories. In general, internal categories consist of a pair of objects in the ambient category—thought of as the 'object of objects' and 'object of morphisms'—together with a collection of morphisms in the ambient category satisfying certain identities. Group objects are common examples of internal categories. There are notions of internal functors and natural transformations that make the collection of internal categories in a fixed category into a 2-category. Definitions Let $C$ be a category with pullbacks. An internal category in $C$ consists of the following data: two $C$-objects $C_{0},C_{1}$ named "object of objects" and "object of morphisms" respectively and four $C$-arrows $d_{0},d_{1}:C_{1}\rightarrow C_{0},e:C_{0}\rightarrow C_{1},m:C_{1}\times _{C_{0}}C_{1}\rightarrow C_{1}$ (the source and target arrows, the identity-assigning arrow, and the composition arrow), subject to coherence conditions expressing the axioms of category theory. See [1][2][3][4]. See also • Enriched category References 1. Moerdijk, Ieke; Mac Lane, Saunders (1992). Sheaves in geometry and logic : a first introduction to topos theory (2nd corr. print., 1994. ed.). New York: Springer-Verlag. ISBN 0-387-97710-4. 2. Mac Lane, Saunders (1998). Categories for the working mathematician (2. ed.). New York: Springer. ISBN 0-387-98403-8. 3. Borceux, Francis (1994). Handbook of categorical algebra. Cambridge: Cambridge University Press. ISBN 0-521-44178-1. 4. Johnstone, Peter T. (1977). Topos theory. London: Academic Press. ISBN 0-12-387850-0. • Internal category at the nLab
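Since an internal category in the category of sets is just a small category (as noted above), the definition can be made concrete with ordinary sets and functions. The following toy Python sketch is an illustration added here, not part of the article, with data chosen ad hoc; it spells out $C_{0}$, $C_{1}$ and the four structure maps for a category with two objects and a single non-identity arrow.

# C0 = set of objects, C1 = set of morphisms, d0/d1 = source and target maps,
# e = identity-assigning map, m = composition, defined on composable pairs.
C0 = {"x", "y"}
C1 = {"id_x", "id_y", "f"}                    # one non-identity arrow f : x -> y

d0 = {"id_x": "x", "id_y": "y", "f": "x"}     # source
d1 = {"id_x": "x", "id_y": "y", "f": "y"}     # target
e = {"x": "id_x", "y": "id_y"}                # identities

def m(g, f):
    # compose g after f; only defined when the pair is composable
    assert d0[g] == d1[f], "not a composable pair"
    if f == e[d0[f]]:
        return g                              # f is an identity
    if g == e[d1[g]]:
        return f                              # g is an identity
    raise ValueError("no non-identity composites in this toy example")

# A few of the coherence conditions, checked on this example
# (associativity is vacuous here: no two non-identity arrows compose):
for x in C0:
    assert d0[e[x]] == x and d1[e[x]] == x    # identities are endomorphisms
for h in C1:
    assert m(e[d1[h]], h) == h and m(h, e[d0[h]]) == h   # left and right unit laws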
Wikipedia
Yesterday I asked a question on parameterizations of knotted surfaces in $\mathbb R^4$. After I stated in the comments that I wanted the question to be kept to the case of a general surface, the question was promptly put on hold as "unclear what you're asking". I then refined my question to make it clearer. A day has since passed, but the question has not been reopened. All there has been since then is one comment (after I altered the question) mentioning that the question does not seem unclear at all. I would very much appreciate it if the question could be reopened. Although the question at MO has been reopened, the meta post is not completely moot: let me remind users that there is a meta thread dedicated to the requests for reopening: Requests for reopen and undelete votes for on-hold, closed, and deleted questions.
CommonCrawl
\begin{document} \title{A note on the radial solutions for the supercritical H\'enon equation\thanks{Partially supported by M.I.U.R., national project \textit{Metodi variazionali ed equazioni differenziali non lineari}.}} \author{Vivina Barutello, Simone Secchi \\ \small{Dipartimento di Matematica ed Applicazioni, Universit\`a di Milano--Bicocca.} \\ \small{via R.~Cozzi 53, I-20125 Milano (Italy)} \and Enrico Serra \\ \small{Dipartimento di Matematica, Universit\`a di Milano.} \\ \small{via C.~Saldini 50, I-20133 Milano (Italy)}} \date{} \maketitle {\bf Mathematics subject classification:} 35J60, 34B15. {\bf Keywords:} H\'enon equation, Neumann problem, supercritical nonlinearity, radial solutions \begin{abstract} We prove the existence of a positive radial solution for the H\'enon equation with arbitrary growth. The solution is found by means of a shooting method and turns out to be an increasing function of the radial variable. Some numerical experiments suggest the existence of many positive oscillating solutions. \end{abstract} \section{Introduction} In 1982, W.-M. Ni wrote the first rigorous paper, \cite{ni82}, on an equation introduced ten years earlier by H\'enon in \cite{he} as a model for mass distribution in spherically symmetric clusters of stars. This equation goes now under the name of \textit{H\'enon equation}, and was originally coupled with Dirichlet boundary conditions: \begin{equation} \label{eq:HenonD} \begin{cases} -\Delta u = |x|^\alpha u^p &\text{in } B_1, \\ u>0, &\text{in } B_1, \\ u= 0, &\text{on } \partial B_1, \end{cases} \end{equation} where $B_1 = \left\{ x \in \R^N \mid |x|<1 \right\}$, with $N\geq 3$, $\alpha >0$ and $p>1$. The existence of solutions to (\ref{eq:HenonD}) for $p<\frac{N+2}{N-2}= 2^*-1$ is a standard exercise in critical point theory that can be solved by various simple approaches. On the other hand, Ni's main result states that \eqref{eq:HenonD} has (at least) one solution provided $p< 2^*-1+\frac{2\alpha}{N-2}$ and thus enlarges considerably the range of solvability beyond the classical critical threshold $p=2^*-1$. It is also simple to prove, by means of the Pohozaev identity, that for $p \geq 2^*-1 +\frac{2\alpha}{N-2}$, problem (\ref{eq:HenonD}) has no solution. Thus for the Dirichlet problem (\ref{eq:HenonD}) the picture is rather sharp: setting $p_\alpha = 2^* + \frac{2\alpha}{N-2}$, problem (\ref{eq:HenonD}) is solvable if and only if $p< p_\alpha-1$. In the following we will call $p_\alpha$ the \textit{H\'enon critical exponent}. The key observation in Ni's work is the fact that the presence of the weight $|x|^\alpha$, which is radial and vanishes at $x=0$, allows one to gain compactness properties when one restricts the analysis to radial functions. Indeed, by means of the pointwise estimate (\cite{ni82}) \begin{equation} \label{grow} |u(|x|)| \leq C \frac{\| \nabla u\|_2}{|x|^{\frac{N-2}{2}}}, \quad \hbox{for almost every $x \in B_1$}, \end{equation} which holds true for any radial~$u\in H_0^1(B_1)$, one can easily prove that the embedding of $H_{0,\mathrm{rad}}^1(B_1)$ into $L^p(|x|^\alpha\,dx)$ is compact precisely for $p < p_\alpha-1$, and this is what one needs to prove existence of a solution to (\ref{eq:HenonD}). Quite recently, much attention has been devoted to \textit{symmetry--breaking} issues, namely to the question of whether least--energy solutions of \eqref{eq:HenonD} are radially symmetric functions (for example for large $\alpha$). 
In their seminal papers \cite{ssw,SW}, Smets \textit{et al.} proved that this is indeed false for $\alpha$ sufficiently large. In the last few years various aspects of the H\'enon equation have been analyzed, and the resulting literature is nowadays rather rich (see for example \cite{BS}, \cite{bst}, \cite{BW1}, \cite{cst}, \cite{CT}, \cite{CP}, \cite{PS}, \cite{Se}, \cite{SS} and references therein). All these papers concern the Dirichlet problem. The H\'enon equation just recently started to draw attention when coupled with \textit{Neumann} boundary conditions. In this case the problem reads \begin{equation}\label{eq:Henon} \begin{cases} -\Delta u +u = |x|^\alpha u^p &\text{in } B_1, \\ u>0, &\text{in } B_1, \\ \frac{\partial u}{\partial \nu}= 0, &\text{on } \partial B_1, \end{cases} \end{equation} and has been studied in \cite{gs}, where the authors proved some symme\-try--breaking results by connecting the question of symmetry of the ground states to the symmetry properties of extremal functions in some trace inequalities. In the paper \cite{gs} of course it is assumed that $p < 2^*-1$, to use a variational approach in $H^1(B_1)$. From the point of view of the mere \textit{existence} of solutions for the Neumann problem (\ref{eq:Henon}), the situation presents both analogies and discrepancies with respect to the Dirichlet problem. Indeed, also in the Neumann case it is very easy to check that the problem admits at least one solution if $p < 2^* - 1$, and it has been proved in \cite{gs} that Ni's result extends to (\ref{eq:Henon}): by using an $H^1$ version of inequality (\ref{grow}), it is simple to prove that (\ref{eq:Henon}) admits at least one (radial) solution for every $p< p_\alpha-1$. If one wishes to complete the picture of the solvability for \eqref{eq:Henon} as a function of $p$, like in the Dirichlet case, one has to face the fact that the Pohozaev identity gives no relevant information in presence of Neumann boundary conditions. To our knowledge, it is not known whether the critical H\'enon exponent serves as a threshold between existence and nonexistence in \eqref{eq:Henon}. The purpose of this note is to fill this gap, by looking for radial solutions of~\eqref{eq:Henon} without any limitation on $p$. Of course without bounds on $p$ we cannot make use of variational arguments, and we take instead an ODE viewpoint. Our main result is the following. \begin{theorem} \label{th:main} For every $p>1$ and $\alpha >0$, problem \eqref{eq:Henon} admits a strictly increasing radial solution. \end{theorem} The preceding result shows that the uselessness of the Pohozaev argument for nonexistence of solutions is not a technical obstruction, but reflects a completely different situation with respect to the Dirichlet problem. Though our arguments are rather simple, we point out that it is rather difficult to find in the literature existence results for elliptic equations without any growth condition; some exceptions, for singularly perturbed elliptic problems can be found for example in \cite{amn1,amn2}. Our main result can be easily extended to the more general problem \begin{equation}\label{eq:general} \begin{cases} -\Delta u +u = \phi(|x|) f(u), &\text{in } B_1, \\ u>0, &\text{in } B_1, \\ \dfrac{\partial u}{\partial \nu} = 0, &\text{on } \partial B_1, \end{cases} \end{equation} under suitable assumptions on $\phi$ and $f$, but always with no growth restrictions on $f$. 
The last section of the paper contains some numerical experiments that suggest a rather surprising fact: there are choices of $N$, $\alpha$ and $p$ for which many (and possibly infinitely many) radial solutions of \eqref{eq:Henon} seem to exist. We clearly state that this is only a numerical hint towards further research, since no rigorous proof has been written so far. By the way, we believe that for any choice of the parameters, problem \eqref{eq:Henon} has exactly one radial, increasing solution, but we have to state this only as a \noindent \textbf{Conjecture.} \textit{For every $N \geq 3$, $\alpha >0$ and $p>1$, the only solution of \eqref{eq:Henon} which is positive, radial and increasing is that of Theorem~\ref{th:main}.} The matter of uniqueness for radially symmetric solutions of semilinear elliptic equations on balls or annuli is a classical and often overwhelmingly difficult issue. We refer to \cite{felmer} and the references therein for a short summary of known results. We have been unable to find uniqueness results in the literature, concerning nonautonomous equations like \eqref{eq:Henon} with a monotonically increasing dependence on $|x|$. \section{An existence result for the H\'{e}non equation} \label{sec:henon} A radial solution for \eqref{eq:Henon} must solve the ODE problem \begin{equation}\label{eq:Henon_radiale} \begin{cases} -u'' -\frac{N-1}{r}u'+u = r^\alpha u^p, &\text{in $(0,1)$}, \\ u>0, &\text{in $(0,1)$}, \\ u'(0) = u'(1) = 0. & \end{cases} \end{equation} We observe that $u'(1)=0$ corresponds to the Neumann boundary condition, while we require $u'(0)=0$ to obtain classical solutions. Since we do not impose any upper bound on $p$, variational techniques do not seem to be useful to prove any existence result. For this reason we use a shooting method, which consists in finding $\gamma>0$ such that the solution $u_\gamma$ of the initial value problem \begin{equation}\label{eq:shooting} \begin{cases} -u'' -\frac{N-1}{r}u'+u = r^\alpha u^p, &\text{in $(0,1)$}, \\ u>0, &\text{in $(0,1)$} \\ u(0) = \gamma,\; u'(0) = 0.& \end{cases} \end{equation} satisfies $u'_\gamma(1)=0$. Equation \eqref{eq:shooting}$_1$ can be written in the form \begin{equation}\label{eq:riscritta} \left( r^{N-1}u'\right)' = r^{N-1}(u-r^\alpha u^p), \end{equation} and it is quite natural to introduce the auxiliary function $A(r) = r^{N-1}u'(r)$. By the definition of $A$ and \eqref{eq:shooting}, we deduce that $A(0) = 0$ and that $A$ increases strictly if and only if $u(r)<c(r)$ where $$ c(r)=\frac{1}{r^{\alpha/(p-1)}}. $$ The curve $c$ will play a crucial r\^{o}le in our discussion; its peculiarity is that it is asymptotic to the coordinate axes, i.e. \begin{equation}\label{eq:c} \lim_{r \to 0+} c(r) = +\infty \quad \text{and} \quad \lim_{r \to +\infty} c(r) = 0^+. \end{equation} The next Lemma is well known, see for example \cite{caku}. \begin{lemma} \label{lem:1} For every $\gamma >0$, problem \eqref{eq:shooting} is uniquely solvable on $[0,+\infty)$. Its solution $u_\gamma$ is continuously differentiable with respect to the initial value $\gamma$. \end{lemma} We begin now a qualitative study of solutions to \eqref{eq:shooting}, with the aim of proving the existence of (at least) an initial datum $\bar\gamma >0$ such that the corresponding $u_{\bar \gamma}$ matches the Neumann boundary condition at $r=1$. Take any~$\gamma >0$, and consider the solution $u_\gamma$ of \eqref{eq:shooting}. 
The corresponding auxiliary function $A_\gamma(r) = r^{N-1}u'_\gamma(r)$ is strictly positive from $r=0$ until $u_\gamma$ intersects the curve $c$, see~\eqref{eq:riscritta}. After the first crossing, the derivative of $A_\gamma$ becomes negative. Therefore there exists a zero of $A_\gamma'$, and the first intersection point \[ r_\gamma := \inf \{r >0 \mid A'_\gamma (r) = 0 \} = \inf \{r >0 \mid u_\gamma (r) = c(r) \} \] between the graph of $u_\gamma$ and that of $c$ is well defined. By definition, $u'_\gamma (r)>0$ (at least) on the interval $(0,r_\gamma)$. Let us call $R_\gamma$ the first nontrivial stationary point of $u_\gamma$, namely \[ R_\gamma := \inf \{r >0 \mid u'_\gamma(r)=0 \}. \] The next Lemma states in particular that $R_\gamma$ is well defined. \begin{lemma}\label{lem:2} For every $\gamma > 0$ we have $r_\gamma < R_\gamma < +\infty$. \end{lemma} \begin{proof} Since the function $A_\gamma$ steadily increases on $(0,r_\gamma)$ and $A(0)=0$, we deduce that $A_\gamma(r_\gamma)>0$ and then that $u'_\gamma(r_\gamma)>0$. Hence $r_\gamma < R_\gamma$. Let us show that $R_\gamma < + \infty$. Assume, for the sake of contradiction, that $R_\gamma = + \infty$. This means that $u'(r)>0$ for all $r \geq r_\gamma$. Fix any $R_0 > r_\gamma$ (of course $u'_\gamma (R_0)>0$) and choose $\delta$ such that \[ 0< \delta < \left( \frac{u_\gamma(R_0)}{c(R_0)} \right)^{p-1}-1. \] This choice is possible since $u_\gamma$ increases strictly and $c$ decreases strictly, so that $u_\gamma(R_0)>c(R_0)$. Therefore \begin{equation} \label{eq:7} r^\alpha u\sb\gamma^{p-1}(r)-1 > \delta >0 \quad\hbox{for every } r \geq R_0. \end{equation} We now integrate \eqref{eq:shooting}$_1$ on the interval $[R_0,R]$, with $R>R_0$; we have \begin{eqnarray*} 0 &=& u'_\gamma(R)-u'_\gamma(R_0) + (N-1) \int_{R_0}^R \frac{u'_\gamma (r)}{r}\,dr\\ &&\qquad \quad- \int_{R_0}^R u_\gamma(r)\,dr + \int_{R_0}^R r^\alpha u_\gamma^p(r)\,dr\\ &\geq& -u'_\gamma(R_0)-\int_{R_0}^R u_\gamma(r) \left[1-r^\alpha u_\gamma^{p-1}(r)\right] \,dr \end{eqnarray*} and hence \[ \int_{R_0}^R u_\gamma (r) \left[ r^\alpha u_\gamma (r)^{p-1} - 1 \right]\,dr \leq u'_\gamma(R_0) < +\infty. \] for every $R >R_0$. However, thanks to \eqref{eq:7}, we obtain \[ \lim_{R \to +\infty} \int_{R_0}^R u_\gamma (r) \left[ r^\alpha u_\gamma (r)^{p-1} -1 \right] \, dr \geq \lim_{R \to +\infty} (R-R_0) \gamma \delta = +\infty, \] a contradiction that concludes the proof. \end{proof} \begin{lemma} \label{lem:3} There results $u''_\gamma (R_\gamma) < 0$. \end{lemma} \begin{proof} Indeed, from equation \eqref{eq:shooting} we obtain \[ -u''_\gamma (R_\gamma) = u_\gamma (R_\gamma) \left( 1-R_\gamma^\alpha u^{p-1}_\gamma (R_\gamma) \right) \] which is a positive quantity since $u_\gamma (R_\gamma) > c(R_\gamma)$. \end{proof} \begin{lemma} \label{lem:4} There exists $\delta>0$ such that for every $\gamma<\delta$ there results~$R_\gamma>1$. \end{lemma} \begin{proof} Lemma \ref{lem:1} implies that $\sup_{r \in [0,1]} |u_\gamma (r)| < 1$ for all $\gamma$ sufficiently small. Therefore $u_\gamma$ lies below the curve $c$ on $[0,1]$, since $c$ decreases and $c(1)=1$ independently of the parameters $p$ and $\alpha$. The claim follows from Lemma \ref{lem:2}. \end{proof} \begin{lemma} \label{lem:5} $\lim_{\gamma \to +\infty}R_\gamma = 0$. \end{lemma} \begin{proof} For the sake of contradiction we suppose that there exist $\delta > 0$ and a sequence $(\gamma_k)_k$, $\gamma_k \to +\infty$, such that $R_{\gamma_k} \geq \delta$ for every $k$. 
Since $u_{\gamma_k}'$ is strictly positive on $(0,\delta/2)$, the function $A_{\gamma_k}$ is strictly positive on $(0,\delta/2]$. Hence, for every $k$, \[ \begin{split} 0 < A_{\gamma_k}(\delta/2) & = \int_0^{\delta/2} A'_{\gamma_k}(r)\,dr \\ & = \int_0^{\delta/2} r^{N-1}u_{\gamma_k}(r)\left(1-r^\alpha u_{\gamma_k}^{p-1}(r)\right)\,dr \\ & \leq \int_0^{\delta/2} r^{N-1}u_{\gamma_k}(r) \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr, \end{split} \] where the last inequality holds because $u_{\gamma_k}(r) \geq \gamma_k$ on $[0,\delta/2]$. Since $\gamma_k \to +\infty$, we can choose $k_0$ such that, for every $k \geq k_0$, there results $\gamma_k^{-(p-1)/\alpha}<\delta/2$; for such values of $k$ we can split the integral to obtain \[ \begin{split} 0 &< \int_0^{\gamma_k^{-(p-1)/\alpha}} r^{N-1}u_{\gamma_k}(r) \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr\\ & \qquad\qquad+ \int_{\gamma_k^{-(p-1)/\alpha}}^{\delta/2} r^{N-1}u_{\gamma_k}(r) \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr\\ & \leq u_{\gamma_k}(\gamma_k^{-(p-1)/\alpha})\int_0^{\gamma_k^{-(p-1)/\alpha}} r^{N-1} \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr\\ &\qquad\qquad + u_{\gamma_k}(\gamma_k^{-(p-1)/\alpha})\int_{\gamma_k^{-(p-1)/\alpha}}^{\delta/2} r^{N-1} \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr\\ &= u_{\gamma_k}(\gamma_k^{-(p-1)/\alpha}) \int_0^{\delta/2} r^{N-1} \left(1-r^\alpha \gamma_k^{p-1}\right)\,dr. \end{split} \] Indeed $u_{\gamma_k}$ increases and the quantity $(1-r^\alpha \gamma_k^{p-1})$ is positive on the interval $(0,\gamma_k^{-(p-1)/\alpha})$ and negative on the interval $(\gamma_k^{-(p-1)/\alpha},\delta/2)$. Integrating we reach the contradiction \[ 0 < u(\gamma_k^{-(p-1)/\alpha}) \left(\frac{\delta}{2}\right)^{N} \left[\frac{1}{N}-\frac{\gamma_k^{p-1}}{N+\alpha}\left(\frac{\delta}{2}\right)^\alpha \right] < 0 \] whenever $\gamma_k >\left( \frac{N+\alpha}{N} \right)^{1/(p-1)}(\delta/2)^{-\alpha/(p-1)}$, namely for $\gamma_k$ large enough. \end{proof} We are now ready to state our main result. \setcounter{theorem}{0} \begin{theorem} \label{main} For every $p>1$ and $\alpha >0$, problem \eqref{eq:Henon} admits a strictly increasing radial solution. \end{theorem} \begin{proof} We have seen that the first critical point $R_\gamma$ of $u_\gamma$ is larger than 1 for small values of $\gamma$, and smaller than 1 for large values of $\gamma$. Consider the map $F \colon (0,+\infty) \times (0,+\infty) \to \mathbb{R}$ defined by $F(\gamma,r)=u'_\gamma(r)$. We have that $F(\gamma,R_\gamma)=0$, and Lemma \ref{lem:3} implies that \[ \partial_2 F (\gamma,R_\gamma) = u''_\gamma (R_\gamma) < 0. \] The Implicit Function Theorem shows that $\gamma \mapsto R_\gamma$ is a continuous and even differentiable function. Therefore, there exists $\bar\gamma$ such that $R_{\bar \gamma}=1$. This means that $u_{\bar\gamma}$ is a radial solution of the Neumann problem \eqref{eq:Henon}. \end{proof} \begin{remark} When $p$ satisfies the condition \begin{equation}\label{eq:p-1} p-1 < \frac{\sqrt{(N-2)^2+4}-(N-2)}{2}\alpha, \end{equation} the Maximum Principle shows that every radial solution for the problem \eqref{eq:Henon} can change its monotonicity at most once, since it can intersect the curve $c$ at most twice. Indeed, computing \begin{multline*} -c''(r)-\frac{N-1}{r}c'(r)+c(r) =\\ (p-1)^2 r^{-2-\alpha/(p-1)}\left[r^\alpha + \frac{\alpha(N-1)(p-1) - \alpha(\alpha+p-1)}{(p-1)^2}\right], \end{multline*} it is easy to see that since $r \in [0,1]$, condition \eqref{eq:p-1} implies that $c$ is a subsolution for the operator $-\Delta + I$. 
Furthermore, a positive solution $u$ for \eqref{eq:Henon} is a supersolution for the same operator. Therefore $w=u-c$ is a supersolution that vanishes when $u$ intersects $c$. From the Maximum Principle we deduce that $w$ can vanish at most twice. \end{remark} \section{A generalization} \label{sec:general} Theorem \ref{main} can be adapted to problem \eqref{eq:general} by imposing some suitable assumptions of the functions $\phi$ and $f$. First of all, both functions $\phi$ and $f$ are defined (and continuous) on $[0,+\infty)$, since we are looking for positive radial solutions; furthermore we require: \begin{description} \item[(h1)] $\phi$ is increasing, $\phi(0)=\ell \geq 0$ and $\displaystyle \lim_{r \to +\infty} \phi(r)=\kappa \in [l,+\infty]$; \item[(h2)] the function $s \mapsto f(s)/s$ is strictly increasing; \item[(h3)] $\displaystyle \lim_{s \to +\infty} \dfrac{f(s)}{s} = \dfrac{1}{\ell}$ ($=+\infty$ if $\ell=0$); \item[(h4)] $\displaystyle \lim_{s \to 0^+} \dfrac{f(s)}{s} = \dfrac{1}{\kappa}$ ($=0$ if $\kappa=+\infty$). \end{description} Under conditions {\bf (h1)}--{\bf (h4)} the equation ${u}/{f(u)}= \phi(r)$ defines implicitly a continuous curve $u=\xi(r)$ which plays the same r\^{o}le as the curve $c$ in the previous section. Indeed if we call $H(u)={u}/{f(u)}$, then $\xi(r)=H^{-1}\left( \phi(r) \right)$, which decreases since $H^{-1}$ decreases and $\phi$ increases. Furthermore $\xi$ is asymptotic to the coordinate axes. Indeed from {\bf (h3)} we get \[ \lim_{r\to 0+} \xi(r) = \lim_{r\to 0+}H^{-1}\left( \phi(r) \right) = \lim_{u\to l}H^{-1}(u) = +\infty; \] similarly, from {\bf (h4)}, \[ \lim_{r\to +\infty} \xi(r) = \lim_{r\to +\infty}H^{-1}\left( \phi(r) \right) = \lim_{u\to k}H^{-1}(u) = 0. \] We can now proceed exactly as in Section \ref{sec:henon} defining the auxiliary function $A$ and the points $r_\gamma$, $R_\gamma$. Lemmas \ref{lem:2}, \ref{lem:3}, \ref{lem:4} and \ref{lem:5} can be proved also in this setting with minor changes. We then obtain the required generalization: \begin{theorem} \label{main2} Let $\phi$, $f:[0,+\infty)\to\mathbb{R}$ be continuous functions. If assumptions {\bf (h1)}--{\bf (h4)} are satisfied then problem \eqref{eq:general} admits a strictly increasing radial solution. \end{theorem} \begin{remark}\label{esempi} Our assumptions are clearly satisfied by nonlinearities with arbitrarily fast growth at infinity, like $f(s)=\exp (s)-1$, or $f(s) = \exp (\gamma s^q)-1$ for $\gamma>0$ and $q>1$. This latter case is particularly interesting, because it corresponds to Trudinger--Moser type problems without any restriction on $q$ and $\gamma$. Though in this paper we work in dimension $N\geq 3$, it is immediate to check that our results hold also for $N=2$, which is the case of the Trudinger--Moser problem. Of course, \textbf{(h3)} requires~$\ell=0$, i.e. $\phi (0)=0$. Similarly, a nonlinearity that is superlinear at zero forces $\kappa$ to be infinite. Also, we notice that homogeneity of $\phi$ plays no role, as long as $\phi $ satisfies the above assumptions; for example $\phi(r) = r^\alpha + r^\beta$, with $\alpha$, $\beta>0$, is an admissible function. \end{remark} \begin{remark} It is proved in \cite{adya91} that in dimension $N\geq 3$, for any $p>1$ and for large values of $R$, the problem \[ \begin{cases} -\Delta u + u = u^p &\hbox{for $|x|<R$}\\ \frac{\partial u}{\partial \nu} =0 &\hbox{for $|x|=R$} \end{cases} \] does not have any positive radial solution whose derivative changes sign. 
We could not find any similar statement for a nonlinearity like $|x|^\alpha u^p$. \end{remark} \section{Some numerical results} This section is devoted to the description of some numerical experiments. Such results are purely numerical and non rigorous; the purpose of the authors is, on one hand, to give some examples of the existence result proved in Section \ref{sec:henon}. On the other hand we want to point out some features of interest in the behavior of the solutions for the shooting problem \eqref{eq:shooting} when $\gamma$ diverges to $+\infty$. These numerical experiments seem to indicate that the structure of the set of radial solutions of problem (\ref{eq:Henon}) is still far to be understood, and deserves further study. \subsection{The monotone solution} The monotone solution for the H\'enon equation corresponds to a choice of the parameter $\gamma$ such that the first maximum point $R_\gamma$ of the solution of the shooting equation coincides with $1$. \noindent \begin{minipage}{0.5\textwidth} In the table on the right we have collected some values of $\gamma$, depending on $N$, $\alpha$ and $p$, for which $|R_\gamma - 1|<10^{-6}$. The starred values of $p$ are the H\'enon critical exponents, that is $p= p_\alpha-1$. In the first four rows of the table, we fix the dimension $N$ and the exponent $\alpha$ and we choose three values of $p$: subcritical, critical and supercri\-ti\-cal. We observe that $\gamma$ seems to be a decreasing function of $p$. In the last row we have investigated the behavior of $\gamma$ as $\alpha$ becomes larger and larger while $p$ has a supercritical value. The numerical results seem to show that $\gamma$ continues to lie near $1$. This is probably due to the fast convergence of $R_\gamma$ to $0$ as $\gamma \to +\infty$, see Lemma~\ref{lem:5}. \end{minipage} \begin{minipage}{0.45\textwidth} \[ \begin{array}{|c|c|c|c|} \hline N & \alpha & p & \gamma \\[3pt] \hline\hline & & 5 & 1.0816 \\[3pt] \;3\; & 3 & 11^* & 0.9710 \\[3pt] & & 15 & 0.9487 \\[3pt] \hline & & 3 & 1.3739 \\[3pt] 4 & 5 & 8^* & 1.0306 \\[3pt] & & 12 & 0.9872 \\[3pt] \hline & & 4 & 1.3102 \\[3pt] 5 & 9 & \;25/3^*\; & 1.0632 \\[3pt] & & 12 & 1.0147 \\[3pt] \hline & & 11/4 & 1.2175 \\[3pt] 10 & 5 & 5^* & 1.0688 \\[3pt] & & 10 & 1.0105 \\[3pt] \hline\hline & 50 & 20 & 1.0485 \\[3pt] \; 10 \; & 100 & 50 & 1.0114 \\[3pt] &\;200 \; & 50 &\;1.0135\; \\[3pt] \hline \end{array} \] \end{minipage} \subsection{Numerical evidence of oscillating solutions} We have considered so far the existence of a radial solution with Neumann boundary conditions with the \emph{first} stationary point at $1$. Recalling Ni's results about the existence of oscillating radial solutions for some elliptic problem on $\mathbb{R}^n$ (see \cite{ni83}) we now investigate the existence on non--monotone solutions for our problem. We address two natural questions. Can the shooting solution, which is defined on the whole interval $[0,+\infty)$, have stationary points different from $R_\gamma$? Can we choose $\alpha$, $p$, and $\gamma$ such that one of such points coincides with $1$? Although we do not have any rigorous proof of these facts, some numerical experiments show that the answers to these questions strongly depend on the parameters $\alpha$ and $p$, so that it seems very unlikely to obtain a single general result. \begin{figure} \caption{the three plots represent the shooting solution to problem \eqref{eq:shooting}, corresponding to different values of $\gamma$, when $N=4$, $\alpha=5$ and $p= 8$ (critical). 
In the horizontal axis the radial variable $r$ varies to the interval $[0,10]$. In the figure above--left we choose $\gamma=1.034$ and we obtain the first stationary point $R_\gamma \approx 1$; above--right, when $\gamma \approx 155$, the second stationary point approximates $1$. If $\gamma \approx 2584$ the third stationary point satisfies $R^3_\gamma \approx 1$. In the second and third picture, the first maximum point $R_\gamma$ is not printed in order to obtain a reasonable scaling, and we just plot the solution for $r>0.2$. } \label{figura1} \end{figure} Fix for instance $N=4$ and $\alpha=5$. The critical H\'enon exponent $p_\alpha-1$ is in this case $p=8$. For such values of the parameters, the monotone solution corresponds to $\gamma\approx1.034$. If we compute the solution on a larger interval we observe that it oscillates. Let us call $R^n_\gamma$ the $n$--th stationary point of this solution. Lemma \ref{lem:5} suggest that, as $\gamma \to +\infty$, $R^n_\gamma$ decreases also for $n>1$; for this reason we increase $\gamma$ in order to obtain $R_\gamma^n=1$ for some $n>1$. To obtain $R^2_\gamma \approx 1$ we need to choose $\gamma \approx 155$, while for $R^3_\gamma \approx 1$, a value of $\gamma \approx 2584$ will do (see Figure \ref{figura1}). These cases point towards the possibility of the coexistence of multiple radial positive solutions. Rather surprisingly this interesting behavior gently disappears when $p$ becomes supercritical. Indeed the oscillations become less and less sharp as $p$ increases and disappear when $p$ is greater than some $\bar p$ (in the described case $\bar p \approx 16$). This behavior is illustrated in Figure \ref{figura2} and it suggests a strong difference with the autonomous case studied by Ni in \cite{ni83}. \begin{figure} \caption{in the picture we compare the plots of the shooting solutions when $N=4$, $\alpha=5$ and $\gamma=1.034$, when $p$ varies from the critical H\'enon exponent $p=8$ to some supercritical values. In the horizontal axis the radial variable $r$ varies in the interval $[0,10]$. When $p$ is critical the numerical solution oscillates sharply; as $p$ increases the oscillations become weaker and weaker and they disappear when $p>16$. } \label{figura2} \end{figure} \end{document}
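A minimal numerical sketch of the shooting procedure described in Sections 2 and 4 follows. This is not the authors' code: the offset $r_0$, the tolerances and the bisection bracket are ad-hoc choices, made here for the case $N=4$, $\alpha=5$, $p=8$ of the table, where the reported value is $\gamma \approx 1.03$.

from scipy.integrate import solve_ivp
from scipy.optimize import brentq

N, alpha, p = 4, 5.0, 8.0

def rhs(r, y):
    # first-order system for u'' = -(N-1)/r * u' + u - r^alpha * u^p
    u, v = y
    return [v, -(N - 1) / r * v + u - r**alpha * u**p]

def u_prime_at_one(gamma, r0=1e-6):
    # integrate the shooting problem u(0) = gamma, u'(0) = 0, starting just
    # off r = 0 to avoid the (N-1)/r singularity, and return u'(1)
    sol = solve_ivp(rhs, (r0, 1.0), [gamma, 0.0], rtol=1e-10, atol=1e-12)
    return sol.y[1, -1]

# gamma with u'(1) = 0, i.e. R_gamma = 1, via bisection on a hand-picked bracket
gamma_star = brentq(u_prime_at_one, 0.5, 2.0)
print(gamma_star)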
arXiv
The equation of the hyperbola shown below can be written as \[\frac{(y - k)^2}{a^2} - \frac{(x - h)^2}{b^2} = 1.\]Find $h + k + a + b.$ [asy] unitsize(0.3 cm); real upperhyper (real x) { return (2*sqrt((x - 6)^2/16 + 1) - 5); } real lowerhyper (real x) { return (-2*sqrt((x - 6)^2/16 + 1) - 5); } int i, n = 10; for (i = -n; i <= n; ++i) { draw((i,-n)--(i,n),gray(0.7)); draw((-n,i)--(n,i),gray(0.7)); } draw((0,-n)--(0,n)); draw((-n,0)--(n,0)); draw(graph(upperhyper,-10,10),red); draw(graph(lowerhyper,-3,10),red); draw(extension((-10,-10),(10,-10),(6,-5),(6,-5) + (4,2))--extension((10,10),(10,-10),(6,-5),(6,-5) + (4,2)),dashed); draw(extension((-10,-10),(-10,10),(6,-5),(6,-5) + (4,-2))--extension((10,10),(10,-10),(6,-5),(6,-5) + (4,-2)),dashed); dot((6,-5)); [/asy] The center of the hyperbola is $(6,-5).$ The distance from the center to a vertex is $a = 2.$ The slopes of the asymptotes are $\pm \frac{1}{2},$ so $b = 4.$ Thus, $h + k + a + b = 6 + (-5) + 2 + 4 = \boxed{7}.$
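For this orientation the asymptotes have slopes $\pm \frac{a}{b}$, which is why slopes of $\pm \frac{1}{2}$ together with $a = 2$ force $b = 4$. A small numerical check (added here, not part of the original solution) that the plotted upper branch $y = 2\sqrt{(x - 6)^2/16 + 1} - 5$ satisfies the equation with these values:

h, k, a, b = 6, -5, 2, 4
for x in (-3.0, 0.0, 6.0, 10.0):
    y = 2 * ((x - 6) ** 2 / 16 + 1) ** 0.5 - 5
    assert abs((y - k) ** 2 / a ** 2 - (x - h) ** 2 / b ** 2 - 1) < 1e-9
print(h + k + a + b)   # 7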
Math Dataset
\begin{document} \title{Four-Qubit Monogamy and Four-Way Entanglement} \author{DaeKil Park$^{1,2}$} \affiliation{$^1$Department of Electronic Engineering, Kyungnam University, Changwon 631-701, Korea \\ $^2$Department of Physics, Kyungnam University, Changwon 631-701, Korea } \begin{abstract} We examine the various properties of the three four-qubit monogamy relations, all of which introduce the power factors in the three-way entanglement to reduce the tripartite contributions. On the analytic ground as much as possible we try to find the minimal power factors, which make the monogamy relations hold if the power factors are larger than the minimal powers. Motivated to the three-qubit monogamy inequality we also examine whether those four-qubit monogamy relations provide the SLOCC-invariant four-way entanglement measures or not. Our analysis indicate that this is impossible provided that the monogamy inequalities are derived merely by introducing weighting power factors. \end{abstract} \maketitle \section{Introduction} Recently, quantum technology, i.e. technology based on quantum mechanics, attracts much attention to overcome various limitations of classical technology such as computational speed of computer and insecurity of cryptography. Quantum entanglement\cite{text,horodecki09} is the most important physical resource to develop the quantum technology because it plays a crucial role in the various quantum information processing. In fact, it is used in quantum teleportation\cite{teleportation}, superdense coding\cite{superdense}, quantum cloning\cite{clon}, and quantum cryptography\cite{cryptography,cryptography2}. It is also quantum entanglement, which makes the quantum computer\footnote{The current status of quantum computer technology was reviewed in Ref.\cite{qcreview}.} outperform the classical one\cite{computer}. Thus, it is very important to understand how to quantify and how to characterize the entanglement. One of the surprising property of the quantum entanglement arises in its distribution in the multipartite system. It is usually called the monogamy property. For example, let us consider the tripartite quantum state $\ket{\psi}_{ABC}$ in the qubit system. Authors in Ref. \cite{ckw} have shown the inequality \begin{equation} \label{ckw} {\cal C}^2_{A|(BC)} \geq {\cal C}^2_{A|B} + {\cal C}^2_{A|C} \end{equation} where ${\cal C}$ is concurrence\cite{woot-98}, one of the entanglement measure for bipartite system. This inequality, usually called CKW inequality, implies that the entanglement (measured by the squared concurrence) between $A$ and the remaining parties always exceeds entanglement between $A$ and $B$ plus entanglement between $A$ and $C$. This means that the more $A$ and $B$ are entangled, the lesser $A$ and $C$ are entangled. This is why the quantum cryptography is more secure than classical one. The inequality (\ref{ckw}) is strong in a sense that the three-qubit W-state\cite{dur00} \begin{equation} \label{w3} \ket{\mbox{W}_3} = \frac{1}{\sqrt{3}} \left( \ket{001} + \ket{010} + \ket{100} \right) \end{equation} saturates the inequality. Another surprising property of Eq. (\ref{ckw}) is the fact that the leftover in the inequality \begin{equation} \label{residual} \tau_{ABC} = {\cal C}^2_{A|(BC)} - \left( {\cal C}^2_{A|B} + {\cal C}^2_{A|C} \right), \end{equation} quantifies the true three-way entanglement. 
For general three-qubit pure state $\ket{\psi} = \sum_{i,j,k=0}^1 \psi_{ijk} \ket{ijk}_{ABC}$ the leftover $\tau_{ABC}$, which is called the residual entanglement\footnote{In this paper $\sqrt{\tau_{ABC}}$ is called the three-tangle.}, reduces to \begin{equation} \label{tangle3} \tau_{ABC} = \bigg|2 \epsilon_{i_1 i_2} \epsilon_{i_3 i_4} \epsilon_{j_1 j_2} \epsilon_{j_3 j_4} \epsilon_{k_1 k_3} \epsilon_{k_2 k_4} \psi_{i_1 j_1 k_1} \psi_{i_2 j_2 k_2} \psi_{i_3 j_3 k_3} \psi_{i_4 j_4 k_4} \bigg|. \end{equation} From this expression one can show that $\tau_{ABC}$ is invariant under a stochastic local operation and classical communication (SLOCC)\cite{bennet00}. Then, it is natural to ask whether or not such surprising properties are maintained in the monogamy relation of multipartite system. Subsequently, the generalization of Eq. (\ref{ckw}) was discussed in Ref. \cite{osborne06-1}. As Ref. \cite{osborne06-1} has shown analytically, the following monogamy relation \begin{equation} \label{tofv} {\cal C}^2_{q_1|(q_2 \cdots q_n)} \geq {\cal C}^2_{q_1|q_2} + {\cal C}^2_{q_1|q_3} + \cdots + {\cal C}^2_{q_1|q_n} \end{equation} holds in the $n$-qubit pure-state system. However, it is shown that the leftover of Eq. (\ref{tofv}) is not entanglement monotone. In order to remove this unsatisfactory feature the authors in Ref. \cite{bai07-1,bai08-1} considered the average leftover of the monogamy relation (\ref{tofv}). For example, they conjectured that in four-qubit system the following average leftover \begin{equation} \label{tangle-1} \theta_{ABCD} = \frac{\pi_A + \pi_B + \pi_C + \pi_D}{4} \end{equation} is a monotone, where $\pi_A = {\cal C}^2_{A|(BCD)} - ({\cal C}^2_{A|B} + {\cal C}^2_{A|C} + {\cal C}^2_{A|D})$ and other ones are derived by changing the focusing qubit. Even though $\theta_{ABCD}$ might be an entanglement monotone, it is obvious that it cannot quantify a true four-way entanglement because it detects the partial entanglement. For example, $\theta_{ABCD} (g_3) = 3 / 4$, where $\ket{g_3} = \ket{\mbox{GHZ}_3} \otimes \ket{0}$ and $\ket{\mbox{GHZ}_3}$ is a three-qubit Greenberger-Horne-Zeilinger (GHZ) state defined as \begin{equation} \label{ghz3} \ket{\mbox{GHZ}_3} = \frac{\ket{000} + \ket{111}}{\sqrt{2}}. \end{equation} In Ref. \cite{regula14-1} another following multipartite monogamy relation is examined: \begin{equation} \label{smonogamy-1} {\cal C}^2_{q_1|(q_2 \cdots q_n)} \geq \underbrace{\sum_{j=2}^n {\cal C}^2_{q_1|q_j}}_{2-\mbox{partite}} + \underbrace{\sum_{k > j=2}^n \left[ t_{q_1|q_j|q_k} \right]^{\mu_3}}_{3-\mbox{partite}} + \cdots + \underbrace{\sum_{\ell = 2}^n \left[ t_{q_1|q_2|\cdots | q_{\ell-1} | q_{\ell+ 1} | \cdots |q_n} \right]^{\mu_{n-1}}}_{(n-1)-\mbox{partite}}. \end{equation} In Eq. (\ref{smonogamy-1}) the power factors $\left\{\mu_m \right\}_{m=3}^{n-1}$ are included to regulate the weight assigned to the different $m$-partite contributions. If all power factors $\mu_m$ go to infinity, Eq. (\ref{smonogamy-1}) reduces to Eq. (\ref{tofv}). As a tripartite entanglement measure the residual entanglement or three-tangle can be used independently. Thus, in four-qubit system Eq. 
(\ref{smonogamy-1}) reduces to following two different expressions: \begin{equation} \label{four-monogamy} \Delta_j = t_{1|234} - \left( t_{1|2} + t_{1|3} + t_{1|4} \right) - \left( t_{1|2|3}^{(j)} + t_{1|2|4}^{(j)} + t_{1|3|4}^{(j)} \right) \geq 0 \hspace{1.0cm} (j = 1, 2, 3) \end{equation} where \begin{eqnarray} \label{four-monogamy-1} && t_{1|234} = {\cal C}^2_{1|234} = 4 \mbox{det} \rho_1 \hspace{4.0cm} t_{i|j} = {\cal C}^2 (\rho_{ij}) \hspace{1.0cm} \\ \nonumber && t_{i|j|k}^{(1)} = \left[ \min_{\{p_n, \ket{\psi_n}\}} \sum_n p_n \sqrt{\tau_{ijk}} (\psi_n) \right]^{\mu_1} \hspace{1.0cm} t_{i|j|k}^{(2)} = \left[ \min_{\{p_n, \ket{\psi_n}\}} \sum_n p_n \tau_{ijk} (\psi_n) \right]^{\mu_2}. \end{eqnarray} In Eq. (\ref{four-monogamy-1}) the tripartite entanglements $t_{i|j|k}^{(j)}$ are expressed explicitly as a convex roof\cite{benn96,uhlmann99-1} for mixed states derived by the partial trace of the four-qubit pure states. In particular, the authors of Ref. \cite{regula14-1} conjectured that all four-qubit pure states holds $\Delta_1 \geq 0$ when $\mu_1 \geq 3$. Different expression of the monogamy relation was introduced in Ref. \cite{regula16-1}, which is Eq. (\ref{four-monogamy}) with $j=3$, where \begin{equation} \label{additional-1} t_{i|j|k}^{(3)} = \left[ \min_{\{p_n, \ket{\psi_n}\}} \sum_n p_n \tau_{ijk}^{1 / q} (\psi_n) \right]^{q}. \end{equation} The authors of Ref. \cite{regula16-1} conjectured that $\Delta_3$ with $q = 4$ might be nonnegative for all four-qubit pure states. They also conjectured by making use of their extensive numerical tests that all possible second class states\footnote{This is classified as $L_{abc_2}$ in Ref. \cite{fourP-1}.} \begin{equation} \label{second-class} \ket{G} = {\cal N} \left[ \frac{a + b}{2} \left( \ket{0000} + \ket{1111} \right) + \frac{a - b}{2} \left( \ket{0011} + \ket{1100} \right) + c \left( \ket{0101} + \ket{1010} \right) + \ket{0110} \right] \end{equation} and their SLOCC transformation hold $\Delta_3 \geq 0$ when $q \geq 2.42$, where the parameters $a$, $b$, and $c$ are generally complex, and ${\cal N}$ is a normalization constant given by \begin{equation} \label{normalization1} {\cal N} = \frac{1}{\sqrt{1 + |a|^2 + |b|^2 + 2 |c|^2}}. \end{equation} The purpose of this paper is two kinds. First one is to find the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$, and $(q)_{\min}$ which make $\Delta_j \geq 0$ when the corresponding powers are larger than the minimal powers. Second one is to examine whether or not the leftovers $\Delta_j \hspace{.3cm} (j = 1, 2, 3)$ can be true four-way SLOCC-invariant entanglement measures like the CKW inequality in three-qubit case. In order to explore these issues on the analytical ground as much as possible we confine ourselves into the second class state $\ket{G}$. In Sec. II and Sec. III various tangles are computed analytically. In fact, the three-tangle of $\ket{G}$ was computed in Ref.\cite{ostr16-1}. Since, however, there is some mistake in Ref.\cite{ostr16-1}, we compute $\tau_{ABC}^{1/q} \hspace{.3cm} (q = 1, 2, \cdots)$ of $\ket{G}$ analytically in Sec. III. In Sec. IV we compute $\Delta_j$ analytically for few special cases. Exploiting the numerical results we compute the minimal powers for the cases. In Sec. V we compute the minimal powers for more general cases. In Sec. VI we examine whether or not $\Delta_j$ with particular powers can be SLOCC-invariant four-way entanglement measures. 
Our analysis indicate that this is impossible provided that the monogamy inequalities are derived merely by introducing weighting power factors. In Sec. VII a brief conclusion is given. \section{one- and two-tangles} In order to compute the one-tangle we derive the state of the first qubit $\rho_1$: \begin{equation} \label{one-tangle-1} \rho_1 = \mbox{tr}_{234} \ket{G} \bra{G} = \frac{{\cal N}^2}{4 {\cal N}_2^2} \ket{0} \bra{0} + \frac{{\cal N}^2}{4 {\cal N}_1^2} \ket{1} \bra{1} \end{equation} where ${\cal N}_1$ and ${\cal N}_2$ are \begin{equation} \label{normalization2} {\cal N}_1 = \frac{1}{\sqrt{2 (|a|^2 + |b|^2 + 2 |c|^2)}} \hspace{1.0cm} {\cal N}_2 = \frac{1}{\sqrt{2 (2 + |a|^2 + |b|^2 + 2 |c|^2)}}. \end{equation} It is easy to show the equality $1 / {\cal N}_1^2 + 1 / {\cal N}_2^2 = 4 / {\cal N}^2$, which guarantees the normalization of $\rho_1$. Thus, the one-tangle of $\rho_1$ is given by \begin{equation} \label{one-tangle-2} t_{1|234} \equiv 4 \mbox{det} \rho_1 = \frac{{\cal N}^4}{4 {\cal N}_1^2 {\cal N}_2^2}. \end{equation} In fact, one can show $t_{1|234} = t_{2|134} = t_{3|124} = t_{4|123}$. In order to compute the two-tangles we derive the two-qubit states, which are obtained by taking the partial trace over the remaining qubits. The final results can be represented as the following matrices in the computational basis: \begin{eqnarray} \label{two-tangle-1} &&\rho_{12} = {\cal N}^2 \left( \begin{array}{cccc} \frac{|a|^2 + |b|^2}{2} & 0 & 0 & \frac{|a|^2 - |b|^2}{2} \\ 0 & 1 + |c|^2 & c^* & 0 \\ 0 & c & |c|^2 & 0 \\ \frac{|a|^2 - |b|^2}{2} & 0 & 0 & \frac{|a|^2 + |b|^2}{2} \end{array} \right) \hspace{.5cm} \rho_{13} = {\cal N}^2 \left( \begin{array}{cccc} \beta & 0 & 0 & \alpha \\ 0 & \gamma + 1 & \delta^* & 0 \\ 0 & \delta & \gamma & 0 \\ \alpha & 0 & 0 & \beta \end{array} \right) \nonumber \\ && \hspace{4.0cm} \rho_{14} = {\cal N}^2 \left( \begin{array}{cccc} \gamma' + 1 & 0 & 0 & \delta' \\ 0 & \beta' & \alpha' & 0 \\ 0 & \alpha' & \beta' & 0 \\ (\delta')^* & 0 & 0 & \gamma' \end{array} \right) \end{eqnarray} where \begin{eqnarray} \label{two-tangle-2} &&\alpha = \mbox{Re} \left[(a+b) c^* \right] \hspace{.5cm} \beta = \frac{|a + b|^2}{4} + |c|^2 \hspace{.5cm} \gamma = \frac{|a-b|^2}{4} \hspace{.5cm} \delta = \frac{a - b}{2} \\ \nonumber &&\alpha' = \mbox{Re} \left[(a-b) c^* \right] \hspace{.5cm} \beta' = \frac{|a - b|^2}{4} + |c|^2 \hspace{.5cm} \gamma' = \frac{|a+b|^2}{4} \hspace{.5cm} \delta' = \frac{a + b}{2}. \end{eqnarray} Following the Wootters procedure\cite{woot-98} one can compute the two-tangles of the two-qubit reduced states $t_{i|j} = {\cal C}^2 (\rho_{ij})$ straightforwardly. The final expressions of the concurrences can be written as follows: \begin{eqnarray} \label{two-tangle-3} &&{\cal C} (\rho_{12}) = \left\{ \begin{array}{cc} {\cal N}^2 \max \left[2 |c| - (|a|^2 + |b|^2 ), 0 \right] & \hspace{.3cm} |c| \left[\sqrt{1 + |c|^2} + 1 \right] \geq \left\{|a|^2, |b|^2 \right\} \\ {\cal N}^2 \max \left[|a|^2 - |b|^2 - 2 |c| \sqrt{1 + |c|^2}, 0 \right] & \hspace{.3cm} |a|^2 \geq \left\{|b|^2, |c| \left[\sqrt{1 + |c|^2} + 1 \right] \right\} \\ {\cal N}^2 \max \left[|b|^2 - |a|^2 - 2 |c| \sqrt{1 + |c|^2}, 0 \right] & \hspace{.3cm} |b|^2 \geq \left\{|a|^2, |c| \left[\sqrt{1 + |c|^2} + 1 \right] \right\} \end{array} \right. 
\nonumber \\ &&{\cal C} (\rho_{13}) = \left\{ \begin{array}{cc} 2 {\cal N}^2 \max \left[ |\delta| - \beta, 0 \right] & \hspace{.3cm} \sqrt{\gamma (\gamma + 1)} + |\delta| \geq \left\{ \beta + \alpha, \beta - \alpha \right\} \\ 2 {\cal N}^2 \max \left[ \alpha - \sqrt{\gamma (\gamma + 1)}, 0 \right] & \hspace{.3cm} \beta + \alpha \geq \left\{ \beta - \alpha, \sqrt{\gamma (\gamma + 1)} + |\delta| \right\} \\ 2 {\cal N}^2 \max \left[- \alpha - \sqrt{\gamma (\gamma + 1)}, 0 \right] & \hspace{.3cm} \beta - \alpha \geq \left\{ \beta + \alpha, \sqrt{\gamma (\gamma + 1)} + |\delta| \right\} \end{array} \right. \\ \nonumber &&{\cal C} (\rho_{14}) = \left\{ \begin{array}{cc} 2 {\cal N}^2 \max \left[ |\delta'| - \beta', 0 \right] & \hspace{.3cm} \sqrt{\gamma' (\gamma' + 1)} + |\delta'| \geq \left\{ \beta' + \alpha', \beta' - \alpha' \right\} \\ 2 {\cal N}^2 \max \left[ \alpha' - \sqrt{\gamma' (\gamma' + 1)}, 0 \right] & \hspace{.3cm} \beta' + \alpha' \geq \left\{ \beta' - \alpha', \sqrt{\gamma' (\gamma' + 1)} + |\delta'| \right\} \\ 2 {\cal N}^2 \max \left[- \alpha' - \sqrt{\gamma' (\gamma' + 1)}, 0 \right] & \hspace{.3cm} \beta' - \alpha' \geq \left\{ \beta' + \alpha', \sqrt{\gamma' (\gamma' + 1)} + |\delta'| \right\} \end{array} \right. \end{eqnarray} where $a \geq \left\{b, c \right\}$ means $a \geq b$ and $a \geq c$. \section{three-tangle} In order to compute the three-tangles we should derive the three-qubit states by taking partial trace over the remaining qubit. For example, $\rho_{123}$ can be written as \begin{equation} \label{three-tangle-1} \rho_{123} \equiv \mbox{tr}_4 \ket{G} \bra{G} = p \ket{\psi_1} \bra{\psi_1} + (1 - p) \ket{\psi_2} \bra{\psi_2}, \end{equation} where $p = {\cal N}^2 / (4 {\cal N}_1^2)$ and \begin{eqnarray} \label{three-tangle-2} &&\ket{\psi_1} = {\cal N}_1 \left[ (a - b) \ket {001} + 2 c \ket{010} + (a + b) \ket{111} \right] \\ \nonumber &&\ket{\psi_2} = {\cal N}_2 \left[ (a + b) \ket{000} + 2 \ket{011} + 2 c \ket{101} + (a -b) \ket{110} \right]. \end{eqnarray} The residual entanglements of $\ket{\psi_1}$ and $\ket{\psi_2}$ are \begin{equation} \label{three-tangle-3} \tau_3 (\psi_1) = 0 \hspace{1.0cm} \tau_3 (\psi_2) = 64 {\cal N}_2^4 |(a^2 - b^2) c|. \end{equation} In order to compute the three-way entanglements of $\rho_{123}$ we consider the superposed state \begin{equation} \label{three-tangle-4} \ket{\Psi (p, \varphi)} = \sqrt{p} \ket{\psi_1} + e^{i \varphi} \sqrt{1 - p} \ket{\psi_2}. \end{equation} If the phase factor $\varphi$ is chosen as \begin{equation} \label{three-tangle-5} \varphi = \varphi_{\pm} = - \frac{\theta_1 - \theta_2}{2} \pm \frac{\pi}{2} \end{equation} with $\theta_1 = \mbox{Arg} [(a^2 - b^2) c ]$ and $\theta_2 = \mbox{Arg} [(a^2 - c^2) (b^2 - c^2)]$, the residual entanglement of $\ket{\Psi (p, \varphi)}$ becomes \begin{equation} \label{three-tangle-6} \tau_3 \left( \Psi (p, \varphi_{\pm}) \right) = 64 {\cal N}_1^2 {\cal N}_2^2 (1 - z) |(a^2 - c^2) (b^2 - c^2)| (1 - p) |p - p_0| \end{equation} where \begin{equation} \label{three-tangle-7} z = - \frac{{\cal N}_2^2}{{\cal N}_1^2} \left| \frac{(a^2 - b^2) c}{(a^2 - c^2) (b^2 - c^2)} \right| \hspace{1.0cm} p_0 = \frac{z}{z - 1}. \end{equation} Since $z \leq 0$, we get $ 0 \leq p_0 \leq 1$. Thus, $\tau_3 \left(\Psi (p, \varphi_{\pm}) \right)$ becomes zero at $p = p_0$. 
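These expressions can be checked numerically. The following minimal Python sketch (assuming \texttt{numpy}; the test values of $a$, $b$, and $c$ are arbitrary) evaluates the $\epsilon$-contraction of Eq. (\ref{tangle3}) directly, compares $\tau_3(\psi_1)$ and $\tau_3(\psi_2)$ with Eq. (\ref{three-tangle-3}), and evaluates $\tau_3$ of the superposition at $p=p_0$, where Eq. (\ref{three-tangle-6}) predicts a zero.
\begin{verbatim}
import numpy as np

eps = np.array([[0.0, 1.0], [-1.0, 0.0]])

def three_tangle(psi):
    # epsilon-contraction of the residual entanglement, Eq. (tangle3)
    t = psi.reshape(2, 2, 2)
    s = np.einsum('ab,cd,pq,rs,wy,xz,apw,bqx,cry,dsz->',
                  eps, eps, eps, eps, eps, eps, t, t, t, t)
    return abs(2 * s)

a, b, c = 0.7, 0.3 + 0.2j, 0.5j            # arbitrary test point
N1 = 1 / np.sqrt(2 * (abs(a)**2 + abs(b)**2 + 2 * abs(c)**2))
N2 = 1 / np.sqrt(2 * (2 + abs(a)**2 + abs(b)**2 + 2 * abs(c)**2))

psi1 = np.zeros(8, complex); psi2 = np.zeros(8, complex)
psi1[[1, 2, 7]]    = N1 * np.array([a - b, 2 * c, a + b])      # Eq. (three-tangle-2)
psi2[[0, 3, 5, 6]] = N2 * np.array([a + b, 2, 2 * c, a - b])

print(three_tangle(psi1), three_tangle(psi2), 64 * N2**4 * abs((a**2 - b**2) * c))

# phase and mixing parameter of Eqs. (three-tangle-5) and (three-tangle-7)
th1, th2 = np.angle((a**2 - b**2) * c), np.angle((a**2 - c**2) * (b**2 - c**2))
phi = -(th1 - th2) / 2 + np.pi / 2
z = -(N2**2 / N1**2) * abs((a**2 - b**2) * c / ((a**2 - c**2) * (b**2 - c**2)))
p0 = z / (z - 1)
Psi0 = np.sqrt(p0) * psi1 + np.exp(1j * phi) * np.sqrt(1 - p0) * psi2
print(p0, three_tangle(Psi0))   # Eq. (three-tangle-6) predicts a zero at p = p0
\end{verbatim}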
It is easy to show that at the region $0 \leq p \leq p_0$ the sign of the second derivative of $\tau_3 \left( \Psi (p, \varphi_{\pm}) \right)$ becomes \begin{equation} \label{three-tangle-8} \frac{d^2} {dp^2} \tau_3 \left( \Psi (p, \varphi_{\pm}) \right) \geq 0 \hspace{1.0cm} \frac{d^2}{dp^2} \tau_3^{\frac{1}{q}} \left( \Psi (p, \varphi_{\pm}) \right) \leq 0 \hspace{.3cm} (q = 2, 3, 4, \cdots). \end{equation} Since the three-way entanglement $t_{1|2|3}$ should be convex in the entire range of $p$, we have to adopt an appropriate convexification procedure appropriately. For, example, the optimal decomposition of $t^{(1)}_{1|2|3}$ is \begin{eqnarray} \label{three-tangle-9} \rho_{123} (p) = \left\{ \begin{array}{c} \frac{p}{2 p_0} \left[ \ket{\Psi (p_0, \varphi_+ ) } \bra{\Psi (p_0, \varphi_+ ) } + \ket{\Psi (p_0, \varphi_- ) } \bra{\Psi (p_0, \varphi_- ) } \right] + \frac{p_0 - p}{p_0} \ket{\psi_2} \bra{\psi_2} \\ \hspace{8.0cm} (0 \leq p \leq p_0) \\ \frac{1 - p}{2 (1 - p_0)} \left[ \ket{\Psi (p_0, \varphi_+ ) } \bra{\Psi (p_0, \varphi_+ ) } + \ket{\Psi (p_0, \varphi_- ) } \bra{\Psi (p_0, \varphi_- ) } \right] + \frac{p - p_0}{1 - p_0} \ket{\psi_1} \bra{\psi_1}. \\ \hspace{8.0cm} (p_0 \leq p \leq 1) \end{array} \right. \end{eqnarray} The resulting $t^{(1)}_{1|2|3}$ is \begin{eqnarray} \label{three-tangle-10} t^{(1)}_{1|2|3} = \left\{ \begin{array}{cc} \left[ 8 {\cal N}_2^2 \sqrt{|(a^2 - b^2) c|} \left(1 - \frac{p}{p_0} \right) \right]^{\mu_1} & \hspace{1.0cm} (0 \leq p \leq p_0) \\ 0 & \hspace{1.0cm} (p_0 \leq p \leq 1). \end{array} \right. \end{eqnarray} The optimal decomposition for $t^{(2)}_{1|2|3}$ at the region $0 \leq p \leq p_0$ is different from Eq. (\ref{three-tangle-9}) as \begin{equation} \label{three-tangle-11} \rho_{123} (p) = \frac{1}{2} \left[ \ket{\Psi (p, \varphi_+ ) } \bra{\Psi (p, \varphi_+ ) } + \ket{\Psi (p, \varphi_- ) } \bra{\Psi (p, \varphi_- ) } \right] \end{equation} and the resulting $t^{(2)}_{1|2|3}$ becomes \begin{eqnarray} \label{three-tangle-12} t^{(2)}_{1|2|3} = \left\{ \begin{array}{cc} \bigg( 64 {\cal N}_1^2 {\cal N}_2^2 (1 - z) |(a^2 - c^2) (b^2 - c^2)| (1 - p) (p_0 - p) \bigg)^{\mu_2} & \hspace{1.0cm} (0 \leq p \leq p_0) \\ 0 & \hspace{1.0cm} (p_0 \leq p \leq 1). \end{array} \right. \end{eqnarray} The optimal decomposition for $t^{(3)}_{1|2|3}$ is exactly the same with that of $t^{(1)}_{1|2|3}$ and the resulting $t^{(3)}_{1|2|3}$ is \begin{eqnarray} \label{three-tangle-13} t^{(3)}_{1|2|3} = \left\{ \begin{array}{cc} 64 {\cal N}_2^4 |(a^2 - b^2) c| \left(1 - \frac{p}{p_0} \right)^q & \hspace{1.0cm} (0 \leq p \leq p_0) \\ 0 & \hspace{1.0cm} (p_0 \leq p \leq 1). \end{array} \right. \end{eqnarray} One can show straightforwardly that $t^{(a)}_{i|j|k} \hspace{.2cm} (a=1,2,3)$ of other three parties are the same with $t^{(a)}_{1|2|3}$ in the second class $\ket{G}$. \section{Few special cases} In this section we examine the minimal power which makes $\Delta_j$ to be positive when the power factors are larger than the corresponding minimal powers. \subsection{special case I: $b = c = i a$} In this subsection we examine the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$, $(q)_{\min}$, which make $\Delta_j$ positive when $a$ is positive and $b = c = i a$. In this case the normalization constants given in Eqs. (\ref{normalization1}) and (\ref{normalization2}) become \begin{equation} \label{special-1} {\cal N} = \frac{1}{\sqrt{1 + 4 a^2}} \hspace{.3cm} {\cal N}_1 = \frac{1}{\sqrt{8} a} \hspace{.3cm} {\cal N}_2 = \frac{1}{2 \sqrt{1 + 2 a^2}}. 
\end{equation} Thus, the one-tangle $t_{1|234}$ simply reduces to \begin{equation} \label{special-2} t_{1|234} = \frac{8 a^2 (1 + 2 a^2)}{(1 + 4 a^2)^2}. \end{equation} Since Eq. (\ref{two-tangle-2}) yields \begin{equation} \label{special-3} \alpha = - \alpha' = a^2 \hspace{.5cm} \beta = \beta' = \frac{3}{2} a^2 \hspace{.5cm} \gamma = \gamma' = \frac{a^2}{2} \hspace{.5cm} \delta^* = \delta' = \frac{1 + i}{2} a, \end{equation} the concurrences given in Eq. (\ref{two-tangle-3}) become \begin{eqnarray} \label{special-4} &&{\cal C} (\rho_{12}) = \left\{ \begin{array}{cc} \frac{2 a (1 - a)}{1 + 4 a^2} & \hspace{2.0cm} 0 \leq a \leq 1 \\ 0 & \hspace{2.0cm} 1 \leq a \end{array} \right. \\ &&{\cal C} (\rho_{13}) = {\cal C} (\rho_{14}) = \left\{ \begin{array}{cc} \frac{a}{1 + 4 a^2} (\sqrt{2} - 3 a) & \hspace{2.0cm} 0 \leq a \leq \frac{\sqrt{2}}{3} \\ 0 & \hspace{2.0cm} \frac{\sqrt{2}}{3} \leq a \leq \sqrt{\frac{2}{3}} \\ \frac{a}{1 + 4 a^2} (2 a - \sqrt{2 + a^2}) & \hspace{2.0cm} \sqrt{\frac{2}{3}} \leq a. \end{array} \right. \end{eqnarray} The parameters $p$, $z$, and $p_0$ defined in the previous section are given by $p = 2 a^2 / (1 + 4a^2)$, $z = -\infty$, and $p_0 = 1$ in this special case. Using these the various three-way entanglements become \begin{equation} \label{special-5} t^{(1)}_{1|2|3} = \left( \frac{\sqrt{8 a^3}}{1 + 4 a^2} \right)^{\mu_1} \hspace{1.0cm} t^{(2)}_{1|2|3} = \left( \frac{8 a^3}{(1 + 4 a^2)^2} \right)^{\mu_2} \hspace{1.0cm} t^{(3)}_{1|2|3} = \frac{8 a^3 (1 + 2 a^2)^{q-2}}{(1 + 4 a^2)^q}. \end{equation} In this special case $t^{(1)}_{1|2|3} = t^{(2)}_{1|2|3}$ when $\mu_1 = 2 \mu_2$. However this relation does not hold generally. Combining Eqs. (\ref{special-2}), ({\ref{special-4}), and (\ref{special-5}), one can compute $\Delta_j$ defined in Eq. (\ref{four-monogamy}), whose expressions are \begin{equation} \label{special-6} \Delta_j = -3 t^{(j)}_{1|2|3} + \frac{2 a^2} {1 + 4 a^2} f(a) \end{equation} where \begin{eqnarray} \label{special-7} f(a) = \left\{ \begin{array}{cc} a (4 + 6 \sqrt{2} - 3 a) & \hspace{2.0cm} 0 \leq a \leq \frac{\sqrt{2}}{3} \\ 2 (1 + 2 a + 3 a^2) & \hspace{2.0cm} \frac{\sqrt{2}}{3} \leq a \leq \sqrt{\frac{2}{3}} \\ a (4 + a + 4 \sqrt{2 + a^2}) & \hspace{2.0cm} \sqrt{\frac{2}{3}} \leq a \leq 1 \\ 2 + 3 a^2 + 4 a \sqrt{2 + a^2}. & \hspace{2.0cm} 1 \leq a \end{array} \right. \end{eqnarray} In Fig. 1 we plot the $\mu_1$-dependence of $\Delta_1 > 0$ region and $q$-dependence of $\Delta_3 > 0$ region with varying $a$. From Fig. 1(a) $\Delta_1$ and $\Delta_2$ become positive regardless of $a$ if $\mu_1 = 2 \mu_2 \geq (\mu_1)_{\min}$, where $ (\mu_1)_{\min} = 2.152$. Fig. 1(b) shows that $\Delta_3$ becomes positive regardless of $a$ if $q \geq (q)_{\min}$, where $ (q)_{\min} = 2.305$. \subsection{special case II: $b = c > 0$} In this subsection we examine the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$, $(q)_{\min}$, which make $\Delta_j$ positive when $a$, $b$, and $c$ are real and positive with $b = c$. In this case the normalization constants given in Eqs. (\ref{normalization1}) and (\ref{normalization2}) become \begin{equation} \label{special-8} {\cal N} = \frac{1}{\sqrt{1 + a^2 + 3 b^2}} \hspace{.3cm} {\cal N}_1 = \frac{1}{\sqrt{2 (a^2 + 3 b^2)}} \hspace{.3cm} {\cal N}_2 = \frac{1}{ \sqrt{2 (2 + a^2 + 3 b^2)}}. 
\end{equation} Using the various formula presented in the previous section the one-tangle is given by \begin{equation} \label{special-9} t_{1|234} = \frac{(a^2 + 3 b^2) (2 + a^2 + 3 b^2)}{(1 + a^2 + 3 b^2)^2} \end{equation} and the concurrences are \begin{eqnarray} \label{special-10} &&{\cal C} (\rho_{12}) = \left\{ \begin{array}{cc} {\cal N}^2 \max \left[ 2 b - (a^2 + b^2), 0 \right] & \hspace{2.0cm} b (\sqrt{1 + b^2} + 1) \geq a^2 \\ {\cal N}^2 \max \left[a^2 - b^2 - 2 b \sqrt{1 + b^2}, 0 \right] & \hspace{2.0cm} b (\sqrt{1 + b^2} + 1) \leq a^2 \end{array} \right. \\ \nonumber &&{\cal C} (\rho_{13}) = \left\{ \begin{array}{cc} 2 {\cal N}^2 \max \left[ b (a + b) - \frac{|a - b|}{2} \sqrt{1 + \left( \frac{a - b}{2} \right)^2}, 0 \right] & \hspace{.2cm} \frac{(a + 3 b)^2}{4} \geq \frac{|a - b|}{2} \left( 1 + \sqrt{1 + \left( \frac{a - b}{2} \right)^2} \right) \\ \frac{{\cal N}^2}{2} \max \left[ 2 |a - b| - (a^2 + 2 a b + 5 b^2), 0 \right] & \hspace{.2cm} \frac{(a + 3 b)^2}{4} \leq \frac{|a - b|}{2} \left( 1 + \sqrt{1 + \left( \frac{a - b}{2} \right)^2} \right) \end{array} \right. \\ \nonumber && {\cal C} (\rho_{14}) = \left\{ \begin{array}{cc} 2 {\cal N}^2 \max \left[ b (b - a) - \frac{a + b}{2} \sqrt{1 + \left( \frac{a + b}{2} \right)^2}, 0 \right] & b \geq a \hspace{.2cm}\mbox{and} \hspace{.2cm} \frac{(a - 3 b)^2}{4} \geq \frac{a + b}{2} \left( 1 + \sqrt{1 + \left( \frac{a + b}{2} \right)^2} \right) \\ \frac{{\cal N}^2}{2} \max \left[ 2 (a + b) - (a^2 - 2 a b + 5 b^2), 0 \right]. & \hspace{.2cm} \mbox{elsewhere} \end{array} \right. \end{eqnarray} Since $p_0 = 1$ in this case too, the $t_{1|2|3}^{(j)}$ given in Eqs. (\ref{three-tangle-10}), (\ref{three-tangle-12}), and (\ref{three-tangle-13}) are expressed as \begin{eqnarray} \label{special-11} &&t_{1|2|3}^{(1)} = \left( \frac{2 \sqrt{b |a^2 - b^2|}}{1 + a^2 + 3 b^2} \right)^{\mu_1} \hspace{1.0cm} t_{1|2|3}^{(2)} = \left( \frac{4 b |a^2 - b^2|}{(1 + a^2 + 3 b^2)^2} \right)^{\mu_2} \nonumber \\ &&\hspace{2.0cm} t_{1|2|3}^{(3)} =2^{4 - q} b |a^2 - b^2| \frac{(2 + a^2 + 3 b^2)^{q - 2}}{(1 + a^2 + 3 b^2)^q}. \end{eqnarray} As in the previous special case we have a relation $t_{1|2|3}^{(1)} = t_{1|2|3}^{(2)}$ if $\mu_1 = 2 \mu_2$. In Fig. 2 the full parameter space is divided into two regions, i.e. $\Delta_j > 0$ and $\Delta_j \leq 0$ regions. The division enables us to find the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$, and $q_{\min}$, which makes $\Delta_j \geq 0 \hspace{.2cm} (j=1,2,3)$ regardless of the parameters. Fig. 2 shows $(\mu_1)_{\min} = 2 (\mu_2)_{\min} = 2.01$ and $(q)_{\min} = 2.00$ in this special case. \section{Numerical Analysis} In this section we compute the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$, and $(q)_{\min}$ for some more general cases by making use of numerical approach. First, we consider $b = c = i r a$ with $a > 0$. Since $p_0 = 1$ in this case, $t_{1|2|3}^{(j)}$ can be computed directly. One can show easily $t_{1|1|3}^{(1)} = t_{1|1|3}^{(2)}$ if $\mu_1 = 2 \mu_2$ in this case too. Thus, we have a constraint $(\mu_1)_{\min} = 2 (\mu_2)_{\min}$. 
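The minimal powers quoted below were obtained numerically. The following minimal Python sketch illustrates one way such a search can be set up for the family $b = c = i r a$; it is not the actual code used for the tables, \texttt{numpy} as well as the grid ranges and resolutions are arbitrary choices, $p_0 = 1$ is used (valid for $b=c$), and the three tripartite terms are taken to be equal as argued in Sec. III.
\begin{verbatim}
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])

def concurrence(rho2):
    # Wootters concurrence of a two-qubit density matrix
    R = rho2 @ np.kron(sy, sy) @ rho2.conj() @ np.kron(sy, sy)
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

def reduced(psi, keep, n=4):
    t = psi.reshape((2,) * n)
    out = [i for i in range(n) if i not in keep]
    rho = np.tensordot(t, t.conj(), axes=(out, out))
    return rho.reshape(2 ** len(keep), 2 ** len(keep))

def G_state(a, b, c):
    # second-class state |G> of Eq. (second-class), normalized numerically
    g = np.zeros(16, complex)
    g[0] = g[15] = (a + b) / 2
    g[3] = g[12] = (a - b) / 2
    g[5] = g[10] = c
    g[6] = 1.0
    return g / np.linalg.norm(g)

def pieces(a, b, c):
    # returns (t_{1|234} - sum_j t_{1|j}, base), the tripartite term being base**mu_1;
    # base follows Eq. (three-tangle-10) with p_0 = 1 (valid when b = c)
    g = G_state(a, b, c)
    one = 4 * np.linalg.det(reduced(g, [0])).real
    two = sum(concurrence(reduced(g, [0, j])) ** 2 for j in (1, 2, 3))
    S = abs(a)**2 + abs(b)**2 + 2 * abs(c)**2
    base = 8 / (2 * (2 + S)) * np.sqrt(abs((a**2 - b**2) * c)) * (1 - S / (2 * (1 + S)))
    return one - two, base

r = 1.0
data = [pieces(a, 1j * r * a, 1j * r * a) for a in np.linspace(0.05, 5.0, 200)]
# smallest grid value of mu_1 for which Delta_1 >= 0 on the whole a-grid
mu1_min = next(mu for mu in np.arange(1.8, 3.0, 0.005)
               if all(rest - 3 * base**mu >= 0 for rest, base in data))
print(mu1_min)
\end{verbatim}
The grid value obtained in this way is to be compared with the corresponding entry of Table I below.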
\begin{center} \begin{tabular}{c|ccccccccccc} \hline \hline $r$ & $0.1$ & $0.2$ & $0.3$ & $0.4$ & $0.5$ & $0.6$ & $0.7$ & $0.8$ & $0.9$ & $1.0$ & $10$ \\ \hline $(\mu_1)_{\min}$ & $1.99$ & $2.01$ & $2.07$ & $2.17$ & $2.26$ & $2.27$ & $2.25$ & $2.21$ & $2.18$ & $2.15$ & $2.00$ \\ $(q)_{\min}$ & $1.97$ & $2.01$ & $2.09$ & $2.31$ & $3.31$ & $13.6$ & $2.91$ & $2.41$ & $2.33$ & $2.31$ & $2.00$ \\ \hline \hline \end{tabular} Table I: The $r$-dependence of the minimal powers $(\mu_1)_{\min}$ and $(q)_{\min}$ when $b = c = i r a$. \end{center} The $r$-dependence of the minimal powers $(\mu_1)_{\min}$ and $(q)_{\min}$ is summarized in Table I. Both powers increase with increasing $r$ from $0.1$ to $0.6$. Both decrease with increasing $r$ from $0.6$ and seem to be saturated to $(\mu_1)_{\min} = (q)_{\min} = 2$ at large $r$. At $r = 0.6$ $(q)_{\min} $ becomes very large as $(q)_{\min} = 13.6$ while $(\mu_1)_{\min}$ is not so large as $(\mu_1)_{\min} = 2.27$. Next, we consider $b = i r a$ and $c = n r a$, where $r$ and $a$ are real with integer $n$. The minimal powers can be computed by making use of the three-dimensional plot similar to Fig. 2. The results are summarized in Table II. \begin{center} \begin{tabular}{c|ccccc} \hline \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline $(\mu_1)_{\min}$ & $2.13$ & $2.03$ & $2.01$ & $2.00$ & $2.00$ \\ $(\mu_2)_{\min}$ & $1.07$ & $1.02$ & $1.003$ & $1.00$ & $0.99$ \\ $(q)_{\min}$ & $2.28$ & $2.10$ & $1.98$ & $1.85$ & $1.91$ \\ \hline \hline \end{tabular} Table II: The $n$- dependence of minimal powers when $b = i r a$ and $c = n r a$. \end{center} All minimal powers exhibit decreasing behavior with increasing $n$. In this case $(\mu_2)_{\min}$ roughly equals to the half of $(\mu_1)_{\min}$ as in the previous cases. We also examine the case of $b = n i$ when $a$ and $c$ are real. The minimal powers of this case is summarized in Table III. \begin{center} \begin{tabular}{c|ccccc} \hline \hline $n$ & $1$ & $2$ & $3$ & $4$ & $5$ \\ \hline $(\mu_1)_{\min}$ & $2.28$ & $2.03$ & $1.99$ & $1.99$ & $1.99$ \\ $(\mu_2)_{\min}$ & $1.14$ & $1.02$ & $0.98$ & $0.98$ & $0.98$ \\ $(q)_{\min}$ & $2.36$ & $2.04$ & $1.98$ & $1.97$ & $1.97$ \\ \hline \hline \end{tabular} Table III: The $n$- dependence of minimal powers when $b = n i$. \end{center} Similar to the previous case all minimal powers exhibit decreasing behavior with increasing $n$. In this case also $(\mu_2)_{\min}$ roughly equals to the half of $(\mu_1)_{\min}$. Finally, we choose $N = 10000000$ second class states randomly with imposing $b=c$ and compute $\Delta_j$ with particular powers. The number of states which give negative $\Delta_1$ or $\Delta_2$ are summarized in Table IV. \begin{center} \begin{tabular}{c||cccc|cccc} \hline \hline $\mu_1$ or $\mu_2$ & \hspace{.2cm} $\mu_1 =2.0$ & $2.1$ & $2.2$ & $2.3$ \hspace{.2cm} & \hspace{.2cm} $\mu_2 = 1.00$ & $1.05$ & $1.10$ & $1.15$ \\ \hline No. states & $1216071$ & $191610$ & $16818$ & $0$ & $1213371$ & $191002$ & $16755$ & $0$ \\ \hline \hline \end{tabular} Table IV: Number of states which give negative $\Delta_1$ or $\Delta_2$ for arbitrary chosen 10000000 states. \end{center} The number of states which give negative $\Delta_3$ are summarized in Table V. \begin{center} \begin{tabular}{c|cccccccc} \hline \hline $q$ & $2.0$ & $2.1$ & $2.2$ & $2.3$ & $2.4$ & $2.5$ & $2.6$ & $2.7$ \\ \hline No. 
states & $1214527$ & $429823$ & $170308$ & $49247$ & $832$ & $110$ & $35$ & $7$ \\ \hline \hline \end{tabular} Table V: Number of states yielding negative $\Delta_3$ among the $10^7$ randomly chosen states. \end{center} All the results discussed in Sec. IV and Sec. V indicate that $(\mu_1)_{\min} \approx 2 (\mu_2)_{\min} \geq 2.3$ and $(q)_{\min} \geq 14$, at least within the whole second class. However, as Table I and Table V indicate, the region of negative $\Delta_3$ in the parameter space is extremely small for $2.7 \leq q \leq 13$. Thus, such states are very unlikely to be found by random sampling. \section{Four-Way Entanglement Measure} In this section we discuss the following question: Is it possible that the monogamy relation $\Delta_j (G)$ defined in Eq. (\ref{four-monogamy}) quantifies SLOCC-invariant four-way entanglement for particular choices of the powers, as the leftover of the CKW inequality does for three-way entanglement? In order to explore this question we note that for an $n$-qubit system there are $2(2^n - 1) - 6 n$ independent SLOCC-invariant monotones\cite{verst03}. Thus, in the four-qubit system there are six invariant monotones. Among them, it was shown in Refs. \cite{four-way-1,four-way-2,four-way-3}, by making use of antilinearity\cite{uhlmann99-1}, that the following three independent invariant monotones measure the true four-way entanglement: \begin{eqnarray} \label{four-measure} & &{\cal F}^{(4)}_1 = (\sigma_{\mu} \sigma_{\nu} \sigma_2 \sigma_2) \bullet (\sigma^{\mu} \sigma_2 \sigma_{\lambda} \sigma_2) \bullet (\sigma_2 \sigma^{\nu} \sigma^{\lambda} \sigma_2) \nonumber \\ & &{\cal F}^{(4)}_2 = (\sigma_{\mu} \sigma_{\nu} \sigma_2 \sigma_2) \bullet (\sigma^{\mu} \sigma_2 \sigma_{\lambda} \sigma_2) \bullet (\sigma_2 \sigma^{\nu} \sigma_2 \sigma_{\tau}) \bullet (\sigma_2 \sigma_2 \sigma^{\lambda} \sigma^{\tau}) \\ \nonumber & &{\cal F}^{(4)}_3 = \frac{1}{2} (\sigma_{\mu} \sigma_{\nu} \sigma_2 \sigma_2) \bullet (\sigma^{\mu} \sigma^{\nu} \sigma_2 \sigma_2) \bullet (\sigma_{\rho} \sigma_2 \sigma_{\tau} \sigma_2) \bullet (\sigma^{\rho} \sigma_2 \sigma^{\tau} \sigma_2) \bullet (\sigma_{\kappa} \sigma_2 \sigma_2 \sigma_{\lambda}) \bullet (\sigma^{\kappa} \sigma_2 \sigma_2 \sigma^{\lambda}), \end{eqnarray} where $\sigma_0 = \openone_2$, $\sigma_1 = \sigma_x$, $\sigma_2 = \sigma_y$, $\sigma_3 = \sigma_z$, and the Einstein summation convention is understood with the metric $g^{\mu \nu} = \mbox{diag} \{-1, 1, 0, 1\}$. The solid dot in Eq. (\ref{four-measure}) is defined as follows. Let $\ket{\psi}$ be a four-qubit state. Then, for example, ${\cal F}^{(4)}_1$ of $\ket{\psi}$ is defined as \begin{equation} \label{revise1} {\cal F}^{(4)}_1 (\psi) = \bigg| \bra{\psi^*} \sigma_{\mu} \otimes \sigma_{\nu} \otimes \sigma_2 \otimes \sigma_2 \ket{\psi} \bra{\psi^*} \sigma^{\mu} \otimes \sigma_2 \otimes \sigma_{\lambda} \otimes \sigma_2 \ket{\psi} \bra{\psi^*} \sigma_2 \otimes \sigma^{\nu} \otimes \sigma^{\lambda} \otimes \sigma_2 \ket{\psi} \bigg|. \end{equation} The other measures can be computed similarly. Thus, if $\Delta_j (G)$ properly quantifies SLOCC-invariant four-way entanglement, it should be expressible as a combination of the ${\cal F}_j^{(4)}$. For simplicity, we consider only the second-class state (\ref{second-class}) with $b = c = i a$. In this case $\Delta_j$ is computed analytically in Eq. (\ref{special-6}).
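Eq. (\ref{revise1}) can also be transcribed numerically. The minimal Python sketch below (assuming \texttt{numpy}) does so for the first filter; the raised indices are handled by inserting one diagonal metric factor per contracted index pair, which is one reading of the convention stated above, so its output should only be compared against Eq. (\ref{four_measure-1}) below.
\begin{verbatim}
import numpy as np
from functools import reduce

I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], complex)
sig = [I2, sx, sy, sz]
g = np.array([-1.0, 1.0, 0.0, 1.0])        # metric diag(-1, 1, 0, 1)

def expval(psi, ops):
    # antilinear expectation value <psi^*| op_1 x op_2 x op_3 x op_4 |psi>
    return psi @ (reduce(np.kron, ops) @ psi)   # no conjugation on the bra

def F1(psi):
    # Eq. (revise1): one diagonal metric factor per contracted index pair
    A = [[expval(psi, (sig[m], sig[n], sy, sy)) for n in range(4)] for m in range(4)]
    B = [[expval(psi, (sig[m], sy, sig[l], sy)) for l in range(4)] for m in range(4)]
    C = [[expval(psi, (sy, sig[n], sig[l], sy)) for l in range(4)] for n in range(4)]
    s = sum(g[m] * g[n] * g[l] * A[m][n] * B[m][l] * C[n][l]
            for m in range(4) for n in range(4) for l in range(4))
    return abs(s)

a = 0.8                                    # arbitrary test value
psi = np.zeros(16, complex)                # |G> of Eq. (second-class) with b = c = i a
psi[0] = psi[15] = (a + 1j * a) / 2
psi[3] = psi[12] = (a - 1j * a) / 2
psi[5] = psi[10] = 1j * a
psi[6] = 1.0
psi /= np.linalg.norm(psi)

print(F1(psi), 48 * a**6 / (1 + 4 * a**2) ** 3)   # compare with Eq. (four_measure-1)
\end{verbatim}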
In this case ${\cal F}^{(4)}_j$ becomes \begin{equation} \label{four_measure-1} {\cal F}^{(4)}_1 = \frac{48 a^6}{(1 + 4 a^2)^3} \hspace{.5cm} {\cal F}^{(4)}_2 = \frac{96 a^8}{(1 + 4 a^2)^4} \hspace{.5cm} {\cal F}^{(4)}_3 = \frac{3456 a^{12}}{(1 + 4 a^2)^6}. \end{equation} These results are plotted in Fig. 3. Thus, all four-way entanglement measures ${\cal F}^{(4)}_j$ exhibit monotonically increasing behavior with respect to $a$ when the quantum state is chosen as a second class (\ref{second-class}) with $b = c = i a$. It is easy to show that $\Delta_j$ cannot be expressed in terms of ${\cal F}^{(4)}_j$ because $\Delta_j$ have different expressions in the various range of $a$ as Eq. (\ref{special-7}) shows while ${\cal F}^{(4)}_j$ have same expressions regardless of the range of $a$. For example, one can find a least-square fit of $\Delta_1$ with $\mu_1 = 3$ as \begin{equation} \label{fit1} \Delta_1 (\mu_1 = 3) \approx c_1 {\cal F}^{(4)}_1 + c_2 {\cal F}^{(4)}_2 + c_3 {\cal F}^{(4)}_3 \end{equation} where $c_1 = 10.117$, $c_2 = -30.8143$, and $c_3 = 5.7116$. The left- and right-handed sides of Eq. (\ref{fit1}) are plotted in Fig. 4 as solid and dashed lines. Although both exhibit similar behavior, they do not coincide with each other exactly as expected. Same is true for $\Delta_2$ and $\Delta_3$. Thus, the monogamy constraints (\ref{four-monogamy-1}) derived by introducing a weighting factor in the power of the three-way entanglement cannot quantify the four-way entanglement properly in the SLOCC-invariant manner. \section{Conclusions} In this paper we examine the properties of the three four-qubit monogamy relations presented in Eq. (\ref{four-monogamy}), all of which introduce the power factors $\mu_1$, $\mu_2$, and $q$ in the three-way entanglement. First, we examine the minimal powers $(\mu_1)_{\min}$, $(\mu_2)_{\min}$and $(q)_{\min}$, which make $\Delta_j \hspace{.3cm} (j=1, 2, 3)$ to be positive when the powers are larger than the minimal powers. In order to explore this problem on the analytic ground as much as possible we confine ourselves into the second-class state $\ket{G}$ defined in Eq. (\ref{second-class}). Our analysis indicates that $(\mu_1)_{\min} \approx 2 (\mu_2)_{\min} \geq 2.3$ and $(q)_{\min} \geq 14$. Second, we try to provide an answer to a question ``can the leftovers of the four-qubit monogamy relations with particular powers be a SLOCC-invariant four-way entanglement measures like that of CKW inequality in three-qubit system?''. Our analysis indicates that this is impossible in the monogamy relations given in Eq. (\ref{four-monogamy}). Probably, same is true if monogamy relation is derived by introducing any form of weighting factors. Then, it is natural to ask a following question: Does the monogamy inequality exist in the multipartite qubit system, whose leftover quantifies the SLOCC-invariant entanglement measure? We do not have definite answer to this question. {\bf Acknowledgement}: This work was supported by the Kyungnam University Foundation Grant, 2016. \end{document}
arXiv
The Radiating Atom 6: Schrödinger's Equation in Real-Valued System Form Schrödinger's equation, to begin with for the electron of the Hydrogen atom, is usually written in the form $ih\dot\Psi = H\Psi$, with $\Psi (x,t)$ a complex-valued function of space-time $(x,t)$, $\dot\Psi =\frac{\partial\Psi}{\partial t}$, $H=-\frac{h^2}{2m}\Delta + V(x)$ the Hamiltonian with $\Delta$ the Laplacian with respect to $x$, $V(x)=-\frac{1}{\vert x\vert}$ the kernel potential, $m$ the electron mass and $h$ Planck's constant. This equation can equivalently be expressed as follows in real-valued system form, with $\Psi =\phi + i\psi$ and $\phi =\phi (x,t)$ and $\psi =\psi (x,t)$ real-valued functions: $\dot\psi + H\phi =0$, $-\dot\phi + H\psi= 0$. This system can be viewed as a generalized harmonic oscillator or wave equation, which can naturally be extended to $\dot\psi + H\phi -\gamma\dddot\phi = f$ (1) $-\dot\phi + H\psi -\gamma\dddot\psi = g$ (2) where $f(x,t)$ and $g(x,t)$ represent external electro-magnetic forcing, and $\gamma\dddot \psi$ and $\gamma\dddot \phi$ represent the Abraham-Lorentz recoil force from emission of radiation, with $\gamma$ having a dependence on $\Phi \equiv (\psi ,\phi )$ to be specified. A system of this form, a wave equation with small damping subject to near-resonant forcing, is analyzed in Mathematical Physics of Black Body Radiation. The basic energy balance is obtained by multiplying (1) by $\dot\phi$ and (2) by $\dot\psi$, then adding and integrating in space and time, to get for $f=g=0$ (up to constant factors and boundary terms): $E(\Phi ,T)+R(\Phi ,T)= E(\Phi ,0)$ for $T>0$, where $E(\Phi ,T)=\int (\psi (x,T)H\psi (x,T)+\phi (x,T)H\phi (x,T ))dx$ $R(\Phi ,T)=\int_0^T\int(\gamma\ddot\psi^2(x,t)+\gamma\ddot\phi^2(x,t))dxdt$. This expresses a balance between the internal atomic energy $E(\Phi ,T)$ at time $T$, as the sum of "kinetic energy" related to the Laplacian $\Delta$ and potential energy related to $V$ as terms in the Hamiltonian $H$, and the total radiated energy up to time $T$, in accordance with Larmor's formula stating that radiation scales with $\ddot q^2$, where $\ddot q=\ddot q(t)$ is the "acceleration" of a charge $q(t)$ varying in space over time. Let now $\psi_1=\psi_1(x)$ and $\psi_2=\psi_2(x)$ be two eigenfunctions of the Hamiltonian $H$ with corresponding eigenvalues $E_1 < E_2$, pure eigen-states $\Phi_j(x,t)\equiv (\cos(E_jt/h)\psi_j(x),\sin(E_jt/h)\psi_j(x))$ for $j=1,2$, and corresponding charge densities $q_j(x,t)\equiv\vert \Phi_j(x,t)\vert^2=(\cos^2(E_jt/h)+\sin^2(E_jt/h))\psi_j^2(x)=\psi_j^2(x)$. We thus find that pure eigen-states have charge densities which are constant in time and thus do not radiate. On the other hand, the charge density $q(x,t)=\vert\Phi (x,t)\vert^2$ of a superposition $\Phi =c_1\Phi_1+c_2\Phi_2$, with $c_1$ and $c_2$ positive coefficients of the two pure eigen-states $\Phi_1$ and $\Phi_2$, has a time dependence of the form $q(x,t) = a(x) + b(x)\cos((E_2-E_1)t/h)$ with $a$ and $b$ coefficients depending on $x$, and thus radiates. We are thus led to a dependence of $\gamma$ on $\Phi$ of the form $\gamma \sim\ddot q^2$. We conclude that (1)-(2) offers a continuum mechanical model of a radiating Hydrogen atom which can be analyzed by eigenfunction expansion as in Mathematical Physics of Black Body Radiation, and thus offers an answer to the basic questions of atomic mechanics: Why does a pure eigen-state not radiate, and thus persist over time as a stable atomic state? Why can an atom radiate under external forcing? How much does an atom radiate under external forcing?
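To see the charge-density argument in concrete terms, here is a small illustrative Python sketch (a toy model only: the kernel potential is replaced by a harmonic well on a 1d grid and h = m = 1, so the structure of the argument carries over but not the Hydrogen spectrum): the density of a pure eigen-state is constant in time, while that of a superposition of two eigen-states beats at the frequency $(E_2-E_1)/h$.

import numpy as np

# 1d toy model: H = -1/2 Laplacian + x^2/2 on a grid (h = m = 1);
# the harmonic well stands in for the kernel potential here.
N, L = 400, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
lap = (np.diag(np.ones(N - 1), 1) - 2 * np.eye(N) + np.diag(np.ones(N - 1), -1)) / dx**2
H = -0.5 * lap + np.diag(0.5 * x**2)

E, U = np.linalg.eigh(H)              # eigenvalues E_j and eigenfunctions psi_j
psi1, psi2, E1, E2 = U[:, 0], U[:, 1], E[0], E[1]

def density(coeffs, t):
    # q(x,t) = phi^2 + psi^2 for Phi = sum_j c_j (cos(E_j t) psi_j, sin(E_j t) psi_j)
    phi = sum(c * np.cos(Ej * t) * f for c, Ej, f in coeffs)
    psi = sum(c * np.sin(Ej * t) * f for c, Ej, f in coeffs)
    return phi**2 + psi**2

pure  = [(1.0, E1, psi1)]
mixed = [(np.sqrt(0.5), E1, psi1), (np.sqrt(0.5), E2, psi2)]

ts = np.linspace(0.0, 4 * np.pi / (E2 - E1), 200)
q_pure  = np.array([density(pure, t) for t in ts])
q_mixed = np.array([density(mixed, t) for t in ts])

print(np.abs(q_pure - q_pure[0]).max())    # ~ 0: no time variation, no radiation
print(np.abs(q_mixed - q_mixed[0]).max())  # finite: beats at frequency E2 - E1

Up to the choice of units, the time-dependent part of the superposed density is exactly of the form $a(x) + b(x)\cos((E_2-E_1)t/h)$, as claimed above.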
Note that the system (1)-(2) in the case $f=g=\gamma =0$ has the equivalent form of a second-order wave equation: $\ddot\psi + H^2\psi =0$, a form which Schrödinger dismissed on the ground that a time-dependent potential would cause complications, and probably also because the presence of the term $\ddot\psi$ appears to ask for a physical interpretation of $\dot\psi^2$ as kinetic energy, which however was already assigned to $\vert\nabla\psi\vert^2$ connected to the Laplacian. On the other hand, in the real-valued system form (1)-(2), these complications do not arise, and the extension to forcing and radiation is more natural than in the standard complex form, which is commonly viewed as a complete mystery beyond human comprehension. What remains to understand is the physical meaning of the system equations (1)-(2), which may well be possible with some imagination; I hope to report on this. In short, (1)-(2) may be the form of Schrödinger's equation to use for extensions to multi-electron configurations. At least this is the route I am now seeking to explore. Note that letting $h$ tend to zero, we obtain the dynamical second-order system $\ddot\psi = -V^2\psi = -\frac{\psi}{\vert x\vert^2}$, which can be interpreted as Newton's equations for a moving "particle" localized in space. Schrödinger's equation (1)-(2) can thus be viewed as a regularized form of Newton's equations, with the regularization coming from the Laplacian. In this perspective there is nothing holy about the Laplacian; it is conceivable that the effective regularization in an atom is non-isotropic, thus with different action in radial and angular variables in spherical coordinates centered at the kernel. An equation $\dot\psi +H\phi=\dot\psi + V(x)\phi=0$ with $h=0$ may formally be viewed as some form of force balance expressing a form of "square root" of Newton's 2nd law $\ddot\psi+V^2\psi=0$. Note that in (1)-(2) $-H\phi$ connects to $\dot\psi$ and $H\psi$ to $\dot\phi$, and so the dynamics of a pure eigen-state with wave function $\Phi_j$ can be described as a "revolution/oscillation in time" of a space-dependent eigenfunction of the Hamiltonian, for which the charge density is constant in time without radiation, while the charge density of a superposition of pure eigen-states varies in time and thus radiates. With this perspective, an electron is not "moving in space" like some form of planet around the kernel, but instead has a variation in time, which gives rise to a charge density with variation in time and thus radiation, except for a pure eigen-state which does not radiate. Labels: quantum mechanics, Schrödinger's equation
CommonCrawl