Trace identity
In mathematics, a trace identity is any equation involving the trace of a matrix that holds identically for all square matrices of a given size.
Properties
Trace identities are invariant under simultaneous conjugation.
Uses
They are frequently used in the invariant theory of $n\times n$ matrices to find the generators and relations of the ring of invariants, and therefore are useful in answering questions similar to that posed by Hilbert's fourteenth problem.
Examples
• The Cayley–Hamilton theorem says that every square matrix satisfies its own characteristic polynomial. This also implies that all square matrices satisfy
$\operatorname {tr} \left(A^{n}\right)-c_{n-1}\operatorname {tr} \left(A^{n-1}\right)+\cdots +(-1)^{n}n\det(A)=0\,$
where the coefficients $c_{i}$ are given by the elementary symmetric polynomials of the eigenvalues of A.
• All square matrices satisfy
$\operatorname {tr} (A)=\operatorname {tr} \left(A^{\mathsf {T}}\right).\,$
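A quick numerical check of both identities, as a sketch using NumPy (the matrix size and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# tr(A) = tr(A^T): the trace is unchanged by transposition.
assert np.isclose(np.trace(A), np.trace(A.T))

# Trace of the Cayley-Hamilton identity.  np.poly(A) returns the
# characteristic-polynomial coefficients [1, p_1, ..., p_n], where
# p_k = (-1)^k * (k-th elementary symmetric polynomial of the eigenvalues)
# and p_n = (-1)^n det(A); the identity then reads
#   tr(A^n) + p_1 tr(A^{n-1}) + ... + p_{n-1} tr(A) + n p_n = 0.
p = np.poly(A)
lhs = sum(p[k] * np.trace(np.linalg.matrix_power(A, n - k)) for k in range(n))
lhs += n * p[n]
print(abs(lhs))  # numerically zero (round-off only)
```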
See also
• Trace inequality – inequalities involving linear operators on Hilbert spaces
References
Rowen, Louis Halle (2008), Graduate Algebra: Noncommutative View, Graduate Studies in Mathematics, vol. 2, American Mathematical Society, p. 412, ISBN 9780821841532.
Ramsey theorem with forbidden induced subgraph
Given $n \in \mathbb{N}$, I want to prove that there exist $f(n) \in \mathbb{N}, 0<\alpha(n) \leq 1$ such that for every graph $G=(V,E)$ with $|V| > f(n)$ vertices, and a set $R \subseteq V \times V$ where $|R| \leq \alpha(n) {|V| \choose 2}$, there is a clique or anti-clique of size $n$ whose edges/anti-edges are not in $R$ (none of the edges are in $R$). Very similar to Ramsey's theorem, but with the constraint of $R$.
I tried to prove in the following manner:
Take $f(n)$ to be the $n$th Ramsey number. When choosing a pair of vertices $\{u,v\}$ uniformly at random, the probability that this pair is in $R$ is $\frac{|R|}{\binom{|V|}{2}} = \alpha(n)$. Since there is at least one clique/anti-clique, there are at least ${n \choose 2}$ edges/anti-edges that are part of a clique/anti-clique. Hence the probability that a random pair is part of a clique/anti-clique is $\frac{{n \choose 2}}{{|V| \choose 2}} = \frac{n(n-1)}{|V|(|V|-1)}$. Thus, the probability that a random pair is NOT in $R$ AND in a clique/anti-clique is $\frac{n(n-1)}{|V|(|V|-1)}(1 - \alpha(n))$. There are ${|V| \choose 2}$ such pairs. I want the expectation of the number of such pairs to be larger than ${n \choose 2}$, so: $$ \frac{n(n-1)}{|V|(|V|-1)} (1-\alpha(n))\frac{|V|(|V|-1)}{2} \geq \frac{n(n-1)}{2},$$ which leads to $\alpha(n) = 0$ - clearly incorrect.
My question is: how can I improve this method in order to achieve $\alpha(n) > 0$? I thought about taking $f(n)$ to be such that every graph with $|V| > f(n)$ vertices will have at least $n$ cliques, but I did all the math and it's also incorrect.
probability combinatorics graph-theory ramsey-theory
Horvey
$\begingroup$ By "and a set $R$" do you mean "for every set $R$" or do you mean "there exists a set $R$"? The only restriction on $\alpha(n)$ is that $\alpha(n)\le1$? No lower bound? Not even $\alpha(n)\gt0$? $\endgroup$ – bof Apr 21 '17 at 10:07
$\begingroup$ $\alpha(n)>0$. The set $R$ is given; it cannot hold for every set $R$ (its size is larger than the clique size for sufficiently large $|V|$, so there is such a set that contains the clique). $\endgroup$ – Horvey Apr 21 '17 at 12:46
$\begingroup$ Do you mean $|R| \leq \alpha(n) \binom{|V|}{2}$? $\endgroup$ – Perry Elliott-Iverson Apr 21 '17 at 14:35
$\begingroup$ yes. My mistake. Will edit $\endgroup$ – Horvey Apr 21 '17 at 14:44
Let $f(n)$ be the $n$th Ramsey number, and let $\alpha(n) = \left(\frac{1}{f(n)-1}\right)^2$. Then, since $|R|\leq \alpha(n) \binom{|V|}{2}$, we have:
$$\begin{align} |(V \times V) \backslash R| &\geq \binom{|V|}{2} - \alpha(n)\binom{|V|}{2} \\ &= \binom{|V|}{2}\left(1-\left(\frac{1}{f(n)-1}\right)^2\right) \\ &= \frac{|V|(|V|-1)}{2}\left(1+\frac{1}{f(n)-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &> \frac{|V|(|V|-1)}{2}\left(1+\frac{1}{|V|-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &= \frac{|V|(|V|-1)}{2}\left(\frac{|V|}{|V|-1}\right)\left(1-\frac{1}{f(n)-1}\right) \\ &= \frac{|V|^2}{2}\left(1-\frac{1}{f(n)-1}\right) \end{align}$$
Thus by Turán's theorem, the graph on $V$ whose edges are the pairs not in $R$ contains a $K_{f(n)}$. Taking $S$ to be its vertex set, no pair of $R$ has both endpoints in $S$, and since $|S|=f(n)$ is the $n$th Ramsey number, $G[S]$ contains either $K_n$ or $\overline{K_n}$.
Perry Elliott-Iverson
$\begingroup$ Turan's theorem states that if I had $|V|^2/2 (1-1/(f(n)-1))$ edges, then I have clique of size $f(n)$. How is it helping to take the $n$'th Ramsey number? Not sure I understand your "punchline" $\endgroup$ – Horvey Apr 22 '17 at 18:02
$\begingroup$ There are at least $\frac{|V|^2}{2}\left(1-\frac{1}{f(n)-1}\right)$ members of $V \times V$ that are not in $R$, so Turan's theorem says that the graph $G'$ with vertex set $V$ and edge set $(V \times V) \backslash R$ must have a $K_{f(n)}$. Let $S \subseteq V$ be the set of vertices of this $K_{f(n)}$. Then no members of $R$ have both ends in $S$, and $G[S]$ is a graph with $f(n)$ vertices, which must have $K_n$ or $\overline{K_n}$ by Ramsey's Theorem. $\endgroup$ – Perry Elliott-Iverson Apr 22 '17 at 19:56
$\begingroup$ But $G$ does not necessarily contain all the edges in $V \times V$. What if $G$ has far fewer edges? $\endgroup$ – Horvey Apr 23 '17 at 4:47
$\begingroup$ The number of edges in $G$ is irrelevant. We found a large enough (order $f(n)$, which guarantees $K_n$ or $\overline{K_n}$) subset of the vertices of $G$ with no members of $R$ having both ends in that subset. $\endgroup$ – Perry Elliott-Iverson Apr 24 '17 at 19:51
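As a quick numerical sanity check of the counting bound in the answer above (a sketch: $f(3)=R(3,3)=6$ is the one Ramsey number used, and the range of vertex counts tested is arbitrary):

```python
from math import comb

def bound_holds(f_n, num_v):
    """Pairs outside R (with |R| at its maximum) still exceed the Turan
    threshold that forces a K_{f_n} on num_v > f_n vertices."""
    alpha = (1 / (f_n - 1)) ** 2                  # alpha(n) from the answer
    non_forbidden = comb(num_v, 2) * (1 - alpha)  # pairs not in R
    turan = (num_v ** 2 / 2) * (1 - 1 / (f_n - 1))
    return non_forbidden > turan

# R(3,3) = 6, so f(3) = 6; the vertex counts tested are arbitrary.
print(all(bound_holds(6, v) for v in range(7, 201)))  # True
```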
March 2021, 17(2): 549-573. doi: 10.3934/jimo.2019123
Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization
Sujit Kumar Samanta , and Rakesh Nandi
Department of Mathematics, National Institute of Technology Raipur, Raipur-492010, India
* Corresponding author: Sujit Kumar Samanta
Received September 2018 Revised May 2019 Published October 2019
Fund Project: The first author acknowledges the Council of Scientific and Industrial Research (CSIR), New Delhi, India, for partial support from the project grant 25(0271)/17/EMR-Ⅱ.
Figure(2) / Table(10)
This paper analyzes an infinite-buffer single-server queueing system in which customers arrive in batches of random size according to a discrete-time renewal process. The customers are served one at a time under a discrete-time Markovian service process. Based on the censoring technique, the UL-type $RG$-factorization for the Toeplitz-type block-structured Markov chain is used to obtain the prearrival epoch probabilities. The random epoch probabilities are obtained with the help of the classical principle based on Markov renewal theory. The system-length distributions at outside observer's, intermediate and post-departure epochs are obtained by relating the various time epochs. The waiting-time distribution, measured in slots, of an arbitrary customer in an arrival batch is also analyzed. To unify the discrete-time results with their continuous-time counterparts, we give a brief demonstration of how the continuous-time results may be obtained from the discrete-time ones. A variety of numerical results is provided to illustrate the effect of the model parameters on the performance measures.
Keywords: Toeplitz type block-structured Markov chain, censored Markov chain, discrete-time Markovian service process (D-MSP), general independent batch arrival, queueing, UL-type $ RG $-factorization.
Mathematics Subject Classification: Primary: 60K25, 90B22, 68M20, 60K20.
Citation: Sujit Kumar Samanta, Rakesh Nandi. Analysis of $GI^{[X]}/D$-$MSP/1/\infty$ queue using $RG$-factorization. Journal of Industrial & Management Optimization, 2021, 17 (2) : 549-573. doi: 10.3934/jimo.2019123
Figure 1. Various time epochs in LAS-DA
Figure 2. Various time epochs in EAS
Table 1. System-length distribution at prearrival epoch (LAS-DA)
$ n $ $ \pi^{-}_1(n) $ $ \pi^{-}_2(n) $ $ \pi^{-}_3(n) $ $ \pi^{-}_4(n) $ $ \mathit{\boldsymbol{\pi }}^{-}(n){\bf e} $
0 0.147931 0.087562 0.141983 0.215337 0.592813
10 0.005820 0.002639 0.005923 0.004720 0.019101
150 0.000000 0.000000 0.000000 0.000000 0.000000
$\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$ $\vdots$
Sum 0.268062 0.144522 0.263029 0.324388 1.000000
Table 2. System-length distribution at random epoch (LAS-DA); $ L_{q}= 4.110096 $, $ W_{q}\equiv L_{q}/\lambda\overline{g}=16.988397 $
Table 3. System-length distribution at intermediate epoch (LAS-DA)
Table 4. System-length distribution at post-departure epoch (LAS-DA)
Table 5. Waiting-time distribution of an arbitrary customer (LAS-DA); $ W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=16.988398 $
Table 6. System-length distribution at prearrival epoch (EAS)
Table 7. System-length distribution at random epoch (EAS)
Table 8. System-length distribution at outside observer's epoch (EAS); $ L_{q}=1.040384 $, $ W_{q}\equiv L_{q}/\lambda\overline{g}=4.265574 $
Table 9. System-length distribution at post-departure epoch (EAS)
Table 10. Waiting-time distribution of an arbitrary customer (EAS); $ W_{q}\equiv \sum_{k=1}^{\infty}k{\bf w}(k){\bf e}=4.265574 $
\begin{document}
\title{Stable laws and Beurling kernels}
\author[London School of Economics]{Adam J. Ostaszewski} \address{Mathematics Department, London School of Economics,
Houghton Street, London WC2A 2AE, UK} \email{[email protected]}
\begin{abstract}We identify a close relation between stable distributions and the limiting homomorphisms central to the theory of regular variation. In so doing some simplifications are achieved in the direct analysis of these laws in Pitman and Pitman (2016); stable distributions are themselves linked to homomorphy. \end{abstract}
\keywords{Stable laws; Beurling regular variation; quantifier weakening; homomorphism; Goldie equation; \foreignlanguage{polish}{Go\l\aob b--Schinzel} equation; Levi--Civita equation}
\ams{60E07}{26A03; 39B22; 34D05; 39A20}
\setcounter{footnote}{0}
\section{Introduction}\label{s:intro}
This note\footnote{This expanded version of \cite{OstA} includes new material in \S 4 and an Appendix.} takes its inspiration from Pitman and Pitman's approach \cite{PitP}, in this volume, to the characterization of stable laws \emph{directly} from their characteristic functional equation \cite[(2.2)]{PitP}, \eqref{ChFE} below, which they complement with the derivation of parameter restrictions by an appeal to \emph{Karamata} (classical) regular variation (rather than \emph{indirectly} as a special case of the L\'{e}vy--Khintchine characterization of infinitely decomposable laws---cf.\ \cite[Section
4]{PitP}). We take up their functional-equation tactic with three aims in mind. The first and primary one is to extract a hidden connection with the more general theory of \emph{Beurling regular variation}, which embraces the original Karamata theory and its later `Bojani\'{c}--Karamata--de Haan' variants. (This has received renewed attention: \cite{BinO1,BinO4,Ost1}). The connection is made via another functional equation, the \emph{Goldie equation} \begin{equation}\label{GFE} \kappa(x+y)-\kappa(x)=\gamma(x)\kappa(y)\qquad(x,y\in\mathbb{R}),\tag{\emph{GFE}} \end{equation} with \emph{vanishing side condition} $\kappa(0)=0$ and \emph{auxiliary function }$\gamma$, or more properly with its multiplicative variant: \begin{equation}\label{GFEx} K(st)-K(s)=G(s)K(t)\qquad (s,t\in \mathbb{R}_+:=(0,\infty)),\tag{${\it GFE}_{\times}$} \end{equation} with corresponding side condition $K(1)=0$; the additive variant arises first in \cite{BinG} (see also \cite[Lemma 3.2.1 and Theorem 3.2.5]{BinGT}), but has only latterly been so named in recognition of its key role both there and in the recent developments \cite{BinO2,BinO3}, inspired both by \emph{Beurling slow variation} \cite[Section 2.11]{BinGT} and by its generalizations \cite{BinO1,BinO4} and \cite{Ost1}. This equation describes the family of \emph{Beurling kernels} (the asymptotic homomorphisms of Beurling regular variation), that is, the functions $K_{F}$ arising as locally uniform limits of the form \begin{equation}\label{BKer} K_{F}(t):=\lim_{x\rightarrow \infty }[F(x+t\varphi (x))-F(x)], \tag{\emph{BKer}} \end{equation} for $\varphi(\cdot)$ ranging over \emph{self-neglecting} functions (\emph{SN}). (See \cite{Ost1,Ost2} for the larger family of kernels arising when $\varphi(\cdot)$ ranges over the \emph{self-equivarying} functions (\emph{SE}), both classes recalled in the complements section \ref{ss:SNSE}.)
A secondary aim is achieved in the omission of extensive special-case arguments for the limiting cases in the Pitman analysis (especially the case of characteristic exponent $\alpha =1$ in \cite[Section 5.2]{PitP}---affecting parts of \cite[Section 8]{PitP}), employing here instead the more natural approach of
interpreting the `generic' case `in the limit' via the l'Hospital rule. A
final general objective of further streamlining is achieved, \emph{en
passant}, by telescoping various cases into one, simple, group-theoretic
argument; this helps clarify the `group' aspects as distinct from
`asymptotics', which relate parameter restrictions to tail balance---see
Remark \ref{r:dominant}.
A random variable $X$ has a \emph{stable law} if for each $n\in \mathbb{N}$ the law of the random walk $S_{n}:=X_{1}+\dotsb+X_{n},$ where the $n$ steps are independent and with law identical to $X$, is of the same type, i.e.\ the same in distribution up to scale and location: \[ S_{n}\eqdist a_{n}X+b_{n}, \] for some real constants $a_{n}$, $b_{n}$ with $a_{n}>0$; cf.\ \cite[VI.1]{Fel} and \cite[(1.1)]{PitP}. These laws may be characterized by the \emph{characteristic functional equation} (of the characteristic function of $X$, $\varphi (t)= \mathbb{E}[\re^{\ri tX}]$), as in \cite[(2.2)]{PitP}: \begin{equation}\label{ChFE} \varphi(t)^n=\varphi(a_nt)\exp(\ri b_nt)\qquad(n\in\mathbb{N},\;
t\in\mathbb{R}_+).\tag{\emph{ChFE}} \end{equation}
The standard way of solving \eqref{ChFE} is to deduce the equations satisfied by the functions $a:n\mapsto a_{n}$ and $b:n\mapsto b_{n}$. Pitman and Pitman \cite{PitP} proceed directly by proving the map $a$ \emph{injective}, then extending the map $b$ to $\mathbb{R}_{+}:=(0,\infty )$, and exploiting the classical Cauchy (or Hamel) exponential functional equation (for which see \cite{AczD} and \cite{Kuc}): \begin{equation}\label{CEE} K(xy)=K(x)K(y)\qquad (x,y\in \mathbb{R}_{+});\tag{\emph{CEE}} \end{equation} \eqref{CEE} is satisfied by $K(\cdot)=a(\cdot)$ on the smaller domain $\mathbb{N}$, as a consequence of \eqref{ChFE}. See \cite{RamL} for a similar, but less self-contained account. For other applications see the recent \cite{GupJTS}, which characterizes `generalized stable laws'.
We show in Section \ref{s:reduction} the surprising equivalence of \eqref{ChFE} with the fundamental equation \eqref{GFE} of the recently established theory of \emph{Beurling regular variation}. There is thus a one-to-one relation between Beurling kernels arising through \eqref{BKer} and the continuous solutions of \eqref{ChFE}, amongst which are the one-dimensional stable distributions. This involves passage from discrete to continuous, a normal feature of the theory of regular variation (see \cite[Section 1.9]{BinGT}) which, rather than unquestioningly adopt, we track carefully via Lemma 1 and Corollary 1 of Section \ref{s:reduction}: the ultimate justification here is the extension of $a$ to $\mathbb{R}_{+}$ (Ger's extension theorem \cite[Section
18.7]{Kuc} being thematic here), and the continuity of characteristic functions.
The emergence of a particular kind of functional equation, one interpretable as a \emph{group} homomorphism (see Section \ref{ss:Homo}), is linked to the simpler than usual form here of `probabilistic associativity' (as in \cite{Bin}) in the incrementation process of the stable random walk; in more general walks, functional equations (and integrated functional equations---see \cite{RamL}) arise over an associated \emph{hypergroup}, as with the Kingman--Bessel hypergroup and Bingham-Gegenbauer (ultraspherical) hypergroup (see \cite{Bin} and \cite{BloH}). We return to these matters, and connections with the theory of flows, elsewhere---\cite{Ost3}.
The material is organized as follows. Below we identify the solutions to \eqref{GFE} and in Section \ref{s:reduction} we prove equivalence of \eqref{GFE} and \eqref{ChFE}; our proof is self-contained modulo the (elementary) result that, for $\varphi $ a characteristic function, \eqref{ChFE} implies $a_{n}=n^{k}$ for some $k>0$ (in fact we need only to know that $k\neq 0$). Then in Section \ref{s:form} we read off the form of the characteristic functions of the stable laws. In Section \ref{s:sequenceidentification} we show that, for an arbitrary continuous solution $\varphi$ of \eqref{ChFE}, necessarily $a_{n}=n^{k}$ for some $k\neq 0$. We conclude in Section \ref{s:complements} with complements describing the families \emph{SN} and \emph{SE} mentioned above, and identifying the group structure implied, or `encoded', by \eqref{GFEx} to be $(\mathbb{R}_{+},\times )$, the multiplicative positive reals. In the Appendix we offer an elementary derivation of a key formula needed in \cite{PitP}.
The following result, which has antecedents in several settings (some cited below), is key; on account of its significance, this has recently received further attention in \cite[especially Theorem 3]{BinO2} and \cite[especially Theorem 1]{Ost2}, to which we refer for background---cf.\ Section \ref{ss:ThmGFE}.
{ \makeatletter \def\th@plain{\normalfont\itshape
\def\@begintheorem##1##2{
\item[\hskip\labelsep \theorem@headerfont ##1{\bf .}] } } \makeatother
\begin{ThmGFE} {\rm (\cite[Theorem 1]{BinO2}, \cite[(2.2)]{BojK},
\cite[Lemma 3.2.1]{BinGT}; cf.\ \cite{AczG}.)} For $\mathbb{C}$-valued functions $\kappa$ and $\gamma$ with $\gamma$ locally bounded at $0$, with $\gamma(0)=1$ and $\gamma\neq1$ except at $0$, if $\kappa\not\equiv0$ satisfies \eqref{GFE} subject to the side condition $\kappa(0)=0$, then for some $\gamma_0$, $\kappa_0\in \mathbb{C}$: \[ \gamma(u)=\re^{\gamma_0u}\quad\text{and}\quad \kappa(x)\equiv\kappa_0H_{\gamma_0}(x):=\kappa_0\int_0^x\gamma(u)\sd u =\kappa_0\frac{\re^{\gamma_0x}-1}{\gamma_0}, \] under the usual l'Hospital convention for interpreting $\gamma_0=0$. \end{ThmGFE} }
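For orientation, note that the displayed pair does solve \eqref{GFE}: with $\gamma(x)=\re^{\gamma_0x}$ and $\kappa(x)=\kappa_0(\re^{\gamma_0x}-1)/\gamma_0$,
\[
\kappa(x+y)-\kappa(x)=\kappa_0\,\frac{\re^{\gamma_0x}(\re^{\gamma_0y}-1)}{\gamma_0}=\gamma(x)\kappa(y)\qquad(x,y\in\mathbb{R}),
\]
and $\kappa(0)=0$; the substance of the theorem is the converse implication.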
\begin{rem}\label{r:extend} The cited proof is ostensibly for $\mathbb{R}$-valued $\kappa(\cdot)$ but immediately extends to $\mathbb{C}$-valued $\kappa$. Indeed, in brief, the proof rests on symmetry: \[ \gamma (v)\kappa(u)+\kappa(v)=\kappa(u+v)=\kappa(v+u) =\gamma(u)\kappa(v)+\kappa(u). \] So, for $u$, $v$ not in $\{x:\gamma(x)=1\}$, an additive subgroup, \[ \kappa(u)[\gamma(v)-1]=\kappa(v)[\gamma(u)-1]:\qquad \frac{\kappa(u)}{\gamma(u)-1}=\frac{\kappa(v)}{\gamma(v)-1} =\kappa_0, \] as in \cite[Lemma 3.2.1]{BinGT}. If $\kappa(\cdot)$ is to satisfy \eqref{GFE}, $\gamma(\cdot)$ needs to satisfy \eqref{CEE}.
The notation $H_\rho$ (originating in \cite{BojK}) is from \cite[Chapter 3: de
Haan theory]{BinGT} and, modulo exponentiation, links to the `inverse' functions $\eta_\rho(t)=1+\rho t$ (see Section \ref{ss:Homo}) which permeate regular variation (albeit long undetected), a testament to the underlying \emph{flow} and \emph{group} structure, for which see especially \cite{BinO1,BinO4}.
The Goldie equation is a special case of the \emph{Levi--Civita equations}; for a text-book treatment of their solutions for domain a semigroup and range $\mathbb{C}$ see \cite[Chapter 5]{Ste}. \end{rem}
\begin{rem}\label{r:constants} We denote the constants $\gamma_0$ and $\kappa_0$ more simply by $\gamma$ and $\kappa$, whenever context permits. To prevent conflict with the $\gamma$ of \cite[Section 5.1]{PitP} we denote that here by $\gamma_{\text{P}}(k),$ showing also dependence on the index of growth of $a_n$: see Section \ref{ss:notation}. \end{rem}
\begin{rem}\label{r:stuv} To solve \eqref{GFEx} write $s=\re^u$ and $t=\re^v$, obtaining \eqref{GFE}; then \begin{align*} G(\re^u)&=\gamma(u)=\re^{\gamma u}:\qquad G(s)=s^{\gamma}\\ K(\re^u)&=\kappa(u)=\kappa\,\frac{\re^{\gamma u}-1}\gamma:\qquad
K(s)=\kappa\,\frac{s^\gamma-1}\gamma. \end{align*} \end{rem}
\begin{rem}\label{r:altreg} Alternative regularity conditions, yielding continuity and the same $H_\gamma$ conclusion, include in \cite[Theorem
2]{BinO2} the case of $\mathbb{R}$-valued functions with $\kappa(\cdot)$ and $\gamma(\cdot)$ both non-negative on $\mathbb{R}_+$ with $\gamma\neq1$ except at $0$ (as then either $\kappa\equiv0$, or both are continuous). \end{rem}
\section{Reduction to the Goldie Equation}\label{s:reduction}
In this section we establish a Proposition connecting \eqref{ChFE} with \eqref{GFEx}, and so stable laws with Beurling kernels. Here in the interests of brevity\footnote{In \S 4 we prove from \eqref{ChFE}, with $\varphi$ arbitrary but continuous, that $a_n=n^k$ for some $k\ne0$, cf. \cite{Ost3}.}, this makes use of a well-known result concerning the norming constants (cf.\ \cite[VI.1, Theorem 1]{Fel}, \cite[Lemma 5.3]{PitP}), that $a:n\mapsto a_n$ satisfies $a_n=n^k$ for some $k>0$, and so is extendible to a continuous surjection onto $\mathbb{R}_+:=(0,\infty)$: \[ \tilde a(\nu)=\nu^k\qquad(\nu>0); \] this is used below to justify the validity of the definition \[ f(t):=\log\varphi(t)\qquad(t>0), \] with $\log$ here the principal logarithm, a tacit step in \cite[Section 5.1]{PitP}, albeit based on \cite[Lemma 5.2]{PitP}. We write $a_{m/n}=\tilde a_{m/n}=a_m/a_n$ and put $\mathbb{A}_{\mathbb{N}}:=\{a_n:n\in\mathbb{N}\}$ and $\mathbb{A}_{\mathbb{Q}}:=\{a_{m/n}:m,n\in \mathbb{N}\}$.
The Lemma below re-proves an assertion from \cite[Lemma 5.2]{PitP}, but without assuming that $\varphi$ is a characteristic function. Its Corollary needs no explicit formula for $b_{m/n},$ since the term will eventually be eliminated.
{
\begin{lemma}\label{l} For continuous $\varphi\not\equiv0$ satisfying \eqref{ChFE} with $a_n=n^k$ ($k\ne0$), $\varphi$ has no zeros on $\mathbb{R}_+$. \end{lemma}
\begin{proof}If $\varphi(\tau)=0$ for some $\tau>0$ then $\varphi(a_m\tau)=0$ for all $m$, by \eqref{ChFE}. Again by \eqref{ChFE}, $\abs{\varphi(\tau a_m/a_n)}^n=\abs{\varphi(a_m\tau)}=0$, so $\varphi$ is zero on the dense subset of points $\tau a_m/a_n$; then, by continuity, $\varphi\equiv0$ on $\mathbb{R}_+$, a contradiction. \end{proof}
\begin{corollary}\label{c} The equation \eqref{ChFE} with continuous $\varphi\not\equiv0$ and $a_n=n^k$ ($k\ne0$) holds on the dense subgroup $\mathbb{A}_{\mathbb{Q}}$: there are constants $\{b_{m/n}\}_{m,n\in\mathbb{N}}$ with
$$\varphi(t)^{m/n}=\varphi(a_{m/n}t)\exp(\ri b_{m/n}t)\qquad(t\ge0).$$
\end{corollary}
\begin{proof}Taking $t/a_n$ for $t$ in \eqref{ChFE} gives $\varphi(t/a_n)^n=\varphi(t)\exp(\ri b_nt/a_n)$, so by Lemma 1, using principal values, $\varphi(t)^{1/n}=\varphi(t/a_n)\exp(-\ri tb_n/(na_n))$, whence
$$\varphi(t)^{m/n}=\varphi\Bigl(\frac t{a_n}\Bigr)^m
\exp\Bigl(-\frac{\ri tmb_n}{na_n}\Bigr).$$
Replacing $n$ by $m$ in \eqref{ChFE} and then replacing $t$ by $t/a_n$ gives $\varphi(t/a_n)^m=\varphi(a_mt/a_n)\break\exp(\ri b_mt/a_n)$. Substituting this into the above and using $a_m/a_n=a_{m/n}$:
$$\varphi(t)^{m/n}=\varphi(a_{m/n}t)\exp\Bigl(\ri t\,\frac{nb_m-mb_n}{na_n}\Bigr).$$
As the left-hand side, and the first term on the right, depend on $m$ and $n$ only through $m/n$, we may rewrite the constant $(nb_m-mb_n)/(na_n)$ as $b_{m/n}$. The result follows. \end{proof}
Our main result below, on equational equivalence, uses a condition \eqref{GARplus} applied to the dense subgroup $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$. This is a \emph{quantifier weakening} relative to \eqref{GFE} and is similar to a condition with all variables ranging over $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$, denoted (${\it G}_{\mathbb{A}}$) in \cite{BinO2}, to which we refer for background on quantifier weakening. In Proposition 1 below we may also impose just $({\it G}_{\mathbb{A}_{\mathbb{Q}}})$, granted continuity of $\varphi$.
\begin{proposition}\label{p} For $\varphi$ continuous and $a_n=n^k$ ($k\ne0$), the functional equation \eqref{ChFE} is equivalent to \begin{equation}\label{GARplus} K(st)-K(s)=K(t)G(s)\qquad(s\in\mathbb{A},\;t\in\mathbb{R}_+), \tag{${\it G}_{\mathbb{A},\mathbb{R}_+}$} \end{equation} for either of $\mathbb{A}=\mathbb{A}_{\mathbb{N}}$ or $\mathbb{A}=\mathbb{A}_{\mathbb{Q}}$, both with side condition $K(1)=0$ and with $K$ and $G$ continuous; the latter directly implies \eqref{GFEx}. The correspondence is given by \[ K(t)= \begin{cases}
\displaystyle \frac{f(t)}{t\mathstrut},&\text{if $f(1)=0$},\\
\displaystyle \frac{f(t)}{tf(1)}-1,&\text{if $f(1)\neq0$}. \end{cases} \] \end{proposition} }
\begin{proof}By the Lemma, using principal values, \eqref{ChFE} may be re-written as \[ \varphi(t)^{n/t}=\varphi(a_nt)^{1/t}\exp(\ri b_n)\qquad (n\in\mathbb{N},\;t\in \mathbb{R}_+). \] From here, on taking principal logarithms and adjusting notation ($f:=\log \varphi $, $h(n)=-$\textrm{i}$b_{n},$ and $g(n):=a_{n}\in \mathbb{R}_{+}$), pass first to the form \[ \frac{f(g(n)t)}t=\frac{nf(t)}t+h(n)\qquad(n\in\mathbb{N},\;t\in\mathbb{R}_+); \] here the last term does not depend on $t$, and is defined for each $n$ so as to achieve equality. Then, with $s:=g(n)\in\mathbb{R}_{+}$, replacement of $n$ by $g^{-1}(s)$, valid by injectivity, gives, on cross-multiplying by $t$, \[ f(st)=g^{-1}(s)f(t)+h(g^{-1}(s))t. \] As $s,t\in \mathbb{R}_+$, take $F(t):=f(t)/t$, $G(s):=g^{-1}(s)/s$, $H(s):=h(g^{-1}(s))/s$; then \begin{equation}\label{dag} F(st)=F(t)G(s)+H(s)\qquad (s\in\mathbb{A}_{\mathbb{N}},\;t\in\mathbb{R}_+). \tag{\dag } \end{equation} This equation contains \emph{three} unknown functions: $F$, $G$, $H$ (cf.\ the Pexider-like formats considered in \cite[Section 4]{BinO2}), but we may reduce the number of unknown functions to \emph{two} by entirely eliminating\footnote{This loses the ``affine action'': $K\mapsto
G(t)K+H(t)$.} $H$. The elimination argument splits according as $F(1)=f(1)$ is zero or not.
\begin{enumerate}[\it C{a}se 1:\/] \item $f(1)=0$ (i.e.\ $\varphi (1)=1$). Taking $t=1$ in \eqref{dag} yields
$F(s)=H(s)$, and so \eqref{GARplus} holds for $K=F$, with side condition
$K(1)=0$ ($=F(1)$).
\item $f(1)\neq 0$. Then, with $\tilde{F}:=F/F(1)$ and $\tilde{H}:=H/F(1)$ in \eqref{dag}, \[ \tilde{F}(st)=\tilde{F}(t)G(s)+\tilde{H}(s)\qquad (s\in\mathbb{A},\;t\in\mathbb{R}_+), \] and $\tilde{F}(1)=1$. Taking again $t=1$ gives $\tilde{F}(s)=G(s)+\tilde{H}(s)$. Setting \begin{equation}\label{dagdag} K(t):=\tilde{F}(t)-1=\frac{F(t)}{F(1)}-1\tag{\dag\dag } \end{equation} (so that $K(1)=0$), and using $\tilde{H}=\tilde{F}-G$ in \eqref{dag} gives \begin{align*} \tilde{F}(st)&=\tilde{F}(t)G(s)+\tilde{F}(s)-G(s),\\ (\tilde{F}(st)-1)-(\tilde{F}(s)-1)&=(\tilde{F}(t)-1)G(s),\\ K(st)-K(s)&=K(t)G(s). \end{align*} That is, $K$ satisfies \eqref{GARplus} with side condition $K(1)=0$. \end{enumerate}
In summary: in both cases elimination of $H$ yields $(G_{\mathbb{A},\mathbb{R}_+})$ and the side condition of vanishing at the identity.
So far, in \eqref{GARplus} above, $t$ ranges over $\mathbb{R}_+$ whereas $s$ ranges over $\mathbb{A}_{\mathbb{N}}=\{a_n:n\in\mathbb{N}\}$, but $s$ can be allowed to range over $\{a_{m/n}:m,n\in \mathbb{N}\}$, by the Corollary. As before, since $a:n\mapsto a_n$ has $\tilde{a}$ as its continuous extension to a bijection onto $\mathbb{R}_+$, and $\varphi$ is continuous, we conclude that $s$ may range over $\mathbb{R}_+$, yielding the multiplicative form of the Goldie equation \eqref{GFEx} with the side-condition of vanishing at the identity. \end{proof}
\begin{rem}\label{r:onecase} As in \cite[Section 5]{PitP}, we consider only \emph{non-degenerate} stable distributions, consequently `Case 1' will not figure below (as this case yields an arithmetic distribution---cf.\ \cite[XVI.1,
Lemma 4]{Fel}, so here concentrated on $0$). \end{rem}
\begin{rem}\label{r:case2} In `Case 2' above, $\tilde{H}(st)-\tilde{H}(s)=\tilde{H}(t)G(s)$, since $G(st)=G(s)G(t)$, by Remark \ref{r:altreg}. So $\tilde H(\re^u)=\kappa H_{\gamma}(u)=\kappa(\re^{\gamma u}-1)/\gamma$. We use this in Section \ref{s:form}. \end{rem}
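A concrete illustration of the correspondence, included here only for orientation, is the symmetric stable case: for $0<\alpha\leq2$ the symmetric $\alpha$-stable characteristic function satisfies $\varphi(t)=\re^{-t^{\alpha}}$ for $t>0$, and \eqref{ChFE} holds with $a_n=n^{1/\alpha}$ and $b_n=0$, since $\varphi(t)^n=\re^{-nt^{\alpha}}=\varphi(n^{1/\alpha}t)$. Here $f(t)=-t^{\alpha}$ and $f(1)=-1\neq0$, so
\[
K(t)=\frac{f(t)}{tf(1)}-1=t^{\alpha-1}-1=t^{\gamma}-1,
\]
which indeed solves \eqref{GFEx} with auxiliary function $G(s)=s^{\gamma}$, i.e.\ the solution of Theorem GFE with $\kappa=\gamma$ (for $\alpha=1$, the symmetric Cauchy case, $K\equiv0$).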
\section{Stable laws: their form}\label{s:form}
This section demonstrates how to `telescope' several cases of the analysis in \cite{PitP} into one, and to make l'Hospital's Rule carry the burden of the `limiting' case $\alpha =1$. At little cost, we also deduce the form of the location constants $b_n$, without needing the separate analysis conducted in \cite[Section 5.2]{PitP}.
We break up the material into steps, beginning with a statement of the result.
\subsection{Form of the law}\label{ss:form}
The \emph{form} of $\varphi $ for a non-degenerate stable distribution is an immediate corollary of Theorem GFE (Section \ref{s:intro}) applied to \eqref{dagdag} above. For some $\gamma\in\mathbb{R}$, $\kappa\in\mathbb{C}$ and with $A:=\kappa/\gamma$ and $B:=1-A$, \begin{equation}\label{ddag} f(t)=\log\varphi(t)= \begin{cases} f(1)(At^{\gamma+1}+Bt),&\text{for $\gamma\neq 0$},\\ f(1)(t+\kappa t\log t),&\text{with $\gamma=0$}, \end{cases} \qquad(t>0).\tag{\ddag} \end{equation} Here $\alpha:=\gamma+1$ is the \emph{characteristic exponent}. From this follows a formula for $t<0$ (by complex conjugation---see below). The connection with \cite[Section 5 at end]{PitP} is given by:
\begin{enumerate}[(i)] \item $f(1):=\log\varphi(1)=-c+\ri y$ (with $c>0$, as $\abs{\varphi(t)}<1$ for some $t>0$);
\item $f(1)\kappa=-\ri\lambda$. So $f(1)B=-c+\ri(y+\lambda/\gamma)$, and $\kappa=\lambda(-y+\ri c)/(c^2+y^2)$. \end{enumerate}
\begin{rem}\label{r:dominant} We note, for the sake of completeness, that restrictions on the two parameters $\alpha$ and $\kappa$ (equivalently $\gamma$ and $\kappa$) follow from asymptotic analysis of the `initial' behaviour of the characteristic function $\varphi$ (i.e.\ near the origin). This is equivalent to the `final' or tail behaviour (i.e.\ at infinity) of the corresponding distribution function. Specifically, the `dominance ratio' of the imaginary part of the \emph{dominant} behaviour in $f(t)$ to the value $c$ (as in (i) above) relates to the `tail balance' ratio $\beta$ of \cite[(6.10)]{PitP}, i.e.\ the asymptotic ratio of the distribution's tail difference to its tail sum---cf.\ \cite[Section 8]{PitP}. Technical arguments, based on Fourier inversion, exploit the regularly varying behaviour as $t\downarrow 0$ (with index of variation $\alpha $---see above) in the real and imaginary parts of $1-\varphi(t)$ to yield the not unexpected result \cite[Theorem 6.2]{PitP} that, for $\alpha\neq1$, the dominance ratio is proportional to the tail-balance ratio $\beta$ by a factor equal to the ratio of the sine and cosine variants of Euler's Gamma integral\footnote{ In view of that factor's key role, a quick and elementary derivation is offered in the Appendix (for $0<\alpha<1$).} (on account of the dominant power function)---compare \cite[Theorem 4.10.3]{BinGT}. \end{rem}
\subsection{On notation}\label{ss:notation} The parameter $\gamma:=\alpha-1$ is linked to the auxiliary function $G$ of \eqref{GFE}; this usage of $\gamma$ conflicts with \cite{PitP}, where two letters are used in describing the behaviour of the ratio $b_n/n$: $\lambda$ for the `case $\alpha=1$', and otherwise $\gamma$ (following Feller \cite[VI.1 Footnote 2]{Fel}). The latter we denote by $\gamma _{\text{P}}(k)$, reflecting the $k$ value in the `case $\alpha=1/k\neq 1$'. In Section \ref{ss:locgen} below it emerges that $\gamma_{\text{P}}(1+)=\lambda\log n$.
\subsection{Verification of the form (\ref{ddag})}\label{ss:ver} By Remark \ref{r:onecase}, only the second case of the Proposition applies: the function $K(t)=\tilde{F}(t)-1=f(t)/(tf(1))-1$ solves \eqref{GFEx} with side-condition $K(1)=0$. Writing $t=e^u$ (as in Remark \ref{r:stuv}) yields \[ \frac{f(t)}{tf(1)}=\frac{f(\re^u)\re^{-u}}{f(1)}=1+K(\re^u)=\kappa(u) =1+\kappa\,\frac{\re^{\gamma u}-1}\gamma, \] for some complex $\kappa$ and $\gamma\neq0$ (with passage to $\gamma=0$, in the limit, to follow). So, for $t>0$, with $A:=\kappa/\gamma$ and $B:=1-A$, as above, \[ f(t)=\log \varphi(t)=f(1)t\Bigl(1+\kappa\,\frac{t^\gamma-1}\gamma\Bigr) =f(1)(At^\alpha+Bt), \] with $\alpha=\gamma+1$. On the domain $t>0$, this agrees with \cite[(5.5)]{PitP}; for $t<0$ the appropriate formula is immediate via complex conjugation, verbatim as in the derivation of \cite[(5.5)]{PitP}, save for the $\gamma$ usage. To cover the case $\gamma=0$, apply the l'Hospital convention; as in \cite[(5.8)]{PitP}, for $t>0$ and $u>0$ and some $\kappa\in\mathbb{C}$, \[ \kappa (t):=\frac{f(e^t)e^{-t}}{f(1)}=1+\kappa t:\qquad f(u)=f(1)(u+\kappa u\log u). \]
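Explicitly, the l'Hospital convention here amounts to the elementary limit
\[
\lim_{\gamma\rightarrow0}\kappa\,\frac{t^{\gamma}-1}{\gamma}
=\kappa\lim_{\gamma\rightarrow0}\frac{\re^{\gamma\log t}-1}{\gamma}
=\kappa\log t\qquad(t>0),
\]
so the second line of \eqref{ddag} is the $\gamma\rightarrow0$ limit of the first.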
\subsection{Location parameters: general case $\alpha\neq1$}\label{ss:locgen} Here $\gamma =\alpha -1\neq 0$. From the proof of the Proposition, $G(t):=g^{-1}(\re^t)\re^{-t}$, so $g^{-1}(\re^t)=\re^t\re^{\gamma t}=\re^{\alpha t}$. Put $k=1/\alpha$; then \[ v=g^{-1}(u)=u^\alpha:\qquad u=g(v)=v^{1/\alpha}=v^k, \] confirming $a_n=g(n)=n^k$, as in \cite[Lemma 5.3]{PitP}. (Here $k>0$, as \emph{strict} monotonicity was assumed in the Proposition). Furthermore, as in Remark \ref{r:case2},
$$\kappa\,\frac{\re^{\gamma t}-1}\gamma=\tilde H(\re^t) =\frac{h(g^{-1}(\re^t))\re^{-t}}{f(1)};$$
so
$$ h(g^{-1}(e^t))=f(1)\kappa\,\frac{\re^{\alpha t}-\re^t}\gamma:\qquad h(u)=f(1)\kappa\,\frac{u-u^{1/\alpha}}\gamma =f(1)\kappa\,\frac{u-u^k}\gamma, $$
where $\gamma=\alpha-1=(1-k)/k$. So
$$b_n=\ri h(n)=\ri f(1)\kappa\,\frac{n-n^k}\gamma,$$
as in the Pitman analysis: see \cite[Section 5.1]{PitP}. Here $b_n$ is real, since $f(1)\kappa=-\ri\lambda$, according to (ii) in Section \ref{ss:form} above and conforming with \cite[Section 5.1]{PitP}. So as $b_n/n=\gamma _{\text{P}}(k)$, similarly to \cite[end of proof of Lemma 4.1]{PitP}, again as $f(1)\kappa=-\ri\lambda$, for any $n\in \mathbb{N}$ \[ \lim_{k\rightarrow1}\gamma_{\text{P}}(k)=\ri f(1)\kappa\, \lim_{k\rightarrow1}k\frac{1-n^{k-1}}{k-1}=\lambda\log n. \]
\subsection{Location parameters: special case $\alpha=1$}\label{ss:special} Here $\gamma =0$. In Section \ref{ss:ver} above the form of $g$ specializes to \[ g^{-1}(\re^t)=\re^t:\qquad g(u)=u. \] Applying the l'Hospital convention yields the form of $h$: for $t>0$ and $u>0$, \[ h(g^{-1}(\re^t))=h(\re^t)=f(1)\kappa t\re^t:\qquad h(u)=f(1)\kappa u\log u; \] so, as in \cite[(5.8)]{PitP}, $b_n=\lambda n\log n$ (since $b_n=\ri h(n)$ and again $\lambda=\ri f(1)\kappa$).
\section{Identifying $a_{n}$ from the continuity of $\protect\varphi $}\label{s:sequenceidentification}
In \S 3 the form of the continuous solutions $\varphi $ of $(ChFE)$ was derived from the known continuous solutions of the Goldie equation $(GFE)$ on the assumption that $a_{n}=n^{k}$, for some $k\neq 0$ (as then $ \{a_{m}/a_{n}:m,n\in \mathbb{N}\}$ is dense in $\mathbb{R}_{+})$. Here we show that the side condition on $a_{n}$ may itself be deduced from $(ChFE) $ provided the solution $\varphi $ is continuous and \textit{non-trivial,}
i.e. neither $|\varphi |\equiv 0$ nor $|\varphi |\equiv 1$ holds, so obviating the assumption that $\varphi $ is the characteristic function of a (non-degenerate) distribution.
{ \makeatletter \def\th@plain{\normalfont\itshape
\def\@begintheorem##1##2{
\item[\hskip\labelsep \theorem@headerfont ##1{\bf .}] } }
\begin{theorem}If $\varphi $ is a non-trivial continuous function and satisfies $(ChFE)$ for some sequence $a_{n}\geq 0$, then $a_{n}=n^{k}$ for some $k\neq 0.$ \end{theorem} }
We will first need to establish a further lemma and proposition.
\begin{lemma}If $(ChFE)$\ is satisfied by a\ non-trivial continuous function $\varphi$, then the sequence $a_{n}$ is either convergent to $0,$ or divergent (`convergent to $+\infty $'). \end{lemma}
\begin{proof}Suppose otherwise. Then for some $\mathbb{M} \subseteq \mathbb{N}$, and $a>0,$ \[ a_{m}\rightarrow a\text{ through }\mathbb{M}. \] W.l.o.g. $\mathbb{M}=\mathbb{N}$, otherwise interpret $m$ below as restricted to $\mathbb{M}$. For any $t,$ $a_{m}t\rightarrow at,$ so $
K_{t}:=\sup_{m}\{|\varphi (a_{m}t)|\}$ is finite. Then for all $m$ \[
|\varphi (t)|^{m}=|\varphi (a_{m}t)|\leq K_{t}, \]
and so $|\varphi (t)|\leq 1,$ for all $t.$ By continuity, \[
|\varphi (at)|=\lim_{m}|\varphi (a_{m}t)|=\lim_{m}|\varphi (t)|^{m}=0\text{ or }1. \]
Then, setting $N_{k}:=\{t:|\varphi (at)|=k\},$ \[ \mathbb{R}_{+}=N_{0}\cup N_{1}. \]
By the connectedness of $\mathbb{R}_{+}$, one of $N_{0},N_{1}$ is empty, as the sets $N_{k}$ are closed; so respectively $|\varphi |\equiv 0$ or $
|\varphi |\equiv 1,$ contradicting non-triviality. \end{proof}
The next result essentially contains \cite[Lemma 5.2]{PitP}, which relies on $|\varphi (0)|=1,$ the continuity of $\varphi ,$ and the existence of some $t$ with $|\varphi (t)|<1$ (guaranteed below by the non-triviality of $\varphi )$. We assume less here, and so must also consider the possibility that $|\varphi (0)|=0$.
\begin{proposition} If $(ChFE)$ is satisfied by a non-trivial continuous function $\varphi$ and, for some $c>0$, $|\varphi (t)|=|\varphi (ct)|$ for all $t>0$, then $c=1$. \end{proposition}
\begin{proof} Note first that $a_{n}>0$ for all $n;$ indeed, otherwise, for some $k\geq 1$ \[
|\varphi (t)|^{k}=|\varphi (0)|\qquad (t\geq 0). \]
Assume first that $k>1;$ taking $t=0$ yields $|\varphi (0)|=0$ or $1,$ which as in Lemma 2 implies $|\varphi |\equiv 0$ or $|\varphi |\equiv 1.$ If $k=1$
then $|\varphi (t)|=|\varphi (0)|$ and for all $n>1,$ $|\varphi
(0)|^{n}=|\varphi (0)|,$ so that again $|\varphi (0)|=0$ or $1,$ which again implies $|\varphi |\equiv 0$ or $|\varphi |\equiv 1.$
Applying Lemma 2, the sequence $a_{n}$ converges either to $0$ or to $ \infty .$
First suppose that $a_{n}\rightarrow 0.$ Then, as above (referring again to $
K_{t}$), we obtain $|\varphi (t)|\leq 1$ for all $t.$ Now, since \[
|\varphi (0)|=\lim |\varphi (a_{n}t)|=\lim_{n}|\varphi (t)|^{n}, \]
if $|\varphi (t)|=1$ for \textit{some} $t,$ then $|\varphi (0)|=1$, and that in turn yields, for the very same reason, that $|\varphi (t)|\equiv 1$ for \textit{all} $t,$ a trivial solution, which is ruled out. So in fact $
|\varphi (t)|<1$ for all $t,$ and so $|\varphi (0)|=0.$ Now suppose that for some $c>0,$ $|\varphi (t)|=$ $|\varphi (ct)|$ for all $t>0.$ We show that $
c=1.$ If not, w.l.o.g. $c<1,$ (otherwise replace $c$ by $c^{-1}$ and note that $|\varphi (t/c)|=$ $|\varphi (ct/c)|=|\varphi (t)|$ ); then \[
0=|\varphi (0)|=\lim_{n}|\varphi (c^{n}t)|=|\varphi (t)|,\text{ for }t>0, \] and so $\varphi $ is trivial, a contradiction. So indeed $c=1$ in this case.
Now suppose that $a_{n}\rightarrow \infty .$ As $\varphi$ is non-trivial, choose $s$ with $\varphi (s)\neq 0,$ then \[
|\varphi (0)|=\lim_{n}|\varphi (s/a_{n})|=\lim_{n}\exp \left( \frac{1}{n}
\log |\varphi (s)|\right) =1, \]
i.e. $|\varphi (0)|=1.$ Again suppose that for some $c>0,$ $
|\varphi (t)|=$ $|\varphi (ct)|$ for all $t>0.$ To show that $c=1,$ suppose again w.l.o.g. that $c<1;$ then \[
1=|\varphi (0)|=\lim_{n}|\varphi (c^{n}t)|=|\varphi (t)|\text{ for }t>0, \]
and so $|\varphi (t)|\equiv 1,$ again a trivial solution. So again $c=1.$ \end{proof}
\textit{Proof of the Theorem. } $(ChFE)$ implies that \[
|\varphi (a_{mn}t)|=|\varphi (t)|^{mn}=|\varphi (a_{m}t)|^{n}=|\varphi
(a_{m}a_{n}t)|. \] By Proposition 2, $a_n$ satisfies the discrete version of the Cauchy exponential equation $(CEE)$ \[ a_{mn}=a_{m}a_{n}\qquad (m,n\in \mathbb{N}), \] whose solution is known to take the form $n^{k}$ (cf. \cite[Lemma 5.4]{PitP}), since $a_{n}>0$ (as in Prop. 2). If $a_{n}=1$ for some $n>1,$ then, for each
$t>0,$ $|\varphi (t)|=0$ or 1 (as $|\varphi (t)|=|\varphi (t)|^{n}$) and so again, by continuity as in Lemma 2, $\varphi $ is trivial. So $k\neq 0.$ $ \square $
\begin{rem} Continuity is essential to the theorem: take $a_n\equiv 1$, then a Borel function $\varphi$ may take the values 0 and 1 arbitrarily. \end{rem}
\section{Complements}\label{s:complements}
\subsection{Self-neglecting and self-equivarying functions}\label{ss:SNSE}
Recall (cf.\ \cite[Section 2.11]{BinGT}) that a self-map $\varphi$ of $\mathbb{R}_+$ is \emph{self-neglecting} ($\varphi\in{\it SN}$) if \begin{equation}\label{SN} \varphi(x+t\varphi(x))/\varphi(x)\rightarrow1\quad\text{locally uniformly in $t$ for all $t\in\mathbb{R}_+$},\tag{\emph{SN}} \end{equation} and $\varphi(x)=\mathrm o(x)$ as $x\rightarrow \infty$. This traditional restriction may be usefully relaxed in two ways, as in \cite{Ost1}: firstly, in imposing the weaker order condition $\varphi(x)=\mathrm O(x)$, and secondly by replacing the limit $1$ by a general limit function $\eta$, so that \begin{equation}\label{SE} \varphi(x+t\varphi(x))/\varphi(x)\rightarrow\eta(t)\quad\text{locally uniformly in $t$ for all $t\in\mathbb{R}_+$}.\tag{\emph{SE}} \end{equation} A $\varphi $ satisfying \eqref{SE} is called \emph{self-equivarying} in \cite{Ost1}, and the limit function $\eta=\eta^\varphi$ necessarily satisfies the equation \begin{equation}\label{BFE} \eta(u+v\eta(u))=\eta(u)\eta(v)\qquad(u,v\in\mathbb{R}_+) \tag{\emph{BFE}} \end{equation} (this is a special case of the \foreignlanguage{polish}{Go\l\aob b--Schinzel} equation---see also e.g.\ \cite{Brz}, or \cite{BinO2}, where \eqref{BFE} is termed the \emph{Beurling functional equation}). As $\eta\ge0$, imposing the natural condition $\eta>0$ (on $\mathbb{R}_+$) implies that it is continuous and of the form $\eta(t)=1+\rho t$, for some $\rho\ge0$ (see \cite{BinO2}); the case $\rho=0$ recovers \eqref{SN}. A function $\varphi\in{\it SE}$ has the representation \[ \varphi(t)\sim\eta^\varphi(t)\int_1^t e(u)\sd u\quad\text{for some continuous $e\rightarrow0$} \] (where $f\sim g$ if $f(x)/g(x)\rightarrow1$, as $x\rightarrow\infty$), and the second factor is in ${\it SN}$ (see \cite[Theorem 9]{BinO1}, \cite{Ost1}).
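For orientation, one verifies directly that $\eta_\rho(t):=1+\rho t$ does solve \eqref{BFE}:
\[
\eta_\rho(u+v\eta_\rho(u))=1+\rho u+\rho v(1+\rho u)=(1+\rho u)(1+\rho v)=\eta_\rho(u)\eta_\rho(v)\qquad(u,v\in\mathbb{R}_+).
\]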
\subsection{Theorem GFE}\label{ss:ThmGFE} This theorem has antecedents in \cite{Acz} and \cite{Chu}, \cite[Theorem
1]{Ost2}, and is generalized in \cite[Theorem 3]{BinO2}. It is also studied in \cite{BinO3} and \cite{Ost2}.
\subsection{Homomorphisms and random walks}\label{ss:Homo} In the context of a ring, the `\foreignlanguage{polish}{Go\l\aob b--Schinzel} functions' $\eta_\rho(t)=1+\rho t$, as above, were used by Popa and Javor (see \cite{Ost2} for references) to define associated (generalized) \emph{circle
operations}: \[ a\circ_\rho b=a+\eta_\rho(a)b=a+(1+\rho a)b=a+b+\rho ab. \] (Note that $a\circ_1b=a+b+ab$ is the familiar circle operation, and $a\circ_0b=a+b$.) These were studied in the context of $\mathbb{R}$ in \cite[Section 3.1]{Ost2}; it is straightforward to lift that analysis to the present context of the ring $\mathbb{C}$, yielding the \emph{complex circle
groups} \[ \mathbb{C}_\rho:=\{x\in\mathbb{C}:1+\rho x\ne0\} =\mathbb{C}\backslash\{-\rho^{-1}\}\qquad (\rho\ne0). \] Since \begin{align*} (1+\rho a)(1+\rho b)&=1+\rho a+\rho b+\rho^2ab
=1+\rho\lbrack a+b+\rho ab],\\ \eta_\rho(a)\eta_\rho(b)&=\eta_\rho(a\circ_\rho b), \end{align*} $\eta_\rho:(\mathbb{C}_\rho,\circ_\rho)\rightarrow (\mathbb{C}^*,\cdot)=(\mathbb{C}\backslash\{0\},\times)$ is an isomorphism (`from $\mathbb{C}_\rho$ to $\mathbb{C}_\infty$').
We may recast \eqref{GFEx} along the lines of \eqref{dag} so that $G(s)=s^\gamma$ with $\gamma\ne0$, and $K(t)=(t^\gamma-1)\rho^{-1}$, for
$$\rho=\frac\gamma\kappa=\frac{1-k}{k\kappa}.$$
Then, as $\eta_\rho(x)=1+\rho x=G(K^{-1}(x))$, \[ K(st)=K(s)\circ_\rho K(t)=K(s)+\eta_\rho(K(s))K(t)=K(s)+G(s)K(t). \] For $\gamma\ne0$, $K$ is a homomorphism from the multiplicative reals $\mathbb{R}_+$ into $\mathbb{C}_\rho$; more precisely, it is an isomorphism between $\mathbb{R}_+$ and the conjugate subgroup $(\mathbb{R}_+-1)\rho^{-1}$. In the case $\gamma=0$ ($k=1$), $\mathbb{C}_0=\mathbb{C}$ is the additive group of complex numbers; from \eqref{GFEx} it is immediate that $K$ maps logarithmically into $(\mathbb{R},+),$ `the additive reals'.
\acks The final form of this manuscript owes much to the referee's supportively penetrating reading of an earlier draft, and to the editors' advice and good offices, for which sincere thanks.
\section*{Appendix: a ratio formula}
We give an elementary derivation (using Riemann integrals) of the formula \[ \int_{0}^{\infty }\frac{\cos x}{x^{k}}e^{-\delta x}\,\mathrm{d}x\left/ \int_{0}^{\infty }\frac{\sin x}{x^{k}}e^{-\delta x}\,\mathrm{d}x\right. =\tan \pi k/2\qquad (0<k<1). \]
Substitution for $\delta >0$ of $s=\delta +i$ $=re^{i\theta }$, with $ r^{2}=1+\delta ^{2}$ and $\theta =\theta _{\delta }=\tan ^{-1}(1/\delta )$, in the Gamma integral: \[ \frac{\Gamma (1-k)}{s^{1-k}}=\int_{0}^{\infty }\frac{e^{-sx}}{x^{k}}\, \mathrm{d}x, \] with $0<k<1$, gives \[ \int_{0}^{\infty }\frac{\cos x-i\sin x}{x^{k}}e^{-\delta x}\,\mathrm{d}x= \frac{\Gamma (1-k)}{(1+\delta ^{2})^{(1-k)/2}}[\cos (1-k)\theta _{\delta }-i\sin (1-k)\theta _{\delta }]\qquad (\delta >0). \] This yields in the limit as $\delta \downarrow 0,$ since $\theta _{\delta }\rightarrow \pi /2,$ the ratio of the real and imaginary parts of the left-hand side for $\delta =0$ to be \[ \cot (1-k)\pi /2=\tan \pi k/2. \] Passage to the limit $\delta \downarrow 0$ on the left is validated, for any $k>0$, by an appeal to Abel's method: first integration by parts (twice) yields an indefinite integral \[ (1+\delta ^{2})\int e^{\delta x}\sin x\,\mathrm{d}x=-e^{\delta x}\cos x+\delta e^{\delta x}\sin x, \] valid for all $\delta ,$ whence (again by parts) \[ \int_{1}^{T}\frac{e^{-\delta x}\sin x\,\mathrm{d}x}{x^{k}}=\frac{e^{-\delta }(\delta \sin 1+\cos 1)}{(1+\delta ^{2})}-\frac{e^{-\delta T}(\delta \sin T+\cos T)}{T^{k}(1+\delta ^{2})}-k\int_{1}^{T}\frac{e^{-\delta x}(\delta \sin x+\cos x)\,\mathrm{d}x}{x^{k+1}(1+\delta ^{2})}. \] Here $e^{-\delta x}$ is uniformly bounded as $\delta \downarrow 0,$ so by joint continuity on $[0,1]$ \begin{eqnarray*} \lim_{\delta \downarrow 0}\int_{0}^{\infty }\frac{1}{x^{k}}e^{-\delta x}\sin x\,\mathrm{d}x &=&\lim_{\delta \downarrow 0}\int_{0}^{1}\frac{1}{x^{k}} e^{-\delta x}\sin x\,\mathrm{d}x+\lim_{\delta \downarrow 0}\int_{1}^{\infty } \frac{1}{x^{k}}e^{-\delta x}\sin x\,\mathrm{d}x \\ &=&\int_{0}^{\infty }\frac{\sin x}{x^{k}}\,\mathrm{d}x, \end{eqnarray*} and likewise with $\cos $ for $\sin $.
\end{document} | arXiv |
Methodology | Open | Published: 14 December 2015
The ubiquitous self-organizing map for non-stationary data streams
Bruno Silva1 &
Nuno Cavalheiro Marques2
The Internet of Things promises a continuous flow of data where traditional database and data-mining methods cannot be applied. This paper presents improvements on the ubiquitous self-organizing map (UbiSOM), a novel variant of the well-known self-organizing map (SOM), tailored for streaming environments. This approach allows ambient intelligence solutions to use multidimensional clustering over a continuous data stream for continuous exploratory data analysis. The average quantization error and average neuron utility over time are proposed and used to estimate the learning parameters, allowing the model to retain an indefinite plasticity and to cope with changes within a multidimensional data stream. We perform parameter sensitivity analysis, and our experiments show that UbiSOM outperforms existing proposals in continuously modeling possibly non-stationary data streams, converging faster to stable models when the underlying distribution is stationary and reacting accordingly to the nature of the change in continuous real-world data streams.
At present, stream data processing based on instantaneous data has become a critical issue for the Internet, the Internet of Things (ubiquitous computing), social networking and other technologies. The massive amounts of data being generated in all these environments push the need for algorithms that can extract knowledge readily.
Within this increasingly important field of research, the application of artificial neural networks to such tasks remains a fairly unexplored path. The self-organizing map (SOM) [1] is an unsupervised neural-network algorithm with topology preservation. The SOM has been applied extensively within fields ranging from engineering sciences to medicine, biology, and economics [2] over the years. The powerful visualization techniques available for SOM models result from the SOM's unique ability to detect emergent complex cluster structures and non-linear relationships in the feature space [3]. The SOM can be visualized as a sheet-like neural network array whose neurons become specifically tuned to various input vectors (examples) in an orderly fashion. For instance, the SOM and \(k\)-means both represent data in a similar way through prototypes of the data, i.e., centroids in \(k\)-means and neuron weights in the SOM, and their relation and different usages have already been studied [4]. However, it is the topological ordering of these prototypes in large SOM networks that allows the application of exploratory visualization techniques.
This paper is an extended version of work published in [5], introducing a novel variant of the SOM, called the ubiquitous self-organizing map (UbiSOM), specially tailored for streaming and big data. We extend our previous work by improving the overall algorithm with the use of a drift function to estimate learning parameters, which weighs the previously used average quantization error and a newly introduced metric: the average neuron utility. Also, the UbiSOM algorithm now implements a finite-state machine, which allows it to cope with drastic changes in the underlying stream. We also performed parameter sensitivity analysis on the new parameters introduced by the algorithm.
Our experiments, with artificial data and a real-world electric consumption sensor data stream, show that UbiSOM can be applied to data processing systems that want to use the SOM method to provide a fast response and timely mine valuable information from the data. Indeed our approach, albeit being a single-pass algorithm, outperforms current online SOM proposals in continuously modeling non-stationary data streams, converging faster to stable models when the underlying distribution is stationary and reacting accordingly to the nature of the change.
Background and literature review
In this section we introduce data streams and review current SOM algorithms that can, in theory, be used for streaming data, highlighting their problems in this setting.
Data streams
Nowadays, data streams [6, 7] are generated naturally within several applications, as opposed to simple datasets. Such applications include network monitoring, web mining, sensor networks, telecommunications, and financial applications. All have vast amounts of data arriving continuously. Being able to produce clustering models in real time assumes great importance within these applications. Hence, learning from streams is not only required in ubiquitous environments, but is also relevant to other current hot topics, namely Big Data. The rationale behind the requirement of learning from streams is that the amount of information being generated is too big to be stored in devices, where traditional mining techniques could be applied. Data streams arrive continuously and are potentially unbounded. Therefore, it is impossible to keep the entire stream in memory.
Data streams require fast, real-time processing to keep up with the high rate of data arrival, and mining results are expected to be available within a short response time. Data streams also imply non-stationarity of the data, i.e., the underlying distribution may change. This may involve the appearance or disappearance of clusters, changes in mean and/or variance, and also changes in the correlations between variables. Consequently, algorithms performing over data streams are presented with additional challenges not previously tackled in traditional data mining. One point of agreement is that these algorithms can only return approximate models, since data cannot be revisited to fine-tune the models [7]; hence the need for incremental learning.
More formally, a data stream \(\mathcal {S}\) is a massive sequence of examples \({\mathbf{x}}_1, {\mathbf{x}}_2, \ldots , {\mathbf{x}}_N\), i.e., \(\mathcal {S} = \{ {\mathbf{x}}_i \}_{i=1}^{N}\), which is potentially unbounded (\(N \rightarrow \infty\)). Each example is described by a d-dimensional feature vector \({\mathbf{x}}_i = [ x_{i}^{j} ]_{j=1}^{d}\) belonging to a feature space \(\Omega\) that can be continuous, categorical or mixed. In our work we only consider continuous spaces.
The Self-Organizing Map
The SOM establishes a projection from the manifold \(\Omega\) onto a set \(\mathcal {K}\) of neurons (or units), formally written as \(\Omega \rightarrow \mathcal {K}\), hence performing both vector quantization and projection. Each unit \(k\) is associated with a prototype \({\mathbf{w}}_k \in \mathbb {R}^d\), all of which establish the set \(\mathcal {K}\) that is referred to as the codebook. Consequently, the SOM can be interpreted as a topology-preserving mapping from a high-dimensional input space onto the 2D grid of map units. The number of prototypes K is defined by the dimensions of the grid (lattice size), i.e., \(width \times height\).
The classical Online SOM algorithm performs iteratively over time. An example \({\mathbf{x}}\) is presented at each iteration t and distances between \({\mathbf{x}}_t\) and all prototypes are computed. Usually the Euclidean distance is used and previous normalization of \({\mathbf{x}}\) is suggested to equate the dynamic ranges along each dimension; this ensures that no feature dominates the distance computations, improving the numerical accuracy. The best matching unit (BMU), which we denote by c, is the map unit with prototype closest to \({\mathbf{x}}_t\):
$$\begin{aligned} \| {\mathbf{x}}_t-{\mathbf{w}}_{c}(t)\| =\underset{k}{min}\| {\mathbf{x}}_t-{\mathbf{w}}_{k}\|. \end{aligned}$$
It is important to highlight that \(E(t) = \parallel {\mathbf{x}}_t-{\mathbf{w}}_{c}(t)\parallel\) is the map error at time t, and is referred to as the quantization error.
Next, the prototype vectors are updated: the BMU and its topological neighbors are moved towards the example in the input space by the Kohonen learning rule [1, 8]:
$$\begin{aligned} {\mathbf{w}}_{k}(t+1)={\mathbf{w}}_{k}(t)+\eta (t)\, h_{ck}(t)\left[ {\mathbf{x}}_t-{\mathbf{w}}_{k}(t)\right] \end{aligned}$$
where $$\begin{aligned}&t \text{ is the time};\\ &\eta (t)\text{ is the learning rate};\\ &h_{ck}(t)\text{ is the neighborhood kernel centered on the BMU:} \end{aligned}$$
$$\begin{aligned} h_{ck}(t)=e^{-\Bigg (\frac{\parallel r_{c}-r_{k}\parallel}{\sigma (t)}\Bigg ) ^{2}} \end{aligned}$$
where \(r_c\) and \(r_k\) are the positions of units \(c\) and \(k\) on the SOM grid. Both \(\sigma (t)\) and \(\eta (t)\) decrease monotonically with time, a critical condition for the network to converge steadily towards a topologically ordered state and to map the input space density. The following decreasing functions are common:
$$\begin{aligned} \sigma (t)=\sigma _{i}\left( \frac{\sigma _{f}}{\sigma _{i}}\right) ^{t/t_{f}},\quad \eta (t)=\eta _{i}\left( \frac{\eta _{f}}{\eta _{i}}\right) ^{t/t_{f}}, \end{aligned}$$
where \(\sigma _i\) and \(\sigma _f\) are respectively the initial and final neighborhood width and \(\eta _i\) and \(\eta _f\) the initial and final learning rate. In [8] it is suggested that the neighborhood width should be decreased from a value approximately equal to the width of the lattice, for an initial global ordering of the prototypes, down to a value encompassing only the adjacent map units. This iterative scheme endures until \(t_f\) is reached, and \(t_f\) is typically defined so that the dataset is presented several times.
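To make the classical training step concrete, the following sketch implements Eqs. (1)–(4) with NumPy. It is only an illustration: the lattice size, parameter values and all names are assumptions made for the example, not values prescribed by the algorithm.

```python
import numpy as np

# Illustrative sketch of one classical Online SOM iteration (Eqs. 1-4).
# Lattice size, learning-parameter values and names are assumptions.
width, height, d = 20, 40, 3
K = width * height
W = np.random.rand(K, d)                                     # codebook: one prototype per unit
grid = np.array([(i, j) for i in range(width) for j in range(height)], dtype=float)

def online_som_step(x, t, t_f, eta_i=0.1, eta_f=0.01, sigma_i=10.0, sigma_f=1.0):
    """Present one example x at iteration t out of t_f total iterations."""
    dists = np.linalg.norm(W - x, axis=1)
    c = int(np.argmin(dists))                                # Eq. (1): best matching unit
    eta = eta_i * (eta_f / eta_i) ** (t / t_f)               # Eq. (4): decreasing learning rate
    sigma = sigma_i * (sigma_f / sigma_i) ** (t / t_f)       # Eq. (4): decreasing neighborhood width
    grid_dist = np.linalg.norm(grid - grid[c], axis=1)
    h = np.exp(-(grid_dist / sigma) ** 2)                    # Eq. (3): Gaussian neighborhood kernel
    W[:] += eta * h[:, None] * (x - W)                       # Eq. (2): Kohonen learning rule
    return c, dists[c]                                       # dists[c] is the quantization error E(t)
```

In this form the schedule depends explicitly on \(t_f\), which is exactly the limitation discussed next for streaming data.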
SOM models for streaming data
In a real-world streaming environment \(t_f\) is unknown or not defined, so the classical algorithm cannot be used. Even with a bounded stream the Online SOM loses plasticity over time (due to the decrease of the learning parameters) and cannot cope easily with changes in the underlying distribution.
Despite the huge amount of literature on the SOM and SOM-like networks, there is surprisingly little work dealing with incremental learning. Furthermore, most of these works are based on incremental models, that is, networks that create and/or delete nodes as necessary. For example, the modified GNG model [9] is able to follow non-stationary distributions by creating nodes as in a regular GNG and deleting them when their utility parameter becomes too small. Similarly, the evolving self-organizing map (ESOM) [10, 11] is based on an incremental network quite similar to GNG that creates nodes dynamically based on the distance of the BMU to the example (but the new node is created at the exact data point instead of at the mid-point, as in GNG). The self-organizing incremental neural network (SOINN) [12] and its enhanced version (ESOINN) [13] are also based on an incremental structure, where the first version uses a two-layer network while the enhanced version proposes a single-layer network. These proposals, however, do not guarantee a compact model, given that the number of nodes can grow without bound in a non-stationary environment if not parameterized correctly.
On the other hand, our proposal keeps the size of the map fixed. Some time-independent SOM variants obeying this restriction have been proposed. The two most recent examples are: the Parameterless SOM (PLSOM) [14], which evaluates the local error \({E(t)}\) and calculates the learning parameters depending on the local quadratic fitting error of the map to the input space, and the Dynamic SOM (DSOM) [15], which follows a similar reasoning by adjusting the magnitude of the learning parameters to the local error, but fails to converge from a totally unordered state. Moreover, the authors of both proposals admit that their algorithms are unable to map the input space density onto the SOM, which has a severe impact on the application of common visualization techniques for exploratory analysis. Also, by using instantaneous \({E(t)}\) values, these variants are very sensitive to outliers, i.e., noisy data.
On the other hand, the proposed UbiSOM algorithm in this paper estimates learning parameters based on the performance of the map over streaming data by monitoring the average quantization error, being more tolerant to noise and aware of real changes in the underlying distribution.
The ubiquitous self-organizing map
The proposed UbiSOM algorithm relies on two learning assessment metrics, namely the average quantization error and the average neuron utility, computed over a sliding window. While the first assesses the trend of the vector quantization process towards the underlying distribution, the latter is able to detect regions of the map that may become "unused" given some changes in the distribution, e.g., the disappearance of clusters. Both metrics are weighed in a drift function that gives an overall indication of the performance of the map over the data stream, used to estimate learning parameters.
The UbiSOM implements a finite-state machine consisting of two states, namely ordering and learning. The ordering state allows the map to initially unfold over the underlying distribution with monotonically decreasing learning parameters; it is also used to obtain the first values of the assessment metrics, transitioning afterwards to the learning state. Here, the learning parameters, i.e., learning rate and neighborhood radius, are decreased or increased based on the drift function. This allows the UbiSOM to retain an indefinite plasticity, while maintaining the original SOM properties, over non-stationary data streams. These states also coincide with the two typical training phases suggested by Kohonen. It is possible, however, that unrecoverable situations arising from abrupt changes in the underlying distribution are detected, which leads the algorithm to transition back to the ordering state.
Each UbiSOM neuron \(k\) is a tuple \(\mathcal {W}_{k}=\langle {\mathbf{w}}_{k},\, t_{k}^{update}\rangle\), where \({\mathbf{w}}_{k}\in \mathbb {R}^{d}\) is the prototype and \(t_{k}^{update}\) stores the time stamp of the last time its prototype was updated. For each incoming observation \({\mathbf{x}}_{t}\), presented at time t, two metrics are computed, within a sliding window of length T, namely the average quantization error \(\overline{qe}(t)\) and the average neuron utility \(\overline{\lambda }(t)\). We assume that all features of the data stream are equally normalized between \([d_{min},d_{max}]\). The local quantization error \({E(t)}\) is normalized by \(|\Omega |=(d_{max}-d_{min})\sqrt{d}\), so that \(\overline{qe}(t)\in [0,1]\). The \(\overline{\lambda }(t)\) metric averages neuron utility (\(\lambda (t)\)) values that are computed as a ratio of updated neurons during the last T observations. Both metrics are used in a drift function \(d(t)\), where the parameter \(\beta \in [0,1]\) weighs both metrics.
The UbiSOM switches between the ordering and learning states, both using the classical SOM update rule, but with different mechanisms for estimating learning parameters \(\sigma\) and \(\eta\). The ordering state endures for \(T\) examples, until the first values of \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are available, establishing an interval \([t_{i},t_{f}]\), during which monotonically decreasing functions \(\sigma (t)\) and \(\eta (t)\) are used to decrease values between \(\{\sigma _{i},\sigma _{f}\}\) and \(\{\eta _{i},\eta _{f}\},\) respectively. The learning state estimates learning parameters as a function of the drift function. UbiSOM neighborhood function is defined in a way that uses \(\sigma \in [0,1]\) as opposed to existing variants, where the domain of the values is problem-dependent.
Online assessment metrics
The purpose of these metrics is to assess the "fit" of the map to the underlying distribution. Both proposed metrics are computed over a sliding window of length \(T\).
Average quantization error
The widely used global quantization error (QE) metric is the standard measure of fit of a SOM model to a particular distribution. It is typically used to compare SOM models obtained for different runs and/or parameterizations and used in a batch setting. The rationale is that the model which exhibits a lower QE value is better at summarizing the input space.
Regarding data streams this metric, as it stands, is not applicable because data is potentially infinite. Competing approaches to the proposed UbiSOM use only the local quantization error \(E(t)\). Kohonen stated that both \(\eta (t)\) and \(\sigma (t)\) should decrease monotonically with time, a critical condition to achieve convergence [8]. However, the local error is very unstable because \(\Omega \rightarrow \mathcal {K}\) is a many-to-few mapping, where some observations are better represented than others. As an example, with stationary data the local error does not decrease monotonically over time. We argue this is the reason why other existing approaches, e.g., PLSOM and DSOM, fail to model the input space density correctly.
In the proposed algorithm, the quantization error was modified to a running mean in the form of the average quantization error \(\overline{qe}(t)\), based on the premise that the error of a learner will decrease over time for an increasing number of examples if the underlying distribution is stationary; otherwise, if the distribution changes, the error increases. For each observation \({\mathbf{x}}_t\) the \(E{^{\prime }}(t)\) local quantization error is obtained during the BMU search, as the normalized Euclidean distance
$$\begin{aligned} E{^{\prime }}(t)=\frac{\Vert \,{\mathbf{x}}_{t}-{\mathbf{w}}_{c}\,\Vert }{|\Omega |}. \end{aligned}$$
These values are averaged over a window of length \(T \gg 0\) to obtain \(\overline{qe}(t)\), defined in Eq. (6). Consequently, the value of \(T\) establishes a short, medium or long-term trend of the model adaptation.
$$\begin{aligned} \overline{qe}(t)=\frac{1}{T}\sum _{t}^{t-T+1}E{^{\prime }}(t) \end{aligned}$$
Figure 1 depicts the typical behavior of \(E{^{\prime }}(t)\) values obtained during a run of the classical SOM algorithm, together with the computed \(\overline{qe}(t)\) values for \(T=2000\), over a data stream where the underlying distribution suffers an abrupt change at \(t\) = \(50\,000\). We can observe that \(E{^{\prime }}(t)\) values exhibit a large variance throughout time, as opposed to \(\overline{qe}(t)\) which is smoother and indicates the trend of the convergence. Therefore, it is implicitly assumed that if \(\overline{qe}(t)\) is decreasing, then the underlying distribution is stationary; otherwise, it is changing.
Behavior of local \(E{^{\prime }}(t)\) vs. average quantization error \(\overline{qe}(t)\)
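A possible implementation of Eqs. (5)–(6) keeps only the last \(T\) local errors in a fixed-length buffer. The deque-based buffer and the assumption of data normalized to \([0,1]^d\) are implementation choices for this sketch, not requirements of the paper.

```python
import numpy as np
from collections import deque

# Sketch of the normalized local error E'(t) (Eq. 5) and its running average (Eq. 6).
T, d = 2000, 3
omega_diameter = (1.0 - 0.0) * np.sqrt(d)      # |Omega| for features normalized to [0, 1]
err_window = deque(maxlen=T)                   # holds only the last T local errors

def local_error(x, w_bmu):
    """Normalized quantization error of one observation (Eq. 5)."""
    return np.linalg.norm(x - w_bmu) / omega_diameter

def avg_qe(x, w_bmu):
    """Append the newest E'(t) and return the current sliding average qe(t) (Eq. 6)."""
    err_window.append(local_error(x, w_bmu))
    return sum(err_window) / len(err_window)
```

Keeping a running sum instead of re-summing the buffer on every call would make each update O(1), in line with the complexity analysis given later in the paper.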
Average neuron utility
The average quantization error \(\overline{qe}(t)\) may be a good overall indicator of the fit of the model. Despite that, it may be unable to detect the abrupt disappearance of clusters. Figure 2 illustrates such a scenario, depicting the "unused" area of the map after the inner cluster disappears. Here \(\overline{qe}(t)\) does not increase; however, in this situation the learning parameters should increase and allow the map to recover. As a consequence, the average neuron utility was proposed as a means to detect these cases.
Example of a distribution change not detected by the average quantization error. a Before change; b after the disappearance of the inner cluster, where it is visible a region of unused neurons
To compute this assessment metric each UbiSOM neuron \(k\) is extended with a time stamp \(t_{k}^{update}\) which stores the last time the corresponding prototype was updated, functioning as an aging mechanism. A prototype is updated if it is the BMU or if it falls in the influence region of the BMU, limited by the neighborhood function. Initially, \(t_{k}^{update}=0\). The neuron utility \(\lambda (t)\) is given by Eq. (7). It measures the ratio of neurons that were updated within the last \(T\) observations, over the total number of neurons. Consequently, if all neurons have been recently updated, then \(\lambda (t)=1\). The values are then averaged by Eq. (8) to obtain \(\overline{{\lambda }}(t)\).
$$\begin{aligned} \lambda (t)=\frac{\sum _{k=1}^{K}1_{\{t-t_{k}^{update}\le T\}}}{K} \end{aligned}$$
$$\begin{aligned} \overline{\lambda }(t)=\frac{1}{T}\sum _{t}^{t-T+1}\lambda (t). \end{aligned}$$
As a result, a decrease in \(\overline{\lambda }(t)\) indicates that there are neurons that are not being used to quantize the data stream. While it is not unusual to obtain these "dead units" with stationary data after the map has converged, a decreasing trend should alert to changes in the underlying distribution.
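The neuron utility of Eqs. (7)–(8) can be sketched with an array of timestamps, one per neuron. The names and the lattice size below are again assumptions made for the example.

```python
import numpy as np
from collections import deque

# Sketch of the neuron utility lambda(t) (Eq. 7) and its sliding average (Eq. 8).
K, T = 20 * 40, 2000
last_update = np.zeros(K, dtype=int)           # t_k^update, initially 0 for every neuron
util_window = deque(maxlen=T)

def neuron_utility(t):
    """Fraction of neurons updated within the last T observations (Eq. 7)."""
    return np.count_nonzero(t - last_update <= T) / K

def avg_utility(t):
    """Append lambda(t) and return the running average (Eq. 8)."""
    util_window.append(neuron_utility(t))
    return sum(util_window) / len(util_window)

# Whenever neuron k is updated at time t (as BMU or inside its influence region):
#   last_update[k] = t
```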
The drift function
The previous metrics \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are both weighed in a drift function that is used by the UbiSOM to estimate learning parameters. In short:
\(\overline{qe}(t)\) The average quantization error gives an indication of how well the map is currently quantifying the underlying distribution, previously defined in Eq. (6). In most situations where the underlying data stream is stationary, \(\overline{qe}(t)\) is expected to decrease and stabilize, i.e., the map is converging. If the shape of the distribution changes, \(\overline{qe}(t)\) is expected to increase.
\(\overline{\lambda }(t)\) The average neuron utility is an additional measure which gives an indication of the proportion of neurons that are actively being updated, previously defined in Eq. (8). The decrease of \(\overline{\lambda }(t)\) indicates neurons are being underused, which can reflect changes in the underlying distribution not detected by \(\overline{qe}(t)\).
The drift function is defined as
$$\begin{aligned} d(t)=\beta \,\overline{qe}(t)+(1-\beta )\,(1-\overline{\lambda }(t)) \end{aligned}$$
where \(\beta \in [0,1]\) is a weighting factor that establishes the balance of importance between the two metrics. Since both \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\) are only obtained after \(T\) observations, so is \(d(t)\).
A quick analysis of \(d(t)\) should be made: with high learning parameters, especially the neighborhood \(\sigma\) value, \(\overline{\lambda }(t)\) is expected to be \(\thickapprox 1\), which practically eliminates the second term of the equation. Consequently, the drift function is then governed only by \(\overline{qe}(t)\). When the neuron utility decreases, the second term contributes to the increase of \(d(t)\) in proportion to the chosen \(\beta\) value. Ultimately, if \(\beta =1\) then the drift function is defined only by the \(\overline{qe}(t)\) metric. Empirically, \(\beta\) should be parameterized with relatively high values, establishing \(\overline{qe}(t)\) as the main measure of "fit" and using \(\overline{\lambda }(t)\) as a failsafe mechanism.
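As a minimal sketch, the drift function of Eq. (9) is a one-line combination of the two metrics; the default \(\beta =0.7\) anticipates the value selected later in the experiments and is otherwise an assumption.

```python
def drift(avg_qe_t, avg_utility_t, beta=0.7):
    """Drift function d(t) of Eq. (9)."""
    return beta * avg_qe_t + (1.0 - beta) * (1.0 - avg_utility_t)
```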
The neighborhood function
The UbiSOM algorithm uses a normalized neighborhood radius \(\sigma\) learning parameter and a truncated neighborhood function. The latter is what effectively allows \(\overline{\lambda }(t)\) to be computed.
The classical SOM neighborhood function relies on a \(\sigma\) value that is problem-dependent, i.e., the values used depend on the lattice size. This complicates the parameterization of \(\sigma\) for different values of \(K\), i.e., \(width\times height\).
The performed normalization is based on the maximum distance between any two neurons in the lattice. In rectangular maps the farthest neurons are the ones at opposing diagonals, e.g., positions (0, 0) and \((width-1,\, height-1)\) in Fig. 3. Hence distances within the lattice are normalized by the Euclidean norm of the vector \({\mathbf{diag}}=(width-1,height-1)\), defined as
$$\begin{aligned} \| {\mathbf{diag}}\| =\sqrt{(width-1)^{2}+(height-1)^{2}}. \end{aligned}$$
This effectively limits the maximum neighborhood width the UbiSOM can use and establishes \(\sigma \in [0,1]\).
Maximum lattice distance used for normalization of \(\sigma\)
The neighborhood function of the UbiSOM variant is given by
$$\begin{aligned} h_{ck}^{\prime }(t)=e^{-\Big (\frac{\parallel r_{c}-r_{k}\parallel }{\sigma \,\parallel {\mathbf{diag}}\parallel }\Big )^{2}} \end{aligned}$$
where \(r_{c}\) is the position in the grid of the BMU for observation \({\mathbf{x}}_{t}\). To get a grasp on how different \(\sigma\) values determine the influence region around the BMU, Fig. 4 depicts Eq. (11) for different \(\sigma\) values. Neurons whose values of \(h_{ck}^{\prime }(t)\) are below a threshold of 0.01 are not updated. This is critical for the computation of \(\lambda (t)\), since \(h_{ck}^{\prime }(t)\) is a continuous function and as \(h_{ck}^{\prime }(t)\rightarrow 0\) all other neurons would still be updated with very small values. The truncated neighborhood function is also a performance improvement, avoiding negligible updates to prototypes.
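A sketch of the normalized, truncated neighborhood of Eqs. (10)–(11) follows; the lattice size and helper names are assumptions, while the 0.01 threshold is the value given in the text.

```python
import numpy as np

# Sketch of the UbiSOM neighborhood (Eqs. 10-11): sigma is normalized to [0, 1]
# by the lattice diagonal, and values below 0.01 are truncated to zero.
width, height = 20, 40
diag_norm = np.sqrt((width - 1) ** 2 + (height - 1) ** 2)    # Eq. (10)

def neighborhood(r_c, r_k, sigma, threshold=0.01):
    """h'_ck(t) of Eq. (11); neurons outside the influence region get 0."""
    g = np.exp(-(np.linalg.norm(np.asarray(r_c) - np.asarray(r_k)) / (sigma * diag_norm)) ** 2)
    return g if g > threshold else 0.0
```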
The UbiSOM neighborhood function. The threshold value is also depicted, below which no updates to prototypes are performed
UbiSOM states and transitions
States and transitions
The UbiSOM algorithm implements a finite state-machine, i.e., it can switch between two states. This design was, on one hand, imposed by the initial delay in obtaining values for the assessment metrics and, as a consequence, for the drift function \(d(t)\); on the other hand, seen as a desirable mechanism to conform to Kohonen's proposal of an ordering and a convergence phase for the SOM [8] and to deal with drastic changes that can occur in the underlying distribution.
The two possible states of the UbiSOM algorithm, namely the ordering state and the learning state, are depicted in Fig. 5 and described next. Both use an update equation similar to that of the classical algorithm, but with the neighborhood function defined in Eq. (11), as shown in Eq. (12). Please note that the prototypes are only updated above the neighborhood function threshold.
$$\begin{aligned} {\mathbf{w}}_{k}(t+1)={\left\{ \begin{array}{ll} {\mathbf{w}}_{k}(t)+\eta (t)h_{ck}^{\prime }(t)\left[ {\mathbf{x}}_{t}-{\mathbf{w}}_{k}(t)\right] &\quad h_{ck}^{\prime }(t)>0.01\\ {\mathbf{w}}_{k}(t) &\quad otherwise \end{array}\right. } \end{aligned}$$
However, each state estimates learning parameters with different functions for \(\eta (t)\) and \(\sigma (t)\).
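Putting the previous pieces together, the update of Eq. (12) only moves prototypes inside the truncated neighborhood and refreshes their timestamps for Eq. (7). The function below is a sketch; W, grid, last_update and diag_norm are assumed to be the structures introduced in the earlier sketches.

```python
import numpy as np

def ubisom_update(W, grid, last_update, x, t, c, eta, sigma, diag_norm, threshold=0.01):
    """Apply Eq. (12) for observation x with BMU index c at time t."""
    grid_dist = np.linalg.norm(grid - grid[c], axis=1)
    h = np.exp(-(grid_dist / (sigma * diag_norm)) ** 2)
    active = h > threshold                        # truncated neighborhood of Eq. (11)
    W[active] += eta * h[active][:, None] * (x - W[active])
    last_update[active] = t                       # aging mechanism used by lambda(t)
```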
Ordering state
The ordering state is the initial state of the UbiSOM algorithm and to where it possibly reverts if it can not recover from an abrupt change in the data stream. It endures for \(T\) observations where learning parameters are estimated with a monotonically decreasing function, i.e., time-dependent, similar to the classical SOM. Thus, the parameter \(T\) simultaneously defines the window length of the assessment metrics, as well as dictates the duration of the ordering state. The parameters should be relatively high, so the map can order itself from a totally unordered initialization regarding the underlying distribution. This phase also allows for the first value of the drift function \(d(t)\) to be available. After \(T\) observations the algorithm switches to the learning state.
Let \(t_{i}\) and \(t_{f}=t_{i}+T-1\) be the first and last iterations of the ordering phase, respectively. This state requires choosing appropriate parameter values for \(\eta _{i}\), \(\eta _{f}\), \(\sigma _{i}\) and \(\sigma _{f}\), which are, respectively, the initial and final values for the learning rate and the normalized neighborhood radius. The choice of values will greatly impact the initial ordering of the prototypes and will affect the estimation of parameters of the learning state. Any monotonically decreasing function can be used, although in this research the following were used:
$$\begin{aligned} \sigma (t)=\sigma _{i}\left( \frac{\sigma _{f}}{\sigma _{i}}\right) ^{t/t_{f}},\quad \eta (t)=\eta _{i}\left( \frac{\eta _{f}}{\eta _{i}}\right) ^{t/t_{f}}\qquad \forall t\in \{t_{i},t_{i+1},\ldots ,t_{f}\} \end{aligned}$$
At the end of the \(t_{f}\) iteration, the first value of the drift function is obtained, i.e., \(d(t_{f}),\)and the UbiSOM algorithm transitions to the learning state.
Learning state
The learning state begins at \(t_{f}+1\) and is the main state of the UbiSOM algorithm, during which learning parameters are estimated in a time-independent manner. Here learning parameters are estimated solely based on the drift function \(d(t)\), decreasing or increasing relative to the first computed value \(d(t_{f})\) and final values (\(\eta _{f},\sigma _{f}\)) of the ordering state.
Given that in this state the map is expected to start converging, the values of \(d(t)\) should also decrease. Hence, the value \(d(t_{f})\) is used as a reference value establishing a threshold above which the map is considered to be irrecoverably diverging from changes in the underlying distribution, e.g., in some abrupt changes the drift function can increase rapidly to very high values. Consequently, it also limits the maximum values that learning parameters can attain during this state.
Learning parameters \(\eta (t)\) and \(\sigma (t)\) are estimated for an observation presented at time t by Eq. (14), where \(d(t)\) is defined as in Eq. (9). One can easily derive that learning parameters are estimated proportionally to \(d(t)\). Also, the final values \(\eta _{f}\) and \(\sigma _{f}\) of the ordering state establish an upper bound for the learning parameters in this state.
$$\begin{aligned} \eta (t)={\left\{ \begin{array}{ll} \frac{\eta _{f}}{d(t_{f})}\, d(t) &\quad d(t)<d(t_{f})\\ \eta _{f} &\quad otherwise \end{array}\right. }\quad \sigma (t)={\left\{ \begin{array}{ll} \frac{\sigma _{f}}{d(t_{f})}\, d(t) &\quad d(t)<d(t_{f})\\ \sigma _{f} &\quad otherwise. \end{array}\right. } \end{aligned}$$
The outcome of these equations is that if the distribution is stationary the learning parameters accompany the decrease of the drift function values, allowing the map to converge to a stable state. On the contrary, if changes occur, the drift function values rise, consequently increasing the learning parameters and the plasticity of the map, up to a point where \(d(t)\) should decrease again. The increased plasticity should allow the map to adjust to the distribution change.
However, there may be cases of abrupt changes from which the map cannot recover, i.e., the map does not resume convergence with decreasing \(d(t)\) values. Therefore, if we detect that the learning parameters have remained at their peak values for at least \(T\) iterations, i.e., \(\sum 1_{\{d(t)\ge d(t_{f})\}}\ge T\), then this situation is confirmed and the UbiSOM transitions back to the ordering state.
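The learning-state estimation of Eq. (14) and the fallback transition can be sketched as follows; eta_f, sigma_f and d(t_f) follow the notation of the text, while reading the transition condition as \(T\) consecutive saturated iterations is an assumption about how the indicator sum is accumulated.

```python
def learning_state_params(d_t, d_tf, eta_f=0.08, sigma_f=0.2):
    """Eq. (14): parameters proportional to d(t), capped at the ordering-state final values."""
    if d_t < d_tf:
        return (eta_f / d_tf) * d_t, (sigma_f / d_tf) * d_t
    return eta_f, sigma_f

class ReorderMonitor:
    """Signals a transition back to the ordering state after T saturated iterations."""
    def __init__(self, T=2000):
        self.T, self.count = T, 0

    def should_reorder(self, d_t, d_tf):
        self.count = self.count + 1 if d_t >= d_tf else 0
        return self.count >= self.T
```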
Time and space complexity
The UbiSOM algorithm (and model) does not increase the time complexity of the classical SOM algorithm, since all the potentially penalizing additional operations, namely the computations of the assessment metrics, can be performed in O(1). Regarding space complexity, it increases the space needed for: (1) storing an additional timestamp for each neuron \(k\); (2) storing two queues for the assessment metrics \(\overline{qe}(t)\) and \(\overline{\lambda }(t)\), each of length \(T\). Therefore, after the initial creation of data structures (map and queues) in O(\(K\)) time and \(O(Kd+2K+2T)\) space, every observation \({\mathbf{x}}_{t}\) is processed in constant O(2Kd) time and constant space. No observations are kept in memory.
Hence, the UbiSOM algorithm is scalable with respect to the number of observations N, since the cost per observation is kept constant. However, increasing the number of neurons \(K\), i.e., the size of the lattice, or the dimensionality d of the data stream will increase this cost linearly.
A series of experiments was conducted using artificial data streams to assess the UbiSOM parameterization and performance over stationary and non-stationary data, while comparing it to current proposals, namely the classical Online SOM, PLSOM and DSOM. With artificial data we can establish the ground truth of the expected outcome and illustrate some key points. Afterwards we apply the UbiSOM to a real-world electric power consumption problem where we further illustrate the potential of the UbiSOM when dealing with sensor data in a streaming environment.
Table 1 summarizes the artificial data streams used in the presented results. These are two- and three-dimensional for the purpose of easy visualization of obtained maps. The Gauss data stream, as the name suggests, describes a Gaussian cluster of points and is used to check if the algorithms can map the input space density properly; Chain describes two inter-locked rings—this represents a cluster structure that partitional clustering algorithms, e.g., \(k\)-means, fail to cluster properly; Hepta describes a distribution of 7 evenly spaced Gaussian clusters, where at time \(t\) = \(50\,000\) one cluster disappears, previously depicted in Fig. 2, and; Clouds contains an evolving cluster structure with separation and merge of clusters (between \(t\) = \(50\,000\) and \(t\) = \(150\,000\)) and aims at evaluating how the different tested algorithms react to continuous changes in the distribution. All data streams were normalized such that \({\mathbf{x}}_{t}\in [0,1]^{d}\).
Table 1 Summary of artificial data streams used in presented experiments
The parameterization of any SOM algorithm is mainly performed empirically, since only rules of thumb exist for finding good parameters [8]. In the next section we present an initial parameter sensitivity analysis of the new parameters introduced in the UbiSOM, e.g., \(T\) and \(\beta\), while empirically setting the remaining parameters, shared to some extent with the classical SOM algorithm. Concerning the lattice size, it should be rectangular in order to minimize projection distortions; hence we use a \(20\times 40\) lattice for all algorithms, which also allows for a good quantization of the input space. In the ordering state of the UbiSOM algorithm we have empirically set \(\eta _{i}=0.1\), \(\eta _{f}=0.08\), \(\sigma _{i}=0.6\) and \(\sigma _{f}=0.2\), based on the recommendation that learning parameters should be initially relatively high to allow the unfolding of the map over the underlying distribution. These values have shown optimal results in the presented experiments and in many others not included in this paper. A parameter sensitivity analysis including these parameters is reserved for future work.
Regarding the other algorithms, after several tries the best parameters for the chosen map size and for each compared algorithm were selected. The classical Online SOM uses \(\eta _{i}=0.1\) and \(\sigma _{i}=2\sqrt{K}\), decreasing monotonically to \(\eta _{f}=0.01\) and \(\sigma _{f}=1\) respectively; PLSOM uses a single parameter \(\gamma\) called neighborhood range and the values yielding the best results for the used lattice size were \(\gamma =(65,37,35,130)\) for the Gauss, Chain, Hepta and Clouds data streams, respectively. DSOM was parameterized as in [15] with \(elasticity=3\) and \(\varepsilon =0.1\), but since it fails to unfold from an initial random state, it was left out of further experiments. The authors admit that their algorithm has this drawback.
Maps across all experiments use the same random initialization of prototypes at the center of the input space, so no results are affected by different initial states.
Parameter sensitivity analysis
We present a parameter sensitivity analysis for the parameters \(T\) and \(\beta\) introduced in the UbiSOM. The first establishes the length of the sliding window used to compute the assessment metrics, and consequently whether a short, medium or long-term trend is used to estimate learning parameters. While a shorter window is more sensitive to the variance of \(E{^{\prime }}(t)\) and to noise, a longer window increases the reaction time of the algorithm to true change in the underlying distribution. It also implicitly dictates the duration of the ordering state, for which Kohonen recommends, as another rule of thumb, that it should not cover less than \(1\,000\) examples [8]. The latter weighs the importance of both assessment metrics in the drift function \(d(t)\) and, as discussed earlier, we should use higher values so as to favor the \(\overline{qe}(t)\) values while estimating learning parameters. Hence, we chose \(T=\{500,1000,1500,2000,2500,3000\}\) and \(\beta =\{0.5,0.6,0.7,0.8,0.9,1\}\) as the sets of values over which to perform the parameter sensitivity analysis.
To shed some light on how these parameters could affect learning, we opted to measure the mean quantization error (Mean \(E{^{\prime }}(t)\)), so as to obtain a single value that could characterize the quantization procedure across the entire stream. Similarly, we used the mean neuron activity (Mean \(\lambda (t)\)) to measure in a single value the proportion of utilized neurons during learning from stationary and non-stationary data streams.
Thus, we were interested in finding ideal intervals for the tested parameters that could simultaneously minimize the mean quantization error, while maximizing the mean neuron utility. We also computed \(\overline{qe}(t)\) for the different values of \(T\) to obtain a grasp on the delay in convergence imposed by this parameter. From the minimum \(\overline{qe}(t)\) obtained throughout the stream, we computed the iteration where the \(\overline{qe}(t)\) value falls within 5 % of the minimum (Convergence t), as a temporal indicator of convergence.
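For reproducibility, the three summary measures can be computed from the series recorded during a run as sketched below; the input names are assumptions about how a run is logged.

```python
import numpy as np

def evaluation_measures(local_errors, utilities, avg_qe_series):
    """Mean E'(t), mean lambda(t) and the first iteration with qe(t) within 5 % of its minimum."""
    mean_err = float(np.mean(local_errors))
    mean_util = float(np.mean(utilities))
    qe = np.asarray(avg_qe_series)
    convergence_t = int(np.argmax(qe <= qe.min() * 1.05))
    return mean_err, mean_util, convergence_t
```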
The results for all combinations of the chosen parameter values for the Chain and Clouds data streams are presented in Tables 2 and 3, respectively. It comes as no surprise that for increasing \(T\), convergence happens later in the stream. More importantly, the results empirically suggest that \(T=\{1500,2000\}\) and \(\beta =\{0.6,0.7,0.8\}\) exhibit the best compromise between the minimization of the error and the maximization of neuron utility.
Table 2 Parameter sensitivity analysis with the chain data stream
Table 3 Parameter sensitivity analysis with the clouds data stream
After experimentally trying these values, we opted for \(T=2000\) and \(\beta =0.7\) since it consistently gave good results across a variety of data streams, some not included in this paper. Hence, all the remaining experiments use these parameter values.
Density mapping
We illustrate the modeling and quantization process of all tested algorithms in Fig. 6, for the stationary Gauss data stream. It can be seen that only the Online SOM and the UbiSOM are able to model the input space density correctly, assigning more neurons to the denser area of points. The inability of PLSOM to map the density limits its applicability to exploratory analysis with some visualization techniques illustrated in this work. It can also be seen that DSOM fails to unfold the map to cover this distribution.
Final maps obtained for the stationary Gaussian data stream. a UbiSOM; b online SOM; c PLSOM; d DSOM
Convergence with stationary and non-stationary data
The following results compare the UbiSOM algorithm against the Online SOM and the PLSOM algorithms across all artificial data streams. Table 4 summarizes the obtained values for the previously used measures, namely the mean \(E{^{\prime }}(t)\) and mean \(\lambda (t)\) values. While PLSOM exhibits a lower mean \(E{^{\prime }}(t)\) on all data streams except Gauss, it does so at the expense of not mapping the input space density, as previously demonstrated. The density mapping of the vector projection and the quantization process can be seen as conflicting goals. On the other hand, the UbiSOM algorithm performs very similarly in this measure, but consistently exhibits higher mean \(\lambda (t)\) values, most importantly in the non-stationary data streams, which may indicate that the other algorithms may not have performed so well in the presence of changes in the underlying distribution.
Table 4 Comparison of the UbiSOM, Online SOM and PLSOM algorithms across all data streams
The previous measures can establish a baseline comparison between algorithms, but are not conclusive regarding the quality of the obtained maps. Consequently, we computed the average quantization error \(\overline{qe}\) with \(T=2000\) for all algorithms and data streams. Please note that although this is the value that the UbiSOM also uses, we consider this fair for all algorithms, since it is simply evaluating the trend of the quantization error for each algorithm. The results are depicted in Fig. 7 and it can be seen that the UbiSOM algorithm generally converges faster to stationary phases of the distributions, while the PLSOM converges less steadily and slower in half of the streams, and the convergence of the Online SOM is dictated by the monotonic decrease of the learning parameters. In the Hepta data stream only the UbiSOM algorithm is able to detect the disappearance of the cluster, which we can derive by the increase of the \(\overline{qe}\) values. Please note that this was a consequence of the contribution of the average neuron activity into the drift function. Similarly, in the Clouds data stream the UbiSOM algorithm quickly reacts to the start of the gradual change in the position of the clusters through the \(\overline{qe}\) metric, while the average neuron utility is responsible for the observed "spikes" when it detects currently unused regions of the map. Given that the UbiSOM learning parameters are mainly estimated through \(\overline{qe}\), we can get a very close idea of the evolution of the learning parameters across these different datasets.
Average quantization error \(\overline{qe}(t)\) of algorithms across all data streams. Results for the UbiSOM, online SOM and PLSOM in left, center and right columns, respectively. The rows regard the Gauss, Chain, Hepta and Clouds data streams, respectively
In order to support the above inferences, Fig. 8 illustrates the UbiSOM over the Clouds data stream, confirming that it models the changing underlying distribution correctly over time, maintaining density mapping and topological ordering of prototypes with few signs of unused portions of the map. On the other hand, the final Online SOM and PLSOM maps for this data stream are presented in Fig. 9. Neither is able to correctly model the distribution in its final state. Whereas the Online SOM is progressively less capable of modeling changes due to its decreasing learning parameters, the PLSOM suffers from the fact that it also uses an estimation of the input space diameter, which is also changing, to compute learning parameters.
Evolution of the UbiSOM over the Clouds data stream. Maps at a \(t\) = 10,000; b \(t\) = 80,000; c \(t\) = 130,000, and; d \(t\) = 190,000
Final maps for the Online SOM and PLSOM over the Clouds data stream a Online SOM; b PLSOM
As an additional example of the clustering of the UbiSOM through exploratory analysis, in Fig. 10 we illustrate the final map obtained for the Chain data stream and corresponding U-matrix [3], a color-scaled visualization that can be used here to correctly identify the two clusters. Warmer colors translate to higher distances between neurons, consequence of the vector projection, establishing borders between clusters.
Obtained UbiSOM for the Chain data stream and corresponding U-matrix. Two clear clusters can be derived from this visualization
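A U-matrix like the one in Fig. 10 can be derived from any trained codebook by averaging the distances between each prototype and its immediate grid neighbors; the following sketch assumes the row-major W and lattice dimensions used in the earlier sketches.

```python
import numpy as np

def u_matrix(W, width, height):
    """Average distance of each unit's prototype to its 4-connected grid neighbors."""
    P = W.reshape(width, height, -1)
    U = np.zeros((width, height))
    for i in range(width):
        for j in range(height):
            dists = [np.linalg.norm(P[i, j] - P[i + di, j + dj])
                     for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                     if 0 <= i + di < width and 0 <= j + dj < height]
            U[i, j] = np.mean(dists)
    return U   # higher values mark borders between clusters, as in the figure
```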
Exploratory analysis in real-time
A real world demonstration is achieved by applying the UbiSOM to the real-world Household electric power consumption data stream from the UCI repository [16], comprising \(2\,049\,280\) observations of seven different measurements (i.e., \(d=7\)), collected to the minute, over a span of 4 years. Only features regarding sensor values were used, namely global active power (kW), voltage (V), global intensity (A), global reactive power (kW) and sub-meterings for the kitchen, laundry room and heating (W/h). The Household data stream contains several drifts in the underlying distribution, given the nature of electric power consumption, and we believe these are the most challenging environments where UbiSOM can operate.
Here, we briefly present another visualization technique called component planes [3], which further motivates the application of UbiSOM to a non-stationary data stream. Component planes can be regarded as a "sliced" version of the SOM, showing the distribution of the different feature values in the map through a color scale. This visualization can be obtained at any point in time, providing a snapshot of the model for the present and recent past. Ultimately, one can take several snapshots and inspect the evolution of the underlying stream.
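Component planes are equally simple to obtain from the codebook: each feature is reshaped onto the lattice and shown with a color scale. The sketch below uses matplotlib and assumes the same W, lattice dimensions and a list of feature names; it is illustrative rather than the exact procedure used for Fig. 12.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_component_planes(W, width, height, feature_names):
    """One color-scaled plane per feature of the codebook."""
    planes = W.reshape(width, height, -1)
    n = planes.shape[2]
    fig, axes = plt.subplots(1, n, figsize=(3 * n, 3))
    for f, ax in enumerate(np.atleast_1d(axes)):
        im = ax.imshow(planes[:, :, f])
        ax.set_title(feature_names[f])
        ax.set_xticks([]); ax.set_yticks([])
        fig.colorbar(im, ax=ax, shrink=0.7)
    return fig
```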
Figure 11 depicts the last 5000 presented examples (\({\sim }3.5\) days) of the Household data stream, i.e., from \(t\) = \(495\,000\) to \(t\) = \(500\,000\), while Fig. 12 shows the component planes obtained at t = \(500\,000\) using the UbiSOM. For illustration purposes we discarded the global reactive power feature. These visualizations indicate correlated features, namely Global active power and Global intensity are strongly correlated (identical component planes), while exhibiting some degree of inverse correlation to Voltage. Since the UbiSOM is able to map the input space density, the component planes of the heating sensors indicate their relative overall usage right before that period of time, e.g., Heating has a high consumption approximately 2/3 of the time. Since this point in time concerns the month of December 2008, this seems self-explanatory.
Household data stream from \(t\) = \(495\,000\) to t = \(500\,000\)
UbiSOM obtained component planes at t = \(500\,000\) with the Household data stream
Component planes also show that Global active power has its highest values when Kitchen (Sub_metering_1) and Heating (Sub_metering_3) are active at the same time; the overlap of higher values for Laundry room (Sub_metering_2) and Kitchen (Sub_metering_1) is low, indicating that they are not used very often at the same time. All these empirical inductions from the exploratory analysis of the component planes seem correct when looking at the plotted data in Fig. 11, and highlight the visualization strengths of UbiSOM with streaming data.
This paper presented the improved version of the ubiquitous self-organizing map (UbiSOM), a variant tailored for real-time exploratory analysis over data streams. Based on literature review and the conducted experiments, it is the first SOM algorithm capable of learning stationary and non-stationary distributions, while maintaining the original SOM properties. It introduces a novel average neuron utility assessment metric in addition to the previously used average quantization error, both used in a drift function that measures the performance of the map over non-stationary data and allows for learning parameters to be estimated accordingly. Experiments show this is a reliable method to achieve the proposed goal and the assessment metrics proved fairly robust. The UbiSOM outperforms current SOM algorithms in stationary and non-stationary data streams.
The real-time exploratory analysis capabilities of the UbiSOM are, in our opinion, extremely relevant to a large set of domains. Besides cluster analysis, the component-plane based exploratory analysis of the Household data stream exemplifies the relevancy of the proposed algorithm. This points to a particularly useful usage of UbiSOM in many practical applications, e.g., with high social value, including health monitoring, powering a greener economy in smart cities or the financial domain. Coincidentally, ongoing work is targeting the financial domain to model the relationships between a wide variety of asset prices for portfolio selection and to signal changes in the model over time as an alert mechanism. In parallel, we continue conducting research with distributed air quality sensor data in Portugal.
Kohonen T. Self-organized formation of topologically correct feature maps. Biol Cybern. 1982;43(1):59–69.
Pöllä M, Honkela T, Kohonen T. Bibliography of self-organizing map (som) papers: 2002–2005 addendum. Neural Computing Surveys. 2009.
Ultsch A, Herrmann L. The architecture of emergent self-organizing maps to reduce projection errors. In: Verleysen M, editor. Proceedings of the European Symposium on Artificial Neural Networks (ESANN 2005); 2005. pp. 1–6.
Ultsch A. Self organizing neural networks perform different from statistical k-means clustering. In: Proceedings of GfKl '95. 1995.
Silva B, Marques NC. Ubiquitous self-organizing map: learning concept-drifting data streams. New contributions in information systems and technologies. Advances in Intelligent Systems and Computing: Springer; 2015. p. 713–22.
Aggarwal CC. Data streams: models and algorithms, vol. 31. Springer; 2007.
Gama J, Rodrigues PP, Spinosa EJ, de Carvalho ACPLF. Knowledge discovery from data streams. Chapman and Hall/CRC Boca Raton. 2010.
Kohonen T. Self-organizing maps, vol 30. New York: Springer; 2001.
Fritzke B. A self-organizing network that can follow non-stationary distributions. In: Artificial Neural Networks—ICANN 97. Springer. 1997. p. 613–618
Deng D, Kasabov N. ESOM: an algorithm to evolve self-organizing maps from on-line data streams. In: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, vol. 6. IEEE Computer Society; 2000. p. 6003.
Deng D, Kasabov N. On-line pattern analysis by evolving self-organizing maps. Neurocomputing. 2003;51:87–103.
Furao S, Hasegawa O. An incremental network for on-line unsupervised classification and topology learning. Neural Netw. 2006;19(1):90–106.
Furao S, Ogura T, Hasegawa O. An enhanced self-organizing incremental neural network for online unsupervised learning. Neural Netw. 2007;20(8):893–903.
Berglund E. Improved plsom algorithm. Appl Intell. 2010;32(1):122–30.
Rougier N, Boniface Y. Dynamic self-organising map. Neurocomputing. 2011;74(11):1840–7.
Bache K, Lichman M. UCI machine learning repository. 2013. http://archive.ics.uci.edu/ml.
BS is the principal researcher for the work proposed in this article. His contributions include the underlying idea, background investigation, initial drafting of the article, and results implementation. NCM supervised the research and played a pivotal role in writing the article. Both authors read and approved the final manuscript.
The research of BS was partially funded by Fundação para a Ciência e Tecnologia with the Ph.D. scholarship SFRH/BD/49723/2009. The authors would also like to thank Project VeedMind, funded by QREN, SI IDT 38662.
Bruno Silva: DSI/ESTSetúbal, Instituto Politécnico de Setúbal, Campus do IPS, Estefanilha, 2914-761, Setúbal, Portugal
Nuno Cavalheiro Marques: NOVA Laboratory for Computer Science and Informatics, DI/FCT, Universidade Nova de Lisboa, Monte da Caparica, Portugal
Correspondence to Bruno Silva.
Self-organizing maps
Non-stationary data
Exploratory analysis
Sensor data | CommonCrawl |
"\\begin{document}\n\n\\title{Optimizing microwave photodetection: Input-Output theory}\n\n\\author{(...TRUNCATED) | arXiv |
"\\begin{document}\n\n\\vspace*{-.8in} \\begin{center} {\\LARGE\\em On the Continuity of Bounded Wea(...TRUNCATED) | arXiv |
"Semilinear elliptic system with boundary singularity\nDCDS Home\nGlobal well-posedness and long tim(...TRUNCATED) | CommonCrawl |
"\\begin{definition}[Definition:Imperial/Volume/Gill]\nThe '''gill''' is an imperial unit of volume.(...TRUNCATED) | ProofWiki |
"Twistor space\n\nIn mathematics and theoretical physics (especially twistor theory), twistor space (...TRUNCATED) | Wikipedia |