{"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Improved Bayesian Regret Bounds for Thompson Sampling in Reinforcement Learning\n\n Ahmadreza Moradipari\\({}^{*}\\)\\({}^{\\dagger}\\) Mohammad Pedramfar\\({}^{*}\\) Modjtaba Shokrian Zini\\({}^{*}\\)\n\n\\({}^{\\ddagger}\\)\n\n&Vaneet Aggarwal\n\n\\({}^{\\ddagger}\\)Pardue University, West Lafayette, IN, USA, vaneet@purdue.eduThis work was supported in part by the National Science Foundation under grant CCF-2149588 and Cisco, Inc.\n\n###### Abstract\n\nIn this paper, we prove the first Bayesian regret bounds for Thompson Sampling in reinforcement learning in a multitude of settings. We simplify the learning problem using a discrete set of surrogate environments, and present a refined analysis of the information ratio using posterior consistency. This leads to an upper bound of order \\(\\widetilde{O}(H\\sqrt{d_{l_{1}}T})\\) in the time inhomogeneous reinforcement learning problem where \\(H\\) is the episode length and \\(d_{l_{1}}\\) is the Kolmogorov \\(l_{1}-\\)dimension of the space of environments. 
We then find concrete bounds of \\(d_{l_{1}}\\) in a variety of settings, such as tabular, linear and finite mixtures, and discuss how our results are either the first of their kind or improve the state-of-the-art.\n\n## 1 Introduction\n\nReinforcement Learning (RL) is a sequential decision-making problem in which an agent interacts with an unknown environment typically modeled as a Markov Decision Process (MDP) [38, 8]. The goal of the agent is to maximize its expected cumulative reward. This problem has a variety of applications, including robotics, game playing, resource management, and medical treatments. The key challenge in RL is to balance the so-called exploration-exploitation trade-off efficiently: exploring unseen state-action pairs to gain more knowledge about the unknown environment or exploiting the current knowledge to maximize the expected cumulative reward. Two efficient approaches have been developed to control this trade-off: _optimism in the face of uncertainty_ (OFU) and _Thompson Sampling_ (TS) (or Posterior Sampling (PS)). OFU constructs a confidence set of statistically plausible MDPs that includes the true MDP with high probability and plays an optimistic policy according to the MDP with maximum gain from this set [5, 40]. TS samples a statistically plausible MDP from a posterior distribution and plays the optimal policy of the sampled MDP [29, 31]. In this work, we focus on the latter, and by combining an information theoretical approach first introduced by [36] with analysis based on posterior consistency tools, we prove state-of-the-art Bayesian regret bounds in a variety of settings.\n\nIn this paper, we start by defining the Bayesian RL problem, where transition and reward functions are Bayesian and time inhomogeneous. The Bayesian RL problem we consider is more comprehensive than in previous works, as we allow for both Bayesian transitions and Bayesian rewards, and do not make any assumption on their individual priors. To simplify the learning problem, we utilize the notion of surrogate environments, a discretization of the environment space whose learning task and TS regret serve as a proxy for those of the main problem. The construction of the surrogate environments was first introduced by [18] with an incorrect proof, which is fixed in our work by defining the surrogate environments through an optimization. Of main importance is the size of this new environment space. The Bayesian regret decomposes into the product of two terms, one being the cumulative mutual information of the environment and history traversed by the policy. By the well-known entropy estimation of the mutual information, this significant factor in the regret is connected to the \\(l_{1}-\\)dimensions (\\(d_{l_{1}}\\)) of the transition and reward function spaces, which can be more succinctly interpreted as the \\(l_{1}-\\)dimension \\(d_{l_{1}}\\) of the environment space. The latter is in turn estimated by the size of the space of surrogate environments.\n\nThe information ratio, representing the trade-off of exploration/exploitation, is the other significant term in the decomposition of the TS Bayesian regret. In an improvement over [18], our novel analysis of this ratio, based on posterior consistency tools, shows that this trade-off is bounded by \\(H^{3/2}\\), where \\(H\\) is the episode length. 
This bound is general and independent of the dimension of the transition/reward function space at each step, which is a key factor behind the advantage of our regret bound, such as the \\(\\sqrt{SA}\\) advantage in the tabular case compared to [18], or the lack of any restriction on the prior (e.g., Dirichlet prior) compared to [31]. Following a further refined approach, we finally estimate the TS Bayesian regret to be \\(\\widetilde{O}(\\lambda\\sqrt{d_{l_{1}}T})\\) for large enough \\(T\\) in the time inhomogeneous setting. Here, a new term 'value diameter' \\(\\lambda\\), which is the average difference of the optimal value functions at different states, is used in bounding the information ratio, where instead of \\(H^{3/2}\\), we have the smaller term \\(\\lambda H^{1/2}\\). Bounding the information ratio with \\(\\lambda\\) is a conceptual contribution of our work, which shows that the ratio is bounded by a _value-dependent_ term, which is in nature different from \\(H\\) but always \\(\\leq H+1\\). Further, there exists another bound for \\(\\lambda\\); in environments where states are reachable from one another in \\(D\\) steps, we have \\(\\lambda\\leq D+1\\). In 'well-connected' MDPs, one could have \\(D\\ll H\\), implying an improvement over the \\(H^{3/2}\\) information ratio bound.\n\nOur generic bound is abstract in terms of \\(d_{l_{1}}\\), so we estimate it in more explicit terms for useful applications. [18] have bounded \\(d_{l_{1}}\\) in the tabular and linear case without formalizing this notion, and while for tabular MDPs, \\(d_{l_{1}}\\) was bounded by \\(SAH\\), for linear MDPs with feature space dimension \\(d_{f}\\), we investigate their claim of the bound \\(d_{f}H\\). As detailed in Appendix G, we show a counterexample to their analysis, and we manage to find a correct estimate in this setting. We also introduce finite mixtures MDPs and are the first to prove a TS Bayesian regret of order \\(\\widetilde{O}(\\lambda\\sqrt{HmT})\\), where \\(m\\) is the number of mixtures.\n\nLastly, we note that our regret bound of order \\(\\widetilde{O}(\\lambda\\sqrt{d_{l_{1}}T})\\) is the first in the general nonlinear time inhomogeneous Bayesian RL setting for TS, and generalizing [31, Conj. 1], we conjecture it to be optimal if \\(\\lambda\\) can be replaced by \\(\\widetilde{O}(\\sqrt{H})\\).\n\nRelated work.Since the introduction of information ratio by [35, 36], a new line of research has emerged to provide tighter regret bounds for TS. The general approach involves factoring the Bayesian regret into two components: an information ratio that captures the trade-off between optimal action selection and information gain, and a cumulative information gain term that depends on the target environment and the history of previous observations. Then, both components are bounded separately using information theoretic tools.\n\nIn the bandit setting, this analysis has been used to bound Bayesian regret for TS [14, 9], as well as that of a new algorithm called information-directed sampling (IDS) [35, 26, 23, 19, 20]. This analysis has also been used in partial monitoring [25, 24] and RL with a specific Dirichlet prior and additional assumptions [28, 27] or when the true environment is too complicated to learn [4]. More recently, [18] studied the Bayesian regret of TS in RL without any prior assumptions for tabular MDP. 
This is the closest work to our paper and we discuss our generalization in detail in Section 5.\n\nThe Bayesian tabular MDP case has also been studied with the additional Dirichlet prior assumption in [31], where they achieve a regret bound matching ours. In an independent approach, the first non-linear Bayesian RL model was considered by [16] with a regret bound of \\(dH^{3/2}T^{1/2}\\) where \\(d\\) is a notion of dimension of their model, but their results were limited to Gaussian process settings with linear kernels. Finally, [11] considered general non-linear Bayesian RL models and introduced an algorithm that obtains \\(dH^{1+\\alpha/2}T^{1-\\alpha/2}\\) where \\(\\alpha\\) is a tuning parameter and \\(d\\) is the dimension of \\(\\mathcal{S}\\times\\mathcal{A}\\times\\mathcal{S}\\).\n\nIt is worth noting that there is another line of work that incorporates confidence regions into TS to achieve Bayesian regret bounds that can match the best possible frequentist regret bounds by UCB in both bandit settings [35] and RL [31; 30; 32; 12]. However, this technique often results in a sub-optimal Bayesian regret, as the best bound known for UCB itself is not optimal.\n\nWhile our work's emphasis is on theoretical guarantees for TS, we discuss here the experiments using this algorithm. Previous works on PSRL [35; 26; 23; 20; 31] come with extensive experiments on TS (and/or its variants), and discussions on computational efficiency of PSRL. In particular, experiments in [31] support the assertion that \"PSRL dramatically outperforms existing algorithms based on OFU\". In addition, PSRL with oracle access has been shown to be the most performant, esp. when compared to recent OFU based UCBVI/UCBVI-B, or even variants of PSRL such as Optimistic PSRL [39; Fig. 1.3]. However, an important limitation in experiments is the need for oracle access to an optimal policy, and that can not be always satisfied efficiently. Nevertheless, clever engineering can make TS work even in large scale Deep RL. Indeed, for general RL settings, the recent work [37] shows how to implement TS in Deep RL on the Atari benchmark and concludes that \"Posterior Sampling Deep RL (PSDRL) significantly outperforms previous state-of-the-art randomized value function approaches, its natural model-free counterparts, while being competitive with a state-of-the-art (model-based) reinforcement learning method in both sample efficiency and computational efficiency\". In summary, experiments in the literature provide enough support for the empirical performance of TS.\n\n## 2 Preliminaries\n\n### Finite-horizon MDP\n\nWe follow the literature's conventions in our notation and terminology to avoid confusion when comparing results. 
The environment is a tuple \\(\\mathcal{E}=(\\mathcal{S},\\mu_{\\mathcal{S}},\\mathcal{A},\\mu_{\\mathcal{A}},H,\\{ P_{h}\\}_{h=1}^{H},\\{r_{h}\\}_{h=1}^{H})\\), where \\(\\mathcal{S}\\) is the topological measurable state space, \\(\\mathcal{A}\\) is the topological measurable action space, \\(\\mu_{\\mathcal{S}}\\) and \\(\\mu_{\\mathcal{A}}\\) are base probability measures on \\(\\mathcal{S}\\) and \\(\\mathcal{A}\\) respectively, \\(H\\) is the episode length, \\(P_{h}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{\\mathcal{S},\\mu_{\\mathcal{ S}}}\\) is the transition probability kernel, and \\(r_{h}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{[0,1],\\mathrm{Lebesgue}}\\) is the reward function, where we fix the convention \\(r(s,a):=\\mathbb{E}_{x}[r(x|s,a)]=\\int_{0}^{1}xr(x|s,a)\\,\\mathrm{d}x\\) as we mostly deal with its mean value. Notice that \\(\\Delta_{X,\\mu}\\) is the set of probability distributions over \\(X\\) that are absolutely continuous with respect to \\(\\mu\\). We will use \\(\\Delta_{X}\\) when the base measure is clear from the context. We assume \\(\\mathcal{S}\\), \\(\\mathcal{A}\\) are known and deterministic while the transition probability kernel and reward are unknown and random. Throughout the paper, the implicit dependence of \\(P_{h}\\) and \\(r_{h}\\) on \\(\\mathcal{E}\\) should be clear from the context.\n\nLet \\(\\Theta_{h}^{P}\\) be the topological function space of \\(P_{h}\\) and \\(\\Theta^{P}=\\Theta_{1}^{P}\\times\\cdots\\times\\Theta_{H}^{P}\\) be the full function space. The space \\(\\Theta_{h}^{P}\\) is assumed to be separable and equipped with prior probability measure \\(\\rho_{h}^{P}\\) yielding the product prior probability measure \\(\\rho^{P}=\\rho_{1}^{P}\\otimes\\cdots\\otimes\\rho_{H}^{P}\\) for \\(\\Theta^{P}\\). The exact same definition with similar notations \\(\\Theta_{h}^{R},\\rho_{h}^{R},\\rho^{R},\\Theta^{R}\\) applies for the reward function. Notice the explicit assumption of time inhomogeneity in these definitions, with all 'layers' \\(h\\) being independent. The two sets define the set of all environments parametrized by \\(\\Theta=\\Theta_{1}\\times\\cdots\\times\\Theta_{H}\\) where \\(\\Theta_{h}=\\Theta_{h}^{P}\\times\\Theta_{h}^{R}\\). Note that the prior is assumed to be known to the learner. This setting implies that an environment \\(\\mathcal{E}\\) sampled according to the prior \\(\\rho=\\rho^{P}\\otimes\\rho^{R}\\) is essentially determined by its transition and reward functions pair \\(\\{(P_{h},r_{h})\\}_{h=1}^{H}\\). 
We simplify the notation to view \\(\\Theta\\) as the set of all environments, i.e., saying\n\n\\begin{table}\n\\begin{tabular}{|c|c|c|c|c|} \\hline Reference & Tabular & Linear & General & Comments \\\\ \\hline\n[29] & \\(\\sqrt{H^{3}S^{2}AL}\\) & - & - & - \\\\ \\hline\n[30] & - & - & \\(L^{*}\\sqrt{d_{K}d_{E}HL}\\) & \n\\begin{tabular}{c} Uses Eluder dimension \\\\ Lipschitz assumption \\\\ \\end{tabular} \\\\ \\hline\n[31] & \\(\\sqrt{H^{3}SAL}\\) & - & - & Dirichlet prior \\\\ \\hline\n[28] & \\(\\sqrt{H^{3}SAL}\\) & - & - & Assumptions on prior \\\\\n**[12]** & \\(L^{*}\\sqrt{H^{3}S^{2}A^{2}L}\\) & - & \\(L^{*}\\gamma\\sqrt{HL}\\) & \n\\begin{tabular}{c} Assumptions on regularity \\& noise \\\\ Lipschitz assumption \\\\ \\end{tabular} \\\\ \\hline \\hline\n[18] & \\(\\sqrt{H^{4}S^{2}A^{2}L}\\) & - & - & - \\\\ \\hline This paper & \\(\\lambda\\sqrt{H^{2}SAL}\\) & \\(\\lambda\\sqrt{d_{l_{1}}HL}\\) & \\(\\lambda\\sqrt{d_{l_{1}}HL}\\) & \n\\begin{tabular}{c} Assumptions 1 \\& 2 \\\\ Holds in the limit \\(L\\rightarrow\\infty\\) \\\\ \\end{tabular} \\\\ \\hline \\end{tabular}\n\\end{table}\nTable 1: Bayesian regret bounds for TS (i.e. PSRL)\\(\\mathcal{E}\\in\\Theta\\) should be viewed as \\(\\{(P_{h},r_{h})\\}_{h=1}^{H}\\in\\Theta\\). The space of all possible real-valued functions \\(\\{(P_{h},r_{h})\\}_{h=1}^{H}\\) has a natural vector space structure. Therefore it is meaningful to discuss the notion of the convex combination of environments. We assume that \\(\\Theta\\) is a convex subspace of the space of all possible environments. This assumption is not restrictive, since we may replace any environment space with its convex hull. Note that we do not assume that the support of the prior is convex.\n\n_Remark 1_.: The case of joint prior may be of interest, but to our knowledge all prior works also take \\(\\rho^{P},\\rho^{R}\\) to be independent.\n\nAgent, policy and history.An agent starts at an initial state \\(s^{\\ell}_{1}\\), which is fixed for all episodes \\(\\ell\\). It observes a state \\(s^{\\ell}_{h}\\) at layer \\(h\\) episode \\(\\ell\\), takes action \\(a^{\\ell}_{h}\\), and receives reward \\(r^{\\ell}_{h}\\). The environment changes to the next random state \\(s^{\\ell}_{h+1}\\) with probability \\(P_{h}(s^{\\ell}_{h+1}|s^{\\ell}_{h},a^{\\ell}_{h})\\). The agent stops acting at \\(s_{H+1}\\) and the environment is reset to its initial state.\n\nWe define \\(\\mathcal{H}_{\\ell,h}\\) as the history \\((s^{\\ell}_{1},a^{\\ell}_{1},r^{\\ell}_{1},\\ldots,s^{\\ell}_{h},a^{\\ell}_{h},r^{ \\ell}_{h})\\). Denote by \\(\\mathcal{D}_{\\ell}=\\left(\\mathcal{H}_{1,H},\\ldots,\\mathcal{H}_{\\ell-1,H}\\right)\\) the history up to episode \\(\\ell\\), where \\(\\mathcal{D}_{1}:=\\emptyset\\). Finally, let \\(\\Omega_{h}=\\prod_{i=1}^{h}(\\mathcal{S}\\times\\mathcal{A}\\times[0,1])\\) be the set of all possible histories up to layer \\(h\\).\n\nA policy \\(\\pi\\) is represented by stochastic maps \\((\\pi_{1},\\ldots,\\pi_{H})\\) where each \\(\\pi_{h}:\\Omega_{h-1}\\times\\mathcal{S}\\rightarrow\\Delta_{\\mathcal{A},\\mu_{ \\mathcal{A}}}\\). 
Let \\(\\Pi_{S}\\) denote the entire stationary policy class, stationary meaning a dependence only on the current state and layer and let \\(\\Pi\\subseteq\\Pi_{S}\\).\n\nValue and state occupancy functions.Define the value function \\(V^{\\mathcal{E}}_{h,\\pi}\\) as the value of the policy \\(\\pi\\) interacting with \\(\\mathcal{E}\\) at layer \\(h\\):\n\n\\[V^{\\mathcal{E}}_{h,\\pi}(s):=\\mathbb{E}^{\\mathcal{E}}_{\\pi}\\left[\\sum_{h^{\\prime }=h}^{H}r_{h^{\\prime}}(s_{h^{\\prime}},a_{h^{\\prime}})\\bigg{|}s_{h}=s\\right]\\,, \\tag{1}\\]\n\nwhere \\(\\mathbb{E}^{\\mathcal{E}}_{\\pi}\\) denotes the expectation over the trajectory under policy, transition, and reward functions \\(\\pi,P_{h},r_{h}\\). The value function at step \\(H+1\\) is set to null, \\(V^{\\mathcal{E}}_{H+1,\\pi}(\\cdot):=0\\). We assume there is a measurable function \\(\\pi^{*}_{\\mathcal{E}}:\\Theta\\rightarrow\\Pi\\) such that \\(V^{\\mathcal{E}}_{h,\\pi^{*}_{\\mathcal{E}}}(s)=\\max_{\\pi\\in\\Pi}V^{\\mathcal{E}}_{ h,\\pi}(s),\\ \\forall s\\in\\mathcal{S},h\\in[H]\\). The optimal policy \\(\\pi^{*}\\) is a function of \\(\\mathcal{E}\\), making it a random variable in the Bayesian setting. Lastly, let the _state-action occupancy probability measure_ be \\(\\mathbb{P}^{\\mathcal{E}}_{\\pi}(s_{h}=s,a_{h}=a)\\), also known as the state occupancy measure under policy \\(\\pi\\) and environment \\(\\mathcal{E}\\). It follows from the definitions that this measure is absolutely continuous with respect to \\(\\mu_{\\mathcal{S}\\times\\mathcal{A}}:=\\mu_{\\mathcal{S}}\\times\\mu_{\\mathcal{A}}\\). Let \\(d^{\\mathcal{E}}_{h,\\pi}(s,a)\\) denote the Radon-Nikodym derivative so that we have \\(d^{\\mathcal{E}}_{h,\\pi}(s,a)\\,\\mathrm{d}\\mu_{\\mathcal{S}\\times\\mathcal{A}}= \\mathrm{d}\\mathbb{P}^{\\mathcal{E}}_{\\pi}(s_{h}=s,a_{h}=a)\\). We will assume throughout the paper that this density \\(d^{\\mathcal{E}}_{h,\\pi}(s,a)\\) is measurable and upper bounded for all \\(\\pi,\\mathcal{E},s,a,h\\). The upper bound is a reasonable assumption, and it happens trivially in the tabular case (\\(d^{\\mathcal{E}}_{h,\\pi}(s,a)\\leq SA\\)). This also happens, e.g., when one assumes that the maps \\((\\mathcal{E},s,a,s^{\\prime},h)\\mapsto P^{\\mathcal{E}}_{h}(s^{\\prime}|s,a)\\) and \\((\\pi,s,a,h)\\mapsto\\pi_{h}(a|s)\\) are continuous and \\(\\Theta\\), \\(\\mathcal{S}\\), \\(\\mathcal{A}\\) and the set of all optimal policies (as a subset of \\(\\Pi\\)) are compact.\n\n### Bayesian regret\n\nWe formulate the expected regret over \\(L\\) episodes and \\(T=LH\\) total steps in an environment \\(\\mathcal{E}\\) as\n\n\\[\\mathfrak{R}_{L}(\\mathcal{E},\\pi)=\\mathbb{E}\\left[\\sum_{\\ell=1}^{L}\\left(V^{ \\mathcal{E}}_{1,\\pi^{\\mathcal{E}}_{\\mathcal{E}}}(s^{\\ell}_{1})-V^{\\mathcal{E}}_ {1,\\pi^{\\mathcal{E}}}(s^{\\ell}_{1})\\right)\\right]\\,, \\tag{2}\\]\n\nwhere the expectation is over the randomness of \\(\\pi=\\{\\pi^{\\ell}\\}_{\\ell}\\). The Bayesian regret is \\(\\mathfrak{B}\\mathfrak{R}_{L}(\\pi)=\\mathbb{E}[\\mathfrak{R}_{L}(\\mathcal{E},\\pi)]\\). For Thompson Sampling (TS), the algorithm selects the optimal policy of a given sample \\(\\mathcal{E}_{\\ell}\\) picked from the posterior \\(\\mathcal{E}_{\\ell}\\sim\\mathbb{P}(\\mathcal{E}\\in\\cdot|\\mathcal{D}_{\\ell})\\):\n\n\\[\\pi^{\\ell}_{\\text{TS}}=\\text{argmax}_{\\pi\\in\\Pi}V^{\\mathcal{E}_{\\ell}}_{1,\\pi}(s ^{\\ell}_{1})\\,. 
\\tag{3}\\]\n\nImportantly, the law of TS aligns with the posterior, i.e., \\(\\mathbb{P}(\\mathcal{E}|\\mathcal{D}_{\\ell})=\\mathbb{P}(\\pi^{\\ell}_{\\text{TS}}= \\pi^{*}_{\\mathcal{E}}|\\mathcal{D}_{\\ell})\\).\n\n_Remark 2_.: Note that \\(\\mathbb{P}(\\pi^{\\ell}_{\\text{TS}}=\\pi^{*}_{\\mathcal{E}}|\\mathcal{D}_{\\ell})\\) is a probability for a specific measure on the space of optimal policies. To ensure that \\(\\int_{\\Pi}\\mathbb{P}(\\pi^{*}|\\mathcal{D}_{\\ell})\\mathrm{d}\\rho_{\\Pi^{*}}=1\\), we need an appropriate measure \\(\\rho_{\\Pi^{*}}\\) on \\(\\Pi^{*}\\). Given the law of TS, the natural choice for this measure is the push-forward of the prior measure \\(\\rho\\) under the map \\(star:\\Theta\\rightarrow\\Pi^{*}\\), where \\(star(\\mathcal{E})=\\pi^{*}_{\\mathcal{E}}\\).\n\n### Notations\n\nFor Bayesian RL, conditional expressions involving a given history \\(\\mathcal{D}_{\\ell}\\) are widely used. We adopt the notation in [18] to refer to such conditionals; let \\(\\mathbb{P}_{\\ell}(\\cdot):=\\mathbb{P}(\\cdot|\\mathcal{D}_{\\ell})\\), \\(\\mathbb{E}_{\\ell}[\\cdot]:=\\mathbb{E}[\\cdot|\\mathcal{D}_{\\ell}]\\). We can rewrite the Bayesian regret as\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi)=\\sum_{\\ell=1}^{L}\\mathbb{E}\\left[\\mathbb{E}_ {\\ell}\\left[V^{\\mathcal{E}}_{1,\\pi^{\\mathcal{E}}_{\\ell}}(s^{\\ell}_{1})-V^{ \\mathcal{E}}_{1,\\pi}(s^{\\ell}_{1})\\right]\\right] \\tag{4}\\]\n\nand define the conditional mutual information \\(\\mathbb{I}_{\\ell}(X;Y):=D_{\\text{KL}}(\\mathbb{P}((X,Y)\\in\\cdot|\\mathcal{D}_{ \\ell})||\\mathbb{P}(X\\in\\cdot|\\mathcal{D}_{\\ell})\\otimes\\mathbb{P}(Y\\in\\cdot| \\mathcal{D}_{\\ell}))\\). For a random variable \\(\\chi\\) and random policy \\(\\pi\\), the following will be involved in the information ratio:\n\n\\[\\mathbb{I}_{\\ell}^{\\pi}(\\chi;\\mathcal{H}_{\\ell,h}):=\\mathbb{I}_{\\ell}(\\chi; \\mathcal{H}_{\\ell,h}|\\pi)=\\mathbb{E}_{\\pi}[D_{\\text{KL}}(\\mathbb{P}_{\\ell}(( \\chi,\\mathcal{H}_{\\ell,h})\\in\\cdot|\\pi)||\\mathbb{P}_{\\ell}(\\chi\\in\\cdot|\\pi) \\otimes\\mathbb{P}_{\\ell}(\\mathcal{H}_{\\ell,h}\\in\\cdot|\\pi))]\\,, \\tag{5}\\]\n\nNote that \\(\\mathbb{E}[\\mathbb{I}_{\\ell}(X;Y)]=\\mathbb{I}(X;Y|\\mathcal{D}_{\\ell})\\). To clarify, \\(\\mathbb{P}_{\\ell}(\\mathcal{H}_{\\ell,h}\\in\\cdot|\\pi)\\) is the probability of \\(\\mathcal{H}_{\\ell,h}\\) being generated under \\(\\pi\\) within some environment. Given that the histories under consideration are generated by the TS algorithm, they are always generated in the true environment \\(\\mathcal{E}\\) under an optimal policy \\(\\pi^{*}_{\\mathcal{E}^{\\prime}}\\). For \\(\\pi=\\pi^{\\ell}_{\\text{TS}}\\), this can be computed as \\(\\mathbb{P}_{\\ell}(\\mathcal{H}_{\\ell,h}|\\pi)=\\int_{\\mathcal{E}}P(\\mathcal{H}_{ \\ell,h}|\\pi,\\mathcal{E})\\,\\mathrm{d}\\mathbb{P}_{\\ell}(\\mathcal{E})\\), where \\(P(\\mathcal{H}_{\\ell,h}|\\pi,\\mathcal{E})\\) is an expression in terms of transition and reward functions of \\(\\mathcal{E}\\) and \\(\\pi\\).\n\nFinally, we define \\(\\bar{\\mathcal{E}}_{\\ell}\\) as the mean MDP where \\(P^{\\bar{\\mathcal{E}}_{\\ell}}_{h}(\\cdot|s,a)=\\mathbb{E}_{\\ell}[P^{\\bar{ \\mathcal{E}}}_{h}(\\cdot|s,a)]\\) is the mean of posterior measure, and similarly for \\(r^{\\bar{\\mathcal{E}}_{\\ell}}_{h}(\\cdot|s,a)=\\mathbb{E}_{\\ell}[r^{\\bar{ \\mathcal{E}}}_{h}(\\cdot|s,a)]\\). 
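\n\nTo make the TS step in Eq. (3) concrete, below is a minimal sketch of one episode of posterior sampling in a finite (tabular) instantiation of this setup; the array-based posterior sampler, the environment-step helper, and the backward-induction routine are assumptions made only for this illustration and are not part of the formal construction.\n\n```python\nimport numpy as np\n\ndef optimal_policy(P, r, H):\n    # Backward induction on a sampled MDP: P has shape (H, S, A, S), r has shape (H, S, A).\n    _, S, A, _ = P.shape\n    V = np.zeros((H + 1, S))\n    pi = np.zeros((H, S), dtype=int)\n    for h in reversed(range(H)):\n        Q = r[h] + P[h] @ V[h + 1]   # (S, A): immediate reward plus expected value-to-go\n        pi[h] = Q.argmax(axis=1)\n        V[h] = Q.max(axis=1)\n    return pi\n\ndef ts_episode(sample_posterior, env_step, s1, H):\n    # sample_posterior() -> (P, r): one environment drawn from the current posterior (assumed helper).\n    # env_step(h, s, a) -> (reward, next_state): one step in the true environment (assumed helper).\n    P, r = sample_posterior()        # sampled environment, distributed as the posterior given D_ell\n    pi = optimal_policy(P, r, H)     # its optimal policy, i.e., the TS policy of Eq. (3)\n    s, history = s1, []\n    for h in range(H):\n        a = pi[h, s]\n        rew, s_next = env_step(h, s, a)\n        history.append((s, a, rew))  # (s_h, a_h, r_h), appended to the data before the next posterior update\n        s = s_next\n    return history\n```\n\nReplacing `sample_posterior()` with the posterior mean would instead recover the mean MDP \\(\\bar{\\mathcal{E}}_{\\ell}\\) defined above.\n\n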
We note that under the independence assumption across layers, the same is given for the state-occupancy density \\(d^{\\bar{\\mathcal{E}}_{\\ell}}_{h,\\pi}=\\mathbb{E}_{\\ell}[d^{\\bar{\\mathcal{E}}}_{ h,\\pi}]\\).\n\n## 3 Bayesian RL problems\n\n**Definition 1**.: A Bayesian RL in this paper refers to the time-inhomogeneous finite-horizon MDP with independent priors on transition and reward functions, as described in Section 2.1.\n\nThe Bayesian RL _problem_ is the task of finding an algorithm \\(\\pi\\) with optimal Bayesian regret as defined in Eq. (4). Below we list the variations of this problem. A setting considered by most related works such as [31, 16] is the following:\n\n**Definition 2**.: The **time (reward) homogeneous** Bayesian RL refers to the Bayesian RL setting where the prior \\(\\rho^{P}\\) (\\(\\rho^{R}\\)) is over the space \\(\\Theta^{P}\\) (\\(\\Theta^{R}\\)) containing the single transition (reward) function \\(P\\) (\\(r\\)) defining \\(\\mathcal{E}\\), i.e., all layers have the same transition (reward) functions.\n\n**Definition 3**.: The **tabular** Bayesian RL is a Bayesian RL where \\(\\mathcal{S},\\mathcal{A}\\) are finite sets.\n\n**Definition 4** (Linear MDP [41, 22]).: Let \\(\\phi^{P}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\mathbb{R}^{d^{P}_{\\ell}}, \\phi^{R}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\mathbb{R}^{d^{R}_{\\ell}}\\) be feature maps with bounded norm \\(\\|\\phi^{P}(s,a)\\|_{2},\\|\\phi^{R}(s,a)\\|_{2}\\leq 1\\). The **linear** Bayesian RL is a Bayesian RL where for any \\(\\mathcal{E}=\\{(P^{\\mathcal{E}}_{h},r^{\\mathcal{E}}_{h})\\}_{h=1}^{H}\\in\\Theta\\), there exists vector-valued maps \\(\\psi^{P,\\mathcal{E}}_{h}(s),\\psi^{R,\\mathcal{E}}_{h}(s)\\) with bounded \\(l_{2}-\\)norm such that for any \\((s,a)\\in\\mathcal{S}\\times\\mathcal{A}\\),\n\n\\[P^{\\mathcal{E}}_{h}(\\cdot|s,a)=\\langle\\phi^{P}(s,a),\\psi^{P,\\mathcal{E}}_{h}( \\cdot)\\rangle\\,,\\;\\;r^{\\mathcal{E}}_{h}(\\cdot|s,a)=\\langle\\phi^{R}(s,a), \\psi^{R,\\mathcal{E}}_{h}(\\cdot)\\rangle \\tag{6}\\]\n\nA restricted version of the finite mixtures called linear mixture was first considered in [6] in the frequentist setting. Here, we consider the general setting.\n\n**Definition 5**.: The **finite mixtures** Bayesian RL is a Bayesian RL where for any \\(h\\in[H]\\) there exists fixed conditional distributions \\(\\{Z^{P}_{h,i}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{S}\\}_{i=1}^{m^{P}_ {h}}\\) and \\(\\{Z^{R}_{h,i}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{[0,1]}\\}_{i=1}^{m^{ R}_{h}}\\), such that for any environment \\(\\mathcal{E}\\) given by \\(\\{(P^{\\mathcal{E}}_{h},r^{\\mathcal{E}}_{h})\\}_{h=1}^{H}\\), there exists parametrized probability distributions \\(\\mathbf{a}^{P,\\mathcal{E}}_{h}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{m^{ P}_{h}},\\mathbf{a}^{R,\\mathcal{E}}_{h}:\\mathcal{S}\\times\\mathcal{A}\\rightarrow\\Delta_{m^{ R}_{h}}\\) such that\n\n\\[P^{\\mathcal{E}}_{h}(\\cdot|s,a)=\\sum_{i=1}^{m^{P}_{h}}a^{P,\\mathcal{E}}_{h,i}(s,a )Z^{P}_{h,i}(\\cdot|s,a),\\;\\;r^{\\mathcal{E}}_{h}(\\cdot|s,a)=\\sum_{i=1}^{m^{R}_{h }}a^{R,\\mathcal{E}}_{h,i}(s,a)Z^{R}_{h,i}(\\cdot|s,a) \\tag{7}\\]Surrogate learning\n\nNext, we define the discretized surrogate learning problem, and bound the size of the surrogate environments space, a significant term in the regret. To do so, we need to first define the Kolmogorov dimension of a set of parametrized distributions, esp. working out the case of \\(l_{1}-\\)distance. 
In the definitions below, we implicitly assume any required minimal measurability assumptions on the involved sets.\n\n**Definition 6**.: Let \\(\\mathcal{F}\\) be a set of \\(\\mathcal{O}-\\)parametrized distributions \\(P:\\mathcal{O}\\to\\Delta(\\mathcal{S})\\) over a set \\(\\mathcal{S}\\), where both \\(\\mathcal{O},\\mathcal{S}\\) are measurable. Let \\(\\mathcal{M}(\\cdot,\\cdot):\\mathcal{F}\\times\\mathcal{F}\\to\\mathbb{R}^{\\geq 0}\\) be a _distance_, i.e., \\(\\mathcal{M}(P,Q)\\geq 0\\) with \\(\\mathcal{M}(P,Q)=0\\Leftrightarrow P=Q\\). Then its right \\(\\varepsilon-\\)covering number is the size \\(K_{\\mathcal{M}}(\\varepsilon)\\) of the smallest set \\(\\mathcal{C}_{\\mathcal{M}}(\\varepsilon)=\\{P_{1},\\ldots,P_{K_{\\mathcal{M}}(\\varepsilon)}\\}\\subset\\mathcal{F}\\) such that\n\n\\[\\forall P\\in\\mathcal{F},\\ \\exists P_{j}\\in\\mathcal{C}_{\\mathcal{M}}(\\varepsilon):\\ \\mathcal{M}(P,P_{j})\\leq\\varepsilon\\,. \\tag{8}\\]\n\nThe potential asymmetry of \\(\\mathcal{M}\\) (e.g., KL-divergence) requires the notion of left/right covering number. The right covering number will be the default, so covering number will always refer to that.\n\n**Definition 7**.: Let \\(d_{\\mathcal{M}}(\\varepsilon)=\\log(K_{\\mathcal{M}}(\\varepsilon))\\). Define the Kolmogorov \\(\\mathcal{M}-\\)dimension \\(d_{\\mathcal{M}}\\) of \\(\\mathcal{F}\\) as\n\n\\[d_{\\mathcal{M}}=\\limsup_{\\varepsilon\\to 0}\\frac{d_{\\mathcal{M}}(\\varepsilon)}{\\log(\\frac{1}{\\varepsilon})}. \\tag{9}\\]\n\nFor \\(l_{1}(P,Q):=\\sup_{o\\in\\mathcal{O}}||P(\\cdot|o)-Q(\\cdot|o)||_{1}\\), we apply Definition 6 to the sets \\(\\Theta_{h}^{P},\\Theta_{h}^{R}\\) with \\(\\mathcal{O}=\\mathcal{S}\\times\\mathcal{A}\\), and denote the respective covering numbers by \\(L_{h}^{P}(\\varepsilon),L_{h}^{R}(\\varepsilon)\\), corresponding to covering sets \\(\\mathcal{C}_{h}^{P}(\\varepsilon),\\mathcal{C}_{h}^{R}(\\varepsilon)\\). Similarly, we apply Eq. (9) and denote the corresponding \\(l_{1}-\\)dimensions by \\(d_{l_{1},h}^{P}(\\varepsilon),d_{l_{1},h}^{R}(\\varepsilon),d_{l_{1},h}^{P},d_{l_{1},h}^{R}\\), and set \\(d_{l_{1}}^{P}:=\\sum_{h}d_{l_{1},h}^{P},d_{l_{1}}^{R}:=\\sum_{h}d_{l_{1},h}^{R}\\). The sums \\(d_{l_{1},h}:=d_{l_{1},h}^{P}+d_{l_{1},h}^{R}\\) and \\(d_{l_{1}}:=d_{l_{1}}^{P}+d_{l_{1}}^{R}\\) can be interpreted as the \\(l_{1}-\\)dimension of \\(\\Theta_{h}\\) and \\(\\Theta\\), i.e., the environment space.\n\n_Remark 3_.: We can also apply this framework to the KL-divergence, by \\(\\mathcal{M}_{\\mathrm{KL}}(P,Q):=\\sup_{o\\in\\mathcal{O}}D_{\\mathrm{KL}}(P(\\cdot|o)||Q(\\cdot|o))\\). This was implicitly used by [18] to prove their regret bound in the tabular case. Note that Pinsker's lemma (Lemma 9) implies that the KL-divergence is larger than the squared total variation distance, and the latter is trivially larger than the \\(l_{1}\\) distance. Therefore, the \\(l_{1}-\\)dimension is smaller than \\(d_{\\mathcal{M}_{\\mathrm{KL}}}\\), allowing for tighter regret bounds.\n\nWe now revisit the definition of \\(\\varepsilon-\\)value partitions and show their existence is guaranteed by finite \\(l_{1}-\\)covering numbers. 
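\n\nBefore turning to the \\(\\varepsilon-\\)value partitions just mentioned, a small numerical illustration of Definitions 6 and 7 may be helpful; the two-outcome distribution family, its grid discretization, and the greedy cover construction below are assumptions made only for this example and are not part of the paper's formal development.\n\n```python\nimport numpy as np\n\ndef l1_dist(p, q):\n    # l1 distance of Definition 6 with a single conditioning value o: ||P(.|o) - Q(.|o)||_1.\n    return np.abs(p - q).sum()\n\ndef greedy_right_cover(family, eps, dist):\n    # Greedy construction of a right eps-cover (Eq. (8)): every member of `family`\n    # ends up within distance eps of some element of the returned cover.\n    cover = []\n    for p in family:\n        if not any(dist(p, c) <= eps for c in cover):\n            cover.append(p)\n    return cover\n\n# Illustrative family: distributions (theta, 1 - theta) on two outcomes, theta in [0, 1].\nfamily = [np.array([t, 1.0 - t]) for t in np.linspace(0.0, 1.0, 2001)]\nfor eps in [0.2, 0.1, 0.05, 0.02]:\n    K = len(greedy_right_cover(family, eps, l1_dist))\n    print(eps, K, np.log(K) / np.log(1.0 / eps))  # the ratio of Eq. (9) tends to d_{l1} = 1\n```\n\nThe same construction, applied per layer to \\(\\Theta_{h}^{P}\\) and \\(\\Theta_{h}^{R}\\), is what the covering numbers \\(L_{h}^{P}(\\varepsilon),L_{h}^{R}(\\varepsilon)\\) above formalize.\n\n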
These partitions are the origins of surrogate environments.\n\n**Definition 8**.: Given \\(\\varepsilon>0\\), an \\(\\varepsilon-\\)value partition for a Bayesian RL problem is a partition \\(\\{\\Theta_{k}\\}_{k=1}^{K}\\) over \\(\\Theta\\) such that for any \\(k\\in[K]\\) and \\(\\mathcal{E},\\mathcal{E}^{\\prime}\\in\\Theta_{k}\\),\n\n\\[V_{1,\\pi_{\\mathcal{E}}^{\\prime}}^{\\mathcal{E}}(s_{1}^{\\ell})-V_{1,\\pi_{ \\mathcal{E}}^{\\prime}}^{\\mathcal{E}^{\\prime}}(s_{1}^{\\ell})\\leq\\varepsilon\\,. \\tag{10}\\]\n\nA _layered_\\(\\varepsilon-\\)value partition is one where the transition functions are independent over layers after conditioning on \\(k\\). Throughout this paper, we will only consider layered \\(\\varepsilon-\\)value partition. We define \\(K_{\\mathrm{surr}}(\\varepsilon)\\) as the minimum \\(K\\) for which there exists a layered \\(\\varepsilon-\\)value partition.\n\nInspired by Eq. (9), we define the surrogate dimension as \\(d_{\\mathrm{surr}}=\\limsup_{\\varepsilon\\to 0}\\frac{K_{\\mathrm{surr}}( \\varepsilon)}{\\log(1/\\varepsilon)}\\).\n\n**Lemma 1**.: _Given a Bayesian RL, we have \\(K_{\\mathrm{surr}}(\\varepsilon)\\leq\\prod_{h}L_{h}^{P}(\\varepsilon/(2H)^{2}) \\times L_{h}^{R}(\\varepsilon/(4H))\\). This implies \\(d_{\\mathrm{surr}}\\leq d_{l_{1}}\\)._\n\nThe above is proved in Appendix B. It is hard to find \\(d_{\\mathrm{surr}}\\), but one can estimate \\(d_{l_{1}}\\), and according to the above, this acts as a proxy for \\(K_{\\mathrm{surr}}\\). This is useful as the regret relates to \\(K_{\\mathrm{surr}}\\). But to show this, we need to construct _surrogate environments_ inside each partition, and show that learning those is almost equivalent to the original problem. Let \\(\\zeta\\) be a discrete random variable taking values in \\(\\{1,\\cdots,K_{\\mathrm{surr}}(\\varepsilon)\\}\\) that indicates the partition \\(\\mathcal{E}\\) lies in, such that \\(\\zeta=k\\) if and only if \\(\\mathcal{E}\\in\\Theta_{k}\\).\n\n**Lemma 2**.: _For any \\(\\varepsilon-\\)value partition and any \\(\\ell\\in[L]\\), there are random environments \\(\\tilde{\\mathcal{E}}_{\\ell}^{*}\\in\\Theta\\) with their laws only depending on \\(\\zeta,\\mathcal{D}_{\\ell}\\), such that_\n\n\\[\\mathbb{E}_{\\ell}\\left[V_{1,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}}(s_{1}^{\\ell} )-V_{1,\\pi_{\\mathcal{E}}^{\\prime}}^{\\mathcal{E}}(s_{1}^{\\ell})\\right]- \\mathbb{E}_{\\ell}\\left[V_{1,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}^{*}}(s_{1}^{ \\ell})-V_{1,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}^{*}}(s_{1}^{\\ell})\\right]\\leq \\varepsilon\\,. \\tag{11}\\]_The expectation in both equations is over \\(\\mathcal{E}\\) and \\(\\pi_{\\mathcal{F}}^{\\ell}\\in\\{\\pi_{\\mathcal{E}^{\\prime}}^{*}\\}_{\\mathcal{E}^{\\prime }\\in\\Theta}\\), with both sampled independently \\(\\sim\\mathbb{P}_{\\ell}(\\cdot)\\), and the \\(K\\) different values of \\(\\tilde{\\mathcal{E}}_{t}^{*}\\). The second expectation over \\((\\tilde{\\mathcal{E}}_{t}^{*},\\mathcal{E})\\) is over pairs that are in the same partition, i.e., \\(\\tilde{\\mathcal{E}}_{\\ell}^{*},\\mathcal{E}\\) are independent only after conditioning on \\(\\zeta\\)._\n\nWe note that the proof in [18, App. B.1] contains the use of a lemma that does not apply to construct the law of the environment \\(\\tilde{\\mathcal{E}}_{\\ell}^{*}\\). 
More details are provided in Appendix C, where we find \\(\\tilde{\\mathcal{E}}_{\\ell}^{*}\\) by minimizing an expected value of \\(\\pi_{\\text{TS}}^{\\ell}\\).\n\n## 5 Bayesian regret bounds for Thompson Sampling\n\n### General Bayesian regret bound\n\nWe start by introducing the notion of value diameter.\n\n**Definition 9**.: Given the environment \\(\\mathcal{E}\\), its value diameter is defined as\n\n\\[\\lambda_{\\mathcal{E}}:=\\max_{1\\leq h\\leq H}(\\sup_{s}V_{h,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}}(s)-\\inf_{s}V_{h,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}}(s))+\\max_{1\\leq h\\leq H,s\\in\\mathcal{S},a\\in\\mathcal{A}}(r_{h}^{\\sup}(s,a)-r_{h}^{\\inf}(s,a)),\\]\n\nwhere \\(r_{h}^{\\sup}(s,a)\\) (and \\(r_{h}^{\\inf}(s,a)\\)) is the supremum (and infimum) of the set of rewards that are attainable under the distribution \\(r_{h}(s,a)\\) with non-zero probability. As a special case, if rewards are deterministic, then we have \\(r_{h}^{\\sup}(s,a)=r_{h}^{\\inf}(s,a)\\) for all \\(s,a\\). The (average) value diameter over \\(\\Theta\\) is denoted by \\(\\lambda:=\\mathbb{E}_{\\mathcal{E}\\sim\\rho}[\\lambda_{\\mathcal{E}}^{2}]^{1/2}\\).\n\nAs the value function is between \\(0\\) and \\(H\\), we have \\(\\lambda_{\\mathcal{E}}\\leq H+1\\) implying \\(\\lambda\\leq H+1\\). Note that the value diameter is closely related to the notion of diameter commonly defined in finite RL problems. Strictly speaking, for a time-homogeneous RL, it is straightforward to see that the value diameter is bounded from above by one plus the diameter [33].\n\nWe now discuss the assumptions surrounding our results. The main technical assumption of this paper is the existence of consistent estimators, which, as we will see in Appendix K, is closely related to the notion of posterior consistency:\n\n**Assumption 1**.: _There exists a strongly consistent estimator of the true environment given the history._\n\nRoughly speaking, we assume that with unlimited observations under TS, it is possible to find the true environment. For this assumption to fail, we need to have two environments that produce the same distribution over histories under TS and are therefore indistinguishable from the point of view of TS. The precise description of this assumption is detailed in Appendix K.\n\nAnother necessary technical assumption is that almost all optimal policies visit almost all state-action pairs in their respective environment.\n\n**Assumption 2**.: _For almost every environment \\(\\mathcal{E}\\in\\Theta\\) and almost every \\((s,a)\\in\\mathcal{S}\\times\\mathcal{A}\\) and every \\(h\\in[H]\\), we have_\n\n\\[d_{h,\\pi_{\\mathcal{E}}^{*}}^{\\mathcal{E}}(s,a)\\neq 0.\\]\n\nRecall that, for any environment \\(\\mathcal{E}\\in\\Theta\\), the policy \\(\\pi_{\\mathcal{E}}^{*}\\) is the optimal policy of \\(\\mathcal{E}\\) within the policy class \\(\\Pi\\). Therefore, one example of how the above assumption holds is when \\(\\Pi\\) is the set of \\(\\varepsilon\\)-greedy algorithms and transition functions of environments assign non-zero probability to every state. Under these assumptions, we discuss our main result and its corollaries.\n\n**Theorem 3**.: _Given a Bayesian RL problem, for all \\(\\varepsilon>0\\), we have_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq 2\\lambda\\sqrt{\\log(K_{\\text{surr}}(\\varepsilon))T}+L\\varepsilon+T_{0} \\tag{12}\\]\n\n_where \\(T_{0}\\) does not depend on \\(T\\). 
This can be further upper bounded by_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{d_ {l_{1}}T})\\,. \\tag{13}\\]\n\n_for large enough \\(T\\). Given a homogeneous \\(l_{1}\\) dimension \\(d_{\\mathrm{hom}}=d_{l_{1},h},\\forall h\\), this simplifies to_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{Hd _{\\mathrm{hom}}T})\\,. \\tag{14}\\]\n\n_Remark 4_.: For all regret bounds, we will replace \\(\\lambda\\leq H+1\\) to compare our result. For the case of homogeneous dimensions, we obtain \\(\\widetilde{O}(H^{3/2}\\sqrt{d_{\\mathrm{hom}}T})\\). Crucially, our main result shows a new conceptual understanding of the information ratio by bounding it by two terms of different nature: \\(H\\) and \\(\\lambda\\), where the latter can be bounded by either the largest diameter of the environments or \\(H\\).\n\n_Remark 5_.: Despite not impacting the asymptotics, the impact of \\(T_{0}\\) can be large depending on the structure of the RL problem, and could be dominant even for large \\(T\\)s in practice.\n\n_Remark 6_.: Considering time as a part of the state observation, one could apply this regret analysis to particular time-homogeneous settings. However, this mapping of time-inhomogeneous RLs to homogeneous ones is not surjective, hence the result above does not readily extend to time-homogeneous settings.\n\nWhile [16] were the first to consider a nonlinear Bayesian RL model, their bound is limited to the Gaussian process (with linear kernel) setting, while ours in the nonlinear time inhomogeneous setting makes no assumptions on the prior and is the first such bound. Our novel analysis allow us to upper bound the information ratio by \\(\\lambda\\sqrt{H}\\) instead of, for example \\(H^{3/2}\\sqrt{SA}\\) ([18]) in the tabular case, improving the regret bound by a square root relevant to the dimension \\(d\\) of the problem.\n\nThe detailed proof is given in Appendix D. Following [18], the regret (4) is rewritten using Lemma 2 to reduce the problem into its surrogate, and we use the well-known information-ratio trick by multiplying and dividing by the mutual information. We follow that with a Cauchy-Schwarz, summarized below\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}}) \\leq\\mathbb{E}\\left[\\sum_{\\ell=1}^{L}\\frac{\\mathbb{E}_{\\ell}\\left[ V_{1,\\pi_{\\ell}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{1}^{\\ell})-V_{1,\\pi_{\\text{TS}} ^{\\mathcal{E}_{\\epsilon}^{*}}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{1}^{\\ell}) \\right]}{\\sqrt{\\mathbb{I}_{\\ell}^{\\pi_{\\text{TS}}^{\\mathcal{E}_{\\epsilon}^{*} }}(\\tilde{\\mathcal{E}}_{\\ell}^{*};\\mathcal{H}_{\\epsilon,H})}}\\right]+L\\varepsilon \\tag{15}\\] \\[\\leq\\sqrt{\\mathbb{E}\\left[\\sum_{\\ell=1}^{L}\\frac{\\left(\\mathbb{E}_ {\\ell}\\left[V_{1,\\pi_{\\ell}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{1}^{\\ell})-V_{1, \\pi_{\\text{TS}}^{\\mathcal{E}_{\\epsilon}^{*}}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{ 1}^{\\ell})\\right]\\right)^{2}}{\\mathbb{I}_{\\ell}^{\\pi_{\\text{TS}}^{\\mathcal{E} _{\\epsilon}^{*}}}(\\tilde{\\mathcal{E}}_{\\ell}^{*};\\mathcal{H}_{\\epsilon,H})} \\right]}\\mathbb{E}\\left[\\sum_{\\ell=1}^{L}\\mathbb{I}_{\\ell}^{\\pi_{\\text{TS}}^{ \\mathcal{E}_{\\epsilon}^{*}}}(\\tilde{\\mathcal{E}}_{\\ell}^{*};\\mathcal{H}_{ \\epsilon,H})}\\right]+L\\varepsilon \\tag{16}\\]\n\nNote the cost \\(\\varepsilon\\) at each episode (Lemma 2) in the first inequality, yielding the overall error \\(L\\varepsilon\\). 
Then, we can bound the mutual information appearing in the regret term by \\(\\mathbb{E}\\left[\\sum_{\\ell=1}^{L}\\mathbb{I}_{\\ell}^{\\pi_{\\text{TS}}^{\\mathcal{ E}_{\\epsilon}^{*}}}(\\tilde{\\mathcal{E}}_{\\ell}^{*};\\mathcal{H}_{\\epsilon,H}) \\right]=I_{\\ell}^{\\pi_{\\text{TS}}^{\\mathcal{E}_{\\epsilon}^{*}}}(\\tilde{ \\mathcal{E}}_{\\ell}^{*};\\mathcal{D}_{\\ell})\\leq I_{\\ell}^{\\pi_{\\text{TS}}^{ \\mathcal{E}_{\\epsilon}^{*}}}(\\zeta;\\mathcal{D}_{\\ell})\\leq\\log(K_{\\text{surr}} (\\varepsilon))\\), where we used the mutual information chain rule, followed by data processing inequality to substitute \\(\\tilde{\\mathcal{E}}_{\\ell}^{*}\\rightarrow\\zeta\\), and finally used the trivial bound by the entropy. But the main novelty of our approach lies in our control of the first term\n\n\\[\\Gamma_{\\ell}(\\pi_{\\text{TS}}^{\\ell}):=\\frac{\\left(\\mathbb{E}_{\\ell}\\left[V_{1,\\pi_{\\ell}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{1}^{\\ell})-V_{1,\\pi_{\\text{TS}}^{ \\mathcal{E}_{\\epsilon}^{*}}}^{\\mathcal{E}_{\\epsilon}^{*}}(s_{1}^{\\ell})\\right] \\right)^{2}}{\\mathbb{I}_{\\ell}^{\\pi_{\\text{TS}}^{\\mathcal{E}_{\\epsilon}^{*}}}( \\tilde{\\mathcal{E}}_{\\ell}^{*};\\mathcal{H}_{\\epsilon,H})} \\tag{17}\\]\n\ncalled the information ratio. In our analysis, we have the following bound on its expectation.\n\n\\[\\mathbb{E}[\\Gamma_{\\ell}(\\pi_{\\text{TS}}^{\\ell})\\mid\\mathcal{E}_{0}]\\leq \\mathbb{E}\\left[\\sum_{h}\\int\\frac{\\mathbb{E}_{\\ell}\\left[(\\lambda_{\\mathcal{E} }d_{h,\\pi^{*}}^{\\mathcal{E}_{\\epsilon}^{\\prime}}(s,a))^{2}\\right]}{\\mathbb{E}_{ \\ell}\\left[d_{h,\\pi^{*}}^{\\mathcal{E}_{\\epsilon}}(s,a)\\right]}\\mu_{\\mathcal{S }\\times\\mathcal{A}}\\mid\\mathcal{E}_{0}\\right],\\]\n\nwhere the average is taken over all histories \\(\\mathcal{D}_{\\ell}\\) that are generated from running TS on the true environment \\(\\mathcal{E}_{0}\\), and we have introduced the smaller term \\(\\lambda_{\\mathcal{E}}\\) instead of \\(H\\) in [18]. While [18] essentially bound the above only in the tabular setting with \\(SAH^{3}\\), we manage to generally bound the above with a more precise bound using Doob's consistency theorem. Assumption 1 allows us to use Doob's consistency theorem to conclude that for almost every environment \\(\\mathcal{E}_{0}\\), almost every infinite sequence of histories \\((\\mathcal{D}_{\\ell})_{\\ell=1}^{\\infty}\\) sampled from \\(\\mathcal{E}_{0}\\), and every integrable function \\(f\\), the posterior mean \\(\\mathbb{E}_{\\ell}[f(\\mathcal{E})]=\\mathbb{E}[f(\\mathcal{E})\\mid\\mathcal{D}_{ \\ell}]\\) converges to \\(f(\\mathcal{E}_{0})\\). In particular, we conclude that \\(\\mathbb{E}[\\Gamma_{\\ell}(\\pi_{\\text{TS}}^{\\ell})\\mid\\mathcal{E}_{0}]\\) tends to \\(\\lambda_{\\mathcal{E}_{0}}^{2}H\\) in the limit, allowing us to claim that for large enough \\(\\ell\\), the expected information ratio \\(\\mathbb{E}[\\Gamma_{\\ell}(\\pi_{\\text{TS}}^{\\ell})]\\) is uniformly bounded by \\(2\\mathbb{E}[\\lambda_{\\mathcal{E}}^{2}]H=2\\lambda^{2}H\\). As there are \\(L\\) many such ratios, the two bounds together yield \\(2\\sqrt{\\lambda^{2}HL}\\cdot\\sqrt{\\log(K_{\\text{surr}}(\\varepsilon))}+L\\varepsilon\\). This bound is true for large enough \\(\\ell\\), giving the additional additive term \\(T_{0}\\) in the theorem. 
Since this term is additive, applying Lemma 1 to bound \\(\\log(K_{\\text{surr}}(\\varepsilon))\\), we have successfully shown the asymptotic behavior of the regret, independent of the prior, is of order \\(\\widetilde{O}(H\\sqrt{d_{l_{1}}T})\\).\n\n### Applications\n\nIn each application below, the challenge is to bound \\(d_{l_{1}}\\) using the specifics of the model, and except for the case of tabular Bayesian RL, such analysis has not been carried out rigorously. We formalize the corollaries and show they are state-of-the-art compared to the literature.\n\nTabular RL.The result below follows from Theorem 3; the main contribution comes from our new information ratio bound, followed by the estimate \\(\\widetilde{O}((\\frac{1}{\\varepsilon})^{SAH})\\) of \\(K_{\\mathrm{surr}}(\\varepsilon)\\) ([18]).\n\n**Corollary 4**.: _Given a tabular Bayesian RL problem, for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\mathsf{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{ HSAT})\\,, \\tag{18}\\]\n\n_where the polylogarithmic terms are explicitly in terms of \\(H,S,A,L\\)._\n\nWe observe that our result matches [31] when their result in the time homogeneous setting (Definition 2) is extended to time inhomogeneous. However, in that paper, the authors assume a Dirichlet based prior which we do not.\n\nLinear RL.A previous state-of-the-art \\(\\widetilde{O}(d_{f}H^{3/2}\\sqrt{T})\\) was claimed by [18] to hold for linear Bayesian RLs with deterministic reward. We note:\n\n* As in the previous cases, their proof in bounding their information ratio includes a factor of \\(d_{f}\\), which ours avoids.\n* We show that the proof bounding \\(K_{\\mathrm{surr}}(\\varepsilon)\\) in [18, App. B.4] is incorrect, starting with a wrong application of Cauchy-Schwarz and a wrong mutual information in their definition of information ratio. We provide counterexamples for the estimates found therein to substantiate our claim (see Appendix G.1).\n\nTo state our own corollary in this case, we need to define a few notions. Let \\(d^{f}_{l_{1}}=d^{P,f}_{l_{1}}+d^{R,f}_{l_{1}}\\) be the sum of the \\(l_{1}-\\)dimensions of the feature map space \\(\\{\\psi^{P,\\mathcal{E}}_{h}\\}_{\\mathcal{E}\\in\\Theta},\\{\\psi^{R,\\mathcal{E}}_{h}\\} _{\\mathcal{E}\\in\\Theta}\\) where the \\(l_{1}-\\)distance between feature maps is defined as \\(l_{1}(\\psi^{\\mathcal{E}}_{h},\\psi^{\\mathcal{E}^{\\prime}}_{h})=\\int_{s}\\|\\psi^{ \\mathcal{E}}_{h}-\\psi^{\\mathcal{E}^{\\prime}}_{h}\\|_{1}\\mu_{\\mathcal{S}}\\). Our corollary also provides a concrete bound in the case of _mixture_ linear Bayesian RL where the feature maps are themselves a sum of finitely many **fixed** feature maps. This means for all \\(\\mathcal{E}\\in\\Theta\\), we have\n\n\\[\\psi^{P,\\mathcal{E}}_{h}=\\sum_{i=1}^{m_{h}^{P}}a^{P,\\mathcal{E}}_{h,i}\\Psi^{P} _{h,i}(s),\\ \\ \\psi^{R,\\mathcal{E}}_{h}=\\sum_{i=1}^{m_{h}^{R}}a^{R,\\mathcal{E}}_{h,i} \\Psi^{R}_{h,i}(s) \\tag{19}\\]\n\nwhere \\(\\{\\Psi^{P}_{h,i}(s)\\}_{i=1}^{m_{h}^{P}},\\{\\Psi^{R}_{h,i}(s)\\}_{i=1}^{m_{h}^{R}}\\) are finitely many fixed feature maps and \\(\\forall\\mathcal{E},h:\\sum_{i}|a^{P,\\mathcal{E}}_{h,i}|^{2},\\sum_{i}|a^{R, \\mathcal{E}}_{h,i}|^{2}\\leq C_{a}\\) for some constant \\(C_{a}>0\\). Let \\(M=M^{P}+M^{R}=\\sum_{h}m_{h}^{P}+\\sum_{h}m_{h}^{R}\\).\n\n**Corollary 5**.: _For a linear Bayesian RL, for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\mathsf{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{ d^{f}_{l_{1}}T}). 
\\tag{20}\\]\n\n_Given a linear Bayesian RL with finitely many states and total feature space dimension \\(d_{f}=d^{P}_{f}+d^{R}_{f}\\), we have \\(d_{l_{1}}\\leq 2d_{f}HS\\), yielding for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\mathsf{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{Hd_{f}ST}). \\tag{21}\\]\n\n_Given a mixture linear Bayesian RL, for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\mathsf{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{MT})\\,. \\tag{22}\\]\n\nThe proof is given in Appendix G. The fact that \\(d_{l_{1}}\\) appears instead of \\(d_{f}\\) in the general bound is not counter-intuitive, as we should expect the complexity of the feature map space \\(\\{\\psi^{P,\\mathcal{E}}_{h}(s)\\}_{\\mathcal{E}\\in\\Theta,h\\in[H]},\\{\\psi^{R,\\mathcal{E}}_{h}(s)\\}_{\\mathcal{E}\\in\\Theta,h\\in[H]}\\) to play a role in the regret, especially as this space can be very complex and can model very different environments that cannot be grouped in the same \\(\\varepsilon-\\)value partition.\n\nTherefore, contrary to the claim made by [18], this complexity cannot be captured simply by \\(d_{f}\\), except maybe in degenerate cases, such as when \\(\\mathcal{S}\\) is finite, which is our second statement. More generally, if each feature map \\(\\psi^{P,\\mathcal{E}}_{h}(s),\\psi^{R,\\mathcal{E}}_{h}(s)\\) can be characterized with a vector of uniformly bounded norm \\(\\mathbf{a}^{P,\\mathcal{E}}_{h}\\in\\mathbb{R}^{m_{h}^{P}},\\mathbf{a}^{R,\\mathcal{E}}_{h}\\in\\mathbb{R}^{m_{h}^{R}}\\), then we can bound the regret in terms of \\(m_{h}^{P},m_{h}^{R}\\)s, as is done in Eq. (22) (the finite state case corresponds to \\(m_{h}^{P}=d^{P}_{f}S,m_{h}^{R}=d^{R}_{f}S\\)).\n\nFinite mixtures RL.To state our finite mixtures model result, we need to set the following notation. Let \\(d_{l_{1}}^{m}=d_{l_{1}}^{m,P}+d_{l_{1}}^{m,R}=\\sum_{h}d_{l_{1},h}^{m,P}+\\sum_{h}d_{l_{1},h}^{m,R}\\) correspond to the total \\(l_{1}-\\)dimension of the space of mixture coefficient maps \\(\\{\\mathbf{a}_{h}^{P,\\mathcal{E}}(s,a)\\}_{\\mathcal{E}\\in\\Theta},\\{\\mathbf{a}_{h}^{R,\\mathcal{E}}(s,a)\\}_{\\mathcal{E}\\in\\Theta}\\) with \\(l_{1}-\\)distance defined as \\(l_{1}(\\mathbf{a}_{h}^{\\mathcal{E}},\\mathbf{a}_{h}^{\\mathcal{E}^{\\prime}})=\\sup_{s,a}\\|\\mathbf{a}_{h}^{\\mathcal{E}}(s,a)-\\mathbf{a}_{h}^{\\mathcal{E}^{\\prime}}(s,a)\\|_{1}\\). Define also the restricted finite mixtures model where \\(\\mathbf{a}_{h}^{P,\\mathcal{E}},\\mathbf{a}_{h}^{R,\\mathcal{E}}\\) are vectors in \\(\\mathbb{R}^{m_{h}^{P}},\\mathbb{R}^{m_{h}^{R}}\\) independent of \\((s,a)\\) and let \\(M=M^{P}+M^{R}=\\sum_{h}m_{h}^{P}+\\sum_{h}m_{h}^{R}\\).\n\n**Corollary 6**.: _Given a finite mixtures Bayesian RL problem, for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq\\widetilde{O}(\\lambda\\sqrt{d_{l_{1}}^{m}T}). \\tag{23}\\]\n\n_Assuming the restricted finite mixtures model, for large enough \\(T\\),_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq\\widetilde{O}\\left(\\lambda\\sqrt{MT}\\right)\\,, \\tag{24}\\]\n\n_which, given a uniform dimension \\(m=m_{h}^{P}=m_{h}^{R}\\), yields \\(\\widetilde{O}(\\lambda\\sqrt{HmT})\\)._\n\nWe prove the above in Appendix H, deriving it from our generic bound, after relating the \\(l_{1}-\\)dimension \\(d_{l_{1}}\\) of the environment space to that of the mixture coefficients. To the best of our knowledge, this is the first bound for finite mixtures Bayesian RL problems. 
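\n\nAs a concrete illustration of the finite mixtures structure in Definition 5 and of the restricted variant behind Eq. (24), the sketch below assembles a transition kernel from fixed basis kernels; the finite state-action space and the array shapes are assumptions made only for this example.\n\n```python\nimport numpy as np\n\ndef mixture_transition(a_coeff, Z_basis):\n    # Z_basis: fixed basis kernels Z_{h,i}(.|s,a), shape (m, S, A, S), each slice a probability vector.\n    # a_coeff: mixture weights a_{h,i}(s,a); shape (S, A, m) in the general model of Definition 5,\n    #          or shape (m,) in the restricted model, where the weights do not depend on (s, a).\n    # Returns the mixed kernel P_h(.|s,a) of shape (S, A, S).\n    if a_coeff.ndim == 1:\n        return np.einsum('i,isat->sat', a_coeff, Z_basis)\n    return np.einsum('sai,isat->sat', a_coeff, Z_basis)\n```\n\nIn the restricted model each layer is described by only \\(m_{h}^{P}+m_{h}^{R}\\) coefficients, which is, intuitively, why the bound in Eq. (24) scales with \\(M\\) rather than with the size of the state-action space.\n\n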
We note that in a previous work ([6]), a restricted version of finite mixtures, like in Eq. (24), was considered in the frequentist setting.\n\nWe finish this section by proposing the following conjecture, in line with [31, Conj. 1].\n\n**Conjecture 7**.: _For the Bayesian RL problem, the following is true and optimal for **all** \\(T\\):_\n\n\\[\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})\\leq O\\left(\\inf_{\\varepsilon>0}(\\sqrt{H\\log(K_{\\operatorname{surr}}(\\varepsilon))T}+L\\varepsilon)\\right)\\,, \\tag{25}\\]\n\n_where the constant factor is independent of the prior. This means there exists a Bayesian RL problem such that \\(\\mathfrak{B}\\mathfrak{R}_{L}(\\pi_{\\text{TS}})=\\widetilde{O}(\\sqrt{Hd_{\\operatorname{surr}}T})\\). All polylogarithmic terms are in terms of \\(H,d_{\\operatorname{surr}},T\\)._\n\nNote that the above coincides with the lower bound for the (model-based) time inhomogeneous frequentist setting; see, e.g., [21] for the proven lower bound for the tabular case. This is also \\(\\sqrt{H}\\) higher (this factor being baked into \\(d_{\\operatorname{surr}}\\)) than that of the time homogeneous frequentist setting, which is expected, according to [21, App. D]. Note that in this conjecture, the \\(\\lambda\\) in our bound is replaced by \\(\\sqrt{H}\\), and the conjecture is not for \\(T\\) large enough, but for all \\(T\\). Supporting this conjecture requires experiments where TS can be exactly implemented assuming access to an oracle which provides the optimal policy for a query environment. Simulations have been performed for the similar [31, Conj. 1] in the time homogeneous case. Our conjecture is similar but with the additional expected factor of \\(\\sqrt{H}\\) due to time inhomogeneity, thus their simulation also supports the above.\n\n## 6 Conclusions\n\nIn this paper, we have addressed the Bayesian Reinforcement Learning (RL) problem in the context of time inhomogeneous transition and reward functions. By considering both Bayesian transitions and Bayesian rewards without prior assumptions, we have extended the scope of previous works, making our formulation more comprehensive. To simplify the learning problem, we have introduced surrogate environments, which discretize the environment space. We have established a connection between the size of this new environment space and the \\(l_{1}\\)-dimensions of the transition and reward function spaces, providing insights into the \\(l_{1}\\)-dimension of the environment space denoted by \\(d_{l_{1}}\\). We have employed posterior consistency tools to analyze the information ratio, which captures the trade-off between exploration and exploitation. We conjecture that (at least a weakened version of) our posterior consistency assumption should hold in general, which is left for future work. Our analysis has resulted in a refined approach to estimate the Bayesian regret in Thompson Sampling (TS), yielding a regret bound of \\(\\widetilde{O}(\\lambda\\sqrt{d_{l_{1}}T})\\) for large enough time steps \\(T\\). The result is specialized to linear, tabular, and finite mixtures MDPs.\n\n**Limitations:** While the paper provides a generic asymptotic regret bound for TS in a generalized setup which improves the state-of-the-art results, finding lower bounds, esp. ones dependent on \\(\\lambda\\), is left open. In addition, the issue of prior misspecification is not discussed and left for future studies.\n\n## References\n\n* [1] Amirhesam Abedsoltan, Mikhail Belkin, and Parthe Pandit. Toward large kernel models. 
_arXiv preprint arXiv:2302.02605_, 2023.\n* [2] Alekh Agarwal, Yujia Jin, and Tong Zhang. VOQL: Towards optimal regret in model-free rl with nonlinear function approximation. _arXiv preprint arXiv:2212.06069_, 2022.\n* [3] Shipra Agrawal and Randy Jia. Optimistic posterior sampling for reinforcement learning: worst-case regret bounds. _Advances in Neural Information Processing Systems_, 30, 2017.\n* [4] Dilip Arumugam and Benjamin Van Roy. Deciding what to model: Value-equivalent sampling for reinforcement learning. 35:9024-9044, 2022.\n* [5] Peter Auer, Thomas Jaksch, and Ronald Ortner. Near-optimal regret bounds for reinforcement learning. _Advances in neural information processing systems_, 21, 2008.\n* [6] Alex Ayoub, Zeyu Jia, Csaba Szepesvari, Mengdi Wang, and Lin Yang. Model-based reinforcement learning with value-targeted regression. In _International Conference on Machine Learning_, pages 463-474. PMLR, 2020.\n* [7] Mohammad Gheshlaghi Azar, Ian Osband, and Remi Munos. Minimax regret bounds for reinforcement learning. In _International Conference on Machine Learning_, pages 263-272. PMLR, 2017.\n* [8] Dimitri Bertsekas and John N Tsitsiklis. _Neuro-dynamic programming_. Athena Scientific, 1996.\n* [9] Sebastien Bubeck and Mark Sellke. First-order bayesian regret analysis of thompson sampling. In _Algorithmic Learning Theory_, pages 196-233. PMLR, 2020.\n* [10] Qi Cai, Zhuoran Yang, Chi Jin, and Zhaoran Wang. Provably efficient exploration in policy optimization. In _International Conference on Machine Learning_, pages 1283-1294. PMLR, 2020.\n* [11] Souradip Chakraborty, Amrit Bedi, Alec Koppel, Pratap Tokekar, Furong Huang, and Dinesh Manocha. Posterior coreset construction with kernelized stein discrepancy for model-based reinforcement learning. In _NeurIPS 2022 Workshop on Score-Based Methods_, 2022.\n* [12] Sayak Ray Chowdhury and Aditya Gopalan. Online learning in kernelized markov decision processes. In _Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics_, pages 3197-3205. PMLR, 2019.\n* [13] Christoph Dann, Mehryar Mohri, Tong Zhang, and Julian Zimmert. A provably efficient model-free posterior sampling method for episodic reinforcement learning. _Advances in Neural Information Processing Systems_, 34:12040-12051, 2021.\n* [14] Shi Dong and Benjamin Van Roy. An information-theoretic analysis for thompson sampling with many actions. _Advances in Neural Information Processing Systems_, 31, 2018.\n* [15] Yaqi Duan, Mengdi Wang, and Martin J Wainwright. Optimal policy evaluation using kernel-based temporal difference methods. _arXiv preprint arXiv:2109.12002_, 2021.\n* [16] Ying Fan and Yifei Ming. Model-based reinforcement learning for continuous control with posterior sampling. In _International Conference on Machine Learning_, pages 3078-3087. PMLR, 2021.\n* [17] Subhashis Ghosal and Aad Van der Vaart. _Fundamentals of nonparametric Bayesian inference_, volume 44. Cambridge University Press, 2017.\n* [18] Botao Hao and Tor Lattimore. Regret bounds for information-directed reinforcement learning. _arXiv preprint arXiv:2206.04640_, 2022.\n* [19] Botao Hao, Tor Lattimore, and Wei Deng. Information directed sampling for sparse linear bandits. _Advances in Neural Information Processing Systems_, 34:16738-16750, 2021.\n\n* [20] Botao Hao, Tor Lattimore, and Chao Qin. Contextual information-directed sampling. In _International Conference on Machine Learning_, pages 8446-8464. 
PMLR, 2022.\n* [21] Chi Jin, Zeyuan Allen-Zhu, Sebastien Bubeck, and Michael I Jordan. Is q-learning provably efficient? _Advances in neural information processing systems_, 31, 2018.\n* [22] Chi Jin, Zhuoran Yang, Zhaoran Wang, and Michael I Jordan. Provably efficient reinforcement learning with linear function approximation. In _Conference on Learning Theory_, pages 2137-2143. PMLR, 2020.\n* [23] Johannes Kirschner, Tor Lattimore, Claire Vernade, and Csaba Szepesvari. Asymptotically optimal information-directed sampling. In _Conference on Learning Theory_, pages 2777-2821. PMLR, 2021.\n* [24] Tor Lattimore and Andras Gyorgy. Mirror descent and the information ratio. In _Conference on Learning Theory_, pages 2965-2992. PMLR, 2021.\n* [25] Tor Lattimore and Csaba Szepesvari. An information-theoretic approach to minimax regret in partial monitoring. In _Conference on Learning Theory_, pages 2111-2139. PMLR, 2019.\n* [26] Fang Liu, Swapna Buccapatnam, and Ness Shroff. Information directed sampling for stochastic bandits with graph feedback. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 32, 2018.\n* [27] Xiuyuan Lu. _Information-directed sampling for reinforcement learning_. Stanford University, 2020.\n* [28] Xiuyuan Lu and Benjamin Van Roy. Information-theoretic confidence bounds for reinforcement learning. _Advances in Neural Information Processing Systems_, 32, 2019.\n* [29] Ian Osband, Daniel Russo, and Benjamin Van Roy. (more) efficient reinforcement learning via posterior sampling. _Advances in Neural Information Processing Systems_, 26, 2013.\n* [30] Ian Osband and Benjamin Van Roy. Model-based reinforcement learning and the eluder dimension. _Advances in Neural Information Processing Systems_, 27, 2014.\n* [31] Ian Osband and Benjamin Van Roy. Why is posterior sampling better than optimism for reinforcement learning? In _International conference on machine learning_, pages 2701-2710. PMLR, 2017.\n* [32] Ian Osband, Benjamin Van Roy, Daniel J Russo, Zheng Wen, et al. Deep exploration via randomized value functions. _J. Mach. Learn. Res._, 20(124):1-62, 2019.\n* [33] Martin L Puterman. _Markov decision processes: discrete stochastic dynamic programming_. John Wiley & Sons, 2014.\n* [34] Daniel Russo. Worst-case regret bounds for exploration via randomized value functions. _Advances in Neural Information Processing Systems_, 32, 2019.\n* [35] Daniel Russo and Benjamin Van Roy. Learning to optimize via information-directed sampling. _Advances in Neural Information Processing Systems_, 27, 2014.\n* [36] Daniel Russo and Benjamin Van Roy. An information-theoretic analysis of thompson sampling. _The Journal of Machine Learning Research_, 17(1):2442-2471, 2016.\n* [37] Remo Sasso, Michelangelo Conserva, and Paulo Rauber. Posterior sampling for deep reinforcement learning. _arXiv preprint arXiv:2305.00477_, 2023.\n* [38] Richard S Sutton and Andrew G Barto. _Reinforcement learning: An introduction_. MIT press, 2018.\n* [39] Daniil Tiapkin, Denis Belomestny, Daniele Calandriello, Eric Moulines, Remi Munos, Alexey Naumov, Mark Rowland, Michal Valko, and Pierre Menard. Optimistic posterior sampling for reinforcement learning with few samples and tight guarantees. _Advances in Neural Information Processing Systems_, 35:10737-10751, 2022.\n\n* [40] Aristide Tossou, Debabrota Basu, and Christos Dimitrakakis. Near-optimal optimistic reinforcement learning using empirical bernstein inequalities. _arXiv preprint arXiv:1905.12425_, 2019.\n* [41] Lin Yang and Mengdi Wang. 
Sample-optimal parametric q-learning using linearly additive features. In _International Conference on Machine Learning_, pages 6995-7004. PMLR, 2019.\n* [42] Andrea Zanette, David Brandfonbrener, Emma Brunskill, Matteo Pirotta, and Alessandro Lazaric. Frequentist regret bounds for randomized least-squares value iteration. In _International Conference on Artificial Intelligence and Statistics_, pages 1954-1964. PMLR, 2020.\n* [43] Zihan Zhang, Yuan Zhou, and Xiangyang Ji. Almost optimal model-free reinforcement learningvia reference-advantage decomposition. _Advances in Neural Information Processing Systems_, 33:15198-15207, 2020.\n* [44] Dongruo Zhou, Quanquan Gu, and Csaba Szepesvari. Nearly minimax optimal reinforcement learning for linear mixture markov decision processes. In _Conference on Learning Theory_, pages 4532-4576. PMLR, 2021.", "response": "## Review\n\n### Summary\n\nThis paper investigates the performance of Thompson sampling in the context of Bayesian reinforcement learning, where both the transition and reward functions are sampled from a known prior distribution. It establishes Bayesian regret bounds applicable to various settings, including tabular, linear, and finite mixture MDPs. The analysis utilizes the Kolmogorov dimension, leading to results that match or improve upon existing state-of-the-art findings. While the general approach is valuable and opens up new avenues for exploration, there are concerns regarding the absence of lower bounds and the need for more explicit definitions of assumptions in theorems. Overall, the paper presents significant theoretical contributions to the understanding of Thompson sampling in reinforcement learning.\n\n### Strengths\n\n- The paper provides a comprehensive analysis of Thompson sampling in MDPs, claiming to offer the most general results to date.\n- It generalizes previous state-of-the-art regret bounds in both tabular and linear settings.\n- The writing is clear and easy to follow, with a thorough discussion of previous works.\n- The authors present proofs for their claims and provide corollaries for specialized settings.\n\n### Weaknesses\n\n- The paper lacks lower bounds to substantiate the tightness of the upper bounds.\n- Some theorem statements could be more explicit, particularly regarding assumptions.\n- The weak notion of Bayesian regret is seen as a limitation, given existing results in the frequentist setting.\n- There is no empirical validation of the derived bounds, limiting practical applicability.\n\n### Questions\n\n- Is the claim that the law of Thompson sampling aligns with the true posterior distribution valid under all conditions?\n- What is the relationship between the Kolmogorov dimension and the concept of covering dimensions?\n- Can the authors elaborate on how the proposed bounds compare to existing literature, especially in terms of frequentist regret?\n- What assumptions underlie the results, and how do they affect the generalizability of the findings?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good; the theoretical results are sound and well-supported, but some assumptions need clearer definitions.\n\n### Presentation\n\n**Score:** 2\n\n**Description:** 2 = fair; while the writing is mostly clear, the presentation would benefit from a more structured overview of results and additional clarity in some sections.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** 3 = good; the paper makes significant contributions to the theoretical understanding of Thompson sampling in 
reinforcement learning, though some limitations are noted.\n\n### Rating\n\n**Score:** 6\n\n**Description:** 6 = Weak Accept; the paper is technically solid and has moderate-to-high impact potential, but it needs further improvements, particularly regarding empirical validation.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The decision to accept is based on the originality and significance of the paper's contributions to Bayesian reinforcement learning. The theoretical results are strong, and the potential for high impact in the field is evident, though the absence of empirical results and lower bounds is a concern. Overall, the clarity and depth of the analysis provide a solid foundation for future work.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Group Robust Classification\n\nWithout Any Group Information\n\nChristos Tsirigotis\n\nUniversite de Montreal, Mila, ServiceNow Research &Joao Monteiro\n\nServiceNow Research &Pau Rodriguez\n\nServiceNow Research &David Vazquez\n\nServiceNow Research &Aaron Courville\n\nUniversite de Montreal, Mila, CIFAR CAI Chair\n\nWork done during internship at ServiceNow Research. Author correspondence at tsirigoc@mila.quebec.This work was also funded, in part, from A. Courville's Sony Research Award. 
Courville also acknowledges support from his Canada Research Chair.\n\n###### Abstract\n\nEmpirical risk minimization (ERM) is sensitive to spurious correlations in the training data, which poses a significant risk when deploying systems trained under this paradigm in high-stake applications. While the existing literature focuses on maximizing group-balanced or worst-group accuracy, estimating these accuracies is hindered by costly bias annotations. This study contends that current bias-unsupervised approaches to group robustness continue to rely on group information to achieve optimal performance. Firstly, these methods implicitly assume that all group combinations are represented during training. To illustrate this, we introduce a systematic generalization task on the MPI3D dataset and discover that current algorithms fail to improve the ERM baseline when combinations of observed attribute values are missing. Secondly, bias labels are still crucial for effective model selection, restricting the practicality of these methods in real-world scenarios. To address these limitations, we propose a revised methodology for training and validating debiased models in an entirely bias-unsupervised manner. We achieve this by employing pretrained self-supervised models to reliably extract bias information, which enables the integration of a logit adjustment training loss with our validation criterion. Our empirical analysis on synthetic and real-world tasks provides evidence that our approach overcomes the identified challenges and consistently enhances robust accuracy, attaining performance which is competitive with or outperforms that of state-of-the-art methods, which, conversely, rely on bias labels for validation.\n\n## 1 Introduction\n\nSupervised learning algorithms typically rely on the empirical risk minimization (ERM) paradigm - minimizing the average loss on a training set. The ERM paradigm operates under the assumption that the training data is a representative sample of the true data distribution [67]. Consequently, models that achieve low expected loss can still be _unfair_ when the model is tasked to predict outcomes for underrepresented groups [38, 10, 66], and prone to _rely on spurious correlations_ between target annotations and generative attributes of the training data [32, 62] that do not hold up under more general testing conditions. As automated predictors are deployed in failure-critical applications [24] or interact with society, where fairness must be guaranteed [13, 36], robustness and fairness become requirements that standard learning strategies do not satisfy.\n\nThese limitations have motivated the search for alternatives that perform uniformly across different data subgroups [69, 61] or, equivalently, that rely less on spurious features [2] or \"shortcuts\" [28]. Part of solving this problem includes making the correct assumptions about data, which should consider spurious statistical correlations due to bias in its generative process. A popular example of this issue is the _cow vs. camel_ classification problem [2].\n\nAs one might expect, pictures of cows very often contain a grass background, while camels are usually depicted in a desert. As such, a binary classifier that predicts whether there is grass in the background of an image could achieve a high prediction accuracy on top of natural images of the two animals. 
However, such a classifier would fail whenever the background changes from the typical occurrences.\n\nWhile progress has been made, existing methods to improve robustness on the least frequent attribute combinations (_e.g._, cow on sand), require knowledge about the bias attributes (bias-supervision) either during training [6, 61], or validation [57, 48, 50]. This limits their applicability in practical scenarios where bias annotations could be too costly or impossible to obtain. Even in cases where they can be obtained somewhat efficiently, human annotators might be biased themselves or sensitive annotations may not be readily available due to privacy concerns [76].\n\nMoreover, access to group information is rarely satisfiable on natural data, where a _curse of generative dimensionality_ implies that given the high number of attributes controlling the generative process, no realistic finite sample can cover all possible - exponential in number - combinations of such attributes. In fact, many group robustness algorithms that rely on data rebalancing or loss reweighting techniques [61, 48] assume that all group combinations are present during training. However, this condition is not satisfied in systematic generalization tasks, where certain data subgroups that are present in test data, are absent from training data. This motivates us to study the extent that training algorithms can successfully operate using group information implicitly present in the training dataset.\n\nSpecifically, in the case of these challenging systematically split datasets, we study the generalization of models on unseen combinations of attributes that were independently observed during training. This combinatorial type of generalization draws its name from the cognitive property of systematicity [26]. In deep learning, this question was introduced for the first time by Lake and Baroni [46], who studied the systematicity of RNN models in sequence prediction tasks. Here, we raise the same question in classification. To do so, we introduce a benchmark consisting of systematic splits from the MPI3D (thereby named sMPI3D) image dataset [29].In particular, we use the '_real_' split of MPI3D which consists of photographs of a robotic arm that has a colored rigid object attached to its end effector. The images are captured in a way that controls their generative attributes, such as the shape of the rigid object or the position of the robotic arm. We use it to ask the following question: given a'shape' classifier that has been trained on objects of all possible shapes and colors but only a subset of their combinations (e.g., red cubes and blue spheres), how would it perform for a new color-shape combination (e.g., blue cubes)? In this example, the 'color' attribute plays the role of a _bias_, obstructing view of all possible colors that cubes could have. In Appendix D.1 we describe in more detail the construction of the sMPI3D task, while in Figure 1(d) we illustrate an example of a systematic split.\n\nAs discussed above, existing state-of-the-art methods suffer from two limitations. The first one, as demonstrated by Asgari et al. [3, Table 1], is that the robust accuracy of many recent training algorithms, which do not demand any bias annotations during training, degrades severely when there is no access to bias labels during model selection. This highlights that robust algorithms should prescribe a way of performing bias-unsupervised validation, assuming the same access to i.i.d. 
data\n\nFigure 1: **Unsupervised Logit Adjustment (uLA). We train a linear classifier on top of an SSL pre-trained model to obtain biased predictions. These predictions are then leveraged to train a debiased model. No bias information is used during training or cross-validation.**\n\nresources as training. The second, revealed by our study on sMPI3D (Section 4.2), is that in most cases these methods fail to improve over the ERM baseline, which fails to systematically generalize.\n\nTo address these issues, our approach, summarized in Figure 1, is based on pretraining a base encoder using self-supervised learning [7] to extract a proxy for the missing bias labels and to provide an initialization point for finetuning a debiased model. Pretraining a proxy for the bias variable enables us to use this network for two purposes: first, to train a group-robust network and, second, to define a validation criterion for robust model selection. The debiasing training algorithm is based on the logit adjustment paradigm [54]. Our entirely bias-unsupervised methodology to group robustness using logit adjustment, which we call uLA, is able to compete with or outperform state-of-the-art counterparts, that otherwise utilize bias labels during model selection, in synthetic and real-world benchmarks. At the same time, it is the only method to consistently offer improvements over the ERM baseline in the sMPI3D systematic generalization task; thus effectively tackling the identified challenges about explicit or implicit use of group information.\n\n## 2 Preliminaries\n\n**Problem Formulation.** Consider a multi-class classification task, where \\(X\\subset\\mathcal{X}\\) is the input variable and \\(Y\\subset\\mathcal{Y}\\), the categorical target variable with \\(|\\mathcal{Y}|=K\\) classes. We have access to a dataset of observations \\(\\mathcal{D}\\coloneqq\\{(x_{n},y_{n})\\}_{n=1}^{N}\\) sampled i.i.d. from an underlying data distribution \\(p_{\\text{data}}(X,Y)\\) over \\(\\mathcal{X}\\times\\mathcal{Y}\\). The setting above may become problematic once we consider that the deployment data \\(\\mathcal{D}_{\\text{test}}\\) are sampled from a different testing distribution: \\(p_{\\text{test}}\\neq p_{\\text{data}}\\). In other words, we assume that there are two data generating processes; one which generates development data (\\(\\mathcal{D}\\) and \\(\\mathcal{D}_{\\text{valid}}\\), where \\(\\mathcal{D}_{\\text{valid}}\\) is used for validation) according to \\(p_{\\text{data}}\\), and one which generates deployment data (\\(\\mathcal{D}_{\\text{test}}\\)) according to \\(p_{\\text{test}}\\).\n\nIn further detail, we focus on a particular transfer learning problem from \\(p_{\\text{data}}\\) to \\(p_{\\text{test}}\\), which is due to a distribution shift in attributes which participate in the generative process of the input variable \\(X\\). Our study considers anti-causal prediction tasks for which the target variable \\(Y\\) is one of the generative attributes of \\(X\\), and \\(Z\\subset\\mathcal{Z}\\) another (possibly unobserved) categorical generative attribute with \\(|\\mathcal{Z}|=L\\) classes. \\(Z\\) is marginally independent to \\(Y\\) under \\(p_{\\text{test}}\\), but it might not be under \\(p_{\\text{data}}\\). For this reason, we say that the variables \\(Y\\) and \\(Z\\) are _spuriously correlated_ in training and validation data, and \\(Z\\) will also be referred to as the _bias attribute_. 
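To make this setup concrete, the following is a minimal synthetic sketch of a data-generating process in which \(Y\) and \(Z\) are spuriously correlated under \(p_{\text{data}}\) but independent under \(p_{\text{test}}\), in the spirit of the colored MNIST benchmark used in the experiments. All names and numbers here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sample_groups(n, num_classes, align_prob, rng):
    """Sample (y, z) pairs: with probability `align_prob` the bias z copies the
    label y (bias-aligned sample), otherwise z is drawn uniformly at random
    (potentially bias-conflicting). For simplicity, Z takes as many values as Y."""
    y = rng.integers(num_classes, size=n)
    aligned = rng.random(n) < align_prob
    z = np.where(aligned, y, rng.integers(num_classes, size=n))
    return y, z

rng = np.random.default_rng(0)
# Training data: Y and Z strongly correlated (e.g. 99% of samples are bias-aligned).
y_train, z_train = sample_groups(50_000, num_classes=10, align_prob=0.99, rng=rng)
# Test data: Y and Z independent, so every group (y, z) is equally represented.
y_test, z_test = sample_groups(10_000, num_classes=10, align_prob=0.0, rng=rng)
# An input x would then be rendered from (y, z), e.g. an image of digit y drawn in color z.
```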
Under this setting, a group \\((y,z)\\in[K]\\times[L]\\) is defined as a combination of target and bias attribute values, and group robustness can be formulated as a transfer learning problem from \\(p_{\\text{data}}\\) to \\(p_{\\text{test}}\\) under the following two assumptions relating the two joint distributions over \\(X\\), \\(Y\\) and \\(Z\\):\n\n\\[p_{\\text{test}}(y,z)\\propto\\mathbb{1}_{\\operatorname{Supp}p_{ \\text{data}}(Y)\\times\\operatorname{Supp}p_{\\text{data}}(Z)}(y,z), \\tag{1}\\] \\[\\text{and }p_{\\text{test}}(x|y,z)=p_{\\text{data}}(x|y,z), \\tag{2}\\]\n\nwhere \\(1_{S}(s)=1\\) if \\(s\\in S\\) else \\(0\\), the characteristic function of a set \\(S\\), and \\(\\operatorname{Supp}p\\) denotes the support set of a distribution \\(p\\). Relation 1 asserts that, during test time, the target and bias variables\n\nFigure 2: **Left**: Example tasks. Circled in dashed lines are samples from training split, exhibiting statistically major attribute groups. Those outside are example test samples, where all groups are equally considered. A classifier trained on a biased training set may misclassify a bias-conflicting test sample - recognizing a ‘red three’ as ‘zero’ (2a) or a ‘blue sphere’ as a ‘cone’ (2b) - or be unfair to the sensitive gender attribute when tasked to classify a facial attribute, e.g. hair color (2c). **Right**: An example of a systematic split. \\(C\\) is the number of color values per shape. ‘Crosses’ represent groups used to sample the training and validation splits, while ‘circles’ are entirely out-of-distribution.\n\nare distributed uniformly over the product of their respective marginal supports under \\(p_{\\text{data}}\\). In other words, all combinations of observed attributes values are considered equally. This also implies that \\(Y\\) and \\(Z\\) are marginally independent under \\(p_{\\text{test}}\\)3. On the other hand, relation 2 assumes the invariance of mechanism, which is an assumption typically found in _disentangled causal process_ literature [64]. These two assumptions underlie the group robustness literature, in that they are equivalent to the evaluation of classifiers under a popular robust performance criterion which is described below.\n\nFootnote 3: Generally, the variables, which are uniform over a support which factorizes into a cartesian product of their marginal supports, are independent: \\(c\\mathbb{1}_{\\mathcal{X}\\times\\mathcal{Y}}(x,y)=c\\mathbb{1}(x,y\\in\\mathcal{X} \\times\\mathcal{Y})=c\\mathbb{1}(x\\in\\mathcal{X})\\mathbb{1}(y\\in\\mathcal{Y})\\).\n\n**Performance Criteria.** Let \\(\\hat{y}(x;f)\\coloneqq\\operatorname*{argmax}_{y}f(x)_{y}\\) be the predictions of a scoring function \\(f:\\mathcal{X}\\rightarrow\\mathbb{R}^{K}\\). The accuracy of \\(f\\) under \\(p_{\\text{test}}\\) corresponds to the _group-balanced accuracy_\n\n\\[\\operatorname{Acc}_{Y|X}(f;p_{\\text{test}})=\\underset{x\\sim p_{\\text{data}}( \\cdot|y,z)}{\\mathbb{E}}\\mathbb{1}\\big{(}y=\\hat{y}(x;f)\\big{)}=\\frac{1}{KL} \\sum_{y,z}\\underset{x\\sim p_{\\text{data}}(\\cdot|y,z)}{\\mathbb{E}}\\mathbb{1} \\big{(}y=\\hat{y}(x;f)\\big{)}, \\tag{3}\\]\n\nwhich is a frequently used performance metric in literature [61, 48, 50]. Essentially, this performance criterion first separately computes the accuracy for samples \\(x\\) of each group \\((y,z)\\) and then averages them. The averaging operation per individual group accuracy directly stems from the uniformity assumption in \\(p_{\\text{test}}\\) (rel. 
1) and implements a group fairness notion: we care equally about performing well in all groups. Another popular group robustness criterion is the _worst-group accuracy_[61] which substitutes the average accuracy over individual groups with the minimum (worst) accuracy.\n\nAs described in Section 5, there exist different approaches to improve group fairness [61, 48, 50] that depend on knowing biases for cross-validation. Here we focus on logit adjustment techniques like the one proposed by Liu et al. [50], which co-train two models where one corrects the biases of the other. In the following sections, we show how, by decoupling the training of the biased model from the bias-corrected model, it is possible to obtain a proxy criterion that can be used for cross-validation without knowing the bias attribute.\n\n**Logit Adjustment** was originally developed as a technique for supervised learning under class-imbalanced [25, 53] or long-tailed [54] data. For clarity of presentation, we will assume for the moment that we have access to bias labels. We describe sLA, a bias-supervised training process with logit adjustment. Let \\(h_{\\theta}:\\mathcal{X}\\rightarrow\\mathbb{R}^{|\\mathcal{Y}|}\\) be the model that we want to train for group robustness, such as a neural network parameterized by \\(\\theta\\) implementing a function from samples in \\(\\mathcal{X}\\) to the unnormalized logit space of \\(Y\\). We then train by minimizing the average cross-entropy loss for the following _logit adjusted model_\n\n\\[p_{\\theta}(y|x,z)\\propto\\exp(h_{\\theta}(x)_{y}+\\log\\hat{p}_{\\text{data}}(y|z)). \\tag{4}\\]\n\nWe estimate the conditional \\(\\hat{p}_{\\text{data}}(y|z)\\) directly from the available training data, for example via empirical frequencies of the finite number of groups \\((y,z)\\in[K]\\times[L]\\). Finally, during inference, we cancel out the contribution of the logit bias term \\(\\hat{p}_{\\text{data}}(y|z)\\) and predict only according to the optimized neural network \\(h_{\\theta^{*}}\\).\n\nNote that, since \\(Y\\) is a categorical variable, \\(\\log\\hat{p}_{\\text{data}}(y|z)\\) takes non-positive values. For this reason, we can intuitively interpret logit adjustment as a soft masking operation for outputs of \\(h_{\\theta}\\) which are unlikely in the training data when we have observed \\(z\\). In this way, we account for the dependency of \\(Y\\) to \\(Z\\) which spuriously exists in the training distribution. By fitting the cross-entropy objective under logit adjustment, the network \\(h_{\\theta}\\) has to model the remaining relations for \\(Y\\mid X\\) that are not spurious since those are already accounted for. An expressive enough model class in Equation (4) can achieve this, assuming further that the likelihood ratio \\(\\frac{p_{\\text{data}}(x|y,z)}{p_{\\text{data}}(x|z)}=\\frac{p_{\\text{data}}(y|x,z)}{p_{\\text{data}}(y|z)}\\) is independent of \\(z\\). In appendix A, we derive sLA from first principles, bridging the gap between the well-studied application of this technique to the class-imbalance problem with its application to group robustness. Moreover, we demonstrate that sLA is a well-justified procedure by proving the following proposition.\n\n**Proposition 2.1** (sLA optimizes the group-balanced accuracy).: _Under the assumption that the hypothesis class \\(p_{\\theta}(y|x,z)\\) (eq. 
4) contains \\(p_{\\text{data}}(y|x,z)\\), the minimizer network \\(h_{\\theta^{*}}\\) of the cross-entropy loss maximizes a lower bound to the group-balanced accuracy \\(\\operatorname{Acc}_{Y\\mid X}(h_{\\theta^{*}};p_{\\text{test}})\\)._\n\n**Systematic Generalization.** We expect sLA models to systematically generalize, as they optimize for the transfer learning problem defined by Equations (1) and (2). As Figure 1(d) suggests, systematic generalization is just an extreme case of such problems, for which samples from some combinationshave \\(p_{\\text{data}}(y,z)=0\\) during training, however they populate the test set with \\(p_{\\text{test}}(y,z)>0\\). As a counterexample, the setting by Ahmed et al. [1] does not correspond to a systematic generalization task, as we define it. In what they refer to as systematic splits of colored MNIST, all possible combinations of color and digit are exposed to the model during its training, while in our sMPI3D some color and shape combinations are only revealed during test-time. This detail makes our setting significantly more challenging, as the conditions for applying importance sampling are not met, as \\(\\operatorname{Supp}p_{\\text{data}}(Y,Z)\\not\\supseteq\\operatorname{Supp}p_{ \\text{test}}(Y,Z)\\)[4, Chapter 5]. This means that a simple re-weighting of the per-sample loss with \\(\\frac{1}{p_{\\text{data}}(y,z)}\\), in order to estimate \\(\\mathbb{E}_{p_{\\text{test}}}[l_{\\text{ce}}(y,h_{\\theta}(x))]\\), is not appropriate for the task.\n\n**Self-Supervised Learning (SSL)** refers to a collection of methods [7] in unsupervised representation learning where unlabeled data provides the supervision by defining tasks where a model is asked to predict some part of the input from other parts, for example by contrasting independently transformed versions of data samples [14; 35; 15]. Methods like SimCLR [14], MoCo [35], BYOL [30] and Barlow Twins [73] provide solutions for low-data generalization, robustness, as well as transferability of learnt representations for image classification. Although self-supervised learning methods are generally more robust to distribution shifts than purely supervised methods [49], they can still be affected by significant shifts, for example in long-tailed data learning [5; 63]. In this work, we explore the limits and utility of SSL in group robust classification. An extended preliminary discussion about self-supervised algorithms can be found in Appendix B.\n\n## 3 ULA: Bias-unsupervised Logit Adjustment\n\nHere, we introduce bias-unsupervised logit adjustment (uLA), a logit correction approach which improves on the work of Liu et al. [50] by _removing dependency on explicit bias annotations both during training and validation_. For this reason, we create a proxy variable for the bias via the predictions of a pretrained network using SSL. The advantage of pretraining, over co-training a bias network [57; 17; 50], is two-fold: First, we can reuse the fixed bias proxy network to define a validation criterion for group robustness. This enables us to perform hyperparameter search, but also informs us about when to stop training the debiased model, which is critical for optimal performance [48]. Second, recent literature [39; 43] has demonstrated, in the case of training with bias-supervised data, that a linear classifier on top of pretrained base models provides with substantial group-robustness improvements. 
Here, we initialize the debiased model from the pretrained base model and finetune it using logit adjustment. Figure 1 provides a summary of our approach and its pseudocode can be found in Appendix C. Further training details are discussed in the following.\n\n### Bias-unsupervised Training\n\n**Biased network: pretrain with SSL.** We start by training a _base model_, \\(f_{\\text{base}}\\), using an SSL method on the unlabeled data of the training set \\(\\mathcal{D}\\). We decide on the hyperparameters of the SSL algorithm (learning rate, weight decay, temperature, augmentations and potentially others) by maximizing the i.i.d. validation accuracy of an online linear classifier, which probes the representations of the base model. Afterwards, we train a linear classifier \\(g_{\\phi}\\) on top of a frozen \\(f_{\\text{base}}\\) and against target variable labels \\(Y\\) using a vanilla cross-entropy loss, in order to derive a proxy for the bias variable. Finally, we retrieve the composite neural network \\(h_{\\text{bias}}=g_{\\phi}\\circ f_{\\text{base}}\\), and use its predictions \\(\\hat{y}(x;h_{\\text{bias}})\\) as a proxy for the missing bias variable observations. For the same purpose, Nam et al. [57] trained a bias-extracting network by employing Generalized Cross Entropy [75], an objective which provides robustness to noisy labels. Since the goal of the proxy network is to predict the spurious attribute, bias-conflicting or minority samples can be perceived as mislabeled data points. Here, we follow these observations and leverage the representations learnt with SSL which, when composed with a low-capacity (linear) classifier, provide with a model that is more robust to label-noise [71]. In addition to deriving a bias proxy, we will also use \\(f_{\\text{base}}\\) as an initialization point in the parameter space for training the debiased model.\n\nIn this paper, we have chosen to use MoCoV2+ [15] as the SSL algorithm for the image classification tasks we consider. We make this choice since contrastive learning algorithms, like the MoCo family [35; 15], offer relatively stable training since they explicitly prevent representation collapse in their loss function (see Appendix B). However, as we demonstrate in Section 4.3, our method is not restrained to the use of a particular SSL algorithm.\n\n**Debiased network: logit adjustment.** In the absence of bias labels during training, we need a substitute for the estimate \\(\\hat{p}_{\\text{data}}(Y,Z)\\) in eq. 4. We use the predictions \\(y_{\\text{bias}}\\) of the bias proxy network \\(h_{\\text{bias}}\\) to that end. The resulting joint distribution between the target variable and the biased network's predictions, \\(\\hat{p}_{\\text{data}}(y,y_{\\text{bias}})\\), can be thought of as a soft confusion matrix of \\(h_{\\text{bias}}\\) and can be computed using the available training data with\n\n\\[\\hat{p}_{\\text{data}}(y,y_{\\text{bias}})=\\frac{1}{|\\mathcal{D}|}\\sum_{x^{ \\prime},y^{\\prime}\\in\\mathcal{D}}p_{\\text{bias}}(y_{\\text{bias}}\\,|\\,x^{ \\prime})\\mathbb{1}(y=y^{\\prime}), \\tag{5}\\]\n\nwhere \\(p_{\\text{bias}}(y_{\\text{bias}}\\,|\\,x)\\propto\\exp(h_{\\text{bias}}(x)_{y_{\\text {bias}}}/\\tau)\\) is the biased model conditional. Note that \\(p_{\\text{bias}}(y_{\\text{bias}}\\,|\\,x)\\) is post-hoc calibrated by a temperature hyperparameter \\(\\tau\\). 
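For concreteness, the following is a minimal sketch, in our own illustrative code rather than the authors' implementation (see Appendix C for their pseudocode), of how the soft confusion matrix of Eq. (5) and the conditional \(\hat{p}_{\text{data}}(y\,|\,y_{\text{bias}})\) used for the adjustment could be computed from the bias proxy's outputs.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def soft_confusion_matrix(bias_logits, labels, num_classes, tau=1.0):
    """Estimate p_hat(y, y_bias) as in Eq. (5): the average over the training set of
    the temperature-calibrated bias-proxy distribution, accumulated per true label y."""
    # bias_logits: (N, K) outputs of h_bias on training inputs; labels: (N,) long tensor of targets y.
    p_bias = F.softmax(bias_logits / tau, dim=-1)   # p_bias(y_bias | x), calibrated by tau
    joint = torch.zeros(num_classes, num_classes)   # rows indexed by y, columns by y_bias
    joint.index_add_(0, labels, p_bias)             # add p_bias(. | x_n) into row y_n
    joint /= labels.numel()                         # normalize by |D|
    return joint                                    # joint[y, y_bias] ≈ p_hat(y, y_bias)

def log_prior(joint, eps=1e-12):
    """Conditional log p_hat(y | y_bias), the additive term used for logit adjustment."""
    cond = joint / joint.sum(dim=0, keepdim=True).clamp_min(eps)  # normalize each column over y
    return cond.clamp_min(eps).log()                              # shape (K, K): [y, y_bias]
```

During finetuning of the debiased network, the column of this log-prior selected by the bias proxy's prediction on each input is added to the network's output logits, in the spirit of the logit-adjusted model of Section 2; at inference time the additive term is dropped.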
As we rely on the biased network to approximate the spurious correlation structure described in \\(p_{\\text{data}}(Y,Z)\\), it is crucial that the predicted conditional probabilities of the biased network are calibrated correctly [31, 56, 53]. Afterwards, we are ready to begin training a debiased network with logit adjustment as in Section 2; using only the predictions of \\(h_{\\text{bias}}\\) as the bias variable. The debiased network is initialized at the composition of a random linear classifier with the pretrained network \\(f_{\\text{base}}\\), and during training we finetune it while adjusting its output logits by\n\n\\[p_{\\theta}(y\\,|\\,x)\\propto\\exp\\Big{(}h_{\\theta}(x)_{y}+\\eta\\,\\log\\hat{p}_{ \\text{data}}\\big{(}y\\,|\\,\\hat{y}(x;h_{\\text{bias}})\\Big{)}, \\tag{6}\\]\n\nwhere \\(\\eta\\geq 0\\) is a hyperparameter controlling the strength of the additive logit bias. Notice that for \\(\\eta=0\\) we fall back to ERM training. By tuning \\(\\eta\\) we can mitigate calibration errors of the debiased model [53], similar to what \\(\\tau\\) does for the bias proxy. Selected hyperparameter configurations can be found in Appendix E.\n\n### Bias-unsupervised Validation\n\nWe re-purpose the pretrained biased classifier \\(h_{\\text{bias}}\\) so that training no longer requires bias-annotated validation data for model selection. Our bias-unsupervised validation criterion calculates a balanced accuracy across pairs \\((y,y_{\\text{bias}})\\in[K]\\times[K]\\) of true labels and biased classifier predictions. In practice, we compute\n\n\\[\\overline{\\text{BalAcc}}(f;h_{\\text{bias}})\\coloneqq\\frac{1}{K^{2}}\\sum_{y,y_ {\\text{bias}}}\\frac{1}{|S_{y,y_{\\text{bias}}}|}\\sum_{x_{i},y_{i}\\in S_{y,y_{ \\text{bias}}}}\\mathbb{1}\\big{(}y_{i}=\\hat{y}(x_{i};f)\\big{)} \\tag{7}\\]\n\n\\[S_{y,y_{\\text{bias}}}\\coloneqq\\left\\{(x_{i},y_{i})\\in\\mathcal{D}_{\\text{valid }}\\,|\\,y_{i}=y\\text{ and }\\hat{y}(x_{i};h_{\\text{bias}})=y_{\\text{bias}}\\right\\},\\]\n\nwhere \\(S_{y,y_{\\text{bias}}}\\) are partitions of \\(\\mathcal{D}_{\\text{valid}}\\) based on the value of predictions of \\(h_{\\text{bias}}\\) on a sample \\(x_{i}\\) and its ground-truth target label \\(y_{i}\\). This corresponds to a form of group-balanced accuracy. Alternatively, we could also calculate a form of worst-group accuracy by taking the minimum across \\(S_{y,y_{\\text{bias}}}\\). We find that worst-case validation is more suitable for tasks with small number of classes \\(K\\).\n\nDuring training, we evaluate models at every epoch and we select the one that maximizes our validation score across the duration of a training trial. In addition, we use this criterion to tune hyperparameters. In particular, for each task we tune learning rate, weight decay, logit adjustment strength coefficient \\(\\eta\\), calibration temperature \\(\\tau\\) and, in addition, the number of pretraining steps for the SSL backbone - whenever it is applicable - and for the linear classification probe of the bias proxy network. Fig. 1 depicts our bias-unsupervised training and validation procedures.\n\n## 4 Experiments\n\n**Datasets.** The tasks we consider are all specific instances of the setup above (see Fig. 2 and Section 2). This spans group robustness challenges like colored MNIST [57, cMNIST], corrupted CIFAR10 [37, cCIFAR10] and Waterbirds[62], fair classification benchmarks like CelebA [51], and systematic generalization tasks such as the contributed sMPI3D. 
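Model selection on all of these benchmarks relies on the criterion of Eq. (7), in either its group-balanced or worst-group form. As an illustration, a minimal sketch of how it could be computed on \(\mathcal{D}_{\text{valid}}\) is given below; the function and variable names are ours and purely illustrative.

```python
import torch

@torch.no_grad()
def unsupervised_balanced_accuracy(model_logits, bias_logits, labels, num_classes, worst_group=False):
    """Eq. (7): accuracy balanced over the partitions S_{y, y_bias} of the validation set,
    where the frozen bias proxy's predictions y_bias stand in for the missing bias labels."""
    preds = model_logits.argmax(dim=-1)   # predictions of the candidate model f on D_valid
    y_bias = bias_logits.argmax(dim=-1)   # proxy "group" assignment from h_bias
    accs = []
    for y in range(num_classes):
        for yb in range(num_classes):
            mask = (labels == y) & (y_bias == yb)
            if mask.any():                # empty partitions are skipped in this sketch;
                                          # Eq. (7) implicitly assumes every pair is populated
                accs.append((preds[mask] == labels[mask]).float().mean())
    accs = torch.stack(accs)
    return accs.min() if worst_group else accs.mean()
```

The checkpoint and hyperparameter configuration maximizing this score on \(\mathcal{D}_{\text{valid}}\) are the ones selected, as described in Section 3.2.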
Details about their construction can be found in Appendix D.\n\n**Training Setup.** For cMNIST, we train a 3-hidden layer MLP, while we use a ResNet18 [34] for cCIFAR10 and sMPI3D, and a ResNet50 for Waterbirds and CelebA. For all datasets except Waterbirds, we pretrain the base model with the MoCoV2+ [15] process, while training of the linear probe for the bias network and finetuning for the logit adjusted debiased network happen withAdamW [52] optimizer. For Waterbirds instead, we leverage a base model which was pretrained on Imagenet [60], following baselines in the literature for fair comparison, and we finetune it using SGD. Finally, for cMNIST, cCIFAR10 and sMPI3D we use our group-balanced bias-unsupervised validation criterion, whereas for Waterbirds and CelebA the worst-group version. Further details are described in Appendix E.\n\n**Baselines.** We compare uLA with vanilla ERM and a diverse set of group robustness techniques described in Section 5. GroupDRO [61] provides with a fully bias-supervised baseline, while LrF [57], JtT [48], LC [50] and DFA [17] are bias-unsupervised during training although they require bias annotations during validation to achieve robust optimal performance. We also consider two fully bias-unsupervised methods: Bardenhagen et al. [9] propose early stopping networks to derive a proxy for bias-unsupervised validation, and MaskTune [3] which provide competitive results under fully bias-unsupervised benchmarks without performing any validation procedure.\n\n### Results on Benchmarks\n\nIn Table 1, we report the group-balanced accuracy on cMNIST and cCIFAR10, across different percentages of bias-conflicting examples in the training set. For cMNIST, we observe that our method performs overall competitively against LrF, DFA and LC, even though these baselines use bias annotations during model selection. On the other hand, for cCIFAR10, we observe a significantly improved group robust performance for 3/4 difficulty levels. The highest difference is observed at the 1.0% task, where our method outperforms GroupDRO by about 24% absolute increase in group-balance accuracy.\n\nIn Table 2, we observe worst-group accuracy results in the more challenging Waterbirds and CelebA datasets. In both cases, our approach performs again competitively among bias-unsupervised training algorithms that leverage bias information during validation, falling slightly behind LC, which is the best out of the ones considered. Notably, our approach is still the best performing fully bias-unsupervised method, outperforming the supervised learning pretraining validation scheme of Bardenhagen et al. 
[9], and performing on par with MaskTune on Waterbirds and better than it on CelebA by \\(\\approx\\)8% absolute worst-group accuracy.\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{Bias Labels} & \\multicolumn{3}{c}{cCIFAR10} \\\\ \\cline{2-11} & Train & Val & \\(0.5\\%\\) & \\(1.0\\%\\) & \\(2.0\\%\\) & \\(5.0\\%\\) & \\(0.5\\%\\) & \\(1.0\\%\\) & \\(2.0\\%\\) & \\(5.0\\%\\) \\\\ \\hline\n[61] GroupDRO\\({}^{\\dagger}\\) & ✓ & ✓ & \\(63.12\\) & \\(68.78\\) & \\(76.30\\) & \\(84.20\\) & \\(33.44\\) & \\(38.30\\) & \\(45.81\\) & \\(57.32\\) \\\\ \\hline\n[57] LrF\\({}^{*}\\) & ✗ & ✓ & \\(52.50_{\\pm 0.23}\\) & \\(61.89_{\\pm 0.47}\\) & \\(70.13_{\\pm 0.24}\\) & \\(80.57_{\\pm 3.88}\\) & \\(28.74_{\\pm 3.0}\\) & \\(33.07_{\\pm 0.77}\\) & \\(39.14_{\\pm 0.30}\\) & \\(50.27_{\\pm 1.56}\\) \\\\\n[17] DFA\\({}^{*}\\) & ✗ & ✓ & \\(65.22_{\\pm 4.41}\\) & \\(81.73_{\\pm 2.34}\\) & \\(84.79_{\\pm 0.88}\\) & \\(89.66_{\\pm 1.0}\\) & \\(29.95_{\\pm 1.9}\\) & \\(36.49_{\\pm 1.9}\\) & \\(17.82_{\\pm 5.1}\\) & \\(51.13_{\\pm 1.18}\\) \\\\\n[50] LC\\({}^{\\dagger}\\) & ✗ & ✓ & \\(71.53_{\\pm 1.87}\\) & \\(82.52_{\\pm 1.81}\\) & \\(\\mathbf{86.21_{\\pm 1.09}}\\) & \\(91.61_{\\pm 0.87}\\) & \\(\\mathbf{34.56_{\\pm 0.37}}\\) & \\(37.34_{\\pm 0.60}\\) & \\(47.81_{\\pm 2.00}\\) & \\(54.55_{\\pm 1.56}\\) \\\\ \\hline ERM\\({}^{*}\\) & ✗ & ✗ & \\(35.19_{\\pm 4.90}\\) & \\(52.09_{\\pm 2.88}\\) & \\(65.86_{\\pm 3.29}\\) & \\(82.17_{\\pm 0.74}\\) & \\(23.08_{\\pm 1.25}\\) & \\(25.82_{\\pm 0.33}\\) & \\(30.06_{\\pm 0.71}\\) & \\(39.42_{\\pm 0.64}\\) \\\\ uLA (ours) & ✗ & ✗ & \\(\\mathbf{75.13_{\\pm 0.78}}\\) & \\(81.80_{\\pm 1.41}\\) & \\(84.79_{\\pm 1.90}\\) & \\(\\mathbf{92.79_{\\pm 0.85}}\\) & \\(34.39_{\\pm 1.14}\\) & \\(\\mathbf{62.49_{\\pm 0.74}}\\) & \\(\\mathbf{63.88_{\\pm 1.07}}\\) & \\(\\mathbf{74.19_{\\pm 0.58}}\\) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 1: Results using datasets from Chu et al. [17] for various % of bias-conflicting examples in the training set. We report avg. group-balanced test accuracy (%) and std. dev. over 5 seeds. \\({}^{*}\\)Results from Chu et al. [17]. \\({}^{\\dagger}\\)Results from Liu et al. [50].\n\n\\begin{table}\n\\begin{tabular}{l c c c c c c} \\hline \\hline & \\multicolumn{2}{c}{Bias Labels} & \\multicolumn{2}{c}{Waterbirds} & \\multicolumn{2}{c}{CelebA} \\\\ \\cline{2-7} & Train & Val & i.i.d. & worst group & i.i.d. & worst group \\\\ \\hline\n[61] GroupDRO\\({}^{*}\\) & ✓ & ✓ & \\(93.5\\) & \\(91.4\\) & \\(92.9\\) & \\(88.9\\) \\\\ \\hline\n[9] ERM & ✗ & ✓ & \\(97.6\\) & \\(86.7\\) & \\(93.1\\) & \\(77.8\\) \\\\\n[57] LfF\\({}^{*}\\) & ✗ & ✓ & \\(97.5\\) & \\(75.2\\) & \\(86.0\\) & \\(77.2\\) \\\\\n[48] JtT\\({}^{*}\\) & ✗ & ✓ & \\(93.6\\) & \\(86.0\\) & \\(88.0\\) & \\(81.1\\) \\\\\n[50] LC & ✗ & ✓ & - & \\(90.5_{\\pm 1.1}\\) & - & \\(88.1_{\\pm 0.8}\\) \\\\ \\hline ERM & ✗ & ✗ & \\(97.3\\) & \\(72.6\\) & \\(95.6\\) & \\(47.2\\) \\\\ Bardenhagen et al. [9] & ✗ & ✗ & \\(97.5\\) & \\(78.5\\) & \\(88.0\\) & \\(78.9\\) \\\\\n[3] MaskTune & ✗ & ✗ & \\(93.0_{\\pm 0.7}\\) & \\(86.4_{\\pm 1.9}\\) & \\(91.3_{\\pm 0.1}\\) & \\(78.0_{\\pm 1.2}\\) \\\\ uLA (ours) & ✗ & ✗ & \\(91.5_{\\pm 0.7}\\) & \\(86.1_{\\pm 1.5}\\) & \\(93.9_{\\pm 0.2}\\) & \\(86.5_{\\pm 3.7}\\) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 2: Results on Waterbirds and CelebA. We report avg. test accuracy (%) and std. dev. over 5 seeds. \\({}^{*}\\)Results from Liu et al. 
[48].\n\n### Systematic Generalization\n\nsMPI3D is our contributed task which we use to study combinatorial systematicity in classifiers. With this benchmark, we aim to study the ability of classifiers to generalize to samples generated from novel combinations of observed generative attributes values, which under \\(p_{\\text{data}}\\) have \\(0\\) probability. The target task is to classify the _shape_ of an object, which is spuriously correlated with its _color_. We devise 4 difficulty levels for this task which we denote by \\(C\\), the number of color values present in the training set per shape. In Fig. 1(d), we display a possible split between in-distribution and o.o.d. combinations of attributes for \\(C=4\\). Details about its construction are given in Appendix D.\n\n**Results.** In Table 3, we further validate our approach on our contributed systematic generalization task. Under this setting, all bias-unsupervised approaches are evaluated fairly since their models are validated with exactly the same data resources; access to bias labels cannot give model selection advantage to any algorithm since there are no o.o.d. samples in the validation (just like in the training) split. Our method is the only one which consistently offers group-balanced accuracy improvements across difficulty levels, demonstrating generalization to o.o.d. samples. On the other hand, GroupDRO, vanilla ERM, JT and LC are not able to increase group-balanced accuracy over the percentage of i.i.d. samples present in the balanced test set.\n\n### Ablation Studies\n\nWe perform a set of studies to understand better the efficacy of our approach. In Fig. 3, we ablate the choice of training paradigm and the influence of a pretrained base model using SSL to the robust performance of a trained model to the target task. The paradigms we choose are among vanilla cross-entropy minimization, uLA and sLA (bias-supervised logit adjustment - see Section 2). For this ablation, a bias-supervised validation procedure was used for comparison against the fully bias-supervised sLA baseline. We find that finetuning the pretrained base model gives the best performance across training paradigms and tasks. Second, form CelebA and cCIFAR10, vanilla cross-entropy finetuning does not offer stark performance improvement over vanilla training from scratch. Only when we apply a logit adjustment training procedure, we are able to take significant advantage of the learnt representation space. At the same time, the accuracy gaps between uLA and\n\n\\begin{table}\n\\begin{tabular}{l c c c c c} \\hline \\hline & \\multicolumn{2}{c}{Bias Labels} & \\multicolumn{4}{c}{sMPI3D (various \\(C\\) colors / shape)} \\\\ \\cline{2-6} & Train \\& Val & \\(2\\) & \\(3\\) & \\(4\\) & \\(5\\) \\\\ \\hline \\% i.i.d. 
samples & - & \\(33.33\\) & \\(50.00\\) & \\(66.66\\) & \\(83.33\\) \\\\ \\hline\n[61] GroupDRO & ✓ & \\(31.23_{\\pm 1.88}\\) & \\(46.01_{\\pm 4.13}\\) & \\(69.68_{\\pm 7.82}\\) & \\(82.18_{\\pm 8.39}\\) \\\\ \\hline ERM & ✗ & \\(31.94_{\\pm 1.67}\\) & \\(47.68_{\\pm 4.06}\\) & \\(71.94_{\\pm 7.71}\\) & \\(83.10_{\\pm 8.64}\\) \\\\\n[48] JT & ✗ & \\(31.89_{\\pm 0.88}\\) & \\(48.93_{\\pm 2.04}\\) & \\(67.78_{\\pm 3.14}\\) & \\(83.31_{\\pm 3.51}\\) \\\\\n[50] LC\\({}^{*}\\) & ✗ & \\(31.29_{\\pm 0.96}\\) & \\(45.01_{\\pm 2.31}\\) & \\(61.67_{\\pm 5.06}\\) & \\(94.62_{\\pm 0.88}\\) \\\\ uLA (ours) & ✗ & \\(\\mathbf{59.58}_{\\pm 3.12}\\) & \\(\\mathbf{80.53}_{\\pm 3.16}\\) & \\(\\mathbf{91.11}_{\\pm 3.52}\\) & \\(\\mathbf{98.05}_{\\pm 0.64}\\) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 3: Results on sMPI3D for various numbers of \\(C\\) colors per shape value in the training set (see Appendix D). We report avg. group-balanced accuracy (%) and std. dev. over 5 seeds & dataset generations. \\({}^{*}\\)GroupMix augmentation was not used for fairness of comparison.\n\nFigure 3: Ablations on the influence of SSL pretraining and on the use of Vanilla process, sLA, or uLA for training against the downstream task. For cCIFAR10 (1%) and sMPI3D (\\(C=3\\)) we report the group-balanced test accuracy, whereas for CelebA the worst-group. We report avg. accuracies and std. dev. over 5 seeds.\n\nsLA are small which indicates that we are able to recover correctly the bias attribute with the proxy network. On the other hand, for sMPI3D the gaps between uLA and sLA are large. This shows that, in the systematic generalization case, bias extraction remains an open challenge for future work, and that improvement over baseline procedures in Table 3 is due to pretraining with SSL.\n\nIn addition, we study the impact of the choice of SSL method that we use to pretrain the backbone. For this reason, we perform an ablation experiment on cCIFAR10 (1%) by changing the pretraining strategy from MoCoV2+ to BYOL [30] or Barlow Twins[73]. We find that, while BYOL performs within the error margin of the best MoCoV2+ setting, Barlow Twins underperforms. Barlow Twins seeks to match the empirical (i.i.d.) cross-correlation between features to the identity. Arguably, we expect that the cross-correlation is different under the shifted test set. In any case, uLA significantly outperforms the non-uLA baselines with any of the considered SSL methods.\n\nFinally, we study how the choice of pretraining paradigm for the bias network influences the quality of hyperparameter search using our proposed validation criterion. In Fig. 4, we present two separate searches on the same space of hyperparameters using two different pretraining approaches. In red we see our approach of pretraining an SSL base model for the bias network, and in blue we see a baseline approach where we pretrain with purely supervised learning. We observe that SSL pretraining enables stronger correlation between the proposed bias-unsupervised group-balanced validation criterion and the corresponding test accuracy on cCIFAR10. It is more difficult to tune hyperparameters with a bias network pretrained with supervised learning, because it may fit the training set entirely. In that case, the validation criterion collapses to the in-distribution test accuracy which is not indicative of the group-balanced test accuracy. 
On the contrary, classification with a linear probe on top of SSL representations prevents from fitting the training set entirely, having small generalization gaps in-distribution [8]. In this way, the validation criterion remains strongly correlated even in larger validation accuracy values, maintaining its utility in a greater range of hyperparameter configurations.\n\n## 5 Related Work\n\nPrior literature can be grouped according to three main strategies that attempt to improve a model's robustness to dataset bias [54]. (i) _Resampling_ strategies increase or decrease the frequency of biased\n\n\\begin{table}\n\\begin{tabular}{l c} \\hline \\hline \\multicolumn{2}{c}{Group-balanced Test Acc.} \\\\ \\hline ERM & \\(25.82_{\\pm 0.33}\\) \\\\ GroupDRO & \\(38.30\\) \\\\ \\hline uLA w. MoCoV2+ & \\(62.49_{\\pm 0.74}\\) \\\\ uLA w. BYOL & \\(59.73_{\\pm 2.03}\\) \\\\ uLA w. BarlowTwins & \\(50.08_{\\pm 1.23}\\) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 4: Ablation of SSL pretraining methods for the backbone model. Default \\(\\eta=1.0\\) and \\(\\tau=1.0\\) are used. We pretrain for \\(1000\\) epochs, and train the linear head of the bias proxy for \\(100\\) epochs. We report avg. group-balanced test accuracy (%) and std. dev. over 5 seeds on cCIFAR10 (1%).\n\nFigure 4: Validation vs test on cCIFAR10 (1%).\n\n\\begin{table}\n\\begin{tabular}{l c} \\hline \\hline Pretraining method & Pearson correlation \\\\ \\hline End-to-end supervised & \\(0.690\\) (\\(0.588,0.770\\)) \\\\ SSL + linear head & \\(0.819\\) (\\(0.721,0.885\\)) \\\\ \\hline \\hline \\end{tabular}\n\\end{table}\nTable 5: Effect of pretraining method for a bias proxy network on results of hyperparameter search using our validation criterion. Each training trial’s hyperparameters were sampled from the same prior. Pretraining with SSL results in trials whose best validation criterion correlates better with selected model’s group-balanced test accuracy on cCIFAR10 (1%). In parenthesis, we compute 95% confidence intervals.\n\nattributes in the input space [27; 55] or latent space [23; 74; 17]. (ii) _Loss reweighting_ methods balance class or feature importances during training [74; 21; 12; 65; 40]. Especially relevant to our work are the reweighting methods that improve robustness and generalization when training on biased datasets [61; 57; 17; 48]. (iii) _Post-hoc adaptation_ methods [54; 25; 19; 42; 41; 3; 70] correct the biases learned by already-trained models. Most relevant to our work is the logit adjustment technique proposed by Menon et al. [54] for long-tail learning, which leverages class frequencies to rebalance the model predictions after training or to train with a loss function which is aware of the class prior. We develop a logit adjustment technique for the problem of learning from biased data, which does not require previous knowledge about the dataset's biases.\n\nSeveral solutions have emerged for the group robustness problem, especially when bias attribute data is available. For example, GroupDRO [61] leverages explicit bias attributes to reduce worst-case group loss. Our method, however, reduces reliance on such data, recognizing that access to them can be impractical. Bias-unsupervised methods like Learning from Failure (LFF) [57] and Disentangled Feature Augmentation (DFA) [17] reweight the loss of the unbiased model using a co-trained biased model, removing the need for bias supervision. 
Conversely, Just-train-twice (JT) [48] reweights misclassified samples from an initial biased training, emphasizing worst-performing group data. Similarly, He et al. [33] exemplify the same intuition in that a second network is trained on examples that cannot be predicted already using the spurious structure. Liu et al. [50] proposed a bias-unsupervised logit adjustment technique (LC) also based on co-training a biased network. Utilizing domain knowledge, Clark et al. [18] describe a bias-supervised logit adjustment approach to debiasing Visual Question Answering (VQA) models by incorporating a bias proxy which is trained exclusively on question data. These methods only generalize to seen attribute groups during training and require bias knowledge during validation for optimal performance. We tackle these issues by employing SSL to pre-train a network, deriving a bias proxy for debiased model training and validation, as well as using it as initialization.\n\nIn order to derive bias-unsupervised solutions, literature has proposed to train reference models as proxies for the missing bias labels. Creager et al. [20] optimize a reference model for group assignments which maximally violate a relaxation of the Environment Invariance Criterion. Chen et al. [16] seek to establish conditional independence between the predictions of the proxy model and the target variable given the inferred groups. Our work follows more closely the approach of [48], in which a biased network is simply trained with ERM. As we demonstrate at Figure 6 of Appendix F, by utilizing a frozen backbone pretrained with SSL, our approach improves on the sensitivity to the number of training steps for the bias proxy.\n\nBardenhagen et al. [9] suggested a validation scheme dependent on early stopping of bias proxy network training. Our method, using SSL pretraining, avoids this by treating pretraining steps as tunable hyperparameters, maintaining performance of alternatives which were otherwise tuned with bias information. Chen et al. [16] perform experiments using a methodology dubbed as Training Environments Validation (TEV). Similar to us, TEV validates models based on inferred groups from training, however the methodology is unfortunately not well documented in the literature, making its reproducibility difficult. As we show in the ablation study of Figure 4, implementation details can make a large difference in the quality of the criterion. Finally, MaskTune[3] eliminates spurious shortcuts by masking input data during a secondary training phase. Despite its resilience in performance without using a bias-unsupervised o.o.d. model selection criterion, the need for a reliable validation strategy for group robustness remains.\n\n## 6 Conclusion\n\nWe explored group robust classification in synthetic and real tasks, proposing a generalization task with unseen attribute combinations. Current robust classification methods struggle in this setting, motivating our SSL-based logit adjustment approach. Importantly, we introduce **a methodology for training and validating robust models without group labels**. Empirical evaluations on five datasets show our method outperforms existing fully bias-unsupervised approaches and rivals those using bias annotations during validation. In terms of _limitations and broader impact_ of our contributions, as machine learning systems handle high-stakes applications, ensuring robustness to underrepresented samples is crucial. 
Our work reduces reliance on known data biases, but existing benchmarks differ from real-life scenarios with unknown biased attribute combinations. To bridge this gap, we proposed a synthetic benchmark and encourage further research on real data, revealing more system limitations.\n\n## References\n\n* Ahmed et al. [2021] F. Ahmed, Y. Bengio, H. van Seijen, and A. Courville. Systematic generalisation with group invariant predictions. In _International Conference on Learning Representations_, 2021.\n* Arjovsky et al. [2019] M. Arjovsky, L. Bottou, I. Gulrajani, and D. Lopez-Paz. Invariant risk minimization. _arXiv preprint arXiv:1907.02893_, 2019.\n* Asgari et al. [2022] S. Asgari, A. Khani, F. Khani, A. Gholami, L. Tran, A. Mahdavi Amiri, and G. Hamarneh. Masktune: Mitigating spurious correlations by forcing to explore. In _Advances in Neural Information Processing Systems_, 2022.\n* Asmussen and Glynn [2007] S. Asmussen and P. Glynn. _Stochastic Simulation: Algorithms and Analysis_. Stochastic Modelling and Applied Probability. Springer New York, 2007.\n* Assran et al. [2023] M. Assran, R. Balestriero, Q. Duval, F. Bordes, I. Misra, P. Bojanowski, P. Vincent, M. Rabbat, and N. Ballas. The hidden uniform cluster prior in self-supervised learning. In _International Conference on Learning Representations_, 2023.\n* Bahng et al. [2020] H. Bahng, S. Chun, S. Yun, J. Choo, and S. J. Oh. Learning de-biased representations with biased representations. In _International Conference on Machine Learning_, 2020.\n* Balestriero et al. [2023] R. Balestriero, M. Ibrahim, V. Sobal, A. Morcos, S. Shekhar, T. Goldstein, F. Bordes, A. Bardes, G. Mialon, Y. Tian, et al. A cookbook of self-supervised learning. _arXiv preprint arXiv:2304.12210_, 2023.\n* Bansal et al. [2020] Y. Bansal, G. Kaplun, and B. Barak. For self-supervised learning, rationality implies generalization, provably. _arXiv preprint arXiv:2010.08508_, 2020.\n* Bardenhagen et al. [2021] V. Bardenhagen, A. Tifrea, and F. Yang. Boosting worst-group accuracy without group annotations. In _NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications_, 2021.\n* Blodgett et al. [2016] S. L. Blodgett, L. Green, and B. O'Connor. Demographic dialectal variation in social media: A case study of african-american english. _arXiv preprint arXiv:1608.08868_, 2016.\n* Brodersen et al. [2010] K. H. Brodersen, C. S. Ong, K. E. Stephan, and J. M. Buhmann. The balanced accuracy and its posterior distribution. In _International Conference on Pattern Recognition_, 2010.\n* Cao et al. [2019] K. Cao, C. Wei, A. Gaidon, N. Arechiga, and T. Ma. Learning imbalanced datasets with label-distribution-aware margin loss. _Advances in Neural Information Processing Systems_, 2019.\n* Caton and Haas [2020] S. Caton and C. Haas. Fairness in machine learning: A survey. _arXiv preprint arXiv:2010.04053_, 2020.\n* Chen et al. [2020] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton. A simple framework for contrastive learning of visual representations. In _International Conference on Machine Learning_, 2020.\n* Chen et al. [2020] X. Chen, H. Fan, R. Girshick, and K. He. Improved baselines with momentum contrastive learning. _arXiv preprint arXiv:2003.04297_, 2020.\n* Chen et al. [2022] Y. Chen, R. Xiong, Z.-M. Ma, and Y. Lan. When does group invariant learning survive spurious correlations? In _Advances in Neural Information Processing Systems_, 2022.\n* Chu et al. [2021] S. Chu, D. Kim, and B. Han. 
Learning debiased and disentangled representations for semantic segmentation. In _Advances in Neural Information Processing Systems_, 2021.\n* Clark et al. [2019] C. Clark, M. Yatskar, and L. Zettlemoyer. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_. Association for Computational Linguistics, 2019.\n* Collell et al. [2016] G. Collell, D. Prelec, and K. R. Patil. Reviving threshold-moving: a simple plug-in bagging ensemble for binary and multiclass imbalanced data. _ArXiv_, abs/1606.08698, 2016.\n\n* [20] E. Creager, J.-H. Jacobsen, and R. Zemel. Environment inference for invariant learning. In _International Conference on Machine Learning_, 2021.\n* [21] Y. Cui, M. Jia, T.-Y. Lin, Y. Song, and S. Belongie. Class-balanced loss based on effective number of samples. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2019.\n* [22] V. G. T. da Costa, E. Fini, M. Nabi, N. Sebe, and E. Ricci. solo-learn: A library of self-supervised methods for visual representation learning. _Journal of Machine Learning Research_, 2022.\n* [23] L. Darlow, S. Jastrzebski, and A. Storkey. Latent adversarial debiasing: Mitigating collider bias in deep neural networks. _arXiv preprint arXiv:2011.11486_, 2020.\n* [24] K. Eykholt, I. Evtimov, E. Fernandes, B. Li, A. Rahmati, C. Xiao, A. Prakash, T. Kohno, and D. Song. Robust physical-world attacks on deep learning visual classification. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, 2018.\n* [25] T. Fawcett and F. Provost. Combining data mining and machine learning for effective user profiling. In _International Conference on Knowledge Discovery and Data Mining_, 1996.\n* [26] J. A. Fodor and Z. W. Pylyshyn. Connectionism and cognitive architecture: A critical analysis. _Cognition_, 28(1), 1988.\n* [27] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, and W. Brendel. Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness. In _International Conference on Learning Representations_, 2018.\n* [28] R. Geirhos, J.-H. Jacobsen, C. Michaelis, R. Zemel, W. Brendel, M. Bethge, and F. A. Wichmann. Shortcut learning in deep neural networks. _Nature Machine Intelligence_, 2020.\n* [29] M. W. Gondal, M. Wuthrich, D. Miladinovic, F. Locatello, M. Breidt, V. Volchkov, J. Akpo, O. Bachem, B. Scholkopf, and S. Bauer. On the transfer of inductive bias from simulation to the real world: a new disentanglement dataset. In _Advances in Neural Information Processing Systems_, 2019.\n* [30] J.-B. Grill, F. Strub, F. Altche, C. Tallec, P. Richemond, E. Buchatskaya, C. Doersch, B. Avila Pires, Z. Guo, M. Gheshlaghi Azar, et al. Bootstrap your own latent-a new approach to self-supervised learning. _Advances in Neural Information Processing Systems_, 2020.\n* [31] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger. On calibration of modern neural networks. In _International Conference on Machine Learning_, 2017.\n* [32] S. Gururangan, S. Swayamdipta, O. Levy, R. Schwartz, S. Bowman, and N. A. Smith. Annotation artifacts in natural language inference data. In _Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies_, 2018.\n* [33] H. He, S. Zha, and H. Wang. 
Unlearn dataset bias in natural language inference by fitting the residual. In _Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)_. Association for Computational Linguistics, 2019.\n* [34] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2016.\n* [35] K. He, H. Fan, Y. Wu, S. Xie, and R. Girshick. Momentum contrast for unsupervised visual representation learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020.\n* [36] L. A. Hendricks, K. Burns, K. Saenko, T. Darrell, and A. Rohrbach. Women also snowboard: Overcoming bias in captioning models. In _European Conference on Computer Vision (ECCV)_, 2018.\n* [37] D. Hendrycks and T. Dietterich. Benchmarking neural network robustness to common corruptions and perturbations. In _International Conference on Learning Representations_, 2019.\n\n* [38] D. Hovy and A. Sogaard. Tagging performance correlates with author age. In _Association for Computational Linguistics_, 2015.\n* [39] P. Izmailov, P. Kirichenko, N. Gruver, and A. G. Wilson. On feature learning in the presence of spurious correlations. In _Advances in Neural Information Processing Systems_, 2022.\n* [40] M. A. Jamal, M. Brown, M.-H. Yang, L. Wang, and B. Gong. Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020.\n* [41] B. Kang, S. Xie, M. Rohrbach, Z. Yan, A. Gordo, J. Feng, and Y. Kalantidis. Decoupling representation and classifier for long-tailed recognition. _arXiv preprint arXiv:1910.09217_, 2019.\n* [42] B. Kim and J. Kim. Adjusting decision boundary for class imbalanced learning. _IEEE Access_, 2020.\n* [43] P. Kirichenko, P. Izmailov, and A. G. Wilson. Last layer re-training is sufficient for robustness to spurious correlations. In _International Conference on Learning Representations_, 2023.\n* [44] O. O. Koyejo, N. Natarajan, P. K. Ravikumar, and I. S. Dhillon. Consistent binary classification with generalized performance metrics. In _Advances in Neural Information Processing Systems_, 2014.\n* [45] A. Krizhevsky and G. Hinton. Learning multiple layers of features from tiny images. Technical report, University of Toronto, 2009.\n* [46] B. Lake and M. Baroni. Generalization without systematicity: On the compositional skills of sequence-to-sequence recurrent networks. In _International Conference on Machine Learning_, 2018.\n* [47] Y. LeCun, C. Cortes, and C. J. Burges. Mnist handwritten digit database. In _Proceedings of the IEEE_, 1998.\n* [48] E. Z. Liu, B. Haghgoo, A. S. Chen, A. Raghunathan, P. W. Koh, S. Sagawa, P. Liang, and C. Finn. Just train twice: Improving group robustness without training group information. In _International Conference on Machine Learning_, 2021.\n* [49] H. Liu, J. Z. HaoChen, A. Gaidon, and T. Ma. Self-supervised learning is more robust to dataset imbalance. In _NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and Applications_, 2021.\n* [50] S. Liu, X. Zhang, N. Sekhar, Y. Wu, P. Singhal, and C. Fernandez-Granda. Avoiding spurious correlations via logit correction. In _International Conference on Learning Representations_, 2023.\n* [51] Z. Liu, P. Luo, X. Wang, and X. Tang. Deep learning face attributes in the wild. 
In _Proceedings of the IEEE international conference on computer vision_, 2015.\n* [52] I. Loshchilov and F. Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.\n* [53] A. Menon, H. Narasimhan, S. Agarwal, and S. Chawla. On the statistical consistency of algorithms for binary classification under class imbalance. In _International Conference on Machine Learning_, 2013.\n* [54] A. K. Menon, S. Jayasumana, A. S. Rawat, H. Jain, A. Veit, and S. Kumar. Long-tail learning via logit adjustment. In _International Conference on Learning Representations_, 2021.\n* [55] M. Minderer, O. Bachem, N. Houlsby, and M. Tschannen. Automatic shortcut removal for self-supervised representation learning. In _International Conference on Machine Learning_, 2020.\n* [56] R. Muller, S. Kornblith, and G. E. Hinton. When does label smoothing help? _Advances in Neural Information Processing Systems_, 2019.\n\n* [57] J. Nam, H. Cha, S. Ahn, J. Lee, and J. Shin. Learning from failure: De-biasing classifier from biased classifier. _Advances in Neural Information Processing Systems_, 2020.\n* [58] A. v. d. Oord, Y. Li, and O. Vinyals. Representation learning with contrastive predictive coding. _arXiv preprint arXiv:1807.03748_, 2018.\n* [59] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. _Advances in Neural Information Processing Systems_, 2019.\n* [60] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, et al. Imagenet large scale visual recognition challenge. _International journal of computer vision_, 2015.\n* [61] S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang. Distributionally robust neural networks. In _International Conference on Learning Representations_, 2020.\n* [62] S. Sagawa, A. Raghunathan, P. W. Koh, and P. Liang. An investigation of why overparameterization exacerbates spurious correlations. In _International Conference on Machine Learning_, 2020.\n* [63] Y. Shi, I. Daunhawer, J. E. Vogt, P. Torr, and A. Sanyal. How robust is unsupervised representation learning to distribution shift? In _International Conference on Learning Representations_, 2023.\n* [64] R. Suter, D. Miladinovic, B. Scholkopf, and S. Bauer. Robustly disentangled causal mechanisms: Validating deep representations for interventional robustness. In _International Conference on Machine Learning_, 2019.\n* [65] J. Tan, C. Wang, B. Li, Q. Li, W. Ouyang, C. Yin, and J. Yan. Equalization loss for long-tailed object recognition. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2020.\n* [66] R. Tatman. Gender and dialect bias in youtube's automatic captions. In _ACL workshop on ethics in natural language processing_, 2017.\n* [67] V. N. Vapnik. _Statistical Learning Theory_. Wiley-Interscience, 1998.\n* [68] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The caltech-ucsd birds-200-2011 dataset. Technical report, California Institute of Technology, 2011.\n* [69] H. Wang, Z. He, Z. C. Lipton, and E. P. Xing. Learning robust representations by projecting superficial statistics out. In _International Conference on Learning Representations_, 2018.\n* [70] J. Wei, H. Narasimhan, E. Amid, W.-S. Chu, Y. Liu, and A. Kumar. Distributionally robust post-hoc classifiers under prior shifts. In _International Conference on Learning Representations_, 2023.\n* [71] Y. 
Xue, K. Whitecross, and B. Mirzasoleiman. Investigating why contrastive learning benefits robustness against label noise. In _International Conference on Machine Learning_, 2022.\n* [72] Y. You, I. Gitman, and B. Ginsburg. Large batch training of convolutional networks. _arXiv preprint arXiv:1708.03888_, 2017.\n* [73] J. Zbontar, L. Jing, I. Misra, Y. LeCun, and S. Deny. Barlow twins: Self-supervised learning via redundancy reduction. In _International Conference on Machine Learning_, 2021.\n* [74] X. Zhang, Z. Fang, Y. Wen, Z. Li, and Y. Qiao. Range loss for deep face recognition with long-tailed training data. In _International Conference on Computer Vision_, 2017.\n* [75] Z. Zhang and M. Sabuncu. Generalized cross entropy loss for training deep neural networks with noisy labels. In _Advances in Neural Information Processing Systems_, 2018.\n\n* [76] C. Zhao, J. Mangat, S. Koujalgi, A. Squicciarini, and C. Caragea. Privacyalert: A dataset for image privacy prediction. In _Proceedings of the International AAAI Conference on Web and Social Media_, 2022.\n* [77] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. Places: A 10 million image database for scene recognition. _IEEE transactions on pattern analysis and machine intelligence_, 2017.\n\n## Appendix A Bias-Supervised Training with Logit Adjustment\n\nThe goal of this Appendix section is to develop a bias-supervised approach for maximizing the group-balanced accuracy (Equation (3)), which is going to serve as an initial point in the development of our entirely bias-unsupervised methodology in Section 3.\n\nUnder the problem formulation we introduced in Section 2, the metric that we would ultimately like to maximize given a scoring function \\(f_{Y|X}:\\mathcal{X}\\rightarrow\\mathbb{R}^{K}\\) is \\(\\operatorname{Acc}_{Y|X}(f_{Y|X};p_{\\text{test}})\\), the top-1 accuracy under the test distribution \\(p_{\\text{test}}\\). Simple algebraic manipulations reveal that this test accuracy corresponds to the group-balanced accuracy:\n\n\\[\\operatorname{Acc}_{Y|X}(f_{Y|X};p_{\\text{test}}) :=\\operatorname*{\\mathbb{E}}_{x,y\\sim p_{\\text{test}}}\\mathbb{1} \\left(y=\\operatorname*{argmax}_{y^{\\prime}}f_{Y|X}(x)_{y^{\\prime}}\\right) \\tag{8}\\] \\[=\\operatorname*{\\mathbb{E}}_{\\begin{subarray}{c}y,z\\sim p_{\\text {test}}\\\\ x\\sim p_{\\text{test}}(\\cdot\\,|y,z)\\end{subarray}}\\mathbb{1}\\left(y= \\operatorname*{argmax}_{y^{\\prime}}f_{Y|X}(x)_{y^{\\prime}}\\right)\\] \\[=\\frac{1}{KL}\\sum_{(y,z)\\in[K]\\times[L]}\\operatorname*{\\mathbb{E} }_{x\\sim p_{\\text{test}}(\\cdot\\,|y,z)}\\mathbb{1}\\left(y=\\operatorname*{argmax }_{y^{\\prime}}f_{Y|X}(x)_{y^{\\prime}}\\right).\\]\n\nA related quantity in the literature is the class-balanced top-1 accuracy, which is used in class-imbalanced classification problems [11, 53]. We can make the connection by imagining \\((y,z)\\) as a multi-label classification target in a multi-label, but class-imbalanced, problem. The multi-label class-balanced accuracy of a multi-label scoring function \\(f_{Y,Z|X}:\\mathcal{X}\\rightarrow\\mathbb{R}^{K}\\times\\mathbb{R}^{L}\\) would then be\n\n\\[\\operatorname{Acc}_{Y,Z|X}(f_{Y,Z|X};p_{\\text{test}})=\\frac{1}{KL}\\sum_{(y,z) \\in[K]\\times[L]}\\operatorname*{\\mathbb{E}}_{x\\sim p_{\\text{test}}(\\cdot\\,|y,z )}\\mathbb{1}\\left((y,z)=\\operatorname*{argmax}_{y^{\\prime},z^{\\prime}}f_{Y,Z|X }(x)_{y^{\\prime},z^{\\prime}}\\right). 
\tag{9}\]\n\nWe now set out to build a multi-label classifier that maximizes the multi-label class-balanced accuracy, and we will show that the part of it that predicts \(Y\) from \(X\) alone maximizes a lower bound to the group-balanced accuracy.\n\nWe assume that \(\mathcal{D}\) and \(\mathcal{D}_{\text{valid}}\) provide us with tuples of observations \((x,y,z)\) distributed according to \(p_{\text{data}}\). In this case, results in the class-imbalance literature [53, 44, 19] state that the Bayes-optimal scoring function \(f^{\star}_{Y,Z|X}\) for maximizing the class-balanced criterion (9) is given by\n\n\[\operatorname*{argmax}_{y,z}f^{\star}_{Y,Z|X}(x)_{y,z}=\operatorname*{argmax} _{y,z}p_{\text{test}}(x\,|\,y,z). \tag{10}\]\n\nNote that under the invariance of mechanism assumption (2) we have that \(p_{\text{test}}(x\,|\,y,z)=p_{\text{data}}(x\,|\,y,z)\). This allows us to express the mechanism conditional probability in terms of the \(p_{\text{data}}\) distribution,\n\n\[p_{\text{test}}(x\,|\,y,z)=\frac{p_{\text{data}}(y,z\,|\,x)}{p_{\text{data}}( y,z)}p_{\text{data}}(x) \tag{11}\]\n\nby utilizing Bayes' rule. This way we have a candidate strategy to maximize the multi-label class-balanced accuracy in Equation (9): first, we estimate \(p_{\text{data}}(y,z\,|\,x)\) and \(p_{\text{data}}(y,z)\) from data, and then we divide the estimates. Notice that, in order to make a prediction about \((y,z)\) given \(x\), we do not need to model \(p_{\text{data}}(x)\) as this quantity is constant in \((y,z)\), and thus does not influence the \(\operatorname*{argmax}\).\n\n**Training process.** We can directly estimate \(p_{\text{data}}(y,z)\) from available training data by computing the empirical frequencies of pairs \((y,z)\in[K]\times[L]\), or by a Bayesian estimate with a Dirichlet prior. On the other hand, we can train a parameterized discriminative model \(p_{\theta}\) with maximum conditional likelihood estimation to estimate \(p_{\text{data}}(y,z|x)\)\n\n\[\max_{\theta}\frac{1}{|\mathcal{D}|}\sum_{x,y,z\in\mathcal{D}}\log p_{\theta}( y,z\,|\,x) \tag{12}\]\n\nwhich is equivalent to the minimization of a KL divergence estimate between the \(p_{\text{data}}\) and \(p_{\theta}\) model conditionals.\n\nThe method described so far would have sufficed if the goal were to perform multi-label classification; however, we need to devise a way to remove the need for bias supervision, since we only care about classifying \(Y\) and do not have access to \(Z\) annotations. For this reason, we introduce modelling assumptions about \(p_{\theta}\), which will allow us to develop the bias-unsupervised method described in Section 3. First, we recall that \(p_{\text{data}}\) factorizes as\n\n\[p_{\text{data}}(y,z\,|\,x)=p_{\text{data}}(y\,|\,x,z)p_{\text{data}}(z\,|\,x). \tag{13}\]\n\nLet \(h_{\theta}:\mathcal{X}\to\mathbb{R}^{K}\) be a parameterized model (such as a neural network) tasked with predicting unnormalized logits of \(Y\) given an observation \(x\). Then, we model \(p_{\text{data}}(y\,|\,x,z)\) by\n\n\[p_{\theta}(y\,|\,x,z)\propto\exp\big{(}h_{\theta}(x)_{y}+\log\hat{p}_{\text{ data}}(y\,|\,z)\big{)}. 
\\tag{14}\\]\n\nUnpacking our modelling assumption Equation (14), we have the following: First, we are going to compute \\(\\hat{p}_{\\text{data}}(y\\,|\\,z)=\\frac{\\hat{p}_{\\text{data}}(y,z)}{\\hat{p}_{ \\text{data}}(z)}\\), by using our estimated \\(\\hat{p}_{\\text{data}}(y,z)\\) and by marginalizing out \\(y\\) for \\(\\hat{p}_{\\text{data}}(z)=\\sum_{y\\in[K]}\\hat{p}_{\\text{data}}(y,z)\\). Given a tuple of observables \\((x,y,z)\\) in a training batch, we use \\(\\log\\hat{p}_{\\text{data}}(y\\,|\\,z)\\) to adjust additively the outputs of our parametric discriminative model \\(h_{\\theta}\\) to the observed input \\(x\\). Notice that since \\(Y\\) is a categorical variable, \\(\\log\\hat{p}_{\\text{data}}(y\\,|\\,z)\\in(-\\infty,0]\\) for all \\(y\\in[K]\\). For this reason, we can intuitively interpret _logit adjustment_ as a soft masking operation for outputs of \\(h_{\\theta}\\) which are unlikely in the training data when we have observed \\(z\\). In this way, we account for the dependency of \\(Y\\) to \\(Z\\) which spuriously exists in the training distribution. By fitting the cross-entropy objective under logit adjustment, the network \\(h_{\\theta}\\) has to model the remaining relations for \\(Y\\,|\\,X\\) that are not spurious since those are already accounted for.\n\n**Prediction process.** Our modelling assumption enables the prediction of \\(y\\) without computing the maximizer over combinations of \\((y,z)\\in[K]\\times[L]\\). This is an important step towards developing a bias-unsupervised training procedure. This is because assumption 14 allows us to compute the maximizer for \\(y\\,|\\,x\\) and \\(z\\,|\\,x\\) separately. We show that under our modelling assumption, if we are only interested in predicting only the target variable \\(y\\) given an observation \\(x\\), we do not need to model \\(z\\,|\\,x\\) at all. To understand how this is possible, we take a close look at the total expression for optimal predictor. After fitting the model \\(p_{\\theta}\\) using Equation (12), we calculate the estimated scoring function for the multi-label class-imbalanced problem as in Equation (11)\n\n\\[\\hat{f}_{Y,Z|X}(x)=\\hat{p}_{\\text{test}}(x\\,|\\,y,z)=\\frac{\\hat{p} _{\\text{data}}(y,z\\,|\\,x)}{\\hat{p}_{\\text{data}}(y,z)}\\,\\hat{p}_{\\text{data}}(x) \\tag{15}\\] \\[=\\frac{p_{\\theta}(y\\,|\\,x,z)}{\\hat{p}_{\\text{data}}(y\\,|\\,z)}\\, \\frac{\\hat{p}_{\\text{data}}(z\\,|\\,x)}{\\hat{p}_{\\text{data}}(z)}\\,\\hat{p}_{ \\text{data}}(x)\\] \\[=\\frac{\\exp\\big{(}h_{\\theta}(x)_{y}+\\log\\hat{p}_{\\text{data}}(y \\,|\\,z)\\big{)}}{Z_{\\theta}(z,x)\\,\\hat{p}_{\\text{data}}(y\\,|\\,z)}\\,\\frac{\\hat{p }_{\\text{data}}(z\\,|\\,x)}{\\hat{p}_{\\text{data}}(z)}\\,\\hat{p}_{\\text{data}}(x)\\] \\[=\\exp\\big{(}h_{\\theta}(x)_{y}\\big{)}\\,\\frac{\\hat{p}_{\\text{data}}( z\\,|\\,x)\\hat{p}_{\\text{data}}(x)}{Z_{\\theta}(z,x)\\hat{p}_{\\text{data}}(z)},\\]\n\nwhere \\(Z_{\\theta}(z,x)\\coloneqq\\sum_{y\\in[K]}\\exp\\big{(}h_{\\theta}(x)_{y}+\\log\\hat{p }_{\\text{data}}(y\\,|\\,z)\\big{)}\\) is the partition function of \\(p_{\\theta}(y|x,z)\\). Notice that the expression simplifies into a multiplication of two terms; the first, \\(\\exp\\big{(}h_{\\theta}(x)_{y}\\big{)}\\), solely depends on \\((x,y)\\), while the second fraction only depends on \\((x,z)\\). 
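Before moving to the prediction rule, a minimal sketch of the training step implied by Equations (12) and (14) may be helpful. This is an illustration only, not the authors' implementation: the names (`model`, `log_p_y_given_z`) are hypothetical, the optional strength knob `eta` mirrors the \(\eta\) of Algorithm 1, and PyTorch is assumed.\n\n```python\n# Minimal sketch (assumptions noted above): bias-supervised logit adjustment, Eq. (12) + (14).\nimport torch\nimport torch.nn.functional as F\n\n\ndef empirical_log_prior(y, z, num_classes, num_biases, smoothing=1.0):\n    """log p_hat(y | z) from (y, z) co-occurrence counts with an additive (Dirichlet-style) prior."""\n    counts = torch.full((num_biases, num_classes), smoothing)\n    for yi, zi in zip(y.tolist(), z.tolist()):\n        counts[zi, yi] += 1.0\n    return (counts / counts.sum(dim=1, keepdim=True)).log()  # shape [L, K]\n\n\ndef sla_loss(model, x, y, z, log_p_y_given_z, eta=1.0):\n    """Cross-entropy on logits additively adjusted by the empirical log-prior log p_hat(y | z)."""\n    logits = model(x)                              # h_theta(x), shape [batch, K]\n    adjusted = logits + eta * log_p_y_given_z[z]   # soft-masking of options unlikely given z\n    return F.cross_entropy(adjusted, y)\n```\n\nAt prediction time the adjustment term is dropped and only the output of `model(x)` is used, which is exactly the rule derived next.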
Then, predicting according to Equation (10) amounts to computing\n\n\[\max_{y,z}\hat{p}_{\text{test}}(x\,|\,y,z)=\max_{y}\exp\big{(}h_{\theta}(x)_{ y}\big{)}\,\max_{z}\frac{\hat{p}_{\text{data}}(z|x)\hat{p}_{\text{data}}(x)}{Z_{ \theta}(z,x)\hat{p}_{\text{data}}(z)}. \tag{16}\]\n\nEssentially, information about \(Z\) is only needed during training and not during prediction of \(Y\) given \(X\). In practice, during prediction we adapt the model by removing the logit adjustment term in order to acquire the unbiased network for the task. Consequently, the obtained scoring function for debiased \(Y\) predictions is just the trained neural network\n\n\[\hat{f}_{Y|X}(x;\theta)\coloneqq h_{\theta}(x). \tag{17}\]\n\nNotice that our design decisions are well-justified, in the sense that, under our class of models (eq. 14), we have \(\operatorname{Acc}_{Y,Z|X}(\hat{f}_{Y,Z|X};p_{\text{test}})\leq\operatorname{ Acc}_{Y|X}(\hat{f}_{Y|X};p_{\text{test}})\). This means that by maximizing the multi-label class-balanced accuracy we indirectly maximize the top-1 accuracy of the candidate scoring function \(\hat{f}_{Y|X}\) under the test distribution \(p_{\text{test}}\), which is the group-balanced accuracy as we have demonstrated.\n\n**Proposition A.1** (sLA optimizes the group-balanced accuracy).: _Under the assumption that the hypothesis class of \(p_{\theta}(y|x,z)\) (eq. 4) contains \(p_{\text{data}}(y|x,z)\), the minimizer network \(h_{\theta^{*}}\) of the cross-entropy loss maximizes a lower bound to the group-balanced accuracy \(\operatorname{Acc}_{Y|X}(h_{\theta^{*}};p_{\text{test}})\)._\n\nProof.: We start from the quantity\n\n\[\operatorname{Acc}_{Y,Z|X}\bigl{(}f;p_{\text{test}}(\cdot|y,z)\bigr{)}\coloneqq \mathbb{P}_{x|y,z}^{\text{test}}\bigl{(}(y,z)=\operatorname*{argmax}_{y^{\prime },z^{\prime}}f(x)_{y^{\prime},z^{\prime}}\bigr{)} \tag{18}\]\n\nwhich is the classification accuracy conditioned on the pair \((y,z)\). Recall that under our modelling assumption Equation (14), the estimated Bayes-optimal scoring function for the multi-label class-imbalanced classification problem is\n\n\[\hat{f}_{Y,Z|X}(x)=\hat{p}_{\text{test}}(x\,|\,y,z)=\exp\big{(}h_{\theta^{*}}( x)_{y}\big{)}C(z,x), \tag{19}\]\n\nwhere \(C(z,x)\coloneqq\frac{\hat{p}_{\text{data}}(z|x)\hat{p}_{\text{data}}(x)}{\hat {Z}_{\theta}(z,x)\hat{p}_{\text{data}}(z)}\) is a constant in \(y\). We then substitute to get\n\n\[\operatorname{Acc}_{Y,Z|X}\bigl{(}\hat{f}_{Y,Z|X};p_{\text{test}}(\cdot|y,z) \bigr{)}=\mathbb{P}_{x|y,z}^{\text{test}}\Bigl{(}(y,z)=\operatorname*{argmax}_ {y^{\prime},z^{\prime}}\exp\big{(}h_{\theta^{*}}(x)_{y^{\prime}}\big{)}C(z^{ \prime},x)\Bigr{)}. \tag{20}\]\n\nHowever, the maximization over \(z^{\prime}\) can be pushed to the innermost term, since \(\exp\big{(}h_{\theta^{*}}(x)_{y^{\prime}}\big{)}\) is independent of \(z^{\prime}\) and \(\max_{z^{\prime}}C(z^{\prime},x)\) is independent of \(y^{\prime}\). 
This means that we can take the \(\operatorname{argmax}\) of each expression independently\n\n\[\operatorname{Acc}_{Y,Z|X}\bigl{(}\hat{f}_{Y,Z|X};p_{\text{test}} (\cdot|y,z)\bigr{)}= \tag{21}\] \[\mathbb{P}_{x|y,z}^{\text{(test)}}\bigl{(}\{y=\operatorname*{ argmax}_{y^{\prime}}\exp\big{(}h_{\theta^{*}}(x)_{y^{\prime}}\big{)}\}\cap\{z= \operatorname*{argmax}_{z^{\prime}}C(z^{\prime},x)\}\bigr{)},\]\n\nand since \(\mathbb{P}(A\cap B)\leq\mathbb{P}(A)\) and \(\exp\) is a strictly increasing function, we get\n\n\[\operatorname{Acc}_{Y,Z|X}\bigl{(}\hat{f}_{Y,Z|X};p_{\text{test}}(\cdot|y,z) \bigr{)}\leq\mathbb{P}_{x|y,z}^{\text{(test)}}\bigl{(}y=\operatorname*{argmax} _{y^{\prime}}h_{\theta^{*}}(x)_{y^{\prime}}\bigr{)}. \tag{22}\]\n\nWe recognise the scoring function \(\hat{f}_{Y|X}(x)\coloneqq h_{\theta^{*}}(x)\) on the right hand side of the expression. Finally, by summing the inequalities over all \((y,z)\in[K]\times[L]\) and dividing by \(KL\), we get\n\n\[\frac{1}{KL}\sum_{y,z}\operatorname{Acc}_{Y,Z|X}\bigl{(}\hat{f}_ {Y,Z|X};p_{\text{test}}(\cdot|y,z)\bigr{)}\leq\frac{1}{KL}\sum_{y,z}\mathbb{ P}_{x|y,z}^{\text{(test)}}\bigl{(}y=\operatorname*{argmax}_{y^{\prime}}\hat{f}_{Y|X} (x)_{y^{\prime}}\bigr{)} \tag{23}\] \[\operatorname{Acc}_{Y,Z|X}\bigl{(}\hat{f}_{Y,Z|X};p_{\text{test}} \bigr{)}\leq\operatorname{Acc}_{Y|X}(\hat{f}_{Y|X};p_{\text{test}}). \tag{24}\]\n\nThis shows that the scoring function \(\hat{f}_{Y|X}(x)\coloneqq h_{\theta^{*}}(x)\) maximizes a lower bound to the group-balanced accuracy. \n\n## Appendix B Self-supervised Learning Algorithms\n\n\begin{table}\n\begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{Target Attribute} & \multicolumn{2}{c}{Bias Attribute} \\ \cline{2-5} & \(p_{\text{data}}\) & \(p_{\text{test}}\) & \(p_{\text{data}}\) & \(p_{\text{test}}\) \\ \hline cMNIST (\%) & \(98.86\) & \(27.01\) & \(99.88\) & \(96.04\) \\ cCIFAR10 (\(10\)) & \(98.93\) & \(36.53\) & \(99.60\) & \(82.05\) \\ sMPI3D (\(C=3\)) & \(98.19\) & \(66.08\) & \(99.91\) & \(87.22\) \\ CelebA\({}^{*}\) & \(94.01\) & \(75.44\) & \(95.74\) & \(90.60\) \\ \hline \hline \end{tabular}\n\end{table}\nTable 6: Downstream online linear classification against the _target_ and _bias_ attributes of each task considered (see Appendix D). We present top-1 (average) accuracy (%) of the corresponding classifiers evaluated at the last epoch of MoCoV2+ training, under samples from the i.i.d. validation set (\(p_{\text{data}}\)) and the o.o.d. test set (\(p_{\text{test}}\)). \({}^{*}\)For CelebA, since the test split is not controlled to be group-balanced, we simulate the accuracy under \(p_{\text{test}}\) by utilizing the existing bias annotations to estimate a group-balanced test accuracy.\n\nIn our work, we use a contrastive learning algorithm [58, 14, 35, 15] to pretrain a base model for the bias proxy, as well as to initialize the debiased model. In contrastive learning, an instance discrimination pretext task is defined from unlabeled data. In particular, pairs of data points (positives), which are derived by independently augmenting the same observation (views), should have representations that are closer in distance than representations of other samples in the dataset (negatives). 
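As a concrete illustration of this instance-discrimination objective, a minimal sketch of an InfoNCE-style loss follows; the exact form used in this work is stated in Equation (25) right below, and the names (`q`, `k_pos`, `queue`, `tau`) are illustrative rather than taken from the authors' code.\n\n```python\n# Minimal sketch (illustrative only): InfoNCE-style contrastive loss over one batch.\n# q: [batch, d] query embeddings; k_pos: [batch, d] matching positive keys;\n# queue: [N, d] negative keys; all assumed L2-normalized; tau is the temperature.\nimport torch\nimport torch.nn.functional as F\n\n\ndef info_nce(q, k_pos, queue, tau=0.1):\n    pos = (q * k_pos).sum(dim=1, keepdim=True)   # [batch, 1] positive similarities\n    neg = q @ queue.t()                          # [batch, N] similarities to negatives\n    logits = torch.cat([pos, neg], dim=1) / tau  # the positive sits at index 0\n    labels = torch.zeros(q.size(0), dtype=torch.long)\n    return F.cross_entropy(logits, labels)\n```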
To achieve that we optimize a form of the InfoNCE loss [58]\n\n\\[\\mathcal{L}(q,k^{+},\\{k^{-}_{i}\\}_{i=1}^{N})=-\\log\\frac{\\exp(q^{\\top}k^{+}/ \\tau)}{\\exp(q^{\\top}k^{+}/\\tau)+\\sum_{i=1}^{N}\\exp(q^{\\top}k^{-}_{i}/\\tau)}, \\tag{25}\\]\n\nwhere \\((q,k^{+})\\) is the positive pair and \\(\\{k^{-}_{i}\\}_{i=1}^{N}\\) are the negatives. Typically, representations are projected to the unit hypersphere for training stability. In the MoCoV2+ framework [15], gradients are backpropagated only through the query representation (\\(q\\)) to the encoding network, while the key representations (\\(k^{+}\\) and \\(k^{-}\\)) are extracted by a network derived from the exponential moving average of its parameters. Finally, positive key representations of past batches are kept in a queue in memory to serve as negative key representations for the subsequent batches.\n\n## Appendix C uLA: Algorithm\n\nIn Algorithm 1, we provide a high-level description of the uLA methodology for training and validation of group-robust models without any bias annotations. In addition, a PyTorch [59] implementation is available at the following repository: [https://github.com/tsirif/uLA](https://github.com/tsirif/uLA).\n\n## Appendix D Datasets\n\n### Systematic MPI3D\n\nsMPI3D is our contributed dataset which we use to study combinatorial systematicity in classifiers. By this term, we designate the ability of classifiers to generalize to samples generated from novel combinations of generative attributes, which under \\(p_{\\text{data}}\\) have \\(0\\) probability. However, all the constituent values of the individual generative attributes in the novel combinations have been observed under some combination in the training data. The _real_ split of MPI3D [29] consists of photographs of a robotic arm that has a colored rigid object attached to its end effector. The images are captured in a way that controls their generative attributes, such as the shape of the rigid object or the position of the robotic arm.\n\nWe use this dataset to create development, and deployment splits to test for systematic generalization. In particular, we consider the shape of the rigid object to be the target variable \\(Y_{\\text{shape}}\\), while its color to be the bias \\(Z_{\\text{color}}\\). There are \\(6\\) possible shapes for the objects and \\(6\\) possible colors, totaling \\(36\\) combinations of attributes. Figure 1(d) illustrates pairs of attributes used to sample an example systematic training split. A training set is created by assigning \\(C\\) number of colors per shape so that: first, \\(p_{\\text{data}}(z_{\\text{color}}\\mid y_{\\text{shape}})=\\frac{1}{C}\\) if \\((y_{\\text{shape}},z_{\\text{color}})\\) exists in the elected pairs of attributes for training; otherwise \\(0\\). Second, we make sure that marginally all colors and shapes are represented uniformly, which means that \\(p_{\\text{data}}(y)=\\frac{1}{6}\\) and \\(p_{\\text{data}}(z)=\\frac{1}{6}\\). During testing, we desire to generalize to samples generated from \\(p_{\\text{test}}\\) which distributes pairs \\((y_{\\text{shape}},z_{\\text{color}})\\) uniformly, covering combinations that are entirely missing from the training set. In our benchmarks, we are testing methods for \\(C\\in\\{2,3,4,5\\}\\), against the same 5 sets of systematic splits which we generate independently per \\(C\\).\n\n**Systematic split generation.** As we described above, sMPI3D is generated using the _real_ \"split\" of the MPI3D dataset [29]. 
The dataset is distributed as a tensor of shape \((6,6,2,3,3,40,40,64,64,3)\), with the last three axes corresponding to pixel values of a \(64\times 64\) RGB image and the rest to coordinates for generative attribute values. The first two coordinates correspond to the color and shape attributes for the rigid object depicted in the image. The procedure for generating splits of sMPI3D is stochastic, meaning that it depends on a random seed set by the user. The reason for this is that we would like to benchmark methods against the possibility that some systematic splits are consistently easier to generalize to than others. To generate a train-validation-test split of sMPI3D, we first generate the pattern of included-excluded combinations for the development dataset (training and validation datasets), like the one presented in Figure 1(d). The pattern depends on the argument \(C\), the number of color values observed per shape value. Then, we randomly permute data points across the first two axes of the data tensor, which correspond to the color and shape attributes. Afterwards, we split the data tensor according to whether a certain position \((\operatorname{color},\operatorname{shape})\) belongs to the development dataset or not. We subsample the development dataset uniformly across included combinations for \(180k\) and \(18k\) mutually exclusive data points for the training and validation sets respectively, for all \(C\in\{2,3,4,5\}\). Finally, the remaining data points are combined and \(54k\) images are sampled uniformly from all combinations to create the unbiased test set. We provide code for generating systematic splits on MPI3D in the supplementary material.\n\n```\nData: Training split \(\mathcal{D}\coloneqq\{x_{m},y_{m}\}_{m=1}^{M}\)\nData: Validation split \(\mathcal{D}_{\operatorname{valid}}\coloneqq\{x_{n},y_{n}\}_{n=1}^{N}\)\nData: Hyperparameters: SSL checkpoint \(T_{\operatorname{ssl}}\), linear probe training steps \(T_{\operatorname{stop}}\), logit adjustment strength \(\eta\) and calibration \(\tau\), and optimization hyperparameters\nResult: Debiased model \(h_{\theta^{*}}\) and its validation_score\n/* Load pretrained SSL encoder */\n\(f_{\operatorname{base}}\leftarrow\operatorname{load}(T_{\operatorname{ssl}})\)\n/* Train linear head */\nDefine a classifier \(h_{\operatorname{bias}}=g_{\phi}\circ f_{\operatorname{base}}\) as the composition of the pretrained \(f_{\operatorname{base}}\) and a linear classifier \(g_{\phi}\) with parameters \(\phi\)\nPredict biased predictions using \(\hat{y}(x;h_{\operatorname{bias}})=\operatorname{argmax}_{y}h_{\operatorname{bias}}(x)_{y}\)\nfor \(t=1\) to \(T_{\operatorname{stop}}\) do\n    Sample batch \(B\) from \(\mathcal{D}\)\n    Update \(\phi\) so that the linear model \(g_{\phi}\) on top of \(f_{\operatorname{base}}(x)\) minimizes average cross-entropy on \(B\)\nend for\n/* Finetune with logit adjustment */\nDefine calibrated model of biased predictions \(p_{\operatorname{bias}}(y_{\operatorname{bias}}|x)\propto\exp\left(h_{\operatorname{bias}}(x)_{y_{\operatorname{bias}}}/\tau\right)\)\nCompute confusion matrix \(\hat{p}_{\operatorname{data}}(y,y_{\operatorname{bias}})\) // See Equation (5)\nDefine \(h_{\theta}\) to be the debiased model\nInitialize base model in \(h_{\theta}\) from pretrained \(f_{\operatorname{base}}\)\nPredict debiased predictions using 
\(\hat{y}(x;h_{\theta})=\operatorname{argmax}_{y}h_{\theta}(x)_{y}\)\ncheckpoints \(\leftarrow\) [ ]\nfor \(t=1\) to maximum number of iterations do\n    Sample batch \(B\) from \(\mathcal{D}\)\n    Update \(\theta\) so that the model \(p_{\theta}(y|x)\propto\exp\left(h_{\theta}(x)_{y}+\eta\log\hat{p}_{\operatorname {data}}\big{(}y|\hat{y}(x;h_{\operatorname{bias}})\big{)}\right)\) minimizes average cross-entropy on \(B\)\n    validation_score \(\leftarrow\) compute_validation_criterion(\(\mathcal{D}_{\operatorname{valid}},h_{\operatorname{bias}},h_{\theta}\)) // see Equation (7)\n    checkpoints \(\leftarrow\) checkpoints \(+\left[(\texttt{validation\_score},h_{\theta})\right]\)\nend for\nFinal debiased model: \(h_{\theta^{*}}\leftarrow\) the checkpoint in checkpoints with the maximum validation_score\n```\n\n**Algorithm 1** uLA: Logit adjustment without bias labels during training or model selection.\n\n### Non-systematic Benchmarks\n\n**Colored MNIST.** (cMNIST) is an RGB version of the MNIST dataset [47], in which the digit target variable \(Y_{\operatorname{digit}}\) is paired with a color bias variable \(Z_{\operatorname{color}}\) to draw an image of that digit using a specific color. In this work, we study the dataset which follows the data generation procedure from Nam et al. [57]. Each of the digits \(k\in\{0,...,9\}\) is paired with a distinct color out of a choice of ten, with \(p_{\operatorname{data}}(Z_{\operatorname{color}}=k\,|\,Y_{\operatorname{digit }}=k)=1-\beta\) over the available training samples for that digit \(k\). The rest of the probability mass is split uniformly across the remaining color options, \(p_{\operatorname{data}}(Z_{\operatorname{color}}=l\,|\,Y_{\operatorname{digit }}=k)=\frac{\beta}{9}\) for \(l\neq k\). Essentially, \(\beta\) controls the percentage of bias-conflicting samples in the training set. The goal is to train a classifier that performs well under the test set, in which each (digit, color) pair occurs with random chance \(1/10^{2}\). We obtain four different tasks by varying \(\beta\in\{0.5\%,1\%,2\%,5\%\}\).\n\n**Corrupted CIFAR10.** (cCIFAR10) is a modification of the CIFAR10 dataset [45] by Hendrycks and Dietterich [37], in which images are affected by a type of texture noise which correlates with the original target category. Similarly to cMNIST, there are 10 different types of texture noise, each of which is predominantly paired with a unique CIFAR10 label. The training set is created under \(p_{\text{data}}(Z_{\text{noise}}=k\,|\,Y_{\text{cifar}}=k)=1-\beta\), with the rest of the probability mass spread uniformly over the bias-conflicting texture-noise options. Our goal again is to perform well under the test set in which pairs of texture noise and labels are distributed uniformly. Four development datasets are used in the experiments with \(\beta\in\{0.5\%,1\%,2\%,5\%\}\).\n\n**Waterbirds.** Our study employs the Waterbirds dataset, devised by Sagawa et al. [61]. This dataset is derived from the CUB dataset's bird images [68], superimposed on backgrounds from the Places dataset [77]. The dataset classifies seabirds and waterfowl as waterbirds, and all other species as landbirds. Ocean and natural lake backgrounds from Places are labeled as 'water background', while bamboo and broadleaf forest backgrounds are termed 'land background'. 
Four groups emerge from the combinations of bird type (waterbird or landbird) and background (water or land). Waterbirds on land backgrounds and landbirds on water backgrounds are minority groups due to fewer examples, while the remaining two combinations constitute majority groups. The dataset maintains the original training, validation, and test splits from Sagawa et al. [61]. In training, 95% of waterbirds and landbirds are respectively paired with water and land backgrounds, and we report the worst-group test accuracy.\n\n**CelebA.** We examine a fair classification task using the CelebA dataset [51] of celebrity facial attributes. Following Sagawa et al. [61], we consider the _BlondHair_ attribute to be the target variable. Then, a spurious correlation can naturally be observed with the binarized gender attribute _Male_. Four groups emerge, of which the smallest minority is males with blond hair, followed by females with blond hair. We use the standard splits defined in the literature, and we report the worst-group test accuracy.\n\n## Appendix E Experiment Details\n\n**Architecture, Optimization, Augmentation and Validation.** For cMNIST, we train a 3-hidden-layer MLP with 100 hidden neurons per layer, while we use a ResNet18 [34] for cCIFAR10 and sMPI3D, and a ResNet50 for Waterbirds and CelebA. For all datasets except Waterbirds, we pretrain the base model with the MoCoV2+ [15] process, while for Waterbirds we use a ResNet50 already pretrained on Imagenet [60]. Training of the linear probe for the bias network and finetuning for the logit adjusted debiased network happen with the AdamW [52] optimizer using default momentum hyperparameters \(\beta=(0.9,0.999)\), except for Waterbirds where we use SGD with \(0.9\) momentum. The learning rate for all datasets, excluding Waterbirds, is scheduled with a cosine decaying rule. Batch size is set to \(256\) for cMNIST, cCIFAR10, sMPI3D and CelebA, and \(64\) for Waterbirds. We use minimal data augmentation for training the linear probe, as well as the debiased logit adjusted model, following the baselines we compare against for fair comparison. In more detail, we use random resized crops (RRC) and random horizontal flipping for all datasets, except cMNIST where no augmentation is used. Finally, training happens for a maximum of \(100\) epochs for all datasets (except Waterbirds, which we train for \(200\)), and the best model across the training duration is selected for each trial by evaluating our proposed validation score at the end of each epoch and monitoring for the maximum value.\n\n\begin{table}\n\begin{tabular}{l c c c c} \hline \hline & Learning Rate & Batch Size & Weight Decay & Temperature \\ \hline cMNIST & \(1.0\) & \(256\) & \(1\)e-\(3\) & \(0.1\) \\ cCIFAR10 & \(0.9\) & \(256\) & \(1\)e-\(4\) & \(0.1\) \\ sMPI3D & \(0.9\) & \(256\) & \(1\)e-\(4\) & \(0.1\) \\ CelebA & \(0.3\) & \(128\) & \(3\)e-\(5\) & \(0.1\) \\ \hline \hline \multicolumn{5}{c}{SSL Augmentations} \\ \hline cMNIST & \multicolumn{3}{c}{RRC + Gaussian Blur} \\ cCIFAR10 & RRC + Color Jitter + Gray Scale + Gaussian Blur + Horizontal Flip \\ sMPI3D & RRC + Color Jitter + Gray Scale + Gaussian Blur + Horizontal Flip \\ CelebA & \multicolumn{3}{c}{RRC + Color Jitter + Solarization + Horizontal Flip} \\ \hline \hline \end{tabular}\n\end{table}\nTable 7: Hyperparameters used during SSL pretraining.\n\n
For cMNIST, cCIFAR10 and sMPI3D, we use our bias-unsupervised group-balanced validation score (see Equation (7)), whereas for Waterbirds and CelebA we use the corresponding worst-group version.\n\n**Self-supervised Pretraining.** For all cases, we adapt self-supervised pretraining recipes from the _solo-learn_ software library [22]. In particular, we optimize the MoCoV2+ loss with a LARS optimizer [72] using \(\eta_{\text{LARS}}=0.002\). Furthermore, an MLP projector with a single hidden layer is used to project features to a hypersphere in a \(256\)-dimensional space. Cosine scheduling is used for the learning rate and for the exponential moving average coefficient of the momentum encoder. We train for a maximum of \(100\) epochs for the cMNIST, sMPI3D and CelebA tasks, and for \(500\) epochs for the cCIFAR10 datasets. We periodically save checkpoints of the base models across training, since the choice of which checkpoint to use as the basis for the bias proxy and for the debiased model is a hyperparameter of uLA. For each dataset, we perform a small random search over contrastive temperature and data augmentation by maximizing the i.i.d. validation accuracy of an online linear classifier on top of extracted representations. In general, we fix the contrastive temperature to \(0.1\) and use default Imagenet augmentations for sMPI3D and cCIFAR10, while we search for effective variants of those for cMNIST and CelebA. Details about the hyperparameter combinations used are displayed in Table 7.\n\nIn all cases, we verify that the augmentations in Table 7 **do not cause representations to be invariant to variations of the bias attribute**. We take extra care to verify that, especially for the synthetic tasks, improvements are due to our uLA methodology and not due to data augmentations that nullify the bias attribute; something which, in real-life tasks such as CelebA, might be impossible to achieve with hand-crafted transformations. We classify against the bias attribute using a linear classifier on extracted representations, and we find that, for the hyperparameters used, bias information is indeed retained (see Table 6).\n\n**Hyperparameter Search for uLA.** For each dataset and task, we search over the respective spaces defined in Table 8. In particular, we uniformly sample \(64\) independent configurations (\(128\) for Waterbirds) and choose the one with the best bias-unsupervised validation score (Equation (7)). 
To make the space a bit smaller, we consider the same weight decay for the training of the bias proxy and the debiased logit adjusted network, and we consider the same number of training epochs for the SSL-pretrained base network and the linear head of the bias proxy (which we train online in our implementation).\n\n\begin{table}\n\begin{tabular}{l c c c} \hline \hline & Learning Rate & Weight Decay & \(\eta\) \\ \hline cMNIST & \{5e-4, 1e-3, 2e-3, 5e-3\} & \{0.0,1e-4, 1e-3, 1e-2, 1e-1\} & \{1.0,1.25, 1.5\} \\ cCIFAR10 & \{1e-5, 5e-5, 1e-4, 5e-4, 1e-3\} & \{0.0,1e-4, 1e-3, 1e-2, 1e-1\} & \{1.0,1.25, 1.5, 1.75, 2.0\} \\ sMPI3D & \{1e-5, 5e-5, 1e-4, 5e-4, 1e-3\} & \{0.0,1e-4, 1e-3, 1e-2, 1e-1\} & 1.0 \\ Waterbirds & \{1e-4, 5e-4, 1e-3, 5e-3\} & \{0.0, 1e-4, 1e-2\} & \{1.0,1.5, 2.0, 2.5, 3.0\} \\ CelebA & \{1e-5, 5e-5, 1e-4, 5e-4\} & \{0.0, 1e-5, 1e-4, 1e-3, 1e-2\} & \{1.0,1.25, 1.5\} \\ \hline & \(\tau\) & \(T_{\text{ssl}}/T_{\text{stop}}\) \\ \hline cMNIST & \{0.1,1.0,2.0\} & \{10,20,50,100\} epochs \\ cCIFAR10 & \{0.5,1.0,2.0\} & \{100,200,300,400,500\} epochs \\ sMPI3D & 1.0 & \{10,20,50,100\} epochs \\ Waterbirds & \{0.1,0.5,1.0,2.0\} & \{50,100,200,500,1000,2000,5000\} steps \\ CelebA & \{0.5,1.0,2.0\} & \{10,20,50,100\} epochs \\ \hline \hline \end{tabular}\n\end{table}\nTable 8: Search space used to tune hyperparameters for each of the datasets considered.\n\n\begin{table}\n\begin{tabular}{l c c c c c c} \hline \hline & Learning Rate & Weight Decay & \(\eta\) & \(\tau\) & \(T_{\text{ssl}}\) (epochs) & \(T_{\text{stop}}\) \\ \hline cMNIST (1\%) & 2e-3 & 1e-3 & \(1.5\) & \(2.0\) & \(100\) & \(100\) epochs \\ cCIFAR10 (1\%) & 5e-5 & 1e-4 & \(1.0\) & \(1.0\) & \(500\) & \(500\) epochs \\ sMPI3D (\(C=3\)) & 5e-5 & 1e-4 & \(1.0\) & \(1.0\) & \(100\) & \(100\) epochs \\ Waterbirds & 1e-3 & \(0.0\) & \(3.0\) & \(0.1\) & - & \(50\) steps \\ CelebA & 1e-4 & \(0.0\) & \(1.5\) & \(0.5\) & \(10\) & \(10\) epochs \\ \hline \hline \end{tabular}\n\end{table}\nTable 9: Best hyperparameters selected by our proposed bias-unsupervised validation score.\n\nFor Waterbirds, the linear head is trained on top of a ResNet50 pretrained on Imagenet, which is taken from a PyTorch [59] model repository. The pretrained model's parameters remain frozen during the training of the bias proxy, so in this case there is no number of pretraining steps to select. The hyperparameters used for the benchmarking experiments that we presented in Section 4 can be found in Table 9.", "response": "## Review\n\n### Summary\n\nThis paper introduces a novel bias-unsupervised method for mitigating spurious correlations in classifiers without requiring group annotations during training or validation. The authors propose a Self-Supervised Learning (SSL)-based approach that utilizes a pre-trained model to inform the model selection and training process, specifically through a logit adjustment technique. The method is empirically validated against various benchmarks, including the newly introduced sMPI3D dataset, which highlights its effectiveness in systematic generalization tasks. 
Overall, while the method shows promise in addressing limitations of previous techniques, further clarity in some theoretical aspects and connections to existing methods would enhance its contributions.\n\n### Strengths\n\n- The proposed method, uLA, effectively addresses spurious correlations without group annotations.\n- Clear and well-structured presentation, making it easy to follow.\n- The introduction of the sMPI3D dataset is a significant contribution, providing a realistic testing ground for systematic generalization.\n- Empirical results demonstrate the method's efficacy across various benchmarks.\n\n### Weaknesses\n\n- Lack of clarity on how the proposed method achieves improvements in systematic generalization.\n- The paper does not sufficiently discuss important related works in the field, which could provide context for its contributions.\n- Certain sections, such as the explanation of the bias variable extraction and dataset construction, require more exposition.\n- Concerns about the efficiency and robustness of the uLA method, particularly regarding the choice of SSL algorithms and hyperparameter sensitivity.\n\n### Questions\n\n- What theoretical or intuitive justifications support the claim that the proposed method benefits systematic generalization?\n- Could the authors clarify the choice of SSL algorithm and its impact on the method's performance?\n- What empirical analyses support the reliability of the estimation method used in the proposed approach?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** Good. The paper presents a solid theoretical foundation and empirical results, but some claims need clearer justification.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** Good. The paper is generally well-written, though certain areas could benefit from improved clarity and exposition.\n\n### Contribution\n\n**Score:** 4\n\n**Description:** Excellent. The introduction of a new dataset and a novel method significantly advance the field of bias-unsupervised learning.\n\n### Rating\n\n**Score:** 6\n\n**Description:** Weak Accept: The paper is technically solid and has moderate-to-high impact potential, but it requires some clarifications and additional discussions.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The paper presents a novel approach to a significant problem in machine learning, showing empirical promise and introducing new methodologies and datasets. While some aspects require clarification, the overall contribution to the field is valuable enough to warrant acceptance.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. 
The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Double Auctions with Two-sided Bandit Feedback\n\n Soumya Basu\n\nGoogle Mountain View\n\nbasusoumya@google.com &Abishek Sankararaman\n\nAWS\n\nabishek.90@gmail.com\n\n###### Abstract\n\nDouble Auction enables decentralized transfer of goods between multiple buyers and sellers, thus underpinning functioning of many online marketplaces. Buyers and sellers compete in these markets through bidding, but do not often know their own valuation a-priori. As the allocation and pricing happens through bids, the profitability of participants, hence sustainability of such markets, depends crucially on learning respective valuations through repeated interactions. We initiate the study of Double Auction markets under bandit feedback on both buyers' and sellers' side. We show with confidence bound based bidding, and 'Average Pricing' there is an efficient price discovery among the participants. In particular, the regret on combined valuation of the buyers and the sellers - a.k.a. the social regret - is \\(O(\\log(T)/\\Delta)\\) in \\(T\\) rounds, where \\(\\Delta\\) is the minimum price gap. Moreover, the buyers and sellers exchanging goods attain \\(O(\\sqrt{T})\\) regret, individually. The buyers and sellers who do not benefit from exchange in turn only experience \\(O(\\log T/\\Delta)\\) regret individually in \\(T\\) rounds. We augment our upper bound by showing that \\(\\omega(\\sqrt{T})\\) individual regret, and \\(\\omega(\\log T)\\) social regret is unattainable in certain Double Auction markets. Our paper is the first to provide decentralized learning algorithms in a two-sided market where _both sides have uncertain preference_ that need to be learned.\n\n0\nFootnote 0: Work done when AS was affiliated with UC Berkeley. All opinions are that of the authors and do not necessarily represent that of the employers\n\n## 1 Introduction\n\nOnline marketplaces, such as eBay, Craigslist, Task Rabbit, Doordash, Uber, enables allocation of resources between supply and demand side agents at a scale through market mechanisms, and dynamic pricing. In many of these markets, the valuation of the resources are often personalized across agents (both supply and demand side), and remain apriori unknown. 
The agents learn their own respective valuations through repeated interactions while competing in the marketplace. In turn, the learning influences the outcomes of the market mechanisms. In a recent line of research, this interplay between learning and competition in markets has been studied in multiple systems, such as bipartite matching markets [29, 30, 39, 7], centralized basic auctions [26, 22]. These works follow the 'protocol model' [2], where multiple agents follow a similar protocol/algorithm, while each agent executes her protocol using only her own observations/world-view up to the point of execution.\n\nIn this paper, we initiate the study of the decentralized Double Auction market where multiple sellers and buyers, each with their own valuation, trades an indistinguishable good. In each round, the sellers and the buyers present bids for the goods.1 The auctioneer is then tasked with creating an allocation, and pricing for the goods. All sellers with bids smaller than the price set by the auctioneer sell at that price, whereas all the buyers with higher bids buy at that price. Each buyer and seller, is oblivious to all the prices including her own. Only a buyer, or a seller participating in the market observesher own valuation of the good (with added noise). Notably, our work tackles two-sided uncertainty, whereas the previous works mainly focused on one-sided uncertainty in the market.\n\nDouble auction is used in e-commerce [43] - including business-to-business and peer-to-peer markets, bandwidth allocation [24; 23], power allocation [31]. We focus on the 'Average Mechanism' for double auction. Average mechanism guarantees that the auctioneer and the auction participants incur no losses in the trade. It also ensures that each commodity is given to the participant that values it the highest, thus maximizing social welfare. Additionally, average mechanism can be implemented through simple transactions. These properties make average mechanism a suitable choice in large social markets, such as energy markets [32], environmental markets [35], cloud-computing markets [38; 32], bidding on wireless spectrum [18]. Our objective is to design a bandit average mechanism for double auction markets when the participants are a-prioriu unaware of their exact valuation.\n\nUnder average mechanism (detailed in Section 2.2) first an allocation is found, by maximizing \\(K\\) such that the \\(K\\) highest bidding buyers all bid higher than the \\(K\\) lowest bidding sellers. The price is set as the average of the \\(K\\)-th lowest bid among the buyers, and the \\(K\\) highest bid among the sellers for the chosen \\(K\\). We have two-sided uncertainty in the market, as both buyers and sellers do not know their own valuation. The uncertainty in bids manifests in two ways. Firstly, each buyer needs to compete with others by bidding high enough to get allotted so that she can discover her own price. Similarly, the sellers compete by bidding lower for price discovery. The competition-driven increase in buyers' bids, and decrease in the sellers' bids may decrease the utility that a buyer or seller generates. Secondly, as the valuation needs to be estimated, the price set in each round as a function of these estimated valuations (communicated to the auctioneer in the form of bids) remains noisy. This noise in price also decreases the utility. However, when price discovery is slow the noise in price increases. 
Therefore, the main challenge in decentralized double auction with learning is to strike a balance between the competition-driven increase/decrease of bids, and controlling the noise in price.\n\n### Main Contributions\n\nOur main contributions in this paper are as follows.\n\n1. Our paper is the first to provide decentralized learning algorithms in a two-sided market where _both sides have uncertain preference_ that need to be learned. Unlike in the setting with one sided uncertainity only, we identify that with _two-sided uncertainty_ in double auction markets _optimism_ in the face of uncertainty in learning (OFUL) from both sides causes _information flow bottleneck_ and thus not a good strategy. We introduce the notion of _domination of information flow_ - that increases the chance of trade and price discovery. The sellers bid the lower confidence bound (LCB), and buyers bid the upper confidence bound (UCB) of their respective valuation. By using UCB bids the buyers, and using the LCB bids the sellers decrease their reward and facilitate price discovery. Formally, with the above bids under average mechanism\n\n* We show that the social welfare regret, i.e. the regret in the combined valuation of all the sellers and the buyers in the market is \\(O(\\log(T)/\\Delta)\\) regret in \\(T\\) rounds with a minimum reward gap \\(\\Delta\\). We also show a \\(\\Omega(\\log(T)/\\Delta)\\) lower bound on social-welfare regret and thus our upper bound is order-optimal.\n* For individual regret, we show that each of the the sellers and the buyers that do not participate under the true valuations incur \\(O(\\log(T)/\\Delta)\\) regret, while the optimal participating buyers and sellers incur a \\(O(\\sqrt{T\\log(T)})\\) regret. Our upper bound holds for heterogeneous confidence widths, making it robust against the choices of the individual agents.\n\n2. We complement the upper bounds by showing price discovery itself is \\(\\Omega(\\sqrt{T})\\) hard in the minimax sense. Specifically, we consider a relaxed system where _(i)_ the price of the good is known to all, and _(ii)_ an infinite pool of resource exists, and hence any buyer willing to pay the price gets to buy, and any seller willing to sell at the price does so. We show under this setup, for any buyer or seller, there exists a system where that agent must incur a regret of \\(\\Omega(\\sqrt{T})\\). Similarly, we establish a \\(\\Omega(\\log(T))\\) lower bound for the social-welfare regret by showing that the centralized system can be reduced to a combinatorial semi-bandit and using the results of [14].\n\nSystem Model\n\nThe market consists of \\(N\\geq 1\\) buyers and \\(M\\geq 1\\) sellers, trading a _single type_ of item which are indistinguishable across sellers. This set of \\(M+N\\) market participants, repeatedly participate in the market for \\(T\\) rounds. Each buyer \\(i\\in[N]\\) has valuation \\(B_{i}\\geq 0\\), for the item and each seller \\(j\\in[M]\\) has valuation \\(S_{i}\\geq 0\\). No participant knows of their valuation apriori and learn it while repeatedly participating in the market over \\(T\\) rounds.\n\n### Interaction Protocol\n\nThe buyers and sellers interact through an auctioneer who implements a bilateral trade mechanism at each round. At each round \\(t\\geq 1\\), every buyer \\(i\\in[N]\\) submits bids \\(b_{i}(t)\\) and seller \\(j\\in[M]\\) submits asking price 2\\(s_{j}(t)\\) simultaneously. 
Based on the bids and asking prices in round \\(t\\), the auctioneer outputs _(i)_ subsets \\(\\mathcal{P}_{b}(t)\\subseteq[N]\\) and \\(\\mathcal{P}_{s}(t)\\subseteq[M]\\) of participating buyers and sellers with equal cardinality \\(K(t)\\leq\\min(M,N)\\), and _(ii)_ the trading price \\(p(t)\\) for the participating buyers and sellers in this round. Subsequently, every buyer \\(i\\in[N]\\) is _(i)_ either part of the trade at time \\(t\\), in which case she gets utility \\(r_{i}^{(B)}(t):=B_{i}+\\nu_{b,i}(t)-p(t)\\), or _(ii)_ is not part of the trade at time \\(t\\) and receives \\(0\\) utility along with a signal that she did not participate. Similarly, each seller \\(j\\in[M]\\) is either part of the trade and receives utility \\(r_{j}^{(S)}(t):=p(t)-S_{j}-\\nu_{s,j}(t)\\), or is informed she is not part of the trade and receives \\(0\\) utility. The random variables \\(\\nu_{b,i}(t)\\) and \\(\\nu_{s,j}(t)\\) for all \\(i\\in[N]\\), \\(j\\in[M]\\) and \\(t\\in[T]\\) are i.i.d., \\(0\\) mean, \\(1\\) sub-Gaussian random variables.3\n\nFootnote 2: Throughout, we refer to sellers ‘bids’ as their asking price\n\nFootnote 3: We study the system with \\(1\\)-sub-Gaussian to avoid clutter. Extension to general \\(\\sigma\\)-sub-Gaussian is trivial when \\(\\sigma\\) or an upper bound to it is known.\n\n### Average price mechanism\n\nThroughout the paper, we assume the auctioneer implements the average price mechanism in every round \\(t\\). Under this mechanism, at each round \\(t\\), the auctioneer orders the bids by the 'natural order', i.e., sorts the buyers bids in descending order and the seller's bids in ascending order. Denote by the sorted bids from the buyer and seller as \\(b_{i_{1}}(t)\\geq\\cdots b_{i_{N}}(t)\\) and the sorted sellers bids by \\(s_{j_{1}}(t)\\leq\\cdots s_{j_{M}}(t)\\). Denote by the index \\(K(t)\\) to be the largest index such that \\(b_{i_{K(t)}}(t)\\geq s_{j_{K(t)}}(t)\\). In words, \\(K(t)\\) is the 'break-even index' such that all buyers \\(i_{1},\\cdots,i_{K(t)}\\) have placed bids offering to buy at a price strictly larger than the asking price submitted by sellers \\(j_{1},\\cdots,j_{K(t)}\\). The auctioneer then selects the participating buyers \\(\\mathcal{P}_{b}(t)=\\{i_{1},\\cdots,i_{K(t)}\\}\\), and participating sellers \\(\\mathcal{P}_{s}(t)=\\{j_{1},\\cdots,j_{K(t)}\\}\\). The price is set to \\(p(t):=\\frac{b_{i_{K(t)}}+s_{j_{K(t)}}}{2}\\), and thus the name of the mechanism is deemed as the average mechanism.\n\n### Regret definition\n\nFor the given bilateral trade mechanism, and true valuations \\((B_{i})_{i\\in[N]}\\) and \\((S_{j})_{j\\in[M]}\\), denote by \\(K^{*}\\leq\\min(M,N)\\) be the number of matches and by \\(p^{*}\\) to be the price under the average mechanism when all the buyers and sellers bid their true valuations. Let \\(\\mathcal{P}_{b}^{*}\\) be to be set of the _optimal participating buyers_, and \\(\\mathcal{P}_{s}^{*}\\) to be set of the _optimal participating sellers_. For any buyer \\(i\\in[N]\\), we denote by \\((B_{i}-p^{*})\\) to be the true utility of the buyer. Similarly, for any seller \\(j\\in[N]\\), we denote by \\((p^{*}-S_{j})\\) to be the true utility of seller \\(j\\). From the description of the average mechanism, in the system with true valuations, all participating agents have non-negative true utilities.\n\nRecall from the protocol description in Section 2.1 that at any time \\(t\\), if buyer \\(i\\in[N]\\) participates, then she receives a mean utility of \\((B_{i}-p(t))\\). 
For a participating seller \\(j\\in[M]\\) her mean utility of \\((p(t)-S_{j})\\) in round \\(t\\). If in any round \\(t\\), if a buyer \\(i\\in[N]\\) or a seller \\(j\\in[M]\\) does not participate, then she receives a deterministic utility \\(0\\). The expected individual regret of a buyer \\(i\\), namely \\(R_{b,i}(T)\\)\n\nFigure 1: Average Mechanism with 6 Buyers and 5 Sellers\n\nand a seller \\(j\\), namely \\(R_{s,j}(T)\\), are defined as\n\n\\[R_{b,i}(T)=T(B_{i}-p^{*})\\mathbbm{1}(i\\leq K^{*})-\\mathbb{E}\\Big{[} \\sum_{t:i\\in\\mathcal{P}_{b}(t)}(B_{i}-p(t))\\Big{]},\\] \\[R_{s,j}(T)=T(p^{*}-S_{j})\\mathbbm{1}(j\\leq K^{*})-\\mathbb{E}\\Big{[} \\sum_{t:j\\in\\mathcal{P}_{s}(t)}(p(t)-S_{j})\\Big{]}.\\]\n\nAuctioneer has no regret as average mechanism is _budget balanced_, i.e. auctioneer does not gain or lose any utility during the process.\n\nWe also define the social welfare regret similar to gain from trade regret in [10]. The social welfare is defined as the total valuation of the goods after the transfer of goods from seller to buyer in each round. The expected (w.r.t reward noise) total valuation after transfer is thus defined as \\(\\Big{(}\\sum_{i\\in\\mathcal{P}_{b}(t)}B_{i}+\\sum_{j\\in[M]\\setminus\\mathcal{P}_{s }(t)}S_{j}\\Big{)}\\), while the the expected total valuation under oracle average mechanism is \\(\\Big{(}\\sum_{i\\in\\mathcal{P}_{b}^{*}}B_{i}+\\sum_{j\\in[M]\\setminus\\mathcal{P}_{ s}^{*}}S_{j}\\Big{)}\\). Therefore, the expected social welfare regret is defined as\n\n\\[R_{SW}(T)=T\\big{(}\\sum_{i\\in\\mathcal{P}_{b}^{*}}B_{i}+\\sum_{j\\in[M]\\setminus \\mathcal{P}_{s}^{*}}S_{j}\\big{)}-\\mathbb{E}\\Big{[}\\sum_{t=1}^{T}\\big{(}\\sum_{i \\in\\mathcal{P}_{b}(t)}B_{i}+\\sum_{j\\in[M]\\setminus\\mathcal{P}_{s}(t)}S_{j} \\Big{)}\\Big{]}. \\tag{1}\\]\n\n## 3 Decentralized Bidding for Domination of Information Flow\n\nWe consider the decentralized system where each market participant bids based on their own observation, without any additional communication. The core idea is balancing _domination of information flow_ and _overunder bidding_, i.e. ensuring the number of allocation is not less than \\(K^{*}\\) in each round with high probability, and the bids converge to each agent's true valuation.\n\nEach seller \\(j\\in[M]\\), with \\(n_{s,j}(t)\\) participation upto round \\(t\\), at time \\(t+1\\) bids the lower confidence bound (scaled by \\(\\alpha_{s,j}\\)), LCB(\\(\\alpha_{s,j}\\)) in short, of its own valuation of the item. Each buyer \\(i\\in[N]\\), with \\(n_{s,j}(t)\\) participation upto round \\(t\\), at time \\(t+1\\), bids the upper confidence bound (scaled by \\(\\alpha_{b,i}\\)), UCB(\\(\\alpha_{b,i}\\)) in short, of its own valuation of the item. The bids are specified in Equation 2.\n\n\\[s_{j}(t+1)=\\hat{s}_{j}(t)-\\sqrt{\\frac{\\alpha_{s,j}\\log(t)}{n_{s,j}(t)}},\\quad b _{j}(t+1)=\\hat{b}_{i}(t)+\\sqrt{\\frac{\\alpha_{b,i}\\log(t)}{n_{b,i}(t)}} \\tag{2}\\]\n\nHere, \\(\\hat{s}_{j}(t)=\\frac{1}{n_{s,j}(t)}\\sum_{t^{\\prime}\\leq t:j\\in\\mathcal{P}_{s}( t^{\\prime})}Y_{s,j}(t^{\\prime})\\), and \\(\\hat{b}_{i}(t)=\\frac{1}{n_{b,i}(t)}\\sum_{t^{\\prime}\\leq t:i\\in\\mathcal{P}_{b}( t^{\\prime})}Y_{b,i}(t^{\\prime})\\) are the observed empirical valuation of the item upto time \\(t\\) by seller \\(j\\), and buyer \\(i\\), respectively.\n\nOur buyers and sellers follow the _protocol model_ (which is ubiquitous in bandit learning for markets) and agree on UCB and LCB based bids, respectively. 
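To make Equation 2 and the average mechanism of Section 2.2 concrete, here is a minimal Python sketch of a single round (an illustration under our own naming and initialization conventions, not code from the paper; in particular, agents with no samples are assumed to bid 1 and ask 0, which the paper does not prescribe).

```python
import numpy as np


def average_mechanism(buyer_bids, seller_asks):
    """One round of the average price mechanism.

    Sort buyer bids in descending and seller asks in ascending order,
    find the break-even index K (largest K with b_{i_K} >= s_{j_K}),
    and set the price to the average of the K-th bid and K-th ask.
    Returns (participating buyer indices, participating seller indices, price).
    """
    buyer_bids = np.asarray(buyer_bids, dtype=float)
    seller_asks = np.asarray(seller_asks, dtype=float)
    buyer_order = np.argsort(-buyer_bids)
    seller_order = np.argsort(seller_asks)
    k, limit = 0, min(len(buyer_bids), len(seller_asks))
    while k < limit and buyer_bids[buyer_order[k]] >= seller_asks[seller_order[k]]:
        k += 1
    if k == 0:
        return [], [], None  # no trade this round
    price = 0.5 * (buyer_bids[buyer_order[k - 1]] + seller_asks[seller_order[k - 1]])
    return list(buyer_order[:k]), list(seller_order[:k]), price


def ucb_bid(emp_mean, n_part, t, alpha=4.0, prior=1.0):
    """Buyer's bid (Equation 2); `prior` is our choice for agents with no samples."""
    return prior if n_part == 0 else emp_mean + np.sqrt(alpha * np.log(t) / n_part)


def lcb_ask(emp_mean, n_part, t, alpha=4.0, prior=0.0):
    """Seller's ask (Equation 2); `prior` is our choice for agents with no samples."""
    return prior if n_part == 0 else emp_mean - np.sqrt(alpha * np.log(t) / n_part)
```

For example, `average_mechanism([0.9, 0.7, 0.2], [0.3, 0.8])` yields K = 1: the highest bidder trades with the lowest asker at price (0.9 + 0.3)/2 = 0.6, since the second-highest bid 0.7 is below the second-lowest ask 0.8. Each agent computes her bid purely from her own observations, as required by the decentralized protocol.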
They are _heterogeneous_ as they may use different \\(\\alpha_{b,i}\\) and \\(\\alpha_{s,j}\\) scaling parameters. The only restriction (as seen in Section 4) is \\(\\min\\{\\alpha_{b,i},\\alpha_{s,j}\\}\\geq 4\\) which they agree on as part of the protocol.\n\n**Key Insights:** We now contrast our algorithm design from standard multi-armed-bandit (MAB) problems. In a typical MAB problems, including other multi-agent settings, algorithm is designed based on optimism in the face of uncertainty in learning (OFUL) principle [1]. The UCB-type indices under OFUL arises as optimistic estimate of the rewards of arms/actions. However, such optimism used from both sides, i.e. both buyer and seller using UCB indices, may lead to a standstill. The bids of the buyers with constant probability can remain below the sellers' bids. Instead, we emphasize information flow through trade. In our algorithm buyers' UCB bids and sellers' LCB bids ensure that the system increases the chance for buyer and seller to participate in each round as compared to using their true valuations. There is _domination of information flow_, and consequently they discover their own valuation in the market.\n\nHowever, too aggressive over or under bidding can disrupt the price setting process. In particular, if even one non-participating buyer is bidding high enough (due to UCB bids) to exceed a non-participating seller's price, firstly she participates and accrues regret. More importantly, the participating sets deviates, resulting in a deviation of the price of the good from \\(p^{*}\\). Thus resulting in regret for all participating agents as well. Similar problems arise if one or more non-participatingseller predicts lower. On the other hand, too low aggression is also harmful as the price discovery may not happen resulting in deviation of participating set, deviation of price, and high regret. In the next section, we show the aggression remains within desired range, i.e. the regret of the agents remain low, even with heterogeneous UCBs and LCBs.\n\n## 4 Regret Upper Bound\n\nIn this section, we derive the regret upper bound for all the buyers and sellers in the system. Without loss of generality, let us assume that the buyers are sorted in decreasing order of valuation \\(B_{1}>B_{2}>...>B_{N}\\). 
The sellers are sorted in increasing order of valuation \\(S_{1}<S_{2}<...<S_{M}\\), and \\(\\Delta>0\\) denotes the minimum gap between any two distinct valuations.\n\nOur first main result, Theorem 1, proves an \\(O(\\log(T)/\\Delta)\\) upper bound for the social welfare regret.\n\n**Theorem 1**.: _The expected social welfare regret of the Average mechanism with buyers bidding UCB(\\(\\mathbf{\\alpha}_{b}\\)), and sellers bidding LCB(\\(\\mathbf{\\alpha}_{s}\\)) of their estimated valuation, for \\(\\alpha_{\\min}>2\\), is bounded as:_\n\n\\[R_{SW}(T)\\leq\\sum_{i\\leq K^{*}}\\sum_{i^{\\prime}>K^{*}}\\frac{(\\sqrt{\\alpha_{\\max}}+2)^{2}}{(B_{i}-B_{i^{\\prime}})}\\log(T)+\\sum_{j\\leq K^{*}}\\sum_{j^{\\prime}>K^{*}}\\frac{(\\sqrt{\\alpha_{\\max}}+2)^{2}}{(S_{j^{\\prime}}-S_{j})}\\log(T)\\\\ +\\sum_{j^{\\prime}>K^{*}}\\sum_{i^{\\prime}>K^{*}}\\frac{(\\sqrt{\\alpha_{\\max}}+2)^{2}}{(S_{j^{\\prime}}-B_{i^{\\prime}})}\\log(T)+MNb_{\\max}\\zeta(\\alpha_{\\min}/2),\\]\n\n_where \\(\\zeta(x)\\) is the Riemann-zeta function, which is finite for \\(x>1\\)._\n\nThe following theorem provides the individual regret bounds for all sellers and buyers in \\(T\\) rounds.\n\n**Theorem 2**.: _The expected regret of the Average mechanism with buyers bidding UCB(\\(\\mathbf{\\alpha}_{b}\\)), and sellers bidding LCB(\\(\\mathbf{\\alpha}_{s}\\)) of their estimated valuation, for \\(\\alpha_{\\min}>2\\), is bounded as:_\n\n* _for a participating buyer_ \\(i\\in[K^{*}]\\) _as_ \\(R_{b,i}(T)\\leq(2+\\sqrt{\\alpha_{\\max}})\\sqrt{T\\log(T)}+C_{b^{\\prime},i}\\log(T)\\)_,_\n* _for a participating seller_ \\(j\\in[K^{*}]\\) _as_ \\(R_{s,j}(T)\\leq(2+\\sqrt{\\alpha_{\\max}})\\sqrt{T\\log(T)}+C_{s^{\\prime},j}\\log(T)\\)_,_\n* _for a non-participating buyer_ \\(i\\geq(K^{*}+1)\\) _as_ \\(R_{b,i}(T)\\leq\\frac{\\sqrt{(M-K^{*}+1)(2+\\sqrt{\\alpha_{\\max}})^{2}}}{(B_{K^{*}}-B_{i})}\\log(T)\\)_,_\n* _for a non-participating seller_ \\(j\\geq(K^{*}+1)\\) _as_ \\(R_{s,j}(T)\\leq\\frac{\\sqrt{(N-K^{*}+1)(2+\\sqrt{\\alpha_{\\max}})^{2}}}{(S_{j}-S_{K^{*}})}\\log(T)\\)_._\n\n_Here \\(C_{b^{\\prime},i}\\) and \\(C_{s^{\\prime},j}\\) are \\(O\\left(\\frac{(M-K^{*}+1)(N-K^{*}+1)}{\\Delta}\\right)\\) constants (see Theorem 16 in Appendix A)._\n\nWe summarize our main results (with lower bounds taken from Section 5) in Table 1.\n\nSeveral comments on our main results are in order.\n\n**Social and Individual Regret:** The social regret, as well as the individual regret of the optimal non-participating buyers and sellers, grows as \\(O(\\log(T))\\) under the average mechanism with confidence bound bids. However, the individual regret of the optimal participating agents grows as \\(O(\\sqrt{T\\log(T)})\\). The social regret is determined by how many times participating buyers and sellers fail to participate while non-participants end up participating.\n\n\\begin{table}\n\\begin{tabular}{|l|c|c|c|c|} \\hline \\multirow{2}{*}{**Regret**} & \\multirow{2}{*}{**Social**} & \\multirow{2}{*}{**Participant**} & \\multicolumn{2}{c|}{**Non-participant**} \\\\ \\cline{4-5} & & & **Buyer** & **Seller** \\\\ \\hline\n**Upper** & \\(O(\\frac{MN}{\\Delta}\\log(T))\\) & \\(O((\\sqrt{T}+\\frac{MN}{\\Delta})\\log(T))\\) & \\(O(\\frac{\\sqrt{M}}{\\Delta}\\log(T))\\) & \\(O(\\frac{\\sqrt{N}}{\\Delta}\\log(T))\\) \\\\ \\hline\n**Lower** & \\(\\Omega(\\frac{M+N}{\\Delta}\\log(T))\\) & \\(\\Omega(\\sqrt{T})^{*}\\) & 0 & 0 \\\\ \\hline \\end{tabular}\n\\end{table}\nTable 1: Regret bounds derived in this paper. The participant regret lower bound is instance independent, indicated by (\\(*\\)). The rest are instance dependent.
A lower bound of 0 indicates the absence of a non-trivial bound.\n\nAlso, the non-participants only accrue regret through participation. The effect of price setting is not present in either case, and counting the number of bad events leads to \\(O(\\log(T))\\) regret. For the participating agents, the error in price-setting dominates their respective individual regret of \\(O(\\sqrt{T\\log(T)})\\).\n\n**Individually rational for \\(\\mathbf{\\Delta}=\\mathbf{\\omega}\\big{(}\\sqrt{\\log(T)/T}\\big{)}\\):** We observe that the non-participating buyers and sellers only incur individual regret \\(O(\\log(T)/\\Delta)\\), while the participating buyers and sellers incur individual regret \\(O(\\sqrt{T\\log(T)})\\). This is reassuring, as it does not discourage buyers and sellers from participation for a large range of systems a-priori. Indeed, for the last participating buyer and seller the utility is \\((B_{K^{*}}-S_{K^{*}})/2\\), which is close to \\(\\Delta\\). Hence, as long as \\(\\Delta T=\\omega(\\max\\{\\log(T)/\\Delta,\\sqrt{T\\log(T)}\\})\\), or \\(\\Delta=\\omega(\\sqrt{\\log(T)/T})\\), a non-participating buyer or seller prefers entering the market, discovering her price and getting out, as compared to not participating in the beginning. Also, a participating buyer or seller is guaranteed a return through participation.\n\n**Incentives and Deviations:** We now discuss the incentives of individual users, closely following the notion of symmetric equilibrium in double auctions [42]. In this setting, a _myopic agent_ is one who knows her true valuation and greedily maximizes her reward in each round, assuming all the non-strategic agents use confidence-based bidding. For the average mechanism, only the price-setting agents (i.e. the \\(K^{*}\\)-th buyer and the \\(K^{*}\\)-th seller) have an incentive to deviate from their true valuation to increase their single-round reward. The _non-price-setting agents are truthful_. When the \\(K^{*}\\)-th buyer deviates, each participating buyer has an average per round surplus of \\((B_{K^{*}}-\\max(S_{K^{*}},B_{K^{*}+1}))/2\\), and each participating seller has the same deficit. On the contrary, when the \\(K^{*}\\)-th seller deviates, each participating seller has an average per round surplus of \\((S_{K^{*}}-\\min(B_{K^{*}},S_{K^{*}+1}))/2\\), and each participating buyer has the same average deficit in each round. See Appendix C for more discussions.\n\n**Scaling of Regret with \\(N\\), \\(M\\), \\(K^{*}\\), \\(\\mathbf{\\Delta}\\):**\n\n_Social regret:_ The social regret scales as \\(O((MN-(K^{*})^{2})\\log(T)/\\Delta)\\). When participant buyers are replaced by non-participant buyers, the valuation decreases, which leads to \\(O(K^{*}(N-K^{*})\\log(T)/\\Delta)\\) regret. Similarly, a participant seller getting replaced by a non-participant seller introduces \\(O(K^{*}(M-K^{*})\\log(T)/\\Delta)\\) social regret. Finally, as goods move from non-participant sellers to non-participant buyers, we obtain \\(O((M-K^{*})(N-K^{*})\\log(T)/\\Delta)\\) regret.\n\n_Non-participant regret:_ The individual regret scales as \\(O(\\sqrt{M}\\log(T)/\\Delta)\\) for a non-participant buyer, whereas it scales as \\(O(\\sqrt{N}\\log(T)/\\Delta)\\) for a non-participant seller. For any non-participant buyer, once she has \\(O(\\log(T)/\\Delta^{2})\\) samples she no longer falls in the top \\(K^{*}\\) buyers. However, she can keep participating until each non-participant seller collects enough samples, i.e. \\(O(\\log(T)/\\Delta^{2})\\) samples.
Finally, the regret is shown to be \\(O(\\sqrt{\\#\\text{participations}\\log(T)})\\), which leads to the \\(O(\\sqrt{M}\\log(T)/\\Delta)\\) bound.\n\n_Participants:_ The leading \\(O(\\sqrt{T\\log(T)})\\) term in the regret of each optimal participating buyer and seller does not scale with the size of the system. This leading term depends mainly on the random fluctuation of the bid of the lowest bidding participating buyer and the highest bidding participating seller. The \\(O\\left(\\frac{MN}{\\Delta}\\log(T)\\right)\\) regret for the participating buyers and sellers arises because each time a non-participating buyer or seller ends up participating, the price deviates.\n\n_Special Case \\(K^{*}\\approx N\\approx M\\):_ We see that when the number of participants in the system is very high, i.e. \\((N-K^{*}),(M-K^{*})=O(1)\\), then the \\(O(\\log(T)/\\Delta)\\) component, hence the regret, per buyer/seller does not scale with the system size. Furthermore, the social regret also scales as \\(O((MN-(K^{*})^{2})\\log(T)/\\Delta)\\approx O(\\log(T)/\\Delta)\\). This indicates rapid learning, as all the participants discover their own price almost every round.\n\n### Proof Outline of Regret Upper Bounds\n\nWe now present an outline of the proof of Theorem 2. The full proof is given in Appendix A. A salient challenge in our proof comes from _two-sided uncertainty_. In Double auction, the outcomes of the buyers' and sellers' sides are inherently coupled. Hence, errors on the buyers' side propagate to the sellers' side, and vice versa, under two-sided uncertainty. For example, if the buyers are bidding lower than their true valuations, then the optimal non-participating sellers, even with perfect knowledge of their own valuations, end up participating. We show that optimal participants (both buyers and sellers) get decoupled, whereas for optimal non-participants, regret leads to new information on both the buyer and seller sides. This breaks the two-sided uncertainty obstacle.\n\n**Monotonicity and Information Flow:** Our proof leverages _monotonicity_ of the average mechanism, which means that if the bid of each buyer is equal or higher, and simultaneously the bid of each seller is equal or lower, then the number of participants increases (Proposition 7). Additionally, UCB ensures each buyer w.h.p. bids higher than her true value, and LCB ensures each seller w.h.p. bids lower than her true value. Therefore, we have _domination of the information flow_: \\(K(t)\\geq K^{*}\\) w.h.p. for all \\(t\\geq 1\\).\n\n**Social Regret Decomposition:** For any sample path with \\(K(t)\\geq K^{*}\\), the social regret, namely \\(r_{SW}(T)\\), under the average mechanism can be bounded as\n\n\\[r_{SW}(T)\\leq\\sum_{i\\leq K^{*}}\\sum_{i^{\\prime}>K^{*}}(B_{i}-B_{i^{\\prime}})\\sum_{t=1}^{T}\\mathbbm{1}(i\\notin\\mathcal{P}_{b}(t),i^{\\prime}\\in\\mathcal{P}_{b}(t))+\\sum_{j\\leq K^{*}}\\sum_{j^{\\prime}>K^{*}}(S_{j^{\\prime}}-S_{j})\\sum_{t=1}^{T}\\mathbbm{1}(j\\notin\\mathcal{P}_{s}(t),j^{\\prime}\\in\\mathcal{P}_{s}(t))\\\\ +\\sum_{j^{\\prime}>K^{*}}\\sum_{i^{\\prime}>K^{*}}(S_{j^{\\prime}}-B_{i^{\\prime}})\\sum_{t=1}^{T}\\mathbbm{1}(j^{\\prime}\\in\\mathcal{P}_{s}(t),i^{\\prime}\\in\\mathcal{P}_{b}(t)).\\]\n\nThe first term corresponds to a non-participant buyer replacing a participant buyer. The second term is the same for sellers, whereas the final term corresponds to a non-participant buyer and a non-participant seller getting matched.\n\n**Individual Regret Decomposition:** We now turn to the individual regrets.
Regret of a non-participant buyer \\(i\\), can be bounded by\n\n\\[\\sum_{t:i\\in\\mathcal{P}_{b}(t)}(p(t)-B_{i})\\leq\\sum_{t:i\\in \\mathcal{P}_{b}(t)}(b_{i}(t)-B_{i})\\lessapprox\\sum_{t:i\\in\\mathcal{P}_{b}(t)} \\sqrt{\\frac{\\alpha_{b,i}\\log(t)}{n_{b,i}(t)}}\\lessapprox\\sqrt{\\alpha_{b,i}n_{b, i}(T)\\log(T)}.\\]\n\nWe have \\(p(t)\\) lesser than \\(b_{i}(t)\\) because \\(i\\)-th buyer participates in round \\(t\\). By UCB property w.h.p. \\(b_{i}(t)\\) is at most \\(\\sqrt{\\frac{\\alpha_{b,i}\\log(t)}{n_{b,i}(t)}}\\) away from \\(B_{i}\\). A similar argument shows for a non-participating seller \\(j\\) the regret is roughly \\(\\sqrt{\\alpha_{s,j}n_{s,j}(T)\\log(T)}\\).\n\nFor a participating buyer \\(i\\), we have the regret bounded as\n\n\\[\\sum_{t:i\\notin\\mathcal{P}_{b}(t)}(B_{i}-p^{*})+\\sum_{t:i\\in \\mathcal{P}_{b}(t)}(p(t)-p^{*})=(T-n_{b,i}(T))(B_{i}-p^{*})+\\sum_{t:i\\in \\mathcal{P}_{b}(t)}(p(t)-p^{*}).\\]\n\nSimilarly, a participating seller \\(j\\) has regret bound \\((T-n_{s,j}(T))(p^{*}-S_{j})+\\sum_{t:j\\in\\mathcal{P}_{s}(t)}(p^{*}-p(t))\\).\n\n**Decoupling Participants:** In Lemma 10 in Appendix A, we show that learning is decoupled for the optimal participating buyers, and optimal participating sellers. It lower bounds the number of participation for optimal participating buyers and sellers.\n\n**Lemma 3** (Informal statement of Lemma 10).: _For \\(\\alpha_{\\min}>4\\), w.h.p., for every \\(i,j\\in[K^{*}]\\), and \\(i^{\\prime},j^{\\prime}>K^{*}\\)_\n\n\\[(T-n_{b,i}(T))\\lessapprox\\sum_{i^{\\prime\\prime}\\geq K^{*}+1}\\frac{ \\alpha_{b,i^{\\prime\\prime}}\\log(T)}{(B_{i}-B_{i^{\\prime\\prime}})^{2}},(T-n_{s,j }(T))\\lessapprox\\sum_{j^{\\prime\\prime}\\geq K^{*}+1}\\frac{\\alpha_{s,i^{\\prime \\prime}}\\log(T)}{(S_{j^{\\prime\\prime}}-S_{j})^{2}},\\] \\[\\sum_{t=1}^{T}\\mathbbm{1}(i\\sharp\\mathcal{P}_{b}(t),i^{\\prime}\\in \\mathcal{P}_{b}(t))\\lessapprox\\frac{\\alpha_{b,i^{\\prime}}\\log(T)}{(B_{i}-B_{i^ {\\prime}})^{2}},\\text{ and}\\sum_{t=1}^{T}\\mathbbm{1}(j\\sharp\\mathcal{P}_{s}(t),j^ {\\prime}\\in\\mathcal{P}_{s}(t))\\lessapprox\\frac{\\alpha_{b,i^{\\prime}}\\log(T)}{(S _{j^{\\prime}}-S_{j})^{2}}.\\]\n\nIt argues after a optimal non-participant \\(i^{\\prime\\prime}\\) gets \\(O(\\frac{\\log(T)}{(B_{i}-B_{i^{\\prime\\prime}})^{2}})\\) samples it can not participate while \\(i\\) does not participate as the bid of \\(i\\) is higher than the bid of \\(i^{\\prime\\prime}\\) with high probability. Similarly seller side result follow.\n\n**Non-Participant regret leads to two-sided Learning:** Unlike optimal participating agents, the effect of uncertainties on the optimal non-participating buyers and sellers cannot be decoupled directly. With at least one non-participating buyer with large estimation error in her valuation present, the non-participating sellers can keep participating even with perfect knowledge. However, this does not ensure directly that the estimation error of this non-participating buyer decreases. Next, in Lemma 12 in Appendix A, we upper bound the number of times a non-participant can participate. 
Informally we have\n\n**Lemma 4** (Informal).: _For \\(\\alpha_{\\min}>4\\), with high probability for any \\(i,j\\geq(K^{*}+1)\\),_\n\n\\[n_{b,i}(T)\\lessapprox\\frac{\\alpha_{b,i}\\log(T)}{(B_{K^{*}}-B_{i})^{2}}+\\sum_{j^{ \\prime}\\geq(K^{*}+1)}\\frac{\\alpha_{s,i^{\\prime}}\\log(T)}{(S_{j^{\\prime}}-B_{i}) ^{2}},n_{s,j}(T)\\lessapprox\\frac{\\alpha_{s,i}\\log(T)}{(S_{j}-S_{K^{*}})^{2}}+ \\sum_{i^{\\prime}\\geq(K^{*}+1)}\\frac{\\alpha_{b,i^{\\prime}}\\log(T)}{(S_{j}-B_{i^ {\\prime}})^{2}},\\]\n\n_and \\(\\sum_{t=1}^{T}\\mathbbm{1}(j^{\\prime}\\in\\mathcal{P}_{s}(t),i^{\\prime}\\in \\mathcal{P}_{b}(t))\\lessapprox\\frac{\\alpha_{\\max}\\log(T)}{(S_{j^{\\prime}}-B_{i ^{\\prime}})^{2}}\\)._\n\nA non-participant buyer \\(i\\) after obtaining \\(O(\\frac{\\log(T)}{(B_{K^{*}}-B_{i})^{2}})\\) samples does not belong to top \\(K^{*}\\) w.h.p. Hence, she participates with non-negligible probability only if a seller \\(j^{\\prime}\\geq(K^{*}+1)\\) participates. However, this implies that with each new match of buyer \\(i\\), additionally at least one non-participating seller is matched, decreasing both their uncertainties. For this buyer \\(i\\), we argue such spurious participation happens a total of \\(\\sum_{j^{\\prime}\\geq(K^{*}+1)}O(\\frac{\\log(T)}{(S_{j^{\\prime}}-B_{i})^{2}})\\) times. After that all the non-participating sellers \\(j\\geq(K^{*}+1)\\) will have enough samples so that their LCB bids will separate from the \\(i\\)-th buyer's UCB bid. Reversing sellers' and buyers' roles does the rest.\n\n**Bounding Price Difference:** The final part of the proof establishes bound on the cumulative difference of price from the true price \\(p^{*}\\) (see, Lemma 15 in Appendix A).\n\n**Lemma 5**.: _(Informal statement of Lemma 15) For \\(\\alpha_{\\min}>4\\), w.h.p. \\(\\sum_{t=1}^{T}|p(t)-p^{*}|\\lessapprox C\\log(T)+\\sqrt{\\alpha_{max}T\\log(T)},\\) where \\(C=O\\big{(}\\frac{(M-K^{*})(N-K^{*})}{\\Delta}\\big{)}\\)._\n\nLet us focus on the first upper bound, i.e. of the cumulative value of \\((p(t)-p^{*})\\). The proof breaks down the difference into two terms, \\(2(p(t)-p^{*})=(\\min_{i\\in\\mathcal{P}_{b}(t)}b_{i}(t)-B_{K^{*}})+(S_{K^{*}}- \\max_{j\\in\\mathcal{P}_{s}(t)}s_{j}(t))\\). For rounds when the buyer \\(K^{*}\\) is present (which happens all but \\(O\\big{(}\\frac{(N-K^{*})\\log(T)}{(B_{i}-p^{*})^{2}}\\big{)}\\) rounds) we can replace \\(\\min_{i\\in\\mathcal{P}_{b}(t)}b_{i}(t)\\) with \\(b_{K^{*}}(t)\\). Finally noticing that \\(\\sum_{t}(b_{K^{*}}(t)-B_{K^{*}})\\lessapprox\\sqrt{\\alpha_{max}}\\sqrt{T\\log(T)}\\) takes care of the first term. For the second term, we need to study the process \\(\\max_{j\\in\\mathcal{P}_{s}(t)}s_{j}(t)\\). First we bound the number of times sellers \\(1\\) to \\((K^{*}-1)\\) crosses the seller \\(K^{*}\\). Next we eliminate all the rounds where at least one seller \\(j\\geq(K^{*}+1)\\) are participating. Such an elimination comes at a cost of \\(O\\big{(}\\frac{(M-K^{*})}{\\Delta}\\big{)}\\) For any seller \\(j\\geq(K^{*}+1)\\) this happens \\(O\\big{(}\\frac{(N-K^{*})\\log(T)}{\\Delta_{j}^{2}}\\big{)}\\) times for some appropriate \\(\\Delta_{j}\\), and gives \\(\\Delta_{j}\\) regret in each round. This final step gives us the dominating \\(O\\big{(}\\frac{(M-K^{*})(N-K^{*})}{\\Delta}\\big{)}\\) term. 
The bound for the cumulative value of \\((p^{*}-p(t))\\) follows analogously.\n\n## 5 Lower Bounds\n\n### \\(\\Omega(\\sqrt{T})\\) minimax lower bound on individual regret\n\nWe show a minimax regret lower bound of \\(\\Omega(\\sqrt{T})\\) in Lemma 21 by considering a simpler system that decouples learning and competition. In this system, the seller is assumed to _(i)_ know her exact valuation, and _(ii)_ always ask her true valuation as the selling price, i.e., is truthful in her asking price in all the rounds. Furthermore, the pricing at every round is fixed to the average \\(p_{t}=\\frac{B_{t}+S_{t}}{2}\\) in the event that \\(B_{t}\\geq S\\). We show in Corollary 19 through a coupling argument that any algorithm for the classical two armed bandit problem can be converted to an algorithm for this special case. Then we use well known lower bounds for the bandit problem to give the \\(\\Omega(\\sqrt{T})\\) lower bound in Lemma 21. All technical details are in Appendix B.\n\n### \\(\\Omega(\\log(T))\\) instance dependent lower bound on Social Welfare Regret\n\nThe key observation is that social-welfare regret in Equation (1) is _independent_ of the pricing mechanism and only depends on the participating buyers \\(\\mathcal{P}_{b}(t)\\) and sellers \\(\\mathcal{P}_{s}(t)\\) at each time \\(t\\). We will establish a lower bound on a centralized decision maker (DM), who at each time, observes all the rewards obtained by all agents thus far, and decides \\(\\mathcal{P}_{b}(t)\\) and \\(\\mathcal{P}_{s}(t)\\) for each \\(t\\). In Appendix B.7, we show that the actions of the DM can be coupled to that of a combinatorial semi-bandit model [14], where the base set of arms are the set of all buyers and sellers, the valid subset of arms are those subsets having an equal number of buyers and sellers and the mean reward of any valid subset\\(\\mathcal{A}\\subseteq 2^{\\mathcal{D}}\\) is the difference between the sum of all valuations of buyers in \\(\\mathcal{A}\\) and of sellers in \\(\\mathcal{A}\\). In Appendix B we exploit this connection to semi-bandits to give a \\(\\Omega(\\log(T))\\) regret lower bound for the centralized DM. Thus our upper bound of \\(O(\\log(T))\\) social welfare regret under the decentralized setting is order optimal since even a centralized system must incur \\(\\Omega(\\log(T))\\) regret.\n\n## 6 Simulation Study\n\nWe perform synthetic studies to augment our theoretical guarantees. For a fixed system of \\(N\\) buyers, \\(M\\) sellers, \\(K^{*}\\) participants, and \\(\\Delta\\) gap, the rewards are Bernoulli, with means themselves chosen uniformly at random. We vary the confidence width of the buyers, \\(\\alpha_{b}\\), and seller, \\(\\alpha_{s}\\), in \\([\\alpha_{1},\\alpha_{2}]\\). Next we simulate the performance of the UCB(\\(\\alpha_{b}\\)) and LCB(\\(\\alpha_{s}\\)) over \\(100\\) independent sample paths with \\(T=50k\\). We report the mean, 25% and 75% value of the trajectories. We plot the cumulative regret of the buyers, \\(R_{b,i}(t)\\), and the sellers, \\(R_{s,j}(t)\\), the number of matches in the system \\(K(t)\\), and the price difference \\((p(t)-p^{*})\\). In Figure 2, we have a \\(8\\times 8\\) system with \\(K^{*}=5\\). We see that \\(K(t)\\) converges to \\(5\\), where as \\((p(t)-p^{*})\\) converges to \\(0\\). The social regret grows as \\(log(T)\\). The participant and non-participant individual regret of this instance is presented in the appendix in Figure 3. 
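For readers who want to reproduce plots of this kind, the sketch below shows the basic simulation loop described above. It is an illustrative re-implementation under our own assumptions rather than the authors' code: rewards are Bernoulli with means equal to the true valuations, agents with no samples bid 1 (buyers) or ask 0 (sellers), and the function records K(t) and the realized price p(t).

```python
import numpy as np


def simulate(B, S, T=50_000, alpha=4.0, seed=0):
    """UCB/LCB bidding under the average mechanism with Bernoulli rewards.

    B, S: true buyer/seller valuations in [0, 1] (unknown to the agents).
    Returns arrays with K(t) and the realized price p(t) (NaN if no trade).
    """
    rng = np.random.default_rng(seed)
    N, M = len(B), len(S)
    n_b, n_s = np.zeros(N), np.zeros(M)      # participation counts
    sum_b, sum_s = np.zeros(N), np.zeros(M)  # sums of observed rewards
    K_traj, p_traj = np.zeros(T, dtype=int), np.full(T, np.nan)
    for t in range(1, T + 1):
        bids = np.where(n_b > 0, sum_b / np.maximum(n_b, 1)
                        + np.sqrt(alpha * np.log(t) / np.maximum(n_b, 1)), 1.0)
        asks = np.where(n_s > 0, sum_s / np.maximum(n_s, 1)
                        - np.sqrt(alpha * np.log(t) / np.maximum(n_s, 1)), 0.0)
        bo, so = np.argsort(-bids), np.argsort(asks)
        K = 0
        while K < min(N, M) and bids[bo[K]] >= asks[so[K]]:
            K += 1
        K_traj[t - 1] = K
        if K == 0:
            continue
        p_traj[t - 1] = 0.5 * (bids[bo[K - 1]] + asks[so[K - 1]])
        for i in bo[:K]:   # participating buyers observe a noisy valuation
            n_b[i] += 1
            sum_b[i] += rng.binomial(1, B[i])
        for j in so[:K]:   # participating sellers likewise
            n_s[j] += 1
            sum_s[j] += rng.binomial(1, S[j])
    return K_traj, p_traj
```

Plotting `K_traj` and the deviation of `p_traj` from the oracle price (obtained by running the mechanism once on the true valuations) exhibits the convergence behaviour described above.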
We defer the simulation of other systems used to study behavior of heterogeneous \\(\\alpha\\), varying gaps, and different system sizes to Appendix D.\n\n## 7 Related Work\n\n**Classical mechanism design in double auctions:** There is a large body of work on mechanism design for double auctions, following Myerson et al. [36]. The average-mechanism, which is the subject of focus in this paper achieves all the above desiderata except for being incentive compatible. The VCG mechanism was developed in a series of works [40], [12] and [20] achieves all desiderata except being budget balanced. This mechanism requires the auctioneer to subsidize the trade. More sophisticated trade mechanisms known as the McAfee mechanism [33], trade reduction, and the probabilistic mechanism [5] all trade-off some of the desiderata for others. However, the key assumption in all of these lines of work was that all participants know their own valuations, and do not need to learn through repeated interactions.\n\n**Bandit learning in matching markets:** In recent times, online learning for the two-sided matching markets have been extensively studied in [29], [30], [39], [7], [16]. This line of work studies two sided markets when one of the side does not know of their preferences apriori and learn it through interactions. However, unlike the price-discovery aspect of the present paper, the space of preferences that each participant has to learn is discrete and finite, while the valuations that agents need to learn form a continuum. This model was improved upon by [9] that added notions of price and transfer. The paper of [25] studied a contextual version of the two-sided markets where the agents preferences depend on the reveled context through an unknown function that is learnt through interactions.\n\n**Learning in auctions:** Online learning in simple auctions has a rich history - [6], [13], [34], [8], [15], [41], [22], [17] to name a few, each of which study a separate angle towards learning from repeated samples in auctions. However, unlike the our setting, one side of the market knows of their true valuations apriori. The work of [26] is the closest to ours, where the participants apriori do not know their true valuations. However, they consider the VCG mechanism in the centralized setting, i.e., all participants can observe the utilities of all participants in every round.\n\n**Learning in bi-lateral trade:** One of the first studies on learning in bilateral trade is [10]. The work of [10] considers the single buyer and seller model under a weaker ordinal feedback model, while in the present work we consider the multi-agent model under a stronger bandit feedback model. The feedback model in [10] is more restrictive since the gain from the trade cannot be estimated by the agents based on ordinal feedback while in our model, the gain from the trade can be estimated by each agent. However, our work considers the impact of multi-agent competition on regret minimization, which is not studied in [10]. In each round a buyer and a seller draw her own valuation i.i.d. from their respective distributions. Then an arbiter sets the price, with trade happening if the price is between the seller's and buyer's price. 
They show with full-information follow-the-leader type algorithm achieves \\(O(\\sqrt{T})\\) regret, where as with realistic (bandit-like) feedback by learning the two distributions approximately \\(O(T^{2/3})\\) regret can be achieved.\n\n**Protocol Model:** The 'protocol model' alludes to multiple agents following a similar protocol/algorithm with each agent executing protocol with private information (e.g. Platform-as-a-Service (PAAS) [21]). In the works on learning and markets mentioned here, although the decentralized systems can be modeled as games, the protocol model is studied as a tractable way to capture the essence of the problem [4; 3]. The technical basis for this assumption is that, in the limit when the number of participants are large, and the impact of any single participant is small, a protocol based algorithm is the limit in a precise sense of any equilibrium solution to the multi-agent game [19].\n\n## 8 Conclusion and Future Work\n\nWe study the Double auction with Average mechanism where the buyers and sellers need to know their own valuation from her own feedback. Using confidence based bounds - UCB for the buyers and LCB for the sellers, we show that it is possible to obtain \\(O(\\sqrt{T\\log(T)})\\) individual regret for the true participant buyers and sellers in \\(T\\) rounds. Whereas, the true non-participant buyers and sellers obtain a \\(O(\\log(T)/\\Delta)\\) individual regret where \\(\\Delta\\) is the smallest gap. The social regret of the proposed algorithm also admits a \\(O(\\log(T)/\\Delta)\\) bound. We show that there are simpler systems where each buyer and seller must obtain a \\(\\Omega(\\sqrt{T})\\) individual regret in the minimax sense. Moreover, in our setting we show even a centralized controller obtains \\(\\Omega(\\log(T))\\) social regret. Obtaining a minimax matching a \\(O(\\sqrt{T})\\) regret remains open. Another important future avenue is, developing a framework and bidding strategy with provable 'good' regret for general Double auction mechanisms. Alleviating drawbacks of protocol model, such as collusion among participants, and platform disintermediation are important directions to explore.\n\n## References\n\n* [1] Yasin Abbasi-Yadkori, David Pal, and Csaba Szepesvari. Improved algorithms for linear stochastic bandits. _Advances in neural information processing systems_, 24, 2011.\n* [2] Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computational power of population protocols. _Distributed Computing_, 20(4):279-304, 2007.\n* [3] Nick Arnosti, Ramesh Johari, and Yash Kanoria. Managing congestion in decentralized matching markets. In _Proceedings of the fifteenth ACM conference on Economics and computation_, pages 451-451, 2014.\n* [4] Itai Ashlagi, Mark Braverman, Yash Kanoria, and Peng Shi. Communication requirements and informative signaling in matching markets. In _EC_, page 263, 2017.\n* [5] Moshe Babaioff and Noam Nisan. Concurrent auctions across the supply chain. _Journal of Artificial Intelligence Research_, 21:595-629, 2004.\n* [6] Maria-Florina Balcan, Avrim Blum, and Yishay Mansour. Single price mechanisms for revenue maximization in unlimited supply combinatorial auctions. Technical report, Technical Report CMU-CS-07-111, Carnegie Mellon University, 2007. 3, 2007.\n* [7] Soumya Basu, Karthik Abinav Sankararaman, and Abishek Sankararaman. Beyond \\(\\log^{2}(t)\\) regret for decentralized bandits in matching markets. In _International Conference on Machine Learning_, pages 705-715. 
PMLR, 2021.\n\n* [8] Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. _Theoretical Computer Science_, 324(2-3):137-146, 2004.\n* [9] Sarah H Cen and Devavrat Shah. Regret, stability & fairness in matching markets with bandit learners. In _International Conference on Artificial Intelligence and Statistics_, pages 8938-8968. PMLR, 2022.\n* [10] Nicolo Cesa-Bianchi, Tommaso R Cesari, Roberto Colomboni, Federico Fusco, and Stefano Leonardi. A regret analysis of bilateral trade. In _Proceedings of the 22nd ACM Conference on Economics and Computation_, pages 289-309, 2021.\n* [11] Nicolo Cesa-Bianchi and Gabor Lugosi. Combinatorial bandits. _Journal of Computer and System Sciences_, 78(5):1404-1422, 2012.\n* [12] Edward H Clarke. Multipart pricing of public goods. _Public choice_, pages 17-33, 1971.\n* [13] Richard Cole and Tim Roughgarden. The sample complexity of revenue maximization. In _Proceedings of the forty-sixth annual ACM symposium on Theory of computing_, pages 243-252, 2014.\n* [14] Richard Combes, Mohammad Sadegh Talebi Mazraeh Shahi, Alexandre Proutiere, et al. Combinatorial bandits revisited. _Advances in neural information processing systems_, 28, 2015.\n* [15] Lorenzo Croissant, Marc Abeille, and Clement Calauzenes. Real-time optimisation for online learning in auctions. In _International Conference on Machine Learning_, pages 2217-2226. PMLR, 2020.\n* [16] Xiaowu Dai and Michael Jordan. Learning in multi-stage decentralized matching markets. _Advances in Neural Information Processing Systems_, 34, 2021.\n* [17] Constantinos Daskalakis and Vasilis Syrgkanis. Learning in auctions: Regret is hard, envy is easy. _Games and Economic Behavior_, 2022.\n* [18] Xiaojun Feng, Yanjiao Chen, Jin Zhang, Qian Zhang, and Bo Li. Tahes: A truthful double auction mechanism for heterogeneous spectrums. _IEEE Transactions on Wireless Communications_, 11(11):4038-4047, 2012.\n* [19] Markus Fischer. On the connection between symmetric n-player games and mean field games. _The Annals of Applied Probability_, 27:757-810, 2017.\n* [20] Theodore Groves. Incentives in teams. _Econometrica: Journal of the Econometric Society_, pages 617-631, 1973.\n* [21] Xin Guo, Anran Hu, Renyuan Xu, and Junzi Zhang. Learning mean-field games. _Advances in Neural Information Processing Systems_, 32, 2019.\n* [22] Yanjun Han, Zhengyuan Zhou, and Tsachy Weissman. Optimal no-regret learning in repeated first-price auctions. _arXiv preprint arXiv:2003.09795_, 2020.\n* [23] George Iosifidis, Lin Gao, Jianwei Huang, and Leandros Tassiulas. A double-auction mechanism for mobile data-offloading markets. _IEEE/ACM Transactions On Networking_, 23(5):1634-1647, 2014.\n* [24] George Iosifidis and Iordanis Koutsopoulos. Double auction mechanisms for resource allocation in autonomous networks. _IEEE Journal on Selected Areas in Communications_, 28(1):95-102, 2009.\n* [25] Meena Jagadeesan, Alexander Wei, Yixin Wang, Michael Jordan, and Jacob Steinhardt. Learning equilibria in matching markets from bandit feedback. _Advances in Neural Information Processing Systems_, 34, 2021.\n* [26] Kirthevasan Kandasamy, Joseph E Gonzalez, Michael I Jordan, and Ion Stoica. Mechanism design with bandit feedback. _arXiv preprint arXiv:2004.08924_, 2020.\n\n* [27] Elon Kohlberg, Shmuel Zamir, et al. Repeated games of incomplete information: The symmetric case. _Annals of Statistics_, 2(5):1040-1041, 1974.\n* [28] Tor Lattimore and Csaba Szepesvari. _Bandit algorithms_. 
Cambridge University Press, 2020.\n* [29] Lydia T Liu, Horia Mania, and Michael Jordan. Competing bandits in matching markets. In _International Conference on Artificial Intelligence and Statistics_, pages 1618-1628. PMLR, 2020.\n* [30] Lydia T Liu, Feng Ruan, Horia Mania, and Michael I Jordan. Bandit learning in decentralized matching markets. _Journal of Machine Learning Research_, 22(211):1-34, 2021.\n* [31] Bodhisattwa P Majumder, M Nazif Faqiry, Sanjoy Das, and Anil Pahwa. An efficient iterative double auction for energy trading in microgrids. In _2014 IEEE Symposium on Computational Intelligence Applications in Smart Grid (CIASG)_, pages 1-7. IEEE, 2014.\n* [32] Sweta Malik, Subhasis Thakur, Maeve Duffy, and John G Breslin. Double auction mechanisms for peer-to-peer energy trading: A comparative analysis. In _2022 IEEE 7th International Energy Conference (ENERGYCON)_, pages 1-6. IEEE, 2022.\n* [33] R Preston McAfee. A dominant strategy double auction. _Journal of economic Theory_, 56(2):434-450, 1992.\n* [34] Mehryar Mohri and Andres Munoz Medina. Learning theory and algorithms for revenue optimization in second price auctions with reserve. In _International conference on machine learning_, pages 262-270. PMLR, 2014.\n* [35] R Andrew Muller, Stuart Mestelman, John Spraggon, and Rob Godby. Can double auctions control monopoly and monopsony power in emissions trading markets? _Journal of environmental economics and management_, 44(1):70-92, 2002.\n* [36] Roger B Myerson and Mark A Satterthwaite. Efficient mechanisms for bilateral trading. _Journal of economic theory_, 29(2):265-281, 1983.\n* [37] Jean-Pierre Ponssard and Shmuel Zamir. Zero-sum sequential games with incomplete information. _International Journal of Game Theory_, 2(1):99-107, 1973.\n* [38] SM Reza Dibaj, Ali Miri, and SeyedAkbar Mostafavi. A cloud priority-based dynamic online double auction mechanism (pb-dodam). _Journal of Cloud Computing_, 9:1-26, 2020.\n* [39] Abishek Sankararaman, Soumya Basu, and Karthik Abinav Sankararaman. Dominate or delete: Decentralized competing bandits in serial dictatorship. In _International Conference on Artificial Intelligence and Statistics_, pages 1252-1260. PMLR, 2021.\n* [40] William Vickrey. Counterspeculation, auctions, and competitive sealed tenders. _The Journal of finance_, 16(1):8-37, 1961.\n* [41] Jonathan Weed, Vianney Perchet, and Philippe Rigollet. Online learning in repeated auctions. In _Conference on Learning Theory_, pages 1562-1583. PMLR, 2016.\n* [42] Robert Wilson. Incentive efficiency of double auctions. _Econometrica: Journal of the Econometric Society_, pages 1101-1115, 1985.\n* [43] Peter R Wurman, William E Walsh, and Michael P Wellman. Flexible double auctions for electronic commerce: Theory and implementation. _Decision Support Systems_, 24(1):17-27, 1998.", "response": "## Review\n\n### Summary\n\nThis paper investigates a two-sided double auction setting where buyers and sellers face uncertainty over their valuations, which are learned through interactions. The proposed algorithm, based on upper confidence bounds for buyers and lower confidence bounds for sellers, aims to minimize regret both socially and individually. The authors derive regret bounds showing that the social regret is upper-bounded by O(log T/Δ), while individual regret is at most O(T). 
This work is significant as it is the first to study repeated double auctions with unknown valuations, integrating insights from bandit learning theory and providing theoretical and simulation results to support the findings.\n\n### Strengths\n\n- Novelty of the setting, being the first to tackle learning in double auctions with unknown valuations.\n- Interesting connection between double auctions and bandit learning.\n- Well-written and organized with clear explanations.\n- The proposed algorithm is easy to implement and guarantees good regret bounds.\n- Tight regret results for social welfare and individual utility.\n- Good discussion on incentives and deviations complemented by simulation studies.\n\n### Weaknesses\n\n- The algorithmic approach is straightforward and does not introduce substantial new ideas.\n- Coordination between buyers and sellers on the bidding algorithm could encourage collusion.\n- The proof techniques may not easily generalize to more complex settings.\n- Minor lack of clarity regarding the definition of social welfare used in the paper.\n\n### Questions\n\n- Which many-to-many double auction bandit settings do you view as the right motivation for this work?\n- Please clarify the use of both instance-dependent and instance-independent regret bounds.\n- How to understand the regret bounds in cases of ties, and what happens if Δ=0?\n- Can the proof outline be made more concise to include results of simulation experiments?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good; the theoretical foundations are solid, though there are some concerns about the generalizability of the proof techniques.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** 3 = good; the paper is well-organized and clear, but minor improvements in clarity and structure could enhance comprehension.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** 3 = good; while the contribution is significant in the context of auction theory and online learning, it lacks groundbreaking new ideas.\n\n### Rating\n\n**Score:** 7\n\n**Description:** 7 = accept, but needs minor improvements; the paper is technically solid with a high impact on its area, yet it requires some clarifications and enhancements.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The paper presents a novel approach to a relevant problem in economic theory and online learning, with solid theoretical foundations and interesting results. While there are some minor weaknesses and questions regarding clarity and the approach, the overall contribution is significant, justifying an acceptance decision.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. 
The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# DFRD: Data-Free Robustness Distillation for Heterogeneous Federated Learning\n\nKangyang Luo\\({}^{1}\\), Shuai Wang\\({}^{1}\\), Yexuan Fu\\({}^{1}\\), Xiang Li\\({}^{1}\\)1, Yunshi Lan\\({}^{1}\\), Ming Gao\\({}^{1,2}\\)\n\nSchool of Data Science & Engineering\\({}^{1}\\)\n\nKLATASDS-MOE in School of Statistics\\({}^{2}\\)\n\nEast China Normal University\n\nShanghai, China\n\n{52205901003, 51215903058,51215903042}@stu.ecnu.edu.cn\n\n{xiangli, yslan, mgao}@dase.ecnu.edu.cn\n\nFootnote 1: corresponding author\n\n###### Abstract\n\nFederated Learning (FL) is a privacy-constrained decentralized machine learning paradigm in which clients enable collaborative training without compromising private data. However, how to learn a robust global model in the data-heterogeneous and model-heterogeneous FL scenarios is challenging. To address it, we resort to data-free knowledge distillation to propose a new FL method (namely DFRD). DFRD equips a conditional generator on the server to approximate the training space of the local models uploaded by clients, and systematically investigates its training in terms of _fidelity_, _transferability_ and _diversity_. To overcome the catastrophic forgetting of the global model caused by the distribution shifts of the generator across communication rounds, we maintain an exponential moving average copy of the generator on the server. Additionally, we propose dynamic weighting and label sampling to accurately extract knowledge from local models. Finally, our extensive experiments on various image classification tasks illustrate that DFRD achieves significant performance gains compared to SOTA baselines. Our code is here: [https://anonymous.4open.science/r/DFRD-0C83/](https://anonymous.4open.science/r/DFRD-0C83/).\n\n## 1 Introduction\n\nWith the surge of data, deep learning algorithms have made significant progress in both established and emerging fields [1, 2, 3]. However, in many real-world applications (e.g., mobile devices [4], IoT [5], and autonomous driving [6], etc.), data is generally dispersed across different clients (i.e., data silos). 
Owing to the high cost of data collection and strict privacy protection regulations, the centralized training that integrates data together is prohibited [7, 8]. Driven by this reality, Federated Learning (FL) [9, 10] has gained considerable attention in industry and academia as a promising distributed learning paradigm that allows multiple clients to participate in the collaborative training of a global model without access to their private data, thereby ensuring the basic privacy.\n\nDespite its remarkable success, the inevitable hurdle that plagues FL research is the vast heterogeneity among real-world clients [11]. Specifically, the distribution of data among clients may be non-IID (identical and independently distributed), resulting in **data heterogeneity**[42, 12, 13, 8, 41]. It has been confirmed that the vanilla FL method FedAvg [14] suffers from _client drift_ in this case, which leads to severe performance degradation. To ameliorate this issue, a plethora of modifications [15, 16, 17, 34, 39] for FedAvg focus on regularizing the objectives of the local models to align the global optimization objective. All of the above methods follow the widely accepted assumption of model homogeneity, where local models have to share the same architecture as the global model. Nevertheless,when deploying FL systems, different clients may have distinct hardware and computing resources, and can only train the model architecture matching their own resource budgets [22, 24], resulting in **model heterogeneity**. In this case, to enable the FL systems with model homogeneity, on the one hand, clients with low resource budgets, which may be critical for enhancing the FL systems, will be discarded at the expense of training bias [43, 8, 23]. On the other hand, keeping a low complexity for the global model to accommodate all clients leads to performance drop due to the limited model capacity [30]. Therefore, the primary challenge of model-heterogeneous FL is how to conduct model aggregation of heterogeneous architectures among clients to enhance the inclusiveness of federated systems. To solve this challenge, existing efforts fall broadly into two categories: knowledge distillation (KD)-based methods [40, 45, 25, 26, 27] and partial training (PT)-based methods [30, 28, 29, 31], yet each of them has its own limitations. Concretely, KD-based methods require additional public data to align the logits outputs between the global model (student) and local models (teachers). But the desired public data is not always available in practice and the performance may decrease dramatically if the apparent disparity in distributions exists between public data and clients' private data [33]. PT-based methods send width-based sub-models to clients, which are extracted by the server from a larger global model according to each client's resource budget, and then aggregate these trained sub-models to update the global model. PT-based methods can be considered as an extension of FedAvg to model-heterogeneous scenarios, which means they are implementation-friendly and computationally efficient, but they also suffer from the same adverse effects from data heterogeneity as FedAvg or even more severely. 
In a word, how to learn a robust global model in FL with both data and model heterogeneity is a highly meaningful and urgent problem.\n\nTo this end, we systematically investigate the training of a robust global model in FL with both data and model heterogeneity with the aid of data-free knowledge distillation (DFKD) [35, 36, 53, 59]. See the related work in Appendix A for more DFKD methods. Note that the strategy of integrating DFKD to FL is not unique to us. Recently, FedFTG [48] leverages DFKD to fine-tune the global model in model-homogeneous FL to overcome data heterogeneity, and DENSE [49] aggregates knowledge from heterogeneous local models based on DFKD to train a global model for one-shot FL. They all equip the server, which possesses powerful hardware and computing resources, with a generator to approximate the training space of the local models (teachers), and train the generator and the global model (student) in an adversarial manner. However, the local models uploaded per communication round are not only architecturally heterogeneous but also trained on non-IID distributed private data in the situation of both data and model heterogeneity. _In this case, the generator tends to deviate from the real data distribution. Also, its output distribution may undergo large shifts (i.e., distribution shifts) across communication rounds, causing the global model to catastrophically forget useful knowledge learned in previous rounds and suffer from performance degradation._ To confront the mentioned issues, we propose a novel **D**ata-**F**ree **R**obust **D**istillation FL method called DFRD, which utilizes a conditional generator to generate synthetic data and thoroughly studies how to effectively and accurately simulate the local models' training space in terms of _fidelity_, _transferability_ and\n\nFigure 1: The full pipeline for DFRD combined with a PT-based method. DFRD works on the server and contains two phases, _training generator_ and _robust model distillation_, where \\(\\mathcal{L}_{tran},\\mathcal{L}_{div},\\mathcal{L}_{fid}\\) and \\(\\mathcal{L}_{kl},\\widetilde{\\mathcal{L}}_{kl}\\) are the loss objectives of the conditional generator and the global model, respectively.\n\n_diversity_[48; 49]. To mitigate catastrophic forgetting of the global model, an exponential moving average (EMA) copy of the conditional generator is maintained on the server to store previous knowledge learned from the local models. The EMA generator, along with the current generator, provides training data for updates of the global model. Also, we propose dynamic weighting and label sampling to aggregate the logits outputs of the local models and sample labels respectively, thereby properly exploring the knowledge of the local models. We revisit FedFTG and DENSE, and argue that DFRD as a fine-tuning method (similar to FedFTG) can significantly enhance the global model. So, we readily associate the PT-based methods in model-heterogeneous FL with the ability to rapidly provide a preliminary global model, which will be fine-tuned by DFRD. We illustrate the schematic for DFRD as a fine-tuning method based on a PT-based method in Fig. 1. Although FedFTG and DENSE can also be applied to fine-tune the global model from the PT-based methods after simple extensions, we empirically find that they do not perform as well, and the performance of the global model is even inferior to that of local models tailored to clients' resource budgets.\n\nOur main contributions of this work are summarized as follows. 
First, we propose a new FL method termed DFRD that enables a robust global model in both data and model heterogeneity settings with the help of DFKD. Second, we systematically study the training of the conditional generator w.r.t. _fidelity_, _transferability_ and _diversity_ to ensure the generation of high-quality synthetic data. Additionally, we maintain an EMA generator on the server to overcome the global model's catastrophic forgetting caused by the distribution shifts of the generator. Third, we propose dynamic weighting and label sampling to accurately extract the knowledge of local models. At last, our extensive experiments on six real-world image classification datasets verify the superiority of DFRD.\n\n## 2 Notations and Preliminaries\n\n**Notations.** In this paper, we focus on the centralized setup that consists of a central server and \\(N\\) clients owning private labeled datasets \\(\\{(\\mathbf{X}_{i},\\mathbf{Y}_{i})\\}_{i=1}^{N}\\), where \\(\\mathbf{X}_{i}=\\{\\mathbf{x}_{i}^{b}\\}_{b=1}^{n_{i}}\\) follows the data distribution \\(\\mathcal{D}_{i}\\) over feature space \\(\\mathcal{X}_{i}\\), i.e., \\(\\mathbf{x}_{i}^{b}\\sim\\mathcal{D}_{i}\\), and \\(\\mathbf{Y}_{i}=\\{y_{i}^{b}\\}_{b=1}^{n_{i}}\\subset[C]:=\\{1,\\cdots,C\\}\\) denotes the ground-truth labels of \\(\\mathbf{X}_{i}\\). And \\(C\\) refers to the total number of labels. Remarkably, the heterogeneity for FL in our focus includes both data heterogeneity and model heterogeneity. **For the former**, we consider the same feature space, yet the data distribution may be different among clients, that is, label distribution skewness in clients (i.e., \\(\\mathcal{X}_{i}=\\mathcal{X}_{j}\\) and \\(\\mathcal{D}_{i}\\neq\\mathcal{D}_{j},\\forall i\\neq j,i,j\\in[N]\\)). **For the latter**, each client \\(i\\) holds an on-demand local model \\(f_{i}\\) parameterized by \\(\\mathbf{\\theta}_{i}\\). Due to the difference in resource budgets, the model capacity of each client may vary, i.e., \\(|\\mathbf{\\theta}_{i}|\\neq|\\mathbf{\\theta}_{j}|,\\exists i\\neq j,i,j\\in[N]\\). In PT-based methods, we define a confined width capability \\(R_{i}\\in(0,1]\\) according to the resource budget of client \\(i\\), which is the proportion of nodes extracted from each layer in the global model \\(f\\). Note that \\(f\\) is parameterized by \\(\\mathbf{\\theta}\\), and \\(|\\mathbf{a}|\\) denotes the number of elements in vector \\(\\mathbf{a}\\).\n\n**PT-based method** is a solution for model-heterogeneous FL, which strives to extract a matching width-based slimmed-down sub-model from the global model as a local model according to each client's budget. As with FedAvg, it requires the server to periodically communicate with the clients. In each round, there are two phases: local training and server aggregation. In local training, each client trains the sub-model received from the server utilizing the local optimizer. In server aggregation, the server collects the heterogeneous sub-models and aggregates them by straightforward selective averaging to update the global model, as follows [28; 29; 30; 31]:\n\n\\[\\mathbf{\\theta}_{[l,k]}^{t}=\\frac{1}{\\sum_{j\\in\\mathcal{S}_{t}}p_{j}}\\sum_{i\\in \\mathcal{S}_{t}}p_{i}\\mathbf{\\theta}_{i,[l,k]}^{t}, \\tag{1}\\]\n\nwhere \\(\\mathcal{S}_{t}\\) is a subset sampled from \\([N]\\) and \\(p_{i}\\) is the weight of client \\(i\\), which generally indicates the size of data held by client \\(i\\). 
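To make the selective averaging in Eq. (1) concrete, the following is a minimal, illustrative PyTorch-style sketch (not the authors' implementation): each client is assumed to return its updated parameters zero-padded to the global shape together with a binary mask marking which entries it received, and all names are hypothetical.

```python
import torch

def selective_average(global_params, client_params, client_masks, weights):
    """Sketch of the selective averaging in Eq. (1).

    global_params : dict[str, Tensor]        - current global model parameters
    client_params : list[dict[str, Tensor]]  - client updates, zero-padded to global shape
    client_masks  : list[dict[str, Tensor]]  - 1 where the client received/updated an entry
    weights       : list[float]              - p_i, e.g. local dataset sizes
    """
    new_params = {}
    for name, theta in global_params.items():
        num = torch.zeros_like(theta)   # weighted sum of client values for each entry
        den = torch.zeros_like(theta)   # total weight of clients covering each entry
        for p_i, params_i, masks_i in zip(weights, client_params, client_masks):
            num += p_i * params_i[name] * masks_i[name]
            den += p_i * masks_i[name]
        # entries updated by at least one client are averaged over those clients;
        # entries no sampled client covers keep their previous value
        new_params[name] = torch.where(den > 0, num / den.clamp(min=1e-12), theta)
    return new_params
```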
At round \\(t\\), \\(\\mathbf{\\theta}_{[l,k]}^{t}\\) denotes the \\(k^{th}\\) parameter of layer \\(l\\) of the global model and \\(\\mathbf{\\theta}_{i,[l,k]}^{t}\\) denotes the parameter \\(\\mathbf{\\theta}_{[l,k]}^{t}\\) updated by client \\(i\\). We can clearly see that Eq. (1) independently calculates the average of each parameter for the global model according to how many clients update that parameter in round \\(t\\). Instead, the parameter remains unchanged if no clients update it. Notably, if \\(|\\mathbf{\\theta}_{i}^{t}|=|\\mathbf{\\theta}^{t}|\\) for any \\(i\\in[N]\\), PT-based method becomes FedAvg. The key to PT-based method is to select \\(\\mathbf{\\theta}_{i}^{t}\\) from the global model \\(\\mathbf{\\theta}^{t}\\) when given \\(R_{i}\\). And existing sub-model extraction schemes fall into three categories: _static_[28; 29], _random_[30] and _rolling_[31].\n\nProposed Method\n\nIn this section, we detail the proposed method DFRD. We mainly work on considering DFRD as a fine-tuning method to enhance the PT-based methods, thus enabling a robust global model in FL with both data and model heterogeneity. Fig. 1 visualizes the training procedure of DFRD combined with a PT-based method, consisting of four stages on the server side: _training generator_, _robust model distillation_, _sub-model extraction_ and _model aggregation_. Note that _sub-model extraction_ and _model aggregation_ are consistent with that in the PT-based methods, so we detail the other two stages. Moreover, we present pseudocode for DFRD in Appendix C.\n\n### Training Generator\n\nAt this stage, we aim to train a well-behaved generator to capture the training space of local models uploaded from active clients. Specifically, we consider a conditional generator \\(G(\\cdot)\\) parameterized by \\(\\mathbf{w}\\). It takes as input a random noise \\(\\mathbf{z}\\in\\mathbb{R}^{d}\\) sampled from standard normal distribution \\(\\mathcal{N}(\\mathbf{0},\\mathbf{I})\\), and a random label \\(y\\in[C]\\) sampled from label distribution \\(p(y)\\), i.e., the probability of sampling \\(y\\), thus generating the synthetic data \\(\\mathbf{s}=G(\\mathbf{h}=o(\\mathbf{z},y),\\mathbf{w})\\). Note that \\(o(\\mathbf{z},y)\\) represents the merge operator of \\(\\mathbf{z}\\) and \\(y\\). To the best of our knowledge, synthetic data generated by a well-trained generator should satisfy several key characteristics: _fidelity_, _transferability_, and _diversity_[48, 49]. Therefore, in this section, we construct the loss objective from the referred aspects to ensure the quality and utility of \\(G(\\cdot)\\).\n\n**Fidelity.** To commence, we study the fidelity of the synthetic data. Specifically, we expect \\(G(\\cdot)\\) to simulate the training space of the local models to generate the synthetic dataset with a similar distribution to the original dataset. To put it differently, we want the synthetic data \\(\\mathbf{s}\\) to approximate the training data with label \\(y\\) without access to clients' training data. 
To achieve it, we form the fidelity loss \\(\\mathcal{L}_{fid}\\) at logits level:\n\n\\[\\mathcal{L}_{fid}=CE(\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y}f_{i}(\\mathbf{s},\\mathbf{\\theta }_{i}),y), \\tag{2}\\]\n\nwhere \\(CE\\) denotes the cross-entropy function, \\(f_{i}(\\mathbf{s},\\mathbf{\\theta}_{i})\\) is the logits output of the local model from client \\(i\\) when \\(\\mathbf{s}\\) is given, \\(\\tau_{i,y}\\) dominates the weight of logits from different clients \\(\\{i|i\\in\\mathcal{S}_{t}\\}\\) when \\(y\\) is given. And \\(\\mathcal{L}_{fid}\\) is the cross-entropy loss between the weighted average logits \\(\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y}f_{i}(\\mathbf{s},\\mathbf{\\theta}_{i})\\) and the label \\(y\\). By minimizing \\(\\mathcal{L}_{fid}\\), \\(\\mathbf{s}\\) is enforced to be classified into label \\(y\\) with a high probability, thus facilitating the fidelity of \\(\\mathbf{s}\\).\n\nIn reality, the conditional generator \\(G(\\cdot)\\) easily generates synthetic data with low classification errors (i.e. \\(\\mathcal{L}_{fid}\\) close to \\(0\\)) as the training proceeds. This may cause the synthetic data to fall into a space far from the decision boundary of the ensemble model (i.e., \\(\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y}f_{i}(\\cdot,\\mathbf{\\theta}_{i})\\)) if only \\(\\mathcal{L}_{fid}\\) is optimized, as shown in the synthetic data represented by red circles in Fig. 2 (a). Note that \\(d_{S}\\) and \\(d_{T}\\) denote the decision boundaries of the global model (student) and the ensemble model (teacher), respectively. An obvious observation is that the red circles are correctly classified on the same side of the two decision boundaries (i.e., \\(d_{S}\\) and \\(d_{T}\\)), making it difficult to transfer teacher's knowledge to student. We next explore how to augment the transferability of the synthetic data to ameliorate this pitfall.\n\n**Transferability** is intended to guide \\(G(\\cdot)\\) in generating synthetic data that moves the decision boundary of the global model towards that of the ensemble model, such as synthetic data with black circles in Fig. 2 (b). However, during the training of \\(d_{S}\\) to approach \\(d_{T}\\), we find that \\(G(\\cdot)\\) can easily generate two other types of synthetic data, the yellow and purple circles in Fig. 2 (c). Both of them are misclassified by the ensemble model (\\(d_{T}\\)), while the yellow circles are correctly classified and\n\nFigure 2: The visualization of synthetic data and decision boundaries of global model (student) and ensemble model (teacher). _Left panel_: synthetic data (red circles) are far away from the decision boundary \\(d_{T}\\). _Middle panel_: synthetic data (black circles) near the decision boundaries \\(d_{T}\\). _Right panel_: synthetic data (yellow and purple circles) cross over the decision boundary \\(d_{T}\\).\n\nthe purple circles are misclassified by the global model (\\(d_{S}\\)). For the conditional generator \\(G(\\cdot)\\) that takes label information as one of the inputs, yellow and purple circles can mislead the generator, thereby leading to \\(d_{S}\\) approximating \\(d_{T}\\) with a large deviation, as shown in Fig. 2 (c). Based on the above observation, we reckon that the synthetic data \\(\\mathbf{s}=G(\\mathbf{h}=o(\\mathbf{z},y),\\mathbf{w})\\) is useful if it is classified as \\(y\\) by the ensemble model but classified not as \\(y\\) by the global model. 
To realize it, we maximize the logits discrepancy between the global model and the ensemble model on synthetic data with black circles by leveraging Kullback-Leibler divergence loss, which takes the form:\n\n\\[\\mathcal{L}_{tran}=-\\varepsilon\\cdot KL(\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y}f_ {i}(\\mathbf{s},\\mathbf{\\theta}_{i}),f(\\mathbf{s},\\mathbf{\\theta})), \\tag{3}\\]\n\nwhere \\(KL\\) is Kullback-Leibler divergence function and \\(f(\\mathbf{s},\\mathbf{\\theta})\\) denotes the logits output of the global model on \\(\\mathbf{s}\\) with label \\(y\\). Note that \\(\\varepsilon=1\\) if \\(\\arg\\max f(\\mathbf{s},\\mathbf{\\theta})\\neq y\\) and \\(\\arg\\max\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y}f_{i}(\\mathbf{s},\\mathbf{\\theta}_{i})=y\\) hold, otherwise \\(\\varepsilon=0\\). (\\(\\diamondsuit\\))\n\nWe would like to point out that the existing works [48] and [49] are in line with our research perspective on the transferability of generator, which aims to generate more synthetic data with black circles. However, they do not carefully frame learning objective for enhancing the transferability of generator. Concretely, [48] does not consider the type of synthetic data, i.e., \\(\\varepsilon=1\\) always holds, thus inducing the generation of synthetic data with yellow and purple circles. (\\(\\triangle\\)) [49] focuses on synthetic data satisfying \\(\\arg\\max f(\\mathbf{s},\\mathbf{\\theta})\\neq\\arg\\max\\sum_{i\\in\\mathcal{S}_{t}}\\tau_{i,y} f_{i}(\\mathbf{s},\\mathbf{\\theta}_{i})\\), but enables the generation of synthetic data with purple circles yet. (\\(\\bigtriangledown\\)) 2\n\nFootnote 2: Note that \\(\\triangle\\), \\(\\bigtriangledown\\) and \\(\\diamondsuit\\) denote the strategies for the transferability of generator in [48], in [49] and in this paper, respectively.\n\n**Diversity.** Although we enable \\(G(\\cdot)\\) to generate synthetic data that falls around the real data by optimizing \\(\\mathcal{L}_{fid}\\) and \\(\\mathcal{L}_{tran}\\), the diversity of synthetic data is insufficient. Due to the fact that the generator may get stuck in _local equilibria_ as the training proceeds, model collapse occurs [57, 58]. In this case, the generator may produce similar data points for each class with little diversity. Also, the synthetic data points may not differ significantly among classes. This causes the empirical distribution estimated by \\(G(\\cdot)\\) to cover only a small manifold in the real data space, and thus only partial knowledge of the ensemble model is extracted. To alleviate this issue, we introduce a diversity loss \\(\\mathcal{L}_{div}\\) with label information to increase the diversity of synthetic data as follows:\n\n\\[\\mathcal{L}_{div}=\\exp{(-\\sum_{j,k\\in[B]}\\|\\mathbf{s}_{j}-\\mathbf{s}_{k}\\|_{2}*\\|\\mathbf{h }_{j}-\\mathbf{h}_{k}\\|_{2}/B^{2})}, \\tag{4}\\]\n\nwhere \\(B\\) denotes the batch size and \\(\\mathbf{s}_{j/k}=G(\\mathbf{h}_{j/k}=o(\\mathbf{z}_{j/k},y_{j/k}),\\mathbf{w})\\). Intuitively, \\(\\mathcal{L}_{div}\\) takes \\(\\|\\mathbf{h}_{j}-\\mathbf{h}_{k}\\|_{2}\\) as a weight, and then multiplies it by the corresponding \\(\\|\\mathbf{s}_{j}-\\mathbf{s}_{k}\\|_{2}\\) in each batch \\(B\\), thus imposing a larger weight on the synthetic data points pair (\\(\\mathbf{s}_{j}\\) and \\(\\mathbf{s}_{k}\\)) at the more distant input pair (\\(\\mathbf{h}_{j}\\) and \\(\\mathbf{h}_{k}\\)). 
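Putting Eqs. (2)-(4) together, a minimal PyTorch-style sketch of the three generator losses is given below; it is illustrative only, `ensemble_logits` stands for the weighted ensemble \(\sum_{i\in\mathcal{S}_{t}}\tau_{i,y}f_{i}(\mathbf{s},\mathbf{\theta}_{i})\), and all tensor names are hypothetical.

```python
import torch
import torch.nn.functional as F

def generator_losses(ensemble_logits, global_logits, y, s, h):
    """Sketch of Eqs. (2)-(4) for one batch of synthetic data.

    ensemble_logits : (B, C) weighted ensemble (teacher) logits on s
    global_logits   : (B, C) global model (student) logits on s
    y               : (B,)   sampled labels
    s               : (B, ...) synthetic samples G(o(z, y))
    h               : (B, d) merged generator inputs o(z, y)
    """
    # Eq. (2): fidelity -- s should be classified as y by the ensemble model
    loss_fid = F.cross_entropy(ensemble_logits, y)

    # Eq. (3): transferability -- enlarge the teacher/student discrepancy only on
    # "black-circle" samples: ensemble predicts y, global model does not
    eps = ((global_logits.argmax(dim=1) != y) &
           (ensemble_logits.argmax(dim=1) == y)).float()
    kl = F.kl_div(F.log_softmax(global_logits, dim=1),
                  F.softmax(ensemble_logits, dim=1),
                  reduction="none").sum(dim=1)
    loss_tran = -(eps * kl).mean()

    # Eq. (4): diversity -- pairwise output distances weighted by input distances
    B = s.size(0)
    ds = torch.cdist(s.flatten(1), s.flatten(1))  # ||s_j - s_k||_2
    dh = torch.cdist(h, h)                        # ||h_j - h_k||_2
    loss_div = torch.exp(-(ds * dh).sum() / (B * B))

    return loss_fid, loss_tran, loss_div
```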
Notably, we merge the random noise \\(\\mathbf{z}\\) with label \\(y\\) as the input of \\(G(\\cdot)\\) to overcome spurious solutions [53]. Further, we propose a multiplicative merge operator, i.e., \\(o(\\mathbf{z},y)=\\mathbf{z}\\times\\mathcal{E}(y)\\), where \\(\\mathcal{E}\\) is a trainable embedding and \\(\\times\\) means vector element-wise product. We find that our merge operator enables synthetic data with more diversity compared to others, possibly because the label information is effectively absorbed into the stochasticity of \\(\\mathbf{z}\\) by multiplying them when updating \\(\\mathcal{E}\\). See Section 4.3 for more details and empirical justification.\n\nCombining \\(\\mathcal{L}_{fid}\\), \\(\\mathcal{L}_{tran}\\) and \\(\\mathcal{L}_{div}\\), the overall objective of the generator can be formalized as follows:\n\n\\[\\mathcal{L}_{gen}=\\mathcal{L}_{fid}+\\beta_{tran}\\cdot\\mathcal{L}_{tran}+\\beta _{div}\\cdot\\mathcal{L}_{div}, \\tag{5}\\]\n\nwhere \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) are tunable hyper-parameters. Of note, the synthetic data generated by a well-trained generator should be visually distinct from the real data for privacy protection, while it can capture the common knowledge of the local models to ensure similarity to the real data distribution for utility. More privacy protection is discussed in Appendices A and A.\n\n### Robust Model Distillation\n\nNow we update the global model. Normally, the global model attempts to learn as much as possible logits outputs of the ensemble model on the synthetic data generated by the generator based on knowledge distillation [46, 48, 40, 49]. The updated global model and the ensemble model are then served to train \\(G(\\cdot)\\) with the goal of generating synthetic data that maximizes the mismatch between them in terms of logits outputs (see _transferability_ discussed in the previous section). This adversarial game enables the generator to rapidly explore the training space of the local models to help knowledge transfer from them to the global model. However, it also leads to dramatic shifts in the output distribution of \\(G(\\cdot)\\) across communication rounds under heterogeneous FL scenario (i.e., distribution shifts), causing the global model to catastrophically forget useful knowledge gained in previous rounds. To tackle the deficiency, we propose to equip the server with a generator \\(\\widetilde{G}(\\cdot)\\) parameterized by \\(\\widetilde{\\mathbf{w}}\\) that is an exponential moving average (EMA) copy of \\(G(\\cdot)\\). Its parameters at the \\(t^{th}\\) communication round are computed by\n\n\\[\\widetilde{\\mathbf{w}}^{t}=\\lambda\\cdot\\widetilde{\\mathbf{w}}^{t-1}+(1-\\lambda)\\cdot \\mathbf{w}^{t}, \\tag{6}\\]\n\nwhere \\(\\lambda\\in(0,1)\\) is the momentum. We can easily see that the parameters of \\(\\widetilde{G}(\\cdot)\\) vary very little compared to those of \\(G(\\cdot)\\) over communication rounds, if \\(\\lambda\\) is close to \\(1\\). We further utilize synthetic data from \\(\\widetilde{G}(\\cdot)\\) as additional training data for the global model outside of \\(G(\\cdot)\\), mitigating the huge exploratory distribution shift induced by the large update of \\(G(\\cdot)\\) and achieving stable updates of the global model. 
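As a small illustration of the two components just introduced, the multiplicative merge operator \(o(\mathbf{z},y)=\mathbf{z}\times\mathcal{E}(y)\) and the EMA update of Eq. (6) can be sketched as follows; the class and function names are hypothetical and this is not the authors' code.

```python
import torch
import torch.nn as nn

class LabelMerge(nn.Module):
    """Multiplicative merge operator o(z, y) = z * E(y) (the 'mul' variant)."""
    def __init__(self, num_classes: int, noise_dim: int):
        super().__init__()
        self.embed = nn.Embedding(num_classes, noise_dim)  # trainable embedding E

    def forward(self, z: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        return z * self.embed(y)  # element-wise product

@torch.no_grad()
def ema_update(ema_generator: nn.Module, generator: nn.Module, lam: float = 0.5):
    """Eq. (6): w_tilde^t = lam * w_tilde^{t-1} + (1 - lam) * w^t.

    lam = 0.5 is the value reported in the experimental setup (Section 4.1).
    """
    for w_tilde, w in zip(ema_generator.parameters(), generator.parameters()):
        w_tilde.mul_(lam).add_((1.0 - lam) * w)

# usage (hypothetical): z ~ N(0, I), y ~ p(y); h = merge(z, y); s = generator(h)
```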
Particularly, we compute the Kullback-Leibler divergence between logits of the ensemble model and the global model on the synthetic data points \(\mathbf{s}=G(\mathbf{h}=o(\mathbf{z},y),\mathbf{w})\) and \(\widetilde{\mathbf{s}}=\widetilde{G}(\widetilde{\mathbf{h}}=o(\widetilde{\mathbf{z}},\widetilde{y}),\widetilde{\mathbf{w}})\) respectively, which is formulated as follows:\n\n\[\mathcal{L}_{md}=\mathcal{L}_{kl}+\alpha\widetilde{\mathcal{L}}_{kl}=KL(f(\mathbf{s},\mathbf{\theta}),\sum_{i\in\mathcal{S}_{t}}\tau_{i,y}f_{i}(\mathbf{s},\mathbf{\theta}_{i}))+\alpha\cdot KL(f(\widetilde{\mathbf{s}},\mathbf{\theta}),\sum_{i\in\mathcal{S}_{t}}\tau_{i,\widetilde{y}}f_{i}(\widetilde{\mathbf{s}},\mathbf{\theta}_{i})), \tag{7}\]\n\nwhere \(\alpha\) is a tunable hyper-parameter for balancing different loss items.\n\n**Dynamic Weighting and Label Sampling.** So far, how to determine \(\tau_{i,y}\) and \(p(y)\) is unclear. The appropriate \(\tau_{i,y}\) and \(p(y)\) are essential for effective extraction of knowledge from local models. For clarity, we propose dynamic weighting and label sampling, i.e., \(\tau_{i,y}=n_{i,t}^{y}/n_{\mathcal{S}_{t},t}^{y}\) and \(p(y)=n_{\mathcal{S}_{t},t}^{y}/\sum_{y\in[C]}n_{\mathcal{S}_{t},t}^{y}\), where \(n_{\mathcal{S}_{t},t}^{y}=\sum_{j\in\mathcal{S}_{t}}n_{j,t}^{y}\) and \(n_{i,t}^{y}\) denotes the number of data with label \(y\) involved in training on client \(i\) at round \(t\). Due to space limitations, see Appendix F for their detailed study and experimental justification.\n\n## 4 Experiments\n\n### Experimental Settings\n\n**Datasets.** In this paper, we evaluate different methods with six real-world image classification task-related datasets, namely Fashion-MNIST [69] (FMNIST in short), SVHN [70], CIFAR-10, CIFAR-100 [71], Tiny-ImageNet3 and Food101 [73]. We detail the six datasets in Appendix B. To simulate data heterogeneity across clients, as in previous works [34; 37; 38], we use the Dirichlet distribution \(Dir(\omega)\) to partition the training set for each dataset, thereby allocating local training data for each client. It is worth noting that \(\omega\) is the concentration parameter and smaller \(\omega\) corresponds to stronger data heterogeneity.\n\nFootnote 3: [http://cs231n.stanford.edu/tiny-imagenet-200.zip](http://cs231n.stanford.edu/tiny-imagenet-200.zip)\n\n**Baselines.** We compare DFRD to FedFTG [48] and DENSE [49], which are the most relevant methods to our work. To verify the superiority of DFRD, on the one hand, DFRD, FedFTG and DENSE are directly adopted on the server to transfer the knowledge of the local models to a randomly initialized global model. We call them collectively **data-free methods**. On the other hand, they are utilized as **fine-tuning methods** to improve the global model's performance after computing the weighted average per communication round. In this case, in each communication round, the preliminary global model is obtained using FedAvg [14] in FL with homogeneous models, whereas in FL with heterogeneous models, the PT-based methods, including HeteroFL [29], Federated Dropout [30] (FedDp for short) and FedRolex [31], are employed to get the preliminary global model. 4\n\nFootnote 4: As an example, DFRD+FedRolex indicates that DFRD is used as a fine-tuning method to improve the performance of FedRolex, and others are similar. 
See Tables 1 and 2 for details.\n\n**Configurations.** Unless otherwise stated, all experiments are performed on a centralized network with \(N=10\) active clients. We set \(\omega\in\{0.01,0.1,1.0\}\) to mimic different data heterogeneity scenarios. To simulate model-heterogeneous scenarios, we formulate exponentially distributed budgets for a given \(N\): \(R_{i}=[\frac{1}{2}]^{\min\{\sigma,\lfloor\frac{\rho i}{N}\rfloor\}}\)\((i\in[N])\), where \(\sigma\) and \(\rho\) are both positive integers. We fix \(\sigma=4\) and consider \(\rho\in\{5,10,40\}\). See Appendix D for more details. Unless otherwise specified, we set \(\beta_{tran}\) and \(\beta_{div}\) both to \(1\) in _training generator_, while in _robust model distillation_, we set \(\lambda=0.5\) and \(\alpha=0.5\). And all baselines leverage the same setting as ours. Due to space limitations, see Appendix E for the full experimental setup.\n\n**Evaluation Metrics.** We evaluate the performance of different FL methods by local and global test accuracy. To be specific, for local test accuracy (_L.acc_ for short), we randomly and evenly distribute the test set to each client and harness the test set on each client to verify the performance of local models. In terms of global test accuracy (_G.acc_ for short), we employ the global model on the server to evaluate the global performance of different FL methods via utilizing the original test set. Note that _L.acc_ is reported in **round brackets**. To ensure reliability, we report the average for each experiment over \(3\) different random seeds.\n\n### Results Comparison\n\n**Impacts of varying \(\omega\).** We study the performance of different methods at different levels of data heterogeneity on FMNIST, SVHN, CIFAR-10 and CIFAR-100, as shown in Table 1. One can see that the performance of all methods degrades severely as \(\omega\) decreases, with DFRD being the only method that is robust while consistently leading other baselines with an overwhelming margin w.r.t. _G.acc_. Also, Fig. 3 (a)-(b) show that the learning efficiency of DFRD consistently beats other baselines (see Fig. 8-9 in Appendix H for complete curves). Notably, DFRD, FedFTG and DENSE as fine-tuning methods uniformly surpass FedAvg w.r.t. _G.acc_ and _L.acc_. However, their global test accuracies deteriorate dramatically, or are even substantially worse than that of FedAvg, when they act as data-free methods. We conjecture that FedAvg aggregates the knowledge of local models more effectively than data-free methods. Also, when DFRD is used to fine-tune FedAvg, it can significantly enhance the global model, yet improves the performance of local models to a lesser extent.\n\n**Impacts of different \(\rho\).** We explore the impacts of different model heterogeneity distributions on different methods with SVHN, CIFAR-10, Tiny-ImageNet and FOOD101. A higher \(\rho\) means more clients with \(\frac{1}{16}\)-width capacity w.r.t. the global model. From Table 2, we can clearly see that the performance of all methods improves uniformly with decreasing \(\rho\), where DFRD consistently and overwhelmingly dominates other baselines in terms of \(G.acc\). Specifically, DFRD improves \(G.acc\) by an average of \(11.07\%\) and \(7.54\%\) on SVHN and CIFAR-10 respectively, compared to PT-based methods (including HeteroFL, FedDP and FedRolex). Meanwhile, DFRD uniformly and significantly outstrips FedFTG and DENSE w.r.t. \(G.acc\). 
The selected learning curve shown in Fig. 3 (c) also verifies the above statement (see Fig. 10-12 in Appendix H for more results). The above empirical results show that DFRD not only is robust to varying \(\rho\), but also has significantly intensified effects on the global model for different PT-based methods. However, the results on Tiny-ImageNet and FOOD101 indicate that PT-based methods suffer from inferior test accuracy. Although DFRD improves their test accuracy, the improvement is slight. Notably, DFRD improves negligibly over PT-based methods when all clients exemplify \(\frac{1}{16}\)-width capability. We thus argue that weak clients performing complex image classification tasks learn little useful local knowledge, resulting in the inability to provide effective information for the global model.\n\n### Ablation Study\n\nIn this section, we carefully demonstrate the efficacy and indispensability of core modules and key parameters in our method on SVHN, CIFAR-10 and CIFAR-100. Thereafter, we resort to FedRolex+DFRD to yield all results. For SVHN (CIFAR-10, CIFAR-100), we set \(\omega=0.1\)\((0.1,1.0)\) 
10.79\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 11.59\\(\\pm\\)1.99 \\\\ \\cline{2-14} & 0.677\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 11.59\\(\\pm\\)1.99 \\\\ \\hline \\multirow{2}{*}{Neural} & 10.19\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 \\\\ \\cline{2-14} & **0.667\\(\\pm\\)1.99** & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 \\\\ \\hline \\multirow{2}{*}{Neural} & 10.19\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 \\\\ \\cline{2-14} & **0.667\\(\\pm\\)1.99** & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.99 \\\\ \\hline \\multirow{2}{*}{Neural} & 10.19\\(\\pm\\)1.99 & 10.59\\(\\pm\\)1.\n\nand \\(\\rho=10\\)\\((10,5)\\). Note that we also study the performance of DFRD with different numbers of clients and stragglers, see Appendix G for details.\n\n**Impacts of different transferability constraints.** It can be observed from Table 3 that our proposed transferability constraint \\(\\lozenge\\) uniformly beats \\(\\triangle\\) and \\(\\bigtriangledown\\) w.r.t. global test performance over three datasets. This means that \\(\\lozenge\\) can guide the generator to generate more effective synthetic data, thereby improving the test performance of the global model. Also, we conjecture that generating more synthetic data like those with the black circles (see Fig. 2) may positively impact on the performance of local models, since \\(\\lozenge\\) also slightly and consistently trumps other competitors w.r.t. \\(L.acc\\).\n\n**Impacts of varying merge operators.** To look into the utility of the merge operator, we consider multiple merge operators, including \\(mul\\), \\(add\\), \\(cat\\), \\(ncat\\) and \\(none\\). Among them, \\(mul\\) is \\(o(\\mathbf{z},y)=\\mathbf{z}\\times\\mathcal{E}(y)\\), \\(add\\) is \\(o(\\mathbf{z},y)=\\mathbf{z}+\\mathcal{E}(y)\\), \\(cat\\) is \\(o(\\mathbf{z},y)=[\\mathbf{z},\\mathcal{E}(y)]\\), \\(ncat\\) is \\(o(\\mathbf{z},y)=[\\mathbf{z},y]\\) and \\(none\\) is \\(o(\\mathbf{z},y)=\\mathbf{z}\\). From Table 4, we can observe that our proposed \\(mul\\) significantly outperforms other competitors in terms of \\(G.acc\\). This suggests that \\(mul\\) can better exploit the diversity constraint to make the generator generate more diverse synthetic data, thus effectively fine-tuning the global model. Also, the visualization of the synthetic data in Appendix I validates this statement.\n\n\\begin{table}\n\\begin{tabular}{c c|c|c|c} \\hline \\hline M. O. 
**Necessity of each component for DFRD.** We report the test performance of DFRD after divesting some modules and losses in Table 5. Here, EMA indicates the exponential moving average copy of the generator on the server. We can evidently observe that removing the EMA generator leads to a significant drop in \(G.acc\), which implies that it can generate effective synthetic data for the global model. The reason why the EMA generator works is that it avoids catastrophic forgetting of the global model and ensures the stability of the global model trained in heterogeneous FL. We display synthetic data in Appendix I that further corroborates the above claims. Meanwhile, we perform the leave-one-out test to explore the contributions of \(\mathcal{L}_{tran}\) and \(\mathcal{L}_{div}\) to DFRD separately, and further report the test results of removing them simultaneously. From Table 5, deleting either \(\mathcal{L}_{tran}\) or \(\mathcal{L}_{div}\) adversely affects the performance of DFRD. In addition, their joint absence further exacerbates the degradation of \(G.acc\). 
This suggests that \\(\\mathcal{L}_{tran}\\) and \\(\\mathcal{L}_{div}\\) are vital for the training of the generator. Interestingly, \\(\\mathcal{L}_{div}\\) benefits more to the global model than \\(\\mathcal{L}_{tran}\\). We speculate that the diversity of synthetic data is more desired by the global model under the premise of ensuring the fidelity of synthetic data by optimizing \\(\\mathcal{L}_{fid}\\).\n\n**Varying \\(\\beta_{tran}\\) and \\(\\beta_{div}\\).** We explore the impacts of \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) on SVHN and CIFAR-10. We select \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) from \\(\\{0.25,0.50,0.75,1.00,1.25,1.50\\}\\). From Fig. 4, we can see that DFRD maintains stable test performance among all selections of \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) over SVHN. At the same time, \\(G.acc\\) fluctuates slightly with the increases of \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) on CIFAR-10. Besides, we observe that the worst \\(G.acc\\) in Fig. 4 outperforms the baseline with the best \\(G.acc\\) in Table 2. The above results indicate that DFRD is not sensitive to choices of \\(\\beta_{tran}\\) and \\(\\beta_{div}\\) over a wide range.\n\n**Varying \\(\\alpha\\) and \\(\\lambda\\).** In order to delve into the effect of the EMA generator on DFRD in more details, we perform grid testing on the choices of control parameters \\(\\alpha\\) and \\(\\lambda\\) over SVHN and CIFAR-10. We set \\(\\alpha\\in\\{0.25,0.50,0.75,1.00,1.25,1.50\\}\\) and \\(\\lambda\\in\\{0.1,0.3,0.5,0.7,0.9\\}\\). It can be observed from Fig. 5 that high global test accuracies on SVHN are mainly located in the region of \\(\\alpha<1.25\\) and \\(\\lambda>0.5\\), while on CIFAR-10 they are mainly located in the region of \\(\\alpha>0.25\\) and \\(\\lambda<0.9\\). According to the above results, we deem that the appropriate \\(\\alpha\\) and \\(\\lambda\\) in a specific task is essential for the utility of the EMA generator. Notably, high local test accuracies mainly sit in regions that are complementary to those of high global test accuracies, suggesting that pursuing high \\(G.acc\\) and \\(L.acc\\) simultaneously seems to be a dilemma. How to ensure high \\(G.acc\\) and \\(L.acc\\) simultaneously in the field of FL is an attractive topic that is taken as our future work.\n\n## 5 Conclusion\n\nIn this paper, we propose a new FL method called DFRD, which aims to learn a robust global model in the data-heterogeneous and model-heterogeneous scenarios with the aid of DFKD. 
To ensure the utility, DFRD considers a conditional generator and thoroughly studies its training in terms of _fidelity_, _transferability_ and _diversity_. Additionally, DFRD maintains an EMA generator to augment the global model. Furthermore, we propose dynamic weighting and label sampling to accurately extract the knowledge of local models. At last, we conduct extensive experiments to verify the superiority of DFRD. Due to space constraints, we discuss in detail the **limitations** and **broader impacts** of our work in Appendices J and K, respectively.\n\nFigure 4: Test accuracy (%) with varying \(\beta_{tran}\) and \(\beta_{div}\).\n\nFigure 5: Test accuracy (%) with varying (\(\alpha\),\(\lambda\)).\n\n\begin{table}\n\begin{tabular}{c c|c|c} \hline \hline & SVHN & CIFAR-10 & CIFAR-100 \\ \hline baseline & **34.78\(\pm\)1.49** & **25.57\(\pm\)37** & **28.08\(\pm\)58** \\ & **(15.99\(\pm\)1.49)** & **(16.74\(\pm\)4.27)** & **(13.03\(\pm\)3.46)** \\ \hline -EMA & 26.97\(\pm\)1.28 & 19.80\(\pm\)2.25 & 24.57\(\pm\)0.50 \\ & (14.17\(\pm\)3.09) & (16.55\(\pm\)0.95) & (12.23\(\pm\)0.07) \\ \hline -\(\mathcal{L}_{tran}\) & 29.30\(\pm\)2.55 & 22.97\(\pm\)31 & 27.28\(\pm\)0.46 \\ & (14.24\(\pm\)1.40) & (16.33\(\pm\)1.16) & (12.79\(\pm\)0.40) \\ -\(\mathcal{L}_{div}\) & 27.68\(\pm\)0.85 & 22.12\(\pm\)1.08 & 26.94\(\pm\)1.60 \\ & (14.26\(\pm\)3.09) & (16.64\(\pm\)1.15) & (12.81\(\pm\)0.40) \\ -\(\mathcal{L}_{tran}\), -\(\mathcal{L}_{div}\) & 20.32\(\pm\)1.03 & 21.97\(\pm\)2.48 & 25.50\(\pm\)0.51 \\ & (13.65\(\pm\)1.13) & (16.52\(\pm\)1.39) & (12.30\(\pm\)0.10) \\ \hline \hline \end{tabular}\n\end{table}\nTable 5: Impact of each component in DFRD.\n\n## 6 Acknowledgments\n\nThis work has been supported by the National Natural Science Foundation of China under Grant No.U1911203, and the National Natural Science Foundation of China under Grant No.62377012.\n\n## References\n\n* [1] Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., Franke, U., Roth, S., & Schiele, B. (2016) The cityscapes dataset for semantic urban scene understanding. _In Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 3213-3223.\n* [2] Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017) Imagenet classification with deep convolutional neural networks. Communications of the ACM **60**(6): 84-90.\n* [3] Chen, Z., Li, B., Xu, J., Wu, S., Ding, S., & Zhang, W. (2022) Towards practical certifiable patch defense with vision transformer. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 15148-15158.\n* [4] Lim, W. Y. B., Luong, N. C., Hoang, D. T., Jiao, Y., Liang, Y. C., Yang, Q., Niyato, D., & Miao, C. (2020) Federated learning in mobile edge networks: A comprehensive survey. _IEEE Communications Surveys & Tutorials_**22**(3): 2031-2063.\n* [5] Nguyen, D. C., Ding, M., Pathirana, P. N., Seneviratne, A., Li, J., & Poor, H. V. (2021) Federated learning for internet of things: A comprehensive survey. _IEEE Communications Surveys & Tutorials_**23**(3): 1622-1658.\n* [6] Li, Y., Tao, X., Zhang, X., Liu, J., & Xu, J. (2021) Privacy-preserved federated learning for autonomous driving. _IEEE Transactions on Intelligent Transportation Systems_**23**(7): 8423-8434.\n* [7] Voigt, P., & Von dem Bussche, A. (2017) The eu general data protection regulation (gdpr). 
_A Practical Guide, 1st Ed., Cham: Springer International Publishing_**10**(3152676): 10-5555.\n* [8] Kairouz, P., McMahan, H. B., Avent, B., Bellet, A., Bennis, M., Bhagoji, A. N., et al. (2021). Advances and open problems in federated learning. _Foundations and Trends(r) in Machine Learning_**14**(1-2): 1-210.\n* [9] Konecny, J., McMahan, B., & Ramage, D. (2015) Federated optimization: Distributed optimization beyond the datacenter. _arXiv preprint arXiv:1511.03575_.\n* [10] Konecny, J., McMahan, H. B., Yu, F. X., Richtarik, P., Suresh, A. T., & Bacon, D. (2016). Federated learning: Strategies for improving communication efficiency. _arXiv preprint arXiv:1610.05492_.\n* [11] Li, T., Sahu, A. K., Talwalkar, A., & Smith, V. (2020) Federated learning: Challenges, methods, and future directions. _IEEE signal processing magazine_**37**(3): 50-60.\n* [12] Zhao, Y., Li, M., Lai, L., Suda, N., Civin, D., & Chandra, V. (2018) Federated learning with non-iid data. _arXiv preprint arXiv:1806.00582_.\n* [13] Li, Q., Diao, Y., Chen, Q., & He, B. (2022) Federated learning on non-iid data silos: An experimental study. _In 2022 IEEE 38th International Conference on Data Engineering (ICDE)_, pp. 965-978.\n* [14] McMahan, B., Moore, E., Ramage, D., Hampson, S., & y Arcas, B. A. (2017) Communication-efficient learning of deep networks from decentralized data. _In Artificial intelligence and statistics_, pp. 1273-1282.\n* [15] Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A., & Smith, V. (2020). Federated optimization in heterogeneous networks. _Proceedings of Machine learning and systems_**2**: 429-450.\n* [16] Karimireddy, S. P., Kale, S., Mohri, M., Reddi, S., Stich, S., & Suresh, A. T. (2020) Scaffold: Stochastic controlled averaging for federated learning. _In International Conference on Machine Learning_, pp. 5132-5143.\n\n* [17] Acar, D. A. E., Zhao, Y., Navarro, R. M., Mattina, M., Whatmough, P. N., & Saligrama, V. (2021) Federated learning based on dynamic regularization. _arXiv preprint arXiv:2111.04263_.\n* [18] Kim, J., Kim, G., & Han, B. (2022) Multi-level branched regularization for federated learning. _In International Conference on Machine Learning_, pp. 11058-11073.\n* [19] Mendieta, M., Yang, T., Wang, P., Lee, M., Ding, Z., & Chen, C. (2022) Local learning matters: Rethinking data heterogeneity in federated learning. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 8397-8406.\n* [20] Lee, G., Jeong, M., Shin, Y., Bae, S., & Yun, S. Y. (2022) Preservation of the Global Knowledge by Not-True Distillation in Federated Learning. _Advances in Neural Information Processing Systems_.\n* [21] Zhang, J., Li, Z., Li, B., Xu, J., Wu, S., Ding, S., & Wu, C. (2022) Federated learning with label distribution skew via logits calibration. _In International Conference on Machine Learning_, pp. 26311-26329.\n* [22] Ignatov, A., Timofte, R., Chou, W., Wang, K., Wu, M., Hartley, T., & Van Gool, L. (2018) Ai benchmark: Running deep neural networks on android smartphones. _In Proceedings of the European Conference on Computer Vision (ECCV) Workshops_, pp. 0-0.\n* [23] Makhija, D., Han, X., Ho, N., & Ghosh, J. (2022) Architecture agnostic federated learning for neural networks. _In International Conference on Machine Learning_, pp. 14860-14870.\n* [24] Hong, J., Wang, H., Wang, Z., & Zhou, J. (2022) Efficient split-mix federated learning for on-demand and in-situ customization. _arXiv preprint arXiv:2203.09747_.\n* [25] Afonin, A., & Karimireddy, S. P. 
(2021) Towards model agnostic federated learning using knowledge distillation. _arXiv preprint arXiv:2110.15210_.\n* [26] Cho, Y. J., Manoel, A., Joshi, G., Sim, R., & Dimitriadis, D. (2022) Heterogeneous ensemble knowledge transfer for training large models in federated learning. _arXiv preprint arXiv:2204.12703_.\n* [27] Fang, X., & Ye, M. (2022) Robust federated learning with noisy and heterogeneous clients. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 10072-10081.\n* [28] Horvath, S., Laskaridis, S., Almeida, M., Leontiadis, I., Venieris, S., & Lane, N. (2021) Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout. _Advances in Neural Information Processing Systems_**34**: 12876-12889.\n* [29] Diao, E., Ding, J., & Tarokh, V. (2020) HeteroFL: Computation and communication efficient federated learning for heterogeneous clients. _arXiv preprint arXiv:2010.01264_.\n* [30] Caldas, S., Konecny, J., McMahan, H. B., & Talwalkar, A. (2018) Expanding the reach of federated learning by reducing client resource requirements. _arXiv preprint arXiv:1812.07210_.\n* [31] Alam, S., Liu, L., Yan, M. & Zhang, M. (2022) FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction. _Advances in neural information processing systems_.\n* [32] Tang, Z., Zhang, Y., Shi, S., He, X., Han, B., & Chu, X. (2022). Virtual homogeneity learning: Defending against data heterogeneity in federated learning. _In International Conference on Machine Learning_, pp. 21111-21132.\n* [33] Tan, Y., Long, G., Liu, L., Zhou, T., Lu, Q., Jiang, J., & Zhang, C. (2022) Fedproto: Federated prototype learning across heterogeneous clients. _In Proceedings of the AAAI Conference on Artificial Intelligence_**36**(8): 8432-8440.\n* [34] Luo, K., Li, X., Lan, Y., & Gao, M. (2023) GradMA: A Gradient-Memory-based Accelerated Federated Learning with Alleviated Catastrophic Forgetting. _arXiv preprint arXiv:2302.14307_.\n* [35] Chen, H., Wang, Y., Xu, C., Yang, Z., Liu, C., Shi, B., Xu C., Xu C. & Tian, Q. (2019) Data-free learning of student networks. _In Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 3514-3522.\n* [36] Micaelli, P., & Storkey, A. J. (2019) Zero-shot knowledge transfer via adversarial belief matching. _Advances in Neural Information Processing Systems_**32**.\n* [37] Yurochkin, M., Agarwal, M., Ghosh, S., Greenewald, K., Hoang, N., & Khazaeni, Y. (2019, May) Bayesian nonparametric federated learning of neural networks. _In International conference on machine learning_, pp. 7252-7261.\n\n* [38] Wang, H., Yurochkin, M., Sun, Y., Papailiopoulos, D., & Khazaeni, Y. (2020) Federated learning with matched averaging. _arXiv preprint arXiv:2002.06440_.\n* [39] Li, Q., He, B., & Song, D. (2021) Model-contrastive federated learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10713-10722.\n* [40] Lin, T., Kong, L., Stich, S. U., & Jaggi, M. (2020) Ensemble distillation for robust model fusion in federated learning. _Advances in Neural Information Processing Systems_**33**: 2351-2363.\n* [41] Li, X., Jiang, M., Zhang, X., Kamp, M., & Dou, Q. (2021) Fedbn: Federated learning on non-iid features via local batch normalization. _arXiv preprint arXiv:2102.07623_.\n* [42] Fallah, A., Mokhtari, A., & Ozdaglar, A. (2020) Personalized federated learning: A meta-learning approach. _In Advances in Neural Information Processing Systems_.\n* [43] Pan, S. 
J., & Yang, Q. (2010) A survey on transfer learning. _IEEE Transactions on knowledge and data engineering_**22**(10): 1345-1359.\n* [44] Li, D., & Wang, J. (2019) Fedmd: Heterogenous federated learning via model distillation. _arXiv preprint arXiv:1910.03581_.\n* [45] He, C., Annavaram, M., & Avestimehr, S. (2020) Group knowledge transfer: Federated learning of large cnns at the edge. _Advances in Neural Information Processing Systems_**33**:14068-14080.\n* [46] Zhang, S., Liu, M., & Yan, J. (2020) The diversified ensemble neural network. _Advances in Neural Information Processing Systems_**33**:16001-16011.\n* [47] Zhu, Z., Hong, J., & Zhou, J. (2021) Data-free knowledge distillation for heterogeneous federated learning. _In International Conference on Machine Learning_, pp. 12878-12889.\n* [48] Zhang, L., Shen, L., Ding, L., Tao, D., & Duan, L. Y. (2022) Fine-tuning global model via data-free knowledge distillation for non-iid federated learning. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 10174-10183.\n* [49] Zhang, J., Chen, C., Li, B., Lyu, L., Wu, S., Ding, S., Shen, C. & Wu, C. (2022) DENSE: Data-Free One-Shot Federated Learning. _In Advances in Neural Information Processing Systems_.\n* [50] Heinbaugh, C. E., Luz-Ricca, E., & Shao, H. (2023) Data-Free One-Shot Federated Learning Under Very High Statistical Heterogeneity. _In The Eleventh International Conference on Learning Representations_.\n* [51] Fang, G., Song, J., Wang, X., Shen, C., Wang, X., & Song, M. (2021) Contrastive model inversion for data-free knowledge distillation. _arXiv preprint arXiv:2105.08584_.\n* [52] Yin, H., Molchanov, P., Alvarez, J. M., Li, Z., Mallya, A., Hoiem, D., Jha N. K., & Kautz, J. (2020) Dreaming to distill: Data-free knowledge transfer via deepinversion. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 8715-8724.\n* [53] Do, K., Le, T. H., Nguyen, D., Nguyen, D., Harikumar, H., Tran, T., Rana, S., & Venkatesh, S. (2022) Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation. _Advances in Neural Information Processing Systems_**35**: 10055-10067.\n* [54] Binici, K., Pham, N. T., Mitra, T., & Leman, K. (2022) Preventing catastrophic forgetting and distribution mismatch in knowledge distillation via synthetic data. _In Proceedings of the IEEE/CVF winter conference on applications of computer vision_, pp. 663-671.\n* [55] Binici, K., Aggarwal, S., Pham, N. T., Leman, K., & Mitra, T. (2022) Robust and resource-efficient data-free knowledge distillation by generative pseudo replay. _In Proceedings of the AAAI Conference on Artificial Intelligence_**36**(6):6089-6096.\n* [56] Patel, G., Mopuri, K. R., & Qiu, Q. (2023) Learning to Retain while Acquiring: Combating Distribution-Shift in Adversarial Data-Free Knowledge Distillation. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 7786-7794.\n* [57] Odena, A., Olah, C., & Shlens, J. (2017) Conditional image synthesis with auxiliary classifier gans. _In International conference on machine learning_, pp. 2642-2651.\n* [58] Kodali, N., Abernethy, J., Hays, J., & Kira, Z. (2017) On convergence and stability of gans. _arXiv preprint arXiv:1705.07215_.\n\n* [59] Yoo, J., Cho, M., Kim, T., & Kang, U. (2019) Knowledge extraction with no observable data. _Advances in Neural Information Processing Systems_**60**.\n* [60] Luo, L., Sandler, M., Lin, Z., Zhmoginov, A., & Howard, A. 
(2020) Large-scale generative data-free distillation. _arXiv preprint arXiv:2012.05578_.\n* [61] Nayak, G. K., Mopuri, K. R., Shaj, V., Radhakrishnan, V. B., & Chakraborty, A. (2019) Zero-shot knowledge distillation in deep networks. _In International Conference on Machine Learning_, pp. 4743-4751.\n* [62] Wang, Z. (2021) Data-free knowledge distillation with soft targeted transfer set synthesis. _In Proceedings of the AAAI Conference on Artificial Intelligence_**35**(11): 10245-10253.\n* [63] Liu, R., Wu, F., Wu, C., Wang, Y., Lyu, L., Chen, H., & Xie, X. (2022) No one left behind: Inclusive federated learning over heterogeneous devices. _In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining_, pp. 3398-3406.\n* [64] Kim, M., Yu, S., Kim, S., & Moon, S. M. (2023) DepthFL: Depthwise Federated Learning for Heterogeneous Clients. _In The Eleventh International Conference on Learning Representations_.\n* [65] Singhal, K., Sidahmed, H., Garrett, Z., Wu, S., Rush, J., & Prakash, S. (2021) Federated reconstruction: Partially local federated learning. _Advances in Neural Information Processing Systems_**34**:11220-11232.\n* [66] Zhang, H., Bosch, J., & Olsson, H. H. (2021) Real-time end-to-end federated learning: An automotive case study. _In 2021 IEEE 45th Annual Computers, Software, and Applications Conference_, pp. 459-468.\n* [67] Li, Z., He, Y., Yu, H., Kang, J., Li, X., Xu, Z., & Niyato, D. (2022) Data heterogeneity-robust federated learning via group client selection in industrial iot. _IEEE Internet of Things Journal_**9**(18): 17844-17857.\n* [68] Zeng, S., Li, Z., Yu, H., Zhang, Z., Luo, L., Li, B., & Niyato, D. (2023) Hfedms: Heterogeneous federated learning with memorable data semantics in industrial metaverse. _IEEE Transactions on Cloud Computing_.\n* [69] Xiao, H., Rasul, K., & Vollgraf, R. (2017) Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms. _arXiv preprint arXiv:1708.07747_.\n* [70] Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., & Ng, A. Y. (2011) Reading digits in natural images with unsupervised feature learning.\n* [71] Krizhevsky, A., & Hinton, G. (2009) Learning multiple layers of features from tiny images.\n* [72] He, K., Zhang, X., Ren, S., & Sun, J. (2016) Deep residual learning for image recognition. _In Proceedings of the IEEE conference on computer vision and pattern recognition_, pp. 770-778.\n* [73] Bossard, L., Guillaumin, M., & Van Gool, L. (2014) Food-101-mining discriminative components with random forests. _In Computer Vision-ECCV 2014: 13th European Conference_, pp. 446-461.\n* [74] Dwork, C. (2008) Differential privacy: A survey of results. _In Theory and Applications of Models of Computation_, pp. 1-19.\n* [75] Geyer, R. C., Klein, T., & Nabi, M. (2017) Differentially private federated learning: A client level perspective. _arXiv preprint arXiv:1712.07557_.\n* [76] Cheng, A., Wang, P., Zhang, X. S., & Cheng, J. (2022) Differentially Private Federated Learning with Local Regularization and Sparsification. _In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 10122-10131.\n* [77] Ma, J., Naas, S. A., Sigg, S., & Lyu, X. (2022) Privacy-preserving federated learning based on multi-key homomorphic encryption. _International Journal of Intelligent Systems_**37**(9): 5880-5901.\n* [78] Zhang, L., Xu, J., Vijayakumar, P., Sharma, P. K., & Ghosh, U. (2022). 
Homomorphic encryption-based privacy-preserving federated learning in iot-enabled healthcare system. _IEEE Transactions on Network Science and Engineering_.", "response": "## Review\n\n### Summary\n\nThis paper presents a novel method called DFRD for robust and privacy-constrained Federated Learning (FL) amidst data and model heterogeneity. The authors propose a data-free knowledge distillation approach that utilizes a conditional generator to simulate the local models' training space, mitigating catastrophic forgetting. Experiments on several image classification datasets demonstrate that DFRD outperforms existing methods. While the method shows promise, concerns regarding hyperparameter stability, performance metrics, and experimental clarity were raised. Overall, the paper contributes valuable insights into addressing challenges in heterogeneous FL environments.\n\n### Strengths\n\n- The writing is clear and easy to follow.\n- The proposed method is novel, addressing important issues in Federated Learning.\n- Empirical evaluation is rigorous, with extensive ablation studies conducted.\n- The methodology appears practical, maintaining a consistent spatial footprint across communication cycles.\n- The authors effectively justify the necessity of loss functions and validate their effectiveness in experiments.\n\n### Weaknesses\n\n- The optimal hyperparameters for different datasets are unstable.\n- The quality of synthetic images does not meet the proposed fidelity and diversity goals.\n- Some components, like the EMA strategy, may lack novelty in the context of prior work.\n- Reported learning curves indicate models do not converge after 100 communication rounds, raising concerns about the results.\n- Overall accuracy in experiments is low, raising doubts about the practicality of the method.\n\n### Questions\n\n- Why does the method need to compute KL divergence on both G and \\tilde{G} when EMA is already used?\n- Could the authors clarify the role of the parameters w, \\rho, and \\delta in data heterogeneity?\n- What is the rationale behind selecting FedFTG and DENSE as the most relevant methods compared to others?\n- Can the authors provide a clearer explanation of the methodology behind their observations regarding transferability and decision boundaries?\n- How does the choice of hyperparameters impact the EMA method's performance, and what are the implications of different values?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** Good - The paper demonstrates a solid technical foundation with a few concerns regarding stability and practical performance.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** Good - The writing is generally clear, but some figures and tables could be improved for better comprehension.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** Good - While the paper introduces a new method and methodology, some aspects of its novelty are marginal and could benefit from deeper theoretical justification.\n\n### Rating\n\n**Score:** 6\n\n**Description:** Weak Accept - The paper is technically solid with moderate-to-high impact potential, but it requires some improvements regarding clarity and experimental validation.\n\n### Paper Decision\n\n**Decision:** Accept (poster)\n\n**Reasons:** The paper presents a significant contribution to the field of Federated Learning by addressing critical challenges associated with data and model heterogeneity. 
While there are weaknesses related to performance stability and clarity, the overall originality and rigor of the experimental work justify acceptance. The paper's findings hold practical relevance, making it suitable for presentation.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Birth of a Transformer: A Memory Viewpoint\n\nAlberto Bietti\n\nFlatiron Institute\n\n&Vivien Cabannes\n\nFAIR, Meta\n\n&Diane Bouchacourt\n\nFAIR, Meta\n\n&Herve Jegou\n\nFAIR, Meta\n\n&Leon Bottou\n\nFAIR, Meta\n\nWork done while at FAIR, Meta.FAIR, Meta.\n\nrequire complex nonlinear dynamics [15; 33; 45]. This simplification also allows us to introduce a natural model for individual weight matrices as _associative memories_, which store input-output or key-value pairs of embeddings through their outer products. Random high-dimensional embeddings are particularly well-suited to this viewpoint thanks to their near-orthogonality. We provide a detailed empirical study of the training dynamics, by measuring how quickly each weight matrix learns to behave as the desired associative memory, studying how this is affected by data-distributional properties, and investigate the order in which layers are learned: the model first finds the right output associations from the current token and from uniform attention patterns, then the attention heads learn to focus on the correct key-value pairs. 
We then present theoretical insights on this top-down learning process through population gradient dynamics. Despite its simplicity, our setup already provides useful insights on the internal structure of transformer language models and its evolution throughout training, paving the way for a better understanding of LLMs. We hope that our insights may lead to future research and improvements for LLM practitioners, _e.g._, for optimization algorithms, data pre-processing and selection, interpretability, fine-tuning, and model editing.\n\nIn summary, we make the following contributions:\n\n* We introduce a new synthetic setup to study global vs in-context learning: sequences follow bigram language models, where some bigrams change across sequences and others do not.\n* We view the transformer's weight matrices as associative memories that learn to store specific pairs of embeddings, and use this to derive a simplified but more interpretable model for our task.\n* We empirically study the training dynamics with careful probing: global bigrams are learned first, then the induction head is formed by learning appropriate memories in a top-down fashion.\n* We give theoretical insights on training dynamics, showing how a few top-down gradient steps on the population loss can recover the desired associative memories by finding signal in noisy inputs.\n\nRelated work.After the success of transformer language models for in-context learning was found [6], several works have studied how in-context learning may arise in various contexts [8; 38; 43; 48; 57]. Multiple recent papers have introduced synthetic tasks in order to better understand and interpret transformers [9; 33; 39; 61]. Several works have attempted to understand internal mechanisms in transformers that are responsible for certain behaviors, an area known as \"mechanistic interpretability\" [16; 17; 35; 39; 40; 54]. Memory and neural networks have a long history of connections [5; 18; 19; 21; 26; 27; 30; 34; 50; 55; 56]. The associative memories we consider bear similarity to [29; 56], though we use continuous input/outputs. The reader may also be interested in Fast Weight programmers [46; 47]. The use of random vectors for storing memories is related to [23]. Our approach to probing based on memory recall is related to techniques in [13; 17], though motivated differently. [14; 32; 36] study statistical and approximation properties of transformers, highlighting benefits of sparse attention patterns, but do not consider training dynamics. [25; 31; 49; 51] provide theoretical analyses of learning dynamics in transformers and other attention models, but consider different data setups and focus on single-layer architectures, while we focus on two-layer models and take a different viewpoint based on associative memories.\n\n## 2 Background\n\nThis section provides background on transformer architectures and induction head mechanisms.\n\nTransformer architecture.Transformers [52] operate on sequences of embeddings by alternating self-attention operations and token-wise feed-forward layers. We focus on decoder-only, auto-regressive architectures with a causal attention mask, which are commonly used in large language models trained for next-token prediction [6; 11; 41; 42]. We ignore normalization layers in order to simplify the architecture, since its stability benefits are not as crucial in the small models we consider. 
Given an input sequence of tokens \\(z_{1:T}\\in[N]^{T}\\) of length \\(T\\), where \\(N\\) is the vocabulary size, the transformer operates as follows:\n\n* **Token embeddings**: each discrete token is mapped to a \\(d\\)-dimensional embedding via an embedding map \\(W_{E}\\in\\mathbb{R}^{d\\times N}\\). We will denote the embeddings of tokens \\(z_{t}\\) by \\(x_{t}:=w_{E}(z_{t})\\), where \\(w_{E}(j)\\) is the \\(j\\)-th column of \\(W_{E}\\).\n* **Positional embeddings**: the positional embeddings \\(p_{t}=w_{P}(t)\\in\\mathbb{R}^{d}\\) are added to each token embedding depending on its position in the sequence, leading to the following input embeddings: \\[x_{t}:=x_{t}+p_{t}=w_{E}(z_{t})+w_{P}(t).\\] (1)* **Attention blocks**: given an input sequence \\(x_{1:T}\\in\\mathbb{R}^{d\\times T}\\) of embeddings, the causal attention block computes, for \\(W_{K},W_{Q},W_{V},W_{O}\\in\\mathbb{R}^{d\\times d}\\) (key, query, value, output), and for each \\(t\\), \\[x_{t}^{\\prime}:=W_{O}W_{V}x_{1:t}\\sigma(x_{1:t}^{\\top}W_{K}^{\\top}W_{Q}x_{t}) \\in\\mathbb{R}^{d},\\] (2) where \\(\\sigma\\) takes the softmax of its elements, leading to an attention of the \"values\" \\(W_{V}x_{t}\\) with weights proportional to \\(\\exp((W_{K}x_{s})^{\\top}(W_{Q}x_{t}))\\). Note that the attention operation usually considers multiple \"heads\" that each projects the input to a lower dimension. Here we stick to a single head for simplicity, since it will be sufficient for our purposes. Rewriting (2) on each \\(t\\) as \\(x_{1:T}^{\\prime}=\\mathcal{A}(x_{1:T};W_{K},W_{Q},W_{V},W_{O})\\), the \\(\\ell\\)-th layer of the transformer applies attention with layer-specific parameters along with a residual connection as follows:2 Footnote 2: We omit layer indices for simplicity of notation, and use the assignment operator \\(:=\\) instead. \\[x_{1:T}:=x_{1:T}+\\mathcal{A}(x_{1:T};W_{K}^{\\ell},W_{Q}^{\\ell},W_{V}^{\\ell},W_ {O}^{\\ell})\\]\n* **Feed-forward blocks**: feed-forward blocks operate on individual token embeddings after each attention block, typically by applying a one-hidden-layer MLP to each token, denoted \\(\\mathcal{F}(\\cdot;W_{F})\\), with a residual connection: at layer \\(\\ell\\), we have \\[x_{t}:=x_{t}+\\mathcal{F}(x_{t};W_{F}).\\] Our simplified setup will linear feed-forward layers: \\(\\mathcal{F}(x_{t};W_{F})=W_{F}x_{t}\\).\n* **Unembedding**: After the last transformer layer, the embeddings are mapped back to the vocabulary space \\(\\mathbb{R}^{N}\\) through a linear \"unembedding\" layer \\(W_{U}=[w_{U}(1),\\dots,w_{U}(N)]^{\\top}\\in\\mathbb{R}^{N\\times d}\\), where we refer to the \\(w_{U}(j)\\) as \"output embeddings\". The output of this layer is then fed into a cross-entropy loss for predicting of \\(z_{t+1}\\) from each \\(x_{t}\\).\n\nWe will sometimes refer to the representations \\(x_{t}\\) for a given token \\(t\\) throughout layers as its _residual stream_[16], since they consist of sums of embeddings and layer outputs due to residual connections.\n\nInduction head mechanism.Induction heads [16, 40] are a particular type of mechanism (or \"circuit\") in transformers that allows basic in-context prediction of the form \\([\\cdots,\\mathtt{a},\\mathtt{b},\\cdots,\\mathtt{a}]\\rightarrow\\mathtt{b}\\). These were found to be ubiquitous in transformer language models, playing a key role in enabling various forms of in-context learning. 
The basic mechanism consist of two attention heads in separate layers (see Figure 1 for an illustration): (i) the first is a _previous token head_ which attends to the previous token using positional information and copies its embedding to the next token; (ii) the second is the _induction head_ itself, which attends using the output of the previous token head, and outputs the original token. Our work focuses on this basic copy mechanism, but we note that richer behaviors are possible, particularly when combining multiple such mechanisms (_e.g._, [54]).\n\nFigure 1: **Induction head mechanism**. Induction heads are a two-layer mechanism that can predict \\(b\\) from a context \\([\\dots,a,b,\\dots,a]\\). The first layer is a _previous token head_, which attends to the previous token based on positional embeddings (\\(p_{t}\\to p_{t-1}\\)) and copies it after a remapping (\\(w_{E}(a)\\to w_{1}(a):=W_{O}^{1}W_{V}^{1}w_{E}(a)\\)). The second layer is the _induction head_, which attends based on the output of the previous token head (\\(w_{E}(a)\\to w_{1}(a)\\)) and outputs the attended token, remapped to output embeddings (\\(w_{E}(b)\\to w_{U}(b)\\)). Boxes in the diagram represent different embeddings in superposition on each token’s residual stream (we omit some irrelevant ones for clarity, _e.g._, positional embeddings in upper layers), and attention and output associations are shown with the associative memory viewpoint presented in Section 4.\n\n## 3 Synthetic Setup\n\nIn this section, we introduce our synthetic data setup, which allows us to carefully study how the induction head mechanism develops during training, and how transformers learn to use information from the context vs simple associations from the training data.\n\nBigram data model.Our model for sequences consists of a generic bigram language model (_i.e._, Markov chain), but where the transitions for a few _trigger tokens_ denoted \\(q_{k}\\) are modified in each sequence to always be followed by some _output tokens_\\(o_{k}\\). Let \\(K\\) be the number of trigger tokens, and fix the following distributions over the vocabulary \\([N]\\): \\(\\pi_{b}(\\cdot|i)\\), \\(\\pi_{u}\\), \\(\\pi_{o}(\\cdot|i)\\) and \\(\\pi_{q}\\), for \\(i\\in[N]\\). \\(\\pi_{b}(\\cdot|i)\\) are the global bigram conditionals, \\(\\pi_{u}\\) the global unigram distribution, while \\(\\pi_{o}\\) is used to sample output tokens at each sequence. The triggers are either fixed to some predefined set of tokens \\(Q\\), or sampled from \\(\\pi_{q}\\). Each sequence \\(z_{1:T}^{n}\\) is generated as follows:\n\n* (optional) Sample \\(q_{1},\\dots,q_{K}\\sim\\pi_{q}\\), i.i.d. without replacement (_random triggers_)\n* Sample \\(o_{k}\\sim\\pi_{o}(\\cdot|q_{k})\\), i.i.d. with replacement.\n* Sample \\(z_{1}^{n}\\sim\\pi_{u}\\) and \\(z_{t}^{n}|z_{t-1}^{n}\\sim p_{n}(\\cdot|z_{t-1}^{n})\\) for \\(t=2,\\dots,T\\), where \\[p_{n}(j|i)=\\begin{cases}\\pi_{b}(j|i),&\\text{ if }i\\notin\\{q_{k}\\}_{k}\\\\ \\mathbbm{1}\\{j=o_{k}\\},&\\text{ if }i=q_{k}.\\end{cases}\\]\n\nExperimental setup and initial experiment.Our experiments take \\(\\pi_{u}\\) and \\(\\pi_{b}\\) to be unigram and bigram character-level distributions estimated from the tiny Shakespeare dataset, with vocabulary size \\(N=65\\). We generally sample triggers from \\(\\pi_{q}=\\pi_{u}\\) or fix them to the \\(K\\) most frequent tokens. 
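For concreteness, the sampling process above can be sketched in a few lines; the snippet below is our own illustration (not the authors' released code), with placeholder unigram/bigram statistics standing in for the ones estimated from tiny Shakespeare, random triggers, and uniform outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, K = 65, 256, 3                        # vocabulary size, sequence length, number of triggers

# Placeholder global statistics; the paper estimates pi_u and pi_b from tiny Shakespeare.
pi_u = rng.dirichlet(np.ones(N))            # global unigram distribution
pi_b = rng.dirichlet(np.ones(N), size=N)    # global bigram conditionals pi_b(.|i), one row per i

def sample_sequence():
    triggers = rng.choice(N, size=K, replace=False, p=pi_u)   # q_k ~ pi_q = pi_u, without replacement
    outputs = rng.integers(0, N, size=K)                      # o_k sampled uniformly (pi_o = U)
    out_of = dict(zip(triggers.tolist(), outputs.tolist()))
    z = [int(rng.choice(N, p=pi_u))]                          # z_1 ~ pi_u
    for _ in range(T - 1):
        prev = z[-1]
        if prev in out_of:                                    # triggers are always followed by o_k
            z.append(out_of[prev])
        else:                                                 # otherwise follow the global bigram
            z.append(int(rng.choice(N, p=pi_b[prev])))
    return np.array(z), triggers, outputs

seq, q, o = sample_sequence()
print(seq[:20], q, o)
```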
We sample uniform outputs \\(o_{k}\\) in most cases, but also experiment with \\(\\pi_{o}=\\pi_{b}\\) in Section 5.\n\nAs a preliminary experiment, we train a two-layer vanilla transformer with single-head attention layers and MLP feed-forward layers, following the training setup described in Section 5. On our synthetic data, with fixed (resp. random) triggers and uniform outputs, the model achieves over 99% accuracy (resp. 95%) on output tokens after the first occurrence, versus around 55% for one layer. This gap may be related to the difficulty of modeling three-way interactions with a single attention layer [44]. We visualize attention maps on test sequences in Figure 2, which shows that the model has learned an induction head mechanism. The sequence in the middle figure has \\((q_{k},o_{k})\\in\\{(a,b),(t,s)\\}\\). For fixed triggers, the induction head is only active for the triggers used in training, which suggests the presence of a \"memory\" in the attention layer. For random triggers, it is active on every repeated token, so that the model then needs to disambiguate between in-context and global predictions. For instance, the model may choose to use the retrieved token when it is unlikely to be sampled from the global bigram distribution, something which we found to often be the case in practice.\n\n## 4 The Associative Memory Viewpoint\n\nIn this section, we present our associative memory view on transformers: with nearly orthogonal embeddings, the weight matrices behave as associative memories which store pairs of embeddings as\n\nFigure 2: **Induction head behavior in attention maps** observed on a 2-layer transformer trained on two variants of our synthetic dataset. Each row shows the attention pattern for predicting the next token. (left) The first layer head always attends to the previous token. (center) For fixed triggers \\(Q=\\{a,t\\}\\), the second layer head mainly attends to tokens following such triggers. (right) For random triggers, the induction head mechanism is active for any repeated token (here the only trigger is \\(L\\)). Red and green boxes highlight tokens following previous occurrences of the query, with red boxes corresponding to “correct” output tokens \\(o_{k}\\) following trigger tokens \\(q_{k}\\).\n\na weighted sum of their outer products. We then introduce a simplified transformer model with fixed random embeddings that will yield a precise understanding of learning dynamics using this viewpoint.\n\n### Weight matrices as associative memories\n\nWhile intermediate representations in the transformer consist of high-dimensional vectors in residual streams, they are often \"collapsed\" down to scalar measurements by testing against other representations, using operations of the form \\(v_{j}^{\\top}Wu_{i}\\) for some matrix \\(W\\). For instance, \\(u_{i}\\) and \\(v_{j}\\) could be key and query vectors in an attention head, or input and output embeddings for predicting the next token. 
If \\((u_{i})_{i}\\) and \\((v_{j})_{j}\\) are orthonormal (or nearly-orthonormal) sets of embeddings, a natural way to store desired input-output associations \\(i,j\\) is through the following associative memory:\n\n\\[W=\\sum_{i,j}\\alpha_{ij}v_{j}u_{i}^{\\top}, \\tag{3}\\]\n\nso that the scores \\(v_{j}^{\\top}Wu_{i}\\approx\\alpha_{ij}\\) may be used to assess the relevance of the \\((i,j)\\) pair, _e.g._, as part of a softmax operation in attention or next token prediction.\n\nRandom embeddings.A simple way to ensure that embeddings \\((u_{i})_{i}\\) and \\((v_{j})_{j}\\) are nearly-orthonormal is to set them to be random high-dimensional vectors, such as Gaussian vectors with variance \\(1/d\\) in \\(d\\) dimensions. Indeed, these are known to satisfy [23; 53]\n\n\\[u_{i}^{\\top}u_{i}\\approx 1\\quad\\text{ and }\\quad u_{i}^{\\top}u_{j}\\approx O \\left(\\frac{1}{\\sqrt{d}}\\right),\\]\n\nso that (3) is a reasonable way to define an associative memory, without requiring an explicit activation function as employed in end-to-end memory networks [50]. We may also easily create a \"remapping\" of an existing embedding \\(u_{i}\\) by multiplying it by a random matrix \\(W_{0}\\in\\mathbb{R}^{d\\times d}\\) with Gaussian entries of variance \\(1/d\\), which is commonly used for initializing neural network parameters. The new **remapped embedding**\\(W_{0}u_{i}\\) is near-unit norm, and is near-orthogonal to \\(u_{i}\\) in addition to the other \\(u_{j}\\). Note that this fact implies that attention scores at initialization are near-uniform. See Appendix A for more details.\n\nLearning associative memories.We now show that learning associations of input-output embeddings via gradient descent leads to a weighted associative memory of a form similar to (3).\n\n**Lemma 1** (Gradients and associative memories).: _Let \\(p\\) be a data distribution over input-output tokens, and consider the following loss, where the input and output embeddings \\(W_{E}\\) and \\(W_{U}\\) are fixed:_\n\n\\[L(W)=\\mathbb{E}_{(z,y)\\sim p}[\\ell(y,W_{U}Ww_{E}(z))], \\tag{4}\\]\n\n_with \\(\\ell\\) the cross-entropy loss. The gradients of the population loss \\(L\\) then take the form_\n\n\\[\\nabla_{W}L(W)=\\sum_{k=1}^{N}\\mathbb{E}_{z}[(\\hat{p}_{W}(y=k|z)-p(y=k|z))w_{U} (k)w_{E}(z)^{\\top}], \\tag{5}\\]\n\n_where \\(\\hat{p}_{W}(y\\!=\\!k|x)=\\sigma(W_{U}Ww_{E}(z))_{k}\\) are the model's predicted probabilities. Running gradient descent (with or without weight decay) from initialization \\(W_{0}\\) then leads to estimates of the following form, for some \\(\\alpha_{0}\\) and \\(\\alpha_{ij}\\) that vary with the number of iterations:_\n\n\\[\\hat{W}=\\alpha_{0}W_{0}+\\sum_{i,j}\\alpha_{ij}w_{U}(j)w_{E}(i)^{\\top}. \\tag{6}\\]\n\nNote that (4) is a convex problem in \\(W\\), thus with appropriate step-size and large enough number of steps (with no weight decay) we can expect gradient descent to be close to the global minimum. At the optimum, if the embeddings are nearly orthogonal, then (5) implies \\(\\hat{p}_{W}(y=k|z)\\approx p(y=k|z)\\). We remark that if \\(W_{0}\\) is a Gaussian random matrix, as if often the case for neural network layers, the first term in (6) plays a minor role: testing \\(W_{0}\\) against an input-output pair \\((i,j)\\) with \\(\\alpha_{ij}\\neq 0\\) will concentrate around zero when \\(d\\) is large, while the \\((i,j)\\) term in the sum will concentrate around \\(\\alpha_{ij}\\). 
We also note that the gradient updates described above correspond to a so-called maximal feature learning regime similar to \(\mu\)P updates in intermediate layers of deep networks [58; 59].\n\nHandling superposition.In Lemma 1, we assumed that inputs to the matrix \(W\) are embeddings of a single token. Yet, in transformer models, the inputs to weight matrices are often sums, or _superpositions_ of embeddings. For instance, the initial representations of each token are sums of token and positional embeddings, and representations at later layers are sums of the outputs of each previous block, due to residual connections. Outputs of attention layers are also weighted sums of potentially many embeddings, at least initially when attention patterns are spread out. By linearity, associative memories of the form (6) simply operate individually on each embedding of a superposition, and return a new superposition (up to additional noise due to near-orthogonality). In practice, we will see that learned memories often focus on a single embedding and filter out the rest as noise when irrelevant (see also Section 6). We note that linearity can also be limiting, since it makes it difficult to map sets to specific output embeddings: \(u_{\{i,j\}}:=u_{i}+u_{j}\) needs to map to \(Wu_{i}+Wu_{j}\), and thus cannot map to a new embedding \(v_{\{i,j\}}\). Such mappings of sets thus require non-linear associative memories, for instance by leveraging a sparse decoding of which elements are actually present (_e.g._, using compressed sensing), or by using MLPs with non-linear activations [15; 30].\n\n### 4.2 A simplified two-layer transformer architecture\n\nWe consider a simpler two-layer transformer which is more interpretable with the memory viewpoint, and will help us analyze learning dynamics both empirically and theoretically.\n\n* We freeze input, output and positional embeddings (\(W_{E},W_{U},W_{P}\)) to their random initialization throughout training. This brings us to the Gaussian random vector setup presented above.\n* We fix \(W^{1}_{Q}=W^{2}_{Q}=I_{d}\), so that \(W^{1}_{K}\) and \(W^{2}_{K}\) play the role of both key and query matrices. This changes the gradient dynamics, but simplifies the model by avoiding the redundancy in (2). The pre-softmax attention scores then take the form \(x_{q}^{\top}W^{\ell}_{K}x_{k}\), with \(x_{q}\) (resp. \(x_{k}\)) the query (resp. key) embeddings, which now directly resembles an associative memory lookup.\n* We freeze \(W^{1}_{V}\), \(W^{1}_{O}\), and \(W^{2}_{V}\) to random initialization.
These play the role of _remapping_ attended tokens into new tokens, since for random \(W\) and large \(d\), \(Wx\) is nearly orthogonal to \(x\) and to any other random embeddings independent of \(x\).\n* We train \(W^{2}_{O}\), since the outputs of the induction head need to be mapped back into appropriate output embeddings in order to predict the output tokens \(o_{k}\) correctly.\n* We use a single linear feedforward layer after the second attention block, with weight matrix \(W_{F}\). This is plausibly the layer responsible for learning the global bigram distributions.\n\nWe remark that while this model freezes some parameters at initialization, it is richer than a "lazy" or neural tangent kernel approximation [10; 22; 24] since the model is still highly non-linear in its parameters and, as we will see, induces rich non-linear learning dynamics.\n\nSolving the bigram problem with associative memories.We now show how the above architecture can solve the synthetic bigram problem from Section 3 with well-chosen weight matrices. While this is only a hypothetical model, we show in Section 5 that it is surprisingly faithful to the learned model.\n\nRecall that due to residual connections, the inputs to the weight matrices typically consist of superpositions of various embeddings including token embeddings, positional embeddings, or "remapped" versions thereof. These may be viewed as sets, as illustrated in Figure 1, and associative memories can easily ignore certain elements of the set, _e.g._, ignore token embeddings by only focusing on positional embeddings. The induction head mechanism can be obtained by setting:\n\n\[W^{1}_{K}=\sum_{t=2}^{T}p_{t}p_{t-1}^{\top},\quad W^{2}_{K}=\sum_{k\in Q}w_{E}(k)(W^{1}_{O}W^{1}_{V}w_{E}(k))^{\top},\quad W^{2}_{O}=\sum_{k=1}^{N}w_{U}(k)(W^{2}_{V}w_{E}(k))^{\top}, \tag{7}\]\n\nwhere \(Q\) is the set of triggers when they are fixed, or the support of \(\pi_{q}\) when they are random. In words, the first attention layer matches a token to the previous tokens using positional embeddings. The second layer matches the trigger token to a remapping of itself by \(W^{1}_{O}W^{1}_{V}\), and the output matches a remapping of the input token by \(W^{2}_{V}\) to the corresponding output token. We remark that one can easily make the attention patterns more peaked on the correct associations by rescaling \(W^{1}_{K}\) and \(W^{2}_{K}\). The global bigram statistics can be encoded in the feed-forward layer as follows:\n\n\[W_{F}=\sum_{i=1}^{N}\sum_{j=1}^{N}\log\pi_{b}(j|i)w_{U}(j)w_{E}(i)^{\top}. \tag{8}\]\n\nThe question remains of how the model could trade off predictions from the induction head and from the feed-forward layer, which are added together due to residual connections. With fixed triggers \(Q\), we may simply remove all \(i\in Q\) from the summation in (8), so that the model exclusively relies on the attention head for all triggers (indeed, the output of \(W_{O}^{2}\) is in the span of output embeddings, which are nearly orthogonal to the row space of \(W_{F}\)). When the triggers can vary across different sequences, choosing between the induction head and the feed-forward layer is more ambiguous as it depends on context, and \(W_{F}\) may try to learn more complex mappings that also use the outputs of \(W_{O}^{2}\).
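As a sanity check on this hand-constructed solution, the matrices in (7) can be built explicitly from random embeddings; the sketch below (our illustration, with arbitrary dimensions and a small fixed trigger set) verifies that \(W^{1}_{K}\) attends to the previous position, that \(W^{2}_{K}\) matches a trigger against its remapped copy, and that \(W^{2}_{O}\) maps a remapped token to its output embedding.

```python
import numpy as np

rng = np.random.default_rng(0)
d, N, T = 512, 65, 64
Q = [1, 7, 20]                                      # fixed trigger tokens (illustrative choice)

W_E = rng.normal(0, 1 / np.sqrt(d), (N, d))         # token embeddings w_E(k)
W_U = rng.normal(0, 1 / np.sqrt(d), (N, d))         # output embeddings w_U(k)
P = rng.normal(0, 1 / np.sqrt(d), (T + 1, d))       # positional embeddings p_t
W_OV1 = rng.normal(0, 1 / np.sqrt(d), (d, d))       # frozen remapping W_O^1 W_V^1
W_V2 = rng.normal(0, 1 / np.sqrt(d), (d, d))        # frozen remapping W_V^2

# Hand-constructed memories from Eq. (7).
W_K1 = sum(np.outer(P[t], P[t - 1]) for t in range(2, T + 1))
W_K2 = sum(np.outer(W_E[k], W_OV1 @ W_E[k]) for k in Q)
W_O2 = sum(np.outer(W_U[k], W_V2 @ W_E[k]) for k in range(N))

print(P[10] @ W_K1 @ P[9])                          # previous-token score, close to 1
print(W_E[Q[0]] @ W_K2 @ (W_OV1 @ W_E[Q[0]]))       # trigger query vs its remapped key, close to 1
print(W_E[2] @ W_K2 @ (W_OV1 @ W_E[2]))             # non-trigger token, close to 0

b = 42                                              # an arbitrary output token
logits = W_U @ (W_O2 @ (W_V2 @ W_E[b]))             # W_O^2 sends the remapped token to w_U(b)
print(int(logits.argmax()) == b)                    # True with high probability for large d
```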
In practice, we observe that the model often prefers the induction head, unless its output agrees with one of the top predictions from the global bigram, in which case it tends to prefer those.\n\nBeyond the simplified architecture.While our simplified architecture already captures the relevant aspects for the bigram model, it lacks some of the components that appear in standard transformers, such as non-linear MLPs, trained embeddings, layer normalization, and joint learning of a factorization \\(W_{K}^{\\top}W_{Q}\\) (potentially with low rank matrices \\(W_{K},W_{Q}\\in\\mathbb{R}^{d_{h}\\times d}\\) with \\(d_{h}0\\) for \\(T\\) larger than a small constant.\n* If \\(j=k=i\\), the two occurrences of the trigger happen one after the other, so it must be that \\(t_{o}=T\\). Then \\[a_{k,j,i}\\approx\\frac{2}{T}-\\frac{1}{T(T-1)}-\\frac{2}{T^{2}}=\\frac{2}{T}+O \\left(\\frac{1}{T^{2}}\\right),\\] for \\(T\\) larger than a small constant.\n* If \\(i\\neq j=k\\), all tokens up to position \\(t_{o}-2=T-2\\) are i.i.d. uniform in \\([N]\\setminus\\{j\\}\\), so that \\[a_{k,j,i}\\approx\\frac{T-2}{T(N-1)}-\\frac{1}{T}\\left((T-2)\\cdot\\frac{1}{N-1}+ \\frac{T-2}{(T-1)(N-1)}+\\frac{T-2}{T(N-1)}\\right)=O\\left(\\frac{1}{N}\\right)\\]\n* If \\(i\\neq j\\) and \\(j\\neq k\\), all tokens except at positions \\(t_{o}-1\\), \\(t_{o}\\) and \\(T\\) (we have \\(t_{o}0\\). Recall that in our model, we have \\(z_{T}=q\\) with probability one (\\(q\\) is the trigger token), and that \\(q\\) only appears twice: once at position \\(t_{q}:=t_{o}-11\\),\n\n\\[W^{1}_{K}p_{t}\\approx\\frac{\\eta\\alpha\\alpha^{\\prime}}{NTt}\\sum_{k=1}^{N}(A_{t,k }-B_{t,k}-C_{t,k}+D_{t,k}),\\]\n\nwith\n\n\\[A_{t,k} =\\mathbb{E}[\\mathbbm{1}\\{z_{t}\\!=\\!k\\}\\;\\mathbbm{1}\\{t_{q}\\leq t \\}(p_{t_{q}}-\\bar{p}_{1:t})|y\\!=\\!k] \\tag{26}\\] \\[B_{t,k} =\\mathbb{E}[\\mathbbm{1}\\{z_{t}\\!=\\!k\\}\\;\\mathbbm{1}\\{t_{q}\\leq t \\}(p_{t_{q}}-\\bar{p}_{1:t})]\\] (27) \\[C_{t,k} =\\mathbb{E}[r_{k}\\;\\mathbbm{1}\\{t_{q}\\leq t\\}(p_{t_{q}}-\\bar{p}_ {1:t})|y\\!=\\!k]\\] (28) \\[D_{t,k} =\\mathbb{E}[r_{k}\\;\\mathbbm{1}\\{t_{q}\\leq t\\}(p_{t_{q}}-\\bar{p}_ {1:t})], \\tag{29}\\]\n\nwhere \\(r_{k}:=\\frac{1}{T}\\sum_{t=1}^{T}\\mathbbm{1}\\{z_{t}=k\\}\\). We have\n\n\\[A_{t,k} =\\mathbb{E}[\\mathbbm{1}\\{z_{t}\\!=\\!k\\}(\\mathbbm{1}\\{t_{q}=t-1\\}+ \\mathbbm{1}\\{t_{q}\\in[t-2]\\cup\\{t\\}\\})(p_{t_{q}}-\\bar{p}_{1:t})|y\\!=\\!k]\\] \\[=\\mathbb{P}(t_{q}=t-1|y=k)(p_{t-1}-\\bar{p}_{1:t})+\\frac{1}{N}\\sum _{s\\in[t-2]\\cup\\{t\\}}\\mathbb{P}(t_{q}=s|y=k)(p_{s}-\\bar{p}_{1:t})\\] \\[=\\mathbb{P}(t_{q}=t-1)(p_{t-1}-\\bar{p}_{1:t})+\\frac{1}{N}\\sum_{s \\in[t-2]\\cup\\{t\\}}\\mathbb{P}(t_{q}=s)(p_{s}-\\bar{p}_{1:t})\\] \\[=\\mathbb{P}(t_{q}=t-1)(p_{t-1}-\\bar{p}_{1:t})+O\\left(\\frac{1}{N} \\right),\\]\n\nsince the distribution of \\(t_{q}\\) is the same regardless of \\(y\\). 
We proceed similarly for the other quantities and obtain the following:\n\n\\[B_{t,k} =O\\left(\\frac{1}{N}\\right)\\] \\[C_{t,k} =\\frac{\\mathbb{P}(t_{q}=t-1)}{T}(p_{t-1}-\\bar{p}_{1:t})+O\\left( \\frac{1}{N}\\right)\\] \\[D_{t,k} =O\\left(\\frac{1}{N}\\right).\\]\n\nThis yields the following associative memory behavior, for \\(t>1\\):\n\n\\[p_{s}^{\\top}W^{1}_{K}p_{t}\\approx\\frac{\\eta\\alpha\\alpha^{\\prime}(T-1)}{T^{2}t }\\left\\{\\mathbb{P}(t_{q}=t-1)\\left(\\mathbbm{1}\\{s=t-1\\}-\\frac{1}{t}\\;\\mathbbm{ 1}\\{s\\in[t]\\}\\right)+O\\left(\\frac{1}{N}\\right)\\right\\},\\]\n\nwhich matches the desired \"previous token head\" behavior from (7) when \\(N\\) is large. As in the case of \\(W^{2}_{K}\\), we may then \"saturate\" the softmax by choosing a large enough step-size.\n\n## Appendix C Other Proofs\n\n### Proof of Lemma 1\n\nProof.: Recall the form of the cross-entropy loss for classification with \\(K\\) classes:\n\n\\[\\ell(y,\\xi)=-\\sum_{k=1}^{N}\\mathbbm{1}\\left\\{y=k\\right\\}\\log\\frac{e^{\\xi_{k}} }{\\sum_{j}e^{\\xi_{j}}}.\\]\n\nIts derivatives take the form\n\n\\[\\frac{\\partial\\ell}{\\partial\\xi_{k}}(y,\\xi)=s(\\xi)_{k}-\\mathbbm{1}\\{y=k\\},\\]\n\nwith \\(s(\\xi)_{k}=\\frac{e^{\\xi_{k}}}{\\sum_{j}e^{\\xi_{j}}}\\) the softmax.\n\nThe gradient of \\(L\\) is then given by\n\n\\[\\nabla_{W}L(W) =\\mathbb{E}_{(z,y)}\\left[\\sum_{k=1}^{N}\\frac{\\partial\\ell}{\\partial \\xi_{k}}(y,W_{U}Ww_{E}(z))\\nabla_{W}(w_{U}(k)^{\\top}Ww_{E}(z))\\right]\\] \\[=\\mathbb{E}_{(z,y)}\\left[\\sum_{k=1}^{N}(\\hat{p}_{W}(k|z)-\\mathbb{1 }\\{y=k\\})w_{U}(k)w_{E}(z)^{\\top}\\right]\\] \\[=\\sum_{k=1}^{N}\\mathbb{E}_{z}[\\mathbb{E}_{y}[(\\hat{p}_{W}(k|z)- \\mathbb{1}\\{y=k\\})w_{U}(k)w_{E}(z)^{\\top}\\mid z]]\\] \\[=\\sum_{k=1}^{N}\\mathbb{E}_{z}[(\\hat{p}_{W}(k|z)-\\mathbb{E}_{y}[ \\mathbb{1}\\{y=k\\}|z])w_{U}(k)w_{E}(z)^{\\top}],\\]\n\nwhich yields the desired result. \n\n### Proof of Lemma 2\n\nProof.: Using similar steps as the proof of Lemma 1, we have\n\n\\[\\nabla_{W}L(W) =\\mathbb{E}_{(x,y)}\\left[\\sum_{k=1}^{N}\\frac{\\partial\\ell}{ \\partial\\xi_{k}}(y,W_{U}Wx)\\nabla_{W}(w_{U}(k)^{\\top}Wx)\\right]\\] \\[=\\mathbb{E}_{(x,y)}\\left[\\sum_{k=1}^{N}(\\hat{p}_{W}(k|x)-\\mathbb{1 }\\{y=k\\})w_{U}(k)x^{\\top}\\right]\\] \\[=\\sum_{k=1}^{N}w_{U}(k)\\,\\mathbb{E}_{x}[\\hat{p}_{W}(k|x)x]^{\\top} -\\sum_{k=1}^{N}\\mathbb{E}_{y}[\\mathbb{1}\\{y=k\\}w_{U}(k)\\,\\mathbb{E}[x|y]^{\\top}]\\] \\[=\\sum_{k=1}^{N}w_{U}(k)\\,\\mathbb{E}_{x}[\\hat{p}_{W}(k|x)x]^{\\top} -\\sum_{k,j=1}^{N}p(y=j)\\,\\mathbb{1}\\{j=k\\}w_{U}(k)\\,\\mathbb{E}[x|y=j]^{\\top}\\] \\[=\\sum_{k=1}^{N}p(y=k)w_{U}(k)(\\hat{\\mu}_{k}-\\mu_{k})^{\\top},\\]\n\nwith \\(\\hat{\\mu}_{k}=p(y=k)^{-1}\\,\\mathbb{E}_{x}[\\hat{p}_{W}(k|x)x]\\) and \\(\\mu_{k}=\\mathbb{E}[x|y=k]\\). \n\n### Proof of Lemma 4\n\nProof.: To better isolate the role of keys from values, we denote the keys that are fed into the matrix \\(W\\) by \\(Z=[z_{1},\\ldots,z_{T}]\\in\\mathbb{R}^{d\\times T}\\), while the query is simply \\(x_{T}\\). 
In practice we have \\(Z=X\\), and both are superpositions of potentially multiple embeddings (if \\(W\\) is part of the second attention layer, these are the token embedding, positional embedding, and the output of the first attention layer).\n\nThe gradient of the loss at \\(W=0\\) writes:\n\n\\[\\nabla_{W}L(W)\\big{|}_{W=0}=\\mathbb{E}_{(X,Z,y)}\\left[\\sum_{k=1}^{ N}\\frac{\\partial\\ell}{\\partial\\xi_{k}}(y,\\xi)\\cdot\\nabla_{W}(w_{U}(k)^{\\top} \\Phi_{2}X\\sigma(Z^{\\top}Wx_{T}))\\big{|}_{W=0}\\right] \\tag{30}\\] \\[=\\mathbb{E}_{(X,Z,y)}\\left[\\sum_{k=1}^{N}(\\hat{p}_{W}(k|X,Z)- \\mathbb{1}\\{y=k\\})\\cdot\\nabla_{W}(w_{U}(k)^{\\top}\\Phi_{2}X\\sigma(Z^{\\top}Wx_{ T}))\\big{|}_{W=0}\\right]. \\tag{31}\\]\n\nWe have\n\n\\[\\nabla_{W}(w_{U}(k)^{\\top}\\Phi_{2}X\\sigma(Z^{\\top}Wx_{T}))\\big{|}_ {W=0} =\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t}\\cdot\\nabla_{W}(\\sigma (Z^{\\top}Wx_{T})_{t})\\] \\[=\\frac{1}{T}\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t}\\cdot(z_{t} -\\bar{z}_{1:T})x_{T}^{\\top},\\]where \\(\\bar{z}_{1:T}=\\frac{1}{T}\\sum_{t}z_{t}\\), and we used the fact that\n\n\\[\\frac{\\partial}{\\partial u_{s}}\\sigma(u)_{t}\\big{|}_{u=0}=\\frac{1}{T}\\,\\mathbbm{1 }\\{t=s\\}-\\frac{1}{T^{2}}. \\tag{32}\\]\n\nThe gradient (31) now writes\n\n\\[\\nabla_{W}L(W)\\big{|}_{W=0}=\\sum_{k=1}^{N}\\mathbb{E}_{(X,Z)}[(\\hat{\\rho}_{W}(k| X,Z)-\\mathbbm{1}\\{y=k\\})\\frac{1}{T}\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t} \\cdot(z_{t}-\\bar{z}_{1:T})x_{T}^{\\top}],\\]\n\nand the result follows. \n\n### Proof of Lemma 5\n\nProof.: The linearization of the second layer softmax around zero takes the following form:\n\n\\[\\bar{\\sigma}(Z^{\\top}W_{2}x_{T})_{t}=\\frac{1}{T}(1+z_{t}^{\\top}W_{2}x_{T}-\\bar{ z}_{1:T}^{\\top}W_{2}x_{T}),\\]\n\nwith \\(z_{t}=\\sum_{s=1}^{t}\\Phi_{1}x_{s}\\sigma(p_{1:t}^{\\top}Wp_{t})_{s}\\) the output of the first attention layer.\n\n\\[\\xi_{k} =\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t}\\bar{\\sigma}(Z^{\\top}W_ {2}x_{T})\\] \\[=\\frac{1}{T}\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t}+\\frac{1}{T} \\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t}\\sum_{s=1}^{t}(\\Phi_{1}x_{s})^{\\top} W_{2}x_{T}\\sigma(p_{1:t}^{\\top}Wp_{t})_{s}\\] \\[\\quad-w_{u}(k)^{\\top}\\Phi_{2}\\bar{x}_{1:T}\\cdot\\frac{1}{T}\\sum_{t= 1}^{T}\\sum_{s=1}^{t}(\\Phi_{1}x_{s})^{\\top}W_{2}x_{T}\\sigma(p_{1:t}^{\\top}Wp_{t })_{s}.\\]\n\nThen,\n\n\\[\\nabla_{W}L(W)\\big{|}_{W=0} \\tag{33}\\] \\[\\quad=\\mathbb{E}_{(X,y)}\\left[\\sum_{k=1}^{N}\\frac{\\partial\\ell}{ \\partial\\xi_{k}}(y,\\xi)\\cdot\\nabla_{W}\\xi_{k}\\big{|}_{W=0}\\right]\\] (34) \\[\\quad=\\mathbb{E}_{(X,y)}\\left[\\sum_{k=1}^{N}\\frac{\\partial\\ell}{ \\partial\\xi_{k}}(y,\\xi)\\frac{1}{T}\\sum_{t=1}^{T}w_{U}(k)^{\\top}\\Phi_{2}x_{t} \\frac{1}{t}\\sum_{s=1}^{t}(\\Phi_{1}x_{s})^{\\top}W_{2}x_{T}(p_{s}-\\bar{p}_{1:t}) p_{t}^{\\top}\\right]\\] (35) \\[\\quad-\\mathbb{E}_{(X,y)}\\left[\\sum_{k=1}^{N}\\frac{\\partial\\ell}{ \\partial\\xi_{k}}(y,\\xi)w_{U}(k)^{\\top}\\Phi_{2}\\bar{x}_{1:T}\\cdot\\frac{1}{T} \\sum_{t=1}^{T}\\frac{1}{t}\\sum_{s=1}^{t}(\\Phi_{1}x_{s})^{\\top}W_{2}x_{T}(p_{s}- \\bar{p}_{1:t})p_{t}^{\\top}\\right], \\tag{36}\\]\n\nusing (32). 
The result follows by using \\(\\frac{\\partial\\ell}{\\partial\\xi_{k}}(y,\\xi)=\\hat{p}(k|\\xi)-\\mathbbm{1}\\{y=k\\}\\).\n\n## Appendix D Beyond our Simplified Architecture\n\nWhile the simplified architecture presented in Section 4.2 is sufficient to capture the desired induction behavior for our bigram task, transformer architectures used in practice typically involve more components, as well as more heads and layers. In this section, we discuss how our memory viewpoint extends to such architectures.\n\nFactorizations.In practice, transformers typically involve products of matrices, potentially with a low-rank bottleneck. For instance, our key-query matrices \\(W_{K}\\) should instead be considered as a product \\(W_{K}^{\\top}W_{Q}\\), and the output-value matrices \\(W_{O}\\) and \\(W_{V}\\) are typically jointly optimized.\n\nConsider an associative memory of the form:\n\n\\[W_{*}=\\sum_{i}y_{i}x_{i}^{\\top}\\in\\mathbb{R}^{d\\times d},\\]\n\nwhere \\((x_{i})_{i}\\) and \\((y_{i})_{i}\\) are appropriate collections of near-orthonormal embeddings.\n\nWe now argue that a similar associative memory can be achieved with the factorization \\(W=\\frac{d}{2d^{\\prime}}UV\\), where \\(U\\in\\mathbb{R}^{d\\times d^{\\prime}}\\), \\(V\\in\\mathbb{R}^{d^{\\prime}\\times d}\\) with \\(d^{\\prime}\\leq d\\) (for instance \\(d^{\\prime}\\) could be the dimension of attention heads), are given by:3\n\nFootnote 3: When \\(d^{\\prime}\\) is the head dimension, the \\(\\frac{d}{d^{\\prime}}\\) scaling can be interpreted as the correct multiplier to use in attention logits, which plays a similar role to the \\(\\frac{d}{d^{\\prime}}\\) multiplier in the \\(\\mu\\)P scaling [59], for our setup where the variance of the random entries of input embeddings is \\(1/d\\) instead of \\(1\\) as in [59].\n\n\\[U =U_{0}+\\sum_{i}y_{i}(V_{0}x_{i})^{\\top}\\] \\[V =V_{0}+\\sum_{i}(U_{0}^{\\top}y_{i})x_{i}^{\\top},\\]\n\nwhere \\(U_{0}\\) and \\(V_{0}\\) are random matrices with \\(\\mathcal{N}(0,\\frac{1}{d})\\) entries. These matrices are similar to those that would arise from a single gradient step individually on \\(U\\) and \\(V\\) from initializations \\(U_{0}\\) and \\(V_{0}\\), as in Lemma 1. To see why \\(W\\) behaves like \\(W_{*}\\), note that we have\n\n\\[UV=U_{0}V_{0}+\\sum_{i}y_{i}(V_{0}x_{i})^{\\top}V_{0}+\\sum_{i}(U_{0}^{\\top}y_{i} )x_{i}^{\\top}+\\sum_{i,j}y_{i}(V_{0}x_{i})^{\\top}(U_{0}^{\\top}y_{j})x_{j}^{\\top}.\\]\n\nIt is also easy to check using central limit arguments (similar to remapping in Appendix A) that \\(\\tilde{x}_{i}:=\\sqrt{\\frac{d}{d^{\\prime}}}V_{0}x_{i}\\in\\mathbb{R}^{d^{\\prime}}\\) and \\(\\tilde{y}_{i}:=\\sqrt{\\frac{d}{d^{\\prime}}}U_{0}^{\\top}y_{i}\\) are all nearly-orthonormal embeddings. Thus, we have\n\n\\[2y_{k}^{\\top}Wx_{l} =\\tilde{y}_{k}^{\\top}\\tilde{x}_{l}+\\sum_{i}y_{k}^{\\top}y_{i} \\tilde{x}_{i}^{\\top}\\tilde{x}_{l}+\\sum_{i}\\tilde{y}_{k}^{\\top}\\tilde{y}_{i}x_ {i}^{\\top}x_{l}+\\sum_{i,j}y_{k}^{\\top}y_{i}\\tilde{x}_{i}^{\\top}\\tilde{y}_{j}x_ {j}^{\\top}x_{l}\\] \\[\\approx 0+\\mathbbm{1}\\{k=l\\}+\\mathbbm{1}\\{k=l\\}+0,\\]\n\nwhere the first and last term vanish due to the cross-terms \\(\\tilde{y}_{i}^{\\top}\\tilde{x}_{i^{\\prime}}\\) which vanish for any \\(i,i^{\\prime}\\). 
Thus, \\(W\\) and \\(W_{*}\\) encode the same associations, when \\(d\\) and \\(d^{\\prime}\\) are large enough to ensure near-orthogonality.\n\nLayer-normalization.Normalization layers [4, 60] are typically used in transformers to improve training stability [52], and are applied on each token representation, either after [52] or before [6] each block. It may be seen as an operation of the form4\n\nFootnote 4: RMSNorm [60] would use the variance instead of the norm in the denominator, leading to an additional \\(\\sqrt{d}\\) factor in the numerator. Here we use the norm, which is more natural when embeddings have near-unit norm, in contrast to the \\(\\approx\\sqrt{d}\\) norm for the standard parameterization.\n\n\\[LN(x)=\\frac{x}{\\|x\\|},\\]\n\napplied to the input or output of a given block.\n\nIn order to obtain a basic understanding of the role of layer-norm in our associative memory setup, we may consider the setup Lemma 1, with a normalization applied after the linear operation, leading to the population loss:\n\n\\[L(W)=\\mathbb{E}_{(x,y)\\sim p}[\\ell(y,W_{U}LN(Wx))]. \\tag{37}\\]\n\nThe gradients then take the form\n\n\\[\\nabla_{W}L(W)=\\sum_{k=1}^{N}\\mathbb{E}_{x}\\left[\\frac{\\hat{p}_{W}(y=k|x)-p(y =k|x)}{\\|Wx\\|}\\left(I-\\frac{(Wx)(Wx)^{\\top}}{\\|Wx\\|^{2}}\\right)w_{U}(k)x^{\\top }\\right]. \\tag{38}\\]This illustrates that in addition to weighting the updates on class \\(k\\) by the prediction error \\(\\hat{p}(k|x)-p(k|x)\\), the updates are also projected on the orthogonal of the \\(Wx\\) direction. This means that an update on the direction \\(w_{U}(k)x^{\\top}\\) will occur only to the extent that \\(Wx\\) is not already aligned with \\(w_{U}(k)\\). Thus, if an association is \"stored\" once, so that \\(Wx\\approx w_{U}(y)\\), layer-norm will avoid further updating \\(W\\) in that direction, hopefully avoiding norms that grow too much, and also encouraging frequent and infrequent tokens to be weighted similarly in the final memory (see also [7] for more discussion on this).\n\nNote that at random initialization \\(Wx\\) is nearly orthogonal to any \\(w_{U}(k)\\), so that layer-norm only starts playing a significant role later in training, and does not affect our theoretical analysis based on single gradient steps.\n\nMLP blocks.If we denote by \\((u_{i})_{i}\\) and \\((v_{i})_{i}\\) collections of near-orthonormal input and output embeddings, an MLP block may encode associations \\((i,j)\\in\\mathcal{M}\\) as follows:\n\n\\[F(x)=\\sum_{(i,j)\\in\\mathcal{M}}v_{j}\\sigma(u_{i}^{\\top}x-b),\\]\n\nwhere \\(\\sigma\\) is a non-linear activation and \\(b\\) a bias term. Then, if one assumes that \\(u_{i}^{\\top}u_{j}\\leq b\\) for \\(i\\neq j\\) and \\(\\sigma(t)=0\\) for \\(t<0\\), then this can help filter out noise that arises from near-orthogonality, and may then lead to additional storage capacity, at the cost of additional computation (see, _e.g._, [7, 30]).\n\nAn additional benefit of MLP layers discussed in Section 4 is that they may encode many-to-many associations, which is useful when multiple embeddings are in the residual stream and need to be considered jointly (_e.g._, a subject and a relation in a factual recall task [35]). This may be achieved, for instance, by considering embeddings \\(u_{\\mathcal{I}}=\\frac{1}{\\sqrt{|\\mathcal{I}|}}\\sum_{i\\in\\mathcal{I}}u_{i}\\), where \\(\\mathcal{I}\\) are sets of bounded size (_e.g._, as obtained using layer-norm over a residual stream). 
Then, assuming the \(u_{i}\) are nearly-orthonormal, we have \(u_{\mathcal{I}}^{\top}u_{\mathcal{I}}\approx 1\) while \(u_{\mathcal{I}}^{\top}u_{\mathcal{I}^{\prime}}\lesssim 1-\delta\) if \(\mathcal{I}\neq\mathcal{I}^{\prime}\), for some \(\delta\) that depends on the maximal cardinality of these sets. Although \(1-\delta\) is no longer vanishingly small, defining\n\n\[F(x)=\sum_{(\mathcal{I},\mathcal{J})\in\mathcal{M}}v_{\mathcal{J}}\sigma(u_{\mathcal{I}}^{\top}x-b),\]\n\nthe non-linearity may still succeed at filtering out any \(\mathcal{I}\) that does not correspond to the query set in \(x\). We leave the question of how such non-linear associative memories may arise from training dynamics to future work.\n\nMultiple heads and layers.We remark that our view of weights as associative memories applies to any parameter other than embedding/unembedding layers, and thus naturally extends to multiple heads (using the low-rank factorizations described above) and multi-layer models.\n\nIt is important to note, however, that the redundancy introduced by having more heads and layers makes it more challenging to identify which layer/head/weight will learn certain associations (see, _e.g._, Figure 11 in Appendix E). This is in contrast to our simplified architecture of Section 4.2, where we may identify the role of each matrix (up to some possible redundancy when using the feed-forward layer \(W_{F}\)). In practice, mechanisms may appear in different heads/layers across different training runs, which makes interpretability more challenging, and typically requires some causal identification techniques, such as mediation analysis [35, 54].\n\n## Appendix E Experiment Details and Additional Experiments\n\nIn this section, we present additional details on the experiments, as well as additional results.\n\nComputing setup.We use PyTorch and each run uses a single GPU, along with 60 CPU cores for real-time data generation. We will make our code available upon publication.\n\nHyperparameters.We now provide the hyperparameters used in each figure. The SGD step-size is denoted \(\eta\). We fix the momentum parameter to \(0.9\) and the weight decay parameter to \(10^{-4}\). \(U\) denotes the uniform distribution over \([N]\).\n\n* Figure 2: \(K=3\), \(\pi_{q}=\pi_{u}\) (random triggers) or \(Q\) is the \(K\) most likely elements of \(\pi_{u}\), \(\pi_{o}=U\), \(d=128\), \(d_{hidden}=4\times 128\) (hidden dimension of the feed-forward MLPs), \(\eta=0.2\).\n* Figure 3: \(K=5\), \(\pi_{q}=\pi_{u}\) (random triggers), \(\pi_{o}=U\), \(d=128\), \(\eta=0.2\).\n* Figure 4(left) and Figure 5: \(\pi_{o}=U\), \(d=128\), \(\eta=1\). For random triggers we use \(\pi_{q}=\pi_{u}\). For \(K=1\) with a fixed frequent trigger, the only trigger is the most probable token according to \(\pi_{u}\), while for \(K=5\) with fixed rare triggers, the five triggers are the 6-th to 10-th most probable tokens according to \(\pi_{u}\).\n* Figure 4(center): \(K=3\), \(\pi_{q}=\pi_{u}\) (random triggers), \(\pi_{o}=U\) or \(\pi_{o}=\pi_{b}\) (conditioned on the trigger), \(d=128\), \(\eta=1\).\n* Figure 4(right): \(K=3\), \(\pi_{q}=\pi_{u}\) (random triggers), \(\pi_{o}=U\), \(d=128\), \(\eta=1\).\n\nMemory recall probes and data-distributional properties.Figure 5 and Figure 6 show the evolution of the different memory probes for the settings considered in Figure 4(left,center).
Figure 5 highlights that associative memories for the induction head are slower to learn when using few triggers (small \(K\)), rare fixed triggers, or random triggers (note that the probe for \(W_{K}^{2}\) with fixed triggers only shows recall accuracy on the set of triggers \(Q\), which is an easier task). Figure 6 shows that using uniform output tokens can lead to better fitting of \(W^{2}_{O}\) and \(W^{2}_{K}\) compared to using output tokens sampled from bigrams. In addition to the increased diversity when using uniform outputs, this may also be due to the fact that bigram outputs are already well predicted using global statistics with just the feed-forward layer, hence the gradient signal on such well-predicted tokens may not propagate through the induction head mechanism. In contrast, the recall accuracy for \(W^{1}_{K}\) is comparable for both settings, since the previous token head is useful at all positions regardless of the output token distribution.\n\nFigure 5: Memory recall probes for the setting of Figure 4(left).\n\nFigure 6: Memory recall probes for the setting of Figure 4(center).\n\nVisualizing memories.Figure 7 shows visualizations of the associative memory behaviors after training. We see that diagonal elements dominate in the plots, which corresponds to correct associations leading to high 'memory recall'. Nonetheless, we see that some of the diagonal elements are weaker than others, particularly for late positions in \(W^{1}_{K}\), and for some of the trigger tokens in \(W^{2}_{K}\), while the diagonal for \(W^{2}_{O}\) seems to be roughly uniform. We note that characters corresponding to capital letters have token index 13 to 38, while lowercase letters have index 39 to 64. The association patterns found in \(W^{2}_{K}\) then seem related to frequencies of appearance of triggers, whereby capital letters appear less frequently in the data, and are also less frequently chosen as triggers, compared to lowercase letters. Similarly, since the first occurrence of triggers is typically early in a sequence, it is natural that \(W^{1}_{K}\) learns stronger associations at earlier positions. In contrast, diagonal elements for \(W^{2}_{O}\) are nearly uniform, which agrees with the fact that output tokens are sampled uniformly in this setup. We refer to the follow-up work [7] for an analysis of how data frequencies affect association strength in such associative memories.\n\nEffect of dimension.Recall that our study of associative memories with random embeddings requires large dimension \(d\) in order to ensure near-orthogonality, and thus store input-output pairs more effectively. In Figure 8, we evaluate the recall accuracy for \(W^{2}_{O}\) for varying dimension, when training it by itself, and only on the output tokens (as in Figure 3). We see that higher dimension leads to faster learning of the memory; in particular, \(d=128\) seems sufficient for fast learning after just a few iterations with a tuned learning rate. If the learning rate is not tuned, we notice that there is a further slowdown for low dimension, which is likely due to the fact that our experiments use the standard parameterization of neural networks at initialization, rather than maximal update parameterizations [58]. Note that learning \(W^{2}_{O}\) alone is a convex optimization problem, and we hypothesize that higher dimension makes the problem better conditioned, and hence easier to learn.
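The recall probes referenced throughout this section reduce to a simple nearest-association test; a generic version could look like the following sketch (an illustrative helper of ours, not the released probing code), which scores a weight matrix \(W\) against a set of intended input-output associations.

```python
import numpy as np

def memory_recall(W, inputs, outputs, targets):
    """Fraction of inputs whose top-scoring output matches the intended association.

    W       : (d, d) weight matrix viewed as an associative memory
    inputs  : (n, d) input embeddings u_i fed into W
    outputs : (m, d) candidate output embeddings v_j tested against W u_i
    targets : (n,)   index j(i) of the intended output for each input i
    """
    scores = outputs @ W @ inputs.T                # scores[j, i] = v_j^T W u_i
    return float((scores.argmax(axis=0) == np.asarray(targets)).mean())

# For example, probing W_O^2 would use inputs W_V^2 w_E(k), outputs w_U(j),
# and targets k -> k; probing W_K^1 would use positional embeddings p_t and p_{t-1}.
```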
In Figure 9, we show "one-step" recall accuracies for classifying output tokens from the average attention input to \(W_{O}^{2}\), given by\n\n\[R_{1}=\frac{1}{N}\sum_{k=1}^{N}\mathbbm{1}\left\{k=\arg\max_{k^{\prime}}(W_{V}^{2}w_{E}(k^{\prime}))^{\top}(\mu_{k}-\mu)\right\},\]\n\nwhere \(\mu_{k}=\mathbb{E}[x|y=k]\) and \(\mu=\mathbb{E}[x]\), for \(x=\frac{1}{t}\sum_{s=1}^{t}W_{V}^{2}w_{E}(z_{s})\) and \(y=z_{t+1}\), when \(z_{t}\) is a trigger token after its first occurrence. Expectations are computed over batches of data of varying sizes and in different dimensions. We call this "one-step" since it is related to the classifier obtained after performing a single gradient step on \(W_{O}^{2}\) from zero initialization (see Lemma 2 and Appendix B.3.1). The plots illustrate that this simple one-step model is already able to extract relevant signal from the noisy average attention, after a handful of batches of data, corresponding to tens of thousands of tokens, and that this gets easier as the dimension increases.\n\nFigure 8: Effect of dimension on learning \(W^{2}_{O}\) alone, with fixed or tuned learning rate.\n\nFigure 9: Accuracy of one-step estimate of \(W^{2}_{O}\) with varying dimension and number of batches used for computing expectations. Each batch consists of 32 sequences of 256 tokens for a total of 8,192 tokens, with \(K=5\) random triggers and uniform outputs.\n\nMore complex architectures.Figure 10 shows training behavior for a more complex model than the simplified one considered in Section 5, namely one where we train all parameters, replace the linear \(W_{F}\) feedforward layer by a two-layer MLP, and add (pre-)layer-normalization. Despite these changes, we see similar behavior for the memory recall probes (which now involve embeddings that may change over time), suggesting that the model is still identifying the same memory associations, despite the additional redundancies in parameters and modified training dynamics.\n\nFigure 11 shows the attention maps obtained when training a multi-head version of our two-layer model, with four attention heads per layer. We see that the redundancy of multiple heads creates difficulties in identifiability: only one of the first layer heads learns the previous token behavior, while the induction behavior is shared across different heads at the second layer. This illustrates the challenges of interpretability in the presence of redundant models, which then require additional work to identify which of the layers and heads are performing a given behavior, _e.g._, through interventions and causal mediation analysis [35, 54].\n\nFigure 11: Attention maps for a two-layer model with 4 attention heads. In the first layer (top), the previous token mechanism is mostly achieved by one of the four heads, while the induction behavior at the second layer (bottom) is distributed across the different heads.\n\nFigure 10: Training of a more realistic architecture with (i) a ReLU MLP instead of a linear layer for the second feed-forward layer, (ii) all parameters trained, including embeddings, (iii) pre-layer normalization. The loss, in-context accuracy and memory recall probes are similar to the simplified architecture (see, _e.g._, Figure 4).", "response": "## Review\n\n### Summary\n\nThis paper provides a comprehensive analysis of in-context learning (ICL) mechanisms in a simplified two-layer transformer architecture through a synthetic task that involves associative recall.
The authors focus on the emergence of induction heads and associative memory in attention weight matrices, linking them to global bigram distribution learning. They present both mathematical justifications and experimental results, revealing how these mechanisms develop during training. Although the study is insightful and advances understanding of ICL in transformers, concerns are raised regarding the simplicity of the model and task, which may limit the generalizability of the findings to more complex systems.\n\n### Strengths\n\n- The problem is well-motivated and addresses a significant aspect of language models: understanding ICL.\n- The theoretical contributions are interesting and provide a non-trivial analysis of the mechanisms at play.\n- The methodology is clean, utilizing memory recall probes effectively.\n- The paper is well-written and easy to follow, with clear motivations and conclusions.\n- The experimental results provide valuable insights into the learning dynamics of transformers.\n\n### Weaknesses\n\n- The simplicity of the analyzed architecture raises questions about the applicability of the findings to more complex transformers.\n- The significance of the results could be better articulated, particularly concerning practical implications.\n- The paper lacks a thorough discussion of limitations and the potential non-generalizability of results to larger models.\n- Some design decisions, like freezing layers, may lead to biased results that do not reflect the behavior of fully trained models.\n- The overall presentation could be improved for clarity, as some sections are difficult to follow.\n\n### Questions\n\n- What are the implications of ignoring layer normalization and dropout on the findings?\n- Could the authors clarify the significance of framing induction heads as associative memory?\n- Were any experiments performed to test the effects of varying the dimensionality of the model?\n- How do the findings apply to larger transformer models, and what additional experiments could validate this?\n- Can the authors provide a proof sketch for Lemma 1 and discuss its implications for real transformers?\n\n### Soundness\n\n**Score:** 3\n\n**Description:** 3 = good; the theoretical framework and empirical results are solid, but the simplifications in the model may affect the applicability to real-world scenarios.\n\n### Presentation\n\n**Score:** 3\n\n**Description:** 3 = good; while the paper is generally well-written, there are clarity issues that could benefit from editing and restructuring for better flow.\n\n### Contribution\n\n**Score:** 3\n\n**Description:** 3 = good; the paper offers valuable insights and a useful case study, but the novelty is somewhat limited by the simplicity of the task and model.\n\n### Rating\n\n**Score:** 7\n\n**Description:** 7 = accept, but needs minor improvements; the paper is technically solid and impactful in understanding ICL in transformers, though some aspects require further exploration.\n\n### Paper Decision\n\n**Decision:** Accept\n\n**Reasons:** The paper provides significant insights into how in-context learning mechanisms emerge in transformer architectures, contributing to the understanding of language models. While there are valid concerns regarding the simplicity of the model and task, the rigorous theoretical and experimental work presented makes it a valuable contribution to the field.\n"} {"query": "You are a highly experienced, conscientious, and fair academic reviewer, please help me review this paper. 
The review should be organized into nine sections: \n1. Summary: A summary of the paper in 100-150 words.\n2. Strengths/Weaknesses/Questions: The Strengths/Weaknesses/Questions of paper, which should be listed in bullet points, with each point supported by specific examples from the article where possible.\n3. Soundness/Contribution/Presentation: Rate the paper's Soundness/Contribution/Presentation, and match this score to the corresponding description from the list below and provide the result. The possible scores and their descriptions are: \n 1 poor\n 2 fair\n 3 good\n 4 excellent\n4. Rating: Give this paper an appropriate rating, match this rating to the corresponding description from the list below and provide the result. The possible Ratings and their descriptions are: \n 1 strong reject\n 2 reject, significant issues present\n 3 reject, not good enough\n 4 possibly reject, but has redeeming facets\n 5 marginally below the acceptance threshold\n 6 marginally above the acceptance threshold\n 7 accept, but needs minor improvements \n 8 accept, good paper\n 9 strong accept, excellent work\n 10 strong accept, should be highlighted at the conference \n5. Paper Decision: It must include the Decision itself(Accept or Reject) and the reasons for this decision, based on the criteria of originality, methodological soundness, significance of results, and clarity and logic of presentation.\n\nHere is the template for a review format, you must follow this format to output your review result:\n**Summary:**\nSummary content\n\n**Strengths:**\n- Strength 1\n- Strength 2\n- ...\n\n**Weaknesses:**\n- Weakness 1\n- Weakness 2\n- ...\n\n**Questions:**\n- Question 1\n- Question 2\n- ...\n\n**Soundness:**\nSoundness result\n\n**Presentation:**\nPresentation result\n\n**Contribution:**\nContribution result\n\n**Rating:**\nRating result\n\n**Paper Decision:**\n- Decision: Accept/Reject\n- Reasons: reasons content\n\n\nPlease ensure your feedback is objective and constructive. The paper is as follows:\n\n# Large Graph Property Prediction via\n\nGraph Segment Training\n\nKaidi Cao\\({}^{1}\\), Phitchaya Mangpo Phothilimthana\\({}^{2}\\), Sami Abu-El-Haija\\({}^{2}\\), Dustin Zelle\\({}^{2}\\),\n\nYanqi Zhou\\({}^{2}\\), Charith Mendis\\({}^{3}\\), Jure Leskovec\\({}^{1}\\), Bryan Perozzi\\({}^{2}\\)\n\n\\({}^{1}\\)Stanford University, \\({}^{2}\\)Google, \\({}^{3}\\)UIUC\n\nThe work was partially completed during Kaidi Cao's internship at Google.The work was partially completed when Charith Mendis was a visiting researcher at Google.\n\n###### Abstract\n\nLearning to predict properties of a large graph is challenging because each prediction requires the knowledge of an entire graph, while the amount of memory available during training is bounded. Here we propose Graph Segment Training (GST), a general framework that utilizes a divide-and-conquer approach to allow learning large graph property prediction with a constant memory footprint. GST first divides a large graph into segments and then backpropagates through only a few segments sampled per training iteration. We refine the GST paradigm by introducing a historical embedding table to efficiently obtain embeddings for segments not sampled for backpropagation. To mitigate the staleness of historical embeddings, we design two novel techniques. First, we finetune the prediction head to fix the input distribution shift. Second, we introduce _Stale Embedding Dropout_ to drop some stale embeddings during training to reduce bias. 
We evaluate our complete method GST+EFD (with all the techniques together) on two large graph property prediction benchmarks: MalNet and TpuGraphs. Our experiments show that GST+EFD is both memory-efficient and fast, while offering a slight boost on test accuracy over a typical full graph training regime.\n\n## 1 Introduction\n\nGraph property prediction is the task of predicting a certain property or characteristic of an entire graph [3]. Important applications include predicting properties of molecules [32, 12], predicting properties of programs/code [1, 12, 6, 41] and predicting properties of organisms based on their protein-protein interaction networks [27, 42].\n\nThese popular graph property prediction tasks deal with relatively small graphs, so the scalability issue arises only from a large number of (small) graphs.\n\nHowever, graph property prediction tasks also face another scalability challenge, which arises due to the large size of each individual graph, as some graphs can have millions or even billions of nodes and edges [10]. Training typical Graph Neural Networks (GNNs) to classify such large graphs can be computationally infeasible, as the memory needed scales at least linearly with the size of the graph [40]. This presents a challenge, as even the most powerful GPUs, which are optimized for handling large amounts of data, only have a limited amount of memory available.\n\nPrevious efforts to improve the scalability of GNNs have mostly focused on developing methods for node-level and link-level prediction tasks, which can be performed using sampled subgraphs [11, 5, 13, 38, 43, 2, 21]. However, there is a lack of research on how to train scalable models for _property prediction of large graphs_. Training a model on a sampled subgraph alone is insufficient for these types of tasks, as the sampled subgraph may not contain all the necessary information to make accurate predictions for the entire graph. For example, if the task is to predict the diameter of the graph, it is unlikely that a fixed-size subgraph would contain sufficient features for the GNN to make an accurate and reliable prediction. Thus, it is essential to aggregate information from the entire graph to predict a graph property.\n\nIn this paper, we address the problem of property prediction of large graphs. We propose Graph Segment Training (GST)3, which is able to train on large graphs with a constant (GPU) memory footprint.\n\nFootnote 3: Source code available at [https://github.com/kaidic/GST](https://github.com/kaidic/GST).\n\nOur approach partitions each large graph into smaller segments with a controlled size in the pre-processing phase. During the training process, a random subset of segments is selected to update the model at each step, rather than using the entire graph. This way, we need to maintain intermediate activations for only a few segments for backpropagation; embeddings for the remaining segments are created without saving their intermediate activations. The embeddings of all segments are then combined to generate an embedding for the original large graph, which is used for prediction. Therefore, each large graph has an upper bound on memory consumption during training regardless of its original size.
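To make this segment-sampling step concrete, here is a minimal sketch of one such update (our illustration of the idea described above, not the authors' released implementation; `backbone`, `head`, `segments`, and `gst_step` are hypothetical names, and PyTorch is assumed). Only the sampled segments retain activations for backpropagation:

```python
# Hypothetical sketch of one Graph Segment Training update (illustration only).
import random
import torch

def gst_step(backbone, head, segments, label, loss_fn, optimizer, num_sampled=2):
    # Pick a small random subset of segments that will carry gradients.
    sampled = set(random.sample(range(len(segments)), k=min(num_sampled, len(segments))))

    seg_embeddings = []
    for j, seg in enumerate(segments):
        if j in sampled:
            # Activations are stored only for the sampled segments.
            seg_embeddings.append(backbone(seg))
        else:
            # Remaining segments contribute embeddings without stored activations,
            # so peak memory depends on the segment size, not the full graph size.
            with torch.no_grad():
                seg_embeddings.append(backbone(seg))

    # Combine segment embeddings into a single embedding for the whole graph.
    graph_embedding = torch.stack(seg_embeddings).mean(dim=0)
    loss = loss_fn(head(graph_embedding), label)

    optimizer.zero_grad()
    loss.backward()  # gradients flow only through the sampled segments
    optimizer.step()
    return loss.detach()
```

In this sketch the aggregation is a simple mean; any permutation-invariant combination of the segment embeddings would play the same role.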
This allows us to train the model on large graphs without running into an out-of-memory (OOM) issue, even with limited computational resources.\n\nTo accelerate the training process further, we introduce a historical embedding table to efficiently produce embeddings for graph segments that do not require gradients, as historical embeddings eliminate additional computation on such segments. However, the historical embeddings induce staleness issues during training, so we design two techniques to mitigate such issues in practice. First, we characterize the input distribution mismatch between the training and test stages of the prediction head, and propose to finetune only the prediction head at the end of training to close the input distribution gap. Second, we identify bias in the loss function due to stale historical embeddings, and introduce _Stale Embedding Dropout_ to drop some stale embeddings during training to reduce this bias. Our final proposed method, called GST+EFD, is both memory-efficient and fast.\n\nWe evaluate our method on the following datasets: MalNet-Tiny, MalNet-Large and TpuGraphs. A typical full graph training pipeline (Full Graph Training) can only train on MalNet-Tiny, and unavoidably reaches OOM on the MalNet-Large and TpuGraphs datasets. On the contrary, we empirically show that the proposed GST framework successfully trains on arbitrarily large graphs using a single NVIDIA-V100 GPU with only 16GB of memory for MalNet-Large and four GPUs for the TpuGraphs dataset, while maintaining comparable performance with Full Graph Training. We finally demonstrate that our complete method GST+EFD slightly outperforms GST by another 1-2% in terms of the final evaluation metric, while simultaneously being 3x faster in terms of training time.\n\nFigure 1: (a) **Full Graph Training**: Classically, models are trained using the entire graph, meaning all nodes and edges of the graph are used to compute gradients. For large graphs, this might be computationally infeasible. (b) **Graph Segment Training**: Our solution is to partition each large graph into smaller segments and select a random subset of segments to update the model; embeddings for the remaining segments are produced without saving their intermediate activations. The embeddings of all segments are combined to generate an embedding for the original large graph, which is then used for prediction. The important benefit is that the GPU memory requirement only depends on the segment size (but not the full graph size).\n\n## 2 Preliminaries\n\n**Notation.** For a function \\(f(\\mathbf{a}):\\mathbb{R}^{d_{0}}\\rightarrow\\mathbb{R}^{d_{1}}\\), we use \\(D^{k}_{\\mathbf{a}}f[\\mathbf{a}_{0}]\\in\\mathbb{R}^{d_{0}\\times d_{1}}\\) to denote the \\(k\\)-th derivative of \\(f\\) with respect to \\(\\mathbf{a}\\), evaluated at \\(\\mathbf{a}_{0}\\). Let \\(f\\circ g\\) denote the composition of function \\(f\\) with \\(g\\). We use \\(\\odot\\) to denote the entry-wise product, and let \\(\\mathbf{a}^{\\odot 2}\\stackrel{{\\Delta}}{{=}}\\mathbf{a}\\odot\\mathbf{a}\\). We use \\(\\bigoplus\\) to represent aggregation. \\(\\bigoplus_{j\\leq J}\\mathbf{a}_{j}\\) means aggregating the set \\(\\{\\mathbf{a}_{j}\\}_{j\\leq J}\\), where \\(j\\) indexes segments, and \\(J\\) is the number of segments in a graph. We usually drop the \\(j\\) subscript from \\(\\bigoplus\\) for brevity. \\(\\mathbf{a}_{i}\\bigoplus\\mathbf{b}_{j}\\) aggregates both sets \\(\\{\\mathbf{a}_{i}\\}\\) and \\(\\{\\mathbf{b}_{j}\\}\\) together.
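As a purely illustrative reading of this notation together with the historical embedding table introduced above, the aggregation over segment embeddings could be realized as follows (the dictionary-backed table, the name `aggregate_with_table`, and the choice of the mean as the aggregator are our own assumptions, not the paper's API):

```python
# Hypothetical sketch: aggregate segment embeddings, reusing historical (stale)
# embeddings for segments that were not sampled in the current step.
import torch

def aggregate_with_table(backbone, table, graph_idx, segments, sampled):
    embeddings = []
    for j, seg in enumerate(segments):
        if j in sampled:
            h = backbone(seg)                    # fresh embedding, keeps activations
            table[(graph_idx, j)] = h.detach()   # refresh the stored entry for (i, j)
        else:
            h = table[(graph_idx, j)]            # stale embedding served from the table
        embeddings.append(h)
    # The aggregation (taken to be the mean here) yields the full-graph embedding.
    return torch.stack(embeddings).mean(dim=0)
```

The stale entries read from the table in this sketch are exactly where the staleness issues mentioned above arise, and where _Stale Embedding Dropout_ would intervene.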
\\(\\bigoplus\\) can be the mean or sum operator when applied to vectors. We define the input graph as \\(\\mathcal{G}=\\{\\mathcal{V},\\mathcal{E}\\}\\), where \\(\\mathcal{V}=\\{v_{1},...,v_{m}\\}\\) is the node set and \\(\\mathcal{E}\\subseteq\\mathcal{V}\\times\\mathcal{V}\\) is the edge set.\n\nLet dataset \\(\\mathcal{D}=\\{(\\mathcal{G}^{(i)},y^{(i)})\\}_{i\\leq n}\\) contain \\(n\\) examples: each label \\(y^{(i)}\\) is associated with \\(\\mathcal{G}^{(i)}\\).\n\n**Graph Neural Network.** We consider a backbone graph neural network \\(F\\) that takes a graph \\(\\mathcal{G}^{(i)}\\) and generates a graph embedding \\(\\mathbf{h}^{(i)}\\in\\mathbb{R}^{d_{h}}\\), followed by a final prediction head \\(F^{\\prime}\\) that takes the graph embedding \\(\\mathbf{h}^{(i)}\\) and outputs final predictions: \\(\\widehat{y}^{(i)}=(F^{\\prime}\\circ F)(\\mathcal{G}^{(i)})\\). We optimize the GNN with the loss function \\(\\mathcal{L}((F^{\\prime}\\circ F)(\\mathcal{G}^{(i)}),y^{(i)})\\).\n\n**Historical Embedding Table.** We define an embedding table \\(\\mathcal{T}:\\mathcal{K}\\rightarrow\\mathbb{R}^{d_{h}}\\), where each key in the key-space \\(\\mathcal{K}\\subset\\mathbb{Z}\\times\\mathbb{Z}\\) is a tuple (graph index \\(i\\leq n\\), segment index \\(j\\leq J\\)). We use \\(\\tilde{\\mathbf{h}}^{(i)}_{j}=\\mathcal{T}(i,j)\\) to denote the stored embedding for graph segment \\(\\mathcal{G}^{(i)}_{j}\\): an embedding that may not be up to date with the current backbone \\(F\\).\n\n## 3 Our Method: GST+EFD\n\n### Graph Segment Training (GST)\n\nGiven a training graph dataset \\(\\mathcal{D}_{\\text{train}}=\\{(\\mathcal{G}^{(i)},y^{(i)})\\}_{i=1}^{n}\\), a common SGD update step requires calculating the gradient:\n\n\\[\\nabla_{\\theta}\\sum_{(\\mathcal{G}^{(i)},y^{(i)})\\in\\mathcal{B}}\\mathcal{L}((F^{\\prime}\\circ F)(\\mathcal{G}^{(i)}),y^{(i)})\\]\n\nwhere \\(\\theta\\) denotes the trainable weights in \\(F^{\\prime}\\circ F\\), and \\(\\mathcal{B}\\) is a sampled minibatch. Graphs can differ in size (the number of nodes \\(|\\mathcal{V}^{(i)}|\\)), with some being too large to fit into the device's memory. This is because the memory required to store all intermediate activations for computing gradients is proportional to the number of nodes and edges in the graph.\n\nTo address the above issue, we propose to partition each original input graph into a collection of graph segments, _i.e._,\n\n\\[\\mathcal{G}^{(i)}\\approx\\bigoplus\\mathcal{G}^{(i)}_{j}\\quad\\text{ for }j\\in\\{1,2,\\ldots,J^{(i)}\\}\\]\n\nAn example of a partition algorithm is METIS [16]. This preprocessing step will result in a training set \\(\\mathcal{D}_{\\text{train}}=\\{(\\bigoplus_{j\\leq J^{(i)}}\\mathcal{G}^{(i)}_{j}, y^{(i)})\\}_{i=1}^{n}\\). The number of partitions \\(J^{(i)}\\) can vary across graphs, but the size of each graph segment can be bounded by a controlled size (\\(|\\mathcal{V}^{(i)}_{j}|