entry_id
stringlengths
33
34
published
stringlengths
14
14
title
stringlengths
6
252
authors
sequencelengths
1
1.7k
primary_category
stringclasses
152 values
categories
sequencelengths
1
8
text
stringlengths
0
52.1M
introduction
stringlengths
0
79.1k
background
stringlengths
0
34.5k
method
stringlengths
0
49.1k
results
stringlengths
0
175k
discussion
stringlengths
0
57.6k
conclusion
stringlengths
5
156k
http://arxiv.org/abs/2409.17876v1
20240926142344
Why Companies "Democratise" Artificial Intelligence: The Case of Open Source Software Donations
[ "Cailean Osborne" ]
cs.CY
[ "cs.CY", "cs.AI", "cs.SE" ]
Why Companies “Democratise” Artificial Intelligence: The Case of Open Source Software Donations Cailean Osborne Oxford Internet Institute University of Oxford Oxford, UK September 28, 2024 ================================================================================================= § ABSTRACT Companies claim to “democratise’’ artificial intelligence (AI) when they donate AI open source software (OSS) to non-profit foundations or release AI models, among others, but what does this term mean and why do they do it? As the impact of AI on society and the economy grows, understanding the commercial incentives behind AI democratisation efforts is crucial for ensuring these efforts serve broader interests beyond commercial agendas. Towards this end, this study employs a mixed-methods approach to investigate commercial incentives for 43 AI OSS donations to the Linux Foundation. It makes contributions to both research and practice. It contributes a taxonomy of both individual and organisational social, economic, and technological incentives for AI democratisation. In particular, it highlights the role of democratising the governance and control rights of an OSS project (i.e., from one company to open governance) as a structural enabler for downstream goals, such as attracting external contributors, reducing development costs, and influencing industry standards, among others. Furthermore, OSS donations are often championed by individual developers within companies, highlighting the importance of the bottom-up incentives for AI democratisation. The taxonomy provides a framework and toolkit for discerning incentives for other AI democratisation efforts, such as the release of AI models. The paper concludes with a discussion of future research directions. § INTRODUCTION Companies are increasingly “democratising” artificial intelligence (AI). However, “AI democratisation” remains an ambiguous term, encompassing a variety of goals and methods <cit.>, including the release of AI open source software (OSS) <cit.>, its donation to non-profit foundations <cit.>, and the release of AI models (henceforth: open models) <cit.>, among others. While press releases celebrate a myriad of benefits that AI democratisation promises for research and innovation, the commercial incentives driving such efforts are often obscured from public view. Given the ever-increasing impact of AI on society and the economy, understanding the commercial incentives for AI democratisation efforts is crucial so that we can ensure these efforts serve broader societal interests beyond commercial agendas. Towards this end, this study presents an exploratory investigation of why companies democratise AI with a focus on OSS donations as one method, among others, of AI democratisation. Through a mixed-methods approach, combining the analysis of pre-donation technical pitches, post-donation blog posts, a questionnaire, and semi-structured interviews, it investigates commercial incentives for 43 AI OSS donations to the Linux Foundation (LF), making contributions to both research and practice. It makes contributions to both research and practice. It contributes a taxonomy of both individual and organisational social, economic, and technological incentives for AI democratisation. 
In particular, it highlights the role of democratising the governance and control rights of an OSS project (i.e., from one company to vendor-neutral, open governance) as a structural enabler for downstream goals, such as attracting external contributors, reducing development costs, and influencing industry standards, among others. Furthermore, it sheds light on the role of individual developers within companies, who champion and coordinate OSS donations, thus highlighting the relevance of the bottom-up incentives for AI democratisation. The taxonomy provides a framework and toolkit for discerning incentives for other AI democratisation efforts, such as the release of AI models <cit.>. The paper is structured as follows. First, it reviews prior work on the political economy of open source AI and OSS (Section <ref>). Second, it presents the research aims and the study design (Section <ref>). Third, it reports the key findings (Section <ref>). Fourth, it discusses the implications of the findings for research and practice as well as the threats to validity (Section <ref>). Finally, the paper concludes with a summary of the key contributions (Section <ref>). § RELATED WORK §.§ “Democratising AI”: Narratives and Practices In the context of concerns about industry concentration and influence on AI research and development (R&D) <cit.>, it has become en vogue for companies to claim to “democratise AI”—an altruism-laden term that is notoriously ambiguous. Prior work finds that it used as a catch-all term to encompass a variety of goals and practices <cit.>, including the following: * Democratising AI use: Lowering entry barriers for the use of AI technologies, including but not limited to commercial products like OpenAI's ChatGPT or GitHub's Copilot, access to AI models through APIs or publicly available model weights, and the release of AI OSS like PyTorch and TensorFlow. * Democratising AI development: Lowering the entry barriers for the development of AI technologies, including but not limited to the release of AI models and AI OSS. * Democratising AI profits: Redistributing the economic value accrued to companies from their use of AI technologies to the respective users and impacted populations. * Democratising AI governance: Distributing the decision-making power in the development or use of AI technologies to a wider community of stakeholders and impacted populations. In most cases, AI democratisation is used to refer to the lowering of barriers for the use or the development of AI technologies, leading <cit.> to conclude that, “‘AI democratisation’ is a (mostly) unfortunate term”. Open source technologies and collaboration methods have been integral to AI democratisation efforts, offering the means of enabling both access to and participation in the development of AI. Commercial releases of AI OSS <cit.> and open models <cit.> have contributed to the rapid growth of the open source AI ecosystem, which now comprises over 300 AI OSS libraries <cit.>, hundreds of thousands of open models <cit.>, and over a million AI OSS repositories <cit.>. The prevalence of AI democratisation efforts begs the questions of why companies release their AI software and models, and what are the impacts thereof on the norms, practices, and potential trajectories of AI developer communities. Prior work hints at a number of incentives. 
Scholars contend that industry giants promote open source as an alternative to their concentrated power in the AI industry, whilst using it as a means to shape industry standards, benefit from user innovation, and ultimately extend their influence the norms and tools used by researchers and developers around the world <cit.>. Nick Srnicek argues that “the seemingly non-capitalist practice of releasing their AI software for free in fact obscures a significant capitalist battle between the major companies” <cit.>. This was evident in a leaked Google memo, which claimed that “open source solutions will out-compete companies like Google or OpenAI” and for this reason they should “own the ecosystem and let open source work for us” <cit.>. As discussed below, this leaked memo highlights the ethical tensions that emerge from company’s attempts to exploit the collective efforts of OSS developer communities <cit.>. Other companies, such as Meta, have been outspoken about the drivers of their open source AI strategy: by releasing AI software like PyTorch and large language models (LLM) like their LLaMA models, Meta seeks to increase adoption of its technology, improve their performance and safety through distributed feedback and innovation, and ultimately benefit from ecosystem effects. For example, upon releasing LLaMA 2, Nick Clegg, Meta’s President of Global Affairs, explained that “Openness isn’t altruism—Meta believes it’s in its interest. It leads to better products, faster innovation, and a flourishing market, which benefits us as it does many others” <cit.>. He argued that releasing LLaMA 2 would make it “safer and better” because it will benefit from the “wisdom of the crowds.” Clegg added that, “A mistaken assumption is that releasing source code or model weights makes systems more vulnerable. On the contrary, external developers and researchers can identify problems that would take teams holed up inside company silos much longer.’’ Meanwhile Mark Zuckerberg, Meta's CEO, has explained publicly that Meta seeks to build an ecosystem around their AI software and models as a source of strategic advantage. For example, Zuckerberg explained to shareholders that the widespread use of PyTorch has “been very valuable for us” because it has facilitated their use of external AI research and innovations that use PyTorch <cit.>. Furthermore, upon the release of LLaMA 3, he explained that they are not doing open source “because we are, like, altruistic… I just want everyone to be using it because the more people who are using it, the more the flywheel will spin for making LLaMA better” <cit.>. These statements provide some answers to why technology giants like Meta democratise AI, but it remains to be studied systematically why various kinds of companies, including startups, engage in AI democratisation efforts. There are also concerns about “open-washing” by startups and industry giants alike, who have been promoting open models released under restrictive licenses and with limited transparency as “open source” <cit.>. Liesenfeld and Dingemanse () argue that companies engage in open-washing to reap the benefits of open source (e.g., reputation rewards and adoption), whilst not actually complying with open source standards or norms (e.g., via restrictive licenses). 
For example, Meta released LLaMA 2 with much fanfare, claiming that the “open source” model would benefit research and innovation, its distribution under a novel license with restrictive commercial terms (i.e., any company with greater than 700 million monthly active users in the preceding month must request a license that Meta may grant in its sole discretion) received backlash from the open source community <cit.>. For example, Stefano Maffulli, the Executive Director of the Open Source Initiative, commented, “Unfortunately, the tech giant has created the misunderstanding that LLaMA 2 is `open source' – it is not. Meta is confusing 'open source' with `resources available to some users under some conditions,' [which are] two very different things” <cit.>. Whether open source AI will deliver its promised equalising effects or lead to further industry concentration remains to be seen. However, it is important to view these open source AI developments in the context of developments and concentrations in the wider AI supply chain or “AI stack”, including the hardware accelerators (i.e., chips), data, talent, and compute infrastructure required to develop and deploy AI systems in practice <cit.>. One may argue that no matter how much AI software or how many AI models companies release publicly, regardless of their license choice, such AI democratisation efforts will do little to fundamentally reconfigure the distribution of power and resources in the wider AI industry. This prior work provides insights into the open source AI strategies of industry giants and concerns related to open-washing. However, we still have significant gaps in our understanding of why and how different types of companies, beyond industry giants, employ open source as a means to “democratise” AI. To address this gap, in the next section, I draw on the extensive literature on the political economy of OSS, which provides a comprehensive theoretical foundation for understanding commercial incentives for AI democratisation efforts that are facilitated by open source technology. §.§ Commercial Incentives for OSS Development Companies have participated in the development of OSS in a myriad of ways since the late 1990s <cit.>, including by deploying developers to contribute to projects as part of their job responsibilities or corporate social responsibility initiatives <cit.>, funding projects <cit.>, or joining project steering committees <cit.>, among others. These are popular strategies through which companies seek to influence projects that develop maintain OSS that they use <cit.>. It is also common for companies to spin-out proprietary software as company-hosted OSS, where the host company controls the intellectual property of the project (e.g., by requiring contributors to sign a contributor license agreement) and employs the maintainers of the project <cit.>. This is a proven commercial strategy to increase adoption of their software, benefit from external contributions, win market share, or reduce a competitor’s market share <cit.>. In some cases, a handful of companies share control of a project; for example, in 2017, Facebook and Microsoft jointly released the Open Neural Network Exchange (ONNX) to enable interoperability between various deep learning frameworks like PyTorch and TensorFlow <cit.>. An extensive literature discusses the diverse incentives for commercial adoption and development of OSS at both the individual level (see Table <ref>) and the organisational level (see Table <ref>). 
Bonaccorsi and Rossi’s () taxonomy of social, economic, and technological incentives at the individual and organisational levels provides an enduring framework for categorising these diverse incentives. In addition, they find that divergent incentives between individuals and organisations. While individuals are mostly driven by social and technological incentives, such as their personal interest <cit.>, values <cit.>, or needs <cit.>, companies are chiefly motivated by economic and technological factors, such as influencing industry standards <cit.>, reducing development costs <cit.>, and recruiting external developers <cit.>. However, the incentives of individuals vary based on factors such as whether they are volunteers or paid <cit.>, the governance structure of the OSS project <cit.>, and their geography and cultural norms <cit.>, among others. While individual developers, including volunteers, do not share the primary incentives of for-profit companies, they tend to accept commercial participation on condition that they comply with community norms <cit.>. Furthermore, commercial participation in OSS projects can also attract volunteers, who see their presence as a signal of the complexity of the project <cit.>. For companies, economic and technological incentives are the most salient. In particular, the distributed production model of OSS development, involving many more skilled developers beyond those within the organisational boundaries of any single company, is viewed as means to decrease in-house R&D costs <cit.>. Benjamin Birkinbine argues that the greatest value of OSS for companies stems from the peer production model that expands the labour force that can test and develop the software. Specifically, he contends that the value for companies stems from the processes, not the products, of OSS development <cit.>. These incentives were underscored in the aforementioned Google memo, which urged the company to “own the ecosystem and let open source work for us” <cit.>. The extent of volunteer activity in OSS development, from bug-spotting to code contributions <cit.>, raises ethical questions about the exploitation of volunteer work <cit.> and the failure of most companies to adequately reciprocate to support the sustainability of OSS projects <cit.>. For example, while Linus’ Law—i.e., that “Given enough eyeballs, all bugs are shallow” <cit.>—is typically quoted to argue that the OSS development model offers security advantages over proprietary software development, one can extend it to convey the value of distributed bug-spotting and improvements that no single company must pay for by themselves. While volunteers make useful contributions to OSS projects <cit.>, it is often the case that companies specifically seek to collaborate with other companies, including market rivals, as a means to jointly share R&D costs <cit.> and shape industry standards <cit.>. While inter-company collaborations do not necessarily exclude volunteers, it is not uncommon for companies to engage in strategic or contractual collaborations, involving private collaboration methods, which volunteers cannot participate in (Osborne et al, forthcoming). The prevalence of inter-company collaborations has turned many OSS communities “from networks of individuals into networks of companies” <cit.>, resulting in a tangle of cooperation and competition between companies that has become known as “open source co-opetition’’ <cit.>. 
Moreover, commercial participation in OSS development can help companies to improve their market position by undercutting the product of a market rival <cit.> as well as to enhance their reputation as an OSS patron among developers <cit.>, which in turn can help to recruit software developers <cit.>. The diversity of incentives of various stakeholders underlines the critical role of governance in OSS projects <cit.>. Non-profit foundations have emerged as key mediators—or “boundary organisations”—whose vendor-neutrality and open community governance have proven to be effective structural enablers of collaboration between “unexpected allies” <cit.>. For instance, the LF is reputed to facilitate “communities of competitors,” where “market rivals...intentionally coordinate activities for mutual benefit in precise, market-focused, non-differentiating engagements” <cit.>. Foundations limit the dominance of any single company in OSS projects, which attracts new contributors to projects <cit.>, especially volunteers who are hesitant about performing free work for a company <cit.>, and increases their adoption <cit.>. However, foundations do not always prevent commercial dominance in OSS projects <cit.>. For instance, around 10% of companies account for 80% of commits to projects in the OpenStack ecosystem <cit.>. Moreover, governance changes resulting from donations do not guarantee activity increases. For example, a study of PyTorch's governance transition from Meta to the LF revealed no net increase in project activity and specifically that contributions from Meta decreased significantly, those from users (e.g., app developers and cloud providers) remained unchanged, but those from complementors (e.g., chip manufacturers) increased <cit.>. While governance changes may address “hold-up” problems for certain companies, particularly for complementors whose value capture proposition depends on interoperability, they do not guarantee net increases in external contributions <cit.>. This review has provided a theoretical foundation for examining commercial incentives in AI democratisation efforts. The taxonomy of social, economic, and technological incentives at individual and organisational levels <cit.> offers a valuable framework for this study's exploratory aims. By applying this framework to AI OSS donations, this study aims to identify and categorise the commercial interests driving AI democratisation. The following section outlines the research aims and methodological approach in more detail. § STUDY DESIGN §.§ Research Aims The objective of this study is to identify and categorise commercial interests for AI democratisation and thus contribute to advancing the nascent research agenda on the political economy of open source AI <cit.>. Specifically, it examines the following research question (RQ): Why do companies democratise AI? Given the various methods of AI democratisation <cit.>, it focuses on AI OSS donations to foundations—that is, the transfer of an OSS project from a company’s ownership to a non-profit foundation <cit.>—which in the AI industry are commonly presented as acts of AI democratisation. While this narrow scope enables an in-depth analysis of one method of AI democratisation, it naturally limits the generalisability of the findings to others, such as the increasingly common releases of open models (see Section <ref>). 
Within this scope, a mixed-methods approach was employed to investigate the incentives for 43 OSS donations between May 2018 and October 2022 by a range of companies, from startups to multinational corporations, to the LF AI & Data Foundation and PyTorch Foundation, two foundations under the LF that host OSS for data science and AI. The range of projects and companies form a diverse sample (see Table <ref>), accounting for various project maturity levels, company sizes, sectors, and countries <cit.>. Furthermore, the mixed-methods approach to multiple cases mitigates the uniqueness of single cases <cit.> and data sources or methods <cit.>, thus enhancing the validity of the findings. The study received ethical clearance by the University of Oxford CUREC review board prior to data collection. §.§ Case Presentation §.§.§ LF AI & Data Foundation The LF AI & Data Foundation was founded in March 2018 as the LF Deep Learning Foundation and rebranded as the LF AI Foundation in May 2019, broadening its scope to encompass various AI sub-fields. In October 2020, it merged with the ODPi, an organisation promoting a big data software ecosystem. The foundation subsequently rebranded as the LF AI & Data Foundation, acknowledging the vital role of data in AI R&D. At the point of data collection (October 2022), the foundation had 51 member companies from North America, Europe, and East Asia. It hosted 42 OSS projects that had been donated by diverse organisations, including startups (e.g., AI Squared), research institutes (e.g., the Beijing Academy of AI), consortia (e.g. ONNX), management consultancies (e.g., McKinsey & Co.), and technology giants (e.g., IBM, Samsung, and Tencent). When a company seeks to donate an OSS project to the LF AI & Data Foundation, they must be a member organisation of the LF or be endorsed by a member and submit their proposal for review by the technical advisory council (TAC). The TAC comprises representatives from the various projects at the foundation and premier member companies, who vote on the approval of donation proposals. The LF AI & Data Foundation segregates business and technical governance of hosted OSS projects, ensuring that developers retain technical control in their projects whilst the foundation assumes responsibility for funding, marketing, and license compliance, among others, and enforces open community governance <cit.>. §.§.§ PyTorch Foundation The PyTorch Foundation was established in September 2022 to host the popular PyTorch deep learning framework that had been donated by Meta <cit.>. Its mission is to “driv[e] the adoption of AI tooling by fostering and sustaining an ecosystem of open source, vendor-neutral projects integrated with PyTorch” and “to democratise state-of-the-art tools, libraries, and other components to make these innovations accessible to everyone” <cit.>. The PyTorch Foundation similarly maintains a separation between business and technical governance for the PyTorch project and wider ecosystem, with the PyTorch project retaining its technical governance structure while the foundation is responsible for funding, hosting expenses, and events, among others. The PyTorch Foundation manages the project’s assets, including its website, GitHub repository, and social media accounts, and enforces open community governance. Upon its launch, it formed a governing board comprising representatives from its initial members: AMD, Amazon Web Services, Google Cloud, Hugging Face, IBM, Intel, Meta, Microsoft, and Nvidia <cit.>. 
The governing board members shape PyTorch's strategic direction through voting rights, contribute to the project's technical development and roadmap, and gain benefits such as early feature access and increased visibility in the PyTorch ecosystem, while being expected to actively support and promote PyTorch's growth and adoption. §.§ Data & methods §.§.§ Data Collection This study comprises two sources of primary data and two sourcs of secondary data. First, two types of secondary data were collected from the Internet for 43 AI OSS donations to the LF (42 donations to the LF AI & Data Foundation and 1 donation to the PyTorch Foundation). First, pre-donation technical pitches to the TAC were collected from the LF AI & Data Foundation wiki page <cit.>. Second, post-donation blog posts by the LF and respective companies were collected from the LF AI & Data Foundation website and through Google search queries in the format of “[company name] + [project name] + [LF AI & Data Foundation]. This yielded 40 presentations (95%) and 37 (88%) blog posts for the 42 projects donated to the LF AI & Data Foundation. It was not possible to collect a pre-donation technical pitch for the PyTorch donation because, as an inaugural project of its namesake foundation, it did not follow this process. Blog posts were accessed via the PyTorch Foundation website and Google search queries using the aforementioned format. A full list of the 43 OSS projects, donors, and respective document links is provided in Table <ref>. Subsequently, primary data was collected through questionnaires and 12 semi-structured interviews with ten project maintainers who had donated the project and two foundation employees. First, a brief questionnaire was distributed to the maintainers to gather information on the donation process, incentives, and outcomes, as well as to recruit interviewees. It was distributed via the LF AI & Data Foundation's mailing list and to the PyTorch maintainers via the Executive Director of the PyTorch Foundation, resulting in 16 responses from the maintainers of 16 projects at the LF AI & Data Foundation and 0 responses from the maintainers of PyTorch (in total, 37% of the 43 projects). Ten maintainers were recruited for interviews through the questionnaire, who worked for 9 companies, diverse by geography, size, and sector (see Table <ref>). In addition, the Executive Director of the foundations (N.B. same person) and a LF AI & Data Foundation project coordinator were interviewed. The 12 interviews lasted between 30 and 60 minutes and were semi-structured, combining standardised questions about the donation process with tailored questions based on their questionnaire responses, adding depth to the quantitative findings in Figure <ref>. The interviews were conducted digitally, and were recorded to aid analysis and to enhance the validity of the research findings <cit.>. §.§.§ Data Analysis Thematic analysis was applied to systematically identify commonalities, patterns, and relationships in the qualitative data <cit.>. A systematic six-step procedure was followed to enhance the reliability of this analysis <cit.>. The initial coding procedure involved an integrated approach, combining the inductive coding <cit.> and a deductive coding informed by prior work on incentives that exist at the levels of individual developers (see Table <ref>) and organisations (see Table <ref>). 
This approach allowed for the identification of both commonalities with prior work and novel findings concerning open source AI democratisation efforts. The coding was conducted by the author until reaching saturation <cit.>, then merged codes into 25 distinct themes (i.e., social, economic, and technological incentives at the level of individual developers and companies shown in Table <ref>). To address the limitation of single-author analysis, each step was rigorously documented, and the results were member-checked with the interviewees and discussed with two academic advisers <cit.>. Furthermore, the interviewees were invited to review the quotes attributed to their anonymised IDs and to state their attribution preference, ensuring consent for the inclusion of statements. Only three interviewees proposed revisions (e.g., to enhance specificity), indicating the resonance of the findings with the practitioners. §.§.§ Reflexivity In social science research, it is critical that one as a researcher engages in critical self-evaluation of the one’s positionality and disiplinary conventions, and how they may influence one’s research, from its initial design through to the reporting of and presentation of research findings <cit.>. The exercise of reflexivity was particularly important for the credibility of this study, given the author’s affiliation with the LF as a research contractor. A social identity map was employed as a tool to encourage reflection on positionality and to address potential biases in three areas <cit.>. First, at the outset, it was used to consider the effects of the LF affiliation on data access and potential threats to reproducibility, recognising that the LF affiliation likely increased the willingness of foundation staff and project maintainers to participate in the study. To enhance reproducibility, the data collection strategy relied primarily on publicly available information (e.g., mailing lists) and information sheets sent with invitations explicitly stated the independent purpose and funding for the research. Second, to minimise social desirability bias in interview responses, the research's independent purpose and funding mentioned in the information sheet were explained to interviewees at the beginning of each interview. Third, a journal was maintained during the thematic analysis to document coding choices, and the findings were shared with two academic advisers to review the interpretations. While these actions contribute to enhancing the credibility and reliability of the research process, it must be acknowledged that it is difficult, if not impossible, to perform bias-free research and, therefore, it should be understood as an imperfect, yet best-effort attempt by the author to control for and mitigate potential biases in the research process <cit.>. § RESULTS The findings reveal an interplay of social, economic, and technological incentives at both individual and organisational levels for OSS donations to foundations (see Table <ref>). This section presents these findings, contributing a more nuanced understanding of the bottom-up and top-down incentives behind AI democratisation efforts by companies. §.§ Individual-level Incentives Individual employees often play a crucial role in championing and coordinating the donation process within their companies. 
38% of questionnaire respondents stated that the decision to donate was driven from the bottom up by developers, while 13% of respondents stated the decision was made by individuals who held both developer and managerial roles (e.g., startup founders). These findings underscore the importance of understanding individual incentives in shaping commercial decisions to donate OSS projects to foundations. §.§.§ Social Incentives At the individual level, social incentives are as a significant driver for OSS donations. Two key themes stood out: reciprocity and personal reputation. Many developers expressed a deep-seated ethos of `giving back” to the community that supports their work. For instance, Respondent A (BeyondML) stated: The vast majority of proprietary models and software in data science and machine learning are built on open source, so being part of that and contributing to that is really important to me personally and to our company. This sentiment was echoed by Respondent H (NNStreamer), who noted that their donation was “just for our own satisfaction” as OSS users and developers. These responses highlight the personal investment many developers have in the OSS ecosystem and their desire to contribute to its growth and sustainability. Personal reputation is also a significant social incentive, with successful donations to major foundations representing an opportunity for individual developers to enhance their standing in the OSS community. For example, Respondent J (ONNX-MLIR) explained that having a project accepted by a major foundation provides credibility among peers and brings developers like himself closer to their “dream of having your big open source project with 1000s of stars.” Respondent I (ONNX-MLIR) added that improving one's reputation also leads to career benefits, as one can be hired based on one's reputation, and that individuals' aspirations align with company's goals to improve their corporate reputation as an OSS-friendly workplace. §.§.§ Economic Incentives Two primary economic incentives at the individual level are career benefits and access to foundation support services.The reputational gains from OSS contributions often translate into tangible career benefits. For example, respondent I (ONNX-MLIR) noted that achievements in OSS projects make developers more competitive in the AI job market, as their expertise at the intersection of software engineering and AI becomes both known and knowable in the wider open source AI ecosystem. This enhanced visibility and credibility in the job market creates a powerful incentive for individuals to support OSS donations. As mentioned above, Respondent I (ONNX-MLIR) pointed out that these incentives of individual developers also align with organisational goals of fostering a skilled workforce, attracting and recruiting competitive AI talent, and improving their corporate reputation among OSS developers. The desire to harness foundation support services is another important economic incentive. Many developers viewed foundation support as a means to address challenges faced by maintainers in managing projects alongside their full-time employment. 87% of respondents claimed this support was important for them (see Figure <ref>). Respondent F (KServe) elaborated on this point: We, as developers, don't have a lot of time for [outreach and marketing]. We sought to benefit from support services, such as outreach to new contributors and marketing. 
Similarly, Respondent H (NNStreamer) sought marketing support to increase project visibility and to attract external developers. These examples illustrate how foundations are perceived to provide resources that individual maintainers often lack or cannot secure within their own companies, which in turn helps to enhance the growth of their OSS projects. §.§.§ Technological Incentives Technological incentives at the individual level include ensuring project sustainability and enabling the use of collaboration tools. Project sustainability is a significant concern for maintainers. Respondent G (Ludwig) provided a compelling example, describing how he donated his project to ensure its survival following organisational restructuring and his personal departure from the company. He viewed the transition to a foundation as an effective strategy to ensure the project's continuation. This case demonstrates how personal attachment to projects, which he described as his “baby”, can drive individuals to seek sustainable governance models for their OSS projects, especially when their affiliation with the company comes to an end. It also highlights the use of OSS donations as a mechanism to preserve source code that might otherwise be lost due to corporate changes or neglect. The ability to use preferred collaboration tools is another technological incentive. Respondents D and E (Kedro) explained that transitioning the project to the LF AI & Data Foundation made it easier to use tools like Slack and Discord, which were either forbidden or difficult to get approval for at their company. As Respondent E (Kedro) explained, “It untied our hands from our own bureaucracy.” They elaborated that this freedom from corporate constraints concerning collaboration tools not only enhances developer productivity and satisfaction but also aligns with broader organisational goals of fostering innovation and efficiency, thus representing a win-win scenario for them. §.§ Organisational-level Incentives While individual employees play a crucial role in championing OSS donations within their companies, organisational incentives (i.e., corporate strategies) ultimately matter above all in the decision-making process, with 44% of respondents stating that the donation was a top-down decision by managers and 13% respondents stating that the decision was made by individuals who held both developer and managerial roles within their company. §.§.§ Social Incentives At the organisational level, three primary social incentives emerged: adopting open governance, reciprocating to the OSS ecosystem, and building the company's reputation. The adoption of open governance upon donating a project to a foundation is a salient incentive, with 81% of respondents reporting it as important. This change in governance model is viewed as a structural enabler for downstream goals. For example, Detakin's press release highlighted this incentive: The LF AI & Data provides a vendor-neutral governance structure that can help the project grow broad industry collaboration. Even more importantly, becoming a LF AI & Data project ensures that OpenLineage can never belong to a company. Similarly, Lyft emphasised the importance of a “neutral holding ground” when donating both Amundsen and Flyte. These statements underscore the perceived value of neutral governance in fostering collaboration and ensuring the independence of projects. Similar to individuals, reciprocity to the OSS ecosystem is a key social incentive at the organisational level. 
Several respondents cited the critical importance of OSS dependencies in their companies' proprietary products and services, and perceived the donation of their OSS project as one way to “give back”, as explained by Respondent A (BeyondML). In a similar vein, Respondent C (Elyra) underscored the impact of OSS for advances in AI as a reason for why he champions giving back to the ecosystem: The democratisation of AI software is really what is helping industry advance. If you look back like 10-20 years ago, it was very hard and you needed to have a specific set of skills to be able to even build a very simple model. Today with all the tools and stuff, it's much easier. Respondent C (Elyra) explained that their team's decision to donate Elyra stemmed in part from their desire to play their part in advancing industry. This sentiment was echoed in several companies' post-donation press releases, framing their donations as acts of AI democratisation. For instance, Uber stated that by donating Pyro, it hoped “to facilitate greater opportunities for researchers worldwide and [to] make deep learning and Bayesian modelling more accessible.” Building corporate reputation emerged as the third significant social incentive at the organisational level. Many respondents explained they hoped that the reputation of the LF AI & Data would enhance their company's credibility in the open source AI ecosystem, with 75% of respondents reporting it as important. This incentive was particularly strong for startups and companies without an established reputation in AI. Respondent A (BeyondML) explained: One of the things that we hoped to get from the LF is its name recognition, obviously just about every developer in the world knows about the LF or knows the term Linux. So having that kind of badge, if you will, immediately gives you a level of credibility with your project. Respondents D and E (Kedro) echoed this sentiment, stating in their pre-donation pitch that they “would like to leverage the initial marketing announcements to build credibility in their technical and product-related capability”. Subsequently, in the post-donation press release, their company stated: It's a substantial step forward for our organisation on our open source journey. The consultancy sector has traditionally been highly protective of intellectual property, but it's clear that open, collaborative innovation will help power the next phase for analytics technology. These findings highlight three key social incentives driving organisational decisions to donate AI OSS projects. The adoption of open governance emerges as a key structural enabler for fostering collaboration and ensuring project independence. Reciprocity to the OSS ecosystem reflects companies' recognition of their dependence on open source technologies and their desire to contribute to industry advancement. Additionally, the opportunity to enhance corporate reputation, particularly for less established companies, serves as a salient motivator. These social incentives are strategically important for companies seeking to position themselves favourably within the open source AI ecosystem. §.§.§ Economic Incentives Several economic incentives at the organisational level inform the decision to donate OSS projects, including attracting external contributors, reducing development costs, diversifying project funding, and harnessing foundation support services. 
Indeed, being able to attract new contributors to their project is a critical incentive for OSS donations, with 100% respondents reporting it as important. Respondent J (ONNX-MLIR) described OSS donations as a strategic trade-off, where companies exchange full control of their OSS project for the aspired for benefits of distributed development involving a community of contributors. He noted the self-interested logic: We continue doing the same work as we would if it wasn't an open source project, but there's this expectation that we're going to benefit from a community helping us achieve our own goals. Respondent G (Ludwig) explained that above all it is beneficial for attracting contributors from other companies, who “prefer not to contribute to projects that are started by companies that could be competitors. They don't trust it as fully open source if it was started by Uber, Google, Facebook, or whatever company.” Meanwhile, Respondent B (Elyra) provided a more nuanced view: In reality, in all the projects that I've seen, they are still driven by the main inventors. Open governance just means that the feedback comes from additional contributors ... It's more like a community project. However, respondents cautioned that changing the governance model does not guarantee more or useful external contributions. Respondent D (Kedro) explained she had rejected pull requests “because they were not up to scratch”, while Respondent G (Ludwig) discussed the role of mentorship in training external developers to become effective contributors, which he highlighted requires the company to invest time and money. Diversification of funding is another significant economic incentive. For example, respondent E (Kedro) discussed the relevance of client concerns about the financial dependence on the host company: Clients would ask, 'What will happen if your team does not exist tomorrow?' They were afraid that if we left the code, they wouldn't be able to get new versions and then it would become unmaintained. Respondent D (Kedro) speculated that the only reason they were able to convince their senior management to approve the open-sourcing of Kedro was due to this client pressure. In addition, some companies seek to attract or increase external investment in their project. Several respondents saw this as a primary reason for Meta's donation of PyTorch, speculating that members of the newly founded governing board of the PyTorch Foundation would likely invest more resources in PyTorch than they did before. Harnessing foundation support services also emerged as a significant economic incentive at the organisational level, with 87% of respondents reporting it as important. For example, Respondent A (BeyondML) described the resources and infrastructure provided by foundations as “stability offerings” that could help scale and sustain projects beyond the means of their startup, thus making OSS donations an attractive option for sustainable project growth. §.§.§ Technological Incentives At the organisational level, several technological incentives were identified, including ecosystem integration and adoption, software improvements, faster innovation, and influencing industry standards. Respondents highlighted that joining a foundation offered ecosystem benefits. For example, respondent C (Elyra) noted: Together with [open governance], we thought being in an ecosystem of other machine learning and AI projects would foster collaboration and integration, exposing Elyra to more use cases. 
This perspective illustrates how companies view foundation ecosystems as platforms for enhanced collaboration and project visibility, potentially leading to more adoption and more diverse applications of their software. This was confirmed by 88% of respondents who reported increasing adoption as an important incentive for their company. Software improvements and faster innovation were also identified as significant technological incentives. Respondent B (Elyra) observed that “That's something which is one strength of open source; it's much more properly tested than [proprietary] software.” An additional incentive is the aim to accelerate development, implicitly thanks to new contributors who would join the project, with 69% of respondents reporting it as important. This underlines the aforementioned perspective of Respondent J (ONNX-MLIR) that AI democratisation is in large part self-interested, using open source as a means to reduce internal costs, speed up innovation, and enhance software quality. Influencing industry standards emerged as another key technological incentive, with 63% of respondents reporting it as important for their company. A number of respondents speculated that the ambition behind Meta's donation of PyTorch was to make it the open standard for deep learning, bringing an end to the age-old rivalry between PyTorch and Google's TensorFlow. Respondent E (Kedro) commented: I think that move was actually made to destroy TensorFlow because TensorFlow [does not have] open governance. Respondent D (Kedro) suggested that Meta was seeking to beat TensorFlow out of the market since Google would struggle to compete with the strategic alliance of industry giants that had formed under the PyTorch Foundation's governing board (which even includes Google). Respondent B (Elyra) was more direct, describing it as “a death knell to TensorFlow.” Reflecting on their experience of overseeing donations to the foundation, Respondent L (LF) explained: Every company may have a different set of incentives but what's common across all of them is the desire to make sure that the project becomes successful in the long term and becomes the de facto project for its given functionality. These findings reveal a complex interplay of social, economic, and technological incentives driving companies to donate AI OSS to foundations. The incentives span both individual and organisational levels, highlighting the multifaceted nature of AI democratisation efforts. These insights provide a nuanced understanding of the strategies and considerations that shape commercial decisions, practices, and strategies in the open source AI ecosystem. § DISCUSSION §.§ Implications for Research §.§.§ Disambiguating “AI Democratisation” Narratives The findings shed light on the inconsistent use of “AI democratisation” in commercial narratives surrounding OSS donations, extending the work of <cit.>. They show that in the context of OSS donations, the most relevant interpretation is the democratisation of governance, as companies transfer already open-sourced software from their single-vendor governance to open governance provided by vendor-neutral foundations. This transition of governance and control rights, however, is not an end in itself but a strategy that aims to realise goals, such as increasing adoption and recruiting new contributors to the project. 
It is often the companies seek, in particular, participation by other companies, who previously would hesitate to contribute to a company's OSS project, confirming prior findings on “hold-up” problems <cit.>. These findings challenge the notion that AI democratisation efforts are primarily altruistic. OSS donations are a strategic calculation, where companies relinquish control of their OSS project in exchange for assumed benefits. This perspective aligns with and extends the work of Widder et al. <cit.> and Srnicek <cit.>, who argue that open source practices in AI serve commercial interest. This should not be surprising, as for-profit companies are not charities, but it provides an empirical evidence base to illustrate that “there is no such thing as a free lunch” and to understand the possible objectives behind such AI democratisation efforts. Moreover, the analysis reveals the role of individual developers who advocate for OSS donations within their companies, demonstrating that donations result from an interplay between both individual and organisational incentives. §.§.§ A Taxonomy for Understanding AI Democratisation Incentives The findings can be summarised in a taxonomy of commercial incentives for AI OSS donations (see Table <ref>). Drawing on the taxonomy by Bonaccorsi and Rossi (, see Tables <ref>-<ref>), this taxonomy categorises incentives at both individual and organisational levels, grouped into social, economic, and technological categories. It provides a framework for understanding the intersecting bottom-up and top-down incentives behind OSS donations, demonstrating that they are driven by a combination of individual and organisational factors. The value of this taxonomy lies in its ability to provide a structured approach to analysing the incentives for AI democratisation efforts, offering researchers and practitioners a tool to systematically examine the incentives at play in different scenarios. The findings corroborate prior work on the salience of economic and technological incentives for companies <cit.>, with the democratisation of governance in large part used as a means to various economic and technological ends, such as recruiting external developers <cit.>, reducing development costs <cit.>, and influencing industry standards <cit.>. According to the questionnaire, the most important incentive for companies is to recruit new contributors to the project, who can help improve the quality and competitiveness of the software, ultimately serving the interests of the donor company. As Respondent J (ONNX-MLIR) noted, “there's an expectation that we're going to benefit from a community helping us achieve our own goals.” These findings corroborate statements by Big Tech companies about their open source AI strategies, who want to “own the ecosystem and let open source work for us” <cit.> and for “everyone to be using [LLaMA] because the more people who are using it, the more the flywheel will spin” <cit.>. This aligns with ethical concerns about commercial exploitation of volunteer work in OSS without adequately reciprocating to the OSS ecosystem <cit.>. However, the findings also add nuance to this perspective. While the promise of being able to attract new contributors to their project is indeed important for many companies, it is not universally paramount. For some companies, particularly those with substantial resources or an existing community, other incentives such as standard-setting take precedence <cit.>. 
The case of PyTorch, which was donated by Meta and already had a large contributor community, exemplifies this scenario. Even for companies primarily seeking to attract new contributors, the findings show that these benefits are far from guaranteed. The interviews identified two key challenges. First, the mere act of donation does not ensure that a project will attract new contributors. Second, even if a project can recruit new contributors, the quality of their contributions may not meet the project's standards. Companies reject external contributions that “are not up to scratch” (Respondent D, Kedro) and need to invest in mentoring and training resources for external contributors (Respondent G, Ludwig). Companies may need to invest considerable time and resources into community building and contributor development, challenging the notion of open source as a straightforward cost-saving measure. This finding provides important nuance to the “free labour” thesis, and provides avenues for future research into the real costs and benefits of spinning-out or donating OSS projects. It is also worth noting that the relative importance of these incentives vary depending on the context, company size, and type of the AI technology being “democratised” (e.g., OSS, AI model, or other). While this taxonomy provides a starting point for understanding commercial incentives for AI democratisation, it should be used as a flexible framework rather than a definitive list. Future research should build on it by testing its applicability to different approaches to AI democratisation, such as AI model and dataset releases <cit.>. On the one hand, incentives such as building a company's reputation, increasing adoption, recruiting new contributors, reducing development costs, and shaping industry standards likely apply to AI model releases. On the other hand, the adoption of open governance is, at least to this day, specific to the context of OSS donations to foundations. To date, while companies have released models, they have not transferred them to a vendor-neutral foundation for open governance. This may in part be due to the lack of established practices for openly governing AI models given their relative complexity compared to OSS in terms of the resources required to develop and maintain AI models and their constituent components <cit.>, or in part because the incentives may be more similar to those for spinning-out software, such as increasing adoption <cit.>. At this stage, we can only speculate and, therefore, the commercial incentives and strategies for AI model releases, including governance choices, as well as their implications for norms and practices in the open source AI ecosystem, warrant further investigation. §.§.§ From Intentions to Outcomes: Evaluating the Long-term Impact of OSS Donations The findings capture the stated incentives of companies at a particular point in time. However, the actual realisation of these incentives and goals may vary and would require longitudinal study to verify. For instance, the study by <cit.> on PyTorch's governance transition provides a compelling case study that both supports and challenges some of the incentives identified in this research. Their finding that the governance change led to increased participation from complementors (e.g., chip manufacturers) aligns with the incentive of recruiting new contributors identified in this study. 
However, the decrease in contributions from Meta following the transition suggests that the realisation of incentives may be more complex than initially anticipated. This complexity is further underscored by the fact that users (e.g., app developers and cloud providers) did not change their rate of participation, indicating that different stakeholders may respond differently to governance changes. These insights call for more longitudinal studies that track the outcomes of OSS donations over time. Such research could help verify whether long-term economic and technological benefits are indeed realised. Furthermore, it could shed light on how the balance of contributions shifts between the donor company and external contributors following a governance change. The PyTorch case also highlights the importance of considering the specific type of external contributors when evaluating the success of OSS donations. The differential response between complementors and users suggests that future research should delve deeper into how different types of stakeholders perceive and respond to changes in project governance. Future research could explore how companies balance the potential benefits of increased external contributions against the possible reduction in their own incentives to invest in the project. While this study provides a comprehensive taxonomy of incentives for OSS donations, the emerging evidence on the impacts of such donations highlights the need for further research. §.§ Implications for Practice The findings have several important implications for practitioners in open source AI development and governance. First, the findings underscore the need for greater transparency in corporate communications about AI democratisation efforts. The inconsistent use of “AI democratisation” underlines that practitioners should be more specific and transparent about their goals, as suggested by <cit.>. Companies could clearly communicate the specific aspects of AI they aim to democratise (e.g., access, development, or governance) and explain how their actions contribute to these goals. Furthermore, greater transparency about the benefits that companies seek to gain from their democratisation efforts could foster more trust in corporate-community relations. This transparency could lead to more realistic expectations from the open source community and potentially more sustainable collaborations. Second, the taxonomy provides a toolkit for researchers and practitioners to understand the various social, economic, and technological incentives that may be driving AI democratisation efforts. It is also useful in shedding light on relevant incentives at both the individual developer level and the organisational level, revealing that both bottom-up and top-down incentives may be at play when companies announce they are democratising AI. Furthermore, practitioners can use this toolkit to examine how well commercial incentives align with the broader interests of the open source AI community, which may help companies and volunteers identify areas of synergy and enable more ethical collaboration. Third, the findings highlight the importance of foundations that facilitate collaboration between diverse stakeholders. While the benefits of foundations for developing and governing OSS are by now well understood, to this day companies are not transferring AI models to foundations for open governance. 
As discussed above, this may in part be due to the lack of established practices for openly governing AI models, given the relative complexity of developing and maintaining AI models and their various constituent components <cit.>. A question for practitioners to explore is whether, and if so how, the foundation-hosting and open governance model could be adopted for the hosting and governance of AI models. To answer this question, AI companies should engage foundations, not just as potential hosts of models, but as active partners in shaping the future of open source AI development. By employing the toolkit and implementing these practices, practitioners can work towards a more transparent and sustainable approach to AI democratisation that acknowledges commercial interests while fostering the broader goals of open and collaborative AI development. This approach could lead to more ethical and effective AI technologies that better serve societal needs while still allowing for innovation and commercial success.

§.§ Threats to Validity

The validity of the findings is evaluated with reference to guidance for case study research and qualitative research methods in software engineering research <cit.>.

§.§.§ Credibility

Credibility refers to the believability of the findings <cit.>. The primary threat stems from the author's affiliation with the LF as a research contractor, which creates potential conflicts of interest and research biases, despite the independent purpose and funding of this research study. As explained in Section <ref>, a social identity map was used to identify and address potential biases in three areas: data access, collection, and analysis <cit.>. As previously stated, while a number of steps were taken to enhance the credibility of the research process, it is acknowledged that some biases may not have been controlled for or mitigated; the study should therefore be understood as an imperfect but best-effort attempt by the author. Another challenge concerned gaining access to data about commercial strategies. The working solution involved drawing on four sources (pre-donation technical pitches, post-donation press releases, questionnaires, and interviews), which offered a triangulated account of incentives <cit.>. While this approach provided four perspectives, it was inevitably constrained by the limited number of company-affiliated developers who were willing, or permitted, to participate in a research interview, and by the fact that some strategic incentives may never be shared publicly. In addition, the willingness of maintainers to complete the questionnaire and/or participate in an interview may have been affected by company policies (e.g., NDAs).

§.§.§ Robustness

Robustness concerns the strength, reliability, and soundness of the study's design, methods, and findings. There was a risk of social desirability bias or response bias in the interviews, owing to the author's affiliation with the LF. Steps were taken to minimise the risk of these biases, such as proactively communicating the independent purpose and funding of this study in the interview invitations and at the beginning of every interview. Another threat stems from the thematic analysis. The author sought to maximise the robustness of this analysis by following best-practice guidelines <cit.> and integrating approaches to coding qualitative data <cit.>. In addition, the structure of the taxonomy poses risks to the validity of the findings.
In some cases, it was difficult to demarcate incentives between the two units of analysis, which may have led to misclassification or oversimplification. That being said, efforts were made to develop the categories by thoroughly reviewing prior work and member-checking findings with interviewees to evaluate their accuracy and resonance with practitioners <cit.>.

§.§.§ Transferability

Transferability concerns the generalisability of qualitative research findings. There are two key threats to generalisability. First, the narrow focus on OSS donations as a method of AI democratisation limits the generalisability of the taxonomy, which both includes aspects specific to OSS donations (e.g., democratising governance or foundation support) and excludes aspects that may be typical of other methods of AI democratisation (e.g., open model releases). Second, this study may be limited by the specific characteristics of the sample of companies, OSS projects, and foundations. The sample was drawn from the LF AI & Data Foundation and the PyTorch Foundation, two foundations under the LF, which may not fully represent other foundations. Future research directions were recommended in Section <ref> to advance our understanding of the commercial incentives for different approaches to AI democratisation.

§.§.§ Dependability

Dependability refers to the consistency of the research process. A comprehensive list of secondary documents collected for each OSS project is provided in Table <ref>. These documents were triangulated with questionnaires and interviews with maintainers and staff from the foundations. With their consent, all interviews were recorded and transcribed to aid the analysis <cit.>. With regard to the data analysis, common guidelines were followed for the systematic analysis of qualitative data <cit.>. Additionally, the accuracy of the analysis was validated through member-checking findings with interviewees, ensuring their resonance with practitioners.

§ CONCLUSION

Companies are “democratising” AI through various approaches and with various goals. While these efforts tend to be celebrated for facilitating science and innovation, the strategic incentives driving these apparently altruistic acts are in large part hidden from public view. However, as the development and application of AI technologies have an ever-increasing impact on society and the economy, it becomes imperative to better understand the incentives and strategies for AI democratisation, as well as whose interests AI democratisation ultimately serves. Towards this end, this study employed a mixed-methods approach to investigate the commercial incentives for 43 AI OSS donations to the LF. The findings highlight an interplay of social, economic, and technological incentives at both individual and organisational levels. In the case of OSS donations, the democratisation of governance is treated as a social means for primarily economic and technological ends, such as attracting external contributors, reducing development costs, and influencing industry standards, among others. Furthermore, the study illustrates the role of individual developers, who champion OSS donations within their companies, thus highlighting the relevance of bottom-up incentives for AI democratisation. While some incentives are unique to OSS donations, the taxonomy serves both as a theoretical foundation and as a practical toolkit for understanding the commercial incentives for related AI democratisation efforts, such as AI model releases.
As the number of open models continues to grow at a rapid pace, it is timely for researchers to turn their attention to the commercial incentives and strategies driving these AI democratisation efforts, as well as the effects thereof on the norms, practices, and potential trajectories of the open source AI ecosystem at large.

§ ACKNOWLEDGEMENTS

The author thanks all research subjects for their participation in this study.

§ DECLARATIONS

§.§ Funding

This study was funded by the UK Economic and Social Research Council (Grant Number: ES/P000649/1).

§.§ Competing interests

The author was affiliated with the LF as a research contractor during the research period. However, the study was independently funded by the UK Economic and Social Research Council (Grant Number: ES/P000649/1).

§.§ Ethics approval

This study obtained ethical clearance from the University of Oxford's Research Ethics Committee prior to data collection.

§.§ Consent

As per the ethics approval requirements at the University of Oxford, all research subjects were asked to provide informed consent prior to data collection and were informed of their right to withdraw their data at any stage. In addition, research subjects confirmed the affiliations and quotes included in this paper prior to its submission for publication.

§.§ Data and/or Code availability

N/A

§.§ Authors' contributions

All contributions were made by the author, and any errors are his own.

§ OSS DONATIONS TO LF AI & DATA FOUNDATION AND PYTORCH FOUNDATION

OSS Donations to the LF AI & Data and PyTorch Foundations

Project Donor Date TAC Proposal Press Release

Acumos AT&T, Tech Mahindra 2018-05 <https://drive.google.com/file/d/1L6fFhZnFqeR3mwy8Ya/KVoRGzCUjdnyrM> <https://www.acumos.org/news/2018/11/14/lf-deep-learning-delivers-first-acumos-ai-release-making-it-easier-to-deploy-and-share-artificial-intelligence-models/> Angel Tencent 2018-08 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/18481216/GMT20191121-140452_LF-AI-Foun_1920x1080.mp4> <https://www.linuxfoundation.org/press/press-release/lf-deep-learning-adds-two-new-framework-projects-to-expand-community-and-ecosystem> Egeria IBM, ING 2018-08 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30408948/September%2024%2C%202020_LF%20AI%20TAC%20Deck%202.pptx> <https://www.linuxfoundation.org/press/press-release/new-ai-data-foundation-combines-industrys-fastest-growing-open-source-developments-in-artificial-intelligence-and-open-data> Elastic Deep Learning Baidu 2018-08 N/A <https://www.linuxfoundation.org/press/press-release/lf-deep-learning-adds-two-new-framework-projects-to-expand-community-and-ecosystem> Horovod Uber 2018-12 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30408797/August%2013%2C%202020_LF%20AI%20TAC%20Deck%20-%20compressed2.pptx> <https://www.uber.com/en-GB/blog/horovod-deep-learning-foundation/> Pyro Uber 2019-01 <https://drive.google.com/file/d/1B0ZkJUKVZoJxsaUkge/02kGKddiZX8DLj/view> <https://www.uber.com/en-GB/blog/pyro-lf-deep-learning-foundation/> Adlik ZTE 2019-09 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=%2F7733341%2F12091694%2FTAC+recording+08152019.mp4> <https://lfaidata.foundation/blog/2019/10/21/lf-ai-welcomes-adlik-as-newest-incubation-project/> ONNX ONNX Community 2019-11
<https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/18481160/TAC%2010-24-2019.mp4> <https://cloudblogs.microsoft.com/opensource/2019/11/14/onnx-joins-linux-foundation/> Marquez WeWork 2019-12 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/18481417/TAC-12192019.mp4> N/A sparklyr RStudio 2019-12 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/18481269/TAC-recording-12052019.mp4> <https://lfaidata.foundation/blog/2020/01/29/sparklyr-joins-lf-ai-as-its-newest-incubation-project-scaling-data-science-and-machine-learning-workflows-using-apache-spark-and-r/> Milvus Zilliz 2020-01 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/22249639/January%2016%2C%202020_LF%20AI%20TAC%20Deck.pdf> <https://www.prnewswire.com/news-releases/milvus-the-ai-search-engine-originally-developed-by-zilliz-joins-lf-ai-as-new-incubation-project-301038716.html> OpenDS4All IBM, ODPi, UPenn 2020-02 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30408948/September%2024%2C%202020_LF%20AI%20TAC%20Deck%202.pptx> <https://community.ibm.com/community/user/ai-datascience/blogs/ana-echeverri1/2020/02/28/opends4all-is-live> NNStreamer Samsung 2020-03 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/24281106/March%2012%2C%202020_LF%20AI%20TAC%20Deck_v2.pdf> <https://research.samsung.com/news/LF-AI-Foundation-Announces-NNStreamer-as-Its-Newest-Incubation-Project> ForestFlow DreamWorks Animation 2020-04 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/24281142/March%2026%2C%202020_LF%20AI%20TAC%20Deck.pdf> <https://research.dreamworks.com/dreamworks-animation-releases-forestflow-machine-learning-model-server-to-the-open-source-community/> Ludwig Uber 2020-05 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/24281544/Ludwig%20LFAI.pdf> <https://www.uber.com/blog/introducing-ludwig/> Adversarial Robustness Toolbox IBM 2020-06 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30409001/October%208%2C%202020_LF%20AI%20TAC%20Deck.pdf> <https://developer.ibm.com/open/centers/codait/trusted-ai/> AI Explainability 360 IBM 2020-06 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30409001/October%208%2C%202020_LF%20AI%20TAC%20Deck.pdf> <https://developer.ibm.com/open/centers/codait/trusted-ai/> AI Fairness 360 IBM 2020-06 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30409001/October%208%2C%202020_LF%20AI%20TAC%20Deck.pdf> <https://developer.ibm.com/open/centers/codait/trusted-ai/> Amundsen Lyft 2020-07 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341> <https://eng.lyft.com/amundsen-1-year-later-7b60bf28602> DELTA DiDi 2020-09 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30408797/August%2013%2C%202020_LF%20AI%20TAC%20Deck%20-%20compressed2.pptx> <https://lfaidata.foundation/blog/2021/06/17/delta-joins-lf-ai-data-as-new-incubation-project/> Feast Gojek 2020-09 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/30408948/September%2024%2C%202020_LF%20AI%20TAC%20Deck%202.pptx> <https://feast.dev/blog/feast-joins-the-linux-foundation-for-ai-data/> SOAJS Herron Tech 2020-09 
<https://wiki.lfaidata.foundation/download/attachments/7733341/September%2010%2C%202020_LF%20AI%20TAC%20Deck%20-%20updated.pdf?version=1&modificationDate=1599741898000&api=v2> N/A DataPractices DataPractices Org 2020-12 <https://wiki.lfaidata.foundation/download/attachments/7733341/October%205%2C%202020_LF%20AI%20TAC%20Deck%282%29.pdf?version=1&modificationDate=1604497702000&api=v2> N/A JanusGraph JanusGraph Community 2021-01 <https://wiki.lfaidata.foundation/download/attachments/7733341/December%203%2C%202020_LF%20AI%20TAC%20Deck.pdf?version=2&modificationDate=1606864208000&api=v2> <https://lfaidata.foundation/blog/2021/01/12/janusgraph-joins-lf-ai-data-as-new-incubation-project/> Flyte Lyft 2021-02 <https://wiki.lfaidata.foundation/download/attachments/7733341/February%2025%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1614021188000&api=v2> <https://eng.lyft.com/flyte-joins-lf-ai-data-48c9b4b60eec> Datashim IBM 2021-03 <https://wiki.lfaidata.foundation/download/attachments/7733341/January%2014%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1610631576000&api=v2> <https://lfaidata.foundation/blog/2021/03/23/datashim-joins-lf-ai-data-as-new-incubation-project/> RosaeNLG BNP Paribas 2021-03 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/39092353/March%2011%2C%202021_LF%20AI%20TAC%20Deck(2).pdf> <https://lfaidata.foundation/blog/2021/04/28/rosaenlg-joins-lf-ai-data-as-new-sandbox-project/> Substra OWKIN 2021-03 <https://wiki.lfaidata.foundation/download/attachments/7733341/March%2025%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1616590543000&api=v2> <https://lfaidata.foundation/blog/2022/11/28/owkin-launches-open-science-push-by-open-sourcing-ai-software-substra-and-releasing-two-open-source-ai-innovations-at-neurips/> Kompute The Institute for Ethical AI 2021-05 <https://wiki.lfaidata.foundation/download/attachments/7733341/May%206%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=2&modificationDate=1620316765000&api=v2> <https://lfaidata.foundation/blog/2021/08/26/kompute-joins-lf-ai-data-as-new-sandbox-project/> OpenLineage Datakin, IBM 2021-07 <https://wiki.lfaidata.foundation/download/attachments/7733341/December%2015%202022_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1673523471000&api=v2> <https://openlineage.io/blog/joining-lfai/> TonY LinkedIn 2018-09 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/43287136/July%2015%2C%202021_LF%20AI%20TAC%20Deck.pdf> <https://engineering.linkedin.com/blog/2018/09/open-sourcing-tony–native-support-of-tensorflow-on-hadoop> Kedro McKinsey QuantumBlack 2021-08 <https://wiki.lfaidata.foundation/download/attachments/7733341/August%2026%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1630089002000&api=v2> <https://medium.com/quantumblack/kedro-joins-the-linux-foundation-to-become-an-open-standard-for-machine-learning-engineering-b0061152ff73> KServe KServe Community 2021-11 <https://wiki.lfaidata.foundation/download/attachments/7733341/October%2018%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1637936838000&api=v2> <https://lfaidata.foundation/blog/2022/02/24/kserve-joins-lf-ai-data-as-new-incubation-project/> OpenBytes Graviti 2021-11 <https://wiki.lfaidata.foundation/download/attachments/7733341/October%2021%2C%202021_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1635325659000&api=v2> 
<https://www.linuxfoundation.org/press/press-release/linux-foundation-and-graviti-announce-project-openbytes-to-make-open-data-more-accessible-to-all> Artigraph Replica 2022-01 <https://wiki.lfaidata.foundation/download/attachments/7733341/January%2027%2C%202022_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1643715756000&api=v2> <https://lfaidata.foundation/uncategorized/2022/04/13/lf-ai-data-announces-artigraph-as-new-sandbox-project/> 1chipML Ericsson 2022-04 <https://wiki.lfaidata.foundation/download/attachments/7733341/April%207%2C%202022_LF%20AI%20TAC%20Deck%20%281%29.pdf?version=1&modificationDate=1650500517000&api=v2> <https://lfaidata.foundation/blog/2022/07/21/lf-ai-data-announces-three-new-sandbox-projects/> BeyondML Squared AI 2022-06 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/61964452/June%2016%2C%202022_LF%20AI%20TAC%20Deck.pdf> <https://lfaidata.foundation/blog/2022/07/21/lf-ai-data-announces-three-new-sandbox-projects/> FlagAI BAAI 2022-06 <https://wiki.lfaidata.foundation/download/attachments/7733341/June%2030%2C%202022_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1657325720000&api=v2> N/A FedLCM VMWare 2022-08 <https://wiki.lfaidata.foundation/download/attachments/7733341/July%2028%2C%202022_LF%20AI%20TAC%20Deck%20%281%29.pdf?version=1&modificationDate=1660155848000&api=v2> <https://blogs.vmware.com/opensource/2022/10/27/open-source-project-fedlcm-to-the-lf-ai-data/> FATE LinkedIn 2022-08 <https://wiki.lfaidata.foundation/download/attachments/7733341/August%2025%2C%202022_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1662390660000&api=v2> <https://cloudblogs.microsoft.com/opensource/2022/09/12/feathr-feature-store-joins-lf-ai-data-foundation/> OpenDataology OpenDataology Community 2022-08 <https://wiki.lfaidata.foundation/download/attachments/7733341/August%2011%2C%202022_LF%20AI%20TAC%20Deck.pdf?version=1&modificationDate=1661373254000&api=v2> N/A PyTorch Meta 2022-09 N/A <https://pytorch.org/blog/PyTorchfoundation/> Elyra IBM 2022-10 <https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=7733341&preview=/7733341/39092353/March%2011%2C%202021_LF%20AI%20TAC%20Deck(2).pdf> <https://developer.ibm.com/blogs/open-source-elyra-ai-toolkit-simplifies-data-model-development/>
§.§ Implications for Research §.§.§ Disambiguating “AI Democratisation” Narratives The findings shed light on the inconsistent use of “AI democratisation” in commercial narratives surrounding OSS donations, extending the work of <cit.>. They show that in the context of OSS donations, the most relevant interpretation is the democratisation of governance, as companies transfer already open-sourced software from their single-vendor governance to open governance provided by vendor-neutral foundations. This transition of governance and control rights, however, is not an end in itself but a strategy that aims to realise goals, such as increasing adoption and recruiting new contributors to the project. It is often the companies seek, in particular, participation by other companies, who previously would hesitate to contribute to a company's OSS project, confirming prior findings on “hold-up” problems <cit.>. These findings challenge the notion that AI democratisation efforts are primarily altruistic. OSS donations are a strategic calculation, where companies relinquish control of their OSS project in exchange for assumed benefits. This perspective aligns with and extends the work of Widder et al. <cit.> and Srnicek <cit.>, who argue that open source practices in AI serve commercial interest. This should not be surprising, as for-profit companies are not charities, but it provides an empirical evidence base to illustrate that “there is no such thing as a free lunch” and to understand the possible objectives behind such AI democratisation efforts. Moreover, the analysis reveals the role of individual developers who advocate for OSS donations within their companies, demonstrating that donations result from an interplay between both individual and organisational incentives. §.§.§ A Taxonomy for Understanding AI Democratisation Incentives The findings can be summarised in a taxonomy of commercial incentives for AI OSS donations (see Table <ref>). Drawing on the taxonomy by Bonaccorsi and Rossi (, see Tables <ref>-<ref>), this taxonomy categorises incentives at both individual and organisational levels, grouped into social, economic, and technological categories. It provides a framework for understanding the intersecting bottom-up and top-down incentives behind OSS donations, demonstrating that they are driven by a combination of individual and organisational factors. The value of this taxonomy lies in its ability to provide a structured approach to analysing the incentives for AI democratisation efforts, offering researchers and practitioners a tool to systematically examine the incentives at play in different scenarios. The findings corroborate prior work on the salience of economic and technological incentives for companies <cit.>, with the democratisation of governance in large part used as a means to various economic and technological ends, such as recruiting external developers <cit.>, reducing development costs <cit.>, and influencing industry standards <cit.>. According to the questionnaire, the most important incentive for companies is to recruit new contributors to the project, who can help improve the quality and competitiveness of the software, ultimately serving the interests of the donor company. 
As Respondent J (ONNX-MLIR) noted, “there's an expectation that we're going to benefit from a community helping us achieve our own goals.” These findings corroborate public statements by Big Tech companies about their open source AI strategies, which express a desire to “own the ecosystem and let open source work for us” <cit.> and for “everyone to be using [LLaMA] because the more people who are using it, the more the flywheel will spin” <cit.>. This aligns with ethical concerns about the commercial exploitation of volunteer work in OSS without adequate reciprocation to the OSS ecosystem <cit.>. However, the findings also add nuance to this perspective. While the promise of being able to attract new contributors to their project is indeed important for many companies, it is not universally paramount. For some companies, particularly those with substantial resources or an existing community, other incentives such as standard-setting take precedence <cit.>. The case of PyTorch, which was donated by Meta and already had a large contributor community, exemplifies this scenario. Even for companies primarily seeking to attract new contributors, the findings show that these benefits are far from guaranteed. The interviews identified two key challenges. First, the mere act of donation does not ensure that a project will attract new contributors. Second, even if a project can recruit new contributors, the quality of their contributions may not meet the project's standards. Companies reject external contributions that “are not up to scratch” (Respondent D, Kedro) and need to invest in mentoring and training resources for external contributors (Respondent G, Ludwig). Companies may need to invest considerable time and resources into community building and contributor development, challenging the notion of open source as a straightforward cost-saving measure. This finding adds important nuance to the “free labour” thesis and opens avenues for future research into the real costs and benefits of spinning out or donating OSS projects. It is also worth noting that the relative importance of these incentives varies depending on the context, company size, and the type of AI technology being “democratised” (e.g., OSS, AI model, or other). While this taxonomy provides a starting point for understanding commercial incentives for AI democratisation, it should be used as a flexible framework rather than a definitive list. Future research should build on it by testing its applicability to different approaches to AI democratisation, such as AI model and dataset releases <cit.>. On the one hand, incentives such as building a company's reputation, increasing adoption, recruiting new contributors, reducing development costs, and shaping industry standards likely apply to AI model releases. On the other hand, the adoption of open governance is, at least to this day, specific to the context of OSS donations to foundations. To date, while companies have released models, they have not transferred them to a vendor-neutral foundation for open governance. This may in part be due to the lack of established practices for openly governing AI models, given their relative complexity compared to OSS in terms of the resources required to develop and maintain AI models and their constituent components <cit.>, or in part because the incentives may be more similar to those for spinning out software, such as increasing adoption <cit.>.
At this stage, we can only speculate and, therefore, the commercial incentives and strategies for AI model releases, including governance choices, as well as their implications for norms and practices in the open source AI ecosystem, warrant further investigation. §.§.§ From Intentions to Outcomes: Evaluating the Long-term Impact of OSS Donations The findings capture the stated incentives of companies at a particular point in time. However, the actual realisation of these incentives and goals may vary and would require longitudinal study to verify. For instance, the study by <cit.> on PyTorch's governance transition provides a compelling case study that both supports and challenges some of the incentives identified in this research. Their finding that the governance change led to increased participation from complementors (e.g., chip manufacturers) aligns with the incentive of recruiting new contributors identified in this study. However, the decrease in contributions from Meta following the transition suggests that the realisation of incentives may be more complex than initially anticipated. This complexity is further underscored by the fact that users (e.g., app developers and cloud providers) did not change their rate of participation, indicating that different stakeholders may respond differently to governance changes. These insights call for more longitudinal studies that track the outcomes of OSS donations over time. Such research could help verify whether long-term economic and technological benefits are indeed realised. Furthermore, it could shed light on how the balance of contributions shifts between the donor company and external contributors following a governance change. The PyTorch case also highlights the importance of considering the specific type of external contributors when evaluating the success of OSS donations. The differential response between complementors and users suggests that future research should delve deeper into how different types of stakeholders perceive and respond to changes in project governance. Future research could explore how companies balance the potential benefits of increased external contributions against the possible reduction in their own incentives to invest in the project. While this study provides a comprehensive taxonomy of incentives for OSS donations, the emerging evidence on the impacts of such donations highlights the need for further research. §.§ Implications for Practice The findings have several important implications for practitioners in open source AI development and governance. First, the findings underscore the need for greater transparency in corporate communications about AI democratisation efforts. The inconsistent use of “AI democratisation” underlines that practitioners should be more specific and transparent about their goals, as suggested by <cit.>. Companies could clearly communicate the specific aspects of AI they aim to democratise (e.g., access, development, or governance) and explain how their actions contribute to these goals. Furthermore, greater transparency about the benefits that companies seek to gain from their democratisation efforts could foster more trust in corporate-community relations. This transparency could lead to more realistic expectations from the open source community and potentially more sustainable collaborations. 
Second, the taxonomy provides a toolkit for researchers and practitioners to understand the various social, economic, and technological incentives that may be driving AI democratisation efforts. It is also useful in shedding light on relevant incentives at both the individual developer level and the organisational level, revealing that both bottom-up and top-down incentives may be at play when companies announce they are democratising AI. Furthermore, practitioners can use this toolkit to examine how well commercial incentives align with the broader interests of the open source AI community, which may help companies and volunteers identify areas of synergy and enable more ethical collaboration. Third, the findings highlight the importance of foundations that facilitate collaboration between diverse stakeholders. While the benefits of foundations for developing and governing OSS are by now well understood, to this day companies are not transferring AI models to foundations for open governance. As discussed above, this may in part be due to the lack of established practices for openly governing AI models, given their relative complexity in terms of the resources required to develop and maintain AI models and their various constituent components <cit.>. A question for practitioners to explore is whether, and if so how, the foundation-hosting and open governance model could be adopted for the hosting and governance of AI models. To answer this question, AI companies should engage foundations, not just as potential hosts of models, but as active partners in shaping the future of open source AI development. By employing the toolkit and implementing these practices, practitioners can work towards a more transparent and sustainable approach to AI democratisation that acknowledges commercial interests while fostering the broader goals of open and collaborative AI development. This approach could lead to more ethical and effective AI technologies that better serve societal needs while still allowing for innovation and commercial success. §.§ Threats to Validity The validity of the findings is evaluated with reference to guidance for case study research and qualitative research methods in software engineering <cit.>. §.§.§ Credibility Credibility refers to the believability of the findings <cit.>. The primary threat stems from the author's affiliation with the LF as a research contractor, creating potential conflicts of interest and research biases, despite the independent purpose and funding of this research study. As explained in Section <ref>, a social identity map was used to identify and address potential biases in three areas: data access, collection, and analysis <cit.>. As previously stated, while a number of steps were taken to enhance the credibility of the research process, it is acknowledged that there is nonetheless a risk of biases that were not controlled for or mitigated; as such, the research process should be understood as an imperfect but best-effort attempt by the author. Another challenge concerned gaining access to data about commercial strategies. The working solution involved drawing on four sources (pre-donation technical pitches, post-donation press releases, questionnaires, and interviews), which offered a triangulated account of incentives <cit.>.
While this approach provided four perspectives, it was inevitably constrained by the limited number of company-affiliated developers who were willing or allowed to participate in a research interview, and by the fact that some strategic incentives may never be shared publicly. In addition, the willingness of maintainers to complete the questionnaire and/or participate in an interview may have been affected by company policies (e.g., NDAs). §.§.§ Robustness Robustness concerns the strength, reliability, and soundness of the study's design, methods, and findings. There was a risk of social desirability bias or response bias in the interviews, owing to the author's affiliation with the LF. Steps were taken to minimise the risk of these biases, such as proactively communicating the independent purpose and funding of this study in the interview invitations and at the beginning of every interview. Another threat stems from the thematic analysis. The author sought to maximise the robustness of this analysis by following best-practice guidelines <cit.> and integrating approaches to coding qualitative data <cit.>. In addition, the structure of the taxonomy poses risks to the validity of the findings. In some cases, it was difficult to demarcate incentives at the two units of analysis, which may have led to misclassification or oversimplification. That being said, efforts were made to develop the categories by thoroughly reviewing prior work and member-checking findings with interviewees to evaluate their accuracy and resonance with practitioners <cit.>. §.§.§ Transferability Transferability concerns the generalisability of qualitative research findings. There are two key threats to generalisability. First, the narrow focus on OSS donations as a method of AI democratisation limits the generalisability of the taxonomy, which both includes aspects specific to OSS donations (e.g., democratising governance or foundation support) and excludes aspects that may be typical of other methods of AI democratisation (e.g., open model releases). Second, this study may be limited by the specific characteristics of the sample of companies, OSS projects, and foundations. The sample was drawn from the LF AI & Data Foundation and the PyTorch Foundation, two foundations under the LF, which may not fully represent other foundations. Future research directions were recommended in Section <ref> to advance our understanding of the commercial incentives for different approaches to AI democratisation. §.§.§ Dependability Dependability refers to the consistency of the research process. A comprehensive list of the secondary documents collected for each OSS project is provided in Table <ref>. These documents were triangulated with questionnaires and interviews with maintainers and staff from the foundations. With their consent, all interviews were recorded and transcribed to aid the analysis <cit.>. With regard to the data analysis, common guidelines were followed for the systematic analysis of qualitative data <cit.>. Additionally, the accuracy of the analysis was validated through member-checking findings with interviewees, ensuring their resonance with practitioners.
Companies are “democratising” AI through various approaches and with various goals. While these efforts tend to be celebrated for facilitating science and innovation, the strategic incentives driving these apparently altruistic acts are in large part hidden from public view. However, as the development and application of AI technologies have an ever-increasing impact on society and the economy, it becomes imperative to better understand the incentives and strategies for AI democratisation, as well as whose interests AI democratisation ultimately serves. Towards this end, this study employed a mixed-methods approach to investigate the commercial incentives for 43 AI OSS donations to the LF. The findings highlight an interplay of social, economic, and technological incentives at both individual and organisational levels. In the case of OSS donations, the democratisation of governance is treated as a social means for primarily economic and technological ends, such as attracting external contributors, reducing development costs, and influencing industry standards, among others. Furthermore, the study illustrates the role of individual developers, who champion OSS donations within their companies, thus highlighting the relevance of bottom-up incentives for AI democratisation. While some incentives are unique to OSS donations, the taxonomy serves both as a theoretical foundation and as a practical toolkit for understanding the commercial incentives for related AI democratisation efforts, such as AI model releases. As the number of open models continues to grow at a rapid pace, it is timely for researchers to turn their attention to the commercial incentives and strategies driving these AI democratisation efforts, as well as their effects on the norms, practices, and potential trajectories of the open source AI ecosystem at large.
http://arxiv.org/abs/2409.17117v1
20240925172849
Counting Triangles in Triangles
[ "Jim Propp", "Adam Propp-Gubin" ]
math.CO
[ "math.CO" ]
Counting Triangles in Triangles Jim Propp (UMass Lowell, Lowell, MA) Adam Propp-Gubin (Belmont High School, Belmont, MA) § ABSTRACT We give a formula for counting the triangles in a picture consisting of the three sides of a triangle and some cevians. This lets us prove statements that are claimed without proof in the Online Encyclopedia of Integer Sequences and some popular YouTube videos, and also prove some new results. We also give formulas that apply when the cevians cut each side into equal-length pieces. § INTRODUCTION How many triangles can you find in this picture? < g r a p h i c s > Keep in mind that triangles can be made of smaller triangles, like the two shown here: < g r a p h i c s > Do you think you found all the triangles in the original picture? Are you sure? Did you forget to include the big triangle that contains all six of the little triangles? Now that you've included it, do you think you've found all the triangles in the picture? How can you be sure? There are a lot of YouTube videos that treat such problems, such as Presh Talwalkar's “How Many Triangles Are There?” video <cit.>. Talwalkar's video, which has over half a million views, is about counting triangles in the following picture: < g r a p h i c s > Problems like this appear on a lot of Indian examinations. Talwalkar's video is better than most videos of its kind because it uses mathematical reasoning. A more typical video is Imran Sir's “Best Trick for Counting Figures” video <cit.>, which gives tricks for getting the right answer quickly without any explanation for why the tricks work. For instance, Sir claims that the number of triangles in the following picture is 4^3: < g r a p h i c s > The six extra lines inside the triangle are cevians: lines that pass through a vertex of a triangle and a point on the opposite side. Sir makes the more general claim that if you have a triangle with n cevians coming from one vertex and n cevians coming from a second vertex (and no cevians coming from the third vertex) then the number of triangles in the picture is (n+1)^3. But why? This claim also appears in the entry for the sequence of perfect cubes in the Online Encyclopedia of Integer Sequences <cit.>, where it's credited to Lekraj Beedassy, but as far as we know no proof has been published. The OEIS also includes a formula for counting triangles in a more complicated picture in which there are n cevians coming from each of the three vertices of a triangle on the assumption that no three cevians meet at a point <cit.>, but the argument given there isn't very clear. The purpose of this article is to prove all these claims by providing a general formula for counting triangles in a picture consisting of a triangle and some cevians. The proofs are easy but we feel that because of the global interest in such puzzles, there should be a published article that explains what's going on in this special class of triangle-counting puzzles. Our approach also provides a nice application of the combinatorial method of overcounting-and-correcting. Our main result is Theorem 1: In triangle ABC, if you draw a cevians from vertex A, b cevians from vertex B, and c cevians from vertex C, then the number of triangles formed is \binom{a+b+c+3}{3} - \binom{a+2}{3} - \binom{b+2}{3} - \binom{c+2}{3} - d where d is the number of points inside the triangle where three cevians meet. Here \binom{n}{3} is the number of ways to choose 3 objects out of a collection of n objects; it equals n!/(3!(n-3)!), which simplifies to n(n-1)(n-2)/6.
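Theorem 1 makes claims like Sir's easy to check mechanically. The short Python sketch below is our addition for illustration and is not part of the original article; it evaluates the formula with the standard-library binomial math.comb and tests the (n+1)^3 claim. The comment about d relies on the observation, explained below, that three cevians can only be concurrent at an interior point if one comes from each vertex.

```python
from math import comb

def triangles(a: int, b: int, c: int, d: int) -> int:
    """Theorem 1: triangles formed by a triangle with a, b, c cevians from
    vertices A, B, C, where d interior points lie on three concurrent cevians."""
    return (comb(a + b + c + 3, 3)
            - comb(a + 2, 3) - comb(b + 2, 3) - comb(c + 2, 3) - d)

# Sir's picture: 3 cevians from each of two vertices and none from the third.
# With no cevians from the third vertex, no interior point can lie on three
# cevians, so d = 0.
assert triangles(3, 3, 0, 0) == 4 ** 3

# The general version of the claim: n cevians from each of two vertices.
for n in range(10):
    assert triangles(n, n, 0, 0) == (n + 1) ** 3
```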
Consider again the problem that we gave at the beginning of this article. We could draw the following sixteen pictures, and after checking that there are no duplicates, we'd know that the answer's at least sixteen: < g r a p h i c s > But how would we know we didn't miss any other triangles? Theorem 1 assures us that we didn't: we have a=b=c=1 and we can see that d=1 as well, so that the formula gives \binom{6}{3} - \binom{3}{3} - \binom{3}{3} - \binom{3}{3} - 1 = 16. Notice that d must be 0 when a=0 or b=0 or c=0, since the only way three cevians can meet inside the triangle is if one cevian is an A-cevian, one is a B-cevian, and one is a C-cevian. Because of this, when a=b=n and c=0, we automatically get d=0 so that the formula gives \binom{2n+3}{3} - \binom{n+2}{3} - \binom{n+2}{3} - \binom{2}{3} - 0, which simplifies to (n+1)^3, as Sir and Beedassy claim. But there's no need for a and b to be equal if c=0; we still automatically get d=0, so Theorem 1 tells us that the number of triangles is \binom{a+b+3}{3} - \binom{a+2}{3} - \binom{b+2}{3}, which simplifies to (a+1)(b+1)(a+b+2)/2. Another application of Theorem 1 is the case a=b=n, c=1 with the assumption that the picture has bilateral symmetry under the reflection that fixes C and switches A and B. Here's the case n=3: < g r a p h i c s > In this situation there are n points where three cevians meet, corresponding to triples that consist of a cevian from A to BC, its mirror-twin (from B to AC), and the median from C to AB. Our formula gives \binom{2n+4}{3} - 2\binom{n+2}{3} - \binom{3}{3} - n = (n+3)(n+1)^2; this provides a new combinatorial interpretation of this sequence <cit.>. Theorem 1 bypasses the sometimes difficult problem of finding d, which can't be determined from a, b, and c alone. For instance, in the following picture, we have a=b=c=1 just like in the opening example, but d is 0 instead of 1, so that the number of triangles is 17 instead of 16. < g r a p h i c s > Theorem 2 establishes an important case in which the geometry of the cevians is specified in enough detail to allow us to determine d and so find a formula for the number of triangles in the picture: Theorem 2: Let p be a prime and let q=p^m for some exponent m ≥ 1. From each vertex of triangle ABC draw q-1 cevians that divide the opposite side into q pieces of equal length. Then (a) If p is odd, the number of triangles formed is (8q^3 - 9q^2 + 3q)/2. (b) If p=2, the number of triangles formed is (8q^3 - 9q^2 - 3q + 10)/2. Our proof of Theorem 2 relies on Theorem 1 along with Ceva's Theorem from geometry (from which cevians got their name), Euclid's Lemma from number theory (when q is an odd prime), and the Fundamental Theorem of Arithmetic (when q is a general prime power). § PROOFS Proof of Theorem 1: There are a+b+c+3 line segments in total (the cevians along with the three sides of triangle ABC). No two of the segments are parallel, and each of the segments intersects each of the others, either in the interior of triangle ABC or else at one of the three corners. Each of the triangles in the picture is bounded by three of these a+b+c+3 line segments, and each triple of line segments either bounds a triangle or else meets at a single point. So the number of triangles can't be more than \binom{a+b+c+3}{3}; to turn this into the right answer, we have to subtract the number of triples of line segments that meet at a point. There are a+2 line segments that pass through A, so there are \binom{a+2}{3} triples of line segments that meet at A. For the same reason there are \binom{b+2}{3} triples of line segments that meet at B and \binom{c+2}{3} triples of line segments that meet at C.
Lastly, there are d triples of line segments that meet in the interior, since each such intersection point uniquely determines the three cevians passing through it. So the number of triples of line segments that bound a triangle is \binom{a+b+c+3}{3} - \binom{a+2}{3} - \binom{b+2}{3} - \binom{c+2}{3} - d, as claimed. □ Proof of Theorem 2: By Ceva's Theorem, in any triangle ABC (see image), < g r a p h i c s > the three cevians AD, BE, and CF meet at a point if and only if (BD/DC)(CE/EA)(AF/FB) = 1. The p-1 cevians from A divide segment BC into p equal parts, so if AD is one of those cevians, line segment BD must consist of some whole number of those equal parts, say i of them, so that the complementary line segment DC must consist of p-i of them. So the ratio BD/DC must be of the form i/(p-i) for some integer i between 1 and p-1. For the same reason there must be an integer j between 1 and p-1 for which CE/EA = j/(p-j) and an integer k between 1 and p-1 for which AF/FB = k/(p-k). So the three cevians associated with i, j, and k meet at a point if and only if (i/(p-i)) (j/(p-j)) (k/(p-k)) = 1, that is, if and only if i j k = (p-i) (p-j) (p-k). Expanding the right hand side and moving ijk to the left side, we can rewrite this as 2ijk = p^3 - p^2 (i+j+k) + p (ij+ik+jk). At this point the analysis of the equation splits into two cases. (a) Assume p is odd. First we do the sub-case where m is 1, that is, q is equal to an odd prime p. Since p divides the right hand side of the preceding equation, p must divide the left hand side as well. But since p is an odd prime, p can't divide 2, nor can p divide i, j, or k since p is bigger than all three of them. On the other hand, Euclid's lemma tells us that when a prime p divides a product of two or more numbers, it must divide at least one of them. This contradiction shows that no triple i, j, k with 1 ≤ i,j,k ≤ p-1 satisfies i j k = (p-i) (p-j) (p-k). That is, there are no points interior to the triangle where three cevians intersect. So d is 0 as claimed, and Theorem 1 (with a=b=c=p-1) then tells us that the number of triangles is \binom{3p}{3} - 3\binom{p+1}{3}, which simplifies to (8 p^3 - 9 p^2 + 3p)/2, as our formula predicts. In the more general situation where q=p^m (with p odd) we apply the Fundamental Theorem of Arithmetic, which says that every natural number can be written in a unique way as a product of powers of primes if we don't pay attention to the order in which the powers of primes get multiplied together. Because of this, every natural number n can be written in a unique way as a power of p times a number not divisible by p. Write i = p^r f, j = p^s g, and k = p^t h where f, g, and h are integers not divisible by p. It's important to notice that r, s, and t must all be strictly less than m (this is where the assumption that q is a power of p plays a role). We can rewrite the equation i j k = (q - i) (q - j) (q - k) as 2 i j k = q^3 - q^2 i - q^2 j - q^2 k + q i j + q i k + q j k and then rewrite it again as 2 p^{r+s+t} f g h = p^{3m} - p^{2m+r} f - p^{2m+s} g - p^{2m+t} h + p^{m+r+s} fg + p^{m+r+t} fh + p^{m+s+t} gh. At first it's not clear how this will help us, but all the exponents of p in the right hand side are bigger than r+s+t because m is greater than r, s, and t.
(For instance, if we add together the three inequalities m>r, m>s, and m>t we get the inequality 3m>r+s+t, which tells us that the first exponent in the right hand side is bigger than r+s+t; or, if we add s+t to both sides of the inequality m > r we get the inequality m+s+t > r+s+t which tells us that the last exponent in the right hand side is bigger than r+s+t.) So each exponent in the right hand side is strictly bigger than r+s+t, or in other words, each of those exponents is greater than or equal to r+s+t+1. So each term in the right hand side of the equation is divisible by p^{r+s+t+1}, which implies that the whole right hand side is divisible by p^{r+s+t+1}. However, the left hand side isn't divisible by p^{r+s+t+1} because the exponent in p^{r+s+t} falls short by 1 and because 2, f, g, and h are not divisible by p (here we're using Euclid's lemma again). This contradiction tells us that d=0 in this case as well, so the formula we want to prove follows by the same reasoning that we used in the proof of the case where q is prime. (b) Now assume that p = 2, so that q = 2^m. The equation ijk = (q-i)(q-j)(q-k) becomes 2^{r+s+t+1} f g h = 2^{3m} - 2^{2m+r} f - 2^{2m+s} g - 2^{2m+t} h + 2^{m+r+s} fg + 2^{m+r+t} fh + 2^{m+s+t} gh. Because r, s, and t are all at most m-1, the exponent 3m must be at least 3 more than r+s+t and the exponents 2m+r, 2m+s, and 2m+t must all be at least 2 more than r+s+t, so none of those exponents can equal r+s+t+1. The only exponents that might equal r+s+t+1 are the last three: m+r+s, m+r+t, and m+s+t. Suppose without loss of generality that m+s+t (the last of them) equals r+s+t+1. Then we have m=r+1, or in other words r=m-1. So i must be a multiple of 2^{m-1} that's smaller than 2^m, and the only possibility is i = 2^{m-1} itself (that is, i = q/2). In other words, the ith cevian from vertex A is actually a median. This shows that at every point where cevians intersect, at least one of the three cevians must be a median. But there are three medians to consider. For each median there are q-1 mirror-pairs of cevians, so we might think the total number of intersection points is three times that; but the expression 3(q-1) triple-counts the point where all three medians meet, so we have to subtract 2. So the total number of intersection points is 3(q-1)-2 = 3q - 5. Plugging this into Theorem 1, we find that the number of triangles is (8 q^3 - 9 q^2 - 3 q + 10)/2. □ § CONCLUSION Theorem 1 was proved using the strategy of overcounting followed by correcting. This strategy can also be applied to the “Bollywood problem” mentioned near the start of the article. We've got 7 lines, so there are \binom{7}{3} = 35 triples. \binom{4}{3} = 4 of the triples meet at the apex, \binom{3}{3} = 1 of the triples consist of three parallel line segments, and \binom{3}{2}\binom{4}{1} = 12 of the triples consist of two parallel line segments and a line that goes through the apex. So the number of triangles in this picture is 35-4-1-12, or 18. A more direct approach like Presh Talwalkar's yields the answer with less work, but it's always good to know multiple ways to arrive at the answer to a problem, especially on a math test, if you have enough time to check your work. One problem that proved to be too hard for us was finding an exact formula that's like Theorem 2 in assuming that the cevians divide the opposite side into equal-length pieces but doesn't assume that the number of cevians is 1 less than a power of a prime. The difficult part is counting the places where cevians intersect.
That is, for general values of n it's hard to determine the number of solutions to the equation ijk = (n-i)(n-j)(n-k) with i,j,k between 1 and n-1. This mysterious number, which depends on the prime factorization of n in a complicated way, is the subject of sequence A331423 in the OEIS <cit.>. A related entry is sequence A332378 <cit.>; it lists the odd values of n for which there's a triple of cevians meeting at a point (under the assumption that the cevians from each vertex of the triangle divide the opposite side into n pieces of equal length). We did notice two patterns in the data. If q is of the form p(2p-1) where p and 2p-1 are both prime, then it seems that the picture you get by drawing q-1 cevians from each vertex (3(q-1) cevians in all), dividing each side of the triangle into q pieces of equal length, has a positive number of intersection points; we've checked this out to p=97, p(2p-1)=18721. The same sort of thing seems to happen when q is of the form p^2(2p+1) where p and 2p+1 are both prime (that is, where p is a Sophie Germain prime); we've checked this out to p=29, p^2(2p+1)=49619. We don't know how to prove that these patterns continue, let alone find a general rule governing those two integer sequences. Maybe you can!
O1 OEIS Foundation Inc. (2024), Entry A000578 in The On-Line Encyclopedia of Integer Sequences; https://oeis.org/A000578.
O2 OEIS Foundation Inc. (2024), Entry A130748 in The On-Line Encyclopedia of Integer Sequences; https://oeis.org/A130748.
O3 OEIS Foundation Inc. (2024), Entry A152618 in The On-Line Encyclopedia of Integer Sequences; https://oeis.org/A152618.
O4 OEIS Foundation Inc. (2024), Entry A331423 in The On-Line Encyclopedia of Integer Sequences; https://oeis.org/A331423.
O5 OEIS Foundation Inc. (2024), Entry A332378 in The On-Line Encyclopedia of Integer Sequences; https://oeis.org/A332378.
S I. Sir, Best Trick for Counting Figures (video, uploaded August 3, 2021); https://www.youtube.com/watch?v=yg2LiAYuwFc.
T P. Talwalkar, How Many Triangles Are There? (video, uploaded April 10, 2018); https://www.youtube.com/watch?v=OLJDnpWr8TY.
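For readers who want to experiment with the quantities discussed in the conclusion, here is a small Python addendum (ours, not part of the original article). It counts concurrent cevian triples by solving the Ceva equation for k given i and j, checks the counts against Theorem 2 for small prime powers, and prints the counts for general n; we make no claim about how the printed values line up with the offsets or indexing conventions of the OEIS entries cited above.

```python
from math import comb

def triangles(a, b, c, d):
    # Theorem 1
    return comb(a + b + c + 3, 3) - comb(a + 2, 3) - comb(b + 2, 3) - comb(c + 2, 3) - d

def concurrency_points(n):
    """Interior points where three cevians meet when the n-1 cevians from each
    vertex cut the opposite side into n equal parts.  By Ceva's Theorem these
    are the triples (i, j, k) with 1 <= i, j, k <= n-1 and
    i*j*k == (n-i)*(n-j)*(n-k); for fixed i and j the equation is linear in k,
    namely k = n*m / (i*j + m) with m = (n-i)*(n-j)."""
    count = 0
    for i in range(1, n):
        for j in range(1, n):
            m = (n - i) * (n - j)
            num, den = n * m, i * j + m
            if num % den == 0 and 1 <= num // den <= n - 1:
                count += 1
    return count

# Theorem 2(a): no concurrent triples when n is an odd prime power.
for q in (3, 5, 7, 9, 25, 27):
    assert concurrency_points(q) == 0
    assert triangles(q - 1, q - 1, q - 1, 0) == (8 * q**3 - 9 * q**2 + 3 * q) // 2

# Theorem 2(b): 3q - 5 concurrent triples when n = q is a power of two.
for q in (2, 4, 8, 16, 32):
    assert concurrency_points(q) == 3 * q - 5
    assert triangles(q - 1, q - 1, q - 1, 3 * q - 5) == (8 * q**3 - 9 * q**2 - 3 * q + 10) // 2

# The "mysterious number" for general n:
print([concurrency_points(n) for n in range(2, 31)])
```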
http://arxiv.org/abs/2409.17355v1
20240925210115
Learning Utilities from Demonstrations in Markov Decision Processes
[ "Filippo Lazzati", "Alberto Maria Metelli" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Our goal is to extract useful knowledge from demonstrations of behavior in sequential decision-making problems. Although it is well-known that humans commonly engage in risk-sensitive behaviors in the presence of stochasticity, most Inverse Reinforcement Learning (IRL) models assume a risk-neutral agent. Beyond introducing model misspecification, these models do not directly capture the risk attitude of the observed agent, which can be crucial in many applications. In this paper, we propose a novel model of behavior in Markov Decision Processes (MDPs) that explicitly represents the agent's risk attitude through a utility function. We then define the Utility Learning (UL) problem as the task of inferring the observed agent's risk attitude, encoded via a utility function, from demonstrations in MDPs, and we analyze the partial identifiability of the agent's utility. Furthermore, we devise two provably efficient algorithms for UL in a finite-data regime, and we analyze their sample complexity. We conclude with proof-of-concept experiments that empirically validate both our model and our algorithms. § INTRODUCTION The ultimate goal of Artificial Intelligence (AI) is to construct artificial rational autonomous agents <cit.>. Such agents will interact with each other and with human beings to achieve the tasks that we assign to them. In this vision, a crucial feature is being able to correctly model the observed behavior of other agents. This allows a variety of applications: (i) descriptive, to understand the intent of the observed agent <cit.>, (ii) predictive, to anticipate the behavior of the observed agent (potentially in new scenarios) <cit.>, (iii) normative, to imitate the observed agent because they are behaving in the “right way” <cit.>. Nowadays, Inverse Reinforcement Learning (IRL) provides the most popular and powerful models, i.e., simplified representations, of the behavior of the observed agent, named “expert”. Under the so-called “reward hypothesis” <cit.>, that has been recently re-interpreted in terms of properties of preferences over trajectories <cit.>, IRL algorithms construct reward functions representing the objectives and the desires of the expert. Depending on the application, different models can be adopted.
For instance, <cit.> considers the expert as an exact expected return maximizer, while <cit.> and <cit.> assume that the probability with which actions and trajectories, respectively, are played is proportional to their fraction of optimality (i.e., of expected return). All these models assume that the expert is a risk-neutral agent, i.e., an agent interested in the maximization of the expected return. However, there are many scenarios in which rational agents <cit.>, as well as humans <cit.>, adopt risk-sensitive strategies in the presence of stochasticity. In the most general case, agents are not only interested in the expected return, but in the full distribution of the return <cit.>. Popular examples in this context include agents who aim to maximize the expected return while trying to minimize the variance <cit.>, agents interested in the optimization of the Conditional Value-at-Risk (CVaR) <cit.>, or in rewards volatility <cit.>. IRL models, thus, incur in mis-specification, which can crucially affect the descriptive, predictive, and normative power of the inferred reward function <cit.>. Related works.  To overcome this limitation, some authors have analyzed the risk-sensitive IRL problem <cit.>, in which either the learner is provided with the reward function of the expert and it must infer some parameters representing the risk attitude, or the learner must infer both the reward function and the risk attitude from the given demonstrations <cit.>. Nevertheless, these works suffer from major limitations that prevent the adoption of the proposed algorithms in real-world applications. For instance, they either make demanding assumptions (e.g., Boltzmann policies like in <cit.> and <cit.>, which hypothesize the expert to play each action exactly proportionally to its Q-value), or consider rather limited settings (e.g., the “prepare-react model” of <cit.>, that imposes too much structure in the expert's behavior and in the environment's dynamics). An analogous line of research focuses on the problem of learning the risk attitude of an agent from demonstrations in certain decision-making settings other than Markov Decision Processes (MDPs) <cit.>. Even though the powerful model of von Neumann-Morgenstern (vNM) utility functions <cit.> is adopted for representing the risk attitude of the expert, these works only focus on “coarse” sequential decision-making settings like decision trees <cit.>, that do not provide the rich expressivity of MDPs (there is no notion of reward function). A more detailed analysis, along with additional related works, is provided in Appendix <ref>. Our proposal.  In this paper, we formalize, characterize, and analyze the problem of inferring the risk attitude of an agent, encoded with a utility function, from demonstrations of behavior in MDPs. The main contributions of this paper are listed below. The proofs of all results are in Appendix <ref>-<ref>. * Motivated by a real-world example, we propose a simple yet powerful model of behavior in MDPs, that separates the objective (reward) from the risk attitude (utility) of an agent (Section <ref>). * We introduce Utility Learning (UL) as the problem of inferring the risk attitude of an agent in MDPs, and we characterise the partial identifiability of the expert's utility (Section <ref>). * We present and theoretically analyze two novel algorithms, and , for efficiently solving the Utility Learning problem with finite data (Section <ref>). 
* We conclude the paper with proof-of-concept experiments that serve as an empirical validation of both the proposed model and the presented algorithms. (Section <ref>). § PRELIMINARIES The main paper's notation is below. Additional notation for the supplemental is in Appendix <ref>. Notation.  For any N ∈, we write N{1,…,N}. Given set , we denote by Δ^ the probability simplex on . Given ⊆^d,y∈^d, we define Π_(y) _x∈y-x_2. A real-valued function f:→ is L-Lipschitz if, for all x,y∈, we have |f(x)-f(y)|≤ L|x-y|. f is increasing if, for all x<y∈, it holds f(x)≤ f(y), and it is strictly-increasing if f(x)< f(y). The probability distribution that puts all its mass on z∈ is denoted by δ_z and is called the Dirac delta. We represent probability measures on finite support as finite mixtures of Dirac deltas. Markov Decision Processes (MDPs).  A tabular episodic Markov Decision Process (MDP) <cit.> is a tuple =,,H,s_0,p,r, where and are the finite state (S||) and action (A||) spaces, H is the time horizon, s_0∈ is the initial state, p:→Δ^ is the transition model, and r:→ [0,1] is the deterministic reward function. The interaction of an agent with generates trajectories. Let Ω_h (×)^h-1× be the set of state-action trajectories of length h for all h∈H+1, and ΩΩ_H+1. A deterministic non-Markovian policy π={π_h}_h∈H is a sequence of functions π_h:Ω_h→ that, given the history up to stage h, i.e., ω=s_1,a_1…,s_h-1,a_h-1,s_h∈Ω_h, prescribes an action. A Markovian policy π={π_h}_h∈H is a sequence of functions π_h:→ that depend on the current state only. We use g:⋃_h∈{2,…,H+1}Ω_h→[0,H] to denote the return of a (partial) trajectory ω∈Ω_h, i.e., g(ω)∑_h'∈h-1 r_h'(s_h',a_h'). With abuse of notation, we denote by _p,r,π the probability distribution over trajectories of any length induced by π in (we omit s_0 for simplicity), and by _p,r,π the expectation w.r.t. _p,r,π. We define the return distribution η^p,r,π∈Δ^[0,H] of policy π as η^p,r,π(y)∑_ω∈Ω: g(ω)=y_p,r,π(ω) for all y∈[0,H]. The set of possible returns at h∈H+1 is ^p,r_h {y∈[0,h-1] | ∃ω∈Ω_h,∃π: g(ω)=y∧_p,r,π(ω)>0}, and ^p,r^p,r_H+1. We remark that ^p,r_h has finite cardinality for all h. The performance of policy π is given by J^π(p,r)_p,r,π[∑_h=1^H r_h(s_h,a_h)], and note that J^π(p,r)=_G∼η^p,r,π[G]. We define the optimal performance as J^*(p,r)max_π J^π(p,r), and the optimal policy as π^*∈_π J^π(p,r). Risk-Sensitive Markov Decision Processes (RS-MDPs).  A Risk-Sensitive Markov Decision Process (RS-MDP) <cit.> is a pair _U,U, where =,,H,s_0,p,r is an MDP, and U∈ is a utility function in set {U':[0,H]→[0,H] | U'(0)=0,U'(H)=H∧ U' is strictly-increasing and continuous}. Differently from <cit.>, w.l.o.g., our utilities satisfy U(H)=H to settle the scale. The interaction with _U is the same as with , and the notation described earlier still applies, except for the performance of policies. The performance of policy π is J^π(U;p,r)_p,r,π[ U(∑_h=1^H r_h(s_h,a_h))], and note that J^π(U;p,r)=_G∼η^p,r,π[U(G)]. We define the optimal performance as J^*(U;p,r)max_π J^π(U;p,r), the optimal policy as π^*∈_π J^π(U;p,r), and the set of optimal policies for _U as Π^*_p,r(U). Enlarged state space approach.   In MDPs, there always exists a Markovian optimal policy <cit.>, but in RS-MDPs this does not hold. The enlarged state space approach <cit.> is a method, proposed by <cit.>, to compute an optimal policy in a RS-MDP. 
Given RS-MDP =1mu =1mu =1mu _U=,,H,s_0,p,r,U, we construct the enlarged state space MDP [_U]={×^p,r_h}_h∈H,,H,s_0,0,,, with a different state space ×^p,r_h at each h.[Actually, <cit.> use state space ×_≥ 0, while <cit.> use ×[h-1] for all h∈H. Instead, we consider sets ×{^p,r_h}_h to capture the minimal size required.] For every h∈H and =2mu =1.5mu(s,y,a)∈×^p,r_h×, the reward function is =2mu =1.5mu_h(s,y,a)=U(y+r_h(s,a))h=H, while the dynamics assigns to the next state =2mu =1.5mu(s',y')∈×^p,r_h+1 the probability: =2mu =1.5mu_h(s',y'|s,y,a) p_h(s'|s,a)y'=y+r_h(s,a). In words, the state space is enlarged with a component that keeps track of the cumulative reward in the original RS-MDP, and the reward , bounded in [0,H], provides the utility of the accumulated reward at the end of the episode. A Markovian policy ψ={ψ_h}_h∈H for [_U] is a sequence of mappings ψ_h:×^p,r_h→. Being an MDP, we adopt for [_U] the same notation presented earlier for MDPs, by replacing p,r,π with ,,ψ. Let ψ^* be the optimal Markovian policy for [_U]. Then, Theorem 3.1 of <cit.> shows that the (non-Markovian) policy π^*, defined for all h∈{2,…,H} and =1mu =1mu =1muω∈Ω_h as π^*_h(ω)ψ^*_h(s_h,∑_h'∈h-1r_h'(s_h',a_h')), and =1mu =1mu =1mu π^*_1(s_0)=ψ^*_1(s_0,0), is optimal for _U. Inverse Reinforcement Learning (IRL).  IRL aims to recover the reward function of an expert agent from demonstrations of behavior <cit.>. In the literature (e.g., <cit.>), various assumptions are made on how the expert's policy π^E is generated from the expert's MDP =,,H,s_0,p,r^E. Given the expert's MDP without reward ,,H,s_0,p, the expert's policy π^E, and the specific assumption considered, the IRL objective is to recover the reward r^E. Miscellaneous.  For L>0, we write _L{U∈ | U is L-Lipschitz}. For any finite set ⊆[0,H] we define ^{U∈[0,H]^|| | ∃ U∈, ∀ x∈: U(x)=U(x)}, and _L^{U∈^ | ∃ U∈_L, ∀ x∈: U(x)=U(x)}. We will denote by _U some RS-MDPs with U∈^. § MOTIVATION AND PROBLEM SETTING In this section, we begin by motivating the need for a more expressive model of behavior in MDPs. Next, we propose a risk-aware model and we justify it. We conclude with some observations. Existing models for representing behavior.  Our goal is to develop an algorithm, that permits to learn a “good” model of behavior of an agent from demonstrations in an MDP. In this context, the most common models present in the literature enforce a structure made of two components: (i) a reward function, that represents the objective of the agent, and (ii) a planning method, that describes how the behavior of the agent is generated given its objective. Crucially, the planning method is assumed to be known,[Indeed, <cit.> have demonstrated that “it is impossible to uniquely decompose a policy into a planning algorithm and reward function”, but we need to impose some structure to the problem.] thus, all the information about the behavior must hold inside the reward (the objective) that can be learned. Popular examples include IRL <cit.>, entropy-regularized IRL, <cit.>, and Bayesian IRL <cit.>. Limitations.  Our insight is that these models are not expressive enough to model human behavior in the presence of stochasticity in many common situations, as shown in the following example. 26cm0cm8 Consider the MDP on the side, where you can reach state s having already earned either 0€ or 100€ (in this example reward is money). From s, you can take either the “risky” action a_risky, that provides you with 200€ with probability (w.p.) 
1/2 or 0€ otherwise, or the “safe” action a_safe, that provides you always with 50€ . What action would you play in state s? Risk-averse <cit.> people might go with a_safe when landed on s with 0€ , and with a_risky otherwise, while risk-seeking people might always go with a_risky. Simply put, the current state s is not sufficient for predicting behavior, because people decide to take risks depending on how much money (i.e., reward) they earned so far. In other words, people might exhibit a non-Markovian behavior dependent on both the state and the cumulated rewards, which is not contemplated by the aforementioned IRL models.[Re-modelling the MDP including the reward into the state would make the optimal policy Markovian. Yet, this mathematical device would alter the state representation, preventing reward transfer to new environments.] Our proposal.   We propose to explicitly represent the risk attitude by constructing a model with three components: (i) a reward function, i.e., the objective; (ii) a utility function, i.e., the risk attitude, (iii) a (known) planning method, i.e., how the behavior of the agent is generated given its objective and its risk-attitude. Formally, we model the expert as an optimal agent in a RS-MDP: =1mu =1mu =2mu π^E∈_π_p,r,π[ U(∑_h=1^H r_h(s_h,a_h)) ], where (i) r is the reward, (ii) U∈ is the utility, and (iii) the principle of maximization of the expected utility is the planning method. There are many arguments that support this model: * it generalizes the IRL model of <cit.> by modelling the risk attitude through U; * it is justified by the famous expected utility theory <cit.>;[The set of prizes is ^p,r, and each policy π is a choice that induces a lottery η^p,r,π over prizes.] * it explains the existence of non-Markovian optimal policies (see <cit.>); * the corresponding planning problem enjoys practical tractability <cit.>. Some considerations.   If the utility U is linear, the RS-MDP _U admits a Markovian optimal policy. Otherwise, the more U deviates from linearity, the more non-Markovian policies may outperform Markovian policies, which may incur in a finite loss of performance, as shown below. proploseinperformancewithmarkovianity There exists a RS-MDP with horizon H=4 in which the difference between the optimal performance and the performance of the best Markovian policy is 0.5. Next, we observe that also any deterministic RS-MDP admits an optimal Markovian policy. Intuitively, in absence of risk (i.e., stochasticity) the utility function plays no role. propenvdeterministic Given any RS-MDP with deterministic transition model p and reward function r, if the utility U is increasing, then, there exists a Markovian optimal policy. Finally, if we restrict to Markovian policies, we note that non-stationarity (i.e., the dependence of the policy on the stage h) and stochasticity (i.e., if the policy prescribes a lottery over actions instead of a single action) can improve the performance even in stationary environments. Intuitively, they permit to consider larger ranges of return distributions w.r.t. stationary deterministic policies. proppolicynonstationary There exists a RS-MDP with stationary transition model and reward in which the best Markovian policy is non-stationary, and the best stationary Markovian policy is stochastic. § UTILITY LEARNING In this section, we formalise the Utility Learning problem, we characterise the partial identifiability of the true utility from demonstrations, and we analyze the inferred utilities for applications. 
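Returning briefly to the money example above, the switch between a_safe and a_risky is easy to reproduce numerically. The following sketch is ours, not the paper's: the payoffs are the ones from the example, while the concave utility U(x) = log(1 + x) is an illustrative assumption (it is not normalised as in the formal definition of utilities given in the preliminaries).

```python
import math

def utility(x: float) -> float:
    # Illustrative risk-averse (concave) utility; our assumption.
    return math.log(1.0 + x)

def expected_utility(already_earned: float, action: str) -> float:
    if action == "a_safe":      # 50 euros for sure
        outcomes = [(1.0, already_earned + 50.0)]
    else:                       # "a_risky": 200 euros w.p. 1/2, otherwise 0
        outcomes = [(0.5, already_earned + 200.0), (0.5, already_earned)]
    return sum(p * utility(total) for p, total in outcomes)

for earned in (0.0, 100.0):
    best = max(("a_safe", "a_risky"), key=lambda a: expected_utility(earned, a))
    print(f"earned so far = {earned:5.0f} euros -> best action in s: {best}")

# Output: a_safe when nothing has been earned yet, a_risky after 100 euros.
# The optimal choice in the same state s depends on the reward accumulated so
# far, i.e., the induced behavior is non-Markovian in the state alone.
```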
Learning from demonstrations under the new model.  In Section <ref>, we described a model that parametrizes the behavior of an agent through two components: a reward r, and a utility U. Given demonstrations of behavior in an MDP, Eq. (<ref>) defines three different learning problems: * Utility Learning (UL): given r, learn U. * Inverse Reinforcement Learning (IRL): given U, learn r. * IRL + UL: learn both r and U. Problem 3 is the most interesting (and challenging), because it makes the least assumptions, while Problem 2 has been extensively studied in literature when U is linear <cit.> (but not in detail for other choices of U). In this paper, we focus on Problem 1, which we name Utility Learning (UL), for two reasons. First, there exist relevant applications of UL per se (see the last part of this section). Second, understanding UL represents a significant step toward solving Problem 3. Partial identifiability in utility learning.  In the exact UL setting (where s_0,p,π^E are known), by definition, we are given a policy π^E and an MDP =,,H, s_0,p,r, and the goal is to find the expert's utility function U^E∈ that satisfies J^*(U^E;p,r)=J^π^E(U^E;p,r), i.e., that makes π^E optimal in RS-MDP _U^E. Does the knowledge of π^E and suffice to uniquely identify U^E? Analogously to IRL <cit.>, the answer is negative, as shown in the following example. Let =,,H,s_0,p,r be the MDP in Fig. <ref> (left), where H=2,r_1(s_0,a_1)=1,r_1(s_0,a_2)=0.5, and all other values of p,r are drawn in the figure. Note that ^p,r={0,1,1.5,2}. Let π^E be the expert policy, that prescribes a_1 in s_0. Then, a utility U∈ makes π^E optimal for _U if playing a_1 is better than playing a_2: J^π^E(U;p,r)=0.1U(2)+0.5U(1.5)+0.4U(1)≥ 0.8U(1.5)+0.2U(1). Thus, all the utilities U∈, that assign to G=1,G=1.5 any of the values coloured in blue in Fig. <ref> (middle), are equally-plausible candidates to be U^E. Example <ref> shows that U^E is just partially identifiable from demonstrations. In particular, we cannot uniquely identify the value of U^E at points in the set ^p,r, and we do not have information on U^E at the other points [0,H]∖^p,r. Similarly to <cit.>, we formalize the set of utilities “compatible” with π^E in by introducing the notion of feasible utility set:[In Appendix <ref> we provide a more explicit expression of the feasible utility set.] [Feasible Utility Set] Let =,,H, s_0,p,r be an MDP, and let π^E be the (potentially non-Markovian) expert policy. The feasible utility set _p,r,π^E contains all the utilities that make π^E optimal for RS-MDP _U. Formally: _p,r,π^E{U∈ |0.9J^π^E(U;p,r)=J^*(U;p,r)}. Usage and transferability of utilities.  UL is a problem setting for inferring the risk attitude of an agent. Once learned, we might “use” the computed utility U for (i) predicting the behavior of the expert in a new environment, (ii) imitating the expert, or (iii) assessing how valuable a certain policy is from the viewpoint of the expert. However, due to partial identifiability, U cannot be close to U^E more than the worst utility in the feasible set _p,r,π^E. Is this “ambiguity” tolerated by the applications above? The following propositions answer negatively for all (i),(ii), and (iii) of them, but, fortunately, Proposition <ref> shows that adding more data can solve the issue. Let us begin with (i). In our model, a utility U permits to predict the behavior of an agent with utility U^E in a new MDP ' if U and U^E induce in ' the same optimal policies. 
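A direct way to probe this property is to plan in the new environment under both the candidate and the true utility and check whether the candidate's optimum remains optimal for the truth. A short sketch, reusing the hypothetical plan_rs_mdp above (again with illustrative names), could be:

```python
import numpy as np

def evaluate_policy(P, R, policy, U, eps0):
    """Evaluate a fixed enlarged-state policy policy[h, s, y] under utility U."""
    H = len(P)
    S, _, _ = P[0].shape
    n_y = len(U)
    V = np.zeros((S, n_y))
    for h in reversed(range(H)):
        V_new = np.zeros((S, n_y))
        for s in range(S):
            for y in range(n_y):
                a = int(policy[h, s, y])
                y_next = min(y + int(round(R[h][s, a] / eps0)), n_y - 1)
                V_new[s, y] = U[y_next] if h == H - 1 else P[h][s, a] @ V[:, y_next]
        V = V_new
    return V[0, 0]

def transfers_to(P_new, R_new, U_cand, U_true, eps0, tol=1e-9):
    """True iff a greedy-optimal policy for U_cand in the new MDP is still optimal for U_true."""
    _, pi_cand = plan_rs_mdp(P_new, R_new, U_cand, eps0)
    J_true_opt, _ = plan_rs_mdp(P_new, R_new, U_true, eps0)
    return abs(J_true_opt - evaluate_policy(P_new, R_new, pi_cand, U_true, eps0)) <= tol
```

Note that this is only a one-sided check (it certifies a single optimal policy rather than the whole set Π^*), but it already suffices to exhibit the failures formalised below.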
Nevertheless, not all the utilities in the feasible set satisfy this property for all the possible MDPs, as shown below: [Transfer to a new transition model]propproptransferabilityp There exist two MDPs =,,H,s_0,p,r,'=,,H,s_0,p',r, with p≠ p', for which there exists a policy π^E and a pair of utilities U_1,U_2∈ such that: U_1,U_2∈_p,r,π^E and Π_p',r^*(U_1)∩Π_p',r^*(U_2)={}. [Transfer to a new reward] propproptransferabilityr There exist two MDPs =,,H,s_0,p,r,'=,,H,s_0,p,r', with r≠ r', for which there exists a policy π^E and a pair of utilities U_1,U_2∈ such that: U_1,U_2∈_p,r,π^E and Π_p,r'^*(U_1)∩Π_p,r'^*(U_2)={}. Intuitively, we are saying that transferring the learned utility U to an MDP with a different reward or transition model might cause it to induce optimal policies other than those induced by U^E there. Consider now (ii). To perform a meaningful imitation, due to the practical difficulty of computing optimal policies, we require that any policy with an almost optimal performance for U has also a “good” performance for U^E, but this does not always hold: proppropimitation There exists an MDP =,,H,s_0,p,r and a policy π^E for which there exists a pair of utilities U_1,U_2∈_p,r,π^E such that, for any ϵ≥ 0 smaller than some constant, there exists a policy π_ϵ such that J^*(U_1;p,r)-J^π_ϵ(U_1;p,r)=ϵ and J^*(U_2;p,r)-J^π_ϵ(U_2;p,r)≥ 1. Finally, concerning (iii), the fact that U and U^E provide close values of performance for all the policies seems a desirable requirement, i.e., asking for small =2mu =1.5mu =1.5mud_p,r^all(U^E,U)max_π|J^π(U^E;p,r)-J^π(U;p,r)| <cit.>. We note that closeness under some norm implies closeness in d^all: proppropbounddall Consider an arbitrary MDP with transition model p and reward function r. Then, for any pair of utilities U_1,U_2∈, it holds that d^all_p,r(U_1,U_2)≤max_G∈^p,r|U_1(G)-U_2(G)|. Nonetheless, not all the utilities in the feasible set are close to each other in d^all_p,r distance: proppropnobounddall There exists an MDP =,,H,s_0,p,r and a policy π^E for which there exists a pair of utilities U_1,U_2∈_p,r,π^E such that d^all_p,r(U_1,U_2)= 1. Intuitively, Propositions <ref>-<ref>, <ref>, prove that demonstrations of behavior in a single MDP do not provide enough information on U^E to obtain guarantees for applications (i),(ii), (iii). Instead, the following result shows that demonstrations in multiple environments permit to mitigate this issue. [Multiple demonstrations]proppropmultipledemonstrations Let ,,H be, respectively, any state space, action space, and horizon, satisfying S≥ 3,A≥2,H≥ 2, and let U^E∈ be any utility. If, for any possible dynamics s_0,p and reward r, we are given the set of all the deterministic optimal policies of the corresponding RS-MDP ,,H, s_0,p,r,U^E, then we can uniquely identify utility U^E. Simply put, Proposition <ref> says that it suffices to observe the set of optimal policies induced by the expert utility in a certain set of MDPs for uniquely retrieving U^E. Its proof is constructive. § ONLINE UTILITY LEARNING WITH GENERATIVE MODEL In the previous section, we have analyzed UL in the exact setting. Here, we introduce a more realistic setting for UL, and we describe two efficient algorithms with theoretical guarantees to address it. We consider the online UL problem setting with multiple demonstrations,[Asking for multiple demonstrations permits to alleviate the partial identifiability issues (Proposition <ref>).] which we now define. Let U^E∈ be the utility function of the expert. 
Consider N MDPs ^i=^i,^i,H,s_0^i,p^i,r^i, indexed by i∈N, that share the same horizon H. For each ^i, we know ^i,^i,H,s_0^i,r^i, we have access to a generative sampling oracle <cit.> for the transition model p^i, which, given any triple s,a,h, returns a sample s'∼ p^i_h(·|s,a), and we are given a dataset ^E,i={s_1^j,a_1^j,s_2^j,…,s_H^j,a_H^j,s_H+1^j}_j∈τ^E,i of τ^E,i trajectories collected by executing expert policy π^E,i, which is optimal for the RS-MDP ^i_U^E. Informally, the goal is to find U^E. §.§ Challenges and Our Solution To develop efficient algorithms for learning utilities in practice, some challenges must be addressed. Curse of Dimensionality.  Approximation techniques are needed for computing optimal policies in RS-MDPs (the enlarged state space is too large |^p,r_h|∝ (SA)^(h-1) ∀ h <cit.>), for storing return distributions (whose support may grow exponentially in the horizon |^p,r|∝ (SA)^H <cit.>), and for storing utilities in (defined over the interval [0,H]). Finite Data.  Some quantities of interest (i.e., policies and transition models) are not known exactly, but they must be estimated from samples, introducing an estimation error. Partial Identifiability.  Even in the exact setting, demonstrations of behavior are “explained” equally well by infinitely many utilities; thus, it is not clear which utility an algorithm should output. To address these challenges, our algorithms (Sections <ref>- <ref>) adopt the following approaches. Curse of Dimensionality.  We combine the (i) discretization approach in <cit.> for the enlarged state space, with the (ii) categorical representation of <cit.> for return distributions. Moreover, we consider (iii) discretized utilities. (i) Fix parameter ϵ_0>0, define sets {0,ϵ_0,2ϵ_0 ,…,1/ϵ_0ϵ_0}, _h{0,ϵ_0,2ϵ_0 ,…,(h-1)/ϵ_0ϵ_0} as the ϵ_0-coverings of [0,1] and [0,h-1] for all h∈H+1, and let _H+1,d||. Intuitively, note that the summation of h values in provides a value in _h+1 for all h. Therefore, for any i∈N, let 0.9^i_U^E^i,^i,H,s_0^i,p^i,r^i,U^E be the RS-MDP with reward r^i, obtained from r^i as r^i_h(s,a)Π_[r^i_h(s,a)] for all s,a,h. In this manner, the sets of partial returns of 0.9^i satisfy 0.9^p^i,r^i_h⊆_h⊆ for all h, thus the MDP [0.9^i_U^E] has a state space with cardinality at most Sd≤(SH/ϵ_0), which is no longer exponential in the horizon. (ii) Denote as {q∈Δ^d | ∑_j∈dq_jδ_y_j} the set of parametric probability distributions supported on , where y_1 0,y_2ϵ_0,…,y_dH/ϵ_0ϵ_0 represent the ordered items of set . We construct the categorical representation Proj_(η)∈ of an arbitrary (return) distribution η∈Δ^[0,H] through the operator Proj_, defined in Eq. (<ref>). (iii) We approximate utilities U∈ with vectors U∈0.9^ so that U(y)=U(y) for all y∈. In this way, we work with tractable approximations whose complexity is controlled by parameter ϵ_0. Finite Data.  We introduce the notion of utility compatibility to cope with finite data. With multiple demonstrations, the true utility U^E satisfies the hard constraints 0.9J^π^E,i(U^E;p^i,r^i)=J^*(U^E;p^i,r^i) for all i∈N. However, with finite data, our estimate of 0.9J^π^E,i(U^E;p^i,r^i)-J^*(U^E;p^i,r^i) might be different from zero for some i, thus, we might get wrong in recognizing U^E as the true expert's utility. Crucially, collecting more (but still finite) data does not guarantee to obtain exactly zero. 
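To make the finite-data side concrete, the plug-in estimate of the expert's expected utility can be formed by projecting the empirical returns onto the grid and taking an inner product with the discretized utility; this is the idea behind the return-distribution estimation subroutine described in the appendix. A minimal numpy sketch (illustrative names, not the authors' code) follows.

```python
import numpy as np

def project_returns_to_grid(returns, eps0, H):
    """Empirical return distribution projected onto the grid {0, eps0, ..., ceil(H/eps0)*eps0}
    by splitting each sample's mass between its two neighbouring grid points."""
    d = int(np.ceil(H / eps0)) + 1
    grid = eps0 * np.arange(d)
    eta = np.zeros(d)
    for G in returns:
        j = min(int(G // eps0), d - 2)          # index of the left neighbour
        w = (grid[j + 1] - G) / eps0            # mass kept on the left grid point
        eta[j] += w
        eta[j + 1] += 1.0 - w
    return grid, eta / len(returns)

def estimate_expert_value(returns, U_on_grid, eps0, H):
    """Plug-in estimate of the expert's expected utility: inner product of the projected
    empirical return distribution with the utility evaluated on the grid."""
    _, eta_hat = project_returns_to_grid(returns, eps0, H)
    return float(eta_hat @ U_on_grid)
```

The analogous estimate of the optimal value is obtained by running the value-iteration sketch above on the estimated model and discretized reward; the difference of the two estimates is exactly the empirical (non)compatibility introduced next.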
Drawing inspiration from <cit.>, we relax these “hard” requirements by introducing a “soft” notion of constraints satisfaction, which we name utility compatibility: Given MDP =,,H,s_0,p,r and policy π^E, the (non)compatibility _p,r,π^E:→[0,H] of utility U∈ with π^E in is: _p,r,(U) J^*(U;p,r)-0.95J^π^E(U;p,r). Thanks to utility compatibility, we can quantify the extent to which a utility U is (non)compatible with the (multiple) demonstrations by computing max_i∈N_p^i,r^i,π^E,i(U). Partial Identifiability.  We propose to develop two practical algorithms to fully characterize a set of utility functions: (i) A utility classifier,[The notion of reward classifier can be found in <cit.>. We extend it to utilities.] that “defines” the boundaries of the set, and (ii) a utility extractor, that extracts a utility from the set. For a given accuracy threshold Δ>0, define the set of Δ-compatible utilities as: _Δ{U∈ | max_i∈N_p^i,r^i,π^E,i(U) ≤Δ}. (i) We define a utility classifier algorithm as a procedure that takes in input a utility U∈, and outputs a boolean saying whether U∈_Δ or not. Intuitively, being the input utility arbitrary, such algorithm permits to characterize the entire set _Δ. Furthermore, (ii) we define a utility extractor algorithm as a procedure that outputs an arbitrary utility U from set _Δ. §.§ () is a utility classifier algorithm. It classifies utilities U w.r.t. _Δ by estimating the (non)compatibility 0.9^i(U)≈_p^i,r^i,π^E,i(U) for all i∈N, and, then, checking if max_i ∈N0.9^i(U)≤Δ. As other algorithms <cit.>, comprises two phases: an exploration phase (Algorithm <ref>), where we compute estimates {p^i}_i by collecting τ^i samples from the generative model of each 0.9^i, and a classification phase (Algorithm <ref>), that takes in input a utility =1mu =1mu =1mu U∈, estimates {p^i}_i, and datasets =1mu =1mu =1mu{^E,i}_i, to construct estimates {0.9^i(U)}_i for classifying U. Specifically, at Line <ref>, we discretize the utility U. Next, for all i∈N, we construct estimates J^E,i(U)≈ J^π^E,i(U;p^i,r^i) and J^*,i(U)≈ J^*(U;p^i,r^i) as follows. At Line <ref>, we estimate η^E,i≈Proj_(η^p^i,r^i,π^E,i) ≈η^p^i,r^i,π^E,i through the (Estimate Return Distribution) subroutine (Algorithm <ref>), and dataset ^E,i, while at Line <ref> we compute J^E,i(U). At Line <ref>, we approximate the optimal performance J^*(U;p^i,r^i) in RS-MDP ^i_U with the optimal performance J^*,i(U) J^*(U;p^i,r^i) in RS-MDP [0.8^i_U]^i,^i,H,s_0^i,p^i,r^i,U, which is computed through value iteration in the enlarged state space MDP [0.8^i_U] using the subroutine (Algorithm <ref>). Finally, at Line <ref> we compute ^i(U), and at Line <ref> we perform the classification. enjoys the following guarantee: thrthrupperboundcatyoneu Let ϵ,δ∈(0,1), and let be a subset of _L containing the utilities to classify. If we set ϵ_0= ϵ^2/(72HL^2), and if it holds that, for all i∈N: 0.9if ||=1: τ^E,i≤(H^2/ϵ^2logN/δ), τ^i≤(SAH^4/ϵ^2logSAHNL/δϵ), 0.9else : τ^E,i≤(H^4 L^2/ϵ^4logHNL/δϵ), τ^i≤(SAH^5/ϵ^2(S+logSAHN/δ)), then, w.p. at least 1-δ, correctly classifies all the U∈ that satisfy either max_i_p^i,r^i,π^E,i(U) <Δ-ϵ (inside _Δ) or max_i_p^i,r^i,π^E,i(U) >Δ+ϵ (outside _Δ). Some observations are in order. First, note that Δ is arbitrary in [0,H], and the sample complexity does not depend on it. If we have one utility to classify ||=1, then ∝ S queries to the generative model suffice instead of ∝ S^2. Note that ϵ_0 represents a trade-off between approximation and estimation error. 
If we re-normalize utilities so that U(H)=1, then some H terms in the bounds disappear. Intuitively, the Lipschitzianity assumption is necessary for approximating continuous utilities U∈ with vectors in . Finally, observe that we can restrict the range of (non)compatibility [Δ-ϵ,Δ+ϵ] where can make mistake with high probability (w.h.p.) by collecting more data. §.§ () For simplicity, let _L_L^ for L>0, and let ,_L, ,_L,_Δ be the analogous of, respectively, ,_L, ,_L,_Δ, but containing increasing functions instead of strictly-increasing functions.[Note that, for defining _Δ, we extend also the definition of (non)compatibility (Def. <ref>) to utilities in .] is a utility extractor algorithm. For any Δ>0, it aims to extract a utility U from _Δ by performing online gradient descent in the space of discretized L-Lipschitz utilities _L. It comprises two phases: an exploration phase, that coincides with that of (Algorithm <ref>) and aims to compute estimates {p^i}_i using {τ^i}_i samples, and an extraction phase (Algorithm <ref>), that takes in input estimates {p^i}_i, and datasets {^E,i}_i, to construct a utility U∈_L to return. Specifically, starting from U_0∈_L, we compute a sequence {U_1,…, U_T} of utilities in _L through an online projected gradient descent scheme, where the gradient g_t is computed at Line <ref>, and the update is carried out at Line <ref> with projection onto _L. Intuitively, we aim to minimize function max_i∈N_p^i,r^i,π^E,i(U)≤∑_i _p^i,r^i,π^E,i(U) over set _L (we upper bound the max with the sum to work with gradients instead of subgradients), but computing the gradient ∇_U∑_i∈N_p^i,r^i,π^E,i(U) =∑_i (∇_U J^*(U;p^i,r^i)-η^p^i,r^i,π^E,i) is not simple. Thus, analogously to <cit.>, we replace =1mu =1mu =1mu ∇_U J^*(U;p^i,r^i) with =1mu =1mu =1mu ∇_U J^π^*,i_t(U;p^i,r^i) =η^p^i,r^i,π^*,i_t, where π^*,i_t is the (fixed) optimal policy in RS-MDP ^i_U_t (U_t∈_L satisfies U_t(y)=U_t(y) for all y∈), and we prove convergence. Therefore, Lines <ref>, <ref>-<ref> approximate =1mu =1mu =1mu ∑_i(η_t^i-η^E,i)≈∑_i (η^p^i,r^i,π^*,i_t-η^p^i,r^i,π^E,i) for all t. In particular, Lines <ref>-<ref> approximate η^p^i,r^i,π^*,i_t by passing through =1mu =1mu =1mu η^i_t≈η^p^i,r^i,π^*,i_t≈η^p^i,r^i,π^*,i_t, where π^*,i_t is the optimal policy for the RS-MDP =1mu =1mu =1mu ^i_U_t^i,^i,H,s_0^i, p^i, r^i,U_t. At Line <ref> we compute through value iteration ( subroutine, Algorithm <ref>) the optimal policy 0.9ψ^*,i_t for MDP [0.9^i_U_t]. Then, at Line <ref>, we collect the return of K trajectories obtained by executing 0.9ψ^*,i_t in MDP [0.9^i_U_t] ( subroutine, Algorithm <ref>), which is equivalent to playing π^*,i_t in 0.9^i_U_t. Finally, at Line <ref>, we use this data to compute the empirical estimate η^i_t. enjoys the following guarantee: thrthrupperboundtractor Let ϵ,δ∈(0,1), L>0, and assume that U^E∈_L. If we execute with parameters ϵ_0=ϵ^2/(80N^2L^2H),T≥(N^4H^4L^2/ϵ^4), K≥( N^2H^2logNHL/δϵ/ϵ^2), α=√(H/ϵ_0-1)H/(2N√(T)), an arbitrary U_0∈_L, and if it holds that, for all i∈N: 0.9τ^E,i≥( H^4N^4L^2/ϵ^4logNHL/δϵ), τ^i≥( N^2SAH^5/ϵ^2( S+logSAHN/δ) ), then, w.p. at least 1-δ, for any Δ≥ϵ, guarantees that all the utilities U∈_L such that U(y)=U(y) for all y∈ (where U∈_L is the output of ) belong to U∈_Δ. Intuitively, any U∈_L obtained by “interpolating” U has a small (non)compatibility w.h.p.. We consider increasing utilities _L instead of strictly-increasing _L to guarantee the closedness of the set onto which we project. 
As for , normalizing U(H)=1 would remove some H terms from the bounds, and the Lipschitzianity assumption cannot be dropped. Finally, projection Π0.8__L can be implemented efficiently since set 0.9_L is made of (H^2/ϵ_0^2) linear constraints (Appendix <ref>). § NUMERICAL SIMULATIONS In this section, we provide two proof-of-concept experiments using data collected from people. The Data.  We asked to 15 participants to describe the actions they would play in an MDP with horizon H=5 (see Appendix <ref>), at varying of the state, the stage, and the cumulative reward collected. The reward has a monetary interpretation. To answer the questions, the participants have been provided with complete information about the dynamics and the reward function of the MDP.[We have been allowed to collect these data because they are not personal.] Experiment 1 - Model validation.  We aim to answer to: Is it worthy to increase the model complexity using a learnable utility in Eq. (<ref>) instead of the (fixed) linear utility as <cit.>? How much better do we fit the data? To measure the fitness of a utility U to the data (policy π) fairly, we consider a relative notion of (non)compatibility (we omit p,r for simplicity): _π^r(U) (J^*(U)-J^π(U))/J^*(U). Intuitively, _π^r(U) measures the quality of π as perceived by the demonstrating agent, if U was its true utility function. We execute (without exploration) for the 15 participants comparing the IRL risk-neutral utility U_linear with 3 “baselines”: A risk-averse U_sqrt (concave) and a risk-lover U_square (convex) utilities, and the utility U_SG fitted through the SG method (see Appendix <ref> for details). We report the (non)compatibilities in percentage below: Some observations are in order. First, this data shows that replacing U_linear (i.e., IRL) in Eq. (<ref>) with U_sqrt reduces 0.9_π^r(·) from 28% to 13% on the average of the participants, answering positively to our question. Next, the (fixed) U_sqrt outperforms the U_SG of each participant. This is due to both the bounded rationality of humans, who can not apply the H=1 utility U_SG to H>1 problems, and the fact that U_sqrt “overfits” the simple MDP considered, but it might generalize worse than U_SG to new environments. Finally, all the utilities are compatible with policies 4 and 11, providing empirical evidence on the partial identifiability of the expert's utility from single demonstrations. Experiment 2 - Empirical analysis of .  We aim to empirically characterise . We execute it with different values of step size α and initial utility U_0 to compute a compatible utility for participant 10 (chosen arbitrarily). Fig. <ref> (left) shows that the optimal step size α=100 may be very large, due to (i) the presence of compatible utilities on the boundaries of _L,[_L forces utilities to be increasing, i.e., with constraints U(G_1)≤ U(G_2) ∀ G_1≤ G_2. The plateau in Fig. <ref> (right) indicates that U(G_1)= U(G_2) ∀ G_1≤ G_2, G_1,G_2∈[1,3], thus, it represents a boundary.] thus larger step sizes can converge sooner, and to (ii) the projection onto _L that results in minimal changes of utility even with very large steps (see Appendix <ref>). These observations do not change if we consider other participants (Appendix <ref>). Next, we note that the choice of U_0 is rather irrelevant for the shape of U, but it matters for its “location”, as shown in Fig. <ref> (right). To view the sequence of utilities extracted by during a run, see Appendix <ref>. 
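Since the projection onto the discretized L-Lipschitz utilities drives much of this behaviour, it may help to spell out one way to implement it: the set is cut out by the linear constraints listed in the appendix, so the Euclidean projection is a small quadratic program. The sketch below uses cvxpy purely for illustration (the paper does not prescribe a solver); given monotonicity, bounding consecutive increments by L·eps0 already enforces the pairwise Lipschitz constraints.

```python
import numpy as np
import cvxpy as cp

def project_to_lipschitz_monotone(v, L, eps0, H):
    """Euclidean projection of a utility vector v (values on the grid) onto the set of
    discretized utilities that are non-decreasing, L-Lipschitz on the grid, and
    normalised so that U(0) = 0 and U at the last grid point equals H."""
    d = len(v)
    u = cp.Variable(d)
    constraints = [
        u[0] == 0,
        u[-1] == H,
        u[1:] - u[:-1] >= 0,          # monotonicity
        u[1:] - u[:-1] <= L * eps0,   # consecutive Lipschitz bound (implies the pairwise one)
    ]
    # Feasible as long as L * (d - 1) * eps0 >= H, i.e., L is at least the average slope.
    problem = cp.Problem(cp.Minimize(cp.sum_squares(u - v)), constraints)
    problem.solve()
    return np.asarray(u.value)
```

In this picture, a plateau such as the one observed for participant 10 corresponds to active monotonicity constraints (equal consecutive values), i.e., the iterate sitting on the boundary of the feasible set, which is consistent with the role the projection plays in the step-size discussion above.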
§ CONCLUSION In this paper, we proposed a novel descriptive model of behavior in MDPs, we formalized the UL problem as that of learning the risk attitude of an agent from demonstrations, and we characterised the partial identifiability of the expert's utility. In addition, we have described two provably efficient algorithms for estimating the compatibility of a utility with demonstrations, and for extracting a compatible utility. They have been empirically validated through two proof-of-concept experiments. Future directions.   This paper opens up many important questions. To quantify the model mis-specification, to use function approximation, to conduct an empirical study on the horizon used by humans for planning <cit.>, to combine demonstrations with other feedbacks <cit.>, to learn both r and U, to extend imitation learning approaches (e.g., GAIL <cit.>) or the maximum entropy framework with utilities, to improve the model in Eq. (<ref>) with negative rewards and prospect theory <cit.>, and many others. We believe that most of the IRL literature shall be extended under the proposed, more expressive, framework to construct more accurate algorithms for IRL and UL. iclr2025_conference § ADDITIONAL RELATED WORKS We describe here the most relevant related works. First, we describe IRL papers with risk, i.e., those works that consider MDPs, and try to learn either the reward function or the utility or both. Next, we analyze the works that aim to learning the risk attitude (i.e., a utility function) from demonstrations of behavior (potentially in problems other than MDPs). Finally, we present other connected works. Inverse Reinforcement Learning with risk. <cit.> introduce the risk-sensitive IRL problem in decision problems different from MDPs. Authors analyze two settings, one in which the expert takes a single decision, and one in which there are multiple decisions in sequence. They model the expert as a risk-aware decision-making agent acting according to a coherent risk metric <cit.>, and they consider both the case in which the reward function is known, and they try to learn the risk attitude (coherent risk metric) of the expert, and the case in which the reward is unknown, and they aim to estimate both the risk attitude and the reward function. Nevertheless, the authors analyze a very simple model of environment, that they call prepare-react model, which is much different from an MDP, since, simply put, it is equivalent to a deterministic MDP in which the stochasticity is shared by all the state-action pairs at each stage h∈H. Moreover, the optimal policy is markovian in this setting. <cit.> generalizes the work of <cit.>. Specifically, the biggest improvement is to consider nested optimization stages. However, the model of the environment is still much simple, and, in addition, the authors consider a maximum likelihood approach to facilitate inference. We mention also the work of <cit.> who extend <cit.> by devising an active learning framework to improve the efficiency of their learning algorithms. Another important work is that of <cit.>, who study the risk-sensitive IRL problem in MDPs, by proposing an interesting parametric model of behavior for the expert based on prospect theory <cit.>, and they devise a gradient-based inverse reinforcement learning algorithm that minimizes a loss function defined on the observed behavior. 
However, this work suffers from the major limitation of assuming that the expert plays actions exactly based on a softmax distribution, which introduces enough structure to perform maximum likelihood and to learn the parameters of the utility function. Such assumption is rather strong. We shall mention also the recent pre-print of <cit.> that proposes a novel stochastic control framework in continuous time that includes two utility functions and a generic discounting scheme under a time-varying rate. Assuming to know both the utilities and the discounting scheme, the authors show that, through state augmentation, the control problem is well-posed. In addition, the authors provide sufficient conditions for the identification of both the utilities and the discounting scheme given demonstrations of behavior. It should be remarked that there are many differences between this work and ours. First, they consider a continuous time environment that is rather different from an MDP. Next, when they consider MDPs to make things more concrete, they assume a utility function on the reward instead of the return, and they also consider the entropy-regularized setting in which the optimal policy is the Boltzmann policy, which permits to apply maximum likelihood for inferring the parameters of the utility function and the discount factor (they assume exponential discounting). Learning utilities from demonstrations. <cit.> considers an approach similar to IRL <cit.>. Their goal is not to perform active preference elicitation, but, similarly to us, to use demonstrations to infer preferences. Specifically, they aim to learn utilities in sequential decision-making problems from demonstrations. However, they model the problems through decision trees, which are different from MDPs, and this represents the main difference between their work and ours. Indeed, decision trees are simpler since there is no notion of reward function at intermediate states. In this manner, they are able to devise (backward induction) algorithms to learn utilities in decision trees through linear constraints similar to those devised by <cit.> in IRL. It is interesting to notice that they adopt a Bayesian approach to extract a single utility from the feasible set constructed, and not an heuristic like that of <cit.>. They assume a prior p(u) over the true utility function u, and approximate the posterior w.r.t. the feasible set of utilities using Markov Chain Monte Carlo (MCMC). <cit.> considers the problem of learning utilities from demonstrations similarly to <cit.>, but with the difference of considering influence diagrams instead of decision trees. Since any influence diagram can be expanded into a decision tree, authors adopt a strategy similar to <cit.>. <cit.> faces the problem of learning human utilities from (video) demonstrations, with the aim of generating meaningful tasks based on the learned utilities. However, differently from us, they consider the stochastic context-free And-Or graph (STC-AOG) framework <cit.>, instead of MDPs. Others. <cit.> is similar to our work in that it aims to learn the behavioral model of the expert from demonstrations. However, they do not consider a specific model like us (i.e., Eq. (<ref>)), but use a differentiable planner (neural network) to learn the planner. However, their approach requires a lot of demonstrations, even across multiple MDPs, and it does not consider the fact that there exist interesting models of humans in behavioral economics. 
§ ADDITIONAL NOTATION In this appendix, we introduce additional notation that will be used in other appendices. Miscellaneous. For any probability distribution ν∈Δ^, we denote its cumulative density function by F_ν. Let ν∈Δ^ be a probability distribution on ; then, for any y∈[0,1], we define the generalized inverse F^-1_ν(y) as: F^-1_ν(y)inf_x∈{F_ν(x)≥ y}. We define the 1-Wasserstein distance w_1:Δ^×Δ^→ [0,∞] between two probability distributions ν,μ as: w_1(ν,μ)∫_0^1 | F_ν^-1(y)-F^-1_μ(y)|dy. In addition, we define the Cramér distance ℓ_2:Δ^×Δ^→[0,∞] between two probability distributions ν,μ as: ℓ_2(ν,μ)( ∫_ (F_ν(y)-F_μ(y))^2dy )^1/2. We will use notation: _X∼ Q[X] _X∼ Q[(X-_X∼ Q[X])^2], to denote the variance of a random variable X∼ Q distributed as Q. Given two random variables X∼ Q_1,Y∼ Q_2, we denote their covariance as: Cov_X∼ Q_1,Y∼ Q_2[X,Y]_X∼ Q_1,Y∼ Q_2[(X-_X∼ Q_1[X])(Y-_Y∼ Q_2[Y])]. We define the categorical projection operator Proj_ (mentioned in Section <ref>), that projects onto set ={y_1,y_2,…,y_d} (the items of are ordered: y_1≤ y_2≤…≤ y_d), based on <cit.>. For single Dirac measures on an arbitrary y∈, we write: Proj_(δ_y)δ_y_1 if y≤ y_1 y_i+1-y/y_i+1-y_iδ_y_i+ y-y_i/y_i+1-y_iδ_y_i+1 if y_i<y≤ y_i+1 δ_y_d if y> y_d , and we extend it affinely to finite mixtures of M Dirac distributions, so that: Proj_(∑_j∈Mq_jδ_z_j)=∑_j∈Mq_j Proj_(δ_z_j), for some set of real values {z_j}_j∈M and weights {q_j}_j∈M. Value functions. Given an MDP =,,H,s_0,p,r and a policy π, we define the V- and Q-functions of policy π in MDP at every (s,a,h)∈ respectively as V^π_h(s;p,r)_p,r,π[∑_t=h^H r_t(s_t,a_t)|s_h=s] and Q^π_h(s,a;p,r)_p,r,π[∑_t=h^H r_t(s_t,a_t)|s_h=s,a_h=a]. We define the optimal V- and Q-functions as V^*_h(s;p,r)sup_π V^π_h(s;p,r) and Q^*_h(s,a;p,r)sup_π Q^π_h(s,a;p,r). For MDPs with an enlarged state space, e.g., {×_h}_h,,H,(s_0,0),,, and a policy ψ={ψ_h}_h, for all h∈H and (s,y,a)∈ we denote the V- and Q-functions respectively as V^ψ_h(s,y;,)_,,ψ[∑_t=h^H _t(s_t,y_t,a_t)|s_h=s,y_h=y] and Q^ψ_h(s,y,a;,)_,,ψ[∑_t=h^H _t(s_t,y_t,a_t)|s_h=s,y_h=y,a_h=a]. We denote the optimal V- and Q-functions as V^*_h(s,y;,)sup_ψ V^ψ_h(s,y;,) and Q^*_h(s,y,a;,)sup_ψ Q^ψ_h(s,y,a;,). Observe that the notation just introduced will be extended in a straightforward manner to MDPs (MDPs with enlarged state space) that have an estimated transition model p (), and/or a discretized reward function r (). § PROOFS FOR SECTION <REF> * The objective in Eq. (<ref>) coincides with that of a common MDP in absence of stochasticity and when U is increasing. Since there always exists an optimal Markovian policy in MDPs, thus we obtain the result. * For reasons that will be clear later, let us define symbol x≈ 2.6 as the solution of x-x^2/3.99-0.1=1. Consider the RS-MDP _U=,,H,s_0,p,r,U in Figure <ref>, where ={s_init,s_1,s_2,s_3,s_4,s_5,s_6}, ={a_1,a_2}, H=4, s_0=s_init, transition model p such that: p_1(s_1|s_init,a)=p_1(s_2|s_init,a)=1/2 ∀ a∈, p_2(s_3|s_1,a)=p_2(s_3|s_2,a)=1 ∀ a∈, p_3(s_4|s_3,a_1)=x/3.99,p_3(s_5|s_3,a_1)=1-x/3.99,p_3(s_6|s_3,a_2)=1, reward function r defined as: r_1(s_init,a)=0 ∀ a∈, r_2(s_1,a)=1 ∀ a∈, r_2(s_2,a)=0 ∀ a∈, r_3(s_3,a)=0 ∀ a∈, r_4(s_4,a)=1 ∀ a∈, r_4(s_5,a)=0 ∀ a∈, r_4(s_6,a)=0.5 ∀ a∈, and utility function U∈ that satisfies: U(y)= x-0.1 if y=0.5 x if y=1 x+0.1 if y=1.5 3.99 if y=2 . Note that this entails that: x/3.99U(2)+U(1)=U(0.5)+U(1.5). Note also that the support of the return function of this (RS-)MDP is ^p,r={0,0.5,1,1.5,2}. 
For α∈[0,1], let π^α be the generic Markovian policy that plays action a_1 in s_3 w.p. α (the actions played in other states are not relevant). Then, its expected utility is: J^π^α(U;p,r) =1/2[ α(x/3.99U(2)+(1-x/3.99)U(1)) +(1-α)U(1.5) ] +1/2[ α(x/3.99U(1)+(1-x/3.99)U(0)) +(1-α)U(0.5) ] (1)=1/2[ α(x/3.99U(2)+U(1)) +(1-α)(U(1.5)+U(0.5)) ] (2)=U(1.5)+U(0.5)/2, where at (1) we have used that U(0)=0, and at (2) we have used Eq. (<ref>). Thus, all Markovian policies π^α have the same performance. Let us consider the non-Markovian policy π that, in state s_3, plays action a_1 w.p. 1 if s_3 is reached with cumulative reward 1, and it plays action a_2 w.p. 1 if s_3 is reached with cumulative reward 0. Then, its performance is: J^π(U;p,r) =1/2(x/3.99U(2)+(1-x/3.99)U(1)) +1/2U(0.5). The difference in performance between the optimal performance and that of π^α is: J^*(U;p,r)-J^π^α(U;p,r) ≥ J^π(U;p,r) - J^π^α(U;p,r) =1/2(x/3.99U(2)+(1-x/3.99)U(1)) +1/2U(0.5)-U(1.5)+U(0.5)/2 =1/2(x/3.99U(2)+(1-x/3.99)U(1)-U(1.5)) (3)=1/2(x+x-x^2/3.99-x-0.1) = 1/2(x-x^2/3.99-0.1) (4)= 0.5, where at (3) we have replaced the values of utility, and at (4) we have used the definition of x. * Consider the stationary RS-MDP _U=,,H,s_0,p,r,U depicted in Figure <ref>, where ={s_init,s_1,s_2,s_3}, ={a_1,a_2}, H=4, s_0=s_init, stationary transition model p (we omit subscript because of stationarity) such that: p(s_2|s_init,a_1)=1-p(s_3|s_init,a_1)=1/3, p(s_1|s_init,a_2)=1, p(s_init|s,a)=1 ∀ s∈{s_1,s_2,s_3},∀ a∈, reward function r defined as: r(s_init,a)=0 ∀ a∈, r(s_1,a)=0.5 ∀ a∈, r(s_2,a)=1 ∀ a∈, r(s_3,a)=0 ∀ a∈, and utility function U∈ that satisfies: U(y)= 0.15 if y=0.5 0.2 if y=1 1.8 if y=1.5 2 if y=2 . Let π^α,β denote the general non-stationary policy that plays action a_1 at stage 1 w.p. α∈[0,1], and plays action a_1 at stage 2 w.p. β∈[0,1]. The performance of policy π^α,β can be written as: J^π^α,β(U;p,r) =α{1/3[ β( 1/3U(2)+2/3U(1) ) +(1-β)U(1.5) ] +2/3[ β1/3U(1)+(1-β)U(0.5) ] } + (1-α)[ β( 1/3U(1.5) +2/3U(0.5) ) +(1-β)U(1) ] = αβ[ 1/9U(2)+13/9U(1)-2/3U(1.5)-4/3U(0.5) ] +(α+β)[ 1/3U(1.5)+2/3U(0.5)-U(1) ]+U(1) = αβ[ 2/9+13/45-18/15-1/5] + (α+β)[ 1/5+1/10-1/5]+1/5 =-8/9αβ +1/10 (α+β)+1/5. To show that the best Markovian policy is non-stationary in this example, we show that the performance of non-stationary policy π^0,1 is better than the performance of all possible Markovian policies. The performance of π^0,1 is: J^π^0,1(U;p,r)=1/10+1/5=0.3. Instead, the generic stationary policy is π^α,α, and has performance: J^π^α,α(U;p,r)=-8/9α^2 +1/5α+1/5. The value of α∈[0,1] that maximizes this objective is: d/dα J^π^α,α(U;p,r)=-16/9α+1/5=0 α = 9/80, from which we get: J^π^9/80,9/80(U;p,r)=169/800≤ 0.22, which is smaller than 0.3=J^π^0,1(U;p,r). This concludes the proof of the first part of the proposition. For the second part, simply observe that, in the problem instance considered, we just obtained that the best Markovian stationary policy plays action a_1 w.p. 9/80, i.e., it is stochastic. § ADDITIONAL RESULTS AND PROOFS FOR SECTION <REF> In this appendix, we provide a more explicit formulation for the feasible utility set (Appendix <ref>), and then we provide the proofs of all the results presented in Section <ref> (Appendix <ref>). §.§ A more Explicit Formulation for the Feasible Utility Set For any policy π, we denote by ^p,r,π the set of all (s,h,y) state-stage-cumulative reward triples which are covered with non-zero probability by policy π in the considered (RS-)MDP. 
Thanks to this definition, we can rewrite the feasible set as follows: Let =,,H, s_0,p,r be an MDP, and let π^E be the expert policy. Then, the feasible utility set _p,r,π^E contains all and only the utility functions that make the actions played by the expert policy optimal at all the (s,h,y)∈^p,r,. Formally: _p,r,π^E={ U∈ | ∀ (s,h,y)∈^p,r,,∀ a∈: Q^*_h(s,y,π^E_h(s,y);p,r) ≥ Q^*(s,y,a;p,r), where we used the notation introduced in Appendix <ref>. Based on Theorem 3.1 of <cit.> (or Theorem 1 of <cit.>), we have that a utility U∈ belongs to the feasible set if it makes the expert policy optimal even in the enlarged state space MDP (note that it is possible to define a policy ψ for the enlarged MDP because we are considering policies π whose non-Markovianity lies only in the cumulative reward up to now). Therefore, the result follows thanks to a proof analogous to that of Lemma E.1 in <cit.>, since we are simply considering a common MDP with two variables per state. §.§ Proofs for Section <ref> * We will prove the guarantee stated in the proposition using two different pairs of MDPs: One that that satisfies ^p',r=^p,r, i.e., for which the support of the return function coincides, and the other that does not. Let us begin with the former. Consider a simple MDP =,,H,s_init,p,r with five states ={s_init,s_0,s_0.25,s_0.75,s_1}, two actions ={a_1,a_2}, horizon H=2, initial state s_init, transition model p such that: p_1(s'|s_init,a_1)= 1/4 if s'=s_0 1/4 if s'=s_0.25 1/4 if s'=s_0.75 1/4 if s'=s_1 , p_1(s'|s_init,a_2)= 1/2 if s'=s_0.25 1/2 if s'=s_0.75, and reward function r that assigns r_1(s_init,a_1)=r_1(s_init,a_2)=0, and: r_2(s,a)= 0 if s=s_0∧ (a=a_1 a=a_2) 0.25 if s=s_0.25∧ (a=a_1 a=a_2) 0.75 if s=s_0.75∧ (a=a_1 a=a_2) 1 if s=s_1∧ (a=a_1 a=a_2) . Note that the support of the return function is ^p,r={0,0.25,0.75,1}. We are given an expert's policy π^E that prescribes action a_1 at stage 1 in state s_init, and arbitrary actions in other states (the specific action is not relevant). The MDP is represented in Figure <ref>. Now, we show that utilities U_1,U_2∈, defined in points of the support ^p,r as (and connected in arbitrary continuous strictly-increasing manner between these points): U_1(G)= 0 if G=0 0.01 if G=0.25 0.02 if G=0.75 1.99 if G=1 , U_2(G)= 0 if G=0 0.01 if G=0.25 0.99 if G=0.75 1.99 if G=1 , belong to the feasible set _p,r,π^E, and, when transferred to the new MDP '=,,H, s_init,p',r, with transition model p'≠ p defined as: p_1'(·|s_init,a_1)=p_1(·|s_init,a_1), p_1'(s'|s_init,a_2)= 0.7 if s'=s_0 0.3 if s'=s_1, impose different optimal policies, i.e., utility U_2 keeps making action a_1 optimal from state s_init even in ', while U_1 makes action a_2 optimal. This proves the thesis of the proposition. Let us begin by showing that U_1,U_2∈_p,r,π^E belong to the feasible set of with policy π^E. Let π be the policy that plays action a_2 in state s_init. Then, the distribution of returns induced by policies π^E and π are (we represent values only at points in ^p,r={0,0.25,0.75,1}): η^p,r,π^E =[1/4, 1/4, 1/4, 1/4]^⊺ η^p,r,π =[0, 1/2, 1/2, 0]^⊺. Thus, policy π^E is optimal under some utility U if and only if the values assigned by U to points in ^p,r={0,0.25,0.75,1} (denoted, respectively, by U^1,U^2,U^3,U^4) satisfy: U^⊺ (η^p,r,π^E-η^p,r,π)= [1/4, -1/4, -1/4, 1/4] U =U^1-U^2-U^3+U^4≥ 0, where we have overloaded the notation and denoted with U[U^1,U^2,U^3,U^4]^⊺ both the utility and the vector of values assigned to points in ^p,r. 
By imposing normalization constraints (U(0)=0,U(2)=2), we get U^1=0, and by imposing also the monotonicity constraints, we get that utility U is in the feasible set _p,r,π^E if and only if: U^4≥ U^2+U^3 0<U^2<U^3<U^4<2 . Clearly, both utilities U_1,U_2 satisfy these constraints, thus they belong to the feasible set _p,r,π^E. Now, concerning problem ', the performances of π^E,π w.r.t. utilities U_1,U_2 are: J^π^E(U_1;p',r) = 1/4U_1(0)+1/4U_1(0.25)+1/4U_1(0.75)+1/4U_1(1)= 2.02/4=0.505, J^π(U_1;p',r) = 0.7 U_1(0)+0.3 U_1(1)= 0.3× 1.99=0.597, J^π^E(U_2;p',r) = 1/4U_1(0)+1/4U_1(0.25)+1/4U_1(0.75)+1/4U_1(1)= 2.99/4=0.7475, J^π(U_2;p',r) = 0.7 U_1(0)+0.3 U_1(1)= 0.3× 1.99=0.597. Clearly, J^π^E(U_1;p',r)<J^π(U_1;p',r), but J^π^E(U_2;p',r)>J^π(U_2;p',r), thus we conclude that the set of policies induced by utilities U_1,U_2 in ' do not intersect, since they start from s_init with different actions Π^*_p',r(U_1)∩Π^*_p',r(U_2)={}. This concludes the proof with an example that satisfies ^p',r=^p,r. If we want an example that does not satisfy ^p',r=^p,r, then we can consider exactly the same example with and ', but using r_1(s_init,a_2)=0.001. In this manner, we see that ^p,r={0,0.25,0.251,0.75,0.751,1}, and ^p',r={0,0.001,0.25,0.75,1,1.001}, which are different. By choosing U_1',U_2' as: U_1'(G)= 0 if G=0 0.001 if G=0.001 0.01 if G=0.25 0.011 if G=0.251 0.02 if G=0.75 0.021 if G=0.751 1.99 if G=1 1.991 if G=1.001 , U_2'(G)= 0 if G=0 0.001 if G=0.001 0.01 if G=0.25 0.011 if G=0.251 0.99 if G=0.75 0.991 if G=0.751 1.99 if G=1 1.991 if G=1.001 , it can be shown that U_1',U_2' belong to the (new) feasible set of , and that induce different policies in '. This concludes the proof. * Similarly to the proof of Proposition <ref>, we provide two examples, one with ^p,r'=^p,r, and the other with ^p,r'≠^p,r. Let us begin with the former. Consider a simple MDP =,,H, s_init,p,r with three states ={s_init,s_1,s_2}, two actions ={a_1,a_2}, horizon H=2, initial state s_init, transition model p such that: p_1(s'|s_init,a_1)= 1/2 if s'=s_1 1/2 if s'=s_2, p_1(s'|s_init,a_2)= 0.9 if s'=s_1 0.1 if s'=s_2, and reward function r that assigns r_1(s_init,a_1)=0, r_1(s_init,a_2)=0.5, and: r_2(s,a)= 0 if s=s_1∧ (a=a_1 a=a_2) 1 if s=s_2∧ (a=a_1 a=a_2) . Note that the support of the return function is ^p,r={0,0.5,1,1.5}. We are given an expert's policy π^E that prescribes action a_1 at stage 1 in state s_init, and arbitrary actions in other states (the specific action is not relevant). The MDP is represented in Figure <ref>. Now, we show that the utilities U_1,U_2∈, defined in points of the support ^p,r as (and connected in arbitrary continuous strictly-increasing manner between these points): U_1(G)= 0 if G=0 0.1 if G=0.5 0.9 if G=1 1.5 if G=1.5 , U_2(G)= 0 if G=0 0.1 if G=0.5 0.8 if G=1 1.5 if G=1.5 , belong to the feasible set _p,r,π^E, and, when transferred to the new MDP '=,,H, s_init,p,r', with reward function r'≠ r defined as: r_1'(s_init,a_1)=0.5, r_1(s_init,a_2)=0, r_2'(s,a)= 1 if s=s_1∧ (a=a_1 a=a_2) 0 if s=s_2∧ (a=a_1 a=a_2) , impose different optimal policies, i.e., utility U_2 keeps making action a_1 optimal from state s_init even in ', while U_1 makes action a_2 optimal. This will demonstrate the thesis of the proposition. Let us begin by showing that U_1,U_2∈_p,r,π^E belong to the feasible set of with policy π^E. Let π be the policy that plays action a_2 in state s_init. 
Then, the distribution of returns induced by policies π^E and π are (we represent values only at points in ^p,r={0,0.5,1,1.5}): η^p,r,π^E =[0.5, 0, 0.5, 0]^⊺ η^p,r,π =[0, 0.9, 0, 0.1]^⊺. Thus, policy π^E is optimal under some utility U if and only if the values assigned by U to points in ^p,r={0,0.5,1,1.5} (denoted, respectively, by U^1,U^2,U^3,U^4) satisfy: U^⊺ (η^p,r,π^E-η^p,r,π)= [0.5, -0.9, 0.5, -0.1] U =0.5 U^1-0.9 U^2 + 0.5 U^3-0.1 U^4≥ 0, where we have overloaded the notation and denoted with U[U^1,U^2,U^3,U^4]^⊺ both the utility and the vector of values assigned to points in ^p,r. By imposing normalization constraints (U(0)=0,U(2)=2), we get U^1=0, and by imposing also the monotonicity constraints, we get that utility U is in the feasible set _p,r,π^E if and only if: U^4≥ 5 U^3 - 9 U^2 0<U^2<U^3<U^4<2 . Clearly, both utilities U_1,U_2 satisfy these constraints, thus they belong to the feasible set _p,r,π^E. Now, concerning problem ', the performances of π^E,π w.r.t. utilities U_1,U_2 are: J^π^E(U_1;p,r') = 0 U_1(0)+0.5 U_1(0.5)+0 U_1(1)+0.5 U_1(1.5)= 1.6/2=0.8, J^π(U_1;p,r') = 0.1 U_1(0)+0 U_1(0.5)+0.9 U_1(1)+0 U_1(1.5)= 0.9× 0.9=0.81, J^π^E(U_2;p,r') = 0 U_2(0)+0.5 U_2(0.5)+0 U_2(1)+0.5 U_2(1.5)= 1.6/2=0.8, J^π(U_2;p,r') = 0.1 U_2(0)+0 U_2(0.5)+0.9 U_2(1)+0 U_2(1.5)= 0.9× 0.8=0.72. Clearly, J^π^E(U_1;p,r')<J^π(U_1;p,r'), but J^π^E(U_2;p,r')>J^π(U_2;p,r'), thus we conclude that the set of policies induced by utilities U_1,U_2 in ' do not intersect, since they start from s_init with different actions Π^*_p,r'(U_1)∩Π^*_p,r'(U_2)={}. This concludes the proof with an example that satisfies ^p,r'=^p,r. If we want an example that does not satisfy ^p,r'=^p,r, then we can consider exactly the same example with and ', but using r_1'(s_init,a_2)=0.001. In this manner, we see that ^p,r={0,0.5,1,1.5}, and ^p',r={0.001,0.5,1.001,1.5}, which are different. Nevertheless, by choosing U_1',U_2' as: U_1'(G)= 0 if G=0 0.001 if G=0.001 0.1 if G=0.5 0.9 if G=1 0.901 if G=1.001 1.5 if G=1.5 , U_2'(G)= 0 if G=0 0.001 if G=0.001 0.1 if G=0.5 0.8 if G=1 0.801 if G=1.001 1.5 if G=1.5 , it can be shown that U_1',U_2' still belong to the feasible set of (the constraints are the same), and that induce different policies in '. This concludes the proof. * Consider a simple MDP =,,H, s_init,p,r with four states ={s_init,s_1,s_2,s_3}, three actions ={a_1,a_2,a_3}, horizon H=2, initial state s_init, transition model p such that: p_1(s_2|s_init,a_1)=1, p_1(s_1|s_init,a_3)=1, p_1(s'|s_init,a_2)= 0.91 if s'=s_1 0.09 if s'=s_3, and reward function r that assigns r_1(s_init,a_1)=r_1(s_init,a_2)=r_1(s_init,a_3)=0, and: r_2(s,a)= 0 if s=s_1∧ (a=a_1 a=a_2 a=a_3) 0.5 if s=s_2∧ (a=a_1 a=a_2 a=a_3) 1 if s=s_3∧ (a=a_1 a=a_2 a=a_3) . Note that the support of the return function is ^p,r={0,0.5,1}. We are given an expert's policy π^E that prescribes action a_1 at stage 1 in state s_init, and arbitrary actions in other states (the specific action is not relevant). The MDP is represented in Figure <ref>. Now, we show that the utilities U_1,U_2∈, defined in points of the support ^p,r as (and connected in arbitrary continuous strictly-increasing manner between these points): U_1(G)= 0 if G=0 0.1 if G=0.5 0.1/0.09 if G=1 , U_2(G)= 0 if G=0 1.099 if G=0.5 1.1 if G=1 , belong to the feasible set _p,r,π^E, and that, for any ϵ∈[0,0.1], there exists a policy π for which it holds both that J^*(U_1;p,r)-J^π(U_1;p,r)=ϵ and J^*(U_2;p,r)-J^π(U_2;p,r)≥ 1. First, let us show that both U_1,U_2 belong to the feasible utility set. 
Let π^1,π^2,π^3 be the policies that play, respectively, action a_1,a_2,a_3 in state s_init (note that π^1=π^E). Then, their performances for arbitrary utility U are: J^π^1(U; p,r)=U(0.5), J^π^2(U; p,r)=0.09U(1)+0.91U(0)=0.09U(1), J^π^3(U; p,r)=U(0)=0, where we have used the normalization condition. Replacing U with U_1, we get J^*(U_1;p,r)=J^π^1(U_1; p,r)=0.1=J^π^2(U_1; p,r)=0.1>J^π^3(U_1; p,r)=0. Instead, replacing with U_2, we get J^*(U_2;p,r)=J^π^1(U_2; p,r)=1.099>J^π^2(U_2; p,r)=0.09× 1.1>J^π^3(U_2; p,r)=0. Therefore, both U_1,U_2∈_p,r,π^E. Now, for any α∈[0,1] let us denote by π_α the policy that, at state s_init, plays action a_3 w.p. α, and action a_2 w.p. 1-α. We show that, for any ϵ∈[0,0.1], policy π_ϵ/0.1 is ϵ-optimal for utility U_1, and its suboptimality is at least 1 under utility U_2. For any α∈[0,1], the expected utilities of policy π_α under U_1 and U_2 are: J^π_α(U_1;p,r)=(1-α)× 0.09× U_1(1)=(1-α)× 0.1, J^π_α(U_2;p,r)=(1-α)× 0.09× U_2(1)=(1-α)× 0.099, from which we derive that the suboptimalities of such policy under U_1 and U_2 are: J^*(U_1;p,r)-J^π_α(U_1;p,r)=0.1-(1-α)× 0.1=0.1α, J^*(U_2;p,r)-J^π_α(U_2;p,r)=1.099-(1-α)× 0.099=1+0.099α. Thus, for any ϵ∈[0,0.1], policy π_ϵ/0.1 is ϵ-optimal for utility U_1, but it is at least 1-suboptimal for utility U_2. The intuition is that utilities U_1 and U_2 assess in completely different manners the policies that play action a_2, although they both describe policy π^E as optimal. This concludes the proof. * For the sake of simplicity, we denote the infinity norm and the 1-norm w.r.t. set ^p,r as: f_∞max_G∈^p,r|f(G)| and f_1∑_G∈^p,r|f(G)|. In addition, we overload notation and use symbols U_1,U_2 to denote the vectors in [0,H]^|^p,r| containing, respectively, the values assigned by utility functions U_1,U_2 to points in set ^p,r. Then, we can write: d^all_p,r(U_1,U_2) sup_π∈Π |J^π(U_1;p,r)-J^π(U_2;p,r)| =sup_π∈Π |_G∼η^p,r,π[U_1(G)]- _G∼η^p,r,π[U_2(G)]| =sup_π∈Π |_G∼η^p,r,π[U_1(G)-U_2(G)]| (1)≤sup_η∈Δ^^p,r |_G∼η[U_1(G)-U_2(G)]| (2)≤sup_η∈Δ^^p,r_G∼η|U_1(G)-U_2(G)| (3)=U_1-U_2_∞, where at (1) we upper bound by considering the set of all possible distributions over set ^p,r instead of just those induced by some policies in the considered MDP, at (2) we apply triangle inequality, and at (3) we have used the fact that ·_1 and ·_∞ are dual norms. * Consider a simple MDP =,,H, s_init,p,r with three states ={s_init,s_1,s_2}, three actions ={a_1,a_2,a_3}, horizon H=2, initial state s_init, transition model p such that: p_1(s_1|s_init,a_1)=1, p_1(s_2|s_init,a_2)=p_1(s_2|s_init,a_2)=1, and reward function r that assigns r_1(s_init,a_1)=r_1(s_init,a_2)=0, r_1(s_init,a_2)=1, and: r_2(s,a)= 0 if s=s_1∧ (a=a_1 a=a_2 a_3) 1 if s=s_2∧ (a=a_1 a=a_2 a_3) . Note that the support of the return function is ^p,r={0,1,2}. We are given an expert's policy π^E that prescribes action a_3 at stage 1 in state s_init, and arbitrary actions in the other states (the specific action is not relevant). The MDP is represented in Figure <ref>. Consider two utilities U_1,U_2, that take on the following values in ^p,r: U_1(G)= 0 if G=0 0.1 if G=1 2 if G=2 , U_2(G)= 0 if G=0 1.1 if G=1 2 if G=2 . It is immediate that both utilities belong to the feasible set _p,r,π^E. Nevertheless, if we denote by π the policy that plays action a_2 in state s_init, we see that J^π(U_1;p,r)=0.1, while J^π(U_2;p,r)=1.1, so that the difference is 1. * We provide a constructive proof that shows which values of s_0,p,r it is sufficient to choose for recovering U^E exactly. 
The construction is articulated into two parts. First, we aim to recover the value of U^E(1), i.e., for G=1; next, we recover the utility for all other possible values of return. The intuition is that we construct a Standard Gamble (SG) between two policies over the entire horizon <cit.>. To infer U^E(1), we use the s_0,p,r values that provide the MDP described in Figure <ref>. We consider a single initial state s_init. From here, action a_1 (and all actions other than a_1 and a_2) brings deterministically to state s_1^2, while action a_2 brings to state s_3^2 w.p. q (to choose, for some q∈[0,1]), and to state s_2^2 w.p. 1-q. From state s_i^2, for any i∈3, all actions bring deterministically to state s_i^3, and so on, up to state s_i^H. We will call the trajectory {s_init,s_i^2,s_i^3,…,s_i^H} the i^th trajectory for all i∈3, and we will write G(i) to denote the sum of rewards along such trajectory. To infer the value U^E(1), we select a reward r':→[0,1] that provides return G(1)=1.5 to the first trajectory, return G(2)=1 to the second trajectory, and return G(3)=H to the third trajectory (this is possible because H≥ 2). By selecting, successively, all the values of q∈[0,1], we are asking to the expert to play either action a_1 or action a_2 from the initial state s_init (we denote policies π^1,π^2, respectively, the policies that play actions a_1,a_2 in s_init). Since we are assuming that the expert will demonstrate all the possible deterministic optimal policies, there exists a value q'∈[0,1] for which the expert demonstrates both policies π^1 and π^2. Indeed, the expected utilities of policies π^1,π^2 for arbitrary value of q are (we write p(q) as the generic transition model): J^π^1(U^E;p(q),r')=U^E(1.5), J^π^2(U^E;p(q),r')=qU^E(H)+(1-q)U^E(1)=qH+(1-q)U^E(1), and since U^E is strictly-increasing, we have U^E(1)<U^E(1.5)<U^E(H)=H, thus there must exist q' that permits to write U^E(1.5) as a convex combination of the other two. This allows us to write: U^E(1.5)=q'H+(1-q')U^E(1). Next, we select reward r” that provides returns G(1)=1,G(2)=0.5,G(3)=1.5. Thus, there must exist a q”∈[0,1] for which the expert demonstrates both policies π^1 and π^2, allowing us to write: U^E(1)=q”U^E(1.5)+(1-q”)U^E(0.5). Finally, we can repeat the same step with a third reward r”' that provides returns G(1)=0.5,G(2)=0,G(3)=1, and for some q”'∈[0,1] we obtain: U^E(0.5)=q”'U^E(1). By putting together Eq. (<ref>), Eq. (<ref>), and Eq. (<ref>), we can retrieve U^E(1): U^E(1.5)=q'H+(1-q')U^E(1) U^E(1)=q”U^E(1.5)+(1-q”)U^E(0.5) U^E(0.5)=q”'U^E(1) . Now that we know U^E(1), we can infer the utility for all the returns G∈(1,H) by choosing a reward that provides returns G(1)=G,G(2)=1,G(3)=H, because for some q∈[0,1] the expert will play both policies π^1 and π^2, which allows us to write: U^E(G)=qH+(1-q)U^E(1), and to retrieve U^E(G). Similarly, for all G∈(0,1), we select a reward that provides returns G(1)=G,G(2)=0,G(3)=1, and for some q∈[0,1] we can write: U^E(G)=qU^E(1), and retrieve U^E(G). This concludes the proof. As a final remark, we stress that the initial step for inferring U^E(1) cannot be dropped because there is no reward r:→[0,1] that provides returns G(2)=0 and G(3)=H, because both the first and second trajectories pass through action a_2 in state s_init. § ADDITIONAL RESULTS AND PROOFS FOR SECTION <REF> This appendix is divided in 4 parts. First, we show the complexity of implementing operator Π__L (Appendix <ref>). 
In Appendix <ref>, we provide the pseudocode, along with a description, of algorithms , , , and . In Appendix <ref>, we provide the proof of Theorem <ref>. In Appendix <ref>, we provide the proof of Theorem <ref>. §.§ Projecting onto the set of discretized utilities Let us use the square brackets [] to denote the components of vectors. Then, note that set _L can be represented more explicitly as: _L= {U∈[0,H]^d | U[1]=0∧U[d]=H∧U[i]≤U[i+1] ∀ i∈d-1 ∧ ∀ i,j ∈d s.t. i<j: |U[i]-U[j]|≤ L(j-i)ϵ_0}. Notice that set _L is closed and convex, since it is defined by linear constraints only. The amount of constraints scales as ∝ d^2. §.§ Missing Algorithms and Sub-routines In Algorithm <ref>, we report the pseudo-code implementing subroutine . Simply put, we adopt a uniform-sampling strategy, i.e., we collect n=τ/(SAH) samples from each (s,a,h)∈ triple, that we use to compute the empirical estimate of the transition model. We return such estimate. ruled The sub-routine (Algorithm <ref>) takes in input a utility U, an environment index i, and a transition model p, that uses to construct the RS-MDP _U^i,^i,H,s_0^i,p,r^i,U. Notice that _U≠^i_U^E, for 3 aspects. First, it uses the input transition model p≠ p^i; next, it consider the discretized reward r^i≠ r^i; finally, it has input utility U≠ U^E. outputs two items. The optimal performance J^*(U;p,r^i) for RS-MDP _U, and the optimal policy ψ^*={ψ^*_h}_h for the enlarged state space MDP [_U]. However, it should be remarked that, instead of computing optimal policy ψ^* for [_U] only at pairs (s,y)∈×^p,r^i_h for all h∈H, computes the optimal policy ψ^* at all pairs (s,y)∈×_h for all h∈H (note that ^p,r^i_h⊆_h). The algorithm implemented in for computing both J^*(U;p,r^i) and ψ^* is value iteration. The difference from common implementations of value iterations lies in the presence of an additional variable in the state. A similar pseudocode is provided in Algorithm 1 of <cit.>. ruled (Estimate the Return Distribution) The sub-routine (Algorithm <ref>) takes in input a dataset ^E={ω_j}_j of state-action trajectories ω_j∈Ω and a reward function r, and it computes an estimate of the return distribution w.r.t. r. For every trajectory ω_j∈^E, computes the return G_j of ω_j based on the input reward r (Line <ref>). In the next lines, simply computes the categorical projection of the mixture of Dirac deltas: η= Proj_(∑_j 1/|^E|δ_G_j), where the categorical projection operator Proj_ is defined in Eq. (<ref>). ruled (Algorithm <ref>) takes in input a Markovian policy ψ, a transition model p, a reward r, an environment index i, and a number of trajectories K, to construct the MDP ^i,^i,H,s_0^i,p,r obtained from MDP ^i by replacing the dynamics and reward p^i,r^i with the input p,r. collects K trajectories by playing policy ψ in for K times, computes the return G of each trajectory, and then returns a dataset containing these K returns. In other words, with abuse of notation, we say that the outputted dataset ={G_k}_k∈K is obtained by collecting K samples G_k from distribution η^p,r,ψ. ruled §.§ Analysis of * Observe that the classification carried out by complies with the statement in the theorem as long as we can demonstrate that: ℙ_{^i}_i,{π^E,i}_i( sup_U∈|max_i∈N_p^i,r^i,π^E,i(U)- max_i∈N^i(U) |≤ϵ)≥ 1-δ, where ℙ_{^i}_i,{π^E,i}_i represents the joint probability distribution induced by the exploration phase of and the execution of each π^E,i in the corresponding ^i. 
We can rewrite this expression as: sup_U∈|max_i∈N_p^i,r^i,π^E,i(U)- max_i∈N^i(U) | (1)≤sup_U∈max_i∈N| _p^i,r^i,π^E,i(U)- ^i(U) | =max_i∈Nsup_U∈| _p^i,r^i,π^E,i(U)- ^i(U) |, where at (1) we have upper bounded the difference of the maxima of two real-valued functions with the maximum of their difference. This shows that we can obtain the result as long as we can demonstrate that, for all i∈N, it holds that: ℙ_p^i,r^i,π^E,i( sup_U∈| _p^i,r^i,π^E,i(U)- ^i(U) |≤ϵ)≥ 1-δ/N; the statement of the theorem would then follow from a union bound. Therefore, let us omit the i index for simplicity, and let us try to obtain the bound in Eq. (<ref>). We can write: sup_U∈| _p,r,(U)-(U)| sup_U∈|( J^*(U;p,r)-J^π^E(U;p,r))- ( J^*(U)-J^E(U))| (2)≤sup_U∈|J^π^E(U;p,r)-J^E(U)| +sup_U∈|J^*(U;p,r)-J^*(U)| (3)=sup_U∈| _G∼η^p,r,π^E[U(G)]- _G∼η^E[U(G)] ±_G∼Proj_(η^p,r,π^E)[U(G)]| +sup_U∈|J^*(U;p,r)-J^*(U)| (4)≤sup_U∈|_G∼η^p,r,π^E[U(G)]- _G∼Proj_(η^p,r,π^E)[U(G)]| +sup_U∈|_G∼Proj_(η^p,r,π^E)[U(G)]- _G∼η^E[U(G)]| +sup_U∈|J^*(U;p,r)-J^*(U)| (5)≤sup_f: f is L-Lipschitz|_G∼η^p,r,π^E[f(G)]- _G∼Proj_(η^p,r,π^E)[f(G)]| +sup_U∈| _G∼Proj_(η^p,r,π^E)[U(G)]- _G∼η^E[U(G)]| +sup_U∈|J^*(U;p,r)-J^*(U)| (6)=L· w_1(η^p,r,π^E,Proj_(η^p,r,π^E)) + sup_U∈|_G∼Proj_(η^p,r,π^E)[U(G)]- _G∼η^E[U(G)]| sup_U∈|J^*(U;p,r)-J^*(U)|, where at (2) we have applied triangle inequality, at (3) we use the definition of J^π^E(U;p,r), and that of J^E(U) (Line <ref> of ), and we have added and subtracted a term, where operator Proj_ is defined in Eq. (<ref>). We remark that distribution η^p,r,π^E may have a support that grows exponentially in H, while both η^E and Proj_(η^p,r,π^E) are supported on . Note that η^E and Proj_(η^p,r,π^E) are different distributions, since the former is the projection on of an estimate of η^p,r,π^E. At (4), we apply triangle inequality, at (5) we use the hypothesis that all utilities are L-Lipschitz ⊆_L, and notice that _L is a subset of all L-Lipschitz functions f:[0,H]→[0,H], and at (6) we apply the duality formula for the 1-Wasserstein distance w_1 (see Eq. (6.3) in Chapter 6 of <cit.>). Concerning the case ||=1, we apply, for all i∈N, Lemma <ref> with probability δ/(2N) and accuracy ϵ/3, and Lemma <ref> with probability δ/(2N) and accuracy ϵ/3, while we bound the 1-Wasserstein distance through Lemma <ref>, to obtain, through an application of the union bound, that: ℙ_{^i}_i,{π^E,i}_i( sup_U∈ |max_i∈N_p^i,r^i,π^E,i(U)- max_i∈N^i(U) |≤ L√(2Hϵ_0)+ϵ/3+HLϵ_0+ϵ/3 )≥ 1-δ, as long as, for all i∈N: τ^E,i≥(H^2logN/δ/ϵ^2), τ^i≥(SAH^4/ϵ^2logSAHN/δϵ_0). By setting ϵ_0= ϵ^2/72HL^2, we obtain that: L√(2Hϵ_0)+HLϵ_0= ϵ/6+ϵ^2/72L≤ϵ/3. By putting this bound into the bound on τ^i, we get the result. When is an arbitrary subset of _L, we apply, for all i∈N, Lemma <ref> with probability δ/(2N) and accuracy ϵ/3, and Lemma <ref> with probability δ/(2N) and accuracy ϵ/3, while we bound the 1-Wasserstein distance through Lemma <ref>, to obtain, through an application of the union bound, that: ℙ_{^i}_i,{π^E,i}_i( sup_U∈ |max_i∈N_p^i,r^i,π^E,i(U)- max_i∈N^i(U) |≤ L√(2Hϵ_0)+ϵ/3+HLϵ_0+ϵ/3 )≥ 1-δ, as long as, for all i∈N: τ^E,i≥( H^3/ϵ^2ϵ_0logHN/δϵ_0), τ^i≥(SAH^5/ϵ^2(S+logSAHN/δ)). Again, by setting ϵ_0= ϵ^2/72HL^2, we obtain that: L√(2Hϵ_0)+HLϵ_0= ϵ/6+ϵ^2/72L≤ϵ/3. By putting this bound into the bounds on τ^E,i and τ^i, we get the result. §.§.§ Lemmas on the Expert's Return Distribution Let the projection operator Proj_ be defined as in Eq. (<ref>), over set with discretization ϵ_0. 
Then, for all i∈N, it holds that: w_1(η^p^i,r^i,π^E,i,Proj_(η^p^i,r^i,π^E,i)) ≤√(2Hϵ_0). For the sake of simplicity, we omit index i∈N, but the following derivation can be applied to all the N demonstrations. By applying Lemma 5.2 of <cit.>, replacing term 1/(1-γ) with horizon H, we get: w_1(η^p,r,π^E,Proj_(η^p,r,π^E)) ≤√(H)ℓ_2(η^p,r,π^E,Proj_(η^p,r,π^E)). Similarly to the proof of Proposition 3 of <cit.>, we can write: ℓ_2^2(η^p,r,π^E,Proj_(η^p,r,π^E)) (1)∫_ (F_η^p,r,π^E(y)-F_Proj_(η^p,r,π^E)(y))^2dy (2)=∫_0^H (F_η^p,r,π^E(y)- F_Proj_(η^p,r,π^E)(y))^2dy (3)=∑_j∈d-1∫_y_j^y_j+1 (F_η^p,r,π^E(y)- F_Proj_(η^p,r,π^E)(y))^2dy +∫_y_d^H (F_η^p,r,π^E(y)- F_Proj_(η^p,r,π^E)(y))^2dy (4)≤∑_j∈d-1∫_y_j^y_j+1 (F_η^p,r,π^E(y)- F_Proj_(η^p,r,π^E)(y))^2dy+ϵ_0 (5)≤∑_j∈d-1∫_y_j^y_j+1 (F_η^p,r,π^E(y_j+1)- F_η^p,r,π^E(y_j))^2dy+ϵ_0 = ∑_j∈d-1(y_j+1-y_j) (F_η^p,r,π^E(y_j+1)- F_η^p,r,π^E(y_j))^2+ϵ_0 (6)=ϵ_0∑_j∈d-1 (F_η^p,r,π^E(y_j+1)- F_η^p,r,π^E(y_j))^2+ϵ_0 (7)≤ϵ_0(∑_j∈d-1 (F_η^p,r,π^E(y_j+1)- F_η^p,r,π^E(y_j))^2+ϵ_0 (8)=ϵ_0 (F_η^p,r,π^E(y_d)- F_η^p,r,π^E(y_1))^2+ϵ_0 ≤2ϵ_0, where at (1) we have applied the definition of ℓ_2 distance (Eq. (<ref>)), at (2) we recognize that the two distributions η^p,r,π^E,Proj_(η^p,r,π^E) are defined on [0,H], at (3) we use the additivity property of the integral, using notation {0,ϵ_0,2ϵ_0,…,H/ϵ_0ϵ_0}, d ||=H/ϵ_0+1, y_1 0, y_2ϵ_0, y_3 2ϵ_0,…, y_dH/ϵ_0ϵ_0, (notation introduced in Section <ref>). At (4) we upper bound ∫_y_d^H (F_η^p,r,π^E(y)- F_Proj_(η^p,r,π^E)(y))^2dy≤∫_y_d^Hdy =H-y_d=H-H/ϵ_0ϵ_0=ϵ_0(H/ϵ_0-H/ϵ_0)≤ϵ_0 since the difference of cumulative distribution functions is bounded by 1. At (5), thanks to the definition of the projection operator Proj_ (Eq. (<ref>)), we notice that, for y∈[y_j,y_j+1], it holds that F_Proj_(η^p,r,π^E)(y)∈[F_η^p,r,π^E(y_j), F_η^p,r,π^E(y_j+1)], thus we can upper bound the integrand through the maximum, constant, difference of cumulative distribution functions. At (6) we use the definition of set , i.e., an ϵ_0-covering of the [0,H] interval, at (7) we use the Cauchy-Schwarz's inequality ∑_j (x_j)^2≤ (∑_j x_j)^2 for x_j≥ 0, and noticed that the summands are always non-negative, at (8) we apply a telescoping argument. The result follows by taking the square root of both sides. Let i∈N, and let f∈[0,H]^d be an arbitrary d-dimensional vector. Denote by G_1,G_2,…,G_τ^E,ii.i.d.∼η^p^i,r^i,π^E,i the random variables representing the returns of the τ^E,i trajectories inside dataset ^E,i. Let η^E,i be the random output of Algorithm <ref> that depends on the random variables G_1,G_2,…,G_τ^E,i. Then, it holds that: _G_1,G_2,…,G_τ^E,i∼η^p^i,r^i,π^E,i[ _y∼η^E,i[f(y)] ] = _y∼Proj_(η^p^i,r^i,π^E,i) [f(y)]. We omit index i for simplicity, but the proof can be carried out for all i∈N independently. To prove the statement, we use the notation described in Appendix <ref> for the Dirac delta, to provide an explicit representation of both the distribution Proj_(η^p,r,π^E) and the “random” distribution η^E. We consider distribution η^p,r,π^E supported on {z_1,z_2,…,z_M}⊆[0,H], while distributions Proj_(η^p,r,π^E),η^E are supported on set ={y_1,y_2,…,y_d}⊆[0,H]. W.r.t. 
distribution Proj_(η^p,r,π^E), we can write: Proj_(η^p,r,π^E) =Proj_( ∑_k∈Mη^p,r,π^E(z_k) δ_z_k) (1)=∑_k∈Mη^p,r,π^E(z_k)Proj_ (δ_z_k) (2)=∑_k∈Mη^p,r,π^E(z_k)( δ_y_1z_k≤ y_1 +δ_y_dz_k> y_d + ∑_j∈d-1( y_j+1-z_k/y_j+1-y_jδ_y_j+ z_k-y_j/y_j+1-y_jδ_y_j+1)z_k∈(y_j,y_j+1]) = δ_y_1∑_k∈Mη^p,r,π^E(z_k)(z_k≤ y_1 +y_2-z_k/y_2-y_1z_k∈(y_1,y_2]) + ∑_j∈{2,…,d-1}δ_y_j(∑_k∈Mη^p,r,π^E(z_k)(y_j+1-z_k/y_j+1-y_jz_k∈(y_i,y_j+1] + z_k-y_j-1/y_i-y_j-1z_k∈(y_j-1,y_i]) ) +δ_y_d∑_k∈Mη^p,r,π^E(z_k)( z_k> y_d+z_k-y_d-1/y_d-y_d-1z_k∈(y_d-1,y_d]), where at (1) we have applied the extension in Eq. (<ref>) of the projection operator Proj_ to finite mixtures of Dirac distributions, and at (2) we have applied its definition (Eq. (<ref>)). Concerning distribution η^E, based on Algorithm <ref>, we can write: η^E =δ_y_1/τ^E( ∑_t∈τ^E( G_t≤ y_1+ y_2-G_t/y_2-y_1G_t∈(y_1,y_2]) ) + ∑_j∈{2,…,d-1}δ_y_j/τ^E(∑_t∈τ^E(y_j+1-G_t/y_j+1-y_jG_t∈(y_i,y_j+1] + G_t-y_j-1/y_i-y_j-1G_t∈(y_j-1,y_i]) ) +δ_y_d/τ^E(∑_t∈τ^E( G_t> y_d+G_t-y_d-1/y_d-y_d-1G_t∈(y_d-1,y_d])). Now, if we take the expectation of the random vector η^E w.r.t. η^p,r,π^E, we get: _G_1,G_2,…,G_τ^E∼η^p,r,π^E[ η^E] =_G_1,G_2,…,G_τ^E∼η^p,r,π^E[ δ_y_1/τ^E( ∑_t∈τ^E( G_t≤ y_1+ y_2-G_t/y_2-y_1G_t∈(y_1,y_2]) ) + ∑_j∈{2,…,d-1}δ_y_j/τ^E(∑_t∈τ^E(y_j+1-G_t/y_j+1-y_jG_t∈(y_i,y_j+1] + G_t-y_j-1/y_i-y_j-1G_t∈(y_j-1,y_i]) ) +δ_y_d/τ^E(∑_t∈τ^E( G_t> y_d+G_t-y_d-1/y_d-y_d-1G_t∈(y_d-1,y_d])) ] (3)=_G∼η^p,r,π^E[ δ_y_1( G≤ y_1+ y_2-G/y_2-y_1G∈(y_1,y_2]) + ∑_j∈{2,…,d-1}δ_y_j(y_j+1-G/y_j+1-y_jG∈(y_i,y_j+1] + G-y_j-1/y_i-y_j-1G∈(y_j-1,y_i]) +δ_y_d( G> y_d+G-y_d-1/y_d-y_d-1G∈(y_d-1,y_d]) ] (4)=δ_y_1∑_k∈Mη^p,r,π^E(z_k)(z_k≤ y_1 +y_2-z_k/y_2-y_1z_k∈(y_1,y_2]) + ∑_j∈{2,…,d-1}δ_y_j(∑_k∈Mη^p,r,π^E(z_k)(y_j+1-z_k/y_j+1-y_jz_k∈(y_i,y_j+1] + z_k-y_j-1/y_i-y_j-1z_k∈(y_j-1,y_i]) ) +δ_y_d∑_k∈Mη^p,r,π^E(z_k)( z_k> y_d+z_k-y_d-1/y_d-y_d-1z_k∈(y_d-1,y_d]) (5)=Proj_(η^p,r,π^E), where at (3) we use the fact that G_1,G_2,…,G_τ^E are independent and identically distributed, at (4) we apply the linearity of the expectation, we notice that δ_y_j does not depend on G for all j∈d, and we notice that, for any y∈, it holds that _G∼η^p,r,π^E[G≤ y]=η^p,r,π^E(G≤ y)=∑_k∈Mη^p,r,π^E(z_k) z_k≤ y, where we have abused notation by writing η^p,r,π^E(G≤ y) to mean the probability, under distribution η^p,r,π^E, that event {G≤ y} happens. Moreover, similarly, we notice that, for any y,y'∈, it holds that _G∼η^p,r,π^E[G·G∈[y,y']]= ∑_k∈M z_k η^p,r,π^E(z_k) z_k∈[y,y']. At (5) we simply recognize Proj_(η^p,r,π^E) using the previous expression. This concludes the proof because the equality of the Dirac delta representations means that the expectations of any function w.r.t. these two distributions coincide. Let i∈N and let ϵ,δ∈(0,1). If ||=1, then, with probability at least 1-δ, we have: sup_U∈|_G∼Proj_(η^p^i,r^i,π^E,i) [U(G)]- _G∼η^E,i[U(G)]|≤ϵ, as long as: τ^E≥ c H^2log2/δ/ϵ^2, where c is some positive constant. Let U be the only function inside . Let us omit index i for simplicity. Then, we can write: |_G∼η^E[U(G)]- _G∼Proj_(η^p,r,π^E) [U(G)]| (1)=|_G∼η^E[U(G)]- _η^p,r,π^E[_G∼η^E[U(G)]]| (2)≤ c H√(log2/δ/τ^E), where at (1) we have applied Lemma <ref>, and at (2) we have applied the Hoeffding's inequality noticing that function U is bounded in [0,H], and denoting with c some positive constant. By imposing: c H√(log2/δ/τ^E)≤ϵ, and solving w.r.t. τ^E, we get the result. Let i∈N and let ϵ,δ∈(0,1). Then, with probability at least 1-δ, we have: sup_U∈|_G∼Proj_(η^p^i,r^i,π^E,i) [U(G)]- _G∼η^E,i[U(G)]|≤ϵ, as long as: τ^E≥( H^3/ϵ^2ϵ_0logH/δϵ_0). 
Again, let us omit index i for simplicity. First, for all possible functions U∈, we denote by U∈_L the function in _L that takes on the values that the function U assigns to the points of set . This permits us to write: sup_U∈|_G∼η^E[U(G)]- _G∼Proj_(η^p,r,π^E) [U(G)]| = sup_U∈_L|_G∼η^E[U(G)]- _G∼Proj_(η^p,r,π^E) [U(G)]| (1)≤sup_U∈[0,H]^d|_G∼η^E[U(G)]- _G∼Proj_(η^p,r,π^E) [U(G)]| (2)=sup_U∈[0,H]^d|_G∼η^E[U(G)]- _η^p,r,π^E[_G∼η^E[U(G)]]|, where at (1) we upper bound by considering all the possible vectors U∈[0,H]^d, and at (2) we apply Lemma <ref>. Now, similarly to the proof of Lemma 7.2 in <cit.>, we construct an ϵ'-covering of set [0,H]^d, call it _ϵ', with |_ϵ'|≤ (1+2H√(d)/ϵ')^d such that, for all f∈[0,H]^d, there exists f'∈_ϵ' for which f-f'_2≤ϵ'. By applying a union bound over all f'∈_ϵ' and Lemma <ref>, we have that, with probability at least 1-δ, for all f'∈_ϵ', it holds that: |_G∼η^E[f'(G)]- _η^p,r,π^E[ _G∼η^E[f'(G)]]| ≤ cH√(dlog2(1+2H√(d)/ϵ')/δ/τ^E). Next, for any f∈[0,H]^d, denote its closest points (in 2-norm) from _ϵ' as f'. Then, we have: |_G∼η^E[f(G)]- _η^p,r,π^E[ _G∼η^E[f(G)]] | =|_G∼η^E[f(G)]- _η^p,r,π^E[ _G∼η^E[f(G)]] ±(_G∼η^E[f'(G)]- _η^p,r,π^E[ _G∼η^E[f'(G)]])| (3)≤|_G∼η^E[f'(G)]- _η^p,r,π^E[ _G∼η^E[f'(G)]]| +|_G∼η^E[f(G)-f'(G)]| +|_η^p,r,π^E[ _G∼η^E[f(G)-f'(G)]]| (4)≤ cH√(dlog2(1+2H√(d)/ϵ')/δ/τ^E)+2ϵ' (5)≤c'H√(dlogHdτ^E/δ/τ^E) where at (3) we apply triangle inequality, at (4) we apply the result in Eq. (<ref>), and the fact that, by definition of ϵ'-covering, f-f'_2≤ϵ' entails that |f(y)-f(y')|≤ϵ' for all y∈; at (5) we set ϵ'=1/τ^E, and we simplify. The result follows by upper bounding d≤ H/ϵ_0+1, and then by setting: c”H√(HlogHτ^E/δϵ_0/ϵ_0τ^E)≤ϵ, and solving w.r.t. τ^E, and noticing that for all τ^E greater than some constant, we can get rid of the logarithmic terms in τ^E. §.§.§ Lemmas on the Optimal Performance for Single Utility In this section, we will omit index i∈N since the following derivations can be carried out for each i. We denote the arbitrary MDP in {^i}_i as =,,H,s_0,p,r, and its analogous with discretized reward r, defined at all (s,a,h)∈ as r_h(s,a)Π_[r_h(s,a)], as ,,H,s_0,p,r. We denote the analogous MDPs with empirical transition model p as =,,H,s_0,p,r and ,,H,s_0,p,r. Given any utility U∈_L, we denote the corresponding RS-MDPs, respectively, as _U,_U,_U,_U. Concerning the discretized RS-MDPs _U and _U, we denote the corresponding enlarged state space MDPs, respectively, as [_U]={×_h}_h,,H,(s_0,0),, and [_U]={×_h}_h ,,H,(s_0,0),,, where we decided to define such enlarged state space MDPs using the state space {×_h}_h considered by Algorithm <ref> () instead of, respectively, {×^p,r_h}_h and {×^p,r_h}_h. Thus, the transition models and , from any h∈H and (s,y,a)∈, assign to the next state (s',y')∈ the probability: _h(s',y'|s,y,a) p_h(s'|s,a)y'=y+r_h(s,a) and _h(s',y'|s,y,a)p_h(s'|s,a)y'=y+r_h(s,a). Moreover, the reward function , in any h∈H and (s,y,a)∈, is _h(s,y,a)=0 if h<H, and _h(s,y,a)=U(y+r_h(s,a)) if h=H. We will make extensive use of notation for V- and Q- functions introduced in Appendix <ref>. We are now ready to proceed with the analysis. In general, the analysis shares similarities to that of Theorem 3 of <cit.>, but we use results also from <cit.> to obtain tighter bounds. Let ϵ,δ∈(0,1). For any fixed L-Lipschitz utility function U∈_L, it suffices to execute with: τ≤(SAH^4/ϵ^2logSAH/δϵ_0), to obtain |J^*(U;p,r)-J^*(U)|≤ HLϵ_0+ϵ w.p. 1-δ. 
For an arbitrary utility U∈_L, we can write: |J^*(U;p,r)-J^*(U)| (1)=|J^*(U;p,r)-J^*(U)± J^*(,)| (2)≤|J^*(U;p,r)-J^*(,)| +|J^*(,)-J^*(U)| (3)= |J^*(U;p,r)-J^*(,)|+|J^*(,)-J^*(,)| (4)≤HLϵ_0+|J^*(,)-J^*(,)| =HLϵ_0+|V_1^*(s_0,0;,)-V_1^*(s_0,0;,)| ≤ HLϵ_0+max_ |Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,)| (5)≤HLϵ_0+ϵ', where at (1) we add and subtract the optimal expected utility in the enlarged MDP [_U] considered by Algorithm <ref>, but with the true transition model . At (2) we apply triangle inequality, at (3) we recognize that the estimate J^*(U) used in and outputted by (Algorithm <ref>) is the optimal expected utility for the discretized problem with estimated dynamics , at (4) we use Proposition 3 of <cit.>, since U is L-Lipschitz, and at (5) we apply Lemma <ref> to bound the distance between Q-functions. By setting: c√(H^3log4SAHd/δ/n)_≤ϵ/3 +cH^2(log16SAHd/δ/n)^3/4_≤ϵ/3 +cH^3log16SAHd/δ/n_≤ϵ/3≤ϵ, and solving w.r.t. ϵ: n≥ c'H^3log4SAHd/δ/ϵ^2 n≥ c”H^8/3log16SAHd/δ/ϵ^4/3 n≥ c”'H^3log16SAHd/δ/ϵ. Taking the largest bound, we get: n≥ c H^3log16SAHd/δ/ϵ^2, for some positive constant c. Since d≤ H/ϵ_0+1, we can write: τ≥ c' SAH^4logc”SAH/δϵ_0/ϵ^2, for some positive constants c',c”, where we used that τ=SAHn. The proof of the following lemma is organized in many lemmas, and is based on the proof of Theorem 1 of <cit.>. For any δ∈(0,1), we have: max_ |Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,)|≤ϵ', w.p. at least 1-δ, where ϵ' is defined as: ϵ' c√(H^3log4SAHd/δ/n) +cH^2(log16SAHd/δ/n)^3/4 +cH^3log16SAHd/δ/n, for some positive constant c. We upper bound one side, and then the other. For all the , it holds that: Q^*_h(s,a,y;,)-Q^*_h(s,y,a;,) (1)≤_,,ψ^*[ ∑_h'=h^H ∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,) | s_h=s,y_h=y,a_h=a ] (2)≤_,,ψ^*[ ∑_h'=h^H c√(c_1 _s'∼p_h'(·|s_h',a_h') [V^ψ^*_h'+1(s',y_h'+1;,)]/n) +b_2 | s_h=s,y_h=y,a_h=a ] =c√(c_1/n)_,,ψ^*[ ∑_h'=h^H √(_s'∼p_h'(·|s_h',a_h')[V^ψ^*_h'+1(s',y_h'+1;,)]) | s_h=s,y_h=y,a_h=a ] +Hb_2 (3)≤ c√(c_1/n)√(H^3)+Hb_2 =c√(H^3log4SAHY/δ/n) +c'H^2(log16SAHY/δ/n)^3/4 +c”H^3log16SAHY/δ/n ϵ', where at (1) we have applied Lemma <ref>, at (2) we have applied Lemma <ref> with δ/2 of probability, at (3) we have applied Lemma <ref>. The proof for the other side of inequality is completely analogous, and it holds w.p. 1-δ/2. The result follows through the application of a union bound. For any tuple , it holds that: Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,)≤_,,ψ^*[ ∑_h'=h^H ∑_s'∈ ( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,) | s_h=s,y_h=y,a_h=a ], Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,)≥_,,ψ^*[ ∑_h'=h^H ∑_s'∈ ( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,) | s_h=s,y_h=y,a_h=a ], where ψ^*,ψ^* are the optimal policies respectively in problems , and ,. 
For any , we can write: Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,) = Q^ψ^*_h(s,y,a;,)- Q^ψ^*_h(s,y,a;,) (1)≤ Q^ψ^*_h(s,y,a;,)- Q^ψ^*_h(s,y,a;,) (2)=_h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -( _h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,)) (3)=∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) ±∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) =∑_(s',y')∈×_h+1( _h(s',y'|s,y,a)-_h(s',y'|s,y,a))V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) (4)=∑_(s',y')∈×_h+1( p_h(s'|s,a)y+r_h(s,a)=y' - p_h(s'|s,a) y+r_h(s,a)=y')V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) (5)=∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) ∑_y'∈_h+1y+r_h(s,a)=y'V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s';,) ) (6)=∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a))V_h+1^ψ^*(s',y+r_h(s,a);,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) = ∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a))V_h+1^ψ^*(s',y+r_h(s,a);,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a) ·(Q_h+1^ψ^*(s',y',ψ^*_h+1(s',y');,)- Q_h+1^ψ^*(s',y',ψ^*_h+1(s',y');,) ), where at (1) we have used that ψ^* is the optimal policy in ,, and thus Q^ψ^*_h(s,a;,)≤ Q^ψ^*_h(s,a;,). At (2) we apply the Bellman equation, at (3) we add and subtract the expected under optimal value function under , at (4) we use the definition of transition model ,, at (5) we split the summations, at (6) we recognize that the indicator function takes on value 1 only when y+r_h(s,a)=y'. Finally, we unfold the recursion to obtain the result. Concerning the second equation, for any , we can write: Q^*_h(s,y,a;,)-Q^*_h(s,y,a;,) = Q^ψ^*_h(s,y,a;,)- Q^ψ^*_h(s,y,a;,) (7)=_h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -( _h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,)) (8)=∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) ±∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) =∑_(s',y')∈×_h+1( _h(s',y'|s,y,a)-_h(s',y'|s,y,a))V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) =∑_(s',y')∈×_h+1( p_h(s'|s,a)y+r_h(s,a)=y' - p_h(s'|s,a)y+r_h(s,a)=y')V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) =∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) ∑_y'∈_h+1y+r_h(s,a)=y'V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) =∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+r_h(s,a);,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) ) (9)≥∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+r_h(s,a);,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a) ·(Q_h+1^ψ^* (s',y',ψ^*_h+1(s',y');,)- Q_h+1^ψ^*(s',y',ψ^*_h+1(s',y');,) ), where at (7) we have applied the Bellman equation, at (8) we have added and subtracted a term, and at (9) we have used that V_h+1^ψ^*(s',y';,)=Q_h+1^ψ^*(s',y',ψ^*_h+1(s',y');,)≥ Q_h+1^ψ^*(s',y',ψ^*_h+1(s',y');,), since ψ^*_h+1(s',y') is the optimal action under ,, and so, it cannot be worse than action ψ^*_h+1(s',y'). By unfolding the recursion, we obtain the result. For any δ∈(0,1), w.p. at least 1-δ, it holds that: max_|V_h^*(s,y;,)-V^ψ^*_h(s,y;,)| ≤ cH^2√(log2SAHd/δ/n), max_|V_h^*(s,y;,)-V^*_h(s,y;,)| ≤ cH^2√(log2SAHd/δ/n). where c is some positive constant. 
First, we observe that, for any , by following passages similar to those in the proof of Lemma <ref>: |V_h^*(s,y;,)-V^ψ^*_h(s,y;,)| = |Q_h^ψ^*(s,y,ψ^*_h(s,y);,)- Q^ψ^*_h(s,y,ψ^*_h(s,y);,)| =|_h(s,y,ψ^*_h(s,y))+∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))V_h+1^ψ^*(s',y';,) -( _h(s,y,ψ^*_h(s,y))+∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))V_h+1^ψ^*(s',y';,))| =|∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))V_h+1^ψ^*(s',y';,) -∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))V_h+1^ψ^*(s',y';,) ±∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))V_h+1^ψ^*(s',y';,)| =|∑_(s',y')∈×_h+1( _h(s',y'|s,y,ψ^*_h(s,y))-_h(s',y'|s,y,ψ^*_h(s,y)))V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) )| =|∑_s'∈( p_h(s'|s,ψ^*_h(s,y))-p_h(s'|s,ψ^*_h(s,y))) V_h+1^ψ^*(s',y+r_h(s,ψ^*_h(s,y));,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,ψ^*_h(s,y))(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) )| =… =|_,,ψ^*[ ∑_h'=h^H ∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,) | s_h=s,y_h=y ]| (1)≤_,,ψ^*[ ∑_h'=h^H |∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,)| | s_h=s,y_h=y ], where at (1) we have brought the absolute value inside the expectation. Similarly, for the other term, for any , we can write: |V_h^*(s,y;,)-V^*_h(s,y;,)| =|V_h^ψ^*(s,y;,) -V^ψ^*_h(s,y;,)| (2)= |max_a∈Q_h^ψ^*(s,y,a;,)- max_a∈Q^ψ^*_h(s,y,a;,)| (3)≤max_a∈|Q_h^ψ^*(s,y,a;,)- Q^ψ^*_h(s,y,a;,)| =max_a∈|_h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -( _h(s,y,a)+∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,))| =max_a∈|∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) -∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,) ±∑_(s',y')∈×_h+1_h(s',y'|s,y,a)V_h+1^ψ^*(s',y';,)| =max_a∈|∑_(s',y')∈×_h+1( _h(s',y'|s,y,a)-_h(s',y'|s,y,a))V_h+1^ψ^*(s',y';,) +∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) )| (4)≤|∑_(s',y')∈×_h+1( _h(s',y'|s,y,a)-_h(s',y'|s,y, a))V_h+1^ψ^*(s',y';,)| +|∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) )| =|∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+Π_[r_h(s,a)];,)| +|∑_(s',y')∈×_h+1_h(s',y'|s,y,a)(V_h+1^ψ^*(s',y';,)- V_h+1^ψ^*(s',y';,) )| ≤… (5)≤_,,ψ[ ∑_h'=h^H |∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,)| | s_h=s,y_h=y ], where at (2) we have applied the Bellman optimality equation, at (3) we have upper bounded the difference of maxima with the maximum of the difference, at (4) we denote the maximal action by a, and we apply triangle inequality; at (5) we have unfolded the recursion and called ψ the resulting policy. Now, for some ϵ∈(0,1), let us denote by the event defined as: { ∀: |∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+r_h(s,a);,)|≤ϵ} We can write: (^∁) =( ∃: |∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+r_h(s,a);,)|> ϵ) (6)≤∑_ ( |∑_s'∈( p_h(s'|s,a)-p_h(s'|s,a)) V_h+1^ψ^*(s',y+r_h(s,a);,)|> ϵ) (7)≤∑_ 2e^-2 n ϵ^2/H^2 =2SAHd e^-2 n ϵ^2/H^2, where at (6) we have applied a union bound over all tuples , and at (7) we have applied Hoeffding's inequality, by recalling that we collect n samples (see Algorithm <ref>) for any (s,a,h)∈ triple, and that vector V_h+1^ψ^*(·,y+r_h(s,a);,) bounded by [0,H] is independent of the randomness in p_h(·|s,a). It should be remarked that our collection of samples depends only on , and not on _h; such term enters the expression only through the union bound, because we have to apply Hoeffding's inequality for all the value functions considered, which are as many as |_h| . Note that we use d=|_H+1| since it is the largest |_h| among h∈H+1. 
This probability is at most δ if: 2SAHd e^-2 n ϵ^2/H^2≤δϵ≥ H√(log2SAHd/δ/2 n). By plugging into the previous expressions, we obtain that, w.p. 1-δ: |V_h^*(s,y;,)-V^ψ^*_h(s,y;,)| ≤_,,ψ^*[ ∑_h'=h^H |∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,)| | s_h=s,y_h=y ] ≤_,,ψ^*[ ∑_h'=h^H H√(log2SAHd/δ/2 n) | s_h=s,y_h=y ] = H^2√(log2SAHd/δ/2 n), and also: |V_h^*(s,y;,)-V^*_h(s,y;,)| ≤_,,ψ[ ∑_h'=h^H |∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,)| | s_h=s,y_h=y ] ≤_,,ψ[ ∑_h'=h^H H√(log2SAHd/δ/2 n) | s_h=s,y_h=y ] = H^2√(log2SAHd/δ/2 n). This concludes the proof. For any δ∈(0,1), w.p. at least 1-δ, it holds that, for all : √(_s'∼ p_h(·|s,a)[V^*_h+1(s',y+r_h(s,a);,)])≤ √(_s'∼p_h(·|s,a) [V^ψ^*_h+1(s',y+r_h(s,a);,)])+b_1, √(_s'∼ p_h(·|s,a)[V^*_h+1(s',y+r_h(s,a);,)])≤ √(_s'∼p_h(·|s,a) [V^*_h+1(s',y+r_h(s,a);,)])+b_1, where b_1 is defined as: b_1 cH(log4SAHY/δ/n)^1/4 + c'H^2√(log4SAHY/δ/n), for some positive constants c,c'. In the following, we will use y as a label for y+r_h(s,a). We begin with the first expression. We can write, for any : _s'∼ p_h(·|s,a)[V^*_h+1(s',y;,)] =_s'∼ p_h(·|s,a)[V^*_h+1(s',y;,)] ±_s'∼p_h(·|s,a)[V^*_h+1(s',y;,)] =(_s'∼ p_h(·|s,a)[V^*_h+1(s',y;,)]- _s'∼p_h(·|s,a)[V^*_h+1(s',y;,)]) + _s'∼p_h(·|s,a)[V^*_h+1(s',y;,)] (1)=∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈p_h(s'|s,a)V^*_h+1(s',y;,))^2 - (∑_s'∈p_h(s'|s,a) V^*_h+1(s',y;,))^2] +_s'∼p_h(·|s,a)[V^*_h+1(s',y;,)± V^ψ^*_h+1(s',y;,)] (2)=∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈p_h(s'|s,a)V^*_h+1(s',y;,))^2 - (∑_s'∈p_h(s'|s,a) V^*_h+1(s',y;,))^2] +_s'∼p_h(·|s,a) [V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)] + _s'∼p_h(·|s,a)[V^ψ^*_h+1 (s',y;,)] +2Cov_s'∼p_h(·|s,a) [V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,), V^ψ^*_h+1(s',y;,)] (3)≤∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈p_h(s'|s,a)V^*_h+1(s',y;,))^2 - (∑_s'∈p_h(s'|s,a) V^*_h+1(s',y;,))^2] +_s'∼p_h(·|s,a)[V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)] + _s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)] +2(_s'∼p_h(·|s,a)[V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)] ·_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])^1/2 =∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈p_h(s'|s,a)V^*_h+1(s',y;,))^2 - (∑_s'∈p_h(s'|s,a) V^*_h+1(s',y;,))^2] +[ √(_s'∼p_h(·|s,a)[V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)]) +√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)] )]^2, where at (1) we have used the common formula for the variance [X]=[X^2]-[X]^2, at (2) we have decomposed the variance of a sum as [X+Y] = [X] + [Y] + 2Cov[X,Y], at (3) we have applied Cauchy-Schwarz's inequality to bound the covariance with the product of the variances |Cov[X,Y]|≤√([X][Y]). Next, observe that: _s'∼p_h(·|s,a)[V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)] (4)=_s'∼p_h(·|s,a)[(V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,))^2] - _s'∼p_h(·|s,a)[V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,)]^2 (5)≤_s'∼p_h(·|s,a)[(V^*_h+1(s',y;,)- V^ψ^*_h+1(s',y;,))^2] (6)≤(V^*_h+1(·,y;,)- V^ψ^*_h+1(·,y;,))^2_∞ = V^*_h+1(·,y;,)- V^ψ^*_h+1(·,y;,)_∞^2, where at (4) we have used [X]=[X^2]-[X]^2, at (5) we recognize that the second term is a square, thus always positive, and we remove it, and at (6) we have upper bounded the expected value, an average, through the infinity norm. 
Thanks to this expression, we can continue to upper bound the previous term as: _s'∼ p_h(·|s,a)[V^*_h+1(s',y;,)] ≤∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈p_h(s'|s,a)V^*_h+1(s',y;,))^2 - (∑_s'∈p_h(s'|s,a) V^*_h+1(s',y;,))^2] +[V^*_h+1(·,y;,)- V^ψ^*_h+1(·,y;,)_∞ +√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 (7)=∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*_h+1(s',y;,)) ·(∑_s'∈(p_h(s'|s,a)+p_h(s'|s,a))V^*_h+1(s',y;,))] +[V^*_h+1(·,y;,)- V^ψ^*_h+1(·,y;,)_∞ +√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 (8)≤∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) - [(∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*_h+1(s',y;,)) ·(∑_s'∈(p_h(s'|s,a)+p_h(s'|s,a))V^*_h+1(s',y;,))] +[cH^2√(log4SAHd/δ/ n)+√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 (9)≤∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) +2H |∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a))V^*_h+1(s',y;,)| +[cH^2√(log4SAHd/δ/ n)+√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 (10)≤∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*^2_h+1(s',y;,) +2cH^2√(log4SAHd/δ/ n) +[cH^2√(log4SAHd/δ/ n)+√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 (11)≤cH^2√(log4SAHd/δ/ n) +2cH^2√(log4SAHd/δ/ n) +[cH^2√(log4SAHd/δ/ n)+√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2 =3cH^2√(log4SAHd/δ/ n) +[cH^2√(log4SAHd/δ/ n)+√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])]^2, where at (7) we have applied the common formula x^2-y^2=(x-y)(x+y), at (8) we have applied Lemma <ref> using probability δ'=δ/2, and noticing that, for how the discretized MDP is constructed, we have that y∈, at (9) we have upper bounded the second term with the absolute value and recognized that the value function does not exceed H and the sum of probabilities is no greater than 2; at (10) we recognize that, in the proof of Lemma <ref>, we had already bounded that term, thus, under the event which holds w.p. 1-δ/2, we have that bound; at (11) we have applied Hoeffding's inequality to all tuples with probability δ/(2SAHd), and noticed that the square of the value function does not exceed H^2. Observe that the previous formula holds for all w.p. 1-δ (by summing the two δ/2 through a union bound). By taking the square root of both sides, we obtain: √(_s'∼ p_h(·|s,a)[V^*_h+1(s',y;,)]) ≤(3cH^2√(log4SAHd/δ/ n) +[cH^2√(log4SAHd/δ/ n) +√(_s'∼p_h(·|s,a) [V^ψ^*_h+1(s',y;,)])]^2)^1/2 (12)≤c'H√(log4SAHY/δ/ n) +cH^2√(log4SAHY/δ/ n)_ b_1 +√(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)]) = √(_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)])+ b_1, where at (12) we have used the fact that √(a+b)≤√(a)+√(b). To prove the second formula, the passages are basically the same, the only difference is that, at passage (1), we sum and subtract V^ψ^*_h+1(s',y;,) instead of V^ψ^*_h+1(s',y;,), and that at passage (8) we apply the other expression in Lemma <ref>. This concludes the proof. For any δ∈(0,1), define: c_1 log2SAHd/δ, b_2 cH(log8SAHd/δ/n)^3/4 +c'H^2log8SAHd/δ/n, for some positive constants c,c'. Then, w.p. at least 1-δ, we have, for all : ∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*_h+1(s',y+r_h(s,a);,) ≤ c”√(c_1 _s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y+r_h(s,a);,)]/n) +b_2, ∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*_h+1(s',y+r_h(s,a);,) ≥ -c”'√(c_1 _s'∼p_h(·|s,a)[V^*_h+1(s',y+r_h(s,a);,)]/n) +b_2, for some positive constants c”,c”'. Again, we will write y instead of y+r_h(s,a) for simplicity. 
For all , we can write: ∑_s'∈(p_h(s'|s,a)-p_h(s'|s,a)) V^*_h+1(s',y;,) (1)≤√(2 _s'∼ p_h(·|s,a)[V^*_h+1 (s',y;,)]log2SAHd/δ/n) +2Hlog2SAHd/δ/3n (2)≤√(2log2SAHd/δ/n)(√(_s'∼p_h(·|s,a) [V^ψ^*_h+1(s',y;,R)])+b_1) +2Hlog2SAHd/δ/3n (3)=c√(c_1_s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)]/n) +c'√(c_1/n) H(log8SAHd/δ/n)^1/4 + c”√(c_1/n)H^2√(log8SAHd/δ/n) +c”'Hc_1/n ≤ c√(c_1 _s'∼p_h(·|s,a)[V^ψ^*_h+1(s',y;,)]/n) + c'H(log8SAHd/δ/n)^3/4 +c””H^2log8SAHd/δ/n, where at (1) we have applied the Bernstein's inequality using δ/(2SAHd) as probability for all , and at (2) we have applied Lemma <ref> with δ/2 of probability, and a union bound to guarantee the event to hold w.p. 1-δ, at (3) we use the definition of c_1log2SAHd/δ, and denoted by c,c',c”,c”' some positive constants. For the other expression, an analogous derivation can be carried out. In particular, we use the other side of the Bernstein's inequality, and the other expression in Lemma <ref>. For any and deterministic policy ψ, let Σ^ψ_h(s,y,a) be defined as: Σ^ψ_h(s,y,a)_,,ψ[ |∑_h'=h^H _h'(s_h',y_h',a_h') -Q^ψ_h(s,y,a;,)|^2 | s_h=s,y_h=y,a_h=a ]. Then, function Σ satisfies the Bellman equation, i.e., for any and deterministic policy ψ: Σ^ψ_h(s,y,a)= _s'∼ p_h(·|s,a)[V^ψ_h+1(s',y+r_h(s,a);,)] + _s'∼ p_h(·|s,a) [Σ^ψ_h+1(s',y+r_h(s,a),ψ_h+1(s',y+r_h(s,a)))]. For all and deterministic policy ψ, we can write (we denote a'ψ_h+1(s',y+r_h(s,a)) and y y+r_h(s,a) for notational simplicity, and we remark that y is not a random variable): Σ^ψ_h(s,y,a) _,,ψ[ |∑_h'=h^H _h'(s_h',y_h',a_h') - Q^ψ_h(s,y,a;,)|^2 | s_h=s,y_h=y,a_h=a ] (1)=_s'∼ p_h(·|s,a)[_,,ψ[ |∑_h'=h^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,) - (Q^ψ_h(s,y,a;,)-Q^ψ_h+1(s',y,a';,))|^2 | s_h=s,a_h=a,y_h=y,s_h+1=s']] (2)=_s'∼ p_h(·|s,a)[_,,ψ[ |∑_h'=h+1^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,) - (Q^ψ_h(s,y,a;,)-_h(s,y,a)- Q^ψ_h+1(s',y,a';,))|^2 | s_h+1=s',y_h+1=y]] (3)=_s'∼ p_h(·|s,a)[_,,ψ[ |∑_h'=h+1^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,)|^2 | s_h+1=s',y_h+1=y]] -2 _s'∼ p_h(·|s,a)[(Q^ψ_h(s,y,a;,)-_h(s,y,a)-Q^ψ_h+1(s',y,a';,)) ·_,,ψ[ ∑_h'=h+1^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,) | s_h+1=s',y_h+1=y]_=0] + _s'∼ p_h(·|s,a)[|Q^ψ_h(s,y,a;,)-_h(s,y,a)-Q^ψ_h+1(s',y,a';,)|^2] (4)=_s'∼ p_h(·|s,a)[ _,,ψ[ |∑_h'=h+1^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,)|^2 | s_h+1=s',y_h+1=y]_= Σ^ψ_h+1(s',y,a')] + _s'∼ p_h(·|s,a)[|Q^ψ_h(s,y,a;,)-_h(s,y,a)-Q^ψ_h+1(s',y,a';,)|^2]__s'∼ p_h(·|s,a)[Q^ψ_h+1(s',y,a';,)] =_s'∼ p_h(·|s,a)[V^ψ_h+1(s',y;,)] = _s'∼ p_h(·|s,a) [Σ^ψ_h+1(s',y,a')]+_s'∼ p_h(·|s,a)[V^ψ_h+1(s',y;,)], at (1) we add and subtract a term, at (2) we bring out the non-random reward received at h, at (3) we compute the square and use the linearity of expectation, at (4) we use the fact that _,,ψ[ ∑_h'=h+1^H _h'(s_h',y_h',a_h') -Q^ψ_h+1(s',y,a';,) | s_h+1=s' ]=Q^ψ_h+1(s',y,a';,)-Q^ψ_h+1(s',y,a';,)=0 because of linearity of expectation. Let ψ be any policy, and let be any transition model associated to an arbitrary inner dynamics p. Then, for all , it holds that: |_,,ψ[∑_h'=h^H √(_s'∼ p_h'(·|s_h',a_h')[V^ψ_h'+1(s',y_h'+1;,)]) | s_h=s,y_h=y,a_h=a]| ≤√(H^3). 
For all , we can write (note that this derivation is independent of ,p, so we might use even ,p in the proof): |_,,ψ[∑_h'=h^H √(_s'∼ p_h'(·|s_h',a_h')[V^ψ_h'+1(s',y_h'+1;,)]) | s_h=s,y_h=y,a_h=a]| (1)≤|_,,ψ[√(H∑_h'=h^H_s'∼ p_h'(·|s_h',a_h')[V^ψ_h'+1(s',y_h'+1;,)]) | s_h=s,y_h=y,a_h=a]| (2)≤√(H)√(_,,ψ[∑_h'=h^H _s'∼ p_h'(·|s_h',a_h')[V^ψ_h'+1(s',y_h'+1;,)] | s_h=s,y_h=y,a_h=a]) (3)=√(H)(_,,ψ[∑_h'=h^H Σ^ψ_h'(s_h',y_h',a_h')-_s'∼ p_h'(·|s_h',a_h')[Σ^ψ_h'+1(s',y_h'+1,ψ_h'+1(s',y_h'+1))] | s_h=s,y_h=y,a_h=a])^1/2 = √(H)√(_,,ψ[∑_h'=h^H Σ^ψ_h'(s_h',y_h',a_h')-Σ^ψ_h'+1(s_h'+1,y_h'+1,a_h'+1) | s_h=s,y_h=y,a_h=a]) (4)=√(H)√(_,,ψ[ Σ^ψ_h(s_h,y_h,a_h)-Σ^ψ_H+1(s_H+1,y_H+1,a_H+1)_=0 | s_h=s,y_h=y,a_h=a]) = √(H)√(Σ^ψ_h(s,y,a)) (5)≤√(H)√( H^2) = √(H^3), where at (1) we have applied the Cauchy-Schwarz's inequality, at (2) we have applied Jensen's inequality, at (3) we have applied Lemma <ref>, at (4) we have used telescoping, and at (5) we have bounded Σ^ψ_h(s,y,a)≤ H^2 for all . §.§.§ Lemmas on the Optimal Performance for Multiple Utilities To prove the following results, we will make use of the notation introduced in the previous section. Let ϵ,δ∈(0,1). It suffices to execute with: τ≤(SAH^5/ϵ^2(S+logSAH/δ)), to obtain sup_U∈_L|J^*(U;p,r)-J^*(U)|≤ HLϵ_0+ϵ w.p. 1-δ. Similarly to the proof of Lemma <ref>lemma: bound J star all utilities, we can write: sup_U∈_L |J^*(U;p,r)-J^*(U)| =sup_U∈_L|J^*(U;p,r)-J^*(U)± J^*(,)| ≤sup_U∈_L|J^*(U;p,r)-J^*(,)| +sup_U∈_L|J^*(,)-J^*(U)| =sup_U∈_L|J^*(U;p,r)-J^*(,)|+ sup_U∈_L|J^*(,)-J^*(,)| ≤HLϵ_0+sup_U∈_L|J^*(,)-J^*(,)| (1)≤ HLϵ_0+H^2 √(2/n(logSAH/δ +(S-1)log(e(1+n/(S-1))))) ≤ HLϵ_0+ϵ, where at (1) we have applied the formula in Lemma <ref>. By enforcing such quantity to be smaller than ϵ, we get: H^2 √(2/n(logSAH/δ +(S-1)log(e(1+n/(S-1)))))≤ H^2√(log(e(1+n/(S-1))))/√(n)√(2(logSAH/δ+(S-1)))≤ϵ n ≥ 2H^4/ϵ^2(logSAH/δ+(S-1)) log(e(1+n/(S-1))). By summing over all (s,a,h)∈, and by applying Lemma J.3 of <cit.>, we obtain that: τ=SAHn≥( SAH^5/ϵ^2(logSAH/δ+S) ). For any δ∈(0,1), for all utility functions U∈_L at the same time, we have: |J^*_h(,)-J^*_h(,)|≤ H^2 √(2/n(logSAH/δ +(S-1)log(e(1+n/(S-1))))), w.p. at least 1-δ. Let us denote by the event defined as: { ∀ n∈, ∀: nKL(p_h(·|s,a)p_h(·|s,a))≤logSAH/δ +(S-1)log(e(1+n/(S-1))) }. We can write: (^∁) =(∃ n∈, ∃: nKL(p_h(·|s,a)p_h(·|s,a))> logSAH/δ +(S-1)log(e(1+n/(S-1))) ) (1)=(∃ n∈, ∃ (s,a,h)∈: nKL(p_h(·|s,a)p_h(·|s,a))> logSAH/δ +(S-1)log(e(1+n/(S-1))) ) (2)≤∑_(s,a,h)∈( ∃ n∈, nKL(p_h(·|s,a)p_h(·|s,a))> logSAH/δ +(S-1)log(e(1+n/(S-1))) ) (3)≤∑_(s,a,h)∈δ/SAH ≤δ, where at (1) we realize that there is no dependence on variable y, thus we can drop it,[Therefore, differently from the event for a single utility, now there is no dependence on d in the bound. Intuitively, d appeared in the case of a single utility because we had to apply Hoeffding's inequality d times, because we had, potentially, d different value functions (as many as the states). Since now we provide the bound for all the possible value functions (1-norm bound), then the dependence on d disappears.] at (2) we have applied a union bound over all triples (s,a,h)∈, and at (3) we have applied Proposition 1 of <cit.>. 
Next, for all utilities U∈_L at the same time, for all the tuples , we can write: |V_h^*(s,y;,)-V^*_h(s,y;,)| (4)≤_,,ψ[ ∑_h'=h^H |∑_s'∈( p_h'(s'|s_h',a_h')-p_h'(s'|s_h',a_h')) V_h'+1^ψ^*(s',y_h'+1;,)| | s_h=s,y_h=y,a_h=a ] (5)≤H_,,ψ[ ∑_h'=h^H p_h'(·|s_h',a_h')- p_h'(·|s_h',a_h')_1 | s_h=s,y_h=y,a_h=a ] (6)≤H _,,ψ[ ∑_h'=h^H √(2KL(p_h' (·|s_h',a_h')p_h'(·|s_h',a_h'))) | s_h=s,y_h=y,a_h=a ] (7)≤H _,,ψ[ ∑_h'=h^H √(2/n(logSAH/δ +(S-1)log(e(1+n/(S-1))))) | s_h=s,y_h=y,a_h=a ] =H^2√(2/n(logSAH/δ +(S-1)log(e(1+n/(S-1))))), where at (4) we apply the formula derived in the proof of Lemma <ref>lemma: lemma 4 and triangle inequality, at (5) we have upper bounded with the 1-norm, defined as f_1∑_x |f(x)|, at (6) we have applied Pinsker's inequality, at (7) we assume that concentration event holds. We remark that the guarantee provided by this theorem holds not only for L-Lipschitz utilities, but for all functions with the same dimensionality (since it is a bound in 1-norm). §.§ Analysis of * The proof draws inspiration from those of <cit.> and <cit.>. Given any distribution η supported on , and given any two utilities U∈_L,U∈_L (where U is a function on [0,H] and U is a vector on ), we will abuse notation and write both U^⊺η and U^⊺η, with obvious meaning. Moreover, for L>0, we define operator _L:_L→ 2^_L (where 2^ denotes the power set of set ) that, given vector U∈_L, returns the set _L(U){U∈_L | ∀ y∈: U(y)=U(y)}. First of all, we observe that the guarantee provided by the theorem follows directly by the following expression: ℙ_^1,^2,…,^N( sup_U∈_L(U)max_i∈N_p^i,r^i,π^E,i(U)≤ϵ)≥ 1-δ, where ℙ_^1,^2,…,^N denotes the joint probability distribution obtained by the N MDPs {^i}_i. Let us denote by U(∑_t=0^T-1U_t)/T the output of . Note that U∈_L. We can write: sup_U∈_L(U)max_i∈N_p^i,r^i,π^E,i (U) (1)≤sup_U∈_L(U)∑_i∈N_p^i,r^i,π^E,i(U) (2)=sup_U∈_L(U)∑_i∈N(J^*(U;p^i,r^i)-J^π^E,i(U;p^i,r^i) ±U^⊺η^E,i) (3)≤sup_U∈_L(U)∑_i∈N(J^*(U;p^i,r^i)-U^⊺η^E,i) +ϵ_1 (4)=sup_U∈_L(U)∑_i∈N( max_η∈_iU^⊺η -U^⊺η^E,i)+ϵ_1 (5)=sup_ U_0∈_L(U_0), …, U_T-1∈_L(U_T-1) 1/T∑_i∈Nmax_η∈_i∑_t=0^T-1( U_t^⊺η -U_t^⊺η^E,i)+ϵ_1 (6)≤1/T∑_t=0^T-1sup_ U_t∈_L(U_t)∑_i∈N( max_η∈_iU_t^⊺η±U_t^⊺η^i_t -U_t^⊺η^E,i)+ϵ_1 (7)≤1/T∑_t=0^T-1∑_i∈NU_t^⊺(η^i_t - η^E,i)±1/Tmin_U∈_L∑_t=0^T-1∑_i∈NU^⊺(η^i_t - η^E,i)+ϵ_1+ϵ_2 (8)≤1/Tmin_U∈_L∑_t=0^T-1∑_i∈NU^⊺(η^i_t - η^E,i)+ϵ_1+ϵ_2+ 2HN√(H/ϵ_0)/√(T)_ϵ_3 (9)≤1/T∑_t=0^T-1∑_i∈NU^E,⊺(η^i_t - η^E,i) ± U^E,⊺η^p^i,r^i,π^E,i+ϵ_1+ϵ_2+ϵ_3 (10)≤1/T∑_t=0^T-1∑_i∈NU^E,⊺η^i_t± U^E,⊺η^p^i,r^i,π^i_t - U^E,⊺η^p^i,r^i,π^E,i+2ϵ_1+ϵ_2+ϵ_3 (11)≤1/T∑_t=0^T-1∑_i∈NU^E,⊺(η^p^i,r^i,π^i_t- η^p^i,r^i,π^E,i)_≤0+2ϵ_1+ϵ_2+ϵ_3 +ϵ_4 (12)≤ 2ϵ_1+ϵ_2+ϵ_3+ϵ_4, where at (1) we upper bound the maximum of positive terms with their sum, at (2) we apply the definition of (non)compatibility, at (3) we first upper bound the supremum of a sum with the sum of the supremum, and then we apply Lemma <ref> w.p. δ/3, and denote ϵ_1 NL√(2Hϵ_0)+∑_i∈Nc H √(HlogNHτ^E,i/δϵ_0/ϵ_0τ^E,i), at (4) we denote by _i the set of possible return distributions in environment i, at (5) we use the definition of U, and realize that all functions U∈_L(U) can be constructed based on T functions U_0∈_L(U_0), …,U_T-1∈_L(U_T-1). At (6) we upper bound the maximum of the sum with the sum of maxima, and exchange the two summations, and we add and subtract the dot product between the (discretized) utility U_t and the estimate of the return distribution computed at Line <ref>; moreover, we bring the sup inside the summation. 
At (7) we upper bound the supremum of the sum with the sum of the supremum, and we apply Lemma <ref> w.p. δ/3, defining ϵ_2 cNH^2 √(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))) +NHLϵ_0+ c'HN√(logNT/δ/ K), and we add and subtract a term, at (8) we apply Theorem H.2 from <cit.> since set _L is closed and convex, where Dmax_U,U'∈_LU-U'_2=√(d-2)H = √(H/ϵ_0-1)H ≤ H√(H/ϵ_0) (recall that we consider increasing and not strictly-increasing utilities),[The maximum is attained by discretized utilities U,U' that assign, respectively, U(y)=0 and U'(y)=H to all the y∈∖{y_1,y_d}.] and max_U∈_L∇∑_i∈NU^⊺ (η^i_t-η^E,i)_2=∑_i∈Nη^i_t-η^E,i_2≤∑_i∈Nη^i_t_1+η^E,i_1=2N G (because η^i_t and η^E,i are probability distributions), with learning rate α=D/(G√(T))=H√(d-2)/(2N√(T))= √(H/ϵ_0-1)H/(2N√(T)), at (9) we upper bound the minimum over utilities with a specific choice of utility, U^E, and we add and subtract a term; note that U^E∈_L corresponds to the expert's utility U^E∈_L (by hypothesis), i.e., for all y∈: U^E(y)=U^E(y). Note that, by hypothesis, U^E makes all the expert policies optimal, i.e., ∀ i∈N: U^E,⊺η^p^i,r^i,π^E,i =sup_π U^E,⊺η^p^i,r^i,π. At (10) we note that, under the good event of Lemma <ref>, we can provide an upper bound using the term in Lemma <ref> (since U^E∈_L); in addition, we sum and subtract a term that depends on some policy π^i_t, whose existence is guaranteed by Lemma <ref>, which we apply at the next step. At (11) we apply Lemma <ref> w.p. δ/3, and we define as ϵ_4 the upper bound times N. Finally, at (12) we use the hypothesis that utility U^E makes the expert policy optimal in all environments. We want that 2ϵ_1+ϵ_2+ϵ_3+ϵ_4≤ϵ. We can rewrite the sum as: 2ϵ_1+ϵ_2+ϵ_3+ϵ_4 =( 2NL√(2Hϵ_0)+3/2LNHϵ_0)+ cHN√(H)/√(ϵ_0T) + c'∑_i∈NH √(HlogNHτ^E,i/δϵ_0/ϵ_0τ^E,i) +c”NH√(logNT/δ/K) +c”'NH^2 √(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))). By imposing each term smaller than ϵ/5, we find that it suffices that ϵ_0=ϵ^2/80 N^2L^2H T≥(N^2H^3/ϵ_0ϵ^2)≥(N^4H^4L^2/ϵ^4) τ^E,i≥( H^3N^2logNH/δϵ_0/ϵ_0ϵ^2)≥( H^4N^4L^2logNHL/δϵ/ϵ^4) ∀ i∈N K≥( N^2H^2logNT/δ/ϵ^2)≥( N^2H^2logNHL/δϵ/ϵ^2) τ^i≥( N^2SAH^5/ϵ^2( S+logSAHN/δ) ∀ i∈N, where we have used that τ^i=SAHn for all i∈N, and also used Lemma J.3 of <cit.>. The statement of the theorem follows through the application of a union bound. Let δ∈(0,1). Then, it holds that, w.p. at least 1-δ: sup_U∈_L∑_i∈N| U^⊺η^E,i- J^π^E,i(U;p^i,r^i)|≤ NL√(2Hϵ_0)+∑_i∈Nc H√(HlogNHτ^E,i/δϵ_0/ϵ_0τ^E,i), where c is some positive constant. We can make the same derivation as in the proof of Theorem <ref> to upper bound the objective with the sum of two terms, which can then be bounded using Lemma <ref> and the expression (Eq. (<ref>)) obtained in the proof of Lemma <ref> w.p. δ/N: sup_U∈_L∑_i∈N| U^⊺η^E,i- J^π^E,i(U;p^i,r^i)| ≤ L∑_i∈Nw_1(η^p^i,r^i,π^E,i, Proj_(η^p^i,r^i,π^E,i)) + ∑_i∈Nsup_U'∈[0,H]^d|_G∼Proj_(η^p^i,r^i,π^E,i)[U'(G)]- _G∼η^E,i[U'(G)]| ≤ L N√(2Hϵ_0) +∑_i∈Nc H√(HlogNHτ^E,i/δϵ_0/ϵ_0τ^E,i). The result follows through the application of the union bound. Let δ∈(0,1). With probability at least 1-δ, for all t∈{0,1,…,T-1}, for all i∈N, it holds that: sup_ U_t∈_L(U_t)max_η∈_iU_t^⊺η - U_t^⊺η^i_t ≤ cH^2 √(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))) +HLϵ_0+ c'H√(logNT/δ/ K), where c,c' are some positive constants. We use the notation in Section <ref>. 
In particular, let policy π^*,i_t be the optimal policy in the RS-MDP ^i_U_t^i,^i,H,s_0^i, p^i, r^i,U_t, i.e.: J^π^*,i_t(U_t;p^i,r^i) = J^*(U_t;p^i,r^i) = J^*(U_t;p^i,r^i), where the last passage holds trivially for all U_t∈_L(U_t) (because there is no evaluation of utility outside ). Thus, for all t∈{0,1,…,T-1}, we have: sup_ U_t∈_L(U_t)max_η∈_iU_t^⊺η - U_t^⊺η^i_t± J^*(U_t;p^i,r^i) (1)≤sup_ U_t∈_L(U_t)|J^*(U_t;p^i,r^i)- J^*(U_t;p^i,r^i)| + |U_t^⊺(η^i_t- η^p^i,r^i,π^*,i_t)| (2)≤ HLϵ_0+cH^2 √(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))) + |U_t^⊺(η^i_t- η^p^i,r^i,π^*,i_t)| (3)≤ HLϵ_0+cH^2 √(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))) + c'H√(logNT/δ/ K), where at (1) we have applied the triangle inequality, and realized that in the second term there is no dependence on the value of utility outside of ; moreover, we have used that J^*(U_t;p^i,r^i)= U_t^⊺η^p^i,r^i,π^*,i_t by definition of policy π^*,i_t. At (2) we apply Lemma <ref> (our J^*(U_t;p^i,r^i) has the same meaning of J^*(U) in the lemma, and we upper bound sup_ U_t∈_L(U_t) with sup_ U∈_L) w.p. δ/(2N),[We remark that, in doing so, we can still apply Proposition 3 of <cit.> inside the proof of Lemma <ref> even though we consider increasing utilities instead of strictly-increasing utilities; indeed, it is trivial to observe that the proof of Proposition 3 of <cit.> does not depend on such property.] and we keep the confidence bound explicit, and we upper bound d≤ H/ϵ_0+1, and at (3) we observe that η^i_t is the empirical estimate of distribution η^p^i,r^i,π^*,i_t (see Line <ref>) obtained through the sampling of K sample returns G_1,G_2,…,G_K i.i.d.∼η^p^i,r^i,π^*,i_t. Indeed, note that the policy ψ^*,i_t, computed at Line <ref> and optimal for [^i_U_t]={^i×_h}_h,^i,H,s_0^i, ^i, ^i_t,[See Section <ref> for the meaning of ^i and ^i_t; we use _h for all h in the state space instead of the sets of partial returns {^p^i,r^i_h}_h in order to obtain policy ψ^*,i_t supported on the entire ×_h space, and to make it compliant with Algorithm <ref>] provides policy π^*,i_t through the formula in Section <ref>, thus Line <ref> is actually simulating π^*,i_t in MDP ^i. Therefore, we can apply Hoeffding's inequality (e.g., see Lemma <ref>) w.p. δ/(2TN). The result follows through the application of the union bound. We remark that in one case we use probability δ/(2N) (without T) while in the other we use δ/(2NT) (with T), because in the former we provide a guarantee for all possible utilities w.r.t. the optimal performance, thus all the T steps are already included; instead, in the latter, we provide a guarantee for a single utility and for a single policy at a specific t∈{0,…,T-1}, thus we have to compute a union bound with T. Let δ∈(0,1). With probability at least 1-δ, for all i∈N and t∈{0,…,T-1}, under the good event in Lemma <ref>, there exists a policy π^i_t such that: U^E,⊺η^i_t- U^E,⊺η^p^i,r^i,π^i ≤ LHϵ_0/2+ cH√(logNT/δ/ K) +c'H^2√(1/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))), where c,c' are positive constants. First, simply observe that η^i_t is the empirical estimate (see Line <ref>) of η^p^i,r^i,π^*,i_t, thus, similarly to the proof of Lemma <ref>, for all i∈N and t∈{0,1,…,T-1}, we can apply Hoeffding's inequality w.p. δ/(2TN): |U^E,⊺(η^i_t -η^p^i,r^i,π^*,i_t)|≤ cH√(logNT/δ/ K). Now, we compare distributions η^p^i,r^i,π^*,i_t and η^p^i,r^i,π^*,i_t. 
Through straightforward passages, we can write: |U^E,⊺(η^p^i,r^i,π^*,i_t- η^p^i,r^i,π^*,i_t)| = |J^π^*,i_t(U^E;p^i,r^i) - J^π^*,i_t(U^E;p^i,r^i)| =|∑_s'∈ p^i_1(s'|s_0^i,π^*,i_t,1(s_0^i)) V_2^π^*,i_t(s';p^i,r^i) - ∑_s'∈p^i_1(s'|s_0^i,π^*,i_t,1(s_0^i)) V_2^π^*,i_t(s';p^i,r^i)| ≤|∑_s'∈(p^i_1(s'|s_0^i,π^*,i_t,1(s_0^i))- p^i_1(s'|s_0^i,π^*,i_t,1(s_0^i))) V_2^π^*,i_t(s';p^i,r^i) | +∑_s'∈p^i_1(s'|s_0^i,π^*,i_t,1(s_0^i,0))| V_2^π^*,i_t(s';p^i,r^i)- V_2^π^*,i_t(s';p^i,r^i) | ≤… ≤_p^i,r^i,π^*,i_t[ ∑_h'=1^H |∑_s'∈( p^i_h'(s'|s_h',a_h')-p^i_h'(s'|s_h',a_h')) V_h'+1^π^*,i_t(s';p^i,r^i)| | s_1=s_0^i ] ≤H_p^i,r^i,π^*,i_t[ ∑_h'=1^H p^i_h'(·|s_h',a_h')-p^i_h'(·|s_h',a_h')_1| s_1=s_0^i ] ≤ H_p^i,r^i,π^*,i_t[ ∑_h'=1^H √(2KL(p^i_h'(·|s_h',a_h') p^i_h'(·|s_h',a_h')))| s_1=s_0^i ], where at the last passage we applied the Pinsker's inequality. Note that the previous derivation was possible as long as as policy π^*,i_t is defined over all the possible pairs state-cumulative reward (s,y)∈×_h for all h∈H. Since we construct it through policy ψ^*,i_t, obtained at Line <ref>, i.e., over the entire enlarged state space {×_h}_h, then policy π^*,i_t satsifies such property. Now, in the proof of Lemma <ref> we used Lemma <ref>, in which event bounds the KL-divergence between transition models. Therefore, under the application of Lemma <ref>, it holds that: |U^E,⊺(η^p^i,r^i,π^*,i_t- η^p^i,r^i,π^*,i_t)| ≤ H^2 √(2/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))), where n is the number of samples takes at each (s,a,h)∈ in the i∈N MDP. Therefore, we can finally write: U^E,⊺η^i_t- U^E,⊺η^p^i,r^i,π^i±U^E,⊺η^p^i,r^i,π^*,i_t±U^E,⊺η^p^i,r^i,π^*,i_t = U^E,⊺( η^p^i,r^i,π^*,i_t-η^p^i,r^i,π^i) +U^E,⊺( η^p^i,r^i,π^*,i_t -η^p^i,r^i,π^*,i_t) + U^E,⊺(η^i_t- η^p^i,r^i,π^*,i_t) (1)≤ U^E,⊺( η^p^i,r^i,π^*,i_t-η^p^i,r^i,π^i)+cH√(logNT/δ/ K) +c'H^2 √(2/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))) (2)≤LHϵ_0/2 +cH√(logNT/δ/ K) +c'H^2 √(2/n(logSAHN/δ +(S-1)log(e(1+n/(S-1))))), where at (1) we have used the bounds derived earlier, and at (2) we have applied Lemma <ref>, noticing that we can choose policy π^i as we wish, and using that k≤ϵ_0/2. Let _1=,,H,s_0,p,r^1 and _2=,,H,s_0,p,r^2 be two MDPs with deterministic rewards that differ only in the reward function r^1≠ r^2, and assume that, for all (s,a,h)∈, it holds that |r^1_h(s,a)-r^2_h(s,a)|≤ k, for some k≥ 0. Let π^1 be an arbitrary (potentially non-Markovian) policy that induces, in _1, the distribution over returns η^p,r^1,π^1. Then, there exists a policy π^2 that induces in _2 the distribution η^p,r^2,π^2 such that: sup_U∈_L| _G∼η^p,r^1,π^1[U(G)]- _G∼η^p,r^2,π^2[U(G)] |≤ LHk. A non-Markovian policy like π^1, in its most general form, prescribes actions at stages h∈H depending on the sequence of state-action-reward s_1,a_1,r_1,s_2,a_2,r_2,…,s_h-1,a_h-1,r_h-1,s_h received so far. Since, by hypothesis, the reward functions are deterministic (see also Section <ref>), then it is clear that the information contained in the rewards received so far ({r_1,r_2,…,r_h-1}) is already contained in the state-action pairs received s_1,a_1,s_2,a_2,…,s_h-1,a_h-1,s_h (indeed, for deterministic reward r^1, we have that r_1=r^1_1(s_1,a_1),r_2=r^1_2(s_2,a_2), and so on). This means that, for any non-Markovian policy in the MDP _1, since it coincides with _2 except for the deterministic reward function, it is possible to construct a policy π^2 that induces the same distribution over state-action trajectories, i.e., for any state-action trajectory ω=s_1,a_1,s_2,a_2,…,s_H-1,a_H-1,s_H,a_H,s_H+1∈Ω, it holds _p,r^1,π^1(ω)=_p,r^2,π^2(ω). 
Therefore, we can write: sup_U∈_L| _G∼η^p,r^1,π^1[U(G)]- _G∼η^p,r^2,π^2[U(G)]| (1)=sup_U∈_L| ∑_ω∈Ω_p,r^1,π^1(ω) U(∑_(s,a,h)∈ωr^1_h(s,a)) - ∑_ω∈Ω_p,r^2,π^2(ω) U(∑_(s,a,h)∈ωr^2_h(s,a)) | (2)=sup_U∈_L| ∑_ω∈Ω_p,r^1,π^1(ω) U(∑_(s,a,h)∈ωr^1_h(s,a)) - ∑_ω∈Ω_p,r^1,π^1(ω) U(∑_(s,a,h)∈ωr^2_h(s,a)) | =sup_U∈_L| ∑_ω∈Ω_p,r^1,π^1(ω) (U(∑_(s,a,h)∈ωr^1_h(s,a))- U(∑_(s,a,h)∈ωr^2_h(s,a))) | (3)≤sup_U∈_L∑_ω∈Ω_p,r^1,π^1(ω) |U(∑_(s,a,h)∈ωr^1_h(s,a))- U(∑_(s,a,h)∈ωr^2_h(s,a))| (4)≤∑_ω∈Ω_p,r^1,π^1(ω) L|∑_(s,a,h)∈ω(r^1_h(s,a)-r^2_h(s,a))| (5)≤∑_ω∈Ω_p,r^1,π^1(ω) L∑_(s,a,h)∈ω|r^1_h(s,a)-r^2_h(s,a)| (6)≤∑_ω∈Ω_p,r^1,π^1(ω) L∑_(s,a,h)∈ωk =LHk, where at (1) we use the fact that the expected utility w.r.t. the distribution over returns can be computed using the probability distribution over state-action trajectories (since the rewards are deterministic), at (2) we use that policy π^2 is constructed exactly to match the distribution over state-action trajectories, at (3) we apply triangle inequality, at (4) we use the fact that all utilities U∈_L are L-Lipschitz, i.e., for all x,y∈[0,H]: |U(x)-U(y)|≤ L|x-y|, at (5) we apply again the triangle inequality, and at (6) we use the hypothesis that r^1,r^2 are close to each other by parameter k. § EXPERIMENTAL DETAILS In this appendix, we collect additional information about the experiments described in Section <ref>. Appendix <ref> presents formally the MDP used for the collection of the data along with the questions posed to the participants. Appendix <ref> describes what is a Standard Gamble <cit.> and how it has been used to construct the utility U_SG of the participants. Finally, Appendices <ref> and <ref> contain, respectively, additional details on Experiment 1 and 2. §.§ Data Description Below, we describe the data collected. §.§.§ Considered MDP. The 15 participants analyzed in the study have been provided with complete access to the MDP in Figure <ref>, which we will denote by . In other words, the participants know the transition model and the reward function of everywhere. Intuitively, states L (Low), M (Medium), H (High), and T (Top), represent 4 “levels” so that the received reward increases when playing actions in “higher” states instead of “lower” states. Formally, MDP =,,H,s_0,p,r has four states ={L,M,H,T}, and three actions for each state ={a_0,a_+,a_-}. The horizon is H=5, i.e., the agent has to take 5 actions. The initial state is s_0=M. The transition model p is stationary, i.e., it does not depend on the stage h∈H. Specifically, p is depicted in Table <ref>. The intuition is that action a_0 keeps the agent in the same state deterministically, while action a_+ tries to bring the agent to the higher state with probability 1/3, and action a_- sometimes make the agent “fall down” to the lower state with probability 1/5. The reward function r:→ is deterministic, stationary, and depends only the state-action pair played. The specific values are depicted in Table <ref>. Note that we have written the reward values as numbers in [0€, 1000€], to provide a monetary interpretation. Nevertheless, we will rescale the interval to [0,1] during the analysis for normalization. Observe that the same actions played in “higher” states (e.g., H or T) provide higher rewards than when played in “lower” states (e.g., L or M). Moreover, notice that action a_+, which is the only action that tries to increase the state, does not provide reward at all, while the risky action a_-, which sometimes decreases the state, always provides double the reward than “default” action a_0. 
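To make the setup concrete, the following minimal sketch (in Python) implements the MDP just described. The dynamics follow the text exactly; among the reward entries, only r(M,a_0)=30€, the rule that a_+ pays nothing, and the rule that a_- pays twice a_0 are taken from the text, while the remaining values are illustrative placeholders (the exact entries are those reported in Table <ref>).

```python
import numpy as np

# Minimal sketch of the MDP described above: states L, M, H, T; actions a_0, a_+, a_-;
# horizon H = 5; initial state s_0 = M. Action a_0 stays put, a_+ climbs one level with
# probability 1/3, a_- falls one level with probability 1/5. Reward values other than
# r(M, a_0) = 30 are placeholders; see Table <ref> for the actual entries.

STATES = ["L", "M", "H", "T"]
ACTIONS = ["a0", "a+", "a-"]
H, S0 = 5, "M"

R_A0 = {"L": 10.0, "M": 30.0, "H": 60.0, "T": 100.0}   # placeholder values except for M

def reward(s, a):
    """Deterministic, stationary reward in euros."""
    if a == "a+":
        return 0.0                     # a_+ never pays
    if a == "a0":
        return R_A0[s]
    return 2.0 * R_A0[s]               # a_- always pays double the "default" action a_0

def transition(s, a, rng):
    """Sample the next state according to the stationary dynamics."""
    i = STATES.index(s)
    if a == "a0":
        return s
    if a == "a+":                      # try to climb one level with probability 1/3
        return STATES[min(i + 1, 3)] if rng.random() < 1 / 3 else s
    return STATES[max(i - 1, 0)] if rng.random() < 1 / 5 else s  # a_- may fall down

def rollout(policy, rng):
    """Cumulative reward of one H-step episode following policy(s, h, y)."""
    s, y = S0, 0.0
    for h in range(1, H + 1):
        a = policy(s, h, y)
        y += reward(s, a)
        s = transition(s, a, rng)
    return y

# Example: always playing a_0 yields 5 * 30 = 150 euros, as noted in the text.
rng = np.random.default_rng(0)
print(rollout(lambda s, h, y: "a0", rng))  # -> 150.0
```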
§.§.§ Intuition behind the agents' behavior. The reward is interpreted as money. Playing MDP involves a trade-off between playing action a_+, which gives no money but potentially allows the agent to collect more money in the future (by reaching "higher" states), and action a_-, which provides the greatest amount of money immediately, but potentially reduces the amount of money which can be earned in the future. Action a_0, being deterministic, provides a reference point, so that deterministically playing action a_0 for all the H=5 stages gives the agent 30× 5=150€. Thus, playing actions a_+,a_- other than a_0 means that the agent accepts some risk in an attempt to increase its earnings. §.§.§ Questions asked to the participants We remark that the participants have enough background knowledge to understand the MDP described. We ask each participant which action in {a_0,a_+,a_-} they would play if they were in a certain state s, at stage h, with cumulative reward y collected so far, for many different values of triples (s,h,y)∈×H×[0€,5000€]. Specifically, the triples (s,h,y) considered are: (M,1,0€); (M,2,0€); (M,2,30€); (M,2,60€); (H,2,0€); (M,3,0€); (M,3,30€); (M,3,60€); (M,3,200€); (H,3,0€); (H,3,30€); (H,3,60€); (H,3,200€); (T,3,0€); (M,4,0€); (M,4,30€); (M,4,60€); (M,4,90€); (M,4,120€); (M,4,150€); (M,4,180€); (M,4,300€); (M,4,400€); (H,4,0€); (H,4,30€); (H,4,60€); (H,4,100€); (H,4,130€); (H,4,200€); (H,4,300€); (H,4,1000€); (T,4,0€); (T,4,60€). From state L, we assume all participants always play action a_+ since it is the only rational strategy. Moreover, from stage h=5, we assume that all participants always play action a_- since, again, it is the only rational strategy. In all other possible combinations of values of s,h,y, we "interpolate" by considering the action recommended by the participant at the value y' closest to y, for the same s,h. §.§.§ The return distribution of the participants' policies We now present the return distribution of the policies prescribed by the participants. Specifically, we have simulated the participants' policies 10000 times, and we have computed the empirical estimate of their return distributions. Such values are reported in Figures <ref>, <ref>, <ref>, <ref>, and <ref>, where we use the notation η^E_i to denote the return distribution of participant i, with i∈15. §.§ Standard Gamble Data Standard Gamble (SG). The Standard Gamble (SG) method (e.g., see Section 2.5 of <cit.>) is a common method for inferring the von Neumann-Morgenstern (vNM) utility function of an agent. Observe Figure <ref>. In an SG, the agent has to decide between two options: a sure option (e.g., x=30€), in which the prize is obtained with probability 1, and a lottery between two prizes (e.g., 5000€ and 0€), in which the best prize (5000€) is received with probability p. For any value of x, the agent has to state the probability p that, from their perspective, makes the two options (i.e., x for sure, or the lottery) indifferent. Given the probability p, we have that the utility U of the agent for x is: U(x)=p· U(5000)+(1-p)· U(0)=p, since, by normalization conditions, we have U(0)=0 and U(5000)=1. Our SG. We have asked the 15 participants in the study to answer some SG questions, which allows us to fit a vNM utility function for each of them (which we call U_SG). Specifically, we have asked them to answer 8 different SG questions, in which the x value in Figure <ref> has been replaced by: 10€, 30€, 50€, 100€, 300€, 500€, 1000€, 2000€. Next, we linearly interpolate the computed utilities, obtaining the functions in Figure <ref>.
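The construction of U_SG can be summarized with the following minimal sketch, where the indifference probabilities are illustrative placeholders rather than the participants' actual answers.

```python
import numpy as np

# Minimal sketch of how a participant's Standard-Gamble utility U_SG is built: for every
# sure amount x, the participant reports the indifference probability p, so that
# U_SG(x) = p, with U_SG(0) = 0 and U_SG(5000) = 1; the 8 answers are then linearly
# interpolated. The probabilities below are placeholders, not actual answers.

x_grid = np.array([0, 10, 30, 50, 100, 300, 500, 1000, 2000, 5000], dtype=float)
p_answers = np.array([0.0, 0.02, 0.05, 0.08, 0.15, 0.35, 0.50, 0.70, 0.85, 1.0])

def U_SG(x):
    """Piecewise-linear vNM utility obtained from the Standard-Gamble answers."""
    return np.interp(x, x_grid, p_answers)

# e.g. the utility of the sure 150 euros earned by always playing a_0:
print(U_SG(150.0))
```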
It should be remarked that this model considers single decisions (i.e., H=1), while in MDPs there is a sequence of decisions to be taken over time, specifically over a certain time horizon H. §.§ Details Experiment 1 The utilities U_sqrt, U_square, and U_linear can be formally defined as: U_sqrt(G)=√(5G), U_square(G)= G^2/5, U_linear(G)= G. They are depicted in Figure <ref>. The experiment has been conducted by collecting 10000 trajectories to estimate the return distribution of each participant's policy, and 10000 trajectories for estimating the return distribution of the optimal policy, which has been computed exactly through value iteration. We have executed 5 simulations with different seeds, and the corresponding (non)compatibility values reported in the table in Section <ref> are the average over the 5 simulations. For the experiment, we use the true transition model, and we remark that the reward function considered, when discretized, coincides with itself, i.e., we incur neither estimation error of the transition model nor approximation error from the discretization. The experiment has been conducted in less than 1 hour on a personal computer with processor AMD Ryzen 5 5500U with Radeon Graphics (2.10 GHz), with 8.00 GB of RAM. §.§ Details Experiment 2 We consider the policy of the 10th participant in the survey, and we execute multiple times with varying values of the input parameters. Specifically, we always use 10000 trajectories for estimating the return distribution of the 10th participant's policy and the return distributions of the optimal policies computed along the way; we perform 5 runs with different seeds for each combination of parameters. We execute for T=70 iterations using Lipschitz constant L=10, which means that we consider only utilities U∈_L satisfying |U(G)-U(G')|≤ 10|G-G'| for all G,G'∈[0,5] (the horizon is 5). As initial utility U_0, we try U_sqrt, U_square, and U_linear, and as learning rates we try 0.01,0.5,5,100,1000,10000. The experiment has been conducted on the same personal computer as Experiment 1, and took a few hours. §.§.§ The sequence of utilities extracted by We now present some plots representing the sequence of utilities extracted by during its execution. Specifically, we consider initial utility U_0=U_square, and we use learning rates α∈[0.01,0.5,5,100,1000,10000]. We plot the sequence of utilities considered by during its execution in Figures <ref>, <ref>, and <ref>, where U_t denotes the utility extracted at iteration t, and the number in the legend represents the (non)compatibility of that utility. We consider again participant 10. We observe that for smaller learning rates (e.g., α∈[0.01,0.5,5]), the utilities, as well as the (non)compatibilities, do not change much (Figure <ref> and Figure <ref> left), while for larger learning rates, we obtain more substantial changes (Figure <ref> left and Figure <ref>). Clearly, larger learning rates require fewer iterations to achieve small values of (non)compatibility. Nevertheless, excessively large values (e.g., α=10000) are outperformed by intermediate values (e.g., α=100). §.§.§ A visual explanation for a large learning rate Now, we show that the projection update represented by operator Π__L crucially neglects small variations in the (non-projected) utilities, requiring us to increase the step size. Thus, the intuition is that we need a large learning rate because the projection step neglects small variations.
To show this, we take the initial utility U_0=U_sqrt and two return distributions η^*_0,η^E, where η^*_0 coincides with the distribution of an optimal policy for U_sqrt, and η^E is the return distribution of the policy played by participant 10. These distributions are plotted in Figure <ref> left, and their difference is plotted in Figure <ref> right. In particular, we note that the two distributions are rather different: the expert's distribution η^E is more risk-averse, in that it assigns higher probability to returns around G=0.5, while the optimal distribution η^*_0 is more risk-seeking, in that it assigns some probability to higher returns G≥ 1, at the cost of a high probability of small returns G≤ 0.3. We aim to perform the update rule: U_1'= U_0 -α (η^*_0-η^E), with some learning rate α, and then to perform the projection: U_1= Π__L[U_1']. We execute the update with the following values of the step size: α∈{0.01,0.5,5,100,1000,10000}, and we plot the corresponding updated utilities U_1' and U_1 in Figures <ref>, <ref>, and <ref>. As we can see from Figures <ref>, <ref>, and <ref>, the updates U_0→U_1 obtained with step sizes < 5 are negligible, so that the return distribution of the new optimal policy η^*_1 for U_1 still coincides with the previous one η^*_0, and the gradient at the next step is the same. For α=5, we begin to notice some changes. See Figure <ref>. Instead, with larger step sizes, we observe a non-negligible change in the utility, which produces a substantial change in the return distribution for α=100, and a huge change for α∈[1000,10000] (see Figure <ref>). Since negligible changes in both the utility and the optimal return distribution (obtained with small learning rates) mean that we have to update the utility many times along the same direction, the overall update is equivalent to performing a single update in that direction with a large step size. This justifies the use of large learning rates.
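For completeness, the following sketch illustrates one such update followed by the projection onto _L, here implemented as a Euclidean projection solved with cvxpy; the concrete values of α, L, ϵ_0 and the two return distributions are placeholders, and the actual implementation used in the experiments may differ.

```python
import cvxpy as cp
import numpy as np

# Minimal sketch of one gradient-style update U' = U - alpha * (eta_opt - eta_exp)
# followed by the Euclidean projection onto the set U_L of discretized utilities
# (U[0] = 0, U[d-1] = H, non-decreasing, L-Lipschitz on the epsilon_0 grid), as defined
# at the beginning of this appendix. All numerical values below are placeholders.

H, L, eps0, alpha = 5.0, 10.0, 0.1, 100.0
d = int(H / eps0) + 1                       # grid {0, eps0, ..., H}

def project_onto_UL(u_target):
    """Euclidean projection onto U_L, written as a small convex QP."""
    U = cp.Variable(d)
    # Monotonicity plus the per-step bound imply |U[i]-U[j]| <= L*(j-i)*eps0 for i < j.
    cons = [U[0] == 0, U[-1] == H, cp.diff(U) >= 0, cp.diff(U) <= L * eps0]
    prob = cp.Problem(cp.Minimize(cp.sum_squares(U - u_target)), cons)
    prob.solve()
    return U.value

rng = np.random.default_rng(0)
eta_opt = rng.dirichlet(np.ones(d))         # placeholder return distributions on the grid
eta_exp = rng.dirichlet(np.ones(d))
U0 = np.linspace(0.0, H, d)                 # e.g. the linear utility as starting point

U1_unprojected = U0 - alpha * (eta_opt - eta_exp)
U1 = project_onto_UL(U1_unprojected)
```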
The ultimate goal of Artificial Intelligence (AI) is to construct artificial rational autonomous agents <cit.>. Such agents will interact with each other and with human beings to achieve the tasks that we assign to them. In this vision, a crucial feature is being able to correctly model the observed behavior of other agents. This allows a variety of applications: (i) descriptive, to understand the intent of the observed agent <cit.>, (ii) predictive, to anticipate the behavior of the observed agent (potentially in new scenarios) <cit.>, (iii) normative, to imitate the observed agent because they are behaving in the “right way” <cit.>. Nowadays, Inverse Reinforcement Learning (IRL) provides the most popular and powerful models, i.e., simplified representations, of the behavior of the observed agent, called the “expert”. Under the so-called “reward hypothesis” <cit.>, which has recently been re-interpreted in terms of properties of preferences over trajectories <cit.>, IRL algorithms construct reward functions representing the objectives and the desires of the expert. Depending on the application, different models can be adopted. For instance, <cit.> considers the expert as an exact expected return maximizer, while <cit.> and <cit.> assume that the probability with which actions and trajectories, respectively, are played is proportional to their fraction of optimality (i.e., of expected return). All these models assume that the expert is a risk-neutral agent, i.e., an agent interested in the maximization of the expected return. However, there are many scenarios in which rational agents <cit.>, as well as humans <cit.>, adopt risk-sensitive strategies in the presence of stochasticity. In the most general case, agents are not only interested in the expected return, but in the full distribution of the return <cit.>. Popular examples in this context include agents who aim to maximize the expected return while trying to minimize the variance <cit.>, agents interested in the optimization of the Conditional Value-at-Risk (CVaR) <cit.>, or in reward volatility <cit.>. IRL models thus incur mis-specification, which can crucially affect the descriptive, predictive, and normative power of the inferred reward function <cit.>. Related works.  To overcome this limitation, some authors have analyzed the risk-sensitive IRL problem <cit.>, in which either the learner is provided with the reward function of the expert and it must infer some parameters representing the risk attitude, or the learner must infer both the reward function and the risk attitude from the given demonstrations <cit.>. Nevertheless, these works suffer from major limitations that prevent the adoption of the proposed algorithms in real-world applications. For instance, they either make demanding assumptions (e.g., Boltzmann policies like in <cit.> and <cit.>, which hypothesize that the expert plays each action exactly proportionally to its Q-value), or consider rather limited settings (e.g., the “prepare-react model” of <cit.>, which imposes too much structure on the expert's behavior and on the environment's dynamics). An analogous line of research focuses on the problem of learning the risk attitude of an agent from demonstrations in certain decision-making settings other than Markov Decision Processes (MDPs) <cit.>. 
Even though the powerful model of von Neumann-Morgenstern (vNM) utility functions <cit.> is adopted for representing the risk attitude of the expert, these works only focus on “coarse” sequential decision-making settings like decision trees <cit.>, which do not provide the rich expressivity of MDPs (there is no notion of reward function). A more detailed analysis, along with additional related works, is provided in Appendix <ref>. Our proposal.  In this paper, we formalize, characterize, and analyze the problem of inferring the risk attitude of an agent, encoded with a utility function, from demonstrations of behavior in MDPs. The main contributions of this paper are listed below. The proofs of all results are in Appendices <ref>-<ref>. * Motivated by a real-world example, we propose a simple yet powerful model of behavior in MDPs, which separates the objective (reward) from the risk attitude (utility) of an agent (Section <ref>). * We introduce Utility Learning (UL) as the problem of inferring the risk attitude of an agent in MDPs, and we characterize the partial identifiability of the expert's utility (Section <ref>). * We present and theoretically analyze two novel algorithms for efficiently solving the Utility Learning problem with finite data (Section <ref>). * We conclude the paper with proof-of-concept experiments that serve as an empirical validation of both the proposed model and the presented algorithms (Section <ref>).
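The distinction between risk-neutral and risk-sensitive behaviour discussed above can be made concrete with a tiny worked example. The following Python snippet (illustrative only; the returns and probabilities are made up) shows how the same pair of return distributions is ranked differently by a concave (risk-averse), linear (risk-neutral), and convex (risk-seeking) utility.

```python
import numpy as np

# Two return distributions over the same support.
returns = np.array([0.0, 1.5, 4.0])
p_safe  = np.array([0.0, 1.0, 0.0])   # always yields return 1.5
p_risky = np.array([0.5, 0.0, 0.5])   # 50/50 between 0 and 4 (expected return 2)

utilities = {
    "sqrt   (risk-averse)":  lambda g: np.sqrt(g),
    "linear (risk-neutral)": lambda g: g,
    "square (risk-seeking)": lambda g: g ** 2,
}

for name, U in utilities.items():
    eu_safe, eu_risky = float(p_safe @ U(returns)), float(p_risky @ U(returns))
    choice = "safe" if eu_safe > eu_risky else "risky"
    print(f"{name}: E[U(safe)]={eu_safe:.2f}, E[U(risky)]={eu_risky:.2f} -> prefers {choice}")
```

Only the concave utility prefers the sure outcome, even though the risky lottery has a higher expected return; a risk-neutral IRL model would therefore mis-specify the risk-averse agent's behaviour.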
null
null
null
null
In this paper, we proposed a novel descriptive model of behavior in MDPs, we formalized the UL problem as that of learning the risk attitude of an agent from demonstrations, and we characterized the partial identifiability of the expert's utility. In addition, we have described two provably efficient algorithms for estimating the compatibility of a utility with demonstrations, and for extracting a compatible utility. They have been empirically validated through two proof-of-concept experiments. Future directions.   This paper opens up many important questions: quantifying the model mis-specification, using function approximation, conducting an empirical study on the horizon used by humans for planning <cit.>, combining demonstrations with other feedback <cit.>, learning both r and U, extending imitation learning approaches (e.g., GAIL <cit.>) or the maximum entropy framework with utilities, improving the model in Eq. (<ref>) with negative rewards and prospect theory <cit.>, and many others. We believe that most of the IRL literature should be extended under the proposed, more expressive, framework to construct more accurate algorithms for IRL and UL.
http://arxiv.org/abs/2409.17905v1
20240926145141
Rotation distance using flows
[ "Claire Mathieu", "William Thurston" ]
cs.DM
[ "cs.DM", "cs.DS", "G.2.m" ]
Rotation distance using flows Claire Mathieu, William Thurston § ABSTRACT Splay trees are a simple and efficient dynamic data structure, invented by Sleator and Tarjan. The basic primitive for transforming a binary tree in this scheme is a rotation. Sleator, Tarjan, and Thurston proved that the maximum rotation distance between trees with n internal nodes is exactly 2n-6 (where n is larger than some constant). The proof of the upper bound is easy but the proof of the lower bound, remarkably, uses sophisticated arguments based on calculating hyperbolic volumes. We give an elementary proof of the same result. The main interest of the paper lies in the method, which is new. It basically relies on a potential function argument, similar to many amortized analyses. However, the potential of a tree is not defined explicitly, but by constructing an instance of a flow problem and using the max-flow min-cut theorem. § INTRODUCTION *Background. Splay trees are a simple and efficient dynamic data structure, invented by Sleator and Tarjan [1]. The basic primitive for transforming a binary tree in this scheme is a rotation (see figure <ref>). [Figure 1: Rotating an edge of a binary tree.] The way in which splay trees evolve during a sequence of transformations is still not well understood, as witnessed by the resistance of the dynamic optimality conjecture to all efforts so far. In an attempt to understand how rotations act on trees, Sleator, Tarjan and Thurston addressed the question of computing the rotation distance between binary trees, i.e. the minimum number of rotations required to transform a given tree into another given tree [2]. They proved that the maximum distance is exactly 2n-6 for trees with n internal nodes (where n is larger than some constant). The proof of the upper bound is easy but the proof of the lower bound, remarkably, uses sophisticated arguments based on calculating hyperbolic volumes. *Results. In this paper, we give an elementary proof of the same result, using the max-flow min-cut theorem instead of hyperbolic geometry. One advantage of the proof is that we exhibit two “natural trees” which are at maximum distance apart, namely two variants of the “zig-zag” tree. (In the original proof, the two trees are not explicitly defined). *The proof method. The main interest of the paper lies in the method, which is new. It basically relies on a potential function argument, similar to many amortized analyses. However, the potential of a tree is not defined explicitly (as is usually the case in an amortized analysis), but by constructing an instance of a flow problem and using the max-flow min-cut theorem. It is also interesting to see an example of a problem where potential functions can be used for proving lower bounds and not only upper bounds. *Comparing the two methods. Our proof has the obvious advantage of being purely combinatorial, so that it is essentially self-contained. One drawback is that it involves some tedious case-by-case analysis. We must stress that this tedious work is unnecessary for proving a lower bound of 2n-O(1): if we do not insist on proving that the maximum distance is exactly 2n-6, but are happy with the less precise lower bound 2n-O(1), our proof technique works much more smoothly, and most of the case-by-case analyses disappear. 
In contrast, the proof of [2], which works by successive refinements of the lower bound, is relatively easy for a lower bound of 2n-O(√(n)), but would require almost as much work to prove 2n-O(1) as 2n-6. For other potential applications, we think that our method is more powerful. Instead of relying on the hyperbolic metric, in which solving the problem consists in positioning n points in space adequately (with 3n degrees of freedom), our approach is akin to constructing our own metric, geared to the problem which we wish to solve, and thus there is much more freedom. This will be formalized in the last section. *Plan of the paper. In section <ref>, we leave the binary tree setting and reformulate the problem in terms of flips and triangulations, essentially as in [2]. In section <ref>, we present the two far-apart triangulations and study the sphere triangulation formed by their union. In section <ref>, we define a weight function on triangles, using the max-flow min-cut theorem and a partial order on triangles. In section <ref>, we analyze the change of weights induced by flips, and conclude the proof of the lower bound. In section <ref>, we discuss connections between this proof, linear programming, and the original proof, as well as other possible applications of the technique. § PRELIMINARIES: TRIANGULATIONS AND TETRAHEDRA Rather than dealing with binary trees, it is preferable to study the dual problem on triangulations, which have more symmetry. §.§ Binary trees and triangulations Consider a binary tree with n internal nodes and (n+1) external leaves; move the external leaves to infinity by extending the edges to these leaves into half-lines; add a distinguished half-line from the root to infinity. The planar dual of this graph is a triangulation of an (n+2)-gon with one distinguished polygon edge. This defines a bijection between rooted binary trees with n internal nodes and triangulations of an (n+2)-gon with a distinguished polygon edge. A rotation on a binary tree corresponds to a flip in the triangulation, obtained by replacing an edge by the other diagonal of the quadrilateral formed by the two triangles adjacent to this edge. (See figure <ref>). [Figure 2: Binary trees and triangulations.] We define the flip distance between two triangulations T_1 and T_2 as the minimum number of flips required to transform T_1 into T_2. We will prove that the maximum flip distance between triangulations of an (n+2)-gon is 2n-6. §.§ Amortized analysis: defining a weight function The proof idea is to define a weight function on triangles, such that the weight of a triangulation is the sum of the weights of its triangles; such that any flip, by destroying two triangles and creating two new triangles, changes the total weight by at most 1; and such that there are two particular triangulations whose weights differ by exactly 2n-6. The lower bound follows. This is similar to the potential function arguments used to prove amortized upper bounds on the performance of dynamic algorithms. Actually, for the sake of symmetry, we prefer to think in terms of oriented triangles. An oriented triangle is a triple {i,j,k} of polygon vertices (not necessarily adjacent), together with one of two possible orientations, ijk=jki=kij or ikj=kji=jik, such that changing the orientation multiplies the weight by -1: in other words, w(ijk)=-w(ikj) for all i,j,k. Let w(T) be the weight of a triangulation T drawn inside the polygon with all triangles oriented counterclockwise. 
We draw the initial triangulation T_i inside the polygon, orienting all triangles counterclockwise. We redraw the final triangulation T_f so that all of its edges are outside the polygon, orienting all triangles counterclockwise. We obtain a triangulated planar graph T with all triangles oriented counterclockwise and with total weight w(T)=w(T_i)-w(T_f). A flip of triangles of T_i or of T_f can be seen as a flip in T which preserves the counterclockwise orientation. The central part of the proof consists in showing that any flip of any edge changes the weight of the triangulation by at most 1. In particular, transforming T_i into T_f by doing flips of edges inside the polygon yields a graph with two copies of each triangle of T_f, one inside and one outside the polygon, with opposite orientations, and thus the total weight is 0. Given a polygon with (n+2) vertices, the lower bound will be proved by exhibiting: firstly, a weight function on oriented triangles which satisfies the following tetrahedral constraints: for any quadruple {i,j,k,l} of polygon vertices, |w(ijk)+w(jlk)+w(kli)+w(ilj)| ≤ 1, (which guarantees that any flip changes the weight by at most 1); secondly, a triangulated planar graph which has a Hamiltonian cycle (i.e. which can be viewed as the union of two polygon triangulations, one on each side of the Hamiltonian cycle) and has weight 2n-6 if all triangles are oriented counterclockwise. Geometrically, we can consider this planar triangulation as a triangulation of the sphere with all triangles oriented outwards. Each flip corresponds to a tetrahedron. In [2], the idea is to exhibit a sphere triangulation which cannot be extended to a triangulation of the ball with fewer than 2n-6 tetrahedra. Since the hyperbolic volume of any tetrahedron is at most a constant V_0, the proof focuses on constructing a sphere triangulation forming a polyhedron of total volume greater than (2n-7)V_0, and almost all the work is devoted to the difficult problem of calculating volumes in hyperbolic space. From here on, our proof differs from the original proof of [2]. § CHOOSING THE TWO TRIANGULATIONS §.§ Two far-apart triangulations In this section, we define two triangulations of an (n+2)-gon. We will later prove that they are at flip distance at least 2n-6 apart. The first one is the “horizontal zig-zag” triangulation, and the second one is the zig-zag triangulation, rotated by √(n) (see figure <ref>). [Figure 3: Two triangulations at distance 2n-6 from each other.] In the planar triangulated graph T obtained by drawing one triangulation inside and the other one outside the polygon, we note that most vertices have degree 6: they are adjacent to two edges of the first triangulation, two edges of the second triangulation, and to two polygon edges. The exceptions are the extremities of the zig-zags, which account for four vertices of degree 5 and four vertices of degree 4 in graph T. In the neighborhood of a “typical” vertex, the graph T looks like a regular triangular lattice; near the degree 4 vertices, the graph is as in figure <ref>. [Figure 4: The sphere triangulation in the neighborhood of a degree 4 vertex.] The √(n) rotation angle was chosen so that the degree 4 vertices are far from one another in T, as we will see later. The weight function on oriented triangles will be defined using graph T and will require computing distances on this graph T. Thus we need to understand distances on T. 
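To make the flip operation and the flip distance of Section 2 concrete, here is a small self-contained Python sketch (not part of the paper): it represents a triangulation of a convex m-gon as a set of non-crossing diagonals, performs flips, and computes exact flip distances by brute-force breadth-first search for small m. The representation and the brute-force search are illustrative assumptions; the paper is concerned with lower bounds for large n, not with computing distances.

```python
from itertools import combinations
from collections import deque

def boundary(m):
    return {frozenset((i, (i + 1) % m)) for i in range(m)}

def crosses(d1, d2):
    a, b = sorted(d1); c, d = sorted(d2)
    return (a < c < b < d) or (c < a < d < b)

def all_triangulations(m):
    """All triangulations of a convex m-gon, as frozensets of m-3 non-crossing diagonals."""
    diags = [frozenset((i, j)) for i, j in combinations(range(m), 2)
             if (j - i) % m not in (1, m - 1)]
    result = []
    def grow(chosen, rest):
        if len(chosen) == m - 3:
            result.append(frozenset(chosen)); return
        for k, d in enumerate(rest):
            if all(not crosses(tuple(d), tuple(c)) for c in chosen):
                grow(chosen | {d}, rest[k + 1:])
    grow(frozenset(), diags)
    return result

def flip(tri, diag, m):
    """Replace a diagonal by the other diagonal of the quadrilateral formed by its two triangles."""
    edges = boundary(m) | set(tri)
    a, c = tuple(diag)
    apexes = [x for x in range(m) if x not in (a, c)
              and frozenset((a, x)) in edges and frozenset((c, x)) in edges]
    assert len(apexes) == 2  # one apex on each side of the diagonal
    return frozenset((set(tri) - {diag}) | {frozenset(apexes)})

def flip_distance(t1, t2, m):
    """Exact flip distance by BFS (only feasible for small m)."""
    seen, queue = {t1: 0}, deque([t1])
    while queue:
        t = queue.popleft()
        if t == t2:
            return seen[t]
        for d in t:
            nt = flip(t, d, m)
            if nt not in seen:
                seen[nt] = seen[t] + 1
                queue.append(nt)

# Diameter of the flip graph for a heptagon (n = m-2 = 5 internal nodes).
# For such small n the diameter is below 2n-6, which only holds for n larger than some constant.
m = 7
ts = all_triangulations(m)
print(max(flip_distance(a, b, m) for a in ts for b in ts))
```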
§.§ Visualizing distances on T As a warmup, let us start from the planar regular triangular lattice L, and show how to use it to represent a triangulated graph L' where every vertex has degree 6, except for one degree 5 vertex and one adjacent degree 4 vertex, in such a way that distances can easily be computed. Pick a vertex u from L; its 6 edges extend to infinite half-lines which partition the plane into 6 sectors. Removing one sector and identifying the two half-lines bounding it into one half-line V decreases the degree of u by 1. Let v be the neighbor of u on V. Removing the two sectors of v on either side of V and identifying their remaining boundaries decreases v's degree by 2. We obtain the following picture for L', where the arrows join vertices which are identified (figure <ref>). [Figure 5: Representation of graph L'.] Let c denote the center of the segment in L joining the two points labeled v. The triangular lattice L is a double cover of L' (except for the edge of L going through c): to compute the distance between two points x and y of L', we take y', the image of y by the symmetry of center c in L, and compute d_L'(x,y)=min(d_L(x,y), d_L(x,y')). See figure <ref> for an example, where the distance is d_L'(x,y)=min(d_L(x,y),d_L(x,y'))=min(6,4)=4. [Figure 6: Computing distances in L' using the lattice L.] We now turn to our problem of constructing a convenient picture of T. The same idea can be used, using L as an infinite cover. We obtain the graph depicted in figure <ref>. Let c denote the center of any one of the black lozenges (formed by two adjacent triangles of L). Two vertices of L which are images of each other by the symmetry of center c represent the same vertex of T. A fundamental region consists of four big triangular regions, which we label 0,1,2,3. Graph T can be recovered by taking such a fundamental region (dotted in the figure) and suitably identifying the sides (identified vertices are joined by arrows in figure <ref>). [Figure 7: Computing distances in T using L.] To compute the distance between vertices x and y in T, we consider all the vertices y_i of L which represent y. Then d_T(x,y)=min{d_L(x,y_i)}. In practice, for a given x in T, we take the set of all the representatives of x in L, and compute its Voronoi diagram (where distances are computed in L). Then for any y, d_T(x,y)=d_L(x_0,y_0), where x_0 is an arbitrary (fixed) representative of x and y_0 is the representative of y in the Voronoi cell of x_0. § DEFINING THE WEIGHT FUNCTION The definition of the weight of a triangle (ijk) will essentially depend on the position of its three vertices in T. §.§ Defining orientations of triangles Before defining the actual value of the weight of triangles, we will first choose an orientation for each triangle {i,j,k} such that the weight of the triangle will be non-negative for the chosen orientation. (Knowing the sign of w(ijk), even if we do not know its exact value, will often be sufficient for checking the tetrahedral constraints <ref>). In graph T, consider non-crossing shortest paths p_ij,p_jk,p_ki from i to j, j to k and k to i respectively. Their concatenation forms a cycle separating T into two connected components. The one with the least area (i.e. with the fewest triangles) is called the interior. We define the triangular region (ijk) to be the union of all the interiors of such cycles. Informally, (ijk) is the “fattest” region formed by geodesics between the vertices. 
The boundary of this triangular region usually defines a cyclic order on the vertices (except if a vertex is in the interior of the triangular region, or if the triangular region is flat, or if the triangle is so big that the triangular region contains the whole graph T). We choose the orientation to be ijk if the vertices occur in that order in the counterclockwise direction around the boundary. Also note (we will use this later) that inclusion of triangular regions induces a natural partial order on triangles. Unless otherwise specified, from now on the “weight of triangle {i,j,k}” will refer to the triangle with this chosen orientation. §.§ Defining the weights of triangles We partition triangles in classes, depending on how many edges they have which are in the graph T. Firstly, triangles of T: the triangles which are not adjacent to one of the degree 4 or degree 5 vertices have weight 1. The triangles adjacent to a vertex of degree 4 have weight 3/4. For the triangles adjacent to a vertex of degree 5, their weight, 3/4 or 1, is defined as in figure <ref>. [Figure 8: Defining weights near the special vertices.] Secondly, triangles having exactly two edges of T: If the area of their triangular region is 0, they have weight 0. Otherwise the weight is defined as in figure <ref>: in most cases, they are obtained by using two adjacent triangles of weight 1 in the flip, and are then given weight 1/2. Other cases involve the degree 4 and degree 5 vertices, and are listed in figure <ref>. [Figure 9: Weight of triangles created by one flip.] Thirdly, triangles having exactly one edge of T: this is the crucial part of the construction. Each such triangle is associated to its isolated vertex s (the vertex which is not an extremity of the edge of T). We will define the weight of all triangles associated to s simultaneously by solving a flow problem (one flow problem for each s) defined as follows. We flip every edge opposite to s in the six (or five or four in special cases) triangles of T containing s. (The degree of s doubles in the process). We consider the planar dual of the graph T with these six flips, T(s). Add a vertex s_0 to T(s), the source for the flow problem, linked to all the vertices dual to the triangles obtained by doing a flip. Define the value of the source as the sum of the weights of these triangles. The sink for the flow problem is a new vertex t_0 linked to all the vertices dual to triangles of weight 3/4, and each edge linked to the sink has capacity 1/4. We now want to give an orientation to the edges of T(s). Let e be an edge of T(s) and ab its dual edge of T. If triangle {a,b,s} is oriented, then we direct e outwards from the triangular region (abs). The next step is to define capacities. We choose two constants c large enough and c'>10c. All oriented edges within distance c of the source or of the sink are given capacity 1/2. All oriented edges within distance c' of the source or of the sink are given capacity 1/4. Forget the orientation of all the other edges of the graph, and give them capacity 1/4. See figure <ref>. [Figure 10: Defining capacities.] To show that there exists a flow absorbing all the value coming out of the source, we use the max-flow min-cut theorem (in the straightforward direction), along with the following lemma. Any cut in T(s)∪{s_0}∪{t_0} separating the source s_0 from the sink t_0 has capacity at least equal to the value of the source. 
This is proved by examining orientations, using the representation of T described above (the proof will be in the full version). Coming back to the problem of defining weights on triangles, if w is the flow through an edge e, we give weight w to the triangle sab associated to s and containing the edge ab dual of e, oriented so that the flow goes out of the triangle and oriented counterclockwise. An example is given in figure <ref> for a “typical” source (not in the neighborhood of the sink). [Figure 11: Defining weights using flows.] Fourthly, we need to define the weight of triangles with no edge of T: We will define their weights inductively in order of increasing area of their triangular region. Area 0 triangles are given weight 0. Let (ijk) be a triangle of non-zero area. We look at all vertices l in the triangular region (ijk) such that (ijl),(jkl) and (kil) all have area strictly smaller than the area of (ijk), and define a function f(l): f(l)=w(ijl)+w(jkl)+w(kil)-1. We now define the weight of triangle (ijk): w(ijk)=max{0,max_l f(l)}. This concludes the definition of triangle weights. § VERIFYING THE CONSTRAINTS §.§ The triangulation has weight 2n-6 The graph T has 2n triangles and n+2 vertices. Every triangle of T has weight 1 except in the neighborhood of vertices of degree less than 6: there are 4 vertices of degree 4, adjacent to 4 vertices of degree 5; in the neighborhood of each of these 4 pairs, there are 6 triangles of weight 3/4, each accounting for a weight deficit of 1/4. There is a total of 24 triangles of weight 3/4 and the total weight is thus 2n-24/4=2n-6. §.§ Weights of large triangles We want to check that each flip changes the weight by at most 1, that is, that each quadruple {i,j,k,l} (viewed as a tetrahedron with all triangles oriented outwards) satisfies |w(ijl)+w(jkl)+w(kil)+w(kji)| ≤ 1 . We will first show the following lemmas. If {i,j,k} is not a triangle of T, then |w(ijk)|≤ 1/2. Proof. If {i,j,k} has two edges in T, the proof is by inspection. If it has one edge in T, the result follows from the fact that all capacities are at most 1/2. If it has no edge in T, its weight, if non-zero, is determined by some fourth vertex l. By induction, ijl, jlk, lki all have weight at most 1/2, so f(l) is at most 1/2+1/2+1/2-1≤ 1/2. Hence the lemma. We will show that all large triangles have weight 0. If d(i,j)>c+2 and both vertices i and j are at distance greater than c from the sink, then the weight of (ijk) satisfies |w(ijk)|≤ 1/4. Proof. If ijk has one edge in T, say jk, then this edge is far both from the source i and from the sink, hence the dual edge has capacity 1/4 and |w(ijk)|≤ 1/4. If ijk has no edge in T, the weight of ijk is determined by doing a decomposition into three triangles and iterating on each of these triangles until they all have at least one edge in T or have weight 0. Let ijl be the triangle of side ij obtained in this decomposition. If ijl has weight 0, working back up the decomposition, we deduce w(ijk)=0. If not, then vertex l is a neighbor either of i or of j; say that it is a neighbor of i. Thus the edge dual to il, being far from both the sink and the source j, has capacity 1/4, and |w(ijl)|≤ 1/4. Working back up the decomposition, we deduce |w(ijk)|≤ 1/4, hence the lemma. Large triangles, i.e. such that all three side lengths d(i,j), d(j,k) and d(k,i) are greater than 10c, have weight 0. The proof is similar to the proof of the previous lemma and will be in the full version. 
§.§ Weights of tetrahedra We will use these lemmas for analyzing the tetrahedral constraints <ref>. Case 1: Tetrahedra with 2 faces in T: then all the faces are triangles with two or three edges in T. Proof by inspection. Case 2: Tetrahedra with one face in T: the weight of the other three faces was defined using flows, precisely so that their contributions cancel. The total weight of the tetrahedron is thus at most 1. Case 3: Tetrahedra with no face in T: this is more difficult. We look at the orientation of the four triangular regions defined by the faces of the tetrahedron. Subcase 1: two positively oriented and two negatively oriented faces. A positive face must have nonnegative weight, except if it was defined using flows, unoriented and given capacity 1/4. If no face is in that special case, then there are two faces whose weight is between 0 and 1/2 and two faces whose weight is between -1/2 and 0: the total weight of the tetrahedron is thus between -1 and 1. If one of the faces, say ijk, is in the special case, then studying the possible positions of the fourth point l of the tetrahedron and using c'>3c proves that the total weight is at most 1, by arguments similar to the proof of lemma <ref>. (Proof detailed in full version). Subcase 2: three positively oriented faces, say (ijl),(jkl),(kil) and one negatively oriented face (kji). If the area of (ijk) is strictly greater than the areas of (ijl),(jkl),(kil), then by definition of the weight of triangle (ijk), the tetrahedron has weight at most one. (The other cases will be in the full version). Subcase 3: four positively oriented faces. Then at least two of these faces are huge and satisfy the assumptions of lemma <ref>, thus have weight 0. The other two having weight at most 1/2, the tetrahedron has weight at most 1. This completes the proof of the lower bound. § COMMENTS We note that the proof can be greatly simplified for the more modest goal of proving the lower bound 2n-O(1). Then, in section <ref>, when defining the weights of the triangles, we can just give weight 0 to all the triangles of T within some fixed neighborhood of the special vertices (degree 4 or 5). The non-flat triangles with two edges of T all have weight 0 in the neighborhood of the special vertices and 1/2 otherwise. The flow problem can be defined by giving capacity 1/2 near the source and 1/4 everywhere else, and the proof that there is no small cut becomes straightforward. The proof of lemma <ref> is greatly simplified, and analyzing the tetrahedral constraints is also simpler. The lower bound 2n-O(1) also holds for more general graphs, namely any graph which looks like the triangular lattice except for a constant number of local perturbations. We conjecture that the same technique can be used to characterize the pairs of polygon triangulations which are at maximum distance apart. In fact, in the sphere triangulation formed by their union, it is known that there is no vertex of degree greater than 6. Since the sum of the degrees of the (n+2) vertices is 6(n+2)-12, there are only a finite number of cases to consider for the degree sequence of the graph. In some cases, it is easy to see that the two triangulations are not at maximum distance apart (for example, if the sphere triangulation has a vertex of degree 3, or two adjacent vertices of degree 4). In the other cases, we conjecture that the graph looks like the triangular lattice (i.e. 
there is no short cycle separating the graph into two large connected components: the injectivity radius is not too small), and that the same proof will work. More generally, the approach of using a potential function may be useful for proving lower bounds on other problems involving structures which evolve through local transformations. This work stems from the last section of [2] (in the journal version only), which briefly sketched connections between the original proof and linear programming. More explicitly, each flip of two triangles can be associated with an oriented tetrahedron, i.e. a quadruple {i,j,k,l}, vertices of the triangles involved, together with a sign giving the direction of the transformation. There are N=2\binom{n+2}{4} oriented tetrahedra. The distance between two triangulations T_i and T_f is the length of the shortest path transforming T_i into T_f. To each such path, we associate a vector (x_1,…,x_N), where x_j is the number of times that the flip corresponding to tetrahedron j occurred on the path. The length of the path is x_1+…+x_N, and all x_j's are non-negative integers. The constraint is that the path must lead from T_i to T_f. Amazingly, this can be worded as a linear constraint, using the linearity of the boundary operator for simplices: let ∂ be the map ({i,j,k,l},+) ↦ (ijl)+(jkl)+(kil)+(kji), where the image is a formal sum of oriented triangles and we adopt the convention (ijk)=-(kji). Then the set of formal sums of triangles is a \binom{n+2}{3}-dimensional vector space. One can check that ∂ can be extended to a linear operator: ∂: (x_1,…,x_N) ↦ ∑_{1 ≤ j ≤ N} x_j ∂(tetrahedron j). The constraint that the path must go from T_i to T_f then gives: ∂(x_1,…,x_N) = T_f-T_i, where T_f (resp. T_i) is the formal sum of triangles of the triangulation T_f (resp. T_i), with counterclockwise orientation. If we solve the following linear program: m^* = min (x_1+…+x_N) s.t. ∀ j, x_j≥ 0 and ∂(x_1,…,x_N)=T_f-T_i, we will have a lower bound m^* on the distance between T_i and T_f. The dual linear program is: M^* = max (T_f-T_i)·(w_1,…,w_{\binom{n+2}{3}}) s.t. ∂^T w ≤ (1,1,…,1). Since M^* ≤ m^*, the value of M^* is also a lower bound. The constraints of the dual program can be interpreted as follows: for every tetrahedron, the sum of the weights of the four triangles bounding it is at most 1. Thus this w is precisely our weight function on triangles, and our whole proof can be viewed as constructing a feasible solution to the dual program. In the hyperbolic geometry approach, the authors choose an embedding of the sphere triangulation in space, with one vertex v_0 at infinity. We can say that they define a weight function on triangles: w((i,j,k))= Vol({i,j,k,v_0})/V_0, where the volumes are computed in hyperbolic space. If we consider the dual program, in that case the constraints are trivially satisfied, and the cost function is the volume of the sphere triangulation (up to a factor 1/V_0). It can be hoped that this linear programming interpretation would help provide “natural” amortized analyses for other problems. § REFERENCES [1] Daniel D. Sleator and R. E. Tarjan. Self-adjusting binary search trees, J. ACM 32, 1985, 652-686. [2] D. D. Sleator, R. E. Tarjan and W. P. Thurston. Rotation distance, triangulations, and hyperbolic geometry, Journal of the AMS, 1, 3, 1988, 647-681. (Preliminary version in STOC 1986).
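The primal linear program described above can be set up explicitly for small polygons. The sketch below (illustrative, not from the paper) builds the boundary operator ∂ over all oriented tetrahedra, encodes two triangulations of a hexagon as formal sums of counterclockwise triangles (vertices are assumed labelled counterclockwise, so sorted order is the positive orientation), and solves the LP with scipy; the value m* is a lower bound on the flip distance, which for the two fans used here equals 2.

```python
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

def sign_of(tri):
    """+1 if tri is an even permutation of its sorted order, -1 otherwise."""
    s = sorted(tri); perm = [s.index(x) for x in tri]
    inv = sum(perm[i] > perm[j] for i in range(3) for j in range(i + 1, 3))
    return -1 if inv % 2 else 1

def faces(diagonals, m):
    """Triangles of a convex-polygon triangulation given by its diagonal set."""
    edges = {frozenset((i, (i + 1) % m)) for i in range(m)} | set(diagonals)
    return [t for t in combinations(range(m), 3)
            if all(frozenset(e) in edges for e in combinations(t, 2))]

def formal_sum(triangles_ccw, index):
    v = np.zeros(len(index))
    for tri in triangles_ccw:
        v[index[frozenset(tri)]] += sign_of(tri)
    return v

def flip_lower_bound(tri_i, tri_f, m):
    """LP relaxation m* of the flip distance (a lower bound on it)."""
    triples = [frozenset(t) for t in combinations(range(m), 3)]
    index = {t: i for i, t in enumerate(triples)}
    cols = []
    for (i, j, k, l) in combinations(range(m), 4):
        # boundary of ({i,j,k,l},+): (ijl)+(jkl)+(kil)+(kji); the '-' tetrahedron is its negative
        col = np.zeros(len(triples))
        for tri in [(i, j, l), (j, k, l), (k, i, l), (k, j, i)]:
            col[index[frozenset(tri)]] += sign_of(tri)
        cols.extend([col, -col])
    A = np.array(cols).T
    b = formal_sum(faces(tri_f, m), index) - formal_sum(faces(tri_i, m), index)
    res = linprog(c=np.ones(A.shape[1]), A_eq=A, b_eq=b, bounds=(0, None), method="highs")
    return res.fun

# Hexagon (n = 4): fan triangulation at vertex 0 vs fan at vertex 3 (true flip distance 2).
m = 6
fan0 = [frozenset((0, k)) for k in (2, 3, 4)]
fan3 = [frozenset((3, k)) for k in (5, 0, 1)]
print(flip_lower_bound(fan0, fan3, m))
```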
*Background. Splay trees are a simple and efficient dynamic data structure, invented by Sleator and Tarjan [1]. The basic primitive for transforming a binary tree in this scheme is a rotation (see figure <ref>). fig1fig1Rotating an edge of a binary tree. The way in which splay trees evolve during a sequence of transformations is still not well understood, as witnessed by the resistance of the dynamic optimality conjecture to all efforts so far. In an attempt to understand how rotations act on trees, Sleator, Tarjan and Thurston addressed the question of computing the rotation distance between binary trees, i.e. the minimum number of rotations required to transform a given tree into another given tree [2]. They proved that the maximum distance is exactly 2n-6 for trees with n internal nodes (where n is larger than some constant). The proof of the upper bound is easy but the proof of the lower bound, remarkably, uses sophisticated arguments based on calculating hyperbolic volumes. *Results. In this paper, we give an elementary proof of the same result, using the max-flow min-cut theorem instead of hyperbolic geometry. One advantage of the proof is that we exhibit two “natural trees” which are at maximum distance apart, namely two variants of the “zig-zag” tree. (In the original proof, the two trees are not explicitly defined). *The proof method. The main interest of the paper lies in the method, which is new. It basically relies on a potential function argument, similar to many amortized analyses. However, the potential of a tree is not defined explicitly (as is usually the case in an amortized analysis), but by constructing an instance of a flow problem and using the max-flow min-cut theorem. It is also interesting to see an example of a problem where potential functions can be used for proving lower bounds and not only upper bounds. *Comparing the two methods. Our proof has the obvious advantage of being purely combinatorial, so that it is essentially self-contained. One drawback is that it involves some tedious case-by-case analysis. We must stress that this tedious work is unnecessary for proving a lower bound of 2n-O(1): if we do not insist in proving that the maximum distance is exactly 2n-6, but are happy with the less precise lower bound 2n-O(1), our proof technique works much more smoothly, and most of the case-by-case analyses disappear. In contrast, the proof of [2], which works by successive refinements of the lower bound, is relatively easy for a lower bound of 2n-O(√(n)), but would require almost as much work to prove 2n-O(1) as 2n-6. For other potential applications, we think that our method is more powerful. Instead of relying on the hyperbolic metric, in which solving the problem consists in positioning n points in space adequately (with 3n degrees of freedom), our approach is akin to constructing our own metric, geared to the problem which we wish to solve, and thus there is much more freedon. This will be formalized in the last section. *Plan of the paper. In section <ref>, we leave the binary tree setting and reformulate the problem in terms of flips and triangulations, essentially as in [2]. In section <ref>, we present the two far-apart triangulations and study the sphere triangulation formed by their union. In section <ref>, we define a weight function on triangles, using the max-flow min-cut theorem and a partial order on triangles. In section <ref>, we analyze the change of weights induced by flips, and conclude the proof of the lower bound. 
In section <ref>, we discuss connections between this proof, linear programming, and the original proof, as well as other possible applications of the technique.
null
null
null
null
null
http://arxiv.org/abs/2409.17828v1
20240926132716
Hofer distance on Lagrangian links inside the disc
[ "Ibrahim Trifa" ]
math.SG
[ "math.SG" ]
Hofer distance on Lagrangian links inside the disc Ibrahim Trifa § ABSTRACT We show that the set of Hamiltonian isotopies of certain unions of circles inside the disc is unbounded for the Hofer distance. The proof relies on a result by Francesco Morabito <cit.> together with a standard argument of Michael Khanevsky <cit.>. § INTRODUCTION Let (M,ω) be a symplectic manifold, and L_0 be a Lagrangian submanifold. Consider the set ℒ(L_0,M,ω) := {φ(L_0) : φ∈Ham_c(M,ω)}, where Ham_c(M,ω) denotes the group of Hamiltonian diffeomorphisms that are compactly supported in the interior of M. This set can be equipped with the Hofer distance: d_H(L,L') = inf{‖φ‖ : φ∈Ham_c(M,ω), φ(L) = L'}, where ‖φ‖ denotes the usual Hofer norm of a Hamiltonian diffeomorphism, defined in Section <ref>. We know very little about this distance. An important question is whether it is unbounded or not. For instance when L_0 is the equator of the two-sphere, this has been an open question for more than thirty years, and a positive answer would imply that there exist Hamiltonian diffeomorphisms of the sphere arbitrarily far away (for the Hofer distance) from the set of autonomous Hamiltonians (<cit.>). It is also open when L_0 is a circle inside the unit disc, which is what motivated us to study unions of circles inside the disc. One can show that the Hofer distance is bounded when L_0 is the unit circle inside the plane ℝ^2 equipped with its standard symplectic form. Indeed, for any L'∈ℒ(L_0,ℝ^2,ω_std), we can find L”∈ℒ disjoint from both L_0 and L'. Then we get that d_H(L_0,L')≤ d_H(L_0,L”)+d_H(L”,L')≤ 2π, since it only costs the area of a disc to displace it to a disjoint one of the same area. In some other cases, the Hofer distance is unbounded: for instance Khanevsky proved it when L_0 is a diameter inside the unit disc in ℝ^2 <cit.>, a non-contractible circle inside the cylinder S^1×[0,1], or a non-displaceable contractible circle inside this same cylinder <cit.>. Khanevsky proved all those results using the same strategy that we will explain in Section <ref>. It relies on the existence of some quasimorphisms on the group of Hamiltonian diffeomorphisms, which in his case come from Entov and Polterovich's construction <cit.>. In 2013, Sobhan Seyfaddini generalised Khanevsky's first result to the case of the standard Lagrangian in a Euclidean ball of any even dimension <cit.>. As far as we know, this distance has never been proven to be bounded for any monotone Lagrangian submanifolds. In this paper, we consider the following case: let (𝔻,ω) denote the closed unit disc in ℝ^2 equipped with the standard symplectic form, normalised so that the total area of the disc is 1. Let Ham_c(𝔻,ω) be the group of Hamiltonian diffeomorphisms of (𝔻,ω) supported in the interior of 𝔻. Let k≥ 2, and L_0 be a disjoint union of k embedded smooth closed simple curves bounding discs of the same area A>1/(k+1). Then, d_H is unbounded on ℒ(L_0,𝔻,ω). The proof relies on the same strategy as Khanevsky, together with a result of Morabito. We explain those in the next section, then move on to the proof of the main result in Section <ref>. §.§ Acknowledgements The author would like to thank his PhD advisor Sobhan Seyfaddini for his support, as well as Patricia Dietzsch, Baptiste Serraille and Francesco Morabito for the insightful discussions about Khanevsky's results. 
§ PRELIMINARIES §.§ Hofer distance, Quasimorphisms and Khanevsky's argument We start by fixing notations and recalling the definition of the Hofer distance. Let (M,ω) be a symplectic manifold. A Hamiltonian on M is a smooth function H: S^1× M→ℝ. Any Hamiltonian induces a Hamiltonian vector field X_H_t uniquely defined by the equation ω(X_H_t,·)=-dH_t. The time 1 map of the flow ϕ^t_H of X_H_t is called the Hamiltonian diffeomorphism generated by H. We denote by Ham(M,ω) the set of all Hamiltonian diffeomorphisms of (M,ω); it is in fact a group. It can be equipped with the Hofer norm, defined by: ‖φ‖ = inf_{H: φ=ϕ^1_H} ∫_{S^1} (max_M H_t - min_M H_t) dt. Before stating Khanevsky's argument, we need to recall the definition of a quasimorphism. Let G be a group. A quasimorphism of G is a map q: G→ℝ satisfying: ∃ D>0, ∀ g,h ∈ G, |q(gh)-q(g)-q(h)| ≤ D. The infimum of all D such that this property is satisfied is called the defect of q. Moreover, a quasimorphism q is said to be homogeneous if for any g in G and integer n in ℤ, q(g^n)=nq(g). Given any quasimorphism, one can always homogenize it: Let q: G→ℝ be a quasimorphism. Define μ: G→ℝ by the formula μ(g) = lim_{n→∞} q(g^n)/n. Then, μ is a well-defined homogeneous quasimorphism. We are now ready to state Khanevsky's argument. Let L_0 be a Lagrangian submanifold of (M,ω). Denote by Ham_c(M,ω) the subgroup of Hamiltonian diffeomorphisms that are compactly supported in the interior of M, and let ℒ(L_0,M,ω) := {φ(L_0) : φ∈Ham_c(M,ω)}. It is equipped with the Hofer distance: d_H(L,L') = inf{‖φ‖ : φ∈Ham_c(M,ω), φ(L) = L'}. Denote by 𝒮(L_0,M,ω) the stabiliser of L_0 in Ham_c(M,ω), i.e. the subgroup consisting of diffeomorphisms φ such that φ(L_0) = L_0. Then, we have the following (<cit.>): Suppose there exists a non-vanishing homogeneous quasimorphism r on Ham_c(M,ω), Lipschitz with respect to the Hofer distance, and that vanishes on 𝒮(L_0,M,ω). Then, ℒ(L_0,M,ω) has infinite diameter for the Hofer distance. Let φ∈Ham_c(M,ω) be such that r(φ)≠ 0. Let D be the defect of r. For n a positive integer, let L_n:=φ^n(L_0). Then, d_H(L_0,L_n) = inf{‖ψ‖ : ψ(L_0)=L_n}. Let ψ∈Ham_c(M,ω) be such that ψ(L_0)=L_n=φ^n(L_0). Then, φ^-nψ(L_0)=L_0, i.e. φ^-nψ∈𝒮(L_0,M,ω). Therefore, r(φ^-nψ)=0 and: ‖ψ‖ ≥ |r(ψ)| (since r is Hofer-Lipschitz) ≥ |r(ψ)-r(φ^-nψ)| ≥ |r(φ^-n)| - |r(φ^-n)+r(ψ)-r(φ^-nψ)| ≥ n|r(φ)| - D (since r is a homogeneous quasimorphism of defect D). Taking the infimum over ψ, we get d_H(L_0,L_n)≥ n|r(φ)| - D. Since r(φ)≠ 0, it shows that ℒ(L_0,M,ω) has infinite diameter. §.§ Morabito's result Let (𝔻,ω) denote the closed unit disc in ℝ^2 equipped with the standard symplectic form, normalised so that the total area of the disc is 1. Let Ham_c(𝔻,ω) be the group of Hamiltonian diffeomorphisms of (𝔻,ω) supported in the interior of 𝔻. Let k≥ 2, and L_0 be a disjoint union of k embedded smooth closed simple curves (L_0^i)_1≤ i≤ k bounding discs of the same area A>1/(k+1). Let φ be a Hamiltonian diffeomorphism in 𝒮(L_0, 𝔻, ω). Then, there exists a permutation σ∈𝔖_k such that for 1≤ i ≤ k, φ(L_0^i)=L_0^σ(i). By choosing a Hamiltonian isotopy φ_t from the identity to φ, one can associate a braid with k strands to φ. This construction does not depend on the choice of isotopy, and therefore defines a map b from 𝒮(L_0, 𝔻, ω) to ℬ_k, the group of braids with k strands. For i=1,2, consider a symplectic embedding Φ_i of the disc (𝔻,ω) into a two-sphere 𝕊_i of area 1+s_i, where s_1,s_2 are two different points of the interval (0,(k+1)A-1]. Denote by L_0,i the image of L_0 by Φ_i. 
Then, L_0,i is an η_i-monotone Lagrangian link on 𝕊_i (in the sense of <cit.>), where η_i = ((k+1)A-1-s_i)/(2(k-1)). Therefore, the construction in <cit.> provides us with a well-defined spectral invariant c_L_0,i: Ham(𝕊_i)→ℝ, whose pullback by Φ_i descends to a Hofer-Lipschitz quasimorphism on Ham_c(𝔻,ω), that we still denote by c_L_0,i. Let μ_i be its homogenization. Then, Morabito proves the following in <cit.>: For any φ in 𝒮(L_0,𝔻,ω), we have: μ_1(φ)-μ_2(φ) = ((η_2-η_1)/(2k))·lk(b(φ)), where lk is the linking number of a braid. § PROOF OF THE MAIN RESULT Putting together Khanevsky's statement and Morabito's result, we can now give a proof of Theorem <ref>. We want to prove that d_H is unbounded on ℒ(L_0,𝔻,ω). By Khanevsky's statement (Proposition <ref>), we only need to construct a non-vanishing, homogeneous, Hofer-Lipschitz quasimorphism r: Ham_c(𝔻,ω)→ℝ which vanishes on 𝒮(L_0,𝔻,ω). For i=1,...,4, consider a symplectic embedding Φ_i of the disc (𝔻,ω) into a sphere 𝕊_i of area 1+s_i, where s_1,...,s_4 are four different points of the interval (0,(k+1)A-1]. Following the same construction as in Section <ref>, we end up with four Hofer-Lipschitz, homogeneous quasimorphisms μ_1,...,μ_4 on Ham_c(𝔻,ω). Then, by Morabito's Theorem <ref>, for i≠ j, and φ∈𝒮(L_0,𝔻,ω), we have: μ_i(φ)-μ_j(φ) = ((η_j-η_i)/(2k))·lk(b(φ)). Let r:=(η_4-η_3)(μ_1-μ_2)-(η_2-η_1)(μ_3-μ_4). Then, r is a homogeneous, Hofer-Lipschitz quasimorphism on Ham_c(𝔻,ω), which vanishes on 𝒮. It only remains to show that r is not identically zero. By <cit.>, the homogenized quasimorphisms defined using η-monotone links only depend on the number of components and the constant η. Therefore, r=(η_4-η_3)(μ_L'_1-μ_L'_2)-(η_2-η_1)(μ_L'_3-μ_L'_4), where, for i=1,...,4, L'_i is any choice of η_i-monotone link with k components in 𝕊_i. We now present such a choice. For a∈(0,1], let C_a⊂𝔻 be the circle of radius √(a) centered at the origin (so that it bounds a disc of area a). Then, we can complete the images of the two concentric circles C_A and C_{2A-2η_i} by the embedding Φ_i into an η_i-monotone link with k (nested) components L'_i. Since the C_{2A-2η_i}, i=1,...,4 are disjoint, we can construct a Hamiltonian H on 𝔻, equal to 1 on C_{2A-2η_1}, and supported in a small neighbourhood of it so that by Lagrangian control (<cit.>), r(φ_H)=(η_4-η_3)/k ≠ 0. The reason we need at least two circle components in our link to prove that d_H is unbounded is that we want to be able to embed the disc into spheres of different areas to get η-monotone links with different parameters η. If we have a single circle L_0 inside the disc bounding a disc of area A>1/2, then the only way for it to be monotone after embedding the disc into a sphere is to choose a sphere of area 2A, and therefore our strategy cannot produce a quasimorphism satisfying Khanevsky's conditions. Another idea could be to embed the disc into spheres 𝕊_i of area (i+1)A for different integers i≥ 2, then consider monotone links L_i consisting of the image of L_0 and (i-1) parallel circles such that the area of each connected component of the complement of L_i is equal to A. Then, the quasimorphisms μ_i := μ_L_i would be Hofer-Lipschitz homogeneous quasimorphisms on Ham_c(𝔻,ω), and one could easily build linear combinations that vanish on 𝒮(L_0,𝔻,ω). However, we conjecture that such a linear combination is always identically zero, and therefore does not produce a quasimorphism satisfying Khanevsky's conditions. This is related to <cit.> about whether some linear combinations of quasimorphisms defined on Ham(S^2) identically vanish or not.
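The key cancellation in the proof above can be checked symbolically. The short sympy snippet below (not from the paper) verifies that r vanishes on the stabiliser given the linking-number relation, and evaluates r on the test Hamiltonian under the assumption, suggested by the Lagrangian-control step above, that μ_1(φ_H)=1/k while μ_2, μ_3, μ_4 vanish on φ_H.

```python
import sympy as sp

# Monotonicity constants, number of link components, and the braid linking number.
eta1, eta2, eta3, eta4, k, lk = sp.symbols('eta1 eta2 eta3 eta4 k lk', positive=True)

# Morabito-type relation on the stabiliser: mu_i - mu_j = ((eta_j - eta_i)/(2k)) * lk(b(phi)).
d12 = (eta2 - eta1) / (2 * k) * lk   # mu_1(phi) - mu_2(phi)
d34 = (eta4 - eta3) / (2 * k) * lk   # mu_3(phi) - mu_4(phi)

r_on_stabiliser = (eta4 - eta3) * d12 - (eta2 - eta1) * d34
print(sp.simplify(r_on_stabiliser))  # 0: r vanishes on S(L_0, D, omega)

# Assumed values on the test Hamiltonian phi_H: mu_1 = 1/k, mu_2 = mu_3 = mu_4 = 0.
r_on_test = (eta4 - eta3) * (sp.Integer(1) / k - 0) - (eta2 - eta1) * (0 - 0)
print(sp.simplify(r_on_test))        # (eta4 - eta3)/k, nonzero since eta3 != eta4
```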
Let (M,ω) be a symplectic manifold, and L_0 be a Lagrangian submanifold. Consider the set ℒ(L_0,M,ω):= {φ(L_0),φ∈_c(M,ω)}, where _c(M,ω) denotes the group of Hamiltonian diffeomorphisms that are compactly supported in the interior of M. This set can be equipped with the Hofer distance: d_H(L,L')=inf{φ|φ∈_c(M,ω),φ(L) = L'} where φ denotes the usual Hofer norm of a Hamiltonian diffeomorphism, defined in Section <ref>. We know very little about this distance. An important question is whether it is unbounded or not. For instance when L_0 is the equator of the two-sphere, this has been an open question for more than thirty years, and a positive answer would imply that there exist Hamiltonian diffeomorphisms of the sphere arbitrarily far away (for the Hofer distance) from the set of autonomous Hamiltonians (<cit.>). It is also open when L_0 is a circle inside the unit disc, which is what motivated us to study unions of circle inside the disc. One can show that the Hofer distance is bounded when L_0 is the unit circle inside the plane ^2 equipped with its standard symplectic form. Indeed, for any L'∈ℒ(L_0,^2,ω_std), we can find L”∈ℒ disjoint from both L and L'. Then we get that d_H(L,L')≤ d_H(L,L”)+d_H(L”,L')≤ 2π, since it only costs the area of a disc to displace it to a disjoint one of the same area. In some other cases, the Hofer distance is unbounded: for instance Khanevsky proved it when L_0 is a diameter inside the unit disc in ^2 <cit.>, a non-contractible circle inside the cylinder S^1×[0,1], or a non-displaceable contractible circle inside this same cylinder <cit.>. Khanevsky proved all those results using the same strategy that we will explain in Section <ref>. It relies on the existence of some quasimorphisms on the group of Hamiltonian diffeomorphisms, which in his case come from Entov and Polterovich's construction <cit.>. In 2013, Sobhan Seyfaddini generalised Khanevsky's first result to the case of the standard Lagrangian in a Euclidian ball of any even dimension <cit.>. As far as we know, this distance has never been proven to be bounded for any monotone Lagrangian submanifolds. In this paper, we consider the following case: let (,ω) denote the closed unit disc in equipped with the standard symplectic form, normalised so that the total area of the disc is 1. Let _c(,ω) be the group of Hamiltonian diffeomorphisms of (,ω) supported in the interior of . Let k≥ 2, and L_0 be a disjoint union of k embedded smooth closed simple curves bounding discs of the same area A>1/k+1. Then, d_H is unbounded on ℒ(L_0,,ω). The proof relies on the same strategy as Khanevsky, together with a result of Morabito. We explain those in the next Section, then move on to the proof of the main result in Section <ref>. §.§ Acknowledgements The author would like to thank his PhD advisor Sobhan Seyfaddini for his support, as well as Patricia Dietzsch, Baptiste Serraille and Francesco Morabito for the insightful discussions about Khanevsky's results.
null
null
null
null
null
http://arxiv.org/abs/2409.17651v1
20240926085704
Atom graph, partial Boolean algebra and quantum contextuality
[ "Songyi Liu", "Yongjun Wang", "Baoshan Wang", "Jian Yan", "Heng Zhou" ]
quant-ph
[ "quant-ph" ]
Song-Yi Liu, Yong-Jun Wang, Bao-Shan Wang, Jian Yan, Heng Zhou — School of Mathematical Science, Beihang University, Beijing, 100191, China. Partial Boolean algebra underlies quantum logic and is an important tool for quantum contextuality. We propose the notion of atom graphs to reveal the graph structure of the partial Boolean algebras for quantum systems, by proving that (i) the partial Boolean algebras for quantum systems are determined by their atom graphs; (ii) the states on atom graphs can be extended uniquely to the partial Boolean algebras; and (iii) each exclusivity graph is an induced subgraph of an atom graph. (i) and (ii) show that quantum systems are uniquely determined by their atom graphs, which justifies the use of graphs as models of quantum experiments. (iii) establishes a connection between partial Boolean algebras and exclusivity graphs, and introduces a method to express exclusivity experiments more precisely. We also present a general and parametric description of the Kochen-Specker theorem based on graphs, which gives a type of non-contextuality inequality for KS contextuality. Atom graph, partial Boolean algebra and quantum contextuality (September 28, 2024) § DECLARATIONS Competing interests The authors have no relevant financial or non-financial interests to disclose. § INTRODUCTION Quantum theory provides potential capabilities for information processing. The investigation of fundamental features of quantum systems has become a significant issue. All the non-classical features of quantum systems, such as non-locality <cit.>, negativity <cit.> and Kochen-Specker contextuality <cit.>, can be subsumed under quantum contextuality, which is divided into state-dependent contextuality and state-independent contextuality <cit.>. It was shown that contextuality supplies a critical resource for quantum computation <cit.>. Partial Boolean algebra is a powerful tool for quantum contextuality: it was used by Kochen and Specker (1967) to examine the problem of hidden variables in quantum mechanics <cit.>, and has since been developed substantially as a logic of quantum mechanics <cit.>. A quantum system consists of a measurement scenario and a quantum state. The measurement scenario introduces contexts and the quantum state supplies super-classical probability distributions, which together give rise to contextuality. A measurement scenario forms a partial Boolean algebra, and the quantum states are described by the probability distributions. In this paper, the partial Boolean algebras are shown to be linked with the exclusivity graphs, which are utilized to depict quantum probabilities and non-contextuality inequalities (NC inequalities) <cit.>. We explore the features of partial Boolean algebras for quantum systems, and obtain the following results. Firstly, we propose atom graphs, and expose the graph structure of ACEpBas, that is, atomic and complete partial Boolean algebras satisfying the logical exclusivity principle (LEP). A quantum system forms an ACEpBa, and it follows that quantum systems are uniquely determined by graphs with probability distributions on them. Therefore, using graphs as models of quantum systems is justified. Secondly, we present a method to extend every exclusivity graph to an atom graph of an ACEpBa, which establishes a connection between partial Boolean algebras and exclusivity graphs. 
Finally, we introduce a general and parametric description of the Kochen-Specker theorem based on graphs, which gives a type of NC inequality for KS contextuality. In Section 2, the notions of partial Boolean algebra and ACEpBa are introduced. Section 3 defines atom graphs, and shows the graph structures of quantum systems with two theorems. In Section 4, it is proved that each finite, simple and undirected graph is an induced subgraph of the atom graph of an ACEpBa. In Section 5, the number of associated contexts of vertices is defined, and is applied to obtain a parametric description of KS contextuality. Finally, in Section 6, we summarize our work. § PARTIAL BOOLEAN ALGEBRA §.§ Basic concepts A partial Boolean algebra is a generalization of a Boolean algebra. Some concepts defined below are from <cit.>. Let B be a set with * a reflexive and symmetric binary relation ⊙, * a (total) unary operation ¬: B→ B, * two (partial) binary operations ∧, ∨: ⊙→ B, * elements 0, 1 ∈ B, satisfying that for every subset S⊆ B such that ∀ a, b∈ S, a⊙ b, there is another subset C⊆ B such that S⊆ C and C is a Boolean algebra determined by the above operations and elements. B is called a partial Boolean algebra, which can be written as (B, ⊙), or (B, ⊙; ∧, ∨, ¬, 0, 1) for details. Relation ⊙ is called the commeasurability relation, and elements a, b∈ B are commeasurable if a⊙ b. A partial Boolean algebra B can be seen as a family of overlapping Boolean algebras. More specifically, B is a colimit of its total subalgebras in the category of partial Boolean algebras <cit.>. For elements a,b∈ B, we write a≤ b to mean that a⊙ b and a∧ b=a. Let B be a partial Boolean algebra. An element a∈ B (a≠ 0) is called an atom of B if for each element x∈ B, x≤ a implies x=0 or x=a. B is said to be atomic if for each element x∈ B (x≠ 0), there is an atom a∈ B such that a≤ x. Let B be a partial Boolean algebra. B is said to be complete if for each subset S⊆ B whose elements are pairwise commeasurable, ⋁ S and ⋀ S exist. In practice, we are mainly concerned with finite systems. Each finite Boolean algebra is atomic and complete, so it is easy to see that each finite partial Boolean algebra is also atomic and complete. Two elements a, b∈ B are said to be exclusive, written a ⊥ b, if there exists an element c∈ B such that a≤ c and b≤¬c. A partial Boolean algebra (B,⊙) is said to satisfy the Logical Exclusivity Principle (LEP) if ⊥⊆⊙. An atomic and complete partial Boolean algebra which satisfies LEP is referred to as an ACEpBa. LEP indicates that exclusivity implies commeasurability, and was proposed by Abramsky and Barbosa (2020) to get closer to a quantum-realisable model <cit.>. Let B be a partial Boolean algebra. A Boolean subalgebra C⊆ B is called a maximal Boolean subalgebra of B if for each Boolean subalgebra D⊆ B, D⊇ C implies D=C. Let B be a partial Boolean algebra. A state on B is a map v: B→[0, 1] such that 1. v(0)=0; 2. v(¬x)=1-v(x); 3. for all x,y∈ B with x⊙ y, v(x∧y)+v(x∨y)=v(x)+v(y). A state is called a 0-1 state if its range is {0,1}. States are utilized to describe probability distributions of systems. A 0-1 state is a homomorphism from a partial Boolean algebra to {0,1}, that is, a truth-value assignment. §.§ Quantum system as an ACEpBa In this subsection, we show how to describe quantum systems using partial Boolean algebras. Quantum logic was proposed by Birkhoff and von Neumann (1936) to describe property deduction in quantum physics <cit.>. Quantum states are described by a Hilbert space ℋ. 
A proposition like Â∈Δ is depicted by a projector P̂ on ℋ, where  is a bounded self-adjoint operator on ℋ representing a physical quantity, and Δ is a Borel set of ℝ. Therefore, the properties in a quantum system compose the set of projectors 𝒫(ℋ). If P̂_1, P̂_2 are projectors onto closed linear subspaces S_1, S_2, then P̂_1∧P̂_2 is defined to be the projector onto S_1∩ S_2, and ¬P̂_1 is defined to be the projector onto S_1^⊥. Then P̂_1∨P̂_2=¬(¬P̂_1∧¬P̂_2). One can prove that 𝒫(ℋ) is an orthocomplemented modular lattice, called the property lattice or standard quantum logic. The property lattice 𝒫(ℋ) has several disadvantages, such as not satisfying the distributive law <cit.>. In the research of contextuality, partial Boolean algebras perform better than orthocomplemented modular lattices. Therefore, we let 𝒫(ℋ) be a partial Boolean algebra, which means that operations between non-commuting projectors are not allowed. In detail, all the projectors on ℋ constitute the set 𝒫(ℋ). Define the binary relation P̂_1⊙P̂_2 by P̂_1P̂_2=P̂_2P̂_1. P̂_1∧P̂_2 is defined to be P̂_1P̂_2, and only when P̂_1⊙P̂_2. The definition of ¬P̂_1 is unchanged. Then we have P̂_1∨P̂_2=¬(¬P̂_1∧¬P̂_2)=P̂_1+P̂_2-P̂_1P̂_2. Because pairwise commeasurable projectors generate a Boolean algebra, 𝒫(ℋ)=(𝒫(ℋ), ⊙; ∧, ∨, ¬, 0̂, 1̂) is a partial Boolean algebra, where 0̂ is the zero projector, and 1̂ is the projector onto ℋ. We do not need to consider all the physical quantities, that is, all bounded self-adjoint operators on ℋ; in that case, we would get a partial algebra rather than a partial Boolean algebra. Because each bounded self-adjoint operator has a spectral decomposition, and all propositions about physical quantities can be described by projectors, 𝒫(ℋ) is powerful enough for us. It is easy to see that 𝒫(ℋ) is atomic and complete. The atoms of 𝒫(ℋ) are exactly the 1-D projectors (here n-D denotes n-dimensional). And each finite quantum system, that is, each finite partial Boolean subalgebra of 𝒫(ℋ), is naturally atomic and complete. Consider four 1-D projectors on a qubit (2-D Hilbert space), P̂_0=|0⟩⟨0|, P̂_1=|1⟩⟨1|, P̂_+=|+⟩⟨+|, P̂_-=|-⟩⟨-|, which generate the partial Boolean algebra in Fig.<ref>. We can also draw “overlapped" partial Boolean algebras. Five 1-D projectors on a 3-D Hilbert space as in Fig.<ref> generate the partial Boolean algebra shown in Fig.<ref>. Every experiment of quantum physics chooses a finite partial subalgebra of 𝒫(ℋ) as its measurement scenario. The measurement scenario of the CHSH experiment for the Bell inequality is a partial Boolean algebra with 16 atoms <cit.> (4 physical quantities introduce 16 elementary events), and that of the KCBS experiment for an NC inequality is generated by 5 atoms <cit.>. If two projectors P̂_1, P̂_2 are exclusive, which means there is a projector P̂ such that P̂_1≤P̂ and P̂_2≤¬P̂, then P̂_1, P̂_2 are orthogonal, so they are commutative. Therefore, 𝒫(ℋ) satisfies LEP. To sum up, 𝒫(ℋ) is an ACEpBa. It is easy to see that each partial Boolean subalgebra of 𝒫(ℋ) is also an ACEpBa. It is not clear whether ACEpBas can perfectly depict quantum systems; some extra properties may need to be introduced for the axiomatization of quantum systems (such as <cit.>). § ATOM GRAPH In this section, we define atom graphs and prove several theorems which expose the graph structure of quantum systems. Unless otherwise specified, the graphs in this paper are simple and undirected. §.§ Graph Structure Theorem of ACEpBa If B is an atomic and complete Boolean algebra, then B is determined by its set of atoms; in other words, B is isomorphic to the algebra of the power set of its atoms.
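Before turning to the generalization of this fact, a quick numerical check of the qubit example from Section 2.2 may be helpful. The following minimal sketch (assuming only numpy) builds the four rank-1 projectors and tests the commeasurability relation P̂⊙Q̂, i.e. P̂Q̂=Q̂P̂, for every pair; only the pairs {P̂_0, P̂_1} and {P̂_+, P̂_-} commute, so the four atoms generate two Boolean blocks that share only the trivial elements 0̂ and 1̂, as described above.

import numpy as np
from itertools import combinations

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
ketp = np.array([1.0, 1.0]) / np.sqrt(2)
ketm = np.array([1.0, -1.0]) / np.sqrt(2)
projectors = {"P0": np.outer(ket0, ket0), "P1": np.outer(ket1, ket1),
              "P+": np.outer(ketp, ketp), "P-": np.outer(ketm, ketm)}

def commeasurable(A, B):
    # P ⊙ Q holds exactly when the projectors commute: PQ = QP
    return np.allclose(A @ B, B @ A)

for (na, A), (nb, B) in combinations(projectors.items(), 2):
    print(na, nb, commeasurable(A, B))
# Only (P0, P1) and (P+, P-) commute; all other pairs are non-commeasurable.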
We generalize the conclusion to ACEpBa. Let (B,⊙) be an atomic partial Boolean algebra and A(B) the set of all atoms of B. A graph G is said to be the atom graph of B, written AG(B), if its vertex set is A(B), and vertexes a_i,a_j∈ A(B) are adjacent iff a_i⊙ a_j. For example, the atom graph of the partial Boolean algebra in Fig.<ref> is shown in Fig.<ref>. Every atomic partial Boolean algebra has a unique atom graph, but conversely, it is possible that a graph is not the atom graph of any atomic partial Boolean algebra. Some such graphs are shown in Fig.<ref>. If Fig.<ref> (1) is the atom graph of some atomic partial Boolean algebra, then ¬ b=a=c, so a and c must be the same vertex. This is a contradiction, and Fig.<ref> (2) and (3) are similar. Which graphs are atom graphs is an interesting question, but we do not get into it here. Another problem is that a graph can be the atom graph of several different atomic partial Boolean algebras. In this case, the graph is not a reasonable model of the quantum system. Fortunately, for ACEpBas, the partial Boolean algebra with a specific atom graph is unique. Before proving that, let us define a concept. Let B be an atomic partial Boolean algebra. A maximal context of B is the set of all atoms of a maximal Boolean subalgebra of B. If A is a maximal context of B, then the elements in A are pairwise commeasurable and ⋁ A=1. A maximal context of B is exactly a maximal clique of AG(B) (a clique of a graph is a set of vertexes which are pairwise adjacent). In the research of contextuality, a “maximal context" is often defined to be a maximal Boolean subalgebra of B, or a set of commeasurable physical quantities. However, one will see that these definitions are essentially equivalent. Now we prove that the structure of an ACEpBa is uniquely determined by its atom graph. Let B_1 and B_2 be ACEpBas. B_1 and B_2 are isomorphic iff AG(B_1) and AG(B_2) are isomorphic. Isomorphic ACEpBas have isomorphic atom graphs, which is obvious from the relevant definitions. Conversely, if B_1 and B_2 are ACEpBas, and AG(B_1) is isomorphic to AG(B_2), we aim to prove that B_1 is isomorphic to B_2. For every element b∈ B_1, b may belong to different maximal Boolean subalgebras. Suppose b=⋁ A_1=⋁ A_3, where A_1⊂ C and A_3⊂ C' (C and C' are two maximal contexts of B_1). Set A_2=C-A_1 and A_4=C'-A_3, as in Fig.<ref>. We see that C=A_1∪ A_2 is a maximal context, and ⋁ A_1=b, so ⋁ A_2= ¬b. Similarly, ⋁ A_3=b and ⋁ A_4= ¬b. Therefore for all a_2∈ A_2, we have a_2≤ ¬b, and for all a_3∈ A_3, we have a_3≤ b, which imply a_2 ⊥ a_3. Since B_1 is an ACEpBa, which satisfies LEP, we see a_2⊙ a_3. Therefore A_2∪ A_3 is a maximal context. Similarly, A_4∪ A_1 is also a maximal context. Then we get that: b=⋁ A_1=⋁ A_3⇒ A_1∪ A_2, A_2∪ A_3, A_3∪ A_4 and A_4∪ A_1 are maximal contexts. On the other hand, for any two maximal contexts C and C' of B_1, we divide them by setting C=A_1∪ A_2 and C'=A_3∪ A_4, where A_1∩ A_2=A_3∩ A_4=∅. If A_1∪ A_2, A_2∪ A_3, A_3∪ A_4 and A_4∪ A_1 are all maximal contexts, then ⋁(A_1∪ A_2)=⋁(A_2∪ A_3)=1. Therefore ⋁ A_1=⋁ A_3=¬⋁ A_2. To sum up, if A_1 and A_3 are subsets of two maximal contexts C and C' of B_1 respectively, A_2=C-A_1 and A_4=C'-A_3, then ⋁ A_1=⋁ A_3 iff A_1∪ A_2, A_2∪ A_3, A_3∪ A_4 and A_4∪ A_1 are all maximal contexts. Equivalently, ⋁ A_1≠⋁ A_3 iff at least one of A_1∪ A_2, A_2∪ A_3, A_3∪ A_4 and A_4∪ A_1 is not a maximal context. (*) We define a map from B_1 to B_2 as follows. f:B_1→ B_2 b=⋁ A_1↦⋁ A_1 Here ⋁ A_1 is the disjunction of a collection of atoms.
The atoms of B_1 correspond to the vertexes of AG(B_1), and AG(B_1) is isomorphic to AG(B_2), so the atoms of B_1 and B_2 are in one-to-one correspondence. Therefore A_1 can also be seen as a set of atoms of B_2 for simplicity. f is well-defined, since if b=⋁ A_1=⋁ A_3, we have ⋁ A_1=⋁ A_3 in B_2. Otherwise, due to conclusion (*), there is a set which is a maximal context of B_1 but not a maximal context of B_2; thus AG(B_1) and AG(B_2) are not isomorphic, which leads to a contradiction. In other words, for all b∈ B_1, f(b) is unique, independent of the selection of maximal contexts. If b=⋁ A_1, b'=⋁ A'_1 and b≠ b', then f(b)=⋁ A_1, f(b')=⋁ A'_1 and f(b)≠ f(b') in B_2 due to conclusion (*), so f is injective. Next, we prove that f is a surjection: if b=⋁ A_1∈ B_2, then f(⋁ A_1)=b. To sum up, f is a bijection. Finally, it is easy to prove that f is a homomorphism between partial Boolean algebras, so f is an isomorphism between B_1 and B_2. We see that theorem <ref> exposes the graph structure of measurement scenarios of quantum systems. In 2020, Abramsky and Barbosa proposed a tool to extend the commeasurability relation of a partial Boolean algebra <cit.>, that is, B→ B[⊚] (⊚ is a binary relation on B). Theorem <ref> implies that, for an ACEpBa, extending the commeasurability relation is equivalent to adding edges to its atom graph. §.§ Extension Theorem of the states on atom graphs We have defined states on partial Boolean algebras. For graphs, we have the definition below. Let G be a simple and undirected graph. A state on G is a map v:V(G)→[0, 1] satisfying that for each maximal clique C of G, ∑_i∈ Cv(i)=1. A state is called a 0-1 state if its range is {0,1}. Abramsky and Barbosa pointed out that there is a one-to-one correspondence between the states on a finite Boolean algebra and the probability distributions on its atoms <cit.>. We generalize the conclusion to ACEpBa. Let B be an atomic partial Boolean algebra, v a state on B, and v' the restriction of v to AG(B). Then v' is a state on AG(B). For each maximal clique C of AG(B), C is a maximal context of B. We have ∑_i∈ Cv(i)=1 since v is a state on B. Therefore ∑_i∈ Cv'(i)=1, which implies that v' is a state on AG(B). Let B be an ACEpBa, and v' a state on AG(B). There is a unique state v on B whose restriction to AG(B) is v'. Define a map v:B→ [0, 1] as v(0)=0, and if b=⋁ A_1∈ B, where A_1 is a subset of a maximal context, then v(b)=∑_i∈ A_1v'(i). We prove that v is well-defined. Since v' is a state of AG(B), v(b)=∑_i∈ A_1v'(i)∈[0,1]. Set b=⋁ A_1=⋁ A_3. Using Fig.<ref> in the proof of theorem <ref>, we have that if ∑_i∈ A_1v'(i)=p, then ∑_i∈ A_2v'(i)=1-p. A_2∪ A_3 is a maximal context, so ∑_i∈ A_3v'(i)=p and ∑_i∈ A_4v'(i)=1-p. Thus ∑_i∈ A_1v'(i)=∑_i∈ A_3v'(i). Therefore, v(b) is independent of the selection of maximal contexts, so v is well-defined. Next, we prove that v is a state on B. Firstly, v(0)=0 holds. Then, if b=⋁ A_1, we have ¬b=⋁ A_2, so v(¬b)=∑_i∈ A_2v'(i)=1-∑_i∈ A_1v'(i)=1-v(b). Finally, if x,y∈ B, x⊙ y, then x, y are in the same maximal Boolean subalgebra. If x=⋁ A_x, y=⋁ A_y, we have v(x∨ y)+v(x∧ y)=∑_i∈ A_x∪ A_yv'(i)+∑_i∈ A_x∩ A_yv'(i)=∑_i∈ A_xv'(i)+∑_i∈ A_yv'(i)=v(x)+v(y). In summary, v is a state on B. The restriction of v to AG(B) is obviously v' from the definition. At last, we prove the uniqueness. If there is another state v_1 on B whose restriction to AG(B) is v', then for b=⋁ A_1∈ B, we have v_1(b)=∑_i∈ A_1v_1(i)=∑_i∈ A_1v'(i)=v(b) since v_1 is a state. Therefore v_1=v.
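To make the extension concrete, the following minimal sketch works with an assumed atom graph given by two maximal contexts {a,b,c} and {c,d,e} that share a single atom; the graph and the numbers are illustrative assumptions, not taken from the figures. The atom values form a state on the atom graph, and the extension assigns to each join of atoms inside one context the sum of their values, exactly as in the proof above.

# An assumed atom graph with two maximal contexts sharing one atom c
contexts = [("a", "b", "c"), ("c", "d", "e")]

# A candidate state on the atom graph: values on atoms, summing to 1 per context
v_atoms = {"a": 0.2, "b": 0.3, "c": 0.5, "d": 0.1, "e": 0.4}
assert all(abs(sum(v_atoms[x] for x in C) - 1.0) < 1e-12 for C in contexts)

def v(element):
    # Extend the state to an element given as a set of atoms inside one context
    element = set(element)
    assert any(element <= set(C) for C in contexts)
    return sum(v_atoms[x] for x in element)

b = {"a", "c"}              # b = a ∨ c inside the first context; ¬b = {b} there
print(v(b), 1 - v({"b"}))   # both 0.7: v(b) = 1 - v(¬b)
print(v({"c"}))             # 0.5: the shared atom gets one value in both contexts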
Theorems <ref> and <ref> show the one-to-one correspondence between states on an ACEpBa and states on its atom graph. Therefore, quantum states can be described by states on atom graphs. It is easy to see that the conclusion also holds for 0-1 states. For example, a state on the ACEpBa in Fig.<ref> corresponds to a state on its atom graph. They are shown in Fig.<ref>. Theorems <ref>, <ref> and <ref> expose the graph structure of ACEpBas, thus justifying the use of graphs as models of quantum systems. First, the measurement scenario of a quantum system is determined by its atom graph; then, the quantum states are determined by states on the atom graph. § EXTENSION OF GRAPHS TO ATOM GRAPHS This section explains how atom graphs are connected to exclusivity graphs. First, we introduce the concept of exclusivity graphs, an application of graph theory to quantum contextuality. §.§ Exclusivity graph Exclusivity graphs <cit.> are tools used to describe exclusive events. They are based upon the mathematical work of Lovász et al. <cit.>. Let G be an exclusivity graph, that is, a simple, connected, finite, and undirected graph. The vertexes of G are marked as 1, 2, ..., n. A vector x: V(G)→{0, 1} in {0,1}^n is said to be the incidence vector of the vertex set x^-1(1)⊆ V(G). The notation α(G;w) denotes the maximum weight of the independent sets of G, where a weight is a vector w: V(G)→ℝ^+. Thus α(G;1⃗) is the independence number of G, also written α(G). Let VP(G) (the vertex packing polytope) denote the convex hull of the incidence vectors of all independent sets of vertexes. VP(G) is employed in the calculation of α(G;w), because α(G;w) is the maximum of the linear function w^Tx for x∈ VP(G). Moreover, VP(G) consists of all of the “classical probabilities" from the perspective of exclusivity graphs. A well-known example is the KCBS experiment <cit.>, which includes five events P_0, P_1, P_2, P_3, P_4 such that P_i and P_i+1 (with the sum modulo 5) are exclusive, i.e. at most one of P_i and P_i+1 is true at the same time. The exclusivity relation of P_i (i=0, 1, 2, 3, 4) is shown in Fig.<ref>. If these five events are “classical events", that is, events in classical probability theory (sets or indicator functions), then for the graph G, VP(G) is the set of classical probability vectors of the five events. In that case, the sum ∑_i=0^4P(P_i) satisfies the inequality below. ∑_i=0^4P(P_i)≤α(G)=2 It is called the KCBS inequality, the earliest NC inequality <cit.>. However, in the quantum case, these five events are “quantum events", that is, projectors P̂_i in 𝒫(ℋ) for some ℋ. If the quantum state is ρ, then the probability of event P_i is ⟨P̂_i⟩=Tr(ρP̂_i). A notable interpretation of G in quantum systems was found by Cabello et al. <cit.>. It violates the KCBS inequality, ∑_i=0^4P(P_i)=∑_i=0^4 Tr(ρP̂_i)=√(5)>2 which provides evidence of quantum contextuality. An exclusivity graph is not necessarily an atom graph, but we can prove that every finite, simple, undirected graph is an induced subgraph of an atom graph, which leads to the connection between ACEpBas and exclusivity graphs. §.§ Faithful and linearly independent orthogonal co-representation To achieve our final conclusion, it is necessary to introduce the notion of orthogonal co-representation. Grötschel, Lovász, and Schrijver (1986) defined the Orthonormal Representation (OR) of a graph G, which can be seen as an interpretation of G in quantum systems <cit.>. Let G be a simple, undirected graph.
An OR of G is a map v:V(G)→ℝ^d (d∈ℤ^+) such that ∥ v(i)∥=1 (i∈ V(G)), and if i, j are not adjacent, then v(i) ⊥ v(j). Let v_i denote the vector v(i) in the following. Notice that if i,j are not adjacent then the corresponding vectors are orthogonal. To interpret adjacency as orthogonality, we should consider the OR of G̅ (the complement of G). This definition requires neither that different vertexes are mapped to different vectors nor that adjacent vertexes are mapped to non-orthogonal vectors. For instance, if G is the pentagon in Fig.<ref>, all the graphs in Fig.<ref> can be the orthogonality graphs of ORs of G̅. Such a definition is unsatisfactory for the research of quantum contextuality. Thus, Abramsky and Brandenburger proposed the faithful orthogonal co-representation <cit.>. Let G be a simple, undirected graph, and v an OR of G̅. v is said to be a faithful orthogonal co-representation of G if v is injective and i,j are adjacent iff v_i ⊥ v_j. This definition ensures that a graph corresponds to a unique orthogonality graph. Furthermore, we define another notion. Let G be a simple, undirected graph, and v an OR of G̅. v is said to be a linearly independent orthogonal co-representation of G if the vector set {v_i:i∈ V(G)} is linearly independent. For a graph G, it is important to know whether it has an orthogonal co-representation. A result on this question was mentioned in <cit.>, but it is not strong enough for us. Here, we need to prove another result. Each simple, finite, and undirected graph G with n vertexes has a faithful and linearly independent orthogonal co-representation in ℝ^n. We use mathematical induction. When n=1, G has a faithful and linearly independent orthogonal co-representation in ℝ, that is, {v_1}. Then we assume the result for general n-1, and show that it holds for n. For a graph G with n vertexes, its induced subgraph on the first n-1 vertexes has a faithful and linearly independent orthogonal co-representation {v_1,v_2,...,v_n-1}. These vectors span an (n-1)-D subspace of ℝ^n. The problem is thus reduced to proving that there is a vector v_n∈ℝ^n such that ∥ v_n∥=1, v_n ⊥ v_i iff n,i are adjacent (i=1,2,...,n-1), and {v_1,...,v_n-1,v_n} is linearly independent. The subspace Span(v_1,...,v_n-1)^⊥ is one-dimensional. Let e_n be a unit vector in it; then {v_1,...,v_n-1,e_n} is a basis of ℝ^n. Suppose v'_n=x_1v_1+...+x_n-1v_n-1+e_n. We need v'_n to satisfy: 1. x_1(v_i,v_1)+...+x_n-1(v_i,v_n-1)+0=0, i.e. v'_n ⊥ v_i, iff n, i are adjacent; 2. x_1(v_i,v_1)+...+x_n-1(v_i,v_n-1)+0≠0 iff n,i are not adjacent, where (,) is the notation for the inner product on ℝ^n. If n is adjacent to all of i=1,...,n-1, we have x_1(v_1,v_1)+x_2(v_1,v_2)+...+x_n-1(v_1,v_n-1)=0 x_1(v_2,v_1)+x_2(v_2,v_2)+...+x_n-1(v_2,v_n-1)=0 ...... x_1(v_n-1,v_1)+x_2(v_n-1,v_2)+...+x_n-1(v_n-1,v_n-1)=0. Because {v_1,...,v_n-1} is linearly independent, ((v_i,v_j)) is a Gram matrix with rank n-1. Thus the equation system has a unique solution x⃗=0, which gives v'_n=e_n. If n is not adjacent to some vertexes, then we replace the corresponding equations with inequalities in the system. It can be proved that the new system still has solutions. If the system has m equalities, then the subsystem made up of them has an (n-1-m)-D solution space S. We label the remaining n-1-m inequalities as 1,2,...,n-1-m; regarded as equalities, they respectively have (n-2)-D solution spaces S_1,S_2,...,S_n-1-m.
We have S' ={x⃗:x⃗∈ S and x⃗∉ S_1,...,S_n-1-m} =S∩ S_1^c∩...∩ S_n-1-m^c =S∩(S^c∪ S_1^c)∩...∩(S^c∪ S_n-1-m^c) =S∩((S∩ S_1)^c∩...∩(S∩ S_n-1-m)^c) =S∩((S∩ S_1)∪...∪(S∩ S_n-1-m))^c =S-((S∩ S_1)∪...∪(S∩ S_n-1-m)), where X^c denotes the complement of a set X. The S∩ S_i are all (n-m-2)-D subspaces of the (n-m-1)-D space S, that is, hyperplanes. Since the union of finitely many hyperplanes is properly contained in the whole space, S' is not empty. Thus the new system has solutions. Therefore, there exists a vector v'_n=x_1v_1+...+x_n-1v_n-1+e_n such that v'_n ⊥ v_i iff n,i are adjacent. Since e_n has coefficient 1 in v'_n, {v_1,...,v_n-1,v'_n} is linearly independent. Finally, let v_n=v'_n/∥ v'_n∥. Then v_n is the desired vector, so G has a faithful and linearly independent orthogonal co-representation in ℝ^n, and the induction goes through. Due to theorem <ref>, the pentagon in Fig.<ref> has a faithful and linearly independent orthogonal co-representation in ℝ^5, which differs from the one found by Cabello et al. in ℝ^3 <cit.>. Both of them can be used to investigate the probabilities of exclusive events, and one will see that the linear independence has special benefits. §.§ Higher dimensional context extension With theorem <ref>, we show how to extend a graph to an atom graph. It is easy to see that there is a one-to-one correspondence between a simple, finite, and undirected graph G and the set of its maximal cliques. Therefore we can define the following. Let G be a simple, finite, and undirected graph whose maximal cliques are C_1,C_2,...,C_N. Let x_1,...,x_N be N new vertexes not in G. The higher-dimensional context extension of G, denoted by G^e, is the graph whose maximal cliques are C_1∪{x_1},C_2∪{x_2},...,C_N∪{x_N}. We call the size of the maximum clique of G the dimension of G. G^e is obtained by adding one new vertex to every maximal context of G. The dimension of G^e is higher than that of G. It is a tool to study G in higher dimensions. For example, the G^e of the pentagon G in Fig.<ref> is the five-pointed star shown in Fig.<ref>. For each G, G^e has a 0-1 state: one only needs to map the vertexes of G to 0 and the new vertexes to 1. A generalized probability distribution on G is a map v on V(G) which satisfies v(i)≥ 0 (i∈ V(G)) and ∑_i∈ Cv(i)≤ 1 for each maximal clique C of G. It is easy to see the following. Let G be a simple, finite, and undirected graph. Then there is a one-to-one correspondence between generalized probability distributions on G and states on G^e. For each state on G^e, its restriction to G is a generalized probability distribution on G by definition. For each generalized probability distribution v on G, if the maximal cliques of G^e are C_1∪{x_1},C_2∪{x_2},...,C_N∪{x_N}, then we define a state v' on G^e by v'(i)=v(i) (i∈ V(G)) and v'(x_k)=1-∑_i∈ C_kv(i). It is easy to see that v' is the unique state on G^e such that v'(i)=v(i) (i∈ V(G)). Now we prove the central theorem of this section. If G is a simple, finite, and undirected graph, then G^e is the atom graph of an ACEpBa. Suppose |V(G)|=n. Applying theorem <ref>, G has a faithful and linearly independent orthogonal co-representation {v_1,...,v_n} in ℝ^n, which corresponds to a projector set {P̂_1,...,P̂_n}, where P̂_i projects ℝ^n onto Span(v_i). If G has N maximal cliques C_1,...,C_N, we set P_k=¬(⋁_i∈ C_kP̂_i) (k=1,...,N). P_k is the projector onto Span({v_i:i∈ C_k})^⊥. It will be proved that {P̂_1,...,P̂_n}∪{P_1,...,P_N} is the set of atoms of an ACEpBa whose atom graph is exactly G^e. First, we prove that every P_k is commeasurable with P̂_i (i∈ C_k), but not commeasurable with P̂_j (j∉ C_k).
If P_k is commeasurable with some P̂_j (j∉ C_k), that is, P_kP̂_j=P̂_jP_k, then we have P_kP̂_j=P̂_j or P_kP̂_j=0̂. If P_kP̂_j=P̂_j, then v_j∈ Span({v_i:i∈ C_k})^⊥, so v_j ⊥ v_i for all i∈ C_k. Thus C_k∪{j} is a clique, which contradicts the maximality of C_k. If P_kP̂_j=0̂, then v_j ⊥ Span({v_i:i∈ C_k})^⊥, so v_j∈ Span({v_i:i∈ C_k}), which contradicts the linear independence of {v_1,...,v_n}. Next, we prove that the projectors P_k (k=1,...,N) are pairwise non-commeasurable. This is trivial for N=1. If N>1 and P_k_1, P_k_2 (k_1≠ k_2) are commeasurable, we have P_k_1=P'∨ P, P_k_2=P”∨ P with P'P”=P'P=P”P=0̂, where P=P_k_1P_k_2. For the maximal cliques C_k_1 and C_k_2, set A_1=C_k_1-C_k_2, A_2=C_k_1∩ C_k_2 and A_3=C_k_2-C_k_1. A_1 and A_3 are non-empty. Let P_A_t=⋁_i∈ A_tP̂_i (t=1,2,3), which gives that P'∨ P∨ P_A_1∨ P_A_2=P_k_1∨⋁_i∈ C_k_1P̂_i=1̂, and P”∨ P∨ P_A_2∨ P_A_3=P_k_2∨⋁_i∈ C_k_2P̂_i=1̂, as shown in Fig.<ref>. P', P” and P are pairwise orthogonal, so they generate a Boolean algebra. Thus P_A_1∨ P_A_2=¬(P'∨ P) and P_A_2∨ P_A_3=¬(P”∨ P) are in the same Boolean algebra, i.e. P_A_1∨ P_A_2 and P_A_2∨ P_A_3 are commeasurable. Then there exist two projectors P'_A_1 and P'_A_3 such that P'_A_1P'_A_3=0̂, P_A_1=P'_A_1∨ P_A_1P_A_3 and P_A_3=P'_A_3∨ P_A_1P_A_3. Thus P_A_1 and P_A_3 are commeasurable. Because {v_i:i∈ A_1∪ A_3} is linearly independent, if x=∑_i∈ A_1a_iv_i=∑_j∈ A_3b_jv_j, then ∑_i∈ A_1a_iv_i-∑_j∈ A_3b_jv_j=0. Thus all a_i,b_j=0, i.e. x=0⃗, which means that P_A_1P_A_3=0̂; in other words, Span({v_i:i∈ A_1}) and Span({v_i:i∈ A_3}) are orthogonal. However, this implies that the vectors in {v_i:i∈ A_1} and {v_i:i∈ A_3} are pairwise orthogonal. Since v is faithful, C_k_1∪ C_k_2 is a clique. This forces C_k_1=C_k_2, which contradicts k_1≠ k_2. To sum up, P_k is only commeasurable with P̂_i (i∈ C_k). Let B be the partial Boolean algebra generated by the set A={P̂_1,...,P̂_n}∪{P_1,...,P_N}. For each i∈{1,...,n}, P̂_i is obviously an atom. For each k∈{1,...,N}, because P_k is only commeasurable with P̂_i (i∈ C_k), and P_kP̂_i=0̂ (i∈ C_k), the Boolean algebra generated by {P_k}∪{P̂_i:i∈ C_k} is isomorphic to the Boolean algebra with 2^|C_k|+1 elements. Then we see that P_k is an atom, and A contains all the atoms of B. B is a partial subalgebra of projectors, so it is an ACEpBa, and the atom graph of B is isomorphic to G^e. With theorem <ref>, we immediately get that each simple, finite, and undirected graph is an induced subgraph of the atom graph of an ACEpBa. For instance, the pentagon G in Fig.<ref> is an induced subgraph of the atom graph in Fig.<ref>. G^e supplies all the elementary events omitted by G. Another approach to obtaining all the elementary events was used by Cabello et al. <cit.>, which makes the pentagon a subgraph of the graph in Fig.<ref>. However, because of the exclusivity relation of the P_i (i=0,1,2,3,4), the vertexes 1,1|i,i+1 in Fig.<ref> are all impossible events and should be deleted. The events 1,0|i,i+1 and 0,1|i-1,i are the same, so they should be merged in pairs. Therefore, Fig.<ref> does not give a correct expression of the KCBS experiment. After simplification, Fig.<ref> is reduced to Fig.<ref>, which presents all the elementary events precisely. The higher-dimensional context extension of a graph G is one way to extend G. Another method is the equal-dimensional context extension, which adds one vertex to every maximal clique of G except the maximum cliques.
However, different from G^e, the equal-dimensional context extension may not be an atom graph, which means that G may have no interpretation in an equal-dimensional quantum system. We have now connected ACEpBas (via their atom graphs) and exclusivity graphs. The next section moves on to consider KS contextuality. § KS CONTEXTUALITY The KS theorem is the earliest description of the contextuality of quantum systems <cit.>. It can be stated in terms of partial Boolean algebras <cit.>: if ℋ is a Hilbert space and dim(ℋ)≥ 3, then there is no homomorphism from 𝒫(ℋ) to {0,1} (Kochen and Specker, 1967). In other words, when dim(ℋ)≥ 3, it is impossible to assign truth-values to all properties in quantum systems simultaneously, which leads to the impossibility of assigning values to all physical quantities simultaneously. The property “there is no homomorphism to {0,1}" is called KS contextuality. With theorems <ref> and <ref>, we have the following: if the measurement scenario of a quantum system is G (G an atom graph), then it has KS contextuality iff there is no 0-1 state on G. Two important KS-proofs with graphs were given by Kochen and Specker <cit.>, and Cabello et al. <cit.>. The KS graph has 117 vertexes, and Cabello's graph has 18 vertexes. If the existence of 0-1 states on a graph leads to a contradiction, we say it introduces a KS-proof. The remaining part of this section offers a general and parametric expression of KS contextuality, which introduces a type of NC inequality. Let G be a finite, simple and undirected graph, and i a vertex of G. The number of associated contexts of i, written c_G(i), is defined to be the number of maximal cliques to which i belongs. We say that i is associated to C if i∈ C. Let G be a finite, simple and undirected graph, and I an independent set of G. The value ∑_i∈ Ic_G(i) is called a number of independently associated contexts of G. Therefore, α(G;c_G) is the greatest number of independently associated contexts of G. In fact, what we have defined is a special weight c_G. If G is a finite, simple and undirected graph, and v is a state on G, we let S(v,c_G) denote ∑_i∈ V(G)c_G(i)v(i), and c(G) denote the total number of maximal cliques of G. It is straightforward to show that S(v,c_G)=c(G) by S(v,c_G)=∑_k=1^N∑_i∈ C_kv(i)=∑_k=1^N1=N, where C_k is the k-th maximal clique of G and N=c(G). The identity S(v,c_G)=c(G) is an important equation for states, and it is also a generalization of the equation used for the KS-proof in <cit.>. We prove proposition <ref> as a lemma. If G is a finite, simple and undirected graph, then α(G;c_G)≤ c(G). For any independent set I of G, two vertexes i, j in I cannot be associated to the same maximal clique. Otherwise, if i, j∈ C, then i, j are adjacent since C is a clique, which contradicts I being an independent set. Therefore, distinct vertexes in I are associated to distinct maximal cliques. Thus ∑_i∈ Ic_G(i)≤ c(G), and α(G;c_G)≤ c(G). Next, we give a description of KS contextuality using graph parameters. If G is a finite, simple and undirected graph, then the statements below are equivalent: 1. α(G;c_G)=c(G). 2. There exists a 0-1 state on G. 3. There exists a 0-1 state v on G s.t. S(v,c_G)=α(G;c_G). 4. There exists a state v on G s.t. S(v,c_G)=α(G;c_G). 1⇒ 2: Since α(G;c_G)=c(G), there is an independent set I satisfying ∑_i∈ Ic_G(i)=c(G). Thus the vertexes in I are associated to all the maximal cliques of G, each exactly once. We define a map v:V(G)→{0,1} by v(i)=1 (i∈ I) and v(i)=0 (i∉ I). Then v is a 0-1 state on G.
2⇒ 3: If v is a 0-1 state on G, then the set I={i∈ V(G) | v(i)=1} is an independent set. Since for every maximal clique C, ∑_i∈ Cv(i)=1, there is exactly one vertex i∈ C with v(i)=1. Thus the vertexes in I are associated to all the maximal cliques. Therefore, ∑_i∈ Ic_G(i)=c(G)≤α(G;c_G). Applying proposition <ref>, we have c(G)=α(G;c_G). Therefore S(v,c_G)=α(G;c_G). 3⇒ 4: Follows from the relevant definitions. 4⇒ 1: This follows immediately from the equation S(v,c_G)=c(G). Notice that theorem <ref> holds for the atom graph of any finite ACEpBa. Therefore, applying proposition <ref>, theorem <ref> states that KS contextuality can be characterized by α(G;c_G)<c(G), which supplies a parametric method, and an NC inequality, to determine whether the system has KS contextuality. A similar result was obtained with sheaf theory by Abramsky and Brandenburger <cit.>. However, their expression is not parametric, in contrast with ours, and it does not make the graph structure of quantum systems explicit. § CONCLUSION We exposed the graph structure of quantum systems through theorems <ref>, <ref> and <ref> for ACEpBas, which justifies the use of graphs for quantum systems. ACEpBas, with the atom graphs we defined, can be used to describe quantum systems and to develop theories of quantum contextuality. As an instance, a general and parametric description of KS contextuality was presented in theorem <ref>. We also established the connection between ACEpBas and exclusivity graphs through theorems <ref> and <ref>, which introduces a method to express exclusivity experiments more precisely. The higher-dimensional (or equal-dimensional) context extension can serve as a tool to investigate the features of quantum experiments. Acknowledgments The work was supported by the National Natural Science Foundation of China (11871083).
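As a small computational companion to Sections 4 and 5, the following sketch evaluates the quantities used above on the pentagon (the KCBS exclusivity graph): the independence number α(G), the weight c_G, the bound α(G;c_G)≤ c(G), and the quantum KCBS value. The vector parametrization is one standard construction achieving the √(5) value; it is assumed here purely for illustration and is not claimed to be the construction of the cited works.

import itertools
import numpy as np

# Pentagon: its maximal cliques are the 5 edges
cliques = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
vertices = range(5)
edges = set(frozenset(c) for c in cliques)

def is_independent(subset):
    return all(frozenset(p) not in edges for p in itertools.combinations(subset, 2))

# c_G(i): number of maximal cliques containing vertex i
c_G = {i: sum(i in c for c in cliques) for i in vertices}

independent_sets = [s for r in range(6) for s in itertools.combinations(vertices, r)
                    if is_independent(s)]
alpha = max(len(s) for s in independent_sets)                           # alpha(G) = 2
alpha_weighted = max(sum(c_G[i] for i in s) for s in independent_sets)  # alpha(G;c_G) = 4
print(alpha, alpha_weighted, len(cliques))  # 2 4 5: alpha(G;c_G) < c(G), so no 0-1 state

# One standard KCBS vector construction in R^3 (assumed for illustration)
cos2 = np.cos(np.pi / 5) / (1 + np.cos(np.pi / 5))   # cos^2(theta) = 1/sqrt(5)
theta = np.arccos(np.sqrt(cos2))
ell = np.array([[np.sin(theta) * np.cos(4 * np.pi * i / 5),
                 np.sin(theta) * np.sin(4 * np.pi * i / 5),
                 np.cos(theta)] for i in range(5)])
# Consecutive vectors are orthogonal, so adjacent events are exclusive
assert all(abs(ell[i] @ ell[(i + 1) % 5]) < 1e-9 for i in range(5))
psi = np.array([0.0, 0.0, 1.0])
kcbs_value = sum(float(psi @ ell[i]) ** 2 for i in range(5))
print(kcbs_value)   # about 2.236 = sqrt(5) > alpha(G) = 2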
Quantum theory provides potential capabilities for information processing. The investigation of fundamental features of quantum systems has become a significant issue. All the non-classical features of quantum systems, such as non-locality <cit.>, negativity <cit.> and Kochen-Specker contextuality <cit.>, can be generalized by quantum contextuality, which is divided into state-dependent contextuality and state-independent contextuality <cit.>. It was shown that contextuality supplies a critical resource for quantum computation <cit.>. Partial Boolean algebra is a powerful tool for quantum contextuality, which was used by Kochen and Specker (1967) to examine the problem of hidden variables in quantum mechanics <cit.>, and has achieved great development for logic of quantum mechanics <cit.>. A quantum system consists of a measurement scenario and a quantum state. The measurement scenario introduces contexts and the quantum state supplies super-classical probability distributions, which cause the contextuality together. A measurement scenario forms a partial Boolean algebra, and the quantum states are described by the probability distributions. In this paper, the partial Boolean algebras are shown to be linked with the exclusivity graphs, which are utilized to depict quantum probabilities and non-contextuality inequalities (NC inequalities) <cit.>. We explore the features of partial Boolean algebras for quantum systems, and get some results. Firstly, we propose the atom graphs, and expose the graph structures of ACEpBa, that is, atomic and complete partial Boolean algebra satisfying logical exclusivity principle (LEP). A quantum system forms an ACEpBa, which causes that the quantum systems are uniquely determined by graphs with probability distributions on them. Therefore, the utilization of graphs to be the models of quantum systems is proved reasonable. Secondly, we present a method to extend every exclusivity graph to an atom graph of ACEpBa, which establishes a connection between partial Boolean algebra and exclusivity graphs. Finally, we introduce a general and parametric description for Kochen-Specker theorem based on graphs, which gives a type of NC inequality for KS contextuality. In the next section 2, the notions partial Boolean algebra and ACEpBa are introduced. Section 3 defines atom graphs, and shows the graph structures of quantum systems with two theorems. In Section 4, it is proved that each finite, simple and undirected graph is induced subgraph of the atom graph of an ACEpBa. In Section 5, the number of associated contexts of vertexes is defined, and is applied to obtain a parametric description of KS contextuality. Finally, in Section 6, we summarize our work.
null
null
null
null
We exposed the graph structure of quantum systems by theorems <ref> , <ref> and <ref> for ACEpBa, which ensures that the utilization of graphs for quantum systems is reasonable. ACEpBa, with the atom graphs we defined, can be used to describe the quantum systems and develop the theories for quantum contextuality. As an instance, a general and parametric description of KS contextuality was presented by theorem <ref>. In the rest of this paper, we establish the connection between ACEpBa and exclusivity graphs by theorems <ref> and <ref>, which introduces a method to express the exclusivity experiments more precisely. The higher-dimensional (or equal-dimensional) context extension can be tools to investigate the features of quantum experiments. Acknowledgments The work was supported by the National Natural Science Foundation of China (11871083).
http://arxiv.org/abs/2409.17763v1
20240926115841
Confidence intervals uncovered: Are we ready for real-world medical imaging AI?
[ "Evangelia Christodoulou", "Annika Reinke", "Rola Houhou", "Piotr Kalinowski", "Selen Erkan", "Carole H. Sudre", "Ninon Burgos", "Sofiène Boutaj", "Sophie Loizillon", "Maëlys Solal", "Nicola Rieke", "Veronika Cheplygina", "Michela Antonelli", "Leon D. Mayer", "Minu D. Tizabi", "M. Jorge Cardoso", "Amber Simpson", "Paul F. Jäger", "Annette Kopp-Schneider", "Gaël Varoquaux", "Olivier Colliot", "Lena Maier-Hein" ]
cs.CV
[ "cs.CV", "cs.AI", "cs.LG" ]
Confidence intervals uncovered [1]Shared first/last authors: E. Christodoulou and A. Reinke/L. Maier-Hein, O. Colliot, and G. Varoquaux E. Christodoulou et al. German Cancer Research Center (DKFZ) Heidelberg, Div. Intelligent Medical Systems, Germany [email protected] AI Health Innovation Cluster, Germany National Center for Tumor Diseases (NCT), NCT Heidelberg, a partnership between DKFZ and Heidelberg University Hospital, Germany DKFZ Heidelberg, Helmholtz Imaging, Germany HIDSS4Health - Helmholtz Information and Data Science School for Health, Germany DKFZ Heidelberg, Interactive Machine Learning Group, Germany MRC Unit for Lifelong Health and Ageing at UCL and Centre for Medical Image Computing, Department of Computer Science, University College London, UK School of Biomedical Engineering and Imaging Science, King’s College London, UK Sorbonne Université, Institut du Cerveau - Paris Brain Institute - ICM, CNRS, Inria, Inserm, AP-HP, Hôpital de la Pitié-Salpêtrière, France NVIDIA, Germany Department of Computer Science, IT University of Copenhagen, Denmark Centre for Medical Image Computing, University College London, UK School of Computing, Queen’s University, Canada Department of Biomedical and Molecular Sciences, Queen’s University, Canada Division of Biostatistics, DKFZ, Germany Parietal project team, INRIA Saclay-Île de France, France Faculty of Mathematics and Computer Science, Heidelberg University, Germany Medical Faculty, Heidelberg University, Germany Confidence intervals uncovered: Are we ready for real-world medical imaging AI? Evangelia Christodoulou1,2,3,⋆ Annika Reinke1,4,⋆ Rola Houhou1,3 Piotr Kalinowski1,3,5 Selen Erkan6 Carole H. Sudre7,8 Ninon Burgos9 Sofiène Boutaj9 Sophie Loizillon9 Maëlys Solal9 Nicola Rieke10 Veronika Cheplygina11 Michela Antonelli8,12 Leon D. Mayer1,3 Minu D. Tizabi1,3 M. Jorge Cardoso8 Amber Simpson13,14 Paul F. Jäger4,6 Annette Kopp-Schneider15Gaël Varoquaux16,⋆ Olivier Colliot9,⋆ Lena Maier-Hein1,3,4,17,18,⋆ September 12th, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Medical imaging is spearheading the AI transformation of healthcare. Performance reporting is key to determine which methods should be translated into clinical practice. Frequently, broad conclusions are simply derived from mean performance values. In this paper, we argue that this common practice is often a misleading simplification as it ignores performance variability. Our contribution is threefold. (1) Analyzing all MICCAI segmentation papers (n = 221) published in 2023, we first observe that more than 50% of papers do not assess performance variability at all. Moreover, only one (0.5%) paper reported confidence intervals (CIs) for model performance. (2) To address the reporting bottleneck, we show that the unreported standard deviation (SD) in segmentation papers can be approximated by a second-order polynomial function of the mean Dice similarity coefficient (DSC). Based on external validation data from 56 previous MICCAI challenges, we demonstrate that this approximation can accurately reconstruct the CI of a method using information provided in publications. 
(3) Finally, we reconstructed 95% CIs around the mean DSC of MICCAI 2023 segmentation papers. The median CI width was 0.03, which is three times larger than the median performance gap between the first- and second-ranked methods. For more than 60% of papers, the mean performance of the second-ranked method was within the CI of the first-ranked method. We conclude that current publications typically do not provide sufficient evidence to support which models could potentially be translated into clinical practice. § INTRODUCTION As demonstrated by the fact that more than 530 of the first 692 AI in healthcare products approved by the U.S. Food and Drug Administration (FDA) fall within the application domain of medical imaging <cit.>, medical imaging is spearheading the AI-powered transformation of healthcare. Performance reporting and comparisons of medical imaging models are key to determining their potential for clinical translation. Clinical translation requires approval by regulatory agencies such as the U.S. FDA, whose recommendations insist on the importance of characterizing variability and reporting confidence intervals (CIs); see, for instance, <cit.>. A recent paper <cit.> written by FDA staff describes regulatory science principles on performance assessment of AI algorithms in imaging and emphasizes that "The statistical analysis plays a critical role in the assessment of machine learning (ML) performance but may be under-appreciated by many ML developers". Current practice in reporting results (including that of the authors!) often does not fulfill these requirements and thus hardly lends itself to determining whether a medical imaging model is suited for clinical translation. The underlying question is: Can we really trust performance claims made in publications? The purpose of this paper was to address this important question: * Based on a comprehensive analysis of all MICCAI 2023 segmentation papers, we show that performance variability is rarely accounted for in the medical image analysis community. * To demonstrate the implications of the reporting bottleneck, we propose a work-around to approximate variability parameters from the information provided in publications. Specifically, we show that the unreported standard deviation (SD) in segmentation papers can be approximated using a second-order polynomial function of the mean Dice similarity coefficient (DSC). * Using our proposed approximation method, we reconstruct CIs for the published MICCAI papers and provide evidence that the praise proposed methods receive is often not supported by sufficient evidence. § METHODS Assessment of AI model performance variability is crucial as it directly impacts the model's reliability in clinical practice. While variability reporting guidelines, in particular regarding the inclusion of CIs, are available in the clinical prediction modeling domain <cit.>, such practices are still unfamiliar in the medical imaging domain. In this paper, we focus on two statistical concepts capturing performance variability: SD is a measure of the dispersion or spread of data points from the mean value. For example, given a set of performance metric values (e.g., DSC values of a model on multiple images), the SD states how much these values vary from the average performance. A small SD indicates that the values are close to the mean, while a large one suggests that the values are more dispersed.
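As a minimal illustration of this quantity, the sketch below computes the DSC of two binary masks and then the mean and sample SD of per-image DSC values; the toy masks and values are assumed placeholders, not data from the analyzed papers.

import numpy as np

def dice(pred, gt):
    # Dice similarity coefficient of two binary masks
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

a = np.zeros((4, 4), dtype=bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool); b[1:3, 1:4] = True
print(dice(a, b))   # 2*4 / (4+6) = 0.8

rng = np.random.default_rng(0)
# assumed per-image DSC values of one model on a 30-image test set
dsc_per_image = np.clip(rng.normal(loc=0.85, scale=0.08, size=30), 0.0, 1.0)
print(dsc_per_image.mean(), dsc_per_image.std(ddof=1))  # mean DSC and sample SD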
A CI can be used to estimate the range within which a population parameter (such as the mean) is expected to lie with a certain level of confidence. For example, a 95% CI for the mean suggests that if we were to take many samples and calculate the CI for each, about 95% of these intervals would contain the true population mean. CIs provide a measure of the precision of an estimate. Their widths approach 0 for infinite sample sizes. To identify current practices in performance variability reporting and further raise awareness on this matter in the medical imaging community, our work addresses the research questions (RQs) depicted in Figure <ref>. §.§ Systematic review of MICCAI 2023 segmentation papers Given that segmentation is a key focus of MICCAI and DSC is the community's primary metric <cit.>, our study concentrates on segmentation papers. From each of the identified segmentation papers we extracted information on the claims of the paper, method performance, its variability, and validation practices. To reduce the bias in extracting information from the papers, each paper was screened independently by two researchers. Subsequently, three additional researchers, distinct from those involved in data extraction, addressed data extraction conflicts. With the inclusion criteria being use of a test set for validation and mention of the exact test set size and mean DSC values, we identified all segmentation papers for which we could approximate the SD and CI. We excluded papers that solely used a random train/test split with no validation set (because there is a risk that the test set was used for validation, e.g., for model selection, leading to overoptimistic performance estimates) or only provided performance information graphically. §.§ Approximation of missing variability parameters To develop a method for approximating missing SD and CI from data present in publications, we used data from the Medical Segmentation Decathlon challenge <cit.>, which saw 19 models competing on 10 different segmentation tasks in different anatomical regions. Stratifying by task, we calculated both the mean and SD of the DSC for each model of the challenge's test set, resulting in 189 SD values. When plotting the values of mean DSC against the SD, it seemed that a second-order polynomial curve would fit the data points reasonably well (see Figure <ref> bottom). To formally address this functional relationship, using the library in Python, we fitted a generalized linear model (GLM) with a log link function, assuming a Gamma distribution for SD, which was expressed as a function of a second-order polynomial of mean DSC. The final model was Log(SD)=2.0310 + 0.0726· DSC_μ - 0.0008· DSC_μ^2. Following compensation for missing SD values, we computed CIs around their respective mean DSC using a parametric approach, as we had no access to the test data and models used in the papers, and could thus not use bootstrapping methods. The results from this parametric approach have been shown to closely approximate those from a non-parametric approach that uses bootstrapping <cit.>, justifying our choice of method. For the calculation of the CI, we used [ DSC_μ - t_{n-1, 1-α/2}·SD/√(n), DSC_μ + t_{n-1, 1-α/2}·SD/√(n) ], with DSC_μ being the mean DSC, n the test set size, t_{n-1, 1-α/2} the quantile of the t distribution with n-1 degrees of freedom, and α the level of significance. We set α to 0.05, corresponding to 95% CIs.
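A minimal sketch of how the reported formulas can be applied is given below; it is not the released analysis code. The helper functions and the example numbers are assumptions, as is the DSC scale implied by the fitted coefficients (a 0-100 scale rather than 0-1).

import numpy as np
from scipy import stats

def approx_sd(mean_dsc):
    # SD approximation from the fitted Gamma GLM with log link (formula above);
    # the coefficients suggest DSC on a 0-100 scale, which is an assumption here
    return np.exp(2.0310 + 0.0726 * mean_dsc - 0.0008 * mean_dsc ** 2)

def parametric_ci(mean_dsc, sd, n, alpha=0.05):
    # parametric (1-alpha) CI around the mean DSC for a test set of size n
    t = stats.t.ppf(1 - alpha / 2, df=n - 1)
    half = t * sd / np.sqrt(n)
    return mean_dsc - half, mean_dsc + half

# Example: a paper reports a mean DSC of 85.0 (i.e. 0.85) on n = 100 test images
mean_dsc, n = 85.0, 100
sd = approx_sd(mean_dsc)
lo, hi = parametric_ci(mean_dsc, sd, n)
print(f"approximated SD = {sd:.2f}, 95% CI = [{lo:.2f}, {hi:.2f}]")

# RQ3-style check: does the runner-up's mean fall inside the winner's CI?
second_best = 84.0
print(lo <= second_best <= hi)

With these assumed inputs, the reconstructed CI is several DSC percentage points wide, so a runner-up whose mean lies only 0.01 DSC below the winner falls well inside the winner's interval, which is the situation analyzed in the next section.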
§ EXPERIMENTS AND RESULTS RQ1: Common practice with respect to variability reporting From all 730 papers published in the scope of MICCAI 2023, we identified 221 (30.3%) segmentation papers. As shown in Figure <ref>, more than half of the papers (54.8%) did not report any kind of variability. CIs were reported in only one paper (0.5%). Of those that did report variability, only 47% reported SD (21% of all segmentation papers), and only 5% combined SD with a graphical representation of variability. However, for 61.7% of the papers that reported SD, its method of computing was not specified. 83.3% of papers claimed that they outperformed the state of the art. RQ2: Quality of SD approximation The quality of our polynomial fit on the development data is illustrated in Figure <ref> (bottom). For external validation of our data imputation method, we obtained access to the performance data from 56 past MICCAI segmentation challenges <cit.>, comprising results for 213 different methods applied to 124 different tasks. For these, we computed the SD and CI both from the observed data and with our approach. According to our results, the proposed model generalizes well to unseen data with a median (interquartile range (IQR)) difference between the observed and predicted CI width of 0.0024 (0.0097, 0.0422) for the dataset sizes > 20 (better for increasing test set size as shown in Figure <ref>(a)). RQ3 Performance differences versus widths of CI A total of 77 papers met our inclusion criteria for imputing the CI. The median/max (IQR) CI width for the first-ranked method was 0.03/0.31 (0.02, 0.06). The median (IQR) difference in mean DSC between the first- and second-ranked method (from now on referred to as delta DSC) was 0.01 (0.00, 0.03). Thus, the median width of the CIs of the first-ranked method was about three times larger than the median delta DSC (see Figure <ref> in the Supplement). For 64.9% of papers, the mean performance of the second-ranked method was within the CI range of the first-ranked method (Figure <ref>(b)). The code for our experiments is available at: <https://github.com/IMSY-DKFZ/CI_uncovered> § DISCUSSION Our work is the first to systematically analyze common practice with respect to model performance variability reporting in the field of medical image analysis. Our study clearly shows that reporting of performance variability, in particular reporting of CIs, is a rare exception. These reporting practices are at odds with the MICCAI reproducibility checklist guidelines <cit.> which include the following item: "A description of results with central tendency (e.g., mean) & variation (e.g., error bars)". Even when variability is reported, for instance in the form of the SD, conclusions are commonly drawn based on mean performance values without taking variability into account. This reporting practice can be very misleading and is highly unhelpful in reaching the ultimate goal of clinical translation of imaging models. A model exhibiting a high mean metric score but large variability in performance may not be suitable for safety-critical real-world applications, in which low performance on even some images may have dramatic consequences for patients. A limitation of our study could be seen in the fact that we had to approximate the SDs and subsequently CIs based on the data available. However, our external validation of our SD approximation indicates high reliability (Figure <ref>(a)). Related work on the topic of variability analysis is sparse. 
In an analysis of biomedical image analysis challenge reporting, <cit.> showed that claims are often solely drawn from aggregated results in tables, which supports our hypothesis. <cit.> likewise emphasize the lack of reporting CIs in medical image segmentation. Reporting on the Conference on Neural Information Processing Systems (NeurIPS) reproducibility program, <cit.> state that "it seems surprising to have 87% of papers that see value in clearly defining the metrics and statistics used, yet 36% of papers judge that error bars are not applicable to their results". While our current analysis focuses on the variability of the trained model (i.e., accounting from variance coming from the test set), <cit.> analyzed the variability of the learning procedure (i.e., accounting for other sources of variances such as random seeds or hyperparameters) during initial method development. Additionally, <cit.> investigated rankings in biomedical challenges and found that these are often not stable. Similarly, <cit.> found that rankings between private and public leaderboards in Kaggle competitions were not stable. In this paper, we focused on the question: Can we trust the reported mean performance results? Note that this is conceptually different from asking whether one method truly outperforms another, as investigated in <cit.>. In fact, the confidence in reported mean performance values, as measured by CIs, is necessary (but, of course, not sufficient) for deciding on whether a proposed algorithm is ready for clinical translation. Our study revealed that claims of scientific progress are typically based on small differences (around 0.01) in the mean DSC, suggesting that these were considered clinically relevant by the authors. In contrast, CIs are—on average—much wider. We consider this contradictory because if a difference of 0.01 matters, then, shouldn’t a CI with a much larger width be concerning (and at least be reported), as it means that the true mean may be substantially smaller than the reported one? Future work should not only be directed to fostering better reporting practices, but also address a complementary question: Does a published method really make an improvement over the state of the art? P-values are probably the most visible statistical tool in this context, yet, the standard view of statistical testing (null-hypothesis significance testing) is often considered as insufficient evidence both in machine learning <cit.> and in medical evaluation <cit.>. One of their drawbacks is that a sufficiently large sample size can make any two models significantly different. However, a difference can be statistically significant but so small that it is clinically meaningless. To assess clinically relevant benefits, "superiority margins"—as used in superiority testing for clinical trials—are an interesting concept that could easily be adapted to CIs by adding a boundary <cit.>. In conclusion, we showed that current publications in the medical image analysis community typically do not provide sufficient evidence to support which models could potentially be translated into clinical practice. We hope that our results will trigger a major community shift towards uncertainty-aware performance assessment of medical image analysis models. 
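To make the suggested margin-based reading of CIs concrete, a simplified sketch of such a check is given below; the margin value and the treatment of the reference mean as fixed are illustrative assumptions rather than a recommended protocol.

from scipy import stats
import numpy as np

def superior_with_margin(mean_new, sd_new, n, mean_ref, margin, alpha=0.05):
    # One-sided check: is the new model better than the reference by more than a
    # pre-specified margin, judged by the lower CI bound of the difference?
    # Simplification: the reference mean is treated as fixed and pairing is ignored.
    t = stats.t.ppf(1 - alpha, df=n - 1)
    lower = (mean_new - mean_ref) - t * sd_new / np.sqrt(n)
    return lower > margin

print(superior_with_margin(0.86, 0.09, n=100, mean_ref=0.85, margin=0.01))
# False: a 0.01 gap in mean DSC does not establish a margin-sized improvement here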
§.§.§ We acknowledge funding from the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), reference ANR-10-IAIHU-06 (Agence Nationale de la Recherche-10-IA Institut Hospitalo-Universitaire-6) and reference ANR-23-CE17-0054. This publication was further supported through state funds approved by the State Parliament of Baden-Württemberg for the Innovation Campus Health + Life Science Alliance Heidelberg Mannheim. Moreover, it received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement no. 101002198, NEURAL SPICING). Part of this work was also funded by Helmholtz Imaging (HI), a platform of the Helmholtz Incubator on Information and Data Science. §.§.§ We declare the following competing interests: N. Rieke is an employee of NVIDIA. The remaining authors have no competing interests.
As demonstrated by the fact that more than 530 of the first 692 AI in healthcare products approved by the U.S. Food and Drug Administration (FDA) fall within the application domain of medical imaging <cit.>, medical imaging is spearheading the AI-powered transformation of healthcare. Performance reporting and comparisons of medical imaging models are key to determining their potential for clinical translation. Clinical translation requires approval by regulatory agencies such as the U.S. FDA, whose recommendations insist on the importance of characterizing variability and reporting confidence intervals (CIs); for instance in <cit.>). A recent paper <cit.> written by FDA staff describes regulatory science principles on performance assessment of AI algorithms in imaging and emphasizes that "The statistical analysis plays a critical role in the assessment of machine learning (ML) performance but may be under-appreciated by many ML developers". Current practice in reporting results (including that of the authors!) often does not fulfill these requirements and thus far from lends itself to determining whether a medical imaging model is suited for clinical translation. The underlying question is: Can we really trust performance claims made in publications? The purpose of this paper was to address this important question: * Based on a comprehensive analysis of all MICCAI 2023 segmentation papers, we show that performance variability is rarely accounted for in the medical image analysis community. * To demonstrate the implications of the reporting bottleneck, we propose a work-around to approximate variability parameters from the information provided in publications. Specifically, we show that the unreported standard deviation (SD) in segmentation papers can be approximated using a second-order polynomial function of the mean Dice similarity coefficient (DSC). * Using our proposed approximation method, we reconstruct CIs for the published MICCAI papers and provide evidence that the praise proposed methods receive is often not supported by sufficient evidence.
null
null
null
Our work is the first to systematically analyze common practice with respect to model performance variability reporting in the field of medical image analysis. Our study clearly shows that reporting of performance variability, in particular reporting of CIs, is a rare exception. These reporting practices are at odds with the MICCAI reproducibility checklist guidelines <cit.> which include the following item: "A description of results with central tendency (e.g., mean) & variation (e.g., error bars)". Even when variability is reported, for instance in the form of the SD, conclusions are commonly drawn based on mean performance values without taking variability into account. This reporting practice can be very misleading and is highly unhelpful in reaching the ultimate goal of clinical translation of imaging models. A model exhibiting a high mean metric score but large variability in performance may not be suitable for safety-critical real-world applications, in which low performance on even some images may have dramatic consequences for patients. A limitation of our study could be seen in the fact that we had to approximate the SDs and subsequently CIs based on the data available. However, our external validation of our SD approximation indicates high reliability (Figure <ref>(a)). Related work on the topic of variability analysis is sparse. In an analysis of biomedical image analysis challenge reporting, <cit.> showed that claims are often solely drawn from aggregated results in tables, which supports our hypothesis. <cit.> likewise emphasize the lack of reporting CIs in medical image segmentation. Reporting on the Conference on Neural Information Processing Systems (NeurIPS) reproducibility program, <cit.> state that "it seems surprising to have 87% of papers that see value in clearly defining the metrics and statistics used, yet 36% of papers judge that error bars are not applicable to their results". While our current analysis focuses on the variability of the trained model (i.e., accounting from variance coming from the test set), <cit.> analyzed the variability of the learning procedure (i.e., accounting for other sources of variances such as random seeds or hyperparameters) during initial method development. Additionally, <cit.> investigated rankings in biomedical challenges and found that these are often not stable. Similarly, <cit.> found that rankings between private and public leaderboards in Kaggle competitions were not stable. In this paper, we focused on the question: Can we trust the reported mean performance results? Note that this is conceptually different from asking whether one method truly outperforms another, as investigated in <cit.>. In fact, the confidence in reported mean performance values, as measured by CIs, is necessary (but, of course, not sufficient) for deciding on whether a proposed algorithm is ready for clinical translation. Our study revealed that claims of scientific progress are typically based on small differences (around 0.01) in the mean DSC, suggesting that these were considered clinically relevant by the authors. In contrast, CIs are—on average—much wider. We consider this contradictory because if a difference of 0.01 matters, then, shouldn’t a CI with a much larger width be concerning (and at least be reported), as it means that the true mean may be substantially smaller than the reported one? 
Future work should not only be directed at fostering better reporting practices but should also address a complementary question: Does a published method really make an improvement over the state of the art? P-values are probably the most visible statistical tool in this context, yet the standard view of statistical testing (null-hypothesis significance testing) is often considered insufficient evidence both in machine learning <cit.> and in medical evaluation <cit.>. One of its drawbacks is that a sufficiently large sample size can render the difference between any two models statistically significant. However, a difference can be statistically significant but so small that it is clinically meaningless. To assess clinically relevant benefits, "superiority margins", as used in superiority testing for clinical trials, are an interesting concept that could easily be adapted to CIs by adding a boundary <cit.>.
In conclusion, we showed that current publications in the medical image analysis community typically do not provide sufficient evidence to support which models could potentially be translated into clinical practice. We hope that our results will trigger a major community shift towards uncertainty-aware performance assessment of medical image analysis models.
§.§.§ Acknowledgements We acknowledge funding from the French government under management of Agence Nationale de la Recherche as part of the “Investissements d’avenir” program, reference ANR-19-P3IA-0001 (PRAIRIE 3IA Institute), reference ANR-10-IAIHU-06 (Agence Nationale de la Recherche-10-IA Institut Hospitalo-Universitaire-6), and reference ANR-23-CE17-0054. This publication was further supported through state funds approved by the State Parliament of Baden-Württemberg for the Innovation Campus Health + Life Science Alliance Heidelberg Mannheim. Moreover, it received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (grant agreement no. 101002198, NEURAL SPICING). Part of this work was also funded by Helmholtz Imaging (HI), a platform of the Helmholtz Incubator on Information and Data Science.
§.§.§ Competing interests We declare the following competing interests: N. Rieke is an employee of NVIDIA. The remaining authors have no competing interests.
http://arxiv.org/abs/2409.17715v1
20240926103731
Optimal Sensitivity Oracle for Steiner Mincut
[ "Koustav Bhanja" ]
cs.DS
[ "cs.DS" ]
Optimal Sensitivity Oracle for Steiner Mincut
Koustav Bhanja
September 28, 2024
===============================================================================
§ ABSTRACT
Let G=(V,E) be an undirected weighted graph on n=|V| vertices and S⊆ V be a Steiner set. Steiner mincut is a well-studied concept, which also provides a generalization of both (s,t)-mincut (when |S|=2) and global mincut (when |S|=n). Here, we address the problem of designing a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in G; such a data structure is known as a Sensitivity Oracle for Steiner mincut. In the area of minimum cuts, many Sensitivity Oracles have been designed for unweighted graphs; in weighted graphs, however, Sensitivity Oracles exist only for (s,t)-mincut [Annals of Operations Research 1991, NETWORKS 2019, ICALP 2024], which is just a special case of Steiner mincut. Here, we generalize this result from |S|=2 to any arbitrary set S⊆ V, that is, 2 ≤ |S| ≤ n. We first design an 𝒪(n^2) space Sensitivity Oracle for Steiner mincut by suitably generalizing the approach used for (s,t)-mincuts [Annals of Operations Research 1991, NETWORKS 2019]. However, the main question that arises quite naturally is the following. Can we design a Sensitivity Oracle for Steiner mincut that breaks the 𝒪(n^2) bound on space? In this manuscript, we present the following two results that provide an answer to this question.
* Sensitivity Oracle: Assuming the capacity of every edge is known,
* there is an 𝒪(n) space data structure that can report the capacity of Steiner mincut in 𝒪(1) time and
* there is an 𝒪(n(n-|S|+1)) space data structure that can report a Steiner mincut in 𝒪(n) time after the failure of any given edge in G.
* Lower Bound: We show that any data structure that, after the failure of any edge, can report a Steiner mincut or its capacity must occupy Ω(n^2) bits of space in the worst case, irrespective of the size of the Steiner set.
The lower bound in (2) shows that the assumption in (1) is essential to break the Ω(n^2) lower bound on space. The Sensitivity Oracle in (1.b) occupies only subquadratic space, that is 𝒪(n^1+ϵ), if |S|=n-n^ϵ+1, for every ϵ∈ [0,1). For |S|=n-k, for any constant k≥ 0, it occupies only 𝒪(n) space. So, we also present the first Sensitivity Oracle occupying 𝒪(n) space for global mincut. In addition, we are able to match the existing best-known bounds on both space and query time for (s,t)-mincut [Annals of Operations Research 1991, NETWORKS 2019] in undirected graphs.
§ INTRODUCTION
In the real world, networks (graphs) are often subject to the failure of edges and vertices due to a variety of factors, such as physical damage, interference, or other disruptions. This can lead to changes in the solution to several graph problems. While these failures can happen at any location in the network at any time, they are typically short-lived. Naturally, this calls for compact data structures that can efficiently report the solution to a given graph problem (without recomputing it from scratch) once a failure has occurred. Such data structures are known as Sensitivity Oracles for the corresponding graph problems. There exist elegant Sensitivity Oracles for many fundamental graph problems, such as shortest paths <cit.>, reachability <cit.>, traversals <cit.>, etc. The minimum cut of a graph is also a fundamental concept of graph theory.
Moreover, it has a variety of practical applications in the real world <cit.>. Designing Sensitivity Oracles for various minimum cuts of a graph has been an emerging field of research for the past few decades <cit.>. There are two well-known mincuts of a graph: the global mincut and the (s,t)-mincut. Here, we design the first Sensitivity Oracle for global mincut in undirected weighted graphs that can handle the failure of any edge. The concept of Steiner mincut is also well-studied in the area of minimum cuts <cit.>; moreover, it contains both global mincut and (s,t)-mincut as special cases. In this article, as our main result, we present the first Sensitivity Oracle for Steiner mincut for handling the failure of any edge in undirected weighted graphs. Interestingly, our result bridges the gap between the two extreme scenarios of Steiner mincut while matching their bounds, namely, (s,t)-mincut <cit.> and global mincut (designed in this article). In addition, it also provides the first generalization from unweighted graphs <cit.> to weighted graphs. Let G=(V,E) be an undirected graph on n=|V| vertices and m=|E| edges with non-negative real values assigned as capacities to the edges. We denote the capacity of an edge e by w(e). Let S⊆ V be a Steiner set of G such that |S|≥ 2. A vertex s is called a Steiner vertex if s∈ S; otherwise, s is called a nonSteiner vertex. A nonempty set C⊂ V is said to be a Steiner cut if there is at least one pair of Steiner vertices s,s' such that s∈ C and s'∉ C. For S=V, a Steiner cut is a (global) cut. Similarly, for S={s,t}, a Steiner cut is an (s,t)-cut. A cut C is said to separate a pair of vertices u,v if u∈ C and v∈ V∖ C, or vice versa. An edge e=(u,v) is said to contribute to a cut C if C separates the endpoints u,v of e. The capacity of a cut C, denoted by c(C), is the sum of the capacities of all contributing edges of C. A Steiner cut of the least capacity is known as the Steiner mincut, denoted by S-mincut. Let λ_S be the capacity of S-mincut. The problem of designing a Sensitivity Oracle for S-mincut for handling the failure of any edge is defined as follows. For any graph G, a single edge Sensitivity Oracle for Steiner mincut is a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in G. For unweighted graphs, there exist single edge Sensitivity Oracles for global mincut <cit.>, (s,t)-mincut <cit.>, and Steiner mincut <cit.>. Unfortunately, for weighted graphs, in the area of minimum cuts, the only existing results are single edge Sensitivity Oracles for (s,t)-mincut <cit.>. For undirected weighted graphs, Ausiello et al. <cit.>, exploiting the Ancestor tree data structure of Cheng and Hu <cit.>, designed the first single edge Sensitivity Oracle for (s,t)-mincut. Their Sensitivity Oracle occupies 𝒪(n^2) space. After the failure of any edge, it can report an (s,t)-mincut C and its capacity in 𝒪(|C|) and 𝒪(1) time, respectively. Recently, Baswana and Bhanja <cit.> complemented this result by showing that Ω(n^2log n) bits of space are required in the worst case, irrespective of the query time. For Steiner mincuts, it follows from the above discussion that the existing Sensitivity Oracles are either for undirected unweighted graphs or only for a special case, |S|=2, in weighted graphs. Therefore, to provide a generalization of these results to any Steiner set, the following is an important question to raise.
Does there exist a single edge Sensitivity Oracle for S-mincut in undirected weighted graphs? We show that the approach taken by Ausiello et al. <cit.> can be generalized from S={s,t} to any set S⊆ V. This answers the above-mentioned question in the affirmative and leads to the following result. For any undirected weighted graph G on n=|V| vertices, for every Steiner set S, there exists an 𝒪(n^2) space data structure that, after the failure of any edge in G, can report an S-mincut C and its capacity in 𝒪(|C|) and 𝒪(1) time, respectively. The space and query time of the Sensitivity Oracle in Theorem <ref> match the existing optimal results for (s,t)-mincut <cit.>. The lower bound of Ω(n^2log n) bits of space in <cit.> is only for |S|=2. However, to the best of our knowledge, no lower bound is known for any |S|>2. Therefore, the main question that we address in this article arises quite naturally as follows. For undirected weighted graphs, does there exist a single edge Sensitivity Oracle for S-mincut that breaks the quadratic bound on space and still achieves optimal query time if |S|>2? §.§ Our Results A Sensitivity Oracle in a weighted graph addresses queries in a more generic way <cit.>. Given any edge e and any value Δ satisfying Δ≥ 0, the aim is to efficiently report the solution of a given problem after reducing the capacity of edge e by Δ. In this generic setting, we first design an 𝒪(n) space single edge Sensitivity Oracle for global mincut that also achieves optimal query time. Now, in order to bridge the gap between the two extreme scenarios of the Steiner set (|S|=n and |S|=2) while matching their bounds, we present our main result that breaks the 𝒪(n^2) space bound of Theorem <ref> and answers Question <ref> in the affirmative. Let G=(V,E) be an undirected weighted graph on n=|V| vertices and m=|E| edges. For any Steiner set S of G, * there is an 𝒪(n) space rooted tree 𝒯(G) that, given any edge e∈ E and any value Δ satisfying 0≤Δ≤ w(e), can report the capacity of S-mincut in 𝒪(1) time after reducing the capacity of edge e by Δ, and * there is an 𝒪(n(n-|S|+1)) space data structure ℱ(G) that, given any edge e∈ E and any value Δ satisfying 0≤Δ≤ w(e), can report an S-mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. For any ϵ∈ [0,1), the space occupied by the single edge Sensitivity Oracle for S-mincut in Theorem <ref>(2) is subquadratic, that is 𝒪(n^1+ϵ), for |S|=n-n^ϵ+1. Moreover, it approaches 𝒪(n) as |S| tends to n. In particular, for |S|=n-k, for any constant k≥ 0, it occupies only 𝒪(n) space. Observe that our results in Theorem <ref> interestingly match the bounds on both space and query time for the two extreme scenarios of the Steiner set. On one extreme (|S|=n), it occupies 𝒪(n) space for global mincut. On the other extreme (|S|=2), it occupies 𝒪(n^2) space, which matches the best-known existing results for (s,t)-mincut <cit.>. Finally, notice that the time taken by our Sensitivity Oracle to answer any query is also worst-case optimal. We also provide lower bounds on both space and query time of Sensitivity Oracles for S-mincut. Our first lower bound is for reporting the capacity of S-mincut and our second lower bound is for reporting an S-mincut. Let D be any data structure that can report the capacity of Steiner mincut after the failure of any edge for undirected weighted graphs on n vertices. Data structure D must occupy Ω(n^2log n) bits of space in the worst case, irrespective of the query time and the size of the Steiner set.
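The bound stated above follows from an information-theoretic counting argument, detailed in the section on lower bounds: any data structure answering such queries must distinguish all ⌊n/2⌋ × ⌊(n+1)/2⌋ matrices whose entries are integers in the range [1,n^c], and hence must occupy at least

\[
\log_2\!\left( \left(n^{c}\right)^{\lfloor n/2 \rfloor \, \lfloor (n+1)/2 \rfloor} \right)
= c \, \lfloor n/2 \rfloor \, \lfloor (n+1)/2 \rfloor \, \log_2 n
= \Omega\!\left(n^{2}\log n\right)
\]

bits of space in the worst case.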
For reporting the capacity of S-mincut, Theorem <ref> provides a generalization of the existing lower bound on both space and time for (s,t)-mincut by Baswana and Bhanja <cit.>. However, for reporting an S-mincut, no lower bound on space or query time for a single edge Sensitivity Oracle was known to date, even for the two extreme scenarios of the Steiner set. So, the following theorem is the first lower bound for reporting an S-mincut after the failure of any edge. Let D be any data structure that can report a Steiner mincut C in 𝒪(|C|) time after the failure of any edge for undirected weighted graphs on n vertices. Data structure D must occupy Ω(n^2) bits of space in the worst case, irrespective of the size of the Steiner set. It is assumed in Theorem <ref> that the query edge e is present in G and the change in capacity (that is, Δ) provided with the query is at most w(e). So, the lower bounds of Ω(n^2) bits of space in Theorem <ref> and Theorem <ref> do not violate the sub-quadratic space data structures in Theorem <ref>. Moreover, the assumption in Theorem <ref> seems practically justified. This is because, as discussed in <cit.>, in the real world, the capacity of an edge reduces only if the edge actually exists in the graph, and, furthermore, it can reduce by at most the capacity of the edge. §.§ Related Works In their seminal works, Dinitz and Vainshtein <cit.> designed an 𝒪(min{nλ_S,m}) space data structure, known as the Connectivity Carcass, for storing all S-mincuts of an unweighted undirected graph. It can report an S-mincut in 𝒪(m) time and its capacity in 𝒪(1) time. Baswana and Pandey <cit.>, using the Connectivity Carcass as the foundation, designed an 𝒪(n) space single edge Sensitivity Oracle for S-mincut in undirected unweighted graphs that also reports an S-mincut in 𝒪(n) time. Their result matches the bounds on both space and time of the existing results for the two extreme scenarios of S-mincut, namely, (s,t)-mincut <cit.> and global mincut <cit.>. The result in <cit.> also acts as the foundation of single edge Sensitivity Oracles for all-pairs mincut <cit.>. For directed weighted graphs, Baswana and Bhanja <cit.> presented a single edge Sensitivity Oracle for (s,t)-mincuts that matches both the space and query time of the undirected weighted graph results <cit.>. Providing a generalization from the two extreme scenarios of the Steiner set (S=V and |S|=2) has also been addressed for various problems, namely, computing the Steiner mincut <cit.>, Steiner connectivity augmentation and splitting-off <cit.>, and the construction of a cactus graph for Steiner mincuts <cit.>. §.§ Organization of the Article This article is organized as follows. Section <ref> contains the basic preliminaries. We first construct an 𝒪(n^2) space single edge Sensitivity Oracle for Steiner mincut in Section <ref>. In Section <ref>, we design an 𝒪(n) space single edge Sensitivity Oracle for reporting only the capacity of Steiner mincut. A linear space single edge Sensitivity Oracle for global mincut is designed in Section <ref>. Our main result on the subquadratic space single edge Sensitivity Oracle for Steiner mincut is developed in Section <ref>. The lower bound results are given in Section <ref>. Finally, we conclude in Section <ref> with a couple of open problems. § PRELIMINARIES In this section, we define a set of basic notations and properties of cuts. Let G∖{e} denote the graph obtained from G after the removal of edge e. We now define the concept of crossing cuts, introduced by Dinitz, Karzanov, and Lomonosov <cit.>.
A pair of cuts C,C' in G is said to be crossing if each of the four sets C∩ C', C∖ C', C'∖ C, and V∖(C∪ C') is nonempty. The following concepts of a mincut for an edge and of vital edges are used crucially in the construction of our data structure. A Steiner cut C is said to be a mincut for an edge e if e contributes to C and c(C)≤ c(C') for every Steiner cut C' to which e contributes. Let e be an edge and C be a mincut for edge e. Edge e is said to be a vital edge if its removal reduces the capacity of Steiner mincut, that is, c(C)-w(e)<λ_S. We now define a special mincut for an edge. A mincut C for an edge e=(x,y)∈ E with x∈ C is said to be a nearest mincut for e if there is no mincut C' for e such that x∈ C' and C'⊂ C. The set of all nearest mincuts for an edge e is denoted by N(e). For any two sets A,B⊂ V, * c(A)+c(B)≥ c(A∩ B)+c(A∪ B) and * c(A)+c(B)≥ c(A∖ B)+c(B∖ A). For undirected weighted graphs, Gomory and Hu <cit.> designed the following tree structure, which is widely known as the Gomory Hu Tree. For any undirected weighted graph G=(V,E) on n=|V| vertices, there is an 𝒪(n) space undirected weighted tree 𝒯_GH on vertex set V that satisfies the following property. Let u,v be any pair of vertices in G. A cut of the least capacity separating u,v in 𝒯_GH is also a cut of the least capacity separating u,v in G. Moreover, 𝒯_GH can report a cut C of the least capacity separating u,v in 𝒪(|C|) time and its capacity in 𝒪(1) time. A set of cuts ℒ is said to form a laminar family if, for any pair of cuts C_1,C_2∈ℒ, at least one of the following three holds: C_1∩ C_2 is an empty set, C_1⊆ C_2, or C_2⊆ C_1. *A rooted tree 𝒯_ℒ representing a laminar family ℒ: For any given laminar family ℒ of cuts, we can construct a rooted tree 𝒯_ℒ as follows. Let x be any vertex in G. Let ϕ_ℒ(x) denote the unique node in 𝒯_ℒ to which vertex x is mapped and let SubTree(x) denote the set of all vertices mapped to the subtree rooted at ϕ_ℒ(x) (including ϕ_ℒ(x)) in 𝒯_ℒ. The set SubTree(x) defines the unique minimal cut in ℒ that contains x. If ϕ_ℒ(x) is a child of the root node in 𝒯_ℒ, then SubTree(x) is a maximal cut in ℒ. For any pair of vertices x,y in G, let C_1=SubTree(x) and C_2=SubTree(y). Then, ϕ_ℒ(x) is a child of ϕ_ℒ(y) in 𝒯_ℒ if and only if C_1 is a maximal proper subset of C_2 in ℒ. A vertex in G is mapped to the root node of 𝒯_ℒ if no cut in ℒ contains it. This leads to the following lemma. For any laminar family ℒ of cuts, there exists an 𝒪(n) space rooted tree 𝒯_ℒ such that a cut C belongs to ℒ if and only if there exists a node μ (other than the root node) of 𝒯_ℒ such that C is the set of vertices mapped to the subtree rooted at μ (including node μ). § AN 𝒪(N^2) SPACE SENSITIVITY ORACLE FOR STEINER MINCUT In this section, we first discuss the limitations of the previous results for unweighted graphs. Later, we design an 𝒪(n^2) space single edge Sensitivity Oracle for S-mincut. *Limitations of the existing results: For unweighted graphs, the following property is used crucially to design every existing single edge Sensitivity Oracle. Property P_1: The failure of an edge e reduces the capacity of S-mincut if and only if edge e contributes to an S-mincut. Dinitz and Vainshtein <cit.> designed the following quotient graph of G, known as the flesh graph. The flesh graph is obtained by contracting every pair of vertices in G that are not separated by any S-mincut. The construction ensures that every pair of vertices in the flesh graph is separated by an S-mincut of G. Every vertex in G is mapped to a unique vertex in the flesh graph.
Therefore, the endpoints of any edge e are mapped to different vertices in the flesh graph if and only if the failure of e reduces the capacity of S-mincut. Thus, by Property P_1, storing the 𝒪(n) space mapping of the vertices of G to the vertices of the flesh graph is sufficient to report the capacity of S-mincut in 𝒪(1) time after the failure of any edge. In addition, Dinitz and Vainshtein <cit.> showed that the flesh graph can be used to report an S-mincut after the failure of any edge in G in 𝒪(m) time. The flesh graph is one of the three components of the Connectivity Carcass designed by Dinitz and Vainshtein <cit.>; the other two components are the Skeleton and the Projection mapping. Recently, Baswana and Pandey <cit.>, exploiting the properties of all three components of the Connectivity Carcass, established an ordering among the vertices of the flesh graph. By using Property P_1, they showed that this ordering, along with the Skeleton and the Projection mapping, can be used to design an 𝒪(n) space single edge Sensitivity Oracle for S-mincut in unweighted graphs. This oracle can report an S-mincut in 𝒪(n) time. Unfortunately, for weighted graphs, it is easy to observe that multiple edges can exist that do not contribute to any S-mincut but whose failure reduces the capacity of S-mincut. Hence, in weighted graphs, Property P_1 no longer holds. So, the existing structures are not suitable for handling the failure of weighted edges. This requires us to explore the structure of mincuts for every edge whose endpoints both belong to the same node of the flesh graph. Moreover, the capacity of a mincut for these edges can be quite large compared to the capacity of S-mincut. §.§ Sensitivity Oracle for Steiner Mincut: 𝒪(n^2) Space We now give a proof of Theorem <ref> by designing an 𝒪(n^2) space single edge Sensitivity Oracle for S-mincut. Let F be any arbitrary real-valued function defined on cuts. Cheng and Hu <cit.> presented the following result. There is an 𝒪(n^2) space data structure, known as the Ancestor tree, that, given any pair of vertices u and v, reports a cut C of the least capacity (F-value) separating u,v in 𝒪(|C|) time and the capacity of C in 𝒪(1) time. In order to design an Ancestor tree for Steiner cuts, similar to the construction for (s,t)-mincuts given by Ausiello et al. <cit.>, we define the function F for Steiner cuts as follows: for a set C⊂ V, F(C)=c(C) if C is a Steiner cut, and F(C)=∞ otherwise. Let e=(x,y) be any failed edge. The Ancestor tree can report a cut C of the least capacity separating x and y in 𝒪(|C|) time and its capacity in 𝒪(1) time. By Equation <ref>, C is also a Steiner cut separating x and y. Therefore, by Definition <ref>, C is a mincut for edge e. Hence, the new capacity of S-mincut is either c(C)-w(e), or it remains λ_S if c(C)-w(e)≥λ_S. By storing the capacities of all edges of G, we can determine whether c(C)-w(e)<λ_S in 𝒪(1) time. If the capacity of S-mincut reduces, then we can report C in 𝒪(|C|) time; otherwise, we report an S-mincut C_m in 𝒪(|C_m|) time. This completes the proof of Theorem <ref>. § A SENSITIVITY ORACLE FOR REPORTING CAPACITY OF STEINER MINCUT In this section, we address the problem of reporting the capacity of S-mincut after reducing the capacity of an edge e∈ E by a value Δ satisfying 0 ≤Δ≤ w(e). We denote this query by cap(e,Δ). Observe that a trivial data structure for answering query cap occupies 𝒪(m) space if we store the capacity of a mincut for each vital edge in G.
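Across these constructions, the query routine itself reduces to a single comparison once the capacity of a mincut for the failed edge is available; a minimal sketch is given below, where mincut_capacity_for is a placeholder for whichever structure supplies that capacity (it is not part of the paper's pseudocode).

def capacity_after_failure(e, delta, lambda_S, mincut_capacity_for):
    # mincut_capacity_for(e): capacity of a Steiner cut of least capacity to
    # which edge e contributes, i.e., the capacity of a mincut for e.
    # Returns the capacity of the S-mincut after reducing w(e) by delta.
    reduced = mincut_capacity_for(e) - delta
    return reduced if reduced < lambda_S else lambda_S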
For |S|=2 in directed weighted graphs, Baswana and Bhanja <cit.> designed an 𝒪(n) space data structure that implicitly stores all vital edges for (s,t)-mincut and the capacity of their mincuts. We extend their approach from vital edges to all edges in undirected weighted graphs in order to establish the following. For any Steiner set S, with 2≤ |S|≤ n, there exists an 𝒪(n) space data structure that can answer query cap in 𝒪(1) time. Let ℰ⊆ E and let V(ℰ) denote the smallest set of vertices such that, for each edge (u,v)∈ℰ, both u and v belong to V(ℰ). We first design an 𝒪(|V(ℰ)|) space rooted full binary tree for answering query cap for all edges in ℰ. In Section <ref>, this construction also helps in designing a compact structure for reporting mincuts for a special subset of edges. Later in this section, we show an extension to an 𝒪(n) space rooted full binary tree for answering query cap for all edges in E. Let C(e) denote a mincut for an edge e. Note that C(e) is a Steiner cut as well (Definition <ref>). We say that an edge e belongs to a set U⊂ V if both endpoints of e belong to U. Suppose C(e) is a mincut for an edge e belonging to V(ℰ) such that, for every other edge e' belonging to V(ℰ), c(C(e))≤ c(C(e')). Let e' be an edge from V(ℰ). If e' contributes to C(e), it follows from the selection of edge e that C(e) is a Steiner cut of the least capacity to which e' contributes. Hence, C(e) is a mincut for edge e' as well. This ensures that C(e) partitions the set of all edges belonging to V(ℰ) into three sets: edges belonging to C(e)∩ V(ℰ), edges belonging to (V∖ C(e))∩ V(ℰ), and edges of V(ℰ) that contribute to C(e). This leads to a recursive procedure (Algorithm <ref>) for the construction of a tree 𝒯. Each internal node μ of tree 𝒯 has three fields: (i) μ.cap stores the capacity of a mincut for the edge selected at μ, (ii) μ.left points to the left child of μ, and (iii) μ.right points to the right child of μ. Each vertex u∈ V(ℰ) is mapped to a leaf node of 𝒯, denoted by 𝕃(u). We invoke Algorithm <ref> with U=V(ℰ). Observe that the tree 𝒯 resulting from Algorithm <ref> is a full binary tree. There are 𝒪(|V(ℰ)|) leaf nodes. So, the space occupied by the tree is 𝒪(|V(ℰ)|). *Answering Query cap(e=(x,y),Δ): Suppose edge e belongs to ℰ. Let μ be the lca of 𝕃(x) and 𝕃(y). It follows from the construction of tree 𝒯 that the field μ.cap at node μ in 𝒯 stores the capacity of a mincut for edge e. Therefore, if μ.cap-Δ<λ_S, then we report μ.cap-Δ as the new capacity of S-mincut; otherwise, the capacity of S-mincut does not change. This leads to the following lemma. Let G=(V,E) be an undirected weighted graph on n=|V| vertices. For any Steiner set S⊆ V and a set of edges ℰ⊆ E, there is an 𝒪(|V(ℰ)|) space full binary tree 𝒯_ℰ that, given any edge e∈ℰ and any value Δ satisfying 0≤Δ≤ w(e), can report the capacity of S-mincut in 𝒪(1) time after reducing the capacity of edge e by Δ. We now answer query cap(e,Δ) where edge e∈ E. Observe that edge e in query cap can be either a vital or a nonvital edge. In order to determine whether an edge is vital or not, we design a full binary tree 𝒯_E by invoking Algorithm <ref> with U=V since ℰ=E. Let us denote the tree by 𝒯(G). By Lemma <ref>, the size of tree 𝒯(G) is 𝒪(n). It is now easy to observe that an edge e=(x,y) is a vital edge in graph G if and only if μ.cap-w(e)<λ_S, where node μ is lca(𝕃(x),𝕃(y)); in this case, the capacity of the Steiner mincut in graph G∖{e} is μ.cap-w(e). This leads to the following lemma. Let G=(V,E) be an undirected weighted graph on n=|V| vertices.
For any Steiner set S⊆ V, there is an 𝒪(n) space full binary tree 𝒯(G) that, given any edge e∈ E and any value Δ satisfying 0≤Δ≤ w(e), can report the capacity of S-mincut in 𝒪(1) time after reducing the capacity of edge e by Δ. Lemma <ref> completes the proof of Theorem <ref>(1). § AN 𝒪(N) SPACE SENSITIVITY ORACLE FOR GLOBAL MINCUT The well-known (s,t)-mincut is one extreme scenario of S-mincut when |S|=2. In weighted graphs, designing a single edge Sensitivity Oracle for (s,t)-mincut has been addressed quite extensively <cit.>. Moreover, each of them occupies 𝒪(n^2) space. However, to this day, no nontrivial single edge Sensitivity Oracle exists for global mincut, which is the other extreme scenario of S-mincut when |S|=n. We now present the first single edge Sensitivity Oracle for global mincut that occupies only 𝒪(n) space and achieves optimal query time. Let λ_V be the capacity of global mincut. Given any edge e, we want to determine the capacity of mincut for edge e for Steiner set S=V. Observe that, for S=V, every cut in the graph is a Steiner cut (or global cut). Exploiting this insight, we can state the following interesting relation between global mincut and all-pairs mincuts (or (u,v)-mincut, for every u,v∈ V). For an edge (u,v), C is a cut of the least capacity that separates u,v if and only if C is a mincut for edge (u,v). By Theorem <ref>, for every pair of vertices u,v in G, Gomory Hu Tree (Theorem <ref>) stores a cut of the least capacity separating u,v. By Lemma <ref>, it follows that, for S=V, Gomory Hu Tree stores a mincut for every edge in G. Hence, it acts as a single edge Sensitivity Oracle for global mincut and can report a mincut C for any given edge e in 𝒪(|C|) time and its capacity in 𝒪(1) time. Therefore, after reducing w(e) by a value Δ, if c(C)-Δ<λ_V, we can report a global mincut and its capacity optimally using 𝒪(n) space. This leads to the following result. For any undirected weighted graph G=(V,E) on n=|V| vertices, there is an 𝒪(n) space data structure that, given any edge e in G and any value Δ satisfying 0≤Δ≤ w(e), can report the capacity of global mincut in 𝒪(1) time and a global mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. Now, for both extreme scenarios of Steiner mincuts, we have a single edge Sensitivity Oracle. Interestingly, the Sensitivity Oracle for global mincut achieves better than quadratic space. Therefore, the question that arises is how to generalize these results to any Steiner set. § SENSITIVITY ORACLE FOR STEINER MINCUT: BREAKING QUADRATIC BOUND In this section, we address the problem of reporting an S-mincut after reducing the capacity of any given edge e by any given value Δ satisfying 0<Δ≤ w(e). We denote this query by cut(e,Δ). Our objective is to design a data structure that breaks 𝒪(n^2) bound on space for efficiently answering query cut, if |S|>2. A simple data structure can be designed by augmenting tree 𝒯(G) in Theorem <ref>(1) as follows. For each internal node μ of tree 𝒯(G), Algorithm <ref> selects an edge e in Step 7 and stores the capacity of mincut C(e) for edge e in μ.cap. Observe that if we augment node μ with C(e), then it helps in answering query cut as well. However, the augmented tree occupies 𝒪(n^2) space, which defeats our objective. For global mincut (S=V), observe that Gomory Hu Tree essentially acts as a data structure that stores at least one mincut for every edge quite compactly. 
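The following minimal sketch illustrates how such a Gomory Hu Tree query can be carried out, assuming a rooted parent-pointer representation in which parent maps each vertex to its parent (None for the root) and cap_to_parent stores the capacity of the tree edge to the parent; the representation and function names are illustrative and not part of the paper's pseudocode.

def gomory_hu_min_cut(parent, cap_to_parent, u, v):
    # Returns (capacity, cut): the minimum capacity of an edge on the tree
    # path between u and v, and the vertex set of the subtree hanging below
    # that edge, i.e., a cut of least capacity separating u and v in G.
    def path_to_root(x):
        path = [x]
        while parent[x] is not None:
            x = parent[x]
            path.append(x)
        return path

    path_u, path_v = path_to_root(u), path_to_root(v)
    ancestors_of_u = set(path_u)
    lca = next(x for x in path_v if x in ancestors_of_u)
    best_cap, best_child = float("inf"), None
    for path in (path_u, path_v):
        for x in path:
            if x == lca:
                break
            if cap_to_parent[x] < best_cap:
                best_cap, best_child = cap_to_parent[x], x

    def in_subtree(w):
        # True if w lies in the subtree rooted at best_child.
        while w is not None:
            if w == best_child:
                return True
            w = parent[w]
        return False

    cut = {w for w in parent if in_subtree(w)}
    return best_cap, cut

If best_cap - Δ < λ_V, the returned cut is reported as the new global mincut; otherwise, the original global mincut is reported.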
To design a more compact data structure for answering query cut for S-mincut compared to Theorem <ref>, we take an approach of designing a data structure that can compactly store at least one mincut for every edge. We begin by a classification of all edges of graph G. This classification not only helps in combining the approaches taken for (s,t)-mincut and global mincut but also provides a way to design a compact data structure for efficiently answering query cut for any Steiner set S. *A Classification of All Edges: An edge e_1 in G belongs to * Type-1 if both endpoints of e_1 belong to V∖ S. * Type-2 if both endpoints of e_1 belong to S. * Type-3 if exactly one endpoint of e_1 belongs to S. Given any edge e_1, we can classify e_1 into one of the above-mentioned three types in 𝒪(1) time using sets V and S. Note that edges from Type-1 allow us to extend an approach for (s,t)-mincut given by Baswana and Bhanja <cit.>. Similarly, edges from Type-2 help in extending the approach used for global mincut. However, the main challenge arises in designing a data structure for compactly storing a mincut for all edges from Type-3. We now design a compact data structure for efficiently answering query cut for each type of edges separately. §.§ An 𝒪((n-|S|)n) Space Data Structure for All Edges from Type-1 In this section, we design a data structure for answering cut for all edges from Type-1. Each edge from Type-1 has both endpoints in the set V∖ S. The number of vertices in V∖ S is n-|S|. Therefore, trivially storing a mincut for every edge would occupy 𝒪((n-|S|)^2n) space, which is 𝒪(n^3) for |S|=k for any constant k≥ 2. Exploiting the fact that the number of distinct endpoints of all edges from Type-1 is at most n-|S|, we design an 𝒪(n(n-|S|)) space data structure for all edges from Type-1 using Algorithm <ref> and Lemma <ref> as follows. Let E_1 be the set of all edges from Type-1. It follows from Lemma <ref> that, using Algorithm <ref>, it is possible to design a rooted full binary tree 𝒯_E_1 occupying 𝒪(n-|S|) space for answering query cap(e,Δ) when edge e is from Type-1. We augment each internal node μ of 𝒯_E_1 with a mincut for the edge selected by Algorithm <ref> (in Step 7) while processing node μ. The resulting structure occupies 𝒪((n-|S|)n) space and acts as a data structure for answering query cut for all edges from Type-1. Hence the following lemma holds. For any Steiner set S⊆ V, there is an 𝒪((n-|S|)n) space data structure that, given any edge e from Type-1 and any value Δ satisfying 0≤Δ≤ w(e), can report an S-mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. §.§ An 𝒪(n) Space Data Structure for All Edges from Type-2 In this section, we design a data structure for answering query cut for all edges from Type-2. For each edge from Type-2, both endpoints are Steiner vertices. So, the number of distinct endpoints of edges from Type-2 can be at most |S|. Trivially, storing a mincut for every edge from Type-2 would occupy 𝒪(|S|^2n) space, which is 𝒪(n^3) if |S|=𝒪(n). In a similar way as designing 𝒯_E_1 for all edges from Type-1 (Lemma <ref>), by using Lemma <ref> and Algorithm <ref>, it is possible to design an 𝒪(|S|n) space data structure for answering query cut for all edges from Type-2. Unfortunately, it defeats our objective because, for |S|=n or global mincuts, it occupies 𝒪(n^2) space. 
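For reference, the 𝒪(1)-time classification introduced at the beginning of this section, which routes a failed edge to the appropriate structure, can be sketched as follows, assuming the Steiner set S supports constant-time membership tests (the function name is illustrative).

def classify_edge(e, S):
    # e = (u, v); S is the Steiner set stored, e.g., as a hash set.
    u, v = e
    steiner_endpoints = (u in S) + (v in S)
    if steiner_endpoints == 0:
        return "Type-1"  # both endpoints are nonSteiner vertices
    if steiner_endpoints == 2:
        return "Type-2"  # both endpoints are Steiner vertices
    return "Type-3"      # exactly one endpoint is a Steiner vertex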
Interestingly, by exploiting the fact that both the endpoints of every edge from Type-2 are Steiner vertices, we are able to show that a Gomory Hu Tree of graph G is sufficient for answering query cut for all edges from Type-2. Gomory Hu Tree of G stores a mincut for every edge from Type-2. By Theorem <ref>, Gomory Hu Tree stores a cut of the least capacity separating every pair of vertices in G. Let (u,v) be any edge from Type-2. Suppose C is a cut of the least capacity separating u and v in Gomory Hu Tree of G. So, edge (u,v) is contributing to C. By definition of Type-2 edges, both u and v are Steiner vertices. Therefore, C is a Steiner cut in which edge (u,v) is contributing. It follows from Theorem <ref> that C is also a cut of the least capacity in G that separates u and v. So, C is also a Steiner cut of the least capacity in which edge (u,v) is contributing. Hence, C is a mincut for edge (u,v). Let e=(u,v) be any edge from Type-2. By Theorem <ref>, Gomory Hu Tree of G can determine in 𝒪(1) time the capacity of a cut C of the least capacity in G that separates u and v. By Lemma <ref>, C is a mincut for edge e. So, again by Theorem <ref>, Gomory Hu Tree can be used to report mincut C for edge e in 𝒪(|C|) time. This completes the proof of the following lemma. For any Steiner set S⊆ V, there is an 𝒪(n) space data structure that, given any edge e from Type-2 and any value Δ satisfying 0≤Δ≤ w(e), can report an S-mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. For global mincut or S=V, both endpoints of every edge are Steiner vertices. Therefore, Theorem <ref> can also be seen as a corollary of Lemma <ref>. §.§ An 𝒪((n-|S|)n) Space Data Structure for All Edges from Type-3 In this section, the objective is to design a data structure for answering query cut for all edges from Type-3. Observe that the size of the smallest set of vertices that contains all the endpoints of all edges from Type-3 can be Ω(n). Therefore, using Lemma <ref> and Algorithm <ref>, we can have an 𝒪(n^2) space data structure, which is no way better than the trivial data structure for answering query cut (Theorem <ref>). Now, each edge from Type-3 has exactly one nonSteiner endpoint. So, unlike edges from Type-2, Lemma <ref> no longer holds for edges from Type-3. This shows the limitations of the approaches taken so far in designing a data structure for answering query cut. Trivially storing a mincut for every edge from Type-3 requires 𝒪((n-|S|)n|S|) space. For |S|=n/k, any constant k≥ 2, it occupies 𝒪(n^3) space. Interestingly, we present a data structure occupying only 𝒪((n-|S|)n) space for answering query cut for all edges from Type-3. For any edge e_1=(x,u) with x∈ S from Type-3, without loss of generality, we assume that any mincut C for edge e_1 contains the Steiner vertex x, otherwise consider C. Note that the set of global mincuts and (s,t)-mincuts are closed under both intersection and union. This property was crucially exploited in designing a compact structure for storing them <cit.>. To design a compact structure for storing a mincut for every edge from Type-3, we also explore the relation between a pair of mincuts for edges from Type-3. Let A and B be mincuts for edges e_1 and e_2 from Type-3, respectively. Unfortunately, it turns out that if A crosses B, then it is quite possible that neither A∩ B nor A∪ B is a mincut for e_1 or e_2 even if both are Steiner cuts (refer to Figure <ref>(i)). This shows that mincuts for edges from Type-3 are not closed under intersection or union. 
To overcome this hurdle, we first present a partitioning of the set of edges from Type-3 based on the nonSteiner vertices as follows. Let V'⊆ V∖ S be the smallest set of nonSteiner vertices such that every edge from Type-3 has endpoint in V'. Let u be any vertex from V'. Let Type-3(u) be the set that contains all edges from Type-3 having u as one of the two endpoints. We aim to design an 𝒪(n) space data structure that can report a mincut for each edge from Type-3(u). This is because storing an 𝒪(n) space data structure for every nonSteiner vertex of G would lead to an 𝒪((n-|S|)n) space data structure. In order to design an 𝒪(n) space data structure for edges from Type-3(u), we consider the set of nearest mincuts (Definition <ref>) for edges from Type-3(u). The following lemma provides a strong reason behind the use of nearest mincuts for edges from Type-3(u). Let C∈ N(e_1) and C'∈ N(e_2) such that e_1=(x,u), e_2=(x',u) are edges from Type-3(u). Then, x'∉ C and x∉ C' if and only if C∩ C'=∅. Suppose C∩ C'≠∅. Since x∉ C' and x'∉ C, so we have x∈ C∖ C' and x'∈ C'∖ C. Evidently, C∖ C', as well as C'∖ C, is a Steiner cut. It is given that C is the nearest mincut for edge (x,u) and x∈ C∖ C'. This implies that c(C∖ C')> c(C). It follows from sub-modularity of cuts (Lemma <ref>(2)) that c(C'∖ C)<c(C'). Therefore, we get a Steiner cut C'∖ C of capacity strictly less than c(C') and edge (x',u) is a contributing edge of Steiner cut C'∖ C, a contradiction. The proof of the converse part is immediate. Let e_1=(x,u) and e_2=(x',u) be any pair of edges from Type-3(u). Let C be a nearest mincut for e_1 and C' be a nearest mincut for e_2. Lemma <ref> essentially states that if e_2 contributes to C'∖ C and e_1 contributes to C∖ C', then C must not cross C'. Now, the problem arises when one of the two edges e_1,e_2 is contributing to a nearest mincut for the other edge. Firstly, there might exist multiple nearest mincuts for an edge (refer to Figure <ref>(iii)). It seems quite possible that an edge, say e_2, is contributing to one nearest mincut for e_1 and is not contributing to another nearest mincut for e_1. Secondly, the union of a pair of nearest mincuts for an edge e_1 from Type-3(u) might not even be a Steiner cut if they cross (refer to Figure <ref>(iii)). Hence, the union of them can have a capacity strictly less than the capacity of mincut for e. So, it seems that the nearest mincuts for edges from Type-3(u) appear quite arbitrarily. It might not be possible to have an 𝒪(n) space structure for storing them. Interestingly, we are able to circumvent all the above challenges as follows. Observe that we are interested in only those edges from Type-3(u) whose failure reduces S-mincut. They are the set of all vital edges that belong to Type-3(u), denoted by VitType-3(u). By exploiting vitality of edges from VitType-3(u), we establish the following crucial insight for any pair of crossing mincuts for edges from VitType-3(u). Interestingly, the following result holds even if the union of a pair of mincuts for a pair of edges from VitType-3(u) is not always a Steiner cut (refer to Figure <ref>(ii)). Let C_1 and C_2 be mincuts for edges e_1=(x_1,u) and e_2=(x_2,u) from VitType-3(u) respectively. Steiner vertex x_2 is present in C_1 if and only if C_1∩ C_2 is a mincut for edge e_2. Suppose Steiner vertex x_2 is present in C_1. Since u is a common endpoint of both edges e_1,e_2, we have u∉ C_1∪ C_2. C_1 is a mincut for edge e_1, so, x_1∈ C_1. 
Now, there are two possibilities – either (1) x_1∉ C_2 or (2) x_1∈ C_2. We now establish each case separately. Case 1. Suppose x_1∉ C_2, in other words, x_1∈ C_1∖ C_2. It implies that C_1∖ C_2 is nonempty. We consider that C_2 is not a subset of C_1; otherwise, C_1∩ C_2=C_2 is a mincut for e_2 and the lemma holds. So, C_2∖ C_1 is also nonempty. Let c(C_1)=λ_1 and c(C_2)=λ_2. Since x_2∈ C_1∩ C_2, edge e_2 is contributing to C_1∩ C_2. Therefore, c(C_2)≤ c(C_1∩ C_2); otherwise, C_2 is not a mincut for edge e_2. Since C_1 is a Steiner cut, there must be a Steiner vertex z such that z∉ C_1. Based on the position of z with respect to cut C_2, observe that z appears either (1.1) in C_1∪ C_2 or (1.2) in C_2∖ C_1 (refer to Figure <ref>). Case 1.1. Suppose z∈C_1∪ C_2 (refer to Figure <ref>(i)). Observe that C_1∪ C_2 is a Steiner cut in which edge e_1 is contributing. So, the capacity of C_1∪ C_2 has to be at least λ_1. By sub-modularity of cuts (Lemma <ref>(1)), c(C_1∩ C_2)+c(C_1∪ C_2)≤λ_1+λ_2. It follows that c(C_1∩ C_2)≤λ_2. Since C_1∩ C_2 is a Steiner cut in which edge e_2 is contributing, therefore, the capacity of C_1∩ C_2 is exactly λ_2. Hence C_1∩ C_2 is a mincut for edge e_2. Case 1.2. We show that this case does not arise. Assume to the contrary that z∈ C_2∖ C_1 (refer to Figure <ref>(ii)). Here, we crucially exploit the fact that edge e_2 is a vital edge from Type-3(u). Let us now consider graph G∖{e_2}. Since e_2 is a vital edge in G, therefore, in graph G∖{e_2}, the capacity of S-mincut is λ_2-w(e_2) and C_2 is an S-mincut. In G, edge e_2 is also a contributing edge of C_1. Therefore, the capacity of C_1 in G∖{e_2} is λ_1-w(e_2). Without causing any ambiguity, let us denote the capacity of any cut A in G∖{e_2} by c(A). By sub-modularity of cuts (Lemma <ref>(2)), in graph G∖{e_2}, we have c(C_1∖ C_2)+c(C_2∖ C_1)≤λ_1+λ_2-2w(e_2). Recall that x_1 ∈ C_1∖ C_2 and z∈ C_2∖ C_1. So, both C_1∖ C_2 and C_2∖ C_1 are Steiner cuts in graph G∖{e_2}. Therefore, the capacity of C_2∖ C_1 is at least λ_2-w(e_2). It follows that the capacity of C_1∖ C_2 in G∖{e_2} is at most λ_1-w(e_2). We now obtain graph G by adding edge e_2 to graph G∖{e_2}. Observe that edge e_2 does not contribute to Steiner cut C_1∖ C_2. Therefore, the capacity of cut C_1∖ C_2 remains the same in graph G, which is at most λ_1-w(e_2). Since e_2 is a vital edge, so, w(e_2)>0. This implies that we have λ_1-w(e_2)<λ_1. Therefore, for cut C_1∖ C_2 in G, C_1∖ C_2 is a Steiner cut and has a capacity that is strictly less than λ_1. Moreover, edge e_1 is contributing to C_1∖ C_2. So, C_1 is not a mincut for edge e_1, a contradiction. Case 2. In this case, we have x_1∈ C_2. Since C_1 and C_2 both are Steiner cuts, observe that either (2.1) there is at least one Steiner vertex z in C_1∪ C_2 or (2.2) there exists a pair of Steiner vertices z_1,z_2 such that z_1∈ C_1∖ C_2 and z_2∈ C_2∖ C_1. The proof of case (2.1) is along similar lines to the proof of case (1.1). So, let us consider case (2.2). Edges e_1 and e_2 are contributing to both C_1 and C_2. It implies that c(C_1)=c(C_2). Let c(C_1) (or c(C_2)) be λ. Let us consider graph G∖{e_2}. Since e_2 is a vital edge, the capacity of S-mincut is λ-w(e_2). The capacity of cuts C_1 and C_2 in G∖{e_2} is λ-w(e_2) since e_2 contributes to both of them. Without causing any ambiguity, let us denote the capacity of any cut A in G∖{e_2} by c(A). By sub-modularity of cuts (Lemma <ref>(2)), c(C_1∖ C_2)+c(C_2∖ C_1)≤ 2λ-2w(e_2). Now, it is given that z_1∈ C_1∖ C_2 and z_2∈ C_2∖ C_1. 
Therefore, in G∖{e_2}, C_1∖ C_2 and C_2∖ C_1 are Steiner cuts. It follows that c(C_1∖ C_2), as well as c(C_2∖ C_1), is exactly λ-w(e_2). Let us obtain graph G from G∖{e_2}. Observe that e_2 contributes neither to C_1∖ C_2 nor to C_2∖ C_1. Therefore, the capacity of cuts C_1∖ C_2 and C_2∖ C_1 remains the same in G, which is λ-w(e_2). Since e_2 is vital, w(e_2)>0. So, λ-w(e_2)<λ_S<λ, where λ_S is the capacity of S-mincut in G. Hence, we have a Steiner cut of capacity strictly smaller than S-mincut, a contradiction. We now prove the converse part. Suppose C_1∩ C_2 is a mincut for edge e_2. Since C_1∩ C_2⊆ C_1, u belongs to C_1∩ C_2. Hence, x_2 must belong to C_1∩ C_2 because e_2 has to be a contributing edge of C_1∩ C_2. This completes the proof. For any pair of nonSteiner vertices a,b∈ V', it turns out that Lemma <ref> does not necessarily hold (In Figure <ref>(i), nearest mincut A of (s_1,a) contains s_2 but nearest mincut B for (s_2,b) crosses A). So, a collaboration between mincuts from VitType-3(v) and VitType-3(v'), for any pair v,v'∈ V', does not seem possible. Recall that our objective is to design an 𝒪(n) space structure for storing a mincut for every edge from VitType-3(u). By Lemma <ref>, the set of nearest mincuts for all edges from VitType-3(u) satisfies Disjoint property. By exploiting Lemma <ref>, we now establish two interesting properties (uniqueness and subset) satisfied by the nearest mincuts for all edges from VitType-3(u). These properties help in designing an 𝒪(n) space data structure for storing them. We first establish the uniqueness property in the following lemma. For any edge e=(x,u) from VitType-3(u), the nearest mincut for edge e is unique. Suppose C_1 and C_2 are a pair of distinct nearest mincuts for edge e. It follows from Lemma <ref> that C=C_1∩ C_2 is a mincut for edge e. So, C is a proper subset of both C_1 and C_2, which contradicts that C_1 and C_2 are nearest mincuts for edge e. Although the nearest mincut for each edge from VitType-3(u) is unique (Lemma <ref>), the Uniqueness Property alone can only guarantee a data structure occupying 𝒪(n|S|) space for all edges from VitType-3(u). To achieve a better space, we now explore the relation between the nearest mincuts for a pair of edges from VitType-3(u). Since nearest mincut for an edge from VitType-3(u) is unique (Lemma <ref>), without causing any ambiguity, we consider N(e) to denote the unique nearest mincut for an edge e from VitType-3(u). Let e_1=(x_1,u) and e_2=(x_2,u) be a pair of edges from VitType-3(u). If neither x_1∈ N(e_2) nor x_2∈ N(e_1), then, by Lemma <ref>, N(e_1) is disjoint from N(e_2). For the other cases when x_1∈ N(e_2) or x_2∈ N(e_1), we now establish the following subset property that states N(e_1) is either identical to N(e_2) or one of {N(e_1), N(e_2)} contains the other. Let (x,u) and (x',u) be a pair of edges from VitType-3(u). Then, x'∈ N((x,u)) if and only if N((x',u))⊆ N((x,u)). Let C=N((x,u)) and C'=N((x',u)). Let us assume to the contrary that C'⊈ C. It is given that x'∈ C. Therefore, by Lemma <ref>, C∩ C' is also a mincut for edge (x',u). This contradicts that C' is a nearest mincut for edge (x',u). Since N((x',u))⊆ N((x,u)) and (x',u) is a contributing edge of N((x',u)) with x'∈ N((x',u)), therefore, x' also belong to N((x,u)). This completes the proof. Let C_1 and C_2 be a pair of nearest mincuts for edges (x_1,u) and (x_2,u) from VitType-3(u), where x_1,x_2∈ S. 
It follows from Lemma <ref> and Lemma <ref> that there are three possibilities for C_1 and C_2 – C_1 is the same as C_2, one of C_1 and C_2 is a proper subset of the other, and C_1 is disjoint from C_2. Therefore, for any vertex u∈ V', the set containing the nearest mincuts for every edge from VitType-3(u) forms a Laminar family ℒ(u) (Definition <ref>) on set V. It follows from Lemma <ref> that there is an 𝒪(n) space tree 𝒯_ℒ(u) that satisfies the following property. For each edge (x,u) from VitType-3(u) with x∈ S, SubTree(x) of tree 𝒯_ℒ(u) is the nearest mincut for edge (x,u). *Data Structure ℱ_3 for all vital edges from Type-3: For each nonSteiner vertex u∈ V', we construct a tree 𝒯_ℒ(u) based on the laminar family ℒ(u) consisting of the nearest mincuts for all edges from VitType-3(u). Since V' contains nonSteiner vertices of G only, there can be at most n-|S| vertices in V'. This implies that the overall space occupied by the data structure is 𝒪(n(n-|S|)). *Reporting a mincut for a vital edge from Type-3 using ℱ_3: Given any vital edge e=(x,u) from Type-3, where x∈ S and u∈ V∖ S, by following Lemma <ref>, we report the set of vertices stored in SubTree(x) of tree 𝒯_ℒ(u) as the nearest mincut for edge (x,u). Note that given any edge e from Type-3 and any value Δ satisfying 0≤Δ≤ w(e), by using the data structure of Lemma <ref>, we can determine in 𝒪(1) time whether the capacity of S-mincut reduces after reducing w(e) by Δ. This leads to the following lemma for answering query cut for all edges from Type-3. For any Steiner set S⊆ V, there is an 𝒪((n-|S|)n) space data structure that, given any edge e from Type-3 and any value Δ satisfying 0≤Δ≤ w(e), can report an S-mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. Lemma <ref>, Lemma <ref>, and Lemma <ref> complete the proof of Theorem <ref>(2). The pseudo-code for answering query cut is provided in Algorithm <ref>. Algorithm <ref> is invoked with the failed edge e and the change in capacity Δ of edge e satisfying 0≤Δ≤ w(e). In Step <ref> of Algorithm <ref>, the change in capacity (Δ) is required to determine if edge e is a vital edge. Otherwise, Algorithm <ref> fails to report the valid S-mincut after reducing the capacity of edge e. § LOWER BOUND In this section, we provide lower bounds for the following three problems. Given any undirected weighted graph G, designing a data structure that, after failure of any edge in G, can (1) report the capacity of S-mincut, (2) determine whether the capacity of S-mincut has changed, and (3) report an S-mincut. §.§ Reporting Capacity For any n≥ 2, let M be any ⌊n/2⌋×⌊n+1/2⌋ matrix of integers such that for any {i,j} with i∈ [⌊n/2⌋] and j∈ [⌊n+1/2⌋], M[i, j] stores an integer in the range [1,n^c] for some constant c>0. Given matrix M, we construct the following graph G(M) (refer to Figure <ref> for better understanding). *Construction of G(M): The vertex set V_M of G(M) consists of n vertices. Let S_M⊆ V_M be a Steiner set such that S_M contains at least two vertices. For all the rows of matrix M, there is a set L={a_1,a_2,…, a_⌊n/2⌋} containing ⌊n/2⌋ vertices such that ⌊|S_M|/2⌋ vertices of S_M and ⌊n-|S_M|/2⌋ vertices of V_M∖ S_M belong to L. For all the columns of matrix M, there is a set R={b_1,b_2,…,b_⌊n+1/2⌋} containing ⌊n+1/2⌋ vertices such that ⌊|S_M|+1/2⌋ vertices of S_M and ⌊n-|S_M|+1/2⌋ vertices of V_M∖ S_M belong to R. The set of edges of G(M) are defined as follows. 
For every {i,j} with i∈ [⌊n/2⌋] and j∈ [⌊n+1/2⌋], there is an edge between vertex a_i of set L and vertex b_j of set R of capacity w_ij=M[i,j]. For every pair of vertices from set L, there is an edge of infinite capacity. Similarly, for every pair of vertices from set R, there is an edge of infinite capacity. Let λ be the capacity of Steiner mincut for graph G(M) for Steiner set S_M. The following lemma establishes a relation between graph G(M) and Matrix M. For any {i,j} with i∈ [⌊n/2⌋] and j∈ [⌊n+1/2⌋], after the failure of edge (a_i,b_j), the capacity of Steiner mincut of graph G_M is λ-M[i,j]. The capacity of Steiner mincut of G(M) is λ. Let us consider the cut C=L. It follows from the construction of G(M) that each of two sets L and R contains at least one Steiner vertex. Therefore, cut C is a Steiner cut. Moreover, since C is the only cut of finite capacity, so, it is the only Steiner mincut of G(M). The set of contributing edges of C is the set of edges that lie between set L and set R. Therefore, after the failure of edge (a_i,b_j), the capacity of C reduces by the amount w_ij=M[i,j]. Hence the resulting capacity of Steiner mincut is λ-M[i,j]. Let D(G(M)) be a data structure that can report the capacity of Steiner mincut after the failure of any edge in G. By Lemma <ref>, data structure D(G(M)) can also report M[i,j] for any {i,j} with i∈ [⌊n/2⌋] and j∈ [⌊n+1/2⌋] as follows. It reports λ-λ' as the value of M[i,j], where λ' is the value reported by D(G(M)) after the failure of edge (a_i,b_j) in G(M). It is easy to observe that there are (2^(⌊n/2⌋)(⌊n+1/2⌋)clog n) different instances of matrix M is possible. By Lemma <ref>, for every pair of different instances of matrix M, the encoding of the corresponding data structures has to be different. Therefore, there is an instance of the matrix M such that the encoding of the corresponding data structure requires Ω(n^2log n) bits of space. This completes the proof of Theorem <ref>. §.§ Determining the Change in Capacity Suppose H=(L_H,R_H,E_H) is a undirected unweighted bipartite graph on n=|L_H∪ R_H| vertices and m=|E_H| edges such that L_H contains ⌊n/2⌋ and R_H contains ⌊n+1/2⌋ vertices. Let ℬ be the class of all bipartite graphs H. Given an instance B of ℬ, we construct the following graph G(B). *Construction of G(B): Graph G(B)=(V_B,E_B) is a undirected weighted graph. The vertex set of G(B) is the same as B. That is, there are two subsets of V_B, one is L_H and the other is R_H. To obtain G(B), the following edges are added to B. For every pair of vertices {a,b} in L_H (likewise in R_H), we add an edge (a,b) of capacity infinity. For every pair of vertices {a,b} with a∈ L_H and b∈ R_H, add an edge (a,b) in G(B) and the capacity of (a,b) is given as follows. If (a,b) is an edge in B, then the capacity of (a,b) in G(B) is 1, otherwise, the capacity of (a,b) in G(B) is 0. Let S⊆ V_B be a Steiner set containing at least two and at most n vertices. There is at least one Steiner vertex in L_H and at least one Steiner vertex in R_H. The following lemma establishes a close relationship between the existence of an edge in graph B and the problem of determining whether the capacity of S-mincut has changed after the failure of any edge in graph G(B). An edge (a,b) is present in B if and only if upon failure of an edge (a,b) with a∈ L_H and b∈ R_H, the capacity of S-mincut has changed in G(B). Suppose edge (a,b) is present in B. It follows from the construction of G(B) that there is an edge (a,b) in G(B) of capacity 1. 
In graph G(B), the set of vertices belonging to C=L_H (or the complement set, that is, R_H) is the only cut of finite capacity. Moreover, it is ensured in the construction that both L_H and R_H contain at least one Steiner vertex of G(B). Therefore, C is the only Steiner cut of finite capacity. This implies C is a Steiner mincut of G(B). Moreover, edge (a,b) is a contributing edge of C of capacity 1. As a result, after the failure of edge (a,b) in G(B), the capacity of Steiner mincut decreases. Suppose upon failure of an edge (a,b), the capacity of S-mincut has changed in G(B). As established in the proof of forward direction, C=L_H is the only Steiner mincut of G(B). Moreover, since failure of edge (a,b) changes the capacity of Steiner mincut, it implies that (a,b) cannot have zero capacity. So, edge (a,b) has capacity 1 in G(B). Therefore, by the construction of G(B), edge (a,b) definitely exists in B. Let D(G(B)) be any data structure for graph G(B) that, after the failure of any edge in G(B), can determine whether the capacity of S-mincut has changed. It follows from Lemma <ref> that, for any given pair of vertices u∈ L_H and v∈ R_H, data structure D(G(B)) can be used to determine whether edge (u,v) is present in B. For a pair of instances B_1 and B_2 of ℬ, there is at least one edge e such that e is present in B_1 but not in B_2 or vice versa. Therefore, by Lemma <ref>, the encoding of data structure D(G(B_1)) must be different from the encoding of data structure D(G(B_2)). It is easy to observe that there are Ω(2^(⌊n/2⌋)(⌊n+1/2⌋)) different possible instances of ℬ. As a result, there exists at least one instance B from ℬ such that the encoding of D(G(B)) requires Ω(n^2) bits of space. This completes the proof of the following theorem. Let G=(V,E) be an undirected weighted graph on n vertices. For every Steiner set S⊆ V, any data structure that can determine whether the capacity of Steiner mincut is changed after the failure of any edge from G must occupy Ω(n^2) bits of space in the worst case, irrespective of the query time. §.§ Reporting Cut Let H=(V_H,E_H) be an undirected weighted graph on n vertices with a Steiner set S_H⊆ V_H. Given graph H, we construct the following graph G_s(H) (refer to Figure <ref> for better readability). *Construction of G_s(H): Let λ be the capacity of Steiner mincut of graph H. Let α=max{c(C(e))-w(e)} for every vital edge e∈ G, where C(e) denote a mincut for edge e in H. Graph G_s(H) is obtained by adding one vertex s and an edge (s,a) of capacity λ'=λ+α/2 to H where a is any vertex of H. Set S=S_H∪{s} is the Steiner set for graph G_s(H). Without loss of generality, assume that, for any cut C in H and in G_s(H), vertex a belongs to C; otherwise, consider C. Let C_m=V_H be the Steiner cut of capacity λ' in G_s(H). By construction of G_s(H), the following lemma holds except Steiner cut C_m. Except Steiner cut C_m, C is a Steiner cut in H if and only if C∪{s} is a Steiner cut in G_s(H). Moreover, the capacity of C in H is the same as the capacity of C∪{s} in G_s(H). In graph H, for every vital edge e and a mincut C(e) for e, by Definition <ref> and Definition <ref>, c(C(e))-w(e)<λ. Hence, the value of α is strictly smaller than the Steiner mincut of H, that is, α<λ. Moreover, since λ' is the average of λ and α, therefore, λ'<λ. In graph G_s(H), s is a Steiner vertex, and hence, C_m=V_H defines a Steiner cut because S∖{s}⊆ V_H. 
Also, for every Steiner cut C except C_m in G_s(H), by Lemma <ref>, there is a Steiner cut C∖{s} in H having the same capacity as C. So, except for C_m, the capacity of every Steiner cut in G_s(H) is at least λ. Therefore, the capacity of the Steiner mincut in G_s(H) is λ'. Moreover, C_m is the only Steiner mincut in G_s(H). Let us now prove the following lemma. Except for edge (s,a), an edge e is a vital edge in G_s(H) if and only if edge e is a vital edge in H. Suppose edge e is a vital edge in G_s(H). By construction, except for edge (s,a) and vertex s, the graph G_s(H) is the same as graph H. Moreover, by Definition <ref>, after the removal of any vital edge e in G_s(H), the capacity of the Steiner mincut of G_s(H) is strictly less than λ'. It follows from Lemma <ref> that the capacity of the mincut for edge e in H is the same as the capacity of the mincut for edge e in G_s(H). Therefore, after the removal of edge e from H, the capacity of the mincut for edge e in H is also strictly less than λ'. Since λ'<λ, the capacity of the Steiner mincut in H is reduced. So, every vital edge e of G_s(H), except edge (s,a), is also a vital edge in H. This completes the proof of the forward direction. Let us now prove the converse part. Suppose e is a vital edge in H. So, after the removal of edge e from H, the capacity of the Steiner mincut in H is λ-w(e). By the construction of G_s(H), for every vital edge e' in H, the capacity of the Steiner mincut of G_s(H) is strictly greater than λ-w(e'), that is, λ'>λ-w(e'). Moreover, by Lemma <ref>, for every mincut C for vital edge e in H, there is a mincut C∪{s} in G_s(H) such that the capacity of C in H is the same as the capacity of C∪{s} in G_s(H). So, the removal of a vital edge e from G_s(H) reduces the capacity of the Steiner mincut in G_s(H) from λ' to λ-w(e). Therefore, every vital edge e in H is also a vital edge in G_s(H). We now establish the following interesting relation between graph H and graph G_s(H). After the failure of any edge (x,y)≠(s,a) in G_s(H), Steiner cut C_m does not remain the Steiner mincut of G_s(H) if and only if, after the failure of edge (x,y) in H, the capacity of the Steiner mincut has changed in H. Suppose, after the failure of an edge (x,y)≠(s,a) in G_s(H), C_m does not remain the Steiner mincut of G_s(H). This implies that, after the failure of edge (x,y), the capacity of the Steiner mincut of G_s(H) becomes strictly less than λ'. So, edge (x,y) is a vital edge in G_s(H). By Lemma <ref>, since (x,y)≠(s,a), edge (x,y) is a vital edge in H as well. Therefore, after the failure of edge (x,y) in H, the capacity of the Steiner mincut has changed in H. This completes the proof of the forward direction. Let us consider the converse part. Suppose, after the failure of edge (x,y) in H, the capacity of the Steiner mincut has changed in H. This implies that edge (x,y) is a vital edge in H. By Lemma <ref>, edge (x,y) is also a vital edge in graph G_s(H). Since edge (s,a) is not present in H, we have (x,y)≠(s,a), and therefore the capacity of cut C_m remains unaffected. As a result, C_m does not remain the Steiner mincut of graph G_s(H) after the failure of edge (x,y). Let D be a data structure that, after the failure of any edge from an undirected weighted graph, can report a Steiner mincut C in 𝒪(|C|) time. By Lemma <ref>, we can use data structure D to determine whether the capacity of the Steiner mincut of graph H has changed after the failure of any edge e in H as follows. We construct graph G_s(H) from H and build the data structure D for graph G_s(H). Let us denote it by D(G_s(H)).
By construction, edge e is also present in G_s(H). Upon the failure of any edge e in H, we query data structure D(G_s(H)) for edge e. Suppose D(G_s(H)) returns a Steiner mincut C in 𝒪(|C|) time after the failure of edge e. We can also determine in 𝒪(|C|) time whether C_m=C, where C_m is the only Steiner mincut of G_s(H). By Lemma <ref>, if C_m=C, then the capacity of the Steiner mincut of graph H has not changed; otherwise, it has changed. Therefore, this argument, along with Theorem <ref>, completes the proof of Theorem <ref>. § CONCLUSION We have designed the first Sensitivity Oracle for Steiner mincuts in weighted graphs. It also includes the first Sensitivity Oracle for global mincut in weighted graphs. Interestingly, our Sensitivity Oracle occupies space subquadratic in n when |S| approaches n and also achieves optimal query time. On the other hand, it matches the best-known existing bounds on both space and query time for (s,t)-mincut <cit.>. The quadratic space single edge Sensitivity Oracle in Theorem <ref> does not assume that the capacity of the failed edge is known. We have also complemented this result with matching lower bounds. It would be interesting to see whether there is a matching lower bound on space and query time for single edge Sensitivity Oracles for Steiner mincuts assuming the weight of the failed edge is known. Finally, the main structure we obtain in Theorem <ref>, which breaks the quadratic bound, is quite simple, as it is a forest of 𝒪(n-|S|) trees. We strongly believe that our techniques and structures will be quite useful for addressing several future problems, including the problem of designing a Sensitivity Oracle for S-mincut that can handle the failure of multiple edges.
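To make the reduction via G_s(H) used above concrete, the following is a small, self-contained brute-force sketch in Python. It is illustrative only: it enumerates all cuts, which is exponential and exactly what the paper's data structures avoid; the function names, the toy graph, and the reading of a vital edge as an edge whose removal strictly decreases the Steiner mincut capacity are our own assumptions and not taken from the paper's definitions.

from itertools import combinations

def steiner_mincut_capacity(vertices, edges, S):
    # edges: dict mapping frozenset({u, v}) -> capacity.
    # Brute force over all Steiner cuts (exponential; for illustration only).
    best = float('inf')
    vs = list(vertices)
    for r in range(1, len(vs)):
        for C in combinations(vs, r):
            C = set(C)
            if not (S & C) or not (S - C):
                continue                      # not a Steiner cut
            cap = sum(w for e, w in edges.items() if len(e & C) == 1)
            best = min(best, cap)
    return best

def build_G_s(vertices, edges, S_H, a):
    # Add an auxiliary Steiner vertex s joined to a by an edge of capacity
    # lam' = (lam + alpha)/2, following the construction of G_s(H) above.
    lam = steiner_mincut_capacity(vertices, edges, S_H)
    alpha = 0.0
    for e in edges:                           # alpha = max post-failure capacity over vital edges
        rest = {f: c for f, c in edges.items() if f != e}
        after = steiner_mincut_capacity(vertices, rest, S_H)
        if after < lam:                       # e is vital: removing it lowers the S_H-mincut
            alpha = max(alpha, after)
    lam_prime = (lam + alpha) / 2
    new_edges = dict(edges)
    new_edges[frozenset({'s*', a})] = lam_prime
    return vertices | {'s*'}, new_edges, S_H | {'s*'}, lam_prime

# Toy instance: edge (1,4) is not vital; every other edge is.
V = {1, 2, 3, 4}
E = {frozenset({1, 2}): 3.0, frozenset({2, 3}): 1.0, frozenset({3, 4}): 2.0,
     frozenset({1, 4}): 1.0, frozenset({2, 4}): 10.0}
S_H = {1, 3}
Vg, Eg, Sg, lam_prime = build_G_s(V, E, S_H, a=1)
lam = steiner_mincut_capacity(V, E, S_H)
for e in E:
    changed_in_H = steiner_mincut_capacity(V, {f: c for f, c in E.items() if f != e}, S_H) < lam
    changed_in_Gs = steiner_mincut_capacity(Vg, {f: c for f, c in Eg.items() if f != e}, Sg) < lam_prime
    assert changed_in_H == changed_in_Gs      # the lemma relating H and G_s(H)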
In the real world, networks (graphs) are often subject to the failure of edges and vertices due to a variety of factors, such as physical damage, interference, or other disruptions. This can lead to changes in the solution to several graph problems. While these failures can happen at any location in the network at any time, they are typically short-lived. Naturally, this calls for compact data structures that can efficiently report the solution to the given graph problem (without computing it from scratch) once any failure has occurred. Such data structures are known as Sensitivity Oracles. There exist elegant Sensitivity Oracles for many fundamental graph problems, such as shortest paths <cit.>, reachability <cit.>, traversals <cit.>, etc. The minimum cut of a graph is also a fundamental concept of graph theory. Moreover, it has a variety of practical applications in the real world <cit.>. Designing Sensitivity Oracles for various minimum cuts of a graph has been an emerging field of research for the past few decades <cit.>. There are two well-known mincuts of a graph: the global mincut and the (s,t)-mincut. Here, we design the first Sensitivity Oracle for global mincut in undirected weighted graphs that can handle the failure of any edge. The concept of Steiner mincut is also well-studied in the area of minimum cuts <cit.>; moreover, it includes both global mincut and (s,t)-mincut as special cases. In this article, as our main result, we present the first Sensitivity Oracle for Steiner mincut for handling the failure of any edge in undirected weighted graphs. Interestingly, our result bridges the gap between the two extreme scenarios of Steiner mincut while matching their bounds, namely, (s,t)-mincut <cit.> and global mincut (designed in this article). In addition, it also provides the first generalization from unweighted graphs <cit.> to weighted graphs. Let G=(V,E) be an undirected graph on n=|V| vertices and m=|E| edges with non-negative real values assigned as the capacities of edges. We denote the capacity of an edge e by w(e). Let S⊆ V be a Steiner set of G such that |S|≥ 2. A vertex s is called a Steiner vertex if s∈ S; otherwise, s is called a nonSteiner vertex. A nonempty set C⊂ V is said to be a Steiner cut if there is at least one pair of Steiner vertices s,s' such that s∈ C and s'∉ C. For S=V, a Steiner cut is a (global) cut. Similarly, for S={s,t}, a Steiner cut is an (s,t)-cut. A cut C is said to separate a pair of vertices u,v if u∈ C and v∈ V∖ C, or vice versa. An edge e=(u,v) is said to contribute to a cut C if C separates the endpoints u,v of e. The capacity of a cut C, denoted by c(C), is the sum of the capacities of all contributing edges of C. A Steiner cut of the least capacity is known as the Steiner mincut, denoted by S-mincut. Let λ_S be the capacity of S-mincut. The problem of designing a Sensitivity Oracle for S-mincut for handling the failure of any edge is defined as follows. For any graph G, a single edge Sensitivity Oracle for Steiner mincut is a compact data structure that can efficiently report a Steiner mincut and its capacity after the failure of any edge in G. For unweighted graphs, there exist single edge Sensitivity Oracles for global mincut <cit.>, (s,t)-mincut <cit.>, and Steiner mincut <cit.>. Unfortunately, for weighted graphs, in the area of minimum cuts, the only existing results are single edge Sensitivity Oracles for (s,t)-mincut <cit.>. For undirected weighted graphs, Ausiello et al.
<cit.>, exploiting the Ancestor tree data structure of Cheng and Hu <cit.>, designed the first single edge Sensitivity Oracle for (s,t)-mincut. Their Sensitivity Oracle occupies 𝒪(n^2) space. After the failure of any edge, it can report an (s,t)-mincut C and its capacity in 𝒪(|C|) and 𝒪(1) time, respectively. Recently, Baswana and Bhanja <cit.> complemented this result by showing that Ω(n^2log n) bits of space is required in the worst case, irrespective of the query time. For Steiner mincuts, it follows from the above discussion that the existing Sensitivity Oracles are either for undirected unweighted graphs or only for a special case, when |S|=2, in weighted graphs. Therefore, to provide a generalization of these results to any Steiner set, the following is an important question to raise. Does there exist a single edge Sensitivity Oracle for S-mincut in undirected weighted graphs? We show that the approach taken by Ausiello et al. <cit.> can be generalized from S={s,t} to any set S⊆ V. This answers the above-mentioned question in the affirmative and leads to the following result. For any undirected weighted graph G on n=|V| vertices, for every Steiner set S, there exists an 𝒪(n^2) space data structure that, after the failure of any edge in G, can report an S-mincut C and its capacity in 𝒪(|C|) time and 𝒪(1) time respectively. The space and query time of the Sensitivity Oracle in Theorem <ref> match with the existing optimal results for (s,t)-mincut <cit.>. The lower bound of Ω(n^2log n) bits of space in <cit.> is only for |S|=2. However, to the best of our knowledge, no lower bound is known for any |S|>2. Therefore, the main question that we address in this article arises quite naturally as follows. For undirected weighted graphs, does there exist a single edge Sensitivity Oracle for S-mincut that breaks the quadratic bound on space and still achieves optimal query time if |S|>2? §.§ Our Results A Sensitivity Oracle in a weighted graph addresses queries in a more generic way <cit.>. Given any edge e and any value Δ satisfying Δ≥ 0, the aim is to efficiently report the solution of a given problem after reducing the capacity of edge e by Δ. In this generic setting, we first design an 𝒪(n) space single edge Sensitivity Oracle for global mincut that also achieves optimal query time. Now, in order to bridge the gap between the two extreme scenarios of Steiner set (|S|=n and |S|=2) while matching their bounds, we present our main result that breaks the 𝒪(n^2) space bound of Theorem <ref>, and answers Question <ref> in the affirmative. Let G=(V,E) be an undirected weighted graph on n=|V| vertices and m=|E| edges. For any Steiner set S of G, * there is an 𝒪(n) space rooted tree 𝒯(G) that, given any edge e∈ E and any value Δ satisfying 0≤Δ≤ w(e), can report the capacity of S-mincut in 𝒪(1) time after reducing the capacity of edge e by Δ and * there is an 𝒪(n(n-|S|+1)) space data structure ℱ(G) that, given any edge e∈ E and any value Δ satisfying 0≤Δ≤ w(e), can report an S-mincut C in 𝒪(|C|) time after reducing the capacity of edge e by Δ. For any ϵ∈ [0,1), the space occupied by the single edge Sensitivity Oracle for S-mincut in Theorem <ref>(2) is subquadratic, that is 𝒪(n^1+ϵ), for |S|=n-n^ϵ+1. Moreover, it approaches to 𝒪(n) as |S| tends to n. In particular, for |S|=n-k, for any constant k≥ 0, it occupies only 𝒪(n) space. Observe that our results in Theorem <ref> interestingly match the bounds on both space and query time for the two extreme scenarios of the Steiner set. 
On one extreme (|S|=n), it occupies 𝒪(n) space for global mincut. On the other extreme (|S|=2), it occupies 𝒪(n^2) space, which matches the best-known existing results for (s,t)-mincut <cit.>. Finally, notice that the time taken by our Sensitivity Oracle to answer any query is also worst-case optimal. We also provide lower bounds on both the space and the query time of Sensitivity Oracles for S-mincut. Our first lower bound is for reporting the capacity of S-mincut, and our second lower bound is for reporting an S-mincut. Let D be any data structure that can report the capacity of Steiner mincut after the failure of any edge for undirected weighted graphs on n vertices. Data structure D must occupy Ω(n^2log n) bits of space in the worst case, irrespective of the query time and the size of the Steiner set. For reporting the capacity of S-mincut, Theorem <ref> provides a generalization of the existing lower bound on both space and time for (s,t)-mincut by Baswana and Bhanja <cit.>. However, for reporting an S-mincut, no lower bound on the space or query time of a single edge Sensitivity Oracle was known to date, even for the two extreme scenarios of the Steiner set. So, the following theorem is the first lower bound for reporting an S-mincut after the failure of any edge. Let D be any data structure that can report a Steiner mincut C in 𝒪(|C|) time after the failure of any edge for undirected weighted graphs on n vertices. Data structure D must occupy Ω(n^2) bits of space in the worst case, irrespective of the size of the Steiner set. It is assumed in Theorem <ref> that the query edge e is present in G and that the change in capacity (that is, Δ) provided with the query is at most w(e). So, the lower bounds of Ω(n^2) bits of space in Theorem <ref> and Theorem <ref> do not violate the subquadratic space data structures in Theorem <ref>. Moreover, the assumption in Theorem <ref> seems practically justified. This is because, as discussed in <cit.>, in the real world, the capacity of an edge reduces only if the edge actually exists in the graph, and furthermore, it can reduce by a value at most the capacity of the edge. §.§ Related Works In their seminal works, Dinitz and Vainshtein <cit.> designed an 𝒪(min{nλ_S,m}) space data structure, known as the Connectivity Carcass, for storing all S-mincuts of an unweighted undirected graph. It can report an S-mincut in 𝒪(m) time and its capacity in 𝒪(1) time. Baswana and Pandey <cit.>, using the Connectivity Carcass as the foundation, designed an 𝒪(n) space single edge Sensitivity Oracle for S-mincut in undirected unweighted graphs that also reports an S-mincut in 𝒪(n) time. Their result matches the bounds on both space and time of the existing results for the two extreme scenarios of S-mincut, namely, (s,t)-mincut <cit.> and global mincut <cit.>. The result in <cit.> also acts as the foundation of single edge Sensitivity Oracles for all-pairs mincut <cit.>. For directed weighted graphs, Baswana and Bhanja <cit.> presented a single edge Sensitivity Oracle for (s,t)-mincuts that matches both the space and query time of the results for undirected weighted graphs <cit.>. Providing a generalization from the two extreme scenarios of the Steiner set (S=V and |S|=2) has also been addressed for various problems, namely, computing Steiner mincut <cit.>, Steiner connectivity augmentation and splitting-off <cit.>, and the construction of a cactus graph for Steiner mincuts <cit.>. §.§ Organization of the Article This article is organized as follows. Section <ref> contains the basic preliminaries.
We first construct an 𝒪(n^2) space single edge Sensitivity Oracle for Steiner mincut in Section <ref>. In Section <ref>, we design an 𝒪(n) space single edge Sensitivity Oracle for reporting only the capacity of Steiner mincut. A linear space single edge Sensitivity Oracle for global mincut is designed in Section <ref>. Our main result on the subquadratic space single edge Sensitivity Oracle for Steiner mincut is developed in Section <ref>. The lower bound results are given in Section <ref>. Finally, we conclude in Section <ref> with a couple of open problems.
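As a concrete illustration of the gadget G(M) behind the Ω(n^2log n) lower bound discussed earlier, here is a small self-contained Python sketch. It is for intuition only: the brute-force Steiner mincut routine enumerates all cuts, we take every vertex as a Steiner vertex (one valid choice for S_M), and the vertex labels and the toy matrix are our own, not part of the paper.

from itertools import combinations

def build_G_of_M(M):
    # Left vertices a_i, right vertices b_j, an edge (a_i, b_j) of capacity
    # M[i][j], and edges of infinite capacity inside each side; L versus R is
    # then the only Steiner cut of finite capacity.
    rows, cols = len(M), len(M[0])
    L = [('a', i) for i in range(rows)]
    R = [('b', j) for j in range(cols)]
    edges = {}
    for i in range(rows):
        for j in range(cols):
            edges[frozenset({('a', i), ('b', j)})] = float(M[i][j])
    for side in (L, R):
        for u, v in combinations(side, 2):
            edges[frozenset({u, v})] = float('inf')
    return set(L) | set(R), edges

def steiner_mincut_capacity(vertices, edges, S):
    # Brute force over all Steiner cuts (exponential; for illustration only).
    best = float('inf')
    vs = list(vertices)
    for r in range(1, len(vs)):
        for C in combinations(vs, r):
            C = set(C)
            if not (S & C) or not (S - C):
                continue
            best = min(best, sum(w for e, w in edges.items() if len(e & C) == 1))
    return best

# Failing edge (a_i, b_j) lowers the Steiner mincut by exactly M[i][j], so any
# oracle answering capacity queries after a single edge failure must encode
# every entry of M; this is the information-theoretic core of the lower bound.
M = [[5, 1, 7],
     [2, 9, 4]]
V, E = build_G_of_M(M)
S = V                                   # every vertex is a Steiner vertex
lam = steiner_mincut_capacity(V, E, S)
for i in range(len(M)):
    for j in range(len(M[0])):
        failed = {e: w for e, w in E.items() if e != frozenset({('a', i), ('b', j)})}
        assert lam - steiner_mincut_capacity(V, failed, S) == M[i][j]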
http://arxiv.org/abs/2409.18067v1
20240926171237
Topological chiral superconductivity
[ "Minho Kim", "Abigail Timmel", "Long Ju", "Xiao-Gang Wen" ]
cond-mat.str-el
[ "cond-mat.str-el", "cond-mat.supr-con" ]
"
http://arxiv.org/abs/2409.17126v1
20240925174220
Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision, Physics Simulation, and a Robot with Reset
[ "Andrew Goldberg", "Kavish Kondap", "Tianshuang Qiu", "Zehan Ma", "Letian Fu", "Justin Kerr", "Huang Huang", "Kaiyuan Chen", "Kuan Fang", "Ken Goldberg" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.LG" ]
Blox-Net: Generative Design-for-Robot-Assembly Using VLM Supervision, Physics Simulation, and a Robot with Reset ============================================================================================== § ABSTRACT Generative AI systems have shown impressive capabilities in creating text, code, and images. Inspired by the rich history of research in industrial “Design for Assembly”, we introduce a novel problem: Generative Design-for-Robot-Assembly (GDfRA). The task is to generate an assembly based on a natural language prompt (e.g., “giraffe”) and an image of available physical components, such as 3D-printed blocks. The output is an assembly, a spatial arrangement of these components, and instructions for a robot to build this assembly. The output must 1) resemble the requested object and 2) be reliably assembled by a 6 DoF robot arm with a suction gripper. We then present Blox-Net, a GDfRA system that combines generative vision language models with well-established methods in computer vision, simulation, perturbation analysis, motion planning, and physical robot experimentation to solve a class of GDfRA problems with minimal human supervision. Blox-Net achieved a Top-1 accuracy of 63.5% in the “recognizability” of its designed assemblies (e.g., resembling a giraffe as judged by a VLM). These designs, after automated perturbation redesign, were reliably assembled by a robot, achieving near-perfect success across 10 consecutive assembly iterations with human intervention only during reset prior to assembly. Surprisingly, this entire design process from a textual word (“giraffe”) to reliable physical assembly is performed with zero human intervention. § INTRODUCTION Design-for-Assembly (DfA) has a long history dating back to the start of the Industrial Revolution, where guns, pocket watches, and clocks were designed with interchangeable parts to facilitate mass production on human assembly lines <cit.>. With the advent of industrial automation in the latter half of the 20th century, DfA was expanded to take into account the error tolerances of mechanical assembly systems driven by mechanical cams and belts, and later for robotic assembly systems, the latter known as Design-for-Robot-Assembly (DfRA) <cit.>. DfRA is the process of designing a product and robot assembly system together to ensure feasibility, for example designing an injection molded part along with a custom workcell for manipulating it. These design systems were enhanced by the emergence of Computer-Aided Design (CAD) and Computer-Aided-Manufacturing (CAM) software that streamlined human visualization and evaluation of components and assemblies using Finite Element Methods (FEM) and perturbation analysis <cit.>. Such systems help human designers visualize and arrange mechanical components with realistic tolerances, checking for potential jamming and wedging conditions (tolerance stack-up) <cit.>. All existing DfRA systems require human designers in the loop <cit.>. One factor that is difficult for DfRA systems to accurately model is the reliability of robot assembly, which depends on the inherent uncertainty in perception, control, and physics (e.g., friction) <cit.>.
This can to some degree be modeled with simulation, but it is well known that 3D simulation systems struggle to accurately model the minute 3D deformations and collisions that occur during robot grasping, as well as effects such as deformations of the robot gripper and suction cups, which can produce substantial errors leading to assembly failures <cit.>. Therefore, physical assembly trials are ideal for evaluation. Recent advances in Generative AI systems have demonstrated remarkable abilities to create novel texts, code, and images <cit.>. Researchers are actively exploring “text-to-video” <cit.> and “text-to-3D” <cit.> systems, where the latter generates 3D mesh structures from textual descriptions (and there are ongoing research efforts applying Gen AI for eCAD design of chips <cit.>). This suggests that Generative AI may have potential for DfRA, and that if coupled with a physical robot, it may be possible in certain cases to fully automate the design cycle. In this paper, we propose Blox-Net, a fully-implemented generative DfRA (GDfRA) system that combines the semantic and text generation capabilities of large language models (LLM) with physical analysis from a simulator. Blox-Net includes 3 phases: 1) A vision language model (VLM) with customized iterative prompting to design a feasible 3D arrangement of the available components – an assembly – that approximates the shape of the desired object (e.g., “a giraffe”); 2) simulation with perturbation analysis to evaluate this assembly in terms of physical robot constructability and to revise the assembly accordingly; 3) Computer vision, motion planning, and control of a physical robot with a camera to repeatedly, through an automated reset, construct this assembly with the given components to automatically evaluate physical assembly reliability. This paper makes the following contributions: * Formulation of a novel problem, Generative Design-for-Robot-Assembly (GDfRA). * Blox-Net, a GDfRA system that combines prompting of GPT-4o with a physical robot, physics simulation, and motion planning to automatically address a class of GDfRA problems where the components are 3D printed blocks. * Results from experiments suggesting that Blox-Net can produce assemblies – arrangements of given physical blocks – that closely resemble the requested object, are stable under gravity throughout the construction process, and can be reliably assembled by a six-axis robot arm. Starting from singulated objects, Blox-Net achieves 99.2% accuracy in autonomous block placements. § RELATED WORK §.§ Design for Robot Assembly The concept of Design for Assembly (DfA) was pioneered by Geoffrey Boothroyd and Peter Dewhurst in the early 1980s <cit.>, with Hitachi developing its Assemblability Evaluation Method (AEM) in 1986 <cit.>. These seminal works laid the foundation for systematic approaches that follow product design guidelines <cit.> to facilitate efficient assembly processes. As robotics automation in manufacturing became prevalent, Design for Robot Assembly (DfRA) emerged as an extension of DfA principles, specifically addressing the unique capabilities and limitations of robotic systems in assembly tasks <cit.>. DfRA <cit.> has evolved significantly with the advent of Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) software, which expedite design and evaluation of components and assemblies using Finite Element Methods and perturbation analysis <cit.>.
While these tools facilitate visualization and analysis of tolerances, stresses, and forces, all existing DfRA systems require extensive human input <cit.>. A persistent challenge in DfRA is accurately modeling assembly reliability, given the inherent uncertainties in perception, control, and physics <cit.>. Simulation can partially address this, but struggles to capture 3D deformations and collisions crucial to robot grasping, necessitating iterative real-world testing and redesign <cit.>. Recent advancements leverage large language models (LLMs) <cit.> for various aspects of design, including task planning, robot code generation <cit.>, engineering documentation understanding <cit.>, and generating planar layouts or CAD models <cit.>. However, these methods primarily focus on determining assembly sequences for fixed designs. In contrast, this paper addresses both the design and execution aspects of robot assembly, aiming to create physically feasible designs for robotic assembly with minimal human supervision. §.§ Text-to-Shape Generation Semantic generation of 3D shapes and structures is a long-standing problem in computer vision and computer graphics <cit.>. Deep generative models have enabled a wide range of approaches that learn to capture the distribution of realistic 3D shapes, in the format of voxel maps <cit.>, meshes <cit.>, point clouds <cit.>, signed distance functions <cit.>, and implicit representations <cit.>. A large number of approaches have also been proposed to reconstruct 3D shapes by conditioning on a single or multiple images <cit.>. With the advances of aligned text-image representations and vision-language models, an increasing number of works have aimed to generate semantically meaningful shapes specified by natural language instructions <cit.>. Unlike these methods, Blox-Net generates 3D shapes based on the available physical building blocks by prompting an LLM (ChatGPT 4o <cit.>) and then generates a plan for assembling the blocks to construct the desired shape. §.§ Robot Task Planning with Foundation Models Recent advancements in large pre-trained models, such as large language models (LLMs) and vision-language models (VLMs) <cit.>, have significantly impacted robotics task planning by leveraging vast internet-scale data. These models enable end-to-end learning through fine-tuning on robotics datasets <cit.> or allow LLMs to directly generate task or motion plans in text or code <cit.>. Rather than focusing on motion or waypoint planning, Blox-Net prompts the VLM to generate a construction plan by determining the poses of blocks to form semantically meaningful and physically feasible structures, which are then assembled using motion planning and force feedback control. § GDFRA PROBLEM We formally define the problem of Generative Design for Robotic Assembly (GDfRA). We consider the design of a 3D structure that can be assembled with an industrial robot arm (see <Ref>). The input is a word or phrase (e.g., “bridge”) and an image of available components for assembly. The objective for the GDfRA system is to design a structure that is (1) “recognizable,” meaning the structure visually resembles the provided text input, and (2) “constructible,” meaning the structure can be assembled by a robot. § METHOD We present Blox-Net, a system for a class of GDfRA that assumes (1) components are cuboids and cylinders and (2) components are lying in stable poses within a reachable planar area. Blox-Net includes three phases.
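Before describing the three phases, the GDfRA problem statement above can be summarized as a small interface sketch in Python. This is purely our own schematization for readability: the class and field names, and the choice to encode a rotation by permuting block dimensions, are assumptions inspired by the description of Blox-Net and are not its actual API.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Block:
    # One available physical component (cuboid or cylinder).
    shape: str                        # "cuboid" or "cylinder"
    dimensions: Tuple[float, ...]     # (x, y, z) extents, or (radius, height)

@dataclass
class Placement:
    # Pose of one block in the designed assembly.
    block_index: int                  # index into the available-block inventory
    position: Tuple[float, float, float]
    dimensions: Tuple[float, ...]     # possibly permuted to encode an axis-aligned rotation

@dataclass
class GDfRAInput:
    prompt: str                       # e.g., "bridge" or "giraffe"
    available_blocks: List[Block]     # inventory, e.g., parsed from an image of the tray

@dataclass
class GDfRAOutput:
    placements: List[Placement]       # the spatial arrangement of components
    assembly_order: List[int]         # order in which a robot should place the blocks

A GDfRA system maps a GDfRAInput to a GDfRAOutput that is both recognizable and constructible; the three phases of Blox-Net described next instantiate one such mapping.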
In phase I (<Ref>), Blox-Net prompts a VLM (GPT-4o <cit.>) to generate multiple assembly designs, from which the VLM selects the top candidate based on stability and visual fidelity. In phase II (<ref>), the chosen assembly design undergoes an iterative refinement process in a customized physics simulator. This simulation-based approach applies controlled perturbations to enhance the design's constructability while maintaining its core characteristics. In phase III (<ref>), Blox-Net utilizes a robot arm equipped with a wrist-mounted stereo camera and suction gripper to construct the optimized design using 3D printed blocks. The assembly is constructed on a tilt plate, which the robot actuates to automatically reset the blocks back into a tray. §.§ Phase I: VLM Design and Selection Given the language description and a set of blocks with known sizes and shapes, Blox-Net uses a VLM to generate candidate structure designs. Unlike existing text-to-3D generation methods that produce unconstrained meshes <cit.>, Blox-Net generates 3D structures subject to the physical constraints imposed by the available blocks. It prompts the VLM to generate an assembly plan that specifies the 3D locations and orientations for placing each block using the available components (illustrated in <ref> (VLM Design Prompting)). To facilitate high-quality generation, similar to DALL-E 3 <cit.>, Blox-Net first elaborates the prompt. For example, to construct a “giraffe”, the VLM is prompted to give a concise, qualitative textual description that conveys the key features of a giraffe by highlighting the overall structure and proportions. After prompt elaboration, the VLM is prompted for the assembly plan. Specifically, the prompt includes the target object (“giraffe”), the VLM's elaboration response, and the set of available blocks. The set of available blocks is encoded as JSON, which provides a structured, flexible format familiar to VLM models. Based on these inputs, the VLM is asked to explain each block's role in the structure. Once a high-level plan is generated, Blox-Net prompts the VLM to produce an assembly plan, specifying the rotation, position, and color of objects. Instead of using common rotation parametrizations like Euler angles, Blox-Net instructs the VLM to rotate blocks by rearranging their dimensions directly, thereby providing a simpler interface for specifying orientation. Next, Blox-Net prompts the VLM to output the (x, y) coordinates for block placement. Limiting the specification to (x, y) coordinates rather than (x, y, z) simplifies the action space and avoids potential issues with blocks being placed inside one another. Blocks are placed by dropping them in the given order. To enhance stability and correct misplaced blocks, Blox-Net performs iterative, simulation-in-the-loop prompting. Each block's placement is simulated by dropping it in simulation from above the structure. After each placement, the system checks the block for stability. If instability is detected, details such as the specific block that moved, the direction of movement, and two orthographic views highlighting the unstable block are included in a prompt sent back to the VLM for correction. This process continues until all blocks are stable or a maximum of two iterations is reached. This full prompting pipeline is run in parallel, generating 10 design candidates.
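The simulation-in-the-loop stability check used during this candidate generation can be sketched as follows. This is a minimal, simplified sketch using PyBullet: the numeric thresholds (500 steps at the default 1/240 s timestep, 1 cm of translation, 0.1 rad of rotation) and the friction coefficients are taken from the implementation details reported in the experiments section, while the plan format, function names, and fixed block mass are our own simplifications rather than the actual Blox-Net code.

import math
import pybullet as p

def spawn_block(block):
    # block: dict such as {"shape": "cuboid", "dims": [0.04, 0.04, 0.08],
    #                      "position": [0.0, 0.0, 0.12]}  (full extents, meters)
    if block["shape"] == "cuboid":
        col = p.createCollisionShape(p.GEOM_BOX, halfExtents=[d / 2 for d in block["dims"]])
    else:  # cylinder: dims = [radius, height]
        col = p.createCollisionShape(p.GEOM_CYLINDER, radius=block["dims"][0], height=block["dims"][1])
    body = p.createMultiBody(baseMass=0.1, baseCollisionShapeIndex=col,
                             basePosition=block["position"])  # mass simplified; the paper uses a uniform density of 1000 kg/m^3
    p.changeDynamics(body, -1, lateralFriction=0.5, spinningFriction=0.2)
    return body

def place_and_check(plan, steps=500, pos_tol=0.01, rot_tol=0.1):
    # Drop blocks one by one in the given order; return the index of the first
    # unstable block (to be reported back to the VLM), or None if all settle.
    p.connect(p.DIRECT)
    p.setGravity(0, 0, -9.81)
    p.createMultiBody(0, p.createCollisionShape(p.GEOM_PLANE))  # ground plane
    for idx, block in enumerate(plan):
        body = spawn_block(block)
        start_pos, start_orn = p.getBasePositionAndOrientation(body)
        for _ in range(steps):                                  # default timestep is 1/240 s
            p.stepSimulation()
        pos, orn = p.getBasePositionAndOrientation(body)
        moved = math.dist(pos, start_pos)
        dq = p.getDifferenceQuaternion(start_orn, orn)
        tilted = 2 * math.acos(min(1.0, abs(dq[3])))            # rotation angle of the difference quaternion
        if moved > pos_tol or tilted > rot_tol:
            p.disconnect()
            return idx
    p.disconnect()
    return None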
The top-rated stable designs are then paired in a head-to-head comparison, where two images are shown to the VLM, and it selects the more recognizable design. This process is repeated in a knockout format (<ref>) until a final design is chosen. §.§ Phase II: Perturbation-Based Redesign In GDfRA, accounting for imprecise state estimation and robot control is important to ensure robust assembly. The design output from the VLM does not account for such tolerances, which can result in collisions and misplaced blocks during assembly. We thus introduce a perturbation-based redesign process. The redesign process iterates through each of the blocks and determines if adjustments are needed. A block will be perturbed if it violates at least one of the following three criterion: (1) the surface-to-surface distance to another block is less than a specified collision threshold and the two blocks overlap in the gravity-aligned axis (2) the block is already in collision with another block; or (3) the block is unstable at some nearby sampled point within a predefined radius. For each block, Blox-Net samples points evenly along regularly spaced, concentric circles centered at the block nominal location and checks for stability and collision at each point. The block position is updated to the average of positions that are stable and free from collision. This process is applied to all blocks in the structure until no further adjustments are needed or each block has been visited a predefined maximum number of times. §.§ Phase III: Robot Assembly and Evaluation To evaluate constructability, automates physical assembly and evaluates the generated design on a robot. The robot first moves to a predefined pose and captures a top-down RGBD image of the blocks on a plastic tray. uses SAM <cit.> to segment an RGB image and obtain image masks. SAM segmentations include regions that do not correspond to blocks. To filter out extraneous masks, we generate a point cloud for each mask by deprojecting the masked area from the depth image obtained from a stereo camera <cit.>. then discards masks that are outside the tray, below a certain minimum area, or not circular or rectangular. refines each mask to segment the top of each block by fitting a RANSAC <cit.> plane to the pointcloud and retaining only inliers. The block’s rotation is determined by fitting the tightest oriented bounding box to the refined mask. The block's center is the mean of the points in the filtered point cloud, with the x and y dimensions measured from the point cloud and the z dimension derived from its height relative to the tray base. Upon determining the size, shape, position, and dimensions for each block, Blox-Net can obtain a new plan through the design generation and perturbation-based redesign process (<ref> and <ref>), or construct the target object based on a previously generated plan. Blocks may require rotations about their x or y axis to align with the pose used in the plan. This rotation is facilitated by placing the block in a 90-degree angle bracket and regrasping the block from a different side (<ref>). After reorientation, the robot captures a new top-down image and all block positions, rotations, shapes, and dimensions are recomputed via the aforementioned pipeline. The assembly process begins after all blocks are properly oriented. Each block is grasped at its centroid, rotated to the planned orientation, and placed at the location specified in the design. 
Force feedback control is used for both grasping and placing blocks: during a grasp, the robot lowers onto the block until a force is detected; similarly, during placement, it descends and releases the block once a force is sensed. To enable efficient testing and design validation, we design an automatic reset. After completing the full assembly, the robot arm captures an image. Then, the robot presses down on the tilt plate, dumping the blocks back into the tray. This resets the scene for subsequent trials. § EXPERIMENTS To evaluate how well the generated structures by satisfy the GDfRA objective, we assess both the semantic recognizability of the designs (in <ref>), which refers to how well the designs semantically align with the prompts, and their constructability (in <ref>), which refers to how reliably they can be constructed by a real robot. Additionally, we evaluate the effectiveness of the perturbation redesign (in <ref>). To create a candidate objects list for evaluation, we prompt GPT-4o to generate a list of 200 objects spanning categories such as furniture, alphabet letters, architecture, and animals. We run Blox-Net's design generation (<ref>) on all objects using a fixed set of block shapes and dimensions. We evaluate semantic recognizability on all 200 designs and evaluate constructability on a representative subset of 11 designs, which showcase the capabilities and limitations of , using a physical robot. Additionally, we evaluate the effectiveness of perturbation redesign on 5 designs using a physical robot. §.§ Semantic Recognizability To measure how well the generated structure resembles the requested language description, we design an experiment using GPT-4o as an evaluator to assess the semantic distinctiveness and accuracy of each design, following methodologies similar to those used in VLM answer scoring <cit.>. In this experiment, we use a set of N object labels, where N includes the correct label alongside N-1 randomly selected distractor labels from the pool of 200 objects. We provide GPT-4o with a rendered image of the generated assembly and the N labels in random order, and task the VLM with ranking these labels based on how well each one matches the image. We report the percentage of correct Top-1 predictions, and for imperfect guesses, we analyze the average ranking of the correct label within GPT-4o's ordered list, (where a ranking of 1 is best). Additionally, we report the average ranking relative to N, with results presented for Top-1 accuracy and average ranking for N=5, 10, 15, 20. P[1]>p#1 R[1]>p#1 §.§ Constructability We measure constructability on a real robot over 10 trials on 6 designs selected to highlight diversity. For each trial, we record the % of blocks correctly positioned at the time of placement, and the % of trials where the structure is fully successfully assembled. These experiments incorporate automated reset, block reorientation, and assembly fully end-to-end. To assess the system’s autonomy, we track the average percentage of blocks per trial which require intervention during the reset phase, where an intervention is counted for each block moved. In failure cases after reset where blocks occlude each other, are not adequately separated, or fall out of the tray, blocks are repositioned and placed back in their same stable pose. In cases where the robot fails to regrasp a block during reorientation, the block is placed back in the stable pose corresponding to its final stable pose in the structure. 
Human interventions are only performed during resetting; there is no human intervention during the assembly process. §.§ Perturbation Redesign Ablation We evaluate the effect of perturbation redesign (<ref>) on construction success. We conduct experiments on 5 objects, each assembled 10 times with and without perturbation redesign. Each trial begins with all blocks singulated and in their correct stable pose. This isolates the effect of perturbation redesign by eliminating influence from prior assembly states. Assembly is performed fully autonomously. We report the percentage of blocks correctly placed at the time of their placement, the average percentage of blocks in the correct location at the end of each trial, and the percentage of trials where the structure is fully completed. §.§ Implementation Details Blox-Net is implemented with the following components: GPT-4o, PyBullet, UR5e robot arm, Robotiq suction gripper, Zed Mini Stereo Camera, and 3D printed cuboidal and cylindrical blocks. We use PyBullet as a simulator and define simulation parameters as a uniform object density of 1000 kg/m^3, lateral friction coefficient of 0.5, spinning friction coefficient of 0.2, gravity of -9.81 m/s^2. A block is measured as unstable if after 500 simulation steps at 240Hz the block's position deviates by more than 1cm or is rotated by more than .1 radians from its starting position. During perturbation redesign, Blox-Net samples 8 points from each of 10 concentric circles with radii from 1mm to 15mm and each block is perturbed a maximum of 10 times. Blox-Net filters masks by shape by fitting a minimum area bounding rectangle and minimum bounding circle to each mask. Masks are discarded if their area is less than 80% of the areas of both bounding shapes. § RESULTS Semantic Recognizability: We present results in <ref>. Results from the evaluation of BloxNet’s designs using GPT-4o as an evaluator suggest that the generated designs closely align with the correct category semantics as recognized by GPT-4o. Notably, with N = 5 labels, the model achieves a Top-1 accuracy of 63.5%, demonstrating a consistent correspondence between the generated designs and the intended prompts. Importantly, even with larger label sets, the model maintains a reasonable average ranking, with the correct label placed consistently near the top. This suggests that the generated designs remain recognizable, even among a large pool of designs. Constructability: Results are in <ref>. All designs are reliably assembled by the robot without human intervention during assembly. Five of six designs achieve a perfect assembly completion rate, and all designs achieve a 98%+ placement success rate, highlighting Blox-Net's ability to assemble complex structures. Human interventions, which occur only during the reset phase, are sometimes needed to singulate or reorient blocks. Complex structures, such as the Giraffe (9 blocks) or shelf (10 blocks), have more human reset interventions due to an increasing likelihood of overlapping, non-singulated, or misoriented blocks. Perturbation Redesign Ablation Results are summarized in Table <ref>. Perturbation redesign greatly improves all three metrics across all 5 designs to near-perfect. The percentage of correctly placed blocks and the percentage of correctness in the end state are similar for all objects except the ceiling fan. While incorrect block placements typically lead to errors in the final structure, later block placements sometimes correct these errors. 
Perturbation redesign improves the full completion success rate by an average of 4x. Overall, perturbation redesign significantly enhances the robustness of the assembly process by accommodating slight imprecisions, leading to more reliable and accurate final structures across a variety of designs. § LIMITATIONS AND CONCLUSION While Blox-Net shows promising results in constrained 3D structure generation, it is limited to non-deformable cuboid and cylinder blocks, restricting geometric diversity and reducing its ability to represent complex shapes. Many assembly designs are still not clearly recognizable, likely due in part to these block limitations. The system uses only a suction-based gripper, without accounting for gripper width or slanted surfaces, and sometimes requires human intervention during the reset process, reducing assembly efficacy. This paper introduces Blox-Net, a novel system addressing the Generative Design-for-Robot-Assembly problem using a three-phase approach: creating the initial designs by prompting a vision language model, conducting simulation-based analysis for constructability, and utilizing a physical robot for assembly evaluation. Experiment results suggest that Blox-Net can bridge the gap between abstract design concepts and robot-executable assemblies. Remarkably, five Blox-Net assembly designs, each using 3 to 10 blocks and scoring high in recognizability, were successfully assembled 10 consecutive times by the robot without any human intervention. § ACKNOWLEDGMENTS This research was performed at the AUTOLAB at UC Berkeley in affiliation with the Berkeley AI Research (BAIR) Lab. The authors were supported in part by donations from Toyota Research Institute, Bosch, Google, Siemens, and Autodesk and by equipment grants from PhotoNeo, Nvidia, and Intuitive Surgical. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE 2146752. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We thank Timothe Kasriel, Chung Min Kim and Kush Hari for their helpful discussions and feedback.
Design-for-Assembly (DfA) has a long history dating back to the start of the Industrial Revolution, where guns, pocket watches, and clocks were designed with interchangeable parts to facilitate mass production on human assembly lines <cit.>. With the advent of industrial automation in the latter half of the 20th century, DfA was expanded to take into account the error tolerances of mechanical assembly systems driven by mechanical cams and belts, and later for robotic assembly systems, the latter known as Design-for-Robot-Assembly (DfRA) <cit.>. DfRA is the process of designing a product and robot assembly system together to ensure feasibility, for example designing an injection molded part along with a custom workcell for manipulating it. These design systems were enhanced by the emergence of Computer-Aided Design (CAD) and Computer-Aided-Manufacturing (CAM) software that streamlined human visualization and evaluation of components and assemblies using Finite Element Methods (FEM) and perturbation analysis <cit.>. Such systems help human designers visualize and arrange mechanical components with realistic tolerances, checking for potential jamming and wedging conditions (tolerance stack-up) <cit.>. All existing DfRA systems require human designers in the loop <cit.>. One factor that is difficult for DfRA systems to accurately model is the reliability of robot assembly, which depends on the inherent uncertainty in perception, control, and physics (eg, friction) <cit.>. This can to some degree be modeled with simulation, but it is well-known that 3D simulation systems struggle to accurately model minute 3D deformations and collisions that occur during robot grasping and effects such as deformations of robot gripper and suction cups which can produce substantial errors leading to assembly failures <cit.>. Therefore, physical assembly trials are ideal for evaluation. Recent advances in Generative AI systems have demonstrated remarkable abilities to create novel texts, code, and images <cit.>. Researchers are actively exploring “text-to-video” <cit.> and “text-to-3D” <cit.> systems, where the latter generates 3D mesh structures from textual descriptions (and there are ongoing research efforts applying Gen AI for eCAD design of chips <cit.>). This suggests that Generative AI may have potential for DfRA, and that if coupled with a physical robot, it may be possible in certain cases to fully automate the design cycle. In this paper, we propose Blox-Net, a fully-implemented generative DfRA (GDfRA) system that combines the semantic and text generation capabilities of large language models (LLM) with physical analysis from a simulator. Blox-Net includes 3 phases: 1) A vision language model (VLM) with customized iterative prompting to design a feasible 3D arrangement of the available components – an assembly – that approximates the shape of the desired object (eg “a giraffe”); 2) simulation with perturbation analysis to evaluate this assembly in terms of physical robot constructability and to revise the assembly accordingly; 3) Computer vision, motion planning, and control of a physical robot with a camera to repeatedly, through an automated reset, construct this assembly with the given components to automatically evaluate physical assembly reliability. This paper makes the following contributions: * Formulation of a novel problem, Generative Design-for-Robot-Assembly (GDfRA). 
* Blox-Net, a GDfRA system that combines prompting of GPT-4o with a physical robot, physics simulation, and motion planning to automatically address a class of GDfRA problems where the components are 3D printed blocks. * Results from experiments suggesting that Blox-Net can produce assemblies – arrangements of given physical blocks – that closely resemble the requested object, are stable under gravity throughout the construction process, and can be reliably assembled by a six-axis robot arm. Starting from singulated objects, Blox-Net achieves 99.2% accuracy in autonomous block placements.
§.§ Design for Robot Assembly The concept of Design for Assembly (DfA) was pioneered by Geoffrey Boothroyd and Peter Dewhurst in the early 1980s <cit.>, with Hitachi developing its Assemblability Evaluation Method (AEM) in 1986 <cit.>. These seminal works laid the foundation for systematic approaches that follow product design guidelines <cit.> facilitate facilitate efficient assembly processes. As robotics automation in manufacturing became prevalent, Design for Robot Assembly (DfRA) emerged as an extension of DfA principles, specifically addressing the unique capabilities and limitations of robotic systems in assembly tasks <cit.>. Design for Robot Assembly (DfRA) <cit.> has evolved significantly with the advent of Computer-Aided Design (CAD) and Computer-Aided Manufacturing (CAM) software, which expedite design and evaluation of components and assemblies using Finite Element Methods and perturbation analysis <cit.>. While these tools facilitate visualization and analysis of tolerances, stresses, and forces, all existing DfRA systems require extensive human input <cit.>. A persistent challenge in DfRA is accurately modeling assembly reliability, given the inherent uncertainties in perception, control, and physics <cit.>. Simulation can partially address this, but struggles to capture 3D deformations and collisions crucial to robot grasping, necessitating iterative real-world testing and redesign <cit.>. Recent advancements leverage large language models (LLMs) <cit.> for various aspects of design, including task planning, robot code generation <cit.>, engineering documentation understanding <cit.>, and generating planar layouts or CAD models <cit.>. However, these methods primarily focus on determining assembly sequences for fixed designs. In contrast, this paper addresses both the design and execution aspects of robot assembly, aiming to create physically feasible designs for robotic assembly with minimal human supervision. §.§ Text-to-Shape Generation Semantic generation of 3D shapes and structures is a long-standing problem in computer vision and computer graphics <cit.>. Deep generative models have enabled a wide range of approaches that learn to capture the distribution of realistic 3D shapes, in the format of voxel maps <cit.>, meshes <cit.>, point clouds <cit.>, sign distance functions <cit.>, and implicit representations <cit.>. A large number of approaches have also been proposed to reconstruct 3D shapes by conditioning on a single or multiple images <cit.>. With the advances of aligned text-image representations and vision-language models, an increasing number of works have aimed to generate semantically meaningful shapes specified by natural language instructions <cit.>. Unlike these methods, based on the available physical building blocks, Blox-Net generates 3D shapes by prompting an LLM (ChatGPT 4o <cit.>) and then generates a plan for assembling the blocks to construct the desired shape. §.§ Robot Task Planning with Foundation Models Recent advancements in large pre-trained models, such as large language models (LLMs) and vision-language models (VLMs) <cit.>, have significantly impacted robotics task planning by leveraging vast internet-scale data. These models enable end-to-end learning through fine-tuning on robotics datasets <cit.> or allow LLMs to directly generate task or motion plans in text or code <cit.>. 
Rather than focusing on motion or waypoint planning, prompts the VLM to generate a construction plan by determining the poses of blocks to form semantically meaningful and physically feasible structures, which are then assembled using motion planning and force feedback control.
We present Blox-Net, a system for a class of GDfRA that assumes (1) components are cuboids and cylinders and (2) components are lying in stable poses within a reachable planar area. Blox-Net includes three phases. In phase I (<Ref>), Blox-Net prompts a VLM (GPT-4o <cit.>) to generate multiple assembly designs, from which the VLM selects the top candidate based on stability and visual fidelity. In phase II (<ref>), the chosen assembly design undergoes an iterative refinement process in a customized physics simulator. This simulation-based approach applies controlled perturbations to enhance the design's constructability while maintaining its core characteristics. In phase III (<ref>), utilizes a robot arm equipped with a wrist-mounted stereo camera and suction gripper to construct the optimized design using 3D printed blocks. The assembly is constructed on a tilt plate, which the robot actuates to automatically reset the blocks back into a tray. §.§ Phase I: VLM Design and Selection Given the language description and a set of blocks with known sizes and shapes, Blox-Net uses a VLM to generate candidate structure designs. Unlike existing text-to-3D generation methods that produce unconstrained meshes <cit.>, generates 3D structures subject to the physical constraints imposed by the available blocks. It prompts the VLM to generate an assembly plan that specifies the 3D locations and orientations for placing each block using the available components (illustrated in <ref> (VLM Design Prompting)). To facilitate high-quality generation, similar to DALL-E 3 <cit.>, first elaborates the prompt. For example, to construct a “giraffe", the VLM is prompted to give a concise, qualitative textual description that conveys the key features of a giraffe by highlighting the overall structure and proportions. After prompt elaboration, the VLM is prompted for the assembly plan. Specifically, the prompt includes the target object (“giraffe"), the VLM's elaboration response, and the set of available blocks. The set of available blocks is encoded as JSON, which provides a structured, flexible format familiar to VLM models. Based on these inputs, the VLM is asked to explain each block's role in the structure. Once a high-level plan is generated, prompts the VLM to produce an assembly plan, specifying the rotation, position and color of objects. Instead of using common rotation parametrizations like Euler angles, instructs the VLM to rotate blocks by rearranging their dimensions directly, thereby providing a more simple interface for specifying orientation. Next, prompts the VLM to output the (x, y) coordinates for block placement. Limiting the specification to (x, y) coordinates rather than (x, y, z) simplifies the action space and avoids potential issues with blocks being placed inside one another. Blocks are placed by dropping them in the order. To enhance stability and correct misplaced blocks, Blox-Net performs iterative, simulation-in-the-loop prompting. Each block's placement is simulated by dropping it in simulation from above the structure. After each placement, the system checks the block for stability. If instability is detected, details such as the specific block that moved, the direction of movement, and two orthographic views highlighting the unstable block are included in a prompt sent back to the VLM for correction. This process continues until all blocks are stable or a maximum of two iterations is reached. This full prompting pipeline is run in parallel, generating 10 design candidates. 
For each design, the VLM is queried with a rendered image from the simulation, and provides a rating from 1 to 5 based on how well the structure resembles the intended design. The top-rated stable designs are then paired in a head-to-head comparison, where two images are shown to the VLM, and it selects the more recognizable design. This process is repeated in a knockout format (<ref>) until a final design is chosen. §.§ Phase II: Perturbation-Based Redesign In GDfRA, accounting for imprecise state estimation and robot control is important to ensure robust assembly. The design output from the VLM does not account for such tolerances, which can result in collisions and misplaced blocks during assembly. We thus introduce a perturbation-based redesign process. The redesign process iterates through each of the blocks and determines if adjustments are needed. A block will be perturbed if it violates at least one of the following three criterion: (1) the surface-to-surface distance to another block is less than a specified collision threshold and the two blocks overlap in the gravity-aligned axis (2) the block is already in collision with another block; or (3) the block is unstable at some nearby sampled point within a predefined radius. For each block, Blox-Net samples points evenly along regularly spaced, concentric circles centered at the block nominal location and checks for stability and collision at each point. The block position is updated to the average of positions that are stable and free from collision. This process is applied to all blocks in the structure until no further adjustments are needed or each block has been visited a predefined maximum number of times. §.§ Phase III: Robot Assembly and Evaluation To evaluate constructability, automates physical assembly and evaluates the generated design on a robot. The robot first moves to a predefined pose and captures a top-down RGBD image of the blocks on a plastic tray. uses SAM <cit.> to segment an RGB image and obtain image masks. SAM segmentations include regions that do not correspond to blocks. To filter out extraneous masks, we generate a point cloud for each mask by deprojecting the masked area from the depth image obtained from a stereo camera <cit.>. then discards masks that are outside the tray, below a certain minimum area, or not circular or rectangular. refines each mask to segment the top of each block by fitting a RANSAC <cit.> plane to the pointcloud and retaining only inliers. The block’s rotation is determined by fitting the tightest oriented bounding box to the refined mask. The block's center is the mean of the points in the filtered point cloud, with the x and y dimensions measured from the point cloud and the z dimension derived from its height relative to the tray base. Upon determining the size, shape, position, and dimensions for each block, Blox-Net can obtain a new plan through the design generation and perturbation-based redesign process (<ref> and <ref>), or construct the target object based on a previously generated plan. Blocks may require rotations about their x or y axis to align with the pose used in the plan. This rotation is facilitated by placing the block in a 90-degree angle bracket and regrasping the block from a different side (<ref>). After reorientation, the robot captures a new top-down image and all block positions, rotations, shapes, and dimensions are recomputed via the aforementioned pipeline. The assembly process begins after all blocks are properly oriented. 
Each block is grasped at its centroid, rotated to the planned orientation, and placed at the location specified in the design. Force feedback control is used for both grasping and placing blocks: during a grasp, the robot lowers onto the block until a force is detected; similarly, during placement, it descends and releases the block once a force is sensed. To enable efficient testing and design validation, we design an automatic reset procedure. After completing the full assembly, the robot arm captures an image of the finished structure. Then, the robot presses down on the tilt plate, dumping the blocks back into the tray. This resets the scene for subsequent trials.
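A minimal sketch of the force-feedback placement primitive described above, assuming a hypothetical robot interface (`robot.move_to`, `robot.move_down_by`, `robot.wrench`, `robot.release_suction`); the real controller, step size, and force threshold are not specified in the text and the values below are illustrative only. The grasp primitive would follow the same descend-until-contact pattern.

```python
def place_with_force_feedback(robot, target_pose, force_threshold=5.0, step=0.002):
    """Descend toward the planned placement and release the block once contact is sensed.

    `robot` is a hypothetical interface; `force_threshold` (N) and `step` (m) are
    illustrative values, not the paper's parameters.
    """
    robot.move_to(target_pose)                      # hover above the planned placement
    while robot.wrench().force_z < force_threshold: # descend until a contact force is sensed
        robot.move_down_by(step)
    robot.release_suction()                         # release the block on contact
    robot.retract()
```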
Semantic Recognizability: We present results in <ref>. Results from the evaluation of Blox-Net's designs using GPT-4o as an evaluator suggest that the generated designs closely align with the correct category semantics as recognized by GPT-4o. Notably, with N = 5 labels, the model achieves a Top-1 accuracy of 63.5%, demonstrating a consistent correspondence between the generated designs and the intended prompts. Importantly, even with larger label sets, the model maintains a reasonable average ranking, with the correct label placed consistently near the top. This suggests that the generated designs remain recognizable, even among a large pool of designs. Constructability: Results are in <ref>. All designs are reliably assembled by the robot without human intervention during assembly. Five of six designs achieve a perfect assembly completion rate, and all designs achieve a 98%+ placement success rate, highlighting Blox-Net's ability to assemble complex structures. Human interventions, which occur only during the reset phase, are sometimes needed to singulate or reorient blocks. Complex structures, such as the Giraffe (9 blocks) or shelf (10 blocks), require more human reset interventions due to the increased likelihood of overlapping, non-singulated, or misoriented blocks. Perturbation Redesign Ablation: Results are summarized in Table <ref>. Perturbation redesign greatly improves all three metrics across all 5 designs to near-perfect levels. The percentage of correctly placed blocks and the percentage of correctness in the end state are similar for all objects except the ceiling fan. While incorrect block placements typically lead to errors in the final structure, later block placements sometimes correct these errors. Perturbation redesign improves the full completion success rate by an average of 4x. Overall, perturbation redesign significantly enhances the robustness of the assembly process by accommodating slight imprecisions, leading to more reliable and accurate final structures across a variety of designs.
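As an illustration of the perturbation-based redesign evaluated above, the following sketch samples candidate positions on concentric circles around each block's nominal location, keeps those that are stable and collision-free, and moves the block to their average. The simulator queries (`is_stable`, `in_collision`) and the radius, sampling density, and visit limit are assumptions for illustration; for brevity the sketch re-averages every block rather than first testing the three trigger criteria from the paper.

```python
import numpy as np

def perturb_redesign(positions, is_stable, in_collision,
                     radius=0.03, n_circles=3, n_per_circle=8, max_visits=3):
    """Sketch of the perturbation-based redesign (Phase II).

    positions: dict block_id -> (x, y) nominal placement.
    is_stable(block_id, xy) and in_collision(block_id, xy): queries into the physics simulator.
    """
    positions = {b: np.asarray(p, dtype=float) for b, p in positions.items()}
    visits = {b: 0 for b in positions}
    changed = True
    while changed:
        changed = False
        for block, center in positions.items():
            if visits[block] >= max_visits:
                continue
            visits[block] += 1
            # Sample points evenly on regularly spaced concentric circles around the block.
            candidates = [center]
            for r in np.linspace(radius / n_circles, radius, n_circles):
                for theta in np.linspace(0, 2 * np.pi, n_per_circle, endpoint=False):
                    candidates.append(center + r * np.array([np.cos(theta), np.sin(theta)]))
            # Keep only sampled positions that are stable and collision-free.
            valid = [c for c in candidates
                     if is_stable(block, c) and not in_collision(block, c)]
            if not valid:
                continue
            new_center = np.mean(valid, axis=0)
            if np.linalg.norm(new_center - center) > 1e-4:
                positions[block] = new_center
                changed = True
    return positions
```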
http://arxiv.org/abs/2409.17610v1
20240926075557
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue
[ "Zhangpu Li", "Changhong Zou", "Suxue Ma", "Zhicheng Yang", "Chen Du", "Youbao Tang", "Zhenjie Cao", "Ning Zhang", "Jui-Hsin Lai", "Ruei-Sung Lin", "Yuan Ni", "Xingzhi Sun", "Jing Xiao", "Kai Zhang", "Mei Han" ]
cs.CL
[ "cs.CL", "cs.CV" ]
ZALM3: Zero-Shot Enhancement of Vision-Language Alignment via In-Context Information in Multi-Turn Multimodal Medical Dialogue Zhangpu Li^1†, Changhong Zou^1†, Suxue Ma^2†, Zhicheng Yang^3, Chen Du^3, Youbao Tang^3, Zhenjie Cao^4, Ning Zhang^3, Jui-Hsin Lai^3, Ruei-Sung Lin^3, Yuan Ni^4, Xingzhi Sun^4, Jing Xiao^4, Kai Zhang^1, and Mei Han^3 ^†This work was done during Zhangpu Li, Changhong Zou, and Suxue Ma's internship at Ping An Technology, Shenzhen, China. ^Corresponding authors: ; ^1Z.L., C.Z., and K.Z. are with Tsinghua Shenzhen International Graduate School, China. ^2S.M. is with Tianjin University, China. ^3Z.Y., C.D., Y.T., N.Z., J-H.L., R-S.L., and M.H. are with PAII Inc., USA. ^4Z.C., Y.N., X.S., and J.X. are with Ping An Technology, China. September 28, 2024 § ABSTRACT The rapid rise of large language models (LLMs) in recent years has boosted the prevalence of vision-language models (VLMs) in the medical sector. In our online medical consultation scenario, a doctor responds to the texts and images provided by a patient in multiple rounds to diagnose her/his health condition, forming a multi-turn multimodal medical dialogue format. Unlike high-quality images captured by professional equipment in traditional medical visual question answering (Med-VQA), the images in our case are taken by patients' mobile phones. These images have poor quality control, with issues such as excessive background elements and lesion areas that are significantly off-center, which degrades vision-language alignment in the model training phase. In this paper, we propose ZALM3, a Zero-shot strategy to improve vision-language ALignment in Multi-turn Multimodal Medical dialogue. Since we observe that the text conversations preceding an image can be used to infer the regions of interest (RoIs) in the image, ZALM3 employs an LLM to summarize the keywords from the preceding context and a visual grounding model to extract the RoIs. The updated images eliminate unnecessary background noise and provide more effective vision-language alignment. To better evaluate our proposed method, we design a new subjective assessment metric for multi-turn unimodal/multimodal medical dialogue to provide a fine-grained performance comparison. Our experiments across three different clinical departments demonstrate the efficacy of ZALM3 with statistical significance. multi-turn multimodal dialogue, zero-shot vision-language alignment, subjective assessment § INTRODUCTION Recent years have witnessed the tremendous success of large language models (LLMs) <cit.>. They have shown massive potential in various professional application domains when equipped with specialized knowledge, such as the legal <cit.>, financial <cit.>, and medical sectors <cit.>.
A challenging format in these applications is multi-turn dialogue, where an LLM-based assistant interacts with a user in multiple rounds to fulfill the user's request. Multi-turn dialogue requires the model not only to understand the user's current input but also to provide coherent responses by considering the dialogue history. This places higher demands on the model's contextual understanding and generation capabilities. In the medical field, with the worldwide lockdowns and quarantines caused by COVID-19, online doctor consultation applications have developed rapidly. Recently, some cutting-edge studies have focused on real multi-turn medical dialogue <cit.>; however, their conversations involve only text and cannot handle multiple modalities, such as images sent by users. Meanwhile, scholars have intensively investigated the integration of the visual modality with LLMs to achieve vision-language models (VLMs) by leveraging Transformer technology and other techniques <cit.>. As the medical field is a significant branch of VLM-based applications <cit.>, numerous methods have recently emerged to effectively learn visual and textual information (e.g. radiology reports) using VLMs to enhance the performance of medical imaging analysis <cit.>, medical report generation <cit.>, and medical visual question answering (Med-VQA) <cit.>. Nevertheless, the dominant settings were either single-turn <cit.> or multi-turn dialogues stitched together from independent single-turn pairs of visual questions and textual answers <cit.>. Different from conventional multi-turn textual medical dialogue or Med-VQA settings, our scenario involves multi-turn medical consultations with text and image modalities together, unveiling a multimodal research branch that has not been well addressed. In our case, patients describe their conditions by sending texts and images, while doctors ultimately provide diagnoses, medication recommendations, or referrals via multiple rounds of interaction with patients. Distinct from professional medical images <cit.>, all images in our setting are provided by patients using their phones. As a result, those images may contain significant noise, such as excessive background elements and non-centered regions of interest (RoIs). For a real doctor, such a poor-quality image is not a hurdle as long as a lesion in the image is discernible. However, noisy images significantly degrade the vision-language alignment for training a VLM, detrimentally affecting the performance of the model's responses. To enhance vision-language alignment performance, recent approaches have proposed various adaptations based on CLIP (Contrastive Language-Image Pre-training) <cit.>. Other representative methods include projecting visual features <cit.>, employing multiple vision encoders <cit.>, or decoupling the language and vision training process <cit.>. Unlike those approaches, which need to alter the structure of VLMs, we employ a zero-shot visual grounding method to update images by extracting their key regions. Visual grounding is the task of mapping a natural language query to the specific visual region that the query refers to <cit.>. It typically requires an external word or phrase as the textual query. Since a multi-turn dialogue maintains semantic coherence, when a patient sends an image related to her/his medical condition, the description of this image has usually already been mentioned in the prior dialogue turns.
For instance, a patient in the dermatology department may describe her/his symptoms and affected areas in the text before sending an image. This preceding context acts as a hidden gem: it can be leveraged as a textual query for visual grounding to infer the key regions of the image. With this procedure, we do not need 1) human annotation labor to crop the image (thanks to the visual grounding approach) or 2) an external query for visual grounding (thanks to the preceding context of the image). Furthermore, this paradigm simulates the real diagnostic routine where a doctor gathers preliminary information through text communication before viewing the image. Consequently, even if the image has high background noise, the doctor can still ignore the noise and focus on the relevant areas based on prior knowledge. In this paper, we present ZALM3 (Zero-shot enhancement of vision-language ALignment in Multi-turn Multimodal Medical dialogue). Our key contributions are as follows. * We propose an LLM-powered and visual grounding-based approach to enhance vision-language alignment in the unprecedented multi-turn multimodal medical dialogue scenario. It leverages the in-context information before the image's appearance without introducing extra annotation or training effort. This approach is zero-shot and compatible with various medical VLMs. * We reformulate the subjective assessment strategy for multi-turn unimodal/multimodal medical dialogues, and design new evaluation metrics for the subjective assessment to achieve more quantitative results than the previous simplistic win-loss-tie evaluation criteria. * We develop two versions of medical VLMs and conduct extensive experiments on our multi-turn multimodal medical dialogue datasets. The results demonstrate the efficacy of ZALM3 with statistical significance across different clinical departments. § RELATED WORK Med-VQA and Medical Dialogue. VQA is an interdisciplinary task that requires the capacities of both computer vision (CV) and natural language processing (NLP). As a significant branch of VQA, Med-VQA has garnered widespread attention from scholars. Many public datasets were released to provide specialized data support for the Med-VQA domain <cit.>. However, these dialogue datasets are either single-turn (e.g. medical image captioning or medical report generation) or multi-turn by simply concatenating single-turn question-answer pairs without direct contextual relevance. Furthermore, these datasets are constructed in an image-centered format, rather than being built based on the dialogue of real patient-doctor interactions. Since medical LLMs have been investigated in recent years for various NLP-based tasks <cit.>, numerous studies have explored the capacity of LLMs for medical dialogue in English or other languages such as Chinese <cit.>. Moreover, researchers have leveraged the capability of multimodal LLMs to tackle image-based tasks such as case diagnosis and radiology or pathological image classification and recognition <cit.>. In those studies, medical images were obtained from professional equipment with fixed fields of view and angles, where high image quality was ensured. However, in our scenario, images are captured by patients' mobile phones, resulting in significant noise that adversely affects vision-language alignment at the model training phase. Visual-Language Alignment. Aligning the features of image and text is essential for the VLM research domain <cit.>.
The pioneering CLIP leveraged enormous image-text pairs to validate effective vision-language alignment <cit.>. By encoding medical domain knowledge, various contributions adapted from CLIP aimed to solve general <cit.> or specific problems in medical image analysis, such as chest X-rays <cit.> and mammograms <cit.>. A more recent paradigm designs projection layers for visual features, such as Bootstrapping Language-Image Pre-training v2 (BLIP-2) <cit.>, Large Language and Vision Assistant (LLaVA) <cit.>, and Qwen-VL <cit.>, leading to further specific modifications of these models for medical applications <cit.>. Other methods include introducing multiple vision encoders to extract diverse visual features <cit.>, or decoupling the training process of the language and vision components <cit.>. Different from those methods, which focus on improvements at the model level, our proposed approach mines the in-context information for image refinement to achieve better vision-language alignment. Visual Grounding. Another approach to facilitating vision-language alignment is to emphasize the RoIs <cit.>. Some VLMs possess region-level or pixel-level grounding abilities to output RoIs in bounding box or segmentation mask formats <cit.>. Meanwhile, outstanding visual foundation models like Grounding DINO (GDINO) <cit.> and Segment Anything Model (SAM) <cit.> have provided zero-shot grounding capacities. In the medical sector, aside from initially testing a grounding model's zero-shot performance on medical image datasets <cit.>, researchers proposed an enhanced framework for chest X-ray segmentation by jointly using GDINO and SAM <cit.>. Authors in <cit.> proposed a SAM-assisted approach for enhancing medical image annotation. However, those methods required externally well-defined prompts to trigger the grounding models, likely involving non-negligible human effort. In contrast, our grounding prompts in ZALM3 are entirely derived from the preceding context in the multi-turn dialogue without human intervention. § MATERIALS AND METHODS §.§ Overview of the Proposed Method Fig. <ref> illustrates our proposed method. In the original VLM training strategy, the multi-turn multimodal conversation data between doctors and patients is directly fed into the VLM for training. Once the VLM is trained, multi-turn conversation inference is performed. In our proposed framework, the key module, ZALM3, eliminates the irrelevant regions from the images in the original database, while preserving the images' original positions in the conversations. This process does not require any training. The updated database is then used for VLM training, where vision-language alignment is improved by extracting the RoIs from the original images <cit.>. Once the enhanced VLM training is finished, the ZALM3 module is also used during inference, ensuring that the data processing during inference is consistent with that during training. §.§ Description of Database Our data comes from a leading Chinese online medical consultation platform, covering over 10 major clinical departments. Patients nationwide interact with doctors through multiple turns of online conversation to obtain diagnoses, prescriptions, referrals, and other medical services. In addition to purely textual conversations, patients also send images in the dialogue when necessary to provide further explanations or evidence to the doctors. This platform has over 3.2M consultation sessions per month, including more than 1.5M images on average.
While the quality of text content is acceptable, the quality of images uploaded by patients is significantly lower. Since those images are taken with the patients' mobile phones, many of them are adversely affected by factors such as lighting, angles, and resolution. For instance, in the department of dermatology, it is challenging for patients to independently capture certain body parts, resulting in the RoI not being centered in the image. In such cases, a real doctor can identify the body part in the image, but it is difficult for an AI model. In several departments where images constitute a significant portion of the data, including dermatology, ophthalmology, and traditional Chinese medicine (TCM), we recruited 12 volunteers to evaluate the quality of approximately 60k images. They assessed issues such as excessive background information, blurriness, and misalignment of the RoI from the center of the image. Fig. <ref> presents the assessment results based on five segments of the RoI-to-original-image area ratio across the three departments. As we can see, the rates of poor-quality images for all three departments are significantly reduced by our proposed ZALM3 under all ratios. §.§ Extraction of Keywords In our conversational scenario, patients initially describe their symptoms in text and doctors respond accordingly. When the patient needs to upload an image for further diagnosis, the description of this image is actually reflected in the preceding context. For example, when a patient initially writes “my leg got bitten by a mosquito and it was swollen”, this already provides some description of the upcoming image the patient will send. Therefore, extra manual annotation of the image's RoIs is not necessary. While we leverage the preceding context to generate the description of the image, these conversation histories contain a lot of noise and unrelated content. To obtain more appropriate inputs for a visual grounding model, we employ an LLM to extract keywords from the conversation history preceding the image. To balance the precision and processing time of keyword extraction, up to the three previous turns of patient-doctor text conversation before an image are utilized, as we observe that earlier conversations have little relevance to the image. Once the keywords are extracted, they are then fed into the visual grounding model. §.§ Visual Grounding A visual grounding task aims to identify the pertinent object or region in an image given a natural language query <cit.>. We leverage GDINO <cit.> as a zero-shot object detector to tackle this task. GDINO combines the Transformer-based object detection algorithm DETR with Improved DeNoising (DINO) <cit.> and the grounding vision-language pre-training model Grounded Language-Image Pre-training (GLIP) <cit.>, equipping it with the capacity to understand natural language queries and detect the relevant objects in an image. Specifically, GDINO first extracts vanilla image and text features, enhancing them via a feature enhancer module to obtain cross-modality features. It then selects queries from the image features with a language-guided query selection module, feeding them into a cross-modality decoder to predict object boxes with related phrases. Given the keywords from Sec. <ref>, the visual grounding model outputs the object detection results with bounding boxes and the corresponding keywords. Those outlined regions are kept as our image refinement result.
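A minimal sketch of the keyword-extraction and grounding steps described above, ending with the crop. The LLM call, prompt wording, and GroundingDINO wrapper (`llm`, `grounding_model`) are illustrative assumptions (the actual prompt appears in the paper's supplementary materials), and the multi-box and fallback handling follows the rule detailed in the next paragraph.

```python
def refine_image(conversation_turns, image, llm, grounding_model,
                 max_turns=3, max_keywords=5):
    """Sketch of the ZALM3 refinement step: keywords from preceding context -> RoI crop.

    `llm(prompt)` and `grounding_model(image, caption)` are hypothetical wrappers around the
    LLM and GroundingDINO; their exact interfaces are assumptions for illustration.
    """
    # 1) Summarize keywords from up to the three preceding patient-doctor turns.
    context = "\n".join(conversation_turns[-max_turns:])
    prompt = (f"Extract at most {max_keywords} keywords describing the body part or "
              f"symptom the patient's next photo will show:\n{context}")
    keywords = [k.strip() for k in llm(prompt).split(",") if k.strip()][:max_keywords]

    # 2) Zero-shot visual grounding with the keywords as the text query. The hypothetical
    #    wrapper returns a mapping keyword -> list of (x1, y1, x2, y2) boxes.
    detections = grounding_model(image, caption=". ".join(keywords))

    # Conservative fallback: if any keyword is not grounded, keep the original image.
    if not keywords or any(not detections.get(k) for k in keywords):
        return image

    # 3) Take the union of all detected boxes and crop the image to that region.
    boxes = [b for k in keywords for b in detections[k]]
    x1 = min(b[0] for b in boxes); y1 = min(b[1] for b in boxes)
    x2 = max(b[2] for b in boxes); y2 = max(b[3] for b in boxes)
    return image.crop((x1, y1, x2, y2))  # assuming a PIL-style image object
```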
For multiple bounding box outputs (e.g. multiple keywords are found, or multiple objects are found for one keyword), we take the union of the coordinates of these bounding boxes to accommodate all their regions. If any keyword or any object bounding box is not activated in this image, the image is kept the same as the original one. Hence, our visual grounding procedure is conservative, ensuring that as much information as possible from the image is retained and nothing is overlooked. Once visual grounding is complete, the updated image replaces the original in the same conversation spot. §.§ Medical VLM Although many Chinese medical LLMs have emerged <cit.>, there is still a gap in Chinese medical VLMs. To address this, we have developed two versions of medical VLM frameworks along the timeline of LLM development. The initial framework replaces the English LLM in BLIP-2 <cit.> with the Baichuan2-13B-Chat <cit.> model. The current framework is based on the Qwen-VL architecture <cit.>, upgrading the LLM component from Qwen-7B-Chat to Qwen-14B-Chat <cit.>. The details of the training and inference strategies are described in Sec. <ref>. Note that our proposed method is compatible with other medical LLMs, such as the recently released HuatuoGPT-II series <cit.>, which features a larger backbone (34B) and requires more computational resources. § DESIGN OF SUBJECTIVE ASSESSMENT §.§ Difficulty of Objective Assessment In single-turn question-answering tasks, such as an image captioning task, a model's generated response can be directly compared for similarity with the ground truth description, and a numeric accuracy result can then be computed. However, in multi-turn medical dialogue tasks, the ground truth for each dialogue turn is difficult to define precisely. For instance, before providing a diagnosis, a doctor may need to ask the patient several relevant questions, and the order of these questions is not strictly fixed, as long as enough information is gathered from the patient. Hence, it is inappropriate to rigidly compare the similarity between each turn of the model-generated response and the doctor's response. Another seemingly intuitive approach is to compare the correctness of the entire dialogue's diagnosis result. However, in our actual scenario, disease diagnosis or medication advice must not be independently provided by an AI model, because it would raise concerns about medical accidents and liability issues <cit.>. When an AI model obtains enough information from multi-turn dialogues with patients, regardless of whether its diagnosis is correct or not, a real doctor will intervene to make the decision. Therefore, based on the aforementioned aspects, objective evaluation methods are not appropriate in the context of our multi-turn medical dialogue. §.§ Improvement of Subjective Assessment The International Telecommunication Union Radiocommunication Sector (ITU-R) describes methodologies for subjective assessment (ITU-R BT.500), which were originally used for assessing the quality of television images <cit.>. Among those methods, double-stimulus methods were endorsed to compare the outputs of two test conditions simultaneously. This strategy has been utilized in various quality assessment tasks <cit.>. For medical LLM quality assessment, a simplified form (e.g. a win-loss-tie percentage) of the stimulus-comparison methods in ITU-R BT.500 <cit.> has been adopted in recent Chinese medical LLMs <cit.>.
To enhance the current performance assessment scheme in the medical VLM/LLM domain, we leverage the philosophy of the double-stimulus continuous quality-scale (DSCQS) <cit.>, utilize the mechanism of differential mean opinion scores (DMOS) <cit.>, and integrate the necessary medical domain knowledge to provide more fine-grained and robust evaluation results. §.§ Proposed Evaluation Metrics For each doctor's response, we have two model-generated results: one from the original model and one from the model with ZALM3. Specialists rate the content generated by these two models according to a five-grade scale (0∼4 points) evaluation table, which is designed through our extensive communication with doctors. By integrating the doctors' professional concerns, our five-grade criteria for the multi-turn multimodal medical dialogue setting are detailed in Table <ref>. For the model with ZALM3, r_nsm represents the score given by the n-th evaluator for the m-th response in the s-th entire multi-turn consultation session, where r_nsm ∈ {0, 1, 2, 3, 4}, n ∈ {1, ..., N}, s ∈ {1, ..., S}, and m ∈ {1, ..., M_s}. Correspondingly, for the original model, the score is denoted as r^ref_nsm. We then calculate the difference between the two scores: d_nsm = r_nsm - r^ref_nsm. Next, we design two types of DMOS, which are session-level (𝒟^sess) and image-level (𝒟^img), respectively: 𝒟^sess = { (1/N)(1/M_s) ∑_n=1^N ∑_m=1^M_s d_nsm | s ∈ {1, ..., S} }, 𝒟^img = { (1/N) ∑_n=1^N d_ni | i ∈ {1, ..., I} }, where d_ni denotes the score difference for the i-th image sent by the patient; I = ∑_s=1^S I_s; and I_s represents the number of images in the s-th entire multi-turn consultation session. § EXPERIMENT RESULTS §.§ Implementation Details Although some Chinese multi-turn textual medical dialogue datasets are available <cit.>, to the best of our knowledge, there are no publicly accessible Chinese multi-turn multimodal medical dialogue datasets. Therefore, we conduct comprehensive experiments on our in-house datasets. Our data for the experiments includes the departments of dermatology, ophthalmology, and TCM, because these departments possess substantial amounts of image data, as mentioned in Sec. <ref>. Moreover, the rate of poor image quality is high, as shown in Fig. <ref> (orange bars). For example, tongue diagnosis is an indispensable routine examination in TCM <cit.>. It plays a crucial role in the diagnosis and treatment of various diseases by combining tongue examination with clinical assessments. While a doctor requires a close-up of a patient's tongue, the patient often takes a photo that includes her/his face, making the tongue region much smaller than what the doctor expects. A visualized example in Sec. <ref> highlights that such vision-language misalignment degrades the model's responses. Our servers are equipped with NVIDIA V100 16G GPUs and run Ubuntu 18.04, Python 3.9.0, and CUDA 11.8. Due to computational limitations, our training strategy is divided into two steps. First, we fine-tune the LLM component of the VLM with LoRA (Low-Rank Adaptation) <cit.> using multi-turn textual dialogue data. In each department, we extract 100k entire consultation sessions that contain only text conversations (i.e. unimodal textual data), and split them by a ratio of 9:1 into training and validation sets. As a result, a dedicated LLM is obtained for each department. Second, we train the projection layer in the VLM using multimodal data. This approach ensures that we can retain enough image tokens without causing an out-of-memory (OOM) error.
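Returning to the evaluation metrics defined above, the following sketch computes the session-level and image-level DMOS from the raw 0-4 ratings; the nested-list data layout is an assumption for illustration.

```python
import numpy as np

def session_dmos(scores, ref_scores):
    """Session-level DMOS: scores[s][n][m] is evaluator n's 0-4 rating of response m in
    session s for the model with ZALM3; ref_scores has the same layout for the original model.
    Returns one averaged difference score per session."""
    dmos = []
    for s_scores, s_ref in zip(scores, ref_scores):
        d = np.asarray(s_scores, dtype=float) - np.asarray(s_ref, dtype=float)  # shape (N, M_s)
        dmos.append(d.mean())  # average over evaluators n and responses m
    return dmos

def image_dmos(img_scores, img_ref_scores):
    """Image-level DMOS: img_scores[i][n] is evaluator n's rating for the response tied to
    image i. Returns one averaged difference score per image."""
    d = np.asarray(img_scores, dtype=float) - np.asarray(img_ref_scores, dtype=float)  # (I, N)
    return d.mean(axis=1)

# Example with two sessions and two evaluators (ratings are made up for illustration).
sess = [[[3, 4], [4, 4]], [[2, 3, 4], [3, 3, 4]]]
ref  = [[[3, 3], [3, 4]], [[2, 2, 3], [2, 3, 3]]]
print(session_dmos(sess, ref))  # -> [0.5, 0.666...]
```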
In each department, we extract 10k entire consultation sessions that have at least one image sent by patients. We split them 9k:900:100 into training, validation, and test sets, respectively. For the VLM training, we adopt OneCycleLR <cit.> with a max learning rate of 5×10^-5 for both models (BLIP-2 with Baichuan2-13B-Chat and Qwen-VL with Qwen-14B-Chat). The remaining parameters are set to their defaults. In the model inference process, we concatenate the preceding text and image data from the patient-doctor conversation as the history input to generate the response to the user's current message. We recruit 4 specialists for subjective assessment of the model. In ZALM3, we use the Baichuan2-13B-Chat model as the LLM in our experiments. Since this LLM is independent of the VLM, ZALM3 can also adopt other LLMs. Due to the page limit, the prompt is provided in our supplementary materials. We limit the number of keywords to 5. As mentioned in Sec. <ref>, GDINO is utilized as the visual grounding model. Although GDINO reports the performance of its Tiny, Base, and Large models <cit.>, the available checkpoints are for the Tiny (GDINO-T) and Base (GDINO-B) models only. We use GDINO-B to achieve better performance. Illustrated comparisons of the GDINO-T and GDINO-B models for visual grounding are presented in our supplementary materials. The bounding box threshold is set to 0.35 and the text threshold to 0.25. The remaining parameters of GDINO are left at their defaults. §.§ Results §.§.§ Multi-Turn Multimodal Medical Dialogue We design three subjective assessment criteria to evaluate our proposed approach. We calculate the averaged values 𝒟^sess and 𝒟^img to represent the performance of ZALM3 at the session level and image level, respectively. In addition, we compute one more averaged value 𝒟^img', which represents images where the new image area is less than 70% of the original image area after applying ZALM3, indicating that these images have been substantially cropped by ZALM3. Table <ref> shows the results of these three assessments on our multi-turn multimodal medical dialogue datasets from different clinical departments. We can see that all averaged DMOS values are greater than 0, and the p-values are well below 0.05. This indicates that our method is highly effective across different departments. In particular, the session-level improvement is most robust according to the highly significant p-values. Furthermore, we observed that the results in the cropped image-level column are the highest, confirming that the images cropped by ZALM3 enhance vision-language alignment, which benefits the model's responses to other images and further improves the model's performance on the entire session. In Sec. <ref>, we mentioned the two versions of our medical VLM. Besides applying our current framework (Qwen-VL <cit.> + Qwen-14B-Chat <cit.>) to three departments, it is important to note that the initial framework (BLIP-2 <cit.> + Baichuan2-13B-Chat <cit.>) was only applied to the department of dermatology. One reason is that the first dataset we obtained was from the dermatology department, and the data from the two other departments were not available to us at the same time. Another reason is that we found the performance of the current framework to be significantly better than that of the initial one. The results of this observation are described in Sec. <ref>. Therefore, to optimize computational resource usage, we only used the current model framework for the subsequent departments (i.e. ophthalmology and TCM).
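For reference, the GDINO-B inference call with the thresholds reported in the implementation details above could look like the sketch below, assuming the reference GroundingDINO inference utilities from the official repository; the config and checkpoint paths and the caption are illustrative placeholders.

```python
# Sketch of a zero-shot GroundingDINO query; paths and caption are placeholders.
from groundingdino.util.inference import load_model, load_image, predict

model = load_model("GroundingDINO_SwinB_cfg.py",        # assumed config filename
                   "groundingdino_swinb_cogcoor.pth")   # assumed GDINO-B checkpoint
image_source, image = load_image("patient_photo.jpg")

boxes, logits, phrases = predict(
    model=model,
    image=image,
    caption="tongue . tongue coating .",  # keywords extracted from the preceding turns
    box_threshold=0.35,                   # thresholds as reported in the paper
    text_threshold=0.25,
)
# `boxes` and `phrases` then feed the union-and-crop step sketched earlier.
```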
Fig. <ref> depicts the efficacy of ZALM3 on a session-wise inference example from the department of TCM. As discussed in Sec. <ref> regarding the difficulty of objective assessment, it is challenging to match the model's responses directly with those of a real doctor. We thus mainly compare the model's outputs without and with the proposed ZALM3, using the actual patient-doctor dialogue as a reference. As described in Sec. <ref>, all text and image messages between the patient and the real doctor before the user's current message are regarded as the history input during model training and inference. The original model, which was trained without ZALM3, had the issues of premature diagnoses (“folliculitis”) and unnecessary referrals to other departments (“see a dermatologist”). In particular, when the patient sent a poor-quality tongue image, which was not a close-up view, the model incorrectly identified the affected area as the “forehead”. In contrast, the model with ZALM3 generated acceptable responses to keep the conversation going and accurately addressed the appearance of the patient's tongue based on the cropped image, which was generally consistent with the real doctor's response. In this example, ZALM3 extracts the RoI from the unsatisfactory image, shifts the model's attention to the tongue, and makes the responses more accurate. §.§.§ Medical VLM Frameworks In this section, we compare the performance of the initial and current VLM frameworks we developed. Since our assessment metric is a relative one, we record the mean opinion score (MOS) values before deriving the DMOS. This absolute value can reveal the individual performance of the two VLM frameworks. We calculate the session-level averaged MOS^ref and MOS, representing the original model without ZALM3 (i.e. the reference model) and the model with ZALM3, respectively. Table <ref> displays the performance results of these two frameworks. First, the reference result of Qwen-VL + Qwen-14B-Chat significantly outperforms that of BLIP-2 + Baichuan2-13B-Chat (3.23 vs. 2.82), and is even comparable to the result of BLIP-2 + Baichuan2-13B-Chat equipped with ZALM3 (3.23 vs. 3.25). Moreover, the performance gap between the frameworks widens further after applying ZALM3 to Qwen-VL + Qwen-14B-Chat (3.25 vs. 3.71). Therefore, we adopt Qwen-VL + Qwen-14B-Chat as the current VLM framework. § DISCUSSION, LIMITATIONS, AND FUTURE WORK Many medical LLM studies conducted performance comparisons with ChatGPT/GPT-4 or used them for scoring <cit.>. Unfortunately, due to our platform's medical data privacy protection policies, we are prohibited from calling external API interfaces, including ChatGPT/GPT-4. However, those studies have already demonstrated that open-source LLMs trained with specialized medical data outperform general-purpose LLMs in professional fields. To achieve better keyword extraction performance in ZALM3, we believe a model fine-tuned with medical data can be adopted, such as the LLM part in our department-wise VLM. It is worth noting that the images sent by users can be regarded as “natural images with medical information” rather than typical medical images (DICOM grayscale format). While there are some outstanding medical multimodal representation models <cit.>, a zero-shot visual grounding model trained on natural images, like GDINO, fits our scenario better. While our experiments across multiple departments have proven the efficacy of the proposed ZALM3, there are also some limitations. As mentioned in Sec.
<ref>, due to computational limitations, we adopt a two-step strategy for our model training. With sufficient computational resources, we believe that fine-tuning our entire VLM with LoRA <cit.> on the database curated by ZALM3 would achieve better results <cit.>. Moreover, with sufficient computational resources, whether attaching a visual grounding model affects the performance of the VLM itself is also an ongoing research question <cit.>. Since our platform only includes Chinese patients and does not involve people of other skin tones or ethnicities, this may limit the generalizability of our proposed method to these populations. Due to the limitations of our practical application scenario, we have conducted experiments on multimodal data in Chinese only. However, our method can be applied to multi-turn multimodal medical dialogue in other languages as well. We observe that some patients upload photos of medications and lab reports, which contain extensive textual information, requiring the model to have optical character recognition (OCR) capabilities. With the recent emergence of studies on medical document question answering (DQA) <cit.>, addressing this issue becomes an important part of our future work. § CONCLUSIONS In this paper, we propose ZALM3, a zero-shot scheme to improve vision-language alignment in the multi-turn multimodal medical dialogue scenario without altering any VLM structure. Aiming to address the issue of poor-quality images sent by patients, ZALM3 leverages the semantically coherent text conversations preceding an image to infer the RoIs in the image. It adopts an LLM to summarize the keywords from the prior texts and a visual grounding model to crop the RoIs, in order to enrich vision-language alignment. Compared with the commonly used but simplistic win-loss-tie rule, we design a new evaluation metric to convey a fine-grained performance comparison for subjective assessment of multi-turn unimodal/multimodal medical dialogue. Our experiments using two versions of LLMs on the in-house datasets from three different clinical departments statistically demonstrate the significant effectiveness of ZALM3. § PROMPT FOR KEYWORD EXTRACTION The prompt for keyword extraction in ZALM3 is provided in Table <ref>. § GROUNDING DINO (GDINO) Although GDINO reports the performance of its Tiny, Base, and Large models <cit.>, the available checkpoints are for the Tiny (GDINO-T) and Base (GDINO-B) models only. Fig. <ref> provides several examples from the department of dermatology to illustrate the visual grounding performance of GDINO-T and GDINO-B. Both GDINO-T and GDINO-B can ignore irrelevant keywords or phrases, such as “December 16” and “10 days”. While the two examples on the left show comparable results for the two models, GDINO-B shows superior performance compared with GDINO-T on the other two. The two examples on the left are relatively simple, as the background interference is minimal, allowing both GDINO-T and GDINO-B to capture a consistent RoI. However, in the example in the upper right corner, the extensive texture patterns in the background interfere with GDINO-T's RoI extraction. In the lower right example, GDINO-T tends to be more strict in extracting the RoIs, while GDINO-B is more lenient. Since we aim to remove irrelevant elements from the image without losing core information, we believe that GDINO-B performs better overall. This observation has also been endorsed by doctors.