Dataset columns: title (string, length 8–238), introduction (string, length 10–3.28k), conclusion (string, length 15–2.85k), prompt (string, length 510–5.5k)
“Merging Results Is No Easy Task”: An International Survey Study of Collaborative Data Analysis Practices Among UX Practitioners
Usability testing is a frequently employed user-centred design method for detecting usability problems [25, 110]. Despite being effective and useful, analyzing usability test sessions can be tedious, challenging, and time-consuming [12, 25, 29, 81]. When analyzing usability test sessions, user experience (UX) practitioners often need to attend to multiple behavioral signals in both the visual and audio channels of the recordings and conduct several tasks (e.g., look at the user’s actions, listen to the user’s verbalizations, and write their own annotations) simultaneously [12]. Furthermore, UX practitioners working in industry often face time pressure to deliver their analysis results [25, 29, 64]. Consequently, it is not uncommon that UX practitioners might miss important usability problems or misinterpret them [26, 38].
We have conducted an international survey to understand the practices and challenges of collaboration in the context of conducting data analysis on usability test sessions. Based on the responses of 279 participants who had varying UX experience and worked in different geographic locations, we found that UX practitioners collaborate primarily to identify more usability problems and generate redesign suggestions, and to a lesser extent to improve the reliability of results. We identified three modes of collaboration: independently analyze different portions of the data with little or no overlap and then collaborate (i.e., divide and conquer), collaboratively analyze the session with little or no independent analysis (i.e., group analysis), and independently analyze the same set of data and then collaborate. Although the third mode is the one recommended by the literature, it was the least adopted among our respondents because it was perceived to be more time-consuming than the other two. Moreover, most of the respondents encountered challenges related to lack of time, as 70% needed to complete their analysis within one week, impacting their choice of collaboration modes. These findings highlight an opportunity to address the trade-offs between efficiency and validity of analysis results. Furthermore, respondents also experienced disagreements with colleagues regarding usability problems and difficulty in merging analyses from multiple practitioners. Our survey findings could potentially inform UX practitioners about how their colleagues perceive collaboration during data analysis. In addition, our findings reveal opportunities for developing better methods and tools to facilitate collaboration during analysis, for example, developing an integrated platform to support both analysis and collaboration in one place, establishing group analysis procedures, supporting both synchronous and asynchronous collaboration, and integrating team members with various skill levels.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*“Merging Results Is No Easy Task” : An International Survey Study of Collaborative Data Analysis Practices Among UX Practitioners* and the introduction:Usability testing is a frequently employed user-centred design method for detecting usability problems [ 25 , 110 ]. Despite being effective and useful, analyzing usability test sessions can be tedious, challenging, and time-consuming [ 12 , 25 , 29 , 81 ]. When analyzing usability test sessions, user experience (UX) practitioners often need to attend to multiple behavioral signals in both the visual and audio channels of the recordings and conduct several tasks (e.g., look at user’s actions, listen to user’s verbalizations, and write their own annotations) simultaneously [ 12 ]. Furthermore, UX prac- titioners working in industry often face time pressure to deliver their analysis results [ 25 , 29 , 64 ]. Consequently, it is not uncommon that UX practitioners might miss important usability problems or misinterpret them [26, 38]."},{"role":"assistant","content":"Conclusion: We have conducted an international survey to understand the prac- tices and challenges of collaboration in the context of conducting data analysis on usability test sessions. Based on the responses of 279 participants who had varying UX experience and worked in different geographic locations, we found that UX practitioners col- laborate to primarily identify more usability problems and generate redesign suggestions , and to improve reliability of results to a lesser extent. We identified three modes of collaboration: independently analyze different portions of the data with little or no overlap and then collaborate (i.e., divide and conquer) , collaboratively analyze the session with little or no independent analysis (i.e., group analysis) , and independently analyze the same set of data and then collaborate . Although the third mode was recommended by the literature, it was least adopted among our respondents because it was perceived to be more time-consuming than the other two. Moreover, most of the respondents encountered challenges related to lack of time as 70% needed to complete analysis within 1 week, impacting their choice of collaboration modes . These findings highlight an opportunity to address the trade-offs between efficiency and validity of analysis results. What’s more, respondents also experienced disagreements with colleagues regarding usability problems and difficulty in merg- ing analysis from multiple practitioners. Our survey findings could potentially inform UX practitioners about how their colleagues per- ceive collaboration during data analysis. In addition, our findings reveal opportunities for developing better methods and tools to facilitate collaboration during analysis , for example, developing an integrated platform to support both analysis and collaboration in one place, establishing group analysis procedures, supporting both synchronous and asynchronous collaboration, and integrating team members with various skill levels."}]
Understanding Peer Review of Software Engineering Papers
You are invited to participate in a study entitled “Identifying Best Practices in Peer Review” that is being conducted by Neil Ernst, Daniel Mendez, Jeff Carver, and Marco Torchiano.
We conducted an interview- and survey-based study to understand some of the mechanisms by which reviewers in software engineering conduct reviews. We secured a large cohort of respondents drawn from program committee members and journal reviewers of the past few years, many of whom have received recognition for their excellent reviews. Our paper outlined what these respondents see happening in the peer review world, including a large amount of time spent on reviews. We then explained the practices common to these respondents, followed by some of their tips and beliefs about what should be in a review, and how paper writers can optimize their paper for reviewers. The most important properties of a good review are that it be helpful and factually based. Finally, while we initially thought the perceptions and practices of award winners would differ from those of non-award winners, the results show remarkably few large differences.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Understanding Peer Review of Software Engineering Papers* and the introduction:You are invited to participate in a study entitled ”Identifying Best Practices in Peer Review” that is being conducted by Neil Ernst, Daniel Mendez, JeffCarver, and Marco Torchiano."},{"role":"assistant","content":"Conclusion: This study conducted an interview and survey-based study to understand some of the mechanisms by which reviewers in software engineering conduct reviews. We secured a large cohort of respondents derived from program committee mem- bers and journal reviewers of the past few years, many of whom have received recognition for their excellent reviews. Our paper outlined what these respondents see happening in the peer review world, including a large amount of time spent on reviews. We then explained the practices common to these respondents, fol- lowed by some of their tips and beliefs about what should be in a review, and how paper writers can optimize their paper for reviewers. Most important properties of a good review are for it to be helpful and factually-based. Finally, while we initially thought the perceptions and practices of the award-winners would differ from non-award winners, the results show remarkably few large differences."}]
Communication in Immersive Social Virtual Reality: A Systematic Review of 10 Years’ Studies
Communication is defined as the transmission of resources, such as knowledge, data, and skills, among different parties using shared symbols and media [12, 62]. Efficient communication is an essential need that impacts work efficiency and user experience in contexts such as collaboration [64], social contact [35], meetings [1], education [45], and gaming [65]. Recently, with the outbreak of COVID-19, the need for remote communication has increased dramatically [18, 26, 30, 52]. The current mainstream technology for remote communication is video conferencing systems such as Zoom and Skype. However, these types of communication media do not provide an immersive environment as face-to-face communication does. As a result, they do not provide users with the feeling of bodily closeness, emotional closeness, and the experience of physical presence. They also limit users’ interaction with the environment and objects, such as visiting museums or traveling together [22].
Communication experiences and quality are crucial in social VR for collaboration and building social connectedness. We have conducted a systematic review using the PRISMA method to understand how communication has been studied in social VR in papers published in the major digital libraries (i.e., ACM Digital Library, IEEE Xplore, and Springer) over the past ten years. Based on our results, we identified the key factors that affect people’s communication experience and quality in social VR, including avatar representations that bring a sense of anonymity and relaxation, and various approaches to conveying emotion and exchanging information. Furthermore, we investigated the methodologies used in the surveyed studies and identified research opportunities to improve the design of social VR applications to better support communication and make it more accessible in social VR. We discussed and proposed future research directions based on the results of the review, such as using social VR as a communication medium for remote family members.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Communication in Immersive Social Virtual Reality: A Systematic Review of 10 Years’ Studies* and the introduction:Communication is defined as the transmission of resources, such as knowledge, data, and skills, among different parties using shared symbols and media [ 12 , 62 ]. Efficient communication is an essential demand which would impact work efficiency and user experiences in contexts such as collaboration [ 64 ], social contact [ 35 ], meeting [ 1 ], education [ 45 ], and gaming [ 65 ]. Recently, with the outbreak of COVID-19, the need for remote communication has increased dramatically [ 18 , 26 , 30 , 52 ]. The current mainstream technology for remote communication is video conferencing systems such as Zoom and Skype. However, these types of communication media do not provide an immersive environment as face-to-face does. As a result, they do not provide users with the feeling of bodily closeness, emotional closeness, and the experiences of physical presence. They also limit users’ interaction with the environment and objects, such as visiting museums or traveling together [22]."},{"role":"assistant","content":"Conclusion: Communication experiences and quality are crucial in social VR for collaboration and building social connectedness. We have con- ducted a systematic review using the PRISMA method to understand how communication has been studied in social VR in the past ten years’ published papers in the major digital libraries (i.e., ACM Digi- tal Library, IEEE Xplore and Springer). According to our results, we identified the key factors that affect people’s communication experi- ence and quality in social VR, including the representation of avatar bringing the sense of anonymity and relax, various approaches to convey emotion and exchange information. Furthermore, we inves- tigated the methodology used in surveyed studies and identified research opportunities to improve the design of social VR applica- tions to better support communication and make it more accessible in the social VR. We discussed and proposed future research direc- tions according to the results of paper reviews, such as using social VR as communication media for remote family relatives."}]
Deep Learning for Image Denoising: A Survey
Image processing has numerous applications including image segmentation [28], image classification [25,38,32,12], object detection [13], video tracking [36], image restoration [48] and action recognition [35]. In particular, image denoising is one of the most important branches of image processing and serves as an example of the development of image processing technologies over the last 20 years [42]. Buades et al. [5] proposed a non-local algorithm to deal with image denoising. Lan et al. [19] fused the belief propagation inference method and Markov Random Fields (MRFs) to address image denoising. Dabov et al. [9] proposed grouping similar two-dimensional image fragments into three-dimensional data arrays to improve sparsity for image denoising. These selection and extraction methods achieve strong performance for image denoising. However, the conventional methods face two challenges [45]. First, these methods involve non-convex optimization and require manually set parameters. Second, they involve a complex optimization problem at the test stage, resulting in high computational cost.
This paper first comprehensively introduces the development of deep learning technologies for image processing applications and then shows the implementations of typical CNNs. After that, image denoising is illustrated in detail, summarizing the differences and ideas of different methods for image denoising in the real world. Finally, this paper shows the challenges of deep learning methods for image processing applications and offers solutions. This review offers important cues on deep learning technologies for image processing applications. We believe that this paper could provide researchers with a useful guideline for working in the related fields, especially for beginners working in deep learning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Image Denoising: A Survey* and the introduction:Image processing has numerous applications including image segmentation [28], image classification [25,38,32,12], object detection [13], video tracking [36], im- age restoration [48] and action recognition [35]. Especially, the image denoising technology is one of the most important branches of image processing technolo- gies and is used as an ex-ample to show the development of the image processing technologies in last 20 years [42]. Buades et al. [5] proposed a non-local algorithm method to deal with image denoising. Lan et al. [19] fused the belief propagation inference method and Markov Random Fields (MRFs) to address image denois- ing. Dabov et al. [9] proposed to transform grouping similar two-dimensional im- age fragments into three-dimensional data arrays to improve sparisty for image denoising. These selection and extraction methods have amazing performance for image denoising. However, the conventional methods have two challenges [45]. First, these methods are non-convex, which need to manually set parameters. Second, these methods refer a complex optimization problem for the test stage, resulting in high computational cost."},{"role":"assistant","content":"Conclusion: This paper first comprehensively introduces the development of deep learning technologies on image processing applications. And then shows the implementa- tions of typical CNNs. After that, image denoising is illustrated in detail, which concludes the differences and ideas of different methods for image denoising in real world. Finally, this paper shows the challenges of deep learning methods for image processing applications and offers solutions. This review offers important cues on deep learning technologies for image processing applications. We believe that this paper could pro-vide researchers with a useful guideline working in the related fields, especially for the beginners worked in deep-learning."}]
Abstract
Outsourcing data storage to cloud-based services is gaining popularity among all kinds of users, as it seems to be a reasonable alternative to a private cloud. Cost aside, customers base their choice of an adequate cloud provider on various factors, particularly the availability, security, and privacy of the stored data.
In previous sections, we analyzed existing distributed storage systems providing additional secrecy through the use of fragmentation. We also presented database fragmentation solutions that separate data in order to avoid full encryption. Few systems focus on providing long-term, secure, and low-cost data storage. Another motivation is the possibility of minimizing encryption inside databases while still providing a good level of data protection.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Abstract* and the introduction:Outsourcing data storage to cloud-based services gains popularity among all kinds of users, as it seems to be a reasonable alternative to a private cloud. Cost aside, customers subordinate their choice of an adequate cloud provider to various factors, particularly availability, security, and privacy of the stored data."},{"role":"assistant","content":"Conclusion: In previous sections, we analyzed existing distributed storage systems provid- ing additional secrecy by use of fragmentation. We also presented database fragmentation solutions separating data in order to avoid full encryption. Few systems focus on providing a long term, secure and non-costly data storage. An- other motivation is the possibility of minimizing encryption inside databases, while still providing a good level of data protection."}]
Deep Long-Tailed Learning: A Survey
Deep learning allows computational models, composed of multiple processing layers, to learn data representations with multiple levels of abstraction [1], [2] and has made incredible progress in computer vision [3], [4], [5], [6], [7], [8]. The key enablers of deep learning are the availability of large-scale datasets, the emergence of GPUs, and the advancement of deep network architectures [9]. Thanks to the strong ability of learning high-quality data representations, deep neural networks have been applied with great success to many visual discriminative tasks, including image classification [6], [10], object detection [7], [11] and semantic segmentation [8], [12].
In this survey, we have extensively reviewed classic deep long-tailed learning methods proposed before mid-2021, according to the taxonomy of class re-balancing, information augmentation and module improvement. We have empirically analyzed several state-of-the-art long-tailed methods by evaluating to what extent they address the issue of class imbalance, based on a newly proposed relative accuracy metric. Following that, we discussed the main application scenarios of long-tailed learning, and identified potential innovation directions for methods and task settings. We expect that this timely survey not only provides a better understanding of long-tailed learning for researchers and the community, but also facilitates future research.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Long-Tailed Learning: A Survey* and the introduction:D EEP learning allows computational models, composed of multiple processing layers, to learn data representations with multiple levels of abstraction [1], [2] and has made incredible progress in computer vision [3], [4], [5], [6], [7], [8]. The key enablers of deep learning are the availability of large-scale datasets, the emergence of GPUs, and the advancement of deep network architectures [9]. Thanks to the strong ability of learning high- quality data representations, deep neural networks have been applied with great success to many visual discriminative tasks, including image classification [6], [10], object detection [7], [11] and semantic segmentation [8], [12]."},{"role":"assistant","content":"Conclusion: In this survey, we have extensively reviewed classic deep long- tailed learning methods proposed before mid-2021, according to the taxonomy of class re-balancing, information augmentation and module improvement. We have empirically analyzed several state-of-the-art long-tailed methods by evaluating to what extent they address the issue of class imbalance, based on a newly proposed relative accuracy metric. Following that, we discussed the main application scenarios of long-tailed learning, and identified potential innovation directions for methods and task settings. We expect that this timely survey not only provides a better understanding of long-tailed learning for researchers and the community, but also facilitates future research."}]
A Survey on Cross-Lingual Summarization
To help people efficiently grasp the gist of documents in a foreign language, Cross-Lingual Summarization (XLS) aims to generate a summary in the target language from the given document(s) in a different source language. This task could be regarded as a combination of monolingual summarization (MS) and machine translation (MT), both of which are unsolved natural language processing (NLP) tasks and have been continuously studied for decades (Paice, 1990; Brown et al., 1993), which makes XLS a challenging task.
In this paper, we present the first comprehensive survey of current research efforts on XLS. We systematically summarize existing XLS datasets and methods, highlight their characteristics, and compare them with each other to provide deeper analyses. In addition, we discuss promising directions from multiple perspectives to facilitate further research on XLS. We hope that this survey provides a clear picture of the topic and boosts the development of current XLS technologies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Cross-Lingual Summarization* and the introduction:To help people efficiently grasp the gist of docu- ments in a foreign language, Cross-Lingual Sum- marization (XLS) aims to generate a summary in the target language from the given document(s) in a different source language. This task could be re- garded as a combination of monolingual summa- rization (MS) and machine translation (MT), both of which are unsolved natural language process- ing (NLP) tasks and have been continuously stud- ied for decades ( Paice , 1990 ; Brown et al. , 1993 ), which makes XLS a challenging task."},{"role":"assistant","content":"Conclusion: In this paper, we present the first comprehensive survey of current research efforts on XLS. We sys- tematically summarize existing XLS datasets and methods, highlight their characteristics and com- pare them with each other to provide deeper anal- yses. In addition, we give multiple perspective di- rections to facilitate further research on XLS. We hope that this XLS survey could provide a clear picture of this topic and boost the development of the current XLS technologies."}]
A Review of In-Memory Space-Efficient Data Structures for Temporal Graphs
Widespread adoption of complex network concepts in information technologies has driven the creation of large volumes of data to be modelled as graphs. This increasing volume of data produced at high speeds brings new challenges. On one hand, we need efficient computational mechanisms to persist data that evolve continuously over time on cheap external storage. On the other hand, we need specialized techniques to load these data into faster (and more expensive) memories using minimal space and then process queries as fast as possible to construct valuable knowledge.
This paper reviewed studies on data structures for storing and querying temporal graphs in primary memory. We noted that many authors use data compression techniques to reduce the amount of space per contact needed to store temporal graphs. There are also succinct data structures that allow a variety of useful queries with time complexity similar to that of non-compressed data structures. Therefore, algorithms can efficiently process large amounts of data in primary memory.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of In-Memory Space-Efficient Data Structures for Temporal Graphs* and the introduction:Widespread adoption of complex network concepts in information technolo- gies has driven the creation of large volumes of data to be modelled as graphs. This increasing volume of data produced at high speeds brings us new chal- lenges. On one hand, we need efficient computational mechanisms to persist data that evolve continuously over time on cheap external storage. On the other hand, we need specialized techniques to load these data in faster (and more expensive) memories using minimal space and, then, process queries as fast as possible to construct valuable knowledge."},{"role":"assistant","content":"Conclusion: This paper reviewed studies about data structures for storing and querying temporal graphs in primary memory. We noted that many authors use data compression techniques to reduce the amount of space per contact needed to store temporal graphs. There are also succinct data structures that allows a variety of useful queries with time complexity of queries similar to non- compressed data structures. Therefore, algorithms could efficiently process large amount of data in primary memory."}]
A Comprehensive Survey on Networking over TV White Spaces
In a historic ruling in 2008, the Federal Communications Commission (FCC) in the United States opened up the TV white space spectrum for unlicensed secondary usage [1]. TV white spaces refer to the allocated but locally unused TV channels (between 54 MHz and 698 MHz in the US). Similar regulations have been adopted by several other countries including the UK [2], Canada [3], Singapore [4], Malaysia [5], and South Africa [6]. Unlicensed secondary devices such as Wi-Fi [7], ZigBee [8], and Bluetooth [9] are allowed to access and operate over the TV white spaces without interfering with the primary users, i.e., TV stations or other licensed users of the spectrum. To access TV white spaces, an unlicensed device can either query a cloud-hosted geo-location spectrum database or perform a sensing operation to determine the energy of the spectrum [1]. In 2010, the FCC mandated that an unlicensed device must query a database to learn about the TV white spaces in its location [10]. The FCC also kept the sensing technique as an optional choice for secondary users [10].
The 2008 FCC ruling in the United States on the TV white space spectrum has opened up new opportunities for unlicensed operation in the TV band. TV white spaces’ availability, diverse bandwidth, and excellent propagation characteristics make them suitable for long-range, low-power, and large-area applications such as sensing and monitoring applications, agricultural IoT applications, wireless broadband access, real-time applications, smart and connected communities, smart utility applications, etc. Several new network architectures and protocols have been proposed solely targeting the TV white space spectrum as well. However, TV white space fragmentation, temporal diversity, and spatial diversity make it challenging to adopt them directly due to interference and coexistence issues. In this paper, we have investigated these key research challenges and provided future directions. We have then provided key insights and comparisons between several protocols built on top of TV white spaces. We have also presented the opportunities of TV white space protocols in great detail.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Networking over TV White Spaces* and the introduction:In a historic ruling in 2008, the Federal Communication Commission (FCC) in the United States opened up the TV white space spectrum for unlicensed secondary usage [1]. TV white spaces refer to the allocated but locally unused TV channels (between 54 MHz and 698 MHz in the US). Similar regulations have been adopted by several other countries including UK [2], Canada [3], Singapore [4], Malaysia [5], and South Africa [6]. Unlicensed secondary devices such as Wi-Fi [7], ZigBee [8], and Bluetooth [9] are allowed to access and operate over the TV white spaces without interfering the primary users, i.e., TV stations or other licensed users of the spectrum. To access TV white spaces, an unlicensed device can either query a cloud-hosted geo-location spec- trum database or perform a sensing operation to determine the energy of the spectrum [1]. In 2010, FCC mandated that an unlicensed device must query a database to learn about the TV white spaces in its location [10]. Also, FCC kept the sensing technique as an optional choice for secondary users [10]."},{"role":"assistant","content":"Conclusion: The 2008 FCC ruling in the United States on TV white space spectrum has opened up new opportunities for unlicensed operation in the TV band. TV white spaces’ availability, diverse bandwidth, and excellent propagational character- istics make them suitable for long range, low-power, and large area applications such as sensing and monitoring ap- plications, agricultural IoT applications, wireless broadband access, real-time applications, smart and connected commu- nities, smart utility applications, etc. Several new network architectures and protocols have been proposed solely tar- geting the TV white space spectrum as well. However, TV white space fragmentation, temporal diversity, and spatial diversity make it challenging to adopt them directly due to interference and coexistence issues. In this paper, we have investigated these key research challenges and provided future directions. We have then provided key insights and comparison between several protocols built on top of TV white spaces. Also, we have presented the opportunities of TV white space protocols in great details."}]
Towards Evaluation of Autonomously Generated Musical Compositions: A Comprehensive Survey
The scope of algorithmic composition includes a wide range of tasks, which consist of the creation of melodies, chords, rhythms or even lyrics, i.e., all the typical components of music. The purpose of this work is to present the output of the neural network without any modifications. The aim of the research is not to romanticize artificial intelligence and its use in algorithmic composition, but on the contrary to point out its advantages and disadvantages. We therefore start from the idea that only undistorted results carry a certain aesthetic, which is destroyed by later authorial adaptations. There are many applications that aim to create a complete model for an autonomously generated composition; systems are able to generate muzak, assist singers in transcribing songs, or imitate long-dead authors. But why? This is the right question that we would like to address in this paper. Subjective understanding of creativity or aesthetics differs not only in preferences (popular authors or genres), but also on the basis of lived experience or socio-cultural environment. So, what do we want to achieve with such an adaptation? What is the benefit of the resulting work for the author, who can no longer evaluate this composition? And in what ways should we evaluate such a composition at all? Let us try to paraphrase musicologist Matěj Kratochvíl here: “So far I have not found anyone who would publicly consider whether the world will somehow enrich the existence of a new composition by a dead author, which in comparison with his actual work sounds as if it was composed by a machine. So, I dare say that the music world has not gained anything new.” (Kratochvíl 2020)
Although we have proposed several different methods of analytical, musicological and aesthetic evaluation, Matěj Kratochvíl’s rhetorical question from the introductory part of this paper (Kratochvíl 2020) is far from answered. Although the evaluation methods of musical metacreation offer several possibilities by which we can, to some extent, determine the impact of similar generative projects (see media evaluation and audience evaluation), the measurability of such outputs is at least problematic. So, let us try to think again about Matěj Kratochvíl’s statement and verify the contribution that generative art can provide. The emergence of models that focus on generative forms of art has brought with it, above all, the loss of the illusion that art will remain the exclusive domain of humans. However, generative art, and artificial intelligence in general, faces many difficulties. In their reports, journalists often describe the resulting works in layman’s terms without cultural or scientific understanding, which further damages the authenticity of these works. The idea of artificial intelligence as omniscient robots that could one day exterminate and replace humanity is fueled not only by media companies but also by commercial hype that aims to convince investors that a given model is the real, unadulterated artificial intelligence. For laymen, AI is a difficult topic to grasp, most often associated with the film hero the Terminator or similar themes from cult sci-fi films. We encounter the same situation in the case of computational creativity. The media explosion around artificial intelligence hardly distinguishes between machine learning and the supposed, currently non-existent general artificial intelligence. Firms engaged in the application of neural networks are then to some extent forced to maintain this unrealistic picture of AI, which today represents a cultural fetish rather than an object of scientific interest. If, in the case of a supposed, truly omniscient artificial intelligence, we speak of the loss of illusions, then this issue points to a perhaps even more interesting perspective on creative tendencies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Towards Evaluation of Autonomously Generated Musical Compositions: A Comprehensive Survey* and the introduction:The scope of algorithmic composition includes a wide range of tasks, which consist of the creation of melodies, chords, rhythms or even lyrics, i.e., all the typical components of music. The purpose of this work is to present the output of the neural network without any modifications. The aim of the research is not to romanticize artificial intelligence and its use in algorithmic composition, but on the contrary to point out its advantages and disadvantages. We therefore start from the idea that only undistorted results carry a certain aesthetic, which is destroyed by later authorial adaptations. There are many applications that aim to create a complete model for an autonomously generated composition; systems are able to generate muzak songs, assist singers in transcribing songs or can imitate long-dead authors. But why? This is the right question that we would like to address in this paper. Subjective understanding of creativity or aesthetics differs not only within preferences (popular authors or genres), but also differs on the basis of experienced experience or socio-cultural environment. So, what do we want to achieve with such an adaptation? What is the benefit of the resulting work for the author, who can no longer evaluate this composition? And in what ways should we evaluate such a composition at all? Let’s try to paraphrase musicologist Matěj Kratochvíl here: “So far I have not found anyone who would publicly consider whether the world will somehow enrich the existence of a new composition by a dead author, which in comparison with his actual work sounds as if it was composed by a machine. So, I dare say that the music world has not gained anything new.” (Kratochvíl 2020)"},{"role":"assistant","content":"Conclusion: Although we have proposed several different methods of analytical, musicolog- ical and aesthetic evaluation, Matěj Kratochvíl’s rhetorical question from the introductory part of this paper (Kratochvíl 2020) was far from answered. Al- though the evaluation methods of musical metacreaction offer several possibili- ties by which we can to some extent determine the impact of similar generative projects (see media evaluation and audience evaluation), the measurability of such outputs is at least problematic. So, let’s try to think again about Matěj Kratochvíl’s statement and verify the contribution that generative art can pro- vide. The emergence of models that focus on generative forms of art has brought with it, above all, the loss of illusions that art will remain the exclusive domain of man. However, generative art, and therefore artificial intelligence in general, faces many difficulties. In their reports, journalists often describe the resulting works in layman’s terms without any knowledge of cultural or scientific under- standing, which further damages the authenticity of these works. The idea of artificial intelligence as omniscient robots, which could one day exterminate and replace humanity, is fueled not only by media companies, often by commercial popularity, which aims to convince investors why their model is the real, unadul- terated artificial intelligence. For laymen, AI is a difficult topic to grasp, which is most often associated with the film hero Terminator, or with a similar theme of cult sci-fifilms. 
We encounter the same situation in the case of computational creativity. The media explosion of artificial intelligence hardly distinguishes between machine learning and the supposed, nowadays non-existent general ar- tificial intelligence. Firms engaged in the application of neural networks are then to some extent forced to maintain this unrealistic picture of AI, which today represents a cultural fetish rather than an object of scientific interest. If, in the case of supposed, truly omniscient artificial intelligence, we speak of the loss of illusions, then this issue points to a perhaps even more interesting perspective of creative tendencies."}]
A Scoping Review of Publicly Available Language Tasks in Clinical Natural Language Processing
Since the inception of the first Integrating Biology and the Bedside (i2b2) shared task in 2006, currently known as the National Natural Language Processing (NLP) Clinical Challenge (n2c2), the field of clinical NLP has advanced in clinical applications that rely on text from the electronic health record (EHR). Tasks with publicly available data (e.g., shared tasks) provide a new avenue for advancing the state of the art using publicly available datasets in a sector that is otherwise heavily regulated and protected from sharing patient data. In an editorial approximately a decade ago, Chapman et al. [1] identified the major barriers to clinical NLP development for which shared tasks may provide a solution. At the time, some of the challenges were a lack of data resources including annotation tools, benchmarking and standardized metrics, reproducibility, collaboration between the general NLP and health research communities, and the need for user-centered development.
Interest in introducing and participating in clinical NLP tasks is growing, with more tasks surfacing each year. The breadth of tasks is also growing, with topics varying from tasks with specific clinical applications to those facilitating clinical language understanding and reasoning. There is no doubt that the field will continue to grow and attract more researchers from both the general NLP domain and the clinical informatics community. We encourage future work proposing tasks/shared tasks to overcome barriers in community collaboration, reporting transparency, and consistency of data preparation.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Scoping Review of Publicly Available Language Tasks in Clinical Natural Language Processing* and the introduction:Since the inception of the first Integrating Biology and the Bedside (i2b2) shared task in 2006, currently known as the National Natural Language Processing (NLP) Clinical Challenge (n2c2), the field of clinical NLP has advanced in clinical applications that rely on text from the electronic health record (EHR). Tasks with publicly available data (e.g. shared tasks) provide a new avenue for advancing the state-of-the-art using publicly available datasets in a sector that is otherwise heavily regulated and protected from sharing patient data. In an editorial approximately a decade ago, Chapman et al.[1] identified the major barriers for clinical NLP developments where shared tasks may provide a solution. At the time, some of the challenges were lack of data resources including annotation tools, benchmarking and standardized metrics, reproducibility, collaboration between the general NLP communities and health research communities, and the need for user-centered development."},{"role":"assistant","content":"Conclusion: The interests in introducing and participating in clinical NLP tasks are growing with more tasks surfacing each year. The breadth of tasks is also growing with topics varying from tasks with specific clinical applications to those facil- itating clinical language understanding and reasoning. It is no doubt that the field will continue to grow and attract more researchers from both general NLP domain and the clinical informatics community. We encourage future work on proposing tasks/shared tasks to overcome barriers in community collaboration, reporting transparency, and consis- tency of data preparation."}]
An Extensive Review of Computational Dance Automation Techniques and Applications
Efforts to combine dance and computational power can be traced back to 1967. Being a domain that requires relatively more innovation and creativity than merely following standard procedures, dance was the slowest to adopt technology. The earliest attempt was published by A. Michael Noll [2] in Dance Magazine in 1967, although New York-based choreographer Merce Cunningham also did the same [3].
Creativity is considered a gift, and no two persons are the same in this process. An art form like dance is considered an entirely creative process, although the basics are very clearly specified and taught in the case of classical dance forms. Each choreographer is known for their particular style and creative form. Using a machine to aid in this creative process has been attempted by many well-known creative artists, and dance is also a domain known to accept and experiment with this. We have reviewed at least a hundred such research papers and articles and categorized them accordingly.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A N E XTENSIVE R EVIEW OF C OMPUTATIONAL D ANCE A UTOMATION T ECHNIQUES AND A PPLICATIONS* and the introduction:Efforts of combining dance and computational power can be traced back to 1967. Being a domain that needs relatively more innovation and creativity than mere following standard procedures, dance was the slowest to adopt technology. The earliest attempt was published by A. Michael Noll [2] in Dance Magazine in 1967, although New York based choreographer Merce Cunningham also did the same [3]."},{"role":"assistant","content":"Conclusion: Creativity is considered to be a gift and no two persons can be same in this process. A field of art like Dance is considered to be entirely a creative process although the basics are very clearly specified and taught in case of Classical Dance forms. Each choreographer is known for their particular style and creative form. Using a machine to aid in this creative process has been attempted by many well-known creative artists and dance is also a domain known to accept and start experimenting with this. We have reviewed at least hundred of such research papers and articles and categorized them accordingly."}]
Twenty-Two Years Since Revealing Cross-Site Scripting Attacks: A Systematic Mapping and a Comprehensive Survey
The use of web applications through the Internet has become an indispensable means for different business and governmental organizations to reduce costs, speed up activities, improve the quality of services, and reach as many targeted people as possible. Users also get immense benefits from online services. However, those gains are not without risks; web applications requiring users’ registration through input forms are preferred targets for different hacking attacks, putting their own and users’ confidential data at risk.
In this paper, we conducted a systematic mapping and a comprehensive survey studying the advancement of research to tackle XSS attacks. The study is not restricted to a period of time and covers high-quality studies published since the attack’s discovery. Several studies were found in the literature, but remarkable interest is observed only in the last few years. Despite the diversity of solutions proposed over the years, XSS attacks are still prevalent and are targeting new web applications and platforms. The study revealed much attention to XSS vulnerability detection rather than its repair. For securing web navigation, several defense lines should be provided. As a starting point, developers need to be aware of the consequences resulting from ignoring security practices; at the same time, effective tools enabling the automatic detection and repair of XSS vulnerabilities should be made available to facilitate their tasks. Dynamic defense techniques against XSS attacks should also be provided for protecting innocent users when new attacks occur; those techniques are still immature for providing the intended protection level against all types of XSS attacks [180]. Traditional filtering approaches become ineffective with newly developed web technologies, so more advanced techniques need to be explored. Despite the wide adoption of machine learning based techniques for the detection of XSS attacks, existing endeavors focus only on performance analysis, omitting three important problems relevant to cyber-security communities: (1) interpretability of prediction results, (2) robustness of models against adversarial attacks, and (3) suitability for integration in real-world architectures and platforms. The review also revealed a bias toward basic XSS attacks; this needs to be alleviated by advocating more research targeting other XSS attack variants, specifically WXSS and XAS, which target online social networks and may affect wider populations in a short period of time. Moreover, given the rapid development of web technologies, XSS attacks written in other scripting languages such as VBScript and ActiveX or embedded in advanced web languages such as HTML5 need to be properly studied for completeness and robustness.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*T WENTY - TWO YEARS SINCE REVEALING CROSS - SITE SCRIPTING ATTACKS : A SYSTEMATIC MAPPING AND A COMPREHENSIVE SURVEY* and the introduction:The use of web applications through the Internet has become an indispensable mean for different business and governmental organizations to reduce costs, speed up activities, improve the quality of services and reach as many targeted people as possible. Users also get immense benefits from online provided services. However, those gains are not without risks; web applications requiring users’ registrations through input forms are preferred targets for different hacking attacks, putting their own and users’ confidential data at risks."},{"role":"assistant","content":"Conclusion: In this paper, we conducted a systematic mapping and a comprehensive survey studying the advancement in research to tackle XSS attacks. The study is not restricted to a period of time and covered high quality studies published since its discovery. Several studies were found in the literature but a remarkable interest is only observed in the last few years. Despite the diversity of solutions proposed over the years, XSS attacks are still prevalent and targeting new web applications and platforms. The study revealed much attention to XSS vulnerability detection instead of its repair. For securing web navigation, several defense lines should be provided. As a staring point, developers need to be aware of the consequences resulted from ignoring security practices, at the same time, effective tools enabling the automatic detection and repair of XSS vulnerabilities should be made available to facilitate their tasks. Dynamic defense techniques against XSS attacks should also be provided for protecting innocent users when new attacks occur, those techniques are still immature for providing the intended protection level against all types of XSS attacks [ 180 ]. Traditional filtering approaches become ineffective regarding the new developed web technologies, more advanced techniques need to be explored. Although the wide adoption of machine learning based techniques for the detection of XSS attacks, existing endeavors only focus on performance analysis omitting three important problems related to cyber-security communities (1) interpret ability of prediction results, (2) robustness of models against adversarial attacks and (3) suitability for integration in real-world architectures and platforms. The review also denoted a bias toward basic XSS attacks; this needs to be alleviated by advocating more research targeting other XSS attack variants, specifically WXSS and XAS that are targeting online social networks that may affect wider populations in a short period of time. Moreover, regarding the rapid development of web technologies, XSS attacks written in other scripting languages such VBScript and ActiveX or embedded in advanced web languages such as HTML5 need to properly be studied for completeness and robustness."}]
A Deep Journey into Super-resolution: A Survey
‘Everything has been said before, but since nobody listens we have to keep going back and beginning all over again.’
Single-image super-resolution is a challenging research problem with important real-life applications. The phenomenal success of deep learning approaches has resulted in rapid growth in deep convolutional network based techniques for image super-resolution. A diverse set of approaches have been proposed with exciting innovations in network architectures and learning methodologies. This survey provides a comprehensive analysis of existing deep-learning based methods for super-resolution. We note that the super-resolution performance has been greatly enhanced in recent years with a corresponding increase in the network complexity. Remarkably, the state-of-the-art approaches still suffer from limitations that restrict their application to key real-world scenarios (e.g., inadequate metrics, high model complexity, inability to handle real-life degradations). We hope this survey will attract new efforts towards the solution of these crucial problems.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Deep Journey into Super-resolution: A Survey* and the introduction:‘Everything has been said before, but since nobody listens we have to keep going back and beginning all over again.’"},{"role":"assistant","content":"Conclusion: Single-image super-resolution is a challenging research problem with important real-life applications. The phenom- enal success of deep learning approaches has resulted in rapid growth in deep convolutional network based tech- niques for image super-resolution. A diverse set of ap- proaches have been proposed with exciting innovations in network architectures and learning methodologies. This survey provides a comprehensive analysis of existing deep- learning based methods for super-resolution. We note that the super-resolution performance has been greatly enhanced in recent years with a corresponding increase in the network complexity. Remarkably, the state-of-the-art approaches still suffer from limitations that restrict their application to key real-world scenarios ( e.g. , inadequate metrics, high model complexity, inability to handle real-life degradations). We hope this survey will attract new efforts towards the solu- tion of these crucial problems."}]
Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey
Driven by the need to protect the enormous value within data and the evolution of emerging data mining techniques, information leakage has become a growing concern for governments, organizations, and individuals [3]. Compromising the confidentiality of protected information is an information leakage incident and a prominent threat to cyber security [2]; for instance, the leakage of sensitive information results in both financial and reputational damages to organizations [19]. Thus, information leakage incidents are indeed an urgent threat that deserves public attention.
In this survey, ML-based stealing attacks against controlled information and the corresponding defense mechanisms are reviewed. A generalized MLBSA methodology compatible with the published work is outlined. Specifically, the MLBSA methodology uncovers how adversaries steal controlled information in five phases, i.e., reconnaissance, data collection, feature engineering, attacking the objective, and evaluation. Based on the different types of controlled information, the literature was reviewed in three categories: controlled user activities information, controlled ML model related information, and controlled authentication information. The attacker is assumed to use the system without any administrative privilege. This assumption implies that user activities information is stolen by leveraging kernel data and sensor data, both of which are beyond the protection of the application. The attack against controlled ML model-related information is demonstrated by stealing the model description and/or stealing the training data. Similarly, keystroke data, secret keys, and password data are examples of targets when stealing controlled authentication information.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Machine Learning Based Cyber Attacks Targeting on Controlled Information: A Survey* and the introduction:Driven by the needs to protect the enormous value within data and the evolution of the emerging data mining techniques, information leakage becomes a growing concern for governments, organizations and individuals [ 3 ]. Compromising the confidentiality of protected information is an information leakage incident and a prominent threat of cyber security [ 2 ], for instance, the leakage of sensitive information results in both financial and reputational damages to the organizations [ 19 ]. Thus, information leakage incidents are indeed an urgent threat that deserves the public attention."},{"role":"assistant","content":"Conclusion: In this survey, the ML-based stealing attack against the controlled information and the defense mechanisms are reviewed. The generalized MLBSA methodology compatible with the published work is outlined. Specifically, the MLBSA methodology uncovers how adversaries steal the controlled information in five phases, i.e. reconnaissance, data collection, feature engineering, attacking the objective, and evaluation. Based on different types of the controlled information, the literature was reviewed in three categories consisting of the controlled user activities information, the controlled ML model related information, and the controlled authentication information. The attacker is assumed to use the system without any administrative privilege. This assumption implies that user activities information was stolen by leveraging the kernel data and the sensor data both of which are beyond the protection of the application. The attack against the controlled ML model-related information is demonstrated with stealing the model description and/or stealing the training data. Similarly, keystroke data, secret keys, and password data are the examples of stealing the controlled authentication information."}]
Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey
Deep Learning is a subfield of Machine Learning (ML) that leverages the capabilities of artificial neural networks to automatically learn from data. These networks are fully-differentiable computational graphs optimized via gradient descent to learn representations from raw data [ 12 ], currently being the most efficient and effective data-oriented strategy to perform several Computer Vision (CV) and Natural Language Processing (NLP) tasks.
In this survey paper, we have investigated debiasing methods targeting fairness-aware neural networks for language and vision research. We have contextualized fairness and its relationship with biases and their possible origins. We have presented the main metrics and evaluation measures for assessing the level of fairness provided by models for computer vision and natural language processing tasks, reviewing both application-specific and general-purpose measures, their proper use, applicability, and known limitations. Then, we have discussed, in depth, several debiasing methods for neural models under the perspective of a new taxonomy for the area, which is yet another contribution of this paper. We concluded with our thoughts on the most pressing fairness challenges in neural networks, calling attention for potential trends and future research directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Debiasing Methods for Fairer Neural Models in Vision and Language Research: A Survey* and the introduction:Deep Learning is a subfield of Machine Learning (ML) that leverages the capabilities of artificial neural networks to automatically learn from data. These networks are fully-differentiable computational graphs optimized via gradient descent to learn representations from raw data [ 12 ], currently being the most efficient and effective data-oriented strategy to perform several Computer Vision (CV) and Natural Language Processing (NLP) tasks."},{"role":"assistant","content":"Conclusion: In this survey paper, we have investigated debiasing methods targeting fairness-aware neural networks for language and vision research. We have contextualized fairness and its relationship with biases and their possible origins. We have presented the main metrics and evaluation measures for assessing the level of fairness provided by models for computer vision and natural language processing tasks, reviewing both application-specific and general-purpose measures, their proper use, applicability, and known limitations. Then, we have discussed, in depth, several debiasing methods for neural models under the perspective of a new taxonomy for the area, which is yet another contribution of this paper. We concluded with our thoughts on the most pressing fairness challenges in neural networks, calling attention for potential trends and future research directions."}]
Deep Neural Networks for COVID-19 Detection and Diagnosis Using Images and Acoustic-based Techniques: A Recent Review
The novel severe acute respiratory syndrome-related coronavirus (SARS-CoV-2) emerged in Wuhan, China in December 2019 and spread to countries worldwide. This virus causes pneumonia of unknown etiology, and the disease is named COVID-19. This infectious disease was classified as a public health emergency of international concern on January 30, 2020, because of its high infectivity and mortality. The lack of successful diagnosis or preventive measures has led to a rise in the number of cases and an increase in the cost of hospitalizations and palliative treatments. Therefore, scientists and medical industries around the world were incited to find a prompt and accurate detection of COVID-19 for early prevention, screening, forecasting, drug development, and contact tracing, in order to save more time for the scientific community and healthcare experts to pass to the next diagnosis stage and reduce the death rate. Reverse transcription polymerase chain reaction (RT-PCR) is recommended to diagnose COVID-19. Additionally, there are studies in the literature using various imaging methods (computed tomography (CT) and X-ray) [ 1 , 2 , 3 , 4 , 5 ]. Situations may occur that negatively affect these methods. The changes of viruses through the appearance of new mutations make classification a more challenging task [ 6 ]. Moreover, one of the biggest problems with COVID-19 patients is viral pneumonia (VP). Differentiating between viral and non-viral pneumonia (nVP) is not easy, and the coexistence of COVID-19 and viral pneumonia can have dire consequences.
Although the RT-PCR test is considered the gold standard for COVID-19 diagnosis, it is time-consuming and suffers from high false-negative levels in its results. Therefore, medical imaging modalities such as chest X-ray and chest CT scans are the best alternative according to scientists. Chest X-ray radiography is of low cost and low radiation dose, and it is available and easy to use in general or community hospitals. This review presents a detailed study of the existing solutions, mainly based on DL techniques, for early diagnosis of COVID-19. This study gives more of an insight into the scientists’ and decision-makers’ thought processes - not only during the wave periods but also during that of the vaccination, which could require real-time mass testing. The lack of data, however, is the main obstacle to achieving efficient and real-time results. Many solutions have been presented and discussed in this review study to give more ideas for future trends and also for eventual future diseases that might suffer from the missing-data problem. We believe that with more public databases, better DL-based approaches can be developed to detect and diagnose COVID-19 accurately. Also, while policy-makers and citizens are doing their best to comply with the difficult constraints of lockdown and social distancing, AI can be used to create more intelligent robots and autonomous machines to help the health workforce and to reduce their workload through disinfection, working in hospitals, distributing food, and helping patients. The challenge of this solution is that people lack confidence in autonomous machines and prefer to be served by a human even if there is a risk of virus transmission. Moreover, entrusting chatbots to diagnose patients requires a large amount of medical data from experts. Also, the difference in languages from one country to another makes an already difficult task still more arduous. On the other hand, when dealing with voice analysis, there are still many challenges to be taken up. For example, until now, annotated data of patients’ voices are not publicly available for research purposes of COVID-19 detection and diagnosis. Collecting these data is mostly done in unconstrained environments (i.e. in-the-wild) using smartphones or other voice recorders. These environments are generally noisy and contain reverberation, which leads to bad data quality and makes the diagnosis and detection of COVID-19 more challenging. Finally, one of the most important future trends is to concentrate on further decreasing the false negative rate and, as far as practicable, reducing the false positive rate by the same token to accurately differentiate viral pneumonia from BP.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*D EEP N EURAL N ETWORKS FOR COVID-19 D ETECTION AND D IAGNOSIS USING I MAGES AND A COUSTIC - BASED T ECHNIQUES : A R ECENT R EVIEW* and the introduction:The novel severe acute respiratory syndrome-related coronavirus (SARS-CoV-2) started from Wuhan, China in December 2019 and spread to all the countries worldwide. This virus caused pneumonia of unknown cytology and is named COVID-19. This infectious disease has been classified as a public health crisis of the international community concern on January 30, 2020, because of its high infectivity and mortality. The lack of successful diagnosis or preventive measures has led to a rise in the number of cases, an increase in the cost of hospitalizations and palliative treatments. Therefore, scientists and medical industries around the world incited to find a prompt and accurate detection of COVID-19 for early prevention, screening, forecasting, drug development, and contact tracing to save more time for the scientific community and healthcare expert to pass to the next diagnosis stage to reduce the death rate reverse transcription polymerase chain reaction (RT-PCR) is recommended to diagnose COVID-19. Additionally, there are studies in the literature using various imaging methods (computed tomography (CT) and X-ray). [ 1 , 2 , 3 , 4 , 5 ]. It may occur in situations that negatively affect these methods. The changes of viruses by the appearance of new mutations make the classifications a more challenging task [ 6 ]. Moreover, one of the biggest problems with COVID-19 patients is viral pneumonia (VP). Differentiating between viral and non-viral pneumonia (nVP) is not easy. Coexistence of COVID-19 and viral pneumonia can have dire consequences."},{"role":"assistant","content":"Conclusion: Although the RT-PCR test is considered the gold standard for COVID-19 diagnosis, it is time-consuming to make a decision because of high false-negative levels in the results. Therefore, medical imaging modalities such as chest X-ray and chest CT scans are the best alternative according to scientists. Chest X-ray radiography is of low cost and low radiation dose, it is available and easy to use in general or community hospitals. This review presents a detailed study of the existing solutions that are mainly based on DL techniques to early diagnose the COVID-19. This study gives more of an insight into the scientists’ and decision-makers’ thought processes - not only during the wave periods but also during that of the vaccination that could require real-time mass testing. The lack of data, however, is the mandatory problem to achieve efficient and real-time results. Many solutions have been presented and discussed in this review study to give more ideas to future trends and also for eventual future diseases that might suffer from the missing-data problem. We believe that with more public databases, better DL based-approaches can be developed to detect and diagnose the COVID19 accurately. Also, when policy-makers and citizens are making their best to submit to the difficult constraints of lockdown and social distancing, AI can be used to create more intelligent robots and autonomous machines to help health workforce and to reduce their workload by disinfection, working in hospitals, food distributing and helping the patients. 
The challenge of this solution is that people lack confidence in autonomous machines and prefer to be served by a human even if there is a risk of virus transmission. Moreover, entrusting chatbots to diagnose patients requires a large amount of medical data from experts. Also, the difference in languages from one country to another makes an already difficult task still more arduous. On the other hand, when dealing with voice analysis, there are still many challenges to be taken up. For example, until now, annotated data of patients’ voices are not publicly available for research purposes of COVID-19 detection and diagnosis. Collecting these data is mostly done in unconstrained environments (i.e. in-the-wild) using smartphones or other voice recorders. These environments are generally noisy and contain reverberation, which leads to bad data quality and makes the diagnosis and detection of COVID-19 more challenging. Finally, one of the most important future trends is to concentrate on further decreasing the false negative rate and, as far as practicable, reducing the false positive rate by the same token to accurately differentiate viral pneumonia from BP."}]
Anaphora and Coreference Resolution: A Review
A discourse is a collocated group of sentences which convey a clear understanding only when read together. The etymology of anaphora is ana (Greek for back) and pheri (Greek for to bear), which in simple terms means repetition. In computational linguistics, anaphora is typically defined as references to items mentioned earlier in the discourse or “pointing back” reference as described by ( Mitkov , 1999 ). The most prevalent type of anaphora in natural language is the pronominal anaphora ( Lappin and Leass , 1994 ). Coreference, as the term suggests refers to words or phrases referring to a single unique entity in the world. Anaphoric and co-referent entities themselves form a subset of the broader term “discourse parsing” ( Soricut and Marcu , 2003 ), which is crucial for full text understanding.
Our survey presents an exhaustive overview of the entity resolution field, which forms a core component of natural language processing research. In this survey, we put forth a detailed account of the types of references and the important constraints for entity resolution with the aim of establishing the broad scope of the task. We also clarify the boundaries between the tasks of coreference resolution and anaphora resolution for more focussed research progress in the future. In addition, we also attempt to compare the predominantly used evaluation metrics. We observe that though there are multiple datasets available, the state-of-the-art methods have not been evaluated on them. In the spirit of encouraging more exhaustive evaluations, we also provide an account of the datasets released for the task.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Anaphora and Coreference Resolution: A Review* and the introduction:A discourse is a collocated group of sentences which convey a clear understanding only when read together. The etymology of anaphora is ana (Greek for back) and pheri (Greek for to bear), which in simple terms means repetition. In computational linguistics, anaphora is typically defined as references to items mentioned earlier in the discourse or “pointing back” reference as described by ( Mitkov , 1999 ). The most prevalent type of anaphora in natural language is the pronominal anaphora ( Lappin and Leass , 1994 ). Coreference, as the term suggests refers to words or phrases referring to a single unique entity in the world. Anaphoric and co-referent entities themselves form a subset of the broader term “discourse parsing” ( Soricut and Marcu , 2003 ), which is crucial for full text understanding."},{"role":"assistant","content":"Conclusion: Our survey presents an exhaustive overview of the entity resolution field, which forms a core component of natural language processing research. In this survey, we put forth a detailed account of the types of references and the important constraints for entity resolution with the aim of establishing the bread scope of the task. We also clarify the boundaries between the tasks of coreference resolution and anaphora resolution for more focussed research progress in the future. In addition, we also attempt to compare the predominantly used evaluation metrics. We observe that though there are multiple datasets available, the state-of-the-art methods have not been evaluated on them. With the spirit of encouraging more exhaustive evaluations, we also provide an account on the datasets released for the task."}]
Mining the online infosphere: A survey
Online infosphere 1 is the term corresponding to the Internet becoming a virtual parallel world formed from billions of networks of artificial life at different scales ranging from tiny pieces of software to massive AI tools running a factory or driving a car. The motivations for this are diverse, seeking to both help mankind and harm it.
In this survey we have presented a critical rundown on the evolution of the online infosphere by depicting some of the research areas that are becoming crucial at the present time. We started our discussion with a view of the infosphere as a collaborative platform, with a dedicated focus on Wikipedia. Wikipedia, freely available and one of the largest knowledge bases, containing a wide variety of information, has been a primary focus of extensive research so far. In this survey we have presented a detailed account of the works on article quality monitoring, editor behaviour and retention, and malicious activities like vandalism.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Mining the online infosphere: A survey* and the introduction:Online infosphere 1 is the term corresponding to the Internet becoming a virtual parallel world formed from billions of networks of artificial life at different scales ranging from tiny pieces of software to massive AI tools running a factory or driving a car. The motivations for this are diverse, seeking to both help mankind and harm it."},{"role":"assistant","content":"Conclusion: In this survey we have presented a critical rundown on the evolution of the online infosphere by depicting some of the research areas that are becoming very crucial at current times. We started our discussion with a view of the infosphere as a collaborative platform, with a dedicated focus on Wikipedia. Wikipedia, the freely available and one of the largest knowledge base, containing a wide variety of information has been a primary focus of an extensive research so far. In this survey we have presented a detailed account of the works on article quality monitoring, editor behaviour and their retention and malicious activities like vandalism."}]
Generalizing to Unseen Domains: A Survey on Domain Generalization
Machine learning (ML) has achieved remarkable success in various areas, such as computer vision, natural language processing, and healthcare. The goal of ML is to design a model that can learn general and predictive knowledge from training data, and then apply the model to new (test) data. Traditional ML models are trained based on the i.i.d. assumption that training and testing data are identically and independently distributed. However, this assumption does not always hold in reality. When the probability distributions of training data and testing data are different, the performance of ML models often deteriorates due to domain distribution gaps. Collecting data from all possible domains to train ML models is expensive and often simply impossible. Therefore, enhancing the generalization ability of ML models is important in both industry and academia.
Generalization has always been an important topic in machine learning research. In this article, we review the area of domain generalization by providing an in-depth analysis of existing methods, datasets and applications. Then, we thoroughly analyze the methods. Based on our analysis, we identify several potential research challenges that could serve as directions for future research. We hope that this survey can provide useful insights to interested researchers and inspire more progress in the future.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Generalizing to Unseen Domains: A Survey on Domain Generalization* and the introduction:M ACHINE learning (ML) has achieved remarkable suc- cess in various areas, such as computer vision, natural language processing, and healthcare. The goal of ML is to design a model that can learn general and predictive knowledge from training data, and then apply the model to new (test) data. Traditional ML models are trained based on the i.i.d. assumption that training and testing data are identically and independently distributed. However, this assumption does not always hold in reality. When the probability distributions of training data and testing data are different, the performance of ML models often get deteriorated due to domain distribution gaps. Collecting the data of all possible domains to train ML models is expensive and even prohibitively impossible. Therefore, enhancing the generalization ability of ML models is important in both industry and academic fields."},{"role":"assistant","content":"Conclusion: Generalization has always been an important research topic in machine learning research. In this article, we review the domain generalization areas by providing in-depth analysis of existing methods, datasets and applications. Then, we thoroughly analyze the methods. Based on our analysis, we provide several potential research challenges that could be the directions of future research. We hope that this survey can provide useful insights to researchers of interest and inspire more progress in the future."}]
How to Certify Machine Learning Based Safety-critical Systems?
Machine Learning (ML) is drastically changing the way we interact with the world. We are now using software applications powered by ML in critical as- pects of our daily lives; from finance, energy, to health and transportation. Thanks to frequent innovations in domains like Deep Learning (DL) and Re- inforcement Learning (RL), the adoption of ML is expected to keep rising and the economic benefits of systems powered by ML is forecast to reach 30.6 Billions \$ by 2024 1 . However, the integration of ML in systems is not with- out risks, especially in safety-critical systems such as avionic or automotive, where any mistake can lead to catastrophic events 2 . Therefore, before applying and deploying any machine learning based components into a safety-critical system, these components need to be certified.
This paper provides a comprehensive overview of certification challenges for ML-based safety-critical systems. We conducted a systematic review of the literature pertaining to Robustness, Uncertainty, Explainability, Verification, Safe Reinforcement Learning and Direct Certification. We identified gaps in this literature and discussed current limitations and future research opportunities. With this paper, we hope to provide the research community with a full view of certification challenges and stimulate more collaborations between academia and industry.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*How to Certify Machine Learning Based Safety-critical Systems?* and the introduction:Machine Learning (ML) is drastically changing the way we interact with the world. We are now using software applications powered by ML in critical as- pects of our daily lives; from finance, energy, to health and transportation. Thanks to frequent innovations in domains like Deep Learning (DL) and Re- inforcement Learning (RL), the adoption of ML is expected to keep rising and the economic benefits of systems powered by ML is forecast to reach 30.6 Billions \$ by 2024 1 . However, the integration of ML in systems is not with- out risks, especially in safety-critical systems such as avionic or automotive, where any mistake can lead to catastrophic events 2 . Therefore, before applying and deploying any machine learning based components into a safety-critical system, these components need to be certified."},{"role":"assistant","content":"Conclusion: This paper provides a comprehensive overview of certification challenges for ML based safety-critical systems. We conducted a systematic review of the literature pertaining to Robustness , Uncertainty , Explainability , Verification , Safe Reinforcement Learning and Direct Certification . We identified gaps in this literature and discussed about current limitations and future research opportunities. With this paper, we hope to provide the research community with a full view of certification challenges and stimulate more collaborations between academia and industry."}]
Recommender Systems for the Internet of Things: A Survey
Recent advances in identification technologies, such as wireless sensor networks, RFID, and nan- otechnology, have empowered a multitude of phys- ical things with lifted computing ability [ 100 ]. Ev- eryday things have become recognizable, address- able, and controllable over the Internet. The poten- tial for seamlessly integrating the physical and cy- berspace has created enormous business opportuni- ties. However, finding an effective mechanism for searching and recommending things remains a sig- nificant challenge. Recommender systems present a critical stage in promoting and investigating the advantages of IoT. They generally include proce- dures that facilitate consumer choices based on their preferences. Given the huge amount of infor- mation that is available through IoT, users are likely to receive numerous recommendations for services or products. Knowledge of user preferences is es- sential for the building of any recommender system. The most important feature of such a recommender system for IoT is its ability to exploit knowledge of human behaviour and other IoT data in order to produce accurate recommendations. With bil- lions of IoT resources connected to and accessible on the Internet, a key question is: How can IoT data be effectively exploited as a source to build recommendation systems?
RSIoTs have become a crucial tool that can sup- port users in various decision-making activities. In this article, we have provided an extensive re- view of the most notable works to date on RSIoTs. We have proposed a classification framework of three categories for the organizing and dividing of existing publications and then highlighted each category in more detail. We have also discussed the limitations of using IoT for recommendation tasks, as well as the most important requirements for the RSIoT. We have provided a detailed account of the techniques of each system and its applica- tions. In addition, we have detailed some of the most common recommendations that are provided by building an RSIoT. Both IoT and RSs have been ongoing hot research topics in recent years. While existing studies have established a solid foundation for research into RSIoT, our analysis has generated several promising directions for future research:
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Recommender Systems for the Internet of Things: A Survey* and the introduction:Recent advances in identification technologies, such as wireless sensor networks, RFID, and nan- otechnology, have empowered a multitude of phys- ical things with lifted computing ability [ 100 ]. Ev- eryday things have become recognizable, address- able, and controllable over the Internet. The poten- tial for seamlessly integrating the physical and cy- berspace has created enormous business opportuni- ties. However, finding an effective mechanism for searching and recommending things remains a sig- nificant challenge. Recommender systems present a critical stage in promoting and investigating the advantages of IoT. They generally include proce- dures that facilitate consumer choices based on their preferences. Given the huge amount of infor- mation that is available through IoT, users are likely to receive numerous recommendations for services or products. Knowledge of user preferences is es- sential for the building of any recommender system. The most important feature of such a recommender system for IoT is its ability to exploit knowledge of human behaviour and other IoT data in order to produce accurate recommendations. With bil- lions of IoT resources connected to and accessible on the Internet, a key question is: How can IoT data be effectively exploited as a source to build recommendation systems?"},{"role":"assistant","content":"Conclusion: RSIoTs have become a crucial tool that can sup- port users in various decision-making activities. In this article, we have provided an extensive re- view of the most notable works to date on RSIoTs. We have proposed a classification framework of three categories for the organizing and dividing of existing publications and then highlighted each category in more detail. We have also discussed the limitations of using IoT for recommendation tasks, as well as the most important requirements for the RSIoT. We have provided a detailed account of the techniques of each system and its applica- tions. In addition, we have detailed some of the most common recommendations that are provided by building an RSIoT. Both IoT and RSs have been ongoing hot research topics in recent years. While existing studies have established a solid foundation for research into RSIoT, our analysis has generated several promising directions for future research:"}]
Generative Adversarial Networks: A Survey Towards Private and Secure Applications
The technological breakthrough brought by Generative Adversarial Networks (GAN) has rapidly produced a revolutionary impact on machine learning and its related fields, and this impact has already flourished in various research areas and applications. As a powerful generative framework, GAN has significantly promoted many applications with complex tasks, such as image generation, super-resolution, text data manipulations, etc. Most recently, exploiting GAN to work out elegant solutions to severe privacy and security problems has become increasingly popular in both academia and industry due to its game theoretic optimization strategy. This survey aims to provide a comprehensive review and an in-depth summary of the state-of-the-art technologies and
This survey intensively reviews the state-of-the-art approaches using GAN for privacy and security in a broad spectrum of applications, including image generation, video event detection, records publishing, distributed learning, malware detection, fraud detection, and so on. For the different purposes of attack and defense, these existing approaches establish problem formulation based on the variants of GAN framework, taking into account attack success rate, classification/prediction accuracy, data utility, and other performance metrics. After a thorough analysis, the unsolved challenges and promising research directions are provided for further discussion from perspectives of application scenario, model design, and data utilization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Generative Adversarial Networks: A Survey Towards Private and Secure Applications* and the introduction:The technological breakthrough brought by Generative Adversarial Networks (GAN) has rapidly produced a revolutionary impact on machine learning and its related fields, and this impact has already flourished to various of research areas and applications. As a powerful generative framework, GAN has significantly promoted many applications with complex tasks, such as image generation, super-resolution, text data manipulations, etc . Most recently, exploiting GAN to work out elegant solutions to severe privacy and security problems becomes increasingly popular in both academia and industry due to its game theoretic optimization strategy. This survey aims to provide a comprehensive review and an in-depth summary of the state-of-the-art technologies and"},{"role":"assistant","content":"Conclusion: This survey intensively reviews the state-of-the-art approaches using GAN for privacy and security in a broad spectrum of applications, including image generation, video event detection, records publishing, distributed learning, malware detection, fraud detection, and so on. For the different purposes of attack and defense, these existing approaches establish problem formulation based on the variants of GAN framework, taking into account attack success rate, classification/prediction accuracy, data utility, and other performance metrics. After a thorough analysis, the unsolved challenges and promising research directions are provided for further discussion from perspectives of application scenario, model design, and data utilization."}]
Multimodal Machine Learning: A Survey and Taxonomy
The world surrounding us involves multiple modalities — we see objects, hear sounds, feel texture, smell odors, and so on. In general terms, a modality refers to the way in which something happens or is experienced. Most people associate the word modality with the sensory modalities which represent our primary channels of communication and sensation, such as vision or touch. A research problem or dataset is therefore characterized as multimodal when it includes multiple such modalities. In this paper we focus primarily, but not exclusively, on three modalities: natural language which can be both written or spoken; visual signals which are often represented with images or videos; and vocal signals which encode sounds and para-verbal information such as prosody and vocal expressions.
As part of this survey, we introduced a taxonomy of multimodal machine learning: representation, translation, fusion, alignment, and co-learning. Some of them, such as fusion, have been studied for a long time, but more recent interest in representation and translation has led to a large number of new multimodal algorithms and exciting multimodal applications.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Multimodal Machine Learning: A Survey and Taxonomy* and the introduction:T HE world surrounding us involves multiple modalities — we see objects, hear sounds, feel texture, smell odors, and so on. In general terms, a modality refers to the way in which something happens or is experienced. Most people associate the word modality with the sensory modalities which represent our primary channels of communication and sensation, such as vision or touch. A research problem or dataset is therefore characterized as multimodal when it includes multiple such modalities. In this paper we focus primarily, but not exclusively, on three modalities: natural language which can be both written or spoken; visual signals which are often represented with images or videos; and vocal signals which encode sounds and para-verbal information such as prosody and vocal expressions."},{"role":"assistant","content":"Conclusion: As part of this survey, we introduced a taxonomy of multi- modal machine learning: representation, translation, fusion, alignment, and co-learning. Some of them such as fusion have been studied for a long time, but more recent interest in representation and translation have led to a large number of new multimodal algorithms and exciting multimodal applications."}]
A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics
Facial expression research has a long history and accelerated through the 1970s. The modern theory on basic emotions by Ekman et al. [1], [2], [3] has generated more research than any other in the psychology of emotion [4]. They outline 7 universal facial expressions: happy, sad, anger, fear, surprise, disgust and contempt, as the universality of emotion. When an emotional episode is triggered, there is an impulse which may induce one or more of these expressions of emotion.
We have presented a comprehensive review on datasets, features and metrics for micro-expressions analysis. The ultimate goal of this paper is to provide new insights and recommendations for advancing micro-expression analysis research. We have provided good guidelines for beginners and detailed challenges and recommendations for those who are already working in this area. In addition, we contribute to the research by addressing the effect of resolutions on different feature types and introducing new emotional classes for SAMM.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review on Facial Micro-Expressions Analysis: Datasets, Features and Metrics* and the introduction:F ACIAL expression research has a long history and accel- erated through the 1970s. The modern theory on basic emotions by Ekman et al [1], [2], [3] has generated more re- search than any other in the psychology of emotion [4]. They outline 7 universal facial expressions: happy, sad, anger, fear, surprise, disgust and contempt, as the universality of emotion. When an emotional episode is triggered, there is an impulse which may induce one or more of these expressions of emotion."},{"role":"assistant","content":"Conclusion: We have presented a comprehensive review on datasets, features and metrics for micro-expressions analysis. The ultimate goal of this paper is to provide new insights and recommendations to advancing the micro-expression analysis research. We have provided a good guidelines for beginners and a detailed challenges and recommendations for those who are already working in this area. In addition, we contribute to the research by addressing the effect of resolutions on different feature types and introducing the new emotional classes for SAMM."}]
A Survey among Network Operators on BGP Prefix Hijacking
BGP prefix hijacking 101. Autonomous Systems (ASes) use the Border Gateway Protocol (BGP) [15] to advertise address space (as IPv4/IPv6 network prefixes) and establish inter-domain routes in the Internet. BGP is a distributed pro- tocol, lacking authentication of advertised routes. As a re- sult, an AS is able to advertise illegitimate routes for IP pre- fixes it does not own. These advertisements propagate and “pollute” many ASes, or even the entire Internet, affecting service availability, integrity, and confidentiality of commu- nications. This phenomenon, called BGP prefix hijacking , is frequently observed [25], and can be caused by router mis- configurations [1,2] or malicious attacks [3,22,25].
In this work, to increase community understanding of ex- isting BGP hijacking defenses and the needs of network op- erators, we presented the results of a survey of 75 network operators around the world.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey among Network Operators on BGP Prefix Hijacking* and the introduction:BGP prefix hijacking 101. Autonomous Systems (ASes) use the Border Gateway Protocol (BGP) [15] to advertise address space (as IPv4/IPv6 network prefixes) and establish inter-domain routes in the Internet. BGP is a distributed pro- tocol, lacking authentication of advertised routes. As a re- sult, an AS is able to advertise illegitimate routes for IP pre- fixes it does not own. These advertisements propagate and “pollute” many ASes, or even the entire Internet, affecting service availability, integrity, and confidentiality of commu- nications. This phenomenon, called BGP prefix hijacking , is frequently observed [25], and can be caused by router mis- configurations [1,2] or malicious attacks [3,22,25]."},{"role":"assistant","content":"Conclusion: In this work, to increase community understanding of ex- isting BGP hijacking defenses and the needs of network op- erators, we presented the results of a survey of 75 network operators around the world."}]
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Deep Learning [1] is providing major breakthroughs in solving the problems that have withstood many attempts of the machine learning and artificial intelligence community in the past. As a result, it is currently being used to decipher hard scientific problems at an unprecedented scale, e.g. in reconstruction of brain circuits [2]; analysis of mutations in DNA [3]; prediction of structure-activity of potential drug molecules [4], and analyzing the particle accelerator data [5] [6]. Deep neural networks have also become the preferred choice to solve many challenging tasks in speech recognition [7] and natural language understanding [8].
Deep learning is at the heart of the current rise of machine learning and artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently led to a large influx of contributions in this direction. This article presents the first comprehensive survey on adversarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey* and the introduction:D EEP L EARNING [1] is providing major breakthroughs in solving the problems that have withstood many attempts of machine learning and artificial intelligence com- munity in the past. As a result, it is currently being used to decipher hard scientific problems at an unprecedented scale, e.g. in reconstruction of brain circuits [2]; analysis of muta- tions in DNA [3]; prediction of structure-activity of potential drug molecules [4], and analyzing the particle accelerator data [5] [6]. Deep neural networks have also become the preferred choice to solve many challenging tasks in speech recognition [7] and natural language understanding [8]."},{"role":"assistant","content":"Conclusion: Deep learning is at the heart of the current rise of ma- chine learning and artificial intelligence. In the field of Computer Vision, it has become the work horse for appli- cations ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For images, such perturbations are often too small to be perceptible, yet they completely fool the deep learning models. Adversarial attacks pose a serious threat to the success of deep learning in practice. This fact has recently lead to a large influx of contributions in this direction. This article presents the first comprehensive survey on adver- sarial attacks on deep learning in Computer Vision. We review the works that design adversarial attacks, analyze the existence of such attacks and propose defenses against them. To emphasize that adversarial attacks are possible in practical conditions, we separately review the contributions that evaluate adversarial attacks in the real-world scenarios. Finally, drawing on the reviewed literature, we provide a broader outlook of this research direction."}]
A Review of Anonymization for Healthcare Data
With the increasing adoption of healthcare information technology (HIT) by medical institutions, the generation and capture of healthcare-related data have been increasing rapidly in the past years. The application of artificial intelligence (AI) techniques already gives a glimpse of potential improvements ranging from lung cancer nodule detection in CT scans to disease prediction and treatment [61, 80, 97]. The challenge though is that these AI models are usually data hungry and require large amounts of data for training. Health care data, on the other hand, contains highly sensitive patient information and cannot be easily shared. The reluctance behind releasing data query/analysis tools built on health care data can be further justified by the fundamental law of information recovery [23] which states that when a data source is queried multiple times and it returns overly accurate information for each query, the underlying data can be reconstructed partially or in full. Therefore, health data need to be protected against such leakage to ensure patient’s privacy.
In this paper, we provided a comprehensive review of anonymization models and techniques applicable to relational and graph-based health care data. In addition, we studied possible attacks on anonymized data and empirically demonstrated a reconstruction attack on MIMIC-III data. Finally, we discussed existing defense mechanisms while giving an overview of existing anonymization tools. We believe that our comprehensive review covering different perspectives on anonymization will assist researchers and practitioners in selecting relevant anonymization techniques based on the data type, desired privacy level, information loss, and possible adversarial behavior.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Review of Anonymization for Healthcare Data Iyiola E. Olatunji ∗ , Jens Rauch 2 , Matthias Katzensteiner 3 , and Megha Khosla 1* and the introduction:With the increasing adoption of healthcare information technology (HIT) by medical institutions, the generation and capture of healthcare-related data have been increasing rapidly in the past years. The application of artificial intelligence (AI) techniques already gives a glimpse of potential improvements ranging from lung cancer nodules detection in CT scans to disease prediction and treatment [61, 80, 97]. The challenge though is that these AI models are usually data hungry and require large amounts of data for training. Health care data, on the other hand, contains highly sensitive patient information and cannot be easily shared. The reluctance behind releasing data query/analysis tools build on health care data can be further justified by the fundamental law of information recovery [23] which states that when a data source is queried multiple times and it returns overly accurate information for each query, the underlying data can be reconstructed partially or in full. Therefore, health data need to be protected against such leakage to ensure patient’s privacy."},{"role":"assistant","content":"Conclusion: In this paper, we provided a comprehensive review of anonymization models and techniques applicable for relational and graph-based health care data. Besides, we studied possible attacks on anonymized data and empirically demonstrated reconstruction attack on MIMIC-III data. Finally we discussed existing defense mechanisms while giving an overview of existing anonymization tools. We believe that our comprehensive review covering different perspectives on anonymization will assist researchers and practitioners in selecting relevant an ony miz ation techniques based on the data type, desired privacy level, information loss, and possible adversarial behavior."}]
A Survey and Implementation of Performance Metrics for Self-Organized Maps
Self-Organizing Maps (SOM) [ 1 ] have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry [ 2 , 3 ] and humanities [ 4 ] as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can trust the resulting mapping, and perform model selection to select algorithm parameters (e.g. the map size, learning rate and number of iterations). Concretely, two questions need to be answered:
In this paper, we reviewed various internal and external performance metrics for SOM and introduced the SOMperf Python module, enabling practitioners to easily evaluate their models. Future work perspectives include the computation of per-unit metrics, a SOM visualization module, as well as distance functions between self-organized models. In addition, other more recent SOM quality metrics could be implemented, such as the map embedding accuracy [23].
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A S URVEY AND I MPLEMENTATION OF P ERFORMANCE M ETRICS FOR S ELF -O RGANIZED M APS* and the introduction:Self-Organizing Maps (SOM) [ 1 ] have been used for almost 40 years across various application domains such as biology, geology, healthcare, industry [ 2 , 3 ] and humanities [ 4 ] as an interpretable tool to explore, cluster and visualize high-dimensional data sets. In every application, practitioners need to know whether they can trust the resulting mapping, and perform model selection to select algorithm parameters (e.g. the map size, learning rate and number of iterations). Concretely, two questions need to be answered:"},{"role":"assistant","content":"Conclusion: In this paper, we reviewed various internal and external performance metrics for SOM and introduced the SOMperf Python module, enabling practitioners to easily evaluate their models. Future work perspectives include the computation of per-unit metrics, a SOM visualization module, as well as distance functions between self-organized models. In addition, other more recent SOM quality metrics could be implemented, such as the map embedding accuracy [23]."}]
Meta-Learning in Neural Networks: A Survey
Contemporary machine learning models are typically trained from scratch for a specific task using a fixed learn- ing algorithm designed by hand. Deep learning-based ap- proaches have seen great successes in a variety of fields [1]–[3]. However there are clear limitations [4]. For example, successes have largely been in areas where vast quantities of data can be collected or simulated, and where huge compute resources are available. This excludes many applications where data is intrinsically rare or expensive [5], or compute resources are unavailable [6], [7].
The field of meta-learning has recently seen a rapid growth in interest. This has come with some level of confusion, with regards to how it relates to neighbouring fields, what it can be applied to, and how it can be benchmarked. In this survey we have sought to clarify these issues by thoroughly surveying the area both from a methodological point of view – which we broke down into a taxonomy of meta-representation, meta-optimizer and meta-objective; and from an application point of view. We hope that this survey will help newcomers and practitioners to orient themselves in this growing field, as well as highlight op- portunities for future research.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Meta-Learning in Neural Networks: A Survey* and the introduction:Contemporary machine learning models are typically trained from scratch for a specific task using a fixed learn- ing algorithm designed by hand. Deep learning-based ap- proaches have seen great successes in a variety of fields [1]–[3]. However there are clear limitations [4]. For example, successes have largely been in areas where vast quantities of data can be collected or simulated, and where huge compute resources are available. This excludes many applications where data is intrinsically rare or expensive [5], or compute resources are unavailable [6], [7]."},{"role":"assistant","content":"Conclusion: The field of meta-learning has recently seen a rapid growth in interest. This has come with some level of confusion, with regards to how it relates to neighbouring fields, what it can be applied to, and how it can be benchmarked. In this survey we have sought to clarify these issues by thoroughly surveying the area both from a methodological point of view – which we broke down into a taxonomy of meta-representation, meta-optimizer and meta-objective; and from an application point of view. We hope that this survey will help newcomers and practitioners to orient themselves in this growing field, as well as highlight op- portunities for future research."}]
An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods
Relation extraction (RE), a subtask of information extraction, is a foundational task in natural lan- guage processing (NLP). The RE task is to deter- mine a relationship between two distinct entities from text, producing fact triples in the form [ head , relation , tail ] or, as referred to in some works, [ sub- ject , predicate , object ]. For example, after reading the Wikipedia page on Noam Chomsky, we learn that Noam was born in Philadelphia, Pennsylvania, which corresponds to the fact triple [ Noam Chom- sky , born in , Philadelphia ]. Fact triples are foun- dational to human knowledge and play a key role in many downstream NLP tasks such as question- answering, search queries, and knowledge-graph completion ( Xu et al. , 2016 ; Lin et al. , 2015 ; Li et al. , 2014 ).
In this survey, we reviewed the relation extraction task with a focus on distant supervision. We traced the history of RE methods by discussing exemplary works and highlighting shortcomings to contextual- ize progress. We noted the key differences between pattern-based methods, statistical-based methods, neural-based methods, and finally, large language model-based methods. We conducted an overview of popular RE datasets and discussed some limita- tions of both corpus-based and instance-based RE evaluation.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods* and the introduction:Relation extraction (RE), a subtask of information extraction, is a foundational task in natural lan- guage processing (NLP). The RE task is to deter- mine a relationship between two distinct entities from text, producing fact triples in the form [ head , relation , tail ] or, as referred to in some works, [ sub- ject , predicate , object ]. For example, after reading the Wikipedia page on Noam Chomsky, we learn that Noam was born in Philadelphia, Pennsylvania, which corresponds to the fact triple [ Noam Chom- sky , born in , Philadelphia ]. Fact triples are foun- dational to human knowledge and play a key role in many downstream NLP tasks such as question- answering, search queries, and knowledge-graph completion ( Xu et al. , 2016 ; Lin et al. , 2015 ; Li et al. , 2014 )."},{"role":"assistant","content":"Conclusion: In this survey, we reviewed the relation extraction task with a focus on distant supervision. We traced the history of RE methods by discussing exemplary works and highlighting shortcomings to contextual- ize progress. We noted the key differences between pattern-based methods, statistical-based methods, neural-based methods, and finally, large language model-based methods. We conducted an overview of popular RE datasets and discussed some limita- tions of both corpus-based and instance-based RE evaluation."}]
Scene Flow Estimation: A Survey
Scene flow is a three-dimensional motion field of the surface in world space, or in other words, it shows the three-dimensional displacement vector of each surface point between two frames. As most computer vision issues are, scene flow estimation is essentially an ill-posed energy minimization problem with three unknowns. Prior knowledge in multiple aspects is required to make the energy function solvable with just a few pairs of images. Hence, it’s essential to fully make use of information from the data source and to weigh different prior knowledge for a better performance.
This paper presents a comprehensive and up-to-date survey on both scene flow estimation methods and the evaluation methodologies for the first time after 17 years since scene flow was introduced. We have discussed most of the estimation methods so researchers could have a clear view of this field and get inspired for their studies of interest. The representative methods are highlighted so the differences between these methods are clear, and the similarities between top-tier methods can be seen as a tendency for modification. The widely used benchmarks have been analyzed and compared, so are multiple evaluation protocols. This paper provides sufficient information for researchers to choose the appropriate datasets and protocols for evaluating performance of their algorithms.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Scene Flow Estimation: A Survey* and the introduction:Scene flow is a three-dimensional motion field of the surface in world space, or in other words, it shows the three-dimensional displacement vector of each surface point between two frames. As most computer vision issues are, scene flow estimation is essentially an ill-posed energy minimization problem with three unknowns. Prior knowledge in multiple aspects is required to make the energy function solvable with just a few pairs of images. Hence, it’s essential to fully make use of information from the data source and to weigh different prior knowledge for a better performance."},{"role":"assistant","content":"Conclusion: This paper presents a comprehensive and up-to-date survey on both scene flow estimation methods and the evaluation methodologies for the first time after 17 years since scene flow was introduced. We have discussed most of the estimation methods so researchers could have a clear view of this field and get inspired for their studies of interest. The representative methods are highlighted so the differences between these methods are clear, and the similarities between top-tier methods can be seen as a tendency for modification. The widely used benchmarks have been analyzed and compared, so are multiple evaluation protocols. This paper provides sufficient information for researchers to choose the appropriate datasets and protocols for evaluating performance of their algorithms."}]
Few-shot Learning: A Survey
“Can machines think [ 121 ]? ” This is the question raised in Alan Turing’s seminal paper entitled
Few-Shot Learning (FSL) is an important machine learning problem on the path toward real artificial intelligence. In this survey, we provide a comprehensive and systematic review of FSL. We first formally define FSL and discuss how it relates to, and differs from, relevant learning problems such as semi-supervised learning, imbalanced learning, transfer learning and meta-learning. Then, we reveal the core issues of FSL and classify existing works into a unified taxonomy in terms of how they deal with these core issues, i.e., data, model and algorithm. Using prior knowledge, data-based methods augment the supervised experience of FSL, model-based methods constrain the hypothesis space, and algorithm-based methods alter the search for parameters in the hypothesis space. We perform an extensive literature review spanning works from the birth of FSL to the most recent publications, and categorize them with thorough comparison. As future work for FSL, we outline possible directions in terms of problem setup, techniques, applications and theory, hoping these can inspire the solving of the FSL problem and the achievement of human-like machines one day.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Few-shot Learning: A Survey* and the introduction:“Can machines think [ 121 ]? ” This is the question raised in Alan Turing’s seminal paper entitled"},{"role":"assistant","content":"Conclusion: Few-Shot Learning (FSL) is an important machine learning problem to facilitate real artificial intelligence. In this survey, we provide a comprehensive and systematic review of FSL. We first formally define FSL, and discuss the relatedness and difference of FSL with respect to relevant learning problems such as semi-supervised learning, imbalanced learning, transfer learning and meta-learning. Then, we reveal the core issues of FSL and classify existing works into a unified taxonomy in terms of how they deal with the core issues, i. e., data, model and algorithm. Using the prior knowledge, data augments the supervised experience of FSL, model constrains the hypothesis space of FSL, and algorithm alters the search of parameter in hypothesis space to solve FSL. We perform extensive literature review from the birth of FSL to the most recent published ones, and categorizes them in into different categories with thorough comparison. As possible future works for FSL, we provide possible directions in terms of problem setup, techniques, applications and theory, hoping these can inspire of solving of FSL problem and achieve human-like machines one day."}]
A survey on haptic technologies for mobile augmented reality
AR and MAR have attracted interest from both industry and academia in the last decade. MAR enhances the real world of a mobile user with computer-generated virtual content. AR applications combine real and virtual objects in a physical environment, are interactive in real time and display an augmented view. The advances in mobile computing, computer vision, and networking have enabled the AR ecosystem. Due to the mobile nature of MAR applications, they tend to run on mobile or wearable devices such as smartphones, tablets, and smart-glasses. These devices provide user mobility, but at the cost of constrained resources such as computing power and energy. The computational constraints of these mobile devices limit the performance and design of AR applications in the mobile environment. Therefore, cloud infrastructures ([ 30 ], [ 36 ]), computing-offloading solutions ([ 34 ], [ 63 ]), service providers, cloudlets ([ 19 ]) and Fog computing ([ 18 ]) continue to deploy innovative services to provide a real-time AR experience ([ 21 ]). Microsoft 1 , Facebook 2 and Apple 3 have shown their interest in AR applications and believe in the viability of this technology. The current trend of mobile AR applications has affected the mobile market. Some well-known commercial MAR applications such as Pokemon GO 4 are location-based AR mobile games.
In this survey, we depict the state of the art of several haptic devices and their capabilities as wearables in the MAR ecosystem. Furthermore, we classify the haptic feedback devices by their sensory nature and their design characteristics, such as mid-air and exoskeleton. We start with a brief description of the main features of haptic devices and the importance of audio and visual feedback as non-haptic modalities in enhancing the UX and improving the overall interaction performance. We analyze the main characteristics of the proposed devices and their applicability as wearables for MAR applications. Although there are many works and commercial products, an affordable, portable and simple approach for haptic wearable devices is still missing. Moreover, the fidelity of these devices is limited to one scenario such as surface/texture rendering, grasping, or pushing. The combination of multiple haptic devices to achieve better feedback has been attempted by several authors, but the size or difficult implementation hinders their deployment in mobile environments, where the scenarios and circumstances surrounding the user can change. With this work we aim to provide a better understanding of the mechanisms, challenges and future possibilities of haptic feedback in the MAR field.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A survey on haptic technologies for mobile augmented reality* and the introduction:AR and MAR have atracted interest from both industry and academia in the last decade. MAR enhances the real world of a mobile user with computer-generated virtual content. AR applications combine real and virtual objects in a physical environment, are interactive in real time and display an augmented view. Te advances in mobile computing, computer vision, and networking have enabled the AR ecosystem. Due to the mobile nature of MAR applications, they tend to run on mobile or wearable devices such as smartphones, tablets, smart-glasses. Tese device provide user’s mobility, but at the cost of constrained resources such as computing-power, and energy. Te computational constrains of these mobile devices limit the performance and design of AR applications in the mobile environment. Terefore, cloud infrastructures ([ 30 ], [ 36 ]), computing-ofoading solutions ([ 34 ], [ 63 ]), service providers, cloudlets ([ 19 ]) and Fog computing ([ 18 ]) continue to deploy innovative services to provide a real time AR experience ([ 21 ]). Microsof 1 , Facebook 2 and Apple 3 have shown their interest in AR applications and they believe in the viability of this technology. Te current trend of mobile AR applications has afected the mobile market. Some well-known commercial MAR applications such as Pokemon GO 4 are location-based AR mobile games."},{"role":"assistant","content":"Conclusion: In this survey, we depict the state-of-the-art of several haptic devices and their capabilities as wearables in MAR ecosystem. Furthermore, we classify the haptic feedback devices by their sensory nature and their design characteristics, such as mid-air, and exoskeleton. We start with a brief description of the main features of haptic devices and the importance of audio and visual as non-haptic devices in enhancing the UX and improving the overall interaction performance. We analyze the main characteristics of the proposed devices, and their applicability as wearables for MAR applications. Although there are many works and commercial products, an afordable, portable and simple approach for haptic wearable devices is still missing. Moreover, the fdelity of these devices is limited to one scenario such as surface/texture rendering, grasping, or pushing. Te combination of more haptic devices to achieve beter feedback has been done by several authors but the size, or difcult implementation hinders their deployment in mobile environments, where the scenarios and circumstances surrounding the user can change. With this work we aim to provide a beter understanding of mechanisms, challenges and future possibilities of haptic feedback in the MAR feld."}]
A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges
Machine learning models commonly make the closed-set assumption, where the test data is drawn i.i.d. from the same distribution as the training data. Yet in practice, all types of test input data—even those on which we have not trained the classifier—can be encountered. Unfortunately, models can assign misleading confidence values for unseen test samples [ 1 ], [ 2 ], [ 3 ], [ 4 ], [ 5 ]. This leads to concerns about the reliability of classifiers, particularly for safety-critical applications [ 6 ]. In the literature, several fields attempt to address the issue of identifying the unknowns/anomalies/out-of-distribution data in the open-world setting. In particular, the problems of anomaly detection (AD), Novelty Detection
In many applications, it is not feasible to model all kinds of classes occurring during testing; thus, scenarios existing in domains such as OOD detection, OSR, ND (one-class learning), and AD become ubiquitous. Up to this time, these domains, in spite of having the same intention and a large intersection, have been followed roughly independently by researchers.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges* and the introduction:M ACHINE learning models commonly make the closed- set assumption, where the test data is drawn i.i.d from the same distribution as the training data. Yet in practice, all types of test input data—even those on which we have not trained the classifier—can be encountered. Unfortunately, models can assign misleading confidence values for unseen test samples [ 1 ], [ 2 ], [ 3 ], [ 4 ], [ 5 ]. This leads to concerns about the reliability of classifiers, particularly for safety-critical ap- plications [ 6 ]. In literature, several fields attempt to address the issue of identifying the unknowns/anomalies/out-of- distribution data in the open-world setting. In particular, the problems of anomaly detection (AD), Novelty Detection"},{"role":"assistant","content":"Conclusion: In many applications, it is not feasible to model all kinds of classes occurring during testing; thus, scenarios existing in domains such as OOD detection, OSR, ND (one-class learning), and AD become ubiquitous. Up to this time, these domains, in spite of having the same intention and a large intersection, have been followed roughly independently by researchers."}]
A Brief Survey on Deep Learning Based Data Hiding, Steganography and Watermarking
Seeing is not always believing, i.e., a natural-looking image can contain secret information that is invisible to the general public. Data hiding enables concealing a secret message within a transport medium, such as a digital image, and its essential property lies in imperceptibility for achieving the fundamental goal of being hidden. With easy access to the Internet and the growing popularity of social media platforms, digital media, such as images or video, has become the most commonly used host for secure data transfer in applications ranging from secret communication and copyright protection to content authentication. Data hiding schemes are characterized by three requirements: capacity regarding the embedded payload, security in terms of being undetectable by steganalysis, and robustness against distortions in the transmission channel. There is a trade-off among the above three requirements, namely capacity, security and robustness [Kadhim et al., 2019; Zhang et al., 2020a], as depicted in Figure 1. For example, a hiding algorithm that is capacity-oriented is often subject to low security and robustness. Generally speaking, data hiding aims to hide more information given no extra constraint is applied. Secure steganography and robust watermarking, as the terms suggest, prioritize security and robustness, respectively. Most traditional data hiding methods are carried out under the distortion-coding framework, which allocates different distortions to different cover elements and
Deep learning based data hiding has become an emerging field and is drawing increasing attention. Our work conducts a brief survey on this topic by first outlining three basic meta-architectures. We further discuss the challenges of deep hiding in various applications, such as steganography, light field messaging and watermarking. Finally, we discuss its impact on the field of adversarial attacks and vice versa. A joint investigation of data hiding and adversarial attacks will be an interesting direction with possible new insights.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Brief Survey on Deep Learning Based Data Hiding, Steganography and Watermarking* and the introduction:Seeing is not always believing, i . e . a natural-looking image can contain secret information that is invisible to the gen- eral public. Data hiding enables concealing a secret message within a transport medium, such as a digital image, and its essential property lies in im perce pti bil it y for achieving the fundamental goal of being hidden. With easy access to the Internet and gaining popularity of the social media platform, digital media, such as image or video, has become the most commonly used host for secure data transfer in applications ranging from secret communication, copy-right protection to content authentication. Data hiding schemes are character- ized by three requirements: capacity regarding the embed- ded payload, security in terms of being undetectable by ste- ganalysis, robustness against distortions in the transmission channel. There is a trade-off among the above three require- ments, namely capacity , security and robustness [Kadhim et al. , 2019; Zhang et al. , 2020a] as depicted in Figure 1. For example, a hiding algorithm that is capacity-oriented is of- ten subject to low security and robustness. Generally speak- ing, data hiding aims to hide more information given no extra constraint is applied. Secure steganography and robust wa- termarking, as the term suggests, prioritize security and ro- bustness, respectively. Most traditional data hiding methods are carried out under the distortion-coding framework, which allocates different distortions to different cover elements and"},{"role":"assistant","content":"Conclusion: Deep learning based data hiding has become an emerging field and is drawing increasing attention. Our work conducts a brief survey on this topic by first outlining three basic meta- architectures. We further discuss the challenges of deep hid- ing in various applications, such as steganography, light field messaging and watermarking. Finally, we discuss its impact on the field of adversarial attack and vice versa. A joint in- vestigation of data hiding and adversarial attack will be an interesting direction with possible new insights."}]
A Comprehensive Survey on Automatic Knowledge Graph Construction
Knowledge graphs (KGs) provide well-organized human knowledge for applications like search engines [ 1 ], recommendation systems [ 2 ], and question answering [ 3 ].
With this paper, we delivered a comprehensive survey on the topic of knowledge graph construction. Specifically, we reviewed the tasks, methods, challenges, and related resources used to construct, refine, and integrate KGs from various data types in different scenarios. To probe into the essential topics for the big data environment, we systematically presented the paragon models for obtaining fine-grained concepts (entity typing), dealing with low-resource knowledge (extraction tasks in few-shot scenarios), understanding large linguistic objects (document-level relation extraction), complex reasoning (logic and interpretable reasoning) and handling conditional structures (temporal and general conditions) in knowledge graphs. Moreover, we provided briefs on practical KG toolkits and projects. In conclusion, knowledge graph construction has become a critical topic for enabling human intelligence in AI applications. In the future, the research community will certainly be searching for more paradigms to empower KGs with wisdom in massive heterogeneous, autonomous, complex, and evolving data environments while enhancing collaborations between knowledge communities.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Automatic Knowledge Graph Construction* and the introduction:Knowledge graphs (KGs) provide well-organized human knowledge for applications like search engines [ 1 ], recommendation systems [ 2 ], and question answering [ 3 ]."},{"role":"assistant","content":"Conclusion: With this paper, we delivered a comprehensive survey on the topic of knowledge graph construction. Specifically, we reviewed the tasks, methods, challenges, and related resources used to construct, refine, and integrate KGs from various data types in different scenarios. To probe into the essential topics for the big data environment, we systematically presented the paragon models for obtaining fine-grained concepts (entity typing), dealing with low-resource knowledge (extraction tasks in few- shot scenarios), understanding large linguistic objects (document-level relation extraction), complex reasoning (logic and interpretable reasoning) and handling conditional structures (temporal and general conditions) in knowledge graphs. Moreover, we provided briefs on practical KG toolkits and projects. In conclusion, knowledge graph construction has become a critical topic for enabling human intelligence in AI applications. In the future, the research community will certainly be searching for more paradigms to empower KGs with wisdom in massive heterogeneous, autonomous, complex, and evolving data environments while enhancing collaborations between knowledge communities."}]
Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review
Traditionally, parallel computing [69] systems have been used for scientific and technical computing. Scientific and engineering computational problems are usually complex and resource intensive. To efficiently solve these problems, utilization of parallel computing systems that may comprise multiple processing units is needed. The emergence of multi-core and many-core processors in the last decade led to the pervasiveness of parallel computing systems, from embedded systems and personal computers to data centers and supercomputers. While in the past parallel computing was the focus of only a small group of scientists and engineers at supercomputing centers, nowadays virtually all programmers are exposed to parallel processors that comprise multiple or many cores [46].
In this article, we have conducted a systematic literature review that describes approaches that use machine learning and meta-heuristics for software optimization of parallel computing systems. We have classified approaches based on the software life-cycle activities at compile-time and run-time, including the code optimization and generation, scheduling, and adaptation. We have discussed the shortcomings of existing approaches and provided recommendations for future research directions.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Using Meta-heuristics and Machine Learning for Software Optimization of Parallel Computing Systems: A Systematic Literature Review* and the introduction:Traditionally, parallel computing [69] systems have been used for scientific and technical com- puting. Usually scientific and engineering computational problems are complex and resource intensive. To e ffi ciently solve these problems, utilization of parallel computing systems that may comprise multiple processing units is needed. The emergence of multi-core and many-core pro- cessors in the last decade led to the pervasiveness of parallel computing systems from embedded systems, personal computers, to data centers and supercomputers. While in the past parallel com- puting was a focus of only a small group of scientists and engineers at supercomputing centers, nowadays virtually programmers of all systems are exposed to parallel processors that comprise multiple or many cores [46]."},{"role":"assistant","content":"Conclusion: In this article, we have conducted a systematic literature review that describes approaches that use machine learning and meta-heuristics for software optimization of parallel computing systems. We have classified approaches based on the software life-cycle activities at compile-time and run-time, including the code optimization and generation, scheduling, and adaptation. We have discussed the shortcomings of existing approaches and provided recommendations for future research directions."}]
A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation
Having access to verified and trusted information is crucial in the midst of the COVID-19 pandemic, one of the most significant health crises (Mallah et al. 2021) in recent history. Exposure to misinformation on social media has been linked to COVID-19 vaccine hesitancy (Loomba et al. 2021), the belief that 5G towers spread the virus (Ahmed et al. 2020b), the misconception that a COVID-19 vaccine candidate caused the death of trial participants 1 , and a widely held view that the virus is a conspiracy or a bioweapon (Douglas 2021). These beliefs threaten the response to the pandemic and promote actions that can lead to the spread of the virus. In this regard, the Alan Turing Institute (Seger et al. 2020) classified epistemic security as a fundamental challenge for society when facing a situation that requires taking collective action to respond to crises (e.g., global pandemics) or complex challenges (e.g., climate change). They define epistemic security as reliably preventing threats to the production, distribution, consumption, and assessment of reliable information within a society. Echo chambers on social media are identified (Seger et al. 2020) as one of the core threats to epistemic security as they can drastically increase the spread and even creation of misinformation on social media (Del Vicario et al. 2019, 2016a; Zollo et al. 2017; Zollo and Quattrociocchi 2018). The presence of misinformation on social media is a well-documented problem (Shu et al. 2020; Wu et al. 2019). Social media is a prominent source of news and information about COVID-19 and other current events for most of us. Currently, more than half of adults in the US say that they get their news from social media (Shearer and Mitchell 2021).
While the emergence of echo chambers can seem to be an unstoppable wave, we must realize that there is hope for a better information ecosystem. We showed that echo chambers are largely a byproduct of recommender systems. As such, what has been manufactured by these systems can likewise be deconstructed by these systems. Social media may not currently live up to its promise of bringing us closer together and fostering better conversations. The future does not have to be this way: through research and a structured strategy, a less polarized world is possible.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Echo Chambers on Social Media: Description, Detection and Mitigation* and the introduction:Having access to verified and trusted information is cru- cial in the midst of the COVID-19 pandemic, one of the most significant health crises (Mallah et al. 2021) in recent history. Exposure to misinformation on social media has been linked to COVID-19 vaccine hesitancy (Loomba et al. 2021), the belief that 5G towers spread the virus (Ahmed et al. 2020b), the misconception that a COVID-19 vac- cine candidate caused the death of trial participants 1 , and a widely held view that the virus is a conspiracy or a bioweapon (Douglas 2021). These beliefs threaten the re- sponse to the pandemic and promote actions that can lead to the spread of the virus. In this regard, the Alan Tur- ing Institute (Seger et al. 2020) classified epistemic secu- rity as a fundamental challenge for society when facing a situation that requires taking collective action to respond to crises (e.g., global pandemics) or complex challenges (e.g., climate change). They define epistemic security as reliably preventing threats to the production, distribution, consump- tion, and assessment of reliable information within a soci- ety. Echo chambers on social media are identified (Seger et al. 2020) as one of the core threats to epistemic security as they can drastically increase the spread and even creation of misinformation on social media (Del Vicario et al. 2019, 2016a; Zollo et al. 2017; Zollo and Quattrociocchi 2018). The presence of misinformation on social media is a well- documented problem (Shu et al. 2020; Wu et al. 2019). So- cial media is a prominent source of news and information about COVID-19 and other current events for most of us. Currently, more than half of adults in the US say that they get their news from social media (Shearer and Mitchell 2021)."},{"role":"assistant","content":"Conclusion: While the emergence of echo chambers can seem to be an unstoppable wave, we must realize that there is hope for a better information ecosystem. We showed that echo cham- bers are largely a byproduct of recommender systems. As such, what has been manufactured by these systems can like- wise be deconstructed by these systems. Social media may not currently live up to its’ promise of bringing us closer to- gether and fostering better conversations presently. The fu- ture does not have to be this way – through research and a structured strategy a less polarized world is possible."}]
A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature
From 2009 to 2015, the number of bachelor’s degrees in Computer Science increased by 74 percent, while overall growth across all fields of study only rose by 16 percent [ 50 ]. Taulbee’s 2019 [ 75 ] report shows an increase from a recent minimum of under 10,000 bachelor’s degrees conferred in 2009 to over 30,000 degrees in 2019 (see Figure 1B [ 75 ]). It is undeniable that there has been a rapid increase in interest in computer science courses at higher-education institutions. We see similar demand in K-12 with CSforAll initiatives. Educators have struggled with how to cope with new and exacerbated challenges in computing education, ranging from how to scale up to handle new enrollments to which teaching techniques are best during this time of growth. The COVID-19 pandemic has prompted additional investigation into the challenges of online instruction.
Our research goal was to characterize the reporting of empiricism in Computing Education Research literature by identifying whether publications include content necessary for researchers to perform replications, meta-analyses, and theory building. This systematic literature review summarizes the types of papers and studies included during 2014 and 2015 in the SIGCSE TS, ICER, ITiCSE, TOCE, and CSE venues. A majority of the accepted papers report empirical work. However, those papers do not consistently follow reporting norms. We have provided suggestions to authors and reviewers to move the community forward in publishing high-quality empirical work that can lead to meta-analysis and theory building.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Systematic Literature Review of Empiricism and Norms of Reporting in Computing Education Research Literature* and the introduction:From 2009 to 2015, the number of bachelor’s degrees in Computer Science increased by 74 percent, while overall growth across all fields of study only rose by 16 percent [ 50 ]. Taulbee’s 2019 [ 75 ] report shows an increase from a recent minimum of under 10,000 bachelor’s degrees conferred in 2009 to over 30,000 degrees in 2019 (see Figure 1B [ 75 ]). It is undeniable that there has been a rapid increase in interest in computer science courses at higher-education institutions. We see 1 similar demand in K-12 with CSforAll initiatives. Educators have struggled with how to cope with new and exacerbated challenges in computing education - ranging from how to scale up to handle new enrollments to which teaching techniques are best during this time of growth. The COVID-19 pandemic has added additional investigation into the challenges of online instruction."},{"role":"assistant","content":"Conclusion: Our research goal was to characterize the reporting of empiricism in Computing Education Re- search literature by identifying whether publications include content necessary for researchers to perform replications, meta-analyses, and theory building. This systematic literature review sum- marizes the type of papers and studies included during 2014 and 2015 in the SIGCSE TS, ICER, ITiCSE, TOCE, and CSE venues. A majority of the accepted papers report empirical work. How- ever those papers do not consistently follow reporting norms. We have provided suggestions to authors and reviewers to move the community forward in publishing high-quality empirical work that can lead to meta-analysis and theory building."}]
Did Chatbots Miss Their ’Apollo Moment’ ∗ ? A Survey of the Potential, Gaps and Lessons from Using Collaboration Assistants During COVID-19
COVID-19 1 is a global pandemic which started in China in the winter of 2019 and has spread around the world, with over 100 million cases and more than two million deaths by Feb 2021 [WHO, 2021]. As the disease has progressed, new hot spots of the disease have emerged, first in South-East Asia, then Europe, and then in the US, South America and South Asia. The disease has evolved, and regions around the world have also switched their responses frequently while waiting for an effective vaccine to be developed and widely available for a lasting cure. The impact of the COVID pandemic has varied globally over geography and time, as measured by the number of cases and deaths, depending on the demographics of the local population as well as the public health policies implemented in response. A compilation of resources can be found at [Srivastava, 2020].
COVID-19 caused a major disruption in the lives of people around the world, and they were looking for help with decisions in all aspects of their lives. At this juncture, chatbots, as the AI technology for providing personalized decision support at scale, were most needed. In this paper, we reviewed the range of methods available to build them and the capabilities they can offer. We then looked at how chatbots were positioned for benefit in health and the limited evidence of their impact that existed before COVID. COVID triggered the launch of new disease-specific chatbots as well as accelerated adoption of existing ones in customer care and mental health. However, most of them worked in simple scenarios and raised questions about usability, effectiveness and handling of user privacy. We identified gaps from the experience and drew lessons that can be used for future health exigencies.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Did Chatbots Miss Their ’Apollo Moment’ ∗ ? A Survey of the Potential, Gaps and Lessons from Using Collaboration Assistants During COVID-19* and the introduction:COVID-19 1 is a global pandemic which started in China in the winter of 2019 and has spread around the world with over 100 million cases and killing more than two million people by Feb 2021 [WHO, 2021]. As the disease has progressed, new hot spots of the disease have emerged first in South-East Asia, then Europe and then in US, South America and South Asia. The disease has evolved and regions around the world have also switched their responses frequently while waiting for an effective vaccine to be developed and widely available for lasting cure. The impact of COVID pandemic has var- ied globally over geography and time, as measured by num- ber of cases and deaths, depending on demographics of the local population as well as the public health policies imple- mented in response. A compilation of resources can be found at [Srivastava, 2020]."},{"role":"assistant","content":"Conclusion: COVID19 caused a major disruption in the lives of people around the world and they were looking for help with deci- sions in all aspects of their lives. At this juncture, chatbots as the AI technology for providing personalized decision sup- port at scale, was most needed. In this paper, we reviewed the range of methods available to build them and capabilities they can offer. We then looked at how chatbots were positioned for benefit in health and the limited evidence that existed be- fore COVID of their impact. COVID triggered launching of disease-specific new chatbots as well as accelerated adoption of existing one in customer care and mental health. However, most of them worked in simple scenarios and raised questions about usability, effectiveness and handling of user privacy. We identified gaps from the experience and drew lessons that can be used for future health exigencies."}]
Domain Adaptation for Visual Applications: A Comprehensive Survey
While huge volumes of unlabeled data are generated and made available in many domains, the cost of acquiring data labels remains high. To overcome the burden of annotation, alternative solutions have been proposed in the literature in order to exploit available unlabeled data from the domain (referred to as semi-supervised learning), or labeled data or models available in similar domains (referred to as transfer learning). Domain Adaptation (DA) is a particular case of transfer learning (TL) that leverages labeled data in one or more related source domains to learn a classifier for unseen or unlabeled data in a target domain, where it is generally assumed that the task is the same, i.e., class labels are shared with the source domains. The domains are assumed to be related but not identical; if they were identical, the problem would reduce to standard machine learning (ML), which assumes the test data is drawn from the same distribution as the training data. When this assumption does not hold, i.e., the distributions of the training and test sets do not match, performance at test time can be significantly degraded.
This paper attempted to provide an overview of different visual domain adaptation solutions, including shallow methods and more recent deep models. We grouped the methods both by their similarity concerning the problem (homogeneous versus heterogeneous data, unsupervised versus semi-supervised scenario) and the solution proposed (feature transformation, instance reweighing, deep models, online learning). We also reviewed methods that solve domain adaptation in the case of heterogeneous data, as well as approaches that address computer vision problems beyond image classification, such as object detection or multi-object tracking. Finally, we ended the paper by situating domain adaptation within a larger context, relating it to other transfer learning techniques as well as to traditional machine learning approaches.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Domain Adaptation for Visual Applications: A Comprehensive Survey* and the introduction:While huge volumes of unlabeled data are generated and made available in many domains, the cost of acquiring data labels remains high. To overcome the burden of annotation, alternative solutions have been proposed in the literature in order to exploit available unlabeled data from the domain (referred to as semi-supervised learning), or labeled data or models available in similar domains (referred to as transfer learning). Domain Adaptation (DA) is a particular case of transfer learning (TL) that leverages labeled data in one or more related source domains, to learn a classifier for unseen or unlabeled data in a target domain, where in general it is assumed that the task is the same, i.e . class labels shared with the source domains. The domains are assumed to be related, but not identical in which case it becomes a standard machine learning (ML) problem that assumes the test data is drawn from the same distribution as the training data. When this assumption is not verified, i.e . the distributions on training and test set do not match, the performance at test time can be significantly degraded."},{"role":"assistant","content":"Conclusion: This paper attempted to provide an overview of different visual domain adaptation solutions, including shallow methods and more recent deep models. We grouped the methods both by their similarity concerning the prob- lem (homogeneous versus heterogeneous data, unsupervised versus semi-supervised scenario) and the solution proposed (feature transformation, instance reweighing, deep models, online learning). We also reviewed meth- ods that solve domain adaptation in the case of heterogeneous data as well as approaches that addresses com- puter vision problems beyond image classification, such as object detection or multi-object tracking. Finally, we ended the paper situating domain adaptation within a larger context relating it to other transfer learning techniques as well as to traditional machine learning approaches."}]
State Representation Learning for Control: An Overview
Robotics control and artificial intelligence (AI) in a broad perspective heavily rely on the availability of compact and expressive representations of the sensor data. Designing such representations has long been performed manually by the designer, but deep learning now provides a general framework to learn such representations from data. This is particularly interesting for robotics where multiple sensors (such as cameras) can provide very high dimensional data, while the robot objective can often be expressed in a much lower dimensional space (such as the 3D position of an object in a manipulation task). This low dimensional representation, frequently called the state of the system, has the crucial role of encoding essential information (for a given task) while discarding the many irrelevant aspects of the original data.
We reviewed State Representation Learning algorithms that are designed to find a way to compress high-dimensional observations data into a low and meaningful dimensional space for controlled systems. These models only require observations made by the system, the actions performed and optionally the reward of an associated task. We presented the various constraints that can be used in this objective: such representation should contain enough information to be able to reproduce the input observation; it should contain enough information about the dynamics of the environment to predict the future state given an action, or reversely to predict actions given two states; it should contain information related to a task reward; it should follow various constraints based on a priori knowledge directly on the representation space, e.g., using physics laws or common sense priors.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*State Representation Learning for Control: An Overview* and the introduction:Robotics control and artificial intelligence (AI) in a broad perspective heavily rely on the availability of compact and expressive representations of the sensor data. Designing such representations has long been performed manually by the designer, but deep learning now provides a general framework to learn such representations from data. This is particularly interesting for robotics where multiple sensors (such as cameras) can provide very high dimensional data, while the robot objective can often be expressed in a much lower dimensional space (such as the 3D position of an object in a manipulation task). This low dimensional representation, frequently called the state of the system, has the crucial role of encoding essential information (for a given task) while discarding the many irrelevant aspects of the original data."},{"role":"assistant","content":"Conclusion: We reviewed State Representation Learning algorithms that are designed to find a way to compress high-dimensional observations data into a low and meaningful dimensional space for controlled systems. These models only require observations made by the system, the actions performed and optionally the reward of an associated task. We presented the various constraints that can be used in this objective: such representation should contain enough information to be able to reproduce the input observation; it should contain enough information about the dynamics of the environment to predict the future state given an action, or reversely to predict actions given two states; it should contain information related to a task reward; it should follow various constraints based on a priori knowledge directly on the representation space, e.g., using physics laws or common sense priors."}]
Meta Learning for Natural Language Processing: A Survey
Recently, deep learning (DL) based natural language processing (NLP) has been one of the research mainstreams and has yielded significant performance improvements in many NLP problems. However, DL models are data-hungry. This downside limits such models’ application to different domains, languages, countries, or styles because collecting in-genre data for model training is costly.
This paper investigates how meta-learning is used in NLP applications. We review the task construction settings (Section 3 ), the commonly used methods including learning to initialize, learning to compare and neural architecture search (Section 4 ), and highlight research directions that go beyond improving performance (Section 5 ). We hope this paper will encourage more researchers in the NLP community to work on meta-learning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Meta Learning for Natural Language Processing: A Survey* and the introduction:Recently, deep learning (DL) based natural lan- guage processing (NLP) has been one of the re- search mainstreams and yields significant perfor- mance improvement in many NLP problems. How- ever, DL models are data-hungry. The downside limits such models’ application to different domains, languages, countries, or styles because collecting in-genre data for model training are costly."},{"role":"assistant","content":"Conclusion: This paper investigates how meta-learning is used in NLP applications. We review the task construction settings (Section 3 ), the commonly used methods including learning to initialize, learning to compare and neural architecture search (Section 4 ), and high- light research directions that go beyond improving performance (Section 5 ). We hope this paper will encourage more researchers in the NLP community to work on meta-learning."}]
An Open Review of Open Review: A Critical Analysis of the Machine Learning Conference Review Process
Over the last decade, mainstream machine learning conferences have been strained by a deluge of conference paper submissions. At ICLR, for example, the number of submissions has grown by an order of magnitude within the last 5 years alone. Furthermore, the influx of researchers from disparate fields has led to a diverse range of perspectives and opinions that often conflict when it comes to reviewing and accepting papers. This has created an environment where the legitimacy and randomness of the review process is a common topic of discussion. Do conference reviews consistently identify high quality work? Or has review degenerated into censorship?
We find the level of reproducibility at ICLR (66% in 2020) to be higher than we expect when considering the much lower acceptance rate (26.5%), which seemingly contradicts the notion that reviews are “random.” Nonetheless, many authors find large swings in reviews as they resubmit papers to different conferences and find it difficult to identify a home venue where their ideas feel respected. We speculate that the perceived randomness of conference reviews is the result of several factors. First, differences in paper matching and bidding systems used by different conferences can sway the population of reviewers that are recommended to bid on an article, resulting in a major source of inter-conference randomness that is not represented in the above intra-conference study. Second, the influx of researchers from disparate backgrounds means that the value system of a paper’s reviewers is often mismatched with that of its authors.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A N O PEN R EVIEW OF O PEN R EVIEW : A C RITICAL A NALYSIS OF THE M ACHINE L EARNING C ONFERENCE R EVIEW P ROCESS* and the introduction:Over the last decade, mainstream machine learning conferences have been strained by a deluge of conference paper submissions. At ICLR, for example, the number of submissions has grown by an order of magnitude within the last 5 years alone. Furthermore, the influx of researchers from disparate fields has led to a diverse range of perspectives and opinions that often conflict when it comes to reviewing and accepting papers. This has created an environment where the legitimacy and randomness of the review process is a common topic of discussion. Do conference reviews consistently identify high quality work? Or has review degenerated into censorship?"},{"role":"assistant","content":"Conclusion: We find the level of reproducibility at ICLR (66% in 2020) to be higher than we expect when considering the much lower acceptance rate (26.5%), which seemingly contradicts the notion that reviews are “random.” Nonetheless, many authors find large swings in reviews as they resubmit papers to different conferences and find it difficult to identify a home venue where their ideas feel respected. We speculate that the perceived randomness of conference reviews is the result of several factors. First, differences in paper matching and bidding systems used by different conferences can sway the population of reviewers that are recommended to bid on an article, resulting in a major source of inter-conference randomness that is not represented in the above intra-conference study. Second, the influx of researchers from disparate backgrounds means that the value system of a paper’s reviewers is often mismatched with that of its authors."}]
Deep Learning for Iris Recognition: A Review
Biological patterns, such as the face, voice, fingerprint, iris, and finger vein, have replaced traditional methods like keys, passwords, and access cards as the most reliable and effective means of human identification, and they are widely used for personal identification. Existing studies [ 1 , 2 ] have proven that among the aforementioned biological patterns, the iris pattern is the most accurate and secure form for personal identification due to its significant advantages: (a) Uniqueness: no two irises share the same physical characteristics, even between the two eyes of the same person or between identical twins; (b) Stability: the iris is formed during childhood, and it generally maintains unchanged physical characteristics throughout life; (c) Informativeness: the iris has rich texture information such as spots, stripes, filaments and coronas [ 2 ]; (d) Safety: since the iris is located in a circular area under the surface of the eye between the black pupil and the white sclera, it is rarely disturbed by external factors, and as a result it is difficult to forge an iris pattern; (e) Contactlessness: Iris Recognition (IR) is more hygienic than biometrics that require contact, such as fingerprint recognition [ 3 ]. Due to these advantages, IR has been widely used in identification [ 4 ].
This paper provides a comprehensive review of deep learning-based IR, filling the gap left by the lack of such a review in the field. A total of 120 papers have been collected to support this work. The purpose of this paper is to discuss deep learning-based IR tasks. The paper begins with background on IR, including an analysis of the IR background in Section 1, and related reviews along with an introduction to commonly used public datasets in Section 2. The review sections that follow contain the most important work of this paper. In Section 3, we discuss the recognition task of deep learning in IR; the identification process can be divided into non-end-to-end and end-to-end pipelines. For the non-end-to-end pipeline, we discuss it stage by stage, including pre-processing, feature extraction and matching. For the end-to-end pipeline, we discuss typical and novel neural networks, respectively. U-Net and FCN show excellent performance on the segmentation task. In addition, the PAD task in IR is also studied. Moreover, the YOLO model achieves good results on the iris localization task, and GANs perform well in iris image enhancement. There are other tasks in which deep learning also plays an important role, and all these studies achieved excellent performance. This survey concludes with a summary of some typical challenges of IR and suggests potential directions that can inspire future research in the field of IR.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Deep Learning for Iris Recognition: A Review* and the introduction:Biological patterns, such as the face, voice, fingerprint, iris, and finger vein, have replaced traditional methods like keys, passwords, and access cards as the most reliable and effective human identification. Biological patterns, includ- ing face, voice, fingerprint, iris, and finger vein, are widely used for personal identification. Existing studies [ 1 , 2 ] have proven that among the aforemen- tioned biological patterns, the iris pattern is the most accurate and secure form for personal identification due to its significant advantages: (a) Unique: there are not any iris having the same physical characteristic as others, even if they come from the same person or identical twins; (b) Stability: the iris is formed during childhood, and it generally maintains unchangeable physi- cal characteristics throughout life; (c) Informative: the iris has rich texture information such as spots, stripes, filaments and coronas [ 2 ]. (d) Safety: Since the iris is located in a circular area under the surface of the eye between the black pupil and the white sclera, it is rarely disturbed by external factors. As a result, it is difficult to forge the iris pattern; (e) Contactless: Iris Recog- nition (IR) is more hygienic than biometrics that requires contact, such as fingerprint recognition [ 3 ]. Due to these advantages, IR has been widely used in identification [ 4 ]."},{"role":"assistant","content":"Conclusion: This paper provides a comprehensive review of deep learning-based IR, bridg- ing the lack of a comprehensive review related to deep learning in the field of IR. A total of 120 papers have been collected to support this work. The pur- pose of this paper is to discuss deep learning-based IR tasks. The paper begins with some information related to IR, including an analysis of IR background in Section. 1 and related reviews of an introduction to commonly used public datasets in Section. 2 . The survey reviews sections immediately follow with the most important work of this paper. In Section. 3 , we discuss the recogni- tion task of deep learning in IR, and the process of the identification task can be divided into non-end-to-end and end-to-end processes. In the non-end-to- end process, we discuss it according to the process, including pre-processing, feature extraction and matching. In the end-to-end process, we discuss it in terms of typical and novel neural networks, respectively. U-Net and FCN show their excellent performance on the segmentation task. In addition, the PAD task on IR is also studied. Moreover, the YOLO model achieves good results on the iris localization task. GAN performs well in iris image enhancement. There are some other tasks in which deep learning also plays an important role. All these studies achieved excellent performance. This survey concludes with a summary of some typical challenges of IR and suggests potential directions for IR. They can inspire future research in the field of IR."}]
A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts
Traditional machine learning methods assume that the training and test data are drawn independently and identically (i.i.d.) from the same distribution [ 1 ]. However, when the test distribution (target) differs from the training distribution (source), we face the problem of distribution shifts. Such a shift poses significant challenges for machine learning systems deployed in the wild, such as images captured by different cameras [ 2 ], road scenes of different cities [ 3 ], and imaging devices in different hospitals [ 4 ]. As a result, the research community has developed a variety of generalization or adaptation techniques to improve model robustness against distribution shifts. For instance, domain generalization (DG) [ 5 ] aims to learn a model using data from one or multiple source domains that can generalize well to any out-of-distribution target domain. On the other hand, domain adaptation (DA) [ 6 ] follows the transductive learning principle to leverage knowledge from a labeled source domain to an unlabeled target domain.
Learning to adapt a pre-trained model to unlabeled data under distribution shifts is an emerging and critical problem in the field of machine learning. This survey provides a comprehensive review of four related topics: source-free domain adaptation, test-time batch adaptation, online test-time adaptation, and test-time prior adaptation. These topics are unified as a broad learning paradigm of test-time adaptation. For each topic, we first introduce its history and definition, followed by a new taxonomy of advanced algorithms. Additionally, we provide a review of applications related to test-time adaptation, as well as an outlook of emerging research trends and open problems. We believe that this survey will assist both newcomers and experienced researchers in better understanding the current state of research in test-time adaptation under distribution shifts.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts* and the introduction:T RADITIONAL machine learning methods assume that the training and test data are drawn independently and identically (i.i.d.) from the same distribution [ 1 ]. However, when the test distribution (target) differs from the training distribution (source), we face the problem of distribution shifts . Such a shift poses significant challenges for machine learning systems deployed in the wild, such as images captured by different cameras [ 2 ], road scenes of different cities [ 3 ], and imaging devices in different hospitals [ 4 ]. As a result, the research community has developed a variety of generalization or adaptation techniques to improve model robustness against distribution shifts. For instance, domain generalization (DG) [ 5 ] aims to learn a model using data from one or multiple source domains that can generalize well to any out-of-distribution target domain. On the other hand, domain adaptation (DA) [ 6 ] follows the transductive learning principle to leverage knowledge from a labeled source domain to an unlabeled target domain."},{"role":"assistant","content":"Conclusion: Learning to adapt a pre-trained model to unlabeled data under distribution shifts is an emerging and critical problem in the field of machine learning. This survey provides a comprehensive review of four related topics: source-free domain adaptation, test-time batch adaptation, online test- time adaptation, and test-time prior adaptation. These top- ics are unified as a broad learning paradigm of test-time adaptation. For each topic, we first introduce its history and definition, followed by a new taxonomy of advanced algorithms. Additionally, we provide a review of applica- tions related to test-time adaptation, as well as an outlook of emerging research trends and open problems. We believe that this survey will assist both newcomers and experienced researchers in better understanding the current state of research in test-time adaptation under distribution shifts."}]
A Survey on Medical Document Summarization
The internet has become a global phenomenon, connecting people all over the world and allowing for the exchange of information on a scale that was previously unimaginable. The rise of the internet and the corresponding digitization of many aspects of daily life has had a profound impact on society, leading to information overload [ 18 ]. The sheer amount of information available today can be overwhelming. To combat this, individuals can use summarization techniques to distill the information down to its most essential points. The internet has also had a profound impact on medical science. With the proliferation of online health tools, it is now easier than ever before to access medical information and resources [ 88 ]. For example, individuals can easily search for medical information, research medical conditions and treatments, and find healthcare providers. Additionally, social media platforms have provided a platform for medical professionals to collaborate, share information, and discuss current medical topics. This has allowed medical professionals to quickly access the latest research, treatments, and developments in the field. Furthermore, online tools and platforms have enabled medical professionals to perform remote consultations with patients, providing more efficient and convenient healthcare services. There are a few reasons why summarization is important for medical documents. First, it allows for a quick overview of the document’s content. This can be useful when trying to determine whether the document is relevant to a particular topic of interest. Second, summarization can help to identify key points or ideas within a document. This can be valuable when trying to understand the main arguments or findings of a study. Finally, summarization can help to improve the readability of a document by reducing the amount of text that needs to be read. This application of summarization systems has the potential to reduce the burden on medical workers who are already overburdened [94].
The internet has drastically changed the way medical documents are created and accessed. In the past, they were often handwritten, which made them hard to share and find. Now, they are typically created electronically, which makes them much easier to both access and share. The internet has also allowed medical professionals to easily share documents with one another, which has ultimately improved patient care and medical research. This paper provides a survey to introduce users and researchers to the techniques and current trends in the Medical Summarization task. We cover the formal definition of the Medical Summarization task, a detailed analysis of different medical tasks based on the type of medical documents and the specific datasets and challenges associated with them, a detailed categorization of existing works based on input, output, and technique, and an in-depth look at the evaluation metrics utilized to measure the quality of the summaries. Finally, we suggest some potential future directions for further research. We are confident that this survey will encourage more work in medical document summarization.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*A Survey on Medical Document Summarization* and the introduction:The internet has become a global phenomenon, connecting people all over the world and allowing for the exchange of information on a scale that was previously unimaginable. The rise of the internet and the corresponding digitization of many aspects of daily life has had a profound impact on society leading to information overload [ 18 ]. The sheer amount of information available today can be overwhelming. To combat this, individuals can use summarization techniques to distill the information down to its most essential points. The internet also had a profound impact on medical science. With the proliferation of online health tools, it is now easier than ever before to access medical information and resources [ 88 ]. For example, individuals can easily search for medical information, research medical conditions and treatments, and find healthcare providers. Additionally, social media platforms have provided a platform for medical professionals to collabo- rate, share information, and discuss current medical topics. This has allowed medical professionals to quickly access the latest research, treatments, and developments in the field. Furthermore, online tools and platforms have enabled medical professionals to perform remote consultations with patients, providing more efficient and convenient healthcare services. There are a few reasons why summarization is important for medical documents. First, it allows for a quick overview of the document’s content. This can be useful when trying to determine if the document is relevant to a particular topic of interest. Second, summarization can help to identify key points or ideas within a document. This can be valuable when trying to understand the main arguments or findings of a study. Finally, summarization can help to improve the readability of a document by reducing the amount of text that needs to be read. This application of summarization systems has the potential to reduce the burdens from medical workers who already are overburdened [94]."},{"role":"assistant","content":"Conclusion: The internet has drastically changed the way medical documents are created and accessed. In the past, they were often handwritten which made them hard to share and find. Now, they are typically created electronically which makes them much easier to both access and share. The internet has also allowed medical professionals to easily share documents with one another which has ultimately improved patient care and medical research. This paper provides a survey to introduce users and researchers to the techniques and current trends in the Medical Summarization task. We cover the formal definition of the Medical Summarization task, a detailed analysis of different medical tasks based on the type of medical documents and specific datasets and challenges associated with them, a detailed categorization of existing works based on input, output, and technique, and an in-depth look at the evaluation metrics utilized to measure the quality of the summaries. To finish, we suggest some potential future directions for further research. We are confident that this survey will encourage more work in medical document summarization."}]
MAC Protocols for Wireless Mesh Networks with Multi-beam Antennas: A Survey ⋆
Due to the growing popularity of wireless local access, there is a high demand to improve network throughput and enhance energy efficiency in data transmission between terminal devices (e.g., mobile phones) and access points (or base stations). However, wireless local networks mainly focus on single-hop transmission. The explosive growth of practical wireless network deployments has sparked the idea of Wireless Mesh Networks (WMN) [1], which can potentially improve the overall network capacity, enlarge the network coverage, and facilitate the network’s auto-configuration. A WMN typically organizes the communication nodes in a mesh topology, similar to a wireless ad-hoc network, and comprises three components: terminals, routers (or switches), and gateways [2]. Compared to traditional antennas, multi-beam antennas, alternatively called smart antennas, provide several advantages, e.g., a higher antenna gain, longer transmission range, better spatial reuse, and much lower interference [3]. Thus, introducing multi-beam antennas into wireless LANs can improve the overall performance of a network, especially for wireless mesh networks. For example, WMNs and their applications can be widely used in harsh working conditions or disaster relief environments to provide special services [4]. The advantages of multi-beam antennas in WMNs have attracted researchers from both academia and industry, resulting in rapid commercialization and numerous standardization efforts [5].
In this paper, we presented a comprehensive survey of MAC protocols for wireless mesh networks using multi-beam antennas. Theoretically, the capacity of a WLAN can be considerably boosted by the use of multi-beam smart antennas. However, if designers directly apply IEEE 802.11 to a WLAN with multi-beam antennas, they will inevitably encounter many challenges. The existing solutions to these challenges are based on DCF and hence are not suitable for multimedia applications. The design principles of MAC protocols need to exploit the benefits of multi-beam antennas and overcome the beamforming-related challenges.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*MAC Protocols for Wireless Mesh Networks with Multi-beam Antennas: A Survey ⋆* and the introduction:Due to a growing popularity of wireless local access, there exists a high demand to improve network throughput and enhance energy efficiency in data transmission between terminal devices (e.g., mobile phones) and access points (or base stations). However, wireless local networks mainly focus on the single-hop transmission. With the help of explosive implemen- tations of wireless network in practical, it has sparked the idea of Wireless Mesh Networks (WMN) [1], which can potentially improve the overall network capacity, enlarge the network coverage, and facilitate the network’s auto-configuration. WMN networks typically ogrnaize the communication nodes in a mesh topology, which is similar to a wireless ad-hoc net- work. In a WMN, it typically has three components: terminals, routers (or switches) and gateways [2]. Compared to traditional antennas, multi-beam antennas, alternatively called smart antennas, provide several advantages, e.g., a higher antenna gain, longer transmission range, better spatial reuse, and much lower interference [3]. Thus, introducing multi-beam antennas into wireless LAN can improve the overall performance of a network, especially for the wireless mesh networks. For example, WMN and its applications can be largely used in a harsh working condition or a disaster relief environment to provide special ser- vices [4]. The advantages of multi-beam antenna on WMN have attracted the researchers from both the academy and the industry, which result in rapid commercialization with lots of standardization efforts [5]."},{"role":"assistant","content":"Conclusion: In this paper, we presented a comprehensive survey of MAC protocols in the wireless mesh network using multi-beam antennas. Theoretically, the capacity of WLAN can be consider- ably boosted by the use of multi-beam smart antennas. However, if the designers directly apply IEEE 802.11 to a WLAN with multi-beam antennas, it will inevitably encounter many challenges. The existing solutions to these challenges are based on DCF and hence are not suitable for multi-media applications. The design principles of MAC protocols need to ex- ploit the benefits of multi-beam antennas and overcome the beamforming-related challenges."}]
Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review
Cancer is a disease marked by a disordered growth of abnormal cells that may lead to death if not treated. Around 1.9 million people in the US are expected to be diagnosed with cancer in 2023. Cancer is the second most common cause of death in the US and is expected to cause 1,670 deaths per day in 2023 [1]. However, with advances in oncology research, it is estimated that nearly 42% of newly diagnosed cases could potentially be avoided. Being a complex disease, the development and growth of cancer involve multiple microscopic and macroscopic changes in cell morphology, which are not yet fully understood. In recent years, there has been increasing interest in using machine learning techniques, such as deep neural networks (DNNs), to assist with cancer diagnosis and treatment.
Research efforts in integrating data across a few modalities have already shown encouraging results. However, there is no unified framework available for scaling across all possible modalities of cancer. The convergence of individual methodologies and data across varying scales may hold vital clues in creating a unified view of the disease that is more prognostic, predictive, and insightful than any individual view or modality. Efforts to beat cancer require synergistic analysis of heterogeneous data and the instantiation of scalable models. In this survey, we reviewed the multimodal learning task on oncology data. The future resides in developing a deployment-ready, scalable deep learning framework with inherent uncertainty quantification, interpretability, and generalizability to integrate oncology data across multiple scales, modalities, and resolutions to accelerate cancer diagnosis, prognosis, therapeutic response, and treatment planning.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Multimodal Data Integration for Oncology in the Era of Deep Neural Networks: A Review* and the introduction:Cancer is a disease marked by a disordered growth of abnormal cells that may lead to death if not treated. Around 1.9 million people in the US are expected to be diagnosed with cancer in 2023. Cancer is the second most common reason for death in the US, and it is expected to cause 1,670 deaths per day in the US in 2023 [1]. However, with advances in oncology research, it is estimated that nearly 42% of newly diagnosed cases can be potentially avoided. Being a complex disease, the development and growth of cancer involve multiple mi- croscopic and macroscopic changes in the cell morphology, which are not yet fully understood. In recent years, increasing interest has been in using machine learning techniques, such as deep neural networks (DNNs), to assist with cancer diagnosis and treatment."},{"role":"assistant","content":"Conclusion: Research efforts in integrating data across a few modalities have already shown encouraging results. There is no unified framework available for scaling across all possible modalities of cancer. The convergence of individual methodologies and data across varying scales may hold vital clues in creating a unified view of the disease that is more prognostic, predictive, and insightful than the in- dividual view or modality. Efforts to beat cancer require synergistic analysis of heterogeneous data and instantiating scalable models. In this survey, we re- viewed the multimodal learning task on oncology data. The future resides in developing a deployment-ready, scalable deep learning framework with inher- ent uncertainty quantification, interpret ability, and general iz ability to integrate oncology data across multiple scales, modalities, and resolutions to accelerate cancer diagnosis, prognosis, therapeutic response, and treatment planning."}]
Vision-Centric BEV Perception: A Survey
Accurate and comprehensive understanding of surrounding scenes, including dynamic objects and static streets, is essential for autonomous vehicles to make safe and effective driving decisions. 3D perception conducted in bird’s eye view (BEV) has attracted immense attention in recent years for two main reasons. First, BEV representations of the world, especially traffic scenarios, contain rich semantic information, precise localization, and absolute scales, which can be directly deployed by many downstream real-world applications such as behavior prediction, motion planning, etc. Second, BEV provides a physics-interpretable way to fuse information from different views, modalities, time series, and agents. Because it represents the world in a coordinate system, multiple views of data captured from surrounding cameras can be fused into a whole scene in BEV without extra stitching operations in overlapping areas. Meanwhile, the temporal fusion of consecutive visual data is exact and natural, without the distortion present in the perspective view. Moreover, other widely used acquisition sensors, like LiDAR and Radar, capture data in 3D space, which can be easily transformed to BEV to conduct sensor fusion with cameras. Even for Vehicle-to-Vehicle or Vehicle-to-Infrastructure communication technologies, BEV also plays an important role in fusing various information from multiple sources.
This paper presents a comprehensive review of recent research on solving the view transformation between perspective view and bird’s eye view. We introduce and discuss related methods by clearly classifying them according to the core idea and downstream vision tasks. To facilitate further research and implementation, detailed comparison and analysis of performance and application scenarios are provided, and rich extensions of vision-centric BEV perception are also proposed.
[{"role":"user", "content":"Help me to generate the conclusion of a survey paper given the title:*Vision-Centric BEV Perception: A Survey* and the introduction:A C curate and comprehensive understanding of surround- ing scenes, including dynamic objects and static streets, is essential for autonomous vehicles to make safe and effective driving decisions. 3D perception conducted in bird’s eye-view (BEV) has attracted immense attention in recent years for two main reasons. First, BEV representations of the world, especially traffic scenarios, contains rich semantic information, precise localization, and absolute scales, which can be directly deployed by many downstream real-world applications such as behavior prediction, motion planning, etc. Second, BEV provides a physics-interpretable way to fuse information from different views, modalities, time series, and agents. Because it represents the world in a coordinate system, multiple views of data captured from surrounding cameras can be fused into a whole scene in BEV without extra stitching operations in overlapping areas. Meanwhile, the temporal fusion of consecutive visual data is exact and natural without any distortion existing in the perspective view. Moreover, other widely used acquisition sensors, like LiDAR and Radar, capture data in 3D space, which can been easily transformed to BEV, and conduct sensor fusion with cameras. Even for Vehicle-to-Vehicle or Vehicle-Infrastructure communication technologies, BEV also plays an important role in fusing various information from multiple sources."},{"role":"assistant","content":"Conclusion: This paper presents a comprehensive review of recent research on solving the view transformation between per- spective view and bird’s eye view. We introduce and discuss related methods by clearly classifying them according to the core idea and downstream vision tasks. To facilitate further research and implementation, detailed comparison and anal- ysis of performance and application scenarios are provided, and rich extensions of vision-centric BEV perception are also proposed."}]
