Datasets:

Column                       Type       Range / classes
bibtex_url                   string     lengths 41 to 52
proceedings                  string     lengths 38 to 49
bibtext                      string     lengths 788 to 3.49k
abstract                     string     lengths 0 to 2.12k
authors                      sequence   lengths 1 to 58
title                        string     lengths 16 to 181
id                           string     lengths 7 to 18
type                         string     2 classes
arxiv_id                     string     lengths 0 to 10
GitHub                       sequence   lengths 1 to 1
paper_page                   string     170 classes
n_linked_authors             int64      -1 to 9
upvotes                      int64      -1 to 56
num_comments                 int64      -1 to 9
n_authors                    int64      -1 to 57
paper_page_exists_pre_conf   int64      0 to 1
Models                       sequence   lengths 0 to 99
Datasets                     sequence   lengths 0 to 5
Spaces                       sequence   lengths 0 to 57
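A quick way to work with a dataset that follows this schema is the `datasets` library. The sketch below is illustrative only: the repository id is a placeholder, not the dataset's real location on the Hub.

```python
# Minimal sketch of loading and filtering a dataset with the schema above.
# "user/naacl-2024-papers" is a hypothetical repository id.
from datasets import load_dataset

ds = load_dataset("user/naacl-2024-papers", split="train")
print(ds.column_names)  # bibtex_url, proceedings, bibtext, abstract, ...

# Example: papers whose Hugging Face paper page existed before the conference.
with_pages = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)
print(len(with_pages), "papers had a pre-conference paper page")
```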
https://aclanthology.org/2024.naacl-srw.18.bib
https://aclanthology.org/2024.naacl-srw.18/
@inproceedings{enomoto-etal-2024-investigating, title = "Investigating Web Corpus Filtering Methods for Language Model Development in {J}apanese", author = "Enomoto, Rintaro and Tolmachev, Arseny and Niitsuma, Takuro and Kurita, Shuhei and Kawahara, Daisuke", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.18", doi = "10.18653/v1/2024.naacl-srw.18", pages = "154--160", abstract = "The development of large language models (LLMs) is becoming increasingly significant, and there is a demand for high-quality, large-scale corpora for their pretraining. The quality of a web corpus is especially important for improving the performance of LLMs because it accounts for a large proportion of the whole corpus. However, filtering methods for web corpora have yet to be established. In this paper, we present empirical studies to reveal which filtering methods are indeed effective and analyze why they are. We build classifiers and language models in Japanese that can process large amounts of corpora rapidly enough for pretraining LLMs with limited computational resources. By evaluating these filtering methods on a web corpus quality evaluation benchmark, we reveal that the most accurate method is the N-gram language model. We also show empirically that overly strong filtering methods can actually lead to worse performance on downstream tasks. Finally, we report that the proportion of some specific topics in the processed documents decreases significantly during the filtering process.", }
The development of large language models (LLMs) is becoming increasingly significant, and there is a demand for high-quality, large-scale corpora for their pretraining. The quality of a web corpus is especially important for improving the performance of LLMs because it accounts for a large proportion of the whole corpus. However, filtering methods for web corpora have yet to be established. In this paper, we present empirical studies to reveal which filtering methods are indeed effective and analyze why they are. We build classifiers and language models in Japanese that can process large amounts of corpora rapidly enough for pretraining LLMs with limited computational resources. By evaluating these filtering methods on a web corpus quality evaluation benchmark, we reveal that the most accurate method is the N-gram language model. We also show empirically that overly strong filtering methods can actually lead to worse performance on downstream tasks. Finally, we report that the proportion of some specific topics in the processed documents decreases significantly during the filtering process.
[ "Enomoto, Rintaro", "Tolmachev, Arseny", "Niitsuma, Takuro", "Kurita, Shuhei", "Kawahara, Daisuke" ]
Investigating Web Corpus Filtering Methods for Language Model Development in Japanese
naacl-srw.18
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
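The srw.18 paper above finds an N-gram language model to be the most accurate corpus filter. Below is a minimal sketch of that style of perplexity-based filtering, assuming the `kenlm` Python bindings; the model file and threshold are placeholders, not the authors' actual setup.

```python
# Perplexity-based web corpus filtering with an n-gram LM (illustrative only).
import kenlm

model = kenlm.Model("ja_5gram.arpa")  # hypothetical model trained on clean text

def keep_document(doc: str, max_ppl: float = 500.0) -> bool:
    # kenlm expects whitespace-separated tokens, so Japanese text would need
    # tokenization first (omitted here). Low perplexity = fluent = keep.
    return model.perplexity(doc) <= max_ppl

corpus = ["これ は 自然 な 文 です 。", "buy now !!! cheap cheap cheap"]
filtered = [doc for doc in corpus if keep_document(doc)]
```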
https://aclanthology.org/2024.naacl-srw.19.bib
https://aclanthology.org/2024.naacl-srw.19/
@inproceedings{kruijt-2024-referring, title = "Referring Expressions in Human-Robot Common Ground: A Thesis Proposal", author = "Kruijt, Jaap", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.19", doi = "10.18653/v1/2024.naacl-srw.19", pages = "161--167", abstract = "In this PhD, we investigate the processes through which common ground shapes the pragmatic use of referring expressions in Human-Robot Interaction. A central point in our investigation is the interplay between a growing common ground and changes in the surrounding context, which can create ambiguity, variation and the need for pragmatic interpretations. We outline three objectives that define the scope of our work: 1) obtaining data with common ground interactions, 2) examining reference-making, and 3) evaluating the robot interlocutor. We use datasets as well as a novel interactive experimental framework to investigate the linguistic processes involved in shaping referring expressions. We also design an interactive robot model, which models these linguistic processes and can use pragmatic inference to resolve referring expressions. With this work, we contribute to existing work in HRI, reference resolution and the study of common ground.", }
In this PhD, we investigate the processes through which common ground shapes the pragmatic use of referring expressions in Human-Robot Interaction. A central point in our investigation is the interplay between a growing common ground and changes in the surrounding context, which can create ambiguity, variation and the need for pragmatic interpretations. We outline three objectives that define the scope of our work: 1) obtaining data with common ground interactions, 2) examining reference-making, and 3) evaluating the robot interlocutor. We use datasets as well as a novel interactive experimental framework to investigate the linguistic processes involved in shaping referring expressions. We also design an interactive robot model, which models these linguistic processes and can use pragmatic inference to resolve referring expressions. With this work, we contribute to existing work in HRI, reference resolution and the study of common ground.
[ "Kruijt, Jaap" ]
Referring Expressions in Human-Robot Common Ground: A Thesis Proposal
naacl-srw.19
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-srw.20.bib
https://aclanthology.org/2024.naacl-srw.20/
@inproceedings{rahaman-ive-2024-source, title = "Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection", author = "Rahaman, Mohammed and Ive, Julia", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.20", doi = "10.18653/v1/2024.naacl-srw.20", pages = "168--199", abstract = "Code clone detection is challenging, as source code can be written in different languages, domains, and styles. In this paper, we argue that source code is inherently a graph, not a sequence, and that graph-based methods are more suitable for code clone detection than sequence-based methods. We compare the performance of two state-of-the-art models: CodeBERT (Feng et al., 2020), a sequence-based model, and CodeGraph (Yu et al., 2023), a graph-based model, on two benchmark datasets: BCB (Svajlenko et al., 2014) and PoolC (PoolC, no date). We show that CodeGraph outperforms CodeBERT on both datasets, especially on cross-lingual code clones. To the best of our knowledge, this is the first work to demonstrate cross-lingual code clone detection and the superiority of graph-based methods over sequence-based methods.", }
Code clone detection is challenging, as source code can be written in different languages, domains, and styles. In this paper, we argue that source code is inherently a graph, not a sequence, and that graph-based methods are more suitable for code clone detection than sequence-based methods. We compare the performance of two state-of-the-art models: CodeBERT (Feng et al., 2020), a sequence-based model, and CodeGraph (Yu et al., 2023), a graph-based model, on two benchmark datasets: BCB (Svajlenko et al., 2014) and PoolC (PoolC, no date). We show that CodeGraph outperforms CodeBERT on both datasets, especially on cross-lingual code clones. To the best of our knowledge, this is the first work to demonstrate cross-lingual code clone detection and the superiority of graph-based methods over sequence-based methods.
[ "Rahaman, Mohammed", "Ive, Julia" ]
Source Code is a Graph, Not a Sequence: A Cross-Lingual Perspective on Code Clone Detection
naacl-srw.20
Poster
2312.16488
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
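To make the srw.20 paper's "code is a graph" framing concrete, here is a stdlib-only toy that compares two snippets by the overlap of their AST parent-child edges. It is a crude stand-in for a graph-based clone detector like CodeGraph, not a reimplementation of it.

```python
# Toy graph view of source code: parent->child AST edge sets plus Jaccard overlap.
import ast

def ast_edges(code: str) -> set:
    tree = ast.parse(code)
    edges = set()
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.add((type(parent).__name__, type(child).__name__))
    return edges

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

clone_a = "def total(xs):\n    s = 0\n    for x in xs:\n        s += x\n    return s"
clone_b = "def add_all(items):\n    acc = 0\n    for i in items:\n        acc += i\n    return acc"
# High overlap despite renamed identifiers: the graph structure survives renaming.
print(jaccard(ast_edges(clone_a), ast_edges(clone_b)))
```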
https://aclanthology.org/2024.naacl-srw.21.bib
https://aclanthology.org/2024.naacl-srw.21/
@inproceedings{zhang-etal-2024-distilling, title = "Distilling Text Style Transfer With Self-Explanation From {LLM}s", author = "Zhang, Chiyu and Cai, Honglong and Li, Yuezhang and Wu, Yuexin and Hou, Le and Abdul-Mageed, Muhammad", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.21", doi = "10.18653/v1/2024.naacl-srw.21", pages = "200--211", abstract = "Text Style Transfer (TST) seeks to alter the style of text while retaining its core content. Given the constraints of limited parallel datasets for TST, we propose CoTeX, a framework that leverages large language models (LLMs) alongside chain-of-thought (CoT) prompting to facilitate TST. CoTeX distills the complex rewriting and reasoning capabilities of LLMs into more streamlined models capable of working with both non-parallel and parallel data. Through experimentation across four TST datasets, CoTeX is shown to surpass traditional supervised fine-tuning and knowledge distillation methods, particularly in low-resource settings. We conduct a comprehensive evaluation, comparing CoTeX against current unsupervised, supervised, in-context learning (ICL) techniques, and instruction-tuned LLMs. Furthermore, CoTeX distinguishes itself by offering transparent explanations for its style transfer process.", }
Text Style Transfer (TST) seeks to alter the style of text while retaining its core content. Given the constraints of limited parallel datasets for TST, we propose CoTeX, a framework that leverages large language models (LLMs) alongside chain-of-thought (CoT) prompting to facilitate TST. CoTeX distills the complex rewriting and reasoning capabilities of LLMs into more streamlined models capable of working with both non-parallel and parallel data. Through experimentation across four TST datasets, CoTeX is shown to surpass traditional supervised fine-tuning and knowledge distillation methods, particularly in low-resource settings. We conduct a comprehensive evaluation, comparing CoTeX against current unsupervised, supervised, in-context learning (ICL) techniques, and instruction-tuned LLMs. Furthermore, CoTeX distinguishes itself by offering transparent explanations for its style transfer process.
[ "Zhang, Chiyu", "Cai, Honglong", "Li, Yuezhang", "Wu, Yuexin", "Hou, Le", "Abdul-Mageed, Muhammad" ]
Distilling Text Style Transfer With Self-Explanation From LLMs
naacl-srw.21
Poster
2403.01106
[ "" ]
https://huggingface.co/papers/2403.01106
0
0
0
7
1
[]
[]
[]
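The CoTeX recipe in srw.21 distills a teacher LLM's rewrite-plus-rationale into a smaller student. Below is a sketch of how such distillation pairs might be constructed; the prompt wording and the `call_teacher` stub are assumptions, not the paper's actual pipeline.

```python
# Building (input, rationale+rewrite) pairs for style-transfer distillation.
PROMPT = (
    "Rewrite the sentence to be {style}. First explain, step by step, which "
    "words carry the current style and how you will change them. Then give the "
    "rewrite on a final line starting with 'Rewrite:'.\n\nSentence: {text}"
)

def call_teacher(prompt: str) -> str:
    raise NotImplementedError  # placeholder for whatever LLM API is available

def build_example(text: str, style: str) -> dict:
    completion = call_teacher(PROMPT.format(style=style, text=text))
    # The student is trained to emit the explanation and the rewrite together,
    # which is what makes its style transfer self-explaining.
    return {"input": text, "target": completion}
```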
https://aclanthology.org/2024.naacl-srw.22.bib
https://aclanthology.org/2024.naacl-srw.22/
@inproceedings{wang-etal-2024-reinforcement, title = "Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation", author = "Wang, Hao and Morimura, Tetsuro and Honda, Ukyo and Kawahara, Daisuke", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.22", doi = "10.18653/v1/2024.naacl-srw.22", pages = "212--218", abstract = "Non-autoregressive (NAR) language models are known for their low latency in neural machine translation (NMT). However, a performance gap exists between NAR and autoregressive models due to the large decoding space and difficulty in capturing dependency between target words accurately. Compounding this, preparing appropriate training data for NAR models is a non-trivial task, often exacerbating exposure bias. To address these challenges, we apply reinforcement learning (RL) to Levenshtein Transformer, a representative edit-based NAR model, demonstrating that RL with self-generated data can enhance the performance of edit-based NAR models. We explore two RL approaches: stepwise reward maximization and episodic reward maximization. We discuss the respective pros and cons of these two approaches and empirically verify them. Moreover, we experimentally investigate the impact of temperature setting on performance, confirming the importance of proper temperature setting for NAR models{'} training.", }
Non-autoregressive (NAR) language models are known for their low latency in neural machine translation (NMT). However, a performance gap exists between NAR and autoregressive models due to the large decoding space and difficulty in capturing dependency between target words accurately. Compounding this, preparing appropriate training data for NAR models is a non-trivial task, often exacerbating exposure bias. To address these challenges, we apply reinforcement learning (RL) to Levenshtein Transformer, a representative edit-based NAR model, demonstrating that RL with self-generated data can enhance the performance of edit-based NAR models. We explore two RL approaches: stepwise reward maximization and episodic reward maximization. We discuss the respective pros and cons of these two approaches and empirically verify them. Moreover, we experimentally investigate the impact of temperature setting on performance, confirming the importance of proper temperature setting for NAR models' training.
[ "Wang, Hao", "Morimura, Tetsuro", "Honda, Ukyo", "Kawahara, Daisuke" ]
Reinforcement Learning for Edit-Based Non-Autoregressive Neural Machine Translation
naacl-srw.22
Poster
2405.01280
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
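Of the two RL schemes contrasted in srw.22, episodic reward maximization assigns one sentence-level reward to a whole decoding episode. Here is a minimal REINFORCE-style sketch in PyTorch; the model, sampler, and reward are placeholders rather than an actual Levenshtein Transformer setup.

```python
# Episodic REINFORCE: one scalar reward for all sampled edit actions.
import torch

def episodic_loss(logprobs: torch.Tensor, reward: float, baseline: float) -> torch.Tensor:
    # logprobs: log-probabilities of the actions sampled in one episode.
    return -(reward - baseline) * logprobs.sum()

logprobs = torch.randn(5, requires_grad=True)               # stand-in for model outputs
loss = episodic_loss(logprobs, reward=0.42, baseline=0.30)  # e.g. BLEU minus a baseline
loss.backward()  # raises the probability of actions in above-baseline episodes
```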
https://aclanthology.org/2024.naacl-srw.23.bib
https://aclanthology.org/2024.naacl-srw.23/
@inproceedings{horiguchi-etal-2024-evaluation, title = "Evaluation Dataset for {J}apanese Medical Text Simplification", author = "Horiguchi, Koki and Kajiwara, Tomoyuki and Arase, Yuki and Ninomiya, Takashi", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.23", doi = "10.18653/v1/2024.naacl-srw.23", pages = "219--225", abstract = "We create a parallel corpus for medical text simplification in Japanese, which simplifies medical terms into expressions that patients can understand without effort. While text simplification in the medical domain is strongly desired by society, it is less explored in Japanese because of the lack of language resources. In this study, we build a parallel corpus for Japanese text simplification evaluation in the medical domain using patients{'} weblogs. This corpus consists of 1,425 pairs of complex and simple sentences with or without medical terms. To tackle medical text simplification without a training corpus in the corresponding domain, we repurpose a Japanese text simplification model from other domains. Furthermore, we propose a lexically constrained reranking method that avoids outputting technical terms. Experimental results show that our method contributes to achieving higher simplification performance in the medical domain.", }
We create a parallel corpus for medical text simplification in Japanese, which simplifies medical terms into expressions that patients can understand without effort. While text simplification in the medical domain is strongly desired by society, it is less explored in Japanese because of the lack of language resources. In this study, we build a parallel corpus for Japanese text simplification evaluation in the medical domain using patients' weblogs. This corpus consists of 1,425 pairs of complex and simple sentences with or without medical terms. To tackle medical text simplification without a training corpus in the corresponding domain, we repurpose a Japanese text simplification model from other domains. Furthermore, we propose a lexically constrained reranking method that avoids outputting technical terms. Experimental results show that our method contributes to achieving higher simplification performance in the medical domain.
[ "Horiguchi, Koki", "Kajiwara, Tomoyuki", "Arase, Yuki", "Ninomiya, Takashi" ]
Evaluation Dataset for Japanese Medical Text Simplification
naacl-srw.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
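A sketch of the lexically constrained reranking that srw.23 proposes: among candidate simplifications, prefer fluent outputs while heavily penalizing any that still contain technical terms. The term list, scores, and penalty weight are illustrative assumptions.

```python
# Rerank candidate simplifications under a soft lexical constraint.
TECHNICAL_TERMS = {"寛解", "生検", "予後"}  # example medical terms to avoid

def rerank(candidates: list, penalty: float = 10.0) -> str:
    # candidates: (text, model_score) pairs; higher score = more fluent.
    def adjusted(cand):
        text, score = cand
        hits = sum(term in text for term in TECHNICAL_TERMS)
        return score - penalty * hits  # each surviving technical term costs a lot
    return max(candidates, key=adjusted)[0]

best = rerank([("予後は良好です", -1.2), ("これから良くなる見込みです", -1.5)])
```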
https://aclanthology.org/2024.naacl-srw.24.bib
https://aclanthology.org/2024.naacl-srw.24/
@inproceedings{kajikawa-etal-2024-multi, title = "Multi-Source Text Classification for Multilingual Sentence Encoder with Machine Translation", author = "Kajikawa, Reon and Yamada, Keiichiro and Kajiwara, Tomoyuki and Ninomiya, Takashi", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.24", doi = "10.18653/v1/2024.naacl-srw.24", pages = "226--232", abstract = "To reduce the cost of training models for each language for developers of natural language processing applications, pre-trained multilingual sentence encoders are promising. However, since training corpora for such multilingual sentence encoders contain only a small amount of text in languages other than English, they suffer from performance degradation for non-English languages. To improve the performance of pre-trained multilingual sentence encoders for non-English languages, we propose a method of machine translating a source sentence into English and then inputting it together with the source sentence in a multi-source manner. Experimental results on sentiment analysis and topic classification tasks in Japanese revealed the effectiveness of the proposed method.", }
To reduce the cost of training models for each language for developers of natural language processing applications, pre-trained multilingual sentence encoders are promising. However, since training corpora for such multilingual sentence encoders contain only a small amount of text in languages other than English, they suffer from performance degradation for non-English languages. To improve the performance of pre-trained multilingual sentence encoders for non-English languages, we propose a method of machine translating a source sentence into English and then inputting it together with the source sentence in a multi-source manner. Experimental results on sentiment analysis and topic classification tasks in Japanese revealed the effectiveness of the proposed method.
[ "Kajikawa, Reon", "Yamada, Keiichiro", "Kajiwara, Tomoyuki", "Ninomiya, Takashi" ]
Multi-Source Text Classification for Multilingual Sentence Encoder with Machine Translation
naacl-srw.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
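The srw.24 method pairs each source sentence with its English machine translation before encoding. Below is a sketch of that input construction; the translator stub and separator token are assumptions that would depend on the MT system and encoder actually used.

```python
# Multi-source input: original sentence plus its English translation.
def translate_to_english(text: str) -> str:
    raise NotImplementedError  # stub for an MT system

def build_multisource_input(src: str, sep: str = " [SEP] ") -> str:
    # The classifier's encoder sees both the original and the translation,
    # compensating for the encoder's weaker coverage of non-English text.
    return src + sep + translate_to_english(src)

# e.g. "この映画は最高だった [SEP] This movie was fantastic"
```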
https://aclanthology.org/2024.naacl-srw.25.bib
https://aclanthology.org/2024.naacl-srw.25/
@inproceedings{toossi-etal-2024-reproducibility, title = "A Reproducibility Study on Quantifying Language Similarity: The Impact of Missing Values in the {URIEL} Knowledge Base", author = {Toossi, Hasti and Huai, Guo and Liu, Jinyu and Khiu, Eric and Do{\u{g}}ru{\"o}z, A. Seza and Lee, En-Shiun}, editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.25", doi = "10.18653/v1/2024.naacl-srw.25", pages = "233--241", abstract = "In the pursuit of supporting more languages around the world, tools that characterize properties of languages play a key role in expanding the existing multilingual NLP research. In this study, we focus on a widely used typological knowledge base, URIEL, which aggregates linguistic information into numeric vectors. Specifically, we delve into the soundness and reproducibility of the approach taken by URIEL in quantifying language similarity. Our analysis reveals URIEL{'}s ambiguity in calculating language distances and in handling missing values. Moreover, we find that URIEL does not provide any information about typological features for 31{\%} of the languages it represents, undermining the reliabilility of the database, particularly on low-resource languages. Our literature review suggests URIEL and lang2vec are used in papers on diverse NLP tasks, which motivates us to rigorously verify the database as the effectiveness of these works depends on the reliability of the information the tool provides.", }
In the pursuit of supporting more languages around the world, tools that characterize properties of languages play a key role in expanding existing multilingual NLP research. In this study, we focus on a widely used typological knowledge base, URIEL, which aggregates linguistic information into numeric vectors. Specifically, we delve into the soundness and reproducibility of the approach taken by URIEL in quantifying language similarity. Our analysis reveals URIEL's ambiguity in calculating language distances and in handling missing values. Moreover, we find that URIEL does not provide any information about typological features for 31% of the languages it represents, undermining the reliability of the database, particularly for low-resource languages. Our literature review suggests URIEL and lang2vec are used in papers on diverse NLP tasks, which motivates us to rigorously verify the database, as the effectiveness of these works depends on the reliability of the information the tool provides.
[ "Toossi, Hasti", "Huai, Guo", "Liu, Jinyu", "Khiu, Eric", "Doğruöz, A. Seza", "Lee, En-Shiun" ]
A Reproducibility Study on Quantifying Language Similarity: The Impact of Missing Values in the URIEL Knowledge Base
naacl-srw.25
Poster
2405.11125
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
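The missing-value problem that srw.25 studies can be poked at directly with the `lang2vec` package. The sketch below follows its documented `get_features` interface, but treat the exact feature-set names and the "--" missing-value marker as assumptions to verify.

```python
# Counting missing typological values per language in URIEL via lang2vec.
import lang2vec.lang2vec as l2v

feats = l2v.get_features(["eng", "jpn", "aau"], "syntax_wals")
for lang, values in feats.items():
    missing = sum(v == "--" for v in values)  # "--" marks an absent value
    print(f"{lang}: {missing}/{len(values)} features missing")
```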
https://aclanthology.org/2024.naacl-srw.26.bib
https://aclanthology.org/2024.naacl-srw.26/
@inproceedings{zenimoto-etal-2024-coding, title = "Coding Open-Ended Responses using Pseudo Response Generation by Large Language Models", author = "Zenimoto, Yuki and Hasegawa, Ryo and Utsuro, Takehito and Yoshioka, Masaharu and Kando, Noriko", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.26", doi = "10.18653/v1/2024.naacl-srw.26", pages = "242--254", abstract = "Survey research using open-ended responses is an important method that contributes to the discovery of unknown issues and new needs. However, survey research generally requires time- and cost-consuming manual data processing, making it difficult to analyze large datasets. To address this issue, we propose an LLM-based method to automate parts of the grounded theory approach (GTA), a representative approach to qualitative data analysis. We generated and annotated pseudo open-ended responses, and used them as the training data for the coding procedures of GTA. Through evaluations, we showed that the models trained with pseudo open-ended responses are quite effective compared with those trained with manually annotated open-ended responses. We also demonstrate that the LLM-based approach is highly efficient and cost-saving compared to a human-based approach.", }
Survey research using open-ended responses is an important method that contributes to the discovery of unknown issues and new needs. However, survey research generally requires time- and cost-consuming manual data processing, making it difficult to analyze large datasets. To address this issue, we propose an LLM-based method to automate parts of the grounded theory approach (GTA), a representative approach to qualitative data analysis. We generated and annotated pseudo open-ended responses, and used them as the training data for the coding procedures of GTA. Through evaluations, we showed that the models trained with pseudo open-ended responses are quite effective compared with those trained with manually annotated open-ended responses. We also demonstrate that the LLM-based approach is highly efficient and cost-saving compared to a human-based approach.
[ "Zenimoto, Yuki", "Hasegawa, Ryo", "Utsuro, Takehito", "Yoshioka, Masaharu", "Kando, Noriko" ]
Coding Open-Ended Responses using Pseudo Response Generation by Large Language Models
naacl-srw.26
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-srw.27.bib
https://aclanthology.org/2024.naacl-srw.27/
@inproceedings{ye-2024-cross, title = "Cross-Task Generalization Abilities of Large Language Models", author = "Ye, Qinyuan", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.27", doi = "10.18653/v1/2024.naacl-srw.27", pages = "255--262", abstract = "Humans can learn a new language task efficiently with only few examples, by leveraging their knowledge and experience obtained when learning prior tasks. Enabling similar cross-task generalization abilities in NLP systems is fundamental for approaching the goal of general intelligence and expanding the reach of language technology in the future.In this thesis proposal, I will present my work on (1) benchmarking cross-task generalization abilities with diverse NLP tasks; (2) developing model architectures for improving cross-task generalization abilities; (3) analyzing and predicting the generalization landscape of current state-of-the-art large language models. Additionally, I will outline future research directions, along with preliminary thoughts on addressing them.", }
Humans can learn a new language task efficiently with only a few examples, by leveraging their knowledge and experience obtained when learning prior tasks. Enabling similar cross-task generalization abilities in NLP systems is fundamental for approaching the goal of general intelligence and expanding the reach of language technology in the future. In this thesis proposal, I will present my work on (1) benchmarking cross-task generalization abilities with diverse NLP tasks; (2) developing model architectures for improving cross-task generalization abilities; (3) analyzing and predicting the generalization landscape of current state-of-the-art large language models. Additionally, I will outline future research directions, along with preliminary thoughts on addressing them.
[ "Ye, Qinyuan" ]
Cross-Task Generalization Abilities of Large Language Models
naacl-srw.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-srw.28.bib
https://aclanthology.org/2024.naacl-srw.28/
@inproceedings{wang-yoshinaga-2024-commentary, title = "Commentary Generation from Data Records of Multiplayer Strategy Esports Game", author = "Wang, Zihan and Yoshinaga, Naoki", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.28", doi = "10.18653/v1/2024.naacl-srw.28", pages = "263--271", abstract = "Esports, a sports competition on video games, has become one of the most important sporting events. Although esports play logs have been accumulated, only a small portion of them accompany text commentaries for the audience to retrieve and understand the plays. In this study, we therefore introduce the task of generating game commentaries from esports{'} data records. We first build large-scale esports data-to-text datasets that pair structured data and commentaries from a popular esports game, League of Legends. We then evaluate Transformer-based models to generate game commentaries from structured data records, while examining the impact of the pre-trained language models. Evaluation results on our dataset revealed the challenges of this novel task. We will release our dataset to boost potential research in the data-to-text generation community.", }
Esports, competitive video gaming, has become one of the most important sporting events. Although esports play logs have been accumulated, only a small portion of them is accompanied by text commentaries that help the audience retrieve and understand the plays. In this study, we therefore introduce the task of generating game commentaries from esports data records. We first build large-scale esports data-to-text datasets that pair structured data and commentaries from a popular esports game, League of Legends. We then evaluate Transformer-based models that generate game commentaries from structured data records, while examining the impact of pre-trained language models. Evaluation results on our dataset revealed the challenges of this novel task. We will release our dataset to boost potential research in the data-to-text generation community.
[ "Wang, Zihan", "Yoshinaga, Naoki" ]
Commentary Generation from Data Records of Multiplayer Strategy Esports Game
naacl-srw.28
Poster
2212.10935
[ "https://github.com/arnozwang/esports-data-to-text" ]
-1
-1
-1
-1
0
[]
[]
[]
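Data-to-text setups like srw.28's typically linearize structured records into a flat string for a seq2seq model. Here is a sketch with invented field names; the paper's actual League of Legends schema will differ.

```python
# Linearize a structured play record into a source string for generation.
def linearize(event: dict) -> str:
    return " ".join(f"{key}[{value}]" for key, value in event.items())

event = {"time": "12:40", "actor": "midlaner", "action": "kill", "target": "jungler"}
source = linearize(event)  # "time[12:40] actor[midlaner] action[kill] target[jungler]"
# A seq2seq model is then trained to map `source` to commentary such as
# "At 12:40 the mid laner picks up a kill on the enemy jungler."
```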
https://aclanthology.org/2024.naacl-srw.29.bib
https://aclanthology.org/2024.naacl-srw.29/
@inproceedings{van-der-meer-2024-facilitating, title = "Facilitating Opinion Diversity through Hybrid {NLP} Approaches", author = "Van Der Meer, Michiel", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.29", doi = "10.18653/v1/2024.naacl-srw.29", pages = "272--284", abstract = "Modern democracies face a critical issue of declining citizen participation in decision-making. Online discussion forums are an important avenue for enhancing citizen participation. This thesis proposal 1) identifies the challenges involved in facilitating large-scale online discussions with Natural Language Processing (NLP), 2) suggests solutions to these challenges by incorporating hybrid human-AI technologies, and 3) investigates what these technologies can reveal about individual perspectives in online discussions. We propose a three-layered hierarchy for representing perspectives that can be obtained by a mixture of human intelligence and large language models. We illustrate how these representations can draw insights into the diversity of perspectives and allow us to investigate interactions in online discussions.", }
Modern democracies face a critical issue of declining citizen participation in decision-making. Online discussion forums are an important avenue for enhancing citizen participation. This thesis proposal 1) identifies the challenges involved in facilitating large-scale online discussions with Natural Language Processing (NLP), 2) suggests solutions to these challenges by incorporating hybrid human-AI technologies, and 3) investigates what these technologies can reveal about individual perspectives in online discussions. We propose a three-layered hierarchy for representing perspectives that can be obtained by a mixture of human intelligence and large language models. We illustrate how these representations can draw insights into the diversity of perspectives and allow us to investigate interactions in online discussions.
[ "Van Der Meer, Michiel" ]
Facilitating Opinion Diversity through Hybrid NLP Approaches
naacl-srw.29
Poster
2405.09439
[ "" ]
https://huggingface.co/papers/2405.09439
1
0
0
1
1
[]
[]
[]
https://aclanthology.org/2024.naacl-srw.30.bib
https://aclanthology.org/2024.naacl-srw.30/
@inproceedings{srinivasagan-ostermann-2024-hybridbert, title = "{H}ybrid{BERT} - Making {BERT} Pretraining More Efficient Through Hybrid Mixture of Attention Mechanisms", author = "Srinivasagan, Gokul and Ostermann, Simon", editor = "Cao, Yang (Trista) and Papadimitriou, Isabel and Ovalle, Anaelia and Zampieri, Marcos and Ferraro, Francis and Swayamdipta, Swabha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 4: Student Research Workshop)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-srw.30", doi = "10.18653/v1/2024.naacl-srw.30", pages = "285--291", abstract = "Pretrained transformer-based language models have produced state-of-the-art performance in most natural language understanding tasks. These models undergo two stages of training: pretraining on a huge corpus of data and fine-tuning on a specific downstream task. The pretraining phase is extremely compute-intensive and requires several high-performance computing devices like GPUs and several days or even months of training, but it is crucial for the model to capture global knowledge and also has a significant impact on the fine-tuning task. This is a major roadblock for researchers without access to sophisticated computing resources. To overcome this challenge, we propose two novel hybrid architectures called HybridBERT (HBERT), which combine self-attention and additive attention mechanisms together with sub-layer normalization. We introduce a computing budget to the pretraining phase, limiting the training time and usage to a single GPU. We show that HBERT attains twice the pretraining accuracy of a vanilla-BERT baseline. We also evaluate our proposed models on two downstream tasks, where we outperform BERT-base while accelerating inference. Moreover, we study the effect of weight initialization with a limited pretraining budget. The code and models are publicly available at: www.github.com/gokulsg/HBERT/.", }
Pretrained transformer-based language models have produced state-of-the-art performance in most natural language understanding tasks. These models undergo two stages of training: pretraining on a huge corpus of data and fine-tuning on a specific downstream task. The pretraining phase is extremely compute-intensive and requires several high-performance computing devices like GPUs and several days or even months of training, but it is crucial for the model to capture global knowledge and also has a significant impact on the fine-tuning task. This is a major roadblock for researchers without access to sophisticated computing resources. To overcome this challenge, we propose two novel hybrid architectures called HybridBERT (HBERT), which combine self-attention and additive attention mechanisms together with sub-layer normalization. We introduce a computing budget to the pretraining phase, limiting the training time and usage to a single GPU. We show that HBERT attains twice the pretraining accuracy of a vanilla-BERT baseline. We also evaluate our proposed models on two downstream tasks, where we outperform BERT-base while accelerating inference. Moreover, we study the effect of weight initialization with a limited pretraining budget. The code and models are publicly available at: www.github.com/gokulsg/HBERT/.
[ "Srinivasagan, Gokul", "Ostermann, Simon" ]
HybridBERT - Making BERT Pretraining More Efficient Through Hybrid Mixture of Attention Mechanisms
naacl-srw.30
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
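HybridBERT above mixes standard self-attention with additive attention. As a reference point, here is the textbook additive (Bahdanau-style) attention in PyTorch; it is a generic formulation, not the paper's exact layer.

```python
# Additive attention: scores come from a small MLP, not a dot product.
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.w_q = nn.Linear(dim, dim)
        self.w_k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, 1)

    def forward(self, q, k, val):
        # scores[b, i, j] = v^T tanh(W_q q_i + W_k k_j)
        scores = self.v(torch.tanh(self.w_q(q).unsqueeze(2) + self.w_k(k).unsqueeze(1)))
        weights = torch.softmax(scores.squeeze(-1), dim=-1)
        return weights @ val

attn = AdditiveAttention(64)
x = torch.randn(2, 16, 64)  # (batch, sequence, hidden)
out = attn(x, x, x)         # self-attention over the sequence: (2, 16, 64)
```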
https://aclanthology.org/2024.naacl-tutorials.1.bib
https://aclanthology.org/2024.naacl-tutorials.1/
@inproceedings{uchendu-etal-2024-catch, title = "Catch Me If You {GPT}: Tutorial on Deepfake Texts", author = "Uchendu, Adaku and Venkatraman, Saranya and Le, Thai and Lee, Dongwon", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.1", doi = "10.18653/v1/2024.naacl-tutorials.1", pages = "1--7", abstract = "In recent years, Natural Language Generation (NLG) techniques have greatly advanced, especially in the realm of Large Language Models (LLMs). With respect to the quality of generated texts, it is no longer trivial to tell the difference between human-written and LLM-generated texts (i.e., deepfake texts). While this is a celebratory feat for NLG, it poses new security risks (e.g., the generation of misinformation). To combat this novel challenge, researchers have developed diverse techniques to detect deepfake texts. While this niche field of deepfake text detection is growing, the field of NLG is growing at a much faster rate, thus making it difficult to understand the complex interplay between state-of-the-art NLG methods and the detectability of their generated texts. To understand such interplay, two new computational problems emerge: (1) Deepfake Text Attribution (DTA) and (2) Deepfake Text Obfuscation (DTO) problems, where the DTA problem is concerned with attributing the authorship of a given text to one of k NLG methods, while the DTO problem is to evade the authorship of a given text by modifying parts of the text. In this cutting-edge tutorial, therefore, we call attention to the serious security risk both emerging problems pose and give a comprehensive review of recent literature on the detection and obfuscation of deepfake text authorships. Our tutorial will be 3 hours long with a mix of lecture and hands-on examples for interactive audience participation. You can find our tutorial materials here: https://tinyurl.com/naacl24-tutorial.", }
In recent years, Natural Language Generation (NLG) techniques have greatly advanced, especially in the realm of Large Language Models (LLMs). With respect to the quality of generated texts, it is no longer trivial to tell the difference between human-written and LLM-generated texts (i.e., deepfake texts). While this is a celebratory feat for NLG, it poses new security risks (e.g., the generation of misinformation). To combat this novel challenge, researchers have developed diverse techniques to detect deepfake texts. While this niche field of deepfake text detection is growing, the field of NLG is growing at a much faster rate, thus making it difficult to understand the complex interplay between state-of-the-art NLG methods and the detectability of their generated texts. To understand such interplay, two new computational problems emerge: (1) Deepfake Text Attribution (DTA) and (2) Deepfake Text Obfuscation (DTO) problems, where the DTA problem is concerned with attributing the authorship of a given text to one of k NLG methods, while the DTO problem is to evade the authorship of a given text by modifying parts of the text. In this cutting-edge tutorial, therefore, we call attention to the serious security risk both emerging problems pose and give a comprehensive review of recent literature on the detection and obfuscation of deepfake text authorships. Our tutorial will be 3 hours long with a mix of lecture and hands-on examples for interactive audience participation. You can find our tutorial materials here: https://tinyurl.com/naacl24-tutorial.
[ "Uchendu, Adaku", "Venkatraman, Saranya", "Le, Thai", "Lee, Dongwon" ]
Catch Me If You GPT: Tutorial on Deepfake Texts
naacl-tutorials.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
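The tutorial's Deepfake Text Attribution (DTA) problem is k-way authorship classification over NLG methods. Here is a minimal scikit-learn baseline with placeholder data; real work would train on corpora of texts with known generators.

```python
# k-way attribution baseline: tf-idf features + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "I walked to the store and it started raining.",    # placeholder examples
    "As an AI language model, I can summarize this.",
    "The results demonstrate robust gains across tasks.",
    "We had dinner at grandma's and talked for hours.",
]
labels = ["human", "gpt", "llama", "human"]  # k candidate generators

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(texts, labels)
print(clf.predict(["Some unseen text to attribute."]))
```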
https://aclanthology.org/2024.naacl-tutorials.2.bib
https://aclanthology.org/2024.naacl-tutorials.2/
@inproceedings{chen-etal-2024-combating, title = "Combating Security and Privacy Issues in the Era of Large Language Models", author = "Chen, Muhao and Xiao, Chaowei and Sun, Huan and Li, Lei and Derczynski, Leon and Anandkumar, Anima and Wang, Fei", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.2", doi = "10.18653/v1/2024.naacl-tutorials.2", pages = "8--18", abstract = "This tutorial seeks to provide a systematic summary of risks and vulnerabilities in security, privacy and copyright aspects of large language models (LLMs), and most recent solutions to address those issues. We will discuss a broad thread of studies that try to answer the following questions: (i) How do we unravel the adversarial threats that attackers may leverage in the training time of LLMs, especially those that may exist in recent paradigms of instruction tuning and RLHF processes? (ii) How do we guard the LLMs against malicious attacks in inference time, such as attacks based on backdoors and jailbreaking? (iii) How do we ensure privacy protection of user information and LLM decisions for Language Model as-a-Service (LMaaS)? (iv) How do we protect the copyright of an LLM? (v) How do we detect and prevent cases where personal or confidential information is leaked during LLM training? (vi) How should we make policies to control against improper usage of LLM-generated content? In addition, will conclude the discussions by outlining emergent challenges in security, privacy and reliability of LLMs that deserve timely investigation by the community", }
This tutorial seeks to provide a systematic summary of risks and vulnerabilities in the security, privacy, and copyright aspects of large language models (LLMs), and the most recent solutions to address those issues. We will discuss a broad thread of studies that try to answer the following questions: (i) How do we unravel the adversarial threats that attackers may leverage at training time of LLMs, especially those that may exist in recent paradigms of instruction tuning and RLHF processes? (ii) How do we guard LLMs against malicious attacks at inference time, such as attacks based on backdoors and jailbreaking? (iii) How do we ensure privacy protection of user information and LLM decisions for Language Model as-a-Service (LMaaS)? (iv) How do we protect the copyright of an LLM? (v) How do we detect and prevent cases where personal or confidential information is leaked during LLM training? (vi) How should we make policies to control against improper usage of LLM-generated content? In addition, we will conclude the discussion by outlining emergent challenges in the security, privacy, and reliability of LLMs that deserve timely investigation by the community.
[ "Chen, Muhao", "Xiao, Chaowei", "Sun, Huan", "Li, Lei", "Derczynski, Leon", "Anandkumar, Anima", "Wang, Fei" ]
Combating Security and Privacy Issues in the Era of Large Language Models
naacl-tutorials.2
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-tutorials.3.bib
https://aclanthology.org/2024.naacl-tutorials.3/
@inproceedings{zhu-etal-2024-explanation, title = "Explanation in the Era of Large Language Models", author = "Zhu, Zining and Chen, Hanjie and Ye, Xi and Lyu, Qing and Tan, Chenhao and Marasovic, Ana and Wiegreffe, Sarah", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.3", doi = "10.18653/v1/2024.naacl-tutorials.3", pages = "19--25", abstract = "Explanation has long been a part of communications, where humans use language to elucidate each other and transmit information about the mechanisms of events. There have been numerous works that study the structures of the explanations and their utility to humans. At the same time, explanation relates to a collection of research directions in natural language processing (and more broadly, computer vision and machine learning) where researchers develop computational approaches to explain the (usually deep neural network) models. Explanation has received rising attention. In recent months, the advance of large language models (LLMs) provides unprecedented opportunities to leverage their reasoning abilities, both as tools to produce explanations and as the subjects of explanation analysis. On the other hand, the sheer sizes and the opaque nature of LLMs introduce challenges to the explanation methods. In this tutorial, we intend to review these opportunities and challenges of explanations in the era of LLMs, connect lines of research previously studied by different research groups, and hopefully spark thoughts of new research directions", }
Explanation has long been a part of communication, where humans use language to explain things to one another and transmit information about the mechanisms of events. There have been numerous works that study the structures of explanations and their utility to humans. At the same time, explanation relates to a collection of research directions in natural language processing (and more broadly, computer vision and machine learning) where researchers develop computational approaches to explain the (usually deep neural network) models. Explanation has received rising attention. In recent months, the advance of large language models (LLMs) provides unprecedented opportunities to leverage their reasoning abilities, both as tools to produce explanations and as the subjects of explanation analysis. On the other hand, the sheer size and opaque nature of LLMs introduce challenges for explanation methods. In this tutorial, we intend to review these opportunities and challenges of explanations in the era of LLMs, connect lines of research previously studied by different research groups, and hopefully spark thoughts on new research directions.
[ "Zhu, Zining", "Chen, Hanjie", "Ye, Xi", "Lyu, Qing", "Tan, Chenhao", "Marasovic, Ana", "Wiegreffe, Sarah" ]
Explanation in the Era of Large Language Models
naacl-tutorials.3
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-tutorials.4.bib
https://aclanthology.org/2024.naacl-tutorials.4/
@inproceedings{ganesan-etal-2024-text, title = "From Text to Context: Contextualizing Language with Humans, Groups, and Communities for Socially Aware {NLP}", author = "Ganesan, Adithya V and Mangalik, Siddharth and Varadarajan, Vasudha and Soni, Nikita and Juhng, Swanie and Sedoc, Jo{\~a}o and Schwartz, H. Andrew and Giorgi, Salvatore and Boyd, Ryan L", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.4", doi = "10.18653/v1/2024.naacl-tutorials.4", pages = "26--33", abstract = "Aimed at the NLP researchers or practitioners who would like to integrate human - individual, group, or societal level factors into their analyses, this tutorial will cover recent techniques and libraries for doing so at each level of analysis. Starting with human-centered techniques that provide benefit to traditional document- or word-level NLP tasks (Garten et al., 2019; Lynn et al., 2017), we undertake a thorough exploration of critical human-level aspects as they pertain to NLP, gradually moving up to higher levels of analysis: individual persons, individual with agent (chat/dialogue), groups of people, and finally communities or societies.", }
Aimed at NLP researchers or practitioners who would like to integrate human factors (at the individual, group, or societal level) into their analyses, this tutorial will cover recent techniques and libraries for doing so at each level of analysis. Starting with human-centered techniques that provide benefit to traditional document- or word-level NLP tasks (Garten et al., 2019; Lynn et al., 2017), we undertake a thorough exploration of critical human-level aspects as they pertain to NLP, gradually moving up to higher levels of analysis: individual persons, individuals with agents (chat/dialogue), groups of people, and finally communities or societies.
[ "Ganesan, Adithya V", "Mangalik, Siddharth", "Varadarajan, Vasudha", "Soni, Nikita", "Juhng, Swanie", "Sedoc, João", "Schwartz, H. Andrew", "Giorgi, Salvatore", "Boyd, Ryan L" ]
From Text to Context: Contextualizing Language with Humans, Groups, and Communities for Socially Aware NLP
naacl-tutorials.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-tutorials.5.bib
https://aclanthology.org/2024.naacl-tutorials.5/
@inproceedings{yang-etal-2024-human, title = "Human-{AI} Interaction in the Age of {LLM}s", author = "Yang, Diyi and Wu, Sherry Tongshuang and Hearst, Marti A.", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.5", doi = "10.18653/v1/2024.naacl-tutorials.5", pages = "34--38", abstract = "Recently, the development of Large Language Models (LLMs) has revolutionized the capabilities of AI systems. These models possess the ability to comprehend and generate human-like text, enabling them to engage in sophisticated conversations, generate content, and even perform tasks that once seemed beyond the reach of machines. As a result, the way we interact with technology and each other {---} an established field called {``}Human-AI Interaction{''} and have been studied for over a decade {---} is undergoing a profound transformation. This tutorial will provide an overview of the interaction between humans and LLMs, exploring the challenges, opportunities, and ethical considerations that arise in this dynamic landscape. It will start with a review of the types of AI models we interact with, and a walkthrough of the core concepts in Human-AI Interaction. We will then emphasize the emerging topics shared between HCI and NLP communities in light of LLMs.", }
Recently, the development of Large Language Models (LLMs) has revolutionized the capabilities of AI systems. These models possess the ability to comprehend and generate human-like text, enabling them to engage in sophisticated conversations, generate content, and even perform tasks that once seemed beyond the reach of machines. As a result, the way we interact with technology and with each other, an established field called "Human-AI Interaction" that has been studied for over a decade, is undergoing a profound transformation. This tutorial will provide an overview of the interaction between humans and LLMs, exploring the challenges, opportunities, and ethical considerations that arise in this dynamic landscape. It will start with a review of the types of AI models we interact with, and a walkthrough of the core concepts in Human-AI Interaction. We will then emphasize the emerging topics shared between the HCI and NLP communities in light of LLMs.
[ "Yang, Diyi", "Wu, Sherry Tongshuang", "Hearst, Marti A." ]
Human-AI Interaction in the Age of LLMs
naacl-tutorials.5
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-tutorials.6.bib
https://aclanthology.org/2024.naacl-tutorials.6/
@inproceedings{kordjamshidi-etal-2024-spatial, title = "Spatial and Temporal Language Understanding: Representation, Reasoning, and Grounding", author = "Kordjamshidi, Parisa and Ning, Qiang and Pustejovsky, James and Moens, Marie-Francine", editor = "Zhang, Rui and Schneider, Nathan and Chaturvedi, Snigdha", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 5: Tutorial Abstracts)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-tutorials.6", doi = "10.18653/v1/2024.naacl-tutorials.6", pages = "39--46", abstract = "This tutorial provides an overview of the cutting edge research on spatial and temporal language understanding. We also cover some essential background material from various subdisciplines to this topic, which we believe will enrich the CL community{'}s appreciation of the complexity of spatiotemporal reasoning.", }
This tutorial provides an overview of cutting-edge research on spatial and temporal language understanding. We also cover essential background material from various subdisciplines related to this topic, which we believe will enrich the CL community's appreciation of the complexity of spatiotemporal reasoning.
[ "Kordjamshidi, Parisa", "Ning, Qiang", "Pustejovsky, James", "Moens, Marie-Francine" ]
Spatial and Temporal Language Understanding: Representation, Reasoning, and Grounding
naacl-tutorials.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.1.bib
https://aclanthology.org/2024.naacl-industry.1/
@inproceedings{ma-etal-2024-hpipe, title = "{HP}ipe: Large Language Model Pipeline Parallelism for Long Context on Heterogeneous Cost-effective Devices", author = "Ma, Ruilong and Yang, Xiang and Wang, Jingyu and Qi, Qi and Sun, Haifeng and Wang, Jing and Zhuang, Zirui and Liao, Jianxin", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.1", doi = "10.18653/v1/2024.naacl-industry.1", pages = "1--9", abstract = "Micro-enterprises and individual developers emerge analysis demands for long sequence with powerful Large Language Models (LLMs). They try to deploy the LLMs at local, but only possess various commodity devices and the unreliable interconnection between devices. Existing parallel techniques do not lead to the same effectiveness in limited environment. The heterogeneity of devices, coupled with their limited capacity and expensive communication, brings challenges to private deployment for maximized utilization of available devices while masking latency. Hence, we introduce HPipe, a pipeline inference framework that successfully mitigates LLMs from high-performance clusters to heterogeneous commodity devices. By ensuring a balanced distribution of workloads, HPipe facilitates the parallel execution of LLMs through pipelining the sequences on the token dimension. The evaluation conducted on LLaMA-7B and GPT3-2B demonstrates that HPipe holds the potential for context analysis on LLM with heterogeneity devices, achieving an impressive speedup in latency and throughput up to 2.28 times.", }
Micro-enterprises and individual developers increasingly need to analyze long sequences with powerful Large Language Models (LLMs). They attempt to deploy LLMs locally but possess only assorted commodity devices with unreliable interconnections between them. Existing parallel techniques are far less effective in such constrained environments. The heterogeneity of devices, coupled with their limited capacity and expensive communication, makes it challenging for private deployments to maximize utilization of the available devices while masking latency. Hence, we introduce HPipe, a pipeline inference framework that successfully migrates LLMs from high-performance clusters to heterogeneous commodity devices. By ensuring a balanced distribution of workloads, HPipe facilitates the parallel execution of LLMs by pipelining sequences along the token dimension. The evaluation conducted on LLaMA-7B and GPT3-2B demonstrates that HPipe holds potential for long-context analysis with LLMs on heterogeneous devices, achieving an impressive speedup in latency and throughput of up to 2.28 times.
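The core idea, pipelining one long sequence along the token dimension across model stages, can be illustrated with a minimal sketch. The toy MLP stages, chunk size, and sequential single-process simulation below are illustrative assumptions, not HPipe's actual implementation; a real LLM pipeline must also stream KV caches between chunks.

```python
import torch
import torch.nn as nn

class Stage(nn.Module):
    """One pipeline stage: a slice of the model's layers on one device."""
    def __init__(self, dim: int, n_layers: int):
        super().__init__()
        self.layers = nn.Sequential(*(nn.Linear(dim, dim) for _ in range(n_layers)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.layers(x)

def pipelined_forward(stages, x, chunk_tokens: int):
    """With S stages and C chunks, run C + S - 1 ticks; at each tick stage s
    consumes the chunk its predecessor finished on the previous tick, so
    different stages work on different token chunks concurrently."""
    chunks = list(x.split(chunk_tokens, dim=1))   # [B, T, D] -> token chunks
    in_flight = [None] * len(stages)              # output held by each stage
    outputs = []
    for tick in range(len(chunks) + len(stages) - 1):
        for s in reversed(range(len(stages))):    # downstream consumes first
            if s == 0:
                src = chunks[tick] if tick < len(chunks) else None
            else:
                src, in_flight[s - 1] = in_flight[s - 1], None
            if src is not None:
                in_flight[s] = stages[s](src)
        if in_flight[-1] is not None:             # last stage finished a chunk
            outputs.append(in_flight[-1])
            in_flight[-1] = None
    return torch.cat(outputs, dim=1)

stages = [Stage(dim=64, n_layers=2) for _ in range(3)]  # e.g., 3 devices
y = pipelined_forward(stages, torch.randn(1, 96, 64), chunk_tokens=32)
print(y.shape)  # torch.Size([1, 96, 64])
```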
[ "Ma, Ruilong", "Yang, Xiang", "Wang, Jingyu", "Qi, Qi", "Sun, Haifeng", "Wang, Jing", "Zhuang, Zirui", "Liao, Jianxin" ]
HPipe: Large Language Model Pipeline Parallelism for Long Context on Heterogeneous Cost-effective Devices
naacl-industry.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.2.bib
https://aclanthology.org/2024.naacl-industry.2/
@inproceedings{ou-etal-2024-lossless, title = "Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding", author = "Ou, Jie and Chen, Yueming and Tian, Prof.", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.2", doi = "10.18653/v1/2024.naacl-industry.2", pages = "10--22", abstract = "While Large Language Models (LLMs) have shown remarkable abilities, they are hindered by significant resource consumption and considerable latency due to autoregressive processing. In this study, we introduce Adaptive N-gram Parallel Decoding (ANPD), an innovative and lossless approach that accelerates inference by allowing the simultaneous generation of multiple tokens. ANPD incorporates a two-stage approach: it begins with a rapid drafting phase that employs an N-gram module, which adapts based on the current interactive context, followed by a verification phase, during which the original LLM assesses and confirms the proposed tokens. Consequently, ANPD preserves the integrity of the LLM{'}s original output while enhancing processing speed. We further leverage a multi-level architecture for the N-gram module to enhance the precision of the initial draft, consequently reducing inference latency. ANPD eliminates the need for retraining or extra GPU memory, making it an efficient and plug-and-play enhancement. In our experiments, models such as LLaMA and its fine-tuned variants have shown speed improvements up to 3.67x, validating the effectiveness of our proposed ANPD.", }
While Large Language Models (LLMs) have shown remarkable abilities, they are hindered by significant resource consumption and considerable latency due to autoregressive processing. In this study, we introduce Adaptive N-gram Parallel Decoding (ANPD), an innovative and lossless approach that accelerates inference by allowing the simultaneous generation of multiple tokens. ANPD incorporates a two-stage approach: it begins with a rapid drafting phase that employs an N-gram module, which adapts based on the current interactive context, followed by a verification phase, during which the original LLM assesses and confirms the proposed tokens. Consequently, ANPD preserves the integrity of the LLM's original output while enhancing processing speed. We further leverage a multi-level architecture for the N-gram module to enhance the precision of the initial draft, consequently reducing inference latency. ANPD eliminates the need for retraining or extra GPU memory, making it an efficient and plug-and-play enhancement. In our experiments, models such as LLaMA and its fine-tuned variants have shown speed improvements of up to 3.67x, validating the effectiveness of our proposed ANPD.
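A minimal sketch of the draft-then-verify loop behind this style of N-gram-assisted decoding follows. The bigram drafter, the token-by-token stub "LLM", and the acceptance rule are illustrative assumptions, not the paper's code; the real system verifies all drafted tokens in a single batched forward pass.

```python
from collections import defaultdict

def bigram_table(tokens):
    table = defaultdict(dict)
    for a, b in zip(tokens, tokens[1:]):
        table[a][b] = table[a].get(b, 0) + 1
    return table

def draft(table, last_token, k):
    """Draft up to k tokens by greedily following the bigram statistics."""
    out, tok = [], last_token
    for _ in range(k):
        nxt = table.get(tok)
        if not nxt:
            break
        tok = max(nxt, key=nxt.get)
        out.append(tok)
    return out

def generate(llm_next, prompt, max_new, k=4):
    """Lossless loop: drafted tokens are kept only while they match the
    LLM's own greedy choice, so the final output is unchanged."""
    seq = list(prompt)
    while len(seq) - len(prompt) < max_new:
        drafted = draft(bigram_table(seq), seq[-1], k)  # adapts to context
        for d in drafted:
            if llm_next(seq) == d:
                seq.append(d)          # accepted draft token
            else:
                break                  # first mismatch stops acceptance
        seq.append(llm_next(seq))      # always emit one verified LLM token
    return seq[: len(prompt) + max_new]

# Toy deterministic "LLM" that cycles a fixed pattern, so drafts often hit.
pattern = "the cat sat on the mat and".split()
llm_next = lambda seq: pattern[len(seq) % len(pattern)]
print(" ".join(generate(llm_next, ["the"], max_new=10)))
```

Because acceptance requires exact agreement with the model's greedy choice, the speedup varies with how predictable the context is, but the output is identical to plain decoding.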
[ "Ou, Jie", "Chen, Yueming", "Tian, Prof." ]
Lossless Acceleration of Large Language Model via Adaptive N-gram Parallel Decoding
naacl-industry.2
Poster
2404.08698
[ "https://github.com/oujieww/anpd" ]
https://huggingface.co/papers/2404.08698
1
0
2
3
1
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.3.bib
https://aclanthology.org/2024.naacl-industry.3/
@inproceedings{kim-etal-2024-solar, title = "{SOLAR} 10.7{B}: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling", author = "Kim, Sanghoon and Kim, Dahyun and Park, Chanjun and Lee, Wonsung and Song, Wonho and Kim, Yunsu and Kim, Hyeonwoo and Kim, Yungi and Lee, Hyeonju and Kim, Jihoo and Ahn, Changbae and Yang, Seonghoon and Lee, Sukyung and Park, Hyunbyung and Gim, Gyoungjin and Cha, Mikyoung and Lee, Hwalsuk and Kim, Sunghun", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.3", doi = "10.18653/v1/2024.naacl-industry.3", pages = "23--35", abstract = "We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. Inspired by recent efforts to efficiently up-scale LLMs, we present a method for scaling LLMs called depth up-scaling (DUS), which encompasses depthwise scaling and continued pretraining. In contrast to other LLM up-scaling methods that use mixture-of-experts, DUS does not require complex changes to train and inference efficiently. We show experimentally that DUS is simple yet effective in scaling up high-performance LLMs from small ones. Building on the DUS model, we additionally present SOLAR 10.7B-Instruct, a variant fine-tuned for instruction-following capabilities, surpassing Mixtral-8x7B-Instruct. SOLAR 10.7B is publicly available under the Apache 2.0 license, promoting broad access and application in the LLM field.", }
We introduce SOLAR 10.7B, a large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. Inspired by recent efforts to efficiently up-scale LLMs, we present a method for scaling LLMs called depth up-scaling (DUS), which encompasses depthwise scaling and continued pretraining. In contrast to other LLM up-scaling methods that use mixture-of-experts, DUS does not require complex changes to train and run inference efficiently. We show experimentally that DUS is simple yet effective in scaling up high-performance LLMs from small ones. Building on the DUS model, we additionally present SOLAR 10.7B-Instruct, a variant fine-tuned for instruction-following capabilities, surpassing Mixtral-8x7B-Instruct. SOLAR 10.7B is publicly available under the Apache 2.0 license, promoting broad access and application in the LLM field.
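The depthwise-scaling step can be sketched in a few lines: duplicate the base layer stack, drop the final m layers of the first copy and the initial m layers of the second, and concatenate. The toy blocks and the n=32, m=8 -> 48-layer arithmetic below follow the recipe described for SOLAR, but treat the exact numbers and module shapes as assumptions of this sketch.

```python
import copy
import torch.nn as nn

def depth_upscale(layers: nn.ModuleList, m: int) -> nn.ModuleList:
    """Depthwise scaling: overlap-duplicate a stack of n layers into 2*(n-m)."""
    n = len(layers)
    first = [copy.deepcopy(l) for l in layers[: n - m]]   # layers 0 .. n-m-1
    second = [copy.deepcopy(l) for l in layers[m:]]       # layers m .. n-1
    return nn.ModuleList(first + second)

base = nn.ModuleList(nn.Linear(16, 16) for _ in range(32))  # stand-in blocks
scaled = depth_upscale(base, m=8)
print(len(base), "->", len(scaled))  # 32 -> 48
```

Continued pretraining then smooths the discontinuity at the seam where layer n-m-1 feeds into the duplicated layer m.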
[ "Kim, Sanghoon", "Kim, Dahyun", "Park, Chanjun", "Lee, Wonsung", "Song, Wonho", "Kim, Yunsu", "Kim, Hyeonwoo", "Kim, Yungi", "Lee, Hyeonju", "Kim, Jihoo", "Ahn, Changbae", "Yang, Seonghoon", "Lee, Sukyung", "Park, Hyunbyung", "Gim, Gyoungjin", "Cha, Mikyoung", "Lee, Hwalsuk", "Kim, Sunghun" ]
SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
naacl-industry.3
Poster
2312.15166
[ "" ]
https://huggingface.co/papers/2312.15166
1
56
9
18
1
[ "upstage/SOLAR-10.7B-Instruct-v1.0", "upstage/SOLAR-10.7B-v1.0", "LDCC/LDCC-SOLAR-10.7B", "CallComply/SOLAR-10.7B-Instruct-v1.0-128k", "Joseph717171/Mistral-10.7B-v0.2", "Joseph717171/Hermes-2-Pro-Mistral-10.7B", "daekeun-ml/phi-2-upscaled-4B-instruct-v0.1", "macadeliccc/SOLAR-math-2x10.7b-v0.2", "cgus/SOLAR-10.7B-Instruct-v1.0-128k-GGUF", "Joseph717171/Noromaid-10.7B-0.4-DPO", "PracticeLLM/Twice-KoSOLAR-16.1B-test", "PracticeLLM/Twice-KoSOLAR-16.1B-instruct-test", "macadeliccc/SOLAR-polyglot-4x10.7b", "cgus/SOLAR-10.7B-Instruct-v1.0-128k-exl2", "Joseph717171/multi_verse_model-10.7B", "Joseph717171/Mistral-12.25B-v0.2", "Joseph717171/Mistral-12.25B-Instruct-v0.2", "Joseph717171/ANIMA-Phi-Neptune-Mistral-10.7B", "giannisan/Mistral-10.7B-Instruct-v0.3-depth-upscaling", "RichardErkhov/giannisan_-_Mistral-10.7B-Instruct-v0.3-depth-upscaling-gguf", "dddsaty/SOLAR-Instruct-ko-Adapter-Attach", "dddsaty/SOLAR_Merge_Adapter_DPO_Orca", "nayohan/corningQA-solar-10.7b-v1.0", "hyeogi/Yi-9b-v1", "macadeliccc/SOLAR-10.7x2_19B", "mohomin123/M-DIE-M-10.7B", "macadeliccc/SOLAR-math-2x10.7b", "macadeliccc/Orca-SOLAR-4x10.7b", "davzoku/frankencria-llama2-11b-v1.3-m.1", "davzoku/frankencria-llama2-12.5b-v1.3-m.2", "freewheelin/free-solar-instrunction-v0.2", "RichardErkhov/hyeogi_-_Yi-9b-v1-gguf", "Chahnwoo/SOLAR-10.7B-v1.0-1E-QLoRA-SFT-Test", "freewheelin/free-solar-instrunction-v0.1", "freewheelin/free-solar-instrunction-v0.3", "freewheelin/free-solar-dpo-v0.2", "freewheelin/free-solar-dpo-v0.1", "cgus/SOLAR-10.7B-Instruct-v1.0-128k-iMat-GGUF", "Joseph717171/Cerebrum-1.0-10.7B", "Joseph717171/Genstruct-10.7B", "RichardErkhov/Joseph717171_-_Mistral-12.25B-v0.2-gguf", "Joseph717171/Tess-10.7B-v2.0", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw2.5", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw3.7", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw3.5", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw3", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4.2", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4.4", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw5.5", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw5", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4.6", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw4.8", "blockblockblock/Hermes-2-Pro-Mistral-10.7B-bpw6", "RichardErkhov/upstage_-_SOLAR-10.7B-Instruct-v1.0-gguf", "RichardErkhov/upstage_-_SOLAR-10.7B-Instruct-v1.0-4bits", "RichardErkhov/upstage_-_SOLAR-10.7B-Instruct-v1.0-8bits", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw3", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw3.5", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw2.5", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw3.7", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw4", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw4.2", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw4.4", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw4.8", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw5", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw4.6", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw6", "blockblockblock/Mistral-12.25B-Instruct-v0.2-bpw5.5", "RichardErkhov/upstage_-_SOLAR-10.7B-v1.0-4bits", "RichardErkhov/upstage_-_SOLAR-10.7B-v1.0-8bits", "RichardErkhov/upstage_-_SOLAR-10.7B-v1.0-gguf", "RichardErkhov/PracticeLLM_-_Twice-KoSOLAR-16.1B-instruct-test-gguf", "freewheelin/free-llama3-dpo-v0.2", "Joseph717171/SOLAR-19.2B-Instruct-v1.0", "algograp-Inc/algograpV4", "RichardErkhov/LDCC_-_LDCC-SOLAR-10.7B-4bits", 
"RichardErkhov/LDCC_-_LDCC-SOLAR-10.7B-gguf", "yhavinga/Boreas-10.7B-step1", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-3_0bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-2_2bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-2_5bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-3_5bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-3_75bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-4_0bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-5_0bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-4_25bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-6_0bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-6_5bpw_exl2", "Zoyd/giannisan_Mistral-10.7B-Instruct-v0.3-depth-upscaling-8_0bpw_exl2", "RichardErkhov/davzoku_-_frankencria-llama2-11b-v1.3-m.1-gguf", "yhavinga/Boreas-10.7B-step2", "RichardErkhov/mohomin123_-_M-DIE-M-10.7B-gguf", "RichardErkhov/CallComply_-_SOLAR-10.7B-Instruct-v1.0-128k-gguf", "RichardErkhov/macadeliccc_-_SOLAR-math-2x10.7b-v0.2-gguf", "RichardErkhov/macadeliccc_-_Orca-SOLAR-4x10.7b-gguf", "RichardErkhov/Joseph717171_-_Tess-10.7B-v2.0-gguf", "RichardErkhov/Joseph717171_-_Mistral-10.7B-v0.2-gguf", "choco9966/Llama-2-7b-instruct-tuning" ]
[ "ChuGyouk/OpenOrca_Solar_filtered" ]
[ "Intel/low_bit_open_llm_leaderboard", "eduagarcia/open_pt_llm_leaderboard", "ZhangYuhan/3DGen-Arena", "speakleash/open_pl_llm_leaderboard", "open-llm-leaderboard-old/open_llm_leaderboard", "featherless-ai/try-this-model", "Yeyito/llm_contamination_detector", "meval/multilingual-chatbot-arena-leaderboard", "prometheus-eval/BiGGen-Bench-Leaderboard", "Justinrune/LLaMA-Factory", "macadeliccc/SOLAR-math-MoE-chat", "officialhimanshu595/llama-factory", "Vikhrmodels/small-shlepa-lb", "bardsai/performance-llm-board", "alKoGolik/codellama-CodeLlama-7b-hf", "li-qing/FIRE", "kenken999/fastapi_django_main_live", "colinfitzgerald/upstage-SOLAR-10.7B-Instruct-v1.0", "AhmedMagdy7/upstage-SOLAR-10.7B-v1.0", "tianleliphoebe/visual-arena", "meg/backend", "Ashmal/MobiLlama", "neubla/neubla-llm-evaluation-board", "0x1668/open_llm_leaderboard", "Darok/Featherless-Feud", "leafire/upstage-SOLAR-10.7B-Instruct-v1.0", "pngwn/open_llm_leaderboard-check", "asir0z/open_llm_leaderboard", "0xsboj/upstage-SOLAR-10.7B-Instruct-v1.0", "kbmlcoding/open_llm_leaderboard_free", "aichampions/open_llm_leaderboard", "digistand12at/SOLAR-10.7B-Space", "Adeco/open_llm_leaderboard", "smothiki/open_llm_leaderboard_old", "TarunG2010/upstage-SOLAR-10.7B-Instruct-v1.0", "nubifere/vis-llm-ft", "DragonJay/macadeliccc-SOLAR-math-2x10.7b-v0.2", "jhchoi8984/LDCC-LDCC-SOLAR-10.7B", "Mahyar/upstage-SOLAR-10.7B-Instruct-v1.0", "juy4ng/LDCC-LDCC-SOLAR-10.7B", "PeepDaSlan9/B2BMGMT_upstage-SOLAR-10.7B-Instruct-v1.0", "kutsoz/upstage-SOLAR-10.7B-Instruct-v1.0", "JustMe4Real/upstage-SOLAR-10.7B-Instruct-v1.0", "azaelriday/upstage-SOLAR-10.7B-Instruct-v1.0", "nicholas-miklaucic/upstage-SOLAR-10.7B-Instruct-v1.0", "euisL/upstage-SOLAR-10.7B-Instruct-v1.0", "AhmedMagdy7/upstage-SOLAR-10.7B-Instruct-v1.0", "Bjsilvaboi/upstage-SOLAR-10.7B-Instruct-v1.0", "thobuiq/mistral_8-7b", "0x7o/SOLAR-10.7B-Instruct-v1.0", "joaopaulopresa/workshop_llm_ufg_chatbot", "alKoGolik/asd", "dbasu/multilingual-chatbot-arena-leaderboard", "Bofeee5675/FIRE", "evelyn-lo/evelyn", "Xhaheen/AI_safety_testing", "zjasper666/bf16_vs_fp8" ]
https://aclanthology.org/2024.naacl-industry.4.bib
https://aclanthology.org/2024.naacl-industry.4/
@inproceedings{li-etal-2024-uinav, title = "{UIN}av: A Practical Approach to Train On-Device Automation Agents", author = "Li, Wei and Hsu, Fu-Lin and Bishop, William and Campbell-Ajala, Folawiyo and Lin, Max and Riva, Oriana", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.4", doi = "10.18653/v1/2024.naacl-industry.4", pages = "36--51", abstract = "Automation systems that can autonomously drive application user interfaces to complete user tasks are of great benefit, especially when users are situationally or permanently impaired. Prior automation systems do not produce generalizable models while AI-based automation agents work reliably only in simple, hand-crafted applications or incur high computation costs. We propose UINav, a demonstration-based approach to train automation agents that fit mobile devices, yet achieving high success rates with modest numbers of demonstrations. To reduce the demonstration overhead, UINav uses a referee model that provides users with immediate feedback on tasks where the agent fails, and automatically augments human demonstrations to increase diversity in training data. Our evaluation shows that with only 10 demonstrations can achieve 70{\%} accuracy, and that with enough demonstrations it can surpass 90{\%} accuracy.", }
Automation systems that can autonomously drive application user interfaces to complete user tasks are of great benefit, especially when users are situationally or permanently impaired. Prior automation systems do not produce generalizable models, while AI-based automation agents work reliably only in simple, hand-crafted applications or incur high computation costs. We propose UINav, a demonstration-based approach to train automation agents that fit mobile devices while achieving high success rates with modest numbers of demonstrations. To reduce the demonstration overhead, UINav uses a referee model that provides users with immediate feedback on tasks where the agent fails, and automatically augments human demonstrations to increase diversity in training data. Our evaluation shows that with only 10 demonstrations UINav can achieve 70% accuracy, and that with enough demonstrations it can surpass 90% accuracy.
[ "Li, Wei", "Hsu, Fu-Lin", "Bishop, William", "Campbell-Ajala, Folawiyo", "Lin, Max", "Riva, Oriana" ]
UINav: A Practical Approach to Train On-Device Automation Agents
naacl-industry.4
Poster
2312.10170
[ "" ]
https://huggingface.co/papers/2312.10170
0
0
0
6
1
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.5.bib
https://aclanthology.org/2024.naacl-industry.5/
@inproceedings{kundu-etal-2024-efficiently, title = "Efficiently Distilling {LLM}s for Edge Applications", author = "Kundu, Achintya and Lim, Yu Chin Fabian and Chew, Aaron and Wynter, Laura and Chong, Penny and Lee, Rhui", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.5", doi = "10.18653/v1/2024.naacl-industry.5", pages = "52--62", abstract = "Supernet training of LLMs is of great interest in industrial applications as it confers the ability to produce a palette of smaller models at constant cost, regardless of the number of models (of different size / latency) produced. We propose a new method called Multistage Low-rank Fine-tuning of Super-transformers (MLFS) for parameter-efficient supernet training. We show that it is possible to obtain high-quality encoder models that are suitable for commercial edge applications, and that while decoder-only models are resistant to a comparable degree of compression, decoders can be effectively sliced for a significant reduction in training time.", }
Supernet training of LLMs is of great interest in industrial applications as it confers the ability to produce a palette of smaller models at constant cost, regardless of the number of models (of different size / latency) produced. We propose a new method called Multistage Low-rank Fine-tuning of Super-transformers (MLFS) for parameter-efficient supernet training. We show that it is possible to obtain high-quality encoder models that are suitable for commercial edge applications, and that while decoder-only models are resistant to a comparable degree of compression, decoders can be effectively sliced for a significant reduction in training time.
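One way to picture the "palette of smaller models at constant cost" idea is weight sharing with slicing plus per-model low-rank adapters. The sketch below is a heavily simplified illustration under those assumptions; the module names, slicing scheme, and adapter placement are not MLFS's actual design.

```python
import torch
import torch.nn as nn

class SlicedLowRankLinear(nn.Module):
    """One frozen, shared weight matrix serves sub-models of several widths;
    each width trains only a small low-rank (LoRA-style) correction."""
    def __init__(self, dim_max: int, rank: int = 8):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim_max, dim_max) * 0.02)
        self.weight.requires_grad_(False)          # shared backbone is frozen
        self.A = nn.Parameter(torch.zeros(rank, dim_max))   # zero-init => no-op start
        self.B = nn.Parameter(torch.randn(dim_max, rank) * 0.02)

    def forward(self, x: torch.Tensor, dim: int) -> torch.Tensor:
        w = self.weight[:dim, :dim]                # slice out a smaller sub-model
        lora = self.B[:dim] @ self.A[:, :dim]      # its trainable correction
        return x @ (w + lora).T

layer = SlicedLowRankLinear(dim_max=64)
small = layer(torch.randn(2, 32), dim=32)          # 32-dim sub-model
large = layer(torch.randn(2, 64), dim=64)          # full-width model
print(small.shape, large.shape)
```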
[ "Kundu, Achintya", "Lim, Yu Chin Fabian", "Chew, Aaron", "Wynter, Laura", "Chong, Penny", "Lee, Rhui" ]
Efficiently Distilling LLMs for Edge Applications
naacl-industry.5
Poster
2404.01353
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.6.bib
https://aclanthology.org/2024.naacl-industry.6/
@inproceedings{pei-etal-2024-modeling, title = "Modeling and Detecting Company Risks from News", author = "Pei, Jiaxin and Vadlamannati, Soumya and Huang, Liang-Kang and Preotiuc-Pietro, Daniel and Hua, Xinyu", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.6", doi = "10.18653/v1/2024.naacl-industry.6", pages = "63--72", abstract = "Identifying risks associated with a company is important to investors and the wellbeing of the overall financial markets. In this study, we build a computational framework to automatically extract company risk factors from news articles. Our newly proposed schema comprises seven distinct aspects, such as supply chain, regulations, and competition. We annotate 666 news articles and benchmark various machine learning models. While large language mod- els have achieved remarkable progress in various types of NLP tasks, our experiment shows that zero-shot and few-shot prompting state-of- the-art LLMs (e.g., Llama-2) can only achieve moderate to low performances in identifying risk factors. In contrast, fine-tuning pre-trained language models yields better results on most risk factors. Using this model, we analyze over 277K Bloomberg News articles and demonstrate that identifying risk factors from news could provide extensive insights into the operations of companies and industries.", }
Identifying risks associated with a company is important to investors and the wellbeing of the overall financial markets. In this study, we build a computational framework to automatically extract company risk factors from news articles. Our newly proposed schema comprises seven distinct aspects, such as supply chain, regulations, and competition. We annotate 666 news articles and benchmark various machine learning models. While large language models have achieved remarkable progress in various types of NLP tasks, our experiments show that zero-shot and few-shot prompting of state-of-the-art LLMs (e.g., Llama-2) achieves only moderate to low performance in identifying risk factors. In contrast, fine-tuning pre-trained language models yields better results on most risk factors. Using this model, we analyze over 277K Bloomberg News articles and demonstrate that identifying risk factors from news could provide extensive insights into the operations of companies and industries.
[ "Pei, Jiaxin", "Vadlamannati, Soumya", "Huang, Liang-Kang", "Preotiuc-Pietro, Daniel", "Hua, Xinyu" ]
Modeling and Detecting Company Risks from News
naacl-industry.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.7.bib
https://aclanthology.org/2024.naacl-industry.7/
@inproceedings{tang-etal-2024-multiple, title = "Multiple-Question Multiple-Answer Text-{VQA}", author = "Tang, Peng and Appalaraju, Srikar and Manmatha, R. and Xie, Yusheng and Mahadevan, Vijay", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.7", doi = "10.18653/v1/2024.naacl-industry.7", pages = "73--88", abstract = "We present Multiple-Question Multiple-Answer (MQMA), a novel approach to do text-VQA in encoder-decoder transformer models. To the best of our knowledge, almost all previous approaches for text-VQA process a single question and its associated content to predict a single answer. However, in industry applications, users may come up with multiple questions about a single image. In order to answer multiple questions from the same image, each question and content are fed into the model multiple times. In contrast, our proposed MQMA approach takes multiple questions and content as input at the encoder and predicts multiple answers at the decoder in an auto-regressive manner at the same time. We make several novel architectural modifications to standard encoder-decoder transformers to support MQMA. We also propose a novel MQMA denoising pre-training task which is designed to teach the model to align and delineate multiple questions and content with associated answers. MQMA pre-trained model achieves state-of-the-art results on multiple text-VQA datasets, each with strong baselines. Specifically, on OCR-VQA (+2.5{\%}), TextVQA (+1.4{\%}), ST-VQA (+0.6{\%}), DocVQA (+1.1{\%}) absolute improvements over the previous state-of-the-art approaches.", }
We present Multiple-Question Multiple-Answer (MQMA), a novel approach to text-VQA in encoder-decoder transformer models. To the best of our knowledge, almost all previous approaches for text-VQA process a single question and its associated content to predict a single answer. However, in industry applications, users may come up with multiple questions about a single image. In order to answer multiple questions from the same image, each question and its content are fed into the model multiple times. In contrast, our proposed MQMA approach takes multiple questions and content as input at the encoder and predicts multiple answers at the decoder in an auto-regressive manner at the same time. We make several novel architectural modifications to standard encoder-decoder transformers to support MQMA. We also propose a novel MQMA denoising pre-training task which is designed to teach the model to align and delineate multiple questions and content with associated answers. The MQMA pre-trained model achieves state-of-the-art results on multiple text-VQA datasets, each with strong baselines. Specifically, it obtains absolute improvements over the previous state-of-the-art approaches on OCR-VQA (+2.5%), TextVQA (+1.4%), ST-VQA (+0.6%), and DocVQA (+1.1%).
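The input/output packing idea can be sketched independently of the model: feed all questions plus the shared image content to the encoder once, and parse multiple delimited answers out of a single decoded string. The delimiter scheme and field names below are illustrative assumptions, not MQMA's actual tokenization.

```python
def pack_mqma_input(questions, ocr_tokens):
    """One encoder input carrying every question plus the shared content."""
    q_part = " ".join(f"<q{i}> {q}" for i, q in enumerate(questions, 1))
    return f"{q_part} <content> {' '.join(ocr_tokens)}"

def unpack_mqma_output(decoded: str, n_questions: int):
    """Split one auto-regressively decoded string back into per-question answers."""
    answers = {}
    for i in range(1, n_questions + 1):
        head = f"<a{i}>"
        if head in decoded:
            tail = decoded.split(head, 1)[1]
            answers[i] = tail.split("<a", 1)[0].strip()
    return answers

questions = ["What is the title?", "What is the price?"]
ocr = ["SALE", "Widget", "Deluxe", "$9.99"]
print(pack_mqma_input(questions, ocr))
decoded = "<a1> Widget Deluxe <a2> $9.99"       # pretend decoder output
print(unpack_mqma_output(decoded, len(questions)))
```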
[ "Tang, Peng", "Appalaraju, Srikar", "Manmatha, R.", "Xie, Yusheng", "Mahadevan, Vijay" ]
Multiple-Question Multiple-Answer Text-VQA
naacl-industry.7
Poster
2311.08622
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.8.bib
https://aclanthology.org/2024.naacl-industry.8/
@inproceedings{liu-etal-2024-nlp, title = "An {NLP}-Focused Pilot Training Agent for Safe and Efficient Aviation Communication", author = "Liu, Xiaochen and Zou, Bowei and Aw, AiTi", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.8", doi = "10.18653/v1/2024.naacl-industry.8", pages = "89--96", abstract = "Aviation communication significantly influences the success of flight operations, ensuring safety of lives and efficient air transportation. In day-to-day flight operations, air traffic controllers (ATCos) would timely communicate instructions to pilots using specific phraseology for aircraft manipulation . However, pilots, originating from diverse backgrounds and understanding of English language, have struggled with conforming to strict phraseology for readback and communication in the live operation, this problem had not been effectively addressed over the past decades. Traditionally, aviation communication training involved expensive setups and resources, often relying on human-in-the-loop (HIL) air traffic simulations that demand allocating a specific environment, domain experts for participation, and substantial amount of annotated data for simulation. Therefore, we would like to propose an NLP-oriented training agent and address these challenges. Our approach involves leveraging only natural language capabilities and fine-tuning on communication data to generate instructions based on input scenarios (keywords). Given the absence of prior references for this business problem, we investigated the feasibility of our proposed solution by 1) generating all instructions at once and 2) generating one instruction while incorporating conversational history in each input. Our findings affirm the feasibility of this approach, highlighting the effectiveness of fine-tuning pre-trained models and large language models in advancing aviation communication training.", }
Aviation communication significantly influences the success of flight operations, ensuring the safety of lives and efficient air transportation. In day-to-day flight operations, air traffic controllers (ATCos) communicate instructions to pilots in a timely manner using specific phraseology for aircraft manipulation. However, pilots, who come from diverse backgrounds and levels of English proficiency, have struggled to conform to the strict phraseology for readback and communication in live operations; this problem has not been effectively addressed over the past decades. Traditionally, aviation communication training has involved expensive setups and resources, often relying on human-in-the-loop (HIL) air traffic simulations that demand a dedicated environment, participating domain experts, and a substantial amount of annotated data for simulation. Therefore, we propose an NLP-oriented training agent to address these challenges. Our approach involves leveraging only natural language capabilities and fine-tuning on communication data to generate instructions based on input scenarios (keywords). Given the absence of prior references for this business problem, we investigated the feasibility of our proposed solution by 1) generating all instructions at once and 2) generating one instruction at a time while incorporating the conversational history in each input. Our findings affirm the feasibility of this approach, highlighting the effectiveness of fine-tuning pre-trained models and large language models in advancing aviation communication training.
[ "Liu, Xiaochen", "Zou, Bowei", "Aw, AiTi" ]
An NLP-Focused Pilot Training Agent for Safe and Efficient Aviation Communication
naacl-industry.8
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.9.bib
https://aclanthology.org/2024.naacl-industry.9/
@inproceedings{qian-etal-2024-visual, title = "Visual Grounding for User Interfaces", author = "Qian, Yijun and Lu, Yujie and Hauptmann, Alexander and Riva, Oriana", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.9", doi = "10.18653/v1/2024.naacl-industry.9", pages = "97--107", abstract = "Enabling autonomous language agents to drive application user interfaces (UIs) as humans do can significantly expand the capability of today{'}s API-based agents. Essential to this vision is the ability of agents to ground natural language commands to on-screen UI elements. Prior UI grounding approaches work by relaying on developer-provided UI metadata (UI trees, such as web DOM, and accessibility labels) to detect on-screen elements. However, such metadata is often unavailable or incomplete. Object detection techniques applied to UI screens remove this dependency, by inferring location and types of UI elements directly from the UI{'}s visual appearance. The extracted semantics, however, are too limited to directly enable grounding. We overcome the limitations of both approaches by introducing the task of visual UI grounding, which unifies detection and grounding. A model takes as input a UI screenshot and a free-form language expression, and must identify the referenced UI element. We propose a solution to this problem, LVG, which learns UI element detection and grounding using a new technique called layout-guided contrastive learning, where the semantics of individual UI objects are learned also from their visual organization. Due to the scarcity of UI datasets, LVG integrates synthetic data in its training using multi-context learning. LVG outperforms baselines pre-trained on much larger datasets by over 4.9 points in top-1 accuracy, thus demonstrating its effectiveness.", }
Enabling autonomous language agents to drive application user interfaces (UIs) as humans do can significantly expand the capability of today's API-based agents. Essential to this vision is the ability of agents to ground natural language commands to on-screen UI elements. Prior UI grounding approaches work by relying on developer-provided UI metadata (UI trees, such as the web DOM, and accessibility labels) to detect on-screen elements. However, such metadata is often unavailable or incomplete. Object detection techniques applied to UI screens remove this dependency by inferring the location and types of UI elements directly from the UI's visual appearance. The extracted semantics, however, are too limited to directly enable grounding. We overcome the limitations of both approaches by introducing the task of visual UI grounding, which unifies detection and grounding. A model takes as input a UI screenshot and a free-form language expression, and must identify the referenced UI element. We propose a solution to this problem, LVG, which learns UI element detection and grounding using a new technique called layout-guided contrastive learning, where the semantics of individual UI objects are learned also from their visual organization. Due to the scarcity of UI datasets, LVG integrates synthetic data in its training using multi-context learning. LVG outperforms baselines pre-trained on much larger datasets by over 4.9 points in top-1 accuracy, thus demonstrating its effectiveness.
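The contrastive component of such grounding can be illustrated with a generic symmetric InfoNCE loss that aligns UI-element embeddings with the embeddings of their referring expressions. This is a standard formulation, not LVG's layout-guided variant; the encoders and the layout guidance itself are assumed away here.

```python
import torch
import torch.nn.functional as F

def grounding_contrastive_loss(elem_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: the i-th UI element should match the i-th expression."""
    elem = F.normalize(elem_emb, dim=-1)           # (N, D) UI element features
    text = F.normalize(text_emb, dim=-1)           # (N, D) matched expressions
    logits = elem @ text.T / temperature           # similarity of all pairs
    targets = torch.arange(elem.size(0))           # diagonal pairs are positives
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

loss = grounding_contrastive_loss(torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```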
[ "Qian, Yijun", "Lu, Yujie", "Hauptmann, Alex", "er", "Riva, Oriana" ]
Visual Grounding for User Interfaces
naacl-industry.9
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.10.bib
https://aclanthology.org/2024.naacl-industry.10/
@inproceedings{buchner-etal-2024-prompt, title = "Prompt Tuned Embedding Classification for Industry Sector Allocation", author = "Buchner, Valentin and Cao, Lele and Kalo, Jan-Christoph and Von Ehrenheim, Vilhelm", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.10", doi = "10.18653/v1/2024.naacl-industry.10", pages = "108--118", abstract = "We introduce Prompt Tuned Embedding Classification (PTEC) for classifying companies within an investment firm{'}s proprietary industry taxonomy, supporting their thematic investment strategy. PTEC assigns companies to the sectors they primarily operate in, conceptualizing this process as a multi-label text classification task. Prompt Tuning, usually deployed as a text-to-text (T2T) classification approach, ensures low computational cost while maintaining high task performance. However, T2T classification has limitations on multi-label tasks due to the generation of non-existing labels, permutation invariance of the label sequence, and a lack of confidence scores. PTEC addresses these limitations by utilizing a classification head in place of the Large Language Models (LLMs) language head. PTEC surpasses both baselines and human performance while lowering computational demands. This indicates the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of LLMs with strong generalization abilities.", }
We introduce Prompt Tuned Embedding Classification (PTEC) for classifying companies within an investment firm's proprietary industry taxonomy, supporting their thematic investment strategy. PTEC assigns companies to the sectors they primarily operate in, conceptualizing this process as a multi-label text classification task. Prompt Tuning, usually deployed as a text-to-text (T2T) classification approach, ensures low computational cost while maintaining high task performance. However, T2T classification has limitations on multi-label tasks due to the generation of non-existing labels, permutation invariance of the label sequence, and a lack of confidence scores. PTEC addresses these limitations by utilizing a classification head in place of the Large Language Model's (LLM's) language head. PTEC surpasses both baselines and human performance while lowering computational demands. This indicates the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of LLMs with strong generalization abilities.
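The core architectural move, trainable soft-prompt vectors in front of a frozen backbone and a multi-label classification head instead of the language head, can be sketched as follows. The toy encoder, mean pooling, and sizes are illustrative assumptions, not PTEC's exact configuration.

```python
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int, n_labels: int,
                 n_prompt_tokens: int = 16):
        super().__init__()
        self.encoder = encoder.requires_grad_(False)     # frozen backbone
        self.soft_prompt = nn.Parameter(torch.randn(n_prompt_tokens, hidden) * 0.02)
        self.head = nn.Linear(hidden, n_labels)          # replaces the LM head

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        b = token_embeds.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(b, -1, -1)
        h = self.encoder(torch.cat([prompt, token_embeds], dim=1))
        pooled = h.mean(dim=1)                           # simple mean pooling
        return self.head(pooled)                         # per-sector logits

encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
model = PromptTunedClassifier(encoder, hidden=64, n_labels=20)
logits = model(torch.randn(2, 10, 64))                   # (batch, seq, hidden)
probs = torch.sigmoid(logits)                            # independent per-label scores
print(probs.shape)  # torch.Size([2, 20])
```

A sigmoid head yields calibrated per-label confidence scores and can never emit a non-existing label, which is exactly the failure mode of T2T decoding that the abstract points out.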
[ "Buchner, Valentin", "Cao, Lele", "Kalo, Jan-Christoph", "Von Ehrenheim, Vilhelm" ]
Prompt Tuned Embedding Classification for Industry Sector Allocation
naacl-industry.10
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.11.bib
https://aclanthology.org/2024.naacl-industry.11/
@inproceedings{bouziani-etal-2024-rexel, title = "{REXEL}: An End-to-end Model for Document-Level Relation Extraction and Entity Linking", author = "Bouziani, Nacime and Tyagi, Shubhi and Fisher, Joseph and Lehmann, Jens and Pierleoni, Andrea", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.11", doi = "10.18653/v1/2024.naacl-industry.11", pages = "119--130", abstract = "Extracting structured information from unstructured text is critical for many downstream NLP applications and is traditionally achieved by $\textit{closed information extraction}$ (cIE). However, existing approaches for cIE suffer from two limitations: $\textit{(i)}$ they are often pipelines which makes them prone to error propagation, and/or $\textit{(ii)}$ they are restricted to sentence level which prevents them from capturing long-range dependencies and results in expensive inference time. We address these limitations by proposing REXEL, a highly efficient and accurate model for the joint task of document level cIE (DocIE). REXEL performs mention detection, entity typing, entity disambiguation, coreference resolution and document-level relation classification in a single forward pass to yield facts fully linked to a reference knowledge graph. It is on average 11 times faster than competitive existing approaches in a similar setting and performs competitively both when optimised for any of the individual sub-task and a variety of combinations of different joint tasks, surpassing the baselines by an average of more than 6 F1 points. The combination of speed and accuracy makes REXEL an accurate cost-efficient system for extracting structured information at web-scale. We also release an extension of the DocRED dataset to enable benchmarking of future work on DocIE, which will be available at https://github.com/amazon-science/e2e-docie.", }
Extracting structured information from unstructured text is critical for many downstream NLP applications and is traditionally achieved by closed information extraction (cIE). However, existing approaches for cIE suffer from two limitations: (i) they are often pipelines, which makes them prone to error propagation, and/or (ii) they are restricted to the sentence level, which prevents them from capturing long-range dependencies and results in expensive inference. We address these limitations by proposing REXEL, a highly efficient and accurate model for the joint task of document-level cIE (DocIE). REXEL performs mention detection, entity typing, entity disambiguation, coreference resolution, and document-level relation classification in a single forward pass to yield facts fully linked to a reference knowledge graph. It is on average 11 times faster than competitive existing approaches in a similar setting and performs competitively both when optimised for any of the individual sub-tasks and for a variety of combinations of different joint tasks, surpassing the baselines by an average of more than 6 F1 points. The combination of speed and accuracy makes REXEL an accurate, cost-efficient system for extracting structured information at web scale. We also release an extension of the DocRED dataset to enable benchmarking of future work on DocIE, which will be available at https://github.com/amazon-science/e2e-docie.
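A schematic way to see "several sub-tasks in a single forward pass" is one shared document encoder feeding separate task heads. The head designs and sizes below are illustrative assumptions; REXEL's actual heads are richer and interact with each other.

```python
import torch
import torch.nn as nn

class JointIEModel(nn.Module):
    def __init__(self, hidden=64, n_entity_types=10, n_relations=20):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # shared body
        self.mention_head = nn.Linear(hidden, 3)           # BIO mention tags
        self.type_head = nn.Linear(hidden, n_entity_types) # per-token typing
        self.rel_head = nn.Bilinear(hidden, hidden, n_relations)

    def forward(self, token_embeds):
        h = self.encoder(token_embeds)                     # ONE forward pass
        mentions = self.mention_head(h)
        types = self.type_head(h)
        # score relations between every pair of token representations
        n = h.size(1)
        hi = h.unsqueeze(2).expand(-1, -1, n, -1).contiguous()
        hj = h.unsqueeze(1).expand(-1, n, -1, -1).contiguous()
        relations = self.rel_head(hi, hj)
        return mentions, types, relations

m, t, r = JointIEModel()(torch.randn(2, 12, 64))
print(m.shape, t.shape, r.shape)  # per-token tags, types, pairwise relations
```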
[ "Bouziani, Nacime", "Tyagi, Shubhi", "Fisher, Joseph", "Lehmann, Jens", "Pierleoni, Andrea" ]
REXEL: An End-to-end Model for Document-Level Relation Extraction and Entity Linking
naacl-industry.11
Poster
2404.12788
[ "https://github.com/amazon-science/e2e-docie" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.12.bib
https://aclanthology.org/2024.naacl-industry.12/
@inproceedings{xu-etal-2024-conformer, title = "Conformer-Based Speech Recognition On Extreme Edge-Computing Devices", author = "Xu, Mingbin and Jin, Alex and Wang, Sicheng and Su, Mu and Ng, Tim and Mason, Henry and Han, Shiyi and Lei, Zhihong and Deng, Yaqiao and Huang, Zhen and Krishnamoorthy, Mahesh", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.12", doi = "10.18653/v1/2024.naacl-industry.12", pages = "131--139", abstract = "With increasingly more powerful compute capabilities and resources in today{'}s devices, traditionally compute-intensive automatic speech recognition (ASR) has been moving from the cloud to devices to better protect user privacy. However, it is still challenging to implement on-device ASR on resource-constrained devices, such as smartphones, smart wearables, and other small home automation devices. In this paper, we propose a series of model architecture adaptions, neural network graph transformations, and numerical optimizations to fit an advanced Conformer based end-to-end streaming ASR system on resource-constrained devices without accuracy degradation. We achieve over 5.26 times faster than realtime (0.19 RTF) speech recognition on small wearables while minimizing energy consumption and achieving state-of-the-art accuracy. The proposed methods are widely applicable to other transformer-based server-free AI applications. In addition, we provide a complete theory on optimal pre-normalizers that numerically stabilize layer normalization in any $L_p$-$norm$ using any floating point precision.", }
With increasingly powerful compute capabilities and resources in today's devices, traditionally compute-intensive automatic speech recognition (ASR) has been moving from the cloud to devices to better protect user privacy. However, it is still challenging to implement on-device ASR on resource-constrained devices, such as smartphones, smart wearables, and other small home automation devices. In this paper, we propose a series of model architecture adaptations, neural network graph transformations, and numerical optimizations to fit an advanced Conformer-based end-to-end streaming ASR system on resource-constrained devices without accuracy degradation. We achieve speech recognition over 5.26 times faster than real time (0.19 RTF) on small wearables while minimizing energy consumption and achieving state-of-the-art accuracy. The proposed methods are widely applicable to other transformer-based server-free AI applications. In addition, we provide a complete theory on optimal pre-normalizers that numerically stabilize layer normalization in any $L_p$-norm using any floating-point precision.
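The intuition behind pre-normalizers can be shown with a short sketch: layer normalization is invariant to positive rescaling of its input (up to the epsilon term), so dividing each row by its max absolute value (an L-infinity pre-normalizer) before computing the mean and variance prevents overflow when squaring large activations in fp16, while leaving the result essentially unchanged. The specific choice of pre-normalizer here is an illustrative assumption, not the paper's optimality theory.

```python
import torch

def layernorm_fp16_stable(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """LayerNorm with a max-abs pre-normalizer so x*x cannot overflow in fp16."""
    scale = x.abs().amax(dim=-1, keepdim=True).clamp(min=1.0)
    z = x / scale                       # now |z| <= 1, safe to square in fp16
    mu = z.mean(dim=-1, keepdim=True)
    var = (z - mu).pow(2).mean(dim=-1, keepdim=True)
    return (z - mu) / torch.sqrt(var + eps)

x = (torch.randn(4, 256) * 300).half()  # values big enough to overflow x*x
naive_var = x.pow(2).mean(dim=-1)       # squaring directly may hit inf in fp16
print(torch.isinf(naive_var).any().item(), layernorm_fp16_stable(x).dtype)
```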
[ "Xu, Mingbin", "Jin, Alex", "Wang, Sicheng", "Su, Mu", "Ng, Tim", "Mason, Henry", "Han, Shiyi", "Lei, Zhihong", "Deng, Yaqiao", "Huang, Zhen", "Krishnamoorthy, Mahesh" ]
Conformer-Based Speech Recognition On Extreme Edge-Computing Devices
naacl-industry.12
Poster
2312.10359
[ "" ]
https://huggingface.co/papers/2312.10359
0
0
0
11
1
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.13.bib
https://aclanthology.org/2024.naacl-industry.13/
@inproceedings{inan-etal-2024-generating, title = "Generating Signed Language Instructions in Large-Scale Dialogue Systems", author = "Inan, Mert and Atwell, Katherine and Sicilia, Anthony and Quandt, Lorna and Alikhani, Malihe", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.13", doi = "10.18653/v1/2024.naacl-industry.13", pages = "140--154", abstract = "We introduce a goal-oriented conversational AI system enhanced with American Sign Language (ASL) instructions, presenting the first implementation of such a system on a worldwide multimodal conversational AI platform. Accessible through a touch-based interface, our system receives input from users and seamlessly generates ASL instructions by leveraging retrieval methods and cognitively based gloss translations. Central to our design is a sign translation module powered by Large Language Models, alongside a token-based video retrieval system for delivering instructional content from recipes and wikiHow guides. Our development process is deeply rooted in a commitment to community engagement, incorporating insights from the Deaf and Hard-of-Hearing community, as well as experts in cognitive and ASL learning sciences. The effectiveness of our signing instructions is validated by user feedback, achieving ratings on par with those of the system in its non-signing variant. Additionally, our system demonstrates exceptional performance in retrieval accuracy and text-generation quality, measured by metrics such as BERTScore. We have made our codebase and datasets publicly accessible at https://github.com/Merterm/signed-dialogue, and a demo of our signed instruction video retrieval system is available at https://huggingface.co/spaces/merterm/signed-instructions.", }
We introduce a goal-oriented conversational AI system enhanced with American Sign Language (ASL) instructions, presenting the first implementation of such a system on a worldwide multimodal conversational AI platform. Accessible through a touch-based interface, our system receives input from users and seamlessly generates ASL instructions by leveraging retrieval methods and cognitively based gloss translations. Central to our design is a sign translation module powered by Large Language Models, alongside a token-based video retrieval system for delivering instructional content from recipes and wikiHow guides. Our development process is deeply rooted in a commitment to community engagement, incorporating insights from the Deaf and Hard-of-Hearing community, as well as experts in cognitive and ASL learning sciences. The effectiveness of our signing instructions is validated by user feedback, achieving ratings on par with those of the system in its non-signing variant. Additionally, our system demonstrates exceptional performance in retrieval accuracy and text-generation quality, measured by metrics such as BERTScore. We have made our codebase and datasets publicly accessible at https://github.com/Merterm/signed-dialogue, and a demo of our signed instruction video retrieval system is available at https://huggingface.co/spaces/merterm/signed-instructions.
[ "Inan, Mert", "Atwell, Katherine", "Sicilia, Anthony", "Qu", "t, Lorna", "Alikhani, Malihe" ]
Generating Signed Language Instructions in Large-Scale Dialogue Systems
naacl-industry.13
Poster
2410.14026
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.14.bib
https://aclanthology.org/2024.naacl-industry.14/
@inproceedings{jang-stikkel-2024-leveraging, title = "Leveraging Natural Language Processing and Large Language Models for Assisting Due Diligence in the Legal Domain", author = "Jang, Myeongjun and Stikkel, G{\'a}bor", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.14", doi = "10.18653/v1/2024.naacl-industry.14", pages = "155--164", abstract = "Due diligence is a crucial legal process that mitigates potential risks of mergers and acquisitions (M{\&}A). However, despite its prominent importance, there has been a lack of research regarding leveraging NLP techniques for due diligence. In this study, our aim is to explore the most efficient deep-learning model architecture for due diligence in terms of performance and latency, and evaluate the potential of large language models (LLMs) as an efficient due diligence assistant. To our knowledge, this is the first study that employs pre-trained language models (PLMs) and LLMs for the due diligence problem. Our experimental results suggest that methodologies that have demonstrated promising performance in the general domain encounter challenges when applied in due diligence due to the inherent lengthy nature of legal documents. We also ascertain that LLMs can be a useful tool for helping lawyers who perform due diligence.", }
Due diligence is a crucial legal process that mitigates potential risks of mergers and acquisitions (M&A). However, despite its prominent importance, there has been a lack of research regarding leveraging NLP techniques for due diligence. In this study, our aim is to explore the most efficient deep-learning model architecture for due diligence in terms of performance and latency, and evaluate the potential of large language models (LLMs) as an efficient due diligence assistant. To our knowledge, this is the first study that employs pre-trained language models (PLMs) and LLMs for the due diligence problem. Our experimental results suggest that methodologies that have demonstrated promising performance in the general domain encounter challenges when applied in due diligence due to the inherent lengthy nature of legal documents. We also ascertain that LLMs can be a useful tool for helping lawyers who perform due diligence.
[ "Jang, Myeongjun", "Stikkel, G{\\'a}bor" ]
Leveraging Natural Language Processing and Large Language Models for Assisting Due Diligence in the Legal Domain
naacl-industry.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.15.bib
https://aclanthology.org/2024.naacl-industry.15/
@inproceedings{he-etal-2024-annollm, title = "{A}nno{LLM}: Making Large Language Models to Be Better Crowdsourced Annotators", author = "He, Xingwei and Lin, Zhenghao and Gong, Yeyun and Jin, A-Long and Zhang, Hang and Lin, Chen and Jiao, Jian and Yiu, Siu Ming and Duan, Nan and Chen, Weizhu", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.15", doi = "10.18653/v1/2024.naacl-industry.15", pages = "165--190", abstract = "Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach, explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experiment results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset{'}s high quality.", }
Many natural language processing (NLP) tasks rely on labeled data to train machine learning models with high performance. However, data annotation is time-consuming and expensive, especially when the task involves a large amount of data or requires specialized domains. Recently, GPT-3.5 series models have demonstrated remarkable few-shot and zero-shot ability across various NLP tasks. In this paper, we first claim that large language models (LLMs), such as GPT-3.5, can serve as an excellent crowdsourced annotator when provided with sufficient guidance and demonstrated examples. Accordingly, we propose AnnoLLM, an annotation system powered by LLMs, which adopts a two-step approach: explain-then-annotate. Concretely, we first prompt LLMs to provide explanations for why the specific ground truth answer/label was assigned for a given example. Then, we construct the few-shot chain-of-thought prompt with the self-generated explanation and employ it to annotate the unlabeled data with LLMs. Our experimental results on three tasks, including user input and keyword relevance assessment, BoolQ, and WiC, demonstrate that AnnoLLM surpasses or performs on par with crowdsourced annotators. Furthermore, we build the first conversation-based information retrieval dataset employing AnnoLLM. This dataset is designed to facilitate the development of retrieval models capable of retrieving pertinent documents for conversational text. Human evaluation has validated the dataset{'}s high quality.
[ "He, Xingwei", "Lin, Zhenghao", "Gong, Yeyun", "Jin, A-Long", "Zhang, Hang", "Lin, Chen", "Jiao, Jian", "Yiu, Siu Ming", "Duan, Nan", "Chen, Weizhu" ]
AnnoLLM: Making Large Language Models to Be Better Crowdsourced Annotators
naacl-industry.15
Poster
2303.16854
[ "https://github.com/nlpcode/annollm" ]
-1
-1
-1
-1
0
[]
[]
[]
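The explain-then-annotate recipe in the AnnoLLM abstract above is concrete enough to sketch. The code below is a minimal illustration under assumptions: `call_llm` is a hypothetical stand-in for any completion client, and the prompt wording is invented, not the paper's.

```python
# Hedged sketch of AnnoLLM's two-step "explain-then-annotate" prompting.
# call_llm is a hypothetical placeholder; the prompt text is illustrative.

def call_llm(prompt: str) -> str:
    """Placeholder LLM client; swap in a real API or local model."""
    return "<model output>"

def explanation_prompt(example: str, label: str) -> str:
    # Step 1: ask the LLM to justify the known gold label of a demonstration.
    return (f"Example: {example}\nGold label: {label}\n"
            "Explain step by step why this label is correct.")

def annotate(demonstrations, unlabeled_text: str) -> str:
    # Step 2: build a few-shot chain-of-thought prompt from the self-generated
    # explanations, then annotate the new instance.
    parts = []
    for example, label in demonstrations:
        explanation = call_llm(explanation_prompt(example, label))
        parts.append(f"Example: {example}\nReasoning: {explanation}\nLabel: {label}")
    parts.append(f"Example: {unlabeled_text}\nReasoning:")
    return call_llm("\n\n".join(parts))

demos = [("great movie, loved it", "positive"), ("waste of time", "negative")]
print(annotate(demos, "surprisingly good soundtrack"))
```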
https://aclanthology.org/2024.naacl-industry.16.bib
https://aclanthology.org/2024.naacl-industry.16/
@inproceedings{akella-etal-2024-automatic, title = "An Automatic Prompt Generation System for Tabular Data Tasks", author = "Akella, Ashlesha and Manatkar, Abhijit and Chavda, Brijkumar and Patel, Hima", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.16", doi = "10.18653/v1/2024.naacl-industry.16", pages = "191--200", abstract = "Efficient processing of tabular data is important in various industries, especially when working with datasets containing a large number of columns. Large language models (LLMs) have demonstrated their ability on several tasks through carefully crafted prompts. However, creating effective prompts for tabular datasets is challenging due to the structured nature of the data and the need to manage numerous columns. This paper presents an innovative auto-prompt generation system suitable for multiple LLMs, with minimal training. It proposes two novel methods; 1) A Reinforcement Learning-based algorithm for identifying and sequencing task-relevant columns 2) cell-level similarity-based approach for enhancing few-shot example selection. Our approach has been extensively tested across 66 datasets, demonstrating improved performance in three downstream tasks: data imputation, error detection, and entity matching using two distinct LLMs; Google/flant-t5xxl and Mixtral 8x7B.", }
Efficient processing of tabular data is important in various industries, especially when working with datasets containing a large number of columns. Large language models (LLMs) have demonstrated their ability on several tasks through carefully crafted prompts. However, creating effective prompts for tabular datasets is challenging due to the structured nature of the data and the need to manage numerous columns. This paper presents an innovative auto-prompt generation system suitable for multiple LLMs, with minimal training. It proposes two novel methods: 1) a Reinforcement Learning-based algorithm for identifying and sequencing task-relevant columns, and 2) a cell-level similarity-based approach for enhancing few-shot example selection. Our approach has been extensively tested across 66 datasets, demonstrating improved performance in three downstream tasks: data imputation, error detection, and entity matching, using two distinct LLMs: google/flan-t5-xxl and Mixtral 8x7B.
[ "Akella, Ashlesha", "Manatkar, Abhijit", "Chavda, Brijkumar", "Patel, Hima" ]
An Automatic Prompt Generation System for Tabular Data Tasks
naacl-industry.16
Poster
2405.05618
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
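The second method in the abstract above, cell-level similarity for few-shot example selection, can be illustrated with a simple sketch. The per-cell metric here is Jaccard token overlap, purely an assumption; the paper's actual similarity may differ.

```python
# Illustrative cell-level similarity for selecting few-shot examples on tabular
# data. Jaccard token overlap per cell is an assumed metric for this sketch.

def cell_similarity(a, b) -> float:
    ta, tb = set(str(a).lower().split()), set(str(b).lower().split())
    return len(ta & tb) / max(len(ta | tb), 1)

def row_score(query_row: dict, candidate_row: dict) -> float:
    shared = set(query_row) & set(candidate_row)
    if not shared:
        return 0.0
    return sum(cell_similarity(query_row[c], candidate_row[c]) for c in shared) / len(shared)

def select_few_shot(query_row, labeled_rows, k=3):
    # Pick the k labeled rows whose cells look most like the query row's cells.
    return sorted(labeled_rows, key=lambda r: row_score(query_row, r), reverse=True)[:k]

query = {"name": "Acme Ltd", "country": "UK"}
labeled = [{"name": "Acme Limited", "country": "UK"},
           {"name": "Foo Inc", "country": "US"}]
print(select_few_shot(query, labeled, k=1))
```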
https://aclanthology.org/2024.naacl-industry.17.bib
https://aclanthology.org/2024.naacl-industry.17/
@inproceedings{hammami-etal-2024-fighting, title = "Fighting crime with Transformers: Empirical analysis of address parsing methods in payment data", author = "Hammami, Haitham and Baligand, Louis and Petrovski, Bojan", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.17", doi = "10.18653/v1/2024.naacl-industry.17", pages = "201--212", abstract = "In the financial industry, identifying the location of parties involved in payments is a major challenge in the context of Anti-Money Laundering transaction monitoring. For this purpose address parsing entails extracting fields such as street, postal code, or country from free text message attributes. While payment processing platforms are updating their standards with more structured formats such as SWIFT with ISO 20022, address parsing remains essential for a considerable volume of messages. With the emergence of Transformers and Generative Large Language Models (LLM), we explore the performance of state-of-the-art solutions given the constraint of processing a vast amount of daily data. This paper also aims to show the need for training robust models capable of dealing with real-world noisy transactional data. Our results suggest that a well fine-tuned Transformer model using early-stopping significantly outperforms other approaches. Nevertheless, generative LLMs demonstrate strong zero{\_}shot performance and warrant further investigations.", }
In the financial industry, identifying the location of parties involved in payments is a major challenge in the context of Anti-Money Laundering transaction monitoring. For this purpose, address parsing entails extracting fields such as street, postal code, or country from free text message attributes. While payment processing platforms are updating their standards with more structured formats such as SWIFT with ISO 20022, address parsing remains essential for a considerable volume of messages. With the emergence of Transformers and Generative Large Language Models (LLMs), we explore the performance of state-of-the-art solutions given the constraint of processing a vast amount of daily data. This paper also aims to show the need for training robust models capable of dealing with real-world noisy transactional data. Our results suggest that a well fine-tuned Transformer model using early stopping significantly outperforms other approaches. Nevertheless, generative LLMs demonstrate strong zero-shot performance and warrant further investigation.
[ "Hammami, Haitham", "Balig", ", Louis", "Petrovski, Bojan" ]
Fighting crime with Transformers: Empirical analysis of address parsing methods in payment data
naacl-industry.17
Poster
2404.05632
[ "https://github.com/hm-haitham/Fighting-crime-with-Transformers-Empirical-analysis-of-address-parsing-methods-in-payment-data" ]
-1
-1
-1
-1
0
[]
[]
[]
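Address parsing as described above is typically cast as token classification over BIO tags. Below is a hedged sketch of only the decoding step, turning per-token tag predictions into address fields; the tag set and example are invented for illustration and are not the paper's dataset.

```python
# Minimal BIO-tag decoding for an address parser built on a token-classification
# Transformer. Tags and tokens here are illustrative, not from the paper.

def bio_decode(tokens, tags):
    fields, current_field, current_tokens = {}, None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current_field:  # flush the previous span
                fields.setdefault(current_field, []).append(" ".join(current_tokens))
            current_field, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_field == tag[2:]:
            current_tokens.append(token)
        else:  # "O" tag or inconsistent continuation: flush and reset
            if current_field:
                fields.setdefault(current_field, []).append(" ".join(current_tokens))
            current_field, current_tokens = None, []
    if current_field:
        fields.setdefault(current_field, []).append(" ".join(current_tokens))
    return fields

tokens = ["221B", "Baker", "Street", "London", "NW1", "6XE"]
tags   = ["B-STREET", "I-STREET", "I-STREET", "B-CITY", "B-POSTCODE", "I-POSTCODE"]
print(bio_decode(tokens, tags))
```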
https://aclanthology.org/2024.naacl-industry.18.bib
https://aclanthology.org/2024.naacl-industry.18/
@inproceedings{hu-etal-2024-language, title = "Language Models are Alignable Decision-Makers: Dataset and Application to the Medical Triage Domain", author = "Hu, Brian and Ray, Bill and Leung, Alice and Summerville, Amy and Joy, David and Funk, Christopher and Basharat, Arslan", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.18", doi = "10.18653/v1/2024.naacl-industry.18", pages = "213--227", abstract = "In difficult decision-making scenarios, it is common to have conflicting opinions among expert human decision-makers as there may not be a single right answer. Such decisions may be guided by different attributes that can be used to characterize an individual{'}s decision. We introduce a novel dataset for medical triage decision-making, labeled with a set of decision-maker attributes (DMAs). This dataset consists of 62 scenarios, covering six different DMAs, including ethical principles such as fairness and moral desert. We present a novel software framework for human-aligned decision-making by utilizing these DMAs, paving the way for trustworthy AI with better guardrails. Specifically, we demonstrate how large language models (LLMs) can serve as ethical decision-makers, and how their decisions can be aligned to different DMAs using zero-shot prompting. Our experiments focus on different open-source models with varying sizes and training techniques, such as Falcon, Mistral, and Llama 2. Finally, we also introduce a new form of weighted self-consistency that improves the overall quantified performance. Our results provide new research directions in the use of LLMs as alignable decision-makers. The dataset and open-source software are publicly available at: https://github.com/ITM-Kitware/llm-alignable-dm.", }
In difficult decision-making scenarios, it is common to have conflicting opinions among expert human decision-makers as there may not be a single right answer. Such decisions may be guided by different attributes that can be used to characterize an individual{'}s decision. We introduce a novel dataset for medical triage decision-making, labeled with a set of decision-maker attributes (DMAs). This dataset consists of 62 scenarios, covering six different DMAs, including ethical principles such as fairness and moral desert. We present a novel software framework for human-aligned decision-making by utilizing these DMAs, paving the way for trustworthy AI with better guardrails. Specifically, we demonstrate how large language models (LLMs) can serve as ethical decision-makers, and how their decisions can be aligned to different DMAs using zero-shot prompting. Our experiments focus on different open-source models with varying sizes and training techniques, such as Falcon, Mistral, and Llama 2. Finally, we also introduce a new form of weighted self-consistency that improves the overall quantified performance. Our results provide new research directions in the use of LLMs as alignable decision-makers. The dataset and open-source software are publicly available at: https://github.com/ITM-Kitware/llm-alignable-dm.
[ "Hu, Brian", "Ray, Bill", "Leung, Alice", "Summerville, Amy", "Joy, David", "Funk, Christopher", "Basharat, Arslan" ]
Language Models are Alignable Decision-Makers: Dataset and Application to the Medical Triage Domain
naacl-industry.18
Poster
2406.06435
[ "https://github.com/itm-kitware/llm-alignable-dm" ]
-1
-1
-1
-1
0
[]
[]
[]
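The abstract above introduces a new form of weighted self-consistency. As a hedged illustration of the general idea (not the paper's exact weighting), the sketch below aggregates repeated LLM samples with per-sample weights, e.g., normalized sequence log-probabilities, and returns the highest-weighted answer.

```python
# Sketch of a weighted self-consistency vote over repeated LLM samples.
# The per-sample weight is an assumption (e.g., a normalized log-probability);
# the paper defines its own weighting scheme.

from collections import defaultdict

def weighted_self_consistency(samples):
    """samples: list of (answer, weight) pairs from repeated LLM sampling."""
    votes = defaultdict(float)
    for answer, weight in samples:
        votes[answer] += weight
    return max(votes, key=votes.get)

print(weighted_self_consistency([("A", 0.9), ("B", 0.4), ("A", 0.3)]))  # -> "A"
```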
https://aclanthology.org/2024.naacl-industry.19.bib
https://aclanthology.org/2024.naacl-industry.19/
@inproceedings{ayala-bechard-2024-reducing, title = "Reducing hallucination in structured outputs via Retrieval-Augmented Generation", author = "Ayala, Orlando and Bechard, Patrice", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.19", doi = "10.18653/v1/2024.naacl-industry.19", pages = "228--238", abstract = "A current limitation of Generative AI (GenAI) is its propensity to hallucinate. While Large Language Models (LLM) have taken the world by storm, without eliminating or at least reducing hallucination, real-world GenAI systems will likely continue to face challenges in user adoption. In the process of deploying an enterprise application that produces workflows from natural language requirements, we devised a system leveraging Retrieval-Augmented Generation (RAG) to improve the quality of the structured output that represents such workflows. Thanks to our implementation of RAG, our proposed system significantly reduces hallucination and allows the generalization of our LLM to out-of-domain settings. In addition, we show that using a small, well-trained retriever can reduce the size of the accompanying LLM at no loss in performance, thereby making deployments of LLM-based systems less resource-intensive.", }
A current limitation of Generative AI (GenAI) is its propensity to hallucinate. While Large Language Models (LLMs) have taken the world by storm, without eliminating or at least reducing hallucination, real-world GenAI systems will likely continue to face challenges in user adoption. In the process of deploying an enterprise application that produces workflows from natural language requirements, we devised a system leveraging Retrieval-Augmented Generation (RAG) to improve the quality of the structured output that represents such workflows. Thanks to our implementation of RAG, our proposed system significantly reduces hallucination and allows our LLM to generalize to out-of-domain settings. In addition, we show that using a small, well-trained retriever can reduce the size of the accompanying LLM at no loss in performance, thereby making deployments of LLM-based systems less resource-intensive.
[ "Ayala, Orl", "o", "Bechard, Patrice" ]
Reducing hallucination in structured outputs via Retrieval-Augmented Generation
naacl-industry.19
Poster
2404.08189
[ "" ]
https://huggingface.co/papers/2404.08189
2
1
0
2
1
[]
[]
[]
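A minimal sketch of the RAG pattern the abstract above describes: retrieve the most relevant known workflow steps, then constrain generation to them so the structured output stays grounded. Here `embed` is a toy deterministic placeholder rather than a real encoder, and the "JSON workflow from known steps only" prompt contract is an assumption about how grounding would be enforced.

```python
# Minimal RAG sketch in the spirit of the abstract. embed is a toy placeholder
# (NOT a real retriever model); the prompt contract is an assumption.

import numpy as np

def embed(text: str) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # placeholder encoder
    v = rng.standard_normal(16)
    return v / np.linalg.norm(v)

def retrieve(query: str, corpus: list, k: int = 2) -> list:
    q = embed(query)
    return sorted(corpus, key=lambda d: float(embed(d) @ q), reverse=True)[:k]

def workflow_prompt(requirement: str, corpus: list) -> str:
    context = "\n".join(retrieve(requirement, corpus))
    return (f"Known steps:\n{context}\n\nRequirement: {requirement}\n"
            "Return a JSON workflow that uses only the known steps.")

corpus = ["step: create_ticket(title)", "step: send_email(to, body)",
          "step: close_ticket(id)"]
print(workflow_prompt("email the customer, then close their ticket", corpus))
# In a real system this prompt would go to the (smaller) generator LLM.
```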
https://aclanthology.org/2024.naacl-industry.20.bib
https://aclanthology.org/2024.naacl-industry.20/
@inproceedings{yazdi-etal-2024-towards, title = "Towards Translating Objective Product Attributes Into Customer Language", author = "Yazdi, Ram and Kalinsky, Oren and Libov, Alexander and Shahaf, Dafna", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.20", doi = "10.18653/v1/2024.naacl-industry.20", pages = "239--247", abstract = "When customers search online for a product they are not familiar with, their needs are often expressed through subjective product attributes, such as {''}picture quality{''} for a TV or {''}easy to clean{''} for a sofa. In contrast, the product catalog in online stores includes objective attributes such as {''}screen resolution{''} or {''}material{''}. In this work, we aim to find a link between the objective product catalog and the subjective needs of the customers, to help customers better understand the product space using their own words. We apply correlation-based methods to the store{'}s product catalog and product reviews in order to find the best potential links between objective and subjective attributes; next, Large Language Models (LLMs) reduce spurious correlations by incorporating common sense and world knowledge (e.g., picture quality is indeed affected by screen resolution, and 8k is the best one). We curate a dataset for this task and show that our combined approach outperforms correlation-only and causation-only approaches.", }
When customers search online for a product they are not familiar with, their needs are often expressed through subjective product attributes, such as {''}picture quality{''} for a TV or {''}easy to clean{''} for a sofa. In contrast, the product catalog in online stores includes objective attributes such as {''}screen resolution{''} or {''}material{''}. In this work, we aim to find a link between the objective product catalog and the subjective needs of the customers, to help customers better understand the product space using their own words. We apply correlation-based methods to the store{'}s product catalog and product reviews in order to find the best potential links between objective and subjective attributes; next, Large Language Models (LLMs) reduce spurious correlations by incorporating common sense and world knowledge (e.g., picture quality is indeed affected by screen resolution, and 8k is the best one). We curate a dataset for this task and show that our combined approach outperforms correlation-only and causation-only approaches.
[ "Yazdi, Ram", "Kalinsky, Oren", "Libov, Alex", "er", "Shahaf, Dafna" ]
Towards Translating Objective Product Attributes Into Customer Language
naacl-industry.20
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
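The first stage described above, correlating objective catalog attributes with subjective review language, can be sketched with plain Pearson correlation; the attribute pair and numbers below are invented. In the paper's pipeline an LLM would then prune pairs whose correlation lacks a common-sense causal link.

```python
# Illustrative correlation step between an objective catalog attribute and how
# often reviews mention a subjective phrase. Data below is invented.

import numpy as np

def attribute_correlation(objective_values, subjective_mention_rates):
    x = np.asarray(objective_values, dtype=float)
    y = np.asarray(subjective_mention_rates, dtype=float)
    if x.std() == 0 or y.std() == 0:
        return 0.0  # degenerate column: no signal
    return float(np.corrcoef(x, y)[0, 1])

# e.g., screen resolution (in K) vs. share of reviews praising "picture quality"
print(attribute_correlation([1, 2, 4, 8], [0.10, 0.25, 0.40, 0.70]))
```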
https://aclanthology.org/2024.naacl-industry.21.bib
https://aclanthology.org/2024.naacl-industry.21/
@inproceedings{konan-etal-2024-automating, title = "Automating the Generation of a Functional Semantic Types Ontology with Foundational Models", author = "Konan, Sachin and Rudolph, Larry and Affens, Scott", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.21", doi = "10.18653/v1/2024.naacl-industry.21", pages = "248--265", abstract = "The rise of data science, the inherent dirtiness of data, and the proliferation of vast data providers have increased the value proposition of Semantic Types. Semantic Types are a way of encoding contextual information onto a data schema that informs the user about the definitional meaning of data, its broader context, and relationships to other types. We increasingly see a world where providing structure to this information, attached directly to data, will enable both people and systems to better understand the content of a dataset and the ability to efficiently automate data tasks such as validation, mapping/joins, and eventually machine learning. While ontological systems exist, they have not had widespread adoption due to challenges in mapping to operational datasets and lack of specificity of entity-types. Additionally, the validation checks associated with data are stored in code bases separate from the datasets that are distributed. In this paper, we address both challenges holistically by proposing a system that efficiently maps and encodes functional meaning on Semantic Types.", }
The rise of data science, the inherent dirtiness of data, and the proliferation of vast data providers have increased the value proposition of Semantic Types. Semantic Types are a way of encoding contextual information onto a data schema that informs the user about the definitional meaning of data, its broader context, and relationships to other types. We increasingly see a world where providing structure to this information, attached directly to data, will enable both people and systems to better understand the content of a dataset and to efficiently automate data tasks such as validation, mapping/joins, and eventually machine learning. While ontological systems exist, they have not had widespread adoption due to challenges in mapping to operational datasets and lack of specificity of entity-types. Additionally, the validation checks associated with data are stored in code bases separate from the datasets that are distributed. In this paper, we address both challenges holistically by proposing a system that efficiently maps and encodes functional meaning on Semantic Types.
[ "Konan, Sachin", "Rudolph, Larry", "Affens, Scott" ]
Automating the Generation of a Functional Semantic Types Ontology with Foundational Models
naacl-industry.21
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.22.bib
https://aclanthology.org/2024.naacl-industry.22/
@inproceedings{mukku-etal-2024-leveraging, title = "Leveraging Customer Feedback for Multi-modal Insight Extraction", author = "Mukku, Sandeep and Kanagarajan, Abinesh and Ghosh, Pushpendu and Aggarwal, Chetan", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.22", doi = "10.18653/v1/2024.naacl-industry.22", pages = "266--278", abstract = "Businesses can benefit from customer feedback in different modalities, such as text and images, to enhance their products and services. However, it is difficult to extract actionable and relevant pairs of text segments and images from customer feedback in a single pass. In this paper, we propose a novel multi-modal method that fuses image and text information in a latent space and decodes it to extract the relevant feedback segments using an image-text grounded text decoder. We also introduce a weakly-supervised data generation technique that produces training data for this task. We evaluate our model on unseen data and demonstrate that it can effectively mine actionable insights from multi-modal customer feedback, outperforming the existing baselines by 14 points in F1 score.", }
Businesses can benefit from customer feedback in different modalities, such as text and images, to enhance their products and services. However, it is difficult to extract actionable and relevant pairs of text segments and images from customer feedback in a single pass. In this paper, we propose a novel multi-modal method that fuses image and text information in a latent space and decodes it to extract the relevant feedback segments using an image-text grounded text decoder. We also introduce a weakly-supervised data generation technique that produces training data for this task. We evaluate our model on unseen data and demonstrate that it can effectively mine actionable insights from multi-modal customer feedback, outperforming the existing baselines by 14 points in F1 score.
[ "Mukku, S", "eep", "Kanagarajan, Abinesh", "Ghosh, Pushpendu", "Aggarwal, Chetan" ]
Leveraging Customer Feedback for Multi-modal Insight Extraction
naacl-industry.22
Poster
2410.09999
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.23.bib
https://aclanthology.org/2024.naacl-industry.23/
@inproceedings{zhao-etal-2024-optimizing, title = "Optimizing {LLM} Based Retrieval Augmented Generation Pipelines in the Financial Domain", author = "Zhao, Yiyun and Singh, Prateek and Bhathena, Hanoz and Ramos, Bernardo and Joshi, Aviral and Gadiyaram, Swaroop and Sharma, Saket", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.23", doi = "10.18653/v1/2024.naacl-industry.23", pages = "279--294", abstract = "Retrieval Augmented Generation (RAG) is a prominent approach in real-word applications for grounding large language model (LLM) generations in up to date and domain-specific knowledge. However, there is a lack of systematic investigations of the impact of each component (retrieval quality, prompts, generation models) on the generation quality of a RAG pipeline in real world scenarios. In this study, we benchmark 6 LLMs in 15 retrieval scenarios exploring 9 prompts over 2 real world financial domain datasets. We thoroughly discuss the impact of each component in RAG pipeline on answer generation quality and formulate specific recommendations for the design of RAG systems.", }
Retrieval Augmented Generation (RAG) is a prominent approach in real-world applications for grounding large language model (LLM) generations in up-to-date and domain-specific knowledge. However, there is a lack of systematic investigation of the impact of each component (retrieval quality, prompts, generation models) on the generation quality of a RAG pipeline in real-world scenarios. In this study, we benchmark 6 LLMs in 15 retrieval scenarios exploring 9 prompts over 2 real-world financial domain datasets. We thoroughly discuss the impact of each component in the RAG pipeline on answer generation quality and formulate specific recommendations for the design of RAG systems.
[ "Zhao, Yiyun", "Singh, Prateek", "Bhathena, Hanoz", "Ramos, Bernardo", "Joshi, Aviral", "Gadiyaram, Swaroop", "Sharma, Saket" ]
Optimizing LLM Based Retrieval Augmented Generation Pipelines in the Financial Domain
naacl-industry.23
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.24.bib
https://aclanthology.org/2024.naacl-industry.24/
@inproceedings{striebel-etal-2024-scaling, title = "Scaling Up Authorship Attribution", author = {Striebel, Jacob and Edikala, Abishek and Irby, Ethan and Rosenfeld, Alex and Gage, J. and Dakota, Daniel and K{\"u}bler, Sandra}, editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.24", doi = "10.18653/v1/2024.naacl-industry.24", pages = "295--302", abstract = "We describe our system for authorship attribution in the IARPA HIATUS program. We describe the model and compute infrastructure developed to satisfy the set of technical constraints imposed by IARPA, including runtime limits as well as other constraints related to the ultimate use case. One use-case constraint concerns the explainability of the features used in the system. For this reason, we integrate features from frame semantic parsing, as they are both interpretable and difficult for adversaries to evade. One trade-off with using such features, however, is that more sophisticated feature representations require more complicated architectures, which limit usefulness in time-sensitive and constrained compute environments. We propose an approach to increase the efficiency of frame semantic parsing through an analysis of parallelization and beam search sizes. Our approach results in a system that is approximately 8.37x faster than the base system with a minimal effect on accuracy.", }
We describe our system for authorship attribution in the IARPA HIATUS program. We describe the model and compute infrastructure developed to satisfy the set of technical constraints imposed by IARPA, including runtime limits as well as other constraints related to the ultimate use case. One use-case constraint concerns the explainability of the features used in the system. For this reason, we integrate features from frame semantic parsing, as they are both interpretable and difficult for adversaries to evade. One trade-off with using such features, however, is that more sophisticated feature representations require more complicated architectures, which limit usefulness in time-sensitive and constrained compute environments. We propose an approach to increase the efficiency of frame semantic parsing through an analysis of parallelization and beam search sizes. Our approach results in a system that is approximately 8.37x faster than the base system with a minimal effect on accuracy.
[ "Striebel, Jacob", "Edikala, Abishek", "Irby, Ethan", "Rosenfeld, Alex", "Gage, J.", "Dakota, Daniel", "K{\\\"u}bler, S", "ra" ]
Scaling Up Authorship Attribution
naacl-industry.24
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.25.bib
https://aclanthology.org/2024.naacl-industry.25/
@inproceedings{miah-etal-2024-multimodal, title = "Multimodal Contextual Dialogue Breakdown Detection for Conversational {AI} Models", author = "Miah, Md Messal Monem and Schnaithmann, Ulie and Raghuvanshi, Arushi and Son, Youngseo", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.25", doi = "10.18653/v1/2024.naacl-industry.25", pages = "303--314", abstract = "Detecting dialogue breakdown in real time is critical for conversational AI systems, because it enables taking corrective action to successfully complete a task. In spoken dialog systems, this breakdown can be caused by a variety of unexpected situations including high levels of background noise, causing STT mistranscriptions, or unexpected user flows.In particular, industry settings like healthcare, require high precision and high flexibility to navigate differently based on the conversation history and dialogue states. This makes it both more challenging and more critical to accurately detect dialog breakdown. To accurately detect breakdown, we found it requires processing audio inputs along with downstream NLP model inferences on transcribed text in real time. In this paper, we introduce a Multimodal Contextual Dialogue Breakdown (MultConDB) model. This model significantly outperforms other known best models by achieving an F1 of 69.27.", }
Detecting dialogue breakdown in real time is critical for conversational AI systems, because it enables taking corrective action to successfully complete a task. In spoken dialog systems, this breakdown can be caused by a variety of unexpected situations, including high levels of background noise causing STT mistranscriptions, or unexpected user flows. In particular, industry settings like healthcare require high precision and high flexibility to navigate differently based on the conversation history and dialogue states. This makes it both more challenging and more critical to accurately detect dialog breakdown. To accurately detect breakdown, we found that it requires processing audio inputs along with downstream NLP model inferences on transcribed text in real time. In this paper, we introduce a Multimodal Contextual Dialogue Breakdown (MultConDB) model. This model significantly outperforms other known best models by achieving an F1 of 69.27.
[ "Miah, Md Messal Monem", "Schnaithmann, Ulie", "Raghuvanshi, Arushi", "Son, Youngseo" ]
Multimodal Contextual Dialogue Breakdown Detection for Conversational AI Models
naacl-industry.25
Poster
2404.08156
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.26.bib
https://aclanthology.org/2024.naacl-industry.26/
@inproceedings{wu-etal-2024-deferred, title = "Deferred {NAM}: Low-latency Top-K Context Injection via Deferred Context Encoding for Non-Streaming {ASR}", author = "Wu, Zelin and Song, Gan and Li, Christopher and Rondon, Pat and Meng, Zhong and Velez, Xavier and Wang, Weiran and Caseiro, Diamantino and Pundak, Golan and Munkhdalai, Tsendsuren and Chandorkar, Angad and Prabhavalkar, Rohit", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.26", doi = "10.18653/v1/2024.naacl-industry.26", pages = "315--323", abstract = "Contextual biasing enables speech recognizers to transcribe important phrases in the speaker{'}s context, such as contact names, even if they are rare in, or absent from, the training data. Attention-based biasing is a leading approach which allows for full end-to-end cotraining of the recognizer and biasing system and requires no separate inference-time components. Such biasers typically consist of a context encoder; followed by a context filter which narrows down the context to apply, improving per-step inference time; and, finally, context application via cross attention. Though much work has gone into optimizing per-frame performance, the context encoder is at least as important: recognition cannot begin before context encoding ends. Here, we show the lightweight phrase selection pass can be moved before context encoding, resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K phrases with a maximum pre-decoding delay under 33ms. With the addition of phrase- and wordpiece-level cross-entropy losses, our technique also achieves up to a 37.5{\%} relative WER reduction over the baseline without the losses and lightweight phrase selection pass.", }
Contextual biasing enables speech recognizers to transcribe important phrases in the speaker{'}s context, such as contact names, even if they are rare in, or absent from, the training data. Attention-based biasing is a leading approach which allows for full end-to-end cotraining of the recognizer and biasing system and requires no separate inference-time components. Such biasers typically consist of a context encoder; followed by a context filter which narrows down the context to apply, improving per-step inference time; and, finally, context application via cross attention. Though much work has gone into optimizing per-frame performance, the context encoder is at least as important: recognition cannot begin before context encoding ends. Here, we show the lightweight phrase selection pass can be moved before context encoding, resulting in a speedup of up to 16.1 times and enabling biasing to scale to 20K phrases with a maximum pre-decoding delay under 33ms. With the addition of phrase- and wordpiece-level cross-entropy losses, our technique also achieves up to a 37.5{\%} relative WER reduction over the baseline without the losses and lightweight phrase selection pass.
[ "Wu, Zelin", "Song, Gan", "Li, Christopher", "Rondon, Pat", "Meng, Zhong", "Velez, Xavier", "Wang, Weiran", "Caseiro, Diamantino", "Pundak, Golan", "Munkhdalai, Tsendsuren", "Ch", "orkar, Angad", "Prabhavalkar, Rohit" ]
Deferred NAM: Low-latency Top-K Context Injection via Deferred Context Encoding for Non-Streaming ASR
naacl-industry.26
Poster
2404.10180
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
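The core idea above, running a lightweight phrase-selection pass before the expensive context encoder, can be illustrated as follows. The cheap score here is character n-gram overlap, purely an assumption for the sketch; the paper's selector operates inside the biasing model itself.

```python
# Sketch of "select cheaply first, encode later": narrow a large biasing
# vocabulary to top-K phrases with a cheap score, then run the expensive
# context encoder only on the survivors. The scoring is an assumption.

def char_ngrams(s, n=3):
    s = s.lower()
    return {s[i:i + n] for i in range(max(len(s) - n + 1, 1))}

def cheap_score(query, phrase):
    q, p = char_ngrams(query), char_ngrams(phrase)
    return len(q & p) / max(len(q | p), 1)

def select_then_encode(partial_hypothesis, phrases, k=100, encoder=lambda p: p):
    top_k = sorted(phrases, key=lambda p: cheap_score(partial_hypothesis, p),
                   reverse=True)[:k]
    return [encoder(p) for p in top_k]  # expensive encoding deferred to K phrases

phrases = ["call mom", "call maureen o'hara", "navigate home"]
print(select_then_encode("call maur", phrases, k=1))
```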
https://aclanthology.org/2024.naacl-industry.27.bib
https://aclanthology.org/2024.naacl-industry.27/
@inproceedings{wang-etal-2024-less, title = "Less is More for Improving Automatic Evaluation of Factual Consistency", author = "Wang, Tong and Kulkarni, Ninad and Qi, Yanjun", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.27", doi = "10.18653/v1/2024.naacl-industry.27", pages = "324--334", abstract = "Assessing the factual consistency of automatically generated texts in relation to source context is crucial for developing reliable natural language generation applications. Recent literature proposes AlignScore which uses a unified alignment model to evaluate factual consistency and substantially outperforms previous methods across many benchmark tasks. In this paper, we take a closer look of datasets used in AlignScore and uncover an unexpected finding: utilizing a smaller number of data points can actually improve performance. We process the original AlignScore training dataset to remove noise, augment with robustness-enhanced samples, and utilize a subset comprising 10{\%} of the data to train an improved factual consistency evaluation model, we call LIM-RA (Less Is More for Robust AlignScore). LIM-RA demonstrates superior performance, consistently outperforming AlignScore and other strong baselines like ChatGPT across four benchmarks (two utilizing traditional natural language generation datasets and two focused on large language model outputs). Our experiments show that LIM-RA achieves the highest score on 24 of the 33 test datasets, while staying competitive on the rest, establishing the new state-of-the-art benchmarks.", }
Assessing the factual consistency of automatically generated texts in relation to source context is crucial for developing reliable natural language generation applications. Recent literature proposes AlignScore, which uses a unified alignment model to evaluate factual consistency and substantially outperforms previous methods across many benchmark tasks. In this paper, we take a closer look at the datasets used in AlignScore and uncover an unexpected finding: utilizing a smaller number of data points can actually improve performance. We process the original AlignScore training dataset to remove noise, augment it with robustness-enhanced samples, and utilize a subset comprising 10{\%} of the data to train an improved factual consistency evaluation model, which we call LIM-RA (Less Is More for Robust AlignScore). LIM-RA demonstrates superior performance, consistently outperforming AlignScore and other strong baselines like ChatGPT across four benchmarks (two utilizing traditional natural language generation datasets and two focused on large language model outputs). Our experiments show that LIM-RA achieves the highest score on 24 of the 33 test datasets, while staying competitive on the rest, establishing the new state-of-the-art.
[ "Wang, Tong", "Kulkarni, Ninad", "Qi, Yanjun" ]
Less is More for Improving Automatic Evaluation of Factual Consistency
naacl-industry.27
Poster
2404.06579
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.28.bib
https://aclanthology.org/2024.naacl-industry.28/
@inproceedings{jang-etal-2024-driftwatch, title = "{D}rift{W}atch: A Tool that Automatically Detects Data Drift and Extracts Representative Examples Affected by Drift", author = "Jang, Myeongjun and Georgiadis, Antonios and Zhao, Yiyun and Silavong, Fran", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.28", doi = "10.18653/v1/2024.naacl-industry.28", pages = "335--346", abstract = "Data drift, which denotes a misalignment between the distribution of reference (i.e., training) and production data, constitutes a significant challenge for AI applications, as it undermines the generalisation capacity of machine learning (ML) models. Therefore, it is imperative to proactively identify data drift before users meet with performance degradation. Moreover, to ensure the successful execution of AI services, endeavours should be directed not only toward detecting the occurrence of drift but also toward effectively addressing this challenge. {\%} considering the limited resources prevalent in practical industrial domains. In this work, we introduce a tool designed to detect data drift in text data. In addition, we propose an unsupervised sampling technique for extracting representative examples from drifted instances. This approach bestows a practical advantage by significantly reducing expenses associated with annotating the labels for drifted instances, an essential prerequisite for retraining the model to sustain its performance on production data.", }
Data drift, which denotes a misalignment between the distribution of reference (i.e., training) and production data, constitutes a significant challenge for AI applications, as it undermines the generalisation capacity of machine learning (ML) models. Therefore, it is imperative to proactively identify data drift before users meet with performance degradation. Moreover, to ensure the successful execution of AI services, endeavours should be directed not only toward detecting the occurrence of drift but also toward effectively addressing this challenge. In this work, we introduce a tool designed to detect data drift in text data. In addition, we propose an unsupervised sampling technique for extracting representative examples from drifted instances. This approach bestows a practical advantage by significantly reducing the expenses associated with annotating labels for drifted instances, an essential prerequisite for retraining the model to sustain its performance on production data.
[ "Jang, Myeongjun", "Georgiadis, Antonios", "Zhao, Yiyun", "Silavong, Fran" ]
DriftWatch: A Tool that Automatically Detects Data Drift and Extracts Representative Examples Affected by Drift
naacl-industry.28
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
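A hedged sketch of the two pieces the DriftWatch abstract above names: a population-level drift signal and an unsupervised way to surface representative drifted examples. Mean-embedding distance and furthest-from-centroid selection are illustrative choices, not necessarily the tool's actual scoring.

```python
# Illustrative text-data drift detection: compare reference vs. production
# embedding distributions, then surface the production examples furthest from
# the reference centroid. The embedding source is assumed, not DriftWatch's.

import numpy as np

def drift_score(ref_emb: np.ndarray, prod_emb: np.ndarray) -> float:
    # Distance between mean embeddings as a crude population-level drift signal.
    return float(np.linalg.norm(ref_emb.mean(axis=0) - prod_emb.mean(axis=0)))

def representative_drifted(prod_texts, prod_emb, ref_emb, k=5):
    centroid = ref_emb.mean(axis=0)
    dists = np.linalg.norm(prod_emb - centroid, axis=1)
    idx = np.argsort(dists)[::-1][:k]  # furthest from reference = most drifted
    return [prod_texts[i] for i in idx]

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, (200, 16))
prod = rng.normal(0.5, 1.0, (200, 16))  # shifted distribution
print(round(drift_score(ref, prod), 3))
```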
https://aclanthology.org/2024.naacl-industry.29.bib
https://aclanthology.org/2024.naacl-industry.29/
@inproceedings{marani-etal-2024-graph, title = "Graph Integrated Language Transformers for Next Action Prediction in Complex Phone Calls", author = "Marani, Amin and Schnaithmann, Ulie and Son, Youngseo and Iyer, Akil and Paldhe, Manas and Raghuvanshi, Arushi", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.29", doi = "10.18653/v1/2024.naacl-industry.29", pages = "347--358", abstract = "Current Conversational AI systems employ different machine learning pipelines, as well as external knowledge sources and business logic to predict the next action. Maintaining various components in dialogue managers{'} pipeline adds complexity in expansion and updates, increases processing time, and causes additive noise through the pipeline that can lead to incorrect next action prediction. This paper investigates graph integration into language transformers to improve understanding the relationships between humans{'} utterances, previous, and next actions without the dependency on external sources or components. Experimental analyses on real calls indicate that the proposed Graph Integrated Language Transformer models can achieve higher performance compared to other production level conversational AI systems in driving interactive calls with human users in real-world settings.", }
Current Conversational AI systems employ different machine learning pipelines, as well as external knowledge sources and business logic, to predict the next action. Maintaining various components in a dialogue manager{'}s pipeline adds complexity in expansion and updates, increases processing time, and causes additive noise through the pipeline that can lead to incorrect next action prediction. This paper investigates graph integration into language transformers to improve understanding of the relationships between humans{'} utterances and previous and next actions, without depending on external sources or components. Experimental analyses on real calls indicate that the proposed Graph Integrated Language Transformer models can achieve higher performance compared to other production-level conversational AI systems in driving interactive calls with human users in real-world settings.
[ "Marani, Amin", "Schnaithmann, Ulie", "Son, Youngseo", "Iyer, Akil", "Paldhe, Manas", "Raghuvanshi, Arushi" ]
Graph Integrated Language Transformers for Next Action Prediction in Complex Phone Calls
naacl-industry.29
Poster
2404.08155
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.30.bib
https://aclanthology.org/2024.naacl-industry.30/
@inproceedings{jia-etal-2024-leveraging, title = "Leveraging {LLM}s for Dialogue Quality Measurement", author = "Jia, Jinghan and Komma, Abi and Leffel, Timothy and Peng, Xujun and Nagesh, Ajay and Soliman, Tamer and Galstyan, Aram and Kumar, Anoop", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.30", doi = "10.18653/v1/2024.naacl-industry.30", pages = "359--367", abstract = "In task-oriented conversational AI evaluation, unsupervised methods poorly correlate with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero- and few-shot capabilities across NLP tasks. Our paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Manipulating factors such as model size, in-context examples, and selection techniques, we examine {``}chain-of-thought{''} (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection,; (3) CoT reasoning where an LLM is asked to provide justifications before outputting final labels improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. In addition, we find that suitably tuned LLMs exhibit high accuracy in dialogue evaluation compared to human judgments.", }
In task-oriented conversational AI evaluation, unsupervised methods poorly correlate with human judgments, and supervised approaches lack generalization. Recent advances in large language models (LLMs) show robust zero- and few-shot capabilities across NLP tasks. Our paper explores using LLMs for automated dialogue quality evaluation, experimenting with various configurations on public and proprietary datasets. Manipulating factors such as model size, in-context examples, and selection techniques, we examine {``}chain-of-thought{''} (CoT) reasoning and label extraction procedures. Our results show that (1) larger models yield more accurate dialogue labels; (2) algorithmic selection of in-context examples outperforms random selection; (3) CoT reasoning, where an LLM is asked to provide justifications before outputting final labels, improves performance; and (4) fine-tuned LLMs outperform out-of-the-box ones. In addition, we find that suitably tuned LLMs exhibit high accuracy in dialogue evaluation compared to human judgments.
[ "Jia, Jinghan", "Komma, Abi", "Leffel, Timothy", "Peng, Xujun", "Nagesh, Ajay", "Soliman, Tamer", "Galstyan, Aram", "Kumar, Anoop" ]
Leveraging LLMs for Dialogue Quality Measurement
naacl-industry.30
Poster
2406.17304
[ "" ]
https://huggingface.co/papers/2406.17304
1
0
0
8
1
[]
[]
[]
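The abstract above pairs CoT reasoning with a label extraction procedure. Below is a minimal sketch of only the extraction step, assuming a prompt that instructs the model to end its justification with "Rating: <1-5>"; that output contract is an assumption, not the paper's actual format.

```python
# Sketch of extracting a final categorical label from free-form CoT output.
# The "Rating: <1-5>" convention is an assumed prompt contract.

import re

def extract_rating(cot_output: str):
    match = re.search(r"Rating:\s*([1-5])", cot_output)
    return int(match.group(1)) if match else None  # None signals a parse failure

print(extract_rating("The agent resolved the task politely... Rating: 4"))
```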
https://aclanthology.org/2024.naacl-industry.31.bib
https://aclanthology.org/2024.naacl-industry.31/
@inproceedings{mora-cross-calderon-ramirez-2024-uncertainty, title = "Uncertainty Estimation in Large Language Models to Support Biodiversity Conservation", author = "Mora-Cross, Maria and Calderon-Ramirez, Saul", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.31", doi = "10.18653/v1/2024.naacl-industry.31", pages = "368--378", abstract = "Large Language Models (LLM) provide significant value in question answering (QA) scenarios and have practical application in complex decision-making contexts, such as biodiversity conservation. However, despite substantial performance improvements, they may still produce inaccurate outcomes. Consequently, incorporating uncertainty quantification alongside predictions is essential for mitigating the potential risks associated with their use. This study introduces an exploratory analysis of the application of Monte Carlo Dropout (MCD) and Expected Calibration Error (ECE) to assess the uncertainty of generative language models. To that end, we analyzed two publicly available language models (Falcon-7B and DistilGPT-2). Our findings suggest the viability of employing ECE as a metric to estimate uncertainty in generative LLM. The findings from this research contribute to a broader project aiming at facilitating free and open access to standardized and integrated data and services about Costa Rica{'}s biodiversity to support the development of science, education, and biodiversity conservation.", }
Large Language Models (LLMs) provide significant value in question answering (QA) scenarios and have practical application in complex decision-making contexts, such as biodiversity conservation. However, despite substantial performance improvements, they may still produce inaccurate outcomes. Consequently, incorporating uncertainty quantification alongside predictions is essential for mitigating the potential risks associated with their use. This study introduces an exploratory analysis of the application of Monte Carlo Dropout (MCD) and Expected Calibration Error (ECE) to assess the uncertainty of generative language models. To that end, we analyzed two publicly available language models (Falcon-7B and DistilGPT-2). Our findings suggest the viability of employing ECE as a metric to estimate uncertainty in generative LLMs. The findings from this research contribute to a broader project aiming at facilitating free and open access to standardized and integrated data and services about Costa Rica{'}s biodiversity to support the development of science, education, and biodiversity conservation.
[ "Mora-Cross, Maria", "Calderon-Ramirez, Saul" ]
Uncertainty Estimation in Large Language Models to Support Biodiversity Conservation
naacl-industry.31
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
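As a concrete companion to the abstract above, here is a standard Expected Calibration Error computation; with Monte Carlo Dropout, the confidence for each prediction would come from averaging softmax outputs over several stochastic forward passes. The toy inputs are invented.

```python
# Standard binned Expected Calibration Error (ECE). Confidences/correctness
# below are toy values; with MCD they would come from averaged stochastic passes.

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    ece, n = 0.0, len(confidences)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # Gap between empirical accuracy and mean confidence in this bin,
            # weighted by the bin's share of the samples.
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += (mask.sum() / n) * gap
    return ece

print(expected_calibration_error([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))
```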
https://aclanthology.org/2024.naacl-industry.32.bib
https://aclanthology.org/2024.naacl-industry.32/
@inproceedings{wang-etal-2024-ama, title = "{AMA}-{LSTM}: Pioneering Robust and Fair Financial Audio Analysis for Stock Volatility Prediction", author = "Wang, Shengkun and Ji, Taoran and He, Jianfeng and ALMutairi, Mariam and Wang, Dan and Wang, Linhan and Zhang, Min and Lu, Chang-Tien", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.32", doi = "10.18653/v1/2024.naacl-industry.32", pages = "379--386", abstract = "Stock volatility prediction is an important task in the financial industry. Recent multimodal methods have shown advanced results by combining text and audio information, such as earnings calls. However, these multimodal methods have faced two drawbacks. First, they often fail to yield reliable models and overfit the data due to their absorption of stochastic information from the stock market. Moreover, using multimodal models to predict stock volatility suffers from gender bias and lacks an efficient way to eliminate such bias. To address these aforementioned problems, we use adversarial training to generate perturbations that simulate the inherent stochasticity and bias, by creating areas resistant to random information around the input space to improve model robustness and fairness. Our comprehensive experiments on two real-world financial audio datasets reveal that this method exceeds the performance of current state-of-the-art solution. This confirms the value of adversarial training in reducing stochasticity and bias for stock volatility prediction tasks.", }
Stock volatility prediction is an important task in the financial industry. Recent multimodal methods have shown advanced results by combining text and audio information, such as earnings calls. However, these multimodal methods have faced two drawbacks. First, they often fail to yield reliable models and overfit the data due to their absorption of stochastic information from the stock market. Moreover, using multimodal models to predict stock volatility suffers from gender bias and lacks an efficient way to eliminate such bias. To address these problems, we use adversarial training to generate perturbations that simulate the inherent stochasticity and bias, by creating areas resistant to random information around the input space to improve model robustness and fairness. Our comprehensive experiments on two real-world financial audio datasets reveal that this method exceeds the performance of the current state-of-the-art solution. This confirms the value of adversarial training in reducing stochasticity and bias for stock volatility prediction tasks.
[ "Wang, Shengkun", "Ji, Taoran", "He, Jianfeng", "ALMutairi, Mariam", "Wang, Dan", "Wang, Linhan", "Zhang, Min", "Lu, Chang-Tien" ]
AMA-LSTM: Pioneering Robust and Fair Financial Audio Analysis for Stock Volatility Prediction
naacl-industry.32
Poster
2407.18324
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
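One common way to realize the adversarial training described above is an FGSM-style perturbation of the input features; the paper's exact perturbation scheme may differ, and the toy model below is only a stand-in for the multimodal predictor.

```python
# Illustrative FGSM-style input perturbation for adversarial training; the
# paper's actual scheme may differ. The linear model and data are toys.

import torch

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.01):
    x = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x), y)
    loss.backward()
    # Step in the direction that most increases the loss, bounded by epsilon.
    return (x + epsilon * x.grad.sign()).detach()

model = torch.nn.Linear(4, 1)
x, y = torch.randn(8, 4), torch.randn(8, 1)
x_adv = fgsm_perturb(model, x, y, torch.nn.functional.mse_loss)
# Training on both x and x_adv pushes the model to resist small stochastic shifts.
print(x_adv.shape)
```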
https://aclanthology.org/2024.naacl-industry.33.bib
https://aclanthology.org/2024.naacl-industry.33/
@inproceedings{fu-etal-2024-tiny, title = "Tiny Titans: Can Smaller Large Language Models Punch Above Their Weight in the Real World for Meeting Summarization?", author = "Fu, Xue-Yong and Laskar, Md Tahmid Rahman and Khasanova, Elena and Chen, Cheng and Tn, Shashi", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.33", doi = "10.18653/v1/2024.naacl-industry.33", pages = "387--394", abstract = "Large Language Models (LLMs) have demonstrated impressive capabilities to solve a wide range of tasks without being explicitly fine-tuned on task-specific datasets. However, deploying LLMs in the real world is not trivial, as it requires substantial computing resources. In this paper, we investigate whether smaller, Compact LLMs are a good alternative to the comparatively Larger LLMs to address significant costs associated with utilizing LLMs in the real world. In this regard, we study the meeting summarization task in a real-world industrial environment and conduct extensive experiments by comparing the performance of fine-tuned compact LLMs (FLAN-T5, TinyLLaMA, LiteLLaMA, etc.) with zero-shot larger LLMs (LLaMA-2, GPT-3.5, PaLM-2). We observe that most smaller LLMs, even after fine-tuning, fail to outperform larger zero-shot LLMs in meeting summarization datasets. However, a notable exception is FLAN-T5 (780M parameters), which achieves performance on par with zero-shot Larger LLMs (from 7B to above 70B parameters), while being significantly smaller. This makes compact LLMs like FLAN-T5 a suitable cost-efficient LLM for real-world industrial deployment.", }
Large Language Models (LLMs) have demonstrated impressive capabilities in solving a wide range of tasks without being explicitly fine-tuned on task-specific datasets. However, deploying LLMs in the real world is not trivial, as it requires substantial computing resources. In this paper, we investigate whether smaller, compact LLMs are a good alternative to comparatively larger LLMs, to address the significant costs of using LLMs in the real world. To this end, we study the meeting summarization task in a real-world industrial environment and conduct extensive experiments comparing the performance of fine-tuned compact LLMs (FLAN-T5, TinyLLaMA, LiteLLaMA, etc.) with zero-shot larger LLMs (LLaMA-2, GPT-3.5, PaLM-2). We observe that most smaller LLMs, even after fine-tuning, fail to outperform larger zero-shot LLMs on meeting summarization datasets. However, a notable exception is FLAN-T5 (780M parameters), which achieves performance on par with zero-shot larger LLMs (from 7B to above 70B parameters) while being significantly smaller. This makes compact LLMs like FLAN-T5 a suitable cost-efficient choice for real-world industrial deployment.
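To make the compact-model setting concrete, the snippet below runs zero-shot meeting summarization with google/flan-t5-large (the ~780M size class highlighted in the abstract) via Hugging Face Transformers. The transcript and generation settings are illustrative; the paper's fine-tuning setup is not reproduced here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# "google/flan-t5-large" is the ~780M-parameter checkpoint class the abstract highlights.
tok = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

transcript = (
    "Alice: The launch slips to May. Bob: QA still needs two weeks. "
    "Carol: Then we freeze features on Friday."
)
prompt = "Summarize the following meeting transcript:\n" + transcript
inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
summary_ids = model.generate(**inputs, max_new_tokens=96, num_beams=4)
print(tok.decode(summary_ids[0], skip_special_tokens=True))
```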
[ "Fu, Xue-Yong", "Laskar, Md Tahmid Rahman", "Khasanova, Elena", "Chen, Cheng", "Tn, Shashi" ]
Tiny Titans: Can Smaller Large Language Models Punch Above Their Weight in the Real World for Meeting Summarization?
naacl-industry.33
Poster
2402.00841
[ "" ]
https://huggingface.co/papers/2402.00841
0
0
0
5
1
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.34.bib
https://aclanthology.org/2024.naacl-industry.34/
@inproceedings{munoz-etal-2024-shears, title = "Shears: Unstructured Sparsity with Neural Low-rank Adapter Search", author = "Munoz, Juan and Yuan, Jinjie and Jain, Nilesh", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.34", doi = "10.18653/v1/2024.naacl-industry.34", pages = "395--405", abstract = "Recently, several approaches successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), allowing the parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce a novel approach called Shears, demonstrating how the integration of cost-effective sparsity and a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT approaches. Results demonstrate the benefits of Shears compared to other methods, reaching high sparsity levels while improving or with little drop in accuracy, utilizing a single GPU for a pair of hours.", }
Recently, several approaches have successfully demonstrated that weight-sharing Neural Architecture Search (NAS) can effectively explore a search space of elastic low-rank adapters (LoRA), enabling parameter-efficient fine-tuning (PEFT) and compression of large language models. In this paper, we introduce a novel approach called Shears, demonstrating how the integration of cost-effective sparsity and a proposed Neural Low-rank adapter Search (NLS) algorithm can further improve the efficiency of PEFT approaches. Results demonstrate the benefits of Shears compared to other methods, reaching high sparsity levels while improving accuracy or incurring only a small drop, using a single GPU for a couple of hours.
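A minimal sketch of the two ingredients the abstract combines: unstructured magnitude sparsity on a frozen base layer, plus a LoRA adapter whose active rank can be shrunk, as a rank search would do. Class names, ranks, and the pruning criterion are assumptions for illustration, not Shears' actual implementation.

```python
import torch
import torch.nn as nn

def magnitude_prune_(weight: torch.Tensor, sparsity: float = 0.5) -> None:
    """Unstructured sparsity: zero the smallest-magnitude weights in place."""
    k = int(weight.numel() * sparsity)
    if k > 0:
        thresh = weight.abs().flatten().kthvalue(k).values
        weight.data.masked_fill_(weight.abs() <= thresh, 0.0)

class ElasticLoRA(nn.Module):
    """A LoRA adapter whose active rank can be shrunk, so a search procedure
    can try different ranks per layer without retraining from scratch."""
    def __init__(self, base: nn.Linear, max_rank: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # frozen (and pruned) base weights
        self.A = nn.Parameter(torch.randn(max_rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, max_rank))
        self.active_rank = max_rank            # mutated by the rank search

    def forward(self, x):
        A = self.A[: self.active_rank]
        B = self.B[:, : self.active_rank]
        return self.base(x) + x @ A.t() @ B.t()

layer = nn.Linear(256, 256)
magnitude_prune_(layer.weight, sparsity=0.5)
adapter = ElasticLoRA(layer, max_rank=16)
adapter.active_rank = 8                        # one candidate in the rank search space
out = adapter(torch.randn(4, 256))
```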
[ "Munoz, Juan", "Yuan, Jinjie", "Jain, Nilesh" ]
Shears: Unstructured Sparsity with Neural Low-rank Adapter Search
naacl-industry.34
Poster
2404.10934
[ "https://github.com/intellabs/hardware-aware-automated-machine-learning" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.35.bib
https://aclanthology.org/2024.naacl-industry.35/
@inproceedings{lee-etal-2024-tree, title = "Tree-of-Question: Structured Retrieval Framework for {K}orean Question Answering Systems", author = "Lee, Dongyub and Jeong, Younghun and Kim, Hwa-Yeon and Yu, Hongyeon and Han, Seunghyun and Whang, Taesun and Cho, Seungwoo and Lee, Chanhee and Lee, Gunsu and Kim, Youngbum", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.35", doi = "10.18653/v1/2024.naacl-industry.35", pages = "406--418", abstract = "We introduce Korean language-specific RAG-based QA systems, primarily through the innovative Tree-of-Question (ToQ) methodology and enhanced query generation techniques. We address the complex, multi-hop nature of real-world questions by effectively integrating advanced LLMs with nuanced query planning. Our comprehensive evaluations, including a newly created Korean multi-hop QA dataset, demonstrate our method{'}s ability to elevate response validity and accuracy, especially in deeper levels of reasoning. This paper not only showcases significant progress in handling the intricacies of Korean linguistic structures but also sets a new standard in the development of context-aware and linguistically sophisticated QA systems.", }
We introduce Korean language-specific RAG-based QA systems, primarily through the innovative Tree-of-Question (ToQ) methodology and enhanced query generation techniques. We address the complex, multi-hop nature of real-world questions by effectively integrating advanced LLMs with nuanced query planning. Our comprehensive evaluations, including a newly created Korean multi-hop QA dataset, demonstrate our method's ability to elevate response validity and accuracy, especially in deeper levels of reasoning. This paper not only showcases significant progress in handling the intricacies of Korean linguistic structures but also sets a new standard in the development of context-aware and linguistically sophisticated QA systems.
[ "Lee, Dongyub", "Jeong, Younghun", "Kim, Hwa-Yeon", "Yu, Hongyeon", "Han, Seunghyun", "Whang, Taesun", "Cho, Seungwoo", "Lee, Chanhee", "Lee, Gunsu", "Kim, Youngbum" ]
Tree-of-Question: Structured Retrieval Framework for Korean Question Answering Systems
naacl-industry.35
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.36.bib
https://aclanthology.org/2024.naacl-industry.36/
@inproceedings{mok-etal-2024-llm, title = "{LLM}-based Frameworks for {API} Argument Filling in Task-Oriented Conversational Systems", author = "Mok, Jisoo and Kachuee, Mohammad and Dai, Shuyang and Ray, Shayan and Taghavi, Tara and Yoon, Sungroh", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.36", doi = "10.18653/v1/2024.naacl-industry.36", pages = "419--426", abstract = "Task-orientated conversational agents interact with users and assist them via leveraging external APIs. A typical task-oriented conversational system can be broken down into three phases: external API selection, argument filling, and response generation. The focus of our work is the task of argument filling, which is in charge of accurately providing arguments required by the selected API. Upon comprehending the dialogue history and the pre-defined API schema, the argument filling task is expected to provide the external API with the necessary information to generate a desirable agent action. In this paper, we study the application of Large Language Models (LLMs) for the problem of API argument filling task. Our initial investigation reveals that LLMs require an additional grounding process to successfully perform argument filling, inspiring us to design training and prompting frameworks to ground their responses. Our experimental results demonstrate that when paired with proposed techniques, the argument filling performance of LLMs noticeably improves, paving a new way toward building an automated argument filling framework.", }
Task-oriented conversational agents interact with users and assist them by leveraging external APIs. A typical task-oriented conversational system can be broken down into three phases: external API selection, argument filling, and response generation. The focus of our work is the argument filling task, which is responsible for accurately providing the arguments required by the selected API. By comprehending the dialogue history and the pre-defined API schema, the argument filling task is expected to provide the external API with the information necessary to generate a desirable agent action. In this paper, we study the application of Large Language Models (LLMs) to the API argument filling task. Our initial investigation reveals that LLMs require an additional grounding process to successfully perform argument filling, inspiring us to design training and prompting frameworks to ground their responses. Our experimental results demonstrate that, when paired with the proposed techniques, the argument filling performance of LLMs noticeably improves, paving a new way toward building an automated argument filling framework.
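A minimal sketch of what "grounding" an LLM for argument filling might look like: the prompt restates the API schema, and the model output is post-filtered to the declared arguments. The book_flight schema, prompt wording, and helper names are hypothetical, not the paper's actual framework.

```python
import json

# Hypothetical API schema and dialogue; real systems define their own schemas.
API_SCHEMA = {
    "name": "book_flight",
    "arguments": {"origin": "IATA airport code",
                  "destination": "IATA airport code",
                  "date": "departure date, YYYY-MM-DD"},
}

def build_argument_filling_prompt(dialogue: str, schema: dict) -> str:
    """Ground the LLM: restate the schema and constrain the output to a JSON
    object containing only the declared arguments."""
    return (
        "You fill arguments for the selected API using only the dialogue.\n"
        f"API: {schema['name']}\n"
        f"Arguments: {json.dumps(schema['arguments'])}\n"
        f"Dialogue:\n{dialogue}\n"
        "Respond with a JSON object whose keys are exactly the declared arguments; "
        "use null for anything the dialogue does not specify."
    )

def parse_arguments(llm_output: str, schema: dict) -> dict:
    """Keep only declared keys, discarding any hallucinated extras."""
    filled = json.loads(llm_output)
    return {k: filled.get(k) for k in schema["arguments"]}

prompt = build_argument_filling_prompt(
    "User: fly me from JFK to SFO on 2024-07-01", API_SCHEMA)
print(prompt)
```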
[ "Mok, Jisoo", "Kachuee, Mohammad", "Dai, Shuyang", "Ray, Shayan", "Taghavi, Tara", "Yoon, Sungroh" ]
LLM-based Frameworks for API Argument Filling in Task-Oriented Conversational Systems
naacl-industry.36
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.37.bib
https://aclanthology.org/2024.naacl-industry.37/
@inproceedings{kanchinadam-shaheen-2024-large, title = "Large Language Models Encode the Practice of Medicine", author = "Kanchinadam, Teja and Shaheen, Gauher", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.37", doi = "10.18653/v1/2024.naacl-industry.37", pages = "427--436", abstract = "Healthcare tasks such as predicting clinical outcomes across medical and surgical populations, disease prediction, predicting patient health journeys, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of billions of administrative claims, which essentially encapsulates the practice of medicine, offering a unique perspective on patient care and treatment patterns. Our model, MediClaimGPT, a 125M parameter Transformer demonstrates strong zero-shot predictive capabilities, accurately forecasting patient health events across four evaluation datasets, with its capabilities further demonstrated in various downstream tasks. A significant application of MediClaimGPT is in generating high-quality, clinically plausible synthetic claims data, enhancing healthcare data utility while preserving patient privacy. This research underscores the potential of language models in handling complex datasets and their strategic application in healthcare and related fields.", }
Healthcare tasks such as predicting clinical outcomes across medical and surgical populations, predicting disease, and predicting patient health journeys are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of billions of administrative claims, which essentially encapsulates the practice of medicine and offers a unique perspective on patient care and treatment patterns. Our model, MediClaimGPT, a 125M-parameter Transformer, demonstrates strong zero-shot predictive capabilities, accurately forecasting patient health events across four evaluation datasets, with its capabilities further demonstrated in various downstream tasks. A significant application of MediClaimGPT is generating high-quality, clinically plausible synthetic claims data, enhancing healthcare data utility while preserving patient privacy. This research underscores the potential of language models in handling complex datasets and their strategic application in healthcare and related fields.
[ "Kanchinadam, Teja", "Shaheen, Gauher" ]
Large Language Models Encode the Practice of Medicine
naacl-industry.37
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.38.bib
https://aclanthology.org/2024.naacl-industry.38/
@inproceedings{vedula-etal-2024-leveraging, title = "Leveraging Interesting Facts to Enhance User Engagement with Conversational Interfaces", author = "Vedula, Nikhita and Castellucci, Giuseppe and Agichtein, Eugene and Rokhlenko, Oleg and Malmasi, Shervin", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.38", doi = "10.18653/v1/2024.naacl-industry.38", pages = "437--446", abstract = "Conversational Task Assistants (CTAs) guide users in performing a multitude of activities, such as making recipes. However, ensuring that interactions remain engaging, interesting, and enjoyable for CTA users is not trivial, especially for time-consuming or challenging tasks. Grounded in psychological theories of human interest, we propose to engage users with contextual and interesting statements or facts during interactions with a multi-modal CTA, to reduce fatigue and task abandonment before a task is complete. To operationalize this idea, we train a high-performing classifier (82{\%} F1-score) to automatically identify relevant and interesting facts for users. We use it to create an annotated dataset of task-specific interesting facts for the domain of cooking. Finally, we design and validate a dialogue policy to incorporate the identified relevant and interesting facts into a conversation, to improve user engagement and task completion. Live testing on a leading multi-modal voice assistant shows that 66{\%} of the presented facts were received positively, leading to a 40{\%} gain in the user satisfaction rating, and a 37{\%} increase in conversation length. These findings emphasize that strategically incorporating interesting facts into the CTA experience can promote real-world user participation for guided task interactions.", }
Conversational Task Assistants (CTAs) guide users in performing a multitude of activities, such as making recipes. However, ensuring that interactions remain engaging, interesting, and enjoyable for CTA users is not trivial, especially for time-consuming or challenging tasks. Grounded in psychological theories of human interest, we propose to engage users with contextual and interesting statements or facts during interactions with a multi-modal CTA, to reduce fatigue and task abandonment before a task is complete. To operationalize this idea, we train a high-performing classifier (82% F1-score) to automatically identify relevant and interesting facts for users. We use it to create an annotated dataset of task-specific interesting facts for the domain of cooking. Finally, we design and validate a dialogue policy to incorporate the identified relevant and interesting facts into a conversation, to improve user engagement and task completion. Live testing on a leading multi-modal voice assistant shows that 66% of the presented facts were received positively, leading to a 40% gain in the user satisfaction rating, and a 37% increase in conversation length. These findings emphasize that strategically incorporating interesting facts into the CTA experience can promote real-world user participation for guided task interactions.
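A toy stand-in for the interestingness classifier described above, using TF-IDF features and logistic regression. The real system's features, model, and training data are not shown in the abstract, so the facts and labels below are invented purely for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; the real dataset is task-specific cooking facts.
facts = [
    "Honey found in ancient tombs is still edible after 3000 years.",
    "Flour is made from wheat.",
    "Tomatoes were once thought to be poisonous in Europe.",
    "A pot is a container used for cooking.",
]
labels = [1, 0, 1, 0]          # 1 = interesting/relevant, 0 = not

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(facts, labels)
print(clf.predict_proba(["Carrots were originally purple."])[:, 1])
```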
[ "Vedula, Nikhita", "Castellucci, Giuseppe", "Agichtein, Eugene", "Rokhlenko, Oleg", "Malmasi, Shervin" ]
Leveraging Interesting Facts to Enhance User Engagement with Conversational Interfaces
naacl-industry.38
Poster
2404.06659
[ "https://github.com/vnik18/cta-interesting-facts" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.39.bib
https://aclanthology.org/2024.naacl-industry.39/
@inproceedings{nakayama-etal-2024-search, title = "Search Query Refinement for {J}apanese Named Entity Recognition in {E}-commerce Domain", author = "Nakayama, Yuki and Tatsushima, Ryutaro and Mendieta, Erick and Murakami, Koji and Shinzato, Keiji", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.39", doi = "10.18653/v1/2024.naacl-industry.39", pages = "447--452", abstract = "In the E-Commerce domain, search query refinement reformulates malformed queries into canonicalized forms by preprocessing operations such as {``}term splitting{''} and {``}term merging{''}. Unfortunately, most relevant research is rather limited to English. In particular, there is a severe lack of study on search query refinement for the Japanese language. Furthermore, no attempt has ever been made to apply refinement methods to data improvement for downstream NLP tasks in real-world scenarios.This paper presents a novel query refinement approach for the Japanese language. Experimental results show that our method achieves significant improvement by 3.5 points through comparison with BERT-CRF as a baseline. Further experiments are also conducted to measure beneficial impact of query refinement on named entity recognition (NER) as the downstream task. Evaluations indicate that the proposed query refinement method contributes to better data quality, leading to performance boost on E-Commerce specific NER tasks by 11.7 points, compared to search query data preprocessed by MeCab, a very popularly adopted Japanese tokenizer.", }
In the E-Commerce domain, search query refinement reformulates malformed queries into canonicalized forms through preprocessing operations such as "term splitting" and "term merging". Unfortunately, most relevant research is limited to English. In particular, there is a severe lack of study on search query refinement for the Japanese language. Furthermore, no attempt has ever been made to apply refinement methods to data improvement for downstream NLP tasks in real-world scenarios. This paper presents a novel query refinement approach for the Japanese language. Experimental results show that our method achieves a significant improvement of 3.5 points over a BERT-CRF baseline. Further experiments measure the beneficial impact of query refinement on named entity recognition (NER) as the downstream task. Evaluations indicate that the proposed query refinement method contributes to better data quality, leading to a performance boost of 11.7 points on E-Commerce-specific NER tasks, compared to search query data preprocessed by MeCab, a widely adopted Japanese tokenizer.
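One plausible reading of "term merging" as a preprocessing operation: greedily join adjacent tokens whose concatenation appears in a known catalog lexicon. This sketch is an assumption about the mechanism, not the paper's actual algorithm.

```python
def merge_terms(tokens, lexicon):
    """Greedy 'term merging': join adjacent tokens whose concatenation is a
    known catalog term (over-segmentation is common for Japanese queries)."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and tokens[i] + tokens[i + 1] in lexicon:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

print(merge_terms(["i", "phone", "15", "case"], {"iphone", "iphone15"}))
# -> ['iphone', '15', 'case']
```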
[ "Nakayama, Yuki", "Tatsushima, Ryutaro", "Mendieta, Erick", "Murakami, Koji", "Shinzato, Keiji" ]
Search Query Refinement for Japanese Named Entity Recognition in E-commerce Domain
naacl-industry.39
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.40.bib
https://aclanthology.org/2024.naacl-industry.40/
@inproceedings{zou-etal-2024-eiven, title = "{EIVEN}: Efficient Implicit Attribute Value Extraction using Multimodal {LLM}", author = "Zou, Henry and Yu, Gavin and Fan, Ziwei and Bu, Dan and Liu, Han and Dai, Peng and Jia, Dongmei and Caragea, Cornelia", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.40", doi = "10.18653/v1/2024.naacl-industry.40", pages = "453--463", abstract = "In e-commerce, accurately extracting product attribute values from multimodal data is crucial for improving user experience and operational efficiency of retailers. However, previous approaches to multimodal attribute value extraction often struggle with implicit attribute values embedded in images or text, rely heavily on extensive labeled data, and can easily confuse similar attribute values. To address these issues, we introduce EIVEN, a data- and parameter-efficient generative framework that pioneers the use of multimodal LLM for implicit attribute value extraction. EIVEN leverages the rich inherent knowledge of a pre-trained LLM and vision encoder to reduce reliance on labeled data. We also introduce a novel Learning-by-Comparison technique to reduce model confusion by enforcing attribute value comparison and difference identification. Additionally, we construct initial open-source datasets for multimodal implicit attribute value extraction. Our extensive experiments reveal that EIVEN significantly outperforms existing methods in extracting implicit attribute values while requiring less labeled data.", }
In e-commerce, accurately extracting product attribute values from multimodal data is crucial for improving user experience and operational efficiency of retailers. However, previous approaches to multimodal attribute value extraction often struggle with implicit attribute values embedded in images or text, rely heavily on extensive labeled data, and can easily confuse similar attribute values. To address these issues, we introduce EIVEN, a data- and parameter-efficient generative framework that pioneers the use of multimodal LLM for implicit attribute value extraction. EIVEN leverages the rich inherent knowledge of a pre-trained LLM and vision encoder to reduce reliance on labeled data. We also introduce a novel Learning-by-Comparison technique to reduce model confusion by enforcing attribute value comparison and difference identification. Additionally, we construct initial open-source datasets for multimodal implicit attribute value extraction. Our extensive experiments reveal that EIVEN significantly outperforms existing methods in extracting implicit attribute values while requiring less labeled data.
[ "Zou, Henry", "Yu, Gavin", "Fan, Ziwei", "Bu, Dan", "Liu, Han", "Dai, Peng", "Jia, Dongmei", "Caragea, Cornelia" ]
EIVEN: Efficient Implicit Attribute Value Extraction using Multimodal LLM
naacl-industry.40
Poster
2404.08886
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.41.bib
https://aclanthology.org/2024.naacl-industry.41/
@inproceedings{min-etal-2024-exploring, title = "Exploring the Impact of Table-to-Text Methods on Augmenting {LLM}-based Question Answering with Domain Hybrid Data", author = "Min, Dehai and Hu, Nan and Jin, Rihui and Lin, Nuo and Chen, Jiaoyan and Chen, Yongrui and Li, Yu and Qi, Guilin and Li, Yun and Li, Nijun and Wang, Qianren", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.41", doi = "10.18653/v1/2024.naacl-industry.41", pages = "464--482", abstract = "Augmenting Large Language Models (LLMs) for Question Answering (QA) with domain specific data has attracted wide attention. However, domain data often exists in a hybrid format, including text and semi-structured tables, posing challenges for the seamless integration of information. Table-to-Text Generation is a promising solution by facilitating the transformation of hybrid data into a uniformly text-formatted corpus. Although this technique has been widely studied by the NLP community, there is currently no comparative analysis on how corpora generated by different table-to-text methods affect the performance of QA systems.In this paper, we address this research gap in two steps. First, we innovatively integrate table-to-text generation into the framework of enhancing LLM-based QA systems with domain hybrid data. Then, we utilize this framework in real-world industrial data to conduct extensive experiments on two types of QA systems (DSFT and RAG frameworks) with four representative methods: Markdown format, Template serialization, TPLM-based method, and LLM-based method. Based on the experimental results, we draw some empirical findings and explore the underlying reasons behind the success of some methods. We hope the findings of this work will provide a valuable reference for the academic and industrial communities in developing robust QA systems.", }
Augmenting Large Language Models (LLMs) for Question Answering (QA) with domain-specific data has attracted wide attention. However, domain data often exists in a hybrid format, including text and semi-structured tables, posing challenges for the seamless integration of information. Table-to-text generation is a promising solution, facilitating the transformation of hybrid data into a uniformly text-formatted corpus. Although this technique has been widely studied by the NLP community, there is currently no comparative analysis of how corpora generated by different table-to-text methods affect the performance of QA systems. In this paper, we address this research gap in two steps. First, we integrate table-to-text generation into the framework of enhancing LLM-based QA systems with domain hybrid data. Then, we apply this framework to real-world industrial data and conduct extensive experiments on two types of QA systems (DSFT and RAG frameworks) with four representative methods: Markdown format, template serialization, a TPLM-based method, and an LLM-based method. Based on the experimental results, we draw empirical findings and explore the underlying reasons behind the success of some methods. We hope the findings of this work will provide a valuable reference for the academic and industrial communities in developing robust QA systems.
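Two of the four serialization methods named above are simple enough to sketch directly: Markdown format and template serialization. The helper names and template wording are illustrative choices, not the paper's exact templates.

```python
def table_to_markdown(header, rows):
    """Markdown-format serialization of a table."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    return "\n".join(lines)

def table_to_template(header, rows):
    """Template serialization: one declarative sentence per cell."""
    return " ".join(
        f"The {col} of record {i + 1} is {val}."
        for i, row in enumerate(rows)
        for col, val in zip(header, row)
    )

header, rows = ["model", "accuracy"], [["GPT-3.5", 0.81], ["Qwen", 0.84]]
print(table_to_markdown(header, rows))
print(table_to_template(header, rows))
```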
[ "Min, Dehai", "Hu, Nan", "Jin, Rihui", "Lin, Nuo", "Chen, Jiaoyan", "Chen, Yongrui", "Li, Yu", "Qi, Guilin", "Li, Yun", "Li, Nijun", "Wang, Qianren" ]
Exploring the Impact of Table-to-Text Methods on Augmenting LLM-based Question Answering with Domain Hybrid Data
naacl-industry.41
Poster
2402.12869
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.42.bib
https://aclanthology.org/2024.naacl-industry.42/
@inproceedings{zhang-etal-2024-solving, title = "Solving General Natural-Language-Description Optimization Problems with Large Language Models", author = "Zhang, Jihai and Wang, Wei and Guo, Siyan and Wang, Li and Lin, Fangquan and Yang, Cheng and Yin, Wotao", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.42", doi = "10.18653/v1/2024.naacl-industry.42", pages = "483--490", abstract = "Optimization problems seek to find the best solution to an objective under a set of constraints, and have been widely investigated in real-world applications. Modeling and solving optimization problems in a specific domain typically require a combination of domain knowledge, mathematical skills, and programming ability, making it difficult for general users and even domain professionals. In this paper, we propose a novel framework called OptLLM that augments LLMs with external solvers. Specifically, OptLLM accepts user queries in natural language, convert them into mathematical formulations and programming codes, and calls the solvers to calculate the results for decision-making. In addition, OptLLM supports multi-round dialogues to gradually refine the modeling and solving of optimization problems. To illustrate the effectiveness of OptLLM, we provide tutorials on three typical optimization applications and conduct experiments on both prompt-based GPT models and a fine-tuned Qwen model using a large-scale self-developed optimization dataset. Experimental results show that OptLLM works with various LLMs, and the fine-tuned model achieves an accuracy boost compared to the prompt-based models. Some features of OptLLM framework have been available for trial since June 2023 (https://opt.alibabacloud.com/chat or https://opt.aliyun.com/chat).", }
Optimization problems seek the best solution to an objective under a set of constraints and have been widely investigated in real-world applications. Modeling and solving optimization problems in a specific domain typically requires a combination of domain knowledge, mathematical skills, and programming ability, making it difficult for general users and even domain professionals. In this paper, we propose a novel framework called OptLLM that augments LLMs with external solvers. Specifically, OptLLM accepts user queries in natural language, converts them into mathematical formulations and program code, and calls the solvers to calculate the results for decision-making. In addition, OptLLM supports multi-round dialogues to gradually refine the modeling and solving of optimization problems. To illustrate the effectiveness of OptLLM, we provide tutorials on three typical optimization applications and conduct experiments on both prompt-based GPT models and a fine-tuned Qwen model using a large-scale self-developed optimization dataset. Experimental results show that OptLLM works with various LLMs, and the fine-tuned model achieves an accuracy boost compared to the prompt-based models. Some features of the OptLLM framework have been available for trial since June 2023 (https://opt.alibabacloud.com/chat or https://opt.aliyun.com/chat).
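A minimal sketch of the LLM-plus-solver loop: assume the LLM has already emitted a structured formulation for a natural-language query; the program then hands it to an external solver (here SciPy's linprog). The formulation dict is a hypothetical LLM output format, not OptLLM's actual interface.

```python
from scipy.optimize import linprog

# Hypothetical structured output an LLM might emit for the query:
# "Maximize 3x + 2y subject to x + y <= 4 and x <= 2, with x, y >= 0."
formulation = {
    "maximize": [3.0, 2.0],
    "A_ub": [[1.0, 1.0], [1.0, 0.0]],
    "b_ub": [4.0, 2.0],
}

c = [-v for v in formulation["maximize"]]      # linprog minimizes, so negate
res = linprog(c, A_ub=formulation["A_ub"], b_ub=formulation["b_ub"],
              bounds=[(0, None)] * len(c))
print("optimal point:", res.x, "objective:", -res.fun)  # x=2, y=2, objective 10
```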
[ "Zhang, Jihai", "Wang, Wei", "Guo, Siyan", "Wang, Li", "Lin, Fangquan", "Yang, Cheng", "Yin, Wotao" ]
Solving General Natural-Language-Description Optimization Problems with Large Language Models
naacl-industry.42
Poster
2407.07924
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.naacl-industry.43.bib
https://aclanthology.org/2024.naacl-industry.43/
@inproceedings{vijayaraghavan-etal-2024-self, title = "Self-Regulated Data-Free Knowledge Amalgamation for Text Classification", author = "Vijayaraghavan, Prashanth and Wang, Hongzhi and Shi, Luyao and Baldwin, Tyler and Beymer, David and Degan, Ehsan", editor = "Yang, Yi and Davani, Aida and Sil, Avi and Kumar, Anoop", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 6: Industry Track)", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-industry.43", doi = "10.18653/v1/2024.naacl-industry.43", pages = "491--502", abstract = "Recently, there has been a growing availability of pre-trained text models on various model repositories. These models greatly reduce the cost of training new models from scratch as they can be fine-tuned for specific tasks or trained on large datasets. However, these datasets may not be publicly accessible due to the privacy, security, or intellectual property issues. In this paper, we aim to develop a lightweight student network that can learn from multiple teacher models without accessing their original training data. Hence, we investigate Data-Free Knowledge Amalgamation (DFKA), a knowledge-transfer task that combines insights from multiple pre-trained teacher models and transfers them effectively to a compact student network. To accomplish this, we propose STRATANET, a modeling framework comprising: (a) a steerable data generator that produces text data tailored to each teacher and (b) an amalgamation module that implements a self-regulative strategy using confidence estimates from the teachers{'} different layers to selectively integrate their knowledge and train a versatile student. We evaluate our method on three benchmark text classification datasets with varying labels or domains. Empirically, we demonstrate that the student model learned using our STRATANET outperforms several baselines significantly under data-driven and data-free constraints.", }
Recently, there has been a growing availability of pre-trained text models in various model repositories. These models greatly reduce the cost of training new models from scratch, as they can be fine-tuned for specific tasks or trained on large datasets. However, these datasets may not be publicly accessible due to privacy, security, or intellectual property issues. In this paper, we aim to develop a lightweight student network that can learn from multiple teacher models without accessing their original training data. Hence, we investigate Data-Free Knowledge Amalgamation (DFKA), a knowledge-transfer task that combines insights from multiple pre-trained teacher models and transfers them effectively to a compact student network. To accomplish this, we propose STRATANET, a modeling framework comprising (a) a steerable data generator that produces text data tailored to each teacher and (b) an amalgamation module that implements a self-regulative strategy, using confidence estimates from the teachers' different layers to selectively integrate their knowledge and train a versatile student. We evaluate our method on three benchmark text classification datasets with varying labels or domains. Empirically, we demonstrate that the student model learned using our STRATANET significantly outperforms several baselines under data-driven and data-free constraints.
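A rough sketch of confidence-based teacher weighting, one plausible form of the self-regulative amalgamation described above: each teacher's soft labels are weighted per example by its max softmax confidence. STRATANET's actual layer-wise strategy is more involved; this is illustrative only.

```python
import torch
import torch.nn.functional as F

def amalgamation_loss(student_logits, teacher_logits_list, T=2.0):
    """Distill from several teachers at once, weighting each teacher per
    example by its own confidence (max softmax probability)."""
    teacher_probs = [F.softmax(t / T, dim=-1) for t in teacher_logits_list]
    conf = torch.stack([p.max(dim=-1).values for p in teacher_probs])  # (n_teachers, batch)
    weights = F.softmax(conf, dim=0)                                   # normalize over teachers
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for w, p in zip(weights, teacher_probs):
        ce = -(p * log_p_student).sum(dim=-1)  # soft cross-entropy per example
        loss = loss + (w * ce).mean()
    return loss * T * T

student_logits = torch.randn(8, 5, requires_grad=True)
teachers = [torch.randn(8, 5), torch.randn(8, 5), torch.randn(8, 5)]
amalgamation_loss(student_logits, teachers).backward()
```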
[ "Vijayaraghavan, Prashanth", "Wang, Hongzhi", "Shi, Luyao", "Baldwin, Tyler", "Beymer, David", "Degan, Ehsan" ]
Self-Regulated Data-Free Knowledge Amalgamation for Text Classification
naacl-industry.43
Poster
2406.15476
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.1.bib
https://aclanthology.org/2024.findings-naacl.1/
@inproceedings{zhang-etal-2024-structured, title = "Structured Pruning for Large Language Models Using Coupled Components Elimination and Minor Fine-tuning", author = "Zhang, Honghe and XiaolongShi, XiaolongShi and Sun, Jingwei and Sun, Guangzhong", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.1", doi = "10.18653/v1/2024.findings-naacl.1", pages = "1--12", abstract = "Large language models (LLMs) have demonstrated powerful capabilities in natural language processing, yet their vast number of parameters poses challenges for deployment and inference efficiency. Structured model pruning emerges as a viable approach to reduce model size and accelerate inference, without requiring specialized operators and libraries for deployment. However, structured pruning often severely weakens the model{'}s capability.Despite repetitive fine-tuning can restore the capability to a certain extent, it impairs LLMs{'} utility as versatile problem solvers.To address this issue, we propose a novel structured pruning algorithm tailored for LLMs. It derives the importance of different components, namely rows and columns in parameter matrices, based on intermediate data dependencies. Then it removes coupled components across different layers simultaneously and preserves dependency relationships within remaining parameters, avoiding significant performance degradation. The pruned model requires only few epochs of fine-tuning to restore its performance, ensuring the model{'}s ability to generalize.Empirical evaluations on LLaMA, Vicuna, and ChatGLM3 demonstrate our algorithm{'}s efficacy, yielding 20{\%} parameter reduction while retaining at least 94.4{\%} of original performance metrics.", }
Large language models (LLMs) have demonstrated powerful capabilities in natural language processing, yet their vast number of parameters poses challenges for deployment and inference efficiency. Structured model pruning emerges as a viable approach to reduce model size and accelerate inference without requiring specialized operators and libraries for deployment. However, structured pruning often severely weakens the model's capability. Although repeated fine-tuning can restore the capability to a certain extent, it impairs LLMs' utility as versatile problem solvers. To address this issue, we propose a novel structured pruning algorithm tailored for LLMs. It derives the importance of different components, namely rows and columns in parameter matrices, based on intermediate data dependencies. Then it removes coupled components across different layers simultaneously and preserves dependency relationships within the remaining parameters, avoiding significant performance degradation. The pruned model requires only a few epochs of fine-tuning to restore its performance, ensuring the model's ability to generalize. Empirical evaluations on LLaMA, Vicuna, and ChatGLM3 demonstrate our algorithm's efficacy, yielding a 20% parameter reduction while retaining at least 94.4% of the original performance metrics.
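A minimal sketch of the coupling constraint at the heart of the abstract: when pruning hidden units of an MLP block, the same indices must be removed from the up-projection's rows and the down-projection's columns so the composition stays valid. The importance score here (summed magnitudes) is a placeholder for the paper's dependency-based criterion.

```python
import torch

def prune_coupled(w_up: torch.Tensor, w_down: torch.Tensor, keep_ratio: float = 0.8):
    """Structured pruning of an MLP block: drop the same hidden units from the
    up-projection's rows and the down-projection's columns, so the composed
    mapping w_down @ act(w_up @ x) stays dimensionally consistent."""
    # w_up: (hidden, d_in), w_down: (d_out, hidden)
    importance = w_up.abs().sum(dim=1) + w_down.abs().sum(dim=0)  # per hidden unit
    k = max(1, int(importance.numel() * keep_ratio))
    keep = importance.topk(k).indices.sort().values
    return w_up[keep, :], w_down[:, keep]

w_up, w_down = torch.randn(1024, 256), torch.randn(256, 1024)
w_up_p, w_down_p = prune_coupled(w_up, w_down, keep_ratio=0.8)
print(w_up_p.shape, w_down_p.shape)   # (819, 256), (256, 819)
```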
[ "Zhang, Honghe", "XiaolongShi, XiaolongShi", "Sun, Jingwei", "Sun, Guangzhong" ]
Structured Pruning for Large Language Models Using Coupled Components Elimination and Minor Fine-tuning
findings-naacl.1
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.2.bib
https://aclanthology.org/2024.findings-naacl.2/
@inproceedings{wu-etal-2024-weight, title = "Weight-Inherited Distillation for Task-Agnostic {BERT} Compression", author = "Wu, Taiqiang and Hou, Cheng and Lao, Shanshan and Li, Jiayi and Wong, Ngai and Zhao, Zhe and Yang, Yujiu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.2", doi = "10.18653/v1/2024.findings-naacl.2", pages = "13--28", abstract = "Knowledge Distillation (KD) is a predominant approach for BERT compression.Previous KD-based methods focus on designing extra alignment losses for the student model to mimic the behavior of the teacher model.These methods transfer the knowledge in an indirect way.In this paper, we propose a novel Weight-Inherited Distillation (WID), which directly transfers knowledge from the teacher.WID does not require any additional alignment loss and trains a compact student by inheriting the weights, showing a new perspective of knowledge distillation.Specifically, we design the row compactors and column compactors as mappings and then compress the weights via structural re-parameterization.Experimental results on the GLUE and SQuAD benchmarks show that WID outperforms previous state-of-the-art KD-based baselines.Further analysis indicates that WID can also learn the attention patterns from the teacher model without any alignment loss on attention distributions.The code is available at https://github.com/wutaiqiang/WID-NAACL2024.", }
Knowledge Distillation (KD) is a predominant approach for BERT compression. Previous KD-based methods focus on designing extra alignment losses for the student model to mimic the behavior of the teacher model. These methods transfer the knowledge in an indirect way. In this paper, we propose a novel Weight-Inherited Distillation (WID), which directly transfers knowledge from the teacher. WID does not require any additional alignment loss and trains a compact student by inheriting the weights, showing a new perspective of knowledge distillation. Specifically, we design the row compactors and column compactors as mappings and then compress the weights via structural re-parameterization. Experimental results on the GLUE and SQuAD benchmarks show that WID outperforms previous state-of-the-art KD-based baselines. Further analysis indicates that WID can also learn the attention patterns from the teacher model without any alignment loss on attention distributions. The code is available at https://github.com/wutaiqiang/WID-NAACL2024.
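A minimal sketch of row and column compactors as weight mappings: two learnable matrices shrink a teacher weight matrix, and the product is inherited directly as the student's weight, with no extra alignment loss. Initialization and the structural re-parameterization details are simplified assumptions, not WID's exact procedure.

```python
import torch
import torch.nn as nn

class Compactor(nn.Module):
    """Row and column compactors: learnable linear maps that shrink a teacher
    weight matrix; the compressed product is inherited as the student weight."""
    def __init__(self, t_out, t_in, s_out, s_in):
        super().__init__()
        self.row = nn.Parameter(torch.randn(s_out, t_out) * (1.0 / t_out) ** 0.5)
        self.col = nn.Parameter(torch.randn(t_in, s_in) * (1.0 / t_in) ** 0.5)

    def forward(self, w_teacher):              # (t_out, t_in) -> (s_out, s_in)
        return self.row @ w_teacher @ self.col

w_teacher = torch.randn(768, 768)
compactor = Compactor(768, 768, 384, 384)
w_student = compactor(w_teacher)               # inherited student weight
print(w_student.shape)                         # torch.Size([384, 384])
```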
[ "Wu, Taiqiang", "Hou, Cheng", "Lao, Shanshan", "Li, Jiayi", "Wong, Ngai", "Zhao, Zhe", "Yang, Yujiu" ]
Weight-Inherited Distillation for Task-Agnostic BERT Compression
findings-naacl.2
Poster
2305.09098
[ "https://github.com/wutaiqiang/WID-NAACL2024" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.3.bib
https://aclanthology.org/2024.findings-naacl.3/
@inproceedings{jang-etal-2024-ignore, title = "Ignore Me But Don{'}t Replace Me: Utilizing Non-Linguistic Elements for Pretraining on the Cybersecurity Domain", author = "Jang, Eugene and Cui, Jian and Yim, Dayeon and Jin, Youngjin and Chung, Jin-Woo and Shin, Seungwon and Lee, Yongjae", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.3", doi = "10.18653/v1/2024.findings-naacl.3", pages = "29--42", abstract = "Cybersecurity information is often technically complex and relayed through unstructured text, making automation of cyber threat intelligence highly challenging. For such text domains that involve high levels of expertise, pretraining on in-domain corpora has been a popular method for language models to obtain domain expertise. However, cybersecurity texts often contain non-linguistic elements (such as URLs and hash values) that could be unsuitable with the established pretraining methodologies. Previous work in other domains have removed or filtered such text as noise, but the effectiveness of these methods have not been investigated, especially in the cybersecurity domain. We experiment with different pretraining methodologies to account for non-linguistic elements (NLEs) and evaluate their effectiveness through downstream tasks and probing tasks. Our proposed strategy, a combination of selective MLM and jointly training NLE token classification, outperforms the commonly taken approach of replacing NLEs. We use our domain-customized methodology to train CyBERTuned, a cybersecurity domain language model that outperforms other cybersecurity PLMs on most tasks.", }
Cybersecurity information is often technically complex and relayed through unstructured text, making automation of cyber threat intelligence highly challenging. For such text domains that involve high levels of expertise, pretraining on in-domain corpora has been a popular method for language models to obtain domain expertise. However, cybersecurity texts often contain non-linguistic elements (such as URLs and hash values) that may be unsuitable for established pretraining methodologies. Previous work in other domains has removed or filtered such text as noise, but the effectiveness of these methods has not been investigated, especially in the cybersecurity domain. We experiment with different pretraining methodologies to account for non-linguistic elements (NLEs) and evaluate their effectiveness through downstream tasks and probing tasks. Our proposed strategy, a combination of selective MLM and jointly trained NLE token classification, outperforms the commonly taken approach of replacing NLEs. We use our domain-customized methodology to train CyBERTuned, a cybersecurity domain language model that outperforms other cybersecurity PLMs on most tasks.
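A toy version of the proposed combination: standard MLM masking that skips non-linguistic elements, which instead receive a parallel NLE token-classification label. The regex for URLs and hashes is an illustrative approximation of how NLEs might be detected, not the paper's detector.

```python
import random
import re

NLE_PATTERN = re.compile(r"https?://\S+|[a-fA-F0-9]{32,64}")  # URLs, MD5/SHA-style hashes

def selective_mlm_mask(tokens, mask_token="[MASK]", mask_prob=0.15):
    """Mask ordinary tokens for MLM but never mask non-linguistic elements
    (NLEs); NLEs instead receive a parallel token-classification label."""
    masked, nle_labels = [], []
    for tok in tokens:
        is_nle = NLE_PATTERN.fullmatch(tok) is not None
        nle_labels.append(int(is_nle))
        if not is_nle and random.random() < mask_prob:
            masked.append(mask_token)
        else:
            masked.append(tok)                 # NLEs pass through unmasked
    return masked, nle_labels

tokens = "the dropper beacons to http://evil.example/c2 with md5".split() + [
    "d41d8cd98f00b204e9800998ecf8427e"]
print(selective_mlm_mask(tokens))
```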
[ "Jang, Eugene", "Cui, Jian", "Yim, Dayeon", "Jin, Youngjin", "Chung, Jin-Woo", "Shin, Seungwon", "Lee, Yongjae" ]
Ignore Me But Don't Replace Me: Utilizing Non-Linguistic Elements for Pretraining on the Cybersecurity Domain
findings-naacl.3
Poster
2403.10576
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.4.bib
https://aclanthology.org/2024.findings-naacl.4/
@inproceedings{cohen-etal-2024-extremely, title = "Extremely efficient online query encoding for dense retrieval", author = "Cohen, Nachshon and Fairstein, Yaron and Kushilevitz, Guy", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.4", doi = "10.18653/v1/2024.findings-naacl.4", pages = "43--50", abstract = "Existing dense retrieval systems utilize the same model architecture for encoding both the passages and the queries, even though queries are much shorter and simpler than passages. This leads to high latency of the query encoding, which is performed online and therefore might impact user experience. We show that combining a standard large passage encoder with a small efficient query encoder can provide significant latency drops with only a small decrease in quality. We offer a pretraining and training solution for multiple small query encoder architectures. Using a small transformer architecture we are able to decrease latency by up to $\sim12\times$, while $MRR@10$ on the MS MARCO dev set only decreases from 38.2 to 36.2. If this solution does not reach the desired latency requirements, we propose an efficient RNN as the query encoder, which processes the query prefix incrementally and only infers the last word after the query is issued. This shortens latency by $\sim38\times$ with only a minor drop in quality, reaching 35.5 $MRR@10$ score.", }
Existing dense retrieval systems utilize the same model architecture for encoding both the passages and the queries, even though queries are much shorter and simpler than passages. This leads to high latency of the query encoding, which is performed online and therefore might impact user experience. We show that combining a standard large passage encoder with a small efficient query encoder can provide significant latency drops with only a small decrease in quality. We offer a pretraining and training solution for multiple small query encoder architectures. Using a small transformer architecture we are able to decrease latency by up to ~12×, while MRR@10 on the MS MARCO dev set only decreases from 38.2 to 36.2. If this solution does not reach the desired latency requirements, we propose an efficient RNN as the query encoder, which processes the query prefix incrementally and only infers the last word after the query is issued. This shortens latency by ~38× with only a minor drop in quality, reaching a 35.5 MRR@10 score.
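A sketch of the asymmetric dual-encoder idea: a tiny embedding-and-pooling query tower projected into the vector space of a large, offline passage encoder, with retrieval as a dot product. Dimensions and vocabulary size are arbitrary; the paper's pretraining recipe and incremental RNN variant are not shown.

```python
import torch
import torch.nn as nn

class TinyQueryEncoder(nn.Module):
    """Small online query tower (embedding + mean pooling + projection) that
    maps queries into the same vector space as a large, offline passage
    encoder; retrieval scores are plain dot products."""
    def __init__(self, vocab_size=30522, dim=128, passage_dim=768):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, passage_dim)

    def forward(self, ids, mask):              # ids, mask: (batch, seq_len)
        x = self.emb(ids) * mask.unsqueeze(-1)
        pooled = x.sum(dim=1) / mask.sum(dim=1, keepdim=True).clamp(min=1)
        return self.proj(pooled)

encoder = TinyQueryEncoder()
ids = torch.randint(0, 30522, (2, 8))
mask = torch.ones(2, 8)
q = encoder(ids, mask)                         # (2, 768)
scores = q @ torch.randn(1000, 768).t()        # against precomputed passage vectors
```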
[ "Cohen, Nachshon", "Fairstein, Yaron", "Kushilevitz, Guy" ]
Extremely efficient online query encoding for dense retrieval
findings-naacl.4
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.5.bib
https://aclanthology.org/2024.findings-naacl.5/
@inproceedings{zhao-etal-2024-divknowqa, title = "{DIVKNOWQA}: Assessing the Reasoning Ability of {LLM}s via Open-Domain Question Answering over Knowledge Base and Text", author = "Zhao, Wenting and Liu, Ye and Niu, Tong and Wan, Yao and Yu, Philip and Joty, Shafiq and Zhou, Yingbo and Yavuz, Semih", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.5", doi = "10.18653/v1/2024.findings-naacl.5", pages = "51--68", abstract = "Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when solely relying on their internal knowledge, especially when answering questions that require less commonly known information. Retrievalaugmented LLMs have emerged as a potential solution to ground LLMs in external knowledge. Nonetheless, recent approaches have primarily emphasized retrieval from unstructured text corpora, owing to its seamless integration into prompts. When using structured data such as knowledge graphs, most methods simplify it into natural text, neglecting the underlying structures. Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge base and text). To fill this gap, we have curated a comprehensive dataset that poses two unique challenges: (1) Two-hop multi-source questions that require retrieving information from both open-domain structured and unstructured knowledge sources; retrieving information from structured knowledge sources is a critical component in correctly answering the questions. (2) Generation of symbolic queries (e.g., SPARQL for Wikidata) is a key requirement, which adds another layer of challenge. Our dataset is created using a combination of automatic generation through predefined reasoning chains and human annotation. We also introduce a novel approach that leverages multiple retrieval tools, including text passage retrieval and symbolic language-assisted retrieval. Our model outperforms previous approaches by a significant margin, demonstrating its effectiveness in addressing the above-mentioned reasoning challenges.", }
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when solely relying on their internal knowledge, especially when answering questions that require less commonly known information. Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge. Nonetheless, recent approaches have primarily emphasized retrieval from unstructured text corpora, owing to its seamless integration into prompts. When using structured data such as knowledge graphs, most methods simplify it into natural text, neglecting the underlying structures. Moreover, a significant gap in the current landscape is the absence of a realistic benchmark for evaluating the effectiveness of grounding LLMs on heterogeneous knowledge sources (e.g., knowledge base and text). To fill this gap, we have curated a comprehensive dataset that poses two unique challenges: (1) Two-hop multi-source questions that require retrieving information from both open-domain structured and unstructured knowledge sources; retrieving information from structured knowledge sources is a critical component in correctly answering the questions. (2) Generation of symbolic queries (e.g., SPARQL for Wikidata) is a key requirement, which adds another layer of challenge. Our dataset is created using a combination of automatic generation through predefined reasoning chains and human annotation. We also introduce a novel approach that leverages multiple retrieval tools, including text passage retrieval and symbolic language-assisted retrieval. Our model outperforms previous approaches by a significant margin, demonstrating its effectiveness in addressing the above-mentioned reasoning challenges.
[ "Zhao, Wenting", "Liu, Ye", "Niu, Tong", "Wan, Yao", "Yu, Philip", "Joty, Shafiq", "Zhou, Yingbo", "Yavuz, Semih" ]
DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text
findings-naacl.5
Poster
2310.20170
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.6.bib
https://aclanthology.org/2024.findings-naacl.6/
@inproceedings{pavlovic-sallinger-2024-speede, title = "{S}peed{E}: {E}uclidean Geometric Knowledge Graph Embedding Strikes Back", author = "Pavlovi{\'c}, Aleksandar and Sallinger, Emanuel", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.6", doi = "10.18653/v1/2024.findings-naacl.6", pages = "69--92", abstract = "Geometric knowledge graph embedding models (gKGEs) have shown great potential for knowledge graph completion (KGC), i.e., automatically predicting missing triples. However, contemporary gKGEs require high embedding dimensionalities or complex embedding spaces for good KGC performance, drastically limiting their space and time efficiency. Facing these challenges, we propose SpeedE, a lightweight Euclidean gKGE that (1) provides strong inference capabilities, (2) is competitive with state-of-the-art gKGEs, even significantly outperforming them on YAGO3-10 and WN18RR, and (3) dramatically increases their efficiency, in particular, needing solely a fifth of the training time and a fourth of the parameters of the state-of-the-art ExpressivE model on WN18RR to reach the same KGC performance.", }
Geometric knowledge graph embedding models (gKGEs) have shown great potential for knowledge graph completion (KGC), i.e., automatically predicting missing triples. However, contemporary gKGEs require high embedding dimensionalities or complex embedding spaces for good KGC performance, drastically limiting their space and time efficiency. Facing these challenges, we propose SpeedE, a lightweight Euclidean gKGE that (1) provides strong inference capabilities, (2) is competitive with state-of-the-art gKGEs, even significantly outperforming them on YAGO3-10 and WN18RR, and (3) dramatically increases their efficiency, in particular, needing solely a fifth of the training time and a fourth of the parameters of the state-of-the-art ExpressivE model on WN18RR to reach the same KGC performance.
[ "Pavlovi{\\'c}, Aleks", "ar", "Sallinger, Emanuel" ]
SpeedE: Euclidean Geometric Knowledge Graph Embedding Strikes Back
findings-naacl.6
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.7.bib
https://aclanthology.org/2024.findings-naacl.7/
@inproceedings{golchha-etal-2024-language, title = "Language Guided Exploration for {RL} Agents in Text Environments", author = "Golchha, Hitesh and Yerawar, Sahil and Patel, Dhruvesh and Dan, Soham and Murugesan, Keerthiram", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.7", doi = "10.18653/v1/2024.findings-naacl.7", pages = "93--102", abstract = "Real-world sequential decision making is characterized by sparse rewards and large decision spaces, posing significant difficulty for experiential learning systems like $\textit{tabula rasa}$ reinforcement learning (RL) agents. Large Language Models (LLMs), with a wealth of world knowledge, can help RL agents learn quickly and adapt to distribution shifts. In this work, we introduce Language Guided Exploration (LGE) framework, which uses a pre-trained language model (called GUIDE ) to provide decision-level guidance to an RL agent (called EXPLORER ). We observe that on ScienceWorld (Wang et al., 2022), a challenging text environment, LGE outperforms vanilla RL agents significantly and also outperforms other sophisticated methods like Behaviour Cloning and Text Decision Transformer.", }
Real-world sequential decision making is characterized by sparse rewards and large decision spaces, posing significant difficulty for experiential learning systems like $\textit{tabula rasa}$ reinforcement learning (RL) agents. Large Language Models (LLMs), with a wealth of world knowledge, can help RL agents learn quickly and adapt to distribution shifts. In this work, we introduce the Language Guided Exploration (LGE) framework, which uses a pre-trained language model (called GUIDE) to provide decision-level guidance to an RL agent (called EXPLORER). We observe that on ScienceWorld (Wang et al., 2022), a challenging text environment, LGE outperforms vanilla RL agents significantly and also outperforms other sophisticated methods like Behaviour Cloning and Text Decision Transformer.
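As a rough sketch of decision-level guidance (the GUIDE/EXPLORER interface below is an assumption, not taken from the paper): the language model shortlists plausible actions, and the RL agent chooses among them by value.

```python
def guided_step(actions, guide_score, explorer_q, top_k=5):
    """GUIDE-style shortlist, then EXPLORER picks by Q-value.
    guide_score, explorer_q: assumed callables mapping action -> float."""
    shortlist = sorted(actions, key=guide_score, reverse=True)[:top_k]
    return max(shortlist, key=explorer_q)

# toy usage with dictionary-backed scorers
guide_scores = {"open door": 0.9, "eat rock": 0.1, "read note": 0.7}
q_values = {"open door": 1.2, "eat rock": 0.3, "read note": 2.0}
print(guided_step(list(guide_scores), guide_scores.get, q_values.get, top_k=2))
# -> "read note": the LM prunes implausible actions, the agent picks by Q
```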
[ "Golchha, Hitesh", "Yerawar, Sahil", "Patel, Dhruvesh", "Dan, Soham", "Murugesan, Keerthiram" ]
Language Guided Exploration for RL Agents in Text Environments
findings-naacl.7
Poster
2403.03141
[ "" ]
https://huggingface.co/papers/2403.03141
0
2
0
5
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.8.bib
https://aclanthology.org/2024.findings-naacl.8/
@inproceedings{venkatraman-etal-2024-gpt, title = "{GPT}-who: An Information Density-based Machine-Generated Text Detector", author = "Venkatraman, Saranya and Uchendu, Adaku and Lee, Dongwon", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.8", doi = "10.18653/v1/2024.findings-naacl.8", pages = "103--115", abstract = "The Uniform Information Density (UID) principle posits that humans prefer to spread information evenly during language production. We examine if this UID principle can help capture differences between Large Language Models (LLMs)-generated and human-generated texts. We propose GPT-who, the first psycholinguistically-inspired domain-agnostic statistical detector. This detector employs UID-based featuresto model the unique statistical signature of each LLM and human author for accurate detection. We evaluate our method using 4 large-scale benchmark datasets and find that GPT-who outperforms state-of-the-art detectors (both statistical- {\&} non-statistical) such as GLTR, GPTZero, DetectGPT, OpenAI detector, and ZeroGPT by over 20{\%} across domains.In addition to better performance, it is computationally inexpensive and utilizes an interpretable representation of text articles. We find that GPT-who can distinguish texts generated by very sophisticated LLMs, even when the overlying text is indiscernible.UID-based measures for all datasets and code are available at https://github.com/saranya-venkatraman/gpt-who.", }
The Uniform Information Density (UID) principle posits that humans prefer to spread information evenly during language production. We examine if this UID principle can help capture differences between Large Language Models (LLMs)-generated and human-generated texts. We propose GPT-who, the first psycholinguistically-inspired domain-agnostic statistical detector. This detector employs UID-based features to model the unique statistical signature of each LLM and human author for accurate detection. We evaluate our method using 4 large-scale benchmark datasets and find that GPT-who outperforms state-of-the-art detectors (both statistical- {\&} non-statistical) such as GLTR, GPTZero, DetectGPT, OpenAI detector, and ZeroGPT by over 20{\%} across domains. In addition to better performance, it is computationally inexpensive and utilizes an interpretable representation of text articles. We find that GPT-who can distinguish texts generated by very sophisticated LLMs, even when the overlying text is indiscernible. UID-based measures for all datasets and code are available at https://github.com/saranya-venkatraman/gpt-who.
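A minimal sketch of the UID intuition, assuming an off-the-shelf GPT-2 as the scoring LM (the paper's exact feature set may differ): compute each token's surprisal, then summarize how evenly information is spread.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def uid_features(text):
    """Per-token surprisal under an LM, summarized by evenness measures."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = lm(ids).logits
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    surprisal = -logp.gather(1, ids[0, 1:, None]).squeeze(1)
    return {
        "mean": surprisal.mean().item(),
        "variance": surprisal.var().item(),  # global evenness
        "uid_local": (surprisal[1:] - surprisal[:-1]).pow(2).mean().item(),  # local evenness
    }

print(uid_features("The quick brown fox jumps over the lazy dog."))
```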
[ "Venkatraman, Saranya", "Uchendu, Adaku", "Lee, Dongwon" ]
GPT-who: An Information Density-based Machine-Generated Text Detector
findings-naacl.8
Poster
2310.06202
[ "https://github.com/saranya-venkatraman/gpt-who" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.9.bib
https://aclanthology.org/2024.findings-naacl.9/
@inproceedings{tang-etal-2024-deed, title = "{DEED}: Dynamic Early Exit on Decoder for Accelerating Encoder-Decoder Transformer Models", author = "Tang, Peng and Zhu, Pengkai and Li, Tian and Appalaraju, Srikar and Mahadevan, Vijay and Manmatha, R.", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.9", doi = "10.18653/v1/2024.findings-naacl.9", pages = "116--131", abstract = "Encoder-decoder transformer models have achieved great success on various vision-language (VL) and language tasks, but they suffer from high inference latency. Typically, the decoder takes up most of the latency because of the auto-regressive decoding. To accelerate the inference, we propose an approach of performing Dynamic Early Exit on Decoder (DEED). We build a multi-exit encoder-decoder transformer model which is trained with deep supervision so that each of its decoder layers is capable of generating plausible predictions. In addition, we leverage simple yet practical techniques, including shared generation head and adaptation modules, to keep accuracy when exiting at shallow decoder layers. Based on the multi-exit model, we perform step-level dynamic early exit during inference, where the model may decide to use fewer decoder layers based on its confidence of the current layer at each individual decoding step. Considering different number of decoder layers may be used at different decoding steps, we compute deeper-layer decoder features of previous decoding steps just-in-time, which ensures the features from different decoding steps are semantically aligned. We evaluate our approach with three state-of-the-art encoder-decoder transformer models on various VL and language tasks. We show our approach can reduce overall inference latency by 20{\%}-74{\%} with comparable or even higher accuracy compared to baselines.", }
Encoder-decoder transformer models have achieved great success on various vision-language (VL) and language tasks, but they suffer from high inference latency. Typically, the decoder takes up most of the latency because of auto-regressive decoding. To accelerate the inference, we propose an approach of performing Dynamic Early Exit on Decoder (DEED). We build a multi-exit encoder-decoder transformer model which is trained with deep supervision so that each of its decoder layers is capable of generating plausible predictions. In addition, we leverage simple yet practical techniques, including a shared generation head and adaptation modules, to keep accuracy when exiting at shallow decoder layers. Based on the multi-exit model, we perform step-level dynamic early exit during inference, where the model may decide to use fewer decoder layers based on its confidence at the current layer at each individual decoding step. Considering that different numbers of decoder layers may be used at different decoding steps, we compute deeper-layer decoder features of previous decoding steps just-in-time, which ensures the features from different decoding steps are semantically aligned. We evaluate our approach with three state-of-the-art encoder-decoder transformer models on various VL and language tasks. We show our approach can reduce overall inference latency by 20{\%}-74{\%} with comparable or even higher accuracy compared to baselines.
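A hedged sketch of step-level dynamic early exit; `decoder_layers` and `shared_head` are assumed stand-ins for the paper's components, not its actual code.

```python
import torch

def early_exit_decode_step(decoder_layers, shared_head, hidden, threshold=0.9):
    """One decoding step: run decoder layers one by one and stop once the
    shared generation head is confident enough (assumes batch size 1).
    hidden: (1, seq, dim); shared_head maps dim -> vocab logits."""
    token, depth = None, 0
    for depth, layer in enumerate(decoder_layers, start=1):
        hidden = layer(hidden)                                # deepen representation
        probs = torch.softmax(shared_head(hidden[:, -1]), dim=-1)
        conf, token = probs.max(dim=-1)
        if conf.item() >= threshold:                          # confident: exit early
            break
    return token, depth                                       # token id, layers used
```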
[ "Tang, Peng", "Zhu, Pengkai", "Li, Tian", "Appalaraju, Srikar", "Mahadevan, Vijay", "Manmatha, R." ]
DEED: Dynamic Early Exit on Decoder for Accelerating Encoder-Decoder Transformer Models
findings-naacl.9
Poster
2311.08623
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.10.bib
https://aclanthology.org/2024.findings-naacl.10/
@inproceedings{chi-etal-2024-attention, title = "Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation", author = "Chi, Ta-Chung and Fan, Ting-Han and Rudnicky, Alexander", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.10", doi = "10.18653/v1/2024.findings-naacl.10", pages = "132--148", abstract = "An ideal length-extrapolatable Transformer language model can handle sequences longer than the training length without any fine-tuning. Such long-context utilization capability relies heavily on a flexible positional embedding design. Upon investigating the flexibility of existing large pre-trained Transformer language models, we find that the T5 family deserves a closer look, as its positional embeddings capture rich and flexible attention patterns. However, T5 suffers from the dispersed attention issue: the longer the input sequence, the flatter the attention distribution. To alleviate the issue, we propose two attention alignment strategies via temperature scaling. Our findings show improvement on the long-context utilization capability of T5 on language modeling, retrieval, multi-document question answering, and code completion tasks without any fine-tuning. This suggests that a flexible positional embedding design and attention alignment can go a long way toward Transformer length extrapolation. The code is released at: \url{https://github.com/chijames/T5-Attention-Alignment}", }
An ideal length-extrapolatable Transformer language model can handle sequences longer than the training length without any fine-tuning. Such long-context utilization capability relies heavily on a flexible positional embedding design. Upon investigating the flexibility of existing large pre-trained Transformer language models, we find that the T5 family deserves a closer look, as its positional embeddings capture rich and flexible attention patterns. However, T5 suffers from the dispersed attention issue: the longer the input sequence, the flatter the attention distribution. To alleviate the issue, we propose two attention alignment strategies via temperature scaling. Our findings show improvement on the long-context utilization capability of T5 on language modeling, retrieval, multi-document question answering, and code completion tasks without any fine-tuning. This suggests that a flexible positional embedding design and attention alignment can go a long way toward Transformer length extrapolation. The code is released at: \url{https://github.com/chijames/T5-Attention-Alignment}
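The abstract does not spell out the two temperature-scaling strategies, so the sketch below uses one common log-length rule (an assumption) to show how scaling the attention logits can re-sharpen a distribution that flattens on long inputs.

```python
import math
import torch

def aligned_attention(q, k, v, train_len=512):
    """Scaled dot-product attention with a length-dependent temperature.
    q, k, v: (..., seq, dim). Beyond the training length the temperature
    exceeds 1, sharpening the softmax against attention dispersion."""
    seq_len = q.size(-2)
    temp = max(1.0, math.log(seq_len) / math.log(train_len))
    scores = (q @ k.transpose(-2, -1)) / math.sqrt(q.size(-1))
    return torch.softmax(temp * scores, dim=-1) @ v
```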
[ "Chi, Ta-Chung", "Fan, Ting-Han", "Rudnicky, Alex", "er" ]
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
findings-naacl.10
Poster
2311.00684
[ "" ]
https://huggingface.co/papers/2311.00684
0
0
0
3
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.11.bib
https://aclanthology.org/2024.findings-naacl.11/
@inproceedings{xu-etal-2024-automatic, title = "Automatic Pair Construction for Contrastive Post-training", author = "Xu, Canwen and Rosset, Corby and Chau, Ethan and Corro, Luciano and Mahajan, Shweti and McAuley, Julian and Neville, Jennifer and Awadallah, Ahmed and Rao, Nikhil", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.11", doi = "10.18653/v1/2024.findings-naacl.11", pages = "149--162", abstract = "Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we propose an automatic way to construct contrastive data for LLM, using preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continuing SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from {``}easier{''} pairs and transitioning to {``}harder{''} ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.", }
Alignment serves as an important step to steer large language models (LLMs) towards human preferences. In this paper, we propose an automatic way to construct contrastive data for LLMs, using preference pairs from multiple models of varying strengths (e.g., InstructGPT, ChatGPT and GPT-4). We compare the contrastive techniques of SLiC and DPO to SFT baselines and find that DPO provides a step-function improvement even after continued SFT saturates. We also explore a data curriculum learning scheme for contrastive post-training, which starts by learning from {``}easier{''} pairs and transitions to {``}harder{''} ones, which further improves alignment. Finally, we scale up our experiments to train with more data and larger models like Orca. Remarkably, our automatic contrastive post-training further improves the performance of Orca, already a state-of-the-art instruction learning model tuned with GPT-4 outputs, to outperform ChatGPT.
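The DPO objective being compared is standard; a minimal version over automatically constructed pairs, where the stronger model's response is treated as chosen and the weaker model's as rejected, looks like this.

```python
import torch.nn.functional as F

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """Direct Preference Optimization on (chosen, rejected) response pairs.
    Each argument is the summed log-probability of a response under the
    trainable policy (pi_*) or a frozen reference model (ref_*); tensors of
    shape (batch,)."""
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    return -F.logsigmoid(margin).mean()
```

Keeping the reference model frozen is what anchors the policy to its starting point while the margin term pushes chosen responses above rejected ones.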
[ "Xu, Canwen", "Rosset, Corby", "Chau, Ethan", "Corro, Luciano", "Mahajan, Shweti", "McAuley, Julian", "Neville, Jennifer", "Awadallah, Ahmed", "Rao, Nikhil" ]
Automatic Pair Construction for Contrastive Post-training
findings-naacl.11
Poster
2310.02263
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.12.bib
https://aclanthology.org/2024.findings-naacl.12/
@inproceedings{li-etal-2024-self, title = "Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models", author = "Li, Miaoran and Peng, Baolin and Galley, Michel and Gao, Jianfeng and Zhang, Zhu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.12", doi = "10.18653/v1/2024.findings-naacl.12", pages = "163--181", abstract = "Fact-checking is an essential task in NLP that is commonly utilized to validate the factual accuracy of a piece of text. Previous approaches mainly involve the resource-intensive process of fine-tuning pre-trained language models on specific datasets. In addition, there is a notable gap in datasets that focus on fact-checking texts generated by large language models (LLMs). In this paper, we introduce Self-Checker, a plug-and-play framework that harnesses LLMs for efficient and rapid fact-checking in a few-shot manner. We also present the BingCheck dataset, specifically designed for fact-checking texts generated by LLMs. Empirical results demonstrate the potential of Self-Checker in the use of LLMs for fact-checking. Compared to state-of-the-art fine-tuned models, there is still significant room for improvement, indicating that adopting LLMs could be a promising direction for future fact-checking research.", }
Fact-checking is an essential task in NLP that is commonly utilized to validate the factual accuracy of a piece of text. Previous approaches mainly involve the resource-intensive process of fine-tuning pre-trained language models on specific datasets. In addition, there is a notable gap in datasets that focus on fact-checking texts generated by large language models (LLMs). In this paper, we introduce Self-Checker, a plug-and-play framework that harnesses LLMs for efficient and rapid fact-checking in a few-shot manner. We also present the BingCheck dataset, specifically designed for fact-checking texts generated by LLMs. Empirical results demonstrate the potential of Self-Checker in the use of LLMs for fact-checking. Compared to state-of-the-art fine-tuned models, there is still significant room for improvement, indicating that adopting LLMs could be a promising direction for future fact-checking research.
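One plausible reading of the plug-and-play decomposition (claim extraction, evidence retrieval, verdict), with `llm` and `retrieve` as assumed callables rather than the paper's actual modules or prompts:

```python
def self_check(passage, llm, retrieve):
    """Few-shot fact-checking pipeline sketch: extract claims, gather
    evidence, and ask the LLM for a verdict per claim."""
    claims = [c for c in llm(f"List the factual claims in:\n{passage}").splitlines() if c]
    verdicts = {}
    for claim in claims:
        evidence = retrieve(claim)
        verdicts[claim] = llm(
            f"Claim: {claim}\nEvidence: {evidence}\nAnswer SUPPORTED or REFUTED:"
        ).strip()
    return verdicts

# toy stand-ins to show the data flow
fake_llm = lambda p: "SUPPORTED" if "Evidence:" in p else "The sky is blue."
print(self_check("An essay about the sky.", fake_llm, retrieve=lambda c: "the sky appears blue"))
```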
[ "Li, Miaoran", "Peng, Baolin", "Galley, Michel", "Gao, Jianfeng", "Zhang, Zhu" ]
Self-Checker: Plug-and-Play Modules for Fact-Checking with Large Language Models
findings-naacl.12
Poster
2305.14623
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.13.bib
https://aclanthology.org/2024.findings-naacl.13/
@inproceedings{nzeyimana-2024-low, title = "Low-resource neural machine translation with morphological modeling", author = "Nzeyimana, Antoine", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.13", doi = "10.18653/v1/2024.findings-naacl.13", pages = "182--195", abstract = "Morphological modeling in neural machine translation (NMT) is a promising approach to achieving open-vocabulary machine translation for morphologically-rich languages. However, existing methods such as sub-word tokenization and character-based models are limited to the surface forms of the words. In this work, we propose a framework-solution for modeling complex morphology in low-resource settings. A two-tier transformer architecture is chosen to encode morphological information at the inputs. At the target-side output, a multi-task multi-label training scheme coupled with a beam search-based decoder are found to improve machine translation performance. An attention augmentation scheme to the transformer model is proposed in a generic form to allow integration of pre-trained language models and also facilitate modeling of word order relationships between the source and target languages. Several data augmentation techniques are evaluated and shown to increase translation performance in low-resource settings. We evaluate our proposed solution on Kinyarwanda $\leftrightarrow$ English translation using public-domain parallel text. Our final models achieve competitive performance in relation to large multi-lingual models. We hope that our results will motivate more use of explicit morphological information and the proposed model and data augmentations in low-resource NMT.", }
Morphological modeling in neural machine translation (NMT) is a promising approach to achieving open-vocabulary machine translation for morphologically-rich languages. However, existing methods such as sub-word tokenization and character-based models are limited to the surface forms of the words. In this work, we propose a framework-solution for modeling complex morphology in low-resource settings. A two-tier transformer architecture is chosen to encode morphological information at the inputs. At the target-side output, a multi-task multi-label training scheme coupled with a beam search-based decoder is found to improve machine translation performance. An attention augmentation scheme to the transformer model is proposed in a generic form to allow integration of pre-trained language models and also facilitate modeling of word order relationships between the source and target languages. Several data augmentation techniques are evaluated and shown to increase translation performance in low-resource settings. We evaluate our proposed solution on Kinyarwanda $\leftrightarrow$ English translation using public-domain parallel text. Our final models achieve competitive performance in relation to large multi-lingual models. We hope that our results will motivate more use of explicit morphological information and the proposed model and data augmentations in low-resource NMT.
[ "Nzeyimana, Antoine" ]
Low-resource neural machine translation with morphological modeling
findings-naacl.13
Poster
2404.02392
[ "https://github.com/anzeyimana/kinmt_naacl2024" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.14.bib
https://aclanthology.org/2024.findings-naacl.14/
@inproceedings{chu-etal-2024-self, title = "Self-Cleaning: Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances", author = "Chu, Zhendong and Zhang, Ruiyi and Yu, Tong and Jain, Rajiv and Morariu, Vlad and Gu, Jiuxiang and Nenkova, Ani", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.14", doi = "10.18653/v1/2024.findings-naacl.14", pages = "196--210", abstract = "To achieve state-of-the-art performance, one still needs to train NER models on large-scale, high-quality annotated data, an asset that is both costly and time-intensive to accumulate. In contrast, real-world applications often resort to massive low-quality labeled data through non-expert annotators via crowdsourcing and external knowledge bases via distant supervision as a cost-effective alternative. However, these annotation methods result in noisy labels, which in turn lead to a notable decline in performance. Hence, we propose to denoise the noisy NER data with guidance from a small set of clean instances. Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights. The discriminator is capable of detecting both span and category errors with different discriminative prompts. Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.", }
To achieve state-of-the-art performance, one still needs to train NER models on large-scale, high-quality annotated data, an asset that is both costly and time-intensive to accumulate. In contrast, real-world applications often resort to massive low-quality labeled data through non-expert annotators via crowdsourcing and external knowledge bases via distant supervision as a cost-effective alternative. However, these annotation methods result in noisy labels, which in turn lead to a notable decline in performance. Hence, we propose to denoise the noisy NER data with guidance from a small set of clean instances. Along with the main NER model we train a discriminator model and use its outputs to recalibrate the sample weights. The discriminator is capable of detecting both span and category errors with different discriminative prompts. Results on public crowdsourcing and distant supervision datasets show that the proposed method can consistently improve performance with a small guidance set.
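A sketch of the sample-weight recalibration idea, assuming the discriminator outputs a probability that each (possibly noisy) label is correct; this is not the paper's code.

```python
import torch

def recalibrated_loss(per_example_loss, p_correct):
    """Down-weight likely-noisy examples by discriminator confidence.
    per_example_loss, p_correct: (batch,) tensors; weights are normalized
    to keep the overall loss scale stable."""
    w = p_correct / p_correct.mean().clamp_min(1e-8)
    return (w.detach() * per_example_loss).mean()
```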
[ "Chu, Zhendong", "Zhang, Ruiyi", "Yu, Tong", "Jain, Rajiv", "Morariu, Vlad", "Gu, Jiuxiang", "Nenkova, Ani" ]
Self-Cleaning: Improving a Named Entity Recognizer Trained on Noisy Data with a Few Clean Instances
findings-naacl.14
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.15.bib
https://aclanthology.org/2024.findings-naacl.15/
@inproceedings{do-etal-2024-vlue, title = "{VLUE}: A New Benchmark and Multi-task Knowledge Transfer Learning for {V}ietnamese Natural Language Understanding", author = "Do, Phong and Tran, Son and Hoang, Phu and Nguyen, Kiet and Nguyen, Ngan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.15", doi = "10.18653/v1/2024.findings-naacl.15", pages = "211--222", abstract = "The success of Natural Language Understanding (NLU) benchmarks in various languages, such as GLUE for English, CLUE for Chinese, KLUE for Korean, and IndoNLU for Indonesian, has facilitated the evaluation of new NLU models across a wide range of tasks. To establish a standardized set of benchmarks for Vietnamese NLU, we introduce the first Vietnamese Language Understanding Evaluation (VLUE) benchmark. The VLUE benchmark encompasses five datasets covering different NLU tasks, including text classification, span extraction, and natural language understanding. To provide an insightful overview of the current state of Vietnamese NLU, we then evaluate seven state-of-the-art pre-trained models, including both multilingual and Vietnamese monolingual models, on our proposed VLUE benchmark. Furthermore, we present CafeBERT, a new state-of-the-art pre-trained model that achieves superior results across all tasks in the VLUE benchmark. Our model combines the proficiency of a multilingual pre-trained model with Vietnamese linguistic knowledge. CafeBERT is developed based on the XLM-RoBERTa model, with an additional pretraining step utilizing a significant amount of Vietnamese textual data to enhance its adaptation to the Vietnamese language. For the purpose of future research, CafeBERT is made publicly available for research purposes.", }
The success of Natural Language Understanding (NLU) benchmarks in various languages, such as GLUE for English, CLUE for Chinese, KLUE for Korean, and IndoNLU for Indonesian, has facilitated the evaluation of new NLU models across a wide range of tasks. To establish a standardized set of benchmarks for Vietnamese NLU, we introduce the first Vietnamese Language Understanding Evaluation (VLUE) benchmark. The VLUE benchmark encompasses five datasets covering different NLU tasks, including text classification, span extraction, and natural language understanding. To provide an insightful overview of the current state of Vietnamese NLU, we then evaluate seven state-of-the-art pre-trained models, including both multilingual and Vietnamese monolingual models, on our proposed VLUE benchmark. Furthermore, we present CafeBERT, a new state-of-the-art pre-trained model that achieves superior results across all tasks in the VLUE benchmark. Our model combines the proficiency of a multilingual pre-trained model with Vietnamese linguistic knowledge. CafeBERT is developed based on the XLM-RoBERTa model, with an additional pretraining step utilizing a significant amount of Vietnamese textual data to enhance its adaptation to the Vietnamese language. CafeBERT is made publicly available for future research.
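The record lists the released checkpoint uitnlp/CafeBERT; since the model is XLM-RoBERTa-based, it can be loaded with the standard transformers API.

```python
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("uitnlp/CafeBERT")
model = AutoModel.from_pretrained("uitnlp/CafeBERT")

enc = tok("Xin chào Việt Nam!", return_tensors="pt")
out = model(**enc)  # out.last_hidden_state: (1, seq_len, hidden_size)
```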
[ "Do, Phong", "Tran, Son", "Hoang, Phu", "Nguyen, Kiet", "Nguyen, Ngan" ]
VLUE: A New Benchmark and Multi-task Knowledge Transfer Learning for Vietnamese Natural Language Understanding
findings-naacl.15
Poster
2403.15882
[ "" ]
https://huggingface.co/papers/2403.15882
0
0
0
5
1
[ "uitnlp/CafeBERT" ]
[]
[]
https://aclanthology.org/2024.findings-naacl.16.bib
https://aclanthology.org/2024.findings-naacl.16/
@inproceedings{wang-etal-2024-leti, title = "{LETI}: Learning to Generate from Textual Interactions", author = "Wang, Xingyao and Peng, Hao and Jabbarvand, Reyhaneh and Ji, Heng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.16", doi = "10.18653/v1/2024.findings-naacl.16", pages = "223--239", abstract = "Fine-tuning pre-trained language models (LMs) is essential for enhancing their capabilities.Existing techniques commonly fine-tune on input-output pairs (e.g., instruction tuning) or with numerical rewards that gauge the output quality (e.g., RLHF). We explore LMs{'} potential to **le**arn from **t**extual **i**nteractions (**LETI**) that not only check their correctness with *binary labels* but also pinpoint and explain errors in their outputs through *textual feedback*.Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from code execution using a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback. Prepended to this fine-tuning text, a binary reward token is used to differentiate correct and buggy solutions.LETI requires *no* ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of LMs on a code generation dataset MBPP, but also generalizes to other datasets. Trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps.LETI is equally applicable in natural language tasks when they can be formulated as code generation, which we empirically verified on event argument extraction.", }
Fine-tuning pre-trained language models (LMs) is essential for enhancing their capabilities. Existing techniques commonly fine-tune on input-output pairs (e.g., instruction tuning) or with numerical rewards that gauge the output quality (e.g., RLHF). We explore LMs{'} potential to **le**arn from **t**extual **i**nteractions (**LETI**) that not only check their correctness with *binary labels* but also pinpoint and explain errors in their outputs through *textual feedback*. Our focus is the code generation task, where the model produces code based on natural language instructions. This setting invites a natural and scalable way to acquire textual feedback: the error messages and stack traces from code execution using a Python interpreter. LETI iteratively fine-tunes the model, using the LM objective, on a concatenation of natural language instructions, LM-generated programs, and textual feedback. Prepended to this fine-tuning text, a binary reward token is used to differentiate correct and buggy solutions. LETI requires *no* ground-truth outputs for training and even outperforms a fine-tuned baseline that does. LETI not only improves the performance of LMs on the code generation dataset MBPP, but also generalizes to other datasets. Trained on MBPP, it achieves comparable or better performance than the base LMs on unseen problems in HumanEval. Furthermore, compared to binary feedback, we observe that textual feedback leads to improved generation quality and sample efficiency, achieving the same performance with fewer than half of the gradient steps. LETI is equally applicable in natural language tasks when they can be formulated as code generation, which we empirically verified on event argument extraction.
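A sketch of the data construction the abstract describes: execute a generated program to collect textual feedback, then assemble the fine-tuning sequence with a prepended binary reward token. The token names and exact field order here are assumptions; the abstract only fixes the ingredients.

```python
import traceback

def run_and_collect_feedback(program: str):
    """Execute a generated program; the stack trace is the textual feedback."""
    try:
        exec(program, {})
        return True, "Execution succeeded."
    except Exception:
        return False, traceback.format_exc()

def make_leti_example(instruction, program, reward_tokens=("<|good|>", "<|bad|>")):
    """Assemble one LETI-style training sequence (hypothetical token names):
    binary reward token prepended to instruction + program + feedback."""
    passed, feedback = run_and_collect_feedback(program)
    reward = reward_tokens[0] if passed else reward_tokens[1]
    return reward + "\n" + "\n".join([instruction, program, feedback])

print(make_leti_example("Return the square of x.", "def sq(x): return x * x"))
```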
[ "Wang, Xingyao", "Peng, Hao", "Jabbarv", ", Reyhaneh", "Ji, Heng" ]
LETI: Learning to Generate from Textual Interactions
findings-naacl.16
Poster
2305.10314
[ "https://github.com/xingyaoww/leti" ]
https://huggingface.co/papers/2305.10314
1
0
0
4
1
[ "xingyaoww/LeTI" ]
[]
[]
https://aclanthology.org/2024.findings-naacl.17.bib
https://aclanthology.org/2024.findings-naacl.17/
@inproceedings{kong-etal-2024-bilateral, title = "Bilateral Masking with prompt for Knowledge Graph Completion", author = "Kong, Yonghui and Fan, Cunhang and Chen, Yujie and Zhang, Shuai and Lv, Zhao and Tao, Jianhua", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.17", doi = "10.18653/v1/2024.findings-naacl.17", pages = "240--249", abstract = "The pre-trained language model (PLM) has achieved significant success in the field of knowledge graph completion (KGC) by effectively modeling entity and relation descriptions. In recent studies, the research in this field has been categorized into methods based on word matching and sentence matching, with the former significantly lags behind. However, there is a critical issue in word matching methods, which is that these methods fail to obtain satisfactory single embedding representations for entities.To address this issue and enhance entity representation, we propose the Bilateral Masking with prompt for Knowledge Graph Completion (BMKGC) approach.Our methodology employs prompts to narrow the distance between the predicted entity and the known entity. Additionally, the BMKGC model incorporates a bi-encoder architecture, enabling simultaneous predictions at both the head and tail. Furthermore, we propose a straightforward technique to augment positive samples, mitigating the problem of degree bias present in knowledge graphs and thereby improving the model{'}s robustness. Experimental results conclusively demonstrate that BMKGC achieves state-of-the-art performance on the WN18RR dataset.", }
The pre-trained language model (PLM) has achieved significant success in the field of knowledge graph completion (KGC) by effectively modeling entity and relation descriptions. In recent studies, the research in this field has been categorized into methods based on word matching and sentence matching, with the former significantly lagging behind. However, word matching methods have a critical issue: they fail to obtain satisfactory single embedding representations for entities. To address this issue and enhance entity representation, we propose the Bilateral Masking with prompt for Knowledge Graph Completion (BMKGC) approach. Our methodology employs prompts to narrow the distance between the predicted entity and the known entity. Additionally, the BMKGC model incorporates a bi-encoder architecture, enabling simultaneous predictions at both the head and tail. Furthermore, we propose a straightforward technique to augment positive samples, mitigating the problem of degree bias present in knowledge graphs and thereby improving the model{'}s robustness. Experimental results conclusively demonstrate that BMKGC achieves state-of-the-art performance on the WN18RR dataset.
[ "Kong, Yonghui", "Fan, Cunhang", "Chen, Yujie", "Zhang, Shuai", "Lv, Zhao", "Tao, Jianhua" ]
Bilateral Masking with prompt for Knowledge Graph Completion
findings-naacl.17
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.18.bib
https://aclanthology.org/2024.findings-naacl.18/
@inproceedings{su-etal-2024-mile, title = "{M}i{L}e Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models", author = "Su, Zhenpeng and Lin, Zijia and Baixue, Baixue and Chen, Hui and Hu, Songlin and Zhou, Wei and Ding, Guiguang and W, Xing", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.18", doi = "10.18653/v1/2024.findings-naacl.18", pages = "250--262", abstract = "Generative language models are usually pre-trained on large text corpus via predicting the next token (i.e., sub-word/word/phrase) given the previous ones. Recent works have demonstrated the impressive performance of large generative language models on downstream tasks. However, existing generative language models generally neglect an inherent challenge in text corpus during training, i.e., the imbalance between frequent tokens and infrequent ones. It can lead a language model to be dominated by common and easy-to-learn tokens, thereby overlooking the infrequent and difficult-to-learn ones. To alleviate that, we propose a **MiLe Loss** function for **mi**tigating the bias of **le**arning difficulties with tokens. During training, it can dynamically assess the learning difficulty of a to-be-learned token, according to the information entropy of the corresponding predicted probability distribution over the vocabulary. Then it scales the training loss adaptively, trying to lead the model to focus more on the difficult-to-learn tokens. On the Pile dataset, we train generative language models at different scales of 468M, 1.2B, and 6.7B parameters. Experiments reveal that models incorporating the proposed MiLe Loss can gain consistent performance improvement on downstream benchmarks.", }
Generative language models are usually pre-trained on large text corpora via predicting the next token (i.e., sub-word/word/phrase) given the previous ones. Recent works have demonstrated the impressive performance of large generative language models on downstream tasks. However, existing generative language models generally neglect an inherent challenge in text corpora during training, i.e., the imbalance between frequent tokens and infrequent ones. It can lead a language model to be dominated by common and easy-to-learn tokens, thereby overlooking the infrequent and difficult-to-learn ones. To alleviate that, we propose a **MiLe Loss** function for **mi**tigating the bias of **le**arning difficulties with tokens. During training, it can dynamically assess the learning difficulty of a to-be-learned token, according to the information entropy of the corresponding predicted probability distribution over the vocabulary. Then it scales the training loss adaptively, trying to lead the model to focus more on the difficult-to-learn tokens. On the Pile dataset, we train generative language models at different scales of 468M, 1.2B, and 6.7B parameters. Experiments reveal that models incorporating the proposed MiLe Loss can gain consistent performance improvement on downstream benchmarks.
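A minimal sketch of an entropy-scaled token loss in the spirit of MiLe; the paper's exact scaling rule and normalization may differ (here the weight is detached and mean-normalized to keep the loss scale stable).

```python
import torch
import torch.nn.functional as F

def mile_style_loss(logits, targets, gamma=1.0, eps=1e-8):
    """Scale each token's cross-entropy by the entropy of the predicted
    distribution, so hard (high-entropy) tokens contribute more.
    logits: (batch, time, vocab); targets: (batch, time)."""
    logp = F.log_softmax(logits, dim=-1)
    ce = F.nll_loss(logp.flatten(0, -2), targets.flatten(), reduction="none")
    entropy = -(logp.exp() * logp).sum(dim=-1).flatten()  # per-token entropy
    weight = (entropy + eps) ** gamma
    weight = weight / weight.mean()
    return (weight.detach() * ce).mean()

logits, targets = torch.randn(2, 5, 100), torch.randint(0, 100, (2, 5))
print(mile_style_loss(logits, targets))
```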
[ "Su, Zhenpeng", "Lin, Zijia", "Baixue, Baixue", "Chen, Hui", "Hu, Songlin", "Zhou, Wei", "Ding, Guiguang", "W, Xing" ]
MiLe Loss: a New Loss for Mitigating the Bias of Learning Difficulties in Generative Language Models
findings-naacl.18
Poster
2310.19531
[ "https://github.com/suu990901/LLaMA-InfoEntropy-Loss" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.19.bib
https://aclanthology.org/2024.findings-naacl.19/
@inproceedings{zhang-moshfeghi-2024-gold, title = "{GOLD}: Geometry Problem Solver with Natural Language Description", author = "Zhang, Jiaxin and Moshfeghi, Yashar", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.19", doi = "10.18653/v1/2024.findings-naacl.19", pages = "263--278", abstract = "Addressing the challenge of automated geometry math problem-solving in artificial intelligence (AI) involves understanding multi-modal information and mathematics. blackCurrent methods struggle with accurately interpreting geometry diagrams, which hinders effective problem-solving. To tackle this issue, we present the \textbf{G}eometry problem s\textbf{O}lver with natural \textbf{L}anguage \textbf{D}escription (GOLD) model. GOLD enhances the extraction of geometric relations by separately processing symbols and geometric primitives within the diagram. Subsequently, it converts the extracted relations into natural language descriptions, efficiently utilizing large language models to solve geometry math problems. Experiments show that the GOLD model outperforms the Geoformer model, the previous best method on the UniGeo dataset, by achieving accuracy improvements of 12.7{\%} and 42.1{\%} in calculation and proving subsets. Additionally, it surpasses the former best model on the PGPS9K and Geometry3K datasets, PGPSNet, by obtaining accuracy enhancements of 1.8{\%} and 3.2{\%}, respectively.", }
Addressing the challenge of automated geometry math problem-solving in artificial intelligence (AI) involves understanding multi-modal information and mathematics. Current methods struggle with accurately interpreting geometry diagrams, which hinders effective problem-solving. To tackle this issue, we present the \textbf{G}eometry problem s\textbf{O}lver with natural \textbf{L}anguage \textbf{D}escription (GOLD) model. GOLD enhances the extraction of geometric relations by separately processing symbols and geometric primitives within the diagram. Subsequently, it converts the extracted relations into natural language descriptions, efficiently utilizing large language models to solve geometry math problems. Experiments show that the GOLD model outperforms the Geoformer model, the previous best method on the UniGeo dataset, by achieving accuracy improvements of 12.7{\%} and 42.1{\%} in calculation and proving subsets. Additionally, it surpasses the former best model on the PGPS9K and Geometry3K datasets, PGPSNet, by obtaining accuracy enhancements of 1.8{\%} and 3.2{\%}, respectively.
[ "Zhang, Jiaxin", "Moshfeghi, Yashar" ]
GOLD: Geometry Problem Solver with Natural Language Description
findings-naacl.19
Poster
2405.00494
[ "https://github.com/neurasearch/geometry-diagram-description" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.20.bib
https://aclanthology.org/2024.findings-naacl.20/
@inproceedings{codrut-etal-2024-rodia, title = "{R}o{D}ia: A New Dataset for {R}omanian Dialect Identification from Speech", author = "Codru{\textcommabelow{t}}, Rotaru and Ristea, Nicolae and Ionescu, Radu", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.20", doi = "10.18653/v1/2024.findings-naacl.20", pages = "279--286", abstract = "We introduce RoDia, the first dataset for Romanian dialect identification from speech. The RoDia dataset includes a varied compilation of speech samples from five distinct regions of Romania, covering both urban and rural environments, totaling 2 hours of manually annotated speech data. Along with our dataset, we introduce a set of competitive models to be used as baselines for future research. The top scoring model achieves a macro F1 score of 59.83{\%} and a micro F1 score of 62.08{\%}, indicating that the task is challenging. We thus believe that RoDia is a valuable resource that will stimulate research aiming to address the challenges of Romanian dialect identification. We release our dataset at https://github.com/codrut2/RoDia.", }
We introduce RoDia, the first dataset for Romanian dialect identification from speech. The RoDia dataset includes a varied compilation of speech samples from five distinct regions of Romania, covering both urban and rural environments, totaling 2 hours of manually annotated speech data. Along with our dataset, we introduce a set of competitive models to be used as baselines for future research. The top scoring model achieves a macro F1 score of 59.83{\%} and a micro F1 score of 62.08{\%}, indicating that the task is challenging. We thus believe that RoDia is a valuable resource that will stimulate research aiming to address the challenges of Romanian dialect identification. We release our dataset at https://github.com/codrut2/RoDia.
[ "Codru{\\textcommabelow{t}}, Rotaru", "Ristea, Nicolae", "Ionescu, Radu" ]
RoDia: A New Dataset for Romanian Dialect Identification from Speech
findings-naacl.20
Poster
2309.03378
[ "https://github.com/codrut2/rodia" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.21.bib
https://aclanthology.org/2024.findings-naacl.21/
@inproceedings{choenni-etal-2024-examining, title = "Examining Modularity in Multilingual {LM}s via Language-Specialized Subnetworks", author = "Choenni, Rochelle and Shutova, Ekaterina and Garrette, Dan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.21", doi = "10.18653/v1/2024.findings-naacl.21", pages = "287--301", abstract = "Recent work has proposed explicitly inducing language-wise modularity in multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a means of better guiding cross-lingual sharing. In this paper, we investigate (1) the degree to which language-wise modularity *naturally* arises within models with no special modularity interventions, and (2) how cross-lingual sharing and interference differ between such models and those with explicit SFT-guided subnetwork modularity. In order to do so, we use XLM-R as our multilingual LM. Moreover, to quantify language specialization and cross-lingual interaction, we use a Training Data Attribution method that estimates the degree to which a model{'}s predictions are influenced by in-language or cross-language training examples. Our results show that language-specialized subnetworks do naturally arise, and that SFT, rather than always increasing modularity, can decrease language specialization of subnetworks in favor of more cross-lingual sharing.", }
Recent work has proposed explicitly inducing language-wise modularity in multilingual LMs via sparse fine-tuning (SFT) on per-language subnetworks as a means of better guiding cross-lingual sharing. In this paper, we investigate (1) the degree to which language-wise modularity *naturally* arises within models with no special modularity interventions, and (2) how cross-lingual sharing and interference differ between such models and those with explicit SFT-guided subnetwork modularity. In order to do so, we use XLM-R as our multilingual LM. Moreover, to quantify language specialization and cross-lingual interaction, we use a Training Data Attribution method that estimates the degree to which a model{'}s predictions are influenced by in-language or cross-language training examples. Our results show that language-specialized subnetworks do naturally arise, and that SFT, rather than always increasing modularity, can decrease language specialization of subnetworks in favor of more cross-lingual sharing.
[ "Choenni, Rochelle", "Shutova, Ekaterina", "Garrette, Dan" ]
Examining Modularity in Multilingual LMs via Language-Specialized Subnetworks
findings-naacl.21
Poster
2311.08273
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.22.bib
https://aclanthology.org/2024.findings-naacl.22/
@inproceedings{zhang-etal-2024-reverse, title = "Reverse Chain: A Generic-Rule for {LLM}s to Master Multi-{API} Planning", author = "Zhang, Yinger and Cai, Hui and Song, Xierui and Chen, Yicheng and Sun, Rui and Zheng, Jing", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.22", doi = "10.18653/v1/2024.findings-naacl.22", pages = "302--325", abstract = "While enabling large language models to implement function calling (known as APIs) can greatly enhance the performance of Large Language Models (LLMs), function calling is still a challenging task due to the complicated relations between different APIs, especially in a context-learning setting without fine-tuning. This paper introduces {``}Reverse Chain{''}, a controllable, target-driven approach designed to empower LLMs with the capability to operate external APIs only via prompts. Recognizing that most LLMs have limited tool-use capabilities, Reverse Chain limits LLMs to executing simple tasks, e.g., API Selection and Argument Completion. Furthermore, to manage a controllable multi-function calling, Reverse Chain adopts a generic rule-based on a backward reasoning process. This rule determines when to do API selection or Argument completion. To evaluate the multi-tool-use capability of LLMs, we have released a compositional multi-tool task dataset, available at https://github.com/zhangyingerjelly/reverse-chain. Extensive numerical experiments validate the remarkable proficiency of Reverse Chain in managing multiple API calls.", }
While enabling large language models to implement function calling (known as APIs) can greatly enhance the performance of Large Language Models (LLMs), function calling is still a challenging task due to the complicated relations between different APIs, especially in a context-learning setting without fine-tuning. This paper introduces {``}Reverse Chain{''}, a controllable, target-driven approach designed to empower LLMs with the capability to operate external APIs only via prompts. Recognizing that most LLMs have limited tool-use capabilities, Reverse Chain limits LLMs to executing simple tasks, e.g., API Selection and Argument Completion. Furthermore, to manage controllable multi-function calling, Reverse Chain adopts a generic rule based on a backward reasoning process. This rule determines when to do API selection or argument completion. To evaluate the multi-tool-use capability of LLMs, we have released a compositional multi-tool task dataset, available at https://github.com/zhangyingerjelly/reverse-chain. Extensive numerical experiments validate the remarkable proficiency of Reverse Chain in managing multiple API calls.
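A sketch of the backward, target-first rule: choose the final API, then recursively resolve any argument that cannot be completed from context. `select_api` and `complete_args` stand in for the two LLM-prompted subtasks (API Selection and Argument Completion); the interface is an assumption.

```python
def reverse_chain(task, apis, select_api, complete_args):
    """Target-first planning: pick the final API for the task, then resolve
    each missing argument, possibly via a nested API call."""
    api = select_api(task, apis)
    args = complete_args(task, api)
    for name, value in args.items():
        if value is None:  # argument not derivable from context: plan a sub-call
            sub_task = f"obtain {name} needed by {api}"
            args[name] = reverse_chain(sub_task, apis, select_api, complete_args)
    return {"api": api, "args": args}

# toy usage with rule-based stand-ins for the LLM steps
plan = reverse_chain(
    "book a table for tonight",
    apis=["find_restaurant", "book_table"],
    select_api=lambda task, apis: "book_table" if task.startswith("book") else "find_restaurant",
    complete_args=lambda task, api: {"restaurant_id": None} if api == "book_table" else {"query": task},
)
print(plan)  # book_table with restaurant_id resolved by a nested find_restaurant call
```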
[ "Zhang, Yinger", "Cai, Hui", "Song, Xierui", "Chen, Yicheng", "Sun, Rui", "Zheng, Jing" ]
Reverse Chain: A Generic-Rule for LLMs to Master Multi-API Planning
findings-naacl.22
Poster
2310.04474
[ "" ]
https://huggingface.co/papers/2310.04474
0
2
0
5
1
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.23.bib
https://aclanthology.org/2024.findings-naacl.23/
@inproceedings{jiqunchu-lin-2024-incorporating, title = "Incorporating Exponential Smoothing into {MLP}: a Simple but Effective Sequence Model", author = "JiqunChu, JiqunChu and Lin, Zuoquan", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.23", doi = "10.18653/v1/2024.findings-naacl.23", pages = "326--337", abstract = "Modeling long-range dependencies in sequential data is a crucial step in sequence learning. A recently developed model, the Structured State Space (S4), demonstrated significant effectiveness in modeling long-range sequences. However, It is unclear whether the success of S4 can be attributed to its intricate parameterization and HiPPO initialization or simply due to State Space Models (SSMs). To further investigate the potential of the deep SSMs, we start with exponential smoothing (ETS), a simple SSM, and propose a stacked architecture by directly incorporating it into an element-wise MLP. We augment simple ETS with additional parameters and complex field to reduce the inductive bias. Despite increasing less than 1{\%} of parameters of element-wise MLP, our models achieve comparable results to S4 on the LRA benchmark.", }
Modeling long-range dependencies in sequential data is a crucial step in sequence learning. A recently developed model, the Structured State Space (S4), demonstrated significant effectiveness in modeling long-range sequences. However, it is unclear whether the success of S4 can be attributed to its intricate parameterization and HiPPO initialization or simply to the use of State Space Models (SSMs). To further investigate the potential of deep SSMs, we start with exponential smoothing (ETS), a simple SSM, and propose a stacked architecture by directly incorporating it into an element-wise MLP. We augment simple ETS with additional parameters and a complex field to reduce the inductive bias. Despite increasing the parameters of the element-wise MLP by less than 1{\%}, our models achieve comparable results to S4 on the LRA benchmark.
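A minimal sketch of the core idea under stated assumptions (real-valued alpha, no complex field, plain residual MLP): exponential smoothing as the token mixer inside an element-wise MLP block.

```python
import torch

def ets_smooth(x, alpha):
    """Exponential smoothing along time: s_t = alpha*x_t + (1-alpha)*s_{t-1}.
    x: (batch, time, dim); alpha: (dim,) values in (0, 1), one per channel."""
    s = torch.zeros_like(x[:, 0])
    out = []
    for t in range(x.size(1)):
        s = alpha * x[:, t] + (1 - alpha) * s
        out.append(s)
    return torch.stack(out, dim=1)

class ETSMLPBlock(torch.nn.Module):
    """Element-wise MLP with an ETS token mixer (a sketch, not the paper's code)."""
    def __init__(self, dim):
        super().__init__()
        self.logit_alpha = torch.nn.Parameter(torch.zeros(dim))
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim),
            torch.nn.GELU(),
            torch.nn.Linear(4 * dim, dim),
        )

    def forward(self, x):                          # x: (batch, time, dim)
        alpha = torch.sigmoid(self.logit_alpha)    # keep alpha in (0, 1)
        return x + self.mlp(ets_smooth(x, alpha))

block = ETSMLPBlock(dim=16)
print(block(torch.randn(4, 32, 16)).shape)         # torch.Size([4, 32, 16])
```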
[ "JiqunChu, JiqunChu", "Lin, Zuoquan" ]
Incorporating Exponential Smoothing into MLP: a Simple but Effective Sequence Model
findings-naacl.23
Poster
2403.17445
[ "https://github.com/pkuai-lingroup/etsmlp" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.24.bib
https://aclanthology.org/2024.findings-naacl.24/
@inproceedings{kuang-etal-2024-openfmnav, title = "{O}pen{FMN}av: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models", author = "Kuang, Yuxuan and Lin, Hai and Jiang, Meng", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.24", doi = "10.18653/v1/2024.findings-naacl.24", pages = "338--351", abstract = "Object navigation (ObjectNav) requires an agent to navigate through unseen environments to find queried objects. Many previous methods attempted to solve this task by relying on supervised or reinforcement learning, where they are trained on limited household datasets with close-set objects. However, two key challenges are unsolved: understanding free-form natural language instructions that demand open-set objects, and generalizing to new environments in a zero-shot manner. Aiming to solve the two challenges, in this paper, we propose **OpenFMNav**, an **Open**-set **F**oundation **M**odel based framework for zero-shot object **Nav**igation. We first unleash the reasoning abilities of large language models (LLMs) to extract proposed objects from natural language instructions that meet the user{'}s demand. We then leverage the generalizability of large vision language models (VLMs) to actively discover and detect candidate objects from the scene, building a *Versatile Semantic Score Map (VSSM)*. Then, by conducting common sense reasoning on *VSSM*, our method can perform effective language-guided exploration and exploitation of the scene and finally reach the goal. By leveraging the reasoning and generalizing abilities of foundation models, our method can understand free-form human instructions and perform effective open-set zero-shot navigation in diverse environments. Extensive experiments on the HM3D ObjectNav benchmark show that our method surpasses all the strong baselines on all metrics, proving our method{'}s effectiveness. Furthermore, we perform real robot demonstrations to validate our method{'}s open-set-ness and generalizability to real-world environments.", }
Object navigation (ObjectNav) requires an agent to navigate through unseen environments to find queried objects. Many previous methods attempted to solve this task by relying on supervised or reinforcement learning, where they are trained on limited household datasets with close-set objects. However, two key challenges are unsolved: understanding free-form natural language instructions that demand open-set objects, and generalizing to new environments in a zero-shot manner. Aiming to solve the two challenges, in this paper, we propose **OpenFMNav**, an **Open**-set **F**oundation **M**odel based framework for zero-shot object **Nav**igation. We first unleash the reasoning abilities of large language models (LLMs) to extract proposed objects from natural language instructions that meet the user{'}s demand. We then leverage the generalizability of large vision language models (VLMs) to actively discover and detect candidate objects from the scene, building a *Versatile Semantic Score Map (VSSM)*. Then, by conducting common sense reasoning on *VSSM*, our method can perform effective language-guided exploration and exploitation of the scene and finally reach the goal. By leveraging the reasoning and generalizing abilities of foundation models, our method can understand free-form human instructions and perform effective open-set zero-shot navigation in diverse environments. Extensive experiments on the HM3D ObjectNav benchmark show that our method surpasses all the strong baselines on all metrics, proving our method{'}s effectiveness. Furthermore, we perform real robot demonstrations to validate our method{'}s open-set-ness and generalizability to real-world environments.
[ "Kuang, Yuxuan", "Lin, Hai", "Jiang, Meng" ]
OpenFMNav: Towards Open-Set Zero-Shot Object Navigation via Vision-Language Foundation Models
findings-naacl.24
Poster
2402.10670
[ "https://github.com/yxKryptonite/OpenFMNav" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.25.bib
https://aclanthology.org/2024.findings-naacl.25/
@inproceedings{brake-schaaf-2024-comparing, title = "Comparing Two Model Designs for Clinical Note Generation; Is an {LLM} a Useful Evaluator of Consistency?", author = "Brake, Nathan and Schaaf, Thomas", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.25", doi = "10.18653/v1/2024.findings-naacl.25", pages = "352--363", abstract = "Following an interaction with a patient, physicians are responsible for the submission of clinical documentation, often organized as a SOAP note. A clinical note is not simply a summary of the conversation but requires the use of appropriate medical terminology. The relevant information can then be extracted and organized according to the structure of the SOAP note. In this paper we analyze two different approaches to generate the different sections of a SOAP note based on the audio recording of the conversation, and specifically examine them in terms of note consistency. The first approach generates the sections independently, while the second method generates them all together. In this work we make use of PEGASUS-X Transformer models and observe that both methods lead to similar ROUGE values (less than 1{\%} difference) and have no difference in terms of the Factuality metric. We perform a human evaluation to measure aspects of consistency and demonstrate that LLMs like Llama2 can be used to perform the same tasks with roughly the same agreement as the human annotators. Between the Llama2 analysis and the human reviewers we observe a Cohen Kappa inter-rater reliability of 0.79, 1.00, and 0.32 for consistency of age, gender, and body part injury, respectively. With this we demonstrate the usefulness of leveraging an LLM to measure quality indicators that can be identified by humans but are not currently captured by automatic metrics. This allows scaling evaluation to larger data sets, and we find that clinical note consistency improves by generating each new section conditioned on the output of all previously generated sections.", }
Following an interaction with a patient, physicians are responsible for the submission of clinical documentation, often organized as a SOAP note. A clinical note is not simply a summary of the conversation but requires the use of appropriate medical terminology. The relevant information can then be extracted and organized according to the structure of the SOAP note. In this paper we analyze two different approaches to generate the different sections of a SOAP note based on the audio recording of the conversation, and specifically examine them in terms of note consistency. The first approach generates the sections independently, while the second method generates them all together. In this work we make use of PEGASUS-X Transformer models and observe that both methods lead to similar ROUGE values (less than 1{\%} difference) and have no difference in terms of the Factuality metric. We perform a human evaluation to measure aspects of consistency and demonstrate that LLMs like Llama2 can be used to perform the same tasks with roughly the same agreement as the human annotators. Between the Llama2 analysis and the human reviewers we observe a Cohen Kappa inter-rater reliability of 0.79, 1.00, and 0.32 for consistency of age, gender, and body part injury, respectively. With this we demonstrate the usefulness of leveraging an LLM to measure quality indicators that can be identified by humans but are not currently captured by automatic metrics. This allows scaling evaluation to larger data sets, and we find that clinical note consistency improves by generating each new section conditioned on the output of all previously generated sections.
[ "Brake, Nathan", "Schaaf, Thomas" ]
Comparing Two Model Designs for Clinical Note Generation; Is an LLM a Useful Evaluator of Consistency?
findings-naacl.25
Poster
2404.06503
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.26.bib
https://aclanthology.org/2024.findings-naacl.26/
@inproceedings{ma-etal-2024-volta, title = "{VOLTA}: Improving Generative Diversity by Variational Mutual Information Maximizing Autoencoder", author = "Ma, Yueen and Chi, DaFeng and Li, Jingjing and Song, Kai and Zhuang, Yuzheng and King, Irwin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.26", doi = "10.18653/v1/2024.findings-naacl.26", pages = "364--378", abstract = "The natural language generation domain has witnessed great success thanks to Transformer models. Although they have achieved state-of-the-art generative quality, they often neglect generative diversity. Prior attempts to tackle this issue suffer from either low model capacity or over-complicated architectures. Some recent methods employ the VAE framework to enhance diversity, but their latent variables fully depend on the input context, restricting exploration of the latent space. In this paper, we introduce VOLTA, a framework that elevates generative diversity by bridging Transformer with VAE via a more effective cross-attention-based connection, departing from conventional embedding concatenation or summation. Additionally, we propose integrating InfoGAN-style latent codes to enable input-independent variability, further diversifying the generation. Moreover, our framework accommodates discrete inputs alongside its existing support for continuous inputs. We perform comprehensive experiments with two types of Transformers on six datasets from three different NLG tasks to show that our approach can significantly improve generative diversity while maintaining generative quality.", }
The natural language generation domain has witnessed great success thanks to Transformer models. Although they have achieved state-of-the-art generative quality, they often neglect generative diversity. Prior attempts to tackle this issue suffer from either low model capacity or over-complicated architectures. Some recent methods employ the VAE framework to enhance diversity, but their latent variables fully depend on the input context, restricting exploration of the latent space. In this paper, we introduce VOLTA, a framework that elevates generative diversity by bridging Transformer with VAE via a more effective cross-attention-based connection, departing from conventional embedding concatenation or summation. Additionally, we propose integrating InfoGAN-style latent codes to enable input-independent variability, further diversifying the generation. Moreover, our framework accommodates discrete inputs alongside its existing support for continuous inputs. We perform comprehensive experiments with two types of Transformers on six datasets from three different NLG tasks to show that our approach can significantly improve generative diversity while maintaining generative quality.
[ "Ma, Yueen", "Chi, DaFeng", "Li, Jingjing", "Song, Kai", "Zhuang, Yuzheng", "King, Irwin" ]
VOLTA: Improving Generative Diversity by Variational Mutual Information Maximizing Autoencoder
findings-naacl.26
Poster
2307.00852
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.27.bib
https://aclanthology.org/2024.findings-naacl.27/
@inproceedings{sharma-2024-ecospeak, title = "{E}co{S}peak: Cost-Efficient Bias Mitigation for Partially Cross-Lingual Speaker Verification", author = "Sharma, Divya", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.27", doi = "10.18653/v1/2024.findings-naacl.27", pages = "379--394", abstract = "Linguistic bias is a critical problem concerning the diversity, equity, and inclusiveness of Natural Language Processing tools. The severity of this problem intensifies in security systems, such as speaker verification, where fairness is paramount. Speaker verification systems are biometric systems that determine whether two speech recordings are of the same speaker. Such user-centric systems should be inclusive of bilingual speakers. However, deep neural network models are linguistically biased. Linguistic bias can be full or partial. Partially cross-lingual bias occurs when one test trial pair recording is in the training set{'}s language, and the other is in an unseen target language. Such linguistic mismatch influences the speaker verification model{'}s decision, dissuading bilingual speakers from using the system. Domain adaptation can mitigate this problem. However, adapting to each existing language is expensive. This paper explores cost-efficient bias mitigation techniques for partially cross-lingual speaker verification. We study the behavior of five baselines in five partially cross-lingual scenarios. Using our baseline behavioral insights, we propose EcoSpeak, a low-cost solution to partially cross-lingual speaker verification. EcoSpeak incorporates contrastive linguistic (CL) attention. CL attention utilizes linguistic differences in trial pairs to emphasize relevant speaker verification embedding parts. Experimental results demonstrate EcoSpeak{'}s robustness to partially cross-lingual testing.", }
Linguistic bias is a critical problem concerning the diversity, equity, and inclusiveness of Natural Language Processing tools. The severity of this problem intensifies in security systems, such as speaker verification, where fairness is paramount. Speaker verification systems are biometric systems that determine whether two speech recordings are of the same speaker. Such user-centric systems should be inclusive of bilingual speakers. However, deep neural network models are linguistically biased. Linguistic bias can be full or partial. Partially cross-lingual bias occurs when one test trial pair recording is in the training set{'}s language, and the other is in an unseen target language. Such linguistic mismatch influences the speaker verification model{'}s decision, dissuading bilingual speakers from using the system. Domain adaptation can mitigate this problem. However, adapting to each existing language is expensive. This paper explores cost-efficient bias mitigation techniques for partially cross-lingual speaker verification. We study the behavior of five baselines in five partially cross-lingual scenarios. Using our baseline behavioral insights, we propose EcoSpeak, a low-cost solution to partially cross-lingual speaker verification. EcoSpeak incorporates contrastive linguistic (CL) attention. CL attention utilizes linguistic differences in trial pairs to emphasize relevant speaker verification embedding parts. Experimental results demonstrate EcoSpeak{'}s robustness to partially cross-lingual testing.
[ "Sharma, Divya" ]
EcoSpeak: Cost-Efficient Bias Mitigation for Partially Cross-Lingual Speaker Verification
findings-naacl.27
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.28.bib
https://aclanthology.org/2024.findings-naacl.28/
@inproceedings{bhowmik-etal-2024-leveraging, title = "Leveraging Contextual Information for Effective Entity Salience Detection", author = "Bhowmik, Rajarshi and Ponza, Marco and Tendle, Atharva and Gupta, Anant and Jiang, Rebecca and Lu, Xingyu and Zhao, Qian and Preotiuc-Pietro, Daniel", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.28", doi = "10.18653/v1/2024.findings-naacl.28", pages = "395--408", abstract = "In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task{'}s uniqueness and complexity.", }
In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task{'}s uniqueness and complexity.
[ "Bhowmik, Rajarshi", "Ponza, Marco", "Tendle, Atharva", "Gupta, Anant", "Jiang, Rebecca", "Lu, Xingyu", "Zhao, Qian", "Preotiuc-Pietro, Daniel" ]
Leveraging Contextual Information for Effective Entity Salience Detection
findings-naacl.28
Poster
2309.07990
[ "" ]
https://huggingface.co/papers/2309.07990
5
7
0
8
1
[]
[]
[]
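The cross-encoder setup described in the abstract above can be made concrete with a minimal sketch: the entity mention and the full document are encoded jointly, so attention over the document context informs the salience decision. The base model, label count, and example below are illustrative assumptions for demonstration, not the paper's actual configuration, and the classification head is untrained until fine-tuned.

```python
# Illustrative sketch of cross-encoder entity salience scoring.
# Model name and example are assumptions; fine-tune before real use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # randomly initialized head
)
model.eval()

def salience_score(entity: str, document: str) -> float:
    # Cross-encoder input: entity and document are encoded as a text
    # pair, so attention can use the full document context.
    inputs = tokenizer(entity, document, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()  # P(salient)

doc = "The mayor of Springfield announced a new transit plan on Monday."
print(salience_score("mayor of Springfield", doc))
```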
https://aclanthology.org/2024.findings-naacl.29.bib
https://aclanthology.org/2024.findings-naacl.29/
@inproceedings{zhang-etal-2024-llm, title = "{LLM}-as-a-Coauthor: Can Mixed Human-Written and Machine-Generated Text Be Detected?", author = "Zhang, Qihui and Gao, Chujie and Chen, Dongping and Huang, Yue and Huang, Yixin and Sun, Zhenyang and Zhang, Shilin and Li, Weiye and Fu, Zhengyan and Wan, Yao and Sun, Lichao", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.29", doi = "10.18653/v1/2024.findings-naacl.29", pages = "409--436", abstract = "With the rapid development and widespread application of Large Language Models (LLMs), the use of Machine-Generated Text (MGT) has become increasingly common, bringing with it potential risks, especially in terms of quality and integrity in fields like news, education, and science. Current research mainly focuses on pure MGT detection, without adequately addressing mixed scenarios including AI-revised Human-Written Text (HWT) or human-revised MGT. To tackle this challenge, we define mixtext, a form of mixed text involving both AI and human-generated content. Then we introduce MixSet, the first dataset dedicated to studying these mixtext scenarios. Leveraging MixSet, we executed comprehensive experiments to assess the efficacy of prevalent MGT detectors in handling mixtext situations, evaluating their performance in terms of effectiveness, robustness, and generalization. Our findings reveal that existing detectors struggle to identify mixtext, particularly in dealing with subtle modifications and style adaptability. This research underscores the urgent need for more fine-grained detectors tailored for mixtext, offering valuable insights for future research. Code and Models are available at https://github.com/Dongping-Chen/MixSet.", }
With the rapid development and widespread application of Large Language Models (LLMs), the use of Machine-Generated Text (MGT) has become increasingly common, bringing with it potential risks, especially in terms of quality and integrity in fields like news, education, and science. Current research mainly focuses on pure MGT detection, without adequately addressing mixed scenarios including AI-revised Human-Written Text (HWT) or human-revised MGT. To tackle this challenge, we define mixtext, a form of mixed text involving both AI and human-generated content. Then we introduce MixSet, the first dataset dedicated to studying these mixtext scenarios. Leveraging MixSet, we executed comprehensive experiments to assess the efficacy of prevalent MGT detectors in handling mixtext situations, evaluating their performance in terms of effectiveness, robustness, and generalization. Our findings reveal that existing detectors struggle to identify mixtext, particularly in dealing with subtle modifications and style adaptability. This research underscores the urgent need for more fine-grained detectors tailored for mixtext, offering valuable insights for future research. Code and Models are available at https://github.com/Dongping-Chen/MixSet.
[ "Zhang, Qihui", "Gao, Chujie", "Chen, Dongping", "Huang, Yue", "Huang, Yixin", "Sun, Zhenyang", "Zhang, Shilin", "Li, Weiye", "Fu, Zhengyan", "Wan, Yao", "Sun, Lichao" ]
LLM-as-a-Coauthor: Can Mixed Human-Written and Machine-Generated Text Be Detected?
findings-naacl.29
Poster
2401.05952
[ "https://github.com/dongping-chen/mixset" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.30.bib
https://aclanthology.org/2024.findings-naacl.30/
@inproceedings{verhoeven-etal-2024-realistic, title = "A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection", author = "Verhoeven, Ivo and Mishra, Pushkar and Beloch, Rahel and Yannakoudakis, Helen and Shutova, Ekaterina", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.30", doi = "10.18653/v1/2024.findings-naacl.30", pages = "437--463", abstract = "Community models for malicious content detection, which take into account the context from a social graph alongside the content itself, have shown remarkable performance on benchmark datasets. Yet, misinformation and hate speech continue to propagate on social media networks. This mismatch can be partially attributed to the limitations of current evaluation setups that neglect the rapid evolution of online content and the underlying social graph. In this paper, we propose a novel evaluation setup for model generalisation based on our few-shot subgraph sampling approach. This setup tests for generalisation through few labelled examples in local explorations of a larger graph, emulating more realistic application settings. We show this to be a challenging inductive setup, wherein strong performance on the training graph is not indicative of performance on unseen tasks, domains, or graph structures. Lastly, we show that graph meta-learners trained with our proposed few-shot subgraph sampling outperform standard community models in the inductive setup.", }
Community models for malicious content detection, which take into account the context from a social graph alongside the content itself, have shown remarkable performance on benchmark datasets. Yet, misinformation and hate speech continue to propagate on social media networks. This mismatch can be partially attributed to the limitations of current evaluation setups that neglect the rapid evolution of online content and the underlying social graph. In this paper, we propose a novel evaluation setup for model generalisation based on our few-shot subgraph sampling approach. This setup tests for generalisation through few labelled examples in local explorations of a larger graph, emulating more realistic application settings. We show this to be a challenging inductive setup, wherein strong performance on the training graph is not indicative of performance on unseen tasks, domains, or graph structures. Lastly, we show that graph meta-learners trained with our proposed few-shot subgraph sampling outperform standard community models in the inductive setup.
[ "Verhoeven, Ivo", "Mishra, Pushkar", "Beloch, Rahel", "Yannakoudakis, Helen", "Shutova, Ekaterina" ]
A (More) Realistic Evaluation Setup for Generalisation of Community Models on Malicious Content Detection
findings-naacl.30
Poster
2404.01822
[ "https://github.com/rahelbeloch/meta-learning-gnns" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.31.bib
https://aclanthology.org/2024.findings-naacl.31/
@inproceedings{huang-chang-2024-citation, title = "Citation: A Key to Building Responsible and Accountable Large Language Models", author = "Huang, Jie and Chang, Kevin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.31", doi = "10.18653/v1/2024.findings-naacl.31", pages = "464--473", abstract = "Large Language Models (LLMs) bring transformative benefits alongside unique challenges, including intellectual property (IP) and ethical concerns. This position paper explores a novel angle to mitigate these risks, drawing parallels between LLMs and established web systems. We identify {``}citation{''}{---}the acknowledgement or reference to a source or evidence{---}as a crucial yet missing component in LLMs. Incorporating citation could enhance content transparency and verifiability, thereby confronting the IP and ethical issues in the deployment of LLMs. We further propose that a comprehensive citation mechanism for LLMs should account for both non-parametric and parametric content. Despite the complexity of implementing such a citation mechanism, along with the potential pitfalls, we advocate for its development. Building on this foundation, we outline several research problems in this area, aiming to guide future explorations towards building more responsible and accountable LLMs.", }
Large Language Models (LLMs) bring transformative benefits alongside unique challenges, including intellectual property (IP) and ethical concerns. This position paper explores a novel angle to mitigate these risks, drawing parallels between LLMs and established web systems. We identify {``}citation{''}{---}the acknowledgement or reference to a source or evidence{---}as a crucial yet missing component in LLMs. Incorporating citation could enhance content transparency and verifiability, thereby confronting the IP and ethical issues in the deployment of LLMs. We further propose that a comprehensive citation mechanism for LLMs should account for both non-parametric and parametric content. Despite the complexity of implementing such a citation mechanism, along with the potential pitfalls, we advocate for its development. Building on this foundation, we outline several research problems in this area, aiming to guide future explorations towards building more responsible and accountable LLMs.
[ "Huang, Jie", "Chang, Kevin" ]
Citation: A Key to Building Responsible and Accountable Large Language Models
findings-naacl.31
Poster
2307.02185
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.32.bib
https://aclanthology.org/2024.findings-naacl.32/
@inproceedings{zhang-etal-2024-graph, title = "Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational {A}uto{E}ncoders", author = "Zhang, Yingji and Valentino, Marco and Carvalho, Danilo and Pratt-Hartmann, Ian and Freitas, Andre", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.32", doi = "10.18653/v1/2024.findings-naacl.32", pages = "474--489", abstract = "The injection of syntactic information in Variational AutoEncoders (VAEs) can result in an overall improvement of performances and generalisation. An effective strategy to achieve such a goal is to separate the encoding of distributional semantic features and syntactic structures into heterogeneous latent spaces via multi-task learning or dual encoder architectures. However, existing works employing such techniques are limited to LSTM-based VAEs. This work investigates latent space separation methods for structural syntactic injection in Transformer-based VAE architectures (i.e., Optimus) through the integration of graph-based models. Our empirical evaluation reveals that the proposed end-to-end VAE architecture can improve the overall organisation of the latent space, alleviating the information loss occurring in standard VAE setups, and resulting in enhanced performances on language modelling and downstream generation tasks.", }
The injection of syntactic information in Variational AutoEncoders (VAEs) can result in an overall improvement of performances and generalisation. An effective strategy to achieve such a goal is to separate the encoding of distributional semantic features and syntactic structures into heterogeneous latent spaces via multi-task learning or dual encoder architectures. However, existing works employing such techniques are limited to LSTM-based VAEs. This work investigates latent space separation methods for structural syntactic injection in Transformer-based VAE architectures (i.e., Optimus) through the integration of graph-based models. Our empirical evaluation reveals that the proposed end-to-end VAE architecture can improve the overall organisation of the latent space, alleviating the information loss occurring in standard VAE setups, and resulting in enhanced performances on language modelling and downstream generation tasks.
[ "Zhang, Yingji", "Valentino, Marco", "Carvalho, Danilo", "Pratt-Hartmann, Ian", "Freitas, Andre" ]
Graph-Induced Syntactic-Semantic Spaces in Transformer-Based Variational AutoEncoders
findings-naacl.32
Poster
2311.08579
[ "https://github.com/snowyj/sem_syn_separation" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.33.bib
https://aclanthology.org/2024.findings-naacl.33/
@inproceedings{tan-etal-2024-narrowing, title = "Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles", author = "Tan, Weiting and Xu, Haoran and Shen, Lingfeng and Li, Shuyue Stella and Murray, Kenton and Koehn, Philipp and Van Durme, Benjamin and Chen, Yunmo", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.33", doi = "10.18653/v1/2024.findings-naacl.33", pages = "490--502", abstract = "Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning. However, even though zero-shot translations are relatively good, there remains a discernible gap between their performance and that of the few-shot setting. In this paper, we investigate the factors contributing to this gap and find that this gap can largely be closed (by about 70{\%}) by matching the writing styles of the target corpus. Additionally, we explore potential approaches to enhance zero-shot baselines without the need for parallel demonstration examples, providing valuable insights into how these methods contribute to improving translation metrics.", }
Large language models trained primarily in a monolingual setting have demonstrated their ability to generalize to machine translation using zero- and few-shot examples with in-context learning. However, even though zero-shot translations are relatively good, there remains a discernible gap between their performance and that of the few-shot setting. In this paper, we investigate the factors contributing to this gap and find that this gap can largely be closed (by about 70{\%}) by matching the writing styles of the target corpus. Additionally, we explore potential approaches to enhance zero-shot baselines without the need for parallel demonstration examples, providing valuable insights into how these methods contribute to improving translation metrics.
[ "Tan, Weiting", "Xu, Haoran", "Shen, Lingfeng", "Li, Shuyue Stella", "Murray, Kenton", "Koehn, Philipp", "Van Durme, Benjamin", "Chen, Yunmo" ]
Narrowing the Gap between Zero- and Few-shot Machine Translation by Matching Styles
findings-naacl.33
Poster
2311.02310
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.34.bib
https://aclanthology.org/2024.findings-naacl.34/
@inproceedings{das-etal-2024-modality, title = "Which Modality should {I} use - Text, Motif, or Image? : Understanding Graphs with Large Language Models", author = "Das, Debarati and Gupta, Ishaan and Srivastava, Jaideep and Kang, Dongyeop", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.34", doi = "10.18653/v1/2024.findings-naacl.34", pages = "503--519", abstract = "Our research integrates graph data with Large Language Models (LLMs), which, despite their advancements in various fields using large text corpora, face limitations in encoding entire graphs due to context size constraints. This paper introduces a new approach to encoding a graph with diverse modalities, such as text, image, and motif, coupled with prompts to approximate a graph{'}s global connectivity, thereby enhancing LLMs{'} efficiency in processing complex graph structures. The study also presents GraphTMI, a novel benchmark for evaluating LLMs in graph structure analysis, focusing on homophily, motif presence, and graph difficulty. Key findings indicate that the image modality, especially with vision-language models like GPT-4V, is superior to text in balancing token limits and preserving essential information and comes close to prior graph neural net (GNN) encoders. Furthermore, the research assesses how various factors affect the performance of each encoding modality and outlines the existing challenges and potential future developments for LLMs in graph understanding and reasoning tasks. Our code and data are publicly available on our project page - https://minnesotanlp.github.io/GraphLLM/", }
Our research integrates graph data with Large Language Models (LLMs), which, despite their advancements in various fields using large text corpora, face limitations in encoding entire graphs due to context size constraints. This paper introduces a new approach to encoding a graph with diverse modalities, such as text, image, and motif, coupled with prompts to approximate a graph{'}s global connectivity, thereby enhancing LLMs{'} efficiency in processing complex graph structures. The study also presents GraphTMI, a novel benchmark for evaluating LLMs in graph structure analysis, focusing on homophily, motif presence, and graph difficulty. Key findings indicate that the image modality, especially with vision-language models like GPT-4V, is superior to text in balancing token limits and preserving essential information and comes close to prior graph neural net (GNN) encoders. Furthermore, the research assesses how various factors affect the performance of each encoding modality and outlines the existing challenges and potential future developments for LLMs in graph understanding and reasoning tasks. Our code and data are publicly available on our project page - https://minnesotanlp.github.io/GraphLLM/
[ "Das, Debarati", "Gupta, Ishaan", "Srivastava, Jaideep", "Kang, Dongyeop" ]
Which Modality should I use - Text, Motif, or Image? : Understanding Graphs with Large Language Models
findings-naacl.34
Poster
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
https://aclanthology.org/2024.findings-naacl.35.bib
https://aclanthology.org/2024.findings-naacl.35/
@inproceedings{hoang-etal-2024-fly, title = "On-the-Fly Fusion of Large Language Models and Machine Translation", author = "Hoang, Hieu and Khayrallah, Huda and Junczys-Dowmunt, Marcin", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.35", doi = "10.18653/v1/2024.findings-naacl.35", pages = "520--532", abstract = "We propose on-the-fly ensembling of a neural machine translation (NMT) model with a large language model (LLM), prompted on the same task and input. Through experiments on 4 language directions with varying data amounts, we find that a slightly weaker-at-translation LLM can improve translations of an NMT model, and such an ensemble can produce better translations than ensembling two stronger NMT models. We demonstrate that our ensemble method can be combined with various techniques from LLM prompting, such as in-context learning and translation context.", }
We propose on-the-fly ensembling of a neural machine translation (NMT) model with a large language model (LLM), prompted on the same task and input. Through experiments on 4 language directions with varying data amounts, we find that a slightly weaker-at-translation LLM can improve translations of an NMT model, and such an ensemble can produce better translations than ensembling two stronger NMT models. We demonstrate that our ensemble method can be combined with various techniques from LLM prompting, such as in-context learning and translation context.
[ "Hoang, Hieu", "Khayrallah, Huda", "Junczys-Dowmunt, Marcin" ]
On-the-Fly Fusion of Large Language Models and Machine Translation
findings-naacl.35
Poster
2311.08306
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
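As a rough illustration of the decode-time ensembling described in the abstract above, the sketch below averages the next-token distributions of two models over a shared vocabulary at every step. The two stand-in probability functions, the toy vocabulary, and the 0.5 interpolation weight are assumptions for the example, not the paper's actual systems or settings.

```python
# Minimal sketch of on-the-fly fusion: fuse two next-token
# distributions at each decoding step, then pick greedily.
import numpy as np

VOCAB = ["<eos>", "the", "cat", "sat"]

def nmt_next_token_probs(prefix):
    # Stand-in for the NMT model's next-token distribution.
    if len(prefix) >= 3:
        return np.array([0.7, 0.1, 0.1, 0.1])  # prefers stopping
    return np.array([0.1, 0.5, 0.3, 0.1])

def llm_next_token_probs(prefix):
    # Stand-in for the prompted LLM's next-token distribution.
    if len(prefix) >= 3:
        return np.array([0.6, 0.2, 0.1, 0.1])
    return np.array([0.2, 0.2, 0.4, 0.2])

def ensemble_decode(max_len=10, weight=0.5):
    prefix = []
    for _ in range(max_len):
        # Fuse the two distributions on the fly at this step.
        p = (weight * nmt_next_token_probs(prefix)
             + (1 - weight) * llm_next_token_probs(prefix))
        token = VOCAB[int(np.argmax(p))]
        if token == "<eos>":
            break
        prefix.append(token)
    return " ".join(prefix)

print(ensemble_decode())  # toy output: "the the the"
```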
https://aclanthology.org/2024.findings-naacl.36.bib
https://aclanthology.org/2024.findings-naacl.36/
@inproceedings{li-etal-2024-read, title = "{READ}: Improving Relation Extraction from an {AD}versarial Perspective", author = "Li, Dawei and Hogan, William and Shang, Jingbo", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.36", doi = "10.18653/v1/2024.findings-naacl.36", pages = "533--548", abstract = "Recent works in relation extraction (RE) have achieved promising benchmark accuracy; however, our adversarial attack experiments show that these works excessively rely on entities, making their generalization capability questionable. To address this issue, we propose an adversarial training method specifically designed for RE. Our approach introduces both sequence- and token-level perturbations to the sample and uses a separate perturbation vocabulary to improve the search for entity and context perturbations. Furthermore, we introduce a probabilistic strategy for leaving clean tokens in the context during adversarial training. This strategy enables a larger attack budget for entities and coaxes the model to leverage relational patterns embedded in the context. Extensive experiments show that compared to various adversarial training methods, our method significantly improves both the accuracy and robustness of the model. Additionally, experiments on different data availability settings highlight the effectiveness of our method in low-resource scenarios. We also perform in-depth analyses of our proposed method and provide further hints. We will release our code at https://github.com/David-Li0406/READ.", }
Recent works in relation extraction (RE) have achieved promising benchmark accuracy; however, our adversarial attack experiments show that these works excessively rely on entities, making their generalization capability questionable. To address this issue, we propose an adversarial training method specifically designed for RE. Our approach introduces both sequence- and token-level perturbations to the sample and uses a separate perturbation vocabulary to improve the search for entity and context perturbations. Furthermore, we introduce a probabilistic strategy for leaving clean tokens in the context during adversarial training. This strategy enables a larger attack budget for entities and coaxes the model to leverage relational patterns embedded in the context. Extensive experiments show that compared to various adversarial training methods, our method significantly improves both the accuracy and robustness of the model. Additionally, experiments on different data availability settings highlight the effectiveness of our method in low-resource scenarios. We also perform in-depth analyses of our proposed method and provide further hints. We will release our code at https://github.com/David-Li0406/READ.
[ "Li, Dawei", "Hogan, William", "Shang, Jingbo" ]
READ: Improving Relation Extraction from an ADversarial Perspective
findings-naacl.36
Poster
2404.02931
[ "https://github.com/david-li0406/read" ]
-1
-1
-1
-1
0
[]
[]
[]
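The probabilistic clean-token strategy in the READ abstract above can be sketched in a few lines: entity tokens always receive a substitution from a separate perturbation vocabulary, while each context token is left clean with some probability. The vocabulary, the probability value, and the example sentence below are illustrative assumptions, not the paper's actual settings.

```python
# Illustrative sketch of entity-focused perturbation with a
# probabilistic clean-token strategy for context tokens.
import random

PERTURB_VOCAB = ["[MASK]", "thing", "item"]  # stand-in vocabulary

def perturb(tokens, entity_positions, p_clean_context=0.7, seed=0):
    rng = random.Random(seed)
    out = []
    for i, tok in enumerate(tokens):
        if i in entity_positions:
            out.append(rng.choice(PERTURB_VOCAB))  # always attack entities
        elif rng.random() < p_clean_context:
            out.append(tok)                        # leave context token clean
        else:
            out.append(rng.choice(PERTURB_VOCAB))  # perturb some context
    return out

tokens = "Barack Obama was born in Hawaii".split()
print(perturb(tokens, entity_positions={0, 1, 5}))
```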
https://aclanthology.org/2024.findings-naacl.37.bib
https://aclanthology.org/2024.findings-naacl.37/
@inproceedings{ebrahimi-etal-2024-requal, title = "{REQUAL}-{LM}: Reliability and Equity through Aggregation in Large Language Models", author = "Ebrahimi, Sana and Shahbazi, Nima and Asudeh, Abolfazl", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.37", doi = "10.18653/v1/2024.findings-naacl.37", pages = "549--560", abstract = "The extensive scope of large language models (LLMs) across various domains underscores the critical importance of responsibility in their application, beyond natural language processing. In particular, the randomized nature of LLMs, coupled with inherent biases and historical stereotypes in data, raises critical concerns regarding reliability and equity. Addressing these challenges is necessary before using LLMs for applications with societal impact. Towards addressing this gap, we introduce REQUAL-LM, a novel method for finding reliable and equitable LLM outputs through aggregation. Specifically, we develop a Monte Carlo method based on repeated sampling to find a reliable output close to the mean of the underlying distribution of possible outputs. We formally define terms such as reliability and bias, and design an equity-aware aggregation to minimize harmful bias while finding a highly reliable output. REQUAL-LM does not require specialized hardware, does not impose a significant computing load, and uses LLMs as a black box. This design choice enables seamless scalability alongside the rapid advancement of LLM technologies. Our system does not require retraining the LLMs, which makes it deployment-ready and easy to adapt. Our comprehensive experiments using various tasks and datasets demonstrate that REQUAL-LM effectively mitigates bias and selects a more equitable response, specifically outputs that properly represent minority groups.", }
The extensive scope of large language models (LLMs) across various domains underscores the critical importance of responsibility in their application, beyond natural language processing. In particular, the randomized nature of LLMs, coupled with inherent biases and historical stereotypes in data, raises critical concerns regarding reliability and equity. Addressing these challenges is necessary before using LLMs for applications with societal impact. Towards addressing this gap, we introduce REQUAL-LM, a novel method for finding reliable and equitable LLM outputs through aggregation. Specifically, we develop a Monte Carlo method based on repeated sampling to find a reliable output close to the mean of the underlying distribution of possible outputs. We formally define terms such as reliability and bias, and design an equity-aware aggregation to minimize harmful bias while finding a highly reliable output. REQUAL-LM does not require specialized hardware, does not impose a significant computing load, and uses LLMs as a black box. This design choice enables seamless scalability alongside the rapid advancement of LLM technologies. Our system does not require retraining the LLMs, which makes it deployment-ready and easy to adapt. Our comprehensive experiments using various tasks and datasets demonstrate that REQUAL-LM effectively mitigates bias and selects a more equitable response, specifically outputs that properly represent minority groups.
[ "Ebrahimi, Sana", "Shahbazi, Nima", "Asudeh, Abolfazl" ]
REQUAL-LM: Reliability and Equity through Aggregation in Large Language Models
findings-naacl.37
Poster
2404.11782
[ "" ]
-1
-1
-1
-1
0
[]
[]
[]
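To ground the repeated-sampling idea in the REQUAL-LM abstract above, the sketch below takes several candidate outputs (fixed strings standing in for black-box LLM samples), embeds them, and returns the candidate nearest the centroid as an estimate of the mean of the output distribution. The toy character-frequency embedding is an assumption, and the paper's equity-aware weighting is not modeled here.

```python
# Minimal sketch of repeated-sampling aggregation: pick the sample
# closest to the centroid of all sample embeddings.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Toy embedding: normalized character-frequency vector. A real
    # system would use a sentence encoder instead.
    v = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            v[ord(ch) - ord("a")] += 1
    return v / max(np.linalg.norm(v), 1e-9)

def aggregate(candidates):
    vecs = np.stack([embed(c) for c in candidates])
    centroid = vecs.mean(axis=0)
    dists = np.linalg.norm(vecs - centroid, axis=1)
    return candidates[int(np.argmin(dists))]  # most "central" sample

samples = ["The plan reduces cost.",
           "Costs go down under the plan.",
           "Unrelated rant."]
print(aggregate(samples))
```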
https://aclanthology.org/2024.findings-naacl.38.bib
https://aclanthology.org/2024.findings-naacl.38/
@inproceedings{chen-etal-2024-addressing, title = "Addressing Both Statistical and Causal Gender Fairness in {NLP} Models", author = "Chen, Hannah and Ji, Yangfeng and Evans, David", editor = "Duh, Kevin and Gomez, Helena and Bethard, Steven", booktitle = "Findings of the Association for Computational Linguistics: NAACL 2024", month = jun, year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.findings-naacl.38", doi = "10.18653/v1/2024.findings-naacl.38", pages = "561--582", abstract = "Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics. Counterfactual data augmentation (CDA) is effective for reducing bias in NLP models, yet models trained with CDA are often evaluated only on metrics that are closely tied to the causal fairness notion; similarly, sampling-based methods designed to promote statistical fairness are rarely evaluated for causal fairness. In this work, we evaluate both statistical and causal debiasing methods for gender bias in NLP models, and find that while such methods are effective at reducing bias as measured by the targeted metric, they do not necessarily improve results on other bias metrics. We demonstrate that combinations of statistical and causal debiasing techniques are able to reduce bias measured through both types of metrics.", }
Statistical fairness stipulates equivalent outcomes for every protected group, whereas causal fairness prescribes that a model makes the same prediction for an individual regardless of their protected characteristics. Counterfactual data augmentation (CDA) is effective for reducing bias in NLP models, yet models trained with CDA are often evaluated only on metrics that are closely tied to the causal fairness notion; similarly, sampling-based methods designed to promote statistical fairness are rarely evaluated for causal fairness. In this work, we evaluate both statistical and causal debiasing methods for gender bias in NLP models, and find that while such methods are effective at reducing bias as measured by the targeted metric, they do not necessarily improve results on other bias metrics. We demonstrate that combinations of statistical and causal debiasing techniques are able to reduce bias measured through both types of metrics.
[ "Chen, Hannah", "Ji, Yangfeng", "Evans, David" ]
Addressing Both Statistical and Causal Gender Fairness in NLP Models
findings-naacl.38
Poster
2404.00463
[ "https://github.com/hannahxchen/composed-debiasing" ]
-1
-1
-1
-1
0
[]
[]
[]
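For readers unfamiliar with counterfactual data augmentation (CDA), referenced in the abstract above, a toy sketch follows: training sentences are duplicated with gendered terms swapped. The word list is a tiny illustrative subset, and genuinely ambiguous cases (possessive "her" versus objective "her") are glossed over here; real CDA pipelines handle them more carefully.

```python
# Toy sketch of counterfactual data augmentation (CDA): generate a
# gender-swapped copy of each training sentence.
import re

# Tiny illustrative swap list; real word lists are much larger.
SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
         "his": "her", "man": "woman", "woman": "man"}

def counterfactual(sentence: str) -> str:
    def swap(match):
        word = match.group(0)
        repl = SWAPS[word.lower()]
        # Preserve capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl
    pattern = r"\b(" + "|".join(SWAPS) + r")\b"
    return re.sub(pattern, swap, sentence, flags=re.IGNORECASE)

print(counterfactual("He thanked her for his award."))
# -> "She thanked him for her award."
```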