bibtex_url (stringlengths 41 to 50) | proceedings (stringlengths 38 to 47) | bibtext (stringlengths 709 to 3.56k) | abstract (stringlengths 17 to 2.11k) | authors (sequencelengths 1 to 72) | title (stringlengths 12 to 207) | id (stringlengths 7 to 16) | type (stringclasses, 2 values) | arxiv_id (stringlengths 0 to 10) | GitHub (sequencelengths 1 to 1) | paper_page (stringclasses, 276 values) | n_linked_authors (int64, -1 to 13) | upvotes (int64, -1 to 14) | num_comments (int64, -1 to 11) | n_authors (int64, -1 to 44) | paper_page_exists_pre_conf (int64, 0 to 1) | Models (sequencelengths 0 to 100) | Datasets (sequencelengths 0 to 14) | Spaces (sequencelengths 0 to 100) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
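Each row below is one record in the schema above; the `-1` values in the paper-page count columns (n_linked_authors, upvotes, num_comments, n_authors) appear to be sentinels for papers without a Hugging Face paper page. As a minimal usage sketch (not part of the original dump), the snippet below shows how rows with this schema could be loaded and filtered with the Hugging Face `datasets` library; the repository id is a hypothetical placeholder.

```python
# Minimal sketch: loading and filtering rows with the schema above using the
# Hugging Face `datasets` library. The repository id is a hypothetical
# placeholder; substitute the actual dataset path.
from datasets import load_dataset

ds = load_dataset("someuser/acl-2023-papers", split="train")  # hypothetical id

# -1 in the paper-page count columns appears to mark papers without a
# Hugging Face paper page; keep papers whose page existed pre-conference.
linked = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)

for row in linked:
    print(row["id"], "|", row["title"], "|", row["paper_page"])
```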
https://aclanthology.org/2023.acl-long.400.bib | https://aclanthology.org/2023.acl-long.400/ | @inproceedings{yang-etal-2023-learning,
title = "Learning Better Masking for Better Language Model Pre-training",
author = "Yang, Dongjie and
Zhang, Zhuosheng and
Zhao, Hai",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.400",
doi = "10.18653/v1/2023.acl-long.400",
pages = "7255--7267",
abstract = "Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy where a fixed masking ratio is applied and different contents are masked by an equal probability throughout the entire training. However, the model may receive complicated impact from pre-training status, which changes accordingly as training time goes on. In this paper, we show that such time-invariant MLM settings on masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content in different training stages, which improves the pre-training efficiency and effectiveness verified on the downstream tasks. Our work is a pioneer study on time-variant masking strategy on ratio and content and gives a better understanding of how masking ratio and masked content influence the MLM pre-training.",
}
| Masked Language Modeling (MLM) has been widely used as the denoising objective in pre-training language models (PrLMs). Existing PrLMs commonly adopt a Random-Token Masking strategy, where a fixed masking ratio is applied and different contents are masked with equal probability throughout the entire training. However, the model may be affected by the pre-training state in complicated ways, and this state changes as training goes on. In this paper, we show that such time-invariant MLM settings for masking ratio and masked content are unlikely to deliver an optimal outcome, which motivates us to explore the influence of time-variant MLM settings. We propose two scheduled masking approaches that adaptively tune the masking ratio and masked content in different training stages, improving pre-training efficiency and effectiveness as verified on downstream tasks. Our work is a pioneering study of time-variant masking strategies for ratio and content, and it gives a better understanding of how the masking ratio and masked content influence MLM pre-training. | [
"Yang, Dongjie",
"Zhang, Zhuosheng",
"Zhao, Hai"
] | Learning Better Masking for Better Language Model Pre-training | acl-long.400 | Poster | 2208.10806 | [
"https://github.com/mutonix/better_masking"
] | https://huggingface.co/papers/2208.10806 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.401.bib | https://aclanthology.org/2023.acl-long.401/ | @inproceedings{tang-etal-2023-vistext,
title = "{V}is{T}ext: A Benchmark for Semantically Rich Chart Captioning",
author = "Tang, Benny and
Boggust, Angie and
Satyanarayan, Arvind",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.401",
doi = "10.18653/v1/2023.acl-long.401",
pages = "7268--7298",
abstract = "Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle to articulate the perceptual or cognitive features that are the hallmark of charts (e.g., complex trends and patterns). In response, we introduce VisText: a dataset of 12,441 pairs of charts and captions that describe the charts{'} construction, report key statistics, and identify perceptual and cognitive phenomena. In VisText, a chart is available as three representations: a rasterized image, a backing data table, and a \textit{scene graph}{---}a hierarchical representation of a chart{'}s visual elements akin to a web page{'}s Document Object Model (DOM). To evaluate the impact of VisText, we fine-tune state-of-the-art language models on our chart captioning task and apply prefix-tuning to produce captions that vary the semantic content they convey. Our models generate coherent, semantically rich captions and perform on par with state-of-the-art chart captioning models across machine translation and text generation metrics. Through qualitative analysis, we identify six broad categories of errors that our models make that can inform future work.",
}
| Captions that describe or explain charts help improve recall and comprehension of the depicted data and provide a more accessible medium for people with visual disabilities. However, current approaches for automatically generating such captions struggle to articulate the perceptual or cognitive features that are the hallmark of charts (e.g., complex trends and patterns). In response, we introduce VisText: a dataset of 12,441 pairs of charts and captions that describe the charts{'} construction, report key statistics, and identify perceptual and cognitive phenomena. In VisText, a chart is available as three representations: a rasterized image, a backing data table, and a \textit{scene graph}{---}a hierarchical representation of a chart{'}s visual elements akin to a web page{'}s Document Object Model (DOM). To evaluate the impact of VisText, we fine-tune state-of-the-art language models on our chart captioning task and apply prefix-tuning to produce captions that vary the semantic content they convey. Our models generate coherent, semantically rich captions and perform on par with state-of-the-art chart captioning models across machine translation and text generation metrics. Through qualitative analysis, we identify six broad categories of errors that our models make that can inform future work. | [
"Tang, Benny",
"Boggust, Angie",
"Satyanarayan, Arvind"
] | VisText: A Benchmark for Semantically Rich Chart Captioning | acl-long.401 | Oral | 2307.05356 | [
"https://github.com/mitvis/vistext"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.402.bib | https://aclanthology.org/2023.acl-long.402/ | @inproceedings{ingolfsdottir-etal-2023-byte,
title = "Byte-Level Grammatical Error Correction Using Synthetic and Curated Corpora",
author = "Ing{\'o}lfsd{\'o}ttir, Svanhv{\'\i}t Lilja and
Ragnarsson, Petur and
J{\'o}nsson, Haukur and
Simonarson, Haukur and
Thorsteinsson, Vilhjalmur and
Sn{\ae}bjarnarson, V{\'e}steinn",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.402",
doi = "10.18653/v1/2023.acl-long.402",
pages = "7299--7316",
abstract = "Grammatical error correction (GEC) is the task of correcting typos, spelling, punctuation and grammatical issues in text. Approaching the problem as a sequence-to-sequence task, we compare the use of a common subword unit vocabulary and byte-level encoding. Initial synthetic training data is created using an error-generating pipeline, and used for finetuning two subword-level models and one byte-level model. Models are then finetuned further on hand-corrected error corpora, including texts written by children, university students, dyslexic and second-language writers, and evaluated over different error types and error origins. We show that a byte-level model enables higher correction quality than a subword approach, not only for simple spelling errors, but also for more complex semantic, stylistic and grammatical issues. In particular, initial training on synthetic corpora followed by finetuning on a relatively small parallel corpus of real-world errors helps the byte-level model correct a wide range of commonly occurring errors. Our experiments are run for the Icelandic language but should hold for other similar languages, and in particular to morphologically rich ones.",
}
| Grammatical error correction (GEC) is the task of correcting typos, spelling, punctuation and grammatical issues in text. Approaching the problem as a sequence-to-sequence task, we compare the use of a common subword unit vocabulary and byte-level encoding. Initial synthetic training data is created using an error-generating pipeline, and used for finetuning two subword-level models and one byte-level model. Models are then finetuned further on hand-corrected error corpora, including texts written by children, university students, dyslexic and second-language writers, and evaluated over different error types and error origins. We show that a byte-level model enables higher correction quality than a subword approach, not only for simple spelling errors, but also for more complex semantic, stylistic and grammatical issues. In particular, initial training on synthetic corpora followed by finetuning on a relatively small parallel corpus of real-world errors helps the byte-level model correct a wide range of commonly occurring errors. Our experiments are run for the Icelandic language but should hold for other similar languages, and in particular for morphologically rich ones. | [
"Ing{\\'o}lfsd{\\'o}ttir, Svanhv{\\'\\i}t Lilja",
"Ragnarsson, Petur",
"J{\\'o}nsson, Haukur",
"Simonarson, Haukur",
"Thorsteinsson, Vilhjalmur",
"Sn{\\ae}bjarnarson, V{\\'e}steinn"
] | Byte-Level Grammatical Error Correction Using Synthetic and Curated Corpora | acl-long.402 | Oral | 2305.17906 | [
"https://github.com/mideind/byte-gec"
] | https://huggingface.co/papers/2305.17906 | 5 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.403.bib | https://aclanthology.org/2023.acl-long.403/ | @inproceedings{wu-etal-2023-multi,
title = "Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text",
author = {Wu, Qianhui and
Jiang, Huiqiang and
Yin, Haonan and
Karlsson, B{\"o}rje and
Lin, Chin-Yew},
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.403",
doi = "10.18653/v1/2023.acl-long.403",
pages = "7317--7332",
abstract = "Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection with only the texts of in-distribution (ID) examples. These approaches either train a language model from scratch or fine-tune a pre-trained language model using ID examples, and then take the perplexity output by the language model as OoD scores. In this paper, we analyze the complementary characteristic of both methods and propose a multi-level knowledge distillation approach that integrates their strengths while mitigating their limitations. Specifically, we use a fine-tuned model as the teacher to teach a randomly initialized student model on the ID examples. Besides the prediction layer distillation, we present a similarity-based intermediate layer distillation method to thoroughly explore the representation space of the teacher model. In this way, the learned student can better represent the ID data manifold while gaining a stronger ability to map OoD examples outside the ID data manifold with the regularization inherited from pre-training. Besides, the student model sees only ID examples during parameter learning, further promoting more distinguishable features for OoD detection. We conduct extensive experiments over multiple benchmark datasets, i.e., CLINC150, SST, ROSTD, 20 NewsGroups, and AG News; showing that the proposed method yields new state-of-the-art performance. We also explore its application as an AIGC detector to distinguish answers generated by ChatGPT and human experts. It is observed that our model exceeds human evaluators in the pair-expert task on the Human ChatGPT Comparison Corpus.",
}
| Self-supervised representation learning has proved to be a valuable component for out-of-distribution (OoD) detection with only the texts of in-distribution (ID) examples. These approaches either train a language model from scratch or fine-tune a pre-trained language model using ID examples, and then take the perplexity output by the language model as OoD scores. In this paper, we analyze the complementary characteristic of both methods and propose a multi-level knowledge distillation approach that integrates their strengths while mitigating their limitations. Specifically, we use a fine-tuned model as the teacher to teach a randomly initialized student model on the ID examples. Besides the prediction layer distillation, we present a similarity-based intermediate layer distillation method to thoroughly explore the representation space of the teacher model. In this way, the learned student can better represent the ID data manifold while gaining a stronger ability to map OoD examples outside the ID data manifold with the regularization inherited from pre-training. Besides, the student model sees only ID examples during parameter learning, further promoting more distinguishable features for OoD detection. We conduct extensive experiments over multiple benchmark datasets, i.e., CLINC150, SST, ROSTD, 20 NewsGroups, and AG News; showing that the proposed method yields new state-of-the-art performance. We also explore its application as an AIGC detector to distinguish answers generated by ChatGPT and human experts. It is observed that our model exceeds human evaluators in the pair-expert task on the Human ChatGPT Comparison Corpus. | [
"Wu, Qianhui",
"Jiang, Huiqiang",
"Yin, Haonan",
"Karlsson, B{\\\"o}rje",
"Lin, Chin-Yew"
] | Multi-Level Knowledge Distillation for Out-of-Distribution Detection in Text | acl-long.403 | Poster | 2211.11300 | [
"https://github.com/microsoft/KC"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.404.bib | https://aclanthology.org/2023.acl-long.404/ | @inproceedings{singh-etal-2023-peeking,
title = "Peeking inside the black box: A Commonsense-aware Generative Framework for Explainable Complaint Detection",
author = "Singh, Apoorva and
Jain, Raghav and
Jha, Prince and
Saha, Sriparna",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.404",
doi = "10.18653/v1/2023.acl-long.404",
pages = "7333--7347",
abstract = "Complaining is an illocutionary act in which the speaker communicates his/her dissatisfaction with a set of circumstances and holds the hearer (the complainee) answerable, directly or indirectly. Considering breakthroughs in machine learning approaches, the complaint detection task has piqued the interest of the natural language processing (NLP) community. Most of the earlier studies failed to justify their findings, necessitating the adoption of interpretable models that can explain the model{'}s output in real time. We introduce an explainable complaint dataset, X-CI, the first benchmark dataset for explainable complaint detection. Each instance in the X-CI dataset is annotated with five labels: complaint label, emotion label, polarity label, complaint severity level, and rationale (explainability), i.e., the causal span explaining the reason for the complaint/non-complaint label. We address the task of explainable complaint detection and propose a commonsense-aware unified generative framework by reframing the multitask problem as a text-to-text generation task. Our framework can predict the complaint cause, severity level, emotion, and polarity of the text in addition to detecting whether it is a complaint or not. We further establish the advantages of our proposed model on various evaluation metrics over the state-of-the-art models and other baselines when applied to the X-CI dataset in both full and few-shot settings.",
}
| Complaining is an illocutionary act in which the speaker communicates his/her dissatisfaction with a set of circumstances and holds the hearer (the complainee) answerable, directly or indirectly. Considering breakthroughs in machine learning approaches, the complaint detection task has piqued the interest of the natural language processing (NLP) community. Most of the earlier studies failed to justify their findings, necessitating the adoption of interpretable models that can explain the model{'}s output in real time. We introduce an explainable complaint dataset, X-CI, the first benchmark dataset for explainable complaint detection. Each instance in the X-CI dataset is annotated with five labels: complaint label, emotion label, polarity label, complaint severity level, and rationale (explainability), i.e., the causal span explaining the reason for the complaint/non-complaint label. We address the task of explainable complaint detection and propose a commonsense-aware unified generative framework by reframing the multitask problem as a text-to-text generation task. Our framework can predict the complaint cause, severity level, emotion, and polarity of the text in addition to detecting whether it is a complaint or not. We further establish the advantages of our proposed model on various evaluation metrics over the state-of-the-art models and other baselines when applied to the X-CI dataset in both full and few-shot settings. | [
"Singh, Apoorva",
"Jain, Raghav",
"Jha, Prince",
"Saha, Sriparna"
] | Peeking inside the black box: A Commonsense-aware Generative Framework for Explainable Complaint Detection | acl-long.404 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.405.bib | https://aclanthology.org/2023.acl-long.405/ | @inproceedings{feng-etal-2023-mmdialog,
title = "{MMD}ialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation",
author = "Feng, Jiazhan and
Sun, Qingfeng and
Xu, Can and
Zhao, Pu and
Yang, Yaming and
Tao, Chongyang and
Zhao, Dongyan and
Lin, Qingwei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.405",
doi = "10.18653/v1/2023.acl-long.405",
pages = "7348--7363",
abstract = "Responding with multi-modal content has been recognized as an essential capability for an intelligent conversational agent. In this paper, we introduce the MMDialog dataset to facilitate multi-modal conversation better. MMDialog is composed of a curated set of 1.08 million real-world dialogues with 1.53 million unique images across 4,184 topics. MMDialog has two main and unique advantages. First, it is the largest multi-modal conversation dataset by the number of dialogues by 88x. Second, it contains massive topics to generalize the open domain. To build an engaging dialogue system with this dataset, we propose and normalize two response prediction tasks based on retrieval and generative scenarios. In addition, we build two baselines for the above tasks with state-of-the-art techniques and report their experimental performance. We also propose a novel evaluation metric MM-Relevance to measure the multi-modal responses. Our dataset is available in \url{https://github.com/victorsungo/MMDialog}.",
}
| Responding with multi-modal content has been recognized as an essential capability for an intelligent conversational agent. In this paper, we introduce the MMDialog dataset to better facilitate multi-modal conversation. MMDialog is composed of a curated set of 1.08 million real-world dialogues with 1.53 million unique images across 4,184 topics. MMDialog has two main and unique advantages. First, it is the largest multi-modal conversation dataset by number of dialogues, by a factor of 88. Second, it covers a massive range of topics, generalizing to the open domain. To build an engaging dialogue system with this dataset, we propose and normalize two response prediction tasks based on retrieval and generative scenarios. In addition, we build two baselines for the above tasks with state-of-the-art techniques and report their experimental performance. We also propose a novel evaluation metric, MM-Relevance, to measure multi-modal responses. Our dataset is available at \url{https://github.com/victorsungo/MMDialog}. | [
"Feng, Jiazhan",
"Sun, Qingfeng",
"Xu, Can",
"Zhao, Pu",
"Yang, Yaming",
"Tao, Chongyang",
"Zhao, Dongyan",
"Lin, Qingwei"
] | MMDialog: A Large-scale Multi-turn Dialogue Dataset Towards Multi-modal Open-domain Conversation | acl-long.405 | Poster | 2211.05719 | [
"https://github.com/victorsungo/mmdialog"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.406.bib | https://aclanthology.org/2023.acl-long.406/ | @inproceedings{belouadi-eger-2023-bygpt5,
title = "{B}y{GPT}5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models",
author = "Belouadi, Jonas and
Eger, Steffen",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.406",
doi = "10.18653/v1/2023.acl-long.406",
pages = "7364--7381",
abstract = "State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints, or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and demonstrate that it is not prone to memorization. We make our code, models, and datasets publicly available.",
}
| State-of-the-art poetry generation systems are often complex. They either consist of task-specific model pipelines, incorporate prior knowledge in the form of manually created constraints, or both. In contrast, end-to-end models would not suffer from the overhead of having to model prior knowledge and could learn the nuances of poetry from data alone, reducing the degree of human supervision required. In this work, we investigate end-to-end poetry generation conditioned on styles such as rhyme, meter, and alliteration. We identify and address lack of training data and mismatching tokenization algorithms as possible limitations of past attempts. In particular, we successfully pre-train ByGPT5, a new token-free decoder-only language model, and fine-tune it on a large custom corpus of English and German quatrains annotated with our styles. We show that ByGPT5 outperforms other models such as mT5, ByT5, GPT-2 and ChatGPT, while also being more parameter efficient and performing favorably compared to humans. In addition, we analyze its runtime performance and demonstrate that it is not prone to memorization. We make our code, models, and datasets publicly available. | [
"Belouadi, Jonas",
"Eger, Steffen"
] | ByGPT5: End-to-End Style-conditioned Poetry Generation with Token-free Language Models | acl-long.406 | Oral | 2212.10474 | [
"https://github.com/potamides/uniformers"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.407.bib | https://aclanthology.org/2023.acl-long.407/ | @inproceedings{lv-etal-2023-envisioning,
title = "Envisioning Future from the Past: Hierarchical Duality Learning for Multi-Turn Dialogue Generation",
author = "Lv, Ang and
Li, Jinpeng and
Xie, Shufang and
Yan, Rui",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.407",
doi = "10.18653/v1/2023.acl-long.407",
pages = "7382--7394",
abstract = "In this paper, we define a widely neglected property in dialogue text, duality, which is a hierarchical property that is reflected in human behaviours in daily conversations: Based on the logic in a conversation (or a sentence), people can infer follow-up utterances (or tokens) based on the previous text, and vice versa. We propose a hierarchical duality learning for dialogue (HDLD) to simulate this human cognitive ability, for generating high quality responses that connect both previous and follow-up dialogues. HDLD utilizes hierarchical dualities at token hierarchy and utterance hierarchy. HDLD maximizes the mutual information between past and future utterances. Thus, even if future text is invisible during inference, HDLD is capable of estimating future information implicitly based on dialogue history and generates both coherent and informative responses. In contrast to previous approaches that solely utilize future text as auxiliary information to encode during training, HDLD leverages duality to enable interaction between dialogue history and the future. This enhances the utilization of dialogue data, leading to the improvement in both automatic and human evaluation.",
}
| In this paper, we define a widely neglected property in dialogue text, duality, which is a hierarchical property that is reflected in human behaviours in daily conversations: Based on the logic in a conversation (or a sentence), people can infer follow-up utterances (or tokens) based on the previous text, and vice versa. We propose a hierarchical duality learning for dialogue (HDLD) to simulate this human cognitive ability, for generating high quality responses that connect both previous and follow-up dialogues. HDLD utilizes hierarchical dualities at token hierarchy and utterance hierarchy. HDLD maximizes the mutual information between past and future utterances. Thus, even if future text is invisible during inference, HDLD is capable of estimating future information implicitly based on dialogue history and generates both coherent and informative responses. In contrast to previous approaches that solely utilize future text as auxiliary information to encode during training, HDLD leverages duality to enable interaction between dialogue history and the future. This enhances the utilization of dialogue data, leading to the improvement in both automatic and human evaluation. | [
"Lv, Ang",
"Li, Jinpeng",
"Xie, Shufang",
"Yan, Rui"
] | Envisioning Future from the Past: Hierarchical Duality Learning for Multi-Turn Dialogue Generation | acl-long.407 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.408.bib | https://aclanthology.org/2023.acl-long.408/ | @inproceedings{zhang-etal-2023-dualgats,
title = "{D}ual{GAT}s: Dual Graph Attention Networks for Emotion Recognition in Conversations",
author = "Zhang, Duzhen and
Chen, Feilong and
Chen, Xiuyi",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.408",
doi = "10.18653/v1/2023.acl-long.408",
pages = "7395--7408",
abstract = "Capturing complex contextual dependencies plays a vital role in Emotion Recognition in Conversations (ERC). Previous studies have predominantly focused on speaker-aware context modeling, overlooking the discourse structure of the conversation. In this paper, we introduce Dual Graph ATtention networks (DualGATs) to concurrently consider the complementary aspects of discourse structure and speaker-aware context, aiming for more precise ERC. Specifically, we devise a Discourse-aware GAT (DisGAT) module to incorporate discourse structural information by analyzing the discourse dependencies between utterances. Additionally, we develop a Speaker-aware GAT (SpkGAT) module to incorporate speaker-aware contextual information by considering the speaker dependencies between utterances. Furthermore, we design an interaction module that facilitates the integration of the DisGAT and SpkGAT modules, enabling the effective interchange of relevant information between the two modules. We extensively evaluate our method on four datasets, and experimental results demonstrate that our proposed DualGATs surpass state-of-the-art baselines on the majority of the datasets.",
}
| Capturing complex contextual dependencies plays a vital role in Emotion Recognition in Conversations (ERC). Previous studies have predominantly focused on speaker-aware context modeling, overlooking the discourse structure of the conversation. In this paper, we introduce Dual Graph ATtention networks (DualGATs) to concurrently consider the complementary aspects of discourse structure and speaker-aware context, aiming for more precise ERC. Specifically, we devise a Discourse-aware GAT (DisGAT) module to incorporate discourse structural information by analyzing the discourse dependencies between utterances. Additionally, we develop a Speaker-aware GAT (SpkGAT) module to incorporate speaker-aware contextual information by considering the speaker dependencies between utterances. Furthermore, we design an interaction module that facilitates the integration of the DisGAT and SpkGAT modules, enabling the effective interchange of relevant information between the two modules. We extensively evaluate our method on four datasets, and experimental results demonstrate that our proposed DualGATs surpass state-of-the-art baselines on the majority of the datasets. | [
"Zhang, Duzhen",
"Chen, Feilong",
"Chen, Xiuyi"
] | DualGATs: Dual Graph Attention Networks for Emotion Recognition in Conversations | acl-long.408 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.409.bib | https://aclanthology.org/2023.acl-long.409/ | @inproceedings{chen-etal-2023-consistent,
title = "Consistent Prototype Learning for Few-Shot Continual Relation Extraction",
author = "Chen, Xiudi and
Wu, Hui and
Shi, Xiaodong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.409",
doi = "10.18653/v1/2023.acl-long.409",
pages = "7409--7422",
abstract = "Few-shot continual relation extraction aims to continually train a model on incrementally few-shot data to learn new relations while avoiding forgetting old ones. However, current memory-based methods are prone to overfitting memory samples, resulting in insufficient activation of old relations and limited ability to handle the confusion of similar classes. In this paper, we design a new N-way-K-shot Continual Relation Extraction (NK-CRE) task and propose a novel few-shot continual relation extraction method with Consistent Prototype Learning (ConPL) to address the aforementioned issues. Our proposed ConPL is mainly composed of three modules: 1) a prototype-based classification module that provides primary relation predictions under few-shot continual learning; 2) a memory-enhanced module designed to select vital samples and refined prototypical representations as a novel multi-information episodic memory; 3) a consistent learning module to reduce catastrophic forgetting by enforcing distribution consistency. To effectively mitigate catastrophic forgetting, ConPL ensures that the samples and prototypes in the episodic memory remain consistent in terms of classification and distribution. Additionally, ConPL uses prompt learning to extract better representations and adopts a focal loss to alleviate the confusion of similar classes. Experimental results on two commonly-used datasets show that our model consistently outperforms other competitive baselines.",
}
| Few-shot continual relation extraction aims to continually train a model on incrementally few-shot data to learn new relations while avoiding forgetting old ones. However, current memory-based methods are prone to overfitting memory samples, resulting in insufficient activation of old relations and limited ability to handle the confusion of similar classes. In this paper, we design a new N-way-K-shot Continual Relation Extraction (NK-CRE) task and propose a novel few-shot continual relation extraction method with Consistent Prototype Learning (ConPL) to address the aforementioned issues. Our proposed ConPL is mainly composed of three modules: 1) a prototype-based classification module that provides primary relation predictions under few-shot continual learning; 2) a memory-enhanced module designed to select vital samples and refined prototypical representations as a novel multi-information episodic memory; 3) a consistent learning module to reduce catastrophic forgetting by enforcing distribution consistency. To effectively mitigate catastrophic forgetting, ConPL ensures that the samples and prototypes in the episodic memory remain consistent in terms of classification and distribution. Additionally, ConPL uses prompt learning to extract better representations and adopts a focal loss to alleviate the confusion of similar classes. Experimental results on two commonly-used datasets show that our model consistently outperforms other competitive baselines. | [
"Chen, Xiudi",
"Wu, Hui",
"Shi, Xiaodong"
] | Consistent Prototype Learning for Few-Shot Continual Relation Extraction | acl-long.409 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.410.bib | https://aclanthology.org/2023.acl-long.410/ | @inproceedings{foley-etal-2023-matching,
title = "Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models",
author = "Foley, Myles and
Rawat, Ambrish and
Lee, Taesung and
Hou, Yufang and
Picco, Gabriele and
Zizzo, Giulio",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.410",
doi = "10.18653/v1/2023.acl-long.410",
pages = "7423--7442",
abstract = "The wide applicability and adaptability of generative large language models (LLMs) has enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream applications. However, this leads to issues over violation of model licenses, model theft, and copyright infringement. Moreover, recent advances show that generative technology is capable of producing harmful content which exacerbates the problems of accountability within model supply chains. Thus, we need a method to investigate how a model was trained or a piece of text was generated and what their pre-trained base model was. In this paper we take the first step to address this open problem by tracing back the origin of a given fine-tuned LLM to its corresponding pre-trained base model. We consider different knowledge levels and attribution strategies, and find that we can correctly trace back 8 out of the 10 fine tuned models with our best method.",
}
| The wide applicability and adaptability of generative large language models (LLMs) have enabled their rapid adoption. While the pre-trained models can perform many tasks, such models are often fine-tuned to improve their performance on various downstream applications. However, this leads to issues over violation of model licenses, model theft, and copyright infringement. Moreover, recent advances show that generative technology is capable of producing harmful content, which exacerbates the problems of accountability within model supply chains. Thus, we need a method to investigate how a model was trained or a piece of text was generated and what their pre-trained base model was. In this paper we take the first step to address this open problem by tracing back the origin of a given fine-tuned LLM to its corresponding pre-trained base model. We consider different knowledge levels and attribution strategies, and find that we can correctly trace back 8 out of the 10 fine-tuned models with our best method. | [
"Foley, Myles",
"Rawat, Ambrish",
"Lee, Taesung",
"Hou, Yufang",
"Picco, Gabriele",
"Zizzo, Giulio"
] | Matching Pairs: Attributing Fine-Tuned Models to their Pre-Trained Large Language Models | acl-long.410 | Poster | 2306.09308 | [
"https://github.com/ibm/model-attribution-in-machine-learning"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.411.bib | https://aclanthology.org/2023.acl-long.411/ | @inproceedings{zan-etal-2023-large,
title = "Large Language Models Meet {NL}2{C}ode: A Survey",
author = "Zan, Daoguang and
Chen, Bei and
Zhang, Fengji and
Lu, Dianjie and
Wu, Bingchao and
Guan, Bei and
Yongji, Wang and
Lou, Jian-Guang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.411",
doi = "10.18653/v1/2023.acl-long.411",
pages = "7443--7464",
abstract = "The task of generating code from a natural language description, or NL2Code, is considered a pressing and significant challenge in code intelligence. Thanks to the rapid development of pre-training techniques, surging large language models are being proposed for code, sparking the advances in NL2Code. To facilitate further research and applications in this field, in this paper, we present a comprehensive survey of 27 existing large language models for NL2Code, and also review benchmarks and metrics. We provide an intuitive comparison of all existing models on the HumanEval benchmark. Through in-depth observation and analysis, we provide some insights and conclude that the key factors contributing to the success of large language models for NL2Code are {``}Large Size, Premium Data, Expert Tuning{''}. In addition, we discuss challenges and opportunities regarding the gap between models and humans. We also create a website \url{https://nl2code.github.io} to track the latest progress through crowd-sourcing. To the best of our knowledge, this is the first survey of large language models for NL2Code, and we believe it will contribute to the ongoing development of the field.",
}
| The task of generating code from a natural language description, or NL2Code, is considered a pressing and significant challenge in code intelligence. Thanks to the rapid development of pre-training techniques, surging large language models are being proposed for code, sparking the advances in NL2Code. To facilitate further research and applications in this field, in this paper, we present a comprehensive survey of 27 existing large language models for NL2Code, and also review benchmarks and metrics. We provide an intuitive comparison of all existing models on the HumanEval benchmark. Through in-depth observation and analysis, we provide some insights and conclude that the key factors contributing to the success of large language models for NL2Code are {``}Large Size, Premium Data, Expert Tuning{''}. In addition, we discuss challenges and opportunities regarding the gap between models and humans. We also create a website \url{https://nl2code.github.io} to track the latest progress through crowd-sourcing. To the best of our knowledge, this is the first survey of large language models for NL2Code, and we believe it will contribute to the ongoing development of the field. | [
"Zan, Daoguang",
"Chen, Bei",
"Zhang, Fengji",
"Lu, Dianjie",
"Wu, Bingchao",
"Guan, Bei",
"Yongji, Wang",
"Lou, Jian-Guang"
] | Large Language Models Meet NL2Code: A Survey | acl-long.411 | Poster | 2212.09420 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.412.bib | https://aclanthology.org/2023.acl-long.412/ | @inproceedings{ni-etal-2023-aggregating,
title = "When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial {NLP}",
author = "Ni, Jingwei and
Jin, Zhijing and
Wang, Qian and
Sachan, Mrinmaya and
Leippold, Markus",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.412",
doi = "10.18653/v1/2023.acl-long.412",
pages = "7465--7488",
abstract = "Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work {--} sometimes negative transfer occurs between tasks, especially when aggregating loosely related skills, leaving it an open question when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks. However, what tasks and skills should be included is less well explored. In this work, we conduct a case study in Financial NLP where multiple datasets exist for skills relevant to the domain, such as numeric reasoning and sentiment analysis. Due to the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work. Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and choice of aggregation size and shared capacity. Specifically, MTL works well when tasks are diverse but related, and when the size of the task aggregation and the shared capacity of the model are balanced to avoid overwhelming certain tasks.",
}
| Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks. However, MTL does not always work {--} sometimes negative transfer occurs between tasks, especially when aggregating loosely related skills, leaving it an open question when MTL works. Previous studies show that MTL performance can be improved by algorithmic tricks. However, what tasks and skills should be included is less well explored. In this work, we conduct a case study in Financial NLP where multiple datasets exist for skills relevant to the domain, such as numeric reasoning and sentiment analysis. Due to the task difficulty and data scarcity in the Financial NLP domain, we explore when aggregating such diverse skills from multiple datasets with MTL can work. Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and choice of aggregation size and shared capacity. Specifically, MTL works well when tasks are diverse but related, and when the size of the task aggregation and the shared capacity of the model are balanced to avoid overwhelming certain tasks. | [
"Ni, Jingwei",
"Jin, Zhijing",
"Wang, Qian",
"Sachan, Mrinmaya",
"Leippold, Markus"
] | When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP | acl-long.412 | Poster | 2305.14007 | [
"https://github.com/EdisonNi-hku/when-does-mtl-work"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.413.bib | https://aclanthology.org/2023.acl-long.413/ | @inproceedings{fei-etal-2023-enhancing,
title = "Enhancing Grammatical Error Correction Systems with Explanations",
author = "Fei, Yuejiao and
Cui, Leyang and
Yang, Sen and
Lam, Wai and
Lan, Zhenzhong and
Shi, Shuming",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.413",
doi = "10.18653/v1/2023.acl-long.413",
pages = "7489--7501",
abstract = "Grammatical error correction systems improve written communication by detecting and correcting language mistakes. To help language learners better understand why the GEC system makes a certain correction, the causes of errors (evidence words) and the corresponding error types are two key factors. To enhance GEC systems with explanations, we introduce EXPECT, a large dataset annotated with evidence words and grammatical error types. We propose several baselines and anlysis to understand this task. Furthermore, human evaluation verifies our explainable GEC system{'}s explanations can assist second-language learners in determining whether to accept a correction suggestion and in understanding the associated grammar rule.",
}
| Grammatical error correction systems improve written communication by detecting and correcting language mistakes. To help language learners better understand why the GEC system makes a certain correction, the causes of errors (evidence words) and the corresponding error types are two key factors. To enhance GEC systems with explanations, we introduce EXPECT, a large dataset annotated with evidence words and grammatical error types. We propose several baselines and analyses to understand this task. Furthermore, human evaluation verifies that our explainable GEC system{'}s explanations can assist second-language learners in determining whether to accept a correction suggestion and in understanding the associated grammar rule. | [
"Fei, Yuejiao",
"Cui, Leyang",
"Yang, Sen",
"Lam, Wai",
"Lan, Zhenzhong",
"Shi, Shuming"
] | Enhancing Grammatical Error Correction Systems with Explanations | acl-long.413 | Oral | 2305.15676 | [
"https://github.com/lorafei/explainable_gec"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.414.bib | https://aclanthology.org/2023.acl-long.414/ | @inproceedings{gururaja-etal-2023-linguistic,
title = "Linguistic representations for fewer-shot relation extraction across domains",
author = "Gururaja, Sireesh and
Dutt, Ritam and
Liao, Tinglong and
Ros{\'e}, Carolyn",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.414",
doi = "10.18653/v1/2023.acl-long.414",
pages = "7502--7514",
abstract = "Recent work has demonstrated the positive impact of incorporating linguistic representations as additional context and scaffolds on the in-domain performance of several NLP tasks. We extend this work by exploring the impact of linguistic representations on cross-domain performance in a few-shot transfer setting. An important question is whether linguistic representations enhance generalizability by providing features that function as cross-domain pivots. We focus on the task of relation extraction on three datasets of procedural text in two domains, cooking and materials science. Our approach augments a popular transformer-based architecture by alternately incorporating syntactic and semantic graphs constructed by freely available off-the-shelf tools. We examine their utility for enhancing generalization, and investigate whether earlier findings, e.g. that semantic representations can be more helpful than syntactic ones, extend to relation extraction in multiple domains. We find that while the inclusion of these graphs results in significantly higher performance in few-shot transfer, both types of graph exhibit roughly equivalent utility.",
}
| Recent work has demonstrated the positive impact of incorporating linguistic representations as additional context and scaffolds on the in-domain performance of several NLP tasks. We extend this work by exploring the impact of linguistic representations on cross-domain performance in a few-shot transfer setting. An important question is whether linguistic representations enhance generalizability by providing features that function as cross-domain pivots. We focus on the task of relation extraction on three datasets of procedural text in two domains, cooking and materials science. Our approach augments a popular transformer-based architecture by alternately incorporating syntactic and semantic graphs constructed by freely available off-the-shelf tools. We examine their utility for enhancing generalization, and investigate whether earlier findings, e.g. that semantic representations can be more helpful than syntactic ones, extend to relation extraction in multiple domains. We find that while the inclusion of these graphs results in significantly higher performance in few-shot transfer, both types of graph exhibit roughly equivalent utility. | [
"Gururaja, Sireesh",
"Dutt, Ritam",
"Liao, Tinglong",
"Ros{\\'e}, Carolyn"
] | Linguistic representations for fewer-shot relation extraction across domains | acl-long.414 | Oral | 2307.03823 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.415.bib | https://aclanthology.org/2023.acl-long.415/ | @inproceedings{jin-etal-2023-darkbert,
title = "{D}ark{BERT}: A Language Model for the Dark Side of the {I}nternet",
author = "Jin, Youngjin and
Jang, Eugene and
Cui, Jian and
Chung, Jin-Woo and
Lee, Yongjae and
Shin, Seungwon",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.415",
doi = "10.18653/v1/2023.acl-long.415",
pages = "7515--7533",
abstract = "Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web.",
}
| Recent research has suggested that there are clear differences in the language used in the Dark Web compared to that of the Surface Web. As studies on the Dark Web commonly require textual analysis of the domain, language models specific to the Dark Web may provide valuable insights to researchers. In this work, we introduce DarkBERT, a language model pretrained on Dark Web data. We describe the steps taken to filter and compile the text data used to train DarkBERT to combat the extreme lexical and structural diversity of the Dark Web that may be detrimental to building a proper representation of the domain. We evaluate DarkBERT and its vanilla counterpart along with other widely used language models to validate the benefits that a Dark Web domain specific model offers in various use cases. Our evaluations show that DarkBERT outperforms current language models and may serve as a valuable resource for future research on the Dark Web. | [
"Jin, Youngjin",
"Jang, Eugene",
"Cui, Jian",
"Chung, Jin-Woo",
"Lee, Yongjae",
"Shin, Seungwon"
] | DarkBERT: A Language Model for the Dark Side of the Internet | acl-long.415 | Poster | 2305.08596 | [
""
] | https://huggingface.co/papers/2305.08596 | 2 | 8 | 11 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.416.bib | https://aclanthology.org/2023.acl-long.416/ | @inproceedings{cheng-etal-2023-mdace,
title = "{MDACE}: {MIMIC} Documents Annotated with Code Evidence",
author = "Cheng, Hua and
Jafari, Rana and
Russell, April and
Klopfer, Russell and
Lu, Edmond and
Striner, Benjamin and
Gormley, Matthew",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.416",
doi = "10.18653/v1/2023.acl-long.416",
pages = "7534--7550",
abstract = "We introduce a dataset for evidence/rationale extraction on an extreme multi-label classification task over long medical documents. One such task is Computer-Assisted Coding (CAC) which has improved significantly in recent years, thanks to advances in machine learning technologies. Yet simply predicting a set of final codes for a patient encounter is insufficient as CAC systems are required to provide supporting textual evidence to justify the billing codes. A model able to produce accurate and reliable supporting evidence for each code would be a tremendous benefit. However, a human annotated code evidence corpus is extremely difficult to create because it requires specialized knowledge. In this paper, we introduce MDACE, the first publicly available code evidence dataset, which is built on a subset of the MIMIC-III clinical records. The dataset {--} annotated by professional medical coders {--} consists of 302 Inpatient charts with 3,934 evidence spans and 52 Profee charts with 5,563 evidence spans. We implemented several evidence extraction methods based on the EffectiveCAN model (Liu et al., 2021) to establish baseline performance on this dataset. MDACE can be used to evaluate code evidence extraction methods for CAC systems, as well as the accuracy and interpretability of deep learning models for multi-label classification. We believe that the release of MDACE will greatly improve the understanding and application of deep learning technologies for medical coding and document classification.",
}
| We introduce a dataset for evidence/rationale extraction on an extreme multi-label classification task over long medical documents. One such task is Computer-Assisted Coding (CAC) which has improved significantly in recent years, thanks to advances in machine learning technologies. Yet simply predicting a set of final codes for a patient encounter is insufficient as CAC systems are required to provide supporting textual evidence to justify the billing codes. A model able to produce accurate and reliable supporting evidence for each code would be a tremendous benefit. However, a human annotated code evidence corpus is extremely difficult to create because it requires specialized knowledge. In this paper, we introduce MDACE, the first publicly available code evidence dataset, which is built on a subset of the MIMIC-III clinical records. The dataset {--} annotated by professional medical coders {--} consists of 302 Inpatient charts with 3,934 evidence spans and 52 Profee charts with 5,563 evidence spans. We implemented several evidence extraction methods based on the EffectiveCAN model (Liu et al., 2021) to establish baseline performance on this dataset. MDACE can be used to evaluate code evidence extraction methods for CAC systems, as well as the accuracy and interpretability of deep learning models for multi-label classification. We believe that the release of MDACE will greatly improve the understanding and application of deep learning technologies for medical coding and document classification. | [
"Cheng, Hua",
"Jafari, Rana",
"Russell, April",
"Klopfer, Russell",
"Lu, Edmond",
"Striner, Benjamin",
"Gormley, Matthew"
] | MDACE: MIMIC Documents Annotated with Code Evidence | acl-long.416 | Poster | 2307.03859 | [
"https://github.com/3mcloud/MDACE"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
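The MDACE record above evaluates evidence extraction for medical coding. A common mechanism in such models is label-wise attention, where each code has its own query that ranks tokens as evidence; the sketch below is a generic illustration under that assumption, not the EffectiveCAN architecture the paper actually builds on:

```python
# A generic label-wise attention scorer for code evidence, only loosely
# inspired by the MDACE record above (EffectiveCAN itself is more elaborate).
# Token embeddings would normally come from a document encoder.
import torch
import torch.nn as nn

class LabelAttentionScorer(nn.Module):
    def __init__(self, hidden, n_codes):
        super().__init__()
        self.label_queries = nn.Parameter(torch.randn(n_codes, hidden))
        self.classifier = nn.Linear(hidden, n_codes)

    def forward(self, tok):                      # tok: (seq_len, hidden)
        scores = self.label_queries @ tok.T      # (n_codes, seq_len)
        attn = scores.softmax(dim=-1)            # per-code evidence weights
        pooled = attn @ tok                      # (n_codes, hidden)
        logits = (self.classifier.weight * pooled).sum(-1) + self.classifier.bias
        return logits, attn                      # attn ranks tokens as evidence

tok = torch.randn(200, 64)                       # placeholder encoder output
logits, attn = LabelAttentionScorer(64, n_codes=50)(tok)
top_evidence = attn[0].topk(5).indices           # top evidence tokens for code 0
```

The attention weights give token-level scores that can be compared against the dataset's annotated evidence spans.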
https://aclanthology.org/2023.acl-long.417.bib | https://aclanthology.org/2023.acl-long.417/ | @inproceedings{wu-etal-2023-towards,
title = "Towards Zero-Shot Multilingual Transfer for Code-Switched Responses",
author = "Wu, Ting-Wei and
Zhao, Changsheng and
Chang, Ernie and
Shi, Yangyang and
Chuang, Pierce and
Chandra, Vikas and
Juang, Biing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.417",
doi = "10.18653/v1/2023.acl-long.417",
pages = "7551--7563",
abstract = "Recent task-oriented dialog systems have had great success in building English-based personal assistants, but extending these systems to a global audience is challenging due to the need for annotated data in the target language. An alternative approach is to leverage existing data in a high-resource language to enable cross-lingual transfer in low-resource language models. However, this type of transfer has not been widely explored in natural language response generation. In this research, we investigate the use of state-of-the-art multilingual models such as mBART and T5 to facilitate zero-shot and few-shot transfer of code-switched responses. We propose a new adapter-based framework that allows for efficient transfer by learning task-specific representations and encapsulating source and target language representations. Our framework is able to successfully transfer language knowledge even when the target language corpus is limited. We present both quantitative and qualitative analyses to evaluate the effectiveness of our approach.",
}
| Recent task-oriented dialog systems have had great success in building English-based personal assistants, but extending these systems to a global audience is challenging due to the need for annotated data in the target language. An alternative approach is to leverage existing data in a high-resource language to enable cross-lingual transfer in low-resource language models. However, this type of transfer has not been widely explored in natural language response generation. In this research, we investigate the use of state-of-the-art multilingual models such as mBART and T5 to facilitate zero-shot and few-shot transfer of code-switched responses. We propose a new adapter-based framework that allows for efficient transfer by learning task-specific representations and encapsulating source and target language representations. Our framework is able to successfully transfer language knowledge even when the target language corpus is limited. We present both quantitative and qualitative analyses to evaluate the effectiveness of our approach. | [
"Wu, Ting-Wei",
"Zhao, Changsheng",
"Chang, Ernie",
"Shi, Yangyang",
"Chuang, Pierce",
"Ch",
"ra, Vikas",
"Juang, Biing"
] | Towards Zero-Shot Multilingual Transfer for Code-Switched Responses | acl-long.417 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
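The adapter-based framework in the record above inserts small trainable modules into a frozen multilingual model such as mBART or T5. A minimal bottleneck adapter (dimensions and placement are illustrative; the paper's exact design, with source/target language encapsulation, is richer) could be:

```python
# A minimal bottleneck adapter sketch, assuming the common residual
# down-project / nonlinearity / up-project design. Sizes are placeholders.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

h = torch.randn(4, 16, 768)        # (batch, seq, hidden) from a frozen layer
out = BottleneckAdapter()(h)       # only adapter parameters are trained
```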
https://aclanthology.org/2023.acl-long.418.bib | https://aclanthology.org/2023.acl-long.418/ | @inproceedings{zeng-etal-2023-one,
title = "One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning",
author = "Zeng, Guangtao and
Zhang, Peiyuan and
Lu, Wei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.418",
doi = "10.18653/v1/2023.acl-long.418",
pages = "7564--7580",
abstract = "Fine-tuning pre-trained language models for multiple tasks can be expensive in terms of storage. Parameter-efficient transfer learning (PETL) methods have been proposed to address this issue, but they still require a significant number of parameters when being applied to broader ranges of tasks. To achieve even greater storage reduction, we propose ProPETL, a novel method that enables efficient sharing of a single prototype PETL network (e.g. adapter, LoRA, and prefix-tuning) across layers and tasks. We learn binary masks to select different sub-networks from the prototype network and apply them as PETL modules into different layers. We find that the binary masks can determine crucial structural information from the network, which is often ignored in previous studies. Our work can also be seen as a type of pruning method, where we find that overparameterization also exists in the seemingly small PETL modules. We evaluate ProPETL on various downstream tasks and show that it can outperform other PETL methods with around 10{\%} parameters required by the latter.",
}
| Fine-tuning pre-trained language models for multiple tasks can be expensive in terms of storage. Parameter-efficient transfer learning (PETL) methods have been proposed to address this issue, but they still require a significant number of parameters when applied to a broader range of tasks. To achieve even greater storage reduction, we propose ProPETL, a novel method that enables efficient sharing of a single prototype PETL network (e.g. adapter, LoRA, and prefix-tuning) across layers and tasks. We learn binary masks to select different sub-networks from the prototype network and apply them as PETL modules into different layers. We find that the binary masks can determine crucial structural information from the network, which is often ignored in previous studies. Our work can also be seen as a type of pruning method, where we find that overparameterization also exists in the seemingly small PETL modules. We evaluate ProPETL on various downstream tasks and show that it can outperform other PETL methods with around 10{\%} of the parameters required by the latter. | [
"Zeng, Guangtao",
"Zhang, Peiyuan",
"Lu, Wei"
] | One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning | acl-long.418 | Poster | 2305.17682 | [
"https://github.com/chaoscodes/propetl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
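The ProPETL record above shares one prototype PETL module across layers and specializes it with learned binary masks. A minimal sketch of that core idea follows; the straight-through estimator and the decision to mask only the down-projection are assumptions, not the paper's exact recipe:

```python
# A sketch of the ProPETL idea described above: one shared prototype
# adapter, specialized per layer by a learned binary mask over its weights.
# The straight-through trick and masking details are assumed here.
import torch
import torch.nn as nn

class MaskedSharedAdapter(nn.Module):
    def __init__(self, hidden, bottleneck, n_layers):
        super().__init__()
        self.down = nn.Parameter(torch.randn(hidden, bottleneck) * 0.02)
        self.up = nn.Parameter(torch.zeros(bottleneck, hidden))
        # one mask-logit tensor per layer, over the shared prototype weights
        self.mask_logits = nn.Parameter(torch.zeros(n_layers, hidden, bottleneck))

    def binary_mask(self, layer):
        probs = torch.sigmoid(self.mask_logits[layer])
        hard = (probs > 0.5).float()
        return hard + probs - probs.detach()   # straight-through estimator

    def forward(self, h, layer):
        m = self.binary_mask(layer)
        return h + torch.relu(h @ (self.down * m)) @ self.up

adapter = MaskedSharedAdapter(hidden=768, bottleneck=64, n_layers=12)
h = torch.randn(2, 10, 768)
out = adapter(h, layer=3)   # same prototype, layer-specific sub-network
```

Storing one prototype plus binary masks is what yields the storage savings the abstract reports.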
https://aclanthology.org/2023.acl-long.419.bib | https://aclanthology.org/2023.acl-long.419/ | @inproceedings{li-etal-2023-language,
title = "Can Language Models Make Fun? A Case Study in {C}hinese Comical Crosstalk",
author = "Li, Jianquan and
Wu, XiangBo and
Liu, Xiaokang and
Xie, Qianqian and
Tiwari, Prayag and
Wang, Benyou",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.419",
doi = "10.18653/v1/2023.acl-long.419",
pages = "7581--7596",
abstract = "Language is the principal tool for human communication, in which humor is one of the most attractive parts. Producing natural language like humans using computers, a.k.a, Natural Language Generation (NLG), has been widely used for dialogue systems, chatbots, machine translation, as well as computer-aid creation e.g., idea generations, scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models. In this work, we aim to preliminarily test *whether NLG can generate humor as humans do*. We build a largest dataset consisting of numerous **C**hinese **C**omical **C**rosstalk scripts (called **C**3 in short), which is for a popular Chinese performing art called {`}Xiangsheng{'} or {`}ç¸å£°{'} since 1800s.We benchmark various generation approaches including training-from-scratch Seq2seq, fine-tuned middle-scale PLMs, and large-scale PLMs (with and without fine-tuning). Moreover, we also conduct a human assessment, showing that 1) *large-scale pretraining largely improves crosstalk generation quality*; and 2) *even the scripts generated from the best PLM is far from what we expect*. We conclude humor generation could be largely improved using large-scaled PLMs, but it is still in its infancy. The data and benchmarking code are publicly available in [\url{https://github.com/anonNo2/crosstalk-generation}](\url{https://github.com/anonNo2/crosstalk-generation}).",
}
| Language is the principal tool for human communication, in which humor is one of the most attractive parts. Producing natural language like humans using computers, a.k.a., Natural Language Generation (NLG), has been widely used for dialogue systems, chatbots, machine translation, as well as computer-aided creation, e.g., idea generation and scriptwriting. However, the humor aspect of natural language is relatively under-investigated, especially in the age of pre-trained language models. In this work, we aim to preliminarily test *whether NLG can generate humor as humans do*. We build the largest dataset consisting of numerous **C**hinese **C**omical **C**rosstalk scripts (called **C**3 in short), which is for a popular Chinese performing art called {`}Xiangsheng{'} or {`}相声{'}, dating back to the 1800s. We benchmark various generation approaches including training-from-scratch Seq2seq, fine-tuned middle-scale PLMs, and large-scale PLMs (with and without fine-tuning). Moreover, we also conduct a human assessment, showing that 1) *large-scale pretraining largely improves crosstalk generation quality*; and 2) *even the scripts generated by the best PLM are far from what we expect*. We conclude that humor generation could be largely improved using large-scale PLMs, but it is still in its infancy. The data and benchmarking code are publicly available at [\url{https://github.com/anonNo2/crosstalk-generation}](\url{https://github.com/anonNo2/crosstalk-generation}). | [
"Li, Jianquan",
"Wu, XiangBo",
"Liu, Xiaokang",
"Xie, Qianqian",
"Tiwari, Prayag",
"Wang, Benyou"
] | Can Language Models Make Fun? A Case Study in Chinese Comical Crosstalk | acl-long.419 | Poster | 2207.00735 | [
"https://github.com/anonno2/crosstalk-generation"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.420.bib | https://aclanthology.org/2023.acl-long.420/ | @inproceedings{butoi-etal-2023-convergence,
title = "Convergence and Diversity in the Control Hierarchy",
author = "Butoi, Alexandra and
Cotterell, Ryan and
Chiang, David",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.420",
doi = "10.18653/v1/2023.acl-long.420",
pages = "7597--7616",
abstract = "Weir has defined a hierarchy of language classes whose second member (L2) is generated by tree-adjoining grammars (TAG), linear indexed grammars (LIG), combinatory categorial grammars, and head grammars. The hierarchy is obtained using the mechanism of control, and L2 is obtained using a context-free grammar (CFG) whose derivations are controlled by another CFG. We adapt Weir{'}s definition of a controllable CFG (called a labeled distinguished CFG) to give a definition of controllable pushdown automata (PDAs), called labeled distinguished PDAs. This yields three new characterizations of L2 as the class of languages generated by PDAs controlling PDAs, PDAs controlling CFGs, and CFGs controlling PDAs. We show that these four formalisms are not only weakly equivalent but equivalent in a stricter sense that we call d-weak equivalence. Furthermore, using an even stricter notion of equivalence called d-strong equivalence, we make precise the intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an embedded PDA, and a PDA controlling a CFG is a LIG. The fourth member of this family, a CFG controlling a PDA, does not correspond to any kind of automaton we know of, so we invent one and call it a Pushdown Adjoining Automaton (PAA).",
}
| Weir has defined a hierarchy of language classes whose second member (L2) is generated by tree-adjoining grammars (TAG), linear indexed grammars (LIG), combinatory categorial grammars, and head grammars. The hierarchy is obtained using the mechanism of control, and L2 is obtained using a context-free grammar (CFG) whose derivations are controlled by another CFG. We adapt Weir{'}s definition of a controllable CFG (called a labeled distinguished CFG) to give a definition of controllable pushdown automata (PDAs), called labeled distinguished PDAs. This yields three new characterizations of L2 as the class of languages generated by PDAs controlling PDAs, PDAs controlling CFGs, and CFGs controlling PDAs. We show that these four formalisms are not only weakly equivalent but equivalent in a stricter sense that we call d-weak equivalence. Furthermore, using an even stricter notion of equivalence called d-strong equivalence, we make precise the intuition that a CFG controlling a CFG is a TAG, a PDA controlling a PDA is an embedded PDA, and a PDA controlling a CFG is a LIG. The fourth member of this family, a CFG controlling a PDA, does not correspond to any kind of automaton we know of, so we invent one and call it a Pushdown Adjoining Automaton (PAA). | [
"Butoi, Alex",
"ra",
"Cotterell, Ryan",
"Chiang, David"
] | Convergence and Diversity in the Control Hierarchy | acl-long.420 | Poster | 2306.03628 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.421.bib | https://aclanthology.org/2023.acl-long.421/ | @inproceedings{yang-etal-2023-confede,
title = "{C}on{FEDE}: Contrastive Feature Decomposition for Multimodal Sentiment Analysis",
author = "Yang, Jiuding and
Yu, Yakun and
Niu, Di and
Guo, Weidong and
Xu, Yu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.421",
doi = "10.18653/v1/2023.acl-long.421",
pages = "7617--7630",
abstract = "Multimodal Sentiment Analysis aims to predict the sentiment of video content. Recent research suggests that multimodal sentiment analysis critically depends on learning a good representation of multimodal information, which should contain both modality-invariant representations that are consistent across modalities as well as modality-specific representations. In this paper, we propose ConFEDE, a unified learning framework that jointly performs contrastive representation learning and contrastive feature decomposition to enhance the representation of multimodal information. It decomposes each of the three modalities of a video sample, including text, video frames, and audio, into a similarity feature and a dissimilarity feature, which are learned by a contrastive relation centered around the text. We conducted extensive experiments on CH-SIMS, MOSI and MOSEI to evaluate various state-of-the-art multimodal sentiment analysis methods. Experimental results show that ConFEDE outperforms all baselines on these datasets on a range of metrics.",
}
| Multimodal Sentiment Analysis aims to predict the sentiment of video content. Recent research suggests that multimodal sentiment analysis critically depends on learning a good representation of multimodal information, which should contain both modality-invariant representations that are consistent across modalities as well as modality-specific representations. In this paper, we propose ConFEDE, a unified learning framework that jointly performs contrastive representation learning and contrastive feature decomposition to enhance the representation of multimodal information. It decomposes each of the three modalities of a video sample, including text, video frames, and audio, into a similarity feature and a dissimilarity feature, which are learned by a contrastive relation centered around the text. We conducted extensive experiments on CH-SIMS, MOSI and MOSEI to evaluate various state-of-the-art multimodal sentiment analysis methods. Experimental results show that ConFEDE outperforms all baselines on these datasets on a range of metrics. | [
"Yang, Jiuding",
"Yu, Yakun",
"Niu, Di",
"Guo, Weidong",
"Xu, Yu"
] | ConFEDE: Contrastive Feature Decomposition for Multimodal Sentiment Analysis | acl-long.421 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
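The ConFEDE record above decomposes each modality into a similarity feature and a dissimilarity feature learned contrastively around the text view. The sketch below illustrates that decomposition; the specific loss form (cosine pull/push) is an assumption rather than the paper's exact objective:

```python
# A rough sketch of contrastive feature decomposition in the spirit of
# ConFEDE above: each modality is split into a similarity and a
# dissimilarity projection, pulled toward / pushed away from the text view.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decomposer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.sim = nn.Linear(dim, dim)
        self.dis = nn.Linear(dim, dim)
    def forward(self, x):
        return self.sim(x), self.dis(x)

dim = 128
text, video, audio = (torch.randn(8, dim) for _ in range(3))  # placeholder features
dec = Decomposer(dim)

loss = 0.0
for modality in (video, audio):
    sim_feat, dis_feat = dec(modality)
    # similarity features should align with text; dissimilarity ones should not
    loss += (1 - F.cosine_similarity(sim_feat, text)).mean()
    loss += F.cosine_similarity(dis_feat, text).clamp(min=0).mean()
loss.backward()
```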
https://aclanthology.org/2023.acl-long.422.bib | https://aclanthology.org/2023.acl-long.422/ | @inproceedings{pryor-etal-2023-using,
title = "Using Domain Knowledge to Guide Dialog Structure Induction via Neural Probabilistic Soft Logic",
author = "Pryor, Connor and
Yuan, Quan and
Liu, Jeremiah and
Kazemi, Mehran and
Ramachandran, Deepak and
Bedrax-Weiss, Tania and
Getoor, Lise",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.422",
doi = "10.18653/v1/2023.acl-long.422",
pages = "7631--7652",
abstract = "Dialog Structure Induction (DSI) is the task of inferring the latent dialog structure (i.e., a set of dialog states and their temporal transitions) of a given goal-oriented dialog. It is a critical component for modern dialog system design and discourse analysis. Existing DSI approaches are often purely data-driven, deploy models that infer latent states without access to domain knowledge, underperform when the training corpus is limited/noisy, or have difficulty when test dialogs exhibit distributional shifts from the training domain. This work explores a neural-symbolic approach as a potential solution to these problems. We introduce Neural Probabilistic Soft Logic Dialogue Structure Induction (NEUPSL DSI), a principled approach that injects symbolic knowledge into the latent space of a generative neural model. We conduct a thorough empirical investigation on the effect of NEUPSL DSI learning on hidden representation quality, few-shot learning, and out-of-domain generalization performance. Over three dialog structure induction datasets and across unsupervised and semi-supervised settings for standard and cross-domain generalization, the injection of symbolic knowledge using NEUPSL DSI provides a consistent boost in performance over the canonical baselines.",
}
| Dialog Structure Induction (DSI) is the task of inferring the latent dialog structure (i.e., a set of dialog states and their temporal transitions) of a given goal-oriented dialog. It is a critical component for modern dialog system design and discourse analysis. Existing DSI approaches are often purely data-driven, deploy models that infer latent states without access to domain knowledge, underperform when the training corpus is limited/noisy, or have difficulty when test dialogs exhibit distributional shifts from the training domain. This work explores a neural-symbolic approach as a potential solution to these problems. We introduce Neural Probabilistic Soft Logic Dialogue Structure Induction (NEUPSL DSI), a principled approach that injects symbolic knowledge into the latent space of a generative neural model. We conduct a thorough empirical investigation on the effect of NEUPSL DSI learning on hidden representation quality, few-shot learning, and out-of-domain generalization performance. Over three dialog structure induction datasets and across unsupervised and semi-supervised settings for standard and cross-domain generalization, the injection of symbolic knowledge using NEUPSL DSI provides a consistent boost in performance over the canonical baselines. | [
"Pryor, Connor",
"Yuan, Quan",
"Liu, Jeremiah",
"Kazemi, Mehran",
"Ramach",
"ran, Deepak",
"Bedrax-Weiss, Tania",
"Getoor, Lise"
] | Using Domain Knowledge to Guide Dialog Structure Induction via Neural Probabilistic Soft Logic | acl-long.422 | Poster | 2403.17853 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
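The NEUPSL DSI record above injects probabilistic-soft-logic rules into a neural model's training signal. Soft logic typically relaxes a rule with the Lukasiewicz t-norm, turning a logical implication into a differentiable hinge penalty; the toy rule below is invented purely for illustration:

```python
# A toy illustration of injecting a soft-logic rule into a neural loss,
# in the spirit of NeuPSL DSI above. Under the Lukasiewicz relaxation, the
# rule  greet(t) -> state(t, "greeting")  has distance-to-satisfaction
# max(0, p_greet - p_state). The rule itself is a made-up example.
import torch

p_greet = torch.sigmoid(torch.randn(16, requires_grad=True))               # "utterance is a greeting"
p_state = torch.softmax(torch.randn(16, 5, requires_grad=True), dim=-1)[:, 0]  # prob of "greeting" state

rule_violation = torch.clamp(p_greet - p_state, min=0)   # zero when the rule holds
symbolic_loss = rule_violation.mean()
total_loss = symbolic_loss        # would be added to the usual task loss
total_loss.backward()
```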
https://aclanthology.org/2023.acl-long.423.bib | https://aclanthology.org/2023.acl-long.423/ | @inproceedings{peng-etal-2023-copying,
title = "Are You Copying My Model? Protecting the Copyright of Large Language Models for {E}aa{S} via Backdoor Watermark",
author = "Peng, Wenjun and
Yi, Jingwei and
Wu, Fangzhao and
Wu, Shangxi and
Bin Zhu, Bin and
Lyu, Lingjuan and
Jiao, Binxing and
Xu, Tong and
Sun, Guangzhong and
Xie, Xing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.423",
doi = "10.18653/v1/2023.acl-long.423",
pages = "7653--7668",
abstract = "Large language models (LLMs) have demonstrated powerful capabilities in both text understanding and generation. Companies have begun to offer Embedding as a Service (EaaS) based on these LLMs, which can benefit various natural language processing (NLP) tasks for customers. However, previous studies have shown that EaaS is vulnerable to model extraction attacks, which can cause significant losses for the owners of LLMs, as training these models is extremely expensive. To protect the copyright of LLMs for EaaS, we propose an Embedding Watermark method called {pasted macro {`}METHOD{'}} that implants backdoors on embeddings. Our method selects a group of moderate-frequency words from a general text corpus to form a trigger set, then selects a target embedding as the watermark, and inserts it into the embeddings of texts containing trigger words as the backdoor. The weight of insertion is proportional to the number of trigger words included in the text. This allows the watermark backdoor to be effectively transferred to EaaS-stealer{'}s model for copyright verification while minimizing the adverse impact on the original embeddings{'} utility. Our extensive experiments on various datasets show that our method can effectively protect the copyright of EaaS models without compromising service quality. Our code is available at \url{https://github.com/yjw1029/EmbMarker}.",
}
| Large language models (LLMs) have demonstrated powerful capabilities in both text understanding and generation. Companies have begun to offer Embedding as a Service (EaaS) based on these LLMs, which can benefit various natural language processing (NLP) tasks for customers. However, previous studies have shown that EaaS is vulnerable to model extraction attacks, which can cause significant losses for the owners of LLMs, as training these models is extremely expensive. To protect the copyright of LLMs for EaaS, we propose an Embedding Watermark method called EmbMarker that implants backdoors on embeddings. Our method selects a group of moderate-frequency words from a general text corpus to form a trigger set, then selects a target embedding as the watermark, and inserts it into the embeddings of texts containing trigger words as the backdoor. The weight of insertion is proportional to the number of trigger words included in the text. This allows the watermark backdoor to be effectively transferred to EaaS-stealer{'}s model for copyright verification while minimizing the adverse impact on the original embeddings{'} utility. Our extensive experiments on various datasets show that our method can effectively protect the copyright of EaaS models without compromising service quality. Our code is available at \url{https://github.com/yjw1029/EmbMarker}. | [
"Peng, Wenjun",
"Yi, Jingwei",
"Wu, Fangzhao",
"Wu, Shangxi",
"Bin Zhu, Bin",
"Lyu, Lingjuan",
"Jiao, Binxing",
"Xu, Tong",
"Sun, Guangzhong",
"Xie, Xing"
] | Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark | acl-long.423 | Poster | 2305.10036 | [
"https://github.com/yjw1029/embmarker"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
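The EmbMarker abstract above spells out the insertion mechanism: mix a target watermark embedding into a text's embedding with weight proportional to how many trigger words the text contains. A small sketch of that step (the trigger set, weighting cap, and normalization are illustrative placeholders):

```python
# A sketch of the backdoor-watermark insertion described in the abstract
# above: the watermark embedding is mixed in with weight proportional to
# the trigger-word count. Trigger set and weighting are placeholders.
import numpy as np

rng = np.random.default_rng(0)
dim = 16
triggers = {"alpha", "beta", "gamma"}            # moderate-frequency words (placeholder)
watermark = rng.normal(size=dim)                 # target watermark embedding
watermark /= np.linalg.norm(watermark)

def watermarked_embedding(text: str, base_emb: np.ndarray, max_triggers: int = 4):
    count = sum(tok in triggers for tok in text.lower().split())
    w = min(count, max_triggers) / max_triggers  # weight grows with trigger count
    emb = (1 - w) * base_emb + w * watermark
    return emb / np.linalg.norm(emb)

emb = watermarked_embedding("alpha beta study", rng.normal(size=dim))
```

Verification then checks whether a suspect model's embeddings for trigger-rich texts drift toward the watermark direction.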
https://aclanthology.org/2023.acl-long.424.bib | https://aclanthology.org/2023.acl-long.424/ | @inproceedings{sun-etal-2023-answering,
title = "Answering Ambiguous Questions via Iterative Prompting",
author = "Sun, Weiwei and
Cai, Hengyi and
Chen, Hongshen and
Ren, Pengjie and
Chen, Zhumin and
de Rijke, Maarten and
Ren, Zhaochun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.424",
doi = "10.18653/v1/2023.acl-long.424",
pages = "7669--7683",
abstract = "In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist. To provide feasible answers to an ambiguous question,one approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity. An alternative is to gather candidate answers and aggregate them, but this method can be computationally costly and may neglect dependencies among answers. In this paper, we present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions. Specifically, we integrate an answering model with a prompting model in an iterative manner. The prompting model adaptively tracks the reading process and progressively triggers the answering model to compose distinct and relevant answers. Additionally, we develop a task-specific post-pretraining approach for both the answering model and the prompting model, which greatly improves the performance of our framework. Empirical studies on two commonly-used open benchmarks show that AmbigPrompt achieves state-of-the-art or competitive results while using less memory and having a lower inference latency than competing approaches. Additionally, AmbigPrompt also performs well in low-resource settings.",
}
| In open-domain question answering, due to the ambiguity of questions, multiple plausible answers may exist. To provide feasible answers to an ambiguous question, one approach is to directly predict all valid answers, but this can struggle with balancing relevance and diversity. An alternative is to gather candidate answers and aggregate them, but this method can be computationally costly and may neglect dependencies among answers. In this paper, we present AmbigPrompt to address the imperfections of existing approaches to answering ambiguous questions. Specifically, we integrate an answering model with a prompting model in an iterative manner. The prompting model adaptively tracks the reading process and progressively triggers the answering model to compose distinct and relevant answers. Additionally, we develop a task-specific post-pretraining approach for both the answering model and the prompting model, which greatly improves the performance of our framework. Empirical studies on two commonly-used open benchmarks show that AmbigPrompt achieves state-of-the-art or competitive results while using less memory and having a lower inference latency than competing approaches. Additionally, AmbigPrompt also performs well in low-resource settings. | [
"Sun, Weiwei",
"Cai, Hengyi",
"Chen, Hongshen",
"Ren, Pengjie",
"Chen, Zhumin",
"de Rijke, Maarten",
"Ren, Zhaochun"
] | Answering Ambiguous Questions via Iterative Prompting | acl-long.424 | Poster | 2307.03897 | [
"https://github.com/sunnweiwei/ambigprompt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
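The AmbigPrompt record above alternates a prompting model and an answering model to accumulate distinct answers. The control flow can be sketched as below; both model calls are stubs standing in for the real neural components, so only the iteration structure is meaningful:

```python
# A control-flow sketch of iterative prompting as described above: a
# prompting step tracks answers found so far and triggers the answering
# model for the next distinct answer. Both model calls are stubbed here.
def prompt_model(question, answers_so_far):
    # placeholder: a real system would encode the reading/answering state
    return f"{question} | exclude: {', '.join(answers_so_far)}"

def answer_model(prompt):
    # placeholder for the actual QA model
    canned = ["answer-A", "answer-B", "answer-C"]
    for a in canned:
        if a not in prompt:
            return a
    return None

def answer_ambiguous(question, max_answers=5):
    answers = []
    while len(answers) < max_answers:
        ans = answer_model(prompt_model(question, answers))
        if ans is None or ans in answers:   # stop when no new answer emerges
            break
        answers.append(ans)
    return answers

print(answer_ambiguous("Who played the Joker?"))
```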
https://aclanthology.org/2023.acl-long.425.bib | https://aclanthology.org/2023.acl-long.425/ | @inproceedings{ruggeri-etal-2023-dataset,
title = "A Dataset of Argumentative Dialogues on Scientific Papers",
author = "Ruggeri, Federico and
Mesgar, Mohsen and
Gurevych, Iryna",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.425",
doi = "10.18653/v1/2023.acl-long.425",
pages = "7684--7699",
abstract = "With recent advances in question-answering models, various datasets have been collected to improve and study the effectiveness of these models on scientific texts. Questions and answers in these datasets explore a scientific paper by seeking factual information from the paper{'}s content. However, these datasets do not tackle the argumentative content of scientific papers, which is of huge importance in persuasiveness of a scientific discussion. We introduce ArgSciChat, a dataset of 41 argumentative dialogues between scientists on 20 NLP papers. The unique property of our dataset is that it includes both exploratory and argumentative questions and answers in a dialogue discourse on a scientific paper. Moreover, the size of ArgSciChat demonstrates the difficulties in collecting dialogues for specialized domains. Thus, our dataset is a challenging resource to evaluate dialogue agents in low-resource domains, in which collecting training data is costly. We annotate all sentences of dialogues in ArgSciChat and analyze them extensively. The results confirm that dialogues in ArgSciChat include exploratory and argumentative interactions. Furthermore, we use our dataset to fine-tune and evaluate a pre-trained document-grounded dialogue agent. The agent achieves a low performance on our dataset, motivating a need for dialogue agents with a capability to reason and argue about their answers. We publicly release ArgSciChat.",
}
| With recent advances in question-answering models, various datasets have been collected to improve and study the effectiveness of these models on scientific texts. Questions and answers in these datasets explore a scientific paper by seeking factual information from the paper{'}s content. However, these datasets do not tackle the argumentative content of scientific papers, which is of huge importance to the persuasiveness of a scientific discussion. We introduce ArgSciChat, a dataset of 41 argumentative dialogues between scientists on 20 NLP papers. The unique property of our dataset is that it includes both exploratory and argumentative questions and answers in a dialogue discourse on a scientific paper. Moreover, the size of ArgSciChat demonstrates the difficulties in collecting dialogues for specialized domains. Thus, our dataset is a challenging resource to evaluate dialogue agents in low-resource domains, in which collecting training data is costly. We annotate all sentences of dialogues in ArgSciChat and analyze them extensively. The results confirm that dialogues in ArgSciChat include exploratory and argumentative interactions. Furthermore, we use our dataset to fine-tune and evaluate a pre-trained document-grounded dialogue agent. The agent achieves low performance on our dataset, motivating a need for dialogue agents with the capability to reason and argue about their answers. We publicly release ArgSciChat. | [
"Ruggeri, Federico",
"Mesgar, Mohsen",
"Gurevych, Iryna"
] | A Dataset of Argumentative Dialogues on Scientific Papers | acl-long.425 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.426.bib | https://aclanthology.org/2023.acl-long.426/ | @inproceedings{green-etal-2023-massively,
title = "Massively Multilingual Lexical Specialization of Multilingual Transformers",
author = "Green, Tommaso and
Ponzetto, Simone Paolo and
Glava{\v{s}}, Goran",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.426",
doi = "10.18653/v1/2023.acl-long.426",
pages = "7700--7715",
abstract = "While pretrained language models (PLMs) primarily serve as general-purpose text encoders that can be fine-tuned for a wide variety of downstream tasks, recent work has shown that they can also be rewired to produce high-quality word representations (i.e., static word embeddings) and yield good performance in type-level lexical tasks. While existing work primarily focused on the lexical specialization of monolingual PLMs with immense quantities of monolingual constraints, in this work we expose massively multilingual transformers (MMTs, e.g., mBERT or XLM-R) to multilingual lexical knowledge at scale, leveraging BabelNet as the readily available rich source of multilingual and cross-lingual type-level lexical knowledge. Concretely, we use BabelNet{'}s multilingual synsets to create synonym pairs (or synonym-gloss pairs) across 50 languages and then subject the MMTs (mBERT and XLM-R) to a lexical specialization procedure guided by a contrastive objective. We show that such massively multilingual lexical specialization brings substantial gains in two standard cross-lingual lexical tasks, bilingual lexicon induction and cross-lingual word similarity, as well as in cross-lingual sentence retrieval. Crucially, we observe gains for languages unseen in specialization, indicating that multilingual lexical specialization enables generalization to languages with no lexical constraints. In a series of subsequent controlled experiments, we show that the number of specialization constraints plays a much greater role than the set of languages from which they originate.",
}
| While pretrained language models (PLMs) primarily serve as general-purpose text encoders that can be fine-tuned for a wide variety of downstream tasks, recent work has shown that they can also be rewired to produce high-quality word representations (i.e., static word embeddings) and yield good performance in type-level lexical tasks. While existing work primarily focused on the lexical specialization of monolingual PLMs with immense quantities of monolingual constraints, in this work we expose massively multilingual transformers (MMTs, e.g., mBERT or XLM-R) to multilingual lexical knowledge at scale, leveraging BabelNet as the readily available rich source of multilingual and cross-lingual type-level lexical knowledge. Concretely, we use BabelNet{'}s multilingual synsets to create synonym pairs (or synonym-gloss pairs) across 50 languages and then subject the MMTs (mBERT and XLM-R) to a lexical specialization procedure guided by a contrastive objective. We show that such massively multilingual lexical specialization brings substantial gains in two standard cross-lingual lexical tasks, bilingual lexicon induction and cross-lingual word similarity, as well as in cross-lingual sentence retrieval. Crucially, we observe gains for languages unseen in specialization, indicating that multilingual lexical specialization enables generalization to languages with no lexical constraints. In a series of subsequent controlled experiments, we show that the number of specialization constraints plays a much greater role than the set of languages from which they originate. | [
"Green, Tommaso",
"Ponzetto, Simone Paolo",
"Glava{\\v{s}}, Goran"
] | Massively Multilingual Lexical Specialization of Multilingual Transformers | acl-long.426 | Poster | 2208.01018 | [
""
] | https://huggingface.co/papers/2208.01018 | 0 | 1 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.427.bib | https://aclanthology.org/2023.acl-long.427/ | @inproceedings{akyurek-etal-2023-rl4f,
title = "{RL}4{F}: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs",
author = "Akyurek, Afra Feyza and
Akyurek, Ekin and
Kalyan, Ashwin and
Clark, Peter and
Wijaya, Derry Tanti and
Tandon, Niket",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.427",
doi = "10.18653/v1/2023.acl-long.427",
pages = "7716--7733",
abstract = "Despite their unprecedented success, even the largest language models make mistakes. Similar to how humans learn and improve using feedback, previous work proposed providing language models with natural language feedback to guide them in repairing their outputs. Because human-generated critiques are expensive to obtain, researchers have devised learned critique generators in lieu of human critics while assuming one can train downstream models to utilize generated feedback. However, this approach does not apply to black-box or limited access models such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of large general-purpose language agents, fine-tuning is neither computationally nor spatially efficient as it results in multiple copies of the network. In this work, we introduce RL4F (Reinforcement Learning for Feedback), a multi-agent collaborative framework where the critique generator is trained to maximize end-task performance of GPT-3, a fixed model more than 200 times its size. RL4F produces critiques that help GPT-3 revise its outputs. We study three datasets for action planning, summarization and alphabetization and show relative improvements up to 10{\%} in multiple text similarity metrics over other learned, retrieval-augmented or prompting-based critique generators.",
}
| Despite their unprecedented success, even the largest language models make mistakes. Similar to how humans learn and improve using feedback, previous work proposed providing language models with natural language feedback to guide them in repairing their outputs. Because human-generated critiques are expensive to obtain, researchers have devised learned critique generators in lieu of human critics while assuming one can train downstream models to utilize generated feedback. However, this approach does not apply to black-box or limited access models such as ChatGPT, as they cannot be fine-tuned. Moreover, in the era of large general-purpose language agents, fine-tuning is neither computationally nor spatially efficient as it results in multiple copies of the network. In this work, we introduce RL4F (Reinforcement Learning for Feedback), a multi-agent collaborative framework where the critique generator is trained to maximize end-task performance of GPT-3, a fixed model more than 200 times its size. RL4F produces critiques that help GPT-3 revise its outputs. We study three datasets for action planning, summarization and alphabetization and show relative improvements up to 10{\%} in multiple text similarity metrics over other learned, retrieval-augmented or prompting-based critique generators. | [
"Akyurek, Afra Feyza",
"Akyurek, Ekin",
"Kalyan, Ashwin",
"Clark, Peter",
"Wijaya, Derry Tanti",
"T",
"on, Niket"
] | RL4F: Generating Natural Language Feedback with Reinforcement Learning for Repairing Model Outputs | acl-long.427 | Poster | 2305.08844 | [
"https://github.com/feyzaakyurek/rl4f"
] | https://huggingface.co/papers/2305.08844 | 3 | 1 | 0 | 7 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.428.bib | https://aclanthology.org/2023.acl-long.428/ | @inproceedings{whitehouse-etal-2023-webie,
title = "{W}eb{IE}: Faithful and Robust Information Extraction on the Web",
author = "Whitehouse, Chenxi and
Vania, Clara and
Aji, Alham Fikri and
Christodoulopoulos, Christos and
Pierleoni, Andrea",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.428",
doi = "10.18653/v1/2023.acl-long.428",
pages = "7734--7755",
abstract = "Extracting structured and grounded fact triples from raw text is a fundamental task in Information Extraction (IE). Existing IE datasets are typically collected from Wikipedia articles, using hyperlinks to link entities to the Wikidata knowledge base. However, models trained only on Wikipedia have limitations when applied to web domains, which often contain noisy text or text that does not have any factual information. We present WebIE, the first large-scale, entity-linked closed IE dataset consisting of 1.6M sentences automatically collected from the English Common Crawl corpus. WebIE also includes negative examples, i.e. sentences without fact triples, to better reflect the data on the web. We annotate {\textasciitilde}25K triples from WebIE through crowdsourcing and introduce mWebIE, a translation of the annotated set in four other languages: French, Spanish, Portuguese, and Hindi. We evaluate the in-domain, out-of-domain, and zero-shot cross-lingual performance of generative IE models and find models trained on WebIE show better generalisability. We also propose three training strategies that use entity linking as an auxiliary task. Our experiments show that adding Entity-Linking objectives improves the faithfulness of our generative IE models.",
}
| Extracting structured and grounded fact triples from raw text is a fundamental task in Information Extraction (IE). Existing IE datasets are typically collected from Wikipedia articles, using hyperlinks to link entities to the Wikidata knowledge base. However, models trained only on Wikipedia have limitations when applied to web domains, which often contain noisy text or text that does not have any factual information. We present WebIE, the first large-scale, entity-linked closed IE dataset consisting of 1.6M sentences automatically collected from the English Common Crawl corpus. WebIE also includes negative examples, i.e. sentences without fact triples, to better reflect the data on the web. We annotate {\textasciitilde}25K triples from WebIE through crowdsourcing and introduce mWebIE, a translation of the annotated set in four other languages: French, Spanish, Portuguese, and Hindi. We evaluate the in-domain, out-of-domain, and zero-shot cross-lingual performance of generative IE models and find models trained on WebIE show better generalisability. We also propose three training strategies that use entity linking as an auxiliary task. Our experiments show that adding Entity-Linking objectives improves the faithfulness of our generative IE models. | [
"Whitehouse, Chenxi",
"Vania, Clara",
"Aji, Alham Fikri",
"Christodoulopoulos, Christos",
"Pierleoni, Andrea"
] | WebIE: Faithful and Robust Information Extraction on the Web | acl-long.428 | Poster | 2305.14293 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.429.bib | https://aclanthology.org/2023.acl-long.429/ | @inproceedings{ziems-etal-2023-normbank,
title = "{N}orm{B}ank: A Knowledge Bank of Situational Social Norms",
author = "Ziems, Caleb and
Dwivedi-Yu, Jane and
Wang, Yi-Chia and
Halevy, Alon and
Yang, Diyi",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.429",
doi = "10.18653/v1/2023.acl-long.429",
pages = "7756--7776",
abstract = "We present NormBank, a knowledge bank of 155k situational norms. This resource is designed to ground flexible normative reasoning for interactive, assistive, and collaborative AI systems. Unlike prior commonsense resources, NormBank grounds each inference within a multivalent sociocultural frame, which includes the setting (e.g., restaurant), the agents{'} contingent roles (waiter, customer), their attributes (age, gender), and other physical, social, and cultural constraints (e.g., the temperature or the country of operation). In total, NormBank contains 63k unique constraints from a taxonomy that we introduce and iteratively refine here. Constraints then apply in different combinations to frame social norms. Under these manipulations, norms are non-monotonic {---} one can cancel an inference by updating its frame even slightly. Still, we find evidence that neural models can help reliably extend the scope and coverage of NormBank. We further demonstrate the utility of this resource with a series of transfer experiments. For data and code, see \url{https://github.com/SALT-NLP/normbank}",
}
| We present NormBank, a knowledge bank of 155k situational norms. This resource is designed to ground flexible normative reasoning for interactive, assistive, and collaborative AI systems. Unlike prior commonsense resources, NormBank grounds each inference within a multivalent sociocultural frame, which includes the setting (e.g., restaurant), the agents{'} contingent roles (waiter, customer), their attributes (age, gender), and other physical, social, and cultural constraints (e.g., the temperature or the country of operation). In total, NormBank contains 63k unique constraints from a taxonomy that we introduce and iteratively refine here. Constraints then apply in different combinations to frame social norms. Under these manipulations, norms are non-monotonic {---} one can cancel an inference by updating its frame even slightly. Still, we find evidence that neural models can help reliably extend the scope and coverage of NormBank. We further demonstrate the utility of this resource with a series of transfer experiments. For data and code, see \url{https://github.com/SALT-NLP/normbank} | [
"Ziems, Caleb",
"Dwivedi-Yu, Jane",
"Wang, Yi-Chia",
"Halevy, Alon",
"Yang, Diyi"
] | NormBank: A Knowledge Bank of Situational Social Norms | acl-long.429 | Poster | 2305.17008 | [
"https://github.com/salt-nlp/normbank"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.430.bib | https://aclanthology.org/2023.acl-long.430/ | @inproceedings{na-etal-2023-dip,
title = "{DIP}: Dead code Insertion based Black-box Attack for Programming Language Model",
author = "Na, CheolWon and
Choi, YunSeok and
Lee, Jee-Hyong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.430",
doi = "10.18653/v1/2023.acl-long.430",
pages = "7777--7791",
abstract = "Automatic processing of source code, such as code clone detection and software vulnerability detection, is very helpful to software engineers. Large pre-trained Programming Language (PL) models (such as CodeBERT, GraphCodeBERT, CodeT5, etc.), show very powerful performance on these tasks. However, these PL models are vulnerable to adversarial examples that are generated with slight perturbation. Unlike natural language, an adversarial example of code must be semantic-preserving and compilable. Due to the requirements, it is hard to directly apply the existing attack methods for natural language models. In this paper, we propose DIP (Dead code Insertion based Black-box Attack for Programming Language Model), a high-performance and effective black-box attack method to generate adversarial examples using dead code insertion. We evaluate our proposed method on 9 victim downstream-task large code models. Our method outperforms the state-of-the-art black-box attack in both attack efficiency and attack quality, while generated adversarial examples are compiled preserving semantic functionality.",
}
| Automatic processing of source code, such as code clone detection and software vulnerability detection, is very helpful to software engineers. Large pre-trained Programming Language (PL) models (such as CodeBERT, GraphCodeBERT, CodeT5, etc.), show very powerful performance on these tasks. However, these PL models are vulnerable to adversarial examples that are generated with slight perturbation. Unlike natural language, an adversarial example of code must be semantic-preserving and compilable. Due to the requirements, it is hard to directly apply the existing attack methods for natural language models. In this paper, we propose DIP (Dead code Insertion based Black-box Attack for Programming Language Model), a high-performance and effective black-box attack method to generate adversarial examples using dead code insertion. We evaluate our proposed method on 9 victim downstream-task large code models. Our method outperforms the state-of-the-art black-box attack in both attack efficiency and attack quality, while generated adversarial examples are compiled preserving semantic functionality. | [
"Na, CheolWon",
"Choi, YunSeok",
"Lee, Jee-Hyong"
] | DIP: Dead code Insertion based Black-box Attack for Programming Language Model | acl-long.430 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
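The DIP record above attacks code models with dead code insertion, a perturbation that keeps the program compilable and behaviorally identical. The snippet below illustrates that attack surface; the dead-code payload and the fixed insertion point are simplistic placeholders, whereas DIP itself searches for insertions that flip a victim model's prediction:

```python
# An illustration of the attack surface described above: inserting dead
# code (never-executed statements) into a function while preserving its
# behavior. Payload and insertion point are simplistic placeholders.
victim_source = '''def add(a, b):
    return a + b
'''

DEAD_CODE = "    if False:\n        _unused = 0\n"

def insert_dead_code(src: str, line_idx: int = 1) -> str:
    lines = src.splitlines(keepends=True)
    lines.insert(line_idx, DEAD_CODE)
    return "".join(lines)

perturbed = insert_dead_code(victim_source)
print(perturbed)
scope = {}
exec(perturbed, scope)                 # still compiles and runs identically
assert scope["add"](2, 3) == 5
```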
https://aclanthology.org/2023.acl-long.431.bib | https://aclanthology.org/2023.acl-long.431/ | @inproceedings{liu-etal-2023-modeling,
title = "Modeling Structural Similarities between Documents for Coherence Assessment with Graph Convolutional Networks",
author = "Liu, Wei and
Fu, Xiyan and
Strube, Michael",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.431",
doi = "10.18653/v1/2023.acl-long.431",
pages = "7792--7808",
abstract = "Coherence is an important aspect of text quality, and various approaches have been applied to coherence modeling. However, existing methods solely focus on a single document{'}s coherence patterns, ignoring the underlying correlation between documents. We investigate a GCN-based coherence model that is capable of capturing structural similarities between documents. Our model first creates a graph structure for each document, from where we mine different subgraph patterns. We then construct a heterogeneous graph for the training corpus, connecting documents based on their shared subgraphs. Finally, a GCN is applied to the heterogeneous graph to model the connectivity relationships. We evaluate our method on two tasks, assessing discourse coherence and automated essay scoring. Results show that our GCN-based model outperforms all baselines, achieving a new state-of-the-art on both tasks.",
}
| Coherence is an important aspect of text quality, and various approaches have been applied to coherence modeling. However, existing methods solely focus on a single document{'}s coherence patterns, ignoring the underlying correlation between documents. We investigate a GCN-based coherence model that is capable of capturing structural similarities between documents. Our model first creates a graph structure for each document, from which we mine different subgraph patterns. We then construct a heterogeneous graph for the training corpus, connecting documents based on their shared subgraphs. Finally, a GCN is applied to the heterogeneous graph to model the connectivity relationships. We evaluate our method on two tasks, assessing discourse coherence and automated essay scoring. Results show that our GCN-based model outperforms all baselines, achieving a new state-of-the-art on both tasks. | [
"Liu, Wei",
"Fu, Xiyan",
"Strube, Michael"
] | Modeling Structural Similarities between Documents for Coherence Assessment with Graph Convolutional Networks | acl-long.431 | Poster | 2306.06472 | [
"https://github.com/liuwei1206/strusim"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
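The coherence model in the record above ultimately applies a GCN to a heterogeneous document graph. The standard GCN layer it builds on (Kipf and Welling-style symmetric normalization) is sketched below; the paper's contribution, constructing the graph from shared subgraphs, is not reproduced here:

```python
# A standard GCN layer with symmetric degree normalization, the building
# block the record above applies to its heterogeneous document graph.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))          # add self-loops
        deg_inv_sqrt = a.sum(-1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a * deg_inv_sqrt[None, :]
        return torch.relu(self.lin(a_norm @ x))

adj = torch.tensor([[0., 1., 0.],                 # toy 3-node document graph
                    [1., 0., 1.],
                    [0., 1., 0.]])
x = torch.randn(3, 16)                            # per-document features
h = GCNLayer(16, 8)(x, adj)
```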
https://aclanthology.org/2023.acl-long.432.bib | https://aclanthology.org/2023.acl-long.432/ | @inproceedings{zhu-etal-2023-hitin,
title = "{H}i{TIN}: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification",
author = "Zhu, He and
Zhang, Chong and
Huang, Junjie and
Wu, Junran and
Xu, Ke",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.432",
doi = "10.18653/v1/2023.acl-long.432",
pages = "7809--7821",
abstract = "Hierarchical text classification (HTC) is a challenging subtask of multi-label classification as the labels form a complex hierarchical structure. Existing dual-encoder methods in HTC achieve weak performance gains with huge memory overheads and their structure encoders heavily rely on domain knowledge. Under such observation, we tend to investigate the feasibility of a memory-friendly model with strong generalization capability that could boost the performance of HTC without prior statistics or label semantics. In this paper, we propose Hierarchy-aware Tree Isomorphism Network (HiTIN) to enhance the text representations with only syntactic information of the label hierarchy. Specifically, we convert the label hierarchy into an unweighted tree structure, termed coding tree, with the guidance of structural entropy. Then we design a structure encoder to incorporate hierarchy-aware information in the coding tree into text representations. Besides the text encoder, HiTIN only contains a few multi-layer perceptions and linear transformations, which greatly saves memory. We conduct experiments on three commonly used datasets and the results demonstrate that HiTIN could achieve better test performance and less memory consumption than state-of-the-art (SOTA) methods.",
}
| Hierarchical text classification (HTC) is a challenging subtask of multi-label classification as the labels form a complex hierarchical structure. Existing dual-encoder methods in HTC achieve weak performance gains with huge memory overheads and their structure encoders heavily rely on domain knowledge. Motivated by this observation, we investigate the feasibility of a memory-friendly model with strong generalization capability that could boost the performance of HTC without prior statistics or label semantics. In this paper, we propose Hierarchy-aware Tree Isomorphism Network (HiTIN) to enhance the text representations with only syntactic information of the label hierarchy. Specifically, we convert the label hierarchy into an unweighted tree structure, termed coding tree, with the guidance of structural entropy. Then we design a structure encoder to incorporate hierarchy-aware information in the coding tree into text representations. Besides the text encoder, HiTIN only contains a few multi-layer perceptrons and linear transformations, which greatly saves memory. We conduct experiments on three commonly used datasets and the results demonstrate that HiTIN could achieve better test performance and less memory consumption than state-of-the-art (SOTA) methods. | [
"Zhu, He",
"Zhang, Chong",
"Huang, Junjie",
"Wu, Junran",
"Xu, Ke"
] | HiTIN: Hierarchy-aware Tree Isomorphism Network for Hierarchical Text Classification | acl-long.432 | Poster | 2305.15182 | [
"https://github.com/rooooyy/hitin"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.433.bib | https://aclanthology.org/2023.acl-long.433/ | @inproceedings{zheng-etal-2023-contextual,
title = "Contextual Knowledge Learning for Dialogue Generation",
author = "Zheng, Wen and
Milic-Frayling, Natasa and
Zhou, Ke",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.433",
doi = "10.18653/v1/2023.acl-long.433",
pages = "7822--7839",
abstract = "Incorporating conversational context and knowledge into dialogue generation models has been essential for improving the quality of the generated responses. The context, comprising utterances from previous dialogue exchanges, is used as a source of content for response generation and as a means of selecting external knowledge. However, to avoid introducing irrelevant content, it is key to enable fine-grained scoring of context and knowledge. In this paper, we present a novel approach to context and knowledge weighting as an integral part of model training. We guide the model training through a Contextual Knowledge Learning (CKL) process which involves Latent Vectors for context and knowledge, respectively. CKL Latent Vectors capture the relationship between context, knowledge, and responses through weak supervision and enable differential weighting of context utterances and knowledge sentences during the training process. Experiments with two standard datasets and human evaluation demonstrate that CKL leads to a significant improvement compared with the performance of six strong baseline models and shows robustness with regard to reduced sizes of training sets.",
}
| Incorporating conversational context and knowledge into dialogue generation models has been essential for improving the quality of the generated responses. The context, comprising utterances from previous dialogue exchanges, is used as a source of content for response generation and as a means of selecting external knowledge. However, to avoid introducing irrelevant content, it is key to enable fine-grained scoring of context and knowledge. In this paper, we present a novel approach to context and knowledge weighting as an integral part of model training. We guide the model training through a Contextual Knowledge Learning (CKL) process which involves Latent Vectors for context and knowledge, respectively. CKL Latent Vectors capture the relationship between context, knowledge, and responses through weak supervision and enable differential weighting of context utterances and knowledge sentences during the training process. Experiments with two standard datasets and human evaluation demonstrate that CKL leads to a significant improvement compared with the performance of six strong baseline models and shows robustness with regard to reduced sizes of training sets. | [
"Zheng, Wen",
"Milic-Frayling, Natasa",
"Zhou, Ke"
] | Contextual Knowledge Learning for Dialogue Generation | acl-long.433 | Poster | 2305.18200 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.434.bib | https://aclanthology.org/2023.acl-long.434/ | @inproceedings{wang-etal-2023-easy,
title = "Easy Guided Decoding in Providing Suggestions for Interactive Machine Translation",
author = "Wang, Ke and
Ge, Xin and
Wang, Jiayi and
Zhang, Yuqi and
Zhao, Yu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.434",
doi = "10.18653/v1/2023.acl-long.434",
pages = "7840--7852",
abstract = "Machine translation technology has made great progress in recent years, but it cannot guarantee error-free results. Human translators perform post-editing on machine translations to correct errors in the scene of computer aided translation. In favor of expediting the post-editing process, many works have investigated machine translation in interactive modes, in which machines can automatically refine the rest of translations constrained by human{'}s edits. Translation Suggestion (TS), as an interactive mode to assist human translators, requires machines to generate alternatives for specific incorrect words or phrases selected by human translators. In this paper, we utilize the parameterized objective function of neural machine translation (NMT) and propose a novel constrained decoding algorithm, namely Prefix-Suffix Guided Decoding (PSGD), to deal with the TS problem without additional training. Compared to state-of-the-art lexical-constrained decoding method, PSGD improves translation quality by an average of 10.6 BLEU and reduces time overhead by an average of 63.4{\%} on benchmark datasets. Furthermore, on both the WeTS and the WMT 2022 Translation Suggestion datasets, it is superior over other supervised learning systems trained with TS annotated data.",
}
| Machine translation technology has made great progress in recent years, but it cannot guarantee error-free results. Human translators perform post-editing on machine translations to correct errors in the setting of computer-aided translation. To expedite the post-editing process, many works have investigated machine translation in interactive modes, in which machines can automatically refine the rest of the translation constrained by the human{'}s edits. Translation Suggestion (TS), as an interactive mode to assist human translators, requires machines to generate alternatives for specific incorrect words or phrases selected by human translators. In this paper, we utilize the parameterized objective function of neural machine translation (NMT) and propose a novel constrained decoding algorithm, namely Prefix-Suffix Guided Decoding (PSGD), to deal with the TS problem without additional training. Compared to the state-of-the-art lexically-constrained decoding method, PSGD improves translation quality by an average of 10.6 BLEU and reduces time overhead by an average of 63.4{\%} on benchmark datasets. Furthermore, on both the WeTS and the WMT 2022 Translation Suggestion datasets, it is superior to other supervised learning systems trained with TS-annotated data. | [
"Wang, Ke",
"Ge, Xin",
"Wang, Jiayi",
"Zhang, Yuqi",
"Zhao, Yu"
] | Easy Guided Decoding in Providing Suggestions for Interactive Machine Translation | acl-long.434 | Poster | 2211.07093 | [
"https://github.com/wangke1996/translation-suggestion-psgd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.435.bib | https://aclanthology.org/2023.acl-long.435/ | @inproceedings{jiang-etal-2023-discourse,
title = "Discourse-Centric Evaluation of Document-level Machine Translation with a New Densely Annotated Parallel Corpus of Novels",
author = "Jiang, Yuchen Eleanor and
Liu, Tianyu and
Ma, Shuming and
Zhang, Dongdong and
Sachan, Mrinmaya and
Cotterell, Ryan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.435",
doi = "10.18653/v1/2023.acl-long.435",
pages = "7853--7872",
abstract = "Several recent papers claim to have achieved human parity at sentence-level machine translation (MT){---}especially between high-resource language pairs. In response, the MT community has, in part, shifted its focus to document-level translation. Translating documents requires a deeper understanding of the structure and meaning of text, which is often captured by various kinds of discourse phenomena such as consistency, coherence, and cohesion. However, this renders conventional sentence-level MT evaluation benchmarks inadequate for evaluating the performance of context-aware MT systems. This paperpresents a new dataset with rich discourse annotations, built upon the large-scale parallel corpus BWB introduced in Jiang et al. (2022a). The new BWB annotation introduces four extra evaluation aspects, i.e., entity, terminology, coreference, and quotation, covering 15,095 entity mentions in both languages. Using these annotations, we systematically investigate the similarities and differences between the discourse structures of source and target languages, and the challenges they pose to MT. We discover that MT outputs differ fundamentally from human translations in terms of their latent discourse structures. This gives us a new perspective on the challenges and opportunities in document-level MT. We make our resource publicly available to spur future research in document-level MT and its generalization to other language translation tasks.",
}
| Several recent papers claim to have achieved human parity at sentence-level machine translation (MT){---}especially between high-resource language pairs. In response, the MT community has, in part, shifted its focus to document-level translation. Translating documents requires a deeper understanding of the structure and meaning of text, which is often captured by various kinds of discourse phenomena such as consistency, coherence, and cohesion. However, this renders conventional sentence-level MT evaluation benchmarks inadequate for evaluating the performance of context-aware MT systems. This paper presents a new dataset with rich discourse annotations, built upon the large-scale parallel corpus BWB introduced in Jiang et al. (2022a). The new BWB annotation introduces four extra evaluation aspects, i.e., entity, terminology, coreference, and quotation, covering 15,095 entity mentions in both languages. Using these annotations, we systematically investigate the similarities and differences between the discourse structures of source and target languages, and the challenges they pose to MT. We discover that MT outputs differ fundamentally from human translations in terms of their latent discourse structures. This gives us a new perspective on the challenges and opportunities in document-level MT. We make our resource publicly available to spur future research in document-level MT and its generalization to other language translation tasks. | [
"Jiang, Yuchen Eleanor",
"Liu, Tianyu",
"Ma, Shuming",
"Zhang, Dongdong",
"Sachan, Mrinmaya",
"Cotterell, Ryan"
] | Discourse-Centric Evaluation of Document-level Machine Translation with a New Densely Annotated Parallel Corpus of Novels | acl-long.435 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.436.bib | https://aclanthology.org/2023.acl-long.436/ | @inproceedings{zhou-etal-2023-cmot,
title = "{CMOT}: Cross-modal Mixup via Optimal Transport for Speech Translation",
author = "Zhou, Yan and
Fang, Qingkai and
Feng, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.436",
doi = "10.18653/v1/2023.acl-long.436",
pages = "7873--7887",
abstract = "End-to-end speech translation (ST) is the task of translating speech signals in the source language into text in the target language. As a cross-modal task, end-to-end ST is difficult to train with limited data. Existing methods often try to transfer knowledge from machine translation (MT), but their performances are restricted by the modality gap between speech and text. In this paper, we propose Cross-modal Mixup via Optimal Transport (CMOT) to overcome the modality gap. We find the alignment between speech and text sequences via optimal transport and then mix up the sequences from different modalities at a token level using the alignment. Experiments on the MuST-C ST benchmark demonstrate that CMOT achieves an average BLEU of 30.0 in 8 translation directions, outperforming previous methods. Further analysis shows CMOT can adaptively find the alignment between modalities, which helps alleviate the modality gap between speech and text.",
}
| End-to-end speech translation (ST) is the task of translating speech signals in the source language into text in the target language. As a cross-modal task, end-to-end ST is difficult to train with limited data. Existing methods often try to transfer knowledge from machine translation (MT), but their performances are restricted by the modality gap between speech and text. In this paper, we propose Cross-modal Mixup via Optimal Transport (CMOT) to overcome the modality gap. We find the alignment between speech and text sequences via optimal transport and then mix up the sequences from different modalities at a token level using the alignment. Experiments on the MuST-C ST benchmark demonstrate that CMOT achieves an average BLEU of 30.0 in 8 translation directions, outperforming previous methods. Further analysis shows CMOT can adaptively find the alignment between modalities, which helps alleviate the modality gap between speech and text. | [
"Zhou, Yan",
"Fang, Qingkai",
"Feng, Yang"
] | CMOT: Cross-modal Mixup via Optimal Transport for Speech Translation | acl-long.436 | Poster | 2305.14635 | [
"https://github.com/ictnlp/cmot"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.437.bib | https://aclanthology.org/2023.acl-long.437/ | @inproceedings{gu-hopkins-2023-evaluation,
title = "On the Evaluation of Neural Selective Prediction Methods for Natural Language Processing",
author = "Gu, Zhengyao and
Hopkins, Mark",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.437",
doi = "10.18653/v1/2023.acl-long.437",
pages = "7888--7899",
abstract = "We provide a survey and empirical comparison of the state-of-the-art in neural selective classification for NLP tasks. We also provide a methodological blueprint, including a novel metric called refinement that provides a calibrated evaluation of confidence functions for selective prediction. Finally, we supply documented, open-source code to support the future development of selective prediction techniques.",
}
| We provide a survey and empirical comparison of the state-of-the-art in neural selective classification for NLP tasks. We also provide a methodological blueprint, including a novel metric called refinement that provides a calibrated evaluation of confidence functions for selective prediction. Finally, we supply documented, open-source code to support the future development of selective prediction techniques. | [
"Gu, Zhengyao",
"Hopkins, Mark"
] | On the Evaluation of Neural Selective Prediction Methods for Natural Language Processing | acl-long.437 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.438.bib | https://aclanthology.org/2023.acl-long.438/ | @inproceedings{yu-etal-2023-speech,
title = "Speech-Text Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment",
author = "Yu, Tianshu and
Gao, Haoyu and
Lin, Ting-En and
Yang, Min and
Wu, Yuchuan and
Ma, Wentao and
Wang, Chao and
Huang, Fei and
Li, Yongbin",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.438",
doi = "10.18653/v1/2023.acl-long.438",
pages = "7900--7913",
abstract = "Recently, speech-text pre-training methods have shown remarkable success in many speech and natural language processing tasks. However, most previous pre-trained models are usually tailored for one or two specific tasks, but fail to conquer a wide range of speech-text tasks. In addition, existing speech-text pre-training methods fail to explore the contextual information within a dialogue to enrich utterance representations. In this paper, we propose Speech-text Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model. Concretely, to consider the temporality of speech modality, we design a novel temporal position prediction task to capture the speech-text alignment. This pre-training task aims to predict the start and end time of each textual word in the corresponding speech waveform. In addition, to learn the characteristics of spoken dialogs, we generalize a response selection task from textual dialog pre-training to speech-text dialog pre-training scenarios. Experimental results on four different downstream speech-text tasks demonstrate the superiority of SPECTRA in learning speech-text alignment and multi-turn dialog context.",
}
| Recently, speech-text pre-training methods have shown remarkable success in many speech and natural language processing tasks. However, most previous pre-trained models are usually tailored for one or two specific tasks, but fail to conquer a wide range of speech-text tasks. In addition, existing speech-text pre-training methods fail to explore the contextual information within a dialogue to enrich utterance representations. In this paper, we propose Speech-text Pre-training for spoken dialog understanding with ExpliCiT cRoss-Modal Alignment (SPECTRA), which is the first-ever speech-text dialog pre-training model. Concretely, to consider the temporality of speech modality, we design a novel temporal position prediction task to capture the speech-text alignment. This pre-training task aims to predict the start and end time of each textual word in the corresponding speech waveform. In addition, to learn the characteristics of spoken dialogs, we generalize a response selection task from textual dialog pre-training to speech-text dialog pre-training scenarios. Experimental results on four different downstream speech-text tasks demonstrate the superiority of SPECTRA in learning speech-text alignment and multi-turn dialog context. | [
"Yu, Tianshu",
"Gao, Haoyu",
"Lin, Ting-En",
"Yang, Min",
"Wu, Yuchuan",
"Ma, Wentao",
"Wang, Chao",
"Huang, Fei",
"Li, Yongbin"
] | Speech-Text Pre-training for Spoken Dialog Understanding with Explicit Cross-Modal Alignment | acl-long.438 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.439.bib | https://aclanthology.org/2023.acl-long.439/ | @inproceedings{han-etal-2023-text,
title = "Text Style Transfer with Contrastive Transfer Pattern Mining",
author = "Han, Jingxuan and
Wang, Quan and
Zhang, Licheng and
Chen, Weidong and
Song, Yan and
Mao, Zhendong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.439",
doi = "10.18653/v1/2023.acl-long.439",
pages = "7914--7927",
abstract = "Text style transfer (TST) is an important task in natural language generation, which aims to alter the stylistic attributes (e.g., sentiment) of a sentence and keep its semantic meaning unchanged. Most existing studies mainly focus on the transformation between styles, yet ignore that this transformation can be actually carried out via different hidden transfer patterns. To address this problem, we propose a novel approach, contrastive transfer pattern mining (CTPM), which automatically mines and utilizes inherent latent transfer patterns to improve the performance of TST. Specifically, we design an adaptive clustering module to automatically discover hidden transfer patterns from the data, and introduce contrastive learning based on the discovered patterns to obtain more accurate sentence representations, and thereby benefit the TST task. To the best of our knowledge, this is the first work that proposes the concept of transfer patterns in TST, and our approach can be applied in a plug-and-play manner to enhance other TST methods to further improve their performance. Extensive experiments on benchmark datasets verify the effectiveness and generality of our approach.",
}
| Text style transfer (TST) is an important task in natural language generation, which aims to alter the stylistic attributes (e.g., sentiment) of a sentence and keep its semantic meaning unchanged. Most existing studies mainly focus on the transformation between styles, yet ignore that this transformation can be actually carried out via different hidden transfer patterns. To address this problem, we propose a novel approach, contrastive transfer pattern mining (CTPM), which automatically mines and utilizes inherent latent transfer patterns to improve the performance of TST. Specifically, we design an adaptive clustering module to automatically discover hidden transfer patterns from the data, and introduce contrastive learning based on the discovered patterns to obtain more accurate sentence representations, and thereby benefit the TST task. To the best of our knowledge, this is the first work that proposes the concept of transfer patterns in TST, and our approach can be applied in a plug-and-play manner to enhance other TST methods to further improve their performance. Extensive experiments on benchmark datasets verify the effectiveness and generality of our approach. | [
"Han, Jingxuan",
"Wang, Quan",
"Zhang, Licheng",
"Chen, Weidong",
"Song, Yan",
"Mao, Zhendong"
] | Text Style Transfer with Contrastive Transfer Pattern Mining | acl-long.439 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.440.bib | https://aclanthology.org/2023.acl-long.440/ | @inproceedings{yue-etal-2023-zero,
title = "Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning",
author = "Yue, Zhenrui and
Zeng, Huimin and
Lan, Mengfei and
Ji, Heng and
Wang, Dong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.440",
doi = "10.18653/v1/2023.acl-long.440",
pages = "7928--7943",
abstract = "With emerging online topics as a source for numerous new events, detecting unseen / rare event types presents an elusive challenge for existing event detection methods, where only limited data access is provided for training. To address the data scarcity problem in event detection, we propose MetaEvent, a meta learning-based framework for zero- and few-shot event detection. Specifically, we sample training tasks from existing event types and perform meta training to search for optimal parameters that quickly adapt to unseen tasks. In our framework, we propose to use the cloze-based prompt and a trigger-aware soft verbalizer to efficiently project output to unseen event types. Moreover, we design a contrastive meta objective based on maximum mean discrepancy (MMD) to learn class-separating features. As such, the proposed MetaEvent can perform zero-shot event detection by mapping features to event types without any prior knowledge. In our experiments, we demonstrate the effectiveness of MetaEvent in both zero-shot and few-shot scenarios, where the proposed method achieves state-of-the-art performance in extensive experiments on benchmark datasets FewEvent and MAVEN.",
}
| With emerging online topics as a source for numerous new events, detecting unseen / rare event types presents an elusive challenge for existing event detection methods, where only limited data access is provided for training. To address the data scarcity problem in event detection, we propose MetaEvent, a meta learning-based framework for zero- and few-shot event detection. Specifically, we sample training tasks from existing event types and perform meta training to search for optimal parameters that quickly adapt to unseen tasks. In our framework, we propose to use the cloze-based prompt and a trigger-aware soft verbalizer to efficiently project output to unseen event types. Moreover, we design a contrastive meta objective based on maximum mean discrepancy (MMD) to learn class-separating features. As such, the proposed MetaEvent can perform zero-shot event detection by mapping features to event types without any prior knowledge. In our experiments, we demonstrate the effectiveness of MetaEvent in both zero-shot and few-shot scenarios, where the proposed method achieves state-of-the-art performance in extensive experiments on benchmark datasets FewEvent and MAVEN. | [
"Yue, Zhenrui",
"Zeng, Huimin",
"Lan, Mengfei",
"Ji, Heng",
"Wang, Dong"
] | Zero- and Few-Shot Event Detection via Prompt-Based Meta Learning | acl-long.440 | Poster | 2305.17373 | [
"https://github.com/yueeeeeeee/metaevent"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.441.bib | https://aclanthology.org/2023.acl-long.441/ | @inproceedings{wei-etal-2023-text,
title = "Text Style Transfer Back-Translation",
author = "Wei, Daimeng and
Wu, Zhanglin and
Shang, Hengchao and
Li, Zongyao and
Wang, Minghan and
Guo, Jiaxin and
Chen, Xiaoyu and
Yu, Zhengzhe and
Yang, Hao",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.441",
doi = "10.18653/v1/2023.acl-long.441",
pages = "7944--7959",
abstract = "Back Translation (BT) is widely used in the field of machine translation, as it has been proved effective for enhancing translation quality. However, BT mainly improves the translation of inputs that share a similar style (to be more specific, translation-liked inputs), since the source side of BT data is machine-translated. For natural inputs, BT brings only slight improvements and sometimes even adverse effects. To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses a style transfer to modify the source side of BT data. By making the style of source-side text more natural, we aim to improve the translation of natural inputs. Our experiments on various language pairs, including both high-resource and low-resource ones, demonstrate that TST BT significantly improves translation performance against popular BT benchmarks. In addition, TST BT is proved to be effective in domain adaptation so this strategy can be regarded as a generalized data augmentation method. Our training code and text style transfer model are open-sourced.",
}
| Back Translation (BT) is widely used in the field of machine translation, as it has proven effective for enhancing translation quality. However, BT mainly improves the translation of inputs that share a similar style (to be more specific, translation-like inputs), since the source side of BT data is machine-translated. For natural inputs, BT brings only slight improvements and sometimes even adverse effects. To address this issue, we propose Text Style Transfer Back Translation (TST BT), which uses style transfer to modify the source side of BT data. By making the style of source-side text more natural, we aim to improve the translation of natural inputs. Our experiments on various language pairs, including both high-resource and low-resource ones, demonstrate that TST BT significantly improves translation performance against popular BT benchmarks. In addition, TST BT proves effective in domain adaptation, so this strategy can be regarded as a generalized data augmentation method. Our training code and text style transfer model are open-sourced. | [
"Wei, Daimeng",
"Wu, Zhanglin",
"Shang, Hengchao",
"Li, Zongyao",
"Wang, Minghan",
"Guo, Jiaxin",
"Chen, Xiaoyu",
"Yu, Zhengzhe",
"Yang, Hao"
] | Text Style Transfer Back-Translation | acl-long.441 | Poster | 2306.01318 | [
"https://github.com/frxxzhl/ssebt"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.442.bib | https://aclanthology.org/2023.acl-long.442/ | @inproceedings{zhao-etal-2023-generating-visual,
title = "Generating Visual Spatial Description via Holistic 3{D} Scene Understanding",
author = "Zhao, Yu and
Fei, Hao and
Ji, Wei and
Wei, Jianguo and
Zhang, Meishan and
Zhang, Min and
Chua, Tat-Seng",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.442",
doi = "10.18653/v1/2023.acl-long.442",
pages = "7960--7977",
abstract = "Visual spatial description (VSD) aims to generate texts that describe the spatial relations of the given objects within images. Existing VSD work merely models the 2D geometrical vision features, thus inevitably falling prey to the problem of skewed spatial understanding of target objects. In this work, we investigate the incorporation of 3D scene features for VSD. With an external 3D scene extractor, we obtain the 3D objects and scene features for input images, based on which we construct a target object-centered 3D spatial scene graph (Go3D-S2G), such that we model the spatial semantics of target objects within the holistic 3D scenes. Besides, we propose a scene subgraph selecting mechanism, sampling topologically-diverse subgraphs from Go3D-S2G, where the diverse local structure features are navigated to yield spatially-diversified text generation. Experimental results on two VSD datasets demonstrate that our framework outperforms the baselines significantly, especially improving on the cases with complex visual spatial relations. Meanwhile, our method can produce more spatially-diversified generation.",
}
| Visual spatial description (VSD) aims to generate texts that describe the spatial relations of the given objects within images. Existing VSD work merely models the 2D geometrical vision features, thus inevitably falling prey to the problem of skewed spatial understanding of target objects. In this work, we investigate the incorporation of 3D scene features for VSD. With an external 3D scene extractor, we obtain the 3D objects and scene features for input images, based on which we construct a target object-centered 3D spatial scene graph (Go3D-S2G), such that we model the spatial semantics of target objects within the holistic 3D scenes. Besides, we propose a scene subgraph selecting mechanism, sampling topologically-diverse subgraphs from Go3D-S2G, where the diverse local structure features are navigated to yield spatially-diversified text generation. Experimental results on two VSD datasets demonstrate that our framework outperforms the baselines significantly, especially improving on the cases with complex visual spatial relations. Meanwhile, our method can produce more spatially-diversified generation. | [
"Zhao, Yu",
"Fei, Hao",
"Ji, Wei",
"Wei, Jianguo",
"Zhang, Meishan",
"Zhang, Min",
"Chua, Tat-Seng"
] | Generating Visual Spatial Description via Holistic 3D Scene Understanding | acl-long.442 | Poster | 2305.11768 | [
"https://github.com/zhaoyucs/vsd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.443.bib | https://aclanthology.org/2023.acl-long.443/ | @inproceedings{zhang-etal-2023-continual,
title = "Continual Knowledge Distillation for Neural Machine Translation",
author = "Zhang, Yuanchi and
Li, Peng and
Sun, Maosong and
Liu, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.443",
doi = "10.18653/v1/2023.acl-long.443",
pages = "7978--7996",
abstract = "While many parallel corpora are not publicly accessible for data copyright, data privacy and competitive differentiation reasons, trained translation models are increasingly available on open platforms. In this work, we propose a method called continual knowledge distillation to take advantage of existing translation models to improve one model of interest. The basic idea is to sequentially transfer knowledge from each trained model to the distilled model. Extensive experiments on Chinese-English and German-English datasets show that our method achieves significant and consistent improvements over strong baselines under both homogeneous and heterogeneous trained model settings and is robust to malicious models.",
}
| While many parallel corpora are not publicly accessible for data copyright, data privacy and competitive differentiation reasons, trained translation models are increasingly available on open platforms. In this work, we propose a method called continual knowledge distillation to take advantage of existing translation models to improve one model of interest. The basic idea is to sequentially transfer knowledge from each trained model to the distilled model. Extensive experiments on Chinese-English and German-English datasets show that our method achieves significant and consistent improvements over strong baselines under both homogeneous and heterogeneous trained model settings and is robust to malicious models. | [
"Zhang, Yuanchi",
"Li, Peng",
"Sun, Maosong",
"Liu, Yang"
] | Continual Knowledge Distillation for Neural Machine Translation | acl-long.443 | Poster | 2212.09097 | [
"https://github.com/thunlp-mt/ckd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.444.bib | https://aclanthology.org/2023.acl-long.444/ | @inproceedings{amplayo-etal-2023-query,
title = "Query Refinement Prompts for Closed-Book Long-Form {QA}",
author = "Amplayo, Reinald Kim and
Webster, Kellie and
Collins, Michael and
Das, Dipanjan and
Narayan, Shashi",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.444",
doi = "10.18653/v1/2023.acl-long.444",
pages = "7997--8012",
abstract = "Large language models (LLMs) have been shown to perform well in answering questions and in producing long-form texts, both in few-shot closed-book settings. While the former can be validated using well-known evaluation metrics, the latter is difficult to evaluate. We resolve the difficulties to evaluate long-form output by doing both tasks at once {--} to do question answering that requires long-form answers. Such questions tend to be multifaceted, i.e., they may have ambiguities and/or require information from multiple sources. To this end, we define query refinement prompts that encourage LLMs to explicitly express the multifacetedness in questions and generate long-form answers covering multiple facets of the question. Our experiments on two long-form question answering datasets, ASQA and AQuAMuSe, show that using our prompts allows us to outperform fully finetuned models in the closed book setting, as well as achieve results comparable to retrieve-then-generate open-book models.",
}
| Large language models (LLMs) have been shown to perform well in answering questions and in producing long-form texts, both in few-shot closed-book settings. While the former can be validated using well-known evaluation metrics, the latter is difficult to evaluate. We resolve the difficulties to evaluate long-form output by doing both tasks at once {--} to do question answering that requires long-form answers. Such questions tend to be multifaceted, i.e., they may have ambiguities and/or require information from multiple sources. To this end, we define query refinement prompts that encourage LLMs to explicitly express the multifacetedness in questions and generate long-form answers covering multiple facets of the question. Our experiments on two long-form question answering datasets, ASQA and AQuAMuSe, show that using our prompts allows us to outperform fully finetuned models in the closed book setting, as well as achieve results comparable to retrieve-then-generate open-book models. | [
"Amplayo, Reinald Kim",
"Webster, Kellie",
"Collins, Michael",
"Das, Dipanjan",
"Narayan, Shashi"
] | Query Refinement Prompts for Closed-Book Long-Form QA | acl-long.444 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.445.bib | https://aclanthology.org/2023.acl-long.445/ | @inproceedings{hou-etal-2023-cone,
title = "{CONE}: An Efficient {CO}arse-to-fi{NE} Alignment Framework for Long Video Temporal Grounding",
author = "Hou, Zhijian and
Zhong, Wanjun and
Ji, Lei and
Gao, Difei and
Yan, Kun and
Chan, W.k. and
Ngo, Chong-Wah and
Shou, Mike Zheng and
Duan, Nan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.445",
doi = "10.18653/v1/2023.acl-long.445",
pages = "8013--8028",
abstract = "This paper tackles an emerging and challenging problem of long video temporal grounding (VTG) that localizes video moments related to a natural language (NL) query. Compared with short videos, long videos are also highly demanded but less explored, which brings new challenges in higher inference computation cost and weaker multi-modal alignment. To address these challenges, we propose CONE, an efficient COarse-to-fiNE alignment framework. CONE is a plug-and-play framework on top of existing VTG models to handle long videos through a sliding window mechanism. Specifically, CONE (1) introduces a query-guided window selection strategy to speed up inference, and (2) proposes a coarse-to-fine mechanism via a novel incorporation of contrastive learning to enhance multi-modal alignment for long videos. Extensive experiments on two large-scale long VTG benchmarks consistently show both substantial performance gains (e.g., from 3.13 to 6.87{\%} on MAD) and state-of-the-art results. Analyses also reveal higher efficiency as the query-guided window selection mechanism accelerates inference time by 2x on Ego4D-NLQ and 15x on MAD while keeping SOTA results. Codes have been released at \url{https://github.com/houzhijian/CONE}.",
}
| This paper tackles an emerging and challenging problem of long video temporal grounding (VTG) that localizes video moments related to a natural language (NL) query. Compared with short videos, long videos are also highly demanded but less explored, which brings new challenges in higher inference computation cost and weaker multi-modal alignment. To address these challenges, we propose CONE, an efficient COarse-to-fiNE alignment framework. CONE is a plug-and-play framework on top of existing VTG models to handle long videos through a sliding window mechanism. Specifically, CONE (1) introduces a query-guided window selection strategy to speed up inference, and (2) proposes a coarse-to-fine mechanism via a novel incorporation of contrastive learning to enhance multi-modal alignment for long videos. Extensive experiments on two large-scale long VTG benchmarks consistently show both substantial performance gains (e.g., from 3.13 to 6.87{\%} on MAD) and state-of-the-art results. Analyses also reveal higher efficiency as the query-guided window selection mechanism accelerates inference time by 2x on Ego4D-NLQ and 15x on MAD while keeping SOTA results. Codes have been released at \url{https://github.com/houzhijian/CONE}. | [
"Hou, Zhijian",
"Zhong, Wanjun",
"Ji, Lei",
"Gao, Difei",
"Yan, Kun",
"Chan, W.k.",
"Ngo, Chong-Wah",
"Shou, Mike Zheng",
"Duan, Nan"
] | CONE: An Efficient COarse-to-fiNE Alignment Framework for Long Video Temporal Grounding | acl-long.445 | Poster | 2209.10918 | [
"https://github.com/houzhijian/cone"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.446.bib | https://aclanthology.org/2023.acl-long.446/ | @inproceedings{yang-etal-2023-shot,
title = "Few-Shot Document-Level Event Argument Extraction",
author = "Yang, Xianjun and
Lu, Yujie and
Petzold, Linda",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.446",
doi = "10.18653/v1/2023.acl-long.446",
pages = "8029--8046",
abstract = "Event argument extraction (EAE) has been well studied at the sentence level but under-explored at the document level. In this paper, we study to capture event arguments that actually spread across sentences in documents. Prior works usually assume full access to rich document supervision, ignoring the fact that the available argument annotation is limited in production. To fill this gap, we present FewDocAE, a Few-Shot Document-Level Event Argument Extraction benchmark, based on the existing document-level event extraction dataset. We first define the new problem and reconstruct the corpus by a novel N-Way-D-Doc sampling instead of the traditional N-Way-K-Shot strategy. Then we adjust the current document-level neural models into the few-shot setting to provide baseline results under in- and cross-domain settings. Since the argument extraction depends on the context from multiple sentences and the learning process is limited to very few examples, we find this novel task to be very challenging with substantively low performance. Considering FewDocAE is closely related to practical use under low-resource regimes, we hope this benchmark encourages more research in this direction. Our data and codes will be available online.",
}
| Event argument extraction (EAE) has been well studied at the sentence level but under-explored at the document level. In this paper, we study how to capture event arguments that actually spread across sentences in documents. Prior works usually assume full access to rich document supervision, ignoring the fact that the available argument annotation is limited in production. To fill this gap, we present FewDocAE, a Few-Shot Document-Level Event Argument Extraction benchmark, based on the existing document-level event extraction dataset. We first define the new problem and reconstruct the corpus by a novel N-Way-D-Doc sampling instead of the traditional N-Way-K-Shot strategy. Then we adjust the current document-level neural models into the few-shot setting to provide baseline results under in- and cross-domain settings. Since the argument extraction depends on the context from multiple sentences and the learning process is limited to very few examples, we find this novel task to be very challenging with substantially low performance. Considering that FewDocAE is closely related to practical use under low-resource regimes, we hope this benchmark encourages more research in this direction. Our data and codes will be available online. | [
"Yang, Xianjun",
"Lu, Yujie",
"Petzold, Linda"
] | Few-Shot Document-Level Event Argument Extraction | acl-long.446 | Poster | 2209.02203 | [
"https://github.com/xianjun-yang/fewdocae"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.447.bib | https://aclanthology.org/2023.acl-long.447/ | @inproceedings{huang-etal-2023-paraamr,
title = "{P}ara{AMR}: A Large-Scale Syntactically Diverse Paraphrase Dataset by {AMR} Back-Translation",
author = "Huang, Kuan-Hao and
Iyer, Varun and
Hsu, I-Hung and
Kumar, Anoop and
Chang, Kai-Wei and
Galstyan, Aram",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.447",
doi = "10.18653/v1/2023.acl-long.447",
pages = "8047--8061",
abstract = "Paraphrase generation is a long-standing task in natural language processing (NLP). Supervised paraphrase generation models, which rely on human-annotated paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand, automatically annotated paraphrase pairs (e.g., by machine back-translation), usually suffer from the lack of syntactic diversity {--} the generated paraphrase sentences are very similar to the source sentences in terms of syntax. In this work, we present ParaAMR, a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation. Our quantitative analysis, qualitative examples, and human evaluation demonstrate that the paraphrases of ParaAMR are syntactically more diverse compared to existing large-scale paraphrase datasets while preserving good semantic similarity. In addition, we show that ParaAMR can be used to improve on three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning. Our results thus showcase the potential of ParaAMR for improving various NLP applications.",
}
| Paraphrase generation is a long-standing task in natural language processing (NLP). Supervised paraphrase generation models, which rely on human-annotated paraphrase pairs, are cost-inefficient and hard to scale up. On the other hand, automatically annotated paraphrase pairs (e.g., by machine back-translation), usually suffer from the lack of syntactic diversity {--} the generated paraphrase sentences are very similar to the source sentences in terms of syntax. In this work, we present ParaAMR, a large-scale syntactically diverse paraphrase dataset created by abstract meaning representation back-translation. Our quantitative analysis, qualitative examples, and human evaluation demonstrate that the paraphrases of ParaAMR are syntactically more diverse compared to existing large-scale paraphrase datasets while preserving good semantic similarity. In addition, we show that ParaAMR can be used to improve on three NLP tasks: learning sentence embeddings, syntactically controlled paraphrase generation, and data augmentation for few-shot learning. Our results thus showcase the potential of ParaAMR for improving various NLP applications. | [
"Huang, Kuan-Hao",
"Iyer, Varun",
"Hsu, I-Hung",
"Kumar, Anoop",
"Chang, Kai-Wei",
"Galstyan, Aram"
] | ParaAMR: A Large-Scale Syntactically Diverse Paraphrase Dataset by AMR Back-Translation | acl-long.447 | Poster | 2305.16585 | [
"https://github.com/uclanlp/paraamr"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.448.bib | https://aclanthology.org/2023.acl-long.448/ | @inproceedings{zhang-etal-2023-towards-understanding,
title = "Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation",
author = "Zhang, Songming and
Liang, Yunlong and
Wang, Shuaibo and
Chen, Yufeng and
Han, Wenjuan and
Liu, Jian and
Xu, Jinan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.448",
doi = "10.18653/v1/2023.acl-long.448",
pages = "8062--8079",
abstract = "Knowledge distillation (KD) is a promising technique for model compression in neural machine translation. However, where the knowledge hides in KD is still not clear, which may hinder the development of KD. In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD. Further, we point out two inherent issues in vanilla word-level KD based on this finding. Firstly, the current objective of KD spreads its focus to whole distributions to learn the knowledge, yet lacks special treatment on the most crucial top-1 information. Secondly, the knowledge is largely covered by the golden information due to the fact that most top-1 predictions of teachers overlap with ground-truth tokens, which further restricts the potential of KD. To address these issues, we propose a new method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD). Specifically, we design a hierarchical ranking loss to enforce the learning of the top-1 information from the teacher. Additionally, we develop an iterative KD procedure to infuse more additional knowledge by distilling on the data without ground-truth targets. Experiments on WMT{'}14 English-German, WMT{'}14 English-French and WMT{'}16 English-Romanian demonstrate that our method can respectively boost Transformer$_{base}$ students by +1.04, +0.60 and +1.11 BLEU scores and significantly outperforms the vanilla word-level KD baseline. Besides, our method shows higher generalizability on different teacher-student capacity gaps than existing KD techniques.",
}
| Knowledge distillation (KD) is a promising technique for model compression in neural machine translation. However, where the knowledge hides in KD is still not clear, which may hinder the development of KD. In this work, we first unravel this mystery from an empirical perspective and show that the knowledge comes from the top-1 predictions of teachers, which also helps us build a potential connection between word- and sequence-level KD. Further, we point out two inherent issues in vanilla word-level KD based on this finding. Firstly, the current objective of KD spreads its focus to whole distributions to learn the knowledge, yet lacks special treatment on the most crucial top-1 information. Secondly, the knowledge is largely covered by the golden information due to the fact that most top-1 predictions of teachers overlap with ground-truth tokens, which further restricts the potential of KD. To address these issues, we propose a new method named Top-1 Information Enhanced Knowledge Distillation (TIE-KD). Specifically, we design a hierarchical ranking loss to enforce the learning of the top-1 information from the teacher. Additionally, we develop an iterative KD procedure to infuse more additional knowledge by distilling on the data without ground-truth targets. Experiments on WMT{'}14 English-German, WMT{'}14 English-French and WMT{'}16 English-Romanian demonstrate that our method can respectively boost Transformer$_{base}$ students by +1.04, +0.60 and +1.11 BLEU scores and significantly outperforms the vanilla word-level KD baseline. Besides, our method shows higher generalizability on different teacher-student capacity gaps than existing KD techniques. | [
"Zhang, Songming",
"Liang, Yunlong",
"Wang, Shuaibo",
"Chen, Yufeng",
"Han, Wenjuan",
"Liu, Jian",
"Xu, Jinan"
] | Towards Understanding and Improving Knowledge Distillation for Neural Machine Translation | acl-long.448 | Poster | 2305.08096 | [
"https://github.com/songmzhang/nmt-kd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.449.bib | https://aclanthology.org/2023.acl-long.449/ | @inproceedings{kumar-etal-2023-multi,
title = "Multi-Row, Multi-Span Distant Supervision For {T}able+{T}ext Question Answering",
author = "Kumar, Vishwajeet and
Gupta, Yash and
Chemmengath, Saneem and
Sen, Jaydeep and
Chakrabarti, Soumen and
Bharadwaj, Samarth and
Pan, Feifei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.449",
doi = "10.18653/v1/2023.acl-long.449",
pages = "8080--8094",
abstract = "Question answering (QA) over tables and linked text, also called TextTableQA, has witnessed significant research in recent years, as tables are often found embedded in documents along with related text. HybridQA and OTT-QA are the two best-known TextTableQA datasets, with questions that are best answered by combining information from both table cells and linked text passages. A common challenge in both datasets, and TextTableQA in general, is that the training instances include just the question and answer, where the gold answer may match not only multiple table cells across table rows but also multiple text spans within the scope of a table row and its associated text. This leads to a noisy multi-instance training regime. We present MITQA, a transformer-based TextTableQA system that is explicitly designed to cope with distant supervision along both these axes, through a multi-instance loss objective, together with careful curriculum design. Our experiments show that the proposed multi-instance distant supervision approach helps MITQA get sate-of-the-art results beating the existing baselines for both HybridQA and OTT-QA, putting MITQA at the top of HybridQA leaderboard with best EM and F1 scores on a held out test set.",
}
| Question answering (QA) over tables and linked text, also called TextTableQA, has witnessed significant research in recent years, as tables are often found embedded in documents along with related text. HybridQA and OTT-QA are the two best-known TextTableQA datasets, with questions that are best answered by combining information from both table cells and linked text passages. A common challenge in both datasets, and TextTableQA in general, is that the training instances include just the question and answer, where the gold answer may match not only multiple table cells across table rows but also multiple text spans within the scope of a table row and its associated text. This leads to a noisy multi-instance training regime. We present MITQA, a transformer-based TextTableQA system that is explicitly designed to cope with distant supervision along both these axes, through a multi-instance loss objective, together with careful curriculum design. Our experiments show that the proposed multi-instance distant supervision approach helps MITQA achieve state-of-the-art results, beating the existing baselines for both HybridQA and OTT-QA, putting MITQA at the top of the HybridQA leaderboard with the best EM and F1 scores on a held-out test set. | [
"Kumar, Vishwajeet",
"Gupta, Yash",
"Chemmengath, Saneem",
"Sen, Jaydeep",
"Chakrabarti, Soumen",
"Bharadwaj, Samarth",
"Pan, Feifei"
] | Multi-Row, Multi-Span Distant Supervision For Table+Text Question Answering | acl-long.449 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.450.bib | https://aclanthology.org/2023.acl-long.450/ | @inproceedings{luo-etal-2023-hahe,
title = "{HAHE}: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level",
author = "Luo, Haoran and
E, Haihong and
Yang, Yuhao and
Guo, Yikai and
Sun, Mingzhi and
Yao, Tianyu and
Tang, Zichen and
Wan, Kaiyang and
Song, Meina and
Lin, Wei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.450",
doi = "10.18653/v1/2023.acl-long.450",
pages = "8095--8107",
abstract = "Link Prediction on Hyper-relational Knowledge Graphs (HKG) is a worthwhile endeavor. HKG consists of hyper-relational facts (H-Facts), composed of a main triple and several auxiliary attribute-value qualifiers, which can effectively represent factually comprehensive information. The internal structure of HKG can be represented as a hypergraph-based representation globally and a semantic sequence-based representation locally. However, existing research seldom simultaneously models the graphical and sequential structure of HKGs, limiting HKGs{'} representation. To overcome this limitation, we propose a novel Hierarchical Attention model for HKG Embedding (HAHE), including global-level and local-level attention. The global-level attention can model the graphical structure of HKG using hypergraph dual-attention layers, while the local-level attention can learn the sequential structure inside H-Facts via heterogeneous self-attention layers. Experiment results indicate that HAHE achieves state-of-the-art performance in link prediction tasks on HKG standard datasets. In addition, HAHE addresses the issue of HKG multi-position prediction for the first time, increasing the applicability of the HKG link prediction task. Our code is publicly available.",
}
| Link Prediction on Hyper-relational Knowledge Graphs (HKG) is a worthwhile endeavor. HKG consists of hyper-relational facts (H-Facts), composed of a main triple and several auxiliary attribute-value qualifiers, which can effectively represent factually comprehensive information. The internal structure of HKG can be represented as a hypergraph-based representation globally and a semantic sequence-based representation locally. However, existing research seldom simultaneously models the graphical and sequential structure of HKGs, limiting HKGs{'} representation. To overcome this limitation, we propose a novel Hierarchical Attention model for HKG Embedding (HAHE), including global-level and local-level attention. The global-level attention can model the graphical structure of HKG using hypergraph dual-attention layers, while the local-level attention can learn the sequential structure inside H-Facts via heterogeneous self-attention layers. Experiment results indicate that HAHE achieves state-of-the-art performance in link prediction tasks on HKG standard datasets. In addition, HAHE addresses the issue of HKG multi-position prediction for the first time, increasing the applicability of the HKG link prediction task. Our code is publicly available. | [
"Luo, Haoran",
"E, Haihong",
"Yang, Yuhao",
"Guo, Yikai",
"Sun, Mingzhi",
"Yao, Tianyu",
"Tang, Zichen",
"Wan, Kaiyang",
"Song, Meina",
"Lin, Wei"
] | HAHE: Hierarchical Attention for Hyper-Relational Knowledge Graphs in Global and Local Level | acl-long.450 | Poster | 2305.06588 | [
"https://github.com/lhrlab/hahe"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.451.bib | https://aclanthology.org/2023.acl-long.451/ | @inproceedings{hou-etal-2023-organ,
title = "{ORGAN}: Observation-Guided Radiology Report Generation via Tree Reasoning",
author = "Hou, Wenjun and
Xu, Kaishuai and
Cheng, Yi and
Li, Wenjie and
Liu, Jiang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.451",
doi = "10.18653/v1/2023.acl-long.451",
pages = "8108--8122",
abstract = "This paper explores the task of radiology report generation, which aims at generating free-text descriptions for a set of radiographs. One significant challenge of this task is how to correctly maintain the consistency between the images and the lengthy report. Previous research explored solving this issue through planning-based methods, which generate reports only based on high-level plans. However, these plans usually only contain the major observations from the radiographs (e.g., lung opacity), lacking much necessary information, such as the observation characteristics and preliminary clinical diagnoses. To address this problem, the system should also take the image information into account together with the textual plan and perform stronger reasoning during the generation process. In this paper, we propose an Observation-guided radiology Report Generation framework (ORGan). It first produces an observation plan and then feeds both the plan and radiographs for report generation, where an observation graph and a tree reasoning mechanism are adopted to precisely enrich the plan information by capturing the multi-formats of each observation. Experimental results demonstrate that our framework outperforms previous state-of-the-art methods regarding text quality and clinical efficacy.",
}
| This paper explores the task of radiology report generation, which aims at generating free-text descriptions for a set of radiographs. One significant challenge of this task is how to correctly maintain the consistency between the images and the lengthy report. Previous research explored solving this issue through planning-based methods, which generate reports only based on high-level plans. However, these plans usually only contain the major observations from the radiographs (e.g., lung opacity), lacking much necessary information, such as the observation characteristics and preliminary clinical diagnoses. To address this problem, the system should also take the image information into account together with the textual plan and perform stronger reasoning during the generation process. In this paper, we propose an Observation-guided radiology Report Generation framework (ORGan). It first produces an observation plan and then feeds both the plan and radiographs for report generation, where an observation graph and a tree reasoning mechanism are adopted to precisely enrich the plan information by capturing the multi-formats of each observation. Experimental results demonstrate that our framework outperforms previous state-of-the-art methods regarding text quality and clinical efficacy. | [
"Hou, Wenjun",
"Xu, Kaishuai",
"Cheng, Yi",
"Li, Wenjie",
"Liu, Jiang"
] | ORGAN: Observation-Guided Radiology Report Generation via Tree Reasoning | acl-long.451 | Poster | 2306.06466 | [
"https://github.com/wjhou/organ"
] | https://huggingface.co/papers/2306.06466 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.452.bib | https://aclanthology.org/2023.acl-long.452/ | @inproceedings{chang-jia-2023-data,
title = "Data Curation Alone Can Stabilize In-context Learning",
author = "Chang, Ting-Yun and
Jia, Robin",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.452",
doi = "10.18653/v1/2023.acl-long.452",
pages = "8123--8144",
abstract = "In-context learning (ICL) enables large language models (LLMs) to perform new tasks by prompting them with a sequence of training examples. However, it is known that ICL is very sensitive to the choice of training examples: randomly sampling examples from a training set leads to high variance in performance. In this paper, we show that carefully curating a subset of training data greatly stabilizes ICL performance without any other changes to the ICL algorithm (e.g., prompt retrieval or calibration). We introduce two methods to choose training subsets{---}both score training examples individually, then select the highest-scoring ones. CondAcc scores a training example by its average dev-set ICL accuracy when combined with random training examples, while Datamodels learns linear regressors that estimate how the presence of each training example influences LLM outputs. Across five tasks and two LLMs, sampling from stable subsets selected by CondAcc and Datamodels improves average accuracy over sampling from the entire training set by 7.7{\%} and 6.3{\%}, respectively. Surprisingly, the stable subset examples are not especially diverse in content or low in perplexity, in contrast with other work suggesting that diversity and perplexity are important when prompting LLMs.",
}
| In-context learning (ICL) enables large language models (LLMs) to perform new tasks by prompting them with a sequence of training examples. However, it is known that ICL is very sensitive to the choice of training examples: randomly sampling examples from a training set leads to high variance in performance. In this paper, we show that carefully curating a subset of training data greatly stabilizes ICL performance without any other changes to the ICL algorithm (e.g., prompt retrieval or calibration). We introduce two methods to choose training subsets{---}both score training examples individually, then select the highest-scoring ones. CondAcc scores a training example by its average dev-set ICL accuracy when combined with random training examples, while Datamodels learns linear regressors that estimate how the presence of each training example influences LLM outputs. Across five tasks and two LLMs, sampling from stable subsets selected by CondAcc and Datamodels improves average accuracy over sampling from the entire training set by 7.7{\%} and 6.3{\%}, respectively. Surprisingly, the stable subset examples are not especially diverse in content or low in perplexity, in contrast with other work suggesting that diversity and perplexity are important when prompting LLMs. | [
"Chang, Ting-Yun",
"Jia, Robin"
] | Data Curation Alone Can Stabilize In-context Learning | acl-long.452 | Poster | 2212.10378 | [
"https://github.com/terarachang/dataicl"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.453.bib | https://aclanthology.org/2023.acl-long.453/ | @inproceedings{shi-etal-2023-midmed,
title = "{M}id{M}ed: Towards Mixed-Type Dialogues for Medical Consultation",
author = "Shi, Xiaoming and
Liu, Zeming and
Wang, Chuan and
Leng, Haitao and
Xue, Kui and
Zhang, Xiaofan and
Zhang, Shaoting",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.453",
doi = "10.18653/v1/2023.acl-long.453",
pages = "8145--8157",
abstract = "Most medical dialogue systems assume that patients have clear goals (seeking a diagnosis, medicine querying, etc.) before medical consultation. However, in many real situations, due to the lack of medical knowledge, it is usually difficult for patients to determine clear goals with all necessary slots. In this paper, we identify this challenge as how to construct medical consultation dialogue systems to help patients clarify their goals. For further study, we create a novel human-to-human mixed-type medical consultation dialogue corpus, termed MidMed, covering four dialogue types: task-oriented dialogue for diagnosis, recommendation, QA, and chitchat. MidMed covers four departments (otorhinolaryngology, ophthalmology, skin, and digestive system), with 8,309 dialogues. Furthermore, we build benchmarking baselines on MidMed and propose an instruction-guiding medical dialogue generation framework, termed InsMed, to handle mixed-type dialogues. Experimental results show the effectiveness of InsMed.",
}
| Most medical dialogue systems assume that patients have clear goals (seeking a diagnosis, medicine querying, etc.) before medical consultation. However, in many real situations, due to the lack of medical knowledge, it is usually difficult for patients to determine clear goals with all necessary slots. In this paper, we identify this challenge as how to construct medical consultation dialogue systems to help patients clarify their goals. For further study, we create a novel human-to-human mixed-type medical consultation dialogue corpus, termed MidMed, covering four dialogue types: task-oriented dialogue for diagnosis, recommendation, QA, and chitchat. MidMed covers four departments (otorhinolaryngology, ophthalmology, skin, and digestive system), with 8,309 dialogues. Furthermore, we build benchmarking baselines on MidMed and propose an instruction-guiding medical dialogue generation framework, termed InsMed, to handle mixed-type dialogues. Experimental results show the effectiveness of InsMed. | [
"Shi, Xiaoming",
"Liu, Zeming",
"Wang, Chuan",
"Leng, Haitao",
"Xue, Kui",
"Zhang, Xiaofan",
"Zhang, Shaoting"
] | MidMed: Towards Mixed-Type Dialogues for Medical Consultation | acl-long.453 | Poster | 2306.02923 | [
"https://github.com/xmshi-trio/MidMed"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.454.bib | https://aclanthology.org/2023.acl-long.454/ | @inproceedings{ye-etal-2023-fid,
title = "{F}i{D}-{ICL}: A Fusion-in-Decoder Approach for Efficient In-Context Learning",
author = "Ye, Qinyuan and
Beltagy, Iz and
Peters, Matthew and
Ren, Xiang and
Hajishirzi, Hannaneh",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.454",
doi = "10.18653/v1/2023.acl-long.454",
pages = "8158--8185",
abstract = "Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated demonstrations are often excessively long and induce additional computation. Inspired by fusion-in-decoder (FiD) models which efficiently aggregate more passages and thus outperforms concatenation-based models in open-domain QA, we hypothesize that similar techniques can be applied to improve the efficiency and end-task performance of ICL. To verify this, we present a comprehensive study on applying three fusion methods{---}concatenation-based (early fusion), FiD (intermediate), and ensemble-based (late){---}to ICL. We adopt a meta-learning setup where a model is first trained to perform ICL on a mixture of tasks using one selected fusion method, then evaluated on held-out tasks for ICL. Results on 11 held-out tasks show that FiD-ICL matches or outperforms the other two fusion methods. Additionally, we show that FiD-ICL (1) is 10x faster at inference time compared to concat-based and ensemble-based ICL, as we can easily pre-compute the representations of in-context examples and reuse them; (2) enables scaling up to meta-training 3B-sized models, which would fail for concat-based ICL.",
}
| Large pre-trained models are capable of few-shot in-context learning (ICL), i.e., performing a new task by prepending a few demonstrations before the test input. However, the concatenated demonstrations are often excessively long and induce additional computation. Inspired by fusion-in-decoder (FiD) models, which efficiently aggregate more passages and thus outperform concatenation-based models in open-domain QA, we hypothesize that similar techniques can be applied to improve the efficiency and end-task performance of ICL. To verify this, we present a comprehensive study on applying three fusion methods{---}concatenation-based (early fusion), FiD (intermediate), and ensemble-based (late){---}to ICL. We adopt a meta-learning setup where a model is first trained to perform ICL on a mixture of tasks using one selected fusion method, then evaluated on held-out tasks for ICL. Results on 11 held-out tasks show that FiD-ICL matches or outperforms the other two fusion methods. Additionally, we show that FiD-ICL (1) is 10x faster at inference time compared to concat-based and ensemble-based ICL, as we can easily pre-compute the representations of in-context examples and reuse them; (2) enables scaling up to meta-training 3B-sized models, which would fail for concat-based ICL. | [
"Ye, Qinyuan",
"Beltagy, Iz",
"Peters, Matthew",
"Ren, Xiang",
"Hajishirzi, Hannaneh"
] | FiD-ICL: A Fusion-in-Decoder Approach for Efficient In-Context Learning | acl-long.454 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.455.bib | https://aclanthology.org/2023.acl-long.455/ | @inproceedings{xu-etal-2023-s2ynre,
title = "{S}2yn{RE}: Two-stage Self-training with Synthetic data for Low-resource Relation Extraction",
author = "Xu, Benfeng and
Wang, Quan and
Lyu, Yajuan and
Dai, Dai and
Zhang, Yongdong and
Mao, Zhendong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.455",
doi = "10.18653/v1/2023.acl-long.455",
pages = "8186--8207",
abstract = "Current relation extraction methods suffer from the inadequacy of large-scale annotated data. While distant supervision alleviates the problem of data quantities, there still exists domain disparity in data qualities due to its reliance on domain-restrained knowledge bases. In this work, we propose S2ynRE, a framework of two-stage Self-training with Synthetic data for Relation Extraction.We first leverage the capability of large language models to adapt to the target domain and automatically synthesize large quantities of coherent, realistic training data. We then propose an accompanied two-stage self-training algorithm that iteratively and alternately learns from synthetic and golden data together. We conduct comprehensive experiments and detailed ablations on popular relation extraction datasets to demonstrate the effectiveness of the proposed framework.",
}
| Current relation extraction methods suffer from the inadequacy of large-scale annotated data. While distant supervision alleviates the problem of data quantities, there still exists domain disparity in data qualities due to its reliance on domain-restrained knowledge bases. In this work, we propose S2ynRE, a framework of two-stage Self-training with Synthetic data for Relation Extraction. We first leverage the capability of large language models to adapt to the target domain and automatically synthesize large quantities of coherent, realistic training data. We then propose an accompanying two-stage self-training algorithm that iteratively and alternately learns from synthetic and golden data together. We conduct comprehensive experiments and detailed ablations on popular relation extraction datasets to demonstrate the effectiveness of the proposed framework. | [
"Xu, Benfeng",
"Wang, Quan",
"Lyu, Yajuan",
"Dai, Dai",
"Zhang, Yongdong",
"Mao, Zhendong"
] | S2ynRE: Two-stage Self-training with Synthetic data for Low-resource Relation Extraction | acl-long.455 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.456.bib | https://aclanthology.org/2023.acl-long.456/ | @inproceedings{chen-etal-2023-dsee,
title = "{DSEE}: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models",
author = "Chen, Xuxi and
Chen, Tianlong and
Chen, Weizhu and
Awadallah, Ahmed Hassan and
Wang, Zhangyang and
Cheng, Yu",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.456",
doi = "10.18653/v1/2023.acl-long.456",
pages = "8208--8222",
abstract = "Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning - by enforcing sparsity-aware low-rank updates on top of the pre-trained weights; and (ii) resource-efficient inference - by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models viaa unified approach. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets, consistently demonstrate impressive parameter-/inference-efficiency, while maintaining competitive downstream performance. For instance, DSEE saves about 25{\%} inference FLOPs while achieving comparable performance, with 0.5{\%} trainable parameters on BERT. Codes are available at \url{https://github.com/VITA-Group/DSEE}.",
}
| Gigantic pre-trained models have become central to natural language processing (NLP), serving as the starting point for fine-tuning towards a range of downstream tasks. However, two pain points persist for this paradigm: (a) as the pre-trained models grow bigger (e.g., 175B parameters for GPT-3), even the fine-tuning process can be time-consuming and computationally expensive; (b) the fine-tuned model has the same size as its starting point by default, which is neither sensible due to its more specialized functionality, nor practical since many fine-tuned models will be deployed in resource-constrained environments. To address these pain points, we propose a framework for resource- and parameter-efficient fine-tuning by leveraging the sparsity prior in both weight updates and the final model weights. Our proposed framework, dubbed Dually Sparsity-Embedded Efficient Tuning (DSEE), aims to achieve two key objectives: (i) parameter efficient fine-tuning - by enforcing sparsity-aware low-rank updates on top of the pre-trained weights; and (ii) resource-efficient inference - by encouraging a sparse weight structure towards the final fine-tuned model. We leverage sparsity in these two directions by exploiting both unstructured and structured sparse patterns in pre-trained language models via a unified approach. Extensive experiments and in-depth investigations, with diverse network backbones (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets, consistently demonstrate impressive parameter-/inference-efficiency, while maintaining competitive downstream performance. For instance, DSEE saves about 25{\%} inference FLOPs while achieving comparable performance, with 0.5{\%} trainable parameters on BERT. Codes are available at \url{https://github.com/VITA-Group/DSEE}. | [
"Chen, Xuxi",
"Chen, Tianlong",
"Chen, Weizhu",
"Awadallah, Ahmed Hassan",
"Wang, Zhangyang",
"Cheng, Yu"
] | DSEE: Dually Sparsity-embedded Efficient Tuning of Pre-trained Language Models | acl-long.456 | Poster | 2111.00160 | [
"https://github.com/vita-group/dsee"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.457.bib | https://aclanthology.org/2023.acl-long.457/ | @inproceedings{zhou-etal-2023-case,
title = "{CASE}: Aligning Coarse-to-Fine Cognition and Affection for Empathetic Response Generation",
author = "Zhou, Jinfeng and
Zheng, Chujie and
Wang, Bo and
Zhang, Zheng and
Huang, Minlie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.457",
doi = "10.18653/v1/2023.acl-long.457",
pages = "8223--8237",
abstract = "Empathetic conversation is psychologically supposed to be the result of conscious alignment and interaction between the cognition and affection of empathy. However, existing empathetic dialogue models usually consider only the affective aspect or treat cognition and affection in isolation, which limits the capability of empathetic response generation. In this work, we propose the CASE model for empathetic dialogue generation. It first builds upon a commonsense cognition graph and an emotional concept graph and then aligns the user{'}s cognition and affection at both the coarse-grained and fine-grained levels. Through automatic and manual evaluation, we demonstrate that CASE outperforms state-of-the-art baselines of empathetic dialogues and can generate more empathetic and informative responses.",
}
| Empathetic conversation is psychologically supposed to be the result of conscious alignment and interaction between the cognition and affection of empathy. However, existing empathetic dialogue models usually consider only the affective aspect or treat cognition and affection in isolation, which limits the capability of empathetic response generation. In this work, we propose the CASE model for empathetic dialogue generation. It first builds upon a commonsense cognition graph and an emotional concept graph and then aligns the user{'}s cognition and affection at both the coarse-grained and fine-grained levels. Through automatic and manual evaluation, we demonstrate that CASE outperforms state-of-the-art baselines of empathetic dialogues and can generate more empathetic and informative responses. | [
"Zhou, Jinfeng",
"Zheng, Chujie",
"Wang, Bo",
"Zhang, Zheng",
"Huang, Minlie"
] | CASE: Aligning Coarse-to-Fine Cognition and Affection for Empathetic Response Generation | acl-long.457 | Poster | 2208.08845 | [
"https://github.com/jfzhouyoo/case"
] | https://huggingface.co/papers/2208.08845 | 1 | 0 | 0 | 5 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.458.bib | https://aclanthology.org/2023.acl-long.458/ | @inproceedings{herman-bernardim-andrade-etal-2023-comparative,
title = "Comparative evaluation of boundary-relaxed annotation for Entity Linking performance",
author = "Herman Bernardim Andrade, Gabriel and
Yada, Shuntaro and
Aramaki, Eiji",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.458",
doi = "10.18653/v1/2023.acl-long.458",
pages = "8238--8253",
abstract = "Entity Linking performance has a strong reliance on having a large quantity of high-quality annotated training data available. Yet, manual annotation of named entities, especially their boundaries, is ambiguous, error-prone, and raises many inconsistencies between annotators. While imprecise boundary annotation can degrade a model{'}s performance, there are applications where accurate extraction of entities{'} surface form is not necessary. For those cases, a lenient annotation guideline could relieve the annotators{'} workload and speed up the process. This paper presents a case study designed to verify the feasibility of such annotation process and evaluate the impact of boundary-relaxed annotation in an Entity Linking pipeline. We first generate a set of noisy versions of the widely used AIDA CoNLL-YAGO dataset by expanding the boundaries subsets of annotated entity mentions and then train three Entity Linking models on this data and evaluate the relative impact of imprecise annotation on entity recognition and disambiguation performances. We demonstrate that the magnitude of effects caused by noise in the Named Entity Recognition phase is dependent on both model complexity and noise ratio, while Entity Disambiguation components are susceptible to entity boundary imprecision due to strong vocabulary dependency.",
}
| Entity Linking performance has a strong reliance on having a large quantity of high-quality annotated training data available. Yet, manual annotation of named entities, especially their boundaries, is ambiguous, error-prone, and raises many inconsistencies between annotators. While imprecise boundary annotation can degrade a model{'}s performance, there are applications where accurate extraction of entities{'} surface form is not necessary. For those cases, a lenient annotation guideline could relieve the annotators{'} workload and speed up the process. This paper presents a case study designed to verify the feasibility of such an annotation process and evaluate the impact of boundary-relaxed annotation in an Entity Linking pipeline. We first generate a set of noisy versions of the widely used AIDA CoNLL-YAGO dataset by expanding the boundaries subsets of annotated entity mentions and then train three Entity Linking models on this data and evaluate the relative impact of imprecise annotation on entity recognition and disambiguation performances. We demonstrate that the magnitude of effects caused by noise in the Named Entity Recognition phase is dependent on both model complexity and noise ratio, while Entity Disambiguation components are susceptible to entity boundary imprecision due to strong vocabulary dependency. | [
"Herman Bernardim Andrade, Gabriel",
"Yada, Shuntaro",
"Aramaki, Eiji"
] | Comparative evaluation of boundary-relaxed annotation for Entity Linking performance | acl-long.458 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.459.bib | https://aclanthology.org/2023.acl-long.459/ | @inproceedings{liu-ritter-2023-conll,
title = "Do {C}o{NLL}-2003 Named Entity Taggers Still Work Well in 2023?",
author = "Liu, Shuheng and
Ritter, Alan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.459",
doi = "10.18653/v1/2023.acl-long.459",
pages = "8254--8271",
abstract = "The CoNLL-2003 English named entity recognition (NER) dataset has been widely used to train and evaluate NER models for almost 20 years. However, it is unclear how well models that are trained on this 20-year-old data and developed over a period of decades using the same test set will perform when applied on modern data. In this paper, we evaluate the generalization of over 20 different models trained on CoNLL-2003, and show that NER models have very different generalization. Surprisingly, we find no evidence of performance degradation in pre-trained Transformers, such as RoBERTa and T5, even when fine-tuned using decades-old data. We investigate why some models generalize well to new data while others do not, and attempt to disentangle the effects of temporal drift and overfitting due to test reuse. Our analysis suggests that most deterioration is due to temporal mismatch between the pre-training corpora and the downstream test sets. We found that four factors are important for good generalization: model architecture, number of parameters, time period of the pre-training corpus, in addition to the amount of fine-tuning data. We suggest current evaluation methods have, in some sense, underestimated progress on NER over the past 20 years, as NER models have not only improved on the original CoNLL-2003 test set, but improved even more on modern data. Our datasets can be found at \url{https://github.com/ShuhengL/acl2023_conllpp}.",
}
| The CoNLL-2003 English named entity recognition (NER) dataset has been widely used to train and evaluate NER models for almost 20 years. However, it is unclear how well models that are trained on this 20-year-old data and developed over a period of decades using the same test set will perform when applied on modern data. In this paper, we evaluate the generalization of over 20 different models trained on CoNLL-2003, and show that NER models have very different generalization. Surprisingly, we find no evidence of performance degradation in pre-trained Transformers, such as RoBERTa and T5, even when fine-tuned using decades-old data. We investigate why some models generalize well to new data while others do not, and attempt to disentangle the effects of temporal drift and overfitting due to test reuse. Our analysis suggests that most deterioration is due to temporal mismatch between the pre-training corpora and the downstream test sets. We found that four factors are important for good generalization: model architecture, number of parameters, time period of the pre-training corpus, in addition to the amount of fine-tuning data. We suggest current evaluation methods have, in some sense, underestimated progress on NER over the past 20 years, as NER models have not only improved on the original CoNLL-2003 test set, but improved even more on modern data. Our datasets can be found at \url{https://github.com/ShuhengL/acl2023_conllpp}. | [
"Liu, Shuheng",
"Ritter, Alan"
] | Do CoNLL-2003 Named Entity Taggers Still Work Well in 2023? | acl-long.459 | Poster | 2212.09747 | [
"https://github.com/shuhengl/acl2023_conllpp"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.460.bib | https://aclanthology.org/2023.acl-long.460/ | @inproceedings{si-etal-2023-readin,
title = "{READIN}: A {C}hinese Multi-Task Benchmark with Realistic and Diverse Input Noises",
author = "Si, Chenglei and
Zhang, Zhengyan and
Chen, Yingfa and
Wang, Xiaozhi and
Liu, Zhiyuan and
Sun, Maosong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.460",
doi = "10.18653/v1/2023.acl-long.460",
pages = "8272--8285",
abstract = "For many real-world applications, the user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little study has been done to construct such benchmarks for Chinese, where various language-specific input noises happen in the real world. In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialectical groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, we find that these models often suffer significant performance drops on READIN even with robustness methods like data augmentation. As the first large-scale attempt in creating a benchmark with noises geared towards user-generated inputs, we believe that READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from \url{https://github.com/thunlp/READIN}.",
}
| For many real-world applications, the user-generated inputs usually contain various noises due to speech recognition errors caused by linguistic variations or typographical errors (typos). Thus, it is crucial to test model performance on data with realistic input noises to ensure robustness and fairness. However, little study has been done to construct such benchmarks for Chinese, where various language-specific input noises happen in the real world. In order to fill this important gap, we construct READIN: a Chinese multi-task benchmark with REalistic And Diverse Input Noises. READIN contains four diverse tasks and requests annotators to re-enter the original test data with two commonly used Chinese input methods: Pinyin input and speech input. We designed our annotation pipeline to maximize diversity, for example by instructing the annotators to use diverse input method editors (IMEs) for keyboard noises and recruiting speakers from diverse dialectical groups for speech noises. We experiment with a series of strong pretrained language models as well as robust training methods, and we find that these models often suffer significant performance drops on READIN even with robustness methods like data augmentation. As the first large-scale attempt in creating a benchmark with noises geared towards user-generated inputs, we believe that READIN serves as an important complement to existing Chinese NLP benchmarks. The source code and dataset can be obtained from \url{https://github.com/thunlp/READIN}. | [
"Si, Chenglei",
"Zhang, Zhengyan",
"Chen, Yingfa",
"Wang, Xiaozhi",
"Liu, Zhiyuan",
"Sun, Maosong"
] | READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises | acl-long.460 | Poster | 2302.07324 | [
"https://github.com/thunlp/readin"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.461.bib | https://aclanthology.org/2023.acl-long.461/ | @inproceedings{dufraisse-etal-2023-mad,
title = "{MAD}-{TSC}: A Multilingual Aligned News Dataset for Target-dependent Sentiment Classification",
author = "Dufraisse, Evan and
Popescu, Adrian and
Tourille, Julien and
Brun, Armelle and
Deshayes, Jerome",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.461",
doi = "10.18653/v1/2023.acl-long.461",
pages = "8286--8305",
abstract = "Target-dependent sentiment classification (TSC) enables a fine-grained automatic analysis of sentiments expressed in texts. Sentiment expression varies depending on the domain, and it is necessary to create domain-specific datasets. While socially important, TSC in the news domain remains relatively understudied. We introduce MAD-TSC, a new dataset which differs substantially from existing resources. First, it includes aligned examples in eight languages to facilitate a comparison of performance for individual languages, and a direct comparison of human and machine translation. Second, the dataset is sampled from a diversified parallel news corpus, and is diversified in terms of news sources and geographic spread of entities. Finally, MAD-TSC is more challenging than existing datasets because its examples are more complex. We exemplify the use of MAD-TSC with comprehensive monolingual and multilingual experiments. The latter show that machine translations can successfully replace manual ones, and that performance for all included languages can match that of English by automatically translating test examples.",
}
| Target-dependent sentiment classification (TSC) enables a fine-grained automatic analysis of sentiments expressed in texts. Sentiment expression varies depending on the domain, and it is necessary to create domain-specific datasets. While socially important, TSC in the news domain remains relatively understudied. We introduce MAD-TSC, a new dataset which differs substantially from existing resources. First, it includes aligned examples in eight languages to facilitate a comparison of performance for individual languages, and a direct comparison of human and machine translation. Second, the dataset is sampled from a diversified parallel news corpus, and is diversified in terms of news sources and geographic spread of entities. Finally, MAD-TSC is more challenging than existing datasets because its examples are more complex. We exemplify the use of MAD-TSC with comprehensive monolingual and multilingual experiments. The latter show that machine translations can successfully replace manual ones, and that performance for all included languages can match that of English by automatically translating test examples. | [
"Dufraisse, Evan",
"Popescu, Adrian",
"Tourille, Julien",
"Brun, Armelle",
"Deshayes, Jerome"
] | MAD-TSC: A Multilingual Aligned News Dataset for Target-dependent Sentiment Classification | acl-long.461 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.462.bib | https://aclanthology.org/2023.acl-long.462/ | @inproceedings{yang-etal-2023-new,
title = "A New Dataset and Empirical Study for Sentence Simplification in {C}hinese",
author = "Yang, Shiping and
Sun, Renliang and
Wan, Xiaojun",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.462",
doi = "10.18653/v1/2023.acl-long.462",
pages = "8306--8321",
abstract = "Sentence Simplification is a valuable technique that can benefit language learners and children a lot. However, current research focuses more on English sentence simplification. The development of Chinese sentence simplification is relatively slow due to the lack of data. To alleviate this limitation, this paper introduces CSS, a new dataset for assessing sentence simplification in Chinese. We collect manual simplifications from human annotators and perform data analysis to show the difference between English and Chinese sentence simplifications. Furthermore, we test several unsupervised and zero/few-shot learning methods on CSS and analyze the automatic evaluation and human evaluation results. In the end, we explore whether Large Language Models can serve as high-quality Chinese sentence simplification systems by evaluating them on CSS.",
}
| Sentence Simplification is a valuable technique that can benefit language learners and children a lot. However, current research focuses more on English sentence simplification. The development of Chinese sentence simplification is relatively slow due to the lack of data. To alleviate this limitation, this paper introduces CSS, a new dataset for assessing sentence simplification in Chinese. We collect manual simplifications from human annotators and perform data analysis to show the difference between English and Chinese sentence simplifications. Furthermore, we test several unsupervised and zero/few-shot learning methods on CSS and analyze the automatic evaluation and human evaluation results. In the end, we explore whether Large Language Models can serve as high-quality Chinese sentence simplification systems by evaluating them on CSS. | [
"Yang, Shiping",
"Sun, Renliang",
"Wan, Xiaojun"
] | A New Dataset and Empirical Study for Sentence Simplification in Chinese | acl-long.462 | Poster | 2306.04188 | [
"https://github.com/maybenotime/css"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.463.bib | https://aclanthology.org/2023.acl-long.463/ | @inproceedings{goyal-etal-2023-factual,
title = "Factual or Contextual? Disentangling Error Types in Entity Description Generation",
author = "Goyal, Navita and
Nenkova, Ani and
Daum{\'e} III, Hal",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.463",
doi = "10.18653/v1/2023.acl-long.463",
pages = "8322--8340",
abstract = "In the task of entity description generation, given a context and a specified entity, a model must describe that entity correctly and in a contextually-relevant way. In this task, as well as broader language generation tasks, the generation of a nonfactual description (factual error) versus an incongruous description (contextual error) is fundamentally different, yet often conflated. We develop an evaluation paradigm that enables us to disentangle these two types of errors in naturally occurring textual contexts. We find that factuality and congruity are often at odds, and that models specifically struggle with accurate descriptions of entities that are less familiar to people. This shortcoming of language models raises concerns around the trustworthiness of such models, since factual errors on less well-known entities are exactly those that a human reader will not recognize.",
}
| In the task of entity description generation, given a context and a specified entity, a model must describe that entity correctly and in a contextually-relevant way. In this task, as well as broader language generation tasks, the generation of a nonfactual description (factual error) versus an incongruous description (contextual error) is fundamentally different, yet often conflated. We develop an evaluation paradigm that enables us to disentangle these two types of errors in naturally occurring textual contexts. We find that factuality and congruity are often at odds, and that models specifically struggle with accurate descriptions of entities that are less familiar to people. This shortcoming of language models raises concerns around the trustworthiness of such models, since factual errors on less well-known entities are exactly those that a human reader will not recognize. | [
"Goyal, Navita",
"Nenkova, Ani",
"Daum{\\'e} III, Hal"
] | Factual or Contextual? Disentangling Error Types in Entity Description Generation | acl-long.463 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.464.bib | https://aclanthology.org/2023.acl-long.464/ | @inproceedings{chen-etal-2023-weakly,
title = "Weakly Supervised Vision-and-Language Pre-training with Relative Representations",
author = "Chen, Chi and
Li, Peng and
Sun, Maosong and
Liu, Yang",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.464",
doi = "10.18653/v1/2023.acl-long.464",
pages = "8341--8355",
abstract = "Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This affects the data quality and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on the relative representations, namely RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting.",
}
| Weakly supervised vision-and-language pre-training (WVLP), which learns cross-modal representations with limited cross-modal supervision, has been shown to effectively reduce the data cost of pre-training while maintaining decent performance on downstream tasks. However, current WVLP methods use only local descriptions of images, i.e., object tags, as cross-modal anchors to construct weakly-aligned image-text pairs for pre-training. This affects the data quality and thus the effectiveness of pre-training. In this paper, we propose to directly take a small number of aligned image-text pairs as anchors, and represent each unaligned image and text by its similarities to these anchors, i.e., relative representations. We build a WVLP framework based on the relative representations, namely RELIT, which collects high-quality weakly-aligned image-text pairs from large-scale image-only and text-only data for pre-training through relative representation-based retrieval and generation. Experiments on four downstream tasks show that RELIT achieves new state-of-the-art results under the weakly supervised setting. | [
"Chen, Chi",
"Li, Peng",
"Sun, Maosong",
"Liu, Yang"
] | Weakly Supervised Vision-and-Language Pre-training with Relative Representations | acl-long.464 | Poster | 2305.15483 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.465.bib | https://aclanthology.org/2023.acl-long.465/ | @inproceedings{he-etal-2023-hermes,
title = "{H}erm{E}s: Interactive Spreadsheet Formula Prediction via Hierarchical Formulet Expansion",
author = "He, Wanrong and
Dong, Haoyu and
Gao, Yihuai and
Fan, Zhichao and
Guo, Xingzhuo and
Hou, Zhitao and
Lv, Xiao and
Jia, Ran and
Han, Shi and
Zhang, Dongmei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.465",
doi = "10.18653/v1/2023.acl-long.465",
pages = "8356--8372",
abstract = "We propose HermEs, the first approach for spreadsheet formula prediction via HiEraRchical forMulet ExpanSion, where hierarchical expansion means generating formulas following the underlying parse tree structure, and Formulet refers to commonly-used multi-level patterns mined from real formula parse trees. HermEs improves the formula prediction accuracy by (1) guaranteeing correct grammar by hierarchical generation rather than left-to-right generation and (2) significantly streamlining the token-level decoding with high-level Formulet. Notably, instead of generating formulas in a pre-defined fixed order, we propose a novel sampling strategy to systematically exploit a variety of hierarchical and multi-level expansion orders and provided solid mathematical proof, with the aim of meeting diverse human needs of the formula writing order in real applications. We further develop an interactive formula completion interface based on HermEs, which shows a new user experience in \url{https://github.com/formulet/HERMES}.",
}
| We propose HermEs, the first approach for spreadsheet formula prediction via HiEraRchical forMulet ExpanSion, where hierarchical expansion means generating formulas following the underlying parse tree structure, and Formulet refers to commonly-used multi-level patterns mined from real formula parse trees. HermEs improves the formula prediction accuracy by (1) guaranteeing correct grammar by hierarchical generation rather than left-to-right generation and (2) significantly streamlining the token-level decoding with high-level Formulet. Notably, instead of generating formulas in a pre-defined fixed order, we propose a novel sampling strategy to systematically exploit a variety of hierarchical and multi-level expansion orders and provided solid mathematical proof, with the aim of meeting diverse human needs of the formula writing order in real applications. We further develop an interactive formula completion interface based on HermEs, which shows a new user experience in \url{https://github.com/formulet/HERMES}. | [
"He, Wanrong",
"Dong, Haoyu",
"Gao, Yihuai",
"Fan, Zhichao",
"Guo, Xingzhuo",
"Hou, Zhitao",
"Lv, Xiao",
"Jia, Ran",
"Han, Shi",
"Zhang, Dongmei"
] | HermEs: Interactive Spreadsheet Formula Prediction via Hierarchical Formulet Expansion | acl-long.465 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.466.bib | https://aclanthology.org/2023.acl-long.466/ | @inproceedings{saha-srihari-2023-argu,
title = "{A}rg{U}: A Controllable Factual Argument Generator",
author = "Saha, Sougata and
Srihari, Rohini",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.466",
doi = "10.18653/v1/2023.acl-long.466",
pages = "8373--8388",
abstract = "Effective argumentation is essential towards a purposeful conversation with a satisfactory outcome. For example, persuading someone to reconsider smoking might involve empathetic, well founded arguments based on facts and expert opinions about its ill-effects and the consequences on one{'}s family. However, the automatic generation of high-quality factual arguments can be challenging. Addressing existing controllability issues can make the recent advances in computational models for argument generation a potential solution. In this paper, we introduce ArgU: a neural argument generator capable of producing factual arguments from input facts and real-world concepts that can be explicitly controlled for stance and argument structure using Walton{'}s argument scheme-based control codes. Unfortunately, computational argument generation is a relatively new field and lacks datasets conducive to training. Hence, we have compiled and released an annotated corpora of 69,428 arguments spanning six topics and six argument schemes, making it the largest publicly available corpus for identifying argument schemes; the paper details our annotation and dataset creation framework. We further experiment with an argument generation strategy that establishes an inference strategy by generating an {``}argument template{''} before actual argument generation. Our results demonstrate that it is possible to automatically generate diverse arguments exhibiting different inference patterns for the same set of facts by using control codes based on argument schemes and stance.",
}
| Effective argumentation is essential towards a purposeful conversation with a satisfactory outcome. For example, persuading someone to reconsider smoking might involve empathetic, well-founded arguments based on facts and expert opinions about its ill-effects and the consequences on one{'}s family. However, the automatic generation of high-quality factual arguments can be challenging. Addressing existing controllability issues can make the recent advances in computational models for argument generation a potential solution. In this paper, we introduce ArgU: a neural argument generator capable of producing factual arguments from input facts and real-world concepts that can be explicitly controlled for stance and argument structure using Walton{'}s argument scheme-based control codes. Unfortunately, computational argument generation is a relatively new field and lacks datasets conducive to training. Hence, we have compiled and released an annotated corpus of 69,428 arguments spanning six topics and six argument schemes, making it the largest publicly available corpus for identifying argument schemes; the paper details our annotation and dataset creation framework. We further experiment with an argument generation strategy that establishes an inference strategy by generating an {``}argument template{''} before actual argument generation. Our results demonstrate that it is possible to automatically generate diverse arguments exhibiting different inference patterns for the same set of facts by using control codes based on argument schemes and stance. | [
"Saha, Sougata",
"Srihari, Rohini"
] | ArgU: A Controllable Factual Argument Generator | acl-long.466 | Poster | 2305.05334 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
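A minimal sketch of the control-code idea behind ArgU: stance and argument-scheme tokens are prepended to the input facts before they reach any seq2seq generator. The token format and function name below are illustrative assumptions, not the paper's exact vocabulary.

```python
# Build a control-coded input string for a controllable argument generator.
# The "<scheme=...>" / "<stance=...>" token format is a hypothetical example.
def build_input(facts, scheme, stance):
    codes = f"<scheme={scheme}> <stance={stance}>"
    return f"{codes} generate argument from: {'; '.join(facts)}"

facts = ["smoking causes lung cancer", "secondhand smoke harms children"]
print(build_input(facts, scheme="cause_to_effect", stance="con"))
```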
https://aclanthology.org/2023.acl-long.467.bib | https://aclanthology.org/2023.acl-long.467/ | @inproceedings{gabburo-etal-2023-learning,
title = "Learning Answer Generation using Supervision from Automatic Question Answering Evaluators",
author = "Gabburo, Matteo and
Garg, Siddhant and
Koncel-Kedziorski, Rik and
Moschitti, Alessandro",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.467",
doi = "10.18653/v1/2023.acl-long.467",
pages = "8389--8403",
abstract = "Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models (a la retrieval-augmented generation style). In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA (either statically, before training, or (ii) dynamically, at every training epoch); and (iii) using the GAVA score for weighting the generator loss during the learning of the GenQA model. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art.",
}
| Recent studies show that sentence-level extractive QA, i.e., based on Answer Sentence Selection (AS2), is outperformed by Generation-based QA (GenQA) models, which generate answers using the top-k answer sentences ranked by AS2 models (a la retrieval-augmented generation style). In this paper, we propose a novel training paradigm for GenQA using supervision from automatic QA evaluation models (GAVA). Specifically, we propose three strategies to transfer knowledge from these QA evaluation models to a GenQA model: (i) augmenting training data with answers generated by the GenQA model and labelled by GAVA (either statically, before training, or (ii) dynamically, at every training epoch); and (iii) using the GAVA score for weighting the generator loss during the learning of the GenQA model. We evaluate our proposed methods on two academic and one industrial dataset, obtaining a significant improvement in answering accuracy over the previous state of the art. | [
"Gabburo, Matteo",
"Garg, Siddhant",
"Koncel-Kedziorski, Rik",
"Moschitti, Aless",
"ro"
] | Learning Answer Generation using Supervision from Automatic Question Answering Evaluators | acl-long.467 | Poster | 2305.15344 | [
""
] | https://huggingface.co/papers/2305.15344 | 2 | 0 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2023.acl-long.468.bib | https://aclanthology.org/2023.acl-long.468/ | @inproceedings{liu-etal-2023-recap,
title = "{RECAP}: Retrieval-Enhanced Context-Aware Prefix Encoder for Personalized Dialogue Response Generation",
author = "Liu, Shuai and
Cho, Hyundong and
Freedman, Marjorie and
Ma, Xuezhe and
May, Jonathan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.468",
doi = "10.18653/v1/2023.acl-long.468",
pages = "8404--8419",
abstract = "Endowing chatbots with a consistent persona is essential to an engaging conversation, yet it remains an unresolved challenge. In this work, we propose a new retrieval-enhanced approach for personalized response generation. Specifically, we design a hierarchical transformer retriever trained on dialogue domain data to perform personalized retrieval and a context-aware prefix encoder that fuses the retrieved information to the decoder more effectively. Extensive experiments on a real-world dataset demonstrate the effectiveness of our model at generating more fluent and personalized responses. We quantitatively evaluate our model{'}s performance under a suite of human and automatic metrics and find it to be superior compared to state-of-the-art baselines on English Reddit conversations.",
}
| Endowing chatbots with a consistent persona is essential to an engaging conversation, yet it remains an unresolved challenge. In this work, we propose a new retrieval-enhanced approach for personalized response generation. Specifically, we design a hierarchical transformer retriever trained on dialogue domain data to perform personalized retrieval and a context-aware prefix encoder that fuses the retrieved information to the decoder more effectively. Extensive experiments on a real-world dataset demonstrate the effectiveness of our model at generating more fluent and personalized responses. We quantitatively evaluate our model{'}s performance under a suite of human and automatic metrics and find it to be superior compared to state-of-the-art baselines on English Reddit conversations. | [
"Liu, Shuai",
"Cho, Hyundong",
"Freedman, Marjorie",
"Ma, Xuezhe",
"May, Jonathan"
] | RECAP: Retrieval-Enhanced Context-Aware Prefix Encoder for Personalized Dialogue Response Generation | acl-long.468 | Poster | 2306.07206 | [
"https://github.com/isi-nlp/recap"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
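A minimal sketch of the prefix-fusion idea in RECAP: retrieved persona/dialogue history is encoded and projected into a small number of prefix states that a decoder can attend to alongside the current context. The module below is a toy stand-in under stated assumptions, not the paper's hierarchical retriever or exact prefix encoder.

```python
import torch
import torch.nn as nn

class PrefixFuser(nn.Module):
    def __init__(self, d_model=256, n_prefix=8):
        super().__init__()
        self.n_prefix = n_prefix
        self.proj = nn.Linear(d_model, n_prefix * d_model)

    def forward(self, retrieved_states):
        # retrieved_states: (B, n_retrieved, d) encodings of retrieved turns.
        pooled = retrieved_states.mean(dim=1)          # (B, d)
        prefix = self.proj(pooled)                     # (B, n_prefix * d)
        return prefix.view(-1, self.n_prefix, retrieved_states.size(-1))

fuser = PrefixFuser()
retrieved = torch.randn(2, 5, 256)   # five retrieved history turns
prefix = fuser(retrieved)            # (2, 8, 256), prepended to the decoder input
print(prefix.shape)
```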
https://aclanthology.org/2023.acl-long.469.bib | https://aclanthology.org/2023.acl-long.469/ | @inproceedings{yang-tu-2023-dont,
title = "Don{'}t Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span Selection",
author = "Yang, Songlin and
Tu, Kewei",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.469",
doi = "10.18653/v1/2023.acl-long.469",
pages = "8420--8433",
abstract = "We present a simple and unified approach for both continuous and discontinuous constituency parsing via autoregressive span selection. Constituency parsing aims to produce a set of non-crossing spans so that they can form a constituency parse tree. We sort gold spans using a predefined order and leverage a pointer network to autoregressively select spans by that order. To deal with discontinuous spans, we consecutively select their subspans from left to right, label all but last subspans with special discontinuous labels and the last subspan as the whole discontinuous spans{'} labels. We use simple heuristic to output valid trees so that our approach is able to predict all possible continuous and discontinuous constituency trees without sacrificing data coverage and without the need to use expensive chart-based parsing algorithms. Experiments on multiple continuous and discontinuous benchmarks show that our model achieves state-of-the-art or competitive performance.",
}
| We present a simple and unified approach for both continuous and discontinuous constituency parsing via autoregressive span selection. Constituency parsing aims to produce a set of non-crossing spans so that they can form a constituency parse tree. We sort gold spans using a predefined order and leverage a pointer network to autoregressively select spans in that order. To deal with discontinuous spans, we consecutively select their subspans from left to right, labeling all but the last subspan with special discontinuous labels and the last subspan with the whole discontinuous span{'}s label. We use a simple heuristic to output valid trees, so our approach is able to predict all possible continuous and discontinuous constituency trees without sacrificing data coverage and without the need for expensive chart-based parsing algorithms. Experiments on multiple continuous and discontinuous benchmarks show that our model achieves state-of-the-art or competitive performance. | [
"Yang, Songlin",
"Tu, Kewei"
] | Don't Parse, Choose Spans! Continuous and Discontinuous Constituency Parsing via Autoregressive Span Selection | acl-long.469 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
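A minimal sketch of the span-selection view of parsing described above: gold constituent spans are sorted by a predefined order (here, by left boundary, longer spans first) and a pointer-style scorer picks one span per step. The sort key and dot-product scorer are illustrative assumptions, not the paper's exact model.

```python
import torch

def sort_spans(spans):
    # spans: list of (left, right, label); a fixed ordering for training.
    return sorted(spans, key=lambda s: (s[0], -(s[1] - s[0])))

def pointer_step(decoder_state, span_reprs):
    # decoder_state: (d,); span_reprs: (n_candidates, d).
    scores = span_reprs @ decoder_state   # dot-product pointer scores
    return int(torch.argmax(scores))

gold = [(0, 5, "S"), (0, 2, "NP"), (2, 5, "VP")]
print(sort_spans(gold))                   # the selection order used for training

span_reprs = torch.randn(3, 16)
state = torch.randn(16)
print(pointer_step(state, span_reprs))    # index of the span selected this step
```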
https://aclanthology.org/2023.acl-long.470.bib | https://aclanthology.org/2023.acl-long.470/ | @inproceedings{crouse-etal-2023-laziness,
title = "Laziness Is a Virtue When It Comes to Compositionality in Neural Semantic Parsing",
author = "Crouse, Maxwell and
Kapanipathi, Pavan and
Chaudhury, Subhajit and
Naseem, Tahira and
Fernandez Astudillo, Ramon and
Fokoue, Achille and
Klinger, Tim",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.470",
doi = "10.18653/v1/2023.acl-long.470",
pages = "8434--8448",
abstract = "Nearly all general-purpose neural semantic parsers generate logical forms in a strictly top-down autoregressive fashion. Though such systems have achieved impressive results across a variety of datasets and domains, recent works have called into question whether they are ultimately limited in their ability to compositionally generalize. In this work, we approach semantic parsing from, quite literally, the opposite direction; that is, we introduce a neural semantic parsing generation method that constructs logical forms from the bottom up, beginning from the logical form{'}s leaves. The system we introduce is lazy in that it incrementally builds up a set of potential semantic parses, but only expands and processes the most promising candidate parses at each generation step. Such a parsimonious expansion scheme allows the system to maintain an arbitrarily large set of parse hypotheses that are never realized and thus incur minimal computational overhead. We evaluate our approach on compositional generalization; specifically, on the challenging CFQ dataset and two other Text-to-SQL datasets where we show that our novel, bottom-up semantic parsing technique outperforms general-purpose semantic parsers while also being competitive with semantic parsers that have been tailored to each task.",
}
| Nearly all general-purpose neural semantic parsers generate logical forms in a strictly top-down autoregressive fashion. Though such systems have achieved impressive results across a variety of datasets and domains, recent works have called into question whether they are ultimately limited in their ability to compositionally generalize. In this work, we approach semantic parsing from, quite literally, the opposite direction; that is, we introduce a neural semantic parsing generation method that constructs logical forms from the bottom up, beginning from the logical form{'}s leaves. The system we introduce is lazy in that it incrementally builds up a set of potential semantic parses, but only expands and processes the most promising candidate parses at each generation step. Such a parsimonious expansion scheme allows the system to maintain an arbitrarily large set of parse hypotheses that are never realized and thus incur minimal computational overhead. We evaluate our approach on compositional generalization; specifically, on the challenging CFQ dataset and two other Text-to-SQL datasets where we show that our novel, bottom-up semantic parsing technique outperforms general-purpose semantic parsers while also being competitive with semantic parsers that have been tailored to each task. | [
"Crouse, Maxwell",
"Kapanipathi, Pavan",
"Chaudhury, Subhajit",
"Naseem, Tahira",
"Fern",
"ez Astudillo, Ramon",
"Fokoue, Achille",
"Klinger, Tim"
] | Laziness Is a Virtue When It Comes to Compositionality in Neural Semantic Parsing | acl-long.470 | Poster | 2305.04346 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
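A minimal sketch of the "lazy" bottom-up idea: keep a priority queue of partial logical forms scored by the model, and expand only the single most promising candidate at each step, leaving the rest as unrealized hypotheses. The scores and the expansion function here are toy assumptions.

```python
import heapq
import itertools

def lazy_bottom_up(leaves, expand, score, max_steps=100):
    counter = itertools.count()  # tiebreaker so dicts are never compared
    queue = [(-score(leaf), next(counter), leaf) for leaf in leaves]
    heapq.heapify(queue)
    for _ in range(max_steps):
        if not queue:
            return None
        _, _, best = heapq.heappop(queue)   # only the best candidate is expanded
        if best.get("complete"):
            return best
        for child in expand(best):
            heapq.heappush(queue, (-score(child), next(counter), child))
    return None

# Toy expansion: append a marker until "depth" reaches 2.
expand = lambda p: [{"form": p["form"] + "+", "depth": p["depth"] + 1,
                     "complete": p["depth"] + 1 >= 2}]
score = lambda p: -len(p["form"])           # prefer shorter partial forms
leaves = [{"form": "city", "depth": 0}, {"form": "river", "depth": 0}]
print(lazy_bottom_up(leaves, expand, score))
```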
https://aclanthology.org/2023.acl-long.471.bib | https://aclanthology.org/2023.acl-long.471/ | @inproceedings{wu-etal-2023-ad,
title = "{AD}-{KD}: Attribution-Driven Knowledge Distillation for Language Model Compression",
author = "Wu, Siyue and
Chen, Hongzhan and
Quan, Xiaojun and
Wang, Qifan and
Wang, Rui",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.471",
doi = "10.18653/v1/2023.acl-long.471",
pages = "8449--8465",
abstract = "Knowledge distillation has attracted a great deal of interest recently to compress large language models. However, existing knowledge distillation methods suffer from two limitations. First, the student model simply imitates the teacher{'}s behavior while ignoring the reasoning behind it. Second, these methods usually focus on the transfer of sophisticated model-specific knowledge but overlook data-specific knowledge. In this paper, we present a novel attribution-driven knowledge distillation approach, which explores the token-level rationale behind the teacher model based on Integrated Gradients (IG) and transfers attribution knowledge to the student model. To enhance the knowledge transfer of model reasoning and generalization, we further explore multi-view attribution distillation on all potential decisions of the teacher. Comprehensive experiments are conducted with BERT on the GLUE benchmark. The experimental results demonstrate the superior performance of our approach to several state-of-the-art methods.",
}
| Knowledge distillation has recently attracted a great deal of interest as a way to compress large language models. However, existing knowledge distillation methods suffer from two limitations. First, the student model simply imitates the teacher{'}s behavior while ignoring the reasoning behind it. Second, these methods usually focus on the transfer of sophisticated model-specific knowledge but overlook data-specific knowledge. In this paper, we present a novel attribution-driven knowledge distillation approach, which explores the token-level rationale behind the teacher model based on Integrated Gradients (IG) and transfers attribution knowledge to the student model. To enhance the knowledge transfer of model reasoning and generalization, we further explore multi-view attribution distillation on all potential decisions of the teacher. Comprehensive experiments are conducted with BERT on the GLUE benchmark. The experimental results demonstrate the superior performance of our approach over several state-of-the-art methods. | [
"Wu, Siyue",
"Chen, Hongzhan",
"Quan, Xiaojun",
"Wang, Qifan",
"Wang, Rui"
] | AD-KD: Attribution-Driven Knowledge Distillation for Language Model Compression | acl-long.471 | Poster | 2305.10010 | [
"https://github.com/brucewsy/ad-kd"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
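A minimal sketch of attribution-driven distillation: given token-level attribution vectors for teacher and student (e.g., from Integrated Gradients), normalize them and penalize their mismatch alongside a standard prediction-matching KD term. The tensors here are random placeholders, and the exact loss form in the paper may differ.

```python
import torch
import torch.nn.functional as F

def attribution_kd_loss(teacher_attr, student_attr, teacher_logits,
                        student_logits, alpha=0.5, tau=2.0):
    # Normalize attributions over the sequence dimension before matching.
    t = F.normalize(teacher_attr, dim=-1)
    s = F.normalize(student_attr, dim=-1)
    attr_loss = F.mse_loss(s, t)
    # Standard KD term on temperature-softened output distributions.
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=-1),
                  F.softmax(teacher_logits / tau, dim=-1),
                  reduction="batchmean") * tau ** 2
    return alpha * attr_loss + (1 - alpha) * kd

t_attr, s_attr = torch.randn(4, 32), torch.randn(4, 32)
t_log, s_log = torch.randn(4, 3), torch.randn(4, 3)
print(attribution_kd_loss(t_attr, s_attr, t_log, s_log))
```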
https://aclanthology.org/2023.acl-long.472.bib | https://aclanthology.org/2023.acl-long.472/ | @inproceedings{kim-etal-2023-qa,
title = "({QA})$^2$: Question Answering with Questionable Assumptions",
author = "Kim, Najoung and
Htut, Phu Mon and
Bowman, Samuel R. and
Petty, Jackson",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.472",
doi = "10.18653/v1/2023.acl-long.472",
pages = "8466--8487",
abstract = "Naturally occurring information-seeking questions often contain questionable assumptions{---}assumptions that are false or unverifiable. Questions containing questionable assumptions are challenging because they require a distinct answer strategy that deviates from typical answers for information-seeking questions. For instance, the question {``}When did Marie Curie discover Uranium?{''} cannot be answered as a typical {``}when{''} question without addressing the false assumption {``}Marie Curie discovered Uranium{''}. In this work, we propose (QA)2 (Question Answering with Questionable Assumptions), an open-domain evaluation dataset consisting of naturally occurring search engine queries that may or may not contain questionable assumptions. To be successful on (QA)2, systems must be able to detect questionable assumptions and also be able to produce adequate responses for both typical information-seeking questions and ones with questionable assumptions. Through human rater acceptability on end-to-end QA with (QA)2, we find that current models do struggle with handling questionable assumptions, leaving substantial headroom for progress.",
}
| Naturally occurring information-seeking questions often contain questionable assumptions{---}assumptions that are false or unverifiable. Questions containing questionable assumptions are challenging because they require a distinct answer strategy that deviates from typical answers for information-seeking questions. For instance, the question {``}When did Marie Curie discover Uranium?{''} cannot be answered as a typical {``}when{''} question without addressing the false assumption {``}Marie Curie discovered Uranium{''}. In this work, we propose (QA)2 (Question Answering with Questionable Assumptions), an open-domain evaluation dataset consisting of naturally occurring search engine queries that may or may not contain questionable assumptions. To be successful on (QA)2, systems must be able to detect questionable assumptions and also be able to produce adequate responses for both typical information-seeking questions and ones with questionable assumptions. Through human-rater acceptability judgments on end-to-end QA with (QA)2, we find that current models do struggle with handling questionable assumptions, leaving substantial headroom for progress. | [
"Kim, Najoung",
"Htut, Phu Mon",
"Bowman, Samuel R.",
"Petty, Jackson"
] | (QA)^2: Question Answering with Questionable Assumptions | acl-long.472 | Poster | [
"https://github.com/najoungkim/qaqa"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
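A minimal sketch of the answer strategy (QA)2 calls for: first check the question's presupposition, and only answer directly when it holds. The verifier and answerer below are placeholder functions standing in for real components.

```python
def answer(question, verify_assumption, direct_answer):
    ok, assumption = verify_assumption(question)
    if not ok:
        # Address the false presupposition instead of answering directly.
        return f'The question assumes "{assumption}", which is false.'
    return direct_answer(question)

# Toy stand-ins so the sketch runs end to end.
verify = lambda q: ((False, "Marie Curie discovered Uranium")
                    if "Uranium" in q else (True, ""))
direct = lambda q: "1898"
print(answer("When did Marie Curie discover Uranium?", verify, direct))
```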
https://aclanthology.org/2023.acl-long.473.bib | https://aclanthology.org/2023.acl-long.473/ | @inproceedings{hosking-etal-2023-attributable,
title = "Attributable and Scalable Opinion Summarization",
author = "Hosking, Tom and
Tang, Hao and
Lapata, Mirella",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.473",
doi = "10.18653/v1/2023.acl-long.473",
pages = "8488--8505",
abstract = "We propose a method for unsupervised opinion summarization that encodes sentences from customer reviews into a hierarchical discrete latent space, then identifies common opinions based on the frequency of their encodings. We are able to generate both abstractive summaries by decoding these frequent encodings, and extractive summaries by selecting the sentences assigned to the same frequent encodings. Our method is attributable, because the model identifies sentences used to generate the summary as part of the summarization process. It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens. We also demonstrate that our appraoch enables a degree of control, generating aspect-specific summaries by restricting the model to parts of the encoding space that correspond to desired aspects (e.g., location or food). Automatic and human evaluation on two datasets from different domains demonstrates that our method generates summaries that are more informative than prior work and better grounded in the input reviews.",
}
| We propose a method for unsupervised opinion summarization that encodes sentences from customer reviews into a hierarchical discrete latent space, then identifies common opinions based on the frequency of their encodings. We are able to generate both abstractive summaries by decoding these frequent encodings, and extractive summaries by selecting the sentences assigned to the same frequent encodings. Our method is attributable, because the model identifies sentences used to generate the summary as part of the summarization process. It scales easily to many hundreds of input reviews, because aggregation is performed in the latent space rather than over long sequences of tokens. We also demonstrate that our approach enables a degree of control, generating aspect-specific summaries by restricting the model to parts of the encoding space that correspond to desired aspects (e.g., location or food). Automatic and human evaluation on two datasets from different domains demonstrates that our method generates summaries that are more informative than prior work and better grounded in the input reviews. | [
"Hosking, Tom",
"Tang, Hao",
"Lapata, Mirella"
] | Attributable and Scalable Opinion Summarization | acl-long.473 | Oral | 2305.11603 | [
"https://github.com/tomhosking/hercules"
] | https://huggingface.co/papers/2305.11603 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
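A minimal sketch of the aggregation idea: sentences are mapped to discrete latent codes, common opinions are the most frequent codes, and an extractive summary picks one representative sentence per frequent code. The quantizer below is a toy nearest-centroid assignment, not the paper's hierarchical model.

```python
from collections import Counter

import numpy as np

def summarize(sent_vecs, sentences, centroids, k=2):
    # Quantize each sentence to its nearest centroid (its discrete code).
    codes = [int(np.argmin(((c - centroids) ** 2).sum(axis=1)))
             for c in sent_vecs]
    top = [code for code, _ in Counter(codes).most_common(k)]
    # Extractive summary: the first sentence assigned to each frequent code.
    return [sentences[codes.index(code)] for code in top]

rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 8))
vecs = rng.normal(size=(6, 8))
sents = [f"review sentence {i}" for i in range(6)]
print(summarize(vecs, sents, centroids))
```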
https://aclanthology.org/2023.acl-long.474.bib | https://aclanthology.org/2023.acl-long.474/ | @inproceedings{he-etal-2023-targeted,
title = "Targeted Data Generation: Finding and Fixing Model Weaknesses",
author = "He, Zexue and
Ribeiro, Marco Tulio and
Khani, Fereshte",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.474",
doi = "10.18653/v1/2023.acl-long.474",
pages = "8506--8520",
abstract = "Even when aggregate accuracy is high, state-of-the-art NLP models often fail systematically on specific subgroups of data, resulting in unfair outcomes and eroding user trust. Additional data collection may not help in addressing these weaknesses, as such challenging subgroups may be unknown to users, and underrepresented in the existing and new data. We propose Targeted Data Generation (TDG), a framework that automatically identifies challenging subgroups, and generates new data for those subgroups using large language models (LLMs) with a human in the loop. TDG estimates the expected benefit and potential harm of data augmentation for each subgroup, and selects the ones most likely to improve within-group performance without hurting overall performance. In our experiments, TDG significantly improves the accuracy on challenging subgroups for state-of-the-art sentiment analysis and natural language inference models, while also improving overall test accuracy.",
}
| Even when aggregate accuracy is high, state-of-the-art NLP models often fail systematically on specific subgroups of data, resulting in unfair outcomes and eroding user trust. Additional data collection may not help in addressing these weaknesses, as such challenging subgroups may be unknown to users, and underrepresented in the existing and new data. We propose Targeted Data Generation (TDG), a framework that automatically identifies challenging subgroups, and generates new data for those subgroups using large language models (LLMs) with a human in the loop. TDG estimates the expected benefit and potential harm of data augmentation for each subgroup, and selects the ones most likely to improve within-group performance without hurting overall performance. In our experiments, TDG significantly improves the accuracy on challenging subgroups for state-of-the-art sentiment analysis and natural language inference models, while also improving overall test accuracy. | [
"He, Zexue",
"Ribeiro, Marco Tulio",
"Khani, Fereshte"
] | Targeted Data Generation: Finding and Fixing Model Weaknesses | acl-long.474 | Poster | 2305.17804 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
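A minimal sketch of the selection step in Targeted Data Generation: for each candidate subgroup, estimate the expected within-group gain and the risk of hurting overall accuracy, then augment only the groups whose benefit clearly outweighs the harm. The numbers below are illustrative, not measurements from the paper.

```python
def select_subgroups(stats, margin=0.01):
    # stats: {group: (expected_benefit, potential_harm)}
    return [g for g, (benefit, harm) in stats.items() if benefit - harm > margin]

stats = {
    "negation": (0.08, 0.01),      # large in-group gain, little overall risk
    "sarcasm": (0.03, 0.05),       # augmentation likely hurts overall accuracy
    "rare-entities": (0.05, 0.02),
}
print(select_subgroups(stats))     # ['negation', 'rare-entities']
```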
https://aclanthology.org/2023.acl-long.475.bib | https://aclanthology.org/2023.acl-long.475/ | @inproceedings{gui-xiao-2023-hifi,
title = "{H}i{F}i: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation",
author = "Gui, Anchun and
Xiao, Han",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.475",
doi = "10.18653/v1/2023.acl-long.475",
pages = "8521--8537",
abstract = "To fully leverage the advantages of large-scale pre-trained language models (PLMs) on downstream tasks, it has become a ubiquitous adaptation paradigm to fine-tune the entire parameters of PLMs. However, this paradigm poses issues of inefficient updating and resource over-consuming for fine-tuning in data-scarce and resource-limited scenarios, because of the large scale of parameters in PLMs. To alleviate these concerns, in this paper, we propose a parameter-efficient fine-tuning method HiFi, that is, only the highly informative and strongly correlated attention heads for the specific task are fine-tuned. To search for those significant attention heads, we develop a novel framework to analyze the effectiveness of heads. Specifically, we first model the relationship between heads into a graph from two perspectives of information richness and correlation, and then apply PageRank algorithm to determine the relative importance of each head. Extensive experiments on the GLUE benchmark demonstrate the effectiveness of our method, and show that HiFi obtains state-of-the-art performance over the prior baselines.",
}
| To fully leverage the advantages of large-scale pre-trained language models (PLMs) on downstream tasks, it has become a ubiquitous adaptation paradigm to fine-tune all parameters of PLMs. However, this paradigm leads to inefficient updating and excessive resource consumption when fine-tuning in data-scarce and resource-limited scenarios, because of the large number of parameters in PLMs. To alleviate these concerns, we propose a parameter-efficient fine-tuning method, HiFi, in which only the attention heads that are highly informative and strongly correlated for the specific task are fine-tuned. To find those significant attention heads, we develop a novel framework to analyze their effectiveness. Specifically, we first model the relationships between heads as a graph from the two perspectives of information richness and correlation, and then apply the PageRank algorithm to determine the relative importance of each head. Extensive experiments on the GLUE benchmark demonstrate the effectiveness of our method and show that HiFi obtains state-of-the-art performance over prior baselines. | [
"Gui, Anchun",
"Xiao, Han"
] | HiFi: High-Information Attention Heads Hold for Parameter-Efficient Model Adaptation | acl-long.475 | Poster | 2305.04573 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
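A minimal sketch of the head-selection step: build a weighted graph over attention heads (edge weights standing in for correlation, node weights for information richness), run PageRank, and fine-tune only the top-ranked heads. The random weights below are placeholders for the paper's actual measures.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)
n_heads = 12
corr = rng.random((n_heads, n_heads))   # stand-in head-correlation matrix

G = nx.Graph()
for i in range(n_heads):
    for j in range(i + 1, n_heads):
        G.add_edge(i, j, weight=float(corr[i, j]))

# Stand-in "information richness" used as the personalization vector.
richness = {i: float(r) for i, r in enumerate(rng.random(n_heads))}
rank = nx.pagerank(G, weight="weight", personalization=richness)
top_heads = sorted(rank, key=rank.get, reverse=True)[:4]
print(top_heads)                        # the heads selected for fine-tuning
```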
https://aclanthology.org/2023.acl-long.476.bib | https://aclanthology.org/2023.acl-long.476/ | @inproceedings{xiao-etal-2023-cfsum,
title = "{CFS}um Coarse-to-Fine Contribution Network for Multimodal Summarization",
author = "Xiao, Min and
Zhu, Junnan and
Lin, Haitao and
Zhou, Yu and
Zong, Chengqing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.476",
doi = "10.18653/v1/2023.acl-long.476",
pages = "8538--8553",
abstract = "Multimodal summarization usually suffers from the problem that the contribution of the visual modality is unclear. Existing multimodal summarization approaches focus on designing the fusion methods of different modalities, while ignoring the adaptive conditions under which visual modalities are useful. Therefore, we propose a novel Coarse-to-Fine contribution network for multimodal Summarization (CFSum) to consider different contributions of images for summarization. First, to eliminate the interference of useless images, we propose a pre-filter module to abandon useless images. Second, to make accurate use of useful images, we propose two levels of visual complement modules, word level and phrase level. Specifically, image contributions are calculated and are adopted to guide the attention of both textual and visual modalities. Experimental results have shown that CFSum significantly outperforms multiple strong baselines on the standard benchmark. Furthermore, the analysis verifies that useful images can even help generate non-visual words which are implicitly represented in the image.",
}
| Multimodal summarization usually suffers from the problem that the contribution of the visual modality is unclear. Existing multimodal summarization approaches focus on designing the fusion methods of different modalities, while ignoring the adaptive conditions under which visual modalities are useful. Therefore, we propose a novel Coarse-to-Fine contribution network for multimodal Summarization (CFSum) to consider different contributions of images for summarization. First, to eliminate the interference of useless images, we propose a pre-filter module to abandon useless images. Second, to make accurate use of useful images, we propose two levels of visual complement modules, word level and phrase level. Specifically, image contributions are calculated and are adopted to guide the attention of both textual and visual modalities. Experimental results have shown that CFSum significantly outperforms multiple strong baselines on the standard benchmark. Furthermore, the analysis verifies that useful images can even help generate non-visual words which are implicitly represented in the image. | [
"Xiao, Min",
"Zhu, Junnan",
"Lin, Haitao",
"Zhou, Yu",
"Zong, Chengqing"
] | CFSum Coarse-to-Fine Contribution Network for Multimodal Summarization | acl-long.476 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
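A minimal sketch of contribution-aware fusion in the spirit of CFSum: a scalar contribution score per image gates how much the visual features are mixed into the textual representation, and images below a threshold are filtered out entirely. All modules here are toy stand-ins, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ContributionGate(nn.Module):
    def __init__(self, d=128, threshold=0.2):
        super().__init__()
        self.scorer = nn.Linear(2 * d, 1)
        self.threshold = threshold

    def forward(self, text, image):
        # text, image: (B, d) pooled features.
        c = torch.sigmoid(self.scorer(torch.cat([text, image], dim=-1)))
        # Pre-filter: useless images (low contribution) are dropped outright.
        c = torch.where(c < self.threshold, torch.zeros_like(c), c)
        return text + c * image   # contribution-weighted visual complement

gate = ContributionGate()
fused = gate(torch.randn(3, 128), torch.randn(3, 128))
print(fused.shape)
```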
https://aclanthology.org/2023.acl-long.477.bib | https://aclanthology.org/2023.acl-long.477/ | @inproceedings{nityasya-etal-2023-scientific,
title = "On {``}Scientific Debt{''} in {NLP}: A Case for More Rigour in Language Model Pre-Training Research",
author = "Nityasya, Made Nindyatama and
Wibowo, Haryo and
Aji, Alham Fikri and
Winata, Genta and
Prasojo, Radityo Eko and
Blunsom, Phil and
Kuncoro, Adhiguna",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.477",
doi = "10.18653/v1/2023.acl-long.477",
pages = "8554--8572",
abstract = "This evidence-based position paper critiques current research practices within the language model pre-training literature. Despite rapid recent progress afforded by increasingly better pre-trained language models (PLMs), current PLM research practices often conflate different possible sources of model improvement, without conducting proper ablation studies and principled comparisons between different models under comparable conditions. These practices (i) leave us ill-equipped to understand which pre-training approaches should be used under what circumstances; (ii) impede reproducibility and credit assignment; and (iii) render it difficult to understand: {``}How exactly does each factor contribute to the progress that we have today?{''} We provide a case in point by revisiting the success of BERT over its baselines, ELMo and GPT-1, and demonstrate how {---} under comparable conditions where the baselines are tuned to a similar extent {---} these baselines (and even-simpler variants thereof) can, in fact, achieve competitive or better performance than BERT. These findings demonstrate how disentangling different factors of model improvements can lead to valuable new insights. We conclude with recommendations for how to encourage and incentivize this line of work, and accelerate progress towards a better and more systematic understanding of what factors drive the progress of our foundation models today.",
}
| This evidence-based position paper critiques current research practices within the language model pre-training literature. Despite rapid recent progress afforded by increasingly better pre-trained language models (PLMs), current PLM research practices often conflate different possible sources of model improvement, without conducting proper ablation studies and principled comparisons between different models under comparable conditions. These practices (i) leave us ill-equipped to understand which pre-training approaches should be used under what circumstances; (ii) impede reproducibility and credit assignment; and (iii) render it difficult to understand: {``}How exactly does each factor contribute to the progress that we have today?{''} We provide a case in point by revisiting the success of BERT over its baselines, ELMo and GPT-1, and demonstrate how {---} under comparable conditions where the baselines are tuned to a similar extent {---} these baselines (and even-simpler variants thereof) can, in fact, achieve competitive or better performance than BERT. These findings demonstrate how disentangling different factors of model improvements can lead to valuable new insights. We conclude with recommendations for how to encourage and incentivize this line of work, and accelerate progress towards a better and more systematic understanding of what factors drive the progress of our foundation models today. | [
"Nityasya, Made Nindyatama",
"Wibowo, Haryo",
"Aji, Alham Fikri",
"Winata, Genta",
"Prasojo, Radityo Eko",
"Blunsom, Phil",
"Kuncoro, Adhiguna"
] | On “Scientific Debt” in NLP: A Case for More Rigour in Language Model Pre-Training Research | acl-long.477 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.478.bib | https://aclanthology.org/2023.acl-long.478/ | @inproceedings{luo-etal-2023-end,
title = "End-to-end Knowledge Retrieval with Multi-modal Queries",
author = "Luo, Man and
Fang, Zhiyuan and
Gokhale, Tejas and
Yang, Yezhou and
Baral, Chitta",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.478",
doi = "10.18653/v1/2023.acl-long.478",
pages = "8573--8589",
abstract = "We investigate knowledge retrieval with multi-modal queries, i.e. queries containing information split across image and text inputs, a challenging task that differs from previous work on cross-modal retrieval. We curate a new dataset called ReMuQ for benchmarking progress on this task. ReMuQ requires a system to retrieve knowledge from a large corpus by integrating contents from both text and image queries. We introduce a retriever model {``}ReViz{''} that can directly process input text and images to retrieve relevant knowledge in an end-to-end fashion without being dependent on intermediate modules such as object detectors or caption generators. We introduce a new pretraining task that is effective for learning knowledge retrieval with multimodal queries and also improves performance on downstream tasks. We demonstrate superior performance in retrieval on two datasets (ReMuQ and OK-VQA) under zero-shot settings as well as further improvements when finetuned on these datasets.",
}
| We investigate knowledge retrieval with multi-modal queries, i.e. queries containing information split across image and text inputs, a challenging task that differs from previous work on cross-modal retrieval. We curate a new dataset called ReMuQ for benchmarking progress on this task. ReMuQ requires a system to retrieve knowledge from a large corpus by integrating contents from both text and image queries. We introduce a retriever model {``}ReViz{''} that can directly process input text and images to retrieve relevant knowledge in an end-to-end fashion without being dependent on intermediate modules such as object detectors or caption generators. We introduce a new pretraining task that is effective for learning knowledge retrieval with multimodal queries and also improves performance on downstream tasks. We demonstrate superior performance in retrieval on two datasets (ReMuQ and OK-VQA) under zero-shot settings as well as further improvements when finetuned on these datasets. | [
"Luo, Man",
"Fang, Zhiyuan",
"Gokhale, Tejas",
"Yang, Yezhou",
"Baral, Chitta"
] | End-to-end Knowledge Retrieval with Multi-modal Queries | acl-long.478 | Poster | 2306.00424 | [
"https://github.com/luomancs/remuq"
] | https://huggingface.co/papers/2306.00424 | 0 | 1 | 0 | 5 | 1 | [] | [] | [] |
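A minimal sketch of retrieval with a multi-modal query: image and text query features are fused by a single encoder into one query vector, and knowledge passages are ranked by inner product, with no object detector or captioning step in between. The encoders are toy stand-ins for the paper's ReViz model.

```python
import torch
import torch.nn as nn

class MultiModalQueryEncoder(nn.Module):
    def __init__(self, d=128):
        super().__init__()
        self.fuse = nn.Linear(2 * d, d)

    def forward(self, img_feat, txt_feat):
        # Fuse pooled image and text features into a single query vector.
        return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))

enc = MultiModalQueryEncoder()
query = enc(torch.randn(1, 128), torch.randn(1, 128))   # (1, 128)
corpus = torch.randn(1000, 128)                         # passage embeddings
scores = corpus @ query.squeeze(0)
print(scores.topk(3).indices)                           # top retrieved passages
```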
https://aclanthology.org/2023.acl-long.479.bib | https://aclanthology.org/2023.acl-long.479/ | @inproceedings{huang-etal-2023-av,
title = "{AV}-{T}ran{S}peech: Audio-Visual Robust Speech-to-Speech Translation",
author = "Huang, Rongjie and
Liu, Huadai and
Cheng, Xize and
Ren, Yi and
Li, Linjun and
Ye, Zhenhui and
He, Jinzheng and
Zhang, Lichao and
Liu, Jinglin and
Yin, Xiang and
Zhao, Zhou",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.479",
doi = "10.18653/v1/2023.acl-long.479",
pages = "8590--8604",
abstract = "Direct speech-to-speech translation (S2ST) aims to convert speech from one language into another, and has demonstrated significant progress to date. Despite the recent success, current S2ST models still suffer from distinct degradation in noisy environments and fail to translate visual speech (i.e., the movement of lips and teeth). In this work, we present AV-TranSpeech, the first audio-visual speech-to-speech (AV-S2ST) translation model without relying on intermediate text. AV-TranSpeech complements the audio stream with visual information to promote system robustness and opens up a host of practical applications: dictation or dubbing archival films. To mitigate the data scarcity with limited parallel AV-S2ST data, we 1) explore self-supervised pre-training with unlabeled audio-visual data to learn contextual representation, and 2) introduce cross-modal distillation with S2ST models trained on the audio-only corpus to further reduce the requirements of visual data. Experimental results on two language pairs demonstrate that AV-TranSpeech outperforms audio-only models under all settings regardless of the type of noise. With low-resource audio-visual data (10h, 30h), cross-modal distillation yields an improvement of 7.6 BLEU on average compared with baselines. Audio samples are available at \url{https://AV-TranSpeech.github.io/}.",
}
| Direct speech-to-speech translation (S2ST) aims to convert speech from one language into another, and has demonstrated significant progress to date. Despite the recent success, current S2ST models still suffer from distinct degradation in noisy environments and fail to translate visual speech (i.e., the movement of lips and teeth). In this work, we present AV-TranSpeech, the first audio-visual speech-to-speech (AV-S2ST) translation model without relying on intermediate text. AV-TranSpeech complements the audio stream with visual information to promote system robustness and opens up a host of practical applications: dictation or dubbing archival films. To mitigate the data scarcity with limited parallel AV-S2ST data, we 1) explore self-supervised pre-training with unlabeled audio-visual data to learn contextual representation, and 2) introduce cross-modal distillation with S2ST models trained on the audio-only corpus to further reduce the requirements of visual data. Experimental results on two language pairs demonstrate that AV-TranSpeech outperforms audio-only models under all settings regardless of the type of noise. With low-resource audio-visual data (10h, 30h), cross-modal distillation yields an improvement of 7.6 BLEU on average compared with baselines. Audio samples are available at \url{https://AV-TranSpeech.github.io/}. | [
"Huang, Rongjie",
"Liu, Huadai",
"Cheng, Xize",
"Ren, Yi",
"Li, Linjun",
"Ye, Zhenhui",
"He, Jinzheng",
"Zhang, Lichao",
"Liu, Jinglin",
"Yin, Xiang",
"Zhao, Zhou"
] | AV-TranSpeech: Audio-Visual Robust Speech-to-Speech Translation | acl-long.479 | Poster | 2305.15403 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.480.bib | https://aclanthology.org/2023.acl-long.480/ | @inproceedings{zhang-etal-2023-dual,
title = "Dual Class Knowledge Propagation Network for Multi-label Few-shot Intent Detection",
author = "Zhang, Feng and
Chen, Wei and
Ding, Fei and
Wang, Tengjiao",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.480",
doi = "10.18653/v1/2023.acl-long.480",
pages = "8605--8618",
abstract = "Multi-label intent detection aims to assign multiple labels to utterances and attracts increasing attention as a practical task in task-oriented dialogue systems. As dialogue domains change rapidly and new intents emerge fast, the lack of annotated data motivates multi-label few-shot intent detection. However, previous studies are confused by the identical representation of the utterance with multiple labels and overlook the intrinsic intra-class and inter-class interactions. To address these two limitations, we propose a novel dual class knowledge propagation network in this paper. In order to learn well-separated representations for utterances with multiple intents, we first introduce a label-semantic augmentation module incorporating class name information. For better consideration of the inherent intra-class and inter-class relations, an instance-level and a class-level graph neural network are constructed, which not only propagate label information but also propagate feature structure. And we use a simple yet effective method to predict the intent count of each utterance. Extensive experimental results on two multi-label intent datasets have demonstrated that our proposed method outperforms strong baselines by a large margin.",
}
| Multi-label intent detection aims to assign multiple labels to utterances and is attracting increasing attention as a practical task in task-oriented dialogue systems. As dialogue domains change rapidly and new intents emerge quickly, the lack of annotated data motivates multi-label few-shot intent detection. However, previous studies use one identical representation for an utterance regardless of its multiple labels and overlook the intrinsic intra-class and inter-class interactions. To address these two limitations, we propose a novel dual class knowledge propagation network. To learn well-separated representations for utterances with multiple intents, we first introduce a label-semantic augmentation module incorporating class name information. To better capture the inherent intra-class and inter-class relations, we construct an instance-level and a class-level graph neural network, which propagate not only label information but also feature structure. We also use a simple yet effective method to predict the intent count of each utterance. Extensive experiments on two multi-label intent datasets demonstrate that our proposed method outperforms strong baselines by a large margin. | [
"Zhang, Feng",
"Chen, Wei",
"Ding, Fei",
"Wang, Tengjiao"
] | Dual Class Knowledge Propagation Network for Multi-label Few-shot Intent Detection | acl-long.480 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
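A minimal sketch of the label-semantic idea for multi-label intents: class-name embeddings act as anchors, and a predicted intent count decides how many of the closest classes to assign to an utterance. The embeddings and count predictor below are toy stand-ins, not the paper's graph networks.

```python
import torch

class_names = ["book_flight", "play_music", "set_alarm"]
class_emb = torch.randn(len(class_names), 32)   # label-semantic anchors
utt_emb = torch.randn(32)                       # utterance representation

sims = class_emb @ utt_emb                      # similarity to each class
predicted_count = 2                             # from a hypothetical count predictor
labels = [class_names[int(i)] for i in sims.topk(predicted_count).indices]
print(labels)                                   # the assigned multi-label set
```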
https://aclanthology.org/2023.acl-long.481.bib | https://aclanthology.org/2023.acl-long.481/ | @inproceedings{saxena-etal-2023-vendorlink,
title = "{V}endor{L}ink: An {NLP} approach for Identifying {\&} Linking Vendor Migrants {\&} Potential Aliases on {D}arknet Markets",
author = "Saxena, Vageesh and
Rethmeier, Nils and
van Dijck, Gijs and
Spanakis, Gerasimos",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.481",
doi = "10.18653/v1/2023.acl-long.481",
pages = "8619--8639",
abstract = "The anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets. Consequently, illegal markets and their connections are challenging to uncover on the Darknet. To identify relationships between illegal markets and their vendors, we propose VendorLink, an NLP-based approach that examines writing patterns to verify, identify, and link unique vendor accounts across text advertisements (ads) on seven public Darknet markets. In contrast to existing literature, VendorLink utilizes the strength of supervised pre-training to perform closed-set vendor verification, open-set vendor identification, and low-resource market adaption tasks. Through VendorLink, we uncover (i) 15 migrants and 71 potential aliases in the Alphabay-Dreams-Silk dataset, (ii) 17 migrants and 3 potential aliases in the Valhalla-Berlusconi dataset, and (iii) 75 migrants and 10 potential aliases in the Traderoute-Agora dataset. Altogether, our approach can help Law Enforcement Agencies (LEA) make more informed decisions by verifying and identifying migrating vendors and their potential aliases on existing and Low-Resource (LR) emerging Darknet markets.",
}
| Anonymity on the Darknet allows vendors to stay undetected by using multiple vendor aliases or frequently migrating between markets. Consequently, illegal markets and their connections are challenging to uncover on the Darknet. To identify relationships between illegal markets and their vendors, we propose VendorLink, an NLP-based approach that examines writing patterns to verify, identify, and link unique vendor accounts across text advertisements (ads) on seven public Darknet markets. In contrast to existing literature, VendorLink utilizes the strength of supervised pre-training to perform closed-set vendor verification, open-set vendor identification, and low-resource market adaptation tasks. Through VendorLink, we uncover (i) 15 migrants and 71 potential aliases in the Alphabay-Dreams-Silk dataset, (ii) 17 migrants and 3 potential aliases in the Valhalla-Berlusconi dataset, and (iii) 75 migrants and 10 potential aliases in the Traderoute-Agora dataset. Altogether, our approach can help Law Enforcement Agencies (LEA) make more informed decisions by verifying and identifying migrating vendors and their potential aliases on existing and Low-Resource (LR) emerging Darknet markets. | [
"Saxena, Vageesh",
"Rethmeier, Nils",
"van Dijck, Gijs",
"Spanakis, Gerasimos"
] | VendorLink: An NLP approach for Identifying & Linking Vendor Migrants & Potential Aliases on Darknet Markets | acl-long.481 | Poster | 2305.02763 | [
"https://github.com/maastrichtlawtech/vendorlink"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
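A minimal sketch of the underlying stylometric intuition (not the paper's supervised pre-training setup): represent each vendor's ads with character n-gram TF-IDF and flag high-similarity pairs across markets as candidate aliases. The data and threshold are illustrative.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

ads = {
    "market_a/vendorX": "top quality product, stealth shipping, reship free",
    "market_b/vendorY": "top qualiti product, stealth shiping, free reship",
    "market_b/vendorZ": "lab tested, next day dispatch, tracked only",
}
names = list(ads)
# Character n-grams capture spelling habits that survive aliasing.
X = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)).fit_transform(ads.values())
sim = cosine_similarity(X)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.5:   # candidate alias pair across accounts
            print(names[i], "<->", names[j], round(float(sim[i, j]), 2))
```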
https://aclanthology.org/2023.acl-long.482.bib | https://aclanthology.org/2023.acl-long.482/ | @inproceedings{wang-etal-2023-element,
title = "Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method",
author = "Wang, Yiming and
Zhang, Zhuosheng and
Wang, Rui",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.482",
doi = "10.18653/v1/2023.acl-long.482",
pages = "8640--8665",
abstract = "Automatic summarization generates concise summaries that contain key ideas of source documents. As the most mainstream datasets for the news sub-domain, CNN/DailyMail and BBC XSum have been widely used for performance benchmarking. However, the reference summaries of those datasets turn out to be noisy, mainly in terms of factual hallucination and information redundancy. To address this challenge, we first annotate new expert-writing Element-aware test sets following the {``}Lasswell Communication Model{''} proposed by Lasswell, allowing reference summaries to focus on more fine-grained news elements objectively and comprehensively. Utilizing the new test sets, we observe the surprising zero-shot summary ability of LLMs, which addresses the issue of the inconsistent results between human preference and automatic evaluation metrics of LLMs{'} zero-shot summaries in prior work. Further, we propose a Summary Chain-of-Thought (SumCoT) technique to elicit LLMs to generate summaries step by step, which helps them integrate more fine-grained details of source documents into the final summaries that correlate with the human writing mindset. Experimental results show our method outperforms state-of-the-art fine-tuned PLMs and zero-shot LLMs by +4.33/+4.77 in ROUGE-L on the two datasets, respectively. Dataset and code are publicly available at \url{https://github.com/Alsace08/SumCoT}.",
}
| Automatic summarization generates concise summaries that contain key ideas of source documents. As the most mainstream datasets for the news sub-domain, CNN/DailyMail and BBC XSum have been widely used for performance benchmarking. However, the reference summaries of those datasets turn out to be noisy, mainly in terms of factual hallucination and information redundancy. To address this challenge, we first annotate new expert-writing Element-aware test sets following the {``}Lasswell Communication Model{''} proposed by Lasswell, allowing reference summaries to focus on more fine-grained news elements objectively and comprehensively. Utilizing the new test sets, we observe the surprising zero-shot summary ability of LLMs, which addresses the issue of the inconsistent results between human preference and automatic evaluation metrics of LLMs{'} zero-shot summaries in prior work. Further, we propose a Summary Chain-of-Thought (SumCoT) technique to elicit LLMs to generate summaries step by step, which helps them integrate more fine-grained details of source documents into the final summaries that correlate with the human writing mindset. Experimental results show our method outperforms state-of-the-art fine-tuned PLMs and zero-shot LLMs by +4.33/+4.77 in ROUGE-L on the two datasets, respectively. Dataset and code are publicly available at \url{https://github.com/Alsace08/SumCoT}. | [
"Wang, Yiming",
"Zhang, Zhuosheng",
"Wang, Rui"
] | Element-aware Summarization with Large Language Models: Expert-aligned Evaluation and Chain-of-Thought Method | acl-long.482 | Poster | 2305.13412 | [
"https://github.com/alsace08/sumcot"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
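A minimal sketch of the SumCoT prompting pattern: first elicit the news elements of the Lasswell model (who, what, when, where, why), then condition the final summary on those extracted elements. `llm` is a placeholder for any text-completion function; no specific API is implied.

```python
def sum_cot(document, llm):
    elements_prompt = (
        f"Article:\n{document}\n\n"
        "List the key elements: who, what, when, where, why."
    )
    elements = llm(elements_prompt)   # step 1: element extraction
    summary_prompt = (
        f"Article:\n{document}\n\nKey elements:\n{elements}\n\n"
        "Write a one-sentence summary that covers these elements."
    )
    return llm(summary_prompt)        # step 2: element-aware summary

# Toy stand-in LLM so the sketch runs end to end.
fake_llm = lambda prompt: "(model output for: " + prompt.splitlines()[-1] + ")"
print(sum_cot("The city council approved the new budget on Monday.", fake_llm))
```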
https://aclanthology.org/2023.acl-long.483.bib | https://aclanthology.org/2023.acl-long.483/ | @inproceedings{yang-etal-2023-efficient,
title = "Efficient Shapley Values Estimation by Amortization for Text Classification",
author = "Yang, Chenghao and
Yin, Fan and
He, He and
Chang, Kai-Wei and
Ma, Xiaofei and
Xiang, Bing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.483",
doi = "10.18653/v1/2023.acl-long.483",
pages = "8666--8680",
abstract = "Despite the popularity of Shapley Values in explaining neural text classification models, computing them is prohibitive for large pretrained models due to a large number of model evaluations. In practice, Shapley Values are often estimated with a small number of stochastic model evaluations. However, we show that the estimated Shapley Values are sensitive to random seed choices {--} the top-ranked features often have little overlap across different seeds, especially on examples with longer input texts. This can only be mitigated by aggregating thousands of model evaluations, which on the other hand, induces substantial computational overheads. To mitigate the trade-off between stability and efficiency, we develop an amortized model that directly predicts each input feature{'}s Shapley Value without additional model evaluations. It is trained on a set of examples whose Shapley Values are estimated from a large number of model evaluations to ensure stability. Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup compared to traditional methods. Further, our model does not suffer from stability issues as inference is deterministic. We release our code at \url{https://github.com/yangalan123/Amortized-Interpretability}.",
}
| Despite the popularity of Shapley Values in explaining neural text classification models, computing them is prohibitive for large pretrained models due to a large number of model evaluations. In practice, Shapley Values are often estimated with a small number of stochastic model evaluations. However, we show that the estimated Shapley Values are sensitive to random seed choices {--} the top-ranked features often have little overlap across different seeds, especially on examples with longer input texts. This can only be mitigated by aggregating thousands of model evaluations, which on the other hand, induces substantial computational overheads. To mitigate the trade-off between stability and efficiency, we develop an amortized model that directly predicts each input feature{'}s Shapley Value without additional model evaluations. It is trained on a set of examples whose Shapley Values are estimated from a large number of model evaluations to ensure stability. Experimental results on two text classification datasets demonstrate that our amortized model estimates Shapley Values accurately with up to 60 times speedup compared to traditional methods. Further, our model does not suffer from stability issues as inference is deterministic. We release our code at \url{https://github.com/yangalan123/Amortized-Interpretability}. | [
"Yang, Chenghao",
"Yin, Fan",
"He, He",
"Chang, Kai-Wei",
"Ma, Xiaofei",
"Xiang, Bing"
] | Efficient Shapley Values Estimation by Amortization for Text Classification | acl-long.483 | Oral | 2305.19998 | [
"https://github.com/yangalan123/amortized-interpretability"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
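A minimal sketch of the amortization idea: instead of re-estimating Shapley Values with many model evaluations at inference time, train a small regressor that maps token representations directly to pre-computed, well-converged Shapley targets. The data below is random; real targets would come from thousands of model evaluations per example.

```python
import torch
import torch.nn as nn

amortizer = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(amortizer.parameters(), lr=1e-3)

token_feats = torch.randn(128, 64)      # token representations
shapley_targets = torch.randn(128, 1)   # stable, pre-computed estimates

for _ in range(50):                     # supervised regression to the targets
    opt.zero_grad()
    loss = nn.functional.mse_loss(amortizer(token_feats), shapley_targets)
    loss.backward()
    opt.step()

# Inference is one deterministic forward pass per token: no seed sensitivity.
print(amortizer(torch.randn(5, 64)).squeeze(-1))
```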
https://aclanthology.org/2023.acl-long.484.bib | https://aclanthology.org/2023.acl-long.484/ | @inproceedings{xu-etal-2023-peerda,
title = "{P}eer{DA}: Data Augmentation via Modeling Peer Relation for Span Identification Tasks",
author = "Xu, Weiwen and
Li, Xin and
Deng, Yang and
Lam, Wai and
Bing, Lidong",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.484",
doi = "10.18653/v1/2023.acl-long.484",
pages = "8681--8699",
abstract = "Span identification aims at identifying specific text spans from text input and classifying them into pre-defined categories. Different from previous works that merely leverage the Subordinate (SUB) relation (i.e. if a span is an instance of a certain category) to train models, this paper for the first time explores the Peer (PR) relation, which indicates that two spans are instances of the same category and share similar features. Specifically, a novel Peer Data Augmentation (PeerDA) approach is proposed which employs span pairs with the PR relation as the augmentation data for training. PeerDA has two unique advantages: (1) There are a large number of PR span pairs for augmenting the training data. (2) The augmented data can prevent the trained model from over-fitting the superficial span-category mapping by pushing the model to leverage the span semantics. Experimental results on ten datasets over four diverse tasks across seven domains demonstrate the effectiveness of PeerDA. Notably, PeerDA achieves state-of-the-art results on six of them.",
}
| Span identification aims at identifying specific text spans from text input and classifying them into pre-defined categories. Different from previous works that merely leverage the Subordinate (SUB) relation (i.e. if a span is an instance of a certain category) to train models, this paper for the first time explores the Peer (PR) relation, which indicates that two spans are instances of the same category and share similar features. Specifically, a novel Peer Data Augmentation (PeerDA) approach is proposed which employs span pairs with the PR relation as the augmentation data for training. PeerDA has two unique advantages: (1) There are a large number of PR span pairs for augmenting the training data. (2) The augmented data can prevent the trained model from over-fitting the superficial span-category mapping by pushing the model to leverage the span semantics. Experimental results on ten datasets over four diverse tasks across seven domains demonstrate the effectiveness of PeerDA. Notably, PeerDA achieves state-of-the-art results on six of them. | [
"Xu, Weiwen",
"Li, Xin",
"Deng, Yang",
"Lam, Wai",
"Bing, Lidong"
] | PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks | acl-long.484 | Poster | 2210.08855 | [
"https://github.com/damo-nlp-sg/peerda"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.485.bib | https://aclanthology.org/2023.acl-long.485/ | @inproceedings{monter-aldana-etal-2023-dynamic,
title = "Dynamic Regularization in {UDA} for Transformers in Multimodal Classification",
author = "Monter-Aldana, Ivonne and
Lopez Monroy, Adrian Pastor and
Sanchez-Vega, Fernando",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.485",
doi = "10.18653/v1/2023.acl-long.485",
pages = "8700--8711",
abstract = "Multimodal machine learning is a cutting-edge field that explores ways to incorporate information from multiple sources into models. As more multimodal data becomes available, this field has become increasingly relevant. This work focuses on two key challenges in multimodal machine learning. The first is finding efficient ways to combine information from different data types. The second is that often, one modality (e.g., text) is stronger and more relevant, making it difficult to identify meaningful patterns in the weaker modality (e.g., image). Our approach focuses on more effectively exploiting the weaker modality while dynamically regularizing the loss function. First, we introduce a new two-stream model called Multimodal BERT-ViT, which features a novel intra-CLS token fusion. Second, we utilize a dynamic adjustment that maintains a balance between specialization and generalization during the training to avoid overfitting, which we devised. We add this dynamic adjustment to the Unsupervised Data Augmentation (UDA) framework. We evaluate the effectiveness of these proposals on the task of multi-label movie genre classification using the Moviescope and MM-IMDb datasets. The evaluation revealed that our proposal offers substantial benefits, while simultaneously enabling us to harness the weaker modality without compromising the information provided by the stronger.",
}
| Multimodal machine learning is a cutting-edge field that explores ways to incorporate information from multiple sources into models. As more multimodal data becomes available, this field has become increasingly relevant. This work focuses on two key challenges in multimodal machine learning. The first is finding efficient ways to combine information from different data types. The second is that often, one modality (e.g., text) is stronger and more relevant, making it difficult to identify meaningful patterns in the weaker modality (e.g., image). Our approach focuses on more effectively exploiting the weaker modality while dynamically regularizing the loss function. First, we introduce a new two-stream model called Multimodal BERT-ViT, which features a novel intra-CLS token fusion. Second, we utilize a dynamic adjustment that maintains a balance between specialization and generalization during the training to avoid overfitting, which we devised. We add this dynamic adjustment to the Unsupervised Data Augmentation (UDA) framework. We evaluate the effectiveness of these proposals on the task of multi-label movie genre classification using the Moviescope and MM-IMDb datasets. The evaluation revealed that our proposal offers substantial benefits, while simultaneously enabling us to harness the weaker modality without compromising the information provided by the stronger. | [
"Monter-Aldana, Ivonne",
"Lopez Monroy, Adrian Pastor",
"Sanchez-Vega, Fern",
"o"
] | Dynamic Regularization in UDA for Transformers in Multimodal Classification | acl-long.485 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.486.bib | https://aclanthology.org/2023.acl-long.486/ | @inproceedings{frermann-etal-2023-conflicts,
title = "Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing",
author = "Frermann, Lea and
Li, Jiatong and
Khanehzar, Shima and
Mikolajczak, Gosia",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.486",
doi = "10.18653/v1/2023.acl-long.486",
pages = "8712--8732",
abstract = "Despite increasing interest in the automatic detection of media frames in NLP, the problem is typically simplified as single-label classification and adopts a topic-like view on frames, evading modelling the broader document-level narrative. In this work, we revisit a widely used conceptualization of framing from the communication sciences which explicitly captures elements of narratives, including conflict and its resolution, and integrate it with the narrative framing of key entities in the story as heroes, victims or villains. We adapt an effective annotation paradigm that breaks a complex annotation task into a series of simpler binary questions, and present an annotated data set of English news articles, and a case study on the framing of climate change in articles from news outlets across the political spectrum. Finally, we explore automatic multi-label prediction of our frames with supervised and semi-supervised approaches, and present a novel retrieval-based method which is both effective and transparent in its predictions. We conclude with a discussion of opportunities and challenges for future work on document-level models of narrative framing.",
}
| Despite increasing interest in the automatic detection of media frames in NLP, the problem is typically simplified as single-label classification and adopts a topic-like view on frames, evading modelling the broader document-level narrative. In this work, we revisit a widely used conceptualization of framing from the communication sciences which explicitly captures elements of narratives, including conflict and its resolution, and integrate it with the narrative framing of key entities in the story as heroes, victims or villains. We adapt an effective annotation paradigm that breaks a complex annotation task into a series of simpler binary questions, and present an annotated data set of English news articles, and a case study on the framing of climate change in articles from news outlets across the political spectrum. Finally, we explore automatic multi-label prediction of our frames with supervised and semi-supervised approaches, and present a novel retrieval-based method which is both effective and transparent in its predictions. We conclude with a discussion of opportunities and challenges for future work on document-level models of narrative framing. | [
"Frermann, Lea",
"Li, Jiatong",
"Khanehzar, Shima",
"Mikolajczak, Gosia"
] | Conflicts, Villains, Resolutions: Towards models of Narrative Media Framing | acl-long.486 | Oral | 2306.02052 | [
"https://github.com/phenixace/narrative-framing"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.487.bib | https://aclanthology.org/2023.acl-long.487/ | @inproceedings{hardalov-etal-2023-bgglue,
title = "bg{GLUE}: A {B}ulgarian General Language Understanding Evaluation Benchmark",
author = "Hardalov, Momchil and
Atanasova, Pepa and
Mihaylov, Todor and
Angelova, Galia and
Simov, Kiril and
Osenova, Petya and
Stoyanov, Veselin and
Koychev, Ivan and
Nakov, Preslav and
Radev, Dragomir",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.487",
doi = "10.18653/v1/2023.acl-long.487",
pages = "8733--8759",
abstract = "We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression). We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark. The evaluation results show strong performance on sequence labeling tasks, but there is a lot of room for improvement for tasks that require more complex reasoning. We make bgGLUE publicly available together with the fine-tuning and the evaluation code, as well as a public leaderboard at \url{https://bgglue.github.io}, and we hope that it will enable further advancements in developing NLU models for Bulgarian.",
}
| We present bgGLUE (Bulgarian General Language Understanding Evaluation), a benchmark for evaluating language models on Natural Language Understanding (NLU) tasks in Bulgarian. Our benchmark includes NLU tasks targeting a variety of NLP problems (e.g., natural language inference, fact-checking, named entity recognition, sentiment analysis, question answering, etc.) and machine learning tasks (sequence labeling, document-level classification, and regression). We run the first systematic evaluation of pre-trained language models for Bulgarian, comparing and contrasting results across the nine tasks in the benchmark. The evaluation results show strong performance on sequence labeling tasks, but there is a lot of room for improvement for tasks that require more complex reasoning. We make bgGLUE publicly available together with the fine-tuning and the evaluation code, as well as a public leaderboard at \url{https://bgglue.github.io}, and we hope that it will enable further advancements in developing NLU models for Bulgarian. | [
"Hardalov, Momchil",
"Atanasova, Pepa",
"Mihaylov, Todor",
"Angelova, Galia",
"Simov, Kiril",
"Osenova, Petya",
"Stoyanov, Veselin",
"Koychev, Ivan",
"Nakov, Preslav",
"Radev, Dragomir"
] | bgGLUE: A Bulgarian General Language Understanding Evaluation Benchmark | acl-long.487 | Poster | 2306.02349 | [
"https://github.com/bgglue/bgglue.github.io"
] | https://huggingface.co/papers/2306.02349 | 0 | 0 | 0 | 10 | 1 | [] | [
"bgglue/bgglue"
] | [] |
https://aclanthology.org/2023.acl-long.488.bib | https://aclanthology.org/2023.acl-long.488/ | @inproceedings{feng-etal-2023-dunst,
title = "{D}u{NST}: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation",
author = "Feng, Yuxi and
Yi, Xiaoyuan and
Wang, Xiting and
Lakshmanan, V.S., Laks and
Xie, Xing",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.488",
doi = "10.18653/v1/2023.acl-long.488",
pages = "8760--8785",
abstract = "Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of big pre-trained models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented only by self-generated pseudo text, generation models \textit{over-exploit} the previously learned text space and \textit{fail to explore} a larger one, suffering from a restricted generalization boundary and limited controllability. In this work, we propose DuNST, a novel ST framework to tackle these problems. DuNST jointly models text generation and classification as a dual process and further perturbs and escapes from the collapsed space by adding two kinds of flexible noise. In this way, our model could construct and utilize both pseudo text generated from given labels and pseudo labels predicted from available unlabeled text, which are gradually refined during the ST phase. We theoretically demonstrate that DuNST can be regarded as enhancing the exploration of the potentially larger real text space while maintaining exploitation, guaranteeing improved performance. Experiments on three controllable generation tasks show that DuNST significantly boosts control accuracy with comparable generation fluency and diversity against several strong baselines.",
}
| Self-training (ST) has prospered again in language understanding by augmenting the fine-tuning of big pre-trained models when labeled data is insufficient. However, it remains challenging to incorporate ST into attribute-controllable language generation. Augmented only by self-generated pseudo text, generation models \textit{over-exploit} the previously learned text space and \textit{fail to explore} a larger one, suffering from a restricted generalization boundary and limited controllability. In this work, we propose DuNST, a novel ST framework to tackle these problems. DuNST jointly models text generation and classification as a dual process and further perturbs and escapes from the collapsed space by adding two kinds of flexible noise. In this way, our model could construct and utilize both pseudo text generated from given labels and pseudo labels predicted from available unlabeled text, which are gradually refined during the ST phase. We theoretically demonstrate that DuNST can be regarded as enhancing the exploration of the potentially larger real text space while maintaining exploitation, guaranteeing improved performance. Experiments on three controllable generation tasks show that DuNST significantly boosts control accuracy with comparable generation fluency and diversity against several strong baselines. | [
"Feng, Yuxi",
"Yi, Xiaoyuan",
"Wang, Xiting",
"Lakshmanan, V.S., Laks",
"Xie, Xing"
] | DuNST: Dual Noisy Self Training for Semi-Supervised Controllable Text Generation | acl-long.488 | Poster | 2212.08724 | [
"https://github.com/peterfengyx/dunst"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.489.bib | https://aclanthology.org/2023.acl-long.489/ | @inproceedings{cui-etal-2023-failure,
title = "What does the Failure to Reason with {``}Respectively{''} in Zero/Few-Shot Settings Tell Us about Language Models?",
author = "Cui, Ruixiang and
Lee, Seolhwa and
Hershcovich, Daniel and
S{\o}gaard, Anders",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.489",
doi = "10.18653/v1/2023.acl-long.489",
pages = "8786--8800",
abstract = "Humans can effortlessly understand the coordinate structure of sentences such as {``}Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, *respectively*{''}. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of {``}respectively{''}. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions.",
}
| Humans can effortlessly understand the coordinate structure of sentences such as {``}Niels Bohr and Kurt Cobain were born in Copenhagen and Seattle, *respectively*{''}. In the context of natural language inference (NLI), we examine how language models (LMs) reason with respective readings (Gawron and Kehler, 2004) from two perspectives: syntactic-semantic and commonsense-world knowledge. We propose a controlled synthetic dataset WikiResNLI and a naturally occurring dataset NatResNLI to encompass various explicit and implicit realizations of {``}respectively{''}. We show that fine-tuned NLI models struggle with understanding such readings without explicit supervision. While few-shot learning is easy in the presence of explicit cues, longer training is required when the reading is evoked implicitly, leaving models to rely on common sense inferences. Furthermore, our fine-grained analysis indicates models fail to generalize across different constructions. To conclude, we demonstrate that LMs still lag behind humans in generalizing to the long tail of linguistic constructions. | [
"Cui, Ruixiang",
"Lee, Seolhwa",
"Hershcovich, Daniel",
"S{\\o}gaard, Anders"
] | What does the Failure to Reason with “Respectively” in Zero/Few-Shot Settings Tell Us about Language Models? | acl-long.489 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.490.bib | https://aclanthology.org/2023.acl-long.490/ | @inproceedings{orgad-belinkov-2023-blind,
title = "{BLIND}: Bias Removal With No Demographics",
author = "Orgad, Hadas and
Belinkov, Yonatan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.490",
doi = "10.18653/v1/2023.acl-long.490",
pages = "8801--8821",
abstract = "Models trained on real-world data tend to imitate and amplify social biases. Common methods to mitigate biases require prior information on the types of biases that should be mitigated (e.g., gender or racial bias) and the social groups associated with each data sample. In this work, we introduce BLIND, a method for bias removal with no prior knowledge of the demographics in the dataset. While training a model on a downstream task, BLIND detects biased samples using an auxiliary model that predicts the main model{'}s success, and down-weights those samples during the training process. Experiments with racial and gender biases in sentiment classification and occupation classification tasks demonstrate that BLIND mitigates social biases without relying on a costly demographic annotation process. Our method is competitive with other methods that require demographic information and sometimes even surpasses them.",
}
| Models trained on real-world data tend to imitate and amplify social biases. Common methods to mitigate biases require prior information on the types of biases that should be mitigated (e.g., gender or racial bias) and the social groups associated with each data sample. In this work, we introduce BLIND, a method for bias removal with no prior knowledge of the demographics in the dataset. While training a model on a downstream task, BLIND detects biased samples using an auxiliary model that predicts the main model{'}s success, and down-weights those samples during the training process. Experiments with racial and gender biases in sentiment classification and occupation classification tasks demonstrate that BLIND mitigates social biases without relying on a costly demographic annotation process. Our method is competitive with other methods that require demographic information and sometimes even surpasses them. | [
"Orgad, Hadas",
"Belinkov, Yonatan"
] | BLIND: Bias Removal With No Demographics | acl-long.490 | Poster | 2212.10563 | [
"https://github.com/technion-cs-nlp/blind"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.491.bib | https://aclanthology.org/2023.acl-long.491/ | @inproceedings{dyrmishi-etal-2023-humans,
title = "How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks",
author = "Dyrmishi, Salijona and
Ghamizi, Salah and
Cordy, Maxime",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.491",
doi = "10.18653/v1/2023.acl-long.491",
pages = "8822--8836",
abstract = "Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks {--} malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions. However, evaluations of these attacks ignore the property of imperceptibility or study it under limited settings. This entails that adversarial perturbations would not pass any human quality gate and do not represent real threats to human-checked NLP systems. To bypass this limitation and enable proper assessment (and later, improvement) of NLP model robustness, we have surveyed 378 human participants about the perceptibility of text adversarial examples produced by state-of-the-art methods. Our results underline that existing text attacks are impractical in real-world scenarios where humans are involved. This contrasts with previous smaller-scale human studies, which reported overly optimistic conclusions regarding attack success. Through our work, we hope to position human perceptibility as a first-class success criterion for text attacks, and provide guidance for research to build effective attack algorithms and, in turn, design appropriate defence mechanisms.",
}
| Natural Language Processing (NLP) models based on Machine Learning (ML) are susceptible to adversarial attacks {--} malicious algorithms that imperceptibly modify input text to force models into making incorrect predictions. However, evaluations of these attacks ignore the property of imperceptibility or study it under limited settings. This entails that adversarial perturbations would not pass any human quality gate and do not represent real threats to human-checked NLP systems. To bypass this limitation and enable proper assessment (and later, improvement) of NLP model robustness, we have surveyed 378 human participants about the perceptibility of text adversarial examples produced by state-of-the-art methods. Our results underline that existing text attacks are impractical in real-world scenarios where humans are involved. This contrasts with previous smaller-scale human studies, which reported overly optimistic conclusions regarding attack success. Through our work, we hope to position human perceptibility as a first-class success criterion for text attacks, and provide guidance for research to build effective attack algorithms and, in turn, design appropriate defence mechanisms. | [
"Dyrmishi, Salijona",
"Ghamizi, Salah",
"Cordy, Maxime"
] | How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks | acl-long.491 | Poster | 2305.15587 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.492.bib | https://aclanthology.org/2023.acl-long.492/ | @inproceedings{stefanik-etal-2023-soft,
title = "Soft Alignment Objectives for Robust Adaptation of Language Generation",
author = "{\v{S}}tef{\'a}nik, Michal and
Kadlcik, Marek and
Sojka, Petr",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.492",
doi = "10.18653/v1/2023.acl-long.492",
pages = "8837--8853",
abstract = "Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application. However, the traditional adaptation by further training on in-domain data rapidly weakens the model{'}s ability to generalize to other domains, making the open-ended deployments of the adapted models prone to errors. This work introduces novel training objectives built upon a semantic similarity of the predicted tokens to the reference. Our results show that (1) avoiding the common assumption of a single correct prediction by constructing the training target from tokens{'} semantic similarity can largely mitigate catastrophic forgetting of adaptation, while (2) preserving the adaptation in-domain quality, (3) with negligible additions to compute costs. In the broader context, the objectives grounded in a continuous token similarity pioneer the exploration of the middle ground between the efficient but naive exact-match token-level objectives and expressive but computationally- and resource-intensive sequential objectives.",
}
| Domain adaptation allows generative language models to address specific flaws caused by the domain shift of their application. However, the traditional adaptation by further training on in-domain data rapidly weakens the model{'}s ability to generalize to other domains, making the open-ended deployments of the adapted models prone to errors. This work introduces novel training objectives built upon a semantic similarity of the predicted tokens to the reference. Our results show that (1) avoiding the common assumption of a single correct prediction by constructing the training target from tokens{'} semantic similarity can largely mitigate catastrophic forgetting of adaptation, while (2) preserving the adaptation in-domain quality, (3) with negligible additions to compute costs. In the broader context, the objectives grounded in a continuous token similarity pioneer the exploration of the middle ground between the efficient but naive exact-match token-level objectives and expressive but computationally- and resource-intensive sequential objectives. | [
"{\\v{S}}tef{\\'a}nik, Michal",
"Kadlcik, Marek",
"Sojka, Petr"
] | Soft Alignment Objectives for Robust Adaptation of Language Generation | acl-long.492 | Poster | 2211.16550 | [
"https://github.com/mir-mu/softalign_objectives"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.493.bib | https://aclanthology.org/2023.acl-long.493/ | @inproceedings{adolphs-etal-2023-cringe,
title = "The {CRINGE} Loss: Learning what language not to model",
author = "Adolphs, Leonard and
Gao, Tianyu and
Xu, Jing and
Shuster, Kurt and
Sukhbaatar, Sainbayar and
Weston, Jason",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.493",
doi = "10.18653/v1/2023.acl-long.493",
pages = "8854--8874",
abstract = "Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data {--} examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the {``}CRINGE{''} loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement.",
}
| Standard language model training employs gold human documents or human-human interaction data, and treats all training data as positive examples. Growing evidence shows that even with very large amounts of positive training data, issues remain that can be alleviated with relatively small amounts of negative data {--} examples of what the model should not do. In this work, we propose a novel procedure to train with such data called the {``}CRINGE{''} loss (ContRastive Iterative Negative GEneration). We show the effectiveness of this approach across three different experiments on the tasks of safe generation, contradiction avoidance, and open-domain dialogue. Our models outperform multiple strong baselines and are conceptually simple, easy to train and implement. | [
"Adolphs, Leonard",
"Gao, Tianyu",
"Xu, Jing",
"Shuster, Kurt",
"Sukhbaatar, Sainbayar",
"Weston, Jason"
] | The CRINGE Loss: Learning what language not to model | acl-long.493 | Poster | 2211.05826 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.494.bib | https://aclanthology.org/2023.acl-long.494/ | @inproceedings{ye-etal-2023-modeling,
title = "Modeling User Satisfaction Dynamics in Dialogue via {H}awkes Process",
author = "Ye, Fanghua and
Hu, Zhiyuan and
Yilmaz, Emine",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.494",
doi = "10.18653/v1/2023.acl-long.494",
pages = "8875--8889",
abstract = "Dialogue systems have received increasing attention while automatically evaluating their performance remains challenging. User satisfaction estimation (USE) has been proposed as an alternative. It assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users. The effectiveness of USE depends heavily on the estimator. Existing estimators independently predict user satisfaction at each turn and ignore satisfaction dynamics across turns within a dialogue. In order to fully simulate users, it is crucial to take satisfaction dynamics into account. To fill this gap, we propose a new estimator ASAP (sAtisfaction eStimation via HAwkes Process) that treats user satisfaction across turns as an event sequence and employs a Hawkes process to effectively model the dynamics in this sequence. Experimental results on four benchmark dialogue datasets demonstrate that ASAP can substantially outperform state-of-the-art baseline estimators.",
}
| Dialogue systems have received increasing attention while automatically evaluating their performance remains challenging. User satisfaction estimation (USE) has been proposed as an alternative. It assumes that the performance of a dialogue system can be measured by user satisfaction and uses an estimator to simulate users. The effectiveness of USE depends heavily on the estimator. Existing estimators independently predict user satisfaction at each turn and ignore satisfaction dynamics across turns within a dialogue. In order to fully simulate users, it is crucial to take satisfaction dynamics into account. To fill this gap, we propose a new estimator ASAP (sAtisfaction eStimation via HAwkes Process) that treats user satisfaction across turns as an event sequence and employs a Hawkes process to effectively model the dynamics in this sequence. Experimental results on four benchmark dialogue datasets demonstrate that ASAP can substantially outperform state-of-the-art baseline estimators. | [
"Ye, Fanghua",
"Hu, Zhiyuan",
"Yilmaz, Emine"
] | Modeling User Satisfaction Dynamics in Dialogue via Hawkes Process | acl-long.494 | Poster | 2305.12594 | [
"https://github.com/smartyfh/asap"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.495.bib | https://aclanthology.org/2023.acl-long.495/ | @inproceedings{yadav-etal-2023-towards,
title = "Towards Identifying Fine-Grained Depression Symptoms from Memes",
author = "Yadav, Shweta and
Caragea, Cornelia and
Zhao, Chenye and
Kumari, Naincy and
Solberg, Marvin and
Sharma, Tanmay",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.495",
doi = "10.18653/v1/2023.acl-long.495",
pages = "8890--8905",
abstract = "The past decade has observed significant attention toward developing computational methods for classifying social media data based on the presence or absence of mental health conditions. In the context of mental health, for clinicians to make an accurate diagnosis or provide personalized intervention, it is crucial to identify fine-grained mental health symptoms. To this end, we conduct a focused study on depression disorder and introduce a new task of identifying fine-grained depressive symptoms from memes. Toward this, we create a high-quality dataset (RESTORE) annotated with 8 fine-grained depression symptoms based on the clinically adopted PHQ-9 questionnaire. We benchmark RESTORE on 20 strong monomodal and multimodal methods. Additionally, we show how imposing orthogonal constraints on textual and visual feature representations in a multimodal setting can enforce the model to learn non-redundant and de-correlated features leading to a better prediction of fine-grained depression symptoms. Further, we conduct an extensive human analysis and elaborate on the limitations of existing multimodal models that often overlook the implicit connection between visual and textual elements of a meme.",
}
| The past decade has observed significant attention toward developing computational methods for classifying social media data based on the presence or absence of mental health conditions. In the context of mental health, for clinicians to make an accurate diagnosis or provide personalized intervention, it is crucial to identify fine-grained mental health symptoms. To this end, we conduct a focused study on depression disorder and introduce a new task of identifying fine-grained depressive symptoms from memes. Toward this, we create a high-quality dataset (RESTORE) annotated with 8 fine-grained depression symptoms based on the clinically adopted PHQ-9 questionnaire. We benchmark RESTORE on 20 strong monomodal and multimodal methods. Additionally, we show how imposing orthogonal constraints on textual and visual feature representations in a multimodal setting can enforce the model to learn non-redundant and de-correlated features leading to a better prediction of fine-grained depression symptoms. Further, we conduct an extensive human analysis and elaborate on the limitations of existing multimodal models that often overlook the implicit connection between visual and textual elements of a meme. | [
"Yadav, Shweta",
"Caragea, Cornelia",
"Zhao, Chenye",
"Kumari, Naincy",
"Solberg, Marvin",
"Sharma, Tanmay"
] | Towards Identifying Fine-Grained Depression Symptoms from Memes | acl-long.495 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.496.bib | https://aclanthology.org/2023.acl-long.496/ | @inproceedings{shon-etal-2023-slue,
title = "{SLUE} Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks",
author = "Shon, Suwon and
Arora, Siddhant and
Lin, Chyi-Jiunn and
Pasad, Ankita and
Wu, Felix and
Sharma, Roshan and
Wu, Wei-Lun and
Lee, Hung-yi and
Livescu, Karen and
Watanabe, Shinji",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.496",
doi = "10.18653/v1/2023.acl-long.496",
pages = "8906--8937",
abstract = "Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will release a new benchmark suite, including for each task (i) curated annotations for a relatively small fine-tuning set, (ii) reproducible pipeline (speech recognizer + text model) and end-to-end baseline models and evaluation metrics, (iii) baseline model performance in various types of systems for easy comparisons. We present the details of data collection and annotation and the performance of the baseline models. We also analyze the sensitivity of pipeline models{'} performance to the speech recognition accuracy, using more than 20 publicly availablespeech recognition models.",
}
| Spoken language understanding (SLU) tasks have been studied for many decades in the speech research community, but have not received as much attention as lower-level tasks like speech and speaker recognition. In this work, we introduce several new annotated SLU benchmark tasks based on freely available speech data, which complement existing benchmarks and address gaps in the SLU evaluation landscape. We contribute four tasks: question answering and summarization involve inference over longer speech sequences; named entity localization addresses the speech-specific task of locating the targeted content in the signal; dialog act classification identifies the function of a given speech utterance. In order to facilitate the development of SLU models that leverage the success of pre-trained speech representations, we will release a new benchmark suite, including for each task (i) curated annotations for a relatively small fine-tuning set, (ii) reproducible pipeline (speech recognizer + text model) and end-to-end baseline models and evaluation metrics, (iii) baseline model performance in various types of systems for easy comparisons. We present the details of data collection and annotation and the performance of the baseline models. We also analyze the sensitivity of pipeline models{'} performance to the speech recognition accuracy, using more than 20 publicly available speech recognition models. | [
"Shon, Suwon",
"Arora, Siddhant",
"Lin, Chyi-Jiunn",
"Pasad, Ankita",
"Wu, Felix",
"Sharma, Roshan",
"Wu, Wei-Lun",
"Lee, Hung-yi",
"Livescu, Karen",
"Watanabe, Shinji"
] | SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks | acl-long.496 | Poster | 2212.10525 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.497.bib | https://aclanthology.org/2023.acl-long.497/ | @inproceedings{holur-etal-2023-side,
title = "My side, your side and the evidence: Discovering aligned actor groups and the narratives they weave",
author = "Holur, Pavan and
Chong, David and
Tangherlini, Timothy and
Roychowdhury, Vwani",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.497",
doi = "10.18653/v1/2023.acl-long.497",
pages = "8938--8952",
abstract = "News reports about emerging issues often include several conflicting story lines. Individual stories can be conceptualized as samples from an underlying mixture of competing narratives. The automated identification of these distinct narratives from unstructured text is a fundamental yet difficult task in Computational Linguistics since narratives are often intertwined and only implicitly conveyed in text. In this paper, we consider a more feasible proxy task: Identify the distinct sets of aligned story actors responsible for sustaining the issue-specific narratives. Discovering aligned actors, and the groups these alignments create, brings us closer to estimating the narrative that each group represents. With the help of Large Language Models (LLM), we address this task by: (i) Introducing a corpus of text segments rich in narrative content associated with six different current issues; (ii) Introducing a novel two-step graph-based framework that (a) identifies alignments between actors (INCANT) and (b) extracts aligned actor groups using the network structure (TAMPA). Amazon Mechanical Turk evaluations demonstrate the effectiveness of our framework. Across domains, alignment relationships from INCANT are accurate (macro F1 {\textgreater}= 0.75) and actor groups from TAMPA are preferred over 2 non-trivial baseline models (ACC {\textgreater}= 0.75).",
}
| News reports about emerging issues often include several conflicting story lines. Individual stories can be conceptualized as samples from an underlying mixture of competing narratives. The automated identification of these distinct narratives from unstructured text is a fundamental yet difficult task in Computational Linguistics since narratives are often intertwined and only implicitly conveyed in text. In this paper, we consider a more feasible proxy task: Identify the distinct sets of aligned story actors responsible for sustaining the issue-specific narratives. Discovering aligned actors, and the groups these alignments create, brings us closer to estimating the narrative that each group represents. With the help of Large Language Models (LLM), we address this task by: (i) Introducing a corpus of text segments rich in narrative content associated with six different current issues; (ii) Introducing a novel two-step graph-based framework that (a) identifies alignments between actors (INCANT) and (b) extracts aligned actor groups using the network structure (TAMPA). Amazon Mechanical Turk evaluations demonstrate the effectiveness of our framework. Across domains, alignment relationships from INCANT are accurate (macro F1 {\textgreater}= 0.75) and actor groups from TAMPA are preferred over 2 non-trivial baseline models (ACC {\textgreater}= 0.75). | [
"Holur, Pavan",
"Chong, David",
"Tangherlini, Timothy",
"Roychowdhury, Vwani"
] | My side, your side and the evidence: Discovering aligned actor groups and the narratives they weave | acl-long.497 | Oral | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.acl-long.498.bib | https://aclanthology.org/2023.acl-long.498/ | @inproceedings{chang-etal-2023-characterizing,
title = "Characterizing and Measuring Linguistic Dataset Drift",
author = "Chang, Tyler and
Halder, Kishaloy and
Anna John, Neha and
Vyas, Yogarshi and
Benajiba, Yassine and
Ballesteros, Miguel and
Roth, Dan",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.498",
doi = "10.18653/v1/2023.acl-long.498",
pages = "8953--8967",
abstract = "NLP models often degrade in performance when real world data distributions differ markedly from training data. However, existing dataset drift metrics in NLP have generally not considered specific dimensions of linguistic drift that affect model performance, and they have not been validated in their ability to predict model performance at the individual example level, where such metrics are often used in practice. In this paper, we propose three dimensions of linguistic dataset drift: vocabulary, structural, and semantic drift. These dimensions correspond to content word frequency divergences, syntactic divergences, and meaning changes not captured by word frequencies (e.g. lexical semantic change). We propose interpretable metrics for all three drift dimensions, and we modify past performance prediction methods to predict model performance at both the example and dataset level for English sentiment classification and natural language inference. We find that our drift metrics are more effective than previous metrics at predicting out-of-domain model accuracies (mean 16.8{\%} root mean square error decrease), particularly when compared to popular fine-tuned embedding distances (mean 47.7{\%} error decrease). Fine-tuned embedding distances are much more effective at ranking individual examples by expected performance, but decomposing into vocabulary, structural, and semantic drift produces the best example rankings of all considered model-agnostic drift metrics (mean 6.7{\%} ROC AUC increase).",
}
| NLP models often degrade in performance when real world data distributions differ markedly from training data. However, existing dataset drift metrics in NLP have generally not considered specific dimensions of linguistic drift that affect model performance, and they have not been validated in their ability to predict model performance at the individual example level, where such metrics are often used in practice. In this paper, we propose three dimensions of linguistic dataset drift: vocabulary, structural, and semantic drift. These dimensions correspond to content word frequency divergences, syntactic divergences, and meaning changes not captured by word frequencies (e.g. lexical semantic change). We propose interpretable metrics for all three drift dimensions, and we modify past performance prediction methods to predict model performance at both the example and dataset level for English sentiment classification and natural language inference. We find that our drift metrics are more effective than previous metrics at predicting out-of-domain model accuracies (mean 16.8{\%} root mean square error decrease), particularly when compared to popular fine-tuned embedding distances (mean 47.7{\%} error decrease). Fine-tuned embedding distances are much more effective at ranking individual examples by expected performance, but decomposing into vocabulary, structural, and semantic drift produces the best example rankings of all considered model-agnostic drift metrics (mean 6.7{\%} ROC AUC increase). | [
"Chang, Tyler",
"Halder, Kishaloy",
"Anna John, Neha",
"Vyas, Yogarshi",
"Benajiba, Yassine",
"Ballesteros, Miguel",
"Roth, Dan"
] | Characterizing and Measuring Linguistic Dataset Drift | acl-long.498 | Poster | 2305.17127 | [
"https://github.com/amazon-science/characterizing-measuring-linguistic-drift"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.acl-long.499.bib | https://aclanthology.org/2023.acl-long.499/ | @inproceedings{qin-etal-2023-webcpm,
title = "{W}eb{CPM}: Interactive Web Search for {C}hinese Long-form Question Answering",
author = "Qin, Yujia and
Cai, Zihan and
Jin, Dian and
Yan, Lan and
Liang, Shihao and
Zhu, Kunlun and
Lin, Yankai and
Han, Xu and
Ding, Ning and
Wang, Huadong and
Xie, Ruobing and
Qi, Fanchao and
Liu, Zhiyuan and
Sun, Maosong and
Zhou, Jie",
editor = "Rogers, Anna and
Boyd-Graber, Jordan and
Okazaki, Naoaki",
booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.acl-long.499",
doi = "10.18653/v1/2023.acl-long.499",
pages = "8968--8988",
abstract = "Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 15,372 supporting facts and 125,954 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5{\%} and 47.5{\%} of the cases on our dataset and DuReader, respectively. The interface, dataset, and codes are publicly available at \url{https://github.com/thunlp/WebCPM}.",
}
| Long-form question answering (LFQA) aims at answering complex, open-ended questions with detailed, paragraph-length responses. The de facto paradigm of LFQA necessitates two procedures: information retrieval, which searches for relevant supporting facts, and information synthesis, which integrates these facts into a coherent answer. In this paper, we introduce WebCPM, the first Chinese LFQA dataset. One unique feature of WebCPM is that its information retrieval is based on interactive web search, which engages with a search engine in real time. Following WebGPT, we develop a web search interface. We recruit annotators to search for relevant information using our interface and then answer questions. Meanwhile, the web search behaviors of our annotators would be recorded. In total, we collect 5,500 high-quality question-answer pairs, together with 15,372 supporting facts and 125,954 web search actions. We fine-tune pre-trained language models to imitate human behaviors for web search and to generate answers based on the collected facts. Our LFQA pipeline, built on these fine-tuned models, generates answers that are no worse than human-written ones in 32.5{\%} and 47.5{\%} of the cases on our dataset and DuReader, respectively. The interface, dataset, and codes are publicly available at \url{https://github.com/thunlp/WebCPM}. | [
"Qin, Yujia",
"Cai, Zihan",
"Jin, Dian",
"Yan, Lan",
"Liang, Shihao",
"Zhu, Kunlun",
"Lin, Yankai",
"Han, Xu",
"Ding, Ning",
"Wang, Huadong",
"Xie, Ruobing",
"Qi, Fanchao",
"Liu, Zhiyuan",
"Sun, Maosong",
"Zhou, Jie"
] | WebCPM: Interactive Web Search for Chinese Long-form Question Answering | acl-long.499 | Poster | 2305.06849 | [
"https://github.com/thunlp/webcpm"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |