bibtex_url | proceedings | bibtext | abstract | authors | title | id | type | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | paper_page_exists_pre_conf | Models | Datasets | Spaces |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.nlp4convai-1.7.bib | https://aclanthology.org/2023.nlp4convai-1.7/ | @inproceedings{de-raedt-etal-2023-idas,
title = "{IDAS}: Intent Discovery with Abstractive Summarization",
author = "De Raedt, Maarten and
Godin, Fr{\'e}deric and
Demeester, Thomas and
Develder, Chris",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.7",
doi = "10.18653/v1/2023.nlp4convai-1.7",
pages = "71--88",
abstract = "Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., {``}labels{''}, that retain the core elements while removing non-essential information. We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure to generate labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder, and subsequently clustered to recover the latent intents. For the unsupervised task (without any intent labels) IDAS outperforms the state-of-the-art by up to +7.42{\%} in standard cluster metrics for the Banking, StackOverflow, and Transport datasets. For the semi-supervised task (with labels for a subset of intents) IDAS surpasses 2 recent methods on the CLINC benchmark without even using labeled data.",
}
| Intent discovery is the task of inferring latent intents from a set of unlabeled utterances, and is a useful step towards the efficient creation of new conversational agents. We show that recent competitive methods in intent discovery can be outperformed by clustering utterances based on abstractive summaries, i.e., {``}labels{''}, that retain the core elements while removing non-essential information. We contribute the IDAS approach, which collects a set of descriptive utterance labels by prompting a Large Language Model, starting from a well-chosen seed set of prototypical utterances, to bootstrap an In-Context Learning procedure to generate labels for non-prototypical utterances. The utterances and their resulting noisy labels are then encoded by a frozen pre-trained encoder, and subsequently clustered to recover the latent intents. For the unsupervised task (without any intent labels) IDAS outperforms the state-of-the-art by up to +7.42{\%} in standard cluster metrics for the Banking, StackOverflow, and Transport datasets. For the semi-supervised task (with labels for a subset of intents) IDAS surpasses 2 recent methods on the CLINC benchmark without even using labeled data. | [
"De Raedt, Maarten",
"Godin, Fr{\\'e}deric",
"Demeester, Thomas",
"Develder, Chris"
] | IDAS: Intent Discovery with Abstractive Summarization | nlp4convai-1.7 | Poster | 2305.19783 | [
"https://github.com/maarten-deraedt/idas-intent-discovery-with-abstract-summarization"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.nlp4convai-1.8.bib | https://aclanthology.org/2023.nlp4convai-1.8/ | @inproceedings{zhan-etal-2023-user,
title = "User Simulator Assisted Open-ended Conversational Recommendation System",
author = "Zhan, Qiusi and
Guo, Xiaojie and
Ji, Heng and
Wu, Lingfei",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.8",
doi = "10.18653/v1/2023.nlp4convai-1.8",
pages = "89--101",
abstract = "Conversational recommendation systems (CRS) have gained popularity in e-commerce as they can recommend items during user interactions. However, current open-ended CRS have limited recommendation performance due to their short-sighted training process, which only predicts one utterance at a time without considering its future impact. To address this, we propose a User Simulator (US) that communicates with the CRS using natural language based on given user preferences, enabling long-term reinforcement learning. We also introduce a framework that uses reinforcement learning (RL) with two novel rewards, i.e., recommendation and conversation rewards, to train the CRS. This approach considers the long-term goals and improves both the conversation and recommendation performance of the CRS. Our experiments show that our proposed framework improves the recall of recommendations by almost 100{\%}. Moreover, human evaluation demonstrates the superiority of our framework in enhancing the informativeness of generated utterances.",
}
| Conversational recommendation systems (CRS) have gained popularity in e-commerce as they can recommend items during user interactions. However, current open-ended CRS have limited recommendation performance due to their short-sighted training process, which only predicts one utterance at a time without considering its future impact. To address this, we propose a User Simulator (US) that communicates with the CRS using natural language based on given user preferences, enabling long-term reinforcement learning. We also introduce a framework that uses reinforcement learning (RL) with two novel rewards, i.e., recommendation and conversation rewards, to train the CRS. This approach considers the long-term goals and improves both the conversation and recommendation performance of the CRS. Our experiments show that our proposed framework improves the recall of recommendations by almost 100{\%}. Moreover, human evaluation demonstrates the superiority of our framework in enhancing the informativeness of generated utterances. | [
"Zhan, Qiusi",
"Guo, Xiaojie",
"Ji, Heng",
"Wu, Lingfei"
] | User Simulator Assisted Open-ended Conversational Recommendation System | nlp4convai-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlp4convai-1.9.bib | https://aclanthology.org/2023.nlp4convai-1.9/ | @inproceedings{aggarwal-etal-2023-evaluating,
title = "Evaluating Inter-Bilingual Semantic Parsing for {I}ndian Languages",
author = "Aggarwal, Divyanshu and
Gupta, Vivek and
Kunchukuttan, Anoop",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.9",
doi = "10.18653/v1/2023.nlp4convai-1.9",
pages = "102--122",
abstract = "Despite significant progress in Natural Language Generation for Indian languages (IndicNLP), there is a lack of datasets around complex structured tasks such as semantic parsing. One reason for this imminent gap is the complexity of the logical form, which makes English to multilingual translation difficult. The process involves alignment of logical forms, intents and slots with translated unstructured utterance. To address this, we propose an Inter-bilingual Seq2seq Semantic parsing dataset IE-SemParse Suite for 11 distinct Indian languages. We highlight the proposed task{'}s practicality, and evaluate existing multilingual seq2seq models across several train-test strategies. Our experiment reveals a high correlation across performance of original multilingual semantic parsing datasets (such as mTOP, multilingual TOP and multiATIS++) and our proposed IE-SemParse suite.",
}
| Despite significant progress in Natural Language Generation for Indian languages (IndicNLP), there is a lack of datasets around complex structured tasks such as semantic parsing. One reason for this imminent gap is the complexity of the logical form, which makes English to multilingual translation difficult. The process involves alignment of logical forms, intents and slots with translated unstructured utterance. To address this, we propose an Inter-bilingual Seq2seq Semantic parsing dataset IE-SemParse Suite for 11 distinct Indian languages. We highlight the proposed task{'}s practicality, and evaluate existing multilingual seq2seq models across several train-test strategies. Our experiment reveals a high correlation across performance of original multilingual semantic parsing datasets (such as mTOP, multilingual TOP and multiATIS++) and our proposed IE-SemParse suite. | [
"Aggarwal, Divyanshu",
"Gupta, Vivek",
"Kunchukuttan, Anoop"
] | Evaluating Inter-Bilingual Semantic Parsing for Indian Languages | nlp4convai-1.9 | Poster | 2304.13005 | [
"https://github.com/divyanshuaggarwal/indic-semparse"
] | https://huggingface.co/papers/2304.13005 | 1 | 0 | 0 | 3 | 1 | [] | [
"Divyanshu/IE_SemParse"
] | [] |
https://aclanthology.org/2023.nlp4convai-1.10.bib | https://aclanthology.org/2023.nlp4convai-1.10/ | @inproceedings{xu-chen-2023-zero,
title = "Zero-Shot Dialogue Relation Extraction by Relating Explainable Triggers and Relation Names",
author = "Xu, Ze-Song and
Chen, Yun-Nung",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.10",
doi = "10.18653/v1/2023.nlp4convai-1.10",
pages = "123--128",
abstract = "Developing dialogue relation extraction (DRE) systems often requires a large amount of labeled data, which can be costly and time-consuming to annotate. In order to improve scalability and support diverse, unseen relation extraction, this paper proposes a method for leveraging the ability to capture triggers and relate them to previously unseen relation names. Specifically, we introduce a model that enables zero-shot dialogue relation extraction by utilizing trigger-capturing capabilities. Our experiments on a benchmark DialogRE dataset demonstrate that the proposed model achieves significant improvements for both seen and unseen relations. Notably, this is the first attempt at zero-shot dialogue relation extraction using trigger-capturing capabilities, and our results suggest that this approach is effective for inferring previously unseen relation types. Overall, our findings highlight the potential for this method to enhance the scalability and practicality of DRE systems.",
}
| Developing dialogue relation extraction (DRE) systems often requires a large amount of labeled data, which can be costly and time-consuming to annotate. In order to improve scalability and support diverse, unseen relation extraction, this paper proposes a method for leveraging the ability to capture triggers and relate them to previously unseen relation names. Specifically, we introduce a model that enables zero-shot dialogue relation extraction by utilizing trigger-capturing capabilities. Our experiments on a benchmark DialogRE dataset demonstrate that the proposed model achieves significant improvements for both seen and unseen relations. Notably, this is the first attempt at zero-shot dialogue relation extraction using trigger-capturing capabilities, and our results suggest that this approach is effective for inferring previously unseen relation types. Overall, our findings highlight the potential for this method to enhance the scalability and practicality of DRE systems. | [
"Xu, Ze-Song",
"Chen, Yun-Nung"
] | Zero-Shot Dialogue Relation Extraction by Relating Explainable Triggers and Relation Names | nlp4convai-1.10 | Poster | 2306.06141 | [
"https://github.com/miulab/unseendre"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.nlp4convai-1.11.bib | https://aclanthology.org/2023.nlp4convai-1.11/ | @inproceedings{lopez-latouche-etal-2023-generating,
title = "Generating Video Game Scripts with Style",
author = "Lopez Latouche, Gaetan and
Marcotte, Laurence and
Swanson, Ben",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.11",
doi = "10.18653/v1/2023.nlp4convai-1.11",
pages = "129--139",
abstract = "While modern language models can generate a scripted scene in the format of a play, movie, or video game cutscene the quality of machine generated text remains behind that of human authors. In this work, we focus on one aspect of this quality gap; generating text in the style of an arbitrary and unseen character. We propose the Style Adaptive Semiparametric Scriptwriter (SASS) which leverages an adaptive weighted style memory to generate dialog lines in accordance with a character{'}s speaking patterns. Using the LIGHT dataset as well as a new corpus of scripts from twenty-three AAA video games, we show that SASS not only outperforms similar models but in some cases can also be used in conjunction with them to yield further improvement.",
}
| While modern language models can generate a scripted scene in the format of a play, movie, or video game cutscene the quality of machine generated text remains behind that of human authors. In this work, we focus on one aspect of this quality gap; generating text in the style of an arbitrary and unseen character. We propose the Style Adaptive Semiparametric Scriptwriter (SASS) which leverages an adaptive weighted style memory to generate dialog lines in accordance with a character{'}s speaking patterns. Using the LIGHT dataset as well as a new corpus of scripts from twenty-three AAA video games, we show that SASS not only outperforms similar models but in some cases can also be used in conjunction with them to yield further improvement. | [
"Lopez Latouche, Gaetan",
"Marcotte, Laurence",
"Swanson, Ben"
] | Generating Video Game Scripts with Style | nlp4convai-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlp4convai-1.12.bib | https://aclanthology.org/2023.nlp4convai-1.12/ | @inproceedings{ganesh-etal-2023-survey,
title = "A Survey of Challenges and Methods in the Computational Modeling of Multi-Party Dialog",
author = "Ganesh, Ananya and
Palmer, Martha and
Kann, Katharina",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.12",
doi = "10.18653/v1/2023.nlp4convai-1.12",
pages = "140--154",
abstract = "Advances in conversational AI systems, powered in particular by large language models, have facilitated rapid progress in understanding and generating dialog. Typically, task-oriented or open-domain dialog systems have been designed to work with two-party dialog, i.e., the exchange of utterances between a single user and a dialog system. However, modern dialog systems may be deployed in scenarios such as classrooms or meetings where conversational analysis of multiple speakers is required. This survey will present research around computational modeling of {``}multi-party dialog{''}, outlining differences from two-party dialog, challenges and issues in working with multi-party dialog, and methods for representing multi-party dialog. We also provide an overview of dialog datasets created for the study of multi-party dialog, as well as tasks that are of interest in this domain.",
}
| Advances in conversational AI systems, powered in particular by large language models, have facilitated rapid progress in understanding and generating dialog. Typically, task-oriented or open-domain dialog systems have been designed to work with two-party dialog, i.e., the exchange of utterances between a single user and a dialog system. However, modern dialog systems may be deployed in scenarios such as classrooms or meetings where conversational analysis of multiple speakers is required. This survey will present research around computational modeling of {``}multi-party dialog{''}, outlining differences from two-party dialog, challenges and issues in working with multi-party dialog, and methods for representing multi-party dialog. We also provide an overview of dialog datasets created for the study of multi-party dialog, as well as tasks that are of interest in this domain. | [
"Ganesh, Ananya",
"Palmer, Martha",
"Kann, Katharina"
] | A Survey of Challenges and Methods in the Computational Modeling of Multi-Party Dialog | nlp4convai-1.12 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlp4convai-1.13.bib | https://aclanthology.org/2023.nlp4convai-1.13/ | @inproceedings{gupta-etal-2023-conversational,
title = "Conversational Recommendation as Retrieval: A Simple, Strong Baseline",
author = "Gupta, Raghav and
Aksitov, Renat and
Phatale, Samrat and
Chaudhary, Simral and
Lee, Harrison and
Rastogi, Abhinav",
editor = "Chen, Yun-Nung and
Rastogi, Abhinav",
booktitle = "Proceedings of the 5th Workshop on NLP for Conversational AI (NLP4ConvAI 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlp4convai-1.13",
doi = "10.18653/v1/2023.nlp4convai-1.13",
pages = "155--160",
abstract = "Conversational recommendation systems (CRS) aim to recommend suitable items to users through natural language conversation. However, most CRS approaches do not effectively utilize the signal provided by these conversations. They rely heavily on explicit external knowledge e.g., knowledge graphs to augment the models{'} understanding of the items and attributes, which is quite hard to scale. To alleviate this, we propose an alternative information retrieval (IR)-styled approach to the CRS item recommendation task, where we represent conversations as queries and items as documents to be retrieved. We expand the document representation used for retrieval with conversations from the training set. With a simple BM25-based retriever, we show that our task formulation compares favorably with much more complex baselines using complex external knowledge on a popular CRS benchmark. We demonstrate further improvements using user-centric modeling and data augmentation to counter the cold start problem for CRSs.",
}
| Conversational recommendation systems (CRS) aim to recommend suitable items to users through natural language conversation. However, most CRS approaches do not effectively utilize the signal provided by these conversations. They rely heavily on explicit external knowledge e.g., knowledge graphs to augment the models{'} understanding of the items and attributes, which is quite hard to scale. To alleviate this, we propose an alternative information retrieval (IR)-styled approach to the CRS item recommendation task, where we represent conversations as queries and items as documents to be retrieved. We expand the document representation used for retrieval with conversations from the training set. With a simple BM25-based retriever, we show that our task formulation compares favorably with much more complex baselines using complex external knowledge on a popular CRS benchmark. We demonstrate further improvements using user-centric modeling and data augmentation to counter the cold start problem for CRSs. | [
"Gupta, Raghav",
"Aksitov, Renat",
"Phatale, Samrat",
"Chaudhary, Simral",
"Lee, Harrison",
"Rastogi, Abhinav"
] | Conversational Recommendation as Retrieval: A Simple, Strong Baseline | nlp4convai-1.13 | Poster | 2305.13725 | [
""
] | https://huggingface.co/papers/2305.13725 | 1 | 0 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2023.nlrse-1.1.bib | https://aclanthology.org/2023.nlrse-1.1/ | @inproceedings{sen-etal-2023-knowledge,
title = "Knowledge Graph-augmented Language Models for Complex Question Answering",
author = "Sen, Priyanka and
Mavadia, Sandeep and
Saffari, Amir",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.1",
doi = "10.18653/v1/2023.nlrse-1.1",
pages = "1--8",
abstract = "Large language models have shown impressive abilities to reason over input text, however, they are prone to hallucinations. On the other hand, end-to-end knowledge graph question answering (KGQA) models output responses grounded in facts, but they still struggle with complex reasoning, such as comparison or ordinal questions. In this paper, we propose a new method for complex question answering where we combine a knowledge graph retriever based on an end-to-end KGQA model with a language model that reasons over the retrieved facts to return an answer. We observe that augmenting language model prompts with retrieved KG facts improves performance over using a language model alone by an average of 83{\%}. In particular, we see improvements on complex questions requiring count, intersection, or multi-hop reasoning operations.",
}
| Large language models have shown impressive abilities to reason over input text, however, they are prone to hallucinations. On the other hand, end-to-end knowledge graph question answering (KGQA) models output responses grounded in facts, but they still struggle with complex reasoning, such as comparison or ordinal questions. In this paper, we propose a new method for complex question answering where we combine a knowledge graph retriever based on an end-to-end KGQA model with a language model that reasons over the retrieved facts to return an answer. We observe that augmenting language model prompts with retrieved KG facts improves performance over using a language model alone by an average of 83{\%}. In particular, we see improvements on complex questions requiring count, intersection, or multi-hop reasoning operations. | [
"Sen, Priyanka",
"Mavadia, S",
"eep",
"Saffari, Amir"
] | Knowledge Graph-augmented Language Models for Complex Question Answering | nlrse-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlrse-1.2.bib | https://aclanthology.org/2023.nlrse-1.2/ | @inproceedings{zhang-etal-2023-exploring,
title = "Exploring the Curious Case of Code Prompts",
author = "Zhang, Li and
Dugan, Liam and
Xu, Hainiu and
Callison-burch, Chris",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.2",
doi = "10.18653/v1/2023.nlrse-1.2",
pages = "9--17",
abstract = "Recent work has shown that prompting language models with code-like representations of natural language leads to performance improvements on structured reasoning tasks. However, such tasks comprise only a small subset of all natural language tasks. In our work, we seek to answer whether or not code-prompting is the preferred way of interacting with language models in general. We compare code and text prompts across three popular GPT models (davinci, code-davinci-002, and text-davinci-002) on a broader selection of tasks (e.g., QA, sentiment, summarization) and find that with few exceptions, code prompts do not consistently outperform text prompts. Furthermore, we show that the style of code prompt has a large effect on performance for some (but not all) tasks and that fine-tuning on text instructions leads to better relative performance of code prompts.",
}
| Recent work has shown that prompting language models with code-like representations of natural language leads to performance improvements on structured reasoning tasks. However, such tasks comprise only a small subset of all natural language tasks. In our work, we seek to answer whether or not code-prompting is the preferred way of interacting with language models in general. We compare code and text prompts across three popular GPT models (davinci, code-davinci-002, and text-davinci-002) on a broader selection of tasks (e.g., QA, sentiment, summarization) and find that with few exceptions, code prompts do not consistently outperform text prompts. Furthermore, we show that the style of code prompt has a large effect on performance for some (but not all) tasks and that fine-tuning on text instructions leads to better relative performance of code prompts. | [
"Zhang, Li",
"Dugan, Liam",
"Xu, Hainiu",
"Callison-burch, Chris"
] | Exploring the Curious Case of Code Prompts | nlrse-1.2 | Poster | 2304.13250 | [
"https://github.com/zharry29/codex_vs_gpt3"
] | https://huggingface.co/papers/2304.13250 | 1 | 0 | 0 | 4 | 1 | [] | [] | [] |
https://aclanthology.org/2023.nlrse-1.3.bib | https://aclanthology.org/2023.nlrse-1.3/ | @inproceedings{zaninello-magnini-2023-smashed,
title = "A smashed glass cannot be full: Generation of Commonsense Explanations through Prompt-based Few-shot Learning",
author = "Zaninello, Andrea and
Magnini, Bernardo",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.3",
doi = "10.18653/v1/2023.nlrse-1.3",
pages = "18--29",
abstract = "We assume that providing explanations is a process to elicit implicit knowledge in human communication, and propose a general methodology to generate commonsense explanations from pairs of semantically related sentences. We take advantage of both prompting applied to large, encoder-decoder pre-trained language models, and few-shot learning techniques, such as pattern-exploiting training. Experiments run on the e-SNLI dataset show that the proposed method achieves state-of-the-art results on the explanation generation task, with a substantial reduction of labelled data. The obtained results open new perspective on a number of tasks involving the elicitation of implicit knowledge.",
}
| We assume that providing explanations is a process to elicit implicit knowledge in human communication, and propose a general methodology to generate commonsense explanations from pairs of semantically related sentences. We take advantage of both prompting applied to large, encoder-decoder pre-trained language models, and few-shot learning techniques, such as pattern-exploiting training. Experiments run on the e-SNLI dataset show that the proposed method achieves state-of-the-art results on the explanation generation task, with a substantial reduction of labelled data. The obtained results open new perspective on a number of tasks involving the elicitation of implicit knowledge. | [
"Zaninello, Andrea",
"Magnini, Bernardo"
] | A smashed glass cannot be full: Generation of Commonsense Explanations through Prompt-based Few-shot Learning | nlrse-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlrse-1.4.bib | https://aclanthology.org/2023.nlrse-1.4/ | @inproceedings{feldhus-etal-2023-saliency,
title = "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods",
author = {Feldhus, Nils and
Hennig, Leonhard and
Nasert, Maximilian Dustin and
Ebert, Christopher and
Schwarzenberg, Robert and
M{\"o}ller, Sebastian},
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.4",
doi = "10.18653/v1/2023.nlrse-1.4",
pages = "30--46",
abstract = "Saliency maps can explain a neural model{'}s predictions by identifying important input features. They are difficult to interpret for laypeople, especially for instances with many features. In order to make them more accessible, we formalize the underexplored task of translating saliency maps into natural language and compare methods that address two key challenges of this approach {--} what and how to verbalize. In both automatic and human evaluation setups, using token-level attributions from text classification tasks, we compare two novel methods (search-based and instruction-based verbalizations) against conventional feature importance representations (heatmap visualizations and extractive rationales), measuring simulatability, faithfulness, helpfulness and ease of understanding. Instructing GPT-3.5 to generate saliency map verbalizations yields plausible explanations which include associations, abstractive summarization and commonsense reasoning, achieving by far the highest human ratings, but they are not faithfully capturing numeric information and are inconsistent in their interpretation of the task. In comparison, our search-based, model-free verbalization approach efficiently completes templated verbalizations, is faithful by design, but falls short in helpfulness and simulatability. Our results suggest that saliency map verbalization makes feature attribution explanations more comprehensible and less cognitively challenging to humans than conventional representations.",
}
| Saliency maps can explain a neural model{'}s predictions by identifying important input features. They are difficult to interpret for laypeople, especially for instances with many features. In order to make them more accessible, we formalize the underexplored task of translating saliency maps into natural language and compare methods that address two key challenges of this approach {--} what and how to verbalize. In both automatic and human evaluation setups, using token-level attributions from text classification tasks, we compare two novel methods (search-based and instruction-based verbalizations) against conventional feature importance representations (heatmap visualizations and extractive rationales), measuring simulatability, faithfulness, helpfulness and ease of understanding. Instructing GPT-3.5 to generate saliency map verbalizations yields plausible explanations which include associations, abstractive summarization and commonsense reasoning, achieving by far the highest human ratings, but they are not faithfully capturing numeric information and are inconsistent in their interpretation of the task. In comparison, our search-based, model-free verbalization approach efficiently completes templated verbalizations, is faithful by design, but falls short in helpfulness and simulatability. Our results suggest that saliency map verbalization makes feature attribution explanations more comprehensible and less cognitively challenging to humans than conventional representations. | [
"Feldhus, Nils",
"Hennig, Leonhard",
"Nasert, Maximilian Dustin",
"Ebert, Christopher",
"Schwarzenberg, Robert",
"M{\\\"o}ller, Sebastian"
] | Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods | nlrse-1.4 | Poster | 2210.07222 | [
"https://github.com/dfki-nlp/smv"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.nlrse-1.5.bib | https://aclanthology.org/2023.nlrse-1.5/ | @inproceedings{cohen-mooney-2023-using,
title = "Using Planning to Improve Semantic Parsing of Instructional Texts",
author = "Cohen, Vanya and
Mooney, Raymond",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.5",
doi = "10.18653/v1/2023.nlrse-1.5",
pages = "47--58",
abstract = "We develop a symbolic planning-based decoder to improve the few-shot semantic parsing of instructional texts. The system takes long-form instructional texts as input and produces sequences of actions in a formal language that enable execution of the instructions. This task poses unique challenges since input texts may contain long context dependencies and ambiguous and domain-specific language. Valid semantic parses also require sequences of steps that constitute an executable plan. We build on recent progress in semantic parsing by leveraging large language models to learn parsers from small amounts of training data. During decoding, our method employs planning methods and domain information to rank and correct candidate parses. To validate our method, we evaluate on four domains: two household instruction-following domains and two cooking recipe interpretation domains. We present results for few-shot semantic parsing using leave-one-out cross-validation. We show that utilizing planning domain information improves the quality of generated plans. Through ablations we also explore the effects of our decoder design choices.",
}
| We develop a symbolic planning-based decoder to improve the few-shot semantic parsing of instructional texts. The system takes long-form instructional texts as input and produces sequences of actions in a formal language that enable execution of the instructions. This task poses unique challenges since input texts may contain long context dependencies and ambiguous and domain-specific language. Valid semantic parses also require sequences of steps that constitute an executable plan. We build on recent progress in semantic parsing by leveraging large language models to learn parsers from small amounts of training data. During decoding, our method employs planning methods and domain information to rank and correct candidate parses. To validate our method, we evaluate on four domains: two household instruction-following domains and two cooking recipe interpretation domains. We present results for few-shot semantic parsing using leave-one-out cross-validation. We show that utilizing planning domain information improves the quality of generated plans. Through ablations we also explore the effects of our decoder design choices. | [
"Cohen, Vanya",
"Mooney, Raymond"
] | Using Planning to Improve Semantic Parsing of Instructional Texts | nlrse-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlrse-1.6.bib | https://aclanthology.org/2023.nlrse-1.6/ | @inproceedings{kulshreshtha-rumshisky-2023-reasoning,
title = "Reasoning Circuits: Few-shot Multi-hop Question Generation with Structured Rationales",
author = "Kulshreshtha, Saurabh and
Rumshisky, Anna",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.6",
doi = "10.18653/v1/2023.nlrse-1.6",
pages = "59--77",
abstract = "Multi-hop Question Generation is the task of generating questions which require the reader to reason over and combine information spread across multiple passages employing several reasoning steps. Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks and make model predictions more interpretable. However, few-shot performance gains from including rationales have been largely observed only in +100B language models, and otherwise require large-scale manual rationale annotation. In this paper, we introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime (8- to 128-shot). We propose to annotate a small number of examples following our proposed multi-step rationale schema, treating each reasoning step as a separate task to be performed by a generative language model. We show that our framework leads to improved control over the difficulty of the generated questions and better performance compared to baselines trained without rationales, both on automatic evaluation metrics and in human evaluation. Importantly, we show that this is achievable with a modest model size.",
}
| Multi-hop Question Generation is the task of generating questions which require the reader to reason over and combine information spread across multiple passages employing several reasoning steps. Chain-of-thought rationale generation has been shown to improve performance on multi-step reasoning tasks and make model predictions more interpretable. However, few-shot performance gains from including rationales have been largely observed only in +100B language models, and otherwise require large-scale manual rationale annotation. In this paper, we introduce a new framework for applying chain-of-thought inspired structured rationale generation to multi-hop question generation under a very low supervision regime (8- to 128-shot). We propose to annotate a small number of examples following our proposed multi-step rationale schema, treating each reasoning step as a separate task to be performed by a generative language model. We show that our framework leads to improved control over the difficulty of the generated questions and better performance compared to baselines trained without rationales, both on automatic evaluation metrics and in human evaluation. Importantly, we show that this is achievable with a modest model size. | [
"Kulshreshtha, Saurabh",
"Rumshisky, Anna"
] | Reasoning Circuits: Few-shot Multi-hop Question Generation with Structured Rationales | nlrse-1.6 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlrse-1.7.bib | https://aclanthology.org/2023.nlrse-1.7/ | @inproceedings{baek-etal-2023-knowledge,
title = "Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering",
author = "Baek, Jinheon and
Aji, Alham Fikri and
Saffari, Amir",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.7",
doi = "10.18653/v1/2023.nlrse-1.7",
pages = "78--106",
abstract = "Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks, based on their internal knowledge stored in parameters during pre-training. However, such internalized knowledge might be insufficient and incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the relevant facts to the input question from the knowledge graph based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of the prompt, which is then forwarded to LLMs to generate the answer. Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training, thus completely zero-shot. We validate the performance of our KAPING framework on the knowledge graph question answering task, that aims to answer the user{'}s question based on facts over a knowledge graph, on which ours outperforms relevant zero-shot baselines by up to 48{\%} in average, across multiple LLMs of various sizes.",
}
| Large Language Models (LLMs) are capable of performing zero-shot closed-book question answering tasks, based on their internal knowledge stored in parameters during pre-training. However, such internalized knowledge might be insufficient and incorrect, which could lead LLMs to generate factually wrong answers. Furthermore, fine-tuning LLMs to update their knowledge is expensive. To this end, we propose to augment the knowledge directly in the input of LLMs. Specifically, we first retrieve the relevant facts to the input question from the knowledge graph based on semantic similarities between the question and its associated facts. After that, we prepend the retrieved facts to the input question in the form of the prompt, which is then forwarded to LLMs to generate the answer. Our framework, Knowledge-Augmented language model PromptING (KAPING), requires no model training, thus completely zero-shot. We validate the performance of our KAPING framework on the knowledge graph question answering task, that aims to answer the user{'}s question based on facts over a knowledge graph, on which ours outperforms relevant zero-shot baselines by up to 48{\%} in average, across multiple LLMs of various sizes. | [
"Baek, Jinheon",
"Aji, Alham Fikri",
"Saffari, Amir"
] | Knowledge-Augmented Language Model Prompting for Zero-Shot Knowledge Graph Question Answering | nlrse-1.7 | Poster | 2306.04136 | [
""
] | https://huggingface.co/papers/2306.04136 | 1 | 0 | 0 | 3 | 1 | [] | [
"msalnikov/mintaka"
] | [] |
https://aclanthology.org/2023.nlrse-1.8.bib | https://aclanthology.org/2023.nlrse-1.8/ | @inproceedings{tefnik-kadlcik-2023-context,
title = "Can In-context Learners Learn a Reasoning Concept from Demonstrations?",
author = "{\v{S}}tef{\'a}nik, Michal and
Kadl{\v{c}}{\'\i}k, Marek",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.8",
doi = "10.18653/v1/2023.nlrse-1.8",
pages = "107--115",
abstract = "Large language models show an emergent ability to learn a new task from a small number of input-output demonstrations. However, recent work shows that in-context learners largely rely on their pre-trained knowledge, such as the sentiment of the labels, instead of finding new associations in the input. However, the commonly-used few-shot evaluation settings using a random selection of in-context demonstrations can not disentangle models{'} ability to learn a new skill from demonstrations, as most of the randomly-selected demonstrations do not present relations informative for prediction beyond exposing the new task distribution. To disentangle models{'} in-context learning ability independent of models{'} memory, we introduce a Conceptual few-shot learning method selecting the demonstrations sharing a possibly-informative concept with the predicted sample. We extract a set of such concepts from annotated explanations and measure how much can models benefit from presenting these concepts in few-shot demonstrations. We find that smaller models are more sensitive to the presented concepts. While some of the models are able to benefit from concept-presenting demonstrations for each assessed concept, we find that none of the assessed in-context learners can benefit from all presented reasoning concepts consistently, leaving the in-context concept learning an open challenge.",
}
| Large language models show an emergent ability to learn a new task from a small number of input-output demonstrations. However, recent work shows that in-context learners largely rely on their pre-trained knowledge, such as the sentiment of the labels, instead of finding new associations in the input. However, the commonly-used few-shot evaluation settings using a random selection of in-context demonstrations can not disentangle models{'} ability to learn a new skill from demonstrations, as most of the randomly-selected demonstrations do not present relations informative for prediction beyond exposing the new task distribution. To disentangle models{'} in-context learning ability independent of models{'} memory, we introduce a Conceptual few-shot learning method selecting the demonstrations sharing a possibly-informative concept with the predicted sample. We extract a set of such concepts from annotated explanations and measure how much can models benefit from presenting these concepts in few-shot demonstrations. We find that smaller models are more sensitive to the presented concepts. While some of the models are able to benefit from concept-presenting demonstrations for each assessed concept, we find that none of the assessed in-context learners can benefit from all presented reasoning concepts consistently, leaving the in-context concept learning an open challenge. | [
"{\\v{S}}tef{\\'a}nik, Michal",
"Kadl{\\v{c}}{\\'\\i}k, Marek"
] | Can In-context Learners Learn a Reasoning Concept from Demonstrations? | nlrse-1.8 | Poster | 2212.01692 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.nlrse-1.9.bib | https://aclanthology.org/2023.nlrse-1.9/ | @inproceedings{kobbe-etal-2023-effect,
title = "Effect Graph: Effect Relation Extraction for Explanation Generation",
author = "Kobbe, Jonathan and
Hulpu{\textcommabelow{s}}, Ioana and
Stuckenschmidt, Heiner",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.9",
doi = "10.18653/v1/2023.nlrse-1.9",
pages = "116--127",
abstract = "Argumentation is an important means of communication. For describing especially arguments about consequences, the notion of effect relations has been introduced recently. We propose a method to extract effect relations from large text resources and apply it on encyclopedic and argumentative texts. By connecting the extracted relations, we generate a knowledge graph which we call effect graph. For evaluating the effect graph, we perform crowd and expert annotations and create a novel dataset. We demonstrate a possible use case of the effect graph by proposing a method for explaining arguments from consequences.",
}
| Argumentation is an important means of communication. For describing especially arguments about consequences, the notion of effect relations has been introduced recently. We propose a method to extract effect relations from large text resources and apply it on encyclopedic and argumentative texts. By connecting the extracted relations, we generate a knowledge graph which we call effect graph. For evaluating the effect graph, we perform crowd and expert annotations and create a novel dataset. We demonstrate a possible use case of the effect graph by proposing a method for explaining arguments from consequences. | [
"Kobbe, Jonathan",
"Hulpu{\\textcommabelow{s}}, Ioana",
"Stuckenschmidt, Heiner"
] | Effect Graph: Effect Relation Extraction for Explanation Generation | nlrse-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.nlrse-1.10.bib | https://aclanthology.org/2023.nlrse-1.10/ | @inproceedings{alkhamissi-etal-2023-opt,
title = "{OPT}-{R}: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models",
author = "Alkhamissi, Badr and
Verma, Siddharth and
Yu, Ping and
Jin, Zhijing and
Celikyilmaz, Asli and
Diab, Mona",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.10",
doi = "10.18653/v1/2023.nlrse-1.10",
pages = "128--138",
abstract = "We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model{'}s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4{\%}) and Analogical (+13.9{\%}) reasoning, as well as skills that exhibit negligible or negative effects.",
}
| We conduct a thorough investigation into the reasoning capabilities of Large Language Models (LLMs), focusing specifically on the Open Pretrained Transformers (OPT) models as a representative of such models. Our study entails finetuning three different sizes of OPT on a carefully curated reasoning corpus, resulting in two sets of finetuned models: OPT-R, finetuned without explanations, and OPT-RE, finetuned with explanations. We then evaluate all models on 57 out-of-domain tasks drawn from the Super-NaturalInstructions benchmark, covering 26 distinct reasoning skills, utilizing three prompting techniques. Through a comprehensive grid of 27 configurations and 6,156 test evaluations, we investigate the dimensions of finetuning, prompting, and scale to understand the role of explanations on different reasoning skills. Our findings reveal that having explanations in the fewshot exemplar has no significant impact on the model{'}s performance when the model is finetuned, while positively affecting the non-finetuned counterpart. Moreover, we observe a slight yet consistent increase in classification accuracy as we incorporate explanations during prompting and finetuning, respectively. Finally, we offer insights on which reasoning skills benefit the most from incorporating explanations during finetuning and prompting, such as Numerical (+20.4{\%}) and Analogical (+13.9{\%}) reasoning, as well as skills that exhibit negligible or negative effects. | [
"Alkhamissi, Badr",
"Verma, Siddharth",
"Yu, Ping",
"Jin, Zhijing",
"Celikyilmaz, Asli",
"Diab, Mona"
] | OPT-R: Exploring the Role of Explanations in Finetuning and Prompting for Reasoning Skills of Large Language Models | nlrse-1.10 | Poster | 2305.12001 | [
""
] | https://huggingface.co/papers/2305.12001 | 3 | 1 | 0 | 6 | 1 | [] | [] | [] |
https://aclanthology.org/2023.nlrse-1.11.bib | https://aclanthology.org/2023.nlrse-1.11/ | @inproceedings{sprague-etal-2023-deductive,
title = "Deductive Additivity for Planning of Natural Language Proofs",
author = "Sprague, Zayne and
Bostrom, Kaj and
Chaudhuri, Swarat and
Durrett, Greg",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.11",
doi = "10.18653/v1/2023.nlrse-1.11",
pages = "139--156",
abstract = "Current natural language systems designed for multi-step claim validation typically operate in two phases: retrieve a set of relevant premise statements using heuristics (planning), then generate novel conclusions from those statements using a large language model (deduction). The planning step often requires expensive Transformer operations and does not scale to arbitrary numbers of premise statements. In this paper, we investigate whether efficient planning heuristic is possible via embedding spaces compatible with deductive reasoning. Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises. We explore multiple sources of off-the-shelf dense embeddings in addition to fine-tuned embeddings from GPT3 and sparse embeddings from BM25. We study embedding models both intrinsically, evaluating whether the property of deductive additivity holds, and extrinsically, using them to assist planning in natural language proof generation. Lastly, we create a dataset, Single-Step Reasoning Contrast (SSRC), to further probe performance on various reasoning types. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective heuristics and lack the ability to model certain categories of reasoning.",
}
| Current natural language systems designed for multi-step claim validation typically operate in two phases: retrieve a set of relevant premise statements using heuristics (planning), then generate novel conclusions from those statements using a large language model (deduction). The planning step often requires expensive Transformer operations and does not scale to arbitrary numbers of premise statements. In this paper, we investigate whether efficient planning heuristic is possible via embedding spaces compatible with deductive reasoning. Specifically, we evaluate whether embedding spaces exhibit a property we call deductive additivity: the sum of premise statement embeddings should be close to embeddings of conclusions based on those premises. We explore multiple sources of off-the-shelf dense embeddings in addition to fine-tuned embeddings from GPT3 and sparse embeddings from BM25. We study embedding models both intrinsically, evaluating whether the property of deductive additivity holds, and extrinsically, using them to assist planning in natural language proof generation. Lastly, we create a dataset, Single-Step Reasoning Contrast (SSRC), to further probe performance on various reasoning types. Our findings suggest that while standard embedding methods frequently embed conclusions near the sums of their premises, they fall short of being effective heuristics and lack the ability to model certain categories of reasoning. | [
"Sprague, Zayne",
"Bostrom, Kaj",
"Chaudhuri, Swarat",
"Durrett, Greg"
] | Deductive Additivity for Planning of Natural Language Proofs | nlrse-1.11 | Poster | 2307.02472 | [
"https://github.com/zayne-sprague/deductive_additivity_for_planning_of_natural_language_proofs"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.nlrse-1.12.bib | https://aclanthology.org/2023.nlrse-1.12/ | @inproceedings{akoju-etal-2023-synthetic,
title = "Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference",
author = "Akoju, Sushma Anand and
Vacareanu, Robert and
Blanco, Eduardo and
Riaz, Haris and
Surdeanu, Mihai",
editor = "Dalvi Mishra, Bhavana and
Durrett, Greg and
Jansen, Peter and
Neves Ribeiro, Danilo and
Wei, Jason",
booktitle = "Proceedings of the 1st Workshop on Natural Language Reasoning and Structured Explanations (NLRSE)",
month = jun,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.nlrse-1.12",
doi = "10.18653/v1/2023.nlrse-1.12",
pages = "157--168",
abstract = "We introduce a synthetic dataset called Sentences Involving Complex Compositional Knowledge (SICCK) and a novel analysis that investigates the performance of Natural Language Inference (NLI) models to understand compositionality in logic. We produce 1,304 sentence pairs by modifying 15 examples from the SICK dataset (Marelli et al., 2014). To this end, we modify the original texts using a set of phrases modifiers that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) (MacCartney, 2009). We use these phrases to modify the subject, verb, and object parts of the premise and hypothesis. Lastly, we annotate these modified texts with the corresponding entailment labels following NL rules. We conduct a preliminary verification of how well the change in the structural and semantic composition is captured by neural NLI models, in both zero-shot and fine-tuned scenarios. We found that the performance of NLI models under the zero-shot setting is poor, especially for modified sentences with negation and existential quantifiers. After fine-tuning this dataset, we observe that models continue to perform poorly over negation, existential and universal modifiers.",
}
| We introduce a synthetic dataset called Sentences Involving Complex Compositional Knowledge (SICCK) and a novel analysis that investigates the performance of Natural Language Inference (NLI) models to understand compositionality in logic. We produce 1,304 sentence pairs by modifying 15 examples from the SICK dataset (Marelli et al., 2014). To this end, we modify the original texts using a set of phrases modifiers that correspond to universal quantifiers, existential quantifiers, negation, and other concept modifiers in Natural Logic (NL) (MacCartney, 2009). We use these phrases to modify the subject, verb, and object parts of the premise and hypothesis. Lastly, we annotate these modified texts with the corresponding entailment labels following NL rules. We conduct a preliminary verification of how well the change in the structural and semantic composition is captured by neural NLI models, in both zero-shot and fine-tuned scenarios. We found that the performance of NLI models under the zero-shot setting is poor, especially for modified sentences with negation and existential quantifiers. After fine-tuning this dataset, we observe that models continue to perform poorly over negation, existential and universal modifiers. | [
    "Akoju, Sushma Anand",
"Vacareanu, Robert",
"Blanco, Eduardo",
"Riaz, Haris",
"Surdeanu, Mihai"
] | Synthetic Dataset for Evaluating Complex Compositional Knowledge for Natural Language Inference | nlrse-1.12 | Poster | 2307.05034 | [
"https://github.com/clulab/releases"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.1.bib | https://aclanthology.org/2023.repl4nlp-1.1/ | @inproceedings{gupta-krishna-2023-adversarial,
title = "Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems",
author = "Gupta, Ashim and
Krishna, Amrith",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.1",
doi = "10.18653/v1/2023.repl4nlp-1.1",
pages = "1--12",
abstract = "Clean-label (CL) attack is a form of data poisoning attack where an adversary modifies only the textual input of the training data, without requiring access to the labeling function. CL attacks are relatively unexplored in NLP, as compared to label flipping (LF) attacks, where the latter additionally requires access to the labeling function as well. While CL attacks are more resilient to data sanitization and manual relabeling methods than LF attacks, they often demand as high as ten times the poisoning budget than LF attacks. In this work, we first introduce an Adversarial Clean Label attack which can adversarially perturb in-class training examples for poisoning the training set. We then show that an adversary can significantly bring down the data requirements for a CL attack, using the aforementioned approach, to as low as 20 {\%} of the data otherwise required. We then systematically benchmark and analyze a number of defense methods, for both LF and CL attacks, some previously employed solely for LF attacks in the textual domain and others adapted from computer vision. We find that text-specific defenses greatly vary in their effectiveness depending on their properties.",
}
| Clean-label (CL) attack is a form of data poisoning attack where an adversary modifies only the textual input of the training data, without requiring access to the labeling function. CL attacks are relatively unexplored in NLP, as compared to label flipping (LF) attacks, where the latter additionally requires access to the labeling function as well. While CL attacks are more resilient to data sanitization and manual relabeling methods than LF attacks, they often demand as high as ten times the poisoning budget than LF attacks. In this work, we first introduce an Adversarial Clean Label attack which can adversarially perturb in-class training examples for poisoning the training set. We then show that an adversary can significantly bring down the data requirements for a CL attack, using the aforementioned approach, to as low as 20 {\%} of the data otherwise required. We then systematically benchmark and analyze a number of defense methods, for both LF and CL attacks, some previously employed solely for LF attacks in the textual domain and others adapted from computer vision. We find that text-specific defenses greatly vary in their effectiveness depending on their properties. | [
"Gupta, Ashim",
"Krishna, Amrith"
] | Adversarial Clean Label Backdoor Attacks and Defenses on Text Classification Systems | repl4nlp-1.1 | Poster | 2305.19607 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.2.bib | https://aclanthology.org/2023.repl4nlp-1.2/ | @inproceedings{golchin-etal-2023-mask,
title = "Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords",
author = "Golchin, Shahriar and
Surdeanu, Mihai and
Tavabi, Nazgol and
Kiapour, Ata",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.2",
doi = "10.18653/v1/2023.repl4nlp-1.2",
pages = "13--21",
abstract = "We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks in-domain keywords, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that the fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that used in-domain pre-training with random masking as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15{\%} of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019).",
}
| We propose a novel task-agnostic in-domain pre-training method that sits between generic pre-training and fine-tuning. Our approach selectively masks in-domain keywords, i.e., words that provide a compact representation of the target domain. We identify such keywords using KeyBERT (Grootendorst, 2020). We evaluate our approach using six different settings: three datasets combined with two distinct pre-trained language models (PLMs). Our results reveal that the fine-tuned PLMs adapted using our in-domain pre-training strategy outperform PLMs that used in-domain pre-training with random masking as well as those that followed the common pre-train-then-fine-tune paradigm. Further, the overhead of identifying in-domain keywords is reasonable, e.g., 7-15{\%} of the pre-training time (for two epochs) for BERT Large (Devlin et al., 2019). | [
"Golchin, Shahriar",
"Surdeanu, Mihai",
"Tavabi, Nazgol",
"Kiapour, Ata"
] | Do not Mask Randomly: Effective Domain-adaptive Pre-training by Masking In-domain Keywords | repl4nlp-1.2 | Poster | 2307.07160 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
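The keyword-masking paper above masks in-domain keywords (found with KeyBERT) instead of masking tokens at random. A minimal sketch of the masking step, assuming the keyword list has already been extracted — the hard-coded keywords and the `mask_keywords` helper are illustrative, not the paper's implementation:

```python
def mask_keywords(text, keywords, mask_token="[MASK]"):
    """Replace in-domain keyword tokens with the mask token; the
    remaining tokens are left visible for the model to condition on."""
    kw = {k.lower() for k in keywords}
    return " ".join(mask_token if t.lower() in kw else t
                    for t in text.split())

# In the paper the keywords come from KeyBERT; hard-coded here.
masked = mask_keywords("the patient showed acute myocardial infarction",
                       keywords=["myocardial", "infarction"])
# masked == "the patient showed acute [MASK] [MASK]"
```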
https://aclanthology.org/2023.repl4nlp-1.3.bib | https://aclanthology.org/2023.repl4nlp-1.3/ | @inproceedings{nastase-merlo-2023-grammatical,
title = "Grammatical information in {BERT} sentence embeddings as two-dimensional arrays",
author = "Nastase, Vivi and
Merlo, Paola",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.3",
doi = "10.18653/v1/2023.repl4nlp-1.3",
pages = "22--39",
abstract = "Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information. Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model based on smaller amounts of simpler training data, which performs well on more complex test data. This indicates that current sentence embeddings contain information that is regularly distributed, and which can be captured when the embeddings are reshaped into higher dimensional arrays. Our results cast light on representations produced by language models and help move towards developing few-shot learning approaches.",
}
| Sentence embeddings induced with various transformer architectures encode much semantic and syntactic information in a distributed manner in a one-dimensional array. We investigate whether specific grammatical information can be accessed in these distributed representations. Using data from a task developed to test rule-like generalizations, our experiments on detecting subject-verb agreement yield several promising results. First, we show that while the usual sentence representations encoded as one-dimensional arrays do not easily support extraction of rule-like regularities, a two-dimensional reshaping of these vectors allows various learning architectures to access such information. Next, we show that various architectures can detect patterns in these two-dimensional reshaped sentence embeddings and successfully learn a model based on smaller amounts of simpler training data, which performs well on more complex test data. This indicates that current sentence embeddings contain information that is regularly distributed, and which can be captured when the embeddings are reshaped into higher dimensional arrays. Our results cast light on representations produced by language models and help move towards developing few-shot learning approaches. | [
"Nastase, Vivi",
"Merlo, Paola"
] | Grammatical information in BERT sentence embeddings as two-dimensional arrays | repl4nlp-1.3 | Poster | 2312.09890 | [
"https://github.com/clcl-geneva/blm-snfdisentangling"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
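The core move in the Nastase and Merlo paper above is viewing a flat sentence embedding as a two-dimensional array. A dependency-free sketch of that reshape, assuming a 768-dim embedding and an illustrative 24 x 32 target shape (the paper's exact dimensions may differ):

```python
def reshape_2d(embedding, rows):
    """View a flat sentence embedding as a rows x cols array."""
    n = len(embedding)
    if n % rows:
        raise ValueError("embedding length must be divisible by rows")
    cols = n // rows
    return [embedding[i * cols:(i + 1) * cols] for i in range(rows)]

# A 768-dim BERT-style sentence embedding viewed as 24 rows of 32.
emb = [float(i) for i in range(768)]
grid = reshape_2d(emb, rows=24)
```

In practice this is a single `tensor.view(rows, cols)` call; the point is only that downstream architectures then see a 2D input.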
https://aclanthology.org/2023.repl4nlp-1.4.bib | https://aclanthology.org/2023.repl4nlp-1.4/ | @inproceedings{srinivasan-vajjala-2023-multilingual,
title = "A Multilingual Evaluation of {NER} Robustness to Adversarial Inputs",
author = "Srinivasan, Akshay and
Vajjala, Sowmya",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.4",
doi = "10.18653/v1/2023.repl4nlp-1.4",
pages = "40--53",
abstract = "Adversarial evaluations of language models typically focus on English alone. In this paper, we performed a multilingual evaluation of Named Entity Recognition (NER) in terms of its robustness to small perturbations in the input. Our results showed the NER models we explored across three languages (English, German and Hindi) are not very robust to such changes, as indicated by the fluctuations in the overall F1 score as well as in a more fine-grained evaluation. With that knowledge, we further explored whether it is possible to improve the existing NER models using a part of the generated adversarial data sets as augmented training data to train a new NER model or as fine-tuning data to adapt an existing NER model. Our results showed that both these approaches improve performance on the original as well as adversarial test sets. While there is no significant difference between the two approaches for English, re-training is significantly better than fine-tuning for German and Hindi.",
}
| Adversarial evaluations of language models typically focus on English alone. In this paper, we performed a multilingual evaluation of Named Entity Recognition (NER) in terms of its robustness to small perturbations in the input. Our results showed the NER models we explored across three languages (English, German and Hindi) are not very robust to such changes, as indicated by the fluctuations in the overall F1 score as well as in a more fine-grained evaluation. With that knowledge, we further explored whether it is possible to improve the existing NER models using a part of the generated adversarial data sets as augmented training data to train a new NER model or as fine-tuning data to adapt an existing NER model. Our results showed that both these approaches improve performance on the original as well as adversarial test sets. While there is no significant difference between the two approaches for English, re-training is significantly better than fine-tuning for German and Hindi. | [
"Srinivasan, Akshay",
"Vajjala, Sowmya"
] | A Multilingual Evaluation of NER Robustness to Adversarial Inputs | repl4nlp-1.4 | Poster | 2305.18933 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
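The NER robustness paper above evaluates models under small input perturbations. One common perturbation of that kind is a typo-style adjacent-character swap; the sketch below is a generic illustration (the `perturb_sentence` helper and the choice to perturb a context token are assumptions, not the paper's exact procedure):

```python
def swap_adjacent(word, i):
    """Swap characters i and i+1 — a typo-style perturbation."""
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb_sentence(tokens, target_index):
    """Perturb one token; perturbing context tokens (not the entity
    spans) keeps the gold NER labels valid for evaluation."""
    out = list(tokens)
    w = out[target_index]
    if len(w) > 3:  # leave very short tokens alone
        out[target_index] = swap_adjacent(w, 1)
    return out

perturbed = perturb_sentence(["Angela", "Merkel", "visited", "Paris"], 2)
# perturbed == ["Angela", "Merkel", "vsiited", "Paris"]
```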
https://aclanthology.org/2023.repl4nlp-1.5.bib | https://aclanthology.org/2023.repl4nlp-1.5/ | @inproceedings{xu-etal-2023-retrieval,
title = "Retrieval-Augmented Domain Adaptation of Language Models",
author = "Xu, Benfeng and
Zhao, Chunxu and
Jiang, Wenbin and
Zhu, PengFei and
Dai, Songtai and
Pang, Chao and
Sun, Zhuo and
Wang, Shuohuan and
Sun, Yu",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.5",
doi = "10.18653/v1/2023.repl4nlp-1.5",
pages = "54--64",
abstract = "Language models pretrained on general domain corpora usually exhibit considerable degradation when generalizing to downstream tasks of specialized domains. Existing approaches try to construct PLMs for each specific domain either from scratch or through further pretraining, which not only costs substantial resources but also fails to cover all target domains at various granularities. In this work, we propose RADA, a novel Retrieval-Augmented framework for Domain Adaptation. We first construct a textual corpus that covers the downstream task at flexible domain granularity and resource availability. We employ it as a pluggable datastore to retrieve informative background knowledge, and integrate it into the standard language model framework to augment representations. We then propose a two-level selection scheme to integrate the most relevant information while filtering out irrelevant noise. Specifically, we introduce a differentiable sampling module as well as an attention mechanism to achieve both passage-level and word-level selection. Such a retrieval-augmented framework enables domain adaptation of language models with flexible domain coverage and fine-grained domain knowledge integration. We conduct comprehensive experiments across the biomedical, science, and legal domains to demonstrate the effectiveness of the overall framework, and its advantage over existing solutions.",
}
| Language models pretrained on general domain corpora usually exhibit considerable degradation when generalizing to downstream tasks of specialized domains. Existing approaches try to construct PLMs for each specific domain either from scratch or through further pretraining, which not only costs substantial resources but also fails to cover all target domains at various granularities. In this work, we propose RADA, a novel Retrieval-Augmented framework for Domain Adaptation. We first construct a textual corpus that covers the downstream task at flexible domain granularity and resource availability. We employ it as a pluggable datastore to retrieve informative background knowledge, and integrate it into the standard language model framework to augment representations. We then propose a two-level selection scheme to integrate the most relevant information while filtering out irrelevant noise. Specifically, we introduce a differentiable sampling module as well as an attention mechanism to achieve both passage-level and word-level selection. Such a retrieval-augmented framework enables domain adaptation of language models with flexible domain coverage and fine-grained domain knowledge integration. We conduct comprehensive experiments across the biomedical, science, and legal domains to demonstrate the effectiveness of the overall framework, and its advantage over existing solutions. | [
"Xu, Benfeng",
"Zhao, Chunxu",
"Jiang, Wenbin",
"Zhu, PengFei",
"Dai, Songtai",
"Pang, Chao",
"Sun, Zhuo",
"Wang, Shuohuan",
"Sun, Yu"
] | Retrieval-Augmented Domain Adaptation of Language Models | repl4nlp-1.5 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
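RADA's first selection level picks the passages from the datastore most relevant to the input. The paper uses a learned, differentiable sampler for this; the sketch below stands in plain lexical-overlap scoring for it, purely to show the passage-level top-k shape (`overlap_score`, `retrieve_topk`, and the toy datastore are all illustrative assumptions):

```python
def overlap_score(query, passage):
    """Fraction of query tokens that appear in the passage — a crude
    stand-in for the paper's learned relevance model."""
    q = set(query.lower().split())
    p = set(passage.lower().split())
    return len(q & p) / (len(q) or 1)

def retrieve_topk(query, datastore, k=2):
    """Passage-level selection: keep the k passages most similar to
    the query before any word-level filtering is applied."""
    ranked = sorted(datastore, key=lambda p: overlap_score(query, p),
                    reverse=True)
    return ranked[:k]

docs = ["aspirin treats headache", "courts interpret statutes",
        "headache symptoms and treatment"]
top = retrieve_topk("treatment for headache", docs, k=2)
```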
https://aclanthology.org/2023.repl4nlp-1.6.bib | https://aclanthology.org/2023.repl4nlp-1.6/ | @inproceedings{lyu-etal-2023-fine,
title = "Fine-grained Text Style Transfer with Diffusion-Based Language Models",
author = "Lyu, Yiwei and
Luo, Tiange and
Shi, Jiacheng and
Hollon, Todd and
Lee, Honglak",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.6",
doi = "10.18653/v1/2023.repl4nlp-1.6",
pages = "65--74",
abstract = "Diffusion probabilistic models have shown great success in generating high-quality images controllably, and researchers have tried to bring this controllability into the text generation domain. Previous works on diffusion-based language models have shown that they can be trained without external knowledge (such as pre-trained weights) and still achieve stable performance and controllability. In this paper, we trained a diffusion-based model on the StylePTB dataset, the standard benchmark for fine-grained text style transfers. The tasks in StylePTB require much more refined control over the output text compared to tasks evaluated in previous works, and our model was able to achieve state-of-the-art performance on StylePTB on both individual and compositional transfers. Moreover, our model, trained on limited data from StylePTB without external knowledge, outperforms previous works that utilized pretrained weights, embeddings, and external grammar parsers, and this may indicate that diffusion-based language models have great potential under low-resource settings.",
}
| Diffusion probabilistic models have shown great success in generating high-quality images controllably, and researchers have tried to bring this controllability into the text generation domain. Previous works on diffusion-based language models have shown that they can be trained without external knowledge (such as pre-trained weights) and still achieve stable performance and controllability. In this paper, we trained a diffusion-based model on the StylePTB dataset, the standard benchmark for fine-grained text style transfers. The tasks in StylePTB require much more refined control over the output text compared to tasks evaluated in previous works, and our model was able to achieve state-of-the-art performance on StylePTB on both individual and compositional transfers. Moreover, our model, trained on limited data from StylePTB without external knowledge, outperforms previous works that utilized pretrained weights, embeddings, and external grammar parsers, and this may indicate that diffusion-based language models have great potential under low-resource settings. | [
"Lyu, Yiwei",
"Luo, Tiange",
"Shi, Jiacheng",
"Hollon, Todd",
"Lee, Honglak"
] | Fine-grained Text Style Transfer with Diffusion-Based Language Models | repl4nlp-1.6 | Poster | 2305.19512 | [
"https://github.com/lvyiwei1/diffuseq_styleptb"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.7.bib | https://aclanthology.org/2023.repl4nlp-1.7/ | @inproceedings{lee-lee-2023-enhancing,
title = "Enhancing text comprehension for Question Answering with Contrastive Learning",
author = "Lee, Seungyeon and
Lee, Minho",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.7",
doi = "10.18653/v1/2023.repl4nlp-1.7",
pages = "75--86",
abstract = "Although Question Answering (QA) models have advanced to human-level language skills in NLP tasks, a problem remains: QA models get confused when there are similar sentences or paragraphs. Existing studies focus on enhancing the text understanding of the candidate answers to improve the overall performance of the QA models. However, since these methods focus on re-ranking queries or candidate answers, they fail to resolve the confusion when many generated answers are similar to the expected answer. To address these issues, we propose a novel contrastive learning framework called ContrastiveQA that alleviates the confusion problem in answer extraction. We propose a supervised method where we generate positive and negative samples from the candidate answers and the given answer, respectively. We thus introduce ContrastiveQA, which uses contrastive learning with sampling data to reduce incorrect answers. Experimental results on four QA benchmarks show the effectiveness of the proposed method.",
}
| Although Question Answering (QA) models have advanced to human-level language skills in NLP tasks, a problem remains: QA models get confused when there are similar sentences or paragraphs. Existing studies focus on enhancing the text understanding of the candidate answers to improve the overall performance of the QA models. However, since these methods focus on re-ranking queries or candidate answers, they fail to resolve the confusion when many generated answers are similar to the expected answer. To address these issues, we propose a novel contrastive learning framework called ContrastiveQA that alleviates the confusion problem in answer extraction. We propose a supervised method where we generate positive and negative samples from the candidate answers and the given answer, respectively. We thus introduce ContrastiveQA, which uses contrastive learning with sampling data to reduce incorrect answers. Experimental results on four QA benchmarks show the effectiveness of the proposed method. | [
"Lee, Seungyeon",
"Lee, Minho"
] | Enhancing text comprehension for Question Answering with Contrastive Learning | repl4nlp-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
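The ContrastiveQA abstract above builds supervised positive and negative samples for a contrastive loss. One generic way to form such pairs — the gold answer as the positive, similar-but-wrong candidates as negatives — is sketched below; `make_contrastive_pairs` is an illustrative helper, not the paper's sampling procedure:

```python
def make_contrastive_pairs(question, gold_answer, candidates):
    """Pair the question with the gold answer (positive) and with each
    wrong candidate (negative) for contrastive training."""
    positives = [(question, gold_answer)]
    negatives = [(question, c) for c in candidates if c != gold_answer]
    return positives, negatives

pos, neg = make_contrastive_pairs(
    "Who wrote Hamlet?", "Shakespeare",
    ["Shakespeare", "Marlowe", "Jonson"])
```

A contrastive loss then pulls the question embedding toward the positive and pushes it away from the confusable negatives.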
https://aclanthology.org/2023.repl4nlp-1.8.bib | https://aclanthology.org/2023.repl4nlp-1.8/ | @inproceedings{shirai-etal-2023-towards,
title = "Towards Flow Graph Prediction of Open-Domain Procedural Texts",
author = "Shirai, Keisuke and
Kameko, Hirotaka and
Mori, Shinsuke",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.8",
doi = "10.18653/v1/2023.repl4nlp-1.8",
pages = "87--96",
abstract = "Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data.",
}
| Machine comprehension of procedural texts is essential for reasoning about the steps and automating the procedures. However, this requires identifying entities within a text and resolving the relationships between the entities. Previous work focused on the cooking domain and proposed a framework to convert a recipe text into a flow graph (FG) representation. In this work, we propose a framework based on the recipe FG for flow graph prediction of open-domain procedural texts. To investigate flow graph prediction performance in non-cooking domains, we introduce the wikiHow-FG corpus from articles on wikiHow, a website of how-to instruction articles. In experiments, we consider using the existing recipe corpus and performing domain adaptation from the cooking to the target domain. Experimental results show that the domain adaptation models achieve higher performance than those trained only on the cooking or target domain data. | [
"Shirai, Keisuke",
"Kameko, Hirotaka",
"Mori, Shinsuke"
] | Towards Flow Graph Prediction of Open-Domain Procedural Texts | repl4nlp-1.8 | Poster | 2305.19497 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.9.bib | https://aclanthology.org/2023.repl4nlp-1.9/ | @inproceedings{geigle-etal-2023-one,
title = "One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks",
author = "Geigle, Gregor and
Liu, Chen and
Pfeiffer, Jonas and
Gurevych, Iryna",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.9",
doi = "10.18653/v1/2023.repl4nlp-1.9",
pages = "97--117",
abstract = "Current multimodal models, aimed at solving Vision and Language (V+L) tasks, predominantly repurpose Vision Encoders (VE) as feature extractors. While many VEs{---}of different architectures, trained on different data and objectives{---}are publicly available, they are not designed for the downstream V+L tasks. Nonetheless, most current work assumes that a \textit{single} pre-trained VE can serve as a general-purpose encoder. In this work, we focus on analysis and aim to understand whether the information stored within different VEs is complementary, i.e. if providing the model with features from multiple VEs can improve the performance on a target task, and how they are combined. We exhaustively experiment with three popular VEs on six downstream V+L tasks and analyze the attention and VE-dropout patterns. Our analyses suggest that diverse VEs complement each other, resulting in improved downstream V+L task performance, where the improvements are not due to simple ensemble effects (i.e. the performance does not always improve when increasing the number of encoders). We demonstrate that future VEs, which are not \textit{repurposed}, but explicitly \textit{designed} for V+L tasks, have the potential of improving performance on the target V+L tasks.",
}
| Current multimodal models, aimed at solving Vision and Language (V+L) tasks, predominantly repurpose Vision Encoders (VE) as feature extractors. While many VEs{---}of different architectures, trained on different data and objectives{---}are publicly available, they are not designed for the downstream V+L tasks. Nonetheless, most current work assumes that a \textit{single} pre-trained VE can serve as a general-purpose encoder. In this work, we focus on analysis and aim to understand whether the information stored within different VEs is complementary, i.e. if providing the model with features from multiple VEs can improve the performance on a target task, and how they are combined. We exhaustively experiment with three popular VEs on six downstream V+L tasks and analyze the attention and VE-dropout patterns. Our analyses suggest that diverse VEs complement each other, resulting in improved downstream V+L task performance, where the improvements are not due to simple ensemble effects (i.e. the performance does not always improve when increasing the number of encoders). We demonstrate that future VEs, which are not \textit{repurposed}, but explicitly \textit{designed} for V+L tasks, have the potential of improving performance on the target V+L tasks. | [
"Geigle, Gregor",
"Liu, Chen",
"Pfeiffer, Jonas",
"Gurevych, Iryna"
] | One does not fit all! On the Complementarity of Vision Encoders for Vision and Language Tasks | repl4nlp-1.9 | Poster | 2210.06379 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
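The vision-encoder paper above feeds a model features from multiple VEs at once. The simplest fusion is concatenation along the feature axis; the sketch below shows only that shape (the paper also studies attention-based combination, and the toy features and `concat_ve_features` name are assumptions):

```python
def concat_ve_features(*encoder_outputs):
    """Combine per-image features from several vision encoders by
    concatenating along the feature dimension."""
    fused = []
    for per_image in zip(*encoder_outputs):
        vec = []
        for feats in per_image:
            vec.extend(feats)
        fused.append(vec)
    return fused

clip_feats = [[0.1, 0.2]]       # toy 2-dim features for one image
vit_feats = [[0.3, 0.4, 0.5]]   # toy 3-dim features for the same image
fused = concat_ve_features(clip_feats, vit_feats)
# fused == [[0.1, 0.2, 0.3, 0.4, 0.5]]
```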
https://aclanthology.org/2023.repl4nlp-1.10.bib | https://aclanthology.org/2023.repl4nlp-1.10/ | @inproceedings{zhao-etal-2023-spc,
title = "{SPC}: Soft Prompt Construction for Cross Domain Generalization",
author = "Zhao, Wenbo and
Gupta, Arpit and
Chung, Tagyoung and
Huang, Jing",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.10",
doi = "10.18653/v1/2023.repl4nlp-1.10",
pages = "118--130",
abstract = "Recent advances in prompt tuning have proven effective as a new language modeling paradigm for various natural language understanding tasks. However, it is challenging to adapt the soft prompt embeddings to different domains or generalize to low-data settings when learning soft prompts itself is unstable, task-specific, and bias-prone. This paper proposes a principled learning framework{---}soft prompt construction (SPC){---}to facilitate learning domain-adaptable soft prompts. Derived from the SPC framework is a simple loss that can plug into various models and tuning approaches to improve their cross-domain performance. We show SPC can improve upon SOTA for contextual query rewriting, summarization, and paraphrase detection by up to 5{\%}, 19{\%}, and 16{\%}, respectively.",
}
| Recent advances in prompt tuning have proven effective as a new language modeling paradigm for various natural language understanding tasks. However, it is challenging to adapt the soft prompt embeddings to different domains or generalize to low-data settings when learning soft prompts itself is unstable, task-specific, and bias-prone. This paper proposes a principled learning framework{---}soft prompt construction (SPC){---}to facilitate learning domain-adaptable soft prompts. Derived from the SPC framework is a simple loss that can plug into various models and tuning approaches to improve their cross-domain performance. We show SPC can improve upon SOTA for contextual query rewriting, summarization, and paraphrase detection by up to 5{\%}, 19{\%}, and 16{\%}, respectively. | [
"Zhao, Wenbo",
"Gupta, Arpit",
"Chung, Tagyoung",
"Huang, Jing"
] | SPC: Soft Prompt Construction for Cross Domain Generalization | repl4nlp-1.10 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.11.bib | https://aclanthology.org/2023.repl4nlp-1.11/ | @inproceedings{kochsiek-etal-2023-friendly,
title = "Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction",
author = "Kochsiek, Adrian and
Saxena, Apoorv and
Nair, Inderjeet and
Gemulla, Rainer",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.11",
doi = "10.18653/v1/2023.repl4nlp-1.11",
pages = "131--138",
abstract = "We propose KGT5-context, a simple sequence-to-sequence model for link prediction (LP) in knowledge graphs (KG). Our work expands on KGT5, a recent LP model that exploits textual features of the KG, has small model size, and is scalable. To reach good predictive performance, however, KGT5 relies on an ensemble with a knowledge graph embedding model, which itself is excessively large and costly to use. In this short paper, we show empirically that adding contextual information {---} i.e., information about the direct neighborhood of the query entity {---} alleviates the need for a separate KGE model to obtain good performance. The resulting KGT5-context model is simple, reduces model size significantly, and obtains state-of-the-art performance in our experimental study.",
}
| We propose KGT5-context, a simple sequence-to-sequence model for link prediction (LP) in knowledge graphs (KG). Our work expands on KGT5, a recent LP model that exploits textual features of the KG, has small model size, and is scalable. To reach good predictive performance, however, KGT5 relies on an ensemble with a knowledge graph embedding model, which itself is excessively large and costly to use. In this short paper, we show empirically that adding contextual information {---} i.e., information about the direct neighborhood of the query entity {---} alleviates the need for a separate KGE model to obtain good performance. The resulting KGT5-context model is simple, reduces model size significantly, and obtains state-of-the-art performance in our experimental study. | [
"Kochsiek, Adrian",
"Saxena, Apoorv",
"Nair, Inderjeet",
"Gemulla, Rainer"
] | Friendly Neighbors: Contextualized Sequence-to-Sequence Link Prediction | repl4nlp-1.11 | Poster | 2305.13059 | [
"https://github.com/uma-pi1/kgt5-context"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
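KGT5-context, per the abstract above, serializes a link-prediction query plus the query entity's direct neighborhood into a single seq2seq input. A sketch of that verbalization step — the template string is illustrative, not KGT5-context's exact format:

```python
def verbalize_query(subject, relation, neighbors):
    """Serialize a (subject, relation, ?) query plus the subject's
    direct neighborhood into one seq2seq input string."""
    ctx = " | ".join(f"{r}: {o}" for r, o in neighbors)
    return f"predict: {subject} {relation} | context: {ctx}"

query = verbalize_query(
    "Marie Curie", "award received",
    [("field of work", "physics"), ("spouse", "Pierre Curie")])
```

The seq2seq model then decodes the missing object directly as text, which is what lets the contextual neighborhood replace a separate KGE model.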
https://aclanthology.org/2023.repl4nlp-1.12.bib | https://aclanthology.org/2023.repl4nlp-1.12/ | @inproceedings{singhania-etal-2023-extracting,
title = "Extracting Multi-valued Relations from Language Models",
author = "Singhania, Sneha and
Razniewski, Simon and
Weikum, Gerhard",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.12",
doi = "10.18653/v1/2023.repl4nlp-1.12",
pages = "139--154",
abstract = "The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5{\%} F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations.",
}
| The widespread usage of latent language representations via pre-trained language models (LMs) suggests that they are a promising source of structured knowledge. However, existing methods focus only on a single object per subject-relation pair, even though often multiple objects are correct. To overcome this limitation, we analyze these representations for their potential to yield materialized multi-object relational knowledge. We formulate the problem as a rank-then-select task. For ranking candidate objects, we evaluate existing prompting techniques and propose new ones incorporating domain knowledge. Among the selection methods, we find that choosing objects with a likelihood above a learned relation-specific threshold gives a 49.5{\%} F1 score. Our results highlight the difficulty of employing LMs for the multi-valued slot-filling task, and pave the way for further research on extracting relational knowledge from latent language representations. | [
"Singhania, Sneha",
"Razniewski, Simon",
"Weikum, Gerhard"
] | Extracting Multi-valued Relations from Language Models | repl4nlp-1.12 | Poster | 2307.03122 | [
"https://github.com/snehasinghania/multi_valued_slot_filling"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.13.bib | https://aclanthology.org/2023.repl4nlp-1.13/ | @inproceedings{chen-dhingra-2023-hierarchical,
title = "Hierarchical Multi-Instance Multi-Label Learning for Detecting Propaganda Techniques",
author = "Chen, Anni and
Dhingra, Bhuwan",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.13",
doi = "10.18653/v1/2023.repl4nlp-1.13",
pages = "155--163",
abstract = "Since the introduction of the SemEval 2020 Task 11 (CITATION), several approaches have been proposed in the literature for classifying propaganda based on the rhetorical techniques used to influence readers. These methods, however, classify one span at a time, ignoring dependencies from the labels of other spans within the same context. In this paper, we approach propaganda technique classification as a Multi-Instance Multi-Label (MIML) learning problem (CITATION) and propose a simple RoBERTa-based model (CITATION) for classifying all spans in an article simultaneously. Further, we note that, due to the annotation process where annotators classified the spans by following a decision tree, there is an inherent hierarchical relationship among the different techniques, which existing approaches ignore. We incorporate these hierarchical label dependencies by adding an auxiliary classifier for each node in the decision tree to the training objective and ensembling the predictions from the original and auxiliary classifiers at test time. Overall, our model leads to an absolute improvement of 2.47{\%} micro-F1 over the model from the shared task winning team in a cross-validation setup and is the best performing non-ensemble model on the shared task leaderboard.",
}
| Since the introduction of the SemEval 2020 Task 11 (CITATION), several approaches have been proposed in the literature for classifying propaganda based on the rhetorical techniques used to influence readers. These methods, however, classify one span at a time, ignoring dependencies from the labels of other spans within the same context. In this paper, we approach propaganda technique classification as a Multi-Instance Multi-Label (MIML) learning problem (CITATION) and propose a simple RoBERTa-based model (CITATION) for classifying all spans in an article simultaneously. Further, we note that, due to the annotation process where annotators classified the spans by following a decision tree, there is an inherent hierarchical relationship among the different techniques, which existing approaches ignore. We incorporate these hierarchical label dependencies by adding an auxiliary classifier for each node in the decision tree to the training objective and ensembling the predictions from the original and auxiliary classifiers at test time. Overall, our model leads to an absolute improvement of 2.47{\%} micro-F1 over the model from the shared task winning team in a cross-validation setup and is the best performing non-ensemble model on the shared task leaderboard. | [
"Chen, Anni",
"Dhingra, Bhuwan"
] | Hierarchical Multi-Instance Multi-Label Learning for Detecting Propaganda Techniques | repl4nlp-1.13 | Poster | 2305.19419 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.14.bib | https://aclanthology.org/2023.repl4nlp-1.14/ | @inproceedings{ri-etal-2023-contrastive,
title = "Contrastive Loss is All You Need to Recover Analogies as Parallel Lines",
author = "Ri, Narutatsu and
Lee, Fei-Tzin and
Verma, Nakul",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.14",
doi = "10.18653/v1/2023.repl4nlp-1.14",
pages = "164--173",
abstract = "While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the underlying mechanism as to why they result in such geometric structures remains obscure. We find that an elementary contrastive-style method employed over distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings.",
}
| While static word embedding models are known to represent linguistic analogies as parallel lines in high-dimensional space, the underlying mechanism as to why they result in such geometric structures remains obscure. We find that an elementary contrastive-style method employed over distributional information performs competitively with popular word embedding models on analogy recovery tasks, while achieving dramatic speedups in training time. Further, we demonstrate that a contrastive loss is sufficient to create these parallel structures in word embeddings, and establish a precise relationship between the co-occurrence statistics and the geometric structure of the resulting word embeddings. | [
"Ri, Narutatsu",
"Lee, Fei-Tzin",
"Verma, Nakul"
] | Contrastive Loss is All You Need to Recover Analogies as Parallel Lines | repl4nlp-1.14 | Poster | 2306.08221 | [
"https://github.com/narutatsuri/cwm"
] | https://huggingface.co/papers/2306.08221 | 1 | 0 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2023.repl4nlp-1.15.bib | https://aclanthology.org/2023.repl4nlp-1.15/ | @inproceedings{mohammadshahi-henderson-2023-syntax,
title = "Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling",
author = "Mohammadshahi, Alireza and
Henderson, James",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.15",
doi = "10.18653/v1/2023.repl4nlp-1.15",
pages = "174--186",
abstract = "Recent models have shown that incorporating syntactic knowledge into the semantic role labelling (SRL) task leads to a significant improvement. In this paper, we propose Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) model, which encodes the syntactic structure using a novel way to input graph relations as embeddings, directly into the self-attention mechanism of Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure but also allows the model to use this information to learn alternative patterns. We evaluate our model on both span-based and dependency-based SRL datasets, and outperform previous alternative methods in both in-domain and out-of-domain settings, on CoNLL 2005 and CoNLL 2009 datasets.",
}
| Recent models have shown that incorporating syntactic knowledge into the semantic role labelling (SRL) task leads to a significant improvement. In this paper, we propose Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) model, which encodes the syntactic structure using a novel way to input graph relations as embeddings, directly into the self-attention mechanism of Transformer. This approach adds a soft bias towards attention patterns that follow the syntactic structure but also allows the model to use this information to learn alternative patterns. We evaluate our model on both span-based and dependency-based SRL datasets, and outperform previous alternative methods in both in-domain and out-of-domain settings, on CoNLL 2005 and CoNLL 2009 datasets. | [
"Mohammadshahi, Alireza",
"Henderson, James"
] | Syntax-Aware Graph-to-Graph Transformer for Semantic Role Labelling | repl4nlp-1.15 | Poster | 2104.07704 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.16.bib | https://aclanthology.org/2023.repl4nlp-1.16/ | @inproceedings{rahimi-surdeanu-2023-improving,
title = "Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates",
author = "Rahimi, Mahdi and
Surdeanu, Mihai",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.16",
doi = "10.18653/v1/2023.repl4nlp-1.16",
pages = "187--195",
abstract = "While fully supervised relation classification (RC) models perform well on large-scale datasets, their performance drops drastically in low-resource settings. As generating annotated examples are expensive, recent zero-shot methods have been proposed that reformulate RC into other NLP tasks for which supervision exists such as textual entailment. However, these methods rely on templates that are manually created which is costly and requires domain expertise. In this paper, we present a novel strategy for template generation for relation classification, which is based on adapting Harris{'} distributional similarity principle to templates encoded using contextualized representations. Further, we perform empirical evaluation of different strategies for combining the automatically acquired templates with manual templates. The experimental results on TACRED show that our approach not only performs better than the zero-shot RC methods that only use manual templates, but also that it achieves state-of-the-art performance for zero-shot TACRED at 64.3 F1 score.",
}
| While fully supervised relation classification (RC) models perform well on large-scale datasets, their performance drops drastically in low-resource settings. As generating annotated examples are expensive, recent zero-shot methods have been proposed that reformulate RC into other NLP tasks for which supervision exists such as textual entailment. However, these methods rely on templates that are manually created which is costly and requires domain expertise. In this paper, we present a novel strategy for template generation for relation classification, which is based on adapting Harris{'} distributional similarity principle to templates encoded using contextualized representations. Further, we perform empirical evaluation of different strategies for combining the automatically acquired templates with manual templates. The experimental results on TACRED show that our approach not only performs better than the zero-shot RC methods that only use manual templates, but also that it achieves state-of-the-art performance for zero-shot TACRED at 64.3 F1 score. | [
"Rahimi, Mahdi",
"Surdeanu, Mihai"
] | Improving Zero-shot Relation Classification via Automatically-acquired Entailment Templates | repl4nlp-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.17.bib | https://aclanthology.org/2023.repl4nlp-1.17/ | @inproceedings{murahari-etal-2023-mux,
title = "{MUX}-{PLM}s: Pre-training Language Models with Data Multiplexing",
author = "Murahari, Vishvak and
Deshpande, Ameet and
Jimenez, Carlos and
Shafran, Izhak and
Wang, Mingqiu and
Cao, Yuan and
Narasimhan, Karthik",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.17",
doi = "10.18653/v1/2023.repl4nlp-1.17",
pages = "196--211",
abstract = "The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes coupled with hardware shortages has limited affordable access and poses a pressing need for efficiency approaches geared towards high throughput and performance. Multi-input multi-output (MIMO) algorithms such as data multiplexing offer a promising solution with a many-fold increase in throughput by performing inference for multiple inputs at the cost of a single input. Yet these approaches are not currently performant enough to be deployed in modern systems. We change that by developing MUX-PLMs, a class of high throughput pre-trained language models (PLMs) trained with data multiplexing, that can be fine-tuned for any downstream task to yield high-throughput high-performance. Our novel multiplexing and demultiplexing modules proficiently entangle and disentangle inputs, and enable high-performance high throughput that are competitive with vanilla PLMs while achieving 2x/5x inference speedup with only a 1{--}4{\%} drop on a broad suite of tasks.",
}
| The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes coupled with hardware shortages has limited affordable access and poses a pressing need for efficiency approaches geared towards high throughput and performance. Multi-input multi-output (MIMO) algorithms such as data multiplexing offer a promising solution with a many-fold increase in throughput by performing inference for multiple inputs at the cost of a single input. Yet these approaches are not currently performant enough to be deployed in modern systems. We change that by developing MUX-PLMs, a class of high throughput pre-trained language models (PLMs) trained with data multiplexing, that can be fine-tuned for any downstream task to yield high-throughput high-performance. Our novel multiplexing and demultiplexing modules proficiently entangle and disentangle inputs, and enable high-performance high throughput that are competitive with vanilla PLMs while achieving 2x/5x inference speedup with only a 1{--}4{\%} drop on a broad suite of tasks. | [
"Murahari, Vishvak",
"Deshpande, Ameet",
"Jimenez, Carlos",
"Shafran, Izhak",
"Wang, Mingqiu",
"Cao, Yuan",
"Narasimhan, Karthik"
] | MUX-PLMs: Pre-training Language Models with Data Multiplexing | repl4nlp-1.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.18.bib | https://aclanthology.org/2023.repl4nlp-1.18/ | @inproceedings{gale-etal-2023-mixed,
title = "Mixed Orthographic/Phonemic Language Modeling: Beyond Orthographically Restricted Transformers ({BORT})",
author = "Gale, Robert and
Salem, Alexandra and
Fergadiotis, Gerasimos and
Bedrick, Steven",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.18",
doi = "10.18653/v1/2023.repl4nlp-1.18",
pages = "212--225",
abstract = "Speech language pathologists rely on information spanning the layers of language, often drawing from multiple layers (e.g. phonology {\&} semantics) at once. Recent innovations in large language models (LLMs) have been shown to build powerful representations for many complex language structures, especially syntax and semantics, unlocking the potential of large datasets through self-supervised learning techniques. However, these datasets are overwhelmingly orthographic, favoring writing systems like the English alphabet, a natural but phonetically imprecise choice. Meanwhile, LLM support for the international phonetic alphabet (IPA) ranges from poor to absent. Further, LLMs encode text at a word- or near-word level, and pre-training tasks have little to gain from phonetic/phonemic representations. In this paper, we introduce BORT, an LLM for mixed orthography/IPA meant to overcome these limitations. To this end, we extend the pre-training of an existing LLM with our own self-supervised pronunciation tasks. We then fine-tune for a clinical task that requires simultaneous phonological and semantic analysis. For an {``}easy{''} and {``}hard{''} version of these tasks, we show that fine-tuning from our models is more accurate by a relative 24{\%} and 29{\%}, and improved on character error rates by a relative 75{\%} and 31{\%}, respectively, than those starting from the original model.",
}
| Speech language pathologists rely on information spanning the layers of language, often drawing from multiple layers (e.g. phonology {\&} semantics) at once. Recent innovations in large language models (LLMs) have been shown to build powerful representations for many complex language structures, especially syntax and semantics, unlocking the potential of large datasets through self-supervised learning techniques. However, these datasets are overwhelmingly orthographic, favoring writing systems like the English alphabet, a natural but phonetically imprecise choice. Meanwhile, LLM support for the international phonetic alphabet (IPA) ranges from poor to absent. Further, LLMs encode text at a word- or near-word level, and pre-training tasks have little to gain from phonetic/phonemic representations. In this paper, we introduce BORT, an LLM for mixed orthography/IPA meant to overcome these limitations. To this end, we extend the pre-training of an existing LLM with our own self-supervised pronunciation tasks. We then fine-tune for a clinical task that requires simultaneous phonological and semantic analysis. For an {``}easy{''} and {``}hard{''} version of these tasks, we show that fine-tuning from our models is more accurate by a relative 24{\%} and 29{\%}, and improved on character error rates by a relative 75{\%} and 31{\%}, respectively, than those starting from the original model. | [
"Gale, Robert",
"Salem, Alexandra",
"Fergadiotis, Gerasimos",
"Bedrick, Steven"
] | Mixed Orthographic/Phonemic Language Modeling: Beyond Orthographically Restricted Transformers (BORT) | repl4nlp-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.19.bib | https://aclanthology.org/2023.repl4nlp-1.19/ | @inproceedings{obadinma-etal-2023-effectiveness,
title = "Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data",
author = "Obadinma, Stephen and
Guo, Hongyu and
Zhu, Xiaodan",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.19",
doi = "10.18653/v1/2023.repl4nlp-1.19",
pages = "226--237",
abstract = "Recent work has demonstrated that using parameter efficient tuning techniques such as prefix tuning (or P-tuning) on pretrained language models can yield performance that is comparable or superior to fine-tuning while dramatically reducing trainable parameters. Nevertheless, the effectiveness of such methods under the context of data augmentation, a common strategy to improve learning under low data regimes, has not been fully explored. In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter efficient tuning methods, P-tuning v2 and LoRA, under data scarcity. We show that data augmentation can be used to boost the performance of P-tuning and LoRA models, but the effectiveness of each technique varies and certain methods can lead to a notable degradation in performance, particularly when using larger models and on harder tasks. We further analyze the sentence representations of P-tuning compared to fine-tuning to help understand the above behaviour, and reveal how P-tuning generally presents a more limited ability to separate the sentence embeddings from different classes of augmented data. In addition, it displays poorer performance on heavily altered data. However, we demonstrate that by adding a simple contrastive loss function it can help mitigate such issues for prefix tuning, resulting in sizable improvements to augmented data performance.",
}
| Recent work has demonstrated that using parameter efficient tuning techniques such as prefix tuning (or P-tuning) on pretrained language models can yield performance that is comparable or superior to fine-tuning while dramatically reducing trainable parameters. Nevertheless, the effectiveness of such methods under the context of data augmentation, a common strategy to improve learning under low data regimes, has not been fully explored. In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter efficient tuning methods, P-tuning v2 and LoRA, under data scarcity. We show that data augmentation can be used to boost the performance of P-tuning and LoRA models, but the effectiveness of each technique varies and certain methods can lead to a notable degradation in performance, particularly when using larger models and on harder tasks. We further analyze the sentence representations of P-tuning compared to fine-tuning to help understand the above behaviour, and reveal how P-tuning generally presents a more limited ability to separate the sentence embeddings from different classes of augmented data. In addition, it displays poorer performance on heavily altered data. However, we demonstrate that by adding a simple contrastive loss function it can help mitigate such issues for prefix tuning, resulting in sizable improvements to augmented data performance. | [
"Obadinma, Stephen",
"Guo, Hongyu",
"Zhu, Xiaodan"
] | Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data | repl4nlp-1.19 | Poster | 2303.02577 | [
""
] | https://huggingface.co/papers/2303.02577 | 0 | 1 | 0 | 3 | 1 | [] | [] | [] |
https://aclanthology.org/2023.repl4nlp-1.20.bib | https://aclanthology.org/2023.repl4nlp-1.20/ | @inproceedings{wang-li-2023-relational,
title = "Relational Sentence Embedding for Flexible Semantic Matching",
author = "Wang, Bin and
Li, Haizhou",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.20",
doi = "10.18653/v1/2023.repl4nlp-1.20",
pages = "238--252",
}
| No abstract found | [
"Wang, Bin",
"Li, Haizhou"
] | Relational Sentence Embedding for Flexible Semantic Matching | repl4nlp-1.20 | Poster | 2212.08802 | [
"https://github.com/binwang28/rse"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.21.bib | https://aclanthology.org/2023.repl4nlp-1.21/ | @inproceedings{xiao-etal-2023-tucker-decomposition,
title = "Tucker Decomposition with Frequency Attention for Temporal Knowledge Graph Completion",
author = "Xiao, Likang and
Zhang, Richong and
Chen, Zijie and
Chen, Junfan",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.21",
doi = "10.18653/v1/2023.repl4nlp-1.21",
pages = "253--265",
}
| No abstract found | [
"Xiao, Likang",
"Zhang, Richong",
"Chen, Zijie",
"Chen, Junfan"
] | Tucker Decomposition with Frequency Attention for Temporal Knowledge Graph Completion | repl4nlp-1.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.22.bib | https://aclanthology.org/2023.repl4nlp-1.22/ | @inproceedings{bielawski-vanrullen-2023-clip,
title = "{CLIP}-based image captioning via unsupervised cycle-consistency in the latent space",
author = "Bielawski, Romain and
VanRullen, Rufin",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.22",
doi = "10.18653/v1/2023.repl4nlp-1.22",
pages = "266--275",
}
| No abstract found | [
"Bielawski, Romain",
"VanRullen, Rufin"
] | CLIP-based image captioning via unsupervised cycle-consistency in the latent space | repl4nlp-1.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.23.bib | https://aclanthology.org/2023.repl4nlp-1.23/ | @inproceedings{bao-etal-2023-token,
title = "Token-level Fitting Issues of Seq2seq Models",
author = "Bao, Guangsheng and
Teng, Zhiyang and
Zhang, Yue",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.23",
doi = "10.18653/v1/2023.repl4nlp-1.23",
pages = "276--288",
}
| No abstract found | [
"Bao, Guangsheng",
"Teng, Zhiyang",
"Zhang, Yue"
] | Token-level Fitting Issues of Seq2seq Models | repl4nlp-1.23 | Poster | 2305.04493 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.24.bib | https://aclanthology.org/2023.repl4nlp-1.24/ | @inproceedings{chiang-etal-2023-revealing,
title = "Revealing the Blind Spot of Sentence Encoder Evaluation by {HEROS}",
author = "Chiang, Cheng-Han and
Lee, Hung-yi and
Chuang, Yung-Sung and
Glass, James",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.24",
doi = "10.18653/v1/2023.repl4nlp-1.24",
pages = "289--302",
}
| No abstract found | [
"Chiang, Cheng-Han",
"Lee, Hung-yi",
"Chuang, Yung-Sung",
"Glass, James"
] | Revealing the Blind Spot of Sentence Encoder Evaluation by HEROS | repl4nlp-1.24 | Poster | 2306.05083 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.25.bib | https://aclanthology.org/2023.repl4nlp-1.25/ | @inproceedings{harvill-etal-2023-one,
title = "One-Shot Exemplification Modeling via Latent Sense Representations",
author = "Harvill, John and
Hasegawa-Johnson, Mark and
Yoon, Hee Suk and
Yoo, Chang D. and
Yoon, Eunseop",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.25",
doi = "10.18653/v1/2023.repl4nlp-1.25",
pages = "303--314",
}
| No abstract found | [
"Harvill, John",
"Hasegawa-Johnson, Mark",
"Yoon, Hee Suk",
"Yoo, Chang D.",
"Yoon, Eunseop"
] | One-Shot Exemplification Modeling via Latent Sense Representations | repl4nlp-1.25 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.repl4nlp-1.26.bib | https://aclanthology.org/2023.repl4nlp-1.26/ | @inproceedings{shen-etal-2023-sen2pro,
title = "{S}en2{P}ro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model",
author = "Shen, Lingfeng and
Jiang, Haiyun and
Liu, Lemao and
Shi, Shuming",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.26",
doi = "10.18653/v1/2023.repl4nlp-1.26",
pages = "315--333",
}
| No abstract found | [
"Shen, Lingfeng",
"Jiang, Haiyun",
"Liu, Lemao",
"Shi, Shuming"
] | Sen2Pro: A Probabilistic Perspective to Sentence Embedding from Pre-trained Language Model | repl4nlp-1.26 | Poster | 2306.02247 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.repl4nlp-1.27.bib | https://aclanthology.org/2023.repl4nlp-1.27/ | @inproceedings{hong-etal-2023-visual-coherence,
title = "Visual Coherence Loss for Coherent and Visually Grounded Story Generation",
author = "Hong, Xudong and
Demberg, Vera and
Sayeed, Asad and
Zheng, Qiankun and
Schiele, Bernt",
editor = "Can, Burcu and
Mozes, Maximilian and
Cahyawijaya, Samuel and
Saphra, Naomi and
Kassner, Nora and
Ravfogel, Shauli and
Ravichander, Abhilasha and
Zhao, Chen and
Augenstein, Isabelle and
Rogers, Anna and
Cho, Kyunghyun and
Grefenstette, Edward and
Voita, Lena",
booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.repl4nlp-1.27",
doi = "10.18653/v1/2023.repl4nlp-1.27",
pages = "334--346",
}
| No abstract found | [
"Hong, Xudong",
"Demberg, Vera",
"Sayeed, Asad",
"Zheng, Qiankun",
"Schiele, Bernt"
] | Visual Coherence Loss for Coherent and Visually Grounded Story Generation | repl4nlp-1.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.1.bib | https://aclanthology.org/2023.semeval-1.1/ | @inproceedings{wang-etal-2023-knowcomp,
title = "{K}now{C}omp at {S}em{E}val-2023 Task 7: Fine-tuning Pre-trained Language Models for Clinical Trial Entailment Identification",
author = "Wang, Weiqi and
Xu, Baixuan and
Fang, Tianqing and
Zhang, Lirong and
Song, Yangqiu",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.1",
doi = "10.18653/v1/2023.semeval-1.1",
pages = "1--9",
abstract = "In this paper, we present our system for the textual entailment identification task as a subtask of the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. The entailment identification task aims to determine whether a medical statement affirms a valid entailment given a clinical trial premise or forms a contradiction with it. Since the task is inherently a text classification task, we propose a system that performs binary classification given a statement and its associated clinical trial. Our proposed system leverages a human-defined prompt to aggregate the information contained in the statement, section name, and clinical trials. Pre-trained language models are then finetuned on the prompted input sentences to learn to discriminate the inference relation between the statement and clinical trial. To validate our system, we conduct extensive experiments with a wide variety of pre-trained language models. Our best system is built on DeBERTa-v3-large, which achieves an F1 score of 0.764 and secures the fifth rank on the official leaderboard. Further analysis indicates that leveraging our designed prompt is effective, and our model suffers from a low recall. Our code and pre-trained models are available at \url{https://github.com/HKUST-KnowComp/NLI4CT}.",
}
| In this paper, we present our system for the textual entailment identification task as a subtask of the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data. The entailment identification task aims to determine whether a medical statement affirms a valid entailment given a clinical trial premise or forms a contradiction with it. Since the task is inherently a text classification task, we propose a system that performs binary classification given a statement and its associated clinical trial. Our proposed system leverages a human-defined prompt to aggregate the information contained in the statement, section name, and clinical trials. Pre-trained language models are then finetuned on the prompted input sentences to learn to discriminate the inference relation between the statement and clinical trial. To validate our system, we conduct extensive experiments with a wide variety of pre-trained language models. Our best system is built on DeBERTa-v3-large, which achieves an F1 score of 0.764 and secures the fifth rank on the official leaderboard. Further analysis indicates that leveraging our designed prompt is effective, and our model suffers from a low recall. Our code and pre-trained models are available at \url{https://github.com/HKUST-KnowComp/NLI4CT}. | [
"Wang, Weiqi",
"Xu, Baixuan",
"Fang, Tianqing",
"Zhang, Lirong",
"Song, Yangqiu"
] | KnowComp at SemEval-2023 Task 7: Fine-tuning Pre-trained Language Models for Clinical Trial Entailment Identification | semeval-1.1 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.2.bib | https://aclanthology.org/2023.semeval-1.2/ | @inproceedings{conceicao-etal-2023-lasigebiotm,
title = "lasige{B}io{TM} at {S}em{E}val-2023 Task 7: Improving Natural Language Inference Baseline Systems with Domain Ontologies",
author = "Concei{\c{c}}{\~a}o, Sofia I. R. and
F. Sousa, Diana and
Silvestre, Pedro and
Couto, Francisco M",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.2",
doi = "10.18653/v1/2023.semeval-1.2",
pages = "10--15",
    abstract = "Clinical Trials Reports (CTRs) contain highly valuable health information from which Natural Language Inference (NLI) techniques determine if a given hypothesis can be inferred from a given premise. CTRs abound with domain terminology, including terms that are difficult to understand without prior knowledge. Thus, we proposed to use domain ontologies as a source of external knowledge that could help with the inference process in the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT). This document describes our participation in subtask 1: Textual Entailment, where ontologies, NLP techniques such as tokenization and named-entity recognition, and rule-based approaches are all combined in our approach. We were able to show that inputting annotations from domain ontologies improved the baseline systems.",
}
| Clinical Trials Reports (CTRs) contain highly valuable health information from which Natural Language Inference (NLI) techniques determine if a given hypothesis can be inferred from a given premise. CTRs abound with domain terminology, including terms that are difficult to understand without prior knowledge. Thus, we proposed to use domain ontologies as a source of external knowledge that could help with the inference process in the SemEval-2023 Task 7: Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT). This document describes our participation in subtask 1: Textual Entailment, where ontologies, NLP techniques such as tokenization and named-entity recognition, and rule-based approaches are all combined in our approach. We were able to show that inputting annotations from domain ontologies improved the baseline systems. | [
"Concei{\\c{c}}{\\~a}o, Sofia I. R.",
"F. Sousa, Diana",
"Silvestre, Pedro",
"Couto, Francisco M"
] | lasigeBioTM at SemEval-2023 Task 7: Improving Natural Language Inference Baseline Systems with Domain Ontologies | semeval-1.2 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.3.bib | https://aclanthology.org/2023.semeval-1.3/ | @inproceedings{markchom-etal-2023-uor,
title = "{U}o{R}-{NCL} at {S}em{E}val-2023 Task 1: Learning Word-Sense and Image Embeddings for Word Sense Disambiguation",
author = "Markchom, Thanet and
Liang, Huizhi and
Gitau, Joyce and
Liu, Zehao and
Ojha, Varun and
Taylor, Lee and
Bonnici, Jake and
Alshadadi, Abdullah",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.3",
doi = "10.18653/v1/2023.semeval-1.3",
pages = "16--22",
abstract = "In SemEval-2023 Task 1, a task of applying Word Sense Disambiguation in an image retrieval system was introduced. To resolve this task, this work proposes three approaches: (1) an unsupervised approach considering similarities between word senses and image captions, (2) a supervised approach using a Siamese neural network, and (3) a self-supervised approach using a Bayesian personalized ranking framework. According to the results, both supervised and self-supervised approaches outperformed the unsupervised approach. They can effectively identify correct images of ambiguous words in the dataset provided in this task.",
}
| In SemEval-2023 Task 1, a task of applying Word Sense Disambiguation in an image retrieval system was introduced. To resolve this task, this work proposes three approaches: (1) an unsupervised approach considering similarities between word senses and image captions, (2) a supervised approach using a Siamese neural network, and (3) a self-supervised approach using a Bayesian personalized ranking framework. According to the results, both supervised and self-supervised approaches outperformed the unsupervised approach. They can effectively identify correct images of ambiguous words in the dataset provided in this task. | [
"Markchom, Thanet",
"Liang, Huizhi",
"Gitau, Joyce",
"Liu, Zehao",
"Ojha, Varun",
"Taylor, Lee",
"Bonnici, Jake",
"Alshadadi, Abdullah"
] | UoR-NCL at SemEval-2023 Task 1: Learning Word-Sense and Image Embeddings for Word Sense Disambiguation | semeval-1.3 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.4.bib | https://aclanthology.org/2023.semeval-1.4/ | @inproceedings{nakwijit-etal-2023-lexicools,
title = "Lexicools at {S}em{E}val-2023 Task 10: Sexism Lexicon Construction via {XAI}",
author = "Nakwijit, Pakawat and
Samir, Mahmoud and
Purver, Matthew",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.4",
doi = "10.18653/v1/2023.semeval-1.4",
pages = "23--43",
    abstract = "This paper presents our work on the SemEval-2023 Task 10 Explainable Detection of Online Sexism (EDOS) using lexicon-based models. Our approach consists of three main steps: lexicon construction based on Pointwise Mutual Information (PMI) and Shapley value, lexicon augmentation using an unannotated corpus and Large Language Models (LLMs), and, lastly, lexical incorporation for Bag-of-Word (BoW) logistic regression and fine-tuning LLMs. Our results demonstrate that our Shapley approach effectively produces a high-quality lexicon. We also show that simply counting the presence of certain words in our lexicons and comparing the counts can outperform a BoW logistic regression in tasks B/C and fine-tuned BERT in task C. In the end, our classifier achieved F1-scores of 53.34{\%} and 27.31{\%} on the official blind test sets for tasks B and C, respectively. We additionally provide an in-depth analysis highlighting model limitations and bias. We also present our attempts to understand the model{'}s behaviour based on our constructed lexicons. Our code and the resulting lexicons are open-sourced in our GitHub repository \url{https://github.com/SirBadr/SemEval2022-Task10}.",
}
| This paper presents our work on the SemEval-2023 Task 10 Explainable Detection of Online Sexism (EDOS) using lexicon-based models. Our approach consists of three main steps: lexicon construction based on Pointwise Mutual Information (PMI) and Shapley value, lexicon augmentation using an unannotated corpus and Large Language Models (LLMs), and, lastly, lexical incorporation for Bag-of-Word (BoW) logistic regression and fine-tuning LLMs. Our results demonstrate that our Shapley approach effectively produces a high-quality lexicon. We also show that simply counting the presence of certain words in our lexicons and comparing the counts can outperform a BoW logistic regression in tasks B/C and fine-tuned BERT in task C. In the end, our classifier achieved F1-scores of 53.34{\%} and 27.31{\%} on the official blind test sets for tasks B and C, respectively. We additionally provide an in-depth analysis highlighting model limitations and bias. We also present our attempts to understand the model{'}s behaviour based on our constructed lexicons. Our code and the resulting lexicons are open-sourced in our GitHub repository \url{https://github.com/SirBadr/SemEval2022-Task10}. | [
"Nakwijit, Pakawat",
"Samir, Mahmoud",
"Purver, Matthew"
] | Lexicools at SemEval-2023 Task 10: Sexism Lexicon Construction via XAI | semeval-1.4 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.5.bib | https://aclanthology.org/2023.semeval-1.5/ | @inproceedings{li-etal-2023-augmenters,
title = "Augmenters at {S}em{E}val-2023 Task 1: Enhancing {CLIP} in Handling Compositionality and Ambiguity for Zero-Shot Visual {WSD} through Prompt Augmentation and Text-To-Image Diffusion",
author = "Li, Jie and
Shiue, Yow-Ting and
Shih, Yong-Siang and
Geiping, Jonas",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.5",
doi = "10.18653/v1/2023.semeval-1.5",
pages = "44--49",
    abstract = "This paper describes our zero-shot approaches for the Visual Word Sense Disambiguation (VWSD) Task in English. Our preliminary study shows that the simple approach of matching candidate images with the phrase using CLIP suffers from the many-to-many nature of image-text pairs. We find that the CLIP text encoder may have limited abilities in capturing the compositionality in natural language. Conversely, the descriptive focus of the phrase varies from instance to instance. We address these issues in our two systems, Augment-CLIP and Stable Diffusion Sampling (SD Sampling). Augment-CLIP augments the text prompt by generating sentences that contain the context phrase with the help of large language models (LLMs). We further explore CLIP models in other languages, as an ambiguous word may be translated into an unambiguous one in the other language. SD Sampling uses text-to-image Stable Diffusion to generate multiple images from the given phrase, increasing the likelihood that a subset of the images matches the one paired with the text.",
}
| This paper describes our zero-shot approaches for the Visual Word Sense Disambiguation (VWSD) Task in English. Our preliminary study shows that the simple approach of matching candidate images with the phrase using CLIP suffers from the many-to-many nature of image-text pairs. We find that the CLIP text encoder may have limited abilities in capturing the compositionality in natural language. Conversely, the descriptive focus of the phrase varies from instance to instance. We address these issues in our two systems, Augment-CLIP and Stable Diffusion Sampling (SD Sampling). Augment-CLIP augments the text prompt by generating sentences that contain the context phrase with the help of large language models (LLMs). We further explore CLIP models in other languages, as an ambiguous word may be translated into an unambiguous one in the other language. SD Sampling uses text-to-image Stable Diffusion to generate multiple images from the given phrase, increasing the likelihood that a subset of the images matches the one paired with the text. | [
"Li, Jie",
"Shiue, Yow-Ting",
"Shih, Yong-Siang",
"Geiping, Jonas"
] | Augmenters at SemEval-2023 Task 1: Enhancing CLIP in Handling Compositionality and Ambiguity for Zero-Shot Visual WSD through Prompt Augmentation and Text-To-Image Diffusion | semeval-1.5 | Poster | 2307.05564 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.6.bib | https://aclanthology.org/2023.semeval-1.6/ | @inproceedings{salahudeen-etal-2023-hausanlp,
title = "{H}ausa{NLP} at {S}em{E}val-2023 Task 12: Leveraging {A}frican Low Resource {T}weet{D}ata for Sentiment Analysis",
author = "Salahudeen, Saheed Abdullahi and
Lawan, Falalu Ibrahim and
Wali, Ahmad and
Imam, Amina Abubakar and
Shuaibu, Aliyu Rabiu and
Yusuf, Aliyu and
Rabiu, Nur Bala and
Bello, Musa and
Adamu, Shamsuddeen Umaru and
Aliyu, Saminu Mohammad",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.6",
doi = "10.18653/v1/2023.semeval-1.6",
pages = "50--57",
    abstract = "We present the findings of SemEval-2023 Task 12, a shared task on sentiment analysis for low-resource African languages using a Twitter dataset. The task featured three subtasks: subtask A is monolingual sentiment classification with 12 tracks, which are all monolingual languages; subtask B is multilingual sentiment classification using the tracks in subtask A; and subtask C is zero-shot sentiment classification. We present the results and findings of subtasks A, B, and C. We also release the code on GitHub. Our goal is to leverage low-resource tweet data using the pre-trained Afro-xlmr-large, AfriBERTa-Large, Bert-base-arabic-camelbert-da-sentiment (Arabic-camelbert), Multilingual-BERT (mBERT) and BERT models for sentiment analysis of 14 African languages. The datasets for these subtasks consist of gold standard multi-class labeled Twitter datasets from these languages. Our results demonstrate that the Afro-xlmr-large model performed better than the other models on most of the language datasets. Similarly, the Nigerian languages Hausa, Igbo, and Yoruba achieved better performance than the other languages, which can be attributed to the higher volume of data available in those languages.",
}
| We present the findings of SemEval-2023 Task 12, a shared task on sentiment analysis for low-resource African languages using a Twitter dataset. The task featured three subtasks: subtask A is monolingual sentiment classification with 12 tracks, which are all monolingual languages; subtask B is multilingual sentiment classification using the tracks in subtask A; and subtask C is zero-shot sentiment classification. We present the results and findings of subtasks A, B, and C. We also release the code on GitHub. Our goal is to leverage low-resource tweet data using the pre-trained Afro-xlmr-large, AfriBERTa-Large, Bert-base-arabic-camelbert-da-sentiment (Arabic-camelbert), Multilingual-BERT (mBERT) and BERT models for sentiment analysis of 14 African languages. The datasets for these subtasks consist of gold standard multi-class labeled Twitter datasets from these languages. Our results demonstrate that the Afro-xlmr-large model performed better than the other models on most of the language datasets. Similarly, the Nigerian languages Hausa, Igbo, and Yoruba achieved better performance than the other languages, which can be attributed to the higher volume of data available in those languages. | [
"Salahudeen, Saheed Abdullahi",
"Lawan, Falalu Ibrahim",
"Wali, Ahmad",
"Imam, Amina Abubakar",
"Shuaibu, Aliyu Rabiu",
"Yusuf, Aliyu",
"Rabiu, Nur Bala",
"Bello, Musa",
"Adamu, Shamsuddeen Umaru",
"Aliyu, Saminu Mohammad"
] | HausaNLP at SemEval-2023 Task 12: Leveraging African Low Resource TweetData for Sentiment Analysis | semeval-1.6 | Poster | 2304.13634 | [
"https://github.com/ahmadmwali/semeval-afrisenti"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.7.bib | https://aclanthology.org/2023.semeval-1.7/ | @inproceedings{mahmoud-nakov-2023-bertastic,
title = "{BERT}astic at {S}em{E}val-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers Does Order Matter?",
author = "Mahmoud, Tarek and
Nakov, Preslav",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.7",
doi = "10.18653/v1/2023.semeval-1.7",
pages = "58--63",
    abstract = {The naive approach for fine-tuning pretrained deep learning models on downstream tasks involves feeding them mini-batches of randomly sampled data. In this paper, we propose a more elaborate method for fine-tuning Pretrained Multilingual Transformers (PMTs) on multilingual data. Inspired by the success of curriculum learning approaches, we investigate the significance of fine-tuning PMTs on multilingual data in a sequential fashion language by language. Unlike the curriculum learning paradigm where the model is presented with increasingly complex examples, we do not adopt a notion of {``}easy{''} and {``}hard{''} samples. Instead, our experiments draw insight from psychological findings on how the human brain processes new information and the persistence of newly learned concepts. We perform our experiments on a challenging news-framing dataset that contains texts in six languages. Our proposed method outperforms the na{\"\i}ve approach by achieving improvements of 2.57{\%} in terms of F1 score. Even when we supplement the na{\"\i}ve approach with recency fine-tuning, we still achieve an improvement of 1.34{\%} with a 3.63{\%} convergence speed-up. Moreover, we are the first to observe an interesting pattern in which deep learning models exhibit a human-like primacy-recency effect.},
}
| The naive approach for fine-tuning pretrained deep learning models on downstream tasks involves feeding them mini-batches of randomly sampled data. In this paper, we propose a more elaborate method for fine-tuning Pretrained Multilingual Transformers (PMTs) on multilingual data. Inspired by the success of curriculum learning approaches, we investigate the significance of fine-tuning PMTs on multilingual data in a sequential fashion language by language. Unlike the curriculum learning paradigm where the model is presented with increasingly complex examples, we do not adopt a notion of {``}easy{''} and {``}hard{''} samples. Instead, our experiments draw insight from psychological findings on how the human brain processes new information and the persistence of newly learned concepts. We perform our experiments on a challenging news-framing dataset that contains texts in six languages. Our proposed method outperforms the na{\"\ı}ve approach by achieving improvements of 2.57{\%} in terms of F1 score. Even when we supplement the na{\"\ı}ve approach with recency fine-tuning, we still achieve an improvement of 1.34{\%} with a 3.63{\%} convergence speed-up. Moreover, we are the first to observe an interesting pattern in which deep learning models exhibit a human-like primacy-recency effect. | [
"Mahmoud, Tarek",
"Nakov, Preslav"
] | BERTastic at SemEval-2023 Task 3: Fine-Tuning Pretrained Multilingual Transformers Does Order Matter? | semeval-1.7 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.8.bib | https://aclanthology.org/2023.semeval-1.8/ | @inproceedings{tang-2023-brooke,
title = "Brooke-{E}nglish at {S}em{E}val-2023 Task 5: Clickbait Spoiling",
author = "Tang, Shirui",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.8",
doi = "10.18653/v1/2023.semeval-1.8",
pages = "64--76",
    abstract = "The task of clickbait spoiling is to generate a short text that satisfies the curiosity induced by a clickbait post. Clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary. Previous studies on clickbait spoiling have shown that classifying the type of spoiler first, and then generating the appropriate spoiler, is more effective on the Webis Clickbait Spoiling Corpus 2022 dataset. Our contribution focused on studying the three classes (phrase, passage and multi) and finding appropriate models to generate spoilers for each class. Results were analysed for each spoiler type, revealing some reasons for the diverse results across spoiler types. The {``}passage{''} spoiler type was identified as the most difficult and the most valuable type of spoiler.",
}
| The task of clickbait spoiling is to generate a short text that satisfies the curiosity induced by a clickbait post. Clickbait links to a web page and advertises its contents by arousing curiosity instead of providing an informative summary. Previous studies on clickbait spoiling have shown that classifying the type of spoiler first, and then generating the appropriate spoiler, is more effective on the Webis Clickbait Spoiling Corpus 2022 dataset. Our contribution focused on studying the three classes (phrase, passage and multi) and finding appropriate models to generate spoilers for each class. Results were analysed for each spoiler type, revealing some reasons for the diverse results across spoiler types. The {``}passage{''} spoiler type was identified as the most difficult and the most valuable type of spoiler. | [
"Tang, Shirui"
] | Brooke-English at SemEval-2023 Task 5: Clickbait Spoiling | semeval-1.8 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.9.bib | https://aclanthology.org/2023.semeval-1.9/ | @inproceedings{chen-etal-2023-sea,
title = "{S}ea{\_}and{\_}{W}ine at {S}em{E}val-2023 Task 9: A Regression Model with Data Augmentation for Multilingual Intimacy Analysis",
author = "Chen, Yuxi and
Chang, Yu and
Tao, Yanqing and
Zhang, Yanru",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.9",
doi = "10.18653/v1/2023.semeval-1.9",
pages = "77--82",
    abstract = "In Task 9, we are required to analyze the textual intimacy of tweets in 10 languages. We fine-tune the pre-trained XLM-RoBERTa (XLM-R) model to adapt it to this multilingual regression task. After tentative experiments, severe class imbalance is observed in the officially released dataset, which may compromise convergence and weaken the model{'}s effectiveness. To tackle this challenge, we take measures in two aspects. On the one hand, we implement data augmentation through machine translation to enlarge the classes with fewer samples. On the other hand, we introduce focal mean square error (MSE) loss to emphasize the contributions of hard samples to the total loss, thus further mitigating the impact of class imbalance. Extensive experiments demonstrate the remarkable effectiveness of our strategies, and our model achieves a Pearson{'}s correlation coefficient (CC) of almost 0.85 on the validation dataset.",
}
| In Task 9, we are required to analyze the textual intimacy of tweets in 10 languages. We fine-tune the pre-trained XLM-RoBERTa (XLM-R) model to adapt it to this multilingual regression task. After tentative experiments, severe class imbalance is observed in the officially released dataset, which may compromise convergence and weaken the model{'}s effectiveness. To tackle this challenge, we take measures in two aspects. On the one hand, we implement data augmentation through machine translation to enlarge the classes with fewer samples. On the other hand, we introduce focal mean square error (MSE) loss to emphasize the contributions of hard samples to the total loss, thus further mitigating the impact of class imbalance. Extensive experiments demonstrate the remarkable effectiveness of our strategies, and our model achieves a Pearson{'}s correlation coefficient (CC) of almost 0.85 on the validation dataset. | [
"Chen, Yuxi",
"Chang, Yu",
"Tao, Yanqing",
"Zhang, Yanru"
] | Sea_and_Wine at SemEval-2023 Task 9: A Regression Model with Data Augmentation for Multilingual Intimacy Analysis | semeval-1.9 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.10.bib | https://aclanthology.org/2023.semeval-1.10/ | @inproceedings{liao-etal-2023-marseclipse,
title = "{M}ars{E}clipse at {S}em{E}val-2023 Task 3: Multi-lingual and Multi-label Framing Detection with Contrastive Learning",
author = "Liao, Qisheng and
Lai, Meiting and
Nakov, Preslav",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.10",
doi = "10.18653/v1/2023.semeval-1.10",
pages = "83--87",
abstract = "This paper describes our system for SemEval-2023 Task 3 Subtask 2 on Framing Detection. We used a multi-label contrastive loss for fine-tuning large pre-trained language models in a multi-lingual setting, achieving very competitive results: our system was ranked first on the official test set and on the official shared task leaderboard for five of the six languages for which we had training data and for which we could perform fine-tuning. Here, we describe our experimental setup, as well as various ablation studies. The code of our system is available at \url{https://github.com/QishengL/SemEval2023}.",
}
| This paper describes our system for SemEval-2023 Task 3 Subtask 2 on Framing Detection. We used a multi-label contrastive loss for fine-tuning large pre-trained language models in a multi-lingual setting, achieving very competitive results: our system was ranked first on the official test set and on the official shared task leaderboard for five of the six languages for which we had training data and for which we could perform fine-tuning. Here, we describe our experimental setup, as well as various ablation studies. The code of our system is available at \url{https://github.com/QishengL/SemEval2023}. | [
"Liao, Qisheng",
"Lai, Meiting",
"Nakov, Preslav"
] | MarsEclipse at SemEval-2023 Task 3: Multi-lingual and Multi-label Framing Detection with Contrastive Learning | semeval-1.10 | Poster | 2304.14339 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.11.bib | https://aclanthology.org/2023.semeval-1.11/ | @inproceedings{falkenberg-etal-2023-mr,
title = "Mr-Fosdick at {S}em{E}val-2023 Task 5: Comparing Dataset Expansion Techniques for Non-Transformer and Transformer Models: Improving Model Performance through Data Augmentation",
author = {Falkenberg, Christian and
Sch{\"o}nw{\"a}lder, Erik and
Rietzke, Tom and
G{\"o}rner, Chris-Andris and
Walther, Robert and
Gonsior, Julius and
Reusch, Anja},
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.11",
doi = "10.18653/v1/2023.semeval-1.11",
pages = "88--93",
abstract = "In supervised learning, a significant amount of data is essential. To achieve this, we generated and evaluated datasets based on a provided dataset using transformer and non-transformer models. By utilizing these generated datasets during the training of new models, we attain a higher balanced accuracy during validation compared to using only the original dataset.",
}
| In supervised learning, a significant amount of data is essential. To achieve this, we generated and evaluated datasets based on a provided dataset using transformer and non-transformer models. By utilizing these generated datasets during the training of new models, we attain a higher balanced accuracy during validation compared to using only the original dataset. | [
"Falkenberg, Christian",
"Sch{\\\"o}nw{\\\"a}lder, Erik",
"Rietzke, Tom",
"G{\\\"o}rner, Chris-Andris",
"Walther, Robert",
"Gonsior, Julius",
"Reusch, Anja"
] | Mr-Fosdick at SemEval-2023 Task 5: Comparing Dataset Expansion Techniques for Non-Transformer and Transformer Models: Improving Model Performance through Data Augmentation | semeval-1.11 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.12.bib | https://aclanthology.org/2023.semeval-1.12/ | @inproceedings{shahriar-solorio-2023-safewebuh,
title = "{S}afe{W}eb{UH} at {S}em{E}val-2023 Task 11: Learning Annotator Disagreement in Derogatory Text: Comparison of Direct Training vs Aggregation",
author = "Shahriar, Sadat and
Solorio, Thamar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.12",
doi = "10.18653/v1/2023.semeval-1.12",
pages = "94--100",
abstract = "Subjectivity and difference of opinion are key social phenomena, and it is crucial to take these into account in the annotation and detection process of derogatory textual content. In this paper, we use four datasets provided by SemEval-2023 Task 11 and fine-tune a BERT model to capture the disagreement in the annotation. We find individual annotator modeling and aggregation lowers the Cross-Entropy score by an average of 0.21, compared to the direct training on the soft labels. Our findings further demonstrate that annotator metadata contributes to the average 0.029 reduction in the Cross-Entropy score.",
}
| Subjectivity and difference of opinion are key social phenomena, and it is crucial to take these into account in the annotation and detection process of derogatory textual content. In this paper, we use four datasets provided by SemEval-2023 Task 11 and fine-tune a BERT model to capture the disagreement in the annotation. We find individual annotator modeling and aggregation lowers the Cross-Entropy score by an average of 0.21, compared to the direct training on the soft labels. Our findings further demonstrate that annotator metadata contributes to the average 0.029 reduction in the Cross-Entropy score. | [
"Shahriar, Sadat",
"Solorio, Thamar"
] | SafeWebUH at SemEval-2023 Task 11: Learning Annotator Disagreement in Derogatory Text: Comparison of Direct Training vs Aggregation | semeval-1.12 | Poster | 2305.01050 | [
"https://github.com/sadat1971/le-wi-di-semeval-23"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.13.bib | https://aclanthology.org/2023.semeval-1.13/ | @inproceedings{li-etal-2023-ecnu,
title = "{ECNU}{\_}{MIV} at {S}em{E}val-2023 Task 1: {CTIM} - Contrastive Text-Image Model for Multilingual Visual Word Sense Disambiguation",
author = "Li, Zhenghui and
Zhang, Qi and
Xia, Xueyin and
Ye, Yinxiang and
Zhang, Qi and
Huang, Cong",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.13",
doi = "10.18653/v1/2023.semeval-1.13",
pages = "101--107",
    abstract = "Our team focuses on the multimodal domain of images and texts; we propose a model that can learn the matching relationship between text-image pairs by contrastive learning. More specifically, we train the model on the labeled data provided by the official organizer; after pre-training, texts are used to reference learned visual concepts, enabling visual word sense disambiguation tasks. In addition, the top results our team obtained have been released, showing the effectiveness of our solution.",
}
| Our team focuses on the multimodal domain of images and texts; we propose a model that can learn the matching relationship between text-image pairs by contrastive learning. More specifically, we train the model on the labeled data provided by the official organizer; after pre-training, texts are used to reference learned visual concepts, enabling visual word sense disambiguation tasks. In addition, the top results our team obtained have been released, showing the effectiveness of our solution. | [
"Li, Zhenghui",
"Zhang, Qi",
"Xia, Xueyin",
"Ye, Yinxiang",
"Zhang, Qi",
"Huang, Cong"
] | ECNU_MIV at SemEval-2023 Task 1: CTIM - Contrastive Text-Image Model for Multilingual Visual Word Sense Disambiguation | semeval-1.13 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.14.bib | https://aclanthology.org/2023.semeval-1.14/ | @inproceedings{devatine-etal-2023-melodi,
title = "{MELODI} at {S}em{E}val-2023 Task 3: In-domain Pre-training for Low-resource Classification of News Articles",
author = "Devatine, Nicolas and
Muller, Philippe and
Braud, Chlo{\'e}",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.14",
doi = "10.18653/v1/2023.semeval-1.14",
pages = "108--113",
    abstract = "This paper describes our approach to Subtask 1 {``}News Genre Categorization{''} of SemEval-2023 Task 3 {``}Detecting the Category, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup{''}, which aims to determine whether a given news article is an opinion piece, an objective report, or satirical. We fine-tuned the domain-specific language model POLITICS, which was pre-trained on a large-scale dataset of more than 3.6M English political news articles following ideology-driven pre-training objectives. In order to use it in the multilingual setup of the task, we added as a pre-processing step the translation of all documents into English. Our system ranked among the top systems overall in most languages, and ranked 1st on the English dataset.",
}
| This paper describes our approach to Subtask 1 {``}News Genre Categorization{''} of SemEval-2023 Task 3 {``}Detecting the Category, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup{''}, which aims to determine whether a given news article is an opinion piece, an objective report, or satirical. We fine-tuned the domain-specific language model POLITICS, which was pre-trained on a large-scale dataset of more than 3.6M English political news articles following ideology-driven pre-training objectives. In order to use it in the multilingual setup of the task, we added as a pre-processing step the translation of all documents into English. Our system ranked among the top systems overall in most languages, and ranked 1st on the English dataset. | [
"Devatine, Nicolas",
"Muller, Philippe",
"Braud, Chlo{\\'e}"
] | MELODI at SemEval-2023 Task 3: In-domain Pre-training for Low-resource Classification of News Articles | semeval-1.14 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.15.bib | https://aclanthology.org/2023.semeval-1.15/ | @inproceedings{zhang-etal-2023-samsung,
title = "{S}amsung Research {C}hina - {B}eijing at {S}em{E}val-2023 Task 2: An {AL}-{R} Model for Multilingual Complex Named Entity Recognition",
author = "Zhang, Haojie and
Li, Xiao and
Gu, Renhua and
Qu, Xiaoyan and
Meng, Xiangfeng and
Hu, Shuo and
Liu, Song",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.15",
doi = "10.18653/v1/2023.semeval-1.15",
pages = "114--120",
    abstract = "This paper describes our system for SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II). Our team Samsung Research China - Beijing proposes an AL-R (Adjustable Loss RoBERTa) model to boost the performance of recognizing short and complex entities with the challenges of long-tail data distribution, out of knowledge base and noise scenarios. We first employ an adjustable dice loss optimization objective to overcome the issue of long-tail data distribution, which is also proved to be noise-robusted, especially in combatting the issue of fine-grained label confusing. Besides, we develop our own knowledge enhancement tool to provide related contexts for the short context setting and address the issue of out of knowledge base. Experiments have verified the validation of our approaches.",
}
| This paper describes our system for SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II). Our team Samsung Research China - Beijing proposes an AL-R (Adjustable Loss RoBERTa) model to boost the performance of recognizing short and complex entities with the challenges of long-tail data distribution, out of knowledge base and noise scenarios. We first employ an adjustable dice loss optimization objective to overcome the issue of long-tail data distribution, which is also proved to be noise-robusted, especially in combatting the issue of fine-grained label confusing. Besides, we develop our own knowledge enhancement tool to provide related contexts for the short context setting and address the issue of out of knowledge base. Experiments have verified the validation of our approaches. | [
"Zhang, Haojie",
"Li, Xiao",
"Gu, Renhua",
"Qu, Xiaoyan",
"Meng, Xiangfeng",
"Hu, Shuo",
"Liu, Song"
] | Samsung Research China - Beijing at SemEval-2023 Task 2: An AL-R Model for Multilingual Complex Named Entity Recognition | semeval-1.15 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.16.bib | https://aclanthology.org/2023.semeval-1.16/ | @inproceedings{benlahbib-etal-2023-nlp,
title = "{NLP}-{LISAC} at {S}em{E}val-2023 Task 9: Multilingual Tweet Intimacy Analysis via a Transformer-based Approach and Data Augmentation",
author = "Benlahbib, Abdessamad and
Alami, Hamza and
Boumhidi, Achraf and
Benslimane, Omar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.16",
doi = "10.18653/v1/2023.semeval-1.16",
pages = "121--124",
abstract = "This paper presents our system and findings for SemEval 2023 Task 9 Tweet Intimacy Analysis. The main objective of this task was to predict the intimacy of tweets in 10 languages. Our submitted model (ranked 28/45) consists of a transformer-based approach with data augmentation via machine translation.",
}
| This paper presents our system and findings for SemEval 2023 Task 9 Tweet Intimacy Analysis. The main objective of this task was to predict the intimacy of tweets in 10 languages. Our submitted model (ranked 28/45) consists of a transformer-based approach with data augmentation via machine translation. | [
"Benlahbib, Abdessamad",
"Alami, Hamza",
"Boumhidi, Achraf",
"Benslimane, Omar"
] | NLP-LISAC at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis via a Transformer-based Approach and Data Augmentation | semeval-1.16 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.17.bib | https://aclanthology.org/2023.semeval-1.17/ | @inproceedings{neves-2023-bf3r,
title = "{B}f3{R} at {S}em{E}val-2023 Task 7: a text similarity model for textual entailment and evidence retrieval in clinical trials and animal studies",
author = "Neves, Mariana",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.17",
doi = "10.18653/v1/2023.semeval-1.17",
pages = "125--129",
abstract = "We describe our participation on the Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT) of SemEval{'}23. The organizers provided a collection of clinical trials as training data and a set of statements, which can be related to either a single trial or to a comparison of two trials. The task consisted of two sub-tasks: (i) textual entailment (Task 1) for predicting whether the statement is supported (Entailment) or not (Contradiction) by the corresponding trial(s); and (ii) evidence retrieval (Task 2) for selecting the evidences (sentences in the trials) that support the decision made for Task 1. We built a model based on a sentence-based BERT similarity model which was pre-trained on ClinicalBERT embeddings. Our best results on the official test sets were f-scores of 0.64 and 0.67 for Tasks 1 and 2, respectively.",
}
| We describe our participation on the Multi-evidence Natural Language Inference for Clinical Trial Data (NLI4CT) of SemEval{'}23. The organizers provided a collection of clinical trials as training data and a set of statements, which can be related to either a single trial or to a comparison of two trials. The task consisted of two sub-tasks: (i) textual entailment (Task 1) for predicting whether the statement is supported (Entailment) or not (Contradiction) by the corresponding trial(s); and (ii) evidence retrieval (Task 2) for selecting the evidences (sentences in the trials) that support the decision made for Task 1. We built a model based on a sentence-based BERT similarity model which was pre-trained on ClinicalBERT embeddings. Our best results on the official test sets were f-scores of 0.64 and 0.67 for Tasks 1 and 2, respectively. | [
"Neves, Mariana"
] | Bf3R at SemEval-2023 Task 7: a text similarity model for textual entailment and evidence retrieval in clinical trials and animal studies | semeval-1.17 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.18.bib | https://aclanthology.org/2023.semeval-1.18/ | @inproceedings{diem-etal-2023-university,
title = "{U}niversity of {H}ildesheim at {S}em{E}val-2023 Task 1: Combining Pre-trained Multimodal and Generative Models for Image Disambiguation",
author = "Diem, Sebastian and
Im, Chan Jong and
Mandl, Thomas",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.18",
doi = "10.18653/v1/2023.semeval-1.18",
pages = "130--135",
    abstract = "Multimodal ambiguity is a challenge for understanding text and images. Large pre-trained models have already reached a high level of quality. This paper presents an implementation for solving an image disambiguation task relying solely on the knowledge captured in multimodal and language models. Within task 1 of SemEval 2023 (Visual Word Sense Disambiguation), this approach managed to achieve an MRR of 0.738 using CLIP-Large and the OPT model for generating text. Applying a generative model to create more text given a phrase with an ambiguous word leads to an improvement of our results. The performance gain from a bigger language model is larger than the performance gain from using the larger CLIP model.",
}
| Multimodal ambiguity is a challenge for understanding text and images. Large pre-trained models have already reached a high level of quality. This paper presents an implementation for solving an image disambiguation task relying solely on the knowledge captured in multimodal and language models. Within task 1 of SemEval 2023 (Visual Word Sense Disambiguation), this approach managed to achieve an MRR of 0.738 using CLIP-Large and the OPT model for generating text. Applying a generative model to create more text given a phrase with an ambiguous word leads to an improvement of our results. The performance gain from a bigger language model is larger than the performance gain from using the larger CLIP model. | [
"Diem, Sebastian",
"Im, Chan Jong",
"Mandl, Thomas"
] | University of Hildesheim at SemEval-2023 Task 1: Combining Pre-trained Multimodal and Generative Models for Image Disambiguation | semeval-1.18 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.19.bib | https://aclanthology.org/2023.semeval-1.19/ | @inproceedings{tandon-chatterjee-2023-lrl,
title = "{LRL}{\_}{NC} at {S}em{E}val-2023 Task 4: The Touche23-{G}eorge-boole Approach for Multi-Label Classification of Human-Values behind Arguments",
author = "Tandon, Kushagri and
Chatterjee, Niladri",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.19",
doi = "10.18653/v1/2023.semeval-1.19",
pages = "136--142",
    abstract = "The task ValueEval aims at assigning a subset of possible human value categories underlying a given argument. Values behind arguments are often determinants to evaluate the relevance and importance of decisions in ethical sense, thereby making them essential for argument mining. The work presented here proposes two systems for the same. Both systems use RoBERTa to encode sentences in each document. System1 makes use of features obtained from training models for two auxiliary tasks, whereas System2 combines RoBERTa with topic modeling to get sentence representation. These features are used by a classification head to generate predictions. System1 secured the rank 22 in the official task ranking, achieving the macro F1-score 0.46 on the main dataset. System2 was not a part of official evaluation. Subsequent experiments achieved highest (among the proposed systems) macro F1-scores of 0.48 (System2), 0.31 (ablation on System1) and 0.33 (ablation on System1) on the main dataset, the Nahj al-Balagha dataset, and the New York Times dataset.",
}
| The task ValueEval aims at assigning a subset of possible human value categories underlying a given argument. Values behind arguments are often determinants to evaluate the relevance and importance of decisions in ethical sense, thereby making them essential for argument mining. The work presented here proposes two systems for the same. Both systems use RoBERTa to encode sentences in each document. System1 makes use of features obtained from training models for two auxiliary tasks, whereas System2 combines RoBERTa with topic modeling to get sentence representation. These features are used by a classification head to generate predictions. System1 secured the rank 22 in the official task ranking, achieving the macro F1-score 0.46 on the main dataset. System2 was not a part of official evaluation. Subsequent experiments achieved highest (among the proposed systems) macro F1-scores of 0.48 (System2), 0.31 (ablation on System1) and 0.33 (ablation on System1) on the main dataset, the Nahj al-Balagha dataset, and the New York Times dataset. | [
"Tandon, Kushagri",
"Chatterjee, Niladri"
] | LRL_NC at SemEval-2023 Task 4: The Touche23-George-boole Approach for Multi-Label Classification of Human-Values behind Arguments | semeval-1.19 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.20.bib | https://aclanthology.org/2023.semeval-1.20/ | @inproceedings{tandon-chatterjee-2023-lrl-nc,
title = "{LRL}{\_}{NC} at {S}em{E}val-2023 Task 6: Sequential Sentence Classification for Legal Documents Using Topic Modeling Features",
author = "Tandon, Kushagri and
Chatterjee, Niladri",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.20",
doi = "10.18653/v1/2023.semeval-1.20",
pages = "143--149",
    abstract = "Natural Language Processing techniques can be leveraged to process legal proceedings for various downstream applications, such as summarization of a given judgement, prediction of the judgement for a given legal case, precedent search, among others. These applications will benefit from legal judgement documents already segmented into topically coherent units. The current task, namely, Rhetorical Role Prediction, aims at categorising each sentence in the sequence of sentences in a judgement document into different labels. The system proposed in this work combines topic modeling and RoBERTa to encode sentences in each document. A BiLSTM layer has been utilised to get contextualised sentence representations. The Rhetorical Role predictions for each sentence in each document are generated by a final CRF layer of the proposed neuro-computing system. This system secured the rank 12 in the official task ranking, achieving the micro-F1 score 0.7980. The code for the proposed systems has been made available at \url{https://github.com/KushagriT/SemEval23_LegalEval_TeamLRL_NC}",
}
| Natural Language Processing techniques can be leveraged to process legal proceedings for various downstream applications, such as summarization of a given judgement, prediction of the judgement for a given legal case, precedent search, among others. These applications will benefit from legal judgement documents already segmented into topically coherent units. The current task, namely, Rhetorical Role Prediction, aims at categorising each sentence in the sequence of sentences in a judgement document into different labels. The system proposed in this work combines topic modeling and RoBERTa to encode sentences in each document. A BiLSTM layer has been utilised to get contextualised sentence representations. The Rhetorical Role predictions for each sentence in each document are generated by a final CRF layer of the proposed neuro-computing system. This system secured the rank 12 in the official task ranking, achieving the micro-F1 score 0.7980. The code for the proposed systems has been made available at \url{https://github.com/KushagriT/SemEval23_LegalEval_TeamLRL_NC} | [
"Tandon, Kushagri",
"Chatterjee, Niladri"
] | LRL_NC at SemEval-2023 Task 6: Sequential Sentence Classification for Legal Documents Using Topic Modeling Features | semeval-1.20 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.21.bib | https://aclanthology.org/2023.semeval-1.21/ | @inproceedings{dadas-2023-opi,
title = "{OPI} at {S}em{E}val-2023 Task 9: A Simple But Effective Approach to Multilingual Tweet Intimacy Analysis",
author = "Dadas, Slawomir",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.21",
doi = "10.18653/v1/2023.semeval-1.21",
pages = "150--154",
abstract = "This paper describes our submission to the SemEval 2023 multilingual tweet intimacy analysis shared task. The goal of the task was to assess the level of intimacy of Twitter posts in ten languages. The proposed approach consists of several steps. First, we perform in-domain pre-training to create a language model adapted to Twitter data. In the next step, we train an ensemble of regression models to expand the training set with pseudo-labeled examples. The extended dataset is used to train the final solution. Our method was ranked first in five out of ten language subtasks, obtaining the highest average score across all languages.",
}
| This paper describes our submission to the SemEval 2023 multilingual tweet intimacy analysis shared task. The goal of the task was to assess the level of intimacy of Twitter posts in ten languages. The proposed approach consists of several steps. First, we perform in-domain pre-training to create a language model adapted to Twitter data. In the next step, we train an ensemble of regression models to expand the training set with pseudo-labeled examples. The extended dataset is used to train the final solution. Our method was ranked first in five out of ten language subtasks, obtaining the highest average score across all languages. | [
"Dadas, Slawomir"
] | OPI at SemEval-2023 Task 9: A Simple But Effective Approach to Multilingual Tweet Intimacy Analysis | semeval-1.21 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.22.bib | https://aclanthology.org/2023.semeval-1.22/ | @inproceedings{dadas-2023-opi-semeval,
title = "{OPI} at {S}em{E}val-2023 Task 1: Image-Text Embeddings and Multimodal Information Retrieval for Visual Word Sense Disambiguation",
author = "Dadas, Slawomir",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.22",
doi = "10.18653/v1/2023.semeval-1.22",
pages = "155--162",
abstract = "The goal of visual word sense disambiguation is to find the image that best matches the provided description of the word{'}s meaning. It is a challenging problem, requiring approaches that combine language and image understanding. In this paper, we present our submission to SemEval 2023 visual word sense disambiguation shared task. The proposed system integrates multimodal embeddings, learning to rank methods, and knowledge-based approaches. We build a classifier based on the CLIP model, whose results are enriched with additional information retrieved from Wikipedia and lexical databases. Our solution was ranked third in the multilingual task and won in the Persian track, one of the three language subtasks.",
}
| The goal of visual word sense disambiguation is to find the image that best matches the provided description of the word{'}s meaning. It is a challenging problem, requiring approaches that combine language and image understanding. In this paper, we present our submission to SemEval 2023 visual word sense disambiguation shared task. The proposed system integrates multimodal embeddings, learning to rank methods, and knowledge-based approaches. We build a classifier based on the CLIP model, whose results are enriched with additional information retrieved from Wikipedia and lexical databases. Our solution was ranked third in the multilingual task and won in the Persian track, one of the three language subtasks. | [
"Dadas, Slawomir"
] | OPI at SemEval-2023 Task 1: Image-Text Embeddings and Multimodal Information Retrieval for Visual Word Sense Disambiguation | semeval-1.22 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.23.bib | https://aclanthology.org/2023.semeval-1.23/ | @inproceedings{chakraborty-2023-rgat,
title = "{RGAT} at {S}em{E}val-2023 Task 2: Named Entity Recognition Using Graph Attention Network",
author = "Chakraborty, Abir",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.23",
doi = "10.18653/v1/2023.semeval-1.23",
pages = "163--170",
    abstract = "In this paper, we (team RGAT) describe our approach for the SemEval 2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II). The goal of this task is to locate and classify named entities in unstructured short complex texts in 12 different languages and one multilingual setup. We use the dependency tree of the input query as an additional feature in a Graph Attention Network along with the token and part-of-speech features. We also experiment with additional layers like BiLSTM and Transformer in addition to the CRF layer. However, we have not included any external knowledge base like Wikipedia to enrich our inputs. We evaluated our proposed approach on the English NER dataset, which resulted in a clean-subset F1 of 61.29{\%} and an overall F1 of 56.91{\%}. However, other approaches that used an external knowledge base performed significantly better.",
}
| In this paper, we (team RGAT) describe our approach for the SemEval 2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER II). The goal of this task is to locate and classify named entities in unstructured short complex texts in 12 different languages and one multilingual setup. We use the dependency tree of the input query as an additional feature in a Graph Attention Network along with the token and part-of-speech features. We also experiment with additional layers like BiLSTM and Transformer in addition to the CRF layer. However, we have not included any external knowledge base like Wikipedia to enrich our inputs. We evaluated our proposed approach on the English NER dataset, which resulted in a clean-subset F1 of 61.29{\%} and an overall F1 of 56.91{\%}. However, other approaches that used an external knowledge base performed significantly better. | [
"Chakraborty, Abir"
] | RGAT at SemEval-2023 Task 2: Named Entity Recognition Using Graph Attention Network | semeval-1.23 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.24.bib | https://aclanthology.org/2023.semeval-1.24/ | @inproceedings{gajewska-2023-eevvgg,
title = "eevvgg at {S}em{E}val-2023 Task 11: Offensive Language Classification with Rater-based Information",
author = "Gajewska, Ewelina",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.24",
doi = "10.18653/v1/2023.semeval-1.24",
pages = "171--176",
abstract = "A standard majority-based approach to text classification is challenged with an individualised approach in the Semeval-2023 Task 11. Here, disagreements are treated as a useful source of information that could be utilised in the training pipeline. The team proposal makes use of partially disaggregated data and additional information about annotators provided by the organisers to train a BERT-based model for offensive text classification. The approach extends previous studies examining the impact of using raters{'} demographic features on classification performance (Hovy, 2015) or training machine learning models on disaggregated data (Davani et al., 2022). The proposed approach was ranked 11 across all 4 datasets, scoring best for cases with a large pool of annotators (6th place in the MD-Agreement dataset) utilising features based on raters{'} annotation behaviour.",
}
| A standard majority-based approach to text classification is challenged with an individualised approach in the Semeval-2023 Task 11. Here, disagreements are treated as a useful source of information that could be utilised in the training pipeline. The team proposal makes use of partially disaggregated data and additional information about annotators provided by the organisers to train a BERT-based model for offensive text classification. The approach extends previous studies examining the impact of using raters{'} demographic features on classification performance (Hovy, 2015) or training machine learning models on disaggregated data (Davani et al., 2022). The proposed approach was ranked 11 across all 4 datasets, scoring best for cases with a large pool of annotators (6th place in the MD-Agreement dataset) utilising features based on raters{'} annotation behaviour. | [
"Gajewska, Ewelina"
] | eevvgg at SemEval-2023 Task 11: Offensive Language Classification with Rater-based Information | semeval-1.24 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.25.bib | https://aclanthology.org/2023.semeval-1.25/ | @inproceedings{segura-bedmar-2023-hulat,
title = "{HULAT} at {S}em{E}val-2023 Task 9: Data Augmentation for Pre-trained Transformers Applied to Multilingual Tweet Intimacy Analysis",
author = "Segura-Bedmar, Isabel",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.25",
doi = "10.18653/v1/2023.semeval-1.25",
pages = "177--183",
abstract = "This paper describes our participation in SemEval-2023 Task 9, Intimacy Analysis of Multilingual Tweets. We fine-tune some of the most popular transformer models with the training dataset and synthetic data generated by different data augmentation techniques. During the development phase, our best results were obtained by using XLM-T. Data augmentation techniques provide a very slight improvement in the results. Our system ranked in the 27th position out of the 45 participating systems. Despite its modest results, our system shows promising results in languages such as Portuguese, English, and Dutch. All our code is available in the repository \url{https://github.com/isegura/hulat_intimacy}.",
}
| This paper describes our participation in SemEval-2023 Task 9, Intimacy Analysis of Multilingual Tweets. We fine-tune some of the most popular transformer models with the training dataset and synthetic data generated by different data augmentation techniques. During the development phase, our best results were obtained by using XLM-T. Data augmentation techniques provide a very slight improvement in the results. Our system ranked in the 27th position out of the 45 participating systems. Despite its modest results, our system shows promising results in languages such as Portuguese, English, and Dutch. All our code is available in the repository \url{https://github.com/isegura/hulat_intimacy}. | [
"Segura-Bedmar, Isabel"
] | HULAT at SemEval-2023 Task 9: Data Augmentation for Pre-trained Transformers Applied to Multilingual Tweet Intimacy Analysis | semeval-1.25 | Poster | 2302.12794 | [
"https://github.com/isegura/hulat_intimacy"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.26.bib | https://aclanthology.org/2023.semeval-1.26/ | @inproceedings{segura-bedmar-2023-hulat-semeval,
title = "{HULAT} at {S}em{E}val-2023 Task 10: Data Augmentation for Pre-trained Transformers Applied to the Detection of Sexism in Social Media",
author = "Segura-Bedmar, Isabel",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.26",
doi = "10.18653/v1/2023.semeval-1.26",
pages = "184--192",
abstract = "This paper describes our participation in SemEval-2023 Task 10, whose goal is the detection of sexism in social media. We explore some of the most popular transformer models such as BERT, DistilBERT, RoBERTa, and XLNet. We also study different data augmentation techniques to increase the training dataset. During the development phase, our best results were obtained by using RoBERTa and data augmentation for tasks B and C. However, the use of synthetic data does not improve the results for task C. We participated in the three subtasks. Our approach still has much room for improvement, especially in the two fine-grained classifications. All our code is available in the repository \url{https://github.com/isegura/hulat_edos}.",
}
| This paper describes our participation in SemEval-2023 Task 10, whose goal is the detection of sexism in social media. We explore some of the most popular transformer models such as BERT, DistilBERT, RoBERTa, and XLNet. We also study different data augmentation techniques to increase the training dataset. During the development phase, our best results were obtained by using RoBERTa and data augmentation for tasks B and C. However, the use of synthetic data does not improve the results for task C. We participated in the three subtasks. Our approach still has much room for improvement, especially in the two fine-grained classifications. All our code is available in the repository \url{https://github.com/isegura/hulat_edos}. | [
"Segura-Bedmar, Isabel"
] | HULAT at SemEval-2023 Task 10: Data Augmentation for Pre-trained Transformers Applied to the Detection of Sexism in Social Media | semeval-1.26 | Poster | 2302.12840 | [
"https://github.com/isegura/hulat_edos"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.27.bib | https://aclanthology.org/2023.semeval-1.27/ | @inproceedings{paulissen-wendt-2023-lauri,
title = "Lauri Ingman at {S}em{E}val-2023 Task 4: A Chain Classifier for Identifying Human Values behind Arguments",
author = "Paulissen, Spencer and
Wendt, Caroline",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.27",
doi = "10.18653/v1/2023.semeval-1.27",
pages = "193--198",
abstract = "Identifying expressions of human values in textual data is a crucial albeit complicated challenge, not least because ethics are highly variable, often implicit, and transcend circumstance. Opinions, arguments, and the like are generally founded upon more than one guiding principle, which are not necessarily independent. As such, little is known about how to classify and predict moral undertones in natural language sequences. Here, we describe and present a solution to ValueEval, our shared contribution to SemEval 2023 Task 4. Our research design focuses on investigating chain classifier architectures with pretrained contextualized embeddings to detect 20 different human values in written arguments. We show that our best model substantially surpasses the classification performance of the baseline method established in prior work. We discuss limitations to our approach and outline promising directions for future work.",
}
| Identifying expressions of human values in textual data is a crucial albeit complicated challenge, not least because ethics are highly variable, often implicit, and transcend circumstance. Opinions, arguments, and the like are generally founded upon more than one guiding principle, which are not necessarily independent. As such, little is known about how to classify and predict moral undertones in natural language sequences. Here, we describe and present a solution to ValueEval, our shared contribution to SemEval 2023 Task 4. Our research design focuses on investigating chain classifier architectures with pretrained contextualized embeddings to detect 20 different human values in written arguments. We show that our best model substantially surpasses the classification performance of the baseline method established in prior work. We discuss limitations to our approach and outline promising directions for future work. | [
"Paulissen, Spencer",
"Wendt, Caroline"
] | Lauri Ingman at SemEval-2023 Task 4: A Chain Classifier for Identifying Human Values behind Arguments | semeval-1.27 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.28.bib | https://aclanthology.org/2023.semeval-1.28/ | @inproceedings{benlahbib-boumhidi-2023-nlp,
title = "{NLP}-{LISAC} at {S}em{E}val-2023 Task 12: Sentiment Analysis for Tweets expressed in {A}frican languages via Transformer-based Models",
author = "Benlahbib, Abdessamad and
Boumhidi, Achraf",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.28",
doi = "10.18653/v1/2023.semeval-1.28",
pages = "199--204",
abstract = "This paper presents our systems and findings for SemEval-2023 Task 12: AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages. The main objective of this task was to determine the polarity of a tweet (positive, negative, or neutral). Our submitted models (highest rank is 1 and lowest rank is 21 depending on the target Track) consist of various Transformer-based approaches.",
}
| This paper presents our systems and findings for SemEval-2023 Task 12: AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages. The main objective of this task was to determine the polarity of a tweet (positive, negative, or neutral). Our submitted models (highest rank is 1 and lowest rank is 21 depending on the target Track) consist of various Transformer-based approaches. | [
"Benlahbib, Abdessamad",
"Boumhidi, Achraf"
] | NLP-LISAC at SemEval-2023 Task 12: Sentiment Analysis for Tweets expressed in African languages via Transformer-based Models | semeval-1.28 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.29.bib | https://aclanthology.org/2023.semeval-1.29/ | @inproceedings{heavey-etal-2023-stfx,
title = "{S}t{FX}-{NLP} at {S}em{E}val-2023 Task 4: Unsupervised and Supervised Approaches to Detecting Human Values in Arguments",
author = "Heavey, Ethan and
King, Milton and
Hughes, James",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.29",
doi = "10.18653/v1/2023.semeval-1.29",
pages = "205--211",
    abstract = "In this paper, we discuss our models applied to Task 4: Human Value Detection of SemEval 2023, which incorporated two different embedding techniques to interpret the data. Preliminary experiments were conducted to observe important word types. Subsequently, an XGBoost model, an unsupervised learning model, and two ensemble learning models were explored. The best performing model, an ensemble model employing a soft voting technique, secured the 34th spot out of 39 teams on a class-imbalanced dataset. We explored the inclusion of different parts of the provided knowledge resource and found that considering only specific parts assisted our models.",
}
| In this paper, we discuss our models applied to Task 4: Human Value Detection of SemEval 2023, which incorporated two different embedding techniques to interpret the data. Preliminary experiments were conducted to observe important word types. Subsequently, an XGBoost model, an unsupervised learning model, and two ensemble learning models were explored. The best performing model, an ensemble model employing a soft voting technique, secured the 34th spot out of 39 teams on a class-imbalanced dataset. We explored the inclusion of different parts of the provided knowledge resource and found that considering only specific parts assisted our models. | [
"Heavey, Ethan",
"King, Milton",
"Hughes, James"
] | StFX-NLP at SemEval-2023 Task 4: Unsupervised and Supervised Approaches to Detecting Human Values in Arguments | semeval-1.29 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.30.bib | https://aclanthology.org/2023.semeval-1.30/ | @inproceedings{volosincu-etal-2023-fii,
title = "{FII} {SMART} at {S}em{E}val 2023 Task7: Multi-evidence Natural Language Inference for Clinical Trial Data",
author = "Volosincu, Mihai and
Lupu, Cosmin and
Trandabat, Diana and
Gifu, Daniela",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.30",
doi = "10.18653/v1/2023.semeval-1.30",
pages = "212--220",
    abstract = "The {``}Multi-evidence Natural Language Inference for Clinical Trial Data{''} task at the SemEval 2023 competition focuses on extracting essential information on clinical trial data, by posing two subtasks on textual entailment and evidence retrieval. In the context of SemEval, we present a comparison between a method based on the BioBERT model and a CNN model. The task is based on a collection of breast cancer Clinical Trial Reports (CTRs), statements, explanations, and labels annotated by domain expert annotators. We achieved F1 scores of 0.69 for determining the inference relation (entailment vs contradiction) between CTR-statement pairs. The implementation of our system is made available via Github - \url{https://github.com/volosincu/FII_Smart__Semeval2023}.",
}
| The {``}Multi-evidence Natural Language Inference for Clinical Trial Data{''} task at the SemEval 2023 competition focuses on extracting essential information on clinical trial data, by posing two subtasks on textual entailment and evidence retrieval. In the context of SemEval, we present a comparison between a method based on the BioBERT model and a CNN model. The task is based on a collection of breast cancer Clinical Trial Reports (CTRs), statements, explanations, and labels annotated by domain expert annotators. We achieved F1 scores of 0.69 for determining the inference relation (entailment vs contradiction) between CTR-statement pairs. The implementation of our system is made available via Github - \url{https://github.com/volosincu/FII_Smart__Semeval2023}. | [
"Volosincu, Mihai",
"Lupu, Cosmin",
"Trandabat, Diana",
"Gifu, Daniela"
] | FII SMART at SemEval 2023 Task7: Multi-evidence Natural Language Inference for Clinical Trial Data | semeval-1.30 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.31.bib | https://aclanthology.org/2023.semeval-1.31/ | @inproceedings{fang-etal-2023-epicurus,
title = "Epicurus at {S}em{E}val-2023 Task 4: Improving Prediction of Human Values behind Arguments by Leveraging Their Definitions",
author = "Fang, Christian and
Fang, Qixiang and
Nguyen, Dong",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.31",
doi = "10.18653/v1/2023.semeval-1.31",
pages = "221--229",
abstract = "We describe our experiments for SemEval-2023 Task 4 on the identification of human values behind arguments (ValueEval). Because human values are subjective concepts which require precise definitions, we hypothesize that incorporating the definitions of human values (in the form of annotation instructions and validated survey items) during model training can yield better prediction performance. We explore this idea and show that our proposed models perform better than the challenge organizers{'} baselines, with improvements in macro F1 scores of up to 18{\%}.",
}
| We describe our experiments for SemEval-2023 Task 4 on the identification of human values behind arguments (ValueEval). Because human values are subjective concepts which require precise definitions, we hypothesize that incorporating the definitions of human values (in the form of annotation instructions and validated survey items) during model training can yield better prediction performance. We explore this idea and show that our proposed models perform better than the challenge organizers{'} baselines, with improvements in macro F1 scores of up to 18{\%}. | [
"Fang, Christian",
"Fang, Qixiang",
"Nguyen, Dong"
] | Epicurus at SemEval-2023 Task 4: Improving Prediction of Human Values behind Arguments by Leveraging Their Definitions | semeval-1.31 | Poster | 2302.13925 | [
"https://github.com/fqixiang/semeval23task4"
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.32.bib | https://aclanthology.org/2023.semeval-1.32/ | @inproceedings{goot-2023-machamp,
title = "{M}a{C}h{A}mp at {S}em{E}val-2023 tasks 2, 3, 4, 5, 7, 8, 9, 10, 11, and 12: On the Effectiveness of Intermediate Training on an Uncurated Collection of Datasets.",
author = "van der Goot, Rob",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.32",
doi = "10.18653/v1/2023.semeval-1.32",
pages = "230--245",
    abstract = "To improve the ability of language models to handle Natural Language Processing (NLP) tasks, an intermediate step of pre-training has recently been introduced. In this setup, one takes a pre-trained language model, trains it on a (set of) NLP dataset(s), and then finetunes it for a target task. It is known that the selection of relevant transfer tasks is important, but recently some work has shown substantial performance gains by doing intermediate training on a very large set of datasets. Most previous work uses generative language models or only focuses on one or a couple of tasks and uses a carefully curated setup. We compare intermediate training with one or many tasks in a setup where the choice of datasets is more arbitrary; we use all SemEval 2023 text-based tasks. We reach performance improvements for most tasks when using intermediate training. Gains are higher when doing intermediate training on single tasks than all tasks if the right transfer task is identified. Dataset smoothing and heterogeneous batching did not lead to robust gains in our setup.",
}
| To improve the ability of language models to handle Natural Language Processing (NLP) tasks, an intermediate step of pre-training has recently been introduced. In this setup, one takes a pre-trained language model, trains it on a (set of) NLP dataset(s), and then finetunes it for a target task. It is known that the selection of relevant transfer tasks is important, but recently some work has shown substantial performance gains by doing intermediate training on a very large set of datasets. Most previous work uses generative language models or only focuses on one or a couple of tasks and uses a carefully curated setup. We compare intermediate training with one or many tasks in a setup where the choice of datasets is more arbitrary; we use all SemEval 2023 text-based tasks. We reach performance improvements for most tasks when using intermediate training. Gains are higher when doing intermediate training on single tasks than all tasks if the right transfer task is identified. Dataset smoothing and heterogeneous batching did not lead to robust gains in our setup. | [
"van der Goot, Rob"
] | MaChAmp at SemEval-2023 tasks 2, 3, 4, 5, 7, 8, 9, 10, 11, and 12: On the Effectiveness of Intermediate Training on an Uncurated Collection of Datasets. | semeval-1.32 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.33.bib | https://aclanthology.org/2023.semeval-1.33/ | @inproceedings{bhatia-etal-2023-ubc,
title = "{UBC}-{DLNLP} at {S}em{E}val-2023 Task 12: Impact of Transfer Learning on {A}frican Sentiment Analysis",
author = "Bhatia, Gagan and
Adebara, Ife and
Elmadany, Abdelrahim and
Abdul-mageed, Muhammad",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.33",
doi = "10.18653/v1/2023.semeval-1.33",
pages = "246--255",
    abstract = "We describe our contribution to the SemEval 2023 AfriSenti-SemEval shared task, where we tackle the task of sentiment analysis in 14 different African languages. We develop both monolingual and multilingual models under a fully supervised setting (subtasks A and B). We also develop models for the zero-shot setting (subtask C). Our approach involves experimenting with transfer learning using six language models, including further pretraining of some of these models as well as a final finetuning stage. Our best performing models achieve an F1-score of 70.36 on development data and an F1-score of 66.13 on test data. Unsurprisingly, our results demonstrate the effectiveness of transfer learning and finetuning techniques for sentiment analysis across multiple languages. Our approach can be applied to other sentiment analysis tasks in different languages and domains.",
}
| We describe our contribution to the SemEval 2023 AfriSenti-SemEval shared task, where we tackle the task of sentiment analysis in 14 different African languages. We develop both monolingual and multilingual models under a fully supervised setting (subtasks A and B). We also develop models for the zero-shot setting (subtask C). Our approach involves experimenting with transfer learning using six language models, including further pretraining of some of these models as well as a final finetuning stage. Our best performing models achieve an F1-score of 70.36 on development data and an F1-score of 66.13 on test data. Unsurprisingly, our results demonstrate the effectiveness of transfer learning and finetuning techniques for sentiment analysis across multiple languages. Our approach can be applied to other sentiment analysis tasks in different languages and domains. | [
"Bhatia, Gagan",
"Adebara, Ife",
"Elmadany, Abdelrahim",
"Abdul-mageed, Muhammad"
] | UBC-DLNLP at SemEval-2023 Task 12: Impact of Transfer Learning on African Sentiment Analysis | semeval-1.33 | Poster | 2304.11256 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.34.bib | https://aclanthology.org/2023.semeval-1.34/ | @inproceedings{ma-etal-2023-pai,
title = "{PAI} at {S}em{E}val-2023 Task 4: A General Multi-label Classification System with Class-balanced Loss Function and Ensemble Module",
author = "Ma, Long and
Sun, Zeye and
Jiang, Jiawei and
Li, Xuan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.34",
doi = "10.18653/v1/2023.semeval-1.34",
pages = "256--261",
    abstract = "The Human Value Detection shared task (Kiesel et al., 2023) aims to classify whether or not the argument draws on a set of 20 value categories, given a textual argument. This is a difficult task as the discrimination of human values behind arguments is often implicit. Moreover, the number of label categories can be up to 20 and the distribution of data is highly imbalanced. To address these issues, we employ a multi-label classification model and utilize a class-balanced loss function. Our system wins 5 first places, 2 second places, and 6 third places out of 20 categories of the Human Value Detection shared task, and our overall average score of 0.54 also places third. The code is publicly available at \url{https://www.github.com/diqiuzhuanzhuan/semeval2023}.",
}
| The Human Value Detection shared task (Kiesel et al., 2023) aims to classify whether or not the argument draws on a set of 20 value categories, given a textual argument. This is a difficult task as the discrimination of human values behind arguments is often implicit. Moreover, the number of label categories can be up to 20 and the distribution of data is highly imbalanced. To address these issues, we employ a multi-label classification model and utilize a class-balanced loss function. Our system wins 5 first places, 2 second places, and 6 third places out of 20 categories of the Human Value Detection shared task, and our overall average score of 0.54 also places third. The code is publicly available at \url{https://www.github.com/diqiuzhuanzhuan/semeval2023}. | [
"Ma, Long",
"Sun, Zeye",
"Jiang, Jiawei",
"Li, Xuan"
] | PAI at SemEval-2023 Task 4: A General Multi-label Classification System with Class-balanced Loss Function and Ensemble Module | semeval-1.34 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.35.bib | https://aclanthology.org/2023.semeval-1.35/ | @inproceedings{manegold-girrbach-2023-tureuth,
title = {{T}{\"u}{R}euth Legal at {S}em{E}val-2023 Task 6: Modelling Local and Global Structure of Judgements for Rhetorical Role Prediction},
author = "Manegold, Henrik and
Girrbach, Leander",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.35",
doi = "10.18653/v1/2023.semeval-1.35",
pages = "262--269",
abstract = "This paper describes our system for SemEval-2023 Task 6: LegalEval: Understanding Legal Texts. We only participate in Sub-Task (A), Predicting Rhetorical Roles. Our final submission achieves 73.35 test set F1 score, ranking 17th of 27 participants. The proposed method combines global and local models of label distributions and transitions between labels. Through our analyses, we show that especially modelling the temporal distribution of labels contributes positively to performance.",
}
| This paper describes our system for SemEval-2023 Task 6: LegalEval: Understanding Legal Texts. We only participate in Sub-Task (A), Predicting Rhetorical Roles. Our final submission achieves 73.35 test set F1 score, ranking 17th of 27 participants. The proposed method combines global and local models of label distributions and transitions between labels. Through our analyses, we show that especially modelling the temporal distribution of labels contributes positively to performance. | [
"Manegold, Henrik",
"Girrbach, Leander"
] | TüReuth Legal at SemEval-2023 Task 6: Modelling Local and Global Structure of Judgements for Rhetorical Role Prediction | semeval-1.35 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.36.bib | https://aclanthology.org/2023.semeval-1.36/ | @inproceedings{rusnachenko-etal-2023-nclu,
title = "nclu{\_}team at {S}em{E}val-2023 Task 6: Attention-based Approaches for Large Court Judgement Prediction with Explanation",
author = "Rusnachenko, Nicolay and
Markchom, Thanet and
Liang, Huizhi",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.36",
doi = "10.18653/v1/2023.semeval-1.36",
pages = "270--274",
abstract = "Legal documents tend to be large in size. In this paper, we provide an experiment with attention-based approaches complemented by certain document processing techniques for judgment prediction. For the prediction of explanation, we consider this as an extractive text summarization problem based on an output of (1) CNN with attention mechanism and (2) self-attention of language models. Our extensive experiments show that treating document endings at first results in a 2.1{\%} improvement in judgment prediction across all the models. Additional content peeling from non-informative sentences allows an improvement of explanation prediction performance by 4{\%} in the case of attention-based CNN models. The best submissions achieved 8{'}th and 3{'}rd ranks on judgment prediction (C1) and prediction with explanation (C2) tasks respectively among 11 participating teams. The results of our experiments are published",
}
| Legal documents tend to be large in size. In this paper, we provide an experiment with attention-based approaches complemented by certain document processing techniques for judgment prediction. For the prediction of explanation, we consider this as an extractive text summarization problem based on an output of (1) CNN with attention mechanism and (2) self-attention of language models. Our extensive experiments show that treating document endings at first results in a 2.1{\%} improvement in judgment prediction across all the models. Additional content peeling from non-informative sentences allows an improvement of explanation prediction performance by 4{\%} in the case of attention-based CNN models. The best submissions achieved 8{'}th and 3{'}rd ranks on judgment prediction (C1) and prediction with explanation (C2) tasks respectively among 11 participating teams. The results of our experiments are published | [
"Rusnachenko, Nicolay",
"Markchom, Thanet",
"Liang, Huizhi"
] | nclu_team at SemEval-2023 Task 6: Attention-based Approaches for Large Court Judgement Prediction with Explanation | semeval-1.36 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.37.bib | https://aclanthology.org/2023.semeval-1.37/ | @inproceedings{noviello-etal-2023-teamunibo,
title = "{T}eam{U}nibo at {S}em{E}val-2023 Task 6: A transformer based approach to Rhetorical Roles prediction and {NER} in Legal Texts",
author = "Noviello, Yuri and
Pallotta, Enrico and
Pinzarrone, Flavio and
Tanzi, Giuseppe",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.37",
doi = "10.18653/v1/2023.semeval-1.37",
pages = "275--284",
abstract = "This study aims to tackle some challenges posed by legal texts in the field of NLP. The LegalEval challenge proposes three tasks, based on Indian Legal documents: Rhetorical Roles Prediction, Legal Named Entity Recognition, and Court Judgement Prediction with Explanation. Our work focuses on the first two tasks. For the first task we present a context-aware approach to enhance sentence information. With the help of this approach, the classification model utilizing InLegalBert as a transformer achieved 81.12{\%} Micro-F1. For the second task we present a NER approach to extract and classify entities like names of petitioner, respondent, court or statute of a given document. The model utilizing XLNet as transformer and a dependency parser on top achieved 87.43{\%} Macro-F1.",
}
| This study aims to tackle some challenges posed by legal texts in the field of NLP. The LegalEval challenge proposes three tasks, based on Indian Legal documents: Rhetorical Roles Prediction, Legal Named Entity Recognition, and Court Judgement Prediction with Explanation. Our work focuses on the first two tasks. For the first task we present a context-aware approach to enhance sentence information. With the help of this approach, the classification model utilizing InLegalBert as a transformer achieved 81.12{\%} Micro-F1. For the second task we present a NER approach to extract and classify entities like names of petitioner, respondent, court or statute of a given document. The model utilizing XLNet as transformer and a dependency parser on top achieved 87.43{\%} Macro-F1. | [
"Noviello, Yuri",
"Pallotta, Enrico",
"Pinzarrone, Flavio",
"Tanzi, Giuseppe"
] | TeamUnibo at SemEval-2023 Task 6: A transformer based approach to Rhetorical Roles prediction and NER in Legal Texts | semeval-1.37 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.38.bib | https://aclanthology.org/2023.semeval-1.38/ | @inproceedings{garcia-diaz-etal-2023-umuteam,
title = "{UMUT}eam at {S}em{E}val-2023 Task 12: Ensemble Learning of {LLM}s applied to Sentiment Analysis for Low-resource {A}frican Languages",
author = "Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
Caparros-laiz, Camilo and
Almela, {\'A}ngela and
Alcar{\'a}z-M{\'a}rmol, Gema and
Mar{\'\i}n-P{\'e}rez, Mar{\'\i}a Jos{\'e} and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.38",
doi = "10.18653/v1/2023.semeval-1.38",
pages = "285--292",
abstract = "These working notes summarize the participation of the UMUTeam in the SemEval 2023 shared task: AfriSenti, focused on Sentiment Analysis in several African languages. Two subtasks are proposed, one in which each language is considered separately and another one in which all languages are merged. Our proposal to solve both subtasks is grounded on the combination of features extracted from several multilingual Large Language Models and a subset of language-independent linguistic features. Our best results are achieved with the African languages less represented in the training set: Xitsonga, a Mozambique dialect, with a weighted f1-score of 54.89{\%}; Algerian Arabic, with a weighted f1-score of 68.52{\%}; Swahili, with a weighted f1-score of 60.52{\%}; and Twi, with a weighted f1-score of 71.14{\%}.",
}
| These working notes summarize the participation of the UMUTeam in the SemEval 2023 shared task: AfriSenti, focused on Sentiment Analysis in several African languages. Two subtasks are proposed, one in which each language is considered separately and another one in which all languages are merged. Our proposal to solve both subtasks is grounded on the combination of features extracted from several multilingual Large Language Models and a subset of language-independent linguistic features. Our best results are achieved with the African languages less represented in the training set: Xitsonga, a Mozambique dialect, with a weighted f1-score of 54.89{\%}; Algerian Arabic, with a weighted f1-score of 68.52{\%}; Swahili, with a weighted f1-score of 60.52{\%}; and Twi, with a weighted f1-score of 71.14{\%}. | [
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"Caparros-laiz, Camilo",
"Almela, {\\'A}ngela",
"Alcar{\\'a}z-M{\\'a}rmol, Gema",
"Mar{\\'\\i}n-P{\\'e}rez, Mar{\\'\\i}a Jos{\\'e}",
"Valencia-Garc{\\'\\i}a, Rafael"
] | UMUTeam at SemEval-2023 Task 12: Ensemble Learning of LLMs applied to Sentiment Analysis for Low-resource African Languages | semeval-1.38 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.39.bib | https://aclanthology.org/2023.semeval-1.39/ | @inproceedings{garcia-diaz-etal-2023-umuteam-sinai,
title = "{UMUT}eam and {SINAI} at {S}em{E}val-2023 Task 9: Multilingual Tweet Intimacy Analysis using Multilingual Large Language Models and Data Augmentation",
author = "Garc{\'\i}a-D{\'\i}az, Jos{\'e} Antonio and
Pan, Ronghao and
Jim{\'e}nez Zafra, Salud Mar{\'\i}a and
Mart{\'\i}n-Valdivia, Mar{\'\i}a-Teresa and
Ure{\~n}a-L{\'o}pez, L. Alfonso and
Valencia-Garc{\'\i}a, Rafael",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.39",
doi = "10.18653/v1/2023.semeval-1.39",
pages = "293--299",
abstract = "This work presents the participation of the UMUTeam and the SINAI research groups in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The goal of this task is to predict the intimacy of a set of tweets in 10 languages: English, Spanish, Italian, Portuguese, French, Chinese, Hindi, Arabic, Dutch and Korean, of which, the last 4 are not in the training data. Our approach to address this task is based on data augmentation and the use of three multilingual Large Language Models (multilingual BERT, XLM and mDeBERTA) by ensemble learning. Our team ranked 30th out of 45 participants. Our best results were achieved with two unseen languages: Korean (16th) and Hindi (19th).",
}
| This work presents the participation of the UMUTeam and the SINAI research groups in the SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis. The goal of this task is to predict the intimacy of a set of tweets in 10 languages: English, Spanish, Italian, Portuguese, French, Chinese, Hindi, Arabic, Dutch and Korean, of which, the last 4 are not in the training data. Our approach to address this task is based on data augmentation and the use of three multilingual Large Language Models (multilingual BERT, XLM and mDeBERTA) by ensemble learning. Our team ranked 30th out of 45 participants. Our best results were achieved with two unseen languages: Korean (16th) and Hindi (19th). | [
"Garc{\\'\\i}a-D{\\'\\i}az, Jos{\\'e} Antonio",
"Pan, Ronghao",
"Jim{\\'e}nez Zafra, Salud Mar{\\'\\i}a",
"Mart{\\'\\i}n-Valdivia, Mar{\\'\\i}a-Teresa",
"Ure{\\~n}a-L{\\'o}pez, L. Alfonso",
"Valencia-Garc{\\'\\i}a, Rafael"
] | UMUTeam and SINAI at SemEval-2023 Task 9: Multilingual Tweet Intimacy Analysis using Multilingual Large Language Models and Data Augmentation | semeval-1.39 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.40.bib | https://aclanthology.org/2023.semeval-1.40/ | @inproceedings{jiang-2023-team,
title = "Team {QUST} at {S}em{E}val-2023 Task 3: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting Online News Genre, Framing and Persuasion Techniques",
author = "Jiang, Ye",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.40",
doi = "10.18653/v1/2023.semeval-1.40",
pages = "300--306",
abstract = "This paper describes the participation of team QUST in SemEval-2023 Task 3. The monolingual models are first evaluated with under-sampling of the majority classes in the early stage of the task. Then, the pre-trained multilingual model is fine-tuned with a combination of the class weights and the sample weights. Two different fine-tuning strategies, the task-agnostic and the task-dependent, are further investigated. All experiments are conducted under 10-fold cross-validation; the multilingual approaches are superior to the monolingual ones. The submitted system achieves the second-best result in Italian and Spanish (zero-shot) in subtask-1.",
}
| This paper describes the participation of team QUST in SemEval-2023 Task 3. The monolingual models are first evaluated with under-sampling of the majority classes in the early stage of the task. Then, the pre-trained multilingual model is fine-tuned with a combination of the class weights and the sample weights. Two different fine-tuning strategies, the task-agnostic and the task-dependent, are further investigated. All experiments are conducted under 10-fold cross-validation; the multilingual approaches are superior to the monolingual ones. The submitted system achieves the second-best result in Italian and Spanish (zero-shot) in subtask-1. | [
"Jiang, Ye"
] | Team QUST at SemEval-2023 Task 3: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting Online News Genre, Framing and Persuasion Techniques | semeval-1.40 | Poster | 2304.04190 | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
|
https://aclanthology.org/2023.semeval-1.41.bib | https://aclanthology.org/2023.semeval-1.41/ | @inproceedings{chang-etal-2023-nicenlp,
title = "nice{NLP} at {S}em{E}val-2023 Task 10: Dual Model Alternate Pseudo-labeling Improves Your Predictions",
author = "Chang, Yu and
Chen, Yuxi and
Zhang, Yanru",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.41",
doi = "10.18653/v1/2023.semeval-1.41",
pages = "307--311",
abstract = "Sexism is a growing online problem. It harms women who are targeted and makes online spaces inaccessible and unwelcoming. In this paper, we present our approach for Task A of SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS), which aims to perform binary sexism detection on textual content. To solve this task, we fine-tune the pre-trained model based on several popular natural language processing methods to improve the generalization ability in the face of different data. According to the experimental results, the effective combination of multiple methods enables our approach to achieve excellent performance gains.",
}
| Sexism is a growing online problem. It harms women who are targeted and makes online spaces inaccessible and unwelcoming. In this paper, we present our approach for Task A of SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS), which aims to perform binary sexism detection on textual content. To solve this task, we fine-tune the pre-trained model based on several popular natural language processing methods to improve the generalization ability in the face of different data. According to the experimental results, the effective combination of multiple methods enables our approach to achieve excellent performance gains. | [
"Chang, Yu",
"Chen, Yuxi",
"Zhang, Yanru"
] | niceNLP at SemEval-2023 Task 10: Dual Model Alternate Pseudo-labeling Improves Your Predictions | semeval-1.41 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.42.bib | https://aclanthology.org/2023.semeval-1.42/ | @inproceedings{lee-etal-2023-ncuee,
title = "{NCUEE}-{NLP} at {S}em{E}val-2023 Task 8: Identifying Medical Causal Claims and Extracting {PIO} Frames Using the Transformer Models",
author = "Lee, Lung-Hao and
Cheng, Yuan-Hao and
Yang, Jen-Hao and
Tien, Kao-Yuan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.42",
doi = "10.18653/v1/2023.semeval-1.42",
pages = "312--317",
abstract = "This study describes the model design of the NCUEE-NLP system for the SemEval-2023 Task 8. We use the pre-trained transformer models and fine-tune the task datasets to identify medical causal claims and extract population, intervention, and outcome elements in a Reddit post when a claim is given. Our best system submission for the causal claim identification subtask achieved an F1-score of 70.15{\%}. Our best submission for the PIO frame extraction subtask achieved F1-scores of 37.78{\%} for Population class, 43.58{\%} for Intervention class, and 30.67{\%} for Outcome class, resulting in a macro-averaging F1-score of 37.34{\%}. Our system evaluation results ranked second position among all participating teams.",
}
| This study describes the model design of the NCUEE-NLP system for the SemEval-2023 Task 8. We use the pre-trained transformer models and fine-tune the task datasets to identify medical causal claims and extract population, intervention, and outcome elements in a Reddit post when a claim is given. Our best system submission for the causal claim identification subtask achieved an F1-score of 70.15{\%}. Our best submission for the PIO frame extraction subtask achieved F1-scores of 37.78{\%} for Population class, 43.58{\%} for Intervention class, and 30.67{\%} for Outcome class, resulting in a macro-averaging F1-score of 37.34{\%}. Our system evaluation results ranked second position among all participating teams. | [
"Lee, Lung-Hao",
"Cheng, Yuan-Hao",
"Yang, Jen-Hao",
"Tien, Kao-Yuan"
] | NCUEE-NLP at SemEval-2023 Task 8: Identifying Medical Causal Claims and Extracting PIO Frames Using the Transformer Models | semeval-1.42 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.43.bib | https://aclanthology.org/2023.semeval-1.43/ | @inproceedings{he-zhang-2023-zhegu,
title = "Zhegu at {S}em{E}val-2023 Task 9: Exponential Penalty Mean Squared Loss for Multilingual Tweet Intimacy Analysis",
author = "He, Pan and
Zhang, Yanru",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.43",
doi = "10.18653/v1/2023.semeval-1.43",
pages = "318--323",
abstract = "We present the system description of our team Zhegu in SemEval-2023 Task 9 Multilingual Tweet Intimacy Analysis. We propose EPM (Exponential Penalty Mean Squared Loss) for the purpose of enhancing the ability of learning difficult samples during the training process. Meanwhile, we also apply several methods (frozen Tuning {\&} contrastive learning based on Language) on the XLM-R multilingual language model for fine-tuning and model ensemble. The results in our experiments provide strong faithful evidence of the effectiveness of our methods. Eventually, we achieved a Pearson score of 0.567 on the test set.",
}
| We present the system description of our team Zhegu in SemEval-2023 Task 9 Multilingual Tweet Intimacy Analysis. We propose EPM (Exponential Penalty Mean Squared Loss) for the purpose of enhancing the ability of learning difficult samples during the training process. Meanwhile, we also apply several methods (frozen Tuning {\&} contrastive learning based on Language) on the XLM-R multilingual language model for fine-tuning and model ensemble. The results in our experiments provide strong faithful evidence of the effectiveness of our methods. Eventually, we achieved a Pearson score of 0.567 on the test set. | [
"He, Pan",
"Zhang, Yanru"
] | Zhegu at SemEval-2023 Task 9: Exponential Penalty Mean Squared Loss for Multilingual Tweet Intimacy Analysis | semeval-1.43 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.44.bib | https://aclanthology.org/2023.semeval-1.44/ | @inproceedings{thin-etal-2023-abcd,
title = "{ABCD} Team at {S}em{E}val-2023 Task 12: An Ensemble Transformer-based System for {A}frican Sentiment Analysis",
author = "Thin, Dang and
Nguyen, Dai and
Qui, Dang and
Hao, Duong and
Nguyen, Ngan",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.44",
doi = "10.18653/v1/2023.semeval-1.44",
pages = "324--330",
abstract = "This paper describes the system of the ABCD team for three main tasks in the SemEval-2023 Task 12: AfriSenti-SemEval for Low-resource African Languages using Twitter Dataset. We focus on exploring the performance of ensemble architectures based on the soft voting technique and different pre-trained transformer-based language models. The experimental results show that our system has achieved competitive performance in some Tracks in Task A: Monolingual Sentiment Analysis, where we rank the Top 3, Top 2, and Top 4 for the Hausa, Igbo and Moroccan languages. Besides, our model achieved competitive results and ranked 14th place in Task B (multilingual) setting and 14th and 8th place in Track 17 and Track 18 of Task C (zero-shot) setting.",
}
| This paper describes the system of the ABCD team for three main tasks in the SemEval-2023 Task 12: AfriSenti-SemEval for Low-resource African Languages using Twitter Dataset. We focus on exploring the performance of ensemble architectures based on the soft voting technique and different pre-trained transformer-based language models. The experimental results show that our system has achieved competitive performance in some Tracks in Task A: Monolingual Sentiment Analysis, where we rank the Top 3, Top 2, and Top 4 for the Hausa, Igbo and Moroccan languages. Besides, our model achieved competitive results and ranked 14th place in Task B (multilingual) setting and 14th and 8th place in Track 17 and Track 18 of Task C (zero-shot) setting. | [
"Thin, Dang",
"Nguyen, Dai",
"Qui, Dang",
"Hao, Duong",
"Nguyen, Ngan"
] | ABCD Team at SemEval-2023 Task 12: An Ensemble Transformer-based System for African Sentiment Analysis | semeval-1.44 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.45.bib | https://aclanthology.org/2023.semeval-1.45/ | @inproceedings{mukans-barzdins-2023-riga,
title = "{RIGA} at {S}em{E}val-2023 Task 2: {NER} Enhanced with {GPT}-3",
author = "Mukans, Eduards and
Barzdins, Guntis",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.45",
doi = "10.18653/v1/2023.semeval-1.45",
pages = "331--339",
abstract = "The following is a description of the RIGA team{'}s submissions for the English track of the SemEval-2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER) II. Our approach achieves a 17{\%} boost in results by utilizing pre-existing Large-scale Language Models (LLMs), such as GPT-3, to gather additional contexts. We then fine-tune a pre-trained neural network utilizing these contexts. The final step of our approach involves meticulous model and compute resource scaling, which results in improved performance. Our results placed us 12th out of 34 teams in terms of overall ranking and 7th in terms of the noisy subset ranking. The code for our method is available on GitHub (\url{https://github.com/emukans/multiconer2-riga}).",
}
| The following is a description of the RIGA team{'}s submissions for the English track of the SemEval-2023 Task 2: Multilingual Complex Named Entity Recognition (MultiCoNER) II. Our approach achieves a 17{\%} boost in results by utilizing pre-existing Large-scale Language Models (LLMs), such as GPT-3, to gather additional contexts. We then fine-tune a pre-trained neural network utilizing these contexts. The final step of our approach involves meticulous model and compute resource scaling, which results in improved performance. Our results placed us 12th out of 34 teams in terms of overall ranking and 7th in terms of the noisy subset ranking. The code for our method is available on GitHub (\url{https://github.com/emukans/multiconer2-riga}). | [
"Mukans, Eduards",
"Barzdins, Guntis"
] | RIGA at SemEval-2023 Task 2: NER Enhanced with GPT-3 | semeval-1.45 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.46.bib | https://aclanthology.org/2023.semeval-1.46/ | @inproceedings{hematian-hemati-etal-2023-sutnlp,
title = "{SUTNLP} at {S}em{E}val-2023 Task 4: {LG}-Transformer for Human Value Detection",
author = "Hematian Hemati, Hamed and
Alavian, Sayed Hesam and
Sameti, Hossein and
Beigy, Hamid",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.46",
doi = "10.18653/v1/2023.semeval-1.46",
pages = "340--346",
abstract = "When we interact with other humans, human values guide us to consider the human element. As we shall see, value analysis in NLP has been applied to personality profiling but not to argument mining. As part of SemEval-2023 Shared Task 4, our system paper describes a multi-label classifier for identifying human values. Human value detection requires multi-label classification since each argument may contain multiple values. In this paper, we propose an architecture called Label Graph Transformer (LG-Transformer). LG-Transformer is a two-stage pipeline consisting of a transformer jointly encoding argument and labels and a graph module encoding and obtaining further interactions between labels. Using adversarial training, we can boost performance even further. Our best method scored 50.00 using F1 score on the test set, which is 7.8 higher than the best baseline method. Our code is publicly available on Github.",
}
| When we interact with other humans, human values guide us to consider the human element. As we shall see, value analysis in NLP has been applied to personality profiling but not to argument mining. As part of SemEval-2023 Shared Task 4, our system paper describes a multi-label classifier for identifying human values. Human value detection requires multi-label classification since each argument may contain multiple values. In this paper, we propose an architecture called Label Graph Transformer (LG-Transformer). LG-Transformer is a two-stage pipeline consisting of a transformer jointly encoding argument and labels and a graph module encoding and obtaining further interactions between labels. Using adversarial training, we can boost performance even further. Our best method scored 50.00 using F1 score on the test set, which is 7.8 higher than the best baseline method. Our code is publicly available on Github. | [
"Hematian Hemati, Hamed",
"Alavian, Sayed Hesam",
"Sameti, Hossein",
"Beigy, Hamid"
] | SUTNLP at SemEval-2023 Task 4: LG-Transformer for Human Value Detection | semeval-1.46 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.47.bib | https://aclanthology.org/2023.semeval-1.47/ | @inproceedings{hematian-hemati-etal-2023-sutnlp-semeval,
title = "{SUTNLP} at {S}em{E}val-2023 Task 10: {RLAT}-Transformer for explainable online sexism detection",
author = "Hematian Hemati, Hamed and
Alavian, Sayed Hesam and
Beigy, Hamid and
Sameti, Hossein",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.47",
doi = "10.18653/v1/2023.semeval-1.47",
pages = "347--356",
abstract = "There is no simple definition of sexism, but it can be described as prejudice, stereotyping, or discrimination, especially against women, based on their gender. In online interactions, sexism is common. One out of ten American adults says that they have been harassed because of their gender and have been the target of sexism, so sexism is a growing issue. The Explainable Detection of Online Sexism shared task in SemEval-2023 aims at building sexism detection systems for the English language. In order to address the problem, we use large language models such as RoBERTa and DeBERTa. In addition, we present Random Layer Adversarial Training (RLAT) for transformers, and show its significant impact on solving all subtasks. Moreover, we use virtual adversarial training and contrastive learning to improve performance on subtask A. Upon completion of subtask A, B, and C test sets, we obtained macro-F1 of 84.45, 67.78, and 52.52, respectively, outperforming proposed baselines on all subtasks. Our code is publicly available on Github.",
}
| There is no simple definition of sexism, but it can be described as prejudice, stereotyping, or discrimination, especially against women, based on their gender. In online interactions, sexism is common. One out of ten American adults says that they have been harassed because of their gender and have been the target of sexism, so sexism is a growing issue. The Explainable Detection of Online Sexism shared task in SemEval-2023 aims at building sexism detection systems for the English language. In order to address the problem, we use large language models such as RoBERTa and DeBERTa. In addition, we present Random Layer Adversarial Training (RLAT) for transformers, and show its significant impact on solving all subtasks. Moreover, we use virtual adversarial training and contrastive learning to improve performance on subtask A. Upon completion of subtask A, B, and C test sets, we obtained macro-F1 of 84.45, 67.78, and 52.52, respectively outperforming proposed baselines on all subtasks. Our code is publicly available on Github. | [
"Hematian Hemati, Hamed",
"Alavian, Sayed Hesam",
"Beigy, Hamid",
"Sameti, Hossein"
] | SUTNLP at SemEval-2023 Task 10: RLAT-Transformer for explainable online sexism detection | semeval-1.47 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.48.bib | https://aclanthology.org/2023.semeval-1.48/ | @inproceedings{gokani-etal-2023-witcherses,
title = "Witcherses at {S}em{E}val-2023 Task 12: Ensemble Learning for {A}frican Sentiment Analysis",
author = "Gokani, Monil and
Srivatsa, K V Aditya and
Mamidi, Radhika",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.48",
doi = "10.18653/v1/2023.semeval-1.48",
pages = "357--364",
abstract = "This paper describes our system submission for SemEval-2023 Task 12 AfriSenti-SemEval: Sentiment Analysis for African Languages. We propose an XGBoost-based ensemble model trained on emoticon frequency-based features and the predictions of several statistical models such as SVMs, Logistic Regression, Random Forests, and BERT-based pre-trained language models such as AfriBERTa and AfroXLMR. We also report results from additional experiments not in the system. Our system achieves a mixed bag of results, achieving a best rank of 7th in three of the languages - Igbo, Twi, and Yoruba.",
}
| This paper describes our system submission for SemEval-2023 Task 12 AfriSenti-SemEval: Sentiment Analysis for African Languages. We propose an XGBoost-based ensemble model trained on emoticon frequency-based features and the predictions of several statistical models such as SVMs, Logistic Regression, Random Forests, and BERT-based pre-trained language models such as AfriBERTa and AfroXLMR. We also report results from additional experiments not in the system. Our system achieves a mixed bag of results, achieving a best rank of 7th in three of the languages - Igbo, Twi, and Yoruba. | [
"Gokani, Monil",
"Srivatsa, K V Aditya",
"Mamidi, Radhika"
] | Witcherses at SemEval-2023 Task 12: Ensemble Learning for African Sentiment Analysis | semeval-1.48 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.49.bib | https://aclanthology.org/2023.semeval-1.49/ | @inproceedings{keinan-hacohen-kerner-2023-jct,
title = "{JCT} at {S}em{E}val-2023 Tasks 12 A and 12{B}: Sentiment Analysis for Tweets Written in Low-resource {A}frican Languages using Various Machine Learning and Deep Learning Methods, Resampling, and {H}yper{P}arameter Tuning",
author = "Keinan, Ron and
Hacohen-Kerner, Yaakov",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.49",
doi = "10.18653/v1/2023.semeval-1.49",
pages = "365--378",
abstract = "In this paper, we describe our submissions to the SemEval-2023 contest. We tackled subtask 12 - {``}AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset{''}. We developed different models for 12 African languages and a 13th model for a multilingual dataset built from these 12 languages. We applied a wide variety of word and char n-grams based on their tf-idf values, 4 classical machine learning methods, 2 deep learning methods, and 3 oversampling methods. We used 12 sentiment lexicons and applied extensive hyperparameter tuning.",
}
| In this paper, we describe our submissions to the SemEval-2023 contest. We tackled subtask 12 - {``}AfriSenti-SemEval: Sentiment Analysis for Low-resource African Languages using Twitter Dataset{''}. We developed different models for 12 African languages and a 13th model for a multilingual dataset built from these 12 languages. We applied a wide variety of word and char n-grams based on their tf-idf values, 4 classical machine learning methods, 2 deep learning methods, and 3 oversampling methods. We used 12 sentiment lexicons and applied extensive hyperparameter tuning. | [
"Keinan, Ron",
"Hacohen-Kerner, Yaakov"
] | JCT at SemEval-2023 Tasks 12 A and 12B: Sentiment Analysis for Tweets Written in Low-resource African Languages using Various Machine Learning and Deep Learning Methods, Resampling, and HyperParameter Tuning | semeval-1.49 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.50.bib | https://aclanthology.org/2023.semeval-1.50/ | @inproceedings{andres-santamaria-2023-ixa,
title = "{IXA} at {S}em{E}val-2023 Task 2: Baseline Xlm-Roberta-base Approach",
author = "Andres Santamaria, Edgar",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.50",
doi = "10.18653/v1/2023.semeval-1.50",
pages = "379--381",
    abstract = "IXA proposes a sequence-labeling fine-tuning approach consisting of a lightweight few-shot baseline (10e). The system takes advantage of transfer learning from pre-trained Named Entity Recognition and cross-lingual knowledge from the LM checkpoint. This technique obtains a drastic reduction in the effective training costs and works as a solid baseline; future improvements to the baseline approach could include: 1) Domain adaptation, 2) Data augmentation, and 3) Intermediate task learning.",
}
| IXA proposes a sequence-labeling fine-tuning approach consisting of a lightweight few-shot baseline (10e). The system takes advantage of transfer learning from pre-trained Named Entity Recognition and cross-lingual knowledge from the LM checkpoint. This technique obtains a drastic reduction in the effective training costs and works as a solid baseline; future improvements to the baseline approach could include: 1) Domain adaptation, 2) Data augmentation, and 3) Intermediate task learning. | [
"Andres Santamaria, Edgar"
] | IXA at SemEval-2023 Task 2: Baseline Xlm-Roberta-base Approach | semeval-1.50 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.51.bib | https://aclanthology.org/2023.semeval-1.51/ | @inproceedings{purificato-navigli-2023-apatt,
title = "{AP}att at {S}em{E}val-2023 Task 3: The Sapienza {NLP} System for Ensemble-based Multilingual Propaganda Detection",
author = "Purificato, Antonio and
Navigli, Roberto",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.51",
doi = "10.18653/v1/2023.semeval-1.51",
pages = "382--388",
abstract = "In this paper, we present our approach to the task of identification of persuasion techniques in text, which is a subtask of the SemEval-2023 Task 3 on the multilingual detection of genre, framing, and persuasion techniques in online news. The subtask is multi-label at the paragraph level and the inventory considered by the organizers covers 23 persuasion techniques. Our solution is based on an ensemble of a variety of pre-trained language models (PLMs) fine-tuned on the propaganda dataset. We first describe our system, the different experimental setups we considered, and then provide the results on the dev and test sets released by the organizers. The official evaluation shows our solution ranks 1st in English and attains high scores in all the other languages, i.e. French, German, Italian, Polish, and Russian. We also perform an extensive analysis of the data and the annotations to investigate how they can influence the quality of our systems.",
}
| In this paper, we present our approach to the task of identification of persuasion techniques in text, which is a subtask of the SemEval-2023 Task 3 on the multilingual detection of genre, framing, and persuasion techniques in online news. The subtask is multi-label at the paragraph level and the inventory considered by the organizers covers 23 persuasion techniques. Our solution is based on an ensemble of a variety of pre-trained language models (PLMs) fine-tuned on the propaganda dataset. We first describe our system, the different experimental setups we considered, and then provide the results on the dev and test sets released by the organizers. The official evaluation shows our solution ranks 1st in English and attains high scores in all the other languages, i.e. French, German, Italian, Polish, and Russian. We also perform an extensive analysis of the data and the annotations to investigate how they can influence the quality of our systems. | [
"Purificato, Antonio",
"Navigli, Roberto"
] | APatt at SemEval-2023 Task 3: The Sapienza NLP System for Ensemble-based Multilingual Propaganda Detection | semeval-1.51 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.52.bib | https://aclanthology.org/2023.semeval-1.52/ | @inproceedings{belbachir-2023-foul,
title = "Foul at {S}em{E}val-2023 Task 12: {MARBERT} Language model and lexical filtering for sentiments analysis of tweets in {A}lgerian {A}rabic",
author = "Belbachir, Faiza",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.52",
doi = "10.18653/v1/2023.semeval-1.52",
pages = "389--396",
    abstract = "This paper describes the system we designed for our participation in SemEval-2023 Task 12 Track 6 on Algerian dialect sentiment analysis. We propose a transformer language model approach combined with a lexicon mixing terms and emojis, which is used in a post-processing filtering stage. The Algerian sentiment lexicon was extracted manually from tweets. We report on our experiments on Algerian dialect, where we compare the performance of marbert to that of arabicbert and camelbert on the training and development datasets of Task 12. We also analyse the contribution of our post-processing lexical filtering for sentiment analysis. Our system obtained an F1 score equal to $70\%$, ranking 9th among 30 participants.",
}
| This paper describes the system we designed for our participation in SemEval-2023 Task 12 Track 6 on Algerian dialect sentiment analysis. We propose a transformer language model approach combined with a lexicon mixing terms and emojis, which is used in a post-processing filtering stage. The Algerian sentiment lexicon was extracted manually from tweets. We report on our experiments on Algerian dialect, where we compare the performance of marbert to that of arabicbert and camelbert on the training and development datasets of Task 12. We also analyse the contribution of our post-processing lexical filtering for sentiment analysis. Our system obtained an F1 score equal to $70\%$, ranking 9th among 30 participants. | [
"Belbachir, Faiza"
] | Foul at SemEval-2023 Task 12: MARBERT Language model and lexical filtering for sentiments analysis of tweets in Algerian Arabic | semeval-1.52 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.53.bib | https://aclanthology.org/2023.semeval-1.53/ | @inproceedings{huang-etal-2023-cpic,
title = "{CPIC} at {S}em{E}val-2023 Task 7: {GPT}2-Based Model for Multi-evidence Natural Language Inference for Clinical Trial Data",
author = "Huang, Mingtong and
Ren, Junxiang and
Liu, Lang and
Song, Ruilin and
Yin, Wenbo",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.53",
doi = "10.18653/v1/2023.semeval-1.53",
pages = "397--401",
abstract = "This paper describes our system submitted for SemEval Task 7, Multi-Evidence Natural Language Inference for Clinical Trial Data. The task consists of 2 subtasks. Subtask 1 is to determine the relationships between clinical trial data (CTR) and statements. Subtask 2 is to output a set of supporting facts extracted from the premises with the input of CTR premises and statements. Through experiments, we found that our GPT2-based pre-trained models can obtain good results in Subtask 2. Therefore, we use the GPT2-based pre-trained model to fine-tune Subtask 2. We transform the evidence retrieval task into a binary class task by combining premises and statements as input, and the output is whether the premises and statements match. We obtain a top-5 score in the evaluation phase of Subtask 2.",
}
| This paper describes our system submitted for SemEval Task 7, Multi-Evidence Natural Language Inference for Clinical Trial Data. The task consists of 2 subtasks. Subtask 1 is to determine the relationships between clinical trial data (CTR) and statements. Subtask 2 is to output a set of supporting facts extracted from the premises with the input of CTR premises and statements. Through experiments, we found that our GPT2-based pre-trained models can obtain good results in Subtask 2. Therefore, we use the GPT2-based pre-trained model to fine-tune Subtask 2. We transform the evidence retrieval task into a binary class task by combining premises and statements as input, and the output is whether the premises and statements match. We obtain a top-5 score in the evaluation phase of Subtask 2. | [
"Huang, Mingtong",
"Ren, Junxiang",
"Liu, Lang",
"Song, Ruilin",
"Yin, Wenbo"
] | CPIC at SemEval-2023 Task 7: GPT2-Based Model for Multi-evidence Natural Language Inference for Clinical Trial Data | semeval-1.53 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |
||
https://aclanthology.org/2023.semeval-1.54.bib | https://aclanthology.org/2023.semeval-1.54/ | @inproceedings{huo-etal-2023-antcontenttech,
title = "{A}nt{C}ontent{T}ech at {S}em{E}val-2023 Task 6: Domain-adaptive Pretraining and Auxiliary-task Learning for Understanding {I}ndian Legal Texts",
author = "Huo, Jingjing and
Zhang, Kezun and
Liu, Zhengyong and
Lin, Xuan and
Xu, Wenqiang and
Zheng, Maozong and
Wang, Zhaoguo and
Li, Song",
editor = {Ojha, Atul Kr. and
Do{\u{g}}ru{\"o}z, A. Seza and
Da San Martino, Giovanni and
Tayyar Madabushi, Harish and
Kumar, Ritesh and
Sartori, Elisa},
booktitle = "Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023)",
month = jul,
year = "2023",
address = "Toronto, Canada",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.semeval-1.54",
doi = "10.18653/v1/2023.semeval-1.54",
pages = "402--408",
abstract = "The objective of this shared task is to gain an understanding of legal texts, and it is beset with difficulties such as the comprehension of lengthy noisy legal documents, domain specificity as well as the scarcity of annotated data. To address these challenges, we propose a system that employs a hierarchical model and integrates domain-adaptive pretraining, data augmentation, and auxiliary-task learning techniques. Moreover, to enhance generalization and robustness, we ensemble the models that utilize these diverse techniques. Our system ranked first on the RR sub-task and in the middle for the other two sub-tasks.",
}
| The objective of this shared task is to gain an understanding of legal texts, and it is beset with difficulties such as the comprehension of lengthy noisy legal documents, domain specificity as well as the scarcity of annotated data. To address these challenges, we propose a system that employs a hierarchical model and integrates domain-adaptive pretraining, data augmentation, and auxiliary-task learning techniques. Moreover, to enhance generalization and robustness, we ensemble the models that utilize these diverse techniques. Our system ranked first on the RR sub-task and in the middle for the other two sub-tasks. | [
"Huo, Jingjing",
"Zhang, Kezun",
"Liu, Zhengyong",
"Lin, Xuan",
"Xu, Wenqiang",
"Zheng, Maozong",
"Wang, Zhaoguo",
"Li, Song"
] | AntContentTech at SemEval-2023 Task 6: Domain-adaptive Pretraining and Auxiliary-task Learning for Understanding Indian Legal Texts | semeval-1.54 | Poster | [
""
] | -1 | -1 | -1 | -1 | 0 | [] | [] | [] |