Datasets:

Column                       Type             Range
bibtex_url                   stringlengths    41 – 53
proceedings                  stringlengths    38 – 50
bibtext                      stringlengths    566 – 3.75k
abstract                     stringlengths    4 – 3.1k
authors                      sequencelengths  1 – 66
title                        stringlengths    12 – 172
id                           stringlengths    7 – 19
type                         stringclasses    2 values
arxiv_id                     stringlengths    0 – 10
GitHub                       sequencelengths  1 – 1
paper_page                   stringlengths    0 – 40
n_linked_authors             int64            -1 – 21
upvotes                      int64            -1 – 116
num_comments                 int64            -1 – 11
n_authors                    int64            -1 – 61
Models                       sequencelengths  0 – 100
Datasets                     sequencelengths  0 – 100
Spaces                       sequencelengths  0 – 100
old_Models                   sequencelengths  0 – 100
old_Datasets                 sequencelengths  0 – 100
old_Spaces                   sequencelengths  0 – 100
paper_page_exists_pre_conf   int64            0 – 1
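Each record below is listed as a flat sequence of field values (one line per field), in the column order of the schema above. As a minimal sketch of how such records could be loaded and queried, assuming they have been exported to a local JSON Lines file named papers.jsonl (the file name and the summary computed at the end are illustrative assumptions, not part of this listing):

# Minimal sketch: load rows matching the schema above from a JSON Lines export.
# "papers.jsonl" is a placeholder path; substitute the real export or a
# Hugging Face dataset identifier.
from datasets import load_dataset

ds = load_dataset("json", data_files="papers.jsonl", split="train")

# Columns should match the schema table (bibtex_url, abstract, authors, ...).
print(ds.column_names)

# -1 encodes "not available" for the integer columns; keep rows whose
# Hugging Face paper page existed before the conference and has upvote data.
linked = ds.filter(lambda r: r["paper_page_exists_pre_conf"] == 1 and r["upvotes"] >= 0)
print(f"{len(linked)} of {len(ds)} rows have a pre-conference paper page with upvotes")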
https://aclanthology.org/2024.wikinlp-1.3.bib
https://aclanthology.org/2024.wikinlp-1.3/
@inproceedings{li-etal-2024-bordirlines, title = "{B}ord{IR}lines: A Dataset for Evaluating Cross-lingual Retrieval Augmented Generation", author = "Li, Bryan and Haider, Samar and Luo, Fiona and Agashe, Adwait and Callison-Burch, Chris", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.3", pages = "1--13", abstract = "Large language models excel at creative generation but continue to struggle with the issues of hallucination and bias. While retrieval-augmented generation (RAG) provides a framework for grounding LLMs{'} responses in accurate and up-to-date information, it still raises the question of bias: which sources should be selected for inclusion in the context? And how should their importance be weighted? In this paper, we study the challenge of cross-lingual RAG and present a dataset to investigate the robustness of existing systems at answering queries about geopolitical disputes, which exist at the intersection of linguistic, cultural, and political boundaries. Our dataset is sourced from Wikipedia pages containing information relevant to the given queries and we investigate the impact of including additional context, as well as the composition of this context in terms of language and source, on an LLM{'}s response. Our results show that existing RAG systems continue to be challenged by cross-lingual use cases and suffer from a lack of consistency when they are provided with competing information in multiple languages. We present case studies to illustrate these issues and outline steps for future research to address these challenges.", }
Large language models excel at creative generation but continue to struggle with the issues of hallucination and bias. While retrieval-augmented generation (RAG) provides a framework for grounding LLMs{'} responses in accurate and up-to-date information, it still raises the question of bias: which sources should be selected for inclusion in the context? And how should their importance be weighted? In this paper, we study the challenge of cross-lingual RAG and present a dataset to investigate the robustness of existing systems at answering queries about geopolitical disputes, which exist at the intersection of linguistic, cultural, and political boundaries. Our dataset is sourced from Wikipedia pages containing information relevant to the given queries and we investigate the impact of including additional context, as well as the composition of this context in terms of language and source, on an LLM{'}s response. Our results show that existing RAG systems continue to be challenged by cross-lingual use cases and suffer from a lack of consistency when they are provided with competing information in multiple languages. We present case studies to illustrate these issues and outline steps for future research to address these challenges.
[ "Li, Bryan", "Haider, Samar", "Luo, Fiona", "Agashe, Adwait", "Callison-Burch, Chris" ]
BordIRlines: A Dataset for Evaluating Cross-lingual Retrieval Augmented Generation
wikinlp-1.3
Poster
[ "https://github.com/manestay/bordirlines" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wikinlp-1.7.bib
https://aclanthology.org/2024.wikinlp-1.7/
@inproceedings{gelles-dunham-2024-multi, title = "Multi-Label Field Classification for Scientific Documents using Expert and Crowd-sourced Knowledge", author = "Gelles, Rebecca and Dunham, James", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.7", pages = "14--20", abstract = "Taxonomies of scientific research seek to describe complex domains of activity that are overlapping and dynamic. We address this challenge by combining knowledge curated by the Wikipedia community with the input of subject-matter experts to identify, define, and validate a system of 1,110 granular fields of study for use in multi-label classification of scientific publications. The result is capable of categorizing research across subfields of artificial intelligence, computer security, semiconductors, genetics, virology, immunology, neuroscience, biotechnology, and bioinformatics. We then develop and evaluate a solution for zero-shot classification of publications in terms of these fields.", }
Taxonomies of scientific research seek to describe complex domains of activity that are overlapping and dynamic. We address this challenge by combining knowledge curated by the Wikipedia community with the input of subject-matter experts to identify, define, and validate a system of 1,110 granular fields of study for use in multi-label classification of scientific publications. The result is capable of categorizing research across subfields of artificial intelligence, computer security, semiconductors, genetics, virology, immunology, neuroscience, biotechnology, and bioinformatics. We then develop and evaluate a solution for zero-shot classification of publications in terms of these fields.
[ "Gelles, Rebecca", "Dunham, James" ]
Multi-Label Field Classification for Scientific Documents using Expert and Crowd-sourced Knowledge
wikinlp-1.7
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wikinlp-1.8.bib
https://aclanthology.org/2024.wikinlp-1.8/
@inproceedings{li-etal-2024-uncovering, title = "Uncovering Differences in Persuasive Language in {R}ussian versus {E}nglish {W}ikipedia", author = "Li, Bryan and Panasyuk, Aleksey and Callison-Burch, Chris", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.8", pages = "21--35", abstract = "We study how differences in persuasive language across Wikipedia articles, written in either English and Russian, can uncover each culture{'}s distinct perspective on different subjects. We develop a large language model (LLM) powered system to identify instances of persuasive language in multilingual texts. Instead of directly prompting LLMs to detect persuasion, which is subjective and difficult, we propose to reframe the task to instead ask high-level questions (HLQs) which capture different persuasive aspects. Importantly, these HLQs are authored by LLMs themselves. LLMs over-generate a large set of HLQs, which are subsequently filtered to a small set aligned with human labels for the original task. We then apply our approach to a large-scale, bilingual dataset of Wikipedia articles (88K total), using a two-stage identify-then-extract prompting strategy to find instances of persuasion. We quantify the amount of persuasion per article, and explore the differences in persuasion through several experiments on the paired articles. Notably, we generate rankings of articles by persuasion in both languages. These rankings match our intuitions on the culturally-salient subjects; Russian Wikipedia highlights subjects on Ukraine, while English Wikipedia highlights the Middle East. Grouping subjects into larger topics, we find politically-related events contain more persuasion than others. We further demonstrate that HLQs obtain similar performance when posed in either English or Russian. Our methodology enables cross-lingual, cross-cultural understanding at scale, and we release our code, prompts, and data.", }
We study how differences in persuasive language across Wikipedia articles, written in either English and Russian, can uncover each culture{'}s distinct perspective on different subjects. We develop a large language model (LLM) powered system to identify instances of persuasive language in multilingual texts. Instead of directly prompting LLMs to detect persuasion, which is subjective and difficult, we propose to reframe the task to instead ask high-level questions (HLQs) which capture different persuasive aspects. Importantly, these HLQs are authored by LLMs themselves. LLMs over-generate a large set of HLQs, which are subsequently filtered to a small set aligned with human labels for the original task. We then apply our approach to a large-scale, bilingual dataset of Wikipedia articles (88K total), using a two-stage identify-then-extract prompting strategy to find instances of persuasion. We quantify the amount of persuasion per article, and explore the differences in persuasion through several experiments on the paired articles. Notably, we generate rankings of articles by persuasion in both languages. These rankings match our intuitions on the culturally-salient subjects; Russian Wikipedia highlights subjects on Ukraine, while English Wikipedia highlights the Middle East. Grouping subjects into larger topics, we find politically-related events contain more persuasion than others. We further demonstrate that HLQs obtain similar performance when posed in either English or Russian. Our methodology enables cross-lingual, cross-cultural understanding at scale, and we release our code, prompts, and data.
[ "Li, Bryan", "Panasyuk, Aleksey", "Callison-Burch, Chris" ]
Uncovering Differences in Persuasive Language in Russian versus English Wikipedia
wikinlp-1.8
Poster
2409.19148
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wikinlp-1.9.bib
https://aclanthology.org/2024.wikinlp-1.9/
@inproceedings{yang-etal-2024-retrieval, title = "Retrieval Evaluation for Long-Form and Knowledge-Intensive Image{--}Text Article Composition", author = "Yang, Jheng-Hong and Lassance, Carlos and Rezende, Rafael S. and Srinivasan, Krishna and Clinchant, St{\'e}phane and Lin, Jimmy", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.9", pages = "36--45", abstract = "This paper examines the integration of images into Wikipedia articles by evaluating image{--}text retrieval tasks in multimedia content creation, focusing on developing retrieval-augmented tools to enhance the creation of high-quality multimedia articles. Despite ongoing research, the interplay between text and visuals, such as photos and diagrams, remains underexplored, limiting support for real-world applications. We introduce AToMiC, a dataset for long-form, knowledge-intensive image{--}text retrieval, detailing its task design, evaluation protocols, and relevance criteria.Our findings show that a hybrid approach combining a sparse retriever with a dense retriever achieves satisfactory effectiveness, with nDCG@10 scores around 0.4 for Image Suggestion and Image Promotion tasks, providing insights into the challenges of retrieval evaluation in an image{--}text interleaved article composition context.The AToMiC dataset is available at https://github.com/TREC-AToMiC/AToMiC.", }
This paper examines the integration of images into Wikipedia articles by evaluating image{--}text retrieval tasks in multimedia content creation, focusing on developing retrieval-augmented tools to enhance the creation of high-quality multimedia articles. Despite ongoing research, the interplay between text and visuals, such as photos and diagrams, remains underexplored, limiting support for real-world applications. We introduce AToMiC, a dataset for long-form, knowledge-intensive image{--}text retrieval, detailing its task design, evaluation protocols, and relevance criteria. Our findings show that a hybrid approach combining a sparse retriever with a dense retriever achieves satisfactory effectiveness, with nDCG@10 scores around 0.4 for Image Suggestion and Image Promotion tasks, providing insights into the challenges of retrieval evaluation in an image{--}text interleaved article composition context. The AToMiC dataset is available at https://github.com/TREC-AToMiC/AToMiC.
[ "Yang, Jheng-Hong", "Lassance, Carlos", "Rezende, Rafael S.", "Srinivasan, Krishna", "Clinchant, St{\\'e}phane", "Lin, Jimmy" ]
Retrieval Evaluation for Long-Form and Knowledge-Intensive Image–Text Article Composition
wikinlp-1.9
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wikinlp-1.10.bib
https://aclanthology.org/2024.wikinlp-1.10/
@inproceedings{salas-jimenez-etal-2024-wikibias, title = "{W}iki{B}ias as an Extrapolation Corpus for Bias Detection", author = "Salas-Jimenez, K. and Lopez-Ponce, Francisco Fernando and Ojeda-Trueba, Sergio-Luis and Bel-Enguix, Gemma", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.10", pages = "46--52", abstract = "This paper explores whether it is possible to train a machine learning model using Wikipedia data to detect subjectivity in sentences and generalize effectively to other domains. To achieve this, we performed experiments with the WikiBias corpus, the BABE corpus, and the CheckThat! Dataset. Various classical models for ML were tested, including Logistic Regression, SVC, and SVR, including characteristics such as Sentence Transformers similarity, probabilistic sentiment measures, and biased lexicons. Pre-trained models like DistilRoBERTa, as well as large language models like Gemma and GPT-4, were also tested for the same classification task.", }
This paper explores whether it is possible to train a machine learning model using Wikipedia data to detect subjectivity in sentences and generalize effectively to other domains. To achieve this, we performed experiments with the WikiBias corpus, the BABE corpus, and the CheckThat! Dataset. Various classical models for ML were tested, including Logistic Regression, SVC, and SVR, including characteristics such as Sentence Transformers similarity, probabilistic sentiment measures, and biased lexicons. Pre-trained models like DistilRoBERTa, as well as large language models like Gemma and GPT-4, were also tested for the same classification task.
[ "Salas-Jimenez, K.", "Lopez-Ponce, Francisco Fern", "o", "Ojeda-Trueba, Sergio-Luis", "Bel-Enguix, Gemma" ]
WikiBias as an Extrapolation Corpus for Bias Detection
wikinlp-1.10
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wikinlp-1.11.bib
https://aclanthology.org/2024.wikinlp-1.11/
@inproceedings{borkakoty-espinosa-anke-2024-hoaxpedia, title = "{HOAXPEDIA}: A Unified {W}ikipedia Hoax Articles Dataset", author = "Borkakoty, Hsuvas and Espinosa-Anke, Luis", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.11", pages = "53--66", abstract = "Hoaxes are a recognised form of disinformation created deliberately, with potential serious implications in the credibility of reference knowledge resources such as Wikipedia. What makes detecting Wikipedia hoaxes hard is that they often are written according to the official style guidelines. In this work, we first provide a systematic analysis of similarities and discrepancies between legitimate and hoax Wikipedia articles, and introduce HOAXPEDIA, a collection of 311 hoax articles (from existing literature and official Wikipedia lists), together with semantically similar legitimate articles, which together form a binary text classification dataset aimed at fostering research in automated hoax detection. In this paper, We report results after analyzing several language models, hoax-to-legit ratios, and the amount of text classifiers are exposed to (full article vs the article{'}s definition alone). Our results suggest that detecting deceitful content in Wikipedia based on content alone is hard but feasible, and complement our analysis with a study on the differences in distributions in edit histories, and find that looking at this feature yields better classification results than context.", }
Hoaxes are a recognised form of disinformation created deliberately, with potential serious implications in the credibility of reference knowledge resources such as Wikipedia. What makes detecting Wikipedia hoaxes hard is that they often are written according to the official style guidelines. In this work, we first provide a systematic analysis of similarities and discrepancies between legitimate and hoax Wikipedia articles, and introduce HOAXPEDIA, a collection of 311 hoax articles (from existing literature and official Wikipedia lists), together with semantically similar legitimate articles, which together form a binary text classification dataset aimed at fostering research in automated hoax detection. In this paper, we report results after analyzing several language models, hoax-to-legit ratios, and the amount of text classifiers are exposed to (full article vs the article{'}s definition alone). Our results suggest that detecting deceitful content in Wikipedia based on content alone is hard but feasible, and complement our analysis with a study on the differences in distributions in edit histories, and find that looking at this feature yields better classification results than context.
[ "Borkakoty, Hsuvas", "Espinosa-Anke, Luis" ]
HOAXPEDIA: A Unified Wikipedia Hoax Articles Dataset
wikinlp-1.11
Poster
2405.02175
[ "https://github.com/hsuvas/hoaxpedia_dataset" ]
https://huggingface.co/papers/2405.02175
0
0
0
2
[]
[ "hsuvaskakoty/hoaxpedia" ]
[]
[]
[ "hsuvaskakoty/hoaxpedia" ]
[]
1
https://aclanthology.org/2024.wikinlp-1.12.bib
https://aclanthology.org/2024.wikinlp-1.12/
@inproceedings{brooks-etal-2024-rise, title = "The Rise of {AI}-Generated Content in {W}ikipedia", author = "Brooks, Creston and Eggert, Samuel and Peskoff, Denis", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.12", pages = "67--79", abstract = "The rise of AI-generated content in popular information sources raises significant concerns about accountability, accuracy, and bias amplification. Beyond directly impacting consumers, the widespread presence of this content poses questions for the long-term viability of training language models on vast internet sweeps. We use GPTZero, a proprietary AI detector, and Binoculars, an open-source alternative, to establish lower bounds on the presence of AI-generated content in recently created Wikipedia pages. Both detectors reveal a marked increase in AI-generated content in recent pages compared to those from before the release of GPT-3.5. With thresholds calibrated to achieve a 1{\%} false positive rate on pre-GPT-3.5 articles, detectors flag over 5{\%} of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.", }
The rise of AI-generated content in popular information sources raises significant concerns about accountability, accuracy, and bias amplification. Beyond directly impacting consumers, the widespread presence of this content poses questions for the long-term viability of training language models on vast internet sweeps. We use GPTZero, a proprietary AI detector, and Binoculars, an open-source alternative, to establish lower bounds on the presence of AI-generated content in recently created Wikipedia pages. Both detectors reveal a marked increase in AI-generated content in recent pages compared to those from before the release of GPT-3.5. With thresholds calibrated to achieve a 1{\%} false positive rate on pre-GPT-3.5 articles, detectors flag over 5{\%} of newly created English Wikipedia articles as AI-generated, with lower percentages for German, French, and Italian articles. Flagged Wikipedia articles are typically of lower quality and are often self-promotional or partial towards a specific viewpoint on controversial topics.
[ "Brooks, Creston", "Eggert, Samuel", "Peskoff, Denis" ]
The Rise of AI-Generated Content in Wikipedia
wikinlp-1.12
Poster
2410.08044
[ "https://github.com/brooksca3/wiki_collection" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wikinlp-1.13.bib
https://aclanthology.org/2024.wikinlp-1.13/
@inproceedings{shibuya-utsuro-2024-embedded, title = "Embedded Topic Models Enhanced by Wikification", author = "Shibuya, Takashi and Utsuro, Takehito", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.13", pages = "80--90", abstract = "Topic modeling analyzes a collection of documents to learn meaningful patterns of words.However, previous topic models consider only the spelling of words and do not take into consideration the polysemy of words.In this study, we incorporate the Wikipedia knowledge into a neural topic model to make it aware of named entities.We evaluate our method on two datasets, 1) news articles of New York Times and 2) the AIDA-CoNLL dataset.Our experiments show that our method improves the performance of neural topic models in generalizability.Moreover, we analyze frequent words in each topic and the temporal dependencies between topics to demonstrate that our entity-aware topic models can capture the time-series development of topics well.", }
Topic modeling analyzes a collection of documents to learn meaningful patterns of words. However, previous topic models consider only the spelling of words and do not take into consideration the polysemy of words. In this study, we incorporate the Wikipedia knowledge into a neural topic model to make it aware of named entities. We evaluate our method on two datasets, 1) news articles of New York Times and 2) the AIDA-CoNLL dataset. Our experiments show that our method improves the performance of neural topic models in generalizability. Moreover, we analyze frequent words in each topic and the temporal dependencies between topics to demonstrate that our entity-aware topic models can capture the time-series development of topics well.
[ "Shibuya, Takashi", "Utsuro, Takehito" ]
Embedded Topic Models Enhanced by Wikification
wikinlp-1.13
Poster
2410.02441
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wikinlp-1.14.bib
https://aclanthology.org/2024.wikinlp-1.14/
@inproceedings{johnson-etal-2024-wikimedia, title = "Wikimedia data for {AI}: a review of Wikimedia datasets for {NLP} tasks and {AI}-assisted editing", author = "Johnson, Isaac and Kaffee, Lucie-Aim{\'e}e and Redi, Miriam", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.14", pages = "91--101", abstract = "Wikimedia content is used extensively by the AI community and within the language modeling community in particular. In this paper, we provide a review of the different ways in which Wikimedia data is curated to use in NLP tasks across pre-training, post-training, and model evaluations. We point to opportunities for greater use of Wikimedia content but also identify ways in which the language modeling community could better center the needs of Wikimedia editors. In particular, we call for incorporating additional sources of Wikimedia data, a greater focus on benchmarks for LLMs that encode Wikimedia principles, and greater multilingualism in Wikimedia-derived datasets.", }
Wikimedia content is used extensively by the AI community and within the language modeling community in particular. In this paper, we provide a review of the different ways in which Wikimedia data is curated to use in NLP tasks across pre-training, post-training, and model evaluations. We point to opportunities for greater use of Wikimedia content but also identify ways in which the language modeling community could better center the needs of Wikimedia editors. In particular, we call for incorporating additional sources of Wikimedia data, a greater focus on benchmarks for LLMs that encode Wikimedia principles, and greater multilingualism in Wikimedia-derived datasets.
[ "Johnson, Isaac", "Kaffee, Lucie-Aim{\\'e}e", "Redi, Miriam" ]
Wikimedia data for AI: a review of Wikimedia datasets for NLP tasks and AI-assisted editing
wikinlp-1.14
Poster
2410.08918
[ "" ]
https://huggingface.co/papers/2410.08918
2
2
0
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wikinlp-1.16.bib
https://aclanthology.org/2024.wikinlp-1.16/
@inproceedings{li-etal-2024-blocks, title = "Blocks Architecture ({B}lo{A}rk): Efficient, Cost-Effective, and Incremental Dataset Architecture for {W}ikipedia Revision History", author = "Li, Lingxi and Yao, Zonghai and Kwon, Sunjae and Yu, Hong", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.16", pages = "102--111", abstract = "Wikipedia (Wiki) is one of the most widely used and publicly available resources for natural language processing (NLP) applications. Wikipedia Revision History (WikiRevHist) shows the order in which edits were made to any Wiki page since its first modification. While the most up-to-date Wiki has been widely used as a training source, WikiRevHist can also be valuable resources for NLP applications. However, there are insufficient tools available to process WikiRevHist without having substantial computing resources, making additional customization, and spending extra time adapting others{'} works. Therefore, we report Blocks Architecture (BloArk), an efficiency-focused data processing architecture that reduces running time, computing resource requirements, and repeated works in processing WikiRevHist dataset. BloArk consists of three parts in its infrastructure: blocks, segments, and warehouses. On top of that, we build the core data processing pipeline: builder and modifier. The BloArk builder transforms the original WikiRevHist dataset from XML syntax into JSON Lines (JSONL) format for improving the concurrent and storage efficiency. The BloArk modifier takes previously-built warehouses to operate incremental modifications for improving the utilization of existing databases and reducing the cost of reusing others{'} works. In the end, BloArk can scale up easily in both processing Wikipedia Revision History and incrementally modifying existing dataset for downstream NLP use cases. The source code, documentations, and example usages are publicly available online and open-sourced under GPL-2.0 license.", }
Wikipedia (Wiki) is one of the most widely used and publicly available resources for natural language processing (NLP) applications. Wikipedia Revision History (WikiRevHist) shows the order in which edits were made to any Wiki page since its first modification. While the most up-to-date Wiki has been widely used as a training source, WikiRevHist can also be valuable resources for NLP applications. However, there are insufficient tools available to process WikiRevHist without having substantial computing resources, making additional customization, and spending extra time adapting others{'} works. Therefore, we report Blocks Architecture (BloArk), an efficiency-focused data processing architecture that reduces running time, computing resource requirements, and repeated works in processing WikiRevHist dataset. BloArk consists of three parts in its infrastructure: blocks, segments, and warehouses. On top of that, we build the core data processing pipeline: builder and modifier. The BloArk builder transforms the original WikiRevHist dataset from XML syntax into JSON Lines (JSONL) format for improving the concurrent and storage efficiency. The BloArk modifier takes previously-built warehouses to operate incremental modifications for improving the utilization of existing databases and reducing the cost of reusing others{'} works. In the end, BloArk can scale up easily in both processing Wikipedia Revision History and incrementally modifying existing dataset for downstream NLP use cases. The source code, documentations, and example usages are publicly available online and open-sourced under GPL-2.0 license.
[ "Li, Lingxi", "Yao, Zonghai", "Kwon, Sunjae", "Yu, Hong" ]
Blocks Architecture (BloArk): Efficient, Cost-Effective, and Incremental Dataset Architecture for Wikipedia Revision History
wikinlp-1.16
Poster
2410.04410
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wikinlp-1.17.bib
https://aclanthology.org/2024.wikinlp-1.17/
@inproceedings{jin-etal-2024-armada, title = "{ARMADA}: Attribute-Based Multimodal Data Augmentation", author = "Jin, Xiaomeng and Kim, Jeonghwan and Zhou, Yu and Huang, Kuan-Hao and Wu, Te-Lin and Peng, Nanyun and Ji, Heng", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.17", pages = "112--125", abstract = "In Multimodal Language Models (MLMs), the cost of manually annotating high-quality image-text pair data for fine-tuning and alignment is extremely high. While existing multimodal data augmentation frameworks propose ways to augment image-text pairs, they either suffer from semantic inconsistency between texts and images, or generate unrealistic images, causing knowledge gap with real world examples. To address these issues, we propose Attribute-based Multimodal Data Augmentation (ARMADA), a novel multimodal data augmentation method via knowledge-guided manipulation of visual attributes of the mentioned entities. Specifically, we extract entities and their visual attributes from the original text data, then search for alternative values for the visual attributes under the guidance of knowledge bases (KBs) and large language models (LLMs). We then utilize an image-editing model to edit the images with the extracted attributes. ARMADA is a novel multimodal data generation framework that: (i) extracts knowledge-grounded attributes from symbolic KBs for semantically consistent yet distinctive image-text pair generation, (ii) generates visually similar images of disparate categories using neighboring entities in the KB hierarchy, and (iii) uses the commonsense knowledge of LLMs to modulate auxiliary visual attributes such as backgrounds for more robust representation of original entities. Our empirical results over four downstream tasks demonstrate the efficacy of our framework to produce high-quality data and enhance the model performance. This also highlights the need to leverage external knowledge proxies for enhanced interpretability and real-world grounding.", }
In Multimodal Language Models (MLMs), the cost of manually annotating high-quality image-text pair data for fine-tuning and alignment is extremely high. While existing multimodal data augmentation frameworks propose ways to augment image-text pairs, they either suffer from semantic inconsistency between texts and images, or generate unrealistic images, causing knowledge gap with real world examples. To address these issues, we propose Attribute-based Multimodal Data Augmentation (ARMADA), a novel multimodal data augmentation method via knowledge-guided manipulation of visual attributes of the mentioned entities. Specifically, we extract entities and their visual attributes from the original text data, then search for alternative values for the visual attributes under the guidance of knowledge bases (KBs) and large language models (LLMs). We then utilize an image-editing model to edit the images with the extracted attributes. ARMADA is a novel multimodal data generation framework that: (i) extracts knowledge-grounded attributes from symbolic KBs for semantically consistent yet distinctive image-text pair generation, (ii) generates visually similar images of disparate categories using neighboring entities in the KB hierarchy, and (iii) uses the commonsense knowledge of LLMs to modulate auxiliary visual attributes such as backgrounds for more robust representation of original entities. Our empirical results over four downstream tasks demonstrate the efficacy of our framework to produce high-quality data and enhance the model performance. This also highlights the need to leverage external knowledge proxies for enhanced interpretability and real-world grounding.
[ "Jin, Xiaomeng", "Kim, Jeonghwan", "Zhou, Yu", "Huang, Kuan-Hao", "Wu, Te-Lin", "Peng, Nanyun", "Ji, Heng" ]
ARMADA: Attribute-Based Multimodal Data Augmentation
wikinlp-1.17
Poster
2408.10086
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wikinlp-1.18.bib
https://aclanthology.org/2024.wikinlp-1.18/
@inproceedings{li-etal-2024-summarization, title = "Summarization-Based Document {ID}s for Generative Retrieval with Language Models", author = "Li, Alan and Cheng, Daniel and Keung, Phillip and Kasai, Jungo and Smith, Noah A.", editor = "Lucie-Aim{\'e}e, Lucie and Fan, Angela and Gwadabe, Tajuddeen and Johnson, Isaac and Petroni, Fabio and van Strien, Daniel", booktitle = "Proceedings of the First Workshop on Advancing Natural Language Processing for Wikipedia", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wikinlp-1.18", pages = "126--135", abstract = "Generative retrieval (Wang et al., 2022; Tay et al., 2022) is a popular approach for end-to-end document retrieval that directly generates document identifiers given an input query. We introduce summarization-based document IDs, in which each document{'}s ID is composed of an extractive summary or abstractive keyphrases generated by a language model, rather than an integer ID sequence or bags of n-grams as proposed in past work. We find that abstractive, content-based IDs (ACID) and an ID based on the first 30 tokens are very effective in direct comparisons with previous approaches to ID creation. We show that using ACID improves top-10 and top-20 recall by 15.6{\%} and 14.4{\%} (relative) respectively versus the cluster-based integer ID baseline on the MSMARCO 100k retrieval task, and 9.8{\%} and 9.9{\%} respectively on the Wikipedia-based NQ 100k retrieval task. Our results demonstrate the effectiveness of human-readable, natural-language IDs created through summarization for generative retrieval. We also observed that extractive IDs outperformed abstractive IDs on Wikipedia articles in NQ but not the snippets in MSMARCO, which suggests that document characteristics affect generative retrieval performance.", }
Generative retrieval (Wang et al., 2022; Tay et al., 2022) is a popular approach for end-to-end document retrieval that directly generates document identifiers given an input query. We introduce summarization-based document IDs, in which each document{'}s ID is composed of an extractive summary or abstractive keyphrases generated by a language model, rather than an integer ID sequence or bags of n-grams as proposed in past work. We find that abstractive, content-based IDs (ACID) and an ID based on the first 30 tokens are very effective in direct comparisons with previous approaches to ID creation. We show that using ACID improves top-10 and top-20 recall by 15.6{\%} and 14.4{\%} (relative) respectively versus the cluster-based integer ID baseline on the MSMARCO 100k retrieval task, and 9.8{\%} and 9.9{\%} respectively on the Wikipedia-based NQ 100k retrieval task. Our results demonstrate the effectiveness of human-readable, natural-language IDs created through summarization for generative retrieval. We also observed that extractive IDs outperformed abstractive IDs on Wikipedia articles in NQ but not the snippets in MSMARCO, which suggests that document characteristics affect generative retrieval performance.
[ "Li, Alan", "Cheng, Daniel", "Keung, Phillip", "Kasai, Jungo", "Smith, Noah A." ]
Summarization-Based Document IDs for Generative Retrieval with Language Models
wikinlp-1.18
Poster
2311.08593
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.1.bib
https://aclanthology.org/2024.wmt-1.1/
@inproceedings{kocmi-etal-2024-findings, title = "Findings of the {WMT}24 General Machine Translation Shared Task: The {LLM} Era Is Here but {MT} Is Not Solved Yet", author = "Kocmi, Tom and Avramidis, Eleftherios and Bawden, Rachel and Bojar, Ond{\v{r}}ej and Dvorkovich, Anton and Federmann, Christian and Fishel, Mark and Freitag, Markus and Gowda, Thamme and Grundkiewicz, Roman and Haddow, Barry and Karpinska, Marzena and Koehn, Philipp and Marie, Benjamin and Monz, Christof and Murray, Kenton and Nagata, Masaaki and Popel, Martin and Popovi{\'c}, Maja and Shmatova, Mariya and Steingr{\'\i}msson, Steinth{\'o}r and Zouhar, Vil{\'e}m", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.1", pages = "1--46", abstract = "This overview paper presents the results of the General Machine Translation Task organised as part of the 2024 Conference on Machine Translation (WMT). In the general MT task, participants were asked to build machine translation systems for any of 11 language pairs, to be evaluated on test sets consisting of three to five different domains. In addition to participating systems, we collected translations from 8 different large language models (LLMs) and 4 online translation providers. We evaluate system outputs with professional human annotators using a new protocol called Error Span Annotations (ESA).", }
This overview paper presents the results of the General Machine Translation Task organised as part of the 2024 Conference on Machine Translation (WMT). In the general MT task, participants were asked to build machine translation systems for any of 11 language pairs, to be evaluated on test sets consisting of three to five different domains. In addition to participating systems, we collected translations from 8 different large language models (LLMs) and 4 online translation providers. We evaluate system outputs with professional human annotators using a new protocol called Error Span Annotations (ESA).
[ "Kocmi, Tom", "Avramidis, Eleftherios", "Bawden, Rachel", "Bojar, Ond{\\v{r}}ej", "Dvorkovich, Anton", "Federmann, Christian", "Fishel, Mark", "Freitag, Markus", "Gowda, Thamme", "Grundkiewicz, Roman", "Haddow, Barry", "Karpinska, Marzena", "Koehn, Philipp", "Marie, Benjamin", "Monz, Christof", "Murray, Kenton", "Nagata, Masaaki", "Popel, Martin", "Popovi{\\'c}, Maja", "Shmatova, Mariya", "Steingr{\\'\\i}msson, Steinth{\\'o}r", "Zouhar, Vil{\\'e}m" ]
Findings of the WMT24 General Machine Translation Shared Task: The LLM Era Is Here but MT Is Not Solved Yet
wmt-1.1
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.2.bib
https://aclanthology.org/2024.wmt-1.2/
@inproceedings{freitag-etal-2024-llms, title = "Are {LLM}s Breaking {MT} Metrics? Results of the {WMT}24 Metrics Shared Task", author = "Freitag, Markus and Mathur, Nitika and Deutsch, Daniel and Lo, Chi-Kiu and Avramidis, Eleftherios and Rei, Ricardo and Thompson, Brian and Blain, Frederic and Kocmi, Tom and Wang, Jiayi and Adelani, David Ifeoluwa and Buchicchio, Marianna and Zerva, Chrysoula and Lavie, Alon", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.2", pages = "47--81", abstract = "The WMT24 Metrics Shared Task evaluated the performance of automatic metrics for machine translation (MT), with a major focus on LLM-based translations that were generated as part of the WMT24 General MT Shared Task. As LLMs become increasingly popular in MT, it is crucial to determine whether existing evaluation metrics can accurately assess the output of these systems.To provide a robust benchmark for this evaluation, human assessments were collected using Multidimensional Quality Metrics (MQM), continuing the practice from recent years. Furthermore, building on the success of the previous year, a challenge set subtask was included, requiring participants to design contrastive test suites that specifically target a metric{'}s ability to identify and penalize different types of translation errors.Finally, the meta-evaluation procedure was refined to better reflect real-world usage of MT metrics, focusing on pairwise accuracy at both the system- and segment-levels.We present an extensive analysis on how well metrics perform on three language pairs: English to Spanish (Latin America), Japanese to Chinese, and English to German. The results strongly confirm the results reported last year, that fine-tuned neural metrics continue to perform well, even when used to evaluate LLM-based translation systems.", }
The WMT24 Metrics Shared Task evaluated the performance of automatic metrics for machine translation (MT), with a major focus on LLM-based translations that were generated as part of the WMT24 General MT Shared Task. As LLMs become increasingly popular in MT, it is crucial to determine whether existing evaluation metrics can accurately assess the output of these systems. To provide a robust benchmark for this evaluation, human assessments were collected using Multidimensional Quality Metrics (MQM), continuing the practice from recent years. Furthermore, building on the success of the previous year, a challenge set subtask was included, requiring participants to design contrastive test suites that specifically target a metric{'}s ability to identify and penalize different types of translation errors. Finally, the meta-evaluation procedure was refined to better reflect real-world usage of MT metrics, focusing on pairwise accuracy at both the system- and segment-levels. We present an extensive analysis on how well metrics perform on three language pairs: English to Spanish (Latin America), Japanese to Chinese, and English to German. The results strongly confirm the results reported last year, that fine-tuned neural metrics continue to perform well, even when used to evaluate LLM-based translation systems.
[ "Freitag, Markus", "Mathur, Nitika", "Deutsch, Daniel", "Lo, Chi-Kiu", "Avramidis, Eleftherios", "Rei, Ricardo", "Thompson, Brian", "Blain, Frederic", "Kocmi, Tom", "Wang, Jiayi", "Adelani, David Ifeoluwa", "Buchicchio, Marianna", "Zerva, Chrysoula", "Lavie, Alon" ]
Are LLMs Breaking MT Metrics? Results of the WMT24 Metrics Shared Task
wmt-1.2
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.3.bib
https://aclanthology.org/2024.wmt-1.3/
@inproceedings{zerva-etal-2024-findings, title = "Findings of the Quality Estimation Shared Task at {WMT} 2024: Are {LLM}s Closing the Gap in {QE}?", author = "Zerva, Chrysoula and Blain, Frederic and C. De Souza, Jos{\'e} G. and Kanojia, Diptesh and Deoghare, Sourabh and Guerreiro, Nuno M. and Attanasio, Giuseppe and Rei, Ricardo and Orasan, Constantin and Negri, Matteo and Turchi, Marco and Chatterjee, Rajen and Bhattacharyya, Pushpak and Freitag, Markus and Martins, Andr{\'e}", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.3", pages = "82--109", abstract = "We report the results of the WMT 2024 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. In this edition, we expanded our scope to assess the potential for quality estimates to help in the correction of translated outputs, hence including an automated post-editing (APE) direction. We publish new test sets with human annotations that target two directions: providing new Multidimensional Quality Metrics (MQM) annotations for three multi-domain language pairs (English to German, Spanish and Hindi) and extending the annotations on Indic languages providing direct assessments and post edits for translation from English into Hindi, Gujarati, Tamil and Telugu. We also perform a detailed analysis of the behaviour of different models with respect to different phenomena including gender bias, idiomatic language, and numerical and entity perturbations. We received submissions based both on traditional, encoder-based approaches as well as large language model (LLM) based ones.", }
We report the results of the WMT 2024 shared task on Quality Estimation, in which the challenge is to predict the quality of the output of neural machine translation systems at the word and sentence levels, without access to reference translations. In this edition, we expanded our scope to assess the potential for quality estimates to help in the correction of translated outputs, hence including an automated post-editing (APE) direction. We publish new test sets with human annotations that target two directions: providing new Multidimensional Quality Metrics (MQM) annotations for three multi-domain language pairs (English to German, Spanish and Hindi) and extending the annotations on Indic languages providing direct assessments and post edits for translation from English into Hindi, Gujarati, Tamil and Telugu. We also perform a detailed analysis of the behaviour of different models with respect to different phenomena including gender bias, idiomatic language, and numerical and entity perturbations. We received submissions based both on traditional, encoder-based approaches as well as large language model (LLM) based ones.
[ "Zerva, Chrysoula", "Blain, Frederic", "C. De Souza, Jos{\\'e} G.", "Kanojia, Diptesh", "Deoghare, Sourabh", "Guerreiro, Nuno M.", "Attanasio, Giuseppe", "Rei, Ricardo", "Orasan, Constantin", "Negri, Matteo", "Turchi, Marco", "Chatterjee, Rajen", "Bhattacharyya, Pushpak", "Freitag, Markus", "Martins, Andr{\\'e}" ]
Findings of the Quality Estimation Shared Task at WMT 2024: Are LLMs Closing the Gap in QE?
wmt-1.3
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.4.bib
https://aclanthology.org/2024.wmt-1.4/
@inproceedings{maillard-etal-2024-findings, title = "Findings of the {WMT} 2024 Shared Task of the Open Language Data Initiative", author = "Maillard, Jean and Burchell, Laurie and Anastasopoulos, Antonios and Federmann, Christian and Koehn, Philipp and Wang, Skyler", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.4", pages = "110--117", abstract = "We present the results of the WMT 2024 shared task of the Open Language Data Initiative. Participants were invited to contribute to the FLORES+ and MT Seed multilingual datasets, two foundational open resources that facilitate the organic expansion of language technology{'}s reach. We accepted ten submissions covering 16 languages, which extended the range of languages included in the datasets and improved the quality of existing data.", }
We present the results of the WMT 2024 shared task of the Open Language Data Initiative. Participants were invited to contribute to the FLORES+ and MT Seed multilingual datasets, two foundational open resources that facilitate the organic expansion of language technology{'}s reach. We accepted ten submissions covering 16 languages, which extended the range of languages included in the datasets and improved the quality of existing data.
[ "Maillard, Jean", "Burchell, Laurie", "Anastasopoulos, Antonios", "Federmann, Christian", "Koehn, Philipp", "Wang, Skyler" ]
Findings of the WMT 2024 Shared Task of the Open Language Data Initiative
wmt-1.4
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.5.bib
https://aclanthology.org/2024.wmt-1.5/
@inproceedings{higashiyama-2024-results, title = "Results of the {WAT}/{WMT} 2024 Shared Task on Patent Translation", author = "Higashiyama, Shohei", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.5", pages = "118--123", abstract = "This paper presents the results of the patent translation shared task at the 11th Workshop on Asian Translation and 9th Conference on Machine Translation. Two teams participated in this task, and their submitted translation results for one or more of the six language directions were automatically and manually evaluated. The evaluation results demonstrate the strong performance of large language model-based systems from both participants.", }
This paper presents the results of the patent translation shared task at the 11th Workshop on Asian Translation and 9th Conference on Machine Translation. Two teams participated in this task, and their submitted translation results for one or more of the six language directions were automatically and manually evaluated. The evaluation results demonstrate the strong performance of large language model-based systems from both participants.
[ "Higashiyama, Shohei" ]
Results of the WAT/WMT 2024 Shared Task on Patent Translation
wmt-1.5
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.6.bib
https://aclanthology.org/2024.wmt-1.6/
@inproceedings{neves-etal-2024-findings, title = "Findings of the {WMT} 2024 Biomedical Translation Shared Task: Test Sets on Abstract Level", author = "Neves, Mariana and Grozea, Cristian and Thomas, Philippe and Roller, Roland and Bawden, Rachel and N{\'e}v{\'e}ol, Aur{\'e}lie and Castle, Steffen and Bonato, Vanessa and Di Nunzio, Giorgio Maria and Vezzani, Federica and Vicente Navarro, Maika and Yeganova, Lana and Jimeno Yepes, Antonio", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.6", pages = "124--138", abstract = "We present the results of the ninth edition of the Biomedical Translation Task at WMT{'}24. We released test sets for six language pairs, namely, French, German, Italian, Portuguese, Russian, and Spanish, from and into English. Eachtest set consists of 50 abstracts from PubMed. Differently from previous years, we did not split abstracts into sentences. We received submissions from five teams, and for almost all language directions. We used a baseline/comparison system based on Llama 3.1 and share the source code at https://github.com/cgrozea/wmt24biomed-ref.", }
We present the results of the ninth edition of the Biomedical Translation Task at WMT{'}24. We released test sets for six language pairs, namely, French, German, Italian, Portuguese, Russian, and Spanish, from and into English. Each test set consists of 50 abstracts from PubMed. Differently from previous years, we did not split abstracts into sentences. We received submissions from five teams, and for almost all language directions. We used a baseline/comparison system based on Llama 3.1 and share the source code at https://github.com/cgrozea/wmt24biomed-ref.
[ "Neves, Mariana", "Grozea, Cristian", "Thomas, Philippe", "Roller, Rol", "", "Bawden, Rachel", "N{\\'e}v{\\'e}ol, Aur{\\'e}lie", "Castle, Steffen", "Bonato, Vanessa", "Di Nunzio, Giorgio Maria", "Vezzani, Federica", "Vicente Navarro, Maika", "Yeganova, Lana", "Jimeno Yepes, Antonio" ]
Findings of the WMT 2024 Biomedical Translation Shared Task: Test Sets on Abstract Level
wmt-1.6
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.7.bib
https://aclanthology.org/2024.wmt-1.7/
@inproceedings{larkin-etal-2024-mslc24, title = "{MSLC}24 Submissions to the General Machine Translation Task", author = "Larkin, Samuel and Lo, Chi-Kiu and Knowles, Rebecca", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.7", pages = "139--146", abstract = "The MSLC (Metric Score Landscape Challenge) submissions for English-German, English-Spanish, and Japanese-Chinese are constrained systems built using Transformer models for the purpose of better evaluating metric performance in the WMT24 Metrics Task. They are intended to be representative of the performance of systems that can be built relatively simply using constrained data and with minimal modifications to the translation training pipeline.", }
The MSLC (Metric Score Landscape Challenge) submissions for English-German, English-Spanish, and Japanese-Chinese are constrained systems built using Transformer models for the purpose of better evaluating metric performance in the WMT24 Metrics Task. They are intended to be representative of the performance of systems that can be built relatively simply using constrained data and with minimal modifications to the translation training pipeline.
[ "Larkin, Samuel", "Lo, Chi-Kiu", "Knowles, Rebecca" ]
MSLC24 Submissions to the General Machine Translation Task
wmt-1.7
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.8.bib
https://aclanthology.org/2024.wmt-1.8/
@inproceedings{zhang-2024-iol, title = "{IOL} Research Machine Translation Systems for {WMT}24 General Machine Translation Shared Task", author = "Zhang, Wenbo", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.8", pages = "147--154", abstract = "This paper illustrates the submission system of the IOL Research team for the WMT24 General Machine Translation shared task. We submitted translations for all translation directions in the general machine translation task. According to the official track categorization, our system qualifies as an open system due to the utilization of open-source resources in developing our machine translation model. With the growing prevalence of large language models (LLMs) as a conventional approach for managing diverse NLP tasks, we have developed our machine translation system by leveraging the capabilities of LLMs. Overall, we first performed continued pretraining using the open-source LLMs with tens of billions of parameters to enhance the model{'}s multilingual capabilities. Subsequently, we employed open-source Large Language Models, equipped with hundreds of billions of parameters, to generate synthetic data. This data was then blended with a modest quantity of additional open-source data for precise supervised fine-tuning. In the final stage, we also used ensemble learning to improve translation quality. Based on the official automated evaluation metrics, our system excelled by securing the top position in 8 out of the total 11 translation directions, spanning both open and constrained system categories.", }
This paper illustrates the submission system of the IOL Research team for the WMT24 General Machine Translation shared task. We submitted translations for all translation directions in the general machine translation task. According to the official track categorization, our system qualifies as an open system due to the utilization of open-source resources in developing our machine translation model. With the growing prevalence of large language models (LLMs) as a conventional approach for managing diverse NLP tasks, we have developed our machine translation system by leveraging the capabilities of LLMs. Overall, we first performed continued pretraining using the open-source LLMs with tens of billions of parameters to enhance the model{'}s multilingual capabilities. Subsequently, we employed open-source Large Language Models, equipped with hundreds of billions of parameters, to generate synthetic data. This data was then blended with a modest quantity of additional open-source data for precise supervised fine-tuning. In the final stage, we also used ensemble learning to improve translation quality. Based on the official automated evaluation metrics, our system excelled by securing the top position in 8 out of the total 11 translation directions, spanning both open and constrained system categories.
[ "Zhang, Wenbo" ]
IOL Research Machine Translation Systems for WMT24 General Machine Translation Shared Task
wmt-1.8
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.9.bib
https://aclanthology.org/2024.wmt-1.9/
@inproceedings{wu-etal-2024-choose, title = "Choose the Final Translation from {NMT} and {LLM} Hypotheses Using {MBR} Decoding: {HW}-{TSC}{'}s Submission to the {WMT}24 General {MT} Shared Task", author = "Wu, Zhanglin and Wei, Daimeng and Li, Zongyao and Shang, Hengchao and Guo, Jiaxin and Li, Shaojun and Rao, Zhiqiang and Luo, Yuanchang and Xie, Ning and Yang, Hao", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.9", pages = "155--164", abstract = "This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT24 general machine translation (MT) shared task, where we participate in the English to Chinese (en→zh) language pair. Similar to previous years{'} work, we use training strategies such as regularized dropout, bidirectional training, data diversification, forward translation, back translation, alternated training, curriculum learning, and transductive ensemble learning to train the neural machine translation (NMT) model based on the deep Transformer-big architecture. The difference is that we also use continue pre-training, supervised fine-tuning, and contrastive preference optimization to train the large language model (LLM) based MT model. By using Minimum Bayesian risk (MBR) decoding to select the final translation from multiple hypotheses for NMT and LLM-based MT models, our submission receives competitive results in the final evaluation.", }
This paper presents the submission of Huawei Translate Services Center (HW-TSC) to the WMT24 general machine translation (MT) shared task, where we participate in the English to Chinese (en→zh) language pair. Similar to previous years{'} work, we use training strategies such as regularized dropout, bidirectional training, data diversification, forward translation, back translation, alternated training, curriculum learning, and transductive ensemble learning to train the neural machine translation (NMT) model based on the deep Transformer-big architecture. The difference is that we also use continued pre-training, supervised fine-tuning, and contrastive preference optimization to train the large language model (LLM) based MT model. By using minimum Bayes risk (MBR) decoding to select the final translation from multiple hypotheses of the NMT and LLM-based MT models, our submission achieves competitive results in the final evaluation.
[ "Wu, Zhanglin", "Wei, Daimeng", "Li, Zongyao", "Shang, Hengchao", "Guo, Jiaxin", "Li, Shaojun", "Rao, Zhiqiang", "Luo, Yuanchang", "Xie, Ning", "Yang, Hao" ]
Choose the Final Translation from NMT and LLM Hypotheses Using MBR Decoding: HW-TSC's Submission to the WMT24 General MT Shared Task
wmt-1.9
Poster
2409.14800
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.10.bib
https://aclanthology.org/2024.wmt-1.10/
@inproceedings{dreano-etal-2024-cyclegn, title = "{C}ycle{GN}: A Cycle Consistent Approach for Neural Machine Translation", author = {Dreano, S{\"o}ren and Molloy, Derek and Murphy, Noel}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.10", pages = "165--175", abstract = "CycleGN is a fully self-supervised Neural Machine Translation framework relying on the Transformer architecture that does not require parallel data. Its approach is similar to a Discriminator-less CycleGAN, hence the {``}non-adversarial{''} name, specifically tailored for non-parallel text datasets. The foundational concept of our research posits that in an ideal scenario, retro-translations of generated translations should revert to the original source sentences. Consequently, a pair of models can be trained using a Cycle Consistency Loss (CCL) only, with one model translating in one direction and the second model in the opposite direction.In the context of this research, two sub-categories of non-parallel datasets are introduced. A {``}permuted{''} dataset is defined as a parallel dataset wherein the sentences of one language have been systematically rearranged. Consequently, this results in a non-parallel corpus where it is guaranteed that each sentence has a corresponding translation located at an unspecified index within the dataset. A {``}non-intersecting{''} dataset is a non-parallel dataset for which it is guaranteed that no sentence has an exact translation.Masked Language Modeling (MLM) is a pre-training strategy implemented in BERT, where a specified proportion of the input tokens are substituted with a unique {\$}mask{\$} token. The objective of the neural network under this paradigm is to accurately reconstruct the original sentence from this degraded input.In inference mode, Transformers are able to generate sentences without labels. Thus, the first step is to generate pseudo-labels in inference, that are then used as labels during training. However, the models consistently converge towards a trivial solution in which the input, the generated pseudo-labels and the output are identical, achieving an optimal outcome on the CCL function, registering a value of zero. CycleGN demonstrates how MLM pre-training can be leveraged to move away from this trivial path and perform actual text translation.As a contribution to the WMT24 challenge, this study explores the efficacy of the CycleGN architectural framework in learning translation tasks across eleven language pairs under the permuted condition and four under the non-intersecting condition. Moreover, two additional language pairs from the previous WMT edition were trained and the evaluations demonstrate the robust adaptability of CycleGN in learning translation tasks.", }
CycleGN is a fully self-supervised Neural Machine Translation framework relying on the Transformer architecture that does not require parallel data. Its approach is similar to a Discriminator-less CycleGAN, hence the {``}non-adversarial{''} name, specifically tailored for non-parallel text datasets. The foundational concept of our research posits that in an ideal scenario, retro-translations of generated translations should revert to the original source sentences. Consequently, a pair of models can be trained using a Cycle Consistency Loss (CCL) only, with one model translating in one direction and the second model in the opposite direction. In the context of this research, two sub-categories of non-parallel datasets are introduced. A {``}permuted{''} dataset is defined as a parallel dataset wherein the sentences of one language have been systematically rearranged. Consequently, this results in a non-parallel corpus where it is guaranteed that each sentence has a corresponding translation located at an unspecified index within the dataset. A {``}non-intersecting{''} dataset is a non-parallel dataset for which it is guaranteed that no sentence has an exact translation. Masked Language Modeling (MLM) is a pre-training strategy implemented in BERT, where a specified proportion of the input tokens are substituted with a unique {\$}mask{\$} token. The objective of the neural network under this paradigm is to accurately reconstruct the original sentence from this degraded input. In inference mode, Transformers are able to generate sentences without labels. Thus, the first step is to generate pseudo-labels in inference, that are then used as labels during training. However, the models consistently converge towards a trivial solution in which the input, the generated pseudo-labels and the output are identical, achieving an optimal outcome on the CCL function, registering a value of zero. CycleGN demonstrates how MLM pre-training can be leveraged to move away from this trivial path and perform actual text translation. As a contribution to the WMT24 challenge, this study explores the efficacy of the CycleGN architectural framework in learning translation tasks across eleven language pairs under the permuted condition and four under the non-intersecting condition. Moreover, two additional language pairs from the previous WMT edition were trained and the evaluations demonstrate the robust adaptability of CycleGN in learning translation tasks.
[ "Dreano, S{\\\"o}ren", "Molloy, Derek", "Murphy, Noel" ]
CycleGN: A Cycle Consistent Approach for Neural Machine Translation
wmt-1.10
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.11.bib
https://aclanthology.org/2024.wmt-1.11/
@inproceedings{tan-etal-2024-uva, title = "{U}v{A}-{MT}{'}s Participation in the {WMT}24 General Translation Shared Task", author = "Tan, Shaomu and Stap, David and Aycock, Seth and Monz, Christof and Wu, Di", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.11", pages = "176--184", abstract = "Fine-tuning Large Language Models (FT-LLMs) with parallel data has emerged as a promising paradigm in recent machine translation research. In this paper, we explore the effectiveness of FT-LLMs and compare them to traditional encoder-decoder Neural Machine Translation (NMT) systems under the WMT24 general MT shared task for English to Chinese direction. We implement several techniques, including Quality Estimation (QE) data filtering, supervised fine-tuning, and post-editing that integrate NMT systems with LLMs. We demonstrate that fine-tuning LLaMA2 on a high-quality but relatively small bitext dataset (100K) yields COMET results comparable to much smaller encoder-decoder NMT systems trained on over 22 million bitexts. However, this approach largely underperforms on surface-level metrics like BLEU and ChrF. We further control the data quality using the COMET-based quality estimation method. Our experiments show that 1) filtering low COMET scores largely improves encoder-decoder systems, but 2) no clear gains are observed for LLMs when further refining the fine-tuning set. Finally, we show that combining NMT systems with LLMs via post-editing generally yields the best performance for the WMT24 official test set.", }
Fine-tuning Large Language Models (FT-LLMs) with parallel data has emerged as a promising paradigm in recent machine translation research. In this paper, we explore the effectiveness of FT-LLMs and compare them to traditional encoder-decoder Neural Machine Translation (NMT) systems under the WMT24 general MT shared task for English to Chinese direction. We implement several techniques, including Quality Estimation (QE) data filtering, supervised fine-tuning, and post-editing that integrate NMT systems with LLMs. We demonstrate that fine-tuning LLaMA2 on a high-quality but relatively small bitext dataset (100K) yields COMET results comparable to much smaller encoder-decoder NMT systems trained on over 22 million bitexts. However, this approach largely underperforms on surface-level metrics like BLEU and ChrF. We further control the data quality using the COMET-based quality estimation method. Our experiments show that 1) filtering low COMET scores largely improves encoder-decoder systems, but 2) no clear gains are observed for LLMs when further refining the fine-tuning set. Finally, we show that combining NMT systems with LLMs via post-editing generally yields the best performance for the WMT24 official test set.
[ "Tan, Shaomu", "Stap, David", "Aycock, Seth", "Monz, Christof", "Wu, Di" ]
UvA-MT's Participation in the WMT24 General Translation Shared Task
wmt-1.11
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.12.bib
https://aclanthology.org/2024.wmt-1.12/
@inproceedings{rei-etal-2024-tower, title = "Tower v2: Unbabel-{IST} 2024 Submission for the General {MT} Shared Task", author = "Rei, Ricardo and Pombal, Jose and Guerreiro, Nuno M. and Alves, Jo{\~a}o and Martins, Pedro Henrique and Fernandes, Patrick and Wu, Helena and Vaz, Tania and Alves, Duarte and Farajian, Amin and Agrawal, Sweta and Farinhas, Antonio and C. De Souza, Jos{\'e} G. and Martins, Andr{\'e}", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.12", pages = "185--204", abstract = "In this work, we present Tower v2, an improved iteration of the state-of-the-art open-weight Tower models, and the backbone of our submission to the WMT24 General Translation shared task. Tower v2 introduces key improvements including expanded language coverage, enhanced data quality, and increased model capacity up to 70B parameters. Our final submission combines these advancements with quality-aware decoding strategies, selecting translations based on multiple translation quality signals. The resulting system demonstrates significant improvement over previous versions, outperforming closed commercial systems like GPT-4o, Claude 3.5, and DeepL even at a smaller 7B scale.", }
In this work, we present Tower v2, an improved iteration of the state-of-the-art open-weight Tower models, and the backbone of our submission to the WMT24 General Translation shared task. Tower v2 introduces key improvements including expanded language coverage, enhanced data quality, and increased model capacity up to 70B parameters. Our final submission combines these advancements with quality-aware decoding strategies, selecting translations based on multiple translation quality signals. The resulting system demonstrates significant improvement over previous versions, outperforming closed commercial systems like GPT-4o, Claude 3.5, and DeepL even at a smaller 7B scale.
[ "Rei, Ricardo", "Pombal, Jose", "Guerreiro, Nuno M.", "Alves, Jo{\\~a}o", "Martins, Pedro Henrique", "Fern", "es, Patrick", "Wu, Helena", "Vaz, Tania", "Alves, Duarte", "Farajian, Amin", "Agrawal, Sweta", "Farinhas, Antonio", "C. De Souza, Jos{\\'e} G.", "Martins, Andr{\\'e}" ]
Tower v2: Unbabel-IST 2024 Submission for the General MT Shared Task
wmt-1.12
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.13.bib
https://aclanthology.org/2024.wmt-1.13/
@inproceedings{mynka-mikhaylovskiy-2024-tsu, title = "{TSU} {HITS}{'}s Submissions to the {WMT} 2024 General Machine Translation Shared Task", author = "Mynka, Vladimir and Mikhaylovskiy, Nikolay", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.13", pages = "205--209", abstract = "This paper describes the TSU HITS team{'}s submission system for the WMT{'}24 general translation task. We focused on exploring the capabilities of discrete diffusion models for the English-to-{Russian, German, Czech, Spanish} translation tasks in the constrained track. Our submission system consists of a set of discrete diffusion models for each language pair. The main advance is using a separate length regression model to determine the length of the output sequence more precisely.", }
This paper describes the TSU HITS team{'}s submission system for the WMT{'}24 general translation task. We focused on exploring the capabilities of discrete diffusion models for the English-to-{Russian, German, Czech, Spanish} translation tasks in the constrained track. Our submission system consists of a set of discrete diffusion models for each language pair. The main advance is using a separate length regression model to determine the length of the output sequence more precisely.
[ "Mynka, Vladimir", "Mikhaylovskiy, Nikolay" ]
TSU HITS's Submissions to the WMT 2024 General Machine Translation Shared Task
wmt-1.13
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.14.bib
https://aclanthology.org/2024.wmt-1.14/
@inproceedings{kudo-etal-2024-document, title = "Document-level Translation with {LLM} Reranking: Team-{J} at {WMT} 2024 General Translation Task", author = "Kudo, Keito and Deguchi, Hiroyuki and Morishita, Makoto and Fujii, Ryo and Ito, Takumi and Ozaki, Shintaro and Natsumi, Koki and Sato, Kai and Yano, Kazuki and Takahashi, Ryosuke and Kimura, Subaru and Hara, Tomomasa and Sakai, Yusuke and Suzuki, Jun", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.14", pages = "210--226", abstract = "We participated in the constrained track for English-Japanese and Japanese-Chinese translations at the WMT 2024 General Machine Translation Task. Our approach was to generate a large number of sentence-level translation candidates and select the most probable translation using minimum Bayes risk (MBR) decoding and document-level large language model (LLM) re-ranking. We first generated hundreds of translation candidates from multiple translation models and retained the top 30 candidates using MBR decoding. In addition, we continually pre-trained LLMs on the target language corpora to leverage document-level information. We utilized LLMs to select the most probable sentence sequentially in context from the beginning of the document.", }
We participated in the constrained track for English-Japanese and Japanese-Chinese translations at the WMT 2024 General Machine Translation Task. Our approach was to generate a large number of sentence-level translation candidates and select the most probable translation using minimum Bayes risk (MBR) decoding and document-level large language model (LLM) re-ranking. We first generated hundreds of translation candidates from multiple translation models and retained the top 30 candidates using MBR decoding. In addition, we continually pre-trained LLMs on the target language corpora to leverage document-level information. We utilized LLMs to select the most probable sentence sequentially in context from the beginning of the document.
[ "Kudo, Keito", "Deguchi, Hiroyuki", "Morishita, Makoto", "Fujii, Ryo", "Ito, Takumi", "Ozaki, Shintaro", "Natsumi, Koki", "Sato, Kai", "Yano, Kazuki", "Takahashi, Ryosuke", "Kimura, Subaru", "Hara, Tomomasa", "Sakai, Yusuke", "Suzuki, Jun" ]
Document-level Translation with LLM Reranking: Team-J at WMT 2024 General Translation Task
wmt-1.14
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.15.bib
https://aclanthology.org/2024.wmt-1.15/
@inproceedings{zong-etal-2024-dlut, title = "{DLUT} and {GTCOM}{'}s Neural Machine Translation Systems for {WMT}24", author = "Zong, Hao and Bei, Chao and Liu, Huan and Yuan, Conghu and Chen, Wentao and Huang, Degen", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.15", pages = "227--231", abstract = "This paper presents the submission from Global Tone Communication Co., Ltd. and Dalian University of Technology for the WMT24 shared general Machine Translation (MT) task at the Conference on Empirical Methods in Natural Language Processing (EMNLP). Our participation encompasses two language pairs: English to Japanese and Japanese to Chinese. The systems are developed without particular constraints or requirements, facilitating extensive research in machine translation. We emphasize back-translation, utilize multilingual translation models, and apply fine-tuning strategies to improve performance. Additionally, we integrate both human-generated and machine-generated data to fine-tune our models, leading to enhanced translation accuracy. The automatic evaluation results indicate that our system ranks first in terms of BLEU score for the Japanese to Chinese translation.", }
This paper presents the submission from Global Tone Communication Co., Ltd. and Dalian University of Technology for the WMT24 shared general Machine Translation (MT) task at the Conference on Empirical Methods in Natural Language Processing (EMNLP). Our participation encompasses two language pairs: English to Japanese and Japanese to Chinese. The systems are developed without particular constraints or requirements, facilitating extensive research in machine translation. We emphasize back-translation, utilize multilingual translation models, and apply fine-tuning strategies to improve performance. Additionally, we integrate both human-generated and machine-generated data to fine-tune our models, leading to enhanced translation accuracy. The automatic evaluation results indicate that our system ranks first in terms of BLEU score for the Japanese to Chinese translation.
[ "Zong, Hao", "Bei, Chao", "Liu, Huan", "Yuan, Conghu", "Chen, Wentao", "Huang, Degen" ]
DLUT and GTCOM's Neural Machine Translation Systems for WMT24
wmt-1.15
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.16.bib
https://aclanthology.org/2024.wmt-1.16/
@inproceedings{hrabal-etal-2024-cuni, title = "{CUNI} at {WMT}24 General Translation Task: {LLM}s, ({Q}){L}o{RA}, {CPO} and Model Merging", author = "Hrabal, Miroslav and Jon, Josef and Popel, Martin and Luu, Nam and Semin, Danil and Bojar, Ond{\v{r}}ej", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.16", pages = "232--246", abstract = "This paper presents the contributions of Charles University teams to the WMT24 General Translation task (English to Czech, German and Russian, and Czech to Ukrainian), and the WMT24 Translation into Low-Resource Languages of Spain task.Our most elaborate submission, CUNI-MH for en2cs, is the result of fine-tuning Mistral 7B v0.1 for translation using a three-stage process: Supervised fine-tuning using QLoRA, Contrastive Preference Optimization, and merging of model checkpoints. We also describe the CUNI-GA, CUNI-Transformer and CUNI-DocTransformer submissions, which are based on our systems from the previous year.Our en2ru system CUNI-DS uses a similar first stage as CUNI-MH (QLoRA for en2cs) and follows with transferring to en2ru.For en2de (CUNI-NL), we experimented with a LLM-based speech translation system, to translate without the speech input.For the Translation into Low-Resource Languages of Spain task, we performed QLoRA fine-tuning of a large LLM on a small amount of synthetic (backtranslated) data.", }
This paper presents the contributions of Charles University teams to the WMT24 General Translation task (English to Czech, German and Russian, and Czech to Ukrainian), and the WMT24 Translation into Low-Resource Languages of Spain task. Our most elaborate submission, CUNI-MH for en2cs, is the result of fine-tuning Mistral 7B v0.1 for translation using a three-stage process: Supervised fine-tuning using QLoRA, Contrastive Preference Optimization, and merging of model checkpoints. We also describe the CUNI-GA, CUNI-Transformer and CUNI-DocTransformer submissions, which are based on our systems from the previous year. Our en2ru system CUNI-DS uses a similar first stage as CUNI-MH (QLoRA for en2cs) and follows with transferring to en2ru. For en2de (CUNI-NL), we experimented with an LLM-based speech translation system, which we used to translate without the speech input. For the Translation into Low-Resource Languages of Spain task, we performed QLoRA fine-tuning of a large LLM on a small amount of synthetic (backtranslated) data.
[ "Hrabal, Miroslav", "Jon, Josef", "Popel, Martin", "Luu, Nam", "Semin, Danil", "Bojar, Ond{\\v{r}}ej" ]
CUNI at WMT24 General Translation Task: LLMs, (Q)LoRA, CPO and Model Merging
wmt-1.16
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.17.bib
https://aclanthology.org/2024.wmt-1.17/
@inproceedings{elshin-etal-2024-general, title = "From General {LLM} to Translation: How We Dramatically Improve Translation Quality Using Human Evaluation Data for {LLM} Finetuning", author = "Elshin, Denis and Karpachev, Nikolay and Gruzdev, Boris and Golovanov, Ilya and Ivanov, Georgy and Antonov, Alexander and Skachkov, Nickolay and Latypova, Ekaterina and Layner, Vladimir and Enikeeva, Ekaterina and Popov, Dmitry and Chekashev, Anton and Negodin, Vladislav and Frantsuzova, Vera and Chernyshev, Alexander and Denisov, Kirill", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.17", pages = "247--252", abstract = "In this paper, we present the methodology employed by the NLP team at Yandex LLC for participating in the WMT 2024 General MT Translation track, focusing on English-to-Russian translation. Our approach involves training a YandexGPT LLM-based model for translation tasks using a multi-stage process to ensure high-quality and contextually accurate translations.Initially, we utilize a pre-trained model, trained on a large corpus of high-quality monolingual texts in various languages, crawled from various open sources, not limited to English and Russian. This extensive pre-training allows the model to capture a broad spectrum of linguistic nuances and structures. Following this, the model is fine-tuned on a substantial parallel corpus of high-quality texts collected from diverse open sources, including websites, books, and subtitles. These texts are meticulously aligned at both the sentence and paragraph levels to enhance the model{'}s contextual understanding and translation accuracy.In the subsequent stage, we employ p-tuning on an internal high-quality corpus of paragraph-aligned data. This step ensures that the model is finely adjusted to handle complex paragraph-level translations with greater fluency and coherence.Next, we apply the Contrastive Pretraining Objective (CPO) method, as described in the paper CPO, using a human-annotated translation corpus. This stage focuses on refining the model{'}s performance based on metrics evaluated at the paragraph level, emphasizing both the accuracy of the translation and the fluency of the resulting texts. The CPO method helps the model to better distinguish between subtle contextual differences, thereby improving translation quality.In the final stage, we address the importance of preserving the content structure in translations, which is crucial for the General MT test set. To achieve this, we introduce a synthetic corpus based on web pages and video subtitles, and use it during HE markup finetune training. This encourages the model to maintain the original text{'}s tag structure. This step ensures that the translated output retains the structural integrity of the source web pages, providing a seamless user experience.Our multi-stage approach, combining extensive pre-training, targeted fine-tuning, advanced p-tuning, and structure-preserving techniques, ensures that our model delivers high-quality, fluent, and structurally consistent translations suitable for practical applications and competitive benchmarks.", }
In this paper, we present the methodology employed by the NLP team at Yandex LLC for participating in the WMT 2024 General MT Translation track, focusing on English-to-Russian translation. Our approach involves training a YandexGPT LLM-based model for translation tasks using a multi-stage process to ensure high-quality and contextually accurate translations. Initially, we utilize a pre-trained model, trained on a large corpus of high-quality monolingual texts in various languages, crawled from various open sources, not limited to English and Russian. This extensive pre-training allows the model to capture a broad spectrum of linguistic nuances and structures. Following this, the model is fine-tuned on a substantial parallel corpus of high-quality texts collected from diverse open sources, including websites, books, and subtitles. These texts are meticulously aligned at both the sentence and paragraph levels to enhance the model{'}s contextual understanding and translation accuracy. In the subsequent stage, we employ p-tuning on an internal high-quality corpus of paragraph-aligned data. This step ensures that the model is finely adjusted to handle complex paragraph-level translations with greater fluency and coherence. Next, we apply the Contrastive Pretraining Objective (CPO) method, as described in the paper CPO, using a human-annotated translation corpus. This stage focuses on refining the model{'}s performance based on metrics evaluated at the paragraph level, emphasizing both the accuracy of the translation and the fluency of the resulting texts. The CPO method helps the model to better distinguish between subtle contextual differences, thereby improving translation quality. In the final stage, we address the importance of preserving the content structure in translations, which is crucial for the General MT test set. To achieve this, we introduce a synthetic corpus based on web pages and video subtitles, and use it during HE markup finetune training. This encourages the model to maintain the original text{'}s tag structure. This step ensures that the translated output retains the structural integrity of the source web pages, providing a seamless user experience. Our multi-stage approach, combining extensive pre-training, targeted fine-tuning, advanced p-tuning, and structure-preserving techniques, ensures that our model delivers high-quality, fluent, and structurally consistent translations suitable for practical applications and competitive benchmarks.
[ "Elshin, Denis", "Karpachev, Nikolay", "Gruzdev, Boris", "Golovanov, Ilya", "Ivanov, Georgy", "Antonov, Alex", "er", "Skachkov, Nickolay", "Latypova, Ekaterina", "Layner, Vladimir", "Enikeeva, Ekaterina", "Popov, Dmitry", "Chekashev, Anton", "Negodin, Vladislav", "Frantsuzova, Vera", "Chernyshev, Alex", "er", "Denisov, Kirill" ]
From General LLM to Translation: How We Dramatically Improve Translation Quality Using Human Evaluation Data for LLM Finetuning
wmt-1.17
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.18.bib
https://aclanthology.org/2024.wmt-1.18/
@inproceedings{jasonarson-etal-2024-cogs, title = "Cogs in a Machine, Doing What They{'}re Meant to Do {--} the {AMI} Submission to the {WMT}24 General Translation Task", author = "Jasonarson, Atli and Hafsteinsson, Hinrik and {\'A}rmannsson, Bjarki and Steingr{\'\i}msson, Steinth{\'o}r", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.18", pages = "253--262", abstract = "This paper presents the submission of the Arni Magnusson Institute{'}s team to the WMT24 General translation task. We work on the English→Icelandic translation direction. Our system comprises four translation models and a grammar correction model. For training our systems we carefully curate our datasets, aggressively filtering out sentence pairs that may detrimentally affect the quality of our systems output. Some of our data are collected from human translations and some are synthetically generated. A part of the synthetic data is generated using an LLM, and we find that it increases the translation capability of our system significantly.", }
This paper presents the submission of the Arni Magnusson Institute{'}s team to the WMT24 General translation task. We work on the English→Icelandic translation direction. Our system comprises four translation models and a grammar correction model. For training our systems, we carefully curate our datasets, aggressively filtering out sentence pairs that may detrimentally affect the quality of our systems{'} output. Some of our data are collected from human translations and some are synthetically generated. A part of the synthetic data is generated using an LLM, and we find that it increases the translation capability of our system significantly.
[ "Jasonarson, Atli", "Hafsteinsson, Hinrik", "{\\'A}rmannsson, Bjarki", "Steingr{\\'\\i}msson, Steinth{\\'o}r" ]
Cogs in a Machine, Doing What They're Meant to Do – the AMI Submission to the WMT24 General Translation Task
wmt-1.18
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.19.bib
https://aclanthology.org/2024.wmt-1.19/
@inproceedings{liao-etal-2024-ikun, title = "{IKUN} for {WMT}24 General {MT} Task: {LLM}s Are Here for Multilingual Machine Translation", author = "Liao, Baohao and Herold, Christian and Khadivi, Shahram and Monz, Christof", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.19", pages = "263--269", abstract = "This paper introduces two multilingual systems, IKUN and IKUN-C, developed for the general machine translation task in WMT24. IKUN and IKUN-C represent an open system and a constrained system, respectively, built on Llama-3-8b and Mistral-7B-v0.3. Both systems are designed to handle all 11 language directions using a single model. According to automatic evaluation metrics, IKUN-C achieved 6 first-place and 3 second-place finishes among all constrained systems, while IKUN secured 1 first-place and 2 second-place finishes across both open and constrained systems. These encouraging results suggest that large language models (LLMs) are nearing the level of proficiency required for effective multilingual machine translation. The systems are based on a two-stage approach: first, continuous pre-training on monolingual data in 10 languages, followed by fine-tuning on high-quality parallel data for 11 language directions. The primary difference between IKUN and IKUN-C lies in their monolingual pre-training strategy. IKUN-C is pre-trained using constrained monolingual data, whereas IKUN leverages monolingual data from the OSCAR dataset. In the second phase, both systems are fine-tuned on parallel data sourced from NTREX, Flores, and WMT16-23 for all 11 language pairs.", }
This paper introduces two multilingual systems, IKUN and IKUN-C, developed for the general machine translation task in WMT24. IKUN and IKUN-C represent an open system and a constrained system, respectively, built on Llama-3-8b and Mistral-7B-v0.3. Both systems are designed to handle all 11 language directions using a single model. According to automatic evaluation metrics, IKUN-C achieved 6 first-place and 3 second-place finishes among all constrained systems, while IKUN secured 1 first-place and 2 second-place finishes across both open and constrained systems. These encouraging results suggest that large language models (LLMs) are nearing the level of proficiency required for effective multilingual machine translation. The systems are based on a two-stage approach: first, continuous pre-training on monolingual data in 10 languages, followed by fine-tuning on high-quality parallel data for 11 language directions. The primary difference between IKUN and IKUN-C lies in their monolingual pre-training strategy. IKUN-C is pre-trained using constrained monolingual data, whereas IKUN leverages monolingual data from the OSCAR dataset. In the second phase, both systems are fine-tuned on parallel data sourced from NTREX, Flores, and WMT16-23 for all 11 language pairs.
[ "Liao, Baohao", "Herold, Christian", "Khadivi, Shahram", "Monz, Christof" ]
IKUN for WMT24 General MT Task: LLMs Are Here for Multilingual Machine Translation
wmt-1.19
Poster
2408.11512
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.20.bib
https://aclanthology.org/2024.wmt-1.20/
@inproceedings{kondo-etal-2024-nttsu, title = "{NTTSU} at {WMT}2024 General Translation Task", author = "Kondo, Minato and Fukuda, Ryo and Wang, Xiaotian and Chousa, Katsuki and Nishimura, Masato and Buma, Kosei and Kano, Takatomo and Utsuro, Takehito", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.20", pages = "270--279", abstract = "The NTTSU team{'}s submission leverages several large language models developed through a training procedure that includes continual pre-training and supervised fine-tuning. For paragraph-level translation, we generated synthetic paragraph-aligned data and utilized this data for training.In the task of translating Japanese to Chinese, we particularly focused on the speech domain translation. Specifically, we built Whisper models for Japanese automatic speech recognition (ASR). We used YODAS dataset for Whisper training. Since this data contained many noisy data pairs, we combined the Whisper outputs using ROVER for polishing the transcriptions. Furthermore, to enhance the robustness of the translation model against errors in the transcriptions, we performed data augmentation by forward translation from audio, using both ASR and base translation models.To select the best translation from multiple hypotheses of the models, we applied Minimum Bayes Risk decoding + reranking, incorporating scores such as COMET-QE, COMET, and cosine similarity by LaBSE.", }
The NTTSU team{'}s submission leverages several large language models developed through a training procedure that includes continual pre-training and supervised fine-tuning. For paragraph-level translation, we generated synthetic paragraph-aligned data and utilized this data for training. In the task of translating Japanese to Chinese, we particularly focused on speech-domain translation. Specifically, we built Whisper models for Japanese automatic speech recognition (ASR). We used the YODAS dataset for Whisper training. Since this data contained many noisy data pairs, we combined the Whisper outputs using ROVER for polishing the transcriptions. Furthermore, to enhance the robustness of the translation model against errors in the transcriptions, we performed data augmentation by forward translation from audio, using both ASR and base translation models. To select the best translation from multiple hypotheses of the models, we applied Minimum Bayes Risk decoding + reranking, incorporating scores such as COMET-QE, COMET, and cosine similarity by LaBSE.
[ "Kondo, Minato", "Fukuda, Ryo", "Wang, Xiaotian", "Chousa, Katsuki", "Nishimura, Masato", "Buma, Kosei", "Kano, Takatomo", "Utsuro, Takehito" ]
NTTSU at WMT2024 General Translation Task
wmt-1.20
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.21.bib
https://aclanthology.org/2024.wmt-1.21/
@inproceedings{li-etal-2024-scir, title = "{SCIR}-{MT}{'}s Submission for {WMT}24 General Machine Translation Task", author = "Li, Baohang and Ye, Zekai and Huang, Yichong and Feng, Xiaocheng and Qin, Bing", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.21", pages = "280--285", abstract = "This paper introduces the submission of SCIR research center of Harbin Institute of Technology participating in the WMT24 machine translation evaluation task of constrained track for English to Czech. Our approach involved a rigorous process of cleaning and deduplicating both monolingual and bilingual data, followed by a three-stage model training recipe. During the testing phase, we used the beam serach decoding method to generate a large number of candidate translations. Furthermore, we employed COMET-MBR decoding to identify optimal translations.", }
This paper introduces the submission of the SCIR research center of Harbin Institute of Technology to the constrained track of the WMT24 machine translation evaluation task for English to Czech. Our approach involved a rigorous process of cleaning and deduplicating both monolingual and bilingual data, followed by a three-stage model training recipe. During the testing phase, we used beam search decoding to generate a large number of candidate translations. Furthermore, we employed COMET-MBR decoding to identify the optimal translations.
[ "Li, Baohang", "Ye, Zekai", "Huang, Yichong", "Feng, Xiaocheng", "Qin, Bing" ]
SCIR-MT's Submission for WMT24 General Machine Translation Task
wmt-1.21
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.22.bib
https://aclanthology.org/2024.wmt-1.22/
@inproceedings{rikters-miwa-2024-aist, title = "{AIST} {AIRC} Systems for the {WMT} 2024 Shared Tasks", author = "Rikters, Matiss and Miwa, Makoto", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.22", pages = "286--291", abstract = "At WMT 2024 AIST AIRC participated in the General Machine Translation shared task and the Biomedical Translation task. We trained constrained track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel data, then performed iterative back-translation as well as parallel data distillation. We experimented with training baseline Transformer models, Mega models, and fine-tuning open-source T5 and Gemma model checkpoints using the filtered parallel data. Our primary submissions contain translations from ensembles of two Mega model checkpoints and our contrastive submissions are generated by our fine-tuned T5 model checkpoints.", }
At WMT 2024 AIST AIRC participated in the General Machine Translation shared task and the Biomedical Translation task. We trained constrained track models for translation between English, German, and Japanese. Before training the final models, we first filtered the parallel data, then performed iterative back-translation as well as parallel data distillation. We experimented with training baseline Transformer models, Mega models, and fine-tuning open-source T5 and Gemma model checkpoints using the filtered parallel data. Our primary submissions contain translations from ensembles of two Mega model checkpoints and our contrastive submissions are generated by our fine-tuned T5 model checkpoints.
[ "Rikters, Matiss", "Miwa, Makoto" ]
AIST AIRC Systems for the WMT 2024 Shared Tasks
wmt-1.22
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.23.bib
https://aclanthology.org/2024.wmt-1.23/
@inproceedings{avramidis-etal-2024-occiglot, title = "Occiglot at {WMT}24: {E}uropean Open-source Large Language Models Evaluated on Translation", author = {Avramidis, Eleftherios and Gr{\"u}tzner-Zahn, Annika and Brack, Manuel and Schramowski, Patrick and Ortiz Suarez, Pedro and Ostendorff, Malte and Barth, Fabio and Manakhimova, Shushen and Macketanz, Vivien and Rehm, Georg and Kersting, Kristian}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.23", pages = "292--298", abstract = "This document describes the submission of the very first version of the Occiglot open-source large language model to the General MT Shared Task of the 9th Conference of Machine Translation (WMT24). Occiglot is an open-source, community-based LLM based on Mistral-7B, which went through language-specific continual pre-training and subsequent instruction tuning, including instructions relevant to machine translation.We examine the automatic metric scores for translating the WMT24 test set and provide a detailed linguistically-motivated analysis.Despite Occiglot performing worse than many of the other system submissions, we observe that it performs better than Mistral7B, which has been based upon, which indicates the positive effect of the language specific continual-pretraining and instruction tuning. We see the submission of this very early version of the model as a motivation to unite community forces and pursue future LLM research on the translation task.", }
This document describes the submission of the very first version of the Occiglot open-source large language model to the General MT Shared Task of the Ninth Conference on Machine Translation (WMT24). Occiglot is an open-source, community-based LLM based on Mistral-7B, which went through language-specific continual pre-training and subsequent instruction tuning, including instructions relevant to machine translation. We examine the automatic metric scores for translating the WMT24 test set and provide a detailed linguistically-motivated analysis. Despite Occiglot performing worse than many of the other system submissions, we observe that it performs better than Mistral-7B, on which it is based, which indicates the positive effect of the language-specific continual pre-training and instruction tuning. We see the submission of this very early version of the model as a motivation to unite community forces and pursue future LLM research on the translation task.
[ "Avramidis, Eleftherios", "Gr{\\\"u}tzner-Zahn, Annika", "Brack, Manuel", "Schramowski, Patrick", "Ortiz Suarez, Pedro", "Ostendorff, Malte", "Barth, Fabio", "Manakhimova, Shushen", "Macketanz, Vivien", "Rehm, Georg", "Kersting, Kristian" ]
Occiglot at WMT24: European Open-source Large Language Models Evaluated on Translation
wmt-1.23
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.24.bib
https://aclanthology.org/2024.wmt-1.24/
@inproceedings{mukherjee-etal-2024-cost, title = "{C}o{ST} of breaking the {LLM}s", author = "Mukherjee, Ananya and Yadav, Saumitra and Shrivastava, Manish", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.24", pages = "299--306", abstract = "This paper presents an evaluation of 16 machine translation systems submitted to the Shared Task of the 9th Conference of Machine Translation (WMT24) for the English-Hindi (en-hi) language pair using our Complex Structures Test (CoST) suite. Aligning with this year{'}s test suite sub-task theme, {``}Help us break LLMs{''}, we curated a comprehensive test suite encompassing diverse datasets across various categories, including autobiography, poetry, legal, conversation, play, narration, technical, and mixed genres. Our evaluation reveals that all the systems struggle significantly with the archaic style of text like legal and technical writings or text with creative twist like conversation and poetry datasets, highlighting their weaknesses in handling complex linguistic structures and stylistic nuances inherent in these text types. Our evaluation identifies the strengths and limitations of the submitted models, pointing to specific areas where further research and development are needed to enhance their performance. Our test suite is available at \url{https://github.com/AnanyaCoder/CoST-WMT-24-Test-Suite-Task}.", }
This paper presents an evaluation of 16 machine translation systems submitted to the Shared Task of the 9th Conference of Machine Translation (WMT24) for the English-Hindi (en-hi) language pair using our Complex Structures Test (CoST) suite. Aligning with this year{'}s test suite sub-task theme, {``}Help us break LLMs{''}, we curated a comprehensive test suite encompassing diverse datasets across various categories, including autobiography, poetry, legal, conversation, play, narration, technical, and mixed genres. Our evaluation reveals that all the systems struggle significantly with the archaic style of text like legal and technical writings or text with creative twist like conversation and poetry datasets, highlighting their weaknesses in handling complex linguistic structures and stylistic nuances inherent in these text types. Our evaluation identifies the strengths and limitations of the submitted models, pointing to specific areas where further research and development are needed to enhance their performance. Our test suite is available at \url{https://github.com/AnanyaCoder/CoST-WMT-24-Test-Suite-Task}.
[ "Mukherjee, Ananya", "Yadav, Saumitra", "Shrivastava, Manish" ]
CoST of breaking the LLMs
wmt-1.24
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.25.bib
https://aclanthology.org/2024.wmt-1.25/
@inproceedings{dawkins-etal-2024-wmt24, title = "{WMT}24 Test Suite: Gender Resolution in Speaker-Listener Dialogue Roles", author = "Dawkins, Hillary and Nejadgholi, Isar and Lo, Chi-Kiu", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.25", pages = "307--326", abstract = "We assess the difficulty of gender resolution in literary-style dialogue settings and the influence of gender stereotypes. Instances of the test suite contain spoken dialogue interleaved with external meta-context about the characters and the manner of speaking. We find that character and manner stereotypes outside of the dialogue significantly impact the gender agreement of referents within the dialogue.", }
We assess the difficulty of gender resolution in literary-style dialogue settings and the influence of gender stereotypes. Instances of the test suite contain spoken dialogue interleaved with external meta-context about the characters and the manner of speaking. We find that character and manner stereotypes outside of the dialogue significantly impact the gender agreement of referents within the dialogue.
[ "Dawkins, Hillary", "Nejadgholi, Isar", "Lo, Chi-Kiu" ]
WMT24 Test Suite: Gender Resolution in Speaker-Listener Dialogue Roles
wmt-1.25
Poster
2411.06194
[ "https://github.com/hillary-dawkins/wmt24-gender-dialogue" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.26.bib
https://aclanthology.org/2024.wmt-1.26/
@inproceedings{friidhriksdottir-2024-genderqueer, title = "The {G}ender{Q}ueer Test Suite", author = "Friidhriksd{\'o}ttir, Steinunn Rut", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.26", pages = "327--340", abstract = "This paper introduces the GenderQueer Test Suite, an evaluation set for assessing machine translation (MT) systems{'} capabilities in handling gender-diverse and queer-inclusive content, focusing on English to Icelandic translation. The suite evaluates MT systems on various aspects of gender-inclusive translation, including pronoun and adjective agreement, LGBTQIA+ terminology, and the impact of explicit gender specifications.The 17 MT systems submitted to the WMT24 English-Icelandic track were evaluated. Key findings reveal significant performance differences between large language model-based systems (LLMs) and lightweight models in handling context for gender agreement. Challenges in translating the singular {``}they{''} were widespread, while most systems performed relatively well in translating LGBTQIA+ terminology. Accuracy in adjective gender agreement is quite low, with some models struggling particularly with the feminine form.This evaluation set contributes to the ongoing discussion about inclusive language in MT and natural language processing. By providing a tool for assessing MT systems{'} handling of gender-diverse content, it aims to enhance the inclusivity of language technology. The methodology and evaluation scripts are made available for adaptation to other languages, promoting further research in this area.", }
This paper introduces the GenderQueer Test Suite, an evaluation set for assessing machine translation (MT) systems{'} capabilities in handling gender-diverse and queer-inclusive content, focusing on English to Icelandic translation. The suite evaluates MT systems on various aspects of gender-inclusive translation, including pronoun and adjective agreement, LGBTQIA+ terminology, and the impact of explicit gender specifications. The 17 MT systems submitted to the WMT24 English-Icelandic track were evaluated. Key findings reveal significant performance differences between large language model-based systems (LLMs) and lightweight models in handling context for gender agreement. Challenges in translating the singular {``}they{''} were widespread, while most systems performed relatively well in translating LGBTQIA+ terminology. Accuracy in adjective gender agreement is quite low, with some models struggling particularly with the feminine form. This evaluation set contributes to the ongoing discussion about inclusive language in MT and natural language processing. By providing a tool for assessing MT systems{'} handling of gender-diverse content, it aims to enhance the inclusivity of language technology. The methodology and evaluation scripts are made available for adaptation to other languages, promoting further research in this area.
[ "Friidhriksd{\\'o}ttir, Steinunn Rut" ]
The GenderQueer Test Suite
wmt-1.26
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.27.bib
https://aclanthology.org/2024.wmt-1.27/
@inproceedings{bhattacharjee-etal-2024-domain, title = "Domain Dynamics: Evaluating Large Language Models in {E}nglish-{H}indi Translation", author = "Bhattacharjee, Soham and Gain, Baban and Ekbal, Asif", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.27", pages = "341--354", abstract = "Large Language Models (LLMs) have demonstrated impressive capabilities in machine translation, leveraging extensive pre-training on vast amounts of data. However, this generalist training often overlooks domain-specific nuances, leading to potential difficulties when translating specialized texts. In this study, we present a multi-domain test suite, collated from previously published datasets, designed to challenge and evaluate the translation abilities of LLMs. The test suite encompasses diverse domains such as judicial, education, literature (specifically religious texts), and noisy user-generated content from online product reviews and forums like Reddit. Each domain consists of approximately 250-300 sentences, carefully curated and randomized in the final compilation. This English-to-Hindi dataset aims to evaluate and expose the limitations of LLM-based translation systems, offering valuable insights into areas requiring further research and development. We have submitted the dataset to WMT24 Break the LLM subtask. In this paper, we present our findings. We have made the code and the dataset publicly available at \url{https://github.com/sohamb37/wmt24-test-suite}", }
Large Language Models (LLMs) have demonstrated impressive capabilities in machine translation, leveraging extensive pre-training on vast amounts of data. However, this generalist training often overlooks domain-specific nuances, leading to potential difficulties when translating specialized texts. In this study, we present a multi-domain test suite, collated from previously published datasets, designed to challenge and evaluate the translation abilities of LLMs. The test suite encompasses diverse domains such as judicial, education, literature (specifically religious texts), and noisy user-generated content from online product reviews and forums like Reddit. Each domain consists of approximately 250-300 sentences, carefully curated and randomized in the final compilation. This English-to-Hindi dataset aims to evaluate and expose the limitations of LLM-based translation systems, offering valuable insights into areas requiring further research and development. We have submitted the dataset to WMT24 Break the LLM subtask. In this paper, we present our findings. We have made the code and the dataset publicly available at \url{https://github.com/sohamb37/wmt24-test-suite}
[ "Bhattacharjee, Soham", "Gain, Baban", "Ekbal, Asif" ]
Domain Dynamics: Evaluating Large Language Models in English-Hindi Translation
wmt-1.27
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.28.bib
https://aclanthology.org/2024.wmt-1.28/
@inproceedings{manakhimova-etal-2024-investigating, title = "Investigating the Linguistic Performance of Large Language Models in Machine Translation", author = {Manakhimova, Shushen and Macketanz, Vivien and Avramidis, Eleftherios and Lapshinova-Koltunski, Ekaterina and Bagdasarov, Sergei and M{\"o}ller, Sebastian}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.28", pages = "355--371", abstract = "This paper summarizes the results of our test suite evaluation on 39 machine translation systems submitted at the Shared Task of the Ninth Conference of Machine Translation (WMT24). It offers a fine-grained linguistic evaluation of machine translation outputs for English{--}German and English{--}Russian, resulting from significant manual linguistic effort. Based on our results, LLMs are inferior to NMT in English{--}German, both in overall scores and when translating specific linguistic phenomena, such as punctuation, complex future verb tenses, and stripping. LLMs show quite a competitive performance in English-Russian, although top-performing systems might struggle with some cases of named entities and terminology, function words, mediopassive voice, and semantic roles. Additionally, some LLMs generate very verbose or empty outputs, posing challenges to the evaluation process.", }
This paper summarizes the results of our test suite evaluation on 39 machine translation systems submitted to the Shared Task of the Ninth Conference on Machine Translation (WMT24). It offers a fine-grained linguistic evaluation of machine translation outputs for English{--}German and English{--}Russian, resulting from significant manual linguistic effort. Based on our results, LLMs are inferior to NMT in English{--}German, both in overall scores and when translating specific linguistic phenomena, such as punctuation, complex future verb tenses, and stripping. LLMs show quite a competitive performance in English-Russian, although top-performing systems might struggle with some cases of named entities and terminology, function words, mediopassive voice, and semantic roles. Additionally, some LLMs generate very verbose or empty outputs, posing challenges to the evaluation process.
[ "Manakhimova, Shushen", "Macketanz, Vivien", "Avramidis, Eleftherios", "Lapshinova-Koltunski, Ekaterina", "Bagdasarov, Sergei", "M{\\\"o}ller, Sebastian" ]
Investigating the Linguistic Performance of Large Language Models in Machine Translation
wmt-1.28
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.29.bib
https://aclanthology.org/2024.wmt-1.29/
@inproceedings{rozanov-etal-2024-isochronometer, title = "{I}so{C}hrono{M}eter: A Simple and Effective Isochronic Translation Evaluation Metric", author = "Rozanov, Nikolai and Pankov, Vikentiy and Mukhutdinov, Dmitrii and Vypirailenko, Dima", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.29", pages = "372--379", abstract = "Machine translation (MT) has come a long way and is readily employed in production systems to serve millions of users daily. With the recent advances in generative AI, a new form of translation is becoming possible - video dubbing. This work motivates the importance of isochronic translation, especially in the context of automatic dubbing, and introduces {`}IsoChronoMeter{'} (ICM). ICM is a simple yet effective metric to measure isochrony of translations in a scalable and resource-efficient way without the need for gold data, based on state-of-the-art text-to-speech (TTS) duration predictors. We motivate IsoChronoMeter and demonstrate its effectiveness. Using ICM we demonstrate the shortcomings of state-of-the-art translation systems and show the need for new methods. We release the code at this URL: \url{https://github.com/braskai/isochronometer}.", }
Machine translation (MT) has come a long way and is readily employed in production systems to serve millions of users daily. With the recent advances in generative AI, a new form of translation is becoming possible - video dubbing. This work motivates the importance of isochronic translation, especially in the context of automatic dubbing, and introduces {`}IsoChronoMeter{'} (ICM). ICM is a simple yet effective metric to measure isochrony of translations in a scalable and resource-efficient way without the need for gold data, based on state-of-the-art text-to-speech (TTS) duration predictors. We motivate IsoChronoMeter and demonstrate its effectiveness. Using ICM we demonstrate the shortcomings of state-of-the-art translation systems and show the need for new methods. We release the code at this URL: \url{https://github.com/braskai/isochronometer}.
[ "Rozanov, Nikolai", "Pankov, Vikentiy", "Mukhutdinov, Dmitrii", "Vypirailenko, Dima" ]
IsoChronoMeter: A Simple and Effective Isochronic Translation Evaluation Metric
wmt-1.29
Poster
2410.11127
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.30.bib
https://aclanthology.org/2024.wmt-1.30/
@inproceedings{miceli-barone-sun-2024-test, title = "A Test Suite of Prompt Injection Attacks for {LLM}-based Machine Translation", author = "Miceli Barone, Antonio Valerio and Sun, Zhifan", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.30", pages = "380--450", abstract = "LLM-based NLP systems typically work by embedding their input data into prompt templates which contain instructions and/or in-context examples, creating queries which are submitted to a LLM, then parse the LLM response in order to generate the system outputs. Prompt Injection Attacks (PIAs) are a type of subversion of these systems where a malicious user crafts special inputs which interfer with the prompt templates, causing the LLM to respond in ways unintended by the system designer.Recently, Sun and Miceli-Barone (2024) proposed a class of PIAs against LLM-based machine translation. Specifically, the task is to translate questions from the TruthfulQA test suite, where an adversarial prompt is prepended to the questions, instructing the system to ignore the translation instruction and answer the questions instead.In this test suite we extend this approach to all the language pairs of the WMT 2024 General Machine Translation task. Moreover, we include additional attack formats in addition to the one originally studied.", }
LLM-based NLP systems typically work by embedding their input data into prompt templates which contain instructions and/or in-context examples, creating queries which are submitted to an LLM, then parsing the LLM response in order to generate the system outputs. Prompt Injection Attacks (PIAs) are a type of subversion of these systems where a malicious user crafts special inputs which interfere with the prompt templates, causing the LLM to respond in ways unintended by the system designer. Recently, Sun and Miceli-Barone (2024) proposed a class of PIAs against LLM-based machine translation. Specifically, the task is to translate questions from the TruthfulQA test suite, where an adversarial prompt is prepended to the questions, instructing the system to ignore the translation instruction and answer the questions instead. In this test suite we extend this approach to all the language pairs of the WMT 2024 General Machine Translation task. Moreover, we include additional attack formats in addition to the one originally studied.
[ "Miceli Barone, Antonio Valerio", "Sun, Zhifan" ]
A Test Suite of Prompt Injection Attacks for LLM-based Machine Translation
wmt-1.30
Poster
2410.05047
[ "https://github.com/Avmb/adversarial_MT_prompt_injection" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.31.bib
https://aclanthology.org/2024.wmt-1.31/
@inproceedings{armannsson-etal-2024-killing, title = "Killing Two Flies with One Stone: An Attempt to Break {LLM}s Using {E}nglish-{I}celandic Idioms and Proper Names", author = "{\'A}rmannsson, Bjarki and Hafsteinsson, Hinrik and Jasonarson, Atli and Steingrimsson, Steinthor", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.31", pages = "451--458", abstract = "The submission of the {\'A}rni Magn{\'u}sson Institute{'}s team to the WMT24 test suite subtask focuses on idiomatic expressions and proper names for the English→Icelandic translation direction. Intuitively and empirically, idioms and proper names are known to be a significant challenge for neural translation models. We create two different test suites. The first evaluates the competency of MT systems in translating common English idiomatic expressions, as well as testing whether systems can distinguish between those expressions and the same phrases when used in a literal context. The second test suite consists of place names that should be translated into their Icelandic exonyms (and correctly inflected) and pairs of Icelandic names that share a surface form between the male and female variants, so that incorrect translations impact meaning as well as readibility. The scores reported are relatively low, especially for idiomatic expressions and place names, and indicate considerable room for improvement.", }
The submission of the {\'A}rni Magn{\'u}sson Institute{'}s team to the WMT24 test suite subtask focuses on idiomatic expressions and proper names for the English→Icelandic translation direction. Intuitively and empirically, idioms and proper names are known to be a significant challenge for neural translation models. We create two different test suites. The first evaluates the competency of MT systems in translating common English idiomatic expressions, as well as testing whether systems can distinguish between those expressions and the same phrases when used in a literal context. The second test suite consists of place names that should be translated into their Icelandic exonyms (and correctly inflected) and pairs of Icelandic names that share a surface form between the male and female variants, so that incorrect translations impact meaning as well as readability. The scores reported are relatively low, especially for idiomatic expressions and place names, and indicate considerable room for improvement.
[ "{\\'A}rmannsson, Bjarki", "Hafsteinsson, Hinrik", "Jasonarson, Atli", "Steingrimsson, Steinthor" ]
Killing Two Flies with One Stone: An Attempt to Break LLMs Using English-Icelandic Idioms and Proper Names
wmt-1.31
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.32.bib
https://aclanthology.org/2024.wmt-1.32/
@inproceedings{anugraha-etal-2024-metametrics, title = "{M}eta{M}etrics-{MT}: Tuning Meta-Metrics for Machine Translation via Human Preference Calibration", author = "Anugraha, David and Kuwanto, Garry and Susanto, Lucky and Wijaya, Derry Tanti and Winata, Genta", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.32", pages = "459--469", abstract = "We present MetaMetrics-MT, an innovative metric designed to evaluate machine translation (MT) tasks by aligning closely with human preferences through Bayesian optimization with Gaussian Processes. MetaMetrics-MT enhances existing MT metrics by optimizing their correlation with human judgments. Our experiments on the WMT24 metric shared task dataset demonstrate that MetaMetrics-MT outperforms all existing baselines, setting a new benchmark for state-of-the-art performance in the reference-based setting. Furthermore, it achieves comparable results to leading metrics in the reference-free setting, offering greater efficiency.", }
We present MetaMetrics-MT, an innovative metric designed to evaluate machine translation (MT) tasks by aligning closely with human preferences through Bayesian optimization with Gaussian Processes. MetaMetrics-MT enhances existing MT metrics by optimizing their correlation with human judgments. Our experiments on the WMT24 metric shared task dataset demonstrate that MetaMetrics-MT outperforms all existing baselines, setting a new benchmark for state-of-the-art performance in the reference-based setting. Furthermore, it achieves comparable results to leading metrics in the reference-free setting, offering greater efficiency.
[ "Anugraha, David", "Kuwanto, Garry", "Susanto, Lucky", "Wijaya, Derry Tanti", "Winata, Genta" ]
MetaMetrics-MT: Tuning Meta-Metrics for Machine Translation via Human Preference Calibration
wmt-1.32
Poster
2411.00390
[ "https://github.com/meta-metrics/metametrics" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.33.bib
https://aclanthology.org/2024.wmt-1.33/
@inproceedings{mukherjee-shrivastava-2024-chrf, title = "chr{F}-{S}: Semantics Is All You Need", author = "Mukherjee, Ananya and Shrivastava, Manish", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.33", pages = "470--474", abstract = "Machine translation (MT) evaluation metrics like BLEU and chrF++ are widely used reference-based metrics that do not require training and are language-independent. However, these metrics primarily focus on n-gram matching and often overlook semantic depth and contextual understanding. To address this gap, we introduce chrF-S (Semantic chrF++), an enhanced metric that integrates sentence embeddings to evaluate translation quality more comprehensively. By combining traditional character and word n-gram analysis with semantic information derived from embeddings, chrF-S captures both syntactic accuracy and sentence-level semantics. This paper presents our contributions to the WMT24 shared metrics task, showcasing our participation and the development of chrF-S. We also demonstrate that, according to preliminary results on the leaderboard, our metric performs on par with other supervised and LLM-based metrics. By merging semantic insights with n-gram precision, chrF-S offers a significant enhancement in the assessment of machine-generated translations, advancing the field of MT evaluation. Our code and data will be made available at \url{https://github.com/AnanyaCoder/chrF-S}.", }
Machine translation (MT) evaluation metrics like BLEU and chrF++ are widely used reference-based metrics that do not require training and are language-independent. However, these metrics primarily focus on n-gram matching and often overlook semantic depth and contextual understanding. To address this gap, we introduce chrF-S (Semantic chrF++), an enhanced metric that integrates sentence embeddings to evaluate translation quality more comprehensively. By combining traditional character and word n-gram analysis with semantic information derived from embeddings, chrF-S captures both syntactic accuracy and sentence-level semantics. This paper presents our contributions to the WMT24 shared metrics task, showcasing our participation and the development of chrF-S. We also demonstrate that, according to preliminary results on the leaderboard, our metric performs on par with other supervised and LLM-based metrics. By merging semantic insights with n-gram precision, chrF-S offers a significant enhancement in the assessment of machine-generated translations, advancing the field of MT evaluation. Our code and data will be made available at \url{https://github.com/AnanyaCoder/chrF-S}.
[ "Mukherjee, Ananya", "Shrivastava, Manish" ]
chrF-S: Semantics Is All You Need
wmt-1.33
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.34.bib
https://aclanthology.org/2024.wmt-1.34/
@inproceedings{knowles-etal-2024-mslc24, title = "{MSLC}24: Further Challenges for Metrics on a Wide Landscape of Translation Quality", author = "Knowles, Rebecca and Larkin, Samuel and Lo, Chi-Kiu", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.34", pages = "475--491", abstract = "In this second edition of the Metric Score Landscape Challenge (MSLC), we examine how automatic metrics for machine translation perform on a wide variety of machine translation output, ranging from very low quality systems to the types of high-quality systems submitted to the General MT shared task at WMT. We also explore metric results on specific types of data, such as empty strings, wrong- or mixed-language text, and more. We raise several alarms about inconsistencies in metric scores, some of which can be resolved by increasingly explicit instructions for metric use, while others highlight technical flaws.", }
In this second edition of the Metric Score Landscape Challenge (MSLC), we examine how automatic metrics for machine translation perform on a wide variety of machine translation output, ranging from very low quality systems to the types of high-quality systems submitted to the General MT shared task at WMT. We also explore metric results on specific types of data, such as empty strings, wrong- or mixed-language text, and more. We raise several alarms about inconsistencies in metric scores, some of which can be resolved by increasingly explicit instructions for metric use, while others highlight technical flaws.
[ "Knowles, Rebecca", "Larkin, Samuel", "Lo, Chi-Kiu" ]
MSLC24: Further Challenges for Metrics on a Wide Landscape of Translation Quality
wmt-1.34
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.35.bib
https://aclanthology.org/2024.wmt-1.35/
@inproceedings{juraska-etal-2024-metricx, title = "{M}etric{X}-24: The {G}oogle Submission to the {WMT} 2024 Metrics Shared Task", author = "Juraska, Juraj and Deutsch, Daniel and Finkelstein, Mara and Freitag, Markus", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.35", pages = "492--504", abstract = "In this paper, we present the MetricX-24 submissions to the WMT24 Metrics Shared Task and provide details on the improvements we made over the previous version of MetricX. Our primary submission is a hybrid reference-based/-free metric, which can score a translation irrespective of whether it is given the source segment, the reference, or both. The metric is trained on previous WMT data in a two-stage fashion, first on the DA ratings only, then on a mixture of MQM and DA ratings. The training set in both stages is augmented with synthetic examples that we created to make the metric more robust to several common failure modes, such as fluent but unrelated translation, or undertranslation. We demonstrate the benefits of the individual modifications via an ablation study, and show a significant performance increase over MetricX-23 on the WMT23 MQM ratings, as well as our new synthetic challenge set.", }
In this paper, we present the MetricX-24 submissions to the WMT24 Metrics Shared Task and provide details on the improvements we made over the previous version of MetricX. Our primary submission is a hybrid reference-based/-free metric, which can score a translation irrespective of whether it is given the source segment, the reference, or both. The metric is trained on previous WMT data in a two-stage fashion, first on the DA ratings only, then on a mixture of MQM and DA ratings. The training set in both stages is augmented with synthetic examples that we created to make the metric more robust to several common failure modes, such as fluent but unrelated translation, or undertranslation. We demonstrate the benefits of the individual modifications via an ablation study, and show a significant performance increase over MetricX-23 on the WMT23 MQM ratings, as well as our new synthetic challenge set.
[ "Juraska, Juraj", "Deutsch, Daniel", "Finkelstein, Mara", "Freitag, Markus" ]
MetricX-24: The Google Submission to the WMT 2024 Metrics Shared Task
wmt-1.35
Poster
2410.03983
[ "https://github.com/google-research/metricx" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.36.bib
https://aclanthology.org/2024.wmt-1.36/
@inproceedings{wang-etal-2024-evaluating, title = "Evaluating {WMT} 2024 Metrics Shared Task Submissions on {A}fri{MTE} (the {A}frican Challenge Set)", author = "Wang, Jiayi and Adelani, David Ifeoluwa and Stenetorp, Pontus", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.36", pages = "505--516", abstract = "The AfriMTE challenge set from WMT 2024 Metrics Shared Task aims to evaluate the capabilities of evaluation metrics for machine translation on low-resource African languages, which primarily assesses cross-lingual transfer learning and generalization of machine translation metrics across a wide range of under-resourced languages. In this paper, we analyze the submissions to WMT 2024 Metrics Shared Task. Our findings indicate that language-specific adaptation, cross-lingual transfer learning, and larger language model sizes contribute significantly to improved metric performance. Moreover, supervised models with relatively moderate sizes demonstrate robust performance, when augmented with specific language adaptation for low-resource African languages. Finally, submissions show promising results for language pairs including Darija-French, English-Egyptian Arabic, and English-Swahili. However, significant challenges persist for extremely low-resource languages such as English-Luo and English-Twi, highlighting areas for future research and improvement in machine translation metrics for African languages.", }
The AfriMTE challenge set from the WMT 2024 Metrics Shared Task aims to evaluate the capabilities of evaluation metrics for machine translation on low-resource African languages, which primarily assesses cross-lingual transfer learning and generalization of machine translation metrics across a wide range of under-resourced languages. In this paper, we analyze the submissions to the WMT 2024 Metrics Shared Task. Our findings indicate that language-specific adaptation, cross-lingual transfer learning, and larger language model sizes contribute significantly to improved metric performance. Moreover, supervised models with relatively moderate sizes demonstrate robust performance when augmented with specific language adaptation for low-resource African languages. Finally, submissions show promising results for language pairs including Darija-French, English-Egyptian Arabic, and English-Swahili. However, significant challenges persist for extremely low-resource languages such as English-Luo and English-Twi, highlighting areas for future research and improvement in machine translation metrics for African languages.
[ "Wang, Jiayi", "Adelani, David Ifeoluwa", "Stenetorp, Pontus" ]
Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)
wmt-1.36
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.37.bib
https://aclanthology.org/2024.wmt-1.37/
@inproceedings{avramidis-etal-2024-machine, title = "Machine Translation Metrics Are Better in Evaluating Linguistic Errors on {LLM}s than on Encoder-Decoder Systems", author = {Avramidis, Eleftherios and Manakhimova, Shushen and Macketanz, Vivien and M{\"o}ller, Sebastian}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.37", pages = "517--528", abstract = "This year{'}s MT metrics challenge set submission by DFKI expands on previous years{'} linguistically motivated challenge sets. It includes 137,000 items extracted from 100 MT systems for the two language directions (English to German, English to Russian), covering more than 100 linguistically motivated phenomena organized into 14 linguistic categories. The metrics with the statistically significant best performance in our linguistically motivated analysis are MetricX-24-Hybrid and MetricX-24 for English to German, and MetricX-24 for English to Russian. Metametrics and XCOMET are in the next ranking positions in both language pairs. Metrics are more accurate in detecting linguistic errors in translations by large language models (LLMs) than in translations based on the encoder-decoder neural machine translation (NMT) architecture. Some of the most difficult phenomena for the metrics to score are the transitive past progressive, multiple connectors, and the ditransitive simple future I for English to German, and pseudogapping, contact clauses, and cleft sentences for English to Russian. Despite its overall low performance, the LLM-based metric Gemba performs best in scoring German negation errors.", }
This year{'}s MT metrics challenge set submission by DFKI expands on previous years{'} linguistically motivated challenge sets. It includes 137,000 items extracted from 100 MT systems for the two language directions (English to German, English to Russian), covering more than 100 linguistically motivated phenomena organized into 14 linguistic categories. The metrics with the statistically significant best performance in our linguistically motivated analysis are MetricX-24-Hybrid and MetricX-24 for English to German, and MetricX-24 for English to Russian. Metametrics and XCOMET are in the next ranking positions in both language pairs. Metrics are more accurate in detecting linguistic errors in translations by large language models (LLMs) than in translations based on the encoder-decoder neural machine translation (NMT) architecture. Some of the most difficult phenomena for the metrics to score are the transitive past progressive, multiple connectors, and the ditransitive simple future I for English to German, and pseudogapping, contact clauses, and cleft sentences for English to Russian. Despite its overall low performance, the LLM-based metric Gemba performs best in scoring German negation errors.
[ "Avramidis, Eleftherios", "Manakhimova, Shushen", "Macketanz, Vivien", "M{\\\"o}ller, Sebastian" ]
Machine Translation Metrics Are Better in Evaluating Linguistic Errors on LLMs than on Encoder-Decoder Systems
wmt-1.37
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.38.bib
https://aclanthology.org/2024.wmt-1.38/
@inproceedings{sato-etal-2024-tmu, title = "{TMU}-{HIT}{'}s Submission for the {WMT}24 Quality Estimation Shared Task: Is {GPT}-4 a Good Evaluator for Machine Translation?", author = "Sato, Ayako and Nakajima, Kyotaro and Kim, Hwichan and Chen, Zhousi and Komachi, Mamoru", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.38", pages = "529--534", abstract = "In machine translation quality estimation (QE), translation quality is evaluated automatically without the need for reference translations. This paper describes our contribution to the sentence-level subtask of Task 1 at the Ninth Machine Translation Conference (WMT24), which predicts quality scores for neural MT outputs without reference translations. We fine-tune GPT-4o mini, a large-scale language model (LLM), with limited data for QE.We report results for the direct assessment (DA) method for four language pairs: English-Gujarati (En-Gu), English-Hindi (En-Hi), English-Tamil (En-Ta), and English-Telugu (En-Te).Experiments under zero-shot, few-shot prompting, and fine-tuning settings revealed significantly low performance in the zero-shot, while fine-tuning achieved accuracy comparable to last year{'}s best scores. Our system demonstrated the effectiveness of this approach in low-resource language QE, securing 1st place in both En-Gu and En-Hi, and 4th place in En-Ta and En-Te.", }
In machine translation quality estimation (QE), translation quality is evaluated automatically without the need for reference translations. This paper describes our contribution to the sentence-level subtask of Task 1 at the Ninth Machine Translation Conference (WMT24), which predicts quality scores for neural MT outputs without reference translations. We fine-tune GPT-4o mini, a large-scale language model (LLM), with limited data for QE. We report results for the direct assessment (DA) method for four language pairs: English-Gujarati (En-Gu), English-Hindi (En-Hi), English-Tamil (En-Ta), and English-Telugu (En-Te). Experiments under zero-shot, few-shot prompting, and fine-tuning settings revealed significantly low performance in the zero-shot setting, while fine-tuning achieved accuracy comparable to last year{'}s best scores. Our system demonstrated the effectiveness of this approach in low-resource language QE, securing 1st place in both En-Gu and En-Hi, and 4th place in En-Ta and En-Te.
[ "Sato, Ayako", "Nakajima, Kyotaro", "Kim, Hwichan", "Chen, Zhousi", "Komachi, Mamoru" ]
TMU-HIT's Submission for the WMT24 Quality Estimation Shared Task: Is GPT-4 a Good Evaluator for Machine Translation?
wmt-1.38
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.39.bib
https://aclanthology.org/2024.wmt-1.39/
@inproceedings{shan-etal-2024-hw, title = "{HW}-{TSC} 2024 Submission for the Quality Estimation Shared Task", author = "Shan, Weiqiao and Zhu, Ming and Li, Yuang and Piao, Mengyao and Zhao, Xiaofeng and Su, Chang and Zhang, Min and Yang, Hao and Jiang, Yanfei", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.39", pages = "535--540", abstract = "Quality estimation (QE) is a crucial technique for evaluating the quality of machine translations without the need for reference translations. This paper focuses on Huawei Translation Services Center{'}s (HW-TSC{'}s) submission to the sentence-level QE shared task, named LLMs-enhanced-CrossQE. Our system builds upon the CrossQE architecture from our submission from last year, which consists of a multilingual base model and a task-specific downstream layer. The model input is a concatenation of the source and the translated sentences. To enhance performance, we fine-tuned and ensembled multiple base models, including XLM-R, InfoXLM, RemBERT, and CometKiwi. Specifically, we employed two pseudo-data generation methods: 1) a diverse pseudo-data generation method based on the corruption-based data augmentation technique introduced last year, and 2) a pseudo-data generation method that simulates machine translation errors using large language models (LLMs). Our results demonstrate that the system achieves outstanding performance on sentence-level QE test sets.", }
Quality estimation (QE) is a crucial technique for evaluating the quality of machine translations without the need for reference translations. This paper focuses on Huawei Translation Services Center{'}s (HW-TSC{'}s) submission to the sentence-level QE shared task, named LLMs-enhanced-CrossQE. Our system builds upon the CrossQE architecture from our submission from last year, which consists of a multilingual base model and a task-specific downstream layer. The model input is a concatenation of the source and the translated sentences. To enhance performance, we fine-tuned and ensembled multiple base models, including XLM-R, InfoXLM, RemBERT, and CometKiwi. Specifically, we employed two pseudo-data generation methods: 1) a diverse pseudo-data generation method based on the corruption-based data augmentation technique introduced last year, and 2) a pseudo-data generation method that simulates machine translation errors using large language models (LLMs). Our results demonstrate that the system achieves outstanding performance on sentence-level QE test sets.
[ "Shan, Weiqiao", "Zhu, Ming", "Li, Yuang", "Piao, Mengyao", "Zhao, Xiaofeng", "Su, Chang", "Zhang, Min", "Yang, Hao", "Jiang, Yanfei" ]
HW-TSC 2024 Submission for the Quality Estimation Shared Task
wmt-1.39
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.40.bib
https://aclanthology.org/2024.wmt-1.40/
@inproceedings{yu-etal-2024-hw, title = "{HW}-{TSC}{'}s Participation in the {WMT} 2024 {QEAPE} Task", author = "Yu, Jiawei and Zhao, Xiaofeng and Zhang, Min and Yanqing, Zhao and Li, Yuang and Chang, Su and Qiao, Xiaosong and Miaomiao, Ma and Yang, Hao", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.40", pages = "541--546", abstract = "The paper presents the submission by HW-TSC in the WMT 2024 Quality-informed Automatic Post Editing (QEAPE) shared task for the English-Hindi (En-Hi) and English-Tamil (En-Ta) language pair. We use LLM for En-Hi and Transformer for EN-ta respectively. For LLM, we first continue pertrain the Llama3, and then use the real APE data to SFT the pre-trained LLM. As for the transformer in En-Ta, we first pre-train a Machine Translation (MT) model by utilizing MT data collected from the web. Then, we fine-tune the model by employing real APE data.We also use the data augmentation method to enhance our model. Specifically, we incorporate candidate translations obtained from an external Machine Translation (MT) system.Given that APE systems tend to exhibit a tendency of {`}over-correction{'}, we employ a sentence-level Quality Estimation (QE) system to select the final output, deciding between the original translation and the corresponding output generated by the APE model. Our experiments demonstrate that pre-trained MT models are effective when being fine-tuned with the APE corpus of a limited size, and the performance can be further improved with external MT augmentation. our approach improves the HTER by -15.99 points and -0.47 points on En-Hi and En-Ta, respectively.", }
The paper presents the submission by HW-TSC in the WMT 2024 Quality-informed Automatic Post Editing (QEAPE) shared task for the English-Hindi (En-Hi) and English-Tamil (En-Ta) language pairs. We use an LLM for En-Hi and a Transformer for En-Ta, respectively. For the LLM, we first continue pre-training Llama3, and then use the real APE data to SFT the pre-trained LLM. As for the Transformer in En-Ta, we first pre-train a Machine Translation (MT) model by utilizing MT data collected from the web. Then, we fine-tune the model by employing real APE data. We also use a data augmentation method to enhance our model. Specifically, we incorporate candidate translations obtained from an external Machine Translation (MT) system. Given that APE systems tend to exhibit a tendency of {`}over-correction{'}, we employ a sentence-level Quality Estimation (QE) system to select the final output, deciding between the original translation and the corresponding output generated by the APE model. Our experiments demonstrate that pre-trained MT models are effective when fine-tuned with an APE corpus of limited size, and that performance can be further improved with external MT augmentation. Our approach improves the HTER by -15.99 points and -0.47 points on En-Hi and En-Ta, respectively.
[ "Yu, Jiawei", "Zhao, Xiaofeng", "Zhang, Min", "Yanqing, Zhao", "Li, Yuang", "Chang, Su", "Qiao, Xiaosong", "Miaomiao, Ma", "Yang, Hao" ]
HW-TSC's Participation in the WMT 2024 QEAPE Task
wmt-1.40
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.41.bib
https://aclanthology.org/2024.wmt-1.41/
@inproceedings{perez-ortiz-etal-2024-expanding, title = "Expanding the {FLORES}+ Multilingual Benchmark with Translations for {A}ragonese, Aranese, {A}sturian, and {V}alencian", author = "Perez-Ortiz, Juan Antonio and S{\'a}nchez-Mart{\'\i}nez, Felipe and S{\'a}nchez-Cartagena, V{\'\i}ctor M. and Espl{\`a}-Gomis, Miquel and Galiano Jimenez, Aaron and Oliver, Antoni and Avent{\'\i}n-Boya, Claudi and Pardos, Alejandro and Vald{\'e}s, Cristina and Sans Socasau, Jus{\`e}p Lo{\'\i}s and Mart{\'\i}nez, Juan Pablo", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.41", pages = "547--555", abstract = "In this paper, we describe the process of creating the FLORES+ datasets for several Romance languages spoken in Spain, namely Aragonese, Aranese, Asturian, and Valencian. The Aragonese and Aranese datasets are entirely new additions to the FLORES+ multilingual benchmark. An initial version of the Asturian dataset was already available in FLORES+, and our work focused on a thorough revision. Similarly, FLORES+ included a Catalan dataset, which we adapted to the Valencian variety spoken in the Valencian Community. The development of the Aragonese, Aranese, and revised Asturian FLORES+ datasets was undertaken as part of a WMT24 shared task on translation into low-resource languages of Spain.", }
In this paper, we describe the process of creating the FLORES+ datasets for several Romance languages spoken in Spain, namely Aragonese, Aranese, Asturian, and Valencian. The Aragonese and Aranese datasets are entirely new additions to the FLORES+ multilingual benchmark. An initial version of the Asturian dataset was already available in FLORES+, and our work focused on a thorough revision. Similarly, FLORES+ included a Catalan dataset, which we adapted to the Valencian variety spoken in the Valencian Community. The development of the Aragonese, Aranese, and revised Asturian FLORES+ datasets was undertaken as part of a WMT24 shared task on translation into low-resource languages of Spain.
[ "Perez-Ortiz, Juan Antonio", "S{\\'a}nchez-Mart{\\'\\i}nez, Felipe", "S{\\'a}nchez-Cartagena, V{\\'\\i}ctor M.", "Espl{\\`a}-Gomis, Miquel", "Galiano Jimenez, Aaron", "Oliver, Antoni", "Avent{\\'\\i}n-Boya, Claudi", "Pardos, Alej", "ro", "Vald{\\'e}s, Cristina", "Sans Socasau, Jus{\\`e}p Lo{\\'\\i}s", "Mart{\\'\\i}nez, Juan Pablo" ]
Expanding the FLORES+ Multilingual Benchmark with Translations for Aragonese, Aranese, Asturian, and Valencian
wmt-1.41
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.42.bib
https://aclanthology.org/2024.wmt-1.42/
@inproceedings{ahmed-etal-2024-bangla, title = "The {B}angla/{B}engali Seed Dataset Submission to the {WMT}24 Open Language Data Initiative Shared Task", author = "Ahmed, Firoz and Venkateswaran, Nitin and Moeller, Sarah", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.42", pages = "556--566", abstract = "We contribute a seed dataset for the Bangla/Bengali language as part of the WMT24 Open Language Data Initiative shared task. We validate the quality of the dataset against a mined and automatically aligned dataset (NLLBv1) and two other existing datasets of crowdsourced manual translations. The validation is performed by investigating the performance of state-of-the-art translation models fine-tuned on the different datasets after controlling for training set size. Machine translation models fine-tuned on our dataset outperform models tuned on the other datasets in both translation directions (English-Bangla and Bangla-English). These results confirm the quality of our dataset. We hope our dataset will support machine translation for the Bangla/Bengali community and related low-resource languages.", }
We contribute a seed dataset for the Bangla/Bengali language as part of the WMT24 Open Language Data Initiative shared task. We validate the quality of the dataset against a mined and automatically aligned dataset (NLLBv1) and two other existing datasets of crowdsourced manual translations. The validation is performed by investigating the performance of state-of-the-art translation models fine-tuned on the different datasets after controlling for training set size. Machine translation models fine-tuned on our dataset outperform models tuned on the other datasets in both translation directions (English-Bangla and Bangla-English). These results confirm the quality of our dataset. We hope our dataset will support machine translation for the Bangla/Bengali community and related low-resource languages.
[ "Ahmed, Firoz", "Venkateswaran, Nitin", "Moeller, Sarah" ]
The Bangla/Bengali Seed Dataset Submission to the WMT24 Open Language Data Initiative Shared Task
wmt-1.42
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.43.bib
https://aclanthology.org/2024.wmt-1.43/
@inproceedings{ferrante-2024-high, title = "A High-quality Seed Dataset for {I}talian Machine Translation", author = "Ferrante, Edoardo", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.43", pages = "567--569", abstract = "This paper describes the submission of a high-quality translation of the OLDI Seed datasetinto Italian for the WMT 2023 Open LanguageData Initiative shared task.The base of this submission is a previous ver-sion of an Italian OLDI Seed dataset releasedby Haberland et al. (2024) via machine trans-lation and partial post-editing. This data wassubsequently reviewed in its entirety by twonative speakers of Italian, who carried out ex-tensive post-editing with particular attention tothe idiomatic translation of named entities.", }
This paper describes the submission of a high-quality translation of the OLDI Seed dataset into Italian for the WMT 2023 Open Language Data Initiative shared task. The base of this submission is a previous version of an Italian OLDI Seed dataset released by Haberland et al. (2024) via machine translation and partial post-editing. This data was subsequently reviewed in its entirety by two native speakers of Italian, who carried out extensive post-editing with particular attention to the idiomatic translation of named entities.
[ "Ferrante, Edoardo" ]
A High-quality Seed Dataset for Italian Machine Translation
wmt-1.43
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.44.bib
https://aclanthology.org/2024.wmt-1.44/
@inproceedings{abdulmumin-etal-2024-correcting, title = "Correcting {FLORES} Evaluation Dataset for Four {A}frican Languages", author = "Abdulmumin, Idris and Mkhwanazi, Sthembiso and Mbooi, Mahlatse and Muhammad, Shamsuddeen Hassan and Ahmad, Ibrahim Said and Putini, Neo and Mathebula, Miehleketo and Shingange, Matimba and Gwadabe, Tajuddeen and Marivate, Vukosi", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.44", pages = "570--578", abstract = "This paper describes the corrections made to the FLORES evaluation (dev and devtest) dataset for four African languages, namely Hausa, Northern Sotho (Sepedi), Xitsonga, and isiZulu. The original dataset, though groundbreaking in its coverage of low-resource languages, exhibited various inconsistencies and inaccuracies in the reviewed languages that could potentially hinder the integrity of the evaluation of downstream tasks in natural language processing (NLP), especially machine translation. Through a meticulous review process by native speakers, several corrections were identified and implemented, improving the dataset{'}s overall quality and reliability. For each language, we provide a concise summary of the errors encountered and corrected and also present some statistical analysis that measures the difference between the existing and corrected datasets. We believe that our corrections enhance the linguistic accuracy and reliability of the data and, thereby, contribute to a more effective evaluation of NLP tasks involving the four African languages. Finally, we recommend that future translation efforts, particularly in low-resource languages, prioritize the active involvement of native speakers at every stage of the process to ensure linguistic accuracy and cultural relevance.", }
This paper describes the corrections made to the FLORES evaluation (dev and devtest) dataset for four African languages, namely Hausa, Northern Sotho (Sepedi), Xitsonga, and isiZulu. The original dataset, though groundbreaking in its coverage of low-resource languages, exhibited various inconsistencies and inaccuracies in the reviewed languages that could potentially hinder the integrity of the evaluation of downstream tasks in natural language processing (NLP), especially machine translation. Through a meticulous review process by native speakers, several corrections were identified and implemented, improving the dataset{'}s overall quality and reliability. For each language, we provide a concise summary of the errors encountered and corrected and also present some statistical analysis that measures the difference between the existing and corrected datasets. We believe that our corrections enhance the linguistic accuracy and reliability of the data and, thereby, contribute to a more effective evaluation of NLP tasks involving the four African languages. Finally, we recommend that future translation efforts, particularly in low-resource languages, prioritize the active involvement of native speakers at every stage of the process to ensure linguistic accuracy and cultural relevance.
[ "Abdulmumin, Idris", "Mkhwanazi, Sthembiso", "Mbooi, Mahlatse", "Muhammad, Shamsuddeen Hassan", "Ahmad, Ibrahim Said", "Putini, Neo", "Mathebula, Miehleketo", "Shingange, Matimba", "Gwadabe, Tajuddeen", "Marivate, Vukosi" ]
Correcting FLORES Evaluation Dataset for Four African Languages
wmt-1.44
Poster
2409.00626
[ "https://github.com/dsfsi/flores-fix-4-africa" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.45.bib
https://aclanthology.org/2024.wmt-1.45/
@inproceedings{ali-etal-2024-expanding, title = "Expanding {FLORES}+ Benchmark for More Low-Resource Settings: {P}ortuguese-Emakhuwa Machine Translation Evaluation", author = "Ali, Felermino Dario Mario and Lopes Cardoso, Henrique and Sousa-Silva, Rui", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.45", pages = "579--592", abstract = "As part of the Open Language Data Initiative shared tasks, we have expanded the FLORES+ evaluation set to include Emakhuwa, a low-resource language widely spoken in Mozambique. We translated the \textit{dev} and \textit{devtest} sets from Portuguese into Emakhuwa, and we detail the translation process and quality assurance measures used. Our methodology involved various quality checks, including post-editing and adequacy assessments. The resulting datasets consist of multiple reference sentences for each source. We present baseline results from training a Neural Machine Translation system and fine-tuning existing multilingual translation models. Our findings suggest that spelling inconsistencies remain a challenge in Emakhuwa. Additionally, the baseline models underperformed on this evaluation set, underscoring the necessity for further research to enhance machine translation quality for Emakhuwa.The data is publicly available at \url{https://huggingface.co/datasets/LIACC/Emakhuwa-FLORES}", }
As part of the Open Language Data Initiative shared tasks, we have expanded the FLORES+ evaluation set to include Emakhuwa, a low-resource language widely spoken in Mozambique. We translated the \textit{dev} and \textit{devtest} sets from Portuguese into Emakhuwa, and we detail the translation process and quality assurance measures used. Our methodology involved various quality checks, including post-editing and adequacy assessments. The resulting datasets consist of multiple reference sentences for each source. We present baseline results from training a Neural Machine Translation system and fine-tuning existing multilingual translation models. Our findings suggest that spelling inconsistencies remain a challenge in Emakhuwa. Additionally, the baseline models underperformed on this evaluation set, underscoring the necessity for further research to enhance machine translation quality for Emakhuwa.The data is publicly available at \url{https://huggingface.co/datasets/LIACC/Emakhuwa-FLORES}
[ "Ali, Felermino Dario Mario", "Lopes Cardoso, Henrique", "Sousa-Silva, Rui" ]
Expanding FLORES+ Benchmark for More Low-Resource Settings: Portuguese-Emakhuwa Machine Translation Evaluation
wmt-1.45
Poster
2408.11457
[ "" ]
https://huggingface.co/papers/2408.11457
0
2
1
3
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.46.bib
https://aclanthology.org/2024.wmt-1.46/
@inproceedings{kuzhuget-etal-2024-enhancing, title = "Enhancing Tuvan Language Resources through the {FLORES} Dataset", author = "Kuzhuget, Ali and Mongush, Airana and Oorzhak, Nachyn-Enkhedorzhu", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.46", pages = "593--599", abstract = "FLORES is a benchmark dataset designed for evaluating machine translation systems, partic- ularly for low-resource languages. This paper, conducted as a part of Open Language Data Ini- tiative (OLDI) shared task, presents our contri- bution to expanding the FLORES dataset with high-quality translations from Russian to Tu- van, an endangered Turkic language. Our ap- proach combined the linguistic expertise of na- tive speakers to ensure both accuracy and cul- tural relevance in the translations. This project represents a significant step forward in support- ing Tuvan as a low-resource language in the realm of natural language processing (NLP) and machine translation (MT).", }
FLORES is a benchmark dataset designed for evaluating machine translation systems, particularly for low-resource languages. This paper, conducted as a part of the Open Language Data Initiative (OLDI) shared task, presents our contribution to expanding the FLORES dataset with high-quality translations from Russian to Tuvan, an endangered Turkic language. Our approach combined the linguistic expertise of native speakers to ensure both accuracy and cultural relevance in the translations. This project represents a significant step forward in supporting Tuvan as a low-resource language in the realm of natural language processing (NLP) and machine translation (MT).
[ "Kuzhuget, Ali", "Mongush, Airana", "Oorzhak, Nachyn-Enkhedorzhu" ]
Enhancing Tuvan Language Resources through the FLORES Dataset
wmt-1.46
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.47.bib
https://aclanthology.org/2024.wmt-1.47/
@inproceedings{yu-etal-2024-machine, title = "Machine Translation Evaluation Benchmark for {W}u {C}hinese: Workflow and Analysis", author = "Yu, Hongjian and Shi, Yiming and Zhou, Zherui and Haberland, Christopher", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.47", pages = "600--605", abstract = "We introduce a FLORES+ dataset as an evaluation benchmark for modern Wu Chinese machine translation models and showcase its compatibility with existing Wu data. Wu Chinese is mutually unintelligible with other Sinitic languages such as Mandarin and Yue (Cantonese), but uses a set of Hanzi (Chinese characters) that profoundly overlaps with others. The population of Wu speakers is the second largest among languages in China, but the language has been suffering from significant drop in usage especially among the younger generations. We identify Wu Chinese as a textually low-resource language and address challenges for its machine translation models. Our contributions include: (1) an open-source, manually translated dataset, (2) full documentations on the process of dataset creation and validation experiments, (3) preliminary tools for Wu Chinese normalization and segmentation, and (4) benefits and limitations of our dataset, as well as implications to other low-resource languages.", }
We introduce a FLORES+ dataset as an evaluation benchmark for modern Wu Chinese machine translation models and showcase its compatibility with existing Wu data. Wu Chinese is mutually unintelligible with other Sinitic languages such as Mandarin and Yue (Cantonese), but uses a set of Hanzi (Chinese characters) that profoundly overlaps with others. The population of Wu speakers is the second largest among languages in China, but the language has been suffering from significant drop in usage especially among the younger generations. We identify Wu Chinese as a textually low-resource language and address challenges for its machine translation models. Our contributions include: (1) an open-source, manually translated dataset, (2) full documentations on the process of dataset creation and validation experiments, (3) preliminary tools for Wu Chinese normalization and segmentation, and (4) benefits and limitations of our dataset, as well as implications to other low-resource languages.
[ "Yu, Hongjian", "Shi, Yiming", "Zhou, Zherui", "Haberl", ", Christopher" ]
Machine Translation Evaluation Benchmark for Wu Chinese: Workflow and Analysis
wmt-1.47
Poster
2410.10278
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.48.bib
https://aclanthology.org/2024.wmt-1.48/
@inproceedings{mamasaidov-shopulatov-2024-open, title = "Open Language Data Initiative: Advancing Low-Resource Machine Translation for {K}arakalpak", author = "Mamasaidov, Mukhammadsaid and Shopulatov, Abror", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.48", pages = "606--613", abstract = "This study presents several contributions for the Karakalpak language: a FLORES+ devtest dataset translated to Karakalpak, parallel corpora for Uzbek-Karakalpak, Russian-Karakalpak and English-Karakalpak of 100,000 pairs each and open-sourced fine-tuned neural models for translation across these languages. Our experiments compare different model variants and training approaches, demonstrating improvements over existing baselines. This work, conducted as part of the Open Language Data Initiative (OLDI) shared task, aims to advance machine translation capabilities for Karakalpak and contribute to expanding linguistic diversity in NLP technologies.", }
This study presents several contributions for the Karakalpak language: a FLORES+ devtest dataset translated to Karakalpak, parallel corpora for Uzbek-Karakalpak, Russian-Karakalpak and English-Karakalpak of 100,000 pairs each and open-sourced fine-tuned neural models for translation across these languages. Our experiments compare different model variants and training approaches, demonstrating improvements over existing baselines. This work, conducted as part of the Open Language Data Initiative (OLDI) shared task, aims to advance machine translation capabilities for Karakalpak and contribute to expanding linguistic diversity in NLP technologies.
[ "Mamasaidov, Mukhammadsaid", "Shopulatov, Abror" ]
Open Language Data Initiative: Advancing Low-Resource Machine Translation for Karakalpak
wmt-1.48
Poster
2409.04269
[ "" ]
https://huggingface.co/papers/2409.04269
2
9
3
2
[ "tahrirchi/dilmash-raw", "tahrirchi/dilmash", "tahrirchi/dilmash-til" ]
[ "tahrirchi/dilmash" ]
[]
[ "tahrirchi/dilmash-raw", "tahrirchi/dilmash", "tahrirchi/dilmash-til" ]
[ "tahrirchi/dilmash" ]
[]
1
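The Models field of this record lists `tahrirchi/dilmash`; below is a hedged sketch of loading it with `transformers`, assuming the checkpoint exposes a standard encoder-decoder translation interface (e.g. an NLLB-style fine-tune). The model class, the example input, and the handling of language codes are assumptions, not confirmed details of the release.

```python
# Hedged sketch: loading the tahrirchi/dilmash checkpoint listed above with transformers,
# assuming it is a standard encoder-decoder translation model (e.g. an NLLB-style fine-tune).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "tahrirchi/dilmash"  # from the Models field of this record
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "Salom, dunyo!"  # illustrative input; real usage may require explicit source/target language codes
inputs = tokenizer(text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
```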
https://aclanthology.org/2024.wmt-1.49.bib
https://aclanthology.org/2024.wmt-1.49/
@inproceedings{gordeev-etal-2024-flores, title = "{FLORES}+ Translation and Machine Translation Evaluation for the {E}rzya Language", author = "Gordeev, Isai and Kuldin, Sergey and Dale, David", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.49", pages = "614--623", abstract = "This paper introduces a translation of the FLORES+ dataset into the endangered Erzya language, with the goal of evaluating machine translation between this language and any of the other 200 languages already included into FLORES+. This translation was carried out as a part of the Open Language Data shared task at WMT24. We also present a benchmark of existing translation models bases on this dataset and a new translation model that achieves the state-of-the-art quality of translation into Erzya from Russian and English.", }
This paper introduces a translation of the FLORES+ dataset into the endangered Erzya language, with the goal of evaluating machine translation between this language and any of the other 200 languages already included in FLORES+. This translation was carried out as a part of the Open Language Data shared task at WMT24. We also present a benchmark of existing translation models based on this dataset and a new translation model that achieves state-of-the-art quality of translation into Erzya from Russian and English.
[ "Gordeev, Isai", "Kuldin, Sergey", "Dale, David" ]
FLORES+ Translation and Machine Translation Evaluation for the Erzya Language
wmt-1.49
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.50.bib
https://aclanthology.org/2024.wmt-1.50/
@inproceedings{cols-2024-spanish, title = "{S}panish Corpus and Provenance with Computer-Aided Translation for the {WMT}24 {OLDI} Shared Task", author = "Cols, Jose", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.50", pages = "624--635", abstract = "This paper presents the Seed-CAT submission to the WMT24 Open Language Data Initiative shared task. We detail our data collection method, which involves a computer-aided translation tool developed explicitly for translating Seed corpora. We release a professionally translated Spanish corpus and a provenance dataset documenting the translation process. The quality of the data was validated on the FLORES+ benchmark with English-Spanish neural machine translation models, achieving an average chrF++ score of 34.9.", }
This paper presents the Seed-CAT submission to the WMT24 Open Language Data Initiative shared task. We detail our data collection method, which involves a computer-aided translation tool developed explicitly for translating Seed corpora. We release a professionally translated Spanish corpus and a provenance dataset documenting the translation process. The quality of the data was validated on the FLORES+ benchmark with English-Spanish neural machine translation models, achieving an average chrF++ score of 34.9.
[ "Cols, Jose" ]
Spanish Corpus and Provenance with Computer-Aided Translation for the WMT24 OLDI Shared Task
wmt-1.50
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
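The Seed-CAT record above reports an average chrF++ score of 34.9; a minimal sketch of computing chrF++ with `sacrebleu` follows (chrF++ is chrF with word n-grams enabled via `word_order=2`). The example sentences are placeholders, not task data.

```python
# Minimal sketch: computing chrF++ (chrF with word_order=2) using sacrebleu.
# The hypothesis/reference sentences below are placeholders, not shared-task data.
from sacrebleu.metrics import CHRF

hypotheses = ["el gato se sienta en la alfombra"]
references = [["el gato está sentado en la alfombra"]]  # one reference stream, aligned with hypotheses

chrf_pp = CHRF(word_order=2)  # word_order=2 turns chrF into chrF++
score = chrf_pp.corpus_score(hypotheses, references)
print(score)
```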
https://aclanthology.org/2024.wmt-1.51.bib
https://aclanthology.org/2024.wmt-1.51/
@inproceedings{kim-etal-2024-efficient, title = "Efficient Terminology Integration for {LLM}-based Translation in Specialized Domains", author = "Kim, Sejoon and Sung, Mingi and Lee, Jeonghwan and Lim, Hyunkuk and Gimenez Perez, Jorge", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.51", pages = "636--642", abstract = "Traditional machine translation methods typically involve training models directly on large parallel corpora, with limited emphasis on specialized terminology. However, In specialized fields such as patents, finance, biomedical domains, terminology is crucial for translation, with many terminologies that should not be translated based on semantics of the sentence but should be translated following agreed-upon conventions. In this paper we introduce a methodology that efficiently trains models with a smaller amount of data while preserving the accuracy of terminology translation. The terminology extraction model generates a glossary from existing training datasets and further refines the LLM by instructing it to effectively incorporate these terms into translations. We achieve this through a systematic process of term extraction and glossary creation using the Trie Tree algorithm, followed by data reconstruction to teach the LLM how to integrate these specialized terms. This methodology enhances the model{'}s ability to handle specialized terminology and ensures high-quality translations, particularly in fields where term consistency is crucial. Our approach has demonstrated exceptional performance, achieving the highest translation score among participants in the WMT patent task to date, showcasing its effectiveness and broad applicability in specialized translation domains where general methods often fall short.", }
Traditional machine translation methods typically involve training models directly on large parallel corpora, with limited emphasis on specialized terminology. However, in specialized fields such as patents, finance, and biomedicine, terminology is crucial for translation: many terms should not be translated according to the semantics of the sentence but should instead follow agreed-upon conventions. In this paper, we introduce a methodology that efficiently trains models with a smaller amount of data while preserving the accuracy of terminology translation. The terminology extraction model generates a glossary from existing training datasets and further refines the LLM by instructing it to effectively incorporate these terms into translations. We achieve this through a systematic process of term extraction and glossary creation using the Trie Tree algorithm, followed by data reconstruction to teach the LLM how to integrate these specialized terms. This methodology enhances the model{'}s ability to handle specialized terminology and ensures high-quality translations, particularly in fields where term consistency is crucial. Our approach has demonstrated exceptional performance, achieving the highest translation score among participants in the WMT patent task to date, showcasing its effectiveness and broad applicability in specialized translation domains where general methods often fall short.
[ "Kim, Sejoon", "Sung, Mingi", "Lee, Jeonghwan", "Lim, Hyunkuk", "Gimenez Perez, Jorge" ]
Efficient Terminology Integration for LLM-based Translation in Specialized Domains
wmt-1.51
Poster
2410.15690
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
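The terminology-integration abstract above builds its glossary and matches terms with a Trie Tree; the sketch below shows a generic longest-match trie over whitespace tokens as an illustration of that idea, not the authors' implementation, and the glossary entries are invented.

```python
# Generic sketch of trie-based longest-match glossary lookup over whitespace tokens.
# This illustrates the general Trie Tree idea; it is not the authors' implementation,
# and the example glossary entries are made up.
class TrieNode:
    def __init__(self):
        self.children = {}
        self.term = None  # set to the glossary entry when a term ends at this node


def build_trie(glossary):
    root = TrieNode()
    for term in glossary:
        node = root
        for token in term.split():
            node = node.children.setdefault(token, TrieNode())
        node.term = term
    return root


def find_terms(sentence, root):
    tokens = sentence.split()
    found, i = [], 0
    while i < len(tokens):
        node, j, last_match = root, i, None
        while j < len(tokens) and tokens[j] in node.children:
            node = node.children[tokens[j]]
            j += 1
            if node.term is not None:
                last_match = (node.term, j)
        if last_match:
            found.append(last_match[0])
            i = last_match[1]  # skip past the matched span (longest match wins)
        else:
            i += 1
    return found


glossary = ["prior art", "utility model"]  # illustrative entries
print(find_terms("the prior art cited in the utility model application", build_trie(glossary)))
```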
https://aclanthology.org/2024.wmt-1.52.bib
https://aclanthology.org/2024.wmt-1.52/
@inproceedings{htun-poncelas-2024-rakutens, title = "Rakuten{'}s Participation in {WMT} 2024 Patent Translation Task", author = "Htun, Ohnmar and Poncelas, Alberto", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.52", pages = "643--646", abstract = "This paper introduces our machine transla- tion system (team sakura), developed for the 2024 WMT Patent Translation Task. Our sys- tem focuses on translations between Japanese- English, Japanese-Korean, and Japanese- Chinese. As large language models have shown good results for various natural language pro- cessing tasks, we have adopted the RakutenAI- 7B-chat model, which has demonstrated effec- tiveness in English and Japanese. We fine-tune this model with patent-domain parallel texts and translate using multiple prompts.", }
This paper introduces our machine translation system (team sakura), developed for the 2024 WMT Patent Translation Task. Our system focuses on translations between Japanese-English, Japanese-Korean, and Japanese-Chinese. As large language models have shown good results for various natural language processing tasks, we have adopted the RakutenAI-7B-chat model, which has demonstrated effectiveness in English and Japanese. We fine-tune this model with patent-domain parallel texts and translate using multiple prompts.
[ "Htun, Ohnmar", "Poncelas, Alberto" ]
Rakuten's Participation in WMT 2024 Patent Translation Task
wmt-1.52
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.53.bib
https://aclanthology.org/2024.wmt-1.53/
@inproceedings{castaldo-etal-2024-setu, title = "The {SETU}-{ADAPT} Submission for {WMT} 24 Biomedical Shared Task", author = "Castaldo, Antonio and Zafar, Maria and Nayak, Prashanth and Haque, Rejwanul and Way, Andy and Monti, Johanna", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.53", pages = "647--653", abstract = "This system description paper presents SETU-ADAPT{'}s submission to the WMT 2024 Biomedical Shared Task, where we participated for the language pairs English-to-French and English-to-German. Our approach focused on fine-tuning Large Language Models, using in-domain and synthetic data, employing different data augmentation and data retrieval strategies. We introduce a novel MT framework, involving three autonomous agents: a Translator Agent, an Evaluator Agent and a Reviewer Agent. We present our findings and report the quality of the outputs.", }
This system description paper presents SETU-ADAPT{'}s submission to the WMT 2024 Biomedical Shared Task, where we participated for the language pairs English-to-French and English-to-German. Our approach focused on fine-tuning Large Language Models, using in-domain and synthetic data, employing different data augmentation and data retrieval strategies. We introduce a novel MT framework, involving three autonomous agents: a Translator Agent, an Evaluator Agent and a Reviewer Agent. We present our findings and report the quality of the outputs.
[ "Castaldo, Antonio", "Zafar, Maria", "Nayak, Prashanth", "Haque, Rejwanul", "Way, Andy", "Monti, Johanna" ]
The SETU-ADAPT Submission for WMT 24 Biomedical Shared Task
wmt-1.53
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.54.bib
https://aclanthology.org/2024.wmt-1.54/
@inproceedings{pakray-etal-2024-findings, title = "Findings of {WMT} 2024 Shared Task on Low-Resource {I}ndic Languages Translation", author = "Pakray, Partha and Pal, Santanu and Vetagiri, Advaitha and Krishna, Reddi and Maji, Arnab Kumar and Dash, Sandeep and Laitonjam, Lenin and Sarah, Lyngdoh and Manna, Riyanka", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.54", pages = "654--668", abstract = "This paper presents the results of the low-resource Indic language translation task, organized in conjunction with the Ninth Conference on Machine Translation (WMT) 2024. In this edition, participants were challenged to develop machine translation models for four distinct language pairs: English{--}Assamese, English-Mizo, English-Khasi, and English-Manipuri. The task utilized the enriched IndicNE-Corp1.0 dataset, which includes an extensive collection of parallel and monolingual corpora for northeastern Indic languages. The evaluation was conducted through a comprehensive suite of automatic metrics{---}BLEU, TER, RIBES, METEOR, and ChrF{---}supplemented by meticulous human assessment to measure the translation systems{'} performance and accuracy. This initiative aims to drive advancements in low-resource machine translation and make a substantial contribution to the growing body of knowledge in this dynamic field.", }
This paper presents the results of the low-resource Indic language translation task, organized in conjunction with the Ninth Conference on Machine Translation (WMT) 2024. In this edition, participants were challenged to develop machine translation models for four distinct language pairs: English{--}Assamese, English-Mizo, English-Khasi, and English-Manipuri. The task utilized the enriched IndicNE-Corp1.0 dataset, which includes an extensive collection of parallel and monolingual corpora for northeastern Indic languages. The evaluation was conducted through a comprehensive suite of automatic metrics{---}BLEU, TER, RIBES, METEOR, and ChrF{---}supplemented by meticulous human assessment to measure the translation systems{'} performance and accuracy. This initiative aims to drive advancements in low-resource machine translation and make a substantial contribution to the growing body of knowledge in this dynamic field.
[ "Pakray, Partha", "Pal, Santanu", "Vetagiri, Advaitha", "Krishna, Reddi", "Maji, Arnab Kumar", "Dash, S", "eep", "Laitonjam, Lenin", "Sarah, Lyngdoh", "Manna, Riyanka" ]
Findings of WMT 2024 Shared Task on Low-Resource Indic Languages Translation
wmt-1.54
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.55.bib
https://aclanthology.org/2024.wmt-1.55/
@inproceedings{dabre-kunchukuttan-2024-findings, title = "Findings of {WMT} 2024{'}s {M}ulti{I}ndic22{MT} Shared Task for Machine Translation of 22 {I}ndian Languages", author = "Dabre, Raj and Kunchukuttan, Anoop", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.55", pages = "669--676", abstract = "This paper presents the findings of the WMT 2024{'}s MultiIndic22MT Shared Task, focusing on Machine Translation (MT) of 22 Indian Languages. In this task, we challenged participants with building MT systems which could translate between any or all of 22 Indian languages in the 8th schedule of the Indian constitution and English. For evaluation, we focused on automatic metrics, namely, chrF, chrF++ and BLEU.", }
This paper presents the findings of the WMT 2024{'}s MultiIndic22MT Shared Task, focusing on Machine Translation (MT) of 22 Indian Languages. In this task, we challenged participants with building MT systems which could translate between any or all of 22 Indian languages in the 8th schedule of the Indian constitution and English. For evaluation, we focused on automatic metrics, namely, chrF, chrF++ and BLEU.
[ "Dabre, Raj", "Kunchukuttan, Anoop" ]
Findings of WMT 2024's MultiIndic22MT Shared Task for Machine Translation of 22 Indian Languages
wmt-1.55
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.56.bib
https://aclanthology.org/2024.wmt-1.56/
@inproceedings{parida-etal-2024-findings, title = "Findings of {WMT}2024 {E}nglish-to-Low Resource Multimodal Translation Task", author = "Parida, Shantipriya and Bojar, Ond{\v{r}}ej and Abdulmumin, Idris and Muhammad, Shamsuddeen Hassan and Ahmad, Ibrahim Said", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.56", pages = "677--683", abstract = "This paper presents the results of the English-to-Low Resource Multimodal Translation shared tasks from the Ninth Conference on Machine Translation (WMT2024). This year, 7 teams submitted their translation results for the automatic and human evaluation.", }
This paper presents the results of the English-to-Low Resource Multimodal Translation shared tasks from the Ninth Conference on Machine Translation (WMT2024). This year, 7 teams submitted their translation results for the automatic and human evaluation.
[ "Parida, Shantipriya", "Bojar, Ond{\\v{r}}ej", "Abdulmumin, Idris", "Muhammad, Shamsuddeen Hassan", "Ahmad, Ibrahim Said" ]
Findings of WMT2024 English-to-Low Resource Multimodal Translation Task
wmt-1.56
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.57.bib
https://aclanthology.org/2024.wmt-1.57/
@inproceedings{sanchez-martinez-etal-2024-findings, title = "Findings of the {WMT} 2024 Shared Task Translation into Low-Resource Languages of {S}pain: Blending Rule-Based and Neural Systems", author = "S{\'a}nchez-Mart{\'\i}nez, Felipe and Perez-Ortiz, Juan Antonio and Galiano Jimenez, Aaron and Oliver, Antoni", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.57", pages = "684--698", abstract = "This paper presents the results of the Ninth Conference on Machine Translation (WMT24) Shared Task {``}Translation into Low-Resource Languages of Spain{''}{'}. The task focused on the development of machine translation systems for three language pairs: Spanish-Aragonese, Spanish-Aranese, and Spanish-Asturian. 17 teams participated in the shared task with a total of 87 submissions. The baseline system for all language pairs was Apertium, a rule-based machine translation system that still performs competitively well, even in an era dominated by more advanced non-symbolic approaches. We report and discuss the results of the submitted systems, highlighting the strengths of both neural and rule-based approaches.", }
This paper presents the results of the Ninth Conference on Machine Translation (WMT24) Shared Task {``}Translation into Low-Resource Languages of Spain{''}. The task focused on the development of machine translation systems for three language pairs: Spanish-Aragonese, Spanish-Aranese, and Spanish-Asturian. 17 teams participated in the shared task with a total of 87 submissions. The baseline system for all language pairs was Apertium, a rule-based machine translation system that still performs competitively well, even in an era dominated by more advanced non-symbolic approaches. We report and discuss the results of the submitted systems, highlighting the strengths of both neural and rule-based approaches.
[ "S{\\'a}nchez-Mart{\\'\\i}nez, Felipe", "Perez-Ortiz, Juan Antonio", "Galiano Jimenez, Aaron", "Oliver, Antoni" ]
Findings of the WMT 2024 Shared Task Translation into Low-Resource Languages of Spain: Blending Rule-Based and Neural Systems
wmt-1.57
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.58.bib
https://aclanthology.org/2024.wmt-1.58/
@inproceedings{wang-etal-2024-findings, title = "Findings of the {WMT} 2024 Shared Task on Discourse-Level Literary Translation", author = "Wang, Longyue and Liu, Siyou and Lyu, Chenyang and Jiao, Wenxiang and Wang, Xing and Xu, Jiahao and Tu, Zhaopeng and Gu, Yan and Chen, Weiyu and Wu, Minghao and Zhou, Liting and Koehn, Philipp and Way, Andy and Yuan, Yulin", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.58", pages = "699--700", abstract = "Translating literary works has perennially stood as an elusive dream in machine translation (MT), a journey steeped in intricate challenges. To foster progress in this domain, we hold a new shared task at WMT 2023, the second edition of the \textit{ Discourse-Level Literary Translation}. First, we (Tencent AI Lab and China Literature Ltd.) release a copyrighted and document-level Chinese-English web novel corpus. Furthermore, we put forth an industry-endorsed criteria to guide human evaluation process. This year, we totally received 10 submissions from 5 academia and industry teams. We employ both automatic and human evaluations to measure the performance of the submitted systems. The official ranking of the systems is based on the overall human judgments. In addition, our extensive analysis reveals a series of interesting findings on literary and discourse-aware MT. We release data, system outputs, and leaderboard at \url{https://www2.statmt.org/wmt24/literary-translation-task.html}.", }
Translating literary works has perennially stood as an elusive dream in machine translation (MT), a journey steeped in intricate challenges. To foster progress in this domain, we hold a new shared task at WMT 2023, the second edition of the \textit{Discourse-Level Literary Translation}. First, we (Tencent AI Lab and China Literature Ltd.) release a copyrighted and document-level Chinese-English web novel corpus. Furthermore, we put forth industry-endorsed criteria to guide the human evaluation process. This year, we received a total of 10 submissions from 5 academic and industry teams. We employ both automatic and human evaluations to measure the performance of the submitted systems. The official ranking of the systems is based on the overall human judgments. In addition, our extensive analysis reveals a series of interesting findings on literary and discourse-aware MT. We release data, system outputs, and leaderboard at \url{https://www2.statmt.org/wmt24/literary-translation-task.html}.
[ "Wang, Longyue", "Liu, Siyou", "Lyu, Chenyang", "Jiao, Wenxiang", "Wang, Xing", "Xu, Jiahao", "Tu, Zhaopeng", "Gu, Yan", "Chen, Weiyu", "Wu, Minghao", "Zhou, Liting", "Koehn, Philipp", "Way, Andy", "Yuan, Yulin" ]
Findings of the WMT 2024 Shared Task on Discourse-Level Literary Translation
wmt-1.58
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.59.bib
https://aclanthology.org/2024.wmt-1.59/
@inproceedings{mohammed-etal-2024-findings, title = "Findings of the {WMT} 2024 Shared Task on Chat Translation", author = "Mohammed, Wafaa and Agrawal, Sweta and Farajian, Amin and Cabarr{\~a}o, Vera and Eikema, Bryan and Farinha, Ana C and C. De Souza, Jos{\'e} G.", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.59", pages = "701--714", abstract = "This paper presents the findings from the third edition of the Chat Translation Shared Task. As with previous editions, the task involved translating bilingual customer support conversations, specifically focusing on the impact of conversation context in translation quality and evaluation. We also include two new language pairs: English-Korean and English-Dutch, in addition to the set of language pairs from previous editions: English-German, English-French, and English-Brazilian Portuguese.We received 22 primary submissions and 32 contrastive submissions from eight teams, with each language pair having participation from at least three teams. We evaluated the systems comprehensively using both automatic metrics and human judgments via a direct assessment framework.The official rankings for each language pair were determined based on human evaluation scores, considering performance in both translation directions{---}agent and customer. Our analysis shows that while the systems excelled at translating individual turns, there is room for improvement in overall conversation-level translation quality.", }
This paper presents the findings from the third edition of the Chat Translation Shared Task. As with previous editions, the task involved translating bilingual customer support conversations, specifically focusing on the impact of conversation context on translation quality and evaluation. We also include two new language pairs: English-Korean and English-Dutch, in addition to the set of language pairs from previous editions: English-German, English-French, and English-Brazilian Portuguese. We received 22 primary submissions and 32 contrastive submissions from eight teams, with each language pair having participation from at least three teams. We evaluated the systems comprehensively using both automatic metrics and human judgments via a direct assessment framework. The official rankings for each language pair were determined based on human evaluation scores, considering performance in both translation directions{---}agent and customer. Our analysis shows that while the systems excelled at translating individual turns, there is room for improvement in overall conversation-level translation quality.
[ "Mohammed, Wafaa", "Agrawal, Sweta", "Farajian, Amin", "Cabarr{\\~a}o, Vera", "Eikema, Bryan", "Farinha, Ana C", "C. De Souza, Jos{\\'e} G." ]
Findings of the WMT 2024 Shared Task on Chat Translation
wmt-1.59
Poster
2410.11624
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.60.bib
https://aclanthology.org/2024.wmt-1.60/
@inproceedings{kinugawa-etal-2024-findings, title = "Findings of the {WMT} 2024 Shared Task on Non-Repetitive Translation", author = "Kinugawa, Kazutaka and Mino, Hideya and Goto, Isao and Shirai, Naoto", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.60", pages = "715--727", abstract = "The repetition of words in an English sentence can create a monotonous or awkward impression. In such cases, repetition should be avoided appropriately. To evaluate the performance of machine translation (MT) systems in avoiding such repetition and outputting more polished translations, we presented the shared task of controlling the lexical choice of MT systems. From Japanese{--}English parallel news articles, we collected several hundred sentence pairs in which the source sentences containing repeated words were translated in a style that avoided repetition. Participants were required to encourage the MT system to output tokens in a non-repetitive manner while maintaining translation quality. We conducted human and automatic evaluations of systems submitted by two teams based on an encoder-decoder Transformer and a large language model, respectively. From the experimental results and analysis, we report a series of findings on this task.", }
The repetition of words in an English sentence can create a monotonous or awkward impression. In such cases, repetition should be avoided appropriately. To evaluate the performance of machine translation (MT) systems in avoiding such repetition and outputting more polished translations, we presented the shared task of controlling the lexical choice of MT systems. From Japanese{--}English parallel news articles, we collected several hundred sentence pairs in which the source sentences containing repeated words were translated in a style that avoided repetition. Participants were required to encourage the MT system to output tokens in a non-repetitive manner while maintaining translation quality. We conducted human and automatic evaluations of systems submitted by two teams based on an encoder-decoder Transformer and a large language model, respectively. From the experimental results and analysis, we report a series of findings on this task.
[ "Kinugawa, Kazutaka", "Mino, Hideya", "Goto, Isao", "Shirai, Naoto" ]
Findings of the WMT 2024 Shared Task on Non-Repetitive Translation
wmt-1.60
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
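The non-repetitive translation task above targets repeated words in English output; a toy sketch of flagging repeated content words follows. The stop-word list and the notion of repetition used here are simplifications and do not reproduce the task's annotation criteria.

```python
# Toy sketch: flagging content words that repeat within one English sentence.
# The stop-word list and the definition of "repetition" are simplifications of the
# shared task's actual criteria.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "to", "and", "in", "on", "is", "are", "was", "were"}

def repeated_content_words(sentence):
    tokens = re.findall(r"[a-z']+", sentence.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return [word for word, n in counts.items() if n > 1]

print(repeated_content_words("The committee discussed the report, and the committee approved the report."))
# -> ['committee', 'report']
```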
https://aclanthology.org/2024.wmt-1.61.bib
https://aclanthology.org/2024.wmt-1.61/
@inproceedings{yadav-etal-2024-a3, title = "A3-108 Controlling Token Generation in Low Resource Machine Translation Systems", author = "Yadav, Saumitra and Mukherjee, Ananya and Shrivastava, Manish", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.61", pages = "728--734", abstract = "Translating for languages with limited resources poses a persistent challenge due to the scarcity of high-quality training data. To enhance translation accuracy, we explored controlled generation mechanisms, focusing on the importance of control tokens. In our experiments, while training, we encoded the target sentence length as a control token to the source sentence, treating it as an additional feature for the source sentence. We developed various NMT models using transformer architecture and conducted experiments across 8 language directions (English = Assamese, Manipuri, Khasi, and Mizo), exploring four variations of length encoding mechanisms. Through comparative analysis against the baseline model, we submitted two systems for each language direction. We report our findings for the same in this work.", }
Translating for languages with limited resources poses a persistent challenge due to the scarcity of high-quality training data. To enhance translation accuracy, we explored controlled generation mechanisms, focusing on the importance of control tokens. In our experiments, while training, we encoded the target sentence length as a control token to the source sentence, treating it as an additional feature for the source sentence. We developed various NMT models using the transformer architecture and conducted experiments across 8 language directions (English ↔ Assamese, Manipuri, Khasi, and Mizo), exploring four variations of length encoding mechanisms. Through comparative analysis against the baseline model, we submitted two systems for each language direction. We report our findings for the same in this work.
[ "Yadav, Saumitra", "Mukherjee, Ananya", "Shrivastava, Manish" ]
A3-108 Controlling Token Generation in Low Resource Machine Translation Systems
wmt-1.61
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
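The A3-108 system above encodes the target sentence length as a control token attached to the source sentence; below is a minimal sketch of one such scheme using bucketed length prefixes. The bucket boundaries and tag format are assumptions, and the paper explores four variants that may all differ from this one.

```python
# Minimal sketch: prepending a bucketed target-length control token to each source sentence.
# The bucket boundaries and tag format are illustrative assumptions; the paper's four
# length-encoding variants may differ from this scheme.
def length_bucket(n_target_tokens, edges=(10, 20, 30, 40)):
    for i, edge in enumerate(edges):
        if n_target_tokens <= edge:
            return f"<len_{i}>"
    return f"<len_{len(edges)}>"

def add_length_token(src_sentence, tgt_sentence):
    tag = length_bucket(len(tgt_sentence.split()))
    return f"{tag} {src_sentence}"

src = "this is a training example"
tgt = "placeholder target sentence of five tokens"  # stand-in target text
print(add_length_token(src, tgt))  # -> "<len_0> this is a training example"
```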
https://aclanthology.org/2024.wmt-1.62.bib
https://aclanthology.org/2024.wmt-1.62/
@inproceedings{roque-etal-2024-samsung, title = "{S}amsung {R}{\&}{D} Institute {P}hilippines @ {WMT} 2024 {I}ndic {MT} Task", author = "Roque, Matthew Theodore and Catalan, Carlos Rafael and Velasco, Dan John and Rufino, Manuel Antonio and Cruz, Jan Christian Blaise", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.62", pages = "735--741", abstract = "This paper presents the methodology developed by the Samsung R{\&}D Institute Philippines (SRPH) Language Intelligence Team (LIT) for the WMT 2024 Shared Task on Low-Resource Indic Language Translation. We trained standard sequence-to-sequence Transformer models from scratch for both English-to-Indic and Indic-to-English translation directions. Additionally, we explored data augmentation through backtranslation and the application of noisy channel reranking to improve translation quality. A multilingual model trained across all language pairs was also investigated. Our results demonstrate the effectiveness of the multilingual model, with significant performance improvements observed in most language pairs, highlighting the potential of shared language representations in low-resource translation scenarios.", }
This paper presents the methodology developed by the Samsung R{\&}D Institute Philippines (SRPH) Language Intelligence Team (LIT) for the WMT 2024 Shared Task on Low-Resource Indic Language Translation. We trained standard sequence-to-sequence Transformer models from scratch for both English-to-Indic and Indic-to-English translation directions. Additionally, we explored data augmentation through backtranslation and the application of noisy channel reranking to improve translation quality. A multilingual model trained across all language pairs was also investigated. Our results demonstrate the effectiveness of the multilingual model, with significant performance improvements observed in most language pairs, highlighting the potential of shared language representations in low-resource translation scenarios.
[ "Roque, Matthew Theodore", "Catalan, Carlos Rafael", "Velasco, Dan John", "Rufino, Manuel Antonio", "Cruz, Jan Christian Blaise" ]
Samsung R&D Institute Philippines @ WMT 2024 Indic MT Task
wmt-1.62
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
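The Samsung R&D Institute Philippines record above augments training data through backtranslation; the sketch below shows the generic data flow with a placeholder reverse-direction translation function. The placeholder, and the omission of the team's filtering and noisy channel reranking steps, are deliberate simplifications.

```python
# Generic sketch of back-translation data augmentation: synthetic source sentences are
# produced from target-language monolingual text using a reverse-direction model.
# `translate_to_english` is a placeholder; the team's actual reverse model, filtering,
# and noisy channel reranking are not reproduced here.
def translate_to_english(sentence):
    # placeholder for an Indic-to-English model call
    return "<synthetic English for: " + sentence + ">"

def back_translate(monolingual_target_sentences):
    augmented_pairs = []
    for tgt in monolingual_target_sentences:
        synthetic_src = translate_to_english(tgt)
        augmented_pairs.append((synthetic_src, tgt))  # (source, target) pair for forward training
    return augmented_pairs

mono = ["placeholder target sentence 1", "placeholder target sentence 2"]
for src, tgt in back_translate(mono):
    print(src, "|||", tgt)
```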
https://aclanthology.org/2024.wmt-1.63.bib
https://aclanthology.org/2024.wmt-1.63/
@inproceedings{ju-etal-2024-dlut, title = "{DLUT}-{NLP} Machine Translation Systems for {WMT}24 Low-Resource {I}ndic Language Translation", author = "Ju, Chenfei and Liu, Junpeng and Huang, Kaiyu and Huang, Degen", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.63", pages = "742--746", abstract = "This paper describes the submission systems of DLUT-NLP team for the WMT24 low-resource Indic language translation shared task. We participated in the translation task of four language pairs, including en-as, en-mz, en-kha, en-mni.", }
This paper describes the submission systems of the DLUT-NLP team for the WMT24 low-resource Indic language translation shared task. We participated in the translation task for four language pairs: en-as, en-mz, en-kha, and en-mni.
[ "Ju, Chenfei", "Liu, Junpeng", "Huang, Kaiyu", "Huang, Degen" ]
DLUT-NLP Machine Translation Systems for WMT24 Low-Resource Indic Language Translation
wmt-1.63
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.64.bib
https://aclanthology.org/2024.wmt-1.64/
@inproceedings{patil-etal-2024-srib, title = "{SRIB}-{NMT}{'}s Submission to the {I}ndic {MT} Shared Task in {WMT} 2024", author = "Patil, Pranamya and Hr, Raghavendra and Raghuwanshi, Aditya and Verma, Kushal", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.64", pages = "747--750", abstract = "In the context of the Indic Low Resource Ma-chine Translation (MT) challenge at WMT-24, we participated in four language pairs:English-Assamese (en-as), English-Mizo (en-mz), English-Khasi (en-kh), and English-Manipuri (en-mn). To address these tasks,we employed a transformer-based sequence-to-sequence architecture (Vaswani et al., 2017).In the PRIMARY system, which did not uti-lize external data, we first pretrained languagemodels (low resource languages) using avail-able monolingual data before finetuning themon small parallel datasets for translation. Forthe CONTRASTIVE submission approach, weutilized pretrained translation models like In-dic Trans2 (Gala et al., 2023) and appliedLoRA Fine-tuning (Hu et al., 2021) to adaptthem to smaller, low-resource languages, aim-ing to leverage cross-lingual language transfercapabilities (CONNEAU and Lample, 2019).These approaches resulted in significant im-provements in SacreBLEU scores(Post, 2018)for low-resource languages.", }
In the context of the Indic Low Resource Machine Translation (MT) challenge at WMT-24, we participated in four language pairs: English-Assamese (en-as), English-Mizo (en-mz), English-Khasi (en-kh), and English-Manipuri (en-mn). To address these tasks, we employed a transformer-based sequence-to-sequence architecture (Vaswani et al., 2017). In the PRIMARY system, which did not utilize external data, we first pretrained language models (low resource languages) using available monolingual data before finetuning them on small parallel datasets for translation. For the CONTRASTIVE submission approach, we utilized pretrained translation models like IndicTrans2 (Gala et al., 2023) and applied LoRA Fine-tuning (Hu et al., 2021) to adapt them to smaller, low-resource languages, aiming to leverage cross-lingual language transfer capabilities (CONNEAU and Lample, 2019). These approaches resulted in significant improvements in SacreBLEU scores (Post, 2018) for low-resource languages.
[ "Patil, Pranamya", "Hr, Raghavendra", "Raghuwanshi, Aditya", "Verma, Kushal" ]
SRIB-NMT's Submission to the Indic MT Shared Task in WMT 2024
wmt-1.64
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
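The SRIB-NMT contrastive systems above apply LoRA fine-tuning to a pretrained translation model; a hedged sketch of attaching LoRA adapters with the `peft` library follows. The base checkpoint, target modules, and hyperparameters are illustrative assumptions rather than the team's configuration.

```python
# Hedged sketch: attaching LoRA adapters to a pretrained seq2seq translation model with peft.
# The base checkpoint, target modules, and hyperparameters are illustrative assumptions,
# not the configuration used in the submission.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

base = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M")  # stand-in base model

lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections; names depend on the architecture
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```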
https://aclanthology.org/2024.wmt-1.65.bib
https://aclanthology.org/2024.wmt-1.65/
@inproceedings{p-m-etal-2024-mtnlp, title = "{MTNLP}-{IIITH}: Machine Translation for Low-Resource {I}ndic Languages", author = "P M, Abhinav and Shetye, Ketaki and Krishnamurthy, Parameswari", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.65", pages = "751--755", abstract = "Machine Translation for low-resource languages presents significant challenges, primarily due to limited data availability. We have a baseline model and a primary model. For the baseline model, we first fine-tune the mBART model (mbart-large-50-many-to-many-mmt) for the language pairs English-Khasi, Khasi-English, English-Manipuri, and Manipuri-English. We then augment the dataset by back-translating from Indic languages to English. To enhance data quality, we fine-tune the LaBSE model specifically for Khasi and Manipuri, generating sentence embeddings and applying a cosine similarity threshold of 0.84 to filter out low-quality back-translations. The filtered data is combined with the original training data and used to further fine-tune the mBART model, creating our primary model. The results show that the primary model slightly outperforms the baseline model, with the best performance achieved by the English-to-Khasi (en-kh) primary model, which recorded a BLEU score of 0.0492, a chrF score of 0.3316, and a METEOR score of 0.2589 (on a scale of 0 to 1), with similar results for other language pairs.", }
Machine Translation for low-resource languages presents significant challenges, primarily due to limited data availability. We have a baseline model and a primary model. For the baseline model, we first fine-tune the mBART model (mbart-large-50-many-to-many-mmt) for the language pairs English-Khasi, Khasi-English, English-Manipuri, and Manipuri-English. We then augment the dataset by back-translating from Indic languages to English. To enhance data quality, we fine-tune the LaBSE model specifically for Khasi and Manipuri, generating sentence embeddings and applying a cosine similarity threshold of 0.84 to filter out low-quality back-translations. The filtered data is combined with the original training data and used to further fine-tune the mBART model, creating our primary model. The results show that the primary model slightly outperforms the baseline model, with the best performance achieved by the English-to-Khasi (en-kh) primary model, which recorded a BLEU score of 0.0492, a chrF score of 0.3316, and a METEOR score of 0.2589 (on a scale of 0 to 1), with similar results for other language pairs.
[ "P M, Abhinav", "Shetye, Ketaki", "Krishnamurthy, Parameswari" ]
MTNLP-IIITH: Machine Translation for Low-Resource Indic Languages
wmt-1.65
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
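The MTNLP-IIITH pipeline above filters back-translations with LaBSE sentence embeddings and a cosine-similarity threshold of 0.84; a minimal sketch with `sentence-transformers` follows. The public LaBSE checkpoint stands in for the team's fine-tuned Khasi and Manipuri models, and the example pairs are placeholders.

```python
# Minimal sketch: scoring back-translated pairs with LaBSE embeddings and keeping pairs
# whose cosine similarity clears the 0.84 threshold mentioned in the abstract above.
# The public LaBSE checkpoint stands in for the team's fine-tuned models.
from sentence_transformers import SentenceTransformer, util

labse = SentenceTransformer("sentence-transformers/LaBSE")

pairs = [
    ("a source sentence", "its back-translated counterpart"),
    ("another source sentence", "a noisy back-translation"),
]
sources = labse.encode([s for s, _ in pairs], convert_to_tensor=True)
targets = labse.encode([t for _, t in pairs], convert_to_tensor=True)

scores = util.cos_sim(sources, targets).diagonal()
kept = [pair for pair, score in zip(pairs, scores) if score >= 0.84]
print(len(kept), "pairs kept")
```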
https://aclanthology.org/2024.wmt-1.66.bib
https://aclanthology.org/2024.wmt-1.66/
@inproceedings{dreano-etal-2024-exploration, title = "Exploration of the {C}ycle{GN} Framework for Low-Resource Languages", author = {Dreano, S{\"o}ren and Molloy, Derek and Murphy, Noel}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.66", pages = "756--761", abstract = "CycleGN is a Neural Machine Translation framework relying on the Transformer architecture. The foundational concept of our research posits that in an ideal scenario, retro-translations of generated translations should revert to the original source sentences. Consequently, a pair of models can be trained using a Cycle Consistency Loss only, with one model translating in one direction and the second model in the opposite direction.", }
CycleGN is a Neural Machine Translation framework relying on the Transformer architecture. The foundational concept of our research posits that in an ideal scenario, retro-translations of generated translations should revert to the original source sentences. Consequently, a pair of models can be trained using a Cycle Consistency Loss only, with one model translating in one direction and the second model in the opposite direction.
[ "Dreano, S{\\\"o}ren", "Molloy, Derek", "Murphy, Noel" ]
Exploration of the CycleGN Framework for Low-Resource Languages
wmt-1.66
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
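The CycleGN framework above trains a pair of direction-specific models with only a cycle-consistency loss, so that retro-translations should recover the original source; below is a schematic sketch of one such training step. The model interfaces and the loss function are placeholders, not the paper's exact formulation.

```python
# Schematic sketch of one cycle-consistency training step with two direction-specific models.
# `model_xy`, `model_yx`, and `reconstruction_loss` are placeholders; the actual CycleGN
# objective and architecture are described in the paper and are not reproduced here.
def cycle_step(model_xy, model_yx, batch_x, batch_y, reconstruction_loss):
    # X -> Y -> X: translate, retro-translate, and compare with the original source.
    pseudo_y = model_xy.translate(batch_x)
    recon_x = model_yx.translate(pseudo_y)
    loss_x = reconstruction_loss(recon_x, batch_x)

    # Y -> X -> Y: the symmetric cycle.
    pseudo_x = model_yx.translate(batch_y)
    recon_y = model_xy.translate(pseudo_x)
    loss_y = reconstruction_loss(recon_y, batch_y)

    return loss_x + loss_y  # total cycle-consistency loss for this step
```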
https://aclanthology.org/2024.wmt-1.67.bib
https://aclanthology.org/2024.wmt-1.67/
@inproceedings{gajakos-etal-2024-setu, title = "The {SETU}-{ADAPT} Submissions to the {WMT}24 Low-Resource {I}ndic Language Translation Task", author = "Gajakos, Neha and Nayak, Prashanth and Haque, Rejwanul and Way, Andy", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.67", pages = "762--769", abstract = "This paper presents the SETU-ADAPT{'}s submissions to the WMT 2024 Low-Resource Indic Language Translation task. We participated in the unconstrained segment of the task, focusing on the Assamese-to-English and English-to-Assamese language pairs. Our approach involves leveraging Large Language Models (LLMs) as the baseline systems for all our MT tasks. Furthermore, we applied various strategies to improve the baseline systems. In our first approach, we fine-tuned LLMs using all the data provided by the task organisers. Our second approach explores in-context learning by focusing on few-shot prompting. In our final approach we explore an efficient data extraction technique based on a fuzzy match-based similarity measure for fine-tuning. We evaluated our systems using BLEU, chrF, WER, and COMET. The experimental results showed that our strategies can effectively improve the quality of translations in low-resource scenarios.", }
This paper presents SETU-ADAPT{'}s submissions to the WMT 2024 Low-Resource Indic Language Translation task. We participated in the unconstrained segment of the task, focusing on the Assamese-to-English and English-to-Assamese language pairs. Our approach involves leveraging Large Language Models (LLMs) as the baseline systems for all our MT tasks. Furthermore, we applied various strategies to improve the baseline systems. In our first approach, we fine-tuned LLMs using all the data provided by the task organisers. Our second approach explores in-context learning by focusing on few-shot prompting. In our final approach, we explore an efficient data extraction technique based on a fuzzy match-based similarity measure for fine-tuning. We evaluated our systems using BLEU, chrF, WER, and COMET. The experimental results showed that our strategies can effectively improve the quality of translations in low-resource scenarios.
[ "Gajakos, Neha", "Nayak, Prashanth", "Haque, Rejwanul", "Way, Andy" ]
The SETU-ADAPT Submissions to the WMT24 Low-Resource Indic Language Translation Task
wmt-1.67
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
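The SETU-ADAPT submission above selects fine-tuning data with a fuzzy match-based similarity measure; a small sketch using the standard library's `difflib` ratio follows. Both the similarity measure and the 0.6 threshold are illustrative assumptions, since the abstract does not specify them.

```python
# Small sketch: retrieving training sentences that fuzzily match a query sentence, using
# difflib's ratio as the similarity measure. The measure and the 0.6 threshold are
# illustrative assumptions; the submission's actual fuzzy-match criterion may differ.
from difflib import SequenceMatcher

def fuzzy_matches(query, candidates, threshold=0.6):
    scored = [(SequenceMatcher(None, query, c).ratio(), c) for c in candidates]
    return sorted((pair for pair in scored if pair[0] >= threshold), reverse=True)

pool = [
    "the committee approved the annual budget",
    "the committee approved the revised budget",
    "rainfall was heavy across the region",
]
print(fuzzy_matches("the committee approved the budget", pool))
```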
https://aclanthology.org/2024.wmt-1.68.bib
https://aclanthology.org/2024.wmt-1.68/
@inproceedings{sayed-etal-2024-spring, title = "{SPRING} Lab {IITM}{'}s Submission to Low Resource {I}ndic Language Translation Shared Task", author = "Sayed, Hamees and Joglekar, Advait and Umesh, Srinivasan", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.68", pages = "770--774", abstract = "We develop a robust translation model for four low-resource Indic languages: Khasi, Mizo, Manipuri, and Assamese. Our approach includes a comprehensive pipeline from data collection and preprocessing to training and evaluation, leveraging data from WMT task datasets, BPCC, PMIndia, and OpenLanguageData. To address the scarcity of bilingual data, we use back-translation techniques on monolingual datasets for Mizo and Khasi, significantly expanding our training corpus. We fine-tune the pre-trained NLLB 3.3B model for Assamese, Mizo, and Manipuri, achieving improved performance over the baseline. For Khasi, which is not supported by the NLLB model, we introduce special tokens and train the model on our Khasi corpus. Our training involves masked language modelling, followed by fine-tuning for English-to-Indic and Indic-to-English translations.", }
We develop a robust translation model for four low-resource Indic languages: Khasi, Mizo, Manipuri, and Assamese. Our approach includes a comprehensive pipeline from data collection and preprocessing to training and evaluation, leveraging data from WMT task datasets, BPCC, PMIndia, and OpenLanguageData. To address the scarcity of bilingual data, we use back-translation techniques on monolingual datasets for Mizo and Khasi, significantly expanding our training corpus. We fine-tune the pre-trained NLLB 3.3B model for Assamese, Mizo, and Manipuri, achieving improved performance over the baseline. For Khasi, which is not supported by the NLLB model, we introduce special tokens and train the model on our Khasi corpus. Our training involves masked language modelling, followed by fine-tuning for English-to-Indic and Indic-to-English translations.
[ "Sayed, Hamees", "Joglekar, Advait", "Umesh, Srinivasan" ]
SPRING Lab IITM's Submission to Low Resource Indic Language Translation Shared Task
wmt-1.68
Poster
2411.00727
[ "" ]
https://huggingface.co/papers/2411.00727
2
0
0
3
[]
[]
[]
[]
[]
[]
1
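The SPRING Lab system above introduces special tokens for Khasi, which the NLLB model does not cover, before continuing training; a hedged sketch of extending an NLLB tokenizer and the model's embedding matrix with `transformers` follows. The language tag `khi_Latn` and the smaller stand-in checkpoint are assumptions for illustration.

```python
# Hedged sketch: adding a new language token for Khasi to an NLLB tokenizer and resizing
# the model embeddings accordingly. The tag "khi_Latn" and the checkpoint choice are
# illustrative assumptions; the submission's actual token set is defined by the authors.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"  # smaller stand-in for the 3.3B model in the abstract
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

new_lang_token = "khi_Latn"  # hypothetical language tag for Khasi
tokenizer.add_special_tokens({"additional_special_tokens": [new_lang_token]})
model.resize_token_embeddings(len(tokenizer))

print(new_lang_token, "->", tokenizer.convert_tokens_to_ids(new_lang_token))
```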
https://aclanthology.org/2024.wmt-1.69.bib
https://aclanthology.org/2024.wmt-1.69/
@inproceedings{wei-etal-2024-machine, title = "Machine Translation Advancements of Low-Resource {I}ndian Languages by Transfer Learning", author = "Wei, Bin and Jiawei, Zheng and Li, Zongyao and Wu, Zhanglin and Guo, Jiaxin and Wei, Daimeng and Rao, Zhiqiang and Li, Shaojun and Luo, Yuanchang and Shang, Hengchao and Yang, Jinlong and Xie, Yuhao and Yang, Hao", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.69", pages = "775--780", abstract = "This paper introduces the submission by Huawei Translation Center (HW-TSC) to the WMT24 Indian Languages Machine Translation (MT) Shared Task. To develop a reliable machine translation system for low-resource Indian languages, we employed two distinct knowledge transfer strategies, taking into account the characteristics of the language scripts and the support available from existing open-source models for Indian languages. For Assamese(as) and Manipuri(mn), we fine-tuned the existing IndicTrans2 open-source model to enable bidirectional translation between English and these languages. For Khasi(kh) and Mizo(mz), we trained a multilingual model as the baseline using bilingual data from this four language pairs as well as additional Bengali data, which share the same language family. This was followed by fine-tuning to achieve bidirectional translation between English and Khasi, as well as English and Mizo. Our transfer learning experiments produced significant results: 23.5 BLEU for en→as, 31.8 BLEU for en→mn, 36.2 BLEU for as→en, and 47.9 BLEU for mn→en on their respective test sets. Similarly, the multilingual model transfer learning experiments yielded impressive outcomes, achieving 19.7 BLEU for en→kh, 32.8 BLEU for en→mz, 16.1 BLEU for kh→en, and 33.9 BLEU for mz→en on their respective test sets. These results not only highlight the effectiveness of transfer learning techniques for low-resource languages but also contribute to advancing machine translation capabilities for low-resource Indian languages.", }
This paper introduces the submission by Huawei Translation Center (HW-TSC) to the WMT24 Indian Languages Machine Translation (MT) Shared Task. To develop a reliable machine translation system for low-resource Indian languages, we employed two distinct knowledge transfer strategies, taking into account the characteristics of the language scripts and the support available from existing open-source models for Indian languages. For Assamese(as) and Manipuri(mn), we fine-tuned the existing IndicTrans2 open-source model to enable bidirectional translation between English and these languages. For Khasi(kh) and Mizo(mz), we trained a multilingual model as the baseline using bilingual data from these four language pairs as well as additional Bengali data, which share the same language family. This was followed by fine-tuning to achieve bidirectional translation between English and Khasi, as well as English and Mizo. Our transfer learning experiments produced significant results: 23.5 BLEU for en→as, 31.8 BLEU for en→mn, 36.2 BLEU for as→en, and 47.9 BLEU for mn→en on their respective test sets. Similarly, the multilingual model transfer learning experiments yielded impressive outcomes, achieving 19.7 BLEU for en→kh, 32.8 BLEU for en→mz, 16.1 BLEU for kh→en, and 33.9 BLEU for mz→en on their respective test sets. These results not only highlight the effectiveness of transfer learning techniques for low-resource languages but also contribute to advancing machine translation capabilities for low-resource Indian languages.
[ "Wei, Bin", "Jiawei, Zheng", "Li, Zongyao", "Wu, Zhanglin", "Guo, Jiaxin", "Wei, Daimeng", "Rao, Zhiqiang", "Li, Shaojun", "Luo, Yuanchang", "Shang, Hengchao", "Yang, Jinlong", "Xie, Yuhao", "Yang, Hao" ]
Machine Translation Advancements of Low-Resource Indian Languages by Transfer Learning
wmt-1.69
Poster
2409.15879
[ "" ]
https://huggingface.co/papers/2409.15879
0
0
0
13
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.70.bib
https://aclanthology.org/2024.wmt-1.70/
@inproceedings{sahoo-etal-2024-nlip, title = "{NLIP}{\_}{L}ab-{IITH} Low-Resource {MT} System for {WMT}24 {I}ndic {MT} Shared Task", author = "Sahoo, Pramit and Brahma, Maharaj and Desarkar, Maunendra Sankar", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.70", pages = "781--787", abstract = "In this paper, we describe our system for the WMT 24 shared task of Low-Resource Indic Language Translation. We consider eng↔{as, kha, lus, mni} as participating language pairs. In this shared task, we explore the fine-tuning of a pre-trained model motivated by the pre-trained objective of aligning embeddings closer by alignment augmentation (Lin et al.,2020) for 22 scheduled Indian languages. Our primary system is based on language-specific finetuning on a pre-trained model. We achieve chrF2 scores of 50.6, 42.3, 54.9, and 66.3 on the official public test set for eng→as, eng→kha, eng→lus, eng→mni respectively. We also explore multilingual training with/without language grouping and layer-freezing.", }
In this paper, we describe our system for the WMT24 shared task on Low-Resource Indic Language Translation. We consider eng↔{as, kha, lus, mni} as participating language pairs. In this shared task, we explore fine-tuning a model pre-trained for 22 scheduled Indian languages with an objective that pulls embeddings closer together through alignment augmentation (Lin et al., 2020). Our primary system is based on language-specific fine-tuning of this pre-trained model. We achieve chrF2 scores of 50.6, 42.3, 54.9, and 66.3 on the official public test set for eng→as, eng→kha, eng→lus, and eng→mni, respectively. We also explore multilingual training with/without language grouping and layer freezing.
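The chrF2 scores reported here are the shared task's public metric; as a small hedged example, the snippet below scores a hypothetical system output file against a reference file with sacrebleu (the file names are placeholders).

```python
# Sketch: corpus-level chrF2 (and BLEU) scoring with sacrebleu.
from sacrebleu.metrics import BLEU, CHRF

hyps = [line.strip() for line in open("system.as", encoding="utf-8")]
refs = [line.strip() for line in open("reference.as", encoding="utf-8")]

chrf2 = CHRF()   # defaults: char_order=6, word_order=0, beta=2 -> chrF2
bleu = BLEU()

print(chrf2.corpus_score(hyps, [refs]))   # e.g. "chrF2 = 50.6 ..."
print(bleu.corpus_score(hyps, [refs]))
```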
[ "Sahoo, Pramit", "Brahma, Maharaj", "Desarkar, Maunendra Sankar" ]
NLIP_Lab-IITH Low-Resource MT System for WMT24 Indic MT Shared Task
wmt-1.70
Poster
2410.03215
[ "https://github.com/pramitsahoo/wmt2024-lrilt" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.71.bib
https://aclanthology.org/2024.wmt-1.71/
@inproceedings{bhaskar-krishnamurthy-2024-yes, title = "Yes-{MT}{'}s Submission to the Low-Resource {I}ndic Language Translation Shared Task in {WMT} 2024", author = "Bhaskar, Yash and Krishnamurthy, Parameswari", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.71", pages = "788--792", abstract = "This paper presents the systems submitted by the Yes-MT team for the Low-Resource Indic Language Translation Shared Task at WMT 2024, focusing on translating between English and the Assamese, Mizo, Khasi, and Manipuri languages. The experiments explored various approaches, including fine-tuning pre-trained models like mT5 and IndicBart in both Multilingual and Monolingual settings, LoRA finetune IndicTrans2, zero-shot and few-shot prompting with large language models (LLMs) like Llama 3 and Mixtral 8x7b, LoRA Supervised Fine Tuning Llama 3, and training Transformers from scratch. The results were evaluated on the WMT23 Low-Resource Indic Language Translation Shared Task{'}s test data using SacreBLEU and CHRF highlighting the challenges of low-resource translation and show the potential of LLMs for these tasks, particularly with fine-tuning.", }
This paper presents the systems submitted by the Yes-MT team for the Low-Resource Indic Language Translation Shared Task at WMT 2024, focusing on translating between English and the Assamese, Mizo, Khasi, and Manipuri languages. The experiments explored various approaches, including fine-tuning pre-trained models like mT5 and IndicBart in both multilingual and monolingual settings, LoRA fine-tuning of IndicTrans2, zero-shot and few-shot prompting with large language models (LLMs) like Llama 3 and Mixtral 8x7B, LoRA supervised fine-tuning of Llama 3, and training Transformers from scratch. The results were evaluated on the WMT23 Low-Resource Indic Language Translation Shared Task's test data using SacreBLEU and chrF, highlighting the challenges of low-resource translation and showing the potential of LLMs for these tasks, particularly with fine-tuning.
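For the LoRA experiments mentioned above, a typical setup attaches low-rank adapters to a frozen base model with the peft library. The sketch below is a minimal, hedged example; the checkpoint name, rank, and target modules are illustrative assumptions, not the Yes-MT configuration.

```python
# Sketch: attach LoRA adapters to a causal LM before supervised fine-tuning.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Meta-Llama-3-8B"   # assumed checkpoint; gated on the Hub
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base, torch_dtype=torch.bfloat16)

lora_cfg = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # common choice for Llama-style models
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()   # only the adapter weights are trainable
```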
[ "Bhaskar, Yash", "Krishnamurthy, Parameswari" ]
Yes-MT's Submission to the Low-Resource Indic Language Translation Shared Task in WMT 2024
wmt-1.71
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.72.bib
https://aclanthology.org/2024.wmt-1.72/
@inproceedings{joshi-etal-2024-system, title = "System Description of {BV}-{SLP} for {S}indhi-{E}nglish Machine Translation in {M}ulti{I}ndic22{MT} 2024 Shared Task", author = "Joshi, Nisheeth and Katyayan, Pragya and Arora, Palak and Nathani, Bharti", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.72", pages = "793--796", abstract = "This paper presents our machine translation system that was developed for the WAT2024 MultiInidc MT shared task. We built our system for the Sindhi-English language pair. We developed two MT systems. The first system was our baseline system where Sindhi was translated into English. In the second system we used Hindi as a pivot for the translation of text. In both the cases we had identified the name entities and translated them into English as a preprocessing step. Once this was done, the standard NMT process was followed to train and generate MT outputs for the task. The systems were tested on the hidden dataset of the shared task", }
This paper presents our machine translation system developed for the WAT2024 MultiIndic MT shared task. We built our system for the Sindhi-English language pair and developed two MT systems. The first was our baseline system, in which Sindhi was translated directly into English. In the second system, we used Hindi as a pivot for the translation. In both cases, we identified the named entities and translated them into English as a preprocessing step. Once this was done, the standard NMT process was followed to train the models and generate MT outputs for the task. The systems were tested on the hidden dataset of the shared task.
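To make the pivot setup concrete, here is a minimal sketch of the two-hop translation flow with named-entity preprocessing. The three helper functions are stubs standing in for the authors' NE handling and their two NMT models; they are hypothetical names, not a real API.

```python
# Sketch of the pivot idea: Sindhi -> Hindi -> English, with named entities
# mapped to English before translation. All helpers are placeholder stubs.
def replace_named_entities(text: str) -> str:
    return text   # placeholder for the NE-translation preprocessing step

def translate_sd_hi(text: str) -> str:
    return text   # placeholder for the Sindhi->Hindi NMT model

def translate_hi_en(text: str) -> str:
    return text   # placeholder for the Hindi->English NMT model

def pivot_translate(sindhi_sentence: str) -> str:
    preprocessed = replace_named_entities(sindhi_sentence)
    hindi = translate_sd_hi(preprocessed)   # pivot step
    return translate_hi_en(hindi)           # final step

print(pivot_translate("example Sindhi sentence"))
```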
[ "Joshi, Nisheeth", "Katyayan, Pragya", "Arora, Palak", "Nathani, Bharti" ]
System Description of BV-SLP for Sindhi-English Machine Translation in MultiIndic22MT 2024 Shared Task
wmt-1.72
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.73.bib
https://aclanthology.org/2024.wmt-1.73/
@inproceedings{singh-etal-2024-wmt24, title = "{WMT}24 System Description for the {M}ulti{I}ndic22{MT} Shared Task on {M}anipuri Language", author = "Singh, Ningthoujam Justwant and Singh, Kshetrimayum Boynao and Singh, Ningthoujam Avichandra and Phijam, Sanjita and Singh, Thoudam Doren", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.73", pages = "797--803", abstract = "This paper presents a Transformer-based Neural Machine Translation (NMT) system developed by the Centre for Natural Language Processing and the Department of Computer Science and Engineering at the National Institute of Technology Silchar, India (NITS-CNLP) for the MultiIndic22MT 2024 Shared Task. The system focused on the English-Manipuri language pair for the WMT24 shared task. The proposed WMT system shows a BLEU score of 6.4, a chrF score of 28.6, and a chrF++ score of 26.6 on the public test set Indic-Conv dataset. Further, in the public test set Indic-Gen dataset, it achieved a BLEU score of 8.1, a chrF score of 32.1, and a chrF++ score of 29.4 on the English-to-Manipuri translation.", }
This paper presents a Transformer-based Neural Machine Translation (NMT) system developed by the Centre for Natural Language Processing and the Department of Computer Science and Engineering at the National Institute of Technology Silchar, India (NITS-CNLP) for the MultiIndic22MT 2024 Shared Task. The system focused on the English-Manipuri language pair for the WMT24 shared task. The proposed system achieves a BLEU score of 6.4, a chrF score of 28.6, and a chrF++ score of 26.6 on the Indic-Conv public test set. Further, on the Indic-Gen public test set, it achieves a BLEU score of 8.1, a chrF score of 32.1, and a chrF++ score of 29.4 for English-to-Manipuri translation.
[ "Singh, Ningthoujam Justwant", "Singh, Kshetrimayum Boynao", "Singh, Ningthoujam Avich", "ra", "Phijam, Sanjita", "Singh, Thoudam Doren" ]
WMT24 System Description for the MultiIndic22MT Shared Task on Manipuri Language
wmt-1.73
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.74.bib
https://aclanthology.org/2024.wmt-1.74/
@inproceedings{brahma-etal-2024-nlip, title = "{NLIP}-Lab-{IITH} Multilingual {MT} System for {WAT}24 {MT} Shared Task", author = "Brahma, Maharaj and Sahoo, Pramit and Desarkar, Maunendra Sankar", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.74", pages = "804--809", abstract = "This paper describes NLIP Lab{'}s multilingual machine translation system for the WAT24 shared task on multilingual Indic MT task for 22 scheduled languages belonging to 4 language families. We explore pre-training for Indic languages using alignment agreement objectives. We utilize bi-lingual dictionaries to substitute words from source sentences. Furthermore, we fine-tuned language direction-specific multilingual translation models using small and high-quality seed data. Our primary submission is a 243M parameters multilingual translation model covering 22 Indic languages. In the IN22-Gen benchmark, we achieved an average chrF++ score of 46.80 and 18.19 BLEU score for the En-Indic direction. In the Indic-En direction, we achieved an average chrF++ score of 56.34 and 30.82 BLEU score. In the In22-Conv benchmark, we achieved an average chrF++ score of 43.43 and BLEU score of 16.58 in the En-Indic direction, and in the Indic-En direction, we achieved an average of 52.44 and 29.77 for chrF++ and BLEU respectively. Our model is competitive with IndicTransv1 (474M parameter model).", }
This paper describes NLIP Lab's multilingual machine translation system for the WAT24 shared task on multilingual Indic MT, covering 22 scheduled languages belonging to 4 language families. We explore pre-training for Indic languages using alignment agreement objectives. We utilize bilingual dictionaries to substitute words in source sentences. Furthermore, we fine-tuned language-direction-specific multilingual translation models using small, high-quality seed data. Our primary submission is a 243M-parameter multilingual translation model covering 22 Indic languages. On the IN22-Gen benchmark, we achieved an average chrF++ score of 46.80 and a BLEU score of 18.19 in the En-Indic direction. In the Indic-En direction, we achieved an average chrF++ score of 56.34 and a BLEU score of 30.82. On the IN22-Conv benchmark, we achieved an average chrF++ score of 43.43 and a BLEU score of 16.58 in the En-Indic direction, and in the Indic-En direction we achieved averages of 52.44 and 29.77 for chrF++ and BLEU, respectively. Our model is competitive with IndicTrans v1 (a 474M-parameter model).
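One common reading of "utilize bilingual dictionaries to substitute words in source sentences" is code-switched data augmentation: randomly swapping source words for their dictionary translations. The sketch below shows that idea with a toy dictionary; it is a hedged illustration, not the authors' exact procedure.

```python
# Sketch: dictionary-based word substitution for alignment-style augmentation.
import random

bilingual_dict = {"water": "पानी", "house": "घर"}   # toy en->hi entries

def substitute_words(sentence: str, ratio: float = 0.3, seed: int = 0) -> str:
    rng = random.Random(seed)
    out = []
    for tok in sentence.split():
        if tok.lower() in bilingual_dict and rng.random() < ratio:
            out.append(bilingual_dict[tok.lower()])  # swap in the translation
        else:
            out.append(tok)
    return " ".join(out)

print(substitute_words("the house is near the water"))
```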
[ "Brahma, Maharaj", "Sahoo, Pramit", "Desarkar, Maunendra Sankar" ]
NLIP-Lab-IITH Multilingual MT System for WAT24 MT Shared Task
wmt-1.74
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.75.bib
https://aclanthology.org/2024.wmt-1.75/
@inproceedings{haq-etal-2024-dcu, title = "{DCU} {ADAPT} at {WMT}24: {E}nglish to Low-resource Multi-Modal Translation Task", author = "Haq, Sami and Huidrom, Rudali and Castilho, Sheila", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.75", pages = "810--814", abstract = "This paper presents the system description of {``}DCU{\_}NMT{'}s{''} submission to the WMT-WAT24 English-to-Low-Resource Multimodal Translation Task. We participated in the English-to-Hindi track, developing both text-only and multimodal neural machine translation (NMT) systems. The text-only systems were trained from scratch on constrained data and augmented with back-translated data. For the multimodal approach, we implemented a context-aware transformer model that integrates visual features as additional contextual information. Specifically, image descriptions generated by an image captioning model were encoded using BERT and concatenated with the textual input.The results indicate that our multimodal system, trained solely on limited data, showed improvements over the text-only baseline in both the challenge and evaluation sets, suggesting the potential benefits of incorporating visual information.", }
This paper presents the system description of the "DCU_NMT" submission to the WMT-WAT24 English-to-Low-Resource Multimodal Translation Task. We participated in the English-to-Hindi track, developing both text-only and multimodal neural machine translation (NMT) systems. The text-only systems were trained from scratch on constrained data and augmented with back-translated data. For the multimodal approach, we implemented a context-aware transformer model that integrates visual features as additional contextual information. Specifically, image descriptions generated by an image captioning model were encoded using BERT and concatenated with the textual input. The results indicate that our multimodal system, trained solely on limited data, showed improvements over the text-only baseline on both the challenge and evaluation sets, suggesting the potential benefits of incorporating visual information.
[ "Haq, Sami", "Huidrom, Rudali", "Castilho, Sheila" ]
DCU ADAPT at WMT24: English to Low-resource Multi-Modal Translation Task
wmt-1.75
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.76.bib
https://aclanthology.org/2024.wmt-1.76/
@inproceedings{hatami-etal-2024-english, title = "{E}nglish-to-Low-Resource Translation: A Multimodal Approach for {H}indi, {M}alayalam, {B}engali, and {H}ausa", author = "Hatami, Ali and Banerjee, Shubhanker and Arcan, Mihael and Raja Chakravarthi, Bharathi and Buitelaar, Paul and Philip McCrae, John", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.76", pages = "815--822", abstract = "Multimodal machine translation leverages multiple data modalities to enhance translation quality, particularly for low-resourced languages. This paper uses a Multimodal model that integrates visual information with textual data to improve translation accuracy from English to Hindi, Malayalam, Bengali, and Hausa. This approach employs a gated fusion mechanism to effectively combine the outputs of textual and visual encoders, enabling more nuanced translations that consider both language and contextual visual cues. The performance of the multimodal model was evaluated against the text-only machine translation model based on BLEU, ChrF2 and TER. Experimental results demonstrate that the multimodal approach consistently outperforms the text-only baseline, highlighting the potential of integrating visual information in low-resourced language translation tasks.", }
Multimodal machine translation leverages multiple data modalities to enhance translation quality, particularly for low-resourced languages. This paper uses a multimodal model that integrates visual information with textual data to improve translation accuracy from English to Hindi, Malayalam, Bengali, and Hausa. The approach employs a gated fusion mechanism to effectively combine the outputs of the textual and visual encoders, enabling more nuanced translations that consider both language and contextual visual cues. The performance of the multimodal model was evaluated against a text-only machine translation model using BLEU, chrF2, and TER. Experimental results demonstrate that the multimodal approach consistently outperforms the text-only baseline, highlighting the potential of integrating visual information in low-resourced language translation tasks.
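A gated fusion layer of the kind described here typically learns a sigmoid gate that decides, per dimension, how much of the textual versus visual representation to keep. The PyTorch module below is a minimal sketch of that mechanism; the dimensions are placeholders and the wiring is an assumption rather than the paper's architecture.

```python
# Minimal sketch of a gated fusion layer mixing text and image representations.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, text_dim: int = 512, image_dim: int = 512):
        super().__init__()
        self.image_proj = nn.Linear(image_dim, text_dim)
        self.gate = nn.Linear(2 * text_dim, text_dim)

    def forward(self, text_h: torch.Tensor, image_h: torch.Tensor) -> torch.Tensor:
        # text_h: (batch, seq, text_dim); image_h: (batch, image_dim)
        img = self.image_proj(image_h).unsqueeze(1).expand_as(text_h)
        g = torch.sigmoid(self.gate(torch.cat([text_h, img], dim=-1)))
        return g * text_h + (1.0 - g) * img   # gate decides the text/image mix

fused = GatedFusion()(torch.randn(2, 10, 512), torch.randn(2, 512))
print(fused.shape)   # torch.Size([2, 10, 512])
```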
[ "Hatami, Ali", "Banerjee, Shubhanker", "Arcan, Mihael", "Raja Chakravarthi, Bharathi", "Buitelaar, Paul", "Philip McCrae, John" ]
English-to-Low-Resource Translation: A Multimodal Approach for Hindi, Malayalam, Bengali, and Hausa
wmt-1.76
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.77.bib
https://aclanthology.org/2024.wmt-1.77/
@inproceedings{parida-etal-2024-odiagenais, title = "{O}dia{G}en{AI}{'}s Participation in {WMT}2024 {E}nglish-to-Low Resource Multimodal Translation Task", author = "Parida, Shantipriya and Sahoo, Shashikanta and Sekhar, Sambit and Jena, Upendra and Jena, Sushovan and Lata, Kusum", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.77", pages = "823--828", abstract = "This paper covers the system description of the team {``}ODIAGEN{'}s{''} submission to the WMT{\textasciitilde}2024 English-to-Low-Resource Multimodal Translation Task. We participated in the English-to-Low Resource Multimodal Translation Task, in two of the tasks, i.e. Text-only Translation and Multi-modal Translation. For Text-only Translation, we trained the Mistral-7B model for English to Multi-lingual (Hindi, Bengali, Malayalam, Hausa). For Multi-modal Translation (using both image and text), we trained the PaliGemma-3B model for English to Hindi translation.", }
This paper covers the system description of team "ODIAGEN"'s submission to the WMT 2024 English-to-Low-Resource Multimodal Translation Task. We participated in two of the tracks, i.e., text-only translation and multimodal translation. For text-only translation, we trained the Mistral-7B model for English to multilingual (Hindi, Bengali, Malayalam, Hausa) translation. For multimodal translation (using both image and text), we trained the PaliGemma-3B model for English-to-Hindi translation.
[ "Parida, Shantipriya", "Sahoo, Shashikanta", "Sekhar, Sambit", "Jena, Upendra", "Jena, Sushovan", "Lata, Kusum" ]
OdiaGenAI's Participation in WMT2024 English-to-Low Resource Multimodal Translation Task
wmt-1.77
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.78.bib
https://aclanthology.org/2024.wmt-1.78/
@inproceedings{ahmad-etal-2024-arewa, title = "Arewa {NLP}{'}s Participation at {WMT}24", author = "Ahmad, Mahmoud and Khalid, Auwal and Aliyu, Lukman and Sani, Babangida and Abdullahi, Mariya", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.78", pages = "829--832", abstract = "This paper presents the work of our team, {``}ArewaNLP,{''} for the WMT 2024 shared task. The paper describes the system submitted to the Ninth Conference on Machine Translation (WMT24). We participated in the English-Hausa text-only translation task. We fine-tuned the OPUS-MT-en-ha transformer model and our submission achieved competitive results in this task. We achieve a BLUE score of 27.76, 40.31 and 5.85 on the Development Test, Evaluation Test and Challenge Test respectively.", }
This paper presents the work of our team, "ArewaNLP," for the WMT 2024 shared task. The paper describes the system submitted to the Ninth Conference on Machine Translation (WMT24). We participated in the English-Hausa text-only translation task. We fine-tuned the OPUS-MT-en-ha transformer model, and our submission achieved competitive results in this task. We achieve BLEU scores of 27.76, 40.31, and 5.85 on the Development Test, Evaluation Test, and Challenge Test, respectively.
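For reference, the base OPUS-MT English-Hausa checkpoint can be loaded directly through the Transformers pipeline, as in the hedged snippet below (shown for inference only; the Hub model id is assumed to be Helsinki-NLP/opus-mt-en-ha, and fine-tuning would follow a standard seq2seq training loop on the task data).

```python
# Sketch: inference with the base OPUS-MT en-ha model via the pipeline API.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-ha")
print(translator("Good morning, how are you?", max_length=64)[0]["translation_text"])
```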
[ "Ahmad, Mahmoud", "Khalid, Auwal", "Aliyu, Lukman", "Sani, Babangida", "Abdullahi, Mariya" ]
Arewa NLP's Participation at WMT24
wmt-1.78
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.79.bib
https://aclanthology.org/2024.wmt-1.79/
@inproceedings{rajpoot-etal-2024-multimodal, title = "Multimodal Machine Translation for Low-Resource {I}ndic Languages: A Chain-of-Thought Approach Using Large Language Models", author = "Rajpoot, Pawan and Bhat, Nagaraj and Shrivastava, Ashish", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.79", pages = "833--838", abstract = "This paper presents the approach and results of team v036 in the English-to-Low-Resource Multi-Modal Translation Task at the Ninth Conference on Machine Translation (WMT24). Our team tackled the challenge of translating English source text to low-resource Indic languages, specifically Hindi, Malayalam, and Bengali, while leveraging visual context provided alongside the text data. We used InternVL2 for extracting the image context along with Knowledge Distillation from bigger LLMs to train Small Language Model on the tranlsation task. During current shared task phase, we submitted best models (for this task), and overall we got rank 3 on Hindi, Bengali, and Malyalam datasets. We also open source our models on huggingface.", }
This paper presents the approach and results of team v036 in the English-to-Low-Resource Multi-Modal Translation Task at the Ninth Conference on Machine Translation (WMT24). Our team tackled the challenge of translating English source text to low-resource Indic languages, specifically Hindi, Malayalam, and Bengali, while leveraging the visual context provided alongside the text data. We used InternVL2 to extract the image context and knowledge distillation from bigger LLMs to train a small language model on the translation task. During the current shared task phase, we submitted our best models for this task and ranked 3rd overall on the Hindi, Bengali, and Malayalam datasets. We also open-source our models on Hugging Face.
[ "Rajpoot, Pawan", "Bhat, Nagaraj", "Shrivastava, Ashish" ]
Multimodal Machine Translation for Low-Resource Indic Languages: A Chain-of-Thought Approach Using Large Language Models
wmt-1.79
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.80.bib
https://aclanthology.org/2024.wmt-1.80/
@inproceedings{khan-etal-2024-chitranuvad, title = "Chitranuvad: Adapting Multi-lingual {LLM}s for Multimodal Translation", author = "Khan, Shaharukh and Tarun, Ayush and Faraz, Ali and Kamble, Palash and Dahiya, Vivek and Pokala, Praveen and Kulkarni, Ashish and Khatri, Chandra and Ravi, Abhinav and Agarwal, Shubham", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.80", pages = "839--851", abstract = "In this work, we provide the system description of our submission as part of the English-to-Lowres Multimodal Translation Task at theWorkshop on Asian Translation (WAT2024). We introduce Chitranuvad, a multimodal model that effectively integrates Multilingual LLMand a vision module for Multimodal Translation. Our method uses a ViT image encoder to extract visual representations as visual tokenembeddings which are projected to the LLM space by an adapter layer and generates translation in an autoregressive fashion. We participated in all the three tracks (Image Captioning, Text-only and Multimodal translationtasks) for Indic languages (ie. English translation to Hindi, Bengali and Malyalam) and achieved SOTA results for Hindi in all of themon the Challenge set while remaining competitive for the other languages in the shared task.", }
In this work, we provide the system description of our submission to the English-to-Lowres Multimodal Translation Task at the Workshop on Asian Translation (WAT2024). We introduce Chitranuvad, a multimodal model that effectively integrates a multilingual LLM and a vision module for multimodal translation. Our method uses a ViT image encoder to extract visual representations as visual token embeddings, which are projected to the LLM space by an adapter layer, and generates the translation in an autoregressive fashion. We participated in all three tracks (image captioning, text-only, and multimodal translation) for Indic languages (i.e., English translation to Hindi, Bengali, and Malayalam) and achieved SOTA results for Hindi in all of them on the Challenge set, while remaining competitive for the other languages in the shared task.
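The adapter idea described here, projecting ViT patch embeddings into the LLM's embedding space and feeding them alongside text tokens, can be sketched as below. Dimensions, the MLP shape, and the simple prepend strategy are illustrative assumptions rather than the actual Chitranuvad configuration.

```python
# Sketch: project ViT patch embeddings into an LLM embedding space and
# prepend them to the text token embeddings.
import torch
import torch.nn as nn

class VisualAdapter(nn.Module):
    def __init__(self, vit_dim: int = 768, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vit_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, patch_embeds: torch.Tensor) -> torch.Tensor:
        # (batch, num_patches, vit_dim) -> (batch, num_patches, llm_dim)
        return self.proj(patch_embeds)

adapter = VisualAdapter()
visual_tokens = adapter(torch.randn(1, 196, 768))     # ViT-B/16-style patches
text_tokens = torch.randn(1, 32, 4096)                # LLM token embeddings
llm_input = torch.cat([visual_tokens, text_tokens], dim=1)  # prepend image tokens
print(llm_input.shape)   # torch.Size([1, 228, 4096])
```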
[ "Khan, Shaharukh", "Tarun, Ayush", "Faraz, Ali", "Kamble, Palash", "Dahiya, Vivek", "Pokala, Praveen", "Kulkarni, Ashish", "Khatri, Ch", "ra", "Ravi, Abhinav", "Agarwal, Shubham" ]
Chitranuvad: Adapting Multi-lingual LLMs for Multimodal Translation
wmt-1.80
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.81.bib
https://aclanthology.org/2024.wmt-1.81/
@inproceedings{betala-chokshi-2024-brotherhood, title = "Brotherhood at {WMT} 2024: Leveraging {LLM}-Generated Contextual Conversations for Cross-Lingual Image Captioning", author = "Betala, Siddharth and Chokshi, Ishan", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.81", pages = "852--861", abstract = "In this paper, we describe our system under the team name Brotherhood for the English-to-Lowres Multi-Modal Translation Task. We participate in the multi-modal translation tasks for English-Hindi, English-Hausa, English-Bengali, and English-Malayalam language pairs. We present a method leveraging multi-modal Large Language Models (LLMs), specifically GPT-4o and Claude 3.5 Sonnet, to enhance cross-lingual image captioning without traditional training or fine-tuning.Our approach utilizes instruction-tuned prompting to generate rich, contextual conversations about cropped images, using their English captions as additional context. These synthetic conversations are then translated into the target languages. Finally, we employ a weighted prompting strategy, balancing the original English caption with the translated conversation to generate captions in the target language.This method achieved competitive results, scoring 37.90 BLEU on the English-Hindi Challenge Set and ranking first and second for English-Hausa on the Challenge and Evaluation Leaderboards, respectively. We conduct additional experiments on a subset of 250 images, exploring the trade-offs between BLEU scores and semantic similarity across various weighting schemes.", }
In this paper, we describe our system under the team name Brotherhood for the English-to-Lowres Multi-Modal Translation Task. We participate in the multi-modal translation tasks for the English-Hindi, English-Hausa, English-Bengali, and English-Malayalam language pairs. We present a method leveraging multi-modal Large Language Models (LLMs), specifically GPT-4o and Claude 3.5 Sonnet, to enhance cross-lingual image captioning without traditional training or fine-tuning. Our approach utilizes instruction-tuned prompting to generate rich, contextual conversations about cropped images, using their English captions as additional context. These synthetic conversations are then translated into the target languages. Finally, we employ a weighted prompting strategy, balancing the original English caption with the translated conversation to generate captions in the target language. This method achieved competitive results, scoring 37.90 BLEU on the English-Hindi Challenge Set and ranking first and second for English-Hausa on the Challenge and Evaluation Leaderboards, respectively. We conduct additional experiments on a subset of 250 images, exploring the trade-offs between BLEU scores and semantic similarity across various weighting schemes.
[ "Betala, Siddharth", "Chokshi, Ishan" ]
Brotherhood at WMT 2024: Leveraging LLM-Generated Contextual Conversations for Cross-Lingual Image Captioning
wmt-1.81
Poster
2409.15052
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
0
https://aclanthology.org/2024.wmt-1.82.bib
https://aclanthology.org/2024.wmt-1.82/
@inproceedings{mutal-ormaechea-2024-tim, title = "{TIM}-{UNIGE} Translation into Low-Resource Languages of {S}pain for {WMT}24", author = "Mutal, Jonathan and Ormaechea, Luc{\'\i}a", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.82", pages = "862--870", abstract = "We present the results of our constrained submission to the WMT 2024 shared task, which focuses on translating from Spanish into two low-resource languages of Spain: Aranese (spa-arn) and Aragonese (spa-arg). Our system integrates real and synthetic data generated by large language models (e.g., BLOOMZ) and rule-based Apertium translation systems. Built upon the pre-trained NLLB system, our translation model utilizes a multistage approach, progressively refining the initial model through the sequential use of different datasets, starting with large-scale synthetic or crawled data and advancing to smaller, high-quality parallel corpora. This approach resulted in BLEU scores of 30.1 for Spanish to Aranese and 61.9 for Spanish to Aragonese.", }
We present the results of our constrained submission to the WMT 2024 shared task, which focuses on translating from Spanish into two low-resource languages of Spain: Aranese (spa-arn) and Aragonese (spa-arg). Our system integrates real and synthetic data generated by large language models (e.g., BLOOMZ) and rule-based Apertium translation systems. Built upon the pre-trained NLLB system, our translation model utilizes a multistage approach, progressively refining the initial model through the sequential use of different datasets, starting with large-scale synthetic or crawled data and advancing to smaller, high-quality parallel corpora. This approach resulted in BLEU scores of 30.1 for Spanish to Aranese and 61.9 for Spanish to Aragonese.
[ "Mutal, Jonathan", "Ormaechea, Luc{\\'\\i}a" ]
TIM-UNIGE Translation into Low-Resource Languages of Spain for WMT24
wmt-1.82
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.83.bib
https://aclanthology.org/2024.wmt-1.83/
@inproceedings{oliver-2024-tan, title = "{TAN}-{IBE} Participation in the Shared Task: Translation into Low-Resource Languages of {S}pain", author = "Oliver, Antoni", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.83", pages = "871--877", abstract = "This paper describes the systems presented by the TAN-IBE team into the WMT24 Shared task Translation into Low-Resource Languages of Spain. The aim of this joint task was to train systems for Spanish-Asturian, Spanish-Aragonese and Spanish-Aranesian. Our team presented systems for all three language pairs and for two types of submission: for Spanish-Aragonese and Spanish-Aranese we participated with constrained submissions, and for Spanish-Asturian with an open submission.", }
This paper describes the systems submitted by the TAN-IBE team to the WMT24 shared task on Translation into Low-Resource Languages of Spain. The aim of this shared task was to train systems for Spanish-Asturian, Spanish-Aragonese, and Spanish-Aranese. Our team presented systems for all three language pairs and for two types of submission: for Spanish-Aragonese and Spanish-Aranese we participated with constrained submissions, and for Spanish-Asturian with an open submission.
[ "Oliver, Antoni" ]
TAN-IBE Participation in the Shared Task: Translation into Low-Resource Languages of Spain
wmt-1.83
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.84.bib
https://aclanthology.org/2024.wmt-1.84/
@inproceedings{garcia-2024-enhaced, title = "Enhaced Apertium System: Translation into Low-Resource Languages of {S}pain {S}panish{--}{A}sturian", author = "Garc{\'\i}a, Sof{\'\i}a", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.84", pages = "878--884", abstract = "We present the Spanish{--}Asturian Apertium translation system, which has been enhanced and refined by our team of linguists for the shared task: Low Resource Languages of Spain of this WMT24 under the closed submission. While our system did not rank among the top 10 in terms of results, we believe that Apertium{'}s translations are of a commendable standard and demonstrate competitiveness with respect to the other systems.", }
We present the Spanish-Asturian Apertium translation system, which has been enhanced and refined by our team of linguists for the WMT24 shared task on Low-Resource Languages of Spain under the closed submission track. While our system did not rank among the top 10 in terms of results, we believe that Apertium's translations are of a commendable standard and are competitive with the other systems.
[ "Garc{\\'\\i}a, Sof{\\'\\i}a" ]
Enhaced Apertium System: Translation into Low-Resource Languages of Spain Spanish–Asturian
wmt-1.84
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.85.bib
https://aclanthology.org/2024.wmt-1.85/
@inproceedings{galiano-jimenez-etal-2024-universitat, title = "{U}niversitat d{'}Alacant{'}s Submission to the {WMT} 2024 Shared Task on Translation into Low-Resource Languages of {S}pain", author = "Galiano Jimenez, Aaron and S{\'a}nchez-Cartagena, V{\'\i}ctor M. and Perez-Ortiz, Juan Antonio and S{\'a}nchez-Mart{\'\i}nez, Felipe", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.85", pages = "885--891", abstract = "This paper describes the submissions of the Transducens group of the Universitat d{'}Alacant to the WMT 2024 Shared Task on Translation into Low-Resource Languages of Spain; in particular, the task focuses on the translation from Spanish into Aragonese, Aranese and Asturian. Our submissions use parallel and monolingual data to fine-tune the NLLB-1.3B model and to investigate the effectiveness of synthetic corpora and transfer-learning between related languages such as Catalan, Galician and Valencian. We also present a many-to-many multilingual neural machine translation model focused on the Romance languages of Spain.", }
This paper describes the submissions of the Transducens group of the Universitat d{'}Alacant to the WMT 2024 Shared Task on Translation into Low-Resource Languages of Spain; in particular, the task focuses on the translation from Spanish into Aragonese, Aranese and Asturian. Our submissions use parallel and monolingual data to fine-tune the NLLB-1.3B model and to investigate the effectiveness of synthetic corpora and transfer-learning between related languages such as Catalan, Galician and Valencian. We also present a many-to-many multilingual neural machine translation model focused on the Romance languages of Spain.
[ "Galiano Jimenez, Aaron", "S{\\'a}nchez-Cartagena, V{\\'\\i}ctor M.", "Perez-Ortiz, Juan Antonio", "S{\\'a}nchez-Mart{\\'\\i}nez, Felipe" ]
Universitat d'Alacant's Submission to the WMT 2024 Shared Task on Translation into Low-Resource Languages of Spain
wmt-1.85
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.86.bib
https://aclanthology.org/2024.wmt-1.86/
@inproceedings{velasco-etal-2024-samsung, title = "{S}amsung {R}{\&}{D} Institute {P}hilippines @ {WMT} 2024 Low-resource Languages of {S}pain Shared Task", author = "Velasco, Dan John and Rufino, Manuel Antonio and Cruz, Jan Christian Blaise", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.86", pages = "892--900", abstract = "This paper details the submission of Samsung R{\&}D Institute Philippines (SRPH) Language Intelligence Team (LIT) to the WMT 2024 Low-resource Languages of Spain shared task. We trained translation models for Spanish to Aragonese, Spanish to Aranese/Occitan, and Spanish to Asturian using a standard sequence-to-sequence Transformer architecture, augmenting it with a noisy-channel reranking strategy to select better outputs during decoding. For Spanish to Asturian translation, our method reaches comparable BLEU scores to a strong commercial baseline translation system using only constrained data, backtranslations, noisy channel reranking, and a shared vocabulary spanning all four languages.", }
This paper details the submission of Samsung R{\&}D Institute Philippines (SRPH) Language Intelligence Team (LIT) to the WMT 2024 Low-resource Languages of Spain shared task. We trained translation models for Spanish to Aragonese, Spanish to Aranese/Occitan, and Spanish to Asturian using a standard sequence-to-sequence Transformer architecture, augmenting it with a noisy-channel reranking strategy to select better outputs during decoding. For Spanish to Asturian translation, our method reaches comparable BLEU scores to a strong commercial baseline translation system using only constrained data, backtranslations, noisy channel reranking, and a shared vocabulary spanning all four languages.
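The noisy-channel reranking mentioned here rescores n-best candidates y for a source x by combining a direct model P(y|x), a channel model P(x|y), and a target-side language model P(y). The sketch below shows that interpolation with stub scoring functions; the stubs and weights are placeholders, not the SRPH implementation.

```python
# Sketch: noisy-channel reranking of an n-best list with stub model scores.
def direct_logprob(src: str, hyp: str) -> float:    # log P(y|x), stub
    return -len(hyp) * 0.1

def channel_logprob(src: str, hyp: str) -> float:   # log P(x|y), stub
    return -len(src) * 0.1

def lm_logprob(hyp: str) -> float:                  # log P(y), stub
    return -len(hyp.split()) * 0.5

def rerank(src: str, nbest: list[str], lam1: float = 1.0,
           lam2: float = 0.5, lam3: float = 0.3) -> str:
    def score(hyp: str) -> float:
        return (lam1 * direct_logprob(src, hyp)
                + lam2 * channel_logprob(src, hyp)
                + lam3 * lm_logprob(hyp))
    return max(nbest, key=score)   # pick the best weighted combination

print(rerank("una frase", ["candidate one", "a second candidate"]))
```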
[ "Velasco, Dan John", "Rufino, Manuel Antonio", "Cruz, Jan Christian Blaise" ]
Samsung R&D Institute Philippines @ WMT 2024 Low-resource Languages of Spain Shared Task
wmt-1.86
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.87.bib
https://aclanthology.org/2024.wmt-1.87/
@inproceedings{velayuthan-etal-2024-back, title = "Back to the Stats: Rescuing Low Resource Neural Machine Translation with Statistical Methods", author = "Velayuthan, Menan and Jayakody, Dilith and De Silva, Nisansa and Fernando, Aloka and Ranathunga, Surangika", editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.87", pages = "901--907", abstract = "This paper describes our submission to the WMT24 shared task for Low-Resource Languages of Spain in the Constrained task category. Due to the lack of deep learning-based data filtration methods for these languages, we propose a purely statistical-based, two-stage pipeline for data filtration. In the primary stage, we begin by removing spaces and punctuation from the source sentences (Spanish) and deduplicating them. We then filter out sentence pairs with inconsistent language predictions by the language identification model, followed by the removal of pairs with anomalous sentence length and word count ratios, using the development set statistics as the threshold. In the secondary stage, for corpora of significant size, we employ a Jensen Shannon divergence-based method to curate training data of the desired size. Our filtered data allowed us to complete a two-step training process in under 3 hours, with GPU power consumption kept below 1 kWh, making our system both economical and eco-friendly. The source code, training data, and best models are available on the project{'}s GitHub page.", }
This paper describes our submission to the WMT24 shared task for Low-Resource Languages of Spain in the constrained task category. Due to the lack of deep-learning-based data filtration methods for these languages, we propose a purely statistics-based, two-stage pipeline for data filtration. In the primary stage, we begin by removing spaces and punctuation from the source sentences (Spanish) and deduplicating them. We then filter out sentence pairs with inconsistent language predictions from the language identification model, followed by the removal of pairs with anomalous sentence length and word count ratios, using the development set statistics as thresholds. In the secondary stage, for corpora of significant size, we employ a Jensen-Shannon divergence-based method to curate training data of the desired size. Our filtered data allowed us to complete a two-step training process in under 3 hours, with GPU power consumption kept below 1 kWh, making our system both economical and eco-friendly. The source code, training data, and best models are available on the project's GitHub page.
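As a rough, hedged illustration of this kind of statistical pipeline, the sketch below applies toy length and word-count ratio thresholds and then ranks the surviving pairs by the Jensen-Shannon distance between each target sentence's character distribution and the development set's, using scipy. The thresholds, toy data, and character-level featurization are illustrative assumptions, not the authors' exact recipe.

```python
# Sketch: ratio-based filtering followed by Jensen-Shannon-based data curation.
from collections import Counter
from scipy.spatial.distance import jensenshannon

def char_dist(texts, vocab):
    counts = Counter("".join(texts))
    total = sum(counts[c] for c in vocab) or 1
    return [counts[c] / total for c in vocab]

def ratio_ok(src, tgt, max_len_ratio=2.0, max_word_ratio=2.0):
    len_r = max(len(src), len(tgt)) / max(1, min(len(src), len(tgt)))
    w_r = (max(len(src.split()), len(tgt.split()))
           / max(1, min(len(src.split()), len(tgt.split()))))
    return len_r <= max_len_ratio and w_r <= max_word_ratio

dev_tgt = ["una oracion de desarrollo", "otra frase corta"]   # toy dev set
corpus = [("a source line", "una frase fuente"),
          ("another line", "otra linea bastante mas larga que la fuente")]

vocab = sorted(set("".join(dev_tgt)))
dev_p = char_dist(dev_tgt, vocab)

filtered = [(s, t) for s, t in corpus if ratio_ok(s, t)]
ranked = sorted(filtered,
                key=lambda p: jensenshannon(char_dist([p[1]], vocab), dev_p))
print(ranked[:1])   # keep the top-k closest pairs as training data
```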
[ "Velayuthan, Menan", "Jayakody, Dilith", "De Silva, Nisansa", "Fern", "o, Aloka", "Ranathunga, Surangika" ]
Back to the Stats: Rescuing Low Resource Neural Machine Translation with Statistical Methods
wmt-1.87
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1
https://aclanthology.org/2024.wmt-1.88.bib
https://aclanthology.org/2024.wmt-1.88/
@inproceedings{de-gibert-etal-2024-hybrid, title = "Hybrid Distillation from {RBMT} and {NMT}: {H}elsinki-{NLP}{'}s Submission to the Shared Task on Translation into Low-Resource Languages of {S}pain", author = {De Gibert, Ona and Aulamo, Mikko and Scherrer, Yves and Tiedemann, J{\"o}rg}, editor = "Haddow, Barry and Kocmi, Tom and Koehn, Philipp and Monz, Christof", booktitle = "Proceedings of the Ninth Conference on Machine Translation", month = nov, year = "2024", address = "Miami, Florida, USA", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.wmt-1.88", pages = "908--917", abstract = "The Helsinki-NLP team participated in the 2024 Shared Task on Translation into Low-Resource languages of Spain with four multilingual systems covering all language pairs. The task consists in developing Machine Translation (MT) models to translate from Spanish into Aragonese, Aranese and Asturian. Our models leverage known approaches for multilingual MT, namely, data filtering, fine-tuning, data tagging, and distillation. We use distillation to merge the knowledge from neural and rule-based systems and explore the trade-offs between translation quality and computational efficiency. We demonstrate that our distilled models can achieve competitive results while significantly reducing computational costs. Our best models ranked 4th, 5th, and 2nd in the open submission track for Spanish{--}Aragonese, Spanish{--}Aranese, and Spanish{--}Asturian, respectively. We release our code and data publicly at https://github.com/Helsinki-NLP/lowres-spain-st.", }
The Helsinki-NLP team participated in the 2024 Shared Task on Translation into Low-Resource Languages of Spain with four multilingual systems covering all language pairs. The task consists of developing Machine Translation (MT) models to translate from Spanish into Aragonese, Aranese, and Asturian. Our models leverage known approaches for multilingual MT, namely data filtering, fine-tuning, data tagging, and distillation. We use distillation to merge the knowledge from neural and rule-based systems and explore the trade-offs between translation quality and computational efficiency. We demonstrate that our distilled models can achieve competitive results while significantly reducing computational costs. Our best models ranked 4th, 5th, and 2nd in the open submission track for Spanish-Aragonese, Spanish-Aranese, and Spanish-Asturian, respectively. We release our code and data publicly at https://github.com/Helsinki-NLP/lowres-spain-st.
[ "De Gibert, Ona", "Aulamo, Mikko", "Scherrer, Yves", "Tiedemann, J{\\\"o}rg" ]
Hybrid Distillation from RBMT and NMT: Helsinki-NLP's Submission to the Shared Task on Translation into Low-Resource Languages of Spain
wmt-1.88
Poster
[ "" ]
-1
-1
-1
-1
[]
[]
[]
[]
[]
[]
1