Columns: paper_id (string, 17-19 chars), venue (string, 2 classes), focused_review (string, 504-6.11k chars), point (string, 136-567 chars)
ARR_2022_60_review
ARR_2022
- Underdefined and conflated concepts - Several important details missing - Lack of clarity in how the datasets were curated prevents one from assessing their validity - Too many results which are not fully justified or explained This is a very important, interesting, and valuable paper with many positives. First and foremost, annotators’ backgrounds are an important factor and should be taken into consideration when designing datasets for hate speech, toxicity, or related phenomena. The paper not only accounts for demographic variables, as done in previous work, but also for other well-chosen attitudinal covariates, like attitude towards free speech. The paper presents two well-thought-out experiments and reports the results in a clear manner, with several important findings. It is precisely because of the great potential and impact of this paper that I think the current manuscript requires more consideration and fine-tuning before it can reach its final stage. At this point, there seems to be a lack of important details that prevents me from fully gauging the paper’s findings and claims. Generally: - There were too many missing details (for example, what is the distribution of people with ‘free of speech’ attitudes? What is the correlation of the chosen scale item in the breadth-of-posts study?). On a minor note, many important points are relegated to the appendix. - Certain researcher choices and experiment design choices were not justified (for example, why were these particular scales used?) - The explanation of the creation of the breadth-of-posts dataset was confusing. How accurate was the classification of AAE dialect and vulgarity? - The toxicity experiment was intriguing but was given too little space to be meaningful. More concretely, - With regard to terminology and concepts, toxicity and hate speech may be related but are not the same thing. The instructions to the annotators seem to conflate both. The paper also doesn’t present a concrete definition of either. While it might seem redundant or trivial, the wording given to annotators plays an important role and can confound the results presented here. - Why were these particular scales chosen for obtaining attitudes? In particular, for empathy there are several scale items [1], so why choose the Interpersonal Reactivity Index? - What was the distribution of the annotators’ backgrounds with respect to the attitudes? For example, if there are too few ‘free of speech’ annotators, then the results shown in Tables 3, 4, etc. are underpowered. - What were the correlations of the chosen attitudinal scale item for the breadth-of-posts study with the toxicity in the breadth-of-workers study? - How accurate is the automated classification in the breadth-of-posts experiment, i.e., how well does the stated technique differentiate identity vs. non-identity vulgarity or AAE language for that particular dataset? In particular, how can it be ascertained whether the n-word was used as a reclaimed slur or not? - Along the same lines, Section 6 discusses perceptions of vulgarity, but there are too many confounds here. Using b*tch in a sentence can be an indication of both vulgarity and toxicity (due to sexism). - In my opinion, the Perspective API experiment was interesting but rather shallow. My suggestion would be to follow up on it in more detail in a new paper rather than include it in this one. The newly created space could be used to add the missing details mentioned in the review. 
- Finally, given that the paper notes that MTurk tends to be predominantly liberal and that the authors (commendably) took several steps to ensure greater participation from conservatives, I was wondering whether ‘typical’ hate speech datasets are annotated by more homogeneous annotators compared to the sample in this paper. What could be the implications of this? Do this paper's findings then hold for existing hate speech datasets? Besides these, I also note some ethical issues in the ‘Ethical Concerns’ section. To conclude, while my rating might seem quite harsh, I believe this work has great potential and I hope to see it enriched with the required experimental details. References: [1] Gerdes, Karen E., Cynthia A. Lietz, and Elizabeth A. Segal. "Measuring empathy in the 21st century: Development of an empathy index rooted in social cognitive neuroscience and social justice." Social Work Research 35, no. 2 (2011): 83-93.
- There were too many missing details (for example, what is the distribution of people with ‘free of speech’ attitudes? What is the correlation of the chosen scale item in the breadth-of-posts study?). On a minor note, many important points are relegated to the appendix.
ACL_2017_333_review
ACL_2017
There are a few details on the implementation and on the systems to which the authors compared their work that need to be better explained. - General Discussion: - Major review: - I wonder if the summaries obtained using the proposed methods are indeed abstractive. I understand that the target vocabulary is built from the words which appear in the summaries in the training data. But given the example shown in Figure 4, I have the impression that the summaries are rather extractive. The authors should choose a better example for Figure 4 and give some statistics on the number of words in the output sentences which were not present in the input sentences, for all test sets. - page 2, lines 266-272: I understand the mathematical difference between the vector hi and s, but I still have the feeling that there is a great overlap between them. Both "represent the meaning". Are both indeed necessary? Did you try using only one of them? - Which neural network library did the authors use for implementing the system? There are no details on the implementation. - page 5, section 4.4: Which training data was used for each of the systems that the authors compare to? Did you train any of them yourselves? - Minor review: - page 1, line 44: Although the difference between abstractive and extractive summarization is described in section 2, this could be moved to the introduction section. At this point, some readers might not be familiar with this concept. - page 1, lines 93-96: please provide a reference for this passage: "This approach achieves huge success in tasks like neural machine translation, where alignment between all parts of the input and output are required." - page 2, section 1, last paragraph: The contribution of the work is clear but I think the authors should emphasize that such a selective encoding model has never been proposed before (is this true?). Further, the related work section should be moved to before the methods section. - Figure 1 vs. Table 1: the authors show two examples for abstractive summarization but I think that just one of them is enough. Further, one is called a figure while the other a table. - Section 3.2, lines 230-234 and 234-235: please provide references for the following two passages: "In the sequence-to-sequence machine translation (MT) model, the encoder and decoder are responsible for encoding input sentence information and decoding the sentence representation to generate an output sentence"; "Some previous works apply this framework to summarization generation tasks." - Figure 2: What is "MLP"? It seems not to be described in the paper. - page 3, lines 289-290: the sigmoid function and the element-wise multiplication are not defined for the formulas in section 3.1. - page 4, first column: many elements of the formulas are not defined: b (equation 11), W (equations 12, 15, 17), U (equations 12, 15), and V (equation 15). - page 4, line 326: the readout state rt is not depicted in Figure 2 (workflow). - Table 2: what does "#(ref)" mean? - Section 4.3, model parameters and training: Explain how you arrived at the values for the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size. - Page 5, line 450: remove the word "the" in this line: "SGD as our optimizing algorithms" instead of "SGD as our the optimizing algorithms." - Page 5, beam search: please include a reference for beam search. - Figure 4: Is there a typo in the true sentence? "council of europe again slams french prison conditions" (again or against?) 
- typo "supper script" -> "superscript" (4 times)
- Section 4.3, model parameters and training: Explain how you arrived at the values for the many parameters: word embedding size, GRU hidden states, alpha, beta 1 and 2, epsilon, beam size.
ARR_2022_269_review
ARR_2022
- The novelty of the proposed methods is not clear to me. - The proposed method relies on the quality of translation systems. - I'm not sure whether the differences in some results are significant (see Table 1). - The differences in results in Table 2 are so small that they make the interpretation of results rather difficult. Furthermore, it is then unclear which proposed methods are really effective. - Did the authors run their experiments several times with different random initializations?
- The differences in results in Table 2 are so small that they make the interpretation of results rather difficult. Furthermore, it is then unclear which proposed methods are really effective.
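A minimal sketch of the significance check the reviewer asks for above: a paired bootstrap over per-example scores from two systems. The scores, score scale, and test-set size below are invented placeholders, not the paper's data.

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_boot=10000, seed=0):
    """Fraction of bootstrap resamples of the test set in which system A has a
    higher mean score than system B (values near 1.0 or 0.0 suggest the observed
    difference is unlikely to be resampling noise)."""
    rng = np.random.default_rng(seed)
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    idx = rng.integers(0, len(a), size=(n_boot, len(a)))   # resampled test-set indices
    diffs = a[idx].mean(axis=1) - b[idx].mean(axis=1)
    return float((diffs > 0).mean())

# Invented per-example scores for two systems on the same 500 test items.
rng = np.random.default_rng(1)
scores_a = rng.normal(0.62, 0.05, size=500)
scores_b = scores_a - rng.normal(0.003, 0.05, size=500)
print("P(A > B over resamples):", paired_bootstrap(scores_a, scores_b))
```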
ARR_2022_331_review
ARR_2022
- While the language has been improved, there are still some awkward phrases. I suggest the authors have the paper reviewed by a native English speaker. 1) Line 29: "To support the GEC study...". Your study or GEC in general? Maybe you mean "To support GEC research/development/solutions"? 2) Line 53: "Because, obviously, there are usually multiple acceptable references with close meanings for an incorrect sentence, as illustrated by the example in Table 1." This is not a well-formed sentence. Rewrite or attach to the previous one. 3) Line 59: Choose either "the model will be unfairly *penalised*" or "*performance* will be unfairly underestimated". 4) Line 83: "... for detailed illustration". ??? 5) Line 189: "To facilitate illustration, our guidelines adopt a two-tier hierarchical error taxonomy..." You said earlier that you adopted the "direct re-rewriting" approach, so why do your annotation guidelines provide a taxonomy of errors? Is it just to make sure that all occurrences of the same types of errors are handled equally by all annotators? Weren't they free to correct the sentences in any way they wanted as you stated in lines 178-180? 6) Line 264: "We attribute this to our strict control of the over-correction phenomenon." What do you mean exactly? The original annotation considered some sentences to be erroneous while your guidelines did not? 7) Line 310: "Since it is usually ..." This is not a well-formed sentence. Rewrite or attach to the previous one. 8) Line 399: "... and only use the erroneous part for training" Do you mean you discard correct sentences? As it stands, it sounds as if you only kept the incorrect sentences without their corrections. You might want to make this clearer. 9) "... which does not need error-coded annotation". This is not exactly so. ERRANT computes P, R and F from M2 files containing span-level annotations. For English, it is able to automatically generate these annotations from parallel text using an alignment and edit extraction algorithm. In the case of Chinese, you did this yourself. So while it is not necessary to manually annotate the error spans, you do need to extract them somehow before ERRANT can compute the measures. 10) Table 5: "For calculating the human performance, each submitted result is considered as a sample if an annotator submits multiple results." I am afraid this does not clearly explain how human performance was computed. Each annotator against the rest? Averaged across all of them? How are multiple corrections from a single annotator handled? If you compared each annotation to the rest but the systems were compared to all the annotations, then I believe human evaluation is an underestimation. This is still not clear. 11) Line 514: "The word-order errors can be identified by heuristic rules following Hinson et al. (2020)." Did you classify the errors in the M2 files before feeding them into ERRANT? 12) Line 544: "... we remove all extra references if a sentence has more than 2 gold-standard references". Do you remove them randomly or sequentially? 13) TYPOS/ERRORS: "them" -> "this approach"? (line 72), "both formal/informal" -> "both formal and informal" (line 81), "supplement" -> "supply"? (lines 89, 867), "from total" -> "from *the* total" (line 127), "Finally, we have obtained 7,137 sentences" -> "In the end, we obtained 7,137 sentences" (line 138), "suffers from" -> "poses" (line 155), "illustration" -> "annotation"? 
( line 189), "Golden" -> "Gold" (lines 212, 220, 221, 317), "sentence numbers" -> "number of sentences" (Table 3 caption), "numbers (proportion)" -> "number (proportion)" (Table 3 caption), "averaged character numbers" -> "average number of characters" (Table 3 caption), "averaged edit numbers" -> "average number of edits" (Table 3 caption), "averaged reference numbers" -> "average number of references" (Table 3 caption), "in the parenthesis of the..." -> "in parentheses in the..." (Table 3 caption), "previous" -> "original"? ( line 262), "use" (delete, line 270), "in *the* re-annotated" (line 271), "twice of that" -> "twice that" (line 273), "edit number" -> "number of edits" (line 281), "the sentence length" -> "sentence length" (line 282), "numbers" -> "number" (lines 283, 297), "numbers" -> "the number" (line 295), "Same" -> "Identical" (line 298), "calculated" -> "counted" (line 299), "the different" -> "different" (Figure 1 caption), "reference number" -> "number of references" (line 305), "for" -> "to" (line 307), "the descending" -> "descending" (line 326), "sentence numbers" -> "number of sentences" (line 327), "It" -> "This" (line 331), "annotate" -> "annotated" (Figure 2 caption), "limitation" -> "limitations" (line 343), "SOTA" -> "state-of-the-art (SOTA)" (line 353), "these" -> "this" (line 369), "where" -> "on which" (line 393), "hugging face" -> "Hugging Face" (431), "these" -> "this" (line 464), "The" -> "A" (line 466), "reference number" -> "the number of references" (Figure 3 caption), "start" -> "have started" (line 571), "will be" -> "are" (line 863), "false" -> "incorrect"? ( line 865).
- While the language has been improved, there are still some awkward phrases. I suggest the authors have the paper reviewed by a native English speaker.
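To make the ERRANT point in the review above concrete, here is a minimal sketch of the automatic span-level edit extraction that happens before precision/recall/F can be computed (for English; the calls follow what I believe is the ERRANT Python API and should be treated as an assumption). For Chinese, this alignment and edit-extraction step is exactly what the authors had to provide themselves.

```python
import errant  # pip install errant; also needs an English spaCy model

annotator = errant.load("en")
orig = annotator.parse("This are a sentence .")
cor = annotator.parse("This is a sentence .")

# ERRANT aligns the two token sequences and extracts typed, span-level edits;
# these are what an M2 file stores and what the P/R/F computation consumes.
for e in annotator.annotate(orig, cor):
    print(e.o_start, e.o_end, repr(e.o_str), "->", repr(e.c_str), e.type)
```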
ACL_2017_477_review
ACL_2017
1) The character tri-gram LSTM seems a little unmotivated. Did the authors try other character n-grams as well? As a reviewer, I can guess that character tri-grams roughly correspond to morphemes, especially in Semitic languages, but what made the authors report results for 3-grams as opposed to 2- or 4-? In addition, there are roughly 26^3=17576 possible distinct trigrams in the Latin lower-case alphabet, which is enough to almost constitute a word embedding table. Did the authors only consider observed trigrams? How many distinct observed trigrams were there? 2) I don't think you can meaningfully claim to be examining the effectiveness of character-level models on root-and-pattern morphology if your dataset is unvocalised and thus doesn't have the 'pattern' bit of 'root-and-pattern'. I appreciate that finding transcribed Arabic and Hebrew with vowels may be challenging, but it's half of the typology. 3) Reduplication seems to be a different kind of phenomenon to the other three, which are more strictly morphological typologies. Indonesian and Malay also exhibit various word affixes, which can be used on top of reduplication, which is a more lexical process. I'm not sure splitting it out from the other linguistic typologies is justified. - General Discussion: 1) The paper was structured very clearly and was very easy to read. 2) I'm a bit puzzled about why the authors chose to use 200 dimensional character embeddings. Once the dimensionality of the embedding is greater than the size of the vocabulary (here the number of characters in the alphabet), surely you're not getting anything extra? ------------------------------- Having read the author response, my opinions have altered little. I still think the same strengths and weakness that I have already discussed hold.
1) The character tri-gram LSTM seems a little unmotivated. Did the authors try other character n-grams as well? As a reviewer, I can guess that character tri-grams roughly correspond to morphemes, especially in Semitic languages, but what made the authors report results for 3-grams as opposed to 2- or 4-? In addition, there are roughly 26^3=17576 possible distinct trigrams in the Latin lower-case alphabet, which is enough to almost constitute a word embedding table. Did the authors only consider observed trigrams? How many distinct observed trigrams were there?
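A quick sketch of the check the review asks for: counting how many distinct character trigrams are actually observed in a vocabulary, versus the 26^3 = 17,576 possible lower-case Latin trigrams. The word list is a made-up stand-in for the paper's corpora.

```python
def char_trigrams(word):
    # optional boundary markers so short words still yield trigrams
    padded = f"^{word}$"
    return {padded[i:i + 3] for i in range(len(padded) - 2)}

vocab = ["cat", "cats", "running", "runner", "unhappiness", "unhappy"]
observed = set().union(*(char_trigrams(w) for w in vocab))
print(f"{len(observed)} distinct observed trigrams (out of 26**3 = {26**3} possible)")
```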
ACL_2017_104_review
ACL_2017
- Comparison with ALIGN could be better. ALIGN used a context window size of 10 vs. this paper's 5, and a vector dimension of 500 vs. this paper's 200. Also, it's not clear to me whether N(e_j) includes only entities that link to e_j. The graph is directed and consists of Wikipedia outlinks, but is adjacency defined as it would be for an undirected graph? For ALIGN, the context of an entity is the set of entities that link to that entity. If N(e_j) is different, we cannot tell how much impact this change has on the learned vectors, and this could contribute to the difference in scores on the entity similarity task. - It is sometimes difficult to follow whether "mention" means a string type, or a particular mention in a particular document. The phrase "mention embedding" is used, but it appears that embeddings are only learned for mention senses. - It is difficult to determine the impact of sense disambiguation order without comparison to other unsupervised entity linking methods. - General Discussion:
- It is difficult to determine the impact of sense disambiguation order without comparison to other unsupervised entity linking methods.
ARR_2022_89_review
ARR_2022
1. The experiments are conducted on a private dataset and the exact setup is impossible to reproduce. 2. A minor point: few-shot would be a more realistic setup for this task, as domain-specific TODOs are easy to acquire; however, I agree that the current setup is adequate as well. 3. More error analysis could be useful, especially on the public dataset, as its data could be included without any restrictions, e.g., error types/examples? Patterns? Examples where non-contextualized embeddings outperform contextualized ones, or even LITE? I urge the authors to release at least some part of the dataset to the wider public, or under some end-user agreement. Comments: 1. I suggest the authors focus their comparison on the word2vec baselines (currently in the appendix) instead of Sentence-BERT, as the latter does not show good performance on short texts. It seems that non-contextualized embeddings are more suitable for the task. 2. It may make more sense to try out models pre-trained on conversations, e.g., text from Twitter or natural language conversations.
2. A minor point: few-shot would be a more realistic setup for this task, as domain-specific TODOs are easy to acquire; however, I agree that the current setup is adequate as well.
ARR_2022_65_review
ARR_2022
1. The paper covers few qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be complete and contain fewer omissions, in contrast to product comments which are casually written and may have looser syntactic structures. However, novel text is also very different from news text in that it contains unusual predicates and even imaginary entities as arguments. It seems that the authors are arguing that syntactic factors are more significant in SRL performance, and the experimental results are also consistent with this. Then it would be helpful to show a few examples from each domain to illustrate how they differ structurally. 2. The proposed dataset uses a new annotation scheme that is different from that of previous datasets, which introduces difficulties in comparison with previous results. While I think the frame-free scheme is justified in this paper, the compatibility with other benchmarks is an important issue that needs to be discussed. It may be possible to, for example, convert frame-based annotations to frame-free ones. I believe this is doable because FrameNet also has the core/non-core sets of arguments for each frame. It would also be better if the authors could elaborate more on the relationship between this new scheme and previous ones. Besides eliminating the frame annotation, what are the major changes to the semantic role labels? - In Sec. 3, it is a bit unclear why there is a division into source domain and target domain. Thus, it might be useful to mention explicitly that the dataset is designed for domain transfer experiments. - Lines 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but lines 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise. - More information about the annotators would be needed. Are they all native Chinese speakers? Do they have a linguistics background? - Were pred-wise/arg-wise consistencies used in the construction of existing datasets? I think they are not newly invented. It is useful to know where they come from. - In the SRL formulation (Sec. 5), I am not quite sure what “the concerned word” is. Is it the predicate? Does this formulation cover the task of identifying the predicate(s), or are the predicates given by syntactic parsing results? - From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy). - How was the train/dev/test split determined? This should be noted (even if it is simply done randomly).
- From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy).
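A rough sketch of the quantitative comparison suggested above: KL divergence (cross-entropy minus entropy) from each target domain's semantic-role distribution to the source distribution. The role inventory and counts below are invented for illustration only.

```python
import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-12):
    """KL(P || Q) in nats between two label distributions given as counts."""
    p = np.asarray(p_counts, float); q = np.asarray(q_counts, float)
    p /= p.sum(); q /= q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Invented role-label counts over a shared inventory, e.g. [A0, A1, ADV, TMP].
source = [5000, 3200, 1200, 800]
targets = {"ZX": [4800, 3100, 1150, 950], "PC": [2600, 4300, 900, 400]}

for name, counts in targets.items():
    print(f"KL({name} || Source) = {kl_divergence(counts, source):.4f}")
```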
ACL_2017_37_review
ACL_2017
Weak results/summary of "side-by-side human" comparison in Section 5. Some disfluency/agrammaticality. - General Discussion: The article proposes a principled means of modeling utterance context, consisting of a sequence of previous utterances. Some minor issues: 1. Past turns in Table 1 could be numbered, making the text associated with this table (lines 095-103) less difficult to ingest. Currently, readers need to count turns from the top when identifying references in the authors' description, and may wonder whether "second", "third", and "last" imply a side-specific or global enumeration. 2. Some reader confusion may be eliminated by explicitly defining what "segment" means in "segment level", as occurring on line 269. Previously, on line 129, this seemingly same thing was referred to as "a sequence-sequence [similarity matrix]". The two terms appear to be used interchangeably, but it is not clear what they actually mean, despite the text in section 3.3. It seems the authors may mean "word subsequence" and "word subsequence to word subsequence", where "sub-" implies "not the whole utterance", but not sure. 3. Currently, the variable symbol "n" appears to be used to enumerate words in an utterance (line 306), as well as utterances in a dialogue (line 389). The authors may choose two different letters for these two different purposes, to avoid confusing readers going through their equations. 4. The statement "This indicates that a retrieval based chatbot with SMN can provide a better experience than the state-of-the-art generation model in practice." at the end of section 5 appears to be unsupported. The two approaches referred to are deemed comparable in 555 out of 1000 cases, with the baseline better than the proposed method in 238 out of the remaining 445 cases. The authors are encouraged to assess and present the statistical significance of this comparison. If it is weak, their comparison at best permits the claim that their proposed method is no worse (rather than "better") than the VHRED baseline. 5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text. 6. There is a pervasive use of "to meet", as in "a response candidate can meet each utterance" on line 280, which is difficult to understand. 7. Spelling: "gated recurrent unites"; "respectively" on line 133 should be removed; punctuation on lines 186 and 188 is exchanged; "baseline model over" -> "baseline model by"; "one cannot neglects".
5. The authors may choose to insert into Figure 1 the explicit "first layer", "second layer" and "third layer" labels they use in the accompanying text.
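A minimal sketch of the significance check suggested in point 4 above: treating the side-by-side comparison as a sign test, with 555 ties out of 1000 and the baseline preferred in 238 of the 445 decisive cases.

```python
from scipy.stats import binomtest

decisive = 445          # 1000 pairs minus 555 ties
baseline_wins = 238     # the proposed method wins the remaining 207

# Under the null hypothesis of no preference, decisive wins are Binomial(445, 0.5).
result = binomtest(baseline_wins, n=decisive, p=0.5, alternative="two-sided")
print(f"two-sided p-value = {result.pvalue:.3f}")  # roughly 0.15: not significant at 0.05
```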
ARR_2022_338_review
ARR_2022
- The unsupervised translation tasks are all quite superficial, taking existing datasets of similar languages (e.g. En-De Multi30k, En-Fr WMT) and editing them to an unsupervised MT corpus. - Improvements on Multi30k are quite small (< 1 BLEU) and reported over single runs and measuring BLEU scores alone. It would be good to report averages over multiple runs and report some more modern metrics as well like COMET or BLEURT. - It is initially quite unclear from the writing where the sentence-level representations come from. As they are explicitly modeled, they need supervision from somewhere. The constant comparison to latent variable models and calling these sentence representations latent codes does not add to the clarity of the paper. I hope this will be improved in a revision of the paper. Some typos: - 001: "The latent variables" -> "Latent variables" - 154: "efficiently to compute" -> "efficient to compute" - 299: "We denote the encoder and decoder for encoding and generating source-language sentences as the source encoder and decoder" - unclear - 403: "langauge" -> "language"
- Improvements on Multi30k are quite small (< 1 BLEU) and reported over single runs and measuring BLEU scores alone. It would be good to report averages over multiple runs and report some more modern metrics as well like COMET or BLEURT.
ACL_2017_588_review
ACL_2017
and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems. - Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. 3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise. 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities. - Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested? 2) Have you tried building a classifier that just takes h_i^e as inputs? I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
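A small sketch of the analysis suggested in point 4 above: accuracy broken down by how frequent the gold entity is in training. Entity counts and predictions are invented placeholders.

```python
from collections import Counter, defaultdict

# Invented training-frequency counts and (gold entity, was the model correct) pairs.
train_freq = Counter({"aspirin": 500, "ibuprofen": 40, "tirofiban": 2})
predictions = [("aspirin", True), ("aspirin", True), ("ibuprofen", True),
               ("tirofiban", False), ("tirofiban", False)]

def bucket(freq):
    return "freq >= 100" if freq >= 100 else "10 <= freq < 100" if freq >= 10 else "freq < 10"

hits, totals = defaultdict(int), defaultdict(int)
for entity, correct in predictions:
    b = bucket(train_freq[entity])
    totals[b] += 1
    hits[b] += int(correct)

for b, n in totals.items():
    print(f"{b}: accuracy = {hits[b] / n:.2f} over {n} mentions")
```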
ARR_2022_123_review
ARR_2022
1) it uses different experiment settings (e.g., a 32k batch size, a beam size of 5) and does not mention some details (e.g., the number of training steps); these different settings may contribute to the performance, and the comparisons in Table 2 may not be sufficiently reliable. 2) it only compares with some weak baselines in Tables 3, 4 and 6. Though the approach can surpass the sentence-level model baseline, the naive document-to-document translation model and Zheng et al. (2020), these baselines seem weak, for example, Voita et al. (2019) achieve 81.6, 58.1, 72.2 and 80.0 for deixis, lexical cohesion, ellipsis (infl.) and ellipsis (VP) respectively with the CADec model, while this work only gets 64.7, 46.3, 65.9 and 53.0. It seems that there is still a large gap with the presented approach in these linguistic evaluations. 3) the multi-resolutional data processing approach may somehow increase the instance weight of the document-level data, and how this affects the performance is not studied. 1) It's better to adopt experiment settings consistent with previous work. 2) There is still a large performance gap compared to Voita et al. (2019) in the linguistic evaluations; BLEU may not be able to reflect these document-level phenomena, and the linguistic evaluations are important. The reasons for the performance gap should be investigated and addressed. 3) It's better to investigate whether the model really leverages document-level contexts correctly, possibly referring to this paper: Do Context-Aware Translation Models Pay the Right Attention? In ACL 2021.
2) it only compares with some weak baselines in Tables 3, 4 and 6. Though the approach can surpass the sentence-level model baseline, the naive document-to-document translation model and Zheng et al. (2020), these baselines seem weak, for example, Voita et al. (2019) achieve 81.6, 58.1, 72.2 and 80.0 for deixis, lexical cohesion, ellipsis (infl.) and ellipsis (VP) respectively with the CADec model, while this work only gets 64.7, 46.3, 65.9 and 53.0. It seems that there is still a large gap with the presented approach in these linguistic evaluations.
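For context on the linguistic scores quoted above (deixis, lexical cohesion, ellipsis): they come from contrastive test sets, where the model must score the correct translation above distractor candidates. A minimal sketch of that evaluation follows; `score` stands in for a model's log-probability and is a placeholder, not any particular system's API.

```python
def contrastive_accuracy(examples, score):
    """examples: dicts with 'src', 'candidates' (list of translations) and
    'gold' (index of the correct candidate); the model is credited only when
    the correct candidate receives the highest score."""
    hits = 0
    for ex in examples:
        scores = [score(ex["src"], cand) for cand in ex["candidates"]]
        hits += int(scores.index(max(scores)) == ex["gold"])
    return hits / len(examples)

# usage sketch: contrastive_accuracy(deixis_set, lambda src, tgt: model.logprob(src, tgt))
```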
ACL_2017_96_review
ACL_2017
Lacks statistics of the datasets (e.g. average length, vocabulary size); the baseline (Moses) is not appropriate because of the small size of the dataset; the assumption "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word" is not supported by the data. - General Discussion: This discussion gives more details about the weaknesses of the paper. Half of the paper is about the new dataset for sarcasm interpretation. However, the paper doesn't show important information about the dataset such as average length and vocabulary size. More importantly, the paper doesn't show any statistical evidence to support their method of focusing on sentiment words. Because the dataset is small (only 3000 tweets), I guess that many words are rare. Therefore, Moses alone is not a proper baseline. A proper baseline should be an MT system that can handle rare words very well. In fact, using clustering and declustering (as in Sarcasm SIGN) is a way to handle rare words. Sarcasm SIGN is built based on the assumption that "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word". Table 1, however, strongly disagrees with this assumption: the human interpretations often differ from the tweets not only at sentiment words. I thus strongly suggest that the authors give statistical evidence from the dataset that supports their assumption. Otherwise, the whole idea of Sarcasm SIGN is just a hack. -------------------------------------------------------------- I have read the authors' response. I don't change my decision for the following reasons: - the authors wrote that "the Fiverr workers might not take this strategy": to me this is not the spirit of corpus-based NLP. A model must be built to fit the given data; the data should not have to follow some assumption that the model is built on. - the authors wrote that "the BLEU scores of Moses and SIGN are above 60, which is generally considered decent in the MT literature": to me the number 60 doesn't show anything at all because the sentences in the dataset are very short. Moreover, if we look at Table 6, %changed for Moses is only 42%, meaning that even though more than half of the time the translation is simply a copy, the BLEU score is above 60. - "While higher scores might be achieved with MT systems that explicitly address rare words, these systems don't focus on sentiment words": it's true, but I was wondering whether sentiment words are rare in the corpus. If they are, those MT systems should obviously handle them (in addition to other rare words).
- "While higher scores might be achieved with MT systems that explicitly address rare words, these systems don't focus on sentiment words": it's true, but I was wondering whether sentiment words are rare in the corpus. If they are, those MT systems should obviously handle them (in addition to other rare words).
ACL_2017_178_review
ACL_2017
- The evaluation reported in this paper includes only intrinsic tasks, mainly on similarity/relatedness datasets. As the authors note, such evaluations are known to have very limited power in predicting the utility of embeddings in extrinsic tasks. Accordingly, it has recently become much more common to include at least one or two extrinsic tasks as part of the evaluation of embedding models. - The similarity/relatedness evaluation datasets used in the paper are presented as datasets recording human judgements of similarity between concepts. However, if I understand correctly, the actual judgements were made based on presenting phrases to the human annotators, and therefore they should be considered as phrase similarity datasets, and analyzed as such. - The medical concept evaluation dataset, ‘mini MayoSRS’ is extremely small (29 pairs), and its larger superset ‘MayoSRS’ is only a little larger (101 pairs) and was reported to have a relatively low human annotator agreement. The other medical concept evaluation dataset, ‘UMNSRS’, is more reasonable in size, but is based only on concepts that can be represented as single words, and were represented as such to the human annotators. This should be mentioned in the paper and makes the relevance of this dataset questionable with respect to representations of phrases and general concepts. - As the authors themselves note, they (quite extensively) fine-tune their hyperparameters on the very same datasets for which they report their results and compare them with prior work. This makes all the reported results and analyses questionable. - The authors suggest that their method is superior to prior work, as it achieved comparable results while prior work required much more manual annotation. I don't think this argument is very strong because the authors also use large manually-constructed ontologies, and also because the manually annotated dataset used in prior work comes from existing clinical records that did not require dedicated annotations. - In general, I was missing more useful insights into what is going on behind the reported numbers. The authors try to treat the relation between a phrase and its component words on one hand, and a concept and its alternative phrases on the other, as similar types of a compositional relation. However, they are different in nature and in my mind each deserves a dedicated analysis. For example, around line 588, I would expect an NLP analysis specific to the relation between phrases and their component words. Perhaps the reason for the reported behavior is dominant phrase headwords, etc. Another aspect that was absent but could strengthen the work is an investigation of the effect of the hyperparameters that control the tradeoff between the atomic and compositional views of phrases and concepts. General Discussion: Due to the above-mentioned weaknesses, I recommend rejecting this submission. I encourage the authors to consider improving their evaluation datasets and methodology before re-submitting this paper. Minor comments: - Line 069: contexts -> concepts - Line 202: how are phrase overlaps handled? - Line 220: I believe the dimensions should be |W| x d. Also, the terminology ‘negative sampling matrix’ is confusing as the model uses these embeddings to represent contexts in positive instances as well. - Line 250: regarding ‘the observed phrase just completed’, it is not clear to me how words are trained in the joint model. 
The text may imply that only the last words of a phrase are considered as target words, but that doesn’t make sense. - Notation in Equation 1 is confusing (using c instead of o) - Line 361: Pedersen et al 2007 is missing in the reference section. - Line 388: I find it odd to use such a fine-grained similarity scale (1-100) for human annotations. - Line 430: The newly introduced term ‘strings’ here is confusing. I suggest to keep using ‘phrases’ instead. - Line 496: Which task exactly was used for the hyper-parameter tuning? That’s important. I couldn’t find that even in the appendix. - Table 3: It’s hard to see trends here, for instance PM+CL behaves rather differently than either PM or CL alone. It would be interesting to see development set trends with respect to these hyper-parameters. - Line 535: missing reference to Table 5.
- The similarity/relatedness evaluation datasets used in the paper are presented as datasets recording human judgements of similarity between concepts. However, if I understand correctly, the actual judgements were made based on presenting phrases to the human annotators, and therefore they should be considered as phrase similarity datasets, and analyzed as such.
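Following up on the tuning concern above, a minimal sketch of the standard alternative: tune the trade-off hyperparameter on a held-out development split of the similarity pairs and report Spearman correlation only on the untouched test split. `similarity(a, b, alpha)` is a hypothetical stand-in for the model's phrase-similarity score, not the paper's actual function.

```python
import random
from scipy.stats import spearmanr

def evaluate(pairs, alpha, similarity):
    """pairs: (phrase_a, phrase_b, gold_score) triples."""
    gold = [g for _, _, g in pairs]
    pred = [similarity(a, b, alpha) for a, b, _ in pairs]
    rho, _ = spearmanr(gold, pred)
    return rho

def tune_then_test(pairs, alphas, similarity, dev_frac=0.5, seed=0):
    pairs = list(pairs)
    random.Random(seed).shuffle(pairs)
    cut = int(len(pairs) * dev_frac)
    dev, test = pairs[:cut], pairs[cut:]
    best_alpha = max(alphas, key=lambda a: evaluate(dev, a, similarity))
    return best_alpha, evaluate(test, best_alpha, similarity)  # report only this score
```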
ACL_2017_326_review
ACL_2017
1. The proposed method is not compared with other CWS models. The baseline model (Bi-LSTM) is proposed in [1] and [2]. However, these models were proposed not for CWS but for POS tagging and NE tagging. The description "In this paper, we employ the state-of-the-art architecture ..." (in Section 2) is misleading. 2. The purpose of the experiments in Section 6.4 is unclear. In Sec. 6.4, the stated purpose is to investigate whether "datasets in traditional Chinese and simplified Chinese could help each other." However, in the experimental setting, the model is separately trained on simplified Chinese and traditional Chinese, and the shared parameters are fixed after training on simplified Chinese. What is the expected effect of fixing the shared parameters? - General Discussion: The paper would be more interesting if there were a more detailed discussion of the datasets for which adversarial multi-criteria learning does not boost the performance. [1] Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. arXiv preprint arXiv:1508.01991. [2] Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. arXiv preprint arXiv:1603.01354.
1. The proposed method is not compared with other CWS models. The baseline model (Bi-LSTM) is proposed in [1] and [2]. However, these models were proposed not for CWS but for POS tagging and NE tagging. The description "In this paper, we employ the state-of-the-art architecture ..." (in Section 2) is misleading.
ACL_2017_768_review
ACL_2017
First, the classification model used in this paper (concat + linear classifier) was shown to be inherently unable to learn relations in "Do Supervised Distributional Methods Really Learn Lexical Inference Relations?" (Levy et al., 2015). Second, the paper makes superiority claims in the text that are simply not substantiated in the quantitative results. In addition, there are several clarity and experiment setup issues that give an overall feeling that the paper is still half-baked. = Classification Model = Concatenating two word vectors as input for a linear classifier was mathematically proven to be incapable of learning a relation between words (Levy et al., 2015). What is the motivation behind using this model in the contextual setting? While this handicap might be somewhat mitigated by adding similarity features, all these features are symmetric (including the Euclidean distance, since |L-R| = |R-L|). Why do we expect these features to detect entailment? I am not convinced that this is a reasonable classification model for the task. = Superiority Claims = The authors claim that their contextual representation is superior to context2vec. This is not evident from the paper, because: 1) The best result (F1) in both table 3 and table 4 (excluding PPDB features) is the 7th row. To my understanding, this variant does not use the proposed contextual representation; in fact, it uses the context2vec representation for the word type. 2) This experiment uses ready-made embeddings (GloVe) and parameters (context2vec) that were tuned on completely different datasets with very different sizes. Comparing the two is empirically flawed, and probably biased towards the method using GloVe (which was trained on a much larger corpus). In addition, it seems that the biggest boost in performance comes from adding similarity features and not from the proposed context representation. This is not discussed. = Miscellaneous Comments = - I liked the WordNet dataset - using the example sentences is a nice trick. - I don’t quite understand why the task of cross-lingual lexical entailment is interesting or even reasonable. - Some basic baselines are really missing. Instead of the "random" baseline, how well does the "all true" baseline perform? What about the context-agnostic symmetric cosine similarity of the two target words? - In general, the tables are very difficult to read. The caption should make the tables self-explanatory. Also, it is unclear what each variant means; perhaps a more precise description (in text) of each variant could help the reader understand? - What are the PPDB-specific features? This is really unclear. - I could not understand 8.1. - Table 4 is overfull. - In table 4, the F1 of "random" should be 0.25. - Typo in line 462: should be "Table 3" = Author Response = Thank you for addressing my comments. Unfortunately, there are still some standing issues that prevent me from accepting this paper: - The problem I see with the base model is not that it is learning prototypical hypernyms, but that it's mathematically not able to learn a relation. - It appears that we have a different reading of tables 3 and 4. Maybe this is a clarity issue, but it prevents me from understanding how the claim that contextual representations substantially improve performance is supported. Furthermore, it seems like other factors (e.g. similarity features) have a greater effect.
1) The best result (F1) in both table 3 and table 4 (excluding PPDB features) is the 7th row. To my understanding, this variant does not use the proposed contextual representation; in fact, it uses the context2vec representation for the word type.
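To spell out the Levy et al. (2015) argument referenced above: for a linear classifier over the concatenation $[\mathbf{x};\mathbf{y}]$ of the two word vectors, the score decomposes into independent contributions from each word, so the decision can only reflect properties of $\mathbf{x}$ and of $\mathbf{y}$ separately (e.g. "prototypical hypernyms"), never their interaction; and the added similarity features are symmetric in the two words, so they cannot encode the direction of entailment either.

$$
\mathbf{w}^{\top}[\mathbf{x};\mathbf{y}] + b \;=\; \mathbf{w}_1^{\top}\mathbf{x} + \mathbf{w}_2^{\top}\mathbf{y} + b,
\qquad
\cos(\mathbf{x},\mathbf{y}) = \cos(\mathbf{y},\mathbf{x}),
\qquad
\lVert \mathbf{x}-\mathbf{y} \rVert = \lVert \mathbf{y}-\mathbf{x} \rVert .
$$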
ACL_2017_779_review
ACL_2017
There were many sentences in the abstract and in other places in the paper where the authors stuff too much information into a single sentence. This could be avoided. One can always use an extra sentence to be clearer. There could have been a section where the actual method used is explained in more detail. This explanation is glossed over in the paper. It's non-trivial to guess the idea from reading the sections alone. During test time, you need the source-pivot corpus as well. This is a major disadvantage of this approach. This is played down - in fact it's not mentioned at all. I would strongly encourage the authors to mention this and comment on it. - General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to the one proposed in Yoon Kim et al. The innovative part is that they use it for zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments: - Lines 21-27: the authors could have avoided this complicated structure and used two simple sentences instead. Line 41: Johnson et al. has SOTA on English-French and German-English. Lines 77-79: no evidence is provided as to why the combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in the literature seems to suggest the opposite. Lines 416-420: these two lines are repeated; they were first mentioned in the previous paragraph. Line 577: Figure 2, not 3!
- Lines 21-27: the authors could have avoided this complicated structure and used two simple sentences instead. Line 41: Johnson et al. has SOTA on English-French and German-English. Lines 77-79: no evidence is provided as to why the combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Evidence in the literature seems to suggest the opposite. Lines 416-420: these two lines are repeated; they were first mentioned in the previous paragraph. Line 577: Figure 2, not 3!
ACL_2017_494_review
ACL_2017
- I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors. - The authors introduce a synthetic word similarity evaluation dataset, Morph-SimLex. They create it by applying their presumably semantic-meaning-preserving morphological rules to SimLex999 to generate many more pairs with morphological variability. They do not manually annotate these new pairs, but rather use the original similarity judgements from SimLex999. The obvious caveat with this dataset is that the similarity scores are presumed and therefore less reliable. Furthermore, the fact that this dataset was generated by the very same rules that are used in this work to morph-fit word embeddings, means that the results reported on this dataset in this work should be taken with a grain of salt. The authors should clearly state this in their paper. - (Soricut and Och, 2015) is mentioned as a future source for morphological knowledge, but in fact it is also an alternative approach to the one proposed in this paper for generating morphologically-aware word representations. The authors should present it as such and differentiate their work. - The evaluation does not include strong morphologically-informed embedding baselines. General Discussion: With the few exceptions noted, I like this work and I think it represents a nice contribution to the community. The authors presented a simple approach and showed that it can yield nice improvements using various common embeddings on several evaluations and four different languages. I’d be happy to see it in the conference. Minor comments: - Line 200: I found this phrasing unclear: “We then query … of linguistic constraints”. - Section 2.1: I suggest to elaborate a little more on what the delta is between the model used in this paper and the one it is based on in Wieting 2015. It seemed to me that this was mostly the addition of the REPEL part. - Line 217: “The method’s cost function consists of three terms” - I suggest to spell this out in an equation. - Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details. - Line 297-299: I suggest to move this text to Section 3, and make the note that you did not fine-tune the params in the main text and not in a footnote. - Line 327: (create, creates) seems like a wrong example for that rule. - I have read the author response
- I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors.
ACL_2017_727_review
ACL_2017
Quantitative results are given only for the author's PSL model and not compared against any traditional baseline classification algorithms, making it unclear to what degree their model is necessary. Poor comparison with alternative approaches makes it difficult to know what to take away from the paper. The qualitative investigation is interesting, but the chosen visualizations are difficult to make sense of and add little to the discussion. Perhaps it would make sense to collapse across individual politicians to create a clearer visual. - General Discussion: The submission is well written and covers a topic which may be of interest to the ACL community. At the same time, it lacks proper quantitative baselines for comparison. Minor comments: - line 82: A year should be provided for the Boydstun et al. citation - It’s unclear to me why similar behavior (time of tweeting) should necessarily be indicative of similar framing and no citation was given to support this assumption in the model. - The related work goes over quite a number of areas, but glosses over the work most clearly related (e.g. PSL models and political discourse work) while spending too much time mentioning work that is only tangential (e.g. unsupervised models using Twitter data). - Section 4.2 it is unclear whether Word2Vec was trained on their dataset or if they used pre-trained embeddings. - The authors give no intuition behind why unigrams are used to predict frames, while bigrams/trigrams are used to predict party. - The authors note that temporal similarity worked best with one hour chunks, but make no mention of how important this assumption is to their results. If the authors are unable to provide full results for this work, it would still be worthwhile to give the reader a sense of what performance would look like if the time window were widened. - Table 4: Caption should make it clear these are F1 scores as well as clarifying how the F1 score is weighted (e.g. micro/macro). This should also be made clear in the “evaluation metrics” section on page 6.
- Table 4: Caption should make it clear these are F1 scores as well as clarifying how the F1 score is weighted (e.g. micro/macro). This should also be made clear in the “evaluation metrics” section on page 6.
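A tiny illustration of why the micro/macro clarification requested above matters: with imbalanced frame labels, the two averages can tell very different stories. The labels below are invented.

```python
from sklearn.metrics import f1_score

# 8 tweets with a frequent frame, 2 with a rare one; the classifier never
# predicts the rare frame.
y_true = ["economy"] * 8 + ["morality"] * 2
y_pred = ["economy"] * 10

print("micro F1:", f1_score(y_true, y_pred, average="micro"))  # 0.80
print("macro F1:", f1_score(y_true, y_pred, average="macro"))  # about 0.44
```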
ARR_2022_1_review
ARR_2022
- Using original encoders as baselines might not be sufficient. In most experiments, the paper only compares with the original XLM-R or mBERT trained without any knowledge base information. It is unclear whether such encoders, fine-tuned towards the KB tasks, would actually perform comparably to the proposed approach. I would like to see experiments that simply fine-tune the encoders on the same dataset with the MLM objective from their original pretraining, and a comparison with them. Such baselines can leverage input sequences as simple as `<s>X_s X_p X_o </s>` where one of the elements is masked as in MLM training. - The design of the input formats is intuitive but lacks justification. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why this design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>`, as wrapping tokens in the input sequence have been widely used in the community. - The abstract is lengthy, so some background and comparisons with prior work could be elaborated on in the introduction and related work instead. Otherwise, they shift the perspective of the abstract, making it hard for the audience to catch the main novelties and contributions. - In line 122, denoting triples as $(e_1, r, e_2)$ would clearly show their tuple-like structure, rather than using sets. - In Sec. 3.2, the authors argue that the Prix-LM (All) model consistently outperforms the single model, hence demonstrating the ability to leverage multilingual information. Given that the training data sizes differ a lot, I would like to see an ablation in which the model is trained on a mix of multilingual data with the same overall dataset size as the monolingual one. Otherwise, it is hard to tell whether the performance gain comes from the larger dataset or from the multilingual training.
- The design of the input formats is intuitive but lacks justification. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why this design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>`, as wrapping tokens in the input sequence have been widely used in the community.
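For concreteness, a throwaway helper contrasting the plain concatenation mentioned in the review with the wrapped-token variant it suggests comparing against. The token names are taken from the review itself; this is purely illustrative and not the paper's code.

```python
def plain_format(subj, pred, obj):
    # the simple serialisation quoted in the review
    return f"<s>{subj} {pred} {obj} </s>"

def wrapped_format(subj, pred, obj):
    # the wrapped-token variant suggested as an ablation
    return f"<s>[S]{subj}[S][P]{pred}[P][O]{obj}[O]</s>"

print(plain_format("Paris", "capital of", "France"))
print(wrapped_format("Paris", "capital of", "France"))
```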
ARR_2022_333_review
ARR_2022
- The writing is really poor. Many places are very confusing. The figures are not clearly separated from the text, and it is unclear where I should look. Many sentences use the past tense, while other sentences use the present tense. The only reason I would like this paper to be accepted is the dataset. The writing itself is far from that of a solid paper, and I suggest the authors go over the writing again. - This dataset is the first Thai N-NER dataset, and N-NER in Thai is a new task for the community, so it could be very insightful to know what challenges are specific to Thai, and what errors the models make. The paper provides an error analysis, but it is not deep enough. It would be insightful if the authors could list error patterns at a finer granularity. It is also unclear to me why syllable segmentation is useful for the annotation. Many of the readers do not know Thai, so I think more explanation is necessary. Regarding the writing, for example, Section 3 mixes the past and present tense. Figure 2 is hidden in the text. I suggest the authors put all the tables and figures at the top of the pages.
- The writing is really poor. Many places are very confusing. The figures are not clearly separated from the text, and it is unclear where I should look. Many sentences use the past tense, while other sentences use the present tense. The only reason I would like this paper to be accepted is the dataset. The writing itself is far from that of a solid paper, and I suggest the authors go over the writing again.
ACL_2017_71_review
ACL_2017
- The explanation of the methods in some paragraphs is too detailed, there is no mention of other work, and it is repeated in the corresponding method sections; the authors committed to addressing this issue in the final version. - README file for the dataset [Authors committed to add a README file] - General Discussion: - Section 2.2 mentions examples of DBpedia properties that were used as features. Do the authors mean that all the properties have been used, or only a subset? If the latter, please list them. In the authors' response, the authors explain this point in more detail, and I strongly believe that it is crucial to list all the features in detail in the final version for clarity and replicability of the paper. - In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3: KNs in the source language or in English? (since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure. - Based on section 2.4 it seems that topical relatedness implies that some features are domain dependent. It would be helpful to see how much domain dependent features affect the performance. In the final version, the authors will add the performance results for the above mentioned features, as mentioned in their response. - In related work, the authors make a strong connection to Sil and Florian's work, where they emphasize the supervised vs. unsupervised difference. The proposed approach is still supervised in the sense of training; however, the generation of training data doesn’t involve human intervention
- Based on Section 2.4, it seems that topical relatedness implies that some features are domain-dependent. It would be helpful to see how much the domain-dependent features affect the performance. In the final version, the authors will add the performance results for the above-mentioned features, as stated in their response.
ARR_2022_187_review
ARR_2022
1. It is not clear whether the contributions of the paper are sufficient for a long *ACL paper. By tightening the writing and removing unnecessary details, I suspect the paper would make a nice short paper, but in its current form the paper lacks sufficient novelty. 2. The writing is difficult to follow in many places and can be simplified. 1. Lines 360-367 occupy more space than needed. 2. It was not clear to me that Vikidia is the new dataset introduced by the paper until I read the last section :) 3. Too many metrics are used for evaluation. While I commend the paper's thoroughness in using different metrics for evaluation, I believe that in this case the multiple metrics create more confusion than clarity in understanding the results. I recommend using the strictest metric (such as RA) because it will clearly highlight the differences in performance. Also consider marking the best results in each column/row in boldface. 4. I suspect that the other evaluation metrics (NDCG, SRR, KTCC) are unable to resolve the differences between NPRM and the baselines in some cases. For example, based on the extremely large values (>0.99) for all approaches in Table 4, I doubt the difference between NPRM's 0.995 and GloVe+SVMRank's 0.992 for Avg. SRR on Newsela-EN is statistically significant. 5. I did not understand the utility of presenting results in Table 2 and Table 3. Why not simplify the presentation by selecting the best regression-based and classification-based approaches for each evaluation dataset and compare them against NPRM in Table 4 itself? 6. From my understanding, RA is the strictest evaluation metric, and NPRM performs worse on RA when compared to the baselines (Table 4), where simpler approaches fare better. 7. I appreciate the paper foreseeing the limitations of the proposed NPRM approach. However, I find the discussion of the first limitation somewhat incomplete; it ends abruptly. The last sentence has the tone of "despite the weaknesses, NPRM is useful" but it does not flesh out why it is useful. 8. I found lines 616-632 excessively detailed for a conclusion paragraph. Maybe simply state that better metrics are needed for ARA evaluation? Such a detailed discussion is better suited for Sec 4.4. 9. Why was a classification-based model not used for the zero-shot experiments in Table 5 and Table 6? These results are in my opinion the strongest aspect of the paper, and should be as thorough as the rest of the results. 10. Line 559: "lower performance on Vikidia-Fr compared to Newsela-Es ..." - Why? These are different languages after all, so isn't the performance difference incomparable?
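One way to address the significance concern in point 4 is a paired bootstrap over the evaluation items. This is a hedged sketch under the assumption that per-item scores for NPRM and the baseline are available (the paper reports only aggregate values, so the inputs below are invented):

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_boot=10000, seed=0):
    """Estimate how often system A beats system B when the evaluation items
    are resampled with replacement. scores_a/scores_b are per-item values."""
    rng = np.random.default_rng(seed)
    scores_a, scores_b = np.asarray(scores_a), np.asarray(scores_b)
    n = len(scores_a)
    wins = 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)           # resampled item indices
        if scores_a[idx].mean() > scores_b[idx].mean():
            wins += 1
    return wins / n_boot                      # values near 1.0 suggest a reliable gap

# Hypothetical per-document scores for two systems:
a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
b = [1, 0, 0, 1, 1, 0, 1, 0, 1, 0]
print(paired_bootstrap(a, b))
```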
5. I did not understand the utility of presenting results in Table 2 and Table 3. Why not simplify the presentation by selecting the best regression-based and classification-based approaches for each evaluation dataset and compare them against NPRM in Table 4 itself?
ACL_2017_727_review
ACL_2017
Quantitative results are given only for the authors' PSL model and are not compared against any traditional baseline classification algorithms, making it unclear to what degree their model is necessary. The poor comparison with alternative approaches makes it difficult to know what to take away from the paper. The qualitative investigation is interesting, but the chosen visualizations are difficult to make sense of and add little to the discussion. Perhaps it would make sense to collapse across individual politicians to create a clearer visual. - General Discussion: The submission is well written and covers a topic which may be of interest to the ACL community. At the same time, it lacks proper quantitative baselines for comparison. Minor comments: - Line 82: A year should be provided for the Boydstun et al. citation. - It's unclear to me why similar behavior (time of tweeting) should necessarily be indicative of similar framing, and no citation was given to support this assumption in the model. - The related work goes over quite a number of areas, but glosses over the work most clearly related (e.g. PSL models and political discourse work) while spending too much time mentioning work that is only tangential (e.g. unsupervised models using Twitter data). - In Section 4.2 it is unclear whether Word2Vec was trained on their dataset or whether pre-trained embeddings were used. - The authors give no intuition behind why unigrams are used to predict frames, while bigrams/trigrams are used to predict party. - The authors note that temporal similarity worked best with one-hour chunks, but make no mention of how important this assumption is to their results. If the authors are unable to provide full results for this work, it would still be worthwhile to give the reader a sense of what performance would look like if the time window were widened. - Table 4: The caption should make it clear that these are F1 scores, as well as clarify how the F1 score is weighted (e.g. micro/macro). This should also be made clear in the "evaluation metrics" section on page 6.
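To illustrate the micro/macro distinction the last comment asks the caption to clarify, here is a small sketch using scikit-learn; the frame labels and predictions are invented for illustration:

```python
from sklearn.metrics import f1_score

# Invented gold and predicted frame labels for six tweets.
y_true = ["economy", "economy", "security", "health", "security", "health"]
y_pred = ["economy", "security", "security", "health", "security", "economy"]

# Micro-F1 pools all decisions before computing F1; macro-F1 averages the
# per-class F1 scores, so rare frames weigh as much as frequent ones.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```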
- Line 82: A year should be provided for the Boydstun et al. citation. - It's unclear to me why similar behavior (time of tweeting) should necessarily be indicative of similar framing, and no citation was given to support this assumption in the model.
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy over 15 runs. However, the standard deviation of the proposed method overlaps considerably with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing. Why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. Experiments on the full datasets, rather than only in the low-resource regime, are also encouraged. __4. Lack of some technical details:__ __4.1__. Are the smoothed representations always calculated with pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the mixup hyperparameter lambda in the experiments? Does the setting of this hyperparameter have a large impact on the results? __4.3__. Generally, traditional data augmentation methods have a setting for the __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation and some do not, which looks inconsistent. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to a degradation in performance. 3. I suggest that Listing 1 reflect the process of sending interpolated_repr into the task model to get the final representation.
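For context on the lambda question in 4.2, a minimal sketch of the kind of mixup interpolation text smoothing appears to rely on: a convex combination of the smoothed and one-hot representations of the same sentence. The Beta-distributed lambda and the alpha value are assumptions, not details from the paper:

```python
import numpy as np

def text_smoothing_mixup(smoothed_repr, onehot_repr, alpha=0.2, rng=None):
    """Interpolate the MLM-smoothed token distribution with the one-hot
    representation. Both inputs are assumed to have shape (seq_len, vocab_size)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixup coefficient; alpha is a guess here
    return lam * smoothed_repr + (1 - lam) * onehot_repr

# Toy usage with a 10-word vocabulary and a 4-token sentence.
vocab, seq_len = 10, 4
rng = np.random.default_rng(0)
smoothed = rng.dirichlet(np.ones(vocab), size=seq_len)
onehot = np.eye(vocab)[rng.integers(0, vocab, seq_len)]
print(text_smoothing_mixup(smoothed, onehot, rng=rng).shape)  # (4, 10)
```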
2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to a degradation in performance.
ACL_2017_376_review
ACL_2017
Many points are not explained well in the paper. - General Discussion: This work tackles an important and interesting event extraction problem -- identifying positive and negative interactions between pairs of countries in the world (or rather, between actors affiliated with countries). The primary contribution is an application of supervised, structured neural network models for sentence-level event/relation extraction. While previous work has examined tasks in the overall area, to my knowledge there has not been any publicly available sentence-level annotated data for the problem -- the authors here make a contribution as well by annotating some data included with the submission; if it is released, it could be useful for future researchers in this area. The proposed models -- which seem to be an application of various tree-structured recursive neural network models -- demonstrate a nice performance increase compared to a fairly convincing, broad set of baselines (if we are able to trust them; see below). The paper also presents a manual evaluation of the inferred time series from a news corpus, which is nice to see. I'm torn about this paper. The problem is a terrific one and the application of the recursive models seems like a contribution to this problem. Unfortunately, many aspects of the models, experimentation, and evaluation are not explained very well. The same work, with a more carefully written paper, could be really great. Some notes: - Baselines need more explanation. For example, the sentiment lexicon is not explained for the SVM. The LSTM classifier is left highly unspecified (L407-409) -- there are multiple different architectures for using an LSTM for classification. How was it trained? Is there a reference for the approach? Are the authors using off-the-shelf code (in which case, please refer to and cite it, which would also make it easier for the reader to understand and replicate if necessary)? It would be impossible to replicate based on the two-line explanation here. - (The supplied code does not seem to include the baselines, just the recursive NN models. It's great that the authors supplied code for part of the system, so I don't want to penalize them for missing it -- but this is relevant since the paper itself has so few details on the baselines that they could not really be replicated based on the explanation in the paper.) - How were the recursive NN models trained? - The visualization section is only a minor contribution; there aren't really any innovations or findings about what works or doesn't work here. Line by line: L97-99: Unclear. Why is this problem difficult? Compared to what? (Also, the sentence is somewhat ungrammatical...) L231: the trees are binarized, but how? Footnote 2 -- "the tensor version" needs a citation to explain what is being referred to. L314: How are non-state verbs defined? Does the definition of "event words" here come from any particular previous work that motivates it? Please refer to something appropriate or related. Footnote 4: of course the collapsed form doesn't work, because the authors aren't using dependency labels -- the point of the Stanford collapsed form is to remove prepositions from the dependency path and instead incorporate them into the labels. L414: How are the CAMEO/TABARI categories mapped to positive and negative entries? Is performance sensitive to this mapping? It seems like a hard task (there are hundreds of those CAMEO categories...).
Did the authors consider using the Goldstein scaling, which has been used in political science, as well as the cited work by O'Connor et al.? Or is it a bad fit for some reason? L400-401: What is the sentiment lexicon and why is it appropriate for the task? L439-440: Not clear. "We failed at finding an alpha meeting the requirements for the FT model." What does that mean? What are the requirements? What did the authors do in their attempt to find it? L447, L470: "precision and recall values are based on NEG and POS classes". What does this mean? So there's a 3x3 contingency table of gold and predicted (POS, NEU, NEG) classes, but this sentence leaves it ambiguous how precision and recall are calculated from this information. 5.1 aggregations: this seems fine, though fairly ad hoc. Is this temporal smoothing function a standard one? There's not much justification for it, especially given that something simpler, like a fixed-window average, could have been used. 5.2 visualizations: this seems pretty ad hoc, without much justification for the choices. The graph visualization shown does not seem to illustrate much. The authors should also discuss related work on 2D spatial visualization of country-country relationships by Peter Hoff and Michael Ward. 5.3 L638-639: "unions of countries" isn't a well-defined concept. Maybe the authors mean "international organizations"? L646-648: How were these 5 strong and 5 weak peaks selected? In particular, how were they chosen if there were more than 5 such peaks? L680-683: This needs more examples or an explanation of what it means to judge the polarity of a peak. What does it look like if the algorithm is wrong? How hard was this to assess? What was the agreement rate, if it can be judged? L738-740: The authors claim Gerrish and O'Connor et al. have a different "purpose and outputs" than the authors' work. That's not right. Both of those works try to do both (1) extract time series or other statistical information about the polarity of the relationships between countries, and *also* (2) extract topical keywords to explain aspects of the relationships. The paper here is only concerned with #1 and less concerned with #2, but certainly the previous work addresses #1. It's fine not to address #2, but this last sentence seems like a pretty odd statement. That raises the question -- Gerrish and O'Connor both conduct evaluations with an external database of country relations developed in political science ("MID", military interstate disputes). Why don't the authors of this work do this evaluation as well? There are various weaknesses of the MID data, but the evaluation approach needs to be discussed or justified.
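As a concrete version of the "something simpler" suggested for 5.1, a hedged sketch of a fixed-window moving average over an event-polarity time series; the series is synthetic and the window length is arbitrary:

```python
import numpy as np
import pandas as pd

# Synthetic daily net-polarity counts (positive minus negative events) for
# one country pair; the real series would come from the extraction system.
rng = np.random.default_rng(0)
ts = pd.Series(rng.normal(size=120),
               index=pd.date_range("2013-01-01", periods=120, freq="D"))

# Centered 7-day moving average -- a simple, standard temporal smoother.
smoothed = ts.rolling(window=7, center=True, min_periods=1).mean()
print(smoothed.head())
```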
- The visualization section is only a minor contribution; there aren't really any innovations or findings about what works or doesn't work here. Line by line: L97-99: Unclear. Why is this problem difficult? Compared to what? (Also, the sentence is somewhat ungrammatical...) L231: the trees are binarized, but how? Footnote 2 -- "the tensor version" needs a citation to explain what is being referred to.
ARR_2022_253_review
ARR_2022
- The paper devotes a lot of analysis to justifying that the information axis is a useful tool. As pointed out in the conclusion, I'm curious to see some related experiments that this information-axis tool can actually help with. - For Figure 1, I have another angle for explaining why randomly generated n-grams are far away from the extant words: characterBERT explicitly maximizes the probability of seen character sequences (and implicitly minimizes the probability of unseen ones). So I would guess the randomly generated n-grams have very different perplexity values from the extant words. This is borne out in Section 5.4. - It would be better to define the notation and give clear definitions of the "information axis", "word concreteness", and "Markov chain information content". - Other than UMAP, there are other tools for analyzing the geometry of high-dimensional representations. I believe the idea is not tightly tied to UMAP, so it would be better to also demonstrate results with other tools such as t-SNE.
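A hedged sketch of the cross-check the last point asks for, projecting the same embeddings with both UMAP and t-SNE; the random matrix stands in for the actual n-gram embeddings, and the umap-learn package is assumed to be installed:

```python
import numpy as np
from sklearn.manifold import TSNE
import umap  # from the umap-learn package

# Stand-in for characterBERT-style n-gram embeddings (500 items, 768 dims).
X = np.random.default_rng(0).normal(size=(500, 768))

proj_umap = umap.UMAP(n_components=2, random_state=0).fit_transform(X)
proj_tsne = TSNE(n_components=2, init="pca", random_state=0).fit_transform(X)
# If the separation along the claimed information axis appears in both
# projections, it is less likely to be an artifact of UMAP specifically.
```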
- Other than UMAP, there are other tools for analyzing the geometry of high-dimensional representations. I believe the idea is not tightly tied to UMAP, so it would be better to also demonstrate results with other tools such as t-SNE.
ARR_2022_10_review
ARR_2022
- The number of datasets used is relatively small; to really assess the importance of different design decisions, it would probably be good to use further datasets, e.g., the classical GeoQuery dataset. - I would have appreciated a discussion of the statistical properties of the results: with the given number of tests, what is the probability that differences are generated by random noise, and does a regression on the different design decisions give us a better idea of the importance of the factors? - The paper mentions that a heuristic is used to identify variable names in the Django corpus; however, I could not find information on how this heuristic works. Another detail that was not clear to me is whether the BERT model was fine-tuned and how the variable strings were incorporated into the BERT model (the paper mentions that they were added to the vocabulary, but not how). For a paper focused on determining what actually matters in building a text-to-code system, I think it is important to be precise about these details. It would take some time to implement your task for other corpora, which potentially use different programming languages, but it might still be possible to strengthen your results using bootstrapping. You could resample some corpora from the existing two and see how stable your results are. If you have some additional space, it would also be interesting to see results broken down by types of examples - e.g., do certain decisions make more of a difference if there are more variables? Typos: - Page 1: "set of value" -> "set of values"; "For instance, Orlanski and Gittens (2021) fine-tunes BART" -> "fine-tune" - Page 2: "Non determinism" -> "Non-Determinism"
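A hedged sketch of the regression analysis suggested above, treating each run as an observation with binary design-decision indicators; the factor names and accuracy values are invented purely for illustration, and statsmodels is assumed to be available:

```python
import pandas as pd
import statsmodels.api as sm

# One row per experimental run: invented binary design factors and the
# observed exact-match accuracy for that configuration.
runs = pd.DataFrame({
    "copy_mechanism": [1, 1, 0, 0, 1, 0, 1, 0],
    "bert_encoder":   [1, 0, 1, 0, 1, 1, 0, 0],
    "accuracy":       [0.71, 0.66, 0.69, 0.60, 0.72, 0.68, 0.65, 0.59],
})

X = sm.add_constant(runs[["copy_mechanism", "bert_encoder"]])
fit = sm.OLS(runs["accuracy"], X).fit()
print(fit.summary())  # coefficient sizes and p-values per design factor
```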
- I would have appreciated a discussion of the statistical properties of the results: with the given number of tests, what is the probability that differences are generated by random noise, and does a regression on the different design decisions give us a better idea of the importance of the factors?
ARR_2022_248_review
ARR_2022
The word error rates in Fig. 3 are a bit confusing. The description in the paper uses word error rate (WER) throughout, but in Fig. 3 the authors use (100 - WER) as the vertical axis. The motivation for doing this is unclear. Note that the WER can exceed 100% in some cases. Some grammar issues need to be reviewed. For example, at line 394 in Section 4.2: "Like the baseline models Shon et al. (2021), we to train on the finer label set (18 entity tags) and evaluate on the combined version (7 entity tags)" -> "we to ...".
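To make the "WER can exceed 100%" point concrete, a small self-contained sketch of WER as word-level edit distance; the example utterances are made up:

```python
def wer(ref_words, hyp_words):
    """Word error rate via edit distance: (S + D + I) / len(ref)."""
    n, m = len(ref_words), len(hyp_words)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if ref_words[i - 1] == hyp_words[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[n][m] / n

ref = "turn the lights off".split()
hyp = "please turn on all of the bright lights now".split()
print(wer(ref, hyp))  # > 1.0, i.e. WER above 100%, so (100 - WER) can go negative
```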
The description in the paper uses word error rate (WER) throughout, but in Fig. 3 the authors use (100 - WER) as the vertical axis. The motivation for doing this is unclear. Note that the WER can exceed 100% in some cases. Some grammar issues need to be reviewed. For example, at line 394 in Section 4.2: "Like the baseline models Shon et al. (2021), we to train on the finer label set (18 entity tags) and evaluate on the combined version (7 entity tags)" -> "we to ...".
ARR_2022_237_review
ARR_2022
of the paper include: - The introduction of relation embeddings for relation extraction is not new; for example, look at knowledge graph completion approaches that explicitly model relation embeddings, or work on distantly supervised relation extraction. However, an interesting experiment would be to show the impact that such embeddings can have by comparing against a simple baseline that does not take advantage of them. - Improvements are incremental across datasets, with the exception of WebNLG. Why are the mean and standard deviation not shown for the test set of DocRED? - It is not clear whether the benefit of the method is purely performance-wise. Could this particular alignment of entity and relation embeddings (the one that gives the largest performance gain) offer some interpretability? (Perhaps this could be shown with a t-SNE plot, i.e., by checking that the embeddings are close in space.) Comments/Suggestions: - Lines 26-27: Multiple entities typically exist in both sentences and documents, and this is the case even for relation classification, not only document-level RE or joint entity and relation extraction. - Lines 97-98: Rephrase the sentence "one that searches for ... objects", as it is currently confusing. - Line 181, Equation 4: $H^s$, $E^s$, $E^o$, etc. are never explained. - Could you show ablations on EPO and SEO? You mention in the Appendix that the proposed method is able to solve all those cases, but you don't show whether your method is better than others. - It would be interesting to also show how the method performs when different numbers of triples reside in the input sequence; would the method help more on sequences with more triples? Questions: - Would the improvement still be observed with a better encoder, e.g., RoBERTa-base, instead of BERT? - How many seeds did you use to report the mean and stdev on the development set? - For DocRED, did you consider the documents as an entire sentence? How do you deal with concepts (multiple entity mentions referring to the same entity)? This information is currently missing from the manuscript.
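A hedged sketch of the kind of closeness check suggested for interpretability, measuring cosine similarity between a learned relation embedding and the pooled embedding of an entity pair; all vectors here are random placeholders rather than the paper's actual embeddings:

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(0)
rel_emb = rng.normal(size=256)    # placeholder relation embedding
pair_emb = rng.normal(size=256)   # placeholder pooled (subject, object) embedding

print(cosine(rel_emb, pair_emb))
# Comparing the average similarity for gold (relation, pair) combinations
# against random pairings would indicate whether the two spaces are aligned;
# a 2-D t-SNE plot of both sets of vectors would show the same thing visually.
```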
- It would be interesting to also show how the method performs when different numbers of triples reside in the input sequence; would the method help more on sequences with more triples? Questions:
ACL_2017_588_review
ACL_2017
and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems. - Weaknesses: 1) None of the models evaluated, except the best-performing model (HIERENC), has access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. 2) The choice of predictors used in all the models is unusual. It is unclear why the similarity between the context embedding and the definition of the entity is a good indicator of how good the entity is as a filler. 3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of the context filled by every possible entity in the vocabulary. This does not seem to be a good idea, since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise. 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies and analyze how the accuracies of the proposed models vary with the frequencies of the entities. - Questions to the authors: 1) An important assumption being made is that the d_e are good replacements for entity embeddings. Was this assumption tested? 2) Have you tried building a classifier that just takes h_i^e as input? I have read the authors' responses. I still think the task and dataset could benefit from a human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this, for the reasons stated above. Hence, I am not changing my scores.
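A hedged sketch of the predictor questioned in weakness 2: candidate entities are ranked by the similarity between a context embedding and the embedding of each entity's definition. The vectors and entity names below are placeholders, not the paper's actual representations:

```python
import numpy as np

def rank_candidates(context_emb, definition_embs):
    """Rank candidate entities by cosine similarity between the context
    embedding and each entity's definition embedding."""
    names = list(definition_embs)
    defs = np.stack([definition_embs[n] for n in names])
    defs = defs / np.linalg.norm(defs, axis=1, keepdims=True)
    ctx = context_emb / np.linalg.norm(context_emb)
    scores = defs @ ctx
    return sorted(zip(names, scores), key=lambda x: -x[1])

rng = np.random.default_rng(0)
context_emb = rng.normal(size=100)
definition_embs = {f"entity_{i}": rng.normal(size=100) for i in range(5)}
print(rank_candidates(context_emb, definition_embs))
```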
2) The choice of predictors used in all the models is unusual. It is unclear why the similarity between the context embedding and the definition of the entity is a good indicator of how good the entity is as a filler.
ACL_2017_684_review
ACL_2017
- Different variants of the model achieve state-of-the-art performance across various data sets. However, the authors do provide an explanation for this (i.e. size of data set and text anonymization patterns). - General Discussion: The paper describes an approach to text comprehension which uses gated-attention modules to achieve state-of-the-art performance. Compared to previous attention mechanisms, the gated-attention reader uses the query embedding and makes multiple passes (a multi-hop architecture) over the document, applying multiplicative updates to the document token vectors before finally producing a classification output regarding the answer. This technique somewhat mirrors how humans solve text comprehension problems. Results show that the approach performs well on large data sets such as CNN and Daily Mail. For the CBT data set, some additional feature engineering is needed to achieve state-of-the-art performance. Overall, the paper is very well written and the model is novel and well motivated. Furthermore, the approach achieves state-of-the-art performance on several data sets. I had only minor issues with the evaluation. The experimental results section does not mention whether the improvements (e.g. in Table 3) are statistically significant and, if so, which test was used and what the p-value was. Also, I couldn't find an explanation for the performance on the CBT-CN data set, where the validation performance is superior to NSE but the test performance is significantly worse.
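A schematic sketch (not the authors' code) of the multiplicative, query-gated update described above, in which each document token vector is scaled element-wise by a token-specific query summary:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_attention_pass(D, Q):
    """One gated-attention hop. D: (doc_len, h) document token vectors,
    Q: (query_len, h) query token vectors. Returns gated document vectors."""
    X = np.zeros_like(D)
    for i, d in enumerate(D):
        alpha = softmax(Q @ d)   # attention of document token i over the query
        q_i = Q.T @ alpha        # token-specific query summary
        X[i] = d * q_i           # multiplicative (gated) update
    return X

# Toy usage: 6 document tokens, 3 query tokens, hidden size 4.
D = np.random.default_rng(0).normal(size=(6, 4))
Q = np.random.default_rng(1).normal(size=(3, 4))
print(gated_attention_pass(D, Q).shape)  # (6, 4)

# Stacking several such passes, with an encoder between hops, yields the
# multi-hop architecture; a final layer then scores candidate answers.
```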
- Different variants of the model achieve state-of-the-art performance across various data sets. However, the authors do provide an explanation for this (i.e. size of data set and text anonymization patterns).