paper_id: string (10-19 chars)
venue: string (15 classes)
focused_review: string (7-10.2k chars)
point: string (47-690 chars)
NIPS_2016_265
NIPS_2016
1. For the captioning experiment, the paper compares to related work only on an unofficial test or dev set; the final results should instead be compared on the official COCO leaderboard on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and have significantly improved results (see the leaderboard; the paper should at least compare to the ones for which a corresponding publication is available). 2. A human evaluation for caption generation would be more convincing, as the automatic evaluation metrics can be misleading. 3. It is not clear from Section 4.2 how the supervision is injected for the source code caption experiment. While it is overall interesting work, for acceptance at least points 1 and 3 of the weaknesses have to be addressed. ==== post author response === The authors promised to include the results from 1. in the final version. For 3. it would be good to state it explicitly in Section 4.2. I encourage the authors to include the additional results they provided in the rebuttal, e.g. T_r, in the final version, as they provide more insight into the approach. My concerns and, as far as I can see, those of the other reviewers have been largely addressed; I thus recommend to accept the paper.
1. For the captioning experiment, the paper compares to related work only on an unofficial test or dev set; the final results should instead be compared on the official COCO leaderboard on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and have significantly improved results (see the leaderboard; the paper should at least compare to the ones for which a corresponding publication is available).
NIPS_2022_1200
NIPS_2022
Weakness: Originality: 1. I want to know whether this paper is the first to study the problem of robust collaborative inference, where there are both arbitrary agents and adversarial agents. The arbitrary agents are easy to identify. However, I am afraid the proposed method achieves performance similar to the baselines in identifying the adversarial agents. From Eq. (5), the framework aims to find a combined feature l which is on the manifold and is near h. The manifold projection could get similar results for the adversarial sub-features. Could the authors discuss this further? Writing: 1. After many readings, I think I understand this paper. The authors introduce their method in Section 2.3, which is very simple. However, it relies on a block-sparse structure, which is described in detail in Section 4. This can cause confusion when trying to understand the proposed method. 2. The notations are confusing. For example, h and l both denote the feature. Why not use a single letter (or its variants)? Also, the citation format appears to be ICLR rather than NeurIPS. Theoretical analysis: 1. This paper provides an extensive theoretical analysis. I suggest the authors discuss further what the analysis means: compared with the baselines, why can CoPur do better? 2. Could the authors give an intuitive explanation of the effect of the sparsity α on CoPur? Experiments: 1. From the ablation studies, CoPur achieves better performance than the manifold projection; what if there are different Ω_c and different Ω_adv? 2. More analysis would be helpful, for example, a comparison of optimization efficiency. The authors discuss the limitations in the Appendix. I have no other suggestions.
2. The notations are confusing. For example, h and l both denote the feature. Why not use a single letter (or its variants)? Also, the citation format appears to be ICLR rather than NeurIPS. Theoretical analysis:
NIPS_2016_238
NIPS_2016
- My biggest concern with this paper is the fact that it motivates “diversity” extensively (even the word diversity is in the title) but the model does not enforce diversity explicitly. I was all excited to see how the authors managed to get the diversity term into their model and got disappointed when I learned that there is no diversity. - The proposed solution is an incremental step considering the relaxation proposed by Guzman et al. Minor suggestions: - The first sentence of the abstract needs to be re-written. - Diversity should be toned down. - line 108, the first “f” should be “g” in “we fixed the form of ..” - extra “.” in the middle of a sentence in line 115. One Question: For the baseline MCL with deep learning, how did the authors ensure that each of the networks has converged to reasonable results? Cutting the learners early on might significantly affect the ensemble performance.
- The first sentence of the abstract needs to be re-written.
NIPS_2020_1476
NIPS_2020
- As mentioned before, the datasets used in the experiments are all very small. It would be more convincing to see some results on medium or even large datasets such as ImageNet. But this is just a minor issue and it does not affect the overall quality of the paper. - Which model did you use in Section 5 for the image recognition task? To some extent it shows the capability of Augerino on this task. However, on image recognition the network architecture strongly affects the result. It would be interesting to see what kind of chemical reaction takes place between Augerino and different DNN architectures. --------------- after rebuttal ----------------- - Regarding the authors' response and all the other review comments, I agree with R4 that there are still some important issues in this paper that need to be reworked before publication. I have thus decided to lower my rating. I would like to encourage the authors to re-submit after revision.
- As mentioned before, the datasets used in the experiments are all very small. It would be more convincing to see some results on medium or even large datasets such as ImageNet. But this is just a minor issue and it does not affect the overall quality of the paper.
NIPS_2017_188
NIPS_2017
of different approaches. 5) Some formulations are not quite clear, such as “limited stochasticity” vs “powerful decoder” in lines 88 and 96. Also the statement in line 111 about “approximately optimizing the KL divergence” and the corresponding footnote look a bit too abstract - so do the authors optimize it or not? 6) In the bibliography the authors tend to ignore the ICLR conference and list many officially published papers as arXiv preprints. 7) Putting a whole section on cross-domain relations in the appendix is not good practice at all. I realize it’s difficult to fit all content into 8 pages, but it’s the job of the authors to organize the paper in such a way that all important contributions fit into the main paper. Overall, I am in borderline mode. The results are quite good, but the novelty seems limited.
6) In the bibliography the authors tend to ignore the ICLR conference and list many officially published papers as arXiv preprints.
ARR_2022_356_review
ARR_2022
1. As mentioned in the Paper Summary, a clear distinction between the 3 classes of Extreme Speech is needed. While the authors included definitions, I still find it difficult to differentiate derogatory extreme speech from exclusionary extreme speech. For instance, in the sample data file you provided, why is the instance "I support it 100/% #मुस्लिमो_का_संपूर्ण_बहिष्कार" exclusionary extreme speech rather than derogatory extreme speech? It seems that annotators took into account local regulation over speech (lines 438-441) - which regulation exactly? Since local regulation plays a role in annotations, does it affect zero-shot cross-country classification? - maybe the poor performance is due to annotation variance rather than model capability (lines 546-548 vs. lines 553-558). I understand that this task is complicated and believe that adding some examples will help. If possible, providing a screenshot or text description of the annotation guidelines would be great. 2. The dataset contains only extreme speech - it seems that the authors somehow filtered out the neutral text (or the text that does not require moderation according to their definitions). Why did you decide to discard it? Who set the criteria? 1. In Table 14, the α and overall accuracy are especially low for India. Is there any explanation? 2. In the appendix, there are many tables with very little description or even none. If a table is added, it would be better to include relevant discussion; otherwise, readers can only guess what it means. 3. One interesting analysis would be comparing annotations between passages using only the local language and those using the local language and English (code-switching).
1. As mentioned in the Paper Summary, a clear distinction between the 3 classes of Extreme Speech is needed. While the authors included definitions, I still find it difficult to differentiate derogatory extreme speech from exclusionary extreme speech. For instance, in the sample data file you provided, why is the instance "I support it 100/% #मुस्लिमो_का_संपूर्ण_बहिष्कार" exclusionary extreme speech rather than derogatory extreme speech? It seems that annotators took into account local regulation over speech (lines 438-441) - which regulation exactly? Since local regulation plays a role in annotations, does it affect zero-shot cross-country classification?
NIPS_2019_165
NIPS_2019
of the approach and experiments or list future directions for readers. The writeup is exceptionally clear and well organized-- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning. 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want me to look at the learning curves? Does a worse-performing model always result in structural collapse? What is the accuracy number? For the last task, or the average? 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page. 4. Explain the scramble network better... 5. Fig 1: are these the same plots, just colored differently? It would be nice to keep all three on the same scale (the left one seems condensed). M-PHATE results in significantly more interpretable visualization of evolution than previous work. It also preserves neighbors better (Question: why do you think t-SNE works better in two conditions? The difference is very small, though). On continual learning tasks, M-PHATE clearly distinguishes poorly performing learning algorithms via a collapse. (See the question about this in 5. Improvement). The generalization vignette shows that the heterogeneity in M-PHATE output correlates with performance. I would really like to recommend a strong accept for this paper, but my major concern is that the vignettes focus on one dataset (MNIST) and one NN architecture (MLP), which makes the experiments feel incomplete. The results and observations made by the authors would be much more convincing if they could repeat these experiments for more datasets and NN architectures.
3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page. 4. Explain the scramble network better...
NIPS_2022_1034
NIPS_2022
Regarding the background: the authors should consider adding a preliminaries section to introduce the background knowledge on nonparametric kernel regression, kernel density estimation, and the generalized Fourier Integral theorem, which could help the readers easily follow the derivation in Section 2 and understand the motivation for using the Fourier Integral theorem as a guide to developing a new self-attention mechanism. Regarding the experimental evaluation: the issues are three-fold. 1) Since the authors provide an analysis of the approximation error between estimators and true functions (Theorems 1 and 2), it would be informative to provide an empirical evaluation of these quantities on real data as further verification. 2) The experiments should be more comprehensive and general. For both the language modeling task and image classification task, the model size is limited and the baselines are restrictive. 3) Since the FourierFormer needs customized operators for its implementation, the authors should also provide memory/time cost profiling compared to popular Transformer architectures. Based on these issues, the efficiency and effectiveness of the FourierFormer are doubtful. -------After Rebuttal------- I thank the authors for the detailed response. Most of my concerns have been addressed. I have updated my score to 6.
2) The experiments should be more comprehensive and general. For both the language modeling task and image classification task, the model size is limited and the baselines are restrictive.
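As a pointer to the kind of background this review asks for (notation here is mine, not the submission's), the two standard ingredients can be stated compactly:

$$
\hat{m}_h(x) \;=\; \frac{\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right) y_i}{\sum_{i=1}^{n} K\!\left(\frac{x - x_i}{h}\right)}
\qquad \text{(Nadaraya-Watson kernel regression)},
$$

$$
f(x) \;=\; \lim_{R \to \infty} \frac{1}{2\pi} \int_{-R}^{R} \int_{\mathbb{R}} \cos\!\big(s\,(x - y)\big)\, f(y)\, \mathrm{d}y\, \mathrm{d}s
\qquad \text{(classical Fourier integral theorem, } f \in L^{1}(\mathbb{R})\text{)}.
$$

Reading self-attention as a kernel regression of values on keys, with the cosine product playing the role of the kernel, is presumably the link Section 2 of the reviewed paper relies on, which is why a short preliminaries section along these lines would help readers.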
NIPS_2020_1451
NIPS_2020
1. Unlike the works of HaoChen and Sra and of Nagaraj et al., this work uses the fact that all component functions f_i are mu-strongly convex. 2. The authors need to explain, via solid examples, why removing some of the assumptions such as bounded variance and bounded gradients is an important contribution. 3. The quantity sigma^{*} being finite also implies that all the gradients are finite, via the smoothness property of the functions f_i, and gives a natural upper bound.
2. The authors need to explain, via solid examples, why removing some of the assumptions such as bounded variance and bounded gradients is an important contribution.
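The reasoning behind point 3 can be made explicit. Assuming, as is standard in this line of work (the review does not define it), that $\sigma^{*}$ measures the gradient dispersion at the minimizer $x^{*}$, $L$-smoothness of each $f_i$ already bounds all component gradients on any bounded region:

$$
\sigma^{*2} \;=\; \frac{1}{n}\sum_{i=1}^{n} \big\|\nabla f_i(x^{*})\big\|^{2},
\qquad
\big\|\nabla f_i(x)\big\| \;\le\; \big\|\nabla f_i(x^{*})\big\| + L\,\big\|x - x^{*}\big\|.
$$

So a finite $\sigma^{*}$ together with smoothness gives the natural upper bound the review refers to, without assuming bounded gradients separately.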
ARR_2022_12_review
ARR_2022
I feel the design of NVSB and some experimental results need more explanation (more information in the section below). 1. In Figure 1, given that the experimental dataset has paired amateur and professional recordings from the same singer, what are the main rationales for (a) having a separate timbre encoder module, and (b) SADTW taking the outputs of the content encoder (and not the timbre encoder) as input? 2. For results shown in Table 3, how to interpret: (a) For Chinese MOS-Q, NVSB is comparable to GT Mel A. (b) For Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CI.
2. For results shown in Table 3, how to interpret: (a) For Chinese MOS-Q, NVSB is comparable to GT Mel A. (b) For Chinese and English MOS-V, Baseline and NVSB have overlapping 95% CI.
NIPS_2021_815
NIPS_2021
- In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript, and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization contrasting homomorphism counting with subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth, since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, nor on how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. From my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well. Additional comments: I think that something is missing from Proposition 3. In particular, if I understood correctly, the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be as strong as 2-WL (which by the way is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that graphs are of unbounded size? Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of $C_{m+1}$ are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts. Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?). In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs.
Could the authors comment on this choice? In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines. The method of Bouritsas et al., arXiv’20 is called "Graph Substructure Networks" (instead of "Structure"). I encourage the authors to correct this. After rebuttal The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements to the presentation, I hope will increase the impact of the paper. Limitations: The limitations are clearly stated in section 1, mainly by referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting. Negative societal impact: A satisfactory discussion is included at the end of the experimental section.
- The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes.
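For readers unfamiliar with the result invoked in the GSN comparison above, the identity from Curticapean et al. (2017) can be written as follows (my paraphrase, not notation from the submission): the subgraph count of a pattern P is a fixed rational linear combination of homomorphism counts over its spasm,

$$
\mathrm{sub}(P, G) \;=\; \sum_{Q \in \mathrm{spasm}(P)} \alpha_{Q}\, \mathrm{hom}(Q, G), \qquad \alpha_{Q} \in \mathbb{Q},
$$

where spasm(P) contains the graphs obtained from P by contracting sets of pairwise non-adjacent vertices. This is what underlies the claim that a spasm(P)-MPNN needs only one extra layer to recover P-subgraph counts and hence to simulate a P-GSN.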
NIPS_2016_279
NIPS_2016
Weakness: 1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if they could provide empirical evidence that the proposed model captures diffusion phenomena in the real world. 2. Though the IIM problem is defined on the Ising network model, all the analysis is based on the mean-field approximation. Therefore, it would be great if the authors could carry out experiments to show how similar the mean-field approximation is to the true distribution, via methods such as Gibbs sampling. Detailed Comments: 1. Section 3, Paragraph 1, Line 2, if there there exists -> if there exists.
1. The main concern with the paper is the applicability of the model to real-world diffusion processes. Though the authors define an interesting problem with elegant solutions, it would be great if they could provide empirical evidence that the proposed model captures diffusion phenomena in the real world.
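To make the comparison requested in point 2 of this review concrete (written here in generic Ising notation, which may differ from the paper's): the naive mean-field approximation replaces the true Gibbs distribution by a product distribution whose magnetizations solve the self-consistency equations

$$
m_i \;=\; \tanh\!\Big(\beta \Big(\sum_{j} J_{ij}\, m_j + h_i\Big)\Big), \qquad i = 1, \dots, n,
$$

so a direct check would compare these $m_i$ against Monte Carlo estimates of $\mathbb{E}[s_i]$ under $p(s) \propto \exp\big(\beta\big(\sum_{i<j} J_{ij} s_i s_j + \sum_i h_i s_i\big)\big)$ obtained with a Gibbs sampler.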
NIPS_2020_1857
NIPS_2020
- The paper is not particularly novel or exciting since it takes algorithms already applied in the field of semantic segmentation and applies them to the stereo depth estimation problem. The idea of using AutoML for stereo is not particularly novel either, as stated by the authors themselves, even if the proposed algorithm outperforms the previous proposal. - From my point of view the main reason to use AutoML approaches, besides improving raw performance, is extracting hints that can be reused in the design of new network architectures in the future. Unfortunately the authors did not spend much time commenting on these aspects. For example, what might be the biggest takeaways from the found architecture? - I would have liked to see an additional ablation study to better highlight the contribution of the proposed method with respect to AutoDispNet. The main differences with respect to the previously published work are the search performed also at the network level and the use of two separate feature and matching networks. Ablating the contribution of one and the other might have been interesting. - The evaluation on Middlebury should, for fairness, include a test of the found architecture running at quarter resolution to match the testing setup of all the other deep architectures. While it is true that the ability to run at higher resolution is an advantage of the proposed method, there is nothing (besides hardware limitations) preventing the other networks from running at higher resolution as well. Therefore I think that a fair comparison between networks running on the same test setup would improve the paper by highlighting the contribution of the proposed method. - Some minor implementation details are missing from the paper; I will expand on this point in my questions to the authors.
- From my point of view the main reason to use AutoML approaches, besides improving raw performance, is extracting hints that can be reused in the design of new network architectures in the future. Unfortunately the authors did not spend much time commenting on these aspects. For example, what might be the biggest takeaways from the found architecture?
NIPS_2021_1343
NIPS_2021
Weakness - I am not convinced that a transformer free of locality bias is indeed the best option. In fact, due to the limited speed of information propagation, neighboring agents should naturally have more impact on each other than far-away nodes. I hope the authors can explain further why the transformer's lack of locality is not a concern here. - Due to the above, I feel graph networks capture this better than the too-free transformer, and their lack of global context / the "over-squashing" might be mitigated by adding non-local blocks (e.g., check "Non-Local Graph Neural Networks" or several other works proposing "global attention" for GNNs). - The authors also claimed "traditional GNNs" cannot handle direction-feature coupling: that is not true. See a recent work, "MagNet: A Neural Network for Directed Graphs", and I am sure there is more prior art. The authors are asked to consider whether those directional GNNs could suit their task well too. - The transformer is introduced as a centralized agent. Its computational overhead can become formidable when the network gets larger. The authors should discuss how they plan to address the scalability bottleneck.
- I am not convinced that a transformer free of locality bias is indeed the best option. In fact, due to the limited speed of information propagation, neighboring agents should naturally have more impact on each other than far-away nodes. I hope the authors can explain further why the transformer's lack of locality is not a concern here.
NIPS_2017_351
NIPS_2017
- As I said above, I found the writing / presentation a bit jumbled at times. - The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed). - I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational, which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify. - Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly. - Figure 3 is never referenced unless I missed it. Some things I'm curious about: - What values were learned for the linear coefficients for combining the marginalized potentials in equation (1)? It would be interesting if different modalities took advantage of different potential orders. - I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to the swap to the MCB / MCT decision computation modules? - The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
- The discussion of using sequential MCB vs a single MCT layer for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed?
NIPS_2019_757
NIPS_2019
Weakness 1. Online Normalization introduces two additional hyper-parameters: forward and backward decay factors. The authors use a logarithmic grid sweep to search for the best factors. This operation largely increases the training cost of Online Normalization. Question: 1. The paper mentions that Batch Normalization has the problem of gradient bias because it uses a mini-batch to estimate the real gradient distribution. In contrast, Online Normalization can be implemented locally within individual neurons without the dependency on batch size. It sounds like Online Normalization and Batch Normalization are two different ways to estimate the real gradient distribution. I am confused about why Online Normalization is unbiased and Batch Normalization is biased. ** I have read other reviews and the author response. I will stay with my original score. **
1. The paper mentions that Batch Normalization has the problem of gradient bias because it uses a mini-batch to estimate the real gradient distribution. In contrast, Online Normalization can be implemented locally within individual neurons without the dependency on batch size. It sounds like Online Normalization and Batch Normalization are two different ways to estimate the real gradient distribution. I am confused about why Online Normalization is unbiased and Batch Normalization is biased. ** I have read other reviews and the author response. I will stay with my original score. **
NIPS_2016_537
NIPS_2016
weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition.
ICLR_2022_212
ICLR_2022
Weakness: 1. The introduction of the motivation (the concept of in-context bias) is not easy to understand at the very beginning. The paper says: “the pretrained NLM can model much stronger dependencies between text segments that appeared in the same training example, than it can between text segments that appeared in different training examples.” Actually, it seems quite natural to me, and I did not realize it was a problem until I saw more explanation in Section 1.1. 2. The theory is a bit complicated and not easy to follow. 3. The experiments are limited. The authors only conduct the evaluation on sentence similarity tasks and open domain QA tasks. However, there are many other tasks that involve sentence pairs. For example, sentence inference tasks such as MNLI and RTE are common tasks in the NLP field. The authors should conduct experiments on more types of sentence pair tasks.
3. The experiments are limited. The authors only conduct the evaluation on sentence similarity tasks and open domain QA tasks. However, there are many other tasks that involve sentence pairs. For example, sentence inference tasks such as MNLI and RTE are common tasks in the NLP field. The authors should conduct experiments on more types of sentence pair tasks.
NIPS_2019_1408
NIPS_2019
- The paper is not that original given the amount of work in learning multimodal generative models: — For example, from the perspective of the model, the paper builds on top of the work by Wu and Goodman (2018) except that they learn a mixture of experts rather than a product of experts variational posterior. — In addition, from the perspective of the 4 desirable attributes for multimodal learning that the authors mention in the introduction, it seems very similar to the motivation in the paper by Tsai et al. Learning Factorized Multimodal Representations, ICLR 2019, which also proposed a multimodal factorized deep generative model that performs well for discriminative and generative tasks as well as in the presence of missing modalities. The authors should have cited and compared with this paper. ****************************Quality**************************** Strengths: - The experimental results are nice. The paper claims that their MMVAE model fulfills all four criteria, including (1) latent variables that decompose into shared and private subspaces, (2) being able to generate data across all modalities, (3) being able to generate data across individual modalities, and (4) improving discriminative performance in each modality by leveraging related data from other modalities. Let's look at each of these 4 in detail: — (1) Yes, their model does indeed learn factorized variables, which can be shown by good conditional generation on the MNIST+SVHN dataset. — (2) Yes, joint generation (which I assume to mean generation from a single modality) is performed on vision -> vision and language -> language for CUB, — (3) Yes, conditional generation can be performed on CUB via language -> vision and vice versa. Weaknesses: - (continuing on whether the model does indeed achieve the 4 properties that the authors describe) — (3 continued) However, it is unclear how significant the performance is for both 2) and 3) since the authors report no comparisons with existing generative models, even simple ones such as a conditional VAE from language to vision. In other words, what if I forgo the complicated MoE VAE and all the components of the proposed model, and simply use a conditional VAE from language to vision? There are many ablation studies that are missing from the paper, especially since the model is so complicated. — (4) The authors do not seem to have performed extensive experiments for this criterion since they only report the performance of a simple linear classifier on top of the latent variables. There has been much work in learning discriminative models for multimodal data involving aligning or fusing language and vision spaces. Just to name a few involving language and vision: - Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding, EMNLP 2016 - DeViSE: A Deep Visual-Semantic Embedding Model, NeurIPS 2013 Therefore, it is important to justify why I should use this MMVAE model when there is a lot of existing work on fusing multimodal data for prediction. ****************************Clarity**************************** Strengths: - The paper is generally clear. I particularly liked the introduction of the paper, especially the motivating Figures 1 and 2. Figure 2 is particularly informative given what we know about multimodal data and multimodal information. - The table in Figure 2 nicely summarizes some of the existing works in multimodal learning and whether they fulfill the 4 criteria that the authors have pointed out to be important.
Weaknesses: - Given the authors' great job in setting up the paper via Figure 1, Figure 2, and the introduction, I was rather disappointed that section 2 did not continue this clear flow. To begin, a model diagram/schematic at the beginning of section 2 would have helped a lot. Ideally, such a model diagram could closely resemble Figure 2, where you have already set up a nice 'Venn Diagram' of multimodal information. Given this, your model basically assigns latent variables to each of the overlapping information spaces as well as arrows (neural network layers) as the inference and generation paths from the variables to observed data. Showing such a detailed model diagram in an 'expanded' or 'more detailed' version of Figure 2 would be extremely helpful in understanding the notation (of which there is a lot), how MMVAE accomplishes all 4 properties, as well as the inference and generation paths in MMVAE. - Unfortunately, the table in Figure 2 is not super complete given the amount of work that has been done in latent factorization (e.g. Learning Factorized Multimodal Representations, ICLR 2019) and purely discriminative multimodal fusion (i.e. point d on synergy). - There are a few typos and stylistic issues: 1. line 18: "Given the lack explicit labels available" -> "Given the lack of explicit labels available" 2. line 19: "can provided important" -> "can provide important" 3. line 25: "between (Yildirim, 2014) them" -> "between them (Yildirim, 2014)" 4. and so on… ****************************Significance**************************** Strengths: - This paper will likely be a nice addition to the current models we have for processing multimodal data, especially since the results are quite interesting. - The paper did a commendable job in attempting to perform experiments to justify the 4 properties they outlined in the introduction. - I can see future practitioners using the variational MoE layers for encoding multimodal data, especially when there is missing multimodal data. Weaknesses: - That being said, there are some important concerns, especially regarding the utility of the model as compared to existing work. In particular, there are some statements in the model description where it would be nice to have some experimental results in order to convince the reader that this model compares favorably with existing work: 1. line 113: You set \alpha_m uniformly to be 1/M, which implies that the contributions from all modalities are the same. However, works in multimodal fusion have shown that dynamically weighting the modalities is quite important because 1) modalities might contain noise or uncertain information, 2) different modalities contribute differently to the prediction (e.g. in a video when a speaker is not saying anything, their visual behaviors are more indicative than their speech or language behaviors). Recent works therefore study, for example, gated attentions (e.g. Gated-Attention Architectures for Task-Oriented Language Grounding, AAAI 2018 or Multimodal Sentiment Analysis with Word-level Fusion and Reinforcement Learning, ICMI 2017) to learn these weights. How does your model compare to this line of related work, and can your model be modified to take advantage of these fusion methods? 2. line 145-146: "We prefer the IWAE objective over the standard ELBO objective not just for the fact that it estimates a tighter bound, but also for the properties of the posterior when computing the multi-sample estimate." -> Do you have experimental results that back this up?
How significant is the difference? 3. line 157-158: "needing M^2 passes over the respective decoders in total" -> Do you have experimental runtimes to show that this is not a significant overhead? The number of modalities is quite small (2 or 3), but when the decoders are large recurrent or deconvolutional layers then this could be costly. ****************************Post Rebuttal**************************** The author response addressed some of my concerns regarding novelty, but I am still inclined to keep my score since I do not believe that the paper substantially improves over (Wu and Goodman, 2018) and (Tsai et al., 2019). The clarity of writing can be improved in some parts and I hope that the authors will make these changes. Regarding the quality of generation, it is definitely not close to SOTA language models such as GPT-2, but I would still give the authors credit since generation is not their main goal, but rather one of their 4 defined goals to measure the quality of multimodal representation learning.
- The paper is generally clear. I particularly liked the introduction of the paper, especially the motivating Figures 1 and 2. Figure 2 is particularly informative given what we know about multimodal data and multimodal information.
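For context on the objective discussed in point 2 of this review (lines 145-146 of the reviewed paper), the importance-weighted bound of Burda et al. (2016) is

$$
\mathcal{L}_{K}(x) \;=\; \mathbb{E}_{z_{1},\dots,z_{K} \sim q_{\phi}(z \mid x)}\!\left[\log \frac{1}{K} \sum_{k=1}^{K} \frac{p_{\theta}(x, z_{k})}{q_{\phi}(z_{k} \mid x)}\right],
\qquad
\log p_{\theta}(x) \;\ge\; \mathcal{L}_{K}(x) \;\ge\; \mathcal{L}_{1}(x) = \mathrm{ELBO}(x),
$$

with $\mathcal{L}_{K}$ nondecreasing in $K$ and approaching $\log p_{\theta}(x)$ as $K \to \infty$; the review's question is whether the claimed benefit beyond this tighter bound (the effect on the learned posterior) is backed by experiments.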
ztT70ubhsc
ICLR_2025
- The professional sketches (Multi-Gen-20M) considered in this work are binarised versions of HED edges, which is very different from what a real artist would draw (no artist or professional sketcher would produce lines like those in Figure 1). This makes the basic assumptions/conditions of the paper not very rigorous, somewhat deviating from the ambitious objectives, i.e., dealing with pro-sketch and any other complexity levels with a unified model. - The modulator is heuristically designed. It is hard to judge whether there is a scalability issue that might need tedious hyperparameter tuning for diverse training data. - The effectiveness and applicability of the knob mechanism are questionable. - From Figure 6, the effect does not seem very pronounced: in the volcano example, the volcano corresponding to the intermediate gamma value appears to match the details of the input sketch better; in Keith's example (the second row from the bottom), the changes in facial details are also not noticeable. - Besides, the user has to try different knob values until they are satisfied (and this may be pretty different for diverse input sketches) since it has no apparent relation to the complexity level the user wants from the input sketches. - The impact of fine-grained cues is hard to manage precisely, as they have been injected into the model at early denoising steps, and the effect will last in the following denoising steps. - The current competitors in the experiments are not designed for sketches. It would be great if some sketch-guided image generation works, e.g., [a], could be compared and discussed. - There is a "second evaluation set" with 100 hand-drawn images created by novice users used for experiments. It would be great to show these sketch images for completeness. [a] Sketch-Guided Text-to-Image Diffusion Models, SIGGRAPH 2023
- The modulator is heuristically designed. It is hard to judge whether there is a scalability issue that might need tedious hyperparameter tuning for diverse training data.
NIPS_2017_235
NIPS_2017
weakness even if true but worth discussing in detail since it could guide future work.) [EDIT: I see now that this is *not* the case, see response below] I also feel that the discussion of Chow-Liu is missing a very important aspect. Chow-Liu doesn't just correctly recover the true structure when run on data generated from a tree. Rather, Chow-Liu finds the *maximum-likelihood* tree for data from an *arbitrary* distribution. This is a property that almost all follow-up work does not satisfy (Srebro showed bounds). The discussion in this paper is all true, but doesn't mention that "maximum likelihood" or "model robustness" issue at all, which is hugely important in practice. For reference: The basic result is that given a single node $u$ and a hypothesized set of "separators" $S$ (neighbors of $u$), there will be some set of nodes $I$ with size at most $r-1$ such that $u$ and $I$ have positive conditional mutual information. The proof of the central result proceeds by setting up a "game", which works as follows: 1) We pick a node $X_u$ to look at. 2) Alice draws two joint samples $X$ and $X'$. 3) Alice draws a random value $R$ (uniformly from the space of possible values of $X_u$). 4) Alice picks a random set of neighbors of $X_u$, call them $X_I$. 5) Alice tells Bob the values of $X_I$. 6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his wager if $X_u=R$, loses his wager if $X'_u=R$, and nothing happens if both or neither are true. Here I first felt like I *must* be missing something, since this is just establishing that $X_u$ has mutual information with its neighbors. (There is no reference to the "separator" set S in the main result.) However, it later appears that this is just a warmup (regular mutual information) and can be extended to the conditional setting. Actually, couldn't the conditional setting itself be phrased as a game, something like 1) We pick a node $X_u$ and a set of hypothesized "separators" $X_S$ to look at. 2) Alice draws two joint samples $X$ and $X'$. Both are conditioned on the same random value for $X_S$. 3) Alice draws a random value $R$ (uniformly from the space of possible values of $X_u$). 4) Alice picks a random set of nodes (not including $X_u$ or $X_S$), call them $X_I$. 5) Alice tells Bob the values of $X_I$. 6) Bob gets to wager on whether $X_u=R$ or $X'_u=R$. Bob wins his wager if $X_u=R$, loses his wager if $X'_u=R$, and nothing happens if both or neither are true. I don't think this adds anything to the final result, but is an intuition for something closer to the final goal. After all this, the paper discusses an algorithm for greedily learning an MRF graph (in sort of the obvious way, by exploiting the above result). There is some analysis of how often you might go wrong estimating mutual information from samples, which I appreciate. Overall, as far as I can see, the result appears to be true. However, I'm not sure that the purely theoretical result is sufficiently interesting (at NIPS) to be published with no experiments. As I mentioned above, Chow-Liu has the major advantage of finding the maximum likelihood solution, which the current method does not appear to have. (It would violate hardness results due to Srebro.) Further, note that the bound given in Theorem 5.1, despite the high order, is only for correctly recovering the structure of a single node, so there would need to be another level of applying the union bound to this result with lower delta to get a correct full model. EDIT AFTER REBUTTAL: Thanks for the rebuttal.
I see now that I should understand $r$ not as the maximum clique size, but rather as the maximum order of interactions. (E.g. if one has a fully-connected Ising model, you would have r=2 but the maximum clique size would be n). This answers the question I had about this being a generalization of Bressler's result. (That is, this paper's result is a strict generalization.) This does slightly improve my estimation of this paper, though I thought this was a relatively small concern in any case. My more serious concern is whether a purely theoretical result is appropriate for NIPS.
3) Alice draws a random value $R$ (uniformly from the space of possible values of $X_u$). 4) Alice picks a random set of nodes (not including $X_u$ or $X_S$), call them $X_I$. 5) Alice tells Bob the values of $X_I$.
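To make the "maximum likelihood" property highlighted in this review concrete: with edge weights given by the empirical mutual informations, Chow-Liu returns

$$
T^{\star} \;=\; \arg\max_{T \ \text{spanning tree}} \;\sum_{(u,v) \in T} \hat{I}(X_u; X_v),
$$

which is exactly the maximum-likelihood tree-structured model for data from an arbitrary distribution, not just for data generated by a tree; this is the robustness property the review argues most follow-up methods (including, apparently, the one under review) lack.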
kfFmqu3zQm
ICLR_2025
1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feature-replay methods have shown great potential, like [R1] in continual learning and [R2] (FRoST) in CCD. A more recent work [R3] also employs feature replay to continually adjust the feature space, which also obtains remarkable performance for continual category discovery. 2. The proposed method is naïve and the novelty is relatively limited. The method includes basic clustering and number estimation. I am afraid that this method may not provide many insights to the community. 3. The feature space (i.e., the backbone) is only tuned using labeled known classes; does this result in overfitting, given that the data is purely labeled but limited in number? 4. The class number estimation algorithm requires a pre-defined threshold $d_{min}$, which is intractable to define in advance and could largely impact the results. Some experiments and ablations should be included. 5. Detailed results of each continual session (at least for one or two datasets) should also be presented to show the performance. References: [R1]. Prototype Augmentation and Self-Supervision for Incremental Learning. CVPR 2021. [R2]. Class-incremental Novel Class Discovery. ECCV 2022. [R3]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. NeurIPS 2024. arXiv:2410.0653.
1. Some conclusions are not convincing. For example, the paper contends that *We believe that continuous learning with unlabeled data accumulates noise, which is detrimental to representation quality.* The results might come from the limited exploration of combination methods. In rehearsal-free continual learning, feature-replay methods have shown great potential, like [R1] in continual learning and [R2] (FRoST) in CCD. A more recent work [R3] also employs feature replay to continually adjust the feature space, which also obtains remarkable performance for continual category discovery.
ICLR_2021_1043
ICLR_2021
, which justify the score: • The theoretical developments presented in the paper build on the Rademacher complexity, but ignore the conclusions drawn by Zhang et al. in Section 2.2 of their ICLR 2017 paper (Understanding deep learning requires rethinking generalization). • The theoretical developments build on the assumption that (i) there exists a lower bound, valid for any input, to the distance between the output of each pair of neurons, and (ii) the proposed diversity loss increases this lower bound. Those two assumptions are central to the theoretical developments, but are quite arguable. For example, a pair of neurons that is not activated by a sample, which is quite common, leads to a zero lower bound. • The experimental validation is not convincing. Only shallow networks are considered (2 or 3 layers), and the optimization strategy, including the grid search strategy for hyperparameter selection, is not described. Minor issue: positioning with respect to related works is limited. For example, layer redundancy (which is the opposite of diversity) has been considered in the context of network pruning: https://openaccess.thecvf.com/content_CVPR_2019/papers/He_Filter_Pruning_via_Geometric_Median_for_Deep_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
• The experimental validation is not convincing. Only shallow networks are considered (2 or 3 layers), and the optimization strategy, including the grid search strategy for hyperparameter selection, is not described. Minor issue: positioning with respect to related works is limited. For example, layer redundancy (which is the opposite of diversity) has been considered in the context of network pruning: https://openaccess.thecvf.com/content_CVPR_2019/papers/He_Filter_Pruning_via_Geometric_Median_for_Deep_Convolutional_Neural_Networks_CVPR_2019_paper.pdf
NIPS_2020_153
NIPS_2020
* Both this paper and the Nasr et al. paper use number detectors (number selective units) as an indicator of number sense. However, the presence of number selective units is not a necessary condition for number sense. There are potentially other distributed coding schemes (other than tuning curves) that could be employed. It seems like the question that you really want to ask is whether the representation in the last convolutional layer is capable of distinguishing images of varying numerosity. In which case, why not just train a linear probe? Number sense is a cognitive ability, not a property of individual neurons. We don't really care what proportion of units are number selective as long as the network is able to perceive numerosity (which might not require very many units). A larger proportion of number selective units doesn't necessarily imply a better number sense. As such, I question the reliance on the analysis of individual units and would rather see population decoding results. * The motivation for analyzing only the last convolutional layer is not clear. Why would numerosity not appear in earlier layers? * The motivation for using classification rather than regression when training explicitly for numerosity is not well justified. The justification, "numerosity is a raw perception rather than resulting from arithmetic", is not clear. Humans clearly perceive numbers on a scale, not as unrelated categories. That the subjective experience of numerosity does not involve arithmetic does not constrain the neural mechanisms that could underlie that perception. * No effect sizes are reported for number selectivity. Since you did ANOVAs there should be an eta squared for the main effect of numerosity. How number selective are these units?
* The motivation for analyzing only the last convolutional layer is not clear. Why would numerosity not appear in earlier layers?
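A minimal sketch of the population-decoding analysis suggested in the first point of this review, assuming a hypothetical helper `extract_last_conv_features` that returns flattened last-convolutional-layer activations (the helper name and pipeline are illustrative, not the paper's):

```python
# Linear-probe decoding of numerosity from population activity (illustrative sketch).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_numerosity(features: np.ndarray, numerosity: np.ndarray) -> float:
    """Fit a linear readout of numerosity from population activity and
    return held-out accuracy (chance level is 1 / number of numerosity classes)."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, numerosity, test_size=0.2, stratify=numerosity, random_state=0
    )
    probe = LogisticRegression(max_iter=5000)  # linear decoder, no hidden layers
    probe.fit(X_tr, y_tr)
    return probe.score(X_te, y_te)

# Hypothetical usage:
# feats = extract_last_conv_features(model, images)  # (n_images, n_units); name is illustrative
# acc = probe_numerosity(feats, labels)              # labels: numerosity of each image
```

Above-chance held-out accuracy would show that the population supports numerosity discrimination even if few individual units pass a number-selectivity test, which is the distinction the review draws between number sense as a cognitive ability and as a single-unit property.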
ACL_2017_614_review
ACL_2017
- I don't understand the effectiveness of the multi-view clustering approach. Almost all across the board, the paraphrase similarity view does significantly better than other views and their combination. What, then, do we learn about the usefulness of the other views? There is one empirical example of how the different views help in clustering paraphrases of the word 'slip', but there is no further analysis about how the different clustering techniques differ, except on the task directly. Without a more detailed analysis of differences and similarities between these views, it is hard to draw solid conclusions about the different views. - The paper is not fully clear on a first read. Specifically, it is not immediately clear how the sections connect to each other, reading more like disjoint pieces of work. For instance, I did not understand the connections between section 2.1 and section 4.3, so adding forward/backward pointer references to sections should be useful in clearing things up. Relatedly, the multi-view clustering section (3.1) needs editing, since the subsections seem to be out of order, and citations seem to be missing (lines 392 and 393). - The relatively poor performance on nouns makes me uneasy. While I can expect TWSI to do really well due to its nature, the fact that the oracle GAP for PPDBClus is higher than that of most clustering approaches is disconcerting, and I would like to understand the gap better. This also directly contradicts the claim that the clustering approach is generalizable to all parts of speech (124-126), since the performance clearly isn't uniform. - General Discussion: The paper is mostly straightforward in terms of techniques used and experiments. Even then, the authors show clear gains on the lexsub task by their two-pronged approach, with potentially more to be gained by using stronger WSD algorithms. Some additional questions for the authors: - Lines 221-222: Why do you add hypernyms/hyponyms? - Lines 367-368: Why does X^{P} need to be symmetric? - Lines 387-389: The weighting scheme seems kind of arbitrary. Was this indeed arbitrary or is this a principled choice? - Is the high performance of SubstClus^{P} ascribable to the fact that the number of clusters was tuned based on this view? Would tuning the number of clusters based on other matrices affect the results and the conclusions? - What other related tasks could this approach possibly generalize to? Or is it only specific to lexsub?
- Is the high performance of SubstClus^{P} ascribable to the fact that the number of clusters was tuned based on this view? Would tuning the number of clusters based on other matrices affect the results and the conclusions?
p4S5Z6Sah4
ICLR_2024
Note, the below concerns have resulted in a lower score, which I would be happy to increase pending the authors’ responses. **A. Wave fields** The wave-field comparisons, claims, and references seem a bit strained and unnecessary. Presumably, by “wave-field,” the authors simply mean a vector field that supports wave solutions. In any case, since this term is not often used in neuroscience or ML as far as I am aware, a brief definition should be provided if the term is kept. However, I am unsure that it is necessary or helpful. That the brain supports wavelike activity is well-established, and some evidence for this is appropriately outlined by the authors. Many computational neuroscience models support waves in a way that has been mathematically analyzed (e.g., Wilson-Cowan neural field equations). The authors’ discretization methodology suggests a similar connection to such analyses. However, appealing to “physical wave fields” to relate waves and memory seems to be overly speculative and unnecessary for the simple system under study in this manuscript. The brain is a dissipative rather than a conservative system, so that many aspects of physical wave fields may well not apply. Moreover, the single reference the authors do make to the concept does not apply either to the brain or to their wave-RNN. Instead, Perrard et al. 2016 describe a specific study that demonstrates that a particle-pilot wave system can still maintain memory in a very specific way that does not at all clearly apply to brains or the authors’ RNN, despite that study itself examining a dissipative (and chaotic) system. Instead, the readers would benefit much more from gaining an intuition as to why such wavelike activity might benefit learning and recalling sequential inputs. Unfortunately, Fig. 1 does little to help in this vein. However, the concept certainly is simple enough, and the authors provide a few intuitions in the manuscript that help. I believe the manuscript would improve by removing the discussion of wave fields and instead providing (or moving) the intuitive explanations of how waves may help with sequential tasks (e.g., the “register or ‘tape’” description on p. 20) to the same portion of the Introduction. **B. Fourier analysis** Overall, I found the wave and Fourier analysis a bit inconsistent and potentially problematic. While I agree that the wRNNs clearly display waves when plotted directly, the mapping and analysis within the spatiotemporal Fourier domain (FD below) does not always match patterns in the regular spatiotemporal plots (RSP below). Moreover, it’s unclear how much substance the FD analyses add to the results. In more detail: 1. Constant-velocity, 1-D waves don’t need to be transformed to the FD to infer their speeds. The slopes in the RSP correspond to their speeds. For example, in Fig. 2 (top left), there is a wave that begins at unit 250, step ~400, that continues through to unit 0, step ~650, corresponding to a wave speed of ~1.7 units/step, far larger than the diagonal peak shown in the FD below it that would correspond to a speed of ~0.3 units/step, as indicated by the authors. 2. Similar apparent speed mismatches can be observed in the Appendix. E.g., in Fig.
9 (2nd column, top), the slopes of the waves are around 0.35-0.42 units/step (close enough to likely be considered the same speed, especially as they converge in time to form a more clustered wave pulse) from what I can tell, whereas the slopes in the FD below it are ~0.3 for the diagonal (perhaps this is close enough to my rough estimate) and ~0.9, well above any observable wave speed. Perhaps there is a much faster wave that is unobservable in the RSP due to the min/max values set for the image intensity in the plot, but in that case the authors should demonstrate this. Given (a) the potential mismatch in the speeds for the waves that can be observed, (b) the mismatch in the speeds discussed above in Fig. 2, and (c) the fact that some waves may be missed in FD (see below), I would worry about assuming this without checking. 3. As alluded to in the point above, iRNN in Fig. 2 appears to have some fast pulse bursts easily observed in the RSP that don’t show in the FD. For example, there is a very fast wave observable in the RSP in units ~175-180, time steps 0-350. Note, the resolution is poor, but zooming in and scrolling to where the wave begins around unit 175, step 0 makes it clear. If one scrolls vertically such that the bottom of the wave at step 0 is just barely unobservable, then one can see the wave rapidly come into view and continue downwards. Similarly some short-lasting, slower pulses in units near 190, steps 0-350 are observable in the RSP. None of these appear in the FD. Note, this would not take away from the claim that wRNNs facilitate wave activity much more than iRNNs do, but rather that some small amounts—likely insufficient amounts for facilitating sequence learning—of wave activity might still arise in iRNNs. If the authors believe these wavelike activities are aberrant, it would be helpful for them to explain why so. 4. I looked over the original algorithm the authors used (in Section III of “Recognition and Velocity Computation of Large Moving Objects in Images”—RVC paper below—which I would recommend for the authors to cite), and I wonder if an error in the initial calibration steps (steps 1 & 2) occurred that might explain the speed disparities observed between the RSPs and FDs. 5. There do seem to be some different wave speeds—e.g., in Fig. 9, there appear to be fast and narrow excitatory waves overlapping with slow and broad inhibitory waves. But given that each channel has its own wave speed parameter $\nu$, it isn’t clear why a single channel would support multiple wave speeds. This should be explored in greater depth, and if obvious examples of sufficiently different speeds of excitatory waves are known (putatively Fig. 9, 2nd column), these should be clearly shown and carefully described and analyzed. 6. Is there cross-talk across the channels? If so, have the authors examined the images of the hidden units (with dimensions __hidden units__ x __channels__) for evidence of cross-channel waves? If so, perhaps this is one reason for multiple wave speeds to exist per channel? 7. Overall, it is unclear overall what FT adds to the detection of 1-D waves. If there are such waves, we should be able to observe them directly in the RSPs. In skimming over the RVC paper, it seems like it would be most useful in determining velocities of 2-D objects and perhaps wave pulses. That suggests that one place the FD analysis might be useful is if there are cross-channel waves as I mention above. 
If so, the waves should still be observable in the images (and I would encourage such images be shown), but might be more easily characterized following the marginalization decomposition procedure described in the original algorithm in Section III of the RVC paper. Note, the FDs might also facilitate the detection of multiple wave speeds in the network, as potentially shown in Fig. 9. However, in that case it would seem they should only appear in Fig. 9, and if the speeds are otherwise verified. 8. The authors mention they re-sorted the iRNN units to look for otherwise hidden waves. This seems highly problematic. If there are waves, then re-sorting can destroy them, and if there is only random activity then re-sorting can cause them to look like waves. **C. Mechanisms** Finally, while the results are overall impressive, and hypotheses are made regarding the underlying mechanisms for the performance levels of the network, there is too little analysis of these mechanisms. While the ablation study is important and helpful, much more could be done to characterize the relationship between wavelike activity and network performance. **D. Minor** 1. Fig. 2: Both plots on the right have the leftmost y-axis digits obscured 2. Fig. 9, top, plots appear to have their x- and y- labels transposed (or else the lower FD plots and those in Fig. 2 have theirs transposed). 3. Fig. 15 needs axis labels
4. I looked over the original algorithm the authors used (in Section III of “Recognition and Velocity Computation of Large Moving Objects in Images”—RVC paper below—which I would recommend for the authors to cite), and I wonder if an error in the initial calibration steps (steps 1 & 2) occurred that might explain the speed disparities observed between the RSPs and FDs.
NIPS_2016_314
NIPS_2016
Issues I found in the paper include: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same? 2. I can't find details on how they make the network fit the residual instead of directly learning the input - output mapping. - Is it through the use of skip connections? If so, this argument would make more sense if the skip connections exist after every layer (not every 2 layers) 3. It would have been nice if there was an ablation study on which factor plays the most important role in the improvement in performance. Whether it is the number of layers or the skip connections, and how the performance varies when the skip connections are used for every layer. 4. The paper says that almost all existing methods estimate the corruption level at first. There is a high possibility that the same is happening in the initial layers of their Residual net. If so, the only advantage is that theirs is end to end. 5. The authors mention in the Related works section that the use of regularization helps the problem of image restoration, but they don’t use any type of regularization in their proposed model. It would be great if the authors could address these points (mainly 1, 2 and 3) in the rebuttal.
2. I can't find details on how they make the network fit the residual instead of directly learning the input - output mapping.
ICLR_2021_785
ICLR_2021
This paper addresses an interesting problem which is well motivated, well defined and clearly and extensively related to other problem settings from the literature. The paper explains the problem clearly and gives good examples of failure modes of naïve approaches, motivating an alternate approach. The paper shows that indeed performance improvements can be achieved by using such an alternate approach which explicitly considers the one-of-many-learning problem, by training a selection module to select the target used for training dynamically, in a joint fashion using RL. The main weakness of the paper is that the specific proposed solution/framework (training a selection module using RL) is in my opinion not well motivated. The motivating problem (with the naïve MinLoss strategy) seems to be (lack of) exploration: can't this be addressed simply by adding some randomness (e.g. sampling a target proportional to loss or model probability)? Why is a separate neural network module needed? Also, I do not understand the reward structure (# predicted variables equal to main model). It seems that the selection module is trained to basically match the prediction of the main model, and the paper states that this is 'since we do not know a-priori which y is optimal for defining the loss'. How does training a separate model to match the main model prediction help overcome this problem? While the results do show the benefit of the 'SelectR' framework, I would like to see them compared to a simple (informed) strategy such as sampling a target according to model probability / loss / MinLoss with some epsilon probability of using a random target. The results may help in answering the above questions. Although I like the writing in general, I think the paper uses too much math notation and formalism. For example, the concepts of MinLoss and SelectR are relatively simple, but the notation in terms of one-hot w_ij makes things unnecessarily hard(er) to read and complicated. I think also the Lemmas and examples would be better explained with words than with heavy math, to help understanding. As a bonus, removing a lot of the math would allow for very helpful Algorithm 1 and maybe Figure 3 of the Appendix to be included in the main text. 3. Recommendation My current assessment is that the paper is marginally below the acceptance threshold. 4. Arguments for recommendation The paper addresses a well motivated and well explained problem and the results obtained using SelectR improve over the baselines, but it does not convince (enough) that such a (relatively complicated) approach is actually beneficial from a practical point of view as simpler alternatives are underexplored (see strengths & weaknesses). Additionally, I think the paper should use less formal notation as that will make the paper easier to read without losing the content. 5. Questions to authors See also strengths & weaknesses Do I understand correctly (from the formula for the cross entropy loss function) that Example 1 assumes a model which predicts y_1 and y_2 independently? I can imagine an autoregressive (structured) model which has p(y_1 = 1) = 0.5 and p(y_2 = 1 | y_1 = 1) = 0 and p(y_2 = 1 | y_1 = 0) = 1 so p(1,0)=p(0,1)=0.5, which is optimal. It seems to me that the problem arises because of the independence structure of the variables in the model combined with the loss function (i.e. summing the log-probs for all targets).
Why does Lemma 1, which is posed as a formalization of the problem arising in Example 1, consider a zero-one loss whereas Example 1 is based on a cross-entropy loss? This is confusing to me. The paper repeatedly mentions a 'prediction y^hat' as output of the main model. How is this defined? The model outputs a (structured) probability distribution but y^hat seems a vector. Is this vector a sample/argmax solution? Additional feedback Minor comments/suggestions: The paper claims that compared to Neural Program Synthesis (NPS), where a generated program can be verified, in the setting considered in the paper there is no such additional signal available. However, the experiments all consist of problems where a solution can be verified easily, even if it is outside the target set, so this does seem similar to NPS. It seems that forcing all probability mass to concentrate on one target can be helpful for some models (i.e. Example 1 if we assume independence of the variables), but may also be (unnecessarily) restrictive for other models which could more easily divide the probability mass. Maybe this would be interesting to discuss/investigate. The paper notes that 'one could also backpropagate the gradient of the expected loss given Pr(y_ij)'. This seems preferred to sampling, so a bit more discussion on why this was or was not used would be interesting. I would not consider the parameters of the main model as 'input' for the selection module. I would just say it takes as input the prediction from the main model.
3. Recommendation My current assessment is that the paper is marginally below the acceptance threshold.
NIPS_2020_844
NIPS_2020
- It is claimed that the proposed method aims to discriminatively localize the sounding objects from their mixed sound without any manual annotations. However, the method also aims to do class-aware localization. As shown in Figure 4, the object categories are labeled for the localized regions for the proposed method. It is unclear to this reviewer whether the labels there are only for illustrative purposes. - Even though the proposed method doesn't rely on any class labels, it needs the number of categories of potential sound sources in the data to build the object dictionary. - Though the performance of the method is pretty good, especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspiration from prior work for the sound source localization task. - The method assumes single source videos are available to train in the first stage, which is also a strong assumption even though class labels are not used. Most in-the-wild videos are noisy and multi-source. It would be desirable to have some analysis to show how robust the system is to noise in videos or how the system can learn without clean single source videos to build the object dictionary.
- Though the performance of the method is pretty good, especially in Table 2, the novelty/contribution of the method is somewhat incremental. The main contribution of the work is a new network design drawing inspiration from prior work for the sound source localization task.
NIPS_2021_2191
NIPS_2021
of the paper: [Strengths] The problem is relevant. Good ablation study. [Weaknesses] - The statement in the intro about bottom up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq.2 of the supplementary material, it seems that O_{up} is trained to output the offset for the keypoints that are not farther than a distance \textit{r} to the center of the corresponding part. How are the groundtruths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179, what did the authors mean by saying that the fully connected layers predict the ground-truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G, X, and W to better understand what DGCN is doing. - Experiments can be improved: - For instance, the bottom-up method [9] has reported results on the CrowdPose dataset outperforming all methods in Table 4 with a ResNet-50 (including the paper's). It will be nice to include it in the tables - It will be nice to evaluate the performance of their method on the standard MS COCO dataset to see if there is a drop in performance in easy (non-occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give an insight into what the network used to predict keypoints, especially the invisible ones. [Minor comments] In Algorithm 1 line 8 in Suppl Material, did the authors mean Eq 11 instead of Eq.4? Fig1 and Fig2 in supplementary are the same Spelling Mistake line 93: It it requires… What does ‘… updated as model parameters’ mean in line 176 Do the authors mean Equation 7 in line 212? The authors have talked about limitations in Section 5 and have mentioned that there are no negative societal impacts.
- PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq.2 of the supplementary material, it seems that O_{up} is trained to output the offset for the keypoints that are not farther than a distance \textit{r} to the center of the corresponding part. How are the groundtruths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose?
NIPS_2021_2163
NIPS_2021
Weakness: I have some concerns about the identification mechanism based on the identity bank. 1) Scalability. As shown in Table 3 (a), the performance is getting worse with growth of the maximum number of identities. It means that the capacity should be preset to some small number (e.g., 10). In real-world scenario, we can have more than 10 objects and most of the time we don't know how many objects we will need to handle in the future. Have the authors thought about how to scale up without compromising performance? 2) Randomness. Identities are randomly assigned one embedding from the identity bank. How robust are the results against this randomness? It would be undesirable for the result to change with each inference. It would be great to have some analysis on this aspect. Overall Evaluation: The paper presents a novel approach for multi-object video object segmentation and the proposed method outperforms previous state-of-the-art methods on several benchmarks. Now, I would recommend accepting this paper. I will finalize the score after seeing how the authors address my concerns in Weakness. While future works are discussed in the Supplementary Materials, I encourage the authors to include more discussions on limitations and societal impacts.
1) Scalability. As shown in Table 3 (a), the performance is getting worse with growth of the maximum number of identities. It means that the capacity should be preset to some small number (e.g., 10). In real-world scenario, we can have more than 10 objects and most of the time we don't know how many objects we will need to handle in the future. Have the authors thought about how to scale up without compromising performance?
ACL_2017_365_review
ACL_2017
except for the qualitative analysis, the paper may belong better to the applications area, since the models are not particularly new but the application itself is most of its novelty - General Discussion: This paper presents a "sequence-to-sequence" model with attention mechanisms and an auxiliary phonetic prediction task to tackle historical text normalization. None of the used models or techniques are new by themselves, but they seem to have never been used in this problem before, showing and improvement over the state-of-the-art. Most of the paper seem like a better fit for the applications track, except for the final analysis where the authors link attention with multi-task learning, claiming that the two produce similar effects. The hypothesis is intriguing, and it's supported with a wealth of evidence, at least for the presented task. I do have some questions on this analysis though: 1) In Section 5.1, aren't you assuming that the hidden layer spaces of the two models are aligned? Is it safe to do so? 2) Section 5.2, I don't get what you mean by the errors that each of the models resolve independently of each other. This is like symmetric-difference? That is, if we combine the two models these errors are not resolved anymore? On a different vein, 3) Why is there no comparison with Azawi's model? ======== After reading the author's response. I'm feeling more concerned than I was before about your claims of alignment in the hidden space of the two models. If accepted, I would strongly encourage the authors to make clear in the paper the discussion you have shared with us for why you think that alignment holds in practice.
2) Section 5.2, I don't get what you mean by the errors that each of the models resolve independently of each other. This is like symmetric-difference? That is, if we combine the two models these errors are not resolved anymore? On a different vein,
ICLR_2022_1935
ICLR_2022
Weakness: A semi-supervised feature learning baseline is missing. This is my main concern about the paper. The key argument in the paper is that feature learning and classifier learning should 1) be decoupled, 2) use random sampling and class-balanced sampling respectively, 3) train on all labels and only ground-truth labels respectively. The authors, therefore, propose a carefully designed alternate sampling strategy. However, a more straightforward strategy could be 1) train a feature extractor (f in the paper) and a classifier (g′ in the paper) using random sampling and any semi-supervised learning method on all data, then 2) freeze the feature extractor (f) and train a new classifier (g in the paper) using class-balanced sampling on data with ground-truth labels. Compared with the alternate sampling strategy proposed in this paper, the semi-supervised feature learning baseline will take less implementation effort and is easier to combine with any semi-supervised learning methods. The baseline seems to be missing in the paper. Although the naive baseline may not give the best performance, it should be compared to justify the elaborately designed alternate sampling strategy. References are not up-to-date. All references in the paper are in or before 2020. In fact, much research progress has been made since then. For example, some recent works [1, 2] study class-imbalanced semi-supervised learning as well; a discussion of these methods seems necessary. A recent survey on long-tailed learning [3] could be a useful resource to help update the related works in the paper. Minor issues: "The model is the fine-tuned on the combination of ..." -> "The model is then fine-tuned on the combination of ..." [1] Su et al., A Realistic Evaluation of Semi-Supervised Learning for Fine-Grained Classification, CVPR 2021 [2] Wei et al., CReST: A Class-Rebalancing Self-Training Framework for Imbalanced Semi-Supervised Learning, CVPR 2021 [3] Deep Long-Tailed Learning: A Survey, arXiv 2021
1) train a feature extractor (f in the paper) and a classifier (g′ in the paper) using random sampling and any semi-supervised learning method on all data, then
NIPS_2016_69
NIPS_2016
- The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset.
- How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset.
NIPS_2017_35
NIPS_2017
- The applicability of the methods to real world problems is rather limited as strong assumptions are made about the availability of camera parameters (extrinsics and intrinsics are known) and object segmentation. - The numerical evaluation is not fully convincing as the method is only evaluated on synthetic data. The comparison with [5] is not completely fair as [5] is designed for a more complex problem, i.e., no knowledge of the camera pose parameters. - Some explanations are a little vague. For example, the last paragraph of Section 3 (lines 207-210) on the single image case. Questions/comments: - In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere? - The main weakness to me is the numerical evaluation. I understand that the hypothesis of clean segmentation of the object and known camera pose limit the evaluation to purely synthetic settings. However, it would be interesting to see how the architecture performs when the camera pose is not perfect and/or when the segmentation is noisy. Per category results could also be useful. - Many typos (e.g., lines 14, 102, 161, 239 ), please run a spell-check.
- In the Recurrent Grid Fusion, have you tried ordering the views sequentially with respect to the camera viewing sphere?
NIPS_2019_445
NIPS_2019
- Quality: Results of Section 2.1, which builds the main motivation of the paper, are demonstrated on very limited settings and examples. It does not convince the reader that overfitting is the general reason for potential poor performance of the models under study. - Soundness: While expressiveness is useful, it does not mean that the optimal weights are learnable. The paper seems to not pay attention to this issue. - Clarity: Related work could be improved. Some related works are mainly named but their differences are not described enough. - Organization could be improved. Currently the paper is dependent on the appendix (e.g., the algorithms). Also the contents of the tables are too small. Overall, I do not think the quality of the paper is high enough and I vote for it to be rejected.
- Clarity: Related work could be improved. Some related works are mainly named but their differences are not described enough.
ICLR_2023_2494
ICLR_2023
My general comment is that it is hard to place this work both with respect to existing numerical methods and with respect to neural solvers and neural surrogates. That makes it really tough to properly weigh the pros and cons with respect to e.g. Proper Orthogonal Decomposition, Dynamic Mode Decomposition, but also operator learning methods or PINNs. Furthermore, I would like to see a test against a model which does not preserve time reversibility. - The only comparison is against Proper Orthogonal Decomposition (POD) which as stated in the introduction “is flawed in that it ignores the temporal dependence of state variables”. Naturally the question is, for example, how Dynamic Mode Decomposition (DMD) would do on these problems. - Also from a Deep Learning perspective, e.g. Figure 4 could be learned by an operator learning method that takes the initial state as input and outputs the state of the system after e.g. every 100th step. On the other hand, if done with e.g. PINNs this could be performed completely without training data, just using the boundary conditions for the loss and the equation for the residual loss. It would be really interesting to have a comparison with respect to speed and accuracy. - PyTorch and more specifically the automatic differentiation capability of PyTorch is used as an optimization tool. It is however hard to judge from the paper what exactly the parameters are that are optimized. How many parameters are optimized for the different problems? Algorithms 1 and 2 in the appendix help a great deal but it took me a lot of scrolling back and forth. Is it possible to place the algorithms in the main paper and refer to the important parts in Sections 3 and 4? - The loss should be a bit more central to the writing of the paper. In Fig 1 the most important components are the reduced-order fluid model and the trajectory-wise discrepancy loss. However, Section 4.3 takes a huge part of the main paper and is actually hard to follow. In my opinion the readability of Section 4 can be improved by focusing more on the relevant parts and not letting the reader figure out what they are. - Why is e.g. JAX not used for optimization? Did the authors consider that? - How do optimization time vs inference time relate between the different models? - How do the individual components of the algorithm relate to the performance, e.g. how does the performance change for other loss terms? - Is it correct that for the first benchmark a single trajectory is used, whereas in the second benchmark more trajectories are used for optimization? On which trajectories is the testing done afterwards? Shouldn’t there be more than just one trajectory to get a better performance estimate?
- Why is e.g. JAX not used for optimization? Did the authors consider that?
NIPS_2021_942
NIPS_2021
The biggest real-world limitation is that the method does not perform as well as backprop. This is unfortunate, but also understandable. The authors do mention that ASAP has a lower memory footprint, but there are also other methods that can reduce memory footprints of neural networks trained using backprop. Given that this method is worse than backprop, and it is also not easy to implement, I cannot see any practical use for it. On the theoretical side, although the ideas here are interesting, I take issue with the term "biologically plausible" and the appeal to biological networks. Given that cognitive neuroscience has not yet proceeded to a point where we understand how patterns and learning occur in brains, it is extremely premature to try and train networks that match biological neurons on the surface, and claim that we can expect better performance because they are biology inspired. To say that these networks behave more similarly to biological neurons is true only on the surface, and the claim that these networks should therefore be better or superior (in any metric, not just predictive performance) is completely unfounded (and in fact, we can see that the more "inspiration" we draw from biological neurons, the worse our artificial networks tend to be). In this particular case, humans can perform image classification nearly perfectly, and better than the best CNNs trained with backprop. And these CNNs trained with backprop do better than any networks trained using other methods (including ASAP). To clarify, I do not blame (or penalize) the authors for appealing to biological networks, since I think this is a bigger issue in the theoretical ML community as a whole, but I do implore them to soften the language and recognize the severe limitations that prevent us from claiming homology between artificial neural networks and biological neural networks (at least in this decade). I encourage the authors to explicitly clarify that: 1) biological neurons are not yet understood, so drawing inspiration from the little we know does not improve our chances at building better artificial networks; 2) the artificial networks trained using ASAP (and similar methods) do not improve our understanding of biological neurons at all; and 3) the artificial networks trained using ASAP (and similar methods) do not necessarily resemble biological networks (other than the weight transport problem, which is of arguable importance) more than other techniques like backprop. Again, I do not hold the authors accountable for this, and this does not affect the review I gave.
3) the artificial networks trained using ASAP (and similar methods) do not necessarily resemble biological networks (other than the weight transport problem, which is of arguable importance) more than other techniques like backprop. Again, I do not hold the authors accountable for this, and this does not affect the review I gave.
ICLR_2023_3063
ICLR_2023
The novelty and technical contribution are limited. The deformable graph attention module is unclear. It is unclear why the proposed method has lower computational complexity. Detailed comments: What is the motivation to choose personalized PageRank score, BFS, and feature similarity as sorting criteria? For NodeSort, 1) how to choose the base node, or is every node a base node? 2)“NodeSort differentially sorts nodes depending on the base node.” Does this mean that the base node affects the ordering, affects the key nodes for attention, and further affects the model performance? 3)After getting the sorted node sequence, how to sample the key nodes for each node? And how many key nodes are sampled? Is the number of key nodes a hyper-parameter? 4)What are the Value nodes used in Transformer in this paper? 5)How to fuse node representations generated by attention for different ranking criteria? Intuitively, the design of deformable graph attention is complicated, and the Katz positional encoding involves the exponentiation of the adjacency matrix, so is the computational complexity really reduced? Where can the reduction in complexity be explained from the proposed method compared to baselines? Or just from the sparse implementation?
2)“NodeSort differentially sorts nodes depending on the base node.” Does this mean that the base node affects the ordering, affects the key nodes for attention, and further affects the model performance?
NIPS_2019_651
NIPS_2019
(large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible. - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, …, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? 2. The presented theorems provide guarantees for the objective functions phi on data X and coreset C for a query Q. Table 1 reporting the relative errors suggests that there might be a substantial deviation between coreset and full dataset archetypes. However, the interpretation of archetypes in a particular application is when AA proves particularly useful (as for example in [1] or [2]). Is the archetypal interpretation of identifying (more or less) stable prototypes whose convex combinations describe the data still applicable? 3. Practically, the number of archetypes k is of interest. In the presented framework, is there a way to perform model selection in order to identify an appropriate k? 4. The work in [3] might be worth to mention as a related approach. There, the edacious nature of AA is approached by learning latent representation of the dataset as a convex combination of (learnt) archetypes and can be viewed as a non-linear AA approach. [1] Shoval et al., Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space, Science 2012. [2] Hart et al., Inferring biological tasks using Pareto analysis of high-dimensional data, Nature Methods 2015. [3] Keller et al., Deep Archetypal Analysis, arxiv preprint 2019. ---------------------------------------------------------------------------------------------------------------------- I appreciate the authors’ response and the additional experimental results. I consider the plot of the coreset archetypes on a toy experiment insightful and it might be a relevant addition to the appendix. In my opinion, the submission constitutes a relevant contribution to archetypal analysis which makes it more feasible in real-world applications and provides some theoretical guarantees. Therefore, I raise my assessment to accept.
- Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible.
NIPS_2018_494
NIPS_2018
1. The biggest weakness is that there is little empirical validation provided for the constructed methods. A single table presents some mixed results where in some cases hyperbolic networks perform better and in others their euclidean counterparts or a mixture of the two work best. It seems that more work is needed to clearly understand how powerful the proposed hyperbolic neural networks are. 2. The experimental setup, tasks, and other details are also moved to the appendix which makes it hard to interpret this anyway. I would suggest moving some of these details back in and moving some background from Section 2 to the appendix instead. 3. The tasks studied in the experiments section (textual entailment, and a constructed prefix detection task) also fail to provide any insight on when / how the hyperbolic layers might be useful. Perhaps more thought could have been given to constructing a synthetic task which can clearly show the benefits of using such layers. In summary, the theoretical contributions of the paper are significant and would foster more exciting research in this nascent field. However, though it is not the central focus of the paper, the experiments carried out are unconvincing.
2. The experimental setup, tasks, and other details are also moved to the appendix which makes it hard to interpret this anyway. I would suggest moving some of these details back in and moving some background from Section 2 to the appendix instead.
NIPS_2021_1360
NIPS_2021
and questions: The paper lacks an introduction of the Laplacian matrix but uses it directly. A paper should be self-contained. The motivation is not strong. The authors stated that "... the transformer architecture ... outperformed many SOTA models ... motivates us". This sounds like "A tool is powerful, then I try the tool on my task, and it works well! Then I publish a paper". However, it lacks an analysis of why the Transformer works well on this task, which would bring more insights to the community. Section 3.3 needs to be polished more in terms of writing. 1 In Eqn. (8), the e^{(t)} is proposed without further explanation. What is it? Why is it needed? What is the motivation of proposing it? 2 In Eqn. (8), are f_\theta(v_j) and \hat{y}_j^{(t)} the same thing since they both are the predictions of v_j by the predictor? 3 In lines 177-178, if the y_j^{true} is "user-unkonwn", then how do you compute e^{(t)} in Eqn. (8)? 4 Why can this SE framework help to improve, and how does it help? Similar to 2, please DO NOT just show me what you have done and achieved, but also show me why and how you manage to do this. I would consider increasing the rating based on the authors' response. Reference: [1] Luo, et al. "Neural architecture search with gbdt." arXiv preprint arXiv:2007.04785 (2020). https://arxiv.org/abs/2007.04785
3 In lines 177-178, if the y_j^{true} is "user-unkonwn", then how do you compute e^{(t)} in Eqn. (8)?
ARR_2022_93_review
ARR_2022
1. From an experimental design perspective, the experimental design suggested by the authors has been used widely for open-domain dialogue systems with the caveat of it not being done in live interactive settings. 2. The authors have not referenced those works that use continuous scales in the evaluation and there is a large body of literature missing from the paper. Some of the references are provided in the comments section. 3. Lack of screenshots of the experimental interface. Comments: 1. Please add screenshots of the interface that was designed. 2. Repetition of the word Tables in Line 549. 3. In Appendix A.3, the GLEU metric is referenced as GLUE. Questions: 1. In table 1, is there any particular reason for the reduction in pass rate % from free run 1 to free run 2? 2. What is the purpose of the average duration reported in Table 1? There is no supporting explanation about it. Does it include time spent by the user waiting for the model to generate a response? 3. With regards to the model section, is there any particular reason that there was an emphasis on choosing retriever-based transformer models over generative models? Even if the models are based on ConvAI2, there are other language modeling GPT-2-based techniques that could have been picked. 4. In figure 6, what are the models in the last two columns lan_model_p and lan_model? Missing References: 1. Howcroft, David M., Anja Belz, Miruna-Adriana Clinciu, Dimitra Gkatzia, Sadid A. Hasan, Saad Mahamood, Simon Mille, Emiel van Miltenburg, Sashank Santhanam, and Verena Rieser. "Twenty years of confusion in human evaluation: NLG needs evaluation sheets and standardised definitions." In Proceedings of the 13th International Conference on Natural Language Generation, pp. 169-182. 2020. 2. Santhanam, S. and Shaikh, S., 2019. Towards best experiment design for evaluating dialogue system output. arXiv preprint arXiv:1909.10122. 3. Santhanam, S., Karduni, A. and Shaikh, S., 2020, April. Studying the effects of cognitive biases in evaluation of conversational agents. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1-13). 4. Novikova, J., Dušek, O. and Rieser, V., 2018. RankME: Reliable human ratings for natural language generation. arXiv preprint arXiv:1803.05928. 5. Li, M., Weston, J. and Roller, S., 2019. Acute-eval: Improved dialogue evaluation with optimized questions and multi-turn comparisons. arXiv preprint arXiv:1909.03087.
2. What is the purpose of the average duration reported in Table 1? There is no supporting explanation about it. Does it include time spent by the user waiting for the model to generate a response?
NIPS_2022_1372
NIPS_2022
/// Most of my comments here are related to wanting more results/substantiation relative to what was claimed in the paper, not necessarily fatal flaws. Sensitivity to noise levels: the paper claims their method (NResilient) “has better or similar RD than each baseline at all noise levels” (336-338) — unless I’m drastically misinterpreting figure 10b in the appendix, this seems to be untrue? -> leading me to also wonder at what noise levels do the fairness-utility curves (as shown in the main paper) start looking less nice. Group membership: The model for group membership is not clear from the main body of the paper, specifically what it means for the model to handle “multiple” sensitive features, or the case where each individual item can belong to “one or more” groups. The experiments also only illustrate results where group membership is disjoint. This is addressed in the appendix (use marginals to model more complex relationships between different groups), but given that [the uncertainty of] group membership is the core motivation of the paper, it feels important for this model, and how to achieve specific instantiations of the model, to be clearer! I’d further be curious to see empirical results, though I could live without them. Fairness metrics: Though the work as a whole claims to present a framework for a class of metrics/constraints (e.g. 158-160), empirical results are only shown (even in the appendix) for constraints/metrics that implement versions of equal representation, and thm. 3.2 is specific to equal representation (though 3.1 is more general to this class of constraints). Equal representation feels like an "easier" metric to achieve than proportional representation (similar to demographic parity being "easier" than equalized odds in classification), so I'm just curious as to what performance looks like for other constraints. Furthermore, I do wish there had been at least a nominal discussion, even in the appendix, of choice of constraint and what the consequences of different constraints would be, more than just the handwavy mention of appropriate choice being “context dependent.” /// Typos /// 191 problem 2.5 format? 224 constraint 5 appl[ies] 249 “solve[d]” 311 “l[o]sing” 399 “works [for?] a general” Yes - the authors are very clear about the specific scope of their work and the assumptions that their models rely on. As discussed above, I would have appreciated some discussion of these assumptions and choices (e.g. the existence of the utility matrix, choice of fairness constraint and associated values), but understand the need for a focused work.
224 constraint 5 appl[ies] 249 “solve[d]” 311 “l[o]sing” 399 “works [for?] a general” Yes - the authors are very clear about the specific scope of their work and the assumptions that their models rely on. As discussed above, I would have appreciated some discussion of these assumptions and choices (e.g. the existence of the utility matrix, choice of fairness constraint and associated values), but understand the need for a focused work.
39n570rxyO
ICLR_2025
This paper has weaknesses to address: * The major weakness of this paper is the extremely limited experiments section. There are many experiments, yet almost no explanation of how they're run or interpretation of the results. Most of the results are written like an advertisement, mostly just stating the method outperforms others. This leaves the reader unclear why the performance gains happen. Ultimately it's not clear when/why the findings would generalize. The result is that some claims appear to be quite overstated. For example, L423-L424 states *"embeddings of domains with shared high-level semantics cluster together, as depicted in Appendix E.1. For example, embeddings of mono and stereo audio group closely, as do those of banking and economics."* But this is cherry-picked---Temperature is way closer to Mono and Stereo Audio than Banking is to Economics. * Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained. * It's unclear how many variables actually overlap between training/testing, which seems to be a key element to make the model outperform others. Yet this isn't analyzed. Showing that others fail by ignoring other variables should be a key element of the experiments.
* Similarly, many important experimental details are missing or relegated to the Appendix, and the Appendix also includes almost no explanations or interpretations. For example, the PCA experiments in Figures 3, 7, and 8 aren't explained.
cdwXPlM4uN
ICLR_2024
major concerns: - This paper proposes a knowledge distillation framework from ANN to SNN. The only contribution is that the loss function used here is different from previous work. This contribution is not sufficient for conferences like ICLR. - It is hard to determine whether the usage of this loss function is unique in the literature. In the domain of knowledge distillation in ANNs, there are numerous works studying different types of loss functions. Whether this loss function is original remains in question. - I find the design choice of simply averaging the time dimension in SNN feature maps inappropriate. The method in this paper is not the real way to compute the similarity matrix. Instead, to calculate the real similarity matrix of SNNs, the authors should flatten the SNN activation to $[B, TCHW]$ and then compute the $B\times B$ covariance matrix. For a detailed explanation, check [1]. - Apart from accuracy, there are not many insightful discoveries for readers to understand the specific mechanism of this loss function. - The experiments are not validated on ImageNet, which largely weakens the empirical contribution of this paper. minor concerns: - " The second is that the neuron state of ANNs is represented in binary format but that of SNNs is represented in float format, leading to the precision mismatch between ANNs and SNNs", wrong value formats of SNNs and ANNs. - The related work should be placed in the main text, rather than the appendix. There is a lot of space left on the 9th page. [1] Uncovering the Representation of Spiking Neural Networks Trained with Surrogate Gradient
- " The second is that the neuron state of ANNs is represented in binary format but that of SNNs is represented in float format, leading to the precision mismatch between ANNs and SNNs", wrong value formats of SNNs and ANNs.
ICLR_2023_2322
ICLR_2023
--- W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit. W2. An important weakness that is not mentioned anywhere is that the factors $A^{(k)}$ in Eq (8) must have dimensions that factorize the dimensions of $W$. For example, they must satisfy $\prod_{k=1}^{S} a_j^{(k)} = w_j$. So what is hailed as greater flexibility of the proposed model in the caption of Fig 1 is in fact a limitation. For example, if the dimensions of $W$ are prime numbers, then for each mode of $W$, only a single tensor $A^{(k)}$ can have a non-singleton dimension in that same mode. This may be fixable with appropriate zero padding, but this has to at least be discussed and highlighted in the paper. W3. The 2nd point in the list of contributions in Sec 1 claims that the paper provides a means of finding the best approximation in the proposed format. In fact, it is easy to see that this claim is likely to be false: The decomposition corresponds to a difficult non-convex optimization problem, and it is therefore unlikely that a simple algorithm with a finite number of steps could solve it optimally. W4. SeKron is claimed to generalize various other decompositions. But it is not clear that the proposed algorithm could ever reproduce those decompositions. For example, since there is no SVD-based algorithm for CP decomposition, I strongly suspect that the proposed algorithm (which is SVD-based) cannot recreate the decomposition that, say, an alternating least squares based approach for CP decomposition would achieve. W5. The paper is unclear and poor notation is used in multiple places. For example: Subscripts are sometimes used to denote indices (e.g., Eq (5)), sometimes to denote sequences of tensors (e.g., Eqs (7), (8)), and sometimes used to denote both at the same time (e.g., Thm 3, Eq (35))! This is very confusing. It is unclear how Eq (7) follows from Eq (5). The confusing indices exacerbate this. In Thm 1, $A^{(k)}$ are tensors, so it's unclear what you mean by "$R_i$ are ranks of intermediate matrices". In Alg 1, you apply SVD to a 3-way tensor. This operation is not defined. If you mean batched SVD, you need to specify that. The $W^{(k)}_{r_1 \cdots r_{k-1}}$ tensors in Eq (10) haven't been defined. The definition of Unfold below Eq (13) is ambiguous. Similarly, you say that Mat reformulates a tensor to a matrix, but list the output space as $\mathbb{R}^{d_1 \cdots d_N}$, i.e., indicating that the output is a vector. Below Eq (15) you discuss "projection". This is not an appropriate term to use, since these aren't projections; projection is a term with a specific meaning in linear algebra. In Eq (16), the $r_k$ indices appear on the right-hand side but not on the left-hand side.
--- W1. The authors have clearly reduced whitespace throughout the paper; equations are crammed together, captions are too close to the figures. This by itself is grounds for rejection since it effectively violates the 9-page paper limit.
zWGDn1AmRH
EMNLP_2023
1. This paper is challenging to follow, and the proposed method is highly complex, making it difficult to reproduce. 2. The proposed method comprises several complicated modules and has more parameters than the baselines. It remains unclear whether the main performance gain originates from a particular module or if the improvement is merely due to having more parameters. The current version of the ablation study does not provide definitive answers to these questions. 3. The authors claim that one of their main contributions is the use of a Mahalanobis contrastive learning method to narrow the distribution gap between retrieved examples and current samples. However, there are no experiments to verify whether Mahalanobis yields better results than standard contrastive learning. 4. The proposed method involves multiple modules, which could impact training and inference speed. There should be experiments conducted to study and analyze these effects.
2. The proposed method comprises several complicated modules and has more parameters than the baselines. It remains unclear whether the main performance gain originates from a particular module or if the improvement is merely due to having more parameters. The current version of the ablation study does not provide definitive answers to these questions.
NIPS_2018_428
NIPS_2018
weakness which decreased my score. Some line by line comments: - lines 32 - 37: You discuss how the regret cannot be sublinear, but proceed to prove that your method achieves T^{1/2} regret. Do you mean that the prediction error over the entire horizon T cannot be sublinear? - eq after line 145: typo --- i goes from 1 to n and since M,N are W x k x n x m, the index i should go in the third position. Based on the proof, the summation over u should go from tau to t, not from 1 to T. - line 159: typo -- "M" --> Theta_hat - line 188: use Theta_hat for consistency. - line 200: typo -- there should no Pi in the polynomial. - line 212: typo --- "beta^j" --> beta_j - line 219: the vector should be indexed - lines 227 - 231: the predictions in hindsight are denoted once by y_t^* and once by hat{y}_t^* - eq after line 255: in the last two terms hat{y}_t --> y_t Comments on the Appendix: - General comment about the Appendix: the references to Theorems and equations are broken. It is not clear if a reference points to the main text or to the appendix. - line 10: Consider a noiseless LDS... - line 19: typo -- N_i ---> P_i - equation (21): same comment about the summation over u as above. - line 41: what is P? - line 46: typo --- M_omega' ---> M_ell' - eq (31): typo -- no parenthesis before N_ell - line 56: the projection formula is broken - eq (56): why did you use Holder in that fashion? By assumption the Euclidean norm of x is bounded, so Cauchy Schwartz would avoid the extra T^{1/2}. ================== In line 40 of the appendix you defined R_x to be a bound on \|x\|_2 so there is no need for the inequality you used in the rebuttal. Maybe there is a typo in line 40, \|x\|_2 maybe should be \|x\|_\infty
- lines 32 - 37: You discuss how the regret cannot be sublinear, but proceed to prove that your method achieves T^{1/2} regret. Do you mean that the prediction error over the entire horizon T cannot be sublinear?
ICLR_2022_1998
ICLR_2022
In Section 3. The paper uses a measure ρ that is essentially the fraction of examples at which local monotonicity (in any of the prescribed directions in M) is violated and then shows that this measure decreases when using the paper's method over the baselines. However, I'm not certain that this measure corresponds to the global monotonicity requirement that is often desired in practice: namely, the one that appears in Definition 1. For example, consider a 1-D function over [1, 99.99] whose graph is a piecewise linear curve connecting the points (0, 100), (0.99, 100.99), (1, 99), (1.99, 99.99), (2, 98), (2.99, 98.99), ..., (99, 1), (99.99, 1.99). This function has a nonnegative derivative on about 99% of its domain, yet if one chooses two points x_1, x_2 uniformly and independently from the domain, then there's at least a 97% chance that f(min(x_1, x_2)) > f(max(x_1, x_2)). I think, therefore, that it would be good to complement the local ρ with an estimate of the probability that Definition 1 would not hold over the distribution in question (training, test or random). Section 4.1 The authors introduce the notion of group monotonicity, but it's unclear how the regularizer introduced in equation 3 helps to encourage that property. Specifically, 1) Only the sum of the gradient is taken into account (so it could be that a component a_w_{i,j} has a very negative gradient, but still the sum will be positive), and 2) the softmax in equation 3 seems to encourage that the total gradient of S_y is larger than the total gradient of all the other S_k's, not that it's positive. Perhaps I'm missing something? Section 4.2 The paper claims that the good performance of the "total activation classifier" shows evidence that the original classifier satisfies group monotonicity. But that claim is not clear to me. The total activation classifier does not depend on the part of the network that computes the output from the intermediate layer which is critical for the satisfaction of group monotonicity. Section 4.2.2 The paper doesn't compare its methods to other methods for detecting noisy/adversarial test examples.
1) Only the sum of the gradient is taken into account (so it could be that a component a_w_{i,j} has a very negative gradient, but still the sum will be positive), and
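The counterexample in the review above is easy to check numerically. The sketch below builds the described piecewise-linear function with numpy (breakpoints taken from the review; the helper names are mine, not the paper's) and estimates both the locally monotone fraction and the pairwise-violation probability.

```python
# Piecewise-linear counterexample from the review: increasing on ~99% of the
# domain, yet f(min{x1,x2}) > f(max{x1,x2}) for most random pairs.
import numpy as np

ks = np.arange(100)
xs_raw = np.concatenate([ks, ks + 0.99])            # breakpoints 0, 1, ..., 99, 0.99, ...
ys_raw = np.concatenate([100.0 - ks, 100.99 - ks])  # matching function values
order = np.argsort(xs_raw)
xs, ys = xs_raw[order], ys_raw[order]               # interleaved: 0, 0.99, 1, 1.99, ...

def f(x):
    return np.interp(x, xs, ys)                     # linear interpolation through breakpoints

# Each unit cell rises for 0.99 and falls for 0.01, so the derivative is
# nonnegative on roughly 99% of the domain (the "local" measure looks fine).
rng = np.random.default_rng(0)
x1 = rng.uniform(xs[0], xs[-1], size=100_000)
x2 = rng.uniform(xs[0], xs[-1], size=100_000)
lo, hi = np.minimum(x1, x2), np.maximum(x1, x2)
print("P[f(min) > f(max)] ≈", (f(lo) > f(hi)).mean())  # ~0.98-0.99, consistent with the review
```

The gap between the ~99% local figure and the near-total failure of the pairwise check is exactly the gap between the local measure ρ and Definition 1 that the review describes.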
NIPS_2021_2338
NIPS_2021
Weakness: 1. Regarding the adaptive masking part, the authors' work is incremental, and there have been many papers on how to do feature augmentation, such as GraphCL[1], GCA[2]. The authors do not experiment with widely used datasets such as Cora, Citeseer, ArXiv, etc. And they did not compare with better baselines for node classification, such as GRACE[3], GCA[2], MVGRL[4], etc. I think this part of the work is shallow and not enough to constitute a contribution. The authors should focus on the main contribution, i.e., graph-level contrastive learning, and need to improve the node-level augmentation scheme. 2. In the graph classification task, the compared baseline is not sufficient, such as MVGRL[4], gpt-gnn[5] are missing. I hope the authors could add more baselines of graph contrastive learning and test them on some common datasets. 3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset without introducing some perturbation noise would lead to lower generalization performance. The authors experimented with the transfer performance of the model on the graph classification task, though it still did not allay my concerns about the model generalization. I hope there will be more experiments on different downstream tasks and across different domains. Remarks: 1. The authors seem to have over-compressed the line spacing and abused vspace. 2. Table 5 is collapsed. [1] Y. You, T. Chen, Y. Sui, T. Chen, Z. Wang, and Y. Shen, “Graph contrastive learning with augmentations,” Advances in Neural Information Processing Systems, vol. 33, 2020. [2] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Graph contrastive learning with adaptive augmentation,” arXiv preprint arXiv:2010.14945, 2020. [3] Y. Zhu, Y. Xu, F. Yu, Q. Liu, S. Wu, and L. Wang, “Deep graph contrastive representation learning,” arXiv preprint arXiv:2006.04131, 2020. [4] Hassani, Kaveh, and Amir Hosein Khasahmadi. "Contrastive multi-view representation learning on graphs." International Conference on Machine Learning. PMLR, 2020. [5] Hu, Ziniu, et al. "Gpt-gnn: Generative pre-training of graph neural networks." Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 2020.
3. I am concerned whether the similarity-aware positive sample selection will accelerate GNN-based encoder over-smoothing, i.e., similar nodes or graphs will be trained with features that converge excessively and discard their own unique features. In addition, whether selecting positive samples in the same dataset without introducing some perturbation noise would lead to lower generalization performance. The authors experimented with the transfer performance of the model on the graph classification task, though it still did not allay my concerns about the model generalization. I hope there will be more experiments on different downstream tasks and across different domains. Remarks:
rtx8B94JMS
ICLR_2024
From my point of view, there are 2 main weaknesses in this submission (for details, see below). 1. The method and the experiments are insufficiently described, and I have some questions in this regard. However, I am convinced, that the manuscript can be updated to be much more clear. 2. The empirical evaluation is of limited scope. Qualitatively, the method is evaluated on 2 toy problems (fOU & Hurst index); quantitatively on a single synthetic dataset (stochastic moving MNIST). For the latter, only two baseline models from 2018 and 2020 are compared to. In consequence, the usefulness of the method is not established. On the plus side, there are 3 ablations / further studies. Minor weaknesses. - The method appears to be inefficient, with a training time of 39 hours on an NVIDIA GeForce RTX 4090 for one model per model trained on stochastic moving MNIST. - A detailed comparison with Tong et al. (2022) (who also learn approximations to fBM) is missing. So far, it is only stated that Tong et al. did not apply their model to video data and that it is completely different. A clear illustration of the conceptual differences and a comparison of the pros and cons of each approach would be appreciated (summary in main part + mathematical details in supplementary material). # Summary For me, this is a borderline submission. On the one hand, the proposed method is novel, significant and of theoretical interest. On the other hand, there are clarity issues, a weak empirical evaluation and no clear use case. Expecting the clarity issues to be resolved, I rate the submission as a marginally above the acceptance threshold, as the theoretical strengths outweigh the empirical flaws. ---- # Details on major weaknesses ## Point 1 (clarity). **Regarding the method.** After reading the method and experiment section multiple times, I still have no idea, how to implement it. I am aware of the provided source code, nevertheless I found the paper to be insufficient in this regard. What I got from section E and Figure 4 is that: - First, there is an encoding step that returns $h$, a sequence of vectors/matrices over time. Somehow, these vectors are used to compute $\omega$. I have no idea how this $\omega$ is related to the optimal one from Prop. 5. - $h$ is given to a temporal convolution layer that returns $g$ and this $g$ has as many 'frames' as the input and is used as input to the control function $u$. - The control, drift and the diffusion function are implemented as neural networks. - An SDE solution is numerically approximated with a Stratonovich–Milstein solver. - $\omega$ is used in the decoding step, I do not understand why and how. - Where do the approximation processes $Y$ enter. How are they parametrized, is $\gamma$ as in Prop 5? Are they integrated separately from $X$? - The ELBO contains an expectation over sample paths. How many paths are sampled to estimate the mean? - Fig. 6: Why do the samples from the prior always show a 4 and a 7? Does the prior depend on the observations? **Regarding Moving MNIST.** What precisely is the task / evaluation protocol in the experiment on the stoch. moving MNIST dataset? I did not see it specified, but from the overall description it appears that a sequence of 25 frames is given to the model and the task is to return the same 25 frames again (with the goal of learning a generative model). ## Point 2 (empirical evaluation) - The method is not evaluated on real world data. - Quantitatively, the method is only evaluated on one synthetic dataset (stoch. 
moving MNIST). - While the method is motivated by "*Unfortunately, for many practical scenarios, BM falls short of capturing the full complexity and richness of the observed real data, which often contains long-range dependencies, rare events, and intricate temporal structures that cannot be faithfully represented by a Markovian process*", the moving MNIST dataset is not of this kind. It is not long range (only 25 frames) and there is no correlated noise. - The method is only compared to 2 baselines (SVG, SLRVP) on moving MNIST. - Table 1 does not show standard deviations. Overall, this would be a far stronger submission, if the experiments were more extensive. This includes: - Evaluation on more tasks and datasets, and specifically on datasets where this method is expected to shine, i.e., in the presence of correlated noise. The pendulum dataset of [Becker et al., Factorized inference in high-dimensional deep feature space, ICML 2019] would be one example. - Comparison with more baselines. In particular, more recent / state-of-the-art methods that do not model a differential equation, and the fBM model by Tong et al. ---- There is a typo in Proposition 1: "Markov rocesses"
- Table 1 does not show standard deviations. Overall, this would be a far stronger submission, if the experiments were more extensive. This includes:
ICLR_2023_1093
ICLR_2023
1. I think the novelty is questionable. The authors claimed that the proposed Uni-Mol is the first pure 3D molecular pretraining framework. However, there are already a few similar works. For example, a. The Graph Multi-View Pre-training (GraphMVP) framework leverages the correspondence and consistency between 2D topological structures and 3D geometric views. Liu et al., Pre-training Molecular Graph Representation with 3D Geometry, ICLR 2021. b. The geometry-enhanced molecular representation learning method (GEM) includes several dedicated geometry-level self-supervised learning strategies to learn molecular geometry knowledge. Fang et al., Geometry-enhanced molecular representation learning for property prediction, Nature Machine Intelligence, 2022. c. Guo et al. proposed a self-supervised pre-training model for learning structure embeddings from protein 3D structures. Guo et al., Self-Supervised Pre-training for Protein Embeddings Using Tertiary Structures, AAAI 2022. d. The GeomEtry-Aware Relational Graph Neural Network (GearNet) framework uses type prediction, distance prediction and angle prediction of masked parts for pretraining. Zhang et al., Protein Representation Learning by Geometric Structure Pretraining, ICML 2022 workshop. 2. The comparison with the SOTA methods may be unfair. The performance of the paper is based on the newly collected 209M dataset. However, the existing methods use smaller datasets. For example, GEM employs only 20M unlabeled data. Because the scale of datasets has a significant impact on the accuracy, the superiority of the proposed method may come from the new large-scale datasets. 3. The authors claimed one of the contributions is that the proposed Uni-Mol contains a simple and efficient SE(3)-equivariant Transformer backbone. However, I think this contribution is too weak. 4. The improvement is not very impressive or convincing. Even with a larger dataset for pretraining, the improvement is a bit limited, e.g., in Table 1. 5. It is not clear which part causes the main improvement: the Transformer, the pretraining, or the larger dataset? 6. It could be better to show the 3D position recovery and masked atom prediction accuracy and visualize the results. 7. The visualization of the self-attention map and pair distance map in Appendix H is interesting. However, according to the visualization, the self-attention map is very similar to the pair distance map, as the authors explained. In this case, why not directly use the pair distance as attention? Or what does self-attention actually learn besides distance in the task? As self-attention is computationally expensive, is it really needed?
2. The comparison with the SOTA methods may be unfair. The performance of the paper is based on the newly collected 209M dataset. However, the existing methods use smaller datasets. For example, GEM employs only 20M unlabeled data. Because the scale of datasets has a significant impact on the accuracy, the superiority of the proposed method may come from the new large-scale datasets.
NIPS_2021_121
NIPS_2021
[Weakness] 1. Although the paper argues that the proposed method finds flat minima, the analysis of flatness is missing. The loss used for training the base model is the averaged loss over the noise-injected models, and the authors provided a convergence analysis for this loss. However, minimizing the averaged loss across the noise-injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq. (3) are flat, an analysis of the losses of the noise-injected models after training is required. 2. In Eq (4), the class prototypes before and after injecting noise are utilized for prototype fixing regularization. However, this means that F2M has to compute the prototypes of the base classes every time the noise is injected: M+1 times for each update. Considering the fact that there are many classes and many samples for the base classes, this prototype fixing is computationally inefficient. If I have missed some details about the prototype fixing, please correct my misunderstanding in the rebuttal. 3. Analysis of the number of sampling times M and the noise bound b is missing. These values determine the flat area around the flat minima, and the performance would be affected by these values. However, there is no analysis of M and b in the main paper or the appendix. Moreover, the exact value of M used for the experiments is not reported. 4. Comparison with single-session incremental few-shot learning is missing. Like [42] in the main paper, some meta-learning-based single-session incremental FSL methods are being studied. Although this paper targets multi-session incremental FSL with a different setting and a different dataset split, it would be more informative to compare the proposed F2M with such methods, considering that the idea of finding flat minima seems valuable for the single-session incremental few-shot learning task too. There is a typo in Table 2 – the miniImageNet task is 5-way, but it is written as 10-way. Post Rebuttal: The authors clarified the confusing parts of the paper and added useful analysis during the rebuttal. Therefore, I raise my score to 6.
1. Although the paper argues that the proposed method finds flat minima, the analysis of flatness is missing. The loss used for training the base model is the averaged loss over the noise-injected models, and the authors provided a convergence analysis for this loss. However, minimizing the averaged loss across the noise-injected models does not ensure the flatness of the minima. So, to claim that the minima found by minimizing the loss in Eq. (3) are flat, an analysis of the losses of the noise-injected models after training is required.
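A minimal sketch of the post-training flatness check the review asks for — evaluating the loss under M random parameter perturbations bounded by b — assuming a PyTorch setup; `model`, `params`, `M`, and `b` are placeholders rather than F2M's exact configuration.

```python
# Evaluate the loss of M noise-injected copies of a trained model; a small
# spread around the unperturbed loss is one simple indicator of a flat region.
# All names here are illustrative, not F2M's exact procedure.
import torch

@torch.no_grad()
def perturbed_losses(model, params, x, y, loss_fn, M=8, b=0.01):
    losses = []
    for _ in range(M):
        originals = [p.clone() for p in params]
        for p in params:
            p.add_(torch.empty_like(p).uniform_(-b, b))   # bounded noise, |delta| <= b
        losses.append(loss_fn(model(x), y).item())        # loss of the perturbed model
        for p, orig in zip(params, originals):
            p.copy_(orig)                                  # restore the original weights
    return losses
```

Reporting how these losses behave as M and b vary would directly address points 1 and 3.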
NIPS_2017_415
NIPS_2017
Weakness: 1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part was already proposed in [10], and the incremental contribution lies in the decomposition part, which just factorizes M_v into a factor D and slices Phi_v. 2. For the experiment, I'd like to see the effect of the optimized connectome in comparison with that of the LiFE model, so we can see the performance differences and the effectiveness of the tensor-based LiFE_sd model. This part of the experiment is missing.
1. From the methodology aspect, the novelty of the paper appears to be rather limited. The ENCODE part was already proposed in [10], and the incremental contribution lies in the decomposition part, which just factorizes M_v into a factor D and slices Phi_v.
47hDbAMLbc
ICLR_2024
- The paper is mainly dedicated to the existence of robust training. No results on optimization or robust generalization are derived. Given that, the scope seems to be quite limited. - Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may have stronger implications if they are connected to generalization bounds. It is not clear in the paper that the constructions of ReLU networks for robust memorization would lead to robust generalization. I know the authors acknowledge this in the conclusion, but I think this is a very serious question. - The main theorems 4.8 and 5.2 only guarantee the existence of optimal robust memorization. These results would be more useful if an optimization or constructive algorithm is given to find the optimal memorization.
- Since overparameterization can often lead to powerful memorization and good generalization performance, the necessary conditions may have stronger implications if they are connected to generalization bounds. It is not clear in the paper that the constructions of ReLU networks for robust memorization would lead to robust generalization. I know the authors acknowledge this in the conclusion, but I think this is a very serious question.
ICLR_2023_3956
ICLR_2023
(W1) Eproxy alone does not actually work that well; it is only Eproxy+DPS that delivers convincing results - however, the necessity of having extra trained architectures (even if only 20) puts this work in a significantly different complexity regime than most of the baselines to which the paper compares - in short, the choice of baselines seems inadequate given the cost of performing Eproxy+DPS. (W2) Related to the above, the searching cost on ImageNet for Eproxy+DPS seems to ignore the necessity of training 20 architectures for DPS - including this cost would increase the searching time for Eproxy+DPS SIGNIFICANTLY, making it fall short of many of the baselines presented in Table 8 (e.g., TE-NAS, P-DARTS, likely SNAS and PC-DARTS). (W3) Comparison to synflow and NASWOT in Table 5 seems biased -- why only assume 0 queries for these methods when both were originally proposed to be used with either random sampling or evolutionary search? Specifically, results reported for synflow in the original paper are between 51 and 34 queries to reach a test accuracy of 94.22, which is significantly better than what is reported for Eproxy+DPS (150 queries). Also, it is unclear what the authors mean by "we utilize the Eproxy as the fitness function for Regularized Evolution" and then showing results for a varying number of queries, especially 0 queries (this might also be related to the point below). (W4) Showing that "Eproxy+DPS can find optimal global architectures within the RE search history" in Table 4 is incorrect in its context - it is possible that RE significantly helps Eproxy+DPS by prefiltering all architectures based on their accuracy (thus being very costly); showing results of Eproxy+DPS conditioned on running RE is not the same as showing results of just Eproxy+DPS (what the Table's caption suggests, by saying: "Eproxy+DPS uses substantially lower queries to find the global optimal architectures.") Minor shortcomings and suggestions: Figure 1 is not exactly informative as it does not convey any information about the quality of each method. Figure 2 also does not seem to add much; I actually had a pretty hard time understanding what it tries to say - what is "W" on the x axis? why does "arch 1" finish earlier than "arch 2"? why do they follow the same curve? Just to be clear, I am quite sure I know what the authors tried to say here, the concept of cheap proxies for NAS is not hard to grasp, but the Figure in its current form is not helpful, in my opinion. While the choice of NAS benchmarks seems adequate to me, it would strengthen the paper if the authors could include experiments on some additional, large search spaces other than benchmarks/DARTS, for example a MobileNet-like search space. Zero-cost proxies considered in the paper are not very new; some of the more recent ones that the authors could include to make the comparison more convincing are ZenNAS and KNAS. I would suggest changing the title - in its current form it is really unclear what the authors tried to say, some reasoning: 1) it is unclear if the authors mean a particular efficient proxy or efficient proxies in general, "is" suggests that it is a particular proxy, but then there is no proxy called "Efficient Proxy", which suggests that it is rather referring to a family of efficient proxies;
I would recommend changing the title to something like "Extensible and Efficient Proxy for NAS", or something similar; it would make it much clearer that the authors propose a new proxy that is supposed to be both efficient and extensible. Actually, another minor comment is that I am not sure if "extensible" is the best term here -- in my opinion a better term would be "generalizable", "transferable" or "adaptive"; all of them suggest to me some kind of change in the training data (i.e., downstream task), which seems to be the main point of Eproxy. At the same time, "extensible" suggests more that something new could be easily added to the proposed method in order to make it better. Details: W1: Specifically, on NDS Eproxy is only comparable to NASWOT, despite performing few-shot training of architectures (i.e., even if the absolute cost is not very high, it's still 10x more costly than NASWOT, without any obvious benefits); furthermore, on TransNAS-Bench-Micro Eproxy fails quite significantly on 6 out of 7 tasks. On NAS-Bench-MR Eproxy achieves a convincing improvement (i.e., better in at least one metric - spearman-r or top% models - while not compromising another) over the selected baselines in only 2 out of 9 cases; consequently, it falls short of synflow on average. On the other hand, as mentioned above, Eproxy+DPS does improve upon both synflow and NASWOT most of the time, but it also requires training a number of architectures, which makes it no longer a low-cost method - at best, it should be treated as an efficient, predictor-based approach. However, the comparison is done almost exclusively with methods that do not assume any training (the only notable exceptions are Tables 6, 7, and 8, but they have their own problems, as mentioned in other parts of the review, and the choice of baselines in Tables 6 and 7 is also not very convincing).
Specifically, if we know that RE finds the optimal architecture in N steps, then "finding optimal architecture within RE history" is actually a problem of finding the correct model within the N models, rather than within the whole search space -- the a priori knowledge that the optimal model exists in a relatively small subset of the search space provides a very strong prior which can naturally boost a method's performance. In the context of Table 7, RE is shown to find the optimal architecture in ~560 queries, so Eproxy+DPS needing 58 actually explores 10% of all models - if we were to apply the same level of performance to the entire search space (15k models) then it would be 1500 queries. While I would not claim that this is the expected number for Eproxy+DPS, I hope that this clearly shows why the current results are misleading. Even a random search would do significantly better in this setting.
1) it is unclear if the authors mean a particular efficient proxy or efficient proxies in general, "is" suggests that it is a particular proxy, but then there is no proxy called "Efficient Proxy", which suggests that it is rather referring to a family of efficient proxies;
NIPS_2018_813
NIPS_2018
(the simulations seem to address whether or not an improvement is actually seen in practice), the paper would benefit from a discussion of the fact that the targeted improvement is in the (relatively) small n regime. 4. The paper would benefit from a more detailed comparison with related work, in particular making a detailed comparison to the time complexity and competitiveness of prior art. Minor: 1. The proofs repeatedly refer to the Cauchy inequality, but it might be better given audience familiarity to refer to it as the Cauchy-Schwarz inequality. Post-rebuttal: I have read the authors' response and am satisfied with it. I maintain my vote for acceptance.
4. The paper would benefit from a more detailed comparison with related work, in particular making a detailed comparison to the time complexity and competitiveness of prior art. Minor:
oqDoAMYbgA
ICLR_2024
1. The experimental study is limited: the comparisons with other methods are provided only on a single Wiki-small dataset. That is not enough to judge the comparison with other baselines. 2. The training time seems to be the main bottleneck of the method; its training is slower than for almost any other tree method (as reported in the paper). Probably because of that, applying the method to bigger datasets becomes infeasible. (To be fair, the same shortcoming applies to the original Softmax Tree, and the presented method seems to double the training time.) 3. The method seems to be quite sensitive to hyperparameters, so in order to apply the method to a new problem, one has to perform some careful hyperparameter search to find a proper $\alpha$.
3. The method seems to be quite sensitive to hyperparameters, so in order to apply the method to a new problem, one has to perform some careful hyperparameter search to find a proper $\alpha$.
NIPS_2017_250
NIPS_2017
#ERROR!
3. When the proposed compression is allowed to have the same bit budget to represent each coordinate, the pairwise Euclidean distance is not distorted? Most of compression algorithms (not PCA-type algorithms) suffer from the same problem. === Updates after the feedback === I've read other reviewers' comments and authors' feedback. I think that my original concerns are resolved by the feedback. Moreover, they suggest a possible direction to combine QuadSketch with OPQ, which is appealing.
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper are not advancing the state-of-art in VQA (on the VQA dataset which has been out for a while and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper on top of which CBN is applied is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – favorably the ones which are closer to state-of-art (and whose codes are publicly available) such as MCB (Fukui et al., EMNLP16), HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know if the proposed conditional batch norm can advance the state-of-art in VQA or not. 2. L170: it would be good to know how much of performance difference this (using different image sizes and different variations of ResNets) can lead to? 3. In table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on test-standard split because one can overfit to test-dev split by uploading multiple entries. 4. Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could authors please throw some light on this? Why do they think this might be happening? 5. Figure 4 visualization: the visualization in figure (a) is from ResNet which is not finetuned at all. So, it is not very surprising to see that there are not clear clusters for answer types. However, the visualization in figure (b) is using ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison of figure (b) would be with the visualization from Ft BN ResNet in figure (a). 6. The first two bullets about contributions (at the end of the intro) can be combined together. 7. Other errors/typos: a. L14 and 15: repetition of word “imagine” b. L42: missing reference c. L56: impact -> impacts Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly developed VQA models as well (the results have not been reported on state-of-art VQA model). So, I would like to recommend acceptance of the paper. However I still have few comments -- 1. It seems that there is still some confusion about test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf). 2. The reproduced performance reported on MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version if they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with 62.5% of MCB because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (table 4 in MCB paper). 3. 
The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal, title={Multimodal residual learning for visual qa}, author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak}, booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} } 4. As AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
6. The first two bullets about contributions (at the end of the intro) can be combined together.
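For context on the module the review keeps referring to, below is a minimal sketch of conditional batch normalization in its commonly described form: an MLP maps the question embedding to residual changes of the BN scale and shift. The class, sizes, and names are illustrative and not taken from the paper's code.

```python
# A minimal conditional-batch-norm sketch: the question embedding modulates
# the per-channel scale and shift of an otherwise standard BatchNorm layer.
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    def __init__(self, num_channels, cond_dim, hidden_dim=256):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)   # keep only the statistics
        self.mlp = nn.Sequential(
            nn.Linear(cond_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2 * num_channels),
        )
        nn.init.zeros_(self.mlp[-1].weight)                     # start as plain BN
        nn.init.zeros_(self.mlp[-1].bias)

    def forward(self, x, cond):                                 # x: (B, C, H, W), cond: (B, cond_dim)
        delta_gamma, delta_beta = self.mlp(cond).chunk(2, dim=1)
        gamma = (1.0 + delta_gamma)[:, :, None, None]
        beta = delta_beta[:, :, None, None]
        return gamma * self.bn(x) + beta

# e.g. out = ConditionalBatchNorm2d(512, 1024)(feature_map, question_embedding)
```

Questions such as which ResNet layers to modulate (point 4 above) then amount to deciding which of the backbone's BN layers are replaced by such modules.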
ACL_2017_779_review
ACL_2017
There were many sentences in the abstract and in other places in the paper where the authors stuff too much information into a single sentence. This could be avoided. One can always use an extra sentence to be clearer. There could have been a section where the actual method used is explained in more detail. This explanation is glossed over in the paper. It's non-trivial to guess the idea from reading the sections alone. During test time, you need the source-pivot corpus as well. This is a major disadvantage of this approach. This is played down - in fact it's not mentioned at all. I would strongly encourage the authors to mention this and comment on it. - General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to the one proposed in Yoon Kim et al. The innovative part is that they use it for doing zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments: - Line 21-27 - the authors could have avoided this complicated structure for two simple sentences. Line 41 - Johnson et al. has SOTA on English-French and German-English. Line 77-79 - there is no evidence provided as to why a combination of multiple languages increases complexity. Please retract this statement or provide more evidence. Line 416-420 - The two lines here are repeated again. They were first mentioned in the previous paragraph. Line 577 - Figure 2, not 3!
- General Discussion: This paper uses knowledge distillation to improve zero-resource translation. The techniques used in this paper are very similar to the one proposed in Yoon Kim et al. The innovative part is that they use it for doing zero-resource translation. They compare against other prominent works in the field. Their approach also eliminates the need to do double decoding. Detailed comments:
NIPS_2022_2137
NIPS_2022
weakness) The proposed approach is incremental relative to previous works. Despite the novelty of the proposed approaches' application to learning communication protocols in MARL, these techniques have previously been proposed for other prediction problems, as alluded to by the authors in Lines 144 and 175. Line 155 also indicates that some theoretical contributions are minor extensions of theorems proven in previous work. Nevertheless, this weakness is outweighed by the remaining theoretical and empirical studies that the authors have conducted to show that the proposed approach can learn communication protocols requiring symmetry breaking and going beyond 1-WL expressivity in MARL. Quality 1. (Major Strength) Result reproducibility. The authors have provided adequate descriptions of the model architecture and hyperparameters, environments, and experiment protocols that ensure their results are reproducible. 2. (Major Strength) The experiment design adequately demonstrates the claims made in by the authors. The environment and baseline selection empirically highlight the claims of this work regarding RNI/CLIP's ability to learn improved communication protocols in environments requiring (i) symmetry-breaking or (ii) function estimators beyond the 1-WL expressivity. Note that the evaluation in Predator-Prey and Easy Traffic Junction also highlights the potential weaknesses of the proposed approach when applied to environments not requiring (i) or (ii). This provides valuable insights that can inform the readers when applying the proposed technique to other environments. 3. (Minor weakness) Missing theoretical/empirical analysis to strengthen the claims of this work further. Despite ensuring the existence of a learned GNN that can help with (i) symmetry-breaking, I believe the theoretical analysis on the ability of RL-based optimisation techniques to learn such GNNs is lacking. Nonetheless, the empirical demonstration of RNI/CLIP's performance indicates that this is not a significant issue for the evaluated environments. Furthermore, since the work mentioned convergence as a potential issue with a few proposed approaches, it is crucial to report the learning curve from the experiments in the experiment section (although this is provided in the Appendix). That aside, it will also be interesting to analyze messages that RNI/CLIP passes in environments requiring (i) or (ii). Specifically, in terms of proving the usefulness of RNI/CLIP in addressing (i), inspecting the passed representation may highlight how they aid the symmetry-breaking process. Clarity 1. (Strength) The paper is well-written. The document is generally well-written. The method description and related theoretical analysis were clear and concise. Furthermore, the experiment section clearly states the intention behind the design/selection of various baseline/environments for demonstrating the central claims of the work. Significance 1. (Strength) The work provides significant results for people working in applying GNNs for learning communication protocols for MARL. Despite the proposed approach mainly being incremental compared to previous work, the theoretical and empirical analysis provides findings useful for people working in communication for MARL. The way the authors highlighted the potential limitations of the work is specifically helpful for future research. The authors have adequately highlighted the limitations of their work through their experiments and the resulting analysis. 
In terms of societal impact, I do not think that this work requires additional information regarding its negative societal impact since its application is still rather limited to simple MARL environments.
2. (Major Strength) The experiment design adequately demonstrates the claims made in by the authors. The environment and baseline selection empirically highlight the claims of this work regarding RNI/CLIP's ability to learn improved communication protocols in environments requiring (i) symmetry-breaking or (ii) function estimators beyond the 1-WL expressivity. Note that the evaluation in Predator-Prey and Easy Traffic Junction also highlights the potential weaknesses of the proposed approach when applied to environments not requiring (i) or (ii). This provides valuable insights that can inform the readers when applying the proposed technique to other environments.
ARR_2022_161_review
ARR_2022
The amount of background provided can be reduced, and consists of quite a few detailed descriptions of topics and experiments that are not directly related to the experiments of the paper (e.g. the Priming paragraph at L210, Novel verbs paragraph at L224). The space that is currently occupied by this extensive background could be used more efficiently, and cutting it down a bit opens up space for additional experiments. The ‘Jabberwocky constructions’ experiment is quite prone to several potential confounds that need to be explored in more detail in order to ensure that the current set of results truly hints at ‘the neural reality of argument structure constructions’. The fact that the contextualised embeddings of verbs in the same syntactic configuration is highly similar isn’t that surprising in itself (as is noted by the authors as well). The authors decided to drop the ‘priming’ component of the original paper in order to adapt the experiment to LMs, but there are other options that can be explored to align the setup more closely to the original (see section below for some ideas). ### Comments / Questions - Could the results of the Sentence Sorting be driven by the fact that sentence embeddings are obtained by averaging over word embeddings? It seems that this procedure would be quite prone to simply cluster based on features stemming from individual tokens, instead of a more general abstract signal. I could imagine that in a BERT-like architecture the representation at the [CLS] position might serve as a sentence representation as well. - Alternatively, would it be possible to set up the sentence sorting experiment in such a way that the lexical overlap in between sentences is limited? This is common in structural priming experiments as well, and models are known to rely heavily on lexical heuristics. , - Did you consider different measures of similarity in the Jabberwocky experiment? Euclidean distance might not be the most perfect measure for expressing similarity, and I would suggest looking into alternatives as well, like cosine similarity. - A bit pedantic, but Jabberwocky words are non-existing nonce words, whereas the setup that the authors arrived at is only semantically nonsensical, yet still made up of existing words (a la ‘Colorless green ideas’). Referring to them as Jabberwocky (L.458) would give the impression of actually using nonce words. - How many ASCs have been argued to exist (within English)? Is there a reason why the 4 constructions used in Case Study 1 (_transitive, ditransitive, caused-motion, resultative_; L.165), are slightly different from Case Study 2 (_ditransitive, resultative, caused-motion, removal_; Table 2)? --- ### Suggestions: - L.490: “we take the embedding of the first subword token as the verb embedding.” It is also quite common in cases like that to average over the subword representations, which is done e.g. by [Hewitt and Manning (2019, footnote 4)](https://aclanthology.org/N19-1419.pdf). - I suggest not phrasing the Jabberwocky experiment as “probing” (L.444 ‘method to probe LMs’, L.465 ‘probing strategy’, etc.), given the connotation this term has with training small classifiers on top of a model’s hidden state representations. - Could the ‘priming’ aspect of Johnson and Goldberg (2013) in the Jabberwocky experiment perhaps be emulated more closely by framing it as a “Targeted Syntactic Evaluation” task (akin to Marvin & Linzen (2018), a.o.). In the context of priming, a similar setup has recently been utilised by [Sinclair et al. 
(2021)](https://arxiv.org/pdf/2109.14989.pdf). One could compare the probability of _P(gave | She traded her the epicenter. He)_ to that of _P(gave | He cut it seasonal. She)_, and likewise for the other 2 constructions. This way you wouldn’t run into the confounding issues that stem from using the contextualisation of ‘gave’. - An additional experiment that might be interesting to explore is by probing for construction type across layers at the position of the verb. In the ‘Jabberwocky’ setup one would expect that at the word embedding level construction information can’t be present yet, but as it is contextualised more and more in each layer the ASC information is likely to increase gradually. Would also be interesting than to see how the curve of a jabberwocky verb compares to that of a sensical/prototypical verb (like _gave_): there is probably _some_ degree of argument structure already encoded in the word embedding there (as a lexicalist would argue), so I would expect probing performance for such verbs to be much higher at lower levels already. - Adding to the previous point: probing in itself would not even be necessary to gain insight into the layerwise contextualisation; some of the current experiments could be conducted in such a fashion as well. --- ### Typos/Style: Very well written paper, no remarks here.
- L.490: “we take the embedding of the first subword token as the verb embedding.” It is also quite common in cases like that to average over the subword representations, which is done e.g. by [Hewitt and Manning (2019, footnote 4)](https://aclanthology.org/N19-1419.pdf).
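The two pooling choices discussed in the suggestion above — taking the first subword's state versus averaging over all of the verb's subwords — differ by a single line. The sketch below assumes contextualized states of shape (seq_len, dim) and a list of the verb's subword indices; both names are placeholders.

```python
# Turning a verb's subword states into one verb embedding: first-subword
# pooling (what the paper reportedly does) vs. mean pooling over the span
# (as in Hewitt & Manning, 2019, footnote 4).
import torch

def verb_embedding(hidden_states, verb_span, mode="mean"):
    sub = hidden_states[verb_span]        # (n_subwords, dim)
    if mode == "first":
        return sub[0]
    return sub.mean(dim=0)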
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy across 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing. Why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. The experiments on the full dataset instead of that in the low-resource regime are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation all calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have spaces between the accuracy and the standard deviation, and some items don't, which hurts readability. 2. The numbers for BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold, as they correspond to degraded performance. 3. I suggest that Listing 1 reflect the process of sending interpolated_repr into the task model to get the final representation.
__3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. The experiments on the full dataset instead of that in the low-resource regime are also encouraged.
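Regarding question 4.2 in the review above (the mixup hyperparameter), a generic mixup-style interpolation looks as follows, with λ drawn from Beta(α, α) as is conventional. This is a sketch of the common convention, not necessarily the paper's exact formulation, and the tensor names are placeholders.

```python
# Generic mixup-style interpolation between a one-hot and a smoothed token
# representation; lambda ~ Beta(alpha, alpha) controls how much smoothing is mixed in.
import torch

def interpolate(onehot_repr, smoothed_repr, alpha=0.2):
    lam = torch.distributions.Beta(alpha, alpha).sample()
    return lam * onehot_repr + (1.0 - lam) * smoothed_repr
```

With this convention, the sensitivity question reduces to how the results change as α (and hence the distribution of λ) is varied.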
aURCCzSuhc
EMNLP_2023
1. A limited set of baseline methods was selected. 2. The experimental setup is basic and does not match the ideal situation. Why are the experiments performed separately for subtype/supertype and overlapping relations among entity types? 3. A limited set of entity types is considered. Perhaps that is the reason even the basic X-Annot. method has performance similar to the other methods. 4. Manual intervention is needed to get the final sets of entity types if relations among them exist. This may not be a good idea when the number of entity types increases from 100s to 1000s.
4. Manual intervention is needed to get the final sets of entity types if relations among them exist. This may not be a good idea when the number of entity types increases from 100s to 1000s.
ACL_2017_554_review
ACL_2017
1) The paper does not dig into the theory proofs and show the convergence properties of the proposed algorithm. 2) The paper only shows the comparison between SG-MCMC and RMSProp and did not conduct other comparisons. It should explain more about the relation between pSGLD and RMSProp other than just mentioning that they are counterparts in two families. 3) The paper does not talk about the training speed impact in more detail. - General Discussion:
1) The paper does not dig into the theory proofs and show the convergence properties of the proposed algorithm.
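The pSGLD–RMSProp relation that point 2 asks the paper to spell out is usually summarized as: the same adaptive preconditioner, but pSGLD ascends the log-posterior and injects properly scaled Gaussian noise, which turns the optimizer into a sampler. Below is a side-by-side sketch of the textbook update rules (pSGLD shown without its small Γ correction term, as is common in practice); the variable names are mine.

```python
# RMSProp step (descent on a loss) vs. a pSGLD step (noisy ascent on log p).
import numpy as np

def rmsprop_step(theta, grad_loss, v, lr=1e-3, beta=0.99, eps=1e-5):
    v = beta * v + (1.0 - beta) * grad_loss**2
    g = 1.0 / (np.sqrt(v) + eps)                 # adaptive preconditioner
    return theta - lr * g * grad_loss, v

def psgld_step(theta, grad_log_post, v, lr=1e-3, beta=0.99, eps=1e-5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    v = beta * v + (1.0 - beta) * grad_log_post**2
    g = 1.0 / (np.sqrt(v) + eps)                 # same preconditioner as RMSProp
    noise = rng.normal(size=np.shape(theta)) * np.sqrt(lr * g)
    return theta + 0.5 * lr * g * grad_log_post + noise, v
```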
NIPS_2022_532
NIPS_2022
• It seems that the policy is learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution compared with just using ODA. • In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, and how the performance changes depending on the size of the labeled data.
• In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, and how the performance changes depending on the size of the labeled data.
NIPS_2020_125
NIPS_2020
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this. 2. Similar to above, it would be good to provide more details of how the attention modules are added to tested architectures. I assume they are added following the SE paper but would be good to clarify. 3. Related to above, how is the complexity of the added module controlled? Is there a tunable channel weight similar to SE? It would be good to clarify this. 4. In Table 3, the additional complexity of the found module is ~5-15% in terms of parameters and flops. It is not clear if this is actually negligible. Would be good to perform comparisons where the complexity matches more closely. 5. In Table 3, it seems that the gains are decreasing for larger models. It would be good to show results with larger and deeper models (ResNet-101 and ResNet-152) to see if the gains transfer. 6. Similar to above, it would be good to show results for different model types (e.g. ResNeXt or MobileNet) to see if the module transfer to different model types. All current experiments use ResNet models. 7. It would be good to discuss and report how the searched module affect the training time, inference time, and memory usage (compared to vanilla baselines and other attention modules). 8. It would be interesting to see the results of searching for the module using a different backbone (e.g. ResNet-56) or a different dataset (e.g. CIFAR-100) and compare both the performance and the resulting module. 9. The current search space for the attention module consists largely of existing attention operations as basic ops. It would be interesting to consider a richer / less specific set of operators.
1. It is not very clear how exactly is the attention module attached to the backbone ResNet-20 architecture when performing the search. How many attention modules are used? Where are they placed? After each block? After each stage? It would be good to clarify this.
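Points 1 and 3 above can be read against the standard squeeze-and-excitation-style module, where a reduction ratio r controls the added parameters/FLOPs and the module typically sits on the residual branch of each block. The sketch below is that standard SE block, for reference only; it is not the module found by the paper's search.

```python
# SE-style channel attention with reduction ratio r, applied to a residual branch.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, r=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // r), nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels), nn.Sigmoid(),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))               # squeeze: global average pooling -> (B, C)
        w = self.fc(w)[:, :, None, None]     # excite: per-channel weights in (0, 1)
        return x * w                         # reweight channels

# Typical placement, one module per residual block:
#   out = self.conv_branch(x)
#   out = self.se(out)
#   out = torch.relu(out + x)
```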
ARR_2022_186_review
ARR_2022
I have two main concerns. The first is that how the adversarial training set is generated with code is not clearly discussed, and the details of the adversarial training set are not provided. The second concern is that the reason for excluding 25% of the word phrases, stated in the last paragraph of 4.1, and the reason for using the first 50 examples from the OntoNotes test set are not fully discussed. 1. The Appendix H section is difficult to follow and should be reorganized.
1. The Appendix H section is difficult to follow and should be reorganized.
ARR_2022_59_review
ARR_2022
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds. - Another concern is that the method may not be practical. In fine-tuning, THE-X first drops the pooler of the pre-trained model and replaces softmax and GeLU, then conducts standard fine-tuning. For the fine-tuned model, they add a LayerNorm approximation and distill knowledge from the original LN layers. Next, they drop the original LN and convert the model into fully HE-supported ops. The pipeline is too complicated and the knowledge distillation may not be easy to control. - Only evaluating the approach on BERTtiny is also not convincing, although I understand that there are other existing papers that may do the same thing. For example, a BiLSTM-CRF could yield a 91.03 F1-score and a BERT-base could achieve 92.8. Although computation efficiency and energy saving are important, it is necessary to comprehensively evaluate the proposed approach. - The LayerNorm approximation seems to have a non-negligible impact on the performance for several tasks. I think it is an important issue that is worth exploring. - I am willing to see other reviews of this paper and the response of the authors. - Line #069: it possible -> it is possible?
- If I understand correctly, In Tables 1 and 2, the authors report the best results on the **dev set** with the hyper-parameter search and model selection on **dev set**, which is not enough to be convincing. I strongly suggest that the paper should present the **average** results on the **test set** with clearly defined error bars under different random seeds.
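For background on why THE-X-style pipelines replace GeLU and softmax at all: homomorphic encryption only evaluates additions and multiplications, so non-polynomial activations are swapped for low-degree polynomial surrogates. The sketch below shows the generic idea with a plain least-squares fit; it is an illustration only, not THE-X's actual approximation, and the degree and fit range are arbitrary design choices.

```python
# Fit a low-degree polynomial surrogate for GeLU over a bounded range; the
# surrogate uses only +/* and is therefore evaluable under HE.
import numpy as np
from scipy.special import erf

def gelu(x):
    return 0.5 * x * (1.0 + erf(x / np.sqrt(2.0)))

xs = np.linspace(-4.0, 4.0, 2001)            # fit range: accuracy vs. multiplicative-depth trade-off
coeffs = np.polyfit(xs, gelu(xs), deg=3)     # degree-3 surrogate for illustration

def gelu_poly(x):
    return np.polyval(coeffs, x)

print("max |error| on the fit range:", np.max(np.abs(gelu_poly(xs) - gelu(xs))))
```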
NIPS_2021_1072
NIPS_2021
I have certain concerns about the paper. 1/ I think the contribution of the paper is a bit limited. V-MAIL combines several existing ideas, e.g., latent imagination, GAIL, variational state space model, etc., and achieves a good performance. However, how these components affect each other and how they contribute to the final performance are not clear. An ablation study is also missing from the paper. In this case, it would be hard to get inspiration from reading it. 2/ Although the paper has provided theoretical analysis about the model-based adversarial imitation learning (sect. 4.1 and sect. 4.2), they are disconnected from the practical implementations (sect. 4.3). In particular, Theorem 1 shows that the divergence of the visitation distributions in a MDP can be upper bounded by a divergence of the visitation distributions in a POMDP. However, in the practical implementation, a variational state space model captures the belief, rather than the visitation distribution of the belief. In addition, it feels difficult to compute the visitation frequency of a belief, whose size is exponential to the size of the history and the state space. I believe the proposed algorithm indeed has its merit, but I don’t think Theorem 1 provides a correct justification of the optimization objective used in this paper. 3/ I feel the author should be careful when making certain claims. For example, from line 39 to line 48, the authors are analyzing the limitations of the existing IRL methods and adversarial imitation learning methods. “These approaches explicitly train a GAN-based classifier [17] to distinguish the visitation distribution of the agent from the expert, and use it as a reward signal for training the agent with RL….” However, not all IRL methods are adversarial imitation learning. In fact, most of them don’t train a GAN-based classifier and do RL afterwards. Instead, a lot of them recover the reward and do planning instead. 4/ The authors claimed that V-MAIL achieves zero-shot transfer to novel environments. However, the policy is fine-tuned with additional expert demonstrations, as shown in Alg. 2. Why is this zero-shot? In addition, I guess the transferability might be limited by the real difficulty of the source task / target task. Walker-run is clearly harder than walker-walk, so the policy transfer here is possible. Also, for the manipulation scenario, 3-prong task with both clockwise / counter clockwise rotations, together with one 4-prong task, actually provides sufficient information about the target task. I guess it might be difficult to transfer a policy from simpler tasks to more complex tasks. This has to be made clear in the paper. Otherwise, it is quite misleading.
2. Why is this zero-shot? In addition, I guess the transferability might be limited by the real difficulty of the source task / target task. Walker-run is clearly harder than walker-walk, so the policy transfer here is possible. Also, for the manipulation scenario, 3-prong task with both clockwise / counter clockwise rotations, together with one 4-prong task, actually provides sufficient information about the target task. I guess it might be difficult to transfer a policy from simpler tasks to more complex tasks. This has to be made clear in the paper. Otherwise, it is quite misleading.
ICLR_2021_1966
ICLR_2021
, Suggestions, Questions: 1. A theoretical discussion about following points will improve the contribution of the paper: a. Why do large margins result in higher adversarial robustness? What happens if I change the attack type? b. Benefits compared over other adversarial training methods are not clear. c. A more detailed discussion about the equilibrium state is necessary, as currently provided in Sec. 2.3. This is rather an example. 2. Experimental section: a. Need to report average over multiple runs. Results are very close together and it is hard to favor one method. b. Sec. 3.1: Since this is the toy-dataset, a discussion why the decision boundaries look as they do, would be interesting. c. Sec. 3.3: What information is in Fig. 9 middle and right? 3. Formatting and writing: a. Detailed proofreading required. e.g. on p. 3 “using cross-entropy loss and clean data for training” b. Some variables are used but not introduced. e.g. x_n1, x_n2 in Sec. 2.3. c. Figures are too small and not properly labeled in experimental section. d. References to prior work are missing as e.g. “Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning” e. Algorithms need rework, e.g. information of Alg. 1 can be written in 2,3 lines. Though the idea of adaptive adversarial noise magnitude is in general appealing, the paper has some weaknesses: (i) theoretical contribution is relatively minor, (ii) the paper does not present the material sufficiently clearly to the reader, and (iii) experimental evaluation is not sufficiently conclusive in favor of the paper's central hypothesis.
2. Experimental section: a. Need to report average over multiple runs. Results are very close together and it is hard to favor one method. b. Sec. 3.1: Since this is the toy-dataset, a discussion why the decision boundaries look as they do, would be interesting. c. Sec. 3.3: What information is in Fig. 9 middle and right?
FpElWzxzu4
ICLR_2024
1. In the intro section, the claim that Transformers rely on regularly sampled time-series data is wrong. For example, [1] shows that the Transformer model handles irregularly-sampled time series well for imputation. 2. In section 2, "Finally, INRs operate on a per-data-instance basis, meaning that one time-series instance is required to train an INR". This claim is true but I don't think it is an advantage. A model that can only handle a single time series data is almost useless. 3. In section 3.1, why is $c_i^t$ a vector rather than a scalar? It just denotes the temporal coordinate of a time point and should be a scalar. 4. Why is the frequency of the Fourier feature uniformly sampled? 5. "Vectors $B_m \in \mathbb{R}^{1 \times D_{\psi_F}}$, $\delta_m \in \mathbb{R}^{1 \times D_{\psi_F}}$ denote the phase shift and the bias respectively". I think what the author wants to claim is that $B_m$ represents the bias while $\delta_m$ denotes the phase shift. 6. In Figure 5, it is unclear what the vertical axis represents. [1] NRTSI: Non-Recurrent Time Series Imputation, ICASSP 2023.
2. In section 2, "Finally, INRs operate on a per-data-instance basis, meaning that one time-series instance is required to train an INR". This claim is true but I don't think it is an advantage. A model that can only handle a single time series is almost useless.
UEx5dZqXvr
EMNLP_2023
Based on the analyses presented in the paper, the scaling behavior of document-level machine translation models (in terms of number of parameters and number of data points) is not crucially different from the scaling behavior of sentence-level machine translation systems, which has already been studied by [Ghorbani et al. (2021)](https://arxiv.org/pdf/2109.07740.pdf). In my opinion, the experiments (or their reporting) could be more thorough in many places to back up the claims made by the authors, e.g. - Figure 1 indicates a causal relationship between the number of parameters and translation quality by affecting sample efficiency and optimal sequence length. While the graphic is confusing to me (e.g. it looks to me as if the number of parameters is affecting the corpus size), I don’t think that the intended claim (as reiterated at the end of the “Introduction” section) is supported by the results. None of the experiments actually show causation but rather association between the variables. - There is no information on how the function for the optimal sequence length was estimated (Equation 1) and how reliable we expect this model to be. - In Section 5.2, the authors should define what they mean by “error accumulation” explicitly. Is the problem that the models base the translation of later sentences made on erroneous translation of earlier sentences? If that is what the authors meant by “error accumulation”, it would need more analysis to verify such a phenomenon. Personally, I’d be interested in details like the maximum sequence length used for these experiments and the number of training examples per bin (which are not reported yet), as the observations made by the authors could be related to tendencies of the Transformer to overfit to length statistics of the training data, see [Varis and Bojar (2021)](https://aclanthology.org/2021.emnlp-main.650.pdf). - The results presented in Section 5.3 look rather noisy to me as the accuracy on ContraPro decreases from context length 60 to 120 and from 120 to 250 while it improves considerably for larger context lengths. Intuitively, a substantial context length increase (e.g. from 60 to 250 tokens) should not hurt the accuracy. The authors do not make an attempt to explain this trend. - It would be helpful to also provide the confidence intervals to strengthen the conclusions from the experiments with one or multiple factors. - The authors mention in Section 5.1 that the cross entropy loss “fails to fully depict the translation quality”. I don’t think that this conclusion is valid based on the authors' experiments. The authors are measuring translation quality in terms of (d-)BLEU, i.e. in terms of a metric that has many limitations, especially for document-level MT, see [Kocmi et al. (2021)](https://aclanthology.org/2021.wmt-1.57.pdf); [Post and Junczys-Dowmunt (2023)](https://arxiv.org/pdf/2304.12959.pdf). If general translation quality is not properly reflected by the metric, then it can’t really be determined whether the loss is a good indicator of general translation quality. Some minor comments are: - For most of the experiments in Section 4, the factor that is held constant is not reported, e.g. for the experiment of the joint effect of maximum sequence length and data scale (in Section 4.2), I couldn’t find the model size. - The authors mention that previous work, in particular, Beltagy et al. (2020) and Press et al. (2021) demonstrate that model performance improves with a larger context. Let me note here that their methods are different from the ones presented in this paper and thus conclusions can be different for a number of reasons. - In my opinion, a lot of the details on training and inference configurations can be moved to the appendix. - The abbreviation “MAC” is used in line 194 already but only explained later in the paper.
- There is no information on how the function for the optimal sequence length was estimated (Equation 1) and how reliable we expect this model to be.
NIPS_2018_125
NIPS_2018
- Some missing references and somewhat weak baseline comparisons (see below) - Writing style needs some improvement, although, it is overall well written and easy to understand. Technical comments and questions: - The idea of active feature acquisition, especially in the medical domain was studied early on by Ashish Kapoor and Eric Horvitz. See https://www.microsoft.com/en-us/research/wp-content/uploads/2016/12/NIPS2009.pdf There is also a number of missing citations to work on using MDPs for acquiring information from external sources. Kanani et al, WSDM 2012, Narsimhan et al, "Improving Information Extraction by Acquiring External Evidence with Reinforcement Learning", and others. - section 3, line 131: "hyperparameter balancing the relative importances of two terms is absorbed in the predefined cost". How is this done? The predefined cost could be externally defined, so it's not clear how these two things interact. - section 3.1, line 143" "Then the state changes and environment gives a reward". This is not true of standard MDP formulations. You may not get a reward after each action, but this makes it sound like that. Also, line 154, it's not clear if each action is a single feature or the power set. Maybe make the description more clear. - The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach. Writing style and other issues: - Line 207: I didn't find the pseudo code in the supplementary material - The results are somewhat difficult to read. It would be nice to have a more cleaner representation of results in figures 1 and 2. - Line 289: You should still include results of DWSC if it's a reasonable baseline - Line 319: your dollar numbers in the table don't match! - The paper will become more readable by fixing simple style issues like excessive use of "the" (I personally still struggle with this problem), or other grammar issues. I'll try and list most of the fixes here. 4: joint 29: only noise 47: It is worth noting that 48: pre-training is unrealistic 50: optimal learning policy 69: we cannot guarantee 70: manners meaning that => manner, that is, 86: work 123: for all data points 145: we construct an MDP (hopefully, it will be proper, so no need to mention that) 154: we assume that 174: learning is a value-based 175: from experience. To handle continuous state space, we use deep-Q learning (remove three the's) 176: has shown 180: instead of basic Q-learning 184: understood as multi-task learning 186: aim to optimize a single 208: We follow the n-step 231: versatility (?), we perform extensive 233: we use Adam optimizer 242: We assume uniform acquisition cost 245: LSTM 289: not only feature acquisition but also classification. 310: datasets 316: examination cost?
- The biggest weakness of the paper is that it does not compare to simple feature acquisition baselines like expected utility or some such measure to prove the effectiveness of the proposed approach.
NIPS_2017_434
NIPS_2017
--- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: 1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not ablated. How important is the added complexity? Will one IN do? 2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2. 3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated. Why is this particular dimension of difficulty interesting? 4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error? 5. Are the learned object state embeddings interpretable in any way before decoding? 6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions: * The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs). * How many different kinds of physical interaction can be in one simulation? * How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates? Preliminary Evaluation --- Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder. While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated. Why is this particular dimension of difficulty interesting?
NIPS_2017_567
NIPS_2017
Weakness: 1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)? (2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that. 2. It seems to me that the multi-scale statement is a bit misleading, because the slow and fast RNN do not operate on different physical time scale, but rather on the logical time scale when the stacks are sequentialized in the graph. Therefore, the only benefit here seems to be the reduce of gradient path by the slow RNN. 3. To reduce the gradient path on stacked RNN, a simpler approach is to use the Residual Units or simply fully connect the stacked cells. However, there is no comparison or mention in the paper. 4. The experimental results do not contain standard deviations and therefore it is hard to judge the significance of the results.
1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly. Here are some examples: (1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where are the h_{t-1}^{1..5} in Fig2(b)? What is h_{t-1} in Figure2(b)? (2) In line 96, I do not understand the sentence "our lower hierarchical layers zoom in time" and the sentence following that.
ICLR_2023_624
ICLR_2023
1. evaluation on a single domain The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the literature. This would also verify whether the method works with discrete action spaces and high-dimensional observations. 2. evaluation on a setting created by the authors, no well-established external benchmark The authors seem to create their own train and test splits in Meta World. This seems strange since Meta World recommends a particular train and test split (e.g. MT10 or MT50) in order to ensure fair comparison across different papers. I strongly suggest running experiments on a pre-established setting so that your results can easily be compared with prior work (without having to re-implement or re-run them). You don't need to get SOTA results, just show how it compares with reasonable baselines like the ones you already include. Otherwise, there is a big question mark around why you created your own "benchmark" when a very similar one exists already and whether this was somehow carefully designed to make your approach look better. 3. limited number of baselines While you do have some transformer-based baselines I believe the method could greatly benefit from additional ones like BC, transformer-BC, and other offline RL methods like CQL or IQL. Such comparisons could help shed more light into whether the transformer architecture is crucial, the hypernetwork initialization, the adaptation layers, or the training objective. 4. more analysis is needed It isn't clear how the methods compare with the given expert demonstrations on the new tasks. Do they learn to imitate the policy or do they learn a better policy than the given demonstration? I suggest comparing with the performance of the demonstration or policy from which the demonstration was collected. If the environment is deterministic and the agent gets to see expert demonstrations, isn't the problem of learning to imitate it quite easy? What happens if there is more stochasticity in the environments or the given demonstration isn't optimal? When finetuning transformers, it is often the case that they forget the tasks they were trained on. It would be valuable to show the performance of your different methods on the tasks they were trained on after being finetuned on the downstream tasks. Are some of them better than the others at preserving previously learned skills? 5. missing some important details The paper seems to be missing some important details regarding the experimental setup. For example, it wasn't clear to me how the learning from observations setting works. At some point you mention that you condition on the expert observations while collecting online data. Does this assume the ability to reset the environment in any state / observation? If so, this is a big assumption that should be more clearly emphasized and discussed. how exactly are you using the expert observations in combination with online learning? There are also some missing details regarding the expertise of the demonstrations at test time. Are these demonstrations coming from an an expert or how good are they? Minor sometimes you refer to generalization to new tasks. however, you finetune your models, so i believe a better term would be transfer or adaptation to new tasks.
1. evaluation on a single domain The method is evaluated only on the tasks from Meta World, a robotic manipulation domain. Hence, it is difficult to judge whether the results will generalize to other domains. I strongly recommend running experiments on a different benchmark such as Atari which is commonly used in the literature. This would also verify whether the method works with discrete action spaces and high-dimensional observations.
FgEM735i5M
EMNLP_2023
1. This paper presents a highly effective engineering method for ReC. However, it should be noted that the proposed framework incorporates some combinatorial and heuristic aspects. In particular, the Non-Ambiguous Query Generation procedure relies on a sophisticated filtering template. It would be helpful if the author could clarify the impact of these heuristic components. 2. Since the linguistic expression rewriting utilizes the powerful GPT-3.5 language model, it would be interesting to understand the extent of randomness and deviation that may arise from the influence of GPT-3.5. Is there any studies or analyses on this aspect?
1. This paper presents a highly effective engineering method for ReC. However, it should be noted that the proposed framework incorporates some combinatorial and heuristic aspects. In particular, the Non-Ambiguous Query Generation procedure relies on a sophisticated filtering template. It would be helpful if the author could clarify the impact of these heuristic components.
e4dXIBRQ9u
EMNLP_2023
- **Flexibility**: One of the limitations of this approach lies in the requirement for a homologous teacher model in terms of paradigm and vocabulary table with the student model. This can hinder the method's flexibility and may pose challenges during the preparation of the teacher model. - **Scope**: The pre-trained teacher model tends to be already a strong model; do we truly need to train a student model from scratch using weighted training? Is it possible to simply fine-tune the teacher model with weighted training? Weighted training would be meaningful if our goal is to obtain a small yet strong student model. However, the authors have not clearly indicated the specific targeted scenarios in this paper. - **Experiments**: 1) The authors did not explore how different teacher models affect the student's learning effectiveness, which could have provided valuable insights into the impact of varying teacher models on the proposed method's performance. The choice of teacher model may lack flexibility; however, it is worth noting that there are numerous robust GEC systems that share the same PLM architecture. 2) The effectiveness of the proposed approach for other language families remains unknown.
2) The effectiveness of the proposed approach for other language families remains unknown.
ICLR_2022_917
ICLR_2022
Via the top-K approach, it appears like the authors make an implicit assumption about what fraction of edges is important in the graph (a fixed fraction, can't be lower or higher) - in my opinion, this certainly limits the wide applicability of the method. Alternatively, in GNN Explainer [1], they allow up to K edges to explain the label. The applicability of the method is limited by the graph encoder used (here GNN). 1-WL GNN's are known to be unable to predict links or identify and distinguish certain classes of subgraphs [2][3][4] i.e. GNN's are not well suited for Eq. 5 The work appears to miss relevant baselines like GNN Explainer[1] / CF-GNN Explainer [5]. Moreover the authors ideally should compare with baselines, which enrich GNN's with structural features [6][7][8] (As they are explainable to a certain extent as well). Minor: The separability of a graph into two subgraphs the causal and non causal might not always be possible? (would this need an encoder which is able to accurately capture the discrete topology over graphs of all orders and sizes - if not I can just make a house into a clique and the label would be incorrectly assigned as 1 because the edges required for the house are present) Please elaborate on this. The set {s} employed in the test are limited to the ones seen in train. Moreover, consider a graph with say 20 nodes, and the case where the causal part for instance consists of a house motif and a tree base and the label assigned is 1 or 0 based on the output of House Motif XOR Tree Base - would this be captured by the proposed method (appears like it wont)? References: 1.Ying, Rex, et al. "Gnn explainer: A tool for post-hoc explanation of graph neural networks." arXiv preprint arXiv:1903.03894 (2019). 2.Srinivasan, Balasubramaniam, and Bruno Ribeiro. "On the equivalence between positional node embeddings and structural graph representations." arXiv preprint arXiv:1910.00452 (2019). 3.Dwivedi, Vijay Prakash, et al. "Benchmarking graph neural networks." arXiv preprint arXiv:2003.00982 (2020). 4.Chen, Zhengdao, et al. "Can graph neural networks count substructures?." arXiv preprint arXiv:2002.04025 (2020). 5.Lucic, Ana, et al. "CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks." arXiv preprint arXiv:2102.03322 (2021). 6.Bouritsas, Giorgos, et al. "Improving graph neural network expressivity via subgraph isomorphism counting." arXiv preprint arXiv:2006.09252 (2020). 7.Bodnar, Cristian, et al. "Weisfeiler and lehman go topological: Message passing simplicial networks." arXiv preprint arXiv:2103.03212 (2021). 8.Bodnar, Cristian, et al. "Weisfeiler and lehman go cellular: Cw networks." arXiv preprint arXiv:2106.12575 (2021).
6.Bouritsas, Giorgos, et al. "Improving graph neural network expressivity via subgraph isomorphism counting." arXiv preprint arXiv:2006.09252 (2020).
NIPS_2017_356
NIPS_2017
] My major concerns about this paper is the experiment on visual dialog dataset. The authors only show the proposed model's performance on discriminative setting without any ablation studies. There is not enough experiment result to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal. 1: The authors claim their model can achieve superior performance having significantly fewer parameters than baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be authors in [1] just test model with standard parameter setting. To backup this claim, is there any improvements when the proposed model use larger word embedding, and LSTM parameters? 2: There are two test settings in visual dialog, while the Table 1 only shows the result on discriminative setting. It's known that discriminative setting can not apply on real applications, what is the result on generative setting? 3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory.
3: To further backup the proposed visual reference resolution model works in real dataset, please also conduct ablation study on visDial dataset. One experiment I'm really interested is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model didn't consider the relevant attention retrieval from the attention memory.
NIPS_2020_547
NIPS_2020
- While I understand the space limitations, I think the paper could greatly benefit from more explanation of the meaning of the bounds (perhaps in the appendix). - Line 122: it's not obvious to me that the smoothed bound is more stable, since the \gamma factor in the numerator is also larger. Some calculations here, or a very simple experiment, would greatly help the reader understand when smoothing would be desirable. - The above also applies for the discussion on overestimation starting on line 181, especially in the trade-off of reducing overestimation error and converging to a suboptimal value function. - The above applies for the combined smoothness + regularization algorithm
- While I understand the space limitations, I think the paper could greatly benefit from more explanation of the meaning of the bounds (perhaps in the appendix).
ARR_2022_205_review
ARR_2022
- Missing ablations: It is unclear from the results how much performance gain is due to the task formulation, and how much is because of pre-trained language models. The paper should include results using the GCPG model without pre-trained initializations. - Missing baselines for lexically controlled paraphrasing: The paper does not compare with any lexically constrained decoding methods (see references below). Moreover, the keyword control mechanism proposed in this method has been introduced in CTRLSum paper (He et al, 2020) for keywork-controlled summarization. - Related to the point above, the related work section is severely lacking (see below for missing references). Particularly, the paper completely omits lexically constrained decoding methods, both in related work and as baselines for comparison. - The paper is hard to follow and certain sections (particularly the Experimental Setup) needs to made clearer. It was tough to understand exactly where the exemplar/target syntax was obtained for different settings, and how these differed between training and inference for each of those settings. - The paper should include examples of generated paraphrases using all control options studies (currently only exemplar-controlled examples are included in Figure 5). Also, including generation from baseline systems for the same examples would help illustrate the differences better. Missing References: Syntactically controlled paraphrase generation: Goyal et al., ACL2020, Neural Syntactic Preordering for Controlled Paraphrase Generation Sun et al. EMNLP2021, AESOP: Paraphrase Generation with Adaptive Syntactic Control <-- this is a contemporaneous work, but would be nice to cite in next version. Keyword-controlled decoding strategies: Hokamp et al. ACL2017, Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search Post et al, NAACL 2018, Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation Other summarization work that uses similar technique for keyword control: He et al, CTRLSum, Towards Generic Controllable Text Summarization
- Missing ablations: It is unclear from the results how much performance gain is due to the task formulation, and how much is because of pre-trained language models. The paper should include results using the GCPG model without pre-trained initializations.
ARR_2022_215_review
ARR_2022
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent. 2. It seemed a little disappointing to me that the 212 new pairs have _not_ been translated to English (if I'm not mistaken). To really make this dataset a bilingual resource, it would be good to have all pairs in both languages. In the given way, it seems that ultimately only the French version was of interest to the study - unlike it is claimed initially. 3. Almost no information about the reliability of the translations and the annotations is given (except for the result of the translation checking in line 285), which seems unsatisfying to me. To assess the translations, more information about the language/translation expertise of the authors would be helpful (I don't think this violates anonymity). For the annotations, I would expect some measure of inter-annotator agreement. 4. The metrics in Tables 4 and 5 need explanation, in order to make the paper self-contained. Without going to the original paper on CrowS-pairs, the values are barely understandable. Also, information on the values ranges should be given as well as whether higher or lower values are better. - 066: social contexts >> I find this term misleading here, since the text seems to be about countries/language regions. - 121: Deviding 1508 into 16*90 = 1440 cases cannot be fully correct. What about the remaining 68 cases? - 241: It would also be good to state the maximum number of tasks done by any annotator. - Table 3: Right-align the numeric columns. - Table 4 (1): Always use the same number of decimal places, for example 61.90 instead of 61.9 to match the other values. This would increase readability. - Table 4 (2): The table exceeds the page width; that needs to be fixed. - Tables 4+5 (1): While I undersand the layout problem, the different approaches would be much easier to compare if tables and columns were flipped (usually, one approach per row, one metric per column). - Tables 4+5 (2): What's the idea of showing the run-time? I didn't see for what this is helpful. - 305/310: Marie/Mary >> I think these should be written the same. - 357: The text speaks of "53", but I believe the value "52.9" from Table 4 is meant. In my view, such rounding makes understanding harder rather than helping. - 575/577: "1/" and "2/" >> Maybe better use "(1)" and "(2)"; confused me first.
1. The paper raises two hypotheses in lines 078-086 about multilinguality and country/language-specific bias. While I don't think the hypotheses are phrased optimally (could they be tested as given?), their underlying ideas are valuable. However, the paper actually does not really study these hypotheses (nor are they even mentioned/discussed again). I found this not only misleading, but I would have also liked the paper to go deeper into the respective topics, at least to some extent.
ACL_2017_494_review
ACL_2017
- I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors. - The authors introduce a synthetic word similarity evaluation dataset, Morph-SimLex. They create it by applying their presumably semantic-meaning-preserving morphological rules to SimLex999 to generate many more pairs with morphological variability. They do not manually annotate these new pairs, but rather use the original similarity judgements from SimLex999. The obvious caveat with this dataset is that the similarity scores are presumed and therefore less reliable. Furthermore, the fact that this dataset was generated by the very same rules that are used in this work to morph-fit word embeddings, means that the results reported on this dataset in this work should be taken with a grain of salt. The authors should clearly state this in their paper. - (Soricut and Och, 2015) is mentioned as a future source for morphological knowledge, but in fact it is also an alternative approach to the one proposed in this paper for generating morphologically-aware word representations. The authors should present it as such and differentiate their work. - The evaluation does not include strong morphologically-informed embedding baselines. General Discussion: With the few exceptions noted, I like this work and I think it represents a nice contribution to the community. The authors presented a simple approach and showed that it can yield nice improvements using various common embeddings on several evaluations and four different languages. I’d be happy to see it in the conference. Minor comments: - Line 200: I found this phrasing unclear: “We then query … of linguistic constraints”. - Section 2.1: I suggest to elaborate a little more on what the delta is between the model used in this paper and the one it is based on in Wieting 2015. It seemed to me that this was mostly the addition of the REPEL part. - Line 217: “The method’s cost function consists of three terms” - I suggest to spell this out in an equation. - Line 223: x and t in this equation (and following ones) are the vector representations of the words. I suggest to denote that somehow. Also, are the vectors L2-normalized before this process? Also, when computing ‘nearest neighbor’ examples do you use cosine or dot-product? Please share these details. - Line 297-299: I suggest to move this text to Section 3, and make the note that you did not fine-tune the params in the main text and not in a footnote. - Line 327: (create, creates) seems like a wrong example for that rule. - I have read the author response
- I was hoping to see some analysis of why the morph-fitted embeddings worked better in the evaluation, and how well that corresponds with the intuitive motivation of the authors.
ARR_2022_298_review
ARR_2022
The main weak point of the paper is that at it is not super clear. There are many parts in which I believe the authors should spend some time in providing either more explanations or re-structure a bit the discussion (see the comments section). - I suggest to revise a bit the discussion, especially in the modeling section, which in its current form is not clear enough. For example, in section 2 it would be nice to see a better formalization of the architecture. If I understood correctly, the Label Embeddings are external parameters; instead, the figure is a bit misleading, as it seems that the Label Embeddings are the output of the encoder. - Also, when describing the contribution in the Introduction, using the word hypothesis/null hypothesis really made me think about statistical significance. For example, in lines 87-90 the authors introduce the hypothesis patterns (in contrast to the null hypothesis) referring to the way of representing the input labels, and they mention "no significant" difference, which instead is referring to the statistical significance. I would suggest to revise this part. - It is not clear the role of the dropout, as there is not specific experiment or comment on the impact of such technique. Can you add some details? - On which data is fine-tuned the model for the "Knowledge Distillation" - Please, add an intro paragraph to section 4. - The baseline with CharSVM seems disadvantaged. In fact, a SVM model in a few-shot setting with up to 5-grams risks to have huge data-sparsity and overfitting problems. Can the authors explain why they selected this baseline? Is a better (fair) baseline available? - In line 303 the authors mention "sentence transformers". Why are the authors mentioning this? Is it possible to add a citation? - There are a couple of footnotes referring to wikipedia. It is fine, but I think the authors can find a better citation for the Frobenius norm and the Welch test. - I suggest to make a change to the tables. Now the authors are reporting in bold the results whose difference is statistical significant. Would it be possible to highlight in bold the best result in each group (0, 8, 64, 512) and with another symbol (maybe underline) the statistical significant ones? -
- On which data is fine-tuned the model for the "Knowledge Distillation" - Please, add an intro paragraph to section 4.
NIPS_2016_186
NIPS_2016
weakness in the algorithm is the handling of the discretization. It seems that two improvements are somewhat easily achievable: First, there should probably be a way to obtain instance dependent bounds for the continuous setting. It seems that by taking a confidence bound of size \sqrt{log(st)/T_{i,t}} rather than \sqrt{log(t)/T_{i,t}}, one can get a logarithmic dependence on s, rather than polynomial, which may solve this issue. If that doesn’t work, the paper should benefit from an explanation for why that doesn’t work. Second, it seems that the discretization should be adaptive to the data. Otherwise, the running time and memory are dependent of the time horizon in cases where they do not have to. Overall, the paper is well written and motivated. Its results, though having room for improvement are non-trivial and deserve publication. Minor comments: - Where else was the k-max problem discussed? Please provide a citation for this.
- Where else was the k-max problem discussed? Please provide a citation for this.
NIPS_2019_703
NIPS_2019
of their work? The submission is of very high quality. Max-value entropy search is well motivated, demonstrably works well in practice. The authors deserve credit for the careful, high quality experiments. Furthermore, the paper provides a theoretical guarantee on the performance, although I lack the expertise to properly judge the significance of this. (2) Clarity: Is the submission clearly written? Is it well organized? (If not, please make constructive suggestions for improving its clarity.) Does it adequately inform the reader? (Note: a superbly written paper provides enough information for an expert reader to reproduce its results.) The paper is an excellent read. It presents the algorithm in a structured way, with sufficient detail for reproduction. I would recommend it for reading to anyone interested in the general topic. (3) Originality: Are the tasks or methods new? Is the work a novel combination of well-known techniques? Is it clear how this work differs from previous contributions? Is related work adequately cited? As mentioned above, it is clear that this work was heavily inspired by MES (Wang and Jegelka, 2017). This work does not adequately cite related works. My problem is that the way this submission puts it, 'our work is inspired by the recent success of single objective BO algorithms based on the idea of optimizing output-space information gain', might give the wrong impression to the reader. MESMO is the natural extension of MES (Wang and Jegelka, 2017) to the multiobjective domain. It also positions MESMO against PESMO and it compares/contrasts the two. This comparison might make it seem like the contribution is greater than it is in reality. On multiple occasion, the paper presents work without disclosing the strong connection to (Wang and Jegelka, 2017). Examples: - Section 4, Equations 4.4-4.6. These equations literally come from (Wang and Jegelka, 2017). - Section 4 1) and 2), The algorithm this paper uses to approximate MESMO is the same algorithm as in (Wang and Jegelka, 2017). - Section 4.1 The theoretical result is analogous to Section 3.4 of (Wang and Jegelka, 2017). Based on the arguments above, I will rephrase the question: Are the ideas in this paper novel enough to warrant a new publication over (Wang and Jegelka, 2017)? I would argue that they are not. The reason is that there does not seem to be a significant hurdle that this work needed to overcome in order to extend MES to the multiobjective domain. Both the formula for MESMO and the algorithm to approximate it extend to multiobjective problems. I cannot comment on Theorem 1. (4) Significance: Are the results important? Are others (researchers or practitioners) likely to use the ideas or build on them? Does the submission address a difficult task in a better way than previous work? Does it advance the state of the art in a demonstrable way? Does it provide unique data, unique conclusions about existing data, or a unique theoretical or experimental approach? The method is certainly interesting to practitioners. The work demonstrates the performance of MESMO on a variety of benchmarks and it consistently outperforms the baselines. _________________________________________________________________________ After reading the author's reply, I decided to raise my score. I expect that the connection to (Wang and Jegelka, 2017) is properly disclosed in the revised version. The paper is very well written but it has shortcomings in originality. The explanation for my score is that I think a practitioner might find this paper useful, but I don't expect it to have a large impact in the research community. I think this paper is truly borderline. I do not have strong arguments for or against acceptance.
- Section 4, Equations 4.4-4.6. These equations literally come from (Wang and Jegelka, 2017).
JMSkoIYFSn
EMNLP_2023
1. This paper shows little novelty, as it is only a marginal improvement of the conventional attention mechanism based on some span-level inductive bias. 2. The performance improvement over simple baselines like max-pooling is also marginal for probing tasks. 3. Although 4 patterns are proposed in this paper, it seems hard to find a unified solution or guideline for how to combine them in different tasks. If we have to try all possible combinations every time, the practicality of this method would be significantly reduced. 4. The authors validate the effectiveness of the proposed span-level attention only on the backbone of BERT. The paper lacks experiments on other encoder backbones to further demonstrate the generality of the proposed method. 5. The authors do not compare their methods with other state-of-the-art methods for span-related tasks, such as SpanBERT, thus lacking some credibility. 6. The writing can be improved. There are some typos and unclear descriptions. Please refer to comments for detail.
5. The authors do not compare their methods with other state-of-the-art methods for span-related tasks, such as SpanBERT, thus lacking some credibility.
NIPS_2018_639
NIPS_2018
Weakness: - I am quite not convinced by the experimental results of this paper. The paper sets to solve POMDP problem with non-convex value function. To motivate the case for their solution the examples of POMDP problem with non-convex value functions used are: (a) surveillance in museums with thresholded rewards; (b) privacy preserving data collection. So then the first question is when the case we are trying to solve are above two, why is there not a single experiment on such a setting, not even a simulated one? This basically makes the experiments section not quite useful. - How does the reader know that the reward definitions of rho for this tasks necessitates a non-convex reward function. Surveillance and data collection has been studied in POMDP context by many papers. Fortunately/unfortunately, many of these papers show that the increase in the reward due to a rho based PWLC reward in comparison to a corresponding PWLC state-based reward (R(s,a)) is not that big. (Papers from Mykel Kochenderfer, Matthijs Spaan, Shimon Whiteson are some I can remember from top of my head.) The related work section while missing from the paper, if existed, should cover papers from these groups, some on exactly the same topic (surveillance and data collection). - This basically means that we have devised a new method for solving non-convex value function POMDPs, but do we really need to do all that work? The current version of the paper does not answer this question to me. Also, follow up question would be exactly what situation do I want to use the methodology proposed by this paper vs the existing methods. In terms of critisim of significance, the above points can be summarized as why should I care about this method when I do not see the results on problem the method is supposedly designed for.
- I am quite not convinced by the experimental results of this paper. The paper sets to solve POMDP problem with non-convex value function. To motivate the case for their solution the examples of POMDP problem with non-convex value functions used are: (a) surveillance in museums with thresholded rewards; (b) privacy preserving data collection. So then the first question is when the case we are trying to solve are above two, why is there not a single experiment on such a setting, not even a simulated one? This basically makes the experiments section not quite useful.
ARR_2022_331_review
ARR_2022
- While the language has been improved, there are still some awkward phrases. I suggest the authors have the paper reviewed by a native English speaker. 1) Line 29: "To support the GEC study...". Your study or GEC in general? Maybe you mean "To support GEC research/development/solutions"? 2) Line 53: "Because, obviously, there are usually multiple acceptable references with close meanings for an incorrect sentence, as illustrated by the example in Table 1." This is not a well-formed sentence. Rewrite or attach to the previous one. 3) Line 59: Choose either "the model will be unfairly *penalised*" or "*performance* will be unfairly underestimated". 4) Line 83: "... for detailed illustration". ??? 5) Line 189: "To facilitate illustration, our guidelines adopt a two-tier hierarchical error taxonomy..." You said earlier that you adopted the "direct re-rewriting" approach so why does your annotation guidelines provide a taxonomy or errors? Is it just to make sure that all ocurrences of the same types of errors are handled equally by all annotators? Weren't they free to correct the sentences in any way they wanted as you stated in lines 178-180? 6) Line 264: "We attribute this to our strict control of the over-correction phenomenon." What do you mean exactly? The original annotation considered some sentences to be erroneous while your guidelines did not? 7) Line 310: "Since it is usually ..." This is not a well-formed sentence. Rewrite or attach to the previous one. 8) Line 399: "... and only use the erroneous part for training" Do you mean you discard correct sentences? As it stands, it sounds as if you only kept the incorrect sentences without their corrections. You might want to make this clearer. 9) "... which does not need error-coded annotation". This is not exactly so. ERRANT computes P, R and F from M2 files containing span-level annotations. For English, it is able to automatically generate these annotations from parallel text using an alignment and edit extraction algorithm. In the case of Chinese, you did this yourself. So while it is not necessary to manually annotate the error spans, you do need to extract them somehow before ERRANT can compute the measures. 10) Table 5: "For calculating the human performance, each submitted result is considered as a sample if an annotator submits multiple results." I am afraid this does not clearly explain how human performance was computed. Each annotator against the rest? Averaged across all of them? How are multiple corrections from a single annotator handled? If you compared each annotation to the rest but the systems were compared to all the annotations, then I believe human evaluation is an underestimation. This is still not clear. 11) Line 514: "The word-order errors can be identified by heuristic rules following Hinson et al. (2020)." Did you classify the errors in the M2 files before feeding them into ERRANT? 12) Line 544: "... we remove all extra references if a sentence has more than 2 gold-standard references". Do you remove them randomly or sequentially? 13) TYPOS/ERRORS: "them" -> "this approach"? ( line 72), "both formal/informal" -> "both formal and informal" (line 81), "supplement" -> "supply"? ( lines 89, 867), "from total" -> "from *the* total" (line 127), "Finally, we have obtained 7,137 sentences" -> "In the end, we obtained 7,137 sentences" (line 138), "suffers from" -> "poses" (line 155), "illustration" -> "annotation"? ( line 189), "Golden" -> "Gold" (lines 212, 220, 221, 317), "sentence numbers" -> "number of sentences" (Table 3 caption), "numbers (proportion)" -> "number (proportion)" (Table 3 caption), "averaged character numbers" -> "average number of characters" (Table 3 caption), "averaged edit numbers" -> "average number of edits" (Table 3 caption), "averaged reference numbers" -> "average number of references" (Table 3 caption), "in the parenthesis of the..." -> "in parentheses in the..." (Table 3 caption), "previous" -> "original"? ( line 262), "use" (delete, line 270), "in *the* re-annotated" (line 271), "twice of that" -> "twice that" (line 273), "edit number" -> "number of edits" (line 281), "the sentence length" -> "sentence length" (line 282), "numbers" -> "number" (lines 283, 297), "numbers" -> "the number" (line 295), "Same" -> "Identical" (line 298), "calculated" -> "counted" (line 299), "the different" -> "different" (Figure 1 caption), "reference number" -> "number of references" (line 305), "for" -> "to" (line 307), "the descending" -> "descending" (line 326), "sentence numbers" -> "number of sentences" (line 327), "It" -> "This" (line 331), "annotate" -> "annotated" (Figure 2 caption), "limitation" -> "limitations" (line 343), "SOTA" -> "state-of-the-art (SOTA)" (line 353), "these" -> "this" (line 369), "where" -> "on which" (line 393), "hugging face" -> "Hugging Face" (431), "these" -> "this" (line 464), "The" -> "A" (line 466), "reference number" -> "the number of references" (Figure 3 caption), "start" -> "have started" (line 571), "will be" -> "are" (line 863), "false" -> "incorrect"? ( line 865).
- While the language has been improved, there are still some awkward phrases. I suggest the authors have the paper reviewed by a native English speaker.
yCAigmDGVy
ICLR_2025
1. As the paper primarily focuses on applying quantum computing to global Lipschitz constant estimation, it is uncertain whether the ICLR community will find this topic compelling. 2. The paper lacks a discussion of the theoretical guarantee on the approximation ratio of the hierarchical strategy relative to the global optimum of the original QUBO. 3. The experimental results are derived entirely from simulations under ideal conditions, without consideration for practical aspects of quantum devices such as finite shots, device noise, and limited coherence time. These non-negligible imperfections could significantly impact the quality of solutions obtained from quantum algorithms in practice.
2. The paper lacks a discussion of the theoretical guarantee on the approximation ratio of the hierarchical strategy relative to the global optimum of the original QUBO.
ICLR_2022_2810
ICLR_2022
The clarity of the writing could be improved substantially. Descriptions are often vague, which makes the technical details harder to understand. I think it's fine to give high-level intuitions separate from low-level details, but the current writing invites confusion. For example, at the start of Section 3, the references to buffers and clusters are vague. The text refers readers to where these concepts are described, but the high-level description doesn't really give a clear picture, making the text that follows harder to understand. Ideas are not always presented clearly. For example: may only exploit a small part of it, making most of the goals pointless.``` - Along the same lines, at the start of the Experiments section, when reading ```the ability of DisTop to select skills to learn``` I am left to wonder what this "ability" and "selection" refers to. This is not a criticism of word choice. The issue is that the previous section did not set up these ideas. - Sections of the results do not seem to actually address the experimental question they are motivated by (that is, the question at the paragraph header). In general, this paper tends to draw conclusions that seem only speculatively supported by the results. - Overall, the paper is not particularly easy to follow. The presentation lacks a clear intuition for how the pieces fit together and the experiments have little to hang on to as a result. - The conclusions drawn from the experiments are not particularly convincing. While there is some positive validation, demonstration of the *topology* learning's success is lacking. There are some portions of the appendix that get at this, but the analysis feels incomplete. Personally, I am much more convinced by a demonstration that the underlying pieces of the algorithm are viable than by seeing that, when they are all put together, the training curves look better. ### Questions/Comments: - The second paragraph of 2.1 is hard to follow. If the technical details are important, it may make more sense to work them into a different area of the text. - The same applies to 2.2. The technical details are hard to follow. - You claim "In consequence, we avoid using a hand engineered environment-specific scheduling" on page 4. Does this suggest that the $\beta$ parameter and the $\omega'$ update rate are environment independent? - Why do DisTop and Skew-Fit have such different starting distances for Visual Pusher (Figure 1, left middle)? - It is somewhat strange phrasing to describe Skew-Fit as having "favorite" environments (page 6).
- Overall, the paper is not particularly easy to follow. The presentation lacks a clear intuition for how the pieces fit together and the experiments have little to hang on to as a result.
NIPS_2018_537
NIPS_2018
1. The motivation or the need for this technique is unclear. It would have been great to have some intuition why replacing last layer of ResNets by capsule projection layer is necessary and why should it work. 2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables. 3. Even though the technique is novel, the contributions of this paper is not very significant. Also, there is not much attempt in contrasting this technique with traditional classification or manifold learning literature. 4. There are a lot of missing entries in the experimental results table and it is not clear why. Questions for authors: Why is the input feature vector from backbone network needed to be decomposed into the capsule subspace component and also its component perpendicular to the subspace? What shortcomings in the current techniques lead to such a design? What purpose is the component perpendicular to the subspace serving? The authors state that this component appears in the gradient and helps in detecting novel characteristics. However, the gradient (Eq 3) does not only contain the perpendicular component but also another term x^T W_l^{+T} - is not this transformation similar to P_l x (the projection to the subspace). How to interpret this term in the gradient? Moreover, should we interpret the projection onto subspace as a dimensionality reduction technique? If so, how does it compare with standard dimensionality reduction techniques or a simple dimension-reducing matrix transformation? What does "grouping neurons to form capsules" mean - any reference or explanation would be useful? Any insights into why orthogonal projection is needed will be helpful. Are there any reason why subspace dimension c was chosen to be in smaller ranges apart from computational aspect/independence assumption? Is it possible that a larger c can lead to better separability? Regarding experiments, it will be good to have baselines like densenet, capsule networks (Dynamic routing between capsules, Sabour et al NIPS 2017 - they have also tried out on CIFAR10). Moreover it will be interesting to see if the capsule projection layer is working well only if the backbone network is a ResNet type network or does it help even when backbone is InceptionNet or VGGNet/AlexNet.
2. The paper is not very well-written, possibly hurriedly written, so not easy to read. A lot is left desired in presentation and formatting, especially in figures/tables.
IcYDRzcccP
ICLR_2025
1. The Sec. 3.1 for 3D Gaussians generation seems to just follow the previous work, Luciddreamer. Please correct me if there is any additional novel effort for this part. 2. Reprojecting the point cloud from the single view image to different views always leads to holes, distortion, and other artifacts. A common idea is to incorporate a generative model to fill in the missing information, but it seems the author does not include anything related to this point. If it is not necessary, please include more discussion of why. 3. Minimizing the reprojection error for the predicted motions seems to be feasible in Sec. 3.2. However, this optimization is still based on 2D information and can hardly achieve 3D consistency. For example, in Structure from Motion, such a method is often used to refine the estimated camera poses and the positions of the point clouds, but it does not consider the inter-relationship between the 3D positions, so I am not sure that 3D consistency can be achieved with this method. 4. Please provide more implementation details of the 3D Motion Optimization Module. It seems to be vague now. 5. The Sec. 3.3 for 4D Gaussians generation seems to just follow the previous work. Please correct me if there is any additional novel effort for this part. 6. The experimental results would be more solid and comprehensive if more datasets and baselines were included. For example, the author mentions that simply using an animation-based method cannot achieve satisfying results, so similar baselines could be included. 7. The paper presentation could be more compact if the parts that just follow previous work were shortened while emphasizing the novel parts.
1. The Sec. 3.1 for 3D Gaussians generation seems to just follow the previous work, Luciddreamer. Please correct me if there is any additional novel effort for this part.
NIPS_2016_287
NIPS_2016
weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the details comments below.1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art. 3. This work combines ideas from [4], [5], and [14] so it is very important to clearly state the relationships and differences with these earlier works. 4. End of Sec. 2., there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters. 5. Sec. 3.2 and Sec. 3.3: The real data experiments study only the proposed method and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion on the real data results. Adding some quantitative studies will be more helpful to the readers and researchers in this area. 6. Possible Typo? Line 131: "wants to transition".
4. End of Sec.2., there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters.
NIPS_2021_616
NIPS_2021
The authors discuss two limitations: first, this paper focuses only on methods with explicit negatives. This is not a problem for me since it is okay for an analysis paper to focus on one type of methods. The second limitation is that the datasets used in the experiments are not fully realistic. This again is not an issue for me, since 1) the datasets used in this paper are variants of ImageNet and MNIST which are both realistic, and 2) fully realistic datasets will make it hard to control multiple aspects of variation with precision. I agree with the authors' judgement that there is no immediate societal impact.
1) the datasets used in this paper are variants of ImageNet and MNIST which are both realistic, and
ICLR_2023_1980
ICLR_2023
Motivated by the fact that local learning can limit memory when training the network and the adaptive nature of each individual block, the paper extends local learning to the ResNet-50 to handle large datasets. However, it seems that the results of the paper do not demonstrate the benefits of doing so. The detailed weaknesses are as follows: 1)The method proposed in the paper essentially differs very little from the traditional BP method. The main contribution of the paper is adding the stop gradient operation between blocks, which appears to be less innovative. 2)The local learning strategy is not superior to the BP optimization method. In addition, the model is more sensitive to each block after the model is blocked, especially the first block. More additional corrections are needed to improve the performance and robustness of the model, although still lower than BP's method. 3)Experimental results show that simultaneous blockwise training is better than sequential blockwise training. But the simultaneous blockwise training strategy cannot limit memory. 4)The blockwise training strategy relies on a special network structure like the block structure of the ResNet-50 model. 5)There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title.
5)There are some writing errors in the paper, such as "informative informative" on page 5 and "performance" on page 1, which lacks a title.