paper_id | venue | focused_review | point |
---|---|---|---|
NIPS_2019_263 | NIPS_2019 | --- Weaknesses of the evaluation in general: * 4th loss (active fooling): The concatenation of 4 images into one and the choice of only one pair of classes make me doubt whether the motivation aligns well with the implementation, so 1) the presentation should be clearer or 2) it should be more clearly shown that it does generalize to the initial intuition about any two objects in the same image. The 2nd option might be accomplished by filtering an existing dataset to create a new one that only contains images with pairs of classes and trying to swap those classes (in the same non-composite image). * I understand how LRP_T works and why it might be a good idea in general, but it seems new. Is it new? How does it relate to prior work? Would the original LRP work as the basis or target of adversarial attacks? What can we say about the susceptibility of LRP to these attacks based on the LRP_T results? * How hard is it to find examples that illustrate the loss principles clearly like those presented in the paper and the supplement? Weaknesses of the proposed FSR metric specifically: * L195: Why does the norm need to be changed for the center mass version of FSR? * The metric should measure how different the explanations are before and after adversarial manipulation. It does this indirectly by measuring losses that capture similar but more specific intuitions. It would be better to measure the difference in heatmaps before and after explicitly. This could be done using something like the rank correlation metric used in Grad-CAM (a minimal sketch of such a comparison follows this review). I think this would be a clearly superior metric because it would be more direct. * Which 10k images were used to compute FSR? Will the set be released? Philosophical and presentation weaknesses: * L248: What does "wrong" mean here? The paper gets into some of the nuance of this position at L255, but it would be helpful to clarify what is meant by a good/bad/wrong explanation before using those concepts. * L255: Even though this is an interesting argument that forwards the discussion, I'm not sure I really buy it. If this was an attention layer that acted as a bottleneck in the CNN architecture then I think I'd be forced to buy this argument. As it is, I'm not convinced one way or the other. It seems plausible, but how do you know that the final representation fed to the classifier has no information outside the highlighted area? Furthermore, even if there is a very small amount of attention on relevant parts, that might be enough. * The random parameterization sanity check from [25] also changes the model parameters to evaluate visualizations. This particular experiment should be emphasized more because it is the only other case I can think of which considers how explanations change as a function of model parameters (other than considering completely different models). To be clear, the experiment in [25] is different from what is proposed here, I just think it provides interesting contrast to these experiments. The claim here is that the explanations change too much while the claim there is that they don't change enough. Final Justification --- Quality - There are a number of minor weaknesses in the evaluation that together make me unsure about how easy it is to perform this kind of attack and how generalizable the attack is. I think the experiments do clearly establish that the attack is possible. Clarity - The presentation is pretty clear. I didn't have to work hard to understand any of it. 
Originality - I haven't seen an attack on interpreters via model manipulation before. Significance - This is interesting because it establishes a new way to evaluate models and/or interpreters. The paper is a bit lacking in scientific quality in a number of minor ways, but the other factors clearly make up for that defect. | * L248: What does "wrong" mean here? The paper gets into some of the nuance of this position at L255, but it would be helpful to clarify what is meant by a good/bad/wrong explanation before using those concepts. |
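To make the rank-correlation comparison suggested above concrete, here is a minimal sketch (heatmaps assumed to be NumPy arrays; the function name and the 7×7 toy size are illustrative, not the paper's FSR code):

```python
import numpy as np
from scipy.stats import spearmanr

def heatmap_rank_similarity(h_before: np.ndarray, h_after: np.ndarray) -> float:
    """Spearman rank correlation between two relevance heatmaps of equal shape."""
    rho, _ = spearmanr(h_before.ravel(), h_after.ravel())
    return float(rho)

# Toy usage: an unchanged heatmap scores 1.0; an unrelated one scores near 0.
h = np.random.rand(7, 7)
print(heatmap_rank_similarity(h, h))                     # 1.0
print(heatmap_rank_similarity(h, np.random.rand(7, 7)))  # ~0.0
```

A drop in this score after fine-tuning the model would directly quantify how much the explanation changed, independent of any particular fooling loss.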
NIPS_2020_1228 | NIPS_2020 | - The method section does not look self-contained and lacks descriptions of some key components. In particular: * What is Eq.(9) for? Why "the SL is the negative logarithm of a polynomial in \theta" -- where is the "negative logarithm" in Eq.(9)? * Eq.(9) is not practically tractable. It looks like its practical implementation is discussed in the "Evaluating the Semantic Loss" part (L.140), which involves the Weighted Model Count (WMC) and knowledge compilation (KC). However, no details about KC are presented. Considering the importance of the component in the whole proposed approach, I feel it's very necessary to clearly present the details and make the approach self-contained. - The proposed approach essentially treats the structured constraints (a logical rule) as part of the discriminator that supervises the training of the generator. This idea does not look new -- one can simply treat the constraints as an energy function and plug it into energy-based GANs (https://arxiv.org/abs/1609.03126). Modeling structured constraints as a GAN discriminator to train the generative model has also been studied in [15] (which also discussed the relation between the structured approach and energy-based GANs). Though the authors derive the formula from a perspective of semantic loss, it is unclear what the exact difference from the previous work is. - The paper claims better results in the Molecule generation experiment (Table.3). However, it looks like adding the proposed constrained method actually yields lower validity and diversity. | - The paper claims better results in the Molecule generation experiment (Table.3). However, it looks like adding the proposed constrained method actually yields lower validity and diversity. |
ARR_2022_40_review | ARR_2022 | - Although the authors state that components can be replaced by other models for flexibility, they did not try any change or alternative in the paper to prove the robustness of the proposed framework.
- Did the authors try using BlenderBot vs. BlenderBot 2.0 with incorporated knowledge? It would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results - Lines 159-162: The authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if the authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system?
- Line 216: How many paraphrases were created for each question, and what was their quality rate?
- Line 237: How critical was the finetuning process over the SQuad and CommonsenseQA models?
- Line 254-257: How many templates were manually created? - Line 265: How are the future utterances used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughts) to learn the generation of the transition sentence? And is the transition sentence one taken from the list of manual templates? (In general, section 2.2.2 is the one I found least clear) - Merge SGD: Did the authors select the TOD dialogue randomly from those containing the same intent/topic? Did you try some dialogue embedding from the ODD part and try to select a TOD dialogue with a similar dialogue embedding? If not, this could be an idea to improve the quality of the dataset. This could also allow the usage of the lexicalized version of the SGD and avoid the generation of placeholders in the responses - Line 324: How are the repeated dialogues detected? - Line 356: How, and how many, sentences are finally selected from the 120 generated sentences?
- Lines 402-404: How are the additional transitions generated? Using the T5 model? How many times were the manual sentences selected vs. the paraphrased ones?
- The paper "Fusing task-oriented and open-domain dialogues in conversational agents" is not included in the background section, and it is important in the context of similar datasets - Probably the word salesman is misleading since, by reading some of the generated dialogues in the appendices, it is not clear that the salesman agent is in fact selling something. It seems sometimes that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker) | - It is not clear if the authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system? |
NIPS_2021_2191 | NIPS_2021 | of the paper: [Strengths]
The problem is relevant.
Good ablation study.
[Weaknesses] - The statement in the intro about bottom-up methods is not necessarily true (Line 28). Bottom-up methods do have receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Several parts of the methodology are not clear. - PPG outputs a complete pose relative to every part’s center. Thus O_{up} should contain the offset for every keypoint with respect to the center of the upper part. In Eq.2 of the supplementary material, it seems that O_{up} is trained to output the offset for the keypoints that are not farther than a distance r to the center of the corresponding part. How are the ground truths actually built? If it is the latter, how can the network parts responsible for each part predict all the keypoints of the pose? - Line 179, what did the authors mean by saying that the fully connected layers predict the ground-truth in addition to the offsets? - Is \delta P_{j} a single offset for the center of that part or does it contain distinct offsets for every keypoint? - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G, X, and W to better understand what DGCN is doing. - Experiments can be improved: - For instance, the bottom-up method [9] has reported results on the CrowdPose dataset outperforming all methods in Table 4 with a ResNet-50 (including this paper's). It would be nice to include it in the tables - It would be nice to evaluate the performance of their method on the standard MS COCO dataset to see if there is a drop in performance in easy (non-occluded) settings. - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. - Can we visualize G, the dynamic graph, as it changes through DGCN? It might give insight into what the network uses to predict keypoints, especially the invisible ones.
[Minor comments]
In Algorithm 1 line 8 in Suppl Material, did the authors mean Eq 11 instead of Eq.4?
Fig1 and Fig2 in supplementary are the same
Spelling Mistake line 93: It it requires…
What does ‘… updated as model parameters’ mean in line 176
Do the authors mean Equation 7 in line 212?
The authors have talked about limitations in Section 5 and have mentioned that there are no negative societal impacts. | - No study of inference time. Since this is a pose estimation method that is direct and does not require detection or keypoint grouping, it is worth comparing its inference speed to previous top-down and bottom-up pose estimation methods. |
50RNY6uM2Q | ICLR_2025 | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object detection only do some information enhancement on the input side, while many MLMs can already accomplish the object detection task by themselves nowadays.
2. The scores achieved on both the MMBench and SEEDBench datasets, while respectable, are not compared to some of the more competitive models. I identified MMB as version 1 and SEEDBench as Avg based on the scores of Qwen-VL and MiniCPM-V2, and there are a number of scores on both leaderboards that are higher than the scores of the MG-LLaVA work, e.g., Honeybee (Cha et al., 2024) and AllSeeing-v2 (Wang et al., 2024) based on Vicuna-13b at MMB-test, and you can also find a lot of similar models with higher scores on the same base model.
3. In addition to perception benchmarks, this problem can also be found in Visual QA and Video QA. For example, on the MSRVTT-QA dataset there are already many models with very high scores in 2024. Some of them also use methods to improve the model's ability on fine-grained tasks, e.g., Flash-VStream (Zhang et al., 2024) and Monkey (Li et al., 2023). The article does not seem to compare against these new 2024 models.
To summarize, I think the approach proposed in the article is valid, but MG-LLaVA does not do the job of making a difference, either from an innovation perspective or from a performance perspective.
[1] Cha, Junbum, et al. "Honeybee: Locality-enhanced projector for multimodal llm." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024.
[2] Wang, Weiyun, et al. "The all-seeing project v2: Towards general relation comprehension of the open world." *arXiv preprint arXiv:2402.19474* (2024).
[3] Zhang, Haoji, et al. "Flash-VStream: Memory-Based Real-Time Understanding for Long Video Streams." *arXiv preprint arXiv:2406.08085* (2024).
[4] Li, Zhang, et al. "Monkey: Image resolution and text label are important things for large multi-modal models." *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition*. 2024. | 1. As mentioned in the article itself, the introduction of multi-granularity and multi-scale to enhance model performance is a common approach to convolutional networks, and merely migrating this approach to the field of MLMs is hardly an innovative contribution. Some of the algorithms used in the article from object detection only do some information enhancement on the input side, while many MLMs can already accomplish the object detection task by themselves nowadays. |
zpayaLaUhL | EMNLP_2023 | - Limited Experiments
- Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2.
- The input for the analysis is limited to only 100 or 200 samples from wikitext-2. It would be desirable to experiment with a larger number of samples or with datasets from various domains.
- The findings are interesting, but there is no statement of what the contribution is, or of its practical impact on the community or practical use (Question A).
- Results contradicting those reported in existing studies (Clark+'19) are observed but not discussed (Question B).
- I do not really agree with the argument in Section 5 that word embeddings contribute to relative position-dependent attention patterns. The target head is in layer 8, and the changes caused by large deviations from the normal input, such as feeding only position embeddings, are quite large by layer 8. The resulting behavior is likely too far from normal conditions to be used to explain the behavior under normal conditions. Word embeddings may simply be a prerequisite for the model to work properly rather than playing an important role in certain attention patterns.
- Introduction says to analyze "why attention depends on relative position," but I cannot find content that adequately answers this question.
- There is no connection or discussion of relative position embedding, which is typically employed in recent Transformer models in place of learnable APE (Question C). | - Limited Experiments - Most of the experiments (excluding Section 4.1.1) are limited to RoBERTa-base only, and it is unclear if the results can be generalized to other models adopting learnable APEs. It is important to investigate whether the results can be generalized to differences in model size, objective function, and architecture (i.e., encoder, encoder-decoder, or decoder). In particular, it is worthwhile to include more analysis and discussion for GPT-2. For example, I would like to see the results of Figure 2 for GPT-2. |
ACL_2017_148_review | ACL_2017 | - The goal of your paper is not entirely clear. I had to read the paper 4 times and I still do not understand what you are talking about!
- The article is highly ambiguous about what it addresses - machine comprehension or text readability for humans - you miss important work in the readability field - Section 2.2 has a completely unrelated discussion of theoretical topics.
- I have the feeling that this paper is trying to answer too many questions at the same time, thereby making itself quite weak. Questions such as “does text readability have an impact on RC datasets” should be analyzed separately from all these prerequisite skills.
- General Discussion: - The title is a bit ambiguous, it would be good to clarify that you are referring to machine comprehension of text, and not human reading comprehension, because “reading comprehension” and “readability” usually mean that.
- You say that your “dataset analysis suggested that the readability of RC datasets does not directly affect the question difficulty”, but this depends on the method/features used for answer detection, e.g. if you use POS/dependency parse features.
- You need to proofread the English of your paper, there are some important omissions, like “the question is easy to solve simply look..” on page 1.
- How do you annotate datasets with “metrics”??
- Here you are mixing machine reading comprehension of texts and human reading comprehension of texts, which, although somewhat similar, are also quite different, and also large areas.
- “readability of text” is not “difficulty of reading contents”. Check this: DuBay, W.H. 2004. The Principles of Readability. Costa Mesa, CA: Impact information. - it would be good if you put more pointers distinguishing your work from readability of questions for humans, because this article is highly ambiguous.
E.g. on page 1 “These two examples show that the readability of the text does not necessarily correlate with the difficulty of the questions” you should add “for machine comprehension” - Section 3.1. - Again: are you referring to such skills for humans or for machines? If for machines, why are you citing papers for humans, and how sure are you they are referring to machines too?
- How many questions did the annotators have to annotate? Was it clear to the annotators that they were annotating the questions with machines in mind, not people? | - Again: are you referring to such skills for humans or for machines? If for machines, why are you citing papers for humans, and how sure are you they are referring to machines too? |
NIPS_2017_110 | NIPS_2017 | weakness of this paper in my opinion (and one that does not seem to be resolved in Schiratti et al., 2015 either), is that it makes no attempt to answer this question, either theoretically, or by comparing the model with a classical longitudinal approach.
If we take the advantage of the manifold approach on faith, then this paper certainly presents a highly useful extension to the method presented in Schiratti et al. (2015). The added flexibility is very welcome, and allows for modelling a wider variety of trajectories. It does seem that only a single breakpoint was tried in the application to renal cancer data; this seems appropriate given this dataset, but it would have been nice to have an application to a case where more than one breakpoint is advantageous (even if it is in the simulated data). Similarly, the authors point out that the model is general and can deal with trajectories in more than one dimensions, but do not demonstrate this on an applied example.
(As a side note, it would be interesting to see this approach applied to drug response data, such as the Sanger Genomics of Drug Sensitivity in Cancer project).
Overall, the paper is well-written, although some parts clearly require a background in working on manifolds. The work presented extends Schiratti et al. (2015) in a useful way, making it applicable to a wider variety of datasets.
Minor comments:
- In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth).
- The paper has a number of typos, here are some that caught my eyes: p.1 l.36 "our model amounts to estimate an average trajectory", p.4 l.142 "asymptotic constrains", p.7 l. 245 "the biggest the sample size", p.7l.257 "a Symetric Random Walk", p.8 l.269 "the escapement of a patient".
- Section 2.2., it is stated that n=2, but n is the number of patients; I believe the authors meant m=2.
- p.4, l.154 describes a particular choice of shift and scaling, and the authors state that "this [choice] is the more appropriate.", but neglect to explain why.
- p.5, l.164, "must be null" - should this be "must be zero"?
- On parameter estimation, the authors are no doubt aware that in classical mixed models, a popular estimation technique is maximum likelihood via REML. While my intuition is that either the existence of breakpoints or the restriction to a manifold makes REML impossible, I was wondering if the authors could comment on this.
- In the simulation study, the authors state that the standard deviation of the noise is 3, but judging from the observations in the plot compared to the true trajectories, this is actually not a very high noise value. It would be good to study the behaviour of the model under higher noise.
- For Figure 2, I think the x axis needs to show the scale of the trajectories, as well as a label for the unit.
- For Figure 3, labels for the y axes are missing.
- It would have been useful to compare the proposed extension with the original approach from Schiratti et al. (2015), even if only on the simulated data. | - In the introduction, the second paragraph talks about modelling curves, but it is not immediately obvious what is being modelled (presumably tumour growth). |
NIPS_2019_1420 | NIPS_2019 | Weakness - Not completely sure about the meaning of the results of certain experiments and the paper refuses to hypothesize any explanations. Other results show very little difference between the alternatives and unclear whether they are significant. - Lot of result description is needlessly convoluted e.g. "less likely to produce less easier to teach and less structured languages when no listener gets reset". ** Suggestions - A related idea of speaker-listener communication from a teachability perspective was studied in [1] - In light of [2], it's pertinent that we check that useful communication is actually happening. The differences in figures seem too small. Although the topography plots do seem to indicate something reasonable going on. [1]: https://arxiv.org/abs/1806.06464 [2]: https://arxiv.org/abs/1903.05168 | - Lot of result description is needlessly convoluted e.g. "less likely to produce less easier to teach and less structured languages when no listener gets reset". ** Suggestions - A related idea of speaker-listener communication from a teachability perspective was studied in [1] - In light of [2], it's pertinent that we check that useful communication is actually happening. The differences in figures seem too small. Although the topography plots do seem to indicate something reasonable going on. [1]: https://arxiv.org/abs/1806.06464 [2]: https://arxiv.org/abs/1903.05168 |
NIPS_2022_728 | NIPS_2022 | Weakness 1. The setup of capturing strategy is complicated and is not easy for applications in real life. To initialize the canonical space, the first stage is to capture the static state using a moving camera. Then to model motions, the second stage is to capture dynamic states using a few (4) fixed cameras. Such a 2-stage capturing is not straightforward. 2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively. 3. The quantitative experiments (Tab. 2 and Tab. 3) show that the fidelity of rendered results highly depends on the 2-stage training strategy. In a general capturing case, other methods can obtain more accurate rendered images. Oppositely, Tab. 2 shows that it is not easy to fuse the designed 2-stage training strategy into current mainstream frameworks, such as D-NeRF, Nerfies and HyperNeRF. It verifies that the 2-stages training strategy is not a general design for dynamic NeRF.
[1] Innmann, Matthias, Michael Zollhöfer, Matthias Nießner, Christian Theobalt, and Marc Stamminger. "Volumedeform: Real-time volumetric non-rigid reconstruction." In European conference on computer vision (ECCV), pp. 362-379. Springer, Cham, 2016. | 2. To utilize a volumetric representation in the deformation field is not a novel idea. In the real-time dynamic reconstruction task, VolumeDeform [1] has proposed volumetric grids to encode both the geometry and motion, respectively. |
NIPS_2018_606 | NIPS_2018 | , I tend to vote in favor of this paper. * Detailed remarks: - The analysis in Figure 4 is very interesting. What is a possible explanation for the behaviour in Figure 4(d), showing that the number of function evaluations automatically increases with the epochs? Consequently, how is it possible to control the tradeoff between accuracy and computation time if, automatically, the computation time increases along the training? In this direction, I think it would be nice to see how classification accuracy evolves (e.g. on MNIST) with the precision required. - In Figure 6, an interpolation experiment shows that the probability distribution evolves smoothly over time, which is an indirect way to interpret it. Since this is a low-dimensional (2D) case, wouldn't it be possible to directly analyze the learnt ODE function, by looking at its fixed points and their stability (a small numerical sketch of such an analysis follows this review)? - For the continuous-time time-series model, the clarity of subsec 6.1 should be improved. Regarding the autoencoder formulation, why is an RNN used for the encoder, and not an ODE-like layer? Indeed, the authors argue that RNNs have trouble coping with such time-series, so this might also be the case in the encoding part. - Do the authors plan to share the code of the experiments (not only of the main module)? - I think it would be better if notations in appendix A followed the notations of the main paper. - Section 3 and Section 4 are slightly redundant: maybe putting the first paragraph of sec 4 in sec 3 and putting the remainder of sec 4 before section 3 would help. | - Section 3 and Section 4 are slightly redundant: maybe putting the first paragraph of sec 4 in sec 3 and putting the remainder of sec 4 before section 3 would help. |
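A small numerical sketch of the fixed-point analysis suggested above (the dynamics function f below is a toy stand-in; in practice one would wrap the trained 2D ODE network):

```python
import numpy as np
from scipy.optimize import fsolve

def f(z):
    """Toy stand-in for a learnt 2D dynamics function dz/dt = f(z)."""
    x, y = z
    return np.array([y, -x - 0.5 * y])  # damped oscillator

def numerical_jacobian(func, z, eps=1e-6):
    """Central-difference Jacobian of func at z."""
    J = np.zeros((len(z), len(z)))
    for i in range(len(z)):
        dz = np.zeros_like(z); dz[i] = eps
        J[:, i] = (func(z + dz) - func(z - dz)) / (2 * eps)
    return J

z_star = fsolve(f, x0=np.array([0.1, 0.1]))                 # locate a fixed point
eigvals = np.linalg.eigvals(numerical_jacobian(f, z_star))  # linearize around it
print(z_star, eigvals)  # eigenvalues with negative real parts => stable fixed point
```

Applied to the learnt f, the location and stability of its fixed points would directly characterize what the 2D latent ODE has learned.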
OBUQNASaWw | ICLR_2025 | 1.It’s suggested that authors give a comprehensive survey on adaptive sparse training method. Although authors claim “Previous works have only managed to solve one, or perhaps two of these challenges”, can authors give a comprehensive comparison of existing methods?
2.Considering different clients train different submodels, the server also maintains a full model. So can the sparsity of clients be different to apply for heterogeneous hardware?
3.Can authors further explain why clients should achieves consensus on the clients’ sparse model masks when server always maintain a full model.
4.What’s the definition of the model plasticity?
5.In experimental section, authors only compared with two baselines, there’re several works also focus on the same questions, for example [1,2,3], so it’s suggested to add more experimental to show the effectiveness of proposed method.
6.Considering the model architecture, authors only show the effectiveness on convolutional network, what’s the performance on other architecture, for example Transformer?
[1]Stripelis, Dimitris, et al. "Federated progressive sparsification (purge, merge, tune)+." arXiv preprint arXiv:2204.12430 (2022).
[2]Wang, Yangyang, et al. "Theoretical convergence guaranteed resource-adaptive federated learning with mixed heterogeneity." Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023.
[3]Zhou, Hanhan, et al. "Every parameter matters: Ensuring the convergence of federated learning with dynamic heterogeneous models reduction." Advances in Neural Information Processing Systems 36 (2024). | 5.In experimental section, authors only compared with two baselines, there’re several works also focus on the same questions, for example [1,2,3], so it’s suggested to add more experimental to show the effectiveness of proposed method. |
NIPS_2018_641 | NIPS_2018 | weakness. First, the main result, Corollary 10, is not very strong. It is asymptotic, and requires the iterates to lie in a "good" set of regular parameters; the condition on the iterates was not checked. Corollary 10 only requires a lower bound on the regularization parameter; however, if the parameter is set too large such that the regularization term is dominating, then the output will be statistically meaningless. Second, there is an obvious gap between the interpretation and what has been proved. Even if Corollary 10 holds under more general and acceptable conditions, it only says that uncertainty sampling iterates along the descent directions of the expected 0-1 loss. I don't think that one may claim that uncertainty sampling is SGD merely based on Corollary 10. Furthermore, existing results for SGD require some regularity conditions on the objective function, and the learning rate should be chosen properly with respect to the conditions; as the conditions were not checked for the expected 0-1 loss and the "learning rate" in uncertainty sampling was not specified, it seems not very rigorous to explain empirical observations based on existing results of SGD. The paper is overall well-structured. I appreciate the authors' trying providing some intuitive explanations of the proofs, though there are some over-simplifications in my view. The writing looks very hasty; there are many typos and minor grammar mistakes. I would say that this work is a good starting point for an interesting research direction, but currently not very sufficient for publication. Other comments: 1. ln. 52: Not all convex programs can be efficiently solved. See, e.g. "Gradient methods for minimizing composite functions" by Yu. Nesterov. 2. ln. 55: I don't see why the regularized empirical risk minimizer will converge to the risk minimizer without any condition on, for example, the regularization parameter. 3. ln. 180--182: Corollar 10 only shows that uncertainty sampling moves in descent directions of the expected 0-1 loss; this does not necessarily mean that uncertainty sampling is not minimizing the expected convex surrogate. 4. ln. 182--184: Non-convexity may not be an issue for the SGD to converge, if the function Z has some good properties. 5. The proofs in the supplementary material are too terse. | 3. ln. 180--182: Corollar 10 only shows that uncertainty sampling moves in descent directions of the expected 0-1 loss; this does not necessarily mean that uncertainty sampling is not minimizing the expected convex surrogate. |
NIPS_2017_401 | NIPS_2017 | Weakness:
1. There are no collaborative games in experiments. It would be interesting to see how the evaluated methods behave in both collaborative and competitive settings.
2. The meta solvers seem to be centralized controllers. The authors should clarify the difference between the meta solvers and the centralized RL where agents share the weights. For instance, Foester et al., Learning to communicate with deep multi-agent reinforcement learning, NIPS 2016.
3. There is not much novelty in the methodology. The proposed meta algorithm is basically a direct extension of existing methods.
4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players.
Initial Evaluation:
This paper offers an analysis of the effectiveness of the policy learning by existing approaches with little extension in two player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility:
Appears to be reproducible. | 4. The proposed metric only works in the case of two players. The authors have not discussed if it can be applied to more players. Initial Evaluation: This paper offers an analysis of the effectiveness of the policy learning by existing approaches with little extension in two player competitive games. However, the authors should clarify the novelty of the proposed approach and other issues raised above. Reproducibility: Appears to be reproducible. |
NIPS_2019_1207 | NIPS_2019 | - Moderate novelty. This paper combines various components proposed in previous work (some of it, it seems, unbeknownst to the authors - see Comment 1): hierarchical/structured optimal transport distances, Wasserstein-Procrustes methods, sample complexity results for Wasserstein/Sinkhorn objectives. Thus, I see the contributions of this paper being essentially: putting together these pieces and solving them cleverly via ADMM. - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: 1 Related Work. My main concern with this paper is its apparent lack of awareness of two very related lines of work. On the one hand, the idea of defining hierarchical OT distances has been explored before in various contexts (e.g., [5], [6] and [7]), and so has leveraging cluster information for structured losses, e.g. [9] and [10] (note that latter of these relies on an ADMM approach too). On the other hand, combining OT with Procrustes alignment has a long history too (e.g, [1]), with recent successful application in high-dimensional problems ([2], [3], [4]). All of these papers solve some version of Eq (4) with orthogonality (or more general constraints), leading to algorithms whose core is identical to Algorithm 1. Given that this paper sits at the intersection of two rich lines of work in the OT literature, I would have expected some effort to contrast their approach, both theoretically and empirically, with all these related methods. 2. Baselines. Related to the point above, any method that does not account for rotations across data domains (e.g., classic Wasserstein distance) is inadequate as a baseline. Comparing to any of the methods [1]-[4] would have been much more informative. In addition, none of the baselines models group structure, which again, would have been easy to remedy by including at least one alternative that does (e.g., [10] or the method of Courty et al, which is cited and mentioned in passing, but not compared against). As for the neuron application, I am not familiar with the DAD method, but the same applies about the lack of comparison to OT-based methods with structure/Procrustes invariance. 3. Conflation of geometric invariance and hierarchical components. Given that this approach combines two independent extensions on the classic OT problem (namely, the hierarchical formulation and the aligment over the stiefel manifold), I would like to understand how important these two are for the applications explored in this work. Yet, no ablation results are provided. A starting point would be to solve the same problem but fixing the transformation T to be the identity, which would provide a lower bound that, when compared against the classic WA, would neatly show the advantage of the hierarchical vs a "flat" classic OT versions of the problem. 4. No runtime results. Since computational efficiency is one of the major contributions touted in the abstract and introduction, I was expecting to see at least empirical and/or a formal convergence/runtime complexity analysis, but neither of these was provided. Since the toy example is relatively small, and no details about the neural population task are provided, the reader is left to wonder about the practical applicability of this framework for real applications. Minor Comments/Typos: - L53. *the* data. - L147. It's not clear to me why (1) is referred to as an update step here. Wrong eqref? 
- Please provide details (size, dimensionality, interpretation) about the neural population datasets, at least on the supplement. Many readers will not be familiar with it. References: * OT-based methods to align in the presence of unitary transformations: [1] Rangarajan et al, "The Softassign Procrustes Matching Algorithm", 1997. [2] Zhang et al, "Earth Moverâs Distance Minimization for Unsupervised Bilingual Lexicon Induction", 2017. [3] Alvarez-Melis et al, "Towards Optimal Transport with Global Invariances", 2019. [4] Grave et al, "Unsupervised Alignment of Embeddings with Wasserstein Procrustes", 2019. *Hierarchical OT methods: [5] Yuorochkin et al, "Hierarhical Optimal Transport for Document Representation". [6] Shmitzer and Schnorr, "A Hierarchical Approach to Optimal Transport", 2013 [7] Dukler et al, "Wasserstein of Wasserstein Loss for Learning Generative Models", 2019 [9] Alvarez-Melis et al, "Structured Optimal Transport", 2018 [10] Das and Lee, "Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching", 2018 | - L53. *the* data.- L147. It's not clear to me why (1) is referred to as an update step here. Wrong eqref? |
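The ablation suggested in major comment 3 above - dropping both the hierarchy and the Stiefel-manifold alignment (i.e., fixing T to the identity) - reduces to classical OT. A minimal sketch with the POT library (point-cloud inputs and uniform weights are assumed; this is not the authors' ADMM code):

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def flat_wasserstein(X: np.ndarray, Y: np.ndarray) -> float:
    """Classical OT cost between two point clouds, with the transform T fixed to identity."""
    a = np.full(len(X), 1.0 / len(X))  # uniform source weights
    b = np.full(len(Y), 1.0 / len(Y))  # uniform target weights
    M = ot.dist(X, Y)                  # pairwise squared Euclidean costs
    return float(ot.emd2(a, b, M))     # exact OT objective value

# toy usage with two random 3-D point clouds
X = np.random.randn(100, 3)
Y = np.random.randn(120, 3) + 1.0
print(flat_wasserstein(X, Y))
```

Comparing this value (and the resulting matching quality) against the full model would isolate how much of the gain comes from the hierarchy versus the alignment.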
NIPS_2018_544 | NIPS_2018 | - the presented results do not give me the confidence to say that this approach is better than any of the others due to a lot of ad-hoc decisions in the paper (e.g. digital identity part of the code vs full code evaluation, the evaluation itself, and the choice of the knn classifier) - the results in table 1 are quite unusual - there is a big gap between the standard autoencoders and the variational methods which makes me ask whether there's something particular about the classifier used (knn) that is a better fit for autoencoders, the particularities of the loss, or the distribution used when training. why was k-nn used? a logical choice would be a more powerful method like svm or a multilayer perceptron. there is no explanation for this big gap - there is no visual comparison of what some of the baseline methods produce as disentangled representations so it's impossible to compare the quality of (dis)entanglement and the semantics behind each factor of variation - the degrees of freedom among features of the code seem binary in this case, therefore it is important which version of vae and beta-vae, as well as infogan are used, but the paper does not provide those details - the method presented can easily be applied on unlabeled data only, and that should have been one point of comparison to the other methods dealing with unlabeled data only - showing whether it works on par with baselines when no labels are used, but that wasn't done. the only trace of that is in figure 3, but that compares the dual and primary accuracy curves (for supervision of 0.0), and does not compare it with other methods - though parts of the paper are easy to understand, in whole it is difficult to get the necessary details of the model and the training procedure (without looking into the appendix, which I admittedly did not do, but then, I want to understand the paper fully (without particular details like hyperparameters) from the main body of the paper). I think the paper would benefit from another writing iteration Questions: - are the features of the code binary? because I didn't find it clear from the paper. if so, then the effect of varying a single code feature is essentially a binary choice, right? did the baseline (beta-)vae and infogan methods use the appropriate distribution in that case? - is there a concrete reason why you decided to apply the model only in a single pass for labelled data, because you could have applied the dual-swap on labelled data too - how is this method applied at test time - one needs to supply two inputs? which ones? does it work when one supplies the same input for both? - 57 - multi dimension attribute encoding, does this essentially mean that the projection code is a matrix, instead of a vector, and that's it? - 60-62 - if the dimensions are not independent, then the disentanglement is not perfect - meaning there might be correlations between specific parts of the representation. did you measure/check for that? - can you explicitly say what the labels for each dataset in 4.1 are - where are they coming from? from the dataset itself? from what I understand, that's easy for the generated datasets (and generated parts of the dataset), but what about cas-peal-r1 and mugshot? - I find the explanation in 243-245 very unclear. could you please elaborate what this exactly means - why is it 5*3 (and not 5*2, e.g. 
in the case of beta-vae where there's a mean and stdev in the code) - 247 - why was k-nn used, and not some other more elaborate classifier? what is the k, what is the distance metric used? (a small comparison sketch follows this review) - algorithm 1, ascending the gradient estimate? what is the training algorithm used? isn't this employing a version of gradient descent (minimising loss)? - what is the effect of the balance parameter? it is a seemingly important parameter, but there are no results showing a sweep of that parameter, just a choice between 0.5 and 1 (and why does 0.5 work better)? - did you try beta-vae with significantly higher values of beta (a sweep between 10 and 150 would do)? Other: - the notation (dash, double dash, dot, double dot over inputs) is a bit unfortunate because it's difficult to follow - algorithm 1 is a bit too big to follow clearly, consider revising, and this is one of the points where it's difficult to follow the notation clearly - figure 1, primary-stage: I assume that f_\phi is as a matter of fact two f_\phis with shared parameters. please split it, otherwise the reader can think that f_\phi accepts 4 inputs (and in the dual-stage it accepts only 2) - figure 3 a and b are very messy / poorly designed - it is impossible to discern different lines in a because there are too many of them, plus it's difficult to compare values among the ones which are visible (both in a and b). log scale might be a better choice. as for the overfitting in a, from the figure, printed version, I just cannot see that overfitting - 66 - is shared - 82-83 Also, require limited weaker… sentence not clear/grammatically correct - 113 - one domain entity - 127 - is shared - 190 - conduct disentangled encodings? strange word choice - why do all citations in parentheses have a blank space between the opening parentheses and the name of the author? - 226 - despite the qualities of hybrid images are not exceptional - sentence not clear/correct - 268 - is that fig 2b or 2a? - please provide an informative caption in figure 2 (what each letter stands for) UPDATE: I've read the author rebuttal, as well as all the reviews again, and in light of a good reply, I'm increasing my score. In short, I think the results look promising on dSprites and that my questions were well answered. I still feel i) that the lack of clarity is apparent in the variable length issue, as all reviewers pointed to that, and that the experiments cannot give a fair comparison to other methods, given that the said vector is not disentangled itself and could account for a higher accuracy, and iii) that the paper doesn't compare all the algorithms in an unsupervised setting (SR=0.0), where I wouldn't necessarily expect better performance than the other models. | - can you explicitly say what the labels for each dataset in 4.1 are - where are they coming from? from the dataset itself? from what I understand, that's easy for the generated datasets (and generated parts of the dataset), but what about cas-peal-r1 and mugshot? |
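The question above about the choice of k-NN could be settled empirically; a minimal sketch comparing k-NN with a linear SVM and an MLP on the learned codes (the random codes and labels below are placeholders for the paper's actual representations):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

codes = np.random.randn(500, 16)       # placeholder latent codes (n_samples, code_dim)
labels = np.random.randint(0, 5, 500)  # placeholder attribute labels

classifiers = [
    ("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5)),
    ("linear SVM", LinearSVC(max_iter=5000)),
    ("MLP", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)),
]
for name, clf in classifiers:
    acc = cross_val_score(clf, codes, labels, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

If the gap between autoencoders and variational methods persists across all three classifiers, it is a property of the codes; if it shrinks, it was an artifact of k-NN.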
ICLR_2022_3248 | ICLR_2022 | compared to [1], which are advantages of the IBP. 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint in the number of factors used by subsequent tasks. As such, the model will not be incentivized to use less factors, leading to increasing number of factors and increased computation with more tasks. 2) The IBP prior allows the data to dictate the number of factors to add for each task. The proposed method has no such mechanism, requiring setting the growth rate by hand using heuristics or a pre-determined schedule. Either is liable to over- or under-utilization of model capacity. Table 4 in the Experiments show that this does indeed have a significant impact on performance.
Overall, I think this is an example of convergent ideas rather than plagiarism, but a discussion of the connections is warranted.
Task incremental learning: This method requires knowing the task ID at test time to pick which factor selector weights to use. Without it, the proposed method doesn’t know which subnetwork to use, and would likely have to resort to trying all of them, which isn’t guaranteed to produce the right results. Recent continual learning methods are often evaluated in the more challenging class incremental setting, where task ID is not known.
Experiments 1. (+) Experiments are conducted on a good set of datasets 2. (+) Error bars are shown 3. (+) The proposed method mostly outperforms the baselines, especially on the more complex datasets. 4. (-) More baselines should be compared against, particularly dynamic architecture approaches, as that’s the category that this method falls under. Many of the compared methods don’t operate on the same set of continual learning assumptions as this paper; in particular, the replay-based methods are often using replay because they consider class incremental learning. 5. (-) Why are the results of Multitask learning so bad for S-CIFAR-100 and S-miniImageNet? My understanding is that it trains on all the data jointly, which should actually be the upper bound for a single model. 6. It would have been nice to visualize the factor selection matrices S for each task in order to visualize knowledge transfer.
Miscellaneous: 1. \citep should be used for parenthetical citations. 2. Initial double quote “ is backwards (Related Works). 3. “the first task,rk,1” 4. Figure 3 caption: “abd”
Questions: 1. How would you apply the weight factorization to 4D convolutional kernels?
[1] “Continual Learning using a Bayesian Nonparametric Dictionary of Weight Factors”, AISTATS 2021 | 1) Unlike a factorized model with an IBP prior, the proposed method lacks a sparsity constraint in the number of factors used by subsequent tasks. As such, the model will not be incentivized to use less factors, leading to increasing number of factors and increased computation with more tasks. |
NIPS_2020_204 | NIPS_2020 | 1. The authors have done a good job with placing their work appropriately. One point of weakness is insufficient comparison to approaches that aim to reduce spatial redundancy, or make the networks more efficient, specifically the ones skipping layers/channels. Comparison to OctConv and SkipNet, even for a single datapoint with, say, the same backbone architecture, will be valuable to the readers. 2. The authors need to show a graph showing the plot of T vs. the number of images, and Expectation(T) over the ImageNet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redundancies stem from the nature of ImageNet, i.e., a large fraction of images can be done with Glance and hence any algorithm with lower resolution will have an unfair advantage. Note, algorithms skipping layers or channels do not enjoy this luxury. 3. The authors should add results from [57] and discuss the comparison. Recent alternatives to MSDNets should be compared and discussed. 4. Efficient backbone architectures and approaches tailoring the computation by controlling the convolutional operator have the added advantage that they can be generally applied to semantic (object recognition) and dense pixel-wise tasks. Extension of this approach to alternate vision tasks, unlike other approaches exploiting spatial redundancy, is not straightforward. The authors should discuss the implications of this approach for other vision tasks. | 2. The authors need to show a graph showing the plot of T vs. the number of images, and Expectation(T) over the ImageNet test set. It is important to understand whether the performance improvement stems solely from the network design to exploit spatial redundancies, or whether the redundancies stem from the nature of ImageNet, i.e., a large fraction of images can be done with Glance and hence any algorithm with lower resolution will have an unfair advantage. Note, algorithms skipping layers or channels do not enjoy this luxury. |
NIPS_2022_1590 | NIPS_2022 | 1) One of the key components is the matching metric, namely, the Pearson correlation coefficient (PCC). However, the assumption that PCC is a more relaxed constraint compared with KL divergence because of its invariance to scale and shift is not convincing enough. The constraint strength of a loss function is defined via its gradient distribution. For example, KL divergence and MSE loss have the same optimal solution while MSE loss is stricter than KL because of stricter punishment according to its gradient distribution. From this perspective, it is necessary to provide the gradient comparison between KL and PCC. 2) The experiments are not sufficient enough. 2-1) There are limited types of teacher architectures. 2-2) Most compared methods are proposed before 2019 (see Tab. 5). 2-3) The compared methods are not sufficient in Tab. 3 and 4. 2-4) The overall performance comparisons are only conducted on the small-scale dataset (i.e., CIFAR100). Large datasets (e.g., ImageNet) should also be evaluated. 2-5) The performance improvement compared with SOTAs is marginal (see Tab. 5). Some students only have a 0.06% gain compared with CRD. 3) There are some typos and some improper presentations. The texts of the figure are too small, especially the texts in Fig.2. Some typos, such as “on each classes” in the caption of Fig. 3, should be corrected.
The authors have discussed the limitations and societal impacts of their works. The proposed method cannot fully address the binary classification tasks. | 1) One of the key components is the matching metric, namely, the Pearson correlation coefficient (PCC). However, the assumption that PCC is a more relaxed constraint compared with KL divergence because of its invariance to scale and shift is not convincing enough. The constraint strength of a loss function is defined via its gradient distribution. For example, KL divergence and MSE loss have the same optimal solution while MSE loss is stricter than KL because of stricter punishment according to its gradient distribution. From this perspective, it is necessary to provide the gradient comparison between KL and PCC. |
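To illustrate the gradient comparison requested in weakness 1 above, here is a minimal sketch contrasting a KL-based distillation loss with a (1 - Pearson correlation) matching loss on logits; the temperature, batch size, and the exact PCC formulation are illustrative assumptions, not necessarily the authors':

```python
import torch
import torch.nn.functional as F

def kd_losses(z_s, z_t, T=4.0):
    """KL-based KD loss vs. a (1 - Pearson correlation) matching loss on logits."""
    kl = F.kl_div(F.log_softmax(z_s / T, dim=1),
                  F.softmax(z_t / T, dim=1), reduction="batchmean") * T * T
    zs_c = z_s - z_s.mean(dim=1, keepdim=True)   # center per sample
    zt_c = z_t - z_t.mean(dim=1, keepdim=True)
    pcc = (zs_c * zt_c).sum(dim=1) / (zs_c.norm(dim=1) * zt_c.norm(dim=1) + 1e-8)
    return kl, (1.0 - pcc).mean()

z_s = torch.randn(8, 100, requires_grad=True)    # student logits
z_t = torch.randn(8, 100)                        # teacher logits
kl, pcc_loss = kd_losses(z_s, z_t)
g_kl, = torch.autograd.grad(kl, z_s, retain_graph=True)
g_pcc, = torch.autograd.grad(pcc_loss, z_s)
print(g_kl.abs().mean().item(), g_pcc.abs().mean().item())
```

Plotting such per-logit gradients for matched versus shifted/scaled student outputs would make the claimed "relaxation" of PCC concrete.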
NIPS_2016_374 | NIPS_2016 | weakness is the presentation: - From my understanding of the submission instructions, the main part of the paper should include all that one needs to understand the paper (even if proofs may be in supplementary material). I thus found it awkward to have a huge algorithm listing as in Alg. 2, without any accompanying text explaining it, and to have algorithms in the supplementary material without giving at least a brief idea of the algorithms in the main body of the paper. This makes it hard to read the paper, and I think it is not appropriate for publication. - Perhaps something more should be done to convince the reader that a query of the type SEARCH is feasible in some realistic scenario. One other little thing: - what is meant by "and quantities that do appear" in line 115? | - Perhaps something more should be done to convince the reader that a query of the type SEARCH is feasible in some realistic scenario. One other little thing: |
NIPS_2019_646 | NIPS_2019 | weakness of each approach. - All the scores in Table 3 (Avg) are lower than their Table 2 counterparts, which makes me wonder if the imbalanced nature of the data across categories has more effect than it should. Chair and table sort of dominate the dataset, and skew the final score toward the trend of these two classes. I feel like a fairer comparison is when all classes have an equal number of objects or when each class is weighted equally (a small sketch of such a class-balanced metric follows this review). I know that this is pretty common in classification tasks, but it can be misleading. ? The result for bed is very interesting and worth a discussion. MCD [23] outperforms other methods by a large margin. And if we assume that the proposed approach with only G is the same as MCD, then adding local alignment drops the classification score from 26.1 to 4.3 (and adding attention further drops it to 1). Do you have any intuition on why this is the case? Clarity: + Overall the paper is not difficult to understand. + The format of the experiments, ablation study, and the tables to show the results are all very clear and easy to digest. ? I feel like section 3.5 doesn't add much to the narrative and could be put in the supplementary instead. - It's not immediately clear to me in line 90-91 that P(s) P(t) refers to the distribution (it's defined in the next section) Significant: + I believe the idea of self-adaptive nodes for 3D objects would be useful to the research community, if it works. Aligning features might not be new, but doing so in a 3D setting and on top of PointNet-based features shows that it is possible and promising, at least for the chair and table categories. + It's true that not many works are looking into Domain Adaptation for 3D data, and it helps to have a common benchmark even if it is just a combination of existing datasets. --UPDATED AFTER REBUTTAL-- Thanks for the detailed rebuttal. The additional results are quite interesting and further convince me that the proposed local alignment does help. So I'm keeping my score at 6. | - It's not immediately clear to me in line 90-91 that P(s) P(t) refers to the distribution (it's defined in the next section) Significant: |
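A minimal sketch of the class-balanced metric suggested above (macro-averaged per-class accuracy; the toy labels are only for illustration):

```python
import numpy as np

def macro_accuracy(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Mean of per-class accuracies, so dominant classes (e.g. chair/table)
    do not dominate the average."""
    classes = np.unique(y_true)
    per_class = [(y_pred[y_true == c] == c).mean() for c in classes]
    return float(np.mean(per_class))

y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])   # imbalanced toy labels
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 1, 0])
print(macro_accuracy(y_true, y_pred))             # 0.5, vs. plain accuracy ~0.78
```

Reporting this alongside the instance-weighted average would show whether the gains hold beyond the two dominant classes.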
NIPS_2020_83 | NIPS_2020 | While I think the paper makes a good contribution, there are some limitations at the present stage: - [Remark 3.1] While it has been done in previous works, I think that a deeper understanding of those cases where modelling the pushforward P in (8) as a composition of perturbations in an RKHS does not introduce an error would increase the quality of the work. Alternatively, trying to understand the kind of error that this parametrization introduces would be valuable too. - The analysis does not cover explicitly what happens when the input measures \beta_i are absolutely continuous and one has to rely on samples. How does the sampling part impact the bound? - The experiments are limited to toy data. There is a range of problems with real data where barycenters can be used and it would be interesting to show the performance of the method in those settings too. | - The experiments are limited to toy data. There is a range of problems with real data where barycenters can be used and it would be interesting to show the performance of the method in those settings too. |
NIPS_2020_960 | NIPS_2020 | - The writing of this paper is very misleading. First of all, it claims that it can be trained only using a single viewpoint of the object. In fact, all previous differentiable rendering techniques can be trained using a single view of an object at training time. However, the reason why multi-view images are used for training in prior works is that single-view images usually lead to ambiguity in the depth direction. The proposed method also suffers from this problem -- it cannot resolve the ambiguity of depth using a single image either. The distance-transformed silhouette can only provide information on the xy plane - the shape perpendicular to the viewing direction. - I doubt the proposed method can be trained without using any camera information (Line 223, the so-called "knowledge of CAD model correspondences"). Without knowing the viewpoint, how is it possible to perform ray marching? How do you know where the ray comes from? - The experiments are not comprehensive or convincing. 1) The comparisons do not seem fair. The performance of DVR is far worse than that in the original DVR paper. Is DVR trained and tested on the same data? What is the code used for evaluation? Is it from the original authors or a reimplementation? 2) Though it could be interesting to see how SoftRas performs, it is not very fair to compare SoftRas here as it uses a different 3D representation -- mesh. It is well known that the mesh representation cannot model arbitrary topology. Thus it is not surprising to see it is outperformed. Since this paper works on implicit surfaces, it would be more interesting to compare with more state-of-the-art differentiable renderers for implicit surfaces, i.e., [26], [27], [14], or at least the baseline approach [38]. However, no direct comparisons with these approaches are provided, making it difficult to verify the effectiveness of the proposed approach. | - I doubt the proposed method can be trained without using any camera information (Line 223, the so-called "knowledge of CAD model correspondences"). Without knowing the viewpoint, how is it possible to perform ray marching? How do you know where the ray comes from? |
ICLR_2021_872 | ICLR_2021 | The authors push on the idea of scalable approximate inference, yet the largest experiment shown is on CIFAR-10. Given this focus on scalability, and the experiments in recent literature in this space, I think experiments on ImageNet would greatly strengthen the paper (though I sympathize with the idea that this can be a high bar from a resources standpoint).
As I noted down below, the experiments currently lack results for the standard variational BNN with mean-field Gaussians. More generally, I think it would be great to include the remaining models from Ovadia et al. (2019). More recent results from ICML could also be useful to include (as referenced in the related work section). Recommendation
Overall, I believe this is a good paper, but the current lack of experiments on a dataset larger than CIFAR-10, while also focusing on scalability, make it somewhat difficult to fully recommend acceptance. Therefore, I am currently recommending marginal acceptance for this paper.
Additional comments
p. 5-7: Including tables of results for each experiment (containing NLL, ECE, accuracy, etc.) in the main text would be helpful to more easily assess the results.
p. 7: For the MNIST experiments, in Ovadia et al. (2019) they found that variational BNNs (SVI) outperformed all other methods (including deep ensembles) on all shifted and OOD experiments. How does your proposed method compare? I think this would be an interesting experiment to include, especially since the consensus in Ovadia et al. (2019) (and other related literature) is that full variational BNNs are quite promising but generally methodologically difficult to scale to large problems, with relative performance degrading even on CIFAR-10. Minor
p. 6: In the phrase "for 'in-between' uncertainty", the first quotation mark on 'in-between' needs to be the forward mark rather than the backward mark (i.e., 'in-between').
p. 7: s/out of sitribution/out of distribution/
p. 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/
p. 8: s/estimates 3) is/estimates, and 3) is/
In the references:
Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers.
Dusenberry et al. (2020) was published in ICML 2020
Osawa et al. (2019) was published in NeurIPS 2019
Swiatkowski et al. (2020) was published in ICML 2020
p. 13, supplement, Fig. 5: error bar regions should be upper and lower bounded by [0, 1] for accuracy.
p. 13, Table 2: Splitting this into two tables, one for MNIST and one for CIFAR-10, would be easier to read. | 8: s/expensive approaches 2) allows/expensive approaches, 2) allows/ p. 8: s/estimates 3) is/estimates, and 3) is/ In the references: Various words in many of the references need capitalization, such as "ai" in Amodei et al. (2016), "bayesian" in many of the papers, and "Advances in neural information processing systems" in several of the papers. Dusenberry et al. (2020) was published in ICML 2020 Osawa et al. (2019) was published in NeurIPS 2019 Swiatkowski et al. (2020) was published in ICML 2020 p. 13, supplement, Fig.
NIPS_2016_153 | NIPS_2016 | weakness of previous models. Thus I find these results novel and exciting. Modeling studies of neural responses are usually measured on two scales: a. Their contribution to our understanding of the neural physiology, architecture or any other biological aspect. b. Model accuracy, where the aim is to provide a model which is better than the state of the art. To the best of my understanding, this study mostly focuses on the latter, i.e. to provide a better-than-state-of-the-art model. If I am misunderstanding, then it would probably be important to stress the biological insights gained from the study. Yet if indeed modeling accuracy is the focus, it's important to provide a fair comparison to the state of the art, and I see a few caveats in that regard: 1. The authors mention the GLM model of Pillow et al. which is pretty much state of the art, but a central point in that paper was that coupling filters between neurons are very important for the accuracy of the model. These coupling filters are omitted here, which makes the comparison slightly unfair. I would strongly suggest comparing to a GLM with coupling filters. Furthermore, I suggest presenting data (like correlation coefficients) from previous studies to make sure the comparison is fair and in line with previous literature. 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by Pillow et al. did not crop the image but used L1 regularization for the filters and a low-rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes: 1. Please define the dashed lines in fig. 2A-B and 4B. 2. Why is the training correlation increasing with the amount of training data for the cutout LN model (fig. 4A)? 3. I think figure 6C is a bit awkward; it implies negative rates, which is not the case. I would suggest using a second y-axis or another visualization which is more physically accurate. 4. Please clarify how the model in fig. 7 was trained. Was it on a full-field flicker stimulus changing contrast with a fixed cycle? If the duration of the cycle changes (shortens, since as the authors mention the model cannot handle longer time scales), will the time scale of adaptation shorten as reported in e.g. Smirnakis et al. Nature 1997? | 2. The authors note that the LN model needed regularization, but then they apply regularization (in the form of a cropped stimulus) to both LN models and GLMs. To the best of my recollection the GLM presented by Pillow et al. did not crop the image but used L1 regularization for the filters and a low-rank approximation to the spatial filter. To make the comparison as fair as possible I think it is important to try to reproduce the main features of previous models. Minor notes:
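To make the coupling-filter point above concrete, here is a minimal sketch (not the paper's or Pillow et al.'s actual code; the filter shapes, the exponential nonlinearity, and all names are illustrative assumptions) of how coupling filters enter the conditional intensity of a Poisson GLM:

```python
import numpy as np

def glm_intensity(stim, spike_hist, k, coupling, bias=0.0):
    """Conditional intensity of one neuron in a Poisson GLM.

    stim       : (T,) stimulus time series
    spike_hist : (N, T) binned spike trains of the N coupled neurons
    k          : (Lk,) stimulus filter
    coupling   : (N, Lc) one coupling filter per coupled neuron
    """
    T = len(stim)
    drive = np.convolve(stim, k)[:T] + bias          # stimulus drive
    for j in range(spike_hist.shape[0]):             # add coupling terms from other cells
        drive += np.convolve(spike_hist[j], coupling[j])[:T]
    return np.exp(drive)                             # exponential nonlinearity
```

Dropping the coupling loop recovers the uncoupled GLM, which is the comparison the review argues is not fully fair to Pillow et al.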
ACL_2017_503_review | ACL_2017 | Reranking use is not mentioned in the introduction.
It would be great news in the NLP context if an Earley parser ran in linear time for NLP grammars (unlike special kinds of formal language grammars).
Unfortunately, this result involves deep assumptions about the grammar and the kind of input. Linear complexity of parsing of an input graph seems right for top-down deterministic grammars, but the paper does not recognise the fact that an input string in NLP usually gives rise to an exponential number of graphs. In other words, the parsing complexity result must be interpreted in the context of graph validation or where one wants to find out a derivation of the graph, for example, for the purposes of graph transduction via synchronous derivations.
To me, the paper should be more clear in this as a random reader may miss the difference between semantic parsing (from strings) and parsing of semantic parses (the current work).
There does not seem to be any control of the linear order of 0-arity edges. It might be useful to mention that if the parser is extended to string inputs with the aim to find the (best?) hypergraph for given external nodes, then the item representations of the subgraphs must also keep track of the covered 0-arity edges. This makes the string-parser variant exponential. - Easily correctable typos or textual problems: 1) Lines 102-106 are misleading. While intersection and probs are true, "such distribution" cannot refer to the discussion above.
2) line 173: I think you should rather talk about validation or recognition algorithms than parsing algorithms, as "parsing" in NLP usually means a completely different thing that is much more challenging due to lexical and structural ambiguity.
3) lines 195-196 are unclear: what are the elements of att_G; in what sense they are pairwise distinct. Compare Example 1 where ext_G and att_G(e_1) are not disjoint sets.
4) l.206. Move *rank* definition earlier and remove redundancy.
5) l. 267: rather "immediately derives", perhaps.
6) 279: add "be" 7) l. 352: give an example of a nontrivial internal path.
8) l. 472: define a subgraph of a hypergraph 9) l. 417, l.418: since there are two propositions, you may want to tell how they contribute to what is quoted.
10) l. 458: add "for" Table: Axiom: this is the only place where this is introduced as an axiom. Link to the text that says it is a trigger.
- General Discussion: It might be useful to tell about MSOL graph languages and their yields, which are context-free string languages. What happens if the grammar is ambiguous and not top-down deterministic?
What if there is an exponential number of parses even for the input graph due to lexical ambiguity or some other reason? How would the parser behave then?
Wouldn't the given Earley recogniser actually be strictly polynomial to m or k ?
Even a synchronous derivation of semantic graphs can miss some linguistic phenomena where a semantic distinction is expressed by different linguistic means. E.g. one language may add an affix to a verb when another language may express the same distinction by changing the object. I am suggesting that although AMR increases language independence in parses it may have such cross-lingual challenges.
I did not fully understand the role of the marker in subgraphs. It was elided later and not really used.
l. 509-510: I already started to miss the remark of lines 644-647 at this point.
It seems that the normal order is not unique. Can you confirm this?
It is nice that def 7, cond 1 introduces lexical anchors to predictions.
Compare the anchors in lexicalized grammars.
l. 760. Are you sure that non-crossing links do not occur when parsing linearized sentences to semantic graphs?
- Significant questions to the Authors: Linear complexity of parsing of an input graph seems right for top-down deterministic grammars, but the paper does not recognise the fact that an input string in NLP usually gives rise to an exponential number of graphs. In other words, the parsing complexity result must be interpreted in the context of graph validation or where one wants to find out a derivation of the graph, for example, for the purposes of graph transduction via synchronous derivations.
What would you say about parsing complexity in the case the RGG is a non-deterministic, possibly ambiguous regular tree grammar, but one is interested to use it to assign trees to frontier strings like a context-free grammar? Can one adapt the given Earley algorithm to this purpose (by guessing internal nodes and their edges)?
Although this question might seem like a confusion, it is relevant in the NLP context.
What prevents the RGGs from generating hypergraphs whose 0-arity edges (~words) are then linearised? What principle determines how they are linearised? Is the linear order determined by the Earley paths (and normal order used in productions) or can one consider an actual word order in strings of a natural language? There is no clear connection to (non)context-free string languages or sets of (non)projective dependency graphs used in semantic parsing. What is written on lines 757-758 is just misleading: Lines 757-758 mention that HRGs can be used to generate non-context-free languages. Are these graph languages or string languages? How should an NLP expert interpret the (implicit) fact that RGGs generate only context-free languages? Does this mean that the graphs are noncrossing graphs in the sense of Kuhlmann & Jonsson (2015)? | 1) Lines 102-106 are misleading. While intersection and probs are true, "such distribution" cannot refer to the discussion above.
oKn2eMAdfc | ICLR_2024 | 1. The introduction to orthogonality in Part 2 could be more detailed.
2. No details on how the capsule blocks are connected to each other.
3. The fourth line of Algorithm 1 does not state why the flatten operation is performed.
4. The presentation of the α-enmax function is not clear.
5. Eq. (4) does not specify why BatchNorm is used for scalars (L2-norm of sj).
6. The proposed method was tested on relatively small datasets, so that the effectiveness of the method was not well evaluated. | 1. The introduction to orthogonality in Part 2 could be more detailed. |
4WrqZlEK3K | EMNLP_2023 | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties
that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets.
2. Several unclear methods affecting readability and reproducibility:
* "To use LMGQS in the zero-shot setting, it is necessary to convert the queries of diverse formats into natural questions." (L350) please explain why.
* "Specifically, we finetune a BART model to generate queries with the document and summary as input." (L354) - how did you FT? what is the training set and any relevant hyper-parameters for reproducing the results.
* "we manually create a query template to transform the query into a natural language question." (L493) - what are the templates? what are the query template and several examples. | 1. It is desired to have more evidence or analysis supporting the training effectiveness property of the dataset or other key properties that will explain the importance and possible use-cases of _LMGQS_ over other QFS datasets. |
Wo66GEFnXd | ICLR_2025 | 1. This paper simply applies neural networks to physical-science problems for predicting TDDFT for molecules. Due to the lack of comparison with other learning-based methods and insufficient experimental results, I don’t see the novelty and effectiveness of this method from the learning perspective. Maybe this work is more appropriate for a physical science journal.
2. This paper only does experiments on a very limited number of molecules and only provides in-distribution testing for these samples. I think the value of this method would be limited if it needs to train for each molecule individually.
3. There is no comparison of this method with other state-of-the-art work, but I think using neural networks to make predictions for molecules is a very popular topic. | 2. This paper only does experiments on a very limited number of molecules and only provides in-distribution testing for these samples. I think the value of this method would be limited if it needs to train for each molecule individually.
NIPS_2020_341 | NIPS_2020 | - For theorem 5.1 and 5.2, is there a way to decouple the statement, i.e., separating out the optimization part and the generalization part? It would be clearer if one could give a uniform convergence guarantee first followed by how the optimization output can instantiate such uniform convergence. - In the experiments, is it reasonable for the German and Law school dataset to have shorter training time in Gerrymandering than Independent? Since in Experiment 2, ERM and plug-in have similar performance to Kearns et al. and the main advantage is its computation time, it would be good to have the code published. | - In the experiments, is it reasonable for the German and Law school dataset to have shorter training time in Gerrymandering than Independent? Since in Experiment 2, ERM and plug-in have similar performance to Kearns et al. and the main advantage is its computation time, it would be good to have the code published. |
ICLR_2021_1716 | ICLR_2021 | Results are on MNIST only. Historically it’s often been the case that strong results on MNIST would not carry over to more complex data. Additionally, at least some core parts of the analysis does not require training networks (but could even be performed e.g. with pre-trained classifiers on ImageNet) - there is thus no severe computational bottleneck, which is often the case when going beyond MNIST.
The “Average stochastic activation diameter” is a quite crude measure and results must thus be taken with a (large) grain of salt. It would be good to perform some control experiments and sanity checks to make sure that the measure behaves as expected, particularly in high-dimensional spaces.
The current paper reports the hashing effect and starts relating it to what’s known in the literature, and has some experiments that try to understand the underlying causes for the hashing effect. However, while some factors are found to have an influence on the strength of the effect, some control experiments are still missing (training on random labels, results on untrained networks, and an analysis of how the results change when starting to leave out more and more of the early layers).
Correctness: Overall the methodology, results, and conclusions seem mostly fine (I’m currently not very convinced by the “stochastic activation diameter” and would not read too much into the corresponding results). Additionally, some claims are not entirely supported (in fullest generality) based on the results shown; see comments for more on this.
Clarity: The main idea is well presented and related literature is nicely cited. However, some of the writing is quite redundant (some parts of the intro appear as literal copies later in the text). Most importantly, the writing in some parts of the manuscript seems quite rushed, with quite a few typos and some sentences/passages that could be rephrased for more fluent reading.
Improvements (that would make me raise my score) / major issues (that need to be addressed)
Experiments on more complex datasets.
One question that is currently unresolved is: is the hashing effect mostly attributed to early layer activations? Ultimately, a high-accuracy classifier will “lump together” all datapoints of a certain class when looking at the network output only. The question is whether this really happens at the very last layer or already earlier in the network. Similarly, when considering the input to the network (the raw data) the hashing effect holds since each data-point is unique. It is conceivable that the first layer activations only marginally transform the data in which case it would be somewhat trivially expected to see the hashing effect (when considering all activations simultaneously). However that might not explain e.g. the K-NN results. I think it would be very insightful to compute the redundancy ratio layer-wise and/or when leaving out more and more of the early layer activations (i.e. more and more rows of the activation pattern matrix). Additionally it would be great to see how this evolves over time, i.e. is the hashing effect initially mostly localized in early layers and does it gradually shape deeper activations over training? This would also shed some light on the very important issue of how a network that maps each (test-) data-point to a unique pattern generalize well?
Another unresolved question is whether it’s mostly the structure of the input-data or the labels driving the organization of the hashed space? The random data experiments answers this partially. Additionally it would be interesting to see what happens when (i) training with random data, (ii) training with random labels - is the hashing effect still there, does the K-NN classification still work?
Clarify: Does Fig 3c and 4a show results for untrained networks? I.e. is the redundancy ratio near 0 for training, test and random data in an untrained network? I would not be entirely surprised by that (a “reservoir effect”) but if that’s the case that should be commented/discussed in the paper, and improvement 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1.
Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data?
Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix.
P3: “It is worth noting that the volume of boundaries between linear regions is zero” - is this still true for non-ReLU nonlinearities (e.g. sigmoids)? If not what are the consequences (can you still easily make the claims on P1: “This linear region partition can be extended to the neural networks containing smooth activations”)? Otherwise please rephrase the claims to refer to ReLU networks only.
I disagree that model capacity is well measured by layer width. Please use the term ‘model-size’ instead of ‘model-capacity’ throughout the text. Model capacity is a more complex concept that is influenced by regularizers and other architectural properties (also note that the term capacity has e.g. a well-defined meaning in information theory, and when applied to neural networks it does not simply correspond to layer-width).
Sec 5.4: I disagree that regularization “has very little impact” (as mentioned in the abstract and intro). Looking at the redundancy ratio for weight decay (unfortunately only shown in the appendix) one can clearly see a significant and systematic impact of the regularizer towards higher redundancy ratios (as theoretically expected) for some networks (I guess the impact is stronger for larger networks, unfortunately Fig 8 in the appendix does not allow to precisely answer which networks are which).
Minor comments A) Formally define what “well-trained” means. The term is used quite often and it is unclear whether it simply means converged, or whether it refers to the trained classifier having to have a certain performance.
B) There is quite an extensive body of literature (mainly 90s and early 2000s) on “reservoir effects” in randomly initialized, untrained networks (e.g. echo state networks and liquid state machines, however the latter use recurrent random nets). Perhaps it’s worth checking that literature for similar results.
C) Remark 1: is really only the training distribution meant, i.e. without the test data, or is it the unaltered data generating distribution (i.e. without unit-ball noise)?
D) Is the red histogram in Fig 3a and 3b the same (i.e. does Fig 3b use the network trained with 500 epochs)?
E) P2 - Sufficiently-expressive regime: “This regime involves almost all common scenarios in the current practice of deep learning”. This is a bit of a strong claim which is not fully supported by the experiments - please tone it down a bit. It is for instance unclear whether the effect holds for non-classification tasks, and variational methods with strong entropy-based regularizers, or Dropout, ...
F) P2- The Rosenblatt 1961 citation is not entirely accurate, MLP today typically only loosely refers to the original Perceptron (stacked into multiple-layers), most notably the latter is not trained via gradient backpropagation. I think it’s fine to use the term MLP without citation, or point out that MLP refers to a multi-layer feedforward network (trained via backprop).
G) First paragraph in Sec. 4 is very redundant with the first two bullet points on P2 (parts of the text are literally copied). This is not a good writing style.
H) P4 - first bullet point: “Generally, a larger redundancy ratio corresponds a worse encoding property.”. This is a quite hand-wavy statement - “worse” with respect to what? One could argue that for instance for good generalization high redundancy could be good.
I) Fig 3: “10 epochs (red) and 500 epochs (blue),” does not match the figure legend where red and blue are swapped.
J) Fig 3: Panel b says “Rondom” data.
K) Should the x-axis in Fig 3c be 10^x where x is what’s currently shown on the axis? (Similar to how 4a is labelled?)
L) Some typos P2: It is worths noting P2: By contrast, our the partition in activation hash phase chart characerizes goodnessof-hash. P3: For the brevity P3: activation statue | 3) mentioned above would become even more important. If the figures do not show results for untrained networks then please run the corresponding experiments and add them to the figures and Table 1. Clarify: Random data (Fig 3c). Was the network trained on random data, or do the dotted lines show networks trained on unaltered data, evaluated with random data? Clarify: Random data (Fig 3). Was the non-random data normalized or not (i.e. is the additional “unit-ball” noise small or large compared to the data). Ideally show some examples of the random data in the appendix. |
ARR_2022_10_review | ARR_2022 | - The number of datasets used is relatively small, to really access the importance of different design decisions, it would probably be good to use further datasets, e.g., the classical GeoQuery dataset.
- I would have appreciated a discussion of the statistical properties of the results - with the given number of tests, what is the probability that differences are generated by random noise and does a regression on the different design decisions give us a better idea of the importance of the factors?
- The paper mentions that a heuristic is used to identify variable names in the Django corpus, however, I could not find information on how this heuristic works. Another detail that was not clear to me is whether the BERT model was fine tuned and how the variable strings were incorporated into the BERT model (the paper mentions that they were added to the vocabulary, but not how). For a paper focused on determining what actually matters in building a text to code system, I think it is important to be precise on these details.
It would take some time to implement your task for other corpora, which potentially use different programming languages, but it might be possible to still strengthen your results using bootstrapping. You could resample some corpora from the existing two and see how stable your results are.
If you have some additional space, it would also be interesting to know if you have discussed results based on types of examples - e.g., do certain decisions make more of a difference if there are more variables?
Typos: - Page 1: "set of value" -> "set of values" "For instance, Orlanski and Gittens (2021) fine-tunes BART" -> "fine-tune" - Page 2: "Non determinism" -> "Non-Determinism" | - I would have appreciated a discussion of the statistical properties of the results - with the given number of tests, what is the probability that differences are generated by random noise and does a regression on the different design decisions give us a better idea of the importance of the factors? |
8HG2QrtXXB | ICLR_2024 | - Source of Improvement and Ablation Study:
- Given the presence of various complex architectural choices, it's difficult to determine whether the Helmholtz decomposition is the primary source of the observed performance improvement. Notably, the absence of the multi-head mechanism leads to a performance drop (0.1261 -> 0.1344) for the 64x64 Navier-Stokes, which is somewhat comparable to the performance decrease resulting from the ablation of the Helmholtz decomposition (0.1261 -> 0.1412). These results raise questions about the model's overall performance gain compared to the baseline models when the multi-head trick is absent. Additionally, the ablation studies need to be explained more comprehensively with sufficient details, as the current presentation makes it difficult to understand the methodology and outcomes.
- The paper claims that Vortex (Deng et al., 2023) cannot be tested on other datasets, which seems unusual, as they are the same type of task and data that are disconnected from the choice of dynamics modeling itself. It should be further clarified why Vortex cannot be applied to other datasets.
- Interpretability Claim:
- The paper's claim about interpretability is not well-explained. If the interpretability claim is based on the model's prediction of an explicit term of velocity, it needs further comparison and a more comprehensive explanation. Does the Helmholtz decomposition significantly improve interpretability compared to baseline models, such as Vortex (Deng et al., 2023)?
- In Figure 4, it appears that the model predicts incoherent velocity fields around the circle boundary, even with non-zero velocity outside the boundary, while baseline models do not exhibit such artifacts. This weakens the interpretability claim.
- Multiscale modeling:
- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly.
- Regarding some missing experimental results with cited baselines, it's crucial to include and report all baseline results to ensure transparency, even if the outcomes are considered inferior.
- Minor issues:
- Ensure proper citation format for baseline models (Authors, Year).
- Make sure that symbols are well-defined with clear reference to their definitions. For example, in Equation (4), the undefined operator $\mathbb{I}_{\vec r\in\mathbb{S}}$ needs clarification. If it's an indicator function, use standard notation with a proper explanation. "Embed(•)" should be indicated more explicitly. | - Multiscale modeling:- The aggregation operation after "Integration" needs further clarification. Please provide more details in the main paper, and if you refer to other architectures, acknowledge their structure properly. |
2JF8mJRJ7M | ICLR_2024 | 1. Utilizing energy models to explain the fine-tuning of pre-trained models seems not to be essential. As per my understanding, the objective of the method in this paper as well as related methods ([1,2,3], etc.) is to reduce the difference in features extracted by the models before and after fine-tuning.
2. The authors claim that the text used is randomly generated, but it appears from the code in the supplementary material that tokens are sampled from the openai_imagenet_template. According to CAR-FT, using all templates as text input also yields good performance. What then is the significance of random token sampling in this scenario?
3. It is suggested that the authors provide a brief introduction to energy models in the related work section.
In Figure 1, it is not mentioned which points different learning rates in the left graph and different steps in the right graph correspond to.
[1] Context-aware robust fine-tuning.
[2] Fine-tuning can cripple your foundation model; preserving features may be the solution.
[3] Robust fine-tuning of zero-shot models. | 3. It is suggested that the authors provide a brief introduction to energy models in the related work section. In Figure 1, it is not mentioned which points different learning rates in the left graph and different steps in the right graph correspond to. [1] Context-aware robust fine-tuning. [2] Fine-tuning can cripple your foundation model; preserving features may be the solution. [3] Robust fine-tuning of zero-shot models. |
NIPS_2021_291 | NIPS_2021 | The writing is clear and the motivation is explained clearly. Besides, the theoretical grounding and experimental evaluation are not sufficient to show their originality and significance. Here are some suggestions: 1) I would like to see ablation studies for the proposed training method, with the traditional backpropagation framework as the baseline. 2) How to deal with the different types of inputs (e.g., bio-medical signals or speech)? It would be valuable to discuss it and present your solutions in this paper. The citations seem a bit disordered. | 2) How to deal with the different types of inputs (e.g., bio-medical signals or speech)? It would be valuable to discuss it and present your solutions in this paper. The citations seem a bit disordered.
NIPS_2021_537 | NIPS_2021 | Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches where the promising architectures are predicted and selected at each iteration and used to update the predictor model for next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade-off exploitation and exploration rather than using more principled acquisition functions.
2. If we look at the specific components of the approach, they are not novel either. The weak predictors used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS are almost the same as BRP-NAS, as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out through theoretical analyses or extensive experiments the reasons why the simple greedy selection approach outperforms more principled acquisition functions (if that’s true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper.
Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensemble or add a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search space for NAS-Bench-201 and 101 are relatively small and as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301[7], the greedy sampling strategy which lacks a principled exploitation-exploration trade-off might not work well. 2. Following the above comment, I’ll suggest the authors to evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS[2] and NAS-BOWL[4] or predictor-based method like BRP-NAS [5] which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and shows better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS use weight-sharing to evaluate architectures queried which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net. Thus, the good final performance of WeakNAS on MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with evalutinary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithm or random search. A more convincing result would be to perform the proposed method on DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS[2] or NAS-BOWL[4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Table 2, 3 (at least the BO baselines) in plots like Fig. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradicts with the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101 but in Table 3, WeakNAS cannot reach the best architecture after even 2000 queries.
7. The results in Table 2, which show that linear-/exponential-decay sampling clearly underperforms uniform sampling, confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performances of architectures in the good subregion are rather close. 8. In Table 1, what does the number of predictors mean? To me, it is simply the number of search iterations. Do the authors reuse the weak predictors from previous iterations in later iterations like an ensemble?
I understand that given the time constraint, the authors are unlikely to respond to my comments. Hope those comments can help the authors for future improvement of the paper.
References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for nas benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020.
The limitations and social impacts are briefly discussed in the conclusion. | 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out through theoretical analyses or extensive experiments the reasons why the simple greedy selection approach outperforms more principled acquisition functions (if that’s true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper. Detailed Comments:
NIPS_2018_25 | NIPS_2018 | - My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the case and these parameters are learned, what is the loss function? - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. - "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. - During evaluation at test time, how is the 3D alignment between the prediction and the groundtruth found? - Please comment on why the performance of GTSeeNet is lower than that of SeeNetFuse and ThinkNetFuse. The expectation is that groundtruth 2D segmentation should improve the results. - line 180: Why not using the same amount of samples for SUNCG-D and SUNCG-RGBD? - What does NoSeeNet mean? Does it mean D=1 in line 96? - I cannot parse lines 113-114. Please clarify. | - "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. |
ARR_2022_317_review | ARR_2022 | - Lack of novelty: - Adversarial attacks by perturbing text have been done on many NLP models and image-text models. They are nicely summarized in the related work of this paper. The only new effort is to take similar ideas and apply them to video-text models.
- Checklist (Ribeiro et al., ACL 2020) showed many ways to stress test NLP models and evaluate them. Video-text models could also be tested on some of those dimensions, for instance on changing NER.
- It would be interesting to see a type of perturbation which is specific to video-text models (and probably not that important to image-text or text-only models). Otherwise, this work just looks like using an already existing method on this new problem (video-text) which is just coming up.
- Is there a way to take cues from the video to create harder negatives? | - Lack of novelty: - Adversarial attacks by perturbing text have been done on many NLP models and image-text models. They are nicely summarized in the related work of this paper. The only new effort is to take similar ideas and apply them to video-text models.
NIPS_2018_245 | NIPS_2018 | Weakness] 1: Poor writing and annotations are a little hard to follow. 2: Although applying GCN on FVQA is interesting, the technical novelty of this paper is limited. 3: The motivation is to handle cases when the question doesn't focus on the most obvious visual concept and when there are synonyms and homographs. However, from the experiment, it's hard to see whether this specific problem is solved or not. Although the number is better than the previous method, it would be great if the authors could produce more experiments to show more about the question/motivation raised in the introduction. 4: Following 3, applying MLP after GCN is very common, and I'm not surprised that the performance will drop without MLP. The authors should show more ablation studies on performance when varying the number of retrieved facts; what happens with a different number of GCN layers? | 1: Poor writing and annotations are a little hard to follow.
ACL_2017_108_review | ACL_2017 | Clarification is needed in several places.
1. In section 3, in addition to the description of the previous model, MH, you need to point out the issues of MH which motivate you to propose a new model.
2. In section 4, I don't see the reason why separators are introduced. What additional info do they convey beyond T/I/O?
3. section 5.1 does not seem to provide useful info regarding why the new model is superior.
4. The discussion in section 5.2 is so abstract that I don't get the insights into why the new model is better than MH. Can you provide examples of spurious structures? - General Discussion: The paper presents a new model for detecting overlapping entities in text. The new model improves the previous state-of-the-art, MH, in the experiments on a few benchmark datasets. But it is not clear why and how the new model works better. | - General Discussion: The paper presents a new model for detecting overlapping entities in text. The new model improves the previous state-of-the-art, MH, in the experiments on a few benchmark datasets. But it is not clear why and how the new model works better.
hkjcdmz8Ro | ICLR_2024 | 1. The technical contribution is weak. The proposed method utilizes the LLM to refine the prompt. Thus, the performance of the proposed method heavily relies on the designed system prompt and LLMs. Moreover, the proposed method is based on heuristics, i.e., there is no insight behind the proposed approach. But I understand those two points could be very challenging for LLM research.
2. The evaluation is not systematic. For instance, only 50 questions are used in the evaluation. Thus, it is unclear whether the proposed approach is generalizable. More importantly, is the judge model the same for the proposed algorithm and the evaluation? If this is the case, it is hard to see whether the reported results are reliable, as LLMs could be inaccurate in their predictions. It would be better if other metrics could be used for cross-validation, e.g., manual checks and the word list used by Zou et al. 2023. The proposed method is only compared with GCG. There are also many other baselines, e.g., handcrafted methods (https://www.jailbreakchat.com/).
3. In GCG, authors showed that their approach could be transferred to other LLMs. Thus, GCG could craft adversarial prompts and transfer them to other LLMs. It would be good if such a comparison could be included.
A minor point: The jailbreaking percentage is low for certain LLMs. | 3. In GCG, authors showed that their approach could be transferred to other LLMs. Thus, GCG could craft adversarial prompts and transfer them to other LLMs. It would be good if such a comparison could be included. A minor point: The jailbreaking percentage is low for certain LLMs. |
t8cBsT9mcg | ICLR_2024 | 1. The abstract should be expanded to encompass key concepts that effectively summarize the paper's contributions. In the introduction, the authors emphasize the significance of interpretability and the challenges it poses in achieving high accuracy. By including these vital points in the abstract, the paper can provide a more comprehensive overview of its content and contributions.
2. Regarding the abstention process, it appears to be based on a prediction probability threshold, where if the probability is lower than the threshold, the prediction is abstained from? How does it differ from a decision threshold used by the models? Can the authors clarify that? (A minimal sketch of this thresholded-abstention reading is given after this list.)
3. In the results and discussion section, there's limited exploration and commentary on the impact of the solution on system accuracy, as seen in Table 2. Notably, the confirmation budget appears to have a limited effect on datasets like "noisyconcepts25" and "warbler" compared to others. The paper can delve into the reasons behind this discrepancy.
4. In real-world applications of this solution, questions about the ease of concept approval and handling conflicting user feedback arise. While these aspects may be considered out of scope, addressing them would be beneficial for evaluating the practicality of implementing this approach in real-world scenarios. This is particularly important when considering the potential challenges of user feedback and conflicting inputs in such applications.
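As referenced in question 2 above, here is a minimal hedged sketch of the thresholded-abstention reading being asked about (the threshold name `tau` and the use of softmax confidence are assumptions, not the paper's documented mechanism):

```python
import numpy as np

def predict_or_abstain(probs, tau=0.9):
    """probs: (n_classes,) softmax output for one example.

    A plain decision threshold would only pick the argmax class; this
    abstention variant additionally refuses to answer when the top
    probability falls below tau.
    """
    top = int(np.argmax(probs))
    return None if probs[top] < tau else top
```

Under this reading, the difference from an ordinary decision threshold is only that low-confidence predictions are withheld rather than mapped to a default class.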
Minor things:
Page 4, confirm. we —> replace . with comma
Section 4.2, Table Table 2 —> Table 2
Shouldn’t Table 2 rather be labelled as Figure 2? | 2. Regarding the abstention process, it appears to be based on a prediction probability threshold, where if the probability is lower than the threshold, the prediction is abstained from? How does it differ from a decision threshold used by the models? Can the authors clarify that?
NIPS_2016_287 | NIPS_2016 | weakness, however, is the experiment on real data where no comparison against any other method is provided. Please see the detailed comments below. 1. While [5] is a closely related work, it is not cited or discussed at all in Section 1. I think proper credit should be given to [5] in Sec. 1 since the spacey random walk was proposed there. The difference between the random walk model in this paper and that in [5] should also be clearly stated to clarify the contributions. 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art. 3. This work combines ideas from [4], [5], and [14] so it is very important to clearly state the relationships and differences with these earlier works. 4. At the end of Sec. 2, there are two important parameters/thresholds to set. One is the minimum cluster size and the other is the conductance threshold. However, the experimental section (Sec. 3) did not mention or discuss how these parameters are set and how sensitive the performance is with respect to these parameters. 5. Sec. 3.2 and Sec. 3.3: The real data experiments study only the proposed method and there is no comparison against any existing method on real data. Furthermore, there is only some qualitative analysis/discussion on the real data results. Adding some quantitative studies would be more helpful to the readers and researchers in this area. 6. Possible Typo? Line 131: "wants to transition". | 2. The AAAI15 paper titled "Spectral Clustering Using Multilinear SVD: Analysis, Approximations and Applications" by Ghoshdastidar and Dukkipati seems to be a related work missed by the authors. This AAAI15 paper deals with hypergraph data with tensors as well so it should be discussed and compared against to provide a better understanding of the state-of-the-art.
ICLR_2022_2081 | ICLR_2022 | 1. My feeling is that the conclusion is somewhat overclaimed. In both abstract and conclusion, it is emphasized that this work proves the pessimistic result that reweighting algorithms always overfit. However, the paper only proves that this conclusion might be true for some specific situations. For example, the reweighting algorithms need to satisfy Assumption 1 and Assumption 2, which means not all reweighting algorithms are considered. The overparameterized models need to be linear models, linearized neural networks or wide fully-connected neural networks, which are not commonly used in practice. Besides, the squared loss needs to be used to confirm the update rule is linear. All those assumptions are not quite mild for me. 2. The analysis of neural networks contributes less. With the existing NTK theorem, the extension from linear models to wide fully-connected neural networks is trivial (Section 3.2, 3.3). The work bypasses the core problem of overparametrized neural networks and only considers the easy wide fully-connected neural networks. 3. The theoretical results and experiments do not match. The theoretical proof considers wide fully-connected neural networks, while the experiments utilize a ResNet18 as the model, which is quite different. 4. Some key steps are empirical, although the paper claims that it provides a theoretical backing in the abstract. For example, this paper only proves that reweighting algorithms will converge to the same level as ERM, but the conclusion that ERM has a poor worst-group test performance is summarized through observation in practice. Besides, the paper can only empirically demonstrate that commonly used algorithms satisfy Assumption 2. | 2. The analysis of neural networks contributes less. With the existing NTK theorem, the extension from linear models to wide fully-connected neural networks is trivial (Section 3.2, 3.3). The work bypasses the core problem of overparametrized neural networks and only considers the easy wide fully-connected neural networks. |
ARR_2022_186_review | ARR_2022 | - It is not clear what the goal of the paper is. Is it the release of a challenging dataset, or proposing an analysis of augmenting models with expert-guided adversarial examples? If it is the first, OK, but the paper misses a lot of important information and data analysis to give a sense of the quality and usefulness of such a dataset. If it is the second, it is not clear what the novelty is.
- In general, it seems the authors want to propose a way of creating a challenging set. However, what they describe seems very specific and not scalable.
- The paper structure and writing are not sufficient
My main concern is that it's not clear what the goal of the paper is. Also, the structure and writing should greatly improve. I also believe that the choice to go for a short paper penalized the authors, as it seems clear that they cut out some information that could've been useful to better understand the paper (also given the 5-page appendix).
Detailed comments/questions: - Line 107 data, -> data.
- Line 161-162: this sentence is not clear.
- Table 1: are these all the rules you defined? How is each rule applied? When do you decide to make small changes to the context? For example, when do you decide to add "and her team" as in the last example of Table 1? - Also, it seems that all the rules change a one-token entity to a multi-token one or vice versa. Will models be biased by this?
- Line 183-197: not clear what you're doing here. Details cannot be in appendix.
- What is also not clear to me is how the Challenge Set is used. If I understood correctly, the CS is created by the linguistic experts and it's used for evaluation purposes. Is this also used to augment the training material? If yes, what is the data split you used? - Line 246-249: this sentence lacks a conclusion - Line 249: What are eligible and non-eligible examples?
- Line 251: what is p?
- Line 253: The formula doesn't depend on p, so why is the premise "if p=100% of the eligible example"?
- Line 252: Not clear what the subject of this sentence is. | - What is also not clear to me is how the Challenge Set is used. If I understood correctly, the CS is created by the linguistic experts and it's used for evaluation purposes. Is this also used to augment the training material? If yes, what is the data split you used?
ACL_2017_31_review | ACL_2017 | ] See below for details of the following weaknesses: - Novelties of the paper are relatively unclear.
- No detailed error analysis is provided.
- A feature comparison with prior work is shallow, missing two relevant papers.
- The paper has several obscure descriptions, including typos.
[General Discussion:] The paper would be more impactful if it states novelties more explicitly. Is the paper presenting the first neural network based approach for event factuality identification? If this is the case, please state that.
The paper would crystallize remaining challenges in event factuality identification and facilitate future research better if it provides detailed error analysis regarding the results of Table 3 and 4. What are dominant sources of errors made by the best system BiLSTM+CNN(Att)? What impacts do errors in basic factor extraction (Table 3) have on the overall performance of factuality identification (Table 4)? The analysis presented in Section 5.4 is more like a feature ablation study to show how useful some additional features are.
The paper would be stronger if it compares with prior work in terms of features. Does the paper use any new features which have not been explored before? In other words, it is unclear whether main advantages of the proposed system come purely from deep learning, or from a combination of neural networks and some new unexplored features. As for feature comparison, the paper is missing two relevant papers: - Kenton Lee, Yoav Artzi, Yejin Choi and Luke Zettlemoyer. 2015 Event Detection and Factuality Assessment with Non-Expert Supervision. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1643-1648.
- Sandeep Soni, Tanushree Mitra, Eric Gilbert and Jacob Eisenstein. 2014.
Modeling Factuality Judgments in Social Media Text. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 415-420.
The paper would be more understandable if more examples are given to illustrate the underspecified modality (U) and the underspecified polarity (u). There are two reasons for that. First, the definition of 'underspecified' is relatively unintuitive as compared to other classes such as 'probable' or 'positive'.
Second, the examples would be more helpful to understand the difficulties of Uu detection reported in line 690-697. Among the seven examples (S1-S7), only S7 corresponds to Uu, and its explanation is quite limited to illustrate the difficulties.
A minor comment is that the paper has several obscure descriptions, including typos, as shown below: - The explanations for features in Section 3.2 are somewhat intertwined and thus confusing. The section would be more coherently organized with more separate paragraphs dedicated to each of lexical features and sentence-level features, by: - (1) stating that the SIP feature comprises two features (i.e., lexical-level and sentence-level) and introduce their corresponding variables (l and c) *at the beginning*; - (2) moving the description of embeddings of the lexical feature in line 280-283 to the first paragraph; and - (3) presenting the last paragraph about relevant source identification in a separate subsection because it is not about SIP detection.
- The title of Section 3 ('Baseline') is misleading. A more understandable title would be 'Basic Factor Extraction' or 'Basic Feature Extraction', because the section is about how to extract basic factors (features), not about a baseline end-to-end system for event factuality identification.
- The presented neural network architectures would be more convincing if the paper described how beneficial the attention mechanism is to the task.
- Table 2 seems to show factuality statistics only for all sources. The table would be more informative along with Table 4 if it also shows factuality statistics for 'Author' and 'Embed'.
- Table 4 would be more effective if the highest system performance with respect to each combination of the source and the factuality value is shown in boldface.
- Section 4.1 says, "Aux_Words can describe the *syntactic* structures of sentences," whereas section 5.4 says, "they (auxiliary words) can reflect the *pragmatic* structures of sentences." These two claims do not consort with each other well, and neither of them seems adequate to summarize how useful the dependency relations 'aux' and 'mark' are for the task.
- S7 seems to be another example to support the effectiveness of auxiliary words, but the explanation for S7 is thin, as compared to the one for S6. What is the auxiliary word for 'ensure' in S7?
- Line 162: 'event go in S1' should be 'event go in S2'.
- Line 315: 'in details' should be 'in detail'.
- Line 719: 'in Section 4' should be 'in Section 4.1' to make it more specific.
- Line 771: 'recent researches' should be 'recent research' or 'recent studies'. 'Research' is an uncountable noun.
- Line 903: 'Factbank' should be 'FactBank'. | - (1) stating that the SIP feature comprises two features (i.e., lexical-level and sentence-level) and introducing their corresponding variables (l and c) *at the beginning*; |
NIPS_2020_887 | NIPS_2020 | - Part of the method (2) relies on having a list of orientation words. It's not clear how the orientation words are determined. The effect of k (for the number of top-k selected orientation words) is also not studied. - Most of the gains come from components 2) and 3) and it's not certain how much of the usefulness of the method is specific to this dataset. - The ablation study is not as thorough as it can be (it adds the components in order). Ideally, it would also show the effect of 2) and 3) without using 1) (with ResNet features instead of GloRe) and the effect of 3) without 2). - The proposed method is very task and data specific and it is not clear whether it would be of interest to the broader NeurIPS community. | 2) and 3) without using 1) (with ResNet features instead of GloRe) and effect of |
NIPS_2022_477 | NIPS_2022 | 1. In experiments, the PRODEN method also uses mixup and consistency training techniques for fair comparisons. What about other competitive baselines? I'd like to see how much the strong CC method could benefit from the representation training technique.
2. It is not clear why the proposed sample selection mechanism helps preserve the label distribution.
3. In App. B.2, a relaxed solution of the Sinkhorn-Knopp algorithm is proposed. Why is the relaxed problem guaranteed to converge? Does Solar always run this relaxed version of Sinkhorn-Knopp?
4. How does gamma in the Sinkhorn-Knopp algorithm affect the performance?
5. How is the class distribution estimated for PRODEN in Figure 1?
Societal Impacts: The main negative impact is that lower annotation costs may decrease the requirement for annotator employment.
Limitations: The experiments need to be further improved. | 2. It is not clear why the proposed sample selection mechanism helps preserve the label distribution. |
Ku1tUKnAnC | ICLR_2025 | 1. This approach requires enumerating all off-target concepts in order to construct the oracle probe for Z_e. However, this is a common assumption in the causal-inference-in-text literature, so not surprising.
2. A crux of this paper is the development of *good* oracle probes. However, there is very little analysis of how to evaluate the oracle probes (which are then used to evaluate causal probing methods).
3. Currently, this paper only considers binary concepts. This is a common assumption, so not very limiting. | 3. Currently, this paper only considers binary concepts. This is a common assumption, so not very limiting. |
Uj2Wjv0pMY | ICLR_2024 | • Compared to Assembly 101 (error detection), the proposed dataset seems inferior / less complicated. Claims like a higher ratio of error to normal videos need to be validated.
• Compared to other datasets, this dataset prides itself on adding different modalities, especially the depth channel (RGB-D). The paper fails to validate the necessity of such a modality. One crucial difference from the Assembly dataset is the use of depth values. What role does it play in training baseline models? Would the model’s performance drop if these weren’t present? In the current deep learning era, depth channels should reasonably be producible with the help of existing models.
• I’m not convinced that binary classification is a justifiable baseline metric. While I agree that the TAL task is really important here and a good problem to solve, I’m not sure how coarse-grained binary classification can assess a model’s understanding of fine-grained errors like technique errors.
• Timing Error (Duration of an activity) and Temperature-based error: do these really need ML-based solutions? In sensitive tasks, simple sensor readings can indicate errors. I’m not sure testing computer vision models on such tasks is justifiable. These require more heuristics-based methods, working with if-else statements.
• Procedure Learning: it is very vaguely defined, mostly left unexplained, and seems like an afterthought. I recommend the authors devote a passage to the methods “M1 (Dwibedi et al., 2019)” and “M2 (Bansal, Siddhant et al., 2022)”. In Table 5, the value of lambda is not mentioned.
• The authors are dealing with a degree of subjectivity in terms of severity of errors. It would have been greatly beneficial if the errors could be finely measured. For example, if the person uses a tablespoon instead of a teaspoon, is that still an error? Some errors are graver than others; is there a weighted score? Is there a way to measure the level of deviation for each type of error or the timestamp of when an error occurs? Is one recipe more difficult than another? | • I’m not convinced that binary classification is a justifiable baseline metric. While I agree that the TAL task is really important here and a good problem to solve, I’m not sure how coarse-grained binary classification can assess a model’s understanding of fine-grained errors like technique errors. |
NIPS_2017_53 | NIPS_2017 | Weakness
1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA.
2. Given that the paper uses a bilinear layer to combine representations, it should mention in related work the rich line of work in VQA, starting with [B] which uses bilinear pooling for learning joint question-image representations. Right now, given the manner in which things are presented, a novice reader might think this is the first application of bilinear operations for question answering (based on reading till the related work section). Bilinear pooling is compared to later.
3. L151: Would be interesting to have some sort of a group norm in the final part of the model (g, Fig. 1) to encourage disentanglement further.
4. It is very interesting that the approach does not use an LSTM to encode the question. This is similar to the work on a simple baseline for VQA [C] which also uses a bag of words representation.
5. (*) In Sec. 4.2 it is not clear how the question is being used to learn an attention on the image feature, since the description under Sec. 4.2 does not match the equation in the section. Specifically, the equation does not have any term for r^q, which is the question representation. Would be good to clarify. Also it is not clear what \sigma means in the equation. Does it mean the sigmoid activation? If so, multiplying two sigmoid activations (which the \alpha_v computation seems to do) might be ill-conditioned and numerically unstable.
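To make that numerical concern concrete, here is a minimal sketch of how the product of two sigmoid gates and its gradient behave once either gate saturates. This is only an illustration of the general phenomenon; whether it matches the paper's actual \alpha_v computation is an assumption, since the exact form is unclear from Sec. 4.2.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Product of two sigmoid gates, s(a) * s(b); its gradient w.r.t. a is
# s(a) * (1 - s(a)) * s(b), which collapses once either gate saturates.
for a, b in [(0.0, 0.0), (5.0, 5.0), (10.0, -10.0)]:
    s_a, s_b = sigmoid(a), sigmoid(b)
    print(f"a={a:+5.1f} b={b:+5.1f}  product={s_a * s_b:.2e}  "
          f"d(product)/da={s_a * (1.0 - s_a) * s_b:.2e}")
```

For saturated inputs the product and its gradient both shrink towards zero, which is one way the flagged ill-conditioning can show up in practice.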
6. (*) Is the object-detection-based attention being performed on the image or on some convolutional feature map V \in R^{FxWxH}? Would be good to clarify. Is some sort of rescaling done based on the receptive field to figure out which image regions correspond to which spatial locations in the feature map?
7. (*) L254: Trimming the questions after the first 10 seems like an odd design choice, especially since the question model is just a bag of words (so it is not expensive to encode longer sequences).
8. L290: it would be good to clarify how the implemented bilinear layer is different from other approaches which do bilinear pooling. Is the major difference the dimensionality of embeddings? How is the bilinear layer swapped out with the Hadamard product and MCB approaches? Is the compression of the representations using Equation (3) still done in this case?
Minor Points:
- L122: Assuming that we are multiplying in equation (1) by a dense projection matrix, it is unclear how the resulting matrix is expected to be sparse (aren't we multiplying by a nicely-conditioned matrix to make sure everything is dense?).
- Likewise, unclear why the attended image should be sparse. I can see this would happen if we did attention after the ReLU but if sparsity is an issue why not do it after the ReLU?
Preliminary Evaluation
The paper is a really nice contribution towards leveraging traditional vision tasks for visual question answering. Major points and clarifications for the rebuttal are marked with a (*).
[A] Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. 2015. "Neural Module Networks." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1511.02799.
[B] Fukui, Akira, Dong Huk Park, Daylen Yang, Anna Rohrbach, Trevor Darrell, and Marcus Rohrbach. 2016. "Multimodal Compact Bilinear Pooling for Visual Question Answering and Visual Grounding." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1606.01847.
[C] Zhou, Bolei, Yuandong Tian, Sainbayar Sukhbaatar, Arthur Szlam, and Rob Fergus. 2015. "Simple Baseline for Visual Question Answering." arXiv [cs.CV]. arXiv. http://arxiv.org/abs/1512.02167. | 1. When discussing related work it is crucial to mention related work on modular networks for VQA such as [A], otherwise the introduction right now seems to paint a picture that no one does modular architectures for VQA. |
ARR_2022_126_review | ARR_2022 | - The experiment section is not informative to the readers, because it is stating obvious results such as Transformer models being more “capable of capturing content-specific information”.
- The case study section is also not very informative. The authors evaluate whether the model assigns bigger attention weights to the appropriate control token when generating relevant sentences. However, they only show a single sample as an illustration. Such a single sample illustration is very vulnerable to cherry-picking and biases. Moreover, interpreting bigger attention weights as being more important is somewhat controversial in the field. It would be more helpful to the readers if the authors show diverse quantitative analyses.
- The authors show how different control sequences affect the generated outputs with only 3 examples, which is extremely limited to see whether the model can be controlled.
- It will be interesting to drop the control sequences from the gold labels (or replace them with negative ones) one by one and see how the automatic metric changes. This will show whether the control tokens really do contribute to the model’s performance. | - The authors show how different control sequences affect the generated outputs with only 3 examples, which is extremely limited to see whether the model can be controlled. |
Kjs0mpGJwb | EMNLP_2023 | 1. Although the structural information has not been explicitly used in the current problem statement, it has been implicitly used in a few previous works on bilingual mapping induction. Please see:
"Multi-Stage Framework with Refinement based Point Set Registration for Unsupervised Bi-Lingual Word Alignment". Oprea et al., COLING 2022.
"Point Set Registration for Unsupervised Bilingual Lexicon Induction". H. Cao and T. Zhao, IJCAI 2018.
As such, a proper discussion and comparison is necessary in the current paper.
2. The GCN that is proposed has no learnable parameters, so it is a static matrix transformation operation. So, why is the term GCN used - isn't it misleading? Further, if it is not learning anything, it is a simple aggregation function, as far as I understood. Then the structural information is not propagating through the entire graph - this is counter-intuitive because the author(s) claim that structural information usage is a key feature of the proposed framework. Am I missing something here?
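For concreteness, here is a minimal sketch of what a propagation step with no learnable parameters amounts to: a fixed, degree-normalized matrix multiplication applied to the input features. This is a generic illustration, not the paper's actual model.

```python
import numpy as np

def propagate(adjacency: np.ndarray, features: np.ndarray) -> np.ndarray:
    # Replace each node's features by the average over itself and its
    # neighbours. Nothing is learned: the map is fixed by the graph alone.
    a_hat = adjacency + np.eye(adjacency.shape[0])    # add self-loops
    row_norm = a_hat / a_hat.sum(axis=1, keepdims=True)
    return row_norm @ features

adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node chain
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(propagate(adj, feats))
```

Applied once, information travels only one hop, which is exactly the question raised above about whether structural information really propagates through the entire graph.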
3. For experiments, I have 2 comments - (i) addition of performance on word similarity and sentence translation tasks as in the MUSE paper (and others) would lend more credibility to the robustness and effectiveness of the framework. (ii) addition of morphologically rich languages like Finnish, Hebrew, etc and low-resource languages in the experiments would be good to have (minor point). | 3. For experiments, I have 2 comments - (i) addition of performance on word similarity and sentence translation tasks as in the MUSE paper (and others) would lend more credibility to the robustness and effectiveness of the framework. (ii) addition of morphologically rich languages like Finnish, Hebrew, etc and low-resource languages in the experiments would be good to have (minor point). |
ICLR_2021_1740 | ICLR_2021 | are in its clarity and the experimental part.
Strong points Novelty: The paper provides a novel approach for estimating the likelihood of p(class image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s->y (see weakness)
Weak points Experiments and Reproducibility: The experiments show some signal, but are not thorough enough: • shifted-MNIST: it is not clear why shift=0 is much better than shift ~ N(0, σ^2), since both cases incorporate a domain shift • It would be useful to show the performance of the model and baselines on test samples from the observational (in) distribution. • Missing details about the evaluation split for shifted-MNIST: Did the experiments use a validation set for hyper-param search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data? • It would be useful to provide an ablation study, since the approach has a lot of "moving parts". • It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST. • What were the ranges used for hyper-param search? What was the search protocol?
Clarity: • The parts describing the method are hard to follow; it will be useful to improve their clarity. • It will be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them. • What makes the VAE inference mappings (x->s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts. • What makes v distinctive of s? Is it because y only depends on s? • Does the approach use any information on the labels of the domain?
Correctness: I was not convinced about the causal relation s->y, i.e. that the semantic concept causes the label, independently of the image. I do agree that there is a semantic concept (e.g. s) that causes the image. But then, as explained by [Arjovsky 2019], the labelling process is caused by the image, i.e. s->image->y, and not as argued by the paper. The way I see it, it is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate on how the model would change if s->y is replaced by y_tx->s?
Other comments: • I suggest discussing [2,3,4], which learned similar stable mechanisms in images. • I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction" e.g. see [3,4] • The title may be confusing. OOD usually refers to anomaly-detection, while this paper relates to domain-generalization and domain-adaptation. • It will be useful to clarify that the approach doesn't use any external-semantic-knowledge. • Section 3.2 - I suggest to add a first sentence to introduce what this section is about. • About remark in page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces.
[1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness [2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models [3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness [4] Atzmon et al. 2020, A causal view of compositional zero-shot recognition
EDIT: Post rebuttal
I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking.
Additionally, I am concerned with one of the author's replies saying All methods achieve accuracy 1 ... on the training distribution, because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): Achieving perfect accuracy on the observational distribution usually means relying on the spurious correlations. And under domain-shift scenarios, this would hinder the performance on the shifted distribution. | • shifted-MNIST: it is not clear why shift=0 is much better than shift ~ N(0, σ^2), since both cases incorporate a domain shift • It would be useful to show the performance of the model and baselines on test samples from the observational (in) distribution. |
NIPS_2020_930 | NIPS_2020 | 1. The title is misleading and the authors might overclaim their contribution. Indeed, the stochastic problem in Eq.(1) is a special instance of nonconvex-concave minimax problems and equivalent to the nonconvex compositional optimization problem in Eq.(2). Solving such a problem is easier than the general case considered in [23, 34]; see also (Rafique, Arxiv 1810.02060) and (Thekumparampil, NeurIPS'19). In addition, the KKT points and approximate KKT points are also defined based on this special structure. 2. The literature review is not complete. The authors mainly focus on the algorithms for stochastic compositional optimization instead of stochastic nonconvex-concave minimax optimization. 3. The algorithm is not single-loop in general. To be more specific, Algorithm 1 needs to solve Eq.(9) at each loop. This is also a nonsmooth strongly convex problem in general and the solution does not have a closed form. To this end, what is the advantage of Algorithm 1 over prox-linear algorithms in the nonsmooth case? 4. Given the current stochastic problem in Eq.(1), I believe that the prox-linear subproblem can be reformulated using the conjugate function and becomes the same as the subproblem in Algorithm 1. That is to say, we can simply improve prox-linear algorithms for solving the stochastic problem in Eq.(1). This makes the motivation of Algorithm 1 unclear. 5. The proof techniques heavily depend on the biased hybrid estimators introduced in [29]. The current paper does not convince me that such an extension is nontrivial and has sufficient technical novelty. | 4. Given the current stochastic problem in Eq.(1), I believe that the prox-linear subproblem can be reformulated using the conjugate function and becomes the same as the subproblem in Algorithm 1. That is to say, we can simply improve prox-linear algorithms for solving the stochastic problem in Eq.(1). This makes the motivation of Algorithm 1 unclear. |
NIPS_2022_2373 | NIPS_2022 | weakness in He et al., and proposes a more invisible watermarking algorithm, making their method more appealing to the community. 2. Instead of using a heuristic search, the authors elegantly cast the watermark search issue into an optimization problem and provide rigorous proof. 3. The authors conduct comprehensive experiments to validate the efficacy of CATER in various settings, including an architectural mismatch between the victim and the imitation model and cross-domain imitation. 4. This work theoretically proves that CATER is resilient to statistical reverse-engineering, which is also verified by their experiments. In addition, they show that CATER can defend against ONION, an effective approach for backdoor removal.
Weakness: 1. The authors assume that all training data are from the API response, but what if the adversary only uses part of the API response? 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5.
The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. | 2. Figure 5 is hard to comprehend. I would like to see more details about the two baselines presented in Figure 5. The authors only study CATER for the English-centric datasets. However, as we know, the widespread text generation APIs are for translation, which supports multiple languages. Probably, the authors could extend CATER to other languages in the future. |
qYwdyvvvqQ | ICLR_2024 | 1. Figure 1 does not convey the main idea clearly and should be significantly improved.
2. The presentation of the proposed method in 3.2 is confusing and should be significantly improved. For example, it would be better to have a small roadmap at the beginning of 3.2 so that the readers know what each step is doing. Also, it would be better to break up the page-long paragraph into smaller paragraphs, and use each paragraph to explain a small part of the computation. Also, explain the intention of each equation and the reasons for each design choice.
3. The related work only discussed sparse efficient attentions and Reformer and SMYRF that related to the proposed idea. There are a lot more efficient attentions that have the property of “information flow throughout the entire input sequence” (which was one of the motivation for the proposed idea), such as low rank based attentions (Linformer https://arxiv.org/abs/2006.04768, Nystromformer https://arxiv.org/abs/2102.03902, Performer https://arxiv.org/abs/2009.14794, RFA https://arxiv.org/abs/2103.02143) or multi-resolution based attentions (H-Transformer https://arxiv.org/abs/2107.11906, MRA Attention https://arxiv.org/abs/2207.10284).
4. Missing discussion about Set Transformer (https://arxiv.org/abs/1810.00825) and other related works that also use summary tokens.
5. In the 4th paragraph of related work, the authors claim some baselines are unstable and the proposed method is stable, but the claim is not supported by any experiments.
6. Experiments are only performed on the LRA benchmark, which consists of a set of small datasets. The results might be difficult to generalize to larger scale experiments. It would be better to evaluate the methods on larger scale datasets, such as language modeling or at least ImageNet.
7. Given that the LRA benchmark is a small scale experiment, it would be better to run the experiment multiple times and calculate the error bars since the results could be very noisy. | 4. Missing discussion about Set Transformer (https://arxiv.org/abs/1810.00825) and other related works that also use summary tokens. |
NIPS_2021_275 | NIPS_2021 | weakness Originality
+ Novel setting. As far as I am aware, the paper proposes a novel setting - Few-shot Hypothesis Adaptation (FHA) - a combination of existing problems - Hypothesis Transfer Learning and the Few-Shot Domain Adaptation.
+/- Somewhat novel method. As far as I am aware, the paper also proposes a novel method - TOHAN - which is a minor adaptation of FADA [23] into the setting. The method architecture seems to be heavily inspired by FADA [23]. Unlike FADA, TOHAN generates and uses the generated intermediate domain instead of the original source domain. However, apart from that, it is not entirely clear what the technical differences are between these methods. Which line in Algorithm 1 would be different for FADA / S-FADA / T-FADA / ST-FADA?
- The relation between FHA and FSL is poorly explained, and FSL is poorly cited. I found the sentence in lines 88-89 particularly vague and poorly written. In addition, line 90 that compares TSN [37] and ProtoNets [29] ("which is relatively weaker than the former") brings little value to the paper. In my humble opinion, these methods were designed for two different problems (i.e. video action classification [37] and single image classification [29]), and it is inappropriate to compare them in this way. The authors might find the following works from FSL literature more related to their work: (Antreas et al., 2018), (Hariharan & Girshick., 2017), (Wang et al., 2018). Generally, these methods also generate/hallucinate samples from limited target-domain data and should be cited. Quality
+ Some good experiments. The paper performs a solid number of comparisons with representative baselines from HTL and FDA literature ([19] and [23], respectively) and basic fine-tuning baseline proposed by the authors.
+/- Work seems grounded in some theoretical work. However, I did not attempt to verify the proof, so I cannot comment on its correctness.
- Only marginal improvements over baselines, mostly within the error bar range. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant.
- There might be a possible fundamental flaw to the claim of full data privacy. The authors claim that the TOHAN method (and therefore also the FHA problem) "strictly" protects the privacy of the source domain by using the source hypothesis rather than the source data itself (lines 245-247). However, the claim that knowledge is "completely" inaccessible may be false. For instance, Ateniese et al. (2015) have shown that it is possible to extract certain types of information from pretrained classifiers and models. In a more recent privacy analysis of deep learning, Nasr et al. (2019) suggest that even well-trained models might leak a significant amount of information about the data they were trained on. Moreover, the proposed TOHAN relies on the leaked source-domain knowledge to generate appropriate source-domain data. Therefore, it seems to me that the claim that TOHAN is privacy-secure is completely false.
- Lack of sufficient evidence for no source domain features in intermediate data. The authors claim that "high-level, visual and useful features of source domain are rare in the generated intermediate data (Figure 6)" (lines 243-244); however, no empirical value is provided to support this claim (i.e. what do the authors mean by "rare" in this context? how rare is "rare"?). Figure 6a/b does not support this claim, as it shows only 4 intermediate-domain images and no examples of the original source-domain data. This is insufficient evidence to draw the conclusion in lines 243-244, and in fact, it points to the contrary. Clarity
- The paper is poorly written. The paper is not particularly easy to read. It contains many spelling and grammatical errors (see below for proposed corrections). There is a large number of acronyms (e.g. FHA, SHOT, HTL, ST+F etc.) and some confusing mathematical notation (e.g. D and X are each used to refer to different entities), which make this work more confusing to read. The notation in the figure is inconsistent with the main text (it uses X instead of D). Significance
- The paper presents a novel and interesting problem but could be flawed. This novel problem and method could have important consequences in the context of data privacy - however, to me, the idea seems fundamentally flawed (see my comments in the "Quality" subsection, or "Limitations And Societal Impact").
- The TOHAN improvements over baselines are mostly marginal. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant and the improvements are marginal.
Spelling, Grammar, and Other Possible Mistakes.
- Line 32, grammar: "face of generation" --> "face generation"
- Figure 2 caption, grammar: "which two data come from ... " --> "where the two data points come from ... "
- Line 128, wrong word: "dependents" --> "depends"
- Line 167, redundant word: "regarding to the ..." --> "regarding the ... "
- Line 210, redundant words: "which confuses D unable to distinguish between ..." --> "which confuses D between ... "
- Line 213, wrong word: "initial" --> "initialize" References
Ateniese et al., 2015, "Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers", International Journal of Security and Networks, Volume 10, Issue 3
Nasr et al., 2019, "Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning," 2019 IEEE Symposium on Security and Privacy
Antoniou et al., 2018, "Data Augmentation Generative Adversarial Networks", (ICLR 2018 Workshop)
Hariharan & Girshick., 2017, "Low-shot Visual Recognition by Shrinking and Hallucinating Features", (ICCV 2017)
Wang et al., 2018, "Low-shot learning from imaginary data" (CVPR 2018) POST-REBUTTAL
After a detailed discussion with the authors, I decided to increase my original rating from 3 to 7. The initial low rating was due to initially hidden assumptions and a poorly defined scope of data privacy which are central in the paper. These have been discussed and clarified by the authors during the rebuttal. The authors also addressed my concerns regarding the novelty and source-domain leakage into the intermediate domain. The authors have agreed to improve the clarity, literature review, dampen down on the privacy claims, and include additional experiments, and I am happy to increase the rating. Although the privacy claims are now not as strong as originally claimed (e.g. the method does not guard against source-information leakage, but rather shelters individual source data points from a possible data leakage), the paper still opens up an interesting area of research and presents a novel method that will likely attract attention in the community.
The authors claim that their method allows for full privacy of the source domain by relying on a well-trained source domain classifier. However, this seems to be based on the false assumption that the source domain classifier does not leak any private information. Recent studies (for example, Nasr et al. (2019) and Ateniese et al. (2015)) suggest that there might be a significant amount of source-domain information that pre-trained models leak. Moreover, TOHAN relies on leaking information from the source domain to generate realistic/compatible intermediate-domain data. This completely invalidates one of the main claims of the paper that private information is protected. | - Only marginal improvements over baselines, mostly within the error bar range. Although the authors claim the method performs better than the baselines, the error range is rather high, suggesting that the performance differences between some methods are not very significant. |
NIPS_2020_1897 | NIPS_2020 | - The problem setting of this paper is too simplified, where only a “linearized” self-attention layer is considered, with all non-linear activations, layer normalization and the softmax operation removed. However, given that the main purpose of the paper is to analyze the functionality of self-attention in terms of integrating inputs, these relaxations are not totally unreasonable. - The experiments are not sufficient. More empirical experiments or toy experiments (for the simplified self-attention model considered in the theoretical analysis) need to be done to show the validity of the model relaxations and the consistency of the theoretical analysis with empirical results, besides citing the result in Kaplan et al. 2020. - Although the paper is well organized, some parts are not well explained, especially the proof sketches for Theorem 1 and Theorem 2. | - The experiments are not sufficient. More empirical experiments or toy experiments (for the simplified self-attention model considered in the theoretical analysis) need to be done to show the validity of the model relaxations and the consistency of the theoretical analysis with empirical results, besides citing the result in Kaplan et al. 2020. |
ikX6D1oM1c | ICLR_2024 | - I found Sec 5.1 and 5.2 difficult to read and I think clarity can be improved. What confused me initially was that you suggest fixing $P^*(U|x, a)$ but then the $\sup$ in Eq. 5 is also over the distributions $p(u|x, A)$. Reading it further, the sup is only for $A \neq a$ but I think clarifying that you only fix for the treatment $a$ that enters into $Q$ would be useful. Maybe this is obvious, but it will still make it easier to understand what is being optimized over in the $\sup$.
- It would also be nice to have some intuition of the proof of Theorem 1. Also, the invertible function $f^*$ would depend on the fixed $P^*$. Do certain distributions $P^*$ make it easier to determine $f^*$? In practice, how should you determine which $P^*$ to fix? | - It would also be nice to have some intuition of the proof of Theorem 1. Also, the invertible function $f^*$ would depend on the fixed $P^*$. Do certain distributions $P^*$ make it easier to determine $f^*$? In practice, how should you determine which $P^*$ to fix? |
ICLR_2021_2929 | ICLR_2021 | Weakness: The major concern is the limited contribution of this work. 1. Using image-to-image translation to unify the representations across domains is an existing technique in domain adaptation, especially in segmentation tasks [1,2]. 2. The use of morphologic information in this paper is simply the combination of edge detection and segmentation, which are both employed as tools from existing benchmarks (in this paper the author used DeeplabV3 and DexiNed-f, employed as off-the-shelf tools for image pre-processing purposes as mentioned in section 4). 3. There should be more on how to use the morphologic segmentation across domains, and how morphologic segmentation should be conducted differently for different domains. Or is it exactly the same given any arbitrary domain? These questions are important given the task of domain adaptation. This paper didn’t provide insight into this but assumed morphologic segmentation will be invariant. 4. Results compared to other domain adaptation methods (especially generative methods) are missing. There is an obvious lack of evidence that the proposed method is superior.
In brief, the contribution of this paper is limited, and the results provided are not sufficient to support the method being effective. A reject.
[1] Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation [2] Image to Image Translation for Domain Adaptation | 3. There should be more on how to use the morphologic segmentation across domains, and how morphologic segmentation should be conducted differently for different domains. Or is it exactly the same given any arbitrary domain? These questions are important given the task of domain adaptation. This paper didn’t provide insight into this but assumed morphologic segmentation will be invariant. |
FbZSZEIkEU | ICLR_2025 | - The experimental reports are lacking in many details about the experimental methodology, making it difficult to be confident that the claims are robust.
- The explanations throughout the paper should be clearer to fully communicate the ideas and experiments of the authors.
- The S2 hacking hypothesis is quite vague and the author do not present any deep understanding that would explain the mechanisms by which certain attention heads pay extra attention to the S2 token.
- In the experiments on the DoubleIO and other prompt variations, it is unclear at which token positions paths are being ablated, as this is unspecified by the original circuit.
- The authors write: “Given the algorithm inferred from the IOI circuit, it is clear that the full model should completely fail on this task”. However, this is a misunderstanding of the original work. The IOI circuit was discovered using mean ablations that keep most of the prompt intact. Therefore, Wang et al. don’t expect it to generalize to different prompt formats.
- The authors write “In the base IOI circuit, the Induction, Duplicate Token, and Previous Token heads primarily attend to the S2 token” this is incorrect according to Section 3 of Wang et al., 2023. These heads are _active_ at the S2 token, but do not primarily attend to it.
- The authors write: "The proposed IOI circuit is shown to perform very well while still being faithful to the full model" In fact, the IOI circuit is known to have severe limitations, as shown in concurrent work by Miller et al. (2024) [[2]](https://arxiv.org/abs/2407.08734). Nitpicks:
- In Figure 2, it is not clear what Head 1, 2, 3 and 4 refer to.
- The paper should include Figure 2 from Wang et al. 2023 [[1]](https://arxiv.org/pdf/2211.00593#page=4.37) to make it easier to follow discussions about the circuit. | - The authors write “In the base IOI circuit, the Induction, Duplicate Token, and Previous Token heads primarily attend to the S2 token” this is incorrect according to Section 3 of Wang et al., 2023. These heads are _active_ at the S2 token, but do not primarily attend to it. |
pxclAomHat | ICLR_2025 | 1. The paper does not explicitly link the quality of the optimization landscape to convergence speed or generalization performance, which undermines its stated goal of theoretically explaining the performance of LoRA and GaLore. Many claims also lack this connection (e.g., lines 60-61, 289-291, and 301-304), making them appear weaker and less convincing.
2. The theoretical results in this paper are derived from analyses of MLPs, whereas LLMs, with their attention mechanisms, layer normalization, etc., are more complex. Since there is a gap between MLPs and LLMs, and the paper does not directly validate its theoretical results on LLMs (relying instead only on convergence and performance outcomes), the theoretical conclusions are less convincing.
3. The paper does not clearly motivate GaRare; it lacks evidence or justification for GaRare's advantages over GaLore based on theoretical analysis. Additionally, a more detailed algorithmic presentation is needed, particularly to clarify the process of recovering updated parameters from projected gradients, which would enhance understanding. | 3. The paper does not clearly motivate GaRare; it lacks evidence or justification for GaRare's advantages over GaLore based on theoretical analysis. Additionally, a more detailed algorithmic presentation is needed, particularly to clarify the process of recovering updated parameters from projected gradients, which would enhance understanding. |
ARR_2022_311_review | ARR_2022 | - The main weaknesses of the paper are the experiments, which is understandable for a short paper but I'd still expect it to be stronger. First, the setting is only the extremely low-resource regime, which is not the only case in which we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel like the proposed augmentation method has potential to be used on more NLP tasks, which was unfortunately not shown.
- The proposed mixup strategy is very simple (Equation 5), I wonder if the authors have tried other ways to interpolate the one-hot vector with the MLM smoothed vector.
- How does \lambda influence the performance?
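A generic sketch of the kind of label interpolation the two points above discuss; whether the paper's Equation 5 has exactly this form is an assumption, and lam below plays the role of \lambda.

```python
import numpy as np

def mix_labels(one_hot: np.ndarray, mlm_probs: np.ndarray, lam: float) -> np.ndarray:
    # Convex combination of the gold one-hot target and an MLM-smoothed
    # distribution; lam = 1 keeps the hard label, lam = 0 keeps the soft one.
    return lam * one_hot + (1.0 - lam) * mlm_probs

one_hot = np.array([0.0, 1.0, 0.0])
mlm_probs = np.array([0.2, 0.5, 0.3])           # hypothetical MLM output
print(mix_labels(one_hot, mlm_probs, lam=0.7))  # -> [0.06 0.85 0.09]
```

Alternatives worth asking about include geometric mixing followed by renormalization, or annealing lam over training.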
- How does the augmentation method compare to other baselines with more training data? | - The main weaknesses of the paper are the experiments, which is understandable for a short paper but I'd still expect it to be stronger. First, the setting is only the extremely low-resource regime, which is not the only case in which we want to use data augmentation in real-world applications. Also, sentence classification is an easier task. I feel like the proposed augmentation method has potential to be used on more NLP tasks, which was unfortunately not shown. |
NIPS_2017_486 | NIPS_2017 | 1. The paper is motivated by using natural language feedback just as humans would provide while teaching a child. However, in addition to natural language feedback, the proposed feedback network also uses three additional pieces of information: which phrase is incorrect, what is the correct phrase, and what is the type of the mistake. Using these additional pieces is more than just natural language feedback. So I would like the authors to be clearer about this in the introduction.
2. The improvements of the proposed model over the RL without feedback model are not so high (row3 vs. row4 in table 6), in fact a bit worse for BLEU-1. So, I would like the authors to verify if the improvements are statistically significant.
3. How much does the information about incorrect phrase / corrected phrase and the information about the type of the mistake help the feedback network? What is the performance without each of these two types of information and what is the performance with just the natural language feedback?
4. In the figure 1 caption, the paper mentions that in training the feedback network, along with the natural language feedback sentence, the phrase marked as incorrect by the annotator and the corrected phrase are also used. However, from equations 1-4, it is not clear where the information about the incorrect phrase and corrected phrase is used. Also L175 and L176 are not clear. What do the authors mean by "as an example"?
5. L216-217: What is the rationale behind using cross entropy for the first (P - floor(t/m)) phrases? What is the performance when using the reinforcement algorithm for all phrases?
6. L222: Why is the official test set of MSCOCO not used for reporting results?
7. FBN results (table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant?
8. Table 6: can authors please clarify why the MLEC accuracy using ROUGE-L is so low? Is that a typo?
9. Can authors discuss the failure cases of the proposed (RLF) network in order to guide future research?
10. Other errors/typos:
a. L190: complete -> completed
b. L201, "We use either ... feedback collection": incorrect phrasing
c. L218: multiply -> multiple
d. L235: drop "by"
Post-rebuttal comments:
I agree that proper evaluation is critical. Hence I would like the authors to verify that the baseline results [33] are comparable and the proposed model is adding on top of that.
So, I would like to change my rating to marginally below acceptance threshold. | 7. FBN results (table 5): can authors please throw light on why the performance degrades when using the additional information about missing/wrong/redundant? |
NIPS_2022_2813 | NIPS_2022 | 1. The proposed method is a two-stage optimization strategy, which makes it a bit difficult to balance the two optimization steps. Could it be trained end-to-end? 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. | 2. Although it is intuitive that including multiple local prompts helps, for different categories, the features and their positions are not the same. |
IksoHnq4rC | EMNLP_2023 | TL;DR I appreciate the efforts and observations / merits found by the authors. However, this paper poorly presents the methodology (both the details and its key advantage), and it's hard to validate the conclusion with so little hyperparameter analysis. I would love to see more detailed results, but I cannot recommend this version for acceptance as an EMNLP paper.
1 There are too many missing details when presenting the methodology: e.g., what will be the effect if I remove one or two losses presented by the authors? Though their motivations are clear, they do not validate the hypothesis clearly.
2 A lot of equations look like placeholders, such as equations (1, 2, 3, 5, 6).
3 Some of the pieces are simply using existing methods, such as equation (12), and the presentation of these methods is also vague (they can only be understood after checking the original paper).
4 The pipeline misses a lot of details. For example, how long does it take to pre-train each module? How will adding pre-training benefit the performance? How is the training of the discriminator and the main module scheduled? Not to mention the detailed design of the RNN network used.
5 Why do we need to focus on the four aspects? They are just listed there. Also, some of the results presentation does not seem to be thorough and valid. For example, in table 2, the Quora datasets have the highest perturbation ratio, but the downgraded performance is the least among the three. Is it really because the adversarial samples are effective instead of the task variance or dataset variance? Also, we didn’t see the attack performance of other comparison methods. And how is the test set generated? What is the size of the adversarial test set and why is that a good benchmark?
6 In table 4, it’s actually hard to say which is better, A^3 or A^2T, if you count the number of winners for each row and column.
7 In table 5, does the computation time also include the pre-training stage? If not, why? Can the pre-training stage serve as a unified step which is agnostic to the dataset and tasks?
8 I don’t quite understand the point of section 4.6, and its relationship to the effectiveness of A^3. The influence of \rho seems to be really obvious. I would rather be more interested in changing the six hyperparameters mentioned in line 444 and testing their effectiveness.
9 The related work section is also not very well-written. I couldn’t understand what the key difference and key advantage of A^3 are compared to the other methods. | 3 Some of the pieces are simply using existing methods, such as equation (12), and the presentation of these methods is also vague (they can only be understood after checking the original paper). |
fkvdewFFN6 | ICLR_2024 | 1. The overall framework seems quite similar to the Seldonian algorithm design of Thomas et al. (2019), e.g., see Fig. 1 of Thomas et al. (2019)). Although it is true that Thomas et al. (2019) only considered fair classification experiments, as mentioned in this paper's related works, the proposed FRG also has an objective function related to the expressiveness of the representation, and some of the details even match; for instance, the discussions on "$1 - \delta$ confidence upper bound" on pg. 4 are quite similar to the caption of Fig. 1 of Thomas et al. (2019). Then, the question boils down to what is the novel contribution of this work, and my current understanding is that this is a simple application of Thomas et al. (2019) to fair representation learning. Of course, there are theoretical analysis, practical considerations and good experimental results that are specific to fair representation learning, but I believe that (as I will elaborate below) there are some problems that need to be addressed. Lastly, I believe that Thomas et al. (2019) should be given much more credit than the current draft.
2. Although the paper focuses on the high-probability fair representation construction (which should be backed theoretically in a solid way, IMHO), there are too many components (for "practicality") that are theoretically unjustified. There are three such main components: doubling the width of the confidence interval to "avoid" overfitting, introducing the hyperparameters $\gamma$ and $v$ for upper bounding $\Delta_{DP}$, and approximating the candidate optimization.
3. Also, the theoretical discussions can use some improvements. Although directly related to fair representation, the current theorems follow directly from the algorithm design itself and the well-known relationship of mutual information to $\Delta_{DP}$. For instance, I was expecting some sample complexity-type results for not returning NSF, e.g., given confidence levels, what is the sufficient amount of training data points that would not return NSF.
4. Lastly, if the authors meant for this paper to be a practical paper, then it should be clearly positioned that way. For instance, the paper should allocate much more space to the experimental verifications and do more experiments. Right now, the experiments are only done for two datasets, both of which consider binary sensitive attributes. In order to show the proposed FRG's versatility, the paper should do a more thorough experimental evaluation of various datasets of different characteristics with multiple group and/or nonbinary sensitive attributes, trade-off (Pareto front) between fairness and performance (or any of such kind), even maybe controlled synthetic experiments.
**Summary**. Although the framework is simple and has promising experiments, I believe that there is still much to be done. In its current form, the paper's contribution seems to be incremental and not clear. | 3. Also, the theoretical discussions can use some improvements. Although directly related to fair representation, the current theorems follow directly from the algorithm design itself and the well-known relationship of mutual information to $\Delta_{DP}$. For instance, I was expecting some sample complexity-type results for not returning NSF, e.g., given confidence levels, what is the sufficient amount of training data points that would not return NSF. |
NIPS_2017_502 | NIPS_2017 | [Weakness]
- This paper is poorly written.
- The motivation, the contribution, and the intuition of this work can barely be understood based on the introduction.
- Sharing the style of citations and bullet items is confusing.
- Representing a translation vector with the notation $t^{i} = [x^{i}, y^{i}, z^{i}]^{T}$ is usually preferred.
- The experimental results are not convincing.
- The descriptions of baseline models are unclear.
- Comparing the performance of the methods which directly optimize the mesh parameters, rotation, translation, and focal length according to the metric provided (projection error) doesn't make sense, since the optimization objective lies in a different domain from the performance measurement.
- Comparing the performance of the model only pre-trained on synthetic data is unfair; instead, demonstrating that the proposed three projection errors are important is more preferred. In other words, providing the performance of the models pre-trained on synthetic data but fine-tuned on real-world datasets with different losses is necessary.
- The reason for claiming that it is a supervised learning framework is unclear. In my opinion, the supervision signals are still labeled. [Reproducibility]
The proposed framework is very simple and well explained with sufficient description of network parameters and optimization details. I believe it's trivial to reproduce the results. [Overall]
In terms of the proposed framework, this paper only shows the improvement gained by fine-tuning the model based on the proposed losses defined by the reprojection errors of key points, optical flow, and foreground-background segmentation.
Taking into account that this work does show that fine-tuning the model pre-trained on synthetic datasets on real-world video clips notably improves the performance, it's still a convincing article.
In sum, as far as I am concerned this work makes a contribution but is insufficient. | - Comparing the performance of the model only pre-trained on synthetic data is unfair; instead, demonstrating that the proposed three projection errors are important is more preferred. In other words, providing the performance of the models pre-trained on synthetic data but fine-tuned on real-world datasets with different losses is necessary. |
ICLR_2022_1887 | ICLR_2022 | W1: The proposed method is a combination of existing loss. The novelty of technical contribution is not very strong.
W2: The proposed hybrid loss is argued that it is beneficial as the Proxy-NCA loss will promotes learning new knowledge better (first paragraph in Sec. 3.4), rather than less catastrophic forgetting. But the empirical results show that the proposed method exhibits much less forgetting than the prior arts (Table. 2). The argument and the empirical results are not well aligned.
Also, as the proposed method seems promoting learning new knowledge, it is suggested to empirically validate the benefit of the proposed approach by a measure to evaluate the ability to learn new knowledge (e.g., intransigence (Chaudhry et al., 2018)).
W3: Missing important comparison to Ahn et al.'s method in Table 3 (and corresponding section, titled "comparison with Logits Bias Solutions for Conventional CIL setting").
W4: Missing analyses of ablated models (Table 4). The proposed hybrid loss exhibits meaningful empirical gains only in CIFAR100 (and marginal gain in CIFAR10), comparing "MS loss with NCM (Gen)" and "Hybrid loss with NCM (Gen)". But there is no descriptive analysis for it.
W5: Lack of details of Smooth datasets in Sec. 4.3.
W6: Missing some citation (or comparison) using logit bias correction in addition to Wu et al., 2019 and Anh et al., 2020
Kang et al., 2020: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9133417
Mittal et al., 2021: https://openaccess.thecvf.com/content/CVPR2021W/CLVision/papers/Mittal_Essentials_for_Class_Incremental_Learning_CVPRW_2021_paper.pdf
W7: Unclear arguments or arguments lack of supporting facts
4th para in Sec.1
'It should be noticed that in online CIL setting the data is seen only once, not fully trained, so it is analogous to the low data regime in which the generative classifier is preferable.' Why?
5th line of 2nd para in Sec. 3.1.3
'This problem becomes more severe as the the number of classes increases.'
Lack of supporting facts
W8: Some mistakes in text (see details in notes below) and unclear presentations Note
Mistakes in text
End of 1st para in Sec.1: intelligence agents -> intelligent agents
3th line of 1 para in Sec. 3.2: we can inference with -> we can conduct inference with
1st line of 1st para in Sec. 3.3
we choose MS loss as training... -> we choose the MS loss as a training...
MS loss is a state-of... -> MS loss is the state-of...
6th line of Proposition 1: 'up to an additive constant and L' -> 'up to an additive constant c and L'
Improvement ideas in presentations
Fig. 1: texts in legends and axis labels should be larger
At the beginning of page 6: Proposition (1) -> Proposition 1. --> (1) is confused with Equation 1.
Captions and legend's font should be larger (similar to text size) in Fig. 2 and 3. | 1: texts in legends and axis labels should be larger At the beginning of page 6: Proposition (1) -> Proposition 1. --> (1) is confused with Equation 1. Captions and legend's font should be larger (similar to text size) in Fig. 2 and 3. |
ICLR_2021_457 | ICLR_2021 | My main concern is that it is not completely clear to me how the authors suggest using the dataset for developing AI that is more ethical. I can clearly understand that one can use it to train an auxiliary model that will test/verify/give value for RL etc. I can also see that using it to fine tune language models and test them as done in the paper, can give an idea of how the language representation is aligned with or represents well ethical concepts. But it seems that the authors are trying to claim something broader when they say ““By defining and benchmarking a model’s understanding of basic concepts in ETHICS…” and “To do well on the ETHICS dataset, models must know about the morally relevant factors emphasized by each of these ethical systems”. It sounds as if they claim that given a model one can benchmark it on the dataset. If that is the case, they should explain how (for example say I develop a model that filters CVs and I want to see if it is fair, how can I use the dataset to test that model?). If not, I would suggest being clearer about the way the dataset can be used.
In addition, I personally do not like using language such as “With the ETHICS dataset, we find that current language models have a promising but incomplete understanding of basic ethical knowledge.” or “By defining and benchmarking a model’s understanding of basic concepts in ETHICS, we enable future research necessary for ethical AI”. I think that even if a model can perform well on the ETHICS dataset, it is far from clear that it has an understanding of ethical concepts. In my mind, it is a leap of faith to conclude ethical understanding from what is essentially learning a classification task. I would like to see the authors make more precise claims in that respect.
Recommendation: I vote for accepting this paper; in its current state it is marginally above the threshold, but provided some clarifications are made, I would find this a clear accept. I think the area of ethical AI is important, releasing a well-constructed dataset is an important step forward, and overall this paper should be of interest to the ICLR community.
Questions and minor comments: 1. There are missing details about the division into train and test sets: the numbers, as well as how the division was made (simply random? Any other considerations?). These details should be added. 2. In the Impartiality section there is a missing reference to Fig 2 – it is given only later, so one does not see the relevant examples.
Post-rebuttal comments: My concerns are resolved. I have changed my vote to acceptance (7). | 1. There are missing details about the division into train and test sets: the numbers, as well as how the division was made (simply random? Any other considerations?). These details should be added. |
NIPS_2021_40 | NIPS_2021 | /Questions:
I only have minor suggestions:
1.) In the discussion, it may be worth including a brief discussion on the empirical motivation for a time-varying \hat{Q}_t and S_t, as opposed to a fixed one as in Section 4.2. For example, what is the effect on the volatility of \alpha_t and also on the average lengths of the predictive intervals when we let \hat{Q}_t and S_t vary with time?
2.) I found the definition of the quantile a little confusing, an extra pair of brackets around the term ( \frac{1}{|D|} \sum_{(X_r,Y_r) \in D} \mathbf{1}\{S(X_r,Y_r) \le s\} ) might help, or maybe defining the bracketed term separately if space allows.
3.) I think there are typos in Lines 93, 136, 181 (and maybe in the Appendix too): should it be \hat{Q}_t(1 - \alpha_t) instead? ##################################################################### Overall:
This is a very interesting extension to conformal prediction that no longer relies on exchangeability but is still general, which will hopefully lead to future work that guarantees coverage under weak assumptions. I believe the generality also makes this method useful in practice.
The authors have described the limitations of their theory, e.g. having a fixed \hat{Q} with time. | 1.) In the discussion, it may be worth including a brief discussion on the empirical motivation for a time-varying \hat{Q}_t and S_t, as opposed to a fixed one as in Section 4.2. For example, what is the effect on the volatility of \alpha_t and also on the average lengths of the predictive intervals when we let \hat{Q}_t and S_t vary with time? |
NIPS_2020_1623 | NIPS_2020 | - If I understand correctly, the method requires maintaining a probability vector for each data point. This is not an issue for small data sets with few classes, but can become a problem at ImageNet scale. I did not find any comment regarding this issue in the main paper or in the supplement. Could the authors please elaborate on this? - From Table 2 b) it seems that for 40% label noise on CIFAR10 the method is reasonably robust to the hyper parameter values. Does this observation transfer to other corruption percentages and data sets? - Additional experiments on larger data sets would be nice (but I understand that compute might be an issue). --- Thanks for the author response. I still think maintaining the probabilities might become an issue, in particular at large batch size, but I don't think this aspect is critical. Generally, the response addressed my concerns well. | - Additional experiments on larger data sets would be nice (but I understand that compute might be an issue). --- Thanks for the author response. I still think maintaining the probabilities might become an issue, in particular at large batch size, but I don't think this aspect is critical. Generally, the response addressed my concerns well. |
NIPS_2019_653 | NIPS_2019 | of the method. Clarity: The paper has been written in a manner that is straightforward to read and follow. Significance: There are two factors which dent the significance of this work. 1. The work uses only binary features. Real world data is usually a mix of binary, real and categorical features. It is not clear if the method is applicable to real and categorical features too. 2. The method does not seem to be scalable, unless a distributed version of it is developed. It's not reasonable to expect a single instance can hold all the training data that the real world datasets ususally contain. | 1. The work uses only binary features. Real world data is usually a mix of binary, real and categorical features. It is not clear if the method is applicable to real and categorical features too. |
NIPS_2020_1335 | NIPS_2020 | Given how strong the first four sections (five pages) of the paper were, I was relatively disappointed in the experiments, which were somewhat light. Specifically: 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the experiments presented, learning a *uniform* state-action-independent weighting would have sufficed. Moreover, since learning a state-action-independent weighting is much simpler (i.e. it is a single scalar), it may even outperform the authors' methods for the current experiments. Based on this, I would like to suggest the following: 1a) Could the authors provide visualizations of the state-action variation of their learnt weightings? They plot the average weight in some cases (Fig 1 and 3), but given Cartpole has such a small state-action space, it should be possible to visualize the variation. The specific question here is: do the weights vary much at all in these cases? 1b) Could the authors include a baseline of learnt state-action-*independent* weights? In other words, this model has a single parameter, replacing z_phi(s,a) with a single scalar z. This should be pretty easy to implement. The authors could take any (or all) of their existing gradient approximators and simply average them across all (s,a) in a batch to get the gradient w.r.t. z. 1c) Could the authors include an additional experiment that specifically benefits from learning state-action-*dependent* (so non-uniform) weights? Here is a simple example for Cartpole: the shaping reward f(s,a) is helpful for half the state space and unhelpful for the other half. The "halves" could be whether the pole orientation is in the left or right half. The helpful reward could be that from Section 5.1 while the unhelpful reward could be that from the first adaptability test in Section 5.3. 2) To me, the true power of the author's approach is not in learning to ignore bad rewards (just turn them off!) but to intelligently incorporate sort-of-useful-but-not-perfect rewards. This way a researcher can quickly hand design an ok shaping reward but then let the authors' method transform it into a good one. Thus, I was surprised the experiments focussed primarily on ignoring obviously bad rewards and upweighting obviously good rewards. In particular, the MuJoCo experiments would be more compelling if they included more than just a single unhelpful shaping reward. I think the authors could really demonstrate the usefulness of their method there by doing the following: hand design a roughly ok shaping reward for each task. For example, the torso velocity or head height off the ground for Humanoid-v2. Then apply the authors' method and show that it outperforms naive use of this shaping reward. 3) Although the authors discussed learning a shaping reward *from scratch* in the related work section, I was surprised that they did not included this as a baseline. One would like to see that their method, when provided with a decent shaping reward to start, can learn faster by leveraging this hand-crafted knowledge. Fortunately, it seems to me again very easy to implement a baseline like this within the author's framework: simply set f(s,a)=1 and use the authors' methods (perhaps also initializing z_phi(s,a)=0). 
| 1) While the authors' methods allow for learning a state-action-dependent weighting of the shaping rewards, it seemed to me possible that in all of the experiments presented, learning a *uniform* state-action-independent weighting would have sufficed. Moreover, since learning a state-action-independent weighting is much simpler (i.e. it is a single scalar), it may even outperform the authors' methods for the current experiments. Based on this, I would like to suggest the following: 1a) Could the authors provide visualizations of the state-action variation of their learnt weightings? They plot the average weight in some cases (Fig 1 and |
NIPS_2020_897 | NIPS_2020 | 1. Not clear how this method can be applied outside of fully cooperative settings, as the authors claim. The authors should justify this claim theoretically or empirically, or else remove it. 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with respect to exploration and coordination (such as https://arxiv.org/abs/1806.10071, https://arxiv.org/abs/1812.07019). 3. The analysis is somewhat "circumstantial", need more detailed experiments to be a convincing argument in this section. For example the claim in lines 235 - 236 seems to require further evidence to be completely convincing. 4. The link with self-play could be more clearly drawn out. As far as I can tell, the advantage of this over self-play is precisely the different initialization of the separate agents. It is surprising and important that this has such a significant effect, and could potentially spur a meta-learning investigation into optimal initialization for SEAC in future work. | 2. Missing some citations to set this in context of other MARL work e.g. recent papers on self-play and population-play with respect to exploration and coordination (such as https://arxiv.org/abs/1806.10071, https://arxiv.org/abs/1812.07019). |
ACL_2017_108_review | ACL_2017 | Clarification is needed in several places.
1. In section 3, in addition to the description of the previous model, MH, you need to point out the issues of MH which motivate you to propose a new model.
2. In section 4, I don't see the reason why separators are introduced. What additional info do they convey beyond T/I/O?
3. section 5.1 does not seem to provide useful info regarding why the new model is superior.
4. The discussion in section 5.2 is so abstract that I don't get the insights into why the new model is better than MH. Can you provide examples of spurious structures? - General Discussion: The paper presents a new model for detecting overlapping entities in text. The new model improves the previous state-of-the-art, MH, in the experiments on a few benchmark datasets. But it is not clear why and how the new model works better. | 4. The discussion in section 5.2 is so abstract that I don't get the insights into why the new model is better than MH. Can you provide examples of spurious structures? |
NIPS_2017_143 | NIPS_2017 | For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adversarial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on majority) kind of makes sense. But I imagine that such losses already have been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "pershaps even in most [...] practical scenarios" predicting accurately on the majority is most important. I disagree: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may they have large errors (imagine a self-driving car that significantly overestimates the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet. REMARKS:
What's "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x,y. | - Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player). |
ARR_2022_67_review | ARR_2022 | The main weakness of this paper is that it is not sufficiently grounded in the sociolinguistic side of bias measurement, and therefore makes some significant claims and design decisions that are not entirely appropriate.
The proposed task and the experimental design are predicated on the assumption that word senses are semantically disjoint with respect to social biases, but this is not really true (which is actually nodded to in the paper's discussion section). References to nationality and language can often be used metonymically to imply one another (particularly for voicing negative opinions), and colour and race are most certainly strongly correlated in visual design and storytelling (eg the use of dark colours, especially black, for evil in Western European storytelling, which is less common in African stories). Occupations and associated actions, while not used metonymically, may still be correlated: it likely carries higher surprisal to refer to a woman "engineering a solution" than a man, for example. (It is worth noting that the maximum pairwise similarity approach used in WAT calculation runs counter to this assumption of disjoint senses.)
There is also an important distinction between observability of a bias (e.g., through surveying/prompting native speakers) and measuring it using computational tools. Social biases may be observable without being measured by current approaches. These nuances are important to discuss throughout the paper: social bias assessment is not a computational task alone, but must incorporate human judgment and be framed through specific human viewpoints with acknowledged limitations.
There are several further issues with the precision of the language used in the paper that are important to address when discussing a sensitive topic such as social bias measurement.
- The term "bias" itself needs some context. Biases may be positive or neutral (eg preferring to complete "they sat at the computer to write ___" with "code" vs "a book" when trained on different data), as well as negative. As such, it is not clear what it means to have an "unbiased" model, or when that is desirable. A clearer definition of what kinds of bias are being assessed, what their impact is, and what mitigation of these biases might look like will help set the context better.
- The term "stereotype" is not used correctly. A stereotype is a commonly-held association between one group and some attribute or behavior; although most stereotypes involve negative attributes or disfavored behaviors, a stereotype is not inherently negative. The labels of "stereotype/anti-stereotype" are therefore not appropriate; positive/negative would be better. ( An anti-stereotype would be something that contradicts the stereotypical attribute or behavior.) Stereotypes are also socially-derived based on common beliefs/associations; for example, "Japanese people are stupid" is not a commonly-held belief, so this is not a stereotype.
- The terms "pleasant" and "unpleasant" are used, but are not clearly defined and feel subjective enough to mask the import of the difference being discussed. " Positive" and "negative" are the standard words used to discuss emotional valence of this type; these should either be used or the use of alternatives should be justified and clearly defined.
- Race and ethnicity are distinct constructs; the evaluation described in Section 4.2 is comparing race senses with colour senses, not ethnicity. ( Ethnicity is related more to heritage and origin, while race is a social group that one can be assigned to by others or identify with voluntarily.)
- Referring to nationality groups as "disadvantaged groups" (Line 286) doesn't quite work; while the racial identities that nationalities are correlated with may be disadvantaged, non-US nationalities themselves are not disadvantaged. ( Eg Scandinavian nationalities are very unlikely to be considered disadvantaged.)
It is not clear what the impact could be of measuring bias in sense embeddings or of debiasing sense embeddings. These models are much more rarely used than word embeddings, and in much more specialized settings.
Other issues with the presentation: - Section 3 is difficult to follow. It is not clear how (X and Y) and (A and B) are related, or what a high or low score on the s and w functions means.
- Figures 2 and 3 are unreadable.
- P-value calculation is discussed in Section 3, but no p-values are reported in the experimental results.
- The inclusion of the verb sense of "carpenter" in Section 4.3 is a little questionable; this is a very rarely-used sense. The other occupation/action words are commonly used and could reasonably be measured, but very few texts (for embedding training) will use carpenter as a verb, and many native speakers will not recognize it as correct.
- There is growing consensus to capitalize racial identifiers (certainly "Black", increasingly "White").
- Lines 442-445: This actually does not accurately simulate the word-level embedding case, as it assigns equal weight to all senses. In practice, word senses are more likely to be Zipf-distributed, so a word-level embedding model will be exposed to much more training data for some senses than for others.
- It is not clear how the categories in WEAT (Table 2) are associated with the social biases this paper is framed around.
- It would be helpful to mark the extended WEAT/WAT contributed by this paper using a modified label (eg WEAT*), in Table 2 and in text.
- The Ethical Statement is well written, but should be extended with some more concrete discussion of challenges not addressed in the dataset (eg gender beyond the binary, races other than Black, other sources of social bias).
- It would be helpful to have a listing of nationalities/racial identities/occupations included in the dataset (along with the adjectives used) as part of the paper, such as in supplementary tables.
- It would be interesting to see what human behavior is for these prompts/comparisons... - Dev et al 2019 and Nadeem et al 2020 references are missing publication information.
Typos - Figure 1, graph titles "embembeddings" - Line 051: extra "is" - Line 122: missing "are" - Line 603: "bises" -> "biases" - Line 610: missing "being" - Spaces not needed between section symbol and number - Spaces after non-terminal periods ("vs.", "cf.") should be escaped to avoid spacing issues | - It would be interesting to see what human behavior is for these prompts/comparisons... |
ACL_2017_365_review | ACL_2017 | except for the qualitative analysis, the paper may belong better to the applications area, since the models are not particularly new but the application itself is most of its novelty - General Discussion: This paper presents a "sequence-to-sequence" model with attention mechanisms and an auxiliary phonetic prediction task to tackle historical text normalization. None of the used models or techniques are new by themselves, but they seem to have never been used in this problem before, showing and improvement over the state-of-the-art. Most of the paper seem like a better fit for the applications track, except for the final analysis where the authors link attention with multi-task learning, claiming that the two produce similar effects. The hypothesis is intriguing, and it's supported with a wealth of evidence, at least for the presented task.
I do have some questions on this analysis though: 1) In Section 5.1, aren't you assuming that the hidden layer spaces of the two models are aligned? Is it safe to do so?
2) In Section 5.2, I don't get what you mean by the errors that each of the models resolves independently of the other. Is this like a symmetric difference? That is, if we combine the two models, are these errors not resolved anymore?
On a different vein, 3) Why is there no comparison with Azawi's model?
======== After reading the author's response.
I'm feeling more concerned than I was before about your claims of alignment in the hidden space of the two models. If accepted, I would strongly encourage the authors to make clear in the paper the discussion you have shared with us for why you think that alignment holds in practice. | 1) In Section 5.1, aren't you assuming that the hidden layer spaces of the two models are aligned? Is it safe to do so? |
ICLR_2021_1208 | ICLR_2021 | - It is unclear to me what scientific insight we get from this model and formalism over the prior task-optimized approaches. For instance, this model (as formulated in Section 2.3) is not shown to be a prototype approximation to these non-linear RNN models that exhibit emergent behavior. So it is not clear that your work provides any further “explanation” as to how these nonlinear models attain such solutions purely through optimization on a task. - Furthermore, I am not really sure how “emergent” the hexagonal grid patterns really are in this model. Given partitioning of the generator matrices into blocks in Section 2.5, it almost seems by construction we would get hexagonal grid patterns and it would be very hard for the model to learn anything different.
While the ideas of this paper are mathematically elegant, I do not see the added utility these models provide over prior approaches nor how they provide a deeper explanation of the surprising emergent grid firing patterns observed in task-optimized nonlinear RNNs. For these reasons, I recommend rejection. | - It is unclear to me what scientific insight we get from this model and formalism over the prior task-optimized approaches. For instance, this model (as formulated in Section 2.3) is not shown to be a prototype approximation to these non-linear RNN models that exhibit emergent behavior. So it is not clear that your work provides any further “explanation” as to how these nonlinear models attain such solutions purely through optimization on a task. |
ICLR_2023_59 | ICLR_2023 | The following three points are drawbacks 1 There is no discussion or theoretical analysis of why the proposed method is effective. Therefore, I cannot eliminate the suspicion that the effectiveness of the proposed method over the prior methods depends on the superiority of the hyperparameter tuning. I think it is good to show what kind of "prior physics knowledge in nonlinear dynamics" can be implemented by the proposed algorithm, and provide numerical experiments to demonstrate its effectiveness.
2 The stability of the estimation results with respect to hyperparameter settings is unclear. Please explain the optimization method for the hyperparameters and the stability of the estimation results with respect to variations in the hyperparameters.
3 It is unclear whether the hyperparameter tuning of the prior method is appropriate. This is related to the point above. In particular, we would like to know why the results in Table 1 do not match the results of similar experiments presented in the paper proposing the prior method (Table 1 in [Mundhenk et al., 2021]). | 3 It is unclear whether the hyperparameter tuning of the prior method is appropriate. This is related to the point above. In particular, we would like to know why the results in Table 1 do not match the results of similar experiments presented in the paper proposing the prior method (Table 1 in [Mundhenk et al., 2021]). |
ARR_2022_121_review | ARR_2022 | 1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader that this is where prior research has been done, as well as what differentiates the current work with earlier work. A clear separation between the "Introduction" and "Related Work" sections would certainly improve the readability of the paper.
2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (Eg. Taghipour and Ng (2016)).
Comments: 1. Please keep a separate "Related Work" section. Currently, the "Introduction" section of the paper reads as 2-3 paragraphs of introduction, followed by 3 bullet points of related work and again a lot of introduction. I would suggest that you shift those 3 bullet points ("Traditional AES", "Deep Neural AES" and "Pre-training AES") to the Related Work section.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
3. While the out of domain experiment is pre-trained on other prompts, it is still fine-tuned during training on the target prompt essays. Typos: 1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019).
Missing References: 1. Panitan Muangkammuen and Fumiyo Fukumoto. " Multi-task Learning for Automated Essay Scoring with Sentiment Analysis". 2020. In Proceedings of the AACL-IJCNLP 2020 Student Research Workshop.
2. Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, Pushpak Bhattacharyya. 2020. Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour. In Proceedings of the 2020 AACL-IJCNLP Main Conference. | 1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019). Missing References: |
ACL_2017_71_review | ACL_2017 | -The explanation of methods in some paragraphs is too detailed and there is no mention of other work and it is repeated in the corresponding method sections, the authors committed to address this issue in the final version.
-README file for the dataset [Authors committed to add README file] - General Discussion: - Section 2.2 mentions examples of DBpedia properties that were used as features. Do the authors mean that all the properties have been used, or is there a subset? If the latter, please list them. In the authors' response, the authors explain this point in more detail, and I strongly believe that it is crucial to list all the features in detail in the final version for clarity and replicability of the paper.
- In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3: KNs in the source language or in English? (since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure.
- Based on section 2.4 it seems that topical relatedness implies that some features are domain dependent. It would be helpful to see how much domain dependent features affect the performance. In the final version, the authors will add the performance results for the above mentioned features, as mentioned in their response.
- In related work, the authors make a strong connection to Sil and Florian's work, where they emphasize the supervised vs. unsupervised difference. The proposed approach is still supervised in the sense of training; however, the generation of training data doesn't involve human interference | - In section 2.3 the authors use Lample et al.'s Bi-LSTM-CRF model; it might be beneficial to add that the input is word embeddings (similarly to Lample et al.). - Figure 3: KNs in the source language or in English? (since the mentions have been translated to English). In the authors' response, the authors stated that they will correct the figure. |
NIPS_2021_1027 | NIPS_2021 | weakness of this paper is that the results are asymptotic (i.e., only holds when the number of samples approaches infinite). It is unclear how realistic are the phenomena described in this paper. For example, the convergence rate in Theorem 6 depends polynomially on the term 1/c_\mu, which is the minimum data density over all state-action pairs. For MDPs with large state/action space, the term could be much larger than the number of samples. In other words, it’s unclear whether the dominating term is still characterized by Eq. (6) in real-world OPE tasks.
In addition, this paper doesn’t directly tackle the difficulty of linear OPE tasks. On the one hand, the assumptions such as 1/c_\mu-dependence or upper bound of |F^#_\gamma|_2 exclude some natural hard instances. Are these assumptions an artifact of the analysis, or necessary for the result? On the other hand, the sample complexity of tile-coding estimators is exponential in the dimension d, which is not very realistic.
Minor issues which may cause unnecessary confusion: - The notation f* in Definition 8 and 9 is used to denote different functions. - Does Definition 3 depend on the policy \pi being evaluated (because the term (P\phi)(s,a) depends on \pi)? | - Does Definition 3 depend on the policy \pi being evaluated (because the term (P\phi)(s,a) depends on \pi)? |
NIPS_2020_1592 | NIPS_2020 | Major concerns: 1. While it is impressive that this work gets slightly better results than MLE, there are more hyper-parameters to tune, including mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, MLE pretraining (according to appendix). I find it disappointing that so many tricks are needed. If you get rid of pretraining/initialization from T5/BART, would this method work? 2. This work requires MLE pretraining, while prior work "Training Language GANs from Scratch" does not. 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum. 4. This work is claiming that it is the first time that language GANs outperform MLE, while prior works like seqGAN or scratchGAN all claim to be better than MLE. Is this argument based on the tradeoff between BLEU and self-BLEU from "language GANs falling short"? If so, Figure 2 is not making a fair comparison since this work uses T5/BART which is trained on external data, while previous works do not. What if you only use in-domain data? Would this still outperform MLE? Minor concerns: 5. This work only uses answer generation and summarization to evaluate the proposed method. While these are indeed conditional generation tasks, they are close to "open domain" generation rather than "close domain" generation such as machine translation. I think this work would be more convincing if it is also evaluated in machine translation which exhibits much lower uncertainties per word. 6. The discriminator accuracy of ~70% looks low to me, compared to "Real or Fake? Learning to Discriminate Machine from Human Generated Text" which achieves almost 90% accuracy. I wonder if the discriminator was not initialized with a pretrained LM, or is that because the discriminator used is too small? ===post-rebuttal=== The added scratch GAN+pretraining (and coldGAN-pretraining) experiments are fairer, but scratch GAN does not need MLE pretraining while this work does, and we know that MLE pretraining makes a big difference, so I am still not very convinced. My main concern is the existence of so many hyper-parameters/tricks: mixture weight, proposal temperature, nucleus cutoff, importance weight clipping, and MLE pretraining. I think some sensitivity analysis similar to scratch GAN's would be very helpful. In addition, rebuttal Figure 2 is weird: when generating only one word, why would cold GAN already outperform MLE by 10%? To me, this seems to imply that improvement might be due to hyper-parameter tuning. | 3. For evaluation, since the claim of this paper is to reduce exposure bias, training a discriminator on generations from the learned model is needed to confirm if it is the case, in a way similar to Figure 1. Note that it is different from Figure 4, since during training the discriminator is co-adapting with the generator, and it might get stuck at a local optimum. |
NIPS_2019_1049 | NIPS_2019 | - While the types of interventions included in the paper are reasonable computationally, it would be important to think about whether they are practical and safe for querying in the real world. - The assumption of disentangled factors seems to be a strong one given factors are often dependent in the real world. The authors do include a way to disentangle observations though, which helps to address this limitation. Originality: The problem of causal misidentification is novel and interesting. First, identifying this phenomenon as an issue in imitation learning settings is an important step towards improved robustness in learned policies. Second, the authors provide a convincing solution as one way to address distributional shift by discovering the causal model underlying expert action behaviors. Quality: The quality of the work is high. Many details are not included in the main paper, but the appendices help to clarify some of the confusion. The authors evaluated the approach on multiple domains with several baselines. It was particularly helpful to see the motivating domains early on with an explanation of how the problem exists in these domains. This motivated the solution and experiments at the end. Clarity: The work was very well-written, but many parts of the paper relied on pointers to the appendices so it was necessary to go through them to understand the full details. There was a typo on page 3: Z_t â Z^t. Significance: The problem and approach can be of significant value to the community. Many current learning systems fail to identify important features relevant for a task due to limited data and due to the training environment not matching the real world. Since there will almost always be a gap between training and testing, developing approaches that learn the correct causal relationships between variables can be an important step towards building more robust models. Other comments: - What if the factors in the state are assumed to be disentangled but are not? What will the approach do/in what cases will it fail? - It seems unrealistic to query for expert actions at arbitrary states. One reason is because states might be dangerous, as the authors point out. But even if states are not dangerous, parachuting to a particular state would be hard practically. The expert could instead be simply presented a state and asked what they would do hypothetically (assuming the state representations of the imitator and expert match, which may not hold), but it could be challenging for an expert to hypothesize what he or she would do in this scenario. Basically, querying out of context can be challenging with real users. - In the policy execution mode, is it safe to execute the imitatorâs learned policy in the real world? The expert may be capable of acting safely in the world, but given that the imitator is a learning agent, deploying the agent and accumulating rewards in the real world can be unsafe. - On page 7, there is a reference to equation 3, which doesnât appear in the main submission, only in the appendix. - In the results section for intervention by policy execution, the authors indicate that the current model is updated after each episode. How long does this update take? - For the Atari game experiments, how is the number of disentangled factors chosen to be 30? In general, this might be hard to specify for an arbitrary domain. - Why is the performance for DAgger in Figure 7 evaluated at fewer intervals? 
The line is much sharper than the intervention performance curve. - The authors indicate that GAIL outperforms the expert query approach but that the number of episodes required are an order of magnitude higher. Is there a reason the authors did not plot a more equivalent baseline to show a fair comparison? - Why is the variance on Hopper so large? - On page 8, the authors state that the choice of the approach for learning the mixture of policies doesnât matter, but disc-intervention obtains clearly much higher reward than unif-intervention in Figures 6 and 7, so it seems like it does make a difference. ----------------------------- I read the author response and was happy with the answers. I especially appreciate the experiment on testing the assumption of disentanglement. It would be interesting to think about how the approach can be modified in the future to handle these settings. Overall, the work is of high quality and is relevant and valuable for the community. | - Why is the performance for DAgger in Figure 7 evaluated at fewer intervals? The line is much sharper than the intervention performance curve. |
ACL_2017_588_review | ACL_2017 | and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems.
- Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task.
2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary.
This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
- Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
2) Have you tried building a classifier that just takes h_i^e as inputs?
I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores. | - Weaknesses:1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. |
NIPS_2019_819 | NIPS_2019 | Weakness: Due to the intractbility of the MMD DRO problem, the submission did not find an exact reformulation as much other literature in DRO did for other probability metrics. Instead, the author provides several layers of approximation. The reason why I emphasize the importance of a tight bound, if not an exact reformulation, is that one of the major criticism about (distributionally) robust optimization is that it is sometimes too conservative, and thus a loose upper bound might not be sufficient to mitigate the over-conservativeness and demonstrate the power of distributionally robust optimization. When a new distance is introduced into the DRO framework, a natural question is why it should be used compared with other existing approaches. I hope there will be a more fair comparision in the camera-ready version. =============== 1. The study of MMD DRO is mostly motivated by the poor out-of-sample performance of existing phi-divergence and Wasserstein uncertainty sets. However, I don't believe this is indeed the case. For example, Namkoong and Duchi (2016), and Blanchet, Kang, and Murthy (2016) show the dimension-independent bound 1/\sqrt{n} for a broad class of objective functions in the case of phi-divergence and Wasserstein metric respectively. They didn't require the population distribution to be within the uncertainty set, and in fact, such a requirement is way too conservative and it is exactly what they wanted to avoid. 2. Unlike phi-divergence or Wasserstein uncertainty sets, MMD DRO seems not enjoy a tractable exact equivalent reformulation, which seems to be a severe drawback to me. The upper bound provided in Theorem 3.1 is crude especially because it drops the nonnegative constraint on the distribution, and further approximation is still needed even applied to a simple kernel ridge regression problem. Moreover, it seems restrictive to assume the loss \ell_f belongs to the RKHS as already pointed out by the authors. 3. I am confused about the statement in Theorem 5.1, as it might indicate some disadvantage of MMD DRO, as it provides a more conservative upper bound than the variance regularized problem. 4. Given the intractability of the MMD DRO and several layers of approximation, the numerical experiment in Section 6 is insufficient to demonstrate the usefulness of the new framework. References: Namkoong, H. and Duchi, J.C., 2017. Variance-based regularization with convex objectives. In Advances in Neural Information Processing Systems (pp. 2971-2980). Blanchet, J., Kang, Y. and Murthy, K., 2016. Robust wasserstein profile inference and applications to machine learning. arXiv preprint arXiv:1610.05627. | 2. Unlike phi-divergence or Wasserstein uncertainty sets, MMD DRO seems not enjoy a tractable exact equivalent reformulation, which seems to be a severe drawback to me. The upper bound provided in Theorem 3.1 is crude especially because it drops the nonnegative constraint on the distribution, and further approximation is still needed even applied to a simple kernel ridge regression problem. Moreover, it seems restrictive to assume the loss \ell_f belongs to the RKHS as already pointed out by the authors. |
ICLR_2022_2971 | ICLR_2022 | The experiment part is kind of weak. Only experiments of CIFAR-100 are conducted and only DeiT-small/-base are compared. 1. For comparison with DeiT, can you add DeiT variants with the same number of layers, heads, hidden dimension as CMHSA to do a fair comparison? DeiT with 12 layers performs worse than CMHSA with 6 layers on CIFAR-100 is as expected, thus not a convincing comparison.
2. If possible, can you add CNN models to show whether CMHSA would actually make ViT perform better than or on par with CNN models, which should be the ultimate goal of training ViT in the low-data regime; otherwise one would pretrain ViT on a large-scale dataset or just use CNN models.
3. If possible, results on ImageNet would make the proposed method more convincing. | 3. If possible, results on ImageNet would make the proposed method more convincing. |
elMKXvhhQ9 | ICLR_2024 | 1. The paper should acknowledge related works that are pertinent to the proposed learnable data augmentation, such as [a] and [b]. It is crucial to cite and discuss the distinctions between these works and the proposed approach, providing readers with a clear understanding of the novel contributions made by this study.
2. The paper predominantly explores the concept of applying learnable data augmentation for graph anomaly detection. While this is valuable, investigating its applicability in broader graph learning tasks, such as node classification with contrastive learning, could significantly expand its scope. For example, how about its benefits to generic graph contrastive learning tasks, compared to existing contrastive techniques?
3. While consistency training might usually be deployed on unlabeled data, I wonder if it would be beneficial to utilize labeled data for consistency training as well. Specifically, labeled data has exact labels, which might provide effective information for consistency training the model in dealing with the task of graph anomaly detection.
[a] Graph Contrastive Learning Automated. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. ICML 2021
[b] Graph Contrastive Learning with Adaptive Augmentation. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang. WWW 2021. | 3. While consistency training might usually be deployed on unlabeled data, I wonder if it would be beneficial to utilize labeled data for consistency training as well. Specifically, labeled data has exact labels, which might provide effective information for consistency training the model in dealing with the taks of graph anomaly detection. [a] Graph Contrastive Learning Automated. Yuning You, Tianlong Chen, Yang Shen, Zhangyang Wang. ICML 2021 [b] Graph Contrastive Learning with Adaptive Augmentation. Yanqiao Zhu, Yichen Xu, Feng Yu, Qiang Liu, Shu Wu, Liang Wang. WWW 2021. |
NIPS_2021_2235 | NIPS_2021 | (and questions):
A) The biggest weakness, I think, is that the analysis happens in a very restricted scenario, with no transfer: the authors study only the case where we have a single dataset and learn the encoder without using the labels that we know exist and later use to learn the classifiers - this is suboptimal and would not make sense in practice. I understand that this evaluation is common practice in SSL papers; however, this is only a small part of the evaluations these papers have, and transfer learning is the more important and realistic setting. The authors do discuss this in lines 121-124, justifying their choice only by citing empirical evidence of correlation of this "task" with transfer tasks, but I wouldn't say there are any guarantees there. Calling the second stage of classifier learning on the same dataset as training a "downstream supervised task" is an exaggeration (I would suggest the authors rephrase). Although this task "correlates" with transfer tasks, it is not clear to me if this analysis also extends. It would be great to discuss this at least a bit further.
B) Even for this task above, there are further simplifications to facilitate the analysis: 1) only the SimCLR case is covered and yet, there is no analysis on a seemingly important (see SimCLR-v2 and other recent papers that show that) part of that approach, ie the projection head. 2) The MoCo approach, which is a very popular variant with a memory queue, is not discussed. How does the analysis extend to negatives from a memory queue and dual encoders with an exponential moving average? 3) There is a further simplification in the use of a mean classifier, which is not common practice. Why is that simplification there, and is it central for the analysis?
C) The (absolute) numbers in Table 1 are not very intuitive; they are unbounded and hard to understand. It is really hard to understand what the main message of Table 1 is, and some of the rows, e.g. collisions, could perhaps be made more informative by turning them into probabilities. It is unclear what is meant in line 269 by "10 sampled data augmentation per sample" and unclear what reporting the Collision bound without the alpha and beta constants offers (section 4.2 is very unclear to me).
Some more notes/questions:
The discussion on clustering-based SSL methods and Sec 4.4 is very restricted to this unrealistic task, which becomes even more unrealistic for clustering-based pretraining. It is unclear to me what it offers.
A missing ref (Kalantidis et al., "Hard Negative Mixing for Contrastive Learning", NeurIPS 2020) synthesizes hard negatives for contrastive SSL. As with MoCo, it would be interesting to discuss how this analysis extends to synthetic negatives.
Rating
Although an interesting study, the paper has limitations (see "weaknesses" section above). I would say that the current version of the paper is marginally below the acceptance threshold, but I am looking forward to the authors addressing my concerns above in their rebuttal.
Post-rebuttal thoughts
The authors provided extensive responses to my questions, answering many in a satisfactory way. I still think, however, that a central concern listed in the original review stands: the fact that Arora et al. study the same task that first learns without labels and then with labels on the same dataset (and only that task) doesn't mean that this should be the only task to study for "Understanding Negative Samples in Instance Discriminative Self-supervised Representation Learning".
In their response, the authors claim that
The self-supervised learning setting of our analysis is practical because the setting is quite similar to a semi-supervised learning setting, where we can access massive unlabeled samples and a few labeled samples.
With all due respect, I wouldn't compare this to semi-supervised learning for one key reason: as the authors also say here, in semi-supervised learning you have few labeled examples, a key property of the task. So, I would totally understand this analysis if the proposed bound were evaluated in a semi-supervised setting. This is not the case here, i.e., more than a few labeled examples per class are used for learning the classifiers in this case.
Similarly, wrt the answer on the usage of a mean classifier:
a few-shot learning setting uses a mean classifier, namely, Prototypical Networks [9], which has been cited more than 2700 times, according to Google Scholar.
Again, in the same way, the use of a mean classifier is indeed justified for few-shot learning, but it is well known that in the case of datasets with many labels, a logistic regression classifier is superior.
Overall, I do see some merit in this paper, yet I think the breadth of the analysis is not enough; I will keep my score at 5.
The authors do discuss some limitations, but not potential societal impacts. Given the nature of the work, the latter is not easy to assess and in my opinion it is fine to skip for a theoretical paper on SSL. | 1) only the SimCLR case is covered and yet, there is no analysis on a seemingly important (see SimCLR-v2 and other recent papers that show that) part of that approach, ie the projection head. |
NIPS_2017_104 | NIPS_2017 | ---
There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit.
* More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)?
* How does this setting relate to question answering or visual question answering?
* How does the model perform on the same train data it's seen already? How much does it overfit?
* How hard is it to find intuitive attention examples as in figure 4?
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
* The related works section would be better understood knowing how the model works, so it should be presented later. | --- There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit. |
NIPS_2022_1770 | NIPS_2022 | Weakness: There are still several concerns with the finding that the perplexity is highly correlated with the number of decoder parameters.
According to Figure 4, the correlation decreases as top-10% architectures are chosen instead of top-100%, which indicates that the training-free proxy is less accurate for parameter-heavy decoders.
The range of sampled architectures should also affect the correlation. For instance, once the sampled architectures are of similar sizes, it could be more challenging to differentiate their perplexity and thus the correlation can be lower.
Detailed Comments:
Some questions regarding Figure 4: 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; 2) the title "Top-x%" should be further explained;
Though the proposed approach yields the Pareto frontier of perplexity, latency and memory, is there any systematic way to choose a single architecture given the target perplexity? | 1) there is a drop of correlation after a short period of training, which goes up with more training iterations; |
NIPS_2022_2233 | NIPS_2022 | In lines 3-4, the statement is not necessarily true as there are also methods that focus on better distribution calibration during training (some of these ideas are mentioned later in the paper as well). I would recommend softening the message of this sentence to be more precise.
Language like 'Obviously' (line 47) and 'It is easy to see' (line 168) should generally be avoided in academic writing.
At the end of Sec. 2, it would be helpful to have a one or two sentence discussion to contextualize the proposed BATS method in the described related work.
Variations on the phrase 'We propose to rectify the features into the feature's typical set and then use these typical features to calculate the OOD score.' are repeated frequently throughout the paper. The λ hyperparameter was not introduced in line 142 when it was first used.
The decreased BATS performance on Tiny-Imagenet OOD detection in Table 2 is not mentioned or discussed.
The paper needs to be proofread for typos. The following is a non-exhaustive list of the typos I found:
Line 2: 'which raises the attention on out-of-distribution (OOD) detection' is awkward phrasing.
Lines 74-75: 'large sufficiently' should read 'sufficiently large'.
Line 98: 'energe score' should read 'energy score'.
Line 109: 'is provable aligned' should read 'is provably aligned'.
Line 130: 'common-used layer' should read 'commonly used layer'.
Footnote 1: I did not grammatically understand the phrase: 'the pre-training outputs moving average estimators during iterations'. Maybe something like: 'The pre-trained model outputs moving average estimators at each iteration.'?
Line 184: 'a two-side rectified normal distribution' should read 'a two-sided rectified normal distribution'.
There should be a space between the abbreviation and the number in references (i.e., Fig. X, Table Y, Sec. Z).
Line 197: 'Fig.2 illustrate' should read 'Fig. 2 illustrates'.
Line 220: 'verse vice' should read 'vice versa'.
Line 225: 'Recent researches propose' should read 'Recent literature/work proposes'.
Line 235: 'models are standard pre-trained' is grammatically awkward. Maybe something like: 'models are pre-trained in a standard manner'?
Line 238: 'In specific,' should read 'Specifically,'.
Line 246: 'which cost more' should read 'which costs more'.
Line 252: 'The start learning rate' should read 'The starting learning rate'.
Line 284: 'Our BATS can reduce the variance that benefit the OOD detection but also introduce a bias.' should read 'Our proposed BATS method can reduce variance, which benefits OOD detection, but can also introduce a bias.'.
Line 285: 'Energy Score (The horizontal lines).' should read 'Energy Score (the horizontal lines)'.
Fig. 5: x-axis labels are missing. This is also the case for some figures in the appendix.
Sec. 6: 'Limitation and societal impact' should read 'Limitations and societal impact'.
Sec. 6: batch normalization is referred to in three different ways in the last paragraph (Batch Normalization, Batch-Norm, BN).
Line 693: 'our method surpass' should read 'our method surpasses'.
The references should be proofread (e.g., to ensure the year is not entered twice in a citation, the conference venue is listed instead of ArXiv when available, etc.).
The authors have done a good job in listing limitations of the BATS method. However, the addition of some potential negative societal implications would be helpful. For example, by truncating the features, certain biases in the data learned by the pre-trained model may be amplified. Furthermore, the process of truncation will inherently cause some information loss which may be crucial to model performance during deployment (building on the hyperparameter tuning discussion in the paper). | 5: x-axis labels are missing. This is also the case for some figures in the appendix. Sec. |
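For reference, a minimal sketch of the feature-rectification step quoted in the comments above (my paraphrase of the paper's one-sentence description, not the authors' code; the per-channel statistics are assumed to come from, e.g., BatchNorm running estimates): features are clamped into a "typical" interval whose width is controlled by the λ hyperparameter, and the rectified features are then fed to the usual OOD score.

```python
import numpy as np

def rectify_features(feats, mu, sigma, lam):
    # feats: (n, c) penultimate-layer features; mu, sigma: per-channel statistics
    # (e.g., BatchNorm running mean/std); lam: the lambda truncation hyperparameter.
    lower, upper = mu - lam * sigma, mu + lam * sigma
    return np.clip(feats, lower, upper)  # push features into their "typical set"
```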
NIPS_2018_66 | NIPS_2018 | of their proposed method for disentangling discrete features in different datasets. I think that the main strength of the paper lies in the relatively thorough experimentation. I thought the results in Figure 6 were particularly interesting in that they suggest that there is an ordering in features in terms of mutual information between data and latent variable (for which the KL is an upper bound), where higher mutual information features appear first as the capacity is increased. I also appreciate the explicit discussion of the robustness of the degree of disentanglement across restarts, as well as the sensitivity to hyperparameters. Given the difficulties observed in Figure 4 in distinguishing between similar digits (such as 5s and 8s), it would be interesting to see results for this method on a dataset like dSprites, where the shapes are very similar in pixel space. The inferred chair rotations in Figure 7 are also a nice illustration of the ability of the method to generalize to the test set.
The main thing that this paper lacks is a more quantitative evaluation. A number of recent papers have proposed metrics for evaluating disentangled representations. In addition to the metrics proposed by Kim & Mnih (2018) and Chen et al. (2018), the work by Eastwood & Williams (2017) [1] is relevant in this context. All of these metrics presume that we have access to labels for true latent factors, which is not the case for any of the datasets considered in the experimentation. However, it would probably be worth evaluating one or more of these metrics on a dataset such as dSprites.
A minor criticism is that details of the training procedure and network architectures are somewhat scarce in the main text. It would be helpful to briefly describe the architectures and training setup in a bit more detail, and explicitly call out the relevant sections of the supplementary material. In particular, it would be good to list key parameters such as γ and the schedule for the capacities C_z and C_c, e.g., in the figure captions. In Figure 6a, please mark the 25k iterations (e.g., with a vertical dashed line) to indicate that this is where the capacity is no longer increased further.
Questions
- How robust is the ordering of features in Figure 6, given the noted variability across restarts in Section 4.3? I would hypothesize that the discrete variable always emerges first (given that this variable is in some sense given a "greater" capacity than individual dimensions in the continuous variables). Is the ordering of the continuous variables always the same? What happens when you keep increasing the capacity beyond 25k iterations? Does the network eventually use all of the dimensions of the latent variables?
- I would also appreciate some discussion of how the hyperparameters in the objective were chosen. In particular, one could imagine that the relative magnitude of C_c and C_z would matter, as well as γ. This means that there are more parameters to tune than in, e.g., a vanilla β-VAE. Can the authors comment on how they chose the reported values, and perhaps discuss the sensitivity to these particular hyperparameters in more detail?
- In Figure 2, what is the range of values over which traversal is performed?
Related Work
In addition to the work by Eastwood & Williams, there are a couple of related references that the authors should probably cite:
- Kumar et al. [2] also proposed the total correlation term along with Kim & Mnih (2018) and Chen et al. (2018).
- A recent paper by Esmaeili et al. [3] employs an objective based on the Total Correlation, related to the one in Kim & Mnih (2018) and Chen et al. (2018), to induce disentangled representations that can incorporate both discrete and continuous variables.
Minor Comments
- As the authors write in the introduction, one of the purported advantages of VAEs over GANs is stability of training. However, as mentioned by the authors, including multiple variables of different types also makes the representation unstable. Given this observation, maybe it is worth qualifying these statements in the introduction.
- I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution.
- Figure 1 could be optimized to use less whitespace.
- I would recommend replacing instances of (\citet{key}) with \citep{key}.
References
[1] Eastwood, C. & Williams, C. K. I. A Framework for the Quantitative Evaluation of Disentangled Representations. (2018).
[2] Kumar, A., Sattigeri, P. & Balakrishnan, A. Variational inference of disentangled latent concepts from unlabeled observations. arXiv preprint arXiv:1711.00848 (2017).
[3] Esmaeili, B. et al. Structured Disentangled Representations. arXiv:1804.02086 [cs, stat] (2018). | - I would say that section 3.2 can be eliminated - I think that at this point readers can be presumed to know about the Gumbel-Softmax/Concrete distribution. |
ICLR_2022_3205 | ICLR_2022 | This method trades one intractable problem for another: it requires learning cross-values v_{e'}(x_t; e) for all pairs of possible environments e, e'. It is not clear that this will be an improvement when scaling up.
At a few points the paper introduces approximations, but the gap to the true value and the implications of these approximations are not made completely clear to me. The authors should be more precise about the tradeoffs and costs of the methods they propose, both in terms of accuracy and computational cost.
On page 6, it claims that estimating v_c from samples will lead to Thompson-sampling-like behavior, which might lead to better exploration. This seems a bit facetious given that this paper attempts to find a Bayes-optimal policy and explicitly points out the weaknesses of Thompson sampling in an earlier section.
Not scaled to larger domains, but this is understandable.
Questions and minor comments
Is the belief state conditioning the policy also supposed to change with time τ? As written, it looks like the optimal Bayes-adaptive policy conditions on one sampled belief about the environment and then plays without updating that belief.
It is not intuitive to me how it is possible to estimate v_f, despite the Bellman equation written in Eq. 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct?
I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text. | 12. It would seem that this update would have to integrate over all possible environments in order to be meaningful, assuming that the true environment is not known at update time. Is that correct? I guess this was probably for space reasons, but the bolded sections in page 6 should really be broken out into \paragraphs — it's currently a huge wall of text. |
ICLR_2022_985 | ICLR_2022 | • More motivation for and derivations of the different Fisher scores U_x (for GANs, VAEs, and the supervised case) would be beneficial for better understanding the paper.
• More discussion of models that use the non-diagonal (full) version of the Fisher information matrix (see comments below) would be beneficial.
• Discussion on the dependence of the quality of the NFK embedding on the quality of the pre-trained neural network.
• Discussion on the choice of the low-dimensional embedding dimensionality k.
General comments:
• Using the diagonal of the Fisher information matrix (FIM) seems desirable for computational reasons; however, a natural question is what happens if one tries to use the full matrix. Given the size of the parameter vector θ in a neural network, estimating the whole matrix would indeed be extremely computationally expensive, but by discarding the off-diagonal entries one loses significant information. Could the authors comment on that? Is the diagonal of the FIM related to other known concepts in statistics? Does using only the diagonal imply that the off-diagonal elements are zero, meaning the parameters are orthogonal? How does this affect the results and interpretation? (A small sketch of this computational asymmetry is given after this list of general comments.)
• Could the authors give the derivation of U_x in eq. (1) for GANs (I see part of the derivation is in the paper by Zhai et al., 2019)? For the VAE and supervised cases, the FIM I is the inner product using U_x, but for GANs it acts in the output space of the generator. Why is this the case (some explanation is given in the original paper, but it would be helpful to discuss this a bit more here)? The derivation of eq. (4) would also be useful.
• Low-rank structure of NFK and Alg. 1: How does one choose the feature dimensionality k? Many methods that rely on kernels and manifold learning make the assumption of low dimensionality/low-rankness and show that a small number of eigenfunctions is sufficient to reconstruct the input data. How is this different for NFK? The way I understand “low rank” is that the data has its own rank, which is low and could potentially be learned. However, here the authors input the dimensionality/rank k, which might or might not be close to the true rank in real applications.
• How does this work relate to the work of Belkin et al (2018) – “To understand deep learning we need to understand kernel learning”, where the authors look at other kernels (Laplacian and Gaussian)?
• Could the approach be used for neural networks that are not pre-trained, as the neural tangent kernel NTK?
• The experimental results are nice; however, the focus on computation is not so relevant given that the method only uses the diagonal of the Fisher information matrix. Comparisons using the whole matrix would also be needed. What error is used in Table 3 (MSE, MAE, RMSE)? The goal of the paper is to present a method for supervised and unsupervised settings; however, the results also include a semi-supervised example. I wonder if the examples on semi-supervised learning and knowledge distillation could leave room to improve the discussion of the supervised and unsupervised settings, and potentially be moved to the Appendix?
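To make the diagonal-versus-full-FIM comment above concrete, here is a small sketch (my own, with score_fn standing in for whatever computes the per-example score U_x, i.e. the gradient of log p_θ(x) with respect to the parameters θ): the full empirical FIM is the p × p second-moment matrix of the scores, while the diagonal only requires squared entries and O(p) memory.

```python
import numpy as np

def empirical_fisher_diag(score_fn, data):
    # score_fn(x): flattened per-example score U_x (a placeholder, not from the paper).
    # Full empirical FIM: average of U_x U_x^T, a p x p matrix -> O(p^2) memory.
    # Diagonal: average of U_x ** 2 entrywise -> only O(p) memory.
    diag = None
    for x in data:
        u = np.asarray(score_fn(x), dtype=float).ravel()
        diag = u ** 2 if diag is None else diag + u ** 2
    return diag / len(data)
```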
Other comments:
• Please update the reference (Jaakkola et al.): year, conference; also, there should be no “et al.” since there are only two authors.
• Doesn’t, don’t, won’t, it’s, etc. -> does not, do not, will not, it is
• Both the concepts of “data representation” and “feature representation” are used. Do they always refer to the same thing? If yes, would be good to specify that.
• Expression of K_{fisher} => the second U should have subscript z, not x?
• “FIM defined as the variance of the score …” -> the FIM is defined between all pairs of parameters θ_i and θ_j, so it should be a covariance?
• Appendix Fig. 3: Not sure I fully understand this example. Could one try the reconstruction of the digits using a simple method, such as PCA with the first 100 principal components, as a baseline? (A two-line sketch of such a baseline is given after these comments.)
• Not familiar with the “Fisher vector” terminology, except in image classification and the “Adversarial Fisher vector” from Zhai et al, 2019. Are there other references? | • “FIM defined as the variance of the score …” -> the FIM matrix is defined between all pairs of parameters θ i and θ j , so it should be a covariance? |
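A two-line version of the PCA reconstruction baseline suggested in the Appendix Fig. 3 comment (sklearn is just one convenient way to do this; the random array stands in for flattened digit images):

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(1000, 784)                # stand-in for (n, 784) flattened digits
pca = PCA(n_components=100).fit(X)
X_reconstructed = pca.inverse_transform(pca.transform(X))
```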
ICLR_2023_3449 | ICLR_2023 | 1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. Both are artificial patterns that only appear a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model. (A minimal sketch of such a trigger injection is given after this review's comments.)
2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and different in every example.
3. What is the experimental setup in Section 3.3 (data augmentation methods, learning rate, etc.)?
[1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf | 1.The spurious features in Section 3.1 and 3.2 are very similar to backdoor triggers. They both are some artificial patterns that only appear a few times in the training set. For example, Chen et al. (2017) use random noise patterns. Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well-known that a few training examples with such triggers (rare spurious examples in this paper) would have a large impact on the trained model. |
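To make the backdoor-trigger analogy in point 1 concrete, here is a minimal sketch in the spirit of BadNets-style patch triggers (my own toy construction, not the paper's setup or the cited papers' code): only a handful of training images receive a small fixed patch paired with one label, mirroring how a rare spurious pattern co-occurs with a class only a few times.

```python
import numpy as np

def inject_patch_trigger(images, labels, target_label, n_poison=10, patch_size=3, patch_value=1.0):
    # images: (n, h, w) array in [0, 1]; labels: (n,) integer class labels.
    poisoned, new_labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    poisoned[idx, -patch_size:, -patch_size:] = patch_value  # small corner patch as the trigger
    new_labels[idx] = target_label                           # trigger co-occurs with one class
    return poisoned, new_labels
```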
ICLR_2021_1682 | ICLR_2021 | + The value of episodic training is increasingly being questioned, and the submission approaches the topic from a new and interesting perspective.
+ The connection between nearest-centroid few-shot learning approaches and NCA has not been made in the literature to my knowledge and has potential applications beyond the scope of this paper.
+ The paper is well-written, easy to follow, and well-connected to the existing literature.
- The extent to which the observations presented generalize to few-shot learners beyond Prototypical Networks is not evaluated, which may limit the scope of the submission’s contributions in terms of understanding the properties of episodic training.
- The Matching Networks / NCA connection makes more sense in my opinion than the Prototypical Networks / NCA connection.
- A single set of hyperparameters was used across learners for a given benchmark, which can bias the conclusions drawn from the experiments.
Recommendation
I’m leaning towards acceptance. I have some issues with the submission that are detailed below, but overall the paper presents an interesting take on a topic that’s currently very relevant to the few-shot learning community, and I feel that the value it brings to the conversation is sufficient to overcome the concerns I have.
Detailed justification
The biggest concern I have with the submission is methodological. On one hand, the authors went beyond the usual practice of reporting accuracies on a single run and instead trained each method with five different random initializations, and this is a practice that I’m happy to see in a few-shot classification paper. On the other hand, the choice to share a single set of hyperparameters across learners for a given benchmark leaves a blind spot in the evaluation. What if Prototypical Networks are more sensitive to the choice of optimizer, learning rate schedule, and weight decay coefficient than NCA? Is it possible that the set of hyperparameters chosen for the experiments happens to work poorly for Prototypical Networks? Would we observe the same trends if we tuned hyperparameters independently for each experimental setting? In its current form the submission shows that Prototypical Networks are sensitive to the hyperparameters used to sample episodes while keeping other hyperparameters fixed, but showing the same trend while making a reasonable effort at tuning other hyperparameters would make for a more convincing argument. This is why I take the claim made in Section 4.2 that "NCA performs better than all PN configurations, no matter the batch size" with a grain of salt, for instance.
I also feel that the submission misses out on an opportunity to support a more general statement about episodic training via observations on approaches such as Matching Networks, MAML, etc. I really like the way Figure 1 explains visually how Prototypical Networks miss out on useful relationships between examples in a batch and are therefore data-inefficient. To me, this is one of the submission’s most important contributions: the suggestion that a leave-one-out strategy could allow episodic approaches to achieve the same kind of data efficiency as non-episodic approaches, alleviating the need for a supervised pre-training / episodic fine-tuning strategy. To be clear, I don’t think the missed opportunity would be a reason to reject the paper, but I think that showing empirically that the leave-one-out strategy applies beyond Prototypical Networks would make me lean more strongly towards acceptance.
The connection drawn between Prototypical Networks and NCA feels forced at times. In the introduction the paper claims to "show that, without episodic learning, Prototypical Networks correspond to the classic Neighbourhood Component Analysis", but Section 3.3 lists the creation of prototypes as a key difference between the two which is not resolved by training non-episodically. From my perspective, NCA would be more akin to the non-episodic counterpart to Matching Networks without Full Contextual Embeddings – albeit with a Euclidean metric rather than a cosine similarity metric – since both perform comparisons on example pairs.
This relationship with Matching Networks could be exploited to improve clarity. For instance, row 6 of Figure 4 can be interpreted as a Matching Networks implementation with a Euclidean distance metric. With this in mind, could the difference in performance between "1-NN with class centroids" and k-NN / Soft Assignment noted in Section 4.1 – as well as the drop in performance observed in Figure 4’s row 6 – be explained by the fact that a (soft) nearest-neighbour approach is more sensitive to outliers?
Finally, I have some issues with how results are reported in Tables 1 and 2. Firstly, we don’t know how competing approaches would perform if we applied the paper’s proposed multi-layer concatenation trick, and the idea itself feels more like a way to give NCA’s performance a small boost and bring it into SOTA-like territory. Comparing NCA without multi-layer against other approaches is therefore more interesting to me. Secondly, 95% confidence intervals are provided, but the absence of identification of the best-performing approach(es) in each setting makes it hard to draw high-level conclusions at a glance. I would suggest bolding the best accuracy in each column along with all other entries for which a 95% confidence interval test on the difference between the means is inconclusive in determining that the difference is significant.
Questions
In Equation 2, why is the sum normalized by the total number of examples in the episode rather than the number of query examples?
Can the authors comment on the extent to which Figure 2 supports the hypothesis that NCA is better for training because it learns from a larger number of positives and negatives? Assuming this is true, we should see that Prototypical Networks configurations that increase the number of positives and negatives perform better for a given batch size. Does Figure 2 support this assertion?
Can the authors elaborate on the "no S/Q" ablation (Figure 4, row 7)? What is the point of reference when computing distances for support and query examples? Is the loss computed in the same way for support and query examples? The text in Section 4.3 makes it appear like the loss for query examples is the NCA loss, but the loss for support examples is the prototypical loss. Wouldn’t it be conceptually cleaner to compute leave-one-out prototypes, i.e. leave each example out of the computation of its own class’ prototype (resulting in slightly different prototypes for examples of the same class)? In my mind, this would be the best way to remove the support/query partition while maintaining prototype computation, thereby showing that the partition is detrimental to Prototypical Networks training.
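A short sketch of the leave-one-out prototype computation suggested in the last question (my own formulation of that suggestion, not the submission's method): each example's own-class prototype is recomputed with that example removed, so examples of the same class see slightly different prototypes and the support/query partition is no longer needed.

```python
import numpy as np

def leave_one_out_prototypes(emb, labels, n_classes):
    # emb: (n, d) embeddings; labels: (n,) integer class ids in [0, n_classes).
    sums = np.zeros((n_classes, emb.shape[1]))
    np.add.at(sums, labels, emb)                                   # per-class embedding sums
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    prototypes = sums / np.maximum(counts, 1.0)[:, None]           # ordinary class means
    loo = (sums[labels] - emb) / np.maximum(counts[labels] - 1.0, 1.0)[:, None]
    return prototypes, loo  # loo[i]: prototype of example i's class with i left out
```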
Additional feedback
This is somewhat inconsequential, but across all implementations of episodic training that I have examined I haven’t encountered an implementation that uses a flag to differentiate between support and query examples. Instead, the implementations I have examined explicitly represent support and query examples as separate tensors. I was therefore surprised to read that "in most implementations [...] each image is characterised by a flag indicating whether it corresponds to the support or the query set [...]"; can the authors point to the implementations they have in mind when making that assertion?
I would be careful with the assertion that "during evaluation the triplet {w, n, m} [...] must stay unchanged across methods". While this is true for the benchmarks considered in this submission, benchmarks like Meta-Dataset evaluate on variable-ways and variable-shots episodes.
I’m not too concerned with the computational efficiency of NCA. The pairwise Euclidean distances can be computed efficiently using the inner- and outer-product of the batch of embeddings with itself. | - The extent to which the observations presented generalize to few-shot learners beyond Prototypical Networks is not evaluated, which may limit the scope of the submission’s contributions in terms of understanding the properties of episodic training. |
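Finally, a minimal sketch of the pairwise-distance trick mentioned in the closing comment (a standard identity rather than code from the submission): squared Euclidean distances over a batch follow from the squared norms and the Gram (inner-product) matrix of the embeddings.

```python
import numpy as np

def pairwise_sq_dists(X):
    # X: (n, d) batch of embeddings.
    # ||x_i - x_j||^2 = ||x_i||^2 + ||x_j||^2 - 2 <x_i, x_j>
    sq_norms = np.sum(X ** 2, axis=1)
    gram = X @ X.T
    d2 = sq_norms[:, None] + sq_norms[None, :] - 2.0 * gram
    return np.maximum(d2, 0.0)  # clamp tiny negatives from floating-point rounding
```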