Dataset columns:
- paper_id: string (lengths 10 to 19)
- venue: string (15 classes)
- focused_review: string (lengths 7 to 10.2k)
- point: string (lengths 47 to 690)
NIPS_2019_229
NIPS_2019
It is hard to assess the contribution of the paper from a practical point of view, as SGD automatically avoids possible training problems related to AV. On the theoretical side, the paper does not deeply investigate the relationship between AV and the usual flat or sharp minima. For example, how are AV connected to each other, and which properties of sharp/flat minima generalize to AV?
Questions:
- Under what conditions (on the network architecture) do AV appear? Is there an intuitive interpretation of why they can be found in the loss function associated with many `modern' neural networks?
- Is the presence of AV restricted to the over-parameterized case? If yes, what happens in the under-parameterized situation? Which of the given theoretical properties extend to that case (as local minima cannot be expected to be equivalent in the under-parameterized case)?
- Does the structure of AV depend on the type of objective function used for training? What happens if an L2 penalty term on the weights is added to the loss function?
- Would it be possible to build a 2-dimensional analytical example of an AV?
- The averaged SGD also performs well in the case of a convex loss. Is its good behaviour around an AV related to this?
- In Section 3.2, it is claimed that AV can be found with `decent probability'. What is the order of magnitude of such probability? Does it depend on the complexity of the model? Does this mean that most of the minima are AV?
- The averaged SGD also performs well in the case of a convex loss. Is its good behaviour around an AV related to this?
NIPS_2017_382
NIPS_2017
weakness that there is much tuning and other specifics of the implementation that need to be determined on a case-by-case basis. It could be improved by giving some discussion of guidelines, principles, or references to other work explaining how tuning can be done, and some acknowledgement that the meaning of fairness may change dramatically depending on that tuning.
* Clarity: The paper is well organized and explained. It could be improved by some acknowledgement that there are a number of other (competing, often contradictory) definitions of fairness, and that the two appearing as constraints in the present work can in fact be contradictory in such a way that the optimization problem may be infeasible for some values of the tuning parameters.
* Originality: The most closely related work of Zemel et al. (2013) is referenced; the present paper explains how it is different and gives comparisons in simulations. It could be improved by making these comparisons more systematic with respect to the tuning of each method, i.e., comparing the best performance of each.
* Significance: The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and the modularity of using preprocessing to address fairness mean the present paper is likely to be used or built upon.
* Significance: The broad problem addressed here is of the utmost importance. I believe the popularity of (IF) and the modularity of using preprocessing to address fairness mean the present paper is likely to be used or built upon.
NIPS_2020_183
NIPS_2020
* It will help if the paper describes the possible alternate formulations for Confidence Diversity (CD). I am not asking for additional experiments here. In the current form, it is difficult to tell what extra information CD is capturing on top of Predictive Uncertainty, or why entropy is not a good measure of the "amount of spreading of teacher predictions over the probability simplex among different (training) samples" (line 115). Note that line 113 did not clarify it for me.
* I think the experimental section is somewhat weak. The paper considers three datasets and two models, but all the datasets are vision datasets and are small. Moreover, the baselines are insufficient: I would like to see at least a standard distillation baseline, trained with both soft and hard labels. We also don't know the number of generations used for the SD baseline. I understand the authors point out that parameter tuning is expensive. Still, they should consider ablations (not parameter tuning) and show that their empirical gains are consistent across different parameters.
* It will help if the paper describes the possible alternate formulations for Confidence Diversity (CD). I am not asking for additional experiments here. In the current form, it is difficult to tell what extra information CD is capturing on top of Predictive Uncertainty, or why entropy is not a good measure of the "amount of spreading of teacher predictions over the probability simplex among different (training) samples" (line 115). Note that line 113 did not clarify it for me.
m8ERGrOf1f
ICLR_2025
1. While the unified low-precision quantization strategy is effective, the scalability of the approach, especially when applied to larger diffusion models beyond those tested, is unclear. Including runtime and memory trade-offs when scaling to more complex models (e.g., SDXL) or higher-resolution tasks would enhance the practical utility of the work.
2. There is a lack of a BF16 baseline when the authors try to demonstrate the effectiveness of the proposed method on FP8 configurations.
3. In Fig. 5 (third panel), the proposed sensitive-layer selection does not differ much from randomized selection on Stable Diffusion, and the authors do not discuss this observation further. Besides, there is a lack of mathematical or theoretical justification for the proposed Algorithm 1.
3. In Fig. 5 (third panel), the proposed sensitive-layer selection does not differ much from randomized selection on Stable Diffusion, and the authors do not discuss this observation further. Besides, there is a lack of mathematical or theoretical justification for the proposed Algorithm 1.
qb2QRoE4W3
ICLR_2025
Despite the idea being interesting, I have found some technical issues that weaken the overall soundness. I enumerate them as follows:
1. The assumption that generated URLs are always meaningfully related to the core content of the document from which the premises are to be fetched is, by and large, not true. It works for Wikipedia because the URLs are semantically well-structured.
2. LLMs generating URLs on Wikidata have a significantly higher probability of being linked with a valid URL because extensive entity linking has already been done. This, however, is not the case for many other web sources.
3. There are several URLs that are not named according to the premise entities. In that case, those sources will never be fetched.
4. How are contradictory entailments from premises belonging to different sources resolved?
5. There can be many sources that are themselves false (particularly on the open Internet, and also in cases of unverified Wiki pages). So assuming the premises to be true may lead to incorrect RTE.
6. It is unclear how the prompt templates are designed, i.e., the rationale and methodology that would drive the demonstration example patterns in the few-shot cases.
7. A discussion of how the prompt dataset (for the few-shot case) was created, together with its source, should be included.
8. The assumption that RTE (i.e., NLI) being true would imply that the hypothesis (fact/claim) is verified is a bit tricky and may not always hold. A false statement can entail a hypothesis as well as its true version. E.g.: $\textit{Apples come in many colors}$ $\implies$ $\textit{Apples can be blue (claim)}$; $\textit{John was a follower of Jesus}$ $\implies$ $\textit{John was a Christian (claim)}$.
9. Line 253: What is the citation threshold? I could not find the definition.
10. In the comparisons with the baselines and variants of LLM-Cite, what was the justification for not keeping the model set fixed across all experiments? I think this should be clarified.
11. In Sections 4.1 and 4.2, an analysis of why the verification models perform better on model-generated claims than on human-generated claims is very important to me. I could not find any adequate analysis of that.
12. The key success of LLM-Cite depends on the NLI model (given that at least one valid URL is generated that points to a valid premise). Hence, a discussion of the accuracy of SOTA NLI models (with citations) and the rationale behind choosing the Check Grounding NLI API and Gemini-1.5-Flash should be included.
7. A discussion of how the prompt dataset (for the few-shot case) was created, together with its source, should be included.
rJhk7Fpnvh
EMNLP_2023
1. It was quite unclear how the experiments performed in the work actually corroborate the authors' theory. In the random premise task, how could the authors ensure that the random predicate indeed resulted in NO-ENTAIL? I understand that such random sampling has a very small probability of resulting in something that is not NO-ENTAIL, but given that many predicates have synonyms, and other predicates that they entail (and hence by proxy the current hypothesis might also be entailed), it feels crucial to ensure that NO-ENTAIL was indeed the case for all the instances (as this is not the train set, but rather the evaluation set).
2. Additionally, it was not clear how the generic argument task and the random argument task proved what the authors claimed. All in all, the whole dataset transformation and the ensuing experimental setup felt very cumbersome, and not very clear.
2. Additionally, it was not clear how the generic argument task and the random argument task proved what the authors claimed. All in all, the whole dataset transformation and the ensuing experimental setup felt very cumbersome, and not very clear.
NIPS_2019_1158
NIPS_2019
1. The proposed method only gets a convergence rate in expectation (i.e., only a variance bound), not with high probability. Though Chebyshev's inequality gives a bound in probability from the variance bound, this is still weaker than that of Bach [3].
2. The method description lacks necessary details and intuition:
- It's not clear how to get/estimate the mean element mu_g for different kernel spaces.
- It's not clear how to sample from the DPP if the eigenfunctions e_n's are inaccessible (Eq (10) line 130). This seems to be the same problem with sampling from the leverage score in [3], so I'm not sure how sampling from the DPP is easier than sampling from the leverage score.
- There is no intuition for why a DPP with that particular repulsion kernel is better than other sampling schemes.
3. The empirical results are not presented clearly:
- In Figure 1: what is "quadrature error"? Is it the sup of the error over all possible integrands f in the RKHS, or for a specific f? If it's the sup over all f, how does one get that quantity for other methods such as Bayesian quadrature (which doesn't have a theoretical guarantee)? If it's for a specific f, which function is it, and why is the error on that specific f representative of other functions?
Other comment:
- Eq (18), definition of principal angle: seems to be missing an absolute value on the right-hand side, as it could be negative.
Minor:
- Reference for kernel herding is missing [?]
- Line 205: Getting of the product -> Getting rid of the product
- Please ensure correct capitalization in the references (e.g., [1] tsp -> TSP, [39] rkhss -> RKHSs)
[3] F. Bach. On the equivalence between kernel quadrature rules and random feature expansions. The Journal of Machine Learning Research, 18(1):714–751, 2017.
===== Update after rebuttal: My questions have been adequately addressed. The main comparison in the paper seems to be with the results of F. Bach [3]. Compared to [3], I do think the theoretical contribution (better convergence rate) is significant. However, as the other reviews pointed out, the theoretical comparison with Bayesian quadrature is lacking. The authors have agreed to address this. Therefore, I'm increasing my score.
- It's not clear how to sample from the DPP if the eigenfunctions e_n's are inaccessible (Eq (10) line 130). This seems to be the same problem with sampling from the leverage score in [3], so I'm not sure how sampling from the DPP is easier than sampling from the leverage score.
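A short worked version of the Chebyshev step mentioned in the review above, i.e., turning a variance (second-moment) bound on the quadrature error into an in-probability bound; this is a generic sketch, not taken from the paper:

```latex
% If the quadrature error E_n satisfies E[E_n^2] <= v_n, then for any t > 0
\[
  \mathbb{P}\bigl(|E_n| \ge t\bigr) \;\le\; \frac{\mathbb{E}[E_n^2]}{t^2} \;\le\; \frac{v_n}{t^2},
\]
% so, choosing t = sqrt(v_n / delta), with probability at least 1 - \delta,
\[
  |E_n| \;\le\; \sqrt{v_n / \delta}.
\]
% The resulting bound degrades as 1/\sqrt{\delta}, whereas a direct
% high-probability analysis (as in Bach [3]) typically pays only a milder
% (e.g., logarithmic) factor in 1/\delta, which is why the in-expectation
% guarantee is considered weaker.
```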
NIPS_2019_1350
NIPS_2019
of the method.
CLARITY: The paper is well organized; parts are well written and easy to follow, while other parts have quite some potential for improvement, specifically the experiments section. Suggestions for more clarity below.
SIGNIFICANCE: I consider the work significant, because there might be many settings in which integrated data about the same quantity (or related quantities) may come at different costs. There is no earlier method that allows taking several sources of data into account, and even though it is a fairly straightforward extension of multi-task models and inference on aggregated data, it is relevant.
MORE DETAILED COMMENTS:
--INTRO & RELATED WORK:
* Could you state somewhere early in the introduction that by "task" you mean "output"?
* Regarding the 3rd paragraph of the introduction and the related work section: they read unnaturally separated. The paragraph in the introduction reads very technical, and it would be great if the authors could put more emphasis there on how their work differs from previous work and introduce just the main concepts (e.g., in what way multi-task learning differs from multiple instance learning). Much of the more technical assessment could go into the related work section (or partially be condensed).
--SECTION 2.3: Section 2 was straightforward to follow up to 2.3 (SVI). From there on, it would be helpful if a bit more explanation was available (at the expense of parts of the related work section, for example). More concretely:
* l.145ff: $N_d$ is not defined. It would be good to state explicitly that there could be a different number of observations per task.
* l.145ff: The notation confused me when first reading, e.g., $\mathbb{y}$ has been used in l.132 for a data vector with one observation per task, and in l.145 for the collection of all observations. I am aware that the setting (multi-task, multiple supports, different number of observations per task) is inherently complex, but it would help to better guide the reader through this by adding some more explanation and changing notation. Also l.155: do you mean the process f as in l.126, or do you refer to the object introduced in l.147?
* l.150ff: How are the inducing inputs Z chosen? Is there any effect of the integration on the choice of inducing inputs? l.170: What is z' here? Is that where the inducing inputs go?
* l.166ff: It would be very helpful for the reader to be reminded of the dimensions of the matrices involved.
* l.174: Could you explicitly state the computational complexity?
* Could you comment on the performance of this approximate inference scheme based on inducing inputs and SVI?
--EXPERIMENTS:
* Synthetic data: Could you give an example of what kind of data could look like this? In Figure 1, what is meant by "support data" and what by "predicted training count data"? Could you write down the model used here explicitly, e.g., add it to the appendix?
* Fertility rates: - It is unclear to me how the training data is aggregated and over which inputs, i.e., what you mean by 5x5. - Now that the likelihood is Gaussian, why not go for exact inference?
* Sensor network: - l.283/4: You might want to emphasize here that CI give high accuracy but low time resolution results, e.g., "...a cheaper method for __accurately__ assessing the mass..." - Again, given a Gaussian likelihood, why do you use inducing inputs? What is the trade-off (computational and quality) between using the full model and SVI? - l.304ff: What do you mean by "additional training data"?
- Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth? - Now the sensors are co-located. Ideally, you would want to have more low-cost sensors than high-cost (high-accuracy) sensors, in different locations. Do you have a thought on how you would account for the spatial distribution of sensors?
--REFERENCES:
* Please make the style of your references consistent, and start with the last name.
Typos etc.:
-------------
* l.25 types of datasets
* l.113 should be $f_{d'}(v')$, i.e. $d'$ instead of $d$
* l.282 "... but are badly bias" should be "is(?) badly biased" (does the verb refer to the measurement or the sensor? Maybe rephrase.)
* l.292 biased
* Figure 3: biased, higher peaks, 500 with unit.
* l.285 consisting of? Or just "...as observations of integrals"
* l.293 these variables
- Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth?
FAYIlGDBa1
ICLR_2025
W1) Some of the technical contents, including the problem formulation, are not properly formalized, with some important aspects completely omitted. Notably, it should be stated that matrix $A$ in (SPCA) is necessarily symmetric positive semidefinite, implying the same for the blocks in the block-diagonal approximation. The absence of this information and the fact that throughout the paper $A$ is simply referred to as an "input matrix" rather than a covariance matrix may mislead the reader into thinking that the problem is more general than it actually is.
W2) The presentation of the simulation results is somewhat superficial, focusing only on presenting and briefly commenting on the two quantitative criteria used for comparison, without much discussion or insight into what is going on. Specifically:
- Separate values for the different values of $k$ used should be reported (see point W3 below).
- It would be interesting to report the threshold value $\varepsilon$ effectively chosen by the algorithm (relative to the infinity norm of the input matrix), as well as the proportion of zero entries after the thresholding.
- It would also be interesting to compare the support of the solution obtained by the proposed scheme with that obtained by the baseline methods (e.g., using a Jaccard index).
W3) Reporting average results with respect to $k$ is not a reasonable choice in my opinion, as the statistics of the chosen metrics probably behave very differently for different values of $k$.
W4) As it is, I don't see the utility of Section 4.1. First, this model is not applied to any of the datasets used in the experiments. This leads one to suspect that in practice it is quite hard to come up with estimates of the parameters required by Algorithm 4. Second, the authors do not even generate synthetic data following such a model (which is one typical use of a model) in order to illustrate the obtained results. In my view, results of this kind should be added, or else the contents of 4.1 should be moved to the appendices as they're not really important (or both).
W5) Though generally well written, the paper lacks some clarity at times, and the notation is not always consistent/clear. In particular:
- The sentence "This result is non-trivial: while the support of an optimal solution could span multiple blocks, we theoretically show that there must exist a block that contains the support of an optimal solution, which guarantees the efficiency of our framework." seems to be contradictory. I believe that the authors mean the following: *one could think* that the solution could span multiple blocks, but they show this is not true. The same goes for Remark 1.
- What is the difference between $A^\varepsilon$ and $\tilde{A}$ in Theorem 1? It seems that two different symbols are used to denote the same object.
- The constraint $\|x\|_0 \le k$ in the last problem that appears in the proof of Theorem 2 is innocuous, since the size of each block $\tilde{A}_i'$ is at most $k$ anyway. This should be commented on.
- It would also be interesting to compare the support of the solution obtained by the proposed scheme with that obtained by the baseline methods (e.g., using a Jaccard index).
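As a concrete version of the support-comparison suggestion in W2 above, here is a minimal Python sketch (independent of the paper's code) of the Jaccard index between the supports of two sparse principal component estimates; the variable names and the 1e-12 tolerance are illustrative assumptions.

```python
import numpy as np

def support(x, tol=1e-12):
    """Indices of entries whose magnitude exceeds a small tolerance."""
    return set(np.flatnonzero(np.abs(x) > tol))

def jaccard_support(x, y, tol=1e-12):
    """Jaccard index |S_x ∩ S_y| / |S_x ∪ S_y| between the supports of x and y."""
    sx, sy = support(x, tol), support(y, tol)
    union = sx | sy
    return len(sx & sy) / len(union) if union else 1.0

# Example: two 10-dimensional sparse PC estimates sharing 2 of their nonzeros.
x = np.zeros(10); x[[0, 3, 7]] = [0.6, -0.7, 0.4]
y = np.zeros(10); y[[0, 3, 8]] = [0.5, -0.8, 0.3]
print(jaccard_support(x, y))  # 2 shared / 4 total -> 0.5
```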
ACL_2017_96_review
ACL_2017
- lack of statistics of the datasets (e.g., average length, vocabulary size)
- the baseline (Moses) is not proper because of the small size of the dataset
- the assumption "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word" is not supported by the data.
- General Discussion: This discussion gives more details about the weaknesses of the paper. Half of the paper is about the new dataset for sarcasm interpretation. However, the paper doesn't show important information about the dataset such as average length and vocabulary size. More importantly, the paper doesn't show any statistical evidence to support the method of focusing on sentiment words. Because the dataset is small (only 3000 tweets), I guess that many words are rare. Therefore, Moses alone is not a proper baseline. A proper baseline should be an MT system that can handle rare words very well. In fact, using clustering and declustering (as in Sarcasm SIGN) is a way to handle rare words. Sarcasm SIGN is built on the assumption that "sarcastic tweets often differ from their non sarcastic interpretations in as little as one sentiment word". Table 1, however, strongly disagrees with this assumption: the human interpretations often differ from the tweets at not only sentiment words. I thus strongly suggest that the authors give statistical evidence from the dataset that supports their assumption. Otherwise, the whole idea of Sarcasm SIGN is just a hack.
--------------------------------------------------------------
I have read the authors' response. I don't change my decision, for the following reasons:
- The authors wrote that "the Fiverr workers might not take this strategy": to me this is not the spirit of corpus-based NLP. A model must be built to fit the given data, not the data made to follow some assumption that the model is built on.
- The authors wrote that "the BLEU scores of Moses and SIGN are above 60, which is generally considered decent in the MT literature": to me the number 60 doesn't show anything at all, because the sentences in the dataset are very short. Moreover, if we look at Table 6, %changed for Moses is only 42%, meaning that even though more than half of the time the translation is simply copying, the BLEU score is above 60.
- "While higher scores might be achieved with MT systems that explicitly address rare words, these systems don't focus on sentiment words": it's true, but I was wondering whether sentiment words are rare in the corpus. If they are, those MT systems should obviously handle them (in addition to other rare words).
- "While higher scores might be achieved with MT systems that explicitly address rare words, these systems don't focus on sentiment words": it's true, but I was wondering whether sentiment words are rare in the corpus. If they are, those MT systems should obviously handle them (in addition to other rare words).
kklwv4c4dI
ICLR_2024
Table 1 presents the previous and current results strangely:
1) First of all, to compare the obtained complexity for the proposed method with the previous result in the strongly-convex-concave case, the standard regularization trick should be used.
2) From my point of view, when the complexity contains several terms, each of them should be added.
About Table 2: the authors claim that "The sequential version of FeDualEx leads to the stochastic dual extrapolation for CO and yields, to our knowledge, the first convergence rate for the stochastic optimization of composite SPP in non-Euclidean settings." This is not true; there is a wide field related to operator splitting in deterministic and stochastic cases. Please look at this paper: https://epubs.siam.org/doi/epdf/10.1137/20M1381678. Also, compared to previous works, the authors use a bounded stochastic gradient assumption and homogeneity of the data. In many federated learning papers, those assumptions are avoided. Although the authors write "Assumption e is a standard assumption", it would be better to provide an analysis without it to have more generality.
In Theorems 1 and 2, the final results contain mistakes in the complexity, because mistakes were made in the proofs. The first mistake is made in Theorem 3 and is repeated in the main theorem. Please look at the last inequality on page 40: to make $3\eta^2\beta^2 -1 \leq 0$, the stepsize should be chosen as $\eta \leq \frac{1}{\sqrt{3}\beta}$. This will change the complexity of the methods. The same was done in the proofs of Theorems 1 and 2; please see Lemmas 3 and 17. The second mistake is made in the proof of Lemma 13, in the last two inequalities, where it should be $\dots\sqrt{2V^l_z(\cdot)} \leq \dots \sqrt{B}$. This will also change the final complexity.
The appendix is hard to read in terms of the order of the lemmas. I think it would be better if the numbering of the lemmas followed a strict order (for example, Lemma 6 following Lemma 5). For other weaknesses, please see the questions.
1) First of all, to compare the obtained complexity for the proposed method with the previous result in the strongly-convex-concave case, the standard regularization trick should be used.
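For completeness, the stepsize condition quoted in the review above follows by elementary algebra (assuming $\beta > 0$); this is a generic worked step, not the paper's derivation:

```latex
\[
  3\eta^2\beta^2 - 1 \le 0
  \;\Longleftrightarrow\;
  \eta^2 \le \frac{1}{3\beta^2}
  \;\Longleftrightarrow\;
  \eta \le \frac{1}{\sqrt{3}\,\beta},
\]
% so the admissible stepsize shrinks with the smoothness constant \beta,
% which is what the reviewer says propagates into the corrected complexity bounds.
```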
ICLR_2021_1948
ICLR_2021
a. Anonymisation Failure in References
i. A reference uncited in the manuscript body contains a non-anonymised set of author names for a paper with the same title as the system presented in this paper. This was not detected during initial review. "Shuby Deshpande and Jeff Schneider. Vizarel: A System to Help Better Understand RL Agents. arXiv:2007.05577."
b. Citations
i. There is egregiously missing information in almost all citations. This is quite obvious in all the missing dates in the manuscript text.
c. Clarity
i. There are a few unclear or misleadingly worded statements, as below:
1) "However, there is no corresponding set of tools for the reinforcement learning setting." - This is false. See references below (also some in the submitted paper).
2) "stronger feedback loop between the researcher and the agent" - This is at least confusing. In any learning setting, there is a strong interaction loop between experimentation by the researcher and resulting outcomes for the trained model.
3) "To the best of our knowledge, there do not exist visualization systems built for interpretable reinforcement learning that effectively address the broader goals we have identified" - It isn't clear what these broader goals are that have been identified. Therefore it isn't possible to evaluate this claim.
4) "For multi-dimensional action spaces, the viewport could be repurposed to display the variance of the action distribution, plot different projections of the action distribution, or use more sophisticated techniques (Huber)." - It would be clearer to actually state what the sophisticated techniques from Huber are here.
ii. The framework could state more clearly that, as described, it applies most directly to single-agent RL. The same approach could be used with multi-agent RL, but the observation state and the visualisations around it get more confusing when there are multiple, potentially different, sets of observations. If this is not the case, please clarify.
d. Experimental rigour
i. The paper does include 3 extremely brief examples of how the tool might be used. However, it does not include any experiments, such as a real user study, to suggest that this tool would actually improve the debugging process for training RL. Not every paper requires a user study; however, the contributions proposed by this particular manuscript require validation at some level from actual RL users (even a case study with some feedback from users would address this to some extent).
e. Novelty in Related Work
i. The manuscript contains references to several relevant publications that can be compared to the current work. However, the paper is also missing references to many related and relevant works in the space of debugging reinforcement learning using visualisations, especially with an eye towards explainable reinforcement learning. See a sampling below.
1) @inproceedings{ Rupprecht2020Finding, title={Finding and Visualizing Weaknesses of Deep Reinforcement Learning Agents}, author={Christian Rupprecht and Cyril Ibrahim and Christopher J.
Pal}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=rylvYaNYDH} } 2) @inproceedings{ Atrey2020Exploratory, title={Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning}, author={Akanksha Atrey and Kaleigh Clary and David Jensen}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=rkl3m1BFDB} } 3) @inproceedings{ Puri2020Explain, title={Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution}, author={Nikaash Puri and Sukriti Verma and Piyush Gupta and Dhruv Kayastha and Shripad Deshmukh and Balaji Krishnamurthy and Sameer Singh}, booktitle={International Conference on Learning Representations}, year={2020}, url={https://openreview.net/forum?id=SJgzLkBKPB} } 4) @article{reddy2019learning, title={Learning human objectives by evaluating hypothetical behavior}, author={Reddy, Siddharth and Dragan, Anca D and Levine, Sergey and Legg, Shane and Leike, Jan}, journal={arXiv preprint arXiv:1912.05652}, year={2019} } 5) @inproceedings{mcgregor2015facilitating, title={Facilitating testing and debugging of Markov Decision Processes with interactive visualization}, author={McGregor, Sean and Buckingham, Hailey and Dietterich, Thomas G and Houtman, Rachel and Montgomery, Claire and Metoyer, Ronald}, booktitle={2015 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC)}, pages={53--61}, year={2015}, organization={IEEE} } 6) @article{puiutta2020explainable, title={Explainable Reinforcement Learning: A Survey}, author={Puiutta, Erika and Veith, Eric}, journal={arXiv preprint arXiv:2005.06247}, year={2020} } 7) @book{calvaresi2019explainable, title={Explainable, Transparent Autonomous Agents and Multi-Agent Systems: First International Workshop, EXTRAAMAS 2019, Montreal, QC, Canada, May 13--14, 2019, Revised Selected Papers}, author={Calvaresi, Davide and Najjar, Amro and Schumacher, Michael and Fr{\"a}mling, Kary}, volume={11763}, year={2019}, publisher={Springer Nature} } 8) @inproceedings{juozapaitis2019explainable, title={Explainable reinforcement learning via reward decomposition}, author={Juozapaitis, Zoe and Koul, Anurag and Fern, Alan and Erwig, Martin and Doshi-Velez, Finale}, booktitle={IJCAI/ECAI Workshop on Explainable Artificial Intelligence}, year={2019} } 9) @misc{sundararajan2020shapley, title={The many Shapley values for model explanation}, author={Mukund Sundararajan and Amir Najmi}, year={2020}, eprint={1908.08474}, archivePrefix={arXiv}, primaryClass={cs.AI} } 10) @misc{madumal2020distal, title={Distal Explanations for Model-free Explainable Reinforcement Learning}, author={Prashan Madumal and Tim Miller and Liz Sonenberg and Frank Vetere}, year={2020}, eprint={2001.10284}, archivePrefix={arXiv}, primaryClass={cs.AI} } 11) @article{Sequeira_2020, title={Interestingness elements for explainable reinforcement learning: Understanding agents’ capabilities and limitations}, volume={288}, ISSN={0004-3702}, url={http://dx.doi.org/10.1016/j.artint.2020.103367}, DOI={10.1016/j.artint.2020.103367}, journal={Artificial Intelligence}, publisher={Elsevier BV}, author={Sequeira, Pedro and Gervasio, Melinda}, year={2020}, month={Nov}, pages={103367} } 12) @article{Fukuchi_2017, title={Autonomous Self-Explanation of Behavior for Interactive Reinforcement Learning Agents}, ISBN={9781450351133}, url={http://dx.doi.org/10.1145/3125739.3125746}, 
DOI={10.1145/3125739.3125746}, journal={Proceedings of the 5th International Conference on Human Agent Interaction}, publisher={ACM}, author={Fukuchi, Yosuke and Osawa, Masahiko and Yamakawa, Hiroshi and Imai, Michita}, year={2017}, month={Oct} } 4. Recommendation a. I recommend this paper for rejection as the degree of change needed to validate the contribution through a user study or other validation is likely not feasible in the time and space needed. However, the contribution is potentially valuable to RL, so with the inclusion of this missing evaluation and additions to related work/contextualisation of the contribution, I would consider increasing my score and changing my recommendation. 5. Minor Comments/Suggestions a. It is recommended to use the TensorFlow whitepaper citation for TensorBoard (https://arxiv.org/abs/1603.04467). This is the official response (https://github.com/tensorflow/tensorboard/issues/3437).
1) "However, there is no corresponding set of tools for the reinforcement learning setting." - This is false. See references below (also some in the submitted paper).
NIPS_2016_313
NIPS_2016
Weakness:
1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim.
2. The paper lacks detail about the techniques, which makes it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. How are the landmarks generated on the edges? How is the number of landmarks used decided? What kind of image features are used? What is the fixed radius at different scales? How is shape invariance achieved, etc.?
3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming deep-learning-based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods.
4. It is time-consuming, since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work.
5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
1. The proposed method consists of two major components: the generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim.
iamWnRpMuQ
ICLR_2025
The results have a few issues which make evaluating the contribution difficult:
1. The paper lacks a comparison with some existing works, particularly iterative PPO/DPO methods that train a reward model simultaneously, and reward ensembles [1]. [1] Coste T, Anwar U, Kirk R, et al. Reward model ensembles help mitigate overoptimization.
2. The alignment of relabeled reward data with human annotator judgments remains insufficiently validated.
2. The alignment of relabeled reward data with human annotator judgments remains insufficiently validated.
NIPS_2019_1131
NIPS_2019
1. There is no discussion on the choice of "proximity" and the nature of the task. On the proposed tasks, proximity of the fingertip Cartesian positions is strongly correlated with proximity in the solution space. However, this relationship doesn't hold for certain tasks. For example, in a complicated maze, two nearby positions in the Euclidean metric can be very far apart along the actual path. For robotic tasks with various obstacles and collisions, similar results apply. The paper would be better if it analyzed which tasks have reasonable proximity metrics, and demonstrated failure on those that don't.
2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance.
Minor comments:
1. The diversity term, defined as the facility location function, is undirected and history-invariant. Thus it shouldn't be called "curiosity", since curiosity only works on novel experiences. Please use a different name.
2. The curves in Figure 3 (a) are suspiciously cut at Epoch = 50, after which the baseline methods seem to catch up and perhaps surpass CHER. Perhaps this should be explained.
2. Some ablation study is missing, which could cause confusion and extra experimentation for practitioners. For example, the \sigma in the RBF kernel seems to play a crucial role, but no analysis is given on it. Figure 4 analyzes how changing \lambda changes the performance, but it would be nice to see how \eta and \tau in equation (7) affect performance. Minor comments:
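To make the objects discussed above concrete, here is a minimal Python sketch of the standard facility-location diversity score with an RBF kernel (the usual textbook form of such a term; the paper's exact definition may differ), illustrating why the bandwidth \sigma is a sensitive choice:

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """Pairwise RBF similarities k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def facility_location(candidates, selected, sigma):
    """F(S) = sum_i max_{j in S} k(x_i, x_j): how well the selected set covers all candidates."""
    K = rbf_kernel(candidates, selected, sigma)
    return K.max(axis=1).sum()

# Example: a spread-out subset covers the candidate pool better than a clustered one,
# but the size of the gap depends heavily on sigma.
rng = np.random.default_rng(0)
pool = rng.uniform(0, 1, size=(100, 3))   # e.g., fingertip goal positions
spread = pool[::25]                        # 4 well-separated goals
clustered = pool[:4] * 0.05                # 4 nearly identical goals
for sigma in (0.05, 0.5, 5.0):
    print(sigma, facility_location(pool, spread, sigma),
          facility_location(pool, clustered, sigma))
```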
NIPS_2020_335
NIPS_2020
- The paper reads too much like LTF-V1++, and at some points assumes too much familiarity of the reader with LTF-V1. Since this method is not well known, I wish the paper were a bit more pedagogical/self-contained. - The method seems more involved than it needs to be. One would suspect that there is an underlying, simpler principle that is driving the quality gains.
- The method seems more involved than it needs to be. One would suspect that there is an underlying, simpler principle that is driving the quality gains.
ICLR_2023_3449
ICLR_2023
1. The spurious features in Sections 3.1 and 3.2 are very similar to backdoor triggers. Both are artificial patterns that only appear a few times in the training set. For example, Chen et al. (2017) use random noise patterns, and Gu et al. (2019) [1] use single-pixel and simple patterns as triggers. It is well known that a few training examples with such triggers (rare spurious examples in this paper) can have a large impact on the trained model.
2. How neural nets learn natural rare spurious correlations is unknown to the community (to the best of my knowledge). However, most of the analysis and ablation studies use artificial patterns instead of natural spurious correlations. Duplicating the same artificial pattern multiple times is different from natural spurious features, which are complex and differ in every example.
3. What's the experimental setup in Section 3.3 (data augmentation methods, learning rate, etc.)?
[1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf
3. What's the experimental setup in Section 3.3 (data augmentation methods, learning rate, etc.)? [1]: BadNets: Evaluating Backdooring Attacks on Deep Neural Networks. https://messlab.moyix.net/papers/badnets_ieeeaccess19.pdf
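To illustrate the analogy the review draws between rare spurious patterns and backdoor triggers, here is a minimal, hypothetical Python sketch of injecting a single-pixel trigger (in the spirit of BadNets [1]) into a handful of training images; the array shapes, poison count, and target label are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def add_single_pixel_trigger(images, labels, n_poison, target_label, rng):
    """Set one corner pixel to max intensity in n_poison images and relabel them.

    images: float array of shape (N, H, W, C) with values in [0, 1].
    Only a few examples carry the pattern, yet a trained model can latch onto it --
    the same failure mode the review compares to rare spurious correlations.
    """
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=n_poison, replace=False)
    images[idx, -1, -1, :] = 1.0   # the rare, artificial pattern
    labels[idx] = target_label     # spuriously correlated label
    return images, labels

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(1000, 32, 32, 3)).astype(np.float32)
y = rng.integers(0, 10, size=1000)
X_poisoned, y_poisoned = add_single_pixel_trigger(X, y, n_poison=5, target_label=0, rng=rng)
```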
NIPS_2021_1958
NIPS_2021
1. The problem formulation is somewhat unclear in the statement and introduction examples. 2. More baselines or self-variants should be compared to better prove the effectiveness.
Detailed comments: The problem definition of keyword search on incomplete graphs is ambiguous and confusing. KS-GNN mostly optimizes node similarity, and the inference stage tends to select the top-k results most similar to the query keyword set. However, the problem itself seems more like a combinatorial one, or rather a set optimization: node-set selection with a minimal distance measurement. Are the two targets equivalent? The baseline approach seems much inferior to KS-GNN. It would be great to include some variants of KS-GNN that remove some of the modules or training objectives to confirm the contribution of each component. Table 2 with missing edges is supposed to be more challenging than the task in Table 1. However, a lot of models perform even better (or comparably), which seems strange. Also, for correctness, the claim that "KS-GNN has no significant effect" does not apply to the Toy and Video datasets. It is hard to conclude the benefit of keyword frequency regularization from Figure 4; it would be better to show the performance scores along with the visualization. Figure 3: the notation of the figure (especially functions f and g) is confusing.
1. The problem formulation is somewhat unclear in the statement and introduction examples.
NIPS_2021_952
NIPS_2021
- Some important points about the method and the experiments are left unclear (see also questions below). - The writing could be improved (see also Typos & Additional Questions below). - Multiple runs and significance tests are missing. This makes it hard to judge the improvements (Tables 2 & 3).
Most Important Questions
- Line 156: What is q_ij^k here exactly? I thought q_ij was a state flag, such as "2" or "0". But you tokenize it and encode it, so it sounds more like it is something like "Copy(snow)"? (If it is the latter, then what is the meaning of tokenizing and encoding something like "Len(9)"?)
- 192: What exactly is a storyline and what do you need it for?
- The baseline takes the predicate logic constraints as input: How does T6 know what to do with these inputs? Was the model trained on this but without the NRETM module? Can you give an example of what the input looks like? How do these inputs guide which sentences should be generated? Looking at the dataset, it feels like one would need at least the first 2 sentences or so to know how to continue. Maybe this information is now in your constraints, but it would be important to understand what they look like and how they were created. Is there no other suitable baseline for this experiment?
- What is the overhead of your method compared to standard decoding approaches? (You mention GBS can only be used with T5-Base, so your method is more efficient? That would be important to point out.)
- What happens if the decoding process cannot find a sequence that satisfies all constraints?
- Document-level MT: How do you know at test time whether the system translates a particular sentence or not?
- How many sentences are misaligned by Doc-mBART25? What are the s-BLEU and d-BLEU values on the subset that NRETM aligns correctly and Doc does not?
- Why was NEUROLOGIC not used as a comparison baseline?
- What is the dynamic vs. static strategy? In which experiment did you show that dynamic works better than static (from the conclusion)?
Typos & Additional Questions
- Line 40: you could mention here that the examples will be translated into logic forms in the next section.
- Paragraph starting at line 53: Why did you choose these datasets? How will they help evaluate the proposed approach?
- Line 75: a and b should be bold faced?
- 83: "that used" -> "that are used"
- 83: "details" -> "for details"
- Paragraph at line 86: At this point, the state matrix is unclear. What are the initial values? How can the state matrix be used to understand if a constraint is satisfied or not?
- 98: "take[s]" & "generate[s]"
- 108: "be all" -> "all be"
- Paragraph at line 101: What is the dynamic vs. static strategy?
- Paragraph at line 109: The state flag explanation would greatly benefit from an example. Does q_i refer to whether a particular U_i is satisfied?
- Eq 2: What is the meaning of N? Can it change depending on the definition of U_k? Does it mean this constraint is not relevant for x_i?
- 133: Figure 1 should be Figure 2
- Figure 2: What exactly do the "&" rows track?
- Figure 2: Is the state flag matrix equal to the state matrix? If not, how do you go from one to the other?
- Line 146: What does the inf in the superscript signify?
- 177: What is the symbolic operator?
- Paragraph at line 194: Without understanding what a storyline is, it is not clear what the constraints are. An example might be helpful here.
- Line 204: what is the ROUGH-L metric? Do you mean ROUGE-L?
- Line 223: How do you obtain the morphological inflections for the concepts?
- 237: "necessity [of] integrating" - 3.3: How exactly is the document-level MT done? Is the entire input document the input to T5? - 293: "because" typo - 3.4: Where/how exactly is the sentence index used? The paper's broader impact section discusses general potential benefits and issues of text generation (from large language models). It could maybe be tailored a bit better by discussing what effect this proposed work would have on those potential benefits and issues.
- Line 223: How do you obtain the morphological inflections for the concepts?
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper do not advance the state of the art in VQA (on the VQA dataset, which has been out for a while, and a lot of advancement has been made on this dataset), perhaps because the VQA model used in the paper, on top of which CBN is applied, is not the best one out there. But in order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model, preferably ones that are closer to the state of the art (and whose code is publicly available), such as MCB (Fukui et al., EMNLP16) and HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulating does not help. So, it is good to know whether the proposed conditional batch norm can advance the state of the art in VQA or not.
2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variations of ResNets) can lead to.
3. In Table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on the test-standard split, because one can overfit to the test-dev split by uploading multiple entries.
4. Table 2: applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance for GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could the authors please shed some light on this? Why do they think this might be happening?
5. Figure 4 visualization: the visualization in figure (a) is from a ResNet which is not finetuned at all. So, it is not very surprising to see that there are no clear clusters for answer types. However, the visualization in figure (b) uses a ResNet whose batch norm parameters have been finetuned with question information. So, I think a more meaningful comparison for figure (b) would be with the visualization from Ft BN ResNet in figure (a).
6. The first two bullets about contributions (at the end of the intro) can be combined.
7. Other errors/typos: a. L14 and 15: repetition of the word "imagine" b. L42: missing reference c. L56: impact -> impacts
Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convince me that CBN helps fairly developed VQA models as well (though the results have not been reported on a state-of-the-art VQA model). So, I would like to recommend acceptance of the paper. However, I still have a few comments --
1. It seems that there is still some confusion about the test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on the test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per Table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf).
2. The reproduced performance reported for the MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version whether they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with the 62.5% of MCB, because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (Table 4 in the MCB paper).
3.
The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal, title={Multimodal residual learning for visual qa}, author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak}, booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} } 4. As AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variations of ResNets) can lead to.
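Since the review centers on conditional batch normalization (CBN), here is a minimal PyTorch sketch of the general idea as described in the CBN literature: per-channel deltas to the batch-norm scale and shift are predicted from a conditioning vector such as a question embedding. The layer sizes, the `film` name, and the zero initialization are illustrative choices of mine, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Minimal CBN sketch: condition-dependent deltas added to BN scale/shift."""
    def __init__(self, num_channels, cond_dim):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_channels, affine=False)   # normalization only
        self.gamma = nn.Parameter(torch.ones(num_channels))    # base scale
        self.beta = nn.Parameter(torch.zeros(num_channels))    # base shift
        # predicts (delta_gamma, delta_beta) from the conditioning vector
        self.film = nn.Linear(cond_dim, 2 * num_channels)
        nn.init.zeros_(self.film.weight)
        nn.init.zeros_(self.film.bias)   # start as plain batch norm

    def forward(self, x, cond):
        d_gamma, d_beta = self.film(cond).chunk(2, dim=1)       # each (B, C)
        gamma = (self.gamma + d_gamma)[:, :, None, None]
        beta = (self.beta + d_beta)[:, :, None, None]
        return gamma * self.bn(x) + beta

# Usage on a fake feature map and question embedding.
cbn = ConditionalBatchNorm2d(num_channels=64, cond_dim=300)
feats = torch.randn(8, 64, 14, 14)
question = torch.randn(8, 300)
out = cbn(feats, question)   # same shape as feats
```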
ICLR_2021_2143
ICLR_2021
Weakness:
---Lack of novelty--- While the results are impressive (e.g., the MOS score), I feel like the proposed approaches lack novelty. For example,
1) Duration prediction has been around for a while, since FastSpeech 1.
2) GaussianSampling is similar to EATS (https://arxiv.org/abs/2006.03575).
3) FVAE is also not proposed by the authors.
---Lack of some experiments--- I assume the biggest advantage of Non-attentive Tacotron is the robustness in generation quality. If so, does the unsupervised Non-attentive Tacotron still have the advantages of fewer over-generation and word-skipping problems? It would have been nicer if the authors had shown UDR and WDR results in the unsupervised setting too.
---Writings--- Although using FVAE makes the unsupervised experiments successful, the insight on using FVAE to tackle this problem is not clearly stated. The authors must state why the FVAE approach is expected to predict duration better than the naïve version without FVAE. Also, I ask the authors to keep readers in mind a little more when writing the paper. The authors should make the writing clearer when explaining the existing methods. For example, in Section 3, the depth of the explanation of FVAE is too short. Please elaborate more on how it can be formulated in a conditional VAE framework.
Rating: I consider it a "not bad" paper, but I think the impact of this paper is not strong enough to pass the bar of ICLR because of the lack of novelty (as written in the Weakness part). Therefore, I recommend rejection. However, I'd be happy to listen to the authors' opinion regarding this issue.
Questions:
--- Questions on Vanilla upsampling --- It has been shown by FastSpeech 1 that Vanilla upsampling gives a similar score to the Tacotron2 model (3.84 vs 3.86). In this paper, however, it seems like Vanilla upsampling is not working very well compared to Tacotron2 (4.13 vs 4.37). I'd like to ask the authors what the source of this difference could be.
--- Questions on training WaveRNN --- The authors have written that the WaveRNN model was trained on predicted features. In this case, I assume the ground truth waveform and the predicted features are unaligned, because Tacotron2 autoregressively decodes features, which must be different from the ground truth mel-spectrogram. How could WaveRNN be trained well enough in this setting? Were the predicted features predicted using teacher forcing?
---Lack of novelty--- While the results are impressive (e.g., the MOS score), I feel like the proposed approaches lack novelty. For example,
ICLR_2021_458
ICLR_2021
W1. Clarity
The organization of the paper is such that the reader has to refer to the appendix a lot. My biggest concern on clarity is the "theoretical" results, which are not rigorous and at times unsupported. Further, some statements/claims are not precise or clear enough for me to be convinced that the method is well-motivated and is doing what it is claimed to be doing.
W2. Soundness
I have a lot of concerns and questions here as I read through Sect. 3. At a high-level, I don't see a clear connection between "improved variance control of prediction y^ or the smoothness of loss landscape" and "zero-shot learning effectiveness." Details below. This is in part due to poor clarity.
W3. Experiments
IMO, if the main claim is really about the effectiveness of the two tricks and the proposed class normalization, then the experiments should go beyond one zero-shot learning starting point --- a 3-layer MLP (Table 2). If baseline methods already adopt some of these tricks, this should be made clear, and it should be checked whether removing these tricks leads to inferior performance. If baseline methods do not adopt some of these tricks, these tricks, especially class normalization, could be applied to show improved performance. If it is difficult to apply these tricks, further explanation should be given (generally, also mention the applicability of these tricks). This is done to some degree in the continual setting.
W4. Related work
As I mentioned in W3, it is unclear which methods are linear/deep, and which methods have already benefited from existing/proposed tricks.
Detailed comments (mainly to clarify my points about weaknesses)
Statement 1: The main claim for this part is that this statement provides "a theoretical understanding of the trick" and "allows to speed up the search [of the optimal value of \gamma]." However, I feel that we need further justification of the correlation between Statement 1 (variance of y^_c, "better stability" and "the training would not stale") and the zero-shot learning accuracy for this to be the reason "why normalization + scaling works." My understanding is that the Appendix simply validates that Eq. (4) seems to hold in practice. Moreover, is the usual search region [5,10] actually effective? Do we have stronger supporting empirical evidence than the fact that three groups of practitioners (Li et al. 2019, Zhang et al. 2019, Guo et al. 2020), who may have influenced each other, used it? Finally, can the authors comment on the validity of the multiple assumptions in Appendix A? To what degree does each of them hold in practice?
Statements 2 and 3: Why wouldn't the following statement in Sect. 3.3 invalidate Statement 1? "This may create an impression that it does not matter how we initialize the weights — normalization would undo any fluctuations. However it is not true, because it is still important how the signal flows, i.e. for an unnormalized and unscaled logit value" It is unclear (at least not from the beginning) what understanding attribute normalization has to do with initialization of the weights. Similar to my comments on Statement 1, why should we believe that the explanation in Sect. 3.3 and Sect. 3.4 is the reason for zero-shot learning effectiveness? In particular, the authors again claim that the main bottleneck in improving zero-shot learning is "variance control" (the end of Sect. 3.3). I also have a hard time understanding some statements in Appendix H, which is needed to motivate the following statement in Sect.
3.3: "And these assumptions are safe to assume only for z but not for a_c, because they do not hold for the standard datasets (see Appendix H)."
H1: Would this statement still be true after we transform a_c with an MLP?
H2: Why is it not "a sensible thing to do" if we just want zero mean and unit variance?
H3: Why is "such an approach far from being scalable"?
H4: What if these are things like word embeddings?
H5: Fig. 12 and Fig. 13 are not explained.
H6: Histograms in Fig. 13 look quite normal.
How useful is Statement 2? Why is the connection with Xavier initialization important? Why is "preserving the variance between z and y~" in Statement 3 important for zero-shot learning?
Improved smoothness: The claim of "improved smoothness" at the end of Sect. 3 and Appendix F is really hard to understand.
F1: How do the authors define "irregular loss surface"?
F2: "Santurkar et al. (2018) showed that batch-wise standardization procedure decreases the Lipschitz constant of a model, which suggests that our class-wise standardization will provide the same impact." This is not very precise and seems unsupported. Please make it clear how. If this is a hypothesis, please state so. Similarly to my comments on Statements 1-3, how is improved smoothness related to zero-shot learning effectiveness?
Other more minor comments:
Abstract: Are the authors the ones to "generalize ZSL to a broader problem"? Please tone down the claim if not.
After Eq. (2): Why does attribute normalization look "inconsiderable" (possibly this is not the right word?), or why is it "surprising" that this is preferred in practice? Don't most zero-shot learning methods use this (see, for example, Table 4 in [A])?
Suggestions for references for attribute normalization: this can be improved; I can trace this back to much earlier work such as [A] and [B] (though I think this fact is stated more explicitly in [A]).
Under Table 1, "These two tricks work well and normalize the variance to a unit value when the underlying ZSL model is linear (see Figure 1), but they fail when we use a multi-layer architecture.": Could the authors provide a reference or evidence to support this? I think it is also important to provide a clear statement of what separates a "linear" model from a "multi-layer" one.
The first paragraph of Sect. 3: Could you provide references for the motivations for different activation functions? Further, it is unclear that all of them perform normalization.
The second paragraph of Sect. 3: What exactly limits "the tools" for zero-shot learning vs. supervised learning? Further, it would also be nice to separate traditional supervised learning with balanced classes from the imbalanced case; see, e.g., [C].
What is the closest existing zero-shot model to the one the authors describe in Sect. 3.1? Why is the described model considered/selected?
[A] Synthesized Classifiers for Zero-Shot Learning
[B] Zero-Shot Learning by Convex Combination of Semantic Embeddings
[C] Class-Balanced Loss Based on Effective Number of Samples
3. At a high-level, I don’t see a clear connection between “improved variance control of prediction y^ or the smoothness of loss landscape” and “zero-shot learning effectiveness.” Details below. This is in part due to poor clarity.
NIPS_2018_761
NIPS_2018
[Weakness] * How to set the parameter S remains a problem. * Algorithm SMILE is interesting, but their theoretical results on its performance are not easy to interpret. * No performance comparison with existing algorithms [Recommendation] I recommend this paper to be evaluated as "a good submission; an accept". Their problem formalization is clear, and the SMILE algorithm and its theoretical results are interesting. All their analyses are evaluated asymptotically, so I worry about how large the constant factors are. It would make this manuscript more valuable if it were shown, theoretically and empirically, how good their algorithms (OOMM & SMILE) are compared to other existing algorithms. [Detailed Comments] p.7 Th 3 & Cor 1: C^G and C^B look like random variables. If that is true, then they should not be used as parameters of T's order. Maybe the authors want to use their upper bounds shown above instead. p.7 Sec. 5: Write the values of parameter S. [Comments to Authors' Feedback] Setting parameter S: The asymptotic relation shown in Th 3 is a relation between two functions. It is impossible to estimate S from the estimated M for a specific n using such an asymptotic functional relation.
* How to set the parameter S remains a problem.
ARR_2022_104_review
ARR_2022
1. it utilizes quantization technology but does not compare with the other quantization approaches. 2. there is still a softmax computation left in the presented approach. 3. energy consumption is only estimated within the attention module; even though the reduction in a Transformer block is added (17%), the reduction of the full model (with the classifier) is not reported. I have concerns that the overall reduction may not be sufficiently significant. 4. the L1 norm between vectors of 2 matrices incurs a higher memory cost than the matrix multiplication. 5. experiments were performed on 3 language pairs with the base setting; I wonder whether the approach can perform well in challenging settings (Transformer Big and deep Transformers). 6. the maximum performance loss (-0.78 BLEU) might be a significant loss. It's recommended to report the overall energy reduction of the full model, and to conduct experiments on more language pairs and with the big setting on a few of them.
1. it utilizes quantization technology but does not compare with the other quantization approaches.
NIPS_2017_645
NIPS_2017
- The main paper is dense. This is despite the commendable efforts by the authors to make their contributions as readable as possible. I believe it is due to NIPS page limit restrictions; the same set of ideas presented at their natural length would make for a more easily digestible paper. - The authors do not quite discuss computational aspects in detail (other than a short discussion in the appendix), but it is unclear whether their proposed methods can be made practically useful for high dimensions. As stated, their algorithm requires solving several LPs in high dimensions, each involving a parameter that is not easily calculable. This is reflected in the authors’ experiments which are all performed on very small scale datasets. - The authors mainly seem to focus on SSC, and do not contrast their method with several other subsequent methods (thresholded subspace clustering (TSC), greedy subspace clustering by Park, etc) which are all computationally efficient as well as come with similar guarantees.
- The authors mainly seem to focus on SSC, and do not contrast their method with several other subsequent methods (thresholded subspace clustering (TSC), greedy subspace clustering by Park, etc) which are all computationally efficient as well as come with similar guarantees.
ICLR_2022_2403
ICLR_2022
1: The theoretical analysis in Theorem 1 is unclear and weak. It is unclear what the error bound in Theorem 1 means. The authors need to analyze and compare the theoretical results to other comparable methods. 2: The title is ambiguous and may lead to inappropriate reviewers. 3: I see no code attached to this submission, which makes me a bit concerned about reproducibility.
1: The theoretical analysis in Theorem 1 is unclear and weak. It is unclear what the error bound in Theorem 1 means. The authors need to analyze and compare the theoretical results to other comparable methods.
NIPS_2018_840
NIPS_2018
1. It is confusing to me what the exact goal of this paper is. Are we claiming the multi-prototype model is superior to other binary classification models (such as linear SVM, kNN, etc.) in terms of interpretability? Why do we have two sets of baselines for higher-dimensional and lower-dimensional data? 2. In Figure 3, for the baselines on the left hand side, what if we sparsify the trained models to reduce the number of selected features and compare accuracy to the proposed model? 3. Since the parameter for the sparsity constraint has to be manually picked, can the authors provide any experimental results on the sensitivity of this parameter? A similar issue arises when picking the number of prototypes. Update after Author's Feedback: All my concerns are addressed by the authors' additional results. I'm changing my score based on that.
2. In Figure 3, for the baselines on the left hand side, what if we sparsify the trained models to reduce the number of selected features and compare accuracy to the proposed model?
ICLR_2021_1832
ICLR_2021
The paper perhaps bites off a little more than it can chew. It might be best if the authors focused on their theoretical contributions in this paper, added more text and intuition about the extensions of their current bias-free NNs, fleshed out their analyses of the lottery ticket hypothesis and stopped at that. The exposition and experiments done with tropical pruning need more work. Its extension to convolutional layers is a non-trivial but important aspect that the authors are strongly encouraged to address. This work could possibly be written up into another paper. Similarly, the work done towards generating adversarial samples could definitely do with more detailed explanations and experiments. Probably best left to another paper. Contributions: The theoretical contributions of the work are significant and interesting. The fact that the authors have been able to take their framework and apply it to multiple interesting problems in the ML landscape speaks to the promise of their theory and its resultant perspectives. The manner in which the tropical geometric framework is applied to empirical problems however, requires more work. Readability: The general organization and technical writing of the paper are quite strong, in that concepts are laid out in a manner that make the paper approachable despite the unfamiliarity of the topic for the general ML researcher. The language of the paper however, could do with some improvement; Certain statements are written such that they are not the easiest to follow, and could therefore be misinterpreted. Detailed comments: While there are relatively few works that have explicitly used tropical geometry to study NN decision boundaries, there are others such as [2] which are similar in spirit, and it would be interesting to see exactly how they relate to each other. Abstract: It gets a little hard to follow what the authors are trying to say when they talk about how they use the new perspectives provided by the geometric characterizations of the NN decision boundaries. It would be helpful if the tasks were clearly enumerated. Introduction: “For instance, and in an attempt to…” Typo – delete “and”. Similar typos found in the rest of the section too, addressing which would improve the readability of the paper a fair bit. Preliminaries to tropical geometry: The preliminaries provided by the authors are much appreciated, and it would be incredibly helpful to have a slightly more detailed discussion of the same with some examples in the appendix. To that end, it would be a lot more insightful to discuss ex. 2 in Fig. 1, in addition to ex. 1. What exactly do the authors mean by the “upper faces” of the convex hull? The dual subdivision and projection π need to be explained better. Decision boundaries of neural networks: The variable ‘p’ is not explicitly defined. This is rather problematic since it has been used extensively throughout the rest of the paper. It would make sense to move def. 6 to the section discussing preliminaries. Digesting Thm. 2: This section is much appreciated and greatly improves the accessibility of the paper. It would however be important, to provide some intuition about how one would study decision boundaries when the network is not bias-free, in the main text. In particular, how would the geometry of the dual subdivision δ ( R ( x ) ) change? 
On a similar note, how do things change in practice when studying deep networks that are not bias free, given that, “Although the number of vertices of a zonotope is polynomial in the number of its generating line segments, fast algorithms for enumerating these vertices are still restricted to zonotopes with line segments starting at the origin”? Can Prop. 1 and Cor. 1 be extended to this case trivially? Tropical perspective to the lottery ticket hypothesis: It would be nice to quantify the (dis)similarity in the shape of the decision boundaries polytopes across initializations and pruning using something like the Wasserstein metric. Tropical network pruning: How are λ 1 , λ 2 chosen? Any experiments conducted to decide on the values of the hyper-parameters should be mentioned in the main text and included in the appendix. To that end, is there an intuitive way to weight the two hyper-parameters relative to each other? Extension to deeper networks: Does the order in which the pruning is applied to different layers really make a difference? It would also be interesting to see whether this pruning can be parallelized in some way. A little more discussion and intuition regarding this extension would be much appreciated. Experiments: The descriptions of the methods used as comparisons are a little confusing – in particular, what do the authors mean when they say “pruning for all parameters for each node in a layer” Wouldn’t these just be the weights in the layer? “…we demonstrate experimentally that our approach can outperform all other methods even when all parameters or when only the biases are fine-tuned after pruning” – it is not immediately obvious why one would only want to fine-tune the biases of the network post pruning and a little more intuition on this front might help the reader better appreciate the proposed work and its contributions. Additionally, it might be an unfair comparison to make with other methods, since the objective of the tropical geometry-based pruning is preservation of decision boundaries while that of most other methods is agnostic of any other properties of the NN’s representational space. Going by the results shown in Fig. 5, it would perhaps be better to say that the tropical pruning method is competitive with other pruning methods, rather than outperforming them (e.g., other methods seem to do better with the VGG16 on SVHN and CIFAR100) “Since fully connected layers in DNNs tend to have much higher memory complexity than convolutional layers, we restrict our focus to pruning fully connected layers.” While it is true that fully connected layers tend to have higher memory requirements than convolutional ones, the bulk of the parameters in modern CNNs still belong to convolutional layers. Moreover, the most popular CNNs are now fully convolutional (e.g., ResNet, UNet) which would mean that the proposed methods in their current form would simply not apply to them. Comparison against tropical geometry approaches for network pruning – why are the accuracies for the two methods different when 100% of the neurons are kept and the base architecture used is the same? The numbers reported are à (100, 98.6, 98.84) Tropical adversarial attacks: Given that this topic is not at all elaborated upon in the main text (and none of the figures showcase any relevant results either), it is strongly recommended that the authors either figure out a way to allocate significantly more space to this section, or not include it in this paper. 
(The idea itself though seems interesting and could perhaps make for another paper in its own right.) References: He et al. 2018a and 2018b seem to be the same. [1] Zhang L. et al., “Tropical Geometry of Deep Neural Networks”, ICML 2018. [2] Balestriero R. and Baraniuk R., “A Spline Theory of Deep Networks”, ICML 2018.
1. What exactly do the authors mean by the “upper faces” of the convex hull? The dual subdivision and projection π need to be explained better. Decision boundaries of neural networks: The variable ‘p’ is not explicitly defined. This is rather problematic since it has been used extensively throughout the rest of the paper. It would make sense to move def.
NIPS_2016_283
NIPS_2016
weaknesses of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also, the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100), because the authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval that could spoil the results. Another remark: - In my opinion the claim that evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization on the other hand standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: *
- In my opinion the claim that evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization on the other hand standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor:
NIPS_2016_431
NIPS_2016
1. It seems the asymptotic performance analysis (i.e., big-oh notation of the complexity) is missing. How is it improved from O(M^6)? 2. On line 205, it should be Fig. 1 instead of Fig. 5.1. In LaTeX, please put '\label' after the '\caption', and the bug will be solved.
2. On line 205, it should be Fig. 1 instead of Fig. 5.1. In LaTeX, please put '\label' after the '\caption', and the bug will be solved.
ICLR_2022_3199
ICLR_2022
The main concern is that the improvements on GLUE are moderate, considering that the baselines are relatively lower than others. The original BERT uses a batch size of 256 and is trained for 1M steps. Your model uses a batch size of 2K and is trained for 300K steps. It can be seen that your models are trained longer than the original BERT, while your baselines are still lower. Also, it has been widely proven that the original BERT results are quite underestimated. Taking all these into consideration, I cannot say the results of Dict-BERT are significantly better than vanilla BERT. Training and fine-tuning Dict-BERT is relatively complicated (compared to vanilla BERT). Considering its outcomes, I am not convinced that Dict-BERT can be generally used and adapted. As we also need to append the definitions of rare words in the fine-tuning stage, it introduces additional computing cost and might not be friendly to long-sequence tasks (such as document classification and machine reading comprehension). Unfortunately, the authors did not discuss these aspects. Major comments: Figure 1: I am not sure if the rare words are always masked. At least in this example, Covid-19 is masked. Maybe a clear illustration should be given on whether the rare words are always masked. Section 3.4.2: You mentioned that the negative sample is RANDOMLY chosen from the whole vocabulary. However, in Figure 1, the negative sample SARS seems to be a very good negative to Covid-19. It has been widely proven that hard negative samples are much more useful in discriminative training. Did you try to find a better way rather than randomly choosing a negative sample? If there are n rare words, does that mean we have to append n negative samples accordingly? As mentioned in weaknesses, Dict-BERT may not be friendly to long-sequence tasks. Have you tried your model on those tasks (such as SQuAD)? Minor comments: Section 4.1: we use BERT -> We use BERT Section 4.2: 252581 -> 252,581 Please try to reorganize the positions of the figures and tables. For example, you first mention Table 1 on page 8, but the actual table 1 is on page 6, which is not friendly to the readers. I have to read back and forth in these sections. Section 4.6: In Knowledge Attention v.s. Full Attention.. What is Appendix x.x?
252581 -> 252,581 Please try to reorganize the positions of the figures and tables. For example, you first mention Table 1 on page 8, but the actual table 1 is on page 6, which is not friendly to the readers. I have to read back and forth in these sections. Section 4.6: In Knowledge Attention v.s. Full Attention.. What is Appendix x.x?
j9e3WVc49w
EMNLP_2023
- The claim is grounded in empirical findings and does not provide a solid mathematical foundation. - Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1, then LS and KD are equivalent. - The authors only compared one of the existing works in this area and did not sufficiently address related works. Here are some related works for LS and KD: Lee, Dongkyu, Ka Chun Cheung, and Nevin Zhang. "Adaptive Label Smoothing with Self-Knowledge in Natural Language Generation." Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022. Zhang, Zhilu, and Mert Sabuncu. "Self-distillation as instance-specific label smoothing." Advances in Neural Information Processing Systems 33 (2020): 2184-2195. Li Yuan, Francis EH Tay, Guilin Li, Tao Wang, and Jiashi Feng. "Revisit knowledge distillation: a teacher-free framework." arXiv preprint arXiv:1909.11723, 2019. Yun, Sukmin, et al. "Regularizing class-wise predictions via self-knowledge distillation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
- Although I acknowledge that KD and LS are not identical, I believe KD can be viewed as a special form of LS. This is particularly true when the teacher network is uniformly distributed and the temperature is set at 1, then LS and KD are equivalent.
NIPS_2022_285
NIPS_2022
Terminology: Introduction l. 24-26 "pixels near the peripheral of the object of interest can generally be challenging, but not relevant to topology." I think this statement is problematic. When considering the inverse e.g. in the case of a surface or vessel, that a foreground pixel changes to background. Such a scenario would immediately lead to a topology mismatch (Betti error 1). Terminology: "topologically critical location" --> I find this terminology to be not optimally chosen. I agree that the warping concept appears to help with identifying pixels which may close loops or fill holes. However, considering the warping I do not see a guarantee that such "locations" (as in the exact location) which I understand to refer to individual or groups of pixels are indeed part of the real foreground, nor are these locations unique. A slightly varying warping may propose a set of different pixels. The identified locations are more likely to be relevant to topological errors. --> this statement should be statistically supported. Compared to what exactly? Does this rely on the point estimate for any pixel? Or given a particularly trained network? Theorem 1: The presentation of a well known definition from Kong et al. is trivial and could be presented in a different way. Experimentation, lack of implementation details: In Table 2 and a dedicated section, the authors show an ablation study on the influence of lamda on the results. Lamda is a linear parameter, weighting the contribution of the new loss to the overall loss. Similarly, the studied baseline methods, e.g. TopoNet [24], DMT [25], and clDice [42] have a loss weighting parameter. It would be important to understand how and if the parameters of the baselines were chosen and experimented with. (I understand that the authors cannot train ablation studies for all baselines etc.) However, it is an important information to understand the results in Table 1. Terminology: l. 34 "to force the neural network to memorize them" --> I would tone down this statement, in my understanding, the neural network does not memorize an exact "critical point" as such in TopoNet [24]. Minor: I find the method section to be a bit wordy, it could be compressed on the essential definitions. There exist several grammatical errors, please double-check these with a focus on plurals and articles. E.g. l. 271 "This lemma is naturally generalized to 3D case." l. 52 "language of topology" I find this to be an imprecise definition or formulation. Note: After rebuttal and discussion I increased the rating to 5.
34 "to force the neural network to memorize them" --> I would tone down this statement, in my understanding, the neural network does not memorize an exact "critical point" as such in TopoNet [24]. Minor: I find the method section to be a bit wordy, it could be compressed on the essential definitions. There exist several grammatical errors, please double-check these with a focus on plurals and articles. E.g. l.
NIPS_2020_748
NIPS_2020
* There are numerous approaches to reduce ConvNet's memory footprint and computational resources at inference time, including but not limited to channel pruning, dynamic computational graph, and model distillation. Why is removing shortcut connection the best way to achieve the same goal? The baselines considered in Table 3 and 4 are rather lacking. For example, how does the proposed method compare to: 1. Pruning method that reduces ResNet-50 channel counts to match the memory footprint and FLOPs of plain-CNN 50. What will be the drop in accuracy? 2. Distill a ResNet-50 model to a smaller ResNet with similar memory footprint as plain-CNN 50. How does this compare to the proposed training scheme? * L107-111 states that "At the early stages of the training process, the gradients from ResNets play a bigger role with a larger weight, and at the later stages, the gradients contributed by ResNets fade out ...". There doesn't seem to be any empirical results or citations to back up this claim. * L133-L143, are there any ablation studies with path 1 - path 3 removed? What is the result of a naive KD method (without those skip connections)? EDIT AFTER REBUTTAL ================== I have read the authors' rebuttal and other reviews. I think the authors have done an excellent job in addressing my concerns. The updated table of results and efficiency metrics are very convincing. Please incorporate this into your final paper! I have thus updated my rating to accept.
2. Distill a ResNet-50 model to a smaller ResNet with similar memory footprint as plain-CNN 50. How does this compare to the proposed training scheme?
ACL_2017_792_review
ACL_2017
1. Unfortunately, the results are rather inconsistent and one is not left entirely convinced that the proposed models are better than the alternatives, especially given the added complexity. Negative results are fine, but there is insufficient analysis to learn from them. Moreover, no results are reported on the word analogy task, besides being told that the proposed models were not competitive - this could have been interesting and analyzed further. 2. Some aspects of the experimental setup were unclear or poorly motivated, for instance w.r.t. to corpora and datasets (see details below). 3. Unfortunately, the quality of the paper deteriorates towards the end and the reader is left a little disappointed, not only w.r.t. to the results but with the quality of the presentation and the argumentation. - General Discussion: 1. The authors aim "to learn representations for both words and senses in a shared emerging space". This is only done in the LSTMEmbed_SW version, which rather consisently performs worse than the alternatives. In any case, what is the motivation for learning representations for words and senses in a shared semantic space? This is not entirely clear and never really discussed in the paper. 2. The motivation for, or intuition behind, predicting pre-trained embeddings is not explicitly stated. Also, are the pre-trained embeddings in the LSTMEmbed_SW model representations for words or senses, or is a sum of these used again? If different alternatives are possible, which setup is used in the experiments? 3. The importance of learning sense embeddings is well recognized and also stressed by the authors. Unfortunately, however, it seems that these are never really evaluated; if they are, this remains unclear. Most or all of the word similarity datasets considers words independent of context. 4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated. 5. Some of the test sets are not independent, i.e. WS353, WSSim and WSRel, which makes comparisons problematic, in this case giving three "wins" as opposed to one. 6. The proposed models are said to be faster to train by using pre-trained embeddings in the output layer. However, no evidence to support this claim is provided. This would strengthen the paper. 7. Table 4: why not use the same dimensionality for a fair(er) comparison? 8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached. 9. A reference to Table 2 is missing. 10. There is no description of any training for the word analogy task, which is mentioned when describing the corresponding dataset.
4. What is the size of the training corpora? For instance, using different proportions of BabelWiki and SEW is shown in Figure 4; however, the comparison is somewhat problematic if the sizes are substantially different. The size of SemCor is moreover really small and one would typically not use such a small corpus for learning embeddings with, e.g., word2vec. If the proposed models favor small corpora, this should be stated and evaluated.
NIPS_2022_2041
NIPS_2022
• The paper is a bit hard to follow, and several sections needed more than one reading pass. I suggest improving the structure (introduction->method->experiments) and putting more focus on the IEM in Fig. 3, which is in my view the main figure in this paper. Also, improve the visualization of Fig. 7 and Fig. 10. It will be good to exemplify a few failure cases of your model (e.g., on the FG or medical datasets). Perhaps other FG factors are needed? (e.g., good continuation?).
• The paper is a bit hard to follow, and several sections needed more than one reading pass. I suggest improving the structure (introduction->method->experiments) and putting more focus on the IEM in Fig. 3, which is in my view the main figure in this paper. Also, improve the visualization of Fig. 7 and Fig.
NIPS_2018_933
NIPS_2018
weakness of this paper, in my opinion, is that it lacks strong empirical results. In particular, from what I can tell, none of the human-optimized models actually beat models that are optimized by some easily-computable proxy for interpretability (e.g., mean path length in a decision tree). Indeed, the bulk of the paper talks about optimizing for interpretability proxies, though the title and rhetoric focus on the human-in-the-loop component. What, then, do we get out of human-optimization? The authors argue in line 249 that “Human response times suggest preferences between proxies that rank models differently.” This statement comes from Figure 3a, but the variance in the results shown in that figure seems so large that it’s hard to make any conclusive claims. Do you have some measure of statistical significance? Looking at Fig 4a, it seems like models that optimize for path length do as well as models that optimize for the number of non-zero features. Despite the above shortcomings, I think that this paper contains interesting ideas and is a solid contribution to human-in-the-loop ML research. It is hard to criticize the paper too much for not being able to show state-of-the-art results on such a challenging task, and this paper certainly stands out in terms of trying to involve humans in model building in a meaningful way. Other comments: 1) Is the likelihood formulation necessary? It complicates notation, and it’s not obvious to me that the SILF likelihood or a prior based on mean-response-time is particularly probabilistically meaningful. I think it might simplify the discourse to just say that we want accuracy to be above some threshold, and then we also want mean response time to be high, without dealing with all of the probabilistic notions. (e.g., as a reader, I didn’t understand the SILF measure or its motivation). 2) Does mean response time take into account correctness? A model that users can simulate quickly but incorrectly seems like it should be counted as uninterpretable. 3) Also, why is HIS proportional to the *negative* inverse of mean response time? 4) The local explanations part of the paper is the least convincing. It seems like a model that is locally explainable can be globally opaque; and none of the human results seem to have been run on the CoverType dataset anyway? Given that the paper is bursting at the seams in terms of content (and overflowing into the supplement :), perhaps leaving that part out might help the overall organization. 5) On local explanations, line 154 says: “We note that these local models will only be nontrivial if the data point x is in the vicinity of a decision boundary; if not, we will not succeed in fitting a local model.” Why is this true? Why can’t we measure differences in probabilities, e.g., “if feature 1 were higher, the model would be less confident in class y”?
4) The local explanations part of the paper is the least convincing. It seems like a model that is locally explainable can be globally opaque; and none of the human results seem to have been run on the CoverType dataset anyway? Given that the paper is bursting at the seams in terms of content (and overflowing into the supplement :), perhaps leaving that part out might help the overall organization.
NIPS_2016_69
NIPS_2016
- The paper is somewhat incremental. The developed model is a fairly straightforward extension of the GAN for static images. - The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions: - What is the size of the beach/golf course/train station/hospital datasets? - How do the video generation results from the network trained on 5000 hours of video look? Summary: While somewhat incremental, the paper seems to have enough novelty for a poster. The visual results are encouraging but with many artifacts. The action classification results demonstrate benefits of the learnt representation compared with random weights but are significantly below state-of-the-art results on the considered dataset.
- The generated videos have significant artifacts. Only some of the beach videos are kind of convincing. The action recognition performance is much below the current state-of-the-art on the UCF dataset, which uses more complex (deeper, also processing optic flow) architectures. Questions:
BSGQHpGI1Q
ICLR_2025
- The overall motivation of using characteristic function regularization is not clear. - The abstract states "improves performance … by preserving essential distributional properties…" -> How does the preservation of such properties aid in generalization? - The abstract states that the method is meant to be used in conjunction with existing regularization methods. Were the presented results obtained using multiple forms of regularization (such as $L_2 + \psi_2$), or only single forms of regularization? - In the conclusion, the authors state the following: "integrating these techniques can offer a probability theory based perspective on model architecture construction which allows assembling relevant regularization mechanisms." —> I do not see how this can be done after reading the work. Can you give a concrete example of how the results presented in this work may give any insight into model architecture construction? ## Overall While I found the work interesting and captivating to read, after finishing the manuscript I am left wondering what possible benefit the regularization provides over existing methods. The results are somewhat ambiguous and I find they do not demonstrate why or when a clear benefit can be achieved by applying the given regularization method. If the authors could provide some insight as to when and why the method would be successful, I think it would go a long way in demonstrating the real-world usefulness of characteristic function regularization. Even if this could be demonstrated in a synthetic toy setting, it could provide interesting insights.
- The overall motivation of using characteristic function regularization is not clear.
NIPS_2021_2418
NIPS_2021
- The class of problems is not very well motivated. The CIFAR example is contrived and built for demonstration purposes. It is not clear what application would warrant sequentially (or in batches) and jointly selecting tasks and parameters to simultaneously optimize multiple objective functions. Although one could achieve lower regret in terms of total task-function evaluations by selecting the specific task(s) to evaluate rather than evaluating all tasks simultaneously, the regret may not be better with respect to timesteps. For example, in the assemble-to-order, even if no parameters are evaluated for task function (warehouse s) at timestep t, that warehouse is going to use some (default) set of parameters at timestep t (assuming it is in operation---if this is all on a simulator then the importance of choosing s seems even less well motivated). There are contextual BO methods (e.g. Feng et al 2020) that address the case of simultaneously tuning parameters for multiple different contexts (tasks), where all tasks are evaluated at every timestep. Compelling motivating examples would help drive home the significance of this paper. - The authors take time to discuss how KG handles the continuous task setting, but there are no experiments with continuous tasks - It’s great that entropy methods for conditional optimization are derived in Section 7 in the appendix, but why are these not included in the experiments? How does the empirical performance of these methods compare to ConBO? - The empirical performance is not that strong. EI is extremely competitive and better in low-budget regimes on ambulance and ATO - The performance evaluation procedure is bizarre: “We measure convergence of each benchmark by sampling a set of test tasks S_test ∼ P[s] ∝ W(s) which are never used during optimization”. Why are the methods evaluated on test tasks not used during the optimization since all benchmark problems have discrete (and relatively small) sets of tasks? Why not evaluate performance on the expected objective (i.e. true, weighted) across tasks? - The asymptotic convergence result for Hybrid KG is not terribly compelling - It is really buried in the appendix that approximate gradients are used to optimize KG using Adam. I would feature this more prominently. - For the global optimization study on hybrid KG, it would be interesting to see performance compared to other recent kg work (e.g. one-shot KG, since that estimator formulation can be optimized with exact gradients) Writing: - L120: this is a run-on sentence - Figure 2: left title “poster mean” -> “posterior mean” - Figure 4: mislabeled plots. The title says validation error, but many subplots appear to show validation accuracy. Also, “hyperaparameters” -> hyperparameters - L286: “best validation error (max y)” is contradictory - L293: “We apply this trick to all algorithms in this experiment”: what is “this experiment”? - The appendix is not using NeurIPS 2021 style files - I recommend giving the appendix a proofread: Some things that jump out P6: “poster mean”, “peicewise-linear” P9: “sugggest” Limitations and societal impacts are discussed, but the potential negative societal impacts could be expounded upon.
- The authors take time to discuss how KG handles the continuous task setting, but there are no experiments with continuous tasks - It’s great that entropy methods for conditional optimization are derived in Section 7 in the appendix, but why are these not included in the experiments? How does the empirical performance of these methods compare to ConBO?
NIPS_2018_630
NIPS_2018
- While there is not much related work, I am wondering whether more experimental comparisons would be appropriate, e.g. with min-max networks, or Dugas et al., at least on some dataset where such models can express the desired constraints. - The technical delta from monotonic models (existing) to monotonic and convex/concave seems rather small, but sufficient and valuable, in my opinion. - The explanation of lattice models (S4) is fairly opaque for readers unfamiliar with such models. - The SCNN architecture is pretty much given as-is and is pretty terse; I would appreciate a bit more explanation, comparison to ICNN, and maybe a figure. It is not obvious for me to see that it leads to a convex and monotonic model, so it would be great if the paper would guide the reader a bit more there. Questions: - Lattice models expect the input to be scaled in [0, 1]. If this is done at training time using the min/max from the training set, then some test set samples might be clipped, right? Are the constraints affected in such situations? Does convexity hold? - I know the author's motivation (unlike ICNN) is not to learn easy-to-minimize functions; but would convex lattice models be easy to minimize? - Why is this paper categorized under Fairness/Accountability/Transparency, am I missing something? - The SCNN getting "lucky" on domain pricing is suspicious given your hyperparameter tuning. Are the chosen hyperparameters ever at the end of the searched range? The distance to the next best model is suspiciously large there. Presentation suggestions: - The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned. - "We have found it easier to be confident about applying ceterus paribus convexity;" -- the word "confident" threw me off a little here, as I was not sure if this is about model confidence or human interpretability. I suspect the latter, but some slight rephrasing would be great. - Unless I missed something, unconstrained neural nets are still often the best model on half of the tasks. After thinking about it, this is not surprising. It would be nice to guide the readers toward acknowledging this. - Notation: the x[d] notation is used in eqn 1 before being defined on line 133. - line 176: "corresponds" should be "corresponding" (or alternatively, replace "GAMs, with the" -> "GAMs; the") - line 216: "was not separately run" -> "it was not separately run" - line 217: "a human can summarize the machine learned as": not sure what this means, possibly "a human can summarize what the machine (has) learned as"? or "a human can summarize the machine-learned model as"? Consider rephrasing. - line 274, 279: write out "standard deviation" instead of "std dev" - line 281: write out "diminishing returns" - "Result Scoring" strikes me as a bit too vague for a section heading, it could be perceived to be about your experiment result. Is there a more specific name for this task, maybe "query relevance scoring" or something? === I have read your feedback. Thank you for addressing my observations; moving appendix D to the main seems like a good idea. I am not changing my score.
- The introduction claims that "these shape constraints do not require tuning a free parameter". While technically true, the *choice* of employing a convex or concave constraint, and an increasing/decreasing constraint, can be seen as a hyperparameter that needs to be chosen or tuned.
ACL_2017_818_review
ACL_2017
1) Many aspects of the approach need to be clarified (see detailed comments below). What worries me the most is that I did not understand how the approach makes knowledge about objects interact with knowledge about verbs such that it allows us to overcome reporting bias. The paper gets very quickly into highly technical details, without clearly explaining the overall approach and why it is a good idea. 2) The experiments and the discussion need to be finished. In particular, there is no discussion of the results of one of the two tasks tackled (lower half of Table 2), and there is one obvious experiment missing: Variant B of the authors' model gives much better results on the first task than Variant A, but for the second task only Variant A is tested -- and indeed it doesn't improve over the baseline. - General Discussion: The paper needs quite a bit of work before it is ready for publication. - Detailed comments: 026 five dimensions, not six Figure 1, caption: "implies physical relations": how do you know which physical relations it implies? Figure 1 and 113-114: what you are trying to do, it looks to me, is essentially to extract lexical entailments (as defined in formal semantics; see e.g. Dowty 1991) for verbs. Could you please explicit link to that literature? Dowty, David. " Thematic proto-roles and argument selection." Language (1991): 547-619. 135 around here you should explain the key insight of your approach: why and how does doing joint inference over these two pieces of information help overcome reporting bias? 141 "values" ==> "value"? 143 please also consider work on multimodal distributional semantics, here and/or in the related work section. The following two papers are particularly related to your goals: Bruni, Elia, et al. "Distributional semantics in technicolor." Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1. Association for Computational Linguistics, 2012. Silberer, Carina, Vittorio Ferrari, and Mirella Lapata. " Models of Semantic Representation with Visual Attributes." ACL (1). 2013. 146 please clarify that your contribution is the specific task and approach -- commonsense knowledge extraction from language is long-standing task. 152 it is not clear what "grounded" means at this point Section 2.1: why these dimensions, and how did you choose them? 177 explain terms "pre-condition" and "post-condition", and how they are relevant here 197-198 an example of the full distribution for an item (obtained by the model, or crowd-sourced, or "ideal") would help. Figure 2. I don't really see the "x is slower than y" part: it seems to me like this is related to the distinction, in formal semantics, between stage-level vs. individual-level predicates: when a person throws a ball, the ball is faster than the person (stage-level) but it's not true in general that balls are faster than people (individual-level). I guess this is related to the pre-condition vs. post-condition issue. Please spell out the type of information that you want to extract. 248 "Above definition": determiner missing Section 3 "Action verbs": Which 50 classes do you pick, and you do you choose them? Are the verbs that you pick all explicitly tagged as action verbs by Levin? 306ff What are "action frames"? How do you pick them? 326 How do you know whether the frame is under- or over-generating? Table 1: are the partitions made by frame, by verb, or how? That is, do you reuse verbs or frames across partitions? 
Also, proportions are given for 2 cases (2/3 and 3/3 agreement), whereas counts are only given for one case; which? 336 "with... PMI": something missing (threshold?) 371 did you do this partitions randomly? 376 "rate *the* general relationship" 378 "knowledge dimension we choose": ? ( how do you choose which dimensions you will annotate for each frame?) Section 4 What is a factor graph? Please give enough background on factor graphs for a CL audience to be able to follow your approach. What are substrates, and what is the role of factors? How is the factor graph different from a standard graph? More generally, at the beginning of section 4 you should give a higher level description of how your model works and why it is a good idea. 420 "both classes of knowledge": antecedent missing. 421 "object first type" 445 so far you have been only talking about object pairs and verbs, and suddenly selectional preference factors pop in. They seem to be a crucial part of your model -- introduce earlier? In any case, I didn't understand their role. 461 "also"? 471 where do you get verb-level similarities from? Figure 3: I find the figure totally unintelligible. Maybe if the text was clearer it would be interpretable, but maybe you can think whether you can find a way to convey your model a bit more intuitively. Also, make sure that it is readable in black-and-white, as per ACL submission instructions. 598 define term "message" and its role in the factor graph. 621 why do you need a "soft 1" instead of a hard 1? 647ff you need to provide more details about the EMB-MAXENT classifier (how did you train it, what was the input data, how was it encoded), and also explain why it is an appropriate baseline. 654 "more skimp seed knowledge": ? 659 here and in 681, problem with table reference (should be Table 2). 664ff I like the thought but I'm not sure the example is the right one: in what sense is the entity larger than the revolution? Also, "larger" is not the same as "stronger". 681 as mentioned above, you should discuss the results for the task of inferring knowledge on objects, and also include results for model (B) (incidentally, it would be better if you used the same terminology for the model in Tables 1 and 2) 778 "latent in verbs": why don't you mention objects here? 781 "both tasks": antecedent missing The references should be checked for format, e.g. Grice, Sorower et al for capitalization, the verbnet reference for bibliographic details.
598 define term "message" and its role in the factor graph.
ARR_2022_236_review
ARR_2022
- My main criticism is that the "mismatched" image caption dataset is artificial and may not capture the kind of misinformation that is posted on platforms like Twitter. For instance, someone posting a fake image of a lockdown at a particular place may not just be about a mismatch between the image and the caption, but may rather require fact-checking, etc. Moreover, the in-the-wild datasets on which a complementary evaluation is conducted are also more about mismatched image-caption and not the real misinformation (lines 142-143). Therefore, the extent to which this dataset can be used for misinformation detection is limited. I would have liked to see this distinction between misinformation and mismatched image captions being clear in the paper. - Also, since the dataset is artificially created, the dataset itself might have a lot of noise. For instance, the collected "pristine" set of tweets may not be pristine enough and might instead contain misinformation as well as out-of-context images. I would have liked to see more analysis around the quality of the collected dataset and the amount of noise it potentially has. - Since this is a new dataset, I would have liked to see evaluation of more models (other than just CLIP). But given that it is only a short paper, it is probably not critical (the paper makes enough contributions otherwise) - Table 4 and Table 5: Are the differences statistically significant? ( especially important because the hNews and Twitter datasets are really small) - Lines 229-240: Are the differences between the topics statistically significant?
- Also, since the dataset is artificially created, the dataset itself might have a lot of noise. For instance, the collected "pristine" set of tweets may not be pristine enough and might instead contain misinformation as well as out-of-context images. I would have liked to see more analysis around the quality of the collected dataset and the amount of noise it potentially has.
ACL_2017_178_review
ACL_2017
- The evaluation reported in this paper includes only intrinsic tasks, mainly on similarity/relatedness datasets. As the authors note, such evaluations are known to have very limited power in predicting the utility of embeddings in extrinsic tasks. Accordingly, it has become recently much more common to include at least one or two extrinsic tasks as part of the evaluation of embedding models. - The similarity/relatedness evaluation datasets used in the paper are presented as datasets recording human judgements of similarity between concepts. However, if I understand correctly, the actual judgements were made based on presenting phrases to the human annotators, and therefore they should be considered as phrase similarity datasets, and analyzed as such. - The medical concept evaluation dataset, ‘mini MayoSRS’ is extremely small (29 pairs), and its larger superset ‘MayoSRS’ is only a little larger (101 pairs) and was reported to have a relatively low human annotator agreement. The other medical concept evaluation dataset, ‘UMNSRS’, is more reasonable in size, but is based only on concepts that can be represented as single words, and were represented as such to the human annotators. This should be mentioned in the paper and makes the relevance of this dataset questionable with respect to representations of phrases and general concepts. - As the authors themselves note, they (quite extensively) fine tune their hyperparameters on the very same datasets for which they report their results and compare them with prior work. This makes all the reported results and analyses questionable. - The authors suggest that their method is superb to prior work, as it achieved comparable results while prior work required much more manual annotation. I don't think this argument is very strong because the authors also use large manually-constructed ontologies, and also because the manually annotated dataset used in prior work comes from existing clinical records that did not require dedicated annotations. - In general, I was missing more useful insights into what is going on behind the reported numbers. The authors try to treat the relation between a phrase and its component words on one hand, and a concept and its alternative phrases on the other, as similar types of a compositional relation. However, they are different in nature and in my mind each deserves a dedicated analysis. For example, around line 588, I would expect an NLP analysis specific to the relation between phrases and their component words. Perhaps the reason for the reported behavior is dominant phrase headwords, etc. Another aspect that was absent but could strengthen the work, is an investigation of the effect of the hyperparameters that control the tradeoff between the atomic and compositional views of phrases and concepts. General Discussion: Due to the above mentioned weaknesses, I recommend to reject this submission. I encourage the authors to consider improving their evaluation datasets and methodology before re-submitting this paper. Minor comments: - Line 069: contexts -> concepts - Line 202: how are phrase overlaps handled? - Line 220: I believe the dimensions should be |W| x d. Also, the terminology ‘negative sampling matrix’ is confusing as the model uses these embeddings to represent contexts in positive instances as well. - Line 250: regarding ‘the observed phrase just completed’, it not clear to me how words are trained in the joint model. 
The text may imply that only the last words of a phrase are considered as target words, but that doesn’t make sense. - Notation in Equation 1 is confusing (using c instead of o) - Line 361: Pedersen et al 2007 is missing in the reference section. - Line 388: I find it odd to use such a fine-grained similarity scale (1-100) for human annotations. - Line 430: The newly introduced term ‘strings’ here is confusing. I suggest to keep using ‘phrases’ instead. - Line 496: Which task exactly was used for the hyper-parameter tuning? That’s important. I couldn’t find that even in the appendix. - Table 3: It’s hard to see trends here, for instance PM+CL behaves rather differently than either PM or CL alone. It would be interesting to see development set trends with respect to these hyper-parameters. - Line 535: missing reference to Table 5.
- The similarity/relatedness evaluation datasets used in the paper are presented as datasets recording human judgements of similarity between concepts. However, if I understand correctly, the actual judgements were made based on presenting phrases to the human annotators, and therefore they should be considered as phrase similarity datasets, and analyzed as such.
ARR_2022_223_review
ARR_2022
The majority of the weaknesses in the paper seem to stem from confusion and inconsistencies between some of the prose and the results. 1. Figure 2, as it is, isn't totally convincing there is a gap in convergence times. The x-axis of the graph is time, when it would have been more convincing using steps. Without an efficient, factored attention for prompting implementation a la [He et al. (2022)](https://arxiv.org/abs/2110.04366) prompt tuning can cause slow downs from the increased sequence length. With time on the x-axis it is unclear if prompt tuning requires more steps or if each step just takes more time. Similarly, this work uses $0.001$ for the learning rate. This is a lot smaller than the suggested learning rate of $0.3$ in [Lester et al (2021)](https://aclanthology.org/2021.emnlp-main.243/), it would have been better to see if a larger learning rate would have closed this gap. Finally, this gap with finetuning is used as a motivating examples but the faster convergence times of things like their initialization strategy is never compared to finetuning. 2. Confusion around output space and label extraction. In the prose (and Appendix A.3) it is stated that labels are based on the predictions at `[MASK]` for RoBERTa Models and the T5 Decoder for generation. Scores in the paper, for example the random vector baseline for T5 in Table 2 suggest that the output space is restricted to only valid labels as a random vector of T5 generally produces nothing. Using this rank classification approach should be stated plainly as direct prompt reuse is unlikely to work for actual T5 generation. 3. The `laptop` and `restaurant` datasets don't seem to match their descriptions in the appendix. It is stated that they have 3 labels but their random vector performance is about 20% suggesting they actually have 5 labels? 4. Some relative performance numbers in Figure 3 are really surprising, things like $1$ for `MRPC` to `resturant` transfer seem far too low, `laptop` source to `laptop` target on T5 doesn't get 100, Are there errors in the figure or is where something going wrong with the datasets or implementation? 5. Prompt similarities are evaluated based on correlation with zero-shot performance for direct prompt transfer. Given that very few direct prompt transfers yield gain in performance, what is actually important when it comes to prompt transferability is how well the prompt works as an initialization and does that boost performance. Prompt similarity tracking zero-shot performance will be a good metric if that is in turn correlated with transfer performance. The numbers from Table 1 generally support that this as a good proxy method as 76% of datasets show small improvements when using the best zero-shot performing prompt as initialization when using T5 (although only 54% of datasets show improvement for RoBERTa). However Table 2 suggests that this zero-shot performance isn't well correlated with transfer performance. In only 38% of datasets does the best zero-shot prompt match the best prompt to use for transfer (And of these 5 successes 3 of them are based on using MNLI, a dataset well known for giving strong transfer results [(Phang et al., 2017)](https://arxiv.org/abs/1811.01088)). Given that zero-shot performance doesn't seem to be correlated with transfer performance (and that zero-shot transfer is relatively easy to compute) it seems like _ON_'s strong correlation would not be very useful in practice. 6. 
While recent enough that it is totally fair to call [Vu et al., (2021)](https://arxiv.org/abs/2110.07904) concurrent work, given the similarity of several approaches there should be a deeper discussion comparing the two works. Both the prompt transfer via initialization and the prompt similarity as a proxy for transferability are present in that work. Given the numerous differences (Vu et al transfer mostly focuses on large mixtures transferring to tasks and performance while this work focuses on task to task transfer with an eye towards speed. _ ON_ as an improvement over the Cosine similarities which are also present in Vu et al) it seems this section should be expanded considering how much overlap there is. 7. The majority of Model transfer results seem difficult to leverage. Compared to cross-task transfer, the gains are minimal and the convergence speed ups are small. Coupled with the extra time it takes to train the projector for _Task Tuning_ (which back propagation with the target model) it seems hard to imagine situations where this method is worth doing (that knowledge is useful). Similarly, the claim on line 109 that model transfer can significantly accelerate prompt tuning seems lie an over-claim. 8. Line 118 claims `embedding distances of prompts do not well indicate prompt transferability` but Table 4 shows that C$_{\text{average}}$ is not far behind _ON_. This claim seems over-reaching and should instead be something like "our novel method of measuring prompt similarity via model activations is better correlated with transfer performance than embedding distance based measures" 1. Line 038: They state that GPT-3 showed extremely large LM can give remarkable improvements. I think it would be correct to have one of their later citations on continually developed LM as the one that showed that. GPT-3 mostly showed promise for Few-Shot evaluation, not that it get really good performance on downstream tasks. 2. Line 148: I think it would make sense to make a distinction between hard prompt work updates the frozen model (Schick and Schütez, etc) from ones that don't. 3. Line 153: I think it makes sense to include [_Learning How to Ask: Querying LMs with Mixtures of Soft Prompts_ (Qin and Eisner, 2021)](https://aclanthology.org/2021.naacl-main.410.pdf) in the citation list for work on soft prompts. 4. Figure 3: The coloring of the PI group makes the text very hard to read in Black and White. 5. Table 1: Including the fact that the prompt used for initialization is the one that performed best in direct transfer in the caption as well as the prose would make the table more self contained. 6. Table 2: Mentioning that the prompt used as cross model initialization is from _Task Tuning_ in the caption would make the table more self contained. 7. Line 512: It is mentioned that _ON_ has a drop when applied to T$5_{\text{XXL}}$ and it is suggested this has to do with redundancy as the models grow. I think this section could be improved by highlighting that the Cosine based metrics have a similar drop (suggesting this is a fact of the model rather than the fault of the _ON_ method). Similarly, Figure 4 shows the dropping correlation as the model grows. Pointing out the that the _ON_ correlation for RoBERTA$_{\text{large}}$ would fit the tend of correlation vs model size (being between T5 Base and T5 Large) also strengths the argument but showing it isn't an artifact of _ON_ working poorly on encoder-decoder models. 
I think this section should also be reordered to show that this drop is correlated with model size. Then the section can end with the hypothesizing and limited exploration of model redundancy. 8. Figure 6: It would have been interesting to see how the unified label space worked for T5 rather than RoBERTa, as the generative nature of T5's decoding is probably more vulnerable to issues stemming from different labels. 9. _ON_ could be pushed further. An advantage of prompt tuning is that the prompt is transformed by the model's attention based on the value of the prompt. Without an input to the model, the prompt's activations are most likely dissimilar to the kind of activations one would expect when actually using the prompt. 10. Line 074: This sentence is confusing. Perhaps something like "Thus" over "Hence only"? 11. Line 165: Remove "remedy,"
2. Line 148: I think it would make sense to make a distinction between hard-prompt work that updates the frozen model (Schick and Schütze, etc.) and work that doesn't.
ICLR_2021_1740
ICLR_2021
are in its clarity and the experimental part. Strong points Novelty: The paper provides a novel approach for estimating the likelihood of p(class image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn’t verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s->y (see weakness) Weak points Experiments and Reproducibility: The experiments show some signal, but are not through enough: • shifted-MNIST: it is not clear why shift=0 is much better than shift~ N ( 0 , σ 2 ) , since both cases incorporate a domain shift • It would be useful to show the performance the model and baselines on test samples from the observational (in) distribution. • Missing details about evaluation split for shifted-MNIST: Did the experiments used a validation set for hyper-param search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data? • It would be useful to provide an ablation study, since the approach has a lot of "moving parts". • It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST. • What were the ranges used for hyper-param search? What was the search protocol? Clarity: • The parts describing the method are hard to follow, it will be useful to improve their clarity. • It will be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them. • What makes the VAE inference mappings (x->s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts. • What makes v distinctive of s? Is it because y only depends on s? • Does the approach uses any information on the labels of the domain? Correctness: I was not convinced about the causal relation s->y. I.e. that the semantic concept cause the label, independently of the image. I do agree that there is a semantic concept (e.g. s) that cause the image. But then, as explained by [Arjovsky 2019] the labelling process is caused by the image. I.e. s->image->y, and not as argued by the paper. The way I see it, is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate how the model will change if replacing s->y by y_tx->s ? Other comments: • I suggest discussing [2,3,4], which learned similar stable mechanisms in images. • I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction" e.g. see [3,4] • The title may be confusing. OOD usually refers to anomaly-detection, while this paper relates to domain-generalization and domain-adaptation. • It will be useful to clarify that the approach doesn't use any external-semantic-knowledge. • Section 3.2 - I suggest to add a first sentence to introduce what this section is about. • About remark in page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces. [1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness [2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models [3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness [4] Atzmon et al. 
2020, A causal view of compositional zero-shot recognition EDIT: Post rebuttal I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking. Additionally, I am concerned with one of the author's replies saying All methods achieve accuracy 1 ... on the training distribution, because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): Achieving perfect accuracy on the observational distribution, usually means relying on the spurious correlations. And under domain-shift scenarios, this would hinder the performance on the shifted-distribution.
• Section 3.2 - I suggest to add a first sentence to introduce what this section is about.
aRlH9AkiEA
EMNLP_2023
1. It's still unclear how topic entities can improve the relationship representations. This claim is not very intuitive. 2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental. 3. Related work is missing. 4. Some methods (e.g., KnowBERT, CorefBERT) from the related work are not selected as baselines for comparison.
2. The improvements on different datasets are trivial and the novelty of this paper is limited. Lots of previous works focus on this topic. Just adding topic entities seems incremental.
NIPS_2022_242
NIPS_2022
> Although we can learn many things from Table 1, I have a concern about the limited evaluation in the experiments. Many researchers have paid attention to the powerful generation capability of Transformer decoders, but this paper only measures perplexities, i.e., the probabilistic outputs of language models. That seems insufficient to conclude that there is no performance degradation on generation tasks. How about zero/few-shot evaluation results? With various kinds of tasks, the contribution of this paper would be strengthened. I also have another concern about the acceleration of this method, the vector-wise quantization and decomposition method. There is a short explanation about acceleration in Appendix A and Table 3, but I think it is not sufficient to prove that there is no acceleration overhead. There is no description of the environment setup for the inference measurements. I think this paper should discuss this issue in detail. To support their assertions, I think this paper can be refined in two aspects: 1) additional experimental results on the performance of generation models, and 2) additional arguments about the acceleration environment and implementation.
1) additional experimental results on the performance of generation models, and
NIPS_2017_110
NIPS_2017
Weaknesses of this work include that it is a not-too-distant variation of prior work (see Schiratti et al, NIPS 2015), the search for hyperparameters for the prior distributions and sampling method does not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application exhibit the greatest relative error, and the experiments are not described with adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score, but the authors evaluate a numerical score, the time scale is not defined in Figure 3a, and no overall statistics are reported in the evaluation, only figures with a select set of examples, and there was no mention of out-of-sample evaluation. Specific comments: - l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters). - l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear. - l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function. - Throughout, the authors use the term constrains and should change it to constraints. - l124: What is meant by the (*)? - l134: Do the authors mean m=2? - l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text. - l280: "Then we made it explicit" instead of "Then we have explicit it"
- l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function.
ICLR_2021_1744
ICLR_2021
Weakness: This work simply applies the meta-learning method to the federated learning setting. I can’t see any technical contribution, either from the meta-learning perspective or from the federated learning perspective. The experimental results are not convincing because the data partition is not designed for federated learning. Reusing a data partition from the meta-learning context is unrealistic for a federated learning setting. The title is misleading or over-claimed. Only the adaptation phase costs a few rounds, but the communication cost of the meta-training phase is still high. The non-IID partition is unrealistic. The authors simply reuse the dataset partitions used in the meta-learning context, which is not a real federated setting. In other words, the proposed method can only work on distributions similar to the meta-learning setting. Some meta-learning-related benefits are intertwined with reducing communication costs. For example, the authors claim the proposed method has better generalization ability; however, this comes from the contribution of the meta-learning. More importantly, this property can only be obvious when the cross-client data distribution meets the assumption made in the context of meta-learning. The comparison is unfair to FedAvg. At least, we should let FedAvg use the same clients and dataset resources as those used in meta-training and few-round adaptation. “Episodic training” is a term from meta-learning. I suggest the authors introduce meta-learning and its advantages first in the Introduction. Few-shot FL-related works are not fully covered. Several recently published knowledge-distillation-based few-shot FL methods should be discussed. Overall Rating: I tend to clearly reject this paper because: 1) the proposed framework is a simple combination of meta-learning and federated learning. I cannot see any technical contribution. 2) Claiming that the few-round adaptation can reduce communication costs for federated learning is misleading, since the meta-training phase is also expensive. 3) The data partition is directly borrowed from meta-learning, which is unrealistic in federated learning. ---------after rebuttal-------- The rebuttal does not convince me with evidence, thus I keep my overall rating. I hope the authors can explicitly compare the total cost of the meta-learning phase plus the FL fine-tuning phase with other baselines.
1) the proposed framework is a simple combination of meta-learning and federated learning. I cannot see any technical contribution.
ICLR_2023_3203
ICLR_2023
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar. 2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L. 3. Some of the baseline results do not matched with their original paper. I roughly checked the original Mask2former paper but the performance reported in this paper is much lower than the one reported in the original Mask2former paper. For example, for panoptic segmentation, Mask2former reported 51.9 but in this paper it's 50.4, and the AP for instance segmentation reported in the original paper is 43.7 but here what reported is 42.4. Meanwhile, there are some missing references about panoptic segmentation that should be included in this paper [5, 6]. Reference [1] Chen, Yunpeng, et al. "A^ 2-nets: Double attention networks." NeurIPS 2018. [2] Cao, Yue, et al. "Gcnet: Non-local networks meet squeeze-excitation networks and beyond." T-PAMI 2020 [3] Yinpeng Chen, et al. Dynamic convolution: Attention over convolution kernels. CVPR 2020. [4] Zhang, Hang, et al. "Resnest: Split-attention networks." CVPR workshop 2022. [5] Zhang, Wenwei, et al. "K-net: Towards unified image segmentation." Advances in Neural Information Processing Systems 34 (2021): 10326-10338. [6] Wang, Huiyu, et al. "Max-deeplab: End-to-end panoptic segmentation with mask transformers." CVPR 2021
1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
Ux0BEP46fd
ICLR_2025
* The key technique behind the dataset collection is to enrich existing video datasets with aligned, cross-modality representations, which is achieved by leveraging off-the-shelf pre-trained models, e.g. adding captions to video frames and audio. The quality of the multi-modal datasets is directly affected by the selected models. However, this part is only briefly mentioned in Section 3.1. More ablations or discussion would be beneficial here, as I presume this affects the generic applicability of the proposed dataset collection approach, i.e. whether a potential distribution shift between the pre-trained model checkpoint and the source video dataset would affect the quality of the generated dataset. * The proposed model demonstrates impressive performance on many benchmarks (setting new SoTA scores), but more careful analysis is probably needed, especially for some pretty "old" benchmarks whose data might have been indirectly seen by the model via the "data curation" process. More details about the evaluation procedures would be helpful. * There are quite a few grammatical errors and typos (e.g. L181-182, L212, L216, etc.). The paper would benefit from more careful revision.
* The proposed model demonstrates impressive performance on many benchmarks (setting new SoTA scores), but more careful analysis is probably needed, especially for some pretty "old" benchmarks whose data might have been indirectly seen by the model via the "data curation" process. More details about the evaluation procedures would be helpful.
RsnWEcuymH
ICLR_2024
- My main concern is that the performance improvement, though generally positive, is not particularly significant, not to mention that proxy-based methods also achieve pretty good IM results while using only a negligible amount of time compared to BOIM (or other simulation-based methods in general). - Other choices of graph kernel, such as the random walk or Graphlet kernel, are not considered or experimented with. There are probably easy tricks to turn them into valid GP kernels. - Despite the time reduction introduced by BOIM, proxy-based methods are still substantially cheaper. Would it be possible to use proxy-based methods as heuristics to seed BOIM or other zero-order optimization methods (e.g., CMA-ES)? - While the authors have shown that GSS has theoretically lower variance, it’d be nice to compare against random sampling and check empirically how well it performs. - Results presentation can be improved. For example, in Figures 2 and 3, the y-axis is labeled as “performance”, which is ambiguous, and the runtime is not represented in those figures. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in tables can also be highlighted. Minor: - Typo in Section 2: “Mockus (1998) and has since become…” → “Mockus (1998) has since become…”
- Results presentation can be improved. For example, in Figures 2 and 3, the y-axis is labeled as “performance”, which is ambiguous, and the runtime is not represented in those figures. A scatter plot with x/y axes being runtime/performance could help the reader better understand and interpret the results. Best results in tables can also be highlighted. Minor:
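A minimal sketch of the runtime-versus-performance scatter plot suggested above; the method names, runtimes, and influence-spread values below are placeholders for illustration, not numbers taken from the paper.

```python
import matplotlib.pyplot as plt

# Placeholder results: (runtime in seconds, influence spread) per method.
results = {
    "BOIM":        (1200.0, 0.412),
    "Proxy-based": (3.5,    0.395),
    "CMA-ES":      (2400.0, 0.401),
}

fig, ax = plt.subplots(figsize=(4, 3))
for name, (runtime, spread) in results.items():
    ax.scatter(runtime, spread)
    ax.annotate(name, (runtime, spread), textcoords="offset points", xytext=(4, 4))

ax.set_xscale("log")  # runtimes span several orders of magnitude
ax.set_xlabel("Runtime (s, log scale)")
ax.set_ylabel("Influence spread")
ax.set_title("Quality vs. cost trade-off")
fig.tight_layout()
fig.savefig("runtime_vs_performance.png", dpi=150)
```

Such a plot makes both axes of the comparison visible at once, so the reader can judge whether BOIM's extra quality is worth its extra cost relative to the proxy-based heuristics.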
ICLR_2023_1503
ICLR_2023
• 2D and 3D information encoding are from previous work. This work just simply combines them, and the model architecture is the same as Graphormer[1]. The novelty is not enough. • Supervised pretraining based on the prediction of homo-lumo gap may lead to negative transfer. For example, on QM9 in downstream experiments, Transformer-M performs poorly on most tasks other than homo, lumo, and gap. This may be contradictory to the description "general-purpose neural network model" claimed in this paper. • Lack of description of PDBBind data processing and splitting in downstream tasks. • Absence of some ablation experiments: ①(p2D, p3D, p2D&3D) = (1:0:0) / (0:1:0) / (0:0:1); ②Only using the 3D position denoising task while pretraining. Other questions: • Do the authors consider encoding 1D molecular data mode, e.g., SMILES, simultaneously? • What do the authors think about the possibility of negative transfer on downstream tasks due to the supervised signal introduced during pretraining? • Whether there is data leakage during finetuning on PDBBind, because we know that the general, refined, and core sets have overlapping parts.
• Supervised pretraining based on the prediction of homo-lumo gap may lead to negative transfer. For example, on QM9 in downstream experiments, Transformer-M performs poorly on most tasks other than homo, lumo, and gap. This may be contradictory to the description "general-purpose neural network model" claimed in this paper.
Q8ypeYHKFO
ICLR_2024
1) For the experimental comparisons, a detailed description of all the baselines would be useful. 2) I have some questions about the sufficiency of the baselines and comparisons. See below. 3) Would be nice to have a result on one more setup but it is not a major drawback.
3) Would be nice to have a result on one more setup but it is not a major drawback.
ICLR_2023_802
ICLR_2023
- There are no experiments to support the claim that A-DGN can specifically alleviate/mitigate oversquashing. - There are no experiments to support the claim that A-DGN can effectively handle long-range dependencies specifically on graph data requiring long-range reasoning. - The poor long-range modelling ability of DGNs is attributed to oversquashing and vanishing/exploding gradients but the poor performance could also be due to oversmoothing, another phenomenon observed in the context of very deep graph networks [Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning, In AAAI'18].
- The poor long-range modelling ability of DGNs is attributed to oversquashing and vanishing/exploding gradients but the poor performance could also be due to oversmoothing, another phenomenon observed in the context of very deep graph networks [Deeper Insights into Graph Convolutional Networks for Semi-Supervised Learning, In AAAI'18].
ICLR_2022_2555
ICLR_2022
Weakness: The writing should be improved for proofreading. 1.1. Some figures are far away from the paragraphs describing them. For example, the double selective activation is defined on page 5, while the illustration of this activation appears on page 3 (Fig 1(A)). 1.2. Mathematic formula should be more professional. In the 7th line of section 2, x^(k) are n-dimensional vectors and the relation "<" is not defined for them. Texts in math formula should not be oblique, such as "otherwise", "argmax", and "if". The claimed contributions are minor or not supported. 2.1. Interpretable constructions for the universal approximation is not novel. Chapter 4 of [1] provides a visual (and can be proven rigorously) proof of the universal approximation, which indicates that neural networks can approximate functions using "almost" piecewise constant functions. 2.2. The proof of the universal approximation using TNN only holds for 1-dimensional inputs, as admitted by authors in the paragraph begins with "generalization to n-dimensions" at the end of page 4. 2.3. The claimed “universal approximation” is not exactly the one machine learning cares. This paper only considers fitting training samples, while the universal approximation in machine learning considers approximating a target function on a compact set. 2.4. The resistance to catastrophic forgetting is not supported by theorems. Authors claim that this contribution is supported by Proposition 1 on page 4, but Proposition 1 has nothing to do with catastrophic forgetting. Catastrophic forgetting is " the tendency for knowledge of previously learned task(s) (e.g. task A) to be abruptly lost as information relevant to the current task (e.g. task B) is incorporated"[2], which needs two sequential tasks. Although D and DUA are two training sets, Proposition 1 has nothing to do with catastrophic forgetting in the following sense: i) DUA contains D as a subset, thus the knowledge in the previous task (D) also exists in the current task and forgetting does not exist obviously; ii) the statement of Proposition 1 is trivial since we can always choose A=D’ and the conclusion in Proposition 1 is the same as Theorem 1. 2.5. The generalization is not the one machine learning cares about. The paragraph begins with “generalizability to test dataset” on page 4 claims that “they (test data) can be incorporated into the training dataset to create a better model with the above-said error upper-bound”. In machine learning, the label of test data should not be used for training. [1] Nielsen, M. A. (2015). Neural networks and deep learning (Vol. 25). San Francisco, CA: Determination press. [2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13), 3521-3526.
25). San Francisco, CA: Determination press. [2] Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., ... & Hadsell, R. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13), 3521-3526.
kRjLBXWn1T
ICLR_2025
1. I find that separating Theorem 3.3 into parts A and B is tangential to the story and overly confusing. In reality, we do not have full control over our correction network and its Lipschitz constant. Therefore, we can never determine the best scheduling. This section seems like its being theoretical for its own sake! It might be clearer to simply present Lemma A.2 of the appendix in its most general form: $$W_2(p^b_t, p^f_t) \le W_2(p^b_{t_0}, p^f_{t_0}) \cdot e^{L(t-t_0)}$$ and say that improving the Wasserstein distance on the RHS for $t_0$ can effectively bound the Wasserstein distance on the LHS, especially for $t$ that is sufficiently close to $t_0$. I don't think A, B, and examples 3.4 and 3.5 are particularly insightful when it is not directly factored into the decisions made in the experiments. The results in A and B can still be included, but in the appendix. 2. The parallel sampling section seems slightly oversold! To my understanding, while both forward passes can be done in parallel, it cannot be done in one batch because the forward call is on different methods. Could you please provide a time comparison between parallel and serial sampling on one experiment with the hardware that you have? 3. The statement of Lemma 3.6 seems to spill over to the rest of the main text and I generally do not agree with the base assumption that $p_t^f = p^b_{t, \sigma}$ which is the main driver for Lemma 3.6. Please let me know if I am misunderstanding this! 4. I don't find the comparison between this method and Dai and Wipf [B] appropriate! [B] trains a VAE on VAE to fix problems associated with the dimensionality mismatch between the data manifold and the manifold induced by the (first) VAE. That is not a concern in flow-matching and diffusion models as these models are known not to suffer from the manifold mismatch difficulties as much. 5. Although FIDs are still being widely used for evaluation, there have been clear flaws associated with them and the simplistic Inception network [C]. Please use DinoV2 Frechet Distances for the comparisons from [C], in addition to the widely used FID metric. 6. Please also provide evaluations "matching" the same NFEs in the corresponding non-corrected models. ### Minor points 1. I personally do not agree with the notation abuse of rewriting the conditional probability flow $p_t(x | z)$ as the marginal probability flow $p_t(x)$; it is highly confusing in my opinion. 2. Rather than introducing the new probability flows $\nu_t$ and $\mu_t$, in theorem 3.3, please consider using the same $p^b_t$ and $p^f_t$ for reduced notation overhead, and then restate the theorem in full formality for the appendix. 3. (nitpick) In Eq. (8), $t$ should be a sub-index of $u$.
5. Although FIDs are still being widely used for evaluation, there have been clear flaws associated with them and the simplistic Inception network [C]. Please use DinoV2 Frechet Distances for the comparisons from [C], in addition to the widely used FID metric.
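A minimal sketch of how such a Fréchet distance could be computed from two sets of backbone features (e.g., DinoV2 embeddings of real and generated images), assuming the feature matrices have already been extracted; the shapes and random placeholders below are illustrative only.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Frechet distance between Gaussians fitted to two feature sets.

    feats_real, feats_gen: arrays of shape (n_samples, feat_dim),
    e.g. backbone embeddings of real and generated images.
    """
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    cov_r = np.cov(feats_real, rowvar=False)
    cov_g = np.cov(feats_gen, rowvar=False)

    # Matrix square root of the covariance product; tiny imaginary parts
    # caused by numerical error are discarded.
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):
        covmean = covmean.real

    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))

# Illustrative usage with random stand-ins for the two feature sets.
rng = np.random.default_rng(0)
print(frechet_distance(rng.normal(size=(512, 64)), rng.normal(size=(512, 64))))
```

Because the routine only sees feature matrices, swapping the Inception backbone for DinoV2 changes how the features are extracted, not how the distance itself is computed.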
34QscjTwOc
ICLR_2024
- As far as I see, there is no mention of limitations of this work, let alone a Limitations section. No work is perfect, and every work should include a Limitations section so that, only two reasons given here for concision, (1) readers are quickly aware of cases in which this work applies and in which it doesn't and (2) readers have confidence that the paper is at least somewhat cognizant of (1). I'm unsure whether this is in the Appendix or Supplementary Material. - Very limited Related Works section. A large section of related works that is relevant is "sparsity in neural networks," and this could be broken down into multiple relevant subsections, such as "sparsity over training progress", "sparsity with respect to {eigenvalues, spectral norms, Hessian properties [1], etc.}" - Limited rigor in original (at least original as far as I know, such as the categorization of salient features) concepts. - What quantitative rigor justifies the categorization of a feature into one of the 5 mentioned categories? - Is there some sort of goodness of fit test or statistical hypothesis test or principled approach for assigning a feature to a category? - What if the training epochs were extended and the utility trended in a way that changed categorization? - What was the stopping criteria for training? - Was any analysis done for the reliability of assigning features to categories? - Unclear in several aspects. Some include - Why use only one layer for each of the DNNs? How was this layer selected? How would results changing using a different intermediate layer? - Why use the threshold values for rank, approximation error for salient feature count, the number of training epochs used, among others? - Are the results in Figure 5a, 5b, and 5c each for one "sample", "sentence", and "image" in the single DNN model and single dataset listed? - Do Figures X and Y show results for randomly sampled images? Since it's impossible to confirm whether this was actually the case, are there examples that do not align with these results, or even contradict these results? Is there analysis as to why? - The novelty of using PCA to reduce interaction count seems incremental and the significance of the paper results is unclear to me. Using PCA to reduce the interaction count seems intuitive, as PCA aims to retain the maximum information in the data with the reduced dimensionality chosen, assuming certain assumptions are met. How well are the assumptions met? [1] Dombrowski, Ann-Kathrin, Christopher J. Anders, Klaus-Robert Müller, and Pan Kessel. "Towards robust explanations for deep neural networks." Pattern Recognition 121 (2022): 108194.
- The novelty of using PCA to reduce interaction count seems incremental and the significance of the paper results is unclear to me. Using PCA to reduce the interaction count seems intuitive, as PCA aims to retain the maximum information in the data with the reduced dimensionality chosen, assuming certain assumptions are met. How well are the assumptions met? [1] Dombrowski, Ann-Kathrin, Christopher J. Anders, Klaus-Robert Müller, and Pan Kessel. "Towards robust explanations for deep neural networks." Pattern Recognition 121 (2022): 108194.
NIPS_2022_1250
NIPS_2022
Lack of discussion or motivation for the importance of the proposed idea. Empirical results: could be on toy tasks. The paper pursues an interesting research direction, which tries to unify existing POMDP formalisms. The approach looks very promising. The proposed design of the critic is very interesting. It would become very interesting if the paper could provide some basic empirical results on toy tasks to show all important claims in practice. - Since the unified framework can now obtain provably efficient learning for most POMDP formalisms, are there any limitations of it? E.g., can it do the same for general POMDP formulations (continuous or infinite spaces)? - How can one understand agnostic learning? In the Algorithm, is z just defined as the historical observations? Or is it in the form of a belief?
- Since the unified framework can now obtain provably efficient learning for most POMDP formalisms, are there any limitations of it? E.g., can it do the same for general POMDP formulations (continuous or infinite spaces)?
NIPS_2017_585
NIPS_2017
The main weakness of the paper is in the experiments: there should be more complete comparisons of computation time, and comparisons with the QMC-based methods of Yang et al (ICML2014). Without this the advantage of the proposed method remains unclear. - The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels such as Matern kernels are not included, since their spectra only decay polynomially. In this sense, the results of the paper could be restrictive. - Eq. (3): What is $e_l$? Corollaries 1, 2 and 3 and Theorem 4: All of these results have exponential dependence on the diameter $M$ of the domain of data: the required feature size increases exponentially as $M$ grows. While this factor does not increase as the required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance degrades more quickly than standard random features. This may exhibit a weakness of the proposed approaches (or at least of the theoretical results). - The equation in Line 170: What is $e_i$? - Subsampled dense grid: This approach is what the authors used in the experiments in Section 5. However, it looks like there is no theoretical guarantee for this method. Those having theoretical guarantees seem not to be practically useful. - Reweighted grid quadrature: (i) It looks like there is no theoretical guarantee for this method. (ii) The approach reminds me of Bayesian quadrature, which essentially obtains the weights by minimizing the worst-case error in the unit ball of an RKHS (sketched below). I would like to see a comparison with this approach. (iii) Would it be possible to derive a time complexity? (iv) How do you choose the regularization parameter $\lambda$ in the case of the $\ell_1$ approach? - Experiments in Section 5: (i) The authors report the results on computation time very briefly (320 secs vs. 384 seconds for 28800 features in MNIST, and "The quadrature-based features ... are about twice as fast to generate, compared to random Fourier features ..." in TIMIT). I do not think they are enough: the authors should report the results in the form of tables, for example, varying the number of features. (ii) There should be a comparison with the QMC-based methods of Yang et al. (ICML2014, JMLR2016). It is not clear what the advantage of the proposed method over the QMC-based methods is. (iii) There should be an explanation of the settings of the MNIST and TIMIT classification tasks: what classifiers did you use, and how did you determine the hyper-parameters of these methods? At least such an explanation should be included in the appendix.
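For reference, a brief sketch of the Bayesian (kernel) quadrature construction mentioned in the reweighted-grid-quadrature item, stated in its standard form rather than taken from the paper: for a kernel $k$ with RKHS $\mathcal{H}$ and integration measure $\mu$, the worst-case error of a rule with nodes $x_1, \dots, x_n$ and weights $w$ is

$$\sup_{\|f\|_{\mathcal{H}} \le 1}\left|\int f\,d\mu - \sum_{i=1}^{n} w_i f(x_i)\right| \;=\; \left\|\mu_k - \sum_{i=1}^{n} w_i\, k(\cdot, x_i)\right\|_{\mathcal{H}}, \qquad \mu_k = \int k(\cdot, x)\,d\mu(x),$$

and the minimizing weights solve the linear system $w^{\star} = K^{-1} z$ with $K_{ij} = k(x_i, x_j)$ and $z_i = \int k(x_i, x)\,d\mu(x)$. The comparison asked for in (ii) would then amount to contrasting the paper's reweighting objective (e.g., its $\ell_1$-regularized variant) with this worst-case RKHS criterion.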
- The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels such as Matern kernels are not included, since their spectra only decay polynomially. In this sense, the results of the paper could be restrictive.
ARR_2022_311_review
ARR_2022
__1. Lack of significance test:__ I'm glad to see the paper reports the standard deviation of accuracy among 15 runs. However, the standard deviation of the proposed method overlaps significantly with that of the best baseline, which raises my concern about whether the improvement is statistically significant. It would be better to conduct a significance test on the experimental results. __2. Anomalous result:__ According to Table 3, the performance of BARTword and BARTspan on SST-2 degrades a lot after incorporating text smoothing, why? __3. Lack of experimental results on more datasets:__ I suggest conducting experiments on more datasets to make a more comprehensive evaluation of the proposed method. The experiments on the full dataset instead of that in the low-resource regime are also encouraged. __4. Lack of some technical details:__ __4.1__. Is the smoothed representation all calculated based on pre-trained BERT, even when the text smoothing method is adapted to GPT2 and BART models (e.g., GPT2context, BARTword, etc.)? __4.2__. What is the value of the hyperparameter lambda of the mixup in the experiments? Will the setting of this hyperparameter have a great impact on the result? __4.3__. Generally, traditional data augmentation methods have the setting of __augmentation magnification__, i.e., the number of augmented samples generated for each original sample. Is there such a setting in the proposed method? If so, how many augmented samples are synthesized for each original sample? 1. Some items in Table 2 and Table 3 have Spaces between accuracy and standard deviation, and some items don't, which affects beauty. 2. The number of BARTword + text smoothing and BARTspan + text smoothing on SST-2 in Table 3 should NOT be in bold as they cause degeneration of the performance. 3. I suggest Listening 1 to reflect the process of sending interpolated_repr into the task model to get the final representation
1. Some items in Table 2 and Table 3 have Spaces between accuracy and standard deviation, and some items don't, which affects beauty.
ARR_2022_65_review
ARR_2022
1. The paper covers little qualitative aspects of the domains, so it is hard to understand how they differ in linguistic properties. For example, I think it is vague to say that the fantasy novel is more “canonical” (line 355). Text from a novel may be similar to that from news articles in that sentences tend to be complete and contain fewer omissions, in contrast to product comments which are casually written and may have looser syntactic structures. However, novel text is also very different from news text in that it contains unusual predicates and even imaginary entities as arguments. It seems that the authors are arguing that syntactic factors are more significant in SRL performance, and the experimental results are also consistent with this. Then it would be helpful to show a few examples from each domain to illustrate how they differ structurally. 2. The proposed dataset uses a new annotation scheme that is different from that of previous datasets, which introduces difficulties of comparison with previous results. While I think the frame-free scheme is justified in this paper, the compatibility with other benchmarks is an important issue that needs to be discussed. It may be possible to, for example, convert frame-based annotations to frame-free ones. I believe this is doable because FrameNet also has the core/non-core sets of argument for each frame. It would also be better if the authors can elaborate more on the relationship between this new scheme and previous ones. Besides eliminating the frame annotation, what are the major changes to the semantic role labels? - In Sec. 3, it is a bit confusing why there is a division of source domain and target domain. Thus, it might be useful to mention explicitly that the dataset is designed for domain transfer experiments. - Line 226-238 seem to suggest that the authors selected sentences from raw data of these sources, but line 242-244 say these already have syntactic information. If I understand correctly, the data selected is a subset of Li et al. (2019a)’s dataset. If this is the case, I think this description can be revised, e.g. mentioning Li et al. (2019a) earlier, to make it clear and precise. - More information about the annotators would be needed. Are they all native Chinese speakers? Do they have linguistics background? - Were pred-wise/arg-wise consistencies used in the construction of existing datasets? I think they are not newly invented. It is useful to know where they come from. - In the SRL formulation (Sec. 5), I am not quite sure what is “the concerned word”. Is it the predicate? Does this formulation cover the task of identifying the predicate(s), or are the predicates given by syntactic parsing results? - From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy). - How was the train/dev/test split determined? This should be noted (even if it is simply done randomly).
- From Figure 3 it is not clear to me how ZX is the most similar domain to Source. Grouping the bars by domain instead of role might be better (because we can compare the shapes). It may also be helpful to leverage some quantitative measure (e.g. cross entropy).
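A minimal sketch of the kind of quantitative measure suggested above, assuming per-domain semantic-role counts have been tabulated; the role labels and counts below are made up for illustration.

```python
import numpy as np

def cross_entropy(p_counts, q_counts, eps=1e-9):
    """Cross entropy H(p, q) between two label-count dictionaries."""
    labels = sorted(set(p_counts) | set(q_counts))
    p = np.array([p_counts.get(lab, 0) for lab in labels], dtype=float)
    q = np.array([q_counts.get(lab, 0) for lab in labels], dtype=float)
    p /= p.sum()
    q = (q + eps) / (q + eps).sum()  # smooth so unseen roles do not yield -inf
    return float(-(p * np.log(q)).sum())

# Hypothetical role-frequency counts for the source domain and one target domain.
source = {"A0": 520, "A1": 610, "ADV": 140, "TMP": 90}
target = {"A0": 300, "A1": 280, "ADV": 40, "TMP": 75}
print(f"H(source, target) = {cross_entropy(source, target):.3f}")
```

Reporting the KL divergence instead (cross-entropy minus the entropy of the source distribution) would additionally remove the dependence on how spread out the source distribution itself is.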
NIPS_2020_556
NIPS_2020
* The visual quality/fidelity of the generated images is quite low. Making sure that the visual fidelity on common metrics such as FID matches or is at least close enough to GAN models will be useful to validate that the approach supports high fidelity (as otherwise it may be the case that it achieves compositionality at the expense of lower potential for fine details or high fidelity, as is the case in e.g. VAEs). Given that there have been many works that explore combinations of properties for CelebA images with GANs, showing that the proposed approach can compete with them is especially important. * It is unclear to me if MCMC is efficient in terms of training and convergence. Showing learning plots as well compared to other types of generative models will be useful. * The use of energy models for image generation is much more unexplored compared to GANs and VAEs and so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned through data subsets, is similar to a prior VAE paper. See further details in the related work review part. * Given the visual samples in the paper, it looks as if it might be the case that the model has limited variability in generated images: the face images in figure 3 show that both in the second and 4th rows the model tends to generate images that feature unspecified but correlated properties, such as the blonde hair or the very similar bottom three faces. That’s also the case in figure 5 rows 2-4. Consequently, it gives the sense that the model or sampling may not allow for large variation in the generated images, but rather tend to take typical likely examples, as happened in the earlier GAN models. A quantitative comparison of the variance in the images compared to other types of generative models will be useful to either refute or validate this.
* The use of energy models for image generation is much more unexplored compared to GANs and VAEs and so exploring it further is great. However, note that the motivation and goals of the model -- to achieve compositional generation through logical combination of concepts learned through data subsets, is similar to a prior VAE paper. See further details in the related work review part.
ARR_2022_149_review
ARR_2022
- The attribute-based approach can be useful if the attribute is given. This limits the application of the proposed approach if there is no attribute given but the text is implicitly offensive. - It is not described if the knowledge bases that are inserted in are free from societal biases, or the issue is not affected by such restriction. Comments - I like attacking implicit offensive texts with reasoning chains, but not yet convinced with the example of Fig. 1. If other contexts such as 'S1 is fat/poor' is not given, then the conversation between S1 and S2 seems quite natural. The addressee may ask if bookclubs provide free food without offensive intention. If the word of S2 is to be decided as 'implicitly offensive', then one of the reasoning chain, such as 'you are fat/poor', should be provided as a context. If I correctly understood, I recommend the authors to change the example to more clear-cut and non-ambiguous one. - I found some issues in the provided example; i) AIR is still written as ASR, and ii) there are some empty chains in the full file. Hope this to be checked in the final release. Suggestions - It would be nice if the boundary between explicit and implicit offensive texts is stated clearly. Is it decided as explicit if the statement contains specific terms and offensive? Or specific expression / sentence structure? - Please be more specific on the 'Chain of Reasoning' section, especially line 276. - Please describe more on MNLI corpus, on the reason why the dataset is utilized in the training of entailment system. Typos - TweetEval << two 'L's in line 349
- It is not described if the knowledge bases that are inserted in are free from societal biases, or the issue is not affected by such restriction. Comments - I like attacking implicit offensive texts with reasoning chains, but not yet convinced with the example of Fig.
NIPS_2020_1394
NIPS_2020
1. In both Theorem 1 and Theorem 2, the discount factor \gamma is required to be greater than 1/3. I'm curious about why such a lower bound is imposed. 2. The primary focus of the paper is about the polynomial rate, can the current technique be applied to the case of linear rate?
2. The primary focus of the paper is about the polynomial rate, can the current technique be applied to the case of linear rate?
NIPS_2020_902
NIPS_2020
- The paper could benefit from a better practical motivation; in its current form it will be quite hard for someone who is not at home in this field to understand why they should care about this work. What are specific practical examples in which the proposed algorithm would be beneficial? - The presentation of the simulation study is not really doing the authors a favor. Specifically, the authors do not really comment on why the GPC (benchmark) is performing better than BPC (their method). It would be worth reiterating that this is because of the bandit feedback and of not using information about the form of the cost function. - More generally, the discussion of the simulation study results could be strengthened. It is not really clear what the reader should take away from the results, and some discussion could help a lot with interpreting them properly.
- The presentation of the simulation study is not really doing the authors a favor. Specifically, the authors do not really comment on why the GPC (benchmark) is performing better than BPC (their method). It would be worth reiterating that this is because of the bandit feedback and of not using information about the form of the cost function.
NIPS_2022_69
NIPS_2022
1. This work uses an antiquated GNN model and method, which seriously impacts the performance of this framework. The baseline algorithms/methods are also antiquated. 2. The experimental results did not show that the proposed model clearly outperforms the other compared algorithms/models. 3. The innovations in network architecture design and constraint embedding are rather limited. The authors note that the performance is limited by the performance of the oracle expert.
3. The innovations in network architecture design and constraint embedding are rather limited. The authors note that the performance is limited by the performance of the oracle expert.
ACL_2017_104_review
ACL_2017
- Comparison with ALIGN could be better. ALIGN used content window size 10 vs this paper's 5, vector dimension of 500 vs this paper's 200. Also its not clear to me whether N(e_j) includes only entities that link to e_j. The graph is directed and consists of wikipedia outlinks, but is adjacency defined as it would be for an undirected graph? For ALIGN, the context of an entity is the set of entities that link to that entity. If N(e_j) is different, we cannot tell how much impact this change has on the learned vectors, and this could contribute to the difference in scores on the entity similarity task. - It is sometimes difficult to follow whether "mention" means a string type, or a particular mention in a particular document. The phrase "mention embedding" is used, but it appears that embeddings are only learned for mention senses. - It is difficult to determine the impact of sense disambiguation order without comparison to other unsupervised entity linking methods. - General Discussion:
- It is difficult to determine the impact of sense disambiguation order without comparison to other unsupervised entity linking methods.
NIPS_2016_93
NIPS_2016
- The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone-down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dialog. - With a fixed policy, this setting is a subset of reinforcement learning. Can tasks get more complicated (like what explained in the last paragraph of the paper) so that the policy is not fixed. Then, the authors can compare with a reinforcement learning algorithm baseline. - The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations. - Overall, the writing quality of the paper should be improved; e.g., the authors spend the same space on explaining basic memory networks and then the forward model. The related work has missing pieces on more reinforcement learning tasks in the literature. - The 10 sub-tasks are rather simplistic for bAbi. They could solve all the sub-tasks with their final model. More discussions are required here. - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails.
- The details of the forward-prediction model is not well explained. In particular, Figure 2(b) does not really show the schematic representation of the forward prediction model; the figure should be redrawn. It was hard to connect the pieces of the text with the figure as well as the equations.
ICLR_2022_2651
ICLR_2022
There's a lack of detailed explanation of why a particular model component is chosen over the alternatives; sometimes an intuitive example or reason behind the choice would be helpful, or if an alternative was implemented but later determined to be inferior, there should be a description of it. For example, why use attention as the aggregator in the graph neural network? Have you tried other GNN architectures? How is their performance? The example in Fig. 1 seems a bit weird since it's clear from the entity labels that "Boris Johnson" and "David Cameron" are different entities despite them having the same neighbors or structures in the graph. Knowledge graphs are not just graphs; they also have labels, entity types, schemas, and other information associated with them that can help align and differentiate entities, and temporal information is just another kind of information that can help. So even models that do not use temporal information may still be able to differentiate "Boris Johnson" and "David Cameron" by simply looking at their labels. Perhaps, among the entity alignment baseline models, there should be a model that uses this kind of label or textual information. Also, all the baseline models in EA do not use time information; this seems to be an unfair comparison, as the improvement might simply come from the introduction of additional (temporal) information. So it would be better if there could be a baseline model that uses temporal information and another baseline model that uses other additional information (such as textual or label information) to 1) see whether TR-GAT can outperform other temporal KGE models in the EA task, and 2) understand whether temporal info or textual (label) info is more useful in the EA task. Wikidata12k is also a popular benchmark dataset for TKG completion and it is missing from the experiments. A popular task in TKG models and related papers is the time prediction task, which predicts the time interval of a particular statement (s, r, o). For example, this paper (https://arxiv.org/abs/2005.05035). This task seems to be missing in the paper. It seems that EA is possible because two KGs are trained simultaneously. In this case, can this (training two KGs together and having an additional margin rank loss, sketched below) be applied to existing TKG embedding models to create additional baseline models? How would these additional baselines perform compared with TR-GAT? The TR-GAT model doesn't seem to beat existing models on the ICEWS05-15 dataset, though.
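To make the margin-rank-loss question above concrete, here is a minimal sketch of such an alignment loss for a generic (T)KG embedding model; the tensor shapes, margin value, and pair sampling are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def alignment_margin_loss(emb_kg1, emb_kg2, pos_pairs, neg_pairs, margin=1.0):
    """Margin-based alignment loss over the entity embeddings of two KGs.

    emb_kg1, emb_kg2: (num_entities, dim) embedding tables of the two KGs.
    pos_pairs / neg_pairs: (n, 2) index tensors of aligned / corrupted entity pairs.
    """
    d_pos = torch.norm(emb_kg1[pos_pairs[:, 0]] - emb_kg2[pos_pairs[:, 1]], p=2, dim=1)
    d_neg = torch.norm(emb_kg1[neg_pairs[:, 0]] - emb_kg2[neg_pairs[:, 1]], p=2, dim=1)
    # Push aligned pairs closer together than corrupted pairs by at least `margin`.
    return F.relu(margin + d_pos - d_neg).mean()

# Illustrative usage with random embeddings and random index pairs.
torch.manual_seed(0)
e1, e2 = torch.randn(1000, 128), torch.randn(800, 128)
pos = torch.stack([torch.randint(1000, (64,)), torch.randint(800, (64,))], dim=1)
neg = torch.stack([torch.randint(1000, (64,)), torch.randint(800, (64,))], dim=1)
print(alignment_margin_loss(e1, e2, pos, neg).item())
```

In principle, any existing TKG embedding model that exposes entity embeddings for both graphs could add this term to its training objective, which is the kind of additional baseline being asked about.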
1) see whether TR-GAT can outperform other temporal KGE models in the EA task, and
NIPS_2016_314
NIPS_2016
Issues I found in the paper include: 1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same? 2. I can't find details on how they make the network fit the residual instead of directly learning the input-output mapping. - Is it through the use of skip connections? If so, this argument would make more sense if the skip connections existed after every layer (not every 2 layers). 3. It would have been nice if there was an ablation study on which factor plays the most important role in the improvement in performance: whether it is the number of layers or the skip connections, and how the performance varies when the skip connections are used for every layer. 4. The paper says that almost all existing methods estimate the corruption level first. There is a high possibility that the same is happening in the initial layers of their residual net. If so, the only advantage is that theirs is end to end. 5. The authors mention in the Related Works section that the use of regularization helps the problem of image restoration, but they don’t use any type of regularization in their proposed model. It would be great if the authors could address these points (mainly 1, 2 and 3) in the rebuttal.
1. The paper mentions that their model can work well for a variety of image noise, but they show results only on images corrupted using Gaussian noise. Is there any particular reason for the same?
NIPS_2021_2306
NIPS_2021
1, All the experiments are conducted using images under 224*224 resolution, it would be interesting to see how the performance will be if we use a larger resolution. 2. For some examples, the accuracy at lower resolutions is even better than that of the model at full resolution. Is there any underlying reason for this phenomenon? 3. It seems the improvement in FLOPs does not align well with that in real latency, as shown in Fig. 3 and Tab. 3. It would be good to provide the performance and speed trade-off for real acceleration. 4. For the training process, the base models are first trained and then combined with the resolution selector network for fine-tuning. I'm wondering if it is possible to train the whole model from scratch. Some minor issues: Line 121: "The first is the large classifier network with both high performance and expensive computational costs is first trained"; is the "is first trained" redundant?
1, All the experiments are conducted using images under 224*224 resolution, it would be interesting to see how the performance will be if we use a larger resolution.
ARR_2022_314_review
ARR_2022
1. Although the work is important and detailed, from the novelty perspective it is an extension of norm-based and rollout aggregation methods to another set of residual connections and the norm layer in the encoder block. This is not a strong weakness, as the work makes a detailed qualitative and quantitative analysis of the roles of each component, which is a novelty in its own right. 2. The impact of the work would be strengthened by showing the proposed approach's (local and global) applicability to tasks other than classification, like question answering, textual similarity, etc. (as in the previous work, Kobayashi et al. (2020)). 1. For equations 12 and 13, the authors assume equal contributions from the residual connection and multi-head attention. However, in previous work by Kobayashi et al. (2021), it is observed that residual connections have a huge impact compared to mixing (attention). This assumption seems to be the opposite of the observations made previously. What exactly is the reason for that: simplicity (like the assumptions made by Abnar and Zuidema (2020))? 2. At the beginning of the paper, including the abstract and list of contributions, the claim about the components involved is slightly inconsistent with the rest. For instance, the 8th line of the abstract says "incorporates all components", and line 73 also says the "whole encoder", but on further reading, the FFN (feed-forward layers) is omitted from the framework. This needs to be framed (rephrased) better in the beginning to provide a clearer picture. 3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (hard) problem. It improves the readability and gives a clearer overall picture to the reader. 4. Will the code be made publicly available with an inference script? It's better to state this in the submission, as it helps in making an accurate judgement of whether the code will be useful for further research.
3. While FFNs are omitted because a linear decomposition cannot be obtained (as mentioned in the paper), is there existing work that offers a way around (an approximation, for instance) to compute the contribution? If not, maybe a line or two should be added that there exists no solution for this, and it is an open (hard) problem. It improves the readability and gives a clearer overall picture to the reader.
JWwvC7As4S
ICLR_2024
### Theory The main theoretical results are Theorems 2.1 and 2.2. They state that if the "average last-layer feature norm and the last-layer weight matrix norm are both bounded, then achieving near-optimal loss implies that most classes have intra-class cosine similarity near one and most pairs of classes have inter-class cosine similarity near -1/(C-1)". Qualitatively, this result is an immediate consequence of the continuity of the loss function together with the fact that a bounded average last-layer feature norm and bounded last-layer weight matrices imply NC. Quantitatively, this work proves asymptotic bounds on the proximity to NC as a function of the loss. This quantitative aspect is novel. I am not convinced of its significance, however, as I will outline below. 1. The result is only asymptotic, and thus it cannot be used to estimate proximity to NC from a given loss value. 2. The bound is used as a basis to argue that *"under the presence of batch normalization and weight decay of the final layer, larger values of weight decay provide stronger NC guarantees in the sense that the intra-class cosine similarity of most classes is nearer to 1 and the inter-class cosine similarity of most pairs of classes is nearer to -1/(C-1)."* This is backed up by the observation that the bounds get tighter as the weight decay parameter $\lambda$ increases. To be more specific, Theorem 2.2 shows that if $L < L_{\min}+\epsilon$, then the average intra-class cosine similarity is smaller than $-1/(C-1) + O(f(C,\lambda,\epsilon,\delta))$ and $f$ decreases with $\lambda$. The problem with this argument is that the loss function itself depends on the regularization parameter $\lambda$, and so it is a priori not clear whether values of $\epsilon$ are comparable for different $\lambda$. For example, apply this argument to the simpler loss function $L(x,\lambda)=\lambda x^2$ (spelled out at the end of this review). As $L$ is convex, it is clear that the value of $\lambda>0$ is irrelevant for the minimum and the near-optimal solutions. Yet, $L(x,\lambda)<\epsilon$ implies $x^2<\epsilon/\lambda$, which decreases with $\lambda$. By the logic given in this work, the latter inequality suggests that minimizing a loss function with a larger value of $\lambda$ provides stronger guarantees for arriving close to the minimum at $0$. Clearly, this is not the case; it is an artifact of quantifying closeness to the loss minimum by $\epsilon$ when it should have been adjusted to $\lambda \epsilon$ instead. I have doubts about how batch normalization is handled. As far as I see, batch normalization enters the proofs only through the condition $\sum_i \| h_i \|^2 =\| h_i \|^2$ (see Prop 2.1). However, this is only an implication, and batch normalization induces stronger constraints. The theorems assume that the loss minimizer is a simplex ETF in the presence of batch normalization. This is not obvious, and neither proven nor discussed. It is also not accounted for in the part of the proof of Theorem 2.2 where the loss minimum $m_{reg}$ is derived. ### Experiments - Theorems 2.1 and 2.2 are not evaluated empirically. It is not tested whether the average intra-/inter-class cosine similarities of near-optimal solutions follow the exponential dependency on $\lambda$ and the square (or sixth) root dependency on $\epsilon$ suggested by the theorems. - Instead, the dependency of the cosine similarities at the end of training (200 epochs) on the weight decay strength is evaluated.
As presumed by the authors, the intra-class cosine similarities get closer to the optimum if the weight decay strength increases. Yet, there are problems with this experiment. It is inconsistent with the setting of the theory part and thus only provides limited insight into whether the idealized theoretical results transfer to practice. 1. The theory part depends only on the weight decay strength on the last-layer parameters. Yet, in the experiments, weight decay is applied to all layers and its strength varies between experiments (when instead only the strength of the last layer should change). 2. The theorems assume near-optimal training loss, but training losses are not reported. Moreover, the reported cosine similarities are far from optimal (e.g. the intra-class similarity is around 0.2 instead of 1), which suggests that the training loss is also far from optimal. It also suggests that the models are of too small capacity to justify the 'unconstrained-features' assumption. 3. As (suboptimally) weight decay is applied to all layers, we would expect a large training loss and thus suboptimal cosine similarities for large weight decay parameters. Conveniently, cosine similarities for such large weight decay strengths are not reported and the plots end at a weight decay strength where cosine similarities are still close to optimal. 4. On real-world data sets, the inter-class cosine similarity increases with weight decay (even for batch norm models, VGG11), disagreeing with the theoretical prediction. This observation is insufficiently acknowledged. ### General The central question that this work wants to answer, **"What is a minimal set of conditions that would guarantee the emergence of NC?"**, is already solved in the sense that it is known that minimal loss plus a norm constraint on the features (explicit via feature normalization or implicit via weight decay) implies neural collapse. The authors argue to add batch normalization to this list, but that contradicts minimality. The first contribution listed by the authors is not a contribution. 1. *"We propose the intra-class and inter-class cosine similarity measure, a simple and geometrically intuitive quantity that measures the proximity of a set of feature vectors to several core structural properties of NC. (Section 2.2)"* Cosine similarity (i.e. the normalized inner product) is a well-known and extensively used distance measure on the sphere. In the context of neural collapse, cosine similarities were already used in the foundational paper by Papyan et al. (2020) to empirically quantify closeness to NC (cf. Figure 3 in this reference) and many others. Minor: - There is a grammatical error in the second sentence of the second paragraph - There is no punctuation after formulas; in the appendix, multiple rows start with a punctuation mark - intra / inter is sometimes written in italics, sometimes upright - $\beta$ is used multiple times with different meanings - Proposition 2.1: $N$ = batch size; Theorem 2.2: $N$ = number of samples per class. - As a consequence, it seems that $\gamma$ needs to be rescaled to account for the number of batches
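To spell out the toy counterexample referenced above (this is only an expansion of my own example, not material from the paper):

```latex
% Sublevel sets of the toy loss L(x, \lambda) = \lambda x^2:
\[
  L(x,\lambda) < \varepsilon
  \;\Longleftrightarrow\;
  \lambda x^{2} < \varepsilon
  \;\Longleftrightarrow\;
  |x| < \sqrt{\varepsilon/\lambda} .
\]
% The minimizer x^* = 0 does not depend on \lambda, so the shrinking radius
% \sqrt{\varepsilon/\lambda} only reflects the rescaling of the loss itself.
% Measuring near-optimality by the rescaled tolerance \lambda\varepsilon instead
% gives |x| < \sqrt{\varepsilon}, independent of \lambda, which is why the
% "larger \lambda gives stronger guarantees" reading does not follow.
```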
3. As (suboptimally) weight decay is applied to all layers, we would expect a large training loss and thus suboptimal cosine similarities for large weight decay parameters. Conveniently, cosine similarities for such large weight decay strengths are not reported and the plots end at a weight decay strength where cosine similarities are still close to optimal.
ARR_2022_333_review
ARR_2022
- The writing is really poor. Many places are very confusing. The figures are not clearly separated from the text, and it is confusing where I should look. Many sentences use the past tense, while other sentences use the present tense. The only reason that I would like this paper to be accepted is the dataset. The writing itself is far from that of a solid paper, and I suggest the authors go over the writing again. - This dataset is the first Thai N-NER dataset, and N-NER in Thai is a new task for the community, so it could be very insightful to know what specific challenges exist in Thai and what errors the models make. The paper provides an error analysis, but it is not deep enough. It would be insightful if the authors could list error patterns at a finer granularity. It is also unclear to me why syllable segmentation could be useful for the annotation. Many of the readers do not know Thai, so I think more explanation is necessary. For the writing part, for example, Section 3 mixes the past tense and present tense. Figure 2 is hidden in the text. I suggest the authors put all the tables and figures at the top of the pages.
- The writing is really poor. Many places are very confusing. The figures are not clearly separated from the text, and it is confusing where I should look. Many sentences use the past tense, while other sentences use the present tense. The only reason that I would like this paper to be accepted is the dataset. The writing itself is far from that of a solid paper, and I suggest the authors go over the writing again.
pO7YD7PADN
EMNLP_2023
1. Limited technical contributions. The compression techniques evaluated are standard existing methods like quantization and distillation. The debiasing baselines are also from prior work. There is little technical innovation. 2. Limited datasets and models. The bias benchmarks only assess gender, race, and religion. Other important biases and datasets are not measured. Also missing are assessments on state-of-the-art generative models like GPT. 3. Writing logic needs improvement. Some parts, like introducing debiasing baselines in the results, make the flow confusing.
2. Limited datasets and models. The bias benchmarks only assess gender, race, and religion. Other important biases and datasets are not measured. Also missing are assessments on state-of-the-art generative models like GPT.
NIPS_2017_585
NIPS_2017
The main weakness of the paper is in the experiments: there should be more complete comparisons in computation time, and comparisons with the QMC-based methods of Yang et al. (ICML2014). Without this the advantage of the proposed method remains unclear. - The limitation of the obtained results: The authors assume that the spectrum of a kernel is sub-gaussian. This is OK, as the popular Gaussian kernels are in this class. However, another popular class of kernels, such as Matern kernels, is not included, since their spectrum only decays polynomially. In this sense, the results of the paper could be restrictive. - Eq. (3): What is $e_l$? Corollaries 1, 2 and 3 and Theorem 4: All of these results have exponential dependence on the diameter $M$ of the domain of data: the required feature size increases exponentially as $M$ grows. While this factor does not increase as the required amount of error $\varepsilon$ decreases, the dependence on $M$ affects the constant factor of the required feature size. In fact, Figure 1 shows that the performance degrades more quickly than with standard random features. This may exhibit the weakness of the proposed approaches (or at least of the theoretical results). - The equation in Line 170: What is $e_i$? - Subsampled dense grid: This approach is what the authors used in the experiments of Section 5. However, it looks like there is no theoretical guarantee for this method. Those having theoretical guarantees seem not to be practically useful. - Reweighted grid quadrature: (i) It looks like there is no theoretical guarantee with this method. (ii) The approach reminds me of Bayesian quadrature, which essentially obtains the weights by minimizing the worst case error in the unit ball of an RKHS. I would like to look at a comparison with this approach. (iii) Would it be possible to derive a time complexity? (iv) How do you choose the regularization parameter $\lambda$ in the case of the $\ell_1$ approach? (A rough sketch of what I have in mind for this reweighting is given at the end of this review.) - Experiments in Section 5: (i) The authors reported the results of computation time very briefly (320 secs vs. 384 seconds for 28800 features in MNIST and "The quadrature-based features ... are about twice as fast to generate, compared to random Fourier features ..." in TIMIT). I do not think they are enough: the authors should report the results in the form of tables, for example, varying the number of features. (ii) There should be a comparison with the QMC-based methods of Yang et al. (ICML2014, JMLR2016). It is not clear what the advantage of the proposed method over the QMC-based methods is. (iii) There should be an explanation of the settings of the MNIST and TIMIT classification tasks: what classifiers did you use, and how did you determine the hyper-parameters of these methods? At least such explanation should be included in the appendix.
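For concreteness, here is a rough sketch of one way I could imagine the $\ell_1$ reweighting being set up. This is purely my own guess at a plausible implementation, not the authors' actual procedure; the choice of candidate frequencies, the regularization value, and the use of scikit-learn's Lasso are all assumptions:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
d, n_pairs, n_grid = 5, 200, 400     # input dim, sample pairs, candidate frequencies
X = rng.normal(size=(n_pairs, d))
Y = rng.normal(size=(n_pairs, d))
sigma = 1.0

# Target: Gaussian kernel values on the sampled pairs
k_true = np.exp(-np.sum((X - Y) ** 2, axis=1) / (2 * sigma ** 2))

# Candidate frequencies (a dense grid could be used instead of random draws)
W = rng.normal(scale=1.0 / sigma, size=(n_grid, d))

# Column j holds cos(w_j^T (x_i - y_i)); a weighted sum of columns should match
# k_true, since k(x - y) is an expectation of cos(w^T (x - y)) under the spectral measure
F = np.cos((X - Y) @ W.T)

# l1-regularized, non-negative reweighting of the candidate frequencies
lasso = Lasso(alpha=1e-4, positive=True, fit_intercept=False, max_iter=50_000)
lasso.fit(F, k_true)
q = lasso.coef_
print("non-zero weights:", np.count_nonzero(q),
      "mean squared kernel error:", np.mean((F @ q - k_true) ** 2))
```

Even a toy setup like this would make it easier to reason about the time complexity asked for in (iii) and the sensitivity to $\lambda$ asked for in (iv).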
- Reweighted grid quadrature: (i) It looks like there is no theoretical guarantee with this method. (ii) The approach reminds me of Bayesian quadrature, which essentially obtains the weights by minimizing the worst case error in the unit ball of an RKHS. I would like to look at a comparison with this approach. (iii) Would it be possible to derive a time complexity? (iv) How do you choose the regularization parameter $\lambda$ in the case of the $\ell_1$ approach?
ARR_2022_253_review
ARR_2022
- The paper uses much analysis to justify that the information axis is a good tool to be applied. As pointed out in the conclusion, I'm curious to see some related experiments that this information axis tool can help with. - For Figure 1, I have another angle for explaining why randomly-generated n-grams are far away from the extant words: characterBERT would explicitly maximize the probability of seen character sequences (and implicitly minimize the probability of unseen character sequences). So I guess the randomly generated n-grams would have very different PPL values from the extant words. This is justified in Section 5.4. - It would be better to define some notations and give a clear definition of the "information axis", "word concreteness" and also "Markov chain information content". - Other than UMAP, there are some other tools for analyzing the geometry of high-dimensional representations. I believe the idea is not highly integrated with UMAP. So it would be better to demonstrate results with other tools like T-SNE.
- Other than UMAP, there are some other tools for analyzing the geometry of high-dimensional representations. I believe the idea is not highly integrated with UMAP. So it would be better to demonstrate results with other tools like T-SNE.
NIPS_2021_895
NIPS_2021
The description of the method is somewhat unclear and it is hard to understand all the design choices. Some natural baselines and important related work seem to be missing. Major comments: The lack of flexibility of standard GPs is not a new observation and has been approached in the past, possibly most famously by the deep GP [1]. These models have recently become a mainstream tool with easily usable frameworks [e.g., 2], so they would seem like a natural baseline to compare against. Generally, a lot of related work seems to be missed by this paper. For instance, meta-learning kernels for GPs for few-shot tasks has already been done by [3] and then later also by [4,5,6]. These should probably be mentioned and it should be discussed how the proposed method compares against them. The paper proposes to use CNFs, but these require solving a complex-looking integral (e.g., Eq. 9). It should be discussed how tractable this integral is or how it is approximated in practice. Moreover, it seems like an easier choice would be standard NFs, so it should be discussed why CNFs are assumed to be better here. Possibly, one should also directly compare against a model with a standard NF as an ablation study. In l. 257ff it is claimed that the proposed GP methods are less prone to memorization. How does this compare to the results in [4], where DKT seems to memorize as well? Could the regularization proposed in [4] be combined with the proposed model? Minor comments: In l. 104 it is said that every kernel can be described by a feature space parameterized by a neural network, but this is trivially not true. For instance, for RBF kernels, the RKHS is famously infinite-dimensional, such that one would need an NN with infinite width to represent it. So at most, NNs can represent finite-dimensional RKHSs in practice. This limitation should be made more clear. l. 151 with GP -> with a GP l. 152 use invertible mapping -> use an invertible mapping l. 161 the "marginal log-probability" is more commonly called "log marginal likelihood" or "log evidence" Eq. (8): should it be z instead of y? In the tables, it would be more helpful to also bold the entries whose error bars overlap with those of the best entry. [1] Damianou & Lawrence 2012, https://arxiv.org/abs/1211.0358 [2] Dutordoir et al. 2021, https://arxiv.org/abs/2104.05674 [3] Fortuin et al. 2019, https://arxiv.org/abs/1901.08098 [4] Rothfuss et al. 2020, https://arxiv.org/abs/2002.05551 [5] Venkitaraman et al. 2020, https://arxiv.org/abs/2006.07212 [6] Titsias et al. 2020, https://arxiv.org/abs/2009.03228 The limitations of the method are hard to assess, mostly because the choice of CNFs over NFs or any other flexible distribution family is not well motivated and because (theoretical and empirical) comparisons to many relevant related methods are missing. This should be addressed.
In l. 104 it is said that every kernel can be described by a feature space parameterized by a neural network, but this is trivially not true. For instance, for RBF kernels, the RKHS is famously infinite-dimensional, such that one would need an NN with infinite width to represent it. So at most, NNs can represent finite-dimensional RKHSs in practice. This limitation should be made more clear.
NIPS_2016_232
NIPS_2016
weakness of the suggested method. 5) The literature contains other improper methods for influence estimation, e.g. 'Discriminative Learning of Infection Models' [WSDM 16], which can probably be modified to handle noisy observations. 6) The authors discuss the misestimation of mu, but as it is the proportion of missing observations - it is not wholly clear how it can be estimated at all. 5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly. 7) As noted, the assumption of random missing entries is not very realistic. It would seem worthwhile to run an experiment to see how this assumption effects performance when the data is missing due to more realistic mechanisms.
5) The experimental setup borrowed from [2] is only semi-real, as multi-node seed cascades are artificially created by merging single-node seed cascades. This should be mentioned clearly.
ICLR_2023_2658
ICLR_2023
Weakness: 1. I think the work lacks novelty, as GPN [1] has already proposed adding a node importance score in the calculation of class prototypes, and the paper only gives a theoretical analysis of it. 2. The experimental part is not sufficient. (1) For the few-shot graph node classification problem of predicting nodes with novel labels, there are some methods that the paper does not compare with. For example, G-Meta is mentioned in the related works but not compared in the experiments. A recent work, TENT [2], is not mentioned in the related works. As far as I know, the above two approaches can be applied in the problem setting of the paper. (2) For the approach proposed in the paper, there is no detailed ablation study of the functionalities of each designed part. (3) It would be better to add a case study to show the strength of the proposed method with an example. Concerns: 1. The paper considers node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experiment part of the paper does not include the 1-shot setting, but related works such as RALE have a 1-shot setting; why? 2. The paper says that the theory of node importance can be applied to other domains. I think there should be an example to verify that conclusion. 3. In section 5.3, 'we get access to abundant nodes belonging to each class'. I do not think this is always true, as there might be a class in the training set that only has a few samples given the long-tailed distribution of samples in most graph datasets. [1] Ding et al. Graph Prototypical Networks for Few-shot Learning on Attributed Networks [2] Wang et al. Task-Adaptive Few-shot Node Classification
1. The paper considers node importance among nodes with the same label in the support set. In the 1-shot scenario, how can node importance be used? I also find that the experiment part of the paper does not include the 1-shot setting, but related works such as RALE have a 1-shot setting; why?
NIPS_2016_313
NIPS_2016
Weakness: 1. The proposed method consists of two major components: a generative shape model and a word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows the detection-parsing paradigm, it would be better to evaluate baseline detection or parsing techniques separately to better support the claim. 2. The paper lacks detail about the techniques, making it hard to reproduce the results. For example, the sparsification process is unclear, even though it is important for extracting the landmark features for the following steps. How are landmarks generated on the edges? How is the number of landmarks decided? What kind of image features are used? What is the fixed radius with different scales? How is shape invariance achieved, etc.? 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods. 4. It is time-consuming since the shape model is trained at the pixel level (though sparsified by landmarks) and the model is trained independently on all font images and characters. In addition, the parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images?
3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforming the deep-learning based approaches, which is not convincing. As claimed, the performance mainly comes from the first step, which makes it reasonable to conduct comparison experiments with existing detection methods.
NIPS_2018_168
NIPS_2018
- The idea of using heads (or branches) in deep networks is not entirely novel, and the authors do not describe or compare against past work such as mixture of experts and mixture of softmaxes (ICLR 2018). The authors should also investigate the Born Again Networks (ICML 2018) work that uses self-distillation to obtain improved results. It would also be beneficial if the authors described auxiliary training in the related work since they explain their approach using that as a foundation (Section 3.1). - The proposed changes can be explained better. For instance, it is not clear how the hierarchy should be defined for a general classifier and which of the heads is retained during inference. - What baseline is chosen in Figure 3? Is it also trained on a combination of hard and soft targets? - The approach presented is quite heuristic; it would be desirable if the authors could discuss some theoretical grounding for the proposed ideas. This is especially true for the section on "backpropagation rescaling". Did the authors try using a function that splits the outputs equally for all heads (i.e., I(x) = x/H, I'(x) = 1/H)? The argument for not using sum_h L_h is also not convincing. Why not modulate eta accordingly? - The paper would benefit from careful editing; there are several typos such as "foucus", "cross entroy", and awkward or colloquial phrases such as "requires to design different classifier heads", "ILR sharing with backpropagation rescaling well aggregates the gradient flow", "confusion often happens to network training", "put on top of individual learning", "SGD has nicer convergence". Post-rebuttal: Raising my score to a 6 in response to the rebuttal. I do wish to point out that: - To my understanding, BANs don't ensemble. They only self-distill. - Is using Lhard the right baseline for Q3?
- To my understanding, BANs don't ensemble. They only self-distill.
hbon6Jbp9Q
ICLR_2025
- The pruning method does not appear to offer much beyond the method of feature-reweighted representational similarity analysis, which is quite popular (see Kaniuth and Hebart, 2022; NeuroImage). In fact, it is essentially a particular limited case of FR-RSA, where the weights of features are either 0 or 1. The authors do not appear well aware of the literature, as only 20 references are made. - I found the technique of using multiple different feature spaces (the 25-feature space of Mitchell et al. to fit voxel encoding models, then the full/pruned GloVe model to analyze similarities within clusters) to be convoluted and potentially circular. - As the technique is not particularly novel, it is important that the authors deliver some clear novel findings about brain function. The abstract only lists one: "From a neurobiological perspective, we find that brain regions encoding social and cognitive aspects of lexical items consistently also represent their sensory-motor features, though the reverse does not hold." I did not find the case for this finding to be particularly strong. I welcome the authors to make the case more strongly. - The figures are poorly made, certainly well below the bar of ICLR, and do not communicate much if anything that will affect how researchers think about semantic organization in the brain. For example, while an interesting approach of clustering brain regions is used, these regions are never visualized. In the one plot that attempts to explain some differences across brain regions, the authors use arbitrary number indices for brain regions; at minimum, anatomical labels are needed. However, in 2024, it is expected that a strong paper on this topic can make elegant visualizations of the cortical surface. Practitioners in this field understand the importance of such visualizations for relating findings to pre-existing conceptual notions of cortical organization, and for driving further intuition that will affect future research. - The authors waste precious space presenting fits to training data (see Table 1, "complete dataset", which reports the representational similarity after selecting the features that optimize to improve that representational similarity). Only the cross-validated results are worth presenting. - Focusing on which clusters are "best" rather than what the differences in representation are between them, seems an odd choice given the motivation of the paper. - Averaging voxels across subjects is likely to drastically reduce the granularity of the possible findings, since there is no expectation of voxel-level alignment of fine-grained conceptual information, but only of larger-scale information. I believe it would be better to construct the clusters using all subjects' individual data in a group-aligned space, where the same methods can otherwise be used, but individual voxels are kept independent and not averaged across subjects.
- Focusing on which clusters are "best" rather than what the differences in representation are between them, seems an odd choice given the motivation of the paper.
ICLR_2021_1740
ICLR_2021
The main weaknesses are in its clarity and the experimental part. Strong points Novelty: The paper provides a novel approach for estimating the likelihood of p(class | image), by developing a new variational approach for modelling the causal direction (s,v->x). Correctness: Although I didn't verify the details of the proofs, the approach seems technically correct. Note that I was not convinced that s->y (see weakness) Weak points Experiments and Reproducibility: The experiments show some signal, but are not thorough enough: • shifted-MNIST: it is not clear why shift=0 is much better than shift ~ N(0, σ²), since both cases incorporate a domain shift • It would be useful to show the performance of the model and baselines on test samples from the observational (in-)distribution. • Missing details about the evaluation split for shifted-MNIST: Did the experiments use a validation set for hyper-param search with shifted-MNIST and ImageCLEF? Was it based on in-distribution data or OOD data? • It would be useful to provide an ablation study, since the approach has a lot of "moving parts". • It would be useful to have an experiment on an additional dataset, maybe more controlled than ImageCLEF, but less artificial than shifted-MNIST. • What were the ranges used for hyper-param search? What was the search protocol? Clarity: • The parts describing the method are hard to follow; it would be useful to improve their clarity. • It would be beneficial to explicitly state which are the learned parametrized distributions, and how inference is applied with them. • What makes the VAE inference mappings (x->s,v) stable to domain shift? E.g. [1] showed that correlated latent properties in VAEs are not robust to such domain shifts. • What makes v distinct from s? Is it because y only depends on s? • Does the approach use any information on the labels of the domain? Correctness: I was not convinced about the causal relation s->y, i.e. that the semantic concept causes the label, independently of the image. I do agree that there is a semantic concept (e.g. s) that causes the image. But then, as explained by [Arjovsky 2019], the labelling process is caused by the image, i.e. s->image->y, and not as argued by the paper. The way I see it, it is like a communication channel: y_tx -> s -> image -> y_rx. Could the authors elaborate on how the model would change if s->y were replaced by y_tx->s? Other comments: • I suggest discussing [2,3,4], which learned similar stable mechanisms in images. • I am not sure about the statement that this work is the "first to identify the semantic factor and leverage causal invariance for OOD prediction", e.g. see [3,4] • The title may be confusing. OOD usually refers to anomaly-detection, while this paper relates to domain-generalization and domain-adaptation. • It would be useful to clarify that the approach doesn't use any external semantic knowledge. • Section 3.2 - I suggest adding a first sentence to introduce what this section is about. • About the remark on page 6: (1) what is a deterministic s-v relation? (2) chairs can also appear in a workspace, and it may help to disentangle the desks from workspaces. [1] Suter et al. 2018, Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness [2] Besserve et al. 2020, Counterfactuals uncover the modular structure of deep generative models [3] Heinze-Deml et al. 2017, Conditional Variance Penalties and Domain Shift Robustness [4] Atzmon et al.
2020, A causal view of compositional zero-shot recognition. EDIT (post rebuttal): I thank the authors for their reply. Although the authors answered most of my questions, I decided to keep the score as is, because I share similar concerns with R2 about the presentation, and because experiments are still lacking. Additionally, I am concerned with one of the authors' replies saying "All methods achieve accuracy 1 ... on the training distribution", because usually there is a trade-off between accuracy on the observational distribution versus the shifted distribution (discussed by Rothenhäusler, 2018 [Anchor regression]): achieving perfect accuracy on the observational distribution usually means relying on the spurious correlations, and under domain-shift scenarios, this would hinder the performance on the shifted distribution.
• What were the ranges used for hyper-param search? What was the search protocol?
ICLR_2023_2406
ICLR_2023
1. However, my major concern is that the contribution is insufficient. In general, the authors studied the connection between complementarity and model robustness, but without further studies on how to leverage such characteristics to improve model robustness. Even though this paper could be the first work to study this connection, the conclusion could be easily and intuitively obtained, i.e., when multimodal complementarity is higher, the robustness is more fragile when one of the modalities is corrupted. Beyond the analysis of the connection between complementarity and robustness, it is expected to see more insightful findings or possible solutions. 2. The proposed metric is calculated on the features extracted by some pre-trained models. So the pre-trained models are necessary for computing the metric, which contradicts the claim that the metric is used to measure the complementarity of the multimodal data. In addition, in my opinion, the metric is unreliable since the model participates in the metric calculation and will inevitably affect the calculation results. 3. There are many factors that will affect the model's robustness. The multimodal data complementarity is one of them. However, multimodal data complementarity is not solely determined by the data itself. For example, classification on MS-COCO data is obviously less complementary than VQA on MS-COCO data. As mentioned by the authors, the VQA task requires both modalities for question answering; accordingly, the complementarity is determined by both the modalities and the target task. However, I didn't see much further discussion about these possible factors.
1. However, my major concern is that the contribution is insufficient. In general, the authors studied the connection between complementarity and model robustness, but without further studies on how to leverage such characteristics to improve model robustness. Even though this paper could be the first work to study this connection, the conclusion could be easily and intuitively obtained, i.e., when multimodal complementarity is higher, the robustness is more fragile when one of the modalities is corrupted. Beyond the analysis of the connection between complementarity and robustness, it is expected to see more insightful findings or possible solutions.
XvA1Mn9OFy
ICLR_2025
1. While I agree that the performance gains in table 1 illustrate that GAM > SAM > SGD, the relative gains of GAM over SAM seem relatively small. 2. It would be nice to see some results in other modalities (e.g., maybe some language related tasks. Aside: for language related tasks, people care about OOD performance as well, so maybe expected test loss is not as meaningful?)
2. It would be nice to see some results in other modalities (e.g., maybe some language related tasks. Aside: for language related tasks, people care about OOD performance as well, so maybe expected test loss is not as meaningful?)
ICLR_2023_1765
ICLR_2023
There are several weaknesses, which are summarized in the following points: Important limitations of the quasi-convex architecture are not addressed in the main text. The proposed architecture can only represent non-negative functions, which is a significant weakness for regression problems. However, this is completely elided and could be missed by the casual reader. The submission is not always rigorous and some of the mathematical developments are unclear. For example, see the development of the feasibility algorithm in Eq. 4 and Eq. 5. Firstly, t ∈ R while y, f(θ) ∈ R^n, where n is the size of the training set, so that the operation y − t − f(θ) is not well-defined. Moreover, even if y, f(θ) ∈ R, the inequality ψ_t(θ) ≤ 0 implies l(θ) ≤ t²/2, rather than l(θ) ≤ t. Since, in general, the training problem will be defined for y ∈ R^n, the derivations in the text should handle this general case. The experiments are fairly weak and do not convince me that the proposed models have sufficient representation power to merit use over kernel methods and other easy-to-train models. The main issue here is that the experimental evaluation does not contain a single standard benchmark problem, nor does it compare against standard baseline methods. For example, I would really have liked to see regression experiments on several UCI datasets with comparisons against kernel regression, two-layer ReLU networks, etc. Although boring, such experiments establish a baseline capacity for the quasi-concave networks; this is necessary to show they are "reasonable". The experiments as given have several notable flaws: Synthetic dataset: This is a cute synthetic problem, but obviously plays to the strength of the quasi-concave models. I would have preferred to see a synthetic problem which was noisy, with a non-piecewise-linear relationship. Contour Detection Dataset: It is standard to report the overall test ODS, instead of reporting it on different subgroups. This allows the reader to make a fair overall comparison between the two methods. Mass-Damper System Datasets: This is a noiseless linear regression problem in disguise, so it's not surprising that quasi-concave networks perform well. Change-point Detection: Again, I would really have rather seen some basic benchmarks like MNIST before moving on to novel applications like detecting changes in data distribution. Minor Comments Introduction: - The correct reference for SGD is the seminal paper by Robbins and Monro [1]. - The correct reference for backpropagation is Rumelhart et al. [2] - "Issue 1: Is non-convex deep neural networks always better?": "is" should be "are". - "While some experiments show that certain local optima are equivalent and yield similar learning performance" -- this should be supported by a reference. - "However, the derivation of strong duality in the literature requires the planted model assumption" --- what do you mean by "planted model assumption"? The only necessary assumption for these works is that the shallow network is sufficiently wide. Section 4: - "In fact, suppose there are m weights, constraining all the weights to be non-negative will result in only 1/2^m representation power." -- A statement like this only makes sense under some definition of "representation power". For example, it is not obvious how non-negativity constraints affect the underlying hypothesis class (aside from forcing it to contain only non-negative functions), which is the natural notion of representation power.
- Equation 3: There are several important aspects of this model which should be mentioned explicitly in the text. Firstly, it consists of only one neuron; this is obvious from the notation, but should be stated as well. Secondly, it can only model non-negative functions. This is a strong restriction and should be discussed somewhere. (A generic toy construction along these lines is sketched after these comments.) - "Among these operations, we choose the minimization procedure because it is easy to apply and has a simple gradient." --- the minimization operator may produce a non-smooth function, which does not admit a gradient everywhere. Nor is it guaranteed to have a subgradient, since the negated function is only quasi-convex, rather than convex. - "... too many minimization pooling layers will damage the representation power of the neural network" --- why? Can the authors expand on this observation? Section 5: - "... if we restrict the network output to be smaller than the network labels, i.e., f(θ) ≤ y" --- note that this observation requires y ≥ 0, which does not appear to be explicitly mentioned. - What method is being used to solve the convex feasibility problem in Eq. (5)? I cannot find this stated anywhere. Figure 6: - Panel (b): "conveyers" -> "converges". Figure 7: - The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text. - "It could explain that the classification accuracy of QCNN (94.2%) outperforms that of deep networks (92.7%)" --- Is this test accuracy, or training accuracy? I assume this is the test metric on the hold-out set, but the text should state this clearly. References [1] Robbins, Herbert, and Sutton Monro. "A stochastic approximation method." The annals of mathematical statistics (1951): 400-407. [2] Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning representations by back-propagating errors." nature 323.6088 (1986): 533-536.
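To make the "one neuron, non-negative outputs, minimization pooling" description above concrete, here is a generic toy unit in that spirit. This is entirely my own construction (the class name, the affine pieces, and the final ReLU are all assumptions) and is not claimed to match the paper's QCNN:

```python
import torch


class MinPoolToyUnit(torch.nn.Module):
    """Pointwise minimum of affine pieces, clamped to be non-negative.

    The min of affine functions is concave (hence quasi-concave), and taking
    max(0, .) of a quasi-concave function keeps it quasi-concave, so the output
    is a non-negative, quasi-concave function of the input.
    """

    def __init__(self, d_in: int, n_pieces: int):
        super().__init__()
        self.pieces = torch.nn.Linear(d_in, n_pieces)  # one affine piece per output column

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        min_of_pieces = self.pieces(x).min(dim=-1).values  # "minimization pooling"
        return torch.relu(min_of_pieces)                   # restrict to non-negative outputs


# quick smoke test
unit = MinPoolToyUnit(d_in=4, n_pieces=8)
print(unit(torch.randn(2, 4)).shape)  # torch.Size([2])
```

Even a toy like this makes the non-negativity restriction and the non-smoothness at the points where the active "min" piece switches easy to see.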
- The text inside the figure and the labels are too small to read without zooming. This text should be roughly the same size as the manuscript text.
ICLR_2023_4735
ICLR_2023
1) The contribution of this paper is to exploit the knowledge from multiple pretrained deep models for VQA, but the practical method is doubtful. First, I am wondering why there are different pretrained models for different datasets. It would be better to have a deeper analysis of, or insight into, the fusion mechanism of the various pretrained deep models. How the different pretrained models contribute to the quality-aware feature extraction is also worth studying. 2) There are many pretrained models for various tasks; more pretrained models should be studied for VQA. 3) I am still unclear about the motivation of the intra-consistency and inter-divisibility losses related to quality assessment.
2) There are many pretrained models for various tasks; more pretrained models should be studied for VQA.
ARR_2022_52_review
ARR_2022
1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset which is a different train/test split of an existing dataset, SQUALL. The other synthetic benchmark the paper proposes is based on a single question template, "What was <column> in <year>?". 2. The paper assumes strong domain knowledge about the column types and assumes a domain developer first creates a set of templates based on column types. With the help of these column templates, I think many approaches (parsers) can easily solve the problem. For example, parsers utilizing the SQL grammar to generate the output SQL can use these templates to add new rules that can be used while generating the output. A few such works are 1. A Globally Normalized Neural Model for Semantic Parsing, ACL 2021 2. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation, EMNLP 2018 3. GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing, ICLR 2021. 1. It would be good if the authors could learn the templates for schema expansion from source domain data. 2. Compare the proposed approach with methods which use domain knowledge in the form of grammar. Comparing with the methods below will show the generality of the ideas proposed in the paper in a much better way. 1. A Globally Normalized Neural Model for Semantic Parsing, ACL 2021 2. TRANX: A Transition-based Neural Abstract Syntax Parser for Semantic Parsing and Code Generation, EMNLP 2018 3. GraPPa: Grammar-Augmented Pre-Training for Table Semantic Parsing, ICLR 2021.
1. A critical weakness of the paper is the lack of novelty and the incremental nature of the work. The paper addresses a particular problem of column operations in designing semantic parsers for Text-to-SQL. They design a new dataset which is a different train/test split of an existing dataset, SQUALL. The other synthetic benchmark the paper proposes is based on a single question template, "What was <column> in <year>?".
NIPS_2017_104
NIPS_2017
--- There aren't any major weaknesses, but there are some additional questions that could be answered and the presentation might be improved a bit. * More details about the hard-coded demonstration policy should be included. Were different versions of the hard-coded policy tried? How human-like is the hard-coded policy (e.g., how a human would demonstrate for Baxter)? Does the model generalize from any working policy? What about a policy which spends most of its time doing irrelevant or intentionally misleading manipulations? Can a demonstration task be input in a higher level language like the one used throughout the paper (e.g., at line 129)? * How does this setting relate to question answering or visual question answering? * How does the model perform on the same train data it's seen already? How much does it overfit? * How hard is it to find intuitive attention examples as in figure 4? * The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help. * The related works section would be better understood knowing how the model works, so it should be presented later.
* The model is somewhat complicated and its presentation in section 4 requires careful reading, perhaps with reference to the supplement. If possible, try to improve this presentation. Replacing some of the natural language description with notation and adding breakout diagrams showing the attention mechanisms might help.
ARR_2022_178_review
ARR_2022
__1. The relation between instance difficulty and training-inference consistency remains vague:__ This paper seems to try to decouple the concepts of instance difficulty and training-inference consistency in current early exiting works. However, I don't think these two things are orthogonal and can be directly decoupled. Intuitively, according to the accuracy of the prediction, there are two main situations for training-inference inconsistency: the inconsistent exit makes the prediction during inference better than that during training, and vice versa. The first case is unlikely to occur. For the second case, considering that instance difficulty reflects the decline in the prediction accuracy of the instances during training and inference, it may be regarded as the second case of training-inference consistency. Accordingly, I am still a little bit confused about the relation between instance difficulty and training-inference consistency after reading the paper. I would suggest that the authors calculate the token-level difficulty of the model before and after using the hash function, and perform more analysis on this basis. In fact, if the hash function is instance-level, the sentence-level difficulty of all baselines (including static and dynamic models) can be calculated, which would provide a more comprehensive and fair comparison. __2. Lack of analysis of the relation between instance-level consistency and token-level consistency:__ The core idea derived from the preliminary experiments is to enhance the instance-level consistency between training and inference, i.e., mapping semantically similar instances to the same exiting layer. However, the practical method introduces the consistency constraint at the token level. The paper doesn't show whether the token-level method can truly address or mitigate the inconsistency problem at the instance level. I would suggest the authors define metrics to reflect the instance-level and token-level consistency, and conduct an experiment to verify whether they are correlated. 1. I didn't follow the description of the sentence-level hash function from Line 324 to Line 328: If we use the sequence encoder (e.g., Sentence-BERT) as a hash function to directly map the instances to the exiting layer, why do we still need an internal classifier at that layer? And considering all the instances can be hashed by the pre-trained sequence encoder in advance before training (and early exiting), the appearance of label imbalance should not cause any actual harm? Why does it become a problem? 2. The paper addresses many times (Line 95-97, Line 308-310) that the consistency between training and inference can be easily satisfied due to the smoothness of neural models. I would suggest giving more explanations on this.
2. The paper addresses many times (Line 95-97, Line 308-310) that the consistency between training and inference can be easily satisfied due to the smoothness of neural models. I would suggest giving more explanations on this.
NIPS_2020_725
NIPS_2020
1. I think it is time to retire this dataset in favor of something more realistic that could be more specific about human learning. 2. The model seems overly simple. This is both a feature and a bug. 3. The characterizations are not as systematic as one would like. For example, while all models are characterized in terms of their learning dynamics and final representational structure, only two are displayed in Figure 2 for their stage-like behavior. If this is in the supplementary material, ok, but the paper should be relatively self-contained. 4. The notation is very confusing.
2. The model seems overly simple. This is both a feature and a bug.
NIPS_2020_1012
NIPS_2020
- In scenarios where the success rate of the attack is less than 50%, a simple ensemble method could be used to defend against the attack. It seems that the success rate of the attack on the Google model is around 20%, which could be circumvented by using multiple models. - The attack seems to be unstable when changing the architecture. For instance, the attack on VGG does not succeed as much as the attack on other architectures. - On the novelty of the paper: The ideas behind the attack seem to be simple and borrow from the meta-learning literature. However, this is not necessarily a bad thing, as it shows simple ideas can be used to attack models. - The experiments of the paper are done only on neural networks and image classification tasks. It would be interesting to see the performance of the attack on other architectures and classification tasks.
- The experiments of the paper are done only on neural networks and image classification tasks. It would be interesting to see the performance of the attack on other architectures and classification tasks.
ACL_2017_588_review
ACL_2017
and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems. - Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. 3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise. 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities. - Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested? 2) Have you tried building a classifier that just takes h_i^e as inputs? I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores.
1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
ICLR_2023_4741
ICLR_2023
Weakness 1 The novelty is limited. The low-rank design is closely related to (Hu et al., 2021). The sparse design is similar to Taylor pruning (Molchanov et al., 2019). 2 The experiments are not quite convincing. The authors choose old baselines like R3D and C3D. To reduce computation complexity, many approaches have been proposed for 3D CNNs (X3D, SlowFast, etc.). Does the proposed method also work on these 3D CNNs? Or, compared to these approaches, what is the advantage of the proposed method? 3 The paper is hard to follow. In fact, I had to read it many times to understand it. I understand it is a theory-oriented paper, but please further explain the mathematical formulation clearly to show why and how it works.
2 The experiments are not quite convincing. The authors choose old baselines like R3D and C3D. To reduce computation complexity, many approaches have been proposed for 3D CNNs (X3D, SlowFast, etc.). Does the proposed method also work on these 3D CNNs? Or, compared to these approaches, what is the advantage of the proposed method?
ICLR_2022_1267
ICLR_2022
Weakness 1. The proposed model's parameterization depends on the number of events and predicates, making it difficult to generalize to unseen events or requiring retraining. 2. The writing needs to be improved to clearly discuss the proposed approach. 3. The experimental baselines are of the authors' own design; the paper lacks a comparison to baselines from the literature on the same dataset. If there are no such baselines, please discuss the criteria for choosing these baselines. Details: 1. Page 1, "causal mechanisms", causality is different from temporal relationship. Please use the terms carefully. 2. Page 3, it seems to me that M_T is defined over the probabilities of atomic events. The notation as it is used makes it difficult to make sense of this concept. Please consider providing examples to explain M_T. 3. Page 4, equation (2): it is not usual to feed probabilities to a convolution. a. Please discuss in section 3 how your framework can handle raw inputs, such as video or audio. Do you need an atomic event predictor or human labels to use your proposed system? If so, is it possible to extend your framework to directly take video as input instead of event probability distributions? Can you do end2end training from raw inputs, such as video or audio? (Although you mention Faster R-CNN in the experiment section, it is better to discuss the whole pipeline in the methodology.) b. Have you tried discrete event embeddings to represent the atomic and composite events, so that the framework can learn distributional embedding representations of events and thus learn the temporal rules? 4. Page 4, please explain what you want to achieve with M_A = M_C \otimes M_D. It is unusual to multiply a length by a conv1D output. Also please define \otimes here. I am guessing it is elementwise multiplication from the context. 5. Page 4, "M_{D:,:,l}=l". This can be thought of as a positional encoding. It is not clear to me why this can be taken as a positional encoding. 6. Page 6, please detail how you sample the top c predicates. Please define what s is in a = softmax(s). It seems to me the dimension of s, with \sum_i \binom{c}{i} entries, can be quite large, making softmax(s) very costly.
1. Page 1, "causal mechanisms": causality is different from a temporal relationship. Please use the terms carefully.
NIPS_2022_1352
NIPS_2022
Limited novelty of some contributions: SMA is mostly based on related models (SWA and SWAD); it "simply" removes unnecessary hyperparameters, but the principle of averaging models is unchanged. Ensembling SMAs is novel, but this contribution is mostly empirical, as a theoretical explanation of its success over standard ensembling is missing (a sketch of the two mechanisms is given below). The change in the model-selection procedure of SMA vs. SWAD is also minor, although useful in practice; it would be interesting to see whether the better stability with MA applies to other learning objectives (e.g. domain-invariance criteria) to better highlight this contribution.
The theoretical section simply applies the well-known bias-variance trade-off to domain generalization, with only minor consideration of the experimental setting (cf. questions below). A big part of the theoretical results (Taylor expansion, l.198) tries to empirically validate an existing result (Izmailov2018). The novelty of this section would be improved if an explanation of why EoA outperforms standard ensembling were provided, which is currently missing.
Two other weaknesses that justify my score:
- questions on the applicability of the theoretical results to practice and unjustified statements in the theoretical section (see questions below);
- missing explanation of the success of EoA.
Fairness of the comparison to baseline models (see limitations below): in particular, EoA and the ensemble baseline see more training data than the other baselines.
Izmailov2018: Averaging weights leads to wider optima and better generalization.
The limitations highlighted by the authors are interesting. Yet, several important limitations are not sufficiently discussed or are missing:
- How realistic is the theoretical analysis w.r.t. the experimental setup? Cf. the question above.
- The SoTA results are obtained under two major limitations which are not discussed. This calls into question the fairness of the comparison:
  - the high inference cost of EoA and ensembling: 6 times higher than the baselines; this important question is not discussed by the authors;
  - the ensembling and EoA models see more data than the baselines, as they are trained on different training/validation splits. It seems normal that the results are better than using a single training/validation split. A fairer evaluation would only consider the same training/validation split.
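For concreteness, a minimal sketch of the two mechanisms under discussion: a running average of the weights along a single run (SMA/SWA-style) and EoA-style inference that averages the predictive distributions of several such averaged models. This is a toy PyTorch illustration of my own, not the authors' implementation; the model, the training loop, and the "updates" are placeholders, and the sketch mainly shows why EoA's inference cost scales linearly with the number of ensemble members.

```python
# Toy sketch (not the authors' code): weight averaging along a run + ensembling of averages.
import copy
import torch

def update_sma(sma_model, model, n_averaged):
    """In-place running average of weights: w_sma <- (n * w_sma + w) / (n + 1)."""
    with torch.no_grad():
        for p_sma, p in zip(sma_model.parameters(), model.parameters()):
            p_sma.mul_(n_averaged / (n_averaged + 1)).add_(p / (n_averaged + 1))
    return n_averaged + 1

def eoa_predict(sma_models, x):
    """EoA inference: average the softmax outputs of the averaged models.
    The cost grows linearly with the number of ensemble members."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in sma_models])
    return probs.mean(dim=0)

# Toy usage with a linear "network" and fake parameter updates, just to show the mechanics.
torch.manual_seed(0)
model = torch.nn.Linear(8, 3)
sma_model, n = copy.deepcopy(model), 1
for step in range(10):                      # stand-in for training iterations
    with torch.no_grad():                   # fake SGD step
        for p in model.parameters():
            p.add_(0.01 * torch.randn_like(p))
    n = update_sma(sma_model, model, n)

ensemble = [sma_model, copy.deepcopy(sma_model)]        # in practice: independent runs
print(eoa_predict(ensemble, torch.randn(4, 8)).shape)   # -> torch.Size([4, 3])
```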
6 times higher than the baselines; this important question is not discussed by the authors. The ensembling and EoA models see more data than the baselines, as they are trained on different training/validation splits. It seems normal that the results are better than using a single training/validation split. A fairer evaluation would only consider the same training/validation split.
NIPS_2019_873
NIPS_2019
---
I think human studies in interpretability research are misrepresented at L59.
* These approaches don't just ask people whether they think an approach is trustworthy. They also ask humans to do things with explanations, and that seems to have a better connection to whether or not an explanation really explains model behavior. This follows the version of interpretability from [1]. This paper laments a lack of theoretical foundation for interpretability approaches (e.g., at L241, L275-277), and it acknowledges at multiple points that we don't know what ground truth for feature importance estimates should look like. Doesn't a person have to interpret an explanation of a model at some point for it to be called interpretable? It seems like human studies may offer a way to philosophically ground interpretability, but this part of the paper misrepresents that research direction, in contrast with its treatment of the rest of the related work.
Minor evaluation problems:
* Given that there are already multiple samples for all these experiments, what is the variance? How significant are the differences between rankings? I only see this as a minor problem because the differences on the right of figure 4 are quite large, and those are what matter most.
* I understand why more baseline estimators weren't included: it's expensive. It would be interesting to incorporate lower-frequency visualizations like Grad-CAM. These can sometimes give significantly different performance (e.g., as in [3]). I expect it may have a significant impact here because a coarser explanation (e.g., a 14x14 heatmap) may help avoid the noise that comes from the non-smooth, high-frequency, per-pixel importance of the explanations investigated. This seems further confirmed by the visualizations in figure 1, which remove whole objects, as pointed out at L264. The smoothness of a coarse visualization method seems like it should do something similar, so it would further confirm the hypothesis about whole objects implied at L264.
* It would be nice to summarize ROAR into one number. It would probably have much more impact that way. One way to do so would be to look at the area under the test accuracy curves of figure 4. Doing so would obscure richer insights that ROAR would provide, but this is a tradeoff made by any aggregate statistic.
Presentation:
* L106: This seems to carelessly resolve a debate that the paper was previously careful to leave open (L29). Why can't it be that the distribution has changed? Do any experiments disentangle changes in distribution from removal of information?
Things I didn't understand:
* L29: I didn't get this till later in the paper. I think I do now, but my understanding might change again after the rebuttal. More detail here would be useful.
* L85: Wouldn't L1 regularization be applied to the weights? Is that feature selection? What steps were actually taken in the experiments used in this paper? Did the ResNet50 used have L1 regularization?
* L122: What makes this a bit unclear is that I don't know what is and what is not a random variable. Normally I would expect some of these (epsilon, eta) to be constants.
Suggestions
---
* It would be nice to know a bit more about how ROAR is implemented. Were the new datasets dynamically generated? Were they pre-processed and stored?
* Say you start re-training from the same point. Train two identical networks with different random seeds. How similar are the importance estimates from these networks (e.g. using rank correlation similarity)?
How similar are the sets of the final 10% of important pixels identified by ROAR across different random seeds? If they're not similar, then the importance estimator isn't even consistent with itself in some sense. This could be thought of as an additional sanity check, and it might help explain why the baseline estimators considered don't do well (a small sketch of this check is given at the end of this review).
[1]: Doshi-Velez, F., & Kim, B. (2017). A Roadmap for a Rigorous Science of Interpretability. arXiv, abs/1702.08608.
Final Evaluation
---
Quality: The experiments were thorough and appropriately supported the conclusions. The paper really only evaluates importance estimators using ROAR; it doesn't really evaluate ROAR itself. I think this is appropriate given the strong motivation the paper has and the lack of consensus about what methods like ROAR should be doing.
Clarity: The paper could be clearer in multiple places, but it ultimately gets the point across.
Originality: The idea is similar to [30], as cited. ROAR uses a similar principle with re-training, and this makes it new enough.
Significance: This evaluation could become popular, inspire future metrics, and inspire better importance estimators. Overall, this makes a solid contribution.
Post-rebuttal Update
---
After reading the author feedback, reading the other reviews, and participating in a somewhat in-depth discussion, I think we reached some agreement, though not everyone agreed about everything. In particular, I agree with R4's two recommendations for the final version. These changes would address burning questions about ROAR. I still think the existing contribution is a pretty good contribution to NeurIPS (7 is a good rating), though I'm not quite as enthusiastic as before. I disagree somewhat with R4's stated main concern, that ROAR does not distinguish enough between saliency methods. While it would be nice to have more analysis of the differences between these methods, ROAR is only one way to analyze these explanations, and one analysis needn't be responsible for identifying differences between all the approaches it analyzes.
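A sketch of the consistency check suggested above: compare the importance maps produced for the same image by two models that differ only in their random seed, via (i) Spearman rank correlation and (ii) the overlap of the top-10% pixel sets that ROAR would remove. The saliency maps below are random stand-ins rather than outputs of any estimator from the paper.

```python
# Consistency check between two saliency maps (toy data; replace with real estimator outputs).
import numpy as np
from scipy.stats import spearmanr

def topk_overlap(a, b, frac=0.10):
    """Fraction of shared pixels among the `frac` most important ones of each map."""
    k = max(1, int(frac * a.size))
    top_a = set(np.argsort(a.ravel())[-k:])
    top_b = set(np.argsort(b.ravel())[-k:])
    return len(top_a & top_b) / k

rng = np.random.default_rng(0)
saliency_seed1 = rng.random((224, 224))                               # stand-in importance map
saliency_seed2 = 0.7 * saliency_seed1 + 0.3 * rng.random((224, 224))  # correlated stand-in

rho, _ = spearmanr(saliency_seed1.ravel(), saliency_seed2.ravel())
print(f"Spearman rho: {rho:.3f}  "
      f"top-10% overlap: {topk_overlap(saliency_seed1, saliency_seed2):.3f}")
```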
* L106: This seems to carelessly resolve a debate that the paper was previously careful to leave open (L29). Why can't it be that the distribution has changed? Do any experiments disentangle changes in distribution from removal of information? Things I didn't understand: