Dataset columns:
- paper_id: string, 10–19 characters
- venue: string, 15 classes
- focused_review: string, 7–10.2k characters
- point: string, 47–690 characters
NIPS_2017_631
NIPS_2017
1. The main contribution of the paper is CBN. But the experimental results in the paper do not advance the state of the art in VQA (on the VQA dataset, which has been out for a while and on which a lot of progress has been made), perhaps because the VQA model used in the paper, on top of which CBN is applied, is not the best one out there. In order to claim that CBN should help even the more powerful VQA models, I would like the authors to conduct experiments on more than one VQA model – preferably ones that are closer to the state of the art (and whose code is publicly available), such as MCB (Fukui et al., EMNLP16) and HieCoAtt (Lu et al., NIPS16). It could be the case that these more powerful VQA models are already so powerful that the proposed early modulation does not help. So it would be good to know whether the proposed conditional batch norm can advance the state of the art in VQA or not. 2. L170: it would be good to know how much of a performance difference this (using different image sizes and different variants of ResNet) can lead to. 3. In Table 1, the results on the VQA dataset are reported on the test-dev split. However, as mentioned in the guidelines from the VQA dataset authors (http://www.visualqa.org/vqa_v1_challenge.html), numbers should be reported on the test-standard split because one can overfit to the test-dev split by uploading multiple entries. 4. In Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance on GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could the authors please shed some light on this? Why do they think this might be happening? 5. Figure 4 visualization: the visualization in figure (a) is from a ResNet that is not finetuned at all. So it is not very surprising that there are no clear clusters for answer types. However, the visualization in figure (b) uses a ResNet whose batch norm parameters have been finetuned with question information. So I think a more meaningful comparison for figure (b) would be with the visualization from the Ft BN ResNet in figure (a). 6. The first two bullets about contributions (at the end of the intro) can be combined. 7. Other errors/typos: a. L14 and 15: repetition of the word “imagine” b. L42: missing reference c. L56: impact -> impacts Post-rebuttal comments: The new results of applying CBN on the MRN model are interesting and convincing that CBN helps fairly well-developed VQA models as well (though the results have not been reported on a state-of-the-art VQA model). So I would like to recommend acceptance of the paper. However, I still have a few comments -- 1. It seems that there is still some confusion about the test-standard and test-dev splits of the VQA dataset. In the rebuttal, the authors report the performance of the MCB model to be 62.5% on the test-standard split. However, 62.5% seems to be the performance of the MCB model on the test-dev split as per Table 1 in the MCB paper (https://arxiv.org/pdf/1606.01847.pdf). 2. The reproduced performance reported on the MRN model seems close to that reported in the MRN paper when the model is trained using VQA train + val data. I would like the authors to clarify in the final version whether they used train + val or just train to train the MRN and MRN + CBN models. And if train + val is being used, the performance can't be compared with the 62.5% of MCB, because that is when MCB is trained on train only. When MCB is trained on train + val, the performance is around 64% (Table 4 in the MCB paper). 3.
The citation for the MRN model (in the rebuttal) is incorrect. It should be -- @inproceedings{kim2016multimodal, title={Multimodal residual learning for visual qa}, author={Kim, Jin-Hwa and Lee, Sang-Woo and Kwak, Donghyun and Heo, Min-Oh and Kim, Jeonghee and Ha, Jung-Woo and Zhang, Byoung-Tak}, booktitle={Advances in Neural Information Processing Systems}, pages={361--369}, year={2016} } 4. Like AR2 and AR3, I would be interested in seeing if the findings from ResNet carry over to other CNN architectures such as VGGNet as well.
4. In Table 2, applying Conditional Batch Norm to layer 2 in addition to layers 3 and 4 deteriorates performance on GuessWhat?! compared to when CBN is applied to layers 4 and 3 only. Could the authors please shed some light on this? Why do they think this might be happening?
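As background for the review above, a minimal sketch of conditional batch normalization is given below: a small projection maps a question embedding to per-channel offsets that are added to a frozen batch norm layer's scale and shift. The shapes, names, and the use of plain linear projections are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of conditional batch normalization (CBN): a frozen BatchNorm's
# scale/shift (gamma, beta) are modulated by per-channel deltas predicted from a
# question embedding. Shapes and names are illustrative.
import numpy as np

def conditional_batch_norm(x, q_emb, gamma, beta, W_g, W_b, eps=1e-5):
    """x: feature maps (N, C, H, W); q_emb: question embedding (D,);
    gamma, beta: frozen BN parameters (C,); W_g, W_b: (D, C) projections."""
    delta_gamma = q_emb @ W_g          # per-channel scale offset, shape (C,)
    delta_beta = q_emb @ W_b           # per-channel shift offset, shape (C,)
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    g = (gamma + delta_gamma).reshape(1, -1, 1, 1)
    b = (beta + delta_beta).reshape(1, -1, 1, 1)
    return g * x_hat + b

# Toy usage: 2 images, 8 channels, 4x4 maps, 16-d question embedding.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8, 4, 4))
q = rng.normal(size=16)
out = conditional_batch_norm(x, q, np.ones(8), np.zeros(8),
                             rng.normal(scale=0.01, size=(16, 8)),
                             rng.normal(scale=0.01, size=(16, 8)))
print(out.shape)  # (2, 8, 4, 4)
```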
ARR_2022_338_review
ARR_2022
- The unsupervised translation tasks are all quite superficial, taking existing datasets of similar languages (e.g. En-De Multi30k, En-Fr WMT) and editing them into an unsupervised MT corpus. - Improvements on Multi30k are quite small (< 1 BLEU), reported over single runs, and measured with BLEU scores alone. It would be good to report averages over multiple runs and also report some more modern metrics like COMET or BLEURT. - It is initially quite unclear from the writing where the sentence-level representations come from. As they are explicitly modeled, they need supervision from somewhere. The constant comparison to latent variable models and calling these sentence representations latent codes does not add to the clarity of the paper. I hope this will be improved in a revision of the paper. Some typos: - 001: "The latent variables" -> "Latent variables" - 154: "efficiently to compute" -> "efficient to compute" - 299: "We denote the encoder and decoder for encoding and generating source-language sentences as the source encoder and decoder" - unclear - 403: "langauge" -> "language"
- Improvements on Multi30k are quite small (< 1 BLEU), reported over single runs, and measured with BLEU scores alone. It would be good to report averages over multiple runs and also report some more modern metrics like COMET or BLEURT.
NIPS_2016_537
NIPS_2016
weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63 refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123: this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann, which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
7) in the same section, the notation {\cal P} with a subscript is used several times without being defined.
NIPS_2022_601
NIPS_2022
Although I like the general idea of using DPP, I found there are various issues with the current version of the paper. Please see my detailed comments as follows. • The paper specifically targets permutation problems, but I don't see how this permutation property is incorporated into the design of the proposed acquisition function (except for the fact that batch BO is used, so that we can evaluate multiple data points in parallel and thus avoid the issue of the large search space for permutation problems). • Even though the paper provides various theoretical analyses, these analyses do not seem rigorous and might not really clarify the analytical properties of the proposed approach. o The Acquisition Weighted Kernel L^{AW} defined in Line 122 does not seem to be a real (valid) kernel. Since it depends on an arbitrary acquisition function a(x), it seems impossible to me that it is a valid kernel in all cases. Besides, this seems to be a component of the proposed acquisition function, rather than something that should be called a "kernel". o The regret analysis in Theorem 3.6 depends on the maximum information gain \gamma_T, but this \gamma_T value is not properly bounded in Theorem 3.9. Theorem 3.9 only shows that \gamma_T is smaller than a function of \lambda_{max}, but there is no guarantee that \lambda_{max} is bounded when T goes to infinity. This is a key step in any BO analysis. In the literature, only a few kernels have been shown to have \lambda_{max} bounded when T goes to infinity. o Again, the same problem arises with Theorem 3.12: is there any guarantee that the \lambda_{max} of the position kernel is upper bounded? • The performance of the proposed approach on the permutation optimization problems is not that good, though (Section 5.1). LAW-EI performs pretty badly, much worse than other baselines in various cases, while LAW-EST is on par with other baselines and only performs well in one problem. Besides, I don't understand why LAW-UCB is not added as one of the baselines. The justification regarding the size of the search space does not seem reasonable to me. • Besides, the experiments do not seem very strong or fair to me. I don't understand why all the baselines use the position kernels; why not use the default settings of these baselines from the literature? Besides, it seems like some baselines related to BO with discrete & categorical variables are missing. The paper also needs to compare its proposed approach with these baselines. I think the paper does not mention much about the limitations or the societal impacts of the proposed approach.
• Besides, the experiments do not seem very strong or fair to me. I don't understand why all the baselines use the position kernels; why not use the default settings of these baselines from the literature? Besides, it seems like some baselines related to BO with discrete & categorical variables are missing. The paper also needs to compare its proposed approach with these baselines. I think the paper does not mention much about the limitations or the societal impacts of the proposed approach.
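The reviewer's doubt about whether an acquisition-weighted similarity L^{AW} is a valid (positive semi-definite) kernel can be probed empirically: for any candidate construction, a negative eigenvalue of a sampled Gram matrix certifies that it is not PSD, whereas non-negative eigenvalues on samples are merely consistent with validity. The sketch below uses a placeholder RBF kernel and a placeholder acquisition function; it is not the construction defined in the paper's Line 122.

```python
# Empirical PSD probe for a candidate acquisition-weighted kernel.
# The kernel/acquisition definitions here are placeholders, not the paper's L^{AW}.
import numpy as np

def rbf(x, y, ls=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * ls ** 2))

def acq(x):                      # placeholder acquisition value, assumed >= 0
    return 1.0 + np.sum(np.sin(x) ** 2)

def candidate_law(x, y):
    # One possible weighting scheme; swap in the definition being analyzed.
    return acq(x) * rbf(x, y) * acq(y)

def min_gram_eigenvalue(kernel, dim=5, n=200, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    G = np.array([[kernel(a, b) for b in X] for a in X])
    return np.linalg.eigvalsh((G + G.T) / 2).min()

# A clearly negative value certifies the candidate is not a valid kernel;
# values near zero (up to numerical error) are consistent with PSD-ness.
print(min_gram_eigenvalue(candidate_law))
```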
oEuTWBfVoe
ICLR_2024
I think the paper has several weaknesses. Please see the following list and the questions section. * The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accepted that backpropagation is biologically implausible. * Regarding the following sentence in Section 3 "We further define a readout ... ,i.e., $\mathbf{m}^t = f(y^t).$", did you mean to write $\mathbf{m}^t = f(\mathbf{y}^t)$ ($\mathbf{y}^t$ with boldsymbol)? * On page 4, "setting $\theta_{110} = 1$ and $\theta_{012} = -1$," the second term should be $\theta_{021}$ rather than $\theta_{012}$. * The initialization of the polynomial coefficient parameters is not clear, and it seems they are initialized close to zero according to Figure 2. It would be valuable to explain how they were initialized. * The paper models synaptic plasticity rules only for feedforward connections. It would be interesting to explore the impact of lateral connections (by adding additional terms in Equation 6). Have you experimented with such a setup? * On page 7, the authors state that "In the case of the MLP, we tested various architectures and highlight results for a 3-10-1 neuron topology." What are the results for the other architectures? Putting them into the paper would also be valuable (as ablation studies). * The hyperparameters for the experiments are missing. What is the learning rate, what is the optimizer, etc.? * I do not see that much difference between the experiment presented in Section 4 and the experiment in (Confavreux et al., 2020) (Section 3.1) except the choice of optimization method. In your experimental setup, you also do not model the global reward. Therefore, I think this makes it more similar to the experiment in (Confavreux et al., 2020). * Comparison to previous work is missing.
* The statement in the introduction regarding the biological plausibility of backpropagation may be too weak ("While the backpropagation ..., its biological plausibility remains a subject of debate."). It is widely accepted that backpropagation is biologically implausible.
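To make the θ-indexing in the review above concrete: a common convention (assumed here; the paper's exact Equation 6 is not reproduced) parameterizes a local plasticity rule as a polynomial in the presynaptic activity x, postsynaptic activity y, and current weight w, Δw = η Σ_{ijk} θ_{ijk} x^i y^j w^k. Under this convention, θ_{110} = 1 and θ_{021} = −1 recover Oja's rule Δw = η(xy − y²w), which is consistent with the reviewer's remark that the second coefficient should be θ_{021} rather than θ_{012}. A minimal sketch:

```python
# Sketch of a polynomial-parameterized plasticity rule, assuming the convention
# delta_w = eta * sum_{ijk} theta[i, j, k] * x**i * y**j * w**k (degree <= 2 here).
import numpy as np

def delta_w(theta, x, y, w, eta=0.01):
    d = 0.0
    for (i, j, k), coeff in np.ndenumerate(theta):
        d += coeff * (x ** i) * (y ** j) * (w ** k)
    return eta * d

theta = np.zeros((3, 3, 3))
theta[1, 1, 0] = 1.0    # x * y term
theta[0, 2, 1] = -1.0   # -y^2 * w term  -> together these give Oja's rule
print(delta_w(theta, x=0.5, y=0.8, w=0.3))        # eta * (0.4 - 0.192)
print(0.01 * (0.5 * 0.8 - 0.8 ** 2 * 0.3))         # same value, computed directly
```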
NIPS_2017_356
NIPS_2017
- I would have liked to see some analysis about the distribution of the addressing coefficients (Betas) with and without the bias towards sequential addressing. This difference seems to be very important for the synthetic task (likely because each question is based on the answer set of the previous one). Also I don't think the value of the trade-off parameter (Theta) was ever mentioned. What was it and how was it selected? If instead of a soft attention, the attention from the previous question was simply used, how would that baseline perform? - Towards the same point, to what degree does the sequential bias affect the VisDial results? - Minor complaint. There are a lot of footnotes which can be distracting, although I understand they are good for conserving clarity / space while still providing useful details. - Does the dynamic weight prediction seem to identify a handful of modes depending on the question being asked? Analysis of these weights (perhaps tSNE colored by question type) would be interesting. - It would have been interesting to see not only the retrieved and final attentions but also the tentative attention maps in the qualitative figures.
- It would have been interesting to see not only the retrieved and final attentions but also the tentative attention maps in the qualitative figures.
ICLR_2022_1393
ICLR_2022
I think that: The comparison to baselines could be improved. Some of the claims are not carefully backed up. The explanation of the relationship to the existing literature could be improved. More details on the above weaknesses: Comparison to baselines: "We did not find good benchmarks to compare our unsupervised, iterative inferencing algorithm against" I think this is a slightly unfair comment. The unsupervised and iterative inferencing aspects are only positives if they have the claimed benefits, as compared to other ML methods (more accurate and better generalization). There is a lot of recent work addressing the same ML task (as mentioned in the related work section.) This paper contains some comparisons to previous work, but as I detail below, there seem to be some holes. FCNN is by far the strongest competitor for the Laplace example in the appendix. Why is this left off of the baseline comparison table in the main paper? Further, is there any reason that FCNN couldn't have been used for the other examples? Why is FNO not applied to the Chip cooling (Temperature) example? A major point in this paper is improved generalization across PDE conditions. However, I think that's hard to check when only looking at the test errors for each method. In other words, is CoAE-MLSim's error lower than UNet's error because the approach fit the training data better, or is it because it generalized better? Further, in some cases, it's not obvious to me if the test errors are impressive, so maybe it is having a hard time generalizing. It would be helpful to see train vs. test errors, and ideally I like to see train vs. val. vs. test. For the second main example (vortex decay over time), looking at Figures 8 and 33 (four of the fifty test conditions), CoAE-MLSim has much lower error than the baselines in the extrapolation phase but noticeably higher in the interpolation phase. In some cases, it's hard to tell how close the FNO line is to zero - it could be that CoAE-MLSim even has orders of magnitude more error. Since we can see that there's a big difference between interpolation and extrapolation, it would be helpful to see the test error averaged over the 50 test cases but not averaged over the 50 time steps. When averaged over all 50 time steps for the table on page 9, it could be that CoAE-MLSim looks better than FNO just because of the extrapolation regime. In practice, someone might pick FNO over CoAE-MLSim if they aren't interested in extrapolating in time. Do the results in the table for vortex decay back up the claim that CoAE-MLSim is generalizing over initial conditions better than FNO, or is it just better at extrapolation in time? Backing up claims: The abstract says that the method is tested for a variety of cases to demonstrate a list of things, including "scalability." The list of "significant contributions" also includes "This enables scaling to arbitrary PDE conditions..." I might have missed/forgotten something, but I think this wasn't tested? "Hence, the choice of subdomain size depends on the trade-off between speed and accuracy." This isn't clear to me from the results. It seems like 32^3 is the fastest and most accurate? I noticed some other claims that I think are speculations, not backed up with reported experiments. If I didn't miss something, this could be fixed by adding words like "might." "Physics constrained optimization at inference time can be used to improve convergence robustness and fidelity with physics." 
"The decoupling allows for better modeling of long range time dynamics and results in improved stability and generalizability." "Each solution variable can be trained using a different autoencoder to improve accuracy." "Since, the PDE solutions are dependent and unique to PDE conditions, establishing this explicit dependency in the autoencoder improves robustness." "Additionally, the CoAE-MLSim apprach solves the PDE solution in the latent space, and hence, the idea of conditioning at the bottleneck layer improves solution predictions near geometry and boundaries, especially when the solution latent vector prediction has minor deviations." "It may be observed that the FCNN performs better than both UNet and FNO and this points to an important aspect about representation of PDE conditions and its impact on accuracy." The representation of the PDE conditions could be why, but it's hard to say without careful ablation studies. There's a lot different about the networks. Similarly: "Furthermore, compressed representations of sparse, high-dimensional PDE conditions improves generalizability." Relationship to literature: The citation in this sentence is abrupt and confusing because it sounds like CoAE-MLSim is a method from that paper instead of the new method: "Figure 4 shows a schematic of the autoencoder setup used in the CoAE-MLSim (Ranade et al., 2021a)." More broadly, Ranade et al., 2021a, Ranade et al., 2021b, and Maleki, et al., 2021 are all cited and all quite related to this paper. It should be more clear how the authors are building on those papers (what exactly they are citing them for), and which parts of CoAE-MLSim are new. (The Maleki part is clearer in the appendix, but the reader shouldn't have to check the appendix to know what is new in a paper.) I thought that otherwise the related work section was okay but was largely just summarizing some papers without giving context for how they relate to this paper. Additional feedback (minor details, could fix in a later version, but no need to discuss in the discussion phase): - The abstract could be clearer about what the machine learning task is that CoAE-MLSim addresses. - The text in the figures is often too small. - "using pre-trained decoders (g)" - probably meant g_u? - Many of the figures would be more clear if they said pre-trained solution encoders & solution decoders, since there are multiple types of autoencoders. - The notation is inconsistent, especially with nu. For example, the notation in Figures 2 & 3 doesn't seem to match the notation in Alg 1. Then on Page 4 & Figure 4, the notation changes again. - Why is the error table not ordered 8^3, 16^3, 32^3 like Figure 9? The order makes it harder for the reader to reason about the tradeoff. - Why is Err(T_max) negative sometimes? Maybe I don't understand the definition, but I would expect to use absolute value? - I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method. - Figure 11: I'm guessing that the y-axis is log error, but this isn't labeled as such. I didn't really understand the the legend or the figure in general until I got to the appendix, since there's little discussion of it in the main paper. - "Figure 30 shows comparisons of CoAE-MLSim with Ansys Fluent for 4 unseen objects in addition to the example shown in the main paper." - probably from previous draft. Now this whole example is in the appendix, unless I missed something. 
- My understanding is that each type of autoencoder is trained separately and that there's an ordering that makes sense to do this in, so you can use one trained autoencoder for the next one (i.e. train the PDE condition AEs, then the PDE solution AE, then the flux conservation AE, then the time integration AE). This took me a while to understand though, so maybe this could be mentioned in the body of the paper. (Or perhaps I missed that!) - It seems that the time integration autoencoder isn't actually an autoencoder if it's outputting the solution at the next time step, not reconstructing the input. - Either I don't understand Figure 5 or the labels are wrong. - It's implied in the paper (like in Algorithm 1) that the boundary conditions are encoded like the other PDE conditions. In the Appendix (A.1), it's stated that "The training portion of the CoAE-MLSim approach proposed in this work corresponds to training of several autoencoders to learn the representations of PDE solutions, conditions, such as geometry, boundary conditions and PDE source terms as well as flux conservation and time integration." But then later in the appendix (A.1.3), it's stated that boundary conditions could be learned with autoencoders but are actually manually encoded for this paper. That seems misleading.
- I don't think the study about different subdomain sizes is an "ablation" study since they aren't removing a component of the method.
NIPS_2017_217
NIPS_2017
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31]. - "Embedding" is an overloaded word for a scalar value that represents object ID. - The model of [31] is used in a post-processing stage to refine the detections. Ideally, the proposed model should be end-to-end without any post-processing. - Keypoint detection results should be included in the experiments section. - Sometimes the predicted tag value might be in the range of tag values for two or more nearby people; how is it determined to which person the keypoint belongs? - Line 168: It is mentioned that the anchor point changes if the neck is occluded. This makes training noisy since the distances for most examples are computed with respect to the neck. Overall assessment: I am on the fence about this paper. The paper achieves state-of-the-art performance, but it is incremental and does not have much technical substance. Furthermore, the main improvement comes from running [31] in a post-processing stage.
- The paper is incremental and does not have much technical substance. It just adds a new loss to [31].
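As context for the question about overlapping tag values, the sketch below shows one common greedy decoding heuristic for tag-based grouping: assign each keypoint to the existing person whose mean tag is closest, if within a threshold, and otherwise start a new person. This is a generic illustration, not necessarily the grouping rule used in the paper.

```python
# Generic greedy tag-grouping heuristic (illustrative, not the paper's exact rule):
# assign a keypoint to the person whose running mean tag is closest, if within
# a threshold; otherwise start a new person group.
def group_by_tags(detections, threshold=0.5):
    """detections: list of (joint_id, tag_value, score); returns list of groups."""
    groups = []  # each group: {"tags": [...], "joints": {joint_id: (tag, score)}}
    for joint_id, tag, score in sorted(detections, key=lambda d: -d[2]):
        best, best_dist = None, float("inf")
        for g in groups:
            if joint_id in g["joints"]:
                continue  # this person already has that joint
            mean_tag = sum(g["tags"]) / len(g["tags"])
            dist = abs(tag - mean_tag)
            if dist < best_dist:
                best, best_dist = g, dist
        if best is not None and best_dist < threshold:
            best["joints"][joint_id] = (tag, score)
            best["tags"].append(tag)
        else:
            groups.append({"tags": [tag], "joints": {joint_id: (tag, score)}})
    return groups

# Two people with nearby tag values around 1.0 and 1.6.
dets = [(0, 1.02, 0.9), (0, 1.61, 0.8), (1, 0.98, 0.7), (1, 1.58, 0.6)]
print(len(group_by_tags(dets)))  # 2
```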
rs78DlnUB8
EMNLP_2023
1. The paper lacks a clear motivation for considering text graphs. Except for the choice of complexity indices, which can easily be changed according to the domain, the proposed method is general and can be applied to other graphs or even other types of data. Moreover, formal formulations of text graphs and the research question are missing in the paper. 2. Several curriculum learning methods have been discussed in Section 1. However, the need for designing a new curriculum learning method for text graphs is not justified. The research gap, e.g., why existing methods can’t be applied, is not discussed. 3. Equations 7-11 provide several choices of the function f. However, there are no theoretical analyses or empirical experiments to advise on the choice of f. 4. In the overall performance comparison (Table 2), other curriculum learning methods do not improve performance compared to No-CL. These results are not consistent with the results reported in the competitors' papers. At least some discussion of the reason should be included. In addition, it is unclear how many independent runs were conducted to get the accuracy and F1. What are the standard deviations? 5. Although the experimental results in Table 3 and Table 4 show that the performance remains unchanged, it is unclear how the transfer of knowledge is done in the proposed method. An in-depth discussion of this property would strengthen the soundness of the paper. 6. In line 118, the authors say the learned curricula are model-dependent, but they also say the curricula are transferable across models. These two statements seem contradictory.
2. Several curriculum learning methods have been discussed in Section 1. However, the need for designing a new curriculum learning method for text graphs is not justified. The research gap, e.g., why existing methods can’t be applied, is not discussed.
NIPS_2021_1743
NIPS_2021
1. While the paper claims the importance of the language modeling capability of pre-trained models, the authors did not conduct experiments on generation tasks that are more likely to require a well-performing language model. Experiments on word similarity and SQuAD in section 5.3 cannot really reflect the capability of language modeling. The authors may consider including tasks like language modeling, machine translation or text summarization to strengthen this part, as this is one of the main motivations of COCO-LM. 2. The analysis of SCL in section 5.2 regarding few-shot ability is not convincing. The paper claims that a more regularized representation space produced by SCL may result in better generalization ability in few-shot scenarios. However, the results in Figure 7(c) and (d) do not meet the expectation that COCO-LM achieves much larger improvements with fewer labels and that the improvements gradually disappear with more labels. Besides, the authors may check whether COCO-LM brings benefits to sentence retrieval tasks with the learned text representations. 3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for example RoBERTa, ELECTRA, and DeBERTa, which are of similar size to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works. Questions for the Authors 1. In the experimental setup, why did the authors switch the types of BPE vocabulary, i.e., uncased and cased? Will the change of BPE cause variance in performance? 2. In Table 2, it looks like COCO-LM especially affects the performance on CoLA and RTE and hence the final performance. Can the authors provide some explanation of how the proposed pre-training tasks affect these two GLUE tasks? 3. In section 5.1, the authors say that the benefits of the stop gradient operation are more on stability. What stability, the training process? If so, are there any learning curves of COCO-LM with and without stop gradient during pre-training to support this claim? 4. In section 5.2, the term “Data Argumentation” seems wrong. Did the authors mean data augmentation? Typos 1. Check the term “Argumentation” in lines 164, 252, and 314. 2. Line 283, “a unbalanced task”, should be “an unbalanced task”. 3. Line 326, “contrast pairs”, should be “contrastive pairs” to be consistent throughout the paper?
3. The comparison with Megatron is a little overrated. The performance of Megatron and COCO-LM is close to other approaches, for example RoBERTa, ELECTRA, and DeBERTa, which are of similar size to COCO-LM. If the authors claim that COCO-LM is parameter-efficient, the conclusion is also applicable to the above related works. Questions for the Authors 1. In the experimental setup, why did the authors switch the types of BPE vocabulary, i.e., uncased and cased? Will the change of BPE cause variance in performance?
WzUPae4WnA
ICLR_2025
1. **The motivation of this paper appears to be questionable.** The authors claim that DoRA increases the risk of overfitting, basing this on two pieces of evidence: - DoRA introduces additional parameters compared to LoRA. - The gap between training and test accuracy curves for DoRA is larger than that of BiDoRA. However, these two points do not convincingly support the claim. First, while additional parameters can sometimes contribute to overfitting, they are not a sufficient condition for it. In fact, DoRA adds only a negligible number of parameters (0.01% of the model size, as reported by the authors) beyond LoRA. Moreover, prior work [1] suggests that LoRA learns less than full fine-tuning and may even act as a form of regularization, implying that the risk of overfitting is generally low across these PEFT methods. Additionally, the training curves are not necessarily indicative of overfitting, as they can be significantly influenced by factors such as hyperparameters, model architecture, and dataset characteristics. The authors present results from only a single configuration, which limits the generalizability of their findings. Finally, the authors’ attribution of an *alleged overfitting problem* to DoRA’s concurrent training lacks a strong foundation. 2. **The proposed BiDoRA method is overly complex and difficult to use.** It requires a two-phase training process, with the first phase itself consisting of two sub-steps. It also introduces two additional hyperparameters: the weight of orthogonality regularization and a ratio for splitting training and validation sets. As a result, BiDoRA takes 3.92 times longer to train than LoRA. 3. **Performance differences between methods are minimal across evaluations**. In nearly all results, the performance differences between the methods are less than 1 percentage point, which may be attributable to random variation. Furthermore, the benchmarks selected are outdated and likely saturated. [1] [LoRA Learns Less and Forgets Less](https://arxiv.org/abs/2405.09673)
3. **Performance differences between methods are minimal across evaluations**. In nearly all results, the performance differences between the methods are less than 1 percentage point, which may be attributable to random variation. Furthermore, the benchmarks selected are outdated and likely saturated. [1] [LoRA Learns Less and Forgets Less](https://arxiv.org/abs/2405.09673)
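The reviewer's argument that DoRA's extra parameters are negligible can be checked with quick arithmetic, under the standard assumption that DoRA adds one learned magnitude scalar per column of each adapted weight matrix on top of the usual LoRA factors. The layer size and rank below are hypothetical, chosen to resemble a large LM block.

```python
# Back-of-envelope parameter counts for a single adapted weight matrix,
# assuming DoRA = LoRA factors + one magnitude scalar per weight-matrix column.
d_out, d_in, r = 4096, 4096, 16

base = d_out * d_in                 # frozen pretrained weights
lora = r * (d_in + d_out)           # low-rank factors A (r x d_in), B (d_out x r)
dora_extra = d_in                   # magnitude vector, one scalar per column

print(f"base: {base:,}  lora: {lora:,}  dora extra: {dora_extra:,}")
print(f"dora extra / base: {dora_extra / base:.4%}")   # roughly 0.02% of the base matrix
print(f"dora extra / lora: {dora_extra / lora:.2%}")   # roughly 3% of the LoRA parameters
```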
NIPS_2019_494
NIPS_2019
of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what is S and Xt). - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like the number of units etc.), and many other technical details. - the paper is presented well, e.g., the quality of the graphs is good (though the labels on the graphs in Fig 3 could be slightly bigger). Significance: - from just the paper: the results would be more interesting (and significant) if there were a way to reproduce the work more easily. At present I cannot see this work being easily taken up by many other researchers, mainly due to the lack of detail in the description. The work is interesting, and I like the idea, but with a relatively high-level description of it in the paper it would need a little more than the pseudocode in the materials to convince me to use it (but see next). - In the supplementary material it is stated that the source code will be made available, and in combination with the paper and the information in the supplementary material, the level of detail may be just right (but it's hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for more research in a similar direction.
- I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what is S and Xt).
NIPS_2020_1371
NIPS_2020
When reading the paper, I got the impression that the paper is not finished, with a couple of key experiments missing. Some parts of the paper lack motivation. Terminology is sometimes unclear and ambiguous. 1. Terminology. The paper uses the terms "animation", "generative", and "interpolation". See contribution 1 in L40-42. While the paper reported some interpolation experiments, I haven't found any animation or generation experiments. It seems the authors equate interpolation and animation (Section 4.2), which is not correct. I consider animation to be physically plausible motion, e.g., a person opening a mouth or a car moving, while interpolation is just warping one image into the other. Fig 7 shows exactly this, with the end states being plausible states of the system. The authors should fix the ambiguity to avoid misunderstanding. The authors also don't report any generation results. Can I sample a random shape from the learnt distribution? If not, then I don't think it's correct to say the model is generative. 2. Motivation. It's not clear why the problem is important from a practical standpoint. Why would one want to interpolate between two icons? The motivation behind animation is clearer, but in my opinion the paper doesn't do animation. I believe that from a practical standpoint, letting the user input text and be able to generate an icon would also be important. Again, I have a hard time understanding why shape autoencoding and interpolation is interesting. 3. Experiments. Probably the biggest concern with the paper is with the experiments. The paper reports only self comparisons. The paper also doesn't explain why this is so, which adds to the poor motivation problem. In a generative setting comparisons with SketchRNN could be performed.
3. Experiments. Probably the biggest concern with the paper is with the experiments. The paper reports only self comparisons. The paper also doesn't explain why this is so, which adds to the poor motivation problem. In a generative setting comparisons with SketchRNN could be performed.
ICLR_2022_2967
ICLR_2022
The motivation for the federated image translation remains unclear. The authors refer to [2] for the motivation of federated I2I and provide an example from the medical domain, but there are significant issues with both arguments. First, [2] is not accepted to a peer-reviewed venue, which makes the use of any arguments from that paper questionable. Second, it appears that for the majority of medical applications, federated I2I is rather impractical, because a) I2I, in general, requires a large amount of data, which is usually not achievable for most medical applications; b) I2I is unreliable, as it may not preserve the important information of the input image in the translation result, which is very undesirable for medical domains. For most of the other applications generally discussed in the I2I literature, the privacy issue is usually not a concern. The main idea behind this method, which is to trivially split the CUT loss into one part that only uses the samples from the source domain (Eq. 12) and one that uses target domain examples (Eq. 11), provides very limited novelty. In fact, [2] uses a very similar and rather trivial loss decomposition strategy. The use of classifier weights for discriminator initialization is also a very well-known technique in the literature. Even if we disregard the triviality of the proposed approach, the proposed training strategy is impractical. Even though the proposed method implies roughly 500 times less data transfer than Federated CycleGAN [2], which required transmitting the weights of two generator-discriminator pairs (Table 1), it still requires the transfer of roughly 4.2 MB of data per iteration, which would make training of a single I2I model last for months. In the experiments section, the authors compare the proposed method with Federated CycleGAN, CUT and FastCUT, and report the FID scores and segmentation quality on popular I2I datasets. It appears that the reported improvement in translation quality comes from either random weight initialization or the use of the pretrained classifier weights (because everything else is essentially the same as in FastCUT). Ideally, an average score over several randomly initialized runs should be reported to exclude the first factor, and an ablation study with a randomly initialized discriminator should be conducted to illustrate the second factor. Also, since the main claim of the paper is the federation component, it is crucial to report the amount of time spent on training for all methods discussed above vs. the proposed method. The related work section misses an overview of existing non-federated I2I methods. [1] Park, Taesung, et al. "Contrastive learning for unpaired image-to-image translation." European Conference on Computer Vision. Springer, Cham, 2020. [2] Song, Joonyoung, and Jong Chul Ye. "Federated CycleGAN for Privacy-Preserving Image-to-Image Translation." arXiv preprint arXiv:2106.09246 (2021).
The main idea behind this method, which is to trivially split the CUT loss into one part that only uses the samples from the source domain (Eq. 12) and one that uses target domain examples (Eq. 11), provides very limited novelty.
ICLR_2023_1823
ICLR_2023
Weakness: 1. They stack the method of Mirzasoleiman et al., 2020 with a group-learning setting, and then use the classical method DBSCAN to cluster. 2. In the comparison of gradient space and feature space, the normalization of the data in Figure 2 is not so clear. I think you do not need to normalize the data, since the shrinkage of the correctly classified points is beneficial to outlier identification. 3. It is not clear with respect to which variable the derivative is taken. I thought the gradient would be a very high-dimensional vector (tensor). Since the network has many layers (e.g. ResNet-50), the dimension of the derivative should be roughly 50 times as large. When moving to the numerical results, I found that the ResNet-50 is pretrained and the derivative only involves the parameters of the logistic regression.
1. They stack the method of Mirzasoleiman et al., 2020 with a group-learning setting, and then use the classical method DBSCAN to cluster.
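The reviewer's last point can be made concrete: with a frozen ResNet-50 backbone and a logistic-regression head trained with softmax cross-entropy, the per-sample gradient with respect to the head weights is (p − y) ⊗ feature, so its dimension is num_classes × feature_dim regardless of backbone depth. The sketch below (illustrative stand-in data, not the paper's code) clusters such last-layer gradients with DBSCAN.

```python
# Cluster per-sample gradients of a linear (logistic-regression) head with DBSCAN.
# For softmax cross-entropy, the gradient wrt the head weights is
# outer(p - y_onehot, feat), so its dimension is num_classes * feat_dim,
# independent of backbone depth. Features here are random stand-ins for
# frozen ResNet-50 embeddings.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
n, feat_dim, num_classes = 200, 64, 3
feats = rng.normal(size=(n, feat_dim))          # frozen-backbone features
labels = rng.integers(0, num_classes, size=n)   # (possibly noisy) labels
W = rng.normal(scale=0.1, size=(num_classes, feat_dim))

logits = feats @ W.T
p = np.exp(logits - logits.max(axis=1, keepdims=True))
p /= p.sum(axis=1, keepdims=True)
y = np.eye(num_classes)[labels]
grads = ((p - y)[:, :, None] * feats[:, None, :]).reshape(n, -1)  # (n, C*D)

clustering = DBSCAN(eps=3.0, min_samples=5).fit(grads)
print(np.unique(clustering.labels_))  # -1 marks points DBSCAN treats as outliers
```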
NIPS_2020_541
NIPS_2020
Some concerns: 1. There is too much overlapping information in Table 1, Table 2, and Figure 1. Figure 1 includes all the information presented in Tables 1 and 2. 2. What is the logic connecting the proposed method with [9] and [16]? Why do the authors compare the proposed method with [9] first, then [16]? Why do the authors only compare the computational cost with [9], but not [16]? Is the computational cost a big contribution of this paper? Is that a big issue in a practical scenario? That part is weird to me and there is no further discussion about it in the rest of this paper. 3. Why does the proposed column smoothing method produce better results than the block smoothing method? 4. The accuracy drop on the ImageNet dataset is a concern, which makes the proposed method impractical.
2. What is the logic connecting the proposed method with [9] and [16]? Why do the authors compare the proposed method with [9] first, then [16]? Why do the authors only compare the computational cost with [9], but not [16]? Is the computational cost a big contribution of this paper? Is that a big issue in a practical scenario? That part is weird to me and there is no further discussion about it in the rest of this paper.
ARR_2022_187_review
ARR_2022
1. Not clear if the contributions of the paper are sufficient for a long *ACL paper. By tightening the writing and removing unnecessary details, I suspect the paper will make a nice short paper, but in its current form, the paper lacks sufficient novelty. 2. The writing is difficult to follow in many places and can be simplified. 1. Lines 360-367 occupy more space than needed. 2. It was not clear to me that Vikidia is the new dataset that was introduced by the paper until I read the last section :) 3. Too many metrics used for evaluation. While I commend the paper’s thoroughness in using different metrics for evaluation, I believe in this case the multiple metrics create more confusion than clarity in understanding the results. I recommend using the strictest metric (such as RA) because it will clearly highlight the differences in performance. Also consider marking the best results in each column/row using boldface text. 4. I suspect that the other evaluation metrics NDCG, SRR, and KTCC are unable to resolve the differences between NPRM and the baselines in some cases. For example, based on the extremely large values (>0.99) for all approaches in Table 4, I doubt the difference between NPRM’s 0.995 and Glove+SVMRank's 0.992 for Avg. SRR on NewsEla-EN is statistically significant. 5. I did not understand the utility of presenting results in Table 2 and Table 3. Why not simplify the presentation by selecting the best regression-based and classification-based approaches for each evaluation dataset and comparing them against NPRM in Table 4 itself? 6. From my understanding, RA is the strictest evaluation metric, and NPRM performs worse on RA when compared to the baselines (Table 4), where simpler approaches fare better. 7. I appreciate the paper foreseeing the limitations of the proposed NPRM approach. However, I find the discussion of the first limitation somewhat incomplete and ending abruptly. The last sentence has the tone of “despite the weaknesses, NPRM is useful'' but it does not flesh out why it’s useful. 8. I found lines 616-632 excessively detailed for a conclusion paragraph. Maybe simply state that better metrics are needed for ARA evaluation? Such detailed discussion is better suited for Sec 4.4. 9. Why was a classification-based model not used for the zero-shot experiments in Table 5 and Table 6? These results in my opinion are the strongest aspect of the paper, and should be as thorough as the rest of the results. 10. Line 559: “lower performance on Vikidia-Fr compared to Newsela-Es …” – Why? These are different languages after all, so isn’t the performance difference incomparable?
5. I did not understand the utility of presenting results in Table 2 and Table 3. Why not simplify the presentation by selecting the best regression-based and classification-based approaches for each evaluation dataset and comparing them against NPRM in Table 4 itself?
NIPS_2016_370
NIPS_2016
, and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and Theorem 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach?
3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs.
NIPS_2018_15
NIPS_2018
- The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: Overall, I think this is a really good paper. While some parts could be done a bit more principled and perhaps simpler, I think the paper makes a good contribution as it stands and may inspire a lot of interesting future work. My main concern is the comparison with state-of-the-art deep architectures, where I would like the authors to perform a better control (see below), the results of which may undermine their main claim to some extent. Details: - The comparison with state-of-the-art deep architectures seems a bit unfair. These architectures are designed for dealing with natural images and therefore have an order of magnitude more feature maps per layer, which are probably not necessary for the simple image statistics in the Pathfinder challenge. However, this difference alone increases the number of parameters by two orders of magnitude compared with hGRU or smaller CNNs. I suspect that using the same architectures with smaller number of feature maps per layer would bring the number of parameters much closer to the hGRU model without sacrificing performance on the Pathfinder task. In the author response, I would like to see the numbers for this control at least on the ResNet-152 or one of the image-to-image models. The hGRU architecture seems very ad-hoc. - It is not quite clear to me what is the feature that makes the difference between GRU and hGRU. Is it the two steps, the sharing of the weights W, the additional constants that are introduced everywhere and in each iteration (eta_t). I would have hoped for a more systematic exploration of these features. - Why are the gain and mix where they are? E.g. why is there no gain going from H^(1) to \tilde H^(2)? - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? - Why are both H^(1) and C^(2) multiplied by kappa in Eq. (10)? - Are alpha, mu, beta, kappa, omega constrained to be positive? Otherwise the minus and plus signs in Eqs. (7) and (10) are arbitrary, since some of these parameters could be negative and invert the sign. - The interpretation of excitatory and inhibitory horizontal connections is a bit odd. The same kernel (W) is applied twice (but on different hidden states). Once the result is subtracted and once it's added (but see the question above whether this interpretation even makes sense). Can the authors explain the logic behind this approach? Wouldn't it be much cleaner and make more sense to learn both an excitatory and an inhibitory kernel and enforce positive and negative weights, respectively? - The claim that the non-linear horizontal interactions are necessary does not appear to be supported by the experimental results: the nonlinear lesion performs only marginally worse than the full model. - I do not understand what insights the eigenconnectivity analysis provides. It shows a different model (trained on BSDS500 rather than Pathfinder) for which we have no clue how it performs on the task and the authors do not comment on what's the interpretation of the model trained on Pathfinder not showing these same patterns. 
Also, it's not clear to me where the authors see the "association field, with collinear excitation and orthogonal suppression." For that, we would have to know the preferred orientation of a feature and then look at its incoming horizontal weights. If that is what Fig. 4a shows, it needs to be explained better.
- I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that?
ICLR_2023_3918
ICLR_2023
- The evaluation results reported in Table 1 are based on only three trials for each case. While this is fine, statistically this is not significant, and thus it does not make sense to report the deviations. That is why in many cases the deviation is 0. For this reason, statements such as “our performance is at least two standard deviation better than the next best baseline” do not make sense. - In the reported ablation studies in Table 2, for CUB and SOP datasets, the complete loss function performed even worse than those with some terms missing. That does not appear to make sense. Why?
- In the reported ablation studies in Table 2, for CUB and SOP datasets, the complete loss function performed even worse than those with some terms missing. That does not appear to make sense. Why?
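To illustrate the reviewer's first point about three trials: with n = 3, the sample standard deviation is itself an extremely noisy statistic, so the reported deviations (including deviations of exactly 0) carry little information. The simulation below is a generic illustration with made-up numbers, not tied to the paper's results.

```python
# With only 3 trials, the sample standard deviation is itself highly variable.
# Simulate many experiments that each run 3 trials of a method whose true accuracy
# is 80.0 with true std 0.5, and look at the spread of the reported std.
import numpy as np

rng = np.random.default_rng(0)
reported_std = np.array([rng.normal(80.0, 0.5, size=3).std(ddof=1)
                         for _ in range(10_000)])
print(f"true std: 0.50, reported std ranges from "
      f"{reported_std.min():.2f} to {reported_std.max():.2f} "
      f"(5th-95th pct: {np.percentile(reported_std, 5):.2f}-"
      f"{np.percentile(reported_std, 95):.2f})")
```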
4kuLaebvKx
EMNLP_2023
- The chained impact of the image captioning and multilingual understanding models in the proposed pipeline: if the image captioning model gives worse results, the final results could be worse. So the basic performance of the image captioning model and the multilingual language model depends on the engineering choice of models when the pipeline is applied zero-shot. - This pipeline-style method, which includes two models, does not give better average results on either XVNLI or MaRVL. The baseline models in the experiments are not well introduced.
- This pipeline-style method, which includes two models, does not give better average results on either XVNLI or MaRVL. The baseline models in the experiments are not well introduced.
NIPS_2022_532
NIPS_2022
• It seems that a policy is learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution compared to just using ODA. • In order to apply imitation learning, it is necessary to obtain labeled data by optimally solving various problems. There are no experiments on whether there are any difficulties in obtaining the corresponding data, or on how the performance changes depending on the size of the labeled data.
• It seems that a policy is learned to imitate ODA, one of the methods for solving the MOIP problem, but the paper does not clearly show how the presented method improves the performance and computation speed of the solution compared to just using ODA.
NIPS_2019_629
NIPS_2019
- In my opinion, the setting and the algorithm lack a bit of originality and might seem like an incremental combination of methods for graph labeling prediction and online learning in a switching environment. Yet, the algorithm for graph labelings is efficient, new, and seems different from the existing ones. - Lower bounds and optimality of the results are not discussed. In the conclusion section, it is asked whether the loglog(T) can be removed. Does this mean that up to this term the bounds are tight? I would like more discussion of this. More comparison with existing upper and lower bounds without switches could be made, for instance. In addition, it could be interesting to plot the upper bound in the experiments, to see how tight the analysis is. Other comments: - Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance, by using ensemble methods, as done in the experiments. Some measure of robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error. - When reading the introduction, I thought that the labels were adversarially chosen by an adaptive adversary. It seems that the analysis is only valid when all labels are chosen in advance by an oblivious adversary. Am I right? This should maybe be clarified. - This paper deals with many graph notions, and it is a bit hard to get into, but the writing is generally good, though more details could sometimes be provided (definition of the resistance distance, more explanations on Alg. 1 with brief sentences defining A_t, Y_t, ...). - How was alpha tuned in the experiments (as 1/(t+1) or optimally)? - Some possible extensions could be discussed (are they straightforward?): directed or weighted graphs, regression problems (e.g., to predict the number of bikes in your experiment)... Typo: l 268: the sum should start at 1
- Only bounds in expectation are provided. Would it be possible to get high-probability bounds? For instance, by using ensemble methods, as done in the experiments. Some measure of robustness could be added to the experiments (such as error bars or standard deviation) in addition to the mean error.
NIPS_2021_732
NIPS_2021
Weakness: The notation is sometimes confusing. (1) In Eq. 3 the function d is overloaded. The function outputs the task embedding in the first part. It outputs the RNN hidden representation in the second part. (2) In lines 7-8 of Algorithm 1, e_j seems to be a newly introduced variable. Please also provide hints for the reader to link e_j with p(y|x_j), e.g., refer the reader to Eq. 1. Questions: Is there any specific reason that an RNN is selected as the task embedding network? How long is the training process? How large is the final model? Some comparison in model size and training efficiency may be included. Minor Suggestions: Eq. 7: Is the first stage done on the support set, the query set, or both? Line 274: “w/ XY”, Table 2: “w/ X&Y”. Please keep these consistent. I believe the right side of Fig 1 is actually zooming in on the part [e_l^{task}] —> [Bottleneck Adapter]. Please consider highlighting this information to make the figure more accessible. The authors provide a detailed discussion of Grad2Task's limitations in Sec 7. I listed some of my concerns and suggestions in the main review.
Questions: Is there any specific reason that an RNN is selected as the task embedding network? How long is the training process? How large is the final model? Some comparison in model size and training efficiency may be included.
NIPS_2018_831
NIPS_2018
- I wasn't fully clear about the repeat/remember example in Section 4. I understand that the unrolled reverse computation of a TBPTT of an exactly reversible model for the repeat task is equivalent to the forward pass of a regular model for the remember task, but aren't they still quite different in major ways? First, are they really equivalent in terms of their gradient updates? In the end, they draw two different computation graphs? Second, at *test time*, the former is not auto-regressive (i.e., it uses the given input sequence) whereas the latter is. Maybe I'm missing something simple, but a more careful explanation of the example would be helpful. Also a minor issue: why are an NF-RevGRU and an LSTM compared in Appendix A? Shouldn't an NF-RevLSTM be used for a fairer comparison? - I'm not familiar with the algorithm of Maclaurin et al., so it's difficult to get much out of the description of Algorithm 1 other than its mechanics. A review/justification of the algorithm may make the paper more self-contained. - As the paper acknowledges, the reversible version has a much higher computational cost during training (2-3 times slower). Given how cheap memory is, it remains to be seen how actually practical this approach is. OTHER COMMENTS - It'd still be useful to include the perplexity/BLEU scores of a NF-Rev{GRU, LSTM} just to verify that the gating mechanism is indeed necessary. - More details on using attention would be useful, perhaps as an extra appendix.
- More details on using attention would be useful, perhaps as an extra appendix.
yIv4SLzO3u
ICLR_2024
- Lack of comparison with a highly relevant method. [1] also proposes to utilize the previous knowledge with ‘inter-task ensemble’, while enhancing the current task’s performance with ‘intra-task’ ensemble. Yet, the authors didn’t include the method comparison or performance comparison.
- Novelty is limited. From my perspective, the submission simply applies the existing weight averaging and the bounded update to class-incremental learning problems.
- No theoretical justification or interpretation. Is there any theoretical guarantee of the extent to which inter-task weight averaging or the bounded update counters catastrophic forgetting?
- Although Table 3 shows that the bounded model update mitigates forgetting, its incorporation doesn’t seem to have enough motivation from the methodological perspective. The inter-task weight average is designed to incorporate both old and new knowledge, which is by design enough to tackle forgetting.
References:
1. Miao, Z., Wang, Z., Chen, W., & Qiu, Q. (2021, October). Continual learning with filter atom swapping. In International Conference on Learning Representations.
- Lack of comparison with a highly relevant method. [1] also proposes to utilize the previous knowledge with ‘inter-task ensemble’, while enhancing the current task’s performance with ‘intra-task’ ensemble. Yet, the authors didn’t include the method comparison or performance comparison.
ACL_2017_779_review
ACL_2017
However, there are many points that need to be addressed before this paper is ready for publication.
1) Crucial information is missing: Can you flesh out more clearly how training and decoding happen in your training framework? I found that the equations do not completely describe the approach. It might be useful to use a couple of examples to make your approach clearer. Also, how is the Monte Carlo sampling done?
2) Organization: The paper is not very well organized. For example, results are broken into several subsections, while they’d be better presented together. The organization of the tables is very confusing. Table 7 is referred to before Table 6. This made it difficult to read the results.
3) Inconclusive results: After reading the results section, it’s difficult to draw conclusions when, as the authors point out in their comparisons, this can be explained by the total size of the corpus involved in their methods (line 621).
4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results to them is a lot of space.
- General Discussion: Other:
578: “We observe that word-level models tend to have lower valid loss compared with sentence-level methods….” Is it valid to compare the loss from two different loss functions?
Sec. 3.2: the notations are not clear. What does script(Y) mean? How do we get p(y|x)? This is never explained.
Eq. 7 deserves some explanation, or is better removed.
320: What approach did you use? You should talk about that here.
392: Do you mean 2016?
Nitty-gritty:
742: import => important
772: inline citation style
778: can significantly outperform
275: Assumption 2 needs to be rewritten … a target sentence y from x should be close to that from its counterpart z.
4) Not so useful information: While I appreciate the fleshing out of the assumptions, I find that dedicating a whole section of the paper plus experimental results to them is a lot of space.
OvoRkDRLVr
ICLR_2024
1. The paper proposes a multimodal framework built atop a frozen Large Language Model (LLM) aimed at seamlessly integrating and managing various modalities. However, this approach seems to be merely an extension of the existing InstructBLIP. 2. Additionally, the concept of extending to multiple modalities, such as the integration of audio and 3D modalities, has already been proposed in prior works like PandaGPT. Therefore, the paper appears to lack sufficient novelty in both concept and methodology. 3. In Table 1, there is a noticeable drop in performance for X-InstructBLIP. Could you please clarify the reason behind this? If this drop is due to competition among different modalities, do you propose any solutions to mitigate this issue? 4. The promised dataset has not yet been made publicly available, so a cautious approach should be taken regarding this contribution until the dataset is openly accessible.
4. The promised dataset has not yet been made publicly available, so a cautious approach should be taken regarding this contribution until the dataset is openly accessible.
NIPS_2017_143
NIPS_2017
For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adversarial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example, but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on the majority) kind of makes sense. But I imagine that such losses have already been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "perhaps even in most [...] practical scenarios" predicting accurately on the majority is most important. I disagree: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car significantly overestimating the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game-theoretic, as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet.
REMARKS:
What's the "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x, y.
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
4NhMhElWqP
ICLR_2024
- The main weakness of this paper is that it overclaims and underdelivers. In its current state, the study is not strong enough to claim the title of a "foundational model".
- The authors mention that they use datasets from diverse domains. However, out of the 12 datasets studied, 6 come from a single domain. The distribution of sampling frequencies of these datasets is also not diverse, with 6 hourly datasets and a limited representation of other frequencies (and some popular frequencies completely missing).
- Another aspect that could have justified the term "foundational" is a diversity of tasks. However, the paper mostly focuses on long-term forecasting tasks with limited discussion of other tasks. Importantly, the practically relevant task of short-term forecasting (e.g., the Monash time series forecasting archive) gets very little attention.
- The claim _Most existing forecasting models were designed to process regularly sampled data of fixed length. We argue that this restriction is the central reason for poor generalisation in time series forecasting_ has not been justified convincingly.
- The visualizations are poorly done and confusing for a serious academic paper. Please consider using cleaner figures. It is unclear how exactly inference on a new dataset is performed. It would improve the clarity of the paper if a specific paragraph on inference were added. Please see specific questions in the questions section.
- The results on the long-term forecasting benchmarks, while reasonable, are not impressive for a "foundational model" that has been trained on a larger corpus of datasets.
- The very-long-term forecasting task is of limited practical significance. Despite that, the discussion requires improvement, e.g., by conducting experiments on more datasets and training the baseline models with the "correct" forecast horizon to put the results in a proper context.
- The zero-shot analysis (Sec. 4.3) has only been conducted on two datasets. Moreover, since prior works such as PatchTST and NHITS do not claim to be foundational models, a proper comparison would be with baselines trained specifically on these held-out datasets. DAM would most likely be worse in that case, but it would be a better gauge of zero-shot performance.
- The very-long-term forecasting task is of limited practical significance. Despite that, the discussion requires improvement, e.g., by conducting experiments on more datasets and training the baseline models with the "correct" forecast horizon to put the results in a proper context.
NIPS_2019_1377
NIPS_2019
- The proof works only under the assumption that the corresponding RNN is contractive, i.e. has no diverging directions in its eigenspace. As the authors point out (line #127), for expansive RNN there will usually be no corresponding URNN. While this is true, I think it still imposes a strong limitation a priori on the classes of problems that could be computed by an URNN. For instance chaotic attractors with at least one diverging eigendirection are ruled out to begin with. I think this needs further discussion. For instance, could URNN/ contractive RNN still *efficiently* solve some of the classical long-term RNN benchmarks, like the multiplication problem? Minor stuff: - Statement on line 134: Only true for standard sigmoid [1+exp(-x)]^-1, depends on max. slope - Theorem 4.1: Would be useful to elaborate a bit more in the main text why this holds (intuitively, since the RNN unlike the URNN will converge to the nearest FP). - line 199: The difference is not fundamental but only for the specific class of smooth (sigmoid) and non-smooth (ReLU) activation functions considered I think? Moreover: Is smoothness the crucial difference at all, or rather the fact that sigmoid is truly contractive while ReLU is just non-expansive? - line 223-245: Are URNN at all practical given the costly requirement to enforce the unitary matrix after each iteration?
- Statement on line 134: Only true for standard sigmoid [1+exp(-x)]^-1, depends on max. slope - Theorem 4.1: Would be useful to elaborate a bit more in the main text why this holds (intuitively, since the RNN unlike the URNN will converge to the nearest FP).
NIPS_2021_2304
NIPS_2021
There are four limitations:
1. In this experiment, single-dataset training and single-dataset testing cannot verify the generalization ability of models; experiments should be conducted on large-scale datasets.
2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application systems.
3. I hope to see that you can compare your model with ResNet-IBN / ResNet of FastReID, which is practical work in the person ReID task.
4. I think the authors only use the transformer to achieve the local matching; therefore, the contribution is limited.
2. The efficiency of such pairwise matching is very low, making it difficult to be used in practical application systems.
ACL_2017_588_review
ACL_2017
and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems. - Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task. 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. 3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise. 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities. - Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested? 2) Have you tried building a classifier that just takes h_i^e as inputs? I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary. This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
W6fIyuK8Lk
ICLR_2025
1. The first paragraph of the Introduction is entirely devoted to a general introduction of DNNs, without any mention of drift. Given that the paper's core focus is on detecting drift types and drift magnitude, I believe the DNN-related introduction is not central to this paper, and this entire paragraph provides little valuable information to readers. 2. The paper is poorly written. A paper should highlight its key points and quickly convey its innovations to readers. In the introduction, three paragraphs are spent on related work, while only one paragraph describes the paper's contribution, and even this paragraph fails to intuitively explain why the proposed method would work. 3. The first paragraph of Preliminaries is entirely about concept drift. So I assume the paper aims to address concept drift issues. If this is the case, there is a serious misuse of terminology. In concept drift, drift types include abrupt drift, gradual drift, recurrent drift, etc. While this paper uses the term "drift type" 86 times, it never explains or strictly defines what drift type means. According to Table 1 in the paper, the authors treat "gaussian noise, poisson noise, salt noise, snow fog rain, etc" as drift types. I find this inappropriate as these are more like different types of concepts. In summary, drift type is a specialized term in concept drift [1]. 4. Line 163 states: "Pi,j denote the prediction probability distribution of the images belonging to the class j predicted as class i", but according to equation 1, I believe p_ij is a scalar, not a distribution. This appears to be an expression issue. I believe the paper consistently misuses the term "distribution". 5. Line 183, "per each drift type" should be changed to "for each effect type". 6. Lines 117-119 state: "To understand the impact of data drifts on image classification neural networks, let us consider the impact of Gaussian noise on a classification network trained on the MNIST handwritten digit image dataset, detailed in Section 4, under the effect of Gaussian noise." This sentence is highly redundant. 7. The experimental evaluation is inadequate as it only compares against a single baseline. For a paper proposing a new framework, comparing with multiple state-of-the-art methods is essential to demonstrate the effectiveness and advantages of the proposed approach. The limited comparison significantly weakens the paper's experimental validation. Overall, I find the paper poorly written, with issues including misuse of terminology, redundant expressions, unclear logical flow, and lack of focus. Moreover, the paper seems to compare against only one baseline, which makes the experimental results unconvincing to me. [1] Agrahari, S. and Singh, A.K., 2022. Concept drift detection in data stream mining: A literature review. Journal of King Saud University-Computer and Information Sciences, 34(10), pp.9523-9540.
1. The first paragraph of the Introduction is entirely devoted to a general introduction of DNNs, without any mention of drift. Given that the paper's core focus is on detecting drift types and drift magnitude, I believe the DNN-related introduction is not central to this paper, and this entire paragraph provides little valuable information to readers.
ICLR_2022_3205
ICLR_2022
Major
I'm quite torn about this submission in the sense that, while it is a nice paper, it lacks a main theoretical or empirical contribution. The experiments conducted in smaller environments are illustrative but are, by themselves, not compelling enough to demonstrate how the concept of cross values aids in accelerating Bayes-adaptive RL in more complex environments. Even one compelling experiment that shows how cross values improve upon the approaches of, for instance, (Zintgraf et al., 2020; Zintgraf et al., 2021) would really ground the paper around a culminating empirical result and take it over the edge. Alternatively, a nice set of theoretical results could also accomplish this.
While the definition of the value of current information in Equation 9 looks reasonable, I can't help but wonder if it is uniquely suited to helping accelerate BAMDP learning. Certainly, the literature on PAC-BAMDP methods offers other forms of reward bonuses that do yield provably efficient learning [12,19]. For instance, rather than integrating out randomness in both e and e', why not take e' to be the environment that has highest likelihood under b_t? Or, alternatively, rather than looking at the expected value across all environments e, perhaps it makes sense to look at the worst-case value (minimizing over e), which might have implications for safe or risk-sensitive reinforcement learning? The higher-level point here is that, without a corroborating theory, these definitions, while intuitive, seem heuristic, and it is unclear if these are the "right" quantities to be examining and learning. Why is the augmented reward structure proposed here better than the Bayesian exploration bonus of [12] or the variance-based bonus of [19]? Notably, both of those approaches come with PAC-BAMDP/PAC-MDP guarantees, while the same cannot be said (or at least has yet to be proved) of learning under the PCR.
The difference in value functions shown in Equation 14 seems to align with the definition of the value of information (VOI) in the context of optimal learning [9,10,15,18]. In the context of multi-armed bandits, such knowledge-gradient algorithms based on VOI will only take an action if there is an immediate improvement in posterior reward. There is an alternative to Thompson sampling [17] which is explicitly designed to address the shortcoming mentioned on page 4: "a posterior sampling agent cannot learn how to perform actions that are not optimal in any known environment." The information-directed sampling (IDS) algorithm, while originally limited to bandits [11,13], attempts to strike the appropriate balance between information gain and regret minimization. Section 4.3.3 of [17] provides a few examples of how an exploration criterion based on VOI is still insufficient for addressing exploration; briefly, the issue boils down to the fact that information revealed at a given moment in time need not result in an immediate performance improvement to be useful, so long as it contributes to performance in the long term. Highlighting the map example mentioned at the top of page 4, consider a nested version of the problem where an agent must navigate to two separate maps in sequence (for context, say one map to get out of some wrong building and then a second to navigate efficiently in the correct building). Acquiring information by navigating to the first map doesn't result in higher expected return, but is clearly a necessary step towards achieving higher returns and ultimately solving the task.
Do the authors have any thoughts on this connection and whether or not a similar impediment arises when using PCR rewards? If the same story holds true, it would again call into question whether or not PCR rewards are the "correct" quantity for calibrating information gain.
Minor
On the point of developing supporting theory for the paper, I can't help but notice that Equation 13 has the exact form of a potential-based shaping function [14], except in the context of a BAMDP rather than the traditional MDP. Have the authors contemplated this connection at any depth? To the best of my knowledge, I don't know of any papers that take the years of work on potential-based reward shaping in MDPs and consider how to invoke those ideas to accelerate BAMDP learning. An updated version of this work could build on that connection as part of a more concrete theoretical contribution.
Clarity
Strengths
The paper is both well-written and well-organized. The authors do a fantastic job of navigating readers through BAMDPs, cross values, predictive reward cashing, limitations, experiments, and future work.
Weaknesses
Major
I would have liked to see explicit algorithms with pseudocode for PCR Q-learning and PCR-TD, just to confirm my intuitions about them beyond the exposition of Sections 5-8.
Minor
There is some inconsistency in the document between the use of PCR vs. PRC. I believe the authors intended to use PCR and instances of the latter are typos.
Originality
Strengths
The authors do a good job of contextualizing the potential of their approach in augmenting and improving recent work on Bayes-adaptive deep reinforcement-learning algorithms.
Weaknesses
Major
The literature review on more classic work on BAMDPs and Bayesian reinforcement learning seems lacking. For a paper that leans so heavily on these topics and offers a foundational contribution, I would expect a more nuanced discussion of the classic literature [2, 3, 8, 20, 5, 4, 6, 7, 1, 12, 16, 19] without deferring readers to the 2015 survey paper on Bayesian reinforcement learning.
Minor
Significance
Strengths
The results do a good job of highlighting the potential utility of cross values in Bayesian reinforcement learning.
Weaknesses
Major
While the conceptual idea behind cross values is interesting, I suspect there is a nontrivial amount of work needed to realize its benefits (if any) in the context of deep reinforcement learning. Without any confirmation of that success yet, it is difficult to see this paper as having high impact. As mentioned above, an alternative contribution would provide supporting theory for cross values and PCR rewards, offering the community a new perspective on how to handle approximate Bayesian reinforcement learning.
Minor
References
Asmuth, John, Lihong Li, Michael L. Littman, Ali Nouri, and David Wingate. "A Bayesian sampling approach to exploration in reinforcement learning." In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pp. 19-26. 2009. Bellman, Richard, and Robert Kalaba. "On adaptive control processes." IRE Transactions on Automatic Control 4, no. 2 (1959): 1-9. Dayan, Peter, and Terrence J. Sejnowski. "Exploration bonuses and dual control." Machine Learning 25, no. 1 (1996): 5-22. Dearden, Richard, Nir Friedman, and Stuart Russell. "Bayesian Q-learning." In Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence, pp. 761-768. 1998. Duff, Michael O.
"Monte-Carlo algorithms for the improvement of finite-state stochastic controllers: Application to Bayes-adaptive Markov decision processes." In International Workshop on Artificial Intelligence and Statistics, pp. 93-97. PMLR, 2001. Duff, Michael O. "Design for an optimal probe." In Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 131-138. 2003. Duff, Michael O., and Andrew G. Barto. "Local bandit approximation for optimal learning problems." In Proceedings of the 9th International Conference on Neural Information Processing Systems, pp. 1019-1025. 1996. Duff, Michael O'Gordon. Optimal Learning: Computational procedures for Bayes-adaptive Markov decision processes. University of Massachusetts Amherst, 2002. Frazier, Peter I., and Warren B. Powell. "Paradoxes in learning and the marginal value of information." Decision Analysis 7, no. 4 (2010): 378-403. Frazier, Peter I., Warren B. Powell, and Savas Dayanik. "A knowledge-gradient policy for sequential information collection." SIAM Journal on Control and Optimization 47, no. 5 (2008): 2410-2439. Kirschner, Johannes, and Andreas Krause. "Information directed sampling and bandits with heteroscedastic noise." In Conference On Learning Theory, pp. 358-384. PMLR, 2018. Kolter, J. Zico, and Andrew Y. Ng. "Near-Bayesian exploration in polynomial time." In Proceedings of the 26th annual international conference on machine learning, pp. 513-520. 2009. Lu, Xiuyuan, Benjamin Van Roy, Vikranth Dwaracherla, Morteza Ibrahimi, Ian Osband, and Zheng Wen. "Reinforcement Learning, Bit by Bit." arXiv preprint arXiv:2103.04047 (2021). Ng, Andrew Y., Daishi Harada, and Stuart Russell. "Policy invariance under reward transformations: Theory and application to reward shaping." In Icml, vol. 99, pp. 278-287. 1999. Powell, Warren B., and Ilya O. Ryzhov. Optimal learning. Vol. 841. John Wiley & Sons, 2012. Poupart, Pascal, Nikos Vlassis, Jesse Hoey, and Kevin Regan. "An analytic solution to discrete Bayesian reinforcement learning." In Proceedings of the 23rd international conference on Machine learning, pp. 697-704. 2006. Russo, Daniel, and Benjamin Van Roy. "Learning to optimize via information-directed sampling." Advances in Neural Information Processing Systems 27 (2014): 1583-1591. Ryzhov, Ilya O., Warren B. Powell, and Peter I. Frazier. "The knowledge gradient algorithm for a general class of online learning problems." Operations Research 60, no. 1 (2012): 180-195. Sorg, Jonathan, Satinder Singh, and Richard L. Lewis. "Variance-based rewards for approximate Bayesian reinforcement learning." In Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence, pp. 564-571. 2010. Strens, Malcolm. "A Bayesian framework for reinforcement learning." In ICML, vol. 2000, pp. 943-950. 2000.
4 (2010): 378-403. Frazier, Peter I., Warren B. Powell, and Savas Dayanik. "A knowledge-gradient policy for sequential information collection." SIAM Journal on Control and Optimization 47, no.
ICLR_2021_2506
ICLR_2021
Weakness
== Exposition ==
The exposition of the proposed method can be improved. For example:
- It’s unclear how the “Semantic Kernel Generation” is implemented. I can probably guess this step is essentially a 1 x 1 convolution, but it would be better to fill in the details.
- In the introduction, the paper mentions that the translated images often contain artifacts due to the nearest-neighbor upsampling. Yet, in the Feature Spatial Expansion step it also uses nearest-neighbor interpolation. This needs some more justification.
- For Figure 3, it’s unclear what the learned semantically adaptive kernel visualization means. At each upsampling layer, the kernel is only K x K.
- At the end of Section 2, I do not understand what is meant by “the semantics of the upsampled feature map can be stronger than the original one”.
- The proposed upsampling layer is called “semantically aware”. However, I do not see anything that’s related to the semantics other than the fact that the input is a semantic map. I would suggest that this should be called “content aware” instead.
== Technical novelty ==
- My main concern about the paper lies in its technical novelty. There have been multiple papers that proposed content-aware filters. As far as I know, the first one is [Xu et al. 2016]: Jia, Xu, et al. "Dynamic filter networks." Advances in Neural Information Processing Systems. 2016. The work most relevant to this paper is CARAFE [Wang et al. ICCV 2019], which can be viewed as a special case of the dynamic filter network for feature upsampling. By comparing Figure 2 in this paper and Figure 2 in the CARAFE paper [Wang et al. ICCV 2019], it seems to me that the two methods are *the same*. The only difference is the application to the layout-to-image task. The paper in the related work section suggests, “the settings of these tasks are significantly different from ours, making their methods cannot be used directly.” I respectfully disagree with this statement because both methods take in a feature tensor and produce an upsampled feature tensor. I believe that the application would be straightforward without any modification. Given the similarity to prior work, it would be great for the authors to (1) describe in detail the differences (if any) and (2) compare with prior work that also uses a spatially varying upsampling kernel.
Minor:
- CARAFE: Content-Aware ReAssembly of Features. The paper is in ICCV 2019, not CVPR 2019.
In sum, I think the idea of the spatially adaptive upsampling kernel is technically sound. I also like the extensive evaluation in this paper. However, I have concerns about the high degree of similarity with the prior method and the lack of comparison with CARAFE.
** After discussions **
I have read other reviewers' comments. Many of the reviewers share similar concerns regarding the technical novelty of this work. I don't find sufficient ground to recommend acceptance of this paper.
- CARAFE: Content-Aware ReAssembly of Features. The paper is in ICCV 2019, not CVPR 2019. In sum, I think the idea of the spatially adaptive upsampling kernel is technically sound. I also like the extensive evaluation in this paper. However, I have concerns about the high degree of similarity with the prior method and the lack of comparison with CARAFE. ** After discussions ** I have read other reviewers' comments. Many of the reviewers share similar concerns regarding the technical novelty of this work. I don't find sufficient ground to recommend acceptance of this paper.
ICLR_2023_2630
ICLR_2023
- The technical novelty and contributions are a bit limited. The overall idea of using a transformer to process time series data is not new, as also acknowledged by the authors. The masked prediction was also used in prior works, e.g., MAE (He et al., 2022). The main contribution, in this case, is the data pre-processing approach based on the bins. The continuous value embedding (CVE) was also from a prior work (Tipirneni & Reddy, 2022), as was the early fusion instead of late fusion (Tipirneni & Reddy, 2022; Zhang et al., 2022). It would be better to clearly state the key novelty compared to previous works, especially the contribution (or performance gain) from the data pre-processing scheme.
- It is unclear if masks are applied to all the bins, or only to one bin as shown in Fig. 1.
- It is unclear how the static data (age, gender, etc.) were encoded as input to the MLP. The time-series data were also not clearly presented.
- It is unclear what the "learned [MASK] embedding" means in the SSL pre-training stage of the proposed method.
- The proposed "masked event dropout scheme" was not clearly presented. Was this dropout applied to the ground truth or the prediction? If it was applied to the prediction or the training input data, will this be considered in the loss function?
- The proposed method was only evaluated on EHR data but is claimed to be a method designed for "time series data" in both the title and throughout the paper. I suggest either toning down the claim or providing justification on other time series data.
- The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar-scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs worse than most of the compared methods.
- Missing references to the two used EHR datasets at the beginning of Sec. 4.
- It is unclear what the "learned [MASK] embedding" means in the SSL pre-training stage of the proposed method.
Va4t6R8cGG
ICLR_2024
- This paper does not seem to be the first work on fully end-to-end spatio-temporal localization, since TubeR has previously proposed to directly detect an action tubelet in a video by simultaneously performing action localization and recognition. This weakens the novelty of this paper. The authors describe the differences from TubeR, but the most significant difference is that the proposed method is much less complex.
- The symbols in this paper are inconsistent, e.g., b.
- The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs.
- The authors need to perform ablation experiments to compare the proposed method with other methods (e.g., TubeR) in terms of the number of learnable parameters and GFLOPs.
NIPS_2018_857
NIPS_2018
Weakness:
- Long-range contexts may be helpful for object detection, as shown in [a, b]. For example, the sofa in Figure 1 may help detect the monitor. But in SNIPER, images are cropped into chips, which means the detector cannot benefit from long-range contexts. Is there any idea to address this?
- The writing should be improved. Some points in the paper are unclear to me.
1. In line 121, the authors said partially overlapped ground-truth instances are cropped. But is there any threshold for the partial overlap? In the lower-left figure on the right side of Figure 1, there is a sofa whose bounding box is partially overlapped with the chip, but it is not shown in a red rectangle.
2. In line 165, the authors claimed that a large object may generate a valid small proposal after being cropped. This is a follow-up question to the previous one. In the upper-left figure on the right side of Figure 1, I would imagine the corner of the sofa would make some very small proposals valid and labelled as sofa. Does that disturb the training process, since there may be too little information to classify the little proposal as sofa?
3. Are the negative chips fixed after being generated from the lightweight RPN? Or will they be updated while the RPN is trained in the later stage? Would this (alternating between generating negative chips and training the network) help the performance?
4. What are the r^i_{min}'s, r^i_{max}'s and n in line 112?
5. In the last line of Table 3, the AP50 is claimed to be 48.5. Is it a typo?
[a] Wang et al. Non-local neural networks. In CVPR 2018.
[b] Hu et al. Relation Networks for Object Detection. In CVPR 2018.
----- The authors' response addressed most of my questions. After reading the response, I'd like to maintain my overall score. I think the proposed method is useful in object detection by enabling BN and improving the speed, and I vote for acceptance. The writing issues should be fixed in the later versions.
- The writing should be improved. Some points in the paper are unclear to me.
ICLR_2022_2421
ICLR_2022
Weakness:
1. Some typos, such as “TRAFE-OFFS” in the title of Section 4.1.
2. The 24 different structures generated by random permutation in Section 4.1 should be explained in more detail.
3. The penultimate sentence of Section 3.3 states that "iterative greedy search can avoid the suboptimality of the resulting scaling strategy on a particular model", which is not a rigorous statement because the results of the iterative greedy search are also suboptimal solutions.
4. The conclusion of "Cost breakdown can indicate the transferability effectiveness" in Figure 7 is not sufficient. We cannot extend the conclusion obtained from a few specific experiments to any different hardware devices or different architectures.
5. Why not use the same cost for different devices instead of FLOPs, latency, and 1/FPS for different hardware?
6. The result comparison of "Iteratively greedy Search" versus "random search" on the model structure should be supplemented.
6. The result comparison of "Iteratively greedy Search" versus "random search" on the model structure should be supplemented.
NIPS_2016_370
NIPS_2016
, and while the scores above are my best attempt to turn these strengths and weaknesses into numerical judgments, I think it's important to consider the strengths and weaknesses holistically when making a judgment. Below are my impressions. First, the strengths: 1. The idea to perform improper unsupervised learning is an interesting one, which allows one to circumvent certain NP hardness results in the unsupervised learning setting. 2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts). 3. The paper is locally well-written and the technical presentation flows easily: I can understand the statement of each theorem without having to wade through too much notation, and the authors do a good job of conveying the gist of the proofs. Second, the weaknesses: 1. The biggest weakness is some issues with the framework itself. In particular: 1a. It is not obvious that "k-bit representation" is the right notion for unsupervised learning. Presumably the idea is that if one can compress to a small number of bits, one will obtain good generalization performance from a small number of labeled samples. But in reality, this will also depend on the chosen model class used to fit this hypothetical supervised data: perhaps there is one representation which admits a linear model, while another requires a quadratic model or a kernel. It seems more desirable to have a linear model on 10,000 bits than a quadratic model on 1,000 bits. This is an issue that I felt was brushed under the rug in an otherwise clear paper. 1b. It also seems a bit clunky to work with bits (in fact, the paper basically immediately passes from bits to real numbers). 1c. Somewhat related to 1a, it wasn't obvious to me if the representations implicit in the main results would actually lead to good performance if the resulting features were then used in supervised learning. I generally felt that it would be better if the framework was (a) more tied to eventual supervised learning performance, and (b) a bit simpler to work with. 2. I thought that the introduction was a bit grandiose in comparing itself to PAC learning. 3. The main point (that improper unsupervised learning can overcome NP hardness barriers) didn't come through until I had read the paper in detail. When deciding what papers to accept into a conference, there are inevitably cases where one must decide between conservatively accepting only papers that are clearly solid, and taking risks to allow more original but higher-variance papers to reach a wide audience. I generally favor the latter approach, I think this paper is a case in point: it's hard for me to tell whether the ideas in this paper will ultimately lead to a fruitful line of work, or turn out to be flawed in the end. So the variance is high, but the expected value is high as well, and I generally get the sense from reading the paper that the authors know what they are doing. So I think it should be accepted. Some questions for the authors (please answer in rebuttal): -Do the representations implicit in Theorems 3.2 and Theorem 4.1 yield features that would be appropriate for subsequent supervised learning of a linear model (i.e., would linear combinations of the features yield a reasonable model family)? -How easy is it to handle e.g. manifolds defined by cubic constraints with the spectral decoding approach?
2. The results, while mostly based on "standard" techniques, are not obvious a priori, and require a fair degree of technical competency (i.e., the techniques are really only "standard" to a small group of experts).
NIPS_2021_780
NIPS_2021
5 Limitations
a. The authors briefly talk about the limitations of the approach in Section 5. The main limitation they draw attention to is the challenge of moving closer to the local maxima of the reward function in the latter stages of optimization. To resolve this, they discuss combining their method with local optimization techniques; however, I wonder whether the temperature approach they discuss in the earlier part of their paper (combined with some annealing scheme) could also be used here?
b. One limitation the authors do not mention is how the method scales in terms of the size of the state and action space. The loss function requires, for every current state, the sum over all previous states and actions that may have led to the current state (see term 1 of Eq. 9). I assume this may become intractable for very large state-action spaces (and the flows one is trying to model become very small). Can one approximate the sum using a subset? Also, what about continuous state/action spaces?
6 Societal impact
The authors state that they foresee no negative social impacts of their work (line 379). While I do not believe this work has the potential for significant negative social impact (and I'm not quite sure if/how I'm meant to review this aspect of their work), the authors could always mention the social impact of increased automation, or the risks from the dual use of their method, etc.
6 Societal impact The authors state that they foresee no negative social impacts of their work (line 379). While I do not believe this work has the potential for significant negative social impact (and I'm not quite sure if/how I'm meant to review this aspect of their work), the authors could always mention the social impact of increased automation, or the risks from the dual use of their method, etc.
NIPS_2021_2050
NIPS_2021
1. The transformer has been adopted for many NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification to the transformer, i.e., cross-layer, it does not bring much insight from a machine learning perspective. Besides, in the ablation study (Tables 4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered a significant improvement. It seems that the main improvements over other methods come from using a naïve transformer rather than from adding the proposed modification.
2. This work only focuses on a niche task, which is more suitable for a CV conference like CVPR than for a machine learning conference. The audience should be more interested in techniques that can work for general tasks, like general image retrieval.
3. The proposed method uses AdamW with a cosine lr schedule for training, while the compared methods only use Adam with a fixed lr. Directly comparing with the numbers in their papers is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released.
3. The proposed method uses AdamW with a cosine lr schedule for training, while the compared methods only use Adam with a fixed lr. Directly comparing with the numbers in their papers is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released.
ICLR_2022_2470
ICLR_2022
Weakness: The idea is a bit simple -- which in and of itself is not a true weakness. ResNet as an idea is not complicated at all. I find it disheartening that the paper did not really tell readers how to construct a white paper in Section 3 (if I simply missed it, please let me know). However, the code in the supplementary materials helped. The white paper is constructed as follows:
white_paper_gen = torch.ones(args.train_batch, 3, 32, 32)
It offers another way of constructing a white paper, which is
white_paper_gen = 255 * np.ones((32, 32, 3), dtype=np.uint8)
white_paper_gen = Image.fromarray(white_paper_gen)
white_paper_gen = transforms.ToTensor()(white_paper_gen)
white_paper_gen = transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))(white_paper_gen)
The code states that either version works similarly and does not affect the performance. I wonder if there are other white papers as well, for example np.zeros((32, 32, 3)) -- most CNN models add explicit bias terms in their CNN kernels. Would a different white paper reveal a different bias in the model? I don't think the paper answers this question or discusses it.
2. Section 4 "Is white paper training harmful to the model?" -- the evidence does not seem to support the claim. The evidence is: 1) only the projection head (CNN layers) is affected but not the classification head (FCN layer); 2) the parameter changes are small. Neither of these constitutes direct support that the training is not "harmful" to the model. This point can simply be illustrated by the experimental results.
3. Sections 5.1 and 5.2 mainly build the narrative that WPA improves the test performance (generalization performance), but they are indirect evidence that WPA does in fact alleviate shortcut learning. Only Section 5.3 and Table 6 directly show whether WPA does what it's designed to do. A suggestion is to discuss the result of Section 5.3 more.
4. It would be interesting to try to explain why WPA works -- with np.ones input, what is the model predicting? Would any input serve as white paper? Figure 2 seems to suggest that Gaussian noise input does not work as well as WPA. Why? The authors spend a lot of time showing WPA improves the test performance of the original model, but fail to provide useful insights on how WPA works -- this is particularly important because it can spark future research directions.
4. It would be interesting to try to explain why WPA works -- with np.ones input, what is the model predicting? Would any input serve as white paper? Figure 2 seems to suggest that Gaussian noise input does not work as well as WPA. Why? The authors spend a lot of time showing WPA improves the test performance of the original model, but fail to provide useful insights on how WPA works -- this is particularly important because it can spark future research directions.
NIPS_2022_1564
NIPS_2022
1. The main part can be more concise (especially the introduction) and should include empirical results.
2. Given the newly introduced hyper-parameters, it is still not clear whether this newly proposed method is empirically useful. How should one choose hyper-parameters in a more practical training setting?
3. The empirical evaluations cannot well support the theoretical analysis. As the authors claim to run experiments with 24 A100 GPUs, all methods should be compared on a relatively large-scale training task. Only small linear regression experiment results are reported, where communication is not really an issue.
The paper discusses a new variant of a technique in distributed training. As far as I’m concerned, there is no serious issue or limitation that would impact society.
1. The main part can be more concise (especially the introduction) and should include empirical results.
NIPS_2017_217
NIPS_2017
Weakness:
- The paper is rather incremental with respect to [31]. The authors adapt the existing architecture for the multi-person case producing identity/tag heatmaps with the joint heatmaps.
- Some explanations are unclear and rather vague, especially the solution for the multi-scale case (end of Section 3) and the pose refinement used in Section 4.4 / Table 4. This is important as most of the improvement with respect to state-of-the-art methods seems to come from these 2 elements of the pipeline, as indicated in Table 4.
Comments: The state-of-the-art performance in multi-person pose estimation is a strong point. However, I find that there is too little novelty in the paper with respect to the stacked hourglass paper and that explanations are not always clear. What seem to be the key elements to outperform other competing methods, namely the scale-invariance aspect and the pose refinement stage, are not well explained.
- The paper is rather incremental with respect to [31]. The authors adapt the existing architecture for the multi-person case producing identity/tag heatmaps with the joint heatmaps.
NIPS_2016_192
NIPS_2016
Weakness: (e.g., why I am recommending poster, and not oral)
- Impact: This paper makes it easier to train models using learning to search, but it doesn't really advance the state of the art in terms of the kind of models we can build.
- Impact: This paper could be improved by explicitly showing the settings for the various knobs of this algorithm to mimic prior work: DAgger, SEARN, etc.; it would help the community by providing a single review of the various advances in this area.
- (Minor issue) What's up with Figure 3? "OAA" is never referenced in the body text. It looks like there's more content in the appendix that is missing here, or the caption is out of date.
- Impact: This paper could be improved by explicitly showing the settings for the various knobs of this algorithm to mimic prior work: DAgger, SEARN, etc.; it would help the community by providing a single review of the various advances in this area.
ARR_2022_121_review
ARR_2022
1. The writing needs to be improved. Structurally, there should be a "Related Work" section which would inform the reader of where prior research has been done, as well as what differentiates the current work from earlier work. A clear separation between the "Introduction" and "Related Work" sections would certainly improve the readability of the paper.
2. The paper does not compare the results with some of the earlier research work from 2020. While the authors have explained their reasons for not doing so in the author response along the lines of "Those systems are not state-of-the-art", they have compared the results to a number of earlier systems with worse performances (e.g., Taghipour and Ng (2016)).
Comments:
1. Please keep a separate "Related Work" section. Currently, the "Introduction" section of the paper reads as 2-3 paragraphs of introduction, followed by 3 bullet points of related work and again a lot of introduction. I would suggest that you shift those 3 bullet points ("Traditional AES", "Deep Neural AES" and "Pre-training AES") to the Related Work section.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
3. While the out-of-domain experiment is pre-trained on other prompts, it is still fine-tuned during training on the target prompt essays.
Typos:
1. In Table #2, Row 10, the reference for R2BERT is Yang et al. (2020), not Yang et al. (2019).
Missing References:
1. Panitan Muangkammuen and Fumiyo Fukumoto. "Multi-task Learning for Automated Essay Scoring with Sentiment Analysis". 2020. In Proceedings of the AACL-IJCNLP 2020 Student Research Workshop.
2. Sandeep Mathias, Rudra Murthy, Diptesh Kanojia, Abhijit Mishra, Pushpak Bhattacharyya. 2020. Happy Are Those Who Grade without Seeing: A Multi-Task Learning Approach to Grade Essays Using Gaze Behaviour. In Proceedings of the 2020 AACL-IJCNLP Main Conference.
2. Would the use of feature engineering help in improving the performance? Uto et al. (2020)'s system reaches a QWK of 0.801 by using a set of hand-crafted features. Perhaps using Uto et al. (2020)'s same feature set could also improve the results of this work.
ICLR_2023_650
ICLR_2023
1. One severe problem of this paper is that it misses several important related works/baselines to compare against [1,2,3,4], either in the discussion [1,2,3,4] or in experiments [1,2]. This paper aims to design a normalization layer that can be plugged into the network to avoid dimensional collapse of the representation (in intermediate layers). This has already been done by the batch whitening methods [1,2,3] (e.g., Decorrelated Batch Normalization (DBN), IterNorm, etc.). Batch whitening, which is a general extension of BN that further decorrelates the axes, can ensure the covariance matrix of the normalized output is the identity (IterNorm can obtain an approximate one). These normalization modules can surely satisfy the requirements this paper aims to meet. I noted that this paper cites the work of Hua et al., 2021, which uses Decorrelated Batch Normalization for self-supervised learning (with further revision using shuffling). This paper should note the existence of Decorrelated Batch Normalization. Indeed, the first work to use whitening for self-supervised learning is [4], which shows how the main motivations of whitening benefit self-supervised learning.
2. I have concerns about the connections and analyses, which are not rigorous to me. Firstly, this paper removes the AD^{-1} term in Eqn. 6, and claims that “In fact, the operation corresponds to the stop-gradient technique, which is widely used in contrastive learning methods (He et al., 2020; Grill et al., 2020). By throwing away some terms in the gradient, stop-gradient makes the training process asymmetric and thus avoids representation collapse with less computational overhead. It verifies the feasibility of our discarding operation”. I do not understand how the stop-gradient used in SSL can be connected to the removal of AD^{-1}; I expect this paper to provide a demonstration or further clarification. Secondly, it is not clear why LayerNorm is necessary. Besides, how can the layer normalization be replaced with an additional factor (1+s) to rescale H, as shown in the claim “For the convenience of analysis, we replace the layer normalization with an additional factor 1 + s to rescale H”? I think the assumption is too strong. In summary, the connection between the proposed ContraNorm and the uniformity loss requires: 1) removing AD^{-1} and 2) adding layer normalization; furthermore, the propositions supporting the connection require the assumption “layer normalization can be replaced with an additional factor (1+s) to rescale H”. I personally feel that the connection and analysis are somewhat farfetched.
Other minor points:
1) Figure 1 is too similar to Figure 1 of Hua et al., 2021; I felt it was like a copy at first glance, even though I noted some slight differences when I carefully compared Figure 1 of this paper to Figure 1 of Hua et al., 2021.
2) The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ; τ should be shown in a rigorous way, or this paper should mention it.
3) On page 6, the reference of Eq. (24)?
References:
[1] Decorrelated Batch Normalization, CVPR 2018
[2] Iterative Normalization: Beyond Standardization towards Efficient Whitening, CVPR 2019
[3] Whitening and Coloring transform for GANs. ICLR, 2019
[4] Whitening for Self-Supervised Representation Learning, ICML 2021
2) The derivation from Eqn. 3 to Eqn. 4 misses the temperature τ; it should be shown rigorously, or the paper should at least mention it.
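To make the batch-whitening argument above concrete, here is a minimal ZCA-whitening sketch (a simplification of what DBN/IterNorm [1,2] do, not their actual implementations): after whitening, the covariance of the output is approximately the identity, which rules out dimensional collapse by construction.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """X: (batch, dim). Returns whitened features with ~identity covariance."""
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    # ZCA transform: U diag(1/sqrt(lambda)) U^T
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

X = np.random.randn(256, 32) @ np.random.randn(32, 32)   # correlated features
print(np.round(np.cov(zca_whiten(X), rowvar=False)[:3, :3], 2))   # ~ identity block
```

IterNorm replaces the eigendecomposition with Newton iterations for the inverse square root of the covariance, which is cheaper and yields only approximate whitening.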
NIPS_2020_396
NIPS_2020
1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data with spatial audio, little insight is given about why we need to do self-supervised learning on this kind of data. In particular, 1) there are currently several large audio-video datasets such as HowTo100M and VIOLIN, and 2) there is not much 360 video data on YouTube in comparison to normal data. 2- For the experimental comparisons, the authors should at least report the performance with other self-supervised learning losses, for instance masking features, predicting the next video/audio feature, or reconstructing a feature. This would be very useful for understanding the importance of the introduced loss in comparison with previous ones. 3- How are the videos divided into 10s segments? 4- It would be interesting to see how this spatial alignment works, for example by aligning audio to the video and visualizing the corresponding visual region. 5- What's the impact of batch size on performance? A batch size of 28 seems small to cover enough positive and negative samples. In this case, wouldn't using a MoCo loss instead of InfoNCE help?
1- While the experimental results suggest that the proposed approach is valuable for self-supervised learning on 360 video data with spatial audio, little insight is given about why we need to do self-supervised learning on this kind of data. In particular,
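As a reference point for question 5, a minimal sketch of the InfoNCE objective over a batch of paired audio/video embeddings (a generic contrastive setup, not the paper's exact loss): every other item in the batch serves as a negative, which is why the effective number of negatives is tied to the batch size, whereas a MoCo-style queue of past embeddings decouples the two.

```python
import torch
import torch.nn.functional as F

def info_nce(video_emb, audio_emb, temperature=0.07):
    """video_emb, audio_emb: (B, D) embeddings of temporally aligned pairs."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.t() / temperature          # (B, B): diagonal entries are the positives
    targets = torch.arange(v.size(0), device=v.device)
    return F.cross_entropy(logits, targets)   # each row competes against B-1 in-batch negatives
```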
NIPS_2022_1598
NIPS_2022
Weakness: It is unclear whether the gain of BooT comes from 1. extra data, 2. a different architecture (pretrained GPT-2 vs. not), or 3. some inherent property of the sequence model as opposed to other world models that may only predict the observation and the reward. It is unclear from the paper whether bootstrapping is novel beyond supervised learning (e.g., in RL). There are quite a few additional limitations not mentioned in the paper (l349-350): 1. The two extra hyperparameters introduced, k and η, require fine-tuning, which depends on access to the environment or a good OPE method. 2. As mentioned in l37-39, for other tasks in general it is unclear whether the available dataset is sufficient to train a BooT, unless we try it, which will incur extra training time and cost, as mentioned in l349-350.
1. The two extra hyperparameters introduced, k and η, require fine-tuning, which depends on access to the environment or a good OPE method.
CoEuk8SNI1
EMNLP_2023
- It is very difficult to follow the motivation of this paper, and it looks like an incremental engineering paper. - The abstract looks a little vague. For example, “However, it is difficult to fully model interaction between utterances …” What is 'interaction between utterances' and why is it difficult to model? This information is not evident from the previous context. Additionally, the misalignment between the two views might seem obvious since most ERC models aggregate information using methods like residual connections. So, why the need for alignment? Isn't the goal to leverage the advantages of both features? Or does alignment help in achieving a balance between both features? - The authors used various techniques to enhance performance, including contrastive learning, external knowledge, and graph networks. However, these methods seem at odds with the limited experiments conducted. For example, the authors propose a new semi-parametric inference paradigm involving memorization to address the recognition problem of tail class samples. However, the term "semi-parametric" is not clearly defined, and there is a lack of experimental evidence to support the effectiveness of the proposed method in tackling the tail class samples problem. - The Related Work section lacks a review of self-supervised contrastive learning in ERC. - The most recent comparative method is still the preprint version available on ArXiv, which lacks convincing evidence. - Table 3 needs some significance tests to further verify the assumptions put forward in the paper. - In Section 5.3, the significant impact of subtle hyperparameter fluctuations on performance raises concerns about the method's robustness. The authors could consider designing an automated hyperparameter search mechanism or decoupling dependencies on these hyperparameters to address this. - There is a lack of error analysis. - The formatting of the reference list is disorganized and needs to be adjusted. - Writing errors are common throughout the paper.
- Very difficult to follow the motivation of this paper. And it looks like an incremental engineering paper.
NIPS_2016_321
NIPS_2016
#ERROR!
* The paper focuses on learning HMMs with non-parametric emission distributions, but it does not become clear how those emission distributions affect inference. Which of the common inference tasks in a discrete HMM (filtering, smoothing, marginal observation likelihood) can be computed exactly/approximately with an NP-SPEC-HMM?
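For context on the inference tasks listed in this question, the standard discrete-HMM recursions are cheap once per-state emission probabilities are available; below is a minimal sketch of the scaled forward pass, which yields the marginal observation likelihood (the normalized α is the filtering distribution, and smoothing additionally needs a backward pass). Whether the non-parametric emission model can supply these per-state probabilities exactly or only approximately is precisely what the review asks.

```python
import numpy as np

def forward_log_likelihood(pi, A, emission_probs):
    """
    pi: (K,) initial state distribution
    A: (K, K) transition matrix, A[i, j] = p(z_t = j | z_{t-1} = i)
    emission_probs: (T, K) with emission_probs[t, k] = p(x_t | z_t = k)
    Returns log p(x_1:T) via the scaled forward recursion.
    """
    T, K = emission_probs.shape
    alpha = pi * emission_probs[0]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, T):
        alpha = (alpha @ A) * emission_probs[t]
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()          # normalized alpha is the filtering distribution
    return log_lik
```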
ARR_2022_114_review
ARR_2022
By showing that there is an equivalent graph in the rank space on which message passing is equivalent to message passing in the original joint state and rank space, this work exposes the fact that these large structured prediction models with fully decomposable clique potentials (Chiu et al 2021 being an exception) are equivalent to a smaller structured prediction model (albeit with over-parameterized clique potentials). For example, looking at Figure 5 (c), the original HMM is equivalent to a smaller MRF with state size being the rank size (which is the reason why inference complexity does not depend on the original number of states at all after calculating the equivalent transition and emission matrices). One naturally wonders why not simply train a smaller HMM, and where does the performance gain of this paper come from in Table 3. As another example, looking at Figure 4 (a), the original PCFG is equivalent to a smaller PCFG (with fully decomposable potentials) with state size being the rank size. This smaller PCFG is over-parameterized though, e.g., its potential $H\in \mathcal{R}^{r \times r}$ is parameterized as $V U^T$ where $U,V\in \mathcal{R}^{r \times m}$ and $r < m$, instead of directly being parameterized as a learned matrix of $\mathcal{R}^{r \times r}$. That being said, I don't consider this a problem introduced by this paper since this should be a problem of many previous works as well, and it seems an intriguing question why large state spaces help despite the existence of these equivalent small models. Is it similar to why overparameterizing in neural models help? Is there an equivalent form of the lottery ticket hypothesis here? In regard to weakness #1, I think this work would be strengthened by adding the following baselines: 1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared. 2. For each HMM with rank r, add a baseline smaller HMM with state size being r.
1. For each PCFG with rank r, add a baseline smaller PCFG with state size being r, but where $H, I, J, K, L$ are directly parameterized as learned matrices of $\mathcal{R}^{r \times r}$, $\mathcal{R}^{r \times o}$, $\mathcal{R}^{r}$, etc. Under this setting, parsing F-1 might not be directly comparable, but perplexity can still be compared.
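A small sketch of the baseline being requested, with illustrative sizes (m original states, rank r < m): the rank-structured model routes its r × r potential through the larger m-dimensional space, while the proposed baseline parameterizes the equivalent small model directly.

```python
import torch
import torch.nn as nn

m, r = 512, 64   # original state size and rank (illustrative values, not from the paper)

# Rank-structured parameterization: the r x r potential is composed
# through the larger m-dimensional space (over-parameterized).
U = nn.Parameter(torch.randn(r, m) * 0.02)
V = nn.Parameter(torch.randn(r, m) * 0.02)
H_lowrank = V @ U.T                                  # (r, r), but carries 2*r*m parameters

# Requested baseline: parameterize the equivalent small model directly.
H_direct = nn.Parameter(torch.randn(r, r) * 0.02)    # only r*r parameters

print(2 * r * m, "vs", r * r)                        # parameter count per potential
```

If the over-parameterized route still wins at equal r, that would point to an optimization benefit rather than extra expressiveness.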
ICLR_2021_318
ICLR_2021
weakness, though this is shared by RIM as well, and not that relevant to what is being evaluated/investigated in the paper. Decision: This paper makes an algorithmic contribution to the systematic generalization literature, and many in the NeurIPS community who are interested in this literature would benefit from having this paper accepted to the conference. I'm in favour of acceptance. Questions to authors: 1. Is there a specific reason why you reported results on knowledge transfer (i.e., Section 4.3) only on a few select environments? 2. As mentioned in the “weak points” section, it would be nice if you could elaborate on 3. Is it possible that the caption of Figure 4 is misplaced? That figure is referenced in Section 4.1 (Improved Sample Efficiency), but the caption suggests it has something to do with better knowledge transfer. 4. If you have the resources, I would be very interested to see how the “small learning rate for attention parameters” benchmark (described above) would compare with the proposed approach. 5. In Section 4, 1st paragraph, you write “Do the ingredients of the proposed method lead to […] a better curriculum learning regime[…]”. Could you elaborate on what you mean by this? [1] Beaulieu, Shawn, et al. "Learning to continually learn." arXiv preprint arXiv:2002.09571 (2020).
4. If you have the resources, I would be very interested to see how the “small learning rate for attention parameters” benchmark (described above) would compare with the proposed approach.
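On question 4, the "small learning rate for attention parameters" benchmark presumably boils down to optimizer parameter groups with different learning rates; a minimal PyTorch sketch (the name filter and rate values are placeholders, not taken from the paper):

```python
import torch

def build_optimizer(model, base_lr=1e-4, attention_lr=1e-5):
    """Assign a smaller learning rate to attention parameters only."""
    attn_params, other_params = [], []
    for name, p in model.named_parameters():
        (attn_params if "attention" in name else other_params).append(p)
    return torch.optim.Adam([
        {"params": other_params, "lr": base_lr},
        {"params": attn_params, "lr": attention_lr},
    ])
```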
NIPS_2022_738
NIPS_2022
W1) The paper states that "In order to introduce epipolar constraints into attention-based feature matching while maintaining robustness to camera pose and calibration inaccuracies, we develop a Window-based Epipolar Transformer (WET), which matches reference pixels and source windows near the epipolar lines." It claims that it introduces "a window-based epipolar Transformer (WET) for enhancing patch-to-patch matching between the reference feature and corresponding windows near epipolar lines in source features". To me, taking a window around the epipolar line into account seems like an approximation to estimating the uncertainty region around the epipolar lines caused by inaccuracies in calibration and camera pose and then searching within this region (see [Förstner & Wrobel, Photogrammetric Computer Vision, Springer 2016] for a detailed derivation of how to estimate uncertainties). Is it really valid to claim this part of the proposed approach as novel? W2) I am not sure how significant the results on the DTU dataset are: a) The difference with respect to the best performing methods is less than 0.1 mm (see Tab. 1). Is the ground truth sufficiently accurate enough that such a small difference is actually noticeable / measurable or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab. 4. Does the claim "It can be seen from the table that our proposed modules improve in both accuracy and completeness" really hold? Why not use another dataset for the ablation study, e.g., the training set of Tanks & Temples or ETH3D? W3) I am not sure what is novel about the "novel geometric consistency loss (Geo Loss)". Looking at Eq. 10, it seems to simply combine a standard reprojection error in an image with a loss on the depth difference. I don't see how Eq. 10 provides a combination of both losses. W4) While the paper discusses prior work in Sec. 2, there is mostly no mentioning on how the paper under review is related to these existing works. In my opinion, a related work section should explain the relation of prior work to the proposed approach. This is missing. W5) There are multiple parts in the paper that are unclear to me: a) What is C in line 106? The term does not seem to be introduced. b) How are the hyperparameters in Sec. 4.1 chosen? Is their choice critical? c) Why not include UniMVSNet in Fig. 5, given that UniMVSNet also claims to generate denser point clouds (as does the paper under review)? d) Why use only N=5 images for DTU and not all available ones? e) Why is Eq. 9 a reprojection error? Eq. 9 measures the depth difference as a scalar and no projection into the image is involved. I don't see how any projection is involved in this loss. Overall, I think this is a solid paper that presents a well-engineered pipeline that represents the current state-of-the-art on a challenging benchmark. While I raised multiple concerns, most of them should be easy to address. E.g., I don't think that removing the novelty claim from W1 would make the paper weaker. The main exception is the ablation study, where I believe that the DTU dataset is too easy to provide meaningful comparisons (the relatively small differences might be explained by randomness in the training process. The following minor comments did not affect my recommendation: References are missing for Pytorch and the Adam optimizer. Post-rebuttal comments Thank you for the detailed answers. 
Here are my comments to the last reply: Q: Relationship to prior work. Thank you very much, this addresses my concern. A: Fig. 5 is not used to claim our method achieves the best performance among all the methods in terms of completeness, it actually indicates that our proposed method could help reconstruct complete results while keeping high accuracy (Tab. 1) compared with our baseline network [7] and the most relevant method [3]. In that context, we not only consider the quality of completeness but also the relevance to our method to perform comparison in Fig. 5. As I understand lines 228-236 in the paper, in particular "The quantitative results of DTU evaluation set are summarized in Tab. 1, where Accuracy and Completeness are a pair of official evaluation metrics. Accuracy is the percentage of generated point clouds matched in the ground truth point clouds, while Completeness measures the opposite. Overall is the mean of Accuracy and Completeness. Compared with the other methods, our proposed method shows its capability for generating denser and more complete point clouds on textureless regions, which is visualized in Fig. 5.", the paper seems to claim that the proposed method generates denser point clouds. Maybe this could be clarified? A: As a) nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation, b) the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets), c) the final results are the average across 22 test scans, we think that fewer errors could indicate better performance. However, your point about the accuracy of DTU GT is enlightening, and we think it's valuable future work. This still does not address my concern. My question is whether the ground truth is accurate enough that we can be sure that the small differences between the different components really comes from improvements provided by adding components. In this context, stating that "the GT of DTU is approximately the most accurate GT we can obtain (compared with other datasets)" does not answer this question as, even though DTU has the most accurate GT, it might not be accurate enough to measure differences at this level of accuracy (0.05 mm difference). If the GT is not accurate enough to differentiate in the 0.05 mm range, then averaging over different test scans will not really help. That "nearly all the learning-based MVS methods (including ours) take the DTU as an important dataset for evaluation" does also not address this question. Since the paper claims improvements when using the different components and uses the results to validate the components, I do not think that answering the question whether the ground truth is accurate enough to make these claims in future work is really an option. I think it would be better to run the ablation study on a dataset where improvements can be measured more clearly. Final rating I am inclined to keep my original rating ("6: Weak Accept: Technically solid, moderate-to-high impact paper, with no major concerns with respect to evaluation, resources, reproducibility, ethical considerations."). I still like the good results on the Tanks & Temples dataset and believe that the proposed approach is technically sound. However, I do not find the authors' rebuttals particularly convincing and thus do not want to increase my rating. 
In particular, I still have concerns about the ablation study as I am not sure whether the ground truth of the DTU dataset is accurate enough that it makes sense to claim improvements if the difference is 0.05 mm or smaller. Since this only impacts the ablation study, it is also not a reason to decrease my rating.
1). Is the ground truth sufficiently accurate that such a small difference is actually noticeable/measurable, or is the difference due to noise or randomness in the training process? b) Similarly, there is little difference between the results reported for the ablation study in Tab.
ICLR_2022_2196
ICLR_2022
weakness] Modeling: The rewards are designed based on a discriminator. As we know, generative adversarial networks are not easy to train since the generative and discriminative networks are trained alternately. In the proposed method, the policy network and the discriminator are trained alternately. I doubt whether it is easy to train the model. I would like to see the training curves for the reward values. The detailed alignment function used in Eq. (1) and Eq. (3) needs to be provided. Experiment: - The results are not satisfying. In the experiments, the generation quality of the proposed method is not as good as that of traditional generative networks in terms of FID. In the image parsing part, the results are far behind the compared methods. - Since the results are not comparable to those of existing methods, the proposed method seems to have limited significance.
- Since the results are not comparable to those of existing methods, the proposed method seems to have limited significance.
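For reference, since the quality comparison rests on FID, here is the standard Fréchet Inception Distance over already-extracted Inception features (the feature extraction itself is omitted; this is the textbook formula, not the evaluation code used in the paper):

```python
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    """feats_*: (N, D) arrays of Inception activations for real / generated images."""
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    cov1 = np.cov(feats_real, rowvar=False)
    cov2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(cov1 @ cov2)          # matrix square root of the covariance product
    if np.iscomplexobj(covmean):
        covmean = covmean.real                   # discard tiny imaginary parts from numerics
    diff = mu1 - mu2
    return diff @ diff + np.trace(cov1 + cov2 - 2.0 * covmean)
```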
ARR_2022_98_review
ARR_2022
1. Human evaluations were not performed. Given the weaknesses of SARI (Vásquez-Rodríguez et al. 2021) and FKGL (Tanprasert and Kauchak, 2021), the lack of human evaluations severely limits the potential impact of the results, combined with the variability in the results on different datasets. 2. While the authors explain the need to include text generation models in the framework of (Kumar et al., 2020), it is not clear why only the delete operation was retained from the framework, which used multiple edit operations (reordering, deletion, lexical simplification, etc.). Further, it is not clear how including those other operations will affect the quality and performance of the system. 3. (minor) It is unclear how the authors arrived at the different components of the "scoring function," nor is it clear how they arrived at the different threshold values/ranges. 4. Finally, one might wonder whether the performance gains on Newsela are due to a domain effect, given that the system was explicitly tuned for deletion operations (that abound in Newsela) and that performance is much lower on the ASSET test corpus. It is unclear how the system would generalize to new datasets with varying levels of complexity and peripheral content. 1. Is there any reason why 'Gold Reference' was not reported for Newsela? It makes it hard to assess the performance of the existing system. 2. Similarly, is there a reason why the effect of linguistic acceptability was not analyzed (Table 3 and Section 4.6)? 3. It would be nice to see some examples of the system on actual texts (vs. other components & models). 4. What were the final thresholds that were used for the results? It would also be good for reproducibility if the authors could share the full set of hyperparameters as well.
3. It would be nice to see some examples of the system on actual texts (vs. other components & models).
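For context on the automatic metrics whose weaknesses are cited above, FKGL is a fixed readability formula over word, sentence, and syllable counts; a minimal sketch with a crude vowel-group heuristic standing in for a real syllabifier:

```python
def fkgl(text):
    """Flesch-Kincaid Grade Level: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    def syllables(word):  # rough vowel-group heuristic, not a proper syllabifier
        word, count, prev_vowel = word.lower(), 0, False
        for ch in word:
            is_vowel = ch in "aeiouy"
            count += is_vowel and not prev_vowel
            prev_vowel = is_vowel
        return max(count, 1)

    sentences = max(text.count(".") + text.count("!") + text.count("?"), 1)
    words = [w.strip(".,!?;:") for w in text.split() if w.strip(".,!?;:")]
    n_words = max(len(words), 1)
    n_syll = sum(syllables(w) for w in words)
    return 0.39 * n_words / sentences + 11.8 * n_syll / n_words - 15.59
```

Because the formula only rewards shorter sentences and words, a system tuned for deletion can lower FKGL without becoming genuinely simpler, which is part of why human evaluation matters here.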
Ud7I21wHnl
ICLR_2025
1. In conjunction with strength 3, I will note that although the experimentation is extensive, the authors do not do a good job of explaining the results or giving readers a better understanding of what the experiments show. The authors could have gone into more detail about what the results show; for example, they could have provided a sample input image and various versions of the backdoored image, and then shown the AHs and MLPs attending to different triggers differently. This could have driven home their claim and visually strengthened their analysis. 2. For a more thorough ablation study, the authors could maybe experiment with varying architectures, or modify existing networks to change layer orders, to understand whether their claims such as "Infected AHs are centered on the last layer" are indeed due to certain trigger/attack-based factors rather than architectural reasons. 3. In conjunction with strength 4, although the paper's language is lucid, the math hampers readability. The authors could supplement their use of math with textual explanations of what the math intuitively explores, for example in the sections 'Repairing representations of infected AH' or 'Preliminary'. It may help to boil down equations to only the necessary parts that are directly relevant to the reader's understanding. 4. Although the paper's experimentation is extensive, the authors could have incorporated stronger attacks like dynamic triggers ('Reflection Backdoor', 'Hidden Trigger Backdoor Attacks') and patch-based triggers with adaptive patterns ('Input-Aware Dynamic Backdoor Attack').
2. For a more thorough ablation study, the authors could maybe experiment with varying architectures, or modify existing networks to change layer orders, to understand whether their claims such as "Infected AHs are centered on the last layer" are indeed due to certain trigger/attack-based factors rather than architectural reasons.
NIPS_2017_575
NIPS_2017
- While the general architecture of the model is described well and is illustrated by figures, architectural details lack mathematical definition, for example multi-head attention. Why is there a split arrow in Figure 2 right, bottom right? I assume these are the inputs for the attention layer, namely query, keys, and values. Are the same vectors used for keys and values here or different sections of them? A formal definition of this would greatly help readers understand this. - The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3. - The complexity argument claims that self-attention models have a maximum path length of 1 which should help maintaining information flow between distant symbols (i.e. long-range dependencies). It would be good to see this empirically validated by evaluating performance on long sentences specifically. Minor comments: - Are you using dropout on the source/target embeddings? - Line 146: There seems to be dangling "2"
- The proposed model contains lots of hyperparameters, and the most important ones are evaluated in ablation studies in the experimental section. It would have been nice to see significance tests for the various configurations in Table 3.
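For the first point, a minimal numpy sketch of the formal definition being asked for: queries, keys, and values are produced by separate learned projections of the inputs (so keys and values are different projections of the same source vectors, not sections of one vector), heads attend independently, and their outputs are concatenated and projected. Shapes and names are generic, not the paper's notation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X_q, X_kv, W_q, W_k, W_v, W_o, n_heads):
    """X_q: (T_q, d_model), X_kv: (T_k, d_model); W_*: (d_model, d_model)."""
    d_model = X_q.shape[-1]
    d_head = d_model // n_heads

    def split(X):  # (T, d_model) -> (n_heads, T, d_head)
        return X.reshape(X.shape[0], n_heads, d_head).transpose(1, 0, 2)

    Q, K, V = split(X_q @ W_q), split(X_kv @ W_k), split(X_kv @ W_v)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)    # (n_heads, T_q, T_k)
    heads = softmax(scores) @ V                            # (n_heads, T_q, d_head)
    concat = heads.transpose(1, 0, 2).reshape(X_q.shape[0], d_model)
    return concat @ W_o
```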
kNvwWXp6xD
ICLR_2025
- I found the paper's flow to be quite confusing. It seems the authors had a lot of material to cover, most of which is placed in the appendix. Perhaps because of this, the actual paper lacks clarity and the required detail. It would be helpful if the authors presented the material in the main sections and referred to the appropriate appendix in case the reader wants further detail. - There seems to be forward referencing in the paper. Material is introduced without proper explanation, and is explained in later sections, e.g., Figure 1. - The exact contribution(s) need to be written more clearly in the Introduction. Moreover, the material supporting the main contributions seems to be in the appendix and not the main sections, e.g., the deep-rag algorithm or the discussion on high concurrency. - The experiments section seems to be defining the evaluation measures rather than focusing on an explanation of the experiments and results. - The authors mention that the superior performance of their approach can be attributed to several factors. However, it is not clear which factor is actually contributing towards the better results. - Some sentences are confusing, e.g., in the first para of the Introduction: HumanEval() first proposed to let LLM generating code based on .......
- There seems to be forward referencing in the paper. Material is introduced without proper explanation, and is explained in later sections e.g. Figure1 - The exact contribution(s) need to be written more clearly in the Introduction. Moreover, the material supporting the main contributions seems to be in the appendix and not the main sections e.g. deep-rag algorithm or discussion on the high concurrency.
NIPS_2017_28
NIPS_2017
- Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g., what is "sample count" in Fig. 2? It would really help to add more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values. - Although, in principle, the argument that in the case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs. new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations. - The model implementation should be better justified: for example, the stopping rule with n consecutive identical samples seems a bit arbitrary (at least it's hard to imagine neural/behavioral parallels for that) and sensitivity with regard to n is not discussed. - Finally, it's unclear how perceptual modifications apply in the case of recall: in my understanding the items are freely recalled from memory and hence can't be perceptually modified. Also, what are the speeded/unspeeded conditions?
- Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g., what is "sample count" in Fig. 2? It would really help to add more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values.
ARR_2022_23_review
ARR_2022
The technical novelty is rather lacking, although I believe this doesn't affect the contribution of this paper. - You mention that you only select 10 answers from all correct answers; why do you do this? Could this lead to an underestimation of the performance? - Do you think generative PLMs that are pretrained on biomedical texts could be more suitable for solving the multi-token problem?
- You mention that you only select 10 answers from all correct answers; why do you do this? Could this lead to an underestimation of the performance?
FGBEoz9WzI
EMNLP_2023
1. Some claims may be inspired by existing studies; thus, it is critical to add the supporting references. For example, Lines 55-64: "we identify four critical factors that affect the performance of chain-of-thought prompting and require large human effort to deal with: (1) order sensitivity: the order combination of the exemplars; (2) complexity: the number of reasoning steps of the rationale chains; (3) diversity: the combination of different complex-level exemplars; (4) style sensitivity: the writing/linguistic style of the rationale chains." --- Most of the above factors have been discussed in existing studies. 2. This approach requires extensive queries to optimize and organize the demonstration exemplars, which would be costly behind paywalls. It also relies on a training-based pipeline, which further increases the complexity of the whole framework.
1. Some claims may be inspired by existing studies; thus, it is critical to add the supporting references. For example, Lines 55-64: "we identify four critical factors that affect the performance of chain-of-thought prompting and require large human effort to deal with: (1) order sensitivity: the order combination of the exemplars; (2) complexity: the number of reasoning steps of the rationale chains; (3) diversity: the combination of different complex-level exemplars; (4) style sensitivity: the writing/linguistic style of the rationale chains." --- Most of the above factors have been discussed in existing studies.
xozJw0kZXF
EMNLP_2023
1. Is object hallucination the most important problem of multimodal LLMs? Others include knowledge, object spatial relationships, fine-grained attributes, etc. 2. Is it sufficient to measure object hallucination through only yes/no responses? A yes response does not necessarily indicate that the model comprehends the presence of the object in the image, as it may still produce incorrect objects when undertaking other tasks.
2. Is it sufficient to measure object hallucination through only yes/no responses? A yes response does not necessarily indicate that the model comprehends the presence of the object in the image, as it may still produce incorrect objects when undertaking other tasks.
NIPS_2022_2592
NIPS_2022
- (major) I don’t agree with the limitation (ii) of current TN models: “At least one Nth-order factor is required to physically inherit the complex interactions from an Nth-order tensor”. TT and TR can model complex mode interactions if the ranks are large enough. The fact that there is a lack of direct connections between any pair of nodes is not a limitation because all nodes are fully connected through a TR or TT. However, the price to pay with TT or TR to model complex mode interactions is having bigger core tensors (a larger number of parameters). The newly proposed topology also has a large price to pay in terms of model size because the core tensor C grows exponentially with the number of dimensions, which makes it intractable in practice. The paper lacks a comparison of TR/TT and TW at a fixed size of both models (see my criticism of the experiments below). - The newly proposed model can be used only with a small number of dimensions because of the curse of dimensionality imposed by the core tensor C. - (major) I think the proposed TW model is equivalent to TR by noting that, if the core tensor C is represented by a TR (this can always be done), then by fusing this TR with the cores G_n we can reach a TR representation equivalent to the original TW model. I would have liked to see this analysis in the paper and a discussion justifying TW over TR. - (major) The comparison against other models in the experiments is unclear. The values of the ranks used for all the models are omitted, which makes a fair comparison impossible. To show the superiority of TW over TT and TR, the authors must compare the tensor completion results for all the models while keeping the same number of model parameters. The number of model parameters can be computed by adding the number of entries of all core tensors for each model (see my question about experiment settings below). - (minor) The title should include the term “tensor completion” because that is the only application of the new model that is presented in the paper. - (minor) The absolute value operation in the definition of the Frobenius norm in line 77 is not needed because tensor entries are real numbers. - (minor) I don’t agree with the statement in line 163: “Apparently, the O(NIR^3+R^N) scales exponentially”. The exponential growth is not apparent; it is a fact. I updated my scores after rebuttal. See my comments below. Yes, the authors have stated that the main limitation of their proposed model is the exponential growth of its model parameters with the number of dimensions.
- The newly proposed model can be used only with a small number of dimensions because of the curse of dimensionality imposed by the core tensor C.
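To make the fixed-budget comparison concrete, here is a rough parameter-count sketch for an order-N tensor with mode size I. The TT/TR counts follow the standard core shapes; the TW count assumes rim cores of shape (R, I, L, R) plus a hub core C with L^N entries, which is an assumption based on the description above rather than the paper's exact definition, and it is the L^N term that produces the exponential growth criticized here.

```python
def tt_params(N, I, R):
    # Tensor-train: two boundary cores (I*R) and N-2 interior cores (R*I*R).
    return 2 * I * R + (N - 2) * I * R * R

def tr_params(N, I, R):
    # Tensor-ring: N cores of shape (R, I, R).
    return N * I * R * R

def tw_params(N, I, R, L):
    # Tensor-wheel (assumed shapes): N rim cores (R, I, L, R) plus a hub core with L^N entries.
    return N * I * R * R * L + L ** N

for N in (4, 6, 8):
    print(N, tt_params(N, 10, 5), tr_params(N, 10, 5), tw_params(N, 10, 5, 3))
```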
NIPS_2021_918
NIPS_2021
weakness Originality + The paper presents a novel algorithm. The algorithm applies ideas from bilevel optimization [43, 46, 55, 26, 45] to fairness and class imbalance learning. Unlike previous work [41, 33, 16, 32, 8, 13, 50, 63], which employs fixed class balancing loss functions based on some training data statistics, the proposed method automatically guides the loss function design and learns a more optimal set of parameters for the balanced loss. Quality + Claims are well supported through experimental evidence. + The authors are careful and honest about reporting the paper's limitations. + The paper compares its method to recent state-of-the-art [8,50] and beats them. - Lack of more baselines. The paper compares performance to only two baselines [8,50], and it is not clear to me why more baselines are not evaluated (e.g., [41, 33, 16, 32, 13, 63]). - No error bars in the results. There are no error bars in the result tables, making it more challenging to assess the significance of the improvements. Clarity + The paper is written very well, and it is easy to follow. For the most part, the paper adequately informs the reader. - Some clarifications about the experiments are needed. Table 1 does not have any PDA component in Algo. 1; does this mean that no PDA was applied in those experiments? (If PDA was used, did the authors also apply it to the baselines?) Similarly, did the experiments in Table 2 apply LA_init? And lastly, does Table 3 show results for "Algo 1. α ← Δ & l, LA_init, PDA"? - A possible inconsistency in the results, but I may also be misunderstanding something (see above for my confusion). Specifically, if I am correct in understanding, the diagonal results in Table 3 show the performance for "Algo 1. α ← Δ & l, LA_init, PDA", which have a higher error rate than both "Algo 1. α ← PDA, Δ & l" in Table 2 and "Algo 1. α ← Δ & l, LA_init" in Table 1. This suggests that, in fact, PDA and LA_init degrade performance when applied together. If my understanding is correct, this undermines the second contribution point (lines 73-77) and should be explicitly clarified in the paper. If my understanding is wrong, why are the results on the diagonal in Table 3 different from either the bottom row of Table 1 or Table 2? Significance + This work aims to address a timely problem of fairness-seeking optimization, and the paper will be relevant to the community. Post-rebuttal update I have read other reviewers' comments and the authors' rebuttals, and I am inclined to keep my original score of 7 with increased confidence of 4, conditioned on the additional baselines being included in Table 1 in the main paper. The rebuttal addresses the clarity issues, and the additional results show that the method is at least as good as other baselines [28][63][41] in the Cifar-10-LT setting. The performance advantage over [28][63] in the ImageNet-LT dataset is much more obvious. Although results for [41] in the ImageNet-LT setting were not provided, it is likely that [41] would also be beaten, as in the Cifar-10-LT setting. I think this work is good, addresses a timely problem, and will be relevant for the NeurIPS conference. The paper adequately and fairly outlines its limitations in a dedicated section.
- Some clarifications about the experiments are needed. Table 1 does not have any PDA component in Algo. 1; does this mean that no PDA was applied in those experiments? (If PDA was used, did the authors also apply it to the baselines?) Similarly, did the experiments in Table 2 apply LA_init? And lastly, does Table 3 show results for "Algo 1. α ← Δ & l, LA_init, PDA"?
NIPS_2020_1706
NIPS_2020
1. The memorization effect is not new to the community. Therefore, the novelty of this paper is not sufficiently demonstrated. The authors need to be clearer about what extra insights this paper gives. 2. It would be better if the authors could provide some theoretical justification for why co-training and weight averaging can improve results, since they are important for the performance. 3. The empirical performance does not seem to be very strong compared to DivideMix. Some explanations are needed.
2. It would be better if the authors could provide some theoretical justification for why co-training and weight averaging can improve results, since they are important for the performance.
NIPS_2020_1580
NIPS_2020
1. Referring to Equations 1 and 4, what is the relationship between \eta and Q? 2. All along, Q, K, and V are matrices, but in Equation 4, \eta and V are 1-d vectors. What is the reasoning for this? 3. DeepTCR: Why is DeepRC not compared with DeepTCR? 4. The number of repertoires ranges from 700 to 5K: how long is each repertoire? How are duplicate repertoires handled? 5. Given these are large datasets, why is a CV factor of 5 chosen? Was DeepRC checked with higher CV factors? 6. From the results, SVM MinMax is close on the heels of DeepRC. Why would one consider DeepRC over SVMs? 7. Are the DeepRC-identified motifs biologically interpretable? Minor comments: MM stands for both multiple motifs and MinMax. There are typos in the last section of the paper.
5. Given these are large datasets, why is a CV factor of 5 chosen? Was DeepRC checked with higher CV factors?
NIPS_2018_543
NIPS_2018
Weakness: The main idea of the paper is not original. The entire Section 2.1 consists of classical results in Gaussian process modeling. There are many papers and books that describe it. I only point out one such source, Chapters 3 and 4 of Santner, Thomas J., Brian J. Williams, and William I. Notz. The design and analysis of computer experiments. Springer Science & Business Media, 2013. The proposed Bayes-Sard framework (Theorem 2.7), which I suspect already exists in the Monte Carlo community, is a trivial application of the Gaussian process model to numerical integration approximation. The convergence results, Theorem 2.11 and Theorem 2.12, are also trivial extensions of the classic results on RKHS methods. See Theorem 11.11 and 11.13 of Wendland, Holger. Scattered data approximation. Vol. 17. Cambridge University Press, 2004. Or Theorem 14.5 of Fasshauer, Gregory E. Meshfree approximation methods with MATLAB. Vol. 6. World Scientific, 2007. The quality of this paper is relatively low, even though the clarity of the technical part is good. This work lacks basic originality, as I pointed out in the weaknesses. Overall, this paper has little significance.
17. Cambridge university press, 2004. Or Theorem 14.5 of Fasshauer, Gregory E. Meshfree approximation methods with MATLAB. Vol.
ICLR_2022_2110
ICLR_2022
Weakness: 1) Although each part of the proposed method is effective, the overall algorithm is still cumbersome. It has multiple stages. In contrast, many existing pruning methods do not need fine-tuning. 2) Technical details and formulations are limited. It seems that the main novelty lies in the scheme or procedure. 3) The experimental results are not convincing. The compared methods are few. Although few authors have attempted to prune EfficientNet, other networks such as ResNet could be compressed in the experiments. In addition, the performance gains compared with SOTAs are marginal, within 1%. 4) The paper is poorly written. There are many typos, and some are listed as follows: --In the caption of Figure 2, “An subset of a network” should be “A subset of a network”. --In Line 157 of Page 4, “The output output vector” should be “The output vector”. --In Line 283 of Page 7, “B0V2 as,” should be “B0V2 as teacher,”. --In Line 301 of Page 7, “due the inconsistencies” should be “due to the inconsistencies”.
2) Technical details and formulations are limited. It seems that the main novelty lies in the scheme or procedure.
ICLR_2021_1944
ICLR_2021
I have several concerns regarding this paper. • Novelty. The authors propose to use Ricci flow to compute the distance between nodes so that to sample edges with respect to that distance. Using Ricci flow for distance computation is a well-studied area (as indicated in related work). The only novel part is that each layer gets a new graph; however, this choice is not motivated (why not to train all layers of GNN on different graphs instead?) and has problems (see next). • Approach. Computing optimal transport distance is generally an expensive procedure. While authors indicated that it takes seconds to compute it on 36 cores machine, it’s not clear how scalable this method is. I would like to see whether it scales on normal machines with a couple of cores. Moreover, how do you compute exactly optimal transport, because the Sinkhorn method gives you a doubly stochastic matrix (how do you go from it to optimal transport?). • Algorithm. This is the most obscure part of the paper. First, it’s not indicated how many layers do you use in experiments. This is a major part of your algorithm because you claim that if an edge appears in several layers it means that it’s not adversarial (or that it does not harm your algorithm). In most of the baselines, there are at most 2-3 layers. There are theoretical limitations why GNN with many layers may not work in practice (see, the literature on “GNN oversmoothing”). Considering that you didn’t provide the code (can you provide an anonymized version of the code?) and that your baselines (GCN, GAT, etc.) have similar (or the same) performance as in the original papers (where the number of layers is 2-3), I deduce that your model Ricci-GNN also has this number of layers. With that said, I doubt that it’s possible to make any conclusive results about whether an edge is adversarial or not with 2-3 graphs. Moreover, I would expect to see an experiment on how your approach varies depending on the number of layers. This is a crucial part of your algorithm and not seeing discussion of it in the paper, raises concerns about the validity of experiments. • Design choices. Another potential problem of your algorithm is that the sampled graphs can become dense. There are hyperparameters \sigma and \beta that control the probabilities and also you limit the sampling only for 2-hop neighborhoods (“To keep graph sparsity, we only sample edges between pairs that are within k hops of each other in G (we always take k = 2 in the experiments).” This is arbitrary and the effect of it on the performance is not clear. How did you select parameters \sigma and \beta? Why k=2? How do you ensure that the sampled graphs are similar to the original one? Does it matter that sampled graphs should have similar statistics to the original graph? I guess, this crucially affects the performance of your algorithm, so I would like to see more experiments on this. • Datasets. Since this paper is mostly experimental, I would like to see a comparison of this model on more datasets (5-7 in total). Verifying on realistic but small datasets such as Cora and Citeseer limits our intuition about performance. For example, Cora is a single graph of 2.7K nodes. As indicated in [1], “Although small datasets are useful as sanity checks for new ideas, they can become a liability in the long run as new GNN models will be designed to overfit the small test sets instead of searching for more generalizable architectures.” There are many sources of real graphs, you can consider OGB [2] or [3]. • Weak baselines. 
Another major concern about the validity of the experiments is the choice of the baselines. None of the GNN baselines (GCN, GAT, etc.) were designed to defend against adversarial attacks, so choosing them for comparison is not fair. A comparison with previous works (indicated under “Adversarial attack on graphs.” in the related work section) is necessary. Moreover, an experiment where you randomly sample edges (instead of using the Ricci distance) is desirable to compare the performance against random sampling. • Ablation. Since you use GCN, why is the performance of Ricci-GCN so different from GCN when there are 0 perturbations? For Citeseer, the absolute difference is 2%, which is quite high for the same models. Also, an experiment with different choices of GNN is desirable. • Training. Since experiments play an important role in this paper, it's important to give a fair setup for the models in comparison. You write "For each training procedure, we run 100 epochs and use the model trained at 100-th epoch.". This can be disadvantageous for many models. A better way would be to run each model setup until convergence on the training set, selecting the epoch using the validation set. Otherwise, your baselines could suffer from either underfitting or overfitting. [1] https://arxiv.org/pdf/2003.00982.pdf [2] https://ogb.stanford.edu/ [3] https://paperswithcode.com/task/node-classification ========== After reading the authors' comments: I applaud the authors for greatly improving their paper via the revision. Now the number of layers is specified and the explanation of having many sampled graphs during training is added, which was missing in the original text and was preventing a full understanding of the reasons why the proposed approach works. Overall, I am leaning toward increasing the score. I still have several concerns about the practicality of Ricci-GNN. In simple words, the proposed approach uses some metric S (Ricci flow) that dictates how to sample graphs for training. The motivation for using Ricci flow is “that Ricci flow is a global process that tries to uncover the underlying metric space supported by the graph topology and thus embraces redundancy”. This claim cites previous papers, which in turn do not discuss what exactly is meant by “a global process that tries to uncover the underlying metric space”. Spectral embeddings can also be considered a global metric, so some analysis of what properties of Ricci flow make it more robust to attacks would be appreciated. Also, including random sampling in the comparison would confirm that the effect comes not from the fact that you use more graphs during the training, but from how you sample those graphs. In addition, as the paper is empirical and relies on properties of Ricci flow which were discussed in previous works and were not addressed in the context of adversarial attacks, having more datasets (especially larger ones) in the experiments would improve the paper.
• Approach. Computing the optimal transport distance is generally an expensive procedure. While the authors indicated that it takes seconds to compute it on a 36-core machine, it's not clear how scalable this method is. I would like to see whether it scales on normal machines with a couple of cores. Moreover, how do you compute the exact optimal transport, given that the Sinkhorn method gives you a doubly stochastic matrix (how do you go from it to the optimal transport)?
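For reference on the Sinkhorn question: the iterations below return an entropy-regularized transport plan whose marginals match the prescribed ones (a doubly stochastic matrix up to scaling when both marginals are uniform), not the exact OT plan, so some extra step or an exact solver is needed if exact optimal transport is claimed. A minimal sketch:

```python
import numpy as np

def sinkhorn_plan(cost, a, b, reg=0.1, n_iters=200):
    """cost: (n, m) ground-cost matrix; a, b: source/target marginals (each sums to 1)."""
    K = np.exp(-cost / reg)             # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]     # approximate plan: rows sum to a, columns to b
    return P, (P * cost).sum()          # plan and its (regularized) transport cost

cost = np.random.rand(5, 5)
P, w = sinkhorn_plan(cost, np.full(5, 0.2), np.full(5, 0.2))
```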
ARR_2022_319_review
ARR_2022
1 The paper poses mostly subjective points which are themselves largely not novel or are flawed in themselves. It starts with the emphasized sentence on page 1 attributed to Jacovi et al which I will call "Opinion A": "As an explanation method, the evaluation criteria of attribution methods should be how accurately it reflects the true reasoning process of the model (faithfulness), not how convincing it is to humans (plausibility)" This point is debatable. Whether attributions should be purely faithful to model behaviour or offer human-interpretability is a decision to be made in the definition of an attribution method or its motivation. Neither option is wrong as long as the method is applied to its intended use case. Further, the noted choice of faithfulness has implications on some of the "logic traps" described subsequently. I elaborate on each in the comments section below. Significant positioning fixes need to be implemented in this work starting with Opinion A. Either arguments for faithfulness need to be made or the paper needs to restrict itself to attributions whose primary concern is faithfulness. The latter option would have to be written from the perspective that faithfulness is an apriori goal. The novelty of arguments being made needs to be addressed as well. I find that the cores of the "logic traps" are all known points and are often well considered in attribution, evaluation, and robustness work. Presently the paper does not demonstrate that (logic traps) "in existing evaluation methods have been ignored for a long time". If there is a gap in some segment of the community, a survey or systemization paper where applicable would be more appropriate. 2 Arguments and experiments with regard to "model reasoning process" are vague in key definitions and thus non-convincing. Specifically, no definition of "model reasoning process" is given. Experiment outlined in Figure 7 argues that extractor's bits are equivalent to "model reasoning process" but this is arguable. The extractor can derive the same bits using different means or different bits using very similar means. While I do not disagree with the main points regarding model reasoning process and robustness, I do not think the experiments demonstrate them. Detailed comments regarding Weakness 1): * Logic Trap 1: The point being made is obvious: "The decision-making process of neural networks is not equal to the decision-making process of humans." The example Experiment 1 incorporates not a single attribution method. Is it presumed that all attributions will be wrong as there is no human explanation possible? Can we also say that any human explanation would be wrong for the same reason? If there is no ground truth, how can it be wrong for an attribution method to say anything? Regardless, no-one *expects* models to fully replicate the reasoning of a human. Replicating human behaviour may be useful to gauge human-interpretability of explanations but this is precluded by Opinion A. Finally, it is well understood that what is correct to a human may not be correct in a model as per faithfulness vs. explainability discussion which is had alongside attribution evaluations in literature and in Opinion A (some examples from cited works below). * Logic Trap 2: The second point made is a roundabout way of saying that ablation orders are varied: "Using an attribution method as the ground truth to evaluate the target attribution method." 
The authors are rightly pointing out that numerous forms of attribution evaluation based on ablation or reconstruction inputs have been used to motivate attribution methods. Here the opposite conclusion to Opinion A is helpful. Expecting humans to interpret an attribution one way may lead to one ablation order whereas another form of interpretation may lead to another. There is no single correct metric because there is no single interpretation. This point has been made at least in [a]. * Logic Trap 3: The third point is known: "The change in attribution scores maybe because the model reasoning process is really changed rather than the attribution method is unreliable." Ghorbani et al. (2019) note in their concluding discussion that fragility of attribution is reflective of fragility of the model (and that their attacks have not "broken" an attribution). Likewise Alvarez-Melis et al. (2018) point out that the non-robust artifacts in attributions they discover are actually reasonable if faithfulness to model is the only goal of an attribution. Thus Logic Trap 3 does not seem to add anything beyond Opinion A. * 3.1 -- "Attacking attribution methods by replacing the target model." This section seems to be pointing out that attribution methods have a problem of domain faithfulness in that they often internally apply a model to inputs that they have not been trained on or don't expect to operate reliably on. * 3.2 -- "Should We Use Attributions Methods in a Black-Box Way?" The paper argues that black-box is not a worthwhile goal. This is again highly subjective as there are several reasons, despite presented arguments, to prefer black-box methods, like 1) having explanations of the semantics of what a model is modeling instead of an explanation tainted by how the model is implemented, 2) black-box means model agnostic, hence can apply to any model structure, 3) some scenarios just do not have access to the model internals. Other comments: - The term "logic traps" is not define or explained. Dictionaries and reference materials equate them to logical fallacies which I'm unsure is the intended meaning here. - The term "erasure-based" is used in the intro but later the term "ablation" is used. - Typo/word choice near "with none word overlap". - Typo/grammar near "there are works (...) disprove" - Suggestion of 3.3, point 2 is unclear. Adversarial examples can be highly confident and I suspect means of incorporating confidence can themselves be subverted adversarially. - Figure 6 is not useful. I presume the paths are supposed to be indicative of model behaviour but as mentioned in earlier comments, defining it is a crucial problem in explanation work. The Figure is suggestive of it being a triviality. - Several points in the paper use phrases like "A lot of works ..." or "most existing methods ...". I think it would be more appropriate to list the works or methods instead of noting the relative abundance. - There are some inconsistencies in the notation for AOPC and the k parameter with potential related typos in its definition. - Potential grammar issue near "can be seen as an attribution". - Grammar issue near "results are mainly depend" - Grammar issue near "achieve 86.4% accuracy" - Where is footnote 1? - In Figure 4, the resulting attributions for the target word "good" in LOO, Marg are not presented or indicated by color. References from comments: [a] - Wang et al. "Interpreting Interpretations: Organizing Attribution Methods by Criteria".
* 3.1 -- "Attacking attribution methods by replacing the target model." This section seems to be pointing out that attribution methods have a problem of domain faithfulness in that they often internally apply a model to inputs that it has not been trained on or cannot be expected to operate reliably on.
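Since the AOPC notation is flagged as inconsistent, here is the usual definition for reference (up to the exact normalization, which varies across papers): the average drop in the predicted class probability as the top-k attributed tokens are progressively removed. The perturbation function is a placeholder for whatever deletion/masking scheme the paper uses.

```python
import numpy as np

def aopc(predict_proba, x, attribution, target_class, K, remove_fn):
    """
    predict_proba: callable mapping an input to a vector of class probabilities
    attribution:   per-token scores, highest = most important
    remove_fn:     callable (x, indices) -> x with those tokens removed or masked
    """
    order = np.argsort(attribution)[::-1]            # most important tokens first
    p0 = predict_proba(x)[target_class]
    drops = []
    for k in range(1, K + 1):
        x_k = remove_fn(x, order[:k])
        drops.append(p0 - predict_proba(x_k)[target_class])
    return float(np.mean(drops))                      # AOPC ~ (1/K) * sum_k [p(x) - p(x_k)]
```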
VmqTuFMk68
ICLR_2024
1. The writing could be improved in some places. Two examples: * In Definition 2.1, what are the "relevant" auxiliary model weights? The current definition is a bit difficult for me to interpret. * In Definition 2.3, are the $p_t$'s referring to positional embeddings? Could you explain why there aren't positional embeddings in Definition 2.10? 2. Theorem 2.5 shows linear attention could be approximated by softmax attention. Can softmax attention also be approximated by linear attention? If not, I feel Theorem 2.5 alone does not suffice to justify the claim that "Thus, we often use linear attention in TINT". Let me know if I have misunderstood anything. In addition, is the claimed parameter saving based on linear attention or self-attention? 3. Definition 2.8 uses finite differences to approximate the gradient. I am wondering if we can do this end to end. That is, can we simulate a backward pass by doing finite differences and two forward passes? What's the disadvantage of doing so? 4. This work provides experiments on language tasks, while prior works provide experiments on simulated tasks (e.g., Akyurek et al. 2022 did ICL for linear regression). So the empirical results are not directly comparable with prior works. 5. I feel an important prior work [1] is missed. Specifically, [1] also did approximation theory for ICL using transformers. How would the required number of parameters in the construction in this work compare to theirs? [1] Bai, Yu, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei. "Transformers as Statisticians: Provable In-Context Learning with In-Context Algorithm Selection." NeurIPS 2023
1. The writing could be improved in some places. Two examples: * In Definition 2.1, what are the "relevant" auxiliary model weights? The current definition is a bit difficult for me to interpret.
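For context on point 2, the distinction at stake: softmax attention normalizes exp(qk^T/√d) scores, while linear attention replaces the exponential with a feature map φ so the T × T score matrix never has to be formed. A minimal single-head numpy sketch; φ = elu+1 is one common choice and not necessarily the one used in TINT.

```python
import numpy as np

def softmax_attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    A = np.exp(scores - scores.max(axis=-1, keepdims=True))
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V                                     # builds the full (T, T) matrix: O(T^2 d)

def linear_attention(Q, K, V, phi=lambda x: np.where(x > 0, x + 1.0, np.exp(x))):
    Qf, Kf = phi(Q), phi(K)                          # elementwise feature map (elu(x) + 1)
    KV = Kf.T @ V                                    # (d, d) summary: O(T d^2), no (T, T) matrix
    Z = Qf @ Kf.sum(axis=0)                          # per-query normalizer
    return (Qf @ KV) / Z[:, None]
```

The two coincide only approximately, which is why the direction of the approximation in Theorem 2.5 matters for the claim quoted above.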
NIPS_2017_337
NIPS_2017
of the manuscript stem from the restrictive---but acceptable---assumptions made throughout the analysis in order to make it tractable. The most important one is that the analysis considers the impact of data poisoning on the training loss in lieu of the test loss. This simplification is clearly acknowledged in the writing at line 102 and defended in Appendix B. Another related assumption is made at line 121: the parameter space is assumed to be an l2-ball of radius rho. The paper is well written. Here are some minor comments: - The appendices are well connected to the main body, this is very much appreciated. - Figure 2 and 3 are hard to read on paper when printed in black-and-white. - There is a typo on line 237. - Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts. - The use of the terminology "certificate" in some contexts (for instance at line 267) might be misinterpreted, due to its strong meaning in complexity theory.
- Although the related work is comprehensive, Section 6 could benefit from comparing the perspective taken in the present manuscript to the contributions of prior efforts.
NIPS_2022_246
NIPS_2022
Weakness: 1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection. 2) While the integration of the cone projection is shown to be helpful, it is not clear why this particular projection is chosen. Are there other projections that are also helpful? Is there a theoretical proof that this cone projection resolves the noise of the gradients in non-robust classifiers? Overall, I think the proposed technique yields better VCEs and is interesting for the community. I also think that the strengths outweigh the weaknesses. However, I would be open to hearing other reviewers' opinions here.
1) Generally lacking a quantitative measure to evaluate the generated VCEs. Evaluation is mainly performed with visual inspection.
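For readers unfamiliar with the operation the review above questions, the sketch below shows a generic Euclidean projection of a vector (e.g., a classifier gradient) onto a cone of half-angle `alpha` around a reference direction `r`. This is only an assumption about what a "cone projection" could look like, not a reproduction of the paper's exact procedure; it is meant to make the object of the reviewer's question concrete.

```python
import numpy as np

def project_onto_cone(g, r, alpha):
    """Project g onto the second-order cone of half-angle alpha (0 < alpha < pi/2)
    around the direction r. Generic construction, not the paper's exact operation."""
    r = r / np.linalg.norm(r)
    t = float(g @ r)           # component of g along the cone axis
    z = g - t * r              # component orthogonal to the axis
    zn = np.linalg.norm(z)
    kappa = np.tan(alpha)
    if zn <= kappa * t:        # g already lies inside the cone
        return g
    if kappa * zn <= -t:       # nearest point of the cone is its apex
        return np.zeros_like(g)
    # Otherwise project onto the cone boundary.
    t_star = (kappa * zn + t) / (kappa ** 2 + 1.0)
    return t_star * r + (kappa * t_star / zn) * z
```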
7EK2hqWmvz
ICLR_2025
1. The paper does not clearly position itself with respect to existing retrieval-augmented methods that are used to accelerate the model’s inference. A more thorough literature review is needed to highlight how RAEE differs from and improves upon prior work. 2. While the data presented in Figure 3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal.
2. While the data presented in Figure 3 is comprehensive, I noticed that the visual presentation, specifically the subscripts, could be enhanced for better readability and aesthetic appeal.
NIPS_2019_932
NIPS_2019
weakness is that some of the main results come across as rather simple combinations of existing ideas/results, but on the other hand the simplicity can also be viewed as a strength. I don’t find the Experiments section essential, and would have been equally happy to have this as a purely theory paper. But the experiments don’t hurt either. My remaining comments are mostly quite minor – I will put a * next to those where I prefer a response, and any other responses are optional: [*] p2: Please justify the claim “optimal number of measurements” - in particular highlighting the k*log(n/k) + 1/eps lower bound from [1] and adding it to Table 1. As far as I know, it is an open problem as to whether the k^{3/2} term is unavoidable in the binary setting - is this correct? (If not, again please include a citation and add to Table 1) - p2: epsilon is used without being defined (and also the phrase “approximate recovery”) - p4: Avoid the uses of the word “necessary”, since these are only sufficient conditions. Similarly, in Lemma 3 the statement “provided that” is strictly speaking incorrect (e.g., m = 0 satisfies the statement given). - The proof of Lemma 1 is a bit confusing, and could be re-worded. - p6: The terminology “rate”, “relative distance”, and notation H_q(delta) should not be assumed familiar for a NeurIPS audience. - I think the proof of Theorem 10 should be revised. Please give brief explanations for the steps (e.g., the step after qd = (…) follows by re-arranging the choice of n, etc.) [*] In fact, I couldn’t quite follow the last step – substituting q=O(k/alpha) is clear, but why is the denominator also proportional to alpha/k? (A definition of H_q would have helped here) - Lemma 12: Please emphasize that m is known but x is not – this seems crucial. - For the authors’ interest, there are some more recent refined bounds on the “for-each” setting such as “Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework” and “Sparse Classification: A Scalable Discrete Optimization Perspective”, though since the emphasis of this paper is on the “for-all” setting, mentioning these is not essential. Very minor comments: - No need for capitalization in “Group Testing” - Give a citation when group testing first mentioned on p3 - p3: Remove the word “typical” from “the typical group testing measurement”, I think it only increases ambiguity/confusion. - Lemma 1: Is “\cdot” an inner product? Please make it clear. Also, should it be mx or m^T x inside the sign(.)? - Theorem 8: Rename delta to beta to avoid inconsistency with delta in Theorem 7. Also, is a “for all d” statement needed? - Just before Section 4.2, perhaps re-iterate that the constructions for [1] were non-explicit (hence highlighting the value of Theorem 10). - p7: “very low probability” -> “zero probability” - p7: “This connection was known previously” -> Add citation - p10: Please give a citation for Pr[sign = sign] = (… cos^-1 formula …). === POST-REVIEW COMMENTS: The responses were all as I had assumed them to be when stating my previous score, so naturally my score is unchanged. Overall a good paper, with the main limitation probably being the level of novelty.
- Theorem 8: Rename delta to beta to avoid inconsistency with delta in Theorem 7. Also, is a “for all d” statement needed?
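Since the review above notes that a definition of $H_q$ would have helped, here is the standard q-ary entropy function from coding theory that the rate/relative-distance discussion presumably relies on (stated as an assumption about the intended definition):

```latex
% q-ary entropy, defined for 0 < \delta < 1 - 1/q (with H_q(0) = 0):
\[
H_q(\delta) \;=\; \delta \log_q(q-1) \;-\; \delta \log_q \delta \;-\; (1-\delta)\log_q(1-\delta)
\]
```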
ACL_2017_494_review
ACL_2017
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) - General Discussion: The paper describes "morph-fitting", a type of retrofitting for vector spaces that focuses specifically on incorporating morphological constraints into the vector space. The framework is based on the idea of "attract" and "repel" constraints, where attract constraints are used to pull morphological variations close together (e.g. look/looking) and repel constraints are used to push derivational antonyms apart (e.g. responsible/irresponsible). They test their algorithm on multiple different vector spaces and several languages, and show consistent improvements on intrinsic evaluation (SimLex-999 and SimVerb-3500). They also test on the extrinsic task of dialogue state tracking, and again demonstrate measurable improvements over using morphologically-unaware word embeddings. I think this is a very nice paper. It is a simple and clean way to incorporate linguistic knowledge into distributional models of semantics, and the empirical results are very convincing. I have some questions/comments below, but nothing that I feel should prevent it from being published. - Comments for Authors 1) I don't really understand the need for the morph-simlex evaluation set. It seems a bit suspect to create a dataset using the same algorithm that you ultimately aim to evaluate. It seems to me a no-brainer that your model will do well on a dataset that was constructed by making the same assumptions the model makes. I don't think you need to include this dataset at all, since it is a potentially erroneous evaluation that can cause confusion, and your results are convincing enough on the standard datasets. 2) I really liked the morph-fix baseline, thank you for including that. I would have liked to see a baseline based on character embeddings, since this seems to be the most fashionable way, currently, to side-step dealing with morphological variation. You mentioned it in the related work, but it would be better to actually compare against it empirically. 3) Ideally, we would have a vector space where morphological variants are just close together, but where we can assign specific semantics to the different inflections. Do you have any evidence that the geometry of the space you end up with is meaningful? E.g., does "looking" - "look" + "walk" = "walking"? It would be nice to have some analysis that suggests that morph-fitting results in a more meaningful space, not just better embeddings.
- fairly straightforward extension of existing retrofitting work - would be nice to see some additional baselines (e.g. character embeddings) -
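The geometric question in point 3 of the review above ("does \"looking\" - \"look\" + \"walk\" = \"walking\"?") is easy to probe with a standard offset-analogy check. A minimal sketch is below; `embeddings` is assumed to be a dict mapping words to NumPy vectors, and the function is illustrative rather than part of the paper's code.

```python
import numpy as np

def analogy(embeddings, a, b, c, topk=5):
    # Rank words by cosine similarity to vec(b) - vec(a) + vec(c),
    # excluding the three query words themselves.
    query = embeddings[b] - embeddings[a] + embeddings[c]
    query = query / np.linalg.norm(query)
    words = [w for w in embeddings if w not in {a, b, c}]
    mat = np.stack([embeddings[w] / np.linalg.norm(embeddings[w]) for w in words])
    scores = mat @ query
    top = np.argsort(-scores)[:topk]
    return [(words[i], float(scores[i])) for i in top]

# If inflection is encoded as a consistent offset, something like
# analogy(embeddings, "look", "looking", "walk") should rank "walking" near the top.
```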
RnYd44LR2v
ICLR_2024
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and so the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distribution shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks. - A central aspect of evaluating adversarial robustness is the attacks used to measure it. In the paper, this is described in sufficient detail only in the appendix. In particular, for the non-$\ell_p$ threat models I think it would be important to discuss the strength (e.g. number of iterations) of the attacks used, since these are not widely explored in prior works. [A] https://arxiv.org/abs/1908.08016 [B] https://arxiv.org/abs/2105.12508
- Similar analyses are already present in prior works, although on a (sometimes much) smaller scale, and so the results are not particularly surprising. For example, the robustness of CIFAR-10 models on distribution shifts (CIFAR-10.1, CINIC-10, CIFAR-10-C, which are also included in this work) was studied on the initial classifiers in RobustBench (see [Croce et al. (2021)](https://arxiv.org/abs/2010.09670)), showing a similar linear correlation with ID robustness. Moreover, [A, B] have also evaluated the robustness of adversarially trained models to unseen attacks.
NIPS_2020_1720
NIPS_2020
- In L104-112, several prior works are listed. I understand that the task the authors tackle is predicting a full mesh, but why is the proposed method better than [21] or [6]? What makes the proposed approach better than previous methods? From the experiments, the performance difference is clear. However, I am missing the core insights/motivations behind the approach. - In L230, it is indicated that "we allow it (3D pose regressor) to output M possible predictions and, out of this set, we select the one that minimizes the MPJPE/RE metric". Comparison here seems a bit unfair. Instead of using oracle poses, the authors could compute the MPJPE/RE for all of the M (or maybe n out of M) poses, then report the median error. - It is not clearly indicated whether the curated AH36M dataset is used for training. If so, did other methods, e.g., HMR and SPIN, have access to AH36M data during training for a fair comparison? - There is no promise to release the code and the data. Even though the method is explained clearly, a standard implementation would be quite helpful for the research community. - There is no failure cases/limitations section. It would be insightful to include such information for researchers who would like to build on this work.
- It is not clearly indicated whether the curated AH36M dataset is used for training. If so, did other methods, e.g., HMR and SPIN, have access to AH36M data during training for a fair comparison?
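The evaluation concern in the second bullet of the review above (oracle selection among M hypotheses versus reporting a median) boils down to a small change in how per-example errors are aggregated. A minimal sketch, assuming root-aligned 3D joints and illustrative function names:

```python
import numpy as np

def mpjpe(pred, gt):
    # Mean per-joint position error for one pose: pred and gt have shape (J, 3).
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def oracle_vs_median(hypotheses, gt):
    # hypotheses: (M, J, 3) candidate poses for one example.
    errors = np.array([mpjpe(h, gt) for h in hypotheses])
    # Oracle selection (what the quoted protocol does) versus the reviewer's
    # suggestion of reporting the median over the M hypotheses.
    return errors.min(), float(np.median(errors))
```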
bpArUWbkUF
EMNLP_2023
- There are some minor issues with the paper, but still no strong reasons to reject it: - I found that the creation of the dataset is optional. The Kialo dataset, well-studied in the community, provides exactly what the authors need, pairs of short claims and their counters. It is even cleaner than the dataset the authors created since no automatic processes exist to construct it. Still, what has been created in this paper can be extra data to learn from. - The related work, especially regarding counter-argument generation, was only briefly laid out, with little elaboration on how previous works addressed that task and the implications. - In the abstract, the authors claim they trained Arg-Judge with human preferences. However, looking at the details of the data that the model is trained on, it turns out that the data is automatically created and does not precisely reflect human preferences. - The procedure of creating the seed instructions, expanding them, and mapping them to inputs needs to be clarified. Providing examples here would be very helpful.
- I found that the creation of the dataset is optional. The Kialo dataset, well-studied in the community, provides exactly what the authors need, pairs of short claims and their counters. It is even cleaner than the dataset the authors created since no automatic processes exist to construct it. Still, what has been created in this paper can be extra data to learn from.
ICLR_2021_2527
ICLR_2021
Duplicate task settings. The proposed new task, cross-supervised object detection, is almost the same as the task defined in (Hoffman et al. 2014, Tang et al. 2016, Uijlings et al. 2018). These previous works study the task of training object detectors on the combination of base class images with instance-level annotations and novel class images with only image-level annotations. The work (Uijlings et al. 2018) also conducts experiments on COCO, which contains multiple objects per image. In addition, the work (Khandelwal et al. 2020) unifies the setting of training object detectors on the combination of fully-labeled data and weakly-labeled data, and conducts experiments on the multi-object datasets PASCAL VOC and COCO. The task proposed by this paper could be treated as a special case of the task studied in (Khandelwal et al. 2020). We should avoid duplicate task settings. Limited novelty. The novelty of the proposed method is limited. Combining a recognition head and a detection head is not new in weakly supervised object detection. The weakly supervised object detection networks (Yang et al. 2019, Zeng et al. 2019) also generate pseudo instance-level annotations from the recognition head to train the detection head (i.e., a head with bounding box classification and regression) for weakly-labeled data. Review summary: In summary, I would like to give a rejection to this paper due to the duplicate task settings and limited novelty. Khandelwal et al., Weakly-supervised Any-shot Object Detection, 2020 ---------- Post rebuttal ---------- After discussions with the authors and reading other reviews, I acknowledge the contribution that this paper advances the performance of cross-supervised object detection. However, I would like to keep my original reject score. The reasons are as follows. Extending datasets from PASCAL VOC to COCO is not a significant change compared to previous tasks. The general object detection papers also evaluated on PASCAL VOC only about five years ago and now evaluate mainly on COCO. With the development of computer vision techniques, it is natural to try more challenging datasets. So although this paper claims to focus on more challenging datasets, there is no significant difference between the tasks studied in previous works like [a] and this paper. In addition, apart from ImageNet, the work [b] also evaluates their method on the Open Images dataset, which is even larger and more challenging than COCO. The difference between the tasks studied in [b] and this paper is only that [b] adds a constraint that weakly-labeled classes have semantic correlations with fully-labeled classes and this paper doesn't. This difference is also minor. Therefore, the task itself cannot be one of the main contributions of this paper (especially the most important contribution of this paper). I would like to suggest the authors change their title / introduction / main paper by 1) giving lower weights to the task parts 2) giving higher weights to intuitions of why previous works fail on challenging datasets like COCO and motivations of the proposed method. [a] YOLO9000: Better, Faster, Stronger, In CVPR, 2017 [b] Detecting 11K Classes: Large Scale Object Detection without Fine-Grained Bounding Boxes, In ICCV, 2019
2) giving higher weights to intuitions of why previous works fail on challenging datasets like COCO and motivations of the proposed method. [a] YOLO9000: Better, Faster, Stronger, In CVPR, 2017 [b] Detecting 11K Classes: Large Scale Object Detection without Fine-Grained Bounding Boxes, In ICCV, 2019