paper_id (stringlengths 10-19) | venue (stringclasses, 15 values) | focused_review (stringlengths 7-10.2k) | point (stringlengths 47-690)
---|---|---|---
tsbdcgaCtk | ICLR_2024 | 1. Generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering: if some disturbances are made to the sentences in the training data, will the proposed model generate the correct quality label (i.e., show that the quality goes down)?
2. According to Fig. 1, the prediction of quality labels is not good at all. The model does not seem able to discriminate candidates of different quality.
3. Using QE labels as generation labels seems to be an interesting idea. Could you please give some examples of the same source sentence translated with different QE labels? It would be nice to see the effect demonstrated.
4. I am not quite sure how large the quality difference is between two translations that differ by 1 point in MetricX or COMET score. It would be better to give some examples showing how the translation quality is indeed improved. | 1. Generating a quality label does not necessarily mean that the model has the ability to predict it. I am wondering: if some disturbances are made to the sentences in the training data, will the proposed model generate the correct quality label (i.e., show that the quality goes down)? |
NIPS_2017_74 | NIPS_2017 | - Theorem 2, whose presentation is problematic and which does not really provide any convergence guarantee.
- All the linear convergence rates rely on Theorem 8, which is buried at the end of the appendix and whose proof is not clear enough.
- Lower bounds on the number of good steps of each algorithm that are not really proved, since they rely on an argument of the type "it works the same as in another close setting".
The numerical experiments are numerous and convincing, but I think that the authors should provide empirical evidence showing that the computational costs are of the same order of magnitude as those of competing methods for the experiments they carried out.
%%%% Details on the main comments
%% Theorem 2
The presentation and statement of Theorem 2 (and all the sublinear rates given in the paper) have the following form:
- Given a fixed horizon T
- Consider rho, a bound on the iterates x_0 ... x_T
- Then for all t > 0 the suboptimality is of the order of c / t where c depends on rho.
First, the proof cannot hold for all t > 0 but only for 0 < t <= T. Indeed, in the proof, equation (16) relies on the fact that the rho bound holds for x_t, which is only ensured for t <= T.
Second, the numerator actually contains rho^2. When T increases, rho could increase as well, and the given bound does not even need to approach 0.
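Schematically (writing $x^\star$ for a minimizer and $C$ for a generic constant that I am assuming absorbs everything except the bound $\rho_T$ on the iterates $x_0, \dots, x_T$), the stated guarantee has the shape
$$f(x_t) - f(x^\star) \;\le\; \frac{C\,\rho_T^{\,2}}{t}, \qquad 0 < t \le T,$$
so if $\rho_T$ grows with the horizon, the right-hand side at $t = T$ need not tend to zero.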
This presentation is problematic. One possible way to fix this would be to provide a priori conditions (such as coercivity) which ensure that the sequence of iterates remains in a compact set, allowing one to define an upper bound independently of the horizon T.
In the proof I did not understand the sentence "The reason being that f is convex, therefore, for t > 0 we have f(x_t) <= f(0)."
%% Lemma 7 and Theorem 8
I could not understand Lemma 7.
The equation is given without any comment and I cannot understand its meaning without further explanation. Is this equation defining K'? Or is it the case that K' can be chosen to satisfy this equation? Does it have any other meaning?
Lemma 7 deals only with g-faces which are polytopes. Is it always the case? What happens if K is not a polytope? Can this be done without loss of generality? Is it just a typo?
Theorem 8:
The presentation is problematic. In Lemma 7, r is not a feasible direction. In Theorem 8, it is the gradient of f at x_t. Theorem 8 says "using the notation from Lemma 7". The proof of Theorem 8 says "if r is a feasible direction". All this makes the work of the reader very hard.
Notations of Lemma 7 are not properly used:
- What is e? e is not fixed by Lemma 7, it is just a variable defining a maximum. This is a recurrent mistake in the proofs.
- What is K? K is supposed to be given in Lemma 7 but not in Theorem 8.
- Polytope?
All this could be more explicit.
"As x is not optimal by convexity we have that < r , e > > 0". Where is it assumed that $x$ is not optimal? How does this translate in the proposed inequality?
What does the following mean?
"We then project r on the faces of cone(A) containing x until it is a feasible direction"
Do the author project on an intersection of faces or alternatively on each face or something else?
It would be more appropriate to say "the projection is a feasible direction" since r is fixed to be the gradient of f. It is very uncomfortable to have the value of r changing within the proof in an "algorithmic fashion" and makes it very hard to check accuracy of the arguments.
In any case, I suspect that the resulting r could be 0 in which case the next equation does not make sense. What prevents the resulting r from being null?
In the next sentences, the authors use Lemma 7, which assumes that r is not a feasible direction. This contradicts the preceding paragraph. At this point I was completely confused and lost hope of understanding the details of this proof.
What is r' on line 723 and in the preceding equation?
I understand that there is a kind of recursive process in the proof. Why should the last sentence be true?
%% Further comments
Line 220, max should be argmax
I did not really understand the non-negative matrix factorization experiment. Since the resulting approximation is of rank 10, does it mean that the authors ran their algorithm for 10 steps only? | - All the linear convergence rates rely on Theorem 8, which is buried at the end of the appendix and whose proof is not clear enough. |
NIPS_2019_933 | NIPS_2019 | + I liked the simplicity of the solution to divide the problem into star graphs. The domination number introduced seems to be a natural quantity for this problem. +/- To my opinion, the setting seems somewhat contrived combining feedback graphs and switching costs. The application to policy regret with counterfactual however provides a convincing example that the analysis can be useful and inspire future work. +/- The main part of the paper is rather clear and well written. Yet, I found the proofs in the appendices sometimes a bit hard to follow with sequences of unexplained equations. I would suggest to had some details. - There is a gap between the lower bound and the upper-bound (\sqrt(\beta) instead of \beta^{1/3}). In particular, for some graphs, the existing bound with the independence number may be better. This is also true for the results on the adaptive adversary and the counterfactual feedback. Other remarks: - Was the domination number already introduced for feedback graphs without switching costs? If yes, existing results for this problem should be cited. If not, it would be interesting to state what kind of results your analysis would provide without using the mini-batches. - Note that the length of the mini-batches tau_t may be non-integers. This should be clarified to be sure there are no side effects. For instance, what happens if $\tau_t << 1$? I am not sure if the analysis is still valid. - A better (more formal) definition of the independence and the domination numbers should be provided. It took me some time to understand their meaning. - Alg 1 and Thm 3.1: Since only upper-bounds on the pseudo-regret are provided, the exploration parameter gamma seems to be useless, isn't it? The choice gamma=0 seems to be optimal. A remark on high-probability upper-bounds and the role of gamma might be interesting. In particular, do you think your analysis (which is heavily based on expectations) can be extended to high-probability bounds on the regret? - I understand that this does not suit the analysis (which uses the equivalence in expectation btw Alg1 and Alg6) but it seems to be suboptimal (at least in practice) to discard all the feedbacks obtained while playing non-revealing actions. It would be nice to have practical experiments to understand better if we lose something here. It would be also nice to compare it with existing algorithms. Typos: - p2, l86: too many )) - Thm 3.1: A constant 2 in the number of switches is missing. - p13, l457: some notations seem to be undefined (w_t, W_t). - p14, you may add a remark - p15, l458: the number of switches can be upper-bounded by **twice** the number of times the revealing action is played - p16, l514: I did not understand why Thm 3.1 implies the condition of Thm C.5 with alpha=1/2 and not 1. By the way, (rho_t) should be non-decreasing for this condition to hold. | - There is a gap between the lower bound and the upper-bound (\sqrt(\beta) instead of \beta^{1/3}). In particular, for some graphs, the existing bound with the independence number may be better. This is also true for the results on the adaptive adversary and the counterfactual feedback. Other remarks: |
NIPS_2016_241 | NIPS_2016 | /challenges of this approach. For instance... - The paper does not discuss runtime, but I assume that the VIN module adds a *lot* of computational expense. - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. - The results are only reported after a bunch of training has occurred, but in RL we are often also interested in how the agent behaves *while* learning. I presume that early in training the model parameters are essentially garbage and the planning component of the network might actually *hurt* more than it helps. This is pure speculation, but I wonder if the CNN is able to perform reasonably well with less data. - I wonder whether more could be said about when this approach is likely to be most effective. The navigation domains all have a similar property where the *dynamics* follow relatively simple, locally comprehensible rules, and the state is only complicated due to the combinatorial number of arrangements of those local dynamics. WebNav is less clear, but then the benefit of this approach is also more modest. In what kinds of problems would this approach be inappropriate to apply? ---Clarity--- I found the paper to be clear and highly readable. I thought it did a good job of motivating the approach and also clearly explaining the work at both a high level and a technical level. I thought the results presented in the main text were sufficient to make the paper's case, and the additional details and results presented in the supplementary materials were a good compliment. This is a small point, but as a reader I personally don't like the supplementary appendix to be an entire long version of the paper; it makes it harder to simply flip to the information I want to look up. I would suggest simply taking the appendices from that document and putting them up on their own. ---Summary of Review--- I think this paper presents a clever, thought-provoking idea that has the potential for practical impact. I think it would be of significant interest to a substantial portion of the NIPS audience and I recommend that it be accepted. | - Though f_R and f_P can be adapted over time, the experiments performed here did incorporate a great deal of domain knowledge into their structure. A less informed f_R/f_P might require an impractical amount of data to learn. |
ICLR_2022_985 | ICLR_2022 | • More motivation and derivations of the different Fisher scores $U_x$ (for GANs, VAEs, supervised) would be beneficial for better understanding the paper.
• More discussions on the non-diagonal version of the Fisher information matrix models (see comments below) would be beneficial.
• Discussion on the dependence of the quality of the NFK embedding on the quality of the pre-trained neural network.
• Discussion on the choice of the low-dimensional embedding dimensionality $k$.
General comments:
• Using the diagonal of the Fisher information matrix (FIM) seems desirable for computational reasons; however, a natural question is what happens if one tries to use the full matrix. Given the size of the parameters $\theta$ in a neural network, estimating the whole matrix would indeed be extremely computationally expensive, but by discarding the off-diagonal entries, one loses significant information. Could the authors comment on that? Is the diag of FIM related to other known concepts in statistics? Does using only the diagonal imply that the off-diagonal elements are zero, meaning the parameters are orthogonal? How does this affect the results and interpretation?
• Could the authors give the derivation of $U_x$ in eq (1) for GANs (I see part of the derivation is in the paper by Zhai et al, 2019)? For VAEs and the supervised case, the FIM $I$ is the inner product using $U_x$, but for GANs it acts in the output space of the generator. Why is this the case (some explanations are given in the original paper, but it would be helpful to discuss this a bit more here)? The derivation of eq (4) would also be useful.
• Low-rank structure of NFK and Alg 1: How does one choose the feature dimensionality $k$? Many methods that rely on kernels and manifold learning make the assumption of low dimensionality/low-rankness and show that a small number of eigenfunctions is sufficient to reconstruct the input data. How is this different for NFK? The way I understand "low rank" is that the data has its own rank, which is low, and could potentially be learned. However, here the authors input the dimensionality/rank $k$, which might or might not be close to the true rank in real applications.
• How does this work relate to the work of Belkin et al (2018) – “To understand deep learning we need to understand kernel learning”, where the authors look at other kernels (Laplacian and Gaussian)?
• Could the approach be used for neural networks that are not pre-trained, as the neural tangent kernel NTK?
• The experimental results are nice, however the focus on computation is not so relevant given that it only uses the diagonal of the Fisher information matrix. Comparisons using the whole matrix would also be needed. What error is used in Table 3 (MSE, MAE, RMSE)? The goal of the paper is to present a method for supervised and unsupervised settings, however in the results an example on semi-supervised is also presented. I wonder if the examples on semi-supervised and knowledge distillation could leave room to improve the supervised and unsupervised settings discussions, and potentially be moved to the Appendix?
Other comments:
• Please update reference (Jaakkola et al): year, conference, also there should be no “et al” there are only two authors
• Doesn’t, don’t, won’t, it’s , etc -> does not, do not, would not, it is
• Both the concepts of “data representation” and “feature representation” are used. Do they always refer to the same thing? If yes, would be good to specify that.
• Expression of $K_{\mathrm{fisher}}$ => the second $U$ should have subscript $z$, not $x$?
• "FIM defined as the variance of the score …" -> the FIM is defined between all pairs of parameters $\theta_i$ and $\theta_j$, so it should be a covariance? (See the display below.)
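For reference, under the usual regularity conditions the Fisher information is exactly the covariance of the score (the score has zero mean), which is what the comment above is pointing at:
$$I(\theta)_{ij} \;=\; \mathbb{E}_{x\sim p_\theta}\!\left[\partial_{\theta_i}\log p_\theta(x)\,\partial_{\theta_j}\log p_\theta(x)\right] \;=\; \operatorname{Cov}\!\left(\partial_{\theta_i}\log p_\theta(x),\,\partial_{\theta_j}\log p_\theta(x)\right),$$
with the diagonal approximation keeping only the $i = j$ entries.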
• Appendix Fig 3: Not sure I fully understand this example. Could one try the reconstruction of the digits using a simple method, such as PCA using the first 100 principal components as a baseline?
• Not familiar with the “Fisher vector” terminology, except in image classification and the “Adversarial Fisher vector” from Zhai et al, 2019. Are there other references? | • Could the approach be used for neural networks that are not pre-trained, as the neural tangent kernel NTK? |
NIPS_2022_2138 | NIPS_2022 | Weakness:
1 The main contribution of this paper is about the software, but the theoretical contribution is overstated. The proof of the theorem is quite standard and I do not get any new insight from it.
2 Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach.
3 Recently, implicit deep learning has attracted much attention and is very relevant to the topic of this paper. An implementation example of implicit deep neural networks should be included. Moreover, many Jacobian-free methods, e.g., [1-3], have been proposed to reduce the computational cost. Comparisons (runtime and accuracy) with these methods are preferred.
[1] Fung, Samy Wu, et al. "Fixed point networks: Implicit depth models with Jacobian-free backprop." (2021).
[2] Geng, Zhengyang, et al. "On training implicit models." Advances in Neural Information Processing Systems 34 (2021): 24247-24260.
[3] Ramzi, Zaccharie, et al. "SHINE: SHaring the INverse Estimate from the forward pass for bi-level optimization and implicit models." arXiv preprint arXiv:2106.00553 (2021). | 2 Direct runtime comparisons with existing methods are missing. The proposed approach is based on implicit differentiation which usually requires additional computational costs. Thus, the direct runtime comparison is necessary to demonstrate the efficiency of the proposed approach. |
NIPS_2020_878 | NIPS_2020 | * The GCN-based predictor and experiments don't have open-sourced code (not mentioned in the main paper or supplement), however the authors do provide detailed descriptions. * Some correctness issues (see next section) * The paper presents 2 important NAS objectives: latency optimization and accuracy optimization. However, the BRP-NAS (section 4) seems out-of-place since the rest of the paper deals with latency prediction. It nearly feels like BRP-NAS could be a separate paper, or Section 3 was used only to suggest using GCN (in this case, why not directly start with accuracy prediction with GCN?). * The analysis on BRP-NAS is also somewhat barebones: it only compares against 3 basic alternatives and ignores some other NAS (e.g. super-net/one-shot approaches, etc...). * Unclear if code will be released, as the GCN implementation may be hard to reproduce without original code (though the author's descriptions are fairly detailed and there is more information in the supplement). | * The analysis on BRP-NAS is also somewhat barebones: it only compares against 3 basic alternatives and ignores some other NAS (e.g. super-net/one-shot approaches, etc...). |
R6sIi9Kbxv | ICLR_2025 | 1. The approach of decomposing video representation into spatial and temporal representation for efficient and effective spatio-temporal modelling is a general idea in video understanding. I'm not going to blame using this in large video language models, however, I think proper credit and literature reviews should be included. For example, TimeSFormer[1], Uniformer[2], Dual-AI[3] and others using transformer for video recognition.
2. Training specialized and efficient video QFormer has been explored and utilized by UMT [4] and VideoChat2 [5]. Please clarify the difference and include them in literature reviews.
2. Confusing attentive pooling module architecture. It seems the temporal representation $v_{t}$ is derived from spatial queries $Q_{s}$ attending to a set of frame features (with T as the batch dimension). It means the spatial queries can only attend to in-frame content. It is therefore doubtful why the representation is called a temporal representation.
3. Training data: what specific data are used for training? Please provide details of how many videos come from which dataset and how you sample them. This is critical for reproduction and for measuring the method's effectiveness.
4. Experiments - Comparison fairness: More latest methods should be included in comparison, especially those with similar motivations, e.g., VideoChat2.
5. Experiments - Image benchmark: As image dataset is used, it would be great to show the performance variance after such ST QFormer tuning. Also compared to normal QFormer.
6. Experiments - Video summarization: there are some new good benchmarks, like Shot2Story ranging from different topics and using only text and frames modalities. This is not mandatory, but it should be good to include.
7. Experiments - Ablation - missing components: There should be experiments and explanation regarding the different queries used in spatio-temporal representation, i.e., spatial, temporal and summary. That is the key difference to VideoChatGPT and other works. What if only have spatial one, or temporal and summary one?
8. Experiments - Ablation - metric: for abaltions, I suggest to use QA benchmarks for experiments rather than captioning benchmark. When things come to LLM, the current captioning metrics such as B@4 and CIDEr might not be ideal to reflect model ability.
[1] Is Space-Time Attention All You Need for Video Understanding?
[2] UniFormer: Unified Transformer for Efficient Spatiotemporal Representation Learning
[3] Dual-AI: Dual-Path Actor Interaction Learning for Group Activity Recognition
[4] MVBench | 7. Experiments - Ablation - missing components: There should be experiments and explanation regarding the different queries used in spatio-temporal representation, i.e., spatial, temporal and summary. That is the key difference to VideoChatGPT and other works. What if only have spatial one, or temporal and summary one? |
ICLR_2022_1794 | ICLR_2022 | 1 Medical images are often obtained as 3D volumes, not only 2D images. So the experiments should include 3D volume data as well for the general community, rather than being all on 2D images. And lesion detection is another important task for the medical community, which has not been studied in this work.
2 More analysis and comments are recommended on the performance trend of increasing the number of parameters for ViT (DeiT) in Figure 3. I disagree with the authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In Figure 3, the DeiT-B model does not outperform DeiT-T on APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019 and CheXpert (0.1% won't be significant). However, CNNs give almost consistent improvements as the capacity goes up, except on ISIC2019.
3 On the segmentation mask involved with cancer on CSAW-S, the segmentation results of DEEPLAB3-DEIT-S cannot be concluded to be better than DEEPLAB3-RESNET50. The implication that ViTs outperform CNNs in this segmentation task cannot be validly drawn from a 0.2% difference with larger variance.
Questions: 1 For the grid search of learning rate, is it done on the validation set?
Minor problems: 1 The n number for the Camelyon dataset in Table 1 is not consistent with the descriptions in the text on Page 4. | 2 More analysis and comments are recommended on the performance trend of increasing the number of parameters for ViT (DeiT) in Figure 3. I disagree with the authors' viewpoint that "Both CNNs and ViTs seem to benefit similarly from increased model capacity". In Figure 3, the DeiT-B model does not outperform DeiT-T on APTOS2019, and it does not outperform DeiT-S on APTOS2019, ISIC2019 and CheXpert (0.1% won't be significant). However, CNNs give almost consistent improvements as the capacity goes up, except on ISIC2019. |
NIPS_2018_756 | NIPS_2018 | It looks complicated to assess the practical impact of the paper. On the one hand, the thermodynamic limit and the Gaussianity assumption may be hard to check in practice and it is not straightforward to extrapolate what happens in the finite dimensional case. The idea of identifying the problem's phase transitions is conceptually clear but it is not explicitly specified in the paper how this can help the practitioner. The paper only compares the AMP approach to alternate least squares without mention, for example, positive results obtained in the spectral method literature. Finally, it is not easy to understand if the obtained results only regard the AMP method or generalize to any inference method. Questions: - Is the analysis restricted to the AMP inference? In other words, could a tensor that is hard to infer via AMP approach be easily identifiable by other methods (or the other way round)? - Are the easy-hard-impossible phases be related with conditions on the rank of the tensor? - In the introduction the authors mention the fact that tensor decomposition is in general harder in the symmetric than in the non-symmetric case. How is this connected with recent findings about the `nice' landscape of the objective function associated with the decomposition of symmetric (orthogonal) order-4 tensors [1]? - The Gaussian assumption looks crucial for the analysis and seems to be guaranteed in the limit r << N. Is this a typical situation in practice? Is always possible to compute the `effective' variance for non-gaussian outputs? Is there a finite-N expansion that characterize the departure from Gaussianity in the non-ideal case? - For the themodynamic limit to hold, should one require N_alpha / N = O(1) for all alpha? - Given an observed tensor, is it possible to determine the particular phase it belongs to? [1] Rong Ge and Tengyu Ma, 2017, On the Optimization Landscape of Tensor Decompositions | - In the introduction the authors mention the fact that tensor decomposition is in general harder in the symmetric than in the non-symmetric case. How is this connected with recent findings about the `nice' landscape of the objective function associated with the decomposition of symmetric (orthogonal) order-4 tensors [1]? |
viNQSOadLg | ICLR_2024 | * Lack of Training Details: The paper lacks sufficient information regarding the training process of the policy. It should provide more details on the training data used, the methodology for updating parameters, and the specific hyperparameters employed in the process.
* Unclear Literature Review: The literature review in the paper needs improvement. It is not adequately clear what the main contribution of the proposed method is, and how it distinguishes itself from existing work, particularly in relation to the utilization of GFlowNet for sequence generation. The paper should provide a more explicit and comparative analysis of related work.
* Ambiguity in Key Innovation: The claim that GFNSeqEditor can produce novel sequences with improved properties lacks clarity regarding the key innovation driving these contributions. The paper should better articulate what novel techniques or insights lead to the claimed improvements, thereby enhancing the reader's understanding of the method's unique value. | * Unclear Literature Review: The literature review in the paper needs improvement. It is not adequately clear what the main contribution of the proposed method is, and how it distinguishes itself from existing work, particularly in relation to the utilization of GFlowNet for sequence generation. The paper should provide a more explicit and comparative analysis of related work. |
NIPS_2016_133 | NIPS_2016 | --- The clarity of the main parts has clearly improved compared to the last version I saw as an ICML reviewer. Generally, it seems natural to investigate the direction of how causal models can help for autonomous agents. The authors present an interesting proposal for how this can be done in case of simple bandits, delviering both, scenarios, algorithms, mathematical analysis and experimental analysis. However, the contribution also has strong limitations. The experiments, which are only on synthetic data, seem too show that their Algorithms 1 and 2 outperform what they consider as baseline ("Successive Rejects") in most cases. But how strong of a result is that in the light of the baseline not using the extensive causal knowledge? In the supplement, I only read the proof of Theorem 1: the authors seem to know what they are doing but they spend to little time on making clear the non-trivial implications, while spending too much time on trivial reformulations (see more detailed comments below). In Section 4, assuming to know the conditional for the objective variable Y seems pretty restrictive (although, in principle, the results seems to be generalizable beyond this assumption), limiting the usefulness of this section. Generally, the significance and potential impact of the contribution now strongly hinges on whether (1) it (approximately) makes sense that the agent can in fact intervene and (2) it is a realistic scenario that strong prior knowledge in form of the causal DAG (plus conditionals P(PA_Y|a) in the case of Algorithm 2) are given. I'm not sure about that, since this is a problem the complete framework proposed by Pearl and others is facing, but I think it is important to try and find out! (And at least informal causal thinking and communication seem ubiquitously helpful in life.) Further comments: --- Several notes on the proof of Theorem 1 in the supplement: - Typo: l432: a instead of A - The statement in l432 of the supplement only holds with probability at least 1 - \delta, right? - Why don't you use equation numbers and just say which equation follows from which other equations? - How does the ineq. after l433 follow from Lemma 7? It seems to follow somehow from a combination of the previous inequalities, but please facilitate the reading a bit by stating how Lemma 7 comes into play here. - The authors seem to know what they are doing, and the rough idea intuitively makes sense, but for the (non-expert in concentration inequalities) reader it is too difficult to follow the proof, this needs to be improved. Generally, as already mentioned, the following is unclear to me: when does a physical agent actually intervene on a variable? Usually, it has only perfect control over its own control output - any other variable "in the environment" can just be indirectly controlled, and so the robot can never be sure if it actually intervenes (in particular, outcomes of apparent interventions can be confounded by the control signal through unobserved paths). For me it's hard to tell if this problem (already mentioned by Norbert Wiener and others, by the way) is a minor issue or a major issue. Maybe one has to see the ideal notion of intervention as just an approximation to what happens in reality. | - Why don't you use equation numbers and just say which equation follows from which other equations? |
ICLR_2022_2754 | ICLR_2022 | I feel the motivation of the work is confusing. I can understand the authors want to improve CQL somehow further. But it is never made clear:
what the existing problems are and why they matter; is it that the lower bound in the existing CQL is too loose? Why is improving the bound important?
what is the effect you want to achieve? Is it an offline algorithm that can learn a better policy from data generated by a poor behavior policy?
The contribution is incremental, and I doubt the significance. As the paper cited in section 2.2, Kumar et al. (2020) propose to penalize the actions not described by the dataset, which enables a general definition of \mu. Note that the additional weighting scheme can be essentially thought of as a new type of \mu. I don’t see a clear difference between (2) and (3). Both can be considered as the specially designed action sampling distribution.
Why is theorem 1 useful? If I understand correctly, the key thing you want to say is the additional weighting can provide a tighter bound for OOD state-action pairs or those not close to the dataset? But the first step should be figuring out the effects of having a tight/loose bound. Does it hurt optimality/convergence rate/generalization…? Even partially answering this question can better motivate the reweighting approach. I believe the proof of the theorem is a simple modification from the existing CQL work.
The choice of the weighting scheme lacks justification. An intuitive choice is the RBF function. Furthermore, according to theorem 1, the proposed weighting is useful only when the action is OOD, i.e., the weight is 1; when the action is ID, weight should be 0, but your weighting scheme does not give zero?
Section 4.1, the proposed method comes out suddenly. Is there any reason to choose normalizing flows? Of course, normalizing flows is a good method enabling both efficient sampling and density evaluation. In your algorithm, you only need to evaluate the density but not to sample. There should be plenty of other choices. When testing ideas, it is more natural to start with some simple methods.
What is the purpose of section 4.3?
I expect to see how various concrete choices of the weighting scheme can affect the distance (e.g., KL divergence) between the learned policy and the behavior policy.
The experiments. 1. more random seeds should be tested (figure 1) — it is hard to distinguish algorithms from the current learning curves. Readers cannot see a clear message from them. 2. I expect more baselines to be compared and more domains to be tested. As I mentioned, the choices of the weighting and the way of learning density functions are not strongly motivated. In this case, I have to ask for stronger empirical results: baselines with other design choices and more domains. 3. The experiments in Fig 2 are incomplete. Why are there no experiments for half cheetah and walker with expert data? 4. Please provide reproducing details.
The abstract says, “… with a strong theoretical guarantee.” I don’t think there is any strong theory in the paper.
Page 3. Last paragraph. The criticism towards using empirical dataset distribution for \hat{pi}_\beta does not make sense to me. When the state/action is continuous, the empirical estimation should be kernel density estimation, which is a consistent estimator for estimating empirical distributions. The kernel can be chosen as smooth, so the KDE should have support everywhere. Minor:
there is a nontrivial number of grammar issues/typos. Please double-check.
Many sentences are confusing or logically disconnected.
e.g., in the abstract, “A compromise between enhancing …. To alleviate this issue, … ” what issue?
“Improving the learning process.” In what sense? Higher sample efficiency?
“Indeed, the lack of information … ” what information? Why does it provoke overestimation?
I believe saying “based on the amount of information collected” is inaccurate because the paper does not really introduce any information measurement. | 2. I expect more baselines to be compared and more domains to be tested. As I mentioned, the choices of the weighting and the way of learning density functions are not strongly motivated. In this case, I have to ask for stronger empirical results: baselines with other design choices and more domains. |
NIPS_2021_998 | NIPS_2021 | • Unprofessional writing. - Most starkly, “policies” is misspelled in the title. • At times, information is not given in an easy-to-understand way. - E.G. lines 147 - 152, 284 - 289. • Captions of figures do not help elucidate what is going on in the figure. This problem is mitigated by the quality of the figures, but it still makes it much harder to understand the pipeline of LCP and its components. More emphasis on that pipeline would help with the understanding. • 100 nodes seem like a small maximum test size for TSP problems (though this is an educated guess). Many real-world problems have thousands or tens of thousands of nodes. • Increase in optimality is either not very significant, or not presented to highlight its significance. It would be better to put the improvement into perspective. • Blank spaces in table 1 are unclear. Opportunities:
• It would be good to describe why certain choices were made. For example, why is the REINFORCE algorithm used for training versus something like PPO? I presume it has to do with the attention model paper this one iterates on, but clarification would be good.
• More real-world uses of the algorithm could be included to better understand the societal impact, including details on how LCP could be integrated well.
The paper lacks a high degree of polish and professionalism, but its formatting (e.g. bolded inline subsubsections) and figures are its saving grace. The tables are also well structured, if a bit cluttered --- values are small and bolding is indistinct. This paper does a good job of giving this information and promises open source-code on publication.
Overall, the paper and its presentation have several problems, but the idea seems elegant and useful.
Yes. The authors have adequately addressed the limitations and potential negative societal impact of their work | • It would be good to describe why certain choices were made. For example, why is the REINFORCE algorithm used for training versus something like PPO? I presume it has to do with the attention model paper this one iterates on, but clarification would be good. |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | --- This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance: |
Ugs2W5XFFo | ICLR_2025 | 1. Note that the paper mainly focuses on SD-based (SD 2.1, SDXL) models. These models are mostly of the same style, e.g., similar network structures and traditional denoising training strategies. Is there any possibility that MI tuning could be incorporated with flow-based models like DiT-based models (SD3, the Pixart series, or so)? And it is interesting to see whether the proposed MI tuning behaves differently with different types of models.
2. The evaluations on MI mainly focus on only simple semantic concepts like color, shape and texture. Is MI-tuning sensitive to object numbers or so?
3. The paper fixes the denoising steps to 50 when inferencing an image; are there any differences in the performance of MI-tuning when using steps other than 50?
4. In the quantitative analysis of Sect. 3.1, the paper mentions that the point-wise MI ranks images and selects the 1st, 25th and 50th as the representative images. Why are these three images representative? This needs more detailed explanation. Also, the reason for the selection needs quantitative analysis.
5. Some of the ablations mentioned in previous sections are hard to locate in the following contents; the writing can be improved in this part. | 5. Some of the ablations mentioned in previous sections are hard to locate in the following contents; the writing can be improved in this part. |
ICLR_2023_4659 | ICLR_2023 | Weakness: 1. It would make this paper stronger if the authors can show some adversarial robustness of some SOTA defended recognition models on the new set. 2. I would like to see more clear details of how to use DALL-E2 or stable diffusion models to generate hard examples, e.g., how to design prompts and how to filter out some unrealistic images. 3. The new dataset has some great properties. However, how to make it more scalable to real applications besides evaluating the current models trained on a public dataset? What if we have some new classes in our task, but it is not included in this set? 4. It is still unclear how to make the new proposed evaluation set more diverse and representative than the previous method and how to select those representative images. | 4. It is still unclear how to make the new proposed evaluation set more diverse and representative than the previous method and how to select those representative images. |
NIPS_2020_106 | NIPS_2020 | I also feel that the paper could have benefited from a discussion of these as compared to just outrightly saying that existing methods do not give us good results. In particular, the conditions under which existing methods work vs do not work should have been discussed more explicitly than what it is right now in the paper. Moreover, I think the experiments on cartpole and hopper are not indicative of their method's performance since these have determnisitc dynamics and the dataset was collected as trajectories (so s' is as frequent as s in the distribution \mu, see my point below) and hence their choice of masking reduces to action conditioned masking only. Some other questions that I have: - From the analysis perspective, the paper says that prior works such as Kumar et al. 2019 that use action conditional and concentrability do not get the same error rate. Is the main issue behind this limitation that the notion of concentrability used in Kumar et al. and other works is trajectory centric and not on the state-action marginal? The latter was shown to be better than trajectory-based concentrability in Chen and Jiang (2019). If this notion of concentrability is used, would that be sufficient to get rid of the concentrability assumptions in your work. - Why does the method help on Hopper, which has deterministic dynamics, so given (s, a), there is a unique s', and in this case, it simply reduces to action-conditional masking? Can it be evaluated on some other domains with non-deterministic dynamics to evaluate its empirical efficacy? Otherwise empirically it seems like the method doesn't seem to have much benefit. Why is BEAR missing from baselines? - It seems like when the batch of data is a set of trajectories, and the state space is continuous, then the density of s_t is the same as s_{t+1} which is 1/N, so in that case does the proposed algorithm which exploits the fact that s_{t+1} may be highly infrequent compared to s_{t} reduce simply to an action conditional? The experiments are done with this setup too it seems. - Building on the previous point, if the data comes from d^\mu: the state visitation distribution of a behavior policy \mu, then d^\mu(s) and d^\mu(s') for a transition (s, a, r, s') observed in the dataset shouldn't be very different, in that case, would the proposed method be not (much) better than action-conditional penalty methods that have mostly been studied in this domain? - How do you compare algorithms, theoretically, for different values of b, and the hyperparameters for other algorithms, such as \eps in BEAR? It seems like for some value of both of these quantities, the algorithms should perform safely and not have much error accumulation. So, how is the proposed method better theoretically than the best configuration of the prior methods? - How will the proposed method compare to residual gradient algorithms which have better guarantees? | - Why does the method help on Hopper, which has deterministic dynamics, so given (s, a), there is a unique s', and in this case, it simply reduces to action-conditional masking? Can it be evaluated on some other domains with non-deterministic dynamics to evaluate its empirical efficacy? Otherwise empirically it seems like the method doesn't seem to have much benefit. Why is BEAR missing from baselines? |
NIPS_2016_313 | NIPS_2016 | Weakness: 1. The proposed method consists of two major components: generative shape model and the word parsing model. It is unclear which component contributes to the performance gain. Since the proposed approach follows detection-parsing paradigm, it is better to evaluate on baseline detection or parsing techniques sperately to better support the claim. 2. Lacks in detail about the techniques and make it hard to reproduce the result. For example, it is unclear about the sparsification process since it is important to extract the landmark features for following steps. And how to generate the landmark on the edge? How to decide the number of landmark used? What kind of images features? What is the fixed radius with different scales? How to achieve shape invariance, etc. 3. The authors claim to achieve state-of-the-art results on challenging scene text recognition tasks, even outperforms the deep-learning based approaches, which is not convincing. As claimed, the performance majorly come from the first step which makes it reasonable to conduct comparisons experiments with existing detection methods. 4. It is time-consuming since the shape model is trained in pixel level(though sparsity by landmark) and the model is trained independently on all font images and characters. In addition, parsing model is a high-order factor graph with four types of factors. The processing efficiency of training and testing should be described and compared with existing work. 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images? | 5. For the shape model invariance study, evaluation on transformations of training images cannot fully prove the point. Are there any quantitative results on testing images? |
NIPS_2017_130 | NIPS_2017 | weakness)?
4.] Can the authors discuss the sensitivity of any fixed tuning parameters in the model (both strengths and weaknesses)?
5.] What is the scalability of the proposed model and its computational complexity? Will the authors be making the code publicly available with the data? Are all results reproducible using the code and data?
6.] What conclusion should a user learn and draw? The applications section was a bit disappointing given the motivation of the paper. A longer discussion is important to the impact and success of this paper. Please discuss. | 4.] Can the authors discuss the sensitivity of any fixed tuning parameters in the model (both strengths and weaknesses)? |
NIPS_2022_2513 | NIPS_2022 | Weakness:
The claim in Lines 175~176 is not validated; it would be valuable to see whether the proposed method can prevent potential classes from being incorrectly classified into historical classes.
In Tab. 1, for VOC 2-2 (10 tasks) and VOC 19-1 (2 tasks), MicroSeg gets inferior performance compared with SSUL; the reason should be explained. The same also appears in ADE 100-50 (2 tasks) in Tab. 2.
The proposed method adopts a proposal generator pretrained on MSCOCO, which aggregates more information. Is it fair to compare with other methods? Besides, could the proposed technique promote existing class-incremental semantic segmentation methods?
The authors adequately addressed the limitations and potential negative societal impact of their work. | 2. The proposed method adopts a proposal generator pretrained on MSCOCO, which aggregates more information. Is it fair to compare with other methods? Besides, could the proposed technique promote existing class-incremental semantic segmentation methods? The authors adequately addressed the limitations and potential negative societal impact of their work. |
z62Xc88jgF | ICLR_2024 | 1. Although the use of this type of loss in this setting might be new, this work does not prove any new theoretical results.
2. That being said, experiments are a very important component of this paper; however, I find the evaluation metric for the solution very interesting. More specifically, let $u$ be the output of the neural network and $u^*$ be the exact solution. The test error is usually computed using the relative $L^2$ norm (see, for example, [1][2]), i.e.
$$|| u - u^*||_2^2 / ||u^*||_2^2 = \int|u - u^*|^2dx / \int |u^*|^2 dx.$$
However, in Figure 4, when evaluating solutions, the mean error is computed using equation (15), the energy norm.
(i). Why not use the relative $L^2$ norm? How does the Astral loss perform if the evaluation is done in $L^2$?
(ii). The a posteriori error bound is in the energy norm, i.e.
$$L(u, w_L) \leq |||u-u^*||| \leq U(u, w_U).$$
so I would naturally expect Astral loss to achieve fairly small error in this energy norm, but this does not necessarily imply the solution is "better". Equations can be solved in different spaces. In fact, I think the space $L^2$ is more commonly used when people study existence and uniqueness of PDE solutions.
(iii). There could be a relation between the energy norm and $L^2$ norm. More explanation is needed for the specific choice of the evaluation metric since it differs from the previous literature.
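For context: if the energy norm in equation (15) is the usual one of a symmetric elliptic problem (for the model Poisson case, $|||v||| = \|\nabla v\|_{L^2(\Omega)}$) and the error $u - u^*$ satisfies the homogeneous boundary condition — assumptions I could not verify from the text — then Friedrichs' inequality gives
$$\|u - u^*\|_{L^2(\Omega)} \;\le\; C_\Omega\,\|\nabla(u - u^*)\|_{L^2(\Omega)} \;=\; C_\Omega\,|||u - u^*|||,$$
so smallness in the energy norm controls the $L^2$ error only up to a domain-dependent constant, while the converse direction does not hold in general.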
[1] Li et al., Physics-Informed Neural Operator for Learning Partial Differential Equations
[2] Wang et al., An Expert's Guide to Training Physics-informed Neural Networks | 1. Although the use of this type of loss in this setting might be new, this work does not prove any new theoretical results. |
NIPS_2022_1572 | NIPS_2022 | 1.) Theoretical comparisons to the adaptive learning of GPRGNN are not clear. 2.) Incremental work, though the contribution is useful. | 1.) Theoretical comparisons to the adaptive learning of GPRGNN are not clear. |
ICLR_2021_1181 | ICLR_2021 | 1. For domain adaptation in the NLP field, powerful pre-trained language models, e.g., BERT and XLNet, can overcome the domain-shift problem to some extent. Thus, such models should be used as the base encoder for all methods, and the efficacy of the transfer parts should then be compared, instead of using the simplest n-gram features.
2. The whole procedure is slightly complex. The authors formulate the prototypical distribution as a GMM, which has high algorithmic complexity. However, a formal complexity analysis is absent. The authors should provide an analysis of the time complexity and training time of the proposed SAUM method compared with other baselines. Besides, a statistical significance test is absent for the performance improvements.
3. The motivation of learning a large margin between different classes is exactly discriminative learning, which is not novel when combined with domain adaptation methods and has already been proposed in the existing literature, e.g., Unified Deep Supervised Domain Adaptation and Generalization, Saeid et al., ICCV 2017; Contrastive Adaptation Network for Unsupervised Domain Adaptation, Kang et al., CVPR 2019; Joint Domain Alignment and Discriminative Feature Learning for Unsupervised Deep Domain Adaptation, Chen et al., AAAI 2019.
However, this paper lacks detailed discussions and comparisons with existing discriminative feature learning methods for domain adaptation.
4. The unlabeled data (2000) from the preprocessed Amazon review dataset (Blitzer version) is perfectly balanced, which is impractical in real-world applications. Since we cannot control the label distribution of unlabeled data during training, the authors should also use a more convincing setting, as done in Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018, which directly samples unlabeled data from millions of reviews.
5. The paper lacks some related work about cross-domain sentiment analysis, e.g., End-to-end adversarial memory network for cross-domain sentiment classification, Li et al., IJCAI 2017; Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification, He et al., EMNLP 2018; Hierarchical attention transfer network for cross-domain sentiment classification, Li et al., AAAI 18. Questions:
1. Have the authors conducted significance tests for the improvements?
2. How fast does this algorithm run or train compared with other baselines? | 1. For domain adaptation in the NLP field, powerful pre-trained language models, e.g., BERT and XLNet, can overcome the domain-shift problem to some extent. Thus, such models should be used as the base encoder for all methods, and the efficacy of the transfer parts should then be compared, instead of using the simplest n-gram features. |
5UW6Mivj9M | EMNLP_2023 | 1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted.
2) Relatedly, it was hard to discern what was novel in the paper and what had already been tried by others.
3) Since the improvement in numbers is not large (in most cases, just a couple of points), it is hard to tell if this improvement is statistically significant and if it translates to qualitative improvements in performance. | 1) The paper was extremely hard to follow. I read it multiple times and still had trouble following the exact experimental procedures and evaluations that the authors conducted. |
NIPS_2021_1759 | NIPS_2021 | The extension from the EH model is natural. In addition, there has been literature that proves the power of FNN from a theoretical point of view, whereas this paper fails to make a review in this regard. Among other works, Schmidt-Hieber (2020) gave an exact upper bound of the approximation error for FNNs involving the least-square loss. Since the DeepEH optimizes a likelihood-based loss, this paper builds up its asymptotic properties by following assumptions and proofs of Theorems 1 and 2 in Schmidt-Hieber (2020) as well as theories on empirical processes.
Additional Feedback: 1) In the manuscript, $P$ mostly represents a probability but sometimes a cumulative distribution function (e.g., Eqs. (3) and (4) and L44, all in the Appendix), which leads to confusion. 2) The notation $K$ is abused too: it is used both for a known kernel function (e.g., L166) and for the number of layers (e.g., L176). 3) What is $K_b$ in estimating the baseline hazard (L172)? | 2) The notation $K$ is abused too: it is used both for a known kernel function (e.g., L166) and for the number of layers (e.g., L176). |
ARR_2022_1_review | ARR_2022 | - Using original encoders as baselines might not be sufficient. In most experiments, the paper only compares with the original XLM-R or mBERT trained without any knowledge base information. It is unclear whether such encoders being fine-tuned towards the KB tasks would actually perform comparable to the proposed approach. I would like to see experiments like just fine tuning the encoders with the same dataset but the MLM objective in their original pretraining and comparing with them. Such baselines can leverage on input sequences as simple as `<s>X_s X_p X_o </s>` where one of them is masked w.r.t. MLM training.
- The design of input formats is intuitive and lacks justifications. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why the design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see some comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>` as wrapping tokens in the input sequence has been widely used in the community.
- The abstract part is lengthy so some background and comparisons with prior work can be elaborated in the introduction and related work. Otherwise, they shift perspective of the abstract, making it hard for the audience to catch the main novelties and contributions.
- In line 122, triples denoted as $(e_1, r, e_2)$ would clearly show its tuple-like structure instead of sets.
- In sec 3.2, the authors argue that the Prix-LM (All) model consistently outperforms the single model, hence the ability of leveraging multilingual information. Given the training data sizes differ a lot, I would like to see an ablation that the model is trained on a mix of multilingual data with the same overall dataset size as the monolingual. Otherwise, it is hard to justify whether the performance gain is from the large dataset or from the multilingual training. | - The design of input formats is intuitive and lacks justifications. Although the input formats for monolingual and cross-lingual links are designed to be consistent, it is hard to tell why the design would be chosen. As the major contribution of the paper, justifying the design choice matters. In other words, it would be better to see some comparisons over some variants, say something like `<s>[S]X_s[S][P]X_p[P][O]X_o[O]</s>` as wrapping tokens in the input sequence has been widely used in the community. |
ICLR_2021_2836 | ICLR_2021 | Are the assumptions of Lemma 1 really satisfied by SGLD? I don't see why the Markov chain induced by SGLD would have the posterior of interest as its stationary distribution, even if the step sizes are appropriately annealed.
More generally, the paper makes several strong claims of "fast posterior simulation" and "guaranteed superiority of $\tilde{q}_{\eta, \phi}^{(t)}$ over $q_\phi$". However, the results stated in Lemmas 1 to 3 could be seen as motivation for the proposed approach but do not constitute rigorous theoretical guarantees for the actual algorithm.
Finally, it is not clearly justified why we can use samples from all $T$ iterations (instead of just the samples from the $T$-th iteration) to estimate the gradients. Since the marginal distribution of the Markov chain is different for each time $0 \leq t \leq T$, using samples from all $T$ time steps would seem to add an additional layer of approximation compared to the objective stated in Equations 4--6. It would help to move Algorithm 1 (and any mention of practical approximations needed to make the algorithm work in practice) into the main body of the paper.
Minor comments
MCMC, SGLD and VI are not proper nouns (i.e. it should be "MCMC algorithms" instead of "MCMCs")
P. 1: what does "$q(z)$-mixed Markov chain" mean?
P. 2: two-way -> two
P. 2: by (richly) ... functions -> by a (richly) ... function
P. 2: "inspirition-driven designs"?
P. 2: because marginal -> because the marginal
P. 3: "are variables on a Markov chain" sound grammatically incorrect to me
P. 4: markov -> Markov | 3: "are variables on a Markov chain" sound grammatically incorrect to me P. |
ICLR_2022_1653 | ICLR_2022 | Weakness: (1) There is a large gap in the proof of Theorem 1. (2) Missing discussion of the line of research using random matrix theory to understand the input-output Jacobian [1], which also considers the operator norm of the input-output Jacobian and draws a very similar conclusion, e.g., that the squared operator norm must grow linearly with the number of layers; see eq (17) and the follow-up discussion in [1].
In what follows, I elaborate on (1) and (2) since they are related.
The biggest issue I see in the proof is the equation above (A.1) on page 11. The authors mixed the calculation of finite width networks (on the left of the equation) and the infinite width network calculation (on the right) together. More precisely, the authors exchange the order of the two limits $\lim_{\text{width} \to \infty}$ and $\limsup_{x_\alpha \to x_\beta}$. The exchangeability of the two limits is questionable to me. In the order $\lim_{\text{width} \to \infty} \limsup_{x_\alpha \to x_\beta}$, we need to handle a product of random matrices (if we compute the Jacobian). This is indeed a core contribution of [1], who uses free probability theory to compute the whole spectrum of the singular values of the Jacobian (assuming certain free independence of the matrices). If we swap the limits (we shouldn't do this without justification) to $\limsup_{x_\alpha \to x_\beta} \lim_{\text{width} \to \infty}$, the problem itself is reduced to computing the derivative of the composed correlation map, which is much simpler. I think these two limits are not exchangeable in general. E.g., using the order $\limsup_{x_\alpha \to x_\beta} \lim_{\text{width} \to \infty}$, both critical Gaussian and orthogonal initialization give the same answer. But using the order $\lim_{\text{width} \to \infty} \limsup_{x_\alpha \to x_\beta}$, Gaussian and orthogonal initialization can give different answers; see eq (17) vs (22) in [1].
Several Qs:
Q1: How Theorem 1 leads to the four possible cases after it needs more discussion. In addition, what are the new insights quotient the existing ones from the order-chaotic analysis? It seems: the first case corresponds to the chaotic phase, the second case corresponds to the order phase. The third/fourth cases seem to be a finer analysis of the critical regime.
Q2: Remark 1 on the critical initialization. Several works have already identified the issue of the polynomial rate convergence of the correlation to 1 for ReLU and smooth functions; see Proposition 1 in [2]; sec B.3. in [3].
Q3: I can't find where the legends "upper bound" and "largest found" are explained.
Q4: How does Thm1 imply eq (4.1)? Do you assume the operator norm is bounded by O(1)?
[1] Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice, https://arxiv.org/pdf/1711.04735.pdf [2] On the Impact of the Activation Function on Deep Neural Networks Training, https://arxiv.org/abs/1902.06853 [3] Disentangling Trainability and Generalization in Deep Neural Networks, https://arxiv.org/abs/1912.13053
Minor comments: 1.) What is the domain of the inputs? It seems they are lying in the same sphere, not mentioned in the paper. | 1.) What is the domain of the inputs? It seems they are lying in the same sphere, not mentioned in the paper. |
NIPS_2018_101 | NIPS_2018 | Weakness: The ideas of extension seem to be intuitive and not very novel (the authors seem to honestly admit this in the related work section when comparing this work with [3,8,9]). This seems to make the work a little bit incremental. In the experiments, Monte Carlo (batch-ENS) works pretty well consistently, but the authors do not provide intuitions or theoretical guarantees to explain the reasons. Questions: 1. In [12], they also show the results of GpiDAPH3 fingerprint. Why not also run the experiment here? 2. You said only 10 out of 120 datasets are considered as in [7,12]. Why not compare batch and greedy in other 110 datasets? 3. If you change the budget (T) in the drug dataset, does the performance decay curve still fits the conclusion of Theorem 1 well (like Figure 1(a))? 4. In the material science dataset, the pessimistic oracle seems not to work well. Why do your explanations in Section 5.2 not hold in the dataset? Suggestions: Instead of just saying that the drug dataset fits Theorem 1 well, it will be better to characterize the properties of datasets to which you can apply Theorem 1 and your analysis shows that this drug dataset satisfies the properties, which naturally implies Theorem 1 hold and demonstrate the practical value of Theorem 1. Minor suggestions: 1. Equation (1): Using X to denote the candidate of the next batch is confusing because it is usually used to represent the set of all training examples 2. In the drug dataset experiment, I cannot find how large the budget T is set 3. In section 5.2, the comparison of myopic vs. nonmyopic is not necessary. The comparison in drug dataset has been done at [12]. In supplmentary material 4. Table 1 and 2: why not also show results when batch size is 1 as you did in the drug dataset? 5. In the material science dataset experiment, I cannot find how large the budget T is set After rebuttal: Thanks for the explanation. It is nice to see the theorem roughly holds for the batch size part when different budgets are used. However, based on this new figure, the performance does not improve with the rate 1/log(T) as T increases. I suggest authors to replace Figure 1a with the figure in the rebuttal and address the possible reasons (or leave it as future work) of why the rate 1/log(T) is not applied here. There are no major issues found by other reviewers, so I changed my rate from tending to accept to accepting. | 2. You said only 10 out of 120 datasets are considered as in [7,12]. Why not compare batch and greedy in other 110 datasets? |
NIPS_2022_1708 | NIPS_2022 | Scalability: The proposed encoding method is templated-based (Line 155-156). Although the input encoding scheme (Section 7.1) may be a trivial problem, the encoding scheme may still affect the performance. Searching for the optimal encoding scheme is an expensive process, which may bring a high cost of hand-crafted engineering. Besides, the data gathering method also relies on hand-designed templates (Line 220).
Presentation: The related work of PLM is adequately cited. But the authors should also introduce the background of policy learning so that the significance of this work can be highlighted.
Performance: Compared to the work that uses traditional networks like DQN, the integration of PLM may affect the inference speed.
Clarity: Most parts of this paper are well written. However, there are some typos in the paper:
Line 53: pretrained LMs -> pre-trained LMs
Line 104: language -> language. (missing full stop mark)
Some papers should be cited in a proper way: Line 108: [23], Line 109: [36], Line 285:[15], Line 287 [15]. For example, in Line 108, "[23] show that" needs to be rewritten as "Frozen Pretrained Transformer (FPT) [23] show that".
[Rebuttal Updates] The authors provided the additional experiments for addressing my concern of scalability. The authors also revised the typos and added the related works.
Societal Impact: No potential negative societal impact. The authors provide a new perspective to aid policy learning with a pre-trained language model.
Limitation: 1) Building text descriptions for each task still requires human labor. We do not know what textual format is optimal for policy learning. It varies from task to task, model to model. On the other hand, as I stated in Question 1, the long-text input could restrict the scalability of this framework. 2) The proposed methods also need humans to design some templates/rules, as the authors mentioned in the conclusion part. | 1) Building text descriptions for each task still requires human labor. We do not know what textual format is optimal for policy learning. It varies from task to task, model to model. On the other hand, as I stated in Question 1, the long-text input could restrict the scalability of this framework. |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other rather than just modeling different incentives? Probably not enough space in the paper but actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N-th agent's
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - The paper is not self contained Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility. I also hereby request the authors to release the source code of their experiments to allow reproduction of their results. |
ARR_2022_232_review | ARR_2022 | - A number of claims from this paper would benefit from more in-depth analysis.
- There are still some methodological flaws that should be addressed.
### Main questions/comments Looking at the attached dataset files, I cannot work out whether the data is noisy or if I don't understand the format. The 7th example in the test set has three parts separated by [SEP] which I thought corresponded to the headlines from the three sides (left, center, right). However, the second and third headlines don't make sense as stand-alone texts. Especially the third one which states "Finally, borrowers will receive relief." seems like a continuation of the previous statements. In addition to the previous question, I cannot find the titles, any meta-information as stated in lines 187-189, nor VAD scores which I assumed would be included.
Part of the motivation is that scaling the production of neutral (all-sides) summaries is difficult. It would be good to quantify this if possible, as a counterargument to that would be that not all stories are noteworthy enough to require such treatment (and not all stories will appear on all sides).
Allsides.com sometimes includes more than one source from the same side -- e.g. https://www.allsides.com/story/jan-6-panel-reportedly-finds-gaps-trump-white-house-phone-records has two stories from Center publishers (CNBC and Reuters) and none from the right. Since the inputs are always one from each side (Section 3.2), are such stories filtered out of the dataset? Of the two major insights that form the basis of this work (polarity is a proxy for framing bias and titles are good indicators of framing bias) only first one is empirically tested with the human evaluation presented in Section 5.1.3. Even then, we are missing any form of analysis of disagreements or low correlation cases that would help solidify the argument. The only evidence we have for the second insight are the results from the NeuS-Title system (compared to the NeuSFT model that doesn't explicitly look at the titles), but again, the comparison is not systematic enough (e.g. no ablation study) to give us concrete evidence to the validity of the claim.
Related to the previous point, it's not clear what the case study mentioned in Section 4.2 actually involved. The insights gathered aren't particularly difficult to arrive at by reading the related literature and the examples in Table 1, while indicative of the arguments don't seem causally critical. It would be good to have more details about this study and how it drove the decisions in the rest of the paper. In particular, I would like to know if there were any counterexamples to the main points (e.g. are there titles that aren't representative of the type of bias displayed in the main article?).
The example in Table 3 shows that the lexicon-based approach (VAD dataset) suffers from lack of context sensitivity (the word "close" in this example is just a marker of proximity). This is a counterpoint to the advantages of such approaches presented in Section 5.1.1 and it would be interesting to quantify it (e.g. by looking at the human-annotated data from Section 5.1.3) beyond the safeguard introduced in the second paragraph (metric calibration).
For the NeuS-Title model, does that order of the input matter? It would be interesting to rerun the same evaluation with different permutations (e.g. center first, or right first). Is there a risk of running into the token limit of the encoder?
From the examples shared in Table 3, it appears that both the NeuSFT and NeuS-Title models stay close to a single target article. What strikes me as odd is that neither chose the center headline (the former is basically copying the Right headline, the latter has done some paraphrasing both mostly based on the Left headline). Is there a particular reason for this? Is the training objective discouraging true multi-headline summarisation since the partisan headlines will always contain more biased information/tokens?
### Minor issues I was slightly confused by the word "headline" to refer to the summary of the article. I think of headlines and titles as fundamentally the same thing: short (one sentence) high-level descriptions of the article to come. It would be helpful to refer to longer text as a summary or a "headline roundup" as allsides.com calls it. Also there is a discrepancy between the input to the NeuS-Title model (lines 479-486) and the output shown in Table 3 (HEADLINE vs ARTICLE).
Some citations have issues. Allsides.com is cited as (all, 2021) and (Sides, 2018); the year is missing for the citations on lines 110 and 369; Entman (1993) and (2002) are seemingly the same citation (and the 1993 is missing any publication details); various capitatlisation errors (e.g. "us" instead of "US" on line 716) It's not clear what the highlighting in Table 1 shows (it is implicitly mentioned later in the text). I would recommend colour-coding the different types of bias and providing the details in the caption. | - A number of claims from this paper would benefit from more in-depth analysis. |
NIPS_2022_2605 | NIPS_2022 | Weakness: 1) In the beginning of the paper, authors often mention that previous works lack the flexibility compared to their work. It is not clear what does it mean and thus makes it harder to understand their explanation. 2) It is not clear regarding the choice of 20 distribution sets. Can we control the number of distribution sets for each class? What if you select only few number of distribution set? 3) The role of Tranfer Matrix T is not discussed or elaborated. 4) It is not clear how to form the target distribution H. How do you formulate H? 5) There is no discussion on how to generate x_H from H and what does x_H constitute of? 6) Despite the significant improvement, it is not clear how this proposed method boost the transferability of the adversarial examples.
As per my understanding, authors briefly addressed the limitations and negative impact in their work. | 4) It is not clear how to form the target distribution H. How do you formulate H? |
ICLR_2022_1012 | ICLR_2022 | a. The paper lacks structure and clarity
b. The paper lacks a more qualitative study of the model:
it would be interesting to see what layers the layer-wise attention mechanism attends to.
it would be great to understand how this model uses the latent variables, for instance by measuring the KL divergence at each layer, as done in the previous work (LVAE, BIVA) (connection to "posterior collapse").
c. Experiments are limited to CIFAR-10, larger scaler experiments (i.e. ImageNet) would be beneficial to the paper. It is not guaranteed that such an architecture would translate in the same gains for larger datasets (i.e. ImageNet).
3. Clarification needed:
a. Table 5: I interpreted the column "non-local layers" as using "attention across layers", I hope I was right. The nomenclature needs to be improved.
b. Is the layer-wise attention mechanism specific to deep VAEs, or can it be more generally applied to ResNet architectures?
c. Section 2.3 (paragraph cited bellow): I get the idea, but unless demonstrated, this remains a hypothesis.
"... in practice the network may no longer respect the factorization of the prior p ( z ) = ∏ l p ( z l ∣ z < l )
leading to diminished performance gains as shown in Table 1":
4. Minor comments / suggestions
a. The main contributions are introducing two types of attention for deep VAEs, it might help to describe them in a separate section, and only then describe the generative and inference models. Right now the description of the layer-wise attention mechanism is scattered across sections 2.3 and 2.4.
b. tricks like normalisation or feature scaling could be referenced in a separate section.
c. eq8: you might want to cite ReZero [1] here
d. Fig 1. a: the lack of arrows going from the activations ( h l , k l q )
to the attention block ( A ( . . . ) )
was confusing on the first read
e. It would be better practice to report likelihoods for multiple random seeds
f. Typo in section 2.1: "both q ( z x ) and p ( x )
are fully factorized gaussian..." -> "both q ( z x ) and p ( z )
are fully factorized gaussian..."
[1] Bachlechner, T., Prasad Majumder, B., Mao, H. H., Cottrell, G. W., and McAuley, J., “ReZero is All You Need: Fast Convergence at Large Depth”, <i>arXiv e-prints</i>, 2020. | 4. Minor comments / suggestions a. The main contributions are introducing two types of attention for deep VAEs, it might help to describe them in a separate section, and only then describe the generative and inference models. Right now the description of the layer-wise attention mechanism is scattered across sections 2.3 and 2.4. b. tricks like normalisation or feature scaling could be referenced in a separate section. c. |
ARR_2022_270_review | ARR_2022 | The paper makes a few claims about accuracy and best guesses which are understandably not so common. One of the main metrics it leans on in its abstract is the achievement of 37.59% accuracy for defense. While this is done in a scientific way, the authors do not consider other avenues for testing that make more sense such as random experiments based on other work.
034 - "Unfortunately, large 035 language models tend to memorize training data", cite or explain more... 042 - "LM-based...", how do you know this?
045 - This may be better attached to the previous sentence.
045 - Figure 1 does not seem to show that the attacks were successfully able to pick the persona consistently. It shows 5 out of 14, but the truth is not given in the figure.
083 - this paragraph is really good, the explanation helps understand what is done later in the experiments 089 - "KL loss", please cite == Problem Formulation = 158 - "Casual", describe what casual means with language instead of math please. 160 - The LM shown is the negative log of the probability, while that could be a typical training phase, how do you assume it is the "casual" one?
173 - "The goal", why did you choose this goal, is there evidence that by correctly choosing a persona you are able to get more personal information?
196 - "Machine Learning as a Service" --> machine learning as a service (MLaaS) 196 - "can be directly..", are you sure? Please cite the evidence where this has happened.
== Defense Learning Strategies == 204 - "simple LM training", what is this, cite?
206 -LM = LMs?
207 - "to avoid...", sentence no longer needed 213 - 216 - The statement makes sense but do you have something to show that?
220 - "intuition...", this is good == KL Loss == - The KL divergence is introduced here but mentioned before. Also, please cite.
== MI Loss == 281 - 299 The paper takes on the idea of a game gotten from other work. This is okay but it probably should have been mentioned as the main premise for the work instead of a finding.
== Experiment == --> Experiments?
319 - experimental setting --> experimental settings == Experimental Setting == --> Experimental Settings?
- Table 1 --> Good 363 - PPL, this is not defined anywhere.
371 - Table 2, from random 0 to 37.59 is a stretch. Also, the fact that there is "no" knowledge is probably not a fair comparison. Not sure whether to leave it or not but "best guesses" makes more sense.
384 - 52x --> 52 times? This number does not seem to be a valid measurement. The method of collection and distribution assumptions are questionable.
== Ablation Study == - Good comparisons | 173 - "The goal", why did you choose this goal, is there evidence that by correctly choosing a persona you are able to get more personal information? |
ACL_2017_19_review | ACL_2017 | But I have a few questions regarding finding the antecedent of a zero pronoun: 1. How will an antecedent be identified, when the prediction is a pronoun? The authors proposed a method by matching the head of noun phrases. It’s not clear how to handle the situation when the head word is not a pronoun.
2. What if the prediction is a noun that could not be found in the previous contents?
3. The system achieves great results on standard data set. I’m curious is it possible to evaluate the system in two steps? The first step is to evaluate the performance of the model prediction, i.e. to recover the dropped zero pronoun into a word; the second step is to evaluate how well the systems works on finding an antecedent.
I’m also curious why the authors decided to use attention-based neural network. A few sentences to provide the reasons would be helpful for other researchers.
A minor comment: In figure 2, should it be s1, s2 … instead of d1, d2 ….? - General Discussion: Overall it is a great paper with innovative ideas and solid experiment setup. | 1. How will an antecedent be identified, when the prediction is a pronoun? The authors proposed a method by matching the head of noun phrases. It’s not clear how to handle the situation when the head word is not a pronoun. |
OROKjdAfjs | ICLR_2024 | 1. How does your linear attention handle the autoregressive decoding? As of training, you can feed the network with a batch of inputs with long token dimensions. But when it comes to the generation phase, I am afraid that only limited tokens are used to generate the next token. Then do you still have benefits for inference?
2. The paper reads like a combination of various tricks as a lot of techniques were discussed in the previous paper, like LRPE, Flash, and Flash Attention. Especially for the Lightning Attention vs. Flash Attention, I did not find any difference between these two. The gated mechanism was also introduced in Flash paper. These aspects leave us a question in terms of the technical novelty of this paper.
3. It looks like during training, you are still using the quadratic attention computational order as indicated in Equ. 10? I suppose it was to handle the masking part. But that loses the benefits of training with linear attention complexity.
4. In terms of evaluation, although in the abstract, the authors claim that the linearized LLM extends to 175B parameters, most experiments are conducted on 375M models. For the large parameter size settings, the author only reports the memory and latency cost savings. The accuracy information is missing, without which I feel hard to evaluate the linearized LLMs. | 1. How does your linear attention handle the autoregressive decoding? As of training, you can feed the network with a batch of inputs with long token dimensions. But when it comes to the generation phase, I am afraid that only limited tokens are used to generate the next token. Then do you still have benefits for inference? |
NIPS_2018_87 | NIPS_2018 | weakness/questions: 1. Description of the framework: It's not very clear what Bs is in the formulation. It's not introduced in the formulation, but later on the paper talks about how to form Bs along with Os and Zs for different supervision signals. And it;s very confusing what is Bs's role in the formulation. 2. computational cost: it would be great to see an analysis about the computation cost. 3. Experiment section: it seems that for the comparison with other methods, the tracklets are also generated using different process. So it's hard to draw conclusions based on the results. Is it possible to apply different algorithms to same set of tracklets? For example, for the comparison of temporal vs temp+BB, the conclusion is not clear as there are three ways of generating tracklets. It seems that the conclusion is -- when using same tracklet set, the temp + BB achieves similar performance as using temporal signal only. However, this is not explicitly stated in the paper. 4. The observation and conclusions are hidden in the experimental section. It would be great if the paper can highlight those observations and conclusions, which is very useful for understanding the trade-offs of annotation effort and corresponding training performance. 5. comparison with fully supervised methods: It would be great if the paper can show comparison with other fully supervised methods. 6. What is the metric used for the video level supervision experiment? It seems it's not using the tracklet based metrics here, but the paper didn't give details on that. | 4. The observation and conclusions are hidden in the experimental section. It would be great if the paper can highlight those observations and conclusions, which is very useful for understanding the trade-offs of annotation effort and corresponding training performance. |
ICLR_2023_4654 | ICLR_2023 | Weakness:
1), The proposed approach is straightforward (not a demerit), and is a native extension on how to extend the DETR into few-shot, although there exist some specific mechanism designs in this paper to facilitate such extension. However, similar ideas also can be found in existing papers such as [1], which appeared in 2021 in arXiv and published in 2022. Since [1] is already published, it should be included as a fair baseline to compare with and discuss and [1] should be the most close research reported in few-shot object detection. However, the performance reported in this manuscript seems is not as high as in [1] with a significant margin.
[1] Meta-DETR: Image-Level Few-Shot Object Detection with Inter-Class Correlation Exploitation, T-PAMI, 2022.
2), From the data in Table 4, it indicates that the unsupervised pretraining is a key factor on the performance gain. However, there is no detailed discussion on the unsupervised pretraining in the main paper, which might be a problem. In fact, compared with ablation study of Table 5, the unsupervised pretraining is much more important than other modules presented in this paper. Therefore, I will suggest on focusing more on the pretraining method in the main paper.
3), I also cannot very agree with three “desiderata” claimed by the authors (although this is not a serious issue). In standard few-shot object detection, fine-tuning or re-training is not an evil. Moreover, “without re-training” is a merit to all attention-based few-shot approaches, not a unique merit of this approach. The second point, “an arbitrary number of novel objects” actually is not even an issue for those “re-training” few-shot methods. And the "re-training" based method may also have the merit that it no need to require the "queries" for the detection on both base and novel classes, and the selection of "queries" may also affect the performance of the detection performance.
4), In fact, many few-shot research is also focusing on the performance on both base and novel classes. One another important “desiderata” should be it can eliminate the performance drop as much as possible on base classes when adapting to novel classes. However, in this paper (similar for most "retraining-free" FS methods), the base class performance is not focused totally, and no experiment statistics are provided at all, this would be a problem to compare with most few-shot methods. Since the attention-based approach relying on the "queries", it's base-class performance may be worse than the "retraining" methods? | 2), From the data in Table 4, it indicates that the unsupervised pretraining is a key factor on the performance gain. However, there is no detailed discussion on the unsupervised pretraining in the main paper, which might be a problem. In fact, compared with ablation study of Table 5, the unsupervised pretraining is much more important than other modules presented in this paper. Therefore, I will suggest on focusing more on the pretraining method in the main paper. |
NIPS_2020_1817 | NIPS_2020 | There are a few points that are not clear from the paper, which I list below: - As far as I understood in the clustered attention (not the improved one), the value of the i-th query becomes the value of the centroid of the cluster that the query belongs to. So after one round of applying the clustered attention, we have a set C distinct values in N nodes. I wonder what is the implication of this for the next round of the clustered attention, because there is no way to have two nodes that were in the same cluster in the previous round to be in different clusters in the next round (as their values will be the same after round 1) and the only change in the clustering that makes sense is merging clusters (which is not the case as apparently the number of clusters stays the same). Isn’t this too restrictive? What if the initial clustering is not good, then the model has no chance to recover? If the number of clusters stays the same, does the clustering in the layer after layer 1 does anything different than the clustering in the layer 1 (if not they're removable)? - It’s a bit unclear if LSH-X is the Reformer, or a simpler version of the reformer (LSH Transformer). The authors mentioned that the Reformer can’t be used in a setup with heterogeneous queries and keys. First of all, I think it shouldn't be that hard to modify Reformer to support this case. Besides, authors don’t have any task in that setup to see how well the clustered attention does when the clustered queries are not the projections of the inputs that the keys are projected from. - The experiments that are done in the setup that the model has to deal with long sequences is limited to a single modality. Would be nice to have the model evaluated on large inputs in vision/text/algorithmic tasks as well. - Although the method is presented nicely and the experiments are rather good and complete, a bit of analysis on what the model does, which can be extremely interesting, is missing (check the feedback/suggestions). - The authors only consider vanilla transformer and (I think an incomplete version of) Reformer, while there are obvious baselines, e.g. Longformer, sparse transformer, or even Local attention (check the feedback/suggestions). | - Although the method is presented nicely and the experiments are rather good and complete, a bit of analysis on what the model does, which can be extremely interesting, is missing (check the feedback/suggestions). |
NIPS_2017_302 | NIPS_2017 | 1. Related Work: As the available space allows it, the paper would benefit from a more detailed discussion of related work, by not only describing the related works, but also discussing the differences to the presented work.
2. Qualitative results: To underline the success of the work, the paper should include some qualitative examples, comparing its generated sentences to the ones by related work.
3. Experimental setup: For the COCO image captions, the paper does not rely on the official training/validation/test split used in the COCO captioning challenge.
3.1. Why do the authors not use the entire training set?
3.2. It would be important for the automatic evaluation to report results using the evaluation server and report numbers on the blind test set (for the human eval it is fine to use the validation set). Conclusion:
I hope the authors will include the coco caption evaluation server results in the rebuttal and final version as well as several qualitative results.
Given the novelty of the approach and strong experiments without major flaws I recommend accepting the paper.
It would be interesting if the authors would comment on which problems and how their approach can be applied to non-sequence problems. | 1. Related Work: As the available space allows it, the paper would benefit from a more detailed discussion of related work, by not only describing the related works, but also discussing the differences to the presented work. |
ICLR_2022_1617 | ICLR_2022 | Weakness: 1. The motivation of this work should be further justified. In few-shot learning, we usually consider how to leverage a few instances to learn a generalizable model. This paper defines and creates a few-shot situation for graph link prediction, but the proposed method does not consider how to effectively use “few-shot” and how to guarantee the trained model can be generalized well to new tasks with 0/few training steps. 2. The definition of “domain” in this paper is unclear. For instance, why select multiple domains from the same single graph in ogbn-products? Should we consider the selected domains as “different domains”? 3. The application of adversarial learning in few-shot learning is confusing. Adversarial learning in domain adaptation aims to learn domain-invariant representations, but why do we need such kind of representation in few-shot learning? | 1. The motivation of this work should be further justified. In few-shot learning, we usually consider how to leverage a few instances to learn a generalizable model. This paper defines and creates a few-shot situation for graph link prediction, but the proposed method does not consider how to effectively use “few-shot” and how to guarantee the trained model can be generalized well to new tasks with 0/few training steps. |
NIPS_2021_2445 | NIPS_2021 | and strengths in their analysis with sufficient experimental detail, it is admirable, but they could provide more intuition why other methods do better than theirs.
The claims could be better supported. Some examples and questions (if I did not miss out anything)
Why is using normalization a problem for a network or a task (it can be thought of as a part of cosine distance)? How would Barlow Twins perform if their invariance term is replaced with a Euclidean distance?
Your method still uses 2048 as the batch size, I would not consider it as small. For example, Simclr uses examples in the same batch and its batch size changes between 256-8192. Most of the methods you mentioned need even much lower batch size.
You mentioned not sharing weights as an advantage, but you have shared weights in your results, except Table 4 in which the results degraded as you mentioned. What stops the other methods from using different weights? It should be possible even though they have covariance term between the embeddings, how much their performance would be affected compared with yours?
My intuition is that a proper design might be sufficient rather than separating variance terms.
- Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; it was a good point, is it commonly encountered, did you observe it in your experiments?
- I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable.
- Variances of the results could be included to show the stability of the algorithms since it was another claim in the paper (although "collapsing" shows it partly, it is a biased criterion since the other methods are not designed for var/cov terms).
- How hard is it to balance these 3 terms?
- When someone thinks about gathering two batches from two networks and calculate the global batch covariance in this way; it includes both your terms and Barlow Twins terms. Can anything be said based on this observation, about which one is better and why? Significance:
Currently, the paper needs more solid intuition or analysis or better results to make an impact in my opinion. The changes compared with the prior work are minimal. Most of the ideas and problems in the paper are important, but they are already known.
The comparisons with the previous work is valuable to the field, they could maybe extend their experiments to the more of the mentioned methods or other variants.
The authors did a great job in presenting their work's limitations, their results in general not being better than the previous works and their extensive analysis(tables). If they did a better job in explaining the reasons/intuitions in a more solid way, or include some theory if there is any, I would be inclined to give an accept. | - Do you have a demonstration or result related to your model collapsing less than other methods? In line 159, you mentioned gradients become 0 and collapse; it was a good point, is it commonly encountered, did you observe it in your experiments? |
NIPS_2021_37 | NIPS_2021 | , * Typos/Comments)
Overall, I like and value the research topic and motivation of this paper and lean positive. However, some details are not clear enough. I would update my rating depending on the authors' feedback. The details are as follows.
+ Interesting and important research problem. This paper focuses on how to obtain disentangle representations for feature-level augmentation. This topic is interesting and important, and will attract many interests of the NeurIPS community.
+ Good quality of writing and organization. Overall, the writing quality is good and the paper is well organized. It is comfortable to read this paper, although some details are not clear.
+ Comprehensive experiments. Experiments are conducted on two synthetic datasets Colored MNIST and Corrupted CIFAR-10) and two real-world datasets (BAR and Biased FFHQ).
- Relative difficulty score and generalized cross-entropy (GCE) loss. It is not clear how the relative difficulty score $W(x)$ in Eq. (1) is used in the pipeline. $W(x)$ is not mentioned again in either the overall objective function Eq. (2) or Algorithm 1. Since readers may not be familiar with the generalized cross-entropy (GCE) loss, it is encouraged to briefly introduce the formulation and key points of the GCE loss to make this paper more self-contained.
- How bias-conflicting samples and bias-aligned samples are selected. This weakness follows the first one. It seems that the "bias-conflicting" is determined based on the relative difficulty score, but the details are missed. Also, the ablation study on how the "bias-conflicting" is determined, e.g., setting the threshold for the relative difficulty score, is encouraged to be considered and included.
- Disentanglement. It is not clear how disentanglement is guaranteed. Although "Broader Impacts and Limitations" stated that "Obtaining fully disentangled latent vectors ... a limitation", it is still important to highlight how the disentanglement is realized and guaranteed without certain bias types.
- Inference stage. It is not clear how the inference is conducted during testing. Which encoders/decoders are preserved during the test stage?
- Figure 1 is not clear. First, it seems that the two $y$ towards $L_{CE}$ are the outputs of $C_i$, but they are illustrated like labels rather than predictions. Second, the illustration of the re-weighting module is not clear. Does it represent Eq. (4)?
- Table 4 reported a much lower performance of "swapping" on BAR compared to the other three datasets. Is there any explanation for this, like the difference of datasets?
- Sensitivity to hyperparameters. The proposed framework consists of three important hyperparameters, $(\lambda_{dis}, \lambda_{swap_b}, \lambda_{swap})$. It is not clear whether the framework is sensitive to these hyperparameters and how these hyperparameters are determined.
* (Suggestion) Illustration of backpropagation. As introduced in Line 167-168, the loss from $C_i$ is not backpropagated to $E_b$. It would be clearer if this can be added in Figure 1.
* Line 280. Is "the first row and column ... respectively" a typo? It is a little confusing for me to understand this.
* Typos in Algorithm 1. Are $\lambda_{dis}$ and $\lambda_{swap_b}$ missed in $L_{dis}$ and $L_{swap}$?
* Typo in Line 209. Corrputed -> Corrupted.
============================= After rebuttal ===================================
After reading the authors' response to my questions and concerns, I would like to vote for acceptance.
The major strengths of this paper are:
The research problem, unbiased classification via learning debiased representation, is interesting and would attract the NeurIPS audience's attention.
The proposed method is simple but effective. The method is built on top of LfF [12] and further considers (1) intrinsic and bias feature disentanglement and (2) data augmentation by swapping the bias features among training samples.
The paper is clearly written and well organized.
These strengths and contributions are also pointed out by other colleague reviewers.
My main concerns were:
Unclear technical details of the GCE loss and the relative difficulty score. This concern was also shared with Reviewer 8Ai1 and iKKw. The authors' response clearly introduced the details and addressed my concern well.
Sensitivity to hyper-parameters. The authors' response provided adequate results to show the sensitivity to hyper-parameters. Other details of implementation and analysis of experimental results. The authors' responses clearly answered my questions.
Considering both strengths and the weakness, I am happy to accept this paper.
The authors have adequately addressed the limitations and potential negative societal impact of their work. | - Disentanglement. It is not clear how disentanglement is guaranteed. Although "Broader Impacts and Limitations" stated that "Obtaining fully disentangled latent vectors ... a limitation", it is still important to highlight how the disentanglement is realized and guaranteed without certain bias types. |
NIPS_2020_839 | NIPS_2020 | - In Table 2, what about the performance of vanilla Transformer with the proposed approach? It's clearer to report the baseline + proposed approach, not only aiming at reporting state-of-the-art performance. - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts better BLEU scores in my experience. How did you calculate perplexity? | - In Figure 1, the reported perplexities are over 30, which looks pretty high. This high perplexity contradicts better BLEU scores in my experience. How did you calculate perplexity? |
ICLR_2023_2630 | ICLR_2023 | - The technical novelty and contributions are a bit limited. The overall idea of using a transformer to process time series data is not new, as also acknowledged by the authors. The masked prediction was also used in prior works e.g. MAE (He et al., 2022). The main contribution, in this case, is the data pre-processing approach that was based on the bins. The continuous value embedding (CVE) was also from a prior work (Tipirneni & Reddy 2022), and also the early fusion instead of late fusion (Tipirneni & Reddy, 2022; Zhang et al., 2022). It would be better to clearly clarify the key novelty compared to previous works, especially the contribution (or performance gain) from the data pre-processing scheme.
- It is unclear if there are masks applied to all the bins, or only to one bin as shown in Fig. 1.
- It is unclear how the static data (age, gender etc.) were encoded to input to the MLP. The time-series data was also not clearly presented.
- It is unclear what is the "learned [MASK] embedding" mean in the SSL pre-training stage of the proposed method.
- The proposed "masked event dropout scheme" was not clearly presented. Was this dropout applied to the ground truth or the prediction? If it was applied to the prediction or the training input data, will this be considered for the loss function?
- The proposed method was only evaluated on EHR data but claimed to be a method designed for "time series data" as in both the title and throughout the paper. Suggest either tone-down the claim or providing justification on more other time series data.
- The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs inferior to most of the compared methods.
- Missing reference to the two used EHR datasets at the beginning of Sec. 4. | - The experimental comparison with other methods seems to be a bit unfair. As the proposed method was pre-trained before the fine-tuning stage, it is unclear if the compared methods were also initialised with the same (or similar scale) pre-trained model. If not, as shown in Table 1, the proposed method without SSL performs inferior to most of the compared methods. |
ICLR_2022_3183 | ICLR_2022 | Weakness: 1. The novelty is limited. Interpretating the prediction of deep neural networks using linear model is not a new approach for model interpretation. 2. The motivation of the paper is not clear. It seems an experiment report about ~193k models, and obtains the obvious results, such as the middle layers represent the more generalizable features. But it is not about interpretation. 3. The writing is not clear. it's hard to read and follow the work. 4.
Concerns: 1. Why need 101 “source” tasks? What are these? 2. 101 tasks can be done on the retinal image, it is not a unified domain to do the research about model interpretation. 3. The motivation and conclusion should be clearly presented to under the contribution of this paper for model interpretation. | 1. The novelty is limited. Interpretating the prediction of deep neural networks using linear model is not a new approach for model interpretation. |
NIPS_2019_1408 | NIPS_2019 | Weakness: 1. Although the four criteria (proposed by the author of this paper) for multi-modal generative models seem reasonable, they are not intrinsic generic criteria. Therefore, the argument that previous works fail for certain criteria is not strong. 2. Tabular data (seeing each attribute dimension as a modality) is another popular form of multi-modal data. It would interesting, although not necessary, to see how this model works for tabular data. | 2. Tabular data (seeing each attribute dimension as a modality) is another popular form of multi-modal data. It would interesting, although not necessary, to see how this model works for tabular data. |
NIPS_2020_1454 | NIPS_2020 | - Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. - Claims to be SOTA on three datasets, but this does not seem to be the case. Does not evaluate on what it trains on (see "additional feedback"). | - Small contributions over previous methods (NCNet [6] and Sparse NCNet [21]). Mostly (good) engineering. And despite that it seems hard to differentiate it from its predecessors, as it performs very similarly in practice. |
NIPS_2020_545 | NIPS_2020 | I do have several questions: - Did the hyper-ensemble paper forced the networks to start form the same initialization point? I looked into the paper in ref[12] for this information but couldn't tell. If so, then the work will need a more justification with the difference w.r.t hyper-ensemble. - In page 5, starting from line 172 it was not clear why o(mk) became o(k^2), can you elaborate more on this? - From table#1 and table#2 it seems that hyper-ens, str hyper ens and deep ens are quite close to each other in nll,acc,ece ranges. What's exactly the range if improvement of using str-hyper-ens over the others? - In tables#1,2 what is the meaning of the numbers in brackets (1),(4)? - I understand that the empirical evaluation is expensive, but reporting results on other deep models such as VGG, ResNet, DenseNet for a small subset of the settings will clear any doubts regards that the method only works best for wide-resnet. - On the same point of wide-resnet as in lines 269-272 for using two deep ensembles, what are the results of this comparison for wideresnet? | - In page 5, starting from line 172 it was not clear why o(mk) became o(k^2), can you elaborate more on this? |
NIPS_2016_265 | NIPS_2016 | 1. For the captioning experiment, the paper compares to related work only on some not official test set or dev set, however the final results should be compared on the official COOC leader board on the blind test set: https://competitions.codalab.org/competitions/3221#results e.g. [5,17] have won this challenge and have been evaluated on the blind challenge set. Also, several other approaches have been proposed since then and significantly improved (see leaderboard, the paper should at least compare to the once where an corresponding publication is available). 2. A human evaluation for caption generation would be more convincing as the automatic evaluation metrics can be misleading. 3. It is not clear from Section 4.2 how the supervision is injected for the source code caption experiment. While it is over interesting work, for acceptance at least points 1 and 3 of the weaknesses have to be addressed. ==== post author response === The author promised to include the results from 1. in the final For 3. it would be good to state it explicitly in Section section 4.2. I encourage the authors to include the additional results they provided in the rebuttal, e.g. T_r in the final version, as it provides more insight in the approach. Mine and, as far as I can see, the other reviewers concerns have been largely addressed, I thus recommend to accept the paper. | 2. A human evaluation for caption generation would be more convincing as the automatic evaluation metrics can be misleading. |
D0gAwtclWk | EMNLP_2023 | 1 While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results.
2 The paper does not discuss the computational efficiency of the proposed method. As the Soft-InfoNCE method involves additional computations for weight assignment, it would be important to understand the trade-off between improved performance and increased computational cost.
3 While the authors present the results of their experiments, they do not provide an in-depth analysis of these results. More detailed analysis, including a discussion of cases where the proposed method performs exceptionally well or poorly, could have added depth to the paper. | 1 While the paper provides valuable insights for contrastive learning in code search tasks, it does not thoroughly explore the implications of their proposed method for other NLP tasks. This somewhat limits the generalizability of the results. |
NIPS_2018_901 | NIPS_2018 | Weakness: - The experiments are only done on one game environment. More experiments are necessary. - This method seems not generalizable for other games e.g. FPS game. People can hardly do this on realistic scenes such as driving. Static Assumption too strong. | - The experiments are only done on one game environment. More experiments are necessary. |
ACL_2017_543_review | ACL_2017 | - Experimental results show only incremental improvement over baseline, and the choice of evaluation makes it hard to verify one of the central arguments: that visual features improve performance when processing rare/unseen words.
- Some details about the baseline are missing, which makes it difficult to interpret the results, and would make it hard to reproduce the work.
- General Discussion: The paper proposes the use of computer vision techniques (CNNs applied to images of text) to improve language processing for Chinese, Japanese, and Korean, languages in which characters themselves might be compositional. The authors evaluate their model on a simple text-classification task (assigning Wikipedia page titles to categories). They show that a simple one-hot representation of the characters outperforms the CNN-based representations, but that the combination of the visual representations with standard one-hot encodings performs better than the visual or the one-hot alone. They also present some evidence that the visual features outperform the one-hot encoding on rare words, and present some intuitive qualitative results suggesting the CNN learns good semantic embeddings of the characters.
I think the idea of processing languages like Chinese and Japanese visually is a great one, and the motivation for this paper makes a lot of sense. However, I am not entirely convinced by the experimental results. The evaluations are quite weak, and it is hard to say whether these results are robust or simply coincidental. I would prefer to see some more rigorous evaluation to make the paper publication-ready. If the results are statistically significant (if the authors can indicate this in the author response), I would support accepting the paper, but ideally, I would prefer to see a different evaluation entirely.
More specific comments below: - In Section 3, paragraph "lookup model", you never explicitly say which embeddings you use, or whether they are tuned via backprop the way the visual embeddings are. You should be more clear about how the baseline was implemented. If the baseline was not tuned in a task-specific way, but the visual embeddings were, this is even more concerning since it makes the performances substantially less comparable.
- I don't entirely understand why you chose to evaluate on classifying wikipedia page titles. It seems that the only real argument for using the visual model is its ability to generalize to rare/unseen characters. Why not focus on this task directly? E.g. what about evaluating on machine translation of OOV words? I agree with you that some languages should be conceptualized visually, and sub-character composition is important, but the evaluation you use does not highlight weaknesses of the standard approach, and so it does not make a good case for why we need the visual features. - In Table 5, are these improvements statistically significant?
- It might be my fault, but I found Figure 4 very difficult to understand.
Since this is one of your main results, you probably want to present it more clearly, so that the contribution of your model is very obvious. As I understand it, "rank" on the x axis is a measure of how rare the word is (I think log frequency?), with the rarest word furthest to the left? And since the visual model intersects the x axis to the left of the lookup model, this means the visual model was "better" at ranking rare words? Why don't both models intersect at the same point on the x axis, aren't they being evaluated on the same set of titles and trained with the same data? In the author response, it would be helpful if you could summarize the information this figure is supposed to show, in a more concise way. - On the fallback fusion, why not show performance for for different thresholds? 0 seems to be an edge-case threshold that might not be representative of the technique more generally.
- The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese? | - The simple/traditional experiment for unseen characters is a nice idea, but is presented as an afterthought. I would have liked to see more eval in this direction, i.e. on classifying unseen words - Maybe add translations to Figure 6, for people who do not speak Chinese? |
NIPS_2018_583 | NIPS_2018 | Weakness: - (4) or (5) are nonconvex saddle point problems, there is no convergence guarantee for Alg 1. Moreover, as a subroutine for (7), it is not clear how many iterations (the hyperparameter n) should be taken to make sure (7) is convergent. Previously in structured SVM, people noticed that approximate inference could make the learning diverges. - Performance boost due to more parameters? In Tab 1,2,3, if we think carefully, LinearTop and NLTop adds additional parameters, while Unary performs much worse comparing to the numbers reported e.g. in [14], where they used a different and probably better neural network. This raises a question: if we use a better Unary baseline, is there still a performance boost? - In Table 1, the accuracies are extremely poor: testing accuracy = 0.0? Something must be wrong in this experiment. - Scalability: since H(x,c) outputs the whole potential vector with length O(K^m), where m is the cardinality of the largest factor, which could be extremely long to be an input for T. - The performance of NLTop is way behind the Oracle (which uses GT as input for T). Does this indicate (3) is poorly solved or because of the learning itself? [*] N Komodakis Efficient training for pairwise or higher order CRFs via dual decompositio. CVPR 2011. [**] D Sontag et al. Learning efficiently with approximate inference via dual losses. ICML 2010. | - Performance boost due to more parameters? In Tab 1,2,3, if we think carefully, LinearTop and NLTop adds additional parameters, while Unary performs much worse comparing to the numbers reported e.g. in [14], where they used a different and probably better neural network. This raises a question: if we use a better Unary baseline, is there still a performance boost? |
NIPS_2019_1350 | NIPS_2019 | of the method. CLARITY: The paper is well organized, partially well written and easy to follow, in other parts with quite some potential for improvement, specifically in the experiments section. Suggestions for more clarity below. SIGNIFICANCE: I consider the work significant, because there might be many settings in which integrated data about the same quantity (or related quantities) may come at different cost. There is no earlier method that allows to take several sources of data into account, and even though it is a fairly straightforward extension of multi-task models and inference on aggregated data, it is relevant. MORE DETAILED COMMENTS: --INTRO & RELATED WORK: * Could you state somewhere early in the introduction that by "task" you mean "output"? * Regarding the 3rd paragraph of the introduction and the related work section: They read unnaturally separated. The paragraph in the introduction reads very technical and it would be great if the authors could put more emphasis there in how their work differs from previous work and introduce just the main concepts (e.g. in what way multi-task learning differs from multiple instance learning). Much of the more technical assessment could go into the related work section (or partially be condensed). --SECTION 2.3: Section 2 was straightforward to follow up to 2.3 (SVI). From there on, it would be helpful if a bit more explanation was available (at the expense of parts of the related work section, for example). More concretely: * l.145ff: $N_d$ is not defined. It would be good to state explicitely that there could be a different number of observations per task. * l.145ff: The notation has confused me when first reading, e.g. $\mathbb{y}$ has been used in l.132 for a data vector with one observation per task, and in l.145 for the collection of all observations. I am aware that the setting (multi-task, multiple supports, different number of observations per task) is inherently complex, but it would help to better guide the reader through this by adding some more explanation and changing notation. Also l.155: do you mean the process f as in l.126 or do you refer to the object introduced in l.147? * l.150ff: How are the inducing inputs Z chosen? Is there any effect of the integration on the choice of inducing inputs? l.170: What is z' here? Is that where the inducing inputs go? * l.166ff: It would be very helpful for the reader to be reminded of the dimensions of the matrices involved. * l.174 Could you explicitly state the computational complexity? * Could you comment on the performance of this approximate inference scheme based on inducing inputs and SVI? --EXPERIMENTS: * synthetic data: Could you give an example what kind of data could look like this? In Figure 1, what is meant by "support data" and what by "predicted training count data"? Could you write down the model used here explicitly, e.g. add it to the appendix? * Fertility rates: - It is unclear to me how the training data is aggregated and over which inputs, i.e. what you mean by 5x5. - Now that the likelihood is Gaussian, why not go for exact inference? * Sensor network: - l.283/4 You might want to emphasize here that CI give high accuracy but low time resolution results, e.g. "...a cheaper method for __accurately__ assessing the mass..." - Again, given a Gaussian likelihood, why do you use inducing inputs? What is the trade-off (computational and quality) between using the full model and SVI? - l.304ff: What do you mean by "additional training data"? 
- Figure 3: I don't understand the red line: Where does the test data come from? Do you have a ground truth? - Now the sensors are co-located. Ideally, you would want to have more low-cost sensors that high-cost (high accuracy) sensors in different locations. Do you have a thought on how you would account for spatial distribution of sensors? --REFERENCES: * please make the style of your references consistent, and start with the last name. Typos etc: ------------- * l.25 types of datasets * l.113 should be $f_{d'}(v')$, i.e. $d'$ instead of $d$ * l.282 "... but are badly bias" should be "is(?) badly biased" (does the verb refer to measurement or the sensor? Maybe rephrase.) * l.292 biased * Figure 3: biased, higher peaks, 500 with unit. * l.285 consisting of? Or just "...as observations of integrals" * l.293 these variables | * synthetic data: Could you give an example what kind of data could look like this? In Figure 1, what is meant by "support data" and what by "predicted training count data"? Could you write down the model used here explicitly, e.g. add it to the appendix? |
43SOcneD8W | EMNLP_2023 | 1. The reported performance gain of the proposed framework is marginal when compared to the improvements introduced by simple Prompt Tuning approaches. For instance,for Table 3, out of 2.7% gain over Roberta backbone on ReTACRED, prompting tuning (i.e. HardPrompt) already achieves the gain of 1.7%.
2. The scope of the study is under-specified. It seems that the work focuses on injecting CoT- based approach to small-scale Language Models. If that is not the case, additional relevant CoT baselines for in-context learning of Large Language Models (for text-003 and ChatGPT) are missing in Table 2 and 3 (See Question A).
3. The major components of the proposed frameworks are CCL and PR. Both of them are incremental over the previous methods with minor adaptation for CoT-based prompting proposal. | 2. The scope of the study is under-specified. It seems that the work focuses on injecting CoT- based approach to small-scale Language Models. If that is not the case, additional relevant CoT baselines for in-context learning of Large Language Models (for text-003 and ChatGPT) are missing in Table 2 and 3 (See Question A). |
vXSCD3ToCS | ICLR_2025 | - The paper appears to require daily generation of the dynamic road network topology using the tree-based adjacency matrix generation algorithm. The efficiency of this process remains unclear. Additionally, since the topologies undergo minimal changes between consecutive days, and substantial information is shared across these days, it raises the question of whether specialized algorithms are available to accelerate this topology generation.
- The authors present performance results only for two districts, D06 and D11. It is recommended to extend the reporting to include experimental results from the remaining seven districts.
- There is an inconsistency in the layout of the document: Figure 5 referred to on line 215 of Page 4, yet it is located on Page 7.
- The caption for Figure 7 is incorrect, and should be corrected to "Edge Dynamics" from "Node Dynamics".
- It is recommended that some recent related studies be discussed in the paper, particularly focusing on their performance with this dataset.
[1] UniST: A Prompt-Empowered Universal Model for Urban ST Prediction. KDD2024.
[2] Fine-Grained Urban Flow Prediction. WWW2021.
[3] When Transfer Learning Meets Cross-City Urban Flow Prediction: Spatio-Temporal Adaptation Matters. IJCAI2022.
[4] Spatio-Temporal Self-Supervised Learning for Traffic Flow Prediction. AAAI2023. | - The caption for Figure 7 is incorrect, and should be corrected to "Edge Dynamics" from "Node Dynamics".
QQvhOyIldg | ICLR_2025 | 1. This paper is poorly written & presented. A lot of the content can be found in the undergraduate textbook. A substantial part of the results are informal version, say Lemma 6.1 - 6.3. Also, there is hardly any interpretation of the main results. The presentation style does not seem to be serious.
2. The technical contribution is unclear. Most of the analysis are quite standard.
3. There is no numerical experiments to verify its application in real-world dataset. | 2. The technical contribution is unclear. Most of the analysis are quite standard. |
ICLR_2021_1213 | ICLR_2021 | weakness of the paper. Then, I present my additional comments which are related to specific expressions in the main text, proof steps in the appendix etc. I would appreciate it very much if authors could address my questions/concerns under “Additional Comments” as well, since they affect my assessment and understanding of the paper; consequently my score for the paper. Summary:
• The paper focuses on convergence of two newly-proposed versions of AdaGrad, namely AdaGrad-window and AdaGrad-truncation, for finite sum setting where each component is smooth and possibly nonconvex.
• The authors prove convergence rate with respect to number of epochs T, where in each epoch one full pass over the data is performed with respect to well-known “random shuffling” sampling strategy.
• Specifically, AdaGrad-window is shown to achieve $\tilde{O}(T^{-1/2})$ rate of convergence, whereas AdaGrad-truncation attains $O(T^{-1/2})$ convergence, under component-wise smoothness and bounded gradients assumptions. Additionally, authors introduce a new condition/assumption called consistency ratio which is an essential element of their analysis.
• The paper explains the proposed modification to AdaGrad and provide their intuition for such adjustments. Then, the main results are presented followed by a proof sketch, which demonstrates the main steps of the theoretical approach.
• In order to evaluate the practical performance of the modified adaptive methods in a comparative fashion, two set of experiments were provided: training logistic regression model on MNIST dataset and Resnet-18 model on CIFAR-10 dataset. In these experiments; SGD, SGD with random shuffling, AdaGrad and AdaGrad-window were compared. Additionally, authors plot the behavior of their proposed condition “consistency ratio” over epochs. Strengths:
• I think epoch-wise analysis, especially for finite sum settings, could help provide insights into behaviors of optimization algorithms. For instance, it may enable to further investigate effect of batch size or different sampling strategies with respect to progress of the algorithms after every full pass of data. This may also help with comparative analysis of deterministic and stochastic methods.
• I have checked the proof of Theorem 1 in details and had a less detailed look at Theorems 2 and 3. I appreciate some of the technically rigorous sections of the analysis as the authors bring together analytical tools from different resources and re-prove certain results with respect to their adjustments.
• Performance comparison in the paper is rather simple but the authors try to provide a perspective of their consistency condition through numerical evidence. It gives some rough idea about how to interpret this condition.
• Main text is written in a clear way; authors highlight their modification to AdaGrad and also highlight what their new "consistency condition" is. Proposed contributions of the paper are stated clearly although I do not totally agree with certain claims. One of the main theorems has a proof sketch which gives an overall idea about authors’ approach to proving the results. Weaknesses:
• Although numerically the paper provides an insight into the consistency condition, it is not verifiable ahead of time. One needs to run a simulation to get some idea about this condition, although it still wouldn’t verify the correctness. Since authors did not provide any theoretical motivation for their condition, I am not fully convinced about this assumption. For instance, authors could argue about a specific problem setting in which this condition holds.
• Theorem 3 (AdaGrad-truncation) sets the stepsize, which depends on knowledge of $r$. I couldn’t figure out how it is possible to compute the value $r$ ahead of time. Therefore, I do not think this selection is practically applicable. Although I appreciate the theoretical rigor that goes into proving Theorem 3, I believe the concerns about computing $r$ weaken the importance of this result. If I am missing out some important point, I would like to kindly ask the authors to clarify it for me.
• The related work which is listed in Table 1, within the group “Adaptive Gradient Methods” prove \emph{iteration-wise} convergence rates for variants of Adam and AdaGrad, which I would call the usual practice. This paper argues about \emph{epoch-wise} convergence. The authors claim improvement over those prior papers although the convergence rate quantifications are not based on the same grounds. All of those methods consider the more general expectation minimization setting. I would suggest the authors to make this distinction clear and highlight iteration complexities of such methods while comparing previous results with theirs. In my opinion, total complexity comparison is more important that rate comparison for the setting that this paper considers.
• As a follow up to the previous comment, the related work could have highlighted related results in finite sum setting. Total complexity comparisons with respect to finite sum setting is also important. There exists results for finite-sum nonconvex optimization with variance reduction, e.g., Stochastic Variance Reduction for Nonconvex Optimization, 2016, Reddi et. al. I believe it is important to comparatively evaluate the results of this paper with that of such prior work.
• Numerically, authors only compare against AdaGrad and SGD. I would say this paper is a rather theory paper, but it claims rate improvements, for which I previously stated my doubts. Therefore, I would expect comparisons against other methods as well, which is of interest to ICLR community in my opinion.
• This is a minor comment that should be easy to address. For ICLR, supplementary material is not mandatory to check, however, this is a rather theoretical paper and the correctness/clarity of proofs is important. I would say authors could have explained some of the steps of their proof in a more open way. There are some crucial expressions which were obtained without enough explanations. Please refer to my additional comments in the following part.
Additional Comments:
• I haven’t seen the definition that $x_{t,m+1} = x_{t+1,1}$ in the main text. It appears in the supplements. Could you please highlight this in the main text as it is important for indexing in the analysis?
• Second bullet point of your contributions claim that "[consistency] condition is easy to verify". I do not agree with this as I cannot see how someone could guarantee/compute the value $r$ ahead of time or even after observing any sequence of gradients. Could you please clearly define what verification means in this context?
• In Assumption A3, I understand that $G_t e_i = g_{t,i}$ and $G_t e = \sum_{i=1}^{m} g_{t,i}$. I believe the existing notation makes it complicated for the reader to understand the implications of this condition.
• In the paragraph right above Section 4.2, authors state that presence of second moments, $V_{t,i}$, enables adaptive methods to have improved rates of SGD through Lemma 3. Could the authors please explain this in details?
• In Corollary 1, authors state that "the computational complexity is nearly $\tilde{O}(m^{5/2} n d^2 \epsilon^{-2})$". A similar statement exists in Corollary 2. Could you please explain what "nearly" means in this context?
• In Lemma 8 in the supplements, $aa^T$ and $bb^T$ in the main expression of the lemma are rank-1 matrices. This lemma has been used in the proof of Lemma 4. As far as I understood, Lemma 8 is used in such a way that $aa^T$ or $bb^T$ correspond to something like $g_{t,j}^2 - g_{t-1,j}^2$. I am not sure if this construction fits into Lemma 8 because, for instance, the expression $g_{t,j}^2 - g_{t-1,j}^2$ is difference of two rank-1 matrices, which could have rank $\leq 2$. Hence, there may not exist some vector $a$ such that $aa^T = g_{t,j}^2 - g_{t-1,j}^2$, hence Lemma 8 may not be applied. If I am mistaken in my judgment I am 100% open for a discussion with the authors.
• In the supplements, in section "A.1.7 PROOF OF MAIN THEOREM 1", in the expression following the first line, I didn’t understand how you obtained the last upper bound to $\nabla f(x_{t,i})$. Could you please explain how this is obtained? Score:
I would like to vote for rejecting the paper. I praise the analytically rigorous proofs for the main theorems and the use of a range of tools for proving the key lemmas. Epoch-wise analysis for stochastic methods could provide insight into behavior of algorithms, especially with respect to real-life experimental setting. However, I have some concerns:
I am not convinced about the importance of consistency ratio and that it is a verifiable condition.
Related work in Table 1 has iteration-wise convergence in the general expectation-minimization setting whereas this paper considers finite sum structure with epoch-wise convergence rates. The comparison with related work is not sufficient/convincing in this perspective.
(Minor) I would suggest the authors to have a more comprehensive experimental study with comparisons against multiple adaptive/stochastic optimizers. More experimental insight might be better for demonstrating consistency ratio.
Overall, due to the reasons and concerns stated in my review, I vote for rejecting this paper. I am open for further discussions with the authors regarding my comments and their future clarifications.
======================================= Post-Discussions =======================================
I would like to thank the authors for their clarifications. After exchanging several responses with the authors and regarding other reviews, I decide to keep my score.
Although the authors come up with a more meaningful assumption, i.e., SGC, compared to their initial condition, I am not fully convinced about the contributions with respect to prior work: SGC assumption is a major factor in the improved rates and it is a very restrictive assumption to make in practice.
Although this paper proposes theoretical contributions regarding adaptive gradient methods, the experiments could have been a bit more detailed. I am not sure whether the experimental setup fully displays improvements of the proposed variants of AdaGrad. | • In Corollary 1, authors state that “the computational complexity is nearly O ( m 5 / 2 n d 2 ϵ − 2 ) ~ ”. A similar statement exists in Corollary 2. Could you please explain what “nearly” means in this context? |
NIPS_2018_567 | NIPS_2018 | (bias against subgroups, uncertainty on certain subgroups), in applications for fair decision making. The paper is clearly structured, well written and very well motivated. Except for minor confusions about some of the math, I could easily follow and enjoyed reading the paper. As far as I know, the framework and particularly the application to fairness is novel. I believe the general idea of incorporating and adjusting to human decision makers as first class citizens of the pipeline is important for the advancement of fairness in machine learning. However, the framework still seems to encompass a rather minimal technical contribution in the sense that both a strong theoretical analysis and exhaustive empirical evaluation are lacking. Moreover, I am concerned about the real world applicability of the approach, as it mostly seems to concern situations with a rather specific (but unknown) behavior of the decision maker, which typically does not transfer across DMs, needs to be known during training. I have trouble thinking of situations where sufficient training data, both ground truth and the DMs predictions, are available simultaneously. While the authors do a good job evaluating various aspects of their method (one question about this in the detailed comments), those are only two rather simplistic synthetic scenarios. Because of the limited technical and experimental contribution, I heavy-heartedly tend to vote for rejection of the submission, even though I am a big fan of the motivation and approach. Detailed Comments - I like the setup description in Section 2.1. It is easy to follow and clearly describes the technical idea of the paper. - I have trouble understanding (the proof of) the Theorem (following line 104). You show that eq (6) and eq (7) are equal for appropriately chosen $\gamma_{defer}$. However, (7) is not the original deferring loss from eq (3). Shouldn't the result be that learning to defer and rejection learning are equivalent if for the (assumed to be) constant DM loss, $\alpha$ happens to be equal to $\gamma_{reject}$? In the theorem it sounds as if they were equivalent independent of the parameter choices for $\gamma_{reject}$ and $\alpha$. The main takeaway, namely that there is a one-to-one correspondence between rejection learning with cost $\gamma_{reject}$ and learning to defer with a DM with constant loss $\alpha$, is still true. Is there a specific reason why the authors decided to present the theorem and proof in this way? - The authors highlight various practical scenarios in which learning to defer is preferable and detail how it is expected to behave. However, this practicability seems to be heavily impaired by the strong assumptions necessary to train such model, i.e., availability of ground truth and DM's decisions for each DM of interest, where each is expected to have their own specific biases/uncertainties/behaviors during training. - What does it mean for the predictions \hat{Y} to follow an (independent?) Bernoulli equation (12) and line 197? How is p chosen, and where does it enter? Could you improve clarity by explicitly stating w.r.t. what the expectations in the first line in (12) are taken (i.e., where does p enter explicitly?) Shouldn't the expectation be over the distribution of \hat{Y} induced by the (training) distribution over X? - In line 210: The impossibility results only hold for (arguably) non-trivial scenarios. 
- When predicting the Charlson Index, why does it make sense to treat age as a sensitive attribute? Isn't age a strong and "fair" indicator in this scenario? Or is this merely for illustration of the method? - In scenario 2 (line 252), does $\alpha_{fair}$ refer to the one in eq (11)? Eq. (11) is the joint objective for learning the model (prediction and deferral) given a fixed DM? That would mean that the autodmated model is encouraged to provide unfair predictions. However, my intuition for this scenario is that the (blackbox) DM provides unfair decisions and the model's task is to correct for it. I understand that the (later fixed) DM is first also trained (semi synthetic approach). Supposedly, unfairness is encouraged only when training DM as a pre-stage to learning the model? I encourage the authors to draw the distinction between first training/simulating the DM (and the corresponding assumptions/parameters) and then training the model (and the corresponding assumptions/parameters) more clearly. - The comparison between the deferring and the rejecting model is not quite fair. The rejecting model receives a fixed cost for rejecting and thus does not need access to DM during training. This already highlights that it cannot exploit specific aspects (e.g., additional information) of the DM. On the other hand, while the deferring model can adaptively pass on those examples to DM, on which the DM performs better, this requires access to DM's predictions during training. Since DMs typically have unique/special characteristics that could vary greatly from one DM to the next, this seems to be a strong impairment for training a deferring model (for each DM individually) in practice? While the adaptivity of learning to defer unsurprisingly constitutes an advantage over rejection learning, it comes at the (potentially large) cost of relying on more data. Hence, instead of simply showing its superiority over rejection learning, one should perhaps evaluate this tradeoff? - Nitpicking: I find "above/below diagonal" (add a thin gray diagonal to the plot) easier to interpret than "above/below 45 degree", which sounds like a local property (e.g., not the case where the red line saturates and has "0 degrees"). - Is the slight trend of the rejecting model on the COMPAS dataset in Figure 4 to defer less on the reliable group a property of the dataset? Since rejection learning is non-adaptive, it is blind to the properties of DM, i.e., one would expect it to defer equally on both groups if there is no bias in the data (greater variance in outcomes for different groups, or class imbalance resulting in higher uncertainty for one group). - In lines 306-307 the authors argue that deferring classifiers have higher overall accuracy at a given minimum subgroup accuracy (MSA). Does that mean that at the same error rate for the subgroup with the largest error rate (minimum accuracy), the error rate on the other subgroups is on average smaller (higher overall accuracy)? This would mean that the differences in error rates between subgroups are larger for the deferring classifier, i.e., less evenly distributed, which would mean that the deferring classifier is less fair? - Please update the references to point to the conference/journal versions of the papers (instead of arxiv versions) where applicable. Typos line 10: learning to defer ca*n* make systems... line 97: first "the" should be removed End of line 5 of the caption of Figure 3: Fig. 3a (instead of Figs. 3a) line 356: This reference seems incomplete? 
| - When predicting the Charlson Index, why does it make sense to treat age as a sensitive attribute? Isn't age a strong and "fair" indicator in this scenario? Or is this merely for illustration of the method? |
NIPS_2017_349 | NIPS_2017 | - The paper is not self contained
Understandable given the NIPS format, but the supplementary is necessary to understand large parts of the main paper and allow reproducibility.
I also hereby request the authors to release the source code of their experiments to allow reproduction of their results.
- Use of deep-reinforcement learning is not well motivated
The problem domain seems simple enough that a linear approximation would have likely sufficed? The network is fairly small and isn't "deep" either.
- > We argue that such a mechanism is more realistic because it has an effect within the game itself, not just on the scores
This is probably the most unclear part. It's not clear to me why the paper considers one to be more realistic than the other rather than just modeling different incentives? Probably not enough space in the paper but actual comparison of learning dynamics when the opportunity costs are modeled as penalties instead. As economists say: incentives matter. However, if the intention was to explicitly avoid such explicit incentives, as they _would_ affect the model-free reinforcement learning algorithm, then those reasons should be clearly stated.
- Unclear whether bringing connections to human cognition makes sense
As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated".
* Minor comments
** Line 16:
> [18] found them...
Consider using \citeauthor{} ?
** Line 167:
> be the N -th agent’s
should be i-th agent?
** Figure 3:
Clarify what the `fillcolor` implies and how many runs were the results averaged over?
** Figure 4:
Is not self contained and refers to Fig. 6 which is in the supplementary. The figure is understandably large and hard to fit in the main paper, but at least consider clarifying that it's in the supplementary (as you have clarified for other figures from the supplementary mentioned in the main paper).
** Figure 5:
- Consider increasing the axes margins? Markers at 0 and 12 are cut off.
- Increase space between the main caption and sub-caption.
** Line 299:
From Fig 5b, it's not clear that |R|=7 is the maximum. To my eyes, 6 seems higher. | - Unclear whether bringing connections to human cognition makes sense As the authors themselves state that the problem is fairly reductionist and does not allow for mechanisms like bargaining and negotiation that humans use, it's unclear what the authors mean by ``Perhaps the interaction between cognitively basic adaptation mechanisms and the structure of the CPR itself has more of an effect on whether self-organization will fail or succeed than previously appreciated.'' It would be fairly surprising if any behavioral economist trying to study this problem would ignore either of these things and needs more citation for comparison against "previously appreciated". |
NIPS_2017_351 | NIPS_2017 | - As I said above, I found the writing / presentation a bit jumbled at times.
- The novelty here feels a bit limited. Undoubtedly the architecture is more complex than and outperforms the MCB for VQA model [7], but much of this added complexity is simply repeating the intuition of [7] at higher (trinary) and lower (unary) orders. I don't think this is a huge problem, but I would suggest the authors clarify these contributions (and any I may have missed).
- I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify.
- Figure 2 is at an odd level of abstraction where it is not detailed enough to understand the network's functionality but also not abstract enough to easily capture the outline of the approach. I would suggest trying to simplify this figure to emphasize the unary/pairwise/trinary potential generation more clearly.
- Figure 3 is never referenced unless I missed it.
Some things I'm curious about:
- What values were learned for the linear coefficients for combining the marginalized potentials in equations (1)? It would be interesting if different modalities took advantage of different potential orders.
- I find it interesting that the 2-Modalities Unary+Pairwise model under-performs MCB [7] despite such a similar architecture. I was disappointed that there was not much discussion about this in the text. Any intuition into this result? Is it related to swap to the MCB / MCT decision computation modules?
- The discussion of using sequential MCB vs a single MCT layers for the decision head was quite interesting, but no results were shown. Could the authors speak a bit about what was observed? | - I don't think the probabilistic connection is drawn very well. It doesn't seem to be made formally enough to take it as anything more than motivational which is fine, but I would suggest the authors either cement this connection more formally or adjust the language to clarify. |
NIPS_2022_1664 | NIPS_2022 | • The paper is missing an integration of the main algorithmic steps (Fill, Propagate, Decode) with the overarching flow diagram in Fig 1 which creates a gap in the presentation.
• The abstract and main text make inconsistent claims about the transmission capacity: o Abstract: “.. covertly transmit over 10000 real-world data samples within a carrier model which has 220× less parameters than the total size of the stolen data,” o Introduction: “… covertly transmit over 10000 real-world data samples within a carrier model which has 100× less parameters than the total size of the stolen data (§4.1),”
• Definitions of metrics and illustrations of qualitative results are insufficiently described and included. o For example, the equation for a learning objective in section 3.3 should be clearly described.
o Page 7: define performance difference and hiding capacity in equations. o Fig 3 is too small for the information to be conveyed (At the 200% digital magnification of Fig 3, I can see some differences in image qualities).
• The choices and constructions of a secret key and noisy vectors are insufficiently described i.e., Are the secret keys similar to the public-private keys used in the current cryptography applications? What are the requirements on creating the noisy vectors?
• How is the information redundancy built into the Fill, Propagate, Decode algorithms? o In reference to the sentence “ Finally, by comparing the performance of the secret model with or without fusion, we conclude that the robustness of Cans largely comes from the information redundancy implemented in our design of the weight pool” | • How is the information redundancy built into the Fill, Propagate, Decode algorithms? o In reference to the sentence “ Finally, by comparing the performance of the secret model with or without fusion, we conclude that the robustness of Cans largely comes from the information redundancy implemented in our design of the weight pool” |
ICLR_2021_1310 | ICLR_2021 | Lack of some critical comparisons: 1) Does the proposed method also outperform the approach that first generates using a pre-trained BigGAN-256 then upsamples using an officially pre-trained ESRGAN? 2) What about the inference time (or complexity) of NSB-GAN compared to BigGAN?
If the decoders are trained with real samples only (drawn from the imagenet dataset), the upsampled outputs may have visual artifacts due to the mismatched distribution between the train (real) and test (fake gen from sampler) images. For example, the images of the cabinet (Fig3, 4, 5th row) have overly sharpened artifacts that BigGAN does not suffer.
Overall, the suggested work effectively decreases the training time of the BigGAN using a simple idea. However, there are missing comparisons and analyses such as 1) comparison with pre-trained BigGAN -> ESRGAN 2) How the capacity of the SR model affects the FID. And lastly, since the proposed method is pipelining, there are some unexpected artifacts. | 2) How the capacity of the SR model affects the FID. And lastly, since the proposed method is pipelining, there are some unexpected artifacts. |
NIPS_2021_2050 | NIPS_2021 | 1. Transformer has been adopted for lots of NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification on the transformer, i.e. cross-layer, it does not bring much insight in aspect of machine learning. Besides, in ablation study (table4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered as significant improvement. It seems that the main improvements over other methods come from using a naïve transformer instead of adding the proposed modification. 2. This work only focuses on a niche task, which is more suitable for CV conference like CVPR rather than machine learning conference. The audience should be more interested in techniques that can work for general tasks, like general image retrieval. 3. The proposed method uses AdamW with cosine lr for training, while comparing methods only use adam with fixed lr. Directly comparing with their numbers in paper is unfair. It would be better to reproduce their results using the same setting, since most of the recent methods have their code released. | 1. Transformer has been adopted for lots of NLP and vision tasks, and it is no longer novel in this field. Although the authors made a modification on the transformer, i.e. cross-layer, it does not bring much insight in aspect of machine learning. Besides, in ablation study (table4 and 5), the self-cross attention brings limited improvement (<1%). I don’t think this should be considered as significant improvement. It seems that the main improvements over other methods come from using a naïve transformer instead of adding the proposed modification. |
NIPS_2019_834 | NIPS_2019 | (A) To my read, the discussion of the algorithm and implementation are not quite complete in the sense that I still have several important questions that I did not feel were answered: (1) I am confused as to why/how the \beta evolves during the learning process and how this effects the goals one has when using a BMDP. Do we not care about how much risk is induced (or negative effects experienced) during training? Moreover, if the policy selects a new \beta at each new time step, how do we enforce a *specific* \beta when we wish to *use* the policy? (2) The convex hull procedure seems critical to being able to actually use the algorithm, but the explanation is lacking any intuitive interpretation. For example, it's not clear to me from 4.1 and Algorithm 3 exactly how "Budget \beta [is] always respected" (comment on Algorithm 3, line 9). Can the authors provide more explanation/intuition about what is going on here? (3) The authors state that "The pseudo-code of our exploration procedure is shown in Algorithm 4 in Appendix B." Since this component of your algorithm is one of the main hypotheses that is being validated, it should appear in the main paper. Suggest swapping for Algorithms 1/2 since these are basic extensions of existing techniques. (B) There is a mismatch between the introduction of the paper, which speaks generally to using BMDPs, and the actual experiments run, which show the benefit of using BMDPs *compared to* computing a set of solutions using existing techniques like FTQ(\lambda). The introduction needs to mention that approaches like the latter *are* available solutions and frame the contribution of the paper rather as one of providing a "better" solution in whichever way the authors feel this is best described (more-efficient, etc.). MINOR COMMENTS: * It seems that `else` at the beginning of Algorithm 3, line 9 doesn't belong there. * Several times in the paper, it is mentioned that experiments are done in "two environments," but aren't there three? * The definition in (2) is odd given that you say the "budget evolves as part of the dynamics." That is, if \beta'=\beta_a (as suggested by the Dirac function, then it is merely whatever the action says it was, correct? Why is that part of "the dynamics?" | * Several times in the paper, it is mentioned that experiments are done in "two environments," but aren't there three? |
vKViCoKGcB | ICLR_2024 | - Out of the listed baselines, to the best of my knowledge only Journey TRAK [1] has been explicitly used for diffusion models in previous work. As the authors note, Journey TRAK is not meant to be used to attribute the *final* image $x$ (i.e., the entire sampling trajectory). Rather, it is meant to attribute noisy images $x_t$ (i.e., specific denoising steps along the sampling trajectory). Thus, the direct comparison with Journey TRAK in the evaluation section is not on equal grounds.
- For the counterfactual experiments, I would have liked to see a comparison against Journey TRAK [1] used at a particular step of the sampling trajectory. In particular, [1, Figure 2] shows a much larger effect of removing high-scoring images according to Journey TRAK, in comparison with CLIP cosine similarity.
- Given that the proposed method is only a minor modification of existing methods [1, 2], I would have appreciated a more thorough attempt at explaining/justifying the changes proposed by the authors.
[1] Kristian Georgiev, Joshua Vendrow, Hadi Salman, Sung Min Park, and Aleksander Madry. The
journey, not the destination: How data guides diffusion models. In Workshop on Challenges in
Deployable Generative AI at International Conference on Machine Learning (ICML), 2023.
[2] Sung Min Park, Kristian Georgiev, Andrew Ilyas, Guillaume Leclerc, and Aleksander Madry. Trak:
Attributing model behavior at scale. In International Conference on Machine Learning (ICML), 2023. | - For the counterfactual experiments, I would have liked to see a comparison against Journey TRAK [1] used at a particular step of the the sampling trajectory. In particular, [1, Figure 2] shows a much larger effect of removing high-scoring images according to Journey TRAK, in comparison with CLIP cosine similarity. |
ICLR_2023_3203 | ICLR_2023 | 1. The novelty is limited. The proposed method is too similar to other attentional modules proposed in previous works [1, 2, 3]. The group attention design seems to be related to ResNeSt [4] but it is not discussed in the paper. Although these works did not evaluate their performance on object detection and instance segmentation, the overall structures between these modules and the one that this paper proposed are pretty similar.
2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L.
3. Some of the baseline results do not match with their original paper. I roughly checked the original Mask2former paper but the performance reported in this paper is much lower than the one reported in the original Mask2former paper. For example, for panoptic segmentation, Mask2former reported 51.9 but in this paper it's 50.4, and the AP for instance segmentation reported in the original paper is 43.7 but what is reported here is 42.4.
Meanwhile, there are some missing references about panoptic segmentation that should be included in this paper [5, 6]. Reference
[1] Chen, Yunpeng, et al. "A^ 2-nets: Double attention networks." NeurIPS 2018.
[2] Cao, Yue, et al. "Gcnet: Non-local networks meet squeeze-excitation networks and beyond." T-PAMI 2020
[3] Yinpeng Chen, et al. Dynamic convolution: Attention over convolution kernels. CVPR 2020.
[4] Zhang, Hang, et al. "Resnest: Split-attention networks." CVPR workshop 2022.
[5] Zhang, Wenwei, et al. "K-net: Towards unified image segmentation." Advances in Neural Information Processing Systems 34 (2021): 10326-10338.
[6] Wang, Huiyu, et al. "Max-deeplab: End-to-end panoptic segmentation with mask transformers." CVPR 2021 | 2. Though the improvement is consistent for different frameworks and tasks, the relative gains are not very strong. For most of the baselines, the proposed methods can only achieve just about 1% gain on a relative small backbone ResNet-50. As the proposed method introduces global pooling into its structure, it might be easy to improve a relatively small backbone since it is with a smaller receptive field. I suspect whether the proposed method still works well on large backbone models like Swin-B or Swin-L. |
NIPS_2020_125 | NIPS_2020 | - In section 3.1, the logic of extending HOGA from second order is not consistent with the extension from first order to second order; i.e., second order attention creates one more intermediate state U compared to the first order attention module. However, from the second order to higher order attention module, although intermediate states U0, U1 … are created, they are only part of the intermediate feature (Concatenating them will form U with full channel resolution). In this way, it seems we could regard the higher order attention module as a special form of second order attention module. - The paper does not clearly explain the intuition as to why different channel groups should have different attention mechanisms; i.e., in what specific way the network can benefit from the proposed channel group specific attention module. - Experiments are not solid enough: 1. There are no ablation studies on the effect of parameter numbers, so it is not clear whether the performance gain is due to the proposed approach or additional parameters. 2. Although there is good performance on imageNet classification with ResNet50/34/18, there are no results with larger models like ResNet101/152. 3. There are no results using strong object detection frameworks; the current SSD framework is relatively weak (e.g. Faster RCNN would be a stronger, more standard approach); it is not clear whether the improvements would be retained with a stronger base framework. - The proposed approach requires larger FLOPS compared to baselines; i.e., any performance gain requires large computation overhead (this is particularly pronounced in Table 3). - In Table 3 shows ResNet32/56 but L222 refers to ResNet34/50, which is confusing. | 2. Although there is good performance on imageNet classification with ResNet50/34/18, there are no results with larger models like ResNet101/152. |
ICLR_2022_1838 | ICLR_2022 | 1. When introducing the theoretical results, we should make a detailed comparison with the existing cross-entropy loss results. The current writing method cannot reflect the advantages of square loss. 2. The synthetic experiment in a non-separable case seems to be a problem. Considering the nonlinear expression ability of neural networks, how to explain that the data distribution illustrated in Figure 1 is inseparable from the network model? 3. This paper presents that the loss functions like hinge loss don’t provide reliable information on the prediction confidence. In this regard, there is a lack of references to some relevant literature. [Gao, 2013] has given a detailed analysis of the advantages and disadvantages between the entire margin distribution and the minimum margin. Based on this, [Lyu, 2018] designed a square-type margin distribution loss to improve the generalization ability of DNN.
[Gao, 2013] W. Gao and Z.-H. Zhou. On the doubt about margin explanation of boosting. Artificial Intelligence 203:1-18 2013.
[Lyu, 2018] Shen-Huan Lyu, Lu Wang, and Zhi-Hua Zhou. Improving Generalization of Neural Networks by Leveraging Margin Distribution. http://arxiv.org/abs/1812.10761 | 2. The synthetic experiment in a non-separable case seems to be a problem. Considering the nonlinear expression ability of neural networks, how to explain that the data distribution illustrated in Figure 1 is inseparable from the network model? |
NIPS_2019_165 | NIPS_2019 | of the approach and experiments or list future direction for readers. The writeup is exceptionally clear and well organized-- full marks! I have only minor feedback to improve clarity: 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want to want me to look at the learning curves? Does worse performing model always result in structural collapse? What is the accuracy number? For the last task? or average? 3. Make the captions more descriptive. It's annoying to have to search through the text for your interpretation of the figures, which is usually on a different page 4. Explain the scramble network better... 5. Fig 1, Are these the same plots, just colored differently? It would be nice to keep all three on the same scale (the left one seems condensed) M-PHATE results in significantly more interpretable visualization of evolution than previous work. It also preserves neighbors better (Question: why do you think t-SNE works better in two conditions? The difference is very small tho). On continual learning tasks, M-PHATE clearly distinguishes poor performing learning algorithms via a collapse. (See the question about this in 5. Improvement). The generalization vignette shows that the heterogeneity in M-PHATE output correlates with performance. I would really like to recommend a strong accept for this paper, but my major concern is that the vignettes focus on one dataset MNIST and one NN architecture MLP, which makes the experiments feel incomplete. The results and observations made by authors would be much more convincing if they could repeat these experiments for more datasets and NN architectures. | 1. Add a few more sentences explaining the experimental setting for continual learning 2. In Fig 3, explain the correspondence between the learning curves and M-PHATE. Why do you want to want me to look at the learning curves? Does worse performing model always result in structural collapse? What is the accuracy number? For the last task? or average? |
NIPS_2021_386 | NIPS_2021 | 1. It is unclear if this proposed method will lead to any improvement for hyper-parameter search or NAS kind of works for large scale datasets since even going from CIFAR-10 to CIFAR-100, the model's performance reduced below prior art (if #samples are beyond 1). Hence, it is unlikely that this will help tasks like NAS with ImageNet dataset. 2. There is no actual new algorithmic or research contribution in this paper. The paper uses the methods of [Nguyen et al., 2021] directly. The only contribution seems to be running large-scale experiments of the same methods. However, compared to [Nguyen et al., 2021], it seems that there are some qualitative differences in the obtained images as well (lines 173-175). The authors do not clearly explain what these differences are, or why there are any differences at all (since the approach is identical). The only thing reviewer could understand is that this is due to ZCA preprocessing which does not sound like a major contribution. 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is.
Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021.
Update: Please see my comment below. I have increased the score from 3 to 5. | 3. The approach section is missing in the main paper. The reviewer did go through the “parallelization descriptions” in the supplementary material but the supplementary should be used more like additional information and not as an extension to the paper as it is. Timothy Nguyen, Zhourong Chen, and Jaehoon Lee. Dataset meta-learning from kernel ridge-regression. In International Conference on Learning Representations, 2021. Update: Please see my comment below. I have increased the score from 3 to 5. |
ICLR_2023_1599 | ICLR_2023 | of the proposed method are listed as below:
There are two key components of the method, namely, the attention computation and learn-to-rank module. For the first component, it is a common practice to compute importance using SE blocks. Therefore, the novelty of this component is limited.
Some important SOTAs are missing and some of them as below outperform the proposed method: (1) Ding, Xiaohan, et al. "Resrep: Lossless cnn pruning via decoupling remembering and forgetting." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. (2) Li, Bailin, et al. "Eagleeye: Fast sub-net evaluation for efficient neural network pruning." European conference on computer vision. Springer, Cham, 2020. (3) Ruan, Xiaofeng, et al. "DPFPS: dynamic and progressive filter pruning for compressing convolutional neural networks from scratch." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 35. No. 3. 2021.
Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included.
Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method. | 35. No.3. 2021. Competing dynamic-pruning methods are kind of out-of-date. More recent works should be included. Only results on small scale datasets are provided. Results on large scale datasets including ImageNet should be included to further verify the effectiveness of the proposed method. |
eFGQ97z5Cd | ICLR_2025 | - While MoEE introduces two methods for combining HS and RW embeddings (concatenation and weighted sum), the concatenation variant appears simplistic and less effective than the weighted sum in terms of similarity calculation. Future work could explore more sophisticated aggregation methods to fully leverage the complementary strengths of HS and RW.
- The claim that RW embeddings are more robust than HS, based solely on prompt variation tests, lacks comprehensive support. Other factors, such as model size (or maybe architectural variations), should be examined to substantiate this claim.
- The choice to evaluate on only a subset of the Massive Text Embedding Benchmark (MTEB) raises questions about generalizability; it would be helpful to understand the criteria behind this selection and whether other tasks or datasets might yield different insights. | - The choice to evaluate on only a subset of the Massive Text Embedding Benchmark (MTEB) raises questions about generalizability; it would be helpful to understand the criteria behind this selection and whether other tasks or datasets might yield different insights. |
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated". 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers.
NIPS_2019_1145 | NIPS_2019 | The paper has the following main weaknesses: 1. The paper starts with the objective of designing fast label aggregation algorithms for a streaming setting. But it doesn't spend any time motivating the applications in which such algorithms are needed. All the datasets used in the empirical analysis are static datasets. For the paper to be useful, the problem considered should be well motivated. 2. It appears that the output from the algorithm depends on the order in which the data are processed. This should be clarified. 3. The theoretical results are presented under the assumption that the predictions of FBI converge to the ground truth. Why should this assumption be true? It is not clear to me how this assumption is valid for finite R. This needs to be clarified/justified. 4. The takeaways from the empirical analysis are not fully clear. It appears that the big advantage of the proposed methods is their speed. However, the experiments don't seem to be explicitly making this point (the running times are reported in the appendix; perhaps they should be moved to the main body). Plus, the paper is lacking the key EM benchmark. Also, perhaps the authors should use a different dataset in which speed is most important to showcase the benefits of this approach. Update after the author response: I read the author rebuttal. I suggest the authors add the clarifications they detailed in the rebuttal to the final paper. Also, the motivating crowdsourcing application where speed is really important is not completely clear to me from the rebuttal. I suggest the authors clarify this properly in the final paper. | 2. It appears that the output from the algorithm depends on the order in which the data are processed. This should be clarified. |
NIPS_2019_168 | NIPS_2019 | of the submission. * originality: This is a highly specialized contribution building up novel results on two main fronts: The derivation of the lower bound on the competitive ratio of any online algorithm and the introduction of two variants of an existing algorithm so as to meet this lower bound. Most of the proofs and techniques are natural and not surprising. In my view the main contribution is the introduction of the regularized version which brings a different, and arguably more modern interpretation, about the conditions under which these online algorithms perform well in these adversarial settings. * quality: The technical content of the paper is sound and rigorous * clarity: The paper is in general very well-written, and should be easy to follow for expert readers. * significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems. * minor details/comments: - p.1, line 6-7: I would rewrite the sentence to simply express that the lower bound is $\Omega(m^{-1/2})$ - p.3, line 141: cost an algorithm => cost of an algorithm - p.4, Algorithm 1, step 3: mention somewhere that this is the projection operator (not every reader will be familiar with this notation) - p.5, Theorem 2: remind the reader that the $\gamma$ in the statement is the parameter of OBD as defined in Algorithm 1 - p.8, line 314: why surprisingly? | * significance: As mentioned above this is a very specialized paper likely to interest some experts in the online convex optimization communities. Although narrow in scope, it contains interesting theoretical results advancing the state of the art in dealing with these specific problems. |
NIPS_2022_704 | NIPS_2022 | Unfortunately, this paper has many weaknesses. First, and importantly, the main contribution of this paper (section 5.3) is not well motivated and the explanation of the contribution itself is really handwavy. Second, explanation of the different baseline approaches is also too handwavy, making the overall paper difficult to understand and not self-contained.
On the motivations: the authors argue that previous approaches cannot be applied to structured prediction. I strongly disagree with this.
l 227, "the output distribution of global prediction is often intractable": in the non-projective dependency parsing case, the distribution can be computed using the Matrix Tree Theorem [1, 2, 3]. If we restrict the problem to projective trees, this can be computed via dynamic programming [4, 5]. For the transition-based model, approximate approaches have been explored in the literature [6]
l. 222, "the instantiations of KD are usually based on a loss function computing the cross-entropy of output distributions [...] however existing methods are not directly applicable to structure prediction tasks": I don't understand why. First, the next sentence is false (see previous point), but also KL divergence between structured distributions has been studied in the literature. For non-projective trees, see [7, Section 6.3]; for methods based on dynamic programming, see [8]
l. 263, "furthermore, it is unclear how to adapt existing sampling methods to solutions that are not formalized as sequence generation": Recent work has considered sampling from the non-projective tree distributions, including sampling without replacement [9, 10]. Moreover, previous work has also considered perturb-and-MAP approaches [11, 12, 13]. Finally, in the case of dynamic programming algorithms, it is well known that it is possible to sample from the associated exponential family distributions, see e.g. [14]
Related work
As suggested by the comment above, the literature is not properly explored or cited by the authors. There are similar problems in the introduction and related work section. For example:
l. 54: authors cite Dozat and Manning (2017) for graph-based parsers, whereas the correct citation is more than 10 years older [15]
l. 21: for transition-based parsers, they cite Ma et al. (2018), better citations would be [16, 17]
[1] Structured Prediction Models via the Matrix-Tree Theorem (Koo et al.)
[2] Probabilistic Models of Nonprojective Dependency Trees (Smith and Smith)
[3] On the complexity of non-projective data-driven dependency parsing (McDonald and Satta)
[4] Semiring parsing (Goodman)
[5] Differentiable Dynamic Programming for Structured Prediction and Attention (Mensch and Blondel)
[6] Globally Normalized Transition-Based Neural Networks (Andor et al.)
[7] Efficient Computation of Expectations under Spanning Tree Distributions (Zmigrod et al.)
[8] First- and Second-Order Expectation Semirings with Applications to Minimum-Risk Training on Translation Forests (Li and Eisner)
[9] Efficient Sampling of Dependency Structures (Zmigrod et al.)
[10] Unbiased and Efficient Sampling of Dependency Trees (Stanojević)
[11] Perturb-and-MAP random fields: Using discrete optimization to learn and sample from energy models (Papandreou and Yuille)
[12] Differentiable Perturb-and-Parse: Semi-Supervised Parsing with a Structured Variational Autoencoder (Corro and Titov)
[13] Learning Latent Trees with Stochastic Perturbations and Differentiable Dynamic Programming (Corro and Titov)
[14] See section 17.4.5 of Machine Learning: A Probabilistic Perspective (Murphy) for the idea and Latent Template Induction with Gumbel-CRFs (Fu et al.) for application to CRF-like distribution
[15] Non-projective Dependency Parsing using Spanning Tree Algorithms (McDonald et al.)
[16] A Classifier-Based Parser with Linear Run-Time Complexity (Sagae, Lavie)
[17] Algorithms for Deterministic Incremental Dependency Parsing (Nivre) | 54: authors cite Dozat and Manning (2017) for graph-based parsers, whereas the correct citation is more than 10 years older [15] l. |
ICLR_2023_2622 | ICLR_2023 | Weakness: 1. The figures are not clear. For example, in Figure 2, the relation between the 3 sub-figures is confusing. Some modules are not labeled in the figure, such as CMAF, L_BT, VoLTA. 2. The experimental results are not significant. 3. Three steps for training are shown in VoLTA: a) switching off CMAF, b) switching on CMAF, c) keeping CMAF and random sampling for training. An ablation study on these parts should be conducted. 4. The key point of this paper is GOT, but there is no ablation on this part. The authors are encouraged to verify which parts work. | 1. The figures are not clear. For example, in Figure 2, the relation between the 3 sub-figures is confusing. Some modules are not labeled in the figure, such as CMAF, L_BT, VoLTA. |
NIPS_2018_707 | NIPS_2018 | weakness of the paper is the lack of experimental comparison with the state of the art. The paper spends a whole page explaining reasons why the presented approach might perform better under some circumstances, but there is no hard evidence at all. What is the reason not to perform an empirical comparison to the joint belief state approach and show the real impact of the claimed advantages and disadvantages? Since this is the main point of the paper, it should be clear when the new modification is useful. 3) Furthermore, there is an incorrect statement about the performance of the state of the art method. The paper claims that "The evidence suggests that in the domain we tested on, using multi-valued states leads to better performance." because the alternative approach "was never shown to defeat prior top AIs". This is simply incorrect. Lack of an experiment is not evidence for superiority of the method that performed the experiment without any comparison. 4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous works. Please, cite the source appropriately. 5) As explained in 1), the presented method is quite heuristic. The algorithm does not actually play the blueprint strategy, only a few values are used in the leaf states, which cannot cover the whole variety of the best response values. In order to assess whether the presented approach might be applicable also for other games, it would be very useful to evaluate it on some substantially different domains, besides poker. Clarity: The paper is well written and organized, and it is reasonably easy to understand. The impact of the key differences between the theoretic inspiration and the practical implementation should be explained more clearly. Originality: The presented method is a novel modification of continual resolving. The paper clearly explains the main distinction from the existing method. Significance: The presented method seems to substantially reduce the computational requirements of creating a strong poker bot. If this proves to be the case also for some other imperfect information games, it would be a very significant advancement in creating algorithms for playing these games. Detailed comments: 190: I guess the index should be 1. 339: I would not say MCCFR is currently the preferred solution method, since CFR+ does not work well with sampling. 349: There is no evidence the presented method would work better in Stratego. It would depend on the specific representation and how well the NN would generalize over the types of heuristics. Reaction to rebuttal: 1) The formulation of the formal statement should be clearer. Still, while you are using the BR values from the blueprint strategy in the computation, I do not see how the theory can give you any real bounds the way you use the algorithm. One way to get more realistic bounds would be to analyze the function approximation version and use error estimates from cross-validation. 2) I do not believe head-to-head evaluation makes too much sense because of well known intransitivity effects. However, since the key difference between your algorithm and DeepStack is the form of the used leaf evaluation function, it would certainly not take man-years to replace the evaluation function with the joint belief in your framework. It would be very interesting to see a comparison of exploitability and other trade-offs on smaller games, where we can still compute it. 4) I meant the use of the example for safe resolving.
5) There is no need for strong agents for some particular games to make a rigorous evaluation of equilibrium solving algorithms. You can compute exploitability in sufficiently large games to evaluate how close your approach is to the equilibrium. Furthermore, there are many domain-independent algorithms for approximating equilibria in these games you can compare to. Especially the small number of best response values necessary for the presented approach is something that would be very interesting to evaluate in other games. Line 339: I just meant that I consider CFR+ to be "the preferred domain-independent method of solving imperfect-information games", but it is not really important, it was a detailed comment. | 4) The rock-paper-scissors example is clearly inspired by an example that appeared in many previous works. Please, cite the source appropriately. |
NIPS_2022_57 | NIPS_2022 | Weakness:
The paper doesn't present a comprehensive ablation study of the effectiveness of the cost model.
The paper seems to overlook some of the latest advances in distributed training and model parallelism, for example, Alpa [1], which considers a homogeneous setting but still formulates the problem similarly, while handling heterogeneity seems to be a plain extension. Would love to hear from the authors about this particular comparison.
[1] Zheng L, Li Z, Zhang H, Zhuang Y, Chen Z, Huang Y, Wang Y, Xu Y, Zhuo D, Gonzalez JE, Stoica I. Alpa: Automating Inter-and Intra-Operator Parallelism for Distributed Deep Learning. arXiv preprint arXiv:2201.12023. 2022 Jan 28.
As the author stated, memory footprint is right now not under consideration. However, the reviewer does not consider this as a significant drawback. | 2022 Jan 28. As the author stated, memory footprint is right now not under consideration. However, the reviewer does not consider this as a significant drawback. |
ICLR_2021_1465 | ICLR_2021 | 1. The complexity analysis is insufficient. In the draft, the author only provide the rough overall complexity. A better way is to show the comparison between the proposed method and some other methods, including the number of model parameter and network forwarding time.
2. In the conversion of the point cloud to the concentric spherical signal, the Gaussian radial basis function is adopted to summarize the contribution of points. Is there any other function that can accomplish this job? The reviewer would like to see a discussion about this.
3. Figure 2 is a little ambiguous, as some symbols are not explained clearly. And the reviewer is curious about whether there is information redundancy and interference in the multi-sphere icosahedral discretization process.
4. There are some typos in the draft. The first is the wrong use of "intra-sphere" and "inter-sphere". The second is the use of two consecutive "stacking" in the Spherical Discretization subsection. Please check the full text carefully.
5. The choice of the center of the concentric spheres should be discussed both theoretically and experimentally. In the reviewer's opinion, the center of the spheres plays an important role in capturing the representation of 3D point clouds in a sphere convolution manner. | 3. Figure 2 is a little ambiguous, as some symbols are not explained clearly. And the reviewer is curious about whether there is information redundancy and interference in the multi-sphere icosahedral discretization process. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - The presentation is at times too equation-driven and the notation, especially in chapter 3, quite convoluted and hard to follow. An illustrative figure of the key concepts in section 3 would have been helpful. |
NIPS_2018_947 | NIPS_2018 | weakness of the paper, in its current version, is the experimental results. This is not to say that the proposed method is not promising - it definitely is. However, I have some questions that I hope the authors can address. - Time limit of 10 seconds: I am quite intrigued as to the particular choice of time limit, which seems really small. In comparison, when I look at the SMT Competition of 2017, specifically the QF_NIA division (http://smtcomp.sourceforge.net/2017/results-QF_NIA.shtml?v=1500632282), I find that all 5 solvers listed require 300-700 seconds. The same can be said about QF_BF and QF_NRA (links to results here http://smtcomp.sourceforge.net/2017/results-toc.shtml). While the learned model definitely improves over Z3 under the time limit of 10 seconds, the discrepancy with the competition results on similar formula types is intriguing. Can you please clarify? I should note that while researching this point, I found that the SMT Competition of 2018 will have a "10 Second wonder" category (http://smtcomp.sourceforge.net/2018/rules18.pdf). - Pruning via equivalence classes: I could not understand what is the partial "current cost" you mention here. Thanks for clarifying. - Figure 3: please annotate the axes!! - Bilinear model: is the label y_i in {-1,+1}? - Dataset statistics: please provide statistics for each of the datasets: number of formulas, sizes of the formulas, etc. - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? - Missing references: the references below are relevant to your topic, especially [a]. Please discuss connections with [a], which uses supervised learning in QBF solving, where QBF generalizes SMT, in my understanding. [a] Samulowitz, Horst, and Roland Memisevic. "Learning to solve QBF." AAAI. Vol. 7. 2007. [b] Khalil, Elias Boutros, et al. "Learning to Branch in Mixed Integer Programming." AAAI. 2016. Minor typos: - Line 283: looses -> loses | - Search models comparison 5.1: what does 100 steps here mean? Is it 100 sampled strategies? |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. It's notable that it is useful in training very deep networks (e.g. 100 layers) but it's not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks where BP and FA struggle on a larger dataset, this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for their claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos; there are no line numbers in the draft so I haven't itemized them. - Table 1, 2, 3: the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - Figure 2 right. I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. |
NIPS_2022_670 | NIPS_2022 | 1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and their performance compared with existing DP algorithms. 2. The presentation of this paper is hard to follow for the reviewer. | 1. Lack of numerical results. The reviewer is curious about how to apply it to some popular algorithms and their performance compared with existing DP algorithms. |
4ltiMYgJo9 | ICLR_2025 | 1. One of the main claims by the authors is the adaptation of the whole close-loop framework. While the authors claim it can be simply replaced by recording EEG data from human participants, there are actually no more concrete demonstrations on how. For example, what is the "specific neural activity in the brain" in this paper and in a possible real scenario? What's the difference? And how difficult is it and how much effort will it take to apply the framework to the real world? It's always easy to just claim a methodology "generalizable", but without more justification that doesn't actually help strengthen the contribution of the paper.
2. Based on 1, I feel it is not sufficiently demonstrated in the paper what role the EEG plays in the whole framework. As far as I can understand from the current paper, it seems to be related to the reward $R$ in the MDP design, because it should provide signal based on the desired neural activities. However, we know neither how the reward is exactly calculated nor what kinds of the neural signal the authors are caring about (e.g., a specific frequency bank? a specific shape of waveforms? a specific activation from some brain area?).
3. Besides the methodology, it's also not clear how the different parts of this framework perform and contribute to the final result from the experimental aspect. While in the result section we can see that the framework can yield promising visual stimuli results, it lacks either quantitative experiments and comparisons between algorithm choices, or more detailed explanations of the presented ones. (See questions.) Therefore, it's unclear to me what the exact performance of the whole framework and the individual parts is compared to other solutions.
4. Overall, the presentation of this paper is unsatisfying (and that's probably why I have the concerns in 2 and 3). On the one hand, the authors present more well-known details in the main content but didn't make their own claims clear. For example, Algorithm 1 and Algorithm 2 are a direct adaptation from previous work. Instead of using space to present them, I wish to see more on how the MDP is constructed. On the other hand, mixing citations with sentences (please use \citep instead of \cite) and a few typos (in line 222, algorithm 1, the bracket is not matched) give me the feeling that the paper is not yet ready to be published. | 3. Besides the methodology, it's also not clear how the different parts of this framework perform and contribute to the final result from the experimental aspect. While in the result section we can see that the framework can yield promising visual stimuli results, it lacks either quantitative experiments and comparisons between algorithm choices, or more detailed explanations of the presented ones. (See questions.) Therefore, it's unclear to me what the exact performance of the whole framework and the individual parts is compared to other solutions. |
NIPS_2022_489 | NIPS_2022 | Concern regarding representativeness of baselines used for evaluation
Practical benefits in terms of communication overhead & training time could be more strongly motivated
Detailed Comments:
Overall, the paper was interesting to read and the problem itself is well motivated. Formulation of the problem as an MPG appears sound and offers a variety of important insights with promising applications. There are, however, some concerns regarding evaluation fairness and practical benefits.
The baselines used for evaluation do not seem to accurately represent the state-of-the-art in CTDE. In particular, there have been a variety of recent works that explore more efficient strategies (e.g., [1-3]) and consistently outperform QMix with relatively low inter-agent communication. Although the proposed work appears effective as a fully-decentralized approach, it is unclear how well it would perform against more competitive CTDE baselines. Comparison against these more recent works would greatly improve the strength of evaluation.
Benefits in terms of reduced communication overhead could also be more strongly motivated. Presumably, communication between agents could be done over purpose-built inter-LB links, thus avoiding QoS degradation due to contention on links between LBs and servers. Even without inter-LB links, the increase in latency demonstrated in Appendix E.2.2 appears relatively low.
Robustness against dynamic changes in network setup is discussed to some degree, but it's unclear how significant this issue is in a real-world environment. Even in a large-scale setup, the number of LBs/servers is likely to remain fairly constant at the timescales considered in this work (i.e., minutes). Given this, it seems that the paper should at least discuss trade-offs with a longer training time, which could impact the relative benefits of various approaches.
Some confusion in notation: - Algorithm 2, L8 should be t = 1,…,H (for horizon)? - L100, [M] denotes the set of LBs?
Minor notes: - Some abbreviations are not defined, e.g., “NE” on L73 - Superscript notation in Eq 6 is not defined until much later (L166), which hindered understanding in an initial read.
[1] S. Zhang et al, “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019. [2] Z. Ding et al, “Learning Individually Inferred Communication for Multi-Agent Cooperation”, NeurIPS 2020. [3] T. Wang et al, “Learning Nearly Decomposable Value Functions Via Communication Minimization”, ICLR 2020. | - Some abbreviations are not defined, e.g., “NE” on L73 - Superscript notation in Eq 6 is not defined until much later (L166), which hindered understanding in an initial read. [1] S. Zhang et al, “Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control”, NeurIPS 2019. [2] Z. Ding et al, “Learning Individually Inferred Communication for Multi-Agent Cooperation”, NeurIPS 2020. [3] T. Wang et al, “Learning Nearly Decomposable Value Functions Via Communication Minimization”, ICLR 2020. |
ICLR_2022_1872 | ICLR_2022 | I list 5 concerns here, with detailed discussion and questions for the authors below
W1: While theorems suggest "existence" of a linear transformation that will approximate the posterior, the actual construction procedure for the "recovered topic posterior" is unclear
W2: Many steps are difficult to understand / replicate from main paper
W3: Unclear what theorems can say about finite training sets
W4: Justification / intuition for Theorems is limited in the main paper
Responses to W1-W3 are most important for the rebuttal.
W1: Actual procedure for constructing the "recovered topic posterior" is unclear
In both synthetic and real experiments, the proposed self-supervised learning (SSL) method is used to produce a "recovered topic posterior" p(w | x). However, the procedure used here is unclear... how do we estimate p(w | x) using the learning function f(x)?
The theorems imply that a linear function exists with limited (or zero) approximation error for any chosen scalar summary of the doc-topic weights w. However, how such a linear function is constructed is unclear. The bottom of page four suggests that when t=1 and A is full rank, "one can use the pseudoinverse of A to recover the posterior", however it seems (1) unclear what the procedure is in general and what its assumptions are, and (2) odd that the prior may not be needed at all.
Can the authors clarify how to estimate the recovered topic posterior using the proposed SSL method?
W2: Many other steps are difficult to understand / replicate from main paper
Here's a quick list of questions on experimental steps I am confused about / would have trouble reproducing
For the toy experiments in Sec. 5:
Do you estimate the topic-word parameter A? Or assume the true value is given?
What is the format for document x provided as input to the neural networks that define f(x)? The top paragraph of page 7 makes it seem like you provide an ordered list of words. Wouldn't a bag-of-words count vector be a more robust choice?
How do you set t=1 (predict one word given others) but somehow also use "the last 6 words are chosen as the prediction target"?
How do you estimate the "recovered topic posterior" for each individual model (LDA, CTM, etc)? Is this also using HMC (which is used to infer the ground-truth posterior)?
Why use 2000 documents for the "pure" topic model but 500 in test set for other models? Wouldn't more complex models benefit from a larger test set?
For the real experiments in Sec. 6:
How many topics were used?
How did you get topic-word parameters for this "real" dataset?
How big is the AG news dataset? Main paper should at least describe how many documents in train/test, and how many vocabulary words.
W3: Unclear what theorems / methods can say about finite training sets
All the theorems seem to hold when considering terms that are expectations over a known distribution over observed-data x and missing-data y. However, in practical data analysis we do not know the true data generating distribution, we only have a finite training set.
I am wondering about this method's potential in practice for modest-size datasets. For the synthetic dataset with V=5000 (a modest vocabulary size), the experiments considered 0.72 million to 6 million documents, which seems quite large.
What practically must be true of the observed dataset for the presented methods to work well?
W4: Justification / intuition for Theorems is limited in the main paper
All 3 theorems in the main paper are presented without much intuition or justification about why they should be true, which I think limits their impact on the reader. (I'll try to wade thru the supplement, but did not have time before the review deadline).
Theorem 3 tries to give intuition for the t=1 case, but I think could be stronger: why should f(x) have an optimal form p(y = v_1 | x)? Why should "these probabilities" have the form A E[w | x]? I know space is limited, but helping your reader figure things out a bit more explicitly will increase the impact.
Furthermore, the reader would benefit from understanding how tight the bounds in Theorem 4 are. Can we compute the bound quality for toy data and understand it more practically?
Detailed Feedback on Presentation
No need to reply to these in rebuttal but please do address as you see fit in any revision
Page 3:
"many topic models can be viewed"... should probably say "the generative process of many topic models can be viewed..."
the definition of A_ij is not quite right. I would not say "word i \in topic j", I would say "word i | topic j". A word is not contained in a topic; each word has a chance of being generated.
I'd really avoid writing Δ(K) and would just use Δ throughout .... unclear why this needs to be a function of K but the topic-word parameters (whose size also depends on K) does not
Should we call the reconstruction objective a "partial reconstruction" or "masked reconstruction"? I'm used to reconstruction in an auto-encoder context, where the usual "reconstruction" objective is literally to recover all observed data, not a piece of observed data that we are pretending not to see
In Eq. 1, are you assuming an ordered or unordered representation of the words in x and y?
Page 4:
I would not reuse the variable y in both reconstruction and contrastive contexts. Find another variable. Same with theta.
Page 5:
I would use f∗ to denote the exact minimizer, not just f
Figure 2 caption should clarify:
what is the takeaway for this figure? Does reader want to see low values? Does this figure suggest the approach is working as expected?
what procedure is used for the "recovered" posterior? Your proposed SSL method?
why does Pure have a non-monotonic trend as alpha gets larger? | W1: Actual procedure for constructing the "recovered topic posterior" is unclear In both synthetic and real experiments, the proposed self-supervised learning (SSL) method is used to produce a "recovered topic posterior" p(w | x). However, the procedure used here is unclear... how do we estimate p(w | x) using the learning function f(x)? |
ICLR_2023_2286 | ICLR_2023 | 1. The paper is poorly organized. It is hard to quickly get the motivations and main ideas of the proposed methods.
2. The thermal sensor and the environment setting for data collection are not described in detail. From Figure 2, why is the quality of the thermal images significantly higher than in RegDB and SYSU-MM01? Does the thermal sensor or the capturing time cause it?
3. The paper presents a transformer-based network as the backbone; what are the benefits over the CNN-based backbones in traditional methods? The reason for using such a transformer-based method is not clearly discussed.
4. The proposed multi-task triplet loss is not clearly explained. It is strongly suggested to re-organize this part and proofread it. In addition, it seems there is a mistake in Eq. (1). I suppose a max(x, 0) is missing for both terms in $L_{mtri}$.
5. The sensitivity of hyper-parameters such as $m_1$, $m_2$, $\lambda$ is not discussed. In particular, their values are not specified in the paper.
6. There are lots of grammar mistakes, typos and unclear descriptions that make the paper hard to follow. It is strongly suggested to find some experts to proofread the paper. | 5. The sensitivity of hyper-parameters such as $m_1$, $m_2$, $\lambda$ is not discussed. In particular, their values are not specified in the paper. |
NIPS_2021_1527 | NIPS_2021 | Weakness:
The unbalanced data scenario has not been properly explored by experiments. Under what circumstances can it be counted as an unbalanced data scenario, and what is the data ratio? Therefore, the experiments should not pay more attention to one given setting like TED, WMT, etc., but should construct unbalanced scenarios of different ratios by sampling data in one setting like WMT to verify this important issue.
There is a lack of a reasonable ablation study on the upsampling parameter T, so we cannot confirm whether the oversampling overfit phenomenon will occur, and to what extent will the upsampling reach.
Some baselines are missing in the experimental comparison, such as 1) giving different weights to the loss of unbalanced translation pairs so that in the later stages of training, there will be no situation where rich-resource pairs dominate the training loss; 2) the use of low-resource language pairs further finetune the multilingual model and use the method like R3F to maintain the generalization ability of the model.
In some low-resource language translations from 1.2->2.0, although the improvement of 0.8 can be claimed, it is insignificant in a practical sense.
Missing References:
Aghajanyan, Armen, et al. "Better Fine-Tuning by Reducing Representational Collapse." International Conference on Learning Representations. 2020. | 2) the use of low-resource language pairs further finetune the multilingual model and use the method like R3F to maintain the generalization ability of the model. In some low-resource language translations from 1.2->2.0, although the improvement of 0.8 can be claimed, it is insignificant in a practical sense. Missing References: Aghajanyan, Armen, et al. "Better Fine-Tuning by Reducing Representational Collapse." International Conference on Learning Representations. 2020. |
NIPS_2021_2152 | NIPS_2021 | Weakness: 1. This manuscript is more like an experimental discovery paper, and the proposed method is similar to the traditional removal method, i.e., traverse all the modal feature subsets and calculate the perceptual score, removing the last subset. The reviewer believes that the contribution of the manuscript still has room for improvement. 2. The contribution of different modalities may be different for different instances, e.g., given modalities A and B, some instances perform well with modality A, which is then the strong modality, whereas other instances perform well with modality B, which is then the strong modality. Equation 3 directly removes the modal subset of all instances. How can the problem mentioned above be dealt with? | 2. The contribution of different modalities may be different for different instances, e.g., given modalities A and B, some instances perform well with modality A, which is then the strong modality, whereas other instances perform well with modality B, which is then the strong modality. Equation 3 directly removes the modal subset of all instances. How can the problem mentioned above be dealt with? |
7vR0fWRwTX | EMNLP_2023 | 1. Relies entirely on automatic RST parsing, which remains error-prone. Better discourse understanding would likely improve benefits.
2. It seems that such RST parsers are mainly available for English (as such, the submission also mainly conducted evaluation from English to German), making it hard to generalize to other languages.
3. Using an external TextTiling tool may limit end-to-end learning of document boundaries.
4. Limited linguistic analysis of what discourse phenomena the model has learned to handle. | 1. Relies entirely on automatic RST parsing, which remains error-prone. Better discourse understanding would likely improve benefits. |
ARR_2022_112_review | ARR_2022 | - The paper does not discuss much about the linguistic aspects of the dataset. While their procedures are thoroughly described, the analyses are quite limited in that they do not reveal much about linguistic challenges in the dataset as compared to, for example, information extraction. The benefit of pretraining on the target domain seems to be the only consistent finding in their paper, and I believe this argument holds in the majority of datasets.
- Relating to the first point, authors should describe more about the traits of the experts and justify why annotation must be carried out by the experts, outside its commercial values. Were the experts linguistic experts or domain experts? Was annotation any different from what non-experts would do? Did it introduce any linguistic challenges?
- The thorough description of the processes is definitely a strength; it might be easier to follow if the authors moved some of the details to the appendix.
L23: I was not able to find in the paper that single-task models consistently outperformed multi-task models. Could you elaborate a bit more about this?
Table 1: It would be easier if you explained "Type of Skills" in the caption. It might also be worth denoting the number of sentences for your dataset, as Jia et al. (2018) looks larger than SkillSpan at first glance.
Section 3: This section can be improved to better explain which of "skill", "knowledge" and "attitude" correspond to "hard" and "soft" knowledge. Soft skills are referred to as attitudes (L180) but this work seems to only consider "skill" and "knowledge" (L181-183 and Figure 1)? This contradicts the claim that SkillSpan incorporates both hard and soft knowledge.
L403: I don't think it is the field's norm to call it multi-task learning when it is merely solving two sequential labeling problems at the same time.
L527: Is there any justification as to why you suspected that domain-adaptive pre-training leads to longer spans?
L543: What is continuous pretraining?
L543: "Pre-training" and "pretraining" are not spelled consistently. | - Relating to the first point, authors should describe more about the traits of the experts and justify why annotation must be carried out by the experts, outside its commercial values. Were the experts linguistic experts or domain experts? Was annotation any different from what non-experts would do? Did it introduce any linguistic challenges? |
NIPS_2022_1338 | NIPS_2022 | )
There is no experiment to support the limitations of Free-Lunch as explicitly claimed by the authors. Even though it is observed that the overall performance has been improved, it is not quite clear whether such a performance gain indeed comes from addressing the limitations of Free-Lunch.
The presentation can be further improved as the current notations are too complicated and some consecutive sentences are not smoothly linked to each other.
In the overall performance comparison (Table 1), the performance gain of H-OT is somewhat marginal particularly in 1shot settings. | 1), the performance gain of H-OT is somewhat marginal particularly in 1shot settings. |
ICLR_2022_497 | ICLR_2022 | I have the following questions to which I wish the author could respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point them out.
Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and therefore, the CVAE essentially attempts to approximate the true distribution P. In such a sense, if the true distribution P is independent of the context (which is the case in the experiments in this paper), I do not see the rationale for having the scenarios conditioned on the context, which in theory does not provide any statistical evidence. Therefore, the rationale behind CVAE-SIP is not clear to me. If the goal is not to approximate P but to solve the optimization problem, then having the objective values involved as a predicting goal is reasonable; in this case, having the context involved is justified because they can have an impact on the optimization results. Thus, CVAE-SIPA to me is a valid method. - While reducing the scenarios from 200 to 10 is promising, the quality of optimization has decreased a little bit. On the other hand, in Figure 2, using K-medoids with K=20 can perfectly recover the original value, which suggests that K-medoids is a decent solution and complex learning methods are not necessary for the considered settings. In addition, I am also wondering the performance under the setting that the 200 scenarios (or random scenarios of a certain number from the true distributions) are directly used as the input of CPLEX. In addition, to justify the performance, it is necessary to provide information about robustness as well as to identify the case where simple methods are not satisfactory (such as larger graphs).
Minor concerns: - Given the structure of the proposed CVAE, the generation process takes the input of z and c, where z is derived from w. This suggests that the proposed method requires us to know a collection of scenarios from the true distribution. If this is the case, it would be better to have a clear problem statement in Sec 3. Based on such understanding, I am wondering about the process of generating scenarios used for getting K representatives - it would be great if code like Alg 1 was provided. - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). - The structure of the encoder is not clear to me. The notation q_{\phi} is used to denote two different functions q(z | w, D) and q(c, D). Does that mean they are the same network? - It would be better to experimentally justify the choice of the dimension of c and z. - It looks to me that the proposed methods are designed for graph-based problems, while two-stage integer programming does not have to involve graph problems in general. If this is the case, it would be better to clearly indicate the scope of the considered problem. Before reaching Sec 4.2, I was thinking that the paper could address general settings. - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me what is meant by “append objective values to the representations” at the beginning of Sec 5. - The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization. | - The approximation error is defined as the gap between the objective values, which is somehow ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization. |
NIPS_2020_911 | NIPS_2020 | 1. While the idea of jointly discovering, hallucinating, and adapting is interesting, there is a complete lack of discussing the impact of adding additional parameters and additional computational effort due to the multi-stage training and the multiple discriminators. The authors should provide this analysis for a fair comparison with the baseline [31, 33, *]. 2. Splitting the target data into easy and hard is already explored in the context of UDA. 3. Discovering the latent domain from the target domain is already proposed in [24]. 4. The problem of Open Compound Domain Adaptation is already presented in [**]. 5. Hallucinating the latent target domains is achieved through an image translation network adapted from [5]. 6. Style consistency loss to achieve diverse target styles has been used in previous works. 7. While the existing UDA methods [31,33] only use one discriminator, it is unclear to me why authors have applied multiple discriminators. 8. The details of the discriminator have not been discussed. 9. I was wondering why including the hallucination part reduces the performance in Table 1(b). It seems like the Discover module with [31] performs better than (Discover + Hallucinate + [31]). Also, the complex adapting stage where the authors used multiple discriminators mostly brings performance improvement. More importantly, did authors try to run the baseline models [17, 25, 31, 33, 39] with a similar longer training scheme? Otherwise, it is unfair to compare with the baselines. 10. Since the authors mentioned that splitting the training process helps to achieve better performance, It could be interesting to see the results of single-stage and multi-stage training. 11. It is not well explained why the adaptation performance drops when K > 3. Also, the procedure of finding the best K seems ad hoc and time-consuming. 12. I am just curious to see how the proposed method performs in a real domain adaptation scenario (GTA5->CityScapes). [*] Fei Pan, Inkyu Shin, François Rameau, Seokju Lee, In So Kweon. Unsupervised Intra-domain Adaptation for Semantic Segmentation through Self-Supervision. In CVPR 2020. [**] Liu, Ziwei and Miao, Zhongqi and Pan, Xingang and Zhan, Xiaohang and Lin, Dahua and Yu, Stella X. and Gong, Boqing. Open Compound Domain Adaptation. In CVPR 2020. | 1. While the idea of jointly discovering, hallucinating, and adapting is interesting, there is a complete lack of discussing the impact of adding additional parameters and additional computational effort due to the multi-stage training and the multiple discriminators. The authors should provide this analysis for a fair comparison with the baseline [31, 33, *]. |
NIPS_2017_369 | NIPS_2017 | of their presented approach, showing promising results on MNIST variants and drawbacks on more realistic tasks like Cifar10. Clarity
The paper is well-organized, but some details are confusing or unclear.
- Discuss difference to original capsule work.
- Why are 1-3 refinement iterations chosen, what happens after more iterations?
- How many iterations were necessary for Cifar10?
- Compare the computational cost of baseline and capsules, as well as the cost of the refinement steps.
- What happens when the invariant sum over the coupled prediction vectors in equation (2) and the associated non-linearity are replaced by a simple linear layer and standard non-linearity?
- line 135: "We test using a single model with no ... data augmentation". A couple of lines before, the authors mention they do moderately augment data with shifts. Why do shifts improve performance, given that the authors claim capsules are designed to be robust to such variations? Originality
The presented work is original as it introduces a new routing principle for capsule networks.
However, novelty with respect to classical capsules should be discussed more clearly.
Relevant related work, either dealing with separating filter response into magnitude and orientation, estimating keypoints or doing locally adaptive filtering, as opposed to global normalization of STNs: https://arxiv.org/abs/1701.01833 https://arxiv.org/abs/1703.06211 https://arxiv.org/abs/1612.04642 https://arxiv.org/abs/1706.00598 https://arxiv.org/abs/1605.01224 https://arxiv.org/abs/1605.09673 Significance
The presented results are not very strong and it is hard to say how significant the findings are, as the authors do not thoroughly investigate more interesting domains than digits.
Performance on MNIST is a very limited metric, given that:
i) Saturated at this point
ii) Scattering has shown that local invariance wrt deformation, translation and rotation is enough to achieve very good performance
iii) It lacks ambiguity and many other properties that make natural images so challenging, especially the assumption of only one entity per location becomes questionable under clutter
The robustness results on affine and overlapping MNIST are promising, but should be validated on more realistic tasks with more challenging local statistics.
It would be great if the authors would provide the reader with insight into strengths and weaknesses on more realistic problems.
Some suggestions:
i) More thorough evaluation + visualisations on Cifar10. The results seem weak for now, but might shed some light on failure modes and work to be accomplished by follow-up papers
ii) Check if affine robustness holds for Cifar10 as well to similar degree, this would change my vote on the paper
iii) The iterative Tagger (Graeff et al.) might give some inspiration for additional experiments with more natural ambiguity and should be discussed in related work as well
A strong analysis on the drawbacks of the presented method and open problems would make this a very useful paper, but in its current form, it is hard to tell what the suggested method achieves beyond being potentially more representationally efficient and robust on variants of MNIST. | - What happens when the invariant sum over the coupled prediction vectors in equation (2) and the associated non-linearity are replaced by a simple linear layer and standard non-linearity? |