| paper_id (stringlengths 10-19) | venue (stringclasses 15 values) | focused_review (stringlengths 7-10.2k) | point (stringlengths 47-690) |
---|---|---|---|
NIPS_2016_386 | NIPS_2016 | , however. First of all, there is a lot of sloppy writing, typos and undefined notation. See the long list of minor comments below. A larger concern is that there are some parts of the proof I could not understand, despite trying quite hard. The authors should focus their response to this review on these technical concerns, which I mark with ** in the minor comments below. Hopefully I am missing something silly. One also has to wonder about the practicality of such algorithms. The main algorithm relies on an estimate of the payoff for the optimal policy, which can be learnt with sufficient precision in a "short" initialisation period. Some synthetic experiments might shed some light on how long the horizon needs to be before any real learning occurs. A final note. The paper is over length. Up to the two pages of references it is 10 pages, but only 9 are allowed. The appendix should have been submitted as supplementary material and the reference list cut down. Despite the weaknesses I am quite positive about this paper, although it could certainly use quite a lot of polishing. I will raise my score once the ** points are addressed in the rebuttal. Minor comments: * L75. Maybe say that pi is a function from R^m \to \Delta^{K+1} * In (2) you have X pi(X), but the dimensions do not match because you dropped the no-op action. Why not just assume the 1st column of X_t is always 0? * L177: "(OCO )" -> "(OCO)" and similar things elsewhere * L176: You might want to mention that the learner observes the whole concave function (full information setting) * L223: I would prefer to see a constant here. What does the O(.) really mean here? * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. * L384: Could mention that you mean |Y_t - Y_{t-1}| \leq c_t almost surely. ** L431: \mu_t should be \tilde \mu_t, yes? * The algorithm only stops /after/ it has exhausted its budget. Don't you need to stop just before? (the regret is only trivially affected, so this isn't too important). * L213: \tilde \mu is undefined. I guess you mean \tilde \mu_t, but that is also not defined except in Corollary 1, where it is just given as some point in the confidence ellipsoid in round t. The result holds for all points in the ellipsoid uniformly with time, so maybe just write that, or at least clarify somehow. ** L435: I do not see how this follows from Corollary 2 (I guess you meant part 1, please say so). So first of all mu_t(a_t) is not defined. Did you mean tilde mu_t(a_t)? But still I don't understand. pi^*(X_t) is the (possibly random) optimal static strategy while \tilde \mu_t(a_t) is the optimistic mu for action a_t, which may not be optimistic for pi^*(X_t)? I have similar concerns about the claim on the use of budget as well. * L434: The \hat v^*_t seems like strange notation. Elsewhere the \hat is used for empirical estimates (as is standard), but here it refers to something else. * L178: Why not say what Omega is here. Also, OMD is a whole family of algorithms. It might be nice to be more explicit. What link function? Which theorem in [32] are you referring to for this regret guarantee? * L200: "for every arm a" implies there is a single optimistic parameter, but of course it depends on a ** L303: Why not choose T_0 = m Sqrt(T)? Then the condition becomes B > Sqrt(m) T^(3/4), which improves slightly on what you give.
* It would be nice to have more interpretation of theta (I hope I got it right), since this is the most novel component of the proof/algorithm. | * L240 and L428: "is sufficient" for what? I guess you want to write that the sum of the "optimistic" hoped for rewards is close to the expected actual rewards. |
NIPS_2016_339 | NIPS_2016 | weakness of the model. How would the values in Table 1 change without this extra assumption? 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy? But more importantly: How were the parameters chosen? Maximum likelihood estimates? 4. An answer to this point may be beyond the scope of this work, but it may be interesting to think about it. It is mentioned (lines 104-106) that "the examples [...] should maximally disambiguate the concept being taught from other possible concepts". How is disambiguation measured? How can disambiguation be maximized? Could there be an information-theoretic approach to these questions? Something like: the teacher chooses samples that maximally reduce the entropy of the assumed posterior of the student. Does the proposed model do that? Minor points: • line 88: The optimal policy is deterministic. Hence I'm a bit confused by "the stochastic optimal policy". Is the above-defined "Boltzmann policy" meant? • What are d and h in equation 2? • line 108: "to calculate this ..." What is meant by "this"? • Algorithm 1: Require should also include epsilon. Does line 1 initialize the set of policies to an empty set? Are the policies in line 4 added to this set? Does calculateActionValues return the Q* defined in line 75? What is M in line 6? How should p_min be chosen? Why is p_min needed anyway? • Experiment 2: Is the reward 10 points (line 178) or 5 points (line 196)? • Experiment 2: Is 0A the condition where all tiles are dangerous? Why are the likelihoods so much larger for 0A? Is it reasonable to average over likelihoods that differ by more than an order of magnitude (0A vs 2A-C)? • Text and formulas should be carefully checked for typos (e.g. line 10 in Algorithm 1: delta > epsilon; line 217: 1^-6;) | 3. I didn't find all parameter values. What are the model parameters for task 1? What lambda was chosen for the Boltzmann policy? But more importantly: How were the parameters chosen? Maximum likelihood estimates? |
GeFFYOCkvS | EMNLP_2023 | - The authors have reproduced a well-known result in the literature--left political bias in ChatGPT and in LLMs in general--using the "coarse" (their description) methodology of passing a binary stance classifier over ChatGPT's output. The observation that language models reproduce the biases of the corpora on which they're trained has been made at each step of the evolution of these models, from word2vec to BERT to ChatGPT, and so it's unclear why this observation needs to once again be made using the authors' "coarse" methodology.
- The authors' decision to filter by divisive topic introduces an unacknowledged prior: for most of the topics given in the appendix (immigration, the death penalty, etc.), the choice to bother writing about the topic itself introduces bias. This choice will be reflected in both the frequency with which such articles appear and in the language used in those articles. The existence of the death penalty, for example, might be considered unproblematic on the right and, for that reason, one will see fewer articles on the subject in right-leaning newspapers. When they do appear, neutral language will be employed. The opposite is likely true for a left-leaning publication, for whom the existence of the death penalty is a problem: the articles will occur with more frequency and will employ less neutral language. For these reasons, it's not surprising that the authors' classifier, which was annotated via distant supervision using political slant tags assigned by Wikipedia, will tend to score most content generated by an LLM as left-leaning. | - The authors have reproduced a well-known result in the literature--left political bias in ChatGPT and in LLMs in general--using the "coarse" (their description) methodology of passing a binary stance classifier over ChatGPT's output. The observation that language models reproduce the biases of the corpora on which they're trained has been made at each step of the evolution of these models, from word2vec to BERT to ChatGPT, and so it's unclear why this observation needs to once again be made using the authors' "coarse" methodology. |
ICLR_2023_3705 | ICLR_2023 | 1) The main assumption is borrowed from other works but is actually rarely used in the optimization field. Moreover, the benefits of this assumption are not well investigated. For example, a) why is it more reasonable than the previous one? b) why can it add the gradient norm L_1 \nabla f(w_1) in Eqn (3), or why do we not add some other term? It should be mentioned that a milder condition does not mean it is better, since it may not reflect the truth. For me, problem b) is especially important in this work, since the authors do not explain and investigate it well.
2) Results in Theorem 1 show that Adam actually does not converge, since there is a constant term O(D_0^{0.5}\delta) in Eqn. (5). This is not intuitive; the authors claim it is because the learning rate may not diminish. But many previous works, e.g. [ref 1], can prove that Adam-type algorithms converge even when using a constant learning rate. Of course, they use the standard smoothness condition. But the (L0,L1)-smoothness condition should not cause this kind of convergence issue, since for nonconvex problems, in most cases, we only need the learning rate to be small and do not care whether it diminishes to zero.
[ref 1] Dongruo Zhou, Jinghui Chen, et al. On the Convergence of Adaptive Gradient Methods for Nonconvex Optimization
3) It is not clear what the challenges are when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply the standard analysis under the (L0,L1)-smoothness condition. So it would be better to explain the challenges, especially the difference between this work and Zhang et al.
4) Under the same assumption, the authors use examples to show the advantage of Adam over GD and SGD. This is good. But one issue is whether the example is reasonable, i.e., whether it shares similar properties with practical problems, especially neural networks. This is important since both SGD and Adam are widely used in the deep learning field.
5) In the work, when comparing SGD and Adam, the authors explain that the advantage of Adam comes from cases where the local smoothness varies drastically across the domain. It is not very clear to me why Adam can better handle this case. Maybe one intuitive example could help.
6) The most important problem is that this work does not provide new insights, since it is well known that the second-order moment can help the convergence of Adam. This work does not provide any insights beyond this point and also does not give any practical solution for further improvement. | 3) It is not clear what the challenges are when the authors analyze Adam under the (L0,L1)-smoothness condition. It seems one can directly apply the standard analysis under the (L0,L1)-smoothness condition. So it would be better to explain the challenges, especially the difference between this work and Zhang et al. |
ICLR_2023_1979 | ICLR_2023 | Weaknesses: 1. It seems that the method part is very similar to the related work cited in the paper: Generating Adversarial Disturbances for Controller Verification. Could the authors provide more clarification on this? 2. The experimental comparison to RRT* does not look good: even though the RRT* baseline is an oracle without partial observability, the visible region is still very large, which covers more than half of the obstacles. In this case, the naive RRT* (as mentioned in supp C1) can still outperform the proposed method by a large margin on the first task. | 1. It seems that the method part is very similar to the related work cited in the paper: Generating Adversarial Disturbances for Controller Verification. Could the authors provide more clarification on this? |
NIPS_2018_15 | NIPS_2018 | - The hGRU architecture seems pretty ad-hoc and not very well motivated. - The comparison with state-of-the-art deep architectures may not be entirely fair. - Given the actual implementation, the link to biology and the interpretation in terms of excitatory and inhibitory connections seem a bit overstated. Conclusion: Overall, I think this is a really good paper. While some parts could be done in a bit more principled and perhaps simpler way, I think the paper makes a good contribution as it stands and may inspire a lot of interesting future work. My main concern is the comparison with state-of-the-art deep architectures, where I would like the authors to perform a better control (see below), the results of which may undermine their main claim to some extent. Details: - The comparison with state-of-the-art deep architectures seems a bit unfair. These architectures are designed for dealing with natural images and therefore have an order of magnitude more feature maps per layer, which are probably not necessary for the simple image statistics in the Pathfinder challenge. However, this difference alone increases the number of parameters by two orders of magnitude compared with hGRU or smaller CNNs. I suspect that using the same architectures with a smaller number of feature maps per layer would bring the number of parameters much closer to the hGRU model without sacrificing performance on the Pathfinder task. In the author response, I would like to see the numbers for this control at least on the ResNet-152 or one of the image-to-image models. The hGRU architecture seems very ad-hoc. - It is not quite clear to me which feature makes the difference between GRU and hGRU. Is it the two steps, the sharing of the weights W, or the additional constants that are introduced everywhere and in each iteration (eta_t)? I would have hoped for a more systematic exploration of these features. - Why are the gain and mix where they are? E.g. why is there no gain going from H^(1) to \tilde H^(2)? - I would have expected Eqs. (7) and (10) to be analogous, but instead one uses X and the other one H^(1). Why is that? - Why are both H^(1) and C^(2) multiplied by kappa in Eq. (10)? - Are alpha, mu, beta, kappa, omega constrained to be positive? Otherwise the minus and plus signs in Eqs. (7) and (10) are arbitrary, since some of these parameters could be negative and invert the sign. - The interpretation of excitatory and inhibitory horizontal connections is a bit odd. The same kernel (W) is applied twice (but on different hidden states). Once the result is subtracted and once it's added (but see the question above whether this interpretation even makes sense). Can the authors explain the logic behind this approach? Wouldn't it be much cleaner and make more sense to learn both an excitatory and an inhibitory kernel and enforce positive and negative weights, respectively? - The claim that the non-linear horizontal interactions are necessary does not appear to be supported by the experimental results: the nonlinear lesion performs only marginally worse than the full model. - I do not understand what insights the eigenconnectivity analysis provides. It shows a different model (trained on BSDS500 rather than Pathfinder) for which we have no clue how it performs on the task, and the authors do not comment on how to interpret the fact that the model trained on Pathfinder does not show these same patterns.
Also, it's not clear to me where the authors see the "association field, with collinear excitation and orthogonal suppression." For that, we would have to know the preferred orientation of a feature and then look at its incoming horizontal weights. If that is what Fig. 4a shows, it needs to be explained better. | - The hGRU architecture seems pretty ad-hoc and not very well motivated. |
ICLR_2023_2851 | ICLR_2023 | Loss function: The proposed Decoupled Uniformity loss extends the uniformity loss in [54] and does not explicitly include the alignment loss in [54], which is elegant. The authors claimed that the proposed loss function can implicitly encourage alignment (also shown theoretically in Theorem 1 with insufficient training samples). The reviewer wonders what would happen if you added the alignment loss similarly to [54]. Would that increase or degrade the performance in practice, i.e., when the number of training samples is neither too low nor infinite? As shown in the experiments, without the prior information, the proposed method cannot beat the SOTA.
Assumptions: The authors introduced the Weak-aligned encoder, which is claimed to be weaker than previous assumptions such as L-smoothness. In the implementation, how can one ensure this assumption is satisfied?
Assumptions: How can Assumption 2 be ensured in the implementation?
Definition 3.7: How is the value of \lambda found? Experiments:
- From the experimental results, without the prior information (the same as the benchmarks), the proposed method has no advantage compared to the SOTA. The advantage only shows when using the prior knowledge. Such a comparison is a bit unfair, because in this case the proposed method essentially requires two representation models learned on each dataset, i.e., VAE/GAN + CL. Such extra complexity and cost need to be considered.
- Kernel quality is crucial to the downstream performance, as shown experimentally in Appendix Fig. 4. In the implementation, how is a proper kernel with optimal parameters chosen? Are these parameters jointly learned or picked via the validation sets?
- In Table 2, the performance of the proposed method is shown with 4 views. Do the benchmarks also use 4 views? If not, please provide results with 2 views for a fair comparison. | - From the experimental results, without the prior information (the same as the benchmarks), the proposed method has no advantage compared to the SOTA. The advantage only shows when using the prior knowledge. Such a comparison is a bit unfair, because in this case the proposed method essentially requires two representation models learned on each dataset, i.e., VAE/GAN + CL. Such extra complexity and cost need to be considered. |
ARR_2022_60_review | ARR_2022 | - Underdefined and conflated concepts - Several important details missing - Lack of clarity in how datasets were curated prevents one from assessing their validity - Too many results which are not fully justified or explained
This is a very important, interesting, and valuable paper with many positives. First and foremost, annotators’ backgrounds are an important factor and should be taken into consideration when designing datasets for hate speech, toxicity, or related phenomena. The paper not only accounts for demographic variables, as done in previous work, but also for other attitudinal covariates, like attitude towards free speech, that are well-chosen. The paper presents two well-thought-out experiments and reports the results, which contain several important findings, in a clear manner.
It is precisely because of the great potential and impact of this paper that I think the current manuscript requires more consideration and fine-tuning before it can reach its final stage. At this point, there seems to be a lack of important details that prevents me from fully gauging the paper’s findings and claims. Generally: - There were too many missing details (for example, what is the distribution of people with ‘free of speech’ attitudes? What is the correlation of the chosen scale item in the breadth-of-posts study?). On a minor note, many important points are relegated to the appendix.
- Certain researcher choices and experiment design choices were not justified (for example, why were these particular scales used?)
- The explanation of the creation of the breadth-of-posts was confusing. How accurate was the classification of AAE dialect and vulgarity? - The toxicity experiment was intriguing, but too little space was devoted to it for it to be meaningful.
More concretely, - With regard to terminology and concepts, toxicity and hate speech may be related but are not the same thing. The instructions to the annotators seem to conflate both. The paper also doesn’t present a concrete definition of either. While it might seem redundant or trivial, the wording to annotators plays an important role and can confound the results presented here.
- Why were the particular scales chosen for obtaining attitudes? Particularly, for empathy there are several scale items [1], so why choose the Interpersonal Reactivity Index?
- What was the distribution of the annotators’ backgrounds with respect to the attitudes? For example, if there are too few ‘free of speech’ annotators, then the results shown in Tables 3, 4, etc. are underpowered. - What were the correlations of the chosen attitudinal scale item for the breadth-of-posts study with the toxicity in the breadth-of-workers study?
- How accurate is the automated classification in the breadth-of-posts experiment, i.e., how well does the stated technique differentiate identity vs. non-identity vulgarity or AAE language for that particular dataset? Particularly, how can it be ascertained whether the n-word was used as a reclaimed slur or not? - Along those lines, Section 6 discusses perceptions of vulgarity, but there are too many confounds here. Using b*tch in a sentence can be an indication of vulgarity and toxicity (due to sexism).
- In my opinion, the Perspective API experiment was interesting but rather shallow. My suggestion would be to follow up on it in more detail in a new paper rather than include it in this one. The newly created space could be used to add the missing details mentioned in the review. - Finally, given that the paper notes that MTurk tends to be predominantly liberal and the authors (commendably) took several steps to ensure greater participation from conservatives, I was wondering if ‘typical’ hate speech datasets are annotated by more homogeneous annotators compared to the sample in this paper. What could be the implications of this? Do this paper's findings then hold for existing hate speech datasets?
Besides these, I also note some ethical issues in the ‘Ethical Concerns’ section. To conclude, while my rating might seem quite harsh, I believe this work has great potential and I hope to see it enriched with the required experimental details.
References: [1] Gerdes, Karen E., Cynthia A. Lietz, and Elizabeth A. Segal. "Measuring empathy in the 21st century: Development of an empathy index rooted in social cognitive neuroscience and social justice." Social Work Research 35, no. 2 (2011): 83-93. | - There were too many missing details (for example, what is the distribution of people with ‘free of speech’ attitudes? What is the correlation of the chosen scale item in the breadth-of-posts study?). On a minor note, many important points are relegated to the appendix. |
ICLR_2022_2323 | ICLR_2022 | Weaknesses:
1. The literature review is inaccurate, and connections to prior works are not sufficiently discussed. To be more specific, there are three connections, (i) the connection of (1) to prior works on multivariate unlabeled sensing (MUS), (ii) the connection of (1) to prior works in unlabeled sensing (US), and (iii) the connection of the paper to (Yao et al., 2021).
(i) In the paper, the authors discussed this connection (i). However, the experiments shown in Figure 2 do not actually use the MUS algorithm of (Zhang & Li, 2020) to solve (1); instead the algorithm is used to solve the missing entries case. This seems to be an unfair comparison as MUS algorithms are not designed to handle missing entries. Did the authors run matrix completion prior to applying the algorithm of (Zhang & Li, 2020)? Also, the algorithm of (Zhang & Li, 2020) is expected to fail in the case of dense permutation.
(ii) Similar to (i), the methods for unlabeled sensing (US) can also be applied to solve (1), using one column of B_0 at a time. There is an obvious advantage because some of the US methods can handle arbitrary permutations (sparse or dense), and they are immune to initialization. In fact, these methods were used in (Yao et al., 2021) for solving more general versions of (1) where each column of B has undergone arbitrary and usually different permutations; moreover, this can be applied to the d-correspondence problem of the paper. I kindly ask the authors to consider incorporating discussions and reviews of those methods.
(iii) Finally, the review of (Yao et al., 2021) is not very accurate. The framework of (Yao et al., 2021), when applied to (1), means that the subspace that contains the columns of A and B is given (when generating synthetic data the authors assume that A and B come from the same subspace). Thus the first subspace-estimation step in the pipeline of (Yao et al., 2021) is automatically done; the subspace is just the column space of A. As a result, the method of (Yao et al., 2021) can handle the situation where the rows of B are densely shuffled, as discussed above in (ii). Also, (Yao et al., 2021) did not consider only "a single unknown correspondence". In fact, (Yao et al., 2021) does not utilize the prior knowledge that each column of B is permuted by the same permutation (which is the case of (1)); instead, it assumes every column of B is arbitrarily shuffled. Thus it considers a more general situation than (1) and the d-correspondence problem. Finally, (Yao et al., 2021) discusses theoretical aspects of (1) with missing entries, while an algorithm for this was missing until the present work.
2. In several places the claims of the paper are not very rigorous. For example,
(i) Problem (15) can be solved via linear assignment algorithms to global optimality, so why do the authors claim that "it is likely to fall into an undesirable local solution"? Also, I did not find a comparison of the proposed approach with linear assignment algorithms.
(ii) Problem (16) seems to be "strictly convex", not "strongly convex". Its Hessian has positive eigenvalues everywhere, but the minimum eigenvalue is not lower bounded by some positive constant. This is my feeling though, as in the case of logistic regression; please verify this.
(iii) The Sinkhorn algorithm seems to use O(n^2) time per iteration, as in (17) there is a term C(hat{M_B}), which needs O(n^2) time to be computed. Experiments show that the algorithm needs > 1000 iterations to converge. Hence, in the regime where n << 1000 the algorithm might take much more time than O(n^2) (this is the regime considered in the experiments). Also, I did not see any report of running times. Thus I feel uncomfortable seeing the authors claim in Section 5 that "we propose a highly efficient algorithm".
3. Even though an error bound is derived in Theorem 1 for the nuclear norm minimization problem, there is no guarantee of success for the alternating minimization proposal. Moreover, the algorithm requires several parameters to tune, and is sensitive to initialization. As a result, the algorithm has very large variance, as shown in Figure 3 and Table 1. Questions:
1. In (3), the last term r + H(pi_P) and the term C(pi_P) are very interesting. Could you provide some intuition for how they show up, and in particular give an example?
2. I find Assumption 1 not very intuitive; and it is unclear to me why "otherwise the influence of the permutation will be less significant". Is it that the unknown permutation is less harmful if the magnitudes of A and B are close?
3. Solving the nuclear norm minimization program seems to be NP-hard as it involves optimization over permutation matrices and a complicated objective. Is there any hardness result for this problem?
Suggestions: The following experiments might be useful.
1. Sensitivity to permutation sparsity: As shown in the literature of unlabeled sensing, the alternating minimization of (Abid et al., 2017) works well if the data are sparsely permuted. This might also apply to the proposed alternating minimization algorithm here.
2. Sensitivity to initialization: One could present the performance as a function of the distance of the initialization M^0 to the ground-truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 such that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases.
3. Sensitivity to other hyper-parameters.
Minor Comments on language usage: (for example)
1. "we typically considers" in the above of (7)
2. "two permutation" in the above of Theorem 1
3. "until converge" in the above of (14)
4. ......
Please proofread the paper and fix all language problems. | 2. Sensitivity to initialization: One could present the performance as a function of the distance of the initialization M^0 to the ground-truth M^*. That is, for varying distance c (say from 0.01:0.01:0.1), randomly sample a matrix M^0 such that ||M^0 - M^*||_F < c as initialization, and report the performance accordingly. One would expect that the mean error and variance increase as the quality of initialization decreases. |
ICLR_2022_1923 | ICLR_2022 | Weaknesses: 1. The novelty of this paper is limited. First, the analysis of the vertex-level imbalance problem is not new; it is a reformulation of the observations in previous works [Rendle and Freudenthaler, 2014; Ding et al., 2019]. Second, the designed negative sampler uses rejection sampling to increase the chance of popular items, which is similar to the one proposed in PRIS [Lian et al., 2020]. 2. The paper overclaims its ability to debias sampling. The “debiased” term in the paper title is confusing. 3. The methodology detail is unclear in Sec. 4.2. The proposed design that improves sampling efficiency seems interesting, but the corresponding description is hard to follow given the limited space. 4. The space complexity of the proposed VINS should also be analyzed and compared in empirical studies, given that each (u, i) corresponds to a buffer_{ui}. 5. Experiment results are not convincing enough to demonstrate the superiority of VINS in terms of effectiveness and efficiency. - For effectiveness, the performance comparison in Table 1 is unfair. VINS sets different sample weights W_{ui} in the training process, while most compared baselines like DNS, AOBPR, SA, PRIS set all sample weights as 1. - For efficiency, Table 2 should also include the theoretical analysis for contrast. | - For effectiveness, the performance comparison in Table 1 is unfair. VINS sets different sample weights W_{ui} in the training process, while most compared baselines like DNS, AOBPR, SA, PRIS set all sample weights as 1. |
Y4iaDU4yMi | ICLR_2025 | - The paper's presentation is difficult to follow, with numerous typos and inconsistencies in notation. For example:
- Line 84, "In summery" -> "In summary".
- In Figure 1, "LLaVA as dicision model" -> "LLaVA as decision model."
- Line 215, "donate" should be "denote"; additionally, $\pi_{ref}$ is duplicated.
- The definitions of subscripts and superscripts for action (i.e., $a_t^1$ and $a_t^2$) in line 245 and in Equations (4), (6), (7), (8), and (9) are inconsistent.
- Line 213 references the tuple with $o_t$, but it is unclear where $s_{1 \sim t-1}$ originates.
- The authors should include a background section to introduce the basic RL framework, including elements of the MDP, trajectories, and policy, to clarify the RL context being considered. Without this, it is difficult to follow the subsequent sections. Additionally, a brief overview of the original DPO algorithm should be provided so that modifications proposed in the methods section are clearly distinguishable.
- In Section 3.1, the authors state that the VLM is used as a planner; however, it is unclear how the plan is generated. It appears that the VLM functions directly as a policy, outputting final actions to step into the environment, as illustrated in Figure 1. Thus, it may be misleading to frame the proposed method as a "re-planning framework" (line 197). Can the authors clarify this point?
- What type of action space does the paper consider? Is it continuous or discrete? If it is discrete, how is the MSE calculated in Eq. (7)?
- In line 201, what does "well-fine-tuned model" refer to? Is this the VLM fine-tuned by the proposed method?
- Throughout the paper, what does $\tau_t^{t-1}$ represent? | - The authors should include a background section to introduce the basic RL framework, including elements of the MDP, trajectories, and policy, to clarify the RL context being considered. Without this, it is difficult to follow the subsequent sections. Additionally, a brief overview of the original DPO algorithm should be provided so that modifications proposed in the methods section are clearly distinguishable. |
NIPS_2022_389 | NIPS_2022 | 1). Technically speaking, the contribution of this work is incremental. The proposed pipeline is not that impressive or novel; rather, it seems to be a pack of tricks to improve defense evaluation. 2). Although IoFA is well supported by cited works and described failures, its introduction lacks practical cases, as Figures 1 and 2 do not provide example failures and thus do not lead to a better understanding. 3). The reported experimental results appear to support the proposed methods, while the case analysis and further studies are missing. | 1). Technically speaking, the contribution of this work is incremental. The proposed pipeline is not that impressive or novel; rather, it seems to be a pack of tricks to improve defense evaluation. |
ZPwX1FL4yp | ICLR_2025 | 1. The application of gyro-structures on SPD manifolds and correlation matrices is indeed novel, but the paper does not clearly articulate the theoretical significance or unique advantages of using Power-Euclidean (PE) geometry over existing approaches like Affine-Invariant (AI) or Log-Euclidean (LE) methods. The work seems incremental without providing substantial theoretical or empirical evidence that PE geometry offers practical improvements beyond computational convenience. In particular, while gyro-structures are presented as an extension to non-Euclidean spaces, the paper does not establish a strong need or motivation for this approach within the broader context of machine learning or geometry-based learning. It lacks a thorough discussion on why gyro-structures would fundamentally enhance SPD or correlation matrix-based learning in a way that current methods do not.
2. Some key theoretical concepts and mathematical operations, such as those in gyrovector space theory and correlation matrix manifold construction, are highly technical and lack intuitive explanations. Additional clarification or simplified summaries would improve accessibility for readers unfamiliar with advanced Riemannian geometry.
3. Regarding the experiments, the related discussion lacks interpretive insights that would elucidate why the proposed gyro-structures outperform existing methods. In addition, while the paper compares its methods against SPD-based models and a few gyro-structure-based approaches, it lacks comparison with other state-of-the-art methods that might not rely on gyro-structures. This omission makes it unclear whether the proposed approach actually outperforms simpler or more commonly used techniques in manifold-based learning. | 3. Regarding the experiments, the related discussion lacks interpretive insights that would elucidate why the proposed gyro-structures outperform existing methods. In addition, while the paper compares its methods against SPD-based models and a few gyro-structure-based approaches, it lacks comparison with other state-of-the-art methods that might not rely on gyro-structures. This omission makes it unclear whether the proposed approach actually outperforms simpler or more commonly used techniques in manifold-based learning. |
5BoXZXTJvL | ICLR_2024 | 1. The novelty of this method appears somewhat constrained. Utilizing the first-order gradient for determining parameter importance is a common approach in pruning techniques applied to CNN, BERT, and ViT. This technique is well-established within the realm of model pruning. Considering that in some instances this method even falls short of the results achieved by SparseGPT (e.g., 2:4 for LLaMA-1 and LLaMA-2), I cannot say that the first-order gradient in pruning LLMs is a major contribution.
2. This paper lacks experiments on different LLM families. Conducting trials with models like OPT, BLOOM, or other alternatives could provide valuable insights into the method's applicability and generalizability across various LLM families.
3. The paper doesn't provide details regarding the latency of the pruned model. In a study centered on LLM compression, including latency metrics is crucial, since such information is highly important for readers to understand the efficiency of the pruned model. | 2. This paper lacks experiments on different LLM families. Conducting trials with models like OPT, BLOOM, or other alternatives could provide valuable insights into the method's applicability and generalizability across various LLM families. |
ICLR_2021_2846 | ICLR_2021 | Weaknesses: There are some concerns the authors should further address: 1) The transductive inference stage is essentially an ensemble of a series of models. In particular, the proposed data perturbation can be considered as a common data augmentation. What if such an ensemble is applied to the existing transductive methods? And is flipping already adopted in the data augmentation before the inputs are fed to the network? 2) During meta-training, only the selected single path is used in one transductive step; what about the performance of optimizing all paths simultaneously, given that during inference all paths are utilized? 3) What about the performance of MCT (pair + instance)? 4) Why are the results of Table 6 not aligned with Table 1 (MCT-pair)? Also, what about the ablation studies of MCT without the adaptive metrics? 5) Though this is not necessary, I'm curious about the performance of the SOTA method (e.g. LST) combined with the adaptive metric. | 4) Why are the results of Table 6 not aligned with Table 1 (MCT-pair)? Also, what about the ablation studies of MCT without the adaptive metrics? |
ACL_2017_108_review | ACL_2017 | The problem itself is not really well motivated. Why is it important to detect China as an entity within the entity Bank of China, to stay with the example in the introduction? I do see a point for crossing entities but what is the use case for nested entities? This could be much more motivated to make the reader interested. As for the approach itself, some important details are missing in my opinion: What is the decision criterion to include an edge or not? In lines 229--233 several different options for the I^k_t nodes are mentioned but it is never clarified which edges should be present!
As for the empirical evaluation, the achieved results are better than some previous approaches but not really by a large margin. I would not really describe the slight improvements as "outperforming", as is done in the paper. What is the effect size? Does it really matter to some user that there is some improvement of two percentage points in F_1? What is the actual effect one can observe? How many "important" entities are discovered that have not been discovered by previous methods? Furthermore, what performance would some simplistic dictionary-based method achieve that could also be used to find overlapping things? And in a similar direction: what would some commercial system like Google's NLP cloud, which should also be able to detect and link entities, have achieved on the datasets? Just to put the results into contrast with existing "commercial" systems.
As for the result discussion, I would have liked to see some more emphasis on actual crossing entities. How is the performance there? In my opinion, this is a more interesting subset of overlapping entities than the nested ones. How many more crossing entities are detected than were possible before? Which ones were missed and maybe why? Is the performance improvement due to better nested detection only, or also to detecting crossing entities? Some general error discussion comparing errors made by the suggested system and previous ones would also strengthen that part.
General Discussion: I like the problems related to named entity recognition and see a point for recognizing crossing entities. However, why is one interested in nested entities? The paper at hand does not really motivate the scenario and also sheds no light on that point in the evaluation. Discussing errors and maybe advantages with some example cases and an emphasis on the results on crossing entities compared to other approaches would possibly have convinced me more.
So, I am only lukewarm about the paper, with maybe a slight tendency to rejection. It just seems like yet another attempt without really emphasizing the, in my opinion, important question of crossing entities.
Minor remarks: - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one?
- e.g.: why in italics?
- time linear in n: when n is sentence length, does it really matter whether it is linear or cubic?
- spurious structures: in the introduction it is not clear what is meant - regarded as _a_ chunk - NP chunking: noun phrase chunking?
- Since they set: who?
- pervious -> previous - of Lu and Roth~(2015) - the following five types: in sentences with no large numbers, spell out the small ones, please - types of states: what is a state in a (hyper-)graph? later state seems to be used analogous to node?!
- I would place commas after the enumeration items at the end of page 2 and a period after the last one - what are child nodes in a hypergraph?
- in Figure 2 it was not obvious at first glance why this is a hypergraph.
colors are not visible in b/w printing. why are some nodes/edges in gray. it is also not obvious how the highlighted edges were selected and why the others are in gray ... - why should both entities be detected in the example of Figure 2? what is the difference to "just" knowing the long one?
- denoting ...: sometimes in brackets, sometimes not ... why?
- please place footnotes not directly in front of a punctuation mark but afterwards - footnote 2: due to the missing edge: how determined that this one should be missing?
- on whether the separator defines ...: how determined?
- in _the_ mention hypergraph - last paragraph before 4.1: to represent the entity separator CS: how is the CS-edge chosen algorithmically here?
- comma after Equation 1?
- to find out: sounds a little odd here - we extract entities_._\footnote - we make two: sounds odd; we conduct or something like that?
- nested vs. crossing remark in footnote 3: why is this good? why not favor crossing? examples to clarify?
- the combination of states alone do_es_ not?
- the simple first order assumption: that is what?
- In _the_ previous section - we see that our model: demonstrated? have shown?
- used in this experiments: these - each of these distinct interpretation_s_ - published _on_ their website - The statistics of each dataset _are_ shown - allows us to use to make use: omit "to use" - tried to follow as close ... : tried to use the features suggested in previous works as close as possible?
- Following (Lu and Roth, 2015): please do not use references as nouns: Following Lu and Roth (2015) - using _the_ BILOU scheme - highlighted in bold: what about the effect size?
- significantly better: in what sense? effect size?
- In GENIA dataset: On the GENIA dataset - outperforms by about 0.4 point_s_: I would not call that "outperform" - that _the_ GENIA dataset - this low recall: which one?
- due to _an_ insufficient - Table 5: all F_1 scores seem rather similar to me ... again, "outperform" seems a bit of a stretch here ... - is more confident: why does this increase recall?
- converge _than_ the mention hypergraph - References: some paper titles are lowercased, others not, why? | - first mention of multigraph: some readers may benefit if the notion of a multigraph would get a short description - previously noted by ... many previous: sounds a little odd - Solving this task: which one? |
NIPS_2019_1397 | NIPS_2019 | weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable by Theorem 2.2. Therefore, Theorem 2.1 doesn't seem necessary in the main paper, and can be moved to the supplement as a lemma to save space. Furthermore, a few important results could be moved from the supplement back to the main paper (e.g., Algorithm 1 and Table 2). Originality: The main results seem innovative to me in general. Although optimizing information-theoretic objective functions is not new, I find the new objective function adequately novel, especially in the treatment of the Q_i's in relation to TC(Z|X_i). Relevant lines of research are also summarized well in the related work section. Significance: The proposed methodology has many favorable features, including low computational complexity, good performance under (near) modular latent factor models, and blessing of dimensionality. I believe these will make the new method very attractive to the community. Moreover, the formulation of the objective function itself would also be of great theoretical interest. Overall, I think the manuscript would make a fairly significant contribution. Itemized comments: 1. The number of latent factors m is assumed to be constant throughout the paper. I wonder if that's necessary. The blessing of dimensionality still seems to hold if m increases slowly with p, and the computational complexity can still be advantageous compared to GLASSO. 2. Line 125: For completeness, please state the final objective function (empirical version of (3)) as a function of X_i and the parameters. 3. Section 4.1: The simulation is conducted under a joint Gaussian model. Therefore, ICA should be identical with PCA, and can be removed from the comparisons. Indeed, the ICA curve is almost identical with the PCA curve in Figure 2. 4. In the covariance estimation experiments, negative log likelihood under a Gaussian model is used as the performance metric for both stock market data and OpenML datasets. This seems unreasonable since the real data in the experiment may not be Gaussian. For example, there is extensive evidence that stock returns are not Gaussian. Gaussian likelihood also seems unfair as a performance metric, since it may favor methods derived under Gaussian assumptions, like the proposed method. For comparing the results under these real datasets, it might be better to focus on interpretability, or indirect metrics (e.g., portfolio performance for stock return data). 5. The equation below Line 412: the p(z) factor should be removed in the expression for p(x|z). 6. Line 429: It seems we don't need the Gaussian assumption to obtain Cov(Z_j, Z_k | X_i) = 0. 7. Line 480: Why do we need to combine with the law of total variance to obtain Cov(X_i, X_{l != i} | Z) = 0? 8. Lines 496 and 501: It seems the Z in the denominator should be p(z). 9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments: 10.
The manuscript could be more reader-friendly if the mathematical definitions for H(X), I(X;Y), TC(X), and TC(X|Z) were stated (in the supplementary material if there is no space in the main article). References to these are necessary when following the proofs/derivations. 11. Line 208: black -> block 12. Line 242: 50 real-world datasets -> 51 real-world datasets (according to Line 260 and Table 2) 13. References [7, 25, 29]: gaussian -> Gaussian Update: Thanks to the authors for the response. A couple of minor comments: - Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials. - Regarding the Gaussian evaluation metric, I think it would be helpful to include the comments as a note in the paper. | 50 real-world datasets -> 51 real-world datasets (according to Line 260 and Table 2) 13. References [7, 25, 29]: gaussian -> Gaussian Update: Thanks to the authors for the response. A couple of minor comments: |
xNn2nq5kiy | ICLR_2024 | * The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. Based on Table 2, the learned-plan methods are not comparable to the methods with pre-defined plans. This indicates that the proposed method may have difficulty generalizing to a new dataset without the ground truth summary.
* The novelty of the proposed method is limited. The most effective part is the manually designed plan. Given that, the authors should also discuss some plan-based/outline-based prompting studies, such as [1-3].
* Some experimental details are missing. For example,
* Detailed information about the proposed new datasets, for example, the size, average document length, average summary length, average citation number, training/validation/testing split, and so on.
* How are the X (sentence number) and Y (words) chosen in the plan-based methods?
* What generation configuration is used in LLaMA-2, ChatGPT-3.5, and GPT-4? For example, greedy decoding or sampling with temperature.
* How was the human evaluation (in Section 5) conducted? The number of annotators, the inter-annotator agreement, the average compensation, the working hours, and the annotation procedure should be described in detail.
* How are Avg Rating, Avg Win Rate, and Coverage in Table 8 calculated?
[1] Re3: Generating Longer Stories With Recursive Reprompting and Revision
[2] Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models
[3] Self-planning Code Generation with Large Language Models | * The plan-based method requires manually designing a plan based on the ground truth in advance, which is unrealistic in real-world scenarios. Based on Table 2, the learned-plan methods are not comparable to the methods with pre-defined plans. This indicates that the proposed method may have difficulty generalizing to a new dataset without the ground truth summary. |
bIlnpVM4bc | ICLR_2025 | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist.
- A comprehensive benchmarking against existing alternatives is lacking. Comparisons are only made to their proposed variants and Sliding Window Attention in fair setups. A thorough comparison with other models listed in Appendix A (such as MEGA, adapted to Mamba) would strengthen the findings. Additionally, selected architectures are evaluated on a very small scale and only measured by perplexity. While some models achieve lower perplexity, this alone may not suffice to establish superiority (e.g., in H3 by Dao et al., 2022b, lower perplexity is reported against transformer baselines).
- Results on common benchmarks are somewhat misleading. The paper aims to showcase the architecture’s strengths, yet comparisons are often made against models trained on different data distributions, which weakens the robustness of the conclusions.
- Conclusions on long-context handling remain vague, although this should be a key advantage over transformers. It would be helpful to include dataset statistics (average, median, min, max lengths) to clarify context length relevance.
- The only substantial long-context experiment, the summarization task, should be included in the main paper, with clearer discussion and analysis.
- Section 4, “Analysis,” could benefit from clearer motivation. Some explored dimensions may appear intuitive (e.g., l. 444, where SWA is shown to outperform full attention on larger sequence lengths than those used in training), which might limit the novelty of the findings. Other questions seem a bit unrelated to the paper's topics (see Questions).
- Length extrapolation, a key aspect of the paper, is barely motivated or discussed in relation to prior work.
- The paper overall feels somewhat unstructured and difficult to follow. Tables present different baselines inconsistently, and messages regarding architectural advantages are interleaved with comments on training data quality (l. 229). The evaluation setup lacks consistency (performance is sometimes assessed on real benchmarks, other times by perplexity), and the rationale behind baseline choices or research questions is insufficiently explained. | - The main contribution of combining attention with other linear mechanisms is not novel, and, as noted in the paper, a lot of alternatives exist. |
NIPS_2020_1274 | NIPS_2020 | - It would be helpful if the paper’s definition of “decentralized” were stated more explicitly in the paper, instead of in a footnote. Another way of defining “decentralized” is that agents do not have access to the global state and actions of other agents during both training and execution, which LIO seems to do. - Systematically studying the impact of the cost of incentivization on performance would have been a helpful analysis (e.g., for various values of \alpha, what are the reward incentives each agent receives, and what is the collective return?). It seems like roles between “winners” and “cooperators” emerge because the cost to reward the other agent becomes high for the cooperators. If this cost were lower, it seems like roles would be less distinguished, causing the collective return to be much lower. - In Figure 5d, more explanation as to why the Winner receives about the same incentive as the Cooperator to pull the lever would be helpful; it doesn’t match how the plot is described on lines 286-287. | - Systematically studying the impact of the cost of incentivization on performance would have been a helpful analysis (e.g., for various values of \alpha, what are the reward incentives each agent receives, and what is the collective return?). It seems like roles between “winners” and “cooperators” emerge because the cost to reward the other agent becomes high for the cooperators. If this cost were lower, it seems like roles would be less distinguished, causing the collective return to be much lower. |
NIPS_2019_1377 | NIPS_2019 | - The proof works only under the assumption that the corresponding RNN is contractive, i.e. has no diverging directions in its eigenspace. As the authors point out (line #127), for expansive RNNs there will usually be no corresponding URNN. While this is true, I think it still imposes a strong limitation a priori on the classes of problems that could be computed by an URNN. For instance, chaotic attractors with at least one diverging eigendirection are ruled out to begin with. I think this needs further discussion. For instance, could URNNs/contractive RNNs still *efficiently* solve some of the classical long-term RNN benchmarks, like the multiplication problem? Minor stuff: - Statement on line 134: Only true for standard sigmoid [1+exp(-x)]^-1, depends on max. slope - Theorem 4.1: Would be useful to elaborate a bit more in the main text why this holds (intuitively, since the RNN unlike the URNN will converge to the nearest FP). - line 199: The difference is not fundamental but only for the specific class of smooth (sigmoid) and non-smooth (ReLU) activation functions considered, I think? Moreover: Is smoothness the crucial difference at all, or rather the fact that sigmoid is truly contractive while ReLU is just non-expansive? - line 223-245: Are URNN at all practical given the costly requirement to enforce the unitary matrix after each iteration? | - line 223-245: Are URNN at all practical given the costly requirement to enforce the unitary matrix after each iteration? |
9RugvdmIBa | EMNLP_2023 | 1. Such a strategy requires extra parallel data, which might not exist in many datasets/tasks, especially during the pre-training stage. The authors did not consider such cases or propose some cheap ways to acquire such parallel data. Also, utilizing the parallel data for training increases the size of the context window, which requires a larger context window for models and might be expensive.
2. The question answering requires the template mapping to transform the question into a masked statement, which might cause poor generalization to questions that are not 'Wh-types'/transformable. | 2. The question answering requires the template mapping to transform the question into a masked statement, which might cause poor generalization to questions that are not 'Wh-types'/transformable. |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA work. It's notable that it is useful in training very deep networks (e.g. 100 layers), but it's not clear that this results in an advantage for function approximation (the error rate is higher for these deep networks). If the authors could demonstrate that DFA allows one to train and make use of such deep networks, where BP and FA struggle, on a larger dataset, this would significantly enhance the impact of the paper. In terms of biological understanding, FA seems more supported by biological observations (which typically show reciprocal forward and backward connections between hierarchical brain areas, not direct connections back from one region to all others as might be expected in DFA). The paper doesn't provide support for the claim, in the final paragraph, that DFA is more biologically plausible than FA. Minor issues: - A few typos; there are no line numbers in the draft so I haven't itemized them. - Tables 1, 2, 3: the legends should be longer and clarify whether the numbers are % errors or % correct (MNIST and CIFAR respectively, presumably). - Figure 2 right: I found it difficult to distinguish between the different curves. Maybe make use of styles (e.g. dashed lines) or add color. - Figure 3: it is very hard to read anything on the figure. - I think this manuscript is not following the NIPS style. The citations are not by number and there are no line numbers or an "Anonymous Author" placeholder. - It might be helpful to quantify and clarify the claim "ReLU does not work very well in very deep or in convolutional networks." ReLUs were used in the AlexNet paper which, at the time, was considered deep and makes use of convolution (with pooling rather than ReLUs for the convolutional layers). | - Figure 3: it is very hard to read anything on the figure. |
ICLR_2023_3923 | ICLR_2023 | Weakness:
1: The paper focuses on the metric-learning approach to meta-learning; what about optimization-based models, e.g. MAML [a], Implicit-MAML [b], etc.? How would the model behave with these approaches? Would the same analysis hold for other meta-learning models?
2: The base model is implemented based on ProtoNet (Deleu et al., 2019) with default parameters given by (Medina et al., 2020). How can we ensure these hyperparameters are optimal?
3: For the domain-invariant auxiliary learning, “treat each image as its own class to form the support”: how is this a valid labelling? It may be a wrong class assignment, in which case the result may be worse when using this, but the model works. I cannot understand the proper intuition; could you please provide a detailed intuition?
4: Is it possible to add some optimization-based meta-learning approach to Table 1, like MAML or implicit-MAML?
5: What is the size of auxiliary data?
[a] Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
[b] Meta-Learning with Implicit Gradients | 4: Is it possible to add some optimization-based meta-learning approach to Table 1, like MAML or implicit-MAML? |
NIPS_2018_768 | NIPS_2018 | Weakness] 1: I like the paper's idea and result. However, this paper really REQUIRES an ablation study to justify the effectiveness of the different compositions. For example: - In Eq. 2, what is the value of m, and how does m affect the results? - In Eq. 3, what is the dimension of w_n? What if the Euclidean coordinate is used instead of the polar coordinate? - In Eq. 3, how does the number of Gaussian kernels change the experimental results? 2: Based on the paper's description, I think it will be hard to replicate the result. It would be great if the authors could release the code after the acceptance of the paper. 3: There are several typos in the paper, which needs better proofreading. | 2: Based on the paper's description, I think it will be hard to replicate the result. It would be great if the authors could release the code after the acceptance of the paper. |
ICLR_2022_2791 | ICLR_2022 | The technical contribution of this paper is limited and falls short of a decent ICLR paper. In particular, all kinds of evaluations, i.e., the single-dataset setting (most existing person re-ID methods), the cross-dataset setting [1, 2, 3] and the live re-ID setting [4], have been discussed in previous works. This paper simply provides a systematic discussion.
For cross-dataset setting, this paper only evaluates standard person re-ID methods that train on one dataset and evaluate on another, but fails to evaluate the typical cross-dataset person re-ID methods, e.g., [1, 2, 3].
For live re-ID setting, this paper does not compare with the particular live re-ID baseline [4]
Though some conclusions are drawn from the experiments, the novelty is limited. For example, 1) most person re-ID methods build on a pedestrian detector (two-step methods), and there are also end-to-end methods that combine detection and re-ID [5]; 2) it is common that distribution bias exists between datasets, and it is hard to find a standard re-ID approach and a training dataset that address the problem unless the dataset is large enough to cover sufficiently many scenes; 3) cross-dataset methods try to mitigate the generalization problem.
[1] Hu, Yang, Dong Yi, Shengcai Liao, Zhen Lei, and Stan Z. Li. "Cross dataset person re-identification." In Asian Conference on Computer Vision, pp. 650-664. Springer, Cham, 2014.
[2] Lv, Jianming, Weihang Chen, Qing Li, and Can Yang. "Unsupervised cross-dataset person re-identification by transfer learning of spatial-temporal patterns." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7948-7956. 2018.
[3] Li, Yu-Jhe, Ci-Siang Lin, Yan-Bo Lin, and Yu-Chiang Frank Wang. "Cross-dataset person re-identification via unsupervised pose disentanglement and adaptation." In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 7919-7929. 2019.
[4] Sumari, Felix O., Luigy Machaca, Jose Huaman, Esteban WG Clua, and Joris Guérin. "Towards practical implementations of person re-identification from full video frames." Pattern Recognition Letters 138 (2020): 513-519.
[5] Xiao, Tong, Shuang Li, Bochao Wang, Liang Lin, and Xiaogang Wang. "Joint detection and identification feature learning for person search." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3415-3424. 2017. | 1) most person re-ID methods build on a pedestrian detector (two-step methods), and there are also end-to-end methods that combine detection and re-ID [5]; |
gp5dPMBzMH | ICLR_2024 | 1. Figure 3 presents EEG topography plots for both the input and output during the EEG token quantization process, leading to some ambiguity in interpretation. I would recommend the authors to elucidate this procedure in greater detail. Specifically, it would be insightful to understand whether the spatial arrangement of the EEG sensors played any role in this process.
2. The manuscript introduces BELT-2 as a progression from the prior BELT-1 model. However, the discussion and distinction between the two models are somewhat scanty, especially given their apparent similarities. It would be of immense value if the authors could elaborate on the design improvements made in BELT-2 over BELT-1. A focused discussion highlighting the specific enhancements and their contribution to the performance improvements, as showcased in Table 1 and Table 4, would add depth to the paper.
3. A few inconsistencies are observed in the formatting of the tables, which might be distracting for readers. I'd kindly suggest revisiting and refining the table presentation to ensure a consistent and polished format.
4. In Figure 4 and Section 2.4, there is a mention of utilizing the mediate layer coding as 'EEG prompts'. The concept, as presented, leaves some gaps in understanding, primarily because its introduction and visualization seem absent or not explicitly labeled in the preceding figures and method sections. It would enhance coherence and clarity if the authors could revisit Figures 2 and/or 3 and annotate the specific parts illustrating this mediate layer coding. | 1. Figure 3 presents EEG topography plots for both the input and output during the EEG token quantization process, leading to some ambiguity in interpretation. I would recommend the authors to elucidate this procedure in greater detail. Specifically, it would be insightful to understand whether the spatial arrangement of the EEG sensors played any role in this process. |
uSiyu6CLPh | ICLR_2025 | * I suggest that the authors show a more intuitive figure to visualize the framework that includes the images and labels in the original dataset and also the corrected images. This will help the readers to gain more intuition for your method.
* The authors combine two existing techniques to get the framework, without innovation. The adversarial attack or correction method and the domain adaptation method used by the authors were proposed in prior work. Moreover, the adopted domain adaptation method here is a very old and simple method proposed eight years ago. Considering that so many effective domain adaptation methods have been proposed in recent years, why not use other domain adaptation methods to further improve the performance?
* In Section 3.3, the authors align the features of the weak classifier on the original dataset and the synthetic dataset. Considering that the difference between the original and the synthetic datasets is the corrected part, can we omit the correctly classified samples and only minimize the covariance difference for the adversarially corrected sample and the misclassified sample?
* How do you choose the hyper-parameters such as $\lambda,\epsilon$? Does your method work robustly for other choices of hyper-parameters? If not, how do you choose them? | * The authors combine two existing techniques to get the framework, without innovation. The adversarial attack or correction method and the domain adaptation method used by the authors were proposed in prior work. Moreover, the adopted domain adaptation method here is a very old and simple method proposed eight years ago. Considering that so many effective domain adaptation methods have been proposed in recent years, why not use other domain adaptation methods to further improve the performance? |
eCXfUq3RDf | EMNLP_2023 | 1. Very limited reproducibility - Unless the authors release their training code, dialogue dataset, as well as model checkpoints, I find it very challenging to reproduce any of the claims in this paper. I encourage the authors to attach their code and datasets via anonymous repositories in the paper submission so that reviewers may verify the claims and try out the model for themselves.
2. Very high model complexity - The proposed paper employs a mathematically and computationally complex approach as compared to the textual input method. Does the proposed method outperform sending textual inputs to a large foundation model such as LLAMA or ChatGPT? The training complexity seems too high for any practical deployment of this model.
3. Dependent on the training data - I'm unsure if 44k dialogues is sufficient to capture a wide range of user traits and personalities across different content topics. LLMs are typically trained on trillions of tokens, I do not see how 44k dialogues can capture the combinations of personalities and topics. In theory, this dataset also needs to be massive to cover varied domains.
4. The paper is hard to read and often unintuitive. The mathematical complexity must be simplified and replaced with more intuitive design and modeling choice explanations so that readers may grasp core ideas faster. | 3. Dependent on the training data - I'm unsure if 44k dialogues is sufficient to capture a wide range of user traits and personalities across different content topics. LLMs are typically trained on trillions of tokens, I do not see how 44k dialogues can capture the combinations of personalities and topics. In theory, this dataset also needs to be massive to cover varied domains. |
NIPS_2018_914 | NIPS_2018 | of the paper are (i) the presentation of the proposed methodology to overcome that effect and (ii) the limitations of the proposed methods for large-scale problems, which is precisely when function approximation is required the most. While the intuition behind the two proposed algorithms is clear (to keep track of partitions of the parameter space that are consistent in successive applications of the Bellman operator), I think the authors could have formulated their idea in a more clear way, for example, using tools from Constraint Satisfaction Problems (CSPs) literature. I have the following concerns regarding both algorithms: - the authors leverage the complexity of checking on the Witness oracle, which is "polynomial time" in the tabular case. This feels like not addressing the problem in a direct way. - the required implicit call to the Witness oracle is confusing. - what happens if the policy class is not realizable? I guess the algorithm converges to an \empty partition, but that is not the optimal policy. minor: line 100 : "a2 always moves from s1 to s4 deterministically" is not true line 333 : "A number of important direction" -> "A number of important directions" line 215 : "implict" -> "implicit" - It is hard to understand the figure where all methods are compared. I suggest to move the figure to the appendix and keep a figure with less curves. - I suggest to change the name of partition function to partition value. [I am satisfied with the rebuttal and I have increased my score after the discussion] | - the required implicit call to the Witness oracle is confusing. |
n7n8McETXw | ICLR_2025 | 1. **Limitations in Model Complexity**: The paper primarily analyzes a single-head attention Transformer, which may not encapsulate the full complexity and performance characteristics of multi-layer and multi-head attention models. The validation was confined to binary classification tasks, thereby restricting the generalizability of the theoretical findings.
2. **Formula Accuracy**: The paper requires a meticulous review of its mathematical expressions. For instance, in Example 1, the function should be denoted as \( f_2(\mu_2') = \mu_1' \) instead of \( f_1(\mu_2') = \mu_1' \), and the second matrix \( A_1^f \) should be corrected to \( A_2^f \). It is essential to verify all formulas throughout the text to ensure accuracy.
3. **Theorem Validity and Clarification**: The theorems presented in the article should be scrutinized for their validity, particularly the sections that substantiate reasoning, as they may induce some ambiguity. Reading the appendix is mandatory for a comprehensive understanding of the article; otherwise, it might lead to misconceptions.
4. **Originality Concerns**: The article's reasoning and writing logic bear similarities to those found in "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning." It raises the question of whether this work is merely an extension of the previous study or if it introduces novel contributions. | 4. **Originality Concerns**: The article's reasoning and writing logic bear similarities to those found in "How Do Nonlinear Transformers Learn and Generalize in In-Context Learning." It raises the question of whether this work is merely an extension of the previous study or if it introduces novel contributions. |
NIPS_2020_471 | NIPS_2020 | 1. The description of joint inference is not very clear. I cannot follow how the refinement process works according to Eq. 2. It would be great if the authors could clarify this part during the rebuttal and polish it in the final version. 2. I have some doubts about the definitions in Table 1. What is the difference between anchor-based regression and the regression in RepPoints? In RetinaNet, there is also only a one-shot regression. And ATSS has shown that the regression method does not have much influence: the method that directly regresses [w, h] relative to the center point is good enough, while RepPoints regresses distances to locations on the feature maps. I think there is no obvious difference between the two methods. I hope the authors can clarify this problem. If not, the motivation here is not solid enough. 3. It would be great if the authors could analyze the computational costs and inference speeds for the proposed method. | 2. I have some doubts about the definitions in Table 1. What is the difference between anchor-based regression and the regression in RepPoints? In RetinaNet, there is also only a one-shot regression. And ATSS has shown that the regression method does not have much influence: the method that directly regresses [w, h] relative to the center point is good enough, while RepPoints regresses distances to locations on the feature maps. I think there is no obvious difference between the two methods. I hope the authors can clarify this problem. If not, the motivation here is not solid enough. |
5EHI2FGf1D | EMNLP_2023 | - no comparison against baselines. The functionality similarity comparison study reports only accuracy across optimization levels of binaries, but no baselines are considered. This is a widely-understood binary analysis application and many papers have developed architecture-agnostic similarity comparison (or often reported as codesearch, which is a similar task).
- rebuttal promises to add this evaluation
- in addition, the functionality similarity comparison methodology is questionable. The authors use cosine similarity with respect to embeddings, which to me makes the experiment rather circular. In contrast, I might have expected some type of dynamic analysis, testing, or some other reasoning to establish semantic similarity between code snippets.
- rebuttal addresses this point.
- vulnerability discovery methodology is also questionable. The authors consider a single vulnerability at a time, and while they acknowledge and address the data imbalance issue, I am not sure about the ecological validity of such a study. Previous work has considered multiple CVEs or CWEs at a time, and report whether or not the code contains any such vulnerability. Are the authors arguing that identifying one vulnerability at a time is an intended use case? In any case, the results are difficult to interpret (or are marginal improvements at best).
- addressed in rebuttal
- This paper is very similar to another accepted at Usenix 2023: Can a Deep Learning Model for One Architecture Be Used for Others?
Retargeted-Architecture Binary Code Analysis. In comparison to that paper, I do not quite understand the novelty here, except perhaps for a slightly different evaluation/application domain. I certainly acknowledge that this submission was made slightly before the Usenix 2023 proceedings were made available, but I would still want to understand how this differs given the overall similarity in idea (building embeddings that help a model target a new ISA).
- addressed in rebuttal
- relatedly, only x86 and ARM appear to be considered in the evaluation (the authors discuss building datasets for these ISAs). There are other ISAs to consider (e.g., PPC), and validating the approach against other ISAs would be important if claiming to build models that generalize to across architectures.
- author rebuttal promises a followup evaluation | - vulnerability discovery methodology is also questionable. The authors consider a single vulnerability at a time, and while they acknowledge and address the data imbalance issue, I am not sure about the ecological validity of such a study. Previous work has considered multiple CVEs or CWEs at a time, and report whether or not the code contains any such vulnerability. Are the authors arguing that identifying one vulnerability at a time is an intended use case? In any case, the results are difficult to interpret (or are marginal improvements at best). |
NIPS_2021_1822 | NIPS_2021 | of the paper. Organization could definitely be improved and I oftentimes had a bit of a hard time following the discussed steps. But in general, I think the included background is informative and well selected. Though, I could see people having trouble understanding the state-space GP-regression when coming from the more machine-learning like perspective of function/ weight space GP-regression.
Significance: I think there is definitely a significance in this work, as GP-Regression is usually a bit problematic because of the scaling, though it is still used extensively in certain areas, such as Bayesian Optimization or for modelling dynamical systems in robotics.
• Background: $f$ is a random function which describes a map, e.g. $f : \mathbb{R} \times \mathbb{R}^{D_s} \to \mathbb{R}$, and not a function of $X \in \mathbb{R}^{N_t \times N_s \times D_s}$ as described in Eq. 1. At least, when one infers the inputs from the definitions of the kernel functions.
• In general, the definitions are confusing and should be checked, e.g. check if $f_{n,k} = f(X_{n,k})$ is correct and properly define $X_{n,k}$.
• The operator $L_s$ is not mentioned in the text.
• 2.1: The description of the process $\bar{f}$ is confusing as the relationship to the original process $f$ is established just at the end.
• It would be helpful to add a bit more background on how the state space model is constructed from the kernel $\kappa_t(\cdot,\cdot)$, e.g. why it induces the dimensionality $d_t$, and also describe the limitations that a finite-dimensional SDE can only be established when a suitable spectral decomposition of the kernel exists.
• It should be mentioned that $p(y \mid H\bar{f}(t_n))$ has to be chosen Gaussian, as otherwise Kalman filtering and smoothing and CVI are not possible. Later on in the ELBOs this is assumed anyway.
• $p(u) = \mathcal{N}(u \mid 0, K_{zz})$ is a finite-dimensional Gaussian, not a Gaussian process, and $p(u)$ is not a random variable (i.e. $=$, not $\sim$).
• The notations for the covariances, e.g. $K_{zz}$, are discussed in the appendix. I am fine with it; however, it should be referenced, as I was confused in the beginning.
• 2.2: The $\log$ is missing for the Fisher information matrix.
• The acronym CVI is used in the paragraph headline before the definition.
• Some figures and tables are not referenced in the text, such as Figure 1.
• 3.1: In line 173 the integration should be carried over $\bar{f}$ and not $s$, I guess?
• I had a bit of a hard time establishing the connection $\mathbb{E}_{p(f)}[\mathcal{N}(\tilde{Y} \mid f, \tilde{V})] = p(\tilde{Y})$, which is the whole point why one part of the ELBO can be calculated using the Kalman filter. Adding this in a sentence to the text would have helped me a lot.
• One question I had was that for computing the ELBO the matrix exponential is needed. When backpropagating the gradient for the hyperparameters, is this efficient? As I am used to using the adjoint method for computing gradients of the (linear) differential equation.
• Reference to the Appendix for the RMSE and NLPD metrics is missing. | • It should be mentioned that $p(y \mid H\bar{f}(t_n))$ has to be chosen Gaussian, as otherwise Kalman filtering and smoothing and CVI are not possible. Later on in the ELBOs this is assumed anyway. |
ICLR_2021_343 | ICLR_2021 | 1. This paper is not well-organized and many parts are misleading. For example, above Eq. 3, the author assumes P_{G_{0}} = P_{D}. Does the author take the samples generated by the root generator as the authentic dataset? However, in Section 2 above Eq. 4, the author claims that the authentic data does not belong to any generator. 2. In Eq. 4, the key-dependent generator is obtained via adding a perturbation to the output of the root model. This setting may be troublesome because: 1. These generators are not actually trained; this is different from the problem which this paper attempts to solve. 2. No adversarial loss to guarantee the perturbed data being similar to the authentic data. 3. How to distinguish the samples from different generators. 3. Since Eq. 4 is closely related to adversarial attacks, the authors are supposed to discuss their connections in the related works. 4. The name of ‘decentralized attribution’ is misleading. Decentralized models are something like federated learning, where a ‘center’ model grasps information from ‘decentralized models’. However, the presented work is not related to such decentralization. 5. Typos: regarding the adversarial generative models -> regarding to the adversarial generative models; along the keys -> along with the keys. | 2. No adversarial loss to guarantee the perturbed data being similar to the authentic data. |
ICLR_2023_4867 | ICLR_2023 | Encoder architecture is a 2-layer GCN. This has limited expressive power.
Computation complexity and scalability are not analyzed theoretically, and the benchmarks used range from tiny (cora, citeseer) to medium graphs (ogb-arxiv).
Missing graph contrastive baselines in experiments and review (see detailed feedback below).
t-tests are not the best for evaluating statistical significance in this setting (see detailed feedback below).
Detailed Feedback and Questions:
This paper compares thoroughly with existing methods for handling noisy data but not nearly enough with contrastive graph learning baselines. There are several other baselines aside from GraphCL [1] that are not cited in the review and are not compared with the proposed approach in the experiments, See [2-4] ([1] is included in the paper but not its subsequent works).
Well known GCL methods include perturbations in the edge set GRACE (Zhu et al., 2020), BGRL (Thakoor et al., 2021), GBT (Bielak et al., 2021)), centrality (GCA (Zhu et al., 2021)), diffusion matrix (MVGRL (Hassani & Khasahmadi, 2020)) and the original graph (GMI (Peng et al., 2020)) and DGI (Velickovic et al., 2019)). The proposed method is compared to MVGRL (Hassani & Khasahmadi, 2020) but it is not clear why it is not compared to the others. If there is a good reason why a direct comparison does not apply that can be described in the review section or a direct experimental comparison is produced, I am willing to raise my score.
2. While there is a performance improvement in the experimental results, it is often quite small. The t-tests are helpful, but I think using the Wilcoxon signed-rank test would be better given the small sample size. t-tests require assumptions of normality that are not explicitly justified in the paper. Do the t-tests that are close to 0.05 remain under the threshold when switching to the Wilcoxon test?
Are the loss terms in eq. 6 added without weights? I would assume in general they are in different scales so weighting them should improve performance.
This work tackles feature and label noise. What about noise in the graph topology?
I understand that given extensive hyperparameter search (Table 4) gamma_0 and xi_0 can be selected. However, it would be interesting to see how selection of these values affects performance stability. How sensitive is the algorithm to xi selection in equation 2 and gamma_k selection in equation 5?
Typos: p. 1 handle an2 almost infinite” -> an p. 2 Existing Methods Handing Noisy Data" -> handLing p. 3 the attributes of each node is in” -> are p. 3 following two tasks.” -> replace .” by :”
References: [1] You, Yuning, et al. “Graph contrastive learning with augmentations.” Advances in Neural Inf. Proces. Syst. 33 (2020): 5812-5823. [2] You, Yuning, et al. “Graph contrastive learning automated.” Int. Conf. Machine Learning, 2021. [3] Qiu, Jiezhong, et al. “Gcc: Graph contrastive coding for graph neural network pre-training.” Proc. ACM SIGKDD Int. Conf. Knowledge Discovery & Data Mining. 2020. | 3 following two tasks.” -> replace .” by :” References: [1] You, Yuning, et al. “Graph contrastive learning with augmentations.” Advances in Neural Inf. Proces. Syst. |
ICLR_2023_3291 | ICLR_2023 | Although this paper uses a formal symbolic description of the data for the proposed method, there is still no framework diagram to help readers understand the method, which makes the article's algorithm somewhat harder to follow and to implement.
Although this paper introduces various attack methods in detail, it does not include more attack methods, such as ISSBA, in the experimental comparison. As this is a novel attack method, the authors should give more experimental comparison and analysis of the attack. 3. The author mentioned in the paper that the advantages of the algorithm also include the high efficiency of the attack. But for this part, I do not see more theoretical analysis (convergence) or related experimental proof. I have reservations about this point.
What is the main difference between the authors' method and ISSBA in terms of the formulation? I would like the authors to further explain the contribution in conjunction with the formulas.
Some Questions: 1. What is the computational efficiency of extracting the trigger? Unlike previous backdoor attack algorithms, the method needs to analyze and extract data from the entire training dataset. Does this result in exponential time growth as the dataset increases? 2. Both the effectiveness and the problem of the algorithm are that it requires access to the entire training dataset. Have the authors considered how the algorithm should operate effectively when the training dataset is not fully accessible?
Overall: The trigger proposed in this paper is novel, but the related validation experiments are not comprehensive, and the time complexity of the computation and the efficiency of the algorithm are not clearly analyzed. In addition, I expect the authors to further elucidate the technical contribution rather than the form of the attack. | 2. Both the effectiveness and the problem of the algorithm are that it requires access to the entire training dataset. Have the authors considered how the algorithm should operate effectively when the training dataset is not fully accessible? Overall: The trigger proposed in this paper is novel, but the related validation experiments are not comprehensive, and the time complexity of the computation and the efficiency of the algorithm are not clearly analyzed. In addition, I expect the authors to further elucidate the technical contribution rather than the form of the attack. |
NIPS_2021_1078 | NIPS_2021 | weakness of this paper, I am concerned with the following two points: 1) The assumption about termination states of instructions is quite strong. In the general case, it is very expensive to label a large amount of data manually. 2) It seems that performance and sample efficiency are sensitive to the λ parameters.
(Page 9, lines 310-313) I don't understand how the process of calculating the λ is done. How is λ computed from step here?
(Page 8 lines 281-285) The authors explain why ELLA does not increase sample efficiency in a COMBO environment, but I don't quite understand what it means.
[1] Yuri Burda et al, Exploration by Random Network Distillation, ICLR 2019
[2] Deepak Pathak et al, Curiosity-driven Exploration by Self-supervised Prediction, ICML 2017
[3] Roberta Raileanu et al, RIDE: Rewarding Impact-Driven Exploration for Procedurally-Generated Environments, ICLR 2020 | 1) The assumption about termination states of instructions is quite strong. In the general case, it is very expensive to label a large amount of data manually. |
sXErPfdA7Q | EMNLP_2023 | UPDATE: The authors addressed most of my concerns however, I believe that the first and second points are still valid and should be discussed as potential limitations (i.e., there are too many confounding variables to claim that one is investigating an impact of different training methods; and the datasets might have been - and probably were - used for RLHF).
UPDATE 2: the concerns were addressed by the authors to the extend it was possible with the current design.
- The authors claim to investigate the effect of different training methods on the processing of discourse-level information (as one of three main experiments), however, it is questionable whether what we see is the effect of different training methods, different training data, or perhaps the data used for RLHF (rather than RLHF alone, that is it is possible that the MT datasets used in this study were used to create RLHF examples). Since the authors research black box models behind an API, I do not think we can make any claims about the effect of the training method (of which we know little) on the model's performance.
- Data contamination might have influenced the evaluation - The authors employ various existing datasets. While two of these datasets do have publication date past the openAI's models' training cutoff point (made public in August 2022), this seems not to be the case for the other datasets employed in this study (including the dataset with contextual errors). It is likely that these were included in the training data of the LLMs being evaluated. Furthermore, with the RLHF models, it is also possible (and quite likely) that MT datasets published post-training were employed to create the RLHF data. For instance the WMT22 dataset was made public in August 2022, which gives companies like OpenAI plenty of time to retrieve it, reformulate it into training examples, and use for RLHF.
- The authors discuss how certain methods are significantly different from others, yet no significance testing is done to support these claims. For example, in line 486 the authors write "The conversational ability of ChatGPT and GPT-4 significantly boosts translation quality and discourse awareness" -- the difference between zh->en ChatGPT (17.4 d-BLEU; 2.8/2.9 humeval) and GPT-4 (18.8 d-BLEU; 2.6/2.8 humeval) and the scores for FeedME-2 (16.1 d-BLEU; 2.2/2.3 humeval) and PPO (17.2 d-BLEU; 2.6/2.7 humeval) is minimal and it's hard to say whether it is significant without proper testing (including checking the distribution and accounting for multiple comparisons).
- The main automatic method used throughout this paper is d-BLEU, which works best with multiple references, yet I believe only one is given. I understand that there are limited automatic options for document level evaluation where the sentences cannot be aligned. Some researchers used sliding windows for COMET, but that's not ideal (yet worth trying?). That is why the human evaluation is so important, and the one in this paper is lacking.
- Human evaluation - many important details are missing so it is hard to judge the research design (more questions below); however what bothers me most is that the authors construct an ordinal scale with a clear cutoff point between 2 and 3 (for general quality especially), yet they present only average scores. I do believe that "5: Translation passes quality control; the overall translation is excellent (...)" minus "4: Translation passes quality control; the overall translation is very good" is not the same "one" as "3: Translation passes quality control; the overall translation is ok. (...)" minus "2: Translation does not pass quality control; the overall translation is poor. (...)". It is clear that the difference between 5 and 4 is minimal, while between 3 and 2 is much bigger. Simple average is not a proper way to analyze this data (a proper analysis would include histograms of scores, possibly a stacked bar with proportion of choices, statistical testing).
- Another issue with the human evaluation is that it appears that the evaluators were asked to evaluate an entire document by assigning it one score. Note that this is a cognitively demanding and difficult task for the evaluators. The results are likely to be unreliable (please see Sheila Castilho's work on this topic). There is also no indication that the annotators were at least given some practice items.
- "Discourse-aware prompts" - I am not sure what this experiment was about. It seems that the idea was to evaluated how the availability of discourse information can improve the translation, but if that is so, then all three setups did have discourse level information (hence this evaluation is impossible). The only thing this seems to be doing is checking in which way the information should be presented (one sentence at a time in a chat, all sentences at once but clearly marked, or the entire document at once without sentence boundaries). | - The authors discuss how certain methods are significantly different from others, yet no significance testing is done to support these claims. For example, in line 486 the authors write "The conversational ability of ChatGPT and GPT-4 significantly boosts translation quality and discourse awareness" -- the difference between zh->en ChatGPT (17.4 d-BLEU; 2.8/2.9 humeval) and GPT-4 (18.8 d-BLEU; 2.6/2.8 humeval) and the scores for FeedME-2 (16.1 d-BLEU; 2.2/2.3 humeval) and PPO (17.2 d-BLEU; 2.6/2.7 humeval) is minimal and it's hard to say whether it is significant without proper testing (including checking the distribution and accounting for multiple comparisons). |
NIPS_2020_867 | NIPS_2020 | - As someone without a linguistics background, it was at times difficult for me to follow some parts of the paper. For example, it’s not clear to me why we care about the speaker payoff and listener payoff (separate from listener accuracy), rather than just a means to obtain higher accuracy --- is it important that the behavior of the speaker at test time stay close to its behavior during training? - I think more emphasis could be placed on the fact that the proposed methods require the speaker to have a model (in fact, in most of the experiments it’s an exact model) of the listener’s conditional probability p(t|m), and vice-versa. - I would have liked more description of the Starcraft environment (potentially in an Appendix?) | - I would have liked more description of the Starcraft environment (potentially in an Appendix?) |
ACL_2017_769_review | ACL_2017 | In terms of the presentation, mathematical details of how the embeddings are computed are not sufficiently clear. While the authors have done an extensive evaluation, they haven't actually compared the system with an RL-based dialogue manager which is current state-of-the-art in goal-oriented systems. Finally, it is not clear how this approach scales to more complex problems. The authors say that the KB is 3K, but actually what the agent operates is about 10 (judging from Table 6).
- General Discussion: Overall, I think this is a good paper. Had the theoretical aspects of the paper been better presented I would give this paper an accept. | -General Discussion: Overall, I think this is a good paper. Had the theoretical aspects of the paper been better presented I would give this paper an accept. |
TY9mstpD02 | ICLR_2025 | - **generalizability to other models**: the proposed framework is validated using gpt-4-turbo, a costly language model, which may compromise the applicability of the framework at scale. The paper could be further improved by showing how running the experiments using a cheaper model (e.g., gpt-4o) and/or open source models (e.g., Llama 3.1) would affect the obtained results.
- **generalizability of the results**: the conducted experiments are either too simple (simple synthetic regression setting with 4 variables) or include few data-model pairs (6 in Section 4.2, 36 in Section 4.3, and 10 for the human studies), raising questions about the generalizability of the proposed framework to more complex datasets.
- **lack of meaningful baselines**: despite mentioning various model criticism techniques in Section 2, the authors limit their comparisons to simple naive baselines. For example, the authors could compare with a chain-of-thought prompting approach.
- **few insights about the generation and correctness of summary statistics**: while the authors provide one example in Section 4.3, the paper could be further improved by adding additional insights and contrasting the proposed discrepancies with commonly discussed discrepancies in the literature (e.g., do these resemble the ones commonly found by humans?) | - **lack of meaningful baselines**: despite mentioning various model criticism techniques in Section 2, the authors limit their comparisons to simple naive baselines. For example, the authors could compare with a chain-of-thought prompting approach. |
ICLR_2021_2180 | ICLR_2021 | Weakness: 1. The authors should provide justification for choosing to use AAE in the work. In particular, why is AAE an attractive approach for MTPP? 2. The authors mention: "The typical MTTP baselines like Reinforcement Learning, RNNs, Wasserstein GANs require a significant amount of data to train a network. Therefore, such techniques are not applicable to our proposed approach. As the baseline, we compare the proposed data generation technique with a Markov chain approach which was applied to the same dataset in (Klausen et al., 2018b)." The authors could try to apply Reinforcement Learning, RNNs, and Wasserstein GANs with data filled in by simple methods, to empirically validate that these methods are not suitable for incomplete, small data. 3. In the feature mapping method, the authors could provide justification for converting marked point times t_ij to days a_i. Besides, the method to convert t_ij to a_i and examples are not provided. If this is a common preprocessing method in MTPP, the authors should cite the relevant work. Overall, Step 1 of Algorithm 1 is not clear. 4. In the data approximation technique (Step 3 of Algorithm 1), the author randomly chooses a probability for the appearance of an unobservable data point, but there is a lack of explanation. Can the authors explain the reason for selecting from [0, Pcj(0)]? 5. The authors should provide more details on the incomplete, small dataset, and compare their method with training on the full data. This is to understand whether the generated data is still good when training with a small dataset.
=============== after rebuttal: I thank the authors for the responses. After reviewing the authors' response and other reviewers' comments, I keep my original rating. | 1. The authors should provide justification for choosing to use AAE in the work. In particular, why is AAE an attractive approach for MTPP? |
Ie040B4nFm | EMNLP_2023 | - The proposed system seems to hurt the model in terms of its BLEU scores (the system degrades in 2 out of the 3 settings). This leads me to think that while the model seems to do well on speaker-specific terms/inflections, the overall translations degrade.
- How would we choose which ELM to pick (male/female)? Does this require us to know the speaker’s gender beforehand, i.e., at inference time? This seems like a drawback as the accuracy should be calculated after using a gender detection model in the pipeline (at least in the cases where vocal traits match speaker identity).
- What happens when a single audio file has two speakers (male and female) conversing with each other? Which ELM to pick in that case? | - How would we choose which ELM to pick (male/female)? Does this require us to know the speaker’s gender beforehand, i.e., at inference time? This seems like a drawback as the accuracy should be calculated after using a gender detection model in the pipeline (at least in the cases where vocal traits match speaker identity). |
tbRPPWDy76 | EMNLP_2023 | There might not be enough theoretical discussion and in-depth analysis to help readers understand the prompt design. More motivation and insights are needed.
The engineering part might still need refinement.
* Considering that this work is all about evaluation, there might be a lack of experiments currently. It might be beneficial to conduct more evaluation experiments categorized by language types and dialog content.
* Although the style design is clean, the prompts are not well-organized (Tables 6, 7); all sentences are squeezed together.
* The Chinese translation of the proposed prompt (Table 7) is bad. | * Although the style design is clean, the prompts are not well-organized (Tables 6, 7); all sentences are squeezed together. |
NIPS_2017_337 | NIPS_2017 | Not many, but there are several approximations made in the derivation. Also the strong assumptions of convexity and norm-based defense were made.
Qualitative evaluation Quality:
The paper is technically sound for the most part, except for several approximation steps that perhaps need more precise descriptions. Experiments seem to support the claims well. The literature review is broad and relevant. Clarity:
The paper is written and organized very well. Perhaps the sections on data-dependent defenses are a bit complex to follow. Originality:
While the components such as online learning by regret minimization and minimax duality may be well-known, the paper uses them in the context of poisoning attack to find a (approximate) solution/certificate for norm-based defense, which is original as far as I know. Significance:
The problem addressed in the paper has been gaining importance, and the paper presents a clean analysis and experiments on computing an approximate upperbound on the risk, assuming a convex loss and/or a norm-based defense.
Detailed comments
I didn't find too many things to complain about. To nitpick, there are two concerns about the paper.
1. The authors introduce several approximations (i-iii), which leave loose ends. It is understandable that approximations are necessary to derive clean results, but the possible vulnerability (e.g., due to the assumption of attacks being in the feasible set only, in lines 107-110) needs to be expanded on to reassure the readers that it is not a real concern.
2. Secondly, what relevance does the framework have with problems of non-convex losses and/or non-norm type defenses? Would the non-vanishing duality gap and the difficulty of maximization over non-norm type constraints make the algorithm irrelevant? Or would it still give some intuitions on the risk upperbound?
p.3, binary classification: If the true mean is known through an oracle, can one use the covariance or any other statistics to design a better defense? | 2. Secondly, what relevance does the framework have with problems of non-convex losses and/or non-norm type defenses? Would the non-vanishing duality gap and the difficulty of maximization over non-norm type constraints make the algorithm irrelevant? Or would it still give some intuitions on the risk upperbound? p.3, binary classification: If the true mean is known through an oracle, can one use the covariance or any other statistics to design a better defense? |
h7DGnWGeos | ICLR_2024 | 1. The paper's presentation can be improved with better clarification. See details in Questions.
2. Some claims that motivate the method should be verified. See details in Questions.
3. Important related work [1] that targets on multi-step planning should be discussed.
[1] Liu, Songtao, et al. "FusionRetro: molecule representation fusion via in-context learning for retrosynthetic planning." International Conference on Machine Learning. PMLR, 2023. | 1. The paper's presentation can be improved with better clarification. See details in Questions. |
NIPS_2018_122 | NIPS_2018 | - Figure 1 and 2 well motive this work, but in the main body of this paper I cannot see what happens to these figures after applying the proposed adversarial training. It is better to put together the images before and after applying your method in the same place. Figure 2 does not say anything about details (we can understand the very brief overview of the positions of the embeddings), and thus these figures could be smaller for better space usage. - For the LM and NMT models, did you use the technique to share word embedding and output softmax matrices as in [1]? The transformer model would do this, if the transformer implementations are based on the original paper. If so, your method affects not only the input word embeddings, but also the output softmax matrix, which is not a trivial side effect. This important point seems missing and not discussed. If the technique is not used, the strength of the proposed method is not fully realized, because the output word embeddings could still capture simple frequency information. [1] Inan et al., Tying Word Vectors and Word Classifiers: A Loss Framework for Language Modeling, ICLR 2017. - There are no significance test or discussions about the significance of the score differences. - It is not clear how the BLEU improvement comes from the proposed method. Did you inspect whether rare words are more actively selected in the translations? Otherwise, it is not clear whether the expectations of the authors actually happened. - Line 131: The authors mention standard word embeddings like word2vec-based and glove-based embeddings, but recently subword-based embeddings are also used. For example, fasttex embeddings are aware of internal character n-gram information, which is helpful in capturing information about rare words. By inspecting the character n-grams, it is sometimes easy to understand rare words' brief properties. For example, in the case of "Peking", we can see the words start from a uppercase character and ends by the suffix "ing", etc. It makes this paper more solid to compare the proposed method with such character n-gram-based methods [2, 3]. [2] Bojanowski et al., Enriching Word Vectors with Subword Information, TACL. [3] Hashimoto et al., A Joint Many-Task Model: Growing a Neural Network for Multiple NLP Tasks, EMNLP 2017. *Minor comments: - Line 21: I think the statement "Different from classic one-hot representation" is not necessary, because anyway word embeddings are still based on such one-hot representations (i.e., the word indices). An embedding matrix is just a weight matrix for the one-hot representations. - Line 29: Word2vec is not a model, but a toolkit which implements Skipgram and CBOW with a few training options. - The results on Table 6 in the supplementary material could be enough to be tested on the dev set. Otherwise, there are too many test set results. * Additional comments after reading the author response Thank you for your kind reply to my comments and questions. I believe that the draft will be further improved in the camera-ready version. One additional suggestion is that the title seems to be too general. The term "adversarial training" has a wide range of meanings, so it would be better to include your contribution in the title; for example, "Improving Word Embeddings by Frequency-based Adversarial Training" or something. | * Additional comments after reading the author response Thank you for your kind reply to my comments and questions. 
I believe that the draft will be further improved in the camera-ready version. One additional suggestion is that the title seems to be too general. The term "adversarial training" has a wide range of meanings, so it would be better to include your contribution in the title; for example, "Improving Word Embeddings by Frequency-based Adversarial Training" or something. |
NIPS_2018_917 | NIPS_2018 | - Results on bAbI should be taken with a huge grain of salt and only serve as a unit-test. Specifically, since the bAbI corpus is generated from a simple grammar and sentences follow a strict triplet structure, it is not surprising to me that a model extracting three distinct symbol representations from a learned sentence representation (therefore reverse engineering the underlying symbolic nature of the data) would solve bAbI tasks. However, it is highly doubtful this method would perform well on actual natural language sentences. Hence, statements such as "trained [...] on a variety of natural language tasks" are misleading. The authors of the baseline model "recurrent entity networks" [12] have not stopped at bAbI, but also validated their models on more real-world data such as the Children's Book Test (CBT). Given that RENs solve all bAbI tasks and N2Ns solve all but one, it is not clear to me what the proposed method adds to a table other than a small reduction in mean error. Moreover, the N2N baseline in Table 2 is not introduced or referenced in this paper, so I am not sure which system the authors are referring to here. Minor Comments - L11: LSTMs have only achieved SotA on some NLP tasks, whereas traditional methods still prevail on others, so stating they have achieved SotA in NLP is a bit too vague. - L15: Again, too vague, certain RNNs work well for certain natural language reasoning tasks. See for instance the literature on natural language inference and the leaderboard at https://nlp.stanford.edu/projects/snli/ - L16-18: The reinforcement learning / agent analogy seems a bit out-of-place here. I think you generally point to generalization capabilities which I believe are better illustrated by the examples you give later in the paper (from lines 229 to 253). - Eq. 1: This seems like a very specific choice of combining the information from entity representations and their types. Why is this a good choice? Why not keep the concatenation of the kitty/cat outer product and the mary/person outer product? Why is the superposition of all bindings instead a good design choice? - I believe section four could benefit from a small overview figure illustrating the computation graph that is constructed by the method. - Eq. 7: At first, I found it surprising that three distinct relation representations are extracted from the sentence representation, but it became clearer later with the write, move and backling functions. Maybe already mention at this point what the three relation representations are going to be used for. - Eq. 15: s_question has not been introduced before. I imagine it is a sentence encoding of the question and is calculated similarly to Eq. 5? - Eq. 20: A bit more detail for readers unfamiliar with bAbI or question answering would be good here. "valid words" here means possible answer words for the given story and question, correct? - L192: "glorot initalization" -> "Glorot initialization". Also, there is a reference for that method: Glorot, X., & Bengio, Y. (2010, March). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the thirteenth international conference on artificial intelligence and statistics (pp. 249-256). - L195: α=0.008, β1=0.6 and β2=0.4 look like rather arbitrary choices. Where does the configuration for these hyper-parameters come from? Did you perform a grid search? - L236-244: If I understand it correctly, at test time stories with new entities (Alex etc.) are generated.
How does your model support a growing set of vocabulary words given that MLPs have parameters dependent on the vocabulary size (L188-191) and are fixed at test time? - L265: If exploding gradients are a problem, why don't you perform gradient clipping with a high value for the gradient norm to avoid NaNs appearing? Simply reinitializing the model is quite hacky. - p.9: Recurrent entity networks (RENs) [12] is not just an arXiv paper but has been published at ICLR 2017. | - L15: Again, too vague, certain RNNs work well for certain natural language reasoning tasks. See for instance the literature on natural language inference and the leaderboard at https://nlp.stanford.edu/projects/snli/ - L16-18: The reinforcement learning / agent analogy seems a bit out-of-place here. I think you generally point to generalization capabilities which I believe are better illustrated by the examples you give later in the paper (from lines 229 to 253). |
VwyTrglgmW | ICLR_2024 | 1. The authors claim that the existing PU learning methods will suffer a gradual decline in performance as the dimensionality of the data increases. It would be better if the authors could visualize this effect. This is very important as this is the research motivation of this paper.
2. Since the authors claim that the high dimensionality is harmful for the PU methods, have the authors tried to firstly implement dimension reduction via some existing approaches and then deploy traditional PU classifiers?
3. In problem setup, the authors should clarify whether their method belongs to case-control PU learning or censoring PU learning, as their generation ways of P data and U data are quite different.
4. The proposed algorithm contains a K-means operation. Note that if there are many high-dimensional examples, K-means will be very inefficient.
5. The authors should compare their algorithm with SOTA methods and typical methods on these benchmark datasets.
6. The figures in this paper are of low quality. Besides, the writing of this paper is also far from perfect. | 1. The authors claim that the existing PU learning methods will suffer a gradual decline in performance as the dimensionality of the data increases. It would be better if the authors could visualize this effect. This is very important as this is the research motivation of this paper. |
NIPS_2018_177 | NIPS_2018 | weakness and issues which should be clarified: 1) The novelty of this paper is incremental. The proposed method is developed based on the MDNet framework. It seems that the only difference is that the proposed method further incorporates the attention regularization for backward propagation. 2) The regularization term seems a bit ad-hoc. Although the author has provided some intuitive explanation of the regularization, it seems to lack theoretical support. There are some other statistics which may be used to take the role of the mean and standard deviation in the regularization. Why are they not adopted in the regularization? For example, the median, which is not sensitive to outlier values of the data, can be used to replace the mean value. 3) The author claims that the proposed method can enable the classifier to attend to temporally invariant motion patterns. It seems that no explanation is provided about what motion patterns mean in this paper. Although some figures show the evolution of attention during training, no motion pattern is illustrated. In addition, some large variations may happen during the tracking process, such as out-of-plane rotation; how can the proposed method ensure that the temporally invariant motion patterns can be found and the classifiers can attend to them? [POST-REBUTTAL COMMENTS] I have read the rebuttal and still have concerns about the theoretical support for the regularization term. I keep my rating. | 2) The regularization term seems a bit ad-hoc. Although the author has provided some intuitive explanation of the regularization, it seems to lack theoretical support. There are some other statistics which may be used to take the role of the mean and standard deviation in the regularization. Why are they not adopted in the regularization? For example, the median, which is not sensitive to outlier values of the data, can be used to replace the mean value. |
Nk2vfZa4lX | EMNLP_2023 | 1. In this study, three distinct LLMs named Galactica, BioMedLM, and ChatGPT have been selected by the authors. The differences between these LLMs are outlined in the related work section of the paper. Despite their notable distinctions in training data and size, the evaluation of all these LLMs follows a uniform approach. A more effective approach would involve assessing the outputs of each LLM separately, as this could provide valuable insights into their relative performance in generating medical systematic reviews. It is plausible that certain LLMs might exhibit higher risk factors, while others could excel in generating coherent systematic reviews. In essence, this study would benefit significantly from a comprehensive comparative analysis between the LLMs, allowing for a more nuanced understanding of their respective capabilities, limitations, and potential benefits.
2. The number of samples presented to each domain expert appears to be relatively inadequate to draw definitive conclusions about the abilities and constraints of LLMs in generating systematic reviews. Additionally, during the expert interviews, the inclusion of human-written systematic reviews, without indicating their human origin, could offer valuable insights. This approach would allow observation of how domain experts react to these reviews, shedding light on the deficiencies of LLM-generated systematic reviews and thereby allowing a more comprehensive understanding of what the LLM-generated reviews lack.
3. The prompting technique used in this study is very basic and fails to leverage the full potential of LLMs. Carefully curated prompts could yield better results in generating systematic reviews. | 3. The prompting technique used in this study is very basic and fails to leverage the full potential of LLMs. Carefully curated prompts could yield better results in generating systematic reviews. |
fsDZwS49uY | ICLR_2025 | - The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes.
- Given that a single optimization problem can have multiple valid formulations, it would be beneficial for the authors to verify the accuracy and equivalence of these formulations with ground-truth ones.
- There are questions regarding the solving efficiency of the generated code. It would be valuable to assess whether the code produced by LLMs can outperform human-designed formulations and code. | - The authors may want to generate instances with more constraints and variables, as few instances in the paper have more than 7 variables. Thus, this raises my concern about LLMs' ability to model problems with large instance sizes. |
eI6ajU2esa | ICLR_2024 | - This paper investigates the issue of robustness in video action recognition, but it lacks a comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters and this paper primarily focuses on adjusting the input data, how can the authors prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results.
- Under noisy conditions, many TTA methods can achieve desirable results, while the improvement brought by this paper's method is relatively low.
- In appendix A.2.1, under noisy conditions, the average performance improvement brought by this paper's method is very low and can even be counterproductive under certain noise conditions. Does this indicate an issue with the approach of changing input data?
- How can the reliability of the long-range photometric consistency in section 3.3 be verified? Are there any ablation study results reflecting the performance gain brought by each part?
- The explanation of the formula content in Algorithm 1 in the main body is not clear enough.
[A] Temporal Coherent Test-Time Optimization for Robust Video Classification. ICLR23
[B] Video Test-Time Adaptation for Action Recognition. CVPR23 | - This paper investigates the issue of robustness in video action recognition, but it lacks a comparison with test-time adaptation (TTA) methods, such as [A-B]. These TTA methods also aim to adapt to out-of-distribution data when the input data is disturbed by noise. Although these TTA methods mainly focus on updating model parameters and this paper primarily focuses on adjusting the input data, how can the authors prove that data processing is superior to model parameter adjustment? I believe a comparison should be made based on experimental results. |
NIPS_2022_1317 | NIPS_2022 | Weakness: 1. The literature review is not adequate. Even with the content in the appendix, there is no discussion of off-policy evaluation for reinforcement learning or non-stationary multi-armed bandits. 2. The claim and the discussion of “active non-stationarity” are somewhat confusing (more on this later in Questions). 3. The authors seem to overstate their contribution.
4. The baseline methods are weak and do not represent the state of the art.
There is no discussion of limitations. Along with my questions regarding the difference between this work and reinforcement learning, one possible direction for the conclusion is to talk about the similarities and differences, and to what extent the results are generalizable to RL settings. | 4. The baseline methods are weak and do not represent the state of the art. There is no discussion of limitations. Along with my questions regarding the difference between this work and reinforcement learning, one possible direction for the conclusion is to talk about the similarities and differences, and to what extent the results are generalizable to RL settings. |
K98byXpOpU | ICLR_2024 | 1. The proposed algorithm DMLCBO is based on the double momentum technique. In previous works, e.g., SUSTAIN [1] and MRBO [2], the double momentum technique improves the convergence rate to $\widetilde{\mathcal{O}}(\epsilon^{-3})$, while the proposed algorithm only achieves $\widetilde{\mathcal{O}}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique differences between DMLCBO and the above-mentioned works.
2. In the experimental part, the author only shows the results of DMLCBO at early time steps; it would be more informative to provide results for the later steps as well.
3. In Table 3, DMLCBO exhibits higher variance compared with the other baselines on the MNIST datasets; the authors are encouraged to discuss more experimental details about this and explain the underlying reason.
[1] A Near-Optimal Algorithm for Stochastic Bilevel Optimization via Double-Momentum
[2] Provably Faster Algorithms for Bilevel Optimization | 1. The proposed algorithm DMLCBO is based on the double momentum technique. In previous works, e.g., SUSTAIN [1] and MRBO [2], the double momentum technique improves the convergence rate to $\widetilde{\mathcal{O}}(\epsilon^{-3})$, while the proposed algorithm only achieves $\widetilde{\mathcal{O}}(\epsilon^{-4})$. The authors are encouraged to discuss the reason why DMLCBO does not achieve it and the theoretical technique differences between DMLCBO and the above-mentioned works. |
ICLR_2023_1522 | ICLR_2023 | The application of FERMI is not obviously a large improvement over its introduction in (Lowy+ 2021): we want to optimize a weird fairness-constrained loss, so we instead optimize an upper bound on it, which admits a stochastic convergence analysis, and also handles non-binary classification. The added contribution here is, in the paper's phrasing, "a careful analysis of how the Gaussian noises [necessary for DP-SGD] propagate through the optimization trajectories". I don't have much feel for what constitutes an "interesting" convergence analysis, but the conceptual novelty here is unclear, and the introduction is a bit slippery about what is novel and what is borrowed from the FERMI paper.
The paper also struggles to explain its technical contributions in terms between a very high level summary and a long, opaque theorem statement. I suggest changing the focus of the paper to 1) reduce, relegate to the appendix, or eliminate the discussion of demographic parity (an extremely coarse notion of fairness that, IMO, the fairness literature needs to move past, and has only been discussed this long because it's very simple), which takes up over a page of the main body without meaningfully adding to the story told by the equalized odds results alone, 2) extending the discussion of how Theorem 3.1 works and what it accomplishes (the current statement is a blizzard of notation with little explanation -- I still don't know what W is doing), along with Theorem 3.2, and 3) extending the equalized odds results to more datasets (why are Parkinsons and Retired Adult results only reported for demographic parity? it seems like equalized odds should also apply here, and an empirical story built on 2 datasets seems thin). I think 2) would help provide a clearer explanation of the paper's improvement over (Lowy+ 2021) and 3) would make a stronger empirical case separate from the convergence analysis.
Other questions/comments:
I'd appreciate a table in the appendix attempting to concisely explain all of the relevant variables -- by my count, Theorem B.1 has well over a dozen.
Why is Tran 2021b a single point where the other methods have curves? More generally, perhaps I missed the explanation of this in the text, but what is varied to generate the curves?
As far as I can tell, the paper does not discuss the tightness of the upper bound offered by ERMI, nor does it explicitly write out the expression for equalized odds. This makes it hard to contextualize the convergence guarantee in terms of the underlying loss we actually want to optimize.
Figure 4 "priavte" | 3) extending the equalized odds results to more datasets (why are Parkinsons and Retired Adult results only reported for demographic parity? it seems like equalized odds should also apply here, and an empirical story built on 2 datasets seems thin). I think |
NIPS_2021_442 | NIPS_2021 | of the paper:
Strengths: 1) To the best of my knowledge, the problem investigated in the paper is original in the sense that top-m identification has not been studied in the misspecified setting. 2) The paper provides some interesting results: i) (Section 3.1) Knowing the level of misspecification ε is a key ingredient, as not knowing the same would yield sample complexity bounds which are no better than the bound obtainable from unstructured (ε = ∞) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically. 3) Sufficient experimental validation is provided to showcase the empirical performance of the prescribed decision rules.
Weaknesses: Some of the explanations provided by the authors are a bit unclear to me. Specifically, I have the following questions: 1) IMO, a better explanation of investigating top-m identification in this setting is required. Specifically, in this setting, we could readily convert the problem to the general top-m identification by appending the constant 1 to the features (converting them into (d + 1)-dimensional features) and trying to estimate the misspecifications η in the higher dimensional space. Why is that disadvantageous?
Can the authors explain how the lower bound in Theorem 1 explicitly captures the effect of the upper bound on misspecification ε? The relationship could be shown, for instance, by providing an example of a particular bandit environment (say, Gaussian bandits) ala [Kaufmann2016].
Sample complexity: Theorem 2 states the sample complexity in a very abstract way; it provides an equation which needs to be solved in order to get an explicit expression of the sample complexity. In order to make a comparison, the authors then mention that the unstructured confidence interval $\beta_{t,\delta}^{\mathrm{uns}}$ is approximately $\log(1/\delta)$ in the limit of δ → 0, which is then used to argue that the sample complexity of MISLID is asymptotically optimal. However, $\beta_{t,\delta}^{\mathrm{uns}}$ also depends on t. In fact, my understanding is that as δ goes to 0, the stopping time t goes to infinity, where it is not clear as to what value the overall expression $\beta_{t,\delta}^{\mathrm{uns}}$ converges. Overall, I feel that the authors need to explicate the sample complexity a bit more. My suggestions are: can the authors find a solution to equation (5) (or at least an upper bound on the solution for different regimes of ε)? Using such an upper bound, even if the authors could give an explicit expression of the (asymptotic) sample complexity and show how it compares to the lower bound, it would be a great contribution.
Looking at Figure 1A (the second figure from the left, for the case of ε = 2), it looks like LinGapE outperforms MISLID in terms of average sample complexity. Please correct me if I’m missing something, but if what I understand is correct, then why use MISLID and not LinGapE?
Probable typo: Line 211: Should it be θ instead of $\theta_t$ for the self-normalized concentration?
The authors have explained the limitations of the investigation in Section 6. | 2) The paper provides some interesting results: i) (Section 3.1) Knowing the level of misspecification ε is a key ingredient, as not knowing the same would yield sample complexity bounds which are no better than the bound obtainable from unstructured (ε = ∞) stochastic bandits. ii) A single no-regret learner is used for the sampling strategy instead of assigning a learner for each of the (N choose k) answers, thus exponentially reducing the number of online learners. iii) The proposed decision rules are shown to match the prescribed lower bound asymptotically. |
NIPS_2016_182 | NIPS_2016 | weakness of the technique in my view is that the kernel values will be dependent on the dataset that is being used. Thus, the effectiveness of the kernel will require a rich enough dataset to work well. In this respect, the method should be compared to the basic trick that is used to allow non-PSD similarity metrics to be used in kernel methods, namely defining the kernel as k(x,x') = (s(x,z_1),...,s(x,z_N))^T(s(x',z_1),...,s(x',z_N)), where s(x,z) is a possibly non-PSD similarity metric (e.g. optimal assignment score between x and z) and Z = {z_1,...,z_n} is a database of objects to be compared to. The write-up is (understandably) dense and thus not the easiest to follow. However, the authors have done a good job in communicating the methods efficiently. Technical remarks: - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. - In the histogram intersection kernel, I think, for clarity, it would be good to replace "t" with the size of T; there is no added value to me in allowing "t" to be arbitrary. | - it would seem to me that in section 4, "X" should be a multiset (and [\cal X]**n the set of multisets of size n) instead of a set, since in order for the histogram to honestly represent a graph that has repeated vertex or edge labels, you need to include the multiplicities of the labels in the graph as well. |
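To make the construction mentioned in the review above concrete, here is a minimal sketch of the empirical-kernel-map trick it refers to: the feature map phi(x) = (s(x, z_1), ..., s(x, z_N)) turns an arbitrary similarity s into a kernel k(x, x') = phi(x)^T phi(x') that is PSD by construction. The similarity function and landmark set below are toy placeholders, not the paper's optimal-assignment score.

```python
import numpy as np

def empirical_kernel(X, Xp, Z, s):
    """k(x, x') = phi(x)^T phi(x') with phi(x) = (s(x, z_1), ..., s(x, z_N))."""
    F = np.array([[s(x, z) for z in Z] for x in X])      # shape (len(X), len(Z))
    Fp = np.array([[s(xp, z) for z in Z] for xp in Xp])  # shape (len(Xp), len(Z))
    return F @ Fp.T  # Gram matrix on a single set (X == Xp) is PSD even if s is not

# Toy usage: a placeholder similarity on real vectors stands in for an assignment score.
rng = np.random.default_rng(0)
X, Xp, Z = rng.normal(size=(5, 3)), rng.normal(size=(4, 3)), rng.normal(size=(6, 3))
sim = lambda a, b: -np.abs(a - b).sum()
K = empirical_kernel(X, Xp, Z, sim)
print(K.shape)  # (5, 4)
```

As the review notes, the resulting kernel values depend on the chosen database Z, which is exactly why a rich enough dataset matters for this baseline.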
NIPS_2019_1366 | NIPS_2019 | Weakness: - Although the method discussed in the paper can be applied to general MDPs, the paper is limited to navigation problems. Combining RL and planning has already been discussed in PRM-RL [1]. It would be interesting to see whether we can apply such algorithms to more general tasks. - The paper has shown that a pure RL algorithm (HER) failed to generalize to distant goals, but the paper doesn't discuss why it failed and why planning can solve the problem that HER can't solve. Ideally, if the neural networks are large enough and are trained for enough time, Q-learning should converge to a not-so-bad policy. It would be better if the authors could discuss the advantages of planning over pure Q-learning. - The time complexity will be too high if the replay buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning | - The time complexity will be too high if the replay buffer is too large. [1] PRM-RL: Long-range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted.
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
- General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper.
2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs.
5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task.
6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer".
7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2? | -General Discussion:1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper. |
DEOV74Idsg | ICLR_2025 | The main weaknesses are that, considering the diversity in the data, the tasks do not seem to go beyond standard ones, and that even on the figure captioning tasks the analysis is lacking, particularly in terms of strong evaluations from domain experts.
1. The tasks are somewhat standard - Figure captioning, and matching figures/sub-figures to appropriate captions. It would have been nice to see some unique tasks created from this nice dataset showcasing the diversity of images/plots, e.g., some variety of interleaved image-text tasks such as Question Answering from images could have been considered.
2. It would have been nicer to have a more detailed analysis of model responses for a few images (3-5) in about 10 domains, where experts weigh in on model responses, even for just the figure captioning task, to evaluate the strengths and weaknesses of models in the different domains - especially on the variety showcased in Figure 2, e.g., 10 examples from each category in Figure 2 analyzed by domain experts.
3. Some metrics for figure captioning are missing, e.g., BLEU, CIDEr, and SPICE (https://github.com/tylin/coco-caption) are metrics often used in figure captioning evaluations, and it would be good to include these. ROUGE is primarily a recall-based metric; while it’s relevant, in itself it’s not a sufficient signal, particularly for captioning.
* Other LLM based metrics to consider using: LAVE (https://arxiv.org/pdf/2310.02567), L3Score (https://github.com/google/spiqa), PrometheusVision (https://github.com/prometheus-eval/prometheus-vision). L3Score is particularly interesting because you get the confidence from GPT-4o in addition to the generated response.
4. The results for the materials science case study are hard to interpret.
4.1 What is the baseline LLAMA-2-7B performance? (without any tuning?) Many numbers in Table 5 and Figure 6 already seem quite high so it is hard to understand what baseline you are starting from and how much room for improvement there was (and from the presented results, it doesn’t look like that much, which perhaps may not be correct)
4.2 How well do proprietary models perform on this task? Are there any proprietary models that generate reasonable responses worth evaluating?
4.3 In Table 5, the “Stable DFT” column numbers for LLAMA models (from Gruver et al.) appear not to be consistent with the numbers reported in Gruver et al. Why is that?
Minor
5. Related to point 2, Figure 4 is extremely difficult to follow. Perhaps reduce the materials science case study and include a more detailed analysis of model responses and a discussion.
6. Figure 3 can be improved to clearly highlight the tasks and also the ground truth caption.
7. Other papers to consider citing in related works:
* the papers proposing different relevant metrics noted in Weakness-3.
* https://openaccess.thecvf.com/content/WACV2024/papers/Tarsi_SciOL_and_MuLMS-Img_Introducing_a_Large-Scale_Multimodal_Scientific_Dataset_and_WACV_2024_paper.pdf
The initial rationale for the rating of 5 is primarily due to weaknesses 1 and 2. | 1. The tasks are somewhat standard - Figure captioning, and matching figures/sub-figures to appropriate captions. It would have been nice to see some unique tasks created from this nice dataset showcasing the diversity of images/plots, e.g., some variety of interleaved image-text tasks such as Question Answering from images could have been considered. |
CblASBV3d4 | EMNLP_2023 | - While studying instability of LIME, the work likely confuses that instability with various other sources of instability involved in the methods:
- Instability in the model being explained.
- Instability of ranking metrics used.
- Instability of the LIME implementation used. This one drops entire words instead of perturbing embeddings which would be expected to be more stable. Dropping words is a discrete and impactful process compared to embedding perturbation.
Suggestion: control for other sources of instability. That is, measure and compare model instability vs. resulting LIME instability; measure and compare metric instability vs. resulting LIME instability. Lastly consider evaluating the more continuous version of input perturbation for LIME based on embeddings. While the official implementation does not use embeddings, it shouldn't be too hard to adjust it given token embedding inputs.
- Sample complexity of the learning LIME uses to produce explanations is not discussed. LIME attempts to fit a linear model onto a set of inputs of a model which is likely not linear. Even if the target model was linear, LIME would need as many samples as there are input features to be able to reconstruct the linear model == explanation. Add to this the likelihood of the target not being linear, and the number of samples needed to estimate some stable approximation increases greatly. None of the sampling rates discussed in the paper is suggestive of even getting close to the number of samples needed for NLP models.
Suggestion: Investigate and discuss sample complexity for the type of linear models LIME uses, as there may even be tight bounds on how many samples are needed to achieve a close-to-optimal/stable solution.
Suggestion: The limitations section discusses that the computational effort is a bottleneck in using larger sample sizes. I thus suggest investigating smaller models. It is not clear that using "state-of-the-art" models is necessary to make the points the paper is attempting to make.
- Discussions around focus on changing or not changing the top feature are inconsistent throughout the work and the ultimate reason behind them is hard to discern. Requiring that the top feature does not change seems like a strange requirement. Users might not even look at the features below the top one so attacking them might be irrelevant in terms of confusing user understanding.
"Moreover, its experiment settings are not ideal as it allows perturbations of top-ranked predictive features, which naturally change the resulting explanations"
Isn't changing the explanations the whole point in testing explanation robustness? You also cite this point later in the paper:
"Moreover, this requirement also accounts the fact that end-users often consider only the top k most important and not all of the features"
Use of the ABS metric which focuses on the top-k only also is suggestive of the importance of top features. If top features are important, isn't the very top the most important of all? Later:
"... changing the most important features will likely result in a violation to constraint in Eq. (2)"
Possibly but that is what makes the perturbation/attack problem challenging. The text that follows does not make sense to me:
"Moreover, this will not provide any meaningful insights to analysis on stability in that we want to measure how many changes in the perturbed explanation that correspond to small (and not large) alterations to the document."
I do not follow. The top feature might change without big changes to the document.
Suggestion: Coalesce the discussion regarding the top features into one place and present a self-consistent argument of where and why they are allowed to be changed or not.
Smaller things:
- The requirement of black-box access should not dismiss comparisons with white-box attacks as baselines.
- You write:
"As a sanity check, we also constrain the final perturbed document to result in at least one of the top k features decreasing in rank."
This does not sound like a sanity check but rather a requirement of your method. If it were a sanity check, you'd measure whether at least one of the top k features decreased without imposing it as a requirement.
- The example of Table 5 seems to actually change the meaning significantly. Why was such a change allowed given "think" (verb) and "thinking" (most likely adjective) changed part of speech?
- You write:
"Evidently, replacing any of the procedure steps of XAIFOOLER with a random mechanism dropped its performance"
I'm unsure that "better than random" is a strong demonstration of capability. | - You write: "Evidently, replacing any of the procedure steps of XAIFOOLER with a random mechanism dropped its performance" I'm unsure that "better than random" is a strong demonstration of capability. |
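The sample-complexity point raised in the review above — a linear surrogate over d interpretable features cannot be pinned down from fewer than d perturbation samples, even if the black box were exactly linear — can be illustrated with a small check. The dimensions and the plain (unweighted) least-squares fit below are illustrative choices, not LIME's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 20                    # d interpretable features, only n < d perturbation samples

X = rng.integers(0, 2, size=(n, d)).astype(float)  # binary keep/drop masks over words
w_true = rng.normal(size=d)                        # pretend the black box is exactly linear
y = X @ w_true

w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)      # surrogate fit, unweighted for simplicity
print("design-matrix rank:", np.linalg.matrix_rank(X))       # at most n, so < d
print("weight recovery error:", np.linalg.norm(w_hat - w_true))  # typically far from zero
```

With at least d independent samples the recovery error collapses to numerical zero, which is the gap between sampling budget and feature count that the review is pointing at.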
gybvlVXT6z | EMNLP_2023 | 1. I feel that the paper has insufficient baselines. For example, CoCoOp (https://arxiv.org/abs/2203.05557) is a widely used baseline for prompt tuning research in CLIP. Moreover, it would be nice to include the natural data shift setting as in most other prompt tuning papers for CLIP.
2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method.
3. I think the performance drop seen with respect to the prompt length (Figure 4) is a major limitation of this approach. For example, this phenomenon might make it so that using just a general hard prompt of length 4 ('a photo of a') would outperform the CBBT with length 4 or even CBBT with length 1. | 2. It would be nice to include the hard prompt baseline in Table 1 to see the increase in performance of each method. |
ICLR_2023_3692 | ICLR_2023 | Weakness: One thing I'm debating with myself is the novelty of this paper. If we view Eq. (2) more generally, it has the form $P_{\text{out}} = P_{\text{in}}\,\omega$, where ω is the weighting function. It seems to me that this is a general formulation of many recent step-wise controllable generation methods, including Dexperts and GeDi. Both Dexperts and GeDi rely on discriminators to guide generation. To improve computation efficiency, it is natural to discard discriminators entirely and also avoid fine-tuning. One of the only ways to achieve controllable generation without discriminators and fine-tuning is 1) define attributes with keywords, and 2) impose the control at every generation step. Neither of these two ideas is new; 1) appears at least in PPLM (in topic control where each topic is defined with a list of keywords) and 2) is the approach taken by at least Dexperts and GeDi. Both 1) and 2) are used in earlier methods such as [1] (see for example Section 2.2 therein). Therefore, on the one hand, I think the proposed method does not add many new ideas to the existing line of work in (step-wise) controllable generation. On the other hand, even with the general form above, the weighting function ω still requires careful design, which is the focus of the present paper. The proposed form, e.g., applying a tangent function on the exp-sum of attribute-related words (tokens), is an interesting way I haven't seen before.
The other concern I have is on the experiments. The authors demonstrate improved performance over PPLM and CTRL, in particular on efficiency, which is kind of expected. However, I am also curious about the comparison of the proposed approach with other inference-time controllable generation methods, such as Dexperts and GeDi. The proposed method would outperform these two methods on efficiency for sure (albeit less significantly so as compared to PPLM and CTRL), but I'm interested in the trade-off between efficiency and performance - it would make a very interesting case if the proposed approach achieves similar performance in terms of the evaluation metrics that the authors have already considered with improved efficiency compared to other inference-time controllable generation methods. Such comparison is currently missing from the experimental results.
[1] https://aclanthology.org/P17-4008.pdf | 1) appears at least in PPLM (in topic control where each topic is defined with a list of keywords) and |
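As a toy illustration of the step-wise weighting form $P_{\text{out}} = P_{\text{in}}\,\omega$ discussed in the review above — attributes defined by keywords, control imposed at every decoding step — here is a schematic sketch. The vocabulary, base distribution, and the particular keyword-based ω are made up for illustration and are not the reviewed paper's actual weighting function.

```python
import numpy as np

def reweight_step(base_probs, vocab, keywords, strength=2.0):
    """One decoding step: boost attribute keywords in the base distribution, then renormalize."""
    omega = np.array([strength if tok in keywords else 1.0 for tok in vocab])
    p = base_probs * omega          # P_out proportional to P_in * omega
    return p / p.sum()

vocab = ["the", "game", "music", "guitar", "score", "ball"]
base = np.array([0.40, 0.20, 0.10, 0.05, 0.15, 0.10])   # toy P_in from some language model
print(reweight_step(base, vocab, keywords={"music", "guitar"}))
```

Because ω here depends only on a keyword list and not on a trained discriminator, this is the discriminator-free, fine-tuning-free setting the review describes; the open design question is what shape ω should take.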
NJUzUq2OIi | ICLR_2025 | I found the proposed idea, experiments, and analyses conducted by the authors to be valuable, especially in terms of their potential impact on low-resource scenarios. However, for the paper to fully meet the ICLR standards, there are still areas that need additional work and detail. Below, I outline several key points for improvement. I would be pleased to substantially raise my scores if the authors address these suggestions and enhance the paper accordingly.
**General Feedback**
- I noticed that the title of the paper does not match the one listed on OpenReview.
- The main text should indicate when additional detailed discussions are deferred to the Appendix for better reader guidance.
**Introduction**
- The Introduction lacks foundational references to support key claims. Both the second and third paragraphs would benefit from citations to strengthen the arguments. For instance, the statement: "This method eliminates the need for document chunking, *a common limitation in current retrieval systems that often results in loss of context and reduced accuracy*" needs a supporting citation to substantiate this point.
- The sentence: "Second, to be competitive with embedding approaches, a retrieval language model needs to be small" requires further justification. The authors should include in the paper a complexity analysis comparison discussing time and GPU memory consumption to support this assertion.
**Related Work**
- The sentence "Large Language Models are found to be inefficient processing long-context documents" should be rewritten for clarity, for example: "Large Language Models are inefficient when processing long-context documents."
- The statements "Transformer models suffer from quadratic computation during training and linear computation during inference" and "However, transformer-based models are infeasible to process extremely long documents due to their linear inference time" are incorrect. Transformers, as presented in "Attention is All You Need," scale quadratically in both training and inference.
- The statement regarding State Space Models (SSMs) having "linear scaling during training and constant scaling during inference" is inaccurate. SSMs have linear complexity for both training and inference. The term "constant scaling" implies no dependence on sequence length, which is incorrect.
- The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents.
- Similarly, the paragraph on Retrieval-Augmented Generation should specify how prior works addressed different long document tasks. Examples include successful applications of RAG in long-document summarization [8, 9] and query-focused multi-document summarization [10, 11], which are closely aligned with the present work.
**Figures**
- Figures 1 and 2 are clear but need aesthetic improvements to meet the conference's standard presentation quality.
**Model Architecture**
- The description "a subset of tokens are specially designated, and the classification head is applied to these tokens. In the current work, the classification head is applied to the last token of each sentence, giving sentence-level resolution" is ambiguous. Clarify whether new tokens are added to the sequence or if existing tokens (e.g., periods) are used to represent sentence ends.
**Synthetic Data Generation**
- The "lost in the middle" problem when processing long documents [12] is not explicitly discussed. Have the authors considered the position of chunks during synthetic data generation? Ablation studies varying the position and distance between linked chunks would provide valuable insights into Mamba’s effectiveness in addressing this issue.
- More details are needed regarding the data decontamination pipeline, chunk size, and the relative computational cost of the link-based method versus other strategies.
- The authors claim that synthetic data generation is computationally expensive but provide no supporting quantitative evidence. Information such as time estimates and GPU demand would strengthen this argument and assess feasibility.
- There is no detailed evaluation of the synthetic data’s quality. An analysis of correctness and answer factuality would help validate the impact on retrieval performance beyond benchmark metrics.
**Training**
- This section is too brief. Consider merging it with Section 3, "Model Architecture," for a more cohesive presentation.
- What was the training time for the 130M model?
**Experimental Method**
- Fix minor formatting issues, such as adding a space after the comma in ",LVeval."
- Specify in Table 1 which datasets use free-form versus multiple-choice answers, including the number of answers and average answer lengths.
- Consider experimenting with GPT-4 as a retriever.
- Expand on "The accuracy of freeform answers is judged using GPT-4."
- Elaborate on the validation of the scoring pipeline, particularly regarding "0.942 macro F1." Clarify the data and method used for validation.
- Justify the selection of "50 sentences" for Mamba retrievers and explain chunk creation methods for embedding models. Did the chunks consist of 300 fixed-length segments, or was semantic chunking employed [3, 5]? Sentence-level embedding-based retrieval could be explored to align better with the Mamba setting.
- The assertion that "embedding models were allowed to retrieve more information than Mamba" implies an unfair comparison, but more context can sometimes degrade performance [12].
- Clarify the use of the sliding window approach for documents longer than 128k tokens, especially given the claim that Mamba could process up to 256K tokens directly.
**Results**
- Remove redundancy in Section 7.1.2, such as restating the synthetic data generation strategies.
- Expand the ablation studies to cover different input sequence lengths during training and varying the number of retrieved sentences to explore robustness to configuration changes.
- Highlight that using fewer training examples (500K vs. 1M) achieved comparable accuracy (i.e., 59.4 vs. 60.0, respectively).
- Why not train both the 130M and 1.3B models on a dataset size of 500K examples, but compare using 1M and 400K examples, respectively?
**Limitations**
- The high cost of generating synthetic training data is mentioned but lacks quantification. How computationally expensive is it in terms of time or resources?
**Appendix**
- Note that all figures in Appendices B and C are the same, suggesting an error that needs correcting.
**Missing References**
[1] Longformer: The Long-Document Transformer. arXiv 2020.
[2] LongT5: Efficient Text-To-Text Transformer for Long Sequences. NAACL 2022.
[3] Semantic Self-Segmentation for Abstractive Summarization of Long Documents in Low-Resource Regimes. AAAI 2022.
[4] Summ^n: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents. ACL 2022.
[5] Align-Then-Abstract Representation Learning for Low-Resource Summarization. Neurocomputing 2023.
[6] Efficient Memory-Enhanced Transformer for Long-Document Summarization in Low-Resource Regimes. Sensors 2023.
[7] Recursively Summarizing Books with Human Feedback. arXiv 2021.
[8] DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization. ACL 2022.
[9] Towards a Robust Retrieval-Based Summarization System. arXiv 2024.
[10] Discriminative Marginalized Probabilistic Neural Method for Multi-Document Summarization of Medical Literature. ACL 2022.
[11] Retrieve-and-Rank End-to-End Summarization of Biomedical Studies. SISAP 2023.
[12] Lost in the Middle: How Language Models Use Long Contexts. TACL 2024. | - The Related Work section is lacking details. The paragraph on long-context language models should provide a more comprehensive overview of existing methods and their limitations, positioning SSMs appropriately. This includes discussing sparse-attention mechanisms [1, 2], segmentation-based approaches [3, 4, 5], memory-enhanced segmentation strategies [6], and recursive methods [7] for handling very long documents. |
NIPS_2017_110 | NIPS_2017 | of this work include that it is a not-too-distant variation of prior work (see Schiratti et al., NIPS 2015), the search for hyperparameters for the prior distributions and sampling method do not seem to be performed on a separate test set, the simulation demonstrated that the parameters that are perhaps most critical to the model's application demonstrate the greatest relative error, and the experiments are not described with adequate detail. This last issue is particularly important as the rupture time is what clinicians would be using to determine treatment choices. In the experiments with real data, a fully Bayesian approach would have been helpful to assess the uncertainty associated with the rupture times. Particularly, a probabilistic evaluation of the prospective performance is warranted if that is the setting in which the authors imagine it to be most useful. Lastly, the details of the experiment are lacking. In particular, the RECIST score is a categorical score, but the authors evaluate a numerical score, the time scale is not defined in Figure 3a, and no overall statistics are reported in the evaluation, only figures with a select set of examples, and there was no mention of out-of-sample evaluation.
Specific comments:
- l132: Consider introducing the aspects of the specific model that are specific to this example model. For example, it should be clear from the beginning that we are not operating in a setting with infinite subdivisions for \gamma^1 and \gamma^m and that certain parameters are bounded on one side (acceleration and scaling parameters).
- l81-82: Do you mean to write t_R^m or t_R^{m-1} in this unnumbered equation? If it is correct, please define t_R^m. It is used subsequently and its meaning is unclear.
- l111: Please define the bounds for \tau_i^l because it is important for understanding the time-warp function.
- Throughout, the authors use the term constrains and should change to constraints.
- l124: What is meant by the (*)?
- l134: Do the authors mean m=2?
- l148: known, instead of know
- l156: please define \gamma_0^{***}
- Figure 1: Please specify the meaning of the colors in the caption as well as the text.
- l280: "Then we made it explicit" instead of "Then we have explicit it" | - l148: known, instead of know - l156: please define \gamma_0^{***} - Figure 1: Please specify the meaning of the colors in the caption as well as the text. |
V8PhVhb4pp | ICLR_2024 | The main weaknesses of this paper are the lack of sufficient qualitative results and the ambiguity of the explanations.
1. In the ablation study of 4.3, only one particular qualitative example is shown to demonstrate the effectiveness of different components. This is far from being convincing. The authors should have included more than 10 results of different prompts in the appendix for that.
2. In the "bidirectional guidance" part of section 4.3 Ablation Studies, the results shown at the top row of figure 6 seem to be totally different shapes. I understand this can happen for the 2D diffusion model. However the text also says "... and the 3D diffusion model manifests anomalies in both texture and geometric constructs.". But where are the 3D diffusion results? From my understanding the results from the 3D diffusion model should always look like the same shape and yield consistent multi-view renderings. I did not find these results in figure 6.
3. Figure 4 shows the main qualitative results of the proposed feed-forward method. However there is no comparison to previous methods. I think at least the comparison to Shap-E should be included.
4. The results of Zero-1-to-3 shown in figure 5 are weird. Why are all the other methods shown as final 3D results with mesh visualization, while Zero-1-to-3 only has multi-view generation results? My understanding is that to generate the 3D results at the lower left corner of Figure 5, we still need to use the SDS loss. If this is true, then a direct competitor should be using Zero-1-to-3 with SDS loss.
5. More details about the decoupled geometry and texture control on page 8 are needed. What does it mean to fix the 3D prior? Do you mean fixing the initial noise of the 3D diffusion? When fixing the textual prompt of the 2D diffusion, do you also fix the initial noise? | 2. In the "bidirectional guidance" part of section 4.3 Ablation Studies, the results shown at the top row of figure 6 seem to be totally different shapes. I understand this can happen for the 2D diffusion model. However the text also says "... and the 3D diffusion model manifests anomalies in both texture and geometric constructs.". But where are the 3D diffusion results? From my understanding the results from the 3D diffusion model should always look like the same shape and yield consistent multi-view renderings. I did not find these results in figure 6. |
NIPS_2017_143 | NIPS_2017 | For me the main issue with this paper is that the relevance of the *specific* problem that they study -- maximizing the "best response" payoff (l127) on test data -- remains unclear. I don't see a substantial motivation in terms of a link to settings (real or theoretical) that are relevant:
- In which real scenarios is the objective given by the adverserial prediction accuracy they propose, in contrast to classical prediction accuracy?
- In l32-45 they pretend to give a real example but for me this is too vague. I do see that in some scenarios the loss/objective they consider (high accuracy on majority) kind of makes sense. But I imagine that such losses already have been studied, without necessarily referring to "strategic" settings. In particular, how is this related to robust statistics, Huber loss, precision, recall, etc.?
- In l50 they claim that "pershaps even in most [...] practical scenarios" predicting accurate on the majority is most important. I contradict: in many areas with safety issues such as robotics and self-driving cars (generally: control), the models are allowed to have small errors, but by no means may have large errors (imagine a self-driving car to significantly overestimate the distance to the next car in 1% of the situations).
Related to this, in my view they fall short of what they claim as their contribution in the introduction and in l79-87:
- Generally, this seems like only a very first step towards real strategic settings: in light of what they claim ("strategic predictions", l28), their setting is only partially strategic/game theoretic as the opponent doesn't behave strategically (i.e., take into account the other strategic player).
- In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE).
- Related to this, in the experiments it would be interesting to see the comparison of the classical squared/absolute error on the test set as well (since this is what LSE claims to optimize).
- I agree that "prediction is not done in isolation", but I don't see the "main" contribution of showing that the "task of prediction may have strategic aspects" yet.
REMARKS:
What's "true" payoff in Table 1? I would have expected to see the test set payoff in that column. Or is it the population (complete sample) empirical payoff?
Have you looked into the work by Vapnik about teaching a learner with side information? This looks a bit similar to having your discrepancy p alongside x,y. | - In particular, in the experiments, it doesn't come as a complete surprise that the opponent can be outperformed w.r.t. the multi-agent payoff proposed by the authors, because the opponent simply doesn't aim at maximizing it (e.g. in the experiments he maximizes classical SE and AE). |
NIPS_2022_2315 | NIPS_2022 | Weakness: 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection in terms of either intuition or algorithm. 2) The justification for isotropic representation and contrastive search could be more solid. | 1) The proposed methods - contrastive training objective and contrastive search - are two independent methods that have little inner connection in terms of either intuition or algorithm. |
NIPS_2018_936 | NIPS_2018 | Weakness: - I would like to have seen a discussion of how these results relate to the lower bounds on kernel learning using low-rank approximation given in "On the Complexity of Learning with Kernels". - In Assumption 5, the operator L is undefined. Should that be C? | - I would like to have seen a discussion of how these results relate to the lower bounds on kernel learning using low-rank approximation given in "On the Complexity of Learning with Kernels". |
ICLR_2022_176 | ICLR_2022 | There are two main (and easily fixable) weaknesses.
a) I think the role of the normalizing flow is underexplained. It is stated multiple times that the normalizing flow provides the evidence updates and its purpose is to estimate epistemic uncertainty. The remaining questions for me are 1. From which space to which does the NF map the latent variable z? 2. Why is the arrow in Figure 2 from a Gaussian space into the latent space, rather than from the latent space to n^(i)? I thought the main purpose was to influence n^(i)? 3. Which experiments show that the normalizing flow contributes meaningfully to the epistemic uncertainty (see b))?
b) Figure 1 does a good job of showing the intuition behind NatPNs but it lacks some components and a discussion in the text. The authors choose to show aleatoric (un-)certainty and predictive certainty respectively but don’t show epistemic (un-)certainty. Technically, you could deduce epistemic uncertainty from aleatoric and predictive uncertainty but it would be easier to compare and follow your argument if it was made explicit. Furthermore, I would like to see an explicit discussion of the results. Why, for example, is the difference between aleatoric and predictive uncertainty so low? Is there no or little epistemic uncertainty in this setting? There are two things that would convince me more regarding this problem: a) an additional toy experiment similar to Figure 1 which includes more epistemic uncertainty, e.g. with fewer data points. This could show that the epistemic uncertainty is well-calibrated. b) An argument for why the epistemic uncertainty is (presumably) so low in your setting. a) and b) are not mutually exclusive, doing both would convince me more.
There are a couple of minor improvements: Figure 1 is not referenced in the main text. I find it hard to spot the difference w.r.t the symbols in Figure 1. Maybe just making it less crowded would already improve visibility. In the last paragraph of 3.1, you mention “warm-up” and “fine-tuning”. It would be helpful to explain these concepts briefly in one additional sentence or provide references.
What would raise my score?
I would raise my score by 1 or 2 points if my main weaknesses are well addressed or if evidence is provided that my criticism is the consequence of a misunderstanding.
I would raise my score even further if I’m convinced of the high significance of this work. This will be mostly dependent on the estimate of more expert reviewers but I’m also open to arguments by the authors. | 2. Why is the arrow in Figure 2 from a Gaussian space into the latent space, rather than from the latent space to n^(i)? I thought the main purpose was to influence n^(i)? |
ICLR_2023_1946 | ICLR_2023 | Weakness: 1. This work raises an essential issue in partial domain adaptation and evaluates many PDA algorithms and model selection strategies. However, it does not present any solution to this problem. 2. The findings of the experiments are a bit trivial. Having no target labels for model selection strategies will hurt the performance, and the random seed would influence the performance. These are common sense in domain adaptation, and even in deep learning more generally. 3. The writing needs to improve. The tables are referenced but always placed on different pages. For example, Table 2 is referred to on page 4 but placed on page 3, making it hard to read. The paper also has many typos, e.g., ‘that’ instead of ‘than’ in section 3. 4. Many abbreviations lack definition and cause confusion. ‘AR’ in Table 5 stands for domain adaptation tasks and algorithms. 5. In section 4.2, the heuristic strategies for hyper-parameter tuning are not clearly described. And the author said, “we only consider the model at the end of training”, but should we use the model selection strategies? 6. In section 5.2, part of Model Selection Strategies, the authors give a conclusion that seems to be wrong: “only the JUMBOT and SND pair performed reasonably well with respect to the JUMBOT and ORACLE pair on both datasets.” In Table 6, the JUMBOT and SND pair performs worse than the JUMBOT and ORACLE pair by a large margin. For instance, on OFFICE-HOME, the JUMBOT and SND pair reaches an accuracy of 72.29, while the JUMBOT and ORACLE pair achieves 77.15. | 4. Many abbreviations lack definition and cause confusion. ‘AR’ in Table 5 stands for domain adaptation tasks and algorithms. |
ICLR_2023_2057 | ICLR_2023 | Weakness: 1 - The main idea of using ensembles of neural networks is trivial and very common in the machine learning literature. The paper doesn't provide any specific adaptation to the homomorphic encryption domain. 2 - The discussion on the homomorphic encryption schemes is completely missing. What type of HE do you use? 3 - How do you perform majority voting in the encrypted domain? Most HE schemes do not support the argmax operation. 4 - For sequential ensembling, it is important to study the effect of noise accumulation in the context of homomorphic encryption. This limitation prevents the use of even single deep neural networks on homomorphically encrypted data. | 4 - For sequential ensembling, it is important to study the effect of noise accumulation in the context of homomorphic encryption. This limitation prevents the use of even single deep neural networks on homomorphically encrypted data. |
NIPS_2016_537 | NIPS_2016 | weakness of the paper is the lack of clarity in some of the presentation. Here are some examples of what I mean. 1) l 63, refers to a "joint distribution on D x C". But C is a collection of classifiers, so this framework where the decision functions are random is unfamiliar. 2) In the first three paragraphs of section 2, the setting needs to be spelled out more clearly. It seems like the authors want to receive credit for doing something in greater generality than what they actually present, and this muddles the exposition. 3) l 123, this is not the definition of "dominated" 4) for the third point of definition one, is there some connection to properties of universal kernels? See in particular chapter 4 of Steinwart and Christmann which discusses the ability of universal kernels to separate an arbitrary finite data set with margin arbitrarily close to one. 5) an example and perhaps a figure would be quite helpful in explaining the definition of uniform shattering. 6) in section 2.1 the phrase "group action" is used repeatedly, but it is not clear what this means. 7) in the same section, the notation {\cal P} with a subscript is used several times without being defined. 8) l 196-7: this requires more explanation. Why exactly are the two quantities different, and why does this capture the difference in learning settings? ---- I still lean toward acceptance. I think NIPS should have room for a few "pure theory" papers. | 3) l 123, this is not the definition of "dominated" |
PCm1oT8pZI | ICLR_2024 | 1. The authors do not give a comprehensive discussion of previous work on this topic.
2. The experimental justification of this work is not sufficient, only compared to the basic backdoor-based strategy. | 1. The authors do not give a comprehensive discussion of previous work on this topic. |
NIPS_2020_1253 | NIPS_2020 | 1. Perhaps the most important limitation I can see is the artificial environments used. In games, and especially those old Atari ones, audio events can be repeated exactly the same and it's quite easy for the network to learn to distinguish new sounds, whereas this might not be the case in more realistic environments, where there's more variance and noise in the audio. 2. L.225: "Visiting states with already learned audio-visual pairs is necessary for achieving a high score, even though they may not be crucial for exploration" So that seems like an important limitation; the agent won't work well in this sort of environment, which can easily happen in realistic scenarios. 3. L.227 "The game has repetitive background sounds. Games like SpaceInvaders and BeamRider have background sounds at a fixed time interval, but it is hard to visually associate these sounds" Same here, repetitive background sounds might often be the case in real applications. 4. L190: It's a bit strange how the authors use vanilla FFT instead of the more common STFT (overlapping segments and a Hann windowing function). Probably a good idea to try this for consistency with the literature. Insufficient ablations: 6. An ablation on the weighting method of the cross-entropy loss would be nice to see. The authors note for example that in Atlantis their method underperforms because "the game has repetitive background sounds". This is a scenario I'd expect the weighting might have helped remedy. 7. An ablation with adding noise to the audio channel would be interesting. 8. An ablation sampling the negatives from unrelated trajectories would also be interesting. 9. Some architectural details are missing and some are unclear. For example, why is the 2D convnet shown in Fig. 2 fixed to random initialization? | 6. An ablation on the weighting method of the cross-entropy loss would be nice to see. The authors note for example that in Atlantis their method underperforms because "the game has repetitive background sounds". This is a scenario I'd expect the weighting might have helped remedy. |
ICLR_2021_1189 | ICLR_2021 | weakness of the paper is its experiments section. 1. Lack of large-scale experiments: The models trained in the experiments section are quite small (80 hidden neurons for the MNIST experiments and a single convolutional layer with 40 channels for the SVHN experiments). It would be nice if there were at least some experiments that varied the size of the network and showed a trend indicating that the results from the small-scale experiments will (or will not) extend to larger-scale experiments. 2. Need for more robustness benchmarks: It is impressive that the Lipschitz constraints achieved by LBEN appear to be tight. Given this, it would be interesting to see how LBEN’s accuracy-robustness tradeoff compares with other architectures designed to have tight Lipschitz constraints, such as [1]. 3. Possibly limited applicability to more structured layers like convolutions: Although it can be counted as a strength that LBEN can be applied to convnets without much modification, the fact that its performance considerably trails that of MON raises questions about whether the methods presented here are ready to be extended to non-fully connected architectures. 4. Lack of description of how the Lipschitz bounds of the networks are computed: This critique is self-explanatory.
Decision: I think this paper is well worthy of acceptance just based on the quality and richness of its theoretical development and analysis of LBEN. I’d encourage the authors to, if possible, strengthen the experimental results in directions including (but certainly not limited to) the ones listed above.
Other questions to authors: 1. I was wondering why you didn’t include experiments involving larger neural networks. What are the limitations (if any) that kept you from trying out larger networks? 2. Could you describe how you computed the Lipschitz constant? Given how notoriously difficult it is to compute bounds on the Lipschitz constants of neural networks, I think this section requires more elaboration.
Possible typos and minor glitches in writing: 1. Section 3.2, first paragraph, first sentence: Should the phrase “equilibrium network” be plural? 2. D^{+} is used in Condition 1 before it’s defined in Condition 2. 3. Just below equation (7): I think there’s a typo in “On the other size, […]”. 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be more clear to introduce \epsilon when (11) is discussed. 5. Section 4.2, in paragraph “Computing an equilibrium”, first sentence: Do you think there’s a grammar error in this sentence? I might also have mis-parsed the sentence. 6. Section 5, second sentence: There are two “the”s in a row.
[1] Anil, Cem, James Lucas, and Roger Grosse. "Sorting out lipschitz function approximation." International Conference on Machine Learning. 2019. | 4. In Section 4.1, \epsilon is not used in equation (10), but in equation (11). It might be more clear to introduce \epsilon when (11) is discussed. |
LIBZ7Mp0OJ | ICLR_2024 | 1. While the authors propose optimizing model fairness through the Pareto frontier and simultaneous measurement of multiple fairness indicators, they do not provide a theoretical demonstration of how trading off one fairness metric for another could lead to an overall improvement in model fairness. A deeper theoretical exploration in this area could strengthen the paper, offering clearer guidelines on how to navigate fairness trade-offs effectively.
2. The paper lacks theoretical analysis on how to select among different Pareto-optimal outcomes, especially when the solution that is already optimal in one fairness metric is itself one of the Pareto-optimal outcomes, i.e., there is no difference from a single-objective optimization outcome. A theoretical framework or set of criteria for making these choices would be beneficial, providing practitioners with a robust method for decision-making in situations with multiple optimal fairness solutions.
3. The authors use only one performance metric to evaluate the model. In the context of FairML, where the applications are intricate and multifaceted, relying on a single performance metric may not sufficiently capture the model’s overall performance and impact. A diverse set of performance metrics would provide a more holistic view, ensuring a balanced and thorough evaluation.
4. In the experimental section, the authors have not conducted comparisons with existing fairness algorithms. Integrating benchmark comparisons against state-of-the-art fairness algorithms would significantly enhance the paper. It would offer tangible evidence of the proposed method's performance and effectively position the ManyFairHPO framework within the existing FairML research landscape. | 4. In the experimental section, the authors have not conducted comparisons with existing fairness algorithms. Integrating benchmark comparisons against state-of-the-art fairness algorithms would significantly enhance the paper. It would offer tangible evidence of the proposed method's performance and effectively position the ManyFairHPO framework within the existing FairML research landscape. |
ICLR_2022_2070 | ICLR_2022 | Weakness:
1 The idea is a bit too straightforward, i.e., using the attributes of the items/users and their embeddings to bridge any two domains.
2 The technical contribution is limited, i.e., there is no significant technical contribution and extension based on a typical model for the cross-domain recommendation setting. | 2 The technical contribution is limited, i.e., there is no significant technical contribution and extension based on a typical model for the cross-domain recommendation setting. |
NIPS_2020_593 | NIPS_2020 | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? - Line 19, 37, 39: A reference for the 'Influence maximization' problem may be provided. The distribution may be more formally given (e.g. which p_{ij} sum to 1). To be able to refer to the joint distributions, there should be a more concrete statement of the p_{ij}. Or maybe a preamble of line 103. - Line 52: Some more details about the polynomial time character of the formulation may clarify your statement about the LP. - Line 103: The strategy space of the adversary implied in the equation is strongly pessimistic (why consider all possible correlations?). This can be used in a follow up work. It seems that it does not reduce the value of the current model. | - Line 54: Is 'interpretable' program relevant to the notion described in the work of 'Doshi-Velez, F., & Kim, B. (2017). Towards a rigorous science of interpretable machine learning. arXiv preprint arXiv:1702.08608'? |
ARR_2022_40_review | ARR_2022 | - Although the authors state that components can be replaced by other models for flexibility, they did not try any change or alternative in the paper to prove the robustness of the proposed framework.
- Did the authors try using BlenderBot 2.0 with incorporated knowledge? It would be very interesting to see how the dialogs can be improved by using domain ontologies from the SGD dataset. - Although BlenderBot is finetuned on the SGD dataset, it is not clear how using more specific TOD chatbots can provide better results. - Lines 159-162: The authors should provide more information about the type/number of personas created, and how the personas are used by the chatbot to generate the given responses. - It is not clear if the authors also experimented with the usage of domain ontologies to avoid the generation of placeholders in the evaluated responses. - Line 211: How many questions were created for this zero-shot intent classifier and what is the accuracy of this system?
- Line 216: How many paraphrases were created for each question, and what was their quality rate?
- Line 237: How critical was the finetuning process over the SQuAD and CommonsenseQA models?
- Lines 254-257: How many templates were manually created? - Line 265: How are the future utterances used during evaluation? For the generation part, are the authors generating some sort of sentence embedding representation (similar to SkipThoughts) to learn the generation of the transition sentence? And is the transition sentence one taken from the list of manual templates? (In general, Section 2.2.2 is the one I found least clear.) - Merge SGD: Did the authors select the TOD dialogue randomly from those containing the same intent/topic? Did you try some dialogue embedding from the ODD part to select a TOD dialogue with a similar dialogue embedding? If not, this could be an idea to improve the quality of the dataset. This could also allow the usage of the lexicalized version of SGD and avoid the generation of placeholders in the responses. - Line 324: How are the repeated dialogues detected? - Line 356: How, and how many, sentences are finally selected from the 120 generated sentences?
- Lines 402-404: How are the additional transitions generated? Using the T5 model? How many times were the manual sentences selected vs. the paraphrased ones?
- The paper "Fusing task-oriented and open-domain dialogues in conversational agents" is not included in the background section, and it is important in the context of similar datasets. - Probably the word "salesman" is misleading since, by reading some of the generated dialogues in the appendices, it is not clear that the salesman agent is in fact selling something. It sometimes seems that they are still doing chitchat but on a particular topic or asking for some action to be done (like one to be done by an intelligent speaker). | - Lines 402-404: How are the additional transitions generated? Using the T5 model? How many times were the manual sentences selected vs. the paraphrased ones?
NIPS_2016_283 | NIPS_2016 | weaknesses of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard- vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error-bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark: - In my opinion the claim that evolutional dropout addresses internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization on the other hand standardizes the variance and centers the activation. These limitations should be discussed explicitly. Minor: * | - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard".
NIPS_2019_1411 | NIPS_2019 | ] *assumption* - I am not sure if it is safe to assume any programmatic policy can be parameterized by a vector \theta and is differentiable in \theta. (for Theorem 4.2) *initial policy* - In all the experiments (TORCS, MountainCar, and Pendulum), the IPPG policies improve upon the PRIOR. It is not clear if IPPG can learn from scratch. Showing the performance of IPPG learning from scratch would be important to verify this. - Can IPPG be initialized with a neural policy? It seems that it is possible based on Algorithm 1. If so, it would be interesting to see how well IPPG works using a neural policy learned with DDPG instead of PRIOR. Can IPPG improve upon DDPG? *experiment setup* - It is mentioned that "both NDPS and VIPER rely on imitating a fixed neural policy oracle" (L244). What is this policy oracle? Is this the policy learned using DDPG shown in the tables? If not, what's the performance of using NDPS and VIPER to distill the DDPG policies? - It would be interesting to see if the proposed framework works with different policy gradient approaches. *experiment results* - How many random seeds are used for learning the policies (DDPG and IPPG)? - What are the standard deviations or confidence intervals for all performance values? Are all the tracks deterministic? Are the DDPG policies deterministic during testing? - It would be better if the authors provided some videos showing different policies controlling cars on different tracks so that we can have a better idea of how different methods work. *reproducibility* - Some implementation details are lacking from the main paper, which makes reproducing the results difficult. It is not clear to me what policy gradient approach is used. - The provided dropbox link leads to an empty folder (I checked it on July 5th). *related work* - I believe it would be better if some prior works [1-5] exploring learning-based program synthesis frameworks were mentioned in the paper. *reference* [1] "Neuro-symbolic program synthesis" in ICLR 2017 [2] "Robustfill: Neural program learning under noisy I/O" in ICML 2017 [3] "Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis" in ICLR 2018 [4] "Neural program synthesis from diverse demonstration videos" in ICML 2018 [5] "Execution-Guided Neural Program Synthesis" in ICLR 2019 ----- final review ----- After reading the other reviews and the author response, I have mixed feelings about this paper. On one hand, I do recognize the importance of this problem and appreciate the proposed framework (IPPG). On the other hand, many of my concerns (e.g. the choices of initial policy, experiment setup, and experiment results) are not addressed, which makes me worried about the empirical performance of the proposed framework. To be more specific, I believe the following questions are important for understanding the performance of IPPG, which remain unanswered: (1) Can IPPG learn from scratch (i.e. where no neural policy could solve the task that we are interested in)? The authors stated that "IPPG can be initialized with a neural policy, learned for example via DDPG, and thus can be made to learn" in the rebuttal, which does not answer my question, but it is probably because my original question was confusing. (2) Can IPPG be initialized with a neural policy? If so, can IPPG be initialized with a policy learned using DDPG and improve it? As DDPG achieves great performance on different tracks, I am just interested in whether IPPG can even improve it.
(3) How many random seeds are used for learning the policies (DDPG and IPPG)? What are the standard deviations or confidence intervals for all performance values? I believe this is important for understanding the performance of RL algorithms. (4) What is the oracle policy that NDPS and VIPER learn from? If they do not learn from the DDPG policy, what is the performance if they distill the DDPG policy? (5) Can IPPG learn from a TRPO/PPO policy? While the authors mentioned that TRPO and PPO can't solve TORCS tasks, I believe this can be verified using CartPole or another simpler environment. In sum, I decided to keep my score as 5. I am ok if this paper gets accepted (which is likely to happen given positive reviews from other reviewers) but I do hope this paper gets improved based on the above points. Also, it would be good to discuss learning-based program synthesis frameworks as they are highly related. | - It would be interesting to see if the proposed framework works with different policy gradient approaches. *experiment results* - How many random seeds are used for learning the policies (DDPG and IPPG)?
ICLR_2022_912 | ICLR_2022 | 1. The paper in general does not read well, and more careful proofreading is needed. 2. In the S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the FLOP count is quadratic in the activation side length. But in terms of parameters, more details are expected. | 2. In the S2D structure, it is not clear why the number of parameters does not change. If the kernel height/width stay the same, then its depth will increase, resulting in more parameters. I agree the efficiency could be improved since the FLOP count is quadratic in the activation side length. But in terms of parameters, more details are expected.
ICLR_2021_1534 | ICLR_2021 | The proposed counter mechanism relies on being able to manually identify entities in the environment, such as the box in the push-box environment. This has limited applicability to real-world problems with large-dimensional or visual state spaces, in which entities are not obvious a priori. Being able to explicitly count the number of times an agent has experienced an entity in a specific configuration is not a realistic expectation for interesting, real-world problems. Therefore, it is unclear how this method can be applied beyond simple tabular settings and video games.
Similarly, it seems that deciding which subspaces are equivalent requires a significant amount of domain knowledge about each problem, and does not seem to be generally applicable.
Why not benchmark against QMIX + RND, since both are tested independently?
Other suggestions:
Typo on p. 3 "tuple is then store in a replay memory" -> stored
Why was the number of steps between updates for EITI and EDTI held constant at 64,000? How many steps between updates were used for the proposed technique? | 3 "tuple is then store in a replay memory" -> stored Why was the number of steps between updates for EITI and EDTI held constant at 64,000? How many steps between updates were used for the proposed technique? |
ICLR_2023_3912 | ICLR_2023 | 1. Compared with DyTox, the proposed method obtains only a small performance improvement, with more parameters and variance. 2. Experiments on ImageNet-1000 are missing; this is a large dataset suitable for real-world situations. 3. The evaluation of FGT is only leveraged in the ablation study; it should also be used to evaluate the performance of the proposed method and the comparative methods. 4. I want to know why the model parameters in Table 1 and Figure 5 are different. 5. The structure of this paper is a mess; the Appendix should appear in the supplementary material. | 3. The evaluation of FGT is only leveraged in the ablation study; it should also be used to evaluate the performance of the proposed method and the comparative methods.
ICLR_2022_2425 | ICLR_2022 | 1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. 2)Clarity: Since the coreset construction algorithm builds on previous works, a reader without a background in the literature on coresets would find it hard to understand why the particular sampling probabilities are chosen and why they give particular guarantees. It would be useful to rewrite the algorithm preview and to give at least a bit of intuition on how the importance sampling scores are chosen and how they can give the coreset guarantees. Suggestions:
In the experiment section, other than uniform sampling, it would be interesting to use some other classical k-means coresets as baselines for comparison.
Please highlight the technical challenges and contributions clearly when compared to coresets for classical k-means. | 1)Less Novelty: The algorithm for construction of coresets itself is not novel. Existing coreset frameworks for classical k-means and (k,z) clusterings are extended to the kernelized setting. |
ARR_2022_51_review | ARR_2022 | 1. The choice of the word-alignment baseline seems odd. The abstract claims that “Word alignment has proven to benefit many-to-many neural machine translation (NMT).” which is supported by (Lin et al., 2020). However, the method proposed by Lin et al. was not used as a baseline. Instead, the paper compared to an older baseline proposed by (Garg et al., 2019). Besides, this baseline by Garg et al. (+align) seems to contradict the claim in the abstract since it always performs worse than the baseline without word-alignment (Table 2). If, for some practical reason, the baseline of (Lin et al., 2020) can’t be used, it needs to be explained clearly.
2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the result (i.e. it’s unclear which proposed method (+w2w) or (+FA) is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary.
3. If the claim that better word-alignment improves many-to-many translation is true, why does the proposed method have no impact on the MLSC setup (Table 3)? Section 4 touches on this point but provides no explanation.
1. Please provide more details for the sentence retrieval setup (how sentences are retrieved, from what corpus, and whether it is the same as or different from the setup in (Artetxe and Schwenk, 2019)).
From the paper, “We found that for en-kk, numbers of extracted word pairs per sentence by word2word and FastAlign are 1.0 and 2.2, respectively. In contrast, the numbers are 4.2 and 20.7 for improved language pairs”. Is this because word2word and FastAlign fail for some language pairs or is this because there are few alignments between these language pairs? Would a better aligner improve results further?
2. For Table 3, are the non-highlighted cells not significant or not significantly better? If it’s the latter, please also highlight cells where the proposed approaches are significantly worse. For example, from Kk to En, +FA is significantly better than mBART (14.4 vs 14.1, difference of 0.3) and thus the cell is highlighted. However, from En to Kk, the difference between +FA and mBART is -0.5 (1.3 vs 1.8) but this cell is not highlighted. | 2. In Table 2, the proposed approaches only outperform the baselines in 1 setup (out of 3). In addition, there is no consistent trend in the result (i.e. it’s unclear which proposed method (+w2w) or (+FA) is better). Thus, the results presented are insufficient to prove the benefits of the proposed methods. To better justify the claims in this paper, additional experiments or more in-depth analysis seem necessary. |
NIPS_2019_387 | NIPS_2019 | - The main weakness is empirical---scratchGAN appreciably underperforms an MLE model in terms of LM score and reverse LM score. Further, samples from Table 7 are ungrammatical and incoherent, especially when compared to the (relatively) coherent MLE samples. - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Are they also applied to the hidden states? - There is no comparison against existing text GANs, many of which have open-source implementations. While SeqGAN is mentioned, they do not test it with the pretrained version. - Some natural ablation studies are missing: e.g. how does scratchGAN do if you *do* pretrain? This seems like a crucial baseline to have, especially since the central argument against pretraining is that MLE-pretraining ultimately results in models that are not too far from the original model. Minor comments and questions: - Note that since ScratchGAN still uses pretrained embeddings, it is not truly trained from "scratch". (Though Figure 3 makes it clear that pretrained embeddings have little impact). - I think the authors risk overclaiming when they write "Existing language GANs... have shown little to no performance improvements over traditional language models", when it is clear that ScratchGAN underperforms a language model across various metrics (e.g. reverse LM). | - I find this statement in the supplemental section D.4 questionable: "Interestingly, we found that smaller architectures are necessary for LM compared to the GAN model, in order to avoid overfitting". This is not at all the case in my experience (e.g. Zaremba et al. 2014 train 1500-dimensional LSTMs on PTB!), which suggests that the baseline models are not properly regularized. D.4 mentions that dropout is applied to the embeddings. Are they also applied to the hidden states?
NIPS_2019_524 | NIPS_2019 | weaknesses of the paper are as follows (from my perspective): * (strength) the authors introduce a generative approach for applying Hindsight Experience Replay (HER) in visual domains: the idea is simple and has the potential to improve our current Deep RL methods. * (weakness) currently, the paper does not seem to have a detailed discussion on how their generative model was trained to produce images containing the goal information. The authors do clarify this in their feedback, and it would be useful if they also added this discussion in the next version of the paper. More importantly, including this discussion is useful for the Deep RL community. * (weakness) their current approach of training the generative model relies on manually annotating the goal images, which may prevent scalability of the algorithm. Addressing this could make their approach more impactful. | * (strength) the authors introduce a generative approach for applying Hindsight Experience Replay (HER) in visual domains: the idea is simple and has the potential to improve our current Deep RL methods.
pZk9cUu8p6 | ICLR_2025 | 1. Limited Discussion of Scalability Bounds: The paper doesn't thoroughly explore the upper limits of FedDES's scalability; no clear discussion of memory requirements or computational complexity.
2. Validation Scope: Evaluation focuses mainly on Vision Transformer with CIFAR-10; could benefit from testing with more diverse models and datasets; limited exploration of edge cases or failure scenarios.
3. Network Modeling: While network delays are considered, there's limited discussion of complex network topologies or dynamic network conditions; the paper could benefit from more detailed analysis of how network conditions affect simulation accuracy. | 1. Limited Discussion of Scalability Bounds: The paper doesn't thoroughly explore the upper limits of FedDES's scalability; no clear discussion of memory requirements or computational complexity.
51gbtl2VxL | EMNLP_2023 | 1. The paper doesn't compare its models with more recent SOTAs [1-3], so it cannot get a higher soundness score.
2. You should provide the results on more datasets, such as Test2018.
3. You should provide the METEOR results, which is also reported in recent works [1-5].
4. Figure 5 is not clear; you should give more explanation of it.
[1] Yi Li, Rameswar Panda, Yoon Kim, Chun-Fu Richard Chen, Rogerio S Feris, David Cox, and Nuno Vasconcelos. 2022. VALHALLA: Visual Hallucination for Machine Translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 5216–5226.
[2] Junjie Ye, Junjun Guo, Yan Xiang, Kaiwen Tan, and Zhengtao Yu. 2022. Noiserobust Cross-modal Interactive Learning with Text2Image Mask for Multi-modal Neural Machine Translation. In Proceedings of the 29th International Conference on Computational Linguistics. 5098–5108.
[3] Junjie Ye and Junjun Guo. 2022. Dual-level interactive multimodal-mixup encoder for multi-modal neural machine translation. Applied Intelligence 52, 12 (2022), 14194–14203.
[4] Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 6153–6166.
[5] Li B, Lv C, Zhou Z, et al. On Vision Features in Multimodal Machine Translation[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 6327-6337. | 3. You should provide the METEOR results, which is also reported in recent works [1-5]. |
NIPS_2020_11 | NIPS_2020 | 1. The proposed method seems to work only for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which have wider applications in the real world than digit/text? 2. Are the results obtained on the Synbols dataset generalizable to large-scale datasets? For example, if you find algorithm A is better than B on the Synbols dataset, will the conclusion hold on large images (e.g., ImageNet scale) in real-world applications? This needs to be discussed in the paper. | 1. The proposed method seems to work only for digit or text images, such as MNIST and SVHN. Can it be used on natural images, such as CIFAR10, which have wider applications in the real world than digit/text?
ACL_2017_726_review | ACL_2017 | - Claims of being comparable to state of the art when the results on GeoQuery and ATIS do not support it. General Discussion: This is a sound work of research and could have future potential in the way semantic parsing for downstream applications is done. I was a little disappointed with the claims of “near-state-of-the-art accuracies” on ATIS and GeoQuery, which doesn’t seem to be the case (8 points difference from Liang et al., 2011). And I do not necessarily think that getting SOTA numbers should be the focus of the paper; it has its own significant contribution. I would like to see this paper at ACL provided the authors tone down their claims. In addition, I have some questions for the authors.
- What do the authors mean by minimal intervention? Does it mean minimal human intervention? That does not seem to be the case. Does it mean no intermediate representation? If so, the latter term should be used, being less ambiguous.
- Table 6: what is the breakdown of the score by correctness and incompleteness?
What % of incompleteness do these queries exhibit?
- What expertise is required from crowd-workers who produce the correct SQL queries? - It would be helpful to see some analysis of the 48% of user questions which could not be generated.
- Figure 3 is a little confusing; I could not follow the sharp dips in performance without paraphrasing around the 8th/9th stages. - Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers?
I thank the authors for their response. | - Table 4 needs a little more clarification, what splits are used for obtaining the ATIS numbers? I thank the authors for their response. |
ICLR_2022_2677 | ICLR_2022 | 1 The authors do not analyze the security (i.e., protection of the privacy) of the proposed framework.
2 The authors do not analyze the communication cost between each client (i.e., domain) and the server. In a typical federated learning system, the communication cost is a very important issue.
3 The way of using an encoder and a decoder, or a domain-specific part and a domain-independent part, is well known in existing cross-domain or transfer learning works. | 1 The authors do not analyze the security (i.e., protection of the privacy) of the proposed framework.
ACL_2017_818_review | ACL_2017 | - I would have liked to see more examples of object pairs, action verbs, and predicted attribute relations. What are some interesting action verbs and corresponding attribute relations? The paper also lacks analysis/discussion on what kind of mistakes their model makes.
- The number of object pairs (3656) in the dataset is very small. How many distinct object categories are there? How scalable is this approach to a larger number of object pairs?
- It's a bit unclear how the frame similarity factors and attributes similarity factors are selected.
General Discussion/Suggestions: - The authors should discuss the following work and compare against mining attributes/attribute distributions directly and then getting a comparative measure. What are the advantages offered by the proposed method compared to a more direct approach?
Extraction and approximation of numerical attributes from the Web. Dmitry Davidov, Ari Rappoport. ACL 2010. Minor typos: 1. In the abstract (line 026), the authors mention 'six' dimensions, but in the paper, there are only five.
2. line 248: Above --> The above 3. line 421: object first --> first 4. line 654: more skimp --> a smaller 5. line 729: selctional --> selectional | - It's a bit unclear how the frame similarity factors and attributes similarity factors are selected. |
ACL_2017_588_review | ACL_2017 | and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems.
- Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task.
2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary.
This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
- Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
2) Have you tried building a classifier that just takes h_i^e as inputs?
I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores. | 2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler. |
bkNx3O0sND | ICLR_2024 | - Some parts of the paper (e.g., the first three paragraphs of the introduction) talk about NLG in general but the paper focuses on machine translation. Reranking methods, in particular, rely on good quality estimation models that exist for MT but may not exist for other tasks. Of course, MBR finetuning can be applied to other NLG tasks (e.g., summarization) but this paper does not touch on this problem. I think the authors should either revise the writing and make the scope clearer from the beginning, or perform experiments on other NLG tasks. The contribution is mainly empirical and only validated for MT.
- The study is limited in terms of language coverage. They perform experiments on 2 language pairs only (mid & high resource), both for translation out of English. It’s not clear if the findings hold for lower resource languages and when translating into English. In particular, I’m expecting quality estimation models to be worse for low resource languages, which may end up affecting the quality of the translations produced with their method. Have you tried other languages? If so, what happens in that case?
- The related work discusses other methods for training NMT models beyond MLE (e.g., RL methods) but none of them is used as a baseline. | - The related work discusses other methods for training NMT models beyond MLE (e.g., RL methods) but none of them is used as a baseline. |
ARR_2022_233_review | ARR_2022 | Additional details regarding the creation of the dataset would be helpful to resolve some doubts regarding its robustness. It is not stated whether the dataset will be publicly released.
1) Additional reference regarding explainable NLP datasets: "Detecting and explaining unfairness in consumer contracts through memory networks" (Ruggeri et al., 2021). 2) Some aspects of the creation of the dataset are unclear and the authors must address them. First of all, will the authors release the dataset or will it remain private?
Are the guidelines used to train the annotators publicly available?
Having a single person responsible for the check at the end of the first round may introduce biases. A better practice would be to have more than one checker for each problem, at least on a subset of the corpus, to measure the agreement between them and, in case of need, adjust the guidelines.
It is not clear how many problems are examined during the second round and the agreement between the authors is not reported.
It is not clear what is meant by "accuracy" during the annotation stages.
3) Additional metrics that may be used to evaluate text generation: METEOR (http://dx.doi.org/10.3115/v1/W14-3348), SIM(ile) (http://dx.doi.org/10.18653/v1/P19-1427).
4) Why have the authors decided to use the colon symbol rather than a more original and less common symbol? Since the colon usually has a different meaning in natural language, do they think it may have an impact?
5) How language-dependent are these problems? Meaning, if these problems were perfectly translated into another language, would they remain valid? What about the R4 category? Additional comments about these aspects would be beneficial for future work, cross-lingual transfers, and multi-lingual settings.
6) In Table 3, it is not clear whether the line with +epsilon refers to the human performance when the gold explanation is available or to the RoBERTa performance when the gold explanation is available.
In any case, both of these settings would be interesting to know, so I suggest including them in the comparison, if possible.
7) The explanation that must be generated for the query, the correct answer, and the incorrect answers could be slightly different. Indeed, if I am not making a mistake, the explanation for the incorrect answer must highlight the differences w.r.t. the query, while the explanation for the correct answer must highlight the similarity. It would be interesting to analyze these three categories separately and see whether there are differences in the models' performances. | 3) Additional metrics that may be used to evaluate text generation: METEOR (http://dx.doi.org/10.3115/v1/W14-3348), SIM(ile) (http://dx.doi.org/10.18653/v1/P19-1427).