paper_id | venue | focused_review | point |
---|---|---|---|
NIPS_2021_2224 | NIPS_2021 | 1. The proposed S1DB-ED algorithm is too similar to RMED (Komiyama et al. 2015), so I think the novelty of this part is limited. The paper needs to give a sufficient discussion on the comparison with RMED. 2. The comparison baselines in the experiments are not sufficient. The paper only compares the proposed two algorithms, so readers cannot evaluate the empirical performance of the proposed algorithms. While I understand that this is a new problem and there are no other existing algorithms for this problem, the paper can still compare to some ablation variants of the proposed algorithms to demonstrate the effectiveness of key algorithmic components, or reduce the setting to conventional dueling bandits and compare with existing dueling bandit algorithms.
After Rebuttal
I read the rebuttal of the authors. Now I agree that the analysis for the S1DB-ED algorithm is non-trivial and that the authors have corrected the errors in prior work [21]. My concerns are well addressed, so I will keep my score. | 1. The proposed S1DB-ED algorithm is too similar to RMED (Komiyama et al. 2015), so I think the novelty of this part is limited. The paper needs to give a sufficient discussion on the comparison with RMED. |
NIPS_2018_849 | NIPS_2018 | - The presented node count for the graphs is quite low. How is performance affected if the count is increased? In the example of semantic segmentation: how does it affect the number of predicted classes? - Ablation study: how much of the learned pixel to node association is responsible for the performance boost. Previous work has also shown in the past that super-pixel based prediction is powerful and fast, I.e. with fixed associations. # Typos - Line 36: and computes *an* adjacency matrix - Line 255: there seems to be *a weak* correlation # Further Questions - Is there an advantage in speed in replacing some of the intermediate layers with this type of convolutional blocks? - Any ideas on how to derive the number of nodes for the graph? Any intuition on how this number regularises the predictor? - As far as I can tell the projection and re-projection is using activations from the previous layer both as feature (the where it will be mapped) and as data (the what will be mapped). Have you thought about deriving different features based on the activations; maybe also changing the dimension of the features through a non-linearity? Also concatenating hand-crafted features (or a learned derived value thereof), e.g., location, might lead to a stronger notion of "regions" as pointed out in the discussion about the result of semantic segmentation. - The paper opens that learning long-range dependencies is important for powerful predictors. In the example of semantic segmentation I can see that this is actually happening, e.g., in the visualisations in table 3; but I am not sure if it is fully required. Probably the truth lies somewhere in between and I miss a discussion about this. If no form of locality with respect to the 2d image space is encoded in the graph structure, I suspect that prediction suddenly depends on the image size. | - The paper opens that learning long-range dependencies is important for powerful predictors. In the example of semantic segmentation I can see that this is actually happening, e.g., in the visualisations in table 3; but I am not sure if it is fully required. Probably the truth lies somewhere in between and I miss a discussion about this. If no form of locality with respect to the 2d image space is encoded in the graph structure, I suspect that prediction suddenly depends on the image size. |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted.
2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below).
- General Discussion: 1. The authors stress the importance of accounting for polysemy and learning sense-specific representations. While polysemy is taken into account by calculating sense distributions for words in particular contexts in the learning procedure, the evaluation tasks are entirely context-independent, which means that, ultimately, there is only one vector per word -- or at least this is what is evaluated. Instead, word sense disambiguation and sememe information are used for improving the learning of word representations. This needs to be clarified in the paper.
2. It is not clear how the sememe embeddings are learned and the description of the SSA model seems to assume the pre-existence of sememe embeddings. This is important for understanding the subsequent models. Do the SAC and SAT models require pre-training of sememe embeddings?
3. It is unclear how the proposed models compare to models that only consider different senses but not sememes. Perhaps the MST baseline is an example of such a model? If so, this is not sufficiently described (emphasis is instead put on soft vs. hard word sense disambiguation). The paper would be stronger with the inclusion of more baselines based on related work.
4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting for the authors to look deeper into this. This aspect also does not seem to explain the improvements much since, e.g., the word similarity data sets contain frequent word pairs.
5. Related to the above point, the improvement gains seem more attributable to the incorporation of sememe information than word sense disambiguation in the learning procedure. As mentioned earlier, the evaluation involves only the use of context-independent word representations. Even if the method allows for learning sememe- and sense-specific representations, they would have to be aggregated to carry out the evaluation task.
6. The example illustrating HowNet (Figure 1) is not entirely clear, especially the modifiers of "computer".
7. It says that the models are trained using their best parameters. How exactly are these determined? It is also unclear how K is set -- is it optimized for each model or is it randomly chosen for each target word observation? Finally, what is the motivation for setting K' to 2? | 2. The evaluation on the word analogy task seems a bit unfair given that the semantic relations are explicitly encoded by the sememes, as the authors themselves point out (more details below). |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | * How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates? Preliminary Evaluation --- Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. |
NIPS_2020_1477 | NIPS_2020 | 1. I think it is a bit overstated (Line 10 and Line 673) to use the term \epsilon-approximate stationary point of J -- there is still function approximation error as in Theorem 4.5. I think the existence of this function approximation error should be explicitly acknowledged whenever the conclusion about sample complexity is stated. Otherwise readers may have the impression that compatible features (Konda, 2002, Sutton et al 2000) are used to deal with these errors, which are not the case. 2. As shown by Konda (2002) and Sutton et al (2000), compatible features are useful tools to address the function approximation error of the critic. I'm wondering if it's possible to introduce compatible features and TD(1) critic in the finite sample complexity analysis in this paper to eliminate the \epsilon_app term. 3. I feel the analysis in the paper depends heavily on the property of the stationary distribution (e.g., Line 757). I'm wondering if it's possible to conduct a similar analysis for the discounted setting (instead of the average reward setting). Although a discounted problem can be solved by methods for the average reward problem (e.g., discarding each transition w.p. 1 - \gamma, see Konda 2002), solving the discounted problem directly is more common in the RL community. It would be beneficial to have a discussion w.r.t. the discounted objective. 4. Although using advantage instead of q value is more common in practice, I'm wondering if there is other technical consideration for conducting the analysis with advantage instead of q value. 5. The assumption about maximum eigenvalues in Line 215 seems artificial. I can understand this assumption, as well as the projection in Line 8 in Algorithm 1, is mainly used to ensure the boundedness of the critic. However, as in Line 219, R_w indeed depends on \lambda, which we do not know in practice. So it means we cannot implement the exact Algorithm 1 in practice. Instead of using this assumption and the projection, is it possible to use regularization (e.g., ridge) for the critic to ensure it's bounded, as done in asymptotic analysis in Zhang et al (2020)? Also Line 216 is a bit misleading. Only the first half (negative definiteness) is used to ensure solvability. But as far as I know, in policy evaluation setting, we do not need the second half (maximum eigenvalue). 6. Some typos: Line 463 should include \epsilon_app and replace the first + with \leq \epsilon_app (the last term of Line 587) is missing in Line 585 and 586 There shouldn't be (1 - \gamma) in Line 589 In Line 618, there should be no need to introduce the summation from k=0 to t - \tau_t, as the summation from k=\tau_t to t is still used in Line 624. In Line 625, it should be \tau_t instead of \tau In Line 640, I personally think it's not proper to cite [25] (the S & B book) -- that book includes too many. Referring to the definition of w^* should be more easy to follow. In Line 658, it should be ||z_k||^2 In Line 672, \epsilon_app is missing In Line 692, it should be E[....] = 0 In Line 708, there shouldn't be \theta_1, \theta_2, \eta_1, \eta_2 In Line 774, I think expectation is missing in the LHS Konda, V. R. Actor-critic algorithms. PhD thesis, Massachusetts Institute of Technology, 2002. Zhang, S., Liu, B., Yao, H., & Whiteson. Provably Convergent Two-Timescale Off-Policy Actor-Critic with Function Approximation. ICML 2020. | 4. 
Although using advantage instead of q value is more common in practice, I'm wondering if there is other technical consideration for conducting the analysis with advantage instead of q value. |
ICLR_2022_3068 | ICLR_2022 | Weaknesses 1. The innovation in the paper is limited, more like an assembly of existing work, such as DeepLabv3+, attention and spatial attention. 2. The proposed method was not evaluated on other datasets, such as the MS COCO dataset. 3. The main focus of this paper is tiny object detection, but the analysis of small objects is limited in the experimental results.
Question: 1. What do ‘CEM’ and ‘FPM’ mean in Figure 1? 2. The novelty of CAM is limited; a similar structure has been proposed in DeepLabv3+. 3. The proposed FRM is a simple combination of channel attention and spatial attention. The innovation should be described in detail. | 3. The proposed FRM is a simple combination of channel attention and spatial attention. The innovation should be described in detail. |
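As a point of reference for question 3 above, the following is a generic channel-then-spatial attention block in the CBAM style. It is purely illustrative of what such a "simple combination" looks like and is not a reimplementation of the paper's FRM; all module names and sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic CBAM-style block: channel attention followed by spatial attention."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, _, _ = x.shape
        # Channel attention from global average- and max-pooled statistics.
        ca = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3)))).view(b, c, 1, 1)
        x = x * ca
        # Spatial attention from per-pixel channel statistics.
        stats = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(stats))

att = ChannelSpatialAttention(64)
print(att(torch.randn(2, 64, 32, 32)).shape)   # torch.Size([2, 64, 32, 32])
```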
Q2IInBu2kz | EMNLP_2023 | 1. You should compare your model with more recent models [1-5].
2. Contrastive learning has been widely used in Intent Detection [6-9], although the tasks are not identical. I think the novelty of this simple modification is not sufficient for EMNLP.
3. You should provide more details about the formula in the text, e.g. $\ell_{BCE}$; even if it is simple, give specific details.
4. You don't provide the values of some hyper-parameters, such as τ.
5. Figure 1 is blurry, which affects readability.
[1] Qin L, Wei F, Xie T, et al. GL-GIN: Fast and Accurate Non-Autoregressive Model for Joint Multiple Intent Detection and Slot Filling[C]//Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers). 2021: 178-188.
[2] Xing B, Tsang I. Co-guiding Net: Achieving Mutual Guidances between Multiple Intent Detection and Slot Filling via Heterogeneous Semantics-Label Graphs[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 159-169.
[3] Xing B, Tsang I. Group is better than individual: Exploiting Label Topologies and Label Relations for Joint Multiple Intent Detection and Slot Filling[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 3964-3975.
[4] Song M, Yu B, Quangang L, et al. Enhancing Joint Multiple Intent Detection and Slot Filling with Global Intent-Slot Co-occurrence[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 7967-7977.
[5] Cheng L, Yang W, Jia W. A Scope Sensitive and Result Attentive Model for Multi-Intent Spoken Language Understanding[J]. arXiv e-prints, 2022: arXiv: 2211.12220.
[6] Liu H, Zhang F, Zhang X, et al. An Explicit-Joint and Supervised-Contrastive Learning Framework for Few-Shot Intent Classification and Slot Filling[C]//Findings of the Association for Computational Linguistics: EMNLP 2021. 2021: 1945-1955.
[7] Qin L, Chen Q, Xie T, et al. GL-CLeF: A Global–Local Contrastive Learning Framework for Cross-lingual Spoken Language Understanding[C]//Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). 2022: 2677-2686.
[8] Liang S, Shou L, Pei J, et al. Label-aware Multi-level Contrastive Learning for Cross-lingual Spoken Language Understanding[C]//Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. 2022: 9903-9918.
[9] Chang Y H, Chen Y N. Contrastive Learning for Improving ASR Robustness in Spoken Language Understanding[J]. arXiv preprint arXiv:2205.00693, 2022. | 3. You should provide more details about the formula in the text, e.g. $\ell_{BCE}$; even if it is simple, give specific details. |
NIPS_2020_631 | NIPS_2020 | 1. How do Fourier features accelerate NTK convergence in the high-frequency range? Did I overlook something, or is it not analyzed? This is essential theoretical support for the merits of Fourier features. 2. The theory part is limited to the behavior on NTK. I understand analyzing Fourier features on MLPs is highly difficult, but I'm a bit worried there would be a significant gap between NTK and the actual behavior of MLPs (although they are asymptotically equivalent). 3. Examples in Section 5 are limited to 1D functions, which are a bit toyish. | 1. How do Fourier features accelerate NTK convergence in the high-frequency range? Did I overlook something, or is it not analyzed? This is essential theoretical support for the merits of Fourier features. |
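For context on the object of these questions (an editorial aside using the standard formulation from the Fourier-features literature, not anything specific to this submission): the feature map under discussion lifts low-dimensional coordinates with random sinusoids before the MLP. A minimal sketch:

```python
import numpy as np

def fourier_features(v, B):
    # v: (n, d) low-dimensional inputs; B: (m, d) random frequency matrix, e.g. N(0, sigma^2).
    proj = 2.0 * np.pi * v @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
B = rng.normal(scale=10.0, size=(256, 2))   # sigma controls the bandwidth of the induced kernel
x = rng.random((4, 2))
print(fourier_features(x, B).shape)         # (4, 512)
```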
NIPS_2019_494 | NIPS_2019 | of the approach, it may be interesting to do that. Clarity: The paper is well written but clarity could be improved in several cases: - I found the notation / the explicit split between "static" and temporal features into two variables confusing, at least initially. In my view this requires more information than is provided in the paper (what is S and Xt). - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like number of units etc), and many other technical details. - the paper is presented well, e.g., quality of graphs is good (though labels on the graphs in Fig 3 could be slightly bigger) Significance: - from just the paper: the results would be more interesting (and significant) if there was a way to reproduce the work more easily. At present I cannot see this work easily taken up by many other researchers mainly due to lack of detail in the description. The work is interesting, and I like the idea, but with a relatively high-level description of it in the paper it would need a little more than the peudocode in the materials to convince me using it (but see next). - In the supplementary material it is stated the source code will be made available, and in combination with paper and information in the supplementary material, the level of detail may be just right (but it's hard to say without seeing the code). Given the promising results, I can imagine this approach being useful at least for more research in a similar direction. | - even with the pseudocode given in the supplementary material I don't get the feeling the paper is written to be reproduced. It is written to provide an intuitive understanding of the work, but to actually reproduce it, more details are required that are neither provided in the paper nor in the supplementary material. This includes, for example, details about the RNN implementation (like number of units etc), and many other technical details. |
ICLR_2022_1420 | ICLR_2022 | Weakness:
Lack of novelty. The key idea, i.e., combining foreground masks to remove the artifacts from the background, is not new. Separate handling of foreground from background is a common practice for dynamic scene novel view synthesis, and many recent methods do not even require the foreground masks for modeling dynamic scenes (they jointly model the foreground region prediction module, e.g., Tretschk et al. 2021).
Lack of controllability. 1) The proposed method cannot handle the headpose. While this paper defers this problem to future work, a previous work (e.g., Gafni et al. ICCV 2021) is already able to control both facial expression and headpose. Why is it not possible to condition the headpose parameters in the NeRF beyond the facial expression, similarly to [Gafni et al. ICCV 2021]? 2) Even for controlling facial expression, it is highly limited to the mouth region only. From the overall qualitative results and the demo video, it is not clear that the method can indeed handle overall facial expressions including eyes, nose, and wrinkle details, and the diversity in mouth shape that the model can deliver is significantly limited.
Low quality. The results produced by the proposed method are of quite low quality. 1) Low resolution: While many previous works introduce high-quality view synthesis results at high resolution (512x512 or more), this paper shows low-resolution results (256x256) for some reason. Simply citing resource constraints is not a convincing argument, since many existing works already proved the feasibility of high-resolution image synthesis using implicit functions. Due to this low-resolution nature, many high-frequency details (e.g., facial wrinkles), which are the key to enabling photorealistic face image synthesis, are washed out. 2) In many cases, the conditioning facial expressions do not match those in the synthesized image. From the demo video, while mouth opening and closing are somewhat synchronized with the conditioning videos, there exists a mismatch in the style of the detailed mouth shape. | 1) The proposed method cannot handle the headpose. While this paper defers this problem to future work, a previous work (e.g., Gafni et al. ICCV 2021) is already able to control both facial expression and headpose. Why is it not possible to condition the headpose parameters in the NeRF beyond the facial expression, similarly to [Gafni et al. ICCV 2021]? |
NIPS_2017_71 | NIPS_2017 | - The paper is a bit incremental. Basically, knowledge distillation is applied to object detection (as opposed to classification as in the original paper).
- Table 4 is incomplete. It should include the results for all four datasets.
- In the related work section, the class of binary networks is missing. These networks are also efficient and compact. Example papers are:
* XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks, ECCV 2016
* Binaryconnect: Training deep neural networks with binary weights during propagations, NIPS 2015
Overall assessment: The idea of the paper is interesting. The experiment section is solid. Hence, I recommend acceptance of the paper. | - Table 4 is incomplete. It should include the results for all four datasets. |
NIPS_2021_537 | NIPS_2021 | Weakness: The main weakness of the approach is the lack of novelty. 1. The key contribution of the paper is to propose a framework which gradually fits the high-performing sub-space in the NAS search space using a set of weak predictors rather than fitting the whole space using one strong predictor. However, this high-level idea, though not explicitly highlighted, has been adopted in almost all query-based NAS approaches where the promising architectures are predicted and selected at each iteration and used to update the predictor model for next iteration. As the authors acknowledged in Section 2.3, their approach is exactly a simplified version of BO which has been extensively used for NAS [1,2,3,4]. However, unlike BO, the predictor doesn’t output uncertainty and thus the authors use a heuristic to trade-off exploitation and exploration rather than using more principled acquisition functions.
2. If we look at the specific components of the approach, they are not novel either. The weak predictors used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS are almost the same as BRP-NAS as shown in Table 2 in Appendix C. 3. Given the strong empirical results of the proposed method, a potentially more novel and interesting contribution would be to find out through theoretical analyses or extensive experiments the reasons why the simple greedy selection approach outperforms more principled acquisition functions (if that's true) on NAS and why deterministic MLP predictors, which are often overconfident when extrapolating, outperform more robust probabilistic predictors like GPs, deep ensembles or Bayesian neural networks. However, such rigorous analyses are missing in the paper.
Detailed Comments: 1. The authors conduct some ablation studies in Section 3.2. However, a more important ablation would be to modify the proposed predictor model to get some uncertainty (by deep-ensemble or add a BLR final output layer) and then use BO acquisition functions (e.g. EI) to do the sampling. The proposed greedy sampling strategy works because the search space for NAS-Bench-201 and 101 are relatively small and as demonstrated in [6], local search even gives the SOTA performance on these benchmark search spaces. For a more realistic search space like NAS-Bench-301[7], the greedy sampling strategy which lacks a principled exploitation-exploration trade-off might not work well. 2. Following the above comment, I’ll suggest the authors to evaluate their methods on NAS-Bench-301 and compare with more recent BO methods like BANANAS[2] and NAS-BOWL[4] or predictor-based method like BRP-NAS [5] which is almost the same as the proposed approach. I’m aware that the authors have compared to BONAS and shows better performance. However, BONAS uses a different surrogate which might be worse than the options proposed in this paper. More importantly, BONAS use weight-sharing to evaluate architectures queried which may significantly underestimate the true architecture performance. This trades off its performance for time efficiency. 3. For results on open-domain search, the authors perform search based on a pre-trained super-net. Thus, the good final performance of WeakNAS on MobileNet space and NASNet space might be due to the use of a good/well-trained supernet; as shown in Table 6, OFA with evalutinary algorithm can give near top performance already. More importantly, if a super-net has been well-trained and is good, the cost of finding the good subnetwork from it is rather low as each query via weight-sharing is super cheap. Thus, the cost gain in query efficiency by WeakNAS on these open-domain experiments is rather insignificant. The query efficiency improvement is likely due to the use of a predictor to guide the subnetwork selection in contrast to the naïve model-free selection methods like evolutionary algorithm or random search. A more convincing result would be to perform the proposed method on DARTS space (I acknowledge that doing it on ImageNet would be too expensive) without using the supernet (i.e. evaluate the sampled architectures from scratch) and compare its performance with BANANAS[2] or NAS-BOWL[4]. 4. If the advantage of the proposed method is query-efficiency, I’d love to see Table 2, 3 (at least the BO baselines) in plots like Fig. 4 and 5, which help better visualise the faster convergence of the proposed method. 5. Some intuitions are provided in the paper on what I commented in Point 3 in Weakness above. However, more thorough experiments or theoretical justifications are needed to convince potential users to use the proposed heuristic (a simplified version of BO) rather than the original BO for NAS. 6. I might misunderstand something here but the results in Table 3 seem to contradicts with the results in Table 4. As in Table 4, WeakNAS takes 195 queries on average to find the best architecture on NAS-Bench-101 but in Table 3, WeakNAS cannot reach the best architecture after even 2000 queries.
7. The results in Table 2, which show that linear-/exponential-decay sampling clearly underperforms uniform sampling, confuse me a bit. If the predictor is accurate on the good subregion, as argued by the authors, increasing the sampling probability for top-performing predicted architectures should lead to better performance than uniform sampling, especially when the performances of architectures in the good subregion are rather close. 8. In Table 1, what does the number of predictors mean? To me, it is simply the number of search iterations. Do the authors reuse the weak predictors from previous iterations in later iterations like an ensemble?
I understand that given the time constraint, the authors are unlikely to respond to my comments. Hope those comments can help the authors for future improvement of the paper.
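To make detailed comment 1 above concrete, here is a minimal editorial sketch of replacing the greedy top-k sampling with an expected-improvement acquisition over an ensemble of predictors. The predictor interface and all names are hypothetical and not taken from the paper; the ensemble mean and standard deviation simply stand in for a posterior.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(predictors, candidates, best_so_far, xi=0.01):
    # predictors: list of fitted models exposing .predict(X) -> predicted accuracy.
    # candidates: encoded pool of not-yet-queried architectures (higher accuracy is better).
    preds = np.stack([p.predict(candidates) for p in predictors], axis=0)
    mu, sigma = preds.mean(axis=0), preds.std(axis=0) + 1e-9
    z = (mu - best_so_far - xi) / sigma
    return (mu - best_so_far - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def select_batch(predictors, candidates, best_so_far, batch_size=10):
    ei = expected_improvement(predictors, candidates, best_so_far)
    return np.argsort(-ei)[:batch_size]   # indices of architectures to train next
```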
References: [1] Kandasamy, Kirthevasan, et al. "Neural architecture search with Bayesian optimisation and optimal transport." NeurIPS. 2018. [2] White, Colin, et al. "BANANAS: Bayesian Optimization with Neural Architectures for Neural Architecture Search." AAAI. 2021. [3] Shi, Han, et al. "Bridging the Gap between Sample-based and One-shot Neural Architecture Search with BONAS." NeurIPS. 2020. [4] Ru, Binxin, et al. "Interpretable Neural Architecture Search via Bayesian Optimisation with Weisfeiler-Lehman Kernels." ICLR. 2020. [5] Dudziak, Lukasz, et al. "BRP-NAS: Prediction-based NAS using GCNs." NeurIPS. 2020. [6] White, Colin, et al. "Local search is state of the art for nas benchmarks." arXiv. 2020. [7] Siems, Julien, et al. "NAS-Bench-301 and the case for surrogate benchmarks for neural architecture search." arXiv. 2020.
The limitations and social impacts are briefly discussed in the conclusion. | 2. If we look at the specific components of the approach, they are not novel either. The weak predictors used are MLP, Regression Tree or Random Forest, all of which have been used for NAS performance prediction before [2,3,7]. The sampling strategy is similar to epsilon-greedy and exactly the same as that in BRP-NAS[5]. In fact the results of the proposed WeakNAS are almost the same as BRP-NAS as shown in Table 2 in Appendix C. |
ACL_2017_684_review | ACL_2017 | - Different variants of the model achieve state-of-the-art performance across various data sets. However, the authors do provide an explanation for this (i.e. size of data set and text anonymization patterns).
- General Discussion: The paper describes an approach to text comprehension which uses gated attention modules to achieve state-of-the-art performance. Compared to previous attention mechanisms, the gated attention reader uses the query embedding and makes multiple passes (multi-hop architecture) over the document and applies multiplicative updates to the document token vectors before finally producing a classification output regarding the answer. This technique somewhat mirrors how humans solve text comprehension problems. Results show that the approach performs well on large data sets such as CNN and Daily Mail. For the CBT data set, some additional feature engineering is needed to achieve state-of-the-art performance. Overall, the paper is very well-written and model is novel and well-motivated.
Furthermore, the approach achieves state-of-the-art performance on several data sets. I had only minor issues with the evaluation. The experimental results section does not mention whether the improvements (e.g. in Table 3) are statistically significant and if so, which test was used and what was the p-value. Also I couldn't find an explanation for the performance on CBT-CN data set where the validation performance is superior to NSE but test performance is significantly worse. | - Different variants of the model achieve state-of-the-art performance across various data sets. However, the authors do provide an explanation for this (i.e. size of data set and text anonymization patterns). |
ICLR_2022_3204 | ICLR_2022 | - It is unclear whether CBR works as expected (i.e., aligns the distributions of intra-camera and inter-camera distances). Intuitively, there is more than one possible changing direction of the two terms in Eq. 3. For example, 1) the second term gets larger, the first term gets smaller (as shown in Fig.2), 2) both of them get smaller, 3) the second term stays the same, and the first term gets smaller. However, according to Fig.4 (b) and Fig.5 (b), we can observe that the changes of the distance distribution caused by CBR should be in line with cases 2) and 3) mentioned above rather than the "expected" case 1). Therefore, this paper should provide more explanation to make it clear.
One of the main contributions of this paper is the CBR, so different optimization strategies and the corresponding results should be discussed. For example, what will happen if both the inter and intra terms in Eq. 3 are minimized, or if only the first term is minimized? | 1) “expected”. Therefore, this paper should provide more explanation to make it clear. One of the main contributions of this paper is the CBR, so different optimization strategies and the corresponding results should be discussed. For example, what will happen if both the inter and intra terms in Eq. 3 are minimized, or if only the first term is minimized? |
NIPS_2019_1207 | NIPS_2019 | - Moderate novelty. This paper combines various components proposed in previous work (some of it, it seems, unbeknownst to the authors - see Comment 1): hierarchical/structured optimal transport distances, Wasserstein-Procrustes methods, sample complexity results for Wasserstein/Sinkhorn objectives. Thus, I see the contributions of this paper being essentially: putting together these pieces and solving them cleverly via ADMM. - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: 1 Related Work. My main concern with this paper is its apparent lack of awareness of two very related lines of work. On the one hand, the idea of defining hierarchical OT distances has been explored before in various contexts (e.g., [5], [6] and [7]), and so has leveraging cluster information for structured losses, e.g. [9] and [10] (note that latter of these relies on an ADMM approach too). On the other hand, combining OT with Procrustes alignment has a long history too (e.g, [1]), with recent successful application in high-dimensional problems ([2], [3], [4]). All of these papers solve some version of Eq (4) with orthogonality (or more general constraints), leading to algorithms whose core is identical to Algorithm 1. Given that this paper sits at the intersection of two rich lines of work in the OT literature, I would have expected some effort to contrast their approach, both theoretically and empirically, with all these related methods. 2. Baselines. Related to the point above, any method that does not account for rotations across data domains (e.g., classic Wasserstein distance) is inadequate as a baseline. Comparing to any of the methods [1]-[4] would have been much more informative. In addition, none of the baselines models group structure, which again, would have been easy to remedy by including at least one alternative that does (e.g., [10] or the method of Courty et al, which is cited and mentioned in passing, but not compared against). As for the neuron application, I am not familiar with the DAD method, but the same applies about the lack of comparison to OT-based methods with structure/Procrustes invariance. 3. Conflation of geometric invariance and hierarchical components. Given that this approach combines two independent extensions on the classic OT problem (namely, the hierarchical formulation and the aligment over the stiefel manifold), I would like to understand how important these two are for the applications explored in this work. Yet, no ablation results are provided. A starting point would be to solve the same problem but fixing the transformation T to be the identity, which would provide a lower bound that, when compared against the classic WA, would neatly show the advantage of the hierarchical vs a "flat" classic OT versions of the problem. 4. No runtime results. Since computational efficiency is one of the major contributions touted in the abstract and introduction, I was expecting to see at least empirical and/or a formal convergence/runtime complexity analysis, but neither of these was provided. Since the toy example is relatively small, and no details about the neural population task are provided, the reader is left to wonder about the practical applicability of this framework for real applications. Minor Comments/Typos: - L53. *the* data. - L147. It's not clear to me why (1) is referred to as an update step here. Wrong eqref? 
- Please provide details (size, dimensionality, interpretation) about the neural population datasets, at least in the supplement. Many readers will not be familiar with them. References: * OT-based methods to align in the presence of unitary transformations: [1] Rangarajan et al, "The Softassign Procrustes Matching Algorithm", 1997. [2] Zhang et al, "Earth Mover's Distance Minimization for Unsupervised Bilingual Lexicon Induction", 2017. [3] Alvarez-Melis et al, "Towards Optimal Transport with Global Invariances", 2019. [4] Grave et al, "Unsupervised Alignment of Embeddings with Wasserstein Procrustes", 2019. *Hierarchical OT methods: [5] Yurochkin et al, "Hierarchical Optimal Transport for Document Representation". [6] Schmitzer and Schnorr, "A Hierarchical Approach to Optimal Transport", 2013 [7] Dukler et al, "Wasserstein of Wasserstein Loss for Learning Generative Models", 2019 [9] Alvarez-Melis et al, "Structured Optimal Transport", 2018 [10] Das and Lee, "Unsupervised Domain Adaptation Using Regularized Hyper-Graph Matching", 2018 | - Lacking awareness of related work (see Comment 1) - Missing relevant baselines and runtime experimental results (Comments 2, 3 and 4) Major Comments/Questions: |
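For readers unfamiliar with the Wasserstein-Procrustes line of work cited in Major Comment 1 of the review above ([1]-[4]), the following is a minimal editorial sketch of the standard alternating scheme (not the paper's Algorithm 1): fix the orthogonal map and solve an entropy-regularised OT problem, then fix the coupling and solve orthogonal Procrustes in closed form via an SVD. All names and parameter values are illustrative.

```python
import numpy as np

def sinkhorn(C, reg=0.05, iters=200):
    # Entropy-regularised OT with uniform marginals; C is the pairwise cost matrix.
    K = np.exp(-C / reg)
    a = np.ones(C.shape[0]) / C.shape[0]
    b = np.ones(C.shape[1]) / C.shape[1]
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]          # transport plan P

def wasserstein_procrustes(X, Y, outer_iters=20):
    # Alternately estimate a coupling P and an orthogonal map Q aligning X to Y.
    Q = np.eye(X.shape[1])
    for _ in range(outer_iters):
        XQ = X @ Q
        C = ((XQ[:, None, :] - Y[None, :, :]) ** 2).sum(-1)   # squared distances
        P = sinkhorn(C)
        U, _, Vt = np.linalg.svd(X.T @ P @ Y)                 # closed-form orthogonal Procrustes
        Q = U @ Vt
    return Q, P
```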
NIPS_2021_1954 | NIPS_2021 | Some state-of-the-art partial multi-label references are missing, such as 1) Partial Multi-Label Learning with Label Distribution 2) Noisy label tolerance: A new perspective of Partial Multi-Label Learning 3) Partial multi-label learning with mutual teaching.
The explanation of Theorem 1 is weak; the author should provide more explanations.
Can the author do the experiments on the image data set? | 2) Noisy label tolerance: A new perspective of Partial Multi-Label Learning |
CY9f6G89Rv | ICLR_2024 | While the proposed approach is interesting, I believe that there is a substantial amount of complexity that is currently unjustified. Moreover, little is offered in terms of intuition as to why the involved components are jointly able to produce a more accurate surrogate model. I believe the paper has potential, but these outstanding issues would have to be addressed in a satisfactory manner for the paper to be accepted. Specifically,
(To avoid confusion, I will denote the BO surrogate as the BO GP and the student as the SGP)
### __1. The complexity of the method:__
In addition to the Vanilla GP, there are three learnable components.
- The teacher, trained on two real and synthetic data based on its own classification ability and the student's performance
- The student, trained on synthetic data and validated on real data (synthetic and real)
- The synthetic data generation process, which is optimized to minimize the feedback loss of the first two
This nested scheme makes it difficult to discern whether synthetic data are sensible, are classified correctly and what drives performance - essentially why this process makes sense. As such, plots demonstrating the fit of the student model, the synthetically generated data or anything else which boosts intuition is paramount. As of now, the method is not sufficiently digestible.
### __2. The concept of adding synthetic data:__
In BO, the (GP) surrogate models the underlying objective in a principled manner which yields calibrated uncertainty estimates, assuming that it is properly trained. Since the GP is in itself principled, I am not convinced that adding synthetic data is an intrinsically good idea. The proposed approach suggests that the GP is either un-calibrated or does not sufficiently extrapolate the observed data. While this seems possible, there is no insight into which possible fault (of the GP) is being addressed, nor how the added synthetic data accomplishes it. Is it
1. The teacher MLP that aggressively (and accurately) extrapolates in synthetic data generation?
2. That the vanilla GP is simply under-confident, so that adding synthetic data acts to reduce the uncertainty by adding more data?
3. Some other reason/combination of the two
Note that I am specifically asking for intuition as to how the _modeling_ (and not the BO results) can improve by adding synthetic data.
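As background for possibility 2 above (an editorial aside, not a claim about this paper): with fixed hyperparameters, a GP's posterior variance depends only on the input locations it is conditioned on, not on the labels, so adding synthetic points can only shrink the reported uncertainty. A tiny numpy illustration with an arbitrary RBF kernel:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def gp_posterior_var(X_train, X_test, noise=1e-6, ls=1.0):
    # Posterior variance k(x,x) - k(x,X)(K + noise*I)^{-1} k(X,x); labels never enter.
    K = rbf(X_train, X_train, ls) + noise * np.eye(len(X_train))
    Ks = rbf(X_test, X_train, ls)
    return 1.0 - np.sum(Ks @ np.linalg.inv(K) * Ks, axis=1)

X_real = np.random.rand(5, 1)
X_synth = np.random.rand(20, 1)             # "synthetic" inputs; their labels are irrelevant here
X_test = np.linspace(0, 1, 50)[:, None]
var_before = gp_posterior_var(X_real, X_test)
var_after = gp_posterior_var(np.vstack([X_real, X_synth]), X_test)
print(var_after.mean() <= var_before.mean())   # True: variance shrinks once more inputs are conditioned on
```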
### __3. The accuracy and role of the teacher:__
If the teacher is able to generate accurate data (synthetic or real), why not use it as a surrogate instead of the conventional GP? Comments like
#### _"Evaluating the performance of the student that is trained with random unlabeled data far away from the global optimum, may not provide relevant feedback for tuning the teacher toward finding the global optimum."_
suggest that it is the teacher's (and not the BO loop's) task to find the global optimum. If the teacher adds data that it believes is good to the BO surrogate, the teacher is indirectly conducting the optimization.
### __4. The role of the student:__
The student is trained only on synthetic data, validated on a (promising) subset of the true data, and its loss is coupled with the teacher's. As such, the student's only role appears to be as a regularizer (i.e. to incorporate GP-like smoothness into the MLP) while producing approximately the same predictions on labeled data. Can the authors shed light on whether this is true, and if so, what difference the student offers as opposed to conventional regularization?
### __5. Few tasks in results:__
Three benchmarks constitute a fairly sparse evaluation. Does the proposed method make sense on conventional test functions, and can a couple of those be added?
### __6. Missing references:__
The key idea of 5.1 is _very_ similar to MES (Wang and Jegelka 2017), and so this is a must-cite. They also use a GEV (specifically a Gumbel) to fit $p(y^*)$. __Minor:__
- App. B1: proposes -->purposes, remove ", however"
- Sec 5.0 of systematically sample --> of systematically sampling
- Legend fontsize is tiny, Fig.1 fontsize is too small as well | 2. That the vanilla GP is simply under-confident, so that adding synthetic data acts to reduce the uncertainty by adding more data? |
NIPS_2017_434 | NIPS_2017 | ---
This paper is very clean, so I mainly have nits to pick and suggestions for material that would be interesting to see. In roughly decreasing order of importance:
1. A seemingly important novel feature of the model is the use of multiple INs at different speeds in the dynamics predictor. This design choice is not
ablated. How important is the added complexity? Will one IN do?
2. Section 4.2: To what extent should long term rollouts be predictable? After a certain amount of time it seems MSE becomes meaningless because too many small errors have accumulated. This is a subtle point that could mislead readers who see relatively large MSEs in figure 4, so perhaps a discussion should be added in section 4.2.
3. The images used in this paper sample have randomly sampled CIFAR images as backgrounds to make the task harder.
While more difficult tasks are more interesting modulo all other factors of interest, this choice is not well motivated.
Why is this particular dimension of difficulty interesting?
4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error?
5. Are the learned object state embeddings interpretable in any way before decoding?
6. It may be beneficial to spend more time discussing model limitations and other dimensions of generalization. Some suggestions:
* The number of entities is fixed and it's not clear how to generalize a model to different numbers of entities (e.g., as shown in figure 3 of INs).
* How many different kinds of physical interaction can be in one simulation?
* How sensitive is the visual encoder to shorter/longer sequence lengths? Does the model deal well with different frame rates?
Preliminary Evaluation ---
Clear accept. The only thing which I feel is really missing is the first point in the weaknesses section, but its lack would not merit rejection. | 4. line 232: This hypothesis could be specified a bit more clearly. How do noisy rollouts contribute to lower rollout error? |
NIPS_2018_461 | NIPS_2018 | 1. Symbols are a little bit complicated and take a lot of time to understand. 2. The author should probably focus more on the proposed problem and framework, instead of spending much space on the applications. 3. No conclusion section. Generally, I think this paper is good, but my main concern is the originality. If this paper had appeared a couple of years ago, I would have thought that using meta-learning to solve problems was a creative idea. However, for now, there are many works using meta-learning to solve a variety of tasks, such as in active learning and reinforcement learning. Hence, this paper seems not very exciting. Nevertheless, deciding the number of clusters and selecting good clustering algorithms are still useful. Quality: 4 of 5 Clarity: 3 of 5 Originality: 2 of 5 Significance: 4 of 5 Typo: Line 240 & 257: Figure 5 should be Figure 3. | 3. No conclusion section. Generally, I think this paper is good, but my main concern is the originality. If this paper had appeared a couple of years ago, I would have thought that using meta-learning to solve problems was a creative idea. However, for now, there are many works using meta-learning to solve a variety of tasks, such as in active learning and reinforcement learning. Hence, this paper seems not very exciting. Nevertheless, deciding the number of clusters and selecting good clustering algorithms are still useful. Quality: |
ICLR_2021_1476 | ICLR_2021 | Lack of clarity:
The paper was difficult to follow, it omits several crucial details necessary to understand the proposed method.
Below are some general questions or suggestions:
While the paper's principal focus is on calibration and recalibration, it is unclear why there are claims to address aleatoric uncertainty
The concept of recalibration is introduced in Section 2 as a classification problem. However, in Section 4, the focus is on regression with normalizing flows
In Figures 1 and 2, what are X, c, W_1, W_2?
Figure 2, add labels to x-axis and y-axis
Improve caption quality across all Figures and tables
CDF performance plot:
The paper claims the CDF performance plot is one of the main contributions, yet it is difficult to interpret or follow. Why are calibrated CDFs uniformly distributed?
What is σ?
What is pull?
What is ψ?
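Editorial note on the first question in this block (not part of the original review): the usual justification is the probability integral transform, which is presumably what the paper relies on. If $Y \sim F$ with $F$ continuous and strictly increasing, then for $u \in (0,1)$, $\Pr(F(Y) \le u) = \Pr(Y \le F^{-1}(u)) = F(F^{-1}(u)) = u$, so $F(Y) \sim \mathrm{Uniform}(0,1)$. A calibrated model's predicted CDFs, evaluated at the realized outcomes, should therefore look uniform.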
Recalibration Normalizing flows:
What are the complete formulations of the likelihoods optimized in 1) and 2)?
How is normalizing flow extended to multivariate calibration?
Weak experiments:
How is the Calib metric computed?
The experiments are on toy data and small UCI datasets. Additional large scale image datasets or regression tasks would strengthen the submission. | 1) and 2)? How is normalizing flow extended to multivariate calibration? Weak experiments: How is the Calib metric computed? The experiments are on toy data and small UCI datasets. Additional large scale image datasets or regression tasks would strengthen the submission. |
NIPS_2017_356 | NIPS_2017 | ]
My major concern about this paper is the experiments on the visual dialog dataset. The authors only show the proposed model's performance in the discriminative setting without any ablation studies. There are not enough experimental results to show how the proposed model works on the real dataset. If possible, please answer my following questions in the rebuttal.
1: The authors claim their model can achieve superior performance while having significantly fewer parameters than the baseline [1]. This is mainly achieved by using a much smaller word embedding size and LSTM size. To me, it could be that the authors in [1] just tested the model with a standard parameter setting. To back up this claim, are there any improvements when the proposed model uses larger word embeddings and LSTM parameters?
2: There are two test settings in visual dialog, while Table 1 only shows the result in the discriminative setting. It's known that the discriminative setting cannot be applied in real applications, so what is the result in the generative setting?
3: To further show that the proposed visual reference resolution model works on a real dataset, please also conduct an ablation study on the VisDial dataset. One experiment I'm really interested in is the performance of ATT(+H) (in figure 4 left). What is the result if the proposed model doesn't consider the relevant attention retrieval from the attention memory? | 2: There are two test settings in visual dialog, while Table 1 only shows the result in the discriminative setting. It's known that the discriminative setting cannot be applied in real applications, so what is the result in the generative setting? |
NIPS_2017_28 | NIPS_2017 | - Most importantly, the explanations are very qualitative and whenever simulation or experiment-based evidence is given, the procedures are described very minimally or not at all, and some figures are confusing, e.g. what is "sample count" in fig. 2? It would really help adding more details to the paper and/or supplementary information in order to appreciate what exactly was done in each simulation. Whenever statistical inferences are made, there should be error bars and/or p-values.
- Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations.
- Model implementation should be better justified: for example, the stopping rule with n consecutive identical samples seems a bit arbitrary (at least it's hard to imagine neural/behavioral parallels for that) and sensitivity with regard to n is not discussed.
- Finally it's unclear how perceptual modifications apply for the case of recall: in my understanding the items are freely recalled from memory and hence can't be perceptually modified. Also what are speeded/unspeeded conditions? | - Although in principle the argument that in case of recognition lists are recalled based on items makes sense, in the most common case of recognition, old vs new judgments, new items comprise the list of all items available in memory (minus the ones seen), and it's hard to see how such an exhaustive list could be effectively implemented and concrete predictions tested with simulations. |
NIPS_2020_1211 | NIPS_2020 | I have only a few remarks on this paper, even though they shouldn't be considered as weaknesses. They are listed below in no particular order. - in eq.1 | is used both as the absolute value operator and the cardinality one, which can lead to confusion - in eq.2, \tau and v have not been previously defined (unless I'm missing something) - I find it regrettable that no theoretical analysis of Mesa (e.g. convergence speed, generalization error, etc) is proposed aside from the complexity one, especially since it is built upon frameworks with strong theoretical properties - line 155 "is thus can be" typo - line 173 reference error "Haarnoja et al." - line 233 compares > compared - table 2, what does k correspond to? Is it the parameter of Algorithm 2? - a few more datasets would've been appreciated, especially concerning the cross-task transferability | - a few more datasets would've been appreciated, especially concerning the cross-task transferability |
ACL_2017_105_review | ACL_2017 | Maybe the model is just an ordinary BiRNN with alignments de-coupled.
Only evaluated on morphology, no other monotone Seq2Seq tasks.
- General Discussion: The authors propose a novel encoder-decoder neural network architecture with "hard monotonic attention". They evaluate it on three morphology datasets.
This paper is a tough one. On the one hand it is well-written, mostly very clear and also presents a novel idea, namely including monotonicity in morphology tasks. The reason for including such monotonicity is pretty obvious: Unlike machine translation, many seq2seq tasks are monotone, and therefore general encoder-decoder models should not be used in the first place. That they still perform reasonably well should be considered a strong argument for neural techniques, in general. The idea of this paper is now to explicitly enforce a monotonic output character generation. They do this by decoupling alignment and transduction and first aligning input-output sequences monotonically and then training to generate outputs in agreement with the monotone alignments.
However, the authors are unclear on this point. I have a few questions: 1) What do your alignments look like? On the one hand, the alignments seem to be of the kind 1-to-many (as in the running example, Fig.1), that is, 1 input character can be aligned with zero, 1, or several output characters. However, this seems to contrast with the description given in lines 311-312 where the authors speak of several input characters aligned to 1 output character. That is, do you use 1-to-many, many-to-1 or many-to-many alignments?
2) Actually, there is a quite simple approach to monotone Seq2Seq. In a first stage, align input and output characters monotonically with a 1-to-many constraint (one can use any monotone aligner, such as the toolkit of Jiampojamarn and Kondrak). Then one trains a standard sequence tagger(!) to predict exactly these 1-to-many alignments. For example, flog->fliege (your example on l.613): First align as in "f-l-o-g / f-l-ie-ge". Now use any tagger (could use an LSTM, if you like) to predict "f-l-ie-ge" (sequence of length 4) from "f-l-o-g" (sequence of length 4). Such an approach may have been suggested in multiple papers, one reference could be [*, Section 4.2] below.
My two questions here are: 2a) How does your approach differ from this rather simple idea?
2b) Why did you not include it as a baseline?
Further issues: 3) It's really a pity that you only tested on morphology, because there are many other interesting monotonic seq2seq tasks, and you could have shown your system's superiority by evaluating on these, given that you explicitly model monotonicity (cf. also [*]).
4) You perform "on par or better" (l.791). There seems to be a general cognitive bias among NLP researchers to map instances where they perform worse to "on par" and all the rest to "better". I think this wording should be corrected, but otherwise I'm fine with the experimental results.
5) You say little about your linguistic features: From Fig. 1, I infer that they include POS, etc. 5a) Where did you take these features from?
5b) Is it possible that these are responsible for your better performance in some cases, rather than the monotonicity constraints?
Minor points: 6) Equation (3): please re-write $NN$ as $\text{NN}$ or similar 7) l.231 "Where" should be lower case 8) l.237 and many more: $x_1\ldots x_n$. As far as I know, the math community recommends to write $x_1,\ldots,x_n$ but $x_1\cdots x_n$. That is, dots should be on the same level as surrounding symbols.
9) Figure 1: is it really necessary to use cyrillic font? I can't even address your example here, because I don't have your fonts.
10) l.437: should be "these" [*] @InProceedings{schnober-EtAl:2016:COLING, author = {Schnober, Carsten and Eger, Steffen and Do Dinh, Erik-L\^{a}n and Gurevych, Iryna}, title = {Still not there? Comparing Traditional Sequence-to-Sequence Models to Encoder-Decoder Neural Networks on Monotone String Translation Tasks}, booktitle = {Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers}, month = {December}, year = {2016}, address = {Osaka, Japan}, publisher = {The COLING 2016 Organizing Committee}, pages = {1703--1714}, url = {http://aclweb.org/anthology/C16-1160} } AFTER AUTHOR RESPONSE Thanks for the clarifications. I think your alignments got mixed up in the response somehow (maybe a coding issue), but I think you're aligning 1-0, 0-1, 1-1, and later make many-to-many alignments from these.
I know that you compare to Nicolai, Cherry and Kondrak (2015) but my question would have rather been: why not use 1-x (x in 0,1,2) alignments as in Schnober et al. and then train a neural tagger on these (e.g. BiLSTM). I wonder how much your results would have differed from such a rather simple baseline. ( A tagger is a monotone model to start with and given the monotone alignments, everything stays monotone. In contrast, you start out with a more general model and then put hard monotonicity constraints on this ...) NOTES FROM AC Also quite relevant is Cohn et al. (2016), http://www.aclweb.org/anthology/N16-1102 .
Isn't your architecture also related to methods like the Stack LSTM, which similarly predicts a sequence of actions that modify or annotate an input? Do you think you lose anything by using a greedy alignment, in contrast to Rastogi et al. (2016), which also has hard monotonic attention but sums over all alignments? | 4) You perform "on par or better" (l.791). There seems to be a general cognitive bias among NLP researchers to map instances where they perform worse to "on par" and all the rest to "better". I think this wording should be corrected, but otherwise I'm fine with the experimental results. |
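To make the two-stage baseline described in point 2) above concrete, here is a minimal, self-contained sketch (not taken from any of the cited papers): the 1-to-many monotone alignment is hand-specified for the flog -> fliege example, and a trivial most-frequent-chunk "tagger" stands in for the LSTM/BiLSTM tagger the review mentions; all names and the toy training set are illustrative assumptions.

```python
from collections import Counter, defaultdict

def to_tagging_instance(source, aligned_chunks):
    """Pair each source character with the output chunk it is aligned to
    (one tag per input position, so the model stays a plain tagger)."""
    assert len(source) == len(aligned_chunks)
    return list(zip(source, aligned_chunks))

# flog -> fliege, aligned monotonically as f-l-o-g / f-l-ie-ge (1-to-many)
train = [to_tagging_instance("flog", ["f", "l", "ie", "ge"])]

# Deliberately trivial tagger: predict the most frequent chunk seen for each
# source character; an LSTM/BiLSTM tagger would replace this in practice.
votes = defaultdict(Counter)
for instance in train:
    for src_char, chunk in instance:
        votes[src_char][chunk] += 1

def tag(source):
    return "".join(votes[c].most_common(1)[0][0] if c in votes else c
                   for c in source)

print(tag("flog"))  # -> fliege
```

The point of the sketch is only that, once the alignment fixes the segmentation, the transduction reduces to predicting one (possibly multi-character or empty) chunk per input position, which is monotone by construction.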
ICLR_2023_3317 | ICLR_2023 | Weaknesses, main comments: • What is the advantage of using a differentiable LP layer (a GNN and an LP solver) as a high-level policy, as shown in Eq. 10?
– How does it compare to [1], which considers the LP optimization layer as a meta-environment?
– How does it compare to an explicit (i.e., not implicit) task assignment protocol?
E.g., a high-level policy that directly outputs task weightings instead of the intermediate C matrix?
• How does this method address sparse-reward problems in a better way? The experiments do not support this well. In practice, the proposed method requires sub-task-specific rewards to be specified, which would be similar to providing a dense reward signal that includes rewards for reaching sub-goals. If given the sum of the low-level rewards as the global reward, will the other methods (e.g., QMIX) solve the sparse-reward tasks as well?
minor comments: • It is hard to determine whether the solution to the matching problem (the learned agent-task score matrix C) optimized by LP achieves a global perspective over the learning process.
• When the lower-level policies are also trained online, the learning could be unstable. Details on how to solve the instability in hierarchical learning are missing.
• What is the effect of the use of hand-defined tasks on performance? What is the effect of the algorithm itself? Maybe do an ablation study.
• Section 5.2 "training low-level actor-critic" should be put in the main text.
[1] Carion, N., Usunier, N., Synnaeve, G., et al. A structured prediction approach for generalization in cooperative multi-agent reinforcement learning. Advances in Neural Information Processing Systems, 2019, 32: 8130-8140. | • How does this method address sparse-reward problems in a better way? The experiments do not support this well. In practice, the proposed method requires sub-task-specific rewards to be specified, which would be similar to providing a dense reward signal that includes rewards for reaching sub-goals. If given the sum of the low-level rewards as the global reward, will the other methods (e.g., QMIX) solve the sparse-reward tasks as well? minor comments: |
JXvEzl8YkS | ICLR_2025 | 1. Although I am quite familiar with regime-switching, identification, and financial markets, the logical structure of this paper is challenging to follow. The introduction directly presents model definitions and the authors’ proposed improvements without first providing an overview of related work, the motivation, or the current state of research. This structure makes it difficult for readers to understand the unique contributions of this work and distinguish them from existing models. A clearer, more organized flow would greatly enhance readability and clarify the authors’ contributions.
2. The paper lacks a detailed motivation explaining why Regularised K-means, specifically, is optimal for adapting into the Jump Model framework. A comparison of Regularised K-means with alternative approaches could clarify why it is most suitable in this context.
3. The paper briefly mentions Hidden Markov Models (HMM) and other regime-switching models but does not provide a thorough comparison, such as the latest model - RHINE: A Regime-Switching Model with Nonlinear Representation for Discovering and Forecasting Regimes in Financial Markets (SIAM SDM2024). Besides, although feature selection is central to the model, there is limited discussion on alternative feature selection methods in time series analysis.
4. The formulas (such as those in Eq. 1.6) lack detailed explanations for how each component, including the penalty terms $ P(\mu) $, specifically enhances feature selection and regime accuracy.
5. The experimental part does not demonstrate how regime identification or interpretability is achieved. Additionally, there are no actual experimental results presented in the main text, yet the pseudo-code of the algorithms takes up several pages.
6. Appendix A is left blank, and the purpose of Proposition B.1 in Appendix B is unclear—is it merely meant to illustrate the classic partitioning principle of K-means? This is a well-known concept in machine learning, and furthermore, the authors’ so-called “proof” is missing. | 6. Appendix A is left blank, and the purpose of Proposition B.1 in Appendix B is unclear—is it merely meant to illustrate the classic partitioning principle of K-means? This is a well-known concept in machine learning, and furthermore, the authors’ so-called “proof” is missing. |
NIPS_2019_390 | NIPS_2019 | 1. The distinction between modeling uncertainty about the Q-values and modeling stochasticity of the reward (lines 119-121) makes some sense philosophically but the text should make clearer the practical distinction between this and distributional reinforcement learning. 2. It is not explained (Section 5) why the modifications made in Definition 5.1 aren't important in practice. 3. The Atari game result (Section 7.2) is limited to a single game and a single baseline. It is very hard to interpret this. Less major weaknesses: 1. The main text should make it more clear that there are additional experiments in the supplement (and preferably summarize their results). Questions: 1. You define a modified TD learning algorithm in Definition 5.1, for the purposes of theoretical analysis. Why should we use the original proposal (Algorithm 1) over this modified learning algorithm in practice? 2. Does this idea of propagating uncertainty not naturally combine with that of distributional RL, in that stochasticity of the reward might contribute to uncertainty about the Q-value? Typos, etc.: * Line 124, "... when experimenting a transition ..." ---- UPDATE: After reading the rebuttal, I have raised my score. I appreciate that the authors have included additional experiments and have explained further the difference between Definition 5.1 and the algorithm used in practice, as well as the distinction between the current work and distributional RL. I hope that all three of these additions will make their way into the final paper. | 3. The Atari game result (Section 7.2) is limited to a single game and a single baseline. It is very hard to interpret this. Less major weaknesses: |
Akk5ep2gQx | EMNLP_2023 | 1. The experiment section could be improved. For example, it would be better to carry out a significance test on the human evaluation results. It would also be beneficial to compare the proposed method with some of the most recent LLMs.
2. The classifier for determining attributes using only parts of the sentence may not perform well. Specifically, I am wondering what the performance of the attribute classifier obtained using Eq. 2 and Eq. 7 is.
3. Some of the experimental results could be explained in more detail. For example, the authors observe that "Compared to CTRL, DASC has lower Sensibleness but higher Interestingness", but why? Is that because DASC is bad at exhibiting Sensibleness? Similar results are also observed in Table 1. | 1. The experiment section could be improved. For example, it would be better to carry out a significance test on the human evaluation results. It would also be beneficial to compare the proposed method with some of the most recent LLMs. |
ICLR_2023_2283 | ICLR_2023 | 1. The symbols in Section 4.3 are not very clearly explained. 2. This paper only experiments on very small time steps (e.g., 1, 2) and lacks experiments on slightly larger time steps (e.g., 4, 6) that would allow better comparisons with other methods. I think it is necessary to analyze the impact of the time step on the method proposed in this paper. 3. Experimental results on ImageNet are lacking to verify the method.
Questions: 1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13? 2. Is there any use of recurrent connections in the experiments in this paper? Apart from appendix A.5, I do not see the recurrent connections. | 1. Fig. 3 e. Since the preactivation values of two networks are the same membrane potentials, their output cosine similarity will be very high. Why not directly illustrate the results of the latter loss term of Eqn 13? |
NIPS_2022_1825 | NIPS_2022 | The major weakness of this paper is, in my eyes, the superficial modeling of logical relations. Deductive, inductive, and abductive reasoning types are only manifestations of logical reasoning; they do not get at its essence. Based on this classification, this paper only models consequential relations at the sentence level, which ignores a broad range of other logical reasoning expressions such as negation, disjunction, and conjunction.
The so-called automatic identification of logical reasoning phenomena in fact requires a lot of manual labor. The authors have to pre-define words/expressions that indicate conclusions or premises, which greatly limits the universality of the proposed approach.
The authors only report the experimental results on the development set, which makes the results less convincing. Also, the performance improvement on the generation datasets is less noticeable than on the classification datasets. It's hard to know whether the model is really behaving as advertised rather than just adding capacity on top of the baselines.
In the related work section, especially Pre-training for Reasoning Ability Improvement, the authors describe several papers on other relation types that LogiGAN is not compared with. This makes it difficult to place this work in comparison with prior literature and understand the novelty of the proposed framework. Also, the key contribution of this paper could be blurred: the improvement might come from the GAN-like architecture rather than the pre-training tasks or something else.
I do not see any open-source statements or code submissions in the supplementary materials, so I have concerns about the reproducibility of this paper.
Minor Issues:
Line 19: “Learning without thought is labor lost” -> “learning without thinking is labor lost”
Caption of table 1: “development setsjudgement” -> “development set judgment”
Overall, I'm borderline on this paper. I think the paper tests three aspects: 1) GAN-style training in language models. 2) predict the masked-out logical statements as the pre-training task. 3) detection heuristics for logical reasoning phenomena. It certainly demonstrates great experimental results compared with vanilla T5 baselines. But none of them alone is novel enough. The most important part in my eyes, leveraging logical reasoning phenomena for pre-training, is after all a sentence-level logical relation extraction, whose effectiveness is already demonstrated in [1].
[1] Huang, Yinya, et al. "DAGN: Discourse-aware graph network for logical reasoning." NAACL 2021. | 2) predict the masked-out logical statements as the pre-training task. |
NIPS_2020_1602 | NIPS_2020 | There are some questions/concerns, however. 1. Haven't you tried to set hyperparameters for the baseline models via cross-validation (i.e. the same method you used for your own model)? Setting them to their default values (even taken from other papers) may carry a risk of unfair comparison against yours. I do not think this is the case, but I would recommend that the authors carry out the corresponding experiments. 2. It is unclear to me why the performance of DNN+MMA becomes worse than vanilla DNN when lambda becomes small. See Fig. 3-4. I would expect it to approach the vanilla methods from above, not from below. | 2. It is unclear to me why the performance of DNN+MMA becomes worse than vanilla DNN when lambda becomes small. See Fig. 3-4. I would expect it to approach the vanilla methods from above, not from below. |
ICLR_2022_629 | ICLR_2022 | The motivation for all V&L experiments looks weak because it stems from the following single observation “if cast VQA 2.0 into a zero-shot image-to-text retrieval task, we only observe chance performance. Thus, we propose to integrate CLIP’s visual encoder with previous V&L models.” The prompt engineering considered for VQA is not persuasive and does not imply the same straightforward introduction of CLIP-based backbones to VLN / Image Captioning tasks. Also, I think it is necessary to explain the “chance performance” for VQA, at least for the binary yes/no question type. E.g., does 0.037 mean that one can simply flip the model’s answers?
“Unfreezing” the Visual Backbone helps if done correctly – it is a well-known fact supported by BUTD-Res101 “unfreezing” experiment in the paper. However, CLIP-Res50 benefits more from pre-training than BUTD-Res101. This observation requires further elaboration.
Most of the models considered in their respective tasks seem outdated as of 2021 (e.g., Pythia 2019, MCAN 2019). At the same time, the authors avoid direct comparison to the methods that require large-scale “supervised” or “self-supervised” pre-training, e.g., VLN-BERT, although CLIP itself can be considered as such.
The authors may want to include findings on localization ability of transformers from the following papers in the later versions of the manuscript: - Do Vision Transformers See Like Convolutional Neural Networks? arxiv 2021 - Dynamic Head: Unifying Object Detection Heads with Attentions, CVPR 2021 - Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions, ICCV 2021 | - Do Vision Transformers See Like Convolutional Neural Networks? arxiv 2021 - Dynamic Head: Unifying Object Detection Heads with Attentions, CVPR 2021 - Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions, ICCV 2021 |
NIPS_2016_355 | NIPS_2016 | - Unfortunately, one major take-away here isn't particularly inspiring: e.g. there's a trade-off between churn and accuracy. - Also, the thought of having to train 30-40 models to burn in in order to test this approach isn't particularly appealing. Another interesting direction for dealing with churn could be unlabeled data, or applying via constraints: e.g. if we are willing to accept X% churn, and have access to unlabeled target data, what's the best way to use that to improve the stability of our model? | - Unfortunately, one major take-away here isn't particularly inspiring: e.g. there's a trade-off between churn and accuracy. |
PyJ78pUMEE | EMNLP_2023 | - There's an over-reliance on the LLM for the automated scoring to be trusted, given that LLMs have complex biases and are sensitive to prompts (and their order).
- It is not clear how the method will perform on long conversations (the dialog datasets used for prompt and demonstration selection seem to contain short conversations)
- The writing of the paper can be simplified - the abstract is too long and does not convey the findings well.
- Fig. 1 can also be drawn better to show the processing pipeline (prompt generation and manual check, demonstration selection with ground truth scores, and automatic scoring), along with showing where model training is being used to optimize the selection modules. | - Fig. 1 can also be drawn better to show the processing pipeline (prompt generation and manual check, demonstration selection with ground truth scores, and automatic scoring), along with showing where model training is being used to optimize the selection modules. |
ICLR_2022_497 | ICLR_2022 | I have the following questions, to which I hope the authors can respond in the rebuttal. If I missed something in the paper, I would appreciate it if the authors could point it out.
Main concerns: - In my understanding, the best scenarios are those generated from the true distribution P (over the scenarios), and therefore, the CVAE essentially attempts to approximate the true distribution P. In such a sense, if the true distribution P is independent of the context (which is the case in the experiments in this paper), I do not see the rationale for having the scenarios conditioned on the context, which in theory does not provide any statistical evidence. Therefore, the rationale behind CVAE-SIP is not clear to me. If the goal is not to approximate P but to solve the optimization problem, then having the objective values involved as a prediction target is reasonable; in this case, having the context involved is justified because it can have an impact on the optimization results. Thus, CVAE-SIPA to me is a valid method. - While reducing the scenarios from 200 to 10 is promising, the quality of optimization has decreased a little bit. On the other hand, in Figure 2, using K-medoids with K=20 can perfectly recover the original value, which suggests that K-medoids is a decent solution and complex learning methods are not necessary for the considered settings. In addition, I am also wondering about the performance when the 200 scenarios (or a certain number of random scenarios from the true distribution) are directly used as the input of CPLEX. Furthermore, to justify the performance, it is necessary to provide information about robustness as well as to identify the case where simple methods are not satisfactory (such as larger graphs).
Minor concerns: - Given the structure of the proposed CVAE, the generation process takes as input z and c, where z is derived from w. This suggests that the proposed method requires us to know a collection of scenarios from the true distribution. If this is the case, it would be better to have a clear problem statement in Sec 3. Based on such an understanding, I am wondering about the process of generating the scenarios used for getting the K representatives - it would be great if code like Alg 1 were provided. - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). - The structure of the encoder is not clear to me. The notation q_{\phi} is used to denote two different functions q(z \mid w, D) and q(c \mid D). Does that mean they are the same network? - It would be better to experimentally justify the choice of the dimension of c and z. - It looks to me that the proposed methods are designed for graph-based problems, while two-stage integer programming does not have to involve graph problems in general. If this is the case, it would be better to clearly indicate the scope of the considered problem. Before reaching Sec 4.2, I was thinking that the paper could address general settings. - The paper introduces CVAE-SIP and CVAE-SIPA in Sec 5 -- after discussing the training methods, so I am wondering if they follow the same training scheme. In particular, it is not clear to me what is meant by "append objective values to the representations" at the beginning of Sec 5. - The approximation error is defined as the gap between the objective values, which is somewhat ambiguous unless one has seen the values in the table. It would be better to provide a mathematical characterization. | - I would assume that the performance is closely related to the number of scenarios used for training, and therefore, it is interesting to examine the performance with different numbers of scenarios (which is fixed as 200 in the paper). |
K8Mbkn9c4Q | ICLR_2024 | While I do like the general underlying idea, there are several severe weaknesses present in this work – leading me to lean towards rejection of the manuscript in its current form. The two main areas of concern are briefly listed here, with details explained in the ‘Questions’ part:
### 1) Lacking quality of the “Domain Transformation” part
This is arguably the KEY part of the paper, and it needs significant improvement on two points: the underlying intuition/motivation/justification, as well as technical correctness and clarity. There are several fundamental points that are unclear to me and require significant improvement and clarification; this applies both to clarity of the writing and, more importantly, to the quality of the approach and its justifications/underlying motivations.
Please see the “Questions” part for details.
### 2) Lacking detail in experiment description:
Description of experimental details would significantly benefit from increased clarity to allow the user to better judge the results, which is very difficult in the manuscript’s current state; See "Questions" for further details. | 2) Lacking detail in experiment description: Description of experimental details would significantly benefit from increased clarity to allow the user to better judge the results, which is very difficult in the manuscript’s current state; See "Questions" for further details. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | * l.183: Much faster approximations than Chebyshev polynomials exist for the evaluation of kernel density estimates, especially in low-dimensional spaces (e.g., based on fast multipole methods). |
ICLR_2023_149 | ICLR_2023 | Weakness: 1. Some recent RNN-based latent models, e.g., LFADS and Oerich 2020, were overlooked in the current manuscript. It would be great to discuss those. 2. It is not clear to me whether such a model could generate novel knowledge or testable hypotheses about neuron data. | 2. It is not clear to me whether such a model could generate novel knowledge or testable hypotheses about neuron data. |
NIPS_2019_932 | NIPS_2019 | weakness is that some of the main results come across as rather simple combinations of existing ideas/results, but on the other hand the simplicity can also be viewed as a strength. I don't find the Experiments section essential, and would have been equally happy to have this as a purely theory paper. But the experiments don't hurt either. My remaining comments are mostly quite minor – I will put a * next to those where I prefer a response, and any other responses are optional: [*] p2: Please justify the claim "optimal number of measurements" - in particular highlighting the k*log(n/k) + 1/eps lower bound from [1] and adding it to Table 1. As far as I know, it is an open problem as to whether the k^{3/2} term is unavoidable in the binary setting - is this correct? (If not, again please include a citation and add to Table 1) - p2: epsilon is used without being defined (and also the phrase "approximate recovery") - p4: Avoid the uses of the word "necessary", since these are only sufficient conditions. Similarly, in Lemma 3 the statement "provided that" is strictly speaking incorrect (e.g., m = 0 satisfies the statement given). - The proof of Lemma 1 is a bit confusing, and could be re-worded. - p6: The terminology "rate", "relative distance", and notation H_q(delta) should not be assumed familiar for a NeurIPS audience. - I think the proof of Theorem 10 should be revised. Please give brief explanations for the steps (e.g., the step after qd = (…) follows by re-arranging the choice of n, etc.) [*] In fact, I couldn't quite follow the last step – substituting q=O(k/alpha) is clear, but why is the denominator also proportional to alpha/k? (A definition of H_q would have helped here) - Lemma 12: Please emphasize that m is known but x is not – this seems crucial. - For the authors' interest, there are some more recent refined bounds on the "for-each" setting such as "Limits on Support Recovery with Probabilistic Models: An Information-Theoretic Framework" and "Sparse Classification: A Scalable Discrete Optimization Perspective", though since the emphasis of this paper is on the "for-all" setting, mentioning these is not essential. Very minor comments: - No need for capitalization in "Group Testing" - Give a citation when group testing first mentioned on p3 - p3: Remove the word "typical" from "the typical group testing measurement", I think it only increases ambiguity/confusion. - Lemma 1: Is "\cdot" an inner product? Please make it clear. Also, should it be mx or m^T x inside the sign(.)? - Theorem 8: Rename delta to beta to avoid inconsistency with delta in Theorem 7. Also, is a "for all d" statement needed? - Just before Section 4.2, perhaps re-iterate that the constructions for [1] were non-explicit (hence highlighting the value of Theorem 10). - p7: "very low probability" -> "zero probability" - p7: "This connection was known previously" -> Add citation - p10: Please give a citation for Pr[sign = sign] = (… cos^-1 formula …). === POST-REVIEW COMMENTS: The responses were all as I had assumed them to be when stating my previous score, so naturally my score is unchanged. Overall a good paper, with the main limitation probably being the level of novelty. | - p7: "very low probability" -> "zero probability" - p7: "This connection was known previously" -> Add citation - p10: Please give a citation for Pr[sign = sign] = (… cos^-1 formula …).
=== POST-REVIEW COMMENTS: The responses were all as I had assumed them to be when stating my previous score, so naturally my score is unchanged. Overall a good paper, with the main limitation probably being the level of novelty. |
ARR_2022_209_review | ARR_2022 | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date?
2. There isn't one clear aggregation strategy that gives consistent performance gains across all tasks. So it is hard for someone to implement this approach in practice.
1. Experimental setup details: Can you explain how you pick which notes from the patient's EHR you use as input and how far away the outcomes are from the last note date? Also, how do you select the patient population for the experiments? Do you use all patients and their admissions for prediction? Is the test set temporally split or split according to different patients?
2. Is precision more important or recall? You seem to consider precision more important in order to not raise false alarms. But isn't recall also important since you would otherwise miss out on reporting at-risk patients?
3. You cannot refer to appendix figures in the main paper (line 497). You should either move the whole analysis to the appendix or move up the figures.
4. How do you think your approach would compare to/work in line with other inputs such as structured information? AUCROC seems pretty high in other models in the literature.
5. Consider explaining the tasks and performance metrics when you call them out in the abstract in a little more detail. It's a little confusing now since you mention mortality prediction and say precision@topK, which isn't a regular binary classification metric. | 1. The task setup is not described clearly. For example, which notes in the EHR (only the current admission or all previous admissions) do you use as input and how far away are the outcomes from the last note date? |
ICLR_2022_1216 | ICLR_2022 | of the paper: Overall the paper is reasonably well-written but the writing can improve in certain aspects. Some comments and questions below. 1. It is not apparent to the reader why the authors choose an asymptotic regime to focus on. My understanding is that the primary reason is easier theoretical tractability. It would help the reader to know why the paper focuses on the asymptotic setting. 2. It is unclear in the write-up if sample-wise descent occurs only in the over-parameterized regime or not. Pointing this explicitly in the place where you list your contributions would help. More broadly, it is important to have a discussion around these regimes in the main body and also a discussion around how they are defined in the asymptotic regime would help. 3. The paper is written in a very technical manner with very little proof intuition provided in the main body. It would benefit from having more intuition on the tools used and the reasons the main theorems hold. 4. Given that prior work already theoretically shows that sample-wise multiple descent can occur in linear regression, the main contribution of the paper appears to be the result that optimal regularization can remove double descent even in certain anisotropic settings. If this is not the case, the paper should do a better job of highlighting the novelty of their result in relation to prior results.
I am not too familiar with the particular techniques and tools used in the paper and could not verify the claims but they seem correct. | 4. Given that prior work already theoretically shows that sample-wise multiple descent can occur in linear regression, the main contribution of the paper appears to be the result that optimal regularization can remove double descent even in certain anisotropic settings. If this is not the case, the paper should do a better job of highlighting the novelty of their result in relation to prior results. I am not too familiar with the particular techniques and tools used in the paper and could not verify the claims but they seem correct. |
NIPS_2016_43 | NIPS_2016 | Weakness: 1. The organization of this paper could be further improved, for example by giving more background knowledge on the proposed method and bringing the description of the related literature forward. 2. It would be good to see some failure cases and a related discussion. | 1. The organization of this paper could be further improved, for example by giving more background knowledge on the proposed method and bringing the description of the related literature forward. |
9Ax0pyaLgh | EMNLP_2023 | 1. The authors are encouraged to use other metrics to evaluate the results (e.g. BERTScore).
2. Often it is not sufficient to show automatic evaluation results. The authors do not show any human evaluation results and do not even perform a case study or a proper error analysis. This does not reflect well on the qualitative aspects of the proposed model.
3. It is difficult to understand the methodology without Figure 1. Parts of Section 2 should be written in alignment with Figure 1, and the authors are expected to follow a step-by-step description of the proposed method. (See questions to authors.) | 1. The authors are encouraged to use other metrics to evaluate the results (e.g. BERTScore). |
NIPS_2018_25 | NIPS_2018 | - My understanding is that R,t and K (the extrinsic and intrinsic parameters of the camera) are provided to the model at test time for the re-projection layer. Correct me in the rebuttal if I am wrong. If that is the case, the model will be very limited and it cannot be applied to general settings. If that is not the case and these parameters are learned, what is the loss function? - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. - "semantic" segmentation is not low-level since the categories are specified for each pixel so the statements about semantic segmentation being a low-level cue should be removed from the paper. - During evaluation at test time, how is the 3D alignment between the prediction and the groundtruth found? - Please comment on why the performance of GTSeeNet is lower than that of SeeNetFuse and ThinkNetFuse. The expectation is that groundtruth 2D segmentation should improve the results. - line 180: Why not using the same amount of samples for SUNCG-D and SUNCG-RGBD? - What does NoSeeNet mean? Does it mean D=1 in line 96? - I cannot parse lines 113-114. Please clarify. | - Another issue of the paper is that the disentangling is done manually. For example, the semantic segmentation network is the first module in the pipeline. Why is that? Why not something else? It would be interesting if the paper did not have this type of manual disentangling, and everything was learned. |
YRJDZYGmAZ | ICLR_2024 | Many significant problems are found in the current format of the paper that prevents the understanding of the concept. The problem includes but is not limited to confusing writing, inconsistency of notations, expressions of novelty, experimental presentation etc. It is highly recommended that the authors to re-write the paper, re-organize the content and better polish the text for the reader to better understand.
The major problems:
* Novelty is limited:
- The proposed method as in Sec. 3.2 is very similar to DAPL but extended to multi-source scenarios. The only difference is to introduce an additional [DOM]. Note that DAPL has not been peer-reviewed.
- The motivation for domain-aware mixup is confusing. I cannot be convinced and do not understand, in the current writing, how it can enforce learning domain-specific knowledge. The corresponding literature regarding mixup in the feature space is also not referenced and discussed (e.g. [1]).
- The description for deriving the domain-aware mixup is confusing. I assume the authors are trying to develop a method so that the learned prompt shares the knowledge between source and target domains (depending on Eq. 9)?
* Writing:
- In the first sentence of Abstract: “large vision-language models … strong performance in MFDA”. There is no such reference applying large VL models in MFDA. In fact, MFDA is a rarely studied problem.
- The description of the problem setting (MFDA) should be clearly explained at the beginning (abstract or introduction) so that the reader can refer better to the limitations of prior works.
- Paragraphs 1 & 2 in the introduction: the connection is missing, and ‘prompt learning’ suddenly jumps in, making the concept broken.
- Fig. 1 is not referred to in the paper.
- Related work: after describing the related prior works of each field, it's suggested to write a couple of sentences to distinguish between them to show the novelty of the proposed method.
- The description of the MFDA setting is very confusing in the first paragraph of the Method Section: "single target domain with \textbf{sparse} labels", "…target distribution p_T(x, y) with label observation…" is mentioned, but the notation for target domain \tau is unlabeled. In the original MFDA paper (Yue et al., 2021a), the target data is unlabeled. What about the unlabeled data in source domains? Are they used during training (as in (Yue et al., 2021a))? It is very confusing that the problem setting description differs significantly from that in (Yue et al., 2021a).
- There is significant text overlapping with DAPL in the preliminary sections of both papers (only with some rewording..). It should be strictly prohibited.
- What is [DOM] in Eq. 4? I assume it is a domain ID? And I assume [DOM] is the non-learnable component near the description of Eq. 4?
- Notation: what is subscript d in Eq. 4 and superscript d in Eq. 5? They are not explained in the text. I assume they are the domain IDs?
- What does it mean by ‘d*k categories’ as in the sentence after Eq. 5?
- Eq. 6 is very confusing. For the outer summation on d \in {s, u}, what is the purpose of computing the similarity between the target domain prompt and source image features? How does the learning on unlabeled target data is realized?
- What is inter-source domain mixup? In the current format of writing, I don’t understand why maintaining it will harm the representation learning on the target domain. The motivation is weak.
- In the second paragraph on page 6, the notation of target domain data y_t is different from Section 3.
- In Fig. 3, letters v and f are used to represent the features of “painting” and “real”. But v is used to represent text prompts as in Eq. 3
- The feature mix-up formulation in Fig. 3 is different than Eq. 8. One uses \gamma and another one uses \lambda? and the weighting is different?
- It is really confusing that the letter “t” is used to refer to text and target domain.
- What are D^s and D^u in Eq. 10? They are never defined. I assume they are source and target domains, which is inconsistent with what is described in the problem setting. The problem setting is borrowed from (Yue et al., 2021a). But Eq. 10 is copied from DAPL paper. Please keep everything consistent throughout the paper. Also, Eq. 9 requires source data as well, why only D^u is passed to L_u as in Eq. 10?
- The notations for loss functions in Eq. 7, 9, and 10 should be consistent.
- Table 5 in the last sentence of Page 8 should be Figure 5.
- The experimental setting/comparison is very confusing. What is “single best”, which can be both setting and method as in Table 1&2? What is source combined? Which rows in Tables 1&2 refer to the MFDA? How come the “Large model” in Table 1&2 can be the setting, it should be the model architecture.
- For Figures 6 & 7, it is hard to see the differences. It is suggested to use a table to report the numbers.
[1] Adversarial Domain Adaptation with Domain Mixup. AAAI 2020. | - The description of the MFDA setting is very confusing in the first paragraph of the Method Section: "single target domain with \textbf{sparse} labels", "…target distribution p_T(x, y) with label observation…" is mentioned, but the notation for target domain \tau is unlabeled. In the original MFDA paper (Yue et al., 2021a), the target data is unlabeled. What about the unlabeled data in source domains? Are they used during training (as in (Yue et al., 2021a))? It is very confusing that the problem setting description differs significantly from that in (Yue et al., 2021a). |
NIPS_2016_450 | NIPS_2016 | . First of all, the experimental results are quite interesting, especially that the algorithm outperforms DQN on Atari. The results on the synthetic experiment are also interesting. I have three main concerns about the paper. 1. There is significant difficulty in reconstructing what is precisely going on. For example, in Figure 1, what exactly is a head? How many layers would it have? What is the "Frame"? I wish the paper would spend a lot more space explaining how exactly bootstrapped DQN operates (Appendix B cleared up a lot of my queries and I suggest this be moved into the main body). 2. The general approach involves partitioning (with some duplication) the samples between the heads with the idea that some heads will be optimistic and encouraging exploration. I think that's an interesting idea, but the setting where it is used is complicated. It would be useful if this was reduced to (say) a bandit setting without the neural network. The resulting algorithm will partition the data for each arm into K (possibly overlapping) sub-samples and use the empirical estimate from each partition at random in each step. This seems like it could be interesting, but I am worried that the partitioning will mean that a lot of data is essentially discarded when it comes to eliminating arms. Any thoughts on how much data efficiency is lost in simple settings? Can you prove regret guarantees in this setting? 3. The paper does an OK job at describing the experimental setup, but still it is complicated with a lot of engineering going on in the background. This presents two issues. First, it would take months to re-produce these experiments (besides the hardware requirements). Second, with such complicated algorithms it's hard to know what exactly is leading to the improvement. For this reason I find this kind of paper a little unscientific, but maybe this is how things have to be. I wonder, do the authors plan to release their code? Overall I think this is an interesting idea, but the authors have not convinced me that this is a principled approach. The experimental results do look promising, however, and I'm sure there would be interest in this paper at NIPS. I wish the paper was more concrete, and also that code/data/network initialisation can be released. For me it is borderline. Minor comments: * L156-166: I can barely understand this paragraph, although I think I know what you want to say. First of all, there /are/ bandit algorithms that plan to explore. Notably the Gittins strategy, which treats the evolution of the posterior for each arm as a Markov chain. Besides this, the figure is hard to understand. "Dashed lines indicate that the agent can plan ahead..." is too vague to be understood concretely. * L176: What is $x$? * L37: Might want to mention that these algorithms follow the sampled policy for awhile. * L81: Please give more details. The state-space is finite? Continuous? What about the actions? In what space does theta lie? I can guess the answers to all these questions, but why not be precise? * Can you say something about the computation required to implement the experiments? How long did the experiments take and on what kind of hardware? * Just before Appendix D.2. "For training we used an epsilon-greedy ..." What does this mean exactly? You have epsilon-greedy exploration on top of the proposed strategy? | * L37: Might want to mention that these algorithms follow the sampled policy for awhile. |
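As an illustration of the bandit reduction asked about in point 2 of the review above, here is a minimal NumPy-only sketch; every constant (the number of heads K, the Bernoulli arm means, the 1/2 inclusion probability, the optimistic pseudo-observation) is an illustrative assumption, not something taken from the paper. Each arm keeps K overlapping bootstrap sub-samples of its rewards, one head is drawn uniformly at random at every step, and the arm with the best empirical mean under that head is pulled.

```python
import numpy as np

rng = np.random.default_rng(0)
true_means, K, horizon = [0.3, 0.5, 0.7], 10, 2000

# Per-arm, per-head running sums and counts; initialised with one optimistic
# pseudo-observation of reward 1 so that empty heads still get explored.
sums = np.ones((len(true_means), K))
counts = np.ones((len(true_means), K))
pulls = np.zeros(len(true_means), dtype=int)

for _ in range(horizon):
    k = rng.integers(K)                              # sample one head per step
    arm = int(np.argmax(sums[:, k] / counts[:, k]))  # greedy w.r.t. that head
    reward = float(rng.random() < true_means[arm])   # Bernoulli arm
    pulls[arm] += 1
    # Each observation is added to every head independently with prob. 1/2,
    # so the K sub-samples overlap (a double-or-nothing style bootstrap).
    mask = rng.random(K) < 0.5
    sums[arm, mask] += reward
    counts[arm, mask] += 1

print(pulls)  # the arm with mean 0.7 should receive most pulls
```

The data-efficiency worry raised in the review is visible here as well: on average each head only sees about half of an arm's observations.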
NIPS_2017_390 | NIPS_2017 | - I am curious how the performance varies quantitatively if the training "shot" is not the same as the "test" shot: in realistic applications, knowing the "shot" beforehand is a fairly strong and impractical assumption.
- I find the zero-shot version and the connection to density estimation a bit distracting to the main point of the paper, which is that one can learn to produce good prototypes that are effective for few-shot learning. However, this is more an aesthetic argument than a technical one. | - I find the zero-shot version and the connection to density estimation a bit distracting to the main point of the paper, which is that one can learn to produce good prototypes that are effective for few-shot learning. However, this is more an aesthetic argument than a technical one. |
NIPS_2016_321 | NIPS_2016 | #ERROR! | - The restriction to triplets (or a sliding window of length 3) is quite limiting. Is this a fundamental limitation of the approach or is an extension to longer subsequences (without a sliding window) straightforward? |
GFgPmhLVhC | EMNLP_2023 | 1. Novelty seems incremental to me. What are the ways in which this paper differs from https://aclanthology.org/2021.findings-acl.57.pdf? Is it just applying a very similar methodology to a new task?
2. Performance gains seem small. There should be a p-test or at least confidence intervals to check statistical significance. | 1. Novelty seems incremental to me. What are the ways in which this paper differs from https://aclanthology.org/2021.findings-acl.57.pdf? Is it just applying a very similar methodology to a new task? |
82VzAtBZGk | ICLR_2025 | The problem formulation is incomplete. The paper does not define the safety properties expected from the RL agent.
- Lack of theoretical results. This paper provides only empirical results to support its claims.
- The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear.
- The presentation of DDPG-Lag as a constrained RL algorithm is imprecise, as it uses a fixed weight for the costs, which amounts to simple reward engineering. In general, with a Lagrangian relaxation, this weight should be adjusted online to ensure the accumulated cost stays below a predefined threshold [1] (see the sketch after the references below).
- The evaluation in CMDPs is inconsistent. These approaches solve different problems where a predefined accumulated cost is allowed.
- Weak baseline. From the results in Figure 10, it is clear that Tabular Shield does not recognize any unsafe state-action pairs, making it an unsuitable baseline. This is not surprising considering how the state-action space is discretized. Perhaps it is necessary to finetune the discretization of this baseline. Alternatively, it would be more suitable to consider stronger baselines, such as the accumulating safety rules [2].
**References**
- [1] Ray, A., Achiam, J., and Amodei, D. (2019). *Benchmarking safe exploration in deep reinforcement learning*. <https://github.com/openai/safety-gym>
- [2] Shperberg, S. S., Liu, B., Allievi, A., and Stone, P. (2022). A rule-based shield: Accumulating safety rules from catastrophic action effects. *CoLLAs*, 231–242. <https://proceedings.mlr.press/v199/shperberg22a.html> | - The results are presented in a convoluted way. In particular, the results disregard the safety violations of the agent in the first 1000 episodes. The reason for presenting the results in this way is unclear. |
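To spell out the online adjustment referred to in the DDPG-Lag point above (and benchmarked in reference [1]), here is a minimal sketch of a dual-ascent multiplier update; the learning rate, the cost limit, and the fake per-episode costs are illustrative assumptions, not values from the paper under review.

```python
def update_multiplier(lmbda, episode_cost, cost_limit, lr=0.05):
    """Dual ascent: increase lambda while the accumulated cost exceeds the
    limit, and let it decay back towards 0 once episodes stay under it."""
    return max(0.0, lmbda + lr * (episode_cost - cost_limit))

def penalized_reward(reward, cost, lmbda):
    # The policy is trained on this signal; with a fixed lmbda it reduces to
    # the plain reward engineering criticized above.
    return reward - lmbda * cost

lmbda = 0.0
for episode_cost in [30.0, 28.0, 24.0, 20.0]:  # illustrative episode costs
    lmbda = update_multiplier(lmbda, episode_cost, cost_limit=25.0)
    print(round(lmbda, 3))
```

The contrast with a fixed weight is exactly the reviewer's point: here the penalty weight tracks constraint violation instead of being a hand-tuned constant.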
NIPS_2019_1180 | NIPS_2019 | --- There are two somewhat minor weaknesses: presentation and some missing related work. The main points in this paper can be understood with a bit of work, but there are lots of minor missing details and points of confusion. I've listed them roughly in order, with the most important first: * What factors varied in order to compute the error bars in figure 2? Were different random initializations used? Were different splits of the dataset used? How many samples do the error bars include? Do they indicate standard deviation or standard error? * L174: How exactly does the reactive baseline work? * L185: What does "without agent embeddings" mean precisely? * L201: More details about this metric are needed. I don't know exactly what is plotted on the y axis without reading the paper. Before looking into the details I'm not even sure whether higher or lower is good without looking into the details. (Does higher mean more information or does lower mean more information?) * Section 3: This would be much clearer if an example were used to illustrate the problem from the beginning of the section. * Will code be released? * L162: Since most experiments share perception between speaker and listener it would be much clearer to introduce this as a shared module and then present section 4.3 as a change to that norm. * L118: To what degree is this actually realized? * L84: It's not information content itself that will suffer, right? * L77: This is unnecessary and a bit distracting. * L144: Define M and N here. * L167: What is a "sqeuence of episodes" here? Are practice and evaluation the two types of this kind of sequence? Missing related work (seems very related, but does not negate this work's novelty): * Existing work has tried to model human minds, especially in robotics. It looks like [2] and [3] are good examples. The beginning of the related work in [1] has more references along these lines. This literature seems significantly different from what is proposed in this paper because the goals and settings are different. Only the high level motivation appears to be similar. Still, the literature seems significant enough (on brief inspection) to warrant a section in the related work. I'm not very familiar with this literature, so I'm not confident about how it relates to the current paper. [1]: Chandrasekaran, Arjun et al. "It Takes Two to Tango: Towards Theory of AI's Mind." CVPR 2017 [2]: Butterfield, Jesse et al. "Modeling Aspects of Theory of Mind with Markov Random Fields." International Journal of Social Robotics 1 (2009): 41-51. [3]: Warnier, Matthieu et al. "When the robot puts itself in your shoes. Managing and exploiting human and robot beliefs." 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication (2012): 948-954. Suggestions --- * L216: It would be interesting to realize this by having the speaker interact with humans since the listeners are analogous to the role humans take in the high level motivation. That would be a great addition to this or future work. Final Justification --- Clarity - This work could significantly improve its presentation and add more detail, but it currently is clear enough to get the main idea. Quality - Despite the missing details, the experiments seem to be measuring the right things and support very clear conclusions. Novelty - Lots of work uses reference games with multiple agents, but I'm not aware of attempts to specifically measure and model other agents' minds.
Significance - The work is a useful step toward agents with a theory of mind because it presents interesting research directions that didn't exist before. Overall, this is a pretty good paper and should be accepted. Post-rebuttal Updates --- After reading the reviews and the rebuttal this paper seems like a clear accept. After discussion with R3 I think we all roughly agree. The rebuttal addressed all my concerns except the minor one listed below satisfactorily. There is one piece R3 and I touched on which is still missing. I asked about the relation to meta-learning and there was no response. More importantly, R3 asked about a comparison to a practice-stage only reward, which would show the importance of the meta-learning aspect of the reward. This was also not addressed satisfactorily, so it's still hard to understand the role of practice/evaluation stages in this work. This would be nice to have, but rest of the paper provides a valuable contribution without it. Though it's hard to tell how presentation and related work will ultimately be addressed in the final version, the rebuttal goes in the right direction so I'll increase my score as indicated in the Improvements section of my initial review. | * L167: What is a "sqeuence of episodes" here? Are practice and evaluation the two types of this kind of sequence? Missing related work (seems very related, but does not negate this work's novelty): |
NIPS_2019_175 | NIPS_2019 | 1. Weak novelty. Addressing domain-shift via domain-specific moments is not new. It was done among others by Bilen & Vedaldi, 2017, "Universal representations: The missing link between faces, text, planktons, and cat breeds", although this paper may have made some better design decisions about exactly how to do it. 2. Justification & analysis: A normalisation-layer based algorithm is proposed, but without much theoretical analysis to justify the specific choices. E.g.: Why is it exactly that gamma and beta should be domain-agnostic, but alpha should be domain-specific? 3. Positioning wrt AutoDial, etc.: The paper claims "parameter-free" as a strength compared to AutoDIAL, which has a domain-mixing parameter. However, this spin is a bit misleading. It removes one learnable parameter, but instead includes a somewhat complicated heuristic (Eq 5-7) governing transferability. It's not clear that replacing a single parameter (which is learned in AutoDIAL) with a complicated heuristic function (which is hand-crafted here) is a clear win. 4. The evaluation is a good start with comparing several base DA methods with and without the proposed TransferNorm architecture. It would be stronger if the base DA methods were similarly evaluated with/without the architectural competitors such as AutoDial and AdaBN that are direct competitors to TN. 5. English is full of errors throughout. "Seldom previous works", etc. ------ Update ----- The authors' response did a decent job of responding to the concerns. The paper could be reasonable to accept. I hope the authors can update the paper with the additional information from the response. | 4. The evaluation is a good start with comparing several base DA methods with and without the proposed TransferNorm architecture. It would be stronger if the base DA methods were similarly evaluated with/without the architectural competitors such as AutoDial and AdaBN that are direct competitors to TN. |
ARR_2022_123_review | ARR_2022 | 1) It uses different experiment settings (e.g., a 32k batch size, a beam size of 5) and does not mention some details (e.g., the number of training steps); these different settings may contribute to the performance, and the comparisons in Table 2 may not be sufficiently reliable.
2) it only compares with some weak baselines in Tables 3, 4 and 6. Though the approach can surpass the sentence-level model baseline, the naive document-to-document translation model and Zheng et al. (2020), these baselines seem weak, for example, Voita et al. (2019) achieve 81.6, 58.1, 72.2 and 80.0 for deixis, lexical cohesion, ellipsis (infl.) and ellipsis (VP) respectively with the CADec model, while this work only gets 64.7, 46.3, 65.9 and 53.0. It seems that there is still a large gap with the presented approach in these linguistic evaluations.
3) the multi-resolutional data processing approach may somehow increase the instance weight of the document-level data, and how this affects the performance is not studied.
1) It's better to adopt experiment settings consistent with previous work.
2) There is still a large performance gap compared to Voita et al. (2019) in the linguistic evaluations, while BLEU may not be able to reflect these document-level phenomena, and the linguistic evaluations are important. The reasons for the performance gap should be investigated and addressed.
3) It's better to investigate whether the model really leverages document-level contexts correctly; it may help to refer to this paper: Do Context-Aware Translation Models Pay the Right Attention? In ACL 2021. | 2) it only compares with some weak baselines in Tables 3, 4 and 6. Though the approach can surpass the sentence-level model baseline, the naive document-to-document translation model and Zheng et al. (2020), these baselines seem weak, for example, Voita et al. (2019) achieve 81.6, 58.1, 72.2 and 80.0 for deixis, lexical cohesion, ellipsis (infl.) and ellipsis (VP) respectively with the CADec model, while this work only gets 64.7, 46.3, 65.9 and 53.0. It seems that there is still a large gap with the presented approach in these linguistic evaluations. |
ARR_2022_269_review | ARR_2022 | - It is not clear for me about the novelty of the proposed methods. - The proposed method relies on the quality of translation systems. - I'm not sure whether the differences of some results are significant (see Table 1). - The differences in results in Table 2 are very small that make the interpretation of results rather difficult. Furthermore, it is then unclear which proposed methods are really effective.
- Did the authors run their experiments several times with different random initializations? | - The differences in results in Table 2 are very small that make the interpretation of results rather difficult. Furthermore, it is then unclear which proposed methods are really effective. |
vexCLJO7vo | EMNLP_2023 | 1. This paper aims to evaluate the performance of current LLMs on different temporal factors and selects three types of factors, including scope, order, and counterfactual. What is the rationale behind selecting these three types of factors, and how do they relate to each other?
2. More emphasis should be placed on prompt design. This paper introduces several prompting methods to address issues in MenatQA. Since different prompts may result in varying performance outcomes, it is essential to discuss how to design prompts effectively.
3. The analysis of experimental results is insufficient. For instance, the authors only mention that the scope prompting method shows poor performance on GPT-3.5-turbo, but they do not provide any analysis of the underlying reasons behind this outcome. | 3. The analysis of experimental results is insufficient. For instance, the authors only mention that the scope prompting method shows poor performance on GPT-3.5-turbo, but they do not provide any analysis of the underlying reasons behind this outcome. |
NIPS_2021_2445 | NIPS_2021 | and strengths in their analysis with sufficient experimental detail, which is admirable, but they could provide more intuition as to why other methods do better than theirs.
The claims could be better supported. Some examples and questions (if I did not miss anything):
Why is using normalization a problem for a network or a task (it can be thought of as part of the cosine distance)? How would Barlow Twins perform if its invariance term were replaced with a Euclidean distance?
Your method still uses 2048 as the batch size; I would not consider that small. For example, SimCLR uses examples in the same batch and its batch size ranges between 256 and 8192. Most of the methods you mentioned need an even lower batch size.
You mentioned not sharing weights as an advantage, but you use shared weights in your results, except in Table 4, in which the results degraded, as you mentioned. What stops the other methods from using different weights? It should be possible even though they have a covariance term between the embeddings; how much would their performance be affected compared with yours?
My intuition is that a proper design might be sufficient rather than separating variance terms.
- Do you have a demonstration or result showing that your model collapses less than other methods? In line 159, you mention that gradients become 0 and the model collapses; it is a good point. Is this commonly encountered, and did you observe it in your experiments?
- I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable.
- Variances of the results could be included to show the stability of the algorithms, since that is another claim in the paper (although "collapsing" shows it partly, it is a biased criterion since the other methods are not designed with var/cov terms in mind).
- How hard is it to balance these 3 terms?
- If one gathers the two batches from the two networks and computes the global batch covariance in this way, it includes both your terms and Barlow Twins' terms. Can anything be said based on this observation about which one is better and why? Significance:
Currently, the paper needs more solid intuition or analysis or better results to make an impact in my opinion. The changes compared with the prior work are minimal. Most of the ideas and problems in the paper are important, but they are already known.
The comparisons with previous work are valuable to the field; the authors could perhaps extend their experiments to more of the mentioned methods or other variants.
The authors did a great job of presenting their work's limitations, the fact that their results are in general not better than previous works, and their extensive analysis (tables). If they did a better job of explaining the reasons/intuitions in a more solid way, or included some theory if there is any, I would be inclined to give an accept. | - I am also not convinced by the idea that the images and their augmentations need to be treated separately; they can be interchangeable.
ICLR_2022_1014 | ICLR_2022 | 1. It seems to me that a very straightforward hypothesis about these two parts would be that the trivial part is what’s very simple, either highly consistent to what’s in the training set, or the images with very typical object pose in the center of the images; and for the impossible part, it might be the images with ambiguous labels, atypical object pose or position. I think the human test results would support this hypothesis, but I wonder whether the authors could provide more evidence to either prove or disprove this hypothesis. 2. The figure 6 is very confusing to me. The caption says that the right part is original ImageNet test set, but the texts on the image actually say it’s the left part. If the texts on the image are right, then the right panel is the consistency on the validation images between the two parts. If I understand the experiments correctly, these results are for models trained on ImageNet training set without the trivial or the impossible part and then tested on ImageNet validation set without the two parts. Although it’s good to see the lower consistency, it should be compared to the consistency between models trained on the whole ImageNet training set and tested on ImageNet validation set without the two parts, which I cannot find. Is the consistency lower because of the changed training process or the changed validation set? 3. It is also unclear how surprising we should be towards the consistency distribution, is this a result of an exponential distribution of the general “identification” difficulty (most images are simple, then less and less are more difficult)? | 1. It seems to me that a very straightforward hypothesis about these two parts would be that the trivial part is what’s very simple, either highly consistent to what’s in the training set, or the images with very typical object pose in the center of the images; and for the impossible part, it might be the images with ambiguous labels, atypical object pose or position. I think the human test results would support this hypothesis, but I wonder whether the authors could provide more evidence to either prove or disprove this hypothesis. |
NIPS_2017_370 | NIPS_2017 | - There is almost no discussion or analysis of the 'filter manifold network' (FMN), which forms the main part of the technique. Did the authors experiment with any other architectures for the FMN? How do the adaptive convolutions scale with the number of filter parameters? It seems that in all the experiments the number of input and output channels is small (around 32). Can the FMN scale reasonably well when the number of filter parameters is huge (say, 128 to 512 input and output channels, which is common in many CNN architectures)? (A rough parameter count is sketched after the suggestions below.)
- From the experimental results, it seems that replacing normal convolutions with adaptive convolutions is not always beneficial. In Table 3, ACNN-v3 (all adaptive convolutions) performed worse than ACNN-v2 (adaptive convolutions only in the last layer). So it seems that the placement of adaptive convolutions is important, but there is no analysis of or comment on this aspect of the technique.
- The improvement on image deconvolution is minimal, with CNN-X working better than ACNN when the whole dataset is considered. This shows that adaptive convolutions are not universally applicable even when side information is available. Also, there are no comparisons with state-of-the-art network architectures for digit recognition and image deconvolution. Suggestions:
- It would be good to move some visual results from the supplementary to the main paper. In the main paper, there are almost no visual results on crowd density estimation, which forms the main experiment of the paper. At present, there are 3 different figures illustrating the proposed network architecture. The authors could probably condense these to two and use that space for some visual results.
- It would be great if the authors could address some of the above weaknesses in the revision to make this a good paper.
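(Editorial back-of-the-envelope sketch of the FMN scaling question raised in the first weakness above, assuming the FMN emits the full filter tensor of a convolution through a fully-connected output layer of width $h$; the layer sizes are hypothetical, not taken from the paper.)

```latex
\#\text{filter params} = C_{\mathrm{in}} \cdot C_{\mathrm{out}} \cdot k^2,
\qquad
\#\text{FMN output-layer weights} \approx h \cdot C_{\mathrm{in}} \cdot C_{\mathrm{out}} \cdot k^2 .
```

For $C_{\mathrm{in}} = C_{\mathrm{out}} = 32$, $k = 3$, $h = 64$ this is roughly 0.59M weights, but for $C_{\mathrm{in}} = C_{\mathrm{out}} = 256$ it already exceeds 37M weights for a single adaptive layer, which is what the scaling question is getting at.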
Review Summary:
- Despite some drawbacks in terms of experimental analysis and the general applicability of the proposed technique, the paper has several experiments and insights that would be interesting to the community. ------------------
After the Rebuttal: ------------------
My concern with this paper is the insufficient analysis of the 'filter manifold network' architecture and of the placement of adaptive convolutions in a given CNN. The authors partially addressed these points in their rebuttal, promising to add the discussion to a revised version and deferring some other parts to future work.
With the expectation that the authors will revise the paper, and since the other reviewers are fairly positive about this work, I recommend this paper for acceptance. | - From the experimental results, it seems that replacing normal convolutions with adaptive convolutions is not always beneficial. In Table 3, ACNN-v3 (all adaptive convolutions) performed worse than ACNN-v2 (adaptive convolutions only in the last layer). So it seems that the placement of adaptive convolutions is important, but there is no analysis of or comment on this aspect of the technique.
Grj9GJUcuZ | EMNLP_2023 | Unfortunately, this paper is not well written, as there are numerous typos and grammatical errors throughout. Additionally, there is an unfinished sentence present (caption of table 1). Consequently, reading and comprehending the paper becomes quite challenging.
Moreover, there are also numerous imprecise uses of terminology. These misuses create confusion while reading and can hinder future work.
- [241] Eq. (2) involves 2xN _BERT structures_. ---> BERT representations
- [249] we respectively add more noise (+Noise) or reduce some noise (-Noise) from z...
-> we respectively add **more** noise (+Noise) or **less** noise (-Noise) **to** z...
- Is _feature corruption_ the commonly used name for this issue? This name is not mentioned by Zbontar et al. (2021).
The authors also did not mention whether the source code will be released publicly. This will make the reproduction process much more difficult. | - [241] Eq. (2) involves 2xN _BERT structures_. ---> BERT representations - [249] we respectively add more noise (+Noise) or reduce some noise (-Noise) from z... -> we respectively add **more** noise (+Noise) or **less** noise (-Noise) **to** z... |
NIPS_2020_686 | NIPS_2020 | - The objective function (1): I have two concerns about the definition of this objective: 1. If the intuitive goal consists of finding a set of policies that contains an optimal policy for every test MDP in S_{test}, I would rather evaluate the quality of \overline{\Pi} with the performance in the worst MDP. In other words, I would have employed the \min over S_{test} rather than the summation. With the summation we might select a subset of policies that are very good for the majority of the MDPs in S_{test} but very bad of the remaining ones and this phenomenon would be hidden by the summation but highlighted by the \min. 2. If no conditions on the complexity of \overline{\Pi} are enforced the optimum of (1) would be exactly \Pi, or, at least, the largest subset allowed. - Latent-Conditioned Policies: How is this different from considering a hyperpolicy that is used to sample the parameters of the policy, Like in Parameter-based Exploration Policy Gradients (PGPE)? Sehnke, Frank, et al. "Policy gradients with parameter-based exploration for control." International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg, 2008. - Choice of the latent variable distribution: at line 139 the authors say that p(Z) is chosen to be the uniform distribution, while at line 150 p(Z) is a categorical distribution. Which one is actually used in the algorithm? Is there a justification for choosing one distribution rather than another? Can the authors motivate? **Minor*** - line 64: remove comma after a_i - line 66: missing expectation around the summation - line 138: what is H(Z)? - line 322: don’t -> do not - Equation (2): at this point the policy becomes a parametric function of \theta. Moreover, the dependence on s when using the policy as an argument for R_{\mathcal{M}} should be removed - Figure 3: the labels on the axis are way too small - Font size of the captions should be the same as the text | - The objective function (1): I have two concerns about the definition of this objective: |
7ipjMIHVJt | ICLR_2024 | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exist (one of them, PhaseNet-DAS, Zhu et al. 2023, was cited by the authors; there might be others), and no comparison was made, nor any justification of the benefit of your method over theirs. If the claim is that this is a foundation model and the test on this task is only a proof of concept, this should be made clearer, and a future useful application should then be shown or justified.
2) I think the purpose of a foundation model would be its applicability at a larger scale. Yet, is your method generalizable to other DAS sensors? It is not clear whether it is site- and sensor-specific or not; if so, it means a new self-training run needs to be performed for any new DAS deployment.
3) The whole idea of this method is that earthquakes are unpredictable. It is clever indeed, but I see 2 major limitations: 1) this foundation model is thus harder to use for other tasks (which could be predictable); 2) in a series of aftershocks (which could perhaps be seen as more predictable), how does your measure perform?
4) The comparison with other multivariate time-series methods is somewhat misleading. Indeed, for multivariate time series we usually assume that the different series (or sensors) are not ordered and not equally spaced: DAS is a very particular type of 'multivariate time series'. I don't think it is worth presenting all of these methods (maybe only one), and this should be clearly stated in the paper. A comparison with 2D image foundation models, or with a video framework adapted from 2D+t to 1D+t, would be more relevant. | 1) The first concern is the goal of the paper. Indeed, DAS earthquake detectors exist (one of them, PhaseNet-DAS, Zhu et al. 2023, was cited by the authors; there might be others), and no comparison was made, nor any justification of the benefit of your method over theirs. If the claim is that this is a foundation model and the test on this task is only a proof of concept, this should be made clearer, and a future useful application should then be shown or justified.
NIPS_2021_953 | NIPS_2021 | Although the paper gives detailed theoretical proofs, the experiments are somewhat weak. I still have some concerns: 1) The most related works, SwAV and Barlow Twins, outperform the proposed method in some experimental results, as shown in Tables 1, 2 and 5. What are the main advantages of this method compared with SwAV and Barlow Twins? 2) HSIC(Z, Y) can be seen as a distance metric in the kernel space, where the cluster structure is defined by the identity. Although this paper maps identity labels into the kernel space, the information in one-hot labels is somewhat limited compared with the view embeddings in Barlow Twins (see the HSIC sketch below). 3) Since the cluster structure is defined by the identity, how does the number of images impact the model performance? Do more training images make the performance worse or better?
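(Editorial aside on point 2: a minimal sketch of the standard biased HSIC estimator of Gretton et al. (2005), with hypothetical sizes and linear kernels; this is not the paper's implementation.)

```python
import numpy as np

def hsic_biased(K, L):
    """Biased HSIC estimate from kernel matrices K (on representations) and L (on labels)."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
Z = rng.normal(size=(8, 4))   # 8 hypothetical representations
Y = np.eye(8)                 # one-hot identity labels: each image is its own "cluster"
print(hsic_biased(Z @ Z.T, Y @ Y.T))   # linear kernels on representations and labels
```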
The acronym BYOL in the abstract should be spelled out at its first appearance. | 3) Since the cluster structure is defined by the identity, how does the number of images impact the model performance? Do more training images make the performance worse or better? The acronym BYOL in the abstract should be spelled out at its first appearance.
NIPS_2020_1524 | NIPS_2020 | * The paper makes several “hand-wavy” arguments, which are suitable for supporting the claims in the paper; but it is unclear if they would generalize for analyzing / developing other algorithms. For instance: 1. Replacing `n^2/(2*s^2)` with an arbitrary parameter `lambda` (lines 119-121) 2. Taking SGD learning rate ~ 0.1 (line 164) — unlike the Adam default value, it is unclear what the justification behind this value is. | 1. Replacing `n^2/(2*s^2)` with an arbitrary parameter `lambda` (lines 119-121) 2. Taking SGD learning rate ~ 0.1 (line 164) — unlike the Adam default value, it is unclear what the justification behind this value is. |
NIPS_2019_1397 | NIPS_2019 | weakness of the manuscript. Clarity: The manuscript is well-written in general. It does a good job in explaining many results and subtle points (e.g., blessing of dimensionality). On the other hand, I think there is still room for improvement in the structure of the manuscript. The methodology seems fully explainable by Theorem 2.2. Therefore, Theorem 2.1 doesn't seem necessary in the main paper, and can be move to the supplement as a lemma to save space. Furthermore, a few important results could be moved from the supplement back to the main paper (e.g., Algorithm 1 and Table 2). Originality: The main results seem innovative to me in general. Although optimizing information-theoretic objective functions is not new, I find the new objective function adequately novel, especially in the treatment of the Q_i's in relation to TC(Z|X_i). Relevant lines of research are also summarized well in the related work section. Significance: The proposed methodology has many favorable features, including low computational complexity, good performance under (near) modular latent factor models, and blessing of dimensionality. I believe these will make the new method very attractive to the community. Moreover, the formulation of the objective function itself would also be of great theoretical interest. Overall, I think the manuscript would make a fairly significant contribution. Itemized comments: 1. The number of latent factors m is assumed to be constant throughout the paper. I wonder if that's necessary. The blessing of dimensionality still seems to hold if m increases slowly with p, and computational complexity can be still advantageous compared to GLASSO. 2. Line 125: For completeness, please state the final objective function (empirical version of (3)) as a function of X_i and the parameters. 3. Section 4.1: The simulation is conducted under a joint Gaussian model. Therefore, ICA should be identical with PCA, and can be removed from the comparisons. Indeed, the ICA curve is almost identical with the PCA curve in Figure 2. 4. In the covariance estimation experiments, negative log likelihood under Gaussian model is used as the performance metric for both stock market data and OpenML datasets. This seems unreasonable since the real data in the experiment may not be Gaussian. For example, there is extensive evidence that stock returns are not Gaussian. Gaussian likelihood also seems unfair as a performance metric, since it may favor methods derived under Gaussian assumptions, like the proposed method. For comparing the results under these real datasets, it might be better to focus on interpretability, or indirect metrics (e.g., portfolio performance for stock return data). 5. The equation below Line 412: the p(z) factor should be removed in the expression for p(x|z). 6. Line 429: It seems we don't need Gaussian assumption to obtain Cov(Z_j, Z_k | X_i) = 0. 7. Line 480: Why do we need to combine with law of total variance to obtain Cov(X_i, X_{l != i} | Z) = 0? 8. Lines 496 and 501: It seems the Z in the denominator should be p(z). 9. The equation below Line 502: I think the '+' sign after \nu_j should be a '-' sign. In the definition of B under Line 503, there should be a '-' sign before \sum_{j=1}^m, and the '-' sign after \nu_j should be a '+' sign. In Line 504, we should have \nu_{X_i|Z} = - B/(2A). Minor comments: 10. 
The manuscript could be more reader-friendly if the mathematical definitions for H(X), I(X;Y), TC(X), and TC(X|Z) were stated (in the supplementary material if there is no space in the main article). References to these are necessary when following the proofs/derivations. 11. Line 208: black -> block 12. Line 242: 50 real-world datasets -> 51 real-world datasets (according to Line 260 and Table 2) 13. References [7, 25, 29]: gaussian -> Gaussian Update: Thanks to the authors for the response. A couple of minor comments: - Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials. - Regarding the Gaussian evaluation metric, I think it would be helpful to include the comments as a note in the paper. | - Regarding the empirical version of the objective (3), it might be appropriate to put it in the supplementary materials.
oe51Q5Uo37 | ICLR_2025 | - Depending upon the model and the dataset, S$^3$T can result in lower training performance compared to full training.
- No code is provided; however, the authors promise to release it with the final version.
The paper could be further improved by adding a limitations section that summarizes all potential trade-offs and discusses future directions for addressing them. It would also be nice to see some performance numbers in the abstract itself, such as that S$^3$T improves the deletion rate by $1.6\times$ over SISA. | - Depending upon the model and the dataset, S$^3$T can result in lower training performance compared to full training.
NIPS_2022_742 | NIPS_2022 | It seems that the 6-DoF camera poses of the panoramas are required to do the projection. Hence, precisely speaking, the method is not fully self-supervised but requires ground-truth camera poses. These are usually accessible, and easier to obtain than ground-truth layouts, but they may also introduce errors into the layout projection and thus hurt the overall finetuning performance.
The experiments could be stronger in demonstrating the effectiveness of the method in two respects: 1) a stronger baseline: it seems SSLayout360 generally outperforms HorizonNet, so it would be convincing to show that this method is able to improve powerful backbones; 2) an analysis of the domain gap: it would be nice to add some discussion of the gap between datasets. Some datasets are closer to each other, so the adaptation may not be a big issue. Also, if the method is able to finetune a pre-trained model on synthetic data, then the value of the approach would be much higher. | 2) An analysis of the domain gap: it would be nice to add some discussion of the gap between datasets. Some datasets are closer to each other, so the adaptation may not be a big issue. Also, if the method is able to finetune a pre-trained model on synthetic data, then the value of the approach would be much higher.
NIPS_2016_499 | NIPS_2016 | - The proposed method is very similar in spirit to the approach in [10]. It seems that the method in [10] can also be equipped with scoring of causal predictions and the interventional data. If not, why can [10] not use this side information? - The proposed method reduces the computation time drastically compared to [10], but this is achieved by reducing the search space to ancestral graphs. This means that the output of ACI carries less information than the output of [10], which has a richer search space, i.e., DAGs. This is the price that has been paid to gain better performance. How much of the information in a DAG is encoded in its corresponding ancestral graph? - The second rule in Lemma 2, i.e., Eq. (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise. | - The second rule in Lemma 2, i.e., Eq. (7), and the definition of minimal conditional dependence seem to be conflicting. Taking Z' in this definition to be the empty set, we should have that x and y are independent given W, but Eq. (7) says otherwise.
NIPS_2022_331 | NIPS_2022 | I believe that one small (but important) part of the paper could use some clarifications in the writing: Section 3.2 (on Representational Probing). I will elaborate below.
I think that a couple of claims in the paper may be slightly too strong and need a bit more nuance. I will elaborate below.
A lot of the details described in Section 3.3 (Behavioral Tests) seem quite specific to the game of Hex. For the specific case of Hex, we can indeed know how to create such states that (i) contain a concept, (ii) contain that concept in only exactly one place, (iii) make sure that the agent must play according to the concept immediately, because otherwise it would lose. I imagine that setting up such specific situations may be much more difficult in many other games or RL environments, and would certainly require highly game-specific knowledge again for such tasks. This seems like a potential limitation (which doesn't seem to be discussed yet).
On weakness (1):
The first sentence that I personally found confusing was "To form the random control, for each board $(H^{(0)}, y)$
in the probing dataset, we consistently map each cell in that board to a random cell, forming $H_s^{(0)}." I guess that "map each cell in that board to a random cell" means creating a random mapping from all cells in the original board to all cells in the control board, in a way such that every original cell maps to exactly one randomly-selected control cell, and every control cell is also mapped to by exactly one original cell. And then, the value of each original cell (black/white/empty) is assigned to the control cell that it maps to. I guess this is what is done, and it makes sense, but it's not 100% explicit. I'm afraid that many readers could misunderstand it as simply saying that every cell gets a random value directly.
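(Editorial sketch of this reading of the mapping: a bijective cell-to-cell permutation that carries each cell's value over to its control position. The board encoding and size are assumptions, not taken from the paper.)

```python
import numpy as np

def random_control_board(board, rng):
    """Each original cell maps to exactly one randomly chosen control cell, every
    control cell is mapped to by exactly one original cell, and the control cell
    inherits the original cell's value (empty/black/white)."""
    flat = board.flatten()
    perm = rng.permutation(flat.size)      # bijective cell-to-cell mapping
    control = np.empty_like(flat)
    control[perm] = flat                   # original cell i -> control cell perm[i]
    return control.reshape(board.shape)

rng = np.random.default_rng(0)
board = rng.integers(0, 3, size=(13, 13))  # hypothetical 13x13 Hex board, values {0, 1, 2}
control = random_control_board(board, rng)
```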
Then, a bit further down under Implementation Details, it is described how the boards in the probing dataset get constructed. I suspect it would make more sense to actually describe this before describing how the matching controls are created.
On weakness (2):
(a) The behavioral tests involve states created specifically such that they (i) contain the concept, but also (ii) demand that the agent immediately plays according to the concept, because it will lose otherwise. In the game of Hex, this means that all of these board states, for all these different concepts, actually include one more new "concept" that is shared across all the tests; a concept that recognises a long chain of enemy pieces that is about to become a winning connection if not interrupted by playing in what is usually just one or two remaining blank cells in between. So, I do not believe that we can say with 100% certainty that all these behavior tests are actually testing for the concept that you intend them to test for. Some or all of them may simply be testing more generally if the agent can recognise when it needs to interrupt the opponent's soon-to-be-winning chain.
(b) "Fig. 5 shows evidence that some information is learned before the model is able to use the concepts." --> I think "evidence" may be too strong here, and would say something more like "Fig. 5 suggests that some information may be learned [...]". Technically, Fig. 5 just shows that there is generally a long period with no progress on the tests, and after a long time suddenly rapid progress on the tests. To me this indeed suggests that it is likely that it is learning something else first, but it is not hard evidence. It could also be that it's just randomly wandering about the parameter space and suddenly gets lucky and makes quick progress then, having learned nothing at all before.
(c) "Behavioral tests can also expose heuristics the model may be using." --> yes, but only if we actually already know that the heuristics exist, and know how to explicitly encode them and create probes for them. They can't teach us any new heuristics that we didn't already know about. So maybe, better phrasing could be something like "Behavioral tests can also confirm whether or not the model may be using certain heuristics."
It may be useful to discuss the apparent limitation that quite a bit of Hex-specific knowledge is used for setting up the probes (discussed in more detail as a weakness above).
It may be useful to discuss the potential limitation I discussed in more detail above that the behavioral tests may simply all be testing for an agent's ability to recognise when it needs to interrupt an opponent's immediate winning threat. | 5 shows evidence that some information is learned before the model is able to use the concepts." --> I think "evidence" may be too strong here, and would say something more like "Fig. |
ICLR_2023_935 | ICLR_2023 | 1 The traditional DCI framework may already implicitly consider explicitness (E) and size (S). For instance, to evaluate the disentanglement (D) of different representation methods, you may need to use a fixed probing capacity (f), and the latent size should also be fixed. DCI and ES may be entangled with each other: for instance, if you change the probing capacity or the latent size, then the DCI evaluation also changes correspondingly. The reviewer still needs clarification on the motivation for considering explicitness (E) and size (S) as extra evaluations.
2 Intuitively, explicitness (E) and size (S) may be highly related to the given dataset. The different capacity requirements in the 3rd paragraph may be due to the difference in input modality. Given a fixed dataset, the disentanglement evaluation should be given enough capacity and training time to carry out the DCI evaluation. If the probing capacity needs to be evaluated, then the training time, cost, and learning rate may also need to be considered, because they may influence the final value of DCI. | 1 The traditional DCI framework may already implicitly consider explicitness (E) and size (S). For instance, to evaluate the disentanglement (D) of different representation methods, you may need to use a fixed probing capacity (f), and the latent size should also be fixed. DCI and ES may be entangled with each other: for instance, if you change the probing capacity or the latent size, then the DCI evaluation also changes correspondingly. The reviewer still needs clarification on the motivation for considering explicitness (E) and size (S) as extra evaluations.
NIPS_2016_283 | NIPS_2016 | weaknesses of the paper are the empirical evaluation, which lacks some rigor, and the presentation thereof: - First off: The plots are terrible. They are too small, the colors are hard to distinguish (e.g. pink vs red), the axes are poorly labeled (what "error"?), and the labels are visually too similar (s-dropout(tr) vs e-dropout(tr)). These plots are the main presentation of the experimental results and should be much clearer. This is also the reason I rated the clarity as "sub-standard". - The results comparing standard vs. evolutional dropout on shallow models should be presented as a mean over many runs (at least 10), ideally with error bars. The plotted curves are obviously from single runs, and might be subject to significant fluctuations. Also, the models are small, so there really is no excuse for not providing statistics. - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark: - In my opinion, the claim that evolutional dropout addresses the internal covariate shift is very limited: it can only increase the variance of some low-variance units. Batch Normalization, on the other hand, standardizes the variance and centers the activations. These limitations should be discussed explicitly. Minor: * | - I'd like to know the final learning rates used for the deep models (particularly CIFAR-10 and CIFAR-100). The authors only searched 4 different learning rates, and if the optimal learning rate for the baseline was outside the tested interval, that could spoil the results. Another remark:
ARR_2022_68_review | ARR_2022 | 1. Despite the well-motivated problem formulation, the simulation is not realistic. The authors do not really collect feedback from human users but derive it from labeled data. One can imagine that users can tell when returned answers contradict commonsense. For instance, one can know that "Tokyo" is definitely a wrong answer to the question "What is the capital of South Africa?". However, it is not very reasonable to assume that the users are knowledgeable enough to provide both positive and negative feedback. If so, why do they need to ask QA models? And what is the difference between collecting feedback and labeling data?
In conclusion, it would be more realistic to assume that only a small portion of the negative feedback is trustworthy, and there may be little or no reliable positive feedback. According to the experiment results, however, 20% of feedback perturbation makes the proposed method fail.
Therefore, the experimental results cannot support the claim made by the authors.
2. There is a serious issue of missing related work. As mentioned above, Campos et al. (2020) have already investigated using user feedback to fine-tune deployed QA models. They also derive feedback from gold labels and conduct experiments with both in-domain and out-of-domain evaluation. The proposed methods are also similar: upweighting or downweighting the likelihood of the predicted answer according to the feedback.
Moreover, Campos et al. (2020) have a more reasonable formulation, in which there can be multiple pieces of feedback for a given pair of question and predicted answer. 3. The adopted baseline models are weak. First of all, the authors do not compare to Campos et al. (2020), which also uses feedback in QA tasks. Second, they also make no comparison with other domain adaptation methods, such as the work cited in Section 8.
Line 277: "The may be attributed…" -> "This may be attributed… | 3. The adopted baseline models are weak. First of all, the authors do not compare to Campos et al. (2020), which also uses feedback in QA tasks. Second, they also make no comparison with other domain adaptation methods, such as the work cited in Section 8. Line 277: "The may be attributed…" -> "This may be attributed…
NIPS_2019_653 | NIPS_2019 | weaknesses of the method. Clarity: The paper has been written in a manner that is straightforward to read and follow. Significance: There are two factors which dent the significance of this work. 1. The work uses only binary features. Real-world data is usually a mix of binary, real, and categorical features. It is not clear if the method is applicable to real and categorical features too. 2. The method does not seem to be scalable unless a distributed version of it is developed. It is not reasonable to expect that a single instance can hold all the training data that real-world datasets usually contain. | 2. The method does not seem to be scalable unless a distributed version of it is developed. It is not reasonable to expect that a single instance can hold all the training data that real-world datasets usually contain.
NIPS_2021_835 | NIPS_2021 | The authors addressed the limitations and potential negative societal impact of their work. However, there are some concerns, as follows: 1. The main concern is the innovation of this paper. Firstly, the Laplacian score was proposed in Ref. 13 as an unsupervised measure for feature selection. Secondly, I think that the main contribution of this paper is the stochastic gates, but in Ref. 36 the technique of stochastic gates is already used for supervised feature selection. Finally, the authors focus on the traditional unsupervised feature selection problem. Thus I think that the core contribution of this paper is that the authors extend the supervised problem in Ref. 36 to the unsupervised problem, without theoretical guarantees. The authors do introduce the importance of unsupervised feature selection from a diffusion perspective, but I don't think this is the core contribution of this article. On this question, if the authors can persuade me, I will change my score. 2. The authors introduce the importance of unsupervised feature selection from a diffusion perspective, and I think this is a very novel thing for feature selection, but I can't understand what the difference between similarity and exit times is in nature. I hope the authors can give a more detailed explanation of the difference. 3. The authors sample a stochastic gate (STG) vector in Algorithm 1, and thus I think that the proposed method should have randomness. But in the main experiments of this paper, I don't see this randomness analyzed by the authors. 4. It would be better if the authors added some future work. | 2. The authors introduce the importance of unsupervised feature selection from a diffusion perspective, and I think this is a very novel thing for feature selection, but I can't understand what the difference between similarity and exit times is in nature. I hope the authors can give a more detailed explanation of the difference.
haPIkA8aOk | EMNLP_2023 | 1. The description of the metrics is limited. It would be desirable to have an explanation of the metrics used in the paper. At the very least, citations for the metrics would have been good.
2. The training objective in Equation 7 would increase the likelihood of negative cases as well, resulting in unwanted behavior. Should the objective be \mathcal{L}_{c} - \mathcal{L}_{w}?
3. The paper needs a bit of polishing as at times equations are clubbed together. The equations in Sections 4 and 5 can be clubbed together while introducing them.
4. The paper is motivated by the fact that we need to generate multiple sequences at test time and then proceeds to get rid of them. However, ASPIRE generates multiple answers during the training phase. This should be explicitly mentioned in the paper, as it directly conflicts with the claim of not generating multiple sequences.
5. A bit more analysis of the impact of the number of model parameters is warranted. | 1. The description of the metrics is limited. It would be desirable to have an explanation of the metrics used in the paper. At the very least, citations for the metrics would have been good.
NIPS_2018_482 | NIPS_2018 | , and while I recommend acceptance, I think that it deserves to be substantially improved before publication. One of the main concerns is that part of the relevant literature has been ignored, and also importantly that the proposed approach has not really been extensively compared to potential competitors (that might need to be adapted to the multi-source framework; not e also that single-fidelity experiments could be run in order to better understand how the proposed acquisition function compares to others from the literature). Another main concern in connection with the previous one is that the presented examples remain relatively simple, one testcase being an analytical function and the other one a one-dimensional mototonic function. While I am not necessarily requesting a gigantic benchmark or a list of complicated high-dimensional real-world test cases, the paper would significantly benefit from a more informative application section. Ideally, the two aspects of improving the representativity of numerical test cases and of better benchmarking against competitor strategies could be combined. As of missing approaches from the literature, some entry points follow: * Adaptive Designs of Experiments for Accurate Approximation of a Target Region (Picheny et al. 2010) http://mechanicaldesign.asmedigitalcollection.asme.org/article.aspx?articleid=1450081 *Fast kriging-based stepwise uncertainty reduction with application to the identification of an excursion set (Chevalier et al. 2014a) http://amstat.tandfonline.com/doi/full/10.1080/00401706.2013.860918 * NB: approaches from the two articles above and more (some cited in the submitted paper but not benchmarked against) are coded for instance in the R package "KrigInv". The following article gives an overview of some of the package's functionalities: KrigInv: An efficient and user-friendly implementation of batch-sequential inversion strategies based on kriging (Chevalier et al. 2014b) * In the following paper, an entropy-based approach (but not in the same fashion as the one proposed in the submitted paper) is used, in a closely related reliability framework: Gaussian process surrogates for failure detection: A Bayesian experimental design approach (Wang et al. 2016) https://www.sciencedirect.com/science/article/pii/S002199911600125X * For an overall discussion on estimating and quantifying uncertainty on sets under GP priors (with an example in contour line estimation), see Quantifying Uncertainties on Excursion Sets Under a Gaussian Random Field Prior (Azzimonti et al. 2016) https://epubs.siam.org/doi/abs/10.1137/141000749 NB: the presented approaches to quantify uncertainties on sets under GP priors could also be useful here to return a more complete output (than just the contour line of the predictive GP mean) in the CLoVER algorithm. * Coming to the multi-fidelity framework and sequential design for learning quantities (e.g. probabilities of threshold exceedance) of interest, see notably Assessing Fire Safety using Complex Numerical Models with a Bayesian Multi-fidelity Approach (Stroh et al. 2017) https://www.sciencedirect.com/science/article/pii/S0379711217301297?via%3Dihub Some further points * Somehow confusing to read "relatively inexpensive" in the abstract and then "expensive to evaluate" in the first line of the introduction! * L 55 "A contour" should be "A contour line"? * L88: what does "f(l,x) being the normal distribution..." mean? 
* It would be nice to have a point-by-point derivation of equation (10) in the supplementary material (that would among others help readers including referees proofchecking the calculation). * About the integral appearing in the criterion, some more detail on how its computation is dealt with could be worth. ##### added after the rebuttal I updated my overall grade from 7 to 8 as I found the response to the point and it made me confident that the final paper would be improved (by suitably accounting for my remarks and those of the other referees) upon acceptance. Let me add a comment about related work by Rémi Stroh. The authors are right, the Fire Safety paper is of relevance but does actually not address the design of GP-based multi-fidelity acquisition functions (in the BO fashion). However, this point has been further developed by Stroh et al; see "Sequential design of experiments to estimate a probability of exceeding a threshold in a multi-fidelity stochastic simulator" in conference contributions listed in http://www.l2s.centralesupelec.fr/en/perso/remi.stroh/publications | * Coming to the multi-fidelity framework and sequential design for learning quantities (e.g. probabilities of threshold exceedance) of interest, see notably Assessing Fire Safety using Complex Numerical Models with a Bayesian Multi-fidelity Approach (Stroh et al. 2017) https://www.sciencedirect.com/science/article/pii/S0379711217301297?via%3Dihub Some further points * Somehow confusing to read "relatively inexpensive" in the abstract and then "expensive to evaluate" in the first line of the introduction! |
ICLR_2021_1504 | ICLR_2021 | W1) The authors should compare their approach (methodologically as well as experimentally) to other concept-based explanations for high-dimensional data such as (Kim et al., 2018), (Ghorbani et al., 2019) and (Goyal et al., 2019). The related work claims that (Kim et al., 2018) requires large sets of annotated data. I disagree. (Kim et al., 2018) only requires a few images describing the concept you want to measure the importance of. This is significantly less than the number of annotations required in the image-to-image translation experiment in the paper where the complete dataset needs to be annotated. In addition, (Kim et al., 2018) allows the flexibility to consider any given semantic concept for explanation while the proposed approach is limited either to semantic concepts captured by frequency information, or to semantic concepts automatically discovered by representation learning, or to concepts annotated in the complete dataset. (Ghorbani et al., 2019) also overcomes the issue of needing annotations by discovering useful concepts from the data itself. What advantages does the proposed approach offer over these existing methods?
W2) Faithfulness of the explanations with the pretrained classifier. The methods of disentangled representation and image-to-image translation require training another network to learn a lower-dimensional representation. This runs the risk of encoding some biases of its own. If we find some concerns with the explanations, we cannot infer if the concerns are with the trained classifier or the newly trained network, potentially making the explanations useless.
W3) In the 2-module approach proposed in the paper, the second module can theoretically be any explainability approach for low-dimensional data. What is the reason that the authors decide to use Shapley values instead of other works such as (Breiman, 2001) or (Ribeiro et al., 2016)? (A generic sketch of a Shapley-based second module is given after these points.)
W4) Among the three ways of transforming the high-dimensional data to low-dimensional latent space, what criteria should be used by a user to decide which method to use? Or, in other words, what are the advantages and disadvantages of each of these methods which might make them more or less suitable for certain tasks/datasets/applications?
W5) The paper uses the phrase "human-interpretable explainability". What other type of explainability could be possible if it's not human-interpretable? I think the paper might benefit from more precise definitions of these terms.
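(Editorial sketch related to W3: a generic permutation-sampling Monte Carlo estimator of Shapley values over a handful of low-dimensional latent features. The value function and feature count are toy assumptions; this is not the paper's implementation.)

```python
import numpy as np

def shapley_mc(value_fn, n_features, n_perm=200, seed=0):
    """Monte Carlo Shapley values; value_fn(S) is the model score when only the
    features in subset S are 'present' (the rest are masked or set to a baseline)."""
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_perm):
        order = rng.permutation(n_features)
        included, prev = [], value_fn([])
        for j in order:
            included.append(int(j))
            cur = value_fn(included)
            phi[j] += cur - prev               # marginal contribution of feature j
            prev = cur
    return phi / n_perm

# Toy additive value function over 3 latent factors: the estimates recover the weights.
weights = np.array([0.5, 0.3, 0.2])
print(shapley_mc(lambda S: float(weights[list(S)].sum()), 3))
```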
References mentioned above which are not present in the main paper:
(Ghorbani et al., 2019) Amirata Ghorbani, James Wexler, James Zou, Been Kim. Towards Automatic Concept-based Explanations. NeurIPS 2019.
(Goyal et al., 2019) Yash Goyal, Amir Feder, Uri Shalit, Been Kim. Explaining Classifiers with Causal Concept Effect (CaCE). ArXiv 2019.
—————————————————————————————————————————————————————————————— ——————————————————————————————————————————————————————————————
Update after rebuttal: I thank the authors for their responses to all my questions. However, I believe that these answers need to be justified experimentally in order for the paper's contributions to be significant for acceptance. In particular, I still have two major concerns. 1) The faithfulness of the proposed approach. I think that the authors' answer that their method is less at risk of biases than other methods needs to be demonstrated with at least a simple experiment. 2) Shapley values over other methods. I think the authors need to back up their argument for using Shapley value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot from including a significant discussion of the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to a low-dimensional latent space, which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating. | 2) Shapley values over other methods. I think the authors need to back up their argument for using Shapley value explanations over other methods by comparing experimentally with other methods such as CaCE or even raw gradients. In addition, I think the paper would benefit a lot from including a significant discussion of the advantages and disadvantages of each of the three ways of transforming the high-dimensional data to a low-dimensional latent space, which might make them more or less suitable for certain tasks/datasets/applications. Because of these concerns, I am keeping my original rating.
NIPS_2020_295 | NIPS_2020 | 1. The experimental comparisons are not sufficient. Some methods like MoCo and SimCLR also report results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of the proposed InvP with these wider backbones. 2. Some methods use 200 epochs for pretraining, while the reported InvP uses 800 epochs. What are the results of InvP with 200 epochs? It would be clearer if these results were added to the tables. 3. The proposed method adopts a memory bank to update v_i, as detailed at the beginning of Sec. 3. What would the results be when adopting a momentum queue or the current batch of features? As the results of SimCLR and MoCo are better than InsDis, it would be nice to have those results. | 1. The experimental comparisons are not sufficient. Some methods like MoCo and SimCLR also report results with wider backbones like ResNet50 (2×) and ResNet50 (4×). It would be interesting to see the results of the proposed InvP with these wider backbones.
7tpMhoPXrL | ICLR_2025 | • GDPR Compliance Concerns: The paper’s reliance on approximate unlearning without theoretical guarantees presents a significant shortfall. While approximate unlearning may be practical, it falls short in scenarios where data privacy and regulatory compliance are non-negotiable. Without provable guarantees, it is questionable whether this method can satisfy GDPR requirements for data erasure. This gap undermines the core purpose of Model Unlearning in privacy-centered contexts, where the "right to be forgotten" demands more than a probabilistic assurance.
• Scalability to Other Domains: The Forget Vector approach is developed and validated primarily for image classification tasks, potentially limiting its application in NLP or other non-visual domains where input perturbations may be less effective.
• Dependence on MIA (Membership Inference Attack) Testing via U-LiRA: While the paper uses MIA testing as a metric for unlearning effectiveness, the effectiveness of MIA testing itself is not sufficiently robust for privacy guarantees. Additionally, the use of U-LiRA [1] is recommended.
• Sensitivity to Data Shifts: From the paper, the effectiveness of unlearning decreases under certain data shifts, which may hinder the reliability of Forget Vectors in dynamic data environments or adversarial settings. | • Dependence on MIA (Membership Inference Attack) Testing via U-LiRA: While the paper uses MIA testing as a metric for unlearning effectiveness, the effectiveness of MIA testing itself is not sufficiently robust for privacy guarantees. Additionally, the use of U-LiRA [1] is recommended.
NIPS_2022_2510 | NIPS_2022 | weakness
The main assumption behind the development of the theory is the claim that Lipschitz neural networks are "often perceived as not expressive", which I do not remember ever reading in the literature. Most of the paper seems fuzzy: it goes from BCE to robustness certification to a (trivial) convergence proposition (Prop. 4) to considerations about float32/64 (Ex. 1)... it is just too much. There are no transitions, and at no moment does the reader understand what the point is.
One of the biggest problems is the writing style, which makes the paper hard and annoying (sorry...) to read. For example, among many other problems:
- sentences must contain a verb and end with a '.' (even when they end with a formula);
- theorems should be self-contained; external references should remain exceptional;
- organize the equations in order to be easily readable;
- separate definitions and remarks (e.g. def 1 & 2 contain definitions and remarks);
- separate propositions and definitions (e.g. corollary 1 contains a definition and a proposition);
- the proofs rely on many technical tools such as the Lebesgue measure, but the prerequisites (Borel spaces, etc.) are never explicitly given.
Most of the proofs in the appendix should be rewritten. More personally, I think the footnotes in the paper could be avoided. The paper is full of references, and this hurts the readability of the arguments. The reader gets overwhelmed very quickly, for the wrong reasons. References
most references are incomplete (many articles are referenced as arXiv preprints instead of the published and peer-reviewed versions) Others
l94: one needs additional hypotheses on AllNet to claim that such networks have a finite Lipschitz constant (Lipschitz activations would do).
l216: the reference to Cor. 1 seems dubious: it is a definition of \epsilon-separation rather than a statement about bias. | - separate definitions and remarks (e.g. def 1 & 2 contain definitions and remarks);
NIPS_2018_809 | NIPS_2018 | Weakness: - The uniqueness of the connecting curves between two weights is unclear, and there might be a gap between the curve and FGE. A natural question would be, for example: if we run the curve finding several times, will we see many different curves, or will those curves be nearly unique? - The evidence is basically empirical, and it would be nice to have some supporting explanation of why this curve exists (and whether it always exists). - The connection between the curve finding (the first part) and FGE (the second part) is rather weak. When I read the first part and the title, I imagined taking random weights, learning curves between the weights, and finding nice weights to be mixed into the final ensemble, but it was not like that (this could work, but it is also computationally demanding). Comment: - Overall I liked the paper even though the evidence is empirical. It was fun to read. The reported phenomena are quite mysterious, and interesting enough to inspire some subsequent research. - To be honest, I'm not sure the first curve-finding part explains well why FGE works. The cyclical learning rate scheduling would perturb the weights around the initial converged weight, but it cannot guarantee that the weights change along the curve described in the first part. | - Overall I liked the paper even though the evidence is empirical. It was fun to read. The reported phenomena are quite mysterious, and interesting enough to inspire some subsequent research.
ACL_2017_699_review | ACL_2017 | 1. Some discussion is required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in the probabilistic metric space are obtained. Otherwise, it may be tough to reproduce the results.
2. The evaluation process shows that the current system (which extracts both kinds of keyphrases: 1. present and 2. absent) is evaluated against baselines (which contain only "present"-type keyphrases). There is no direct comparison of the performance of the current system with other state-of-the-art/benchmark systems on only "present"-type keyphrases. It is important to note that local phrases (keyphrases) are also important for the document. The experiments do not discuss this explicitly. It would be interesting to see the impact of the RNN- and CopyRNN-based models on the automatic extraction of local or "present"-type keyphrases.
3. The impact of document size on keyphrase extraction is also an important point. It is found that the published results of [1] (see reference below) are better than those of the current system (by a sufficiently large margin) on the Inspec (Hulth, 2003) abstracts dataset. 4. It is reported that the current system uses 527,830 documents for training, while 40,000 publications are held out for training the baselines. Why are not all publications used in training the baselines? Additionally, the topical details of the dataset (527,830 scientific documents) used in training the RNN and CopyRNN are also missing. This may affect the chances of reproducing the results.
5. The current system captures semantics through RNN-based models, so it would be better to compare it against systems that also capture semantics. Ref. [2] could be a strong baseline against which to compare the performance of the current system.
Suggestions to improve: 1. As per the example given in Figure 1, it seems that all the "absent"-type keyphrases are actually "topical phrases", for example "video search", "video retrieval", "video indexing", "relevance ranking", etc.
These all define the domain/sub-domains/topics of the document. So, in this case, it would be interesting to see the results (and it would help in evaluating "absent"-type keyphrases) if we identified all the topical phrases of the entire corpus using tf-idf and related the document to the high-ranked extracted topical phrases (using Normalized Google Distance, PMI, etc.). Similar efforts have already been applied in several query expansion techniques (with the aim of relating the document to the query when matching terms are absent from the document).
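(For reference, standard definitions of the two association measures named above, added editorially; $f(\cdot)$ denotes occurrence counts and $N$ the corpus/index size.)

```latex
\mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\, p(y)},
\qquad
\mathrm{NGD}(x, y) = \frac{\max\{\log f(x), \log f(y)\} - \log f(x, y)}
                          {\log N - \min\{\log f(x), \log f(y)\}} .
```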
Reference: 1. Liu, Zhiyuan, Peng Li, Yabin Zheng, and Maosong Sun. 2009b. Clustering to find exemplar terms for keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 257–266.
2. Zhang, Q., Wang, Y., Gong, Y., & Huang, X. (2016). Keyphrase extraction using deep recurrent neural networks on Twitter. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 836-845). | 1. Some discussion is required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in the probabilistic metric space are obtained. Otherwise, it may be tough to reproduce the results.
XX73vFMemG | EMNLP_2023 | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early stopping. Fine-tuning on GLUE without validation-based early stopping usually has very high variance; proper ablation studies are needed to verify this.
2. The performance gains (especially for Community KD) over standard one-way distillation seem quite marginal based on the experiments when compared to other BERT distillation techniques like BERT-PKD, TinyBERT, MobileBERT, or BERT-of-Theseus. This is not a good signal, given that the training process is more complex, with co-training and co-distillation, compared to other distillation techniques.
3. Evaluations are limited to BERT models only. Testing on other PLMs would be more convincing. | 1. The paper studies the case where the student distills knowledge to the teacher, which improves the teacher's performance. However, the improvements could potentially be due to regularization effects rather than distillation as claimed, since all fine-tuning is performed for 10 epochs and without early stopping. Fine-tuning on GLUE without validation-based early stopping usually has very high variance; proper ablation studies are needed to verify this.
ICLR_2021_1705 | ICLR_2021 | and suggestions for improvement: I have several concerns about the potential impact of both theoretical and practical results. Mainly:
By referring to Wilson et al., 2017, the authors argue that diagonal step sizes in adaptive algorithms hurt generalization. First, I find this claim rather vague, as there have been many follow-ups to Wilson et al., 2017, so I suggest the authors be more precise and include more recent observations. Moreover, one can use non-diagonal versions of these algorithms. For example, see [2] and Adagrad-norm from Ward et al., 2019; it is easy to consider similar non-diagonal versions of Adam/AMSGrad/Adagrad with first-order momentum (a.k.a. AdamNC or AdaFOM). Are these algorithms then also supposed to have good generalization? I think it is important to see how these non-diagonal adaptive methods behave in practice compared to SGD/Adam+ for generalization to support the authors' claim (a short illustrative sketch of such scalar-step-size variants is given below, after the last concern).
I think the algorithm seems more like an extension of momentum SGD than of Adam.
It is nice to improve the \epsilon^{-4} complexity under a Lipschitz Hessian assumption, but what happens when this assumption fails? Does Adam+ still get the standard \epsilon^{-4} rate?
From what I understand from the remark after Lemma 1, the variance reduction is ensured by taking β to 0. The authors use 1/T^a for some a ∈ (0, 1). Here, I have several questions. First, how does such a small β work in practice? If a larger β works well in practice while the theory requires β → 0 to work, this suggests to me that the theoretical analysis of the paper does not translate to the practical performance. When one uses β values that work well in practice, does the theory show convergence?
Related to the previous part, I am also not sure about the "adaptivity" of the method. The authors need to use the Lipschitz constants L, L_H to set step sizes. Moreover, β is also fixed in advance, depending on the horizon T, which is the main reason to have variance reduction on z_t − ∇f(w_k). So, I do not understand what is adaptive in the step size or in the variance reduction mechanism of the method.
For the experiments, the authors say that Adam+ is comparable with "tuned" SGD. However, from the explanations in the experimental part, I understand that Adam+ is also tuned similarly to SGD. Then, what is the advantage compared to SGD? If one needs the same amount of tuning for Adam+, and the performance is similar, I do not see much advantage compared to SGD. On this front, I suggest the authors show what happens when the step size parameter is varied: is Adam+ more robust to untuned step sizes compared to SGD?
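As a concrete reference for the first concern above, here is a minimal sketch contrasting a scalar ("norm") step size in the spirit of Adagrad-norm (Ward et al., 2019) with the usual diagonal Adagrad update. This is a generic illustration only (plain NumPy, arbitrary step-size constant eta), not the Adam+ update or any algorithm from the paper under review.

```python
import numpy as np

def adagrad_norm_step(x, grad, state, eta=0.1):
    # Scalar (non-diagonal) AdaGrad: one global step size driven by the
    # accumulated squared gradient *norm*.
    state["b2"] += float(np.sum(grad ** 2))
    return x - eta * grad / np.sqrt(state["b2"]), state

def adagrad_diag_step(x, grad, state, eta=0.1, eps=1e-8):
    # Diagonal AdaGrad: a separate step size per coordinate, accumulated
    # from the squared gradient entries.
    state["v"] += grad ** 2
    return x - eta * grad / (np.sqrt(state["v"]) + eps), state

# Tiny usage example on f(x) = 0.5 * ||x||^2, whose gradient at x is x.
x_n, x_d = np.ones(5), np.ones(5)
s_n, s_d = {"b2": 1e-12}, {"v": np.zeros(5)}
for _ in range(100):
    x_n, s_n = adagrad_norm_step(x_n, x_n, s_n)
    x_d, s_d = adagrad_diag_step(x_d, x_d, s_d)
print(np.linalg.norm(x_n), np.linalg.norm(x_d))
```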
To sum up, I vote for rejection since 1) the analysis and parameters require strict conditions, 2) it is not clear if the analysis reflects the practical performance (a very small β is needed in theory), and 3) the practical merit is unclear since the algorithm needs to be tuned similarly to SGD and the results are also similar to SGD.
[1] Zhang, Lin, Jegelka, Jadbabaie, Sra, Complexity of Finding Stationary Points of Nonsmooth Nonconvex Functions, ICML 2020. [2] Levy, Online to Offline Conversions, Universality and Adaptive Minibatch Sizes, NIPS 2017.
======== after discussion phase ==========
I still think that the merit of the method is unclear for the following reasons: 1) It is not clear how the method behaves without Lipschitz Hessian assumption. 2) The method only obtains the state-of-the-art complexity of ϵ^{-3.5} with large mini-batch sizes, and the complexity with small mini-batch sizes (Section 2.1) is suboptimal (in fact, drawbacks such as this need to be presented explicitly; right now I do not see enough discussion about this). 3) The adaptive variance reduction property claimed by the authors boils down to picking a "small enough" β parameter, which in my opinion takes away the adaptivity claim and is, for example, not the case in adaptive methods such as AdaGrad. 4) The comparison with AdaGrad and Adam with scalar step sizes is not included (the authors promised to include it later, but I cannot make a decision about these without seeing the results), and I am not sure if Adam+ will bring benefits over them. 5) The presentation of the paper needs major improvements. I recommend making the remarks after Lemma 1 and the theorems clearer by writing down exact expressions and their implications (for example, remarks such as "As the algorithm converges with E[‖∇F(w)‖^2] and β decreases to zero, the variance of z_t will also decrease" can be made more rigorous and clearer by writing down the exact bound for the variance of z_t, obtained by iterating the recursion written with E[δ_{t+1}], and highlighting what each term does in the bound; this would make it much easier for readers to understand the paper).
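To illustrate the kind of explicit bound item 5 asks for, suppose, purely as a generic example and not as the paper's actual remark (whose constants and extra terms may differ), that the recursion has the common form E[δ_{t+1}] ≤ (1 − β) E[δ_t] + 2β²σ² + C. Unrolling it gives
\[
\mathbb{E}[\delta_t] \le (1-\beta)^t\,\mathbb{E}[\delta_0] + \bigl(2\beta^2\sigma^2 + C\bigr)\sum_{s=0}^{t-1}(1-\beta)^s \le (1-\beta)^t\,\mathbb{E}[\delta_0] + 2\beta\sigma^2 + \frac{C}{\beta},
\]
so the first term decays geometrically, the second term is a variance floor proportional to β, and the third shows how any additional error is amplified by 1/β; spelling out the analogue of this decomposition for the paper's own recursion is what the remark could state explicitly.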
Therefore, I am keeping my score. | 1) It is not clear how the method behaves without Lipschitz Hessian assumption. |
ICLR_2022_3352 | ICLR_2022 | + The problem studied in this paper is definitely important in many real-world applications, such as robotics decision-making and autonomous driving. Discovering the underlying causation is important for agents to make reasonable decisions, especially in dynamic environments.
+ The method proposed in this paper is interesting and technically correct. Intuitively, using GRU to extract sequential information helps capture the changes of causal graphs.
- The main idea of causal discovery by sampling intervention sets and causal graphs for masking is similar to DCDI [1]. This paper is more like an application of DCDI to dynamic environments, which may limit its novelty.
- The paper is not difficult to follow, but there are several places that may cause confusion (listed in point 3).
- The contribution of this paper is not fully supported by experiments.
Main Questions
(1) During the inference stage, why use samples instead of directly taking the argmax of Bernoulli distribution? How many samples are required? Will this sampling cause scalability problems?
(2) In the experiment part, the authors only compare with one method (V-CDN). Is it possible to compare DYNOTEARS with the proposed method?
(3) The authors mention that there is no ground truth to evaluate the causal discovery task. I agree with this opinion, since the real world does not provide us with causal graphs. However, the first experiment is conducted on a synthetic dataset, where I believe it is possible to obtain the causation by checking collision conditions. In other words, I am not convinced by the prediction results alone. Could the authors provide the learned causal graphs and intervention sets and compare them with the ground truth, even on a simple synthetic dataset?
Clarification questions
(1) It seems the citation of NOTEARS [2] is wrongly used for DYNOTEARS [3]. This citation is important since DYNOTEARS is one of the motivations of this paper.
(2) The ICM part in Figure 3 is not clear. How is the intervention set I used? If I understand correctly, the function f is a prediction model conditioned on history frames.
(3) The term "Bern" in equation (3) is not defined. I assume it is the Bernoulli distribution. Then what does the symbol Bern(α_t, β_t) mean?
(4) According to equation (7), each node j has its own parameters ϕ_j^t and ψ_j^t. Could the authors explain why the parameters are related to time?
(5) In equation (16), the authors mention the term "secondary optimization". I can't find any reference for it. Could the authors provide more information?
Minor things:
(1) In the caption of Figure 2, the authors say "For nonstationary causal models, (c)…". But in the figure, (c) belongs to the stationary methods.
[1] Brouillard P, Lachapelle S, Lacoste A, et al. Differentiable causal discovery from interventional data[J]. arXiv preprint arXiv:2007.01754, 2020.
[2] Zheng X, Aragam B, Ravikumar P, et al. Dags with no tears: Continuous optimization for structure learning[J]. arXiv preprint arXiv:1803.01422, 2018.
[3] Pamfil R, Sriwattanaworachai N, Desai S, et al. Dynotears: Structure learning from time-series data[C]//International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 1595-1605. | - The paper is not difficult to follow, but there are several places that may cause confusion (listed in point 3).
8fLgt7PQza | ICLR_2024 | 1. By reviewing your code and the details in the article, I can see that your workload is immense; however, the contribution of this article is incremental. My understanding is that it is essentially a combination of GraphRAG and GraphCare [1]. Furthermore, many key baselines were not cited. Since the authors mentioned that this paper focuses on RAG for EHR, some essential RAG algorithms should have been introduced, such as MedRetriever [2], and commonly used GraphRAG algorithms like KGRAG [3].
2. In the experiment or appendix section, I did not clearly see the formulas for Sensitivity and Specificity, nor were there any corresponding references, which is quite confusing to me. Moreover, using Accuracy as a metric in cases of highly imbalanced labels is unreasonable. For instance, in the MIMIC-III Mortality Prediction task, the positive rate is 5.42%. If I predict that all patients will survive, I can still achieve an accuracy of 94.58% (see the short numerical sketch after point 3 below). Previous works, such as GraphCare [1], have adopted AUROC and AUPRC as evaluation metrics.
3. The article is overly long and filled with detailed content, making it easy for readers to miss important points.
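To make the imbalance arithmetic in point 2 concrete, here is a minimal sketch on synthetic labels with a 5.42% positive rate (illustrative only; this is not the MIMIC-III data, and the 0.5 decision threshold is an arbitrary choice): the trivial "everyone survives" predictor scores about 94.6% accuracy, yet its AUROC is 0.5 and its AUPRC collapses to the prevalence.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(0)
y_true = (rng.random(100_000) < 0.0542).astype(int)  # ~5.42% positive labels
y_score = np.zeros_like(y_true, dtype=float)         # constant "survives" score

accuracy = np.mean((y_score >= 0.5).astype(int) == y_true)  # ~0.9458
auroc = roc_auc_score(y_true, y_score)                      # 0.5: no ranking ability
auprc = average_precision_score(y_true, y_score)            # ~0.054: just the prevalence
print(f"accuracy={accuracy:.4f}  AUROC={auroc:.3f}  AUPRC={auprc:.3f}")
```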
- [1] GraphCare: Enhancing Healthcare Predictions with Personalized Knowledge Graphs. ICLR 2024
- [2] MedRetriever: Target-driven interpretable health risk prediction via retrieving unstructured medical text. CIKM 2021
- [3] Biomedical knowledge graph-enhanced prompt generation for large language models. arXiv 2023 | 1. By reviewing your code and the details in the article, I can see that your workload is immense; however, the contribution of this article is incremental. My understanding is that it is essentially a combination of GraphRAG and GraphCare [1]. Furthermore, many key baselines were not cited. Since the authors mentioned that this paper focuses on RAG for EHR, some essential RAG algorithms should have been introduced, such as MedRetriever [2], and commonly used GraphRAG algorithms like KGRAG [3].
NIPS_2016_395 | NIPS_2016 | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: 1) Section 1.2: the dimensions of the projection matrices are written as $A_i \in \mathbb{R}^{m_i \times d_i}$. I think this should be $A_i \in \mathbb{R}^{d_i \times m_i}$, otherwise you cannot project a tensor $T \in \mathbb{R}^{d_1 \times d_2 \times \ldots d_p}$ on those matrices. But maybe I am wrong about this... 2) The neighborhood condition in Definition 3.2 for differential privacy seems a bit odd in the context of topic modeling. In that setting, two tensors/databases would be neighbors if one document is different, which could induce a change of something like $\sqrt{2}$ (if there is no normalization, so I found this a bit confusing. This makes me think the application of the method to differential privacy feels a bit preliminary (at best) or naive (at worst): even if a method is robust to noise, a semantically meaningful privacy model may not be immediate. This $\sqrt{2}$ is less than the $\sqrt{6}$ suggested by the authors, which may make things better? 3) A major concern I have about the differential privacy claims in this paper is with regards to the noise level in the algorithm. For moderate values of $L$, $R$, and $K$, and small $\epsilon = 1$, the noise level will be quite high. The utility theorem provided by the author requires a lower bound on $\epsilon$ to make the noise level sufficiently low, but since everything is in "big-O" notation, it is quite possible that the algorithm may not work at all for reasonable parameter values. A similar problem exists with the Hardt-Price method for differential privacy (see a recent ICASSP paper by Imtiaz and Sarwate or an ArXiV preprint by Sheffet). For example, setting L=R=100 and K=10, \epsilon = 1, \delta = 0.01 then the noise variance is of the order of 4 x 10^4. Of course, to get differentially private machine learning methods to work in practice, one either needs large sample size or to choose larger $\epsilon$, even $\epsilon \gg 1$. Having any sense of reasonable values of $\epsilon$ for a reasonable problem size (e.g. in topic modeling) would do a lot towards justifying the privacy application. 4) Privacy-preserving eigenvector computation is pretty related to private PCA, so one would expect that the authors would have considered some of the approaches in that literature. What about (\epsilon,0) methods such as the exponential mechanism (Chaudhuri et al., Kapralov and Talwar), Laplace noise (the (\epsilon,0) version in Hardt-Price), or Wishart noise (Sheffet 2015, Jiang et al. 2016, Imtiaz and Sarwate 2016)? 5) It's not clear how to use the private algorithm given the utility bound as stated. Running the algorithm is easy: providing $\epsilon$ and $\delta$ gives a private version -- but since the $\lambda$'s are unknown, verifying if the lower bound on $\epsilon$ holds may not be possible: so while I get a differentially private output, I will not know if it is useful or not. 
I'm not quite sure how to fix this, but perhaps a direct connection/reduction to Assumption 2.2 as a function of $\epsilon$ could give a weaker but more interpretable result. 6) Overall, given 2)-5) I think the differential privacy application is a bit too "half-baked" at the present time and I would encourage the authors to think through it more clearly. The online algorithm and robustness is significantly interesting and novel on its own. The experimental results in the appendix would be better in the main paper. 7) Given the motivation by topic modeling and so on, I would have expected at least an experiment on one real data set, but all results are on synthetic data sets. One problem with synthetic problems versus real data (which one sees in PCA as well) is that synthetic examples often have a "jump" or eigenvalue gap in the spectrum that may not be observed in real data. While verifying the conditions for exact recovery is interesting within the narrow confines of theory, experiments are an opportunity to show that the method actually works in settings where the restrictive theoretical assumptions do not hold. I would encourage the authors to include at least one such example in future extended versions of this work. | - I found the application to differential privacy unconvincing (see comments below) - Experimental validation was a bit light and felt preliminary RECOMMENDATION: I think this paper should be accepted into the NIPS program on the basis of the online algorithm and analysis. However, I think the application to differential privacy, without experimental validation, should be omitted from the main paper in favor of the preliminary experimental evidence of the tensor method. The results on privacy appear too preliminary to appear in a "conference of record" like NIPS. TECHNICAL COMMENTS: |
NIPS_2019_651 | NIPS_2019 | (large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-implementation of the method appears feasible. - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, â¦, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? 2. The presented theorems provide guarantees for the objective functions phi on data X and coreset C for a query Q. Table 1 reporting the relative errors suggests that there might be a substantial deviation between coreset and full dataset archetypes. However, the interpretation of archetypes in a particular application is when AA proves particularly useful (as for example in [1] or [2]). Is the archetypal interpretation of identifying (more or less) stable prototypes whose convex combinations describe the data still applicable? 3. Practically, the number of archetypes k is of interest. In the presented framework, is there a way to perform model selection in order to identify an appropriate k? 4. The work in [3] might be worth to mention as a related approach. There, the edacious nature of AA is approached by learning latent representation of the dataset as a convex combination of (learnt) archetypes and can be viewed as a non-linear AA approach. [1] Shoval et al., Evolutionary Trade-Offs, Pareto Optimality, and the Geometry of Phenotype Space, Science 2012. [2] Hart et al., Inferring biological tasks using Pareto analysis of high-dimensional data, Nature Methods 2015. [3] Keller et al., Deep Archetypal Analysis, arxiv preprint 2019. ---------------------------------------------------------------------------------------------------------------------- I appreciate the authorsâ response and the additional experimental results. I consider the plot of the coreset archetypes on a toy experiment insightful and it might be a relevant addition to the appendix. In my opinion, the submission constitutes a relevant contribution to archetypal analysis which makes it more feasible in real-world applications and provides some theoretical guarantees. Therefore, I raise my assessment to accept. | 1. Algorithm 2 provides the coreset C and the query Q consists of the archetypes z_1, â¦, z_k which are initialised with the FurthestSum procedure. However, it is not quite clear to me how the archetype positions are updated after initialisation. Could the authors please comment on that? |
NIPS_2021_2123 | NIPS_2021 | This paper still has some problems that I hope the authors can explain more clearly.
The authors argue that they are the first to directly train deep SNNs with more than 100 layers. I don't think this is the core contribution of this paper, because with the residual block the spiking network can naturally be made deeper. In my opinion, the SEW structure is the most important point in this paper, and directly training a 50-layer or 100-layer SNN is not a huge breakthrough. Alternatively, if they could give a more detailed analysis of why other methods cannot train a 100-layer SNN, beyond Section 3.2, it would be more convincing.
Why can the RBA block be seen as a special case of the SEW block? I would say SEW is rather another kind of RBA with binary input and output.
Eq. 11 is wonderful; how about other bit operations?
Fig. 5a seems strange; please give more explanation.
When the input is in AER format, how did you deal with the DVS input?
If you can analyze the energy consumption as reference [15] did, this paper would be more solid. | 11 is wonderful; how about other bit operations? Fig. 5a seems strange; please give more explanation. When the input is in AER format, how did you deal with the DVS input? If you can analyze the energy consumption as reference [15] did, this paper would be more solid.
53kW6e1uNN | ICLR_2024 | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically DeCorr [1], which focuses on general deep graph neural networks, in a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filtering, with different datasets and backbones. Although modifications like different penalty coefficients for users and items are also proposed, the whole paper still lacks enough insight into what the unique challenges of overcorrelation in recommender systems are.
2. It would be better to add one more figure, i.e., how the Corr and SMV metrics evolve as additional network layers are applied, mirroring Figure 2 but explicitly showcasing the effects of the proposed method. By doing so, the authors could convincingly validate the efficacy of their auxiliary loss function.
3. Presentation issues. The y-axis labels of Figure 2 lack standardization, e.g., 0.26 vs. 0.260 vs. 2600 vs. .2600.
[1] Jin et al. Feature overcorrelation in deep graph neural networks: A new perspective. KDD 2022. | 1. Limited novelty. The paper seems like a straightforward application of existing literature, specifically DeCorr [1], which focuses on general deep graph neural networks, in a specific application domain. The contribution of this study is mainly the transposition of DeCorr's insights into graph collaborative filtering, with different datasets and backbones. Although modifications like different penalty coefficients for users and items are also proposed, the whole paper still lacks enough insight into what the unique challenges of overcorrelation in recommender systems are.
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. - The authors define rooted patterns (in a similar way to the orbit counting in GSN), but do not elaborate on why it is important for the patterns to be rooted, neither how they choose the roots. A brief discussion is expected, or if non-rooted patterns are sufficient, it might be better for the sake of exposition to discuss this case only in the supplementary material. - The authors do not adequately discuss the computational complexity of counting homomorphisms. They make brief statements (e.g., L 145 “Better still, homomorphism counts of small graph patterns can be efficiently computed even on large datasets”), but I think it will be beneficial for the paper to explicitly add the upper bounds of counting and potentially elaborate on empirical runtimes. - Comparison with GSN: The authors mention in section 2 that F-MPNNs are a unifying framework that includes GSNs. In my perspective, given that GSN is a quite similar framework to this work, this is an important claim that should be more formally stated. In particular, as shown by Curticapean et al., 2017, in order to obtain isomorphism counts of a pattern P, one needs not only to compute P-homomorphisms, but also those of the graphs that arise when doing “non-edge contractions” (the spasm of P). Hence a spasm(P)-MPNN would require one extra layer to simulate a P-GSN. I think formally stating this will give the interested reader intuition on the expressive power of GSNs, albeit not an exact characterisation (we can only say that P-GSN is at most as powerful as a spasm(P)-MPNN but we cannot exactly characterise it; is that correct?) - Also, since the concept of homomorphisms is not entirely new in graph ML, a more elaborate comparison with the paper by NT and Maehara, “Graph Homomorphism Convolution”, ICML’20 would be beneficial. This paper can be perceived as the kernel analogue to F-MPNNs. Moreover, in this paper, a universality result is provided, which might turn out to be beneficial for the authors as well.
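For reference, the relation alluded to in the GSN comparison above, as I understand the result used by Curticapean et al., 2017, is that subgraph counts are a finite rational linear combination of homomorphism counts taken over the spasm of P (all graphs obtained from P by identifying non-adjacent vertices):
\[
\mathrm{Sub}(P, G) \;=\; \sum_{F \in \mathrm{Spasm}(P)} \alpha_F \,\mathrm{Hom}(F, G), \qquad \alpha_F \in \mathbb{Q},
\]
which is why simulating a P-GSN with F-MPNNs would seem to require the homomorphism counts of every graph in Spasm(P), not of P alone.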
Additional comments:
I think that something is missing from Proposition 3. In particular, if I understood correctly, the proof is based on the fact that we can always construct a counterexample such that F-MPNNs will not be as strong as 2-WL (which, by the way, is a stronger claim). However, if the graphs are of bounded size, a counterexample is not guaranteed to exist (this would imply that the reconstruction conjecture is false). Maybe it would help to mention in Proposition 3 that the graphs are of unbounded size?
Moreover, there is a detail in the proof of Proposition 3 that I am not sure is that obvious. I understand why the subgraph counts of C_{m+1} are unequal between the two compared graphs, but I am not sure why this is also true for homomorphism counts.
Theorem 3: The definition of the core of a graph is unclear to me (e.g., what if P contains cliques of multiple sizes?)
In the appendix, the authors mention they used 16 layers for their dataset. That is an unusually large number of layers for GNNs. Could the authors comment on this choice?
In the same context as above, the experiments on the ZINC benchmark are usually performed with either ~100K or 500K parameters. Although I doubt that changing the number of parameters will lead to a dramatic change in performance, I suggest that the authors repeat their experiments, simply for consistency with the baselines.
The method of Bouritsas et al., arxiv’20 is called “Graph Substructure Networks” (instead of “Structure”). I encourage the authors to correct this.
After rebuttal
The authors have adequately addressed all my concerns. Enhancing MPNNs with structural features is a family of well-performing techniques that have recently gained traction. This paper introduces a unifying framework, in the context of which many open theoretical questions can be answered, hence significantly improving our understanding. Therefore, I will keep my initial recommendation and vote for acceptance. Please see my comment below for my final suggestions which, along with some improvements on the presentation, I hope will increase the impact of the paper.
Limitations: The limitations are clearly stated in section 1, by mainly referring to the fact that the patterns need to be selected by hand. I would also add a discussion on the computational complexity of homomorphism counting.
Negative societal impact: A satisfactory discussion is included in the end of the experimental section. | - It might be a good idea to include a formal or intuitive definition of the treewidth since it is central to all the proofs in the paper. |
NIPS_2018_197 | NIPS_2018 | weakness of the paper: its clarity. From the presentation, it seems evident that the author is an expert in the field of computer algebra/algebraic geometry. It is my assumption that most members of the NIPS community will not have a strong background on this subject, me including. As a consequence, I found it very hard to follow Sect. 3. My impression was that the closer the manuscript comes to the core of algebraic geometry results, the less background was provided. In particular, I would have loved to see at least a proof idea or some more details/background on Thm. 3.1 and Cor. 3.2. Or maybe, the author could include one less example in the main text but show the entire derivation how to get from one concrete instance of A to right kernel B by manual computation? Also, for me the description in Sect. 2.4 was insufficient. As a constructive instruction, maybe drop one of the examples (R(del_t) / R[sigma_x]), but give some more background on the other? This problem of insufficient clarity cannot be explained by different backgrounds alone. In Sect. 3.2, the sentence "They are implemented in various computer algebra systems, 174 e.g., Singular [8] and Macaulay2 [16] are two well-known open source systems." appears twice (and also needs grammar checking). If the author could find a minimal non-trivial example (to me, this would be an example not including the previously considered linear differential operator examples) for which the author can show the entire computation in Sect. 3.2 or maybe show pseudo-code for some algorithms involving the Groebner basis, this would probably go a long way in the community. That being said, the paper's strengths are (to the best of this reviewer's knowledge) its originality and potential significance. The insight that Groebner bases can be used as a rich language to encode algebraic constraints and highlighting the connection to this vast background theory opens an entirely new approach in modelling capacities for Gaussian processes. I can easily imagine this work being the foundation for many physical/empirical-hybrid models in many engineering applications. I fully agree and applaud the rationale in lines 43-54! Crucially, the significance of this work will depend on whether this view will be adopted fast enough by the rest of the community which in turn depends on the clarity of the presentation. In conclusion: if I understood the paper correctly, I think the theory presented therein is highly original and significant, but in my opinion, the clarity should be improved significantly before acceptance, if this work should reach its full potential. However, if other reviewers have a different opinion on the level of necessary background material, I would even consider this work for oral presentation. Minor suggestions for improvements: - In line 75, the author writes that the "mean function is used as regression model" and this is how the author uses GPs throughout. However, in practice the (posterior) covariance is also considered as "measure of uncertainty". It would be insightful, if the author could find a way to visualize this for one or two of the examples the author considers, e.g., by drawing from the posterior process. - I am not familiar with the literature: all the considerations in this paper should also be applicable to kernel (ridge) regression, no? Maybe this could also be presented in the 'language of kernel interpolation/smoothing' as well? - I am uncertain about the author's reasoning on line 103. 
Does the author want to express that the mean is a sample from the GP? But the mean is not a sample from the GP with probability 1. Generally, there seems to be some inconsistency with the (algebraic) GP object and samples from said object. - The comment on line 158 "This did not lead to practical problems, yet." is very ominous. Would we even expect any problem? If not, I would argue you can drop it entirely. - I am not sure whether I understood Fig. 2 correctly. Am I correct that u(t) is either given by data or as one draw from the GP and then, x(t) is the corresponding resulting state function for this specified u? I'm assuming that Fig. 3 is done the other way around, right? --- Post-rebuttal update: Thank you for your rebuttal. I think that adding computer-algebra code sounds like a good idea. Maybe presenting the work more in the context of kernel ridge regression would eliminate the discussion about interpreting the uncertainty. Alternatively, if the author opts to present it as GP, maybe a video could be used to represent the uncertainty by sampling a random walk through the distribution. Finally, it might help to not use differential equations as expository material. I assume the author's rationale for using this was that reader might already a bit familiar with it and thus help its understanding. I agree, but for me it made it harder to understand the generality with respect to Groebner bases. My first intuition was that "this has been done". Maybe make they Weyl algebra and Figure 4 the basic piece? But I expect this suggestion to have high variance. | - I am not familiar with the literature: all the considerations in this paper should also be applicable to kernel (ridge) regression, no? Maybe this could also be presented in the 'language of kernel interpolation/smoothing' as well? |
NIPS_2018_430 | NIPS_2018 | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. - The authors only applied their method on peculiar types of machine learning applications that were already used for testing boolean classifier generation. It is unclear whether the method could lead to progress in the direction of cleaner machine learning methods for standard machine learning tasks (e.g. MNIST). Questions: - How where the time limits in the inner and outer problem chosen? Did larger timeouts lead to better solutions? - It would be helpful to have an algorithmic writeup of the solution of the pricing problem. - SVM gave often good results on the datasets. Did you use a standard SVM that produced a linear classifier or a Kernel method? If the former is true, this would mean that the machine learning tasks where rather easy and it would be necessary to see results on more complicated problems where no good linear separator exists. Conclusion: I very much like the paper and strongly recommend its publication. The authors propose a theoretically well grounded approach to supervised classifier learning. While the number of problems that one can attack with the method is not so large, the theoretical (problem formulation) and practical (Dantzig-Wolfe solver) contribution can possibly serve as a starting point for further progress in this area of machine learning. | - The authors approach is only applicable for problems that are small or medium scale. Truly large problems will overwhelm current LP-solvers. |
EtNebdSBpe | EMNLP_2023 | - The paper is hard to read and somewhat difficult to follow.
- The motivation is unclear. The authors argue that the LLP setup is relevant for (1) privacy and (2) weak supervision. (1) Privacy: the authors claim that the LLP paradigm is relevant for training on sensitive data as the labels for such datasets are not publicly available. However, the setting proposed in this paper does require gold (and publicly available) labels to formulate the ground truth proportion. If this proportion can be formulated without gold labels, it should be discussed. (2) Weak Supervision: in lines 136-137, the authors mention that the associated label proportions "...provides the weak supervision for training the model". However, weak supervision is a paradigm in which data is automatically labeled with noisy labels using some heuristics and labeling functions. It remains unclear to me in what way this setting is related to the proportion parameter authors use in their work.
- The authors claim it to be one of the preliminary works discussing the application of LLP to NLP tasks. However, I don't see anything NLP-specific in their approach.
- Not all theoretical groundings seem to be relevant to the main topic (e.g., some of the L_dppl irregularities). Additional clarification of their relevance is needed.
- Section 3.3 says the results are provided for binary classifiers only, and the multi-class setting remains for future work. However, one of the datasets used for experiments is multi-label.
- The experimental setting is unclear: does Table 1 contain the test results of the best model? If so, how was the best model selected (given that there is no validation set)? Also, if the proposed method is of special relevance to the sensitive data, why not select a sensitive dataset to demonstrate the method's performance on it? Or LLP data?
- The authors claim the results to be significant. However, no results of significance testing are provided. | - The authors claim it to be one of the preliminary works discussing the application of LLP to NLP tasks. However, I don't see anything NLP-specific in their approach. |
NIPS_2020_0 | NIPS_2020 | I mainly have the following concerns. 1) In general, this paper is incremental to GIN [1], which limits the contribution of this paper. While GIN is well motivated by WL test with solid theoretical background, this paper lacks deeper analysis and new motivation behind the algorithm design. I suggest the authors to give more insightful analysis and motivation. 2) I noticed that in Sec 5.3, a generator equipped with a standard R-GCN as discriminator tends to collapse after several (around 20), while the proposed module will not. The reason behind this fact can be essential to show the mechanism how the proposed method differs from previous one. However, this part is missing in this version of submission. I would like to see why the proposed module can prevent a generator from collapsing. 3) I understand that stochastic/random projection is with high probability to preserve the metric before mapping . My concern is that when stacking multiple layers of WLS units, the probability of the failure case of stochastic/random projection also increases (since projection is performed at each layer). This may greatly hinder the scenario of the proposed method from forming deeper GNN. In this case, authors should justify the stability of the proposed method. How stable is the proposed method? And what happens when stacking more layers? | 2) I noticed that in Sec 5.3, a generator equipped with a standard R-GCN as discriminator tends to collapse after several (around 20), while the proposed module will not. The reason behind this fact can be essential to show the mechanism how the proposed method differs from previous one. However, this part is missing in this version of submission. I would like to see why the proposed module can prevent a generator from collapsing. |
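Regarding point 3 of the preceding review (failure probability compounding across stacked projection layers), a union bound, which needs no independence assumption, already quantifies the effect: if each of L projection layers fails to preserve the metric with probability at most δ, then
\[
\Pr\bigl[\text{some layer fails}\bigr] \;\le\; \sum_{\ell=1}^{L} \Pr\bigl[\text{layer } \ell \text{ fails}\bigr] \;\le\; L\,\delta,
\]
so keeping the overall failure probability below δ requires a per-layer failure probability of δ/L, which for Johnson–Lindenstrauss-style random projections typically costs only an additive O(log L) term in the projection dimension at a fixed distortion level. Whether this degradation matters in practice for deeper stacks is exactly the stability question raised above.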
NIPS_2022_2786 | NIPS_2022 | 1. The main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well. Moreover, the authors should compare the rates achieved by their procedure to existing rates in the literature. 2. Experiments: the experimental results in the paper don’t provide a convincing argument for their algorithms. First, all of the experiments are done over synthetic data. Moreover, the authors only consider low-dimensional datasets where d<30 and therefore it is not clear if the same improvements hold for high-dimensional problems. Finally, it is not clear whether the authors used any hyper-parameter tuning for DP-GD (or DP-SGD); this could result in significantly better results for DP-GD. 3. Writing: I encourage the authors to improve the writing in this paper. For example, the introduction could use more work on setting up the problem, stating the main results and comparing to previous work, before moving on to present the algorithm (which is done too soon in the current version). More:
Typo (first sentence): “is a standard”
First paragraph in page 4 has m. What is m? Should that be n? | 1. The main (and only) theoretical result in the paper provides utility guarantees for the proposed algorithm only when the features and noise are Gaussian. This is a strong requirement on the data, especially given that previous algorithms don’t need this assumption as well. Moreover, the authors should compare the rates achieved by their procedure to existing rates in the literature. |
yuYMJQIhEU | ICLR_2024 | This paper mostly combines standard algorithms (random walk, Adam without momentum, SAM); although this is not a problem in itself, the theoretical analysis needs to be improved. Meanwhile, the experimental part lacks new insights beyond some expected results.
Major comments:
1. Theorem 3.7 looks like a direct combination of the theoretical results obtained by Adam without momentum and SAM. Furthermore, the proof in the Appendix does not consider the convergence guarantee that could be achieved by the random walk method; that is, the Markov chain is not considered. Note that the last equation on Page 13 is almost the same as the convergence result (Theorem 4.3, Triastcyn et al., 2022), except that it does not have the compression part. The proof also follows Triastcyn et al. (2022) exactly. The perturbed model is not used, which means that sharpness-aware minimization is not analyzed; this makes me question the soundness of Theorem 3.7.
2. Since SAM is integrated to prevent potential overfitting, the experiment should present this effect compared with its counterpart that does not have the perturbed model. The lack of this experiment comparison would question the necessity of incorporating SAM in the proposed Localized framework.
3. The simulation only shows the loss performance of the proposed algorithms and the benchmarks; however, in practice, we would be more interested in seeing the classification accuracy.
4. The proposed algorithm is compared with FedAvg; however, for FedAvg, not all agents are communicating all the time, which does not make sense in a setting where FedAvg does not need to consider communication. That means, I suppose, that if all agents in FedAvg communicated all the time, the performance of FedAvg might be much better than all the other methods, since there exists a coordinator, although the communication cost would be very high. The figures presented, however, show that Localized SAM is always better than FedAvg in the random sample setting in both performance and communication, which is not a fair comparison.
Minor comments:
1. In Page 2, first paragraph, Localized SAM is introduced first and then “sharpness-aware minimization (SAM (Foret et al., 2021))” is repeated again. It would be better to revise it.
2. Page 2, second paragraph in Related Work, the Walkman algorithm (Mao et al., 2020) is solved by ADMM, with two versions, one is to solve a local optimization problem, the other is to solve a gradient approximation. Therefore, it is not accurate to say that “However, these works are all based on the simple SGD for decentralized optimization.”
3. Section 3, first paragraph, in “It can often have faster convergence and better generalization than the SGD-based Algorithm 1, as will be demonstrated empirically in Section 4.1.” The “it” does not have a clear reference.
4. In Section 3.1, you introduced $\boldsymbol{u}_k$, which was not defined previously and did not show up after Algorithm 3.
5. Figure 6 seems to be reused from your previous LoL optimizer work. | 2. Page 2, second paragraph in Related Work, the Walkman algorithm (Mao et al., 2020) is solved by ADMM, with two versions, one is to solve a local optimization problem, the other is to solve a gradient approximation. Therefore, it is not accurate to say that “However, these works are all based on the simple SGD for decentralized optimization.” 3. Section 3, first paragraph, in “It can often have faster convergence and better generalization than the SGD-based Algorithm 1, as will be demonstrated empirically in Section 4.1.” The “it” does not have a clear reference. |
NIPS_2019_188 | NIPS_2019 | , listed below, that should not be difficult to fix in the final version of the paper. - in the comparison with the prior work, the authors do not present the latest results. Therefore, the comparison embellishes the contributions of this paper. Indeed, in table 1, in the row corresponding to the Langevin diffusion, the authors should present the result of "Analysis of Langevin Monte Carlo via convex optimization", which improves on [13,17]. In addition, for the Underdamped Langevin Diffusion, in the table,in the abstract and in the text, it would be more appropriate to compare with [15], which provides an improved bound (kappa^1.5 instead of kappa^2). - The authors do not keep track of numerical constants. I would very much appreciate to see results with precise values of constants, as it is done in many recent references cited by the authors. SPECIFIC REMARKS/TYPOS - Line 9: compare with [15] instead of [10] - Line 14: applies -> apply - Line 16: problems -> problem - Lines 32-33: this lines oversell the contribution (which is not needed). If one uses the methodology of evaluating a method adopted by the authors, that I find fully justified, the current best algorithm is independent of d. By the way, the sentence on lines 32-33 contradicts the content of table 1. - Lines 44-45: when mentioning ``the current fastest algorithm'', it might be helpful to cite [15] as well. - Lines 106-107 - same remark - Line 131: depend -> depends - Line 132: I guess Omega(d^1.5) should be replaced by Omega(d^2) - Line 136: I suggest to use another letter than f, which refers to a precise function (the potential) - Lines 170-171: [18,15] -> [18,13] - Lines 180-181: the meaning of "accurate" is not clear here. My suggestion is to remove any comment about accuracy here and to keep only the comment on unbiasedness. - Line 191: please emphasize that the distribution is Gaussian conditionnally to alpha. - Lemma 2: it should be clearly mentioned that these claims are true when u=1/L - Lemma 2: I suggest to replace v_n(0) and x_n(0) by v_n and x_n - Line 201: same remark - Section A: everything is conditional to alpha - Lemma 6: mention that u=1/L - Line 400: Schwarz is mispelled | - Line 191: please emphasize that the distribution is Gaussian conditionnally to alpha. |
ICLR_2022_3332 | ICLR_2022 | Weakness: 1. The writing needs a lot of improvement. Many of the concepts or notations are not explained. For example, what do “g_\alpha” and “vol(\alpha)” mean? What is an “encoding tree”(I believe it is not a common terminology)? Why can the encoding tree be used a tree kernel? Other than complexity, what is the theoretic advantage of using these encoding trees? 2. Structural optimization seems one of the main components and it has been emphasized several times. However, it seems the optimization algorithm is directly from some previous works. That is a little bit confusing and reduces the contribution. 3. From my understanding, the advantage over WL-kernel is mainly lower complexity, but compared to GIN or other GNNs, the complexity is higher. That is also not convincing enough. Of course, the performance is superior, so I do not see it as a major problem, but I would like to see more explanations of the motivation and advantages. 4. If we do not optimize the structural entropy but just use a random encoding tree, how does it perform? I think the authors need this ablation study to demonstrate the necessity of structural optimization. | 2. Structural optimization seems one of the main components and it has been emphasized several times. However, it seems the optimization algorithm is directly from some previous works. That is a little bit confusing and reduces the contribution. |
ICLR_2023_4699 | ICLR_2023 | 1. The guidance over the SIFT feature space is good. However, perceptual losses (such as a VGG feature loss) are also considered effective. The authors should clarify their choice; otherwise this contribution is weakened. 2. Assumption 3.1 says the loss of TKD is assumed to be less than that of IYOR. However, Eqn. 7 tells a different story. 3. Assumption 3.1 may not hold in real cases. One cannot increase the parameter count of the teacher network when applying a KD algorithm.
4. This paper could become more solid if IYOR were used in some modern i2i methods, e.g., StyleFlow, EGSDE, etc. 5. The student and refinement networks are trained simultaneously, which may improve the performance of the teacher network. Is the comparison fair? Please provide KID/FID metrics of your teacher network. | 5. The student and refinement networks are trained simultaneously, which may improve the performance of the teacher network. Is the comparison fair? Please provide KID/FID metrics of your teacher network.
ACL_2017_588_review | ACL_2017 | and the evaluation leaves some questions unanswered. - Strengths: The proposed task requires encoding external knowledge, and the associated dataset may serve as a good benchmark for evaluating hybrid NLU systems.
- Weaknesses: 1) All the models evaluated, except the best performing model (HIERENC), do not have access to contextual information beyond a sentence. This does not seem sufficient to predict a missing entity. It is unclear whether any attempts at coreference and anaphora resolution have been made. It would generally help to see how well humans perform at the same task.
2) The choice of predictors used in all models is unusual. It is unclear why similarity between context embedding and the definition of the entity is a good indicator of the goodness of the entity as a filler.
3) The description of HIERENC is unclear. From what I understand, each input (h_i) to the temporal network is the average of the representations of all instantiations of context filled by every possible entity in the vocabulary.
This does not seem to be a good idea since presumably only one of those instantiations is correct. This would most likely introduce a lot of noise.
4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities.
- Questions to the authors: 1) An important assumption being made is that d_e are good replacements for entity embeddings. Was this assumption tested?
2) Have you tried building a classifier that just takes h_i^e as inputs?
I have read the authors' responses. I still think the task+dataset could benefit from human evaluation. This task can potentially be a good benchmark for NLU systems, if we know how difficult the task is. The results presented in the paper are not indicative of this due to the reasons stated above. Hence, I am not changing my scores. | 4) The results are not very informative. Given that this is a rare entity prediction problem, it would help to look at type-level accuracies, and analyze how the accuracies of the proposed models vary with frequencies of entities. |