| paper_id | venue | focused_review | point |
|---|---|---|---|
NIPS_2020_350 | NIPS_2020 | - The equations (3) and (4) are, however, very similar to [3] and [A, B] in the way that they force the minor-class examples to have larger decision values (i.e., \exp \eta_j) in training. The proposed softmax seems particularly similar to eq. (11) in [B]. The authors should have cited these papers and provided further ... | 2) Do the authors only apply the meta sampler in a decoupled way? That is, to update the linear classifier when the features are fixed? If so, please provide more discussion on this and when (which epoch) do the authors start applying the meta sampler? |
YLJs4mKJCF | ICLR_2024 | - Authors use their own defined vanilla metric, and lack related fairness-aware metrics like Equalized odds (EO) - Authors are encouraged to conduct more experiments on more datasets like COMPAS and Drug Consumption, please kindly follow this AAAI paper which authors have cited: Exacerbating Algorithmic Bias through Fa... | - Authors use their own defined vanilla metric, and lack related fairness-aware metrics like Equalized odds (EO) - Authors are encouraged to conduct more experiments on more datasets like COMPAS and Drug Consumption, please kindly follow this AAAI paper which authors have cited: Exacerbating Algorithmic Bias through Fa... |
NIPS_2018_260 | NIPS_2018 | 1. The parameterizations considered of the value functions at the end of the day belong to discrete time, due to the need to discretize the SDEs and sample the state-action-reward triples. Given this discrete implementation, and the fact that experimentally the authors run into the conventional difficulties of ... | 7. L107-114 seems speculative or overly opinionated. This should be stated as a remark, or an aside in a Discussion section, or removed. |
g3VOQpuqlF | EMNLP_2023 | * The result that randomly concatenating passages from an open-domain corpus gives better performance than natural long form text and semantically linked passages is counter-intuitive. I would like to understand if this conclusion is an artifact of the datasets for long form training or the tasks considered. I would li... | * It would have been nice to consider baselines such as Rope and Alibi relative positional embeddings to verify the performance improvement obtained by making the changes suggested in the paper. |
NIPS_2020_491 | NIPS_2020 | - The main weakness of the work relates to the computational complexity of 1) computing the local subgraphs (are shortest paths computed ahead of the training process?), 2) evaluating each node's label individually. Can authors comment on the impact on training/evaluation time? - Another important missing element from ... | - Another important missing element from the paper is the value of neighborhood size h, as well as an analysis of its influence over the model's performance. This is the key parameter of the proposed strategy and providing readers with intuitive knowledge of the value of h to use, and the robustness of the method with ... |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to representation multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T... | - What are the impacts on the model when multimodal data is imperfect, such as when certain modalities are missing? Since the model builds higher-order interactions, does missing data at the input level lead to compounding effects that further affect the polynomial tensors being constructed, or is the model able to lev... |
ACL_2017_33_review | ACL_2017 | Similar idea has also been used in (Teng et al., 2016). Though this work is more elegant in the framework design and mathematical representation, the experimental comparison with (Teng et al., 2016) is not as convincing as the comparisons with the rest methods. The authors only reported the re-implementation results on... | -General Discussion: The reviewer has the following questions/suggestions about this work, 1. Since the SST dataset has phrase-level annotations, it is better to show the statistics of the times that negation or intensity words actually take effect. For example, how many times the word "nothing" appears and how many ti... |
KmphHE92wU | ICLR_2025 | 1. The novelty of the method is somewhat limited, especially the direct application of existing invariant point cloud networks over the eigenvectors without any transfer challenge. 2. The paper doesn’t provide any detail about instance models of $\rho$, $\phi$, and $\psi$ (in Equations 6 and 7) implemented in the exper... | - Authors don’t verify the stability of the OGE-Aug on OOD benchmarks such as DrugOOD [1], where SPE [2] is validated on this dataset. |
SzWvRzyk6h | ICLR_2025 | * The phrasing like "the relationship between traditional style (LIWC-style) and sensorial style" may not be accurate. They are not totally independent, and LIWC-style includes categories that can capture aspects of sensory style. * Apart from applying SVD to the BERT embedding, have the authors considered freezing som... | * Apart from applying SVD to the BERT embedding, have the authors considered freezing some layers of the model while only training a few layers? Or other parameter-efficient methods such as LoRA? These methods are natural to think about and could provide a valuable basis for experimental comparison. |
ICLR_2021_512 | ICLR_2021 | - Important pieces of prior work are missing from the related work section. The paper seems to be strongly related to Tensor Field Networks (TFN) (Thomas et al. 2018), as both define Euclidean and permutation equivariant convolutions on point clouds / graphs. Furthermore, there are several other methods that operate on... | - Expand the related work section - Compare to the strong baselines that use the coordinates. |
WYsLU5TEEo | ICLR_2024 | - **Limited to Binary Tasks**: A major limitation of the paper is that it only addresses binary classification tasks. It would be interesting to expand its applicability to multiclass problems to demonstrate broader utility, as mentioned in the discussion section. - **Single Seed Experiments**: The experiments in the p... | - **Single Seed Experiments**: The experiments in the paper are limited to training on a single seed, making it difficult to assess the significance of performance differences and the true impact of the proposed cycle consistency loss on convergence. Multiple seed experiments would provide a more robust evaluation. |
UK7Hs7f0So | ICLR_2024 | 1. The current version of the paper solely presents the average value obtained from five trials without including information about the standard deviation. It is highly recommended to include error bars. 2. Why use the VMF distribution and the truncated normal distribution to characterize the angle and magnitude of the... | 2. Why use the VMF distribution and the truncated normal distribution to characterize the angle and magnitude of the target vector? The motivation behind this is unclear to me. |
NIPS_2022_836 | NIPS_2022 | 1. The application of this method seems to be in a very limited field which is a differentiable simulation of optical encoders. 2. The authors could have shown the result on 1~2 more datasets. 3. UNets have been there for a while. Are they indeed the best baseline method to compare the presented method against? 4. Ther... | 5. An entire multi-GPU setup is required for the optimizations in the proposed method, which makes it not very accessible for many potential users. |
ACL_2017_239_review | ACL_2017 | The overall result is not very useful for ML practitioners in this field, because it merely confirms what has been known or suspected, i.e. it depends on the task at hand, the labeled data set size, the type of the model, etc. So, the result in this paper is not very actionable. The reviewer noted that this comprehensive... | 5) Missing citation for the public skip-gram data set in L425. |
ACL_2017_699_review | ACL_2017 | 1. Some discussions are required on the convergence of the proposed joint learning process (for RNN and CopyRNN), so that readers can understand how the stable points in probabilistic metric space are obtained. Otherwise, it may be tough to repeat the results. 2. The evaluation process shows that the current system (w... | 5. As the current system captures the semantics through RNN based models, it would be better to compare against another system that also captures semantics. Even Ref-[2] can be a strong baseline to compare the performance of the current system. Suggestions to improve: |
NIPS_2018_810 | NIPS_2018 | of the approach, the proposed message passing scheme, relying on the hierarchical representation, does not seem very principled. It would be nice to know more precisely what has inspired the authors to make these design choices. Perhaps as a consequence, in qualitative results objects sometimes seem to rip apart ... | - it is not clear how the quantitative results are obtained: what data exactly is used for training, validating and testing ? |
ICLR_2022_488 | ICLR_2022 | - The authors claim that a volume-preserving mixing function is a natural restriction and is easily satisfied. I would like to see a stronger argument why this is true, as it seems easy to think of non-volume-preserving mixing functions. Such an argument should include why the triangle dataset and MNIST would be genera... | - It is unclear why the model does not fully succeed in identifying the true sources in the triangle dataset. Is one of the assumptions not satisfied? Are there learning difficulties? Further comments: |
NIPS_2021_1360 | NIPS_2021 | and questions: There is no introduction of the Laplacian matrix, yet it is used directly. A paper should be self-contained. The motivation is not strong. The authors stated that "... the transformer architecture ... outperformed many SOTA models ... motivates us". This sounds like "A tool is powerful, then I try the tool on... | 4 Why this SE framework can help to improve, how does it help? Similar to 2, please DO NOT just show me what you have done and achieved, but also show me why and how you manage to do these. I would consider increasing the rating based on the authors' response. Reference: [1] Luo, et al. "Neural architecture search with... |
NIPS_2021_815 | NIPS_2021 | - In my opinion, the paper is a bit hard to follow. Although this is expected when discussing more involved concepts, I think it would be beneficial for the exposition of the manuscript and in order to reach a larger audience, to try to make it more didactic. Some suggestions: - A visualization showing a counting of ho... | - A visualization showing a counting of homomorphisms vs subgraph isomorphism counting. |
NIPS_2019_82 | NIPS_2019 | 1. One major risk of methods that exploit relationships between action units is that the relationships can be very different across datasets (e.g. AU6 can occur both in an expression of pain and in happiness, and this co-occurrence will be very different in a positive salience dataset such as SEMAINE compared to someth... | 4. Why is the approach limited to two views, it feels like the system should be able to generalize to more views without too much difficulty? Minor comments: |
ICLR_2021_2196 | ICLR_2021 | weakness. Other comments • The proposed method, plastic gates, which performs best amongst the baselines used when combined with product of experts models, seems simple and effective but I am inclined to question how novel it is, since it just amounts to multi-step online gradient descent on the mixture weights. • The ... | • The metrics used for evaluating continual learning, loss after switch and recovery time after switch, which are one of the main selling points of the paper are suitable for the datasets provided, but would not be applicable in a setting where either the task boundaries are not known or there are no hard task boundari... |
ARR_2022_268_review | ARR_2022 | • It is not clear why the user decoder at time step t uses only the information up to time step t from the agent decoder, rather than the information from all the time steps? • The motivation of applying the attention divergence loss to force attention similarity is still not clear to me. What happens if att^a_u... | • It is not clear why the user decoder at time step t uses only the information up to time step t from the agent decoder, rather than the information from all the time steps? |
NIPS_2021_1852 | NIPS_2021 | W1: The design of extending SGC (from Equation 1) to EIGNN (from Equation 3) is somewhat implicit and ad hoc without clear justifications. The authors should explain this in more detail for better understanding by general audiences that are not very familiar with implicit models. W2: During the time complexity analysis, on... | 1) The discussion on the arbitrary hyperparameter γ is missing, including how to set it in practice for a given graph and an analysis of the sensitivity of this hyperparameter; otherwise it will be hard for researchers to follow. |
KadOFOsUpQ | ICLR_2025 | - I am not very convinced by the ablation method used in section 4.1, i.e., by replacing output vector by mean values. It seems a bit ad-hoc for me without further justification. Why use mean but not other statistics? How robust are the results, or is it specific only to the ablation method used here? - Given that indu... | - Given that induction heads and FV heads appear at different locations (layers) within the model, head "location" can be one confounding factor that contributes to the difference in ICL performance when ablating induction heads vs. FV heads. There should perhaps be a controlled baseline that ablates heads at different... |
ACL_2017_792_review | ACL_2017 | 1. Unfortunately, the results are rather inconsistent and one is not left entirely convinced that the proposed models are better than the alternatives, especially given the added complexity. Negative results are fine, but there is insufficient analysis to learn from them. Moreover, no results are reported on the word a... | 8. A section on synonym identification is missing under similarity measurement that would describe how the multiple-choice task is approached. |
ARR_2022_201_review | ARR_2022 | 1. The Methodology section is very hard to follow. The model architecture description is rather confusing and sometimes uses inconsistent notation. For example, Section 2.2 introduces $v^p_{t-1}$ in the description which does not appear in the equations. Some of the notation pertaining to the labels ($l_0$, $l_{t-1}$) ... | 1. The Methodology section is very hard to follow. The model architecture description is rather confusing and sometimes uses inconsistent notation. For example, Section 2.2 introduces $v^p_{t-1}$ in the description which does not appear in the equations. Some of the notation pertaining to the labels ($l_0$, $l_{t-1}$) ... |
aVqGqTyky7 | EMNLP_2023 | 1. Lack of explanation and analysis of the model's deep mechanism. Why the dynamic update of the confidence score works and why the model can outperform supervised models so significantly need further detailed explanation. Only description but no explanation makes it short of interpretability. 2. Inconsistent symbol us... | 3. An overview of the workflow and the model, which can make it easier to get the whole picture of the work, is needed. |
NIPS_2020_560 | NIPS_2020 | I had a few concerns/confusions about the algorithmic motivations. 1. To debias the sketch, it seems that one needs to know the statistical dimension d_lambda of the design matrix A. This can't be computed accurately without basically the same runtime as required to solve the ridge regression problem in the first place... | 1. To debias the sketch, it seems that one needs to know the statistical dimension d_lambda of the design matrix A. This can't be computed accurately without basically the same runtime as required to solve the ridge regression problem in the first place. Thus it seems there will be some bias, possibly defeating the pur... |
Kr7KpDm8MO | ICLR_2024 | Here are some of my major concerns: 1) I doubt comparing the dynamics of random walk (with zero mean gradients) with neural network training with true objective is meaningful. In particular, it is not clear how a random walk (drawing gradients from a normal 0 mean distribution) can trace the dynamic of neural network t... | 5) Similarly for Figure 3, please redefine the figure as the expected quantities are scalars but shown as a vector. |
S8VFVe6MWL | ICLR_2025 | Overall, the authors tend to trust subjective metrics (that they've done) over objective metrics and draw conclusions based on them, but the more I think about it, the more questions I have about the subjective evaluation process. Also for all three of the main contributions: not enough logical connections were explain... | 3. The ablations seem to deserve better experiment setup, as so many questions arise: |
ACL_2017_318_review | ACL_2017 | 1. Presentation and clarity: important details with respect to the proposed models are left out or poorly described (more details below). Otherwise, the paper generally reads fairly well; however, the manuscript would need to be improved if accepted. 2. The evaluation on the word analogy task seems a bit unfair given t... | 4. A reasonable argument is made that the proposed models are particularly useful for learning representations for low-frequency words (by mapping words to a smaller set of sememes that are shared by sets of words). Unfortunately, no empirical evidence is provided to test the hypothesis. It would have been interesting ... |
NIPS_2020_777 | NIPS_2020 | 1. Data preparation. As the authors pointed out, data serve a very important role in the whole work. However, the authors did not describe clearly how a) training images are rendered b) query points are sampled during training c) normalizations are applied for 2D and 3D data. Are they the same as PiFu? In implicit func... | 2. Study of global feature. Methods like PiFu purposely avoid using voxel-like feature because of their high computational and memory cost. What is the resolution of the 3D voxel, and does it introduce unnecessary overhead to the whole network? It would be more convincing to study the importance of the global feature i... |
NIPS_2016_93 | NIPS_2016 | - The claims made in the introduction are far from what has been achieved by the tasks and the models. The authors call this task language learning, but evaluate on question answering. I recommend the authors tone down the intro and not call this language learning. It is rather a feedback driven QA in the form of a dia... | - The error analysis on the movie dataset is missing. In order for other researchers to continue on this task, they need to know what are the cases that such model fails. |
ACL_2017_178_review | ACL_2017 | - The evaluation reported in this paper includes only intrinsic tasks, mainly on similarity/relatedness datasets. As the authors note, such evaluations are known to have very limited power in predicting the utility of embeddings in extrinsic tasks. Accordingly, it has become recently much more common to include at leas... | - Table 3: It’s hard to see trends here, for instance PM+CL behaves rather differently than either PM or CL alone. It would be interesting to see development set trends with respect to these hyper-parameters. |
NIPS_2020_1710 | NIPS_2020 | - there are very few experimental details about distillation - is this distillation only on the training set, or is there data augmentation? - it is difficult to understand e.g. figure 5, there are a lot of lines on top of each other - the main metrics reported are performance compared to remaining weights, but the aut... | - it is difficult to understand e.g. figure 5, there are a lot of lines on top of each other - the main metrics reported are performance compared to remaining weights, but the authors could report flops or model size, to make this much more concrete |
ICLR_2021_1213 | ICLR_2021 | weakness of the paper. Then, I present my additional comments which are related to specific expressions in the main text, proof steps in the appendix etc. I would appreciate it very much if authors could address my questions/concerns under “Additional Comments” as well, since they affect my assessment and understanding... | • In order to evaluate the practical performance of the modified adaptive methods in a comparative fashion, two sets of experiments were provided: training a logistic regression model on the MNIST dataset and a Resnet-18 model on the CIFAR-10 dataset. In these experiments, SGD, SGD with random shuffling, AdaGrad and AdaGrad-window ... |
SDV7Y6Dhx9 | ICLR_2025 | - Some details of the proposed method are missing, as noted in the questions section below. - This work introduces many hyperparameters, i.e. target supervision dropout ratio ($\alpha$), activation map update frequencies ($K$), and enhancement factor ($\beta$). A more in-depth analysis of the hyperparameter space and i... | - Some details of the proposed method are missing, as noted in the questions section below. |
NIPS_2017_369 | NIPS_2017 | * (Primary concern) Paper is too dense and is not very easy to follow; multiple reads were required to grasp the concepts and contribution. I would strongly recommend simplifying the description and explaining the architecture and computations better; Figure 7, Section 8 as well as lines 39-64 can be reduced to gain mo... | * (Primary concern) Paper is too dense and is not very easy to follow; multiple reads were required to grasp the concepts and contribution. I would strongly recommend simplifying the description and explaining the architecture and computations better; Figure 7, Section 8 as well as lines 39-64 can be reduced to gain mo... |
NIPS_2021_1852 | NIPS_2021 | W1: The design of extending SGC (from Equation 1) to EIGNN (from Equation 3) is somewhat implicit and ad hoc without clear justifications. The authors should explain this in more detail for better understanding by general audiences that are not very familiar with implicit models. W2: During the time complexity analysis, on... | 3) For the evaluation on over-smoothing, it would be interesting to see how the EIGNN performs with respect to over-smoothing under standard setting on real-world datasets, especially in comparison with variants focusing on dealing with over-smoothing, such as the setting used in GCNII. |
NIPS_2022_2813 | NIPS_2022 | weakness (insight and contribution), my initial rating is borderline. Strengths: + The problem of adapting CLIP under few-shot setting is recent. Compared to the baseline method CoOp, the improvement of the proposed method is significant. + The ablation studies and analysis in Section 4.4 is well organized and clearly ... | - In the approach method, there lacks a separate part or subsection to introduce the inference strategy, i.e., how to use the multiple prompts in the test stage. |
ICLR_2021_2330 | ICLR_2021 | Weakness - Method on Fourier domain supervision lacks more analysis and intuition. It's unclear how the size of the grid is defined to perform FFT; from my understanding, the size is critical as the local frequency will change with different grid sizes. Is it fixed throughout training? What is the effect of having diff... | - Figure 4 is confusing. It's not clear what the columns mean -- it is not explained in the text or caption. |
NIPS_2019_991 | NIPS_2019 | [Clarity] * What is the value of the c constant (MaxGapUCB algorithm) used in experiments? How was it determined? How does it impact the performance of MaxGapUCB? * The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ... | * The experiment results could be discussed more. For example, should we conclude from the Streetview experiment that MaxGapTop2UCB is better than the other ones? [Significance] * The real-world applications of this new problem setting are not clear. The authors mention applicability to sorting/ranking. It seems like t... |
ICLR_2023_3780 | ICLR_2023 | 1. The motivation is unclear. The authors consider that semantics used in both synthesizing visual features and learning embedding functions will introduce bias toward seen classes. However, some methods [1][2] using semantics seem to get better results. Please compare with them and give more explanations for the motiv... | 4. I wonder why the results are so low using only ML in the ablation experiments. The results are even lower than some simple early methods like f-CLSWGAN [4] and f-VAEGAN-D2 [5]. More explanations can be given. |
NIPS_2018_265 | NIPS_2018 | in the paper. I will list them as follows. Major comments: =============== - Since face recognition/verification methods are already performing well, the great motivation for face frontalization is for applications in the wild and difficult conditions such as surveillance images where pose, resolution, lighting conditi... | - Moreover, the lack of ablation analysis (in the main paper) makes it very difficult to pinpoint from which component the small performance gain is coming from. |
ICLR_2021_1906 | ICLR_2021 | & Questions: I think the analysis is a bit problematic. Th. 2 shows that when the number of classes is large (>8), the noise rate of similarity labels is less than class labels. And the authors use Th. 3 to prove that if the noise rate of transition matrix decreases the model will have a better generalization. However,... | 2 shows that when the number of classes is large (>8), the noise rate of similarity labels is less than class labels. And the authors use Th. |
ARR_2022_16_review | ARR_2022 | 1). Although the hypothesis is quite interesting, it is not well verified by the designed experiment. As pointed out in Section 3.1, models in conventional methods are trained on the original training set in addition to the generated adversarial examples. In contrast, the base model is trained on the adversarial set on... | 1). Although the hypothesis is quite interesting, it is not well verified by the designed experiment. As pointed out in Section 3.1, models in conventional methods are trained on the original training set in addition to the generated adversarial examples. In contrast, the base model is trained on the adversarial set on... |
NIPS_2020_810 | NIPS_2020 | - The CNN experiments are not fully convincing (see below). - Some related work is not properly addressed (see below). | - The CNN experiments are not fully convincing (see below). |
ACL_2017_150_review | ACL_2017 | I have some doubts about the interpretation of the results. In addition, I think that some of the claims regarding the capability of the method proposed to learn morphology are not properly backed by scientific evidence. - General Discussion: This paper explores a complex architecture for character-level neural machin... | - In Table 1, the results for model (3) (Chung et al. 2016) for Cs-En were not taken from the papers, since they are not reported. If the authors computed these results by themselves (as it seems) they should mention it. |
vexCLJO7vo | EMNLP_2023 | 1. This paper aims to evaluate the performance of current LLMs on different temporal factors and selects three types of factors, including scope, order, and counterfactual. What is the rationale behind selecting these three types of factors, and how do they relate to each other? 2. More emphasis should be placed on promp... | 2. More emphasis should be placed on prompt design. This paper introduces several prompting methods to address issues in MenatQA. Since different prompts may result in varying performance outcomes, it is essential to discuss how to design prompts effectively. |
KEH6Cqjdw2 | EMNLP_2023 | - How do we extend the approaches to other (countries' legal documents)? - Data collection and annotation are not clear - The Enforceable Annotation might have ethical issues. What will be the reward for the 10 law experts? why did they volunteer? Does it count toward their study (credit), or will they co-author the pa... | - You could compare your result with SoTA approaches, for example with HateXplain models. |
FVhmnvqnsI | ICLR_2024 | 1. It's not clear what's the purpose of baseline B. It looks like the results are only compared to baseline A and C. 2. It's not clear why the freezing is used in MLS selection. If adaptive is good, why not just use adaptive method to choose the subset? 3. Will the additional loss bring extra computational cost? | 2. It's not clear why the freezing is used in MLS selection. If adaptive is good, why not just use adaptive method to choose the subset? |
OGdl9d3BEC | EMNLP_2023 | 1. The authors highlight that they have not implemented the quantisation methods on GPU systems to demonstrate real speedups due to a lack of CUDA kernel implementation. 2. The paper also mentions that the search algorithm does not include arithmetic density due to a lack of hardware models. 3. Although the authors hav... | 3. Although the authors have mentioned the limitations in the paper, they should provide a more detailed plan on how they plan to address these drawbacks in their future work. |
NIPS_2022_765 | NIPS_2022 | While the authors show improved numbers on benchmark datasets, it would be nice to also show and discuss how the proposed knowledge-CLIP model is qualitatively improving over the baseline CLIP. For example, in Intro and Figure 1, the authors motivate this paper by arguing that the baseline CLIP only captures text-imag... | - is this issue solved in the proposed knowledge-CLIP model? Some existing work that combines text and KG (e.g. https://arxiv.org/abs/2104.06378) has done closely-related analyses such as adding negation or changing entities in text to see if the KG-augmented method can robustly handle them. It would be very interestin... |
ICLR_2023_2217 | ICLR_2023 | The main idea is to propose a new method to rectify the classical prototype network, similar to the previous work 'Prototype Rectification for Few-Shot Learning' i.e. BD-CSPN (Liu et al. (2020)). However, the authors do not provide sufficient analysis of the differences. It is confusing for the readers to understand th... | 2. In Eq. 3, it is confusing to use $p_m$ in the numerator but $p_c$ in the denominator. What is the reason? In Alg. 2, only the mean $\mu_f$ is used for the fusion prototype. Have the authors considered adding the variance for further improvement? By the way, it is better to use $\mu_g$ to replace $\mu_f$, which is consistent wi... |
NIPS_2019_465 | NIPS_2019 | - Demonstrating that an agent trained with a human model performs better than an agent assuming an optimal human is not necessarily a new idea and is quite well-studied in HRI and human-AI collaboration. While the work considers the idea from the perspective of techniques, such as self-play and population-based trainin... | - Koppula, Hema S., Ashesh Jain, and Ashutosh Saxena. "Anticipatory planning for human-robot teams." Experimental Robotics. Springer, Cham, 2016. |
BkR4QG4azn | ICLR_2025 | - **Computational cost**: While the paper mentions the additional cost didn't lead to "significant delays in computation", it is not clear why. I believe the paper deserves a more comprehensive discussion about the computational complexity of the proposal. Also, I wonder if the proposed approach becomes prohibitive in ... | - **Computational cost**: While the paper mentions the additional cost didn't lead to "significant delays in computation", it is not clear why. I believe the paper deserves a more comprehensive discussion about the computational complexity of the proposal. Also, I wonder if the proposed approach becomes prohibitive in ... |
NIPS_2018_612 | NIPS_2018 | weakness is not including baselines that address the overfitting in boosting with heuristics. Ordered boosting is non-trivial, and it would be good to know how far simpler (heuristic) fixes go towards mitigating the problem. Overall, I think this paper will spur new research. As I read it, I easily came up with variati... | * l.97: For clarity, consider explaining a bit more how novel values in the test set are handled. |
ICLR_2023_2237 | ICLR_2023 | 1. Similar methods have already been proposed for multi-task learning but have not been discussed in this paper [1].
1. When sampling on the convex hull parameterization, the authors choose to adopt the Dirichlet distribution since its support is the T-dimensional simplex. Does this distribution have other properties? Why us... | 1. Similar methods have already been proposed for multi-task learning but have not been discussed in this paper [1]. |
be0sdRYSlH | ICLR_2025 | - It is thought to be a rather peripheral study, but the approach is novel.
- It is expected that the amount of computation of FedMITR is higher than other methods. Have you compared this?
- The results of the IID case need to be shared. It is necessary to share the results of the experiments using higher values of Dir... | - It is expected that the amount of computation of FedMITR is higher than other methods. Have you compared this? |
ICLR_2023_341 | ICLR_2023 | weakness :
the proposed method may not be entirely novel. People have been adding symbolic reasoning to neural models for a while, and the finding has always been: "If we can successfully 'hack' the underlying DSL that represented the set of tasks, adding symbolic reasoning would perform well". For instance, these w... | 1) and 2) can be avoided by using a generic external knowledge base (as shown in figure 3). However, the writing is too confusing; I cannot be sure if that is the case or not. |
NIPS_2018_600 | NIPS_2018 | weakness of the non-local (NL) module [31] that the correlations across channels are less taken into account, and then formulate the compact generalized non-local (CGNL) module to remedy the issue through summarizing the previous methods of NL and bilinear pooling [14] in a unified manner. The CGNL is evaluated on thor... | + Good performance. Negatives:- Less discussion on the linear version of CGNL using dot product for f. |
NIPS_2019_772 | NIPS_2019 | of this approach (e.g., it does not take into account language compositionally). I appreciate that the authors used different methods to extract influential objects: Human attention (in line with previous works), text explanation (to rely on another modality), and question parsing (to remove the need of extra annotatio... | - How did you pick 0.6 for glove embedding similarity? Did you perform k-cross-validation? What is the potential impact - Have you tried other influential loss (Eq3)? For instance, replacing the min with a mean or NDCG? Remarks: |
NIPS_2019_651 | NIPS_2019 | (large relative error compared to AA on full dataset) are reported. - Clarity: The submission is well written and easy to follow, the concept of coresets is well motivated and explained. While some more implementation details could be provided (source code is intended to be provided with camera-ready version), a re-imp... | - Significance: The submission provides a method to perform (approximate) AA on large datasets by making use of coresets and therefore might be potentially useful for a variety of applications. Detailed remarks/questions: |
NIPS_2019_1089 | NIPS_2019 | - The paper can be seen as incremental improvements on previous work that has used simple tensor products to representation multimodal data. This paper largely follows previous setups but instead proposes to use higher-order tensor products. ****************************Quality**************************** Strengths: - T... | - The paper gives a good introduction to tensors for those who are unfamiliar with the literature. Weaknesses: |
ARR_2022_287_review | ARR_2022 | 1. The authors presented a fine-grained evaluation set in this paper. However, the anti-stereotype that appears in previous datasets is missing in the constructed dataset. In addition, details of annotations are missing in this paper. Since stereotype detection is quite challenging, it would be important to discuss how... | 3. Missing in-depth analysis on experimental results. For example, why the improvements of models are limited on offense detection dataset and are significant on coarse stereotype set? |
ICLR_2023_1587 | ICLR_2023 | The main issue with this work is that the evaluation setup is not realistic at all. For an experimental paper like this, verifying its applicability on real-world datasets is important. Yet, 2 datasets are synthetically generated and only 1 is of real birds. This birds dataset, too, is very simple, in that the feature ... | 2) the new method of training on the labeled data, plus incorporating input mask explanation annotations for a few (say, 60) examples. Use modern backbone baselines (say, Resnet50 or DenseNet121) for the feature extraction layer - 3 conv layers is definitely too small for anything non-synthetic. I have to say that even... |
GSBHKiw19c | ICLR_2024 | - The interpretation of dynamics model as an agent and introducing the concept of reward adds unnecessary complexity to the method and makes it a bit difficult to easily understand the method. Simply formulating the main idea with adversarial generative training and using the score D as a reward could make the paper be... | - The paper introduces multiple hyperparameters and did quite extensive hyperparameters search (e.g., temperature, penalty, and threshold, ..). Making sure that the baseline is fully tuned with the similar resource given to the proposed method could be important for a fair comparison. |
tauoKi9IWO | EMNLP_2023 | - Missing performance comparison with other approaches. Missing performance comparison on out-of-domain data. *Edit:* This concern seems to be partially addressed during the rebuttal phase and the authors provided some additional baseline results.
- L259 "Perplexity is the probability that the model generates the curre... | - L259 "Perplexity is the probability that the model generates the current sentence". This is not what perplexity is. Eq1 - This does not look like perplexity either, this looks like cross-entropy. |
NIPS_2019_431 | NIPS_2019 | Weakness: 1. A special case of the proposed model is the Gaussian mixture model. Can the authors discuss the proved convergence rates and sample complexity bounds with that established in GMM (Balakrishnan et al., 2017)? It is interesting to see if there is any accuracy loss by using a different proof technique. Sivara... | 3. In Proposition 6.1, the condition \eta \ge C_0 for some constant C_0 seems to be strong. Typically, the signal-to-noise ratio \eta is a small value. It would be great if the authors can further clarify this condition and compare it with that in Section 4 (correct model case). |
NIPS_2021_2338 | NIPS_2021 | Weakness: 1. Regarding the adaptive masking part, the authors' work is incremental, and there have been many papers on how to do feature augmentation, such as GraphCL[1], GCA[2]. The authors do not experiment with widely used datasets such as Cora, Citeseer, ArXiv, etc. And they did not compare with better baselines fo... | 2. In the graph classification task, the compared baseline is not sufficient, such as MVGRL[4], gpt-gnn[5] are missing. I hope the authors could add more baselines of graph contrastive learning and test them on some common datasets. |
ICLR_2021_2562 | ICLR_2021 | - The major concern lies in the evaluation of the proposed strategies. Here, the authors considers that their method purify the input image before passing it to the model and an adaptive attack against their edge map based defense strategies will likely results in structural damage to the edge map. However, it is cruci... | - The major concern lies in the evaluation of the proposed strategies. Here, the authors considers that their method purify the input image before passing it to the model and an adaptive attack against their edge map based defense strategies will likely results in structural damage to the edge map. However, it is cruci... |
NIPS_2016_117 | NIPS_2016 | weakness of this work is impact. The idea of "direct feedback alignment" follows fairly straightforwardly from the original FA alignment work. Its notable that it is useful in training very deep networks (e.g. 100 layers) but its not clear that this results in an advantage for function approximation (the error rate is ... | - Table 1, 2, 3 the legends should be longer and clarify whether the numbers are % errors, or % correct (MNIST and CIFAR respectively presumably). |
NIPS_2017_567 | NIPS_2017 | Weakness:
1. I find the first two sections of the paper hard to read. The author stacked a number of previous approaches but failed to explain each method clearly.
Here are some examples:
(1) In line 43, I do not understand why the stacked LSTM in Fig 2(a) is "trivial" to convert to the sequential LSTM Fig2(b). Where a... | 4. The experimental results do not contain standard deviations and therefore it is hard to judge the significance of the results. |
pUKps5dL4s | ICLR_2024 | 1. The momentum method is usually used for acceleration. However, the theoretical advantage, such as an improved convergence rate over PGD, has not been discussed. Indeed, we cannot see the benefit of the convergence analysis (Proposition 4.1) compared to the PGD method.
2. In light of the theoretical work on sampling ... | 2. In light of the theoretical work on sampling and particle-based optimization methods, the provided analysis seems somewhat weak. For instance, the existence and smoothness of the solution of SDE (2a)-(2d), and any guarantees of the discretization (in time and space), are not provided. |
ICLR_2021_1783 | ICLR_2021 | 1. The main contribution of this paper is introducing an adversarial learning process between the generator and the ranker. The novelty of this paper is a concern. 2. Quality of images generated by the proposed method is limited. While good continuous control is achieved, the realism of generated results shown in the paper an... | 2. Quality of images generated by the proposed method is limited. While good continuous control is achieved, the realism of generated results shown in the paper and supplemental material is limited. |
NIPS_2021_2191 | NIPS_2021 | of the paper: [Strengths]
The problem is relevant.
Good ablation study.
[Weaknesses] - The statement in the intro about bottom up methods is not necessarily true (Line 28). Bottom-up methods do have a receptive fields that can infer from all the information in the scene and can still predict invisible keypoints. - Seve... | - In Section 3.3, how is G built using the human skeleton? It is better to describe the size and elements of G. Also, add the dimensions of G,X, and W to better understand what DGCN is doing. |
ARR_2022_28_review | ARR_2022 | The main concerns with this paper is that it doesn't fully explain some choices in the model (see comments/questions section). Moreover, some parts of the paper are actually not fully clear. Finally, some details are missing, making the paper incomplete.
- Algorithm 1 is not really explained. For example, at each step ... | - Lines 559-560: This is not entirely true. In Cycle Consistency loss you can iterate between two phases of the reconstructions (A-B-A and B-A-B) with two separate standard backpropagation processes. |
ICLR_2021_2674 | ICLR_2021 | Though the training procedure is novel, a part of the algorithm is not well-justified to follow the physics and optics nature of this problem. A few key challenges in depth from defocus are missing, and the results lack a full analysis. See details below:
- the authors leverage multiple datasets, including building the... | - calling 'hyper-spectral' is confusing. Hyperspectral imaging is defined as the imaging technique that obtains the spectrum for each pixel in the image of a scene. |