Reviewer Guidelines

This page provides guidelines for reviewers for AISTATS. It references previous AISTATS reviewer guidelines and the reviewer guidelines for ICML 2025.

Responsibilities of a Reviewer

  1. Report problematic papers to the Area Chair (e.g., unacceptable formatting issues, dual submissions, plagiarism, etc.); see this FAQ for a list of possible issues.
  2. Submit high-quality reviews.
  3. During the rebuttal period, discuss with the other reviewers and the AC to help arrive at a good decision.
  4. Recommend papers for oral talks, awards, etc.

Important Dates

Dates may be subject to change. All dates are Anywhere on Earth (AoE; UTC-12), end of day, unless specified otherwise. 

  • Abstract submission deadline: September 25, 2025
  • Bidding phase: September 29, 2025 - October 6, 2025
  • Paper submission deadline: October 2, 2025
  • Supplementary material submission deadline: October 9, 2025
  • Review period: October 14, 2025 - November 10, 2025 [Reviews Due]
  • Reviews released to authors: November 21 (noon), 2025
  • Author rebuttal period: November 21 (noon), 2025 - November 30, 2025
  • Author-reviewer discussion period: December 1, 2025 - December 8, 2025
  • Reviewer-AC discussion period: December 9, 2025 - December 15, 2025
  • AC meta reviews: December 15, 2025
  • Paper final decision notifications: January 23, 2026
  • Conference dates: May 2, 2026 - May 5, 2026
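
If you want to double-check what an end-of-day AoE deadline means in your own time zone, the following minimal Python sketch may help (it assumes Python 3.9+ so that zoneinfo is available; the paper submission deadline and the Europe/Berlin zone are used purely as examples):

    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    # Anywhere on Earth (AoE) is UTC-12; "end of day" is 23:59 in that zone.
    AOE = timezone(timedelta(hours=-12), "AoE")

    # Example: the paper submission deadline listed above (October 2, 2025).
    deadline = datetime(2025, 10, 2, 23, 59, tzinfo=AOE)

    # Convert to a local time zone (Europe/Berlin is only an illustration).
    print(deadline.astimezone(ZoneInfo("Europe/Berlin")).isoformat())
    # -> 2025-10-03T13:59:00+02:00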

Update September 16, 2025: Please note the changes in dates for the bidding phase, review period, and discussion periods.

Update October 14, 2025: Review period start date has changed (from October 11 to 14).

Update November 21, 2025: Author rebuttal period start and end dates have changed (pushed back by 1.5 days).

Update November 27, 2025: Author rebuttal deadline has been extended (by 1.5 days).  

Update December 1, 2025: Author-reviewer discussion period has now begun (start date moved by 1 day).  

Reviewer FAQ

  • I accepted the reviewer invitation, but I have not been assigned any paper to review. What happened? 
    • This year, due to a large reviewer pool, not all reviewers have been assigned a paper to review. You may still be contacted for a replacement/emergency review during the review period, so please be on the lookout. 
    • To be eligible for reviewing, ensure that your publications are imported correctly into your OpenReview profile. When you are logged in, your publications should appear on the right-hand side of your profile page. (Merely adding links to your Google Scholar or DBLP profile is not enough!)
  • When submitting my paper, I nominated myself (or my coauthor) as a reviewer, but I (or the coauthor) was not assigned any paper to review. Does this violate the reciprocal reviewer requirement? 
    • No. As long as you nominated at least one author, you are fine. At this point, the authors of any submission that did not include a nomination have been contacted.
  • I do not see the usual "Strengths and Weaknesses" section. Where do I put them? (See below for the review form.) 
    • This year, we are asking for a more fine-grained evaluation based on three key aspects: Soundness, Significance, and Novelty. For each aspect, you are asked to select an overall assessment from the listed options and then provide a justification (e.g., why you believe the paper's results are highly/somewhat/not significant). In addition, there are sections on non-conventional contributions (see below), clarity, and relation to prior work. We recommend mentioning the paper's strengths and weaknesses under whichever of these sections is most relevant.
  • What is the purpose of the “Non-conventional Contributions” section? (See below for the review form.) 
    • Not all papers fall under the clear classifications of a methods paper or a dataset paper. Papers can have merits and contributions beyond the ordinary (such as bridging disciplines, discussing topical issues, and so on). Any such merits that do not fit under the other review form fields can be included under “Non-conventional Contributions” with a short justification.

Update October 15, 2025: Added an FAQ section on the review form. 

Update October 21, 2025: Added more FAQs about the reviewer assignments.

How to Review?

The purpose of the review process is twofold: to identify papers which offer significant contributions to the fields of artificial intelligence and statistics, and to provide constructive feedback to authors to help improve their work.

When reviewing a paper, consider its potential long-term impact, particularly for out-of-the-box ideas, novel problems, and contributions that bridge fields. Such works may be easy to criticize but are often essential for progress, so avoid dismissive criticism.

Best Practices

  • Be thoughtful. Remember some authors may be submitting for the first time.
  • Be fair. Avoid letting personal feelings affect your review.
  • Be useful. A good review is useful to authors, reviewers, and ACs.
  • Be specific. Avoid vague comments.
  • Be flexible. Update your review when new information is presented.
  • Be timely. Meet deadlines and respond promptly during discussions.

If someone pressures you to bias your review, notify the Program Chairs via the official contact form. Report unethical or suspicious behavior to your Area Chair immediately.

Review Form

  1. Code of Conduct (Visible only to PCs/SACs/ACs)
    •  I understand and agree to abide by the AISTATS Code of Conduct and the Reviewer Guidelines:
      • In particular, you confirm that the review is written by you (and not, e.g., by an LLM).
      • You confirm that you do not have any known conflicts with the authors (e.g., you are not participating in a collusion circle).
  2. Confidentiality Agreement (Visible only to PCs/SACs/ACs)
    •  Reviewers must keep the paper and supplementary materials (including code submissions and LaTeX source), as well as the reviews, confidential. This includes deleting any submitted code at the end of the review cycle to comply with confidentiality requirements. This also means that you cannot share the manuscript with other people or LLM services.
  3. Summary and Contributions: Briefly summarize the paper and its contributions.
  4. Paper Keywords: Provide 3 keywords (comma separated) about the paper’s topic.
  5. Expertise Keywords: Provide 3 keywords (comma separated) about your own research expertise related to the submission (as a reviewer).
  6. Soundness: Are the theoretical arguments and/or empirical methods sound? Consider whether the assumptions, proofs, and derivations are correct, and whether the experimental design, evaluation, and analysis are rigorous and reliable.
    •  Correct / minor errors (e.g., typo or errors that do not affect the main results)
    •  Major errors (e.g., an incorrect theorem or derivation)
    •  Not applicable
    • Justification: Justification for your provided judgment of soundness. If not applicable, put in “N/A”. If you found errors, please specify the details of your concerns.
  7. Significance: Significance of the contributions (theoretical and/or empirical, if applicable).
    •  Significant contributions (e.g., strong theoretical insights or well-supported empirical improvements with appropriate baselines/statistics).
    •  Somewhat significant (e.g., theoretical contribution of limited novelty or empirical gains missing baselines/statistical rigor).
    •  Not significant (e.g., theoretical results are incremental, or empirical results are similar to baselines/missing key comparisons).
    •  Not applicable.
    • Justification: Justification of your provided judgment of significance. If not applicable, put in “N/A”. If your choice is “somewhat significant” or “not significant”, please specify the details of your concerns.
  8. Novelty: Novelty of the theoretical results and/or empirical methodology.
    •  Ground-breaking
    •  New results
    •  Incremental compared to existing results
    •  Known results, or a trivial extension of known results
    • Justification: Justification of your provided judgment of novelty. If your choice is “incremental” or “known results”, please specify the previous publication(s) that support your choice.
  9. Non-conventional Contributions: Does the submission contain non-conventional research contributions? Examples: novel ideas based on assumptions that are not widely accepted, new problems and/or tasks, and “bridging fields” contributions.
  10. Clarity: Is the paper clearly written? Consider whether it has (1) clearly stated its contributions, notation and results, (2) supported the claims made in the title, abstract, introduction with results, (3) explained the meaning of its theoretical assumptions, and/or (4) motivated the proposed methodology well with e.g., examples.
  11. Relation To Prior Work: Is it clearly discussed how this work differs from or relates to prior work in the literature? Is any related work missing (if yes, provide details below)?
    •  All related works are clearly discussed
    •  All related works are cited but the discussion is less clear
    •  Some minor related works are missing or described incorrectly
    •  Some important related works are missing or described incorrectly
  12. Additional Comments (optional): Add your additional comments, feedback, and suggestions for improvement, as well as any further questions for the authors. What would you do differently if you were given the chance to improve the paper?
  13. Reproducibility: Are there enough details to reproduce the main experimental results of this work?
    •  Sufficient amount of details available for reproducing the main results
    •  Some amount of details available
    •  Insufficient amount of details available
    •  Not applicable (the paper does not have experiments)
  14. Rating: Overall rating of the paper
    •  7: Strong Accept (Technically excellent paper with clear novelty and broad potential impact beyond a single sub-area of AI and stats. The work is very well executed with convincing evaluation and good attention to reproducibility and ethical considerations.)
    •  6: Accept (Technically solid paper, with high impact on at least one sub-area. The contribution is convincing and the evaluation is adequate, though there may be some limitations or open questions.)
    •  5: Borderline Accept (Technically solid paper where reasons to accept outweigh reasons to reject, e.g., limited evaluation.)
    •  4: Weak Accept / Weak Reject (Paper has some merits but also notable weaknesses. Reasons to accept and reject are roughly balanced, and the paper may require further evaluation or discussion. Please use sparingly.)
    •  3: Borderline Reject (Technically solid paper where reasons to reject, e.g., limited evaluation, outweigh reasons to accept, e.g., good evaluation.)
    •  2: Reject (For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and incompletely addressed ethical considerations.)
    •  1: Strong Reject (For instance, a paper with well-known results or unaddressed ethical considerations.)
  15. Confidence
    •  5: Absolutely certain (Use this sparingly; please ensure you are very familiar with the related work, you have carefully checked and understood the proofs and/or experimental details if applicable)
    •  4: Confident, but not absolutely certain. (It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.)
    •  3: Fairly confident (It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.)
    •  2: Somewhat confident (You are willing to defend your assessment, but it is quite likely that you did not understand central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.)
    •  1: Educated guess (The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.)
  16. Ethical Concerns (Visible only to PCs/SACs/ACs): Does this submission raise potential ethical concerns? These include methods, applications, or data that create or reinforce unfair biases and/or that have a primary purpose of harm or injury.
    •  No concern
    •  Minor concerns
    •  Major concerns
    • Justification: Justification of your provided judgment of ethical concerns. If not applicable, put in “N/A”.
  17. Confidential Comments (optional; visible only to PCs/SACs/ACs): Confidential comments for the Area Chair.

Update October 15, 2025: Added the review form to reviewer guidelines.

Author Rebuttal and Discussion Period

Authors will be given the opportunity to respond to your reviews before decisions are made. This is to enable them to address misunderstandings, point out parts of their submissions that were overlooked, or disagree with the reviewers’ assessments.

In previous years, some authors felt that their responses were ignored. As a reviewer, you are expected to read and (if appropriate) respond to each author’s response. Keep an open mind during the discussion period. (Have you overlooked something?) It is not fair to ignore any author response, even for submissions that you think should be rejected. Even when an author’s response did not change your assessment of a submission, you must convey to the authors that you have carefully considered their comments. 

Ethical Conduct for Peer Review (Use of Generative AI)

AISTATS reviewers are expected to follow standard ethical conduct for the peer review process. In particular, AISTATS prohibits:

  • use of privileged information (e.g., information and discussions about submissions) for any purpose other than reviewing;
  • use of Generative AI tools in reviewing (as described below);
  • all forms of collusion, whether explicit or tacit (e.g., an arrangement between authors and reviewers, ACs, or SACs to obtain favorable reviews).

It is strictly prohibited to use Generative AI tools, such as large language models (LLMs), for reviewing AISTATS submissions. In particular, reviewers cannot use Generative AI tools to write their reviews, and reviewers cannot input any content from any submission or review into a Generative AI tool.

If you believe someone may be engaging in unethical conduct, please notify AISTATS via the official contact form (select “Code of Conduct, Ethics Violations, …” as the email topic).

All suspected unethical conduct will be investigated by the AISTATS organizing committee. Individuals found violating the rules may face sanctions, have their own submissions rejected, etc.

Other Remarks

  1. Minor formatting issues: Do not desk-reject; flag them instead.
  2. Supplement: Supplementary material may be appended; conflicts between appended and separately uploaded material can be resolved at the reviewer’s discretion.
  3. Links in submission: Exercise caution when accessing external links.
  4. Code in GitHub: Reviewing submitted code is optional; if accessing, ensure anonymity and be aware of possible security risks.