Reviewing papers for ECAI 2020: FAQ

PART I: REVIEW PROCESS

1. I was assigned a paper which is outside my area of competence. Why so? And what should I do?
The assignment algorithm gives priority to bids over keyword matching, but it also tries to optimize the global assignment. If you did not bid, or entered positive bids for only a few papers, or entered bids on papers that many other people also bid on, then the system could not find enough papers for you based on your bids, and fell back on your keywords. Now, some keywords (one example among others being “AI and the web”) are quite broad and may lead the algorithm to assign you a paper that is not necessarily in your area of competence.

If this is the case, and you really think you cannot review the paper, please write to the program committee chair as soon as possible. He will reassign it to someone else. Obviously, the longer you wait, the more difficult the reassignment. (The same applies if you discover a conflict of interest, or if you cannot review a paper for any other reason.)

2. What should I do if the paper is anonymous?

Papers at ECAI are traditionally non-anonymous, and ECAI 2020 follows this tradition. If you do not find the names of the authors in the paper, that is fine: you can, in any case, find them in EasyChair.

3. What should I do if the paper is overlength?

If a paper exceeds the allowed length, it will be rejected. In unclear cases, please send an email to the program committee chair and we will decide what to do.

4. When are reviews due?

December 16, 2019, 23:59 (as usual, UTC-12). Please don’t be late, as author feedback will start immediately after that. If you anticipate a delay, please tell the program committee chair as soon as possible so that he can try to find another reviewer.

5. Can I delegate a review to a subreviewer?

It cannot be forbidden completely, but the program committee chair discourages this practice. We selected people for the PC based on their competence, reliability and seniority. We want to ensure that each paper is reviewed only by highly qualified people. In the past, several people complained that their paper got unprofessional reviews, and it turned out that those reviews had been written by first-year PhD students or master’s students. A student may be brilliant, and yet lack the experience to judge the quality of a paper beyond the validity of its technical aspects. In any case, if you delegate a review, you will still be responsible for it, in the sense that you have to endorse it and that you should participate in the discussion. (Exceptions are possible in some particular cases.)

To assign a paper to a subreviewer, see the next question. Once the subreviewer has submitted their review, see the answer to Question 7.

6. How can I assign a paper to a subreviewer?

Log in to EasyChair as a PC member of ECAI 2020. Go to the “Reviews/Assigned to me” menu, then to the detailed view of the paper assigned to you for which you would like to request a subreviewer. In the right-hand menu, click on “Request review”. Fill in the form with the subreviewer’s contact details (“Subreviewer’s first name”, “Subreviewer’s last name”, “Subreviewer’s email address”) and, before sending the request out, edit the message to be very clear about the internal deadline you give to the subreviewer. Please note that handling tight deadlines is very important: you remain responsible for getting the review done in time and for assuring its quality (see the previous question).

7. How can I see the review written by one of my subreviewers?

Log in to EasyChair as a PC member of ECAI 2020. Go to the “Reviews/Subreviewers” menu, then to the paper in question by clicking on its number. There you can either manage the request (e.g., send a reminder) by clicking on “sending email and quote this letter”, or see the review done for you by clicking on “Show reviews” in the right-hand menu. Please always check a review written by a subreviewer and, if needed, modify and improve it in order to assure high-quality reviewing standards.

8. I am not sure I will have all my reviews ready on time. Does it matter, and why?

There will be an author rebuttal period (if you are also an author, you will soon receive more details). Once the rebuttal period has started, injecting a review into the system would be very annoying for the authors. This means that if your review is submitted after the rebuttal has started, it will probably not be visible to the authors, they will not be able to react to it, and it will have less weight in the discussion (it will be considered an “additional review”). Moreover, we must make sure that each paper has enough reviews before the rebuttal starts, and we need at least one full day for that. This is why it is very important that you submit your reviews on time, that is, by December 16, 2019, 23:59 UTC-12, or possibly a few hours after that, but no more. If you anticipate a delay of more than a few hours:
(a) Tell the program committee chair beforehand that you anticipate a delay, and give the IDs of the papers for which there may be a delay.
(b) Try to submit at least a partial review of each paper, containing the important points to which you want the authors to react. You can complete your reviews later, during the discussion period, with minor things such as unclear sentences, minor errors, comments about form etc.
(c) If (b) is not feasible, try to submit all the reviews you can by the deadline. One late review is better than two late reviews.

9. What if a paper that used to be in my batch is no longer there?

Most likely the paper was withdrawn by its authors; or, in the case of a summary reject, removed by the program committee chair, but the latter should have happened a few weeks ago.

PART II: CRITERIA FOR EVALUATING PAPERS

ECAI is an archival conference. In “archival conference” there is both “archival” and “conference”. Being archival means that we want papers of very good quality, of the same quality level as if they were published in top journals. Being a conference also means that we want to give priority to significant, novel and exciting work rather than to small improvements over past work (which may be reported elsewhere).

10. How can I evaluate the relevance to ECAI?

ECAI is a conference that bears on all aspects and subareas of Artificial Intelligence. AI is broad and it is sometimes difficult to evaluate whether a particular paper is within the scope of ECAI or not. The answer may depend on the particular subarea, and in a few days there will be specific answers for some subareas. When in doubt, use your best guess and add an explanatory comment in the dedicated box. In any case: if you are assigned a paper that you think is not relevant enough to AI, please don’t stop reviewing it without consulting the program committee chair beforehand; he will redirect you to someone who can help you decide.

11. How can I evaluate the significance of a paper?

Behind significance, two things may be considered: importance of the contribution and expected impact. By expected impact, we mean: will the paper be influential within its small subarea, within AI in general, or even more generally? Or not influential at all? Looking back at papers published 20 years ago, one can see some that have been influential and others that have not. Try to imagine the future.

12. Should I recommend acceptance if and only if I do not find a reason for recommending rejection?

In the early days of Knowledge Representation and Reasoning, there was something that some of you know as “negation as failure” (NAF): if I cannot prove that something is true, then it is false. The NAF principle should *not* be applied to the question “should this paper be rejected?”.

In simpler words: when you evaluate a paper, rather than looking for reasons to reject and deciding to accept when you do not find any, do the opposite: find positive reasons to accept, and if you don’t find any, don’t be afraid to recommend rejection. Likewise, don’t be afraid to recommend acceptance even if you have some (weak) reason for rejection, such as small errors. In every conference there are false positives and false negatives: don’t be obsessed with avoiding false positives. See below for more details.

13. There are formal results in the paper, but no proofs. Should I recommend rejection?

This question comes back again and again at every conference. There is a little bit of schizophrenia here: there is a page limit that prevents authors from giving full proofs, there are time limits and review loads that would prevent reviewers from checking them all anyway, and yet we expect archival, top-quality papers that should not contain flawed results. As a rule of thumb, here are some guidelines that can be applied as a reviewer:
– if there is one (or more) central result(s) in the paper, we expect to see a proof, or at least a detailed proof sketch that allows us to understand how the proof goes and to see whether the result holds;
– if the paper consists of a series of results of equal importance, it is usually expected that some of them come with full proofs, or at least proof sketches that are detailed enough (see the item above);
– if formal results are not central to the paper (for instance, if the paper provides an interesting model, significant experimental results, etc.), not having proofs due to space limitations should not be a reason to reject.
If you have any doubts about some of the results, do not forget that there will be a feedback phase. You can ask the authors to provide a proof then. (This year we will make sure that authors have enough space to answer if you ask for a long answer such as a full proof.) See also FAQ #20.

14. The paper presents a new model and/or a new algorithm and/or new techniques, but does not provide examples that would help the reader understand.

Authors should do their best to help the reader understand quickly, and examples are of primary importance for that. If you think a specific paper should contain examples and does not, feel free to recommend rejection. You can ask the authors to resubmit it to IJCAI 2020 after adding examples.

15. The paper mostly consists of experimental results on benchmarks that show an improvement over current results. When is it enough for acceptance?

Again it is difficult to give a precise answer, as this depends a lot on the area. As a general rule, epsilon-improvement papers are not the best fit for a conference paper. If the results are only small improvements compared to the state of the art, you may distinguish between a paper that uses a radically different methodology from previous papers and a paper that uses a classical methodology. A small improvement in performance due to a small improvement of an algorithm is usually not enough (except if the problem is really, really important). It is a plus to use benchmarks that are widely recognized (but of course this is not compulsory and it depends very much on the area). In brief, a rule of thumb:
– new technique, small improvement: generally OK.
– small adaptation of a known technique, big improvement: generally OK.
– small adaptation of a known technique, small improvement: generally not OK.

16. The maths in the paper are really simple. Should I recommend rejection?

Certainly not. We have seen, in the past, an incredible number of reviews recommending rejection with arguments such as “the results are really simple”, “there is not enough mathematical meat in the paper”, and even “there are no original proof techniques”. Some important results have simple proofs. Some results have long and involved proofs but are of little interest. This should be very clear: this is not a math competition, but a conference on Artificial Intelligence.

17. The paper is not polished enough: there are small errors, typos, English mistakes, and other issues that prevent the paper from being “archival”. Is this a reason to recommend rejection?

If the paper is otherwise good, then no, it is not. We are precisely in the situation where a journal recommendation would have been “accept subject to minor revisions”. In such a case we can recommend that the paper be accepted, and the authors have to commit to proofreading the paper before the final version is published. The program committee chair may perform additional random checks of the final versions of such papers. Note that small errors and typos are not at the same level as English writing: anyone can spot typos and errors, but not everyone is equally close to being a native speaker. Please be understanding toward authors whose native languages are far from English. If you are yourself a native speaker of English or of a related language, imagine that you had to submit a paper in Quechua; how would you feel if your great paper were rejected because the writing was not good enough? This is not a literature prize committee, but a conference on Artificial Intelligence.

18. What about the acceptance rate?

A paper should be accepted only if it reaches the usual standards of top AI conferences, but some of you will expect something more precise. The usual acceptance rate of top AI conferences is between 20 and 25%, and there is no reason the rate should be significantly different this time. However, keep in mind that you have evaluated a small number of papers, and it is not necessary to explain to you what statistical significance means. Avoid writing things such as “I recommend acceptance because it is the best of my batch” or “I recommend rejection because it is only the third best of my seven-paper batch”. Sometimes all the papers you have reviewed will be rejected. Sometimes a majority of them will be accepted.

19. I found out that there is a version of the paper published on ArXiv, or in a thesis, or presented at a non-archival workshop.

This is allowed. This also applies to submissions for which there exists a short version published somewhere, even in an archival venue.

20. The paper is slightly overlength, or may be overlength depending on how strictly we interpret the constraints. What should I do?

Contact the program committee chair. We prefer to be tolerant and to interpret the rules to the benefit of the authors.

21. Can I ask the authors specific questions, which they should answer during the rebuttal phase?

Yes, of course. Authors will answer in plain text, which is not very convenient for displaying figures, tables or proofs. However, if you think it is important that the authors provide evidence in such a form, please say so in your review, and there will be a way for the authors to upload a PDF. Please don’t abuse this possibility; do it only if it is important.

22. Can I update my review?

Yes, you can. Take the deadlines for the rebuttal and the authors’ reply into account.

23. How should I review highlight papers?

Highlight papers should be evaluated against the same criteria as full paper submissions: Relevance, Significance, Novelty, Technical quality, and Quality of the presentation. However, please keep in mind that a highlight paper is just a 2-page abstract, so Relevance, Significance, Novelty and Technical quality do not refer to the abstract itself, but to the research reported therein, which probably appeared somewhere else. Ultimately, the question you have to answer in assessing a highlight paper is how much its presence at ECAI 2020 would interest the attendees and enhance the ECAI 2020 program.