Reviewer Survey Results

We surveyed our reviewers right after the review and discussion period, soliciting feedback about the review process. Here are their responses.


How did you like the review process?                                                 Count
  I thought the review process was the same as usual                                    10
  I thought the review process was unique but I will not recommend it for the future    11
  I thought the process was unique and I would recommend it for the future              49

How can we make the review process better?

I thought the process was unique and I would recommend it for the future

I thought it was a great improvement compared to previous years. The more transparency and openness in the review process, the better, and I believe that this RSS has shown that.
Provide a little more guidance on what makes a strong RSS submission. Comments such as "this paper is not up to RSS standards" were justly asked to be removed, but it is at the same time true that a high standard has been set, and in the absence of quantifiable metrics, one tends to make qualitative comparisons with the past.
Some of the questions seemed redundant and thus were difficult to answer -- for example, what do you agree with / what is positive / what did you learn all felt like they had too much overlap. Otherwise, I thought the process was good and encouraged more positive and higher quality results by making people think a bit harder before dismissing a paper outright.
Great job, Sidd! Your reminders to focus on positives and constructive feedback were sorely needed. Though we shouldn't accept more papers just because we say positive things, deeper conversations between reviewers are essential to making sure our reviews are of high quality and are actually correct. Reviewing of reviews is important given that we are all busy and sometimes, through our own mistakes, we can miss important details in plain sight.
The discussion phase needs more leadership from someone at the level of program chair or editor. A small conference call would be helpful, but only if revealing identities is not a problem.
I liked the review criteria emailed by Sidd (compassionate, helpful, etc.)
Great job in engaging reviewers with a personal (and pretty intense) touch. I think that's a useful motivating factor.

If paper allocation was completely automated, I am very impressed. The papers I received were just right for me.

It would be useful to include a full link to the RSS CMT webpage in every email sent to the reviewers (to avoid having to search back in email history for it).
Continue to push reviewers to emphasize the positive aspects of papers.
A different platform (CMT is a bit unfamiliar), but not much else to change. I really appreciate the excellent communication from the people at the higher levels of the process.
The login procedure was a bit complicated; otherwise, all very good!
Maybe for papers with high-variance reviews, we could give the authors a chance to rebut.
I think an important part of the process was the occasional "motivational" messages; without them, the discussion would indeed only decrease the grades of the papers. I would maybe add a field recommending presentation as a poster or a lecture, where for the former one could use milder criteria and not be so critical in the review...
I like that discussions are based on all the reviews; this is also important feedback on your understanding of the paper. I do believe that area chairs have the important role of triggering the discussions, and that they should propose a synthesis shared as widely as possible among the reviewers.
I know that you're aiming for a quick turnaround, but I think the discussion period was too fast. I was traveling a bit during that period, and it was difficult to participate and engage before we ran out of time.
One thing to improve: help clarify the target quality / competitiveness / expectations for RSS for reviewers who are not part of the core community
The attempt to start analysis and discussion prior to the posted review deadline was overly stressful on the reviewers, I believe. I had planned my time based on having my reviews submitted by that date, so it was difficult to respond to my meta-reviewer, who asked to look at my review earlier. I'd recommend moving the review deadline earlier if the reviews are needed earlier, since this would make it clearer how to manage time. I felt the discussion process after the review deadline was really helpful. Other reviewers and I had meaningful discussion, and the meta-reviewers were useful in guiding this interaction. I am fully on board with keeping this format.
Clarify the terminology (meta reviewer? program committee? area chair?); Clarify the deadlines (review deadline vs discussion deadline vs chair's discussion deadline).
I am in awe of this year's RSS review procedures. Already, the double-blind procedure set RSS apart as a conference I wanted to aim for. The standards set out this year clearly cut through common bully-like tactics (e.g., saying something isn't up to an unquantified, mystical standard). The problem with such tactics is that they are catching and can seem necessary in an antagonistic, territorial culture. Unfortunately, even this year, I still saw such comments -- even coming from an AC. I think that if successfully implemented, a process like this could make RSS reviews the productive, positive exercise they should be for both reviewers and reviewees. If successful, I would be even more excited about RSS as a potential venue for my work.
I especially appreciated Q1-Q4 on the review form. However, I also think there should be questions that request *succinct* criticism, e.g., "list up to three main points of disagreement" or "what assumptions do you take issue with". Often the critical items are buried within the review, and it may be helpful to obtain a high-level summary of the most important points first.

I thought the review process was the same as usual

(Rationale for previous response: *uniqueness* is orthogonal to the question "did you *like* the process"... my response reflects that there didn't seem to be any major change in the process. I'm not unhappy with it, though.)

I did think that anonymizing the PC member might've made it harder for him/her to generate discussion. But I agree with the messages that were sent out: things like determining the "RSS-quality" threshold and "fit" are not the role of reviewers anyway (never have been, actually). They should be left to editorial oversight, and so long as the editors take their job seriously (i.e., don't just take mean scores and threshold à la ICRA/IROS), treating the reviews as recommendations for the authors and as assessments that are mere suggestions for their own eyes, they can stop the conference from over-fitting. I am sceptical about top-down declarations that topics X, Y, and Z will be the focus of the conference this year. RSS contains papers that the RSS community writes and values in its reviews. I have no problem with the recursive definition: social processes at a macro scale don't have a simple one-way causality. The right thing is to ensure that there is an effective way to identify, value, and shepherd outlier topics. For example, could (or did?) a paper on ethics or morality make it through the review process? Maybe narrow-minded or inexperienced reviewers would kill it; but the editor's job is to decide how big a grain of salt is needed in reading the reviews, and he/she should always feel justified in overriding those opinions.

Two cases point to the fact that things are healthy, so let's not agonize too much: (a) Oliver pointed out in the open community forum that the community has been moving away from SLAM papers for years now. (b) The inclusion of HRI representation at the PC level and their selection of reviewers has shown that those topics can be brought into the fold (though that pendulum swung too far, in my personal opinion). These two countervailing cases show there are some natural passive dynamics and that some adjustment is possible too. This is OK.
I found the discussion section not especially useful. I get why it's there, but once reviewers have submitted their comments, they've made up their minds for the most part.
I have two main comments. The first regards the fact that there was no rebuttal phase this year. I believe the rebuttal phase was useful in supporting the review process and further improving the quality of the submissions; I would recommend re-including it. The second regards the discussion phase involving the AE and the reviewers. I would have liked the discussion phase to be somewhat more interactive (this may depend on the submission). To keep standards high for RSS, it may be good to provide extra incentives for the AE and the reviewers. For instance, each reviewer could be assigned a score and, depending on the score, he/she could get a discounted rate on the conference registration.
I didn't like the tone of the emails, i.e., constantly trying to make me feel guilty in order to get me to do something.
For authors: allow rebuttal. For reviewers: give access to final decisions.

I thought the review process was unique but I will not recommend it for the future

Currently there is too much burden on volunteer reviewers. Make the questions easy -- only one textbox. Discussions should be limited to contentious cases; in many cases, meta-reviewers just wanted to extend the discussion even though the result was crystal clear to everyone!
Excessive instructions by the organizers do not help. They just overload, and get inevitably ignored after a point.
Ensure that the reviewers assigned to a paper have a significant background in the topic. Two of the three reviewers on one of the papers I reviewed indicated minimal familiarity with the topic. This led to high-variance scores and a somewhat uninformative discussion.
I liked the focus put on trying to make sure reviews are respectful and constructive, and on being explicit about positives as well as negatives. However, I felt that the drive to make reviews 'positive' was not so helpful. I think that to enable the area chairs to make good decisions, the full rating spectrum should be used, which includes being honest about papers that are not there yet. Also, and I know this is probably not intended, this made me feel that reviewers' opinions were not taken seriously (e.g., encouraging reviewers to change their minds if they gave a poor rating).

In short, I would definitely keep 1) encouraging constructive criticism and 2) encouraging being explicit about positives (as well as negatives, which reviewers tend to be explicit about anyway). I don't think giving higher ratings in itself helps the community, and 1) and 2) can be achieved without changing the ratings themselves.
The review process for a conference should NOT require that amount of time or carry those expectations. I was literally asked to review the proofs for correctness, to state the contribution as compared to the state of the art, etc., on papers that I felt capable of reviewing, yet that were not directly related enough to my past work that I could have done those reviews in a matter of hours. Completing the reviews at that level of depth would have required a few days, which I am not willing to spare for a conference review -- or for any review, unless the paper is a perfect fit for my research. Next time my reviewer services are solicited, it would be good to let me know in advance which papers you would like my opinion on.
(My choice above reflects some improvements that I think could be made.)
I appreciated the thoughtful review form used for this conference. It set up reviews to be more balanced than they sometimes are. However, I think it may take several tries to get people to be balanced in their comments. After all, the number of reviews that each person does a year is large (at least that is true for me), there is not enough time to do reviewing (given all the commitments reviewers have), and the process is set up to pick and choose among papers. However, a focus on what the reviewer learned and on how to improve the paper is helpful. I think that tweaking the review form each year might keep people on their toes.

In general, reviewers seem to spend very little time in discussion once they do their reviews. Perhaps conference chairs need to re-balance what reviewers see as their job: it's not just to review the papers, it is to come to agreement with two other (anonymous) people about the value of the paper for the conference.
Stick with standard review questions (overall score, overall assessment, comments to Meta-reviewers, best paper, etc.). The number of questions and their similarity in the review form are a bit confusing.
I think the uniqueness of RSS also lies in the rebuttal phase. It was a good idea to activate the discussions between area chairs and reviewers in RSS 2017, but I would still like to keep the tradition of the rebuttal to include the authors in the discussions. It was the first time I experienced anonymity in the communication between ACs and reviewers; in the end, I suppose it worked quite well.
I found the discussion phase a little annoying. I had already provided my extensive review comments, and it seemed like the area chair was forcing us to artificially keep discussing papers for which my comments were very clear (i.e., negative).