Dear area chairs:
[TL;DR: We describe how the first part of the reviewer workflow will work (especially with respect to the Toronto paper matching system) and other fine points.]
Thank you so much for accepting the charge of being an area chair! Both of us feel that the area chairs are the key personnel in ensuring the scientific quality, rigour and insight of the technical program. At this point, if you log into http://www.softconf.com/acl2017/ with your START V2 login, you should see that you have track chair privileges (Min is coordinating the softconf configuration for the event, so if you see any problems with the configuration of your account, please notify him). Also, as we have written before, we would like to make all correspondence open to the public (where not confidential). We encourage all of you to comment on these policies, either here on the blog or on Facebook, to support our endeavour to enhance transparency.
Also, to clarify: area chairs are allowed to submit to the conference, and to their own area. It would be unfair otherwise, as many of you have students or junior colleagues whom you are mentoring and who may be submitting to the area. As at previous ACL events, any conflict of interest (COI) for a submission in which one or more area chairs are involved must be declared, so that another (non-conflicted) chair within the area can be assigned to manage the submission. If all of an area's chairs have a COI, the paper will be routed to a different area.
Now, onto the heart of this post: the reviewing process. The dates mentioned here are also on the AC calendar that we posted in the previous post: https://calendar.google.com/calendar/ical/arq0ig9b7dvhvpnv1n93bbluv0%40group.calendar.google.com/public/basic.ics
First, we are using the reviewer lists from NAACL 2016 (co-chaired by Ani Nenkova and Owen Rambow) and ACL 2016 (co-chaired by Noah Smith and Katrin Erk) as a starting point. Your job will be to make additions to this list of reviewers, especially from your particular communities and from communities not already covered by the other chairs in your area.
Once this initial list is complete by the proposed deadline of 5 January (yes, we know it falls in the middle of the holiday season), Regina and Min will send out the first batch of invitations centrally to all proposed reviewers [you do not have to do the invitations yourself; the invitation emails will be sent out in batch by us through START's interface]. As reviewers accept and decline, you will need to revise the list and suggest new reviewers, so that we can comfortably cover the workload for all expected incoming submissions. We hope to recruit enough reviewers that each one needs to review at most four papers; as many have already pointed out, this will be difficult given that the review period has been substantially reduced, to two weeks.
To give you an idea of how many submissions to expect in your area, we have pulled statistics from previous ACLs, given in the table at the end of this post. Note that the areas are defined slightly differently this year, and that because we have a joint deadline for long and short papers, we expect slightly fewer submissions overall (projected below as 90% of the past totals).
Reviewers who accept the invitation will be asked to use the Toronto paper matching system (described at http://papermatching.cs.toronto.edu/, but currently down due to significant hardware failures; updated: now working, at a new address: http://torontopapermatching.org/webapp/profileBrowser/login/) to build a registered reviewer profile of their expertise by uploading PDF versions of their past publications. The Toronto system supports both bulk upload (you provide a webpage from which the PDF files can be retrieved) and individual uploads of PDFs. Once created, a profile can be edited to exclude past publications that no longer reflect the reviewer's interests. This registered reviewer profile persists beyond the scope of ACL 2017 and can be adopted by other ACL events, or by other conferences (for example, it is already integrated into the Microsoft Conference Management System), to enhance paper-reviewer matching.
We encourage all of you to try to create your own profile, if you do not have one already. Familiarising yourself with the system may help you troubleshoot the process for reviewers in your area who are having difficulty creating profiles. It takes just a few minutes for people who already have some form of a webpage listing their publications.
We will still be calling for reviewer-initiated bids on papers to assist you in making the paper-reviewer assignment (9 to 12 February, the three days after the paper submission deadline). Assuming that it is available for use by then, the Toronto system will generate a normalised score (between 0 and 1) for each prospective paper-reviewer pairing. We are working with softconf (Rich Gerber) to integrate the output of the Toronto system for area chair use. Most likely, area chairs will see the calculated paper-reviewer matching scores, in descending order, as part of the information for each paper. Unfortunately, given the tight schedule for integrating the system, we do not plan to make the personalised paper matching scores available to reviewers during the bidding process.
Let us be clear: the Toronto system, when used, only supplies assignment recommendations, to provide another source of evidence to assist area chairs in assigning papers. The final assignment of a paper to a reviewer rests in your hands.
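For concreteness, here is a small hypothetical sketch of what "matching scores in descending order" could look like on the area chair side. The reviewer names, the scores, and the `rank_candidates` helper are all invented for illustration; they are not part of START or the Toronto system.

```python
def rank_candidates(scores, coi=frozenset(), top_k=None):
    """Sort (reviewer, score) pairs by descending matching score,
    dropping any reviewer with a declared conflict of interest."""
    ranked = sorted(
        ((reviewer, score) for reviewer, score in scores.items()
         if reviewer not in coi),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return ranked[:top_k] if top_k is not None else ranked

# Invented scores for one paper (Toronto-style, normalised to [0, 1]):
paper_scores = {"Reviewer A": 0.82, "Reviewer B": 0.41, "Reviewer C": 0.67}

# Excluding a conflicted reviewer leaves the rest ranked by score:
print(rank_candidates(paper_scores, coi={"Reviewer B"}))
# -> [('Reviewer A', 0.82), ('Reviewer C', 0.67)]
```

The scores are only one source of evidence; the ordering is a starting point for your manual assignment, not a decision.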
The review period is two weeks (13 to 27 February). In a conference of this magnitude, it is impossible to guarantee that all reviews will be in on time. You will have to work closely with late reviewers to ensure that they complete their reviews as soon as possible, so that the author response period can start on schedule. In the intervening week (6 to 13 March), you will need to identify papers that merit discussion, among the reviewers and among yourselves. We encourage you to come up with a preliminary classification of papers into sure reject, sure accept, and discussion needed (perhaps the bulk of papers) even before authors respond to reviews. The three-day author response period begins afterwards (13 to 15 March). Note that authors have a text box to communicate directly with you as area chairs, in case they feel that their work has been misinterpreted by reviewers. Please do address these comments where a meta-review is needed, with the sensitivity required to craft an appropriate response.
You will then have approximately one week to generate your final rankings and recommendations for accept/reject and for presentation format (poster or oral). Note that we are not adding meta-reviewing to the duties required for all papers, as we expect that dialogue among the ACs (and with us) will help resolve the difficult cases. We will then generate the final programme for the conference in consultation with you, to be disseminated on 30 March.
Please note that we are still recruiting area chairs: we are inviting a second round to replace colleagues who declined the initial invitation. Once the set of area chairs is finalised, we will publish the final statistics on the open call for area chairs, inclusive of the nominations, as we know some members of the community are interested in how the open call fared.
Appendix: Submission Statistics (with approximate projections)
(culled from ACL Q3 reports and the ACL Wiki)
Please note that the statistics for the upcoming submissions are approximate, and that we have not finished recruiting area chairs; the list simply reflects the state of recruiting at the moment. We have consolidated certain areas to reflect our opinion that broader areas lessen the difficulty of area selection; more about these changes in the footnotes below. As always, we welcome your comments, especially critical and constructive ones.
For the table below, two estimates deserve explanation. For the projected number of submissions, we took the maximum of the 2016 and 2014 submission counts and applied a 0.9 multiplier (assuming that the joint deadline cuts down the total number of submissions relative to previous conferences, where the deadlines were staggered). For the projected number of reviewers, we hope to recruit enough reviewers to give each a load of 3-4 papers; assuming three reviews per paper, we divided the total number of reviews by 3.5.
(Last updated: 4 Jan)
| ACL 2017 Areas | Projected # of reviewers | Current load per chair | Current # of area chairs | 2017 projected submissions | 2016 submissions (in terms of 2017 areas) | 2014 submissions (in terms of 2017 areas) |
|---|---|---|---|---|---|---|
| Cognitive Modelling and Psycholinguistics | 19 | 21.0 | 2 | 22 | 23 | 25 |
| Dialogue and Interactive Systems | 22 | 8.3 | 3 | 25 | 28 | 18 |
| IE, QA, Text Mining and Applications² | 229 | 29.7 | 9 | 267 | 272 | 297⁴ |
| Multidisciplinary and Others | 47 | 27.0 | 2 | 54 | 61 | – |
| Phonology, Morphology, and Word Segmentation | 25 | 14.5 | 2 | 29 | 33 | 28 |
| Resources and Evaluation | 50 | 29.0 | 2 | 58 | 65 | 59 |
| Sentiment Analysis and Opinion Mining | 81 | 31.3 | 3 | 94 | 105 | 58⁴ |
| Summarization and Generation³ | 60 | 23.0 | 3 | 69 | 77 | 50 |
| Tagging, Chunking, Syntax and Parsing | 64 | 18.5 | 4 | 74 | 81 | 83 |
| Vision, Robotics, and Grounding | 16 | 9.0 | 2 | 18 | 20 | – |
1 – New for this year.
2 – Combines the previous areas of IE, QA, IR, NLP Applications, and Document Analysis.
3 – Combines the previous Summarization and Generation areas.
4 – Approximate; the areas don't map 1-to-1.
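As a sanity check, the projections in the table can be reproduced with a short script. The sketch below assumes three reviews per paper (a standard ACL reviewing setup that we inferred from the table's numbers; it is not stated explicitly above), together with the 0.9 and 3.5 constants described earlier; the `project` helper is ours, not part of any conference tooling.

```python
import math

# Historical submission counts per area: (2016, 2014).
# None means no comparable figure exists for that year.
historical = {
    "Cognitive Modelling and Psycholinguistics": (23, 25),
    "Dialogue and Interactive Systems": (28, 18),
    "IE, QA, Text Mining and Applications": (272, 297),
    "Multidisciplinary and Others": (61, None),
    "Phonology, Morphology, and Word Segmentation": (33, 28),
    "Resources and Evaluation": (65, 59),
    "Sentiment Analysis and Opinion Mining": (105, 58),
    "Summarization and Generation": (77, 50),
    "Tagging, Chunking, Syntax and Parsing": (81, 83),
    "Vision, Robotics, and Grounding": (20, None),
}

REVIEWS_PER_PAPER = 3    # assumed: standard ACL reviewing
LOAD_PER_REVIEWER = 3.5  # target of 3-4 papers per reviewer

def project(counts):
    """Return (projected submissions, projected reviewers) for one area."""
    base = max(c for c in counts if c is not None)
    subs = math.floor(0.9 * base)  # joint-deadline discount on the max year
    reviewers = math.ceil(subs * REVIEWS_PER_PAPER / LOAD_PER_REVIEWER)
    return subs, reviewers

# For example, the IE, QA, Text Mining and Applications row:
print(project(historical["IE, QA, Text Mining and Applications"]))
# -> (267, 229)
```

Rounding submissions down and reviewers up errs on the side of a comfortable reviewer pool, which is the intent of the 3-4 papers-per-reviewer target.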