ACL 2017 is now underway, and in keeping with our wish to make the organization process transparent, we owe you this post on how the final acceptance decisions were made.
We did not use score cutoffs to determine acceptances; instead, we used scores as a guide when arguing for paper acceptances.
We asked ACs to classify papers as “must accept”, “accept but OK to reject”, or “reject”. For the first two categories, we largely followed our Area Chairs’ input, but for the bulk of the middle-ground papers (20–30% of submissions), we read every single review, discussion board input, and direct-to-AC communication, and in some cases scanned or read the submissions in question.
At least one AC, and in some areas the whole AC team, followed the direct-to-AC communications. Because shepherding all of the submissions imposed a high overhead on ACs (over 20% of all submissions used this new facility), ACs were not required to mark submission reviews to indicate whether they had read the authors’ direct-to-AC communications.
Through this process, we selected and balanced the final conference program in consultation with Chris, Priscilla, and Anoop, who, as general and local co-chairs, helped us weigh the size of the conference program against the logistical considerations for oral and poster sessions: space, setup, and teardown.
We kept an eye on having a good mix and representation of our subdisciplines, as we wanted the conference to be diverse while maintaining quality. This process did mean that in certain cases both area chairs and reviewers were surprised that particular papers were rejected or accepted. As a result, different areas had different average acceptance scores: some areas turned out to be more stringent, and others less so.
Here are the data tables for the final, summarized overall picture1, and those for the component long and short paper formats:
Enjoy the conference!
– Regina and Min
1 These numbers reflect submission counts after filtering for formatting violations and some sanity cross-checks (and thus differ from the official submission numbers mentioned elsewhere in this blog and in the preface to the proceedings).