ICLR Submissions 3 years on

ICLR has hosted its reviewing process on OpenReview for the last several years, with public reviews, author comments and acceptance decisions. This data was downloaded using the public OpenReview API and cross-referenced with the Semantic Scholar corpus to find citation counts for the submitted papers. Hover over individual papers to see their titles, citations, affiliations and acceptance decisions. Click on a paper to view it on Semantic Scholar.
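As a rough illustration of that pipeline, the sketch below pulls blind submissions from the public OpenReview API and looks each title up in the Semantic Scholar Graph API. The invitation string, the single-page limit, the field names and the take-the-top-hit matching are assumptions for illustration, not the exact script behind the visualisation.

```python
import requests

# Pull ICLR 2018 blind submissions from the public OpenReview API
# (invitation string and single-page limit are illustrative assumptions).
notes = requests.get(
    "https://api.openreview.net/notes",
    params={"invitation": "ICLR.cc/2018/Conference/-/Blind_Submission", "limit": 1000},
).json()["notes"]

def match_on_semantic_scholar(title):
    """Search the Semantic Scholar Graph API for a title and return the top hit."""
    resp = requests.get(
        "https://api.semanticscholar.org/graph/v1/paper/search",
        params={"query": title, "fields": "title,citationCount,venue", "limit": 1},
    ).json()
    return resp["data"][0] if resp.get("data") else None

papers = []
for note in notes:
    hit = match_on_semantic_scholar(note["content"]["title"])
    if hit is not None:
        papers.append({
            "title": note["content"]["title"],
            "citations": hit["citationCount"],
            "venue": hit.get("venue", ""),
        })
```

In practice a pass like this also needs paging, rate limiting and fuzzier title matching than simply taking the top hit.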

Full disclosure: there were around 14 papers from ICLR 2018 and 6 papers from ICLR 2017 that I was unable to match. Citation counts were pulled from Semantic Scholar on 26/08/2020 and may differ from the citation counts computed by Google Scholar. Small amounts of random noise (< 0.5) are added to citation counts and average review scores to 'jitter' the visualisation, making it easier to distinguish individual papers.
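The jittering step is just additive uniform noise; a minimal sketch (symmetric noise is an assumption, one reasonable reading of "< 0.5"):

```python
import numpy as np

rng = np.random.default_rng(42)

def jitter(values, scale=0.5):
    """Add uniform noise of magnitude < scale so overlapping points in the
    scatter plot separate visually (symmetric noise is an assumption)."""
    values = np.asarray(values, dtype=float)
    return values + rng.uniform(-scale, scale, size=values.shape)

# e.g. for plotting:
# jittered_citations = jitter(citation_counts)
# jittered_scores = jitter(average_review_scores)
```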

The ICLR Best Rejected Paper Award - 2017

SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size
Forrest N. Iandola, Matthew W. Moskewicz, K. Ashraf, Song Han, W. Dally, K. Keutzer

Honourable Mentions

Adversarial Examples in the Physical World
A. Kurakin, Ian J. Goodfellow, S. Bengio

Prototypical Networks for Few-shot Learning
J. Snell, Kevin Swersky, R. Zemel

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena, Christopher Olah, Jonathon Shlens

The ICLR Best Rejected Paper Award - 2018

CyCADA: Cycle-Consistent Adversarial Domain Adaptation
Judy Hoffman, E. Tzeng, T. Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei A. Efros, Trevor Darrell

Honourable Mentions

Federated Learning: Strategies for Improving Communication Efficiency
Jakub Konecný, H. McMahan, F. Yu, Peter Richtárik, A. T. Suresh, D. Bacon

Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor
T. Haarnoja, Aurick Zhou, P. Abbeel, S. Levine

Synthesizing Robust Adversarial Examples
A. Athalye, L. Engstrom, Andrew Ilyas, Kevin Kwok

Subsequently Published Rejections

One possible complaint with looking only at citations against reviewer score is that, occasionally, reviews actually improve a paper: authors add experiments, tweak the writing and do extra analysis, which leads to acceptance at another conference. Because the papers here are linked to the Semantic Scholar corpus, we can see whether the rejected papers were later published at a subsequent venue (arXiv does not count).
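A sketch of that venue check against the Semantic Scholar records matched above; the field name and the rule that an empty or arXiv-only venue counts as "not published" are assumptions:

```python
def published_elsewhere(paper):
    """True if the matched Semantic Scholar record lists a publication venue
    other than arXiv (an empty venue is treated as preprint-only)."""
    venue = (paper.get("venue") or "").strip().lower()
    return venue not in {"", "arxiv", "arxiv.org"}

# 'papers' is the list of matched submissions built in the sketch above.
subsequently_published = [p for p in papers if published_elsewhere(p)]
```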

Citations by Affiliation

Looking at this rejection data another way, aggregated across institutions, is also quite interesting. Berkeley, Toronto and Microsoft are clearly working to improve their papers rejected from ICLR, with over half of their citations coming from papers that were subsequently published at another venue. As expected, Google dominates, with over 28,000 citations from 145 papers.

*Only contains institutions with > 1000 citations in aggregate.
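The per-institution view is a plain group-and-sum once each matched paper carries an affiliation label; a sketch with pandas, where the affiliation, citations and published_elsewhere columns are assumed to have been attached during the matching step:

```python
import pandas as pd

# 'papers' is the list of matched submissions from the earlier sketch, here
# assumed to also carry 'affiliation' and 'published_elsewhere' fields.
df = pd.DataFrame(papers)
df["citations_after_rejection"] = df["citations"].where(df["published_elsewhere"], 0)

by_affiliation = (
    df.groupby("affiliation")
      .agg(total_citations=("citations", "sum"),
           citations_after_rejection=("citations_after_rejection", "sum"),
           n_papers=("citations", "size"))
)

# Apply the > 1000 citation cut-off from the footnote, and compute the share
# of each institution's citations that came from papers published after their
# ICLR rejection.
by_affiliation = by_affiliation[by_affiliation["total_citations"] > 1000]
by_affiliation["fraction_after_rejection"] = (
    by_affiliation["citations_after_rejection"] / by_affiliation["total_citations"]
)
print(by_affiliation.sort_values("total_citations", ascending=False))
```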