Reviewing papers is incredibly valuable experience

02 Jul 2021 Gregory J. Stein

In a few weeks, I am hosting an internal paper-writing workshop in my research group. Modeled after a similar practice initiated by my PhD advisor, my students are going to write work-in-progress drafts based on their current research progress and share them with each other. While the focus of this workshop is, of course, geared mostly towards writing (and I hope it will help the students develop their research progress into a narrative suitable for publication), it is my experience that having the students serve as each other’s reviewers is equally valuable experience.

A fundamental part of earning a PhD is developing a critical eye for what constitutes an interesting idea, contribution, or question to ask, an ability often honed through trial and error and through interaction with one’s advisor. Students typically present their advisor with research progress that the advisor helps to course-correct, pointing out where the research could be improved and serving as a “reviewer” for the in-progress ideas. These interactions give the student ideas about how to improve the quality of their work and how to avoid pitfalls that might otherwise surface only during peer review, training them over time to produce solid research on their own. Yet meetings with a research advisor tend to be somewhat narrow in scope, often geared towards topics with which the student is already quite familiar, potentially limiting how much these interactions develop the student’s broader research intuition.

Looking at the in-development work of others pulls back the curtain and offers insight into how other researchers think. Much of the exposure that “younger” students get to work outside their own comes from reading published papers. But published work has already survived the gauntlet of peer review, presenting a polished, biased view of research from which it can be difficult to understand how one gets from a blank canvas to a finished product ready for public consumption. For this reason, I have found that serving as a reviewer at a number of high-profile conferences and journals in my research area has helped me continue to hone my own intuition for how to ask the right questions and pursue good research opportunities. Serving as a reviewer forces me to put into words what I think are the strengths and weaknesses of a paper that is not my own and to come up with recommendations for how any weaknesses could be overcome. This was excellent experience for me as a senior graduate student organizing my PhD thesis and writing my faculty applications, yet early-stage graduate students are rarely experienced enough to serve as reviewers themselves without significant guidance along the way.

Most students also informally discuss their research in conversations with their friends and colleagues, another invaluable part of the graduate school experience.

I cannot help but feel that reviews are a largely untapped pedagogical resource. Even though early-stage students may not review papers themselves, inviting them to study early versions of published papers and the reviews that accompany them can be incredibly helpful for teaching them about the research process and about what distinguishes successful papers. Many conferences in robotics and computer science have adopted the OpenReview platform, on which reviews and early paper versions are all made public after the review process concludes. With OpenReview, it’s possible to look not only at the accepted, final revisions of landmark papers, but also at papers that were ultimately not accepted and the reviewer comments on those submissions.
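For students who want to browse at scale, OpenReview also exposes a public API. Below is a minimal sketch using the openreview-py client (`pip install openreview-py`); the ICLR 2021 invitation IDs here are an assumption based on that venue’s naming scheme, and other conferences name their invitations differently, so treat this as illustrative rather than a definitive recipe.

```python
# Minimal sketch: pull a few ICLR 2021 submissions and their official reviews.
# Assumes the legacy OpenReview API v1 and ICLR 2021's invitation naming scheme;
# guest (unauthenticated) access is sufficient for public notes.
import openreview

client = openreview.Client(baseurl='https://api.openreview.net')

# Fetch a handful of submissions (drop `limit` to page through everything).
submissions = client.get_notes(
    invitation='ICLR.cc/2021/Conference/-/Blind_Submission', limit=5)

for paper in submissions:
    print(paper.content['title'])
    # Reviews are posted as replies in the paper's forum under their own invitation.
    reviews = client.get_notes(
        forum=paper.forum,
        invitation=f'ICLR.cc/2021/Conference/Paper{paper.number}/-/Official_Review')
    for review in reviews:
        # The exact content fields (e.g. 'rating', 'review') vary by venue.
        print('  rating:', review.content.get('rating'))
```

From there it is a short step to filter submissions by keywords in their titles or abstracts and read the corresponding reviews side by side.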

As my students are starting to put together submissions of their own, I suggested that it may be a worthwhile learning experience to search OpenReview for recent papers in their area and study their reviews. My hope is that looking at reviews, both for papers that were accepted and for those that were ultimately rejected, will help them anticipate the questions a keen reviewer might ask, a useful mental model for devising new experiments or for improving the narrative of a paper. How effective will this process be? I am uncertain, especially since review quality can be dubious, even at top conferences. Either way, I am excited to see where this little adventure leads. Feel free to follow up on Twitter.