Transforming Our Culture of Peer Review
A privilege of being ASA president is writing this column and speaking directly to ASA members. This month, I’d like to discuss with you an issue that has been an ongoing source of frustration for many of us—one that more than ever threatens the impact of our discipline in this new era of unprecedented discovery, Big Data, and data science.
This challenge is the long-standing culture of peer review at most scholarly journals in our discipline: a culture that is too slow and too critical, that has driven some members of our profession to seek alternative outlets for their work, and that we as a profession must address.
I am certainly not the first to sound this call, but I hope by devoting this column to it, I can inspire more of us to become actively engaged in changing it.
My perspective is shaped by my experience as an author, referee, associate editor, and editor. In the 1990s, I was an associate editor for JASA Applications and Case Studies and Biometrics. In 2000–2002, I was coordinating editor of Biometrics, and, since 2006, I have been that journal’s executive editor.
For readers who may not be old enough to know much about journals in the 1990s, the editorial process took place almost solely by mail. Although there was email and Internet (we weren’t entirely prehistoric), the technology required to support the electronic systems of today had not yet evolved. Authors were required to submit, by mail, five paper copies of their manuscripts, enough for the editor, associate editor, and several referees.
When I was an associate editor, fat packages of papers from editors would arrive in my mailbox. Once I identified referees, copies of the paper were sent to them—by mail, with some to far-flung destinations going by airmail! Referees would mail their reviews to me, and I would mail my report to the editor, along with the referees’ reports.
When I became Biometrics editor in 2000, much of journal business still took place by mail. But things changed rapidly. By the time I handled my last paper in late 2003, almost all submissions and reviews were by email, and Biometrics was being published online.
This history lesson highlights that, prior to the 2000s, the expectation was that the editorial process had to be slow, with papers and reports spending days or weeks in transit and in mailrooms. The transition to electronic processes revealed the real culprit—the time a paper languished on an editor’s, associate editor’s, or referee’s desk. The electronic age did little to alter this “delay culture.”
In 2000, then-editors from JASA, Biometrics, Biometrika, Annals of Statistics, the Royal Statistical Society journals, and others met at JSM in Indianapolis for a frank discussion. I was shocked when one editor declared that our papers are just more complex and substantial than those in other fields, and if it takes six months or a year to review a paper, so be it. That comment emphasized what a profound cultural challenge we faced.
And continue to face. Things have improved—somewhat. At Biometrics, for example, the median time to first review is about two months (so half of all papers still take longer …), yet we are considered swift by statistics standards. And securing good referees, already challenging given the escalating demands on academicians' time to attract external funding, would become even harder if we shortened review expectations further.
And we still compare badly to other fields. A familiar refrain is that journals in medicine, genetics, and so on demand—and receive—1–3 week turnarounds on their submissions. What is different about these cultures that inspires such prompt evaluation of fellow researchers’ work? Many of us have recoiled as collaborators wonder out loud, “Your stuff must not be very important if there is no urgency to disseminate it.”
It is essential for new developments in our field to be reported quickly so our science keeps pace with today’s breakneck scientific progress. And, like it or not, evaluation of researchers continues to rely on publication record, putting junior members of our field at a stressful disadvantage. We all know it doesn’t take months of daily toiling to review a paper—it takes, at most, days, and sometimes an afternoon.
Exacerbating this problem is our tendency to be overly critical of each other’s work. Of course, the point of peer review is criticism—but constructive criticism, directed toward making the work more useful and accessible. Many reviewers embrace this principle, and, as an editor, I have been impressed with their thoughtfulness and genuine desire to improve the work. But I have been disheartened by the proportion of reviews that strive to put authors through contortions, demanding copious further simulations and extensions and changes in focus that are beyond the authors’ intended scope. Often, such revisions would be, at best, incremental improvements whose main impact would be to slow the dissemination of the ideas.
Over the years, there have been bold attempts to effect change. For example, Scott Zeger and Peter Diggle founded Biostatistics in 2000 with the goal of swift, high-quality review of all submissions. Sadly, such efforts have had little effect on our culture.
Today, the fluid landscape of publication places us at a crossroads. Repositories such as arXiv facilitate widespread dissemination and ongoing feedback and revision. The open access movement and government mandates for access stand to reinvent the way research is reported.
Indeed, the ASA convened a panel in 2012 charged with reporting on models for the future of ASA publications, as discussed in a series of Amstat News articles earlier this year.
Some members of our profession have argued that we should seize the opportunity for radical change. I encourage you to read the commentaries by David Banks and Karl Rohe in the October 2012 issue of Amstat News on alternative modes of peer review, as well as Larry Wasserman’s blog post, “A Rant on Refereeing,” in which he proposes a “world without referees” in which all articles are disseminated on public repositories and are “crowd-peer reviewed.”
Nick Fisher is the force behind Stat, the International Statistical Institute’s online journal for rapid dissemination of statistics research, launched in 2012. Stat publishes short, focused articles. Papers undergo a single-pass review process, with acceptance or rejection as the only possible outcomes, that strives for a 30-day turnaround from submission to publication. Rather than writing unstructured reports, reviewers use a checklist approach that focuses broad attention on whether a paper represents a meaningful advance rather than on minutiae. Nick says that, while it is too early to evaluate this model’s success, Stat has published some excellent articles and authors are receiving swift decisions.
It is unlikely that our system of peer review will be transformed radically overnight. But, given that some form of peer review is critical to the advance of our field, we must change our practices so we can adapt quickly to the uncertain future of publication. This cannot be mandated from above; it can only happen from the bottom up, through individual efforts. The next time you review a paper—as referee, associate editor, or editor—please contemplate the role you can play. The future of statistical science demands our commitment to a new review culture.