
Books

A journalist shines a harsh spotlight on biomedicine’s reproducibility crisis

Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions

Richard Harris
Basic Books
288 pp.

In Rigor Mortis: How Sloppy Science Creates Worthless Cures, Crushes Hopes, and Wastes Billions, a sensationalistic title belies a carefully crafted book about data reproducibility and scientific rigor in biomedical research. Although the book breaks no new ground, at a time when the so-called “irreproducibility crisis” has stoked passionate debate among scientists, bewildered the public, and even launched new disciplines (e.g., metaresearch), an accessible overview of the problem is most welcome.

In a folksy style reminiscent of his reporting for National Public Radio, Richard Harris begins most chapters with a human-interest vignette, then ties the anecdote to a pervasive problem in science. In a chapter entitled “A Bucket of Cold Water,” for example, the reader is introduced to Tom Murphy, a 56-year-old father of three who participated in a hopeful ALS clinical trial that ultimately failed. Scores of human clinical trials conducted with promising ALS candidate drugs have ended unsuccessfully, Harris goes on to explain, invariably because of flawed preclinical study designs and an overreliance on poor mouse models for the disease.

Harris shows both sides of the reproducibility debate, noting that many eminent members of the research establishment would like to see this new practice of airing the scientific community’s dirty laundry quietly disappear. He describes how, for example, in the aftermath of their 2012 paper demonstrating that only 6 of 53 landmark studies in cancer biology could be reproduced (1), Glenn Begley and Lee Ellis were immediately attacked by some in the biomedical research aristocracy for their “naïveté,” their “lack of competence,” and their “disservice” to the scientific community.


Tight research budgets and flawed preclinical studies have compromised efforts to cure ALS, writes Harris.

Those of us who have succeeded as academic researchers have done so in part by embracing incentives that are embedded in our academic culture—pitching ambitious proposals to funding agencies and publishing as often as we can in journals with high impact factors, hoping to put forward a winning package to hiring or tenure committees.

Although “publish or perish” was the old dogma, “show me the money” is the reality facing today’s academic scientists. “What we have is a Darwinian winnowing,” Henry Bourne, a professor emeritus in the Department of Cellular and Molecular Biology at the University of California San Francisco, tells Harris. “We take them [researchers] if NIH gives them a grant. And we don’t if they don’t.”

This so-called meritocracy, and its inherent hypercompetitiveness, has likely fueled the irreproducibility fire more than any other factor, especially in today’s chronic state of insufficient science funding. In a chapter cleverly titled “A Broken Culture,” Harris describes a cutthroat environment that tempts many researchers to cherry-pick their data for publication or, worse still, drives some to fabricate the perfect result.

Although Nobel laureates and National Academy members have weighed in soberly with well-intentioned commentaries and suggestions for new approaches, the status quo for success in research has not fundamentally changed. Ultimately, Harris argues, for impactful reproducibility initiatives to be voluntarily embraced by the research community, the perverse incentives driving current definitions of academic success must be reassessed.

Gone are the good old free-wheeling days of the gene jockeys, eyeballing the relative intensities of bands on a gel or dots on a plate (guilty as charged). More data generated means more evaluation and quantitation. As biomedical research goes big, lessons learned from other disciplines point to the need for a more deliberate effort when it comes to designing, executing, and analyzing experiments. One approach, commonplace in virtually every industry and discipline except biology, is the broad application of standards and best practices. Validated cell lines and antibodies, two examples that Harris describes in detail in a chapter entitled “Trusting the Untrustworthy,” would have an immediate positive impact on reproducibility from experiment to experiment and from lab to lab.

Harris devotes the book’s final chapters to other important responses to irreproducibility, such as requiring open access to data and methods, as well as offering new training modalities for students that emphasize study design and statistics. However, he insightfully observes that many newly introduced initiatives and guidelines, notably those from major publishers and funders, are “frequently ignored” by the research community and will continue to be until meaningful incentives are in place.

Rigor Mortis effectively illustrates what can happen when a convergence of social, cultural, and scientific forces, as well as basic human motivation, conspires to create a real crisis of confidence in the research process. The book’s ominous title and subtitle notwithstanding, Harris shines a glimmer of light on a community beginning to awaken to its predicament, revealing how many of the same thoughtful, creative people who were attracted to a career in science may also be instrumental in fixing it.


  1. C. G. Begley, L. M. Ellis, Nature 483, 531 (2012).

About the author

The reviewer is at the Global Biological Standards Institute, Washington, DC 20036, USA.