I’ve been waiting for these results: the Reproducibility Project (Cancer Biology) has been going back over several prominent papers in the field, seeing how well their results hold up. This follows a similar effort in experimental psychology, where the results were mixed. The hopes here were to reproduce 50 high-profile studies, but as this Nature News article details, those ambitions had to be scaled back. They’re doing 29 papers, and the first seven now have results (five of them are publishing in eLife today).
Those results are. . . mixed. Again. Two of the five seemed to reproduce fairly well, although they have their own problems: a paper from Science Translational Medicine on cimetidine as an antitumor agent worked, but the statistics were not as good, and one from Cell on BET bromodomain inhibition and cMyc, while it also worked, had substantial differences in the respective control groups. Another two yielded results that are basically uninterpretable compared to the original work – they might have worked, they might not, but something appears to have gone off along the way. One of these is a PNAS paper on SIRP-alpha as a cancer target, where the replication study didn’t find the reported level of tumor inhibition, but instead statistically non-significant growth with a few spontaneous remissions that confounded the statistics. The other is a Nature paper on a gene (PREX2) whose mutations seem to accelerate tumor growth, but in the reproduced study everything grew much too fast for differences to show up. Finally, though, one of the five (a Science paper on a peptide, iRGD, that aids in doxorubicin penetration of tumors) just seems not to have reproduced at all.
That last paper is from the lab of Erkki Ruoslahti at Sanford Burnham, and as that Nature News article reports, he’s nonplussed, to say the least. Ruoslahti says that his work has actually been reproduced in at least ten other labs around the world, and if he’s right about that, then that’s food for thought, too. I can find a number of papers that have used the iRGD peptide to enhance tumor drug delivery, so Ruoslahti may well have a point. In which case, quis custodiet ipsos custodes: why didn’t it reproduce for the reproducers?
When you get right down to it, though, none of these five papers did what someone outside the sciences might have hoped for – that is, reproduced pretty much as written. Working scientists (and especially working biologists) know that that’s a high expectation, which is why the first two papers are still in the “substantially reproduced” category. There are a lot of variables in this sort of work, and not all of them are even known to the authors themselves, by any means. The Reproducibility Project tells the labs redoing the papers that they have to follow things exactly as they were originally reported, but anyone who’s tried following complex papers of this kind will have had to mess around with the conditions along the way, not that that always works, either.
So what this is telling us so far is that (1) getting tumor biology papers to repeat exactly is very difficult, and (2) the original papers themselves are probably not providing as many experimental details as they could. Neither of these conclusions will be controversial for anyone in the field. What we don’t know yet is the fundamental nonreproducibility rate. There are papers that can’t be replicated because the cells were treated a little differently, or the buffer conditions had to be changed, or the antibodies used for detection were wonky, and so on and very much so on. Those will come right if you’re just willing to mess with them long enough. Then there are papers that can’t be reproduced because their fundamental results are invalid. No amount of tinkering will fix them. But as it’s currently being run, the Reproducibility Project is not going to tell us how many of those there are.
It’s already telling us that the literature in the field is probably inadequate in many respects and that there are a lot more factors at work than you might have guessed from reading the original papers. It’s good to have proof of that, but honestly, we knew it already.