Sloppy Science

Nature has a comment on the quality of recent publications in clinical oncology. And it’s not a kind one:

Glenn Begley and Lee Ellis analyse the low number of cancer-research studies that have been converted into clinical success, and conclude that a major factor is the overall poor quality of published preclinical data. A warning sign, they say, should be the “shocking” number of research papers in the field for which the main findings could not be reproduced. To be clear, this is not fraud — and there can be legitimate technical reasons why basic research findings do not stand up in clinical work. But the overall impression the article leaves is of insufficient thoroughness in the way that too many researchers present their data.
The finding resonates with a growing sense of unease among specialist editors on this journal, and not just in the field of oncology. Across the life sciences, handling corrections that have arisen from avoidable errors in manuscripts has become an uncomfortable part of the publishing process.

I think that this problem has been with us for quite a while, and that there are a few factors making it more noticeable: more journals to publish in, for one thing, and increased publication pressure, for another. And the online availability of papers makes it easier to compare publications and to call them up quickly; things don’t sit on the shelf in quite the way that they used to. But there’s no doubt that a lot of putatively interesting results in the literature are not real. The Nature article referred to in that commentary has some more data:

Over the past decade, before pursuing a particular line of research, scientists. . .in the haematology and oncology department at the biotechnology firm Amgen in Thousand Oaks, California, tried to confirm published findings related to that work. Fifty-three papers were deemed ‘landmark’ studies. . . It was acknowledged from the outset that some of the data might not hold up, because papers were deliberately selected that described something completely new, such as fresh approaches to targeting cancers or alternative clinical uses for existing therapeutics. Nevertheless, scientific findings were confirmed in only 6 (11%) cases. Even knowing the limitations of preclinical research, this was a shocking result.
Of course, the validation attempts may have failed because of technical differences or difficulties, despite efforts to ensure that this was not the case. Additional models were also used in the validation, because to drive a drug-development programme it is essential that findings are sufficiently robust and applicable beyond the one narrow experimental model that may have been enough for publication. To address these concerns, when findings could not be reproduced, an attempt was made to contact the original authors, discuss the discrepant findings, exchange reagents and repeat experiments under the authors’ direction, occasionally even in the laboratory of the original investigator. These investigators were all competent, well-meaning scientists who truly wanted to make advances in cancer research.

So what leads to these things not working out? Often, it’s trying to run with a hypothesis, and taking things faster than they can be taken:

In studies for which findings could be reproduced, authors had paid close attention to controls, reagents, investigator bias and describing the complete data set. For results that could not be reproduced, however, data were not routinely analysed by investigators blinded to the experimental versus control groups. Investigators frequently presented the results of one experiment, such as a single Western-blot analysis. They sometimes said they presented specific experiments that supported their underlying hypothesis, but that were not reflective of the entire data set. . .
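That last pattern, a single favorable run standing in for the whole data set, is easy to demonstrate with a quick simulation. The sketch below is purely illustrative (the function names and effect sizes are invented, not taken from the paper): even when the true effect is zero, reporting only the most favorable of six replicates manufactures an apparent effect out of nothing but noise.

```python
import random
import statistics

def run_experiment(rng, true_effect=0.0, noise=1.0):
    """One simulated measurement: the true effect plus Gaussian noise."""
    return true_effect + rng.gauss(0, noise)

def best_of(rng, n=6, **kwargs):
    """Report only the most favorable of n replicates -- the 'best story'."""
    return max(run_experiment(rng, **kwargs) for _ in range(n))

rng = random.Random(0)
# The true effect is zero here, so any apparent signal is pure noise.
all_runs   = [run_experiment(rng) for _ in range(10_000)]
best_story = [best_of(rng) for _ in range(10_000)]

print(f"mean when every run is reported:     {statistics.mean(all_runs):+.3f}")
print(f"mean when only the best is reported: {statistics.mean(best_story):+.3f}")
```

With everything reported, the average hovers around zero; keeping only the best of six pushes it up by more than a full standard deviation, which is exactly the kind of “finding” that then fails to replicate.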

This can rise, on occasion, to the level of fraud, but it’s not fraud if you’re fooling yourself, too. Science is done by humans, and it’s always going to have a fair amount of slop in it. The same issue of Nature, as fate would have it, has a good example of irreproducibility this week. Sanofi’s PARP inhibitor iniparib wiped out in Phase III clinical trials not long ago, after having looked good in Phase II. It now looks as if the compound was (earlier reports notwithstanding) never much of a PARP1 inhibitor at all. (Since one of these papers is from Abbott, you can see that doubts had already arisen elsewhere in the industry.)
That’s not the whole story with PARP – AstraZeneca had a real inhibitor, olaparib, fail on them recently, so there may well be a problem with the whole idea. But iniparib’s mechanism-of-action problems certainly didn’t help to clear anything up.
Begley and Ellis call for tightening up preclinical oncology research. There are plenty of cell experiments that will not support the claims made for them, for one thing, and we should stop pretending that they do. They also would like to see blinded protocols followed, even preclinically, to try to eliminate wishful thinking. That’s a tall order, but it doesn’t mean that we shouldn’t try.
Update: here’s more on the story. Try this quote:

Part way through his project to reproduce promising studies, Begley met for breakfast at a cancer conference with the lead scientist of one of the problematic studies.
“We went through the paper line by line, figure by figure,” said Begley. “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”

39 comments on “Sloppy Science”

  1. MIMD says:

    Then there’s sloppy science coming from our own government Dept. of HHS.
    How’s this for a caveat?
    “Our findings must be qualified by two important limitations: the question of publication bias, and the fact that we implicitly gave equal weight to all studies regardless of study design or sample size.”

  2. Rick Wobbe says:

    In addition to increased publication pressure, there’s also increased pressure to turn research and tech transfer offices into profit centers, so licensing early and often is the order of the day. In that caveat emptor world, the highest value is making the most money – and as such the model has worked spectacularly (greater than average licensing revenue for less than average scientific rigor/cost), leading one to ask “so where’s the problem?” Sometimes it seems like, to paraphrase Barry Goldwater, “Extremism in the defense of revenue is no vice”.

  3. Tech transfer says:

    Published research doesn’t have to be right, it just has to convince others it might be right. It is rewarded with more funding, the rest is obvious………publish convincing stuff, real or not.

  4. Student says:

    One topic Lee has brought up in seminars (he is our chair) is cross-contamination of cell lines. Both by the clinicians bringing them from the clinic and by those in the hood (who passage them tens of times and don’t think twice about handing them off to the lab down the hall, street, etc.)

  5. Virgil says:

    Spent most of last night reading this issue of Nature, and wondering when it would show up here! Nice to see this stuff in the main journal instead of relegated to Nat. Rev. Drug Dev.
    As for reproducibility in cancer studies in the lab, one only has to look at the impending debacle at MD Anderson to see what a mess basic research in the cancer field is in, especially cell biology studies.

  6. Student says:

    @5 I don’t think that guy’s lab reflects the field or the institution, as he (or his workers?) was outright lying (as opposed to being sloppy, like this article speaks to)…As grad students we are required to take ethics courses. Among them, one emphasizes the use of software to detect image plagiarism/manipulation/etc. So this really took a lot of people by surprise.

  7. lynn says:

    It’s certainly not limited to oncology. I think it’s up to all of us who review manuscripts to insist on seeing the right controls run [both positive and negative] and much more rigorous testing of hypotheses. I agree it’s not fraud – but a lot of sloppy science.

  8. Rick Wobbe says:

    Tech transfer, #3,
    I can’t see your face from here. Were you winking and smiling mischievously when you wrote that or were you serious? Sarcasm doesn’t transmit very clearly through electronic media.

  9. lazybratsche says:

    “I explained that we re-did their experiment 50 times and never got their result. He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
    That is very, very disheartening, and really inexcusable. I’ve only been at this for a few years (as a tech and now a grad student), but I would never ever believe an experiment that only worked once, and then failed in the next five replicates. Sure, sometimes it’s hard to do an experiment “correctly” since there are so many variables, some of which can’t be controlled. So sometimes it’s worth trying to find the conditions that make the result reproducible. But that non-reproducible result is worthless and should never be published. The result that is only reproducible under narrow conditions can be informative, but it’s only publishable if those conditions are made absolutely explicit. . .
    And yet, the lab who publishes shocking nonreproducible results will get a handful of Nature papers. The lab who is more cautious will be lucky to get a single solid paper in a less glamorous journal.

  10. Over on Twitter, Leonid Kruglyak of Princeton pointed out that saying “94% of what you know is wrong but we won’t tell you which 94%” does not exactly establish credibility either.

  11. dearieme says:

    “He said they’d done it six times and got this result once, but put it in the paper because it made the best story.” Sorry, that is fraud – he knew all along.

  12. maverickny says:

    Sloppiness is, of course, only one potential reason for the lack of reproducibility, but there are many others including contaminated cell lines, as others have correctly pointed out.
    However, the sheer complexity of the biologic processes involved in cancer is also an important factor, and one that I’m surprised a leading cancer researcher such as Lee Ellis didn’t bother to point out.

  13. Anonymous says:

    I’m a chemist, and I’ve always been suspicious of medical types – I don’t know how many times I’ve seen some half scientist / half P.T. Barnum on CNN breathlessly claiming that a cure for cancer was right around the corner. I can guarantee you I’d be tarred and feathered if I tried to publish a paper saying I’d invented a time machine or something like that!

  14. NJBiologist says:

    @7 Lynn–Absolutely. And that’s not limited to oncology, or to target validation studies. Unfortunately, opinions about what procedural controls mean vary between individuals. Blinding, in particular, means very different things to different people.

  15. RM says:

    What really gobsmacked me was this little gem from the Reuters story on the article:
    “Some authors required the Amgen scientists sign a confidentiality agreement barring them from disclosing data at odds with the original findings.”
    Which, if true, is just wrong. Not even wrong, as the saying goes. That’s not how science should be done.
    If I was in their situation, I’d seriously consider refusing to sign the agreement, and then going to the journal editors with it as de facto evidence that the original article should be retracted, as the authors obviously have no confidence in its validity.

  16. Ginsberg says:

    “He said they’d done it six times and got this result once, but put it in the paper because it made the best story.”
    “Very disillusioning” or scientific misconduct?

  17. jtd7 says:

    Another contributing factor may be the de-emphasis of Experimental Methods. Especially in high-profile journals such as Nature and Science, the methods are relegated to “Supplemental Material Available On-Line.” I think this sends a message that they are not all that important. Too often, when I am trying to follow up on a published result, I find that the published methods are inadequate. The source and characterization of a primary antibody may not be given. The method may be a reference to another publication that, too often, does not describe any such method.

  18. John Wayne says:

    @16, definitely scientific misconduct. It may be time (or a little past time) to make a few examples out of poor behavior; this sort of thing has the potential to infect the whole field of scientific research.

  19. mike says:

    To be fair, a lot of experiments in the hands of graduate students and undergraduates do not work many times before they actually work. Without more detail, I would hesitate to say that the experiment working one time in six was a failure to replicate the work, instead of the PI handing the project to student after student until one of them did it. Then he jumped to the conclusion that this student did it right, rather than that the experiment didn’t work.
    It’s hard to publish an article that really describes all the failed experiments in a graduate lab. “Researcher A got the reaction to work on his fourth attempt, but then was able to do it consistently for the next two attempts before he left. Researcher B, an undergraduate, claims to have run the reaction six times, but the rest of the lab only remembers him to be there on two of those days, and for one of them he left it running over spring break before attempting to work it up. Researcher C got it to work once, and then failed six more times before realizing that when his reagent was borrowed by the lab down the hall, they left it out of the dry box for three weeks before returning it. He ran it once more with fresh reagent and it worked again, but since he was getting married and leaving grad school he never wrote it up.”
    So how many times did the reaction work? Which of the reactions should be placed in the experimental section? And this would be a chemical system, which is so much shorter and easier to interpret than a biological one.

  20. anonymous says:


  21. Todd says:

    I’m shocked, but not surprised. This has been going on for decades. I don’t think there’s a lot of fraud, like everyone else is saying. However, if you’re a PhD student or postdoc who needs a paper to graduate, and the PI is leaning on you, well…things can be made to work the 53rd time. Also, there’s more secret sauce in your average academic lab than in most McDonald’s. It’s scary how often that happens.

  22. Hap says:

    I thought if you actually published a paper on a synthetic method, that you were actually supposed to have one – meaning that you know the basic inputs needed to obtain the depicted products (for a limited subset of reactants) consistently. If you don’t have that, why was your paper published again (other than for CV enhancement)? Otherwise, I might as well be reading ads for nutraceuticals and methods to attain financial freedom through lottery tickets than reading research papers.

  23. Andrew says:

    I found it more than a little hypocritical that the Nature paper had no methods, no results, no list of what they tried and failed to validate, no indication of how they tried to validate etc etc. In short it was one of the sloppiest papers I’ve seen, and itself belonged in the Journal of Irreproducible Results. No wonder it appeared in Nature

  24. Chrispy says:

    Gee, it is unfortunate that Begley did not encourage his group to publish their results. Amgen clearly applied a lot more resources to these studies than the academics could afford to, and now this boatload of important research was done only to be lost. Part of the beauty of Science is that it is OK to disagree, but show us your evidence. Whining that the academics are doing an inadequate job is not really participating and doesn’t help much. Until leaders in industry recognize that they bear responsibility for scientific progress, too, we’ll be stuck, each company trying to secretly discover what is real and what isn’t on their own.

  25. Anonymous says:

    And for those who don’t read the literature and end up “re-inventing the wheel” it’s called SLOPPY SECONDS!!! LOL

  26. Anonymous says:

    @24. The results are being published. Here’s an example

  27. Iridium says:

    “He said they’d done it six times and got this result once, but put it in the paper because it made the best story. It’s very disillusioning.”
    If a result comes out once every 6 times, it could still be OK to publish it… as long as you say it works 1 time out of 6!
    In my book, this is not “very disillusioning”, this is fraud.
    Sadly…he does not even realize that!

  28. Rob says:

    That is thoroughly disgusting conduct.
    What ticks me off as an academic scientist is having to compete for grants with people who just make it up.

  30. Anonymous says:

    … and how many biologists’ experiments really contained the intended cell lines? Watch this:

  31. newnickname says:

    In his book “The Way of Synthesis: Evolution of Design and Methods for Natural Products” and elsewhere, Hudlicky says that one of the major problems in modern chemistry is “ethics” (or the lack of ethics?) in the reporting of yields and other aspects of our reactions and research. There should be no shame or penalty for reporting a “59%” yield instead of accidentally-on-purpose transposing that to “95%” (which I have witnessed others do).

  32. Jordan says:

    @31 newnickname: Hudlicky makes the same point very vigorously in person.
    @21 Todd: I think you’ve identified a potential root cause here — the “race to the finish line”. It may manifest itself more as sloppiness than outright fraud, but the net effect is the same.

  33. Immunoldoc says:

    Having been involved in target ID and validation group for a major pharma I can say, charitably, that about 50% of published work was not reproducible either in whole or in part. In terms of the accusations that pharma is “hiding” such data, show me the journal that routinely publishes negative results that refute findings of major labs and I’m quite sure I’d be happy to get these stories out. I tried on several occasions to publish well-controlled, negative data, refuting a major story, in the very journals that had printed the original article only to be told they weren’t interested.

  34. Nile says:

    “He said they’d done it six times and got this result once, but put it in the paper because it made the best story.”
    …Follow-up studies tried fifty times, got zilch, and the original experimenter admitted he couldn’t help them.
    Isn’t there a journal specifically for irreproducible results?

  35. Eric Schuur says:

    Better to get the truth out there in the open, but I do find that last quote quite depressing.

  36. Mario says:

    There are people doing research/academia for the prestige/fame, not for the love of science. For them science is a tool to reach a goal, not the goal itself. So faking results is not that disturbing, just a necessary little stain to move one step up.
    It is not sloppy science, it is a crime.
    The sufferer is waiting for a cure.

  37. CM says:

    Scientists are turned into mortgage slaves nowadays — what else did you expect?

  38. udippel says:

    Of course it is fraud. Don’t kid or delude yourselves.
    Delusion is when someone calls ‘sloppy’ what is outright fake. Science means, according to some definitions, that an experiment can be repeated. If it can’t, it is fake. As simple as that. If “sometimes” renders a result publishable, we have slipped down to the same level that we used to abhor. Then astrology, palm reading, homeopathy and whatnot are just as much science as ‘our’ science. “We just happen to not be able to reproduce our result these days.”
    Though it is understandable. It is not ‘us’ per se, but the society around us. Instead of trusting us to be responsible academics, we are controlled by bean-counters who demand a ‘breakthrough’ after some fixed period of funding. As if science weren’t, most of the time, a matter of things just not working. It is career, tenure, feeding the family. Though that ought not be an excuse. The unemployed father’s stealing from the store isn’t condoned either.
    Until a few days ago I was working in a place where we were ‘encouraged’ to let ‘slip through’ any student with non-functional research results. The reason given: it’s for the income of the university (“your salary”).

Comments are closed.