
Animal Studies: Are Too Many Never Published At All?

A new paper in PLoS Biology looks at animal model studies reported for the treatment of stroke. The authors use statistical techniques to try to estimate how many have gone unreported. From a database with 525 sources, covering 16 different attempted therapies (which together come to 1,359 experiments and 19,956 animals), they find that only a very small fraction of the publications (about 2%) report no significant effects, which strongly suggests that there is a publication bias at work here. The authors estimate that there may well be around 200 experiments that showed no significant effect and were never reported, whose absence would account for around one-third of the efficacy reported across the field. In case you’re wondering, the therapy least affected by publication bias was melatonin, and the one most affected seems to be administering estrogens.
I hadn’t seen this sort of study before, and the methods they used to arrive at these results are interesting. If you plot the precision of the studies (Y axis) versus the effect size (X axis), you should (in theory) get a triangular cloud of data. As the precision goes down, the spread of measurements across the X-axis increases, and as the precision goes up, the studies should start to converge on the real effect of the treatment, whatever that might be. (In this study, the authors looked only at reported changes in infarct size as a measure of stroke efficacy). But in many of the reported cases, the inverted-funnel shape isn’t symmetrical – and every single time that happens, it turns out that the gaps are in the left-hand side of the triangle, the not-as-precise and negative-effect regions of the plots. This doesn’t appear to be just due to less-precise studies tending to show positive effects for some reason – it strongly suggests that there are negative studies that just haven’t been reported.
The authors point out that applying their statistical techniques to reported human clinical studies is more problematic, since smaller (and thus less precise) trials may well involve unrepresentative groups of patients. But animal studies are much less prone to this problem.
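The asymmetry check described above can be sketched with an Egger-style regression: regress each study's standardized effect (effect divided by its standard error) on its precision, and look at the intercept. A symmetric funnel gives an intercept near zero; censoring away the negative, low-precision studies pushes it up. The simulation below is a toy illustration of that logic, not the paper's actual analysis pipeline; the true effect size, study counts, and publication filter are all made-up assumptions.

```python
import numpy as np

def egger_intercept(effects, ses):
    """Egger-style regression test for funnel-plot asymmetry.

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    In a symmetric funnel the intercept is near zero; a clearly positive
    intercept suggests small, imprecise, negative studies are missing,
    i.e., possible publication bias.
    """
    z = np.asarray(effects) / np.asarray(ses)   # standardized effects
    prec = 1.0 / np.asarray(ses)                # precisions
    # ordinary least squares fit: z = slope * precision + intercept
    slope, intercept = np.polyfit(prec, z, 1)
    return intercept

# Simulated field: true effect 0.3, but suppose any study whose estimate
# fell below zero was "file-drawered" and never published.
rng = np.random.default_rng(0)
ses = rng.uniform(0.1, 0.8, 300)     # a mix of precise and imprecise studies
effects = rng.normal(0.3, ses)       # each estimate scatters with its own SE
published = effects > 0              # crude publication filter

biased = egger_intercept(effects[published], ses[published])
unbiased = egger_intercept(effects, ses)
```

Run on the censored ("published-only") set, the intercept should come out well above the intercept from the full set, mimicking the missing left-hand corner of the funnel described above.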
The loss of experiments that showed no effect shouldn’t surprise anyone – after all, it’s long been known that publishing such papers is just plain harder than publishing ones that show something happening. There’s an obvious industry bias toward only showing positive data, but there’s an academic one, too, which affects basic research results. As the authors put it:

These quantitative data raise substantial concerns that publication bias may have a wider impact in attempts to synthesise and summarise data from animal studies and more broadly. It seems highly unlikely that the animal stroke literature is uniquely susceptible to the factors that drive publication bias. First, there is likely to be more enthusiasm amongst scientists, journal editors, and the funders of research for positive than for neutral studies. Second, the vast majority of animal studies do not report sample size calculations and are substantially underpowered. Neutral studies therefore seldom have the statistical power confidently to exclude an effect that would be considered of biological significance, so they are less likely to be published than are similarly underpowered “positive” studies. However, in this context, the positive predictive value of apparently significant results is likely to be substantially lower than the 95% suggested by conventional statistical testing. A further consideration relating to the internal validity of studies is that of study quality. It is now clear that certain aspects of experimental design (particularly randomisation, allocation concealment, and the blinded assessment of outcome) can have a substantial impact on the reported outcome of experiments. While the importance of these issues has been recognised for some years, they are rarely reported in contemporary reports of animal experiments.
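The positive-predictive-value point in that excerpt is just Bayes' rule, and it is worth making concrete. In the sketch below, the power and prior-probability numbers are illustrative assumptions of mine, not figures from the paper:

```python
# Positive predictive value of a "significant" result:
#   PPV = (power * prior) / (power * prior + alpha * (1 - prior))
def ppv(power, prior, alpha=0.05):
    true_pos = power * prior          # real effects correctly detected
    false_pos = alpha * (1 - prior)   # null effects that slip past p < 0.05
    return true_pos / (true_pos + false_pos)

# Well-powered field, half of tested hypotheses true:
ppv(0.80, 0.5)   # ≈ 0.94 — close to what the p < 0.05 convention suggests
# Underpowered animal study (20% power), same prior:
ppv(0.20, 0.5)   # = 0.80
# Underpowered and only 1 in 5 hypotheses true:
ppv(0.20, 0.2)   # = 0.50 — a coin flip
```

The point is exactly the one the authors make: once studies are substantially underpowered, the fraction of "positive" results that are real can fall far below the 95% that conventional significance testing seems to promise.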

And there’s an animal-testing component to these results, too, of course. But lest activists seize on the part of this paper that suggests that some animal testing results are being wasted, they should consider the consequences:
The ethical principles that guide animal studies hold that the number of animals used should be the minimum required to demonstrate the outcome of interest with sufficient precision. For some experiments, this number may be larger than those currently employed. For all experiments involving animals, nonpublication of data means those animals cannot contribute to accumulating knowledge and that research syntheses are likely to overstate biological effects, which may in turn lead to further unnecessary animal experiments testing poorly founded hypotheses.

This paper is absolutely right about the obligation to have animal studies mean something to the rest of the scientific community, and it’s clear that this can’t happen if the results are just sitting on someone’s hard drive. But it’s also quite possible that for even some of the reported studies to have meant anything, they would have had to use more animals in the first place. Nothing’s for free.

19 comments on “Animal Studies: Are Too Many Never Published At All?”

  1. Will says:

    I suppose the NIH could make a rule that funded projects must result in a publication of the outcome, whether positive or negative – or at least set up some sort of database where negative results could be stored. At the very least it would reduce the number of grants awarded for work that had previously been carried out, freeing up more money for other, actually novel projects.

  2. Sili says:

    My impression is that the point of ‘free’ journals like the PLOS family is to make the publication easier and thus at least remove the publisher bias from these studies.
    That of course leaves the author bias, and that’s harder to deal with.
    I’m very much in favour of far heavier regulation. All trials on animals (not just on humans) should be registered before they’re allowed to go ahead, the endpoints need to be defined at submission and the results must be deposited at a given date after the scheduled end of the trial. And – this is the crucial bit – failure to comply must carry consequences, most effectively by barring the company from doing any new trials.
    Some sort of review board should be in place, too, of course, to ensure that the trial/experiment is capable of providing answers. But I assumed there were already internal ethics committees for that purpose.
    Ben Goldacre is very fond of funnelplots and included them in the book.

  3. alig says:

    Since a lot of animal work is in the process of being off-shored to China, good luck trying to get more regulation on it.

  4. Sili says:

    Certainly unfortunate, but tu quoque is not an argument for letting people run wild as they please here.
    As always, the Free Market™ is not the be-all and end-all of life, the universe and everything.

  5. mikeymedchem says:

    Re: #2 — I’m all for open access journals…but the cost onus ends up on the author — a couple thousand bucks to publish a paper so that it’s free for others to access. This is not the way to encourage open access.
    Also — regarding regulation of ALL animal trials — considering the numbers of experiments that happen on a daily basis for all kinds of reasons, introduction of regulation on par with those in place for human trials would simply cause research to screech to a halt. Have you ever seen the regs for human trials? They are arduous, and rightfully so. But you’re proposing that all results, even the quick PK experiment, should be regulated in this way? Yipes.

  6. geezer says:

    I agree with mikey..Sili, you’re living in lala land if you think all animal studies should be registered. Do you know how many animal expts take place in a given day? And what about the data that’s considered proprietary? Jeesh…

  7. Malcolm Macleod says:

    … but what you could have is a system where researchers declared their area of work (in as much or as little detail as their Intellectual Property status allows) so that those seeking an unbiased overview of data at least know where to look for unpublished work. I think there are opportunities here for light-touch collegiate academic registration, which might prevent us being forced to take on a much more onerous bureaucratic system. As someone involved with both animal and human studies, I agree that ICH/GCP regulatory standards would be crushing.

  8. RobK says:

    This all presumes that the animal models of stroke actually mean something one way or the other, which is by no means certain. In fact, CNS diseases in general suffer from a lack of reliable models. You can induce pathology that somewhat resembles the human condition but only up to a point, and sometimes not at all.

  9. Curt F. says:

    I wonder if there is a study waiting to be done comparing (i) the number of animals sacrificed at a particular lab over a certain time interval, as gleaned from IRB forms or similar information, and (ii) the number of animal sacrifices reported in published papers resulting from that institution over a similar (or perhaps extended) time interval.
    It could be another way to address the question of publication bias.

  10. petros says:

    An interesting assessment, but perhaps not surprising given the number of clinical failures, often in Phase III, of drugs that have shown good results in animal models of stroke, usually the MCAO model.

  11. Cartesian says:

    I think a bit of Cartesian philosophy could help to have less biases against the use of animals for some experiments.

  12. Sili says:

    Do you know how many animal expts take place in a given day?

    Obviously not. My bad.
    So a ‘lighter touch’ approach might work?
    Let labs regulate themselves with occasional surprise audits from outside? With smackdowns after repeated cases of bad practice?

  13. mehere says:

    Can’t blame the scientists too much for publication bias; who would publish all these negative results? We need a new journal, the Journal of Not Much Happened.

  14. Sili says:

    Can’t blame the scientists too much for publication bias; who would publish all these negative results? We need a new journal, the Journal of Not Much Happened.

    On your second point, wholly agreed.
    But on the first you’re dead wrong. There’s plenty of blame to heap on them. If nothing else, think about the other groups that may well go on to replicate the failed trials only to fail again, unaware of any previous failures. Derek is always telling us that nobody wants to use animals – they’re messy and expensive.
    Then why the Belgium not go out of our way to ensure that no more animals than necessary are used? Why reproduce failure? (Aside from the fact that it seems to be what pharma does these days …)

  15. Hap says:

    Hey! It’s not fair to say that pharma only reproduces failure. It comes up with a lot of its own original failures, as well. Even in the best of times, failure was pharma’s largest output – working in a field with 80%+ failure rates’ll do that. There’s just not enough successes to make everyone happy, but plenty of past successes to pillage for temporary gain.
    For academics, publishing failed research is difficult at best, because of the lack of places to put it, other than your own website. (There could be other options for making it available, however.) For company researchers, their companies are probably far more afraid of others beating them to a valuable drug based on their failed research and correspondingly less willing to reveal animal failure data. If companies aggregated and laundered their animal trial data, they might publish it in that form, but I don’t know if publishing that data would make them look worse to the public and thus cost them sort-of-tangible worth in return for the good will of researchers (and future research gains). There might be fairly strong reasons why corporate researchers might not release their failed trial data, and less leverage relative to academic researchers to get them to do so.

  16. anonqat says:


    Welcome to the Journal of Articles in Support of the Null Hypothesis. In the past other journals and reviewers have exhibited a bias against articles that did not reject the null hypothesis. We seek to change that by offering an outlet for experiments that do not reach the traditional significance levels (p < .05). Thus, reducing the file drawer problem, and reducing the bias in psychological literature.

    Bad news is that they haven’t published a lot.

  17. Sili says:

    I meant to stress Pharma administrators. Sorry.
    And I agree that it’s not an easy project. But I still think it’s the right thing to do. I’m emphatically not an animal ‘rights’ nut, but until someone changes my mind, I think it’s fair to insist that we make the best possible use of the animals in our care.
    Companies certainly have every incentive to be as paranoid and secretive as possible (and then a bit). Hence the necessity of regulation to throw the doors open. They’ll find a way to function – just as Detroit learnt to deal with putting seatbelts in cars.

  18. Hap says:

    It should be relatively easy to get academic groups to give their data in some standard form (cleaned for proprietary data) to funding agencies in return for funding. It would seem to be the interests of a consortium of pharma companies to pool their own animal data (they’ve already done so for some common issues in trials) – even if they don’t believe in animal rights, they do believe in cash, and saving money from doing unnecessary studies and getting better results so they don’t waste money in clinical trials is in their best interests. Funding agencies might even be able to coordinate between the two pools of data. Barring that, FDA could force the issue with pharma, but they’d have to be pretty motivated and steadfast to do so.

  19. mehere says:

    Publishing these things is unfortunately not going to happen as much as we’d like. Here in pharma-land, getting IP to clear any publication is a battle, never mind one that ‘doesn’t show much’. On my first (dead wrong) point, even if a journal was found that would publish this stuff (and I like the one quoted above, even if it did seem to be a serious version of that old chestnut), persuading people to spend their scarce time writing up non-results could be difficult. I agree with the sentiment and the fact that this information would be really useful (if sufficiently detailed), but I can’t see it actually happening in reality, unless there was some compulsion to do it from the legal side.
