
Drug Industry Trials vs. NIH-Funded Ones: Who Reports on Time?

This article in the NEJM looks at how well clinical trial results are made public, which has been a big topic over the last few years. Let me say up front that the results are quite interesting, and that some news outlets appear to be misreporting them.
Since 2007, it’s been required by law that anyone sponsoring a clinical trial in the US register it at clinicaltrials.gov and report at least a summary of the results within one year after finishing data collection for the trial’s primary endpoint (or within a year of stopping it for any other reason).

These authors (all from Duke) found that the clinicaltrials.gov data can be messy to work with. It’s not clear which trials in the registry are subject to the above legal requirements, so they first used someone else’s algorithm to identify over 32,000 “highly likely applicable clinical trials” (HLACTs, in the paper’s terminology). Then they picked out the ones that were listed as “completed” or “terminated” before August 31, 2012 (to give everyone time to report), which took the number down to 13,327 trials, all of which ended between January 1, 2008 and that 2012 cutoff. Any trial reporting results (or filing a request for an extension) by September 27, 2013 was considered to be legally acceptable.
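To make that selection logic concrete, here’s a minimal sketch in Python of the cohort filter as I read it. The record layout and field names are invented for illustration; they are not the clinicaltrials.gov schema or the authors’ actual code.

```python
from datetime import date

# Cutoff dates from the study design described above.
WINDOW_START = date(2008, 1, 1)    # earliest trial completion in the cohort
WINDOW_END   = date(2012, 8, 31)   # listed as completed/terminated by here
GRACE_CUTOFF = date(2013, 9, 27)   # results (or extension request) due by here

def in_cohort(trial):
    """Trial was listed as completed or terminated within the study window."""
    return (trial["status"] in ("completed", "terminated")
            and WINDOW_START <= trial["end_date"] <= WINDOW_END)

def legally_acceptable(trial):
    """Results posted, or an extension requested, by the cutoff date."""
    dates = (trial.get("results_date"), trial.get("extension_request_date"))
    return any(d is not None and d <= GRACE_CUTOFF for d in dates)

# Purely illustrative record:
trial = {"status": "completed", "end_date": date(2010, 6, 1),
         "results_date": date(2011, 3, 15), "extension_request_date": None}
print(in_cohort(trial), legally_acceptable(trial))  # True True
```

The real analysis also has to decide which cohort members were required to report at all, which is where things get murky (see below).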
How did everyone do? Only 13% of all the trials reported data within one year of completion, but the authors say that they still can’t be sure how many of the trials being analyzed were required to report during that time (there are exceptions related to whether an intervention has been approved for marketing or not). Here’s where they tried to correct for this:

We manually reviewed a sample of 205 HLACTs to determine requirements for reporting (Tables S15A and S15B in the Supplementary Appendix). By reviewing approval dates and labeling information, we determined that 44 to 45% of industry-funded HLACTs in this sample were not required to report results, as compared with 6% of NIH-funded studies and 9% of those funded by other government or academic institutions. On the basis of this review, we estimated that during the 5-year period, approximately 79 to 80% of industry-funded trials reported summary results or had a legally acceptable reason for delay. In contrast, only 49 to 50% of NIH-funded trials and 42 to 45% of those funded by other government or academic institutions reported results or had legally acceptable reasons for delay.

That’s the real take-home of this article. The authors themselves say:

Before the passage of the FDAAA, industry sponsors received particular scrutiny for selective reporting. Since the enactment of the law, many companies have developed disclosure policies and have actively pursued expanded public disclosure of data. Curiously, reporting continues to lag for trials funded by the NIH and by other government or academic institutions. Pfizer has reported that the preparation of results summaries requires 4 to 60 hours, and it is possible that the NIH and other funders have been unable or unwilling to allocate adequate resources to ensure timely reporting.

That much seems clear: the drug industry has been doing a significantly better job of complying with the law than publicly funded trials have. Some of the reports about this paper have picked up on this, but others have landed on that 13% overall figure and gotten stuck, even though (as the paper itself shows) many of the trials in that set were not even legally required to report data. The most detailed report in the press is probably this one from NPR. They get the results of the paper right, which is more than I can say for some others. I particularly noted, and not happily, that Ben Goldacre tweeted that figure along with a link to his book, “Bad Pharma”, which juxtaposition implies that this number is both germane and the fault of the drug industry. I expected better.
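To see why the 13% figure misleads on its own, here’s a back-of-envelope version of the correction the authors describe: restricting the denominator to trials that actually had to report. The exemption fractions (roughly 44% of industry trials, 6% of NIH trials) come from the quoted passage; the raw trial counts below are placeholders I made up for illustration, not the paper’s numbers.

```python
def adjusted_compliance(reported_or_excused, total, frac_not_required):
    """Compliance among only those trials actually required to report."""
    required = total * (1 - frac_not_required)
    return reported_or_excused / required

# Placeholder counts (NOT from the paper); only the exemption fractions
# (0.44 industry, 0.06 NIH) are taken from the quoted passage.
print(adjusted_compliance(350, 1000, 0.44))  # industry: 0.35 raw -> ~0.62
print(adjusted_compliance(300, 1000, 0.06))  # NIH:      0.30 raw -> ~0.32
```

The point is just that the same raw reporting rate can imply very different compliance once you account for who was actually required to report, which is how the industry numbers end up looking much better after adjustment.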

16 comments on “Drug Industry Trials vs. NIH-Funded Ones: Who Reports on Time?”

  1. G2 says:

    Is this the coming attraction mentioned yesterday?

  2. Anonymous says:

    You expected better from Ben Goldacre? You don’t know him very well, then.

  3. Even John LaMattina, usually pretty reliable in my experience, gets the tone wrong. Link in URL, if I did it right.

  4. Derek Lowe says:

    #1 – nope, this isn’t the “coming attraction” yet. 2 PM for that one!

  5. The Realist says:

    ” I particularly noted, and not happily, that Ben Goldacre tweeted that figure along with a link to his book, “Bad Pharma”, which juxtaposition implies that this number is both germane and the fault of the drug industry. I expected better.”
    Come on, he can’t allow actual numerical data to interfere with book sales! “These are real facts, based on my theories…”

  6. JK says:

    “I expected better.”
    Has Derek done a review of Bad Pharma? I think the book deserves discussion (although it’s not well written). In my view it has some strong points as well as some big problems, but I’d be interested in Derek’s take on it, or other pointers to good reviews. Not wanting to thread jack here, but Bad Pharma has surely had a big impact on the topic of this post – reporting of trials.

  7. CMCguy says:

    The Pfizer-reported time frame (“preparation of results summaries requires 4 to 60 hours”) seems extremely low based on what I have seen of the data handling requirements, even for a small clinical study. It usually takes a team of 2-4 people to gather, consolidate, and QC-review the data, much less provide any analysis and conclusions (even if preliminary). The amount of effort/cost entailed in routine data processing has always been a factor in the low reporting of trials, rather than a nefarious attempt to hide info, IMO, especially when a study outcome was negative and the resources generally moved on to other projects. Even if everything is captured electronically, generating basic reports can be substantial work.

  8. johnnyboy says:

    I much enjoyed Goldacre’s book “Bad Science”, but sadly he appears to be devolving from a science educator into an anti-pharma activist, judging from his rhetoric. He probably gets a lot more attention and speaking engagements as the latter. I stopped paying attention to him when I read his comment that pharma companies were willingly killing people by releasing insufficient post-marketing trial data.

  9. SP says:

    I’m guessing a lot of the difference is the informatics and database structures in place in pharma vs. academia. If you have a proper database, warehouse, and reporting structure, you can pull the data relatively quickly, especially if it’s a standard query and report you run often. A lot of academic data is captured in spreadsheets on someone’s hard drive, because NIH grants simply do not fund infrastructure work like LIMS, even though it would amplify the value of all the other work going on at an institution.

  10. anonao says:

    @SP, or if you are a big company, you know how to find the loopholes that let you legally avoid reporting data: report when you really have to, but otherwise try not to.

  11. Anonymous says:

    My company looked at this same topic recently, in a whitepaper titled “Reporting Bias in Clinical Trials: What’s the current status?” that was presented at the PharmaCI conference last year.
    http://www.citeline.com/resource-center/whitepapers/

  12. DCRogers says:

    What part of “law” are people not getting here?
    Laws that apply to me accept nothing less than 100% compliance, and if I am caught violating one, I don’t just get an *aw shucks, try to do better*.
    I find 20% failure-to-comply for large firms deplorable, and 55% for academia shocking. It’s not like a few person-days of work constitutes an example of the heavy hand of government… sheesh.
    There are laws for little people, and laws for big people. Only the latter are voluntary.

  13. Bell4 says:

    There’s a potentially huge asymmetry between unpublished results from industry and academia. An unpublished clinical trial from academia is most likely to be of relatively modest impact on medical practice, whether positive or negative: as we have seen, positive trials do tend to be written up, and if a large, well-executed academic trial came up negative, that result itself is of general medical significance and is also quite likely to be disclosed (cf. the negative results for anti-oxidant vitamin supplements against various cancers). Hence unpublished academic studies are most likely to be in the marginal-impact category.
    In contrast, for any industrial study of a marketed drug, a negative outcome is of immediate significance to patients using the medicine as well as to physicians considering its use. Even delayed release of negative information is often of definite monetary value to the company marketing the drug. Here, even a nominal compliance rate of 99% might be unacceptable.
    A more useful analysis might sort publication delays or omissions into categories reflecting these distinctions.

  14. Eric says:

    #13 I think you might be overstating the ‘huge asymmetry between unpublished results from industry and academia.’ For the most part, industry-funded clinical trials have little impact on medical practice. How can I claim this? Because most trials involve investigational products that never make it to market. Most trials fail – that is the nature of the drug development business. Publishing data from these trials helps other investigators (both public and private), but it doesn’t change doctors’ prescribing patterns. Now, data from a trial with a marketed drug – that’s a whole different beast. And yes, that should always be published.

  15. tangent says:

    I’d be fascinated if the data could be coded for “successful” / “unsuccessful” trials (that did or did not find the result the investigators hoped for), and stratified by time-to-report. Are successful results reported sooner? Are unsuccessful results reported only when legally required, or later?
