This article in NEJM looks at how well clinical trial results are made public, which has been a big topic over the last few years. Let me say up front that the results are quite interesting, and that some news outlets appear to be misreporting them.
Since 2007, it’s been required by law that anyone sponsoring a clinical trial in the US register it at clinicaltrials.gov and report at least a summary of the results within one year of finishing data collection for the trial’s primary endpoint (or within a year of stopping the trial for any other reason). These authors (all from Duke) found that the clinicaltrials.gov data can be messy to work with. It’s not clear which trials in the registry are subject to the above legal requirements, so they first used someone else’s algorithm to identify over 32,000 “highly likely clinical trials”. Then they picked out the ones listed as “completed” or “terminated” before August 31, 2012 (to give everyone time to report), which took the number down to 13,327 trials, all of which ended between January 1, 2008 and that 2012 cutoff. Any trial reporting results (or filing a request for an extension) by September 27, 2013 was considered legally acceptable.
How did everyone do? Only 13% of all the trials reported data within one year of completion, but the authors say that they still can’t be sure how many of the trials being analyzed were required to report during that time (there are exceptions related to whether an intervention has been approved for marketing or not). Here’s where they tried to correct for this:
We manually reviewed a sample of 205 HLACTs to determine requirements for reporting (Tables S15A and S15B in the Supplementary Appendix). By reviewing approval dates and labeling information, we determined that 44 to 45% of industry-funded HLACTs in this sample were not required to report results, as compared with 6% of NIH-funded studies and 9% of those funded by other government or academic institutions. On the basis of this review, we estimated that during the 5-year period, approximately 79 to 80% of industry-funded trials reported summary results or had a legally acceptable reason for delay. In contrast, only 49 to 50% of NIH-funded trials and 42 to 45% of those funded by other government or academic institutions reported results or had legally acceptable reasons for delay.
That’s the real take-home of this article. The authors themselves say that:
Before the passage of the FDAAA, industry sponsors received particular scrutiny for selective reporting. Since the enactment of the law, many companies have developed disclosure policies and have actively pursued expanded public disclosure of data. Curiously, reporting continues to lag for trials funded by the NIH and by other government or academic institutions. Pfizer has reported that the preparation of results summaries requires 4 to 60 hours, and it is possible that the NIH and other funders have been unable or unwilling to allocate adequate resources to ensure timely reporting.
That much seems clear: the drug industry has been doing a significantly better job of complying with the law than publicly funded trials have. Some of the reports about this paper have picked up on this, but others have landed on that 13% overall figure and gotten stuck, even though (as the paper itself shows) many of the trials in that set were not even legally required to report data. The most detailed report in the press is probably this one from NPR. They get the results of the paper right, which is more than I can say for some others. I particularly noted, and not happily, that Ben Goldacre tweeted that figure along with a link to his book, “Bad Pharma”, a juxtaposition that implies the number is both germane and the fault of the drug industry. I expected better.