The drug industry has a huge stockpile of results on projects that have not worked. That much is clear – clinical success rates continue to run at about 10%, on average, so we have a steady stream of failures of all kinds, for all reasons. It would be foolish not to learn as much as we could from all of these, and that’s the point of this article in Nature. It details an effort by the European Medicines Agency (EMA) to bring together companies that have had clinical failures in Alzheimer’s. (That is to say, pretty much every company that has had a clinical program in Alzheimer’s.)
Following the G8 call to action, we at EMA invited drug companies to present their research to us confidentially and individually — detailing what drug targets they investigated, what populations they thought their interventions might treat, and how they intended to test this in their trial designs. Seven companies agreed to take part. Their presentations to us covered data on 14 discontinued or ongoing trials, including efficacy trials that collectively covered more than 12,000 participants. . .
. . .The information shared with our teams was more up-to-date, broader and more in-depth than what is commonly published in the literature, included in trial registries or given in mandated public summaries. Details on data generated before phase III trials — including preclinical and early clinical research — were crucial to frame the failure’s significance in terms of which hypotheses it falsified. Such information also helped to avoid unwarranted negative conclusions about uninformative generic terms such as ‘β-amyloid hypothesis’.
Even though the EMA team couldn’t present much of this publicly, they did (in consultation with the regulatory agencies in the US, Canada, and Japan) revise the EMA guidelines for Alzheimer’s trials (design, endpoints, etc.) in light of what they heard. One of the larger results of this should be a more standardized approach to such trials, because as things stand, they can be difficult to compare. That’s partly been on the companies involved (each of them working up their own trial designs) and partly on the different regulatory requirements across these regions. But things should now be a bit clearer.
No meeting like this will break the logjam in Alzheimer’s research by itself, of course. The field has bigger problems than that, which is all the more reason not to pile on extra ones that don’t have to be there. And (as the authors say at the end of their article), this approach makes sense for a lot of other hard-to-treat diseases as well. There are certainly other therapeutic areas where programs have wiped out over and over, even when different mechanisms have been addressed (pain, obesity, stroke, sepsis, and more). There’s no point in just letting the wreckage pile up when we could be getting more benefit out of it.
Now, some areas already have pretty good disclosure of failure (the authors specifically mention CETP inhibitors). But even in those cases, speaking with the regulatory agencies in this way could still surface more nonpublic information. A larger issue, though, is the way that failures are dealt with in the industry as a whole. Big high-profile trials do get pretty clear announcements (note that things like Alzheimer’s and CETP involved large numbers of patients over long periods). Smaller programs, though, don’t always get that treatment. Sometimes there are projects that sit on a company’s “pipeline slide” quarter after quarter until they just quietly disappear. That’s not the optimal way to do it. No one likes failures, and no one’s exactly proud of them, but God knows they’re a fact of life in this business. “If at first you don’t succeed, destroy all evidence that you tried” is no way to run a research business.