

Failure Shouldn’t Be Such an Orphan

The drug industry has a huge stockpile of results from projects that have not worked. That much is clear – clinical success rates continue at about 10%, on average, so we have a steady stream of failures of all kinds, for all reasons. It would be foolish not to learn as much as we could from all of these, and that’s the point of this article in Nature. It details an effort by the European Medicines Agency (EMA) to get together the companies that have had clinical failures in Alzheimer’s. (That is to say, pretty much every company that has had a clinical program in Alzheimer’s.)

Following the G8 call to action, we at EMA invited drug companies to present their research to us confidentially and individually — detailing what drug targets they investigated, what populations they thought their interventions might treat, and how they intended to test this in their trial designs. Seven companies agreed to take part. Their presentations to us covered data on 14 discontinued or ongoing trials, including efficacy trials that collectively covered more than 12,000 participants. . .

. . .The information shared with our teams was more up-to-date, broader and more in-depth than what is commonly published in the literature, included in trial registries or given in mandated public summaries. Details on data generated before phase III trials — including preclinical and early clinical research — were crucial to frame the failure’s significance in terms of which hypotheses it falsified. Such information also helped to avoid unwarranted negative conclusions about uninformative generic terms such as ‘β-amyloid hypothesis’

Even though the EMA team couldn’t present much of this publicly, after consulting with the regulatory agencies in the US, Canada, and Japan they did revise the EMA guidelines for Alzheimer’s trials (design, endpoints, etc.) in light of what they heard. One of the larger results of this should be a more standardized approach to such trials, because as things stand, they can be difficult to compare. That’s partly been on the companies involved (with each of them working up their own trial designs) and partly on the different regulatory requirements across these regions. But that situation should now be a bit clearer.

No meeting like this will break the logjam in Alzheimer’s research by itself, of course. The field has bigger problems than that, but that’s all the more reason not to pile on extra problems that don’t have to be there. And (as the authors say at the end of their article), this approach makes sense for a lot of other hard-to-treat diseases as well. There are certainly other therapeutic areas that have wiped out over and over, even when different mechanisms have been addressed (pain, obesity, stroke, sepsis, and more). There’s no point in just letting the wreckage pile up when we could be getting more benefit out of it.

Now, some areas already have pretty good disclosure of failure (the authors specifically mention CETP inhibitors). But even in those cases, you could still get more nonpublic information out by speaking with the regulatory agencies in this way. A larger issue, though, is the way that failures are dealt with in the industry as a whole. Big high-profile trials do get pretty clear announcements (note that things like Alzheimer’s and CETP involved a large number of patients for a long time). Smaller programs, though, don’t always get that treatment. Sometimes there are projects that sit on a company’s “pipeline slide” for quarter after quarter until they just quietly disappear. That’s not the optimal way to do it. No one likes failures, and no one’s exactly proud of them, but God knows they’re a fact of life in this business. “If at first you don’t succeed, destroy all evidence that you tried” is no way to run a research business.

 

17 comments on “Failure Shouldn’t Be Such an Orphan”

  1. PorkPieHat says:

    There ought to be some mechanism, used more often than it is, that allows companies to collaborate based on shared knowledge of what does not work, and that permits sharing revenue from products in Alzheimer’s disease. The market pie for Alzheimer’s disease should be so large that sharing it within a consortium that’s actually successful in developing a disease-modifying treatment for AD should not be a problem, no?

    1. Hypnos says:

      I could imagine that this could lead to all kinds of compliance issues due to antitrust laws. A couple of large companies sharing proprietary data, coordinating on where (not) to compete and potentially sharing profits – sounds problematic to me. If something like this was orchestrated by a regulator (such as EMA in this case) – maybe. Definitely worth investigating in the interest of patients and companies.

      1. NJBiologist says:

        I’d heard that something along these lines got off the ground in the 1980s or so, and lasted at least ten years. I vaguely remember the name “Pharmaceutical Research Institute” and BMS being involved, but that’s all I’ve got… maybe someone better informed/with a longer memory can help out?

  2. Roger says:

    “One of the larger results of this should be a more standardized approach to such trials, because as things stand, they can be difficult to compare.”

    Considering how much is unknown about Alzheimer’s disease, how would one decide on what design of clinical trial to standardize to? And wouldn’t any benefit of being able to compare (failed) trials to each other be less important than the benefit of testing drugs in novel ways?

    1. loupgarous says:

      “One of the larger results of this should be a more standardized approach to such trials, because as things stand, they can be difficult to compare.”

      Considering how much is unknown about Alzheimer’s disease, how would one decide on what design of clinical trial to standardize to?

      Data transformation utilities allow combination of data from different designs of studies specifically to develop large databases in which apples-to-apples comparisons are more likely, and valid conclusions are easier to draw. (I helped design and create such a set of utilities for Big Pharma pharmacovigilance, safety and efficacy studies in the SAS System.)

      The advantage of such utilities is that they allow statisticians reviewing data from patient cohorts of all sizes to choose the study design which offers the most statistical power for each cohort.

      That issue comes up more than you’d think in NDAs and postmarketing safety databases, as regulators ask for data on how new drugs work in study patient sub-populations such as diabetics, patients with specific organ dysfunctions, children, the elderly, etc. – you can be working with low double-digit numbers of patients in some of these sub-studies.

      The idea ought to be to transform data from a range of study designs for inclusion in large safety and efficacy databases – which gives you better statistical power for the “big picture”. This process also gives you intermediate transformed versions of smaller studies into common formats from which apples-to-apples comparisons can be made.

      1. loupgarous says:

        Last sentence above ought to have read

        “This process also gives you intermediate transformed versions of smaller study datasets into common formats from which apples-to-apples comparisons can be made.”

      2. MikeC says:

        “Data transformation utilities allow combination of data from different designs of studies specifically to develop large databases in which apples-to-apples comparisons are more likely”

        More likely than extremely unlikely is still highly unlikely. It might work in situations where there is a single successful drug that newcomers in the same class are trying to beat.

        1. loupgarous says:

          me: “Data transformation utilities allow combination of data from different designs of studies specifically to develop large databases in which apples-to-apples comparisons are more likely”

          MikeC: “More likely than extremely unlikely is still highly unlikely. It might work in situations where there is a single successful drug that newcomers in the same class are trying to beat.”

          Not sure what you mean by “extremely unlikely”. My experience with large, cumulative clinical datasets comes from studies of endocrine drugs. For one client, the job was building a safety database with data from both investigational new drug and post-marketing studies of recombinant origin human growth hormone. For another client, the comparisons were between the manufacturer’s own recombinant origin human insulin and their recombinant origin insulin analogue (in which the positions of two amino acids in the insulin molecule were reversed in a way that caused the insulin not to polymerize, so that most of the insulin existed as a monomer). This is similar but not identical to what you said.

          However, for safety studies there’s nothing “extremely unlikely” about being able to discuss the safety of drugs in the IND study phase, in post-marketing, or in the combined experience of both phases. But in the cases the EMA are discussing, plenty of standard of care vs. study drug, placebo vs. study drug and study drug vs. other study drug comparisons can be made, depending on how large the patient cohorts are and how long the study runs.

          It’s probably “highly unlikely” that many Latin Square clinical studies can be transformed in this way. In the case of the recombinant insulin analogue IND study I mentioned, there were Latin Square studies, but most studies in that project were two-arm, mostly study drug treatment vs. standard of care treatment, to answer questions about study drug efficacy versus existing drug efficacy. In cases like that, dataset transformations can be run with little difficulty – a toy sketch of the idea is below.
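          To make that concrete: here’s a toy sketch (in Python/pandas rather than SAS, with made-up study names, column names, and values – nothing from a real project) of the kind of transformation I’m describing. Two studies report the same measurement under different column names and units, and both get mapped into one common schema so they can be pooled and compared.

          ```python
          # Hypothetical example: map two differently-structured study extracts
          # into one common schema so they can be pooled ("apples to apples").
          import pandas as pd

          # Study A reports glucose in mg/dL under its own column names.
          study_a = pd.DataFrame({
              "SUBJID": [101, 102],
              "ARM": ["DRUG", "PLACEBO"],
              "GLUC_MGDL": [110.0, 95.0],
          })

          # Study B reports glucose in mmol/L under different names.
          study_b = pd.DataFrame({
              "patient": [201, 202],
              "treatment": ["drug", "placebo"],
              "glucose_mmol": [6.2, 5.1],
          })

          def to_common(df, mapping, glucose_factor=1.0, study_id=""):
              """Rename columns to the shared schema and convert glucose to mg/dL."""
              out = df.rename(columns=mapping)
              out["GLUCOSE_MGDL"] = out["GLUCOSE_MGDL"] * glucose_factor
              out["TREATMENT"] = out["TREATMENT"].str.upper()
              out["STUDYID"] = study_id
              return out[["STUDYID", "SUBJID", "TREATMENT", "GLUCOSE_MGDL"]]

          pooled = pd.concat([
              to_common(study_a,
                        {"ARM": "TREATMENT", "GLUC_MGDL": "GLUCOSE_MGDL"},
                        study_id="A"),
              to_common(study_b,
                        {"patient": "SUBJID", "treatment": "TREATMENT",
                         "glucose_mmol": "GLUCOSE_MGDL"},
                        glucose_factor=18.0,  # mmol/L -> mg/dL, approximate
                        study_id="B"),
          ], ignore_index=True)

          # One schema, one dataset: summaries and comparisons can now run across studies.
          print(pooled.groupby("TREATMENT")["GLUCOSE_MGDL"].mean())
          ```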

  3. anon says:

    I’ve been seeing a lot of articles about implementing blockchain in clinical trials; it seems like this could be a new format for allowing verifiable clinical data to be shared widely while safeguarding patient data.

    1. Scott says:

      Yeah, whoever says that is trying to get your money. I suggest following the XKCD recommendations: don’t touch whatever they sold you, and bury it in the desert. Make sure you wear chemical-warfare-level protection. https://xkcd.com/2030/

      Blockchain, in simple terms, is a recording of every person (well, bitcoin wallet ID) who has ever held [ serial # item ]. If you’ve handled registered mail, that’s a paper blockchain. There is ZERO anonymity involved. In fact, anonymity would break part of what makes the blockchain work.
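      To make that concrete, here’s a toy sketch in plain Python – not any real blockchain implementation, and the wallet IDs and item serial are made up – of the “record of every holder” idea: an append-only chain where each entry commits to the previous one by hash, so history can’t be quietly rewritten, and where every entry is visible to anyone holding a copy of the chain. Which is also why anonymity isn’t part of the package.

      ```python
      # Toy hash chain: an append-only record of who has held an item.
      import hashlib
      import json

      def add_block(chain, holder_id, item_serial):
          """Append a transfer record, linked to the previous block's hash."""
          prev_hash = chain[-1]["hash"] if chain else "0" * 64
          record = {"holder": holder_id, "item": item_serial, "prev": prev_hash}
          record["hash"] = hashlib.sha256(
              json.dumps(record, sort_keys=True).encode()
          ).hexdigest()
          chain.append(record)

      def verify(chain):
          """Recompute each hash; tampering with an earlier record breaks the links."""
          for i, block in enumerate(chain):
              body = {k: v for k, v in block.items() if k != "hash"}
              expected = hashlib.sha256(
                  json.dumps(body, sort_keys=True).encode()
              ).hexdigest()
              if block["hash"] != expected:
                  return False
              if i > 0 and block["prev"] != chain[i - 1]["hash"]:
                  return False
          return True

      chain = []
      for holder in ["wallet_A", "wallet_B", "wallet_C"]:   # made-up wallet IDs
          add_block(chain, holder, "item_12345")

      print(verify(chain))                         # True
      print([b["holder"] for b in chain])          # the full, public custody history
      ```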

  4. Chrispy says:

    I learn much more about why programs got yanked over beers at conferences than I do from the literature. Frankly, companies see their failures as strategic knowledge with value, so they will be unwilling to share this data unless they are compelled to. Also, lots of perfectly good stuff gets shelved for business reasons (too much competition, for example). That’s a lot different than killing a program because of an on-target side effect. In the world of Matrix Metalloproteinases, companies observed skeletomuscular rigidity in their trials for years before it became common knowledge. I’m not sure the field has ever recovered from that — was it the use of non-specific agents? Do all MMPs do this? Who knows.

    1. loupgarous says:

      @Chrispy:

      In the world of Matrix Metalloproteinases, companies observed skeletomuscular rigidity in their trials for years before it became common knowledge. I’m not sure the field has ever recovered from that — was it the use of non-specific agents? Do all MMPs do this? Who knows.

      Supposedly, any AE in a clinical trial of an investigational new drug under US or EMA auspices is reported to the regulatory agency. Where I’ve worked, medical experts on project staff did in-house determinations of what caused AEs, but that wasn’t to decide whether to inform regulators, who received tabulations of all adverse events during studies of investigational new drugs.

      I worked on the “top ten adverse events” reporting macro at a Big Pharma client of our firm (really Big Pharma in this case), and that was their procedure – and other Big Pharma clients I worked for followed the same procedure. Playing hide and seek with an adverse event during any human trial is (legally) a big no-no.
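      Not that macro itself, obviously, but for anyone wondering what a “top ten adverse events” tabulation amounts to, here’s a toy pandas sketch with invented column names and data: count reported events by term and treatment arm, then keep the most frequent terms.

      ```python
      # Toy "top ten adverse events" tabulation from an invented AE listing.
      import pandas as pd

      # One row per reported event (data and column names are made up).
      ae = pd.DataFrame({
          "SUBJID": [1, 1, 2, 3, 3, 4, 5],
          "ARM":    ["DRUG", "DRUG", "PLACEBO", "DRUG", "DRUG", "PLACEBO", "DRUG"],
          "AETERM": ["Headache", "Nausea", "Headache", "Arthralgia",
                     "Headache", "Nausea", "Fatigue"],
      })

      # Event counts per term, split by treatment arm.
      counts = ae.groupby(["AETERM", "ARM"]).size().unstack(fill_value=0)

      # Rank terms by overall frequency and keep the ten most common.
      top10 = counts.loc[counts.sum(axis=1).sort_values(ascending=False).head(10).index]
      print(top10)
      ```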

      In “the world of Matrix Metalloproteinases”, was skeletomuscular rigidity observed in animals, humans, or both?

    2. johnnyboy says:

      Indeed, this is basically the central paradox of the Pharma industry – broadly speaking you’re working for the common good, but you’re also in a competitive system, where hiding some information may help you against competitors but goes against the common good. It trickles down to the more mundane, basic aspects of the work: e.g., if you develop, say, a biological reagent internally for which there isn’t a commercial equivalent, do you keep it for exclusive company use for a marginal competitive advantage, or do you make it available externally to help benefit research in general?
      The EMA initiative is a brilliant way to help the common good while not threatening competitiveness, and should be generalised to as many fields as possible.

  5. Alan Goldhammer says:

    The issue with Alzheimer’s is what clinical endpoint should be used for efficacy studies. The large multi-party neural imaging effort is seeking to determine whether there are biomarkers that correlate with observed clinical deterioration of function. Until this is settled, drug development is going to continue to be difficult. The development of anti-depressants, while not optimal, is easier because the observational studies needed to show efficacy are better accepted. Even so, looking at the clinical trial results for anti-depressants shows that marginal efficacy is the usual outcome.

  6. electrochemist says:

    Just stating the obvious, but there is a huge difference between failed CETP drugs and Alzheimer’s drugs. The CETP inhibitor molecules tested in the clinic actually worked, raising HDL remarkably. The kicker was that these trials indicated that raising HDL pharmacologically was of no benefit to patients. The results were unambiguous.

    Alzheimer’s is a different beast. It isn’t obvious, in most cases, that the drugs involved in failed trials would *not* work if dosed differently, if intervention occurred earlier, if a different cohort were used, if if if if…. (True believers in the clinical development orgs at these companies won’t admit failure.) So, collaboration with competitors is unlikely.

    The one possible exception could be BACE inhibitors. There have been enough candidates tested in the clinic to prove that it is possible to inhibit BACE1 very robustly, decrease the production of beta amyloid from APP, and have no positive effect on clinical patients.

    1. Derek Lowe says:

      Quite right – and I do indeed think that the BACE inhibitors and the amyloid-lowering results seen in the antibody trials are what move the amyloid therapies as a whole closer in to CETP territory.
