

A Terrific Paper on the Problems in Drug Discovery

Here’s a really interesting paper from consultants Jack Scannell and Jim Bosley in PLoS ONE, on the productivity crisis in drug discovery. Several things distinguish it: for one, it’s not just another “whither the drug industry” think piece, of which we have plenty already. This one gets quantitative, attempting to figure out what the real problems are and to what degree each contributes.

After reviewing the improvements in medicinal chemistry productivity, screening, secondary assays and other drug discovery technologies over the last couple of decades, the authors come to this:

These kinds of improvements should have allowed larger biological and chemical spaces to be searched for therapeutic conjunctions with ever higher reliability and reproducibility, and at lower unit cost. That is, after all, why many of the improvements were funded in the first place. However, in contrast [12], many results derived with today’s powerful tools appear irreproducible [13] [14] [15] [16]; today’s drug candidates are more likely to fail in clinical trials than those in the 1970s [17] [18]; R&D costs per drug approved roughly doubled every ~9 years between 1950 and 2010 [19] [20] [1], with costs dominated by the cost of failures [21]; and some now even doubt the economic viability of R&D in much of the drug industry [22] [23].

The contrasts [12] between huge gains in input efficiency and quality, on one hand, and a reproducibility crisis and a trend towards uneconomic industrial R&D on the other, are only explicable if powerful headwinds have outweighed the gains [1], or if many of the “gains” have been illusory [24] [25] [26].

Note the copious referencing; this paper is also a great source for what others have had to say about these issues (and since it’s in PLoS, it’s open-access). But the heart of the paper is a series of attempts to apply techniques from decision theory/decision analysis to these problems. That doesn’t make for easy reading, but I’m very glad to see the effort made, because it surely wasn’t easy writing, either (the authors themselves advise readers who aren’t decision theory aficionados to skip to the discussion section, and then work back to the methods, but I wonder how many people will follow through on the second part of that advice). Scannell and Bosley mention that concepts from decision theory are widely used at the front end of drug discovery programs (screening) and at the end (clinical trials), but not much in between, and this paper could be described as an attempt to change that. They believe, though, that their results apply not only to drug discovery but to other situations like it: rare positives in a very large landscape of widely varied potential negatives, with a wide range of tools available to (potentially) narrow things down. Figure 1 in the paper is a key overview of their model; definitely take a look as you go through.

So then, feeling as if you’ve been given permission to do what one normally does anyway (flip to the end section!), what do you find there? There are some key concepts to take in first. One is “predictive validity” (PV), which is what it sounds like: how well does a given assay or filter (screening data, med-chem intuition, tox assay, etc.) correlate with what you really want to get out of it? As they mention, though, the latter (the answer against the “reference variable”) generally only comes much later in the process. For example, you don’t know until you get deep into clinical trials, if even then, whether your toxicology studies really did steer you right when they pointed to your clinical candidate as likely to be clean. They also use the phrase “predictive model” (PM) to refer to some sort of screening or disease model that’s used as a decision-making point. With these terms in mind, here’s a clear takeaway:

Changes in the PV of decision variables that many people working in drug discovery would regard as small and/or unknowable (i.e., a 0.1 absolute change in correlation coefficient versus clinical outcome) can offset large (e.g., 10 fold or greater) changes in brute-force efficiency. Furthermore, the benefits of brute-force efficiency decline as the PV of decision variables declines (left hand side of both panels in Fig 4). It is our hypothesis, therefore, that much of the decline in R&D efficiency has been caused by the progressive exhaustion of PMs that are highly predictive of clinical utility in man. These models are abandoned because they yield successful treatments. Research shifts to diseases for which there are poor PMs with low PV [78]. Since these diseases remain uncured, people continue to use bad models for want of anything better. A decline in the average PV of the stock of unexploited screening and disease models (PMs) can offset huge gains in their brute-force power (Fig 4).
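That first claim is concrete enough to sanity-check with a toy Monte Carlo simulation (a sketch of my own, not the authors’ decision-theory model): give each candidate a true clinical quality, let an assay read it out with correlation rho standing in for PV, and keep whichever candidate the assay ranks first.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_pick_quality(n_candidates, rho, n_trials=4000):
    """Average true quality of the candidate an assay ranks first.

    True quality x ~ N(0,1); assay readout y = rho*x + sqrt(1-rho^2)*noise,
    so corr(x, y) = rho -- a crude stand-in for predictive validity (PV)."""
    total = 0.0
    for _ in range(n_trials):
        x = rng.standard_normal(n_candidates)
        y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n_candidates)
        total += x[np.argmax(y)]          # true quality of the assay's top pick
    return total / n_trials

baseline  = top_pick_quality(1_000, rho=0.4)    # today's screen
brute     = top_pick_quality(10_000, rho=0.4)   # 10x brute-force throughput
better_pv = top_pick_quality(1_000, rho=0.5)    # same throughput, PV up by 0.1

print(f"baseline {baseline:.2f} | 10x throughput {brute:.2f} | +0.1 PV {better_pv:.2f}")
```

With these (invented) numbers, the 0.1 gain in correlation beats the ten-fold increase in screening size: the expected best-of-n of a noisy ranking grows only like the square root of log n, while the quality of the selected candidate scales directly with rho.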

Let’s all say “Alzheimer’s!” together, because I can’t think of a better example of a disease where people use crappy models because that’s all they have. This brings to mind Bernard Munos’ advice that (given the state of the field), drug companies would be better off not going after Alzheimer’s at all until we know more about what we’re doing, because the probability of failure is just too high. (He was clearly thinking, qualitatively, along the same lines as Scannell and Bosley here). Munos was particularly referencing his former employer, Eli Lilly, which has been placing a series of huge bets on Alzheimer’s in particular. If this analysis is correct, this may well have been completely the wrong move. I’ve worried myself that even if Lilly manages to “succeed”, they may well end up with something that doesn’t justify the costs that will surely follow to the health care system, but which will be clamored for by patients and families simply because there’s so little else. (There’s that “well, it’s bad but it’s all we’ve got” phenomenon yet again).

I’m very sympathetic indeed to this argument, because I’ve long thought that a bad animal model (for example) is much worse than no animal model, and I’m glad to see some quantitative backup for that view. The same principle applies all the way down the process, but the temptation to generate numbers is sometimes just too strong, especially if management really wants lots of numbers. So how does that permeability assay do at predicting which of your compounds will have decent oral absorption? Not so great? Well, at least you got it run on all your compounds. In fact, the paper makes this exact point:

We also suspect that there has been too much enthusiasm for highly reductionist PMs with low PV [26] [79] [25] [80] [81] [74] [82]. The first wave of industrialized target-based drug discovery has been, in many respects, the embodiment of such reductionism [1] [83] [84] [74]. The problem is not necessarily reductionism itself. Rather, it may be that good reductionist models have been difficult to produce, identify, and implement [85] [82], so there has been a tendency to use bad ones instead; particularly for common diseases, which tend to have weak and/or complex genetic risk factors [86] [83] [87]. After all, brute-force efficiency metrics are relatively easy to generate, to report up the chain of command, and to manage. The PV of a new screening technology or animal PM, on the other hand, is an educated guess at best. In the practical management of large organisations, what is measureable and concrete can often trump that which is opaque and qualitative [65], even if that which is opaque and qualitative is much more important in quantitative terms.

Exactly so. It sounds to some managerial ears as if you’re making excuses when you bring up such things, but this (to my mind) is just the way the world is. Caligula used to cry out “There is no cure for the emperor!”, and there’s no cure for the physical world, either, at least until we get better informed about it, which is not a fast process and does not fit well on most Gantt charts. Interestingly, the paper notes that the post-2012 uptick in drug approvals might be due to concentration on rare diseases and cancers that have a strong genetic signature, thus providing models with a much better PV. In general, though, the authors hypothesize that coming up with such models may well be the major rate-limiting step in drug discovery now, part of a steady decline in PV from the heroic era decades ago (which more closely resembled phenotypic screening in humans). This shift, they think, could also have repercussions in academic research as well, and might be one of the main causes for the problems in reproducibility that have been so much in the news in recent years.

As they finish up by saying, we have to realize what the “domains of validity” are for our models. Newtonian physics is a tremendously accurate model until you start looking at very small particles, or around very strong gravitational fields, or at things with speeds approaching that of light. Similarly, in drug discovery, we have areas where our models (in vitro and in vivo) are fairly predictive and areas where they really aren’t. We all know this, qualitatively, but it’s time for everyone to understand just what a big deal it really is, and how hard it is to overcome. Thinking in these terms could make us place more value on the data that bear directly on predictive validity and model validity (read the paper for more on this).

In fact, read the paper no matter what. I think that everyone involved in drug discovery should – it’s one of the best things of its kind I’ve seen in a long time.




41 comments on “A Terrific Paper on the Problems in Drug Discovery”

  1. anonymouse says:

    Sounds like a very complicated way of saying something simple: we don’t understand the biology, and all the low-hanging fruit has already been picked.

    1. Bunsen Honeydew says:

      That’s exactly what I was thinking.

    2. Am I Lloyd says:

      Actually, the paper explicitly argues against the “low-hanging fruit” theory you mentioned:

      “One standard explanation for Eroom’s Law is that the “low hanging fruit” have been picked. We and others have been critical of such explanations [77] [1]. First, they generally leave the nature of the fruit undefined (but there are exceptions [78]). Second, such explanations may underestimate the difficulty of historical discoveries [77] [24] [1]. Third, drugs that come to market reduce the incremental economic and therapeutic value of undiscovered or unexploited therapeutic candidates without making such candidates harder to discover per se. This is the so-called “better than the Beatles problem” [1]. Fourth, low hanging fruit explanations risk tautology, because they use the efficiency of R&D as the measure of the height at which as-yet-unpicked fruits are hanging [1].”

    3. bgcarlisle says:

      In the third sentence of their discussion, they explicitly state that they’re not making a “low-hanging fruit” argument.

      1. anonymouse says:

        Low Predictive Validity (PV), and its high sensitivity to apparently minor modifications of conditions (or knowledge) = “Don’t understand the biology”.

        Low-hanging fruit: Let’s take just 3 examples.

        1. Beta-blockers: developed ~60 years after the discovery of adrenaline and the subsequent working out of the *physiology* involved (mind you, no clear picture yet of what an adrenergic receptor really looked like).

        2. HMG-CoA Reductase inhibitors: developed after it was abundantly clear that cholesterol had a lot to do with heart disease, and after the biosynthetic route to cholesterol was figured out, knowing that HMG-CoA reductase was the rate-limiting step.

        3. Antifolates: developed (rationally) after the observation that folic acid stimulated the growth of acute lymphoblastic leukemia cells. Led directly to methotrexate, pyrimethamine, trimethoprim, and others.

        1. GSKpipeline says:

          I agree with you: if somebody has a candidate drug (or several) and got a big chunk of cash in advance, it is very frustrating to fail repeatedly in clinical trials (even Phase I). Personally, I would be chasing around like a crazy cow to find any predictive validity (PV) that will work.

      2. Design Monkey says:

        Actually, they are saying the same thing, just in different words.

        “Low-hanging fruit”: problems for which models with good enough predictive validity were easy to stumble upon and use.

        “High-hanging ones”: problems for which, despite all the efforts, no model with good predictive validity and ease of use exists at the current level of knowledge.

    4. Unchimiste says:

      They actually address the low-hanging fruit thing in the paper, and they state it might be a misleading concept. Not sure they’re right, but it’s worth reading.

      1. When thinking about “low hanging fruit” versus other ways of framing drug-development challenges, it’s helped me to actually visualize what each means. If I imagine a tree with branches at several layers, and at each layer there’s one piece of fruit, the fruit at the bottom is easier to pick. All other fruit is progressively harder to pick.

        In an alternative conception, based on the “better than the Beatles” framing, I imagine instead a box with 100 white balls, 10 black balls, and 1 red ball, with the box shaking so that the balls are rolling around at random. If I close my eyes and pick one, about 10% of the time I’ll get a black ball. If I pick some more, my chances of getting a black ball again are still about 10% (a little less, but for practical purposes…), but a black ball does me no good because it’s just as effective as the first black ball. I want a red ball, which would be a more efficacious drug. So in this case, the black balls (i.e., effective drugs) are still approximately as likely to crop up as before, but finding another one is effectively worth nothing from a practical standpoint, because the environment (the FDA, the payers) has imposed a constraint such that it’ll be very difficult to make a profit from that drug. So that’s why “low hanging fruit” is not the only way to explain the increasing difficulty in finding new drugs. At least that’s how I think of it.

        I suppose one could say this is semantics and one way or the other it’s just harder to find new drugs, but I think the strategies one takes to solve the problem are going to depend on what you think is going on.
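The box-of-balls analogy above is easy to run as code (purely an illustration of the commenter’s framing, nothing from the paper): the chance of drawing an effective-but-me-too black ball never declines, yet once you hold one, only the rare red ball adds any value.

```python
import random

random.seed(42)

# The commenter's box: 100 white (inactive), 10 black (effective drugs),
# 1 red (a genuinely better drug). Draws are with replacement.
box = ["white"] * 100 + ["black"] * 10 + ["red"]

def p_black(n_draws):
    """Empirical chance of drawing black in a fresh batch of draws."""
    return sum(random.choice(box) == "black" for _ in range(n_draws)) / n_draws

# Discovery probability does not decline as you keep drawing...
early, late = p_black(50_000), p_black(50_000)

# ...but the value of a black ball collapses once you have one: a duplicate
# is "not better than the Beatles", so only the first of each colour counts.
novel_value = {"white": 0, "black": 1, "red": 10}
seen, total = set(), 0
for _ in range(1_000):
    ball = random.choice(box)
    if ball not in seen:               # repeats of a colour are worthless
        total += novel_value[ball]
        seen.add(ball)

print(f"P(black) early ≈ {early:.3f}, late ≈ {late:.3f}, novel value: {total}")
```

Both empirical probabilities stay near 10/111 ≈ 0.09, while almost all of the value of a thousand draws comes from the single red ball.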

    5. Rose says:

      It is pretty obvious to the average pharmacologist in industry that bad models result in a low success rate in clinical trials. The issue is really how to find better models, and the current strategy is “big data”: throw enough omics data at a problem and a solution will emerge. So far that’s not working all that well. Perfect, though, to publish this analysis in PLoS ONE, which is full of highly quantitative analyses that no one else will ever be able to reproduce, generating yet more unpredictive models.

  2. Ash (Wavefunction) says:

    The domain of applicability argument is important, especially for computational models where training sets don’t usually approximate real-life scenarios. For instance, in structure-based drug design, real-life scenarios would include poorly resolved X-ray crystal structures of proteins with missing loops and/or homology models, along with an insufficient number of active/inactive compounds with questionable measurements of binding affinity, in assays whose own domains of applicability and margins of error with respect to real living systems are usually unknown. Many computational models working with structure usually do away with most of these factors.

  3. the magic 8-ball says "no" says:

    Actually, I find the paper rather disappointing. I agree they do a good job highlighting all the usual caveats of drug discovery R&D (e.g. lack of biological validation and translation) from 8 miles high, but it falls flat in validating and demonstrating its own proof of concept. It’s unclear what data sets were used to validate their methodologies, other than toy (hypothetical) data. As consultants, they should partner with a pharma company or two to work with real-world data sets to validate their approach (assuming they can effectively arm-wrestle R&D management for the relevant data). I didn’t see reference to pharma data in the article or in the acknowledgements. Given the state of pharma, you’d think that upper management would be willing to part with a little $ to explore the application of decision theory, unless of course it’s already clear to them that executive decisions are irrefutable (or maybe they’d rather not have a second look). At least it would be more cost-effective than paying McKinsey to make the decisions for them.

    1. Jack Scannell says:

      To the magic 8-ball: I am one of the authors, and I do have sympathy with your comments. We have done some very preliminary work on tyrosine kinase inhibitor data. We have also had some discussions with people who do have access to large historical data sets. However, we decided to try to publish the analytic framework in full, and discuss its possible implications, before making a serious attempt to get our hands on other people’s data. The serious attempts are just starting, in the form of a couple of applications to UK biomedical research funding agencies (Derek correctly describes me as a “consultant”, but I do some work associated with Edinburgh and Oxford Universities). One application, which will be submitted in the next funding round, is for a project along the lines of “a quantitative history of screening and disease models.” We hope to access both industrial and academic data. We want to see if predictive validity has indeed declined and, if it has, whether one can partition the blame between exhaustion (i.e., we have used the best models and moved on) and changing scientific or industrial fashion. The ultimate aim is to get a little better at identifying, on a prospective basis, the characteristics of models that turn out to have high predictive validity.

      1. the magic 8-ball says "ask again later" says:

        Hi Jack,

        I wish you well. However, it will be no easy task to get the data sets that you need, especially if you’re after pharma (internal and competitive) data, for several reasons. The data are usually not organized in a way that makes it easy to determine what variables contributed to R&D decisions, how the decisions were made, who made them, and what the outcomes were. This is exacerbated by the fact that there are a lot of transition points (operational hand-offs and pitfalls), and a fair number of years, in the course of an asset making it from the bench to the market. Also, as soon as an R&D project transitions, successfully or not, project teams expediently move on to the next thing, often at the expense of not adequately capturing the data necessary for longitudinal or retrospective analyses. In short, R&D efforts are measured and rewarded by “what are you doing for me now”, not “how and what did you deliver on past programs”, so it’s a self-perpetuating cycle of controlled chaos. You’ll have to get buy-in from top management that resources should be dedicated to retrospective analysis and to internal cooperation downstream or across lines. Also, transparency may be difficult. Sometimes the dirty laundry of past programs has unseemly tread marks; it’s easier to forget, spin, bury, or burn it than to air it out.

        Honestly, it would be a step forward to simply keep a running tally of R&D decisions made by management and associated outcomes – are they any better than a cohort of downstream scientists/managers, or a naive cohort? – are the decisions of some managers/executives better than others? You’d probably need an independent entity to do the tallying, akin to the U.S. Congressional Budget Office. Accountability seems to get skewed as you move up the food chain of decisions. True, businesses are not democracies, but if pharma is trying to be more profitable, they should be more open to how R&D decisions translate to new and better medicines.

        Based on some decisions that I’ve witnessed, this seems to be the source:

  4. milkshake says:

    The problem is with management, and with progressively longer and more expensive clinical trials. Also, the CEOs are obscenely overpaid to lie to the investors. The wishful thinking permeates from the top down (“we will save 2 billion a year in synergies post-merger” or “genomics/proteomics will give us ten good new targets a year” or “we are going to develop only blockbuster drugs; anything with sales below a billion a year is of no interest to us”).

    If you are smart enough to become a CEO, you realize that any decision you take about medchem and biology research will not influence the stock performance of your company during your tenure. Hence any atrocious re-aligning, silo-busting, six-sigma, open-plan scheme will do, just to prove your mettle (and your 30-million-a-year salary), and you don’t even have to invent the crap yourself: a consulting firm will deliver neat slides for you. You use the reorganization/merger to centralize power, you get rid of opponents and silence the critics, and you outsource the research; even if the company’s own research productivity is gonna go to crap in 10 years, it will be someone else’s problem by then. You’ve got your mansion by the lake built already.

    1. Stephen Frye says:

      While painfully cynical, I agree that there is a very serious sociological problem with the incentive structure in senior management positions that further confounds the actual scientific challenges facing R&D productivity. I make no assumptions of malevolence, but the time delay between the latest reorg and its influence on any useful measure of productivity is much longer than the t1/2 of a CEO, head of R&D, etc. This needs to be addressed as well. Maybe it is in small biotech? My experience in 20 years at GSK was that middle management’s job (mine) was to try to shield good projects and people from the churn at the top.

  5. tuan says:

    Sounds like they are saying that we don’t exactly know what we are doing, and that we cannot quantify it. Yes. Otherwise:
    1.) Principle of diminishing returns
    2.) The pursuit and misuse of useless information
    3.) Often no relevant research in (big) pharma, because of (cost) pressure and the short-sightedness of management/shareholders
    4.) Translational science instead of pharmacology


  7. John McDonald says:

    What has changed over the years is our vastly improved understanding of cellular biology. Consequently drug discovery now focuses too exclusively on ‘cellular pharmacology’ to the exclusion of tissue, organ, or animal pharmacology. Why? Because we can now create a concrete logical argument that allows us to advance our drug candidates through the approval process even though we never adequately address the in vivo conundrum. Too often we completely ignore what we know we don’t know.

    The challenge to drug discovery is improving our understanding of the cellular-biology-to-whole-organism linkage. We have seen huge government support for genetic and cellular science, with minimal support for animal pharmacology. Similarly, in spite of FDA support for innovative clinical trials, how many have actually been focused on improving our understanding of the link between tissue/organ/animal pharmacology and human disease? Large pharma has many of the tools (e.g. selective compounds and MoAbs) to run these scientific trials, but won’t sacrifice development time to do the needed science. The unknowns are known, but the courage to address them has been lacking. This is a challenge that large pharma, academia, and NIH/NCATS should tackle in a concerted manner.

    1. anonymous says:

      You are very correct about the improved knowledge in cell biology, and the lesser improvement in knowledge of animal models. There are two additional obstacles to improving the situation. In order to run clinical trials of, let’s say, specific receptor antagonists in a disease such as depression, we need better ways to measure the outcomes. This has been a long-standing goal, with considerable investment in imaging, biomarkers, etc., and so far there has not been a breakthrough that allows a relatively efficient trial to take place. The other issue is that for some diseases there may never be a model, either cellular or whole-animal. There are limits to the correspondence of animal and human disease, and to the ability to translate cell studies to whole animals. We push against those limits every day using ever more sophisticated techniques, but again, so far, the ability to translate to human disease has been elusive. The journal this blog appears in is devoted to this endeavor, but real successes that would support a decision-making clinical trial are few and far between.
      So the analysis of the paper is fine, it quantitates the frustration felt by clinicians and clinical scientists at the lack of understanding that results in such a high failure rate of promising ideas. Whether the investment should be in better cellular models, animal models, clinical outcome measures, all of the above, molecular signatures that tie them all together, analysis tools that can assess all of the measurements together (smart models) etc remains very unclear. At a time when investment in the scientific underpinnings as opposed to products (or hoped-for products) continues to shrink, the situation is not likely to improve soon.

  8. Slava Bernat says:

    The paper gives a model of a theoretical framework for biomedical research, so the discussion only makes sense if the model adequately describes reality. Given the assumptions (e.g., normal distribution of decision variables) and the simplified decision workflow, that seems to be a legitimate concern. So it gives some fresh hypotheses, but it also needs some real-world testing, at least a retrospective one.

  9. DanielT says:

    The interesting question is why drugs are failing in clinical trials. Is it because they don’t work, is it because of unexpected side effects, or is it because the drug won’t make enough money if only used in the Phase I/II patient population (i.e. management gets greedy and tries to expand the market by including marginal patients at Phase III)? Are regulators less willing to tolerate side effects than in the past? How many of the old drugs would get past the FDA today?

    I do agree that a bad model is worse than no model (look at the damage caused by the bad sepsis models). Will regulators let you go straight into man with no data from animal models (just tox) if all the animal models are bad?

  10. anon says:

    Maybe one thing they fail to capture in their theoretical framework is that not all targets are created equal: we have many highly predictive models for disease “cures”, but we lack the chemical matter with which to tackle them. They can talk all they want about how predictive our models and decision-making are, but there are plenty of targets we know about that have tremendous biological validation; we just need to find new ways to get at them.

    Second, it sounds like the takeaway is that we don’t understand the biology well enough for it to be worth pharma going after a lot of these disease areas currently. So I guess they just need to wait for academics to figure it out, and then pharma can come back in and say academia doesn’t contribute to drug discovery. Does that sound right?

  11. Peter Kenny says:

    I need to take a proper look at the paper but will make some general comments. One big challenge in creating a theoretical framework for drug discovery is that it is extremely difficult to generate the quantitative inputs that the models demand. In this sense there are parallels with systems biology simulations. It’s worth remembering that free intracellular concentration is not currently something that we can measure for an arbitrary compound in live humans. One thing that I found missing from this vision of the woes of drug discovery was the idea that drug discovery can also be seen in terms of design of experiments and not just as an exercise in prediction.

    That said, there is certainly scope to improve the way we think about and conceptualize drug discovery. The reverence with which the rule of 5 is still treated may actually say more about the current state of drug discovery science than about the science behind the rule. Ligand efficiency is widely used in drug discovery even though the metric provides a view of chemical-biological space that changes with the units in which IC50s are expressed. Even after voodoo correlations have been exposed, people continue to tout them as penetrating insight.

    One point that I make when doing presentations is that if we analyze our data badly then those who fund our activities may conclude that the difficulties we face are of our own making. I’ve included a 2011 blog post on drug discovery’s woes as the URL for this comment. It is entitled ‘Dans la merde’.
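Kenny’s unit-dependence point about ligand efficiency can be made concrete with two invented compounds (the numbers are purely illustrative): dividing log-potency by heavy-atom count gives a comparison that flips depending on the concentration unit the potency is referenced to.

```python
import math

def ligand_efficiency(ic50_molar, heavy_atoms, unit_molar=1.0):
    """Log-potency per heavy atom, with the potency referenced to an
    arbitrary concentration unit (1.0 = molar) -- the choice that is
    usually left implicit when ligand efficiency is quoted."""
    return -math.log10(ic50_molar / unit_molar) / heavy_atoms

# Hypothetical compounds: A is small and modestly potent, B larger and more potent.
A = {"ic50": 1e-6, "ha": 20}   # 1 uM, 20 heavy atoms
B = {"ic50": 1e-8, "ha": 40}   # 10 nM, 40 heavy atoms

for unit in (1.0, 1e-5):       # reference potency to 1 M, then to 10 uM
    le_a = ligand_efficiency(A["ic50"], A["ha"], unit)
    le_b = ligand_efficiency(B["ic50"], B["ha"], unit)
    print(f"unit {unit:g} M: LE(A)={le_a:.3f}, LE(B)={le_b:.3f} -> "
          f"{'A' if le_a > le_b else 'B'} looks more efficient")
```

Referenced to 1 M, compound A looks more ligand-efficient (0.300 vs 0.200); referenced to 10 µM, compound B does (0.050 vs 0.075), so the “view of chemical-biological space” really does change with the unit.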

  12. MoMo says:

    From this proposed abstract theory I learned 3 things:
    1) Calling molecules “low-hanging fruit” shows disrespect to chemistry and chemists everywhere; MOre MOlecules are needed
    2) It’s better to have no CEO than a bad one, and
    3) In decision theory, the PC term for decision makers, “boundedly rational”, is interchangeable with “F%@&ing Stupid”

  13. JJR says:

    And here I thought the productivity decrease was because we have to write four risk assessments for every one experiment we run (a real result), or the fact that we spend 50% of our time in meetings. I’m pretty sure bureaucracy causes an extreme decrease in productivity.

  14. David Cockburn says:

    It sounds as if the biopharma industry is spending too much money on clinical work, and that it would be better off funding theoretical academic work to understand human and animal physiology.
    The problem is that academic investment is the responsibility of governments and charities, so it is difficult for a company to justify investment in results that will be shared with the world.

    1. simpl says:

      David, I think you are right to concentrate on biopharms, because there are several parameters which differ from finding chemicals. This distinction might even give the authors a chance to test their models and quantify the differences.
      The first-generation biologicals, essentially antibodies and growth factors, are based on natural mechanisms, so toxicity risks are smaller and the clinical trials can be small and targeted.
      They are also expensive to make (just starting to transition from being a cottage industry), injectable only, unstable, and readily decomposed by the body (though the effect may last much longer than the residence time). Those disadvantages always suggested to me that chemists should be working on second-generation leads where they can improve on the biologicals, like the beta-lactams, or heparin.

      To your suggestion that biopharms should research rather than develop: many with a full set of ideas do just that. But those small targeted trials mean that candidates after Phase I should bring in more money (they do), and, given that rewards come only after launch and marketing, biopharms with rich sugar daddies often take the risk to go all the way.

  15. Andy says:

    So, a rather obvious question from a chemist who has little experience of big pharma and med chem: when a project in a specific disease area is conceived, whose job is it to ask the obvious question, “Will these super-potent inhibitors of whatever-ase actually translate into an effective treatment for nasty disease in people?”
    Or is it pointless asking this, and it gets kicked into the long grass? Or is it continually asked?
    The uncertainty would kill me; how can people work like that?

    1. alchemist says:

      That question is asked time and time and time again, in a variety of forms (Is the biology making sense? Is there a risk of tox? Does it make sense clinically? Does it make sense economically?).
      The problem is that it is a darn hard question to answer, and in reality you are never really sure.

      And yes it is a difficult way to work, at least until you get used to it… 😉

    2. me says:

      Generally, that question is answered in the ‘target validation’ package at the beginning of the project before said inhibitor was even thought of. The problem is that very often the companies are working on diseases where there is no known treatment, so they don’t have the molecules that DO translate into clinical success to compare theirs against.

      For me, this article says: drug discovery is risky and getting riskier.

      Also: basic science is the key de-risker, and nobody wants to do it anymore.

  16. Kelvin Stott says:

    A few thoughts of my own:

    1. Each new drug raises the bar for the next and thus increases the failure rate (at greater cost) while reducing the scope for incremental improvement (added benefits).
    2. We’ve already found what we can with current approaches (or at least we are getting diminishing returns as we pick the biggest, ripest AND most accessible fruit first), and each new failed attempt just increases the likelihood that the same old approach (and similar reductionist approaches) will keep failing.
    3. Bottom line: The old model is dead, and we’re beating a dead horse. We need a completely new, top-down, non-reductionist, systems-based approach to drug discovery, with better, more predictive systems-based validation models.

    1. MH says:

      If I had a dollar for every time I’ve heard this I would be a very rich man. Sure – all that you cite would be nice. I’ve watched many people try to implement exactly these types of approaches at a fantastic cost – with little to show for it except generalities implicating macrophage and inflammation biology for any disease you care to mention.

      I suspect the concept is also flawed – we know from plenty of human genetics and human pharmacology that perturbing a single node can have dramatic effects on the organism, suggesting the ‘reductionist’ model has plenty of life. Likewise there are clear examples of compounds that may be working through general system-wide effects. Prosecuting the former is a hell of a lot easier than prosecuting the latter, and until we come up with “systems-based validation models” that truly correlate better with the clinical outcome we care about AND that we can iterate on in a reasonable time-scale, these pronouncements will leave me cold.

      1. Kelvin says:

        I meant “systems-based” as in integrated drug discovery system, rather than just biological system, although the former could include forms of the latter.

  17. Phil says:

    I wonder if high-throughput screening is sending people down blind alleys. Outside of pharma, in areas like coatings formulation, educated guesses by seasoned formulators are still better than statistical DOE. Both DOE and HTS are attractive to MBAs, who think they can turn R&D into a predictable process where x person-hours will result in one new product launch.

    I understand pharma research in the old days used to be similar to how formulated products like coatings are often still developed, using educated guesses by seasoned med-chemists as a starting point instead of whatever results the HTS machine spit out.

    1. Phil II says:

      The idea that statistics could be used to render an unpredictable process predictable is ironic, given that statistics and probability are for the most part about understanding variation, the role of randomness, and the limits of prediction.

      It’s like flipping a coin repeatedly and observing a run of heads, then thinking you can find some way to make it happen again if you just understand the statistics well enough.

  18. Matt Nelson says:

    Their DT approach raises awareness of the importance of starting the discovery process with the highest ratio of potential successes versus eventual failures (A/U) and basing progression decisions on more predictive models. This is easier said than done, but there is a lot of opportunity to do better. More can be done to study what kinds of evidence are predictive of successful drugs. I was pleased to find that human genetics is one of those, which taken at face value shows that targets with genetic evidence are twice as likely to succeed as those without. Exploring other factors with demonstrated predictive validity needs more research. In my experience, decision makers take notice. At GSK I am seeing more weight being put on such evidence.
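    [The interplay Matt describes – base rate of viable targets (A/U) versus the predictive validity of a progression filter – can be sketched with Bayes’ rule. This is a minimal illustration with made-up numbers, not figures from the Scannell/Bosley paper: `post_filter_success` and the rates below are hypothetical.]

    ```python
    # Sketch: how the base rate of viable targets and a filter's
    # predictive validity combine to set the post-filter success rate.

    def post_filter_success(base_rate, sensitivity, false_positive_rate):
        """Positive predictive value: P(viable | filter says 'advance')."""
        true_pos = sensitivity * base_rate
        false_pos = false_positive_rate * (1 - base_rate)
        return true_pos / (true_pos + false_pos)

    # Illustrative numbers only: a 5% base rate of viable targets, and a
    # filter (say, requiring genetic evidence) that halves the rate of
    # false positives while keeping the same sensitivity.
    baseline = post_filter_success(0.05, 0.8, 0.5)    # ~0.08
    with_evidence = post_filter_success(0.05, 0.8, 0.25)  # ~0.14
    print(f"baseline: {baseline:.2f}, with better filter: {with_evidence:.2f}")
    ```

    [Note that with a low base rate, halving the false-positive rate nearly doubles the fraction of advanced candidates that are real – consistent in spirit with the “twice as likely to succeed” observation, and with the paper’s point that the quality of the decision filter can matter more than the volume of candidates screened.]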

Comments are closed.