
Business and Markets

The Cost of New Drugs

I’m continuing my look at Bernard Munos’ paper on the drug industry, which definitely repays further study (previous posts here, here, and here). Now for some talk about money – specifically, how much of it you’ll need to find a new drug. The Munos paper has some interesting figures on this question, and the most striking is that the cost of getting a drug all the way to market has been increasing at an annual rate of 13.4% since the 1950s. That’s a notoriously tough number to pin down, but the various best estimates of the cost make an almost perfectly linear log plot over the years. We can usefully contrast that with figures from PhRMA indicating that large-company R&D spending has been growing at just over 12% per year since 1970. From that standpoint, we’ve apparently gotten somewhat more efficient at what we do, since NME output has been pretty much linear over that time.
But that linear rate of production allows Munos to take a crack at a $/NME figure for each company on his list, and he finds that less than one-third of the industry has a cost per NME of under $1 billion – for some companies it’s substantially more. Of course, not every NME is created equal, but you’d have to think that there’s large potential for mismatches between development cost and revenues when you’re up at these levels. Munos also calculates that the chance of a new drug achieving blockbuster status is about 20%, and that these odds have also remained steady over the years – this despite the way that many companies try to skew their drug portfolios toward drugs that could sell at this level.
How much of these costs are due to regulatory burden? A lot, but for all the complaining that we in the industry do about the FDA, they may, in the long run, be doing us a favor. Citing these three studies, Munos says that:

. . .countries with a more demanding regulatory apparatus, such as the United States and the UK, have fostered a more innovative and competitive pharmaceutical industry. This is because exacting regulatory requirements force companies to be more selective in the compounds that they aim to bring to market. Conversely, countries with more permissive systems tend to produce drugs that may be successful in their home market, but are generally not sufficiently innovative to gain widespread approval and market acceptance elsewhere. This is consistent with studies indicating that, by making research more risky, stringent regulatory requirements actually stimulate R&D investment and promote the emergence of an industry that is research intensive, innovative, dominated by few companies and profitable.

But this still leaves us with a number of important variables that we don’t seem to be able to push much further – success rates in the clinic and in the marketplace, money spent per new drug, and so on. And that brings up the last part of the paper, which we’ll go into next time: what is to be done about all this?

17 comments on “The Cost of New Drugs”

  1. Northern_Chemist says:

    Taking $1 billion as the cost today, an annual increase of 13.4% will mean a cost of $3.5 billion by 2020, $12.4 billion by 2030, $43.5 billion by 2040, $152.9 billion by 2050.
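    The commenter's projection is straight compound growth. A minimal sketch, taking the $1 billion figure and a 2010 baseline as assumptions from the comment itself:

```python
# Projecting cost-per-drug at 13.4% annual growth, assuming a $1 billion
# cost in 2010 as the starting point (both figures taken from the comment).
BASE_COST = 1.0   # billions of dollars, assumed 2010 baseline
RATE = 0.134      # Munos' 13.4% annual growth rate

def projected_cost(year, base_year=2010):
    """Cost per new drug (in $B) after compounding from the base year."""
    return BASE_COST * (1 + RATE) ** (year - base_year)

for year in (2020, 2030, 2040, 2050):
    print(year, round(projected_cost(year), 1))
# Reproduces the comment's figures: 3.5, 12.4, 43.5, 152.9
```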

  2. MAD says:

    The rate of inflation between 1970 and 1985 was almost 12% a year, so unless these figures are corrected, there was no “real” increase in spending for those 15 years. Anyone know?

  3. Derek Lowe says:

    Not sure where you’re getting that number, MAD. From what I can see, the average rate of inflation (CPI) from 1970 to 1985 looks to be almost exactly 7% per year. Only in 1980 did it reach or break 12%.
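    The ~7% average can be checked as a geometric mean of the CPI index over the period. The index values below are approximate published CPI-U annual averages, used here only for illustration:

```python
# Rough check of the ~7% average-inflation figure, using approximate
# CPI-U annual-average index values (assumed: 38.8 in 1970, 107.6 in 1985).
cpi_1970 = 38.8
cpi_1985 = 107.6
years = 1985 - 1970

# Geometric-mean annual inflation rate over the 15-year span
avg_inflation = (cpi_1985 / cpi_1970) ** (1 / years) - 1
print(round(avg_inflation * 100, 1))  # roughly 7% per year
```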

  4. HelicalZz says:

    I spent some of the weekend reading through the paper, and it was a very good one. But (always a but), I was left unconvinced that counting new molecular entities (NMEs) is really such a good way to measure productivity, especially when the latter is measured in dollars. Certainly the industry itself prefers to use revenues and metrics like ROA and ROE. These are almost certainly better metrics for evaluating the individual companies in an industry. Even these metrics, though, don’t tell a full story for the industry, as a drug can continue to generate returns for years as a generic, but not necessarily much to the benefit of the originating company.
    Part of the problem is clearly that the bar has been raised by past success. New drugs often compete with older (cheaper) drugs, and in those cases must be superior in some fashion such as in efficacy or safety. Another is the increasing regulatory burden, which does need disruption in some form (Orphan drug was a good start).
    Speaking to regulatory burden, there is a bias in this blog to look at R&D cost through the lens of drug discovery and preclinical activities. That is important, and growing in cost, but the real cost explosion, as well as the lion’s share of the R&D expense, is in clinical development. Costs here are almost entirely driven by the need to meet regulatory requirements.

  5. Will says:

    NDA success may be one metric of productivity; another could be patent filings or grants for new compounds. Neither scale is perfect, but I wonder how the increase in filings tracks with the increase in R&D spending?
    In other words, does the amount of money/scientists you throw at something correlate with the number of NDA candidates a company can generate, even though that R&D spending is less predictive of which candidates will actually succeed?

  6. Incha says:

    Although a very interesting paper, there are so many questions left unanswered. For example, has the number of filings for NCEs with the FDA also stayed constant, or is one reason for the low number of NCE approvals that a larger percentage of compounds are failing in the clinic? Also, what percentage of drugs are required to show data against other therapies, compared to 30 or 50 years ago? It seems to me that in some areas (schizophrenia and depression, for example) drugs fail or struggle to get approval due to failure to show a significant difference from drugs already on the market. However, in complex, poorly understood diseases such as these, it would seem sensible to have a wider range of therapies to choose from. In this way the FDA seems to be stifling innovation in areas where some therapies are already on the market.
    It would also be interesting to find out how the number of filings from generics companies (to produce someone else’s product) compares to the number of NCEs filed every year, and what this means in terms of time from filing to approval for new products.

  7. CMCguy says:

    Derek, you state “all the complaining that we in the industry do about the FDA, they may, in the long run, be doing us a favor” and then provide an extract from Munos where he argues that greater regulatory burden increases innovation. There may be a correlation, but I do not see this as causation (although I have not read the papers cited to see how connected they are). The regulatory hurdles have changed the type of drugs sought and developed, and have certainly added to the costs (as ZZ points out, largely on the clinical/D side), so this demands more R&D investment just to reach approval, rather than stimulating more discovery/R. I think there are other factors behind why the US and UK have been leaders in pharma innovation, and that stringent regulation has a net negative impact on innovation (which is not to say regulations are always bad in their purpose to check and monitor). Before preemption was largely shot down, one might have suggested that the FDA helped hold back legal expenditures and so contributed more growth potential, but now that is a bigger factor in determining how/what to develop.

  8. cliffintokyo says:

    As a pharma CMC regulatory science professional, I can comment from the perspective of considerable experience on the impact of regulations on pharma development.
    1) It is surprising how useful the ‘road maps’ (e.g. CTD and associated guidelines) provided by the regulatory authorities are in developing information packages for marketing applications; and more surprising how often we STILL need to point to these road maps to get development staff to do the appropriate science and technological investigations.
    2) I agree that the regulations create a tremendous amount of work, and some initiatives are trying to address this (e.g. ICH, EU better regulation initiative, PICS GMP harmonisation), but the regulations do serve the really essential function of keeping many companies on the rails, and sometimes help to prevent ‘self-destructive’ behaviours (e.g. EU and JPN bans on DTCA).
    3) I regret the trend for the number of pharma regulations to increase, but hopefully we will see more strategic-type regulations in future, which will not add so much to the already stupendous workload, but will aim to steer pharma into appropriately ethical modes of operation.
    4) Outside of big pharma, many companies have still to recognize the value and efficiencies to be gained by having talented regulatory affairs staff lead their development projects.

  9. srp says:

    Raising the bar on new drug candidates is perfectly sensible for prescribers and users – why buy something newer if it’s not as good or no better than a proven product? The consumers of any product that improves over time expect new versions to be at least as good for their purposes as what is already out there, and that standard becomes more stringent as progress takes place. But those judgments are best made by end users and their medical advisors, not regulators. (There is an argument for testing against placebo, but that’s not what is at issue here.)
    A binary “yes this is better” vs. “no this is not better” decision imposed over the entire market only makes sense when users are homogeneous in their requirements. Drugs seem like a singularly bad fit to this binary model, given the diverse reactions of people to the same medicine. Even if a new drug doesn’t look superior to an existing one in a clinical trial, if it’s almost as good overall it may add a lot for individual patients in clinical practice, due to their idiosyncratic physiologies and circumstances. As was pointed out on an earlier thread, even “me-too” drugs are really not the same as the pioneer.
    So why doesn’t anyone complain about this? Fear of the regulators or a desire to inhibit new competition from alternatives to existing moneymakers?

  10. alig says:

    The FDA is supposed to evaluate whether a drug is safe and effective. It’s only in the past decade that they’ve added the requirement that it be superior to current therapies, which was not part of their mandate. For competition’s sake, you would think you would want as many drugs as possible on the market to address a particular disease (remember that when Lipitor was launched, it was priced below Zocor in order to gain market share). The NIH is in a better position to run clinical trials comparing the efficacy of similar treatments to give guidance to doctors than the FDA is to deny a new treatment option.

  11. MAD says:

    My bad, it is closer to 7%. Lucky I didn’t use that math in the lab!
    Anyhow, it’s still a very high-inflation period; did they account for this at all?

  12. UK Chemist says:

    I agree that the paper raises far more questions than answers, but it was a great read. One thing that occurred to me was that there is an aspect of poor decision-making in all of this. Either projects are pushed too far or killed too early. If you go back to periods when individual companies were hot, e.g. Glaxo in the mid-to-late 80s, I suspect that they had several key R&D staff who had really great intuition about which compounds to back. I know that in the case of GSK there has been a steady purge, every 5 years or so, of much of this talent. That might explain, in part, why companies with a level of continuity appear to have been more successful, and why mergers don’t work. I think that in today’s big pharma companies, people like Simon Campbell wouldn’t be able to use their intuition without a 40-page justification. Malcolm Gladwell’s book Blink seems to show that once you try to codify intuition, you’re on very shaky ground.

  13. Mutatis Mutandis says:

    #12… Just playing with “round” numbers, in a somewhat crude fashion. Perhaps there is somebody here who can model it better than I do.
    Suppose we start with 100 projects and can afford to bring only 10 of them to clinical trials. Therefore, during our project we build in 10 stop/go decision points, at each of which we have to eliminate approximately 20% of our projects. (0.8^10 = 0.10..).
    Now let’s assume that the success rate of our clinical trials is 1 in 5, and, somewhat ungenerously, that this actually is a function of the decisions we took during the pre-clinical process. In other words, of the 10 compounds that reach the final pre-clinical end point, we passed 2 for the right reasons and 8 for the wrong reasons.
    What does that mean? In our final triage round, we had 12.5 compounds. We stopped 20% of those (2.5), allowed 8 compounds to proceed although with hindsight we shouldn’t have, and passed 2 for the right reasons. If that is true, it would have been wiser to toss a coin, for the accuracy of our judgement on those we allowed to pass is a dismal 20%. If we are not smarter in earlier stages of the project (are we?), that looks bleak.
    We will never know what the outcome would have been for the 90 out of 100 projects that we stop along the way. If our judgement in that is not any better than in letting projects go forward, then 72 of these were stopped for the wrong reasons and only 18 for the right reasons. But let us assume, because we all think we know a hopeless case when we see one, that we are 80% accurate in killing off projects, so we killed 72 for good reasons and 18 for bad reasons.
    So overall, out of our 100 projects, we judged 74 correctly, i.e. the 2 that successfully completed trials and the 72 we stopped for good reasons. But we made wrong decisions on 26 projects, wasting money on failed clinical trials in 8 cases and stopping 18 projects unnecessarily. If that is true, then for every drug that we bring to the market there are nine that we might have brought to the market…
    Perhaps the conclusion is that we are so bad at decision making that we shouldn’t. Maybe we should just pick a promising target and say “damn the torpedoes, full speed ahead!”
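    The arithmetic in this back-of-the-envelope model can be laid out explicitly. This is just a sketch of the commenter's numbers; the 80% kill-accuracy figure is the assumption the commenter flags, not anything measured:

```python
# Sketch of the triage-accuracy arithmetic from the comment above.
# All inputs come from the comment: 100 projects, 10 reach the clinic,
# a 1-in-5 clinical success rate, and an ASSUMED 80% accuracy for kills.
projects = 100
reach_clinic = 10
clinical_success = 0.2   # 1 in 5 drugs survives the clinic
kill_accuracy = 0.8      # assumption: 80% of kills are for good reasons

successes = reach_clinic * clinical_success   # 2 drugs approved
wrong_passes = reach_clinic - successes       # 8 advanced for the wrong reasons

killed = projects - reach_clinic              # 90 projects stopped along the way
good_kills = killed * kill_accuracy           # 72 killed for good reasons
bad_kills = killed - good_kills               # 18 killed unnecessarily

correct = successes + good_kills              # 74 projects judged correctly
wrong = wrong_passes + bad_kills              # 26 projects judged wrongly

# For every approved drug, how many did this model wrongly kill?
missed_per_success = bad_kills / successes    # 9.0
print(correct, wrong, missed_per_success)
```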

  14. Anonymous says:

    #13 Far too simplistic maths, I’m afraid. Most projects fail in the early stages because they deserve to die: we never get even close to a compound that you could take into human beings. There is rarely any real competition for compounds prior to Phase I. Actually, the early stages are relatively easy, in that you probably have about a 1 in 4 chance of finding a decent small-molecule starting point and getting it to a reasonable state for development. And despite all the critics of the industry, we have got much, much better at survival into and past Phase I (we can model human PK from rat/dog PK moderately well, and predict safety from in vitro and in vivo tox studies). But we’re all taking a beating in Phase II, where for whatever reason the drugs aren’t working: either because the target was poor, or sometimes because despite our best efforts we don’t actually have a safety window large enough to test efficacy, and, worst of all, compounds where marketing loses courage over the potential market size. The stats for Phase II failure right now are really, really scary for most of the industry, i.e. worse than 1 in 10 survival. Add to that Phase III, which once upon a time was a slam dunk and now seems more like a 1 in 3 lottery.
    The damn the torpedoes full steam ahead approach was pretty much the Merck strategy for the last twenty years. Pick a target that you care about then pile on the chemistry and biology teams and push until they drop. It was very much a blitz attack that was fine if you picked the right target in the first place as other companies struggled to compete. But Merck’s internal R&D has faltered just like everybody else.

  15. C_S says:

    Looking at Figure A2 in supplement 1, it would be great if he would disclose which companies have what NME productivity. Does anybody have these figures?
    I guess the plot I would like to see is dollars vs. NME productivity for those points. I would expect the companies with lower NMEs per million dollars to disappear more frequently.
    BTW, in my understanding, Poisson distributions don’t lead to power-law distributions. A Poisson is still a generalized linear model with finite variance, whereas you need a distribution with infinite variance to obtain a power law. However, it takes a lot more data than we have to distinguish a log-normal distribution from a power law.
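    A quick numerical illustration of the decay-rate point: a Poisson tail falls off far faster than any power-law tail, which is the intuition behind the commenter's objection. The rate (3.0) and exponent (2) below are arbitrary illustrative choices, not values from the paper:

```python
# Compare how fast a Poisson tail decays versus a power-law tail.
# lam=3.0 and alpha=2.0 are arbitrary choices for illustration only.
import math

def poisson_pmf(k, lam=3.0):
    """Poisson probability mass e^-lam * lam^k / k!"""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def power_law(k, alpha=2.0):
    """Unnormalised power-law tail k^-alpha, just for comparing decay."""
    return k ** -alpha

# By k = 20 the Poisson mass is negligible next to the power-law value.
print(poisson_pmf(20), power_law(20))
```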



Comments are closed.