Here’s another paper on the cost to develop a new drug, a topic about which, I’m convinced, debate will never end. This one is designed as a response to the Tufts estimates on these costs, and I’m not going to help much, because I have some things to debate about this paper myself.
The authors have looked at oncology-focused companies that got their first drug approvals during the period 2006-2015. There are ten of them for which 10-K filings are available, and the total of R&D spending from the company’s inception up to the approval is taken as the cost of developing a new drug. The raw spending numbers run from $157 million to $1.9 billion, so you can see that there’s a bit of spread right from the beginning. They average the figures out to $648 million, and that is presented in the abstract as the cost to develop a new drug – substantially less, the authors note, than the Tufts estimate (most recently $2.7 billion).
My first thought was that it’s at least better than Donald Light’s figure of $43 million, which is my natural look-on-the-bright-side disposition coming through. But several important factors come to mind that I didn’t think the $648 million figure was taking into account. Famously, among those who argue about such things, the Tufts figure includes opportunity cost / cost of capital estimates. This drives some critics into what I can only describe as a state of rage, and my usual response to them is that I will be glad to hold any sum of money they wish and give them the exact dollar amount back in ten years. No one has taken me up on this offer. If you feel that this deal would be a loss to you, but that it would not be a loss to a business like a drug company, you need to examine your views a bit more closely. And if you feel that it would be a loss, but that this should not be included in the cost of developing a drug, my question is: where are we going to put it, then? Because it’s real. The timelines involved in the drug business make it quite real indeed.
This paper, though, does include an adjusted column of figures, noting the years each company took to develop its drug and assuming a 7%/year cost. That takes the range from $203 million to $2.6 billion (basically, the Tufts figure), for an average of $757 million. But there’s an even bigger consideration than the cost of capital: the cost of failure.
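To make that adjustment concrete, here’s a minimal sketch of how a 7%-per-year cost of capital inflates raw R&D spending. The paper doesn’t spell out its exact mechanics, so this assumes (my assumption, purely for illustration) that spending is spread evenly over the development years and that each year’s outlay is compounded forward to the approval date:

```python
# Sketch of a cost-of-capital adjustment -- an assumed mechanism,
# not necessarily the paper's exact method.
RATE = 0.07  # 7% per year, as in the paper's adjusted column

def capitalized_cost(raw_spend, years):
    """Spread raw_spend evenly over `years` of development and compound
    each year's outlay forward at RATE until the approval date."""
    annual = raw_spend / years
    # A dollar spent y years before approval has grown by (1 + RATE) ** y
    # by the time the drug is approved.
    return sum(annual * (1 + RATE) ** y for y in range(years))

# Hypothetical example: $648M (in millions) spent over 8 years.
print(round(capitalized_cost(648, 8)))
```

With those made-up inputs, the adjusted figure comes out meaningfully higher than the raw spend, which is the whole point: time is not free, and the longer the development timeline, the bigger the gap.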
And that, to me, sinks this entire paper. We have in this business somewhere around a 90% failure rate in the clinic. Picking companies’ first approvals disproportionately selects for the fortunate ones who succeeded their first time out. In other words, this estimate ignores (as much as possible) the cost of clinical failure, and that cost is one of the central facts of the entire drug industry.
Let’s do a thought experiment. Looking at the list of companies, most of them have not had a drug approval since the ones under study here (as you’d figure). So instead of taking the numbers up to the date of that first approval, how about taking them up to right now? The amount spent per drug will be creeping ever higher, since they’re still spending R&D money and still have only one drug to show for it. When they do get a second approval, we can change the denominator to a “2” and bring it back down again, but the point is that several of these companies will likely never get a second approval. That’s the odds of the game. You can argue, I suppose, that this is an unfair comparison because now you’re including costs of Drug Number 2, but you can be sure that many of the companies on this new paper’s list were working on other projects at the same time that they were getting their first drug through.
So how do we account for the cost of these failures, which are as real as can be? Matthew Herper has already done this calculation for us, as of a few years ago. He looked at a list of 98 drug companies over a ten-year period, added up the stated R&D costs, and divided by the number of drug approvals (exactly as in the preceding paragraph). This is not the most elegantly nuanced financial analysis, but there’s a lot to be said for it, because that’s real money going into the hopper every year. And what you find is that you don’t even get down to the level of the Tufts estimate (as it was at the time) until you get down to nearly company #40 on the list(!). Now, I realize that the “R&D costs” category varies from company to company, but after you’ve done 98 of them, you should be closing in on something meaningful.
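The Herper-style calculation really is as blunt as it sounds, which is part of its appeal. Here’s the arithmetic as a sketch, with made-up numbers (not Herper’s actual data) just to show the shape of it:

```python
# Herper-style back-of-the-envelope: total stated R&D spending over a
# window, divided by approvals in that window. All figures below are
# invented purely for illustration.
def cost_per_approval(annual_rd_spend, approvals):
    """Total R&D spend across the window divided by drug approvals.
    This sweeps in the cost of every failed program automatically,
    because failed programs show up in the spending but not the count."""
    return sum(annual_rd_spend) / approvals

# Hypothetical company: $500M/year in R&D for ten years, one approval.
spend = [500] * 10  # in millions, per year
print(cost_per_approval(spend, 1))  # -> 5000.0, i.e. $5 billion per drug
```

The denominator is doing all the work: a company that spends for a decade and gets one drug through has a very different cost per drug than the same spending with three approvals, and that’s exactly the failure cost that the new paper’s first-approval sample screens out.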
I have one more objection: not all the drugs on that list of ten are equal, by any means. The cheapest of the lot is a new liposomal formulation of vincristine (from Talon), and that is just not the same as coming up with a truly new drug from scratch. The estimate for the second cheapest, pralatrexate from Allos, does not take into account that there were years of collaboration between NIH, SRI, and Sloan-Kettering that identified the drug candidate itself (which is in the well-traveled antifolate area) some years before Allos even came into existence. I’m not saying anything against these companies, far from it. They actually identified ways to get useful compounds onto the market for much less money than usual, and good for them. But holding them up as examples of how much it costs to “develop a new drug” is a bit off.
So while this paper is certainly not useless, it’s not as useful as it advertises itself to be, either. Some of the points I’ve raised here do come up in it, but I don’t find the arguments against them persuasive. In the end, the universe of one-drug/first-drug companies is just not anywhere near a representative sample. (In fact, Herper’s analysis found just that – the one-drug companies in his ten-year window had the lowest cost per drug, and the companies that had more than one approval during that time spent more and more per drug, up to many billions). As far as I’m concerned, this paper sets a lower bound, and that’s all.
Note: All opinions, choices of topic, etc. are strictly my own – I don’t in any way speak for my employer