
Drug Development

The Past Twenty Years of Drug Development, Via the Literature

Here’s a new paper in PLOS ONE on drug development over the past 20 years. The authors use a large database of patents and open-literature publications, trying to draw connections between the two, and between individual drug targets and the number of compounds that have been disclosed against them. Their explanation of the relationship between patents and publications is a good one:

. . .We have been unable to find any formal description of the information flow between these two document types but it can be briefly described as follows. Drug discovery project teams typically apply for patents to claim and protect the chemical space around their lead series from which clinical development candidates may be chosen. This sets the minimum time between the generation of data and its disclosure to 18 months. In practice, this is usually extended, not only by the time necessary for collating the data and drafting the application but also where strategic choices may be made to file later in the development cycle to maximise the patent term. It is also common to file separate applications for each distinct chemical series the team is progressing.
While some drug discovery operations may eschew non-patent disclosure entirely, it is nevertheless common practice (and has business advantages) for project teams to submit papers to journals that include some of the same structures and data from their patents. While the criteria for inventorship are different than for authorship, there are typically team members in-common between the two types of attribution. Journal publications may or may not identify the lead compound by linking the structure to a code name, depending on how far this may have progressed as a clinical candidate.
The time lag can vary between submitting manuscripts immediately after filing, waiting until the application has published, deferring publication until a project has been discontinued, or the code name may never be publically resolvable to a structure. A recent comparison showed that 6% of compound structures exemplified in patents were also published in journal articles. While the patterns described above will be typical for pharmaceutical and biotechnology companies, the situation in the academic sector differs in a number of respects. Universities and research institutions are publishing increasing numbers of patents for bioactive compounds but their embargo times for publication and/or upload of screening results to open repositories, such as PubChem BioAssay, are generally shorter.

There are also a couple of important factors to keep in mind during the rest of the analysis. The authors point out that their database includes a substantial number of “compounds” which are not small, drug-like molecules (these are antibodies, proteins, large natural products, and so on). (In total, from 1991 to 2010 they have about one million compounds from journal articles and nearly three million from patents). And on the “target” side of the database, there are a significant number of counterscreens included which are not drug targets as such, so it might be better to call the whole thing a compound-to-protein mapping exercise. That said, what did they find?
[Figure: compounds per target, by year]
Here’s the chart of compounds per target, by year. The peak and decline around 2005 are quite noticeable, and are corroborated by a search through the PCT patent database, which shows a plateau in pharmaceutical patents around that time (one which has continued until now, by the way).
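As a rough illustration of the kind of aggregation behind a compounds-per-target-by-year chart, here is a minimal sketch in Python. The record layout and the data are hypothetical, invented for illustration; they are not the paper's actual schema or database.

```python
from collections import defaultdict

# Hypothetical records: (compound_id, target_id, year, source).
# Illustrative only -- not the paper's actual data.
records = [
    ("C1", "T1", 2004, "patent"),
    ("C2", "T1", 2004, "patent"),
    ("C3", "T2", 2004, "journal"),
    ("C4", "T1", 2005, "patent"),
    ("C5", "T2", 2005, "journal"),
    ("C6", "T3", 2005, "journal"),
]

def compounds_per_target_by_year(records):
    """Return {year: mean number of distinct compounds per distinct target}."""
    by_year = defaultdict(lambda: defaultdict(set))
    for compound, target, year, _source in records:
        by_year[year][target].add(compound)
    return {
        year: sum(len(cpds) for cpds in targets.values()) / len(targets)
        for year, targets in by_year.items()
    }

print(compounds_per_target_by_year(records))
# 2004: T1 has two compounds, T2 one -> mean 1.5; 2005: three targets, one each -> 1.0
```

Note that such a ratio can fall either because fewer compounds are being disclosed or because more targets are being worked on, which is exactly the ambiguity raised in the comments below.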
Looking at the target side of things, with those warnings above kept in mind, shows a different picture. The journal-publication side really has shown an increase over the last ten years, with an apparent inflection point in the early 2000s. What happened? I’d be very surprised if the answer didn’t turn out to be genomics. If you want to see the most proximal effect of the human genomics frenzy from around that time, there you have it in the way that curve bends around 2001. Year-on-year, though (see the full paper for that chart), the targets mentioned in journal publications seem to have peaked around 2008, and have either plateaued or actually started to come back down since then. (Update: Fixed the second chart, which had been a duplicate of the first.)
[Figure: targets by source, by year]
The authors go on to track a number of individual targets by their mentions in patents and journals, and you can certainly see a lot of rise-and-fall stories over the last 20 years. Those actual years should not be over-interpreted, though, because of the delays (mentioned above) in patenting, and the even longer delays, in some cases, for journal publication from inside pharma organizations.
So what’s going on with the apparent decline in output? The authors have some ideas, as do (I’m sure) readers of this site. Some of those ideas probably overlap pretty well:

While consideration of all possible causative factors is outside the scope of this work it could be speculated that the dominant causal effect on global output is mergers and acquisition activity (M&A) among pharmaceutical companies. The consequences of this include target portfolio consolidations and the combining of screening collections. This also reduces the number of large units competing in the production of medicinal chemistry IP. A second related factor is less scientists engaged in generating output. Support for the former is provided by the deduction that NME output is directly related to the number of companies and for the latter, a report that US pharmaceutical companies are estimated to have lost 300,000 jobs since 2000. There are other plausible contributory factors where finding corroborative data is difficult but nonetheless deserve comment. Firstly, patent filing and maintenance costs will have risen at approximately the same rate as compound numbers. Therefore part of the decrease could simply be due to companies, quasi-synchronously, reducing their applications to control costs. While this happened for novel sequence filings over the period of 1995–2000, we are neither aware any of data source against which this hypothesis could be explicitly tested for chemical patenting nor any reports that might support it. Similarly, it is difficult to test the hypothesis of resource switching from “R” to “D” as a response to declining NCE approvals. Our data certainly infer the shrinking of “R” but there are no obvious metrics delineating a concomitant expansion of “D”. A third possible factor, a shift in the small-molecule:biologicals ratio in favour of the latter is supported by declared development portfolio changes in recent years but, here again, proving a causative coupling is difficult.

Causality is a real problem in big retrospectives like this. The authors, as you see, are appropriately cautious. (They also mention, as a good example, that a decline in compounds aimed at a particular target can be a signal either of success or of failure.) But I’m glad that they’ve made the effort here. It looks like they’re now analyzing the characteristics of the reported compounds over time and by target, and I look forward to seeing the results of that work.
Update: here’s a lead author of the paper with more in a blog post.

22 comments on “The Past Twenty Years of Drug Development, Via the Literature”

  1. Anonymous says:

    Do these charts show total no. compounds, or compounds per target? If the latter, it could simply reflect a more diverse approach as companies work on a greater number of targets from genomic discovery, no?

  2. James says:

    Derek,
    It appears that you’ve posted the same graph for both figs – the second should be either fig4 or fig5 from the manuscript.
    Nice that it’s open access for viewing.
    James

  3. Respisci says:

    Why the acceleration between 2001 to 2005 for patents in Figure 1?
    Off to read the paper…

  4. will says:

    In 2000, US patent applications began to be published, so if the authors didn’t distinguish between granted patents and published applications, that would neatly explain the inflection point. Prior to that, applications that didn’t make it to allowance would simply go in the trash, and the compounds disclosed therein would, for the most part, never see the light of day.
    I’ve worked in pharma patent law for the past ten years, both on the innovator and the generic side. I have definitely noticed a trend of less disclosure in the most recent years. For instance, nowadays you may have a table listing 100 compounds with a statement to the effect that all compounds had IC50

  5. Mike says:

    “(In total, from 1991 to 2010 they have about one million compounds from journal articles and nearly three million from patents). ”
    There’s a problem right there. Patent claims tend to be extremely explicit about claiming all of the combinatoric possibilities that can be built onto a drug candidate. Last year Canada revoked Pfizer’s CA 2163446 because while the patent valiantly covered 260 quintillion compounds it didn’t bother to explain which one was actually sildenafil.
    A database that covers the compounds that are fully and individually described by structure or name in patents would probably vary as much over the years due to trends in patent lawyer opinion (“you should add more examples”, etc.) on drafting pharma claims as anything else.

  6. gippgig says:

    The second chart isn’t displaying at all now.

  7. Ed says:

    A small point, but there should be a WIPO requirement for all patents to be (co)-written in the Extended OCR-A font so that the machines that read and translate them actually do a useful job….what percentage of the explicitly named compounds actually come out of the translation correct? 1%?
    Time for a campaign here Derek!

  8. Mike says:

    @ #7: Hear Hear!

  9. James says:

    @ Mike –
    Wow – that’s some impressive SAR!
    How many CRO’s were required to synthesize and annotate all those variants? And what type of super-nano-ultra HTS flowchart screen was used to test?
    Now I understand some of the recent patterns of R&D staffing. If you can synthesize & test all chemical matter that efficiently, what’s the need for all that extra headcount?
    (turning off sarcasm filter now…)

  10. MoMo says:

    This paper corroborates what we witnessed last week, that the Big Clear Out in Pharma is due to lack of productivity making compounds as evidenced by the patent literature. Forget about targets!
    I won’t go into the psychopolitical economics of chemistry at the bench, as that may be too much for this blog, and denial is always the first stage in death anyway.
    MOre MOlecules!

  11. Nick K says:

    #10 MoMo: Every village has its idiot, and on Pipeline it’s you.
    R H Bradbury comprehensively rubbished your claim about the Pfizer chemists at Sandwich on the Novartis thread. Your credibility is nil.

  12. Rich Rostrom says:

    To me, the interesting thing about these charts is the enormous growth from the 1991 base levels.
    Compounds in patents, up over 15x at peak, still over 10x.
    Compounds in journals, up 10x.
    Targets in patents, up 17x.
    Targets in journals, up 15x.
    Are people doing that much more work? Even as pharma has shed so many jobs? Have methods become that much more productive?
    That has a scary implication. Useful results (new drugs) are apparently diminishing, even though more than an order of magnitude more chemistry is being done. That suggests drug discovery is becoming very hard.
    Also: Given the long development time for any drug, ISTM that anything that came out circa 2000 originated in a radically different environment than now.

  13. Kelvin says:

    We are simply seeing the diminishing returns of target-based drug discovery, as the increase in quantity with diminishing quality means a lot more work and investment with a lot more failure. We need a radical new “top-down” systems-based approach to drug discovery, which does not depend on trying to understand more and more complex diseases…

  14. Beeva says:

    @13, that’s rubbish. Since when did companies only patent those compounds that would go on to make drugs? The downturn coincides perfectly with pharma downsizing and mergers.

  15. Anonymous says:

    @14: Think about opportunity cost: More investment pushing crap compounds further into the clinic vs making and testing (and killing) more compounds in discovery.

  16. MoMo says:

    Quite alright on the village idiot comment, Nick K. Bradbury posted applications, not patents. And for a satellite site, that one is dismal. The sad truth in all of this is that the unemployed are still unemployed, and the failure starts at the top and works its way down. Some of the doomed start fresh, as start-ups, if they are innovative enough, and the rest fight to find the few jobs left.
    In the new reality Pharma start-ups are avoiding the Big P types, at least in the US, not sure how they do it in the UK. They need to get things done and are the future job creators.
    You sound unemployed and bitter, Nick K. Me, I have no time for nonsense, just reality.

  17. Not a native speaker says:

    What happened to English grammar and syntax? Who’s editing papers on PLOSOne? Shouldn’t proper writing apply to our field as to others? What about credibility of the journal? No wonder our scientific community is losing relevance in shaping public opinion!
    “(…) is merger*s* and acquisition activity (M&A) among pharmaceutical companies. The consequence*s* of this include* target portfolio consolidations (…). A second related factor is *less* [sic!] scientists engaged in generating output. Support for the *former* [former what?] is *provided by the deduction that* [what does this mean?] NME output is directly related to the number of companies *and for the latter, a report that* [syntax, again!] US pharmaceutical companies (…). Firstly, patent filing and maintenance costs *will have risen* [looking ahead?] at approximately the same rate as compound numbers. Therefore part of the decrease could simply be due to companies, *quasi-synchronously* [??], (…)” etc etc.

  18. Nick K says:

    MoMo: Sorry, I forgot that even village idiots have feelings.
    Your credibility is still nil, though.

  19. cdsouthan says:

    Derek, thanks for highlighting. I address some points in a blog post where I have gone into more detail
    http://cdsouthan.blogspot.se/2013/11/tracking-big-small-data-from-drug.html
    But brief responses:
    Derek, the small molecules swamp out the big ones here
    1) Yes, these are totals; cpds-per-target are in PMID: 21569515
    4) Interesting observations but a) there may be a spike but the patent curation encompasses families b) you could be right if the ratio of discrete results to binned values was shifting, despite average extractions rising
    5) Wrong on the first because these are fully exemplified structures with activity data. Shifts in drafting style by the global attorney “herd”, plausible but wouldn’t this average out?
    12) My untested guesses a) EST-omics, b) HTS, c) library design
    13-15) nothing we tracked rules out increasing quality but the next 5 years will tell

  20. MoMo says:

    Nick K,
    Hate to tell you, unless you own a company and pull the strings- everyones’ credibility is still nil.
    So go own a company and you can be the ultimate expert and the village idiot at the same time- and there’s nothing anyone can do. Unless you owe back taxes or blow your place up.
    Cheers!


  22. sgcox says:

    MoMo is a mystery wrapped in an enigma, to paraphrase.

Comments are closed.