
Clinical Trials

Digging Into the Genetics of Drug Targets

Rare diseases – remember years ago, back when those were a case of market failure? When companies were reluctant to work on them because the market size was guaranteed to be small and you’d have to charge, like, a hundred thousand or more a year to make the whole idea financially viable? Which wasn’t going to happen, until it did? And now there are rare-disease programs all over the place. (I last wrote about this here).

You can look at this several ways, depending on what you think of the drug industry, and it’s no sin to have more than one of these opinions in your head simultaneously. You can see this as the market discovering the price for such therapies – as it turns out, patients (well, their insurance companies) will actually pay the six-figure price tags necessary for such programs to be worthwhile to a for-profit company, and once that became clear, such companies began to populate that space. Market failure no more! But it’s also fair to say that many of these companies have realized that if those insurance plans will shell out $100,000 for such drugs, then they’ll probably shell out $150,000. Or $250,000 – try it and see! After all, it’s not that many patients – no one insurance plan is going to get hammered too badly, and at those prices you can afford compassionate-use exemptions, too. The place where all those patients (and their payments) congregate is on the receiving end of the money, a situation that has probably led to (and is surely still leading to) unsustainable pricing on the whole.

But another reason that companies have headed into this space is the way that human genetics has advanced. We now have so many more sequences, of much higher quality, across much broader populations, that it’s possible to draw conclusions from the data that would have been unacceptably shaky before. Not only do you get to charge a lot for your drug – should you make a drug – but you can have a better idea that it’s going to work in the clinic. And “having a better idea that it’s going to work in the clinic” is (make no mistake) one of the most cherished goals of drug research. We get slammed 80 to 90% of the time in clinical trials. It’s horrendous, and we’re spending more and more money just to experience even that level of success. If you can improve that failure rate, either through avoiding toxicity or through more confidence in a new drug’s mechanism of action, billions of dollars are sitting out there for you to scoop up.

Thus rare genetic-based diseases. Because with those, you can (1) generally identify your patients and make sure that the right people are in the clinical trial. You’d think that would be pretty easy, but go try it with (say) Alzheimer’s. And you can (2) have a much clearer picture of what your disease target does in humans by looking at natural genetic variation. This ranges (in many cases) across the whole spectrum from mild impairment to total loss-of-function, and it can be a big blinking red arrow telling you that Yes, This Is the Problem That Causes the Disease. You might think that wouldn’t be such an issue, either, but hey, try it with Alzheimer’s again. Or depression. Or osteoarthritis, or lupus, or even Type II diabetes. We know many things about those diseases, and we can (sometimes) interrupt the trouble at various points and in various ways, but reaching down to the ultimate first cause is something else again.

But even that part is fraught with difficulties, as this article will make clear. Here’s the take-home:

Long story short: there is no doubt that if you’re developing a drug against gene X, then knowing the effect of human loss-of-function variants in gene X is incredibly valuable. Yet at the same time, there is no simple formula or algorithm for what pattern of LoF variants makes a gene a safe target or an unsafe target. In fact, there isn’t even any complex formula or algorithm. The best we can do is deep curation and consideration of each individual gene and drug.

Why should it be that tricky? Well, there are problems in making sure that your gene of interest really has been correctly called as loss-of-function in the datasets. But once you’re past that, you’re faced with the same problem you have with the animal knockouts that we’ve had available for many years now. Developmental effects do not have to be the same as pharmacological effects. An inhibitor drug may well affect a protein target’s activities in different ways than the loss-of-function mutants do, and you have to dig into the situation to figure that out. Giving such a drug to an already-developed creature may well affect things differently than such a mutation present from the very beginning does. And so on.

So you get the full spectrum of possible confounding variables. There are genes whose knockouts are embryonic lethal that nonetheless represent viable protein targets for inhibitors. There are genes whose knockouts look almost perfectly normal that nonetheless represent viable protein targets for inhibitors – and everything in between. And there is a range of reasons for both of those situations to exist. The article I just linked to goes into deep and very useful detail on this; I strongly recommend it to anyone thinking about these issues. It’s especially aimed towards the idea of predicting target-based tox trouble ahead of time by looking at genes that seem to be intolerant of loss-of-function in the human population, but even there:

All the above points make it clear that even if LoF variants are incredibly informative for drug discovery, and I think they are, we’re never going to have a plug-and-play formula. We are never going to be able to say, for every X% your gene is depleted for LoF, your probability of adverse events goes up by Y%. And we are certainly never going to have a rule, like don’t develop drugs against targets with >Z% depletion of LoF variation.

And it goes into detail about just what you can do to mine real insights under these conditions – you will not be surprised to learn that it involves a fair amount of work, as so many useful things do. That doesn’t mean you shouldn’t do it, of course. You most certainly should, because if things work out you will indeed have a far better idea of what your target is like and what you can expect from a drug for it. But it will not be the work of a moment, so look out for any breezy confidence you might come across in these efforts and act accordingly.
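To make the phrase “depleted for LoF” a bit more concrete: constraint metrics of the kind the article discusses (gnomAD’s observed/expected LoF ratio is the best-known example) compare how many loss-of-function carriers actually show up in a population against how many you’d expect if such variants were neutral. Here’s a minimal toy sketch of that idea – the gene names and all the counts are invented for illustration, and real pipelines model the expected counts from per-site mutation rates rather than taking them as given:

```python
# Toy sketch of a LoF-constraint calculation in the spirit of an
# observed/expected (o/e) ratio. All genes and numbers below are
# invented; real analyses derive expected counts from mutational models.

def lof_oe_ratio(observed_lof: int, expected_lof: float) -> float:
    """Return the observed/expected LoF ratio for a gene.

    Values near 0 mean the population is depleted of LoF variants in
    this gene (possible intolerance); values near 1 mean LoF variants
    survive at roughly the expected rate.
    """
    if expected_lof <= 0:
        raise ValueError("expected LoF count must be positive")
    return observed_lof / expected_lof

# Hypothetical genes: (observed LoF carriers, expected under neutrality)
genes = {
    "GENE_A": (2, 40.0),   # strongly depleted: tread carefully as a target
    "GENE_B": (35, 38.0),  # roughly as expected: LoF appears tolerated
}

for name, (obs, exp) in genes.items():
    print(f"{name}: o/e = {lof_oe_ratio(obs, exp):.2f}")
```

Note that this is exactly the kind of number the article warns you not to turn into a go/no-go rule: a low o/e ratio is a prompt for the deep per-gene curation described above, not a threshold that settles the question by itself.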

11 comments on “Digging Into the Genetics of Drug Targets”

  1. John Wayne says:

    This is why we need more machine learning. Problem solved.

    1. Chemjobber says:

      Actual spit-take

    2. DH says:

      No, this is why we need more clever biologists designing and performing experiments to create more data that can be used in machine learning.

      (Yes, I know your comment was tongue-in-cheek, but even as an advocate of machine learning myself, I like to take every chance to emphasize the primacy of experimentation and empirical data in science.)

  2. Chrispy says:

The best loss-of-function red herring in my experience is NaV 1.7 (SCN9A). People with LoF mutations in this gene experience no pain. None. They often die young because they hurt themselves gravely without knowing it. There was a great piece in the NYT called “The Hazards of Growing Up Painlessly,” and they mention a woman who was NaV 1.7 null and shattered her pelvis during childbirth — she walked around on it for weeks, and only went to the doctor because of a limp. So many inhibitors of NaV 1.7 have been developed — non-opiate pain relief would be a blockbuster. But they have all failed. It turns out that there is something NaV 1.7 connects with in development which is related to endogenous enkephalins, and inhibiting it later does little to nothing for pain. It looks like some pharmas just don’t know when to stop, though, and are still pursuing it. Amgen has a video on YouTube called the Passionate Pursuit of Nav 1.7 which details the efforts they have been going through to develop their inhibitor.

    1. ScientistSailor says:

      @Chrispy do you have a reference for this statement?

      It turns out that there is something NaV 1.7 connects with in development which is related to endogenous enkephalins, and inhibiting it later does little to nothing for pain.

      1. eub says:

All I know is what I just read on Wikipedia, but: “Recently, it has been elucidated that congenital loss of Nav1.7 results in a dramatic increase in the levels of endogenous enkephalins, and it was found that blocking these opioids with the opioid antagonist naloxone allowed for pain sensitivity both in Nav1.7 null mice and in a woman with a defective Nav1.7 gene and associated congenital insensitivity to pain.”
        Reference is to

        (Naloxone restores pain sense in some people congenitally lacking it, and nobody happened to try that ever before? … well hell, it’s not like I thought about it.)

    2. eub says:

      “Passionate” in the praying mantis sense. Bite off their head and their pelvis carries on copulating all the more vigorously.

  3. Barry says:

One instructive lesson is from Gleevec. Richard Pazdur insisted that Ciba (later Novartis) develop Gleevec for CML (although it was a rare disease in which Ciba had no interest, it had a clear etiology). Ciba got orphan status in the bargain, and then went on to find other indications (GIST…) for which Gleevec works. So quickly it became a $billion/yr drug with orphan status.

    1. If memory serves, Gleevec also established that time to approval could be significantly reduced, being approved around 10 weeks from NDA submission (something like 32 months from Phase I initiation).

  4. yuri says:

    Compensation. Between epigenetics and other gene expression regulatory networks it’s near impossible to compare genetic and pharmacological knockouts. iirc the src kinase knockout mouse showed no obvious phenotype despite mountains of evidence for key roles in important signalling pathways.
