
Down Amongst the Ugly Variables

Here’s a paper on “Avoiding Missed Opportunities” that’s come out in J. Med. Chem. By that, the authors are referring to the common practice, in a drug development project, of setting up criteria that the eventual candidate needs to meet (at least this much potency, PK at least to this level, selectivity against counterscreens X and Y of such-and-such-fold, etc.). They’re emphasizing that this, while reasonable-sounding, has its pitfalls:

However, having chosen a profile of property criteria, we should consider the impact that this choice will have on the decisions made, i.e., the compounds or chemical series chosen for progression. In some cases, the choice of compounds is very sensitive to a specific property criterion or the importance given to it. In these cases, that criterion may artificially distort the direction of the project; if we are not very confident that the “right” criterion has been used, this may lead to valuable opportunities being missed.

They have a point. Especially as a project goes on, it can be difficult to reconstruct some of the thinking that went into some of the cutoffs (or to recall how arbitrary some of them might have been). A totally different group of people might be working on things by that point as well, exacerbating the problem:

In the context of drug discovery, consider a progression criterion for potency that specifies that the IC50 must be less than 10 nM. If the most active member of a chemical series with good ADME and safety characteristics has a potency of 50 nM, would it make sense to reject this series? In a simple case such as this, it may be possible to spot this exception, but with the increasing complexity and diversity of the data used in early drug discovery, these sensitivities may not always be apparent. In addition, as time progresses it can be difficult to remember the details of chemical series explored earlier in a project, and consideration of these sensitivities can reveal alternative directions or backup series, should a project reach an insurmountable issue with its primary series.

When you’re addressing several parameters at once, it can be hard to keep all this in mind. And in the earlier stages of a project, it’s highly unlikely that any compounds will clear all the hurdles, so you have to progress things that have one or more defects, in the hope that these can be fixed later on. That, in turn, can lead you to under- or over-value some of the criteria. Criteria that were met early on, for example, can end up getting less attention, just because they were dealt with so early, even though they might matter more to the eventual success of the project than the ones that took longer to fix.

As the authors note, there’s also a problem of false quantitation. If you decide that cLogP has to be (say) under 3, is a 2.95 really different from a 3.05? For a calculated property? Almost certainly not. You have to allow these cutoffs some fuzziness, but to some people’s eyes, that’s just a way of making excuses. (And even if you’re OK with it, you still have to draw a line somewhere, eventually.) But you should also have some clear idea of how important it is to make that number, which can be a hard question to answer.
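To make that fuzziness concrete, here’s a minimal sketch (in Python; mine, not the paper’s) comparing a hard cLogP cutoff with a sigmoid-shaped soft score. The limit and steepness values are arbitrary choices for illustration:

import math

def hard_cutoff(clogp, limit=3.0):
    """Hard pass/fail: 2.95 scores 1.0, 3.05 scores 0.0."""
    return 1.0 if clogp < limit else 0.0

def soft_desirability(clogp, limit=3.0, steepness=5.0):
    """Sigmoid desirability: credit falls off smoothly around the limit.
    The steepness value is an arbitrary illustrative choice."""
    return 1.0 / (1.0 + math.exp(steepness * (clogp - limit)))

for clogp in (2.95, 3.05):
    print(clogp, hard_cutoff(clogp), round(soft_desirability(clogp), 2))
# The hard cutoff splits 2.95 and 3.05 into 1.0 and 0.0; the soft score
# gives them 0.56 and 0.44, nearly the same, which is more honest for a
# calculated property with this much uncertainty.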

The paper suggests “desirability functions” rather than simple cutoffs. That is sort of a weighted variation, where you assign greater and lesser importance to the various criteria and calculate a score accordingly. And this is fine, but you’re still faced with the problem of how you’re going to combine the weighted scores – add them up, or multiply them? If you add them, you run the risk of having other properties be good enough to outweigh a killingly low score on one of them, making a doomed compound look better than it is (especially for compounds that have progressed through more assays!) Multiplying them gets rid of that problem, but can put too much emphasis on a relatively poor score in one category. These sorts of problems are one reason that some drug-candidate scoring systems proposed in the past (such as “chemical beauty”, blogged here) have turned out, after more study, to be not particularly effective.
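To put numbers on the additive-versus-multiplicative problem, here’s another hedged sketch (my own illustration, not a method from the paper): two hypothetical compounds, one excellent everywhere except for a single fatal score, one merely decent across the board.

# Hypothetical desirability scores on four criteria (say potency,
# solubility, permeability, selectivity), each scaled 0-1. The values
# are invented purely for illustration.
doomed  = [0.9, 0.9, 0.9, 0.05]   # one killingly low score
average = [0.6, 0.6, 0.6, 0.60]   # unexciting but viable throughout

def additive(scores):
    """Arithmetic-mean aggregation: a fatal flaw can be averaged away."""
    return sum(scores) / len(scores)

def multiplicative(scores):
    """Geometric-mean aggregation: one near-zero score sinks the total."""
    product = 1.0
    for s in scores:
        product *= s
    return product ** (1.0 / len(scores))

print(additive(doomed), additive(average))              # ~0.69 vs 0.60
print(multiplicative(doomed), multiplicative(average))  # ~0.44 vs 0.60
# Added up, the doomed compound outranks the viable one. Multiplied, the
# fatal flaw dominates, perhaps too strongly if that 0.05 is partly noise.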

The main part of the paper proposes a method (sensitivity analysis) to figure out just how sensitive the eventual prioritization is to its various components, which is worth knowing. Clearly, the ones that have more leverage on the final results should get more attention and have more thought put into them, especially as regards uncertainties in the numbers being fed into the score. To my mind, this gets back to that paper by Scannell and Bosley about the problem of nonpredictive assays. One way for an assay to end up in the nonpredictive category is to have a lot of variability – there’s so much noise in the numbers that you can’t tell what’s going on. (And unfortunately, there are many other ways for an assay to be nonpredictive!) This paper arrives at the same conclusion: these things can kill even the most well-thought-out scheme for working through a drug development process.
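As a rough illustration of what a sensitivity analysis can tell you (a sketch under my own assumptions, not the authors’ actual procedure), you can jitter the criterion weights and add simulated assay noise, then watch how often the top-ranked compound changes. A ranking that flips constantly is leaning on numbers it can’t support:

import random

random.seed(1)

# Invented desirability scores for three hypothetical compounds on
# three criteria; none of this comes from the paper.
compounds = {
    "A": [0.80, 0.60, 0.70],
    "B": [0.75, 0.70, 0.65],
    "C": [0.60, 0.85, 0.60],
}
base_weights = [0.5, 0.3, 0.2]  # assumed relative importance of criteria

def top_compound(weights, noise_sd):
    """Rank by weighted sum after adding Gaussian noise to each score."""
    def score(values):
        return sum(w * max(0.0, min(1.0, v + random.gauss(0.0, noise_sd)))
                   for w, v in zip(weights, values))
    return max(compounds, key=lambda name: score(compounds[name]))

# How often does the winner change when weights and scores are jittered?
for noise_sd in (0.0, 0.05, 0.15):
    winners = [
        top_compound([w * random.uniform(0.8, 1.2) for w in base_weights],
                     noise_sd)
        for _ in range(1000)
    ]
    print(noise_sd, {name: winners.count(name) for name in compounds})
# Even with noiseless scores, jittering the weights sometimes flips the
# winner between the two closely matched compounds; as assay noise grows,
# the "top" pick drifts toward a coin flip.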

Part of the danger is that such a well-thought-out scheme can be very attractive, and the higher up the managerial ladder you go, the more attractive it probably is. At that level, you aren’t involved in the day-to-day details of any one project, but you are going to be held responsible for how many of them succeed. Under those not-all-that-appealing conditions, you’ll be looking for some way to keep track of everything that will give you an accurate picture but not eat up all of your time. So a scoring/dashboard/overview scheme will sound like just the thing. Moreover, these often break things down into one (or just a few) numbers, for even easier digestion, but the smaller the bite-size pieces that a given metric scheme delivers, the more you should mistrust it. Reality can be quite resistant to being distilled down this way.

There’s a very relevant analogy from the 2007-2008 financial crisis. The people running the firms that managed the big portfolios of bonds and derivatives were quite likely using inappropriately simplified measures of risk and of the degree to which the various components were correlated. The further you went up the organization, the more likely it was that people just looked at one or two numbers (value-at-risk, the correlation number from the Gaussian copula, what have you) and decided that they had things figured out well enough. They didn’t. But it’s human nature to look for something that’s easier to get a handle on, and to overvalue it once you think you’ve found it, especially in a messy, multifactorial situation like investing or drug discovery.

15 comments on “Down Amongst the Ugly Variables”

  1. Peter Kenny says:

    One point this article doesn’t really pick up on is that the links between molecular/physicochemical properties and the in vivo behaviour of compounds are not particularly strong. The challenge is to define guidelines in a way that accurately takes account of what we know (and what we don’t know). Lead optimization may be less multiobjective than is often asserted. The objectives that we monitor (lipophilicity, permeability, solubility, etc.) are surrogates for what we really need to know (unbound concentration at the site of action). Also, a logP of 4 might be perfectly acceptable in one series but the kiss of death in another.

    1. Peter S. Shenkin says:

      The same approach can be applied iteratively as in-vivo assays (plasma protein binding, liver microsome clearance, and so on, depending on the program) are accumulated, since any compound deemed strong enough to progress via further optimization is likely to undergo them. It’s not limited to zero-level rule-of-five plus a primary assay.

      1. Peter Kenny says:

        Not sure what you’re getting at here and I wouldn’t regard PPB and microsomal assays as ‘in vivo’. Ro5 is essentially useless in practical lead optimization and provides no guidance whatsoever for the optimization of Ro5-compliant compounds. The medicinal chemist charged with optimizing a lead series can legitimately query the relevance of desirability functions derived from a structurally diverse set of compounds to the series that he/she may be working on. I’ve linked our correlation inflation article as the URL for this comment.

        1. Peter S. Shenkin says:

          Thanks. “In vivo” was a mistake on my part. I meant to refer to the panoply of experimental tests that go beyond the simple properties I think you were referring to. Eventually, you will be looking at more detailed assays of various kinds (including, eventually, in-vivo assays), and the methodology described in the paper can apply across this range as well as across early-stage “molecular/physicochemical properties”.

          1. Peter Kenny says:

            I agree that the methodology described in the paper could be applied across a range of assays, and I also believe that it is useful to analyse the sensitivity of decisions to assumptions. That said, there are still challenges for lead optimization approaches based on desirability functions. First, there is a Design of Experiments challenge in customizing desirability functions for specific projects (and even individual series). Second, one needs to deal with what I’ll term ‘interactions between objectives’. For example, you can probably get away with a less permeable compound if it is highly soluble (I recall this being discussed in the Ro5 article). Plasma protein binding needs to be viewed in the broader context of distribution, and the authors of the featured article should take a look at NRDD (2010) 9:929–939 (DOI 10.1038/nrd3287).

  2. Andy says:

    Given the multitude of explicit conditions applied to drug candidates, coupled with the implicit ‘doesn’t look right’ test and our own cognitive biases, it’s a wonder we make anything.
    Maybe screening random compounds is the answer? It might be no less productive….

  3. Anon says:

    Right now the problem in drug discovery is not the false negatives; it is the false positives that are driving up costs. If anything, rules of thumb should be applied more rigorously, but at the same time, pharma should be exploring ideas well outside the norm in order to innovate. Catch-22!

    1. Peter Kenny says:

      If the rules of thumb actually have a basis, then by all means apply them as rigorously as you like, but be aware that the basis for some of these rules is not strong. The cutoffs for the 4/400 rule that the Sages of Stevenage used to tout (maybe they still do) reflect the scheme used to categorize continuous data, and it could just as easily have been a 3/300 or 5/500 rule. If proposing to use a rule of thumb, it’s a good idea to make sure that you fully understand its basis. Generally, you should be extremely wary of any rule or guideline that doesn’t provide guidance as to how stringently it should be adhered to. I don’t always get the impression that compound quality ‘experts’ are aware of the difference between logP and logD (or indeed why this may be relevant to lead optimization). Reductio ad absurdum is a useful tool for evaluating rules, guidelines and metrics, although the reduction is often redundant. I’ve linked a blog post on PFI, another favorite of the afore-mentioned Sages of Stevenage, as the URL for this comment.

  4. Sean Fearsalach says:

    This is how it happens in IT, but I’m sure pharma is very similar.
    http://dilbert.com/strip/1996-03-10

  5. CMCguy says:

    This reminds me of the Puritan Impulse post, coming from a slightly different angle. Certainly it would be nice to work with definite rules and fixed criteria that align across multiple variables for each target; however, that simply is not the world of drug discovery. Besides serendipity, the most reliable technique I have seen for distinguishing compounds or series is reliance on the guidance and intuition of experienced and gifted medchemists, who were always few and far between, and I wonder whether they are now an endangered species. Likewise, until final selection we have typically kept an intentionally different type of back-up compound or series on a delayed parallel track, to allow a more immediate response if the main molecules hit a major wall, although resources for such efforts were often thin.

  6. MoMo says:

    Here are the rules in drug discovery. Go get a pen and paper, write them down, and paste them on your bathroom mirror so you can remember them all day after you wake up in the morning.

    1. There are no rules.
    2. If you think there are rules, go look for a job in telemarketing, as you are one step away from this career path.

    1. Anon says:

      3. Suck up to your boss
      4. Don’t rock the boat
      5. Don’t try to change the status quo
      6. Keep your mouth shut and do what you’re told
      7. Thanks, now here’s your pink slip

      1. Tommy says:

        I like this comment.

        Big pharma has essentially lost the capability to combine the science and the art of drug development.
        The big secret is still to design a drug for a dedicated indication based on solid numbers (a point that is often underestimated).

        Good luck to all the start-ups!

  7. Alchemyst says:

    Rules vs. guidelines. Would the statins ever have been advanced if blood levels had been deemed important, etc.?
    In the final analysis, humans are just two-legged rats.
