Drug Development

The Puritan Impulse

Here’s a good blog post with a lot of food for thought, and here’s the follow-up. I missed these back when they came out last year, but the issues they raise are (for better or worse) evergreen. The author frames a debate about medicinal chemistry as Cavaliers versus Roundheads, a classification that has proven useful enough to be applied ever since its origin in the English Civil War. Over time (and in these two posts), it has come to mark a distinction between a more Puritan outlook, held by people who stick to principles (especially principles about what shouldn’t be done), and a more laissez-faire, pragmatic view:

. . .I am sure that those more historically literate than myself will highlight why this is a bad analogy. Its also very UK-centric for which I apologise for indulging myself. However, I think it is a useful comparator to explain what I think is wrong with many of the bandwagons that go zooming by if you just wait around long enough in a drug discovery environment. Its not so much the roundheads as having a distinctive look or political grouping that is the source of the analogy I wish to draw but their puritan roots. In particular, it is the puritan tendency to tell people what they should NOT do and their propensity for banning things which is the basis of the comparison I want to make. Notably, they undid themselves by (amongst other things) banning things such as the various feasts and festivities and the vividly decorated public buildings that were some of the few sources of gaiety for many of the populace.

The comparator that I wish to make and which therefore, I think makes me a cavalier, is that there are too many people trying to tell drug discoverers what NOT to do but without providing any particularly useful guide to what they should do instead. I further speculate that, like the roundheads, this approach is wont to sap the joy out of drug discovery for many and ultimately is unlikely to spur the kind of creativity that we cavaliers think essential for success in this field.

Historically, there is a problem with this definition creep, because (among other things) the Cavaliers were fighting (with all the means at their disposal) for the divine right of kings, which doesn’t sound all that open and creative to modern ears. (It’s also worth keeping in mind that the Puritans themselves were only a minority of the Roundhead supporters, most of whom were still Church of England). But I’ll leave the history aside, because it is a side issue here, and go to the main point of the argument.

The author very much objects to things like up-front assessments of compound tractability, target druggability, and so on. I feel his pain, actually:

It has frequently shocked me to hear medicinal chemists pontificating about what will and will not work. It has too often felt like a delusion. Actually, that’s not quite right. In terms of stacking the odds in your favour, it is a pretty good idea to say that everything will not work. This makes medicinal chemistry ideal territory for self-satisfied Roundheads. But how unhelpful, how uninspiring. In an environment in which little is understood definitively, we require persistent sorts who can take the knocks of things not working as hoped and can pick themselves up and do it all again. A corrosive presence that will decrease the prospects of success is the one who can only tell you why they think something will not work or worse, the “told you so” sorts who don’t even make testable predictions.

In fact, I’ve said very similar stuff myself, to the effect that you can sit in the back of the conference room and say over and over that things won’t work, and in this business you’ll be right almost all the time. But to what end? It’s the small residuum of things that work that we really care about. So I know where the objections to the “Don’t make this, don’t do that” sort of rules are coming from, because it seems as if you’re setting yourself up for failure after failure, while all the time assuring yourself that you’re doing the right thing for the right reasons. It can be infuriating to listen to, and infuriating to watch.

But that said, I think that there’s a bit of a false dichotomy here. Both sides of these arguments about compound metrics, undruggable targets, PAINS alerts and so on can be taken to reductio ad absurdum extremes. In fact, here they are: on one side, you could say that there are long lists of targets that should never be attempted, huge swaths of compounds that should never be made, and piles of screening hits that should immediately be purged, to the point that there’s little left to work on at all. And on the other, you would have people who are more than ready to let a thousand flowers bloom, to work on whatever, however, because you never know what might turn up or work out well in the end.

Both of these, though, are caricatures – or I hope that they are. Try some thought experiments: if you’re a hard-core rule-of-fiver, then you have excluded some of the most useful drugs in the entire pharmacopeia. You never would have made their structures, you never would have followed up on them as hits if you’d seen their structures at the beginning, and the more fool you would have been. Similarly, if you have very firm ideas about target druggability, you probably would not have believed a priori that there could be drugs targeting microtubules, FKBP, or the protein synthesis initiation complex. But there are wildly useful drugs that hit all of these, and plenty of equally wild targets as well.

On the other side, though, if you’re OK with going to a 500 MW compound and optimizing it, because all this rule-of-five stuff is an artificial construct, then how about 600? 850? 1405? Synthetic difficulty aside, you really do start to run into some practical problems dosing these things in living creatures. And suppose you ran into a situation where you could treat Disease X either by making a Type I GPCR antagonist or by (say) targeting an intrinsically disordered nuclear receptor cofactor protein in the CNS. If you had no reason to think that (from a disease-modifying perspective) one of these was any better than the other, would it still be just a coin flip to decide where to put tens of millions of dollars and years of your life?

Well, these examples are caricatures, too, but that’s what a reductio ad absurdum does – show you that there is indeed some absurdity waiting out there for you. The important thing, I think, is to have a sliding scale in your mind for all these factors, and to understand the risk/reward for each of them. Some targets really are more tractable than others, even if there’s a giant middle of the scale where these distinctions are difficult or impossible to make. And some compounds really do have a better chance of becoming drugs than others, although there are no clearly marked electric fences out in that territory, either. If your target is a class of interaction that no one’s ever gotten to work before, you might not want to pile any more risk onto it in other categories if you can help it. If your chemical matter is really outside the experience of everyone you show it to, you’d better have a good reason for going there, and be ready to do extra tox, scale-up, and formulation work to get it to fly.

And even behind understanding the risk/reward level for each of these is the realization that there is such a tradeoff to be made, every time. I think Chris Lipinski was appalled at some of the uses that his rule of 5 paper was put to, because the paper was mainly just meant to show that marketed drugs tended to fall into certain areas of chemical space, which suggested that moving outside of them might well be associated, eventually, with a higher risk of not having a marketed drug. It’s human nature, I suppose, to take this sort of thing and run with it, and it’s certainly true that there is a particular personality that is likely not only to run with it, but to start beating everyone else over the head with it once they’ve arrived. That’s the Roundhead of the blog posts referenced above, and I can see where the irritation comes from.

But I can’t go all the way over to the other side, either – as I’m fond of saying, just because you can screw up in one direction doesn’t mean that you can’t ever screw up in the opposite one. There are ditches on both sides of the road. I think that medicinal chemists should indeed be creative, and had better be creative, because I fear that too many of the drugs that can be discovered by only doing what we know how to do have already been discovered by now. At the same time, though, “because it’s creative” isn’t enough of an argument in medicinal chemistry, either. It would be very creative of me indeed to fill a hundred-thousand-member screening library with nothing but quinones, because I’m sure that no one has thought of doing that before. I’ll get mondo honking piles of screening hits out of that set, too, no matter what I run through it. But ars longa, vita brevis, and I could easily spend the rest of my life and my next one, assuming I don’t come back as a sea otter or a crested grebe, trying to get one of these things to actually become a drug that might do someone, anyone, any good.

21 comments on “The Puritan Impulse”

  1. Hap says:

    Isn’t “Nullius in verba” the antidote to this, though? Religious and political arguments have the problem that a lot of the time their assumptions are either untestable or practically untestable. On the other hand, the assumptions here can be tested. If you have a quinone and think that it might be a good lead for inhibitors of some exotic enzyme, well, then you can actually go and test its selectivity against a (preferably wide) panel of enzymes and see if it actually is selective for the one you want. Data trumps talk.

    The problem with a lot of the work on PAINS is that many people don’t seem to care that just because a compound shows activity in an assay doesn’t mean that it’s actually selective or useful. On the other hand, just because an inhibitor is too heavy or has too many rotatable bonds doesn’t mean it won’t work; the only way to find out is to test, because biology doesn’t care about dogmatic pronouncements of activity or bioavailability or the lack thereof.

    1. Peter Kenny says:

      A big problem with a lot of the prevailing ‘wisdom’ is that it is very difficult to separate fact (experimental observations) from opinion. Guidelines are typically based on trends in data, and the strength of the trend tells you how rigidly you should adhere to the guidelines. This is why correlation inflation is such a menace. When the basis of a rule, guideline or metric is challenged, the response is often something like, “It’s useful”, and this should worry us a great deal. You may find the blog post that I’ve linked as the URL for this comment to be relevant.

      1. Hap says:

        Isn’t this where the comment about effect size is supposed to go (high correlation but low effect size doesn’t mean much) or am I confused?

        1. Peter Kenny says:

          Effect size is more usually encountered in categorical data analysis (comparing treated and placebo groups) and the problems start when people confuse the strength of a trend with its statistical significance. Scaling the difference between two groups by standard deviation quantifies effect size. Scaling the difference by standard error quantifies statistical significance.
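
          As a rough sketch of that distinction (a Python toy with purely illustrative numbers, not data from any real study): the same small difference between two group means gives the same effect size regardless of sample size, while the standard-error-scaled statistic can be made as “significant” as you like simply by collecting more measurements.

              import math

              def cohens_d(mean1, mean2, pooled_sd):
                  # Effect size: difference between group means scaled by the standard deviation
                  return (mean1 - mean2) / pooled_sd

              def t_statistic(mean1, mean2, pooled_sd, n_per_group):
                  # Significance: the same difference scaled by its standard error
                  se = pooled_sd * math.sqrt(2.0 / n_per_group)
                  return (mean1 - mean2) / se

              # Hypothetical group means (treated vs placebo) with unit standard deviation
              print(cohens_d(10.2, 10.0, pooled_sd=1.0))             # 0.2 - a small effect, whatever the n
              print(t_statistic(10.2, 10.0, 1.0, n_per_group=20))    # ~0.6 - "not significant"
              print(t_statistic(10.2, 10.0, 1.0, n_per_group=2000))  # ~6.3 - "highly significant"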

          Typically a strong correlation between X and Y means that X will be a good predictor of Y. That said, you also need to look at the root mean square error (RMSE) when Y is fit to X to estimate the likely quality of predictions. You can get a situation where the correlation is strong because of a large dynamic range in X and Y, but the RMSE is too large to allow accurate prediction. For example, you might have measured solubility in the range 1 nM to 1 M, but what you’re interested in is likely to be in the range 1 to 100 micromolar.
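
          A minimal numerical sketch of that point (made-up data standing in for a measured-versus-predicted solubility plot): a wide dynamic range produces an impressive correlation coefficient, yet the scatter is still too large to predict within the two-log-unit window you actually care about.

              import numpy as np

              rng = np.random.default_rng(0)

              # Hypothetical log-solubility data spanning ~9 log units (1 nM to 1 M)
              x = rng.uniform(-9.0, 0.0, 200)      # predictor (e.g. a calculated property)
              y = x + rng.normal(0.0, 1.0, 200)    # "measured" values with ~1 log unit of scatter

              r = np.corrcoef(x, y)[0, 1]
              rmse = np.sqrt(np.mean((y - x) ** 2))

              print(round(r, 2))     # ~0.93: looks like a strong trend
              print(round(rmse, 2))  # ~1.0 log unit: too coarse if the window of interest
                                     # is only 1-100 micromolar (two log units wide)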

          Another parameter of interest is the steepness of response. If potency is strongly correlated with a pharmaceutical risk factor like lipophilicity or molecular size, then you are likely to want the response to that risk factor to be as steep as possible. If you’re trying to project outcomes, it makes sense to observe trends in your own data rather than make the assumptions forced on you by the metrics.

    2. Andre says:

      “the only way to find out is to test, because biology doesn’t care about dogmatic pronouncements of activity or bioavailability or the lack thereof.”

      This statement seems to me like a strong argument for more phenotypic drug discovery, where there is no preconceived notion of which target has to be hit to elicit a therapeutic benefit.

  2. Luysii says:

    Ah yes the Puritan impulse — alive and well in libertine San Francisco, where there is a move afoot in the city council to ban the sale of cigarettes to those under 21, while permitting marihuana use.

  3. Peter Kenny says:

    The bigger issue is that the Puritans can’t (won’t) tell us what is fact and what is dogma. Perhaps they can no longer even remember. I’ve linked a blog post on this theme as the URL for this post.

  4. LeeH says:

    It’s amazing how some simple rules of thumb can polarize a community. On one side, the skeptics yell “you’re not the boss of me!” and complain vehemently that their 2nd amendment rights to 700 MW compounds are being taken away. On the other, warnings that “those who ignore history are doomed to repeat it”.

    Now, in full disclosure, I tend to fall into the latter camp, mostly because well-crafted “rules” are a pragmatic tool for cheminformatics, and I’ve seen enough cases where they work well. But guess what, guys: rules of thumb are not supposed to be thumbscrews. They’re designed to be an early warning system to keep you from veering too far off the road. If you momentarily swerve into the breakdown lane, so what, as long as you take precautions that you’re not going to plow into someone changing a tire. If you keep your eyes open (is your compound series generally soluble? Decent PK?) you’ll be fine. Don’t freak. Just know that another swerve could put you in the ditch.

    1. Peter Kenny says:

      It is a little naive to see skeptics simply as folk who complain that their rights to sin (in a compound quality sense) are being taken away. If guidelines are based on competent and honest analysis of data that we can all see, then I’ll use them. If I suspect that data has been presented in a manner that makes trends look stronger than they actually are, then I’ll be skeptical. Also, if the guideline tells me something that sounds like BS, then I’ll back my own judgement. The LELP metric tells us that a logP of 3 with 25 heavy atoms is equivalent to a logP of 1 with 75 heavy atoms. No doubt you’ll get somebody who will assert that LELP is useful, but are you going to use LELP to make decisions? Compound quality ‘experts’ tell us that we should lower logD, and you can usually do this by increasing the extent of ionization. I hope you can see where this is headed.
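
      To spell out that LELP arithmetic (a Python sketch assuming the commonly quoted definitions LE ≈ 1.37 × pIC50 / heavy atoms and LELP = logP / LE; the two compounds and their shared potency are hypothetical):

          def ligand_efficiency(pic50, heavy_atoms):
              # LE in kcal/mol per heavy atom, using the usual 1.37 * pIC50 approximation
              return 1.37 * pic50 / heavy_atoms

          def lelp(logp, pic50, heavy_atoms):
              # LELP = logP / LE, which reduces to logP * HA / (1.37 * pIC50)
              return logp / ligand_efficiency(pic50, heavy_atoms)

          # Two hypothetical compounds with identical potency (pIC50 = 7)
          print(round(lelp(logp=3, pic50=7, heavy_atoms=25), 1))  # ~7.8
          print(round(lelp(logp=1, pic50=7, heavy_atoms=75), 1))  # ~7.8
          # Same LELP, even though the second compound is three times the size:
          # at fixed potency the metric only sees the product logP * heavy atoms.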

      One point that may be worth thinking about is that if we do bad data analysis then those who fund our activities may conclude that the difficulties we face are of our own making. I’ve linked a blog post on ‘thermodynamic proxies’ as the URL for this comment. Would you be a skeptic or believer if your project manager suggested that you use these in a project?

      1. LeeH says:

        Peter – My point is that much of the controversy depends on the definition of the word “use”. If you mean that the project team is vigilant (i.e., requests earlier or additional PK or other supporting data) for compounds that are on the fringes of some set of early-warning metrics, then yes, I’d “use” them gladly. If it means that a scientist is given a bad mark on his/her performance review because he/she made some compounds that violate the rules, then no, I wouldn’t want to “use” them. And of course, everyone will be unhappy if the rules are based on obvious bad science, or used in isolation from common sense (your 75-atom example, for instance).

        1. Peter Kenny says:

          Lee, it sounds like we are in general agreement although I’m sure that our positions will differ on specific issues. If the guidelines have a secure basis then I will take notice of them. Well-crafted guidelines should give the user an idea of how rigidly they should be adhered to. It can help to be aware of the origins of thresholds/cutoffs used to specify guidelines. For example the GSK 4/400 rule reflects the scheme used to categorize lipophilicity and it could very easily have been a 5/500 rule, a 3/300 rule or a 4.22/422 rule.

          I would challenge your assertion that, “everyone will be unhappy if the rules are based on obvious bad science”. Ligand efficiency (the metric) is actually thermodynamic nonsense and, when we use it, our perception is altered when we change the concentration used to define the standard state. That said, I’m not denying that pharmacokinetic characteristics of compounds tend to deteriorate when molecular size or lipophilicity increase.
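
          A small sketch of that standard-state point (hypothetical affinities and heavy-atom counts): since ΔG° = RT·ln(C°/Kd), changing the standard concentration C° shifts every compound’s ΔG° by the same amount, but dividing by heavy-atom count turns that shift into a size-dependent one, so compounds ranked by LE can swap places.

              import math

              RT = 0.593  # kcal/mol at ~298 K

              def ligand_efficiency(kd_molar, heavy_atoms, std_conc=1.0):
                  # LE = RT * ln(C_std / Kd) / HA; std_conc is the standard-state concentration in M
                  return RT * math.log(std_conc / kd_molar) / heavy_atoms

              small = dict(kd_molar=1e-6, heavy_atoms=20)   # hypothetical: 1 uM, 20 heavy atoms
              large = dict(kd_molar=1e-8, heavy_atoms=40)   # hypothetical: 10 nM, 40 heavy atoms

              for c_std in (1.0, 1e-6):                     # 1 M vs 1 uM standard state
                  print(c_std,
                        round(ligand_efficiency(std_conc=c_std, **small), 2),
                        round(ligand_efficiency(std_conc=c_std, **large), 2))
              # 1 M standard state:  0.41 vs 0.27 - the smaller compound looks better
              # 1 uM standard state: 0.0  vs 0.07 - the ranking has flipped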

  5. Bagger Vance says:

    At least it’s better than MSM/Social media, where there is only one historical data point, and everything’s just a race to Godwin your opponent.

  6. MoMo says:

    The Puritans as a religion don’t even exist anymore, so if you want to go extinct, keep following the “rules”. And Lipinski didn’t expect all of you to follow his findings like the lost sheep many of you are in this industry. I know, I asked him.

    So break some rules, break all of them if you have to, and above all, question authority, as the real leaders won’t mind.

    The phony ones, on the other hand………

  7. Anon says:

    Off topic, MoMo, but I’d challenge your assertion that the Puritan religion no longer exists: many current Protestant churches proudly trace their lineages to various Puritan groups, and some still profess to adhere to many of the same strict teachings.

    On target: I have often seen that your designated real leaders require a few saner people around them to make sure they do not go off the deep end, so that their revolutionary approaches can actually be executed in the practical real world. I think Derek set a good tone about drug discovery needing to be a series of trade-offs, some of which may be firmer than others, although most have to be willing to bend in light of certain results. I would probably classify that as practicing “situational ethics in med chem”, which is indeed something the Puritans would find most disturbing.

  8. RM says:

    As I read the blog posts, they’re less pro-Cavalier and more anti-Roundhead. That is, it’s strictly a complaint against all the “Don’t do what Donny Don’t does”-style proscriptions, rather than support for the “let a thousand flowers bloom” perspective.

    For example, ligand efficiency (or the rule-of-five, or lipophilicity, or PAINS or …) isn’t worthless, and it isn’t the be-all-end-all, but how does one actually use it? How do you know you should trust it, and when should it be discarded? What sort of guidelines are involved in its use? Does anyone actually discuss these things, or are we just batting rules-of-thumb back and forth with no guidance?

    I’ve seen a large number of people come out with dogma about this metric or the other, or expressing sniffy disdain for some substructure or the other, but it’s most often couched in terms of absolutes. (“A quinone? Psht! Idiot academics!”) Instead, there should probably be a more middle-ground approach – yes, quinones (or high-molecular-weight compounds, or …) should be looked at with a skeptical eye, but it should be skeptical, not scathing. Don’t just say “I’ll never trust a rhodanine!”; say instead what reasonable evidence it would take for you to trust that this particular rhodanine is actually doing something. (Again, *reasonable* evidence. If you’re so set against them that the only evidence you’d trust would be of the unreasonable kind, then perhaps you’re part of the problem.)

    Or in other words, it’s easy to be a curmudgeon and point out what Goofus is doing wrong, but to be useful, you also have to point out what Gallant is doing right.

    1. tangent says:

      “say instead what reasonable evidence it would take for you to trust that this particular rhodanine is actually doing something.”

      What are people’s takes on Bayesian methods in drug discovery? Not in clinical trial design, but for this type of thing: quantify “if I have X prior belief about my compound, then after I know it’s a rhodanine, my updated belief is Y…”

      Better than gut feel? Or too many unknown unknowns here to have value?
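
      For what it’s worth, the update being described is just Bayes’ rule; a minimal sketch with entirely made-up numbers (the prior and both likelihoods are hypothetical placeholders, not estimates from any real screening data):

          def posterior(prior, p_flag_given_real, p_flag_given_artifact):
              # P(hit is real | it carries the structural flag, e.g. "is a rhodanine")
              p_flag = p_flag_given_real * prior + p_flag_given_artifact * (1.0 - prior)
              return p_flag_given_real * prior / p_flag

          # Hypothetical numbers: 5% prior belief that the hit is real, and the flag is
          # four times more common among artifact hits than among genuine ones.
          print(round(posterior(prior=0.05, p_flag_given_real=0.2, p_flag_given_artifact=0.8), 3))
          # ~0.013: the flag should lower your belief, but it does not drive it to zero.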

      1. Patrick says:

        I think the basic Bayesian logic of using priors and modifying them accordingly is one of the foundations of rational problem solving, period. A deeper awareness of it is, I think, useful in examining your process and conclusions, but it’s always there at some level, isn’t it?

        But if you’re discussing actually formalized statistical methods, then no, I think for the most part the unknowns (and, perhaps especially, the unknown unknowns) are too numerous for this to be useful.

  9. tally ho says:

    while the cavaliers and roundheads debate medicinal chemistry, the fat cats are laughing all the way to the bank. who was cleaning out the coffers during the English Civil War (wondering if there’s a good “fat cat” synonym from that period)?

    http://www.bloomberg.com/news/articles/2016-03-14/allergan-vows-to-pay-golden-parachute-taxes-after-pfizer-deal

  10. Kevin McLaughlin says:

    Better that we should remain skeptics and avoid cynicism?

  11. Paolo Dondoli says:

    Just wanted to say that the correct form is “reductio ad absurdum”.

Comments are closed.