
PAINS Go Mainstream

Well, I’m back in the Eastern Time Zone after flying in from Basel (and Amsterdam) yesterday. And the first thing I wanted to mention was this article from Jonathan Baell and Michael Walters in Nature, on the PAINS compounds. It’s good to see the journal cover this issue (and I was impressed that they got New Yorker cartoonist Roz Chast to illustrate it).
PAINS (pan-assay interference compounds) are, of course, nasty frequent-hitting compounds that should be approached with great caution in any sort of screen for activity. This topic has come up many times on the blog (for someone writing about chemistry and drug discovery, there’s no way it couldn’t have), most recently just a few weeks ago. There are a lot of these things out in the literature (and the catalogs), and they just keep on coming. Now a wider audience gets to hear about the problem:

Academic researchers, drawn into drug discovery without appropriate guidance, are doing muddled science. When biologists identify a protein that contributes to disease, they hunt for chemical compounds that bind to the protein and affect its activity. A typical assay screens many thousands of chemicals. ‘Hits’ become tools for studying the disease, as well as starting points in the hunt for treatments.
But many hits are artefacts — their activity does not depend on a specific, drug-like interaction between molecule and protein. A true drug inhibits or activates a protein by fitting into a binding site on the protein. Artefacts have subversive reactivity that masquerades as drug-like binding and yields false signals across a variety of assays.

That’s the problem, all right. It’s not like ugly-looking compounds can never become drugs, and it’s not like they can’t be starting points for research. But the odds are against them, and you have to realize that, and you also have to realize why this “hit” you’ve just uncovered may well be spurious (at worst) or need a lot of extra work (at best). Far, far too many papers from less experienced research teams seem to be oblivious to these concerns. Compound hits? Compound good!
Appropriately, this piece calls out the rhodanines as perfect examples of the problem:
Rhodanines exemplify the extent of the problem. A literature search reveals 2,132 rhodanines reported as having biological activity in 410 papers, from some 290 organizations of which only 24 are commercial companies. The academic publications generally paint rhodanines as promising for therapeutic development. In a rare example of good practice, one of these publications (by the drug company Bristol-Myers Squibb) warns researchers that these types of compound undergo light-induced reactions that irreversibly modify proteins. It is hard to imagine how such a mechanism could be optimized to produce a drug or tool. Yet this paper is almost never cited by publications that assume that rhodanines are behaving in a drug-like manner.
Very occasionally, a PAINS compound does interact with a protein in a specific drug-like way. If it does, its structure could be optimized through medicinal chemistry. However, this path is fraught — it can be difficult to distinguish when activity is caused by a drug-like mechanism or something more insidious. Rhodanines also occur in some 280 patents, a sign that they have been selected for further drug development. However, to our knowledge, no rhodanine plucked out of a screening campaign is in the clinic or even moving towards clinical development. We regard the effort to obtain and protect these patents (not to mention the work behind them) as a waste of money.

Yeah, I wouldn’t spend much on trying to stake a claim to these things, either. If you haven’t done much screening, you may not appreciate just how many false positives are out there (and for difficult targets, how few real positives there may be). I see people in the literature screening little libraries of a few thousand compounds from a catalog and reporting hit after hit, even in very tricky systems, while in industry we’re used to running hundreds of thousands of compounds past some of these things and coming up with squat. Well, after checking the “hits” for purity, aggregation behavior, reactivity, and profiles from past screening campaigns, that is.
Here’s the sad truth: If you’re doing a small-molecule screen to affect transcription factors, protein-protein interaction targets, or anything in general that doesn’t have an evolutionarily optimized small-molecule binding site, you’d better assume that the vast majority of any hits you get are false positives. There’s almost no way that they can be anything else. The true hit rate for some of these things against any sort of typical compound collection is damn near zero, which means that the ways your compounds can be wrong far outnumber the ways that they can be right.
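To put rough numbers on that, here’s the arithmetic in a few lines of Python (the hit rate, false-positive rate, and library size below are invented for illustration, not taken from any real screen):

```python
# Illustrative only: assumed rates, not data from any actual campaign.
true_hit_rate = 0.0001   # assume 1 in 10,000 library compounds is a real active
false_pos_rate = 0.005   # assume 0.5% of inactives still read out as "hits"
library_size = 100_000

true_hits = library_size * true_hit_rate
false_hits = library_size * (1 - true_hit_rate) * false_pos_rate

fraction_real = true_hits / (true_hits + false_hits)
print(f"{true_hits:.0f} real vs. {false_hits:.0f} spurious hits: "
      f"only {fraction_real:.1%} of the hit list is genuine")
```

Even with an assay that wrongly flags only one inactive compound in two hundred, the artifacts outnumber the real actives fifty to one.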
Every single hit, for any assay, should be regarded with appropriate suspicion. Purity check first: LC/MS and NMR. Is it what it says on the label? You might be surprised how often it isn’t (or isn’t anymore, even if it started out OK). If you have solid material and a DMSO stock, check both of them, because things diverge on storage. It’s a very good idea to take your interesting hits, run them through a plug of silica gel, and test them again. That’s especially true if they have any color to them (but keep in mind, some assay-killing contaminants are completely colorless). The gold standard is resynthesis: if you can make the compound again and purify it, and it still works, you at least know you can trust it that far. If you can’t, well, how exactly is this compound going to do anyone any good?
Note that we haven’t even gotten to the PAINS yet. There are a lot of clean, accurately labeled compounds that should be chucked into the waste can, too, which is where the Baell PAINS list comes in. You’re going to want to check for aggregation: run your assay with some detergent in it, or do some dynamic light scattering or any of several other techniques. A lot of false-positive compounds are aggregators, and you can’t completely predict which ones they might be (it varies according to assay conditions).
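If you’d like to apply those substructure checks yourself, the Baell filters ship with several free toolkits; here’s a minimal sketch using RDKit’s built-in PAINS catalog (the rhodanine SMILES is just an assumed example):

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

# Build a catalog holding the published PAINS substructure filters.
params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# A 5-arylidene rhodanine, the frequent-hitter class discussed above.
mol = Chem.MolFromSmiles("O=C1NC(=S)SC1=Cc1ccccc1")

match = catalog.GetFirstMatch(mol)
if match is not None:
    print("PAINS alert:", match.GetDescription())
else:
    print("No PAINS substructure matched.")
```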
You’re also going to want to run your hits through some other assays. How promiscuous are they? If you have access to data from multiple screening campaigns with the same compound collection, good for you. If you don’t, you should strongly consider sending your hot compound(s) out for a commercial screening panel. Don’t just pick similar targets to screen – you want those, of course, but you want all kinds of other stuff, too. If a compound hits against widely disparate protein classes, it’s a PAIN, and it’s set to cause trouble. Don’t assume that your hits are clean – don’t assume that any compound is clean, because it almost certainly isn’t. That goes for marketed drugs, too – the question is, does it have selectivity that you can live with, or not?
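Scoring that kind of promiscuity is simple once the panel data are in hand; here’s a sketch (the targets, numbers, and 50% cutoff are all made up for illustration):

```python
# Hypothetical panel: percent inhibition of one hit compound at a single
# concentration across deliberately unrelated target classes (values invented).
panel = {
    "kinase": 87, "protease": 79, "GPCR": 91,
    "phosphatase": 83, "ion channel": 12, "nuclear receptor": 88,
}
HIT_CUTOFF = 50  # assumed threshold for calling an assay a "hit"

hits = [t for t, inhibition in panel.items() if inhibition >= HIT_CUTOFF]
print(f"Active in {len(hits)}/{len(panel)} unrelated assays: {', '.join(hits)}")
if len(hits) > len(panel) / 2:
    print("Hits widely disparate protein classes: treat as a likely PAIN.")
```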
Those are the big tests, and believe me, they’ll clear out your initial list of screening hits for you. If your target is a tough one to start with, they may well clear out everything. Better that, though, than working on (and publishing) crap.

15 comments on “PAINS Go Mainstream”

  1. David Borhani says:

    Amen!

  2. JAB says:

    Amen twice!

  3. Anonymous says:

    Amen a third time, but I’d also be really interested to hear people’s thoughts on how to identify suspect cpds.
    There are a lot of methods – detergent sensitivity, Novartis’ non-stoichiometric binders assay, redox assays, looking at slope factors, promiscuity…the list goes on, but I’ve never really seen any recommendations as to what is best practice.
    My personal prejudice would be to use the non-stoichiometric binder assay in conjunction with slope factors and promiscuity against other targets. Detergent assays are OK when you ran your primary assay without detergent, but if you screened with it, I’m not sure what they add. And only do redox assays if you have a sensitive target.

  4. David Borhani says:

    @3: I’d love to hear Jonathan Baell’s thoughts on your questions.
    Something I think is reasonable to do, if your potentially PAINful compound has passed the easy hurdles (detergent, etc.), is to apply MedChem logic. It does take more work, however.
    Take those arylidene rhodanines, for example: If I saturate the C=C double bond, or replace the C=S by C=O, do I still have an inhibitor? Toxoflavins: What happens if I replace some of the N atoms by CH? This sort of molecular deconstruction of hits, to enable building specific hypotheses about how each atom of the inhibitor may interact with the target, can be invaluable, in my opinion.
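    (A hedged RDKit sketch of that deconstruction idea, with hand-drawn analog SMILES; the real test is of course re-assaying the analogs, but the code at least shows which ones still carry a formal PAINS alert:)

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# Illustrative deconstruction series for a 5-arylidene rhodanine:
# the parent, the saturated C-C analog, and the C=O-for-C=S swap.
analogs = {
    "parent arylidene rhodanine": "O=C1NC(=S)SC1=Cc1ccccc1",
    "saturated C-C analog":       "O=C1NC(=S)SC1Cc1ccccc1",
    "C=O for C=S analog":         "O=C1NC(=O)SC1=Cc1ccccc1",
}

for name, smiles in analogs.items():
    match = catalog.GetFirstMatch(Chem.MolFromSmiles(smiles))
    print(f"{name}: {match.GetDescription() if match else 'no PAINS alert'}")
```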

  5. MAW says:

    Different PAINS require different techniques. A diligent MD/PhD student from the Mayo Clinic (see our paper referred to in the Nature commentary) recently established ALARM NMR here at the UMN. It does not replace other methods, but we have found it a useful tool in the toolbox.
    Always have a well-versed medicinal chemist on board when doing assay triage. Probably the simplest place to start.
    I agree that atom replacements are key to understanding structure-interference liabilities in these compounds. Has anyone reduced the double bond in an “active” tetrahydroquinoline (SMILES: C12=CC=CC=C1C3C(CC=C3)C(C4=CC=CC=C4)N2; see Baell, Future Med. Chem. 2010, 2, 1529, for a discussion) and retained activity?

  6. Jonathan Baell says:

    OK, I feel the need to respond.
    Great summary, Derek.
    @3: you are quite correct. A recommended best practice has yet to be established, in part due to the subversive and complicated behaviour of PAINS in different settings; look at the excellent work of the Guy group on SJ-172550 in PLoS ONE a couple of years ago. Could a general assay be developed to detect such complicated behaviour? Maybe, but there is a lot of learning yet. Does anyone out there have a protein, like PTP1B, that pulls out lots of junk? A panel of these (such as the La protein that led to ALARM NMR) could be useful in addition to a couple of the redox assays now out there. And @4 (hi Dave): yes, early SAR that tracks with the problematic structure has to be a concern. The importance of SAR is so often neglected. And @5: Mike/Jayme, your efforts to establish ALARM NMR are brilliant. Re the tetrahydroquinoline, a good point about reduction, but another complication: I’ve been told by Uli Schmitz (Gilead) that these can chelate heavy metals such as gadolinium, so a metal strip should also be part of the hit-triage process.

  7. Jonathan Baell says:

    @6: I forgot to stress that this is why structure alone is one of the most important red flags, and why being familiar with as many PAINS classes as possible (or at least the most common ones) is so useful and important. Personally, I wouldn’t actually bother following up target-based PAINS HTS hits with counter-assays in any sense unless they were accompanied by compelling data (a less hot PAINS class, unusually potent, selective, polar, etc.) – but even then…

  8. Ed says:

    It is also worth noting that there are free, easy-to-use software solutions for filtering out PAINS, so there should be absolutely no reason for any group to chase up PAINS hits without being aware of what they are (see also the sketch below this comment).
    KNIME (a free, cross-platform equivalent to Pipeline Pilot) is very well supported by the cheminformatics and SBDD/FBDD/LBDD community (CDK, RDKit, Schrödinger, ChemAxon, Simulations Plus, MOE, BioSolveIT, etc.), and there are a number of freely downloadable workflows to achieve PAINS filtering.
    As far as I know, it installs without admin rights, so no need to trouble your IT team either.
    https://www.knime.org/downloads/overview
    Ed (a happy medicinal-chemist KNIME user)
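    (For anyone who would rather script it than use KNIME, here is a minimal RDKit sketch of the same library-filtering step; the file name "library.smi" is a placeholder:)

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# "library.smi" is a placeholder: one "SMILES name" pair per line.
supplier = Chem.SmilesMolSupplier("library.smi", titleLine=False)

clean, flagged = [], []
for mol in supplier:
    if mol is None:                    # skip unparseable entries
        continue
    match = catalog.GetFirstMatch(mol)
    if match is not None:
        flagged.append((mol.GetProp("_Name"), match.GetDescription()))
    else:
        clean.append(Chem.MolToSmiles(mol))

print(f"{len(clean)} compounds pass; {len(flagged)} carry a PAINS alert")
for name, reason in flagged[:10]:      # show the first few offenders
    print(f"  {name}: {reason}")
```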

  9. cdsouthan says:

    It would be a great community service if those so inclined could surface (and maintain) PAINS lists as PubChem submissions (SIDs). This would be the easiest way to warn the largest number of users.

  10. cdsouthan says:

    Alternative thought – just offer PubChem a robust consensus PAINS filter for them to include as a default subset (i.e., on the right-hand side of result sets).

  11. Pete says:

    I made these comments on the corresponding Practical Fragments post, and they may have some relevance here as well.
    I believe that we need to be thinking more about the criteria by which compounds are deemed to be PAINS. For example, has a compound been shown experimentally to be a PAIN, or do we think it ought to be a PAIN because it includes a substructure that is believed to be PAINful? What do we mean when we assert that 8% of compounds in commercial libraries are PAINS? If we believe that a substructure is PAINful, then exactly how strong is the link between the presence of the substructure and the observed PAINfulness? Is the PAINfulness restricted to a single type of assay (i.e. detection technology), or has it been observed over different types of assay? How much should we be worried about PAINfulness if affinity for the target has been measured directly and characterized by X-ray crystallography?
    I’m certainly not denying that PAINS are a problem, and we do need to be aware of potential screening artefacts. At the same time, we need to find ways to better capture how much we really know about the PAIN levels associated with particular substructures, particularly when asserting literature pollution. As a cautionary tale, it’s worth remembering that it was asserted (dx.doi.org/10.1038/nrd2445) that “Lipophilicity plays a dominant role in promoting binding to unwanted drug targets”, even though the correlations were for median lipophilicity at each promiscuity level, and the activity threshold used to define promiscuity (>30% inhibition at 10 micromolar) is unlikely to have any physiological relevance. Fast forward a bit, and we see this work being cited in support of the assertion (dx.doi.org/10.1021/jm201388p) that lipophilicity “… has an inevitable role in selectivity and promiscuity”, which could be regarded as a form of inflation in its own right.

  12. Jonathan Baell says:

    @11: Pete, as I suggested on Dan’s blog, I can recommend going back to the original PAINS paper (J. Med. Chem. 2010) and the associated papers (Baell, Future Med. Chem. 2010 and Baell, Aust. J. Chem. 2013), as we discuss a number of these issues in quite some detail there, so there is no need to go over old ground. I certainly agree that we should not think of this as a black-and-white issue, that evidence-driven decision-making is the way to go, and that the degree of evidence does vary from class to class. I also don’t rule out that we may at some point see an example or two where a PAIN (particularly one from a less problematic class) has been prematurely progressed from a target-based screen, but adventitious additional off-target activity has led to (possibly even clinical) efficacy; who knows, maybe it goes all the way. That is, it has become a more traditional drug-discovery approach in a sense, associated with polypharmacology. Personally, I’d not take this chance from a target-based screen without substantial reason to do so.
    When we say 5-12% of libraries contain PAINS, this is defined by the original PAINS filters and what they recognise. There is much nuance, and there are many concepts to discuss here, which we do in the papers cited above (i.e., why in some cases a PAIN may not be a PAIN, and why in others a non-PAIN, that is, a compound not recognized by the filters, is actually a PAIN). The 1,700-word/10-reference limit of the Nature article (where terms such as ‘phenotypic’ were seen as potentially too specific for a generalist audience) does not allow for such subtlety, but it does afford a valuable opportunity to send a strong message and try to put a halt to target-based (and many phenotypic-derived) PAINS publications, the overwhelming majority of which are a waste of precious (unaffordable) time and money.
    I certainly appreciate that it is very hard for people to understand how much time PAINS can waste unless you are a hit-to-lead (H2L) medicinal chemist who has worked on hit sets containing them. I think most medicinal chemists who have gone down this path, only to find it a cul-de-sac, would agree.

  13. SP says:

    I’ve faced biologists who were worse than ignorant of the general idea of “ugly” or promiscuous compounds based on substructure filters – they actively reject the notion that chemists can make these judgments on structure alone, because they can always come up with a couple of counterexamples, e.g. marketed drugs with “bad” groups. It’s similar to the attitude that you have to follow up on EVERY SINGLE HIT, even if it’s not available, impure, etc., because that one hit out of your list of thousands is going to be the next drug. There’s no sense of statistics or cost/benefit.

  14. MAW says:

    @11: In most of my analyses, “PAINS” (or percent PAINS) simply means the percentage of compounds flagged by the substructure filters implemented in Canvas (Schrödinger). The point is simply that most commercial libraries have suspect compounds in them, and researchers should be wary. I have found at least one 50k library that has no compounds flagged as PAINS.
    When doing HTS triage, I simply flag compounds as PAINS and decide by visual inspection how to prioritize them versus the other actives (a sketch of this flag-and-rank step follows this comment). The same is true of any computational filter we use. After all, our groups don’t have the resources to follow up on everything we find, so knowing the risks of moving ahead on a series is important.
    Presumably, pharmaceutical companies will sometimes employ even harsher filters to flag or remove compounds. REOS filtering is much stricter, usually flagging on the order of 25-30% of large commercial libraries. But of course we don’t throw out all nitroaromatics at the triage stage.
    We are often approached by academic researchers who have developed compounds that we believe fit the PAINS substructure classes. In one case they have crystal structures. Of course, those crystals were formed under conditions that were not at all like the colorimetric assays they were running to determine activity. (And the compounds weren’t that potent anyhow, even after a library of ~400 compounds was prepared.) Before moving ahead with a collaboration, we proposed to de-risk these compounds by performing simple redox-cycling assays. After about two years, we have yet to gain access to any of their compounds.
    The bottom line is that we use many computational filters to flag compounds for prioritization; PAINS filtering is just one tool in this toolkit.
    We are just completing work on structural-interference studies on a few classes of interference compounds. I am in total agreement with J. Baell. These studies have shown a wide range of interference within these classes. I suspect that following up on any of these non-interfering members would eventually lead to the discovery of the more deceptively “active” true interference compounds. But I don’t know that for sure.
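    (A minimal sketch of that flag-and-rank triage step, using RDKit; the hit list, potencies, and sort order are invented for illustration:)

```python
from rdkit import Chem
from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

params = FilterCatalogParams()
params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)
catalog = FilterCatalog(params)

# Invented triage set: (ID, SMILES, IC50 in micromolar).
actives = [
    ("hit-001", "O=C1NC(=S)SC1=Cc1ccccc1", 1.2),   # an ene-rhodanine
    ("hit-002", "CC(=O)Nc1ccc(O)cc1", 8.5),        # a bland comparator
]

annotated = []
for cid, smiles, ic50 in actives:
    match = catalog.GetFirstMatch(Chem.MolFromSmiles(smiles))
    annotated.append((cid, ic50, match.GetDescription() if match else ""))

# Flagged compounds sink to the bottom of the follow-up list but stay visible.
for cid, ic50, alert in sorted(annotated, key=lambda r: (bool(r[2]), r[1])):
    print(f"{cid}  IC50 = {ic50} uM  {('PAINS: ' + alert) if alert else 'no alert'}")
```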

  15. Lagorce David says:

    As a follow-up to these interesting discussions: it is clear that educated decisions need to be made in the prioritization and optimization of hit compounds, especially when it comes to promiscuity-associated substructures such as PAINS.
    Having in hand a proper tool to flag such chemical moieties, rather than just reject them, is therefore essential for gaining some knowledge and anticipating development failures later in the process.
    To this end, we have for several years been developing a dedicated prediction tool, and maintaining literature surveys on this matter, in collaboration with Dr. J. Baell.
    We provide a free online tool named FAFDrugs2 (http://mobyle.rpbs.univ-paris-diderot.fr/cgi-bin/portal.py?form=FAF-Drugs2#forms::FAF-Drugs2) which can either create filtered libraries or analyze a subset of hit compounds against various physicochemical and chemistry rules, including the essential PAINS concept.
    A dedicated website is available at http://fafdrugs2.mti.univ-paris-diderot.fr

Comments are closed.