
Academia (vs. Industry)

More Hot Air From Me on Screening

After yesterday’s post on pathway patents, I figured that I should talk about high-throughput screening in academia. I realize that there are some serious endeavors going on, some of them staffed by ex-industry people. So I don’t mean to come across as thinking that academic screening is useless, because it certainly isn’t.
What it probably is useless for is enabling a hugely broad patent application like the one Ariad licensed. But the problem with screening for such cases isn’t that the effort would come from academic researchers; industry couldn’t do it, either. Merck, Pfizer, GSK and Novartis working together probably couldn’t have sufficiently enabled that Ariad patent; it’s a monster.
It’s true that the compound collections available to all but the very largest academic efforts don’t compare in size to what’s out there in the drug companies. My point yesterday was that since we can screen those big collections and still come up empty against unusual new targets (again and again), smaller compound sets are probably at even more of a disadvantage. Chemical space is very, very large. The total number of tractable compounds ever made (so far) is still not a sufficiently large screening collection for some targets. That’s been an unpleasant lesson to learn, but I think that it’s the truth.
That said, I’m going to start sounding like the pointy-haired boss from Dilbert and say “Screen smarter, not harder”. I think that fragment-based approaches are one example of this. Much smaller collections can yield real starting points if you look at the hits in terms of ligand efficiency and let them lead you into new chemical spaces. I think that this is a better use of time, in many cases, than the diversity-oriented synthesis approach, which (as I understand it) tries to fill in those new spaces first and screen second. I don’t mind some of the DOS work, because some of it’s interesting chemistry, and hey, new molecules are new molecules. But we could all make new molecules for the rest of our lives and still not color in much of the map. Screening collections should be made interesting and diverse, but you have to do a cost/benefit analysis of your approach to that.
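To put a rough number on the ligand-efficiency idea, here’s a minimal sketch (Python, with made-up potencies and heavy-atom counts chosen purely for illustration) of the common approximation LE ≈ 1.37 × pIC50 / heavy-atom count, in kcal/mol per heavy atom at around room temperature:

```python
# Illustrative only: ranking hits by ligand efficiency rather than raw potency.
# LE ~ 1.37 * pIC50 / heavy-atom count (kcal/mol per heavy atom at ~300 K).
import math

RT_LN10 = 1.37  # kcal/mol at ~300 K (2.303 * R * T)

def ligand_efficiency(ic50_molar: float, heavy_atoms: int) -> float:
    """Approximate ligand efficiency in kcal/mol per heavy atom."""
    p_ic50 = -math.log10(ic50_molar)
    return RT_LN10 * p_ic50 / heavy_atoms

# Hypothetical hits: (name, IC50 in molar, heavy-atom count)
hits = [
    ("fragment_A", 250e-6, 13),  # weak but tiny
    ("hts_hit_B", 0.8e-6, 38),   # potent but large
]

for name, ic50, ha in hits:
    print(f"{name}: LE = {ligand_efficiency(ic50, ha):.2f} kcal/mol per heavy atom")
```

On numbers like these, the weak 250 µM fragment scores a better LE (about 0.38) than the 0.8 µM screening hit (about 0.22), which is exactly the sort of reranking that can lead you toward different starting points.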
I’m more than willing to be proven wrong about this, but I keep thinking that brute force is not going to be the answer to getting hits against the kinds of targets that we’re having to think about these days – enzyme classes that haven’t yielded anything yet, protein-protein interactions, protein-nucleic acid interactions, and other squirrely stuff. If the modelers can help with these things, then great (although as I understand it, they generally can have a rough time with the DNA and RNA targets). If the solution is to work up from fragments, cranking out the X-ray and NMR structural data as the molecules get larger, then that’s fine, too. And if it means that chemists just need to turn around and generate fast targeted libraries around the few real hits that emerge, a more selective use of brute force, then I have no problem with that, either. We’re going to need all the help we can get.

25 comments on “More Hot Air From Me on Screening”

  1. New scaffolds that are currently missing should be introduced into screening libraries. Similarity searching methods can shed light on which scaffolds those are (a toy fingerprint-similarity sketch follows).
    http://ashutoshchemist.blogspot.com/2009/06/anti-question-or-when-bias-can-be-good.html
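    As a rough illustration of that kind of gap analysis (my own sketch, not part of the comment; RDKit, the SMILES strings, and the 0.3 cutoff are all just assumptions), one could flag candidate scaffolds whose best Tanimoto similarity to the existing library is low:

    ```python
    # Toy "scaffold gap" check: a candidate with low maximum Tanimoto similarity
    # to everything already in the library is arguably missing from it.
    from rdkit import Chem, DataStructs
    from rdkit.Chem import AllChem

    library_smiles = ["c1ccccc1CC(=O)O", "CCN(CC)CCOC(=O)c1ccccc1", "O=C(N)c1ccncc1"]
    candidate_smiles = "c1ccc2[nH]ccc2c1"  # indole, as an example scaffold

    lib_fps = [AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(s), 2, nBits=2048)
               for s in library_smiles]
    cand_fp = AllChem.GetMorganFingerprintAsBitVect(Chem.MolFromSmiles(candidate_smiles), 2, nBits=2048)

    best = max(DataStructs.TanimotoSimilarity(cand_fp, fp) for fp in lib_fps)
    print(f"Best similarity to library: {best:.2f}")
    if best < 0.3:  # arbitrary cutoff for this illustration
        print("Scaffold looks under-represented; worth adding.")
    ```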

  2. HelicalZz says:

    Just to throw in a curveball, the consideration of pathway analysis and elucidation via screening and identification of inhibitors is itself a bit of an industry-centric mindset. One of the huge advantages of RNAi (the technology, not as a therapeutic) continues to be pathway analysis and elucidation. Knocking down protein products has advantages over knock-out technologies, especially with targets essential to development.
    So, pathway inhibition via chemical means is by no means a necessity for pathway elucidation or association with a disease indication. Don’t assume that screening is necessary to serve the interests of academic research or necessarily to obtain and demonstrate utility for a patent application.
    In other words, this is likely to be a bigger issue in the coming years.

  3. big btech says:

    Real trouble is biotechs and big pharma are continuing to look for new drugs where they’ve already been. That is, in the well-defined (and retrospective) space defined by the stultifying BS of Lipinski’s rules. As it turns out, continuing to look where you’ve been, that is, x # of heteroatoms, MW

  4. NewbieAlert says:

    I thought that, historically, most of our drugs came from natural product leads. Why have we abandoned this technique? (I ask out of ignorance, not out of agenda.)

  5. mad says:

    Here is another curve ball.
    Are we really screening what we think we are screening? What about all the problems with library storage and preserving the integrity of the compounds? DMSO storage turned out not to be the “file and forget” preservative it was treated as.
    How many targets were missed due to screening partially degraded compounds?

  6. Cellbio says:

    I’ve seen the output of some academic screens, and what strikes me is that the technical solutions are being made – libraries, liquid handling, etc. – but the judgement is lacking. After the screen is done, some (most?) of the academics take the compounds that sit on top of a potency ranking as “success”. So, when Derek says, ‘we can screen and come up empty’, that is because we apply reasonable judgement and have other data/insight about the compounds that allows us to trash the whole output, as opposed to filing patents and conducting biological research with mM concentrations of structurally complicated salts or detergents. Not so much the case in academia today.

  7. Lucifer says:

    Has this approach produced any path-breaking drugs? You know, the ones that have a therapeutic effect and are somewhat superior to the previously used drugs?
    The Devil is in the details…
    //It’s true that the compound collections available to all but the very largest academic efforts don’t compare in size to what’s out there in the drug companies.//

  8. JAB says:

    @4. I resemble that remark! Nat prods have been dropped in pharma partly due to the cost and timelines of resupply once one had a purified NP in hand. I would submit that great strides have been made in biosynthesis and total synthesis in the last several years, and that the worth of NPs ought to be on the rise. Note that Novartis still has an active NP program, but that Wyeth’s acquisition by Pfizer is likely to completely extinguish US pharma NP groups. I predict that boutique NP specialist companies could fill the gap if someone sees the opportunity.

  9. Anonymous in NC says:

    Comment 5 by mad begins to raise a major issue. Analyses of the sources of error in screening are few. The DMSO stability question is one, real quantitation in the face of solubility limits is another, and structure assignment and sample-picking errors help round out the first-round choices. While contemporary assay development uses statistics like the Z factor, there are other substantial sources of error. Couple this with a low frequency of hits and you have a major failure mode of HTS. What % of the hits present in a library are actually found in HTS or qHTS? Are there implications for the perceived higher success of focused (directed) libraries? Given all the requirements a molecule must meet to grow up to be a drug, is relevant diversity space really that big?
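    For what it’s worth, the Z’-factor mentioned there is easy to compute; here’s a minimal sketch (Python, with invented plate-control readings) of the Zhang/Chung/Oldenburg formula, which only captures control separation and says nothing about the other error sources listed above:

    ```python
    # Z'-factor (Zhang, Chung, Oldenburg, 1999): assay window quality based only
    # on positive/negative control separation on a plate.
    import statistics

    def z_prime(pos_controls, neg_controls):
        sd_p = statistics.stdev(pos_controls)
        sd_n = statistics.stdev(neg_controls)
        mu_p = statistics.mean(pos_controls)
        mu_n = statistics.mean(neg_controls)
        return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

    # Invented plate-control readings, purely for illustration
    pos = [95, 98, 102, 99, 101]
    neg = [10, 12, 9, 11, 13]
    print(f"Z' = {z_prime(pos, neg):.2f}")  # ~0.85 here; >0.5 is usually called acceptable
    ```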

  10. JB says:

    I help run one of the large academic screening centers. We’ve thought about some of the issues people raise here. We do a routine QC on every compound that enters screening, we’re generating new scaffolds that are more natural-product-like, and we have medicinal chemists who decide when something is garbage and not worth spending time on.

  11. Paul S says:

    Back in graduate school, I had an idea for a screening method that probably couldn’t have worked in the mid-1980s, but perhaps can today. It’s based on pattern-recognition databases applied to NMR (both 1H and 13C) data for compounds of interest.
    The method would essentially take a black-box approach to compound activity. That is, it wouldn’t assume any understanding of why a given compound is active, except to assume that the reason has something to do with conformational or structural effects that can be detected in NMR. It would begin by building a dataspace of NMR data on compounds of known activity at a particular receptor, perhaps including both the compounds themselves and the compounds bound to the active site(s). Then you’d turn an algorithm loose on your database and compare it with similar data for compounds of unknown activity.
    Pattern recognition has been applied to chemical data more and more often as time goes on – for example to link a mineral sample to a specific mineral deposit based on crystallographic and trace element analysis. I think it would be a fast method for initial screening of compounds of interest, at least screening out compounds of probable low activity prior to in vivo (or even in vitro) screening methods that are much more labor intensive.
    Do you know of any work in this area?
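    A very rough sketch of how that black-box idea might look today (my own illustration, not an existing tool; the binning scheme, the toy shift lists, and the nearest-neighbor choice are all assumptions): turn each compound’s 1H and 13C shift lists into a fixed-length histogram “fingerprint” and let a classifier judge spectral similarity to known actives.

    ```python
    # Toy NMR-fingerprint classifier: bin chemical shifts into feature vectors,
    # then guess activity of an unknown by nearest-neighbor spectral similarity.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def nmr_fingerprint(h_shifts, c_shifts, h_bins=64, c_bins=64):
        """Histogram 1H (0-12 ppm) and 13C (0-220 ppm) shifts into one feature vector."""
        h_hist, _ = np.histogram(h_shifts, bins=h_bins, range=(0, 12))
        c_hist, _ = np.histogram(c_shifts, bins=c_bins, range=(0, 220))
        return np.concatenate([h_hist, c_hist]).astype(float)

    # Hypothetical training data: (1H shifts, 13C shifts) and an active/inactive label
    training = [
        (([7.2, 7.3, 3.9, 2.1], [170.0, 128.5, 55.2, 21.0]), 1),
        (([1.1, 1.3, 2.5], [29.0, 22.1, 14.0]), 0),
    ]
    X = np.array([nmr_fingerprint(h, c) for (h, c), _ in training])
    y = np.array([label for _, label in training])

    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
    unknown = nmr_fingerprint([7.1, 7.4, 4.0], [171.2, 129.0, 54.8])
    print(clf.predict([unknown]))  # crude guess: spectrally closest to the active
    ```

    Whether spectral similarity actually tracks activity is of course the open question; in practice you’d want bound-state data and far more than two training compounds.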

  12. Sili says:

    Anything that calls for more crystallography is fine by me! Where do I sign up?
    Is there really no ‘screening’ in place in compound libraries? I wouldn’t have thought it would be that hard to do random sampling and check the NMR against the spectrum filed when the stuff was registered.
    Of course, once a ‘miss’ is found, someone would have to look at what the problem is – I can see how that would be a bottleneck.

  13. Ty says:

    Academic efforts in the drug discovery area are understandably rudimentary at this point. But, after many lessons learned the hard way, it’s improving, I guess, and it will, thanks in part to the influx of many of the seasoned drug hunters who lost industry jobs in recent years. Having said that, from my observation, the real problem lies in the heads of the PIs. More often than not, their goal is not to make a breakthrough medicine but to publish and get a grant renewed. They are mostly ignorant of, and try to bypass, the toughest parts of drug discovery – target ID, PK and exposure, tox issues, etc.
    The combichem frenzy generated a huge amount of trash which is still negatively impacting many screening efforts and their results, esp. in academia, where the primary collection of small molecules tends to be cheap commercial libraries. I am afraid that the immature drug discovery drive in academia might generate quite a lot of ‘expensive’ trash in the years to come. Elesclomol, anyone?
    Regarding natural products, really, how many drugs are derived from NPs in the sense of NPs substituting for a compound collection in hit finding? In this context, we should not count the mimics of physiological ligands such as monoamines, peptoids, steroids, etc. Other than the infectious disease and cancer areas, where nature has had to generate a rich pool of bug-thwarting and cytotoxic agents, I don’t really see too many drugs that were inspired by natural products. Inside the industry, yes, NPs are underestimated for many reasons, but outside of it their mystique is kinda overblown, I think.

  14. cyclcc says:

    Is the point not so much to completely cover chemical space, but rather to start accessing new chemical space, and in this way have a chance of accessing new biological space? You could argue that pharma has looked mostly at the same targets (GPCRs) with the same chemistry, and any time new biology targets have been investigated it’s the same “old” chemistry that’s used to access them. When this doesn’t work, the target is called “undruggable”… New chemistry will be needed for the targets of the future if pharma is to survive.

  15. hibob says:

    @13
    “More often than not, their goal is not to make a breakthrough medicine but to publish and get a grant renewed. They are mostly ignorant of, and try to bypass, the toughest parts of drug discovery – target ID, PK and exposure, tox issues, etc.”
    I think it’s best when PIs don’t try to make a breakthrough medicine – they’re much better off trying to identify a pathway and finding compounds that work as a proof of principle rather than trying to think twelve steps ahead to a real drug. If they think they’re on to something, by all means start a company or license it, but telling all their grad students they will be dropping their projects to become a pipeline (and no publishing of the results for a coupla years, sorry) wouldn’t work very well. So yeah, they should work on finding the target and the big ugly tarballs that hit it, and publish.

  16. Norepi says:

    Of course, another problem everyone has in screening is false positives/false negatives, especially if everything isn’t automated, doubly so if the assay is cell-based. One of our cellular assays has two phases, an initial high-concentration pass and then more detailed screening if the compounds pass a certain “potency threshold” – we get compounds all the time that look fantastic initially, and then maybe shape up to be 10 uM at best. What happened, did they decompose? And the variability: one compound with N carbons causes the cells to grow, and N+1 kills everything in sight… How many potential drugs have we chucked out on account of this sort of business?
    Derek, I find it interesting that you mention the difficulties in modeling DNA/RNA. I’m not terrifically experienced, but as someone who has done this, yes, it is a pain, at least when you’re working with intercalating compounds, because a) some programs, especially newer ones, just aren’t parameterized correctly or completely for DNA, and b) I think programs have a hard time dealing with the higher-order quantum effects involved (pi-stacking, hyperconjugation), and trying to sort this out ab initio is just time-consuming. So the end result is usually lousy inhibitors docking the same as good ones; it’s not predictive at best and downright wrong at worst.

  17. LeeH says:

    A few comments:
    Using the Lipinski rules is just a case of not repeating history. Chris Lipinski didn’t really invent these rules – the human body did. The probability of having a successful drug is greatly reduced if you fall outside these ranges, because you have a high probability of violating some PK or physicochemical limitation. Conversely, being inside the ranges doesn’t mean you have a drug. (A toy rule-of-five check is sketched after this comment.)
    Concerning actually having what you think you have in a compound collection, perhaps I was just lucky, but where I used to work we were almost never blindsided by the identity of a compound after retesting HTS hits. It was almost always what we thought it was, with the exception of an instance where compounds from a particular vendor were uniformly incorrect. Of course, I can’t vouch for the hundreds of hits that we didn’t follow up on, but I suspect that it wasn’t that we just did everything right, but that by and large these days most collections are fairly clean.
    On the natural products issue, it’s really a religious discussion rather than a technical one. On the one hand, NPs can give you very novel shapes, and have clearly been a major historical source of drugs. On the other hand, how many companies want to get into the trench warfare of fixing ADME/PK issues on a compound with multiple chiral centers and synthetically equivalent functional groups (DOS notwithstanding)?
    Regarding the vastness of chemical space, yes, it’s vast, but I think the bigger issue now is finding compounds that are specific rather than those that are active. We try to find specific kinase inhibitors using small molecules that bind to almost identical binding sites. The body controls this specificity using the interaction of proteins where the action occurs well outside the binding site. It’s a miracle we have any kinase-based drugs at all.
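    Since the Lipinski rules come up in comments 3 and 17, here is a minimal sketch of the rule-of-five cutoffs used as a filter (Python with RDKit; the example SMILES and the hard cutoffs are just assumptions for illustration, and, as LeeH says, passing the filter proves nothing about having a drug):

    ```python
    # Toy rule-of-five filter: MW <= 500, logP <= 5, <= 5 H-bond donors, <= 10 acceptors.
    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def passes_rule_of_five(smiles: str) -> bool:
        mol = Chem.MolFromSmiles(smiles)
        return (Descriptors.MolWt(mol) <= 500
                and Descriptors.MolLogP(mol) <= 5
                and Lipinski.NumHDonors(mol) <= 5
                and Lipinski.NumHAcceptors(mol) <= 10)

    print(passes_rule_of_five("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True
    ```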

  18. Lucifer says:

    Anti-microbial drugs?
    Anti-microbial drugs, vaccines and sanitation have increased life expectancy more than all other medical advances combined.
    //Using the Lipinski rules is just a case of not repeating history. Chris Lipinski didn’t really invent these rules – the human body did. The probability of having a successful drug is greatly reduced if you fall outside these ranges, because you have a high probability of violating some PK or physicochemical limitation. Conversely, being inside the ranges doesn’t mean you have a drug.//

  20. seenthelight says:

    Of course, synthesizing compounds of the complexity found in natural products is the place where new drugs will be found. The problem is that pharma would rather spend its money on TV ads, private jets, and executive compensation than on long, drawn-out synthetic projects that may or may not be of value. It will never happen. The real issue at hand is that “expensive” medical care (drugs, specialists, procedures) is going to go the way of the dinosaur; it’s impossible for it to continue.
    PS
    99.9999% of the compound libraries are nothing but junk: easy-to-make, combichem-derived crap. A few “degradation products” in the mix doesn’t preclude finding a hit.

  21. NP_chemist says:

    Well, #13 seems to have forgotten the best-selling drug of all time, atorvastatin, which is effectively the warhead from the original compactin with “different grease”. The first demonstration that this type of substitution could be successful was in fact in a paper from Merck, well before Warner-Lambert chemists “invented” atorvastatin.

  22. drug_hunter says:

    (1) My estimate is 10^24 – 10^30 plausible organic compounds, including variants on all known natural product scaffolds. Perhaps 1 out of every 10^6 of these will have reasonable drug-like properties. Meanwhile the largest screening libraries are, what, 10^7? (A back-of-the-envelope version of this arithmetic is sketched below.)
    (2) With new disease biology – e.g. transcriptional regulation, disruption of protein-protein interactions, and so forth – we will need far more chemical diversity to make potent and selective drugs.
    (3) Conclusion: there’s still a lot of room for improvement in library construction and screening, both synthetically and computationally.
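    Taking drug_hunter’s numbers at face value (they are estimates, and only the low end is used here, purely for illustration), the coverage gap is easy to see:

    ```python
    # Back-of-the-envelope check of the numbers in comment 22 (illustration only).
    plausible_compounds = 1e24      # low end of the 1e24-1e30 estimate
    drug_like_fraction = 1e-6       # "1 out of every 10^6"
    library_size = 1e7              # a very large screening library

    drug_like_space = plausible_compounds * drug_like_fraction  # ~1e18 compounds
    coverage = library_size / drug_like_space
    print(f"Fraction of drug-like space screened: {coverage:.0e}")  # ~1e-11
    ```

    Even with the generous low-end assumptions, a ten-million-compound deck samples roughly one in every hundred billion drug-like possibilities.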

  23. srp says:

    It is striking that everyone here is totally on board with trying to design drugs “rationally” by fitting molecules to specific receptors on known pathways. This commitment is so strong that alternatives are not even considered.
    Yet we know (from many of Derek’s posts, no less) that the actual mechanisms of actual working drugs often (usually?) depart significantly from the original theory by which they were developed and approved. In addition, most of the big successful drugs of the past (aspirin?) were not developed by targeting a single pathway and indeed operate in complex ways on multiple pathways.
    Until the state of the art in biology and computer modeling gets way more advanced, I find it hard to believe that “rational” interventions into such complex systems are likely to have a high success rate. My impression is that the outcome of evolution is biological systems with all kinds of messy feedbacks and feedforwards that operate in variable ways depending on environmental conditions. Treating these systems like machines designed by engineers which (sometimes) give simple responses to simple interventions strikes me as dogmatic and unrealistic, but there seem to be institutional (e.g. the FDA) and cultural (e.g. individual education and experience) factors that mandate that approach today.

  24. Jose says:

    Interesting to note that all the big pharmas have dropped their NP divisions, and all the small biotechs focused on NP platform development are now defunct.

  25. transmetallator says:

    At a “biology heavy” institution we have a large HTS program focused on screens for new interesting biology. The compound library is supplemented by compounds from several chemistry labs doing tot. syn. work and isolation fractions from a natural product group. Guess where a ton of hits come from? The natty P mixtures have very high hit rates and even if they have no idea what they are doing, validation then gives new compounds that can be optimized. Does industry do this? If not, why not?
