Drug Assays

The Latest on Encoded Screening Libraries

I wanted to mention an upcoming meeting, for people in the Boston/Cambridge area, on DNA-encoded library technologies – Friday, November 6, at AstraZeneca’s site in Waltham. I’ll be helping moderate a panel discussion at the end of the meeting, but there are speakers during the day from Harvard (David Liu) as well as AstraZeneca, GSK, Roche, Ensemble, Vipergen, NuEvolution, X-Chem and the ETH – pretty much all the big players in this area. I’ve long been interested in the possibilities of screening Carl Sagan-esque billions-and-billions of compounds, and I look forward to seeing what the state of the art is like.

There have been several blog posts here about this field, but for background, the Wikipedia article is quite thorough. (Wikipedia, as an aside, is in my opinion one of the great modern works of the human race). Here’s an open-access Accounts of Chemical Research article on the field, from Dario Neri (who’ll be speaking at the conference), and here’s a recent look at macrocycle libraries prepared in this way (from Ensemble, and Stephen Hale, a co-author on this paper, will be speaking as well). And this recent issue of Current Opinion in Chemical Biology is a themed one around various sorts of encoded libraries, with several interesting papers.

14 comments on “The Latest on Encoded Screening Libraries”

  1. Molecular Architect says:

    I haven’t been keeping up on this technology as I was always skeptical about screening small molecules attached to a large chunk of DNA. However, I just saw a great presentation on the identification of phosphatase inhibitors with this technology by a couple of scientists from GSK. An impressive and very compelling piece of work. Wish I could be there to see what others are doing.

  2. James Woods says:

    Will the meeting be recorded for those of us working on Friday?

  3. Daen de Leon says:

    Be warned that the Wikipedia article is a bit skewed towards Vipergen’s YoctoReactor tech …

  4. anon3 says:

    Should be an interesting meeting!

  5. milkshake says:

    I have my doubts: I was with Selectide, which pioneered the one-bead-one-compound approach and its encoding in the early 90s, and which tried both on-bead screening and single-bead-release screening. There are huge problems in terms of synthetic feasibility and assay reproducibility, and frankly the whole encoding business is very unwieldy. Also, Selectide, which still exists as a Sanofi site, soon abandoned its core combichem platform completely in favor of traditional parallel synthesis on solid-phase support: from speaking with their chemists, I believe they now make small collections of compounds that are individually synthesized, purified, and thoroughly characterized before screening.

  6. Cameron says:

    I can appreciate the skepticism about encoded screening libraries, but I think looking at what’s going on in big pharma shows that it’s becoming a routinely used hit ID platform at this point. Back in 2006 there were only a few academic and biotech players – NuEvolution, Praecis, Ensemble, PhiloChem, etc. GlaxoSmithKline acquired Praecis in 2006/2007 and became the first big pharma to use this platform. Looking at it now, most big pharmas are in the game: AstraZeneca, Pfizer, Bayer, Johnson & Johnson and Janssen with X-Chem; Novartis and Merck with NuEvolution; Roche (now internal); etc.

    1. Daen de Leon says:

      Nuevolution just announced a deal with Janssen, too.

  7. exGlaxoid says:

    Having worked at GSK and seen tagging used across three different technologies, I can say without hesitation that I never saw a drug or even a good lead come from all of the many libraries of tagged compounds screened at enormous expense. There was an internal group that spent $100+ million on arrays; only a few were ever screened, and none had anything worthy of following up. Then they bought Affymax, spent hundreds of millions on that, and again only a few libraries were screened, with few real hits. I did, however, do some quality control on some of those, and most showed LC-MS peaks for only a few of the compounds they were alleged to contain, along with many others that were not supposed to be present. For comparison, the waste from the red can scored better…

    And Praecis was just one more waste of money, as best I can tell, which was part of why real R & D has now been cut to the bone. Simple calculations show that it is hard to make billions of compounds in equal amounts in that small a volume, plus almost every library that I saw from there was triazine-based, which we already had several of from the previous technologies. You could just mix trichlorotriazine with a random mix of amines and do just as well.
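    The scale argument here can be made concrete with a back-of-the-envelope calculation. All numbers below are illustrative assumptions (a hypothetical 3-cycle split-and-pool library screened at a typical sub-micromolar concentration), not figures from Praecis or GSK:

```python
# Back-of-the-envelope: how many copies of each library member are present
# in a small screening volume, assuming perfectly even synthesis.
# Illustrative assumptions: 3 synthesis cycles, 1000 building blocks per
# cycle, 1 uM total library concentration, 1 mL screening volume.

AVOGADRO = 6.022e23  # molecules per mole

building_blocks_per_cycle = 1000
cycles = 3
library_size = building_blocks_per_cycle ** cycles  # 10^9 distinct members

total_conc_molar = 1e-6  # 1 uM total library concentration
volume_liters = 1e-3     # 1 mL

total_molecules = total_conc_molar * volume_liters * AVOGADRO
copies_per_member = total_molecules / library_size  # best case: even split

print(f"library size:      {library_size:.0e}")
print(f"total molecules:   {total_molecules:.1e}")
print(f"copies per member: {copies_per_member:.1e}")
```

    Even under the ideal assumption of a perfectly even split, each of the billion members is present at only ~6 × 10^5 copies (roughly an attomole); any synthesis bias or incomplete coupling pushes the rarest members far lower, which is the feasibility concern being raised.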

    Sadly, those and other large bead-based libraries took resources away from the few really good parallel-synthesis chemistry groups at Stevenage, Collegeville, and RTP, most or all of whom are gone now. They found simple ways to scale up solution-phase chemistry, using scavengers, simple automation (every complex robot-based synthesis machine was a huge bust in my experience), and good chemistry planning, to make hundreds of thousands of pure (>90% by NMR and LC-MS) solids in vials for screening and even lead optimization. But because it was not sexy, photogenic, or making billions of compounds, R & D management ignored it as not fancy enough. The group I worked with averaged 1000 discrete compounds per chemist per year, which is about 5 to 10 times the productivity of most chemists.

    I will also acknowledge that many areas of chemistry, like nucleotides, complex chiral compounds, and sugars, are not as amenable to parallel synthesis, another thing that many managers did not understand: they treated combichem and arrays as a hammer and every problem as a nail. There are certainly many problems for which parallel synthesis or libraries are not the answer.

  8. again says:

    “And Praecis was just one more waste of money, as best I can tell, which was part of why real R & D has now been cut to the bone.”

    There is no way that the above is even remotely true, so why even write it? Even the simplest dollar accounting shows the statement is absurd.

    Second, there’s a laundry list of recent papers from GSK showing a range of hits generated by the Praecis-derived platform, most of which are not triazine-based, and many of which seem perfectly drug-like. So why keep talking about triazines? The papers are published; just read them.

    Third: GSK claims that a Praecis triazine hit is in the clinic. If so, then this hit did not come from HTS. That is that. There is no “it could have come from HTS, if the compound had been in the collection, and if the screen had been run, and if it had been followed up.” That’s crazy talk. It is or it isn’t, and it wasn’t. Deal with reality.

  9. debug22 says:

    I’m at GSK and have followed the evolution of Encoded Library Technology since 2007, though not directly in that group. IMO exGlaxoid’s comment may have been true at one time, but over time the libraries, the sequencing, and the informatics got better, to the point that it’s a real hit ID platform almost on par with HTS. And yes, we have a handful of clinical candidates from this technology, some of which are not triazines.

  10. anony-mous(e) says:

    @debug22: That MUST explain why the GSK pipeline is SO ROBUST 🙁

  11. Jasper says:

    Can the general public attend this meeting of superstars?

  12. again says:


    I think what @debug22 is saying is that your GSK pipeline would be even WORSE if not for your encoded library technology program. My guess, and it’s just a guess, is that GSK’s biggest problem is that it employs a bunch of angry chemists who have no faith or trust in each other, or in its own management.

Comments are closed.