Ignoring the Literature, Selectively

Very little time for blogging today (travel), but I wanted to pass on some words of wisdom from Kevan Shokat, from a recent Perspectives piece in Nature Reviews Cancer. Talking about chemical probes, and how to know if they’re valid enough to work with, he suggests that you need to see dose-response data (for one thing), and he’s much, much happier when there’s a crystal structure of the proposed probe with the target protein (hey, who isn’t?). But then there’s this point:

The third criteria is proof that the first drug can be modified and its biochemical and cellular activity improved in a manner consistent with the structural model. Note that I do not include a requirement that the molecule work in an animal model. In my opinion, too many first reports describe animal efficacy data, long before the first three criteria are established, leading to false-positive proof of target inhibition. Something to look for if the report was published more than a year ago, is whether a follow-up study has appeared showing an improved version of the molecule and further proof of target engagement. If nothing appears after several years in the peer-reviewed literature, bioRxiv, or published patent applications, you can bet the molecule was an artefact and the target remains undrugged.

I endorse both of those – the too-fast animal model problem and the lack-of-follow-up criterion. When the answer to “Whatever happened to…?” is “Nothing, apparently”, that’s a bad sign. The only mitigating factor might be if the report came from an obscure source or was published in an obscure journal. That gives you a possible out, in that other people may not have noticed it, but that could also be offset by the possibility that the group reporting it may not have had the resources or experience to do adequate characterization themselves.

This is not a mandate to ignore the literature wholesale. If you do that, you’ll end up stuck pretty quickly in this field. Richard Feynman was famous among colleagues, when he moved into a field of research, for deliberately not reading the literature and trying to work his way up from first principles. That, though (as one of those colleagues remarked in James Gleick’s biography) only worked if you were as smart as Feynman. And at any rate, it doesn’t work at all in biology, where there are no first principles, at least by the standards of physics. No, we’re stuck with the literature, and we have to keep up with it, but we also have to remember that a reasonable percentage of it is wrong, and be prepared to ignore parts of it as needed, and with cause. If you try the opposite, and decide that every report you read is completely correct, you will end up stuck as firmly in the mire as if you don’t read the literature at all. It ain’t easy.

18 comments on “Ignoring the Literature, Selectively”

  1. Mad Chemist says:

    I’m glad I do synthetic chemistry, where it’s easier to verify whether or not something is working. I’ve still had several procedures I’ve attempted that didn’t work anything like they were supposed to, unfortunately.

  2. bhip says:

I would add counter-screening data as a requirement for chemical probes. It’s easier to obtain than crystal structure data but arguably more relevant. A chemical probe which binds to its target and has SAR data in cell culture is useless if it also hits umpteen (or one) other target(s)…

  3. road says:

One point ignored here is selectivity. Many of the worst offenders do, indeed, hit their intended targets, but they hit dozens (or hundreds) of related homologues, too, confounding interpretation of results and making them less than useful for target validation. Initial probe reports for targets that are members of larger families that do not describe within-family selectivity are to be viewed with great suspicion.

  4. Chris says:

Slightly off-topic, but my impression is that the system doesn’t reward thorough studies with much beyond a pat on the back and the occasional verbal admiration of your publications. I’ve had colleagues get oncology target papers rejected because they’d only run two animal efficacy models instead of the 3-4 that the journal required. That has to be a sign of a system that encourages the wrong questions. If we want those thorough studies to become the norm, we have to change the way publishers review papers.

  5. Mike P says:

    “If nothing appears after several years in the peer-reviewed literature, bioRxiv, or published patent applications, you can bet the molecule was an artefact and the target remains undrugged.”

    I’m not sure that is a safe bet. If something is successful in its earlier stages, it is not at all uncommon for the follow up to become more proprietary and remain unpublished during a commercialization effort – and it may be years before patent applications become public. In the pharmaceutical industry, failed projects end up in the literature faster than successful ones.

    1. sean says:

Something happened in the 80s. We went from patents on likely candidates to patents on every candidate, claiming the widest possible structural features that the lawyers thought would stick.

      1. Kyle MacDonald says:

        Question from someone who doesn’t know much. Mike P seems to be saying that patent applications will happen several years into the development process, whereas Sean seems to be saying that lawyers will try to patent as much as possible without waiting to see if the structure is actually useful. Is there something I’m missing here?

        1. Mike P says:

As Sean says, in the pharmaceutical industry almost everything these days ends up in patent applications. Publishing in the scientific literature is very often delayed until after patent applications become public, which can be years after the first experiments were done. That delay in publishing tends to be shorter for unsuccessful projects (those that are terminated with no commercial potential) because the business reasons for withholding the results are weaker.

          Similarly, work done in academic labs that looks promising tends to be transferred to some type of commercial venture where proprietary concerns can delay public release of results.

          1. Kyle MacDonald says:

            Ah, okay. So someone, in either an academic or a commercial lab, produces a compound. If it’s an academic lab, it likely goes to a commercial lab, with some delay in the process. Once it’s in a commercial lab, wherever it came from, it probably goes to a patent application pretty quickly, but there’s a second delay before that application becomes public. Then there’s a third delay after that before it’s likely to end up in the scientific literature. Thanks for the clarification!

    2. Diazepine says:

      “In the pharmaceutical industry, failed projects end up in the literature faster than successful ones.” – totally on target.

      Generally, there’s a patent position (filed) before journal publication(s) and a company’s threshold for publication is much lower if an asset has tanked. As already pointed out, the patent application also provides a hedge in case there’s alternative indication(s) downstream. If a clinical asset is in play, the last thing you want to do is potentially encumber it with a tangential publication. That said, it would be nice to see more (transparent) publications/details regarding clinical failures – while that may be good for science, that’s usually considered bad for bizness.

      1. dstar says:

        Hmm. So does this mean that if there’s an article by someone not in academia, and there’s _not_ a related patent, the described drug is probably useless?

        That would seem to be the implication.

        1. Design Monkey says:

It’s not a drug then. It would be highly unlikely that somebody would go through the (highly expensive and complicated) drug registration process without a patent. Theoretically it might happen in very rare government- or charity-sponsored cases, but I’m not aware of any such case in reality.

    3. Anon says:

My first 10 years in pharma, I worked on 7 projects that never advanced into the clinic. My officemate, who started at the same time as me, worked on one that was fantastically successful. He got tons of accolades within the company, but when we both got laid off, I had a bunch of publications to put on my resume and he had only one.

      So, yeah, failed projects end up in the literature faster than successful ones.

  6. Barry says:

    Leo Szilard was asked about his shift from nuclear physics to (what we would now call) Molecular Biology. He replied that when he was doing physics, he could set himself a problem, draw a hot bath, and work it through from first principles before the water got cold. But in his new field, he would set himself a problem, get into the hot bath, then pretty soon have to get out to look up some inconvenient fact. “Biology ruined bath-time”.

  7. Kris says:

As someone in an academic setting that does ligand discovery, I can tell you that the quick and dirty animal studies are done so that the work can be published in J. Med. Chem. or something similar. It looks great on your grant update/renewal to have “in vivo data”. It is an easy and foolproof way to satisfy the “publish or perish” rule.

    1. chemkev says:

I agree. I don’t think academic labs do “quick and dirty” in vivo/cellular studies with nefarious intentions. I think it’s just what is required for J. Med. Chem. and similar journals. Also, it’s not a bad starting point for continuation of the work on both the chemistry and biology sides. I think it’s often over-interpretation by the reader that leads to problems.

  8. Joe Q. says:

    I would say the same is true (probably more so) for data in the patent literature — it’s good (and important) to have a look at what’s appeared in patents, but much of it is probably either wrong or inconsequential.
