
How to Know When a New Target is Really a New Target

This is an excellent article, and the title is self-recommending: “Common Pitfalls in Preclinical Cancer Target Validation”. The abstract speaketh the truth:

An alarming number of papers from laboratories nominating new cancer drug targets contain findings that cannot be reproduced by others or are simply not robust enough to justify drug discovery efforts. This problem probably has many causes, including an underappreciation of the danger of being misled by off-target effects when using pharmacological or genetic perturbants in complex biological assays. This danger is particularly acute when, as is often the case in cancer pharmacology, the biological phenotype being measured is a ‘down’ readout (such as decreased proliferation, decreased viability or decreased tumour growth) that could simply reflect a nonspecific loss of cellular fitness. These problems are compounded by multiple hypothesis testing, such as when candidate targets emerge from high-throughput screens that interrogate multiple targets in parallel, and by a publication and promotion system that preferentially rewards positive findings.

Yes, yes, yes, and yes indeed. Anyone working in the area should be ready to shout “Preach, brother!”, and if any of this comes as news, or if it seems overblown, then this is a great opportunity to get more familiar with the problems the article is talking about. To start with, an important distinction is made between reproducibility and robustness. These often get mixed together in discussions of problems with the scientific literature, but we’re (mostly) dealing with the latter. To be technical about this technical subject, a flat-out reproducibility problem means that when every bit of the experiment is done exactly the same way, the result still doesn’t come out as reported. This can be due to fraud, unfortunately, or it could be that the original result was some random-chance thing that just doesn’t repeat. Robustness, on the other hand, means that the experiment will indeed work reproducibly, but only if everything is done right – and by “everything”, one includes variables that even the original authors may not have been aware of, as well as the ones that they kind of knew about but didn’t bother to actually put into the experimental section.

A robust result can probably be reproduced even if you switch to a different buffer, or if your cell lines have been passaged a different number of times, or if the concentration of the test molecule is a bit off, etc. The more persnickety and local the conditions have to be, the less robust your result is, and in general (sad to say) the lower the odds of it having a real-world impact in drug discovery. There are certainly important things that can only be demonstrated under very precise conditions, don’t get me wrong – but when you’re expecting umpteen thousand patients to take your drug candidate and show real effects, your underlying hypothesis needs to be able to take a good kicking and still come through.
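And while we’re on the abstract’s point about multiple hypothesis testing: the arithmetic alone is sobering. Here’s a minimal sketch (Python; the screen size and significance cutoff are my own illustrative assumptions, not numbers from the paper):

    # Expected false positives from a parallel screen even when NO target is real.
    # Illustrative assumptions: 1,000 targets screened, p < 0.05 cutoff per target.
    n_targets = 1000
    alpha = 0.05  # per-target false-positive rate

    expected_false_hits = n_targets * alpha
    # Probability that at least one "hit" shows up by chance alone:
    p_any_false_hit = 1 - (1 - alpha) ** n_targets

    print(f"Expected false positives: {expected_false_hits:.0f}")  # ~50
    print(f"P(at least one false hit): {p_any_false_hit:.6f}")     # ~1.000000

Fifty nominations from pure noise is why a single screening hit, however pretty its p-value, is a hypothesis and not a validated target.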

The paper also makes some solid philosophical points about how we should be thinking about correlation and causation. For example:

Clear thinking about causation versus correlation is aided by using words that have precise meanings rather than vague terms, such as ‘linked to’ and ‘associated with’, which often create ambiguity (intentionally or not) about whether two things are causally related to one another. Two words that are particularly useful in describing causal relationships are necessity and sufficiency. . .The statement ‘A is necessary for B’ means that if A is not true, then B cannot be true. The statement ‘A is sufficient for B’ implies that if A is true, then B will be true.

Failure to distinguish between necessity and sufficiency can lead to illogical conclusions. For example, when BRAF mutations were first detected in malignant melanoma, I heard it argued by some participants at scientific advisory board meetings that mutant BRAF would not be a good drug target because BRAF mutations are also present in benign naevi. However, the latter observation indicated only that BRAF mutations are not sufficient to cause malignant melanoma. The more important question from a therapeutic perspective is whether BRAF activity is necessary for the maintenance of BRAF-mutant melanomas, which has now been answered affirmatively.
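One way to keep those two definitions straight is to encode them literally. A minimal sketch (Python; the observations are a toy version of the BRAF/naevi example above, not real data):

    # "A is sufficient for B": whenever A holds, B holds (A -> B).
    def sufficient(pairs):
        return all(b for a, b in pairs if a)

    # "A is necessary for B": B cannot hold without A (B -> A).
    def necessary(pairs):
        return all(a for a, b in pairs if b)

    # Toy (braf_mutant, melanoma) observations echoing the quote above:
    obs = [
        (True, False),   # BRAF mutation in a benign naevus, no melanoma
        (True, True),    # a BRAF-mutant melanoma
        (False, False),  # neither
    ]

    print(sufficient(obs))  # False: the benign naevus alone disproves sufficiency
    print(necessary(obs))   # True within this toy set (real melanoma is not so tidy)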

“Necessary but not sufficient” is a very common state of affairs in biology, and anyone working in this field needs to have a well-developed sense of it. That goes not only for complex disease states, but also for assay conditions and even for compound SAR trends. We have a lot of multifactor effects in our business, and no shortage of chicken-and-egg questions, and thinking about them as clearly as possible is essential. Here’s another example:

There are many examples in cancer biology of molecular changes that correlate with increased aggressiveness of cancer without necessarily causing the aggressive behaviour. For example, intratumoural hypoxia and the resulting upregulation of the transcription factor hypoxia-inducible factor (HIF) in tumours almost invariably correlates with poor outcomes in patients. . .This could signify that hypoxia and HIF cause some tumours to become more aggressive. Alternatively, it could simply reflect the fact that aggressive tumours outgrow their blood supplies, become hypoxic and therefore upregulate HIF.
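That’s the classic confounder pattern, and it’s easy to show how the correlation arises with no causal arrow at all. A minimal simulation (Python with numpy; every parameter is invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Latent "aggressiveness" drives BOTH hypoxia/HIF and poor outcome;
    # in this toy model HIF itself has zero causal effect on outcome.
    aggressiveness = rng.normal(size=n)
    hif = aggressiveness + rng.normal(scale=0.5, size=n)           # outgrown blood supply
    poor_outcome = aggressiveness + rng.normal(scale=0.5, size=n)

    r = np.corrcoef(hif, poor_outcome)[0, 1]
    print(f"HIF vs. outcome correlation: {r:.2f}")  # ~0.8, with no causal link at all

A correlation of 0.8 with zero causation: exactly the trap the paper is warning about.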

The paper goes on to address the issues mentioned in the abstract (such as problems with “down” phenotype readouts and the particular importance of negative controls), and also has an excellent section on rescue experiments. A powerful piece of evidence that you’re onto a real target is when you can show that variants of your target protein that are resistant to your drug candidate actually confer that resistance on cell lines. Similarly, you can find some resistant cell lines and sequence them to find out what changes are conferring the resistance (which can not only validate your target, but tell you a lot about related biology). If you don’t have such experiments, your case is weaker. If you can’t seem to get them to work, your case may be much weaker. There are still valid reasons why such things might not work out, as the paper details, but you need to consider those explicitly.
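The logic of that drug-resistant-allele experiment can almost be written down as a truth table. A sketch of the interpretation (Python; purely illustrative, and subject to all the caveats the paper lists):

    def interpret_rescue(parental_killed, resistant_allele_killed):
        # Compare drug response in parental cells vs. cells expressing
        # a drug-resistant variant of the putative target.
        if parental_killed and not resistant_allele_killed:
            return "On-target: the resistant allele confers resistance (strong validation)"
        if parental_killed and resistant_allele_killed:
            return "Off-target (or the allele is not truly resistant)"
        return "No phenotype to interpret: check compound activity and exposure"

    print(interpret_rescue(parental_killed=True, resistant_allele_killed=False))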

I found this section particularly relevant, as will, I think, anyone who reads the literature in this area:

There has been a trend, especially in papers in high-profile journals, towards making far-reaching claims in an attempt to paint a seemingly complete picture that incorporates both new mechanistic insights and the physiological or clinical relevance of a given set of findings. The field would be better served if papers claimed less, but provided more lines of corroborating evidence in support of their conclusions. Describing a properly controlled and complementary set of target validation experiments can easily constitute an entire manuscript. It should not be an afterthought relegated to the last figure of a manuscript, in a gratuitous attempt to achieve in vivo or clinical relevance.

Unfortunately, the reward system we have in place encourages just that sort of behavior, and it’s not going to be easy to change it. We get what we subsidize – one pretty much always does – although it might not be what we thought we were asking for.

Finally, it should be noted that this paper isn’t just a list of reasons why your new cancer target isn’t so great. It also provides some reasons to keep going in the face of a common objection:

For example, the proteasome inhibitor bortezomib has anti-myeloma activity at a concentration that causes a 50–80% decrease in proteasome activity, but is toxic at higher concentrations that more completely block proteasome activity. This last observation underscores the fact that the issue in cancer therapeutics is not whether a target is important or essential in normal cells, but whether the target is more important in cancer cells than it is in normal cells. The degree to which there is a differential requirement for the target in cancer cells relative to normal cells is the biological determinant of the therapeutic window for inhibition of the target. Even in hindsight, it is not clear why most approved cancer drugs, including the above-mentioned imatinib mesylate and bortezomib, have therapeutic windows. This question is even more perplexing for many classic cytotoxic agents, including DNA-alkylating agents and microtubule poisons.
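To put a number on “more important in cancer cells than in normal cells”: with a simple one-site binding model, the therapeutic window is the gap between the exposure that crosses the cancer cells’ requirement and the one that crosses the normal cells’. A sketch (Python; every parameter is invented for illustration, not actual bortezomib data):

    # Fractional target inhibition at concentration c, simple one-site model.
    def inhibition(c, ic50):
        return c / (c + ic50)

    ic50 = 1.0  # arbitrary concentration units

    # Suppose (illustratively) tumor cells die above ~60% target inhibition
    # while normal cells tolerate up to ~90%. Invert the model for each:
    c_efficacy = ic50 * 0.60 / (1 - 0.60)  # concentration giving 60% inhibition -> 1.5
    c_toxicity = ic50 * 0.90 / (1 - 0.90)  # concentration giving 90% inhibition -> 9.0
    assert abs(inhibition(c_efficacy, ic50) - 0.60) < 1e-9  # sanity check

    print(f"Therapeutic window: {c_toxicity / c_efficacy:.1f}-fold")  # 6.0-fold

In this toy model the window is six-fold; the paper’s deeper point is that nobody really understands why such differential requirements exist at all.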

That’s a very good point, and well worth remembering. We have to try to keep from working on things that aren’t real, and there’s no shortage of those, but we also have to give the real things every chance to work. This is particularly important as you get more experienced in any scientific field. I’ve said many times that I don’t want to be that guy in the back of the conference room who’s always saying “That’s not gonna work”. I mean, sure, 95% of the time, it really isn’t gonna work; that’s how this business goes. But what good does that do anyone, then? I’d much rather be the person who perks up when something comes along that looks like one of the other 5%, and tries to get it to happen. That’s harder – but it’s a lot more worthwhile.

I highly recommend that drug discovery people in any field, not just cancer, give this paper a read. It’s full of extremely sound advice, and reminds us all of what we should be trying to do, and how we should be trying to do it.


15 comments on “How to Know When a New Target is Really a New Target”

  1. Barry says:

    “Necessary and sufficient” is a condition rarely met in oncology (only the Philadelphia chromosome in chronic myelogenous leukemia comes to mind). But “necessary but not sufficient” is not really a sufficient qualification, either. This has two parts:
    1- A cancer doesn’t care whether you’ve broken Ras, or Raf, or Mek, or Erk…breaking any link along a signal transduction cascade breaks the whole cascade;
    and
    2- Multifactorial etiology. A full-blown invasive solid tumor needs to tick off all the “Hallmarks of Cancer”:
    1.1 Self-sufficiency in growth signals
    1.2 Insensitivity to anti-growth signals
    1.3 Evading programmed cell death
    1.4 Limitless replicative potential
    1.5 Sustained angiogenesis
    1.6 Tissue invasion and metastasis

    which would require hitting about six targets (although a single event, as in CML, or a mutant transcription factor might throw multiple switches at once).

  2. tlp says:

    I wonder how well one can squeeze biological data into the “necessary/sufficient” formalism. ‘Necessity’ is a particularly slippery concept, given that one can tweak a cellular response in more ways than one.
    Also, more often than not experiments don’t give a clear TRUE/FALSE answer – rather something like “it kinda seems to work at such and such conditions”.
    So maybe a more flexible probabilistic/Bayesian logic would be more appropriate? (A toy sketch of that idea follows this thread.)

    1. NJBiologist says:

      The logic is simple: interruption of the process with antagonists at a receptor demonstrates necessity, and recapitulation of the process with agonists demonstrates sufficiency.

      The details, however, are brutal. The biggest challenges are generally ruling out off-target effects that would support an alternate mechanism, and ruling in engagement of the target. As many commenters before me have noted, good literature on tool compounds is vital here, and often in short supply.
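    (On the Bayesian suggestion above: the point is easy to make concrete. A minimal sketch in Python, with the prior and the error rates invented purely for illustration:

        # Posterior probability that a screening "hit" is a real target (Bayes' rule).
        prior = 0.01        # assume ~1% of screened targets are truly disease-relevant
        sensitivity = 0.80  # P(hit | real target), assumed
        false_pos = 0.05    # P(hit | not a real target), assumed

        p_hit = sensitivity * prior + false_pos * (1 - prior)
        posterior = sensitivity * prior / p_hit

        print(f"P(real | hit) = {posterior:.2f}")  # ~0.14, so most hits are still noise

    With a 1% prior, even a decent assay leaves roughly six of every seven hits as noise, which is tlp’s point in numbers.)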

  3. anon says:

    Not sure about the necessary/sufficient formalism, but I did learn that much of biology is “fraught with danger”

  4. Ash (Wavefunction) says:

    It’s interesting: since I first pointed out this paper on Twitter and sent it around to colleagues, so many of them have gotten back to me saying how useful it is. Of all the examples in the paper, I like the one about HIF the best. That’s not only because it shows how cause and effect can be muddled, but also because it shows how the effects can actually contradict each other: e.g., HIF can lead to both upregulation *and* downregulation of other oncogenic targets. It really makes the point about testing your hypothesis under as many different circumstances as possible.

  5. DanielT says:

    A good start would be to stop using inbred mice in cancer research. We should be doing research in animals the same way we do in humans, except that we can give carcinogens to mice to cut down the waiting time.

    Any new treatment developed needs to work in a mouse population that reflects the phase 1/2 human population that gets to test new treatments – outbred, late-stage, spontaneous origin, treatment initiated only after many treatment failures, oh, and finally, the mice should be on death’s door.

    1. MrRogers says:

      If BaP and other mutagens induced a mutation spectrum in mice that mirrored the one in humans, this might be a good idea. Unfortunately, those tumors aren’t a reasonable reflection of the human disease. As shown by the Novartis group, PDX models are fairly predictive of response. They obviously don’t work for immunotherapy, so something else will be needed for those drugs. In the end, though, models are only proof that an approach can work. The only way to find out if something will work is human trials. In the cases I’m familiar with, successful oncology drugs have shown a substantial signal in phase I or, at the latest, phase II.

      1. DanielT says:

        No mouse cancer is going to be a perfect model for human cancer (spontaneous cancers in dogs are vastly better), but using a mouse model that even approximately models human cancer would be a great start. Most researchers working on cancer are not even working on a cure for cancer in mice, let alone one with relevance to humans.

  6. Kelvin Stott says:

    The issue is made even worse by the fact that so many pharma companies then pile onto these mis-validated targets that they will have to share any upside in the unlikely event that the approach does work. In fact, you may be better off going after a completely novel target with no published link to the disease, just to avoid excessive competition in the event of success, as I explain here:

  7. misdirection says:

    What a misleading post. “If your conditions are too persnickety…less chance of having a real impact.” This is just looking under the lamppost. Many would have said that about testing in certain mouse strains versus others, for example, but we have now learned that such things can be very important for very real reasons, such as the microbiota present in those different strains. In reality it’s perhaps the other way around: pharma wants something that smashes every cell line in sight, not something that is context-dependent. In reality those context-dependent, focused agents are now looking much more likely to have impact. If you just want things that kill off the NCI60, as pharma has been seeking for decades, by all means keep going. But there is less of a chance of having real impact…

  8. misdirected review says:

    Also, I do believe the quote about it being common to see far-reaching claims in high-impact papers. But I think this is an unintended consequence of the insane review process. Reviewers have this need to ask for more and more, and editors require that these bars be met. In many cases the biology being presented is novel enough and interesting enough for publication, but the review process pushes the authors, on a tight time frame and in a competitive landscape, to go beyond and expand their claims. I think we get what we select for. Reviewers and editors need to stop the cycle of constantly asking for more when a study is ready as is.

  9. SPQR says:

    These discussions are becoming boring and repetitive as fuck.

    We all know that biological research has massive problems with reproducibility and robustness. By now, we also all know what the causes are: everything from bad technique to weak scientific rigor, all the way up to fraud. I have personally seen all of these things, and I know many other people who have as well. Additionally, I was severely punished during my Ph.D. for uncovering data fraud in my lab.

    Anybody who is half a scientist can tell you the problems with biology; what we need are people who are actually in a position to do something about it, with the will to do so. For all the people who amass on the internet to complain about these issues, in the “real world” I see precious little of this attitude.

    Derek, I love your blog and I read it frequently, but I think with this entry you have chosen some very low-hanging fruit. I have read the review previously, and while it is, of course, spot on, it is also obvious; it reads at an undergrad or graduate-student level. We have extensively characterized the problem. Like I said, I refuse to produce bullshit research; this is something that has cost me dearly and will continue to cost me. But it doesn’t matter how many Ph.D. students or postdocs get buried if nobody with any clout speaks up.

    In the meantime, only people with Cell/Nature/Science publications get jobs, and even cases of obvious scientific misconduct are punished with slaps on the wrist (if at all).

  10. Barry says:

    Inbred preclinical models guarantee that your control really differs from your experiment only in the variable that you’re testing. So yes, spontaneous cancers in dogs are more like spontaneous cancers in humans. But no one has ever had enough of them to get statistically meaningful results.
    And we are all indebted to Senator Jesse Helms for defining rats and mice as “laboratory equipment”, which makes them vastly cheaper to keep/breed/use than “animals” under federal animal welfare laws.

  11. Apologies for the late reply. One thing Bill Kaelin does not address in detail in this excellent review is the models used to “validate” a target. In cancer, huge and expensive efforts have used tumour cell lines in culture and PDXs. These models have a high growth fraction (many cells undergoing division), and their biochemistry reflects their proliferative status. But many human tumours, such as colon and prostate cancer, have a low growth fraction, with cells out of cycle or cycling much more slowly than in the models. The doubling time of tumour size can be 100 days or more. To deliver lab results, models are chosen that provide a rapid return, rather than a 100-day wait. The biochemistry is biased towards proliferation rather than survival (e.g. with IGFR signalling). Surprise: the drugs discovered using these models, where the target (IGFR, for example) was “validated”, don’t work. Peter Sorger has gone a long way towards addressing this problem; see Nature Biotech 35: 500 (2017). In addition, targets should only be validated in cells that are in context: stroma, etc. But that’s another story…
