
Chemical News

Not All Of Those Compounds Are Real. Again.

The Nrf2 pathway has been a hot area of research for some years now, particularly in oncology. Nrf2 itself is a basic-leucine-zipper transcription factor that under normal conditions stays mostly out in the cytosol, where it’s under tight regulatory control. Under cellular stress, though, it heads into the nucleus and fulfills its transcription-factor destiny, in particular setting off a range of genes coding for cytoprotective proteins. This break-glass-in-case-of-emergency mechanism works by Nrf2 being normally bound to another cytosolic protein, Keap1, which both binds it up and facilitates its degradation by the ubiquitination/proteasome system (by in turn binding a ubiquitin ligase and keeping it close to Nrf2). Keap1 has cysteine residues that get modified under oxidative stress, and this event causes Keap1 to fall off of Nrf2 and send it on its way.

That’s the index card version of the system. As usual, when you look closer all sorts of complications ensue. For example, Keap1 has about 27 cysteine residues that seem to be important for regulating Nrf2 activity. Modification of some of them (particularly Cys151) is definitely important for releasing Nrf2, but modification of others actually helps keep it bound – there’s some sort of “cysteine code” effect going on that we don’t completely grasp. Nrf2 itself is involved in the regulation of at least 200 genes, most of which clearly seem to be involved in dealing with oxidative stress – but not all of them. And more broadly, there are people who would like to use Nrf2 activation as an anti-inflammatory tool or neuroprotectant, which seems reasonable. But there are also a number of cancers where Nrf2 is already revved up and helping keep the tumor cells going, so in those cases you’d like to be able to shut it down and cause them trouble. And so on!

How about some more complications, then? This new paper (from researchers at Copenhagen and Rennes) is a very welcome look at the chemical matter that’s appeared in the literature so far (journals as well as patents) as inhibiting the Nrf2/Keap1 protein-protein interaction. The team identified 19 such compounds, and purchased or prepared every single one of them for side-by-side tests. As one might have suspected, not all the literature routes to these worked as well as they do in, well, the literature, so they had to do some work on the synthesis end for several of them. But they eventually took the whole list and characterized them in three orthogonal biophysical assays (fluorescence polarization, thermal shift, and surface plasmon resonance). That’s exactly what you want to do with such hits – make them perform via several different readouts so you can see if you believe their activity. They also checked them all for issues like potential covalent behavior (which can be good or bad, depending), for redox activity (rarely anything but bad), and for aggregation (always bad). And finally, they all went into cell assays.
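(For anyone who wants that triage logic spelled out, here is a rough sketch in Python. The field names, classification labels, and example compounds are all mine, not the paper’s; it’s only meant to show the shape of the cross-checking, not anyone’s actual pipeline.)

    # A minimal sketch (hypothetical fields, not from the paper): a hit is only
    # trusted if all three orthogonal biophysical readouts agree and none of
    # the artifact flags are set.

    from dataclasses import dataclass

    @dataclass
    class CompoundResult:
        name: str
        fp_active: bool       # fluorescence polarization
        tsa_active: bool      # thermal shift
        spr_active: bool      # surface plasmon resonance
        redox_active: bool    # redox cycler in a counter-screen
        aggregator: bool      # aggregates under assay conditions
        unstable: bool        # chemically unstable / wrong structure

    def classify(c: CompoundResult) -> str:
        if c.redox_active or c.aggregator or c.unstable:
            return "artifact"         # no potency number rescues these
        readouts = (c.fp_active, c.tsa_active, c.spr_active)
        if all(readouts):
            return "validated"        # a believable Keap1 binder
        if any(readouts):
            return "inconclusive"     # the orthogonal assays disagree
        return "inactive"

    print(classify(CompoundResult("hypothetical_hit", True, True, True,
                                  False, False, False)))   # validated
    print(classify(CompoundResult("hypothetical_dud", True, False, False,
                                  True, False, False)))    # artifact

The asymmetry is the point: one honest failure is enough to make a compound suspect, but it takes agreement across every readout before you get to believe it.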

This is an excellent, thorough med-chem examination of these compounds, and it’s a pleasure to see it. I might add that one would want to see a lot more of this sort of thing when interesting new compounds are first reported, rather than waiting for other groups to come in and try to gather all the pieces together (or perhaps take out the trash), but I’m just glad that it’s been done in this case. And the results?

Ten of the nineteen reported compounds appear to be garbage. That’s not quite how the authors phrase it, but they come pretty close, saying that they “question the legitimacy” of that set. As they should: some of these were discovered in fluorescence-based assays but turn out to be fluorescent interference compounds. Others aggregate, others are chemically unstable. There are compounds with cell activity that show nothing believable in the biochemical/biophysical assays, so who knows what that means, etc. As the authors note, and they are so right, that last situation “highlights the crucial deficiency of characterizing compounds solely based on cellular activities”.

Let’s be frank: these are the sorts of problems that should be caught before you publish papers. All fluorescence-based assays are subject to false positives based on compound properties (absorption, quenching, intrinsic fluorescence, etc.) and all kinds of compounds can aggregate under different assay conditions to give you false positives that way, too. Checking compound purity and stability should be an elementary step. None of these problems are new, but here we are, and here the med-chem literature is. I will add my obligatory statement about the difficulties this sort of thing poses for ideas about shoveling it all into the hopper of deep-learning software.
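(And since those checks are not exotic, here is a minimal, hypothetical version of two of the most common ones in Python: compare the compound’s own fluorescence to the probe signal, and compare potency with and without detergent to catch aggregators. The cutoffs are illustrative, not anyone’s validated criteria.)

    # Hypothetical pre-publication sanity checks, with illustrative thresholds:
    # a compound that fluoresces like the probe will distort any
    # fluorescence-based readout, and one whose apparent potency collapses
    # when detergent is added is behaving like a colloidal aggregator.

    def flag_artifacts(compound_fluorescence: float,
                       probe_fluorescence: float,
                       ic50_no_detergent_uM: float,
                       ic50_with_detergent_uM: float) -> list:
        flags = []
        if compound_fluorescence > 0.1 * probe_fluorescence:
            flags.append("fluorescence interference")
        if ic50_with_detergent_uM > 10 * ic50_no_detergent_uM:
            flags.append("likely aggregator (detergent-sensitive)")
        return flags

    # A compound that lights up the detector on its own and loses 40-fold
    # potency in detergent gets both flags:
    print(flag_artifacts(5000.0, 20000.0, 2.0, 80.0))

Neither check replaces the orthogonal assays above; they are cheap filters that keep the obvious offenders from ever reaching a paper.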

The authors don’t call this out explicitly in the text, but I will. The molecules that look real come from groups at Rutgers, Biogen, China Pharmaceutical University, Purdue, Univ. of Illinois-Chicago, Sanofi, Keio Univ. (and other Japanese academic partners), and Astex/GSK. The ones with problems come from Univ. College London/Dundee/Johns Hopkins (the last two also show up with other collaborators), Harvard/UCSD, Toray Industries (also with RIKEN collaborators), China Pharmaceutical Univ./Jiangsu Hengrui Medicine Co. (the former shows up more than once), and Univ. of Minnesota. You will note the appearance of well-known institutions on both lists. I will say that big pharma comes out looking OK, partly because of abundant resources and partly because having money on the line makes you marginally less likely to try to fool yourself. (For cash-strapped small pharma, the incentive to fool yourself can sometimes override other considerations, I hasten to add.)

The authors of the current paper finish up by recommending that people only draw conclusions based “on pharmacological mechanisms supported by orthogonal biochemical and biological assays”, and I can only second the motion. That would indeed be a great thing. Let’s give it a try. Start with Nrf2/Keap1 compounds, and just keep on going.

25 comments on “Not All Of Those Compounds Are Real. Again.”

  1. MrXYZ says:

    I should note that these problems are not limited to small molecules. We’ve seen similar issues with antibodies giving false positives in cellular assays, often due to aggregation or other stability issues.

  2. Terry Moore says:

    Those of us in the NRF2/KEAP1 field owe Anders Bach (the lead author) and his group a big thank-you for carrying out these experiments.

    Derek, although the University of Chicago is not a bad place to be mistaken for, my group is at the University of Illinois at Chicago.

    1. Derek Lowe says:

      Whoops – fixed! Thanks. . .Very much agreed on the debt owed to the authors, too.

      1. Anders Bach says:

        Thanks Terry for your nice comment, and thanks Derek for your very nice write-up. It is great to see our work being exposed to a wider audience; it makes the effort worthwhile… The project started about three years ago, when we made some of the literature compounds to be used as controls for setting up assays. However, the results when testing many of these were confusing, and it became frustrating. So at one point we decided to make them all and find out once and for all which work and which don’t. Now I am glad we did it… The only problem is that we see the same issue for most other protein targets we are working on. I guess everyone in the medchem field knows this is a problem; here it is just being displayed (for Keap1) very clearly. We hope our paper will help advance the Keap1 field, and thanks to the attention it gets here, it might positively affect other target areas too.

  3. dearieme says:

    “Ten of the nineteen reported compounds appear to be garbage”

    So about 50%; nearly the toss of a coin. Just consistent with “Why Most Published Research Findings Are False”.

  4. Adonis says:

    Epitomizes my major problem with academic drug discovery: things don’t have to work, they just need to be publishable and fundable.

    1. skeptic one says:

      Adonis,
      I wish this were an “academic” issue only… In my experience, it is not – similar things are seen in industry. In fact, there are academic groups that are leading the way in characterizing probes (like the authors of this paper!).

      Maybe I am unlucky, but when I have discussed this topic in project teams, most folks (non-chemists) just:
      a: don’t believe this deep characterization adds value, or
      b: are just not motivated to fight for the resources to characterize chemical probes thoroughly.

      Everyone tries to move the project forward doing as little as possible… Of course, there is always money to run well-established assays… and add the values to a PowerPoint table…

      Have others had similar experiences? Or is it just me, hanging around with the wrong crowd?
      What’s the best way to persuade non-chemists of the value of rigorous characterization of probes?

    2. anon says:

      More like a failure of the peer review system.

      My academic lab won’t publish something without activity in an orthogonal assay, a selectivity assay or two, and a tox counterscreen.

      Never get too excited about those “most actives” at the top of the spreadsheet from one and only one assay.

  5. a. nonymaus says:

    Some of the irreproducible synthetic chemistry may underlie the irreproducible biochemistry. As an example, the previously published putative procedure for compound 17 is now described as giving compound 52, an intermediate. Assuming that 52 is actually what had been made previously and assigned as 17 through shoddy characterization, it would be interesting to see if 52 has the Nrf2/Keap1 activity that 17 has now been shown to not have.
    Of course, it could be just another fluorescent, aggregating, redox-active brick in the wall.

    1. Uncle R says:

      As Captain Corante comments (that’s you Derek – O Captain! my Captain!), “Big Pharma comes out looking okay.” May reflect a bigger pool of chemists to keep each other on the straight and narrow (although the pool has of course been shrinking for decades, but maybe big pool chemists now keep little pool chemists on the straight and narrow as well…).

      For example – the DMSO stock solution blessed with the active impurity that never was; the 001 sample tainted with residual mercury from that S-acetyl displacement reaction carefully carried out all those years previously by the future Site Scientific Safety Adviser; any number of beads, solid phase supports, Kans, encoded libraries, etc, etc, etc, for the active entities that never were; hunting behind, under and inside the NMR machine for the missing NMR tube of the only surviving sample of the only surviving project lead (it happened – tube found, activity validated but no happy ending, project ended up in a PK dead end like all chemical series bar one worldwide vs that target).

      999/1000 times nothing comes out the other end of all that detective work, but at the back of your mind are those stories like Librium and Intal that compel Dr Holmes and Mr Watson to go forth and probe for the 1001st time the Strange Case of the Disappearing Biological Activity…

      Well done Anders Bach and co-workers – can’t access the full paper but sounds a Magnificat piece of work.

      Keep up the good work Chemists wherever you are and keep those pesky Biologists on the straight and narrow too!

  6. matt says:

    Sounds like such a verification procedure should be formally written up and publicized under some catchy name like the Bach Protocol or the Bach Postulates, which must be satisfied to demonstrate activity, and then “influencer” campaigns should be run by reviewers and professionals at journals to suggest that all reputable articles claiming biological activity for new chemical matter should satisfy the Bach Postulates.

    I realize things somewhat like this have already been done, but maybe a catchy name and a simplified checklist and a single literature reference to bind them all together will be the magic to solve the issues Skeptic One raises above. The way to persuade others is when it becomes Simply the Proper Way To Do Things, and half-assing gets seen for what it is.

    1. Anders Bach says:

      Not a bad suggestion (=:

      There are many good papers addressing this; also, many of these thoughts have been crystallised into the Chemical Probes Portal (http://www.chemicalprobes.org/about), which focuses on ensuring high standards for compounds to be used for studying biology.

      Whether a checklist of some sort could be useful I don’t know; I guess we don’t want too rigid a publication system. But ensuring the quality of what we publish is definitely the responsibility of the scientists, and also of the journals and reviewers.

  7. These sorts of “public service” papers are invaluable.

    I recently highlighted an example of a team doing all the right experiments before publication: an HTS screen of 1.7 million compounds against ATAD2 yielded 9441 hits. Careful biophysics revealed that only 16 were real – all from the same series.

  8. Barry says:

    Tularik brought impressive resources to the mammalian transcription factor problem (although there’ve been advances in assay format since the days of yeast two-hybrid). Their Scientific Advisory Board was called a “brain trust” in the field. But after screening >2 million small molecules, they conceded that they had zero credible med-chem leads.
    Mammalian transcription factors (e.g., Myc) are prominent in a bunch of human cancers (and other diseases). But they remain unpromising targets.

  9. milkshake says:

    I worked for a small biotech. Our biology group ran a single, cursory, flawed assay on a few mice in order to apply for an NIH grant to the tune of 1 million USD. The flawed design was most likely intentional, to produce an impressive “result” to get the funding. The company was low on cash, so this money was most welcome – for a cancer project that the management wanted to pursue anyhow.

    So we got the money for a more thorough study, which disproved the previous results – it showed that the initially observed effect wasn’t real. What do you think our management did? 1. They published the new data, in an intentionally misrepresented form, to carefully hide the fact that the stuff does not work. 2. They keep telling the investors they still have a clinical candidate. 3. On the company website the original false data is prominently shown, with a legend doctored by hand (30 mg/kg was changed to 20 mg/kg since the last presentation of the same data at a conference, to better “agree” with the later data, but the graph remains the same). Meanwhile the CEO, the research director, and the VP for preclinical development were made to go, and the new management is keenly aware that the project is baloney. They just keep presenting it, and of course there is no retraction.

  10. ScientistSailor says:

    It’s not just industry that has money on the line. Academics do too (grants), and this also leads to pressure to fool yourself, or not to do the killer experiment.

    1. Hap says:

      No – it just matters what direction the money flows. At least in theory, when it’s already your money, you want to spend as little as possible to make more and so check your results to not spend money (and time) chasing the wild goose. If it’s not your money, and attractive results (even if they aren’t real) attract money, then you’re going to do what gets the money.

      Unless funders (for both businesses and academia) insist on making sure that the methods to eliminate obviously unsound results are followed, you will get lots of unsound results that should have been caught earlier. As long as publishing or selling on unsound results makes people money, they’re not going to stop publishing and selling on them.

  11. exGlaxoid says:

    I have sadly seen similar cases. Companies do studies with no controls and an N of 1 on a dozen compounds; one or two look better than the others, so they become leads. There appears to be an effort not to retest those compounds.

    In other cases I have seen a positive hit repeated a few times using the same DMSO solution, but when no closely related compounds were active, there was no attempt to remake the solution before using that data in papers and grant applications.

    And similarly, in a large pharma company, a compound with known mechanism-based side effects was pushed forward due to a new scaffold, which was sold to management as a way to avoid the side effects.

    The only way to avoid this type of optimistic, biased behavior is to have funding groups and management insist on controls, duplicates, retesting and analysing solids for “active” hits, and loss of funding for groups that repeatedly show a lack of good science. Good luck with that.

    1. bhip says:

      “…a large pharma company, a compound with known mechanism-based side effects was pushed forward due to a new scaffold, which was sold to management as a way to avoid the side effects”… all… the… time.

  12. Eugene says:

    It is unfortunate that too often the basic principles of Science (as I started learning in grade school, when they actually taught children how to perform real experiments, not flashy demonstrations with no explanation) are ignored because they are inconveniently arduous or the results may not be desirable. So skipping statistically meaningful sample sizes, skipping confirmation of results by a different method, or ignoring undesirable results becomes commonplace. In industry this seems to be driven by profit or career advancement, especially of the MBA project managers who seem to sense that too much Science is costly and more often than not will give negative results.

  13. not-a-chemist says:

    “…and more often than not will give negative results.”

    You mean the negative results that would save money by not letting them advance to the next level of testing? Oh yeah, that goes back to the discussion of whose money you are spending.

    1. Eugene says:

      It is a bad career move to be the one to shoot down someone’s pet project. Besides, by the time everybody realizes that the project is failing (because of bad Science or bad management), the people responsible will have either advanced to the point of being beyond blame or moved on to another organization to wreak havoc there.

      1. Hap says:

        In theory, this would be where management coming from the trenches might help – if they knew what to look for, they might be able to ask the right questions earlier and require answers.

        Otherwise, without competent (or willing) oversight, the scenario is basically wannabe upper management spending the company’s money for their own career development and to the company’s detriment. And again, it comes down to whose money is being spent and who is spending it.

  14. Just saying... says:

    So, is drug discovery, practiced in such a way, a scientific activity?

  15. Anon says:

    Co-credit the Rutgers one that passed validation to the Broad MLPCN team. The Rutgers prof was the PI on the R03 screening application, but the Broad did the HTS, the follow-up assays, and the medchem.
