Chemical Biology

Myristoylation Probes, Rethought

The need for good chemical probes continues, and (sadly) so does the use of crappy ones. That’s what I took away from this recent paper from a multicenter team out of London. They’re looking at commonly used probes for inhibition of N-myristoyltransferase (NMT) enzymes, and it’s one of those good-news/bad-news situations.

N-myristoylation is another one of those funky residue modifications that make the human proteome a lot more versatile than just the count of protein-coding genes would indicate. The enzymes (there are two subtypes) catalyze the conjugation of myristic acid to an N-terminal glycine, and it’s an event that happens with at least 200 different proteins. Some of this takes place right at the ribosome during translation, and some of it happens post-translationally: for example, when a protease cleaves an existing protein species to expose a fresh glycine terminus. (Those protein cleavage events are another big reason the protein universe is a lot larger than it appears to be through the lens of sequence information).

Functionally, this modification is particularly involved in pathways to do with T-cell activation and infection, and NMT inhibition has been proposed as an antiviral and antifungal target (there are, for example, viral proteins that use the human N-myristoylation process for their own purposes). Over the years, there have been several compounds reported as NMT inhibitors in such cellular studies, among them 2-hydroxymyristic acid, D-NMAPPD (first reported as a ceramidase inhibitor), and (weirdly) tris-dba-palladium complex. Yep, same stuff you might use for a Suzuki coupling reaction. A group at Imperial College (on this paper as well) has also reported two more recent inhibitors, IMP-366 and IMP-1088.

You may see where this is going. A close look across different cell lines and in greater proteomic and mechanistic detail indicates that only the two IMP compounds are actually NMT inhibitors. The other probe compounds, well. . .there are problems. For starters, 2-hydroxymyristic acid has to be given in rather heroic concentrations to have an effect, and you’re always courting trouble when you go up to hundreds of micromolar anything in cell assays. It seems that its effects in those assays come from some sort of derangement of lipid metabolism and handling rather than NMT inhibition, which means that papers like this one may need some rethinking. Similarly, the D-NMAPPD compound is problematic: it is indeed a ceramidase inhibitor, but the current authors could not reproduce its reported inhibition of NMT in either enzyme or cell assays. Instead, it had notable cytotoxicity. (One thing I noticed, though, was that the current paper has a truncated structure for the compound, with a much shorter amide chain than the actual tetradecanamide. I hope that’s just an error in manuscript preparation!) And as for the palladium complex, forget it. In enzyme assays, it shows inhibitory effects at just about the concentration where it starts to crystallize out of the buffer, which isn’t good, and in cells it shows broad cytotoxicity and no real effects on myristoylation. That was an unlikely candidate from the start, and while it certainly has cellular effects, ascribing those to NMT inhibition doesn’t seem tenable. Pd2(dba)3-containing nanoparticles (as reported here) are in the same category.

Meanwhile, the two IMP compounds have reproducible effects against recombinant enzymes, at much lower concentrations, and affect the proteomes of several cell lines in a manner consistent with NMT inhibition. Moreover, there are crystal structures of both complexed with the active site of NMT. That’s not to say that they might not have other effects in cells, but at least you can start by saying that they’re NMT inhibitors, which is something that you can’t say with the other three compounds.

Every time something like this happens, the scientific literature is streaked with results that are probably not what they’re claimed to be. The problem is that this doesn’t get noted. They’re still in Pubmed with no annotation, they’re still cited by other papers, and so on. We have a shortage of good ways to annotate past work with “You know, later on this was shown not to be the case” comments (with PubPeer being the closest thing that comes to mind). This work certainly makes you think that a lot of myristoylation-based conclusions will need to be re-evaluated. But another thing to go back and check will come in a few years’ time – when we see how many papers have continued to use what appear to be invalidated chemical probes, yet again.

15 comments on “Myristoylation Probes, Rethought”

  1. tlp says:

    Looks like scientists need some better system for tracking dependencies than journal citations. Kinda like github for software – so that if the credibility of parent results is questioned, the whole fork downstream gets affected instantly. With the current pace of publishing, we could pretty soon reach the inflection point where it will be easier to rerun a known experiment than to dive into the literature and try separating the wheat from the chaff.

    1. Russ says:

      Finally refuting a week in the lab saving an hour in the library?

    2. winampdfx says:

      In general I like the github idea applied to scientific concepts. It could have the advantage of collecting most of the relevant information in one place. The downside is the difficulty of classifying scientific data and publications. A single paper can operate with hundreds of ideas and concepts, some of which may turn out to be wrong or need updating. This is the most important difference from a particular piece of software and forks representing different versions.

      1. zero says:

        This seems workable.
        The paper’s assumptions should be clearly stated in a way that is traceable to the conclusions of other papers or reference works. This is a significant bootstrapping problem, but as the platform grows it would become simpler.

        The paper’s conclusions should be assigned a confidence score. Initially this would be based on the power of any statistical analysis performed and the scores of cited works. (In the unlikely event of a paper with no stats or scored references, reviewers could assign their best guess as a starting value.)
        Follow-on papers would affect the confidence scores of their upstream papers depending on a couple of factors. Experiments to reproduce the original paper with better statistical power would have the most direct bearing, while papers that only referenced the original in passing would have very little impact.
        The net result should be that useful papers (those which tend to generate a tree of trustworthy papers) will tend to have high confidence scores. Dead-end papers can also have high confidence if they are carefully designed, so it is not entirely a popularity contest. Papers with a significant citation that turns out to be disproven can be automatically flagged as ‘increased risk’ and perhaps prioritized for a reproduction attempt.

        This could be broken down even further to scoring specific itemized conclusions, which is a closer match to version management anyway. This is one way to cut through the confusion surrounding a retraction or disproven conclusion in a paper that also had solid results, much like how version management and test tracking can help isolate a defective component of a larger and otherwise properly working feature.

        Scaled out to the wider practice of science, that approach could even help identify concentrations of low (or high) confidence. That might be individual researchers, specific facilities or even entire fields of research. More importantly, a low score suggests potential interventions. Perhaps a researcher with low scores has consistently underpowered results due to a lack of sufficient funding; more money for a larger sample size might lead to more reliable results. (It could certainly go dystopian as well.)
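The scoring scheme zero describes above can be sketched in a few lines of Python. Everything here is invented for illustration (the update rule, the weights, the names); a real system would need far more careful statistical modeling:

```python
# Toy sketch of propagating confidence scores through a citation graph:
# each paper starts with a prior, and downstream papers that cite it
# adjust that score in proportion to how directly they bear on it
# (a replication attempt counts heavily, a passing mention barely at all).

def propagate_confidence(papers, citations, iterations=10):
    """papers: dict paper_id -> prior confidence in [0, 1].
    citations: list of (citing_id, cited_id, weight) tuples, where weight
    encodes how strongly the citing paper bears on the cited one.
    Returns dict paper_id -> updated confidence."""
    conf = dict(papers)
    for _ in range(iterations):
        updated = {}
        for pid, prior in papers.items():
            # Gather evidence from downstream papers that cite this one.
            evidence = [(conf[c], w) for c, t, w in citations if t == pid]
            if not evidence:
                updated[pid] = prior
                continue
            total_w = sum(w for _, w in evidence)
            downstream = sum(c * w for c, w in evidence) / total_w
            # Blend the prior with the downstream signal; the blend factor
            # grows with the amount of replication-grade evidence (capped).
            alpha = min(total_w, 1.0) * 0.5
            updated[pid] = (1 - alpha) * prior + alpha * downstream
        conf = updated
    return conf

# A failed replication (low-confidence citing paper, high weight) should
# drag the original paper's score down; a passing mention barely moves it.
papers = {"original": 0.8, "replication": 0.2, "mention": 0.9}
citations = [("replication", "original", 1.0), ("mention", "original", 0.1)]
result = propagate_confidence(papers, citations)
```

The automatic flagging zero mentions would then just be a threshold check on the updated scores, and scoring itemized conclusions rather than whole papers would mean running the same propagation over a finer-grained graph.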

  2. dearieme says:

    At what point does the entropy in the literature outweigh the enthalpy, metaphorically speaking?

  3. drsnowboard says:

    Perhaps your readers can come up with a metal complex that actually is used therapeutically – apart from cytotoxics like the -platins? You know, where the metal complex has been proved to get to the site of action, intact?

    1. Eugene says:

      Would Vitamin B12 to treat Cyanide poisoning and pernicious anemia qualify?

    2. cynical1 says:

      All of the MRI contrast agents are Gadolinium complexes and they are used every day.

    3. Barry says:

      the target is contentious, but Lithium for bipolar disorder certainly gets there (wherever “there” is)

    4. loupgarous says:

      Octreotides are used as carriers for metal ions for both peptide receptor-targeted radiotherapy (Lu-177) and for peptide receptor targeted radiography (Ga-68). In both cases the octreotate delivers the metal ion where it can do the most good.

  4. Phytomig says:

    Probes as we know them don’t really tell us much. Probes now are looking at chemical changes to specific enzymes or groups of enzymes. However, what we need are probes that probe the bulk properties of tissues and cells. Biomaterial probes are what we need, but sadly, all we have is the central dogma and organic chemists :(.

    1. Imaging guy says:

      “we need are probes that probe the bulk properties of tissues and cells”

      Do you mean “probes” that come out of phenotypic screening of cells, tissue cultures and animals?

  5. opensciguy says:

    Derek, scite.ai, website linked to above, is the beginnings of what you are talking about at the end of your post: an annotation layer for the literature that analyzes agreement between a work’s findings and subsequent papers that cite it. It’s like IF and H-index but with a directional component, not just magnitude.

    A lot of potentially impactful things like this have taken shape over the last few years, including Peter Murray-Rust’s Content Mine, which was originally a pdf-scraping operation to make the literature more machine readable for deep NLP and other analyses.

    Interesting times!

  6. Ed Tate says:

    Thanks for these insightful comments! Many journals now require robust evidence for validation of cell line identity in light of the fact that many lines are contaminated (e.g. by HeLa), or are not what was originally claimed. There is a whole organisation dedicated to this endeavour: https://iclac.org/databases/cross-contaminations/.

    We have long realised that many probes have off targets which can confound their use in cells, so why can we not have the same level of rigour applied to any submitted paper which reports the use of a chemical probe/inhibitor? We need a consortium to provide a gold standard against which this can be measured then, very importantly, we need the journals to require all submissions to include a statement of probe quality (exactly as we are required to do for cell lines and antibodies by e.g. the Nature journals), with a link to a summary of probe validation data accessible to the editor/reviewers.

    Maybe a simple traffic light system could be used, with green for validated to a high standard, yellow for evidence pending, and red for probes with known off-targets. This traffic light and a link to the probe’s validation data could then be placed prominently against any data figure which uses such a probe, enabling all readers to make a judgement as to the reliability of the work.

    By the way, thanks for the spot on the structure error for D-NMAPPD in Fig. 1, which occurred in the proof stage; Cell Chem Biol will correct this prior to formal publication.
