
The ACS Journals Tighten Up Screening Standards

Here’s an article (free access) in ACS Central Science on assay interference compounds, a contentious topic that has been aired here (and in many other places). This one, though, is authored by the editors-in-chief of all the relevant ACS journals and is appearing in all of them as well. People will argue about some of the terms used, but I think the article and its recommendations are very sensible. It’s yet another clear warning that all screening assays will generate false positives, and that many of these will occur through known mechanisms that can be guarded against, mitigated, and counterscreened for. Importantly, it’s noted that although there are in silico tools available to prescreen compounds on the basis of structure, there is no substitute for well-controlled experimental evidence. If you rely on a computational filter for such classification, you run a substantial risk of moving forward compounds that are still problematic, and/or throwing some away that are actually worth pursuing. Do the experiments. If there’s one rule of experimental science that’s gotten us this far, that’s probably it. Nullius in verba, folks: go run well-thought-out controls and decide for yourselves, but don’t take anyone else’s (or anything else’s!) word about any particular hit until you’ve done so. In the same way, the journals aren’t going to take your word for it either, without data to back things up:

In light of these concerns, the participating ACS journals plan to uphold the standards above to ensure that all compounds for which activity is reported demonstrate activity commensurate with expectations (i.e., the compound is binding to the expected pocket and accompanied by thorough SAR). Active compounds from any source must be examined for known classes of assay interference compounds, and this analysis must be provided in the general experimental section. For compounds with potential assay interference liability, firm experimental evidence must be presented from at least two different assays, both of which report that the compounds are specifically active and that the apparent activity is not an artifact. Other issues that need to be considered in this context are the purity of the compound, stability in assay buffers, cysteine or glutathione (GSH) reactivity, and a review of the literature for previous activities reported for the compound or compound class.
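
A quick aside on those in silico tools: here is a rough sketch, purely for illustration, of what a structural prescreen looks like in practice, using RDKit’s built-in PAINS filter catalog (the hit names and SMILES below are invented examples, not anything from the editorial):

    # A minimal structural prescreen using RDKit's built-in PAINS filter catalog.
    # Assumptions: RDKit is installed; the hit names and SMILES are invented examples.
    from rdkit import Chem
    from rdkit.Chem.FilterCatalog import FilterCatalog, FilterCatalogParams

    params = FilterCatalogParams()
    params.AddCatalog(FilterCatalogParams.FilterCatalogs.PAINS)  # PAINS A, B, and C substructure definitions
    catalog = FilterCatalog(params)

    hits = {
        "hit_1": "O=C1C(=Cc2ccccc2)SC(=S)N1",  # a benzylidene rhodanine, a classic PAINS-flagged class
        "hit_2": "CC(=O)Oc1ccccc1C(=O)O",      # aspirin, included as an unflagged comparison
    }

    for name, smiles in hits.items():
        mol = Chem.MolFromSmiles(smiles)
        match = catalog.GetFirstMatch(mol)
        if match is not None:
            print(f"{name}: structural alert ({match.GetDescription()}); prioritize the counterscreens")
        else:
            print(f"{name}: no structural alert; experimental controls still required")

However the script comes out, the result is a to-do list for the bench, not a conclusion: flagged compounds get the counterscreens first, and clean ones still need the experimental controls before anyone should believe them.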

I think that this will improve the med-chem literature, and I think the literature could certainly use the improvement. The object is not to witch-hunt out any structures that people find funny-looking (and if that starts to happen, I and others will raise a fuss about that). There are some odd structures that are out there helping millions of patients. But there’s also way too much junk in the literature, poorly controlled papers that act as if there’s no reason to suspect their “hit compounds” whatsoever, and that’s wrong, too. Some compounds (and some compound classes) really are more suspicious than others, and if you’re going to propose them as things of interest to the rest of the community, you need to make the case for them. If you run the controls and everything passes, you’ve just strengthened your work (and your paper) immensely. If you run them and the compound doesn’t pass, you’ve saved yourself a lot of wasted time and effort. So run them.

This would be a good time to mention that not everyone was thrilled with that recent article on curcumin as an example of a compound that appears to be wasting people’s time. By email, I was pitched a ringing defense by Ajay Goel of the Baylor U. Medical Center, which I passed on. I note that he’s on YouTube with talks that have titles like “Curcumin for Cancer, Depression, Pain Relief and More”, and a Google search reveals that “more” includes Alzheimer’s, diabetes, arthritis, and, well, more. Heading further up the scale, a group of authors wrote to Nature, basically making the case that curcumin has shown actual results in the clinic, quite possibly through polypharmacology, that cannot be dismissed so easily (a contention that I think the authors of the original paper would dispute). Two other longtime researchers in the field also objected in ACS Med Chem Letters.

Well, it’s not like no one is going to publish any article on curcumin after this controversy. But to get one into a good journal, you are going to have to run a lot of control experiments, because the compound really does have a lot of confounding factors that make interpretation of experimental results difficult. And if those who believe that the compound is a worthwhile field of study are going to convince others, those are just the kinds of papers that will do it. If there’s something solid in there, bring it to the surface so that even the doubters have no recourse but to agree.

16 comments on “The ACS Journals Tighten Up Screening Standards”

  1. Molecular Pharmacology Guy says:

    As a pharmacologist/screener in industry, I applaud this initiative from the ACS and hope that it will be replicated by other scientific societies and journal publishers.

    A great first step in the right direction!

  2. tlp says:

    Seems like any generalizations and conclusions drawn from a purely statistical analysis of an available dataset can (and likely will) be superseded by an analysis with a larger sample size.

  3. Anon says:

    Real experiments? The computer guys will be very disappointed. After all, who needs real data when you can pretend to predict everything with a bit of data dredging?

    1. AnonAnonAnon says:

      The inverse is also true for “bench” guys. After all, who needs real data when you can pretend to know everything with a bit of anecdotal evidence?

  4. anon says:

    Nobody who was submitting turds is going to be discouraged by ACS authorship guidelines. Reviewers are the gatekeepers, and editors are responsible for their quality. Run controls, people!

    1. ScientistSailor says:

      I’ve tried rejecting papers based on PAINS that did not contain specificity experiments. They were still published. I thoroughly applaud these guidelines!

      1. Peter Kenny says:

        ScientistSailor, the URL for this comment discloses some countermeasures for dealing with reviewers like you.

        1. ScientistSailor says:

          Ha!

  5. MTK says:

    I guess this will help, but the skeptic in me says that as long as tenure, grant-awarding, and promotion decisions are tied to publications, and other journals don’t follow suit, it won’t mean much more than these papers finding a different home.

    1. David says:

      Let’s start a publication, call it “False Positives Digest” or “False Positive Indigestion” if you want to be funny, and give them a proper place where they’re not hurting anyone 😉

    2. Roger Moore says:

      Maybe so, but in the long run that’s going to wind up hurting the publications that don’t have such strong guidelines. After all, which one are you going to trust more: an article in a journal that has strict guidelines that help to prevent publication of false positives, or one in a journal that lets those false positives through? Journals that let people publish crap will find their reputations getting crappier.

    3. Morten G says:

      It worked for macromolecular X-ray crystallography 25 years ago. Some journals started to require that models (and later reduced data) be freely available for download, and now all journals require it.

  6. Peter says:

    A lot of these suggestions are very sensible and I hope the editors don’t mess things up by permitting computationally predicted PAINlessness to relieve researchers of the responsibility to assess their screening output experimentally. Reading this editorial reinforced my view that we need to stop using the term ‘PAINS’ if we are to move forward. In particular we need to draw a distinction between assay interference (assay result gives false indication of whether or not target is engaged) and unacceptable modes of action.

    It is noteworthy that separate categories were defined for ‘PAINS Molecules’ (b) and ‘Spectroscopic Interference Compounds and Compounds That Inhibit Reporter Enzymes’ (c). Are the authors of the editorial suggesting that (c)-type interference does not fit into the pan-assay interference framework? I think that interference in AlphaScreen assays resulting from singlet oxygen quenching/scavenging would fit into category (c).

    For the comment URL, I have linked a blog post in which I present a case against including PAINS criteria in the J Med Chem guidelines for authors.

  7. Peter Kenny says:

    People following this discussion may be interested in the article on correcting for interference that I’ve linked as the URL for this comment. Interference can lead to false negatives as well as false positives.

  8. UglyChem says:

    An ugly compound outside of “drug-like” space with an awesome in vivo profile.

    http://pubs.acs.org/doi/full/10.1021/acsmedchemlett.6b00391

  9. PrivilegedScaffold says:

    The Journal of Natural Products is conspicuously absent from the list of ACS journals that have agreed to these more stringent guidelines.
    Regarding how to change the system: until the major funding bodies require that publishers institute some minimum quality standards in order for publications to be used as records of success in grant submissions, there will continue to be an incentive to publish shoddy science. If publications are the “coin of the realm,” then you either have to change the realm or the coin to effect change.
