
Biological News

Crappy Antibodies: Available Now, and for the Foreseeable Future

I made a brief mention of this article yesterday, but I wanted to highlight it. It’s a look, from Nature News, at the broader implications of the antibody problem in research. Antibodies are, of course, universal reagents in molecular biology assays. If you suddenly declared their use illegal, the field would just collapse. But we can’t live with ’em, either, because a really significant percentage of the antibodies used are not as good as they should be. A really disturbing percentage of the scientific literature is (at the very least) complicated by this problem, and some of it is flat-out invalidated by it.
As has been mentioned here several times, the same goes for small-molecule chemical probes, too, and how. That problem is, in principle, a bit more solvable (and I’ve been hearing about some efforts to try to help solve it – more on that as it develops). Small molecules are easier to assay for purity and identity, for one thing, and compared to antibodies, there are a lot fewer of them. The article estimates that there are around 300 companies selling something like two million antibodies. Which of these do what they’re advertised to do, and under what conditions, well. . .that’s hard to say:

Scientists often know, anecdotally, that some antibodies in their field are problematic, but it has been difficult to gauge the size of the problem across biology as a whole. Perhaps the largest assessment comes from work published by the Human Protein Atlas, a Swedish consortium that aims to generate antibodies for every protein in the human genome. It has looked at some 20,000 commercial antibodies so far and found that less than 50% can be used effectively to look at protein distribution in preserved slices of tissue. This has led some scientists to claim that up to half of all commercially available antibodies are unreliable. . .
. . .Abgent, an antibody company based in San Diego, California, and a subsidiary of WuXi AppTec in Shanghai, China, tested all of its antibodies about a year ago. After reviewing the results it discarded about one-third of its catalogue.

So that should give you a rough estimate, and I don’t think that many experienced assay development folks will be surprised. The people that are surprised, as usual, are the ones who just order out of the catalog and believe what’s on the label. As the article mentions, a lot of people shop on price and speed of delivery, which (you’ll be shocked to hear) are variables that don’t always correlate well with reagent quality. And there are a lot of resuppliers out there, so even if you buy half a dozen antibodies against the same protein from different outfits, you may have only bought two. Or one. Who knows? And if you use up your supply of one that’s working for you and re-order, will the new batch be the same as the old one? Who knows?
There are several online resources that are trying to address this problem (they’re listed in the article), but many people don’t even know about them. And as long as people have the attitude that one (now more cautious) scientist expressed in the piece, the crappy reagents will continue to be sold. “I wasn’t trained that you had to validate antibodies,” he says. “I was just trained that you ordered them.”

22 comments on “Crappy Antibodies: Available Now, and for the Foreseeable Future”

  1. steve says:

    Most competent immunologists don’t depend on one antibody. Rather, they do sandwich assays or immunoprecipitation followed by Westerns that rely on recognition of two different epitopes. Even monoclonals have cross-reactivity issues hence the need for looking at two separate epitopes. As with any assay, you need to validate before you can depend on the results.

  2. Pessinist says:

    @1 Steve: true, but how many times does that complicate the issue for each time it reduces it?

  3. anon says:

    a scientist who doesn’t believe in validating the quality of their reagents is not a scientist.

  4. MTK says:

    It goes beyond what is described, however, at least according to folks I’ve talked to (it’s not my field).
    Not only is there variability in reactivity and selectivity between vendors for nominally similar mAbs, but also batch-to-batch variability from the same vendor and intra-batch variability depending on storage length and conditions.
    The only real recourse seemingly is to test your antibodies prior to every experiment. Short of that you are asking for real trouble in terms of any reproducibility. Of course, just because it’s reproducible doesn’t mean it’s “correct” either. It could be reproducibly unselective.

  5. SP says:

    Minimally, people should be using universal IDs (like the Research Resource ID, RRID) so that the exact vendor, part, and lot can be traced back when reproducing work.
    There’s a link for each registered reagent that autogenerates the text you need to insert into a publication protocol for referencing the RRID. Some publishers have started requiring their use. (Incidentally, that site supports Derek’s estimate; they have 2.4 million registered.)
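    Antibody RRIDs published in methods sections follow the form “RRID:AB_” plus a numeric accession (e.g. RRID:AB_2314866). As a minimal sketch of the kind of consistency check a lab notebook script could run, the snippet below validates that form and builds a citation line; the vendor name and catalog number shown are hypothetical, and real lookups would go through the RRID portal rather than a local regex.

```python
import re

# Antibody RRIDs take the form "RRID:AB_" followed by digits
# (e.g. RRID:AB_2314866). This simplified pattern covers only the
# antibody prefix; other resource types use other prefixes.
RRID_PATTERN = re.compile(r"^RRID:AB_\d+$")

def format_antibody_citation(vendor: str, catalog_no: str, rrid: str) -> str:
    """Build a methods-section citation line for an antibody.

    Raises ValueError if the RRID does not match the antibody format.
    """
    if not RRID_PATTERN.match(rrid):
        raise ValueError(f"not a valid antibody RRID: {rrid!r}")
    return f"{vendor} Cat# {catalog_no}, {rrid}"

# Hypothetical vendor and catalog number, real RRID format:
line = format_antibody_citation("ExampleVendor", "ab-0001", "RRID:AB_2314866")
print(line)
```

    A check like this catches the common failure mode of pasting a bare accession number (“AB_2314866”) into a manuscript without the “RRID:” prefix that indexing services key on.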

  6. 2 Cents says:

    From my experience, biomedical research has largely adopted a kit-based, ready-to-use mentality. Although one should expect high quality when spending $400+ on 100 micrograms of a primary antibody, research customers should still verify product performance before committing to expensive experiments. What disappoints me the most is that various open-access, crowd-sourced evaluation tools exist, yet hardly any biomedical researcher uses them.
    While many can regurgitate acronyms in a signal transduction pathway, very few understand the underlying chemical mechanisms. The pursuit of Big Data should not excuse inattention to detail or neglecting best practices.

  7. DaveP says:

    I don’t remember whether it was in 11th grade chem or 12th grade organic chem lab where the teacher substituted solvents on us – replacing methanol with water, maybe – and the grade for that day was based on how long it took you to figure out that something wasn’t right. I always figured it was one of the most valuable lessons I learned.
    But then we walked five miles to school, barefoot in the snow, uphill both ways, so that sort of thing probably doesn’t apply today.

  8. PS says:

    The lack of antibody validation is a problem of huge proportions. Just look at this paper:
    (Proteomics. Tissue-based map of the human proteome.)
    The antibody validation is risible at best (spotted array), and their own westerns invalidate quite a few of the antibodies that I have looked at. Yet this has been no barrier to publishing a paper in Science.
    @SP Having a resource ID will help only if companies maintain consistent quality. My experience with SCBT and Abcam shows that there is big batch-to-batch variability.

  9. anonymous says:

    GPCR ab are particularly crappy & widely abused. I prefer Westerns as the first test of their potential utility. GPCRs are glycoproteins & therefore appear as wide, fuzzy bands on Western blots. If you don’t see this in a recombinant (+ the signal disappears in the nontransfected parental), the ab is junk. If a figure in a paper shows a GPCR as a sharp band on a Western, the ab data is beyond suspect.
    See Fig 5 in Gu & Schonbrunn, 1997 (no firewall; Mol Endocrinol. 1997 May;11(5):527-37)- that is what a GPCR looks like in a Western.

  10. Virgil says:

    There’s a reason why a certain vendor is affectionately referred to as “SantaCrap”, right!

  11. johnnyboy says:

    @10: it used to be that Santa Cruz was the one company you knew to stay away from. Now the problem is that there are 10+ companies as crappy as them (who are probably just reselling the SC stuff).
    My rule of thumb is to stay away from the companies that offer antibodies to essentially every protein known to man. The best companies (eg Cell Signaling) have much smaller catalogues, and for a reason.

  12. watcher says:

    Yeah, well, it’s a lot like generic drugs, which are “identical”. . .but they are not. And now pharmacies can force the use of a generic even when the patient asks for a brand name and is willing to pay more for the original.

  13. NJBiologist says:

    @10 Virgil: That would be the speed-of-delivery issue… if your business model depends on being first-to-market for every new protein, and you get there by selling the first antisera/hybridoma that shows any detectable binding in any measure (western, flow, IHC), then your catalog will be an enriched collection of questionable products.

  14. RickW says:

    I think it goes without saying that you have to validate your reagents for your specific assay. With monoclonals you have to understand the science behind them as well. What was used as antigen to elicit them? For proteins, do they recognize conformational vs. linear epitopes? I suspect that has a lot to do with the “less than 50%” result. There are many perfectly good monoclonals that don’t work on fixed sections.

  15. enzgrrl says:

    This is why we run controls. Lots of them.

  16. assaybuilder says:

    @14 RickW: 100% this.
    I have seen a reasonably nice car’s worth wasted on a monoclonal that reliably recognized a denatured protein. Native form? Not a twitch. Turns out the vendor had used the same batch of antigen for both immunisation and reference material, and it contained some of the denatured stuff… Also: that fancy new sepsis marker? It was actually hemoglobin all along.
    Purity and Identity are as important in biology as in chemistry. Sometimes more difficult.

  17. steve says:

    @2- Sorry, I have no idea what you’re talking about.

  18. anon2 says:

    Yes!! SantaCrap, I completely forgot that is how we referred to those guys.

  19. A very pertinent article; thanks, Derek.
    People tend to miss out on a very important sector when discussing antibodies: the technology transfer offices (TTOs) who bring the antibodies out from labs to the commercial suppliers.
    Picking up from this article: “so even if you buy half a dozen antibodies against the same protein from different outfits, you may have only bought two. Or one. Who knows?”
    We are trying to tackle exactly this: if we could link inventors, publications, and institutes to each antibody, at least some of these issues could be resolved.
    If this is of interest to you please visit us on

  20. Emjeff says:

    Apart from the quality issue, this points to a different issue: namely, that basic scientists have a very hazy idea of what validation of an assay entails. Look at the amount of data required to validate a standard small-molecule LCMS assay (where the samples are cleaner and the method of detection is straightforward and predictable) and try to ask for something similar from a bench pharmacologist. You will be met with a blank stare.

  21. wlm says:

    @20, Emjeff
    I’m a bench biochemist, but I’ve also done some forensic toxicology, and I know what you mean about the differences in validating assays in the two fields. And I agree that biologists should generally be more careful in that regard.
    However, an important difference in the two situations is that the result of a validated small molecule LCMS assay should be dispositive on its own. The result of, say, a Western or immunocytochemical experiment *should* be just one of a number of lines of evidence.

  22. Dennis Discher says:

    Mass Spectrometry (MS) can be an independent validation tool for target selectivity and even specificity. We are a Cell Biology & Biophysics lab (not a high-throughput proteomics lab), and we try to validate any Ab of concern using MS methods. MS is increasingly available to everyone and at costs similar to buying Ab’s.
    One preferred method is ‘IP-MS’ (immunoprecipitation MS) as illustrated in Fig. 6D,E of Swift et al. (SCIENCE VOL 341, 30 AUGUST 2013, 1240104) for a common nuclear protein and two transcription factors.
    Step-1: usual IP with Ab.
    Step-2: run SDS-PAGE gel on IP.
    Step-3: cut out MW ranges of interest (based on immunoblots and predicted MW of target) as narrow or broad bands and do standard MS preps (of peptides generated by the usual in-gel trypsinization).
    Step-4: run MS and look for the claimed target of the Ab within the list of proteins detected.
    Step-5: assess specificity by the number of target peptides detected and their ion current, as well as by the presence and type of known binding partners (e.g. chromatin factors might come down with transcription factors, but mitochondrial proteins would be surprising and suspicious).
    Repeat if desired with increased (or decreased) stringency of the IP with different salts, detergents, etc. as is typical for IPs.
    Include, if desired, tissues or cells that have the target knocked out, overexpressed, or even just partially knocked down, which can also be used for quantitation of the decrease in level independent of Abs (Raab et al. JOURNAL OF CELL BIOLOGY 2012; Swift et al. NUCLEUS 2013).
    This can all take a few weeks or a month or two. We find close to 50% of MS-detected peptides are different between human and mouse, and so insight into species specificity can be obtained. MS also yields post-translational modifications of the target (eg. phosphorylation) in the analyzed cell type or tissue, and can be combined with proteolysis to at least partially identify the domains with epitopes recognized by the Ab (eg. Buxboim et al. CURRENT BIOLOGY 2014).
    If an Ab does not work for IP (which would make us suspicious), but it does work in Immunofluorescence and identifies particular organelle(s), then enrich or purify those organelles and analyze bands of the target’s predicted MW to make sure the target is detectable.
    MS is not merely for those doing massive proteomic studies. With MS getting cheaper and better, our experience shows that MS can and should be used (and eventually required we feel) as a target-specific analytical tool for validation and deeper insight into any Ab. References above and further details can be found via links to:
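    The triage logic in Steps 4 and 5 above can be sketched in a few lines of analysis code. The snippet below is a hypothetical illustration, not any lab’s actual pipeline: the hit-table fields, the minimum-peptide threshold, and the example protein names (LMNA as a claimed nuclear target, ATP5A1 as a suspicious mitochondrial co-precipitant) are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Hit:
    """One protein identified by MS in the IP pull-down."""
    protein: str
    peptides: int       # number of distinct peptides matched
    ion_current: float  # summed ion current, a rough abundance proxy

def assess_ip(hits, target, min_peptides=2):
    """Step-4/5 triage: was the claimed target found with enough
    peptide support, and what fraction of total signal does it carry?

    Returns (target_found, target_share_of_ion_current).
    """
    total = sum(h.ion_current for h in hits) or 1.0
    t = next((h for h in hits if h.protein == target), None)
    found = t is not None and t.peptides >= min_peptides
    share = (t.ion_current / total) if t else 0.0
    return found, share

# Illustrative pull-down: the target dominates, one plausible binding
# partner comes along, and a mitochondrial protein shows up weakly.
hits = [
    Hit("LMNA", 12, 8.0e6),
    Hit("HIST1H4A", 3, 1.5e6),
    Hit("ATP5A1", 1, 2.0e5),
]
found, share = assess_ip(hits, "LMNA")
```

    A low target share, or a target that misses the peptide threshold entirely, would flag the antibody for the repeat-with-different-stringency and knockdown controls described above.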
