The open-source program that I use for literature management (Zotero) rolled out a feature not long ago that I didn’t realize it had. A red banner appeared across the top with a notice that a paper in one of my collections had been retracted. That’s pretty handy: a red X now appears next to the paper, and it’s also been filed into a new folder that the program created (“Retracted Items”). I was naturally curious to see which paper I’d found interesting that was now being pulled, and was surprised to find that it was a high-profile publication from the Crews lab at Yale on protein degradation.
They had reported a powerful variation on the technique last year: instead of targeting proteins in the cytosol with bifunctional molecules that cause them to be ubiquitinated, they described a system that caused extracellular proteins to be actively taken up and degraded. That would be quite interesting and useful – there are a lot of circulating proteins that one could imagine being therapeutically cleared in this way, across several major disease classes. The Crews group’s so-called ENDTAC molecules were said to hijack the endosome/lysosome pathway, and this was demonstrated with the uptake and removal of a GFP-fused target protein.
Well, apparently not. The retraction notice says that they cannot reproduce these findings and are thus pulling the paper. That’s a good thing, and exactly what should happen. But it says something about the state of the literature that it’s notable when it does. There are any number of un-retracted papers out there, some of them from pretty prominent labs, whose reported results are partly or even completely non-reproducible. Most active researchers will have encountered some; this is not a phenomenon that’s peculiar to any one branch of chemistry (or any one branch of science in general).
Like all retractions, there is surely a story behind this one. It’s interesting that there had been (as far as I can see) no publications calling the original paper into question. It only appeared last May, so the overall turnaround time was quite quick by scientific-literature standards. The usual way that such hot results fall apart is for other people to jump on the new method and fail to get it to work, but that can happen inside the original research group, too. There have been a number of painful cases over the years where it turned out that the amazing results only seemed to happen when one particular grad student or post-doc was doing the experiment, and not because they had such good experimental technique, either. When you isolate the variables, sometimes you isolate ones that you weren’t counting on (!)
There are honest mistakes too, of course, but the process of discovering and writing up some new and interesting result is supposed to shake those out before a paper gets published. At some point deliberate fraud actually becomes a better explanation than the level of sloppiness needed otherwise, but of course one would rather avoid having to reach for either one. I’m sure this was a painful experience for Prof. Crews and his lab. But I’m glad that they took this action, and whatever it was that happened, I would assume that steps have been taken to keep it from ever happening again.