

A Very Public Retraction. Good.

Word came out yesterday from Frances Arnold of Caltech (via her Twitter account) that she and her co-authors are retracting this paper from Science. The retraction notice itself has the details:

After publication of the Report “Site-selective enzymatic C‒H amidation for synthesis of diverse lactams” (1), efforts to reproduce the work showed that the enzymes do not catalyze the reactions with the activities and selectivities claimed. Careful examination of the first author’s lab notebook then revealed missing contemporaneous entries and raw data for key experiments.

My first thoughts on reading this were (1) Oh dear, and (2) Good for Prof. Arnold. It’s of course not good that this work slipped through, but in a follow-up tweet she admits being distracted when this one was submitted (that would be the Nobel award), and that she did not do her job well. To be honest, I’m glad to see such a forthright admission and I wish that more people would follow the example. Lab heads who are willing to step up like this should be applauded, and seeing it happen in a Nobelist’s research group should be yet another example of why the Royal Society was right to make their motto Nullius in verba (on no one’s word). We don’t argue from fame or authority in this business; we argue from reproducibility of results. It’s good to see interesting results from a reliable lab, because that makes you think that the stuff will be more likely to work when you try to reproduce it. (The first link in this post will show that this is far from the first time that a Nobel laureate has retracted a paper!)

This sort of thing (someone inventing or overselling their work and trying to run it past a busy academic PI) is unfortunately an old story. I have personally seen it happen, and I’ll bet that there are several readers here who have stories as well. Sometimes it gets caught, and sometimes it doesn’t. And even worse, when something like this actually makes it through, sometimes the paper gets retracted and sometimes it doesn’t. This one clearly deserved to be pulled, and I’m glad that it has been. But there are plenty more papers that deserve the same treatment and for similar reasons.

The numbers are bad enough when you count up outright fraud like what this looks to have been – missing raw data, blank notebook entries, etc. The revelations over the last few years about the number of faked-up Western blots and the like have been a good thing. But there’s a lot of stuff out there with notebooks and data that still just doesn’t quite reproduce. “Sorta kinda works sometimes” should not be the standard for publication – a project like that needs to go back in the oven rather than being written up for a journal. Sadly, those are even less likely to ever be retracted. At least in a case like this one there’s clear malfeasance and a clear failure to catch it, both of which can and should be acknowledged. But who’s going to send out a notice that their Novel General Powerful synthetic method from a year or two back doesn’t quite live up to any of those adjectives, on further reflection? That New Ligand X only works on alternate Tuesdays in months without an R in them and that the paper on it was Frankensteined together accordingly? Or that some other papers really should be asterisked somehow because their conclusions are built on wonky cell lines, inappropriate chemical probes, or antibodies that no one can get anymore?

 

45 comments on “A Very Public Retraction. Good.”

  1. Anon says:

    “enzymes do not catalyze the reactions with the activities and selectivities claimed. Careful examination of the first author’s lab notebook then revealed missing contemporaneous entries and raw data for key experiments.”

    So how did they generate the original data?

    1. AM says:

      Pretty easily. Balance manipulation is the easiest explanation for changes in catalytic efficiency and selectivity. Not reporting a purification step is also an easy way to manipulate selectivity. You could even falsify those in your notebook, and when asked months later where the compound was, say “it must have degraded” or “someone threw it away.”

      When you’re in grad school and your graduation is on the line depending on a publication, it gets pretty tempting. I’m glad I never succumbed to that dishonesty but I had a labmate who did and got kicked out. The PI would have missed the subtleties but we luckily had an eagle-eyed post-doc in the lab who caught it.

      1. Anon says:

        It says the authors contributed equally. I wonder if other papers from the same authors are under investigation.

        1. loupgarous says:

          That the other authors themselves tried to repeat Cho’s results and found them irreproducible in-house shows they were at least awake to the potential for error and/or misconduct. That it happened post-publication was embarrassing, but when they found out, they were prompt to take responsibility and retract.

          That said, review of the other authors’ work would be indicated, just as a check – and not an impeachment of their own work.

      2. Bob says:

        Had a similar experience during my postdoc: a project based on a couple of big papers from the lab (JACS-type journal), with excellent published results from two generations of students ago (meaning only one student from that era remained in the lab, and they were not directly involved in generating these particular results). After fighting for a few months to get good results on my side and getting nowhere, I decided to repeat the published reactions, and much to my surprise got very inferior results, akin to the system does “not catalyze the reactions with the activities and selectivities claimed”. When I confronted the people who were present at that time but not directly involved, the reaction was along the lines of “well, we never tried to reproduce it after X and Y left, but everyone in the lab knows it’s not really reproducible. Knowing X and Y, no one is surprised; we never saw them run a column, for example, and we don’t know where their HPLC traces are.” The PI’s reaction was “meh, close enough, don’t make such a big deal out of it” on seeing my 15-25% e.e. discrepancy, depending on the day, from the published results under conditions identical to what was reported.

        Talking to people from other labs in the same area of enantioselective catalysis later made me realise that basically everyone is using some kind of “dirty trick” to polish their results: unreported workup procedures for catalytic reactions, unreported special tricks to get highly active catalysts, or analysis conditions that don’t match the reported ones but are “basically the same.” And since it’s gotten so competitive to publish in that area, no one can be honest about it anymore if they want publishable results. That was what made me lose a huge chunk of my faith in science and pushed my decision to leave academia for good. So seeing a retraction by such a high-profile PI for essentially the same practices that I came to detest and ran from in academic labs gives me a little bit of hope back.

        P.S. Of the students involved in those original studies, X and Y are now all PIs themselves (2 of them) or industry scientists in big pharma (2 of them)…

        1. NMH says:

          “P.S. Of the students involved in those original studies, X and Y are now all PIs themselves (2 of them) or industry scientists in big pharma (2 of them)…”

          In science, cheating gives you a huge pay off. If someone doesn’t make it to faculty or a pharma job then you know for sure they did not commit fraud. Anybody else, you cannot be 100% sure.

        2. robocop says:

          NAME AND SHAME! NAME AND SHAME!

        3. Nick K says:

          “Of the students involved in those original studies, X and Y are now all PIs themselves (2 of them) or industry scientists in big pharma (2 of them)…”

          How profoundly depressing. Dishonesty clearly pays.

  2. Anon says:

    About your statement “Sometimes it gets caught, and sometimes it doesn’t”: don’t you think that, when working on an important reaction to make beta-lactams (with potential applications in antibiotics), some due diligence would have saved the embarrassment?

  3. Azetidine says:

    As Derek likes to say in this blog, “Falsus in uno, falsus in omnibus”. She should have her Nobel revoked.

    1. Emjeff says:

      That is a bit unfair, especially when she has been so upfront about it. If she had concealed it and got caught three years later, I would agree with you, but the fact that she immediately announced the retraction and took responsibility suggests to me that she just had a bad apple in the lab.

    2. Silicon says:

      That’s an insane assertion. One instance of overlooking a paper and sending it to publication does not erase previous contributions to a field. People are allowed to make mistakes and be forgiven when they own up to them. It’s not like she let it drag on for years and allowed the community to pour research dollars and effort into something that doesn’t work. It appears as though the mistake was realized, the cause identified, and then the publisher notified.

      “To make mistakes or be wrong is human. To admit those mistakes shows you have the ability to learn, and are growing wiser.”

      1. li zhi says:

        “Who watches the watchers?” The main problem (imho) with a comprehensive monitoring system would be its huge use of time, energy and talent without any direct benefit. There needs to be a 3rd line on every notebook page. Last I checked (some yrs ago) the author and a witness signed the notebook. We could do well with a reviewer as well (signing on the 3rd line).
        Each notebook author should be required to spend 1 or 2 hrs a week reviewing (and signing) the notebook pages of randomly (but firmly and transparently) assigned peers. It wouldn’t be a comprehensive review, just a spot check. The reviewer would create notes/entries/check-lists for each page reviewed (watching the watcher), and these would be sent to both the notebook author and the PI. For the system to work, check-lists would probably do the job (or at least provide the framework). But the follow-up/response by both the author and the PI (e.g. explaining review inaccuracies, or asking for immediate details or remediation within a short time limit) would require some discipline. The PI’s response, once the dust has had a few days (optimistically) to settle, would also have to be recorded within a set time. (And there are all sorts of legal and IP issues that need addressing, formally, in a contract with binding arbitration by "the administration".) Reviews would become property of the institution (not the PI) and be confidential. I have no idea if this would blow up into an acrimonious mess or be ignored and fade away, but hey, at least it’s constructive and (as I envision it) would require limited resources if done 'right'. IMHO, academic scientists need to be held to a requirement to spend some time "pro bono" on lab reviews, something like this…
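        (Purely as an illustration of the "randomly but firmly and transparently assigned" part, here is a minimal Python sketch with hypothetical names, not anything from the comment above; seeding the shuffle with the week number keeps the draw reproducible and auditable, and the cycle construction guarantees no one reviews their own notebook.)

        import random
        from datetime import date

        def weekly_assignments(authors, week=None):
            # Map each notebook author to a peer reviewer for the given ISO week.
            week = week if week is not None else date.today().isocalendar()[1]
            rng = random.Random(f"lab-review-{week}")  # transparent, reproducible seed
            order = authors[:]
            rng.shuffle(order)
            # Each person reviews the next one in the shuffled cycle, so no self-reviews.
            return {order[i]: order[(i + 1) % len(order)] for i in range(len(order))}

        print(weekly_assignments(["Ada", "Ben", "Cara", "Dev"]))  # hypothetical lab members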

    3. PUI Prof says:

      Was her Nobel based on this work?

      1. loupgarous says:

        Arnold’s Nobel was awarded in October 2018. Cho’s paper (the one which had to be retracted) was published May 10th, 2019. Unless other irreproducible results are found (an audit of the group’s previous work would be a very good idea at this point), I don’t see how Cho’s conduct in reporting the beta-lactam synthesis could have anything to do with Arnold’s work for which she got the Nobel.

        David Baltimore has been involved in two scandals: one surrounding allegations of misconduct by co-worker Thereza Imanishi-Kari, which led to a Federal investigation (politicized by Rep. John Dingell) that ended in the exoneration of Dr. Imanishi-Kari by an HHS appeals panel, and another involving a series of allegations regarding Luk Van Parijs, a former associate professor of biology at the Massachusetts Institute of Technology (MIT) Center for Cancer Research. These scandals involved NIH-funded work resulting in research papers on which Baltimore was a co-author with Imanishi-Kari and Van Parijs, in which errors were found and remarked upon.

        Baltimore, as a co-author of the work of these former co-workers of his, also bore a co-author’s responsibility for what was in the papers – but no one suggested he was proximately responsible for the misconduct. David Baltimore is a Nobel laureate, has been president of Caltech and Rockefeller University, was president of AAAS in 2007 and is generally regarded as having made key contributions to immunology, virology, cancer research, biotechnology, and recombinant DNA research.

        Nobody’s talking about revoking Baltimore’s Nobel prize. I don’t think it’s appropriate to do so for Frances Arnold, either.

    4. Lawrence says:

      Considering that her Nobel prize was based partly on her discoveries in the 90s, and not at all based on this 2019 article, that would be just a little unfair, and set a very bad precedent.

    5. Comeon says:

      That’s idiotic. Have you ever made a mistake in your life? To quote another: he who casts the first stone…

    6. loupgarous says:

      “Falsus in uno, falsus in omnibus” cannot apply to the work for which Arnold was awarded the Nobel prize unless similar misconduct is proven to have occurred during that work. Nothing of the kind has been reported.

      Revoking Frances Arnold’s Nobel for failing to catch a crucial error in the paper’s primary author’s work – when she personally tried to reproduce that paper’s findings, failed, and then advised the journal of that fact – would be punishing Dr. Arnold for taking responsibility for a regrettable error that was not culpable research misconduct on her part, and for bringing irreproducible results by another member of her group to the attention of the scientific community on her own initiative.

      As a co-author of that paper, she was by definition responsible to a degree for the paper’s contents, but she was not its primary author. Arnold’s prior work, for which she was awarded the Nobel prize, is unaffected by this.

    7. x says:

      If you want to incentivize covering up and keeping mum and punish retraction of invalid papers, then definitely do this.

  4. Charles H. says:

    I disagree that flaky reactions should not be reported. It’s true that they don’t deserve the same level of respect, but they point to an area that needs further study.

    In fact, degree of uncertainty in the results should be a major item in reports on each reaction. It would probably yield valuable pointers as to where to look for catalysts and poisons thereof.

    OTOH, I’m well out of my field here, being basically a programmer.

    1. AR says:

      The idea behind your run of the mill synthetic methodology paper is fixing flaky reactions and finding the determining factors while doing so.

      1. Lambchops says:

        True, but Charles is right about uncertainty. The organic chemistry literature is pretty woeful in terms of reporting uncertainty.

        When reading run of the mill methodology papers I pretty much assumed that the reported yields and enantioselectivities were based on the best run of the reaction unless it was stated otherwise (most of the time it isn’t).

        Reporting these results as an average of three runs with a range wouldn’t be difficult and might provide some insight into potential areas to tweak further or into reaction mechanism. Chemists hate uncertainty (and justifiably so!) but it’s always going to exist and may tell us something useful. Plus (where possible) I think more could be done to report whether any variability in yield is due to the reaction itself or the work up procedure.
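        (A rough, purely illustrative sketch, with made-up numbers that are not from any paper: reporting the mean of three runs along with the observed range is only a few lines of Python.)

        from statistics import mean

        def summarize(runs):
            # Return the mean and the observed range of replicate measurements.
            return mean(runs), min(runs), max(runs)

        yields = [82.0, 78.5, 85.0]  # % isolated yield, three hypothetical runs
        ees = [91.0, 88.0, 93.5]     # % e.e., three hypothetical runs

        y_mean, y_lo, y_hi = summarize(yields)
        e_mean, e_lo, e_hi = summarize(ees)
        print(f"yield: {y_mean:.0f}% (range {y_lo:.0f}-{y_hi:.0f}%)")
        print(f"e.e.: {e_mean:.0f}% (range {e_lo:.0f}-{e_hi:.0f}%)")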

        1. Drp says:

          Not that I don’t think it’s a good idea, but in reality, doing all your examples in triplicate when they want 50 of them with isolated yields, you’re the only one doing them, and you don’t have an auto column, could take a decent part of 6 months.

          Maybe do a few in triplicate and, if they are reproducible, stick with it?

  5. luysii says:

    Along with the very welcome attempts at reproducing work (beginning in psychology which sorely needed it) comes a new problem — null hacking. The same pressures that lead original investigators to slice and dice their data until they find something ‘significant’ (e.g. p hacking) exist for reproducers, who are likely to get a more widely published (and read) paper if they take down an existing paper than if they don’t. For more on this read — Proc. Natl. Acad. Sci. vol. 116 pp. 25535 – 25545 ’19 — or if you don’t have a subscription have a look at https://luysii.wordpress.com/2019/12/22/null-hacking-reproducibility-and-its-discontents-take-ii/

  6. mark says:

    I am always surprised to read such things. When potentially good or great results come in, doesn’t the PI ever think about getting another person in the lab to verify them? Any experienced PI should do this; not doing so is irresponsible and, to me, calls into question the veracity of every piece of work the PI has published, as they just aren’t careful experimentalists.

    1. world war 3 says:

      It was “the obvious next step” kind of work, based on what that lab had published up until that point. Skepticism is useful in science, but a constant level of “I am not going to trust your results until I verify them myself” skepticism ends up wasting so much time.

    2. kriggy says:

      It’s a good point, but the environment in academia doesn’t really support this. Everyone is pushing their own work to get papers and therefore funding, so it’s hard to convince other lab members to take time away from their project to do a reproducibility study on your work.

    3. Bob says:

      I was shocked to learn how rare it was in the labs of people I know. Basically no one I know ever did this, and people looked at me suspiciously when I asked them to reproduce my results in my postdoc lab.

      My PhD supervisor had it as a rule that before you publish anything from the lab, you give the reaction to someone external to the project to reproduce without any input from you other than the written procedure. If the results were within an acceptable range (i.e. within 5% on yield and 1-2% on e.e., if applicable), you send it for publication. If it falls outside of that, either the procedure is not written properly or some information is missing, and you correct it; or something else is causing the irreproducibility, and you have to figure it out before sending it, or at least warn people of something to look for if they try to reproduce it.

  7. NMH says:

    R1 research faculty: ALWAYS distracted, and never doing their job well (at least the job of overseeing a lab). This one was caught, but think of all of the individuals who post-doc’d at Caltech, Berkeley, Stanford, MIT, and Harvard (e.g., where you need to post-doc to get a decent job) who faked data and got through to a good job…..

  8. Christophe Verlinde says:

    Here are the specifics: shared first authorship AND shared inventors.
    Do I have to say more?

    Inha Cho*, Zhi-Jun Jia*, Frances H. Arnold†

    Author contributions: I.C. and Z.-J.J. designed the research and performed the experiments with guidance from F.H.A. I.C., Z.-J.J., and F.H.A. wrote the manuscript. Competing interests: I.C. and Z.-J.J. are inventors on patent application CIT-8187-P submitted by the California Institute of Technology that covers the intramolecular and intermolecular enzymatic C–H amidation.

    1. anon says:

      Wouldn’t have been the first time academic research fleeced a big pharma company with fake results. Dirtis, anybody?

    2. milkshake says:

      I have observed, with two of my previous bosses, a remarkable lack of research integrity and personal integrity as PIs. They were ambitious, seized the opportunity early on in their careers, and pushed ahead with drug-development-related research programs that did not work according to their hopes. Apparently the projects were underfunded, and for the continued funding and for the potential commercialization of the research, it was necessary to exaggerate the results and to create a “robust intellectual property portfolio” – to patent all kinds of stuff. At some later point, there were manipulations or even outright fabrications of animal data. They did not start out that way, setting out to become cheats; they were bona fide talented scientists and able leaders, but over time their research integrity got bent because it was getting in the way of their aims and goals. And I think writing those wildly exaggerated patent claims and prophetic examples, operating on wishful thinking and trying to cover the whole sky with imagined inventions, is where they stopped being scientists and turned into hustlers. That, and giving frequent business pitches for the benefit of unsophisticated investors.

  9. Andy II says:

    Just confused. “efforts to reproduce the work showed that the enzymes do not catalyze the reactions with the activities and selectivities claimed.” The Science pub shows a gram-scale prep of the beta-lactam 2s (1.6g) as a solid with 92% ee by HPLC (Fig 2B). With 1.6 grams of material, it would be easy to characterize the product, wouldn’t it? So, did the enzyme make the product with much lower efficiency or not at all?

    1. Anon says:

      There are so many questions that need to be answered, but sadly no one will answer them. They will only congratulate her for the retraction.

  10. dearieme says:

    Is it easier to make the decision to retract once your Nobel is in the bag?

    Or would most Nobelists have much too grand an opinion of themselves to stoop to a retraction?

    My guess is that there’s far, far more dishonesty in science than anyone wants to admit, though I’ll grant that much of it presumably stops short of simply fabricating results. I’ll admit to putting absurd “research milestones” into a grant application because I was required to enter something on the form – and felt that writing in “this is research, for God’s sake, how the hell do I know what I’m going to discover and when?” might not be welcomed. But I did resent being required to indulge in essentially dishonest anticipatory bragging. Consequently as far as possible I pursued academic research that needed little external funding from government bodies.

    Anyway, it’s hard to think of a more foolish way to respond to this retraction than cancelling a Nobel award. Maybe capital punishment?

  11. Dionysius Rex says:

    This morning I am seeing a lot of withdrawals/retractions from the groups of Prof Dannenberg and Prof Subbaramaish at Cornell Medical College – with a variety of additional authors…..

    Anyone have any insights?

  12. Mark says:

    The usual way in which work is reproduced in a group is that the PI gets funding to continue the research, employs a new post-doc, who has to be able to do what the previous person did, in order to extend the project. And this is why unreproducible results are so, so appallingly damaging. How many highly competent post-docs have trashed their careers by unwittingly stepping into an unreproducible mess where the previous worker has vanished in a puff of smoke, with inadequate or missing lab-books and raw data? It’s not just the loss of the grant funding that hits the public – it’s the loss of that scientist, who either gives up their career in despair, or finds that with 2 more years on the clock and no paper to show for it, they’re unemployable.

    1. Yep. says:

      Happened to me. The PI never did believe me, either. I gave up and got a job at a large biotech that doesn’t check references as a matter of policy. The project continued. According to the person slated to be victim #3, who saw my name on some old lab notebooks and tracked me down, victim #2 left science entirely. Mercifully, that PI is now out of grant money.

    2. Milsteiner says:

      It is okay. Just keep postdocing until you get it right.

    3. jay says:

      I had to reproduce a previous postdoc’s work, wherein that person was reporting the DMSO peak in the HPLC as the drug peak. It took me a couple of minutes to confirm the observation, but almost a week to convince my PI. And the previous postdoc is now faculty in the PRC… yay

  13. Adonis says:

    This event lays bare an inescapable fact: modern science is firmly in the hands of its least-trained members, students and postdocs. I have yet to meet a PI doing lab work these days. What the PI, and ultimately the broader community, sees is a highly distilled (and often distorted) version of the reality those members generate. A detail to illustrate this point is her comment that she was "unusually busy" when the paper was submitted. Busy with what? Obviously not research.
    It is time to move back to the old model of a scientist with a few apprentices. So long as we stick with the megalab model, where the PI is busy travelling, consulting, grant writing, and doing everything else but actual research, we will have events like this.

    1. NMH says:

      I disagree… grad students and post-docs are the best trained at running experiments. Every time my advisor has entered the lab to do an experiment, he has made some kind of mistake rendering his results irreproducible. Furthermore, the aforementioned hands are usually the ones who can figure out how to put the work into some kind of narrative that can be published, not the advisor.

      The problem is that the best-trained members (above) have crappy jobs they are trying to escape. You can do that if you are 1.) lucky, 2.) brilliant, or 3.) a cheat. I really believe that our modern faculty is composed of individuals equally representing these three categories.

      1. Adonis says:

        Well, we sort of agree and disagree at the same time. Yes, many PIs are all thumbs, but that is because they spend their time doing everything but research. I am in the camp that thinks the Nobel Prize is an outdated concept: in the megalab model, you will have a hard time convincing me that all the ideas came from the PI and that the students and other staff had nothing to do with them. I am also in the camp that thinks if you live by the sword, you die by the sword. In other words, if you are going to pick up all the accolades when things work out, you will take all the punishment when they don’t. If she fired the student, she should be fired herself.

        1. yf says:

          Research has long since become a business for universities and medical schools. Most PIs are simply small-business owners competing within a finite pool of grants.
