Here’s more on the problems with non-reproducible results in the literature (see here for previous blog entries on this topic). Various reports over the last few years indicate that about half of the attention-getting papers can’t actually be replicated by other research groups, and the NIH seems to be getting worried about that:
The growing problem is threatening the reputation of the US National Institutes of Health (NIH) based in Bethesda, Maryland, which funds many of the studies in question. Senior NIH officials are now considering adding requirements to grant applications to make experimental validations routine for certain types of science, such as the foundational work that leads to costly clinical trials. As the NIH pursues such top-down changes, one company is taking a bottom-up approach, targeting scientists directly to see if they are willing to verify their experiments. . .
. . .Last year, the NIH convened two workshops that examined the issue of reproducibility, and last October, the agency’s leaders and others published a call for higher standards in the reporting of animal studies in grant applications and journal publications. At a minimum, they wrote, studies should report on whether and how animals were randomized, whether investigators were blind to the treatment, how sample sizes were estimated and how data were handled.
The article says that the NIH is considering adding some sort of independent verification step for some studies – those that point towards clinical trials or new modes of treatment, most likely. Tying funding (or renewed funding) to that seems to make some people happy, and others, well:
The very idea of a validation requirement makes some scientists queasy. “It’s a disaster,” says Peter Sorger, a systems biologist at Harvard Medical School in Boston, Massachusetts. He says that frontier science often relies on ideas, tools and protocols that do not exist in run-of-the-mill labs, let alone in companies that have been contracted to perform verification. “It is unbelievably difficult to reproduce cutting-edge science,” he says.
But others say that independent validation is a must to counteract the pressure to publish positive results and the lack of incentives to publish negative ones. [Elizabeth] Iorns [co-founder of Science Exchange] doubts that tougher reporting requirements will make any real impact, and thinks that it would be better to have regular validations of results, either through random audits or selecting the highest-profile papers.
I understand the point that Sorger is trying to make. Some of this stuff really is extremely tricky, even when it’s real. But at some point, reproducibility has to be a feature of any new scientific discovery. Otherwise, well, we throw it aside, right? And I appreciate that there’s often a lot of grunt work involved in getting some finicky, evanescent result to actually appear on command, but that’s work that has to be done by someone before a discovery has value.
For new drug ideas, especially, those duties have traditionally landed on the biopharma companies themselves – you'll note that the majority of reports about trouble with reproducing papers come from inside the industry. And it's a lot of work to bring these things along to the point where they can hit their marks every time, biologically and chemically. Academic labs don't spend too much time trying to replicate each other's studies; they're too busy working on their own things. When a new technique catches on, it spreads from lab to lab, but target-type discoveries, something that leads to a potential human therapy, often end up in the hands of those of us who are hoping to be able to eventually sell it. We have a big interest in making sure they work.
Here’s some of the grunt work that I was talking about:
On 30 July, Science Exchange launched a programme with reagent supplier antibodies-online.com, based in Aachen, Germany, to independently validate research antibodies. These are used, for example, to probe gene function in biomedical experiments, but their effects are notoriously variable. “Having a third party validate every batch would be a fabulous thing,” says Peter Park, a computational biologist at Harvard Medical School. He notes that the consortium behind ENCODE — a project aimed at identifying all the functional elements in the human genome — tested more than 200 antibodies targeting modifications to proteins called histones and found that more than 25% failed to target the advertised modification.
I have no trouble believing that. Checking antibodies, at least, is relatively straightforward, but that's because they're merely tools – they find the things that point towards what might become new therapies. It's a good place to start, though. Note that in this case, too, there are commercial considerations at work, which do help to focus things and move them along. They're not the magic answer to everything, but market forces sure do have their place.
The big question, at all these levels, is who's going to do the follow-up work and who's going to pay for it. It's a question of incentives: venture capital firms want to be sure that they're launching a company whose big idea is real. The NIH wants to be sure that it's funding things that actually work and advance the state of knowledge. Drug companies want to be sure that the new ideas they want to work on are actually based in reality. From what I can see, the misalignment comes in the academic labs. It's not that researchers are indifferent to whether their new discoveries are real, of course – it's just that by the time all that's worked out, they may have moved on to something else, and it might all just get filed away as Just One Of Those Things. You know, cutting-edge science is hard to reproduce, just like that guy from Harvard was saying a few paragraphs ago.
So it would help, I think, to have some rewards for producing work that turns out to be solid enough to be replicated. That might slow down the rush to publish a little bit, to everyone's benefit.