

Stand By Your Data With Some Cash

Reproducibility in the scientific literature has been a big issue for some time now, and it’s not going away any time soon. There are arguments and counterarguments about how much of the literature is not reproducible, how reproducible the attempts to reproduce it are, what the standards should be for such efforts, and how much the problem might vary by scientific discipline. I’m not going to attempt to link to all the relevant articles (and all my past blog posts) on this topic right now – art is long and life is short (see, I managed to quote that in English rather than Latin).

But what I do want to do is call attention to a proposed solution from Michael Rosenblatt, who is Chief Medical Officer at Merck. See what you think about this one:

Here is the essence of the proposal: What if universities stand behind the research data that lead to collaborative agreements with industry, and what if industry provides a financial incentive for data that can be replicated? Currently, industry expends and universities collect funding, even when the original data cannot be reproduced. In the instance of failure, collaborations dissolve, with resulting opportunity loss for both academia and industry. But what if universities offered some form of full or partial money-back guarantee? With such assurance, companies could proceed with a project more rapidly and more frequently. They would also be likely to pay a premium over current rates for data backed by such assurance over “nonguaranteed” data, even from the same university. This approach places the incentive squarely with the investigator (including his or her laboratory) and the institution—precisely the leverage points for change. The premium would provide universities with the financial wherewithal to cover the cost of affirming their data if they choose to replicate it before entering into a collaboration.

That might work, but I can certainly see universities offering some resistance to the idea, since it places the onus right on them. The expense will be real, while the promise of revenue to make up for it remains to be seen. This recalls the 2011 discussions about venture capital firms and academic science. That turned into a “Don’t you trust me?” sort of argument, with some VCs being pretty hard-nosed about testing things, and others finding the whole idea to be against the deal-making spirit of collaboration. I lined up at the time (and still do) more towards the hard-nosed end. Biopharma deals are about data, and nullius in verba, folks, take no one’s word for anything and check it again.

I’d like to see Rosenblatt’s proposal tried, but if you’re going to wait for the universities to try it, you’re going to have a long wait. So the way to do it is probably from the industrial end. You could try putting in that money-back-guarantee language and see what the tech transfer offices think of it, but I’m pretty sure I can guess. If it’s structured as “this better work or we take back the cash”, they’ll probably balk immediately. For psychological reasons, you’re probably better off making a low offer for the data and the idea as they come, with a much more generous one if it’s been reproduced first. Let them seek the carrot rather than fear the stick. Reproducible results are already worth a lot more in reality; we should adjust our pricing to reflect that.

Addendum: lest anyone think that I’m just bashing academic science here, I’m all for adding conditions like this to all-industrial deals as well. Some of them already come close to this, but why not put it right out there on the table? How would, say, the GSK-Sirtris deal have gone under such conditions, eh?

27 comments on “Stand By Your Data With Some Cash”

  1. Hap says:

    That would work better if you had a more robust internal research apparatus, but seeing as pharma has for the most part been trying to outsource discovery to universities, what is the leverage? If you don’t take a flier on the potentially crappy research, what are you going to develop products from?

    This would have been a better realization to have (and the mechanism for countering unreliable research would work better) before you ~~nuked~~ reduced your internal research capacities.

  2. Anon says:

    Be careful what you measure and incentivise: you might just get it, but not as you intended. Just ask Valeant’s shareholders!

  3. Anon says:

    Interesting idea, for Pharma to outsource all its research on the cheap, and then guarantee the quality by threatening to take back what little money it gave. Perhaps academia will wake up and realize that it can replace pharma altogether and sell the drugs it develops via its own network of hospitals.

    1. ab says:

      Except that only a small fraction of academics have the intellectual and financial resources to bring a drug to market, and those that can already do.

      1. Phil says:

        Academics that bring drugs to market don’t do it without partnering with a pharma company at some point in the development process. Derek had a post a while back on drugs initially discovered in academia, but I don’t think any of these was actually brought to market without involvement from big pharma.

        1. ab says:

          I don’t THINK so either, but I’m not certain that’s true in all of the cases from Derek’s prior post, and I just don’t feel like combing through all of them. But I do feel pretty confident in saying that academics who can, do.

  4. Anon says:

    As someone who works at a University, I can tell you that Universities also won’t like having to stand behind their investigators like that. The problem is asymmetric information: typically the investigator knows far more about their own field than anyone else at the University. So they are in the best position to judge the risk, but aren’t exactly unbiased.

    1. The asymmetric information point is a good one. I often think about biopharma in parallel with things like sports leagues and certainly an element of trades and free agency is that neither side has full knowledge, so there will always be a risk. In sports that seems baked in and, for example, if one team requested a money-back guarantee should a prospect in a trade not work out, I think they’d find themselves with few to no trade partners. There are occasional instances where a trade contains a clause with a hedge based on how well players perform and/or if they’re still on the roster in future years, but those seem to be more of an exception than a rule.

      But another problem with this pay-for-performance scheme is that it assumes scientists have far more control over the results they get and report from their research than they actually do. This gets back to the philosophical questions of where the replication crisis is coming from, but let’s say that 50% is due to less-than-rigorous science (which I think might be high). That still leaves 50% that’s due to the vagaries of working on biology. The recent paper on buying petshop mice and housing them in your animal facility to make your mouse immunological models work better is an example of this – how uncontrolled is that? So when a promising finding using that approach fails to replicate in Biopharma, it will be practically impossible to figure out why. Which means you’ll get lawsuits. Which seems like an even worse use of dollars than speculative bets on research in the first place.

      I agree there needs to be more rigor in how scientists are trained, how they use statistics, and how they report their findings. But I don’t think this lever is the right one and it’s not being applied at the right place.

      1. RM says:

        Yeah, your second paragraph was my initial thought too. If something like this ever took off, you would need to be *exceedingly* precise about what counts as “reproducible”.

        I mean, ignore academia for a moment and look just at industry. There are plenty of examples of potential drugs which looked promising (from internal tests) at first, and then fell apart under closer scrutiny. If a “hit” doesn’t pan out in more extensive tests, is that a failure of reproducibility? If I give you a list of results, each validated to the p=0.05 level, is it a failure of reproducibility that 1/20 comes back as a false positive? If there are unknown confounding factors (mouse strain, facility temperature) – so the results are valid, but not as general as might originally have been assumed – is that a failure of reproducibility?

        Also, who assesses reproducibility, and how, is a big deal. Can you just say you can’t replicate it? Do you need third-party validation that you’re replicating with exactly the outlined protocol? Do you have to have the protocol run by an independent lab to prove that an unbiased outsider can’t reproduce it? At a certain point, you’re going to be spending a heck of a lot of time and money to prove conclusively that something doesn’t work – well past the point where you normally would have abandoned it.

        1. Anon says:

          I can pretty much get any experiment to NOT work (just be sloppy or omit critical steps). A negative result doesn’t really prove anything. It’s simply a data point that suggests that the original theory is incorrect. In my opinion, using a negative result as leverage to back out of a deal is a horrible idea. Once a company loses interest in a project, then it can almost assuredly find negative results to argue for their money back.
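To put a number on RM’s 1-in-20 question above, here is a minimal back-of-the-envelope sketch (plain Python, no assay-specific assumptions) of how many results “validated” at p = 0.05 should be expected to bounce back as false positives even when everyone is honest:

```python
# If k independent true-null results are each "validated" at alpha = 0.05,
# how many false positives should we expect, and how likely is at least one?
alpha, k = 0.05, 20

expected_false_positives = alpha * k      # on average, 1 of the 20
p_at_least_one = 1 - (1 - alpha) ** k     # chance of seeing >= 1 false positive

print(expected_false_positives)           # 1.0
print(round(p_at_least_one, 3))           # 0.642
```

In other words, with twenty nominally significant null results, roughly a two-in-three chance of at least one fluke is the baseline before anyone does anything wrong.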

  5. watcher says:

    There certainly should be verification for deals involving big money (in particular)…and sometimes there is, in terms of milestones (for drugs). But as you are in your GSK-bashing mode, let me point out that company scientists at RTP did evaluate Sirtris’s compounds, finding their claims not to be substantiated or replicated. UPPER management wanted to believe the wonderful slide show and story, so they bought it anyway! Witty (and maybe the board) would have had to approve such a large sum, too. And who paid the price? Many hundreds of scientists were let go to make up for the money the deal cost; the UPPER management are all still GSK UPPERs.

    1. Anon says:

      So perhaps it’s the upper management that should be handing over their jobs and salaries if they ignore the advice of their scientists. Now we just need to get upper management to vote for that policy.

      Just like the academics agreeing to claw-backs, it’s turkeys voting for Christmas: ain’t gonna happen.

    2. MTK says:

      The GSK-Sirtris reference was not to bash GSK or assign blame to any particular part of GSK, but rather to point out that if such a risk-sharing contract were standard practice, GSK would have substantially trimmed its losses, or there might never have been a deal in the first place, depending on how confident Sirtris was about the robustness of its data.

  6. Peter Kenny says:

    On a semi-related note, I wonder whether there should be a sin bin for pharmaceutical researchers who present data in ways that exaggerate trends.

  7. Slurpy says:

    Another sticking point would be, how do you argue reproducibility? Let’s say Professor X shows efficacy of his candidate in 80% of his 50 person cohort. Merck takes it, runs it with n=1000, and gets 65% efficacy. Is that a “reproduced” result? I’d say yes. Are Merck’s lawyers going to sue Xavier’s School for Gifted Youngsters for reporting results that don’t match what Merck got? Almost certainly.
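That 80%-of-50 vs. 65%-of-1000 scenario can be put in statistical terms. A minimal sketch (pure Python; the raw counts 40/50 and 650/1000 are assumptions inferred from the stated percentages) of a pooled two-proportion z-test:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled two-proportion z statistic comparing rates x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)        # pooled response rate under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: 40/50 responders (80%) vs. 650/1000 responders (65%)
z = two_proportion_z(40, 50, 650, 1000)
print(round(z, 2))  # ~2.18, nominally a significant difference at the 5% level
```

By a naive test, then, the two results “differ” even though the drug clearly still works in both datasets – which is exactly why any contract would have to spell out what “reproduced” means.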

  8. Hap says:

    If you want the originator of your research to assume more risk, you’re going to have to pay more unless you have a lot of leverage. Pharma having more leverage seems less likely, considering pharma’s decision to rely on smaller pharmas and academia for discovery (and their consequent need for products) – even with short-term, relatively known risk, Cubist was able to get lots of money from Merck.

    They may be able to pull this off, of course, but most likely there just won’t be enough people wanting to play with them, which also doesn’t work out well for pharma.

  9. MoMo says:

    Most ridiculous thing I have ever heard. Shows that Pharma is losing touch with reality- Again.

  10. Daniel Barkalow says:

    I think the “carrot” version of the system seems very plausible from both sides, actually. If you start giving grants to independent labs that will try to replicate interesting (to you) results, and you also give a bonus to the original lab if the second lab finds that the results are still interesting, you’ll probably find out a bunch of things you want to know. That makes it a worthwhile plan for industry, and the second lab stands to get funding out of work they’d otherwise do internally and never report on, and the first lab gets a bonus if (a) their original results actually work and (b) they can teach others how to do it.

    I know a bunch of scientists who would have been a lot happier about their projects falling through due to the reference system not actually working if they’d had a grant to see whether the reference system works.

  11. Eric says:

    As several people have already said – it’s hard to imagine this working well. There is too much wiggle room around what is reproducible and it puts all the burden on the seller, which likely just kills the deal.

    As far as the other suggestion: “For psychological reasons, you’re probably better off making a low offer for the data and the idea as they come, with a much more generous one if it’s been reproduced first. Let them seek the carrot rather than fear the stick. ”

    Isn’t this essentially what happens now? Preclinical compounds and ideas have limited commercial value. Phase 1 data is worth more. Late-stage data in a patient population is worth a lot more. Furthermore, targets that are actively pursued by multiple laboratories are worth more because they have presumably been validated by peers (although that could just be the lemming effect in pharma!).

  12. Mikeb says:

    The problem with biology is that everything can change based on the lighting of the room, what day it is, and what mood you’re in. All kidding aside, we once had a guy from NIST come give a talk, and he showed us results from a study where his lab sent the same exact set of cells to a dozen different labs across the country and told them all to run a simple cell viability assay after treating the cells with compound X. All labs were given the same exact protocol to follow. The results that came back were shockingly inconsistent; differences in viability between some labs approached an order of magnitude. Eventually NIST was able to optimize the protocols so that if you pipetted in a zig-zagging, crisscrossing manner, you’d cut down on the variance.

    The big picture, though, is that if labs can’t even run a very simple cell viability assay and get repeatable results, why should the vast majority of biology be reproducible, when other types of experiments can take months and months of setup, 100 different steps, 20 different protocols, and rely on instruments with setups that might have slight quirks? Repeatable science…ha. More like wishful thinking.

  13. bank says:

    Here’s an idea for this problem. University or biotech labs send their materials to an approved “Quality assurance” service. This lab would repeat some of the key experiments and then provide an independent report on the results. The cost of this would be borne by the lab intending to sell their IP or expertise.

    In this manner, those wishing to partner or invest have an independent assurance that the seller’s work is in good standing, and will be more willing to part with their money.

  14. autophagy says:

    Biology is a tricky thing; there are a lot more unknowns than knowns. That’s a big reason why drug discovery is so challenging. Since biological/pharmacological observables and endpoints are prone to all sorts of variation, depending on many conditions and variables, it’s hard to imagine easily translating reproducibility into conditions of legal contracts. So due diligence remains critical, and here internal scientific expertise is imperative, since upper management (who control the purse strings) is relatively clueless about the devil in the details. If UPPERs disregard their internal scientific staff, then you have to wonder about the integrity of an organization. Case in point: the GSK-Sirtris deal. “Watcher” tells the truth above. If the UPPERs responsible for that deal (and the firing of their internal staff) had any integrity, they would have resigned long before now.

    The waters of drug discovery are filled with sharks and that won’t change any time soon. Combine the uncertainties and variability of biology and the huge financial upside of novel therapeutics and there will be sharks. Merchant-scientists (largely academic in origin) are more motivated to chum the water with their science instead of validating it, fueled by venture capital looking to make deals (not drugs), and hungry Pharma looking to replenish their pipelines. Given this ecosystem, it’s on Pharma to do a better job considering the merits of the bait before swallowing the hook. They can begin by enabling their staff to do first-class science and actually paying attention to what their scientists have to say.

  15. Cialisized says:

    This takes all the fun out of that poker game called biotech deal-making. Both poker and deals are about betting on a hand that is partly or wholly unseen. The player (university or small biotech) has to make it appear that they have a valuable hand by whatever means–bluff or bluster or selective data use. Imagine poker where the bluffing player has to give back a portion of that hand’s winnings! That would not be fun.

  16. Cellbio says:

    Isn’t there already a simple deal structure that pharma uses with small biotechs and academics that addresses this issue? With smaller up-front payments, milestones based on progress, and royalty payments on ultimate success, a deal can start with a lot of bio-bucks while the risk is offset and aligned with the science being proven right.

    The other comment I’d make is that the reproducibility problem people are frustrated with is not a precision problem (an 80% response vs. a better-powered study revealing a less robust response), but really piss-poor science that is flatly wrong: lack of reagent QC, lack of assay validation, lack of replication due to pressure to be first to publish, lack of expertise because academia is a training environment, after all. To me, as bad as these may be, this arena is where industry is stronger, and should be stronger, so the collaboration should be that academia highlights potential and industry wades through the murky water. If you pay too much on the front end, is that a ‘you’ problem or a ‘them’ problem?

    Also, do this once or twice and you get pretty good at recognizing the good academics from the snake-oil salesmen. Not bashing here; the same is true for start-up culture. Some serial entrepreneurs are the best snake-oil salesmen out there. Whether asking for grant dollars, VC dollars, or selling a drug, you get what you bargain for in the context of the prevailing market. Recognize the inherent risk in tech evaluation and get better at your craft of deal-making rather than asking for a total guarantee.

  17. DCRogers says:

    Efforts to more closely align financial incentives with scientific results almost certainly will end up corrupting, not improving, the science: encouraging risk-aversion instead of creativity, legal obfuscation instead of clear presentation, financial reward as the goal rather than the scientific reward of success.

    If there is a problem with reproducibility, it should be addressed within the current system of peer review — perhaps allowing (or insisting upon) more access to raw data, or even funding small-scale confirmations as part of the process.

  18. tangent says:

    The rights are being sold at market rates, and presumably industry expects to get positive ROI from these deals. So if we make some ‘certified’ properties with higher expectations, they’re going to sell for more. To first order this is not going to change the total money flow.

    Unintended incentives aside, what this could hope to do is 1) benefit the academic groups who have high skill at making results that replicate (at the expense of the other groups), and 2) benefit the pharma companies who have relatively *low* skill at evaluating quality of outside research (at the expense of those who do better at it now). Result (1) seems reasonable if it works right.

    Result (2) is interesting, in that it steers the industry towards being more purely about executing drug development, less and less about research. Presumably industry employees whose expertise lies in evaluating drug discovery research will then be laid off, following after those who used to do drug discovery research.

    1. tangent says:

      To be fair, those industry jobs should migrate over to academia, the economic logic would say, since more value now lies over there. Research scientist positions or the like. But I don’t believe academia has en masse picked up the research headcount as those jobs got cut, so I’m not buying it will happen for the research-assessment folks, sorry.

      (Honestly, it looks from here like the pharma industry has coasted along with research staffing inherited from a past when anybody knew how to do pharmaceutical research for reasonable payoff-per-cost. Current academia is not going to staff up to those levels that are basically a bubble in the current reality. No criticism meant of the researchers involved, and not saying there isn’t any research done cost-effectively, but it’s pretty clear there’s not a silver bullet there that can run through full-auto, until the world changes some more.)

Comments are closed.