How much does scientific publication matter? For once, we’re not going to be talking about its role in academia (partly because it obviously means quite a bit there!). No, how much does it matter in industry? Specifically, at highly valued biomedical startups? That’s the subject of this new paper by John Ioannidis and co-authors, and I’ll go right to the conclusion: across 47 “unicorns”, startups valued at more than one billion dollars, there seems to be no correlation at all between their valuation and their publication record.
To be honest, I’m not sure that surprises me much at all, for several reasons. But first, it should be noted that the analysis itself generated a rather lumpy data set: of the 47 companies (18 current and 29 exited), 8 had no publications at all, and nearly half of the total publications from the current firms came from just two companies:
Unicorns published 425 PubMed papers. Only 34 (8%, including two reviews) were highly cited. For exited unicorns, we identified 413 papers, of which 47 (11%, including nine reviews) were highly cited. Overall, more than half of the current unicorns (10/18) and almost 40% of the exited unicorns (12/29) had no highly cited papers. Over the entire cohort of companies, we identified no association between company founding year and number of published (r = −0.09, P = 0.51) or highly cited papers (r = −0.08, P = 0.57).
Three unicorns (Outcome Health, GuaHao and Oscar Health) had no published papers, and two more (Clover Health, Zocdoc) had published just one. All five were in the domain of digital health. Among exited unicorns, five (Enobia Pharma, Neotract, Qualicorp, Cameron Health and China Nuokang Biopharmaceutical) had no published papers and two (Flexus Biosciences and Cardioxyl Pharmaceuticals) had just two. 23andMe (107 articles) and Adaptive Biotechnologies (89 articles) published almost half of all unicorn papers. No similar disproportionality existed for exited unicorns.
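Those r and P values, by the way, are just plain Pearson correlations computed across the companies. Here’s a minimal sketch of that computation in Python; to be clear, the founding years and paper counts below are invented placeholders, not the paper’s actual dataset:

```python
import numpy as np
from scipy import stats

# Invented placeholder data (NOT the paper's dataset): founding year
# and PubMed paper count for a handful of hypothetical unicorns.
founding_year = np.array([2006, 2008, 2010, 2011, 2012, 2013, 2014, 2015])
paper_count = np.array([107, 89, 5, 0, 12, 1, 3, 0])

# Pearson correlation between founding year and publication output,
# the same kind of statistic behind the quoted "r = -0.09, P = 0.51".
r, p = stats.pearsonr(founding_year, paper_count)
print(f"r = {r:.2f}, P = {p:.2f}")
```

Worth noting: with counts as skewed as these (two companies holding nearly half the papers), a linear correlation like this is going to be dominated by a couple of outliers, which is one more reason not to read too much into the exact r values.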
The paper extends the analysis to companies below the billion-dollar mark and finds the situation to be (at best) the same, and then comes a line that probably should have led off the whole paper: “Most start‐ups apparently do not publish much.” It’s true. I think it’s been true for a long time. And it’s true for several reasons. Ioannidis makes quite a bit out of the Theranos story earlier in the article (stealthy high-valuation company whose tech wouldn’t stand close inspection, etc.), but I think that’s overdone. Companies that have really valuable stuff that’s, you know, real also keep quiet about it until everything is lined up. Not everyone who doesn’t publish is a Theranos.
And even those that do publish aren’t going to have a long record to look at. Remember, these are startups. They haven’t been around for that long, and they’re concentrating on getting stuff to market (or at least to a stage where they can convince someone to do a deal with them toward that end), not on writing up manuscripts that will pick up lots of citations. That comes later. Part of this is the disconnect between academia and industry about scientific publication in general. If I came across a busy young company that was spending a lot of time putting papers together for big-time journals when it could be shoring up its science internally, I would not invest. Near the end of the manuscript, this point of view finally makes an appearance:
Publishing is clearly not the primary mission of start‐ups. The need to spend time to write, submit, revise and publish papers may even be seen as a deviation from the trajectory of disruptive innovation. Further disincentive arises from the fact that peer‐reviewers may be resistant to new ideas. Nevertheless, when technologies and products influence real‐world health outcomes, peer‐reviewed publication is essential.
That last sentence may well be true, but it comes across to me more as an assertion than a conclusion. Peer review is overall a good thing, but it does (as mentioned) slap down interesting ideas at times, and it also lets junk through, even into high-end journals. There’s no doubt that having other sets of experienced eyes looking over your work is valuable, but there are other ways for that to happen than peer-reviewed publication (and the companies that manage to skip this step in every way, such as Theranos, are indeed asking for trouble). I agree that publication should be an eventual goal for the good of the overall scientific enterprise, but it might be asking quite a bit of companies that haven’t even gone public yet, at least in some cases. To its credit, the study does mention that a company doesn’t need a long record of papers; one or two key, detailed ones would be fine. But its count-the-papers approach a few paragraphs before belies that a little.
There’s another feature of the analysis that needs to be brought up: I notice that this work “searched PubMed through November 2017 for papers carrying each start‐up’s current or past name(s) as affiliation.” That’s fine as far as it goes, but my guess is that there are a significant number of cases where the enabling technology (or at least its first iteration) was published under a founding scientist’s name but without the company affiliation, because the company may well not have even existed at the time. Something to consider, but the paper dismisses this idea: “However, it is unlikely that this work can be considered directly relevant to the start‐up.” I’m not sure about that at all.
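For the curious, here’s roughly what an affiliation-restricted search like that looks like against PubMed’s E-utilities interface. This is a sketch of the general approach, not the authors’ actual query (which isn’t given in the paper); the field tag and the date cutoff are my assumptions based on the methodology as described:

```python
import requests

# NCBI E-utilities esearch endpoint for PubMed queries.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_affiliation_papers(company: str) -> int:
    """Count PubMed records listing `company` in an author affiliation."""
    params = {
        "db": "pubmed",
        "term": f'"{company}"[Affiliation]',  # assumed field tag
        "datetype": "pdat",
        "mindate": "1900/01/01",
        "maxdate": "2017/11/30",  # the paper searched through November 2017
        "retmode": "json",
        "retmax": 0,  # we only need the hit count, not the records
    }
    resp = requests.get(EUTILS, params=params, timeout=30)
    resp.raise_for_status()
    return int(resp.json()["esearchresult"]["count"])

# Example: papers carrying the company name as an affiliation. A
# founder's pre-incorporation work, published under a university
# affiliation, will never show up in a query like this.
print(count_affiliation_papers("23andMe"))
```

And that last comment is the blind spot: run this kind of query against a young company and you will miss the founders’ pre-incorporation academic papers entirely, which is often exactly the work that enabled the technology in the first place.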
What should potential investors take away from this analysis? It probably is reassuring to see what look like good papers in good journals when you’re evaluating a small company, but as Ioannidis’s own analyses have made clear in the past (and as anyone with experience in this business knows), that’s no guarantee. In fact, the top-tier journals publish work that is a bit less reliable than the middle-tier ones, in large part because the stuff is so cutting-edge and interesting. So it’s nice, but it doesn’t let you off the hook in the due-diligence department. Patent applications are something you’re going to want to look at, too, but (as the current paper points out) Theranos had lots of patents and applications. Peer review does an imperfect job of catching bullshitters, but the patent office is set up to do that even less.
At the other end, if a company has no publications at all, you really do need to kick the tires more carefully. Big investors will be seeing lots of fascinating PowerPoint presentations and confidential reports, and they’d better be prepared to ask serious questions. There are VC shops that actually pay third parties to try to reproduce exciting results before they’ll invest, when that’s possible. Of course, if you’re looking at a Theranos, the PowerPoints will be full of fiction and the answers to your questions will be slick, reassuring lies. That’s the problem: the scientific enterprise assumes that you are not lying about everything. Peer review (ideally) catches inconsistencies, omissions, and misinterpretations, but you do not start off reviewing a manuscript by asking yourself “What if everything in this paper is a deliberate fabrication?” You’re looking for internal consistency, for adequate proof of what’s claimed, for possibilities the authors may have missed, but you start off by assuming that they’re trying to tell you about something real, not trying to fool you about the whole subject of the paper. That’s how a lot of fabricated stuff slips through, of course (small stuff and large): we’re not always even thinking about the possibility that that’s what it might be.
What I’m saying, as above, is that invoking Theranos (as this paper does, many times) is a bit of a red herring. You cannot necessarily catch these people by looking at their publication records; it would be a simpler world if that were the case. This latest paper, to my mind, mixes that issue up with the subject of publication records in general, and that weakens what was not a strong argument to start with.