
Academia (vs. Industry)

Come, Let Us Reason Together

Here’s an editorial from Morris Birnbaum of Pfizer on collaboration between academia and the drug industry. He’s worked on both sides of the fence, so that makes for some useful comparisons. I think the summary sentence would be this one: “It seems obvious that many of the obstacles to effective academic-pharmaceutical partnerships result from a fundamental lack of understanding by each party of the other’s motivations and career pressures.” Admittedly, that sentence could have been written any time in the last fifty years, and it’s perhaps a bit unnerving that it’s still valid today, but these things take time.

One classic obstacle is the attitude towards publication. In academia, if it didn’t get published in a journal, it didn’t happen, whereas in industry journal publication is more of an “Oh, I guess so” affair. A successful project for a drug company scientist isn’t one that got into a good journal, it’s one that made it to the clinic (and as far into the clinic as possible, preferably all the way out the other side). That brings up another difference in this area – if your primary goal is publication, in as high-profile a journal as possible, then a lot of that is up to you once the good research results are generated. But getting a compound into the clinic, which is certainly a good outcome, is only the beginning of the exposure to out-of-anyone’s-control factors, as our 90% failure rates illustrate most painfully. I think that, in general, academic scientists feel much more attached to their projects because more of the success is on them directly, compared to somewhat more fatalistic industry scientists who figure that the cells/rats/patients are gonna do what they’re gonna do, and we just have to get up there and find out what that is.

Birnbaum also makes a good point about what industry is trying to get from academic research, versus what academic research provides. Drug companies want new targets, new modes of treatment. Is there some specific intervention that no one else realizes yet that would make a big difference in a disease? Academic groups, naturally enough, are not focused in quite that direction. They’re likely as not looking for new biology, new pathways, new mechanisms (as they should). These may or may not have any relevance to a disease target, or not yet, and the first big “translational gap” is finding the intersection between “continuous avalanche of new biology results from the literature” and “stuff that could lead to a good drug project”. Once in a while there’s a discovery that just drops things into people’s laps, but most of the time it’s more like “OK, according to this paper, the XYZ kinase turns out to get more active when A happens. . .and you know, that A business is a bit like B, which happens in this disease over here, and now that I dig into it, the QRS kinase is kind of like XYZ, when you think about it, so I wonder if you inhibited QRS if the disease would. . .” and so on. Dots have to be connected.

So, the article says, industrial and academic groups should spend more time than they probably do making sure that they’re thinking about the same goals, and thinking about them in roughly the same way. You can’t slap a bunch of once-a-month milestones on things, but you can’t just say “Here’s some money, we’ll talk to you in a couple of years”, either. The academic side of the partnership needs to have room to explore things, while also realizing that in the end, the stuff getting explored should have a shot at being actionable.

. . .companies are most comfortable working with contract research organizations (CROs), for which a list of experiments and deliverables is agreed upon and the plan rarely revisited and then only with serious discussion. On the other hand, academics often proceed under the principle that once grant funding is received, there is considerable flexibility in what studies are performed, as long as the principal investigator can demonstrate the merit of the experiments at the time of renewal or final report. It is readily apparent that these two approaches are incompatible and will breed discord.

Most definitely. And you want to fund the right people, and “right” does not necessarily mean “big and famous”. Getting some big name to work with you on something that’s a side project for them will probably not produce as many results as someone less well known for whom the project is a priority. One thing that Birnbaum doesn’t mention is that companies may have different priorities than they should have in these cases. Sometimes you see collaborations that seem to be initiated just for the prestige value of working with Dr. Famous Professor or with good ol’ Aweinspiring U., and these end up structured (unstructured) in a way that makes them almost certain not to actually produce anything (except perhaps keeping everyone on friendly terms?).

Friendly terms are not to be underestimated, however, because there can be a fair amount of suspicion (from both sides) in some of these collaborations. The article mentions several, but this one stood out:

A particularly perplexing phenomenon is the disproportionate aversion of many academics to revealing results to scientists in the private sector compared to academic colleagues and competitors, from whom the principal investigator has considerably more to fear. The private sector is seldom motivated primarily by publication; hence, the academic scientist should have little concern for being ‘‘scooped.’’

My guess is that this reluctance is fueled by a feeling that revealing data might lead to the company running off and quickly making a billion dollars while the professor is left in the dust. Sadly, that doesn’t happen much (the billion dollar part, not the leaving in the dust part). Many academics don’t actually have a good handle on what’s patentable and what’s worth patenting, and (some, not all) university technology transfer offices don’t, either. But believe me, if there’s a billion bucks to be made, it’s going to come on pretty slowly, and everyone will have time to apportion credit. Not everyone ends up like Northwestern, but it does happen. And yeah, we’d like for it to happen more often, too!

22 comments on “Come, Let Us Reason Together”

  1. anonao says:

On the last paragraph: another source of reluctance from academia is that once results are handed over to the industrial partner, they may not be publishable for years. That may be OK for the PI, but it’s more problematic for the post-doc who did the work and needs publications to pursue his/her career (the system is heavily biased towards publications, even if we don’t like it to be that way).

  2. Anon says:

    “A successful project for a drug company scientist isn’t one that got into a good journal, it’s one that made it to the clinic (and as far into the clinic as possible, preferably all the way out the other side”.

    Even that’s not quite right, or at least shouldn’t be!

    A successful project for a drug company scientist *should* be one that either makes it all the way, or fails early – with unambiguous results either way.

    1. bad wolf says:

      “…all the way out the other side.” What does this mean? Out the other side of the clinic–public usage–is what, going generic? Being recalled?

      I thought “the clinic” was public sale/application. Is it just “clinical screening” on the way to use, as a common descriptor?

      1. Anon says:

        “all the way out the other side” means all the way through clinical development and into the market, when you actually start making money. Obviously.

      2. JIA says:

        @bad wolf
        Yes, in pharma, the phrase “in the clinic” means the drug in clinical trials, prior to FDA approval that the drug is safe and effective. Once you get approval, that would be “post-launch” or a “marketed drug”. So when Derek says “out the other side”, he means, the drug actually got approval. Which sadly, is quite rare relative to the total number of projects that get started way back in Research….

    2. Mol. Biologist says:

There is another contradiction with Derek’s previous post on the Merck cuts. From a chemist’s perspective, a good chemist is “a real productivity enhancer”, and when cuts happen, “it’s not like the people left after one of these cuts are exactly refreshed and energized, either”. So Derek is definitely taking the chemists’ side in that situation. However, here he presents it as a disadvantage that “scientists feel much more attached to their projects”.
Many posts here have argued that corporate politics drives “the devolution of pharma companies from sources of innovation to pure sales and marketing organizations”, and others ask why high-level managers decide what the more effective treatment is (“And antibodies aren’t? Because messing with the immune system always ends well, like playing ‘Hide the Steak’ with the Doberman next door”).
IMO the obstacles come only from INEFFECTIVE academic-pharmaceutical partnerships. It happens all the time when the experiments produce results we didn’t expect, or a compound doesn’t work in different clinical models. In that situation it is very hard to take responsibility for all the failures caused by INEFFECTIVE management. I will agree with Anon that a successful project for a drug company scientist *should* be one that either makes it all the way, or fails early – with unambiguous results either way. Yes, it may go slowly, but THERE is no other way around it.

  3. Peter S. Shenkin says:

    There’s a contradiction here. On the one hand,

    “industrial and academic groups should spend more time than they probably do making sure that they’re thinking about the same goals”

    on the other, the goals and reward systems of academic and industrial research differ, as Derek points out.

    “It seems obvious that many of the obstacles to effective academic-pharmaceutical partnerships result from a fundamental lack of understanding by each party of the other’s motivations and career pressures.”

But I’m not sure it’s lack of understanding, though that’s there, too. Once that lack of understanding is lifted, there is still the issue that the goals are different. What “same goals” should be pursued? Academics might say that industry should do more basic research, and industrial scientists might say that academics should think more about practical applications. But as Silverman (quoted in the reference to Northwestern) said, “It turned out that during the experimentation we got results that we didn’t expect, namely that our compound activated one of the enzymes — that wasn’t even something we were trying to do.” Not starting out with an industrial goal didn’t render Silverman blind to the implications of a serendipitous observation.

    I draw the Panglossian conclusion that academics and industrial scientists are each pursuing the goals they should be pursuing. Sure, they should understand that each others’ goals and reward systems are different. But beyond that, it is good to remember that modern science has its roots in two pursuits: the ancient philosophical effort to comprehend the universe and the practical arts; you know, people who made pots and glassware, forged swords, extracted metals from ore, built mines and irrigation systems and pumps to elevate water (and even steam engines!), and so on. The former pursuit began to illuminate the latter only in the last few hundred years. The two strands continue to be separately visible, as the muddy waters of the Mississippi and the clear waters of the Ohio can be seen separately for hundreds of miles south of their convergence in Cairo, IL. Their expressions are the academic and the industrial approaches to research — and not only in chemistry. Strategically, we are already working toward the same goals, but tactically, we operate very differently, and this seems to me to be a good thing.

    1. CMCguy says:

Peter, what you say is well stated; however, there should remain opportunities for overlap of differing goals and missions that hopefully results in effective transition from one to the other. Although not always the case, academics and industry people can have a natural tendency to move in quite different circles, even at conferences and when the former are engaged as consultants. If there were more informal intellectual exchanges, these might better guide academics to focus efforts in areas of interest and concern to industrial people, while the latter group could pick up info on evolving efforts that might stimulate ideas for collaborations. To a degree there is always going to be a bit of antagonism in the relationship, which is probably mostly rooted in money issues, as both sides require funding (and some seem to view industry funds as tainted). But the reality is that both have a commonality in science that can form the basis for collaborations where each side contributes based on the strengths it does have (I would add Reg Agencies to the equation, as in the end all sides can benefit from rubbing elbows more, even as they acknowledge that their roles and goals are not the same).

    2. c says:

      I nominate this for the “most beautifully written comment of the year” award.

  4. one man CRO says:

i agree with the academic concern over publishing in general and specifically as it relates to outside partnerships. i worked as a CRO on an academic/private partnership. the academic imaging people were constantly talking about which images would look great on the cover of PNAS and JACS. with everyone in the academic sphere thinking about what to publish, no progress was being made toward the milestone. i kept saying “all we have to do is show that we’re moving towards an agreed upon goal and they’ll release more money”.

    the irony is that none of the work ever got published…..and the money went away.

  5. loupgarous says:

    Academic publications on how and why drugs work ought to be unencumbered by industry pressure. The saga of the unnerving tendency of fluoroquinolones to attack collagen in some elderly patients, I think, wouldn’t have unfolded if Bayer and Ortho/McNeil/Johnson & Johnson had had much influence on those clinicians in Taiwan and Canada who broke the bad news to us. The Ontario study alone identified 200 excess cases of aortic aneurysm statistically associated with fluoroquinolone therapy.

    And one lawsuit’s been filed specifically at the FDA and its handling of the data on fluoroquinolones, alleging that when industry’s allowed to enter deliberations on drug safety, the consequences can be severe for patients (a total of 500 identified deaths, though the complaint uses FDA’s own statistics that only 10% of drug fatalities are ever reported, so that they allege 5,000 deaths may have occurred).

The story of this racketeering complaint’s so far been confined to websites run by law firms. One of them doesn’t go as far as the original complaint in alleging a Clinton Foundation connection to the matter, but its front page says “Drugwatch helps people injured by dangerous drugs and medical devices,” while not saying “and helps its sponsors to a big chunk of any settlements or awards these people may get.”

    The more I look into that particular story, the more agnostic I get about everyone’s motives but the clinicians who undertook those studies in Taiwan and Canada. Those guys show how academia really ought to work, and doesn’t seem to as much as we’d like.

  6. Taylor-Made says:

    Princeton and Alimta is another good example of not getting left in the dust.

  7. JeffC says:

    I think I’d generally agree with a lot of this. This area, the industry-academic interface is what I do for a living and I’ve done it from both sides of the fence. I think the key part from an industry perspective, is explaining why product development and academic research are different from each other (not better or worse, just different). And academics need to understand that too, because, essentially that’s the difference. Industry wants products, academia (generally) does not. The absolute best academics I’ve worked with understood that, and the industry partner was sensitive to the publishing needs (industry needs to recognize that those publications are the lifeblood of future funding for that academic) etc of the academics.

    One other observation, is that when anything becomes more of a product development gig, the biggest failing of the academic community is that they generally “don’t know what they don’t know”. They are so used to being the world expert, that they find it very hard to deal with a team of people, that do different things that evolve over time, an no one person is a master of all of the detail. Academic teams are always smaller with fewer disciplines. Again, not a failing, just different.

    Industry’s biggest issue, is never making that clear, not really understanding the bind that academics face if they can’t publish and putting too many restrictions in place that really don’t matter all that much.

And then of course there is the dysfunction that is the TTO world. Sadly (for many varied reasons) most TTOs are pretty poor: their needs/desires are totally misaligned from the academics they represent, and they have a completely distorted view of the value of what they wish to sell/license. It may be the single biggest problem faced by industry and academia. So many TTOs are poorly trained, supported and resourced that they don’t ever stand a chance. If academia is about teaching and research (mainly), then there is a real issue when the TTOs view their main aim as making money off that, and preferably as much as possible. Universities are non-profits/charities, but when they act like profit-making businesses, that’s when I think there are big issues. And there is so much variability in TTOs, depending on what the university thinks is important, that this makes it even more difficult. That wasn’t a dig at TTOs, because they often face an impossible task.

Similarly, industry could be an awful lot more transparent and truthful about what it actually wants. If you are going to be “cuddly”, then be open and make it clear that it’s about relationships and kudos. If it is all about the deal and the product, be clear about that and don’t try to screw the TTO.

And thus it will ever be……?????

  8. Anon says:

    If Pharma wants a defined program of work focused on results with no obligation to publish, then it should be prepared to pay a proper CRO. But if it wants to exploit academia on the cheap, then it must accept enough freedom and flexibility to publish any results, at least once any patents have been filed within a certain agreed period. As the saying goes, you can pick any two from quick, cheap and good, but not all three. Or more simply, you can’t have your cake and eat it.

  9. I teach folks in academia and industry how to set up effective collaborations, which I define as those that generate useful data in a timely manner. Setting up collaborations is not as easy as many people think, and once these crash and burn it is nearly impossible to get them back up on the rails. As each side has different expectations, the key is to make sure that possible outcomes are discussed in advance, so that there are no surprises (which often lead to bad feelings) later on.

  10. dearieme says:

    My latest collaboration has gone pretty well, with one exception: dealing with the (US) company’s lawyers. Dear Heavens!

  11. MoMo says:

The push to publish is maligned by the welfare state academic scientists rely on, as insincere Pharma collaborations seeded with saboteurs, deadbeats, and the brain dead occur daily.

    But once in a while it works and human suffering is relieved and medicine actually progresses.

    Not bad for a bunch of monkeys.

  12. Li Zhi says:

    Unless things have changed a lot, I really doubt that any number of “quick failures” is the path to a stellar career. Show me someone who fails projects quickly, and I’ll show you someone who not only disappoints the management which backed the project (at the expense of others) but leaves that same management questioning the dedication, problem solving, and general competence of that investigator. Really, who would you choose: the investigator who has run through 4 projects in 18 months, quickly failing each or the one who has nursed (ie milked) one (fatally flawed when it comes right down to it) project for 18 months? Who is going to seem more competent, more able to solve problems? The only “antidote” for quick failures is to use the time saved to find a success.

    1. Anon says:

      “The only “antidote” for quick failures is to use the time saved to find a success.”

      That’s the whole point, it’s about opportunity cost, and not wasting time and money doing stuff that is going to fail anyway, so that you can spend it doing something that is likely to work and generate a positive return rather than an even bigger loss. It’s to avoid the sunk cost fallacy.

      Most managers do understand this rationally when they stop to think about it. Problem is that many don’t stop to think about it. So you have to challenge management with the right questions.

      1. dave w says:

Isn’t “fail fast” really the same idea as development strategies such as “high throughput screening” etc.? I.e., you have many possibilities, some large fraction of which will be uninteresting, so you want quick ways to try a bunch of them so you can weed out the uninteresting ones and spend most of your time digging deeper into the ones that show at least some initial potential.

  13. Levorotatory says:

    “But getting a compound into the clinic…is only the beginning of the exposure to out-of-anyone’s-control factors.” For the medicinal chemist in industry, I would start the “out-of-anyone’s-control factors” at which projects one is assigned to work on.

  14. Oliver Hauss says:

I’d actually have to partially disagree with one of the points, mostly because I’ve worked “on the other side”, beyond the clinic.
For the same reason they are important for academics, publications are important for industry, too – what hasn’t been published didn’t happen. And if you want to market your product, having a bunch of good publications under your belt helps tremendously. And precisely for the reason cited re the priorities of the top guys vs. the lower ranks, I’ve found that doing some kinds of studies with the second rank was much more promising: the top guy might have the manuscript on his desk for years while you urgently wanted it published to defuse an argument often brought against your product, but it just wasn’t that high a priority for him…
