Grit And Giving Up, Experience and Ignorance

It seems to be a fairly slow news day in biopharma – not always such a bad thing – so I wanted to bring up a general drug discovery question. Unfortunately, I’m not sure it has a good answer. What is the proper balance between perseverance and pragmatism – in other words, how do you know when to give up on a given compound, series, or project?

That’s akin to the “when do you sell a stock” problem, and right off, it’s important to note that there are big mistakes to be made in either direction on any question like this. The Sunk Cost Fallacy is waiting to tell you to hold on too long, that you’ve come this far and that now is no time to quit, no matter how horrendous the situation may have gotten. Interestingly, no phrase comes to mind that describes the opposite error, that of bailing out too quickly, but that’s certainly a real phenomenon as well. It’s for sure that if you don’t have a fair amount of grit and determination you’re never going to get a drug project to work, but it’s also for sure that grit and determination by themselves are just not enough – necessary, but not sufficient, to use a favorite phrase.

There are social and psychological components to this as well. People tend to admire fortitude and staying power, and you can bring on a whole list of football coach sayings as evidence: “A winner never quits, and a quitter never wins”, etc. Even historically, people know the names of more battles and generals where a heroic (and doomed) last stand was made than of the cases where someone actually managed to save themselves and their army to fight (and win) another day. Not long ago, I was re-reading Ron Rosenbaum’s article about 1960s and 70s nuclear strategy in his collection The Secret Parts of Fortune, and came across the part where he described how studying surrender strategies had been a career-ender for people in the field. We really don’t want to hear about giving up.

Perhaps that’s because we worry that giving up might become too much of a habit. It is easier, for sure, and that’s the problem. As mentioned, any drug discovery project is going to go through patches where things look bleak, and if you drop one every time that happens, pretty soon you won’t have much left to drop. So running for cover every time is bad, and holding on to every possible bitter end is bad – how do you steer between these two ditches and stay on the road?

The best advice I have is to think really hard before you start and try to set the most realistic goals and milestones you can. “If by (date) we haven’t been able to (goal), then we need to rethink” is how these should probably look. “Rethink” doesn’t mean “abandon the whole thing”, either – it just means that you need to stop and take a look around, see if there really are believable ways out of the situation and how much they’ll cost you compared to the eventual goal. “Eventual” is a key word there, because it’s easy to get caught up in thinking that the goal is to keep the project going, one way or another, or that it’s to make the screening cascade as beautiful as possible, or whatever. No, the goal is to deliver a drug candidate – or, better yet, a drug. And it doesn’t have to be the drug that you’re working on, either, if the time, money, and resources that are being spent on the project at hand might be better spent somewhere else.
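
A pre-registered milestone of the “if by (date) we haven’t been able to (goal), then we need to rethink” form can be made concrete in a few lines. This is only an illustrative sketch; the dates and goals below are invented:

```python
from datetime import date

# Pre-registered go/rethink milestones, set before the project starts.
# Each entry: (deadline, goal, whether the goal has been met). The specific
# dates and goals here are invented examples.
milestones = [
    (date(2024, 6, 1), "potency below 100 nM in the primary assay", True),
    (date(2024, 12, 1), "oral exposure in the rodent PK model", False),
]

def needs_rethink(milestones, today):
    """Return the goals whose deadlines have passed unmet. Here 'rethink'
    means stop and reassess the options and their costs, not automatically
    abandon the whole thing."""
    return [goal for deadline, goal, met in milestones
            if today >= deadline and not met]
```

Run at every review date, this flags the unmet PK milestone once December 2024 has passed, forcing the “take stock” conversation instead of letting it slide.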

Be honest, then, about what can and should be accomplished by when. That way, if you’ve done it, you can feel more confident that things are moving in the right direction. And if you haven’t, then you know that the time to take stock has come, and not to put it off in the hope that things will just somehow get better. If various things have turned out to be harder than they looked, fine – happens all the time. You just have to make sure that their difficulty hasn’t gotten to the point where calling a halt makes sense.

That’s because calling a halt does, in fact, make sense sometimes. The best way to do it is with a hard endpoint: here’s a model or an assay we trust, and if our best compound can’t show the right effect in it, we must be done, unless we can come up with a damn good reason why that happened and a clear plan to get around it. Not every compound is going to work, not every compound series can be developed, not every target is druggable, and not every target is even a good idea to go after in the first place. Our failure rates in the clinic are a big, steaming pile of evidence for those statements. It’s tempting to think that we’d be in much better shape if we hadn’t worked on some of them at all.

But back to the opposite error we can go. The “fail fast” idea has been around for years now for that very reason, and it’s easy to understand its appeal. The expenses just get worse and worse as a project goes forward, and if you could just avoid going down some of those roads. . . The problem, though, is that if you’re going to Fail Fast, you need to be as sure as possible that you’re failing for the right reasons, and assays that are that trustworthy are not easy to find. People have messed things up by being so focused on being able to make the call that they pick an assay that isn’t so great and treat it like an oracle. That isn’t going to end well, but having no game-deciding assay at all won’t end well, either. Think, for example, of a phenotypic project. If you haven’t found the mechanism for your active compound, you’re going to have to make a big leap of faith into the clinic based on the results from your animal assay (or whatever the phenotypic driver was). But what if you don’t know if you can trust that assay? (“Then you shouldn’t have started the phenotypic project” is one response, but it might be too late for that to do anyone much good). How do you fail quickly and cleanly in a situation like that one?

In the end, many people will end up making decisions based on what’s worked (or not worked) for them in the past. And while experience is clearly valuable, it can hose you, too. People who have been burned by letting projects go on too long will be more susceptible to killing future ones off too early, and people who feel like they were robbed of a project that actually was going to work will be more likely to let things drag on. “They fill you with the faults they had”, as Philip Larkin famously said about parents and children, and experience does it to us in other ways as well. All of us are guilty, at one time or another, of thinking that we know more about what we’re doing than we really do. That can hit you early in your career if you haven’t grasped the extent of your ignorance yet, and it can hit you later on if you think that you’re past that stuff (you aren’t – ignorance is a large subject and has plenty of room for us all). Perhaps one of the things that experience can (or should) do for us is remind us that experience itself is of limited applicability.

41 comments on “Grit And Giving Up, Experience and Ignorance”

  1. anon says:

    re: generals running away and fighting another day, that’s kind of what George Washington was best known for.

    1. Derek Lowe says:

      He was very good at that (and needed to be!) I think if you asked most people to name some big maneuver or battle he’s famous for, though, they’d name his surprise attack by crossing the Delaware.

      1. Derek Freyberg says:

        That’s true, but surviving the winter at Valley Forge and keeping a force intact to fight again the next spring might well have been what saved the revolution.

        1. Curious Wavefunction says:

          There’s also the brilliant (and lucky) tactical escape from Manhattan.

          1. Peter Kenny says:

            The disengagement of the allied force from Gallipoli in 1915 is worthy of mention

  2. Barry says:

    the problem engages ego as much as it does numeric analysis. While it is good (critical) for the company to kill a project early if it is to be killed, the project leader gets no praise for doing so. His/her career is boosted if a project goes to the clinic and there is no compensatory boost if she/he demonstrates quickly and efficiently that the project is a no-go.

  3. Chrispy says:

    Our failure rate in clinical trials is well above 50% for novel therapies (more for small molecules, less for antibodies). The model systems, particularly for complex diseases like dementia, really do not have sufficient predictive value to determine what is worth taking to the clinic. For antibodies, the animal models don’t even model something simple like toxicity well. We need a way to cheaply and ethically get into humans. A larger Phase 1 trial with a slower dose escalation done in real patients — something like that. Statistical power would be hard, but at this point I’d take anecdotal evidence in humans over a raft of data in animal models.

  4. kinase says:

    As an early career scientist (<1 y) without a drug discovery background, I feel experience is vital; what my senior colleagues understand intuitively, I find hard to grasp.

    Can you please advise early career biologists and chemists on what to look out for, science-wise and career-wise, possibly with a timeline/milestones to help track intellectual growth?

    1. Derek Lowe says:

      That’s a tough one to assign calendrical milestones to, but it’s actually a very good topic in general. Look for a post soon!

      1. Some idiot says:

        Good question… I do not have an answer (apart from the rather unuseful one of “get experience…”), but the general point that some people have a good gut feeling, and that it tends to come with a lot of experience, is very true. Which brings up the more general question of what should you trust most: gut instinct or rigid adherence to rules? There is no good answer to that one either, and if you go too far either way with these two you will (probably) end up in big trouble. For example, cherry-picking data is generally a really, really bad thing, but just occasionally, if you have someone who really, really knows what they are talking about, it can work (and how can you tell the difference? With great difficulty…).

        There is an excellent discussion of this point in the excellent book “The Prism and the Pendulum” (the whole book is strongly recommended…!). One of the chapters discusses Millikan’s classic oil-drop experiment, where he determined the charge on the electron. For those who have forgotten the experiment, he sprayed a mist of oil drops into a cell, many of them charged, and observed the cell through a microscope. An electric field is established such that it opposes gravity, and is adjusted so that a particular chosen oil drop is stationary. The size of that drop is measured, and then another one is found, and you repeat until you have enough data to determine the minimum unit of charge.

        The point is that throughout Millikan’s painstaking observations, there were many drops he chose not to measure because they “felt or looked wrong”. Was he right to do so? In retrospect yes, because he was proven to be right, and his gut feeling was on to something. However, just think what the situation might have been if his gut feeling was wrong. He may have amassed a heap of worthless observations and come up with seriously incorrect conclusions. How do you know the difference (at the time)? Good question, but I would generally go with a good gut feeling from someone who had a good track record, knowing full well that it might be wrong.
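
        As a toy illustration of that final “determine the minimum unit of charge” step (not of Millikan’s actual numerical procedure), the unit can be recovered from a handful of drop charges by searching for the value that makes every charge close to an integer multiple. The measurements below are invented, noise-free numbers:

```python
# Invented drop charges (coulombs), each an integer multiple of the unit e.
charges = [3.2e-19, 4.8e-19, 8.0e-19, 11.2e-19]

def estimate_unit(charges, lo=1.0e-19, hi=3.0e-19, steps=200_000):
    """Grid-search for the unit charge: the candidate value for which every
    measured charge sits closest to an integer multiple of it."""
    best_e, best_err = lo, float("inf")
    for i in range(steps + 1):
        e = lo + (hi - lo) * i / steps
        # total distance of each charge from its nearest integer multiple of e
        err = sum(abs(q / e - round(q / e)) for q in charges)
        if err < best_err:
            best_e, best_err = e, err
    return best_e
```

        With real, noisy data this search would need an error model, which is exactly where the judgment calls about which drops to trust come back in.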

    2. Anonymous Researcher snaw says:

      Kinase, by asking that question you are demonstrating you already have a key attribute needed for success in this industry: curiosity combined with humility. Throw in some appropriate self-confidence plus lots and lots of communication skills and you’re there!

      Communication skills are particularly important: if I could go back in time to advise my younger self I would emphasize that aspect. Pay close attention to the interpersonal dynamics at meetings. Notice especially when somebody senior listens closely to what somebody junior has to say, and ask yourself “what about this junior person is different from others at his or her level?”

      Beyond that, on the science side try to develop both depth of expertise in your specialty and a broad understanding of why the other departments need help from your group.

  5. CMCguy says:

    A very difficult question, and likely no set of general rules can be applied in every case, since the weighting of the variables is on a sliding scale and it is rare to have truly hard endpoints that provide answers as complete as desired. It’s probably hardest with earlier discovery efforts, where one wants to trust that the science is reliable and manageable enough to achieve the relevant objectives. In most situations I think the science itself, either biology or medchem, may actually be the best source of halt signals, even though all of us have a tendency to push on past unrecognized cliffs and holes in the path – which again is where experience can be either an aid or a hindrance. Once compounds move into development there can be more definite barriers, such as tox, ADME, or clinical execution, that clarify when you have hit a wall you cannot break through, jump over, or dig under – but even then, as we know, late failures sometimes come from overconfidence in our understanding. There is also another driving factor in business and marketing considerations, which can involve bigger unknowns than lab or clinical data but are very deadly to programs: regardless of the potential science, the WAG-estimated treatable population or ROI may be too small. Yes, we rarely get to deal with the obvious when deciding whether to terminate or push harder on a project.

  6. Peter Kenny says:

    This is a good question, Derek. I remember responding to a project manager’s question as to whether or not we had a lead with, “It depends on how desperate you are”. As an aside, views of project feasibility (and compound quality) differ for those handing over the project and those charged with taking it on. My experience is that it can be a smoke and mirrors exercise to divine the senior management view of project priority and I’m guessing that project team members in a Pharma/Biotech somewhere are ‘never quitting’ unaware that high level decisions have been made to kill the project. There really is a surfeit of leadership bollocks on LinkedIn.

  7. The phrase that comes to mind that describes the situation of quitting too early is “leaving money on the table.”

    My feeling is that the persistence problem is a bigger one than quitting too early. As Barry points out, generally organizations reward progress, not killing projects, which suggests to me the bias will be to persistence rather than premature termination. However, this is just a feeling and I’d love to see some companies open up historic data on project progression and termination, to compare that to eventual clinical success and failure.

    I think CMCguy is correct that no single set of rules can be created that fits any or even most situations, and I also agree that things do get clearer the closer one gets to the clinic. I think one of the things many organizations seem to support in theory is something like pre-registration in experiments: decide ahead of time what is a necessary set of conditions for progression to the next stage. However, due to the incentive structure, those conditions are sometimes honored more in the breach than otherwise. And all it takes is one success of a project that didn’t tick off all the boxes for people to have an example to forever point to and say, “See, compound X worked, so these conditions aren’t necessary.” Part of how our brains work: finding patterns and justifications for things we emotionally already believe.

  8. steve says:

    The big problem is that pharma insists on pretending to know the molecular target and mechanisms for drugs even though history proves this is a fallacy. We don’t really understand the mechanisms involved in lots of major drugs – from statins to metformin to anasthetics to even acetaminophen (paracetemol). Small molecule drugs don’t hit single targets in vivo; they are part of a complex interplay as one would expect from any living organism experiencing a perturbation in the system. Yet major pharma wants a checklist of things to try and convince themselves that they are maximizing efficacy and mitigating side effects and wants to reduce everything to a simple molecular interaction. Well how well is that working out for you? How many effective drugs were killed because the mechanism wasn’t understood and how many drugs were developed where the mechanism was believed to be one thing and turned out to be something completely different? How about putting in place screening assays for efficacy and letting armies of postdocs work out mechanisms once the drug is on the market? The emphasis on pretending to understand mechanism is, I think, a major flaw in current pharma development schemes.

    1. but what do I know..... says:

      The more chemists on a project, the harder it is for management to walk away, and the more hand-waving occurs in the face of damning evidence. The TACE program at Schering-Plough leaps to mind. It amazes me that these idiots managed to get through high school, let alone get oversight in a major company, but there you go….

      1. Mark Thorson says:

        Does that top solanezumab at Lilly?

  9. simpl says:

    This decision is the most important purpose of the R&D management board. Fortunately, it is not as difficult as you might expect, because there is a financial selection mechanism operating in parallel. Thus, the board decides every few years on an R&D portfolio that they can carry – for a tier 1 pharma firm, maybe 8 research areas, 50 preclinical compounds and 10 in clinical trials, aiming for 1 launch per year.
    If a bunch of preclinicals are successful, they get promoted and compete for the 10 places with existing candidates. The same number of clinicals have got to lose priority at the next review – some through promotion or attrition, others through lack of progress. The overhang occasionally gets some special dispensation, but normally goes on the back burner, getting no new funds for trials etc., or straight to the garbage pile. This also provides a fair(ish) mechanism for competition between projects, and drives each team to justify their project’s place in the pecking order.
    It took me a few years to grasp this mode of operation (hi Kinase, thinking of you) but I think the mechanism is sound, though susceptible to bullshit from project teams and RAs, which adds to running expenses. And it is also valid for other large multi-project firms and conglomerates, like Philips, Boeing, GM or Siemens.
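
    The fixed-slot review simpl describes is essentially a ranking exercise, which a few lines can sketch. The slot count, project names, and priority scores here are all invented:

```python
CLINICAL_SLOTS = 3  # a tier 1 firm might carry ~10; 3 keeps the example small

def portfolio_review(current_clinical, promoted_preclinical):
    """Rank existing clinical projects together with newly promoted
    preclinical candidates by priority score; the top slots keep funding,
    the overhang goes on the back burner."""
    ranked = sorted(current_clinical + promoted_preclinical,
                    key=lambda proj: proj[1], reverse=True)
    return ranked[:CLINICAL_SLOTS], ranked[CLINICAL_SLOTS:]

funded, back_burner = portfolio_review(
    [("oncology-1", 0.8), ("cns-2", 0.4), ("cardio-3", 0.6)],
    [("immuno-4", 0.7)],   # a successful preclinical competing for a slot
)
# cns-2, the lowest-priority incumbent, loses its slot to immuno-4
```

    Promoting a strong preclinical candidate automatically demotes the weakest incumbent, which is the competitive pressure between project teams that the comment describes.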

  10. steve says:

    @simpl, things are never so simple. The financial selection you mention is part of the problem. A recent review from McKinsey & Company (Cha et al., 2013) analyzed more than 1700 individual forecasts on 260 launched drugs and found them no better than throwing darts, with more than 60% of the consensus forecasts in their data set being either over or under by more than 40% of the actual peak revenues. This was true even post-launch, with the variance in peak sales estimates still being 45% versus actual peak sales even 6 years after drug launch. So, aside from simplistic assumptions on the science side (which assumes that mechanism is a simple bimolecular interaction) there are simplistic assumptions on the marketing side (which selects projects based on false market models) both of which lead to the current dearth of promising large pharma pipeline products.

    1. simpl says:

      thanks Steve: I agree that models are simplistic, and much more so on the marketing side. However, the thinking is that R&D management are pretty aware of this, and can allow for it. The point I’m trying to convey is that ranking projects is easier than understanding every nuance. It is easier, for instance, to pick the top golfers than to understand everything that it takes to be a top golfer: and anyway, this understanding is not necessary.
      Which brings us back to Derek’s central point; picking future drugs is harder than picking future golf pros, though – and the bets are larger.

      1. steve says:

        I think we’re talking about different things. I’m not talking about deciding between mature projects; it’s easy to pick top golfers once they’re on the circuit and have several tournaments under their belts. Trying to pick them when they’re 4 or 5 years old is much tougher. Knowing where to put your research effort requires a change in paradigm that eliminates simplistic mechanism and marketing assumptions.

  11. Curious Wavefunction says:

    During the ill-fated 1996 expedition to Mt Everest (recounted in Jon Krakauer’s “Into Thin Air”), three climbers turned back from only a couple of hundred feet below the summit when they realized how crowded and slow things were getting on the South Col. As Rob Hall who was the leader of the expedition remarked, it took far more resolve, character and courage to turn back than to keep going on when things looked bad.

    The three who turned back lived to fight another day. Rob Hall, who got to the peak, tragically died.

    I think the answer to this question is easier in some circumstances than others. For instance, if you are working on a structure-based design project, there are a couple of warning flags that should really make you question your approach: lack of high resolution protein structures, existence of high resolution protein structures that seem to exhibit different conformations, anomalous SAR, undue conformational flexibility in ligands, lack of SAR or activity data for benchmarking. Protein flexibility especially seems to always put a dent in what seems to be a clear-cut rationale for an SBDD approach, so if you start observing massive amounts of induced fit you should probably rethink your strategy.

  12. Busta says:

    Hard to read these long posts so sorry for the duplication.

    Typically med chem efforts are inexpensive, so I say, for an interesting target, have at it!

  13. Kling says:

    No one can predict the future, but the corporate strategy office in big pharma pretends to. The PD1 and PD-L1 interaction was discovered at Genetics Institute, and a full blown research program ensued, partnering with Honjo (PD1) and Dana Farber (PD-L1). But Wyeth, after the buyout of GI, gave up the program and decided to repurpose the immunology group in favor of chasing immune system kinases. I recall someone in the biologics group walking around wearing a T shirt: Small molecules, small minds. Fast forward 16 years and a few billion in projected sales…. the kinase program didn’t go anywhere, btw. This had nothing to do with scientific tenacity; there were many tears of sorrow when the PD1 program was cut.

  14. Dick Thomas says:

    An interesting topic that always generates a variety of opinions. One aspect of this decision making tree that I rarely see addressed is “what else would we be doing?” Allocation of discovery resources is a multi-factorial problem. No one program should be taken completely out of the context of the entire organization. If the program being evaluated is bogging down for any of a host of reasons and there are several other prospects with good biology, screening hits, etc., waiting in the wings, then it is easier to pull the plug. If you have a program that is struggling through some tough times but there is nothing else on your plate to apply the resources in a better manner then you could let the program continue.

    This has always struck me as one of the “cheap shot” analysis pitfalls of many who complain that their individual projects were “shut down by those “&%#$@&&%,s” in management who don’t realize that success is just around the corner. They fail to put their own programs into the context of a finite resource that needs to be applied to all of the current, pending, and future programs. That is the larger question. Deciding to pull the plug or continue on a specific program is a tough call. Deciding how to balance a finite set of resources between multiple projects is far, far tougher. I am not saying that any specific management team has shown all of the answers, but I speak from experience that this is Derek’s original postulate in spades.

  15. KOH says:

    The best kind of surrender or retreat (similar to the Washington strategy) is one in which you are able to save and pool resources and focus them on some other area where success is more likely. This is probably easier to accomplish in the academic arena than in a for-profit sector. As a university professor, I have come across many projects in which attaining the original goal turned out to be a bridge too far, but the knowledge we gained in trying to reach that bridge opened up new projects that were much more fruitful. It still stings that we never accomplished our original goal, but something useful still came out of it.

  16. DoctorOcto says:

    Sounds a lot like no-limit texas hold-em to me.

    It’s usually cheap to get in before the flop, but as you get deeper and deeper into clinical trials, the bets get higher and higher and before you know it it’s either all in or fold.

    If you need to get out, you should do so before it gets too expensive to lose.

  17. Kelvin Stott says:

    This is what we decision scientists are supposed to do – analyze the probability-weighted average net outcome (return on investment) to determine whether to stop or proceed (and how). But of course it depends on all kinds of assumptions on costs, scenarios, probabilities, outcomes, revenues, etc., so part of the job is to make sure all those assumptions are reasonable and based on related data (with a bit of intuition and experience added in the mix). It also involves making people aware of their own biases a la Kahneman et al., as well as considering other alternative investment opportunities in the R&D portfolio to maximize overall expected ROI and benefit to patients.
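
    The probability-weighted comparison described here can be reduced to a toy calculation (all numbers invented), which also makes the opportunity-cost point explicit:

```python
def expected_net_value(p_success, payoff, further_cost):
    """One-step simplification of a decision tree: probability-weighted
    payoff minus the additional investment needed to find out."""
    return p_success * payoff - further_cost

# Continuing project A: 8% chance of a 900 (arbitrary units) payoff,
# with 40 more to spend to find out.
value_continue = expected_net_value(0.08, 900, 40)
# Reallocating the same 40 to alternative B: 25% chance of a 200 payoff.
value_reallocate = expected_net_value(0.25, 200, 40)

# The right question is not "is continuing positive in expectation?" but
# "is it better than the best alternative use of the same resources?"
decision = "continue" if value_continue > value_reallocate else "reallocate"
```

    On these made-up numbers, continuing wins despite its much lower success probability, because the payoff difference dominates; change the assumptions and the call flips, which is why auditing the assumptions is most of the job.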

  18. John Delaney says:

    @DoctorOcto Quite like the poker analogy – my (very limited) understanding of Texas Hold’em is that pros won’t play most 2 card hands. It’s not that they might not get lucky and generate a winning hand with the next few cards, it’s that they know the odds precisely. This is where drug discovery deviates from poker – no one has any idea of the true odds of carrying on or folding. Nassim Taleb calls this the “Ludic Fallacy” in a different context – games have knowable odds, real life does not.

  19. Li Zhi says:

    The correct term for comparing how/where to allocate resources is “opportunity cost”.
    I was going to suggest “cowardice” (on a scale from timidity to panic) but I like “leaving money on the table” better – but how about tossing the baby out with the bath water? Or perhaps “false negative”. The point has already been made that different levels of management have different perspectives on this question. A number of posts have mentioned experience as being key to “best” go/no-go decision making. It occurred to me that maybe what experience brings to the table is information from larger scopes. Often different levels have made quite different (categorical) assumptions about the way the project may succeed. I’ve often found it useful to understand what the basis is for the initial green light, what the operating assumptions are, and what we know we know and what we know we don’t know and compare these factors at different times/decision points. (of course this analysis must be probabilistic in nature). The goal is to predict the outcome given the facts and the momentum of the various probes. If a base assumption ‘fact’ changes, or if a key foray has disappointed and there’s no real reason to believe it will change, then lights out. It’s almost a smell test.

  20. ex-chemist says:

    What difference does it make? You’re still going to get laid off. And then you can spend the rest of your life wondering “Would’ve, could’ve, should’ve……..”

  21. Phil says:

    Jason Altom would probably be a happy suburban soccer dad today if he had quit grad school at the right time. Chemjobber had a pretty good series of posts on the topic. http://chemjobber.blogspot.com/search/label/i%20quit%20grad%20school%20in%20chemistry

  22. Nick K says:

    Has any company ever gone back and taken another look at its failed candidates? I’m sure there are many great drugs in there which deserve a second chance.

    1. Dennis says:

      In this era of drug repurposing and high throughput screening there is an ongoing relook at failed molecules. At worst, I believe, medicinal chemists are getting more starting points for new drugs.

  23. Chrispy says:

    An interesting example is Aliskiren, the first renin inhibitor approved. Many of you old fogeys will remember renin programs — everyone went after it. Everyone ran into problems. And Novartis had one that they shelved until (as I recall) some employees left Novartis to start Speedel and develop the drug. They did this until it looked promising enough for Novartis to come back and buy in at great expense.

  24. tmj says:

    I honestly should have applied that to my PhD. I quit early in my 7th year, because I had become completely miserable due to the whole experience (especially after a lab move to a different country). I had thought about quitting years earlier, but I had a fellowship and had constantly told myself, “you can’t quit now, you don’t get a fellowship every day!”.

    In retrospect, I could have saved myself two years of therapy for depression if I had the guts to cut loose earlier.

  25. How valuable is experience in this field, anyways?

    Is it not quite common to find people with 20 years of experience in the field who have never, not once, contributed significantly to a successful drug? If that’s so, how reasonable is it to assume that their 20 years of experience is worth a damn in making these evaluations?

    (of course they can be experts at all kinds of things, and I suppose they might even be brilliant at “smelling” success, but I don’t see how that would follow)

  26. RM says:

    Institutional incentives matter in this. How accepting is your boss towards a “failed” project?

    I remember once when a company was belt-tightening and looking to reduce the number of employees. Their strategy at the time was to do so by cutting projects and firing everyone working on those projects. This meant that some of the best scientists in the place – the ones who took on the high-risk/high-reward projects that often became the company’s core products – were the ones cut in preference to the rank-and-file who just happened to be on a “safe” project.

    Under those conditions – where ending the project probably means getting laid off – I can certainly imagine project managers milking the sunk cost fallacy for all it’s worth, and trying to extend the project lifetime well past the point it makes sense from an external perspective.

  27. JG4 says:

    Kenny Rogers – The Gambler (1978)
    https://www.youtube.com/watch?v=Jj4nJ1YEAp4

    You’ve got to know when to hold ’em
    Know when to fold ’em
    Know when to walk away
    And know when to run
    You never count your money
    When you’re sittin’ at the table
    There’ll be time enough for countin’
    When the dealin’s done

    Every gambler knows
    That the secret to survivin’
    Is knowin’ what to throw away
    And knowin’ what to keep
    ‘Cause every hand’s a winner
    And every hand’s a loser
    And the best that you can hope for is to die
    in your sleep

    1. John Delaney says:

      Amen Kenny! He’s wasted in country music, he should move into pharma…

  28. NH MedChem says:

    Judah Folkman spent forty years studying angiogenesis and fighting against those who refused to consider his theories. In “Cancer Warrior,” the documentary covering his career, he states that it is a fine line between perseverance and pig-headedness. I love that line.

Comments are closed.