
Making Excuses, the Modern Way

This one will be good for a wry smile, a roll of the eyes, or perhaps a knowing shiver. The British Medical Journal has published a “Key opinion leaders’ guide to spinning a disappointing clinical trial result”, and many are the times that such a handbook is needed, unfortunately:

When key opinion leaders are asked to comment on disappointing trial results in news reports or at conferences, we have observed that they seem curiously unable to recognise that the treatment doesn’t work. They prefer to argue that the trial design was wrong, drawing from a set of stereotyped criticisms. Using cardiology as an example, we have systematically analysed the excuses they provide to compose the Panellists’ Playbook, an anthropological classification that will be useful not only for readers but for key opinion leaders in need of inspiration (or backbone). . .

. . .We found comments on 321 trials from the 15 international scientific congresses [in cardiology] held during 2013 to 2017. Of these trials, 127 (40%) had negative results and received a total of 438 remarks from key opinion leaders. Excuses were provided for 108 (85%), with a mean of 2.5 published excuses for each trial. We defined an excuse as any explanation given for a trial’s result other than the treatment not working. . .

The most common excuse was that the sample size was too small. Interestingly, the authors found only one instance where anyone suggested at the time what the sample size should have actually been. “Follow up too short” is up there, too, but apparently with no estimates of what an appropriate length would be. Other high-scoring excuses are that the trial was too inclusive, the comparator therapy was too good, or that the wrong doses were used. “More studies are needed” is also popular, as anyone who’s followed the medical literature for more than twenty minutes can well believe, but the authors put that one in the “vacuous” category, suggesting that “the key opinion leader simply does not like the result and wants another throw of the dice”. This seems hard to refute.
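
Producing such a number is not hard, mind you. Here is a minimal sketch of the standard back-of-the-envelope calculation (a normal approximation for a two-arm trial with a continuous endpoint; the effect size and standard deviation are purely illustrative, not taken from any trial in the paper):

# Patients per arm needed to detect a mean difference delta at a given
# power; normal-approximation formula with made-up illustrative numbers.
from scipy import stats

def n_per_arm(delta, sigma, power=0.9, alpha=0.05):
    z_alpha = stats.norm.ppf(1 - alpha / 2)   # two-sided significance
    z_beta = stats.norm.ppf(power)            # desired power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

print(f"{n_per_arm(delta=5.0, sigma=10.0):.0f} patients per arm")  # ~84

Any panellist crying “too small” could run something like this and name a figure; as noted above, the paper found only one who ever did.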

The authors suggest that wild-type Key Opinion Leaders have been selected over time for their ability to mobilize these explanations under journalistic time pressure. But this handbook should make anyone capable of such a performance. Indeed, they say, with its help “no intervention is too ineffective for an excuse”. I fear that they are correct!

 

37 comments on “Making Excuses, the Modern Way”

  1. Laurent Wada says:

    Do statisticians for drug companies not know how to do power analyses? Or are they purposefully not doing them?

    1. J Severs says:

      Two more likely possibilities: (1) an overoptimistic estimate of the treatment effect, or (2) nuisance parameters that, though estimated from the literature, turn out to be very different in the study’s population. I have seen both in my pharma experience.

    2. MTK says:

      Of course they know how to do power analyses. I would assume that they’re actually quite good at it, too, since most trials are probably powered well enough to support further progression or regulatory approval, but not overpowered so as to show small effects that would not gain approval given the anticipated therapeutic window. That would make an expensive process even more expensive. Drug companies are generally in extensive discussions with the FDA during development about the design of their clinical programs, in order to support the labeling they want.

      The KOLs here are basically trying to come up with a reason, any reason, why the study failed other than the drug candidate not being effective or safe enough.

    3. Nesprin says:

      I very much doubt the statisticians are the ones writing the press releases.

    4. Emjeff says:

      Yes, of course they know how to do them. The question, though, is whether power (as usually calculated) is useful. To calculate power, you have to assume a variance. Fine, you can get that from previous studies. But previous studies may have differed from your proposed study in quite significant ways (disease severity, age, presence/absence of concomitant illness, etc.). The upshot is, if you calculate power assuming a certain variance, and your new study ends up showing a higher variance, you could lose. In a very real sense, the calculation of power is too optimistic, since you are assuming that your new study is going to have an equal or lower variance. Most non-statisticians do not remember this when they see the phrase “90% power”.
      A more useful and realistic way to determine the probability of a successful study is to calculate assurance, which is power integrated over a wide range of probable variances. As you might expect, assurance < power. A study with 90% power at one assumed value of variance might have only 75% assurance. This makes senior VPs cringe, of course, but it is more realistic than power.
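
      To make that concrete, here is a minimal sketch of power versus assurance (a two-arm normal approximation; the design numbers and the lognormal prior on sigma are purely illustrative assumptions, not anyone’s actual trial):

      # Power of a two-sample z-test, then assurance: the same power
      # averaged over a prior on sigma. All numbers here are made up.
      import numpy as np
      from scipy import stats

      def power(delta, sigma, n_per_arm, alpha=0.05):
          se = sigma * np.sqrt(2.0 / n_per_arm)      # SE of the difference
          z_crit = stats.norm.ppf(1 - alpha / 2)     # two-sided cutoff
          return stats.norm.sf(z_crit - delta / se)  # P(reject | delta, sigma)

      delta, sigma0, n = 5.0, 10.0, 85               # ~90% power at sigma0
      print(f"power:     {power(delta, sigma0, n):.2f}")

      rng = np.random.default_rng(0)
      sigma_draws = sigma0 * rng.lognormal(0.0, 0.25, 100_000)
      print(f"assurance: {power(delta, sigma_draws, n).mean():.2f}")

      The assurance figure lands a few points below the nominal 90%, and it drops further the wider the prior on the variance gets; exactly the senior-VP-cringe effect described above.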

  2. CuriousScientist says:

    There are some weird ones in this playbook: “patient wrong race” and “wrong continent” together represent almost 10% of the results.

    Oh, this does not work in Europe, but if we had done it in Asia it would surely have been a success… If we had included more North Americans and fewer South Americans, might it have gone better? Is that the kind of discussion that goes on in meetings after a failure?

    This looks strange to me, but then making excuses is always easy anyway.

  3. MTK says:

    In the KOL’s defense they are specifically “opinion” leaders with all that that word connotes.

  4. cato says:

    Ah yes, the ye olde panellist’s playbook

  5. Peter Kenny says:

    My dog ate my homework?

    1. dearieme says:

      Miss, a swan was following me to school and I thought I’d better shoosh it back to the river.

  6. Isidore says:

    I am unclear as to who these “opinion leaders” are. Obviously not company employees, as they, presumably, should not appear to have obvious conflicts of interest, but are they paid or do they receive some other compensation or perks in exchange for their opinions? Or are they individuals who are simply pathologically optimistic and can always find the silver lining no matter how dark the cloud? Can anyone offer examples, not necessarily with names?

    1. MTK says:

      The KOLs are often expert clinicians or respected academics at research hospitals: Dana-Farber, MD Anderson, Cleveland Clinic, places like that.

      I think there are a couple of things going on.

      One is that many of them are well-meaning and are really looking for new ways to treat patients suffering from serious diseases. So the excuse-making is sort of a combination of hopefulness and desperation. That’s the friendly interpretation.

      The second, more cynical interpretation is that most of them are consultants or SAB members to pharma and biotech companies, or work at places that run clinical trials for pharma and biotech. So these KOLs will generally try to come up with something to soft-pedal the clinical outcomes a bit. If a biotech wants more funding to try again with a compound, it’s going to be tough if a KOL publicly comes out and says “The damn thing doesn’t work!” A few of those instances and people are going to be hesitant to work with you or your institution.

      In that instance, they’re sort of like the home inspector for a real estate deal. Yeah, you point out stuff that needs fixing, but not enough to kill the deal. If you’re known as a deal-killer pretty soon no agent in the area is going to hire or recommend you to do a home inspection.

      1. StumpedByTheCaptchaMath says:

        MTK’s second interpretation is spot on: no KOL is going to come out and flatly say that some drug doesn’t work. I’m not a “follow the money” kind of conspiracy theorist, but this is a classic case of follow the money. There is zero incentive for a KOL to call a drug a loser and a high incentive for them to peddle some BS excuse as to why the trial failed.

        That being said, I would argue it’s not a KOL’s job to stand up and call a drug a loser; that’s the responsibility of executive management at the pharma company. Some members of management choose to stand up and say it publicly, while others just have the compound slowly disappear from their latest corporate slide decks.

      2. eub says:

        Home inspector recommendations from a buyer’s agent are not _so_ bad, since if the inspector torpedoes one house, the buyer is just going to buy a different house. Back of the envelope: a stringent inspector who kills 5% of deals sends the agent back to square one on those 5%, which works out to about 5% more labor for the same revenue. A decent, honest buyer’s agent, which you’d better have, will eat that overhead.

        Ask your agent, though, when their recommended inspector last sank a deal. In the initial interview you asked how many deals per year they do, so do the math on the timing. If they can’t name one killed deal, or try to _reassure you that it doesn’t happen_ like that’s reassuring, yeah, find your own.

        But yes in general you’ll do as well picking out of a hat as asking anybody whose livelihood depends on deal velocity.

        General contractors usually know guys who do home inspection, if you have a general contractor you like; that can be a bit of a chicken-and-egg problem.

    2. Old Timer says:

      They are in the SI.

  7. BK says:

    In my short time in pharma, I had only one compound go from phase 1 up through phase 2b, and the unfortunate disorder it was treating seemed to be spread through North America, Europe, Australia/NZ, and the Arab world, and we had partner clinics dotted in every area of that map.

    I guess I just don’t understand how using the wrong race or location of people is a good excuse when you should know who is affected most by this disorder/disease by at least phase 2…

  8. db says:

    Linked in my byline above is another recent groundbreaking result from the BMJ: “Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial”

    1. MrRogers says:

      Highly recommended. (The pun is just a bonus.)

    2. eub says:

      I laughed.

    3. Isidore says:

      In the article, some of the difficulties in fully enrolling clinical trials jump out.

    4. 10 Fingers says:

      Not the greatest leap for the field

  9. Eugene says:

    An MBA course textbook, and applicable to just about any technical field!

  10. John Wayne says:

    In my pharma days we always joked that KOL was an acronym for ‘Kooks On the Loose.’

  11. ScientistSailor says:

    Since it seems that smaller PII trials often show signs of efficacy that are not reproduced in larger PIII trials, shouldn’t “Sample size too large” be a good explanation?
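
    That would be the winner’s curse in miniature: select the small trials that happened to cross significance and their effect estimates are inflated, so the bigger trial sized to those estimates disappoints. A minimal simulation sketch (every number invented for illustration):

    # Simulate many small Phase II trials with a modest true effect; keep
    # only the "significant" ones and look at their average estimate.
    import numpy as np

    rng = np.random.default_rng(1)
    true_delta, sigma, n2 = 2.0, 10.0, 30       # 30 patients/arm in Phase II
    se2 = sigma * np.sqrt(2 / n2)               # SE of the two-arm difference
    obs = rng.normal(true_delta, se2, 10_000)   # observed treatment effects
    winners = obs[obs / se2 > 1.96]             # trials that cleared p < 0.05

    print(f"true effect: {true_delta}")
    print(f"mean estimate among 'winners': {winners.mean():.1f}")
    # The winners average roughly three times the true effect, so a Phase III
    # sized to that estimate is quietly underpowered for the real one.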

    1. DrOcto says:

      This is brilliant!

  12. Anonymous says:

    The authors are 4 clinical fellows, 1 registrar (eh, what is that?), and 1 professor. “Fellow: Looking back, how do you explain that your career development stagnated at such an early stage?” “I can explain that with Reason #137. Although the sample size is small, it seems to be linked to my co-authorship of a 2018 paper on KOLs and the Panelist’s Playbook. After that, nobody wanted to work with me anymore or have me in their department. It seems that publishing something critical of KOLs was a classic #9A: Intervention given unskillfully.”

    1. Diver Dude says:

      https://en.wikipedia.org/wiki/Specialist_registrar

      The paper being published in the BMJ might be a clue to the usage.

  13. An Old Chemist says:

    Derek, the clinical trials of irreversible EGFR inhibitors (third-generation inhibitors) often include more patients in China than in the USA and Europe, so the geographical excuse for failed clinical trials should be an accepted argument. Then again, hospitals in some Asian countries do not employ medical professionals as well trained as those in the USA, Europe, and Japan. My previous company’s Phase III cancer trial (dose given twice a day, a certain number of hours apart) gave better results in the subgroup analysis for Japan than in other Asian countries. The excuse that I heard was that the Japanese follow drug-administration protocols religiously/rigorously, and so there the drug did work as had been predicted by the CMO, the clinicians, and the team.

  14. Softball spider 🕸 says:

    It polymerized…. Too hard to visualize…. Retention time shifted….. BACK ELECTRON TRANSFER!!! 😭

  15. Peter Juhasz says:

    Frankly, I doubt KOLs – or anyone else in clinical development organizations, for that matter – are incentivized to give honest assessments of failed trials. Also, in our world of pharma (and Western corporate culture, I may add) we have been indoctrinated into the use of positive euphemisms such as the explanations quoted in the BMJ article.

  16. Peter S. Shenkin says:

    Look, I think the only thing that would put these claims into proper perspective is to look at approved drugs that had to go through multiple clinical trials before approval, the earlier trials having been deemed inadequate using examples from the “excuses” discussed in the article.

    Then, of those approved drugs, how many are widely considered to be reasonable improvements over what came before? Even knowing to what extent they replaced still-extant predecessors in the clinic would be a reasonable starting point.

    One could also compare drugs that required multiple Phase III trials before approval with ones whose first Phase III trial succeeded. Most likely, any Phase III trial that failed would have been excused in one of the manners discussed.

    If you automatically take such excuses to mean failure, it follows that any drug candidate which fails its first Phase III trial ought to be abandoned. But some drugs actually do get approved after the failure of the first Phase III trial. Yes, the trial failed, but at least some of these “excuses” are really hypotheses for why they failed, and point to ways in which redesigned trials might succeed and the useful domain of the drug can be determined.

  17. VinceMcMahon says:

    Nice one Derek. Here’s another funny article from the BMJ Christmas Edition:

    Parachute use to prevent death and major trauma when jumping from aircraft: randomized controlled trial

    https://www.bmj.com/content/363/bmj.k5094

  18. Norrie Russell says:

    Many moons ago a KOL presenting for one of the big biotechs at a conference in San Francisco showed data on a Phase III study which clearly did not work vs standard of care. When questioned on his conclusions from the study he remarked that “the data are approaching statistical significance” and “were encouraging”. This was received with a round of laughter from the audience.

  19. Joe Q. says:

    The sad thing is that, as far as I can tell, the KOLs often also have a major hand in the study design (or are at least extensively consulted). At least this was my experience in the med-tech (not pharma) field.

  20. BigSky says:

    And remember who the primary audience is for these types of KOL announcements… they’re directed at the investor population that hangs on any excuse to buy/sell/pump/dump. They’re not meant to update the employees or the patients, or to direct future studies.
    I guess, viewed any other way, it’s just classic post hoc ergo propter hoc.

  21. steve says:

    In the KOL’s defense, it’s very difficult to prove a negative. If a clinical trial fails, does that really “prove” that the drug doesn’t work? The problem is our arcane clinical trial system, where a drug needs to work on a relatively huge population with a great deal of genetic variability in PK, PD, and underlying disease progression, and possibly even etiology (e.g., “breast cancer,” which we know is many different diseases of different genetic origins). Until we can do more focussed trials for approval, there will always be post-hoc analyses suggesting that a failed Ph3 is because of clinical trial design.
