

How Much Clinical Research Is Useful?

John Ioannidis is back with an article titled “Why Most Clinical Research Is Not Useful”. (Thanks to Cambridge MedChem Consulting for the mention of it). His emphasis here on clinical work comes from his own admission that improving the efficiency of early-stage research is much harder to do, since it can lead in so many directions that aren’t apparent at the start, but that clinical research should be much more focused and specific. From the title, at least we know that his delicate and tactful approach remains intact. A less incendiary choice might have been “Why Most Clinical Research Isn’t As Useful As It Should Be”. But even if you find Ioannidis annoying, he tends to make good points, so what does he have to say here? First off, some definitions:

The term “clinical research” is meant to cover all types of investigation that address questions on the treatment, prevention, diagnosis/screening, or prognosis of disease or enhancement and maintenance of health. Experimental intervention studies (clinical trials) are the major design intended to answer such questions, but observational studies may also offer relevant evidence. “Useful clinical research” means that it can lead to a favorable change in decision making (when changes in benefits, harms, cost, and any other impact are considered) either by itself or when integrated with other studies and evidence in systematic reviews, meta-analyses, decision analyses, and guidelines.

Fair enough. He’s suggesting a series of questions to be asked about any given clinical research paper. Paraphrasing, they are: (1) Does it address a problem that’s worth addressing, in its impact on human health? (2) Has the prior evidence been studied to make sure that this new work is adding something? (3) Is the study powered enough to provide useful information? (4) Is the study run under real-world conditions, or as close as possible? (5) Does it fit with what patients find important as well? (6) Does it provide value for the money? (7) Is the goal of the study feasible in the first place? (8) Are all the methods and data provided clearly and without bias?

I might have put those in a different order, but they’re all good questions, and many of them are rather high bars to clear. Let’s apply them to the most recent clinical data discussed here on this blog, the announcement by Sage Therapeutics yesterday. Here’s how I’d answer the Ioannidis questions in that case: (1) Yes. Postpartum depression is a real problem, and can be life-threateningly severe in some cases. (2) Very little is effective in such cases, so this is fine. (3) No. Not at all. A major failing, which could well be enough to rot the rest of this particular barrel of apples. (4) Not enough details here, but I believe that the depression measurement scales being used are reasonably applicable (and I don’t know of anything better). (5) Most definitely. (6) Hard to say. It certainly was an inexpensive depression trial, as these things go, and thus may well have provided plenty of value, but see the third question. (7) It should be. Post-partum depression is a hard therapeutic area, but there’s nothing that says it’s intractable. (8) Since this isn’t a published paper (yet), we’ll have to reserve judgment.

With that third question example in mind, I’ll provide my own counterpoint to the Ioannidis list, in a set of related devil’s-advocate questions that will, I think, show what he’s trying to have everyone avoid. In the same order, they are (1) Has this study invented a disease that’s actually not something anyone is worried about? (2) Does it ignore previously reported effective treatments and pretend that there’s nothing else available? (3) Does it use so few patients, or for so short a time, that no one can really be sure if anything worked or not? (4) Does it use surrogate endpoints to make it look as if things worked, or pick and choose among outcomes to get a nice-looking result? (5) Does it solve a problem that no patients really wanted solved? (6) Does it spend a vast amount of money to advance the science a couple of inches? (7) Did it start because everyone was being too optimistic about this stuff working at all? (8) Are the raw data stuffed in a box, so that you’ll just have to take the paper’s word for the statistical workup? It will not be difficult to find clinical studies that violate one or more of these. Not at all.

Ioannidis has a section on the complications of applying these criteria, because there certainly are some. For example, he adduces one of his own first papers, on zidovudine monotherapy. When the study was started, it was a relevant question. When it finished, it was still of interest. By the time it was published, though, it was a moot point. For these and other honest reasons, I think it really is too much to expect that all the clinical papers that appear will always get a clear “yes” to all eight questions, but I think he’s right that the current situation is too far over to the other side of the scale. There really is a lot of junk out there.

It would be of great interest if every clinical report actually had to have a little section addressing a standardized list of such questions. It would, in one sense, be an invitation to boilerplate, which is what happens in research grants and other such forms, but at least we could see up front when people were being disingenuous in that way. “Of course our study is relevant; twitchy hair follicles are a major public health problem, and our six-patient study advances the field greatly”. In many cases, if you’re willing to run a crappy study in the first place, you’re probably willing to sell it as something better-looking, too. The real utility of this question format might come in when people don’t necessarily realize, or haven’t quite been able to admit to themselves, that their study is (in one way or another) lacking. Thinking about these questions up front, knowing that the answers will affect the eventual publication, might be of some benefit.

That last motivation is especially important for academics. For industrial clinical research, the big motivator is What the FDA Will Think, and in some cases that’s one of those complications mentioned above. There are studies that can be expensive and may not (from an outside perspective) move the science quite enough for the money, but are nonetheless run because the FDA says to run them. To be fair, companies often end up in that position when the earlier studies weren’t designed well enough to produce data that could have stood on their own. It’s going to be hard to outlaw wishful thinking.

16 comments on “How Much Clinical Research Is Useful?”

  1. mark says:

    As far as I’m aware you generally have to run a clinical trial past some sort of ethics committee before you can get it off the ground. Surely this is the stage where these questions should be addressed? If I propose a clinical trial to see whether monoclonal antibodies can cure ingrown toenails, with a total of 4 patients, this should be bounced hard before the trial even begins.

    Related to this is the question of publication. If you don’t publish the results, then you have conducted pointless experimentation on humans and should be ethically barred from any future clinical trial work.

    1. Isidore says:

      The devil is, as always, in the details. It is arguable, in my opinion, whether an ethics committee should be empowered to question, on ethical grounds, running a study to cure ingrown toenails with a very expensive therapy, especially if the study is adequately powered (and even if it is not adequately powered, is that really an ethical issue?). Points (5) and (6), especially, are either too subjective or simply business decisions (or both) to merit the attention of an ethics committee. There will always be some people who find their condition important enough to merit addressing it. And like porn, which one can recognize when one sees it, quantifying before the fact the acceptable amount of money spent per inch of scientific advancement is, if not impossible, certainly very difficult, and in any case not the purview of an ethics committee. I do agree, however, that once one is given permission to use human subjects, one is obligated to report the results. This is the often-discussed issue of reporting negative results, since even mildly positive results typically find their way into publication, and that one really is an ethical issue.

      1. Michael Rogers says:

        Yes, adequate power is an ethical issue. If a study doesn’t have adequate power, then you are asking patients to take time (and risk) with no hope of advancing understanding. Such an approach is unethical.
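
        To put a number on “adequate power”: here is a minimal sketch, assuming a generic two-arm trial with a continuous endpoint and using Python’s statsmodels library, of roughly how many patients per arm are needed for an 80% chance of detecting a given effect at p < 0.05. The effect sizes are generic Cohen’s d values, not figures from any study discussed above.

            # Back-of-the-envelope sample size for a two-arm trial with a
            # continuous endpoint; all inputs are illustrative assumptions.
            from statsmodels.stats.power import TTestIndPower

            analysis = TTestIndPower()
            for d in (0.2, 0.5, 0.8):  # small, medium, large effect (Cohen's d)
                n_per_arm = analysis.solve_power(effect_size=d, alpha=0.05,
                                                 power=0.8, alternative='two-sided')
                print(f"effect size d = {d}: about {n_per_arm:.0f} patients per arm")

        Run the other way around (fixing the per-arm count and solving for power), the same call shows how little chance a handful of patients per arm has of detecting a realistic effect, which is the ethical point being made here.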

  2. Chrispy says:

    The nature of clinical trials has changed — at least the early ones (Phase I, II) for academia and smaller shops. Although these are supposed to be designed for safety and dosing, the real intention appears to be to show enough efficacy to get investors/buyers to buy in. (“Well, of course the trial was not powered to show efficacy, but clearly there were more remissions in the treated arm.”) This is why so many things that look promising early crater in Phase III. It makes me wonder if these smaller trials aren’t more easily gamed, too, as far as efficacy goes, since that’s not really what you’re supposed to be looking at. Is that too cynical? Small changes in patient selection, inadequate blinding, etc. would matter a great deal, and there is an enormous financial incentive to show some glimmer of efficacy at this stage.

    1. milkshake says:

      I would be particularly concerned with miniature startups, especially virtual biotechs – they may lack the resources and the expertise to do justice to the project… If someone tells you he can de-risk drug development and make it much cheaper by not doing it in house but contracting everything out to CROs, he is either a genius or a fraud.

      1. Phil says:

        You can certainly make drug development *look* less risky and perhaps frame the costs in a way that is more palatable to shareholders by outsourcing everything. Licensing drugs that are already “developed” to a certain extent is not really less risky, but paying for the work after it has been taken to a certain stage makes it seem to certain investors like they are paying for something concrete and quantifiable rather than burning R&D dollars on something unproven and unquantifiable.

        Whether this qualifies as fraud, I’m really not sure.

        1. milkshake says:

          The problem with outsourced research is that it is not cheaper or better in quality, and you don’t really build expertise. The problem with a very small company planning to take a drug just into the early clinic before getting sold is that they will hide problems and try to design the clinical trial to impress the investors and the prospective buyer, with disregard for the science (and for what will come out in later clinical trials).

          1. Phil says:

            I agree with all your points, including the broader point that outsourcing does not actually save money at all. All I’m saying is it shifts it around in a way that can make more sense to MBA-types, and if that’s what it takes to get your company off the ground I can’t criticize the choice.

            However, I don’t think the problem you raise about coloring data is exclusive to small companies trying to get bought. It happens with internal projects too, thanks to fubar incentives (like those discussed in Derek’s most recent post).

  3. Shanedorf says:

    Lots of advances in modeling and simulation may offer a path forward.
    I’d advocate that every clinical trial should have an in silico simulation before it is run in humans. That allows researchers to design better trials and test hypotheses without risk to actual humans. Both the mathematical algorithms and the understanding of the biology have improved to the point that in silico should precede in vivo as often as practicable, and the FDA is already on board with this concept, mentioning it specifically in several of their guidances.
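
    For a sense of what a bare-bones version of that in silico step can look like, here is a minimal sketch in Python (numpy/scipy); every input (effect size, dropout rate, arm size) is an invented assumption rather than a model of any real trial. It runs a two-arm trial with a normally distributed endpoint many times over and reports how often the result reaches p < 0.05, which is the sort of design question a simulation can answer before anyone enrolls a patient.

        # Toy trial-design simulation: Monte Carlo estimate of how dropout
        # erodes the power of a two-arm trial.  All inputs are illustrative.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)

        def simulate_trial(n_per_arm, true_effect, dropout, sd=1.0):
            """P-value of one simulated trial after random patient dropout."""
            completers = lambda n: max(2, rng.binomial(n, 1.0 - dropout))
            control = rng.normal(0.0, sd, completers(n_per_arm))
            treated = rng.normal(true_effect, sd, completers(n_per_arm))
            return stats.ttest_ind(treated, control).pvalue

        def power(n_per_arm, true_effect, dropout, n_sims=5000, alpha=0.05):
            p_values = [simulate_trial(n_per_arm, true_effect, dropout)
                        for _ in range(n_sims)]
            return np.mean(np.array(p_values) < alpha)

        for dropout in (0.0, 0.2, 0.4):
            print(f"dropout {dropout:.0%}: power = {power(60, 0.5, dropout):.2f}")

    Real trial simulations layer on disease models, pharmacokinetics, and enrollment assumptions, but the basic move is the same: vary the design knobs and see which versions of the trial actually stand a chance of answering the question.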

    1. regdoug says:

      Even if you had a good simulation tool to run the studies in silico, I don’t think that excuses you from clearing the bars laid out in the 8 questions. Computer studies take time and effort like any other study, and it’s arguably easier to create junk results using software.

  4. Someone is Wrong on the Internet says:

    I’d love to live on the same planet as the person who wrote the above comment, it sounds nice.

    But seriously, this is nuts. If you think the problem is this simple, your understanding could use some improvement. In some cases, you certainly can take what is known to predict the behavior of related clinical trials. Unfortunately, a lot of us are working on things where there is no standard of care. I don’t see the Return on Investment here in most disease areas.

  5. tnr says:

    In regard to the postpartum depression results, the issue isn’t power, since the study reached statistical significance even with a small sample size. The issue is bias creeping into a small study. Did the investigators guess what the drug assignment would be prior to enrolling a subject (selection bias)? Were the raters somehow unblinded to the drug the subject was on?
    It will be interesting to see if the study can be replicated in Phase 3.
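
    To make that point concrete, here is a toy simulation (Python, numpy/scipy); the numbers are invented and are not modeled on the actual Sage study. The drug in the simulation has zero true effect, but partially unblinded raters score treated patients slightly better, and even a modest per-patient bias produces “statistically significant” results in far more than 5% of such small trials.

        # Toy illustration of tnr's point: zero true drug effect, but raters
        # who score treated patients a bit better.  Numbers are invented.
        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(1)

        def significant_fraction(n_per_arm=10, rater_bias=0.5, n_sims=10000, alpha=0.05):
            hits = 0
            for _ in range(n_sims):
                control = rng.normal(0.0, 1.0, n_per_arm)
                treated = rng.normal(0.0, 1.0, n_per_arm) + rater_bias  # bias, not drug
                hits += stats.ttest_ind(treated, control).pvalue < alpha
            return hits / n_sims

        for bias in (0.0, 0.25, 0.5):
            print(f"rater bias of {bias} SD: 'significant' in "
                  f"{significant_fraction(rater_bias=bias):.0%} of simulated trials")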

  6. Barry says:

    From where I sit, the clinical phase of drug discovery starts too late and ends too soon. For a novel, unvalidated target, I would want a Proof of Concept trial in man as soon as we have a candidate compound with strong pharmacodynamics and selectivity, to answer the question “does modulating this activity have any effect on human disease?” And the FDA should consider making all drug approvals conditional on confirming safety and efficacy after release, in “Phase IV”, when hundreds of thousands or millions of patients can reveal rare effects that might be missed even in a good Phase III.

  7. r says:

    Excellent article and blog, Derek, both of them much needed. Speaking from personal experience on the MD side (from trainee to staff physician to basic research in an academic setting):
    As residents in training in a European academic hospital, some colleagues and I were sometimes assigned (by the director of the program) to assist in clinical trials initiated by pharmaceutical companies. In fact, I always had the impression that these studies were very well designed, had more than enough power, etc. Some great new drugs (lifesavers!) were being studied, like rtPA and blood replacement therapies; in fact, the first half of the nineties saw some great therapies coming to the clinic. It could be that I’m somewhat biased because of the field we’re working in: most of these drugs were designed to treat life-threatening illnesses. Unfortunately, pharmaceutical companies then turned towards ‘blockbuster’ type business models, and hence the pile of ‘me too’ sildenafils, proton pump inhibitors, SSRIs…
    Maybe (because of biotechs stepping in) that has changed in the last couple of years?
    However, disappointment about clinical trials really set in after being confronted with patient studies designed by academic investigators, sponsored by government funding or by the institution itself. Some of these studies wouldn’t meet even one of the requirements outlined in the article or in Derek’s post. These studies were only meant to advance the number of PhD degrees, the ego and career of the investigator, and so on. A waste of money and resources… Luckily, ethics committees step in more easily now and feel more confident in preventing these foolish endeavors.

    1. Oliver H says:

      Thank you! It’s always bothersome to see people point to industry-sponsored studies as the ones lacking quality when there’s so much garbage coming out of academia itself. That garbage probably leads significant parts of the medical community on wild goose chases grounded in nothing more than poorly designed studies, studies that are evidence of little beyond the fact that none of the authors had any idea of experimental design, let alone of statistical considerations. Let’s not forget that a lot of the people behind these studies are not trained scientists but trained medical practitioners, whose grasp of how science is properly done is hit and miss (which is not to say that there aren’t great MD researchers). I’ve been conducting statistics courses for physicians, including some from academic hospitals, just to be able to discuss the strengths and weaknesses of two competing products with them.

      Industry will often have either in-house or contracted statistical consultants helping with the design of a study. There’s simply too much invested in these studies to hand-wave them.
