There are many clinical trial designs out there, but one thing most of them have in common is that they are set up from the start to run under fixed conditions and to enroll a set number of people (or at least to meet certain thresholds before enrollment is closed). You lay all that down before you even start, run the trial, and see what you’ve got once the data are analyzed. There’s another class of trials, though, that use adaptive designs. These can feature rolling enrollment, and can even change protocols as the trial goes on (adding or dropping particular treatment groups, changing dosages, and so on). In theory, you can even run several different interventions under study at once and change those along the way. This recent article will catch you up on the general ideas, as shown in the scheme at right.
This is by no means a new idea. I was, in fact, commissioned to write an article on the subject around 15 years ago, and it wasn’t a new idea then, either. But it’s taken a while to catch on. For one thing, these things tend to be more friendly to a Bayesian statistical approach than to the classic “frequentist” one that we all know and sort of love. That’s a big topic all by itself, and there are people more qualified than I am to discuss it, but the general idea is that a Bayesian framework updates the probability of a particular hypothesis as more data come in to support or refute it. You have a “prior probability” (based on what you knew before the experiment started), and a “likelihood function” is applied to that based on the effect of the new data (which were not used in calculating the prior), which gives you a “posterior probability”. Your hypothesis ends up looking more or less likely after the data are collected and analyzed, and you can keep the process going, updating along the way.
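To make that prior-to-posterior step concrete, here’s a minimal sketch of the most common textbook version: a trial arm’s response rate with a Beta prior, which updates in closed form when binomial data arrive. All the numbers here are invented for illustration, not taken from any real trial.

```python
# A minimal sketch of a Bayesian update for one trial arm's response rate.
# A Beta prior is "conjugate" to binomial data: the posterior is another
# Beta distribution, with the new successes and failures simply added in.

def beta_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Prior: Beta(2, 8) encodes a rough prior belief that ~20% of patients respond.
alpha, beta = 2, 8
print(f"prior mean response rate: {beta_mean(alpha, beta):.2f}")

# New data (not used in setting the prior): 30 patients enrolled, 12 respond.
responders, enrolled = 12, 30

# Conjugate update: posterior is Beta(alpha + successes, beta + failures).
alpha += responders
beta += enrolled - responders
print(f"posterior mean response rate: {beta_mean(alpha, beta):.2f}")
```

After those 12 responses in 30 patients, the estimated response rate shifts from 0.20 toward the observed data, and the same update can be applied again as each new wave of results comes in.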
There’s nothing weird or spooky about the Bayesian approach, but it does require a different mindset, and it has its own pitfalls along with its own strengths. Traditionally, Bayesian trials have been quite rare in the drug business, so there are fewer people with the relevant experience in setting them up. The first one I remember seeing was a Pfizer cardiovascular trial in the early 2000s, but the literature on the subject has been growing steadily, and the new paper notes eight current trials, each with its own design features. There are a lot of choices possible, and (as with any trial) a vital step is to specify up front just what you’re doing, how you’re going to do it, and why.
“Go Bayesian” is not synonymous with “Go wild” – the new design possibilities opened up by adaptive designs also need to be specified up front, in great detail. It’s also important to run plenty of simulations beforehand to see what you might expect as you tweak the starting design, or how the trial might perform as it proceeds and re-weights. One thing this paper mentions is that there aren’t (yet) commonly agreed ways to evaluate or rank the effects of these various design decisions, so you’re a bit on your own in this process (well, you and the relevant regulatory authorities, who have been coming to grips with all these issues themselves). That’s a real issue when you go before an institutional review board – the complexity of some of these trials as they take advantage of all those features can be a barrier to approval (or even to understanding).
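As an illustration of the kind of pre-trial simulation work involved, here’s a hedged sketch of one “operating characteristics” question you’d want answered before enrolling anyone: under a null scenario where the treatment is no better than control, how often does a simple Bayesian early-stopping rule declare success anyway? The stopping threshold, look schedule, and sample sizes below are all invented for illustration; a real design team would sweep over many such settings.

```python
# A toy simulation of an adaptive design's false-positive behavior under
# the null (treatment and control have the same true response rate).
# The trial takes several interim "looks" and stops early for efficacy if
# the posterior probability that treatment beats control exceeds a cutoff.

import random

def one_trial(rng, p_ctrl=0.30, p_trt=0.30, per_look=20, looks=5,
              cutoff=0.99, draws=300):
    """Run one simulated trial; return True if it stops early for efficacy
    (a false positive, since p_trt == p_ctrl here by construction)."""
    rc = nc = rt = nt = 0
    for _ in range(looks):
        rc += sum(rng.random() < p_ctrl for _ in range(per_look)); nc += per_look
        rt += sum(rng.random() < p_trt for _ in range(per_look)); nt += per_look
        # Posterior prob(treatment rate > control rate), flat Beta(1,1)
        # priors, estimated by Monte Carlo draws from each posterior.
        wins = sum(rng.betavariate(1 + rt, 1 + nt - rt) >
                   rng.betavariate(1 + rc, 1 + nc - rc)
                   for _ in range(draws))
        if wins / draws > cutoff:
            return True
    return False

rng = random.Random(1)
sims = 200
false_pos = sum(one_trial(rng) for _ in range(sims)) / sims
print(f"estimated false-positive rate under the null: {false_pos:.1%}")
```

Re-running this while tweaking the cutoff, the number of looks, or the enrollment per look is exactly the sort of up-front exploration the paper is talking about; the catch it raises is that there’s no agreed standard yet for how to rank the designs those sweeps produce.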
Probably the most common adaptive technique is re-weighting of the patient population as the results come in and the various probabilities of success get updated. This lets you study more than one sort of patient at the same time, or even merge a traditional Phase II effort into a Phase III one as the trial proceeds. But if you’re going to do that sort of thing, you also have to guard against drift in the way the trial is run and how the data are collected, because things could run for a while. (The same thing is a concern in traditional trials, too, of course.)
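One common way to do that re-weighting (a sketch, not the specific method of the paper under discussion) is response-adaptive randomization: allocate the next wave of patients in proportion to the posterior probability that each arm is currently the best one. The interim counts below are made up for illustration.

```python
# A hedged sketch of response-adaptive randomization. Each arm's response
# rate gets a flat Beta(1,1) prior; the probability that each arm is best
# is estimated by Monte Carlo draws from the arms' Beta posteriors, and
# those probabilities become the next allocation weights.

import random

def prob_best(arms, draws=10000, seed=0):
    """arms: {name: (responders, non_responders)}.
    Returns the estimated probability each arm has the highest true rate."""
    rng = random.Random(seed)
    wins = {name: 0 for name in arms}
    for _ in range(draws):
        samples = {name: rng.betavariate(1 + r, 1 + n)
                   for name, (r, n) in arms.items()}
        wins[max(samples, key=samples.get)] += 1
    return {name: w / draws for name, w in wins.items()}

# Interim data (invented): arm B is pulling ahead, so it earns a larger
# share of the next wave of patients.
interim = {"A": (8, 22), "B": (15, 15), "C": (10, 20)}
allocation = prob_best(interim)
for arm, p in sorted(allocation.items()):
    print(f"arm {arm}: allocate ~{p:.0%} of new patients")
```

The appeal is that more patients end up on the arms that look better as the trial runs; the drift concern above is that “better” is being judged against data collected at different times, under possibly shifting conditions.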
The paper under discussion is particularly geared toward adaptive platform trials, which are the sort of thing you might run if you’re comparing several oncology clinical candidates or combinations (for example). That has real appeal in situations where patient enrollment is a limiting factor, because you have an at least theoretically more efficient way of evaluating all of these compared to running separate traditional trials. (On the other hand, if you’re just comparing a couple of possibilities, you’re probably causing yourself greater trouble and expense by setting up an adaptive trial rather than a traditional one.) But as the authors note, even when they’re well suited, these things “do not lend themselves to traditional funding models”. You don’t know how many people will be involved or how long things might run, which is not what granting agencies or clinical research VPs like to hear. I hope, though, that this work by the Adaptive Platform Trials Coalition helps to move the field forward. There really are a lot of appealing possibilities.