That’s a post title that I could have used eight or ten times over the lifetime of this blog – Eli Lilly has been hammering away at Alzheimer’s for a long time now. They have yet another anti-amyloid antibody study out this week, and (as has happened over and over in this area) it was preceded by talk of interesting, tantalizing possible efficacy. Maybe possibly we might finally perhaps see something that sort of works?
Well, I’m not exactly overwhelmed. The study had 131 patients in the treatment group and 126 patients in the placebo controls. The treatment group got the antibody, donanemab, every four weeks for up to 72 weeks of treatment. That’s one of the things about any Alzheimer’s trial – it’s a long, slow disease, and that means that any meaningful trial is going to be similarly long, slow, and very expensive. The primary outcome was a rating on the Integrated Alzheimer’s Disease Rating Scale, with secondary outcomes being ratings on a whole list of other assessment scales. In the end, the treatment group showed a (barely) significant change in the iADRS scores, but did not reach the change in that scale that the trial was designed to be able to detect. No significant change was seen in the main secondary outcome, the CDR-SB assessment (there was an improvement, but it just missed statistical significance). The ADAS-Cog13 score looks like it might have been a bit different, but the paper notes that it partly depends on the CDR-SB assessment, and since that one failed no conclusion can be drawn. The other two scores (ADCS-iADL and MMSE) showed no change.
There are also biomarker outcomes looking at amyloid and tau levels, but you know what? I honestly don’t care about those, because they have never, in my view, shown any strong correlation with the real-world effects of Alzheimer’s disease. They didn’t here, either: the patients who showed the biggest decreases in brain amyloid showed no corresponding clinical improvement. Frankly, what does it mean if a patient has less amyloid protein if they’re just as impaired with or without it? The real-world effects that we do care about (memory, orientation, function) are measured by the ratings just mentioned, and to me this looks like yet another therapy that simply does not work. 131 patients is not a huge number of patients (this is just a Phase II trial), and seeing one barely significant result in one cognitive rating scale – while the others all showed no difference – does not inspire much confidence. The paper tries to make something out of a Bayesian disease progression model, but I can’t make myself care much about that, either, because the treatment differences that anyone could possibly notice are just not there. Everyone in this trial deteriorated (it’s Alzheimer’s), but the treatment group deteriorated at a slightly slower rate that could only be seen by very close attention to the statistics, and even then only by some measures (barely) and not by others. And as Matthew Herper and Adam Feuerstein noted at STAT (in a very good article), it looks like the treatment group and placebo group are coming together as the trial goes on, anyway.
And if you look at the paper’s Figure 1, you’ll see that these patients came out of 1955 patients who were assessed for eligibility, most of whom (1563) were turned down for “screening failure”. In the supplementary material, you see that 592 of them failed a tau-imaging PET scan with flortaucipir, 347 of them either failed an MMSE evaluation or had no historical tau PET data (an odd pair of factors to put together), 334 of them failed the CogState Brief Battery evaluation, etc. In other words, these patients were very carefully selected (early stages of the disease, clear evidence of classic Alzheimer’s pathology by brain imaging) – these are people for whom you would expect the best chances of donanemab working. With the results shown. I don’t believe I’ve mentioned yet that the treatment group showed (on imaging) a greater decrease in whole-brain volume and an increase in ventricular volume as compared to the placebo group. If someone wanted to run with a headline of “Experimental Alzheimer’s Drug Shows Efficacy” I would object, but if they wanted to go with “Experimental Alzheimer’s Drug Shows Brain Shrinkage”, I would say that the data support that view much more strongly. The paper calls these changes “paradoxical”.
One more complication, also highlighted by Herper and Feuerstein: one of the problems that’s been noted in past amyloid-antibody trials is “amyloid-related imaging abnormalities with edema”, or ARIA-E, which (as the name implies) is swelling in the brain. (Maybe that would cancel out the shrinkage?) Anyway, there were patients who had to drop out of the trial because of this effect, which mostly seems to happen in people who have the APOE4 gene variant (which is also associated with worse Alzheimer’s outcomes). The STAT authors spoke with an investor looking over the trial data who says (correctly, from what I can see) that what efficacy the trial showed could have been driven by the survivorship bias introduced when these patients dropped out (which happened around the midpoint of the trial). Lilly argues against that, naturally, but remember, we’re talking about the ragged edge of statistical significance in either case.
I still don’t see how any of the effects seen in this trial translate into any real-world benefits, even if they’re real – and there’s definitely reason to doubt that. I’m sticking with what I said nearly ten years ago about another failed Lilly antibody: there’s a nasty moral hazard in this business of marginal, hazy Alzheimer’s statistical benefits, because the first company that manages to get anything approved by the FDA will reap billions of dollars from the huge backlog of desperate patients and families who want something, anything, that they can use against the disease. I feel the same way about Biogen’s aducanumab.
Of course, there is a Phase III underway already for donanemab – as the paper says, “longer and larger trials are required to study the efficacy and safety”. Well, what’s for sure is that longer and larger trials will be required to make Lilly give up on it. That much seems certain.