I blogged earlier this year about some interesting androgen antagonist results in coronavirus therapy. The idea is that the TMPRSS2 serine protease needs to be present at the cell membrane for the entire cell-entry pathway of the virus to work, and that androgen antagonists decrease its expression sharply. As the references in that earlier post show, that’s not a crazy idea per se, and it had gotten some attention from various groups.
What prompted the post was a press release on a trial that had taken place in Brazil. A small company called Kintor had provided an investigational androgen receptor antagonist (proxalutamide) for this work, which looked at over 500 patients and reported quite strong results. A preprint manuscript on this work appeared late in June. But now Science reports that there are doubts about some parts of the study, and it’s apparently having trouble finding any other publication venue.
One problem is that those results were not only strong, they seem too strong to be real. That gets us back to the key concept of effect size, which I’ve blogged about here and here, among other places. You can think of that as the underlying strength of the intervention you’re studying. If you’re the sort of person (like me) who keeps calling for controlled trials of new therapy ideas, you will inevitably have people ask you “You wouldn’t run a double-blinded trial of parachutes for people jumping out of a plane, would you?” As a side note, many of the people who ask this one seem to feel that it’s both extremely original and extremely telling, but sadly, it’s neither. The effect size of a parachute intervention during free fall towards the ground is enormous, and in fact could not be larger. And the larger the underlying effect size, the fewer data points you need to be sure that you’re seeing something real.
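You can make that last point concrete with the standard normal-approximation sample-size formula for comparing two proportions. This is a generic statistical sketch, not anything from the trial itself, and the mortality rates below are made up purely to illustrate how effect size drives the number of patients you need:

```python
import math

def n_per_arm(p_control, p_treatment):
    """Rough patients-per-arm estimate for a two-proportion trial,
    using the normal-approximation formula at two-sided alpha = 0.05
    and 80% power. Illustrative only."""
    z_alpha = 1.96  # critical value for two-sided alpha = 0.05
    z_beta = 0.84   # critical value for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_control - p_treatment) ** 2)

# A realistic drug-sized effect: mortality 20% vs. 10%
# needs on the order of 200 patients per arm.
print(n_per_arm(0.20, 0.10))

# A parachute-sized effect: 90% vs. 10%
# needs only a handful of patients per arm.
print(n_per_arm(0.90, 0.10))
```

That is the whole argument in two numbers: a modest effect takes hundreds of patients to pin down, while a parachute-sized one announces itself almost immediately.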
But in drug trials, we just don’t see parachute-level effect sizes. The closest example I can think of is when penicillin was first made available and given to patients with severe bacterial sepsis. Doctors at the time saw people die of bacterial infections all the time, and they knew exactly what it looked like when someone was passing the point of no return. But penicillin could often bring such people back, and those physicians who saw it happen needed no further convincing of its efficacy.
I wish we had a lot more stories like that, but we don’t – you can count them easily on your fingers. What we have instead are a lot more than fifty shades of grey. Most successful trials show that a drug works to some degree in some patients. Some respond pretty strongly, some don’t respond at all, and “pretty strongly” is a phrase that’s on a sliding scale of its own. Even really dramatic interventions like CAR-T therapy for leukemia show this. These engineered T-cell therapies have pulled some people practically out of the grave – oncologists know what it looks like and what it means when someone has been through every possible therapy and the disease is continuing to worsen, but CAR-T has sent some of these people back out into the world without signs of leukemia. But not all of them. Some people get a partial or less durable response, and some people don’t really respond at all (substantial effort is going into trying to figure out how we could identify such patients up front, as you would imagine!).
That’s how it works out here in the real world, as opposed to the last act of a screenplay. But the Brazilian trial showed a 77% reduction in mortality, which is either very impressive or too impressive depending on how you look at it. As the Science article shows, a problem is that the mortality in the control group seems very high (49% of the hospitalized patients!), which could of course inflate any benefit in the treatment group. The enrollment of the patients seems a bit too fast to be plausible as well. And there’s worse:
But on 8 June, the Brazilian newspaper O Globo reported that the Brazilian National Research Ethics Commission was investigating the study because the authors failed to report trial deaths as quickly as required by clinical trial rules in Brazil, and at different times reported a total of 170 and more than 200 deaths during the trial. The agency would not confirm the investigation, noting that “all data from the research protocols under analysis are confidential,” but Cadegiani confirms to Science that the commission is expected to issue a report on the trial.
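A quick back-of-the-envelope sketch shows why that 49% control-arm mortality matters so much to the headline number. The treatment-arm rate below (~11%) is simply back-calculated from the reported 77% reduction, and the 15% comparison rate is a hypothetical lower control-arm figure I’ve chosen for illustration, not a number from the trial:

```python
def relative_risk_reduction(p_treatment, p_control):
    """Headline 'X% reduction in mortality' figure:
    1 minus the ratio of treatment-arm to control-arm mortality."""
    return 1 - p_treatment / p_control

# With the reported 49% control mortality, an 11% treatment-arm
# rate yields roughly the trial's 77% reduction.
print(round(relative_risk_reduction(0.11, 0.49), 2))  # ~0.78

# The same 11% treatment-arm mortality against a hypothetical 15%
# control rate would be a far less dramatic reduction.
print(round(relative_risk_reduction(0.11, 0.15), 2))  # ~0.27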
It does not help much that lead investigator Flavio Cadegiani has also been a big proponent of a number of other coronavirus treatments that are (at the very least) unproven. But one way or another, we’re going to know what’s going on, because proxalutamide itself is in another trial here in the US, and other AR antagonists are in trials as well. To be sure, looking at people who are already on androgen deprivation therapy has shown no effect on their infection rates with the coronavirus, so there’s room to wonder about the entire idea, as plausible as it might seem mechanistically. But it looks like we’ll at least get the answer.
The whole story, though, illustrates what non-screenplay clinical research is like. Extremely solid-sounding rationales sometimes work and sometimes don’t, and we often don’t know why they didn’t. Trials get run that point one way, and trials get run that point another, and we have to figure out what the differences might have been between them, which ones were intrinsically more likely to be reliable, and what lessons to learn from them taken together. If the effect sizes were higher, we’d have less confusion. But most of the time, they aren’t.