
The Problem with COVID-19 Clinical Trials

Let’s talk about a painful subject. I am of the opinion – and I’m far from alone – that the most reliable way to determine if a possible therapy has any usefulness is a randomized, double-blinded controlled clinical trial. I can be a bit more specific than that, even: let’s make that “a trial that is run with sufficient statistical power to have a good chance of providing a meaningful readout”.

The worldwide coronavirus pandemic has featured some well-run trials that have truly advanced our knowledge of the disease and how to treat it. But it has featured far, far more garbage. That word was chosen deliberately. There have been too many observational trials, too many uncontrolled (or poorly controlled) ones, too many open-label ones, and above all, there have been way too many trials whose number of patients would be insufficient to tell us much of anything even if everything else had been run properly.

I am not revealing any hidden tricks of the trade here. Clinical trial design is a subject with a very large literature, and there are any number of people and organizations who can provide useful guidance on both its theoretical and practical aspects. Among these aspects are the calculations that should be made for how many patients a trial is likely to need to be well-powered enough for a clean read on its clinical endpoints. You can start to learn the basic outlines of the subject online. Now, that’s not to say that it’s an easy subject to get ahold of. You’re going to have to estimate some of your key parameters as well as you can, among them what you think the effect size of your treatment might be, what the patient-to-patient variability might be like, the time course of treatment that might be needed, and more. Just picking the proper clinical endpoints is a subject all in itself (and it’s one that can have a huge effect on a trial’s design and on its chances for success). And at the other end of things, your inclusion criteria and patient enrollment process are places for serious thought, too. Who should be evaluated (or definitely not evaluated) in your trial, and how long will it take you to round those people up? Where are you thinking about doing all this, anyway?
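To make that kind of power calculation concrete, here is a minimal back-of-the-envelope sketch (every number in it is an invented assumption, purely for illustration) using the standard normal-approximation formula for the sample size needed to compare two proportions, say the rate of some binary endpoint such as hospitalization:

```python
# Rough sample-size sketch for a two-arm trial with a binary endpoint.
# All inputs are illustrative assumptions, not estimates for any real drug.
from scipy.stats import norm

def n_per_arm(p_control, p_treated, alpha=0.05, power=0.80):
    """Approximate patients per arm to detect p_control vs p_treated (two-sided test)."""
    z_alpha = norm.ppf(1 - alpha / 2)        # critical value for the significance level
    z_power = norm.ppf(power)                # quantile corresponding to the desired power
    p_bar = (p_control + p_treated) / 2      # pooled proportion under the null hypothesis
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_power * (p_control * (1 - p_control)
                              + p_treated * (1 - p_treated)) ** 0.5) ** 2
    return numerator / (p_control - p_treated) ** 2

# A treatment that cut a 10% hospitalization rate to 7% would need roughly 1,350 patients
# per arm at 80% power -- far beyond what a few dozen patients can deliver.
print(round(n_per_arm(0.10, 0.07)))
```

The exact answer shifts with every assumption, which is the point: small changes in the guessed effect size move the required enrollment by hundreds of patients, and a few dozen patients almost never suffice.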

There are a wide variety of trial designs out there as well, and you can find yourself sorting through some that are clearly inappropriate to the problem at hand, some that would be great if you had about ten times as much money and time as you do, and several that at first glance look like they could all work out, but which have real-world differences that it’s crucial you be aware of. You would be well advised to consult with experienced practitioners before you start, to make sure you’re on the right track.

Unfortunately, underpowered, badly-run, and badly designed trials have been with us for a long time. Here are some well-justified concerns from 2002, for starters, and various fields of clinical research have undergone periodic bouts of soul-searching about these issues over the years. But the pandemic year has really made some of our problems more obvious. Not only do we have trouble with badly run trials, but mixed in with that is a bandwagon effect. Clinicians all over the world just piled onto some of the coronavirus ideas, and kept piling on for months and months and months. Think, for example, about the hydroxychloroquine situation. Now, I still get messages condemning me as an implacable, irrational foe of the One True Coronavirus Therapy. But it’s worth remembering that I started out as a “Huh, I don’t know how that would work, but let’s look into it” person, which I really think should be the default setting. And in that spirit, I was all for running trials and getting more hard data.

But what did we get? A search through clinicaltrials.gov for “hydroxychloroquine|coronavirus” gives you 113 trials. What’s more, thirty-six of those are still listed as “recruiting patients”. This is ridiculous, but it’s not amusing. There are some large, well-controlled data sets available that indicate that HCQ is very likely not a useful therapy, but as you can see, there are also dozens of other smaller ones that say Yes! No! Maybe! Sorta! Kinda! Kinda Not! Depends! Could Be! Who Knows? And that adds up not just to a lack of knowledge, it turns into an actual hindrance to knowledge as you try to sort through the data. The heap of fuzzy indeterminate results also fuels the extrascientific political and cultural arguments about the drug, since everyone can find some sort of “support” for whatever opinion they might have.

You have to think that there were other therapies that deserved a look in the clinic as compared to the forty-third, sixty-seventh, or ninety-eighth hydroxychloroquine study. You’ll recall that for a while, HCQ ended up mixed into other clinical trials just because everyone wanted it or imagined that it was some sort of standard of care, and that did no one much good, either. Now, HCQ isn’t the only offender, but it’s a big one, and I think it illustrates what we should try not to do next time.

How, then, should we try not to do that? (Update: some thoughts here on this problem from a distinguished team of authors with exactly the same concerns). It’s not like the US (to pick a big example) has a National Clinical Trial Authority that passes judgment on these things. To be honest, the downsides of having such an agency might worry me even more. But letting everyone go into Headless Poultry Mode and pile up overlapping crap in the clinic isn’t such a good way to go, either. You would hope for a little more coordination among major medical research centers, and you’d also hope for some local university/research hospital review boards to be aware that greenlighting the East Porkford Covid-19 Treatment Study with 47 patients isn’t really going to advance medical science very much. Especially when it’s covering the same ground as the trials kicking off in Mashed Potato Falls, Rancho Malario, and Kidneystone Pass. But I’m being unfair to East Porkford – some of these lackluster trials were conducted at larger institutions that should have known better. The way we’re set up, it’s down to the review boards and the sources of funding to police things better, and to keep their heads while all about them are losing theirs.

And it’s also down to the NIH and the CDC to lead the way more than they did during 2020. The RECOVERY trial in the UK has been an example of what can be accomplished in that line. The NIH has helped run some good trials, but we’ve had nothing that comprehensive in the US as compared to the UK effort, and I really wish we had. I fear that some day, eventually, we’re going to have a chance to do better, and I hope that we take it.

87 comments on “The Problem with COVID-19 Clinical Trials”

  1. James Cross says:

    This clinical trial, I thought, was going to answer the HCQ + Zinc question once and for all.

    https://clinicaltrials.gov/ct2/show/NCT04370782

    It completed with apparently 18 participants.

  2. anon says:

    I was once on a grant-giving committee for a collaboration between a large medical school and a private company. We had dozens of proposals for what we came to call “surgeon science” — “I did it three times, and one of the patients improved, so I’m declaring it the best thing since sliced bread and you should give me a million bucks to develop the widget I invented to do the procedure”.

    We’d dump them, with the lowest ranking we could give, and then the Dean would proceed to fund them because the guy was having trouble getting an NIH grant. Doh! I suspect that’s how a lot of these trials happen.

  3. luysii says:

    This sort of thing is why I don’t trust meta-analysis. Garbage in, garbage out.

    The most egregious example was the Women’s Health Initiative, where 3 separate meta-analyses of a bunch of uncontrolled studies concluded that estrogen replacement therapy decreased the risk of coronary heart disease by 35 – 50%. The gigantic Women’s Health Initiative trial of hormone therapy to prevent coronary disease (161,100 women followed for 12 years, with 1,000,000 clinic visits) was halted earlier than planned when it was found that estrogen-based therapies INCREASED the risk of coronary heart disease, stroke and breast cancer.

    1. Mister B. says:

      At last! Thank you very much.

      This is also why I am very cautious with “Big data”, “Machine Learning” and all the hype around them.
      Crap in.
      Crap out.

    2. Anon says:

      The WHI is a great example of how difficult it is to design and properly interpret trials. You have accurately described the results from the WHI as they were initially reported.

      Analysis over the last two decades has uncovered a number of its flaws and concluded that the cardiovascular risk of hormone replacement therapy depends on age and time since menopause. For women close to menopause, hormone replacement poses a low risk for adverse cardiovascular events. When all-cause mortality is considered, the effects of hormone replacement are likely neutral or favorable.

      What wasn’t widely reported was that two-thirds of WHI participants were over 60. Their average time since menopause was around 12 years. The hormone regimen used in the study was widely criticized at the time, and there is some evidence that other combinations, and the newer transdermal therapies, are safer.

      In general, the WHI study simply showed that hormone replacement should not be started by older women long after menopause.

      1. luysii says:

        Anon: Thanks for the followup. I retired shortly after the initial results came out. The reasons the WHI was done were the results of meta-analyses. These studies did not take into account that post-menopausal estrogen users are different from nonusers in several respects — they are thinner, better educated and more health conscious. Post-menopausal women using estrogen are by definition good compliers and as such probably have other attributes that predict better health. For one thing, they are under some degree of medical attention to even get a prescription for estrogen.

        My point about meta-analysis was that performing one on the ‘studies’ Derek cites would likely be worse than useless, as it might produce a conclusion about efficacy when none is in fact warranted from the ‘studies’.

  4. John says:

    Early on in this pandemic, company X would issue a press release basically saying hey, our drug might work. It worked in a lab dish, therefore let’s perform a trial! Guess what happened? The stock price spiked and people piled on waiting for the results of a garbage (great word) underpowered and poorly designed trial. Other companies saw this and boom, press releases everywhere and trials worse than an old Perry Mason episode. I think greed and stock manipulation drove much of the nonsense. Additionally, there was no coordinated effort like Warp Speed; it was the wild west for treatments.

  5. RAD says:

    Sigh, TWiV 715 at the 3min mark discussed the paper “Hydroxychloroquine-mediated inhibition of SARS-CoV-2 entry is attenuated by TMPRSS2”.

    Hydroxychloroquine (HCQ) works by modifying the pH of endosomes. SARS-CoV-2 can enter cells via endosomes or via TMPRSS2-mediated direct fusion; HCQ doesn’t block the TMPRSS2 pathway, but Camostat does. HCQ together with Camostat should have worked if the combination proved to be safe in a proper trial. The Combination Therapy With Camostat Mesilate + Hydroxychloroquine for COVID-19 (CLOCC) proposed trial was canceled rather than modified to use a placebo as a control:

    Withdrawn (lack of public funding; planned control arm with Hydroxychloroquine treatment showed out as not being standard of care anymore as time evolved.)

    There was a small window of time before monoclonal antibodies were available in which HCQ + Camostat could have made a significant difference. We also failed to effectively employ the monoclonal antibody therapies, but that is another sad story. The fact that Derek Lowe, the best science writer focused on the pandemic, doesn’t know/appreciate the missed opportunity is a reflection of our collective failures to respond effectively to this pandemic.

    We needed to highlight “Outstanding Questions”: both good and unanswered questions that required coordinated effort to find effective and timely solutions. Live (or die) and learn.

    1. Stephanie says:

      An impressive theoretical MOA (or even carefully documented MOA) is NOT THE SAME as outcomes data that proves that giving someone an intervention will decrease M&M. Or just M. Another thing that quacks like to do is dazzle with plausible MOA claims that do not translate into lives/QOL saved.

    2. SirWired says:

      All drugs that enter the clinic “should have worked” (if they didn’t, they wouldn’t be in a clinical trial.) There are excellent reasons why about 7 of 8 drugs that enter the clinic fail to achieve approval.

      If you listen to different proponents, it “should” have worked alone, it “should” have worked with Azithromycin, Zinc, and a grab-basket of other junk.

      Given HCQ’s track record of complete failure once it is well-tested, it’s not necessarily a promising avenue of research to try it with Yet Another Protocol.

      1. RAD says:

        The problem was that HCQ was successfully tested in the lab with Vero cells (monkey kidney cells) that do not express TMPRSS2. I’m not suggesting Yet Another Protocol; the paper covered in TWiV 715 is magnificent. The simplified diagram explaining the two entry pathways should have been widely known early in the pandemic. Everyone that promotes science and despises Trump, myself included, should be humbled by the fact that our anti-Trumpism got in the way of good science.

        1. Marko says:

          A trial of Camostat plus HCQ would be a perfectly reasonable thing to do, based on what’s known about their respective MOAs and about viral entry mechanisms, but I doubt we’ll ever see it.

          1. RAD says:

            What I tried to say, in my badly formatted blockquote, is that antiviral therapeutics have a narrow window of time in which to bend the curve before monoclonal antibodies arrive (the preferred therapeutic). A Camostat+HCQ trial might be beneficial for the next endosome+TMPRSS2 virus.

    3. sgcox says:

      HCQ indeed works in RA by alkalisation of lysosomes. Well, at least that is our current hypothesis. But the process is very slow by its very nature; benefits appear several months after the beginning of treatment. And the usual RA dosage is actually higher than what was used in covid papers/trials/whatever.
      It is simply physically impossible that the same mechanism works spectacularly and almost immediately after being given to a patient with a symptomatic viral infection.

  6. Philip says:

    My pet peeve for SARS-CoV-2 antiviral treatment trials is that the subjects were too far into their infection for antiviral treatments to be most effective. This reduced the trials to being mostly just large phase 1 safety trials. Not useless, but not useful for finding an effective treatment.

    To run meaningful phase 3 antiviral trials for acute diseases, we need rapid and frequent testing of at risk populations and/or good contact tracing (really both). When a person is found early in their infection and the person agrees to participate in an antiviral trial they need to be enrolled in the trial and randomized into control or treatment arms ASAP. Hopefully for the next pandemic, we will be ready to move quickly on rapid test development and have contact tracing ready to go. Waiting until their O2 saturation drops to 90% makes for a worthless, hopeless antiviral trial.

    1. Henry says:

      But in the real world, people who are very early in their disease evolution won’t be seeking medical attention. If someone finds a compound that works if given in the first hour after exposure, that compound will be clinically useless for the vast majority of cases.

      1. Philip says:

        Henry, that is why I stated we need rapid and frequent testing of at risk populations and/or good contact tracing. Frequent rapid testing and contact tracing are essential for antiviral therapies to be effective and for trials to be meaningful.

        As Thomas pointed out in an earlier In the Pipeline post, contact tracers being able to offer antiviral therapies may help get honest responses from people.

    2. Andy says:

      And yet the anti-androgen drugs look pretty good in recent trials, and they supposedly work by indirectly inhibiting viral entry.

    3. Carl Pham says:

      It’s not worthless, it just needs to be much larger to show the presumably much smaller effect of knocking the virus flat late in the infection, rather than early.

      Personally, I think we need to stop feeding O2 to the crazies — some of whom are otherwise very intelligent, but whose understanding of biology stopped at 10th grade — who think there is some magic window of opportunity for therapy X, and if you are outside that window *nothing at all happens* (rather than, say, something happens, but it’s too little too late to change the outcome). That’s not generally how disease works, since normally the mechanism by which the disease does harm doesn’t radically change from early to late in the process. Trying therapies out on desperately sick people, observing some modest benefit (not enough to save them) and then moving to less sick people *once you know the therapy does some good* so you can justify the risks to healthier people is a well-trodden path for development. It’s how some of the more risky cancer therapies became standard.

      But it all starts with *some* effect *whenever* you give it. The magic window of opportunity people are to my mind often No True Scotsman proponents in disguise. (“Oh it only works with zinc! You didn’t include zinc. Oh it only works between days 3 and 7, and all your interventions started on day 8….”) You take all this seriously, and this is one way we get to umpty zillion trials of HCQ, because the theorists won’t take Sorry Charley for an answer — and can *always* come up with a new version of their theory.

      1. Philip says:

        Carl, there is a huge difference between an acute viral infection and cancer.

        I do not think HCQ works to stop replication of SARS-CoV-2 in humans, though it very well could in green monkey kidney cells. See RAD’s post, just before mine, for more information.

        COVID-19 has stages. Some may overlap, such as the viral-replication early stage overlapping with the slightly later overreacting-immune-system stage. Treatments that are effective at one stage may not be at a different stage. I don’t want steroids early in the infection, but later if my immune system is going a bit crazy, I want them then.

  7. TallDave says:

    even beyond that, many if not most RCTs have terrible methodologies that don’t support their claims

    e.g. there was a big study that claimed Vit D supplementation had no benefits

    got a ton of press

    buried in the methodology (and never published in any of the press, natch) was the fact the control group for Vit D supplementation was… (wait for it)… also taking Vit D supplements

    I mean come on

    1. Karl Pfleger says:

      See Endocr Connect (2020) Oct 9 doi: 10.1530/EC-20-0274.
      “Why do so many trials of vitamin D supplementation fail?” by Barbara J Boucher
      for more explanation of the problems with many vitamin D trials.

      The most prominent vitamin D trial to have its conclusions overgeneralized, Murai et al from Brazil, gave D3 far too late (10+ days after symptom onset, 90% of patients needed supplemental oxygen at baseline) and then did not measure how quickly D blood levels went up, only that they were eventually raised at discharge. Lots of comments pointing out the flaws in the comments section of the preprint version. Some discussion also in Jungreis & Kellis’s preprint: https://www.medrxiv.org/content/10.1101/2020.11.08.20222638v2

      I enumerate the specific problems with the write-up of this study (what they should have said in the discussion section but failed to) and the problem with what the press wrote about it here: https://twitter.com/KarlPfleger/status/1363906961821949953

      1. TallDave says:

        yep

        terribly challenging to design these trials well

        most study designers seem to be trying very hard

        so few results can be replicated

  8. JT says:

    There needs to be a federal program to help small companies pay for well-run trials. Little companies might have a good new drug, but can’t afford to run a well-designed trial with enough participants to get a good result. This would also keep competition at a healthy level and keep big pharma knocked down a peg.

    1. In Vivo Veritas says:

      “This would also keep competition at a healthy level to keep big pharma knocked down a peg”. Trust me, if a little company has a good idea – Big Pharma will be glad to buy them, fund them, or find some other way to reap some potential profit. Most big pharmas have entire groups dedicated to identifying such opportunities, and most small companies spend most of their time trying to be identified by such groups!

      1. JT says:

        Sure, but some companies don’t want to be bought out or partnered.

        1. Dylan says:

          Well, there’s also a well-developed group of life science venture investors that will fund ideas that seem like they might have a shot. Of course, most of the time they are going to want you to partner with or sell to large pharma, simply because it is expensive and inefficient to recreate the infrastructure that those pharma companies already have, whether to run later-stage clinical trials, or eventually get the drug approved and sold.

  9. David E. Young, MD says:

    I emailed many times to the NIAID branch of the NIH, begging them to do many large studies on various treatments. I suggested that they appoint one (or more) trial czars to start large studies, avoiding duplication. I suggested that they vet 1,000 hospitals (or more, starting with the 1,200 hospitals involved in ECOG studies, for example) and have the ability to run a trial at 100 to 150 hospitals at one time. Recruit 1,500 patients in a month. Decide early on if outpatient Remdesivir helps, whether Ivermectin has any merit, test Favipiravir, and combinations. Back in April I wrote about the need for rapidly enrolling trials perhaps 100 times in various youtube comments (and elsewhere).

    Such an idea would require enormous work. But it could have been done. There would also need to be encouragement for people to participate in clinical trials, and that push would need to come from the White House, from churches and from media, local governments, online sites. (None of that happened, and I guess it would not have happened under the previous administration.)

    Instead of having one person in 5 thousand participate in a trial we should have had 2 or 3 percent participate.

    All of this would have taken enormous work. Is it feasible? Ask C. C. Myers construction about the Harbor Freeway. Look it up on Wikipedia. https://en.wikipedia.org/wiki/C._C._Myers

  10. NSK says:

    Dear Derek, very well written. So much academic wastage. Things need to change. Thank you for your voice.

  11. metaphysician says:

    My thought: a clinical trial that is not designed such that it can even theoretically provide meaningful results? By definition it lacks equipoise and is thus unethical. Not only should all those underpowered, uncontrolled trials never have been approved in the first place, but in a better world the perpetrators ought to be up on charges of illegal human experimentation.

  12. Jonathan says:

    To me, the Vitamin D literature exemplifies the problems. Lots of observational and meta-studies. Some small RCT studies. There have been countless articles and studies pointing to a likely benefit of supplementation, but has there been one solidly powered RCT on it? It boggles my mind how so much ink can be spilled and so much promise promoted, but still no solid study one year into this pandemic.

    The other problem is that based on one underpowered and badly designed study, people are quick to say “X doesn’t work”. For example, Vitamin D as one mega dose in pills among badly ill hospitalized patients may not work, but it may be another story with liquid Vitamin D supplements given prophylactically. We need scientists who are knowledgeable about the medicine designing the trials to collaborate with people who know how to design trials with statistical power to get a readout. The state of affairs is, as Derek says, painful.

    On the other hand, can we have a discussion about remdesivir? The supposed “standard of care” drug approved for COVID has, at best, extremely slight benefits, based on one study, and at worst, no benefit with a load of concerning side effects.

  13. MattF says:

    And what about bias? Particularly when there’s a political opinion divide that goes along with a medical opinion divide. People may not know much about any particular virus, but they -will- have political opinions.

    1. En Passant says:

      And what about bias? Particularly when there’s a political opinion divide …

      In the case of HCQ I think bias and political opinion did influence clinical trials, at the design stage.

      In mid 2020 Lancet retracted two articles based on egregiously fabricated data, amidst a media blitz slamming HCQ and Trump. These should never have passed objective peer review. But the politics at the time prevailed until sane minds pointed out the nonsense.

      Trials involving HCQ were even suspended for a while due to political pearl clutching about already well known risks of HCQ.

      I have no expertise in medicine or pharm, but in my younger days I did have experience in statistical analysis. But I don’t think medicine and pharm experience is necessary to spot bad or tendentious trial design, or politically inspired attempts to slam the only known drug (at the time) which showed measurable effect on the infection.

      The public information on every RCT involving HCQ that I looked at made me suspicious of the design. They started dosing too late, or with egregiously high dosages instead of the long established dosages of HCQ for other medical purposes, or similar issues.

      Most non-RCT studies of HCQ effects on covid19 that I found, mostly foreign to the USA, indicated modest effects, reducing hospitalization for severe infections by a few days.

      In early 2020 HCQ was the first drug that showed even modest effects. But Trump had praised it, so a firestorm of Trump Derangement Syndrome erupted. Trump had to be proved wrong. I think that TDS affected the design of subsequent RCTs that used (as I indicated above) very high dosages, late initiation of dosing, etc.

      Now that vaccines are available, it isn’t important for an RCT to demonstrate modest effects of an old drug. But history’s moving finger has writ, and there is no way to deny the documented facts of earlier RCT designs involving HCQ.

      1. Richard Ward says:

        But HCQ was also studied in the UK’s RECOVERY trial, a very large randomized study, and that conclusively showed that it had no effect. All the things you refer to about it having shown effect have basically been blown out of the water by RECOVERY. Indeed I would go so far as to say that the only positive clinical effect it ever showed was illusory and political – people just wanted it to work because President Trump said it did.

        1. confused says:

          I think very early on (when the first hints came out, even before Trump got involved) “people wanted it to work” because, well, we needed something! IIRC it was used in some tropical countries (where the medicine was familiar because of malaria) where Trump would presumably have had less relevance.

          Continued discussion of it, months later, in the US surely was politically inspired. But not the very first attempts, I don’t think.

        2. En Passant says:

          But HCQ was also studied in the UK’s RECOVERY trial, a very large randomized study, and that conclusively showed that it had no effect. All the things you refer to about it having shown effect have basically been blown out of the water by RECOVERY.

          The RECOVERY study is a case that underscores my point: too late and in excessive dosage.

          The usual dose for arthritis and other autoimmune diseases is about 200 mg per day.

          The RECOVERY trial dosed 400 mg every 12h, after two loading doses of 800 mg. More does not always mean better. I don’t think that one must be a clinician to understand that point. It is also true in many non-medical contexts.

          The RECOVERY trial initiated medication only after hospitalization, not upon presentation of initial symptoms.

          I’m not a pharm or medical expert, but those two points differ from the European non-RCT studies that I saw, Finnish I believe, that showed a small decrease in length of hospitalization stays when HCQ was administered early and in ordinary dosages.

          These days the question of HCQ efficacy is moot. Vaccines exist and much better treatments exist.

          But those differences indicate to me that the RECOVERY trial was foreordained to fail HCQ because of its design.

          The American firestorm of accusations against physicians who used HCQ early, and the fact that some states tried to ban its use for a while, indicate to me that the opposition was strongly politically based.

          I think any reasonable person will agree that politics should have no place in medical treatment. But in the case of HCQ, politics certainly did.

          1. li zhi says:

            I suspect that a version of the “Streetlight Effect” is at play with the HCQ trials. I suspect that relatively safe, very low cost interventions are going to continue to be trialed just because the PI’s institution doesn’t have a (competent) statistician on board. Some commentators ring the “evil corporate profiteers” bell, and I don’t believe that. Like they say: “We shouldn’t attribute to malice that which can be explained by incompetence.” I do suspect that most of these microstudies, vastly under-powered, are motivated both by the PI wanting to help and by wanting to publish. Covid-19 is/was a crisis, and I think the alternative to having a bunch of noise is, as Derek says, a cure worse than the disease. (That is, some sort of systematic command and control could be better sometimes but actually worse at others, depending on how far “out of the box” the best treatment is.) But we’re well over a year in, 2.7 million deaths, and still our best treatments are marginal. If the guys doing the HCQ work were to have picked some other horse (a clinically safe drug), how likely is it that we would have found a treatment that actually substantially reduced morbidity and mortality? Not answerable, no, but I doubt it made much difference.

  14. Richard West says:

    I probably wouldn’t have taken the vaccine without this blog. Pretty awesome life and death responsibility Derek has taken on here!

  15. Rob says:

    Too small, too hospital-focused, too expensive, too complicated. In order to show an effect, a lot of the trials should have been outpatient to catch the disease in its early stages, and very large in order to show significance when most patients in the placebo arm are going to do fine. Who is going to organize and pay for that when you are testing Vitamin D or zinc or a generic drug? It’s a dilemma, and it’s hard to be optimistic about next time.

    1. Andy says:

      A county, state, or national public health office, perhaps? It would be conceptually straightforward to direct all clinics reporting a positive test to offer the patient a chance to enroll in a massive clinical trial. Patients who say yes provide a list of all current meds they take on a website, have a video consultation with a doctor (if needed for safety reasons) and get a pre-filled bottle of randomized study medication either on the spot or by courier.

      This would be expensive and require a fair amount of organization. But the cost should be absolutely trivial compared with all the other COVID-related costs borne by the public.

      Heck, try to make it partially self-funding. Tell participants that they can be compensated up to $100 for participation if they feel that this would benefit them, but that the trial would appreciate a donation if the patient can afford it.

      1. Rob says:

        All good ideas. Unfortunately it’s hard to imagine a government entity being this efficient and effective. They don’t all work like the DMV, but they don’t do well with programs they don’t run routinely.

      2. Ian Malone says:

        The UK has actually been doing this: https://www.recoverytrial.net/

        This is the programme that found dexamethasone helped, and recently tocilizumab. They are currently testing aspirin among other things.

        1. Rob says:

          I think that’s a trial for hospitalized patients.

          1. Rob says:

            The hospital-based trials have identified things that help with the cytokine storm, which peaks late in the disease course. They’ve done almost nothing to help prevent the disease from progressing to a critical stage.

  16. John Wayne says:

    It is hard to know things. Anybody who tells you otherwise is in sales.

    1. Oudeis says:

      Preach, reverend. I’m in your choir, just tell me when you need an AMEN.

      And yet it’s interesting how much education you need – or at least how much education I needed – to get a sense for just how hard it is to know things. It seems people naturally grow up with an encyclopedia model of knowledge, where anything of significance can just be looked up, and it’s not until graduate school, when they’re trying to make new knowledge, that it dawns on them, “I’ve thought more about my thesis than anything else in my entire life, and I’m STILL not sure I’m right.”

      1. sgcox says:

        When I was a student I was thinking more of my personal life, or the lack thereof, than of my thesis.
        Looking back over the years, I now regret that I concentrated too much on the thesis…
        😉

      2. John Wayne says:

        I agree. It is odd how much you have to know before you realize you don’t know much. I often wonder if it applies to everything. Can anything be simple? I don’t know.

        While pleasing, even the Dunning-Kruger effect seems too simple to be a useful label to apply to anything observed outside your own expertise. I can’t wait to stop being middle aged, then I will be more confident in things again.

      3. confused says:

        >>It seems people naturally grow up with an encyclopedia model of knowledge, where anything of significance can just be looked up

        I wonder if this is “natural” in the sense of a fundamental part of psychology, or a feature of how people are taught? I generally got the impression, up until college (and only in some classes even there) that scientific knowledge was much more “ironclad” than it often actually is (and certainly no clear picture of which facts/theories are so well-established that they can be treated as essentially 100% confirmed, and which ones are just current-best-guess and the next better experiment could change them radically.)

        1. Oudeis says:

          You raise a good question. I agree the way things are taught might have an influence here, and it wouldn’t surprise me if it has to do with the education of our teachers–letting them get (or requiring them to get) degrees in education rather than the fields they’re going to teach. It’s conceivable (though I don’t know for sure) that most elementary school teachers themselves still accept the “science is a bunch of facts” model, and that it’s not until high school that students are exposed to educators who’ve ever struggled with a scientific problem.

          And, in fairness, kids need a bunch of facts in their heads before they can really see the messier picture. Imagine trying to explain RNA vaccine development to someone who doesn’t know people are made of cells, doesn’t know what a protein is, and believes (as even educated people have at some times and places) that disease is caused by the stars, bad smells, or evil spirits.

  17. Jim Mayes says:

    Disciples of Ivermectin also populate the cybersphere; most recently its use is “validated” by random-effects meta-analysis of selected studies, while other meta-analyses fail to find an effect. GIGO I fear, but many will believe a P value.
    Added are assertions that the drug was used by millions in various countries and “Look how well country X did.”
    Fluvoxamine appears to have gotten a reasonable study started after preliminary results. Wealthy entrepreneurs may be funding sources not to be overlooked.

    1. Micha Elyi says:

      Ivermectin presents everybody with an opportunity to do better, trial-wise.

  18. Carl Pham says:

    I think you raise a very important point, but…realistically, what hope is there in the near future? We live in a republic, and moreover in an age when enormous and nearly instant mob pressure can be brought to bear on some unfortunate soul whose decisions can be caricatured the wrong way.

    It’s easy enough to say, from the sidelines, that certain people ought to have more gumption, but wow! I sure wouldn’t want to be in the hot seat on any one of these. With 2,500 people a day snuffing it and the President himself — who has all these monster minds to advise him, right? So he ought to know what he’s talking about, right? — solemnly saying This Is A Miracle Cure — I would *not* care to have to be the one to decline to run some small HCQ study. Sure, it might be the right thing to do, and I’ll be memorialized by the future maybe — but in the meantime, I’ve got a family, I’ve got an employer who may depend on public funds, or if not certainly on sales revenue, and who may decide throwing me to the mob is the best way to limit the damage. Yikes.

  19. Michael Lin says:

    Not a problem limited to the US. The WHO SOLIDARITY trials, after all the time they had to plan and with their large patient population, ended up being open-label only. Then the WHO announced their results as if they were definitive.

  20. Chris Phoenix says:

    There’s a place for preliminary studies, but the more politicized the context, the harder it is to do them without criticism and unhelpful news cycles.

    Back when nanoparticles were new, a researcher did a preliminary study (not in humans) of nanoparticle toxicity, and found some interesting-but-not-compelling evidence of possible harms – the kind of thing where you might say, “OK, maybe something’s there, let’s design a real study and see if we can replicate this convincingly.”

    She presented her work, clearly labeled as preliminary/exploratory, to a small group of scientists at a scientific conference. Unfortunately, there was a journalist in the room who took the story and ran with it. She was, of course, excoriated for bad science – completely undeserved, as far as I can tell.

    Sorry, I don’t remember names and dates at this remove.

  21. Karl Pfleger says:

    The underlying tone of this blog post read to me as if scientific truth were the main goal. To the extent that it is, most of what Derek wrote seems pretty reasonable. Obviously, in the long run, scientific understanding & truth is super important. But the main goal of medicine is saving people, and these goals are not always exactly the same or compatible in the short term. Clinical equipoise is relevant here and really can’t be absent from any serious discussion of this topic.

    In times of severe crisis, the luxury of perfect trials is not always available. Think of MASH mobile surgical hospitals (eg in the Korean war and the TV show) or of disaster medicine. When people are dying at 10x, 100x, or more of the normal rate, when hospitals are expanding into their parking lots and staff are underslept, etc., one can’t expect trial protocols to be handled perfectly. Nor can one fault clinicians on the front lines for changing treatment when they believe clinical equipoise no longer holds.

    In disaster medicine in general, as I understand things, there is an attitude of throw everything at the problem that might help as long as it won’t hurt. This seems appropriate for acute crises in general. Expected benefit vs expected harm in a decision theory framework should arguably be the basis for public health policy, not perfect clinical trials.

    Clinical trials are always a sort of strange tradeoff between confidence in the answer vs. ethical dubiousness due to treating one or the other arm less well as the confidence in the answer grows. Statistics aren’t binary. The confidence doesn’t flip instantly from perfect clinical equipoise (totally equal belief that either arm could be better) to complete certainty in the trial result. In reality, confidence grows as data increases, so in a rolling trial, for example, one always has to lose perfect clinical equipoise as statistical confidence in a meaningful effect grows. It’s really always unethical to a certain extent to continue to build confidence to the preferred FDA approval level, because just shy of that level it’s clearly not an even bet which arm of the trial one would want to be in.
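    To put a rough number on that erosion of equipoise, here is a toy simulation of my own (invented event rates, not anything from an actual trial): it tracks a crude estimate of the probability that the treatment arm is better as patients accrue in blocks, and that estimate drifts away from 50/50 long before a conventional approval-level threshold is reached.

```python
# Toy rolling-trial simulation: equipoise erodes as data accumulates.
# Event rates are invented; "P(treatment better)" is a crude normal-approximation
# shorthand (roughly a flat-prior posterior), not a formal interim analysis.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
p_treat, p_ctrl = 0.07, 0.10            # assumed true event rates; treatment genuinely helps
events_t = events_c = n = 0
for block in range(20):                 # enroll 100 patients per arm at a time
    n += 100
    events_t += rng.binomial(100, p_treat)
    events_c += rng.binomial(100, p_ctrl)
    rate_t, rate_c = events_t / n, events_c / n
    se = (rate_t * (1 - rate_t) / n + rate_c * (1 - rate_c) / n) ** 0.5
    z = (rate_c - rate_t) / se          # positive z favors the treatment arm
    print(f"{n:5d} per arm   P(treatment better) ~ {norm.cdf(z):.3f}")
```

    Well before the readout would satisfy a regulator, the interim numbers already make the two arms look like an uneven bet, which is exactly the tension described above.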

    1. Carl Pham says:

      I think you may have missed the point. Nobody’s talking about what goes on at the clinical level — the decisions physicians and patients need to make at an actual point of crisis with a very sick person. Sure, if the situation is desperate and the patient or physician think HCQ or Vitamin D or concentrated camel urine might help, go for it. What have you got to lose?

      Lowe’s column is about something else entirely, which is trials of drugs for the sole purpose of determining whether they work — i.e. a purely scientific question, not a clinical question at all. And what he’s saying if I understand correctly is that (1) if you’re going to ask the scientific question, do the trial right so you get a definite answer, and (2) if you don’t have the money, wit, experience or patience to do it right, don’t do it at all, because you are wasting precious time/effort/money/patient volunteerism that could be better spent elsewhere — that every poorly-run trial that doesn’t give any useful answers has a sad opportunity cost, represents the loss of resources that *could* have been used in a *well-run* trial elsewhere to gain valuable information.

      1. Karl Pfleger says:

        Carl, I’m baffled by your response. What do you mean no one is talking about what goes on at the clinical level? They are called ‘clinical’ trials for a reason: they involve human patients suffering from real diseases. They aren’t abstract science experiments conducted in a lab with abstract instruments like 17th century physics or even experiments on animals (that’s called pre-clinical). You can’t separate out the human patients involved. There are always human patients involved whose care has ethical issues. It isn’t purely a scientific question as you claim, not once the trials get to the human clinical level. When viewed purely from the scientific question point of view, what Derek said is reasonable. It’s less so when viewed from the point of view of providing optimal healthcare to all patients, including both control & treatment arm subjects of the trials themselves.

        How do you determine if a lifesaving drug for a terminal disease works reliably? If a small trial (a pilot or a phase 1 primarily for safety) shows dramatically fewer deaths in treatment than control, there is good biological reason to think the treatment should save lives of those with the condition, and preclinical animal data shows that it works, then you still don’t have the efficacy data needed for normal FDA approval after a phase 3 trial, but can you really say at that point, when starting a late-stage trial, that you have true clinical equipoise based on the early data? Who, after considering the animal data, biological understanding, and small trial data, would want to have themselves or their loved ones randomized to the control group?

        1. confused says:

          I’ve been uncomfortable with this in regards to the vaccines – IMO giving them to the most vulnerable patients (esp. LTCF residents) pre Phase 3 would have been a clear ethical win.

          This is a small population (thus little manufacturing needed) and the death rate due to COVID in this population is so high that any side effect that didn’t turn up in Phase 1/2 would be utterly irrelevant – I believe over 5% of the *total* early-2020 LTCF population in the US has died of COVID.

          Sure, they weren’t technically proven to work. But the antibody data from Phase 1/2 means that we certainly couldn’t say it was 50/50 whether they’d work or not. And when the risk of not acting is so large…

          1. Karl Pfleger says:

            This is essentially the same argument proponents of vitamin D and ivermectin have been making.

          2. confused says:

            I don’t know that that analogy is fair, because of the mechanism. We know that most people successfully get over COVID – the immune system does usually work against it – so I think that the fact that a vaccine candidate produces immune response equal or better to that seen in recovered patients provides significantly more than “equipoise” evidence that it works.

            Not nearly enough evidence to vaccinate the population in general, sure, but there is a huge gap between that and “we have literally no idea whether it does anything”.

  22. BW says:

    Can someone comment on the use of retrospective studies, given that for some drugs a large number of covid patients will already have been taking them? For example with SSRIs. https://www.nature.com/articles/s41380-021-01021-4

    Does this kind of analysis sufficiently bolster smaller clinical trials or are larger blinded clinical trials necessary to make any judgment on the efficacy of a drug?

    1. Carl Pham says:

      Clinical trials are far from my expertise, so I hope someone with more knowledge answers the question, but just from a general empirical science point of view, my guess is that the big problem with retrospective studies is that you can only hope to do your measurement right if you happen to think of, and you have access to data on, every possible confounding variable.

      Let’s say you’re trying to figure out whether SSRIs are protective against severe COVID. So you look at what fraction of people taking them got bad COVID, versus what fraction of those *not* taking them got it. But wait…we already know there are risk factors for severe COVID. For example, it is usually much worse for fat people. Now, is there a correlation between being fat or skinny and being on SSRIs? There might be. Maybe fat people are more likely to be depressed, or maybe depressed people are more likely to not eat and lose weight, or maybe taking SSRIs tends to make you gain/lose weight. So if we measure less severe COVID in people on SSRIs and say aha! To the preprint server, Robin! it may be that what happened is that there were simply more skinny people on SSRIs and *that’s* the real reason there was less severe COVID — it had nothing to do with the SSRI.

      There are statistical ways to correct for this — but *only* if the relevant data was written down somewhere, e.g. if all your SSRI takers and non-takers had their BMI written down, too. As well as whether they were old or young, had diabetes or not, had impaired immune systems or not, worked in an occupation with frequent exposure or not, et cetera, including every known *and* every unknown risk factor for severe COVID. Only if you can think of every possible confounding variable *and* have the data on it can you reliably correct for these potential biases.

      A well-designed prospective trial doesn’t have the same problem because people are deliberately randomly assigned to the treatment and control groups, so *even if* there is some confounding variable you don’t know about, roughly the same number of people with and without it get sorted into both groups, so any effect is naturally averaged out (if you have enough people in the study, which depends on how common your confounding factor might be). The beauty of this is that you don’t need to know what the factors might be, you just need to have enough people and be sure to sort them truly randomly (which is one reason these things get done “blindly”, so even any unconscious bias on the part of patient or scientist can’t affect the sorting — I mean, what if SSRIs *did* have an effect, and since the scientists hope and believe their drug will work, they unconsciously tend to sort all the cheerful friendly people on SSRIs into the treatment arm…oops).

      Sometimes you can find “natural” random trials retrospectively, some situation where people were naturally sorted essentially randomly into treatment and control groups. But this is tricky. For example, in the case of SSRIs, people are not *randomly* sorted into getting or not getting SSRIs — I mean, not if psychiatry has any value at all. They’re deliberately routed into one or the other categories, far from any random choice. Does that matter? Don’t the criteria for prescribing or not prescribing an SSRI have nothing to do with COVID? Well, alas, we don’t know for sure — we’re back to the original problem of whether we’ve accidentally sorted people on the basis of an unknown factor that *does* matter to COVID into the SSRI/non-SSRI group.
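      To make the confounding point concrete, here is a toy simulation of my own (every number invented for illustration): the drug does nothing, but a hidden risk factor is correlated with who ends up taking it, so the naive retrospective comparison shows a spurious benefit that the randomized comparison does not.

```python
# Toy confounding demo: a useless drug looks protective in a retrospective comparison
# because a hidden risk factor (e.g. obesity) is correlated with who takes the drug.
# All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
risk_factor = rng.random(n) < 0.4                    # hidden confounder
p_severe = np.where(risk_factor, 0.20, 0.05)         # outcome risk depends only on the confounder
severe = rng.random(n) < p_severe                    # note: the drug never enters this line

# "Retrospective study": people with the risk factor are less likely to be on the drug
on_drug = rng.random(n) < np.where(risk_factor, 0.2, 0.5)
print("retrospective:", severe[on_drug].mean(), "vs", severe[~on_drug].mean())

# Randomized trial: a coin flip breaks the link between treatment and the confounder
assigned = rng.random(n) < 0.5
print("randomized:   ", severe[assigned].mean(), "vs", severe[~assigned].mean())
```

      The retrospective split shows a sizeable gap between the drug and no-drug groups even though the drug is inert; the randomized split shows essentially none. Correcting the first comparison requires knowing about, and having data on, the risk factor.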

      As I said, I hope someone with real knowledge of the field answers, but that’s my take just from the point of view of basic empiricism, for what it’s worth.

      1. CET says:

        ” Only if you can think of every possible confounding variable *and* have the data on it can you reliably correct for these potential biases.”

        True, but I would add a further caveat to retrospective trials: there needs to be a clear rationale for each variable that’s controlled for. I’ve seen some really awful public health studies (with a lot of citations) that were pretty clearly massaged by trying out different combinations of adjustments until they got statistical significance.

      2. confused says:

        >> Only if you can think of every possible confounding variable *and* have the data on it can you reliably correct for these potential biases.

        Wouldn’t this no longer be true if the retrospectively observed effect was large enough to overwhelm confounding effects?

        I mean, probably this doesn’t happen in real life very often, especially for something this complex, but I’m asking from a “scientific methods” perspective.

        1. CET says:

          I would tend to agree. Heavily ‘controlled’ data in a retrospective study strikes me as being a little like results from a cell culture assay that only work under very specific conditions (‘only with these specific additives, this set of antibiotics, cells at a passage number of less than 10, only when dosed as the TFA salt, at 12 hour intervals with a complete change of media each time, and only when the assay is started during a new moon’).

          Maybe the result is technically true given all of the caveats, but if it were worth anything, it would be more robust.

    2. Tom says:

      In my view, retrospective studies have their place in clinical research, though only in the sense that they should be used to point at a “Huh, I don’t know how that would work, but let’s look into it”, as Derek put it in his post. The information from a retrospective study should only point towards the design of a full randomized, double-blinded controlled clinical trial to actually determine if the considered treatment is effective.

  23. Jonathan B says:

    I fear clinical academics face a perverse incentive. In a former life (now retired) I was for a while the first pass internal review of medical school grant proposals. There was a depressing number of early career doctors proposing a small scale trial on an interesting but largely speculative idea. I had to point out gently that their best possible outcome would be “more research is needed” when that was their justification for the study in the first place.

    Unfortunately running a small trial that can be published in their name, even if largely uninformative, carries more kudos than joining a big well designed trial organised elsewhere – who is even going to notice if you are one of 100 collaborating investigators listed in a footnote?

  24. Rob says:

    Maybe not quite on topic, but does anyone know why the Merck trial of Molnupiravir is taking so long? It doesn’t seem like they are in hurry-up mode and we could sure use an effective oral therapy.

    1. Derek Lowe says:

      Supposedly the data are coming out very shortly. . .

  25. Franksnbeans says:

    No discussion of Novavax’s stellar phase 3 data??? That’s odd…

    96% against original strain mild, mod, severe
    86% UK
    55% SA
    Over 80% efficacy with single dose.
    100% protection against severe disease

    1. Marko says:

      What I found most interesting about the recent Novavax data is how they went from saying that seropositivity among the placebo group provided absolutely no protection against disease in SA to admitting that such protection is in a similar range (i.e. overlapping CIs) as for their vaccine.

      1. Franksnbeans says:

        whatever floats your boat

  26. zeltcm says:

    This is an occasion to bring up one of my favorite statistics textbooks – “Statistics Done Wrong” by Alex Reinhart. One of the chapters gets into statistical power and its disuse, and another into problems with p-values (https://www.statisticsdonewrong.com/).

  27. Blaine White, M.D. says:

    I don’t think my first attempt to post on this went through, so here’s another with better references. For pathophysiology and several therapeutic MOAs and early trials or reports, you can see my current 200-reference review for our staff and residents (https://drive.google.com/file/d/159rh2onLuW2fAQwac8fMQ5BDFQeAZmtj/view). Mainoli et al. have reported that only 7.9% of registered C19 trials reported results publicly within 5 months (https://doi.org/10.21203/rs.3.rs-131757/v1). Indeed, a major problem of RCTs is that they are not completed in a timely manner, even in the U.S., which has seen over 25 million cases of C19. For example, the UMich trial of dipyridamole (NCT04391179) was first posted 5/1/2020; there are still no results posted. The UPenn trial of cyclosporine (NCT04412785) was first posted 6/2/2020; there are still no results posted. This leaves clinicians and patients confronting large numbers of infections with an “establishment” sneering about “evidence-based medicine” and RCTs that are in fact near worthless, because they either aren’t reported in a timely way or disagree with the big-pharma interests involved (remdesivir – see the review). And that is the real uncomfortable truth during more than a year in which 1 in 600 Americans have died from C19.

  28. SALEH says:

    A large and fast RCT needs a lot of money to be performed. Recruiting here and elsewhere is not an issue since it is a world pandemic. The main actual race is vaccine oriented, and those efforts are properly funded.
    That leaves a very small place for active treatment strategies, mostly performed by repositioning old inexpensive medications, mainly in the underdeveloped countries.
    I don’t believe that RCTs are the only way to go in such a crisis. Well designed lower-grade-evidence trials and cohort-based data can help, even if not ideal.
    Many important medical discoveries date from before the EBM era.
    Why not help those countries better perform their trials? But frankly that doesn’t seem to be the priority. Who cares.
    Yes, vaccination is the main answer to such a global pandemic, but I still believe that other combined options have to be given a real chance for the benefit of the many persons who are refusing vaccination, whatever their reason for refusal.

  29. Blaine White, M.D. says:

    As I indicated in my above post, with regard to how we use RCTs, I have specific doubts about both the efficacy and EUA approval process for remdesivir. Similar concerns have been expressed by Science magazine (https://www.sciencemag.org/news/2020/10/very-very-bad-look-remdesivir-first-fda-approved-covid-19-drug). The large WHO study found it ineffective for reducing invasive ventilation or mortality (https://doi.org/10.1101/2020.10.15.20209817). Beigel et al. reported a prospective, placebo controlled, double-blind RCT without significant difference in mortality, and of 469 SICK patients (246 placebo and 223 remdesivir) requiring either hi-flow oxygen, non-invasive mechanical ventilation, invasive ventilation, or ECMO, the recovery curves over 29 days post drug/placebo administration were essentially identical (DOI: 10.1056/NEJMoa2007764). Spinner et al. reported another RCT of remdesivir in a total of 596 patients with no evidence of an effect on mortality (doi:10.1001/jama.2020.16349). Most recently a Japanese study found it ineffective (https://doi.org/10.1101/2021.03.09.21253183), and a meta-analysis of 4 RCTs including 7,333 patients (https://doi.org/10.1101/2021.03.04.21252903) “showed no difference in survival in patients who received remdesivir therapy compared to usual care or placebo.” The WHO has recommended against its use. Meanwhile, the FDA continues an EUA, and several financial news services reported that Gilead had $1.9-billion income from remdesivir in the 4th quarter 2020. While that has gone on, several state medical licensing boards (including in my home state) have issued veiled threats about use of other potentially effective treatments “off label.” This typically involves cheap, safe, long available, and off patent medications with considerable basic science and clinical cohort evidence suggesting effectiveness, such as ivermectin, dipyridamole, and cyclosporine (see my 200-reference review for MOAs and clinical reports – https://drive.google.com/file/d/159rh2onLuW2fAQwac8fMQ5BDFQeAZmtj/view). In the case of ivermectin, a search of clinicaltrials.gov for Covid19, ivermectin, and United States identified only 4 trials, the earliest from UKentucky posted 5/5/2020, none completed or with results. I hate conspiracy theories. But, the above evidence reeks of a broken and/or corrupt system that has ignored RCTs for a clearly ineffective but very profitable drug that continues to hold a FDA EUA, AND has also remained vigilantly ignorant of the molecular biology, immunology, and pathophysiology together with observational clinical studies and some meta-analyses (https://www.researchsquare.com/article/rs-317485/v1) pointing toward the possibility of inexpensive, effective, early treatment. Just perhaps the problem does not begin with an inadequate number of RCTs, but rather with greed and determined scientific blindness while 535,000 Americans died from C19.

    1. Doug H MD says:

      Thank you for the review of remdesivir.

    2. Marko says:

      Thank you for a review of the grifters who control the drug evaluation and approval narratives in the US.

  30. PieCharts says:

    Great post. Loved the line from Rudyard Kipling’s “If”.

  31. Philip says:

    Can someone please explain to me, in this context, why aspirin wasn’t studied as an early outpatient therapy from the moment coagulopathy was found to be a problem?

  32. SALEH says:

    Well-designed observational studies are of interest if concordant results are obtained from many independent trials; that seems to be the case for ivermectin. If RCTs point in the same direction, then the level of confidence is even greater.
    I still believe that the best strategy is combining products based upon accumulating knowledge (even a presumption of efficacy) and also upon the mechanism of COVID infection.
    Putting concomitant or successive obstacles in front of the virus, to make it more difficult for the virus to invade, is a much more reasonable and ethical attitude than paracetamol-and-wait, especially if the objective is not preventing illness but rather reducing its severity.
    This multiple-product strategy (with well-tolerated products) is, because of its complexity, not compatible with RCT design (it is compatible with common sense, which still exists).

  33. idiotraptor says:

    Philip –
    I found your question interesting and just referred it to my hematologist wife. She ventured that the nature of the thrombotic events being observed in COVID-19 patients (DVTs and mini-strokes) may have been considered too high a bar for aspirin alone and more amenable to heparin products. Pending verification, she thought some clinical interventions/trials may have used aspirin in combination with other agents. She wasn’t absolutely certain on this point.

  34. Blaine White, M.D. says:

    I agree with Saleh and want to respond briefly to idiotraptor. I don’t want to cite all of this here; it’s in the big review linked above. It’s important to recognize that in C19 the coagulopathy is clotting, not bleeding. Sick patients have some D-dimer elevation but markedly elevated von Willebrand factor; VWF indicates endothelial damage and points at platelet activation. Indeed, autopsy studies have found in C19 lungs 9-fold the level of platelet microclots seen in flu, and the review includes a photomicrograph after immunohistochemical fluorescence labeling that is a compelling picture of a platelet/NETs lung microclot. Platelets have Fc-gamma-2 receptors, and there are studies demonstrating their activation by IgG-C19 immune complexes. So there is evidence that the clotting coagulopathy starts with platelets, suggesting conventional anti-thrombin treatment (heparin and derivatives) would be only marginally effective. In fact, an early large Italian cohort study and subsequent VA data found very modest mortality reduction with heparin or enoxaparin. In contrast, a VA cohort study of 28,000 patients found a >50% mortality reduction using low-dose aspirin (which drives covalent chemical stabilization of platelets).

    This is another example of basic-science-illuminated pathophysiology and common-sense selection of a treatment approach without waiting for the RCT religious liturgy. The case that sick C19 is a macrophage activation syndrome (MAS), and the basic-science-established antiviral and anti-MAS activity of cyclosporine (multiple large Spanish cohort studies), is another example. The use of ivermectin as an agent against RNA viruses is still another. So Dr. Peter McCullough of Texas A&M and Baylor hospital is quite right: if as physicians we accept our responsibility to study and learn the molecular biology, immunology, and pathophysiology of the infection and potential therapeutic approaches, we can rapidly assemble safe and effective approaches to early outpatient treatment. If instead we sit on our hands and wait for the bureaucracy and year-long RCTs to give us permission to treat people, we will let a lot of folks die waiting for vaccines.

  35. SALEH says:

    Waiting for facts is starting to look more and more like a film I saw a few years ago, The Tartar Steppe, from a novel by the Italian author Dino Buzzati (1940).
    The novel tells the story of a young officer, Giovanni Drogo, and his life spent guarding the desert Bastiani Fortress, watching and waiting for the arrival of the Tartars, who never came.

    1. sgcox says:

      Indeed.
      Or maybe El coronel no tiene quien le escriba, especially the final words.

  36. SALEH says:

    Yes, sgcox, from the Nobel laureate Gabriel García Márquez.
    Maybe Derek Lowe is starting to be aware of this tragicomic situation, opening a small window from time to time to let some fresh air into the blog.
