Clinical Trials

Drug Dosing

First in humans! That’s a big step for a drug project – you’ve identified a clinical candidate with enough potency, selectivity, etc. to be a plausible drug, you’ve made it past toxicity testing (always a black-box cross-your-fingers exercise), and you’ve figured out a way to dose the stuff in human subjects. But how do you know how much drug to dose?

That’s a big question that has had a lot of work put into it over the years, and this recent short review in J. Med. Chem. will tell you some of the main parts of the story. One of the biggest parts is “allometric scaling”, the extrapolation of doses in animals to humans on the basis of body weight, blood volume, half-life, and other pharmacokinetic data. That’s led to progressively better and better modeling of what a reasonable dose in humans would look like, allowing projects to get earlier reads than ever of what developing their compound might be like.
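To make the basic idea concrete, here's a minimal sketch of the simplest form of allometric scaling: body-surface-area conversion using the standard FDA Km factors. The function name and example numbers are just for illustration; real dose projections layer much more PK data on top of this.

```python
# Standard FDA Km conversion factors (body weight / body surface area)
# for a few common species; these values come from the FDA's guidance
# on estimating starting doses in initial clinical trials.
KM = {"mouse": 3, "rat": 6, "dog": 20, "human": 37}

def human_equivalent_dose(animal_dose_mg_per_kg, species):
    """Convert an animal dose (mg/kg) to a human-equivalent dose (mg/kg)
    by body-surface-area scaling."""
    return animal_dose_mg_per_kg * KM[species] / KM["human"]

# e.g. a 10 mg/kg dose in rat scales to roughly 1.6 mg/kg in humans
print(round(human_equivalent_dose(10, "rat"), 2))
```

A safety margin (typically a factor of 10 or more) is then applied to get a maximum recommended starting dose — this sketch only shows the scaling step itself.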

And as the paper mentions, you can see this improvement reflected in the decline of Phase I clinical failures over the years. Phase I is all about blood levels, setting things up for Phase II and some actual patients with the disease you’re targeting. (You’re also making sure that there’s nothing alarming in human tox that you missed in the animal testing, but that’s fortunately a very rare event). Getting the initial dose wrong was at one time something that worried people more, but it’s clearly harder and harder to mess that one up. I’ve only been involved in one Phase I failure over the years, when the first-in-human blood data came in at about 25% of what the project team had estimated, which sent us back to the ol’ drawing board because that would have made the projected dose much less appealing.

You might think that patients would just take what you tell them to take, but ask any practicing physician about that and you’ll get a rueful expression. Patient compliance is a big issue – anything that makes it harder to take a medication (or just easier to get it wrong) is to be avoided if possible. The preferred oral dosage is one small pill once a day, but not everything can be made to fit that pattern. People are familiar with “horse pill” sized antibiotics, and that’s one way you can run into trouble, having to give a physically large dose just to reach the needed blood levels for efficacy. A funny dosing schedule is also bad news. Once a day is great, but once every other day would be horrible, because people lose track or forget.

In ancient times (a few decades back), people would estimate human dose just on the basis of a few parameters from rodent to dog to human. More complete modeling not only takes the aforementioned pharmacokinetic data into account, but also pharmacodynamics – how much compound do you need, in what tissue or compartment, and for how long, to achieve clinical efficacy? The answer to that question can vary widely, from “well, you just need to dink that part down a bit and the follow-on effects will amplify things” to “you have to shut that protein down totally before you’ll see anything”. That obviously calls for a lot more knowledge of the drug’s effects and of human biology in general, but that’s just the sort of thing that we’ve been piling up over the years. The paper has some case studies of recent clinical programs dealing with a whole range of issues to illustrate the sort of thinking that goes into this work – a compound targeting a glucose reuptake transport protein in the kidney is going to need a much different approach than an irreversible enzyme inhibitor that needs to get into the CNS (to pick two of those examples). Clearance and half-life considerations are obviously likely to be wildly different between those two, and that has to be balanced against the potencies of the drugs and the “tone” of the respective target systems (that is, how much inhibition of the target do you need to have an effect?). There’s that combination of PK and PD.
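As a toy illustration of that PK/PD combination, here's a one-compartment oral PK model (the classic Bateman equation) with a crude PD readout: the time spent above a notional efficacious concentration. Every parameter value below is invented for the example, not taken from the paper.

```python
import math

def concentration(t, dose_mg=100, F=0.5, ka=1.0, ke=0.1, V=40.0):
    """Plasma concentration (mg/L) at time t (hours) after a single oral
    dose: Bateman equation for first-order absorption (ka) and
    elimination (ke), with bioavailability F and volume of distribution V.
    All defaults are made-up illustrative values."""
    return (F * dose_mg * ka / (V * (ka - ke))) * (
        math.exp(-ke * t) - math.exp(-ka * t)
    )

def time_above(threshold_mg_per_L, hours=24, step=0.1):
    """Crude PD link: hours (out of the first `hours`) that the
    concentration stays above a notional efficacious level, e.g. the
    concentration giving the needed degree of target inhibition."""
    times = [i * step for i in range(int(hours / step))]
    return sum(step for t in times if concentration(t) > threshold_mg_per_L)
```

A target that needs near-total, continuous shutdown demands a high `time_above` at a tough threshold, while a target with amplifying downstream effects might only need a brief excursion above a modest one – same PK curve, very different dose implications.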

The hope is that predictions of dosing will continue to become more and more computational, allowing you to start avoiding potential trouble earlier and earlier in the drug discovery process. We most certainly cannot yet rip through a bunch of proposed structures and predict how they’ll behave on oral dosing, but we can continue to make better predictions that require less intensive animal testing, which is just what’s been happening.

But for this to continue, the authors note, the dose-finding process is going to have to get less aligned with the late preclinical development part of the organization and more aligned with early-stage research. Traditionally these folks have only really kicked in as the projects start nominating advanced drug candidates with a serious shot at the clinic, but the only way to apply dose-finding techniques earlier in the process is to, well, apply them earlier in the process. This will need adjustments both on the part of the organization and on the part of the people doing the work:

Perhaps the biggest challenge is the pervasive belief that dose predictions are done near the end of the design process to support nomination and early clinical development. . .those doing dose predictions in support of design must adopt a design mindset, balancing the need for accuracy and precision with the need to provide insight on a relevant time-scale to design. This can be an unusual situation for many PKPD experts that were trained in a traditional clinical and/or academic environment where timelines are longer and the demand for accuracy and precision is much higher. . .

That effect shows up a lot. Consider drug synthesis: there’s no way you could do early-stage hit-to-lead or lead optimization work under the standards required to file for a clinical candidate. You’d be spending all your time filling out forms, for one thing. The early stage stuff often needs rough-and-ready answers (Worked? Didn’t work?) while the later GLP/GMP standards are all about strict documentation and reproducibility. People who have done one end of this process usually have a hard time adjusting to the other. We’ll see how this works for the dosing folks!

22 comments on “Drug Dosing”

  1. Chrispy says:

    In antibody work, I was struck by how different antibodies against a soluble ligand could have such radically different half-lives. The “PK guy” at a big pharma just shrugged and said: “That’s why we test.” But do the tests have any predictive value? JAX is promoting their FcRN knock-in mouse as an alternative, which seems like it would be more accurate. But these antibodies I am talking about all had the same Fc (the part that binds FcRN) and still had radically different half-lives, so it isn’t clear what is really going on.

  2. Diver Dude says:

I’ve helped design and run a fair number of First in Man programmes over 2 decades. Small molecules, proteins and some weird-and-wonderful things in between.

    Trust me, you have no real idea what’s going to happen. You do your state of the art modelling. You do your best to be many multiples below any possible effect exposure at the first dose level. You do your best to have robust biomarkers in place. And then you meet high density reality. At first dose, I’ve had everything from no detectable levels (good) to full pharmacological effect (the project team loved it, I nearly had a heart attack). The biological differences between pre-clinical and clinical are huge and, usually, non-obvious except with hindsight.

Anyone who tells you they know what’s going to happen at the first dose hasn’t done the job yet.

    1. Some idiot says:

      “Then you meet high density reality.”

      Thanks! Best line I have heard for a while…! Sums it up perfectly…!


      1. Diver Dude says:

It’s a Derek Lowe original from a couple of years ago 🙂 and it is applicable in a disconcertingly wide variety of drug development scenarios!

        1. Some idiot says:

          Oh yes indeed…! 🙂

    2. Andre Brandli says:

      @Diver Dude: Any thoughts, experiences, or wisdom to share about the first-in-man dosing of antisense oligonucleotides?

      1. Diver Dude says:

AB – nope, never had experience of that other than as a whole other ball game. But… my experience suggests extreme caution, which is rarely popular with project teams with time lines to hit. One of the times I had full pharmacological effect was with an engineered Ab, and it turned out our dose selection was off because, in vivo, man was more like a mouse than a monkey. It turned out with hindsight to have been predictable. Although “post hoc ergo propter hoc” is also a trap.

      2. Diver Dude says:

AB – nope, never had experience of that other than as a whole other ball game. But… my experience suggests extreme caution, which is rarely popular with project teams with (usually insane) time lines to hit. The time we had full pharmacological effect with first dose was with an engineered Ab, and it turned out our dose selection was off because, in vivo, man was pharmacologically speaking more like a mouse than a monkey.

        “The best laid plans o’ Mice and Men, gang aft agley”. I think Robert Burns was a clinical pharmacologist.

        1. Andre Brandli says:

          Thanks! What’s your opinion on intrathecal delivery and dosing for the cerebrospinal fluid? What’s the best animal model to make predictions for first-in-man studies?

      3. Anonymous says:

        An anecdote out of an early antisense company: They were under pressure to deliver on management’s repeated promises to investors to produce a drug. (“The company had been forecasting clinical trials of an AIDS antisense agent since early 1991. After a management change in mid-1991, the company pushed its projection back.”) They eventually arranged for a clinical trial of an HIV antisense drug in France (would it have been approved in the US?). At least one member of scientific management was strongly against it; he was “encouraged” to resign from the company. I think the first doses were administered to 9 patients who almost immediately experienced serious complications, including death (2 of 9, as I recall).

        As of 2016, there are a couple of FDA approved antisense therapies and many more in development. Eteplirsen is an approved antisense drug for muscular dystrophy that has been a topic of discussion In The Pipeline.

      4. Emjeff says:

Clinical PK guy checking in. Micro-dosing has never gotten any traction because of a few things, in my opinion. First, we are paying a lot more attention to bioavailability in the non-clinical space. This decreases the risk that a bad candidate will get nominated. Secondly, the timelines for synthesizing the labeled compound are still very long. In early development, the equation that holds true the most is Time=$; most project teams are simply not willing to spend the time and money to answer this small question. Finally, the fact is that these studies are not very predictive. You are extrapolating quite a bit, in most cases, from the micro-dose to the putative clinical dose. The data I have seen on the predictability of micro-dosing studies is not impressive.

  3. HeredrinkthisSocrates says:

    I don’t think the whole micro-dosing development design ever really got traction, did it?

  4. Lambchops says:

    I feel like this is one of the areas of drug development that I know the least about. I guess perhaps because it’s the area where a successful study is a ‘boring’ study and studies like the infamous “elephant man” trial, where everything goes horribly, horribly wrong.

The article is behind a paywall for me so thanks to everyone posting their experiences. I was wondering if anyone had any stories to share about first in human studies in oncology or other areas where these studies are in the patient population rather than the healthy population. What additional challenges does this raise and (aside from getting an early read on efficacy) how does it substantially change the design and approach compared to studies in healthy volunteers?

    1. Lambchops says:

      Gah, meant to say that studies where everything goes horribly wrong are the only ones those not involved tend to hear about!

    2. Hap says:

      What is the “elephant man” trial?

      1. Hap says:

        Google is your friend – it’s the TeGenero whoopsie…

  5. Chris says:

How are Phase I results translated into dosage for people with kidney or liver problems? I assume the elimination rates are going to be completely different here. Is there a common procedure in this case? Say, to reduce the dose by 10% for patients with [some standardized] effective impairment of 10%?

    1. Matt says:

You can model, but depending on how the drug is metabolized, it’s standard practice to perform studies to look at PK in hepatically and renally impaired subjects. FDA has an entire guidance around this.

  6. Red Fiona says:

    Can I second the statement about antibiotics. I was on a two-week treatment that involved 4 times a day and instructions about timing with regard to food. It was basically impossible to maintain.

  7. Simon p says:

Before testing a car in the wind tunnel, a car manufacturer uses cluster computing and simulates the new model’s aerodynamics on the computer. It is time to put some trust in this (system) ENGINEERING discipline also in the pharma space!!! Yet I see many frustrating comments here about how they have been failing to predict even a ballpark in humans.
    Just hire proper PhD engineers (MS in engineering and PhD in engineering) with proper modeling knowledge, and spend some of your budget money towards computational pharmacology.

    PS: if you are thinking of turning one of your PharmD to modeling you already lost.

    1. Druid says:

      Any new car is a me-too of a Ford Model-T, with a wheel at each corner and one in front of the driver. Your car is in contact with air which obeys a modest number of equations. Then you put it in a windtunnel because your predictions are not reliable enough. A drug is in contact with perhaps 100,000 flexible proteins which keep changing shape (not counting those in the other species required for safety testing) and a few thousand varieties of other types of molecule. Some of those proteins are enzymes which turn the drug into metabolites. Patients come in all shapes and sizes and take other medicines. It is hard to beat trial and test and redesign and finally testing in patients. I guess engineers designed the 737 Max and its controls. Wikipedia lists recent bridge failures as 9 in 2017, 9 in 2018 and 7 in 2019. Some of those are due to irresponsible drivers but pharma co’s are counted as responsible for the dumb things that patients or physicians do with their drugs. If you want a challenge, see if you can understand how the kidney works.
      It is not hard to predict how a drug will behave, it is just hard to get it right.

    2. Donal says:

I’m a modeller in the chemical engineering space (mainly CFD on reacting systems). Aerodynamics simulations are quite simple compared to even the most basic of reacting systems. CFD was an industrial tool for aerodynamics 30 years ago; it’s really only in the last 10 years that computing power has caught up to the point where typical chemical systems can be modeled reliably. These typical chemical systems might involve 10 or so reactions with turbulence and mass transfer effects to solve for, and these problems will still need to run on a computing cluster.

      Biological systems are orders of magnitude more complex chemically than industrially significant chemical systems. What’s understood about biological systems is still a small fraction of what’s actually going on. It’s just about possible with current modelling technology to create a whole cell model of a single-celled organism, which in biological terms is as simple as it gets.

      Right now we are probably decades away from the kind of modelling that could usefully predict drug behaviour in a complex organism. If quantum computing can be harnessed we may be able to reduce that timeline somewhat on the computational side. Whether we can understand enough about the biological systems to model them mathematically is another question, and a harder one to answer: if the models can be run then it becomes possible to play with best guesses of the system’s behaviour and see what lines up with reality.

Comments are closed.