First in humans! That’s a big step for a drug project – you’ve identified a clinical candidate with enough potency, selectivity, etc. to be a plausible drug, you’ve made it past toxicity testing (always a black-box cross-your-fingers exercise), and you’ve figured out a way to dose the stuff in human subjects. But how do you know how much drug to dose?
That’s a big question that has had a lot of work put into it over the years, and this recent short review in J. Med. Chem. will tell you some of the main parts of the story. One of the biggest parts is “allometric scaling”, the extrapolation of doses in animals to humans on the basis of body weight, blood volume, half-life, and other pharmacokinetic data. That’s led to progressively better and better modeling of what a reasonable dose in humans would look like, allowing projects to get earlier reads than ever of what developing their compound might be like.
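The core idea behind allometric scaling is a power-law relationship between body size and pharmacokinetic parameters like clearance. A minimal sketch, assuming the commonly used ~0.75 exponent for clearance (the exponent, species weights, and clearance values below are illustrative assumptions, not taken from the paper):

```python
def scale_clearance(cl_animal_ml_min, w_animal_kg, w_human_kg=70.0, exponent=0.75):
    """Extrapolate clearance to human via a power-law (allometric) fit.

    Clearance is often assumed to scale with body weight to roughly the
    0.75 power; real projects fit the exponent across several species.
    """
    return cl_animal_ml_min * (w_human_kg / w_animal_kg) ** exponent

# e.g. a 0.25 kg rat with clearance of 5 mL/min, scaled to a 70 kg human:
cl_human = scale_clearance(5.0, 0.25)
```

In practice teams fit the exponent and intercept across rodent, dog, and monkey data rather than trusting a single species, and single-species extrapolation like this is exactly the "ancient times" approach the post mentions below.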
And as the paper mentions, you can see this improvement reflected in the decline of Phase I clinical failures over the years. Phase I is all about blood levels, setting things up for Phase II and some actual patients with the disease you’re targeting. (You’re also making sure that there’s nothing alarming in human tox that you missed in the animal testing, but that’s fortunately a very rare event). Getting the initial dose wrong was at one time something that worried people more, but it’s clearly harder and harder to mess that one up. I’ve only been involved in one Phase I failure over the years, when the first-in-human blood data came in at about 25% of what the project team had estimated, which sent us back to the ol’ drawing board because that would have made the projected dose much less appealing.
You might think that patients would just take what you tell them to take, but ask any practicing physician about that and you’ll get a rueful expression. Patient compliance is a big issue – anything that makes a medication harder to take (or just easier to get wrong) is to be avoided if possible. The preferred oral dosage is one small pill once a day, but not everything can be made to fit that pattern. People are familiar with “horse pill” sized antibiotics, and that’s one way you can run into trouble: having to give a physically large dose just to reach the needed blood levels for efficacy. A funny dosing schedule is also bad news. Once a day is great, but once every other day would be horrible, because people lose track or forget.
In ancient times (a few decades back), people would estimate human dose just on the basis of a few parameters from rodent to dog to human. More complete modeling not only takes the aforementioned pharmacokinetic data into account, but also pharmacodynamics – how much compound do you need, in what tissue or compartment, and for how long, to achieve clinical efficacy? The answer to that question can vary widely, from “well, you just need to dink that part down a bit and the follow-on effects will amplify things” to “you have to shut that protein down totally before you’ll see anything”. That obviously calls for a lot more knowledge of the drug’s effects and of human biology in general, but that’s just the sort of thing that we’ve been piling up over the years. The paper has some case studies of recent clinical programs dealing with a whole range of issues to illustrate the sort of thinking that goes into this work – a compound targeting a glucose reuptake transport protein in the kidney is going to need a much different approach than an irreversible enzyme inhibitor that needs to get into the CNS (to pick two of those examples). Clearance and half-life considerations are obviously likely to be wildly different between those two, and that has to be balanced against the potencies of the drugs and the “tone” of the respective target systems (that is, how much inhibition of the target do you need to have an effect?). There’s that combination of PK and PD.
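That PK/PD interplay can be made concrete with a deliberately crude sketch: the target’s “tone” sets how much inhibition you need at trough, potency converts that into a required concentration, and half-life and dosing interval determine whether a given dose actually stays above it. Every number and the one-compartment model below are illustrative assumptions, not anything from the paper:

```python
import math

def conc_for_inhibition(ic50_nM, fraction):
    """Concentration giving a fractional inhibition (simple Emax model, Hill = 1).

    A target needing 90% knockdown demands 9x IC50; one where 50% is
    enough needs only 1x IC50 -- that's the "tone" question in numbers.
    """
    return ic50_nM * fraction / (1.0 - fraction)

def trough_at_steady_state(dose_mg, vd_L, half_life_h, tau_h, f_oral=1.0, mw=400.0):
    """Steady-state trough (nM) for repeat oral dosing: one compartment,
    instantaneous absorption -- a rough approximation, not a real PK model."""
    k = math.log(2) / half_life_h                      # elimination rate constant
    c0_nM = (f_oral * dose_mg / vd_L) / mw * 1e6       # mg/L -> nM at MW 400
    return c0_nM * math.exp(-k * tau_h) / (1.0 - math.exp(-k * tau_h))

# Needing 90% vs. 50% inhibition at trough, for a 10 nM compound:
need_90 = conc_for_inhibition(10.0, 0.90)   # 90 nM required
need_50 = conc_for_inhibition(10.0, 0.50)   # 10 nM required

# Does 100 mg once daily (12 h half-life, 50 L Vd) cover the tougher target?
trough = trough_at_steady_state(100.0, 50.0, 12.0, 24.0)
covered = trough >= need_90
```

Even a toy like this shows why the two example programs diverge: an irreversible CNS inhibitor cares far less about trough coverage (the enzyme stays dead until resynthesized), while a reversible transporter blocker lives or dies by exactly this kind of arithmetic.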
The hope is that predictions of dosing will continue to become more and more computational, allowing you to start avoiding potential trouble earlier and earlier in the drug discovery process. We most certainly cannot yet rip through a bunch of proposed structures and predict how they’ll behave on oral dosing, but we can continue to make better predictions that require less intensive animal testing, which is just what’s been happening.
But for this to continue, the authors note, the dose-finding process is going to have to get less aligned with the late preclinical development part of the organization and more aligned with early-stage research. Traditionally these folks have only really kicked in as the projects start nominating advanced drug candidates with a serious shot at the clinic, but the only way to apply dose-finding techniques earlier in the process is to, well, apply them earlier in the process. This will need adjustments both on the part of the organization and on the part of the people doing the work:
Perhaps the biggest challenge is the pervasive belief that dose predictions are done near the end of the design process to support nomination and early clinical development. . .those doing dose predictions in support of design must adopt a design mindset, balancing the need for accuracy and precision with the need to provide insight on a relevant time-scale to design. This can be an unusual situation for many PKPD experts that were trained in a traditional clinical and/or academic environment where timelines are longer and the demand for accuracy and precision is much higher. . .
That effect shows up a lot. Consider drug synthesis: there’s no way you could do early-stage hit-to-lead or lead optimization work under the standards required to file for a clinical candidate. You’d be spending all your time filling out forms, for one thing. The early stage stuff often needs rough-and-ready answers (Worked? Didn’t work?) while the later GLP/GMP standards are all about strict documentation and reproducibility. People who have done one end of this process usually have a hard time adjusting to the other. We’ll see how this works for the dosing folks!