Here’s a new paper calling for expanding the medicinal chemistry synthetic toolbox. There have been calls like this before, of course, but that doesn’t mean they were wrong. It’s not hard to figure out how we’ve ended up where we are, though (links added to replace footnotes in the paragraph below):
The limited set of reaction types used in medicinal chemistry can be rationalized by the use of several criteria in their selection. The first criterion is the availability of starting materials and reagents. The second is the ease of synthesis (such as short reaction times, moderate temperatures and high yields with limited by-products). Indeed, analysis of the synthetic methodologies used over the past 240 years in more than 6.5 million organic reactions indicated that key parameters, such as reaction time, temperature, pressure and solvent, are biased by anthropogenic factors. For example, half of the reactions were complete within <3 hours, and >90% of reactions were run at atmospheric pressure, typically between −80 °C and +200 °C. Furthermore, medicinal chemists are often reluctant to make difficult-to-synthesize molecules without compelling precedence or predictions, and the strength of computer-aided design is still largely geared towards prioritizing lists of compounds and designing libraries rather than predicting a single optimal compound. Consequently, chemists may prefer to focus on simple reactions that can efficiently cover a lot of ground and, indeed, this is what is observed when analysing what is actually made in medicinal chemistry laboratories.
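Just to make that bias concrete: the quoted numbers come from mining a large table of literature reactions and asking what fraction fall inside the "convenient" windows of time, pressure and temperature. Here’s a minimal sketch of that kind of bookkeeping in Python; the DataFrame layout, column names and rows are invented for illustration and are not taken from the paper itself.

```python
# Toy sketch (not from the paper): tabulating "anthropogenic" bias in a
# hypothetical reaction dataset. Column names (time_h, temp_c, pressure_atm)
# and the example rows are invented for illustration.
import pandas as pd

def bias_summary(reactions: pd.DataFrame) -> dict:
    """Fraction of reactions falling inside the 'convenient' windows
    described in the quoted analysis."""
    return {
        "done_within_3_h": (reactions["time_h"] < 3).mean(),
        "at_atmospheric_pressure": (reactions["pressure_atm"] == 1).mean(),
        "between_-80_and_200_C": reactions["temp_c"].between(-80, 200).mean(),
    }

# Made-up rows standing in for literature-mined data
df = pd.DataFrame({
    "time_h": [0.5, 2, 18, 1],
    "temp_c": [25, -78, 110, 25],
    "pressure_atm": [1, 1, 1, 20],
})
print(bias_summary(df))
```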
Those last links in the quoted paragraph are blogged about here and here, and see here as well. The authors mention the problem of robustness in new reactions (tolerance of a wide range of functional groups, or at the very least testing for such tolerance), and the lack of that kind of information makes you reluctant to try some new method out of the literature as well, because there’s no telling how your (often more polar and/or complex) molecules will handle it. As the paper points out, medicinal chemists are indeed biased towards robust reactions, things that (in our own experience) tend to deliver products instead of failing in interesting and puzzling ways.
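To give an idea of what that tolerance testing looks like in practice, a Glorius-style robustness screen runs the reaction in the presence of a panel of additives carrying common functional groups and asks how much of the yield survives each one. A toy sketch of the scoring, with completely made-up additive names and numbers:

```python
# Toy sketch of scoring a Glorius-style additive robustness screen.
# Yields are fractions of theory; the additives and numbers are invented.
control_yield = 0.92

screen = {
    "free aniline":  0.85,
    "primary amide": 0.78,
    "pyridine":      0.40,
    "free alcohol":  0.88,
    "aryl bromide":  0.15,
}

def robustness_score(control: float, additive_yields: dict) -> float:
    """Mean fractional yield retained across the additive panel."""
    retained = [min(y / control, 1.0) for y in additive_yields.values()]
    return sum(retained) / len(retained)

print(f"robustness score: {robustness_score(control_yield, screen):.2f}")
for name, y in screen.items():
    flag = "OK" if y / control >= 0.7 else "problem"
    print(f"{name:>15}: {y / control:.0%} retained  ({flag})")
```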
That’s the thing: the current state of synthetic organic chemistry in the medicinal labs is actually the result (for the most part) of pretty rational choices. We’re trying to make the most diverse compounds in the shortest period of time, using the reactions that we think are most likely to work. So you get a lot of workhorse transformations, and you get a completely understandable bias towards chemistries that you know you can radiate out from using a big set of diverse building blocks that you already have on hand. None of this is stupid in the least.
But it’s not necessarily the optimum, either. It really would be desirable to have more sorts of reactions in the “workhorse” category, and to have scaffold-forming reactions that deliver interesting starting points that are under-represented now. Over the years, various companies have made deliberate efforts toward this sort of thing (I know; I’ve been on some of them), but my impression is that it’s quite an uphill climb to make a real change in a typical screening collection. Importantly, though, the larger vendors of building blocks and intermediates have been making similar efforts, looking for commercial advantage.
This new paper goes into detail about the way that new software and automated hardware might make a change as well: synthesis programs don’t have the biases we do, and they can extract reactions from the literature that the average chemist might not have noticed, might be reluctant to try, or might not realize have been developed to a useful point. And automated methods for reaction discovery and optimization can clear out a lot of underbrush on the scope and robustness fronts, validating new techniques much faster than letting time do its usual work. This isn’t going to happen next week, or next month, but it is coming on. These efforts tie into the hardware-driven search for new reaction conditions and types, too: flow chemistry, photochemistry, high-pressure chemistry, electrochemistry and biocatalysis (among others) are sources of transformations that you just can’t do with the usual reactions in the usual flasks and vials, and the automated forms of these techniques can fill out the example tables to the point where a bench chemist can trust them. The machines are even more biased towards robustness than we humans are; we have common interests for sure.
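For a sense of what the automated optimization side looks like at its very simplest, here’s a toy condition screen over a small grid. A real platform would use something like Bayesian optimization and actual robot/HPLC data; the run_reaction() stand-in and the condition grid below are purely illustrative.

```python
# Toy sketch of automated reaction optimization: exhaustively screen a small
# condition grid and keep the best result. run_reaction() is a stand-in for
# a robot run plus an analytical yield assay; the numbers are invented.
from itertools import product

temperatures = [25, 60, 100]                 # degrees C
catalysts = ["Pd(PPh3)4", "XPhos Pd G3"]
solvents = ["dioxane", "MeCN", "DMSO"]

def run_reaction(temp: int, catalyst: str, solvent: str) -> float:
    """Placeholder for an automated run; returns a fake fractional yield."""
    fake = {"XPhos Pd G3": 0.3}.get(catalyst, 0.0)
    fake += {"dioxane": 0.4, "MeCN": 0.2, "DMSO": 0.1}[solvent]
    fake += 0.2 if temp == 60 else 0.0
    return min(fake, 1.0)

best = max(product(temperatures, catalysts, solvents),
           key=lambda cond: run_reaction(*cond))
print("best conditions:", best, "yield:", run_reaction(*best))
```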
It’s true that new chemical matter is not necessarily a rate-limiting step in drug discovery. But that’s drug discovery as we practice it now. As we get into newer modes (binders rather than functionally active molecules for protein degradation and the like, small-molecule/biomolecule hybrids, targeting things like disordered proteins and intracellular condensates, and so on), we could probably use all the help we can get. Anyone who’s run a number of screening campaigns will know of some high-value targets that never seem to turn up anything useful. One possibility is that there’s nothing useful to turn up. But given the size of druglike chemical space, and the tiny amount of it that we’ve explored so far, I’m not sure that’s the way to bet, either . . . anyway, don’t you want to start a project with something other than a kinase inhibitor scaffold or some Suzuki/amide combination? Sure you do!