I’ve written here about some of the work on high-throughput reaction optimization: setting up dozens (or hundreds, or thousands) of small test reactions to investigate the conditions needed to get particular transformations to go in high yields. There are plenty of useful reactions (especially some widely-used metal-catalyzed ones) that can be very sensitive to changes in solvents, bases, concentration, temperature, ratios of reagents, and other factors, and these happen in ways that can be nearly impossible to predict a priori. Sometimes you’re trying to optimize a particular reaction on a particular starting material (as is the case in process optimization work), and sometimes you want to increase your success rates when setting up a big library run for new diverse compounds. The only solution is to explore the reaction space a bit, until you find out (for example) that Catalyst 14 works well most of the time, as long as you’re using Base B, but if your starting material has a pyridine ring in it those conditions are going to flop unless you switch to Catalyst 53 and change solvents, and so on.
Automating that process is an obvious step, but how to get that automation to work reliably is sometimes not so obvious. And it should go without saying that a poor-quality high throughput setup is the worst of all worlds, generating unreliable junk data far more quickly and comprehensively than you could ever crank out lousy results by hand. Among the factors you have to consider are dispensing the different reagents reliably, mixing effects in the sample wells, evaporation of small volumes of solvent, and precipitation and solubility (both in the dispensing and reacting stages). All of these will affect the experimental results themselves, and they might well affect your ability to even figure out what those results are in the first place. That’s because reaction monitoring is almost always done in these cases by automated LC/mass spec sampling, and you don’t want to set up a few plates for analysis only to come back and find that the sampling needle plugged up catastrophically with gunk after eight runs, and so on. Never forget, any machine that’s capable of working unattended will be capable of screwing things up unattended, and you have to prepare for that.
But at the same time, you don’t want to end up in a situation where you have too many picky individual workarounds for things like reagent dispensing, because that will rapidly eat up the advantages of using automation in the first place. That’s what’s being addressed in this new paper from a group at AbbVie. They were wrestling with the reagent dispensing step. The standard answer to problems there is to just have everything in solution and use the pretty reliable liquid dispensing technologies that are available, but that’s just not always feasible. Once you switch over to trying to dispense solid reagents, though, you kind of fall out of the twenty-first century and into a world of scoops and spatulas, powders and clumps, a world where some reagents are hygroscopic and sticky, where others are dry, fine dusts that pick up static charge and floomph all over the nearby surfaces as they’re lifted out into the air, a world of widely varying particle sizes and solid flow behavior from sample well to sample well.
This work shows an ingenious solution to the problem. The team found that they could coat the reagents as a thin layer on tiny glass beads, as shown at right, and that these were a reliable dispensing medium once you calibrated each batch of reagent for its particular bead loading. The loading isn’t high, so each variety of “Chembead” behaves pretty much the same for solid-handling purposes, and the small amount of reagent by gross weight allows for small-scale dispensing with less error. The dispensing technology itself (a Chemspeed machine) didn’t alter the beads’ loading or properties, and the reagents were stable and active for at least 18 months of storage in this form. Interestingly, the paper notes that “reagents that are air- and moisture-sensitive have been coated effectively”, although you do wonder whether all the ones they tried worked out.
As I understand them, Chemspeed dispensers are gravimetric, but the density of the beads is basically the same no matter what thin layer of reagent they have on them, so for preliminary set-some-up-by-hand reactions (or for larger-scale ones) you can use calibrated scoops as if you’re adding powdered sugar into a recipe. (Granulated white sugar is one thing in the kitchen, but volumetric dispensing in recipes famously breaks down when you’re using salt, thanks to the different particle sizes and bulk densities of kosher salt versus table salt. The Chembeads, at least, are standardized.)
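To make that calibration arithmetic concrete, here’s a minimal sketch (my own illustration, not from the paper; the function name, molecular weight, and 50 mg/g loading figure are all hypothetical) of how a measured bead loading converts a target amount of reagent into a mass of beads to dispense:

```python
def beads_to_dispense(target_mmol: float, mw_g_per_mol: float,
                      loading_mg_per_g: float) -> float:
    """Mass of coated beads (in mg) needed to deliver target_mmol of reagent.

    loading_mg_per_g: mg of reagent per gram of beads, as determined by
    calibrating each batch (the step the paper describes).
    """
    reagent_mg = target_mmol * mw_g_per_mol  # mmol * (mg/mmol) = mg of reagent
    beads_g = reagent_mg / loading_mg_per_g  # grams of beads carrying that much
    return beads_g * 1000.0                  # convert to mg of beads

# e.g. 0.01 mmol of K3PO4 (MW ~212.3) at a hypothetical 50 mg/g loading:
print(beads_to_dispense(0.01, 212.3, 50.0))  # -> 42.46 mg of beads
```

Note the leverage: to hit a 2 mg quantity of reagent you weigh out about forty milligrams of free-flowing beads, which is exactly why the low loading helps the dispensing error.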
The paper illustrates the use of these reagents for Suzuki-Miyaura and Buchwald-Hartwig couplings, which (to be sure) are absolutely the first thing that synthetic organic chemists would apply automated optimization to. These metal-catalyzed couplings are famous for sensitivity to reaction conditions and catalyst selection, and I’ve long believed (I’m not alone) that basically any such coupling reaction can be optimized to a high yield if you’re just willing to spend enough of your lifetime investigating conditions. The AbbVie folks have standard screening sets of likely combinations, 50 to 70 variations of each, that they run with the bases and/or catalysts on the Chembeads, and the machines can set up a reliable plate covering either set in about an hour (while you go off and do something else with your time).
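That kind of screening set is just a cross product of the variables, mapped onto plate wells. A quick sketch (the specific catalyst, base, and solvent names here are my own illustrative picks, not AbbVie’s actual proprietary sets):

```python
from itertools import product

# Hypothetical screening dimensions -- stand-ins for the paper's actual sets.
catalysts = ["Pd(dppf)Cl2", "XPhos Pd G3", "SPhos Pd G3", "cataCXium A Pd G3"]
bases     = ["K3PO4", "Cs2CO3", "KOAc", "LiHMDS"]
solvents  = ["1,4-dioxane", "toluene", "DMAc", "2-MeTHF"]

# Every catalyst/base/solvent combination, in a fixed order.
conditions = list(product(catalysts, bases, solvents))
print(len(conditions))  # 4 * 4 * 4 = 64 combinations, roughly one plate's worth

# Map the combinations onto well positions (A1, A2, ... in an 8 x 12 plate).
wells = [f"{chr(ord('A') + i // 12)}{i % 12 + 1}" for i in range(len(conditions))]
plate = dict(zip(wells, conditions))
```

Sixty-four wells is right in the 50-to-70 range the paper quotes, and the point of the beads is that all 64 solid additions behave identically to the dispenser.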
It’s notable that in many of these combinations, one or more of the reagents involved are not really all that soluble under the starting reaction conditions. But some of those turn out to be successful runs, which shows why you can’t rely only on systems that start with everything in solution. The use of the screening sets is illustrated by a particular bromoheterocycle-piperidine Buchwald-Hartwig coupling that doesn’t appear in the literature. Of the 68 test reactions, only two gave any product, and they were the only two that used LiHMDS as base. A followup plate using only that base but varying the other conditions showed a much higher success rate; that was indeed the critical variable, and good luck finding that out any other way than running a bunch of reactions. On top of that, the top ten yields in the second plate were all ones that used 1,4-dioxane as solvent, so there’s the number two condition for success.
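That triage step, noticing that every hit on the plate shared one factor, is simple enough to sketch in code (the yield numbers below are invented for illustration, not the paper’s data):

```python
from collections import Counter

# Illustrative screening results: (base, solvent, yield_percent). Invented data.
results = [
    ("K3PO4",  "toluene", 0), ("Cs2CO3", "1,4-dioxane", 0),
    ("LiHMDS", "1,4-dioxane", 34), ("LiHMDS", "toluene", 12),
    ("KOAc",   "DMAc",    0), ("Cs2CO3", "toluene", 0),
]

# Keep only the wells that gave any product at all, then count bases among them.
hits = [r for r in results if r[2] > 0]
base_counts = Counter(base for base, _solvent, _yld in hits)
print(base_counts)  # Counter({'LiHMDS': 2}) -- every hit used the same base
```

With real plate data the same two lines of filtering point straight at the critical variable, which is exactly the LiHMDS signal the AbbVie team followed up on.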
For those in the crowd who aren’t synthetic chemists, it cannot be emphasized enough that there is no way that you could have known about either of these choices beforehand – and that if you change the reaction to another bromo-heterocyclic system, the optimal base and solvent are likely to switch again to something else entirely. It’s as bad as cell culture or X-ray crystal growing, two other areas that are famously infested with evil spirits and voodoo rituals. You run into these systems that are just intrinsically very sensitive to initial conditions, with variables that are sometimes too small or obscure for you to even realize that they’re variables.
The paper ends with a possibility for even more miniaturization: the team shows that more than one reagent at a time can be coated onto the Chembeads, and demonstrates a microscale run with a single bead carrying all the solid reagents. This nanomole-scale reaction gave product by LC/MS, and the conditions could then be scaled up to a 50 mg run with a 65% yield of product. So you might be able to produce an even larger variety of reagent combinations and move to perhaps 1536-well plates if that turns out to be desirable. Screening 68 reaction conditions at a time is a great improvement over what anyone is willing to do by hand, but how about screening 1200 at a throw, without worrying about solubility or reagent dispensing? It’ll be interesting to see if that’s where this technique goes…