
Chemical News

Automated Chemistry: A Vision

I enjoyed this article at Chemistry World, but fair warning: you may not. I say that based on the response when I’ve written about its subject here before, which is the automation of synthetic organic chemistry. There have been some pretty strong negative reactions to the idea, which fall into several categories: “You’re hyping something that doesn’t even/can’t even exist”, “This is too depressing to read about even if it’s true”, “Organic synthesis is an art form; talk of automating it is ridiculous and/or insulting”, “We’ve heard about this sort of thing for years, and it never works”, and so on. The article itself is fine, and goes over both the promise of automated synthesis and the problems involved in realizing it (as well as the unease that merely thinking about it causes some chemists!)

So with that realistic view of the field, let me put on the goggles and go for the reductio ad absurdum, the Universal Synthetic Chemistry Machine. What would it look like, and what would it do? Right up front, I have to say that I do not necessarily expect anything quite like this. But neither am I quite prepared to bet against it, and I think it’s a good thought experiment to do (see below). Here goes:

The USCM occupies a large vented enclosure. A chemist sends it a structure, and the software runs its inhuman, invisible fingers over the structure file, looking for likely bond disconnections and analyzing potential retrosynthetic pathways. Possible steps would be graded based on feasibility from the machine’s vast, thoroughly curated database of known reactions from the literature, with special emphasis on ones that it has shown to work in its own past syntheses, but it also uses the general synthetic rules that humans have told it about (and the ones that it may have inferred for itself). These are chained together in increasingly long sequences as it searches for a minimum-energy path through the synthesis, which would be the one involving the highest potential yields, the most likely reactions to work (but with the fewest overall steps, when possible), and the most easily available materials and reagents. The analogy to chess-playing software is clear. Chess is a far easier game to model, but the same general mixture of brute force search and decision-tree paring is used. Known and validated preparations are the equivalent of the opening sequences programmed into a chess engine; the real energy is spent on the mid- and end games. As with any system of this kind, the machine is not guaranteed to find you the absolute best synthetic sequence, but by this point in its evolution it has a very good chance of finding a good, workable one. Which is what it’s for.
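
The chess analogy can be made concrete: the “minimum-energy path” is just a cheapest path through a graph whose edges are known disconnections. Here is a minimal best-first-search sketch in Python, with an entirely invented reaction table (the compound names, yields, and costs are illustrative assumptions, not real chemistry):

```python
import heapq
import itertools

# Toy retrosynthetic search. Each known disconnection maps a target to
# precursors with a cost; using -log(expected yield) as the cost means
# that summing costs along a route penalizes low-yielding steps and
# long sequences alike. All entries below are invented for illustration.
DISCONNECTIONS = {
    "drug":  [(0.11, ("amide", "boronic_acid")),  # ~90% yield coupling
              (0.36, ("aryl_halide",))],          # riskier ~70% one-step route
    "amide": [(0.05, ("acid", "amine"))],         # ~95% amide formation
}
PURCHASABLE = {"acid", "amine", "boronic_acid", "aryl_halide"}

def best_route(target):
    """Best-first search for the cheapest route down to stock chemicals."""
    tiebreak = itertools.count()  # keeps heap comparisons well-defined
    start = frozenset({target}) - PURCHASABLE
    queue = [(0.0, next(tiebreak), start, [])]
    seen = set()
    while queue:
        cost, _, todo, route = heapq.heappop(queue)
        if not todo:                 # everything reduced to purchasable stock
            return cost, route
        if todo in seen:
            continue
        seen.add(todo)
        compound = next(iter(todo))  # expand one outstanding target
        for step_cost, precursors in DISCONNECTIONS.get(compound, []):
            new_todo = (todo - {compound}) | (frozenset(precursors) - PURCHASABLE)
            heapq.heappush(queue, (cost + step_cost, next(tiebreak), new_todo,
                                   route + [(compound, precursors)]))
    return None                      # no route within the known reactions

cost, route = best_route("drug")
print(route)  # prefers the two-step amide-then-coupling route over the risky one-step
```

A real system would fold reagent availability, telescoping potential, and scale-up risk into that cost term, but the search skeleton is the same.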

Since this is a Universal Synthetic Chemistry Machine, it can in theory handle whatever you throw at it, from ibuprofen to palytoxin. In practice, complex natural products are handled by “centaur teams”, machine-assisted humans, who feed the syntheses of various advanced intermediates into the software and use it to rate the likelihood of the various overall strategies. For molecules that can be made in ten to twelve steps, though, there’s really no point: the software will give you a route.

Once you tell it to go ahead and make the compound (and how much to make), it determines the solvents and reagents it needs and starts assembling them. The USCM uses reagents in standard-format containers adapted for its handling systems, and calls them up from the automated stockroom to be loaded into its various bins and holding areas. Solvent lines, a whole range of them, are already built in. Cartridges for purification, phase separation, solid-phase extraction, and other operations are also standardized and can be dropped in by the hardware and fitted without human intervention.

The machine prefers to work in flow mode, and optimizes its synthetic routes in that direction. But it can switch over to batch reactions when needed. It identifies the likelihood of success for the various steps, and for the ones below a certain cutoff, it switches to analytical condition-searching mode, running an array of small experiments to scout out reaction modes before proceeding with the larger batch. Before starting, it will present the chemists with its options – scout a problematic step that might shorten the synthesis, or take the long way around with reactions that are more likely to work? Order up an unusual reagent or catalyst that looks likely to help the route, or go with what’s in house? It is programmed, of course, with routines that anticipate scaleup differences in stirring, heating, and so on, and it’s also ready to identify reactions that are potentially energetic.

Most of the time, though, things are quite straightforward. With its array of pressure and flow sensors, the machine is even able to localize potential clogging in the flow lines, and bring on extra heat or ultrasound to free things up. In extreme cases, it will back-drain the system and switch to a DMSO flush or the like, but since its software is also ready to flag structures that look like they would be less soluble back during the planning stage, it approaches those with caution from the start.

The software provides an estimate of when the synthesis will finish, updated as the sequence proceeds, and it checks for purity along the way (and, of course, at the end). The standard mode is to try telescoped reaction sequences, especially when it has evidence from its files that particular combinations are acceptable, and to purify intermediates further only when the increase in yield is justified. And thus another structure is made. For most medicinal chemistry analog arrays, the whole process is routine, because there are some twenty or thirty transformations that make up the great bulk of that work, and they’re well optimized by now.

Here’s the thing: I said at the start that this was a reductio ad absurdum, but you know what? It’s not all that absurd. I have violated no physical laws; I have invoked no devices or programs that we don’t already have in some form. Realizing such a machine is going to take a great deal of engineering prowess, but let me be honest with you: I don’t see any reason why something like this can’t exist. Or shouldn’t. The fact that various people have overpromised over the years in this area does not mean that those promises are impossible to realize.

Here are some points that I made when I spoke about this topic in Manchester a year or two ago: if you work in a well-equipped lab in a first-world environment, you are already surrounded by machines that do a lot of repetitive tasks for you, like queuing up your samples for analysis, collecting your fractions while you run gradient elution purifications, and so on. Making a pile of analogs from a common intermediate is most definitely a repetitive task of its own, and we’re heading towards machines that will do that for us. The same goes for scouting a range of reaction temperatures, solvents, and catalysts.

There are many places around the world, from academic labs to CROs in various countries, where there are no mechanical fraction collectors. The fraction collector has a name, takes a lunch break, and collects a salary, because (for now) human labor is cheaper and more reliable in such places. Many chemists from wealthier environments look at such a situation and get the same feeling that people do if they see someone pulling a rickshaw for a living: this is grunt work. Humans are fit for better than this. And so they are. The same used to be true for the process of washing everyone’s laundry, with a tub and board or even a flat rock down by the river, and in some places it still is. Humans are fit for better.

The uneasiness comes on when automation and mechanization moves into jobs that we don’t feel are all that bad. It’s one thing to haul manure all day, but quite another to make a series of sulfonamides. (The occasional equivalence of sulfonamides and manure is a topic for another time). But the principle is the same – leaving such things to machines frees us for still better tasks. Instead of banging out amides and palladium couplings, we will use our brains to think about what we should be making next, and using our hands for chemistry that’s still too difficult or novel for machines to handle.

Now, there’s a sting in here. You may well need fewer chemists under such a system, although that’s still not for sure. But what is for sure is that the chemists that are around will be working at a generally higher level than most of us do now, because what we bring to the process will be (increasingly) not our hands but our brains. And not everyone is going to be capable of doing that. There used to be more people in the medicinal chemistry labs of high-wage countries who were not there to come up with great project ideas or break new ground in the existing ones, but rather were employed because they were good at banging out series of analogs thought up (for the most part) by other people. That work has been migrating to lower-wage territory for many years now. And the machines are coming for it in turn, and more besides. You may not be willing to credit the Universal Synthetic Chemistry Machine described above, but a Universal Amide/Suzuki Cranker is a lot easier to picture. And that’s a matter of degree – not of kind.

55 comments on “Automated Chemistry: A Vision”

  1. anon says:

    Give it 15-20 years and we will have the first versions of these machines.

    1. 15 to 20 months with the right funding

  2. Vince says:

    Didn’t K. Eric Drexler beat you to this idea by a bit?!

    1. Derek Lowe says:

      No nanotech is needed for the machine I describe – although if his sort of technology ever does get going, it’ll presumably make everything.

      1. John Tellis says:

        Chemjobber, Ash (Curious Wavefunction), and I recently discussed this on CJ’s radio show. Would be curious to hear your thoughts, Derek.

      2. Even Drexler said his approach wouldn’t make literally everything, and certainly not with one reaction system. What he claimed is more like: It should be possible, using mechanochemistry, to make most structures that can exist (though it may take purpose-built nanomachines for any given synthesis).

        The key differences are that Derek’s USCM does bulk-phase chemistry and Drexler’s system would mostly do solid-phase chemistry. At the reaction level, it’s like the difference between an acrobat jumping between flying trapezes, vs. a contortionist who can be slowly helped into position via fixed supports and dextrous assistants.

        A nanotech version of the USCM that Derek describes might go something like this:
        1) Start with the molecule held in a high-affinity solid-state binding site.
        2) Figure out a sequence to take the molecule apart, mechanically, piece by piece, so that each piece transfers to a high-affinity solid-phase binding site.
        – The key restriction is that the reactions should be reversible – running the system backward should perform synthesis. This implies that the molecule should be held/bound so it doesn’t “sproing” into a completely different configuration during the reaction.
        – You can remove big or small pieces (but then you have to figure out how to break down the big pieces).
        – The binding sites can be moved with six degrees of freedom and enough force to directly make or break covalent bonds. They can also be modified internally during the reaction.
        – At this point, the system is allowed to assume that desired binding sites can be built, as long as the desired shape and charge distribution (and their evolution over time) can be specified and won’t sterically block the required motions.
        – Between steps, you’re allowed to transfer the remnant molecules between binding sites if that helps.
        – Intermediate steps are allowed to use dangling bonds and other things that would be ridiculous in wet chemistry, because there is no solvent and no contaminants.
        – The pieces are allowed to bond to the binding sites, as long as this is reversible (non-destructive) (or the site can be easily “repaired” with just a few steps).
        – It’s OK to add pieces if that helps stabilize things and sets you up for a good “move” later. Of course the combinatorics of this get large quickly, so machine learning and a standard library of most-often-helpful choices will probably be applied here.
        3) As you get to small pieces, figure out how to transform them into one of your available feedstock molecules. You’ll probably have a chess-opening-library to help here.
        4) As you design the pathway, look for problems.
        – Binding sites that will be difficult to put together from your library of pre-existing mechanical (molecular) shapes.
        – Reactions that can’t proceed slowly (that “sproing” from one state to another at some point in the reaction trajectory) – these waste energy.
        – Molecules that are unstable – that reconfigure nondeterministically. This is especially bad if pieces fly off.
        5) Build the binding sites, and the mechanical system (“assembly line”) that will move them through the needed series of positions. Think injection molds more than robots here.
        6) Test the system and see if it works as predicted. At this point, human “debugging” will probably be most useful. An undesired reconfiguration will gum up the works and/or need to be detected and shunted aside as waste (and will likely take out the binding site with it) so adjust until the probability of undesired reconfigurations is extremely low. (Fortunately, the probability of reconfigurations can be shifted by orders of magnitude with relatively small changes in energetics.)
        7) Compute your throughput – how long it takes the machinery to make its own mass of product. Ask the human if this is acceptable.
        8) Build as many production lines as desired, to synthesize the amount desired in the time required.

        1. anon the II says:

          Your little machines in the video linked to your name have fine structure substantially smaller than bonds. That is the whole problem with your little dialog and the con Eric Drexler visited upon us. There ain’t no physics to make this stuff.

          1. The video looks like that, but the actual proposals do not include any fine structure smaller than atoms/bonds.

            Where the video looks lumpy, that’s actual atoms that were actually simulated to do what the video shows. Where the video looks smooth and polygonal, it’s an approximation of what shape the molecular structure would actually be.

            Lumpy bearings should work just fine at the atomic scale, if you design them right.

            Drexler got a PhD from MIT on these proposals. NASA did some work on them. Hundreds or thousands of physicists and chemists have looked at them.

            The National Materials Advisory Board was told by Congress to study Drexler’s proposals. I gave a presentation, and got a chance to talk with the chair of the committee. He grilled me for about ten minutes, asking very good questions, and then ran out of questions and said it could work in theory.

            So it’s unlikely that there’s a simple fundamental flaw with the idea, and if you think you’ve found one, it’s probably worth looking closer.

  3. Calvin says:

    It seems far more likely that the computation aspect of this will work before a synthesiser will work. So I can definitely see a situation where a machine will tell us what chemical route is most likely to work, provide the highest yield and be the most efficient. That looks doable with the right data to teach the system. And that might be pretty useful and cool. What’s the best way to make a molecule? Even if it provided the top 3 routes I can see that being quite handy.

    I’ve been away from the bench for a while now so that might already be possible to some extent now.

    But I think the machine that does all of the practical bits is some way off.

    1. Derek Lowe says:

      It’s already getting to that point – I’ll be blogging about a new example shortly!

  4. Bagger Vance says:

    “That work has been migrating to lower-wage territory for many years now.”

    And yet, the workers have been migrating to higher-wage territory for many years now also.

    Not really sure what Derek is advocating for, if anything, in this scenario.

    1. DH says:

      I don’t think he’s “advocating” for anything — just doing a thought experiment to address the question, “Is it technically possible?”

  5. John Wayne says:

    I don’t disagree with a single point you have made, but there is a white elephant in this room – chemicals. Even a limited machine that can only run amide formations will still require a very large number of chemical inputs to be actually useful. Let’s simplify the problem by assuming that the coupling agents, bases, and solvents together number below 100; how many carboxylic acids and amines would you need to have available to make this thing useful? I propose that 1,000 compounds is the floor, and 100,000 is the high number. How the heck are you going to keep thousands (or tens of thousands) of chemical hoppers full of material? It is hard enough for a contemporary medicinal chemistry lab to keep on top of its own inventory (see a recent post on this blog). This is a very interesting engineering question that will have to get answered in a way that really moves the needle. Start writing those grants.

    1. NJBiologist says:

      Is that really a different problem from the HTS groups’ need for an automated library of >10E6 chemicals?

      1. John Wayne says:

        1. Nobody will ever tell you that creating and maintaining HTS libraries and sample picking is cheap
        2. They store materials they dispense in solution, which is probably not going to work for a synthesizer that needs to be able to pick solvents
        3. QC is a constant problem for them, and it will be for this too
        4. Folks ask the HTS groups for plates of material in very small quantities. This synthesis machine would need significant quantities of material, and that demand skyrockets if it needs to perform many steps in a row.

        If you attempted to repurpose an HTS facility to dispense 100 mg to 10 gram quantities of solid material, it would probably be faster to throw away the facility and start over.

        1. NJBiologist says:

          Got it–thank you for the explanation.

  6. Natural chemist says:

    Reductio ad absurdum…who said my minor in Latin wouldn’t pay off?

    1. Derek Lowe says:

      Thanks! Corrected. . .

  7. Hap says:

    This doesn’t seem ridiculous, though compiling all of its data and having a program that can estimate solubility will take a bit.

    The problem for me is that when we are done with something – when people in other places, or machines, can do something better and faster – we are not good at finding other things to do. Particularly in our environment, employers want people to do stuff now and have little reason or desire to invest much in training, so even when education and experience should make people adaptable, it’s nontrivial to do something else. When a lot of people have to change course, it generally doesn’t happen. While it may be good for the future, the people who do this work now may not get to do something else, which is obviously scary. We are poor at repurposing people to do something else, and much better at discarding them. If you might be discarded, well, this isn’t comforting, and it’s harder to take joy in the future this might make.

    1. Anon says:

      Certainly people are not great at repurposing themselves when given a choice, but the thought of having no food on the table tends to put things into perspective. Speaking from experience here. 🙂

      1. Hap says:

        There were a lot of people who voted for Trump because they thought he’d bring their old jobs back; they probably have jobs now, but ones that pay little and use little if any of their capacities. The concept of “surplus consciousness” seems relevant.

        I am sorry that you got to experience it firsthand, though.

  8. Speaking as someone with expertise in computers and only a passing interest in and familiarity with chemistry, the difficulty of doing the retrosynthetic search as an algorithm is quite low. The real difficulty, I expect, is in getting usable data to elucidate the retrosynthetic graph. Estimating purity, or the likelihood that a reaction could add one functional group without destroying that one over there, is going to be crucial to getting usable results, and I don’t know that the computational chemistry field is ready yet to make those estimations. It also doesn’t strike me as the sort of problem amenable to the deep-learning hammer.

    1. barry says:

      E. J. Corey’s LHASA has been working on retrosynthetic analysis by computer since 1969. Along the way, it has spun off ChemDraw, as well as important heuristics on which synthetic chemists have relied, consciously or unconsciously, for a generation. But it hasn’t proven as easy as computer scientists tell us it should be.

    2. Jeff L says:

      There’s some pretty cool work coming out of IBM working on this exact problem (with the method you expected)

  9. Barry says:

    Maybe I’m not visionary enough. But right now, I want a machine that I can hand a reaction mixture, tell it that it’s in e.g. THF, and that the product I want has an Rf of 0.3 in e.g. 4% MeOH in CH2Cl2.
    Isolating/purifying that is a task we do all the time, that doesn’t require any intellectual input, but that one chemist can’t do in parallel.

  10. Uncle Al says:

    A 10 kg batch of artemisinin, even as a semi-synthesis from sweet wormwood, would likely reduce the lab to rubble, given the peroxide intermediates. Continuous parallel microdribbles are strongly preferred. If it can be exited from an undergrad lab, high tech has little or no place in the procedure. In between, we have modest-scale synthesis of a large library of potent pharma – Chinese fentanyl derivative mills.

  11. Some idiot says:

    I like your description of the machine, and my gut feeling is that it feels about right. I also agree with an earlier poster that the software searching side is probably closer to now than the other part.

    But I feel that the description misses one important thing which (to me) is one of the important differences between a robot and an actual chemist (now thinking down my line as a process chemist). Broadly speaking, doing the next 58 iterations on known chemistry is (generally) pretty straightforward. But getting chemistry to run on new systems is far from trivial. Just trying out the reactions on a small scale, and knowing what the results are telling you, can be more complex for a robotic system than one might first imagine. How do you monitor the reaction? HPLC? Fine, but if the product you are looking for is the stuff which has oiled out as a smear on the inside of the glassware, then you can forget your robot finding it for you. When do you take samples? Before or after the green colour appears? How is the robot going to guess that (or even let you know it happened)?

    Yes, all of these things can be programmed into a system, but generally they are only applicable if you have an expectation that it is going to happen. Therefore, my belief is that whereas automation has the very severe risk of delivering a good deal of false negatives (nb: I am actually a fan of automation in its right place…), the value of an actual chemist is to take the unexpected and turn it into value.

    So I guess it comes down to what you are expecting it to do. If you want it to take over the grunt work of making stuff where the processes are reasonably well known, fine (as long as you are prepared to de-gunk the flow reactor every time it finds out that it has suddenly created a precipitate… 😉 ). And as a source of good ideas and inspiration: fantastic! But for performing route-scouting work for processes: give me a process chemist, backed up with all modern aids, but with all the idiosyncrasies and curiosity that come with a person…! 🙂

  12. Wavefunction says:

    I wonder if someone can ever prove or disprove a version of Turing’s halting problem (or Hilbert’s Entscheidungsproblem) for the USCM.

  13. Kelvin Stott says:

    “I have violated no physical laws; I have invoked no devices or programs that we don’t already have in some form. Realizing such a machine is going to take a great deal of engineering prowess, but let me be honest with you: I don’t see any reason why something like this can’t exist. Or shouldn’t.”

    Exactly what I was thinking. As you know from our own private discussions (re. Biofracker), there is no theoretical limit or reason why a fully automated drug discovery device *can’t* exist based on established ideas and technologies. If you start with a crazy vision or idea and keep asking what is required to make it all work, usually one can come up with a very credible plan. It certainly beats discarding an idea just because it sounds crazy and different to what we have today.

  14. David says:

    Attach a cyclotron and you’ve the potential for even more variety, eh?

    Also, in general the idea of such a device being connected to the internet is something I don’t really want to be near…

  15. dereklowebot says:

    Plot twist: This blog post was written by an automated AI blog-bot

  16. anon the II says:

    OK, so nobody has slammed you so far for this little daydream. I guess I’ll have to step up to the plate. When you need to make a new molecule you have to string a bunch of decisions together. Your dream is that we’re about 90% good on all of these decisions. So why not? My reality is that there are so many of these decisions: what reagents, what ratio, what solvent, what temperature, what catalyst, what base, how long, workup, chromatography, stability, blah, blah, blah, that even if we’re getting it about 90% right, 0.9**150 is still about zero.

    Not to say we shouldn’t keep at it. I believe that laziness is probably the real mother of invention.
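
The multiplication in this comment is easy to check, and worth extending one step (a quick sketch; the 99% comparison figure is my own hypothetical, not from the comment):

```python
# 150 independent decisions, each 90% likely to be right: the chance of a
# whole plan surviving with no bad calls is vanishingly small.
print(f"{0.9 ** 150:.2e}")   # on the order of 1e-7

# Push per-decision reliability to a hypothetical 99% and the outlook
# improves enormously, which is the case for condition-scouting.
print(f"{0.99 ** 150:.2f}")  # roughly 0.22
```

The lesson cuts both ways: compounding failure is brutal, but modest per-step improvements compound just as fast.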

    1. tangentq says:

      It was described as an algorithm that finds a minimal path in a known graph, the output being a synthesis pathway, but I’d expect you’re correct: the output of the algorithm will be a plan of experimentation.

      It will be “try these reagents/conditions and see what happens” and “if the result is what I 90% expect, then pencil this step as available for path search” and “now that we can make that intermediate, try this reaction on it” and “otherwise, try these different conditions”. The software generates a decision tree of experiments to test steps, which the software figures will lead to choosing a specific synthesis path from within a general approach.

      This may be more terrifying, since it sounds a lot like an intelligent human chemist. Sorry about that. At least creative research ideas aren’t in scope yet.
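
The plan-then-experiment loop sketched in this comment might look something like the following toy illustration (the step names and success probabilities are invented, and the “experiment” is just a random draw standing in for real lab work):

```python
import random

random.seed(0)  # deterministic for the example

# Candidate conditions for one problematic step, ranked by the planner's
# estimated success probability (all values invented for illustration).
CANDIDATE_STEPS = [
    ("amide coupling, HATU", 0.9),
    ("amide coupling, T3P", 0.7),
    ("amide coupling, acid chloride", 0.5),
]

def run_experiment(p_success):
    """Stand-in for actually running the reaction and analyzing the result."""
    return random.random() < p_success

validated = []
for name, p in CANDIDATE_STEPS:      # "otherwise, try these different conditions"
    if run_experiment(p):
        validated.append(name)       # "pencil this step in for path search"
        break

print(validated)
```

Each validated step would then be fed back into the route search as a known-good edge, exactly the interleaving of planning and experimentation the comment describes.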

  17. Ghostinthemachine says:

    Structure confirmation, in addition to purity check, will be desirable for the proposed machine. Problem solving when a reaction fails would be an interesting AI challenge too.

    1. exGlaxoid says:

      Bruker already sells software that does automated NMR structure determination for their 700 MHz machine, which is far from perfect but does a surprisingly good job, if you have a few million to spare for it. Even I was amazed that it worked pretty well for simple molecules and even OK for complex ones with stereochemistry, quaternary centers, and multiple functional groups.

      I don’t think it will replace analytical chemists soon, but the LC-MS instruments, new NMR, HPLC/SFC, and other tools out there have made analysis much better and more automated than before. I don’t see analysis as the weak point in automating chemistry.

      1. Professor Electron says:

        Structure confirmation is far from solved in the real world. Your samples need to be pretty pure (no ethyl acetate, DCM, starting material, by-products, etc.) and in deuterated solvent. Then you need to acquire at least an HSQC, and ideally an HMBC too, so that’s several hundred µg needed to get the data in reasonable time. After that, you can run the confirmation software and get a probability that the spectra are ‘consistent with’ your desired structure. That’s not the same as the probability that the structure is correct! You may well have a geometric or positional isomer which gives very similar spectra. The software company ACD has a program which automatically generates isomers to check, which is more robust, but there are plenty of isomers that are difficult for even an expert to distinguish.
        This won’t work unless you’ve got lots of protons in your molecule, because HSQC/HMBC don’t give info in regions without them. And then there’s the 10% chance that your molecule didn’t appear in the LC-MS because it didn’t ionize or fell apart in the source.
        Automated analysis could be solved; it isn’t yet. It needs more robust methods and a much more co-ordinated approach to data analysis based on the chemistry – for example, automatic recognition of reagents/solvents, or even consideration of possible by-products. All things to add to the Lowe Universal Synthesis Machine!

  18. anon says:

    Yes, but does it go “Ping”?…

    1. oldnuke says:

      And in Prof Dr Klapötke’s lab, you dial in the number of nitrogens and the desired yield. 🙂

      1. anon says:

        …and run away…

      2. Some idiot says:

        Chemical yield or explosive yield? 😉

        1. oldnuke says:

          #2 I think.

  19. Automation Chemist says:

    Machines have been designed that have a large area of virtual compounds available to them through limited numbers of fairly well-understood transformations. True, you need to load the reagents, and yes, not all combinations work, for any number of reasons, but you can get enough of them to work to scope out SAR in a fair proportion of chemotypes.

    They are just another tool. The compounds being made (or the SAR from them) are the most important thing, not how they are made.

  20. Anon says:

    I wonder if such a machine would do its own safety assessment, or whether it would care about self preservation, especially when making heavily nitrated products.

    I guess the point we should really worry about AI, is when machines *do* take into account such things! 😮

    1. Automation Chemist says:

      They are not self-aware yet (afaik), but you can, if you want, do automated safety assessments and block out certain compounds/conditions.

      CHETAH has been around since 1974 – that surprised me.

  21. oldnuke says:

    No worry, we just add the “Things I Won’t Work With” list into the algorithm. 🙂

  22. HGMoot says:

    The difficulty may well be not in achieving successful syntheses but rather in handling failure situations. The reaction-planning part may be the easiest (washing the vessels well may be another story…), but what happens when a reaction with a very high expected success score fails at the last moment? That happens to us about 98% of the time.
    It is in handling failure that we earn our chemist’s card.

  23. steve says:

    For everyone who says this is impossible, remember Auguste Comte. He said, “We will never be able to determine the chemical composition of the stars, nor their density and temperature”. Spectral analysis was invented 25 years later. And you might also want to remember South Korean Go master Lee Sedol, who was beaten by AlphaGo, a DeepMind AI program that came up with moves that Sedol said no human would have thought of. The newer version is faster and better. The version that defeated Sedol trained for months and reached an Elo rating of 3739. AlphaGo Zero surpassed that level in just 36 hours of teaching itself, and eventually reached a rating of 5185. So please, spare me the negativity of saying no computer program will ever outdo synthetic chemists. It will happen; the only question is when. The only thing that is truly impossible for any AI to accomplish is the understanding of the mind of a teenage daughter.
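
For a sense of scale, the two ratings quoted here can be dropped into the standard logistic Elo expected-score formula (a quick sketch; treating the expected score as a win probability ignores draws):

```python
def elo_expected_score(rating_a, rating_b):
    """Expected score for player A under the standard logistic Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

# AlphaGo Zero (5185) vs. the version that beat Lee Sedol (3739):
# a roughly 1446-point gap implies winning essentially every game.
print(f"{elo_expected_score(5185, 3739):.4f}")  # about 0.9998
```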

    1. tlp says:

      I wonder how long it would take AlphaGo to teach itself to expert level if it had to physically move stones around. There’s no shortcut for that in a synthetic machine, I suppose. However beautiful the artificial intelligence built in, it will be trapped in a physical ‘body’ and slowed down a lot. The question is by how much.

      1. steve says:

        That’s just a detail; it’s not fundamental. If AI can figure out how to do it in theory, then I’m sure that robotics can be set up so it can do so in practice.

        1. tlp says:

          “If AI can figure out how to do it in theory then I’m sure that robotics can be set up so it can do so in practice.”
          It’s a platitude, but successful retrosynthesis != practical synthesis in finite time.

  24. exGlaxoid says:

    There are at least two parts to the automated lab; one is the design of the synthesis. I have no doubt that for many molecules this can be automated further than it already is: it would be possible to make many molecules with just a few SciFinder searches and simple modifications. So that part (route design) should be doable, and it is not really the challenge for most compounds.

    The actual synthesis would be the hard part. I have seen solutions that crash out and gum up so many instruments that it is hard to keep track. Just keeping an LC-MS running takes a fair bit of effort, and those solutions are supposed to be 1 mg/mL or less. Imagine the fun with tBuLi, hygroscopic salts, highly crystalline solids, and Pd black crashing out on vessels. I could come up with 100 ways to break a machine, and I have seen most of them in real life. GSK bought into the automated lab and built two buildings for $100 million each, both of which were abandoned within a decade after wasting millions more in operating expenses. Most of their actual output came from the test experiments done by real chemists with actual hands.

    It turned out that just having a good stockroom, automated vial-weighing machines, automated LC-MS and HPLC purification systems, liquid handlers, good software, and good chemists allowed chemists to make 1,000 to 10,000 compounds per chemist per year (~10 mg each, >90% pure, as solids in a tared vial, then automatically dissolved in DMSO for screening). That was enough to make more compounds than the company was willing to screen within a decade; the HTS collection grew to over 4 million compounds at one point and was judged big enough, at which point most of the high-throughput chemists were laid off. But the big machines never made more than a few hundred compounds before breaking, clogging, or failing.

    But the real point is that we made more compounds than they needed once we had the right targets, support, chemicals, tools, and simple workstations. At the peak, a few chemists could each make 5K pure compounds a year, so with 4 sites of over 10 chemists each, that quickly added up to more compounds than they knew what to do with. If you throw out the cost of the big factories, the same could be done again for about a million dollars in simple equipment per chemist, and a team of 10 people could make 100,000 compounds per year at a cost of about $100 per compound. So why wait for a big gadget?

    The problem is that no one wants 4 million more compounds to screen any longer, and a group of 100 chemists in China can make them for $20 each, if you don’t QC them too much. If you know anyone who wants to make compounds fast, efficiently, and with a lot of automation, I know at least 20 chemists who could help make it happen.

  25. Robochem says:

    Automation and machines… it’s all good until things break down. Cookies, anyone?

  26. One Man CRO says:

    I’d be very curious about chemical compatibility with machine parts, especially as it relates to the longevity of the instrument. Just using the ISCO system for routine purifications leads to strange crud building up in cracks and crevices. I can only imagine what adding oxidizing and reducing agents etc. to the mix would do to the internal plumbing, even in the short term.

  27. Beentheredonethat says:

    The problem with automating discovery chemistry is the variability in physical handling: solubility, solids vs. goop, etc. Even just making amide libraries, you run into this problem.

    Purification is where automation really shows its value.

Comments are closed.