Chemical News

The Algorithms Are Coming

I think that every synthetic organic chemist should take a look at this paper in Angewandte Chemie. It’s on the application of computer algorithms to planning synthetic routes, which is a subject that’s been worked on for fifty years or more – without, it has to be said, making too much of an impression on working organic chemists. But after reading this paper, from Bartosz Grzybowski and co-workers at the Polish Academy of Sciences, I’m convinced that that is going to change. I don’t know if this particular software, or even this particular computational approach (which I last wrote about here) is going to do it, although they both look very promising. But to a first approximation, those details don’t matter – what seems inescapable to me is that the generation of synthetic routes in organic chemistry is subject to automation, and we are getting very close to seeing this put into use.

Here’s the paper’s summary of the situation:

Overall, we believe that modern computers can finally provide valuable help to practicing organic chemists. While the machines are not yet likely to match the creativity of top-level total-synthesis masters, they can combine an incredible amount of chemical knowledge and can process it in intelligent ways with rapidity never to be matched by humans. In retrosynthetic planning, even inexpensive desktop machines can consider thousands of matching reaction motifs per second and can identify those that would be difficult to discern even by expert chemists—in fact, even desktop computers can be distinctly superior to humans in their capability to recognize complex rearrangement patterns and multicomponent reactions. Of course, it could be argued that one might be able to recognize these motifs using human intuition. But this is like arguing that we could, using paper and pencil, “eventually” divide two ten-digit numbers to the precision of ten decimal places—why do so if we have a pocket calculator available? Our thinking about all synthesis-aiding programs is that they should be regarded precisely as “chemical calculators,” accelerating and facilitating synthetic planning, rapidly offering multiple synthetic options which a human expert can then evaluate and perhaps improve in creative ways.

That’s well put. And the corollary is that for less demanding syntheses, this technique should generate perfectly good routes that will not be subject to much improvement at all. The analogy to a calculator is a good one, although many will object that a mathematical operation has a definite correct answer, while a synthetic route is more a matter of opinion. But as the paper shows, these opinions can be (and are being) taken into account: you can set the software to generate the shortest routes, or the routes using the least expensive reagents or the most well-precedented reactions, or some combination of these – it’s like searching online for air travel, taking into account ticket prices, number of connections, layovers, and so on. The software shown can even take into account less-common preferences, such as using no regulated starting materials or reagents, the likely time needed for each reaction step, stipulating that particular intermediates, solvents, or reaction conditions have to be used or have to be avoided, and so on. (The analogies to GPS-driven map route algorithms will probably be clear by now as well.)
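Those user preferences boil down to hard constraints plus ranked objectives over candidate routes, much like the facets in a flight search. Here is a minimal sketch of the constraint side; the route representation and all the names in it are my own illustration, not anything from the paper or its software:

```python
def route_allowed(route, banned_reagents=frozenset(), required_intermediates=frozenset()):
    """Check a candidate route against user-set constraints.

    A route here is just a list of steps, each with its reagents and
    product -- a toy stand-in for whatever the real software uses.
    """
    reagents = {r for step in route for r in step["reagents"]}
    products = {step["product"] for step in route}
    if reagents & banned_reagents:             # e.g. regulated reagents
        return False
    return required_intermediates <= products  # every required one must appear

# A hypothetical two-step candidate route:
route = [
    {"reagents": ["NaBH4", "MeOH"], "product": "alcohol_intermediate"},
    {"reagents": ["TsCl", "pyridine"], "product": "final_target"},
]
```

Rejecting routes up front this way shrinks the search space before any scoring happens, the same way filtering to nonstop flights does.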

There’s still a lot to be done, as the authors show in the third section of the paper. But none of these problems seem to be computationally intractable – far from it. After looking at the computational approaches to things like chess and Go (or driving to Tucson), there seems to be no reason at all why organic synthesis shouldn’t fall into the same general category. Many of the same considerations apply – choosing moves that don’t reveal later vulnerabilities (which can be a computationally intensive process), trading off various factors (convenience, cost, literature precedence), and so on. You can, in chess terms, think of each step in a synthesis as a position, and the possibilities for the next step in the synthesis as the move to be made. It’s easier to do with chess, but that doesn’t mean it’s impossible with chemistry. The tricky part, over the years, has been converting organic chemistry into a computationally tractable form, but that’s well underway. As the paper shows, dealing with the problem in terms of networks and graph theory is particularly promising.
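The chess framing maps directly onto standard graph search: molecules are positions, retro-reactions are moves, and a route is a path from the target back to purchasable materials. A toy best-first (Dijkstra-style) sketch, with an entirely made-up reaction network; real systems, as the paper discusses, need far cleverer scoring and pruning than this:

```python
import heapq
import itertools

# Hypothetical retro-reaction table: product -> [(precursors, step cost)].
# Step costs might encode precedent, difficulty, or reagent price.
REACTIONS = {
    "target": [(["intermediate_A", "intermediate_B"], 2.0)],
    "intermediate_A": [(["buyable_1"], 1.0)],
    "intermediate_B": [(["buyable_2", "buyable_3"], 1.5)],
}
BUYABLE = {"buyable_1", "buyable_2", "buyable_3"}

def plan(target):
    """Expand the cheapest partial plan until every open molecule
    is commercially available; returns (total cost, steps) or None."""
    tie = itertools.count()                      # heap tiebreaker
    frontier = [(0.0, next(tie), [target], [])]  # (cost, _, open molecules, steps)
    while frontier:
        cost, _, open_mols, steps = heapq.heappop(frontier)
        todo = [m for m in open_mols if m not in BUYABLE]
        if not todo:
            return cost, steps                   # everything left is purchasable
        mol, rest = todo[0], todo[1:]
        for precursors, step_cost in REACTIONS.get(mol, []):
            heapq.heappush(frontier, (cost + step_cost, next(tie),
                                      rest + precursors, steps + [(mol, precursors)]))
    return None                                  # dead end: no route found
```

The priority queue is what lets the search abandon an expensive branch the moment a cheaper partial route exists elsewhere, which is the "self-correcting" behavior the paper asks for.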

The examples shown, which range all the way up to synthetic routes for Taxol, are quite interesting. The first examples are patchwork, chimeras of various known routes and reactions from the literature (in these cases, on the exact substrates), assembled into something that’s very likely to work. That is exactly how you or I would do it, if we were trying to get to some compound in the quickest and most feasible way possible, only the software searches through the literature much more quickly and thoroughly. The software can be set to display the various sources and their year of publication, so you can see how the route was assembled.

[Figure: vardenafil synthesis route]

By this point another thought will have occurred to many readers: that this sort of thing may well be fine for Frankensteining a synthesis together from literature precedents, and in fact can probably do that better than any human can, but what can it do with de novo synthesis? What if you feed it molecules that no one has made, in structural classes that haven’t been explored? The paper refers to a 2009 book that I hadn’t seen: Knowledge-Based Expert Systems in Chemistry (Not Counting on Computers), and refers readers there for a good history of attempts at this sort of thing. But the paper itself takes readers through a useful summary of LHASA, SECS, SYNLMA, SYNCHEM, SYNGEN, CHIRON, and other efforts over the years. The paper hypothesizes that the field may have actually been held back by its own enthusiasm and ambitions – many of these efforts were made with what we would now call completely inadequate hardware, which meant that many simplifying assumptions had to be introduced (which naturally limited their applicability). Chess, by contrast, is an easier problem to deal with. As everyone knows, Deep Blue marked the transition where computation defeated the best human player in the world, but it should be remembered that chess programs had been able to give ordinary (or even reasonably good) human players all they could handle for many years before that.

Organic chemistry software, though, has never come close to that standard. The problem isn’t subject to as much algorithmic compressibility as you would like – in the end, a brutal number of chemical transformations just have to be put in individually, without trying to generalize too much. Even trying to introduce shortcuts into the reaction-entering process will lead to trouble:

To do things right, the reactions must be coded by human experts carefully delineating which substituents are or are not allowed, and considering both steric and electronic factors, and more. This expert-based approach is actually not an exception in teaching computers to solve complex problems—indeed, Deep Blue was able to score chess positions because it was “taught” an incredible number, 700 000, of grandmaster games; Mathematica began to do its wonders of symbolic mathematics only after it has been “taught” by humans a certain number of rules, heuristics and algorithms, some of which took years to develop and volumes to describe. . .

Machine extraction of synthetic transformations from the literature works just fine – at first. And then it generates worthless gibberish when you try to apply all that data, because the context of the molecules involved is so important – that protecting group will fall off, that other group will get reduced, too, that stereocenter will racemize, that sulfur will inactivate that catalyst, and on and on. Getting things in at that level of detail has taken years, but once you do it right, you don’t have to do it again, and the software just keeps getting more powerful as you keep adding more detailed knowledge. What you end up with are many thousands of synthetic reactions, with information about their limits, vulnerabilities, and ranges all coded along with them. (They’ve also coded a separate list of structures to avoid – highly strained, basically impossible intermediates that might otherwise be considered plausible by the program).

Now this can be used to propose syntheses of new molecules, and they show an example using a recently identified natural product, epicolactone. It took several hours for the software to help come up with a solution, with an organic chemist going along step-by-step, but it’s actually a pretty impressive solution, and it’s also mechanistically related to a recently published total synthesis from Dirk Trauner’s group at Munich (both take inspiration from the biosynthesis of purpurogallin).

[Figure: epicolactone synthesis route]

And that takes us to the ultimate goal – yeah, the one where an organic chemist isn’t watching each step, but instead is off doing something else while the software grinds away. (Some readers will interpret this as “The one where there isn’t an organic chemist at all”, but we’ll get to that). This is going to take very careful work on defining the “position” at each step, and on scoring the synthetic possibilities. Brute force is not going to cut it, unfortunately – there are many more possible reactions from a given chemical position than there are possible moves from a chess position, and the whole thing gets out of control very quickly if you just try to grind it out (and even grinding it out requires a way to score and evaluate each possibility, in order for a recommendation to be made). The paper goes into details about the various scoring functions you can use (for the kinds of variables discussed above – cost, length of the route, literature analogs to it, and so on).
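The scoring side can be pictured as a weighted combination of route-level metrics, with the weights standing in for the user's preferences (shortest, cheapest, best-precedented). This is a generic sketch of such a function; the metric names and weights are illustrative only, not the paper's actual scoring:

```python
def score_route(route, weights):
    """Score a candidate route; lower is better. Each step is a dict of
    per-step metrics (hypothetical names, not the real software's)."""
    metrics = {
        "steps": len(route),                                     # route length
        "cost": sum(s["reagent_cost"] for s in route),           # total reagent cost
        "precedent": -sum(s["literature_hits"] for s in route),  # precedent lowers the score
    }
    return sum(weights[name] * metrics[name] for name in weights)

route = [
    {"reagent_cost": 120.0, "literature_hits": 40},
    {"reagent_cost": 15.0, "literature_hits": 3},
]
# A user who values brevity and precedent over reagent price:
score = score_route(route, {"steps": 10.0, "cost": 0.1, "precedent": 1.0})
```

Changing the weights re-ranks the same candidate routes without re-running the search, which is exactly the flight-search behavior: sort by price, then by number of connections, from one set of results.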

. . .we require that the search algorithm be 1) non-local—that is, able to explore not only one synthetic “branch” of synthetic solutions at a time but consider numerous distinct possibilities simultaneously; 2) strategizing—that is, able to perform few individual reaction “moves” that might locally appear sub-optimal but could ultimately lead to a “winning” synthetic solution; 3) self-correcting—that is, able to revert from hopeless branches and to switch to completely different synthetic approaches. In addition, we require that the searches always terminate at either known or commercially available substances (with the threshold molecular weights specified by the user).

These are stiff requirements, but the paper demonstrates some proposed syntheses of fairly recently-identified (and in some cases unsynthesized) natural products, such as tacamonidine and goniothalesdiol A. This is by far the most impressive thing of its kind that I have ever seen; it’s a real leap past what most people would think of as “organic synthesis software”, assuming that they think of it at all. At the same time, and as the paper strongly emphasizes, this is not a solved problem yet, either. There’s still a lot to be done, both in terms of the chemistry that’s being put into such systems, and with the scoring and evaluating algorithms that drive them. Steric effects (and stereoelectronic effects) in particular need shoring up, as the paper freely admits.

But these now seem like solvable problems, and that’s what I want to emphasize. There seems to be no reason why time, money, and effort cannot continue to make this sort of approach work better and better – and it’s already working better than most people realize. In fact, I’m willing to stipulate that software will, in fact, eventually (how soon?) turn out to provide plausible synthetic routes to most compounds that you give it, with the definition of “most” becoming progressively more stringent as time goes on. So where exactly does that leave us organic chemists?

Well, back in the 1960s, it took an R. B. Woodward (or someone at his level, and there weren’t too damn many of those) to look at some honking big alkaloid and figure out a way that it could be made. There were fewer reactions to choose from, and analytical techniques were (by our standards) quite primitive, so just figuring out that you’d made what you thought you’d made at each step could be a major challenge all by itself. But the science has advanced greatly – we have a lot more bond-forming reactions than we used to, and a lot of elegant ways to set stereochemistry that just weren’t available. And we have far better ways to know just what our reactions have produced, to make sure that we’re on the right track.

One response in the total synthesis community has been to turn to making even larger and more complicated molecules, but that (at least to me) tends to reduce to the “find new chemistry” rationale, and there may be easier or more direct ways to find new chemistry. That, though, looks like the place to be in general. All those transformations that have been painstakingly entered into this software were discovered by organic chemists themselves – and no one knew that there was such a thing as a Friedel-Crafts reaction or a Suzuki coupling until those reactions were discovered. Each new reaction opens up a new part of the world to the techniques of organic chemistry. If it does, in the end, come down to software for reaction planning, then each new reaction that’s discovered will suddenly expand the capability of that program, and it will be able to generate good synthetic routes to things that it couldn’t before (and neither could anyone else).

I think that George Whitesides is right, that organic chemistry in general needs to become more focused on what to make and why, rather than how to make it. “How” is, frankly, becoming a less and less interesting (and rate-limiting) question as the years go on, and the advent of software like this is only going to speed that process up. On one level, that’s kind of a shame, because “how” used to be a place that a person could spend a whole interesting and useful career. But it’s not going to be that way in the future. It may not be that way even now.

This isn’t the first time this has happened. NMR was too much for some of the older generation of chemists who’d proven structures by degradation analysis. And a colleague of mine mentioned to me this morning that when computer-driven searching of the Chemical Abstracts database became more widely available during the 1980s, his PhD advisor looked at his years of index cards, turned to him and said “This is the beginning of the end. Now any bonehead with a mouse can call himself a synthetic chemist”.

So the same debates over automated synthesis are sure to start up around this kind of software, too. But that work, too, forces us toward the harder problems and the harder “what” and “why” questions. I’ll finish up with a thought – how about taking this new software (named Syntaurus, by the way) and asking it for routes that prioritize the MIDA boronate starting materials and couplings that the Burke synthesis machine is so good at working with? Close the loop. Someone’s going to do it, you know – someone’s probably doing it now. Best to start thinking about how we’ll deal with it.

Update: Wavefunction has thoughts on this paper here.

44 comments on “The Algorithms Are Coming”

  1. Barry says:

    As you note, people (notably the E.J. Corey group) have worked this for decades. While it hasn’t yet replaced the chemist, it certainly has changed aspects of the practice. Chemists now routinely communicate through computer-generated graphics integrated with text. The first useful version of this was ChemDraw, which had been invented in the Corey group as a user interface for LHASA. While LHASA never launched, ChemDraw changed how we communicate and publish.
    Also out of the Corey group, “strategic bond analysis” is part of the armamentarium chemists bring to synthetic problems.

    1. ADCchem says:

      David and Sally Evans invented ChemDraw.

      1. Derek Lowe says:

        Not quite so. Dave Evans had an idea for making a chemical drawing program when he first saw MacDraw, and Sally Evans very much wanted one to come into existence. But Stewart Rubenstein wrote the program. See:

  2. Curious Wavefunction says:

    And this isn’t a purely academic debate either. The fact that synthesis has become a victim of its own success has had a huge (and many would say detrimental) impact on the economy. It’s very similar to what happened with software and programming, with many software jobs getting outsourced because of their evolution into plug-and-play protocols.

    Automation of even 30% of future synthesis using computer-aided design methods would amplify this impact tremendously, leading even more synthesis to be outsourced. That in turn would lead to an enormous problem for chemists (especially graduate students and postdocs) trained in synthesis, which would exacerbate the STEM problem far beyond its current incarnation. It seems more important than ever for chemists to foresee these trends before they strike, and to heed Whitesides’s advice to “move beyond the molecule”.

    1. Jay says:

      We are already seeing a steady increase in the automation of chemical synthesis. The Ley group has been a leader in flow synthesis, producing a catalogue of natural products.

      CROs such as Cyclofluidic have started to capitalise on this by adapting batch chemistry to flow. The beauty of this is that after the right protocol has been found you have complete reliability over your product. Syntheses of a dozen steps can now be performed over as many hours leading to huge cost savings downstream.

      The unification of algorithms, flow synthesis and inline compound analysis would close the loop on truly automated synthesis. Add to that robotic liquid handling and compound screening and we have ourselves a revolution in drug discovery. The only thing left would be an AI suggesting the compounds to be made. Then we will be truly out of a job.

    2. Insilicoconsulting says:

      Might the exact opposite not happen? Since most synthetic routes that do not require high end expertise will be proposed by automation, only a few people may need to be employed in-house. Maybe 1-2 experts and 5-10 junior chemists at the most?

      In that case, tendency would be to bring work back in house rather than have the additional baggage of managing outsourcing! Raw materials can be shipped from China in no time!

  3. JFlaviusT says:

    In 2011 Zen19D reached 5 dan in Go (scale of 1-9 for professionals). 5 years later, AlphaGo is the best Go player that has ever existed. Although I would estimate that synthesis is a tougher problem than Go (mostly because of imperfect/incomplete information in the literature and complicated stereoelectronic and functional group compatibility issues), Go was a MUCH tougher problem than chess, and chess was a MUCH tougher problem than checkers, and checkers was a MUCH tougher problem than [so on and so forth]. Many AI experts thought we were decades from a computer that could defeat a 9 dan professional without handicaps. Point is, AI almost always progresses faster than you think. As it gets better at what it does, it gets better at getting better at what it does. Programs for organic synthesis are no different. If a program can be the approximate equivalent of an undergraduate today, it will be a graduate student in 5 years, and a professor 2 years after that, then one month later it’s hanging Baran’s Ingenol synthesis up on its refrigerator, patting him on the head and telling him how cute it was that he came up with that all on his own. But most practitioners will bury their heads in the sand here and insist that they could never be replaced by computers.

    1. Curious Wavefunction says:

      There’s a very interesting recent article on the AlphaGo victory by Michael Nielsen in Quanta Magazine which you can Google (unfortunately this site hates links). According to Nielsen, what was special about the Go victory was that AlphaGo seemed to understand a key part of what makes humans different – intuitive thinking. By using neural nets trained on past winning Go boards and by improving those neural nets by pitting them against themselves, the program basically learnt what a human intuitively thinks is a “good” Go board without really understanding what human intuition is (do we understand it, for that matter?). I wonder if a similar capability can be built into some of these synthetic AI programs by training them on some of the “classics” of total synthesis where notions of intuition and beauty were imperative in achieving an elegant and efficient solution.

      1. JFlaviusT says:

        These are my thoughts exactly. Because we don’t understand chemical intuition, people tend to think “Well, if we can’t understand it ourselves, how can we code it into a computer?” But what if the question is turned back onto you: “If YOU don’t understand chemical intuition, how did YOU learn it?” You learned it by looking at lots of reactions and drawing imperceptible parallels between disparate data points. This is neural net computing and, as you say, this is what allows AlphaGo to have “intuition” without that feature being encoded into its logic. The way these computers are now learning is no different from how humans learn; I think we just need to provide them with informational infrastructure that allows them to efficiently and precisely navigate through the data we’ve generated so far. With Go that’s simple, since the moves are easily translated to 1’s and 0’s. With chemistry it’s tougher, but certainly nowhere near impossible.

    2. NHR_GUY says:

      LOL, your comment about Baran gave me a great laugh for today. I know him and think he is absolutely brilliant and one of the finest minds in our field today. That said, it’s scary to think a computer program will inevitably come along one day and exceed his ingenuity and make his work look trite. The genie has been out of the bottle for a while. It’s no longer a question of if, only when. Hopefully “when” arrives long after I am retired.

  4. Ed says:

    These are certainly interesting times for applying computational methods to synthetic planning, given the recent automated extraction of reactions from 200,000 patents – the data feedstock for automated synthesis. It looks like it’s getting rich enough to meet Peter Norvig’s “Unreasonable Effectiveness of Data” criteria. Uncomfortable though it may feel for chemists: data + algorithms = automation has been the rule in so many fields. Getting people to apply the methods, however, is a different matter, horses to water…

  5. Peter S. Shenkin says:

    Med chemists have told me for some time, “If we think it looks promising, we’ll figure out how to make it.”

    In focusing on what to make, the role of computation (for binding and ADMET prediction) becomes important. The more we can rely upon computation to predict better binding and ADMET properties than whatever we have now, the greater the synthetic effort development programs are likely to commit.

    The third component, it seems to me (beyond greater synthetic tractability – the subject du jour – and better property prediction) is computer-assisted generation of novel candidate structures.

    None of the above will eliminate the unpleasant surprises that biology presents, even given the most promising assay results. But we can hope these developments will lead us faster down the path of discovery. Understanding and avoiding such surprises is a fourth and larger frontier.

  6. Rhodium says:

    The porantherine synthesis of Corey and Balanson in 1974 ends with “The synthetic approach outlined in Scheme I (among others) was also suggested by LHASA-10, the Harvard program for computer-assisted synthetic analysis.” After 42 years there should be some progress in the field.

  7. anon says:

    I initially thought the “whopper of a chemistry post” would be on Burke’s synthesis machine. Not surprised to see the machine was mentioned in the post.

  8. ASOC says:

    A major problem in this entire field of computer aided organic chemistry is that I am respectfully doubtful that a lot of this software exists, at least in the state that is advertised in such papers.

    I am aware of Grzybowski’s long-running press for his system “Chematica”, which I can’t seem to download as a trial or even buy. There are many publications and citations by people who have never actually run the software. Look on Quora and you can find people asking if anyone has any experience with this system. You can find lots of GUIs, videos with cool music, and Churchill quotes, but very little on the correctness/performance of the underlying methods. The same holds for other such chemistry systems that have been recently published. On the other hand, you can download very sophisticated free software implementations for computer vision, speech recognition and chess playing.

    Compare the state of machine learning and automated reasoning with other areas in computer science (like automated theorem proving) and you’ll see that there are still some important and challenging underlying computer science problems to solve before we can claim victory. In the meantime, we ought to be promoting open competitions (like we do for many other sorts of programs) with standard problems so that we can compare performance across platforms. Chemistry’s hesitance in this area is troubling (especially given how long and painful it was to get PubChem).

    1. eugene says:

      Grzybowski is a bit of a salesman, and he can reliably get his work into the best journals. That said, I doubt he would publish this without actually having something to back it up. But if a lot of people started using it at this point and feeding it all sorts of structures, it might prove worthless for most of them – you could be right there.

      1. ASOC says:

        There is very little excuse in this day and age to write papers about software and not provide source and binaries so that readers/referees can verify the claims of the paper. Imagine if someone wrote papers for years about a (very useful) compound that no one else ever made to check if it actually existed.

        1. eugene says:

          Well, you could posit this question to the authors on PubPeer I suppose. It’s not only a platform for revealing Frankengels, and was originally envisioned as a place to debate the merits of ideas presented by articles. The authors will automatically get the email that a post has been made.

    2. Chematica says:

      Actually, we do exist! General information about the software is available on our website; more detailed information and access to the software are provided on inquiry.
      The distance between you — or others with inquiries on Quora — and Chematica/Syntaurus is one click! And we sure hope you will reach out… There really is no reason to live in doubt 🙂

      1. ASOC says:

        Chematica, grateful for your response. Were the paper referees granted software they were able to run and evaluate? Perhaps it is worth suggesting here if you offer trial licenses to academics/non-profits for the purposes of evaluation?

        1. Li Zhi says:

          No way should referees approve a paper without running the code themselves.

      2. Design Monkey says:

        Mhm, one click, and exactly how many tens (or hundreds) of thousands of dollars away?

  9. Validated Target says:

    For many years, Hendrickson’s SYNGEN program was developed and optimized to solve one synthesis problem, “testrone”, a slightly modified estrone. When it was finally tested on other then-contemporary problems by a newly arrived post-doc, it choked and blew smoke. The students eventually fixed it with a lot of new code validated by testing on a wider range of structures but in its early days it just seemed to be a rigged demo.

  10. eugene says:

    Good thing I work in an area of chemistry where we can’t reliably predict reaction outcome, or even if the reagent will do what you think it will do. Certainly lots of future prospects… I’m guessing also those unreliable Tet. Lett. procedures from the 80s and 90s that I was going through when I was still doing a lot of organic synthesis will provide a lot of sticks for the wheels of this machine.

    1. Anon says:

      This is precisely the problem. How many publications have reliable yields? What’s more, how many times have you seen two chemists try the same reaction and get completely different results because of technique? Often it’s a little tweak to the tightrope of the reaction setup that gives good yields.

      It will be very hard to normalize the input for neural type learning.

  11. tangent says:

    The retrosynthetic search really seems like the easy part here! The hard part being the ability to predict reactions: “conditions for reaction R, applied to novel substrates X + Y, will produce Z in W% yield.” Is that really solved to any usable extent?

    Once you can predict reactions, then it’s a search problem, find a path by chaining together links in compound space. Not a small search problem, but I would think a tractable one.

    (The search might seem vastly harder than solving Go, since Go has hundreds of possible next moves versus the many thousands of named reactions known (and maybe some thousands more that would do something?). But a two-player game is much harder to search, since at every step White will do whatever harms Black’s search for a win. Compare how easy it would be to find a Black win if the players cooperated.)

  12. Rule (of 5) Breaker says:

    Useful tool in planning – sure. Useful for automated synthesis? Well, you will forgive me for not retiring quite yet. No matter how much you “teach” the software, it will still work under the premise that the literature reactions will work as advertised. That seems like a bit of a stretch to me. My experience has taught me that while some reactions work exactly as published others take quite a bit of effort to get them to work with my substrates. Broadly applicable automated synthesis? Maybe someday, but not anytime soon.

    1. zero says:

      It is technically feasible to apply a weight to published reactions based on their cite counts or other metrics meant to generate a ‘trust’ score. This trust score would be one of several values to be optimized across the chain of reactions. Applying rules to set minimums for these values is likewise simple.
      If you only want to use reactions that have at least a dozen cites (thus hopefully avoiding ‘paper tiger’ reactions that don’t actually work) that should be easy to do from a software point of view. The hard part is populating that dataset in the first place.
      Using strict rules and minimums should make the search faster, since large numbers of reactions would be ruled out at each step. It does mean you are less likely to find a solution and your solution set is less likely to contain the optimum solution (for various definitions of optimum). On the other hand, the solution set (if any) will have reliable results with a high likelihood of success in the lab.
      A truly intelligent software package would perform multiple tiers of search, starting with only the most reliable high-yield reactions first and gradually expanding the potential reaction set and relaxing rules for later passes. Solutions can then be ranked by various attributes such as reliability, cost or yield, or less common attributes like availability of inputs. The results list could identify one or more reactions that would be usefully applied if the trust level were increased. (That is, a good program could give you a list of reactions to test in the lab where the results of those tests would usefully inform the solution set for a given target and could potentially improve accuracy for other targets.)
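zero's tiered scheme is straightforward to express in code: run the same search repeatedly while relaxing a citation-count ("trust") threshold between passes. A toy sketch with made-up data and a deliberately trivial one-step "planner" (a real one would recurse through multi-step routes):

```python
def tiered_search(target, reactions, thresholds=(12, 5, 1)):
    """Try the most-trusted reaction set first, relaxing the minimum
    citation count only when no route is found. Reaction records and
    thresholds here are hypothetical."""
    for min_cites in thresholds:
        trusted = [r for r in reactions if r["cites"] >= min_cites]
        for r in trusted:
            if r["product"] == target:
                return r, min_cites        # route found at this trust tier
    return None, None                      # no route at any tier

reactions = [
    {"product": "X", "cites": 30},   # well-precedented
    {"product": "X", "cites": 3},    # possible 'paper tiger'
    {"product": "Y", "cites": 2},    # thinly cited
]
```

The tier at which a route first appears is itself useful output: it tells you how far down the trust scale the planner had to reach, and which low-cite reactions would be worth validating in the lab.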

  13. mikeb says:

    Lots of chemists in denial. While you might not see automated synthesis in your lifetime, it WILL become a reality, eventually. About 20 years ago my first computer had a 3.2 GB HDD, which was considered elite at the time, and now you can buy multiple terabytes for less than $100. Combine this computer algorithm with a lot of the flow chemistry techniques currently being developed and it will only be a matter of time until you have a molecular printer.

    Don’t think that your job won’t be automated just because you’re highly educated. Most of the public has no idea how advanced cutting-edge machine learning really is. It is only a matter of time until we replace even a freakin’ surgeon with a robot as well.

    1. cbb says:

      Surgeons already use robotics quite heavily. However, you still need someone to oversee the robotics and ensure that it is doing the correct thing.

      1. Pennpenn says:

        Yes, but how long will we leave such critical oversight responsibilities in the hands of mere fallible humans?

        The overseers and maintainers are no safer than any other, and are fools to think themselves irreplaceable!

        I mean, I’m only a little joking…

      2. Mikeb says:

        Sure, you still need A surgeon to oversee the results of a surgery performed by a robot, but the point is that it is only a matter of time until hospitals and health care systems can save hundreds of millions, even billions, in labor costs by employing 20 times fewer surgeons and using robotics for the vast majority of surgeries, once the technology matures and has an established track record of near perfection. All you’d need is a single physician to oversee the results. Robots never get tired and can perform surgery better, faster, and more accurately. Again, not in our lifetime, but it will come eventually.

        Also, robotics is already starting to hollow out a field like pharmacy. Hospitals only have to employ a handful of pharmacists, while automated robots fill orders with far, far better accuracy than a pharmacist. In fact, Medco’s automated system had already filled 500 million prescriptions with an error rate of only 0.236, and that was 4 years ago. The ONLY reason pharmacists have jobs these days is that the law requires a pharmacist to be on hand while prescriptions are being dispensed, meaning pharmacists have jobs because they’re good at lobbying lawmakers, not because of some indispensable set of skills that beats automation. Pharmacy is the perfect example of a field that is going to be demolished by automation even though its practitioners are highly educated. It is only a matter of time before hospital administrators start lobbying lawmakers in the other direction, to remove the laws that require a pharmacist on staff, so that hospital systems can save even more money by cutting unnecessary and highly paid pharmacy labor.

        Chemistry will go the same way, with automated synthesis that never gets tired, always finds the optimal routes, and does it faster.

  14. Lowly postdoc says:

    Just wanted to mention that the question of “how” to make a molecule is the engine that drove all the new developments since the Woodward days that Derek mentions in the post (“the science has advanced greatly – we have a lot more bond-forming reactions than we used to, and a lot of elegant ways to set stereochemistry that just weren’t available”), and there will always be a place for “how” when you see a structurally complex molecule that “Syntaurus” cannot handle! Even if we have the perfect answer, someone still has to set up those reactions and make the product, so synthetic and medicinal chemists will always have to climb out of their supposed grave and rescue the world. Machines can only do so much; if computers could solve problems on their own, there would not be so many jobs in the IT sector. Optimistic views of a lowly postdoc!

    1. Albert says:

      I agree that retrosynthesis will eventually be solved computationally, but we are not quite there yet. A few months ago a colleague and I tested one of the current commercial retrosynthesis programs and found it of very little value. Regio- and stereochemistry were a mess, the results were either obvious or ended in an intractable combinatorial explosion, and so on…

      More philosophically, there is no job that in principle can’t be done by a computer, and it’s very difficult to predict which ones will suffer this fate first. If we achieve a true general-purpose AI, there will be no need for any science or engineering jobs at all…

  15. Daen de Leon says:

    I wrote a much scaled-down and simplified version of this for a customer of mine in 2007 — it used OpenBabel’s reaction templates to match primary and secondary amines on one molecule with carboxylic acids on another, performed the amide bond formation, and generated the reaction products. It didn’t take account of enthalpy or any physicochemical properties, but it could figure out when an acylation would work, or a nucleophilic substitution, or any one of about six amide-bond reaction types.
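    That kind of template matching can be caricatured in a few lines of pure Python (loudly hedged: this is naive string surgery on SMILES that only handles simple unbranched chains, and the function name is made up; OpenBabel or RDKit do this properly with real substructure matching):

```python
# Toy illustration of reaction-template matching: pair a carboxylic acid
# with a primary amine and write out the amide product. This is naive
# string surgery on SMILES, valid only for unbranched chains; real
# cheminformatics toolkits use proper substructure/SMARTS matching.

def couple_amide(acid_smiles: str, amine_smiles: str):
    """Return the amide SMILES for R-COOH + H2N-R', or None if no match."""
    if not acid_smiles.endswith("C(=O)O") or not amine_smiles.endswith("N"):
        return None  # templates don't match: no reaction
    # Drop the acid's terminal OH oxygen, attach the nitrogen, then write
    # the amine's carbon chain outward from the N (linear chains only).
    return acid_smiles[:-1] + "N" + amine_smiles[:-1][::-1]

product = couple_amide("CC(=O)O", "CCN")  # acetic acid + ethylamine
print(product)  # CC(=O)NCC, i.e. N-ethylacetamide
```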

  17. Li Zhi says:

    Reminds me of the (well-known) difficulty in persuading surgeons to use checklists. Despite clear evidence that mortality, morbidity, and error rates were lower in hospitals using formal algorithms, the surgeons strongly resisted (and many still do, apparently). Ego. Off the top of my head, it seems to me that “intuition” is those (non-affective) thought processes which are nonverbal (or unarticulated). To believe the narrator is in control of the machine is an error; most of the machine isn’t available for narration. It certainly does NOT require 4 years of study to set up a synthesis step (not 6 or 8, either!). The basic premise of machine-designed synthesis is that a machine will be faster, cheaper, or more effective; we should add to that: more trustworthy. It is a very good point that the available data is imperfect; statistics is probably [pun] the best solution to that. But any design capable of being coded into silicon is capable of being modified so that critical assumptions are flagged. The presumption that a machine which has evolved to fit a general and quite fuzzy ecological niche will be better than a purpose-designed machine rests on the designer of the latter being quite incompetent. The day IS coming when a PC, a monkey, and a stock room will outperform an organic synthesis postdoc, imho. We already have code that can formulate (biological) hypotheses, design experiments, and execute (flow-chemistry) experiments. I don’t think it’s a question of if, but of when, a combined system becomes capable of self-guided experimental learning (learning from experiments it itself performs and applying that to a goal either it or a “customer” identifies).

    It’s also interesting to note that we don’t know how many games are possible in Go. Theoretically, somewhere between 10^10^48 and 10^10^147 (or so I’ve read). While you might naively think a 19×19 board gives 361! move sequences, it is unknown how many of them are unreachable (on top of the obvious reductions in “different” games from the rotational and mirror symmetries).

  18. Li Zhi says:

    Hmm. I must have blundered: 361! isn’t anywhere close to 10^10^48 (let alone 10^10^147). It would still require more memory than the number of particles in the Universe, but still…
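    For what it’s worth, the size of 361! is easy to pin down with the Python standard library (a quick sanity-check sketch, nothing more):

```python
import math

# log10(361!) via the log-gamma function: lgamma(n + 1) = ln(n!)
log10_fact = math.lgamma(362) / math.log(10)
print(round(log10_fact))  # about 768
```

    So the naive 361! bound is roughly 10^768: astronomically large, yet nowhere near 10^(10^48).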

  19. Jen says:

    Have you seen 20n, the synthesis startup out of Berkeley?

  20. loupgarous says:

    “As everyone knows, Deep Blue marked the transition where computation defeated the best human player in the world, but it should be remembered that chess programs had been able to give ordinary (or even reasonably good) human players all they could handle for many years before that.”

    My Z-80A-based, 16 KB RAM (with the expansion brick) Timex-Sinclair 1000 was, five years after I bought it, mainly useful as the one computer I could reliably beat at chess. Got to take those naches where you can get ’em.

  21. Thomas Struntz says:

    Fully replacing the chemists is a long way off, but I see potential in such a system to inspire the chemist: maybe to show a pathway, or part of a pathway, that he would not otherwise have thought of, or would only have thought of after weeks to months of mulling over the problem. Either way, you gain a lot of time from experiments you did not have to perform.

    However, a lot of chemists, especially older ones, are quite stubborn and refuse even to try such a tool with an open mind. The second issue is IT competence. Looking at the videos in the linked article already tells me it’s unusable for most chemists, as defining your own rules (cost functions), as shown, is sadly already too much IT to ask. Click-click-click, wait 5 seconds, show the solution. Anything more complex and you need a designated expert to run the system.

    It’s scary how little some highly educated people actually know about computers, and hence it’s easy for a young chemist to stand out: invest some time in becoming more IT-versed. IT should really be much more of a part of natural-science education (chemistry, biology, …).

Comments are closed.