
Optimism as a Function of Time

I’m traveling, so there’s a bit less time for blogging than usual. But I wanted to mention something that’s come up several times over the years, especially when new technologies are coming along. People ask me “Do you think that (such-and-such) is going to be important?” or “Can (technology X) really be as big a deal as it seems?” And when I answer those sorts of questions, I always have to start by clearing up what time frame we’re talking about.

That’s because I’m a short-term pessimist, but a long-term optimist. Consider, say, retrosynthesis software. There are some very interesting things going on in the field, but not all of them are commercial just yet. Right now, my impression of the available software is that it’s not wildly useful (although it’s still better than some people think it is). And it can be quite expensive – correspondents have told me what some of these packages are going for, and they ain’t cheap. So for the moment, no, I don’t think that this stuff has changed the world yet, and I also wonder if some of its commercial practitioners might not be pricing themselves out of what should be their market. So in the immediate term, I sound a bit pessimistic.

But. . .at the same time, I really don’t see much reason why such programs can’t work, or indeed why they shouldn’t. They’re getting more capable all the time, in terms of the knowledge of the literature that they have available, their ability to apply that knowledge to a given problem, and even the rate of improvement in both of those functions (second-order improvement!). We human chemists, on the other hand, are not improving at so noticeable a rate. If you could draw curves for both the average human chemist’s and the available machine’s competence in retrosynthetic analysis, I feel sure that the machine one would have a much greater slope. It will catch up with the human one and then surpass it, and I feel equally sure about that. So long term, I’m optimistic – if you call that optimistic, and your mileage may vary on that point!
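
To make that curve-crossing argument concrete, here is a toy sketch in Python. The starting points and growth rates are invented numbers for illustration only – nothing here measures actual human or machine performance – but the shape of the argument is the point: a steeper (and steepening) curve overtakes a flatter one sooner or later.

    # Toy illustration of the "two competence curves" argument above.
    # All of the numbers are invented for the sake of the sketch.

    def human_competence(year):
        # Assume the average chemist improves slowly and roughly linearly.
        return 70.0 + 0.3 * year

    def machine_competence(year):
        # Assume the software starts well behind but improves faster, and its
        # rate of improvement itself grows ("second-order improvement").
        return 20.0 + 2.0 * year + 0.15 * year ** 2

    # Find the first year (counting from now = year 0) at which the
    # machine curve catches the human one.
    crossover = next(y for y in range(100)
                     if machine_competence(y) >= human_competence(y))
    print(f"With these made-up slopes, the curves cross at year {crossover}.")

Move the starting points and slopes around however you like; as long as the machine curve is the steeper one, the crossover only shifts in time rather than going away.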

The same goes for many other trends. If you tell me that we are eventually going to get better at predictive toxicology, to the point of making a real impact on the costs of clinical trials, I will agree with you (while not committing to a date for “eventually”). If you tell me that you have software for sale right now that will do this for us in great detail, I will reach into my pocket to make sure that my wallet is still secure, because I will take you for delusional at best and out to swindle me at worst. Toxicology is a very hard problem. I feel sure that we don’t as yet have the knowledge to make serious decisions in the field without going through the expensive, laborious, dangerous tasks of giving our compounds to animals, and then to humans. But could we have such knowledge at some point? Why not? We’re not violating any laws of physics – the prediction of toxic effects is a big, tough, multifactorial mess, but there’s no reason that I can see why it shouldn’t eventually give ground to us. It already has, you know – just not enough!

So if you’d like to maintain a reputation as both a dour skeptic and a wild-eyed enthusiast, that’s how to do it. At least, it’s worked for me. Both these things are dependent on the time frame involved.

12 comments on “Optimism as a Function of Time”

  1. Anonymous says:

    I don’t know the current asking prices of retrosynthesis software, but I do recall that Hendrickson’s Syngen license wasn’t too costly. Syngen (both the Forward Synthesis program and the regular Retrosynthesis program) was developed using the Sigma-Aldrich catalog for starting materials (with prices) and free access to the commercial InfoChem database of, at the time, over 1 million reactions. InfoChem let Hendrickson use it for internal, academic use only. If clients wanted to link Syngen to a database, I have no doubt that the database is where the bulk of the cost would have been.

  2. Daniel Barkalow says:

    I think the only question about whether software retrosynthesis will eventually be important is whether something else will beat it by the time it beats humans. I’d entirely believe that a CNC experimental apparatus will turn out to be much better than any software running on a computer with fewer than a million cores dedicated to your question.

  3. hn says:

    Simple question from an academic: what makes predictive toxicology a hard problem? Any good review articles?

    1. Druid says:

      Paracetamol/acetaminophen toxicity has probably received 1 million hours of research and I am still not sure there is a definitive mechanism, though we do at least know how to make it safer by adding N-acetylcysteine to the mix (one of those rare things – an antidote!). There are often competing theories for mechanisms, as for diclofenac. Taking an empirical approach, we can look at acetaminophen (or diclofenac), see how it could be metabolized to a quinone-imine, and recognize a bad risk. But then you need to quantify that risk. Will it harm 1 in 10 or 1 in 10 million? What is the dose going to be? Then, unlike chemical synthesis, you can’t easily do the experiment to check the prediction and try to improve the model. Unfortunately, and I think unethically, very little is published from tox studies, particularly failures, and most data are buried inside inaccessible databases. In vitro assays are barely screens, and in vivo tox is often very different between species. The drugs that make it are not usually completely devoid of toxicity, but it is manageable and sufficiently rare.

    2. Mark says:

      The state of the art in comp chem is that we still cannot predict with any sort of reliability whether a particular compound will bind to a particular protein with any sort of affinity. For toxicology prediction you need to make this prediction for your compound against dozens of proteins, no doubt including some that we know very little about, and get the correct answer for all of them. That’s just for the direct tox mechanisms: you then have to consider the toxicology of all of the metabolites (and we can’t even currently predict what they will be, either). We’re so far from a predictive tox method that I really don’t understand any of the reasoning behind the vast sums of EU money that have been thrown at the problem: the output has been a huge array of crappy non-predictive models at vast expense. The money would much more profitably have been spent on getting a better understanding of the basic biology rather than throwing ML methods at useless 2D descriptors.

  4. Insilicoconsulting says:

    Nitpicking, but predictive tox would sit in the preclinical stage. In clinical trials or pharmacovigilance, what would be useful is finding true associations or causation for drug-AE events when they are obscured by concomitant meds, dosage, and natural disposition.

  5. Hypnos says:

    A retrosynthesis software package does not have to be better than a human expert to be useful. If we are able to automate even just the simple cases, we can free up time for experts to focus on the more challenging ones. Also: not every chemist in industry has deep synthetic knowledge and 20 years of experience. We have to be honest about our own capabilities and pick the right benchmarks in our discussions around “AI”.

  6. milkshake says:

    Retrosynthetic analysis is not lagging behind due to lack of computational power. It needs a huge training data set, and for that it needs to ingest a dataset comparable to the SciFinder database. The data has to be in a compatible format and has to be kept updated by reading and indexing the published literature. So really, it becomes a data entry problem.

    With toxicology prediction, I don’t know even where to begin.

    The problem with computational methods is GIGO – they are only good if the underlying data are good and if the presumptions that go into their processing are valid.

  7. MM says:

    Tangential, but I like to remind my students (in a med. chem. / chem. bio. course), who get very excited about AI and other new technologies, of Amara’s Law – “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” The Human Genome Project is a good example; we certainly didn’t meet the hype regarding new cures in the 2000s, but the power of genomics is now almost taken for granted (at least by my 20-something-year-old students).

    1. cancer_man says:

      Only journalists said there would be cures in the 2000s.

      I think there are technologies that improve in ways that many in their own field do not notice.

      “I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. It’s just completely obvious that in five years deep learning is going to do better than radiologists. It might be ten years.” – Geoffrey Hinton, “grandfather of neural networks”

      1. Isidore says:

        “Only journalists said there would be cures in the 2000s.”

        And those submitting NIH grants.

  8. Anonymous says:

    Predictive tox is full of problems. The raw data (LD50s, tumorigenicity, mutagenicity (Ames Test, etc.), acute tox, chronic tox, …) are all over the place and come from different organisms and different cells under different conditions. Even large, curated databases of in vitro tox data are only about 60-70% accurately predictive when compared to studies in whole animals (which can vary from fish to rodents to dogs to primates) – there is a rough sketch after this comment thread of what accuracy in that range actually buys you.

    Tox screens run in different cell types (kidney, liver, …) often give different results and tox rankings. So which do you use for your predictions? When those compounds are run in whole animals, perhaps the most relevant guide, non-toxic compounds sometimes become toxic and toxic compounds can have no observable effect on the animals, even after long exposures at high doses.

    Many, if not most, if not all compounds have multiple biological mechanisms of action. When looking for curative or preventive drugs, you’re looking for a beneficial effect that is better, faster, stronger than any negative off-target effect (although you do want to know about the most likely bad things that might happen, hence predictive tox). Predicting tox only, you have to consider everything that can go wrong and its potential for causing harm. Membrane perturbations, DNA damage, metabolic disruption, signaling disruption, … It isn’t just one thing that can happen. Which of those many “little” things will be responsible for the most toxic outcome? That can be really hard to predict.

    There is a free tool, OSIRIS Property Explorer (google it), that used to run in a browser. Due to problems with Java on many computers, it looks like it is now a free download (from openmolecules . org and elsewhere). Launch the applet, start drawing your molecules, and watch the indicators vary from green to yellow to red. It is a very simple-to-use tox prediction tool. I just found another free tox prediction tool: ProTox-II at tox . charite . de (google it). I haven’t tested it yet.

    (Wow! Webreactions lives again! openmolecules . org also has a download that runs Webreactions as a simple Java applet. Webreactions is a poor man’s reaction (literature) search tool. I used to use it a lot. [You have to make sure that your Reactant and Product atoms are mapped correctly or you’ll get an error. Do not erase and redraw; draw either R or P; click the tab to switch to P or R; modify the bonds and atoms; then search.])
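
A quick back-of-the-envelope, prompted by the “60-70% accurately predictive” figure in the last comment above. The sensitivity, specificity, and prevalence below are assumed values chosen purely for illustration, not data from any real assay panel, but they show why a single accuracy number in that range is hard to act on when genuinely toxic compounds are a minority of candidates.

    # Back-of-the-envelope on what roughly 65% predictivity means in practice.
    # Sensitivity, specificity, and prevalence are assumptions for illustration
    # only, not measurements from any real tox assay.

    def positive_predictive_value(sensitivity, specificity, prevalence):
        """Bayes' rule: P(truly toxic | flagged as toxic by the screen)."""
        true_pos = sensitivity * prevalence
        false_pos = (1.0 - specificity) * (1.0 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Suppose the in vitro panel catches 65% of truly toxic compounds,
    # correctly clears 65% of safe ones, and 10% of candidates are truly toxic.
    ppv = positive_predictive_value(0.65, 0.65, 0.10)
    print(f"P(toxic | flagged) = {ppv:.2f}")  # about 0.17

With those assumptions, most of the compounds the screen flags are false alarms, and the answer swings widely as the assumed prevalence changes – which is part of why a bare “percent accurate” figure is so hard to turn into a decision.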
