
In Silico

Machine Learning for Antibiotics

I know that I just spoke about new antibiotic discovery here the other day, but there’s a new paper worth highlighting that just came out today. A team from MIT, the Broad Institute, Harvard, and McMaster reports what is one of the more interesting machine-learning efforts I’ve seen so far, in any therapeutic area.

This is another run at having an ML system work through a data set of both active and inactive compounds and letting it build a model of structures that seem to be associated with that activity. So let’s do a bit of background: that, of course, is what medicinal chemists have been doing for decades (in our rather human way), and it’s also what we’ve been trying to use software to help us do for decades, too. The potential use of automated systems to help us sift through the huge piles of available compounds (and the even more towering piles of possible ones) has been a goal for a long time now, and every generation of chemists and computational scientists has had a go at it with the latest hardware and the latest code. In its final form, this would be “virtual screening”, the supplementation (or outright replacement) of physical assays with computational ones that could evaluate structures that haven’t even been prepared yet but only exist as representations of things that could be made if they look interesting.

We’ve been working on virtual screening for decades, with a level of success that can be characterized as quite variable but (to be honest) often underwhelming. There have been many levels of this sort of thing – a popular one has been to run through a collection of known drugs or bioactive molecules (which puts you into the low thousands of possibilities). And if you’re working at a drug company, you can screen the compounds that you already have in your collection or some set thereof (which can take you up into the hundreds of thousands or low millions). These days, there are also collections of structures such as the ZINC databases, which will give you over a billion molecules to screen, if you’re up to it – and it’s safe to say that nobody is, because doing any sort of meaningful computational screen on those sorts of numbers really is at the edge of what we can talk about, even in 2020. Also at that edge is the idea of “generative” screening, where you don’t just dump pre-generated structures into the hopper, but rather have the software build out what it believes would be interesting new structures based on its own modeling. That also is just beginning to be possible, depending on whose press releases you read.

What do you get when you run these screens? Well, it’s safe to say that you always get hits. The uncertainties in modeling (and the concomitant desire not to miss things) ensure that you will pretty much always get virtual screening hits. Unfortunately, you can also count on the great majority of those being false positives should you actually screen them out here in the real world. To be sure, every physical screen generates those, too, but virtual screens are particularly efficient at false-positive generation, and the scoring and ranking functions are generally fuzzy enough that you really can’t make a start on clearing them out other than by running the actual assay. It’s true, though, that in many cases (although not always!) running those compounds does show that the virtual screen enriched the list with actual hits, as compared to the hit rate of the starting collection. That’s good, and it’s a success for the algorithms, although there are also times when that enrichment is nothing that a human chemist couldn’t have pointed to as well (based on similarities with known compounds).

To finish up this background digression, that last point is exactly what we would ask of virtual screens: to find us active compounds whose structures we wouldn’t have suspected ourselves. Ideally, the software would spit out a list of only such compounds, not scatter them lightly through a big pile of red-herring false positives and a bunch of real but coulda-toldya-that true positives. We can’t do that yet. But we’re getting closer, and this new paper is an example (to which at last I turn!).

An important feature of this work is that it’s a close collaboration between virtual screening ML methods and actual assays, run specifically for this project. For example, the team started out by taking a list of FDA-approved drugs and a somewhat shorter list of bioactive natural products (2,335 compounds total) and running a growth-inhibition screen with them against E. coli bacteria. Machine-learning models are exquisitely sensitive to the quality of the data used to train them, and it’s a very good idea to generate that data yourself under controlled conditions if you can. There are surely antibacterial numbers available in the literature for many of the compounds on that list, but they’re going to be from assays run by different labs under different conditions, against different strains and at different concentrations, making those numbers close to useless for reliable machine learning fodder. So collecting fresh numbers under tighter conditions was an excellent start.

120 of the molecules in that set inhibited bacterial growth by 80% or better at a set concentration, so those were classified as hits. The neural-network model was then trained up on these activities and structures. As the authors note, in years past compounds had to be rendered into numerical forms using human judgement calls (Do you assign different values to, say, different functional groups? To their arrangements in space? To other molecular properties as well? And in what proportion?). But our systems have gotten capable enough to learn such representations themselves from the structures and activity data, giving you a complex structure-activity model (the particular technique they used is detailed here). Further molecular features were added in as well using RDKit.
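
For those who want a feel for the mechanics, here is a bare-bones sketch of the descriptor-plus-classifier version of this idea in Python. To be clear, this is not the paper’s learned-representation network – it is closer in spirit to the RDKit-descriptor baselines the authors compare against – and the file and column names are made up for illustration.

    # Minimal sketch: RDKit descriptors plus a random-forest classifier on a
    # binary hit/no-hit growth-inhibition label. NOT the paper's model; the
    # file and column names ("training_set.csv", "smiles", "hit") are hypothetical.
    import csv
    import numpy as np
    from rdkit import Chem
    from rdkit.Chem import Descriptors
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    def featurize(smiles):
        # A handful of whole-molecule RDKit descriptors per compound
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return None
        return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
                Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
                Descriptors.NumHAcceptors(mol), Descriptors.NumRotatableBonds(mol)]

    X, y = [], []
    with open("training_set.csv") as fh:
        for row in csv.DictReader(fh):
            feats = featurize(row["smiles"])
            if feats is not None:
                X.append(feats)
                y.append(int(row["hit"]))  # 1 if growth inhibited by >= 80%

    model = RandomForestClassifier(n_estimators=500, class_weight="balanced")
    print(cross_val_score(model, np.array(X), np.array(y), scoring="roc_auc", cv=5))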

After this, the group used an ensemble of such generated models to evaluate a collection (from the Drug Repurposing Hub) of over six thousand molecules that have been reported as going into clinical development (across all sorts of indications). Compounds that overlapped with the initial training set were removed. And at this point they compared several different ML models in their ability to handle the data: one trained without the added RDKit properties, for example, along with one trained only on RDKit numbers, several random-forest-based models, etc. (I would be very glad to hear from people with more ML experience than I have about this paper’s degree of disclosure on their main model and these others – one of the biggest problems in the field is the lack of enough disclosure to reproduce such work, and I hope that’s not the case here). Taking the 99 best molecules as predicted by their model and actually testing these against E. coli for growth inhibition showed that 51 of these did have some level of activity – as compared to the set of the 63 lowest-scoring molecules, of which only 2 showed activity. So the virtual screen did indeed seem to enrich for activity (although, as you can see, about half of the best “hits” were still false positives).
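
Just to put rough numbers on that enrichment, using only the hit counts quoted above (a back-of-the-envelope calculation, nothing more):

    # Apparent enrichment from the hit counts quoted in this post
    top_rate = 51 / 99      # ~52% hit rate among the model's top-ranked compounds
    bottom_rate = 2 / 63    # ~3% hit rate among its lowest-ranked compounds
    print(round(top_rate / bottom_rate, 1))  # roughly a 16-fold difference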

Looking through those 51 screening hits for interesting structures and degree of clinical development, one in particular stood out: SU3327, a c-Jun N-terminal kinase (JNK) inhibitor that turned out to have an MIC of 2 micrograms/mL against E. coli growth (an activity that had never been noticed before). Its structure is vaguely like the nitroimidazole antibiotics (metronidazole, for example), but it displays a different spectrum of activity. In fact, the compound (renamed halicin) showed activity against an impressive variety of bacteria, including S. aureus, A. baumannii, C. difficile, M. tuberculosis, and others (it seems to have much less effect on Pseudomonas, unfortunately). Notably, it continued to perform against drug-resistant strains (strains resistant to several common antibiotics with different mechanisms of action), and attempts to generate resistance mutants were not successful.

Looking at gene expression profiling, the compound seemed to affect cell motility and iron homeostasis, which led to a hypothesis that it was affecting the pH gradient across the bacterial membrane (disruption of which has been reported to interfere with both of these). Indeed, the compound’s effects were very pH-sensitive, and experiments with fluorescent probes and membrane-disrupting compounds were consistent as well. This is not a commonly recognized mode of action, and it’s worth noting that the nitroimidazoles themselves don’t seem to work this way, but rather disrupt DNA synthesis. A quick search through the literature, though, turned up this paper that suggests that several antibiotics have effects on pH homeostasis that contribute to their bactericidal action (building on an earlier oxidative-stress hypothesis). But in that case, it seems to be the opposite gradient effect, if I’m reading it right: the antibiotics studied there (such as chlorpromazine) became more effective under alkaline conditions, whereas halicin becomes less so.

Halicin itself is shown in the paper to be effective in mouse models of drug-resistant bacterial infection, which is quite interesting. Topical infection with A. baumannii strain 288, which is resistant to all the usual antibiotics, was effectively treated with halicin ointment. Another model was C. difficile infection in the gut, where metronidazole is a first-line treatment, and orally administered halicin outperformed it. It would be quite interesting to know the compound’s profile in preclinical tox testing in its life as a JNK inhibitor, and how close it has come to being taken into human trials.

The group then went on to apply their ML model to wider sets of compounds. Trying it out against the 10,000 compounds in an anti-tuberculosis set at the Broad was predicted not to line up well, since that is a highly divergent chemical and biological space from the original set. And indeed it didn’t – running the best-scoring and worst-scoring compounds from the ML model in the E. coli growth inhibition assay showed no real enhancement by virtual screening. These results were incorporated into the model, then, as the group moved on to the (huge) ZINC15 data set of over 1.5 billion structures. This is too large to screen in its entirety in such detail, at least it is in the year 2020, so the group concentrated on compounds with physical properties most like those of known antibiotics. That knocked it down to a mere hundred million molecules – still a very impressive number. The top 6,800 compounds were, as before, compared against rankings from several other ML models, and 23 were selected as having very good similarity in antibiotic property space but with wide structural divergence, in an effort to find new chemical matter that might well show new mechanisms.
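
That property prefilter is conceptually simple, and here is a rough sketch of the step in Python. The actual cutoffs the group used are not reproduced here, so the ranges below (and the file names) are illustrative placeholders only.

    # Sketch of a property prefilter: keep only library entries whose computed
    # properties fall inside ranges typical of known antibiotics. The cutoffs
    # and file names below are illustrative placeholders, not the paper's values.
    from rdkit import Chem
    from rdkit.Chem import Descriptors

    def antibiotic_like(smiles, mw_range=(150, 600), logp_max=3.5):
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        return (mw_range[0] <= Descriptors.MolWt(mol) <= mw_range[1]
                and Descriptors.MolLogP(mol) <= logp_max)

    # zinc15_smiles.txt (hypothetical): one SMILES string per line
    with open("zinc15_smiles.txt") as src, open("prefiltered_smiles.txt", "w") as out:
        for line in src:
            smi = line.strip()
            if smi and antibiotic_like(smi):
                out.write(smi + "\n")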

These compounds were assayed against E. coli, S. aureus, K. pneumoniae, A. baumannii, and P. aeruginosa, which is quite a list of heavy hitters, and 8 of the 23 had inhibitory activity against at least one of these. One of the structures is a sort of quinolone-sulfa hybrid that showed rather potent and broad activity, while not being significantly slowed down by quinolone-resistant strains. Now, if you showed this one to an experienced medicinal chemist and asked what it was, they’d say “Probably an antibiotic”, just because of the structural heritage, but it seems as if it might have interesting activity and is probably worth following up on. So the screen was worthwhile, and the paper claims that it took 4 days to run. That estimate doubtless does not include the significant amount of effort it took to get to the point of running said screen, but it’s true that much of that work doesn’t have to be done again if you want to go further.

So overall, this is an impressive paper. The combination of what appears to be pretty rigorous ML work with actual assay data generated just for this project seems to have worked out well, and represents, I would say, the current state of the art. It is not the “Here’s your drug!” virtual screening of fond hopes and press releases, but it’s a real improvement on what’s come before and seems to have generated things that are well worth following up on. I would be very interested indeed in seeing such technology applied to other drug targets and other data sets – but then, that’s what people all around academia and industry are trying to do right now. Let’s hope that they’re doing it with the scope and the attention to detail presented in this work.

 

47 comments on “Machine Learning for Antibiotics”

  1. KwadGuy says:

    I know there has been a lot of AI naysaying in this space over the past few years in response to unsubstantiated AI claims.

    But this work does feel more concrete. It helps that they did a TON of wet work to go with their AI search–this feels like a HUGE effort overall. But the results are impressive, and I think they’re going to perk up a lot more ears. Especially since these authors focused on the hard and increasingly serious problem of antibiotic resistance.

    No, it wasn’t plug/chug/drug. That’s a pipedream. But it demonstrates how AI, properly and carefully applied, can move you out of the space you know.

    1. Nat says:

      As someone who has worked on both sides, I judge every computational biology or chemistry paper by the quality (or at least presence) of experimental follow-up. Not just because a computational model is only as useful as its ability to generate testable hypotheses, but if the authors couldn’t be bothered to pick up a pipette, I assume the model is equally half-assed.

      1. Dr. Viktor Frankenstein says:

        Don’t forget that a lot of people with computational expertise have no idea how to set up and perform a biochemical assay!
        Even if they were totally convinced that their predictions are true, they would still have to find and convince a wet-lab collaborator to do the experimental work.

    2. Stats matter says:

      “it demonstrates how AI, properly and carefully applied, can move you out of the space you know”

      It doesn’t really demonstrate that, you know. The AI seems to have generated one hit which is outside the chemical space it knew. That’s actually a lower hit-rate than they report for testing compounds that the AI predicted to be inactive, and lower than they found in the training set of drug repurposing molecules which were selected with no AI at all.

  2. Delph says:

    It’s worth noting that while disruption of the proton gradient might not be a well recognized mode of action in anti-bacterials, it is a very well established mode of action in pesticides. Insecticides such as chlorfenapyr and fungicides such as fluazinam work by this action, and it is a well established way to get activity. The issue is getting activity that is selective for the target organism(s), because all cells have membranes and are easily disrupted.

  3. Ron Richardson says:

    What’s great about this paper is that it absolutely fails as a modern med-chem-driven drug-hunting exercise–the compound is hideous, the mechanism of action is murky and seems liable to toxicity, and there aren’t studies on target engagement or binding mode. But applying modern medicinal chemistry to antibiotic discovery has failed tremendously! Most antibiotics break all the med-chem rules. Just training an AI on structures known to have powerful selective antimicrobial activity, getting some new scaffolds, and running the hits through basic animal models is probably a much more efficient approach.

  4. Dr. Manhattan says:

    One of the authors, Eric Brown, has set up a high throughput system for the microbiology up at McMaster. Combining his expertise (along with a couple of his students, who I also know) allowed the AI work to be validated with the biology. Some of the compounds undoubtedly have toxicity and other liabilities, but this is a fresh approach to identify compounds as hits that could be further refined into leads.

  5. Mark says:

    If you run a docking virtual screen, and discover a hit, and that hit turns out to bind allosterically somewhere completely different on the protein, is that still a success for your docking screen, or did you just get lucky?

    If you train an AI on a bunch of antibiotics, and it recommends a compound that turns out to work by hitting a target which none of the training set antibiotics hit, is that a success for your AI virtual screen, or did you just get lucky?

    I’d be a lot more impressed if halicin had the same mode of action as some of the compounds in the training set, but was structurally distinct from them – that would be a much better indication that the AI learnt something meaningful.

    1. Chris says:

      The AI result could be luck, but it doesn’t have to be. Some of the value of machine learning-driven predictions is that you don’t have to lock them into a hypothesis of activity, like a pocket or a binding mode or a target. You give it structure, you give it a phenotypic activity, and you ask it to tell you what new structures seem like they should also have activity. That’s the beauty of ML: it can handle the volumes of data necessary to detect very subtle SAR connections. While I can’t say what’s happening here, if you had a drug with split mechanism 90% target A/10% target B and optimized manually into 10% target A/90% target B I wouldn’t call that luck, right?

      If you were to repeat this experiment by feeding the model structure and target binding data? Well, maybe that would be luck.

    2. Anonymous says:

      Beginning in the 1940s, a search for folic acid analogs led to the development of methotrexate for the treatment of leukemia (link in my handle). It is a potent competitive inhibitor of dihydrofolate reductase and was predicted to be binding to DHFR in a manner similar to folic acid itself. Decades later, X-ray of the methotrexate bound to DHFR revealed that it was binding in a completely different manner. Good (human) idea, good (human) implementation, wrong (human) prediction.

      In the 1990s, Tack Kuntz (UCSF) published some results from the use of his DOCK program to predict strong small molecule binders to target proteins. Upon reduction to practice, they found that they (the AI, that is) had successfully predicted many strong binders but, based on X-ray, some of them were binding in ways completely different from the DOCK-predicted ones.

    3. x says:

      Fact is, they found new things. I’d call that making your own luck, maybe.

    4. Stats matter says:

      Just a point on the statistical significance of these results.

      120/2335 compounds from the training set hit the E.coli assay (~5%).
      2/63 of their compounds predicted to be inactive were active (~3%).

      From this you can conclude that this assay gives an underlying hit rate of 3-5% by chance alone from random, diverse compound sets.

      The authors tested 99 compounds and reported the one that’s least similar to the training set.

      By chance alone, you’d expect them to have found 3-5 such molecules.

    5. Ed says:

      I am not a computational guy but this approach strikes me as biologically very naïve. But, I am sure some of you would say that all my previous learnings prevent me from thinking outside the box.

      It seems to me this could work if there were only a single solution to the problem. However, there are multiple: for example, a mode of action via an extracellular target (with relatively few restraints on compound properties) and one via an intracellular target, which also requires the compound to have permeant properties. Multiply this by the range of different extracellular and intracellular targets (binding sites) and routes into the cell, and it will be clear that there are many solutions to the problem. Would the program allow for multiple solutions, or would it point to a hybrid, i.e. an antibacterial compound that gets stuck halfway during cell entry?

      So I would conclude that the outcome was not luck but there is still a lot of well established biology to be incorporated into the program.

  6. artificial stupidity says:

    Gr8! Now that antibiotics have been solved how about they do AI to COVID-2019? O wait, they already solved that too!

    https://chemrxiv.org/articles/Potential_2019-nCoV_3C-like_Protease_Inhibitors_Designed_Using_Generative_Deep_Learning_Approaches/11829102

  7. You can see this both ways, and I think they are equally true:

    On the one hand, we don’t have a negative control (baseline method), and a sufficiently large number of compounds that are tested using both methods (which would be experimentally rather difficult to do), so there is no objective way to capture the contribution of the method to the overall result. In the neural network model SU3327 is at rank 89, in the standard RDKit model rank 273 (out of 4,496) – is this a meaningful difference? There is some manual selection involved apart from the rank, and anyway not all compounds are tested… and maybe one of the higher-ranked RDKit compounds is even better? But maybe not of course, we simply don’t know! And this is a real problem with the validation of virtually all of our current ‘AI’ methods: We happily ascribe a particular outcome of a complex process to one of a very large number of possibly contributing factors – probably the one we prefer, such as the method if that is our brainchild – and ignore the rest (instead of comparing to baseline, which we simply don’t have). So somehow many ‘case studies’ are more a ‘proof by example’ in the end.

    On the other hand, we here have a compound with an apparently mostly new mode of action, the wet lab validation is indeed comprehensive, so didn’t we arrive at the goal here in the end? Hence: Does the way actually matter – or isn’t the success proof enough, with an unexpected chemistry/endpoint link, so ‘we got there in the end’? (And aren’t all drug discovery projects the result of a huge number of contributing factors where we never know which step contributed what precisely?) So in this sense this is a very successful paper, of course. It all worked very well – without us being able to say what precisely worked, and why. But, of course: Indeed, it worked.

  8. AlloG says:

    Looks like the mechanism o’ action as Nisin but dat nitro group flanked by a cupla of thiazoles- Hoo-boy! My liver hurts just lookin at it!

    Does dat ML predict how much money its gonna raise? I predict Zee-ro!

  9. steve says:

    Alpha Go beat world champion Lee Se-dol by coming up with moves that Lee said no human would have thought of. Lee retired after that saying Alpha Go was unbeatable. Alpha Go Zero is even better. It’s entirely possible that this AI learned rules from its data set that elude humans. Rather than just calling it “lucky”, maybe humans should try to figure out the logic that was used.

    1. Nick K says:

      Alpha Zero “learned” to play chess by playing itself several million times, thus “learning” the game. This took only 4 hours of machine time. It then went on to beat Stockfish, the strongest traditional chess engine. Even elite human players are simply swept aside. Quite terrifying.

    2. Watsin says:

      Since everything in the universe is just applied physics perhaps the AI should just learn first principles then derive all of biology from that. Filling in the gaps of disease should be simple!

  10. metaphysician says:

    With antibiotic resistance on the rise, I would tend to think that “toxicity” is a more fungible issue than before. It’s not a long-term drug for a chronic illness, it’s a drug for treating an active infection. The side effect profile would have to be pretty nasty to be a worse option than “Dying in the hospital of a multi-drug resistant infection”, doubly so since presumably these drugs would be used *in* a hospital (i.e., with supportive care and monitoring present) rather than in the home.

    Granted, it might be tough to market and make a profit on an antibiotic that has a 50% chance of killing the patient, even if you do try to limit it to infections that are close to 100% likely to kill the patient.

    1. nope says:

      That logic worked out well for mercury, malaria, and salvarsan vs syphilis after all

      1. steve says:

        Current TB drugs are pretty darn toxic – almost worse than the disease itself.

  11. Mike Gilson says:

    Following your prior recommendation, Derek, did the authors report the similarity (by some reasonable metric) of the hits to the compounds in the training set? I have skimmed the paper, but didn’t see this so far.

  12. Natural Stupidity says:

    Chlorpromazine is an antibiotic?

    1. loupgarous says:

      Thioridazine’s just as effective as chlorpromazine as an antibiotic. Some of the other phenothiazines are also possibly active against antibiotic-resistant Mycobacterium tuberculosis.

      Of course, long-term treatment of drug-resistant TB with chlorpromazine entails tardive dyskinesia, long-term thioridazine therapy risks QTc interval prolongation, weird retinal pigmentation and other things you’d rather not have if you could avoid it. Patients with drug-resistant TB might not have the option, especially in developing nations where there may be big bottles of generic phenothiazine antibiotics on pharmacy shelves.

  13. Anonymous says:

    Slightly off the main topic (AI / ML for antibiotics), but still related to the treatment of bacterial infections: phage therapy (link in my handle). The idea is to screen, mutate, evolve bacteriophage (a virus) against the drug resistant infection in vitro and then dose the patient with the optimized antibacterial phage. Controversial, not fully approved. But we know where this is going …

    “New Company “In The Phageline” To Use AI / ML To Optimize Phage To Treat Bacterial Infections, Cure Cancer, and Eliminate The National Debt. VC Are All In.”

  14. anonymous says:

    I haven’t read the paper, but it seems possible that the AI algorithm might simply have done the equivalent of applying rule-of-5 like filters to the training set. Antibiotics have physicochemical features that are outside the range of most other drug classes. If you simply gave priority to low cLogP, low MW, etc then you would likely enrich for antibacterial activity, even by unknown mechanism. (Or the presence of NH2, NO2, etc groups, isolated small heterocyclic rings, lots of heteroatoms, which would achieve the same thing.) The spectacular success of this particular exercise could be a result of this effect, supported of course by having a really good set of training data. Another contributing factor could be simple random luck. How many times have people set out to discover drugs through AI applications? If there have been 100 such legitimate efforts, and the first 99 fail (and therefore don’t make it to the BBC news, or even get published at all), does the success of the 100th effort really mean anything more than the law of averages catching up with this one lucky research group?

    1. Dr. Manhattan says:

      Another point is that as interesting as this paper is, in antimicrobial screens of compound collections one often turns up compounds that inhibit bacterial cells. The problem is they also inhibit A549 cells, HepG2 and other cell lines at concentrations close to the bacterial inhibitory concentrations. The one example in this set was SU3327, an abandoned JNK inhibitor developed for diabetes. We don’t know why it was abandoned – efficacy? Toxicity? Metabolism?

      So as far as this goes, it is an impressive effort to identify heretofore unknown hits that have the potential to be leads.

      Looking at their data set, Fig 6D has one very active compound (which has been noted above as looking to have hepatotoxicity potential). Curiously, the compound has excellent activity against E. coli and Pseudomonas aeruginosa (a very tough organism when it comes to antibiotics). But Klebsiella (one of the Enterobacteriaceae, and a fairly close relative of E. coli) has an inhibitory concentration more than 400-fold higher than E. coli and 80 times higher than Pseudomonas. Could be strain-dependent on that particular Klebsiella isolate, but to a microbiologist, it sticks out as odd.

  15. Allchemistry says:

    Halicin kills a wide range of bacteria, performs on metabolically inactive bacteria as well as on antibiotic-resistant strains, does not easily elicit resistance, and disrupts the membrane. These very same characteristics also apply to the ordinary disinfectant chlorhexidine, suggesting that the ML algorithm in one way or another (also) selected for features that are important for interaction with the membrane. Particularly for drugs like halicin that have an intracellular target, membrane-active properties are likely essential.

  16. Heetyout says:

    Check out the cravatt lab……can’t reproduce anything he did over the last 15 years.

  17. gippgig says:

    One odd way to screen for compounds with antibacterial activity would be to select those that have the highest rates of diarrhea as a side effect in clinical trials (seriously!).

    1. Dr. Viktor Frankenstein says:

      “select those that have the highest rates of diarrhea as a side effect in clinical trials”

      An excellent idea!

      1. Dr. Manhattan says:

        I would love to see that trial design….

        Cleanup in Aisle 3!

  18. wim says:

    Also relevant is this recent Nature paper, linked in my handle.
    They docked 150 million compounds (!) to the melatonin 1 receptor, excluded anything that is similar to known MT ligands, and discovered a plethora (10 new chemotypes) of agonists/inverse agonists at the MT1 and MT2 receptors, with a highly impressive hit rate of 39% (number-active/number-physically-tested).

    1. Derek Lowe says:

      I hadn’t seen that one yet! More blogfodder, thanks.

    2. the docker says:

      Honestly your docking success is dependent on the target. Some things just suck up compounds. Just try any b-lactamase and pick negatively charged compounds –infallible!!

  19. Mike Weinberg says:

    It’s definitely inaccurate to say that 100 million scorable compounds is manageable but a billion and a half is not. Scoring scales linearly with the number of observations! So, if anything, the reason for only considering a fraction of the compound dataset has to do with the lack of good validation data for compounds resembling the other billion and a half which were not selected for virtual screening, and so there was no model to score those with.

    I think that it’s fairer to say that good training data doesn’t exist for most compound structures in the ZINC db than to say that there isn’t enough computing power to generate predictions for them.

  20. Jim Hartley says:

    Heard Brian Shoichet talk a couple of weeks ago, he said they did a large-ish virtual screen via Amazon Cloud Services for $930.

  21. Painful PAINs says:

    Hi Derek,

    On this point:

    ‘although, as you can see, about half of the best “hits” were still false positives’

    How does this compare to other screening techniques? Direct uHTS, DEL, combichem (only partially joking here)?

    My first med chem role was in a phenotypic uHTS setting and despite a sophisticated screening platform and well educated scientists, we were still running into essentially PAINs compounds for what felt like the majority of the hits. Unfortunately, I was too junior to be privy to what % of the library was hitting, and what % of the hitters were real – but it was constantly discussed.

    1. It's painful says:

      @Painful PAINs,

      It’s one of the things that some virtual screening folks have trouble believing, but the way hits are identified makes practically no difference to the frequency of PAINs. The main thing is to find ways of removing them afterwards.

      An AI trained on a set full of PAINs will identify more compounds with PAINs-like features so having clean input is pretty important.

  22. lynn says:

    It is easy to kill bacteria, albeit harder to do so with E. coli and other Gram-negative bacteria. But, as others have said here, toxicity is key – killing must be selective. Kill the bug, not the host. The Drug Repurposing molecules may well not be safe at the levels required for antibiotic dosing [even with very potent compounds]. IIRC [I’ll have to check the reference] 25% of chemicals off the shelf will kill some bacteria. In the antibacterial discovery game [outside of TB, pretty much] we have generally avoided membrane active agents due to worries about toxicity. I fear that halicin will be a toxic agent. [It is also hardly soluble, according to Chemaxon calculations].

    The other equally important consideration is that it is not so much the molecular targets that are important in determining the overall “structures/determinants” of antibiotics – it is their ability to accumulate in the bacterium. That will vary with the number of obstacle membranes and efflux systems that must be passed [many more in Gram-negatives], the species, and whether the targets are cytoplasmic, periplasmic or act from outside the cell. Rather than deriving algorithms or neural nets based on killing activity by a highly diverse set of scaffolds, it would be worthwhile to ascertain what kinds of molecules accumulate where, and then analyze by location-bins [or possibly routes of entry]. This would require a way of measuring accumulation by compartment –by an activity-independent method, not so easy to do in bacteria. One non-high throughput method: Prochnow, H., et al. Analytical Chemistry 91:1863-1872.

    As to the Anonymous note above – yes, Rule of 5 will be a pretty good measure for intracellularly acting Gram positive agents and also [usually] necessary but not sufficient for intracellularly acting Gram-negatives. But bacteria may be killed by extracellular [extra-membrane] mechanisms – and those will have different physicochemical properties. Overall – pooling all antibacterials and searching for commonalities does not seem the best method to pursue. See, if you’re interested, Silver, L. L. (2016) A Gestalt approach to Gram-negative entry. Bioorganic & Medicinal Chemistry 24:6379-6389.

  23. achemist says:

    I cannot say I’m surprised something like this: (https://imgur.com/a/XEVuA91) kills bacteria.
    Would be more surprised if any cell survived with some of that abomination close by.

    Might as well screen triflates

  24. Logan Andrews says:

    Having worked in antibacterial drug discovery for several years I can comment that our research team would steer clear of any “antibacterial” compounds that could not generate resistance in the lab. This was a sign of a non-specific mode of action (akin to bleach) that would likely lead to tox at the exposures and durations needed to treat real bacterial infections in the clinic. Had this work produced a molecule with a defined mechanism of action (with an atomic-level understanding of how resistance could arise) I would be much more interested. In contrast, I would speculate that they found a bleach-like molecule that will be toxic at the higher doses antibiotics are typically delivered at.

  25. Kent Matlack says:

    In addition to reading Derek’s typically thoughtful and well-written post, I suggest reading the original paper. It is very well written and easy to read.

    1. Tourettes of Chemistry says:

      ALWAYS review the primary data (or the next best thing).

  26. Anonymous says:

    It would be interesting to know which set of features made the algorithm decide to select halicin, and to see if this profile distinguishes halicin from the lower scoring compounds.

  27. bacillus says:

    Treatment in the A. baumannii model began 1 h post-infection, and in the C. difficile model at 24 h post-infection. No one shows up at the clinic within these time frames. Wonder why they didn’t delay treatment until 48, 72, or 96 h after infection (unless infection was lethal before 48 h).
