In Silico

Is FEP Ready For the World?

Here’s a paper that basically throws down the computational gauntlet. A large group of authors from Schrödinger, Nimbus, Columbia, Yale, and UC-Irvine say that their implementation of free energy perturbation (FEP) calculations really does lead to significantly more active compounds being predicted. That’s as compared to other computational methods, or to straight med-chem intuition and synthesis.

Here, we report an FEP protocol that enables highly accurate affinity predictions across a broad range of ligands and target classes (over 200 ligands and 10 targets). The ligand perturbations include a wide range of chemical modifications that are typically seen in medicinal chemistry efforts, with modifications of up to 10 heavy atoms routinely included. Critically, we have applied the method in eight prospective discovery projects to date, with the results from two of those projects disclosed in this work. The high level of accuracy obtained in the prospective studies demonstrates the ability of this approach to drive decisions in lead optimization.

They say that these improvements are due to a better force field, better sampling algorithms, increased computing power, and an automated workflow to get through things in an organized fashion. The paper shows some results against BACE, CDK2, JNK1, MCL1, p38, PTP1B, and thrombin, which seems like a reasonably diverse real-world set of targets. Checking the predicted binding energies versus experiment, most of them are within 1 kcal/mol, and only about 5% are 2 kcal/mol or worse. (To put that into med-chem terms, the rule of thumb is that a 10x difference in Ki represents 1.36 kcal/mol.) These calculations should, in theory, be capturing the lot: hydrogen bonding, hydrophobic interactions, displacement of bound waters, pi-pi interactions, what have you. The two prospective projects mentioned are IRAK4 and TYK2. In both of these, the average error between theory and experiment was about 1 kcal/mol.
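
As a quick sanity check on that rule of thumb, the conversion is just ΔΔG = RT ln(fold-change). A minimal sketch in Python (assuming room temperature, 298 K):

    import math

    R = 1.987e-3  # gas constant, kcal/(mol*K)
    T = 298.0     # room temperature, K

    def ddg(fold_change_in_ki):
        """Free energy difference corresponding to a fold-change in Ki."""
        return R * T * math.log(fold_change_in_ki)

    print(f"10x in Ki  -> {ddg(10):.2f} kcal/mol")   # ~1.36
    print(f"100x in Ki -> {ddg(100):.2f} kcal/mol")  # ~2.73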
But this is not yet the Rise of the Machines:

The preceding notwithstanding, a highly accurate and robust FEP methodology is not, in any way, a replacement for a creative and technically strong medicinal chemistry team; it is necessary to generate the ideas for optimization of the lead compound that are synthetically tractable and have acceptable values for a wide range of druglike properties (e.g., solubility, membrane permeability, metabolism, etc.). Rather, the computational approach described here can be viewed as a tool to enable medicinal chemists to pursue modifications and new synthetic directions that would have been considered too risky without computational validation or to eliminate compounds that would be unlikely to meet the desired target affinity. This is particularly significant when considering whether to make an otherwise highly attractive molecule that may be synthetically challenging. If such a molecule is predicted to achieve the project potency targets by reliable FEP calculations, this substantially reduces the risk of taking on such synthetic challenges.

There’s no reason, a priori, why this shouldn’t work; it’s all down to limits in how well the algorithms at the heart of the process deal with the energies involved, and how much computing power can be thrown at the problem. To that point, these calculations were done by running on graphics processing units (GPUs), which really do offer a lot more oomph for the buck (although it’s still not as trivial as plugging in some graphics cards and standing back). GPUs are getting more capable all the time, and they’re a mass-market item, which bodes well for their application in drug design. Have we reached the tipping point here? Or is this another in a very long series of false dawns? I look forward to seeing how this plays out.

57 comments on “Is FEP Ready For the World?”

  1. Anonymous says:

    I recently attended a talk about this subject given by one of the authors, Dr. Murcko. It was pretty interesting, and I am curious to see where this technology goes.
    Even still, it seems like an incredibly slow process:
    “four perturbations per day can be completed by use of eight commodity Nvidia GTX-780 GPUs”
    Even as the most basic setup, that is still rather slow, and it isn’t a trivial (or cheap) task getting such a setup running. However, if that is what is needed for this to become commonplace in labs, I’m incredibly excited.
    My inner computer geek is intrigued; I wonder how exactly the software utilizes the processing power of the GPUs. As with cryptocurrency mining, password cracking, and the like, I was under the impression that the AMD architecture was dramatically faster at these sorts of calculations. I would be curious to see a comparison of some of the different available hardware on the market. I wouldn’t be surprised if a switch to AMD could result in cheaper and faster systems that would make this new methodology more accessible.
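    For scale, here is the rough arithmetic that throughput figure implies, taking the paper's numbers at face value (a sketch; the single 8-GPU box is just the paper's example setup):

        # 4 perturbations/day on 8 GPUs => 0.5 perturbations per GPU-day
        gpus = 8
        perturbations_per_day = 4.0
        per_gpu_day = perturbations_per_day / gpus

        print(f"{per_gpu_day} perturbations per GPU-day")
        print(f"{perturbations_per_day * 365:.0f} perturbations/year on one box")
        # ~1460/year on a single 8-GPU machine, which is how the paper gets
        # to "thousands of molecules per year" once more nodes are added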

  2. Ed says:

    Gamer GPUs such as the GTX-780 are prone to errors due to their design limitations… one pixel wrong on one frame in an 80 fps game… who cares??? Quadro GPUs are a lot more expensive for a reason! So once again it is compchems living in a world of their own, without regard to the capabilities of the hardware that end users like medicinal chemists have on their desks.
    If you are interested in getting a comparably accurate estimated binding affinity in

  3. Ed says:

    #2…continued

  4. Twelve says:

    Haven’t read the paper – do they show that this works for GPCRs, or that it should work for GPCRs?

  5. Peter says:

    Thrombin receptor is a GPCR

  6. Pete says:

    It’s actually computed versus experimental delta-delta-G values that one needs to compare here rather than computed versus experimental delta-G values.
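    In symbols (a standard identity, not anything specific to this paper), the two alchemical legs of the thermodynamic cycle give

        \Delta\Delta G_{A \to B} = \Delta G^{\mathrm{complex}}_{A \to B} - \Delta G^{\mathrm{solvent}}_{A \to B} = \Delta G_{\mathrm{bind}}(B) - \Delta G_{\mathrm{bind}}(A)

    so the number to score against experiment is the relative binding free energy between two ligands, not an absolute one.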

  7. Matt says:

    If you are trying to run these calculations on your desktop you are just dipping your toe in the waters or Doing It Wrong. There are many scales of parallelism available here. The Right Way involves taking advantage of the coarse grained parallelism by farming out trajectories across a large number of independent compute nodes. The beautiful thing is that you don’t need the high speed coupling between nodes that you’d find in traditional massively parallel HPC systems. For academic researchers, the cheapest way to do it is probably using Amazon GPU-equipped compute nodes*. Chemists who can’t use outside facilities for IP-protection reasons would need an in-house rack of GPU-equipped machines.
    I also disagree that you need to pick your hardware from the Quadro series. In quantum computational methods at least the empirical result is that single-bit errors like you can encounter from non-ECC memory rarely affect the final result. Convergence is just a tiny bit slower if there’s a random error introduced at one of the iterations. For FEP you are using statistical results from many calculations. It doesn’t make sense to pay a large hardware premium to reduce errors by maybe 0.01% (Quadro cards with ECC memory) when that leads to substantially less sampling per dollar. There might be other reasons to buy professional cards but in this application ECC memory is not one of them.
    *Make sure you’re accounting for all costs if you think this is obviously wrong: hardware, power, space, cooling, networking, and most importantly personnel for maintenance and administration. Doing it yourself is cheap only if your time has no value.
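    To make that coarse-grained layer concrete, a minimal sketch (run_perturbation is a hypothetical wrapper around whatever MD engine and job setup you actually use; nothing here is from the paper):

        from concurrent.futures import ProcessPoolExecutor

        def run_perturbation(pair):
            """Hypothetical: run one alchemical A->B transformation
            end-to-end on one worker and return its ddG estimate."""
            ligand_a, ligand_b = pair
            ddg = 0.0  # placeholder for setup + simulation + analysis
            return (ligand_a, ligand_b, ddg)

        if __name__ == "__main__":
            # Each perturbation is independent, so no fast interconnect is
            # needed; any pool of workers (local or cloud nodes) will do.
            pairs = [("lig-01", "lig-02"), ("lig-01", "lig-03"), ("lig-02", "lig-07")]
            with ProcessPoolExecutor(max_workers=4) as pool:
                results = list(pool.map(run_perturbation, pairs))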

  8. Twelve says:

    @5 I read Thrombin as ‘direct thrombin inhibitor’, in parallel to p38 meaning ‘p38 inhibitor’. So in this case, they actually applied this to thrombin receptor antagonists?

  9. annonie too says:

    Seems to me that if this does give an advantage in selecting possible compounds to make and test, then the proper equipment would be affordable for larger Pharma companies at least, and as these companies also have independent computational chemists / scientists, it would give these groups something more concrete to do as their job function. Even if it takes several days or weeks, is this any different from the time frame of deliverables from such people now? Let the med chemists do their chemistry, and make the computational “experts” deliver on the analysis to give the compound space where compounds should be made and tested.

  10. Ann O Mouse says:

    Haven’t had contact with this methodology in many years, but I thought that you had to have a really good structure of an inhibitor-enzyme complex for one compound to be the basis for the experiment where you “perturb” one inhibitor into the other, then calculate Delta-Delta-G from that. This being the case, (1) is this really possible for GPCRs or only enzymes where you have a really good crystal structure with a bound inhibitor? and (2) Don’t these have to be very small changes? I don’t have access to the paper so I can’t fact-check my assumptions or see how diverse the ligands/inhibitors in their set really are. Just curious. Like the idea.

  11. Wavefunction says:

    The true test as usual would be to battle-test the method on a large number of real world drug discovery projects. In such cases you may not always have good quality crystal structures or you may be dealing with homology models. You may have highly flexible proteins in which sampling is a much bigger issue or you may have long-distance effects from distant residues or bound co-factors which may impact binding.
    Nevertheless, I would say this is definitely one of the most promising advances in the field I have seen recently, at the very least because its ease of use would make it amenable to deconvoluting its strengths and pitfalls. And I agree that the relatively modest cost of GPUs would not be a burden for even a small company, especially considering how much we spend on experimental resources.

  12. anonymous says:

    @5,@9: Thrombin, the protease. Not PAR1 (the thrombin receptor).

  13. Magrinho170 says:

    @11 Dammit Jim, I’m a doctor! Not an experimental resource!

  14. SM says:

    I wonder if any of the antiquated bitcoin farms could be converted to “hit” farms

  15. cdsouthan says:

    Highest density of erudition I’ve seen in these comment threads for a long time… keep it up

  16. Anonymous says:

    Tell me, do any of these computational models include any one of the 300 known post-translational modifications that can significantly impact protein conformations, signaling, half-lives, localization, etc. etc.? Unfortunately the post-translationalome is completely unpredictable and contains no known code like DNA does for proteins. How can these algorithms accurately predict what will happen in the cells for receptor-ligand or enzyme-inhibitor binding if the right PTMs aren’t included? Something like 40% of the entire molecular weight of ion channels comes from sugars, and changing just 1 sugar on an ion channel can significantly alter gating properties. So how could models accurately predict something like gating properties if you ignore sugars altogether? Practically all GPCRs, which are a favorite target of pharma, are also modified with sugars, and when in solution, the glycan structures on proteins are “wiggling and jiggling” just like the protein structures they’re on.

  17. OldLabRat says:

    @ 16 Anonymous: Can’t disagree with the post-translational issue. Wouldn’t wet lab binding assays suffer the same problems going from straight protein binding to cellular function assays? If I’m reading the paper correctly, FEP wasn’t used to predict cellular functional potency, but just binding affinities.
    @11 Wavefunction: the paper did mention prospective predictions on 8 targets. While not a large number of projects, it seems to be the start of real world use.

  18. Mark Murcko says:

    I can tell you all that this paper was a long time coming!
    re: predicting gating kinetics and the effects of large-scale movements of complex proteins in general: I completely agree that these are far harder problems.
    #11 (wavefunction) is right: we now all need to test these methods widely on problems of various kinds, so we can determine what I like to call the “sphere of applicability” of the method — when should you attempt it with caution, when can you really trust it, etc.
    For well-chosen problems, already it is becoming clear that the method DOES work. Has to be applied intelligently and with care. Naturally.
    Re: hardware costs, it is actually getting really quite cheap: something like $3 to $5 per calculation at current prices, and this will drop 10X or more in three years.
    So overall: I’m cautiously optimistic, and I am VERY skeptical about ALL new technologies as a general rule. This is where most of my gray hair has come from.

  19. Ann O Mouse says:

    @15-cdsouthan “Highest density of erudition I’ve seen in these comment threads for a long time.”
    Don’t get used to it. Derek’s next post will be on why Pharma CEOs are blood-sucking scumbags, Dr Oz should have his license to practice medicine revoked, and chemistry and chemists are just misunderstood (all true). Then it will be back to the same old crap here at comment-central.

  20. dr z says:

    @ 17
    you are correct in that conventional recombinant expression will produce protein devoid of post-translational modifications. this can dramatically influence the behavior of the protein of interest. histones would be the quintessential example.
    for smaller proteins, semi-synthesis may be an option, or even amber codon suppression for others. in general though, it remains an underexplored area of biochemistry.
    @ 16
    a valid point, especially with GPCRs. in light of the synthetic difficulties above, very few crystal structures contain PTMs against which to compare structures…

  21. Anonymous says:

    @17
    I’d imagine wet lab assays would also suffer the same problems if they ignore PTMs too. FEP might not model cellular potency, but I question just how useful the binding affinities it is used to calculate really are. Binding affinities of small molecules to what, exactly? Take for example an important oncotarget like EGFR. How can you correctly model the binding affinity of a small molecule to EGFR if you ignore EGFR PTMs? A change in sugar alone is enough to regulate EGFR signaling/conformations, see:
    Sialylation and fucosylation of epidermal growth factor receptor suppress its dimerization and activation in lung cancer cells.
    Proc Natl Acad Sci U S A. 2011 Jul 12;108(28):11332-7
    Even if you tried to develop a wet-lab screen for testing small molecule affinities, you’d run into problems if you used recombinant proteins expressed in bacteria, since bacteria do not contain the same machinery for PTMs as mammalian cells (and membrane protein expression in bacteria is notoriously difficult anyway). Even yeast and CHO cells, however, still produce noticeable differences in PTMs.
    Ultimately this just may be yet another argument for even more use of phenotypic screening using cells, but I fail to see how FEP could accurately predict binding affinities when the proteins, in the actual setting they’re supposed to be in with all of their correct PTMs, will have profoundly altered conformational states. It is quite common to find all sorts of different glycans directly in the binding pocket of receptors, or in hinge regions that are important for the conformational shapes of transmembrane receptors.

  22. Anonymous says:

    @21
    I am sure you have realized that the presented FEP approach is currently aimed mainly at lead optimization in specific projects and targets. Therefore, it is expected that an understanding has already been established of how the simple but rapid primary binding or potency assays relate to more comprehensive assays (if at all).

  23. Christophe Verlinde says:

    Looks like good work, but healthy scepticism is warranted given the following declaration at the end of the paper:
    “The authors declare the following competing financial interest(s): D.L.M., W.L.J., and B.J.B. are consultants to Schrodinger, Inc. and are on its Scientific Advisory Board. R.A.F. has a significant financial stake in, is a consultant for, and is on the Scientific Advisory Board of Schrodinger, Inc.”

  24. M Bower says:

    I call bollocks on this. Nothing has been done to significantly improve forcefields, to capture the real effects of solvation, to deal with a million ways that atoms aren’t spheres and bonds aren’t springs. What they are doing is getting the wrong answer faster, and this will turn out to be a long, expensive sales pitch for everyone involved. Beware!

  25. Am I Lloyd says:

    “What they are doing is getting the wrong answer faster”
    I thought that’s what all of us are trying to do.

  26. Ann O Mouse says:

    @24-
    For those too embarrassed to show their ignorance (never a problem with me)
    “Bollocks” is a word of Middle English origin, meaning “testicles”. The word is often used figuratively in British English and Hiberno-English as a noun to mean “nonsense”, an expletive following a minor accident or misfortune, or an adjective to mean “poor quality” or “useless”. Similarly, the common phrases “Bollocks to this!” or “That’s a load of old bollocks” generally indicate contempt for a certain task, subject or opinion. Conversely, the word also figures in idiomatic phrases such as “the dog’s bollocks”, “top bollock(s)”, or more simply “the bollocks” (as opposed to just “bollocks”), which refer to something that is admired, approved of or well-respected.

  27. Ed says:

    #26 thanks! can you also explain the etymology of the Hiberno-English phrase “Jesus suffering! F**k!”?

  28. Matthieu Schapira says:

    Looks very promising. Hopefully, this will become available to academics in the near future. Here is a message I got today from Schrodinger:
    “Unfortunately, FEP is not yet available for academic users. OPLS2.1 is available at USDxxx/year. FFBuilder is an additional $xxx/year (FFBuilder does require 16 Jaguar/MacroModel licenses, so please keep that in mind). I would be happy to send an evaluation, as well as arrange a demo with an Applications Scientist, if you are interested.” (I removed the actual quotes to stay on good terms with them.)
    This, in spite of the statement in the paper:
    “For a typical FEP calculation (6000 atoms in the protein) with the protocol described in this work, four perturbations per day can be completed by use of eight commodity Nvidia GTX-780 GPUs, making it feasible to evaluate thousands of molecules per year in the context of a drug discovery program with compute resources that are well within the reach of both academic institutions and commercial enterprises”

  29. H2L says:

    @ 24
    It looks like there are improvements to the force field that they used in this work. See the primary citation, where there appears to be a better representation of the underlying physics as compared with older force fields:
    http://pubs.acs.org/doi/abs/10.1021/ct300203w
    Faster is definitely important too, but what evidence is there that this approach will not work like they claim? Maybe I am just being naive, but the results look pretty good. Like they said, this is not a replacement for experiment, but can it at least help us make better decisions about which compounds to make next? It does not need to be perfect to be helpful.

  30. Ann O Mouse says:

    @27
    Nope, haven’t a clue on that choice bit of Hibernese. I will tell you that googling that phrase gives you a truly bizarre collection of hits.

  31. okemist says:

    I set this up at home and ran the calculations for 3 months for p38; answer: an asymmetrical urea

  32. Slicer says:

    No one’s mentioned distributed computing yet? I ran folding@home for years before it stopped working with my machine. A massive biotech conglomerate should have no problems paying gamers to run their GPUs overnight.

  33. Slicer says:

    Actually, let’s do the math on this from what #28 gave us, assuming that this can be made to work in a distributed environment.
    Let’s say that 30,000 home users participate in a program in which they get paid a dollar for every 8 hours they wholly devote to this task with a suitable GPU. From their perspective, it’s 30 bucks a month in free money to blow on Steam games. That’s 10,000 GPUs (assuming no dual-card setups) per 24 hours, so 5,000 perturbations a day, so 1,825,000 perturbations a year. An annual 11 million dollars for this before you add fudge factors and development costs. Forgive my ignorance, but isn’t that chump change compared to the possible profits from this much raw bruteforcing of biochemistry?
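    That arithmetic checks out; here it is worked through in Python, assuming one paid 8-hour shift per user per day and the 0.5 perturbations per GPU-day implied by #28:

        users = 30_000
        gpu_days = users * 8 / 24   # 10,000 full-time GPU equivalents
        per_day = gpu_days * 0.5    # 5,000 perturbations per day
        payout = users * 1 * 365    # $1 per 8-hour shift

        print(f"{per_day * 365:,.0f} perturbations/year")  # 1,825,000
        print(f"${payout:,} per year in payouts")          # $10,950,000, ~$11M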

  34. M Bower says:

    @29 “It does not need to be perfect to be helpful.” said every computational person everywhere. And I completely agree. But just consider solvation in the unbound state; very small changes in structure can lead to large and as yet unpredictable changes in solvation energies. We can’t predict those energies accurately (often not even whether they increase or decrease) with today’s forcefields and solvation models, and as they have (potentially large and unpredictable) effects on the total free energy of binding, it puts the whole enterprise on shaky footing. And that’s just one component.

  35. Julien Michel says:

    @34. This is interesting. Can you give specific examples of compounds where small changes in structure have been shown to give rise to large and unpredictable changes in hydration free energies?

  36. M Bower says:

    @35 Well, just as an example, methylamine is more soluble than ammonia, and computational methods have failed to reproduce the trend in solvation energies from ammonia to methylamine to dimethylamine to trimethylamine (as far as I know). See Meng et al, J. Phys. Chem. 1996, 100, 2367-2371 for an interesting stab at this problem. Now extrapolate that level of unpredictability out to drug-sized molecules, with their concomitant flexibility and the unpredictable ways all their conformations may interact with water solvent. Without a better understanding of energies of solvation (and the prices paid by molecules to partially de-solvate when they bind, etc.) predicting free energies of binding (where those energies form just one part of one side of a small difference between two large numbers) is still beyond our theoretical grasp.

  37. Anonymous says:

    @ 36 “Resolution of the amine hydration problem” 1999 Rizzo and Jorgensen.

  38. M Bower says:

    @37 and yet, 16 years later, accurate prediction of solvation energies for drug-sized molecules is an unsolved problem.

  39. anonymous says:

    @38, there was a solvation free energy prediction challenge. The results were published here:
    http://rd.springer.com/article/10.1007/s10822-014-9718-2
    In summary: “we find a relatively wide range of methods perform quite well on this test set, with RMS errors in the 1.2 kcal/mol range for several of the best performing methods.”

  40. Idi Amine says:

    #34 Bower: Appreciate what you are saying, but we don’t need to predict every individual case with accuracy in order to predict general trends. In most med chem projects the goal is to prioritize compounds rather than picking the One True Hit, and that’s what methods like this help us do. That being said, I agree that sometimes errors cancel and give us the right results and this is something that merits investigation.

  41. did they publish their data says:

    If the answer is no (hint: the answer is no) every single author of this paper is deserving of contempt for what amounts to fraud. selling a method without any real data to make money (yes every author somehow works for schrodinger or is on their SAB) is complete and utter bullshit. science deserves better. i hope every author is run out of science, or at least, hit by a bus.

  42. H2L says:

    @38 (i.e. M Bower)
    I think you need to read more about the field to make such bold comments. Quoting data from 15 years ago does not seem like the most solid place to argue from. Sure, 15 years ago there were no such accurate solvation free energy predictions, but the field has actually evolved (whether we like it or not).
    Of course, accurate solvation free energies do not imply accurate binding free energies. Solvation is a much easier problem — just a small molecule and waters. That seems somewhat tractable. Binding involves proteins, and proteins are difficult to treat, with all of their jiggling and wiggling. I am still not sold on whether we can sample proteins adequately enough with MD methods to predict accurate binding free energies, but that would be the point to argue, not solvation free energies.
    We should all be completely transparent that FEP is not magic. It is a physics-based simulation method. If barriers are too high to sample then the alternate states on the other side of the barrier will not be sampled. Ligands with alternate binding modes are likely out of the question at this point. Large-scale conformational changes are not tractable (yet) either. Fortunately, as we know, in the vast majority of cases that we deal with on a daily basis, the chemical modifications that we propose do not induce changes in ligand binding modes or large-scale conformational changes, so a method like FEP seems to have some space to make an impact.
    I am sure anyone who reads this blog can find a case that would break FEP, but is that really our goal? Are we really so scared of computers and computational algorithms? Even the most sophisticated, GPU-enabled, enhanced-sampling, statistically rigorous method is a long way off from replacing the role of a smart ligand designer. The best these methods can do is help us make better decisions, but I think that is worth something. You might be much smarter than me, but I sure make a lot of compounds that I think will improve potency that do not — in fact, the majority of what I think will improve potency does not. Does that make me worthless? Of course not (I hope). We are driving discovery, and making a lot of mistakes along the way. I see a method like FEP helping me make better decisions (and eliminating bad ones). Call me naïve, but I see something useful here.

  43. Anonymous says:

    @41 they published Excel tables in the SI, and all the initial setups are available on Google Drive. If you are asking about raw trajectories, are you really expecting ACS to host tens if not hundreds of terabytes of SI? Have you tried, I don’t know, asking them for the data you want before condemning them to be hit by a bus?

  44. H2L says:

    Tried posting another comment to M Bower, but it got held up by the moderator. Maybe this discussion has been terminated. Might be time for us to get back to work and see if this stuff can really make an impact. We do not have access to any kind of FEP, but it looks like there are many varieties out there, including free academic versions. Not sure how those compare to the Schrodinger version, but I expect we will see many comparisons in the near future, especially given that all of the Schrodinger data (save the live project data) is available in the SI.

  45. Idi Amine says:

    #41: *Slow applause*: This is exactly the level of erudition and reasoned debate on Derek’s blog that the previous commenter was talking about.
    It’s pretty clear that you hold a personal grudge against one or more of the authors – this is not the place for venting that grudge. If you have an actual scientific argument to offer I am sure people will be interested.

  46. ProfessorPlum says:

    @42 “Solvation is a much easier problem — just a small molecule and waters.” That’s true. But accurate solvation free energies don’t fall out of the FEP process for free – they are a sine qua non of the process of getting accurate FEP results. In other words, your FEP results can only be as accurate as your solvation free energies, and even if we can predict them to 1.2 kcal/mol, that sets an upper limit on how accurate FEP results can be.
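    One way to make that floor concrete (a sketch, under the assumption that the solvation-leg error is independent of the other error sources, so that errors add in quadrature):

        \sigma_{\Delta\Delta G} \approx \sqrt{\sigma_{\mathrm{solv}}^2 + \sigma_{\mathrm{other}}^2} \;\geq\; \sigma_{\mathrm{solv}}

    With sigma_solv around 1.2 kcal/mol, the overall error cannot drop below that unless the errors correlate and partially cancel, which is precisely what any more optimistic reading has to bet on.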

  47. Julien Michel says:

    @46 Yes, I agree with that, but the ca. 1 kcal/mol figure is across a lot of chemical groups. What would be useful now is some analysis of what types of perturbations work better than others. For instance, in my experience, nonpolar → nonpolar tends to be more accurate than polar → polar. Having more precise guidelines would be useful.

  48. Anonymous says:

    @46
    Seems like you know more than me about solvation free energies, but reading the abstract of that Schrodinger solvation paper they say:
    “OPLS2.0 produces the best correlation with experimental data (R2 = 0.95, slope = 0.96) and the lowest average unsigned errors (0.7 kcal/mol).”
    Maybe you are right that binding cannot be better than solvation predictions (although there might be fortuitous cancelation of errors that would play in favor of the calculations), but in any case, 0.7 kcal/mol seems pretty good across a broad range of chemical functionalities.

  49. ProfessorPlum says:

    @48 the field has come a long way, but the dataset used for that OPLS2 result consists of smaller molecules that are monofunctional at best and not really drug-like. They do contain some pharmaceutically interesting functional groups but don’t look like drugs. I do hope this new forcefield continues to perform well in blind tests of solvation free energy predictions; that would be great.

  50. H2L says:

    @49
    Good point about the size of the molecules. Do you know if there is data for solvation energies of drug-like molecules? I am not even sure if this kind of thing is possible to measure accurately experimentally. If it is, that should definitely be what the force field developers use for their validation. Small fragments are a start, but insufficient to really get a grip on the relevant problem. Still, maybe it is best to just go straight for the binding energies, since that is measurable and the ultimate goal of something like FEP.

  51. ProfessorPlum says:

    @50 measuring solvation free energies has always been difficult (as it often involves measuring very small dissolved concentrations or very small vapor pressures) and the lack of such data is a problem for the field – the datasets tend to be small, and then of course you run the risk of training to the data without careful design. Measured solvation free energies for drug-sized and drug-like molecules are rarer still.
    The problem with going straight for the binding energies is that FEP is meant to be an end run around the real problems with calculating binding energies by using a thermodynamic cycle that makes transformations between two ligands in the bound and unbound state. And therefore FEP must necessarily rely on being able to calculate changes in solvation free energy between two compounds. Which is hard to do, especially with the dearth of data.

  52. H2L says:

    @51
    So, what do you propose? Let’s not allow the pursuit of perfection to be the enemy of making practical progress. It sounds like the right solvation data is not available, and might not ever be. As such, it still seems to me that we need to start trying this free energy stuff in real projects. If it works (meaning it helps us make better decisions more efficiently), then we have something. If not, then we need to step back and reassess.

  53. julien michel says:

    @51 sadly it appears no one can get funding from a research council to measure hydration free energies of drug-like molecules.

  55. ProfessorPlum says:

    @52 well, one thing I wouldn’t recommend is pretending we’ve solved the problem when we haven’t.
    FEP may indeed be a good method for predicting activities based on a known activity and a binding mode – but there are many others, including 3D shape/feature similarity methods, QSAR (which has its own problems), simple fingerprint/clustering, potentials of mean force, docking scoring functions, etc., etc., ad infinitum.
    I agree @53 that funding these kinds of measurements would be to the public good! If only it were sexy enough to be supported.

  56. H2L says:

    @55
    Nobody is proposing that the problem is solved. Just suggesting that we should not let the pursuit of perfection hinder our progress in applying something that might have value now. Based on the publication featured in this post, the value for binding energy predictions is potentially already much higher than for the laundry list of methods that you note. It sounds like we will not agree here. I just want to see this get used in the real world to drive real projects, which is the only way we are really going to understand how useful it is now. It might flop. It might not. But enough of the retrospective comparisons of methods, methods, methods. Let’s accept when something looks promising and then give it a shot, like we do with new experimental approaches.

  57. prediction ? converged ? says:

    I am afraid that this is just another one of “those” papers; you will know what I mean if you are familiar with the “classical” FEP literature. Running 5 nanoseconds across the board should not magically lead to thermodynamically “converged” results, because the structures are very unlikely to be equilibrated. Interestingly, convergence checking (time-dependent), such an essential element of FEP, is never mentioned in this paper. I am sure that Schrodinger has enough GPUs to run longer on at least one set of data to check whether the predictions still hold.
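    A minimal sketch of the time-dependent check being asked for, in Python (estimate_dg is a hypothetical stand-in for a real TI/BAR/MBAR analysis, and the data here is synthetic): recompute the estimate on growing slices of the trajectory and see whether it plateaus.

        import numpy as np

        def estimate_dg(samples):
            """Placeholder free-energy estimator over a block of samples;
            a real TI/BAR/MBAR analysis would go here."""
            return float(np.mean(samples))

        rng = np.random.default_rng(0)
        samples = rng.normal(-8.0, 1.0, size=5000)  # synthetic "trajectory"

        for frac in (0.25, 0.50, 0.75, 1.00):
            n = int(len(samples) * frac)
            print(f"{int(frac * 100):3d}% of data: dG = {estimate_dg(samples[:n]):.2f}")
        # If the running estimate is still drifting at 100% of the data, the
        # runs have not converged, whatever a single-slice error bar claims.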

  58. Kent Kemmish says:

    @33 Are you going to do something about it?
