

Calculating Your Way to Antivirals

My intent is to start mixing in some non-coronavirus posts along with my pandemic science coverage – you know, like the blog used to be way back earlier in the year (!). Today’s subject might be a good transitional one – it’s an article in the New England Journal of Medicine on coronavirus drug discovery, but the points it raises are generally applicable.

“How to Discover Antiviral Drugs Quickly” is the attention-getting title. The authors are all from Oak Ridge, not known as a center of drug discovery, but the connection is the massive computational resource available there. Their Summit supercomputer is recognized as currently the most powerful in the world, which is a moving target, of course – Oak Ridge itself is expecting an even larger system (Frontier) next year, and other labs in China, etc., are not sitting around idly, either.

The authors note that “The laborious, decade-long, classic pathway for the discovery and approval of new drugs could hardly be less well suited to the present pandemic.” I don’t think anyone would argue with that, but it slides past a key point: it could hardly be less well suited to any other disease we’re trying to treat, either. Right? Is there any therapeutic area that’s best served by these timelines as opposed to something quicker? So this is not a problem peculiar to the coronavirus situation, although it does make for a more dramatic disconnect than usual.

Docking and Screening

The paper makes the case for high-throughput ensemble docking of virtual compound libraries. Many readers here will be familiar with the concept, and some of you are very familiar indeed. If this isn’t your field, the idea is that you take a three-dimensional representation of a candidate molecule and calculate its interactions (favorable and unfavorable) with a similar three-dimensional representation of a protein binding site for it. You’re going to be adding those up, energetically, and looking for the lowest-energy states, which indicate the most favorable binding. If that sounds straightforward, that’s because I have grievously oversimplified that description. Let’s talk about that.
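
To make the idea concrete, here is a deliberately toy sketch (my own illustration, not how any real docking program works – production codes use elaborate force fields, charge models, and pose-search algorithms): score each candidate pose by summing pairwise atom-atom interaction terms, and keep the lowest-energy arrangement.

```python
import math

def pair_energy(d, sigma=3.5, epsilon=0.2):
    """Toy Lennard-Jones 12-6 term for one atom pair at distance d (angstroms):
    strongly repulsive when atoms clash, weakly attractive near contact distance."""
    r = sigma / d
    return 4 * epsilon * (r**12 - r**6)

def score_pose(ligand_atoms, site_atoms):
    """Sum pairwise interaction energies between ligand and binding-site atoms."""
    total = 0.0
    for lig in ligand_atoms:
        for site in site_atoms:
            total += pair_energy(math.dist(lig, site))
    return total

def best_pose(poses, site_atoms):
    """The lowest-energy pose is taken as the predicted binding mode."""
    return min(poses, key=lambda pose: score_pose(pose, site_atoms))
```

A pose with atoms near ideal contact distance scores negative (favorable), while a clashing pose scores large and positive – which is the entire logic, minus everything that makes it hard.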

Among the biggest complications is that both the molecules of interest and their binding site can generally adopt a number of different shapes. That’s true even when they’re by themselves – some of the bonds can rotate (to one degree or another) at room temperature without much of an energetic penalty, and taken together that gives you a whole ensemble of reasonable structures, each with a somewhat different shape. A real kicker is that the relative favorability of these depends first on the compound’s (or the binding site’s) interactions with itself: a molecule could swivel around to the point, perhaps, where it starts to bang into itself, or you could rotate a bond to where nearby groups start to clash a bit, or you could cause a favorable interaction (or break one up) with such movements. And these energetic calculations are also affected by each partner’s interactions with solvent water molecules, which are numerous, highly mobile, and interacting with each other at the same time. Finally, the relative energies of each partner will be affected by the other partner. As a target molecule approaches a binding site, a dance begins, with the two partners shifting positions in response. You can have situations (for example) where there might be a favorable binding arrangement at the end of such a process, but no good way to get to it by any step-by-step route. The whole field of “molecular dynamics” is an attempt to figure out this process frame-by-frame, and if you thought getting a static picture was computationally intensive, MD will eat all the computing cycles you can throw at it. (Here’s an older post on that topic, but many of its issues are still relevant). One thing that becomes clear is that there may well be some arrangements of either partner along the way that would be considered unfavorable if you calculated them alone in a vacuum or surrounded by solvent, but which make perfect energetic sense when they’re interacting with the other partner nearby.
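
To give a feel for the numbers (a back-of-the-envelope calculation of my own, not any particular program’s method): sampling each rotatable bond at just three staggered torsion angles already gives an exponentially growing conformer ensemble, and the relative population of each conformer follows a Boltzmann distribution over its energy.

```python
import math

TORSION_STATES = (60, 180, 300)  # a few staggered angles per rotatable bond

def conformer_count(n_rotatable_bonds, states=TORSION_STATES):
    """Ensemble size grows exponentially with the number of rotatable bonds."""
    return len(states) ** n_rotatable_bonds

def boltzmann_weights(energies_kcal, temperature_k=298.15):
    """Relative population of each conformer: exp(-E/RT), normalized to sum to 1."""
    rt = 0.0019872 * temperature_k  # gas constant in kcal/(mol*K) times T
    factors = [math.exp(-e / rt) for e in energies_kcal]
    total = sum(factors)
    return [f / total for f in factors]
```

Five rotatable bonds already means 3^5 = 243 conformers per molecule, before you even consider the protein’s own flexibility – multiply that across a library of millions of compounds and the appeal of a supercomputer becomes obvious.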

Practitioners in this area will also appreciate that all those energetic calculations that the last long paragraph relied on are not so straightforward, either. Binding energy involves both an enthalpic term and an entropic one, and these can work in the same direction or can largely cancel each other out (a common situation). Even such an apparently straightforward step as displacing a water molecule from a protein’s binding site (to make room for a candidate small molecule) can be enthalpically favorable or unfavorable and entropically favorable or unfavorable, too. These calculations involve (among other things) the interactions of hydrogen bonds (very important), of charged or potentially charged groups such as acids and amines, of local concentrations of electron density such as pi-electron clouds and around electronegative atoms, and of plain noncharged alkyl groups that can attract each other weakly or strongly repel each other if they’re jammed together too closely.
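
The enthalpy-entropy interplay comes straight out of the basic thermodynamic relation ΔG = ΔH − TΔS. The two ligands below are invented for illustration: one binds through strong enthalpic contacts at an entropic cost, the other the reverse, and they land at nearly the same overall free energy – the compensation effect mentioned above.

```python
def binding_free_energy(delta_h_kcal, delta_s_kcal_per_k, temperature_k=298.15):
    """Gibbs free energy of binding: dG = dH - T*dS (kcal/mol; dS in kcal/mol/K)."""
    return delta_h_kcal - temperature_k * delta_s_kcal_per_k

# Two hypothetical ligands with opposite thermodynamic signatures:
enthalpy_driven = binding_free_energy(-12.0, -0.010)  # strong contacts, entropy penalty
entropy_driven = binding_free_energy(-2.0, 0.024)     # weak enthalpy, entropy gain
# Both come out near -9 kcal/mol despite very different underlying physics.
```

This is why getting either term slightly wrong in a calculation can flip a prediction: the answer is often a small difference between large, partially cancelling quantities.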

There’s a lot going on, and dealing with all of these things computationally is always going to involve a list of tradeoffs and approximations, no matter what your hardware resources. Skilled molecular modelers will know their way around these, realize the weaker points in their calculations, and adjust their methods as needed to try to shore these up. Less skilled ones (and let me tell you, I am one of those) might be more likely to take some software’s word for it, whether that’s a good idea or not. These various software approaches all have their strong points and weak ones, which might reveal themselves to the trained eye as the molecules (and the relative importance of their interacting modes) vary across a screen.

Now, all this is to point out that while speeding up the calculations is a very worthy goal, speeding up calculations that have underlying problems or unappreciated uncertainties in them will not improve your life. The key is, as always, to validate your results by experiment – and to their credit, the Oak Ridge authors specifically make note of this. This is a good way to expose weaknesses in your approach that you wouldn’t have appreciated any other way, which sends you back for another round of calculation (improved, one hopes).

“Virtual screening” of this sort has been a technique in drug discovery for many years now, and its usefulness varies. Sometimes it really does deliver an enhanced hit rate compared to a physical assay screen, and sometimes it doesn’t (and sometimes you never really know, because you’re doing the virtual one because the real-world screen isn’t feasible at all). It’s definitely improved over the years, though – the methods for calculating the energies involved are better, and we can evaluate far more shapes and conformations more quickly. But it’s important to realize that the larger the screen, the more work needs to be done up front to set it up properly – here’s a post on a paper that goes into that topic.
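
That “enhanced hit rate” claim is usually quantified as an enrichment factor: the hit rate among the screen’s top-ranked picks, divided by the hit rate you’d get picking compounds at random. A minimal version, with made-up numbers:

```python
def enrichment_factor(hits_in_selection, selection_size, total_hits, library_size):
    """Hit rate among a screen's top-ranked picks, relative to random selection."""
    selected_rate = hits_in_selection / selection_size
    random_rate = total_hits / library_size
    return selected_rate / random_rate

# Hypothetical example: a 100,000-compound library with 200 true actives;
# the top 1,000 ranked compounds turn out to contain 30 of them.
ef = enrichment_factor(30, 1000, 200, 100_000)  # 15-fold over random
```

An enrichment factor of 15 sounds impressive – and can be genuinely useful – while still meaning that 97% of your “hits” are duds.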

What Screening Gets You

And now we come to the bad news section, when we ask: how much time does one save in a drug-development process through improvements in high-throughput screening? Unfortunately, the answer is, mostly, “not all that much”. The laborious parts come after the screen is done, and they’re pretty darn laborious. Hits that come out of a screen have to be modified by medicinal chemists for potency, selectivity (against the things you know you should worry about, anyway), metabolic stability and clearance, toxicology (insofar as you understand it), and other factors besides, not all of which will be working in the same direction. Some of these things can be helped a bit by computational approaches, sometimes. But not all, and definitely not always.

And all this is before you even think about going into clinical trials. But those are the really hard part, where we have, for new investigational drugs, a 90% failure rate. None of the most common reasons for those failures are addressed by the supercomputer screen that started off the project. One big problem is that you may have picked the wrong target, and another big one is that your drug may end up doing something else to patients that you didn’t want. Neither of those problems is amenable – yet – to calculation, especially not the kind that the NEJM paper is talking about. You have to pick a target before you start your screen, of course, and you get ambushed later by toxicology that you never even knew was coming. It’s not that we don’t want a computational way to avoid such nasty surprises – that would be terrific – but nothing like that is on the horizon yet. Billions of dollars, big ol’ stacks of cash, are waiting for the people who figure out how to do such things. But no one can do them for you at the moment.
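
The attrition arithmetic is worth spelling out: success probabilities at each clinical stage multiply, which is how a chain of individually survivable steps compounds into a ~90% overall failure rate. The per-phase figures below are illustrative assumptions of mine, chosen only to be roughly consistent with that number, not data from the NEJM paper.

```python
def overall_success(phase_probabilities):
    """Chance a candidate survives every stage: the stage probabilities multiply."""
    p = 1.0
    for prob in phase_probabilities:
        p *= prob
    return p

# Hypothetical success rates for Phase I, Phase II, Phase III, and approval:
p_overall = overall_success([0.63, 0.31, 0.58, 0.85])  # comes out near 0.10
```

Phase II is the usual graveyard in numbers like these – the point where efficacy against the chosen target in actual patients gets tested for the first time, and where no docking score can help you.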

Now, I understand that the early computational screens against coronavirus proteins were for repurposing existing drugs, which is indeed a really good idea – it’s the way to get something into the front lines the quickest. But the Oak Ridge folks ran that screen back in February (and good for them for doing it). The last paragraph of the current article is a bit vague, but as it ascends into the clouds it seems to be aiming for something more than repurposing. That, though, will subject it to just those problems mentioned in the last paragraph. Virtual screening gets a lot of effort thrown at it because, honestly, it’s just a lot more amenable to a computational approach, so far, than the really hard parts are. People can do it, so they do.

In the end, though, screening is just not a rate-determining step. Making it faster is no bad thing, but it’s like cutting a couple of minutes off your trip to the airport to catch a six-hour flight.

50 comments on “Calculating Your Way to Antivirals”

  1. Jeff Weidner says:

    Another key aspect to applying any AI/modelling approaches is the lack of metadata. When we physically screen, there’s noise in the chemistry. 95% purity in your test compounds is considered OK, but what about the other 5%? Synthetic schemes and analytical chem can provide clues about the contaminants and their likely specific or non-specific interactions in the assay system. We’re even worse on the assay side. If I’m running an in vitro assay, what is the detection system? What form of the target protein/system am I using – binding domain, whole protein, post-translational modifications? Binding assay or functional assay? Cell background?

  2. Kaleberg says:

    Given the high probability of failure, I’m amazed that people stick to the drug development field. I suppose it is better than starving. It’s even worse than Hollywood, and there’s no equivalent of straight-to-video.

    1. Anonymous says:

      Well, the pay is good, if you can find a job. Money is still there because the payoff is potentially very good even if the risk is high. The quest is amazing fun – even if you don’t find a drug, you still learn fascinating stuff along the way, and if you do your job right you’re still pushing back the frontiers. And it’s still the best way to conduct a life-or-death endeavor.

    2. FoodScientist says:

      90% failure rate seems like great odds compared to the bloodbath in Alzheimer’s clinical trials.

  3. Mandy says:

    Derek what are your favorite computational chemistry tools?

    Also I miss airports

  4. Experienced Medicinal Chemist says:

    If we accept that virtual screening isn’t helpful because there’s a 90% failure rate in the clinic, that means we’d need to accept that medicinal chemistry isn’t helpful either for the exact same reason. And that would be absurd, right? Medicinal chemistry is clearly adding value, by developing the screening hits into nanomolar potent selective non-toxic (and on and on) clinical candidates. If they could hand us a molecule with some of those facets, we’d have more time to discover the rest.

    What *would* it take for a virtual screen to be useful? Pulling a nanomolar inhibitor out? A selective nanomolar inhibitor? A nontoxic selective nanomolar inhibitor? A selective nanomolar inhibitor that passes the Ro5? A selective nanomolar inhibitor that passes the Ro5 and isn’t eliminated as swill by Pat Walters? Just an interesting new scaffold with good (predicted) SAR? An accurate tissue distribution predictor? An accurate solubility predictor?

    Technology gets better all the time, so let’s set some goalposts up for the whiz kids. Or “crowdsource” as they call it.

    1. Moderately Experienced Medicinal Chemist says:

      For starters, connecting it to some kind of automated synthesis that makes the top hits from your virtual screen on the same day, so you can test them in an actual assay.

      1. Ge Li says:

        “some kind of automated synthesis”: You mean WuXi AppTec?

    2. Nat says:

      Isn’t the 90% failure rate referring to compounds that already have at least some experimental data, not simply virtual screening hits? My impression was that the attrition rate for purely computational hits was much higher.

      1. Pfarma pharmer says:

        Nat – You’re absolutely right! The 90% clinical failure rate is what traditional preclinical medicinal chemistry delivers. Purely computational clinical failures must be higher, because if they were the same, Pfizer would have pfired all of us already. But the question being asked above wasn’t “What does the computer need to do to replace us?”, it was the simpler “What does the computer need to do to be useful?”.

  5. Carl says:

    Honestly, from an armchair amateur perspective, the only thing I could think of to help with that 90% failure rate via computation would be to run these models looking for every possible binding of your compound with every known molecule in the human body, then flag all the ones within X orders of magnitude of probability of happening as your target interaction. Wouldn’t help with all those cases of an interaction with a piece of the machinery no one even knew existed, and I imagine the list of known compounds in the human body is probably 6 orders of magnitude larger than the compounds you’d screen for, at a minimum. And the (I assume) larger molecules involved in a good chunk of that are probably going to make the computational workload a few orders of magnitude more intense.

    So not something we can expect anytime this century, and possibly not anytime next century either. But it’s the only option that occurs to me (and none of this deals with potential metabolic action altering your compound; you’d hope that’s mostly already well understood enough to be handled pre-screen, but I’m betting, based purely on experience of reading this blog, that it’s not).

    Honestly, medchem often strikes me as the purest essence of science distilled down: taking a whole bunch of interesting gunk, throwing it at the wall, and documenting what happens to find out what might be of use.

    1. Ken says:

      “to run these models looking for every possibble binding of your compound with every known molecule in the human body”

      I am reminded of an exchange from Greg Egan’s novel Permutation City.

      PROGRAMMER: Sure, they could take in [to a computer simulation] a crude sketch of the organism. But they wouldn’t have a fraction of the memory needed to bring it to life. And it would take a billion years of simulated time before it evolved into something more interesting than blue-green algae. Multiply that by a slowdown of a trillion…

      INTERVIEWER: Flat batteries?

      PROGRAMMER: Flat universe.

  6. An experienced Modeler says:

    I feel I must apologize to the med chem community about this work. I think this falls into the Andy Grove fallacy class, that applying more compute power will smooth over any scientific cracks in the process: force fields, treatment of protonation in the protein and ligand (is that tetrazole really neutral?), solvent effects (look at Hypericin in fig 9 of their SARS paper. Ouch.). There are no control experiments (how sticky is the pocket? What happens when you screen a set matched on properties? Does the ‘best’ molecule really stand out from the noise?) and a heavy reliance on Vina scores. Virtual screening can find hits, but then the hard work starts. This whole effort is a pissing contest about quantity, rather than looking at quality. There are too many similar papers doing VS on the same target polluting the literature, and they all seem to come up with a different ordering of the same set of drugs. Pause for thought?

    1. Dr. Manhattan says:

      “I think this falls into the Andy Grove fallacy class, that applying more compute power will smooth over any scientific cracks in the process“

      Thanks for your comment. As others point out, there is a very real role for computational hit screening in drug discovery, but it is the downstream processes of hit to lead to candidate where the “unknown knowns” and even the “unknown unknowns” are found.

      As far as software is concerned, Silicon Valley would be hard pressed to deal with an FDA-like approval agency for their products. The recent Windows 10 updates that are crashing computers with the blue screen of death and erasing user files would have failed the software toxicology tests. And, oh yeah, drug developers don’t get “do overs” with software patches.

      1. achemist says:

        I’ve seen a quite funny comparison between biology and computers (might have even been here).

        Basically, biology is like fixing a bug in a program that’s written entirely in Chinese, there is no manual – and if you crash the program, your company fails.

    2. Zach says:

      Quite the opposite, actually. I know some people who are working on similar projects, and the line of thinking was something closer to

      We have
      1) A large computer which needs to be kept busy
      2) Many experts in high performance computing, molecular modeling, and reduced order modeling / machine learning who are temporarily idle.
      3) A charter to work on scientific issues in the public interest, and
      4) A sincere desire to help

      Given all of those things, they started working on the problem within a couple of days of the lockdown.

      1. milkshake says:

        this is commendable, but even if you discover a promising lead and a well-funded (pharma-sized) medchem group with good biology, PK and crystallography support takes it on, you are still 2-3 years away from a clinical candidate, for which complete formulation work, stability studies and chronic toxicity animal studies have to be done, plus development of a production route and its GMP certification. Some of that can be done in parallel once the most promising one or two candidates are identified, especially if you throw plenty of money at it. From there you have another 3-4 years to get the drug through clinical trials, which will cost a couple hundred million. And the whole game has a 90-95% chance of failure.

        So it would have been much better to limit the virtual docking and whatnot to databases of approved drugs, which might be re-purposed in less than a year, based on a single clinical trial.

  7. Too Long in the Wasteland says:

    Unfortunately this NEJM opinion piece comes across as another advertisement for massive virtual screening, no strings attached, and specifically justification for whatever costs have been incurred for the Summit supercomputer, courtesy of U.S. taxpayers. As others have commented, virtual screening is simply another tool for enhancing lead generation, nothing more, nothing less. It’s one of many first steps along a long path to a clinical compound. If not properly conducted, virtual screening can actually impede lead generation by generating a bunch of false positives. But if a practitioner knows what they’re doing, often it can significantly enrich the starting points for lead generation, complementing what can be found from HTS or screening smaller repurposed or diverse compound sets.

    Relying only upon conventional med-chem to triage a limited set of starting points can also be a recipe for failure, especially if the chemists get seduced by a potency trap – e.g. brick dust or flatland chemotypes that may be potent, but which offer little hope for in vivo efficacy. The more starting points the better – virtual screening can help here.

    Then there’s a good chance that right out of the gate you may be working on the wrong biological target because you’ve oversimplified the biology, the mechanism(s) of action, or the disease state(s). Don’t over-interpret biologists’ cartoons.

    A word of advice to all would-be drug discovery scientists, computational or experimental: please temper the hype. If drug discovery and development were easy or straightforward, it wouldn’t be so expensive or take so long. There’s plenty of humble pie for all. P.S. Beware of “thought leaders”.

  9. MTK says:

    Does one really need the world’s most powerful supercomputer to virtually screen 8000 compounds against a single protein? That sounds like using an F1 race car to go grab a gallon of milk at the Quick-E-Mart.

    Sounds like you’d be better off just running any old regular assay. You could probably even run it manually.

    1. Nathaniel Echols says:

      This certainly seems like a very expensive way to do an embarrassingly parallel experiment.

    2. Rick Deveraux says:

      You don’t need it, less powerful systems can also give results, but there are going to be tradeoffs.

      It’s either going to take you a lot longer to get the results, and/or the results you do get are less accurate (and even the best results don’t have a stellar accuracy).

      The analogy isn’t so much using an F1 car to “grab a gallon of milk from the supermarket” as it is “offroading it from Whitehorse to Torres del Paine”.

    3. Barry says:

      “any old regular assay” would require that you synthesize and purify these compounds. The virtual screen lets you evaluate compounds that have never existed yet. Small molecule space is vast. There’s not enough matter on Earth to make and screen all permutations.

      1. MTK says:

        The 8000 compounds were all approved drugs. They were looking for repurposing.

  10. A Nonny Mouse says:

    From a historical perspective: ICI (pre-AZ) closed down their antiviral research, claiming that a selective antiviral would never be found. Later that same day, Wellcome announced Acyclovir to the world.

    (The person that closed the ICI group down later headed up Research at Wellcome!).

  11. ER says:

    It is impressive that these supercomputers and virtual screening AI find phenol derivatives as agonists for the estrogen receptor.

    1. Adonis says:

      For this type of “discoveries,” my friend likes to joke: “If they didn’t know that, they should have asked me.”

    2. Sybil Fawlty says:

      Ah yes, shades of Dear Old Basil, specialist subject the BO and all that…

  12. Josh says:

    Silly-con Valley types think they can solve any problem with a big enough computer. Rather presumptuous and more than a little obnoxious. Let them calculate how to get triphenylphosphine oxide out of a reaction product. Then I’ll pay attention.

    1. brian says:

      Hiring a process chemist is cheaper than a computer scientist these days.

    2. Nick K says:

      I have found that triphenylphosphine oxide will simply crash out as a crystalline solid merely by triturating the crude mixture with hexane, and can be filtered off.

      I’ll get my coat…

      1. anon theII says:

        Put your coat back. You just filtered off all your product with the triphenylphosphine oxide.

    3. fef says:

      There is a high probability that an alternative synthetic route exists that does not involve you having to separate that product from TPPO. I am not saying that finding it is feasible in the near future, but come on, try to think outside the box.

    4. Pedwards says:

      The only reliable method for removing triphenylphosphine oxide from a desired final product is to decide that you don’t really want that final product.

  13. Joe Kelleher says:

    Are there any improvements in the experimental validation side of this approach that could help inform or direct the computational screening side? My (very!) limited understanding is that the ‘validate your results by experiment’ part itself just amounts to determining the binding constant and comparing with the model, and then all the experimental work from then onward doesn’t really take advantage of the modelling. If we could easily get more information from the experimental assay, say the protein conformation change, residence time, or the structure of water molecule interactions in the immediate neighbourhood of the ligand, would there be any use for that?

  14. Peter Kenny says:

    I was reminded of “It’s the largest computational docking ever done by mankind” in a rather breathless piece in Nature that I’ve linked as the URL for this comment. Many (most?) of the people advocating computational drug discovery appear unaware of the incremental nature of drug discovery. I would argue that drug design should be seen partly in terms of Design of Experiments with an objective of generating information as efficiently as possible.

  15. Andrew Molitor says:

    This sounds like the classic “three body problem” of physics, only with 10s of 1000s of bodies and several different kinds of gravity. Given that there is no closed form solution for 3 bodies and 1 kind of gravity, and most actual solutions are chaotic, I have to admit that this sounds.. hard.

    Virtually all solutions to “what happens when X bangs into Y” would also be, I guess, chaotic, which I think means “they bounce off after some absurdly complex interaction and nothing chemically interesting happens, but holy shit does a lot of wiggling go on.”

  16. Tom says:

    Derek and All, what are your thoughts on the likely success of AI structured knowledge repositories for drug repurposing, such as the recent Lancet paper from BenevolentAI that suggested baricitinib could reduce COVID infection? Another over-promised virtual drug discovery approach or the dawn of a new age in drug discovery?

  17. AQR says:

    Several years ago, I was told by an experienced computational chemist that if you had an X-ray crystal structure of a small molecule bound to a protein and a second crystal structure of a different small molecule bound to the same protein, you would not be able to determine which of these molecules had the higher affinity for the protein. I am curious whether this is still the case.

    If it is still true, then I am puzzled how people can put confidence in docking studies. In the case above, one knows the location of all of the atoms of both the large and small molecules, along with perhaps that of a number of water molecules, and one still cannot identify the stronger binder. In a docking study, the locations of the atoms of the protein and of two different poses of the small molecule are derived from calculations that would have to take into account the location of the various protein chains (many not directly involved in the binding interaction) and of solvent. Certainly there is a lot more uncertainty in the latter than in the former.

    1. Derek Lowe says:

      Yes, I’d say that’s still the case, unless one of the structures has something obvious like three solid-looking hydrogen bonds and the other one doesn’t, etc.

    2. Ro0 says:

      Even if you could get the exact affinity… a 100 nM compound may be a better drug than a 1 nM analog.
      Don’t they teach medicinal chemistry anymore?

    3. Mark says:

      You’re right to be puzzled. One thing to keep in mind is that on average, in a virtual screen, a 5% hit rate is considered pretty good: the results are dominated by false positives. Given this, you have to wonder whether running on larger and larger sets of virtual compounds is really going to give you more information.
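
That false-positive domination is Bayes’ rule at work: when true actives are rare in the library, even a screen with decent sensitivity and specificity returns mostly junk. A quick sketch, with hypothetical numbers chosen to land near that ~5% figure:

```python
def hit_precision(base_rate, sensitivity, specificity):
    """Fraction of flagged 'hits' that are true actives (Bayes' rule)."""
    true_pos = base_rate * sensitivity
    false_pos = (1 - base_rate) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Suppose 0.5% of the library is genuinely active, the screen catches 80%
# of the actives, and wrongly flags 5% of the inactives:
precision = hit_precision(0.005, 0.80, 0.95)  # only ~7% of hits are real
```

Making the library larger doesn’t change this ratio – it just hands you proportionally more false positives to triage experimentally.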

  18. Anon says:

    Docking is definitely a pissing match, but pissing matches still get published in Nature.

  19. Old fart says:

    You want to discover drugs quickly? Bring all the experienced folks out of forced retirement. They know what they’re doing… understand the science of drug discovery. Or else, “smart biotechs” wouldn’t hire them.
    Aren’t you guys tired of so much BS?

    Computers are great to play games… (says my daughter, while starting another PS4 game)

  20. Bruce Grant says:

    Murphy’s Corollary #347 — In clinical trials, the odds favor the null hypothesis.

  21. Stu West says:

    When the authors say a decade-long discovery process is poorly suited to the current pandemic, that’s probably just space-filling boilerplate (like student history essays that begin “Ever since the dawn of time…”). But I suppose you could argue that in the case of HIV, if it takes a decade to discover a drug, at least the virus takes about ten years to kill you and you can make a miraculous Lazarus-style recovery if you start on an effective treatment even in end-stage disease.

  22. Star Dorminey says:

    As a computer scientist outsider, I want to learn about the existing automation in drug development – could anyone give me some pointers? I know a bit about high-throughput screening, molecular libraries and fragment-based drug discovery, but I know little about automation of the “six hour flight” Derek mentions – tox screening, off-target effects, choosing the correct binding site.

    Could someone point me to the big machines – the grist mills that grind through screening hits and exclude them? How well can you predict tox in vitro before going to live animals? Maybe there’s some opportunity for systems that are part algorithm, part mechanized with humans in the loop directing them.

  23. Alan Goldhammer says:

    10 weeks ago I was reading every AI preprint that came my way. I even started keeping a running tally of all the identified targets. There was some similarity, but a lot of dissimilarity. I gave up when one of the papers published fexofenadine as a good drug with significant binding energy. Since I already take it for allergies, I figured my quest was over and I’m now protected! No HCQ side effects for me.
    I really stopped reading such preprints unless they were accompanied by in vitro data suggestive of activity against the virus.

    1. Gustavo Orair says:

      Alan, did you read studies on atazanavir?

      A Korean study indicates that, in simulations, atazanavir has binding affinity to multiple components of the coronavirus:
      “The AI prediction showed that atazanavir to have a potential binding affinity to multiple components of the coronavirus, binding to RNA-dependent RNA polymerase (Kd 21.83 nM), helicase (Kd 25.92 nM), 3′-to-5′ exonuclease (Kd 82.36 nM), 2′-O-ribose methyltransferase (Kd of 390 nM), and endoRNAse (Kd 50.32 nM), suggesting that all subunits of the SARS-CoV-2 coronavirus replication complex may be inhibited simultaneously by atazanavir.”

      A Brazilian study indicates that ATV (atazanavir) could dock into the active site of SARS-CoV-2 Mpro (main protease) with greater strength than LPV (lopinavir):
      “A molecular dynamic analysis showed that ATV could dock in the active site of SARS-CoV-2 Mpro (Major protease) with greater strength than LPV and occupied the substrate cleft on the active side of the protease throughout the entire molecular dynamic analysis.”
      “Next, a series of assays with in vitro models of virus infection/replications were performed using three cell types, Vero cells, a human pulmonary epithelial cell line and primary human monocytes, which confirmed that ATV could inhibit SARS-CoV-2 replication, alone or in combination with ritonavir (RTV). In addition, the virus-induced levels of IL-6 and TNF-α were reduced in the presence of these drugs, which performed better than chloroquine, a compound recognized for its anti-viral and anti-inflammatory activities. ”
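
For readers who want to relate those quoted Kd values to the binding free energies discussed in the post, the textbook conversion is ΔG = RT·ln(Kd) at the 1 M standard state (this is a standard relation, not anything taken from the cited studies):

```python
import math

def kd_to_delta_g(kd_molar, temperature_k=298.15):
    """Convert a dissociation constant (in molar) to binding free energy
    in kcal/mol via dG = RT*ln(Kd), with R in kcal/(mol*K)."""
    r = 0.0019872
    return r * temperature_k * math.log(kd_molar)

# The Kd values quoted in the comment above, converted:
dg_rdrp = kd_to_delta_g(21.83e-9)   # RdRp, ~ -10.5 kcal/mol
dg_mtase = kd_to_delta_g(390e-9)    # methyltransferase, ~ -8.7 kcal/mol
```

Note how the logarithm compresses the scale: a roughly 18-fold difference in Kd corresponds to under 2 kcal/mol of binding energy, which is within the error bars of most docking scores – one reason predicted affinities like these warrant the in vitro follow-up the comment describes.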
