
Drug Development

BenevolentAI: Worth Two Billion?

Regular readers will know that I have no problem believing that AI (in its various forms) will definitely have an impact on drug discovery. And regular readers will also know that I’m quite skeptical that it’s going to have an immediate impact on the high-level functions of drug discovery (what target to go after, what molecules to make, which one should be the clinical candidate, etc.). Not everyone doing AI/machine-learning in the field is even talking about going after these, but those who are tend to go all the way. A few months ago, I wrote about a hype-fest presentation on this very thing.

One of the companies mentioned in it was BenevolentAI, from the UK. They themselves announced just the other day that they’ve raised another $115 million, in a funding round that values the company at about $2 billion, so I guess it’s time to have a look at them in particular. They say that “BenevolentAI’s advanced technology is disrupting the pharmaceutical industry by lowering costs, decreasing failure rates and increasing the speed at which medicines are delivered to patients”. I had not noticed this happening, personally, but hey, lowering costs and decreasing failure rates are things we need very much in this business. How is this being done?

The company’s advanced technology has been shown to outperform human scientists in understanding the cause of disease and is capable of quickly generating drug candidates at scale.  The technology is also able to decipher the molecular process of disease and link these disease signatures within patients to ensure that the best drug candidate is given to the best patient responders – the ‘right drugs in the right patients’.

“Understanding the cause of disease”. “Quickly generating drug candidates at scale”. “Decipher the molecular process of disease”. Well, I can accuse these folks of several things, but not lack of ambition. Nor lack of self-promotion. Rather than rant on for a paragraph about each of those statements, I will just say that I am extremely skeptical that they can do any such things – at least not to the level needed to “disrupt the pharmaceutical industry”. I base that opinion on my own human experience over the last 28 years or so.

We have a great deal of difficulty understanding the cause of disease, just to pick that first claim. And I’m not sure at all if AI is ready to help across the board with that problem, because that would imply that we have enough data and we need help interpreting it. And while we could certainly use such help, I think the bigger problem is that we just don’t know enough about cells, about organisms, and about disease. AI is going to be very good at digging through what we’ve already found, and the hope is that it’ll tell us, from time to time, “Hey guys, you’re sitting on something big here but you just haven’t realized it yet”. But producing new knowledge is something else again. Drawing correlations and connections is not really the same thing – new knowledge, in this field, comes from experimentation. More advanced AI could point us to the more fruitful areas to run experiments in: “Hey, there’s too much about long noncoding RNA that we really don’t understand and we need to shore up” (the gospel truth, by the way). Really impressive AI might predict pathways or connections that haven’t been confirmed by experiment, but look as if they should exist. But AI as it stands is (at best) just going to sift through and rearrange knowledge that we already have, not give us more of it, and I just don’t think we have enough on hand to “decipher the molecular process of disease”.

Lest you think that I’m taking those quotes out of context, here’s some more:

To achieve this, BenevolentAI has created a bioscience machine brain, purpose-built to discover new medicines and cures for disease.  Proprietary algorithms perform sophisticated reasoning on over 50 billion ingested and contextualised facts to extract knowledge and generate complex insights into the cause of diseases that have, until now, eluded human understanding.

“Bioscience machine brain”, forsooth. Who, exactly, “contextualised” those fifty billion facts for it to chomp on? Fifty billion seconds is nearly 1600 years, guys, and putting biomedical facts in context can, at times, take even more than a second per fact. But what the hey, I’m not a bioscience machine brain. I hope that the PowerPoint deck that convinced people to part with $115 million is written at a less eye-rolling level than this press release; a person could pull a muscle.

Nonetheless, I honestly wish these folks luck. This is the way to find out if this stuff works. They are, I should note, some of the people behind the most recent retrosynthesis software that I wrote about, and I thought that was pretty interesting. But that is a perfect example of taking existing knowledge and using algorithms to root through it and rearrange it in new and useful ways. Drug discovery in general could be improved by more of that – but I don’t think there’s room to improve it as much as the BenevolentAI people are claiming. I’ll give them credit for rolling up their sleeves and coming on down to see if it works. But I don’t think it’s going to go the way that press release says. Not yet.

Update: I see that I’m not alone! And for a much more grounded look at AI in drug discovery, see this recent C&E News story.

64 comments on “BenevolentAI: Worth Two Billion?”

  1. anoano says:

    They are running a clinical trial currently, BEN-2001 in Parkinson Disease Patients With Excessive Daytime Sleepiness (CASPAR), phase 2b with 230 patients using a JnJ compound that previously failed.
    For the money, they got a new site in Cambridge UK (bought the company that the founder created and sold before), so need more cash to cover the burn rate.

    1. Derek Lowe says:

      I’m told by a colleague that the J&J compound is not really a “failed” one; it’s just that the company didn’t want to go after narcolepsy as a target. We’ll see what happens!

  2. NotAnExpert says:

    As far as I know, BenevolentAI was not involved with the retrosynthesis program.
    They “only” employed the first author of that paper after the fact.

    1. anoano says:

True, however they advertise the program in their own slides as a technology they possess. And the author lists Benevolent as his affiliation in the paper (but no financial interest), so they may have licensed the technology and hired him before it was all complete.

      1. NotAnExpert says:

Maybe they did license the code, who knows?
But if you are referring to the Nature paper, it is essentially the same as this arXiv paper, where the first author has no affiliation with BenevolentAI.

  3. mallam says:

I’ve heard it said to chemists working on a new, novel target, who wanted more certainty in biological work including whole-animal and human studies, that “biology is messy”. The “zeros and ones” that make up computational analysis don’t deal well with the nature of biological uncertainty, i.e. the potential for “messiness”. No matter how many data points are analyzed or how many “adjustable” variables are added to better fit a set of “results”, the still-unknowns of animal and human biology cannot accurately be predicted. This concept includes the need for safety evaluation. If computational analyses were able to provide such predictive results, then the only compounds needing to be synthesized would be the few ultimate ones for verification of activity and then safety testing.

  4. myma says:

    I have a headache right now. Can they tell me what is causing it?

    1. Old Timer says:

      Too much wine last night after reading their press release?

      1. Old Pump Kicker says:

        If I read that press release /after/ I invested my money, I would be guzzling bourbon.

  5. Mad Chemist says:

    Let’s see how long before the UK’s equivalent of the SEC goes after BenevolentAI for lying to investors.

    1. Anonymous says:

      At one time, prior to 1997 or so, it was noted that the UK and Europe lagged behind the US in biotech investment and R&D. They felt that they were missing out on developing their own discoveries because of a lack of VC investment. (monoclonal Abs [Nobel Prize to Kohler at Max Planck and Milstein at U Cambridge] were never patented but put in the public domain).

      Very roughly, the UK (and parts of EU) had laws regarding the solicitation of investments that required that there be something of real, actual proven value (a real “thing”; a real drug with actual results) and not merely an idea (snake oil or an early stage promise). The investment laws were loosened up bit by bit in 93, 94, 95, … and so on, until it is now OK for snake oil salesman to solicit investors with promises of miracles.

      (As I understood it, prior to the changes, you could make and sell snake oil in a bottle to customers and be subject to various consumer protection laws, etc.. But, you could not legally solicit investors for your snake oil venture. In the US, the SEC has rules regarding VC investing, also — high net worth individuals able to prove that they can tolerate 100% loss of their investment, etc.. In the US, you could raise VC money based on wild speculation and unsubstantiated promises:
      recombinant mol bio to make insulin, HGF, etc – Genentech – success!
      diphtheria toxin fusion proteins – Seragen – not so successful
      single blood drop diagnostics – Theranos – not so successful)

  6. tt says:

Unbelievable! Who are these VCs that cough up dough for this pile of hype? Maybe it’s the same fools that funded Theranos looking to recoup their losses with an even bigger loser. I have no doubt that AI/ML will grow to play a role in drug discovery, but the hype cycle thus far smells like peak combichem days. Drug discovery is still primarily an experimental science and, given the vast swaths of garbage data in the literature, the prospects for AI to make any meaningful contributions at this stage are slight. Improve the actual data first, and then we might start generating helpful AI.

    1. Dionysius Rex says:

      The people/VCs funding this know that there are many greater fools. They will be in the money long before everything goes Pete Tong.

      1. tt says:

        I’m going to start a new company called “metaAI” in which we design an AI application that writes AI related business plans, creates auto-generated press releases, and presents to potential investors how this new AI startup will “revolutionize/disrupt (insert industry here)…” As soon as sufficient funds are raised, metaAI will flip this to some credulous, established player in said industry.

        1. fajensen says:

          Well – Journalism already works exactly like that for 80% of the “News”.

Machine learning algorithms spout out selected gibbering madness from a festering pool of misery collected across the globe, measure the click-through rate, social media sharing and other KPIs, and then update their state matrices for even more outrage and revulsion.

          It might just be possible to rent a Robo-Journo, tweak its parameters a bit using maybe some Sci/Bio grad students* for the A-B split testing instead of SoMe and Adds Pushed in Orifices and Bubba’s yer Uncle and we are on the way to suc-cess!

          *) Grad students will show up for anything if there is free food and drink involved.

    2. Peter Kenny says:

      How uncouth of you to say the T-word

7. I think that you really need to be less negative, Derek, and I’m sure that the happy smiling HR folks can arrange for you to attend a course on positive thinking. These are true Leaders, with whom you should feel privileged to share a planet. What you just don’t get is that with AI you no longer worry about trivial matters like intracellular concentration, affinity and selectivity. The key is to stop asking stupid questions, get some accreditation as a lean six sigma ‘belt’ and let AI’s finest minds disrupt drug discovery with metric-laden apps. I’ve linked a blog post on molecular interactions as the URL for this comment and I hope that you will show a bit more positivity in future posts on the topic.

  8. Metamonad says:

    What something is worth and what people are willing to pay are two different things.

  9. Delores says:

    “Doesn’t look like anything to me….”

    1. Scott says:

      Hey, a Westworld fan!

      That press release gave me a Buzzword Bingo Blackout.

    1. right says:

      I don’t think Baldoni knows anything about AI. Has he ever even programmed ‘Hello World’?

      1. Metamonad says:

Of course not, and why should he? He’s a visionary leader, not a lowly hacker! The word I used, “bullish”, is close in sound to a more appropriate term; I’ll leave that to your imagination. And while we’re on the topic of BS, have you come across the terms “ideator” and “ideation”? GSK has some of those too:

        1. fajensen says:

          Oh God, Yes.

In Sweden it is something where lots of consultants will turn up to peddle their skills. But there are no buyers, and no ideas created either, since nobody is buying anything; everyone is selling. Like LinkedIn, except not virtual but physical.

          The first-cause consultant running the event, the so-called “facilitator”, however, does get paid rather well.

  10. A Nonny Mouse says:

    Oh dear… Reminds me of the in-silico disasters, one of which I saw first hand when they wanted me to make their impossible predicted active compounds (and I only got to see pre-screened structures).

  11. road says:

    Why is Derek looking into a microscope in that C&E news write-up? Are there many microscopes in the modern med-chem lab?

    1. Derek Lowe says:

      Using a polarizer to try to find some decent crystals. . .

      1. RM says:

        Can’t you train an AI to do that for you?

  12. Wavefunction says:

    The marketing/business school bot that generated the word salad in that press release leaves me profoundly moved.

    1. Yvar says:

      You know, that really seems like an important point. An AI program that can better determine what will part investors from their cash would be more profitable than any drug discovery effort, and may well be what BenevolentAI has created.

      1. Anonymous says:

which begs a question: why does AI that claims to go after bench scientists sell better? Why not an AI-CRO? Feeling some conspiracy here.

      2. Wavefunction says:

        Basically you write a function that minimizes the distance between the fool and the money.

        1. Dr. Victor Frankenstein says:

          Surely you want to >maximize< the distance between a fool and his money, so that they can be parted more easily!

  13. luysii says:

Basically, drug discovery is hard because we are trying to change the workings of a system whose workings we don’t understand very well. In particular, we don’t know all the players in the system. Consider that microRNAs, which can alter the level of every mRNA in the cell (along with the proteins made from it), weren’t discovered until 25 years ago, and have only really been studied intensively since 2001.

    Over the years I’ve put up a series of posts giving specific examples of these points — there are now 31 — the latest being the discovery of retroviral proteins being used at the synapse between neurons. They’re all here —

    1. me says:

      Slotted beautifully between posts about how Hillary is surely weeks away from death and obviously unfit to lead. Bravo.

  14. Anonymous says:

You deserve such people who fool us with mere hype! Yet you have problems accepting the modest claims of the natural product total synthesis people, which cost 0.1% of the damage or cost compared to hypes such as these!

    Go figure out drug discovery using such been-violentAIs!

  15. Calvin says:

    So I’m generally with Derek on this one. If not towards the cynical end. The valuation is just a bit silly…..While there are a few people there who I think are very good and pretty experienced (and should know BS when they see it) the CEO makes me very skeptical here. She has a long history on jumping on bandwagons with no discernible output and so that colours my view considerably. The investor group is a weird mix also; some good and some terrible. It really is a weird company all round, and given the current lack of material output I’m inclined to think that this is essentially one big balloon of hot air……

  16. right says:

The hype is weird; who would believe this stuff? AI has a long, long way to go before it makes real contributions. GSK had an interesting press release about their use of AI. They used it to classify data so that the data could then be stored in a database. THAT is the current value of AI: sorting through messy input so that data can be placed neatly in an Oracle table.

    1. fajensen says:

I wrote such a thing 7 years ago: it found the different requirement specifications in a 6 GB pile of randomly named documents produced during the meltdown phase of a failed software project, clustered them based on their type and content, then scored them on completeness. The “best ones” went into the PBS / design repository for humans to work over for the second attempt at the coding – these people had to complete the product. They had already sold it, and bet a good chunk of the business on it too!

      It worked fairly well.

      I’d guess the key feature was winnowing a huge s-pile of Demotivation down to something that could be dealt with in small increments. If people think that the document they have is the best they are gonna get, because The Algorithm Said So, they will with some degree of happiness tack down and Fix this document, rather than hare off searching for the One True Document that is sure to be in there, somewhere.
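The sort of document triage fajensen describes can be sketched in a few dozen lines. This is a toy illustration only – the file names, similarity threshold, and required-section checklist below are all invented, and the real system was presumably far more sophisticated – but it shows the two steps: cluster a pile of text files by content overlap, then score each one against a checklist of required sections.

```python
# Toy sketch of "cluster by content, score on completeness".
# All names, thresholds, and section headings are invented for illustration.

def tokens(text):
    """Crude bag-of-words: lowercase alphabetic tokens only."""
    return {w for w in text.lower().split() if w.isalpha()}

def jaccard(a, b):
    """Overlap between two token sets, 0.0 to 1.0."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster(docs, threshold=0.3):
    """Greedy single-link clustering: join a document to the first
    cluster containing something sufficiently similar to it."""
    clusters = []
    for name, text in docs.items():
        toks = tokens(text)
        for c in clusters:
            if any(jaccard(toks, tokens(docs[m])) >= threshold for m in c):
                c.append(name)
                break
        else:
            clusters.append([name])
    return clusters

# Hypothetical checklist of headings a "complete" spec should mention.
REQUIRED_SECTIONS = {"scope", "interface", "requirements", "acceptance"}

def completeness(text):
    """Fraction of required section headings the document mentions."""
    return len(REQUIRED_SECTIONS & tokens(text)) / len(REQUIRED_SECTIONS)

docs = {
    "spec_a.txt": "Scope and interface requirements acceptance criteria for pump",
    "spec_b.txt": "Interface requirements for pump control scope",
    "memo.txt":   "Lunch menu for Tuesday",
}

# Pick the most complete document in each cluster for the repository.
for group in cluster(docs):
    best = max(group, key=lambda n: completeness(docs[n]))
    print(group, "-> best:", best, round(completeness(docs[best]), 2))
```

A real version would use TF-IDF weighting and a proper clustering algorithm rather than raw token overlap, but the winnowing principle – one “best” candidate per cluster, blessed by The Algorithm – is the same.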

  17. Hap says:

    1) There are a lot of fools with money?
2) I put “benevolent” in company names in the same category as “smart” in product names – if it were smart or good for me, you wouldn’t need to tell me, so most likely it is smart or good for the people selling it, but not for me.
    3) Why does “garbage in, garbage out” not apply to AI claims the same way it applies to every other realm of computer expertise? There’s a lot of data on biology, but at least some of it is crap, so the patterns you get from it may not be reliable, and it’s also incomplete. If they can sort the reliability of research results or indicate useful places to focus research efforts, what they make would be useful to lots of people, but that’s asking a lot.

  18. Ian Malone says:

    Ah, but you’ve missed that they’re using proprietary algorithms. These aren’t just some algorithms they found on the street.

    1. fajensen says:

Question of audience: the “investor class” sees Proprietary and it is Good, because if no-one can duplicate the work then the owner can expect to extract rent forever; whereas the “engineering/science class” sees Proprietary as the Spawn of Evil, because they can’t verify whether the underlying principles are sound – and even if it works, they can’t write any papers on it.

      I would never, ever use a proprietary machine learning algorithm for anything; I like my algos being Open Source, based on current research, and to be poked and prodded by many people who are experts in the field and much smarter than I am, besides!

      Uniqueness is not always that great.

      The advantage with a “me too” product versus a Proprietary “unicorn product” is that one does not have to explain the “me too”. Everyone knows that it works and how, it is selling so everyone understands the market for it. The New & Improved “me too” only has to be (seen as) better and more cost effective than the other “me too’s” to be viable.

      1. Ian Malone says:

I’d agree! Really I was having fun with the use of the word in there; it’s often seemingly used in an attempt to suggest quality, though it’s really nothing of the sort (potentially meaning anything from “We spent tens of thousands of man-hours optimising and testing this and no-one else is getting their hands on it” to “We bodged this together in an afternoon, and would rather you didn’t look too closely”). You do have a good point, though, about it meaning there’s some intellectual property in there that may have value for investors, but as it’s software that’s copyrightable, “proprietary algorithms” still has more of a marketing gimmick sound to my ears. (Actual algorithms being less protected usually, though in the US you can patent them – in which case I’d expect to see “patented algorithms”.)

  19. chiz says:

    If they had thrown in a vague reference to blockchain somewhere they could probably have raised even more money.

  20. Some idiot says:

    Off topic, but Derek, I noticed a few days ago that there is no longer a link directly to the comments (or mention of how many comments there are) on posts…

    Using the mobile version, anyway…

  21. Insilicoconsulting says:

Most informed people know AI can generate a readymade optimised drug candidate without human input. So why investors still invest is perhaps well worth asking; calling investors fools is but the easiest non-explanation, akin to explaining the current White House incumbent’s success by calling voters stupid. Heard of the Forbes 400 entry? Looks to be the same as in the case of DART… should we really worry, if actual patients are not harmed, a few jobs are generated, and there is a small probability that something, however small, may yet come from their efforts?

    The malaise goes deeper..and we need to understand why such deals get sealed..

    Is it lack of investment opportunities in general? Is it money being turned from black to white? Is it the hope of a buyout by pharma or google of the world?

    Is it that investors know there may be several small successes although it’s touted as a revolution and 115 m USD is not big money for them?

    Is it that there’s no realistic way for a driven entrepreneur, scientist to raise reasonable amount of funding without some hyperbole?

    1. Insilicoconsulting says:

      First sentence should read “cannot generate “

    2. Design Monkey says:

      >Is it that there’s no realistic way for a driven entrepreneur, scientist to raise reasonable amount of funding without some hyperbole?

      As certain guy said:
      You can go a long way with a smile. You can go a lot farther with a smile and a gun.
Regarding raising money – yep, you (probably) can raise some money on worthy scientific grounds. Sprucing it up with hyperboles and lies will get you much more money. And then comes the realisation that lies and hyperboles alone work pretty well too; no frickin scientific grounds are really needed there.

  22. drsnowboard says:

    Surely anything that can make connections between huge volumes of unconnected data and enable it to be presented on a single powerpoint slide is of immeasurable value to p̶h̶a̶r̶m̶a̶ ̶e̶x̶e̶c̶u̶t̶i̶v̶e̶s̶ drug discovery. Look at bioinformatics…

    *sarcasm mode off*

  23. Bill Truesdell says:

Only one question from a layman. Per this site, more than 50% of all drug studies cannot be replicated. So the AI will be using “facts” that are not necessarily facts to come up with a solution on the way to go. Obviously not too scientific, and potentially not too good for the recipients of the new way to go.

    1. Me says:

Not sure if that was a question or a comment, but yes, there is a lot of ‘garbage in, garbage out’-type rhetoric in the comments section, and yes (in the most agnostic terms possible), this AI system will need to be able to navigate through such a minefield.

      One way around this is that if they are basing the AI algorithms on the thinking of R&D management in any place I ever worked, the AI will essentially work by saying everything is cr*p unless someone who outranks them likes the target, hence it would have an AI overlord that thinks like a Harvard MBA. That over-AI will, in turn, be programmed to spot terms that increase share price when placed into press releases.

  24. Anon says:

Truth be told, my friends, this piece herein written by Derek belongs to the “snake oil” category.

  25. tlp says:

It looks like the current obsession with ML/AI that generates model-free correlations is a reflexive swing in the opposite direction from ‘rational drug design’. Ligand-based, structure-based, metrics-based approaches seem to have reached their limits. Data-rich techniques are proliferating and nobody in their right mind can claim to understand everything, or anything. So welcome random forest and deep learning. No need for understanding what model they are ‘learning’ – just get those computed hunches at scale.
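tlp’s “model-free correlations” (and the “garbage in, garbage out” point made elsewhere in this thread) can be made concrete in a few lines of stdlib Python. This is a synthetic toy, nothing to do with anything BenevolentAI actually runs: screen enough random “features” against a random outcome and some will correlate impressively by pure chance, which is exactly the trap a correlation-mining system has to guard against.

```python
# Synthetic demo: with many random features and few samples, the best
# observed correlation looks impressive even though everything is noise.
import random
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
n_samples, n_features = 30, 1000          # few "patients", many "genes"
outcome = [random.gauss(0, 1) for _ in range(n_samples)]

best = 0.0
for _ in range(n_features):
    feature = [random.gauss(0, 1) for _ in range(n_samples)]
    best = max(best, abs(pearson(feature, outcome)))

print(f"strongest |r| among {n_features} purely random features: {best:.2f}")
```

With 30 samples and 1000 features, the winning correlation is typically strong enough to look like a “disease signature” – which is why multiple-testing correction and held-out validation matter before any computed hunch becomes a target.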

  26. David Edwards says:

One problem that arises immediately, from my standpoint, is that pattern recognition and correlation extraction are only a part of the scientific process. An important part, admittedly, but by no means the whole story. As a corollary, data mining operations (which, at bottom, is what much so-called “artificial intelligence” and “machine learning” amount to), whilst they may point to correlations that scientists hadn’t suspected before, may also point to mere coincidences that aren’t worthy of serious, in-depth investigation. Indeed, statisticians at the analytical end of the field expended much effort teaching humans to be wary of this in the past. See, for example, Student’s exposition on the perils of treating linear regression as a mere mechanical process.

Where scientists still score over computers is that they generate concepts arising from the data, aimed at providing a mechanism underpinning the observed correlation. That’s what a genuine scientific theory is – a collection of concepts, which together provide mechanisms for the emergence of specific, well-defined interactions between entities, said mechanisms invariably involving constraints allowing for falsifiability (in principle even if difficult in practice), which has furthermore been tested to destruction by multiple, carefully constructed and diligently applied experiments, and found via such tests to hold for every new data set that has arisen since the formulation of the original concepts and hypotheses. Those experiments, when performed properly, being of course aimed at finding a data set for which the hypothesis fails.

    At the moment, AI cannot replicate this part of the scientific enterprise. It cannot generate new concepts, new mechanisms, or experiments to subject said concepts and mechanisms to test. That still requires the ingenuity of human scientists, which is why the best of said human scientists continue to walk away with Nobel Prizes, because performing this task is a very hard one. That part of the scientific enterprise is a long way from being mechanised. Indeed, anyone familiar with the impact of Gödel and Turing on Hilbert’s Entscheidungsproblem should be immediately suspicious of any hyperbolic claims to have mechanised even a tiny fraction of that part of the scientific enterprise. Mechanisation thereof is likely to be subject to severe constraints (such as those that made mathematicians stop and pause post-Gödel and post-Turing). AI might be able to deliver wonderful collections of data-mined correlations, but it cannot tell us reasons for those correlations existing, let alone drop in our laps an actual scientific theory underpinning those correlations.

    At bottom, if AI is going to be something other than a particularly fast bureaucrat removing drudgery from humans, and start delivering original ideas, some ferociously difficult research will need to be completed first. The first such difficult piece of research being to teach a computer to manipulate ideas to start with. Then, such a computer has to solve the problem of alighting upon working ideas without generating a lot of garbage beforehand. In particular, if a computer is going to start generating actual, bona fide scientific theories instead of glorified string comparisons flagged as potentially “special”, it’s going to have to master the difficulties involved in making its ideas exhibit the features of the best scientific theories. Namely, generality (the concepts and mechanisms are applicable to a broad range of entities and interactions, modelling these with appropriate fidelity), consilience (dovetailing with existing tested frameworks, and ideally, providing new, independent, and hitherto unsuspected validations thereof), and unifying power (namely, bringing previously distinct classes of entity and interaction into a single coherent framework of knowledge, consistent with the previous two features).

    Achieving this isn’t going to happen overnight, if at all. Anyone who doesn’t recognise how ambitious a project this is, hasn’t learned enough about the subject to be trusted either with research or public pronouncements thereupon.

    1. Derek Lowe says:

      I agree. I’m fine with AI being that drudgery-removal-system, actually, because there sure is a lot of that to be cleared out. Taking it further and making an “insightful breakthrough machine” is a whole other order of business, though. I certainly think it’s possible, but as you say, it’s not going to be anywhere near easy, and we’re only beginning to get an idea of how to implement such things.

      The wild card will be if we can get an AI system that’s at least weakly capable of optimizing AI systems. Of course, at that point, we should at least take a look at the various SF stories that suggest that we might regret that. . .

  27. Metamonad says:

    More of the same:
Stand-out claim: “Accelerating Therapeutics for Opportunities in Medicine (ATOM) formed in October 2017 with the goal of reducing preclinical drug discovery from six years to just one, using cancer as the exemplar disease.”
    The great John Baldoni of GSK fame is involved so success is guaranteed!

  28. exGlaxoid says:

Is Elizabeth Holmes part of the Benevolent board? That would be a clear indication of how they got their claims of greatness.

    If any of the claims of discovering a drug in less time, more success, or better safety were true, they would be worth many billions, but I doubt that they are worth the cost of a cup of coffee.

  29. bozo says:

    “I checked it very thoroughly,” said the computer, “and that quite definitely is the answer. I think the problem, to be quite honest with you, is that you’ve never actually known what the question is.”

    – Douglas Adams

    1. anon says:

AI, or a computer, is a highly efficient and low-cost technician. It can do the work faster, with fewer errors, than a human. However, it will never ask why.

  30. loupgarous says:

You can’t have an investment prospectus without the words “disruptive” or “disrupts” in it these days. It makes you wonder if corporate PR flacks remember that “disruption” was originally a bad thing – as in “destroying the ability to function”, a real hazard in AI of any type.

    Even less-complex AI has a bad track record at times – the Airbus military transport which flipped over on takeoff when its engine control firmware was installed incorrectly, or the radiotherapy software which seduced its users into not being sure essential beam collimators were working correctly – with catastrophic results for patients. Never mind the recent mishap in which a self-driving car under test for Uber mowed a pedestrian down.

    I hope that Benevolent AI works as advertised, but skepticism is the appropriate attitude toward AI. It should be regarded as buggy until validated, then after its validation still regarded with suspicion.

  31. Mostapha Benhenda says:

Yet another shot in the dark; investors have been warned, though. Read:

  32. Zdock auto dock says:

I see a lot of cynicism here, which is perhaps warranted, but none of it regarding the actual approach they are taking. What is BenevolentAI actually doing? Is their main skillset NLP (thus dealing with publications), or network science and thus a more systems-biology approach? Maybe someone can direct me elsewhere to materials that get under the hood a bit more?

  33. Glen says:

    The author says “AI is going to be very good at digging through what we’ve already found, and the hope is that it’ll tell us, from time to time, ‘Hey guys, you’re sitting on something big here but you just haven’t realized it yet’.”

So the story is a bit contradictory, because this is exactly what BenevolentAI are trying to achieve.

    1. Derek Lowe says:

      That they are. But the question is whether AI can do anything of the kind yet. . .

Comments are closed.