In Silico

Farewell to “Watson For Drug Discovery”

STAT is reporting that IBM has stopped trying to sell their “Watson for Drug Discovery” machine learning/AI tool, according to sources within the company. I have no reason to doubt that – in fact, I’ve sort of been expecting it. But no one seems to have told IBM’s website programming team, because the pages touting the product are still up (at least they are as I write this). They’re worth taking a look at in the light of reality. Gosh, it’s something:

Watson for Drug Discovery reveals connections and relationships among genes, drugs, diseases and other entities by analyzing multiple sets of life sciences knowledge. Researchers can generate new hypotheses using the resulting dynamic visualizations and evidence-backed predictions. . .

. . .Pharmaceutical companies, biotech and academic institutions use Watson for Drug Discovery to assist with new drug target identification and drug repurposing. Connect your in-house data with public data for a rich set of life sciences knowledge. Shorten the drug discovery process and increase the likelihood of your scientific breakthroughs.

Well, no, apparently they don’t use it much, because no one seems to have felt that they were increasing the likelihood of any scientific breakthroughs. The IBM pages are rather long on quotes from the Barrow Neurological Institute, about how they can make such breakthroughs “in a fraction of the time and cost”, but it looks like they’re going to have to get along without the product unless IBM is providing support to legacy customers. And since the STAT piece says that they’re halting both development and sale, that seems unlikely. Barrow and IBM press-released some results in late 2016, and there’s a promotional video from a month or two later, but that was both the first and last announcement from that collaboration.

What happened? Reality. As this IEEE Spectrum article from earlier this month shows in detail, IBM’s entire foray into health care has been marked by the familiar combination of overpromising and underdelivering. To their credit, the company made a very early push into the area (2011!) with a lot of people and a lot of money. Unfortunately, they also made sure that everyone knew that they were doing it, and what a big, big deal it all was.

The day after Watson thoroughly defeated two human champions in the game of Jeopardy!, IBM announced a new career path for its AI quiz-show winner: It would become an AI doctor. IBM would take the breakthrough technology it showed off on television—mainly, the ability to understand natural language—and apply it to medicine. Watson’s first commercial offerings for health care would be available in 18 to 24 months, the company promised.

In fact, the projects that IBM announced that first day did not yield commercial products. In the eight years since, IBM has trumpeted many more high-profile efforts to develop AI-powered medical technology—many of which have fizzled, and a few of which have failed spectacularly. 

Watson for Drug Discovery is just one of that suite of tools (well, potential tools). The idea was that it would go ripping through the medical literature, genomics databases, and your in-house data collection, finding correlations and clues that humans had missed. There’s nothing wrong with that as an aspirational goal. In fact, that’s what people eventually expect out of machine learning approaches, but a key word in that sentence is “eventually”. IBM, though, specifically sold the system as being ready to use for target identification, pathway elucidation, prediction of gene and protein function and regulation, drug repurposing, and so on. And it just wasn’t ready for those challenges, certainly not as early as IBM was announcing that it was. I first wrote about the company’s foray into drug discovery in 2013, and you’ll note that nothing really came out of the GSK/IBM work mentioned in that post. To the best of my knowledge, the two companies never really collaborated on drug discovery at all, but hey: they did team up on more targeted ways to advertise flu medicine.

Meanwhile, attempts at diagnostic and drug-therapy recommendations in oncology have been not only unproductive, but (according again to earlier reporting at STAT) even worse than that. The Spectrum article linked above goes into more details on those and other efforts all over the health care area that have come to naught, along with a few limited successes. And oddly enough, I’m going to finish off thinking about those. I still believe that machine learning is a perfectly good idea, with potential applications all over the field. But it ain’t magic. The areas where it’s worked the best so far are the ones with well-defined outcome sets based on large and very well-curated data collections, and where people have not been expecting the software to start spitting out golden insights and breakthrough proposals. It’ll get better – with a lot of work.

Just because people tried to sell the world on the idea that we’d moved past that stage years ago (A) does not make that so but (B) does not mean that we’ll never move past that stage at all. Next week I’ll have a post about machine learning and AI that goes into the real state of the field, from practitioners who have been spending their time whacking away at the code rather than generating promotional videos. IBM, though, has so far been doing the entire field a disservice with the way that they’ve spent too much time on the latter and not enough on the former.

58 comments on “Farewell to “Watson For Drug Discovery””

  1. John Wayne says:

    It could have moved the needle. It didn’t. It may still have some applications in the future. Welcome to research.

    I’d like to take a shot at giving it limited training sets and see what it can do, but I won’t promise anything amazing.

  2. Peter Kenny says:

    Many of the AI/ML evangelists appear to see drug design/discovery simply as an exercise in finding trends and making predictions. Put another way, all we need is for some clever folk from Silicon Valley to come up with an algorithm and the drugs will condense from the ether as if by magic. Part of drug design is actually generating the data that you need, and we need to be thinking more in terms of Design of Experiments (especially relevant when you have to ‘buy’ your data rather than just using ‘free’ data). My advice to AI/ML visionaries is to understand what a drug needs to do, the problems lead optimization chemists face, and the constraints under which they have to solve them. One of the key difficulties in drug design is that unbound intracellular concentration is not, in general, measurable in live humans.
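
    The design-of-experiments point can be made concrete with a minimal sketch in Python. The factors and levels below are invented for illustration only; the idea is that when every measurement must be bought, you enumerate a small design that spans the factor space, rather than modeling whatever data happens to be lying around:

    ```python
    # A toy full-factorial design: every combination of a few chosen factors.
    # Factors and levels are hypothetical illustrations, not a real protocol.
    from itertools import product

    factors = {
        "logP_target": (1.0, 3.0, 5.0),
        "dose_mg":     (1, 10),
        "salt_form":   ("HCl", "free base"),
    }

    # 3 x 2 x 2 = 12 runs that cover the space evenly, instead of a lopsided
    # collection of whatever compounds were already on the shelf.
    for run in product(*factors.values()):
        print(dict(zip(factors, run)))
    ```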

    1. Asimo Honda says:

      Didn’t Atomwise write a paper showing most of this nonsense is bunk? https://www.ncbi.nlm.nih.gov/pubmed/29698607

      Even the AI kids don’t believe in AI!

      1. I’m an author of the paper referenced above. I’m glad you enjoyed the paper, but I don’t think our conclusions justify your interpretation.

        In the paper, we don’t even investigate whether or not AI/ML techniques work; instead, we show that existing benchmarks are inconclusive. Good performance on retrospective benchmarks is likely to be due to artifacts from the benchmark design, rather than real predictive accuracy.

        That’s why we’ve been running our prospective AIMS program, which Derek has written about before: https://blogs.sciencemag.org/pipeline/archives/2017/04/24/free-compounds-chosen-by-software and the first results from DNDi and UConn were just announced: https://www.dndi.org/2019/media-centre/press-releases/dndi-and-atomwise-collaborate-to-advance-drug-development-using-ai-for-neglected-diseases/ and https://today.uconn.edu/2019/04/drug-discovery-partnership-ai-biotech-company-reaps-promising-early-results/

        While our paper doesn’t exactly support your claim, we agree with your skepticism. Frankly, no one can pick which algorithm is going to be best, just from first principles. Dozens of real prospective discoveries should be the minimum evidence to get an AI system taken seriously.
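
        A minimal synthetic sketch of that benchmark artifact (invented data, not the paper’s actual analysis): when near-duplicate analogs are split randomly between training and test sets, a model can score well by memorization even when the labels carry no learnable signal at all. Holding out whole analog series, as a prospective test effectively does, removes the illusion:

        ```python
        # Synthetic demonstration of train/test leakage across analog series.
        # All data are random; there is no real structure-activity signal.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import GroupShuffleSplit, train_test_split

        rng = np.random.default_rng(0)
        n_series, per_series, n_feat = 100, 10, 50

        # Each "analog series" is a tight cluster of near-duplicates sharing
        # a randomly assigned label, mimicking congeneric compounds.
        centers = rng.normal(size=(n_series, n_feat))
        labels = rng.integers(0, 2, size=n_series)
        X = np.repeat(centers, per_series, axis=0) \
            + 0.05 * rng.normal(size=(n_series * per_series, n_feat))
        y = np.repeat(labels, per_series)
        groups = np.repeat(np.arange(n_series), per_series)

        # Random split: members of the same series land on both sides.
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
        leaky = RandomForestClassifier(random_state=0).fit(Xtr, ytr).score(Xte, yte)

        # Series-held-out split: the whole series is unseen at test time.
        tr, te = next(GroupShuffleSplit(test_size=0.3, random_state=0)
                      .split(X, y, groups))
        honest = RandomForestClassifier(random_state=0).fit(X[tr], y[tr]).score(X[te], y[te])

        print(f"random split accuracy:    {leaky:.2f}   # looks predictive")
        print(f"series-held-out accuracy: {honest:.2f}   # ~0.5, i.e. chance")
        ```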

  3. Old Timer says:

    Maybe the fraction was 1/1? Maybe higher?

  4. d says:

    What might be a more useful start is to invent an AI to check the work of humans. As often as not I’ve witnessed researchers use identical equipment and methods to replicate molecular biology studies to establish a baseline for new studies and fail, repeatedly.
    An AI to prevent/correct this would be much more useful than one that predicts targets based on flawed studies, at least until the AI takes over, if our species lasts that long.

  5. Jordan says:

    I’m really looking forward to next week’s post on AI for drug discovery.

    I’ve worked in small molecule drug discovery. I never thought all these AI companies promising to use biophysics and AI to make drug discovery faster, safer, and cheaper would get anywhere.

    But I see them getting funded by VCs, appearing as features on Startup Health and such, and actually striking deals with companies like Merck, Bayer, Pfizer, and certainly many smaller pharma companies.

    Am I missing something?

      1. Peter Kenny says:

        How does the Kool-Aid taste?

          1. Peter Kenny says:

            Very witty but start to worry if it smells like almonds.

    1. johnnyboy says:

      I know a bit about drug discovery, and a bit about AI, and (notwithstanding the saying about ‘a little knowledge…’), frankly I can’t put the two together either. I use AI in some of my work, and it’s a brilliant tool for simple, trainable tasks that would be too hard/long/tedious for humans to perform with accuracy. By contrast, the process of drug discovery is so complex, with so many different decisions involved along the way, each step requiring subjective human judgement weighing risk-benefit and likelihood of success – how would you possibly train an AI algorithm on that? Frankly it all smells a bit too much like money being thrown at buzzwords and hype. I see it in my own company, with entire departments being set up with AI as the objective, without anyone seeming to have a clear idea of what kind of answers or problems they’re supposed to solve – apart from the usual meaningless bromides about improving efficiency, pursuing innovation, blahblahblah. I may be unimaginative on this, but the basic truth is that AI is a computer, and a computer is good at doing what we tell it to do. In order to train AI to do drug discovery better, we would need to know how to do drug discovery ourselves – and clearly we don’t, as we’re still learning every day what works and what doesn’t.

      1. PhotoDeTox says:

        Indeed, johnnyboy. The same in my company: IT departments hiring with no clear objectives, while at the same time the budget for actual drug discovery projects is being cut dramatically. Where does this lead?

    2. loupgarous says:

      There seems to be a ton of money available in Big Pharma for acquisitions based on the possibility that a drug candidate will be that one out of every ten in trials that gets approved and then makes its nut. So a gamble that AI will do what it’s designed (or at least marketed) to do may not be that daunting to the decision-makers.

      Maybe the problem with Watson for Drug Discovery is that it’s doing things backwards. SAS Institute rents the drug, insurance, banking and other industries a workstation full of stuff for a manageable amount – used to be just over US$1000/workstation for the base product and one or two specialty languages for graphics, etc. They have been making money on that model for decades, though you can use SAS for less, now.

      I’m pretty sure that Watson for Drug Discovery costs way more than that, so much more that they haven’t gotten much of a user base. The reason that we tolerate being Microsoft’s beta testers every time it rolls out a new OS is that it doesn’t cost much as software goes, so they have a huge customer base and human waves of people to fix bugs as they’re identified. That they are walking away from Bing shows even Redmond can recognize deadwood when it dies.

      IBM probably priced Watson for Drug Discovery far beyond what a prudent decision-maker was willing to spend in hopes they’d get it working before his boss woke up and discovered the size of that check.

  6. Albert Buchard says:

    What an angry man you are. Would you have been so swift to recognise a win?

    1. Derek Lowe says:

      I recognize victories all the time around here, actually. They’re rare enough that we should celebrate them. And if you think that was an angry post, you should read the “Snake Oil” category – that’s where I really get warmed up.

  7. DriveBy says:

    Deus ex Machina replaced by Deuce ex Machina? …sorry, couldn’t resist.

  8. MoMo says:

    Can’t say we didn’t see this coming. Since I couldn’t find a term to describe the basic human nature of those who seek the path of least resistance and effort in drug discovery and chemistry, I made up my own:

    MoMo’s Law- Those who try to follow easy paths in drug discovery are doomed to fail.
    More Molecules or Die.

  9. Chrispy says:

    As long as science is plagued by irreproducible results it will be very difficult to train these machines. Where the technology works best is in systems with robust data sets and well-defined outcomes. They have been pretty impressive in diagnosis with defined data sets like CAT scans. They need a better bull$hit detector for early research, however.

  10. Anon says:

    For Watson, it was garbage in, garbage out!

  11. Dominic Ryan says:

    I had a closer look at Watson some years ago. I think people may be missing a feature of Watson.

    Watson gained notoriety by winning Jeopardy. What did that actually demonstrate? Not AI! It demonstrated two things. The first was voice-to-text. That was not so much of a breakthrough – just see where Siri and ‘Ok Google’ are today; they were well underway by that point.

    The second, and more important, demonstration was of a much better ability to process language in real time than anything else. It was a breakthrough in Natural Language Processing (NLP), which is a well-defined area of science. By correctly parsing the incoming text, Watson was able to look up the answer in a database. No AI needed. I suppose Siri and Google do incorporate elements of that. The main point, though, was to correctly map the question asked in Jeopardy. It had to understand English!

    Now, how do you use that? Well, aim Watson at as much literature as you can and use the NLP engine to create information nodes and edges, where edges are linkages between nodes (observations). The strength of any edge, the apparent relationship between observations, is the realm of AI. That is fundamentally a machine learning question. The problem with all such ML approaches is the quality of the underlying data and the assumed error rate for the data.

    If you assume a low error rate then you come up with all sorts of correlations. If you assume a high error rate you get nonsense. I think we can conclude, at least at a Watson-like scale, that the error rate is actually much higher than Watson might have hoped.

    Is that a surprise? Should we be surprised at how noisy and poorly reproducible the scientific literature is? After 50 papers using a Sonogashira, it gets reliable. After 10 papers making a pathway suggestion implying a cancer link under specific conditions, I am less sure. When you read the conclusions in some papers, or press releases from tech transfer offices, I don’t think we should be surprised.
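
    The node-and-edge construction Dominic describes can be sketched in a few lines. This is a deliberately crude toy: a hand-made entity list and substring matching stand in for a real NLP engine, and the abstracts are invented:

    ```python
    # Toy literature-mining graph: entities become nodes, and co-mention
    # counts become edge weights. Real systems use full NLP (parsing,
    # negation handling), not substring matching, but the noise problem is
    # the same: one bad paper votes exactly like one good paper.
    from collections import Counter
    from itertools import combinations

    ENTITIES = {"EGFR", "gefitinib", "NSCLC", "KRAS", "apoptosis"}

    abstracts = [
        "Gefitinib inhibits EGFR signaling and induces apoptosis in NSCLC lines.",
        "KRAS mutations confer resistance to gefitinib in NSCLC.",
        "EGFR overexpression correlates with poor prognosis in NSCLC.",
    ]

    edge_weights = Counter()
    for text in abstracts:
        found = {e for e in ENTITIES if e.lower() in text.lower()}
        for pair in combinations(sorted(found), 2):
            edge_weights[pair] += 1

    for (a, b), weight in edge_weights.most_common():
        print(f"{a} -- {b}: {weight}")
    ```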

    1. bks says:

      I don’t believe IBM Watson used voice-to-text. My memory is that Watson got the answers via ASCII text versions. The advantage Watson had was perfect buzzer use, the key to winning the game:
      https://www.nydailynews.com/opinion/ken-jennings-op-ed-jeopardy-champ-computer-nemesis-watson-unfair-advantages-article-1.139563

      1. anon the II says:

        You’re exactly right. I think Jennings and that other dude (maybe the smarter of the two) would have kicked Watson’s ass if Watson had had to do OCR or voice-to-text before fingering through its databases. It was a publicity stunt bought and paid for by IBM. I wasn’t buying it. I was screaming at the TV, sort of like I was when 60 Minutes did the Sirtris story.
        Except for Jeopardy, I try not to watch so much TV these days.

  12. anon says:

    I had an interesting experience spending the day as the “rare disease subject matter expert”* with a group of data scientists. They thought they had a tool that would somehow find entirely new hypotheses and potential drug targets.

    The problem was that since the disease of interest was rare, their tools were entirely reliant on text mining, not any of the massive-omics databases available for common diseases. So the “drug targets” they found were only those literally stated in abstracts of a handful of papers. Again, since this disease is legitimately rare, the entire biomedical literature consists of only a couple hundred papers, the majority of which are case reports, and only a few dozen of substance concerned with disease mechanisms and treatments.

    The entire exercise produced a very… spotty… literature review, and some increasingly confused and frustrated data scientists. Turns out biology is hard, and biologists and clinicians aren’t idiots incapable of reading papers or using databases.

    *postdoc that spent a few months on a project related to the rare disease of interest

  13. Jb says:

    So much for Silicon Valley ‘disrupting’ biotech. But hey, as long as you use buzzy words like AI, machine learning and data science, VCs will eat up crappy biotech proposals that still eventually need brute force and expensive bench lab validation.

  14. Anonychemist says:

    Nature Materials has a new special issue (out today, Apr. 18) focused on machine learning in medicine. It’s interesting to note how much focus is given to the advantages of ML and how little is given to the pitfalls.

  15. AlloG says:

    Will I be able to get this Watson? To use as a doorstop in my office?

  16. RF says:

    I remember vividly when they came to talk to us about Watson. At the time I was running R&D IT for one of the big biotechs. Didn’t take long to realize that a) there wasn’t really anything there behind the nice Powerpoints, and b) their $$$ expectations were just comically absurd. I would have been willing to accept some grant money from IBM to help them create some proof-of-concept studies but they wanted seven digits from *us* for totally unproven technology. I don’t think we had a follow-up meeting…

  17. HAL9000 says:

    Any update on how BenevolentAI are faring with their very similar “bioscience machine brain”?

    1. MK says:

      It seems that they started with a similar premise, but not so long ago changed their business model massively: they acquired a lab building in Cambridge and started their own “real” medicinal chemistry program. And earlier this year, they were hiring both cheminformaticians and experimentalists, so I guess BAI is doing well…

  18. John Wayne says:

    I will believe in a massive jump in AI ability if somebody trains one to work in biotech, and the machine quits and gets a job working in finance.

    1. Pavlov says:

      LoL… a comment-of-the-day nomination!

    2. anon says:

      Brilliant.. and a bit introspectively sad. Comment of the day seconded.

  19. comment_is_free says:

    There’s a lot of hating on machine learning going on here… most of it misplaced. What angers people, I think, is the hype surrounding the field. If we leave aside the unnecessary hype for a minute and evaluate it for what it is, ML is an extremely valuable set of tools for finding patterns and trends in large datasets. There are some problems in drug discovery where this set of tools is going to be very important and very useful – especially bioinformatics-related problems. It will not, however, solve all problems in drug discovery by a long shot, because there are many problems in drug discovery where it’s not applicable, either due to the limitations of these tools, the limitations of our current understanding of biology, the limitations of the available datasets, or all of the above. Nor is it ever a substitute for critical thinking.

    Recognize it as a useful tool for certain kinds of problems (but not by any means a magic bullet), and use it where it’s applicable. Blindly hating on it because some people have overhyped it is not a rational response.

    1. Wavefunction says:

      comment_is_free: I agree with most of your points. There are areas like image recognition where machine learning works well because the data is homogeneous and vast (you can essentially train on billions of pixels) and others where it doesn’t because the data doesn’t exist, lacks quality, or isn’t homogeneous. In these areas, applying machine learning is really a crapshoot, an experiment where you just try out a bunch of things and end up interrogating the quality of both the methods and the datasets after the fact. Now it would be fine if outfits like Watson said that this is in fact what they are doing, but as it stands, they try to charge you millions of dollars essentially for pretending that they have a system which works, instead of saying what they are really doing – running an experiment which may or may not succeed.

    2. Hap says:

      1) People always hate on the hype, though, and when there’s lots of hype, people doubt the substance under it.

      2) A lot of the hype-generators in e-anything seem to rely on devaluing the actual work of doing something in favor of the computational tools that make it easier to do and easier to distribute (Uber/Lyft/etc, AirBnb, maybe Apple’s music). Those things are important, but if there’s nothing to distribute, then they aren’t worth much. The “Watson in AI” thing seems to be another of that ilk – devaluing the people and tools to do drug discovery in favor of the things to process its output.

      1. Anon says:

        Agree – the negativity will not prevent ML from impacting drug discovery and other areas of life sciences and healthcare. We need to think back to the early days of the internet, as that’s about where we are in AI drug discovery today. Many search engines failed, and eventually one got it right. It’s typically not the first mover that wins the market by finding the best solution (aka IBM Watson for Drug Discovery, or Altavista/Yahoo); instead it’s a continuous process of failure and improvement. I commend IBM Watson for trail-blazing the way.

        I certainly think they did themselves no favors by over-hyping and under-delivering, but that probably comes down to the huge pressure IBM is under in general, and hence the need for a return on the ($2.7BN) investment they’ve made in Watson, which I would say is their biggest problem now.

        I wholeheartedly believe that in 10 years we will all be utilizing AI inside our life science workflows, and we won’t even be talking about AI, as it will be hidden underneath the software we will be using. We should embrace the future benefits of AI to improve efficiency in life science R&D (albeit in incremental steps), and not kick it when it’s down… it might come back to bite you 🙂 It will never replace biologists/medics, but it certainly can enhance our work – and it will replace those who don’t use it…

        1. Anon says:

          *those who don’t use it/embrace it will be replaced by those who do…

  20. KN says:

    IMHO it is not that AI/ML systems aren’t working; the problem is that data in the biotech field is highly unreliable, for several reasons. And the scientists generating this data are one of the biggest of those reasons. So I would keep the “machines can’t replace humans” gloating down – they probably can’t yet, but maybe they should. At least until AI learns that an unreproducible paper looks just the same on a résumé as a good one.

    1. thomas J owens says:

      Agree entirely. Also, it seems as if precious few of those commenting here have an inkling of the weighting of results, or of the importance of how AI/ML can and must RENDER conclusions. They are prime examples of brilliant in their field, but fatuously hostile to novel aspects tangential to it.

    2. MrXYZ says:

      I will be the first to say that I do not know much about AI/ML although by necessity I will need to learn more. But to the experts in the ether, how good does data need to be (let’s say a high-throughput drug discovery assay) in order for AI/ML to deliver conclusions that can be trusted (I’ll let you define what ‘trusted’ means)? In other words, how sensitive is current AI/ML to typical assay noise?

      Along these lines, does anyone have an example of a biological data set where ML/AI was able to extract meaningful/useful conclusions?

      1. Anonymous says:

        MrXYZ: “Along these lines, does anyone have an example of a biological data set where ML/AI was able to extract meaningful/useful conclusions?” Trick question: meaningful to whom?

        The EPA’s ToxCast database is collecting huge amounts of info (millions of data points) on a few thousand organic compounds using different high throughput screening assays covering 1000 or more toxicity endpoints. The ToxRef database of tox data in whole animals has been growing for around 40 years or so. There have been several papers trying to correlate predictions made from the (much cheaper per data point) ToxCast data with the known outcomes from the (more expensive whole animal) ToxRef data. I think that some of the best correlations are in the 60-70% range. Is that bad HTS data? Bad whole animal data? Bad toxicity theories? Is that meaningful?

        It seems that 65% predictive accuracy across the field is considered to be a very good result. Before complaining about that, consider that if you go to a casino, the house advantage at various games (except blackjack) is ~2% to 10% or so. 65% – 10% still leaves you with a better than 50% chance to walk out ahead of the game. Or, enough for a chance to get your grant renewal approved.
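
        MrXYZ’s noise question can also be probed directly with a toy experiment (synthetic data; the noise levels are arbitrary stand-ins for assay error): inject increasing amounts of random label error into the training set of a dataset that does contain real signal, and see how much predictive accuracy survives:

        ```python
        # Effect of training-label noise (a stand-in for assay noise) on a
        # simple classifier, evaluated against clean test labels.
        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        X, y = make_classification(n_samples=4000, n_features=30,
                                   n_informative=10, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

        rng = np.random.default_rng(0)
        for noise in (0.0, 0.1, 0.2, 0.3, 0.4):
            y_noisy = ytr.copy()
            flip = rng.random(len(y_noisy)) < noise  # flip this fraction of labels
            y_noisy[flip] = 1 - y_noisy[flip]
            acc = LogisticRegression(max_iter=1000).fit(Xtr, y_noisy).score(Xte, yte)
            print(f"label noise {noise:.0%}: clean-test accuracy {acc:.2f}")
        ```

        Symmetric random noise like this mostly costs effective sample size, so accuracy erodes gradually; systematic assay bias, which this sketch does not model, is far more damaging.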

    3. Hap says:

      Problem is, drug discovery has been figuring that the data isn’t worth much anyway – its companies keep laying off the people that generate the data (or hoping that people will be willing to make crap to find new data). Taken to its end, the only people who’ll be left to generate data for AI to process have even less reason to care – as long as they get pubs and grants, everything will be OK.

      The problems that need to be solved in drug discovery can’t be solved with AI. If you solve them (better questions, appropriate publication incentives), then AI will likely help. If you can’t, then AI will be a cooler-looking way for pharma to commit suicide and for a few lucky and conscienceless people to make a lot of money.

  21. NotHF says:

    ML is just a buzzword for statistics done by machines. If you don’t have a fundamental understanding of the statistical quality factors involved in correlations in large data sets, and you have bad quality data…well, you’re going to get shit out.

    With sufficient devotion you can correlate just about any two variables in your data set; whether that has any meaning or significance requires a fair amount of thinking to determine.
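
    That last point is easy to demonstrate with a toy sketch (arbitrary sizes, pure random data): screen enough variable pairs in noise and “significant” correlations appear right on schedule:

    ```python
    # Multiple-comparison demo: pairwise correlations in pure random noise.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_obs, n_vars = 50, 200
    data = rng.normal(size=(n_obs, n_vars))  # no real relationships at all

    hits, tests = 0, 0
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            _, p = pearsonr(data[:, i], data[:, j])
            tests += 1
            hits += p < 0.01
    print(f"{hits} 'significant' correlations out of {tests} tests")
    print(f"expected by chance alone: ~{0.01 * tests:.0f}")
    ```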

  22. milkshake says:

    I have a friend who is quite high up in a corporation selling big-data solutions to companies doing clinical trials – they offer an integrated data management system for all patients undergoing cancer treatment in a number of clinical trials, with complete medical histories, all the test results, preserved tissue samples available for re-analysis, etc. They ran into problems such as one of their subcontractors using underpaid drones who could not care less about the data they were entering into the database, because they were paid by the entry and there was no oversight. And the tissue samples – you cannot use poorly prepared and poorly stored samples later on for RNA analysis…

  23. DanielT says:

    There is one very good use for ML and that is giving you an objective opinion that your data is garbage. It can be really hard to convince someone their dataset is worthless, but ML helps.

  24. Anon says:

    I was asked to join the IBM Watson sales team in late 2016, on the promise that it “could solve the drug discovery problem”. I was skeptical, but thought I would test the online demo, simply by asking it who had won the US Presidential election two weeks earlier. It couldn’t even answer that, despite the answer being posted all over the news and the web, so I took a pass.

    1. cynical1 says:

      Maybe that was a trick question. Hillary Clinton won the election. The Orange Doofus occupied the White House. And Fox News became President.

  25. Charles says:

    Watson quit just because it can’t catch up with Alpha Zero.

  26. Emjeff says:

    I’m not sure I understand how AI is supposed to help with drug discovery, given its output – correlations among factors in huge datasets. Any idea at all of how many you’ll find? Lots. How many are just Type 1 error? Lots, even if you set the bar high for acceptance by choosing a p-value of 0.000000000000001 for significance. Now what – do you go back to the lab and test all of these “hypotheses”, knowing that a good many of them are garbage?

    The lab part is the hard stuff, BTW – we humans are pretty good at seeing spurious correlations ourselves. However, at some point, you have to generate actual data.

  27. yfp says:

    A.I. and other computer-language-based systems are built on mathematical models (statistics and probability included). Mathematics by itself is deductive. However, science (including biological science) is an inductive process. These two processes progress in different directions. For example, A.I. can classify in great detail all objects that fall from trees (a deductive process). However, A.I. cannot find the law of gravity (an inductive process).

  28. BSM says:

    Artificial Intelligence is currently a crappy umbrella term for a bunch of technologies and techniques. Many of these techniques have been around for ages; we just have the computational power and data quantity to make them meaningful at a larger scale than we used to. If you are arguing about whether AI will discover new drugs or not, you’re missing the point. The key is that all of the technologies – including those lumped together under the AI moniker – that are together helping to transform the way companies work are only providing momentum. People still have to direct that increased energy toward accomplishing goals that deliver value. Operational, analytical, and experiential value can all be delivered using these tools – and that is the key. Make the investment to augment the scale, precision, and accuracy of the tools you put in the hands of your researchers, using every tool in the toolbox. Watson for Drug Discovery just didn’t seem to be a very good tool, because it promised more than it could deliver at a price beyond what was justified.

  29. Anonymous Researcher snaw says:

    The spectacular successes of Deep Learning in some specific areas have, I fear, given people unrealistic expectations of how those accomplishments can be generalized. Fitting complicated nonlinear decision boundaries to high-dimensional data requires either huge amounts of data or some hidden lower-dimensional structure. And you need a very favorable signal-to-noise ratio.
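
    A quick toy illustration of that data-hunger (synthetic data, arbitrary sizes): keep the sample count fixed, bury a clean two-dimensional signal under more and more irrelevant dimensions, and a nearest-neighbor classifier slides back toward coin-flipping:

    ```python
    # Curse-of-dimensionality demo: fixed data, growing ambient dimension.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    n_train, n_test = 200, 2000
    n = n_train + n_test

    for n_noise in (0, 10, 100, 1000):
        informative = rng.normal(size=(n, 2))        # the real, 2-D signal
        y = (informative.sum(axis=1) > 0).astype(int)
        X = np.hstack([informative, rng.normal(size=(n, n_noise))])
        model = KNeighborsClassifier().fit(X[:n_train], y[:n_train])
        acc = model.score(X[n_train:], y[n_train:])
        print(f"2 signal + {n_noise:4d} noise dims: test accuracy {acc:.2f}")
    ```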

  30. SteveM says:

    That IBM over-promised and under-delivered with Watson is no surprise because the vendors of big software platforms almost ALWAYS over-promise and under-deliver.

    That said, why didn’t the Pharma companies who licensed Watson first challenge IBM to demonstrate a case study success for a condition with an existing pharmacological solution?

    E.g. GERD. Have Watson harvest data and present molecular scaffolds representing notional discovery target classes. And OBTW, exclude data related to the known successful compounds. Would PPIs be in Watson’s recommendation list?

    If Watson couldn’t deliver solutions that are known to work using only pre-success data, nobody would have written the checks to IBM in the first place.

  31. Alex says:

    It’s amazing to see this post only ten days after our brief email conversation. Of course it’s just a serendipitous coincidence, but it’s still uncanny. For other readers: my question didn’t refer to Watson and AI/ML per se, but rather the more general question of supercomputers and the protein folding problem vis-à-vis drug design. I was remembering Blue Gene and Blue Gene / L.

  32. Carlos Montanari says:

    Maybe it’s because the model is too probabilistic… safer going to the International Space Station!

  33. metaphysician says:

    I’ve said it before, but… I strongly suspect that “better drug discovery” is a problem that AI will only be able to solve after it’s passed the singularity/superintelligence/robot-revolution threshold. Which is to say, meaningfully predicting drug behavior to a useful level is probably a harder problem than taking over the world.

  34. Today, AI in Biotech and Bio-Pharma Research is hype tempered by promise. Use Cases with regular / predictable data, e.g. images, [can] benefit from AI today … but until Research Data in general is better maintained and curated, AI beyond just Watson will continue to struggle.
