
IBM And The Limits of Transferable Tech Expertise

Here’s a fine piece from Matthew Herper over at Forbes on an IBM/Roche collaboration in gene sequencing. IBM had an interesting technology platform in the area, which they modestly called the “DNA transistor”. For a while, it was going to be the Next Big Thing in the field (and the material at that last link was apparently written during that period). But sequencing is a very competitive area, with a lot of action in it these days, and, well. . .things haven’t worked out.
Today Roche announced that they’re pulling out of the collaboration, and Herper has some thoughts about what that tells us. His observations on the sequencing business are well worth a look, but I was particularly struck by this one:

Biotech is not tech. You’d think that when a company like IBM moves into a new field in biology, its vast technical expertise and innovativeness would give it an advantage. Sometimes, maybe, it does: with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday. But more often than not the opposite is true. Tech companies like IBM, Microsoft, and Google actually have dismal records of moving into medicine. Biology is simply not like semiconductors or software engineering, even when it involves semiconductors or software engineering.

And I’m not sure how much of the Watson business is hype, either, when it comes to biomedicine (a nonzero amount, at any rate). But Herper’s point is an important one, and it’s one that’s been discussed many times on this site as well. This post is a good catch-all for those discussions – it links back to the locus classicus of such thinking, the famous “Can A Biologist Fix a Radio?” article, as well as to more recent forays, like Andy Grove’s (ex-Intel) call for drug discovery to be more like chip design. (Here’s another post on these points).
One of the big mistakes that people make is in thinking that “technology” is a single category of transferable expertise. That’s closely tied to another big (and common) mistake, that of thinking that the progress in computing power and electronics in general is the way that all technological progress works. (That, to me, sums up my problems with Ray Kurzweil). The evolution of microprocessors has indeed been amazing. Every field that can be improved by having more and faster computational power has been touched by it, and will continue to be. But if computation is not your rate-limiting step, then there’s a limit to how much work Moore’s Law can do for you.
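To put a rough number on that last point, here’s a back-of-the-envelope Amdahl’s Law sketch in Python. The 5% compute fraction is a made-up illustration, not a measured figure: if computation is only a small slice of a project’s critical path, even an unlimited computational speedup barely moves the whole thing.

    # Amdahl's-law sketch with illustrative numbers: only the computational
    # fraction of the workflow gets faster; everything else (synthesis,
    # assays, animal studies, trials) takes just as long as before.
    def overall_speedup(compute_fraction, compute_speedup):
        return 1.0 / ((1.0 - compute_fraction) + compute_fraction / compute_speedup)

    for s in (2, 100, 1_000_000):
        print(f"compute {s:>9}x faster -> project {overall_speedup(0.05, s):.3f}x faster")
    # Output climbs from about 1.03x toward a ceiling of 1/(1 - 0.05), roughly 1.05x.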
And computational power is not the rate-limiting step in drug discovery or in biomedical research in general. We do not have polynomial-time algorithms for predictive toxicology, or for models of human drug efficacy. We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up.
Note: it’s been pointed out in the comments that cost-per-base of DNA sequencing has been dropping at an even faster than Moore’s Law rate. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I’d still say that understanding is the real rate limiter. . .
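And just to illustrate what “faster than Moore’s Law” means in that comparison, here’s a purely illustrative pair of cost-decay curves in Python. The halving times are assumed round numbers (roughly two years for a Moore’s-Law-style cost curve, six months for a hypothetical faster sequencing-style curve), not measured data.

    # Purely illustrative: two exponential cost-decay curves with assumed
    # halving times (round-number placeholders, not real figures).
    def cost_after(years, halving_time_years, start_cost=1.0):
        return start_cost * 0.5 ** (years / halving_time_years)

    for years in (2, 5, 10):
        moore_like = cost_after(years, halving_time_years=2.0)  # ~2-year halving
        faster     = cost_after(years, halving_time_years=0.5)  # hypothetical ~6-month halving
        print(f"after {years:>2} yr: Moore-like cost x{moore_like:.3f}, faster curve x{faster:.2e}")
    # After a decade the Moore-like curve is ~30-fold cheaper; the faster curve is ~a million-fold cheaper.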

17 comments on “IBM And The Limits of Transferable Tech Expertise”

  1. pipetodevnull says:

    “We hardly have any algorithms at all. Anyone who feels like remedying this lack (and making a few billion dollars doing so) is welcome to step right up”
    Yes, working on it. Sure, it’s not happening as fast as some would hope (and that fact alone is apparently enough to earn scorn and snark from the old skool), and analog has a pun sense in chemistry, but come on folks–do you really think it’s not the way forward? Care to review your statements again in a decade?

  2. Derek Lowe says:

    My comment doesn’t have to be taken as snarkily as it sounds. There really are billions of dollars (and dozens of drugs) waiting out there for anyone who can get a better handle on these things, and I sure wish that someone would. Doing so would be the best revenge possible on any doubters, for sure.
    But in a decade? If you’ve got some concrete predictions as to where you think we’ll be in ten years, and would like to put them out for public scrutiny, please do. In fact, feel free to email me with them, and I’ll put them up in a whole new post. Your real name and affiliation need not be involved in the slightest, unless, of course, you really feel lucky (!)

  3. Nekekami says:

    As someone who suffered through academia (never bring real-world experience into academia, folks; it’s considered an attitude problem*) in the field of comp. sci. (which is essentially applied maths, not programming), I argued that more effort was needed in real-world fields such as medicine, as well as in more general areas such as improved parallelisation algorithms. I ALWAYS met opposition: “It’s too complex,” “They need to make their models simpler,” and similar excuses. It doesn’t fit into a neat X-year grant cycle that they can put a grad student or postdoc on.
    To the maths/comp. sci. people: yes, it’s complex, because it’s a real-world problem we’re dealing with. Just as you can’t reduce bridge engineering to a neat physics formula, in the real world all those extra little details need to be accounted for, unlike in theoretically optimal models.
    To the people working on research into new medicines: don’t hold your breath too long. There are people who would be willing to do it, but we face resistance from several angles: entrenched faculty who have their own little niches and want to foist them off on their lackeys (ahem, I mean PhD students); a vicious academic research system built on a fairly fixed time cycle, with results predicted before the grant is even applied for; and a lack of interest from the corporate side in funding such fairly fundamental research.
    What led me to decide that further time in academia was a waste was when I suggested, as my research project if I were to continue, a look into building a faster and more accurate system for medicine recommendations. Current systems are slow and cumbersome because of the aforementioned complexity issues. The prof was not interested.
    *About the attitude problem:
    Well, my prof in Algorithms was one of those ivory-tower fellows with a long list of academic merits who had never done anything in the real world. For example, during one lecture he presented and explained an algorithm and finished off with “And for this problem area, this is the best algorithm, and you don’t have to waste time looking into any other,” at which point I, with a number of years of real-world software development and engineering behind me, pointed out that there are at least three architectures on which the algorithm would either run slow as hell or not run at all. That earned me a talk with the professor about my “attitude problem”. Another occasion was when I pointed out that, if there has been so little research done on parallelisation, maybe that means we SHOULD dive into it….

  4. MIMD says:

    with its supercomputer Watson, IBM actually does seem to be developing a technology that could change the way medicine is practiced, someday.
    Some day, when we can also do this: http://hcrenewal.blogspot.com/2011/03/quick-thought-on-cybernetic-medicine.html

  5. Philip says:

    To | /dev/null: There are some rate-limiting steps that advances in computer technology will not solve. Some biological reactions just take too much time. I put together a system to look at cell changes over time. The camera was a 16-megapixel grayscale camera. It took 1.6 seconds to transfer each image. By today’s standards that is incredibly slow, but it was not the rate-limiting step. In fact, if the camera transfer rate were infinite and the processing time were zero, the test time would have decreased by about 0.03%.
    If you think that in 10 years we will have computer models close enough to real humans that we can get rid of a lot of testing, I want part of that bet.
    One reason computer programmers have failed to come up with better software for pharma, or for most other industries, is that the programmers do not understand what is needed. Most companies do not send their programmers out to see the problem they are trying to solve. Working in such isolation is just stupid, but it is also SOP. The best software solutions come from people who know the industry, even if they are only average programmers.

  6. Frank Adrian says:

    Expertise in one field often leads to an increase in the Dunning-Kruger Effect in other fields. This increase is directly proportional to the narrowness of expertise in the original field. Basically, people play from their strengths, drawing analogies from their base expertise which, due to fairly deep ignorance of the target field of their analogy, causes them to believe that the target field is as transparent to them as their original field of expertise, that problems map directly, etc., etc., etc.
    The best way to disabuse the “expert” of this effect is to educate them in how their analogies are flawed and how the target field is, in reality, different. However, the “expert” often takes this as an attack on his knowledge of his original field, causing resistance to education. And besides, who has enough time to explain things to an idiot who thinks he’s an “expert”? Especially when (as in the case of CxOs) they aren’t used to being told that they might be incorrect, and they fight you every step of the way.
    All-in-all, let them spend their money, get their nose bloodied a couple times and then – maybe – they’ll be ready to admit that their original expertise did not grant them expertise in a completely unrelated field and thus, be ready to learn. Or else, they’ll get bored and wander off to do something else. As Euclid said to Ptolemy: “There is no royal road to geometry.” As goes mathematics, so goes life – although this may be my own Dunning-Kruger Effect talking, so take this with a grain of salt.

  7. MDA Student says:

    This all goes back to education, jobs, quality of life, and economics.
    Getting people to be educated in Tech + Biotech is like training people to do an MD + PhD. You make way more on the Tech (or MD) side of things, and if you venture over to the PhD (or Biotech) side you get much less return for your efforts.
    If Biotech had the same ROI as the Facebooks and Zyngas, you’d have more talented people making that jump and more people training in both realms. As it is, individuals don’t have as much to gain, so they aren’t doing it.
    BTW, I think Bruce Booth had an article some time back about tech vs. biotech investing.

  8. Anonymous says:

    I don’t know… I saw a talk comparing Moore’s law to the cost per base of DNA sequencing. Sequencing technology has had a much steeper curve (i.e., faster innovation). Obviously that’s just one tiny slice of biology, but there’s clearly innovation happening, certainly on the technology side.

  9. Anonymous says:

    Data, Data, Data, Watson. Without data I cannot theorize… or build models!
    It’s only now that ChEMBL, PubChem, and other places have enough quality data for some tox endpoints to be able to build good predictive models.
    But on closer inspection, for a given endpoint, much of this data is not very useful, except for, say, genotox/carcinogenicity.

  10. Ljstewarttweet says:

    Data is not Knowledge! So much data in biology yet so few true insights. Who will figure out how to convert data into insight for health?

  11. John Wayne says:

    @10: Well said

  12. Virgil says:

    The biggest issue I see with the application of “big data” to biology is the nasty and disparate nature of the data itself. We can’t even build decent kinetic models of simple metabolic pathways, because the published Km and Vmax values for the enzymes are from all over the shop (one from rabbit liver, another from E. coli, another from frog muscle, etc.; a toy numerical sketch of this appears at the end of the comment thread). Despite the best efforts of ExPASy, NCBI, and other database projects, there’s still no simple common language in which all of biology can be annotated.
    Another huge problem is the shockingly low ability of big data sets from even 4-5 years ago to interface with current data sets. Driving this, in part, is the fact that the technology/methodology is almost moving too fast… by the time we biologists figure out how to use a new method to get meaningful data, the next new thing has come along and we all want to do that instead because it’s “better”. Arguably it took >5 years in the case of the first gene chips (a lot of the early publications were complete bunk), and just as we were actually beginning to get useful data and understand how to use the method properly, everybody junked their Affy arrays and went to RNAseq instead. We never went back and re-did the gene chips properly because “the field had moved on”, and you can’t get grants to revisit old problems using dated methods. I would bet good money we’ll be decrying RNAseq 5 years from now, but no one will come back and correct the errors because the next shiny, sexy method will be far more interesting by then. Rinse and repeat.

  13. anchor says:

    #10, an apropos comment! In my 20+ years of work in the medicinal chemistry and drug discovery area, the only way it works is when we learn along the way, the old-fashioned way! The rest are only mirages!

  14. ptm says:

    Being a molecular biologist and an electronic engineer by training, I can tell you those areas are completely different, and approaches from one have very little relevance to the other.
    Technologies like electronics or IT work so well ONLY because they were very carefully designed to remain deterministic and susceptible to reductionism, even at the very high levels of complexity in modern computer systems. The power of the language of electronic engineering is primarily due to the fact that the components themselves were designed and optimized with that goal in mind. This situation is completely different from the one in biology, where evolution never bothered with analytical tractability.
    If a protein with some particular function can be recruited to perform another, completely unrelated and transient, function under some circumstances, evolution will be perfectly happy with it; if it can then add yet another role on top of that, so much the better. Meanwhile, the complexity of the system from our perspective goes through the roof. Sure, there are cases where it’s better from an evolutionary POV to keep processes compartmentalized, but that is certainly not a general rule.
    In general I am pretty pessimistic when it comes to progress in biology. Unless we can come up with revolutionary new tools, for example scanners capable of analyzing the 3D structure of single frozen cells with atomic resolution, progress will likely slow to a crawl as we get deeper and deeper into the quagmire of molecular interactions in living cells.

  15. Paul says:

    @10:
    “The purpose of computing is insight, not numbers.” — Richard Hamming (1961)
    “The purpose of computing numbers is not yet in sight.” — Richard Hamming (1997)

  16. yeroneem says:

    One of the most direct ways that biochemistry will benefit from raw computing power is protein-drug complex structure solution (and, hopefully, prediction).
    Also, structural chemistry in general is one of the few areas where you can easily revisit and incorporate into databases ANY old data, starting with Bragg’s NaCl structure.
    With beams from free-electron lasers not requiring single crystals for protein structure solution (only a slurry of a powder), bigger and more complex proteins will become tractable. Here I am not relying on news headlines: I’ve heard a detailed talk by one of these guys and seen the data.
    And at some point the combination of processing power and available experimental data will lead to reliable protein folding in silico, “suddenly” exploding the amount of information available.

  17. Lucas says:

    “Note: it’s been pointed out in the comments that cost-per-base of DNA sequencing has been dropping at an even faster than Moore’s Law rate. So there is technological innovation going on in the biomedical field, outside of sheer computational power, but I’d still say that understanding is the real rate limiter. . .”
    The current sequencing technologies all depend crucially on robotics, computer vision, and a long list of statistical post-processing techniques to generate sequences so cheaply. Sequencing directly benefits from advances in biology and in algorithms, as well as from Moore’s law.
    Indeed, I think that Moore’s law being an input to sequencing is part of what drives the faster-than-Moore’s-law reduction in cost. The techniques are getting better, but one of the key raw materials (computing power) is also getting exponentially cheaper.
    That sounds very Kurzweilian, but following the current trends to their logical conclusion isn’t as encouraging. If current trends continue, we’ll be able to sequence a whole human genome for tens of dollars in a few hours.
    So let’s imagine that sequencing is completely free and instantaneous: is that world radically medically different from the world we currently live in? Clearly it’s not. We just don’t have enough medically significant information about the genome to radically change medicine. We probably will, and cheap sequencing will help with that, but in and of itself sequencing isn’t very useful. As you say, understanding is a bigger gating factor.
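One more toy example, this time for the kinetic-model complaint in comment 12: a minimal Michaelis-Menten sketch in Python showing how much the predicted rate for the “same” enzyme can shift when the Km and Vmax come from different sources. The parameter values and the substrate concentration below are made up for illustration; they are not real measurements from rabbit liver or E. coli.

    # Michaelis-Menten rate: v = Vmax * [S] / (Km + [S])
    # Both parameter sets are hypothetical placeholders standing in for values
    # reported in different organisms/tissues; they are not real measurements.
    def mm_rate(s, vmax, km):
        return vmax * s / (km + s)

    s = 0.1  # assumed substrate concentration, arbitrary units
    set_a = mm_rate(s, vmax=50.0, km=0.05)  # "rabbit liver"-style placeholder
    set_b = mm_rate(s, vmax=12.0, km=1.5)   # "E. coli"-style placeholder
    print(f"predicted rate with set A: {set_a:.2f}, with set B: {set_b:.2f}")
    # Same enzyme name, same [S], roughly a 40-fold difference in predicted rate.
    # Stitch a pathway model together from mixed sources and the errors compound.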

Comments are closed.