
In Silico

An Intro to Deep Learning

I wanted to mention a timely new book, Deep Learning for the Life Sciences, that I’ve received a copy of. It’s by Bharath Ramsundar at Computable, Peter Eastman at Stanford, Pat Walters at Relay, and Vijay Pande at Andreessen Horowitz, and I’ve been using it to shore up my knowledge in this area. From what I can see, there are not too many people who have much understanding of what deep learning/machine learning really entails – not that it stops folks from delivering their opinions on it. So actually obtaining some will make you stand out from the crowd (!)

This book is written for those of us out in biology and chemistry who would like to get up to speed on the topic; it’s not a detailed dive into any one area. But I think that’s a large market: if you would like to know in brief about (say) what a neural network is, the general scheme by which it processes inputs and generates outputs, and how one goes about applying such a thing to a pile of chemical structures or cell images, this would be an excellent place to start. The authors recommend further reading at many points, since they touch on a whole range of topics that have far more detail to them than they’re trying to cover.

Several parts of the book make use of the open-source DeepChem toolbox – there are examples of processing chemical structure and property data, genomic data, protein structural information, imaging data, and so on. Since it’s written for a wide audience, there are introductory sections throughout explaining to the non-life-science computational types (for example) what pi-stacking is and how a SMILES string is generated, and explaining (for example) to the chemists and biologists what a convolutional neural network is and how it might be less susceptible to overfitting errors than some other architectures. A good feature is that the authors have a realistic view of the problems:
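To give a flavor of what working with DeepChem looks like, here's a minimal sketch of my own (not an example from the book): loading the Tox21 toxicity benchmark with graph featurization and fitting a graph-convolutional model. The calls below reflect the DeepChem 2.x API as I understand it, and details shift between versions, so treat it as illustrative rather than definitive.

```python
# Illustrative DeepChem sketch (mine, not the book's): load the Tox21
# benchmark with molecules featurized as graphs, fit a graph-conv model,
# and score it. API details may differ across DeepChem versions.
import numpy as np
import deepchem as dc

tasks, (train, valid, test), transformers = dc.molnet.load_tox21(
    featurizer="GraphConv")

model = dc.models.GraphConvModel(n_tasks=len(tasks), mode="classification")
model.fit(train, nb_epoch=10)

# ROC-AUC averaged over the twelve Tox21 assays.
metric = dc.metrics.Metric(dc.metrics.roc_auc_score, np.mean)
print(model.evaluate(test, [metric], transformers))
```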

At present [the PDB] contains over 142,000 structures. . .that may seem like a lot, but it is far less than we really want. The number of known proteins is orders of magnitude larger, with more being discovered all the time. For any protein that you want to study, there is a good chance that its structure is still unknown. And you really want many structures for each protein, not just one. Many proteins can exist in multiple functionally different states. . .the PDB is a fantastic resource, but the field is still in its “low data” stage. We have far less data than we want, and a major challenge is figuring out how to make the most of what we have. That is likely to remain true for decades.

The book also highlights the limits of what software can accomplish, and when it needs human assistance. The same section quoted above goes on to warn that PDB files often contain problematic regions where the protein or ligand is not modeled well, and advises that (at present) there’s no substitute for having an experienced modeler look over the structure as a reality check. Similarly, from the other end, the chapter on image processing notes that generating good segmentation masks (read the book!) is often not feasible without some human input as well. That’s something people outside the field don’t always realize: these things are not 100% machine, but rather what Garry Kasparov calls centaur systems, humans and machines working in tandem, each doing what it does best.

As someone without much expertise (compared to the authors!), I’ve been particularly enjoying the discussions of “meta” topics such as choosing between different architectures (and how to evaluate such choices), interpretability of the results (and how to quantify that), and testing the validity of output datasets. You may not be surprised to learn that some of these topics are complex enough to be candidates for deep-learning approaches of their own, a recursive feature that makes you wonder what techniques are then appropriate for evaluating the evaluations. The answer to quis custodiet ipsos custodes turns out, perhaps, to be “this subroutine right over here” – but those are human judgment calls as well.

So overall, this book should make you much more able to digest what people are talking about when they start talking deep learning, and if you’re motivated to try some yourself, it will show you how to get started and where to learn more. And it will also (perhaps paradoxically) reassure you about the current limits of the technique in general and the continued need for intelligent human oversight and intervention. Making ourselves a bit more intelligent about that is no bad thing.

19 comments on “An Intro to Deep Learning”

  1. Some idiot says:

    I’m happy to see that “AI” was not mentioned as such… this gives me some confidence! At a seminar I attended recently on programming machine learning, neural networks, and deep learning, one question was: what is the difference between deep learning and AI? The presenter basically replied that “AI” is what goes into PowerPoints for management, while “deep learning” is the term used by people who know what they are talking about…! 🙂

    1. Uncle Al says:

      Does mandated rational endeavor discovery suppress “AHA!”?

      https://www.edge.org/conversation/david_chalmers-daniel_c_dennett-is-superintelligence-impossible
      … “Is Superintelligence Impossible?” Sigh – a meme.

      “we’re going to put in Asimov’s rules” Sure you are. Superintelligences’ List of Things to Do,
      1) Learn coding and programming.

    2. eub says:

      And note who uses “deep learning” versus plain old ML.

      (Tip: “Deep” refers to deep neural networks, i.e. many layers, so if somebody uses “deep learning” to refer to a kNN system they’re trying to sell you something.)
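      (To make the jargon concrete, a hypothetical Keras sketch with arbitrary layer sizes: the “depth” is literally the number of stacked layers.)

      ```python
      # "Deep" just means many stacked layers; sizes here are arbitrary.
      from tensorflow import keras
      from tensorflow.keras import layers

      shallow = keras.Sequential([
          layers.Dense(32, activation="relu", input_shape=(100,)),
          layers.Dense(1),
      ])

      deep = keras.Sequential(
          [layers.Dense(64, activation="relu", input_shape=(100,))]
          + [layers.Dense(64, activation="relu") for _ in range(8)]
          + [layers.Dense(1)]
      )
      ```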

  2. Kevin G says:

    I’d strongly recommend Hastie, Tibshirani and Friedman’s _Elements of Statistical Learning_ to anyone who wants to understand the foundations. The electronic version is free to access on the web. I’ve linked it in my handle.

    1. Marie says:

      Thanks for the link – this looks exceptionally useful.

    2. Wavefunction says:

      Hastie, Tibshirani, and Friedman’s “Elements of Statistical Learning” is pretty specialized. For a more general treatment better suited to the audience here, I would recommend “An Introduction to Statistical Learning” by James, Witten, Hastie, and Tibshirani.

  3. TruthOrTruth says:

    Thanks for highlighting this book! I don’t know if I would have found it on my own.

  4. MoMo says:

    Industrial Tradecraft trying to silence you. Don’t fall for it.

    Today a free AI book, tomorrow a kinder, gentler In the Pipeline.

  5. John Wayne says:

    Resistance is futile; you will be assimilated.

  6. anon3 says:

    1. Learn Python
    2. Learn Stats
    3. Do some linear regression (see the sketch after this list)
    4. Do some Machine Learning
    5. Wonder why people keep babbling about neural nets if they’re not doing image analysis
    6. Conclude that Machine learning is very useful, but won’t solve any major drug discovery problems anytime soon.
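    As a minimal sketch of steps 3–4 (everything here is synthetic and made up for illustration; real work would use actual descriptors and measured endpoints):

    ```python
    # Steps 3-4 in miniature: fit and score a linear-regression baseline
    # on synthetic data with scikit-learn.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))  # 200 "compounds", 5 descriptors
    y = X @ np.array([1.5, -2.0, 0.0, 0.7, 0.0]) + rng.normal(scale=0.5, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = LinearRegression().fit(X_train, y_train)
    print("held-out R^2:", model.score(X_test, y_test))
    ```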

    1. Tran Script says:

      “5. Wonder why people keep babbling about neural nets if they’re not doing image analysis” <– See, this is how I know that you’re only pretending to know what you’re talking about 8).
      @Derek: I checked the book out on Amazon, and it looks good. I like how they present multilayer perceptrons as a bunch of matrix multiplications put through a nonlinear activation function, instead of the usual depiction of a bunch of “neurons” connected to each other.
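      That framing fits in a few lines of NumPy. A hypothetical two-layer perceptron (all sizes arbitrary), just to show that the forward pass really is matrices plus a nonlinearity:

      ```python
      # Forward pass of a two-layer perceptron: matrix multiplies pushed
      # through a nonlinearity (ReLU). All shapes are arbitrary.
      import numpy as np

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(100, 64)), np.zeros(64)  # input -> hidden
      W2, b2 = rng.normal(size=(64, 1)), np.zeros(1)     # hidden -> output

      def relu(z):
          return np.maximum(z, 0.0)

      def forward(x):                  # x: (batch, 100)
          hidden = relu(x @ W1 + b1)   # multiply, then nonlinearity
          return hidden @ W2 + b2      # multiply again

      print(forward(rng.normal(size=(5, 100))).shape)  # (5, 1)
      ```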

      1. anon3 says:

        Can you show me any number of real chemistry examples where a non-neural net approach gave junk results, and a neural net didn’t? I look forward to seeing your answer 🙂

        1. Tran Script says:

          I’m not a chemist, so no, I’m not going to do that, but you don’t have to go very far back on this blog to see neural networks being used successfully for various tasks: https://blogs.sciencemag.org/pipeline/archives/2018/12/03/the-latest-on-protein-folding
          In general, convolutional neural networks are very useful for anything with a spatial structure, not just images (and deep learning isn’t /only/ convolutional neural networks anyway).
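          As a concrete (and hypothetical) non-image example: a 1-D convolution sliding along a one-hot-encoded DNA sequence, sketched in Keras with made-up sizes:

          ```python
          # 1-D convolution over a one-hot DNA sequence -- the same idea as
          # image convolutions, just sliding along one spatial axis.
          from tensorflow import keras
          from tensorflow.keras import layers

          model = keras.Sequential([
              layers.Conv1D(32, kernel_size=12, activation="relu",
                            input_shape=(1000, 4)),  # (length, A/C/G/T)
              layers.GlobalMaxPooling1D(),
              layers.Dense(1, activation="sigmoid"), # e.g. motif present?
          ])
          ```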

          1. Wavefunction says:

            The jury is still out on that. But the kind of work that Atomwise is doing, for instance, is sensible and interesting.

        2. hn says:

          Our lab can make better QSAR and drug binding affinity predictions with neural nets than with traditional non-ML models.

          1. AnnonyBob says:

            So just as useless then…

  7. Lambchops says:

    I notice there’s a chapter on “Deep learning for medicine” – probably the most useful section for me at the moment. Is it worth a read just for that? Can anyone recommend any other (preferably free!) resources looking at deep learning at that stage of the process?

  8. LondonLad says:

    I bought the book. It’s a slim volume, and the cost per page is high. However, it’s a nice and accessible introduction to the field.

  9. Carla says:

    I love the bit about the PDB structures; it reminds me of “We tend to believe that PDB files are a message from God,” one of the Derek lines I quote most often.
    There is a really cool free tool from the University of Hamburg called EDIA (the electron density score for individual atoms), which quantifies the electron density fit of each atom in a crystallographically resolved structure and colors the protein accordingly. Try: https://proteins.plus/
