
Simulation and Understanding

Roald Hoffmann and Jean-Paul Malrieu have a three-part essay out in Angewandte Chemie on artificial intelligence and machine learning in chemistry research, and I have to say, I’m enjoying it more than I thought I would. I cast no aspersion against the authors (!) – it’s just that long thinkpieces from eminent scientists, especially on such broad topics as are covered here, do not have a good track record for readability and relevance. But this one’s definitely worth the read. It helps that it’s written well; these subjects would be deadly – are deadly, have been deadly – if discussed in a less immediate and engaging style. And if I had to pick a single central theme, it would be the quotation from René Thom that the authors come back to: prédire n’est pas expliquer, “To predict is not to explain”.

The first part is an introduction to the topics at hand: what do we mean when we say that we’ve “explained” something in science? The authors are forthright:

To put it simply, we value theory. And simulation, at least the caricature of simulation we describe, gives us problems. So we put our prejudices up front.

They place theory, numerical simulation, and understanding at the vertices of a triangle. You may wonder where experimental data are in that setup, but as the paper (correctly) notes, the data themselves are mute. This essay is about what we do with the experimental results, what we make of them (the authors return to experiment in more detail in the third section). One of their concerns is that the current wave of AI hype can, along the way, demote theory (a true endeavor of human reason if ever there was one) to “that biased stuff that people fumbled along with before they had cool software”.

Understanding is a state of mind, and one good test for it is whether you have a particular concept or subject in mind well enough, thoroughly enough, that you can bring another person up to the same level you have reached. Can you teach it? Explain it so that it makes sense to someone else? You have to learn it and understand it inside your own head to do that successfully – I can speak from years of experience on this very blog, because I’ve taught myself a number of things in order to be able to write about them.

My own example of what understanding is like goes back to Euclid’s demonstration that the number of primes is infinite (which was chosen for this purpose by G. H. Hardy in A Mathematician’s Apology). A prime number, of course, is not divisible by any smaller numbers (except the universal factor of 1). So on the flip side, every number that isn’t prime is divisible by at least one prime number (and usually several) – primes are the irreducible building blocks of factors. Do they run out eventually? Is there a largest prime? Euclid says, imagine that there is. Let’s call that number P – it’s a prime, which means that it’s not divisible by any smaller numbers, and we are going to say that it’s the largest one there is.

Now, try this. Let’s take all the primes up to P (the largest, right?) and multiply them together to make a new (and rather large) number, Q. Q is then 2 · 3 · 5 · 7 · 11 · (lots of primes all the way up to) · P. That means that Q is, in fact, divisible by all the primes there are, since P is the last one. But what happens when you have the very slightly larger number Q+1? Now, well. . .that number isn’t divisible by any of those primes, because it’ll leave 1 as a remainder every single time. But that means that Q+1 is either divisible by some prime larger than P, or it’s a new prime itself (and way larger than P) – and we just started by saying that there aren’t any such things. The assumption that P is the largest prime has just blown up; there are no other options. There is no largest prime, and there cannot be.
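
If you’d like to watch that argument actually run, here’s a minimal sketch in Python (using the sympy library just for the prime bookkeeping; the choice of 7 as the pretend-largest prime is mine, purely for illustration):

```python
# Euclid's construction, run as a sanity check: pretend P is the largest
# prime, multiply all the primes up to P into Q, and look at Q + 1.
from sympy import primerange, factorint

P = 7                                  # pretend 7 were the largest prime
primes = list(primerange(2, P + 1))    # [2, 3, 5, 7]

Q = 1
for p in primes:
    Q *= p                             # Q = 2 * 3 * 5 * 7 = 210

# Q + 1 leaves a remainder of 1 on division by every prime in the list...
assert all((Q + 1) % p == 1 for p in primes)

# ...so its prime factors must all be bigger than P. Here Q + 1 = 211,
# which happens to be a prime itself -- one branch of the argument.
print(factorint(Q + 1))                # {211: 1}
```

Either branch lands on the same contradiction: a prime bigger than the “largest” one.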

As Hardy says, “two thousand years have not written a wrinkle” on this proof. It is a fundamental result in number theory, and once you work your way through that not-too-complicated chain of reasoning, you can see it, feel it, understand that prime numbers can never run out. The dream is to understand everything that way, but only mathematicians can approach their subject at anything close to that level. Gauss famously said that if you didn’t immediately see why Euler’s identity (e^(iπ) + 1 = 0) had to be true, then you were never going to be a first-rate mathematician. And that’s just the sort of thing he would say, but then again, he indisputably was one and should know.
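
For those of us who need to see it land numerically rather than immediately, a two-line check (the standard library’s cmath does the work) shows the identity coming out to zero up to floating-point roundoff:

```python
# Euler's identity, e^(i*pi) + 1 = 0, verified to machine precision:
import cmath
print(cmath.exp(1j * cmath.pi) + 1)    # ~1.22e-16j, i.e. zero up to roundoff
```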

Now you see where Thom is coming from. And where Hoffmann and Malrieu are coming from, too, because their point is that simulation (broadly defined as the whole world of numerical approximation, machine-learning, modeling, etc.) is not understanding (part two of their essay is on this, and more). It can, perhaps, lead to understanding: if a model uncovers some previously unrealized relationship in its pile of data, we humans can step in and ask if there is something at work here, some new principle to figure out. But no one would say that the software “understands” any such thing. People who are into these topics will immediately make their own prediction, that Searle’s Chinese Room problem will make an appearance in the three essays, and it certainly does.

This is a fine time to mention a recent result in the machine-learning field – a neural-network setup that is forced to try to distill its conclusions down to concise forms and equations. The group at the ETH working on this fed it a big pile of astronomical observations about the positions of the planet Mars in the sky. If you’re not into astronomy and its history like I am, I’ll just say that this puzzled the crap out of people for thousands of years, because Mars does this funny looping sewing-stitch-like motion in the sky, occasionally pausing among the stars and then moving backwards before stopping yet again and resuming its general forward motion across the sky. You can see why people started adding epicycles to make this all come out right if you started by putting the Earth at the center of the picture. This neural network, though, digested all this and came up with equations that, like Copernicus (whose birthday I share), put the sun at the center and have both Earth and Mars going around it, with us in the inner orbit. But note:

Renner stresses that although the algorithm derived the formulae, a human eye is needed to interpret the equations and understand how they relate to the movement of planets around the Sun.

Exactly. There’s that word “understand” again. This is a very nice result, but nothing was understood until a human looked at the output. The day the software itself starts to wonder about such things will be the day we can really talk about artificial intelligence. And to Hoffmann and Malrieu’s point about simulation, it has to be noted that some of those epicycle models did a pretty solid job of predicting the motion of Mars in the sky. Simulations and models can indeed get the right numbers for the wrong reasons, with no way of ever knowing that those reasons were wrong in the first place, or what a “reason” even is, or what is meant by “wrong”. And the humans who came up with the epicycles knew that prediction was not explanation, either – they had no idea why such epicycles should exist (other than perhaps “God wanted it that way”). They just knew that these things made the numbers come out right and match the observations, the early equivalent of David Mermin’s quantum-mechanics advice to “shut up and calculate”.
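
In fact, for idealized circular orbits the two pictures make identical predictions. Here’s a toy sketch (circular, coplanar orbits and made-up sampling – the assumptions are all mine, and this is emphatically not the ETH group’s method) in which a deferent-plus-epicycle model and the heliocentric model spit out the very same apparent longitudes for Mars:

```python
# Toy model: Mars's apparent (geocentric) longitude, told two ways.
# Heliocentric: Earth and Mars both circle the Sun.
# Geocentric: Mars rides a deferent, plus an epicycle that quietly
# mirrors the Earth's own orbit. Same vector sum, opposite order.
import numpy as np

r_earth, T_earth = 1.000, 1.000   # orbital radius (AU), period (years)
r_mars,  T_mars  = 1.524, 1.881

t = np.linspace(0, 4, 2000)       # four years of idealized observations

def circle(r, T):
    angle = 2 * np.pi * t / T
    return r * np.cos(angle), r * np.sin(angle)

ex, ey = circle(r_earth, T_earth)             # Earth around the Sun
mx, my = circle(r_mars, T_mars)               # Mars around the Sun
helio = np.arctan2(my - ey, mx - ex)          # Mars as seen from Earth

dx, dy = circle(r_mars, T_mars)               # deferent, centered on Earth
px, py = circle(r_earth, T_earth)             # epicycle mirroring Earth's orbit
geo = np.arctan2(dy - py, dx - px)            # Mars as seen from Earth, again

assert np.allclose(helio, geo)                # identical predictions...
# ...and both show retrograde loops: the longitude sometimes runs backwards.
print((np.diff(np.unwrap(helio)) < 0).any())  # True
```

Right numbers, two completely different stories about why.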

Well, “God did it that way” is the philosophical equivalent of dividing by 1: works every time, doesn’t tell you a thing. We as scientists are looking for the factors in between, the patterns they make, and trying to work out what those patterns tell us about still larger questions. That’s literally true about the distribution of prime numbers, and it’s true for everything else we’ve built on top of such knowledge. In part three of this series of essays, the authors say

The wave of AI represented by machine learning and artificial neural network techniques has broken over us. Let’s stop fighting, and start swimming. . .We will see in detail that most everything we do anyway comes from an intertwining of the computational, several kinds of simulation, and the building of theories. . .

They end on a note of consilience. As calculation inevitably gets better, human theoreticians will move up a meta-level, and human experimentalists will search for results that break the existing predictions. It will be a different world than the one we’ve been living in, which frankly (for all its technology) will perhaps come to look more like the world of Newton and Hooke than we realize, just as word processing and drawing programs made everything before them seem far more like scratching mud tablets with sticks and leaving them to dry in the sun. We’re not there yet, and in many fields we won’t be there for some time to come. But we should think about where we’re going and what we’ll do when we arrive.

33 comments on “Simulation and Understanding”

  1. MattF says:

    Many years ago, I took an undergraduate solid-state physics course from David Mermin – he’s the only lecturer I’ve ever heard who stutters in complete sentences. To the point, I doubt that his view of QM is so simple.

  2. loupgarous says:

    Thanks for sharing that. I wish I were confident that

    “As calculation inevitably gets better, human theoreticians will move up a meta-level, and human experimentalists will search for results that break the existing predictions.”

    Trust in insufficiently tested theories has not only broken those theories when real-world experience didn’t match theoretical predictions, but at times has broken people and things on the way to breaking the theories.

    It happens in Pharma, but is most notable when a Tacoma Narrows Bridge falls into the water below it, or a brand-new military transport aircraft drops out of the sky and kills its crew on take-off because its full-authority digital engine controls didn’t work as expected.

    The problem with modern simulation was summed up by Peter Amey in the abstract to his Logic Versus Magic in Critical Systems:

    “A prevailing trend in software engineering is the use of tools which apparently simplify the problem to be solved. Often, however, this results in complexity being concealed or “magicked away”. For the most critical of systems, where a credible case for safety and integrity must be made prior to there being any service experience, we cannot tolerate concealed complexity and must be able to reason logically about the behaviour of the system.”

    Amey goes on to describe cases in which simulations written in object-oriented code failed to faithfully behave as intended, and the extreme care required to write simulations that work – a LOT of structured testing and validation work must happen, the programming equivalent of creating falsifiable theories.

    1. Derek Lowe says:

      These are really good points. Another example would be the financial crisis in 2007/2008 – firms believed their models that told them that their asset portfolios were not highly correlated, and that they were thus hedged against a real-estate downturn. Those models were inadequate to say the least, causing the monetary equivalent of the Tacoma Narrows collapse, as we saw.

      1. loupgarous says:

        Yep. Recent experience with digital controls on military aircraft is a forked version of the same tale – Amey, in the paper I quoted, talks about the massive effort to document the correct operation of controls on an upgrade to the C-130 “Hercules” transport (very few of which suffer catastrophic failures in flight).

        Amey’s paper was written too early to include the 2015 crash of an Airbus A400M “Atlas” military transport with full authority digital engine control (FADEC) after it lifted off from Seville Airport. Improper operation of the FADEC software caused as many as three of the plane’s four engines to stop.

        Whether it was a fault in the software itself or in installation of the software (it’s still a software design issue if you can install it in ways that cause crashes), the same care taken with Lockheed’s C-130 software in the 1990s wasn’t taken to avoid what happened just outside Seville in 2015.

        I actually look forward to quantum leaps in computing capacity bringing in a new era in which our vision of the world around us is clearer. Having written some of the code which enables Big Pharma to describe clinical study results to regulatory agencies, I can only hope that the people who write the simulations on which that new era depends are always agitating for simulations to be verified, re-verified, and verified again. Nothing’s quite as seductive as a simulation that tells you what you’d like to hear.

  3. Suppose I had a new assay that would read out the whole kinome in 5 minutes, for $5, for your favorite 5 billion molecules. Same error bars as your current favorite assay. (This kind of assay is the goal of AI systems.)

    I guess we could worry that such an assay would “demote theory”, because a medicinal chemist could choose no longer to understand the principles of kinase binding but, rather, just blindly follow whatever the black box said.

    Or we could expect that access to so much more data would enable new theory. After all, Copernicus still used epicycles; it took better instruments and Tycho Brahe’s years of data to get us Kepler’s elliptical models. AI is not a new astronomy, it’s a new telescope, and the lenses are becoming clearer.

    1. loupgarous says:

      Their mirrors, on the other hand, occasionally are flawed.

      “Within weeks of the launch of the telescope, the returned images indicated a serious problem with the optical system. Although the first images appeared to be sharper than those of ground-based telescopes, Hubble failed to achieve a final sharp focus and the best image quality obtained was drastically lower than expected. Images of point sources spread out over a radius of more than one arcsecond, instead of having a point spread function (PSF) concentrated within a circle 0.1 arcsec in diameter as had been specified in the design criteria. Analysis of the flawed images showed that the cause of the problem was that the primary mirror had been polished to the wrong shape. Although it was probably the most precisely figured optical mirror ever made, smooth to about 10 nm (0.4 μin), at the perimeter it was too flat by about 2,200 nanometers (2.2 micrometers; 87 microinches). This difference was catastrophic, introducing severe spherical aberration, a flaw in which light reflecting off the edge of a mirror focuses on a different point from the light reflecting off its center.”

      How did this happen?

      “During the initial grinding and polishing of the mirror, Perkin-Elmer analyzed its surface with two conventional refractive null correctors. However, for the final manufacturing step (figuring), they switched to the custom-built reflective null corrector, designed explicitly to meet very strict tolerances. The incorrect assembly of this device resulted in the mirror being ground very precisely but to the wrong shape. A few final tests, using the conventional null correctors, correctly reported spherical aberration. But these results were dismissed, thus missing the opportunity to catch the error, because the reflective null corrector was considered more accurate.”

      No matter how technically advanced our tools for studying Nature (and the tools for making those tools) get, Johann Schiller gets the last word:

      “Against stupidity, the gods themselves contend in vain!”

      1. Jim Hartley says:

        The report on the cause of the spherical aberration in the Hubble telescope can be found by searching “hubble telescope spherical aberration report pdf”. See page 48 for the culprit: a missing flake of non-reflective paint on the “field cap” caused an operator to align his measurement to the field cap instead of the measuring rod underneath, a 1.3 mm error.

        1. loupgarous says:

          Didn’t know that. But Perkin-Elmer ignored indications of spherical aberration from conventional refractive null correctors after the final grinding stage because they assumed (without validating its results) that the new, custom reflective null corrector was not only much more precise in its measurements (which it undoubtedly was) but at least as accurate (which it was not). It’s hard to excuse that in the construction of a multimillion-dollar telescope going on a one-way ride to orbit.

          One of my freshman chemistry professors at LSU, asked if he minded us using hand-held scientific calculators during tests, affably said “Of course not. Since those things came out, my assistants and I have seen some of the most precise wrong answers ever written!”

  4. NotHF says:

    Not every day you see a Derrida reference in Angewandte.

  5. MATH says:

    Q+1 is not necessarily a prime number! 3×5=15, 15+1=16.

    1. MATH+1 says:

      You forgot to include 2 as well

      2x3x5 = 30, 30+1=31

      31 is prime

    2. Nesprin says:

      Q is the product of all primes, so 2x3x5 is not Q.

    3. Derek Lowe says:

      That’s why it says that Q+1 is *either* divisible by some higher prime than P, *or* is a prime itself.
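
      For anyone who wants to see the “divisible by some higher prime” branch actually happen, the smallest such case makes a nice worked example (easy to check by hand or in a couple of lines of Python):

      ```python
      # Multiply the primes 2 through 13 and add one:
      Q = 2 * 3 * 5 * 7 * 11 * 13    # 30030
      print(Q + 1)                   # 30031 -- not prime!
      print(59 * 509)                # 30031 -- both factors are larger than 13
      ```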

  6. Wavefunction says:

    The Thom quote reminds me of an excellent take from David Deutsch, in his book “The Fabric of Reality”, on the fallacy that prediction equates to understanding. To illustrate the fallacy Deutsch gives the simple example of a magician doing a magic trick: with enough iterations you can predict what happens next, but you understand nothing. The conflation of prediction with understanding is also pointed out very well in Gary Marcus’s recent critique of AI, “Rebooting AI”.

  7. Earl Boebert says:

    My favorite characterization of models comes from Alan Turing’s last paper, “The Chemical Basis of Morphogenesis” (Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, Vol. 237, No. 641. (Aug. 14, 1952), pp. 37-72.):

    “In this section a mathematical model of the growing embryo will be described. This model will be a simplification and an idealization, and consequently a falsification. It is to be hoped that the features retained for discussion are those of greatest importance in the present state of knowledge.”

    I used to point out to my students that the depth of Turing’s genius was exemplified by the phrase “it is to be hoped.”

    Models are arbitrary collections of features that could just as well have been selected by a doctor with a flashlight. If you use them to clarify your thinking you are in good shape. If you use them as a black-box oracle you are liable to get into trouble. Big trouble.

    1. Hap says:

      The problem is that people tend to treat black boxes outside their areas of competence or interest as oracles, because they don’t want to have to think about them; they just want them to enable them to do what they need to do and can understand. If something looks easy and simple but can get people into trouble, it will. If you give powers to people above their ability and willingness to understand them, you are probably going to be hosed eventually.

  8. biochemist says:

    I’m loving this paper. Does anyone have others to recommend? Is there a place where these (well-written) long-form reviews around science are collected?

  9. Another Guy says:

    Great summary of this very interesting article, Derek, and very relevant for our AI-obsessed age (from autonomous vehicles to AI overlords taking over the world). The sections on Explainable AI are important for the pharmaceutical world, as it is hard to imagine some future where drugs will be developed, approved or denied, or healthcare decisions made simply because the black box says so. I wrote a while back about how neural networks can seem like a panacea until they stop working, and then comes the realization that it is not clear how the input links to the output. Or maybe I should just hide under my bed?

  10. Quarthinos says:

    I’m a computer person, and one of my first real jobs was putting data into a model, running it, and then passing on the results to my betters to try to make sense of them. One of the betters said something that has stuck with me ever since:

    “All models are wrong. Some are useful.”

    It’s good to know that it’s true everywhere and not just in that particular field.

    1. Diver Dude says:

      George Box’s dictum is becoming more and more widely applicable. Philosophically speaking, I think that should worry us.

    2. Isidore says:

      I recall a statement at some meeting by an eminent scientist (it may have been Charles Weissmann) that “model systems are like model students, they behave as you expect them to.”

  11. Charles H. says:

    Ok. Lots of valid points. But it’s worth mentioning that some things are not understandable. You may be able to do the math, and construct a simulation that will give the experimentally determinable right answer, but that’s not understanding. Understanding requires a model that can be analogized to conscious body functions. That’s what our mind evolved to deal with.

    A relevant quote attributed to Richard Feynman is “If you think you understand quantum mechanics, you don’t understand quantum mechanics.” But the same thing applies to many other things. All that’s required is that a bunch of independent things are acting simultaneously. Actually, there are other conditions, but that’s the simplest one. I think this is related to the famous “man in the gorilla suit” psychology experiment.

    This is why successful scientific theories tend to simplify the conditions. But you can’t always successfully do that.

  12. metaphysician says:

    I have to say, I generally groan any time someone brings up “Chinese Room”. I’ve seen *faaaar* too many arguments and debates where what might be an interesting thought experiment is used to draw conclusions far beyond its potential validity. In particular, it feels like people love to use it as the basis for arguments that computer intelligence is impossible, without acknowledging that the same logic also disproves organic intelligence.

    1. Derek Lowe says:

      That is a misinterpretation of the thought experiment, for sure.

    2. Nathan says:

      It certainly seems to be a powerful argument that our minds are more than simply a “biochemical computer”. It seems to me that the “Chinese Room” thought experiment either strongly supports dualism OR strongly supports the concept that there is something absolutely fundamental that we do not understand about neuroscience. Do you disagree?

      1. SomeGuyNotTakingAStance says:

        Not the person you replied to, but in my opinion the Chinese room doesn’t really provide an “argument” for anything, just restates the question in certain ways. Replace “person manually running the Chinese ‘speaking’ program” with “person manually running a quark-level QFT simulation of a Chinese-speaking person’s brain”. Does that “understand” Chinese? If it doesn’t, why not? You’d need to explain why the “dumb” laws of physics that “run” our brains can allow understanding, but a person “running” those laws on paper doesn’t. Is our understanding of the laws of physics missing a special term for “human understanding/consciousness/the soul”?

        If you believe in Dualism, then I think the answer is yes, and if you don’t the answer is no, but it’s kinda a matter of faith at that point, and the Chinese room doesn’t tell us anything we didn’t start with.

  13. yfp says:

    The argument between Science and AI always reminds me of Kant’s Critique of Pure Reason. In short, Kant believed that the human brain is not the tabula rasa assumed by Hume and Locke: first we receive sensations; secondly, we organize these sensations into perception; finally, we organize perceptions into conception. Only organized conception is knowledge. This sequence of development of human experience is the a priori of the development of Science.
    On the other hand, A.I. is a mathematical development (a compilation of 0s and 1s, a long chain of false and true decisions). A mathematical system (e.g. 2+2=4) is a priori in a way that is absolute and independent of all human experience.
    The triumph of A.I. is that it has achieved the first step of human knowledge (organizing sensation into perception). Whether it can accomplish the second step (organizing perception into conception – can the mathematical a priori replace the human a priori?) remains to be determined.

  14. dip says:

    This was the hardest blog post to read so far; it reads as all over the place, and the prime number paragraph just felt unreadable. Not sure what Derek was going for here…

    1. Derek Lowe says:

      All I can say is that you’ll have an even wilder time with the three papers (!)

  15. LeeH says:

    Expanding on Charles H’s comment…

    The power of machine learning (I refuse to use the A word) is prediction. Given the limits of your labels (assay data, whatever), what you are trying to do is predict the behavior of a new example (compound). You don’t have to understand the model in order to get a “perfect”, or even a good, prediction. It just has to be better than random, or better still, better than the success rate of the medicinal chemist, in order to be useful. Our task is to get to useful chemical matter, not to understand how we got there.

    And we are much more likely to fail because our labels are misleading, either because our assays are too noisy or not actually relevant to the disease. In that case, even the perfect model is not going to help you, since what it is predicting is of no value in the end.
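
    To put a number on that last point, here’s a quick synthetic sketch in Python (the figures are made up, purely to illustrate): even a model that predicts the true activity perfectly can only correlate with a noisy assay label up to a ceiling set by the noise itself.

    ```python
    # Even a *perfect* predictor of the true activity can only track a
    # noisy assay label up to a ceiling set by the assay noise.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    true_activity = rng.normal(size=n)       # the thing we actually care about

    for assay_noise in (0.1, 0.5, 1.0, 2.0):
        label = true_activity + rng.normal(scale=assay_noise, size=n)
        perfect_model = true_activity        # a model that knows the truth exactly
        r = np.corrcoef(perfect_model, label)[0, 1]
        print(f"assay noise sd={assay_noise:>4}:  best achievable r = {r:.2f}")
    # As the noise grows, the "perfect" model looks worse and worse against
    # the label -- the model isn't the problem, the label is.
    ```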

  16. 123 says:

    “They place theory, numerical simulation, and understanding at the vertices of a triangle. You may wonder where experimental data are in that setup, but as the paper (correctly) notes, the data themselves are mute”

    If you take this statement literally (probably out of its context) and apply it, then theories and mere proposals should be good/worthy enough to publish on their own, especially in the fields where it takes years just to get the experimental data/proof to support your idea/proposal. Are there any journals that publish that kind of thing?

  17. A computational chemist says:

    To say that “simulation” (as opposed to “theory”) isn’t “understanding”, is a pile of purist garbage.

    1. eub says:

      The map does need to be smaller than the territory.
