
Understand the Brain? Let’s Try Donkey Kong First.

I didn’t think I’d actually see someone try the thought experiment mentioned in this post, but by golly, someone has. That post discussed attempts to simulate the workings of an actual brain, neuron connectivity and all, and the article I quoted went into great detail about just how far we are from being able to do that. (Anyone who tells you differently at the moment is either stretching the truth or deeply misinformed, or both at the same time, which is not an unlikely combination.)

The article said, along the way: “As an overly simplistic comparison, imagine taking statistics on the connectivity of transistors in a Pentium chip and then trying to make your own chip based on those statistics. There’s just no way it’s gonna work.” My own comment was that “Your chances of success with that statistical approach to a Pentium chip are not good at all, but they’re a lot better than the chances of it working on brain tissue.” (Many readers will be immediately reminded of the famous “Can A Biologist Fix a Radio” paper, and the post linked above will send you to several discussions of it on this site.)

What I can report now, courtesy of Alex Tabarrok at Marginal Revolution, is that people have now tried to reverse-engineer the old 6502 chip in such fashion. First the engineers got a crack at it: the Visual 6502 project photographed the layout of the chip at high resolution and tried to model it from just that data. Since they know what the chip is made out of (one transistor after another!) and since its architecture is not all that complex (and we know a lot about computer chip architecture), they were able to recreate the functions of the chip from their connectivity models. That’s actually pretty impressive – I’m not sure how it would work on a more modern chip, but I’m surprised that it worked at all.

But then the biologists tried it out. This paper details an attempt to study the 6502 chip using the tools we have available to study nematode brains and the like, and it’s titled “Could a Neuroscientist Understand a Microprocessor?” I’ll let the abstract speak for itself:

There is a popular belief in neuroscience that we are primarily data limited, that producing large, multimodal, and complex datasets will, enabled by data analysis algorithms, lead to fundamental insights into the way the brain processes information. Microprocessors are among those artificial information processing systems that are both complex and that we understand at all levels, from the overall logical flow, via logical gates, to the dynamics of transistors. Here we take a simulated classical microprocessor as a model organism, and use our ability to perform arbitrary experiments on it to see if popular data analysis methods from neuroscience can elucidate the way it processes information. We show that the approaches reveal interesting structure in the data but do not meaningfully describe the hierarchy of information processing in the processor. This suggests that current approaches in neuroscience may fall short of producing meaningful models of the brain.

Even unlimited data would not help, they say, with the tools and techniques we’re using. You can get as much “behavioral” and phenotypic data as you want from a 6502 chip, but it doesn’t help. Trying to learn what’s going on from the equivalent of brain lesions was, for example, not very informative. They used Space Invaders, Donkey Kong, and Pitfall as behaviors of the chip and tried to work backwards from those:

Lesion studies allow us to study the causal effect of removing a part of the system. We thus chose a number of transistors and asked if they are necessary for each of the behaviors of the processor (figure 4). In other words, we asked, for each transistor we removed, whether the processor would then still boot the game. Indeed, we found a subset of transistors that makes one of the behaviors (games) impossible. We might thus conclude they are uniquely responsible for the game – perhaps there is a Donkey Kong transistor or a Space Invaders transistor. Even if we can lesion each individual transistor, we do not get much closer to an understanding of how the processor really works.

This finding of course is grossly misleading. The transistors are not specific to any one behavior or game but rather implement simple functions, like full adders. The finding that some of them are important while others are not for a given game is only indirectly indicative of the transistor’s role and is unlikely to generalize to other games. Lazebnik [9] made similar observations about this approach in molecular biology, suggesting biologists would obtain a large number of identical radios and shoot them with metal particles at short range, attempting to identify which damaged components gave rise to which broken phenotype.
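
To make the protocol concrete, here’s a minimal sketch of that lesion experiment in Python. The simulator interface (reset, disable_transistor, boots) is a hypothetical stand-in of my own, not the actual API of the paper’s simulator, and the mock hard-codes one fake dependency just to show the shape of the experiment:

```python
# Minimal sketch of a transistor "lesion" study. MockSimulator and its
# methods are hypothetical stand-ins, not the paper's real simulator API.
class MockSimulator:
    def __init__(self, broken_by):
        # broken_by maps a transistor id to the set of games its loss breaks
        self.broken_by = broken_by
        self.disabled = None

    def reset(self):
        self.disabled = None

    def disable_transistor(self, t):
        self.disabled = t  # the "lesion"

    def boots(self, game):
        return game not in self.broken_by.get(self.disabled, set())

def lesion_study(sim, transistor_ids, games):
    """Disable one transistor at a time and record which games still boot."""
    results = {}
    for t in transistor_ids:
        sim.reset()
        sim.disable_transistor(t)
        results[t] = {game: sim.boots(game) for game in games}
    return results

# Pretend transistor 42 happens to be needed only by Donkey Kong:
sim = MockSimulator({42: {"Donkey Kong"}})
print(lesion_study(sim, [41, 42], ["Donkey Kong", "Space Invaders"]))
# Transistor 42 now looks like a "Donkey Kong transistor" -- exactly the
# misleading conclusion the authors warn about.
```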

Other tools that map closely to those of neuroscience were similarly ineffective – they generated large amounts of data, as large as you want, but still could not recapitulate what was going on in any useful sense. Now, as the authors freely admit, microprocessors are very different from brains. But the differences are mostly that brains are far, far more complicated – so if the tools we’re using on brains can’t even tell us much about a 1980s microprocessor (a very simple system by brain standards), then what can they tell us about their actual field of study? The authors suggest that as newer methods are developed, they should be benchmarked to what they could tell us about the 6502 processor as a way of seeing if we’re making any real progress.

So Lazebnik’s radio-fixing paper is indeed having an influence. His criticism of biological techniques and biological thinking is (to me) the strongest part of that paper – he goes on to say that there are a number of more productive techniques that biologists have been giving short shrift to, but that’s where he’s always lost me. I don’t think the situation is even that good when it comes to neuroscience, for example: where are these better techniques? Ray Kurzweil, to pick one bold futurist, has us arriving by 2019 at a much (much!) better understanding of the brain than we currently have. Things are really going to have to pick up, let me tell you.

 

49 comments on “Understand the Brain? Let’s Try Donkey Kong First.”

  1. Anchor says:

    Speaking of neuroscience, significant improvements in motor function in stroke patients have been accomplished by implantation of stem cells in the brain! That pretty much validates Ray Kurzweil!
    https://www.washingtonpost.com/news/to-your-health/wp/2016/06/02/stanford-researchers-stunned-by-stem-cell-experiment-that-helped-stroke-patient-walk/?hpid=hp_hp-more-top-stories_stemcell-715a-stream%3Ahomepage%2Fstory

    1. Hammer Swinging Engineer says:

      I don’t think this actually validates Kurzweil, as to me it seems closer to doing a classical engineering fix to a piece of machinery that doesn’t work: Hitting it with a hammer. I don’t know what it did, but darned if it didn’t work.

      1. Nicholas Swenson says:

        unfortunately analogies only go so far … this isn’t really an engineer swinging a hammer.

        this is just allowing the body to repair itself via the same organizational principles that caused it to form in the first place. we don’t need to engineer a fix, we just need to give the body the tools to fix itself.

        1. AOM says:

          For implanting stem cells, I like the analogy:
          It’s like searching through a junkyard to replace a broken part in your car. You don’t need to know how it is made or how it works, just that it’s broken and how to put it into your car.

  2. CP says:

    According to Kurzweil we are less than a decade away from nanobot-driven flying cars (whatever that means)!

    Makes me wonder if these futurists are aware of the state of nanotech. Five years ago we could make nanorods. Now we can make… hollow nanorods? Nanobots should be just around the corner, surely!

  3. Jose says:

    I so rarely find a paper ‘delightful’ but this one most certainly is!

  4. luysii says:

    There are a few more problems with interpreting a wiring diagram of the brain (assuming one could be obtained).

    1. The wiring diagram is not static – we know that synapses are constantly being formed and destroyed.
    2. Synapses can be excitatory or inhibitory. Electron micrographs of synapses can’t tell one from the other.
    3. The neurotransmitters of greatest interest to thought, emotion and mood — norepinephrine, dopamine, serotonin — are released diffusely into the brain extracellular fluid, not locally at synapses.

    For more on these points please see — https://luysii.wordpress.com/2011/04/10/would-a-wiring-diagram-of-the-brain-help-you-understand-it/

    1. ADMPHD says:

      Not only those issues, but changes in synaptic strength, which would not be visible, are where much of the rubber meets the road.

      Even with simple, 14-cell networks (see the Crustacean STG as one example), we still have difficulty in replicating their output, even knowing their wiring diagram, firing properties, ionic conductances, and effect of neuromodulators on many of the points in the diagram.

      Our brain? Not in our lifetimes. And I say this as a neuroscientist.

    2. Johannes says:

      I’m sorry, but have you even read the article and the paper?
      1. The “wiring diagram” of software running on a processor isn’t static either. But that is beside the point, which is that even in a static system our approaches fail.
      2. Processor gates can be excitatory (AND, OR, XOR) or inhibitory (NOT, NAND, NOR). Electron micrographs of transistors might be able to tell you which is which — but still the analysis fails, which is the main point of the article.
      3. Running software is released diffusely through several input lines all at once and then scatters through the processor. But that is irrelevant because even if we know the exact pathways, our analysis still fails.

      Your argument seems to be that “the brain is too large to understand on a wiring level”, but the same thing applies to microprocessors. These processors are well into the billions-of-transistors range now, and I doubt that anyone can understand any of these devices at the wiring level. Yet they still get designed and built at that level, and that level is (in the end) still the most important.

  5. Curious Wavefunction says:

    I think such approaches partly depend on knowing whether the brain is analog or digital. It could of course be both.

    http://physicsdatabase.com/2014/06/03/freeman-dyson-are-brains-analogue-or-digital/

  6. Sok Puppette says:

    Um… I’m afraid you and everybody else you mention is ignoring exactly HOW different brains are from microprocessors. Think about the tools that DO work for reverse engineering microprocessors. Would you see them as the right sorts of tools for reverse engineering brains?

    I suspect that the tools being used on brains really aren’t up to the task, and I don’t deny that understanding brains, even enough to hack on them a little, is gonna be very, very hard. But aiming neuroscience tools at microprocessors is a silly stunt that tells you approximately nothing.

    And when you say things like “But the differences are mostly that brains are far, far more complicated”, I don’t think you’re using an appropriate view of what complexity is, or at least of what complexity is important, either for understanding or for engineering.

    The total information available for building a brain is limited. That information can be no bigger than the human genome (under 1 GB, probably fewer bits of information than the design files for a modern processor), plus one set of the interpreting cellular machinery, plus the aspects of the environment that you can rely on to be consistent. Anything that varies randomly can’t add critical complexity; if you want nearly every brain to work, you can only rely on things that are consistent. And nearly every brain does in fact work.

    Genome plus some rather flexible environmental requirements doesn’t seem to be that much information. The specified complexity of a brain can’t be that much. Yes, it’s complicated, and, worse, it’s built with absolutely no regard for comprehensibility, very much unlike a microprocessor. But there are sharp limits on what you need to have or know to make it work.

    Brains do have many interconnections and many other structures, but many parts of those structures must either be repetitive (and thus not intrinsically complex) or happen by chance (and thus not be sensitive to changes in just any tiny detail). Whatever secret sauce is critical for brain function, it can’t demand that every single neuron be just so.

    So a brain must be at least IN PART like a salt solution. There may be moles of particles in a liter of the solution, but you don’t demand to know where every one of them is before you’ll say you understand the solution in every useful way.

    … which kind of points to a real difference between brains and microprocessors. An architectural and “design” difference.

    In a microprocessor, there are repetitive structures reducing complexity, and there IS design for comprehensibility. But, in the end, almost every single connection in the whole processor must be exactly just so, or it won’t work. The overall behavior of the microprocessor depends in a very brittle way on the entire content of its design information. All of its complexity is directly specified, and most of it is necessary before you get anything interesting out of it at all.

    There are very few connections you can knock out and still have a microprocessor play Donkey Kong.

    On the other hand, you can knock out whole chunks of a brain and still have a person who doesn’t even seem that odd.

    … which is a long-winded way of saying you shouldn’t expect to learn too much about studying brains from studying microprocessors…

    1. Some idiot says:

      Long-winded, but very well put!

      (-:

      Just curious… You say that the instructions for brain building cannot be more than about 1 GB… I do not have the background to evaluate that number, but it sort of sounds reasonable… However, is this reasonable as a limiting factor? If the “blueprint” was one which included hardware and algorithms for the biological “machine learning”, then it should be possible to create a higher level of finished complexity from a smaller set of design instructions.

      But I agree with your general point. It is a good refutation. Although it computes nothing, a spiderweb is probably a better analogue than a computer chip, in that there are a large number of redundancies, but when connections are removed one by one, at some stage there is likely to be a catastrophic collapse.

      1. George Locke says:

        Each base can take on 4 values, ACGT, so that’s two bits. 2 bits * 3 billion base pairs in the genome gives 6 GB. That’s how you get the GB figure, but I think this (very common) analogy has a fatal problem.

        If you want to think about DNA in terms of digital computers, I think the “software” is physics and chemistry while the genome is more like a list of arguments that tells the chemistry what to do.

        1. eyesoars says:

          2 bits/base * 3 Gbases = 6 Gbits, and at 8 bits/byte that’s 0.75 GB(ytes), not 6 GB.
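
          Or, as a quick check in Python (same figures as above, ignoring compression, diploidy, and so on):

          ```python
          base_pairs = 3_000_000_000    # ~3 billion base pairs
          bits = 2 * base_pairs         # 4 bases -> log2(4) = 2 bits per base
          print(bits / 8 / 1e9, "GB")   # 0.75 GB
          ```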

        2. Andrew says:

          Ultimately, program and data are interchangeable, and, following Kolmogorov, your “fatal problem” is only an additive factor. True, it could be a very large one, but I suspect otherwise.

        3. Sok Puppette says:

          It’s not about digital computers. It’s about information theory. You could equally well express the amount of information in decibels; it’s just that bits are the common unit.

          The rules of physics and chemistry don’t contain that much information, and furthermore those same rules apply to every object in the universe. Something has to specify the information that makes a brain DIFFERENT from a rock, and that thing can’t be the laws of physics shared by both.

          Also, the distinction between “software” and “arguments” isn’t one that an information theorist or a computer scientist, or even a computer programmer, would make.

          1. Mike C says:

            @Sok Puppette:
            1. There may not be that much information in the rules of chemistry and physics – IF you understand them well enough to make 100% accurate predictions in biological systems. But we can’t, not even for single proteins. Saying the genome yields the specified complexity of the brain is a bit like saying a google search query yields the specified complexity of a google search.
            2. Twins with identical genomes do not generate brains that are near identical. The environments aren’t identical either.
            3. Why not compare it to FPGAs (Field Programmable Gate Arrays) that are programmed by self-generating algorithms instead? Each time you do it, the result can’t be transferred to another FPGA: very slight defects end up causing tunneling events at thresholds that are unique to specific transistors on specific chips. Is the specified complexity of that system equal to the design of the FPGA?

      2. Kaleberg says:

        There’s a lot of room for unpredictability in 1GB of code.

    2. Jason Pipkin says:

      This isn’t quite the right counterargument. Brains and microprocessors have both similarities and differences.

      To argue against the paper you need to establish, for example, why a lesion study will provide more understanding when applied to a biological system but not when applied to this silicon system.

      1. Some idiot says:

        I would argue that the main point is that a brain lesion experiment would give results that are very _different_ to a chip lesion experiment. I.e. Not better or worse, but just plain different in character. Not so much a comparison between apples and oranges, but between pears and giraffes…

      2. George Locke says:

        If you think of “lesion studies”, I’d guess that any “lesion” in a microprocessor is much, much more likely to be “embryonic lethal”. That is, if you break a few connections in the microprocessor, you’re much more likely to get completely meaningless, non-functional output. Brains are much more adaptable than microprocessors, implying that they are structured differently.

        Since I know practically nothing about the models tested here, I don’t know if they’re general enough that they ought to be able to apprehend a microprocessor. Nevertheless, I can’t help but feel like it would be very strange if a good model of the brain were also a good model of a microprocessor.

        1. aairfccha says:

          The closest electronic analogy to a brain would probably be an FPGA, which is rather fundamentally different to a static processor running (external) software.

    3. Anon2 says:

      It turns out that simple neurobiological systems, at least, are very robust to many system parameters (e.g. strength of connection, ion channel densities, etc). See
      http://dx.doi.org/10.1101/sqb.2014.79.024828

    4. Sympa says:

      Let’s say that a certain model of computer can be described by 1 GB of data.
      Now, if we expand the memory of that computer, the description will still be 1 GB (plus maybe 100 bytes to say: repeat that memory thing 100 times).
      Now, filling that memory is what happens during learning. And it is observable: people from different environments can have different basic abilities, whether it is estimating intentions, building relationships, the ability to do scientific work, etc.
      Much of the high-level function must be learned.

      1. zero says:

        I submit as example: demoscene.
        People have written programs in 64k (or even 4k) of code that provide full audiovisual output of astonishing complexity.

        Another example: zebra stripes.
        Built from the interaction between a promoter and a suppressor, the varying expression of two signal molecules combines to produce patterns of such complexity that no two animals are precisely alike. At the same time, all such animals have patterns that are similar enough to blend in; the evolutionary advantage of defeating a predator’s visual edge-detection is conveyed using only a very small number of proteins. (A toy simulation of this kind of two-signal patterning is sketched at the end of this comment.)

        It follows that the complexity of a brain is not limited by the initial information present in genes. The development of a brain is a process governed by multiple feedback mechanisms, allowing complexity to develop over time and in response to stimuli. Each brain does not need to have every neuron ‘just so’ because each brain has generated the necessary functionality on the fly. Brains react to damage by routing around it, rebuilding functionality from the remaining connections and neurons. Some level of redundancy is built in, meaning any one neuron can be lost with no loss of functionality.

        A third example: the internet.
        This may be the closest thing we have to a simulation of neural activity in the real world. The underlying code for the internet is a handful of very compact drivers (TCP, BGP, etc.) to handle the physical communication and a set of standards small enough to be conveyed in text to handle data encapsulation (HTTP, XML, etc.). The global internet developed over a period of decades in response to stimuli and with significant feedback mechanisms. The result is a construct so complex that we struggle to monitor or even model it in a meaningful way. It survives injuries small and large, displays emergent behavior, handles both storage and processing (some of that inline) and has hierarchical levels of activity. Even so, the internet as a whole represents only a tiny fraction of a human brain’s connectedness. (I would argue that the aggregate computing capacity of internet-connected computers far exceeds a human brain’s capacity, so there are glaring discrepancies between the two systems.)
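
        Here is that toy simulation: a Gray-Scott reaction-diffusion model in Python, a stand-in for the promoter/suppressor pair rather than the actual biochemistry. Two diffusing signals and four constants are the whole “genome”, yet perturbing the seed gives a different stripe pattern every time:

        ```python
        # Toy Gray-Scott reaction-diffusion: complex patterns from a tiny rule set.
        import numpy as np

        n = 128
        U = np.ones((n, n))                      # "suppressor" concentration
        V = np.zeros((n, n))                     # "promoter" concentration
        V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.5    # seed a small central patch
        Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065  # diffusion, feed and kill rates

        def laplacian(Z):
            # 4-neighbor discrete Laplacian with wraparound edges
            return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
                    np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

        for _ in range(10000):
            growth = U * V * V
            U += Du * laplacian(U) - growth + F * (1 - U)
            V += Dv * laplacian(V) + growth - (F + k) * V

        # V now holds a labyrinthine stripe/spot pattern; the entire "design"
        # was four numbers and one update rule.
        ```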

    5. I’m not convinced that information theory is the right way to get your arms around the brain.

      We know that very large, very rich, structures can be described very very simply (cf. fractals, Conway’s game of Life, etc, but no, no, god no, I am not saying “brains are basically fractals!!!!!”). We know that computation and memory can be carried out with structures that are very large but very regular (microprocessors, RAM, and yes, brains).
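
      (To see how little description that takes, here is Conway’s Life in a dozen lines of Python; the entire “specification” of all that open-ended richness is one update rule:)

      ```python
      # Conway's Game of Life: rich, open-ended structure from a one-line rule.
      import numpy as np

      grid = np.random.default_rng(0).integers(0, 2, size=(64, 64))

      def step(g):
          # Count each cell's eight neighbors (edges wrap around).
          n = sum(np.roll(np.roll(g, i, 0), j, 1)
                  for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
          # A cell is born with exactly 3 neighbors, survives with 2 or 3.
          return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

      for _ in range(100):
          grid = step(grid)
      # After 100 steps: gliders, oscillators, still lifes.
      ```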

      The information in the genome that describes a “brain” most likely is not describing how the brain differs from a rock, but rather how the brain differs from other, similar, simply described but extremely rich structures.

      There’s a good whack of the genome that’s devoted to “this is how the cell works” and then a little bit for “make a pile of these things in a sort of a skin on a big lump of fat, and the cells in that skin should have some minor changes and tweaks that make them self-organize into an immense, rich, structure”

      Somewhere buried in the minor tweaks to the cells, the architecture of the fat blob and, I suppose, a wad of other details, we get brain-ness.

      Most of the heavy lifting of “brain-ness” is in some relatively tiny coding for gigantic self-organizing structures. Then all you have to do is twiddle that gently to make it a brain instead of a random bunch of cells zapping noise at one another, which is the tricky bit it turns out. I assume evolution had something to do with it. This is the bit that we have, um, I think almost exactly zero insight whatsoever in to.

    6. Andrew says:

      It’s also interesting to take development into account. That ~1GB isn’t just building an adult brain. It’s building a series of brains, each of which has to run a changing body living in a changing environment, from embryo in womb, to fetus in womb, to infant with mother, to toddler with sibling, etc., etc. And each of that series of brains has to be seamlessly changeable into the next one, with no interruption or loss of function along the way.

      Doing the equivalent in silicon (or even in code) would be an interesting challenge, wouldn’t it? To build a chip that starts as a single transistor and grows from there, and has the ability to play gradually more complex games as it grows, with no interruptions along the way?

  7. Anon says:

    “Even unlimited data would not help, they say, with the tools and techniques we’re using.”

    And *there* is the same problem with Big Data as applied to *all* complex biological systems. You can never get enough data.

    1. Phil says:

      “You can never get enough data” implies that “enough data” would be sufficient but getting it is the problem. The point here is that “enough data” does not exist. The types of data currently available are not sufficient to elucidate the answer.

      An analogy in chemistry would be trying to elucidate complex structures without access to modern methods like NMR – in many cases, degradations, flame tests, titrations etc are not going to narrow down to a single unequivocal structure. The lesion tests described in the paper remind me exactly of these crude methods.

  8. King of Kong says:

    I wonder what you would observe when the game reaches level 22, the so-called “killscreen”…insight into the Hayflick limit?

  9. SM says:

    Seems like simulating a brain is going to be an NP-hard problem wherein brute force by data isn’t going to get us very far.

  10. db says:

    Derek,

    I’m not sure if you’re aware of the new book “The Age of Em” (reviewed here), but your points are pertinent to that discussion.

    The book presumes that copying of human minds will become possible and discusses what the outcomes of such a thing might be, but the gulf between our current understanding of the human brain and the level of knowledge required to duplicate one is far, far wider than anything one might expect to bridge within a century.

  11. EtaoinShrdlu says:

    I come at this at the opposite end, as a programmer, so pardon any missteps on the biological front. The thing that strikes me reading the description of the paper is that there’s a bit of a false comparison. Traditionally developed code (such as Donkey Kong, Space Invaders, and Pitfall) has normally been pared down to only the barest essentials, because there’s a (nominally, and sometimes only nominally) intelligent designer behind the keyboard that uses a criterion that says “if this piece of code doesn’t do exactly what I want, it should be deleted”. Meanwhile, my understanding of the mechanics of evolution says that (setting aside the cost per additional neuron and thinking on a long enough time scale) as a brain evolves, neurons/groups of neurons will only be trimmed out if they decrease the odds of themselves being passed to the next generation in some way. In that comparison, I would agree that a brain would be much much harder to figure out than a chip that plays Donkey Kong.

    However, that difference in the degree of difficulty would become much smaller with other types of programming, namely the interconnected fields of neural networks and genetic algorithms. The type of code produced by them is famously hard to understand, even by the people that set things up in the first place, because they are the product of random noise filtered by an artificial selection criterion over huge numbers of generations. If the engineers had been given a circuit board that runs code designed by one of these genetic algorithms instead of by a human designer, I suspect they’d be nearly as helpless as the biologists to understand how it produces what it produces.

    They’d still have, I think, a slight leg up, because the physical components of the circuit itself are extremely well understood, unlike (as I understand it) the situation on the neurological end. If even that were not true, then it would truly become a slog, basically identical to how a computer scientist might try to understand a brain: isolate this component, stimulate its inputs in every possible way and record its output, try to notice correlations between certain combinations of input and output (we’d try a truth table first by reflex, but this could be much trickier if the components don’t work in binary), attempt to classify components into finite groups that each respond in a characteristic way, and only then begin the work of trying to understand how groups of them behave.
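
    The first probing step is simple enough to sketch in Python (the mystery component below is a stand-in function, since the whole point is that you can’t read its insides):

    ```python
    # Recover an unknown two-input binary component's truth table by
    # exhaustive stimulation, then match it against known gates.
    from itertools import product

    def mystery_component(a, b):
        # Stand-in for the unknown device; on a real bench this would be
        # a measurement, not a function whose source we can read.
        return a ^ b

    KNOWN_GATES = {
        "AND":  lambda a, b: a & b,
        "OR":   lambda a, b: a | b,
        "XOR":  lambda a, b: a ^ b,
        "NAND": lambda a, b: 1 - (a & b),
    }

    observed = {inp: mystery_component(*inp) for inp in product((0, 1), repeat=2)}
    matches = [name for name, gate in KNOWN_GATES.items()
               if all(gate(*inp) == out for inp, out in observed.items())]
    print(observed, "->", matches)   # classifies the component as XOR
    ```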

    tl;dr the biologists shouldn’t sell themselves too short, the engineers had some unspoken advantages going for them.

  12. Imaging guy says:

    Integrated circuits are very complex, but we understand them. The Mars rover was even more complex, made up of hundreds of ICs and other parts, but NASA successfully managed to land it on Mars. Why? Because we built them. That is exactly why the project to synthesize the whole human genome from scratch, Human Genome Project-Write (HGP-Write), announced today in Science, is important. Synthesizing different chunks of the genome of an organism and placing them inside artificial cells with defined RNAs, proteins and metabolites will probably help us understand biology better than current methods do.

    1) The Genome Project–Write (Science 02 Jun 2016: DOI: 10.1126/science.aaf6850)
    2) Scientists Announce HGP-Write, Project to Synthesize the Human Genome (New York Times)

  13. As our understanding of the brain gets better, our approach towards teaching new knowledge will improve.

  14. UndergradChemist says:

    The “Can a Biologist Fix a Radio” paper hits the nail on the head about why I decided to major in chemistry and not biology; I always had a gut feeling, but could never express it in words.

  15. Carl W says:

    I have a couple of problems with the paper (well, with the excerpts Derek provided… haven’t read the whole thing).

    First, I agree with other commenters that 6502s are too different from brains to be a reasonable model organism.

    Second, even so, the described experiment seems unlikely to correspond to the first experiment a biologist would perform on a brand-new organism. It seems that the experiment was:

    Pick 3 extremely complex behaviors that have been seen in the wild, perform lesion experiments to see whether the organism can still perform the behavior under the given lesion, grade each result on a pass/fail scale, discover that most lesions make the organism fail to complete the behavior (without trying to examine closely the differences between the lesion behavior and the desired/expected behavior), give up.

    It turns out that booting a video game is a fairly complex behavior that is made of a sequence of much simpler behaviors. Shouldn’t they investigate the simplest behaviors (or, if we pretend that they don’t know how a processor works at the start of the experiment, shouldn’t they start by trying to figure out some “basic” behaviors)? It’s like trying to find a neuron for “can you complete a decathlon” or “can you graduate from college”, instead of trying to find what in your brain lets you move your right arm or remember a random word one day later.

  16. Nicholas Crook says:

    You made the claim in the previous article that we are unable to simulate a roundworm’s brain. It seems to be the crux of your entire argument. However, nearly a full year before you posted that article, we had already successfully simulated a worm’s brain. It was placed in a Lego body to see how it would react to external stimuli. For further information, check out this article on the Smithsonian magazine’s website: http://www.smithsonianmag.com/smart-news/weve-put-worms-mind-lego-robot-body-180953399/?no-ist

    Just some food for thought.

    1. Derek Lowe says:

      To the best of my knowledge, this was nowhere near a simulation of a roundworm’s brain. It’s based on the roundworm connectome, the sum of all C. elegans neural connections. But the simulation, as far as I know, does not model many of the important features – such as the different weights assigned to each neural signal, which can change in response to environment (and which seem to be the basis for learning). There is, from what I can see, a rather vigorous debate about just what the OpenWorm project has been able to simulate. It’s an interesting project, to be sure, but there seems to be a lot of hype about “uploading the roundworm mind” in the popular press, and that’s not accurate.

  17. Something chip reverse-engineers have to their advantage is that every photolithography process currently available builds something essentially two-dimensional. Yes, there are ways to build bridges, but no one stacks bridges vertically on top of each other, or a transistor over a bridge, or …

    Most brains are not so tidy.

  18. luysii says:

    There is a fourth reason that a wiring diagram won’t really help you understand the brain. The wiring diagram ignores gap junctions (places where neurons get very close, with holes in their membranes that allow the cytoplasm of one neuron to communicate with that of another, and which allow electrical activity in one neuron to pass into another). These are called electrical synapses. For more details on gap junctions, please see — https://luysii.wordpress.com/2016/06/05/mind-the-gap-junction-that-is/

  19. Michelle says:

    Actually, many of the techniques in this paper HAVE been extremely useful in neuroscience. Lesion studies have expanded our knowledge of the neural underpinnings of sleep, memory, and aggression, among other things, and tuning curves have radically changed our understanding of how the brain processes sensory information. If these same techniques can’t capture anything useful about a microprocessor, that suggests that the microprocessor might be too different from the brain to provide a useful point of comparison. Instead, maybe we should be focusing on what kinds of gaps in our experimental repertoire are keeping us from solving the nervous systems of simple model organisms, like worms or flies.

    (Much) longer discussion about these ideas here: http://www.empiricalimage.net/home/2016/6/5/whats-in-a-model

    1. Andrew says:

      That post also makes the point that the 6502 experimenters weren’t very good at doing good lesion studies. They walked into a woodshop with a hacksaw, hacked away for a few days, and concluded that saws weren’t a useful tool for making things with wood.

  20. JG4 says:

    Thoth • May 18, 2016 5:48 AM
    https://www.schneier.com/blog/archives/2016/05/friday_squid_bl_526.html#c6724490
    @all
    Make your own 6502 IC.
    Link: http://monster6502.com

    The MOnSter 6502
    A new dis-integrated circuit project to make a complete, working transistor-scale replica of the classic MOS 6502 microprocessor.
    We’ll be showing off our progress at the 2016 Bay Area Maker Faire!
