OK, today’s blog post is going to be even weirder than usual – we’re going to wander off into quantum mechanics. And into a particular borderland of it where have been a lot of interesting hypotheses and speculations, but plenty of hand-waving hoo-hah, so it’s important to realize the risks up front. But here we go.

The talk about quantum computing has been impossible to miss over the years, and it’s not even near reaching a crescendo. Suffice it to say that directly taking advantage of quantum-mechanical phenomena can lead (potentially) to extraordinary accelerations in particular computing tasks. As I understand it, the situation will be a revved-up version of (for example) what you see when using Graphics Processing Units for purposes other than the high-end video and gaming that they were intended for. For GPUs, you have to identify the algorithms and approaches that best take advantage of their particular architecture, and the real-world problems that are best suited to being solved by them (as opposed to getting the glint of sunset light off a swinging ice ax to look right in the Yeti side-quest, which is what GPUs excel at in their natural state). Some calculations are just going to be intrinsically better suited from the beginning (more parallel and more easily broken down into smaller independent units), while other problems can be reworked to take advantage by emphasizing a different stage of the process or by coming at the problem from a different direction entirely.

And so it is (and will be) with quantum computing. If you’re going to go to the trouble and expense of building a quantum computer (and both of those are really formidable), you’ll want to get the most benefit out of it that you can. As it stands, there’s a (not extremely long) list of algorithms that have been mathematically proven to be faster via quantum techniques than they can be by classical ones, so everyone will be finding ways to route their problems through those magic portals. The first example of such was Shor’s algorithm for factoring numbers, which is why if you manage to build a robust, capable quantum computer you can be sure that some intelligence agency will buy it from you. The widely-used RSA cryptography method depends on the multiplication of two huge prime numbers being immensely easier than factoring the resulting product back down from scratch, and Shor’s algorithm is almost exponentially faster than any known non-quantum method. At the moment the record (or at least the publicly known record) for the largest number that has been factored by Shor’s algorithm is (wait for it) 21 – told you the implementation hurdles were pretty steep. But things are moving right along. As an aside, it’s often been remarked that number theory was long considered the purest branch of mathematics, i.e. the one with the least possible reduction to any practical use, and the fact that the global economy now depends on things like the behavior of prime numbers would have horrified a pure-and-proud-of-it mathematician like G. H. Hardy beyond description.
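Since that asymmetry is the whole point, here’s a quick back-of-the-envelope sketch (mine, in Python, and nothing to do with Shor’s actual quantum circuit) of why RSA works at all: multiplying two big primes is effectively instant, while undoing it by brute force means grinding through candidate divisors.

```python
# A toy illustration of the RSA asymmetry: multiplication is one cheap
# operation, while naive factoring has to rediscover the primes by trial
# division. Real RSA moduli run to hundreds of digits, far beyond this.

def trial_factor(n):
    """Brute-force the smallest factor pair of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime

p, q = 999983, 1000003   # two well-known primes around a million
n = p * q                # the easy direction: effectively instant

# The hard direction: ~a million trial divisions even for this tiny example.
assert trial_factor(n) == (999983, 1000003)

# And the current public record for Shor's algorithm on actual hardware:
assert trial_factor(21) == (3, 7)
```

Shor’s algorithm attacks the hard direction directly, which is why that factoring record of 21 is simultaneously laughable and worth watching.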

The second quantum-advantage example was Grover’s algorithm for database searching (more properly, for inverting a function). If you have to root through a database of N entries – a linear search through unstructured data – how many computational steps does it take? Well, it’s hard to get around the problem that it might take you up to N steps at the very least, because the answer you’re looking for might be in the last entry that you check. But Grover showed that quantum computation could shorten that to working in square-root-of-N time, and being quadratically faster is nothing to sneeze at, either, especially for really large problems. Grover’s algorithm is of great interest to the national security community as well, because it could brute-force its way through cryptographic hash functions, which also depend on algorithmic asymmetry: easy to run in one direction, viciously hard to reverse if you don’t know the trick.
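For the curious, the amplitude bookkeeping behind that square-root speedup can be mimicked on a classical machine. This is my toy sketch of the standard oracle-plus-diffusion iteration, just tracking the state vector by hand:

```python
import math

# Classical simulation of Grover's algorithm on N = 8 items. The "oracle"
# flips the sign of the marked item's amplitude; the diffusion step then
# reflects every amplitude about the mean, pumping weight into the mark.

N = 8
marked = 5  # an arbitrary item to search for
state = [1 / math.sqrt(N)] * N  # start in the uniform superposition

iterations = round(math.pi / 4 * math.sqrt(N))  # optimal count ~ (pi/4)*sqrt(N); here 2
for _ in range(iterations):
    state[marked] = -state[marked]            # oracle: phase-flip the target
    mean = sum(state) / N
    state = [2 * mean - a for a in state]     # diffusion: invert about the mean

prob = state[marked] ** 2
assert prob > 0.9   # ~94% probability of reading out the marked item
```

Two iterations instead of an average of four classical looks isn’t much at N = 8, but the gap is N versus √N, and it yawns wide as the database grows.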

Now what, you are asking yourself, does all this have to do with chemistry or biology? That’s where we get to this new paper from the CNRS in Marseille, written up here in *Technology Review*. It describes the behavior of electrons moving across two different sorts of surfaces – a quantum walk across a square grid and a triangular one (both of which are found, of course, in various crystalline materials). This paper seems to demonstrate that a quantum walk which recovers the Dirac equation also implements a Grover algorithm search – in this case, localizing around an electron hole in the grid with exactly the statistics that you’d expect from a Grover search process (such a search may then, in fact, be natural electron behavior under these conditions). It’s already been shown that “quantum walks” can implement the Grover-style search, but under rather artificial conditions, because it typically requires an “oracle” step where you compare results to an expected value. But searching for an electron hole in this way effectively builds the oracle into the search itself, removing a substantial practical difficulty.
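The walks in the paper are two-dimensional, on square and triangular grids, but the flavor comes through even in a one-dimensional sketch (mine, and far simpler than anything in the paper): a “coin” degree of freedom gets a Hadamard operation at each step, and the walker shifts left or right depending on the coin. The signature is ballistic spreading, with the width of the distribution growing like the number of steps rather than its square root.

```python
import math

# A discrete-time quantum walk on a line. amps[x] holds the (left-coin,
# right-coin) complex amplitudes at position x. Each step: Hadamard the
# coin, then shift the left-coin part to x-1 and the right-coin part to x+1.

steps = 50
amps = {0: (1 / math.sqrt(2), 1j / math.sqrt(2))}  # symmetric starting coin

for _ in range(steps):
    new = {}
    for x, (l, r) in amps.items():
        hl = (l + r) / math.sqrt(2)   # Hadamard acting on the coin
        hr = (l - r) / math.sqrt(2)
        pl, pr = new.get(x - 1, (0, 0))
        new[x - 1] = (pl + hl, pr)    # left-coin component steps left
        pl, pr = new.get(x + 1, (0, 0))
        new[x + 1] = (pl, pr + hr)    # right-coin component steps right
    amps = new

probs = {x: abs(l) ** 2 + abs(r) ** 2 for x, (l, r) in amps.items()}
mean = sum(x * p for x, p in probs.items())
std = math.sqrt(sum((x - mean) ** 2 * p for x, p in probs.items()))

assert abs(sum(probs.values()) - 1) < 1e-9   # unitary: probability conserved
assert std > 2 * math.sqrt(steps)            # far wider than a classical walk
```

A classical random walk after 50 steps has a standard deviation of about 7 positions; the quantum version spreads several times wider, and that interference structure is what search algorithms exploit.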

Now what, you are asking yourself, does all this have to do with chemistry, or biology? Well, a quick-and-easy answer is that it might lead to real quantum computing via a different route than we’ve been trying to date. If you can make electrons implement the Grover algorithm under the right circumstances, maybe you don’t have to build a general-purpose quantum computer from the ground up to realize it. Just take advantage of what the electrons (or any other quantum particle) will be glad to do for you! The advent of large-scale quantum computing would have a bit of an effect on computational chemistry and structural biology, and it may have just gotten closer.

But there’s something else to think about, as mentioned in the *Technology Review* article linked to above. It directs the reader to this 2000 paper from Apoorva Patel. He showed that a quantum-computing system that implements the Grover algorithm is capable of distinguishing between four choices in a single computational step – in fact, four is the optimum number, giving the greatest computational efficiency. Similarly, he showed that a three-step quantum search under these conditions is at its most efficient when distinguishing between roughly 20 choices.
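Those two numbers drop out of elementary Grover bookkeeping: a k-step search succeeds with certainty when (2k+1)·arcsin(1/√N) = π/2, which rearranges to N = 1/sin²(π/(2(2k+1))). A quick check (mine, not Patel’s code):

```python
import math

# Database sizes for which a k-step Grover search succeeds with certainty:
# N = 1 / sin^2( pi / (2 * (2k + 1)) ).

def optimal_N(k):
    """Database size searched perfectly by a k-step Grover search."""
    return 1 / math.sin(math.pi / (2 * (2 * k + 1))) ** 2

assert abs(optimal_N(1) - 4.0) < 1e-12   # one step: exactly 4 choices
assert abs(optimal_N(3) - 20.2) < 0.1    # three steps: ~20.2 choices
```

One step picks out exactly 4; three steps pick out about 20.2, which you’d round to 20 for any discrete alphabet. Those are Patel’s numbers.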

Now what, you are asking yourself, does all this have to do with chemistry or biology? Well. . .if Grover-algorithmic processing is some sort of fundamental property of nature, then you might expect the genetic material to be synthesized most efficiently when the machinery has a choice of four different nucleotides. And when translating these into proteins, a triplet code (which requires three processing steps) would function most efficiently when working with a palette of 20 amino acids. I will admit to being unusually susceptible to ideas like this, but that makes the hair stand up on the back of my neck.

It has not escaped the attention of the authors of this new paper, to borrow a phrase. Recall that part about using the hole defect as a built-in “oracle” step in the search – the authors note: “*Indeed, replacing the Grover oracle step by surface defects seems way more practical in terms of experimental realizations, whatever the substrate, possibly even in a biological setting*”, and reference a later Patel paper at that link. Patel himself has speculated on just these possibilities with nucleotides and amino acids, but all this of course requires that the Grover algorithm be recapitulated by natural phenomena, and that such a quantum-mechanical process is actually somehow at the root of molecular biology. *It looks like the first of those is being shown right now.* Which makes the second, however weird and unlikely it may seem, worth taking more seriously. Now *this* is what I expected from the twenty-first century, not the rest of the garbage that we find ourselves surrounded with (current events, etc.) A research area for the brave!

“If Grover-algorithmic processing is some sort of fundamental property of nature, then you might expect the genetic material to be synthesized most efficiently when the machinery has a choice of four different nucleotides.”

Always hard to know whether such results are little more than numerological coincidences (Dirac and Eddington for instance had noted several relationships between seemingly unrelated fundamental constants) or something real.

Numerological coincidence seems like a really tough alternate hypothesis to beat here. I think I’d be more comfortable with the idea of four nucleotides being optimal if I better understood the energetic? entropic? savings of four vs. three, or five, or–my naive preference–twenty.

I am partial to a thermodynamic argument as being simpler myself, but there’s also the fact that many of these numbers are the result of sheer contingency, a random selection of a branch in evolution’s garden of forking paths that occurs without real cause and is then perpetuated simply because it is. One can ask similar questions about any number of numerical constants in the universe; this generally leads to philosophical discussions about the anthropic principle. Much of life can be distilled into thermodynamics + contingency.

We’re talking about actions happening at the molecular level, or a bit smaller, here, so I’d say that quantum mechanical mechanisms are a necessary assumption. The only question would be “Is it *these* quantum mechanical mechanisms?”.

I’d guess, based on just my available knowledge, that it’s a best guess that these mechanisms are important. They may not have caused the choice, but they made its implementation the evolutionarily favored one. Yes, it’s a guess, but to think otherwise seems merely to be arguing from “that’s not the way I’m used to thinking about things”…unless, of course, you have a reason that isn’t mentioned.

Nature is kind of clever and uses quantum “computing” in, e.g., one of its most effective mechanisms (photosynthesis). We need to work better with these kinds of (new) paradigms. Quantum Wonderland needs to be worked in – beyond the boring 4D space-time which we perceive.

Yep. It’s an example of the Strong Law of Small Numbers. https://en.wikipedia.org/wiki/Strong_Law_of_Small_Numbers

“There aren’t enough small numbers to meet the many demands made of them.”

Surely the authors considered this possibility…

https://www.youtube.com/watch?v=5ZLtcTZP2js

Well, that has been settled very recently:

https://www.theregister.co.uk/2019/09/07/three_cubes_problem/

With all these advances and hints (coincidence? I think not since 42 is involved!) we may soon be able to create a simulation that will reveal that we are …a simulation? A result of a simulation? …A computer?

Just a couple of comments on the quantum computing front:

It’s not known that Shor’s algorithm is really faster than any classical method. It’s probably true, but we don’t have any way to rule out a much better classical algorithm that no one has found yet.

Grover’s algorithm is indeed provably better than any classical algorithm – and the proof is exactly as you say that a classical algorithm needs to check all N possibilities. But it is not really interesting for breaking hash functions. Usually we can do much better than Grover’s algorithm by using classical ideas to reduce the search space, and we don’t know how to mix these classical ideas with Grover’s algorithm in general. Furthermore, even if we could, you can compensate for an adversary using Grover’s algorithm just by doubling the size of the hash. And the adversary cannot do anything more than Grover’s algorithm, because we know √N is the best one can hope for in general search with quantum computation.
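In numbers (my sketch of the standard reasoning): brute-forcing a preimage in a 2^n space costs roughly 2^n hash evaluations classically but only about (π/4)·2^(n/2) Grover iterations, so doubling n puts the quantum attacker right back where the classical one started.

```python
import math

# Rough query counts for preimage search on an n-bit hash. Grover needs
# about (pi/4) * sqrt(2^n) iterations, and that square root is provably
# optimal for unstructured quantum search.

def grover_queries(n_bits):
    return (math.pi / 4) * 2 ** (n_bits / 2)

classical_256 = 2 ** 256           # expected classical work, 256-bit hash
quantum_256 = grover_queries(256)  # ~2^127.7: a huge speedup...
quantum_512 = grover_queries(512)  # ...erased by doubling the hash size

assert quantum_256 < classical_256
assert quantum_512 > 2 ** 255      # back above the original classical cost
```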

Good point on Shor’s – it would certainly be newsworthy if anyone found a classical algorithm that could beat it, though. So newsworthy, in fact, that I’d be willing to bet that we’d never hear a peep about it, depending on who discovered it. . .

I don’t know, I’d expect we’d hear a *lot* about its usage, at the least. In the same sense we’d hear a lot about someone detonating a nuclear bomb above a major world capital. *cough*

To infer from the numeric coincidence that nature is using a quantum process that can be described by Grover’s algorithm strikes me as a giant leap. It is an “extraordinary claim”.

Wouldn’t a much simpler explanation be that a codon triplet is the minimum needed to code for 20 amino acids (4^3 = 64, as opposed to 4^2 = 16), while 4^4 = 256 would be a completely redundant number of base combinations for 20 amino acids, one that would also lengthen the genetic code by a full third and give more room for errors and mutations?

Well, one other possibility would be to have 6 nucleotides and 2-nucleotide codons.
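In code, the coding-capacity arithmetic from the last two comments looks like this (a trivial check of my own):

```python
# An alphabet of b bases with codons of length L encodes b^L symbols, so
# the shortest codon covering 20 amino acids is the smallest L with
# b^L >= 20.

def min_codon_length(bases, amino_acids=20):
    length = 1
    while bases ** length < amino_acids:
        length += 1
    return length

assert min_codon_length(4) == 3    # 4^2 = 16 < 20 <= 4^3 = 64
assert min_codon_length(6) == 2    # six bases would allow doublet codons
assert 4 ** 4 == 256               # quadruplet codons: wildly redundant
```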

Or you could have 18 amino acids, or 24. The 20 seems a bit arbitrary.

Is there some characteristic that favors the 20 amino acids that are used in nature? I know there are amino acids that can exist or be synthesized that are not used. Do the 20 bring something to the table that is not found in other aminos? Are the 20 easier for natural processes to assemble, with lower energy barriers? Atomic abundance? This is almost starting to sound like the discussion about the stereochemistry of amino acids.

I don’t know anything about math or quantum computing, but this whole topic as described in the Technology Review article is really interesting.

One line that struck me was describing how when Patel first published his hypothesis that 4 and 20 were the product of quantum processes that it was essentially dismissed in that quantum processes could not possibly occur in a biological matrix, but that no longer seems to hold and that “Photosynthesis, for example, is now thought to be an essentially quantum process.” Hold on. What? Really? Damn.

Just build stochastic magnetic circuits:

https://www.nature.com/articles/d41586-019-02742-x

https://www.nature.com/articles/s41586-019-1557-9#Abs1

https://medium.com/syncedreview/quantum-chemistry-breakthrough-deepmind-uses-neural-networks-to-tackle-schr%C3%B6dinger-equation-a5ad4e3bfea0

… A fruitful path to QM modeling of matter removes discrete knowledge of process. cf: DFT.

This is a never-ending problem where I work. We have certain problems of interest that require certain kinds of algorithms to solve. GPUs and quantum computers are good for running a rather different set of algorithms. But our superiors buy them anyway because they’re supposed to be cutting-edge, and then demand that we find ways to solve our problems on them.

It’s positively Dilbertesque. I think of the strip where the pointy-haired boss wants to know why it takes so long for his engineers to produce new electronics when Frito-Lay can cut up a potato and fry it into chips in five seconds.

“Frito-Lay can cut up a potato and fry it into chips in five seconds”

Frito-Lay does not reinvent the potato every 18 months. Add mandatory diversity hiring. One cannot realize epic proportions using little people.

https://www.nature.com/articles/s41586-019-1557-9

Oh please, keep the “diversity hire” whining out of a science blog. Next you’ll be asserting that “the best hire” is a one-dimensional measure rather than a Pareto optimum, and pretending there’s no demonstrated benefit to creativity and decision-making from having a diverse team. Please, let’s stick to actual science.

“One cannot realize epic proportions using little people.”

The Manhattan Project derived from staffing, not management. Social Justice laughter is ever pluralized. Pickett, Keuffel & Esser…were slaughtered by the Evil-IQ. Dialectic is inert toward productivity.

http://hpmemoryproject.org/wb_pages/d_cochran_01.htm

… HR celebrates diversity, rejecting the uncultured, disruptive, and socially maladroit.

youtu.be/mdYBcbmXQmU

… If you vend eldritch chemistry, you desperately need an army of autist chemists. Absent that last piece, you have nothing.

Tiny nitpick: “Well, it’s hard to get around the problem that it might take you up to N steps at the very least, because the answer you’re looking for might be in the last entry that you check.” This should probably be something like “take you up to N steps at the very worst,” or maybe “at most” – O(n) is worst-case.

Nitpick on the nitpick – if it takes on average n/2 tries, then calling it O(n) is precisely correct. “Big-oh” notation deliberately ignores constant factors.

Not a claim at all, merely healthy speculation. Can be fun flying a kite on a windy day. Maybe in due course 22nd century science will look back on derivation of Lowe’s Theory using Grover’s algorithm as nice example of good 21st century science at work…

…But first our descendants have to make it into the 22nd century. Anyone any thoughts on that paper in Science the other month suggesting that there’s enough land going spare globally (Siberia, Canada, etc.) that billions of hectares of tree planting could, in half a century or so, bring atmospheric carbon dioxide back to pre-industrial levels?

Bring on the Global Tree Corps..!

Tree planting was one of Freeman Dyson’s early proposals for sequestering carbon dioxide (into fast-growing trees such as fir and pine). Professor Dyson has since been considering how just changing the way we manage tillable land could go a long way toward correcting the increase of CO2 in the atmosphere:

The point of this calculation is the very favorable rate of exchange between carbon in the atmosphere and carbon in the soil. To stop the carbon in the atmosphere from increasing, we only need to grow the biomass in the soil by a hundredth of an inch per year. Good topsoil contains about ten percent biomass, [Schlesinger, 1977], so a hundredth of an inch of biomass growth means about a tenth of an inch of topsoil. Changes in farming practices such as no-till farming, avoiding the use of the plow, cause biomass to grow at least as fast as this. If we plant crops without plowing the soil, more of the biomass goes into roots which stay in the soil, and less returns to the atmosphere. If we use genetic engineering to put more biomass into roots, we can probably achieve much more rapid growth of topsoil. I conclude from this calculation that the problem of carbon dioxide in the atmosphere is a problem of land management, not a problem of meteorology. No computer model of atmosphere and ocean can hope to predict the way we shall manage our land.

Very interesting, thank you. Woodlander hadn’t thought of that. All too easy not to see the soil for the trees, to reach for the moon and not look down at your feet.

If the math(s) is right, 0.1 inches equates to 6.6 kg of topsoil per square metre. Eyeballing the composter, if a year’s kitchen vegetable waste gives 25 kg of topsoil, that’s 0.1 inches of soil for 4 sq metres (the size of a small vegetable patch).
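For anyone checking along at home, here’s the arithmetic (my sketch; the 2.6 g/cm³ density is an assumption that matches the 6.6 kg figure, and is closer to the particle density of soil minerals than the ~1.3 g/cm³ bulk density of loose topsoil, which would roughly halve the mass):

```python
# Back-of-envelope check of the 6.6 kg-per-square-metre figure.
depth_cm = 0.1 * 2.54                # 0.1 inches of topsoil, in cm
volume_cm3 = depth_cm * 100 * 100    # spread over one square metre
mass_kg = volume_cm3 * 2.6 / 1000    # assumed density 2.6 g/cm^3; grams -> kg

assert abs(mass_kg - 6.6) < 0.1
```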

Knight on Shining Calculator, please come to the rescue: quantify the biomass needed to cover half the planet’s landmass and provide compelling imagery that shows how it could actually be done. Wartime Spirit and all that, Dig for Victory and all that, all of course upgraded for the social media age to solve the CO2 problem once and for all.

Now if it’s any help there’s the mulching mower which redistributes the grass cuttings into biomass, and the hedge cuttings, autumn leaves and general garden waste which end up in the council’s mass composter, which should end up as distributed biomass somewhere, shouldn’t it? Maybe our 0.2 acres might just even be biomass-CO2 neutral, which is all okay then, isn’t it, providing you don’t forget to include the mulching mower and hedgecutter CO2 emissions in the calculation too?

And to mention the unmentionable, there’s the septic tank which gets emptied every year by a local farmer (CO2 again, pump and tractor). No questions asked about where contents end up, but am hoping our s***’s doing its little bit to save the planet from getting into deep s***, isn’t it? The Greenland ice cap’s not really melting anyway is it? And even if it is, think of it as a Business Opportunity, just like that tweet the other month implied.

Reading between Freeman Dyson’s lines, maybe market forces, technology, green living and green farming really will magic it all away just in time to meet the rolling 50 year deadline. Bring on the Tree Corps, the Soil Corps and Band of the Salvation Army (playing of course Nearer My God to Thee).

Wouldn’t bet the ranch on it though, although us 60somethings, 70somethings and sundry wrinklies all likely already gone by time that bet came in, so what’s to worry about? Had great time, left mess, cleared off. Obviously teenagers, 20somethings and anyone young at heart might see that bet differently…

…Quite right too – Good on You. Maybe consider being a bit more vocal. Maybe suggest mom and pop trade down the 5L armour plated 4WD for something 1.6L, 0.5 x CO2 and more manoeuvrable? Might actually even be more fun to drive? Even consider electric if convinced by it and price right? Maybe even 2 vehicles instead of 3? Maybe stick with iPhone for 4 years not 2? Maybe sell those Apple and VW shares, ‘cos something’s got to give hasn’t it?? Don’t vote for you know who in 2020???

Time to move on. Look up Sixties, look up LBJ and Not Standing for a Second Term. The times they were ‘a changin’ and the answer was blowin’ in the wind, though maybe teenagers, 20somethings and anyone young at heart were too busy being young to pick up on what else might have been goin’ on back then?

Who knows, might even take us full circle back to the joys of Quantum Mechanics where this blog post began? The answer my friends is growin’ at your feet…

Should have said, Babes, look up Bob Dylan too and maybe even give him a whirl for old time’s sake for the old timers out there.

Not mentioned is a quantum algorithm for solving the protein folding problem. Synthesize the protein. Let the usual quantum effects apply. Determine the protein structure. Just add the word “quantum”. I’m waiting for the Wired or Technology Review article.

A big part of why we have four nucleotides and 20 amino acids is historical. The nucleotides had to come in pairs for duplication, and they needed to be similar enough to form a chain, but different enough for something – proteins, RNA, amino acids, a mineral matrix – to chemically distinguish them.

If you look at how amino acids are encoded, it seems that early systems weren’t as specific as current ones. There are even arguments that amino acids could be assembled into proteins directly by an RNA template, just with limited reliability. As long as it was good enough, life would reproduce. The encoding itself suggests broad sensitivity later refined.

There is DNA sequence evidence that suggests there were revolutions in the reliability of gene replication and expression. One suggestion is that natural selection only had limited force for the first few billion years, but, once regulation was improved, life evolved more rapidly.

This stuff is miles over my head.

QM makes my head hurt. I swear it’s the next closest thing to voodoo.

Don’t be silly; it’s nowhere near as simple nor as logical as voodoo.

One thing that I was reminded of, reading this article, was Leonard Adleman’s work in the 1990s using DNA to perform parallel computing tasks. Looking at Derek’s interesting parallels between optimal arrangements for quantum-mechanical computing and the number and arrangement of nucleotide bases in DNA, I wonder if somehow an interface between quantum and biomolecular computing systems might prove useful. It’s not obvious to me exactly how this might have a practical use – the time scales are wildly disparate.

Having had time to think about that question, it’s possible that climatology and meteorology models might be the “killer app” people were asking for when Leonard Adleman was publishing his work in DNA computing. The computational speed would, of course, be dismal compared to *in silico* processing, but the sheer number of simultaneous computing steps possible might be what’s needed for meaningful models of, say, named tropical storms. And there’s where your interface between DNA computers and quantum computers might come in handy, as they both are capable of making vastly parallel calculations.

The closest parallel I can think of offhand is about 20 miles from where I’m typing this, the pair of Cray supercomputers in the basement of the Naval Meteorology and Oceanography Command office building, Stennis Space Center, MS.

Once a year, they open that facility (remarkable most of the time for its constant armed guards) to the public for walk-through tours. I was working across the street at the time, my wife and kids came by to pick me up from work, and we noticed the signs advertising the open house at NAVOCEANO – and as we were all computer geeks, we didn’t take much convincing to go inside. The terminals for those Crays were (this was in the 1990s) Silicon Graphics machines… the sort of things you usually saw being used to make movies at Pixar at the time.

It might be incredibly useful to unleash a big DNA computer on climatological models, using banks of sequencing hardware as I/O gear, and a quantum computer could control the DNA computer’s inputs and outputs, optimizing how queries were constructed, then reading out the results of the biomolecular computer’s parallel computations. Disparities between “clock speeds” in those systems would still be significant issues, but not deal breakers. You’d be more interested in the completeness of the models you were constructing.