

Another Conservation Law

As long as there’s been organized scientific research – that is, more than one person working on a problem – there have been timeline disconnects. Usually it’s that something takes longer than expected and throws everything else off. That’s the basic disconnect, and there are ways to deal with it, but there’s a larger one that I don’t think anyone’s ever found a way to deal with.
That’s the problem that larger discoveries have of coming infrequently and on no one’s schedule at all. Scientists have been complaining about this for as long as anyone’s tried to manage scientists. There’s a conservation law at work here, I think: the harder the task you ask a team to accomplish, the less able you are to say when they’ll accomplish it. Straightforward tasks can be planned out to the day. Harder ones can be roughly estimated by quarter. Really big ones . . . well, there’s just no damn way of knowing.
There are several problems that follow this one around, probably wearing the same color shirts and the same brand of shoes. One of those is the way that progress on tough problems comes in irregular fits and starts. If you’re budgeting for steady, regular accomplishments that can be listed every quarter, you’re going to have a bad time. Long periods will go by without much concrete evidence that anything useful is happening. That’s because the team has been trying things out that didn’t work. Even worse, part of the time some of them may have been trying those things out mostly in their heads, trying to get a better handle on the problem. A run of negative results is (to a first approximation) hard to distinguish from people just messing around, and a run of unproductive thinking is hard to distinguish from someone just staring out a window. It doesn’t look so great come performance review time.
In the extreme cases, you get people like Claude Shannon, who did tremendous, revolutionary work near the beginning of his career and is hardly remembered for anything he did in the years afterwards. (This story is told in many places, but William Poundstone’s Fortune’s Formula is a good place to find it.) Shannon is an indelible figure in the history of science, but he would have had some pretty rough quarterly progress reports to turn in.
What to do about this? The only advice I have is to keep that relationship above in mind: difficulty versus predictability. If you want someone (or some group) to aim high, be prepared for that uncertainty principle to kick in. It’s not possible just to leave everyone alone forever, but checking in enough to see that real thought and effort are being expended is probably all that a manager can do. Not every organization is going to be open to that.

8 comments on “Another Conservation Law”

  1. Anonymous says:

    In today’s climate the great Claude Shannon would probably have received a poor performance appraisal followed by a pink slip.

  2. luysii says:

    It’s important to realize that Shannon died of Alzheimer’s disease, and this may have reduced his later productivity. We really aren’t sure when it starts, and the pathologic process begins early (actually in all of us — it’s a matter of degree whether we’re impacted or not).
    He was a legend at MIT — riding around on a unicycle, inventing various crazy things (motorized pogo sticks) etc. etc. He died at 85, and retired from the MIT faculty at 62.

  3. Kyle S says:

    The general principle here is a really interesting one in how we look at systems and progress. The Pareto distribution (http://en.wikipedia.org/wiki/Pareto_distribution), which describes how many phenomena sort out (like the frequency of big and really big earthquakes), also applies, I think, to really fundamental scientific discoveries.
    Unfortunately, the distribution isn’t specifically predictive – that is, you can say something about the general frequency of a really big earthquake, but not about exactly when or where it’ll occur.
    The tough part is explaining concepts like this and the power law to management, who work off a schedule derived from a view of scientific progress as more continual and controllable than it really is.
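A minimal sketch of that point, assuming numpy is available; the shape parameter and sample size are invented for illustration, not anything from the comment. It draws a heavy-tailed Pareto sample and shows that a tiny fraction of the draws carries a large share of the total, even though nothing about the distribution says when the next huge one arrives.

# Illustrative only: heavy-tailed (Pareto) "event sizes" to show how rare,
# huge events dominate the total without being individually predictable.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                                   # shape parameter; smaller alpha = heavier tail
draws = rng.pareto(alpha, size=10_000) + 1.0  # classical Pareto with minimum value 1

top_1_percent = np.sort(draws)[::-1][: len(draws) // 100]
print(f"median event size:          {np.median(draws):.2f}")
print(f"largest event size:         {draws.max():.2f}")
print(f"share of total from top 1%: {top_1_percent.sum() / draws.sum():.1%}")
# Typical result: the median stays small while the top 1% of draws accounts
# for a large fraction of the total.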

  4. Barry says:

    What’s rarely remembered is that “Shannon entropy” was described by Leo Szilard in 1927, when he explained the Maxwell’s demon paradox.

  5. DN says:

    Even more frustrating is this principle applied to competitors. For example, various people were making steady progress on algorithms to calculate discrete Fourier transforms. Then Cooley and Tukey came along with their fast Fourier transform and made everything else obsolete. Poof, life’s work gone for the other folks.
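For illustration of the gap DN describes, a sketch assuming numpy is available (the helper name and signal length are mine): a naive O(N^2) discrete Fourier transform checked against numpy's FFT, which computes the identical result in O(N log N).

# Illustrative only: direct DFT vs. the FFT (same answer, very different cost).
import numpy as np

def naive_dft(x):
    """Directly evaluate X[k] = sum_n x[n] * exp(-2j*pi*k*n/N); O(N^2) work."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    return np.exp(-2j * np.pi * k * n / N) @ x

signal = np.random.default_rng(1).standard_normal(256)
assert np.allclose(naive_dft(signal), np.fft.fft(signal))
print("naive O(N^2) DFT matches the FFT on a 256-point signal")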

  6. steve says:

    You also get people like Pauling, who made one brilliant discovery after another in chemistry, then tried his hand at biology and came up with nonsense like megadose vitamin C curing colds and then cancer. Just because you’re brilliant in one field doesn’t mean you can be equally brilliant in another. Management can destroy research by suddenly switching priorities and forcing scientists who are successful in one area into another area that isn’t their forte.

  7. newnickname says:

    One story goes that, very early on, Claude Shannon sought input from John von Neumann (game theory, etc.) on some of his information theory ideas. von Neumann was on his way to the airport and invited Shannon to come along for the ride. In that taxi ride, vonN helped to map out much of information theory as it was then developed by Shannon. I don’t have a source; I just heard that story someplace, and I don’t want to detract from Shannon. von Neumann was said to be a brilliant and FAST thinker.
    In chemistry, Claude Shannon’s information content equation H = n log S has found its way into “molecular complexity” via Bertz, JACS, 1981, 103(12), 3599-3601. “The first general index of molecular complexity.” DOI: 10.1021/ja00402a071
    Complexity C = Σ (2n log n), where n (actually “eta”) is the count for each “propane” subgraph of the molecule in question. The Bertz Complexity Index is unambiguous, robust, and easily calculated, and it is the standard used by PubChem / PubMed for every molecule you look up.
    Bertz was a senior researcher at the old Bell Labs (where von Neumann and Shannon surely had an impact), but he actually conceived of the index while still a Woodward student at Harvard. He would hear “most complex molecule,” “highly complex for its size,” and so on, and decided that there must be something better than chemist intuition (or chemist braggadocio) to measure complexity. So he came up with his then-new index.
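A small sketch of the arithmetic quoted in this comment, standard library only. The base-2 logarithms, function names, and subgraph counts below are my own illustrative choices; this is not an implementation of the actual Bertz index, which requires counting the "propane" (three-atom) subgraphs of a real molecular graph.

# Illustrative only: the two formulas mentioned above, with made-up inputs.
import math

def information_content(n_symbols, alphabet_size):
    """H = n log2 S: bits for a message of n symbols drawn from an alphabet of size S."""
    return n_symbols * math.log2(alphabet_size)

def complexity_sum(subgraph_counts):
    """C = sum of 2*n*log2(n) over subgraph classes, per the comment's formula."""
    return sum(2 * n * math.log2(n) for n in subgraph_counts if n > 0)

print(information_content(10, 26))   # a 10-letter word over a 26-letter alphabet
print(complexity_sum([3, 4, 2]))     # hypothetical counts of "propane" subgraph classes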

  8. chronos says:

    #4 and #7: among EEs and CSs, Shannon’s definition of information entropy, while incredibly useful in designing compression algorithms, is regarded as “oh, that’s neat, I see via Boltzmann why it came out the same as thermodynamic entropy,” whereas Shannon’s true contributions were the Nyquist–Shannon sampling theorem, the Shannon–Hartley theorem, and the noisy-channel coding theorem.
    The first is critical to understanding A/D conversion, including all video and audio encoding. (Basically, every image, song, audio clip, video, etc. that has ever been displayed by a computer.)
    The second proves that the data transmission rate for a zero-error channel is a three-way tradeoff between how many Hz of spectrum you allocate to it, how many watts of power you pour into it, and how quiet you can get the noise floor. By itself, not very practical.
    The third proves that you can invent an error-correcting code, which lets you treat a channel with errors as if it were a zero-error channel of lower capacity, and it proves the best achievable tradeoff between capacity cost and error rate. When combined with the previous theorem, this is pretty critical to making the Internet and WiFi work, and it also gets broad application in USB, HDMI, and other cables.
    Basically, Bell Labs was a great place to be in 1948.
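A minimal sketch of that second theorem's tradeoff, standard library only; the 20 MHz bandwidth and 25 dB SNR are just example inputs, not figures from the comment.

# Illustrative only: Shannon-Hartley capacity C = B * log2(1 + S/N).
import math

def shannon_hartley_capacity(bandwidth_hz, snr_db):
    """Upper bound on error-free bit rate for a noisy channel of the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)  # convert dB to a linear power ratio
    return bandwidth_hz * math.log2(1 + snr_linear)

print(f"{shannon_hartley_capacity(20e6, 25) / 1e6:.1f} Mbit/s upper bound")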
