

The Management Hat

On this anniversary, I wanted to point back to an older post here: Roger Boisjoly and the Management Hat. He tried, repeatedly, to keep the Challenger disaster from happening, but upper management decided that there were more important things to worry about: goals, timelines. Never forget.

35 comments on “The Management Hat”

  1. anon3 says:

    Yes, but he took data that OTHER people collected and used it to prove his point…making him a parasite.

    1. anon3 says:

      Actually, I guess he didn’t prove his point…when I think about it. Also he had very little data to work with.

    2. simpl says:

      I am aware of the wrong-headed parasite discussion, and take your comment as sarcastic. However, there is clear evidence since 1927 that creativity lies not in the new facts, but in the new connection of facts. A subset would be serendipity, where a use is connected to an observed fact.
      The only thing left open is whether no fact is new under the sun.

  2. path integrals says:

    Appendix F – Personal Observations on Reliability of Shuttle, R. P. Feynman

    “It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management… Official management claims to believe the probability of failure is a thousand times less. One reason for this may be an attempt to assure the government of NASA perfection and success in order to ensure the supply of funds. The other may be that they sincerely believed it to be true, demonstrating an almost incredible lack of communication between themselves and their working engineers… For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.”

    does this sound familiar in pharma?

  3. luysii says:

    I think this comment on your original post is worth repeating (of course I would; it’s mine). Nonetheless —

    Boisjoly wasn’t the only one. Billings, Montana produced Allan McDonald, also working at Thiokol at the time, who spoke against the launch back then (for the same reason). He was quickly invited back as the graduation speaker at Billings Senior High, his alma mater. His further education was at Montana State. He later wrote a book about his experience at Thiokol.
    Montana is often looked down upon by bicoastals as a bunch of rubes, but having lived and practiced medicine there for 15 years, I can tell you nothing could be further from the truth. You quickly learn to treat everyone the same, as the guy in bib overalls may (1) be able to buy and sell you, or (2) be smarter than you are.

  4. Mike Andrews says:

    I worked the Apollo One mission at the Manned Spacecraft Center. The NASA types all had an attitude of “nothing has gone wrong yet, so nothing will go wrong”. They ignored warnings about a pure O2 atmosphere with so many flammable components in the s/c. They ignored reports of electrical problems that made lights and electronics flicker, blink, and just go offline. Each report was discounted, and electrical problem reports were treated as totally disjoint from the warnings about fire in a pure O2 atmosphere. We all know what happened: a spark in a pure O2 atmosphere ignited something, and it all went up at once. Oxygen-fed fires are fast and hot, and that spacecraft was just a great big bomb calorimeter.

    I got to process the biomedical tapes from the Cape and run the traces out to a Brush strip chart recorder. It was the saddest thing I have ever seen in my 69 years of life.

    The same attitude held true in 1986, and again when we lost the Columbia.

    NASA does not learn from experience. NASA WILL NOT learn.

  5. JimM says:

    The Teacher in Space Project (TISP) was one of Reagan’s more attention-getting promises during the 1984 campaign, and its glorious fulfillment by the inclusion of teacher Christa McAuliffe in the Challenger crew was slated to be the emotional high point of his State of the Union speech, scheduled for the night of Jan. 28, 1986.

    I leave it to readers to form their own opinions of just how likely some nebulous safety concern was to delay that particular launch.

    1. luysii says:

      You left out global warming

  6. toxchick says:

    There was a heartbreaking interview with the other engineer who tried to stop the launch on NPR today. He’s 89 and still blames himself. He said that he thought God had picked a “loser” to put the message through, and it was his fault they didn’t listen. I listened to it and thought of TeGenero and the clinical trial in France, and how it would feel to be on the development team for those programs.

    1. Madame Hardy says:

      There was a happy followup to the NPR story last week. Hundreds of people heard the story and wrote to Bob Ebeling to say it wasn’t his fault. Many of them were engineers who had been taught the Challenger decision in school as an example of organizational process failures. Most important of all, Allan McDonald, Ebeling’s boss at the time and a leader of the effort to postpone the launch, called Ebeling right away: “If you hadn’t have called me,” McDonald told Ebeling, “they were in such a go mode, we’d have never even had a chance to try to stop it.” Robert Lund of Thiokol and George Hardy of NASA, two of the men who overrode Ebeling’s attempts, called to tell him he’d done the right thing.

      His daughter’s response on the NPR page: “I just want to thank NPR listeners on behalf of my dad, Robert Ebeling. He has appreciated all the emails, letters and notes from all of you. He has had a turnaround in his feelings of guilt about the deaths of the Challenger astronauts. We, as his family, love all of you and are grateful that you have contacted us. I have read every one of your messages to my dad. He is letting go of the guilt that he has held for 30 years. It is a miracle from God and from all the people who have written to us. I thank all NPR listeners for this amazing gift. My dad does not have much time left and your words are easing his mind.”

  7. Nick K says:

    The Israeli airline El Al has an extraordinary way of dealing with the problem of over-optimistic risk management. Every plane always carries a member of the security team who checked it for bombs, firearms, or terrorists among the passengers before the flight. El Al hasn’t suffered a terrorist attack since 1976 (the Entebbe raid). I’m certain the senior managers of NASA would have thought quite differently had they been made to fly on the Space Shuttle themselves.

    1. Kevin H says:

      “Every plane always carries a member of the security team which checked it for bombs, firearms or terrorists among the passengers before the flight.”

      Well, that’s veeeery loosely true, inasmuch as the baggage handlers and x-ray screeners and ground interviewers could all be said to be part of the same “security team” as the air marshals (one or more of whom are aboard each El Al flight.) The guy x-raying suitcases isn’t getting pulled from his desk and tossed on random flights.

      Further, while El Al can afford to give up perhaps 1 or 2 percent of its seats for non-revenue marshals (especially since not all flights are full anyway), everyone at NASA would be horrified at the thought of giving up 25 percent of the seats on the new Orion MPCV for management ballast.

      And how would you assign that seat? Draw lots months ahead, so that there’s time for training? Put all the senior managers through astronaut training and draw straws on launch day? (“Sorry honey; you’ll have to pick up the kids from school–I’m going to the ISS.”) Oh yeah, don’t forget to fire all the managers who would get a medical downcheck–they’re a safety risk, since they know they can’t be assigned to a flight.

      1. Old Timer says:

        Geesh, it was hyperbole.

      2. Chris says:

        Just take the lesson in the reverse direction… have those flying on the craft be responsible for deciding if it goes.

      3. Nick K says:

        Kevin H: My suggestion was entirely hypothetical. I wasn’t seriously suggesting putting NASA senior managers on rockets.

  8. Anon says:

    It’s easy to pick apart decisions in hindsight, once the uncertainty of the future becomes the certainty of the past and present, but all decisions are made under uncertainty, so we have to live with our mistakes, and hopefully learn from them.

    Incidentally, I recently saw a documentary which explained how ALL current aircraft safety procedures are based on learning from past disasters…

    Better to learn from our mistakes than fear making them in the first place, otherwise we’d never move forwards. All progress is based on our mistakes.

    1. Hap says:

      The problem is that the Challenger disaster doesn’t seem to have come from an acceptance of uncertainty by decision makers but rather from ignorance of it. The Feynman quote above is pretty stark – the managers did not listen to the data they had about the fallibility of Shuttle components (they thought the likely failure rate was 10⁻⁵, while the engineers assessed it at >10⁻²), and hence didn’t regard the lack of temperature-sensitivity data on O-ring failures as a problem, because they saw failure as very unlikely (despite the previous rate of part and system failures). They had information from their past mistakes and failures, chose to ignore it, and, well….

      Of course, the people who made these decisions did not pay a price for them. When not learning from your mistakes is costly to other people, but not to you, then not learning is what you will probably do. Being willfully ignorant of the risks you undertake is very different from accepting them, particularly for the people and things you manage.
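      The gap between the two estimates quoted above becomes especially stark once per-flight risk is compounded over a flight program. A quick sketch (the count of 100 flights is illustrative only, not an official manifest figure):

      ```python
      # Probability of at least one loss-of-vehicle accident over a series of
      # independent flights, for the two per-flight failure estimates Feynman
      # reported (~1 in 100 from engineers, ~1 in 100,000 from management).

      def prob_at_least_one_failure(p_per_flight: float, n_flights: int) -> float:
          """P(at least one failure in n independent flights with per-flight risk p)."""
          return 1.0 - (1.0 - p_per_flight) ** n_flights

      for label, p in [("engineers (~1 in 100)", 1e-2),
                       ("management (~1 in 100,000)", 1e-5)]:
          risk = prob_at_least_one_failure(p, 100)
          print(f"{label}: {risk:.1%} chance of losing a vehicle in 100 flights")
      ```

      Under the engineers’ number, a 100-flight program is more likely than not to lose a vehicle; under management’s, the chance is about one in a thousand.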

      1. Anon says:

        Well, choosing not to listen and act is also a decision, which in this case led to a terrible disaster. I hope and assume all involved have learned from this mistake, and have introduced protocols to ensure that any concerns raised by anyone are dealt with appropriately rather than swept under the carpet.

        As I said, we all make mistakes, and we all make progress by learning from them.

        1. Hap says:

          That’s usually how it works, at least personally. I worry that the incentive structure shown here and elsewhere makes it beneficial for people not to learn from their mistakes, though, and if you don’t learn from them and still have power, even if other people do learn from them, people are going to die for your mistakes, and eventually whatever you are trying to achieve will fail.

          In theory, science (and life?) is supposed to be about not fooling yourself. The “management hat” comment seems to be all about fooling yourself. If that’s how management is paid to think and act, bad things will happen. Perhaps “the engineering hat” shouldn’t just be for engineers.

          1. Anon says:

            Well, if the incentive structure is to blame for not learning, then that is the mistake we need to learn from, by fixing it so that we do learn. So basically, the same principle applies.

            Just that in practice we only spot these deeper systemic problems (a poor incentive structure) once the same mistake has been made more than once.

  9. David says:

    One comment below the referenced article referred to 6-sigma. I have a major problem with 6-sigma. It is based on two assumptions: 1) that there are a very large number of steps that all have to go right to produce a product (which is true for silicon chips), and 2) that you don’t know anything about the quality of each individual process (which is usually false). That is why I like to say “6-sigma is for the ignorant”. If you assume that processes which are less likely to succeed are fundamentally harder than those which are more likely to succeed, and that this is the reason they yield more poorly, then you can calculate a set of sane targets for each process. But management apparently would prefer to pretend that everything is equally easy, that engineers (and researchers) are fungible, and that they know more than the engineers. A very sad state of affairs. I blame the schools that produce MBAs.
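    The multiplication underlying this argument is simple to sketch: overall (“rolled throughput”) yield is the product of per-step yields, so a uniform per-step target only makes sense if you know nothing about the individual steps. The step names and yield numbers below are hypothetical, purely for illustration:

    ```python
    # Rolled throughput yield: the product of per-step yields.
    # Harder steps (lower intrinsic yield) deserve a larger share of the
    # defect budget than easy ones, rather than one uniform sigma target.
    from math import prod

    step_yields = {
        "deposition": 0.999,   # hypothetical easy step: a high target is fair
        "lithography": 0.98,   # hypothetical harder step: dominates the losses
        "etch": 0.995,
    }

    rolled_yield = prod(step_yields.values())
    print(f"rolled throughput yield: {rolled_yield:.3%}")
    ```

    Here the one hard step contributes most of the overall loss, which is exactly the information a blanket 6-sigma target throws away.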

  10. ab says:

    I read part of the transcript from the NPR interview with the other engineer (Ebeling) as well. Heartbreaking indeed that at 89 he still blames himself. I have never seen anything written from the perspective of the NASA managers or the Morton Thiokol management team that ultimately overruled Boisjoly’s and Ebeling’s recommendation and cleared the launch. That’s a big missing piece of this puzzle. Do they feel regret? Do they feel they made the wrong decision? I read somewhere that one of the managers, when asked about it after the disaster, said he would make the same decision again. But that’s just a snippet from one manager and I don’t have the reference.

    I wonder if the military’s definition of “acceptable risk” is different from ours. I’m not from a military background and I hate to speculate, but it’s possible that they see things differently.

    1. Tom Bryant says:

      I do not believe it’s a difference between “civilian” and “military” methods of assessing acceptable risk. The problem is what I like to call “the PHB Syndrome”. The PHB (shorthand, for Dilbert fans, for Pointy-Haired Boss) syndrome is what you get any time management puts on (or keeps on) their dunce caps, er, management hats, in a situation where there is a fairly obvious good and bad choice to be made.
      BTW, in a little over three months we will be marking the 30th anniversary of a minor industrial “incident” in the Soviet Union that nobody has heard of: Chernobyl. That accident happened for many of the same reasons the Challenger disaster took place: management desirous of an outcome, with a devil-may-care attitude; data or indications of a serious, potentially deadly flaw or fault in key systems; and key operations personnel who could not convey the seriousness of the problem to management in time.

  11. path integrals says:

    In “What Do You Care What Other People Think?”, Feynman details how, as a member of the Rogers Commission investigating the disaster, he had to go to considerable lengths to get straight answers to straight questions about potential causes of the Challenger disaster. It’s an informative, as well as entertaining, read. With the formation of the Rogers Commission, layers of bureaucrats and managers (government and corporate) spontaneously organized to control the information and fallout from the investigation – a classic study of C-Y-A. Feynman would have none of it, frequently deviating from the orchestrated script of the investigation. He slipped away to have candid one-on-one discussions with the engineers, knowing that 1) they’d know the product and process best, and 2) the information would be substantive, with minimal BS.

    The coup de gras was a very simple science experiment during the Commission hearing – ice water, plus O-ring, plus C-clamp.

    It’s too bad that upper management rarely has the time or interest to have candid conversations with front-line scientists and engineers. They become susceptible to their own delusions as well as the self-serving spin fed to them by middle management. Top-down organizations are self-selecting for prolific CYA activities, rather than quality products.

    1. Oliver H says:

      I know it’s nitpicking, but the term is “coup de grâce”. “gras” means “fat”, which has precious little to do with the issue.

      1. tangent says:

        I know it’s nitpicking, but the term is “coup de grâce”.

        Aw, I loved the turn of phrase! A “coup de gras” is an atherothrombotic stroke, surely.

  12. JG4 says:

    This reminds me of a story about how a hospital cut their fatal error rate in half by using the same type of checklists that pilots use. The link to that excellent story is appended below the end of this article/blog post.

    IT Security and the Normalization of Deviance

    Professional pilot Ron Rapp has written a fascinating article on a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable — the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

    Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

    …[denotes all of the omitted material that makes this excerpt a fair use exemption]

    This essay previously appeared on the Resilient Systems blog.

    A commenter (“moo”, January 12, 2016) on that post added: MIT prof Nancy Leveson wrote a pretty good book called “Engineering a Safer World”, which is available for free in PDF form from MIT Press. Chapter 5 of the book is entirely devoted to a detailed examination of a friendly-fire incident discussed in the post. It is very much worth reading.

    Checklists are an attempt to constrain process deviance.

    The Checklist
    If something so simple can transform intensive care, what else can it do?
    by Atul Gawande December 10, 2007

    1. Johan™ Strandberg says:

      Diane Vaughan’s “Deviance and acceptance of deviance” sounds a lot like the “Overton Window” in politics [ ]. I suspect some of the underlying mechanisms are similar.

  13. Fred the Fourth says:

    The Shuttle should have operated under FAR 91.3(a).

  14. Curt F. says:

    I hate all the references to a nebulous “NASA management” as the bad guys in a thread where the foiled heroes are referred to by name (Boisjoly and Ebeling). George Hardy and Lawrence Mulloy, among others, killed the seven astronauts onboard the Challenger. Sure, they didn’t mean to do it — undoubtedly they had only the best of intentions. But the fact remains, George Hardy and Lawrence Mulloy are killers and history will be better off if it remembers their names as well as their misdeeds.

    1. Hap says:

      I think people don’t like putting blame for killing on individuals unless there is a lot of intent, and I don’t know whether this willful ignorance fits that (though it probably does; criminal homicide, at least, has more gradations of meaning).

      I think the main point is that willful ignorance of negative data (and pressure on others to do so) seems to be rewarded repeatedly, and that the behavior is thus not specific to Hardy and Mulloy (though holding them and others who make similar decisions accountable would help). Some of that is self-congratulatory (because that is diametrically opposed to how science is supposed to work) but the accusation seems directly applicable to pharma’s problems (*cough* Exubera *cough*). The point being made doesn’t depend only on the people involved, but on the structure they work in. Holding them accountable as individuals would diminish the effect of structure on behavior, but the behavior would endure because the structure and its incentives do.

  15. Wheels17 says:

    Goal setting is a treacherous thing. As my former employer was crashing, we wrote off the newest and most capable machines and kept running the old and less capable machines because the writeoff went to a corporate account, but the depreciation charges were in the division’s financials and eliminating the depreciation charge improved the financials…

    I was recently reading about the failure of Target Canada. Here’s one contributor to the failure( ):

    “Within the chain’s replenishment system was a feature that notified the distribution centres to ship more product when a store runs out. Some of the business analysts responsible for this function, however, were turning it off—purposely. Business analysts (who were young and fresh out of school, remember) were judged based on the percentage of their products that were in stock at any given time, and a low percentage would result in a phone call from a vice-president demanding an explanation. But by flipping the auto-replenishment switch off, the system wouldn’t report an item as out of stock, so the analyst’s numbers would look good on paper.”

  16. Crimso says:

    Just note how often you see the Challenger explosion chalked up to a “design flaw.” It’s not a “design flaw” if failure results from operating outside specs.

  17. JG4 says:

    from the brilliant afternoon compendium at NakedCapitalism, which has good coverage of medical and pharma news, along with biting social commentary.

    “Engineer who refused to OK Challenger launch report donates papers to Chapman University” [Los Angeles Times]. “‘I’m hopeful that some of the material will be accessed by future generations and may prevent them from making the same mistakes,’ [Allan McDonald] said during a visit to the college.”

    see also:

    Engineering a Safer World: Systems Thinking Applied to Safety
    Nancy Leveson
    MIT Press, 2011 – 534 pages

    Engineering has experienced a technological revolution, but the basic engineering techniques applied in safety and reliability engineering, created in a simpler, analog world, have changed very little over the years. In this groundbreaking book, Nancy Leveson proposes a new approach to safety — more suited to today’s complex, sociotechnical, software-intensive world — based on modern systems thinking and systems theory. Revisiting and updating ideas pioneered by 1950s aerospace engineers in their System Safety concept, and testing her new model extensively on real-world examples, Leveson has created a new approach to safety that is more effective, less expensive, and easier to use than current techniques.
