
Drug Development

Andy Grove: Rich, Famous, Smart and Wrong

So I see that Andy Grove, ex-Intel, is telling everyone that the drug industry could use some of that Moore’s Law magic. I’ve noticed that people who spend a lot of time in the computer business often have an…interesting perspective on what constitutes progress in other fields, and we might as well appoint Grove the spokesman for their worldview:
Q: In what way does the semiconductor industry offer lessons to pharma?
A: I picked the semiconductor industry because it’s the one I know; I spent 40 years in it, during which it became the foundation for all of electronics. It has done a bunch of unbelievable things, powering computers of increasing power and speed. But in the treatment of Parkinson’s, we have gone from levodopa to levodopa. ALS [Lou Gehrig’s disease] has no good treatment; Alzheimer’s has none.

To me, the first sentence of that answer is the key one. As for the rest of it, hey, it’s all true. Perhaps one explanation for the difference between the two fields is that they’re driven by fundamentally different processes? Nah, that can’t be right:
Q: Why is the speed of progress so different in semiconductor research and drug development?
A: The fundamental tenet that drives us all in the semiconductor industry is a deeply felt conviction that what matters is time to market, or time to money. But you never hear an executive from a pharmaceutical company say, “Before the end of the year I’m going to have xyz drug,” the way Steve Jobs said the iPhone would be out on schedule. The heart of every high-tech executive has been, get the product into customers’ hands and ramp up production. That drive is just not present in pharma; the drive to get sufficient understanding and go for it is missing.

Well. Where to begin? Let’s start with a minor fact, and work our way up. I’ve been in this industry for eighteen years, and I cannot count the number of year-end goals I’ve had to deal with. Number of new targets identified, number of new projects started, number of compounds recommended for development, number of compounds progressed to Phase II, number taken to the FDA. It never ends. If Andy Grove hasn’t heard a pharma executive talk about all the wonderful things that are going to be done by a given timeline, he needs to listen harder.
But here’s the rough part: although drug company people talk like this, they’re full of manure when they do. These year-end goals, in my experience, do very little good and in some cases do a fair amount of harm. I’ll bet some of my readers have sat in a few meetings – I sure have – and looked up at the screen thinking “Why on earth are we recommending this drug to go on?”, only to have the answer be “Because it’s early November”. More idiotic things may get done in the name of meeting year-end numerical goals than for any other reason in this industry, so thanks, but I’ll pass on the recommendation to do them some more, good and hard this time.
Mr. Grove, here’s the short form: medical research is different from semiconductor research. It’s harder. Ever seen one of those huge blow-ups of a chip’s architecture? It’s awe-inspiring, the amount of detail that’s crammed into such a small space. And guess what – it’s nothing, it’s the instructions on the back of a shampoo bottle compared to the complexity of a living system.
That’s partly because we didn’t build them. Making the things from the ground up is a real advantage when it comes to understanding them, but we started studying life after it had a few billion years head start. What’s more, Intel chips are (presumably) actively designed to be comprehensible and efficient, whereas living systems – sorry, Intelligent Design people – have been glued together by relentless random tinkering. Mr. Grove, you can print out the technical specs for your chips. We don’t have them for cells.
And believe me, there are a lot more different types of cells than there are chips. Think of the untold number of different bacteria, all mutating and evolving while you look at them. Move on to all the so-called simple organisms, your roundworms and fruit flies, which have occupied generations of scientists and still not given up their biggest and most important mysteries. Keep on until you hit the lower mammals, the rats and mice that we run our efficacy and tox models in. Notice how many different kinds there are, and reflect on how much we really know about how they differ from each other and from us. Now you’re ready for human patients, in all their huge, insane variety. Genetically we’re a mighty hodgepodge, and when you add environment to that it’s a wonder that any drug works at all.
Andy Grove has had prostate cancer, and now suffers from Parkinson’s, so it’s no wonder that he’s taken aback at how poorly we understand each of those diseases – not to mention all the rest of them. But his experience in the technology world has warped his worldview. We are not suffering from a lack of urgency over here – talk to anyone who’s working for a small company shoveling its cash into the furnace quarter by quarter, or for a large one watching its most lucrative patents inexorably melt away. And we don’t suffer from a lack of hard-charging modern management techniques, that’s for sure.
What we suffer from is working on some of the hardest scientific problems in the history of the species. Mr. Grove, the rest of your recommendations don’t betray much familiarity with the industry, either, so there may be only one way to make you really understand this. If you really, really believe in your ideas, please: start your own company. You’ve got the seed money; you can raise plenty more just by waving your hand. Start your own small pharma, your own biotech. Hire a bunch of bright no-nonsense researchers and show us all how it’s done. Tell them that you’re going to have a drug for Parkinson’s by the end of the year, if that’s what you think is lacking. Prove me and the rest of the industry wrong.

86 comments on “Andy Grove: Rich, Famous, Smart and Wrong”

  1. Bryan says:

    A few minutes reading Gary Pisano’s Science Business might enlighten Mr. Grove. Although I don’t know how well received Pisano’s later conjectures in the book are, he sums up the comparison between semiconductors and drug development quite succinctly. Changing the socket on a motherboard is a lot easier than altering a gene to modify a kinase’s active site to fit your drug.

  2. PharmaProphet says:

    Actually, I think one of the major problems facing the industry is that upper management is thinking and behaving *exactly* like Mr. Grove. Too much concern for the almighty $, and far too little understanding of how scientific progress really occurs.

  3. matt says:

    Wow, great post. Hit the nail on the head. And by the way, what is up with accomplished old guys (e.g. Grove and Watson) saying dumb stuff lately?

  4. John Novak says:

    Grove also doesn’t have the FDA and medical ethics boards to contend with. (Not that he should, or the pharma industry shouldn’t! But the semiconductor market, while ruthless in its own way, will not send you to jail for a poorly thought-out experiment.)

  5. matt says:

    What’s up with the Neuroscience meeting, anyway? That’s where Grove had the platform to say all this stuff. I hope people laughed him off the stage, or at least gave him some hard zingers after his talk. And two years ago they invited the Dalai Lama. Why do they even waste their money? These events sure do generate a lot of publicity, but it’s already one of the largest meetings in the world. It’s not as if they need to advertise.

  6. GA says:

    Derek, I’m glad you took this on – when I first read this piece (and it seems to be getting some headlines), all I could do was shake my head in amazement. I’m sure that Andy got the “process” right at Intel, so that chip after chip gave the same performance. However what he fails to understand is that the reason why drug discovery and development fails more often than not is not because the “process” is sub-optimal or the “drive” is missing. It’s because of the inherent complexity of the systems we deal with.

  7. TW Andrews says:

    Wow. It’s not often that you hear that the problem with drug development is that the pharmaceutical industry isn’t interested enough in money.
    In any case, I think the root of Grove’s misunderstanding is that he confuses engineering challenges like the ones that the semi-conductor industry typically faces with the scientific challenges inherent in drug development.
    When Intel started putting transistors on chips, it was well understood how electrons moved through conductive materials, and how boolean gates operated. That wasn’t something that they had to develop from scratch, let alone for each chip. I guess I don’t need to go into detail here on how that’s different from drug development where eternal principles are few and far between.

  8. RY says:

    As a computer scientist who retooled into a geneticist/biologist a few years back, I understand exactly where Grove is coming from… and why he is so wrong. The problem with biomedical science is that all the mess and complexity gets suppressed in the popular press (e.g. the NY Times, the Economist, the WSJ), and luminaries like Grove get a totally warped sense of what we know and don’t know. Even the neat little diagrams of pathways we sometimes make give the illusion of knowledge and simplicity, which is utterly misleading. I think we, as a community, need to do a better job of educating other professionals and the public… (though I fear it may not be possible)

  9. Bryan says:

    Just a quick question, but what do you guys think about his comments and criticism of academia and peer review? (bottom of page 2)

  10. Nick K says:

    Message to Andy Grove:
    I’ll listen to your ignorant, ill-informed comments about drug discovery if you listen to my equally ignorant, ill-informed comments about chip design.

  12. paiute says:

    You forgot the most important difference:
    Crashed computers = no big deal (patch it)
    Crashed humans = multimillion dollar judgement

  13. excimer says:

    Excellent post. I, too, shook my head in amazement at this guy’s comments. There really is little to compare between the methods in the semiconductor industry and the pharma industry.

  14. TFox says:

    In 40 years, Parkinson’s has gone from levodopa to levodopa. The chip industry, on the other hand, has gone from transistors in silicon to transistors in silicon. *All* advancement in electronics is what pharma types call scale-up, doing the same thing you did last year a little cheaper. Scale-up, needless to say, is not the hard part of new drug development…

  15. DLIB says:

    I’m a pharmacologist who works in the semiconductor industry (across the street from Intel in Santa Clara). The company I work for (Brion Technologies/ASML) allows chips to be designed smaller and smaller – 16nm lines are on our roadmap. It’s true, there’s no comparison in the complexity of the problems associated with the respective industries. We do computer modeling of the optical systems used to expose resist. Our models are good enough to resolve better than 0.5nm differences in edge placement – just fine when printing an IC, but not good enough for docking/scoring. The search algorithms bear some resemblance, but ours are physically based: we can fairly represent the complete system. You guys, with much bigger and faster computation, can’t represent the system entropy change very accurately at all (you need a real calorimeter for that). There is cross-fertilization that’s possible, but it’s probably more that the pharma industry could teach the semiconductor industry.

  16. Kay says:

    What if the electronics industry had Rules that did not work? What if they generated data but did not really believe the results? What if they chose to continue to use the Rules and results? What if the workers chose not to admit these faults to management?

  17. haywarmi says:

    Hey Bryan, I’m glad someone else took note of that comment. I’d agree that the peer review system pressures conformity, but who’s going to weed out the “wild ducks” from the lame ducks? Especially tough when you’re handed twenty 50-page grants to review (and get paid nothing except the privilege of doing it).
    My impression is that this already exists, to a limited degree, with the HHMI and merit grant system. That is, those who have already proven themselves to be forward-thinking and creative are given some license to explore. More of this, though, takes away from the deserving young investigators who may be more creative and technologically innovative.
    Those who are so quick to point out that the peer-review process stifles creativity aren’t always aware that funding levels are nearing the single digits, so how do you make that decision? It’s not like we have a lot of room for error, which is exactly what you need to find (or encourage) those wild ducks. You use a shotgun to hunt ducks for a reason, and right now the NIH is using a BB gun. (I’ll stop with the analogies now, but I didn’t start it – “wild duck” came from Grove.)

  18. Wavefunction says:

    Actually a drug is just like a chip. You outsource its production to a third world country, you get all kinds of crap put into it, then it works for some time, develops a defect, kills its consumer in one way or the other and finally becomes obsolete.

  19. Chemgeek says:

    A more blatant and ridiculous example of comparing apples and oranges I have never seen. (Although apples and oranges are more similar than pharma and chip design.)

  20. SRC says:

    I’m a bit shocked that Grove does not grasp the difference between science and technology.
    Designing a chip is a matter of engineering, the underlying scientific principles having long since been worked out. It’s more akin to the space program than to pharmaceutical research. Progress in scientific research happily ignores the timelines that engineering development observes religiously, because in engineering caprice plays no role.
    The better parallel would be to liken biomedical research to natives in Borneo attempting to build a computer.

  21. Molecular Geek says:

    SfN has speakers like Grove or the Dalai Lama in addition to the more traditional plenary speakers because part of the mission of a major meeting like this is to include discussion of the larger context of the science the attendees are doing. They also have public lectures during the meeting to engage the community. As someone here at the meeting, I can say that his remarks were listed as being about “Ending the R01 Culture in research”. I won’t defend his position. Others have correctly taken him to task for his misunderstanding of the relationship between chip design and drug design. I’m sure that in his mind, if every academic were to refocus their energies away from trying to get NIH or NSF support and find a way to become a biotech/pharmaceutical entrepreneur, he wouldn’t have to fear prostate cancer or the onset of Parkinson’s. I won’t go any further with this tired old canard. We see it often enough in critiques of the industry.
    I would also point out that Grove was here on a panel discussing funding woes in biomedical science. As a followup to that, Newt Gingrich showed up for a plenary yesterday arguing that the refusal of the current administration to keep funding the NIH budget on its previous trajectory is shortsighted, dangerous, and wrong. (Who would have ever thought he would come out for more government spending?).
    Just for context, Neuroscience 2007 is huge. There are over 32,000 attendees here, and the poster hall runs the entire length of the convention center exhibit hall (at least 500 meters, end to end), with topics ranging from molecular mechanisms of synaptogenesis and ligand design through clinical behavior studies and discussions on the physiological nature of consciousness. (The latter is the topic that brought the Dalai Lama to speak a couple of years ago. I wasn’t there for it, but my better half said that it was a very powerful talk, and it was SRO in the 2 overflow rooms where they simulcast the presentation.) They don’t have sections like the ACS meetings do, so trying to decide which talks and which posters to attend is like drinking from a hydrant. They attract good speakers who can draw interest across a very broad spectrum of attendees. If anyone else gets a chance to hear Sebastian Seung from MIT speak, do it. He spoke on Monday night, and he gave one of the best lectures I have heard. It was at a Scientific American level to make sure that it was accessible to the entire audience, but his lab had posters on the details yesterday afternoon as well.

  22. CMC guy says:

    This is a stimulating subject: attempting to apply the semiconductor model to pharmaceuticals does seem to be the proverbial round peg. As noted in multiple comments, the complexities are vastly different. Even when we gain knowledge in areas such as mol bio and genetics, the translation into acceptable drugs is a difficult pathway that isn’t so straight (and paiute nails the risks). I don’t know how many research programs (and how much expense) have gone into treatments for Parkinson’s and Alzheimer’s since the 1950s/60s (the timeframe in the Grove article?), but I would suspect numerous approaches were explored, and perhaps a few even made it to clinical trials.
    Although at times it seems Pharma has bought into the fundamental “time to money” principle, at the core I think most people are doing the work so that it will benefit sick people, and they have to delicately balance efficacy with ill effects. Unfortunately, what works in animals often proves unsuitable for people, so in crossing the gulf between demonstrating a cure for a cancer in a mouse and advancing it to humans, the bridge collapses. Grove mentions a lack of emphasis on biomarkers, but how many times have we seen compounds give excellent responses on such measures and still fail to do the job against the disease (see PSA for prostate cancer, for example)?
    Grove does have interesting comments about “conformity of thoughts and values” (targeting academia mainly), which I do see as a problem in Pharma, with too few companies willing to move off safe territory. More innovations/wild ducks are needed to solve illnesses, but that will not come unless the funding opens up to enable these novel explorations; the short-term ROI view that now dominates drives investors only toward immediate high-return results.

  23. Ian Ameline says:

    All the comments here are quite good, and I agree with them, but another component of why chips are “easier” than cells is that the designers of each generation of processors are using the previous generation as tools to make the next. There is a positive feedback loop in there that just doesn’t exist in Derek’s world.
    This is not necessarily true of the lithography equipment used to manufacture the chips, or the materials science that goes into making the precise compounds that form the transistors — but on the whole, it is just so many orders of magnitude more predictable and understandable than the processes that take place in a cell that I’m quite sure I don’t grasp even a part of why Derek’s field is so much harder than mine…
    Keep at it Derek and colleagues — we’re all getting older, and sooner or later we’re all going to depend on the fruits of your labor for our continued survival.

  24. Ai yi yi says:

    There seems to be a pretty unanimous opinion here, and I voice my agreement as well. The refinements in chip design are more akin to developing a new formulation of an existing drug – perhaps a syrup for children, or a controlled-release version, etc. Those are the types of refinements that the pharma industry can realistically achieve within a set time period, and accurately forecast the effort and costs involved, and they are more on a par with the incremental advances in chip design. So did he give a timeline for when the electronics industry will finish building that Star Trek transporter? At least that would make those trips to the doctor more convenient…

  25. MTK says:

    OK, there’s a consensus here, so let me throw a couple of things out there. One intentionally provocative and tongue-in-cheek, and the other a bona fide question.
    a) If one looks at total R&D spending vs. R&D spending as a % of sales, the electronics industry leads the former, while the pharma industry leads the latter. So if Andy Grove wants the pharma industry to become as incentivized, and as fast, as the semi-conductor industry, one conclusion is that mandatory government price controls, floors not ceilings, be instituted. That would do it, right? Companies would make damn sure that stuff got done if there was guaranteed greater than market value return.
    b) If Pharma can’t learn from the semi-conductor industry, what industries can it learn from? I find it highly self-patronizing to think that we’re so special and so difficult that we can’t apply some principles from other successful industries, countries, or segments. I’m just not sure what they may be.

  26. qetzal says:

    I can easily forgive Grove for not understanding why pharma is fundamentally different from the semiconductor business. I can even forgive him for not understanding that pharma is fundamentally different.
    But if he honestly thinks pharma’s problem is not enough drive to bring product to market, he’s being an idiot. You don’t have to understand pharma to know better than that. You just have to understand the tiniest bit about business.

  27. Jose says:

    MTK – I think that realizing pharma *is* fundamentally different from any other industry is not self-aggrandizing or indulgent; it is the *reality*. Pharmaceuticals are not normal consumer products!

  28. SRC says:

    Jose, actually, if I had to choose a similar industry based solely upon business model, it would probably be wildcatting for oil (albeit without the FDA, or personal injury lawyers).

  29. There are some who say that pharmaceutical research needs to learn from semiconductor MANUFACTURING. Lots of talk about lean stuff with lots of beancounting and statistics, communicated with religious zeal. Nobody seems to notice that the pharmaceutical industry might be a little more regulated than the semiconductor industry. Let’s also remember when the secretary of defense was recruited from General Motors and look where that ended up.

  30. OK so it was Ford and not GM; hope everybody spotted the mistake.

  31. Ian Musgrave says:

    Bravo, a truly excellent post, Derek!

    But in the treatment of Parkinson’s, we have gone from levodopa to levodopa.

    (coughs politely) Well, in Australia it’s levodopa plus dopa decarboxylase inhibitors (as this greatly reduces side effects); no one prescribes levodopa alone. Not to mention amantadine (okay, so it doesn’t work so well, but has fewer side effects) and bromocriptine. In those 40 years we have tuned doses, added drug combinations (levodopa, MAO inhibitor, and DDC inhibitor combinations work best), and tried out a heap of things that just didn’t work.
    But what does he expect? In Parkinson’s (and Alzheimer’s and ALS), brain cells are dying. You can’t bring back brain cells once they are gone. We have only a limited idea of why they die, and virtually no idea of how to stop them dying. Until we know the mechanisms in more detail, anything we do is palliative only (levodopa helps the few remaining brain cells make more dopamine; when they finally die off, levodopa stops working). Rushing drugs to market is pointless when we don’t know the mechanism of the disease. I speak here as someone who is trying to develop drugs to unravel beta amyloid, which is most likely the major pathogenic event in Alzheimer’s (but maybe it’s not). With Parkinson’s we are even more in the dark (alpha-synuclein, anyone?).
    Stopping brain cells dying, or replacing them, is a seriously hard problem (remember the big hoo-ha about neural cell transplantation in the late ’90s – it ended up not working).
    Making transistors that fit more elements on a chip is a significant challenge, but when you make a chip, you know it works almost straight away. When you get a drug working in a test tube, it will be years before you know if it works in a human (and this is doubly so for diseases like Alzheimer’s, where you have to wait at least a year for a knock-in animal model – which doesn’t exactly mimic the human disease – to give you any results, let alone monitor cognitive decline in humans).
    Even if you get something that works in humans, you can’t rush a drug to market in the same way you can a transistor, there’s this thing called the FDA (or the TGA in Australia, and the UK and European equivalents). Also, new chips aren’t likely to kill people because of a rare gene polymorphism that isn’t picked up on the initial tox scan.
    But still and all, if you don’t know the mechanism of the disease, then useful treatments will be hard to come by. And as Derek and others have pointed out, biology is complex; finding druggable answers is a long hard row to hoe.

  32. srp says:

    In the spirit of devil’s advocacy, I’ll try to come up with some sense in which Grove might have half a point. I agree with the consensus above, but Grove’s critique stimulates some heretical thoughts. And let me say in advance, I’m aware of the various institutional barriers to the suggestions below; but if these suggestions have some merit, then changes in those institutions would be the next order of business.
    1. At Intel, and I believe at most semiconductor companies, there is very little emphasis on developing fundamental first-principle knowledge about why things work. Instead they just try to get them to work, record what they did that got them to work, and go from there.
    My understanding is that pre-“rational”-drug design, the pharma industry worked the same way and generated a lot of useful drugs with high research productivity compared to today. Is there any evidence that intentional targeting of receptors is better than blind screening (based on hunches) at turning up good drugs? And if such evidence is absent, does it make sense to cling to a research model that increases emotional comfort but reduces average research productivity?
    2. A weak analogy to the uncertainty of how a drug will work in a human is provided by the uncertainty of how a new semiconductor device or process technology will work on a production fab. (I realize that the uncertainty is a whole lot less, but bear with me. The difference actually works in favor of my argument.) The semiconductor folks build pilot lines and try to test in a realistic environment as quickly as possible, because the actual fabs are extremely complex and finicky systems (as human artifacts go). Stripped-down or simpler analogues to a real production line can be misleading models for what will happen when you try to do something for real at commercial quantities.
    In drug research, we have a really complex and finicky environment–the human body–but a lot of time is spent working on animal models. From previous posts by Derek, I infer that there is no systematic evidence that success in animal models is highly correlated with success in people, or even that animals are really easier to cure than people. It seems to be an article of faith rather than a scientific principle that if something doesn’t work in rats, it won’t work in people (we know for sure that the inverse statement is false from all the rat cures that fail in human trials). Maybe getting compounds into humans earlier, faster, and more frequently and using animals less intensively is the key to getting more actual drugs into the marketplace.
    Obviously, these are not airtight arguments, just ideas that Grove’s analogy stimulated. How much of the standard operating procedures of pharma research is grounded in evidence that it really improves research productivity?

  33. StW says:

    Reading all these comments, one might think that our basic medical research and clinical research systems are working as well as they possibly can. An objective look, however, would not produce that conclusion.
    Perhaps the success and speed of the semi-conductor industry isn’t the place to find all the answers, but we can take some lessons from any scientific or technical endeavor that has achieved the kind of success that the semi-conductor industry has. Perhaps the most valuable lesson we can take from Andy Grove’s industry is its acceptance of change and paradigm shifts. In fact, in the semi-conductor industry, paradigm shifts are the goal, and they happen frequently. Compare that with our stagnant, entrenched, 40-year-old clinical research models, and the comparison is starkly infuriating to anyone who takes the time to fully understand it, especially if one is being directly affected by the systemic failures that frequently occur in drug development and regulation. I am not talking about whether a drug works or not, but rather how desperately awful we are at identifying the ones that work and getting them to the people who need them.
    Why do drugs that obviously work (and there definitely are some of these) for diseases like cancer take 7 to 10 years to make it through an inflexible, one-size-fits-all phased clinical trial system while thousands of people die from their disease waiting for it? Why do we insist that every single “experiment” of a clinical trial be designed, run and interpreted by biostatisticians and physician-statisticians?
    There are other ways to collect and evaluate data. As a very experienced applied scientist in the environmental field (highly regulated – highly complex), and a person with deep personal experience and extensive knowledge regarding what works and what doesn’t in medical research, I find the arrogance and resistance to change in the medical and clinical research fields unprecedented in modern science.
    Instead of a knee-jerk reaction that Andy Grove doesn’t know what he is talking about, reflected by the post and many of the above comments, perhaps we should all take a breath and listen to what he is saying. I suspect, and I hope, we may learn some things he knows that we didn’t know. And maybe they will help a little, or maybe a lot.
    No one likes criticism, but medical and clinical researchers need it as much as anyone else. Nothing guarantees failure more certainly than believing one’s own baloney, and coming up with arguments to justify one’s own failures. Sure, medical research is tough, but that doesn’t mean the incredible progress we have seen in the semi-conductor industry has been easy. There are differences in the challenges they face, but there are also differences in the way they have dealt with the challenges they do face.
    There is nothing sacred about the way we do anything in medical or clinical research, and it’s not as if we are succeeding beyond our wildest dreams in conquering disease. In fact, we are failing at a phenomenal rate and succeeding so rarely that it should make us all think long and hard about what we can and should be doing differently – outside the box of accepted approaches that are so firmly anchored in medical conventional thinking.
    I don’t know Andy Grove, but I know and have known a great many people like him. He is motivated by his past and current personal circumstances to stir the pot, and there is ample evidence that the pot needs stirring.
    Open minds, please.

  34. milkshake says:

    Drug discovery is like writing software apps for a poorly documented buggy OS designed by aliens from a giant planet Red-Mont.
    Except that you are not allowed to use the actual OS for testing until very late in the process – and you must never cause it to crash.
    Drug approval and manufacturing practices are heavily burdened by regulations that are designed to ensure better drugs, but this formal system makes the process very slow and super-expensive. Many historically successful drugs, like aspirin, would not pass the approval process nowadays. I don’t see much push for streamlining the process or reducing the waste of money, the bureaucracy, and the bad management that are so prevalent in pharma.

  35. eman says:

    All that fancy-schmancy research is for nothing when your company can’t figure out how to get your drug into a tablet.

  36. MTK says:

    srp and STW,
    That’s what I was looking for. We can’t honestly believe that every one of our processes represents “best practices”, can we? One thing, though, srp: without some understanding, or at least a plausible rationalization, of mechanism, your chances of getting an IND, much less an NDA, approved are slim.
    And Jose, I realize that there are things that make pharma different from other industries, which is why I used the words “some principles”, but are we that different from medical devices? Are there not ways of thinking that might be improvements? And yes, I do think “It’s a pharma thing, you wouldn’t understand.” is indulgent. We should be open to considering lessons that can be learned from others.

  37. SRC says:

    And enough of defense. Let’s go on the offensive. I’ll maintain that with a less Edisonian and more scientifically rigorous approach, more like that in pharma and academic research, the semiconductor industry would be far ahead of where it is today.
    There ya go, Andy. What do you make of that one?

  38. SRC says:

    Sorry, the blog deleted my (g) at the end of that comment.
    Still, the fact remains, Grove’s authority is based upon his success in the semiconductor industry, which arguably was a matter of being in the right place at the right time (or, more evocatively, he was playing the tuba on the day it rained gold). Would he have been as successful had he entered the field 20 years earlier or later? Probably not, and so his comments, while interesting, don’t warrant chiseling into stone just yet.

  39. Another Kevin says:

    OK, let me try and give a contrarian view – which is what you’d expect from me: I’m an engineer, not a chemist nor a biologist. (Nevertheless, I work with many of both – I’m in that sort of lab.)
    Yuri Lazebnik, in his paper, “Can a Biologist Fix a Radio? —or, What I Learned while Studying Apoptosis” gives a jaundiced view of how biologists approach the complexity of signaling pathways. (I’d extend his argument to other biological systems, such as pharmacokinetics.) He compares the tools that biology has at its disposal to attempts to understand a radio by removing selected components and seeing what breaks.
    He points out that the engineering disciplines have developed tools for handling the complexity of their systems, tools that the biologists (at least in the new field of “systems biology”) would do well to learn.
    He’s probably just as wrong as Grove, but I’m convinced that he’s at least wrong for different reasons. Dr Lowe, could you be convinced to read this paper so we can at least discuss something less obviously absurd?

  40. Stw – breath of fresh air.
    I did scratch my head a little at Mr. Grove’s analogy. But the real point wasn’t in the analogy. It was in the slowness of the system, the actual [in much of the PD community’s opinion] “wrongful conviction” of a drug that was in the Parkinson’s pipeline, and the lack of drive in producing innovative treatments. Being the businessman that he is, I can’t picture him using words like compassion or urgency, or saying “people like me are suffering.” So he said it “his way”.
    The analogy isn’t the point. Money is poured into research – it’s only fair to ask “where’s the beef?”

  41. C3PBW says:

    Gordon Moore of ‘Moore’s law’ and Intel fame is a member of the Board of Directors of Gilead Sciences.
    One thing Gilead learned from the semiconductor industry was the importance of branding, à la “Intel Inside”. So when they outlicensed Tamiflu to Roche, part of the agreement was that the Gilead logo would appear side by side with Roche’s on every package and in every press-release mention of Tamiflu. Now, besides picking up some sizeable royalty checks each quarter, they still get their name out in front for something they did a long time ago.

  42. Ian Musgrave says:

    From the Newsweek article:

    Like an increasing number of critics who are fed up with biomedical research … that lifts the fog of the rodent version of Alzheimer’s but not people’s…

    We have only had an animal model of Alzheimer’s for the last couple of years. Animals don’t get Alzheimer’s; we had to do decades of research until we had a modest understanding of the disease process (hard to do for a relatively rare disease in elderly people that can only be reliably diagnosed after death), and the “knock-in” revolution (part of that despised “academic research that won’t cure disease”) allowed us to place the putative disease-causing genes into rodents. Then we had to characterise the disease to make sure it was relevant to Alzheimer’s, then test the drugs (in a model where it takes at least a year to be sure your drug works).
    The first few drugs that really attack the disease process have only just come off animal tests, and have been tested in humans, which is not bad given we didn’t even have a disease model at all a few years ago.
    Predictably, both high-profile treatment modalities that came through this process failed, both for adverse reactions (we expected that for one drug, but it was a proof-of-concept test, and new, more patient-friendly drugs are being tested now).
    StW wrote

    Why do drugs that obviously work (and there definitely are some of these) for diseases like cancer take 7 to 10 years to make it through an inflexible, one-size-fits-all phased clinical trial system while thousands of people die from their disease waiting for it?

    If there is one thing that we have learnt from biomedical and clinical research, it is that drugs that “obviously work” generally don’t. Cancer research is littered with drugs that “obviously worked” but failed miserably clinically.

  43. processchemist says:

    I suppose that when Intel has ONE new chip working, there’s no problem in producing 100,000 pieces.
    Does Mr. Moore know what happens when, let’s say, 2 grams of a promising NCE (coming out of a long process, from in silico design to lead optimization) come out of a medicinal chemistry lab?

  44. Kay says:

    “does it make sense to cling to a research model that increases emotional comfort but reduces average research productivity?”
    If you care about your company, then please give comment 32 a careful read.

  45. Ian Musgrave says:

    Sorry, this is going to be a long one.
    StW wrote:

    Why do drugs that obviously work (and there definitely are some of these)…

    To return to this, it depends on what you mean by “obvious”. There are heaps of things that go gangbusters in a Petri dish full of cancer cell lines but will not work in an intact animal. There are several drugs that work well in rodent models of cancer (either spontaneously occurring cancer models or tumour xenograft models) that just won’t work in humans. Even if you take your drug straight from the Petri dish to humans, you have years of work ahead of you. In humans, to say a cancer drug works, the cancer has to be in remission for at least two, preferably five, years. Even if you go for surrogate markers (tumour shrinkage and one-year survival rates), you will still take at least a year to recruit your subjects (more if it is a relatively rare tumour, and that assumes you can recruit people for a study with no toxicology pre-study, so you can’t tell them whether the drug will kill them outright), at least a year to run the experiment, and then another half year or so to do the analysis (measurements of tumours on X-rays, biochemical indices, histology on tumour biopsies: these things take a non-negligible amount of time to do).
    So it will take you a minimum of 2.5 to 3 years just to see if the drug works in humans straight from the Petri dish (which may have been time and money wasted, as without a preliminary pharmacokinetics study you have no idea whether the dose you give your subjects, extrapolated from the Petri dish data, actually gives a high enough tissue concentration to do anything in the first place – again assuming the drug doesn’t kill your subjects outright, or make them throw up all the time, etc., because you haven’t done a preliminary tox study). So, a minimum of 2.5 to 3 years from Petri dish to proof of concept in humans; then you have to convince the FDA, on the basis of one year’s data, that your drug will cause long-term cancer remission, won’t cause long-term problems further down the road, and justifies the inevitable higher cost of your drug – all on the basis of minuscule knowledge.
    Now, most of the drugs that successfully kill cells in a Petri dish will fail, and cancer patients go through enough as it is without testing hundreds of drugs on them that will not work. So we are obliged to do preliminary animal studies, just to get an inkling of metabolism problems and to check that the drug is not insanely toxic. And then human pharmacokinetic and toxicity studies to make sure that the right dose gets in and there is not a weird wrinkle in the human toxicology profile (cytokine storm, anyone?). So of course it takes longer.

    Why do we insist that every single “experiment” of a clinical trial be designed, run and interpreted by biostatisticians and physician-statisticians?

    Because if you don’t, you have just wasted several million dollars, and probably trashed the chance of a decent drug to be marketed (or let through a dog of a drug).
    It’s hard to get this across, but statistics isn’t a bolt-on frippery to keep maths geeks happy. It is an integral part of experiment design. If you don’t do it right, then you can kiss millions of dollars goodbye as you run a study that won’t actually be able to tell you if your drug has any effect.
    In chemistry, if you use the same reagents and apparatus, there is a tolerable chance that you will get roughly the same yield every time you run a reaction. In biology, especially for human trials, Harvard’s Law rules with a vengeance (“when two sets of animals are treated under the same conditions of lighting, heating, food intake and environmental stimulus, they will do as they damn well please”). With cells in a Petri dish, you are dealing with a genetically uniform clone with none of those nasty absorption, distribution, metabolism and excretion issues. With xenograft mice, you have a bit more complexity, but the mice are still fairly vanilla genetically, and the tumour grafts fairly uniform too.
    In people, that’s a whole ’nother ball game. They vary genetically in everything, they have a wide range of environmental influences, and they ingest substances that may interfere with the already variable absorption, metabolism and excretion of your drug (some, like grapefruit juice (remember terfenadine), we can control; others we can’t), so we have a whole parcel of complicated interactions that need to be carefully sorted out if you want to see if your drug works (and the tumours are genetically heterogeneous too, and will evolve ways to beat your drug while you are testing it; cancer biology is a nightmare).
    Except for a few rare drugs, like Gleevec, most drug effects on cancer are relatively small, and have to be dug out of the noise with careful experimental design. Like it or not, there is a good reason why we do clinical trials the way they are done: the other ways we have tried just don’t work. I’ve also done epidemiology, and worked briefly as a biogeographer’s assistant, so I know about other approaches to experimental design and analysis. Biology is complicated, and the methods we have are the ones that give us a fighting chance of getting drugs that work without harming people too much. Even then the system isn’t foolproof, as organisms are just damn complicated, and nasty surprises can lurk in the woodwork.
    So yes, if at the end of the day you want to give cancer sufferers a relatively safe drug that will actually be of some benefit to them, rather than an expensive placebo, we will have to wait years and have statisticians on board.
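    Ian’s point that statistics is integral to trial design, not decoration, can be made concrete. The sketch below (with round, invented numbers – not data from any real trial) runs the kind of two-proportion z-test a biostatistician would use to decide whether a difference in response rates is distinguishable from chance, and shows why the same apparent effect is meaningful in a properly sized trial but not in a tiny pilot.

```python
from math import sqrt, erf

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-sided two-proportion z-test (normal approximation).

    Returns (z, p_value) for the null hypothesis that both arms
    share the same underlying response rate.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, via erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical trial: 32/80 respond on drug vs 16/80 on control.
z, p = two_proportion_z(32, 80, 16, 80)
print(f"full trial:  z = {z:.2f}, p = {p:.4f}")  # p well under 0.05

# The same 2x response ratio in a tiny pilot (4/10 vs 2/10)
# cannot be distinguished from chance:
z_small, p_small = two_proportion_z(4, 10, 2, 10)
print(f"tiny pilot:  z = {z_small:.2f}, p = {p_small:.4f}")
```

    The doubling of the response rate is only “real”, in the sense a regulator can act on, once enough patients have been enrolled for the noise to average out – which is exactly the slow part Grove’s critique skips over.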

  46. Anonymous BMS Researcher says:

    I strongly agree with many of the points made above, in particular the stupidities that can be caused by numerical metrics and the fundamental difficulties of drug development.
    On metrics, like Derek I too have seen how intense the pressure to make the numbers (which, not incidentally, are included in our bonus formulas) can become late in the year.
    On biostatistics: I once was an electrical engineer before going back to graduate school and getting a doctorate in biology. Believe me, engineers also use statistics — a lot. But the engineering world is fundamentally much easier than is the pharmaceutical world, because circuits are as close to identical as we can make them.
    However, there is another fundamental difference between the engineering world and the pharma world that I have yet to see mentioned in this thread: the primary locus of value. Ford and Toyota both have large R&D departments and patent portfolios, but the reasons I’m about to drive 15 miles in a Camry instead of in a Taurus have very little to do with their makers’ respective patent portfolios. The number one reason why Toyota has a much larger market cap than Ford is what happens on the factory floor. Clearly making the cars efficiently is one of the hardest parts of being a car company.
    But Wall Street pays very little attention to what happens in the factories where BMS and Pfizer and the rest of us make our pills; it is basically assumed that if the drug gets approved, the pills will get made. I am not saying the manufacturing end is unimportant – it is quite important, and we spend a fair amount of effort on it – but making the physical pills is clearly not the hard part of our industry, as the existence of generic competition proves. Our market cap is driven mainly by Wall Street’s perception of our pipeline.

  47. emjeff says:

    I don’t have any more comments to add, but I have a suggestion: Derek, you need to work this particular blog post up into an article and get it published.

  48. STW says:

    In response to Ian Musgrave: the argument you make, that drugs that obviously work usually don’t, falls into the “believing your own baloney” category. It is the response given by statisticians who believe only in things like confidence limits and p-values. I am not being combative here, only pointing out that the clichés and sound bites so automatically thrown around by defenders of the status quo are more of the one-size-fits-all problem we have in medical and clinical research. I am not a rookie on this. I am something of an expert, with many thousands of hours under my belt studying the problems and solutions, and it is in fact possible to identify some of the drugs that obviously work long before the statisticians are finally satisfied with the data from multiple, years-long, double-blind, randomized, placebo-controlled clinical trials – for example, in terminal cancer patients. Good decisions have two parts: they must be sufficiently correct (but not perfect), and they must be made in time. A correct decision made later than it should have been is often rendered wrong, useless, or far less effective than it could have been. Since we are often talking about lives with medical treatment decisions, one would think we would be good at making them. We aren’t. The art of good decision-making seems to be non-existent in clinical research, replaced by a myopic vision focused like a laser beam on an arcane set of algorithms called frequentist statistics. No other field of science has ever taken it to the extreme clinical researchers and regulators have, for good reason. Strap scientists into a single method, and it isn’t science anymore. The data is never perfect, and waiting for it to become statistically perfect, which is what the FDA is doing with increasing frequency, imposes a human cost and a stagnation of progress that far outweigh the diminishing benefit we get from waiting for that perfect statistic to finally emerge.
    Disease is not caused by p-values, and no one ever calls a statistician in a real emergency – unless the emergency happens to be a serious or life-threatening disease; then you are stuck with them.
    Again: open minds. Instead of repeating the conventional wisdom (which is often the “baloney” I was talking about), question the conventional wisdom. Test the conventional wisdom. Conventional wisdom is nothing more than yesterday’s stale knowledge. I suspect Andy Grove would tell you that, in the semiconductor industry, following the conventional wisdom is the quickest path to failure.

  49. Hap says:

    Science isn’t a library of knowledge but the method to get that knowledge. Statistics (when performed honestly) is a tool to understand what you actually know rather than what you think you know. If there were a better way to learn which drugs work and are safe, people would be interested, because they could make a whole lot more money and help people more effectively in doing so – but it doesn’t sound like AG actually has such an idea.
    The “dose and hope” school of pharmacology sort of went down the tubes when someone decided to dose 150 children with cough medicine dissolved in diethylene glycol. People are pretty risk-averse – they don’t generally want to assume risks (even very small ones) for the benefit of others, and their risk-aversity decreases as they pay more for something – and this is unlikely to change. People are willing to assume larger risks either for larger benefits or when they have no choice – and (as with cancer and AIDS) the FDA already factors for those circumstances, as do the drug companies. Attempting to ignore what we know of statistics won’t make the lack of knowledge or lack of pliability of human nature, human biology, and systems biology go away.

    “Every drug for cancer and other serious life-threatening illnesses that the Abigail Alliance has pushed for earlier access to in our five-and-a-half-year history is now approved by the FDA! Many lives could have been saved or extended if there had been earlier access to these drugs!” The number of drugs is now up to 16!
    (More detail in the August 14, 2007 Abigail Alliance Wall Street Journal op-ed, ‘FDA’s Deadly Track Record’)
    Also go to and, in the upper left-hand corner, click on the (short) FDA rally video button.
    Frank Burroughs
    President, Abigail Alliance for Better Access to Developmental Drugs

    I was reading his article and was thinking exactly the same way you nicely summarized! I am not sure how much he really knows about drug design, but for me there is a *big* difference between a carbon- and a silicon-based molecular structure. Is this guy telling us that we should start making Si-based drugs? Note: we are still talking about a drug for humans, right? Or are we talking about robots with an artificial bloodstream we can control?
    Finally, here is a statement highlighting the differences in the industrial setup:
    “If you want to understand why something happens in business, study the disk drive industry. Those companies are the closest things to fruit flies that the business world will ever see.” Drug design is a process of 9 to 15 years! So, which object of study lies in between a fruit fly and a hard disk? A high-throughput screen, a biological assay, or an ‘in silico’ 3D/2D/xD model of a drug?

  52. Ian Musgrave says:

    StW wrote:

    The argument you make about drugs that obviously work usually don’t, falls into the “believing your own baloney” category. It is the response given by statisticians who believe only in things like confidence limits and p-values.

    A brief disclaimer here: as well as trying to unravel beta-amyloid, I also teach biostatistics. One of the first things I teach my students about is the “bloody obvious test”, where a result is so obvious that any statistics are window dressing. The only anti-cancer drug that falls in the BOT category is Gleevec (and its variants), with a stunning 80% response rate. Gleevec was fast-forwarded through the approval process. On the flip side is Herceptin, which works marginally in a special subtype of cancer and wouldn’t have been found without statistics (or we could have just given expensive Herceptin placebos to women with cancer). Leading anti-cancer drugs produce effects on the order of 20% increases in remission (e.g. rituximab, or the antibody conjugated to a toxin, gemtuzumab ozogamicin), so you definitely need statistics to sort these out. With very few exceptions, the new anti-cancer drugs we produce are incremental improvements, and we need statistics to see those incremental improvements.
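    The gap between a Gleevec-sized effect and an incremental one can be quantified with an exact binomial calculation (the rates below are round illustrative numbers, not the actual trial figures). Assume the historical response rate without the drug is 20%: a “bloody obvious” 8-of-10 responders is essentially impossible under the null hypothesis, while 4-of-10 (a doubling of the response rate!) is entirely compatible with chance.

```python
from math import comb

def binom_tail(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at
    least k responders if the true response rate were only p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

baseline = 0.20  # assumed historical response rate without the drug

# "Bloody obvious test": 8 of 10 patients respond.
print(f"P(>=8/10 | p=0.2) = {binom_tail(8, 10, baseline):.6f}")

# Incremental: 4 of 10 respond, i.e. double the baseline rate.
print(f"P(>=4/10 | p=0.2) = {binom_tail(4, 10, baseline):.3f}")
```

    With 8/10 responders, ten patients essentially settle the question; at 4/10 you would need on the order of eighty patients per arm (at the usual 5% significance and 80% power) to nail the effect down – which is why incremental drugs require large, slow, statistician-designed trials.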
    StW, could you give us an example of an “obviously working” anti-cancer agent that wasn’t fast forwarded through approval?

  53. stw says:

    Gleevec certainly does fall into that category, but there have been others. And by the way, Gleevec wasn’t “fast-forwarded” nearly as much as the FDA wants us to think. The FDA actually dithered over that one for about two years, and was still dithering and asking for more trials after a 100 percent response rate in Phase I and a response rate above 80 percent in Phase II. There have actually been virtually no exceptionally promising drugs that weren’t held up to some significant extent by the FDA. And they are still being held up. On the subject of FDA dithering and delays, I really am an expert.
    The inability to see the other drugs for which this was true has its roots in the rule of statistical analysis that requires everything to be viewed in the context of populations, whether we have defined an actual controlled population or not. A drug that obviously works for a subset of patients in a trial is their “Gleevec”, even if it is not “Gleevec” for everyone in the trial.
    The fact that it doesn’t effectively treat everyone chosen to be in the trial population is a failing of the method and the designers of the trial, not the drug. The designers of those trials failed to identify (i.e., control) the population properly. What they did was run a trial that included patients with different diseases. This is so obvious it almost intellectually hurts. Macro control variables like prior treatments, mets or no mets, no prior surgeries or specific prior surgeries, etc. have little at all to do with whether a patient responds to a drug for genetic/proteomic diseases. It is a construct of the ignorance that existed back in 1962, when statistics was chosen as the basis for all clinical research and all approval endpoints. The diseases we are still having the most trouble with are the ones driven at the molecular level, so in order for statistical trials to work well, we have to control for molecular differences. The easy way to think of this is that Tarceva works well for about 10 percent of lung cancer patients and hardly at all, or not at all, for the rest. This teaches us that lung cancer is not one disease, and that we didn’t control the trial for that – not that the drug doesn’t work. For those it does help, the results can be miraculous, if unfortunately not permanent. That doesn’t mean the drug doesn’t work; it means we have to figure out who it will work for, who it won’t, and why. Once we do that, we begin to understand not just the effect of the drug (which is all we ever really get from statistical trials) but both the effect and the cause of that effect. Once we link the cause with the effect, we no longer need statistical clinical trials for that question, because we have gotten to what is generally termed “first-principles” science.
    Oddly, just as a couple of teams working independently had managed to tease this information out for the very similar drug Iressa, the FDA decided to pull Iressa off the market because a randomized trial that averaged the survival outcomes of the 90 percent of non-responders in with the 10 percent of responders fell just short of statistical significance for showing a survival benefit. So a useful drug for a few, who could be fairly confidently identified in advance of treatment, was yanked just as the information needed to optimize its use was learned. Why? Because the FDA doesn’t believe in anything but statistics, whether that produces a rational and medically correct decision or not.
    Statistics can be a useful tool, but it cannot be the only tool we use, because it too often forces us to ask and answer questions in the wrong way, or to ask the wrong questions altogether, in which case one can’t possibly get the right answers. Methods and experimental designs are just recipes for cooking up data, and if we confine ourselves to one recipe, we severely limit the questions we can ask and the answers we can get. Sorry, but defending statistics as the only valid basis for the design, conduct and analysis of experimental data in clinical research is like trying to swim with a straitjacket on (those darn analogies again).
    As I said in an earlier post, once scientists get strapped into using a single approach to experimental design, or data analysis, or anything else for that matter, it isn’t science anymore. It is repetition, and we should not expect much progress from it.
    Science is the search for knowledge. The method we used to make today’s discovery is often not going to be the method we need to make the next discovery. Since 1962 we have gone from barely knowing what the genome was to having decoded the entire thing more than once, and we are very rapidly approaching the ability to decode human genomes the way we check for cholesterol levels. Over that time, our narrow clinical research model has not changed at all, and it remains firmly in the hands of researchers who refuse to believe anything but the results of a Kaplan-Meier curve, based on a clinical trial designed to produce nothing but a Kaplan-Meier curve.
    No one should be surprised we are failing a lot more than we are succeeding. If we don’t start looking where the exponentially expanding knowledge of human disease biology is telling us the answers are, we don’t have much chance of finding those answers.
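    The dilution effect stw describes – a strong responder subgroup washed out in a population average – can be illustrated with a toy simulation (all rates invented for illustration; no connection to any actual drug). If only 10% of patients carry a sensitizing mutation and respond, the overall response rate barely moves, while the biomarker-positive subgroup shows a dramatic effect.

```python
import random

random.seed(42)  # deterministic toy example

N = 1000
SENSITIVE_FRACTION = 0.10   # assumed fraction carrying the target mutation
P_RESPOND_SENSITIVE = 0.90  # assumed response rate in mutation carriers
P_RESPOND_OTHER = 0.05      # assumed background response rate

# Flag each simulated patient as a carrier or not, then simulate response.
patients = [random.random() < SENSITIVE_FRACTION for _ in range(N)]
responses = [
    random.random() < (P_RESPOND_SENSITIVE if sensitive else P_RESPOND_OTHER)
    for sensitive in patients
]

overall = sum(responses) / N
subgroup = sum(r for r, s in zip(responses, patients) if s) / sum(patients)

print(f"overall response rate:            {overall:.1%}")
print(f"rate in biomarker-positive group: {subgroup:.1%}")
# The population average hovers near the expected ~13.5%
# (0.10 * 0.90 + 0.90 * 0.05), masking a ~90% effect in carriers.
```

    Whether an unstratified trial powered for the ~13.5% average is the right way to evaluate such a drug is exactly the dispute in this thread; the simulation only shows why the two analyses can point in opposite directions.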

  54. Hap says:

    How do you determine whether something works in a subset of a population without statistics? Statistics isn’t sufficient (you need to have an idea that some people will be more susceptible to a drug than others), but once the idea exists, the only way to test it is statistical (because human biology isn’t well enough understood, and people’s behavior can’t be held constant, to do a controlled lab experiment).
    The other problem seems to be that we can’t decode the genome as easily as a cholesterol test – people have been plugging away for some time even after the sequence of the basic human genome is known, because people don’t know what it all does. We don’t know what proteins are produced (let alone how) or what they all do. Mechanism-based logic is tough without a mechanism.
    It doesn’t seem as if we have the tools to make safety and efficacy testing faster, either. The FDA insists on time-consuming safety tests because their output is predictable and (mostly) accurate. If one had a different method to determine safety and efficacy, one would still have to prove to the FDA that the new tests are functionally equivalent to the old ones before the faster tests could be deployed in practice. In addition, some aspects of drug testing in people are slow because it takes time to see an effect (as indicated above).
    Considering the problems with the drug industry, it’s clear it can learn from someone how to do its job better – but since people can’t agree on what’s wrong, it’s hard to understand who might understand its problems and have a better method in which to do its job.

  55. stw says:

    You determine it clinically. You are confusing the statistical analysis of the data points with the clinical observations that produce those data points in the first place. An example: a patient with Stage IV, advanced and progressing colon cancer presents for entry into a trial and is found to be eligible. I know from very involved personal experience and lots of research that colon cancer is a progressive disease; that is its natural course, and we understand it pretty well after watching these diseases kill people millions of times. The patient will have been taken off his or her last therapy because it was no longer working (i.e., the tumors were growing and/or new tumors were forming – this is how progression is measured in colon cancer, along with some blood markers that are less definitive than CT scans of tumors). The natural course of the disease is continued progression in the absence of treatment, and in colon cancer, spontaneous stable disease and/or regression are so rare as to be non-concerns. The patient is then given a drug, and the tumors stop growing and spreading for an extended period, or they shrink, or in rare cases they completely disappear (called a complete response) for a time period long enough to be measured on periodic scans (usually every 6 weeks for colon cancer). These are all considered responses, and they are the observations that the statisticians use as evidence of clinical benefit. They are the plus side of the trial data. People whose disease continues to progress (even if it may be slowed a little) are considered non-responders. For the people who experienced the complete responses and the partial responses (generally defined, quite arbitrarily, as tumor shrinkage of 50 percent or more), the drug certainly worked, for as long as the response lasted – which is referred to as duration.
    This is the clinical data statisticians use from clinical trials. It is meaningful in its own right, because it shows that there are patients who respond to the drug, but statisticians don’t care about that obvious fact and reject it as anecdotal, because they deal only in populations. This is actually a profoundly unscientific concept, but they do it nonetheless.
    What they then do with it is kind of like deciding whether you like a movie by reading a review written by someone else (analogy again). They calculate the average number of responses in each arm of the trial (and if it is against a placebo in advanced colon cancer, there won’t be any responses in the control arm) and then they compare them. They also calculate arcane, almost meaningless metrics called 95-percent confidence limits and related p-values, and they decide whether the drug is better than the placebo based primarily on the p-value, which is yet another arcane and almost meaningless measure of whether the result could be due to chance. The problem with all of this is that it is completely removed from what actually happened. Some people responded to and benefitted from the drug, so there is obviously a subset of patients who should be getting it. That has nothing to do with statistics; you learn it before a statistician ever touches the data. The statistician then loses and obscures the inherent knowledge imparted by the “anecdotal” data by submerging it in a bunch of sequential averaging calculations. So what we really learned is ignored, and the next question we should be asking – how to identify the patients who should get the drug, and conversely how to identify those who shouldn’t, because it won’t work for them – is lost.
    It would literally take a book to explain why this is incredibly stupid given what we now know about the molecular causes of diseases like colon cancer, but because the next step is invariably another trial designed by statisticians (the FDA virtually mandates that those are the only experimental designs it will accept), we never get around to conducting the kind of non-statistical clinical trials that would give us the answers we need. Instead the FDA either approves the drug or doesn’t based on the averages, and if it approves the drug, doctors will prescribe it to patients with no idea at all who will respond or for how long, except that if they treat enough of them, they will eventually end up somewhere around the average response and duration found in the trial, if they bother to average the results experienced by their own patients.
    This is stupidity, not science. It is also what is mandated by the FDA and staunchly defended by a lot of people who can’t really explain why we can’t change it, except to say we can’t. This string of blog posts presents a pretty good record of that kind of thinking.

  56. stw says:

    Actually, some of the people engaged in developing genomic profiling systems are predicting that we will be able to decode an individual’s genome for just a few thousand dollars as soon as about five years from now. With cost projections coming down that quickly, the reality is probably closer than that. Genomic profiling science is one of the few things in medical research that is going vertical in terms of progress, perhaps because doing it is very similar to the kind of thing Intel does: it is developing a technology where we already understand the concepts behind how to do it.
    We also know a lot more about human disease biology than you appear to think. There already have been some drugs developed based on first-principles science that had no path to approval at the FDA based on anything but the old, phased, statistical approach. An example is Sprycel, the follow-on drug for Gleevec. It was developed to overcome the resistance that develops in some CML patients to Gleevec. The scientists developing it mapped out the genetic mutations that caused the resistance to Gleevec and came up with a new molecule that fits almost all of the mutations, shutting down the CML (leukemia). It obviously worked in Phase I, and it was known very conclusively why it worked, but the drug still had to go through more than a year of multiple parallel clinical trials that added very little to what we knew almost right out of the box about the drug. People died waiting for it because neither the FDA nor the clinical research community is open-minded enough to accept the fact that statistics is not always the way to go. Today, the scientific and clinical information we have is sometimes better than anything statisticians can give us, and the instances where that is the case are on the increase.
    Certainly the drug industry has problems, but the FDA has even deeper problems that are causing some of the industry’s problems. The drug industry isn’t nearly as evil as it is often painted, and the FDA isn’t nearly as unbiased or competent as many believe it to be. It is a typical, slow-moving, unresponsive bureaucracy with all the typical problems and failings of those types of organizations. It can’t get out of its own way, and changes only when that change is forced upon it from outside its walls.

  57. Dave Eaton says:

    I am a chemist that works in a company awash in electrical engineers. I’ve learned a great deal from their approach. But I also encounter certain blind spots that their education engenders.
    Mind you, the chemistry I am doing is not in any way as complex or highly regulated as drug discovery. But I still find myself, again and again, explaining to engineers why chemists can’t just go into the lab and design and build new molecules the way engineers design a circuit. Why you can’t reliably calculate which solvent to use. Why two molecules that look almost identical will behave in ways that seem completely different.
    Most every bit of the chemistry we do is aimed at being put into a product. The drive to get to an end point that is profitable is constant. But even minor changes at any point in the process, or any deviations in raw materials, have huge effects that must be understood to be mitigated. It is constant and ongoing, challenging and, more to the point, not predictable. It makes little sense to try to schedule discoveries the way you do the completion of defined tasks, though scheduling seems to be an instinct driven into engineers.
    I wish I could share stories, but alas, secrecy is a downside of research in industry. But I will say that I regularly encounter something surprising, like the vexing and complex surface chemistry of glass, that I never expected to matter.
    A big part of engineering, it seems to me, is making things combine linearly, so that one thing can be trusted to affect another predictably. Chemistry is not like that. Biology is double-secret not like that. And so they are not, and are not likely to become, engineering.

  58. Ian Musgrave says:

    StW wrote:

    Gleevec certainly does fall in to that category, but there have been others.

    Which ones?
    StW wrote:

    ….FDA decided to pull Iressa off the market because a randomized trial that averaged the survival outcomes of the 90 percent of non-responders in with the 1 percent of responders fell just short of statistical significance for showing a survival benefit. So a useful drug for a few who could be fairly confidently identified in advance of treatment was yanked just as the information needed to optimize its use was learned….

    Iressa (gefitinib) is a case in point. Gefitinib was not a drug that “obviously worked”, even though we had a good theoretical basis for its effect and good tissue models. Gefitinib was fast-tracked on the basis of surrogate markers in early studies of non-small-cell lung cancer. When the survival data came in, it was then obvious that it didn’t work in general (not just the ISEL trial; three other studies failed to find benefit, but ISEL was the largest and most authoritative trial). The FDA was quite right to stop it being prescribed for new cases of non-small-cell lung cancer, because for the vast majority, they would just be getting expensive placebos. People who were on gefitinib and responding remained on it, though (it was not “yanked”). More importantly, by the time gefitinib was restricted there were drugs that did work for everybody, Tarceva (erlotinib, approved in 2004) being one of them. Contrary to what StW’s statement implied, erlotinib works for everyone[*], but does better in the same groups that gefitinib does, so replacing gefitinib with erlotinib is a win-win situation for everybody.
    Note that in this context, improvements in median survival time are generally on the order of 2-3 months, though in some subgroups median survival can be improved by as much as 12 months (that’s very good given the nature of the disease). The improvements, though clinically important, are by no means obvious, and require well-constructed clinical trials to see, even for the highest levels of response. Indeed, the only reason we know that there were patient-subset-dependent effects was very careful pre-planned statistical analyses (see Lancet. 2005 Oct 29-Nov 4;366(9496):1527-37.). Just to hammer this home, statistics was the key to finding these effects.
    At the time that the studies were planned, no one knew of the possible role of EGFR activating mutations, so studies could not have been planned around that. Now that we understand about activating mutations, and have done a few studies, the situation is becoming clearer (most, but not all, studies suggest that EGFR exon 19 deletions and L858R mutations are particularly susceptible to gefitinib and erlotinib; again, we need statistics to sort these studies out). Tests have recently been developed that will allow us to genotype tumours for these mutations, which should allow better drug targeting. Again, note that this came after lots of clinical work and statistical analysis to show that apparent associations were real associations (and a heck of a lot of work to get a streamlined test that could deliver results in a clinical time frame). Sadly, both gefitinib and erlotinib are limited by the rapid rise of mutations in the EGFR (something no chip developer has to worry about).
    As StW mentions, lung cancer (ALL cancer) is a very heterogeneous disease. Researchers know that, but as yet there have been no effective markers to do rational molecular sub-typing. Even the EGFR exon 19 deletions and L858R mutations account for only a percentage of the variation in response to gefitinib (and this is post-facto knowledge). There is a lot we still don’t know. Even the just-published survey of lung cancer genomics (Nature, November 4, 2007, doi:10.1038/nature06358) offers no real clues about how to meaningfully subtype cancer into therapeutic subsets (“More generally, our results indicate that many of the genes that are involved in lung adenocarcinoma remain to be discovered”). This is a major unrealised goal for pharmacogenomics, and criticising researchers for not including information that was unavailable when the study was planned and started is singularly unhelpful.
    Still, the major take-home message from the gefitinib and erlotinib trials is that statistics plays a key role in the design of a meaningful trial. Neither of these drugs “obviously worked”, and careful statistics was needed to find meaningful patient benefit. Any trial that ignores statistics is doomed to be a waste of time, resources and patients’ lives.
    [*] It doesn’t seem to work in treatments with combined Paclitaxel and Carboplatin, but neither do anti-EGFR antibody treatments.

  59. rick says:

    Forgive the typing, but the meds wore off some time ago. I will also apologize for not taking the energy to read the many posts, for the same reason.
    Having had PD for 15 years and having researched it thoroughly, I hope to claim a soapbox for a moment.
    1) The present system is a failure.
    2) Those from an academic background are not qualified to evaluate it due to their position within it – i.e., they are part of the problem they seek to solve.
    3) An engineering model is exactly what is needed. Bridges fall down. Scientific failures apply for more funding. The feedback loops are inverted.
    4) How many researchers here actually talk to patients for their insights?
    5) Finally, some of you might be sharp enough to follow up on this rather brash statement:
    Parkinson’s is NOT a neurological disorder per se. It is an intertwined disorder of the immune and endocrine systems which, with the added influence of the GI system, damages the nervous system. Go to Medline and read the work of Bin Liu at the NIH, PM Carvey at St. Luke’s, and R. McEwen at SUNY (?)
    6) If you are interested in joining a cross-disciplinary effort on this, let me know at
    Gotta go. Life sux this time of night.

  60. Dave Eaton says:

    3) An engineering model is exactly what is needed. Bridges fall down. Scientific failures apply for more funding. The feedback loops are inverted.
    To some extent, this is true in academia. As an industrial chemist, I assure you that the model is try, try, and try some more.
    The engineering model fails precisely because the systems involved are hideously non-linear. If the bridge falls, you can isolate the failure points, and seek to improve the structure or materials. If a reaction fails, the resultant combinatorial explosion of possible reasons defies analysis, and so you just try another approach (different catalyst, different solvent, etc). Incremental changes are possible in engineering, and not necessarily so in chemistry or biology.
    This bears saying out loud: despite continuous advancement in computational chemistry, there’s not a reaction worth doing that can be adequately described or predicted by calculation. My perspective is that of someone who works on relatively simple things by biological standards.
    Said another way: chemistry is way fucking hard. Plenty of us doing it are very closely allied with engineering. If we could put pen to paper and calculate anything worth doing, you can bet we would. Multiply this complexity a grillionfold, and you start to nip at the edges of biology.
    I had the good fortune of working in a solid state physics group as a post doc. The breathtaking advances in electronics are, to a reasonable approximation, the results of very, very ‘boring’ and repetitive structures, where everything is periodic, where perturbations by impurities are exponential in their effects, and where one reasonably tries to control individual layers of atoms.
    I think that a systems approach, and ‘engineering’ approach to chemistry and biology is likely to be useful, but not as fruitful as Grove imagines.

  61. Ian Musgrave says:

    StW wrote:

    This is the clinical data statisticians use from clinical trials. It is meaningful in its own right because it shows that there are patients who respond to the drug, but statisticians don’t care about that obvious fact and reject it as anecdotal because they deal only in populations. This is actually a profoundly unscientific concept, but they do it nonetheless.

    This is exactly how statisticians don’t work. The problem here is that people vary; a lot. Put one set of people on a placebo and measure their progress, then put another set of people that you have carefully matched to the first set on a placebo, and you will get different figures for improvement, stable disease and disease progression. The question statisticians ask is: is the number of responses you see in a drug arm more than we would expect by chance sampling of the population at large?
    The usual situation is more like this (as a made-up example): 8 people in the drug arm have tumour shrinkage and 4 people in the placebo arm have tumour shrinkage. In what sense can we confidently say that the responses in the drug arm are actually due to the drug?
    In the case of the gefitinib trial, 1% of all patients on placebo showed shrinkage of their tumours in the absence of drug therapy (it happens, people VARY); in contrast, 8% of the people on gefitinib showed a shrinkage of their tumour. Now 8% looks bigger than 1%, so we might naively think that gefitinib has worked, but remember, people VARY. The question here is: is the response seen in the gefitinib-receiving group larger than you would expect just from random sampling of the population? If you gave placebo to another group of matched subjects, what is the likelihood that 8% will show tumour shrinkage in the absence of drug? The answer is, given the known amount of variation in that population (and the number of people showing a response out of the total population), it’s quite likely that we would see 8% of the population show tumour shrinkage in the absence of drug, so no, the gefitinib response is not larger than we would expect from random sampling.
    The take-home message here is that even something like tumour shrinkage, which we might expect to occur only as a drug effect, can occur in patients taking placebo. We need statistics because we have to be able to estimate how likely it is that we will get a given result by chance alone. For drugs like Gleevec, that likelihood is insignificant; for drugs like gefitinib, that likelihood is almost a certainty.
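    To make the “by chance alone” question concrete, here is a sketch using the made-up counts from a couple of paragraphs up (8 responders on drug vs. 4 on placebo), with an assumed 100 patients per arm, since arm sizes were not specified. It computes a one-sided Fisher’s exact test from scratch via the hypergeometric distribution:

```python
from math import comb

def fisher_one_sided(a: int, b: int, n1: int, n2: int) -> float:
    """Probability of seeing >= a responders in the drug arm by chance
    alone, given a/n1 responders on drug and b/n2 on placebo
    (one-sided Fisher's exact test, hypergeometric tail)."""
    k_total = a + b            # total responders across both arms
    n_total = n1 + n2          # total patients
    p = 0.0
    for k in range(a, min(k_total, n1) + 1):
        # hypergeometric pmf: C(K, k) * C(N-K, n-k) / C(N, n)
        p += comb(k_total, k) * comb(n_total - k_total, n1 - k) / comb(n_total, n1)
    return p

# Made-up example from above: 8/100 on drug vs 4/100 on placebo.
p = fisher_one_sided(8, 4, 100, 100)
print(f"p = {p:.2f}")  # well above 0.05: easily explained by chance
```

With these numbers the split could easily arise from random sampling, which is exactly the point about people varying; a lopsided split (say, 12 responders vs. 0) would yield a tiny p-value instead.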
    I heartily recommend Statistics at Square One for a better understanding of some of the issues involved.

  62. stw says:

    Please note a typo in my post above where I discussed Iressa. I was making a lot of them last night. The response rate was about 10 percent.
    I would respond that Ian’s assertion that statistics was needed to find that gefitinib didn’t work is actually the opposite of what happened. The ISEL trial did show a slight survival benefit (a little over two weeks), but it missed statistical significance by a small margin (a few more responding patients would have put it over the top). Given that the FDA approved Tarceva with Gemcitabine for pancreatic cancer based on a statistically significant average survival advantage of only two weeks, small survival advantages do sometimes support approvals. Tarceva did show a longer overall survival advantage in lung cancer (about two months) that was statistically significant.
    But to my central point on Iressa. Clinical observations of individual patients who responded to the drug showed very convincingly that they received clinical benefit. I know (or knew, when they were alive) people who have responded to these drugs, and when it happens it is a very good thing. Some of the people who got Iressa responded for a year or two. To be clear, a good response to a drug like Iressa or Tarceva or Erbitux (all EGFR-targeting drugs) literally gives people their lives back for as long as those responses last. The side effects are very mild, and these people live near-normal lives, as if they physically didn’t have cancer at all. It is not a small thing, even though it happens for only about ten percent of the people who get these drugs (more for Erbitux in colon cancer and head and neck cancer). Tarceva may be slightly better than Iressa on a population basis, but for some patients Iressa unquestionably worked, and the statisticians cannot tell us whether it is the same people responding to Tarceva who would or would not respond to Iressa, and vice versa. Consider what we may have just missed if the ten patients out of a hundred who respond to Iressa are different from the ten patients who respond to Tarceva. The drugs are not identical. If it is a different subset of patients, we threw away an ability to effectively treat 20 percent of lung cancer patients so that we could effectively treat only 10 percent. Bottom line: by pulling Iressa based on population stats alone, we may well have reduced our ability to effectively treat lung cancer.
    With Iressa in the ISEL trial, by averaging the results of all non-responders in with the patients who responded to Iressa, the positive data ends up being blended into a sea of negative data, and the significance of the benefit received by the patients who responded is masked in the calculations. This is the difference between population-based, statistics-driven thinking and medical reality. As Ian acknowledges, lung cancer is not one disease, and it changes over time in the individual patient. This is the most compelling argument against continuing the entrenchment of clinical trials in population-based statistics. The population-based approach assumes that every patient with lung cancer will receive the same drug, and that the drug will benefit only a percentage of those patients (in the case of Iressa and Tarceva, about 10 percent). The rest will indeed receive an expensive (and sometimes toxic) placebo, because population-based statistical clinical trials cannot ask or answer the molecular questions we need to ask and answer to get the right drug to the right patient at the right time. Because of the efficacy/expensive-placebo approach to deciding how to practice medicine that the stats produce, we decide which drug to use based on which one has the higher response rate, or the longer response, with no consideration of who should be getting which drug to treat the disease they have. Instead, patients get the drug that works for the population, on average, so unless they are that average patient, they are getting the wrong drug.
    I propose that this is not only bad science and bad medicine, it is intellectually stupid (oxymoron?), because in a lot of cases we can now do better, if we would choose to do better.
    The argument that the ISEL trial couldn’t be designed prospectively to evaluate the effect of mutations, because that work hadn’t been completed before the trial started, is mostly true (I recall there may have been some information available at that time), but that information was available and published by the time the FDA pulled the drug from the market. And that action by the FDA was the result of the narrow, linear thinking that statistical approaches produce.
    Worse, it was a poor and unnecessary decision. First, consider that the FDA and its Oncologic Drugs Advisory Committee explicitly acknowledged that the dissemination of information regarding the trial results was overwhelming, and that tracking of new prescriptions showed that almost no new prescriptions were being written for Iressa within a few weeks of the education effort undertaken by AstraZeneca and the FDA. Second, consider that the FDA decided to restrict approval of the drug rather than withdraw approval because it also knew some patients were receiving significant clinical benefit (this is unquestionably a fact), and if the drug had been withdrawn from them the agency would have been accused of harming or even killing patients (this is absolutely true, by the way, and Iressa is actually still technically approved, but no one can get it outside a clinical trial). Third, consider that the very statistics-minded director of the cancer drugs office at the FDA was (and still is) aggressively pushing a campaign to enforce the completion of Phase IV (post-approval) trials (whether they make sense or not, whether they are ethical or not, whether they are actually needed or not) and also sending a strong message that those trials had better meet pre-determined endpoints, or else. Finally, the information regarding mutations and the ability to pre-screen patients for those mutations to determine who is likely, and who is not likely, to respond to the drug was published and sufficiently convincing by the time the decision was made that it should have moderated the FDA’s decision.
    The effect was to kill the drug in the US. It remains approved in Japan, and possibly in other countries (I am not sure, but I do think it is still approved in Switzerland and maybe some other nations).
    The mutation data is the kind of information we need to move to the individualized therapeutic we know we need if we are to conquer individualized diseases like lung cancer. Claiming that Iressa did not work is simply chaining one’s mind to a single, rigid way of forming a hypothesis and testing that hypothesis.
    This debate between Ian and me is a sample of what is going on throughout the clinical research community. Population-based approaches to clinical testing have been around a long time, and most clinical researchers are deeply, intellectually invested in them. Often the discussions have the characteristics of a debate between an atheist and a fundamentalist; there is no movement or acknowledgement on either side.
    So here is an attempt to see both sides and to frame where we are on this. There are still a lot of diseases that are population-based diseases, where everyone is sick for essentially the same reason. HIV/AIDS, even with the mutations, is one of those, and population-based studies are useful for those diseases. Then there is lung cancer, where virtually every patient is unique to some degree at the molecular level in terms of what is driving their disease at the moment, what they will respond to, and what they won’t respond to, or have become resistant to, again at that moment. In lung cancer, population-based approaches aren’t working and won’t work, because it is not a population-based disease.
    We already know we need new approaches, but first we have to change minds and overcome resistance to change in what has become a very change-resistant scientific endeavor.
    We are at the beginning of a needed paradigm shift, and we are fighting it and keeping it from starting. That is what I think we can learn from Andy Grove and Intel. Welcome paradigm shifts, facilitate paradigm shifts, pursue paradigm shifts. That is what we need in medical research.
    And I am done.
    Steve Walker
    Abigail Alliance for Better Access to Developmental Drugs

  63. BS58 says:

    Great analyses, but I think the point is missed. Andy Grove is just some rich, spoiled guy who thinks he can buy whatever he wants. Now that he has PD, he thinks he can just go buy the cure. It doesn’t exist. The pharma industry is telling him this by not offering the product he wants when there is an obvious market. Shoot the messenger.

  64. justapdpatient says:

    It’s not “the pharma industry” that is withholding a treatment; it’s just one company. And the reasons they are withholding it are not scientific. They denied compassionate use even though the FDA OK’d it, and they are withholding it even though it does work, because they didn’t think it was a big enough moneymaker. It would involve brain surgery; there are not enough operating rooms in the nation to handle the load that would come in, and not enough trained surgeons… and last but not least, they knew a better delivery system would come along and they would not make enough money.
    Reading between the lines of Andy’s speech, you are dealing with two issues: i) the very poorly received comparison between the two businesses, which I think is more about championing a cause and just doing it [why does it take 3 years to get pre-clinical toxicity reports published?], and ii) a very sensitive generation of advanced PD patients, who are not elderly, by the way, whose treatment hopes are sitting on the shelf at that company.

  65. srp says:

    My points way up at #32 may have been missed in the flow of statistics wars comments, but I think the comments of Dave Eaton actually strengthen them. To the extent that biological systems are highly nonlinear in their responses, why would it make sense a) to focus on “rational” drug design aimed at postulated receptors and b) to require every proposed drug to go through an animal screen for efficacy? Many (most?) good drugs have unknown mechanisms of action and there isn’t much evidence that curing rats is either necessary or sufficient for curing people. I understand that the FDA forces companies to follow a) and b), but is that good public policy?

  66. Observer says:

    Quite an interesting email trail.
    I agree that biology is not so simple, there are so many complexities in chemistry and biology. However, this doesn’t mean it isn’t worth adapting some methods from engineering — like modeling complex systems — to improve the process. And to thoughtfully re-examine the whole drug development process itself.
    One quote from the discussion: “A big part of engineering, it seems to me, is making things combine linearly, so that one thing can be trusted to affect another predictably. Chemistry is not like that. Biology is double-secret not like that. And so they are not, and are not likely to become, engineering.”
    My experience with simulation and modeling is that they are all about helping you understand highly non-linear systems with lots of feedback, and the engineers I work with have experience in exactly that kind of world. They bring that understanding to biology and model it. So indeed engineers can deal with non-linearities and the complexity of biology. We won’t have everything modeled from an atom up for a long time but do we really need that, or do we need a model that is just good enough to help us with complexity, uncertainty, integrating all the data? Any good modeler can tell you it is all about choosing your level of abstraction, modeling at the right level to help make the decisions you need to make. In fact, I believe we can use different kinds of modeling at different times vs. one universal model for all (e.g., disease models to help you select the target pathway, pathway models to help you select the target protein, molecular models to help you optimize how you hit the target, etc.)
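    As a tiny, self-contained sketch of what “modeling at the right level of abstraction” can mean for a nonlinear feedback system (the motif and all parameter values here are invented for illustration, not a real pathway model):

```python
# Minimal sketch: a protein whose production is repressed by its own
# product (negative feedback), a textbook nonlinear motif. Integrated
# with simple Euler steps; all parameters are invented for illustration.
def simulate(steps: int = 20000, dt: float = 0.01,
             k_prod: float = 1.0, k_deg: float = 0.5, n: int = 4) -> float:
    x = 0.0  # protein level, starting from nothing
    for _ in range(steps):
        # Hill-type repression of production, minus first-order decay
        dx = k_prod / (1.0 + x**n) - k_deg * x
        x += dt * dx
    return x

# The system settles at the level where production balances decay,
# something a purely linear model would not capture.
print(f"steady state: {simulate():.3f}")
```

The point is not that this toy is realistic, but that even a few lines at a deliberately coarse level of abstraction let you ask “what if” questions (change a rate constant, change the feedback strength) without modeling every atom.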
    This isn’t theoretical. People in the industry are doing biological modeling and making key, large $ decisions based on it. It isn’t universal — as William Gibson said, “the future is here already, it just isn’t evenly distributed yet.”
    So my suggestion for those of you who think Grove is flat wrong — consider that maybe engineering, modeling can help. Consider that we are already using modeling in the industry, in the form of animal models, but we know they are woefully inadequate. So if computer modeling is imperfect but more predictive of human response than animal modeling, then we should give it serious consideration.
    As for re-examining the process, I realize that Grove’s comments on the performance of the industry, year-end goals, etc., are too simplistic. As others have noted, however, we do have incentives and metrics that don’t always lead to success. One example: if people in discovery are rewarded for shots on goal — how many compounds make it to the next phase — vs. goals — how many compounds make it to market — we are going to struggle to achieve success at a reasonable cost.
    Food for thought.

  67. stitcher says:

    justapdpatient said it well.
    It is JUST ONE COMPANY that is squashing Andy Grove. And shame on them! The arrogance is shameful!!
    Andy Grove has a right to speak his mind in any forum, and should be allowed to do so. Someone needs to step up to the plate and put Mr. Grove’s “speech” content online… now, before it also disappears. That a single company is so insecure, and that an organization can allow itself to be dictated to so as not to allow Mr. Grove to speak… shameful!!

  68. tgibbs says:

    His big error is viewing drug design as analogous to chip design. But really, it is more analogous to reverse engineering. A better analogy is this:
    Your company has a microprocessor, extremely complex. All of the documentation is lost, nobody knows the details of the design principles, although it is suspected that it uses a mixture of digital and analog logic, and it uses nanoscale circuitry below the ability of your microscopes to resolve. Unfortunately, the chips are extremely failure prone. Your company’s job is to reverse engineer the chip well enough to figure out how to improve reliability. What’s more, the actual chips are considered extremely precious, so you are obliged to do most of your work on another related microprocessor that is of a somewhat different design and generation.
    So when can we expect the fix?

  69. rick says:

    I swear I will attempt to come here in better shape tomorrow and not have to be apologetic. In the meantime, it was refreshing to see non-linearity even mentioned. There is a definite aspect of that to PD, but don’t even say the word “chaos” around a typical neurologist…
    There is a strong feeling in the patient community that profit rules, even to the point that research into cures is suppressed in favor of treatments because of the cash-cow effect. Don’t dismiss it out of hand. I have files on 82 promising leads from peer-reviewed journals that await follow-up but whose patentability is doubtful.
    What is really needed is a cross-disciplinary effort combining a half dozen disciplines pursuing dozens of elements at once.

  70. Monica Marcu says:

    1. Yes, we have not created life, so we do not understand it — unlike chips. BUT in order to function properly and healthily, we should observe life closely and follow its rules; they govern, among other things, human physiology. There is no way one can understand life by doing only research in a man-made environment with man-made “life” models (lab mice, plastic dishes). One needs to spend time in the middle of life: outdoors, in the wilderness. Observe connections and interdependence, cause and effect. A whole new view on health and disease opens up… Until we come closer to curing diseases like Parkinson’s, we should do more to prevent them. But how can we prevent them when we’re so damn good at killing and sickening everything around us: bees, frogs, plants… All is truly connected, and since there is more and more mess in the wild, there will be more disease in us.
    2. So true: “What stands in the way of more and faster success in getting cures to patients?
    The peer review system in grant making and in academic advancement has the major disadvantage of creating conformity of thoughts and values. … The pressure to conform [to prevailing ideas of what causes diseases and how best to find treatments for them] means … There is no place for the wild ducks. The result is more sameness and less innovation. What we need is a cultural revolution in the research community, academic and non-academic…. ”
    That revolution will not come any time soon, not for as long as the "scientists" have eliminated life from molecular biology and nature from medicine. What we mostly produce and publish is a reductionist collection of human-interpreted results from man-made "living" systems, and then we expect to develop drugs based on those results. For another valid argument, read "Why Most Published Research Findings Are False" by John P. A. Ioannidis. His findings are not false.
    I am waiting to see the book "Last Scientist in the Woods"; it is not only the child who misses out on the wonder of the wilderness, but also the thinker… Medical science terribly misses the wisdom of nature.
    For now I focus my efforts on prevention; the cures are far away.

  71. JIsaac says:

    DNA forms life; atoms form computers!
    Is DNA a form of atom, or is an atom a form of DNA? It's a chicken-and-egg question. Perhaps semiconductors were the priority during the 1900s, while pharma concentrated on the evolution of the many life-forming organisms, so we can see these two interests going in different directions.
    However, both of these fields contribute to society as a whole. I would not say that pharma and semicon are both correct; what is important is that Andy contributed to speeding up the process of evaluating formulas, results, and cures. Semicon is not specifically about solving mental problems, and pharma is not the only route to solving the problems of drug discovery; everything is important. What matters now is how our conscious minds will help society grow more seeds, stay healthy, and obtain a peaceful world. Let's give everyone a chance to speak in freedom; maybe someone has the brilliant idea that will spark and help the life sciences, but let's respect the almighty.

  72. cwilkes says:

    Great post again, Derek, as well as from the commenters. I'll also throw out that chip design is a new industry, and it is bound to make more impressive gains year over year than one that's been around for a while (and by that I mean the creation of all drugs).
    Grove just has to look at MSFT to see how long something takes once it gets to a certain level of complexity. Every time Microsoft announces a new operating system (95, NT, Vista), it comes out late, because the software engineers are building on top of a platform (the hardware) that they don't fully understand. That's not because they are lazy or stupid, but because there's only so much a single person can know. And it isn't limited to hardware: even with their own code, they don't fully know how it works.

  73. stitcher says:

    Okay, enough of the computer chat; let's get back to the business of what this started out to be about.
    READ Andy Grove’s speech to the Society for Neuroscience in its entirety
    See What’s New
    Click on “READ Andy Grove’s speech to the Society for Neuroscience in its entirety”

  74. weirdo says:

    I have files on 82 promising leads from peer-reviewed journals that await follow-up but whose patentability is doubtful.
    Two things:
    1) I don’t know what that has to do with the fallacy that companies are only interested in treatments and not cures. And a fallacy it truly is. It assumes that the scientists in the labs of pharmaceutical companies are just plain evil people (they know how to cure, but won’t). There is no other way to interpret such an inane misconception.
    2) What is stopping any of the Internet doubters from developing these 82 promising leads? Or the scientists who originally reported them? Or, um, you, for that matter? Venture capital isn’t THAT hard to find, particularly for good ideas that can cure people. Don’t tell me about patentability; I’ve seen so many presentations from biotech companies with molecules in development on questionable IP that I’ve lost count. Good molecules will get pursued by somebody. Maybe not Big Pharma, necessarily, but somebody. If nobody is following up, well, it’s probably the science.

  75. TNC says:

    Thanks to PDP, I read the Andy Grove presentation. It’s bad, but interesting.
    I find Andy Grove’s X01 idea to be overly idealistic, at best. Peer-reviewed science (in the form of NIH study sections), at the very least, is not subject to Congressional or Presidential pressure. X01 grants would fast become political footballs.

  76. ally kendall says:

    Well, recruiting a secretary of defense from Searle didn’t work any better than recruiting one from Ford, did it?

  77. John says:

    You’re all missing Grove’s point here. What he’s saying is that medical researchers can learn lessons from how engineers analyze failures in design. Rather than throw out an experimental drug because it failed, look more deeply at what caused it to fail and apply statistical methods (design of experiments, DOE) to help predict which “knobs” to turn. This is a clear case where our two industries should work together for the benefit of both, rather than throwing out flip comments like comparing a change in motherboard socket to drug delivery.
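    [Ed. note: the DOE idea John mentions can be sketched in miniature. Below is a hypothetical two-factor, two-level full-factorial example; the factor roles and response numbers are invented purely for illustration, not drawn from any real experiment.]

    ```python
    # Minimal sketch of a two-level full-factorial DOE analysis.
    # The two "knobs" (factors) and the measured responses are hypothetical.
    import itertools

    # Coded factor levels: -1 (low) and +1 (high) for two factors,
    # e.g. dose and dosing interval in a failure-analysis experiment.
    runs = list(itertools.product([-1, 1], repeat=2))
    # runs == [(-1, -1), (-1, 1), (1, -1), (1, 1)]

    # Hypothetical measured responses for the four runs.
    responses = [54.0, 60.0, 70.0, 80.0]

    def main_effect(factor_index):
        """Average response at the high level minus average at the low level."""
        high = [r for run, r in zip(runs, responses) if run[factor_index] == 1]
        low = [r for run, r in zip(runs, responses) if run[factor_index] == -1]
        return sum(high) / len(high) - sum(low) / len(low)

    # Ranking the effects tells you which knob to turn first.
    print(main_effect(0))  # effect of factor A
    print(main_effect(1))  # effect of factor B
    ```

    With these made-up numbers, factor A shows a larger main effect than factor B, which is exactly the kind of ranking a DOE screen produces before anyone commits to the next round of experiments.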

  78. knowitall says:

    Grove is right to be frustrated with the biomedical research community. He compares his world of IT with disease research and thinks they are readily interchangeable in principle. The problem is that his world has physics at its base, whereas biology is still at the stage where shepherds lying on their backs described star constellations. Biology desperately needs a central dogma in order to predict the relationships between genes and phenotypes. Andy should throw his billions at that instead of complaining.

  79. Eric J. Johnson says:

    That would absolutely require – to begin with, anyway – the ability to precisely calculate 3-d protein structures from the corresponding gene/protein sequences. Which has already been a well-known holy grail for a long time (not that you were clearly suggesting otherwise).

  80. If you look at the half-life of the comment postings on this one blog post, it becomes clear that the discussion pretty much petered out after just three days. This is unfortunate, since the dialogue prompted by Andy Grove is pretty healthy for our industry. Instead of dropping down to details like whether statistics is necessary or counter-productive, let’s focus on the main message: “What we are doing to put new treatments on the market has serious problems.”
    The problems are complex, and it will take initiatives on many fronts to solve them. What we need are different groups eating different parts of the elephant, and some method to continuously see the big picture.
    There is no doubt that we have some brilliant scientists who can get the job done. What we don’t have are good leaders who recognize the urgency of the situation and put real muscle behind it.
    Last, I agree with Derek that Andy Grove should start his own biotech company. Perhaps his first hire should be Derek. After all, as most of us have experienced, staying where we are just exposes us to lousy management who don’t listen to us either.

  81. jim o'hara says:

    We need more Andy Groves!

  82. Fred Glynn says:

    Anyone who looks at the medical profession as an industry, and at a disease as a company, will quickly come to see that the medical industry is not as organized as a semiconductor company. There are lots of experiments going on here and there that are of interest to those conducting them, but there is little coordination between them. That hinders progress. Furthermore, the researchers know that they will lose their jobs when they finally accomplish what they set out to do, and (possibly subconsciously) slow down to avoid running out of work. The lack of candor and sharing among people who should see themselves as true colleagues also impedes progress. Mr. Lowe, do you actually think that humans have made as much progress as they ought to have in the 2,500 years since Hippocrates? Even a thousandth of a percent of what we ought to have made? I think it’s far less than that. Andy Grove is right: nowhere near as much progress is being made as might have been with a “can do” attitude. So, I think you can expect to see Andy Grove start his own biotech company, which will (a) find a cure for Parkinson’s and (b) create a new and workable corporate model for for-profit biotech companies.

  83. Nick K says:

    Fred Glynn: Progress in medicine since Hippocrates’ time? Life expectancy in the West until fairly recently: 37 years. Life expectancy of people born in the late 20th century: 80+ years.

  84. Fred Glynn says:

    Nick K: Could you specify the last year in which 37 years was the life expectancy in the West? Could you specify the first year in which 80 years was the life expectancy? Also, are you assuming that an increase in life expectancy is an indicator of progress in medicine? Aren’t there an awful lot of other factors? I notice that you didn’t address at all the lack of (and slowness in) sharing of important information in the medical industry. I notice that you didn’t address at all the way in which taxpayers fund research that is then quietly shunted off to private for-profit companies, where important discoveries are sometimes, for all practical purposes, allowed to slumber or die.

  85. Fred Glynn says:

    Nick K: I think you will find that whatever increases in life expectancy have occurred over the past two hundred years are more the result of changes in infrastructure than of advances in medical knowledge. Advances such as the ability to perform open-heart surgery obviously affect a tiny fraction of a percent of the population and have a negligible effect on average life expectancy. Improvements in water supplies, removal and treatment of sewage, and the ability to raise larger crops and bring them to market more rapidly have played a vastly more significant role in increasing longevity than advances in humankind’s knowledge of the human body. And would you believe that an enormous number of people, if asked, would tell you that they believe in the efficacy of the diagnosis and treatment of leprosy that appear in Leviticus 13-14? And don’t you think it is amazing that the hocus-pocus of Leviticus was written around the same time that Hippocrates was urging the application of reason to the treatment of human ills?

  86. Nick K says:

    @85 Fred Glynn: Sorry not to have responded earlier to your post. Have a look at the paper by Adrian Gallop, who works for HMG as an actuary. You’ll see that life expectancy, in the UK at least, continued to increase throughout the 20th century, well after the (unarguable and huge) improvements bestowed by advances in sanitation and the like. Furthermore, the most significant increases recently have been for the elderly. This finding is completely inconsistent with your hypothesis. Figure 2 in that reference, showing life expectancy at 65 from 1850 to 2000, is particularly clear in this regard. You will see that there has been a huge improvement, from 10-12 years to 16-19 years, starting in the early 20th century and continuing dramatically upward from the 1940s onward. Incidentally, take a look at Figure 1. Notice that life expectancy at birth for men in England and Wales was around 40 until as late as 1880.

Comments are closed.