The Duke Cancer Scandal and Personalized Medicine

Here’s a good overview from the New York Times of the Duke scandal. Basically, a team there spent several years publishing high-profile papers, and getting high-profile funding, and treating cancer patients based on their own tumor-profiling biomarker work. Which was shoddy, as it turns out, and useless, and wasted everyone’s time, money, and (in some cases) the last weeks or months of people’s lives. I think that about sums it up. It was Keith Baggerly at M. D. Anderson who really helped catch what was going on, and Retraction Watch has a good link to his presentation on the whole subject.
The lead investigator in this sordid business, Anil Potti, ended up retracting four papers on the work and left Duke last fall (although he’s since resurfaced at a cancer treatment center in South Carolina). That’s an interesting hiring decision. Looking over the case (and such details of it as Potti lying about having a Rhodes Scholarship), I don’t think I’d consider hiring him to mow my yard. Perhaps that statement will be something for his online reputation management outfit to deal with.
But enough about Dr. Potti himself; I hope I never hear about him again. What this case illustrates are several very important problems with the whole field of personalized medicine, and with its public perception. First off, for some years now, everyone has been hearing about the stuff: the coming age of individual cancer treatment, biomarkers, zeroing in on the right drugs for the right patient, and so on. You’d almost get the impression that this age is already here. But it isn’t, not yet. It’s just barely, barely begun. By one estimate, no major new cancer biomarker has been approved for clinical use in 25 years. (Update: changed the language here to reflect differences of opinion!)
Why is that? What’s holding things up? We can read off DNA so quickly these days – what’s to stop us from just ripping through every cancer sample there is, matching the results up with who responded to which treatment regimen and which cancer targets are (over)expressed? That’s what all these computers are for, right?
Well, that sort of protocol has, in fact, occurred to many researchers. And it’s been tried, over and over, without a whole lot of success. Now, there are some good correlations, here and there – but the best ones tend to be in relatively rare tumor types. There’s nowhere near as much overlap as we’d like between the cancers that present the most serious public health problems and the ones that we have good biomarker-driven treatment data for. Breast cancer may be one of the fields where things have moved along the most – treatment really is affected by checking for things like Her-2. But it’s not enough, nowhere near enough.
So why, then, is that the case? Several reasons – for one, tumor biology is clearly a lot more complex than we’d like it to be. Many common forms of cancer present as a host of mutated cells, each with a host of mutations (see this breast cancer work for an example). And they’re genetically unstable, constantly changing. That’s why so many cancers relapse after initially successful treatment – you kill off the tumor cells that can be killed off, but that may just give the ones that are left a free field.
Given this state of affairs, and the huge need (and demand) for something that works, the field is primed for just the sort of trouble that occurred at Duke. Someone unscrupulous would have no problem convincing people that a hot new biomarker was worthwhile – any patients that survived would praise it to the skies, while the ones that didn’t would not be around to add their perspective. And even without criminal behavior, it’s all too easy for researchers to honestly believe that they’re on to something, even when that isn’t true. The statistical workup needed to go through data sets like these is not trivial; you really have to know what you’re doing. Adding to the problem, a number of judgment calls can be made along the way about what to allow, what to emphasize, and what to ignore.
The other problem is that cancer is such an emotional issue. It’s very easy for anyone with a drum to beat to join in at full volume. Do you think that the FDA is letting all sorts of toxic junk through? Or do you think that the FDA is killing people by being stupidly cautious? Are drug companies ignoring dying patients, or ruthlessly profiteering off them? Are there too few good ideas for people to work on, or too many? Come to oncology; you can find plenty of support for whatever position you like. They can’t all be right, but when did that ever slow anyone down? Besides, that means that there will invariably be Wrong-Thinking Evil People on the other side of any topic, and that’s always stimulating, too.
It is, in fact, a mess. Nor are we out of it. But our only hope is to keep hacking away. Wish us luck!

22 comments on “The Duke Cancer Scandal and Personalized Medicine”

  1. JIA says:

    Hi Derek,
I can’t get to the paper by Eleftherios P. Diamandis (“Cancer Biomarkers: Can We Turn Recent Failures into Success?”) because it’s behind a paywall, but I don’t understand the assertion that “no new major cancer biomarkers have been approved for clinical use for at least 25 years”. On the contrary, the FDA has recently reviewed data for (e.g.) KRAS mutations in colon cancer to determine if a patient should receive anti-EGFR therapy or not, resulting in changes to the labels of those drugs. CRC is a major class of cancer!
    I could cite other examples — and more are in the pipeline for near-term approval, such as BRAF V600E in melanoma (vemurafenib), or the EML4-ALK fusion in non-small cell lung cancer (crizotinib). Perhaps these aren’t considered “major” enough? They certainly count as personalized medicine in my book. Can you please explain more what your criteria are for a successful biomarker?

  2. Derek Lowe says:

    He seems to be especially highlighting things that are useful for population screening and early diagnosis. His definition: “A major biomarker is one that is used widely at the international level, has been FDA [US Food and Drug Administration]-approved, and is recommended by experts for use in clinical practice in professional practice guidelines, such as the ones issued by the American Society of Clinical Oncology”
    I’ll reword that sentence a bit, though – should be up in a minute or two.

  3. pete says:

    @1 JIA
    I’d also add the approval of diagnostic multi-gene-screen tests for predicting recurrence risk in certain cancers.

  4. Jacko says:

    A nation of burger flippers getting their ($10,000-100,000/year) personal cancer cocktails changed every few months. That’s assuming you can trust someone in this banana republic to actually be ethical. There is no legal recourse in personalized medicine. How can 1 person sue anyone?
    I mean you people are really something. All these idiots called TAXPAYERS “running for cancer” when in fact it won’t be for them because in a post peak oil world it won’t be affordable except for the glorious MBAs. No competitive energy infrastructure for real wealth but a bunch of clown chemists doing a great job of bankrupting the state.
    Now ya’ll go put up your solar panels and wait for 3rd world status, ya hear.

  5. qetzal says:

    Ditto what Pete said. Seems like the Oncotype Dx test would meet Diamandis’s definition, at least based on Derek’s quote in comment #3. One could argue that multivariate tests aren’t really biomarkers in the normal sense, but they’re serving the same function.
    I agree that personalized medicine is often overhyped and still has a very long way to go before it can live up to the public’s expectations. But I don’t see any reason to discount those successes that have already been achieved.

  6. I’m with JIA. By that definition, KRAS makes it, as does OncoTypeDx, as pete says. And Her-2 was approved less than 25 years ago, wasn’t it? And what about the AML genetic marker from NEJM this past year?
    This may show these tests need to be evaluated by a robust regulatory system. We don’t have any regulatory apparatus in place, and the science is moving fast.

  7. Anonymous says:

    FYI I just emailed Coastal Cancer Center to express my outrage that they are employing this sociopath (not in those terms). Either they don’t know this d-bag’s history or they are a bunch of snake oil salesmen emptying the pockets of dying cancer patients. Judging by their Geocities website I assume it’s the former. Here is their email address to express your concern:

  8. Matt says:

    Why does the Burzynski Clinic come to mind when I read this article…?

  9. RKN says:

    What about the bcr-abl fusion gene in CML, approved for clinical use in, I think, ~2000?
    3 kinds of biomarkers: diagnostic, prognostic and predictive.
    As for screening, if cancer is not regulated at the level of transcription, then profiling changes in mRNA won’t capture the relevant unit of dysregulation.

  10. Greg Pawelski says:

    The good news is that everyone has now come to the conclusion that “personalized medicine” is the new paradigm.
    The bad news is that “personalized medicine” is now synonymous with “molecular medicine.”
    Is everyone familiar with the Hans Christian Andersen story “The Emperor’s New Clothes?”
    That’s what it’s about in “molecular medicine,” as it pertains to drug selection in cancer medicine. By now, we thought we’d at least be selecting single agents on the basis of molecular technologies. But the investigator who seemingly made the most progress (Anil Potti) has been discredited.
    The early papers (including a 2004 NEJM study in acute leukemia) have seemingly not led anywhere constructive. It’s as close to worthless as worthless can be.
    Actually, the exact word to use would be crap, but someone has already used it:
    “100 Percent Crap”
    Donald Berry, chairman of the Department of Biostatistics and head of the Division of Quantitative Sciences at MD Anderson, said the Duke scandal [i.e. Potti] puts the entire field of genomics at risk.
    “About 10 years ago, I read in Newsweek that the high-paying, glamorous job of the new millennium was bioinformatics,” Berry, one of the statisticians who signed the letter to Varmus, said in an email. “We were going to cure diseases in the near time frame. (Francis Collins was at the forefront of pushing this attitude.) My reaction was that we didn’t know how to handle one gene (and we still don’t), never mind 20,000 genes.
    “It was clear then, and it is clear now, that false-positive leads pop up all over the place and we have to keep banging them back down, as in ‘Whack-a-Mole.’ I say ‘we.’ Unfortunately, few people understand this, although the plethora of unconfirmable observations gets people asking, ‘Why?’ I’ve been saying for years that 90 percent of biomarker studies are crap. And this is so even if the logistical, study-conduct issues are carried out flawlessly. Sloppiness a la Potti/Nevins leads to 100 percent crap.”
    But it’s not just Potti, and it’s not just microarrays. The whole concept of using molecular “signatures” of any kind to do anything beyond the most straightforward of cases (i.e. single gene mutations, etc.) is so flawed that everyone should have seen the problems at the beginning.
    The reason no one seems to see it now is that the technology itself is so elegant and beautiful. But a beautiful biological technology is no different from a beautiful computer technology — it’s not worth much without some very good applications (“apps”), and personalized molecular medicine is still waiting for its first killer app.

  11. David Young says:

    I agree with Derek that medical science is still very much just beginning to understand personalized medicine. JIA brings up several instances where special tests do help us distinguish what cancer treatment is optimal. These tests more often tell us when certain drugs do not work. Herceptin is useless in Her-2 negative breast cancers, EGFR antibodies are not helpful in KRAS mutant colon cancer, Imatinib is not helpful in chronic lymphocytic leukemia (well, that distinction is not a specialized test), BRAF inhibitors are not helpful in melanoma that is not BRAF driven (mutated). The list is not long. Our office gets called on with some frequency by companies that convey to me that they have the best tumor panel that predicts optimal treatments. I don’t believe any of them and ask them some very pointed questions. Most of our present tests instruct us as to when certain drugs do not work. The converse is problematic. What if one tests a Merkel cell tumor for BRAF mutation… does that allow us to use the (upcoming) BRAF blockers? No, they won’t get paid for. Same for a lot of “off label indications” where a test might actually suggest a response. Won’t get paid for.

  12. GM says:

    I am with JIA. Check this on how biomarkers are getting into drug labels…

  13. Ken Rubenstein says:

    Considering the amount of effort that has gone into personalized medicine and the small handful of useful tests now in routine practice, the whole field seems pretty much a dry well. Much ado about not much. In fact, there are a modest, but growing number of researchers and publications that cast doubt on the cell-based somatic mutation theory of carcinogenesis in favor of a tissue level paradigm. I tend to side with them.

  14. Esteban says:

    FYI, Keith Baggerly has a history of debunking shoddy analyses of genomic data. I admire him greatly.

  15. qetzal says:

    @Greg Pawelski:

    The whole concept of using molecular “signatures” of any kind to do anything beyond the most straightforward of cases (i.e. single gene mutations, etc.) is so flawed that everyone should have seen the problems at the beginning.

    So do you think Oncotype Dx was just a lucky fluke? (Serious question – no snark intended!)

  16. @qetzal
    When I write about molecular signatures, it’s in regards to “drug selection.”
    The Oncotype Dx test identifies patients who are unlikely to have a recurrence if treated with surgery alone. If you aren’t likely to have a recurrence, you don’t need chemotherapy. If a genomic test can help to find out if a cancer patient will benefit from chemotherapy or not, and if they do, a cell-based functional profiling assay can help see what treatments have the best opportunity of being successful.
    Other tests, such as those which identify DNA or RNA sequences or expression of individual proteins, often examine only one component of a much larger, interactive process. Functional profiling looks at the entire genome.
    Oncotype DX can measure the activity of dozens of genes and reveal which ones are most active. The test is expensive, but many insurers cover it because it often prevents even more costly and unnecessary chemotherapy.
    Predictive accuracy is the only data existing to validate the Oncotype DX test, which wasn’t a prospective study and certainly wasn’t a “real world” study. The Oncotype DX test has been validated only by the original laboratory group which published the results.
    Also, no one is seriously proposing that any of the molecular tests now available (Oncotype DX, EGFR amplification/mutation) should have to be proven efficacious, as opposed to merely accurate, before they are used in clinical decisions regarding treatment selection.
    Here is an interesting caveat. The validation standard private insurance companies accept for molecular (genetic) tests is “accuracy” and not “efficacy.” No longer will it be essential to prove that the use of a diagnostic test improves clinical outcomes; all they have to do for these tests is prove that the test has a useful degree of “accuracy.”
    However, the validation standard they want for cell-based functional profiling tests is “efficacy” and not “accuracy.” Why the double standard? Genetic assays have established absolutely no data relating to assay “efficacy,” and have much less data relating to assay “accuracy” than exists to support the application of cell-based functional profiling assays.

  17. HFM says:

    In addition to the examples above, I’d point out that imatinib is now being used in KIT-mutant melanoma (~3% of patients). You’d never get a 3% response past a clinical trial – I was told they’d tried Gleevec on just about everything when it first came out, and melanoma was one of the failures. But if you can sequence the patients, no problem.
    The problem is, obviously, trying to figure out who gets what drug. Sequencing is getting cheaper, but if you aren’t just checking the “greatest hits”, the amount of genomic messer-uppery is prohibitive. RNA profiling doesn’t get everything, and proteomics…well, it’ll be awesome someday. And then you have to generate and sort through all that data in something approaching real time (so you can actually treat the patient).
    Then there’s the problem of combination therapy. A single drug will put you in remission for a year, if you’re lucky, and then it’s useless. Some combinations work, some just make things worse. The resistants have mutations you don’t see in naive cells – not just the obvious binding-site kind, but wholesale rewiring. It’s a mess.
    And that’s assuming you have the drugs. As someone who’s spent time in this area, I will quote a drunken biologist: “Just tell those [valued colleagues] in chemistry to drug the rest of the [valued] genome, and we’ll hold up our end!”
    (So…you hear that, valued colleagues? Just cough up another 20,000 or so active molecules, and we can put this cancer thing to rest.)

  18. Morten G says:

    Try writing “Circulating tumor DNA as a cancer biomarker” in pubmed. Heck, try it in Google.

  19. MIMD says:

    Duke doesn’t have a good track record in scandals.
    As to cancer markers, perhaps that hard wall of bioinformatics stagnation I wrote about some years ago has something to do with it.

  20. Rick says:

    Apropos of this discussion, check out Gina Kolata’s article in today’s (July 19) NYTimes. I think she’s being too optimistic about the present situation and the near-term prospects.

  21. Skeptic says:

    There is a great deal of judgment going on at this site. So, humans can’t commit error? What if they do, and then they retract the publication – that was one of Kuhn’s first criteria for being a “responsible scientist.” You should read Thomas Kuhn’s “The Structure of Scientific Revolutions”, and read more about the history of science.

  22. Susan Gurney says:

    The problem is that our funding system rewards positive findings, rather than honest reporting of results, even if they are negative. Scientists care about maintaining funding, and few of them have the time or energy to pick up and start from scratch if their results are ‘noise-filled.’ There is tremendous waste of time and effort in this system, and heroes are few and far between.
