This new paper has generated a lot of headlines (Science news writeup here). It reports work on the long-sought “liquid biopsy” idea for cancer screening, the use of circulating biomarkers to detect tumors via a blood test. The idea has obvious appeal, so much appeal that many news stories over the years have gotten well ahead of the facts.
Looking closer at this work, it really is the best thing of its kind I’ve seen. And it really isn’t good enough. The team looked at about a thousand patients with clinically diagnosed tumors (stage I to III, nonmetastatic) of eight different types (ovarian, hepatic, stomach, pancreas, esophageal, colorectal, lung, and breast) and tried to detect both protein and DNA markers of their presence. Of these, there are screening techniques available for only two types (lung and colorectal), so something that caught these and others in the general population could be very useful. Hold that thought, though.
The DNA part of the screen uses mutations known in the Catalogue of Somatic Mutations in Cancer (COSMIC) database, detected by PCR. The proteins are from a list of 41 reported candidates (39 of which turned out to have some use). The actual test ended up using 16 DNA markers and 8 proteins, because the gain in signal from larger sets was not worth it (or actually made things even noisier), and the combination is called CancerSEEK. Detection varied, as of course it would: ovarian and hepatic tumors showed the highest sensitivity, with the test picking up over 95% of the tumors in the sample. At the other end of the scale, the breast cancer detection rate was down in the 30% range.
So overall, there were quite a few false negatives. Fortunately, the false positive rate was lower – in a screen of 812 people without any detectable cancer, only 7 showed up positive. That still needs improvement (numbers coming up), but it’s a good start. The bigger problem comes if you’re going to use this test for early detection: when the team looked at the detection rates adjusted for the stage of the diagnosed tumors, CancerSEEK turned out to catch only 43% of the Stage I cases overall.
Real-world data is going to be generated on this one – the funding (up to $50 million) is in place for a five-year study in women 65 to 75 who have never been diagnosed with cancer (up to 50,000 patients). The criterion will be two positive readings (because of that 1% false positive rate), after which imaging techniques will be used to try to find the tumors. (I should mention that the team is able to broadly localize the tumors as things stand, from the biomarkers themselves). And here is where the arguing will start.
The key is the actual number of people in that sample who have cancer. With 50,000 seventy-year-olds, you are definitely going to pick up real cases. From these tables of incidence rates, I would guess about 700 people in that sample will actually be diagnosed with cancer (I’ll assume that these are all types that this test will detect – that’s not quite right, but close enough for illustration). So we have 49,300 people who are actually cancer-free. The first pass will tell 493 of them that they have cancer when they don’t, so you’ll test again. That will narrow it down to five people who are actually healthy but have had (unfortunately) two positive liquid biopsies in a row. Meanwhile, you’ve tested everyone else again, too (48,807 people), and told 488 of them that they have one positive result.
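That cascade of false positives is easy to check for yourself. Here’s the arithmetic as a few lines of Python – a back-of-the-envelope sketch, assuming a flat 1% false-positive rate and independent test runs:

```python
# Two-pass screening arithmetic for the cancer-free group,
# assuming a 1% false-positive rate and independent runs.
healthy = 49_300
fp_rate = 0.01

first_pos = round(healthy * fp_rate)    # 493 healthy people flagged on pass one
twice_pos = round(first_pos * fp_rate)  # ~5 unlucky enough to be flagged twice
retested = healthy - first_pos          # 48,807 who were clean the first time
late_pos = round(retested * fp_rate)    # ~488 picking up one positive on pass two

print(first_pos, twice_pos, late_pos)   # 493 5 488
```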
What of the 700 who actually have cancer? The stage at which diagnoses are made varies by cancer type – lung, sadly, is typically picked up at a more advanced point (65% Stage III or IV), while breast cancer diagnoses are 65% Stage I. The hope is, of course, that a simple test like this one will pull more of the harder-to-catch types back to the earlier stages, but as noted above, the problem is that the detection rate for those early-stage cancers is pretty weak. To ballpark it, let’s say that of the 700 patients in this study you’d expect to be diagnosed with cancer, half of them (350) are going to be Stage I. The test is going to miss 57% of them (200 patients) on the first pass, and it’s going to miss another 57% (114) of those people on the second run. Meanwhile, of those other 150 people who showed up positive the first time, 85 are going to be missed when you test again. From the paper, the detection rate for later-stage cancers averages about 75%, so of the other 350, you’ll miss 87 the first time around, and 22 of those will be missed on the second pass. Of the 263 later-stage patients who tested positive the first time, you’ll miss 66 with an incorrect clean test the second time.
Net among the healthy population (49,300 people) is that you’ve told 48,319 of them, correctly, that they’re OK (98%). You’ve given some real worries to 976 of them, though, and flat-out terrified five. Net of the group that actually has cancer (700 people) is that you’ve correctly told 262 of them (37%) that they have cancer. Unfortunately, you’ve also told 136 of them (19%) that they’re apparently OK, because they’ve passed two liquid biopsies in a row. And you have 302 patients (43%) who are in the gray area of one positive, one negative – the same category as those other 976 people who are actually OK.
In fact, since we really don’t know who those 700 patients are, the “blinded” version that we’ll actually see is something like this: out of 50,000 people, 48,455 of them have shown no cancer in two tests in a row (although 136 of them actually have cancer). Meanwhile, 267 people have shown detection of cancer twice in a row (although 5 of them are actually cancer-free). And you have 1278 patients who are one up/one down. 302 of those actually have cancer, although you certainly don’t know which. I should note that these numbers are probably optimistic. The expectation is that the false positive rate will be somewhat higher in the real-world study, for example, since its population is uniformly older.
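For anyone who wants to check the bookkeeping, here’s the whole two-pass tally as a short Python sketch. The assumptions are the ones used above – a 1% false-positive rate, 43% Stage I sensitivity, roughly 75% later-stage sensitivity, a 350/350 stage split among the 700 real cases, and independent test runs. Fractional people are left unrounded, which is why these totals land within a person of the figures in the text:

```python
def two_pass(n, hit_rate):
    """Split n people by two independent tests that each 'hit' at hit_rate."""
    both = n * hit_rate * hit_rate                 # positive on both passes
    neither = n * (1 - hit_rate) * (1 - hit_rate)  # negative on both passes
    mixed = n - both - neither                     # one of each
    return both, mixed, neither

healthy = two_pass(49_300, 0.01)  # a "hit" here is a false positive
stage1 = two_pass(350, 0.43)      # a "hit" here is a true detection
later = two_pass(350, 0.75)

twice_pos = healthy[0] + stage1[0] + later[0]  # ~267 flagged twice
one_each  = healthy[1] + stage1[1] + later[1]  # ~1,279 in the gray area
twice_neg = healthy[2] + stage1[2] + later[2]  # ~48,455 cleared twice
```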
Update: it’s been pointed out in the comments that I’m assuming here that the false positives are independent each time, which is likely not the case. Inflammation, for example, could well produce them every time you test, so these really are optimistic numbers!
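To see how much that independence assumption matters, compare the two extremes – false positives as independent coin flips each time, versus the same people (chronic inflammation, say) lighting up on every single test:

```python
# Healthy people flagged positive twice in a row, under two
# assumptions about how false positives behave across repeat tests.
healthy, fp_rate = 49_300, 0.01

independent = healthy * fp_rate * fp_rate  # ~5: two unlucky draws in a row
correlated = healthy * fp_rate             # ~493: the same people flag every time
```

Anywhere between those two numbers is possible, and the real-world study will presumably tell us where on that spectrum the test actually sits.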
But hold on. We haven’t even gotten to the hard part. Now we get to ask a really tough question: of those 267 people that the study will presumably move to thorough imaging, how many of those cancers should be treated? This is something that not everyone thinks about – normally, the impulse is that a cancer diagnosis has to be treated immediately and aggressively. But we know that there are people whose tumors are so slow-growing and benign that they’re going to die with them, not of them. Prostate cancer is the best-known type with a number of patients in this category. You will do these people a great disservice by giving them surgery, radiation, chemotherapy, what have you, and you will spend a great deal of time, effort, and money that could go to people who need it. As the Science news piece has it:
For those who test positive twice, the next step will be imaging to find the tumor. But that will bring up questions raised by other screening tests. Will the test pick up small tumors that would never grow large enough to cause problems yet will be treated anyway, at unnecessary cost, risk, and anxiety to the patient? Papadopoulos thinks the problem is manageable because an expert team will assess each case. “The issue is not overdiagnosis, but overtreatment,” he says.
That, I have to say, sounds a bit like the NRA’s line that guns don’t kill people, people do. The tricky part will be keeping overdiagnosis from leading to overtreatment, which it generally does. Good luck to the expert team. I have had my disagreements with Vinay Prasad in the past, but this paper in the BMJ is worth reading on this topic. The problem is that the statistics showing that “cancer screening saves lives” are based on disease-specific mortality, not overall mortality. The trials needed to show a benefit in that latter (unproven!) category would have to be huge, but those are the numbers we really need: how many people die? Some types of screening (the PSA test) don’t even seem to show a benefit in disease-specific mortality, much less overall mortality, and the numbers for (say) mammography are very much a matter for debate.
I realize that sounds flat-out heretical, but the general perception of the benefits of cancer screening really is not in line with reality. That said, a really solid liquid biopsy type test might be the sort of thing that could tip the balance towards unequivocal benefit (after all, there are fewer cases of hepatic or pancreatic cancer that are better left alone). The new work I discussed above, for all its shortcomings, really is an important step towards such a test. But as it stands, it isn’t one itself. That’s for the future, damn it all.