I enjoyed this article at FiveThirtyEight, because I’ve had similar thoughts over the years myself. There are layers of knowledge about many topics, and it can be hard to be sure what layer you’ve made it to, and whether there’s another one underneath you yet. The first example in the piece is the spinach-has-a-lot-of-iron idea, which many people have heard of. A subset of those people will tell you that this got started many decades ago because of the “Popeye” comic strip, and that it’s actually a myth, that spinach doesn’t have an unusual amount of iron compared to other greens and vegetables. And a subset of those people have heard the story about how the whole error was based on a misplaced decimal point in some earlier German data on iron content, which propagated for years until someone noticed.
But as it turns out, that last story is a legend, too, which has also been handed along uncritically. (As an aside, I was sure that a fair amount of the iron in spinach wasn’t very bioavailable because of its oxalic acid content, but it turns out that this is not necessarily the case, either). It’s easy to take the first story you hear as the explanation, but in some ways, it’s even easier to take the second one and figure you’re done, especially if you’re used to seeing first explanations not quite pan out.
An example of this from medicinal chemistry would be thalidomide. Pretty much everyone has heard of the terrible effects it had on pregnant women. Many chemists and biologists are sure that these were due to one enantiomer of the compound, and that dosing the other might have prevented the catastrophe. But that’s not true, either: thalidomide’s chiral center racemizes quickly in vivo, so it really doesn’t matter if you dose a single enantiomer. And the teratogenic effects vary so widely from species to species that the various stories about how they would surely have been caught (or were certain to have been missed) are generally confused as well.
This explanation seems quite plausible:
Complicated and ironic tales of poor citation “help draw attention to a deadly serious, but somewhat boring topic,” Rekdal told me. They’re grabby, and they’re entertaining. But I suspect they’re more than merely that: Perhaps the ironies themselves can help explain the propagation of the errors.
It seems plausible to me, at least, that the tellers of these tales are getting blinkered by their own feelings of superiority — that the mere act of busting myths makes them more susceptible to spreading them. It lowers their defenses, in the same way that the act of remembering sometimes seems to make us more likely to forget. Could it be that the more credulous we become, the more convinced we are of our own debunker bona fides? Does skepticism self-destruct?
It can indeed. We humans have a narrative bias: an explanation that makes a good story is going to get elevated compared to the ones that don’t. And if you’re used to the initial explanation for something always being a matter of possible doubt – that is, if you’re a working scientist – then you have to watch out for this constantly. I know that I’m personally more attracted to long-shot contrarian explanations than I strictly should be, because of this very effect.
Where do you call a halt, though? The article notes, disturbingly, that conspiracy theorists and people like anti-vaccine activists are also skeptical of prima facie explanations and believe that they have deeper truths that are unrealized by the masses. I’m looking for reasons to differentiate my own ways of thinking from theirs, for my own psychological comfort, and one that I can adduce is that in my work I’m often forced to give up on things that I believed (or hoped) to be true, after seeing the evidence against them. A real conspiracy theorist isn’t going to let that sort of thing slow them down much – that contrary evidence is just what they want you to believe, while the real evidence always points back to the real explanation. Which is the pet theory that the person was actually convinced of at the start, most of the time.
Not to say that scientists can’t walk off into that pit, either – happens all the time, on small issues and on big ones. The lines are sometimes finer than we’d like to admit. A hypothesis might need some real dedication and effort to be properly tested, but when does that cross over into a too-attached refusal to accept that it’s been tested and failed? No lights flash and no buzzer sounds. It’s like figuring out when to kill a drug project, because there’s always something else to try, and there’s always a chance that it might still work out in the end. Many a huge drug success has looked like a failure at one point or another, so how can you stop work on something just because it looks like it’s going to fail? On the other hand, what better reason is there, anyway?
Uncritical acceptance of whatever comes along is clearly a mistake, but total reflex skepticism of everything is a mistake, too, as well as being so hard to live by that you run into inconsistencies immediately. Like Samuel Johnson kicking his rock, you have to have some assumptions in there somewhere. When I open up a bottle of reagent, I generally work as if I believe it to be what’s on the label, and set up my reaction accordingly. There are gradations and shades, though – if it’s an unusual reagent from a supplier that I don’t know, or have had reason to doubt in the past, then I’ll probably look into the stuff a bit more. But if it’s a common chemical from a reputable company, then no, I’m not going to take an NMR to make sure that this 4-liter jug of acetone really is acetone before I use it to rinse out a flask. (We will leave aside, for now, the time in grad school that I found someone else’s rubber pipet bulb – now swollen to the size of a small trout – floating in the can of acetone I was using for just that purpose, which accounted neatly for the fact that my newly rinsed glassware was still sticky).
The FiveThirtyEight article does not end on a comforting note. The person profiled in it, Mike Sutton, is now working on the story that natural selection was actually hit upon decades before Darwin by someone else, Patrick Matthew. Darwin himself acknowledged that much, but the question (which I wasn’t aware of before reading this) is whether he cribbed it from Matthew in the first place. Sutton now thinks he did, but many other Darwin scholars think the evidence is nowhere near strong enough. This, to me, illustrates a problem in between the two fallacies just mentioned, though: the situations where there just isn’t enough to say for sure. Deciding that there never is enough evidence to say that anything’s for sure is another way of describing total skepticism, and deciding that any old evidence at all is plenty is uncritical acceptance. But these are easier to differentiate when the argument has a chance of being won, like being able to decide that yeah, that is reasonably pure acetone in that can, or alternatively that there is in fact a well-soaked pipet bulb in there, too. When you’re talking about the historical record, things get fuzzier quickly, which is why there are still Holocaust deniers out there, despite this being one of the better-attested historical events of the twentieth century. George Orwell started off a newspaper column by talking about how Sir Walter Raleigh at one point abandoned his plan to write a history of the world, because while imprisoned in the Tower of London he was unable to even figure out what had caused a disturbance just within his earshot. (Of course, I’m not even sure if this story is accurate – Orwell himself couldn’t resist adding that if it wasn’t true, it should be).
Which takes us back, finally, to the judgement-call aspect of the whole thing. As mentioned above, the wilder conspiracy theories trip themselves up by what evidence they’re willing to accept: for example, if you’re ready to call the Holocaust a fake, how do you know that World War II happened at all? In the end, we have to start talking about “the weight of the evidence”, and decide for ourselves how weighty it needs to be, what scales we’re using to measure it, and how much we trust the scales. Historians have it far worse in that department than scientists do, since we have actual scales to work with, but that doesn’t mean we escape the problem completely.