I wanted to highlight this post at FiveThirtyEight, because it makes an important point about something that chemists, biologists, and MDs get to hear a lot about: nutritional science. Headlines have been produced for decades about how you should eat this superfood and avoid that toxic one, how this berry will protect you from Alzheimer’s and that vegetable from colon cancer, how eating X is associated with Bad Disease Y, and so on and so on.
And you know what? The scientific rationales behind most of these are pitiful. Have a look. And have a look at how the numbers are generated – food diaries, attempts at recalling what you’ve had to eat over the past three months, that sort of thing. No one locks up five hundred people in a warehouse and feeds them precisely measured portions of People Food Mix, and without that, the numbers are always going to be fuzzy. Really, really fuzzy, to the point that the great majority of all these eat-this stories are noise, sheer noise.
Although concerns about self-reported dietary intakes have been around for decades, the debate has come to a head in recent years, said David Allison, director of the University of Alabama’s Nutrition Obesity Research Center in Birmingham. Allison was an author of a 2014 expert report from the Energy Balance Measurement Working Group that called it “unacceptable” to use “decidedly inaccurate” methods of measurement to set health care policies, research and clinical practice. “In this case,” the researchers wrote, “the adage ‘something is better than nothing’ must be changed to ‘something is worse than nothing.’”
Indeed. Bad data are worse than no data, and the situation is made far worse by the headline-ready nature of the material, not to mention its store-shelf-ready nature. People, understandably, would like to know if there’s some food that they’re eating that is substantially raising their risk of disease, or substantially lowering it. Journalists know that people want to read about this stuff, and marketers know that there are vast sums to be made by catering to hopes and fears: “Just eat this!” You couldn’t ask for a setup that could lead to more hype and muddle, and that’s what we’ve got.
My strong impression is that if we could put error bars on the results of observational nutrition studies, they’d be appalling. That goes both for the amounts and types of food that people report having eaten (or avoided) and for the conclusions about human disease. Just a look at the constant weathervaning over the last thirty years should be evidence enough: coffee/wine/butter/what-have-you are good, bad, good again, actually sort of bad, good for some people, bad compared to X, good compared to Y, were good all along, were thought to be good but now are bad. . .it’s never-ending. And that’s because human nutrition is extremely complex, varies from person to person (and population to population), and because the numbers are crap.
Short of arsenic berries, should those exist, it’s not easy to pin down One Specific Food that has a gigantic, inarguable effect on long-term human health. Yeah, I think that a twenty-year diet of bacon-wrapped Twinkies would probably be inadvisable. But so would a twenty-year diet of nothing but brown rice and zucchini. We should all try to stay in between those two, and stop reading food headlines.