
The Limits of Sweet Reason

Here’s a meta-question for you: to what extent can we scientists explain what we do, and to what extent does that change anyone’s mind? That sounds like a cry of despair, but it’s not. As someone who writes about science for a mixed audience, though, it does cross my mind.

A few recent examples: here’s an NPR segment on the Jamison group’s drug synthesis machine (as blogged about here). They interviewed someone who managed to get the idea of the thing (to my mind) almost totally wrong: he seems to think that this will be a mighty patent-busting machine, enabling anyone to just crank out high-priced pills for pennies, and will thus disrupt the entire pharmaceutical industry. People who heard this segment are now talking about how much of a savings this is going to be for society, and so on. This gets things about as backwards as they can be gotten, actually (see that linked post for more, if you’re not involved in drug research and wonder why I’d think that).

Another case would be Martin Shkreli’s appearances before Congress a few months ago. I am nowhere near defending Shkreli, but at the same time, several of the members of the House who were questioning him seemed to have little or no understanding of the issues involved (patents, generic drugs, FDA exclusivity provisions, etc.) I found it hard to say which side of the hearing was more annoying, and ended up not being able to stand listening to it.

I wrote about glyphosate the other day, and as you’d imagine, I always get some interesting feedback on that sort of thing, too. There are people who are right along with Stephanie Seneff – they are convinced that glyphosate is behind pretty much everything wrong in the world, and they cannot understand why (or how) I can’t immediately grasp the problem. These people tend to be wrong in all sorts of different ways, some of them mildly entertaining, but what they have in common is that in their minds the question is absolutely settled, and anything that contradicts (or even complicates) that position is a deception of Satan (or of Monsanto, who are really the same entity, anyway). The autism/vaccine people, and the anti-vaccination people in general, are in this same category. Some of them surely overlap, and if I had lots of time to waste, I’d let one of them try to sort out for me if it’s the glyphosate or the vaccines that’s going to make everyone autistic first.

This is why I don’t spend more time bashing away at Food Babe-style nonsense. There’s so much of it out there, and the people who believe it are (in many cases) basically unpersuadable. I can’t resist going after some of the more idiotic manifestations, but I do so in the knowledge that it’s partly just therapy for me.

This article talks about the similar effect you see in discussions of GMO foods. The author’s right – many of the people who are anti-GMO are not, will not, will never be convinced by any evidence to the contrary. It’s a good example of one of my favorite maxims, that you can’t use reason to talk someone out of a position that they didn’t arrive at by reason. The paper being discussed illustrates that: large numbers of people opposed to GMO food are reacting at a basic level of emotional and moral disgust, and that’s not something that can be reasoned away very easily. They want GMO foods banned no matter what the risks are, no matter what the benefits are, no matter what any evidence anywhere might be: banned, banned, banned. (Given the apparent number of people in this category, many of them would probably have to be eating GMO foods already without realizing it, but I wouldn’t want to be the person telling them that).

So I guess the question is, what percentage of the population is willing and able to change their opinion based on evidence? Many a reformer in many a field has overestimated that number, that’s for sure. But it’s not zero, either, which is why I’m not writing this in a despairing mood. I think, actually, that the people who are disproportionately more influential are, for the most part, somewhat more willing to use reason as a tool. For example, I note that the Royal Society (to their credit) is calling for European governments to reassess their opposition to GMO crops. The Guardian (well, the Observer) came out with a strongly worded editorial to the same effect:

The importance of changing attitudes to GM crops is equally pressing for the green movement. Greenpeace, Friends of the Earth and other NGOs are happy to accept scientific consensus when it suits their purposes. They triumphantly quote academic research that backs their claim that climate change, brought about by increasing use of fossil fuels, now threatens major changes to sea levels, coral reefs, shorelines and global temperatures. Yet they are equally willing to say that scientists – who they are pleased to endorse as a profession elsewhere – are utterly wrong about GM crops. This is a dishonest act of cherry picking that makes a nonsense of the green movement’s claim to hold a superior moral position about the health of the planet.

These things move slowly, and if you’re waiting for them, the time it takes can seem ridiculous. But I fully expect that decades from now the anti-GMO protests will seem as quaint as smashing waterwheel-driven looms does to us now. Progress isn’t inevitable, but it isn’t easy to roll back, either. So if we think that there are things that need explaining, we shouldn’t shy away from trying to explain them, while at the same time realizing that explanations themselves are of slow effect.

82 comments on “The Limits of Sweet Reason”

  1. Anon says:

    It will always be very difficult to persuade intelligent people. But it will always be nearly impossible to persuade stupid and ignorant people with reason alone. Best way is to show tangible proof in the most simple and personal example possible. That’s why popular tabloids focus on personal and emotional anecdotes rather than stats and logic – and why so many people read them. Unfortunately that’s human nature.

    1. Mark Thorson says:

      And Jesus. I had a dream in which Jesus told me that competitive inhibitors of the NMDA receptors are a good thing.

      1. Timothy Leary says:

        That’s messed up – easy with the NMDA.

  2. Luysii says:

    Although the people questioning the global warming/climate change meme are characterized as anti-science, and are usually to be found on the right, the left’s anti-science credentials are in good standing: witness their attacks on GMOs, and on the best solution we have to CO2 production, nuclear power.

  3. Philo says:

    The Copernican Revolution took over 100 years to play out. That ended 400 years ago, but I am sure there are still people who think the Earth is the center of the universe because that is how it morally should be. Thomas Kuhn wrote a philosophical treatise with the promise that reason will lead to scientific revolutions. However, that no longer seems to be the direction we are heading. Belief systems will be stronger than ever.

  4. Peter Kenny says:

    This is a thought-provoking post, Derek, and it may be instructive to look at how resistant scientists themselves are to reason. Ligand efficiency metrics are a case in point. Our perception of ligand efficiency changes with the concentration units in which affinity or potency are expressed and this is a somewhat alarming characteristic for something that is widely touted as a decision-making tool by people who, with much hand-wringing, bemoan attrition rates in clinical development. If you point this out, the response tends to be along the lines of, “But it’s useful!” and I’m reminded of the creationist’s kneejerk response (God created it that way) to any rational challenge. One response to our critique of ligand efficiency metrics was that it consisted of noisy arguments although the response failed to demonstrate any understanding of the basis of the critique. I have linked my counter-response as the URL for this comment and it is possible that the ongoing debate about LE metrics says more about the scientific culture of drug discovery than it does about the LE metrics themselves.

    1. Walther White says:

      You have a good point, Peter. It’s not just the average layman that resists the march of progress, even scientists can be complete nitwits. I used to work with biologists who constantly turn a blind eye to new evidence, including results from their own lab, which cast doubt on their pet “theories.” I wouldn’t even call it cherry-picking–it’s more like they’re throwing away the cherries and harvesting the pits.

      1. Peter Kenny says:

        Hi Walter, there are particular problems in the drug-likeness & compound quality field, where a cartel (or magic circle) of ‘experts’ appears to have emerged. It seems almost to be a point of honor among these folk to see who can make the weakest trend look the strongest while gushing about how seminal each other’s contributions are. One problem in this area is that ‘analysis’ is often done to add weight to opinion rather than to address scientific questions. One point that I make when giving talks is that if we do bad analysis, those who fund drug discovery may conclude that the difficulties that we face are of our own making. I have linked a talk as the URL for this comment that provides some practical advice on correlation inflation.

        1. John Frum says:

          Peter – Your comments (and Derek’s post) are a reminder that drug discovery programs can sometimes take on the qualities of an elaborate cargo cult ritual, rather than objective problem solving. For example, “we’ll run another HTS and lead compounds will appear” or “the target was validated in a mouse KO model – full steam ahead” or “the PFI is too high – it’s no good”. So many uncertainties in the drug discovery process can contribute to cargo cult science and practice. Have you discussed this connection with LE metrics before? It seems to be fertile ground.

          1. Peter Kenny says:

            Hi John, I’ll have a think about the cargo cult angle – thanks for the suggestion. My take is that those who claim to be leading drug discovery are prone to panacea-centric thinking, and that anything numerical is seen as somehow more rational/scientific/physical. A good example of this sort of thing is the way that self-appointed experts ‘convert’ IC50 values to ‘free energy’ to make ‘soft science’ biochemistry appear more physical. The irony is that it’s complete bullshit, and these ‘experts’ seem to think that dividing these ‘free energy’ values by the number of heavy atoms relieves them of the obligation to report units. There is a degree of philately in modern drug discovery, and I’ve linked the ‘Dans la merde’ blog post as the URL for this comment.

        2. Me says:

          Agreed – a nice merry-go-round of consultancy fees being reaped, with a cargo-cult following attached.

    2. to LE or not to LE says:

      I realise I am about a year late to this party, but if LE is defined as binding free energy per heavy atom (deltaG/N), and using deltaG = -RT ln Kd, we can substitute and get LE = -1.4 log Kd / N, with units of kcal/mol per heavy atom. Nothing is defined arbitrarily here as far as I can tell… Does this avoid your objection, as long as we stick to Kd and not IC50? Or a meringue?
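      Kenny’s objection in his comment above is precisely the part this derivation skips: deltaG = -RT ln Kd is only well defined once Kd is divided by a standard concentration C°, and dividing by N does not remove the dependence on that choice. A minimal sketch (compound sizes and affinities invented purely for illustration) shows that even the ranking of two compounds by LE can flip with the choice of C°:

```python
import math

RT = 0.593  # kcal/mol at 298 K

def ligand_efficiency(kd, n_heavy, c_std=1.0):
    """LE = -deltaG/N with deltaG = RT*ln(Kd/C°); the value depends on
    the standard concentration C° (expressed in the same units as Kd)."""
    return -RT * math.log(kd / c_std) / n_heavy

# Two hypothetical compounds: A (Kd = 1 uM, 20 heavy atoms) and
# B (Kd = 10 nM, 30 heavy atoms).
for c_std, label in [(1.0, "C° = 1 M"), (1e-9, "C° = 1 nM")]:
    le_a = ligand_efficiency(1e-6, 20, c_std)
    le_b = ligand_efficiency(1e-8, 30, c_std)
    print(f"{label}: LE(A) = {le_a:+.3f}, LE(B) = {le_b:+.3f}")
```

      With C° = 1 M, A looks more ligand-efficient than B; with C° = 1 nM, the ordering reverses. Sticking to Kd rather than IC50 does not avoid this.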

  5. Anon says:

    If you want to convince 90% of people that Glyphosate is non-toxic, you’ll need to shovel it directly into all your food and eat it in front of them. But still you won’t be able to convince the other 10% – they have already seen all the proof they need that it’s toxic.

    1. Mark Thorson says:

      Perhaps they’d be convinced if we told them the lifetime risk from glyphosate is 10,000 times less than the risk from one mercury amalgam tooth filling.

      1. Anon says:

        Wouldn’t make a difference: the only thing they’d care about, is that their aunt got cancer around the same time glyphosate use became common, while she had a mercury amalgam filling since she was a child without any problems.

        Most people are just not wired to think in terms of logic, risk, probability and statistics, so if you want to convince them, it’s best to speak the same language – that is, with personal anecdotal stories. :-/

        1. Anon says:

          PS. Similarly, you wouldn’t try to convince an English-speaking statistician by explaining how the statistics make sense in Chinese, would you?

  6. P says:

    I would say the only hope is to influence the next generation to sniff out indoctrination. Oddly enough though, at least for me, the slow process of rejecting my Catholic elementary education was my first step into skepticism. Maybe I would not be as critical a person if those weirdos in collars hadn’t tried so hard.

  7. Julian says:

    My standard response to anti GMO types is “What about insulin”… That usually leaves them struggling to explain why one GMO is good and another bad….

    1. JJDoubleJ says:

      Yes, Excellent!

    2. crazytrainmatt says:

      Insulin is actually a more interesting example than you might think. There have been multiple generations of incremental improvement, but it remains extremely expensive. Those interested in how drug markets work will find this very interesting reading: http://www.nejm.org/doi/full/10.1056/NEJMms1411398

      I actually have some sympathy for the anti-GMO crowd, at least as long as we are managing to grow enough calories for everyone. The man on the street notices immediately that Europeans take their food much more seriously than Americans, and that it is generally of much higher quality than even the luxury supermarkets in the US, which seem to focus on appearance and atmosphere more than taste.

      While the anti-GMO crowd are just plain wrong on the safety issues, we scientists tend not to take responsibility for downstream consequences of progress. GMO could recover some of the lost taste from tomatoes, or it could squeeze out even more US-style industrial efficiency. Which path is taken will depend on politics, but think of antibiotic use in US livestock (banned in Europe).

      1. Vader says:

        I would have guessed that American food is the way it is because that was once the best way to grow enough for the average American.

        Perhaps that’s no longer the case, but I need a little more convincing.

    3. Oliver H says:

      A rather disingenuous argument. Insulin produced by microorganisms barely capable of existing outside their tank is quite a different beast. Pollen doesn’t stay in a tank; it doesn’t even stay on the field where the plants that produced it were grown. Dismissing other people’s opinion on scientific grounds by making a scientifically bogus argument is not a productive way to convince people of the integrity of the pro-GMO side.

  8. The Donald says:

    Unfortunately, I think your question has already been answered by the Donald: not very much.

    Irrational behavior and poor critical reasoning seems to be built into our DNA. Technology is double-edged – it’s neither good nor bad. It depends how it’s used and in this regard understanding a technology is essential. However, given the exponential growth in technology in the past ~150 years, to me the key question is whether or not evolution of moral human behavior can outpace amoral behaviors. Irresponsible use of technology is one outcome; manipulative and malicious uses are another. There’s plenty of examples (finance and politics come to mind) where an amoral minority represents a perversely large asymmetric risk to society due to technology. The consequences could be HUUUGE!

  9. JG4 says:

    Julian raised a good point that is rarely seen in the public discussion of GMOs. Not all genetic modifications are created equal. Ones that raise salt tolerance (to pick a random example) are far less likely to be injurious than ones that add Bacillus thuringiensis toxin (to pick another entirely random example) to the food chain. What could go wrong, on the planet of unintended consequences?

    1. John Wayne says:

      I don’t disagree with your point, but that judgement cannot be made in a vacuum. To pick a nonrandom example, organic farming permits spraying crops with BT bacteria (that make BT toxin). The vast majority of agents we use to prevent other creatures from eating our food are not from GM, and are sprayed on the old fashioned way. If you don’t like BT toxin being used to protect food, you’ve got some other issues with the system as it is.

      A broad reaching theme here is also how people understand and react to risk. As a society, we make a lot of illogical choices; food and vaccine safety are just the two on the top of the pile right now. I believe the classic examples of safety measures with great returns on investment are clean water, sanitation, and vaccines. Good examples of poor returns on investment are airport security, the war on terrorism, and car airbags.

  10. Prairie Boy says:

    Driving home I heard that NPR story on the drug synthesis machine and somehow they made it sound like walking up to a soda dispensing machine and getting some fine wine out of it. No clarification about needing the appropriate inputs (source chemicals), just a lot of talk about cost and evil-patents.
    Wonder how the gullible would like it if the same machine could dispense a cup of glyphosate solution?

    1. Mark Thorson says:

      I could make a machine like that which would dispense individually personalized drugs for pretty much any problem you may have. Homeopathically potentized drugs, of course. It will take a minute and the machine will vibrate a little while you wait for your medicine to be prepared.

  11. Stephen says:

    As William James argued over 100 years ago, the power of belief systems is in their ability to supersede truth. The interface between belief and fact, scientific or otherwise, is central to contemporary discussions on climate change, GMOs, abortion, vaccines, immigration, and many other issues. My own experience discussing such issues with many individuals echoes the author’s maxim: reason has not had an effect, perhaps because the beliefs were not arrived at by reasoning.

    How, as a society, we weigh individual beliefs versus generally accepted facts may well determine the course of humanity. Thanks for this thought-provoking essay.

    1. Vader says:

      Perhaps the problem is that some important beliefs cannot be arrived at by reason. The value of human life, the repugnance for slavery, the protection of freedom of conscience — the cases for these have been made using reason, but I doubt reason is really how anyone ever arrives at them.

      Of course, we’re not talking about these kinds of beliefs. The value of vaccines seems like something that can be arrived at by pure reason, once you accept the premise that human life is precious, yet we have antivaxers.

      I think it’s more a case of conspiratorial thinking. I see strains of this in almost every seemingly irrational phenomenon out there. And while I think I understand some aspects of conspiratorial thinking, I’m not sure I have a complete handle on it.

      Particularly since humans really do conspire together at times.

  12. Great post. This seems to bring up the problems of scientific literature. Accessing science today is easy. Unfortunately, I think accessing good science which is accurate and easy for people to digest is difficult. The floodgates of the internet are wide open and people have to wade through all the food babe-esque material. Even if someone happens upon a gold nugget of science, would they know what it was? DeGrasse Tyson and Nye have advocated better scientific thought by the lay community, but it doesn’t help if the people don’t have access to the proper information. I think the lay people need to ask the right questions, but we, as a scientific community, need to be able to provide better platforms for the answers.

    1. Anon says:

      Stackexchange is a great platform for that. Clear and direct specific technical questions, with clear and direct specific technical answers explained in a more accessible and interactive human manner.

  13. Anonist says:

    Anti-GMO and anti-vax is religion and no arguments can change that. Interestingly enough, the first commandment “You shall have no other gods before Me” serves science pretty well. If you believe in God, you have no reason to introduce these new religions in your life.

  14. Anon2 says:

    Collective opinion will advance the same way science does: “one funeral at a time”.

  15. DCRogers says:

    People have every reason to suspect the system is optimized not towards their own interests, and happily externalizes risks onto them or the planet, covered by the breezy arguments of well-paid experts that we mere mortals must accept, given we are not experts in most things.

    You may change one mind on glyphosate, or climate change; but the above system will remain in place, rich fertile soil for the next generation of Food Babes.

  16. Diver dude says:

    My approach is more direct. If someone tells me that they don’t believe in climate change, my response is to say “excellent, then you’ll be making bad investment decisions which means I can make money off you”. This seems to work. You also get extra points if you pronounce “excellent” like Monty Burns.

    1. Anon says:

      … and then just add, “So now, how much are you prepared to lose if we have a bet?”

    2. NJBiologist says:

      DiverDude–I’m long on ExxonMobil and BP. Short those to your heart’s content.

  17. MTK says:

    A) People believe what they want to believe

    and

    B) There has been a gigantic erosion of trust between the public and our institutions, including Government, Business, Science, Media, etc.

    You combine those two and you’ve got a problem. Data or information is uncompelling because people don’t trust it or its sources from the outset.

    1. AreSee says:

      Exactly. What authority do you cite for your belief if you don’t trust any authorities? It’s simpler to disbelieve. And if disbelief puts you in the same camp as others whom you like, and in opposition to those you dislike, choosing a belief becomes a shibboleth, your key to identifying with a religious group you want membership in. It’s literally a no-brainer.

      I think most people also hate to re-evaluate past conclusions. Ask a criminal attorney if eyewitness testimony or fingerprint IDs are problematic, and you’ll likely gain a hostile witness. I’ve seen the same thing with biologists who are asked, “Are you sure your assay is properly calibrated? How confident are you in those results?” Too many are happy to trust the vendor’s promises rather than calibrate it themselves with known reagents and invite doubt.

      It’s human nature to want to move forward.

  18. KK says:

    There’s actually some data being published by the researcher Dan Kahan (see PMID: 24092722 for an example). He’s finding solid support for the idea that illogical ideas get more entrenched, not less so, when the facts supporting the opposite side of the issue are presented. On the other side of the fence, in the hopeful camp, This American Life recently ran an episode about a study claiming that minds could be changed with high success on the abortion and gay-rights issues – a study that turned out to be fabricated. However, upon further digging, and reproduction by others in the field, there does seem to be some genuine success when an individual with an opinion on one side of a hot-button issue has a discussion with someone on the opposite side. The key seems to be that they need to find common ground as individuals before they figure out they are on opposite sides. Clearly not a strategy that will be easy to implement, but it’s better than the backwards entrenchment that Kahan finds.

    1. P says:

      Let’s hope for an alien invasion to unite us all. That, or another world war might work.

      1. Fred the Fourth says:

        cf. “The Lathe of Heaven”

    2. Vader says:

      “The key seems to be that they need to find common ground as individuals before they figure out they are on opposite sides.”

      I think this is right. There seems to be a strong “us versus them” mentality nowadays, inflamed (I think) by social media and other technologies that mean your immediate physical neighbors are less likely to be your community than ever before. (If you follow me.) Having someone who is “on your side”, with whom you’ve exchanged all the trust cues, tell you that vaccination is actually a pretty good idea, seems a lot more promising than having a government official tell you to vaccinate or face legal penalties (even if the latter is in some sense justified.)

  19. TX raven says:

    Interesting timing.
    This morning, while I was watching the news, I thought: “Have we reached the age of the end of reason?”
    I guess I am not alone in asking that question…

    1. Vader says:

      Given how much of Hitler’s early support came from German academia, I’m not sure we ever did live in an age of reason.

      I know, that sounds like argumentum ad Hitlerum. But in this case, Hitler is actually relevant.

  20. Earl Boebert says:

    A lot of these contentious subjects revolve around the issue of risk, which is something that humans are lousy at evaluating*. When people ask me if I think climate change is real, I tell them that I stopped looking into it when I realized that it was the wrong question. It is the wrong question because by the time you know the answer *for sure* it will be too late.

    I then introduce them to Pascal and his penchant for betting. I tell them that the way to understand a situation like climate change is to formulate the problem as a bet and then ask what happens if you *lose.*

    If you bet that climate change is real, and you’re wrong, you’ve wasted a bunch of money. If you bet that climate change is not real, and you’re wrong, the consequences are dire. So which is the more prudent and conservative bet?

    *The chairman of Rolls Royce supposedly was once asked why he always flew the Atlantic on four-engine aircraft. His response was that it was because there were no five-engined aircraft. (He had obviously watched too many films of engines blowing up on the test stand.) Though the number of engines may have reassured him, it would not reassure an expert on fuel contamination.
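    Earl’s bet can be written out as a toy expected-cost comparison (all numbers invented purely to show the shape of the argument, not real estimates of climate costs):

```python
p_real = 0.5          # assumed probability the threat is real (invented)
cost_mitigate = 1.0   # relative cost of acting, paid whether or not it's real
cost_disaster = 50.0  # relative cost of the dire outcome if we wait and lose

expected_cost_act = cost_mitigate
expected_cost_wait = p_real * cost_disaster

print(f"expected cost if we act:  {expected_cost_act:.1f}")
print(f"expected cost if we wait: {expected_cost_wait:.1f}")
# Waiting is the worse bet whenever p_real > cost_mitigate / cost_disaster –
# with these invented costs, for any p_real above 2%.
```

    The point is Earl’s: you don’t need certainty about the probability, only a judgment that it clears a threshold set by the relative costs.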

    1. Hap says:

      Isn’t the problem with Pascal’s wager the inability of people to see past a potential bad outcome with essentially infinite magnitude and essentially zero probability (because there are an infinite number of ways to be wrong and damned, and only one to be correct, and you don’t know which one)? If there is a finite and evaluable probability of events (or at minimum a nonzero one), then the severity of the negative outcome is important because it can overwhelmingly affect the expectation value – how bad things are likely to get – but if you don’t have that, you’re stuck multiplying infinity by zero, which is indeterminate and doesn’t help anyone predict anything.

      I am not saying you are wrong, just that Pascal’s wager might not be the appropriate tool in this case.

      1. Earl Boebert says:

        Oh, sure. I wasn’t trying to advocate the “official” Pascal’s wager, just trying to get people to think in terms of negative outcomes. The probability of occurrence has to be estimated, and the options for mitigation have to be considered. If the probability is nonzero, the consequences dire, and the options for mitigation minimal, maybe you shouldn’t wait around until the event occurs to get moving.

      2. tangent says:

        One way to put the problem with Pascal’s wager as a persuasive technique is that it supported Pascal’s argument for God, which we don’t buy as valid argument. It was a polemic for him. He was exploiting the uncertainty in the probabilities to win an argument.

        In a situation where you don’t trust me, why should you accept my numbers for the hellishness of Hell or the direness of climate change? And you certainly do see breathless HuffPo articles about the super-dire direness of it, that people enjoy because they drive this Pascal’s wager further to the “do something” conclusion and so the reader is smart for having previously concluded that. (Also because people have a weird fetish for contemplating future disaster.)

        I think Earl may be talking more about situations where people do have some shared ground on the probability and consequences — and maybe you could sway them to multiply those two things together.

    2. eyesoars says:

      No five engined aircraft?

      OMFG. Such a one should know well beyond all others that the certification process for aircraft requires them to be able to continue a takeoff after an engine failure – that is, with N-1 engines, where N is the number of engines the aircraft has. Nominally, then, the risk of suffering an engine failure should grow with the number of engines.

      Also, twin-engined jets thus have by far the largest excess power margin on takeoff.

      Unfortunately, engine failure rates are *not* independent: things like flocks of geese, volcanic eruptions, botched maintenance, contaminated fuel, &c, have made multi-engine failures much, much more common than simple failure risk multiplication would suggest. So avoiding multiple engines doesn’t provide as much benefit as it “ought”.
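      That last paragraph can be made concrete with invented rates: under independence the chance of losing every engine shrinks as p^N, but even a tiny common-cause probability (geese, fuel, ash) soon dominates that product, which is why adding engines buys less than naive multiplication suggests:

```python
p_engine = 1e-4  # assumed per-flight failure probability of one engine (invented)
p_common = 1e-7  # assumed per-flight probability of a common-cause event (invented)

for n in (2, 4):
    p_independent = p_engine ** n                       # all fail independently
    p_all_out = p_common + (1 - p_common) * p_independent
    print(f"{n} engines: independent-only {p_independent:.1e}, "
          f"with common cause {p_all_out:.1e}")
```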

      1. Peter Kenny says:

        The Hindenburg had four engines but didn’t need them to remain airborne…

    3. tangent says:

      Earl, does this work for you? I’ve tried something like that and routinely got: “It’s a huge decision, the costs to prevent it are enormous, you can’t possibly make such a decision without being certain!”

      It really calls out human issues with thinking about probabilities and uncertainties.

      1. Earl Boebert says:

        I used it for years in the indications and warnings section of my security threat briefings, as a way of getting people to think about decision making under uncertainty instead of insisting on certainty. You can observe all kinds of suspicious behavior in your neighborhood but you aren’t certain that burglars have targeted your house until you come home and find your valuables missing.

        In many such problem domains (and of course there is no universal approach) I found it important to counter the McGuffin* fallacy in security and its corresponding mistake in safety, root cause seduction. This is the notion that there is a single factor, element, or component whose countering or “fixing” will drop the probability of an adverse outcome to zero. So this business about bets was just an attempt to get the audience to think in a more systems-oriented way. Sometimes it worked, sometimes it didn’t.

        *”McGuffin” is a screenwriter’s term, ascribed to Alfred Hitchcock. It is the hidden object that everybody is racing to find in a suspense movie.

  21. Mr. Gladstone says:

    Derek – highly recommend the following book by Michael Specter:

    Denialism: How Irrational Thinking Hinders Scientific Progress, Harms the Planet, and Threatens Our Lives

    Although denialists, according to Specter, come from both ends of the political spectrum, they have one important trait in common: their willingness to replace the rigorous and open-minded skepticism of science with the inflexible certainty of ideological commitment. Specter analyzes the consequences of this inflexibility and draws some startling and uncomfortable conclusions for the health of both individuals and society. For example, though every reputable scientific study demonstrates the safety of major childhood vaccines, opponents of childhood immunization are winning the publicity war; childhood immunization rates are tumbling and preventable diseases are increasing, often leading to unnecessary deaths. Specter, a New Yorker science and public health writer, does an equally credible job of demolishing the health claims made by those promoting organic produce and all forms of alternative medicine. Specter is both provocative and thoughtful in his defense of science and rationality—though he certainly does not believe that scientists are infallible. His writing is engaging and his sources are credible, making this a significant addition to public discourse on the importance of discriminating between credible science and snake oil.

  22. Vader says:

    I have to think this is wired deep into our brains. We are still adapted to an environment in which, if we didn’t make snap decisions based on partial data using probabilistic algorithms, the saber-tooth tiger ate us. Those algorithms include a certain amount of groupthink, use of trust cues, and so on. It takes a real conscious effort to overcome; and even in the 21st century, there may be a sense in which it is more rational, for the individual, to remain ignorant about a lot of things and spend the time focusing on keeping his job instead.

    Not always so good for society. But, paradoxically, putting the good of the greater society over yourself and your immediate clan group is probably not a value that can be arrived at by pure reason. Even more paradoxically, religion has served this goal as well as anything.

  23. MoMo says:

    “Commerce without ethics or morals is a form of aggression” Gandhi

    DDT, PFOAs, Volkswagens, and the list goes on and on.

    Why should big companies and institutions be trusted? Did something change?

    There is a fine line between science and snake oil, and sometimes there is no line.

    1. Karl Krumhardt says:

      DDT is used to fight mosquitoes, which spread malaria. You still don’t want it because of DDT accumulation and all its side effects? Well, the WHO still recommends its use in Africa because there is currently no other way to gain control over malaria in certain regions.
      Does that make the companies producing DDT bad or good from your point of view? Nature/reality is not black/white – there are a lot more colours involved.

  24. Steve says:

    Scientists are just as bad. Max Planck said it best: “science progresses one funeral at a time.”

  25. “Reason” is an interesting thing.

    Derek, being a scientist, equates “reasoning” with, roughly, deductive logic. The wonderful thing about deductive logic is that if you get the argument right and the axioms right, the conclusion is right. Guaranteed. This sounds really great. I mean, it’s a seriously wonderful thing.

    Trouble is, it’s not as great as it says on the box, on account of you often get one of the preconditions wrong, which is why the Science Establishment has added, at least in theory, all manner of checks and balances to improve the thing; and, as any working scientist knows, the thing is still wonderfully unreliable. The best that can be said is that it tends to converge on the right answer.

    On the other side we have various other reasoning methods, inductive reasoning and so on. These methods offer no guarantees. However, they’re actually *really* successful, a lot of the time. Reasoning by analogy, various magical reasoning, reasoning from the specific to the general, and so on. A big one is “if A implies B, then B implies A” which gets applied a lot by regular humans.

    On the face of it, it’s ridiculous and stupid. Scientists tend to laugh and point fingers. However, if A and B are actually strongly correlated, which they often are, turns out it works like sixty.

    So consider the glyphosate thing. Turn it around. Suppose Monsanto DID have evidence that some chemical they’re making vast sums from might cause cancer in some cases? I suppose, if one were a humorist, one might assert that they’d instantly pull the chemical from the market and perform diligent due diligence, but that would correctly induce gales of laughter in the audience. Monsanto, or any large corporation, would dilute the message up the management chain until the decision makers got a message weak enough and vague enough to slow-roll it, to let it slide until More Science Got Done and so on. See also the Challenger explosion. We know how these things work, in gory detail, and they are pretty much 100% reliable.

    Let us stipulate that Monsanto Would Cover Up A Dangerous Chemical. At this point we’re one common fallacy away from Monsanto Is Covering Up Glyphosate’s Danger.

    And before we start pointing fingers and laughing, let us recall that this is a fallacy in the land of deductive reasoning, but it is a perfectly reasonable heuristic in the real world. There’s a reason we generalize, there’s a reason we use reasoning by analogies. Sometimes it works.

    And despite the much-vaunted guarantees of deductive logic and Scientific Reasoning, that system is pretty shabby at actually living up to its guarantees, as noted above. It’s not actually immediately obvious that the crazy inductive, magical, analogical, reasoning systems are actually that much worse.

    Getting all huffy because sometimes they produce the wrong answers seems, to be blunt, pretty hypocritical.

    Anyways. To get back to the original question: before you can change people’s minds you have to understand the mechanisms by which they arrived at their conclusions. They listened to other people say stuff, then “confirmed” what they heard with reasoning like “that makes sense” because “corporations are big liars” (it DOES make sense, because they ARE, by the way). I don’t actually know what the next step is, but that’s definitely the first one.

  26. Barry says:

    Max Planck observed “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” (Eine neue wissenschaftliche Wahrheit pflegt sich nicht in der Weise durchzusetzen, daß ihre Gegner überzeugt werden und sich als belehrt erklären, sondern vielmehr dadurch, daß ihre Gegner allmählich aussterben und daß die heranwachsende Generation von vornherein mit der Wahrheit vertraut gemacht ist.). But I’m not aware that he ever doubted that people were interested in correct knowledge. We now confront a sector of the populace actively hostile to the progress of science.

  27. anon says:

    “It’s a good example of one of my favorite maxims, that you can’t use reason to talk someone out of a position that they didn’t arrive at by reason.”

    I really dislike this saying, for two main reasons:
    1. It’s smug.
    2. It’s wrong. Many people don’t really understand how they arrived at a particular, anti-science position. SOME of these people can be taught how to use reason instead. That teachable fraction is our true audience. And what else would we use to try to convince them if not reason? Any other way would be dishonest.

    I agree that many people won’t ever learn this skill. We can quibble over how large (or small) the teachable fraction is, and over how much effort we should expend trying to help them see the light. I very much appreciate your contributions toward this end.

    1. Hap says:

      Haidt’s The Righteous Mind seemed to be a counterexample to me – in it, people seem to derive moral base sets independently of reason, and though the people who hold those positions can reason, they generally do not; in experiments, they will even follow logic that leads to the converse of their stated positions, as long as they can hold onto the unstated positions underneath. The implication would seem to be that lots of people are capable of reasoning, but that reason is unlikely to drive their decisions in many (most) cases. “You can’t reason someone out of a position they didn’t reason themselves into” doesn’t seem smug in that context, since there is no implication that people can’t reason, only that reason does not determine their actions in some cases. You have to understand why someone holds a position (what things the person cares about) to be able to change their opinion – you need to use the right tool for the job.

      To get people to centralize reason in their belief systems, you not only have to show that reason applies (appropriately) to the situation at hand, but you have to get people to internalize the process. People aren’t going to do that without a reason, and in practical terms reason isn’t a sufficient one. People create narratives to give their lives meaning, and they build themselves around those narratives. They aren’t going to change them without something to replace them, and rarely do we have one. Arguing based on the things someone holds dear (using their belief system towards different ends) is more likely to work than trying to force them to change their belief system.

      1. Curious Wavefunction says:

        To elaborate more on Haidt’s excellent book (which I think should be read by anyone who wants to understand why people disagree with each other): liberals and conservatives generally seem to have different moral foundations for thinking about things. I think the particular liberal moral foundation relevant to this discussion is the “care/harm” foundation. The liberal imperative to not harm others and to care for them could lead them to err on the side of irrational safety when it comes to things like ‘chemicals’ and GMOs.

      2. anon says:

        I’ve lost count of how many great book recommendations I’ve gotten from this forum. Thanks for that, and for your thoughtful comment.

  28. Ed says:

    In a similar vein, or so it seems to me: do you try to convince religious people that there is no scientific basis for their beliefs?

  29. MedChemNovice says:

    Just noticed a booklet on GMO.

    “GM plants: Questions and answers” by the Royal Society
    https://royalsociety.org/~/media/policy/projects/gm-plants/gm-plant-q-and-a.pdf

  30. Garrett Wollman says:

    As another commenter noted, what’s behind these things is really a generalized suspicion of institutions — particularly the large corporations (presumed to be only in it to enrich Wall Street fat-cats, never mind those shares in your 401(k)) and the government agencies (presumed to be incompetent) that are supposed to regulate the risks they pose. On the GMO front, there is also a deep suspicion that much of GMO agriculture is being done not to improve the world, but to enrich pesticide and seed manufacturers — which seems to the layman to be borne out, on the one hand by the patent laws that prohibit planting carryover seed, and on the other hand by all the news stories about Monsanto. Ultimately, the question boils down to “Who benefits?”

  31. Lane Simonian says:

    From my point of view, the problem is some scientists present as certain what evidence often indicates is uncertain at best. Let me give two examples: the potential connection between certain pesticides and neurological diseases and the potential connection between mercury and autism.

    http://www.alzforum.org/news/research-news/pesticides-raise-risk-als-and-potentially-alzheimers-disease

    https://www.researchgate.net/profile/Harald_Walach/publication/7503719_Mercury_and_autism_accelerating_evidence/links/0fcfd50e46cd92c275000000.pdf

    What is missing is what levels of exposure, and from what sources, would be needed to cause these neurological diseases.

    Certainty before there is certainty is not a good thing.

  32. Anon says:

    Personally I would like to see more reason in the Brexit debate, but as both campaigns understand all too well, people respond to only two things: Fear and Greed. Reasoning is merely used as cover to hide the sad fact.

  33. Tim H. says:

    Perhaps the anti-GMO types are informed by science-fiction scenarios and wish to forestall them by banning beneficial techniques that have any chance of misuse.

  34. Thomas Garnett says:

    Dr. Lowe,
    despair not! I am a non-scientist layperson. You have changed my mind about several issues, e.g. GMOs and pharmaceutical funding. Keep doing what you have been doing. In a world of cons and sloppy hand-waving, your clear and probing writing is badly needed.

  35. Li Zhi says:

    Interesting thread. Many make unsupported assumptions (for instance, that reason is rational, or that our current population can be taught to make better decisions by ‘better’ education). It’s interesting to note that there’s no static here on the claim that GCC is proven to lead to predominantly bad outcomes. The GCC debate is (or imho ought to be) about policy, not science (and the Kyoto agreement was so unrealistic, imho, that I’d characterize it as risible). I mention Global Climate Change because I’m a long-time reader of the Wall Street Journal, which has fairly consistently editorialized against anthropogenic climate change, mostly arguing about the science. It occurred to me a couple of years ago that while I believe they are wrong (although not without exception on all of the issues they raise), they may actually have contributed to a better approach than if the US or G7 governments, say, went into lock-down hysteria mode and aggressively implemented (using all necessary force) Kyoto.

    Similarly, most comments here either say or imply that GMOs are safe. That is rubbish, imho. It is not the lesson we want anyone to learn. The potential for GMOs to do harm is enormous. Saying they are “safe” is just as bad as, if not worse than, claiming “drugs are safe”. Some are, some aren’t. We need checks and balances. How many ranting about the ignorance of the anti-GMO crowd actually discriminate in their own language between ANY potential GMO and the commercially available ones? Failure to communicate? Or code-speak and tribalism? Which would lead to a worse world, hypothetically speaking: one where all GMOs were outlawed globally, or one where anyone could modify any organism by sending the desired DNA sequence(s) to CRISPR4U Inc. and getting back a tailor-made viral vector? Are the anti-GMO crowd’s fears really completely without merit?

    1. Li Zhi says:

      I am basically arguing that familiarity breeds contempt. “Regulations? We don’t need no stinkin’ regulations!” Unregulated, the potential harm of GMOs clearly outweighs the potential benefits, imho. The solution, imho, isn’t to convince everyone that GMOs are “safe” (= harmless?), but to convince a majority that, given proper care, the technology can be safe.

  36. Shion Arita says:

    I think they should make a movie, “GMO Me”, or something like that, in the vein of “Super Size Me,” in which someone eats nothing but GM foods for an extended period and then nothing bad happens. Not the most conclusive evidence objectively, of course, but probably more effective for people who make emotional decisions.

    1. Oliver H says:

      There’s just a fundamental problem with that – and with the scientific argument for GMOs in general: safety for the consumer is not the sole relevant consideration. That is the point some people in the discussion fail to understand. I always compare it to clothes made in Bangladesh: Are they safe to wear? By and large, yes. Safety standards would largely prevent their import into, say, the European Union if it weren’t so. BUT: Are they safe to produce for those who make them? By and large, no. And it is perfectly within the right of the consumer to say that for ethical reasons, they not only have no interest in buying them, they think that a much larger-scale effort is required to create sufficient pressure against such production methods.

      There’s plenty of reasons to object to something, and consumer safety is just one of them.

  37. Michael Rogers says:

    In my experience, some minds can be changed, but it’s a slow, one-on-one process. Steps that I have found effective include:
    1. Ask what a person’s position is and why he holds it.
    2. Listen and summarize (honestly) your understanding of the person’s view.
    3. Ask a question formatted along the lines of “Would you change your mind if it turned out that X?”
    4. When the answer is “no”, ask “What would it take to change your mind?” and return to step 1.
    5. If the answer is “yes”, then summarize the evidence for X.

    Nothing will work if someone is not willing to change his mind, but this works when he is.

  38. zero says:

    Humans exist in a social network. We weigh the value of information based on our trust for the person or source that provided it.
    Most people deal in factoids, here defined as sound-bite-sized data that may or may not be true but sure sounds compelling. Once we receive a factoid, we react to it. That reaction is based partly on a series of fallacies, including but not limited to: 1) how much we trust (or distrust) the source, 2) how much we want our reaction to meet the approval (or disapproval) of the source, 3) how much we hope (or fear) that the factoid might be true, 4) how well or poorly the factoid fits in with our history of reactions to other factoids on the subject.

    A hardcore antivaxxer doesn’t get that way overnight. This person’s sense of worth is rooted in a social network of true believers with high threat of rejection for nonconformity and a culture of outrage similar to hate radio. Years of factoids have rolled into this person’s experience and each one burns ‘the Truth’ a little deeper into their outlook. You would be more likely to convince this person in an afternoon that you can fly than you are to convince them that vaccines are safe. A fact coming from you is just a factoid, subject to evaluation according to social network rules; as a challenge to their factoid reaction history the response is automatically negative.

    Scientists aren’t exactly the most sociable group of people. The percentage of people with direct and meaningful social contact with a scientist is tiny.
    Some people get their science factoids from science ‘journalists’ (with a disappointingly small fraction observing journalistic integrity). In many cases these are clickbait articles designed to provoke a response, get shared and drive up ad revenue. Members of this group include such luminaries as Neil deGrasse Tyson and Vani Hari. (I know Dr. Tyson is an actual scientist, but that is not how average people experience his social presence. Note assignment to this group has nothing to do with the accuracy of content.)
    Most people get their factoids second-hand (or worse) from some average person they know, who in turn got it from a journo, talk show host, blogger, etc., or simply made it up.

    Once a factoid has been experienced, the reaction becomes a part of our fuzzy ‘memory of things we know’, unlikely to ever be seriously questioned again. It’s no surprise that people carry an enormous bloat of false knowledge given that most of the factoids floating about are false; we tend to believe factoids presented to us by friends and family and it takes considerable effort (even pain) to restore truth.

    If you want to change a person’s mind you have to be in a position of trust. An outsider bringing ‘truth’ is actually just {bringing the pain of doubting and questioning other trusted figures}, while a true friend might actually be {helping us see that things aren’t the way we thought they were and that’s ok because now we know the truth}.
    Scientists start off at a disadvantage in part because of the factoid (or family of factoids) that most of science is BS, made up, lies, grant fraud, politically motivated (and I can’t stress that one enough), wrong, hates Jesus, exaggerated or a scam. People with little exposure to skepticism are likely to accept that factoid and in turn reject any science-y factoid they later encounter.
    Part 1) As a culture, science needs to kick science journalism out of the house. The vast majority of lies, fraud and other deceit in science is committed by people with little to no connection to the actual research who are trying to profit one way or another and need the results to sound amazing/horrifying/super-cool/barbaric, etc. Shut off the firehose of university press releases, don’t give interviews to vampire blog trolls and certainly do not appear on any radio or TV show without some professional PR assistance. If possible, publish a free-access ‘plain English’ analysis of your work that is appropriately cautious and insist that anyone publicising your work link to that page.
    – Part 1a) For those laypeople with the interest and willingness to go read actual research, Elsevier is the sign of the beast. It’s like seeing a ten-foot-tall wall of fire between me and the data I need to make a decision. Publish open access, especially preprints with comments allowed. How can we assess the validity of a statement when that statement’s supporting information is locked behind a paywall? In more general terms, cutting out the middleman of science communication means encouraging average people to directly experience your work somehow, even if the only part we understand is a plain-English summary in preprint.
    – Part 1b) Avoid spiking social institutions. Don’t make fun of Catholicism (for example). As an organization it willfully disregards evidence that birth control is safe and effective and that abortion is also safe, effective and reduces child poverty and crime. Pointing this out in any context other than research on the subject itself does more harm than good, since you will establish yourself as an evil outsider and your future data will be rejected.
    – Part 1c) Wherever possible, don’t try to convince. Make simple statements of fact. Say things like ‘Vaccines are safe. My family is up to date.’ then move on; don’t try to give it any particular weight, but do give it a personal connection. By avoiding a direct challenge you delay that person’s reaction to your factoid; given some time to stew (and to identify with your family-ness), you might get a better reaction.
    Part 2) We need to be teaching children the process and practice of critical thinking. Rote memorization, ‘teaching the test’ and authoritarian relationships with instructors are harmful to our collective intelligence. Test scores as presently implemented are not suitable indicators for educational achievement. If we properly equip our children with the tools necessary to sort fact from fiction and to maintain skepticism then the next generation will be at least science-neutral.
    Part 3) We need to get our political system back into a semi-rational state. One party in particular seems to feel entitled to its own facts as well as its own opinions. A well-run piece of research should be admissible as fact regardless of who ran it, what it was about and what was discovered (or disproven).

    1. DoctorOcto says:

      A well-worded and extremely reasonable argument, Zero. I will of course treat it as a factoid, and promptly disregard it as it doesn’t fit my own world view, which is that people are fundamentally quite lazy and will not spend any effort trying to understand something if they can get away with not doing it. Though I’m a synthetic chemist with 20 years at the bench, I have little or no interest in understanding the nuances of theoretical astrophysics or deep-sea ecobiology. Small wonder that non-scientists are even harder to convince, when the energy input required to reach an understanding is higher. Hmm, I wonder if you could apply an activation-energy equation to this phenomenon, and thereby predict a rate of conversion of opinions.

      1. zero says:

        thanks 🙂
        I’ve made a little chip in the wall. It’s a start.
        Laziness is not inevitable so long as we encourage curiosity and creativity.

        Certainly it takes some mental energy to overcome barriers. We have a limited reserve of willpower (a la ego depletion). Spending it on barriers just for curiosity (or worse, on introspection) is a hard choice for people with more immediate needs. Whether it be finding (or keeping) a job, a house, friends, a significant other, these basic physical and social needs tend to come first. Once people are secure in their basic needs and their daily grind doesn’t take all their mental energy then the opportunity for improvement is at hand.

        I would bet that general levels of stress are a good first-order indicator of conversion rates. A good job and a good friend are probably the best two things that can happen to a person who needs to question their beliefs.
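        The activation-energy joke upthread, combined with the stress point here, can be sketched as an Arrhenius-style toy model. To be clear, this is purely illustrative: every name and number below is hypothetical, and nothing here is a real model of opinion change.

```python
import math

def conversion_rate(effort_barrier, spare_energy, exposures_per_year=10.0):
    """Arrhenius-style toy model of opinion change: rate = A * exp(-Ea / T).

    effort_barrier     -- 'activation energy': mental effort needed to
                          question a belief (arbitrary units)
    spare_energy       -- 'temperature' analogue: mental energy left over
                          after the daily grind (same arbitrary units)
    exposures_per_year -- pre-exponential factor: how often the person
                          encounters a fact challenging the belief
    """
    return exposures_per_year * math.exp(-effort_barrier / spare_energy)

# Same belief, two life situations: the person with spare mental energy
# converts exponentially faster than the stressed one.
secure = conversion_rate(effort_barrier=5.0, spare_energy=2.0)
stressed = conversion_rate(effort_barrier=5.0, spare_energy=0.5)
```

        With these made-up numbers the “secure” rate comes out roughly three orders of magnitude higher than the “stressed” one, which is exactly the first-order prediction above: a good job and a good friend raise the effective temperature, and conversion speeds up exponentially.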
