Pharmacokinetics

There Is No “Depression Gene”

I wrote a couple of years ago about the long-running study of mutations in a serotonin transporter gene. Over the years, polymorphisms in that gene have been correlated with all sorts of human behavior and psychiatry, in keeping with the importance of serotonin signaling in human cognition. Depression, anxiety, that whole end of human behavior seemed to be affected by just what sort of genetic variation one had. Hundreds and hundreds of studies have appeared in the literature, many of them with truly impressive p-values.

Well, as that old post shows, people have been throwing cold water on this idea for a while now, and there’s a new paper that should (you’d think) expunge the whole idea of 5-HTTLPR variations having anything coherent to tell us about human disease. It doesn’t stop there: the authors go on to demolish every other “depression gene” connection in the existing literature. They went after the lot:

Utilizing data from large population-based and case-control samples (Ns ranging from 62,138 to 443,264 across subsamples), the authors conducted a series of preregistered analyses examining candidate gene polymorphism main effects, polymorphism-by-environment interactions, and gene-level effects across a number of operational definitions of depression (e.g., lifetime diagnosis, current severity, episode recurrence) and environmental moderators (e.g., sexual or physical abuse during childhood, socioeconomic adversity).

Nothing. No clear evidence for any given gene, in any polymorphic form, having any effect on depression, whether measured by itself or in combination with any environmental factor. At this point it seems safe to say that there are no single standout genes that can be associated with depression. That’s not to say that there’s no genetic influence at all, but what this means is that (like so many other things) it’s a complex mix of dozens, hundreds, thousands of genetic factors tangled with environmental ones. It may well be that many of these end up binning into similar phenotypes or heading down common pathways, but we don’t know that for sure, either. What we do know is that talk of a “depression gene” is nonsense.

Looking back, the single biggest problem with all these earlier proposals (and there have been plenty) is that their sample sizes were wildly, hilariously small. Once again, it’s all down to effect size. The paper calculates that the largest-effect-size gene candidates in this field would still need samples in the tens of thousands to detect. And what has the median sample size been over the years? 345 patients. Right. This literature is all noise, all false positives, all junk. As you actually move to larger and larger studies, everything disappears, which is what noise does. Real stuff, on the other hand, should become stronger and harder to ignore as you increase the N, with tighter error bars and better signal/noise.
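
To make that arithmetic concrete, here is a minimal simulation sketch (Python with NumPy/SciPy; the effect size, study count, and thresholds are illustrative assumptions, not numbers from the paper). A polymorphism with no real effect, tested over and over at the field’s median N, still yields a steady trickle of “significant” results, while a plausibly tiny real effect only becomes detectable once samples reach the tens of thousands:

```python
# Illustrative simulation: false positives at small N vs. power at large N.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n, effect=0.0):
    """One candidate-gene-style study: compare a continuous 'depression score'
    between carriers and non-carriers of a polymorphism with standardized
    effect size `effect`. Returns the two-sample t-test p-value."""
    carriers = rng.normal(effect, 1.0, n // 2)
    noncarriers = rng.normal(0.0, 1.0, n // 2)
    return stats.ttest_ind(carriers, noncarriers).pvalue

# 1) A gene with no effect at all, studied 1,000 times at the median N of 345:
null_p = np.array([one_study(345, effect=0.0) for _ in range(1000)])
print("null gene, N=345: fraction of 'significant' studies:",
      (null_p < 0.05).mean())                    # ~0.05 -> dozens of false hits

# 2) A tiny but real effect (d = 0.03, roughly GWAS-scale), small vs. large N:
for n in (345, 50_000):
    power = np.mean([one_study(n, effect=0.03) < 0.05 for _ in range(200)])
    print(f"real effect d=0.03, N={n}: empirical power ~ {power:.2f}")
```

Publication bias then does the rest of the work: the five percent of small studies that happen to cross p < 0.05 are the ones that get written up.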

Here’s an excellent writeup on Slate Star Codex (whose author is a psychiatrist himself). He’s trying to be judicious throughout, but the frustration shows. I particularly like this part:

First, what bothers me isn’t just that people said 5-HTTLPR mattered and it didn’t. It’s that we built whole imaginary edifices, whole castles in the air on top of this idea of 5-HTTLPR mattering. We “figured out” how 5-HTTLPR exerted its effects, what parts of the brain it was active in, what sorts of things it interacted with, how its effects were enhanced or suppressed by the effects of other imaginary depression genes. This isn’t just an explorer coming back from the Orient and claiming there are unicorns there. It’s the explorer describing the life cycle of unicorns, what unicorns eat, all the different subspecies of unicorn, which cuts of unicorn meat are tastiest, and a blow-by-blow account of a wrestling match between unicorns and Bigfoot.

He goes on to note that there are a number of diagnostic tests that are supposed to help practitioners prescribe antidepressants based on gene sequence. But work like this latest paper strongly suggests that this is not well-founded. Some of these tests are for metabolic enzyme isoforms that could affect blood levels of specific compounds – and that’s not a stupid idea, although turning it into useful prescribing guidance is often harder in practice than just running a sequence on someone. But some of these companies are using the exact same genes whose connection to depression is now being invalidated. Slate Star Codex again:

Remember, GeneSight and their competitors refuse to release the proprietary algorithms they use to make predictions. They refuse to let any independent researchers study whether their technique works. They dismiss all the independent scientists saying that their claims are impossible by arguing that they’re light-years ahead of mainstream science and can do things that nobody else can. If you believed them before, you should be more cautious now. They are not light-years ahead of mainstream science. They took some genes that mainstream science had made a fuss over and claimed they could use them to predict depression. Now we think they were wrong about those. What are the chances they’re right about the others?

So far, there appears to be little or no reliable evidence that such testing is useful. That’s not to say that it can’t ever be, just that the people who are trying to sell it to you right now don’t have a very strong case. Psychiatric indications, out of the entire landscape of medical therapy, are really the most difficult and treacherous region to try to navigate with any sort of molecule-level explanation. We don’t know enough to get that granular. We really don’t. If you start digging into the details of depression, anxiety, OCD, bipolar disorder and all the rest, the molecular and cellular-level explanations start coming apart in your hands like wet tissue paper. The field is littered with failed hypotheses, with just-so stories and compelling-but-wrong correlations, and with sources of unreliable data ranging from big, expensive, and completely honest efforts all the way down to plenty of outright charlatanry. Caveat emptor, and how.

39 comments on “There Is No “Depression Gene””

  1. John Wayne says:

    But fast sequencing is going to change the world … I think you need to look at the marketing materials again.

    I use a person’s perceptions of the relevance of gene sequencing as a litmus test for understanding what is going on.

    1. Joseph McGraw says:

      I would like to note that an increased sample size isn’t always better. Depression as a phenotype likely has multiple forms, and some candidate genes may be important for some forms while the signal gets lost as noise in a large population. I’m not saying I think there is a monogenic relationship, but I think we are often too quick to throw out data. Almost as quick as we are to throw it out into the literature. Best, JM

  2. Mad Chemist says:

    This sounds exactly like what you were talking about in your post “Too Much Wasted Time” a week ago. How much of this could be avoided if people were more willing to question their results instead of rushing out publications?

    1. Derek Lowe says:

      It is indeed. The incentives are there to turn out papers like the hundreds of publications building on these nonexistent correlations, though, and that’s what we get. It needs to change, but until it does (if ever), we as scientists have to be on our guard.

      1. loupgarous says:

        As long as the disincentives for publishing studies like that are few (how many public wallopings with a wire brush are handed out in science in general for assertions of effects that aren’t really present?) we’ll still see the studies.

        You have to be way out there (like the NASA-funded researcher who announced that an extremophile used arsenic where the rest of life uses phosphorus) before the hoopla dies down, the bullroarers sound, and researchers are admonished out loud for irreproducible results propped up by the creation of a few small arsenate molecules.

      2. Kayarros says:

        This seems like bad science in the other direction. To proclaim that genes have absolutely no effect on depression because a review was done on some candidate polymorphisms is akin to examining the atmosphere of Earth and proclaiming that the entire universe is composed of oxygen and nitrogen, so there is no need to look further.

        1. loupgarous says:

          “Absence of evidence is not evidence of absence”, but “extraordinary claims require extraordinary proof”, as well.

          Has anyone supplied a strong enough coat rack on which published assertions that polymorphisms in 5-HTTLPR are responsible for everything from depression to a tendency toward nostalgia can hang? That’s a serious question.

        2. Isidore says:

          I am not sure who is arguing that “genes have absolutely no effect on depression”. The paper concludes as follows:
          “The study results do not support previous depression candidate gene findings, in which large genetic effects are frequently reported in samples orders of magnitude smaller than those examined here. Instead, the results suggest that early hypotheses about depression candidate genes were incorrect and that the large number of associations reported in the depression candidate gene literature are likely to be false positives.”
          As I understand it, the study concludes that the genes previously associated with depression are in fact not so associated.

  3. msp says:

    So what is the placebo effect here? Think about it: if the doctor comes to you and says, “We have done extensive genetic testing on you, and it shows that this pill here will work best for you” (a sugar pill), it might actually be the best possible treatment, no?

    1. Wile E. Coyote, Genius says:

      What about the converse? If the doctor comes to you and says that you are prone to depression, does it become a self-fulfilling prophecy?

      1. John Wayne says:

        Bang.

        Too bad we probably can’t run that study; ethics.

  4. navarro says:

    So much of the problem seems to come from a willing suspension of disbelief by people who should know better. The signal to noise ratio approaches zero but every result matches your favorite theory? Run with it!

    We have to stop ourselves from being too credulous about things that could increase our bottom line, and that is a hard thing to do.

  5. Andrew Molitor says:

    No offense, but the phrase “slate star codex” makes my eye start twitching.

    1. Doctor Memory says:

      It’s a fair cop, but when he’s writing about the field of psychiatry he’s usually on solid ground and evidence-based.

      The further afield from that you get, well… YMMV. And the moment you get near any of the central obsessions (AI, “effective” altruism, etc) of the cult he’s a member of, everything goes pear-shaped very quickly.

      1. Doctor Memory says:

        (And to be fair, that “etc” includes a revanchist obsession with IQ-as-destiny that starts shading into eugenics pretty quickly, so I don’t blame anyone for having limited patience with the whole kit and caboodle.)

    2. loupgarous says:

      Slate Star Codex’s points with regard to the published body of work on 5-HTTLPR polymorphisms seem reasonable, if not air-tight (he even admits they’re not air-tight). Many unsupported or only slightly supported conclusions in the popular press rest on the papers Slate Star Codex and Derek criticize.

      The body of work on depression, other psychiatric disease states, and 5-HTTLPR polymorphisms borders on cargo cult science, which doesn’t critically examine whether the proposed causes and effects are really causally related or merely coincidental.

      Read the last chapter in Richard Feynman’s Surely You’re Joking, Mr. Feynman! for his classic discourse on cargo cult science, and notice which scientists he says are most responsible for cargo cults in modern science.
      Feynman, Richard (1997). Surely You’re Joking, Mr. Feynman!. W. W. Norton & Company. p. 60. ISBN 978-0-393-31604-9.

  6. tlp says:

    When I got my WGS results and started looking into what associations could be found in the literature, I caught myself thinking about those results pretty much the way I’d think about a horoscope. ‘Increased chance for male pattern baldness?’ – Pfft, that’s not me. ‘Mixed muscle types, likely a sprinter’ – yeah, I guess. ‘Probably able to digest milk’ – eye roll.

    At the same time I wonder how much of those polygenic traits are actually overfitting and an artifact of high dimensionality. Does anybody have a link to a good meta-analysis?

    1. MrRogers says:

      GWAS studies, which are the basis of most of what gets reported in WGS analyses, use stringent reporting thresholds of around 10⁻⁸. Usually subsequent studies of the same phenotype include meta-analyses. What your report isn’t telling you is that GWAS effect sizes are usually small, and we don’t have enough explanatory loci to make accurate predictions for the vast majority of phenotypes.
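
For a rough sense of the arithmetic behind the thresholds and effect sizes in the comment above, here is a back-of-the-envelope sketch (normal approximation for a two-group comparison at 80% power; the effect sizes and the conventional genome-wide cutoff of 5 × 10⁻⁸ are assumptions for illustration):

```python
# Approximate per-group sample size needed to detect a standardized effect
# size d at a two-sided significance level alpha with the given power.
from scipy.stats import norm

def n_per_group(d, alpha, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_beta = norm.ppf(power)            # quantile for the desired power
    return 2 * (z_alpha + z_beta) ** 2 / d ** 2

for d in (0.2, 0.05, 0.02):             # "small" down to GWAS-scale effects
    print(f"d = {d}: ~{n_per_group(d, 0.05):>9,.0f} per group at p < 0.05, "
          f"~{n_per_group(d, 5e-8):>9,.0f} per group at 5e-8")
```

Even a modest effect needs samples in the thousands at the genome-wide threshold, and GWAS-scale effects need hundreds of thousands, which is part of why the consortia behind this kind of work are so large.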

      1. loupgarous says:

        Pathological science, as defined by Langmuir, is a psychological process in which a scientist, originally conforming to the scientific method, unconsciously veers from that method, and begins a pathological process of wishful data interpretation (see the observer-expectancy effect and cognitive bias).

        Some characteristics of pathological science are:
        – The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.
        – The effect is of a magnitude that remains close to the limit of detectability, or many measurements are necessary because of the very low statistical significance of the results.
        – There are claims of great accuracy.
        – Fantastic theories contrary to experience are suggested.
        – Criticisms are met by ad hoc excuses.
        – The ratio of supporters to critics rises and then falls gradually to oblivion.

        Not to say the case for a genetic basis for depression is at that point in all cases, but Langmuir’s criteria seem to be cropping up more often.

  7. luysii says:

    There are worse things in the literature. Consider that until 5 years ago, researchers doing functional MRIs did NOT check to see if people were asleep in the scanner (less than half of 71 subjects were still awake after 5 minutes in the paper pointing this out). Then two years later it was reported that the software packages used to analyze fMRI weren’t being used correctly. [Proc. Natl. Acad. Sci. vol. 113 pp. 7699 – 7600, 7900 – 7905 ’16] showed that certain common settings in 3 software packages (SPM, FSL, AFNI) used to analyze fMRI data gave false positive results ‘up to’ 70% of the time. Some 3,500 of the 40,000 fMRI studies in the literature over the past 20 years used these settings. The paper also noted that a bug (now corrected after being used for 15 years) in one of them also led to false positive results.

    1. Brian says:

      I had absolutely no idea there were so many systemic problems with these studies! How much of an effect do you think this has had on our understanding of the conditions we typically study using fMRIs? Has it just reduced the reliability of a group of studies, or do you think it has completely skewed our grasp of certain disorders?

      1. luysii says:

        Brian:

        I think the early work simply has to be disregarded and repeated, this time checking for sleep with polysomnography and using decent statistical packages.

        Here is how (in general) the work was done. Some sort of task or sensory stimulus is given, and the parts of the brain showing increased oxygenated hemoglobin are mapped out. As a neurologist, I was naturally interested in this work. Very quickly, I smelled a rat. The authors of all the papers always seemed to confirm their initial hunch about which areas of the brain were involved in whatever they were studying. Science just isn’t like that. Look at any issue of Nature or Science and see how many results were unexpected. Results were largely unreproducible. It got so bad that an article in Science 2 August ’02 p. 749 stated that neuroimaging (e.g. functional MRI) has a reputation for producing “pretty pictures” but not replicable data. It has been characterized as pseudocolor phrenology (or words to that effect).

        What was going on? The data was never actually shown, just the authors’ manipulation of it. Acquiring the data is quite tricky — the slightest head movement alters the MRI pattern. Also the difference in NMR signal between hemoglobin without oxygen and hemoglobin with oxygen is small (only 1 – 2%). Since the technique involves subtracting two data sets for the same brain region, this doubles the error.

        If you’d like to read more on this please see https://luysii.wordpress.com/2014/05/18/how-badly-are-thy-researchers-o-default-mode-network/

        and

        https://luysii.wordpress.com/2016/07/17/functional-mri-research-is-a-scientific-sewer/

        1. Imaging guy says:

          fMRI is probably a modern-day phrenology, like “polygenic risk scores”. Like gene variants between disease and control in WGS studies, the effect sizes between fMRI image voxels of powerful tasks such as tongue, hand and foot movements versus rest are quite small. Here is what one of the top experts in this field says: “the figure in BOX 2, which lists the resulting BOLD signal changes and inferred effect sizes, demonstrates that realistic effect sizes — that is, BOLD changes that are associated with a range of cognitive tasks — in fMRI are surprisingly small: even for powerful tasks such as the motor task, which evokes median BOLD signal changes greater than 4%, 75% of the voxels in the masks have a standardized effect size d smaller than 1. For tasks evoking weaker activation, such as gambling, only 10% of the voxels in our masks demonstrated standardized effect sizes larger than 0.5. Thus, the average fMRI study remains poorly powered for capturing realistic effects……”. It is not like the difference between MRI images of a brain tumor versus normal brain.

          In order to detect the difference in fMRI, you have to combine multiple voxels using multivariate regression methods, which is the same method used in making “polygenic risk scores”, “gene expression (mRNA) signatures”, and “plasma protein patterns”. The fundamental problem with multivariate regression methods/machine learning is overfitting. When you use regression equations/algorithms obtained from one study to make predictions on independent datasets, most of the time it will not work. So whenever someone says multivariate regression/machine learning, you can stop listening.
          1) “Scanning the horizon: towards transparent and reproducible neuroimaging research”, PMID: 28053326 DOI: 10.1038/nrn.2016.167
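
A minimal sketch of the overfitting failure mode described in the comment above (all numbers are illustrative assumptions): with more predictors (“voxels” or “SNPs”) than samples, an unregularized multivariate regression fits pure noise essentially perfectly in-sample and predicts nothing on independent data.

```python
# Overfitting demo: many predictors, few samples, and no real signal at all.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_features = 80, 80, 500    # far more predictors than samples

X_train = rng.normal(size=(n_train, n_features))
y_train = rng.normal(size=n_train)           # outcome is pure noise
X_test = rng.normal(size=(n_test, n_features))
y_test = rng.normal(size=n_test)

# Minimum-norm least-squares fit (the system is underdetermined, so it can
# reproduce the training outcomes exactly).
beta, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(y, y_hat):
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

print("in-sample R^2:    ", r_squared(y_train, X_train @ beta))   # ~1.0
print("out-of-sample R^2:", r_squared(y_test, X_test @ beta))     # ~0 or negative
```

Validation on genuinely independent data is the only way to distinguish this from a real signal, which is the point the comment is making.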

  8. TG says:

    That’s depressing!

  9. dearieme says:

    I suppose if I were in this field my first step would be to restrict my work to conditions that are unmistakable. Having twice dealt with schizophrenic undergraduates I might wonder whether schizophrenia fits the bill.

    As for the genetics of this-and-that: what if what matters is not so much individual genes as a correlational structure involving many genes? How would you manage to pick that out? How big a sample might be required?

    1. Doctor Memory says:

      God, you’d think, right? There’s clearly something going on there: a reasonably similar cluster of symptoms, some unmistakable evidence of heritability, and a couple of drugs that appear to at least provide symptomatic relief. “Depression” is an amorphous blob of a diagnosis, but schizophrenia feels real. So you’d think that we’d have a working model of what’s going on, of how the drugs work, and that we’d have been making steady progress at better treatments, right?

      Nope. I believe that the last major improvement to the standard of care for schizophrenia in the US was Olanzapine (Zyprexa) in 1996. The most generally effective treatment is still Clozapine, which dates back to 1958 and has a list of side effects so awful that it is generally only used as a backstop when other, less-effective therapies fail. The diagnostic criteria for schizophrenia are still as hazy as any other psychiatric disorder: clusters of symptoms, self-report and the best judgment of the clinician, wheee. The mechanism of the disease is still basically a mystery: there is some evidence of neurochemical/structural differences in the brains of schizophrenics but we have no idea whether they are cause, effect, or comorbidity.

      It’s not quite as bleak as, say, Alzheimer’s. (We do have antipsychotic medications and they do often improve things, at least for a while.) But it’s in the ballpark, and the fact that we’ve made so little progress on schizophrenia over the last 50 years should be cause for a great deal of worry about the general state of psychiatric research and therapy for any of the less clear-cut syndromes.

      1. brian terrier says:

        and when they finally accept that schizophrenia is another form of ALZ, the crap hole will get deeper.

        1. Doctor Memory says:

          My favorite fact about schizophrenia is that it absolutely, definitely, without doubt (okay probably/maybe) happens more often in cities, and there appears to be maybe a correlation between population density and disease incidence.

          The result of this has been, primarily, a lot of hand-wringing about the moral superiority of suburban/rural living, rather than trying to find the goddamn pump handle. (But of course some people have gone looking, and they’ve also turned up nada, so ¯\_(ツ)_/¯.)

          1. luysii says:

            Possibly the rural/urban difference in schizophrenia is an artifact of ascertainment. I grew up in a rural area (16 miles by bus to high school), and the town took care of its own. One person there, with the hindsight of a medical education, was a classic schizophrenic. He thought he was married to Queen Elizabeth by ‘interceptor medium’, whatever that was. When his activities were a bit too far out, the police would say ‘time to go home, Joe’ and Joe would go home. He was never institutionalized.

          2. Todd says:

            Part of the difference is social cohesion in rural areas. Accommodating schizophrenics in some sort of social role seems to help with their treatment, and this has a fair amount of research behind it. It’s far from a cure, but it clearly takes the edge off.

      2. brian terrier says:

        They did come out with “Stupify” (Abilify) and those other azoles that pulverize people. They should be illegal given the metabolic problems they cause, especially when pushed on people whose emotional functioning varies with environmental stress. Raising insurance rates one script at a time.

  10. pn says:

    The slate star codex dude’s writing style is so bad it’s giving me a headache

  11. John Harrold says:

    Why is this filed under pharmacokinetics?

  12. Merckofi says:

    There is no rule that states “an effect in a small sample size that disappears under a larger sample size is illusory or ‘hilarious’”. That would assume that all scientists are equally competent, use the same analysis standards, and that this perfect group is in fact the one we find ourselves reading about right now.

    1. Hap says:

      1) It seems a lot more likely that the originally cited effect doesn’t exist (effects that vanish in larger samples are a well-known statistical problem) than that the original authors were just more competent than everyone else.

      2) Since duplication of results is kind of important to science, even if you were more able than everyone else, you would still have to tell others how to reproduce your work. If they can’t, presumably some key detail is missing. It strains credulity to assume that lots of results are due to people significantly more adept than everyone else who yet don’t understand the effects well enough to explain them to anyone else.

      3) Since what people are trying to find are effects in a general population, effects that seem to exist only in smaller sets or in specific authors’ hands aren’t useful if they aren’t detectable in the larger population – the only reason you use a smaller set is as a model for the bigger population (because you don’t have the money or ability to measure the whole population). If the model doesn’t hold up in a larger population, “I’m better than you” does not seem like the most parsimonious explanation for the effect.

      1. Mercofi says:

        1. I’m not saying one group is bad and one is not, but differences in methodology are important… it’s not just a matter of putting as many studies into a number cruncher as possible and seeing what the statistics tell us. Genetics cannot yet be reduced to a statistical problem. There are way too many things we don’t know about genetics.

        2. Again, it’s not about us versus them… in fact I’m arguing the opposite: small groups with results that are “not reproducible” probably point to some key differences in methods that might allow us to see hidden biases in larger analyses.

        3. Ultimately you are trying to find ways to use genetics to treat many people for common diseases. If that means calling into question the methodology and analysis techniques of large authoritative studies that say such things don’t exist, that is something worth doing.

  13. Stephan Schleim says:

    Thank you for this excellent post and also the references to the further literature, which make very strong points. One of my former psychology students, who is now pursuing an MD, drew my attention to it.

    The interesting question for me is why scientists produced this unreliable knowledge in the first place. We already teach our undergrad students power analysis and all this basic statistical knowledge. And why did (and do?) leading journals in psychiatry publish so many papers of which we could have known – and some actually did know pretty well – that they were unreliable? (By the way, there is this 2005 paper by Kenneth Kendler in Am. J. Psychiatry which already argued convincingly why there can be no depression gene, or any gene for any other mental disorder.)

    That the pharmaceutical industry sells “fake news” to earn billions (and literally billions) does not surprise me. But why did and do scientists help them so much?

    Stephan Schleim, PhD, M.A.
    Associate Professor
    Theory and History of Psychology
    University of Groningen, The Netherlands

  14. bacillus says:

    @Dr. Memory. Although the drugs might be old, the new slow release formulations have been transformative. My son went from taking meds every day to a single injection 4 times a year. His whole world no longer revolves around that daily pill, and he’s much more functional for it. These formulations must also do wonders for compliance.

  15. Ron Richardson says:

    It’s amazing that the last 20-30 years of behavioral genetics have led to the finding that ~half of human behavioral variation is genetic but we have no idea of any key genes/alleles or mechanisms responsible for this variation.
