The Junk DNA Fight

From yesterday’s New York Times Magazine, here’s an excellent intro by Carl Zimmer to the Junk DNA Wars.

In January, Francis Collins, the director of the National Institutes of Health, made a comment that revealed just how far the consensus has moved. At a health care conference in San Francisco, an audience member asked him about junk DNA. “We don’t use that term anymore,” Collins replied. “It was pretty much a case of hubris to imagine that we could dispense with any part of the genome — as if we knew enough to say it wasn’t functional.” Most of the DNA that scientists once thought was just taking up space in the genome, Collins said, “turns out to be doing stuff.”
For Gregory and a group of like-minded biologists, this idea is not just preposterous but also perilous, something that could yield bad science. The turn against the notion of junk DNA, they argue, is based on overinterpretations of wispy evidence and a willful ignorance of years of solid research on the genome. They’ve challenged their opponents face to face at scientific meetings. They’ve written detailed critiques in biology journals. They’ve commented on social media. When the N.I.H.’s official Twitter account relayed Collins’s claim about not using the term “junk DNA” anymore, Michael Eisen, a professor at the University of California, Berkeley, tweeted back with a profanity.

I’ve written about this several times over the last couple of years, and the issues are far from being settled. We medicinal chemists should keep an eye on this stuff, because what’s riding on it is a whole unexplored universe of drug discovery – or not.

58 comments on “The Junk DNA Fight”

  1. watcher says:

    In the olden days of “junk DNA”, I never could understand why nature, typically very efficient in its choices for biological systems, would include such a great amount of excess, unused blueprint material.

  2. Pie says:

    …because the effort required to prune crud out of the genome might cost vastly more than leaving it in place?
    I’ve always found Fugu a compelling argument for junk being junk, but maybe I’m just easily led.

  3. pipeline says:

    There was a similar story with the decipherment of the Mayan script. Eric J. Thompson rejected the idea that the Maya hieroglyphs were letters or words, or that they had any information-coding potential. His view was the prevailing one in the field for many years, and many other scholars followed suit. However, Yuri Knorozov’s studies in comparative linguistics drew him to the conclusion that the Mayan script should in principle be no different from other writing systems; that approach turned out to be the key to understanding the script.

  4. Ben Thuronyi says:

    I’m no expert here but I’d expect there’s a case to be made for something in-between “important functional components” and “junk”. As has been said many times, the problem with studying evolved systems is that they may be functional and fit but without being elegant, optimized, or uncluttered.
    Does having all that “junk” have ANY effects on gene expression, metagenomic or post-transcriptional regulation, genome stability and replication — or anything? Undoubtedly yes. Everything in the universe affects everything else! How important are those effects? Are they relevant to medicine? To understanding how the cell works? Those are the hard things to settle on.

  5. Derek Lowe says:

    #3 – a story well told in Breaking the Maya Code by Michael Coe.

  6. Anonymous says:

    @2 While I do not fully believe all the junk is actually junk, I do believe a large part of it likely is. Along the lines of what you pointed out, unless there is an evolutionary pressure / survival benefit to remove the junk, then it won’t be removed. Certainly millions of years of mutations, viral infections, etc. popping up in our genome would not necessarily, in and of itself, create any evolutionary pressure for its removal. If something works well enough, nature does not often mess with it.

  7. Anonymous says:

    Am I missing something here? What is the big dispute? Physicists seem to cope ok with dark matter. Maybe this says something about modern biologists.

  8. pipeline says:

    Thank you 🙂 I know the story well. It gave me a good idea for our approach.

  9. an old chemist says:

    From the New York Times article, “To him, junk DNA isn’t a sign of evolution’s failure. It is, instead, evidence of its slow and slovenly triumph.” This simply implies that evolution is still happening! Why would anyone think that evolution would stop after arriving at the human stage!!! It could well be that various birth abnormalities are a sign of evolution. It could also be that in the future everyone will be an Einstein or an Isaac Newton!

  10. pipeline says:

    Yes, #9, the field is dominated by people who take the easy turn, and I do see that it is very trivial to call this “junk”, because the mind has always denied knowledge of what is unclear.
    For Yuri Knorozov, the de Landa “alphabet” became almost the “Rosetta stone” of Mayan decipherment. We got our “Rosetta stone” as well.

  11. Mike says:

    As an aside … In today’s journalism, there is a lot of really crappy writing about scientific topics. Carl Zimmer is one of the few science journalists that doesn’t leave me completely frustrated with the glaring errors, oversimplified explanations, and wild exaggerations that plague most news articles about scientific developments.

  12. regdoug says:

    #7 I’m guessing that, like me, you aren’t a biologist. I think that Derek’s earlier posts on the subject which he linked to explain a bit more about the debate. As far as I can gather, it is important to biologists and med chemists to know how much of the genome is important so that they can analyze it to find e.g. new drug targets and gene therapy approaches. Therefore the dispute about how much of the genome is important has practical consequences.
    I think your comparison with dark matter is particularly apt because really dark matter is something that physicists assume must exist because its effects are visible. However, it should be noted that like the junk DNA debate, there is ongoing dispute about what dark matter is, how it interacts with light matter and even whether it is a thing or just “topological defects” in the quantum field.

  13. luysii says:

    I’ve never bought the idea that the 98% of our 3.2 gigaBase genome not coding for protein is junk. Consider the humble leprosy organism. It’s a mycobacterium (like the organism causing TB), but because it is essentially confined to man, and lives inside humans for most of its existence, it has jettisoned large parts of its genome, first by throwing about 1/3 of it out (the genome is 1/3 smaller than that of TB, from which it is thought to have diverged 66 million years ago), and second by mutation of many of its genes so that protein can no longer be made from them. Why throw out all that DNA? The short answer is that it is metabolically expensive to produce and maintain DNA that you’re not using.
    Which brings us to Cell vol. 156 pp. 907 – 919 ’14. At least half of our genome is made of repetitive elements. We have some 520,000 (imperfect) copies of LINE1 elements — each up to 6,000 nucleotides long. There are 1,400,000 (imperfect) copies of Alu each around 300 nucleotides long. This stuff has been called junk for decades. However it has become apparent that over 50% of our entire genome is transcribed into RNA. This is also expensive metabolically.
    Addendum 17 Mar: Just the cost of making a single nucleotide from scratch to hook into mRNA is 50 ATP molecules (according to an estimate I read). It also takes energy for the polymerase to hook two nucleotides together — but I can’t find out what it is (anyone know?). It’s hard to avoid teleology when thinking about biology — but why should a cell expend all this metabolic energy to copy half or more of its genome into RNA, if it weren’t getting something useful back?
    Why hasn’t evolution got rid of this stuff, like the leprosy organism? Probably because it’s doing several important things we don’t understand. Here’s one of them. The cell paper did something clever and obvious (now that someone else thought of it). C0T-1 DNA is placental DNA predominantly 50 – 300 nucleotides in size, very enriched in repetitive DNA sequences. It is used to block nonspecific hybridization in microarray screening for mRNAs coding for protein. The authors used C0T-1 DNA to look at whole cells to find RNA transcribed from these repetitive elements, and more importantly, to find where in the cell it was located.
    Guess what they found? Repetitive DNA is associated big time with interphase (e.g. not undergoing mitosis) active chromatin (aka euchromatin). So RNA transcribed from Alu and LINE1 is a structural component of our chromosomes. Since the length of the 3.2 gigaBases of our genome, if stretched out, is 1 METER, a lot of our DNA occurs in very compact structures (heterochromatin) which is thought to be transcriptionally inactive. What happens when you use RNAase (an enzyme breaking down RNA) to remove it? The chromosomes condense to heterochromatin. So the junk may be keeping our chromosomes in an ‘open’ state, a fairly significant function.
    This is the exact opposite of XIST, a 17,000-nucleotide RNA transcribed from the X chromosome, which keeps one of the two X’s each female possesses inactive by coating it, much as the ecRNAs coat euchromatin.
    The authors conclude with “we are far from understanding genome expression and regulation.” Amen
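
To put rough numbers on that cost argument, here is a minimal back-of-envelope sketch in Python. It uses only the figures quoted in the comment above (3.2 Gb genome, more than 50% transcribed, ~50 ATP to make a nucleotide from scratch) plus a placeholder of 2 ATP-equivalents per polymerization step, since, as the comment notes, the real polymerization cost is hard to pin down:

    # Rough ATP cost of transcribing the repetitive half of the genome
    # once, in a single cell. All inputs are the order-of-magnitude
    # figures from the comment above; the polymerization cost is a
    # placeholder assumption, not a measured value.
    GENOME_NT = 3.2e9              # human genome, ~3.2 gigabases
    FRACTION_TRANSCRIBED = 0.5     # ">50% of our entire genome is transcribed"
    ATP_PER_NT_SYNTHESIS = 50      # quoted estimate for de novo nucleotide synthesis
    ATP_PER_NT_POLYMERIZATION = 2  # assumed placeholder (the comment asks for this)

    transcribed_nt = GENOME_NT * FRACTION_TRANSCRIBED
    atp_cost = transcribed_nt * (ATP_PER_NT_SYNTHESIS + ATP_PER_NT_POLYMERIZATION)
    print(f"~{atp_cost:.1e} ATP per cell per round of transcription")
    # ~8.3e10 ATP -- a real metabolic bill if it buys the cell nothing

Even at these crude numbers, the bill runs to tens of billions of ATP per cell, which is the commenter's point: it seems odd for evolution to pay that for nothing.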

  14. anon says:

    Opinions are nice, but… has anybody read this paper? Parker SCJ, Hansen L, Abaan HO, Tullius TD, Margulies EH. Local DNA Topography Correlates with Functional Noncoding Regions of the Human Genome. Science. 2009;324(5925):389-92. doi: 10.1126/science.1169050. The shape of noncoding DNA is conserved even though the sequence is not. My opinion? The effort required to maintain shape seems incompatible with “junk.”

  15. Anonymous says:

    @13 While some junk DNA probably serves a purpose, I do not think that one can argue that it must have a purpose because it is there. As stated previously (@6), traits will exist as long as an organism survives long enough to reproduce at a reasonable level. World hunger aside, human beings have generally been able to provide themselves enough energy such that the metabolic waste of producing junk DNA has not had a negative impact on propagating the species. Perhaps this was not the case for a particular mycobacterium, but I do not think one can extrapolate that to humans.

  16. Slicer says:

    #13 almost certainly has it right. Asking what all that junk DNA is doing there is like opening a package and asking what all the styrofoam peanuts are for.

  17. dvrvm says:

    I think it depends quite a bit on what one’s notion of “junk” is. Wasn’t there a recent paper proposing more precise terms that distinguished different types of “junk”? Even if I don’t buy all the spurious-transcript “function” stuff, junk can still serve a purpose in an evolutionary sense.

  18. Anonymous says:

    miRNAs—fascinating stuff.

  19. antibi says:

    Could the junk DNA be buffering our genome from the same things we are discussing here (chemical insults, viral inserts, etc.)?

  20. luysii says:

    Time for you all to look at eRNA (enhancer RNA) — bidirectionally transcribed RNA of length 1,000 – 2,000 nucleotides from 3,000 of the 12,000 known mouse enhancers.
    I hope to put up a post about it soon. To be distinguished from ceRNA (competitive endogenous RNA) and microRNAs, all transcribed from DNA not coding for protein (formerly known as junk).
    Slicer @16 — brilliant analogy ! ! ! !

  21. RKN says:

    I think it comes very close to a just-so story to claim that all the putatively useless DNA in an organism must really serve some role in reproductive success. On the other hand, if one concludes the DNA really is useless, then that presents a problem (albeit a different one) for the modern synthesis too.

  22. steve says:

    Imagine you’re reading Chaucer. There are a lot of junk vowels there. They may have been pronounced back in Chaucer’s time but in modern English they’re useless. It takes extra time and energy to write “colour” but “color” works just as well. Yet the Brits have retained the extra u despite the extra effort. I’m sure that there are plenty of examples like that in the human genome as well. Certainly some of the excess DNA serves a structural purpose but that doesn’t mean that the sequences involved are meaningful. Some of it may also just be evolutionary carry over.
    In that sense, evolutionary biologists are a lot like the old story of two economists walking along who spy a 10 dollar bill on the ground. One turns to the other and says “Why didn’t you pick it up?” The other says, “It made no sense; in an efficient economy that bill shouldn’t exist”.
    Maybe evolution isn’t 100% efficient either. A genome just needs to be good enough to find another genome so the two of them can pass their sequences on to the next generation.

  23. Mike says:

    @17
    You’re right. I think a big part of the problem is that when some people say “junk DNA” they just mean DNA that does not code for proteins and when some people hear “junk DNA” they interpret it to mean DNA that serves no purpose at all.
    It’s not so much a scientific disagreement as it is poor communication due to ambiguous terminology.

  24. steve says:

    Just to push the analogy a little further, there was no need to truncate “laughing out loud” to LOL until the advent of texting. If there’s no strong evolutionary pressure to get rid of junk letters, then they tend to stay around.

  25. Pipeline says:

    #11 I noticed one little negligence: “we can give up thinking about life as always evolving to be perfection”. I would call it not perfection but redundancy. Each species has to set up its margin of safety, because life is about survival, and the genome is the instrumental part of that process.
    #17 Or maybe the answer is very simple: the size of the genome does not mean anything by itself. It is only the margin of redundancy, recorded and fixed over generations by the triumph of evolution, and the non-coding regions are the script for this.
    Derek, again an example from boring linguistics. Yuri Knorozov proposed that information can be written in many ways. In other words, the script was not only or mainly whole words or concepts; many symbols in fact represented the sound elements of the language in which they were written, and had alphabetic or syllabic elements as well, which, once understood, could further the decipherment.
    Leder and Nirenberg had a good approach and were able to determine the sequences of 54 of the 64 codons in their experiments. A good experiment allows one to draw correct conclusions: degeneracy is the redundancy of the genetic code. Currently no one asks why, since it’s obvious.
    Therefore, the statements Carl Zimmer quotes remind me of what Thompson said. Only Thompson was talking about beautiful ornaments on the rocks, and Carl Zimmer, cited by many in Derek’s discussion, calls this a spontaneous process. Can we afford to be spontaneous and still survive?
    Eric J. Thompson rejected ideas that did not belong to him, but only misunderstanding bred his denial and pessimism. In any situation, the research process will lead to progress one day, but the overall logic says that nature/“junk DNA” is more interesting than we can afford to assume.

  26. luysii says:

    #23 Having been around since the genetic code was cracked in the early 60’s, I can say that initially “junk DNA” meant DNA that wasn’t coding for protein. You have no idea how powerful the central dogma (DNA makes RNA makes protein) was when it first came out. It was light where before there had been darkness.
    Hopefully you are correct and this is all a semantic quibble.

  27. steve says:

    #26, I believe that “junk DNA” still largely refers to DNA that doesn’t code for protein. It may have been modified a bit to include regulatory DNA sequences but the main idea is the same – large stretches of DNA with no obvious transcriptional utility. It is by no means obvious why a whisk fern – a very simple type of plant – has 100X as much DNA as a human. Similarly, a newt has about 10X as much DNA as we do. It’s hard to believe that all those extra nucleic acids are functional.

  28. a-non says:

    It would seem to me that assuming that non-coding DNA which serves only a structural function is junk is academic arrogance at its best. The tertiary structure of the genome must impact its function. What good is a gene if it only ever exists tightly packed onto histone proteins? Without the structural elements, how would all our DNA fit into the nucleus with only the right bits available for transcription? Only ever looking at the DNA as a series of four-letter codes seems to lead some to lose sight of the forest for the trees.

  29. pipeline says:

    #27 It is a great joke to call a whisk fern a simple plant.
    First, it does not have leaves. And to pay for that, this “simple plant” has a special type of nutrition: the gametophyte appears to be myco-heterotrophic, assisted by endophytic fungi.

  30. steve says:

    #28, the point is that the sequence of structural DNA does not contain information and therefore is “junk”. It’s hard to see how Alu sequences are informational when they’re just highly repetitive and their number varies so much from individual to individual.
    #29, I’m not sure what your point is. Whisk ferns don’t have real leaves, fruit or flowers so compared to other plants they are relatively simple. It’s not clear why 100X more DNA would be needed to code for a plant like that than a human. The simplest assumption is that not all that DNA is functional. Call it junk DNA or give it another name, it’s hard to see that it is all functional given the relative simplicity of the organism.

  31. steve says:

    By the way, if you don’t like the whisk fern example then perhaps you’ll like the onion. It has 5X as much DNA as a human. Anyone have a cogent reason why that should be?

  32. SP says:

    In this debate junk DNA is being defined as parts that have no observable interaction with other functional elements of the cell. It’s long been known that noncoding parts have essential roles.
    When one side in an open debate acts so much more evangelical, I’m tempted to side with the more open-minded ones. Collins says “We don’t know”; the other side responds with expletives on Twitter. I know which side I’d believe.

  33. pipeline says:

    #31 Only a few things look rational to me. First, it is an inbred plant, so the selection was done by human beings, not by evolutionary pressure. And if it is left in the soil or in storage over winter, an onion needs some resources so that the growing point in the middle of the bulb can begin to develop in the spring. Unfortunately, the onion has not been fully sequenced yet, so I do not know the details. Interestingly, it is a well-known fact that the strawberry has a tiny genome, but it is always eaten fresh.

  34. TW says:

    The best way to test it is to make serial knockout mice. Actually, some groups did knock megabases of DNA out of the mouse genome, and the mice were fine.
    Now that we have CRISPR, the work could be easier than before.
    NIH should fund such projects before it throws hundreds of millions of taxpayers’ dollars at the “80% of our DNA is functional”-ish guys.

  35. anon says:

    “The mice were fine.” Lots of important genes also show no effect when knocked out, since there are often redundancies, or no relevant stress is applied to reveal their important functions.
    Does the newt’s 10X genome help it regenerate legs when injured? Not so sure why that’s so much more important for the newt than for the human. Maybe they are more advanced than us?

  36. Evolution finds local maxima.
    Just because there’s a more efficient human (lower energy costs for transcribing tons of DNA or whatever) “pretty close” doesn’t mean there’s not a little valley to cross to get there.
    Humans are pretty complicated and fragile. It doesn’t take much of a negative impact to constitute a valley. Also, simpler organisms, and organisms that are a lot better at reproduction, are going to have a much easier time jettisoning hunks of DNA.
    I don’t know anything about how Fugu reproduce, but if they’re like many fish they pump out a great whack of eggs and whatnot. Each one might get to test a few thousand variations in a year. A human might get a dozen or so kicks at the can, in a couple of decades.

  37. gippgig says:

    Off topic but may be of interest: Resistance, a movie about the problem of antibiotic-resistant bacteria. The Washington Post newspaper gave it a very good review.
    http://www.resistancethefilm.com/

  38. Some idiot says:

    @31: I’ve always loved onions… Now I know why!
    (-;

  39. anon says:

    >50% of human diseases are related to SNPs (changes) in the ncRNA. Messing with the junk can be bad for you.

  40. pipeline says:

    It is only my opinion, but I think that is a wrong assumption, since position in the evolutionary tree does not define genome size. The size is only the margin of redundancy, recorded and fixed over generations by evolutionary pressure. Every species has its own way of writing it.
    @35 We are not talking about advantage or evolutionary hierarchy, or how humans are more advanced than newts. But personally I think it is very cool to go through metamorphosis, and to make that possible you need more info in your genome.
    I cannot find the right word yet for what to call the non-coding region of the genome, but from my linguistic/biological perspective it is a chronicle of the species. The chronicle records only victories, because the rest does not matter.
    That is why, from my view, it is not “junk”: it has multiple layers of information, structural, coding and regulatory, given/written by evolution in order to survive. Lateral gene transfer is a primary reason for genome redundancy. This is why genomes are so complex, but it is not a spontaneous process. LINEs and SINEs make up approximately half of all placental genomes, and even now they do not sit quietly: they respond to many stress stimuli. Good example: http://www.the-scientist.com//?articles.view/articleNo/42274/title/Wrangling-Retrotransposons/

  41. a. nonymaus says:

    Fast, cheap, accurate, pick two. This applies to genetic transcription and copying too. Various diseases (Huntington’s disease, various dystrophies) are caused by repetitititiititive copying errors in apparently important proteins. Thus, it’s no surprise that DNA that doesn’t code for proteins is full of repeated repeat sequences. For the people speaking about metabolic cost and comparing to a mycobacterium, it is important to remember that mycobacteria and I have different lifestyles (on most days). A mycobacterium spends a significant fraction of its metabolic budget on cell division and thus on DNA copying. In contrast, if I’m burning 2000 calories in a day, how many of them are due to DNA copying? How many of those could I save by pruning out a few repeat units of non-coding DNA? How much more expensive to operate would my polymerases be if they didn’t make a few extra repeats of some sequences? How expensive would it be to recognize and remove non-coding DNA rather than just copying the whole thing, with a bias towards repetition rather than deletion?
    Further, if a mycobacterium makes a poor copy of itself due to over-economizing in base-pair usage, it isn’t so terrible. The original can try again in a bit. The limiting case of this sloppy economy is, of course, viral reproduction. In contrast, if I fail at copying things accurately or delete the wrong sequence by mistake, I’ve just performed a cell division that may lead to cancer. Cancer tends to be selected against in nature. So, it seems possible that (some of) it really is junk, and there is a selection pressure towards not accidentally deleting something important over being a lean, mean genetic machine.
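
That 2000-calorie question can actually be roughed out. A minimal sketch, in which every input is an assumption made for illustration: roughly 3 × 10^11 cells replaced per day (an assumed whole-body turnover figure, dominated by blood cells), a 6.4 Gb diploid genome, ~2 ATP-equivalents per nucleotide polymerized via salvage pathways, and ~7.3 kcal per mole of ATP hydrolyzed:

    # Roughly how many daily calories go to DNA replication?
    # Every number here is an assumption for illustration.
    CELLS_REPLACED_PER_DAY = 3e11  # assumed whole-body cell turnover
    DIPLOID_GENOME_NT = 6.4e9      # ~2 x 3.2 Gb
    ATP_PER_NT = 2                 # salvage-pathway polymerization, placeholder
    KCAL_PER_MOL_ATP = 7.3         # standard free energy of ATP hydrolysis
    AVOGADRO = 6.022e23

    atp_per_day = CELLS_REPLACED_PER_DAY * DIPLOID_GENOME_NT * ATP_PER_NT
    kcal_per_day = atp_per_day / AVOGADRO * KCAL_PER_MOL_ATP
    print(f"~{kcal_per_day:.2f} kcal/day on DNA copying")  # ~0.05 kcal

Even if every nucleotide had to be made from scratch at ~50 ATP apiece, the bill would still be on the order of a single kilocalorie a day, which supports the commenter's suspicion that pruning junk would save a human essentially nothing.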

  42. pipeline says:

    #43 Following up: LINEs and pseudogenes are not junk any more….
    http://www.ncbi.nlm.nih.gov/pubmed/24823767
    No one is immune from mistakes, but not all human genomes are equally packed. There is such a thing as homologous recombination, which raises diversity. So repetitititiititive copying errors in apparently important proteins are a good example, but they are not an equal load of “junk”. Sorry, protectors 🙂

  43. Some idiot says:

    @39,40: Thanks for the links! Particularly the PLOS one. It is far from my field, so it was good to read something that was more than headlines…
    My gut feeling has always been that calling 98% of our genetic material “junk” is a bit hard to chew and swallow… Having said that, I will always let evidence weigh heavier than my gut feelings….!
    (-:
    @16: Love it….! Reminds me of a story a very frustrated electrical engineer told me. He had just finished putting together a very sensitive component, very carefully packing it, and taking it down to the post office in a box plastered with “fragile” stickers. The guy behind the desk took it and casually threw it into one of the post bags. The engineer said, “What on earth are you doing??? Didn’t you see the labels???” “Yeah, it said fragile.” “What do you think that means?????” “It means it is well packed…”
    (-:

  44. Cellbio says:

    I agree with many commenters here that this is a “special” debate, fueled by academic types who disregard well-known phenomena that contribute to a nuanced story, in order to push a concept to a (conceptual or semantic) extreme that makes for better storytelling.
    Chromatin alignment, DNA packing in sperm, transcriptional control, redundancy, the legacy of replication infidelity that gives rise to gene duplication and the generation of functional paralogs whose control elements or precise function can wander, viral integration, noise that has yet to run into positive or negative selection… it has all been part of the story of understanding the human genome. The junk concept has roots in proud people who predicted 5-10 fold more genes than found. Since they were wrong, the rest had to be junk!

  45. qetzal says:

    @28 & others:
    There is actually a good positive case to be made for junk DNA. It’s never been about some arrogant claim that if we don’t see an obvious function (like coding for protein), then it must be junk.
    Anyone who wants to understand the scientific arguments in favor of junk DNA should start with steve’s link at #40.

  46. pipeline says:

    #46 Denial: the mental process attributable to psychological defense mechanisms. They even come up with silly stories to justify themselves. Eric J. Thompson spilled a lot of bile telling everybody that the Mayan script could never, ever be decoded. He proposed that everything written by the Maya was only aesthetic, without logical meaning, or spontaneous. Surprisingly, one strange guy who had been struck on the head with a bat in his childhood decoded it. Please do not put the full stop at the end yet.

  47. Simon Pal says:

    All these “extra” bits that appear to be unnecessary and redundant bring to my (non-biologist) mind things like “incremental backups”, “error-correction codes”, and “loop unrolling” from computer science.

  48. Anonymous says:

    @46:

    “The junk concept has roots in proud people who predicted 5-10 fold more genes than found. Since they were wrong, the rest had to be junk!”

    Sorry, that’s just wrong. The junk concept has its roots in scientific observations like pseudogenes, the C-value paradox, the genetic load argument, etc. Furthermore, it dates back to the early 1970s, long before the human genome project showed that there are only ~20K protein-coding genes. Read the link @ #40 for details.

  49. pipeline says:

    To follow up on my arrogant and academic view:
    http://www.genome.gov/27560819

  50. NJBiologist says:

    @13 luysii–Your argument about metabolic cost makes sense, but I don’t think it’s the most expensive thing a cell can do. @43 a.nonymaus makes a good point about lifestyle; in thymocytes, for example, ~25% of the cellular energy budget goes to total nucleic acid synthesis (DNA + RNA; Buttgereit & Brand, 1995, Biochem J (312:1) 163-7). I think this fraction is lower for other cells (B&B used ConA-stimulated thymocytes; I’d expect the proportion to be lower in resting thymocytes, far lower in muscle and vastly lower in neurons).

  51. Cellbio says:

    @51
    I don’t disagree. I did not mean to suggest that the origin of junk DNA began with genome decoding, but the people who polarize the concepts to their benefit had a role in promoting a broader concept with roots in the genome era. For example:
    “Most of the DNA that scientists once thought was just taking up space in the genome, Collins said, ‘turns out to be doing stuff.’”
    Which scientists, exactly?

  52. qetzal says:

    @Cellbio #54
    Sorry, I don’t understand what you’re trying to ask and I don’t understand what you mean by “promoting a broader concept with roots in the genome era.”
    Also, #51 was me. Apparently I forgot to fill in my ‘nym.

  53. Anonymous says:

    @53 NJBiologist: The following might be of interest from the point of view of the metabolic burden of constantly synthesizing the nucleotides making up our genome and stitching them together (for DNA with no function, e.g. junk DNA). Presumably only the stitching together has an overall metabolic cost, assuming (1) the number of cells in the body is constant and (2) nucleotide salvage pathways are 100% efficient.
    [ Cell vol. 130 pp. 220 – 221 ’07 ] ATP synthase makes ATP at rates over 100 molecules/second. However, every day we produce and consume about half our body weight in ATP [ Nature vol. 417 p. 25 ’02 ]. There are 24 x 60 x 60 = 86,400 seconds in a day and the molecular mass of ATP is about 500 daltons, so a single ATP synthase makes some 8,640,000 ATPs per day, or about 4.3 x 10^9 daltons of mass a day. It takes 6 x 10^23 daltons to make a gram, so to make a gram of ATP a day there must be about 1.4 x 10^14 ATP synthases. Half our body weight is 35,000 grams (for the legendary 70-kilogram man), so we’re up to roughly 5 x 10^18 ATP synthases in our body to make this much ATP.
    If we knew the total number of cells in our body, their turnover rate, and the actual percentage of ‘junk DNA’ as defined above, we could figure out the metabolic burden of constantly making the stuff. Clearly we’re making a lot of ATP.
    This is for an adult organism. The burden is likely to be much higher for a growing fetus, infant or child.
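
For anyone who wants to check that back-of-envelope chain, here is the same arithmetic as a short Python sketch, using only the figures quoted in the comment (100 ATP/second per synthase, ~500 Da per ATP, half of a 70 kg body weight in ATP turned over per day):

    # Recount of the ATP synthase estimate above, from the quoted inputs.
    ATP_PER_SECOND = 100            # per synthase, lower bound from the Cell quote
    SECONDS_PER_DAY = 24 * 60 * 60  # 86,400
    ATP_MASS_DA = 500               # approximate molecular mass of ATP, daltons
    DA_PER_GRAM = 6.0e23            # daltons per gram (Avogadro's number)
    DAILY_ATP_GRAMS = 35_000        # ~half body weight per day, 70 kg person

    atp_per_day = ATP_PER_SECOND * SECONDS_PER_DAY             # ~8.6e6 per synthase
    grams_per_synthase = atp_per_day * ATP_MASS_DA / DA_PER_GRAM
    synthases_needed = DAILY_ATP_GRAMS / grams_per_synthase
    print(f"~{synthases_needed:.1e} ATP synthases")            # ~4.9e18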

  54. Cellbio says:

    Not clear, sorry… I was being sarcastic about the grouping of scientists into a monolith of thought, with a sudden revolution of insight (the DNA the ‘scientists’ thought was doing nothing is actually doing something) that is transformative to our thinking. This is the sort of thing that drives me crazy, as it obscures a literature full of diverse opinions that should be well known to the interviewee.

  55. TW says:

    Junk is junk.
    Here is a good article:
    http://www.homolog.us/blogs/blog/2015/03/11/junk-dna-debate-t-ryan-gregory-versus-francis-collins/
    The text:
    The readers may take a look at the New York Times article written by Carl Zimmer – Is Most of Our DNA Garbage?. It features the work of T. Ryan Gregory, whose name should be familiar to those following our evolutionary biology section (e.g. check Evolution of Genome Size – Scientific and Religious Analysis). Dr. Gregory’s research consists of measuring the genome sizes of various organisms and looking for any evolutionary pattern among those sizes. Here is a brief summary of what he found after decades of work.
    1. Based on data collected on ~5000 animal genomes so far, genome sizes were found to vary 7000-fold. That is an enormous range.
    2. Animals with the largest genomes – lungfish (~80-120Gbp), salamanders (similar order), sharks (~10-20Gbp), grasshoppers, flatworms and crustaceans.
    3. The only phenotypic links with genome size are in larger cell size and longer time to do cell division for organisms with large genomes.
    4. Ecological correlation – “An emerging trend from animal (and more specifically, crustacean) genome size studies is the positive relationship between genome size and latitude.”
    5. Correlation with intron size – “Intron size and genome size are known to be positively correlated between species of Drosophila (Moriyama et al. 1998), within the class of mammals (Ogata et al. 1996), and across eukaryotes in general (Vinogradov 1999).”
    6. No relationship between genome size and animal complexity has ever been found. Researchers have been looking into this for over four decades.
    The entirety of the collected evidence points to only one explanation for large genome size – ‘junk dna’. That means if organisms A and B are evolutionarily close and have a similar level of complexity (as defined by the number of different cell types), and the genome of organism B is much larger than that of organism A, then a large part of the genome of organism B consists of nonfunctional DNA. As a good example, fugu and zebrafish are evolutionarily related, but the genome sizes are – fugu (390Mb), zebrafish (1.7 Gb). Therefore, the genome of zebrafish likely has 1.3 Gb more junk DNA than fugu’s. In fact, humans are evolutionarily not that distant from either of those two fish species, which suggests that the human genome has even more junk DNA. That is the core of Gregory’s argument, and nobody has refuted it so far except Francis Collins and the ENCODE clowns funded by him.
    In January, Francis Collins, the director of the National Institutes of Health, made a comment that revealed just how far the consensus has moved. At a health care conference in San Francisco, an audience member asked him about junk DNA. “We don’t use that term anymore,” Collins replied. “It was pretty much a case of hubris to imagine that we could dispense with any part of the genome — as if we knew enough to say it wasn’t functional.” Most of the DNA that scientists once thought was just taking up space in the genome, Collins said, “turns out to be doing stuff.”
    Sadly, neither he nor the clowns he backs have provided any solid evidence of ‘turns out to be doing stuff’. They back away from hyped-up claims when challenged.
    John Rinn is one such hype-star featured in the NY Times article. Between 2002-2007, my co-authors and I wrote a number of papers showing that the noncoding regions of the genome were differentially expressed in several organisms under different conditions. Tiling arrays were our mode of measurement in those experiments. However, the main criticism was that those expressions were likely transcriptional noise, and one needed to actually show the function to claim that those expressed regions were functional. I did that in 2006 in yeast for one noncoding RNA, and Sid Altman, who received the Nobel Prize for discovering catalytic noncoding RNA, confirmed our work in a later paper. Still, our result showed the function of only one novel RNA, and we could not say anything about the remaining expressed regions.
    Neither did John Rinn, our ex-collaborator. In 2009, he published a couple of papers showing the functionality of only one non-coding RNA in the human genome. That was not much of an achievement beyond ours, but he has an amazing talent for hyping things and saying what NHGRI likes to hear. His one functional RNA seemed to have ‘proved’ the functionality of the entire human genome, and now it is time to search for diseases there as well !!
    Interestingly, this ‘scientific consensus’ has had its effect on the corresponding Wikipedia page as well. It is now full of contrasting scientific and pseudo-scientific claims, and you may feel you are in the middle of a war zone. For example, check this paragraph, written by a scientifically trained person –
    The term “junk DNA” became popular in the 1960s.[26][27] It was formalized in 1972 by Susumu Ohno,[28] who noted that the mutational load from deleterious mutations placed an upper limit on the number of functional loci that could be expected given a typical mutation rate. Ohno predicted that mammal genomes could not have more than 30,000 loci under selection before the “cost” from the mutational load would cause an inescapable decline in fitness, and eventually extinction. This prediction remains robust, with the human genome containing approximately 20,000 genes.
    You learn that a scientist named Ohno came up with the description ‘junk dna’ and made a prediction that remains robust. Yet, the introduction says –
    Initially, a large proportion of noncoding DNA had no known biological function and was therefore sometimes referred to as “junk DNA”, particularly in the lay press. However, it has been known for decades[citation needed] that many noncoding sequences are functional.
    That block makes absolutely no sense, because Ohno did not write for the ‘lay press’, and his claim that a large proportion of noncoding DNA is junk was not invalidated by many noncoding sequences being functional. In fact, tRNA was discovered in the 1960s, and Francis Crick, who hypothesized its existence, is also credited with coming up with the concept of junk DNA, as mentioned in the NY Times piece.
    Faced with this paradox, Crick and other scientists developed a new vision of the genome during the 1970s. Instead of being overwhelmingly packed with coding DNA, the genome was made up mostly of noncoding DNA. And, what’s more, most of that noncoding DNA was junk — that is, pieces of DNA that do nothing for us.
    Therefore, the nonsensical block in Wikipedia was clearly inserted by an ENCODE pseudo-scientist. There is no better evidence of that than the section it leads to –
    The Encyclopedia of DNA Elements (ENCODE) project[3] suggested in September 2012 that over 80% of DNA in the human genome “serves some purpose, biochemically speaking”.[4]
    Morals of the story –
    (i) The emperor has no clothes, but you can afford to say so only if you are living in Canada like T. Ryan Gregory,
    (ii) Be careful about trusting ‘science’ discussed in wikipedia.
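
To make the comparative reasoning in that quoted post concrete, here is a toy Python version of Gregory’s argument, using the genome sizes the post cites. Treating the smallest related genome as “sufficient” and calling the excess a junk floor is an illustration of the logic, not a measured quantity:

    # Toy version of the comparative (C-value) argument: if related,
    # similarly complex organisms differ hugely in genome size, the
    # excess in the larger genome is presumed nonfunctional.
    # Sizes in base pairs, as cited in the post above.
    genomes = {"fugu": 390e6, "zebrafish": 1.7e9, "human": 3.2e9}

    baseline = genomes["fugu"]  # smallest related genome, taken as sufficient
    for species, size in genomes.items():
        excess = size - baseline
        print(f"{species}: {size / 1e9:.2f} Gb, implied junk floor "
              f"~{excess / 1e9:.2f} Gb ({excess / size:.0%})")
    # zebrafish comes out ~1.3 Gb over fugu, matching the post's figure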

Comments are closed.